Data Pump Consistent parameter?
Hi All,
Is there a CONSISTENT parameter in Data Pump like the one in exp/imp?
Because we are using Data Pump for backups and want to disable the consistent behaviour.
Please let me know how I can disable the consistent parameter in Data Pump.
Thanks
If it's not a backup method, then how do you do a logical
full database backup?
To my thinking it's called a logical DB backup when
you are using exp or expdp. There are many reasons why export shouldn't be used as a backup method:
1. It's very slow to export a huge database (which you are already experiencing), and the import will take much longer.
2. You only have a 'snapshot' of your database at the time of backup; in the event of a disaster, you will lose all data changes made after the backup.
3. It has a performance impact on a busy database (which you are also experiencing).
Apart from all these, if you turn CONSISTENT to N, your 'logical' backup is logically corrupted.
Similar Messages
-
Using Data Pump Storage Parameter option
I am creating a database replica of our production environment - the DB name doesn't have to be the same.
My option is to use Oracle data pump to move the data from source database to target database.
I performed the same scenario in our Windows 2003 environment with no problem.
Doing the same for Linux, I am getting a tablespace creation error, as you can see below:
Linux-x86_64 Error: 2: No such file or directory
Failing sql is: CREATE TABLESPACE "INQUIRY" DATAFILE '/oraappl/pca/vprod/vproddata/inquiry01.dbf' SIZE 629145600 LOGGING ONLINE PERMANENT BLOCKSIZE 8192 EXTENT MANAGEMENT LOCAL AUTOALLOCATE SEGMENT SPACE MANAGEMENT AUTO
ORA-39083: Object type TABLESPACE failed to create with error:
ORA-01119: error in creating database file '/oraappl/pca/vprod/vproddata/medical01.dbf'
ORA-27040: file create error, unable to create file
My question is: do we have to create the tablespaces, or should Data Pump use the default tablespace location already being used by the new database?
Hi Richard,
I am working on creating my extra database using the DUPLICATE command as you suggested.
I got everything up until I hit this error:
channel ORA_AUX_DISK_1: reading from backup piece /oraappl/pca/backups/weekly/vproddata/rman/VPR ackupset/2013_08_27/o1_mf_nnndf_TAG20130827T083750_91s7dz0r_.bkp
ORA-19870: error reading backup piece /oraappl/pca/backups/weekly/vproddata/rman/VPROD/backupset 3_08_27/o1_mf_nnndf_TAG20130827T083750_91s7dz0r_.bkp
ORA-19505: failed to identify file "/oraappl/pca/backups/weekly/vproddata/rman/VPROD/backupset/2 08_27/o1_mf_nnndf_TAG20130827T083750_91s7dz0r_.bkp"
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
It is defaulting to the recovery location of the production database, instead of the auxiliary db.
My next option was to catalog the backup files, but even that is not working. Any suggestions? -
Consistent parameter in Data Pump.
Hi All,
As we know there is no CONSISTENT parameter in Data Pump; can anyone tell me how Data Pump takes care of this?
From the net I got this one-liner:
Data Pump Export determines the current time and uses FLASHBACK_TIME. But I failed to understand what exactly it means.
Regards,
Sphinx
This is the equivalent of CONSISTENT=Y in exp. Use FLASHBACK_TIME=SYSTIMESTAMP to get the Data Pump export to be "as of the point in time the export began", so every table will be as of the same commit point in time.
According to the docs:
“The SCN that most closely matches the specified time is found, and this SCN is used to enable the Flashback utility. The export operation is performed with data that is consistent as of this SCN.” -
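As a concrete illustration of the quoted behaviour, a consistent schema-level export might be invoked like this (the schema, credentials, and directory object are placeholders, not from this thread):

```text
expdp system/password schemas=HR directory=DUMP_DIR \
    dumpfile=hr_consistent.dmp logfile=hr_consistent.log \
    flashback_time=systimestamp
```

Data Pump resolves SYSTIMESTAMP to the nearest SCN at job start, so all table data is unloaded as of that single point in time.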
FILESIZE parameter in DATA PUMP
Hi All,
As per the Data Pump syntax, if we define the FILESIZE parameter it creates dump files limited to that size.
But my question is: if I omit the FILESIZE parameter, how does Oracle determine the size of each file?
I am creating the export with the following syntax. It creates dump files named SCHEMA.ENV.080410..p1%U.dmp, with 1, 2, 3, etc. substituted for %U.
It creates the files with different sizes.
JOB_NAME=SCHEMA.ENV.080410..p1
DIRECTORY=dump_dir
DUMPFILE=dump_dir:SCHEMA.ENV.080410..p1%U.dmp
LOGFILE=SCHEMA.ENV.080410..p1.explog
PARALLEL=16
CONTENT=ALL
EXCLUDE=INDEX,CONSTRAINT,TABLE_STATISTICS
TABLES= TABLE NAMES
As you defined PARALLEL=16, Data Pump will create 16 processes, and each process will write to its own file; that's the reason why you get different-sized files -
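For comparison, a sketch of the same parameter file with FILESIZE set, so each piece is capped at 5 GB (the cap value is only an example):

```text
DIRECTORY=dump_dir
DUMPFILE=SCHEMA.ENV.080410..p1%U.dmp
LOGFILE=SCHEMA.ENV.080410..p1.explog
FILESIZE=5G
PARALLEL=16
```

Without FILESIZE there is no per-file cap: each parallel worker writes as much as it needs to its own %U file, which is why the sizes differ.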
Data pump with flashback_scn or flashback_time
Dear Gurus,
The Oracle database version in use is 11gR2. We don't have Flashback enabled for the database. However, to run a Data Pump export with consistency, can we turn on FLASHBACK_SCN or FLASHBACK_TIME?
Best
rac110g
How to set the parameter:
The logical conclusion and field-tested best practice approved by Oracle Support is to set the UNDO_RETENTION parameter to at least the estimated time for the Data Pump import. Don't forget to size your UNDO tablespace accordingly, since the retention only works as long as there is enough undo space available.
Note
IMPDP uses flashback technology (flashback table) on the source database to achieve consistency, so the UNDO tablespace there is worth a glance as well.
Check this link for expdp/impdp undo requirements and possible problems:
http://www.usn-it.de/index.php/2010/05/05/oracle-impdp-ora-1555-and-undo_retention/
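A sketch of the advice above in SQL (the 14400-second value is only an example; size it to your own job's estimated duration):

```sql
-- Ask Oracle to keep undo for at least 4 hours (14400 s),
-- roughly the expected runtime of the Data Pump job
ALTER SYSTEM SET UNDO_RETENTION = 14400;

-- Verify the current undo configuration
SELECT name, value
  FROM v$parameter
 WHERE name IN ('undo_retention', 'undo_tablespace');
```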
Edited by: Asad99 on Mar 26, 2013 10:42 PM -
Hi,
Can you please explain the CONSISTENT and COMPRESS parameters in relation to Data Pump Export?
Thanks,
Suresh Bommalata
Use FLASHBACK_SCN or FLASHBACK_TIME for the equivalent of the CONSISTENT parameter:
http://oracledatapump.blogspot.com/2009/04/mapping-between-parameters-of-datapump.html
Edited by: user00726 on Aug 18, 2010 10:56 PM -
An Oracle data pump situation that I need help with
Oracle 10g running on Sun Solaris:
I have written a Unix shell script that exports data into a dump file (Data Pump export using expdp). Similarly, I have an import script that imports the data from the dump file (import using impdp). These are not schema exports; instead, we have logically divided our schema into 4 different groups based on their functionality (group1, group2, group3 and group4). Each of these groups consists of about 30-40 tables. For expdp, there are 4 parfiles: group1.par, group2.par, group3.par and group4.par. Depending on what parameter you pass while running the script, the respective parfile will be picked and an export will be done.
For example,
While running,
exp_script.ksh group1
will pick up group1.par and will export into group1.dmp.
Similarly,
exp_script.ksh group3
will pick up group3.par and will export into group3.dmp.
My import script also needs the parameter to be passed to pick the right dmp file.
For example,
imp_script.ksh group3
will pick up group3.dmp and import every table that group3.dmp has. (Here, the import process does not need parfiles - all it needs is the right dmp file.)
I am now faced with a difficulty where I must use Oracle's DBMS_DATAPUMP API to achieve the same (and not expdp and impdp). I haven't used this API before. How best can I use this API to stick with my above strategy - which is, I eventually need group1.dmp, group2.dmp, group3.dmp and group4.dmp? I will need to pass the table names that each group contains. How do I do this using this API? Can someone point me to an example, or perhaps suggest one?
Thanks
Or at least, how do you do a table export using DBMS_DATAPUMP?
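To answer the last question with a sketch: a minimal table-mode export via DBMS_DATAPUMP could look like the block below. The table names, directory object, and job name are placeholders; in the scripts described above, the table list would be built from the group passed in:

```sql
DECLARE
  h         NUMBER;
  job_state VARCHAR2(30);
BEGIN
  -- Open a table-mode export job (the API equivalent of expdp TABLES=...)
  h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT',
                          job_mode  => 'TABLE',
                          job_name  => 'GROUP1_EXP');

  -- Dump file and log file, written through a directory object
  DBMS_DATAPUMP.ADD_FILE(handle    => h,
                         filename  => 'group1.dmp',
                         directory => 'DUMP_DIR',
                         filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
  DBMS_DATAPUMP.ADD_FILE(handle    => h,
                         filename  => 'group1.log',
                         directory => 'DUMP_DIR',
                         filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);

  -- Limit the job to the tables that belong to this group
  DBMS_DATAPUMP.METADATA_FILTER(handle => h,
                                name   => 'NAME_EXPR',
                                value  => 'IN (''TABLE_A'', ''TABLE_B'')');

  DBMS_DATAPUMP.START_JOB(h);
  DBMS_DATAPUMP.WAIT_FOR_JOB(h, job_state);
END;
/
```

Calling this from exp_script.ksh with group1 versus group3 would then only change the file names and the NAME_EXPR list.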
-
Data Pump - how can I take a hot export of a database?
Hi,
I have a running database, and I want to kick off an export using Data Pump. I don't want any transactions that take place after I kick off the export to be in the export file. Is this how Data Pump works?
Thanks.
You can get a consistent set of data, but from either exp or expdp you can't get a consistent set of metadata. For example:
If you use a flashback parameter, the data in the dump file will be the data that was entered before your flashback point. This means that if your flashback equates to SCN 1234 and someone enters data at SCN 1235, the dump file will not have that new data. But if someone creates a new table at SCN 1236, that table may or may not be in the dump file. This is true for both exp and expdp.
Hope this helps.
Dean -
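For reference, a sketch of the SCN-based variant of what Dean describes (credentials and names are placeholders): capture the SCN first, then pass it to the export.

```sql
-- Capture the current SCN to pin the export to a single point in time
SELECT current_scn FROM v$database;
-- Then pass it on the command line, e.g.:
--   expdp system/password schemas=HR directory=DUMP_DIR \
--     dumpfile=hr.dmp flashback_scn=<scn from above>
```

Rows committed after that SCN are excluded, but, as noted, DDL such as a newly created table may or may not make it into the dump file.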
Hi
I am trying to import data into Oracle 11g Release 2 (11.2.0.1) using the impdp utility and am getting the below error:
UDI-00018: Data Pump client is incompatible with database version 11.2.0.1.0
The export dump was taken in a database with Oracle 11g Release 1 (11.1.0.7.0) and I am trying to import into a higher version of the database. Is there any parameter I have to set to avoid this error?
AUTHSTATE=compat
A__z=! LOGNAME
CLASSPATH=/app/oracle/11.2.0/jlib:.
HOME=/home/oracle
LANG=C
LC__FASTMSG=true
LD_LIBRARY_PATH=/app/oracle/11.2.0/lib:/app/oracle/11.2.0/network/lib:.
LIBPATH=/app/oracle/11.2.0/JDK/JRE/BIN:/app/oracle/11.2.0/jdk/jre/bin/classic:/app/oracle/11.2.0/lib32
LOCPATH=/usr/lib/nls/loc
LOGIN=oracle
LOGNAME=oracle
MAIL=/usr/spool/mail/oracle
MAILMSG=[YOU HAVE NEW MAIL]
NLSPATH=/usr/lib/nls/msg/%L/%N:/usr/lib/nls/msg/%L/%N.cat
NLS_DATE_FORMAT=DD-MON-RRRR HH24:MI:SS
ODMDIR=/etc/objrepos
ORACLE_BASE=/app/oracle
ORACLE_HOME=/app/oracle/11.2.0
ORACLE_SID=AMT6
ORACLE_TERM=xterm
ORA_NLS33=/app/oracle/11.2.0/nls/data
PATH=/app/oracle/11.2.0/bin:.:/usr/bin:/etc:/usr/sbin:/usr/ucb:/home/oracle/bin:/usr/bin/X11:/sbin:.:/usr/local/bin:/usr/ccs/bin
PS1=nbsud01[$PWD]:($ORACLE_SID)>
PWD=/nbsiar/nbimp
SHELL=/usr/bin/ksh
SHLIB_PATH=/app/oracle/11.2.0/lib:/usr/lib
TERM=xterm
TZ=Europe/London
USER=oracle
_=/usr/bin/env -
Data Pump - expdp and slow performance on specific tables
Hi there
I have a Data Pump export of a schema. Most of the 700 tables are exported very quickly (direct path) but a couple of them seem to be extremely slow.
I have checked:
- no lobs
- no long/raw
- no VPD
- no partitions
- no bitmapped index
- just date, number, varchar2's
I'm running with trace 400300,
but I'm having trouble reading the output from it. It seems that some of the slow-performing tables are running with method 4??? Can anyone find an explanation of the method in the trace:
1 > direct path (I think)
2 > external table (I think)
4 > ?
others?
I have done some stats using v$filestat/v$session_wait (history) - and it seems that we always wait for 'db file sequential read', doing lots and lots of SINGLEBLKRDS. No undo is read.
I have a table of 2.5 GB that takes 3 minutes,
and then this (in my eyes) similar table of 2.4 GB takes 1.5 hours.
There are 367,000 blocks (8 K) and avg row length = 71.
I'm on Oracle 11.2 on a Linux box with plenty of RAM and CPU power.
Trace file /opt/oracle112/diag/rdbms/prod/prod/trace/prod_dw00_24268.trc
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
ORACLE_HOME = /opt/oracle112/product/11.2.0.2/dbhome_1
System name: Linux
Node name: tiaprod.thi.somethingamt.dk
Release: 2.6.18-194.el5
Version: #1 SMP Mon Mar 29 22:10:29 EDT 2010
Machine: x86_64
VM name: Xen Version: 3.4 (HVM)
Instance name: prod
Redo thread mounted by this instance: 1
Oracle process number: 222
Unix process pid: 24268, image: [email protected] (DW00)
*** 2011-09-20 09:39:39.671
*** SESSION ID:(401.8395) 2011-09-20 09:39:39.671
*** CLIENT ID:() 2011-09-20 09:39:39.671
*** SERVICE NAME:(SYS$BACKGROUND) 2011-09-20 09:39:39.671
*** MODULE NAME:() 2011-09-20 09:39:39.671
*** ACTION NAME:() 2011-09-20 09:39:39.671
KUPP:09:39:39.670: Current trace/debug flags: 00400300 = 4195072
*** MODULE NAME:(Data Pump Worker) 2011-09-20 09:39:39.672
*** ACTION NAME:(SYS_EXPORT_SCHEMA_09) 2011-09-20 09:39:39.672
KUPW:09:39:39.672: 0: ALTER SESSION ENABLE PARALLEL DML called.
KUPW:09:39:39.672: 0: ALTER SESSION ENABLE PARALLEL DML returned.
KUPC:09:39:39.693: Setting remote flag for this process to FALSE
prvtaqis - Enter
prvtaqis subtab_name upd
prvtaqis sys table upd
KUPW:09:39:39.819: 0: KUPP$PROC.WHATS_MY_ID called.
KUPW:09:39:39.819: 1: KUPP$PROC.WHATS_MY_ID returned.
KUPW:09:39:39.820: 1: worker max message number: 1000
KUPW:09:39:39.822: 1: Full cluster access allowed
KUPW:09:39:39.823: 1: Original job start time: 11-SEP-20 09:39:38 AM
KUPW:09:39:39.862: 1: KUPP$PROC.WHATS_MY_NAME called.
KUPW:09:39:39.862: 1: KUPP$PROC.WHATS_MY_NAME returned. Process name: DW00
KUPW:09:39:39.862: 1: KUPV$FT_INT.GET_INSTANCE_ID called.
KUPW:09:39:39.866: 1: KUPV$FT_INT.GET_INSTANCE_ID returned. Instance name: prod
KUPW:09:39:39.870: 1: ALTER SESSION ENABLE RESUMABLE called.
KUPW:09:39:39.870: 1: ALTER SESSION ENABLE RESUMABLE returned.
KUPW:09:39:39.871: 1: KUPF$FILE.INIT called.
KUPW:09:39:39.996: 1: KUPF$FILE.INIT returned.
KUPW:09:39:39.998: 1: KUPF$FILE.GET_MAX_CSWIDTH called.
KUPW:09:39:39.998: 1: KUPF$FILE.GET_MAX_CSWIDTH returned.
KUPW:09:39:39.998: 1: Max character width: 1
KUPW:09:39:39.998: 1: Max clob fetch: 32757
KUPW:09:39:39.998: 1: Max varchar2a size: 32757
KUPW:09:39:39.998: 1: Max varchar2 size: 7990
KUPW:09:39:39.998: 1: In procedure GET_PARAMETERS
KUPW:09:39:40.000: 1: In procedure GET_METADATA_FILTERS
KUPW:09:39:40.001: 1: In procedure GET_METADATA_TRANSFORMS
KUPW:09:39:40.002: 1: In procedure GET_DATA_FILTERS
KUPW:09:39:40.004: 1: In procedure GET_DATA_REMAPS
KUPW:09:39:40.005: 1: In procedure PRINT_MT_PARAMS
KUPW:09:39:40.005: 1: Master table : "SYSTEM"."SYS_EXPORT_SCHEMA_09"
KUPW:09:39:40.005: 1: Metadata job mode : SCHEMA_EXPORT
KUPW:09:39:40.005: 1: Debug enable : TRUE
KUPW:09:39:40.005: 1: Profile enable : FALSE
KUPW:09:39:40.005: 1: Transportable enable : FALSE
KUPW:09:39:40.005: 1: Metrics enable : FALSE
KUPW:09:39:40.005: 1: db version : 11.2.0.2.0
KUPW:09:39:40.005: 1: job version : 11.2.0.0.0
KUPW:09:39:40.005: 1: service name :
KUPW:09:39:40.005: 1: Current Edition : ORA$BASE
KUPW:09:39:40.005: 1: Job Edition :
KUPW:09:39:40.005: 1: Abort Step : 0
KUPW:09:39:40.005: 1: Access Method : AUTOMATIC
KUPW:09:39:40.005: 1: Data Options : 0
KUPW:09:39:40.006: 1: Dumper directory :
KUPW:09:39:40.006: 1: Master only : FALSE
KUPW:09:39:40.006: 1: Data Only : FALSE
KUPW:09:39:40.006: 1: Metadata Only : FALSE
KUPW:09:39:40.006: 1: Estimate : BLOCKS
KUPW:09:39:40.006: 1: Data error logging table :
KUPW:09:39:40.006: 1: Remote Link :
KUPW:09:39:40.006: 1: Dumpfile present : TRUE
KUPW:09:39:40.006: 1: Table Exists Action :
KUPW:09:39:40.006: 1: Partition Options : NONE
KUPW:09:39:40.006: 1: Tablespace Datafile Count: 0
KUPW:09:39:40.006: 1: Metadata Filter Index : 1 Count : 10
KUPW:09:39:40.006: 1: 1 Name - INCLUDE_USER
KUPW:09:39:40.006: 1: Value - TRUE
KUPW:09:39:40.006: 1: Object Name - SCHEMA_EXPORT
KUPW:09:39:40.006: 1: 2 Name - SCHEMA_EXPR
KUPW:09:39:40.006: 1: Value - IN ('TIA')
KUPW:09:39:40.006: 1: 3 Name - NAME_EXPR
KUPW:09:39:40.006: 1: Value - ='ACC_PAYMENT_SPECIFICATION'
KUPW:09:39:40.006: 1: Object - TABLE
KUPW:09:39:40.006: 1: 4 Name - INCLUDE_PATH_EXPR
KUPW:09:39:40.006: 1: Value - IN ('TABLE')
KUPW:09:39:40.006: 1: 5 Name - ORDERED
KUPW:09:39:40.006: 1: Value - FALSE
KUPW:09:39:40.006: 1: Object - TABLE_DATA
KUPW:09:39:40.006: 1: 6 Name - NO_XML
KUPW:09:39:40.006: 1: Value - TRUE
KUPW:09:39:40.006: 1: Object - XMLSCHEMA/EXP_XMLSCHEMA
KUPW:09:39:40.006: 1: 7 Name - XML_OUTOFLINE
KUPW:09:39:40.006: 1: Value - FALSE
KUPW:09:39:40.006: 1: Object - TABLE/TABLE_DATA
KUPW:09:39:40.006: 1: 8 Name - XDB_GENERATED
KUPW:09:39:40.006: 1: Value - FALSE
KUPW:09:39:40.006: 1: Object - TABLE/TRIGGER
KUPW:09:39:40.007: 1: 9 Name - XDB_GENERATED
KUPW:09:39:40.007: 1: Value - FALSE
KUPW:09:39:40.007: 1: Object - TABLE/RLS_POLICY
KUPW:09:39:40.007: 1: 10 Name - PRIVILEGED_USER
KUPW:09:39:40.007: 1: Value - TRUE
KUPW:09:39:40.007: 1: MD remap schema Index : 4 Count : 0
KUPW:09:39:40.007: 1: MD remap other Index : 5 Count : 0
KUPW:09:39:40.007: 1: MD Transform ddl Index : 2 Count : 11
KUPW:09:39:40.007: 1: 1 Name - DBA
KUPW:09:39:40.007: 1: Value - TRUE
KUPW:09:39:40.007: 1: Object - JOB
KUPW:09:39:40.007: 1: 2 Name - EXPORT
KUPW:09:39:40.007: 1: Value - TRUE
KUPW:09:39:40.007: 1: 3 Name - PRETTY
KUPW:09:39:40.007: 1: Value - FALSE
KUPW:09:39:40.007: 1: 4 Name - SQLTERMINATOR
KUPW:09:39:40.007: 1: Value - FALSE
KUPW:09:39:40.007: 1: 5 Name - CONSTRAINTS
KUPW:09:39:40.007: 1: Value - FALSE
KUPW:09:39:40.007: 1: Object - TABLE
KUPW:09:39:40.007: 1: 6 Name - REF_CONSTRAINTS
KUPW:09:39:40.007: 1: Value - FALSE
KUPW:09:39:40.007: 1: Object - TABLE
KUPW:09:39:40.007: 1: 7 Name - OID
KUPW:09:39:40.007: 1: Value - TRUE
KUPW:09:39:40.007: 1: Object - TABLE
KUPW:09:39:40.007: 1: 8 Name - RESET_PARALLEL
KUPW:09:39:40.007: 1: Value - TRUE
KUPW:09:39:40.007: 1: Object - INDEX
KUPW:09:39:40.007: 1: 9 Name - OID
KUPW:09:39:40.007: 1: Value - TRUE
KUPW:09:39:40.007: 1: Object - TYPE
KUPW:09:39:40.007: 1: 10 Name - OID
KUPW:09:39:40.007: 1: Value - TRUE
KUPW:09:39:40.007: 1: Object - INC_TYPE
KUPW:09:39:40.007: 1: 11 Name - REVOKE_FROM
KUPW:09:39:40.008: 1: Value - SYSTEM
KUPW:09:39:40.008: 1: Object - ROLE
KUPW:09:39:40.008: 1: Data Filter Index : 6 Count : 0
KUPW:09:39:40.008: 1: Data Remap Index : 7 Count : 0
KUPW:09:39:40.008: 1: MD remap name Index : 8 Count : 0
KUPW:09:39:40.008: 1: In procedure DISPATCH_WORK_ITEMS
KUPW:09:39:40.009: 1: In procedure SEND_MSG. Fatal=0
KUPW:09:39:40.009: 1: KUPC$QUEUE.TRANSCEIVE called.
kwqberlst !retval block
kwqberlst rqan->lagno_kwqiia 7
kwqberlst rqan->lascn_kwqiia > 0 block
kwqberlst rqan->lascn_kwqiia 7
kwqberlst ascn -90145310 lascn 22
kwqberlst !retval block
kwqberlst rqan->lagno_kwqiia 7
KUPW:09:39:40.036: 1: KUPC$QUEUE.TRANSCEIVE returned. Received 2011
KUPW:09:39:40.036: 1: DBMS_LOB.CREATETEMPORARY called.
KUPW:09:39:40.037: 1: DBMS_LOB.CREATETEMPORARY returned.
KUPW:09:39:40.038: 1: Flags: 18
KUPW:09:39:40.038: 1: Start sequence number:
KUPW:09:39:40.038: 1: End sequence number:
KUPW:09:39:40.038: 1: Metadata Parallel: 1
KUPW:09:39:40.038: 1: Primary worker id: 1
KUPW:09:39:40.041: 1: In procedure GET_TABLE_DATA_OBJECTS
KUPW:09:39:40.041: 1: In procedure CREATE_MSG
KUPW:09:39:40.041: 1: KUPV$FT.MESSAGE_TEXT called.
KUPW:09:39:40.041: 1: KUPV$FT.MESSAGE_TEXT returned.
KUPW:09:39:40.041: 1: In procedure SEND_MSG. Fatal=0
KUPW:09:39:40.041: 1: KUPC$QUEUE_INT.SEND called.
kwqberlst !retval block
kwqberlst rqan->lagno_kwqiia 7
kwqberlst rqan->lascn_kwqiia > 0 block
kwqberlst rqan->lascn_kwqiia 7
kwqberlst ascn -90145310 lascn 22
kwqberlst !retval block
kwqberlst rqan->lagno_kwqiia 7
KUPW:09:39:40.044: 1: KUPC$QUEUE_INT.SEND returned.
KUPW:09:39:40.044: 1: Estimate in progress using BLOCKS method...
KUPW:09:39:40.044: 1: In procedure UPDATE_TYPE_COMPLETION_ROW
KUPW:09:39:40.044: 1: Old Seqno: 0 New Path: SCHEMA_EXPORT/TABLE/TABLE_DATA PO Num: -5 New Seqno: 62
KUPW:09:39:40.046: 1: Created type completion for duplicate 62
KUPW:09:39:40.046: 1: In procedure CREATE_MSG
KUPW:09:39:40.046: 1: KUPV$FT.MESSAGE_TEXT called.
KUPW:09:39:40.046: 1: KUPV$FT.MESSAGE_TEXT returned.
KUPW:09:39:40.046: 1: In procedure SEND_MSG. Fatal=0
KUPW:09:39:40.046: 1: KUPC$QUEUE_INT.SEND called.
kwqberlst !retval block
kwqberlst rqan->lagno_kwqiia 7
kwqberlst rqan->lascn_kwqiia > 0 block
kwqberlst rqan->lascn_kwqiia 7
kwqberlst ascn -90145310 lascn 22
kwqberlst !retval block
kwqberlst rqan->lagno_kwqiia 7
KUPW:09:39:40.047: 1: KUPC$QUEUE_INT.SEND returned.
KUPW:09:39:40.047: 1: Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
KUPW:09:39:40.048: 1: In procedure CONFIGURE_METADATA_UNLOAD
KUPW:09:39:40.048: 1: Phase: ESTIMATE_PHASE Filter Name: Filter Value:
KUPW:09:39:40.048: 1: DBMS_METADATA.OPEN11.2.0.0.0 called.
KUPW:09:39:40.182: 1: DBMS_METADATA.OPEN11.2.0.0.0 returned. Source handle: 100001
KUPW:09:39:40.182: 1: DBMS_METADATA.SET_FILTER called. metadata_phase: ESTIMATE_PHASE
KUPW:09:39:40.182: 1: DBMS_METADATA.SET_FILTER returned. In function GET_NOEXP_TABLE
KUPW:09:39:40.194: 1: DBMS_METADATA.SET_PARSE_ITEM called.
*** 2011-09-20 09:39:40.325
KUPW:09:39:40.325: 1: DBMS_METADATA.SET_PARSE_ITEM returned.
KUPW:09:39:40.325: 1: DBMS_METADATA.SET_COUNT called.
KUPW:09:39:40.328: 1: DBMS_METADATA.SET_COUNT returned.
KUPW:09:39:40.328: 1: DBMS_METADATA.FETCH_XML_CLOB called.
*** 2011-09-20 09:39:42.603
KUPW:09:39:42.603: 1: DBMS_METADATA.FETCH_XML_CLOB returned.
KUPW:09:39:42.603: 1: In procedure CREATE_TABLE_DATA_OBJECT_ROWS
KUPW:09:39:42.603: 1: In function GATHER_PARSE_ITEMS
KUPW:09:39:42.603: 1: In function CHECK_FOR_REMAP_NETWORK
KUPW:09:39:42.603: 1: Nothing to remap
KUPW:09:39:42.603: 1: In procedure BUILD_OBJECT_STRINGS
KUPW:09:39:42.604: 1: In procedure LOCATE_DATA_FILTERS
KUPW:09:39:42.604: 1: In function NEXT_PO_NUMBER
KUPW:09:39:42.620: 1: In procedure DETERMINE_METHOD_PARALLEL
KUPW:09:39:42.620: 1: flags mask: 0
KUPW:09:39:42.620: 1: dapi_possible_meth: 1
KUPW:09:39:42.620: 1: data_size: 3019898880
KUPW:09:39:42.620: 1: et_parallel: TRUE
KUPW:09:39:42.620: 1: object: TABLE_DATA:"TIA"."ACC_PAYMENT_SPECIFICATION" <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
KUPW:09:39:42.648: 1: l_dapi_bit_mask: 7
KUPW:09:39:42.648: 1: l_client_bit_mask: 7
KUPW:09:39:42.648: 1: TABLE_DATA:"TIA"."ACC_PAYMENT_SPECIFICATION" either, parallel: 12 <<<<< Here is says either (I thought that was method ?) <<<<<<<<<<<<<<<<
KUPW:09:39:42.648: 1: FORALL BULK INSERT called.
KUPW:09:39:42.658: 1: FORALL BULK INSERT returned.
KUPW:09:39:42.660: 1: DBMS_LOB.TRIM called. v_md_xml_clob
KUPW:09:39:42.660: 1: DBMS_LOB.TRIM returned.
KUPW:09:39:42.660: 1: DBMS_METADATA.FETCH_XML_CLOB called.
KUPW:09:39:42.678: 1: DBMS_METADATA.FETCH_XML_CLOB returned.
KUPW:09:39:42.678: 1: DBMS_LOB.CREATETEMPORARY called.
KUPW:09:39:42.678: 1: DBMS_LOB.CREATETEMPORARY returned.
KUPW:09:39:42.678: 1: In procedure UPDATE_TD_ROW_EXP with seqno: 62
KUPW:09:39:42.680: 1: 1 rows fetched
KUPW:09:39:42.680: 1: In function NEXT_PO_NUMBER
KUPW:09:39:42.680: 1: Next table data array entry: 1 Parallel: 12 Size: 3019898880 Method: 4Creation_level: 0 <<<<<<<<<<<<<<<< HERE IT SAYS METHOD = 4 and PARALLEL=12 (I'm not using the parallel parameter ???) <<<<<<<<<<<<<<<<<<
KUPW:09:39:42.681: 1: In procedure UPDATE_TD_BASE_PO_INFO
KUPW:09:39:42.683: 1: Updated 1 td objects with bpo between 1 and 1
KUPW:09:39:42.684: 1: Send table_data_varray called. Count: 1
kwqberlst !retval block
kwqberlst rqan->lagno_kwqiia 7
kwqberlst rqan->lascn_kwqiia > 0 block
kwqberlst rqan->lascn_kwqiia 7
kwqberlst ascn -90145310 lascn 22
kwqberlst !retval block
kwqberlst rqan->lagno_kwqiia 7
KUPW:09:39:42.695: 1: Send table_data_varray returned.
KUPW:09:39:42.695: 1: In procedure SEND_MSG. Fatal=0
KUPW:09:39:42.695: 1: In procedure UPDATE_TYPE_COMPLETION_ROW
KUPW:09:39:42.695: 1: Old Seqno: 62 New Path: PO Num: -5 New Seqno: 0
KUPW:09:39:42.695: 1: Object count: 1
KUPW:09:39:42.697: 1: 1 completed for 62
KUPW:09:39:42.697: 1: DBMS_METADATA.CLOSE called. Handle: 100001
KUPW:09:39:42.697: 1: DBMS_METADATA.CLOSE returned.
KUPW:09:39:42.697: 1: In procedure CREATE_MSG
KUPW:09:39:42.697: 1: KUPV$FT.MESSAGE_TEXT called.
KUPW:09:39:42.698: 1: KUPV$FT.MESSAGE_TEXT returned.
KUPW:09:39:42.698: 1: In procedure SEND_MSG. Fatal=0
KUPW:09:39:42.698: 1: KUPC$QUEUE_INT.SEND called.
kwqberlst !retval block
kwqberlst rqan->lagno_kwqiia 7
kwqberlst rqan->lascn_kwqiia > 0 block
kwqberlst rqan->lascn_kwqiia 7
kwqberlst ascn -90145310 lascn 22
kwqberlst !retval block
kwqberlst rqan->lagno_kwqiia 7
KUPW:09:39:42.699: 1: KUPC$QUEUE_INT.SEND returned.
KUPW:09:39:42.699: 1: Total estimation using BLOCKS method: 2.812 GB
KUPW:09:39:42.699: 1: In procedure CONFIGURE_METADATA_UNLOAD
KUPW:09:39:42.699: 1: Phase: WORK_PHASE Filter Name: BEGIN_WITH Filter Value:
KUPW:09:39:42.699: 1: DBMS_METADATA.OPEN11.2.0.0.0 called.
KUPW:09:39:42.837: 1: DBMS_METADATA.OPEN11.2.0.0.0 returned. Source handle: 200001
KUPW:09:39:42.837: 1: DBMS_METADATA.SET_FILTER called. metadata_phase: WORK_PHASE
KUPW:09:39:42.837: 1: DBMS_METADATA.SET_FILTER returned. In function GET_NOEXP_TABLE
KUPW:09:39:42.847: 1: DBMS_METADATA.SET_PARSE_ITEM called.
KUPW:09:39:42.964: 1: DBMS_METADATA.SET_PARSE_ITEM returned.
KUPW:09:39:42.964: 1: DBMS_METADATA.SET_COUNT called.
KUPW:09:39:42.967: 1: DBMS_METADATA.SET_COUNT returned.
KUPW:09:39:42.967: 1: KUPF$FILE.OPEN_CONTEXT called.
KUPW:09:39:42.967: 1: KUPF$FILE.OPEN_CONTEXT returned.
KUPW:09:39:42.968: 1: DBMS_METADATA.FETCH_XML_CLOB called. Handle: 200001
*** 2011-09-20 09:40:01.798
KUPW:09:40:01.798: 1: DBMS_METADATA.FETCH_XML_CLOB returned.
KUPW:09:40:01.798: 1: Object seqno fetched:
KUPW:09:40:01.799: 1: Object path fetched:
KUPW:09:40:01.799: 1: In procedure SEND_MSG. Fatal=0
KUPW:09:40:01.799: 1: In procedure COMPLETE_EXP_OBJECT
KUPW:09:40:01.799: 1: KUPF$FILE.FLUSH_LOB called.
KUPW:09:40:01.815: 1: KUPF$FILE.FLUSH_LOB returned.
KUPW:09:40:01.815: 1: In procedure UPDATE_TYPE_COMPLETION_ROW
KUPW:09:40:01.815: 1: Old Seqno: 226 New Path: PO Num: -5 New Seqno: 0
KUPW:09:40:01.815: 1: Object count: 1
KUPW:09:40:01.815: 1: 1 completed for 226
KUPW:09:40:01.815: 1: DBMS_METADATA.CLOSE called. Handle: 200001
KUPW:09:40:01.816: 1: DBMS_METADATA.CLOSE returned.
KUPW:09:40:01.816: 1: KUPF$FILE.CLOSE_CONTEXT called.
KUPW:09:40:01.820: 1: KUPF$FILE.CLOSE_CONTEXT returned.
KUPW:09:40:01.821: 1: In procedure SEND_MSG. Fatal=0
KUPW:09:40:01.821: 1: KUPC$QUEUE.TRANSCEIVE called.
kwqberlst !retval block
kwqberlst rqan->lagno_kwqiia 7
kwqberlst rqan->lascn_kwqiia > 0 block
kwqberlst rqan->lascn_kwqiia 7
kwqberlst ascn -90145310 lascn 22
kwqberlst !retval block
kwqberlst rqan->lagno_kwqiia 7
KUPW:09:40:01.827: 1: KUPC$QUEUE.TRANSCEIVE returned. Received 2012
KUPW:09:40:01.827: 1: DBMS_LOB.CREATETEMPORARY called.
KUPW:09:40:01.828: 1: DBMS_LOB.CREATETEMPORARY returned.
KUPW:09:40:01.828: 1: Process order range: 1..1
KUPW:09:40:01.828: 1: Method: 1
KUPW:09:40:01.828: 1: Parallel: 1
KUPW:09:40:01.828: 1: Creation level: 0
KUPW:09:40:01.830: 1: BULK COLLECT called.
KUPW:09:40:01.830: 1: BULK COLLECT returned.
KUPW:09:40:01.830: 1: In procedure BUILD_OBJECT_STRINGS
KUPW:09:40:01.836: 1: In procedure MOVE_DATA UNLOADing process_order 1 TABLE_DATA:"TIA"."ACC_PAYMENT_SPECIFICATION" <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
KUPW:09:40:01.839: 1: KUPD$DATA.OPEN called.
KUPW:09:40:01.840: 1: KUPD$DATA.OPEN returned.
KUPW:09:40:01.840: 1: KUPD$DATA.SET_PARAMETER - common called.
KUPW:09:40:01.843: 1: KUPD$DATA.SET_PARAMETER - common returned.
KUPW:09:40:01.843: 1: KUPD$DATA.SET_PARAMETER - flags called.
KUPW:09:40:01.843: 1: KUPD$DATA.SET_PARAMETER - flags returned.
KUPW:09:40:01.843: 1: KUPD$DATA.START_JOB called.
KUPW:09:40:01.918: 1: KUPD$DATA.START_JOB returned. In procedure GET_JOB_VERSION
This is how I called expdp:
expdp system/xxxxxxxxx schemas=tia directory=expdp INCLUDE=TABLE:\" =\'ACC_PAYMENT_SPECIFICATION\'\" REUSE_DUMPFILES=Y LOGFILE=expdp:$LOGFILE TRACE=400300
Hi there ...
I have read the note - that's where I found the link to trace note 286496.1 - on how to set up a trace.
But I still need an explanation of the methods (1, 2, 4, etc.).
regards
Mette -
How can I use the data pump export from external client?
I am trying to export a bunch of tables from a DB but I can't figure out how to do it.
I don't have access to a shell terminal on the server itself; I can only log in using TOAD.
I am trying to use TOAD's Data Pump Export utility, but I keep getting this error:
ORA-39070: Unable to open the log file.
ORA-39087: directory name D:\TEMP\ is invalid
I don't understand if it's because I am setting up the parameter file wrong, or if the utility is trying to find that directory on the server, whereas I am thinking it's going to dump to my local filesystem where that directory exists.
I'd hate to have to use SQL Loader and create ctl files for each and every table...
Here is my parameter file:
DUMPFILE="db_export.dmp"
LOGFILE="exp_db_export.log"
DIRECTORY="D:\temp\"
TABLES=ACCOUNT
CONTENT=ALL
(just trying to test it on one table so far...)
P.S. Oracle 11g
Edited by: trant on Jan 13, 2012 7:58 AM
ORA-39070: Unable to open the log file.
ORA-39087: directory name D:\TEMP\ is invalid
The directory here should not be a physical location; it is a logical representation.
For that you have to create a directory object at the SQL level, e.g. CREATE DIRECTORY exp_dp ...
Then you have to use the created directory as DIRECTORY=exp_dp
HTH -
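A sketch of the server-side setup described above (the path and grantee are examples; note the path must exist on the database server, and the dump files land there, not on your client machine):

```sql
-- As a privileged user, create the logical directory object
CREATE OR REPLACE DIRECTORY exp_dp AS '/u01/app/oracle/dumps';

-- Allow the exporting user to read and write through it
GRANT READ, WRITE ON DIRECTORY exp_dp TO scott;
```

The parameter file then references the object name, not a path: DIRECTORY=exp_dp.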
Data Pump using 11g2 and 11g1 question during migration
Our DBE put our test database on 11gR2 and our production on 11gR1. Due to an ingest failure we wanted to move the data in test (11gR2) to production (11gR1) using Data Pump; however, I was told that you cannot go from 11gR2 to 11gR1. I was also told that because the database contained public synonyms, I would have to recreate all public synonyms. He said it had something to do with lbascys. Can someone clarify this for me?
user11171364 wrote:
... Can I still use these parameters during the import ONLY without having used them during the export?
Nope, read the restriction: during the import, "+This parameter is valid only when the NETWORK_LINK parameter is also specified.+"
http://download.oracle.com/docs/cd/B19306_01/server.102/b14215/dp_import.htm#sthref299
Consequently, you cannot use it within your dumpfile.
Nicolas. -
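For context, a network-mode import - where such parameters do apply - pulls data straight across a database link, with no dump file involved at all. A sketch (the link, schema, and credentials are placeholders):

```text
impdp system/password@prod NETWORK_LINK=test_link \
    SCHEMAS=myschema LOGFILE=net_imp.log
```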
Data pump, Query "1=2" performance?
Hi guys
I am trying to export a schema using Data Pump; however, I need no data from a few of the tables since they are irrelevant, but I'd still like to have the structure of each table itself along with any constraints and such.
I thought of using the QUERY parameter with a "1=2" query, making it so that I can filter out all data from certain tables in the export while giving me everything else.
While this works, I wonder if Data Pump/Oracle is smart enough not to run this query through the entire table. If it does perform a full table scan, can anybody recommend any other way of excluding just the data of certain tables while still getting the table structure itself along with anything else related to it?
I have been unable to find such information after searching the net for a good while.
Regards
Alex
Thanks.
Does that mean 1=2 actually scans the entire table, so it should be avoided in the future?
Regards
Alex -
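For reference, the approach discussed in this thread might look like this in a parameter file (the schema and table names are placeholders):

```text
SCHEMAS=MYSCHEMA
DIRECTORY=dump_dir
DUMPFILE=myschema.dmp
QUERY=MYSCHEMA.BIG_LOG_TABLE:"WHERE 1=2"
```

The table's DDL, constraints, and indexes are still exported; only its rows are filtered out by the predicate.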
I am trying to perform a Data Pump export on a table using a query within a parfile, and I am getting some odd behaviour. The database version is 10.2.0.4.3 and the OS is AIX 5.3. The query looks like this:
QUERY="POSDECLARATIONQUEUE:where SESSIONID in (select 'B.SESSIONID' from POSACCOUNT A, POSDECLARATIONQUEUE B, POSDECLARATIONSESSION C where 'B.SESSIONID' = 'C.ID' and 'C.ACCOUNTID' = 'A.ID' and 'A.SITE' = '10252')"
This works but gets 0 rows. If I run the query against the instance in a SQL*Plus session, as below, then I get 0 rows returned.
select * from POSDECLARATIONQUEUE where SESSIONID in (select 'B.SESSIONID' from POSACCOUNT A, POSDECLARATIONQUEUE B, POSDECLARATIONSESSION C where 'B.SESSIONID' = 'C.ID' AND 'C.ACCOUNTID' = 'A.ID' and 'A.SITE' = '10252');
If I take out the single quotes from around the columns within the query against the instance within SQL*Plus, I get over 2000 rows returned.
SQL> select count(*) from POSDECLARATIONQUEUE where SESSIONID in (select B.SESSIONID from POSACCOUNT A, POSDECLARATIONQUEUE B, POSDECLARATIONSESSION C where B.SESSIONID = C.ID and C.ACCOUNTID = A.ID and A.SITE = 10252);
COUNT(*)
2098
If I remove the single quotes from the parfile query, then I get the following error within the Data Pump export.
UDE-00014: invalid value for parameter, 'schemas'.
The SCHEMAS option is not specified within the parfile, and the TABLES option only specifies the table POSDECLARATIONQUEUE.
Can someone assist with this? I just can't seem to be able to get the syntax right for it to work within Data Pump.
Kind Regards.
Graeme.
Edited by: user12219844 on Apr 14, 2010 3:34 AM
It looks like your query might be a little wrong:
This is what you have:
QUERY="POSDECLARATIONQUEUE:where SESSIONID in (select 'B.SESSIONID' from POSACCOUNT A, POSDECLARATIONQUEUE B, POSDECLARATIONSESSION C where 'B.SESSIONID' = 'C.ID' and 'C.ACCOUNTID' = 'A.ID' and 'A.SITE' = '10252')"
This is what I would have thought it should look like:
QUERY=POSDECLARATIONQUEUE:"where SESSIONID in (select B.SESSIONID from POSACCOUNT A, POSDECLARATIONQUEUE B, POSDECLARATIONSESSION C where B.SESSIONID = C.ID and C.ACCOUNTID = A.ID and A.SITE = 10252)"
You want double quotes (") around the complete query, and you don't need the single quotes (') around all of the comparisons. The single quotes treat those values as strings, so
'B.SESSIONID' = 'C.ID'
asks whether the string B.SESSIONID is equal to the string C.ID.
The query that you used in SQL*Plus had
B.SESSIONID = C.ID
which asks whether the value stored in B.SESSIONID is equal to the value stored in C.ID,
which is what you want.
Dean -
Is it possible to Grant Nested Roles using Data Pump Export?
I'm on Oracle 10.2.0.5, trying various Data Pump parameters to obtain an export containing a statement like "GRANT ParentRole TO ChildRole;".
This is to import into 11.2.0.2 on the Windows x64 platform. I'm using the SQLFILE= parameter in an IMPDP to check the effect of various EXPDP parameters.
I can get the "CREATE ROLE" statements with a full EXPDP using FULL=Y and INCLUDE=ROLE:"IN('ParentRole','ChildRole')"
I can get the grants of objects to roles with a second schema-level EXPDP using SCHEMAS=('MySchema') - e.g. I get "GRANT SELECT ON MySchema.MyTable TO ParentRole;"
But I can't find the parameters such that a role being granted to another role is exported.
Is this possible?
Can you give an example of the grants - a real example - so I can try to create this here? I'm thinking it is a grant that you want, but not sure which grant. There are a bunch of different grants.
Dean