FILESIZE parameter in DATA PUMP
Hi All,
As per the Data Pump syntax, if we define the FILESIZE parameter, it creates dump files of the size mentioned.
But my question is: if I omit the FILESIZE parameter, how does Oracle determine the size of each dump file?
I am running expdp with the following parameter file. It creates dump files named SCHEMA.ENV.080410..p1%U.dmp, with %U substituted by 1, 2, 3, etc.
It creates the files with different sizes.
JOB_NAME=SCHEMA.ENV.080410..p1
DIRECTORY=dump_dir
DUMPFILE=dump_dir:SCHEMA.ENV.080410..p1%U.dmp
LOGFILE=SCHEMA.ENV.080410..p1.explog
PARALLEL=16
CONTENT=ALL
EXCLUDE=INDEX,CONSTRAINT,TABLE_STATISTICS
TABLES= TABLE NAMES
user4005330 wrote:
But my question is: if I omit the FILESIZE parameter, how does Oracle determine the size of each dump file?
As you defined PARALLEL=16, Data Pump will create 16 worker processes, and each process writes to its own file; that's the reason why you get different sized files. When FILESIZE is not specified there is no upper limit per file, so each file simply grows to hold whatever its worker writes.
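If you want uniformly sized pieces instead, FILESIZE caps each file and the %U template keeps generating new file names as each one fills up. A minimal sketch of the same parameter file with a cap added (the 4G value is illustrative, not from the original post):
JOB_NAME=SCHEMA.ENV.080410..p1
DIRECTORY=dump_dir
DUMPFILE=dump_dir:SCHEMA.ENV.080410..p1%U.dmp
LOGFILE=SCHEMA.ENV.080410..p1.explog
FILESIZE=4G
PARALLEL=16
CONTENT=ALL
With this in place each worker still writes its own files, but no single file exceeds 4G.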
Similar Messages
-
Consistent parameter in Data Pump.
Hi All,
As we know, there is no CONSISTENT parameter in Data Pump; can anyone tell me how Data Pump takes care of this?
From the net I got the one-liner below:
"Data Pump Export determines the current time and uses FLASHBACK_TIME." But I failed to understand what exactly it meant.
Regards,
Sphinx
This is the equivalent of CONSISTENT=Y in exp. If you use FLASHBACK_TIME=SYSTIMESTAMP, the Data Pump export will be "as of the point in time the export began; every table will be as of the same commit point in time".
According to the docs:
“The SCN that most closely matches the specified time is found, and this SCN is used to enable the Flashback utility. The export operation is performed with data that is consistent as of this SCN.”
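For example, a hedged sketch of a consistent export (credentials, directory, and file names hypothetical; FLASHBACK_TIME=SYSTIMESTAMP is accepted as-is in recent releases, while older ones may need an explicit TO_TIMESTAMP expression):
expdp scott/tiger SCHEMAS=scott DIRECTORY=dp_dir DUMPFILE=scott_cons.dmp LOGFILE=scott_cons.log FLASHBACK_TIME=SYSTIMESTAMP
-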
Data Pump Consistent parameter?
Hi All,
Is there any CONSISTENT parameter in Data Pump, as there is in exp/imp?
Because we are using Data Pump for backups and want to disable the consistent parameter.
Please let me know how I can disable consistent parameter in Data Pump.
Thanks
If it's not a backup method, then how do you do a logical
full database backup????
From my thinking it's called a logical DB backup (when
you are using exp or expdp).
There are many reasons why export shouldn't be used as a backup method:
1. It's very slow to export a huge database (which you are already experiencing), and the import will take much longer.
2. You only have a 'snapshot' of your database at the time of backup; in the event of a disaster, you will lose all data changes made after the backup.
3. It has a performance impact on a busy database (which you are also experiencing).
Other than all these, if you turn CONSISTENT to N, your 'logical' backup is logically corrupted. -
Using Data Pump Storage Parameter option
I am creating a database replica of our production environment - the DB names don't have to be the same.
My option is to use Oracle data pump to move the data from source database to target database.
I performed the same scenario for our Windows 2003 environment with no problem.
Doing the same for Linux, I am getting tablespace creation error as you can see below:
Linux-x86_64 Error: 2: No such file or directory
Failing sql is:
CREATE TABLESPACE "INQUIRY" DATAFILE '/oraappl/pca/vprod/vproddata/inquiry01.dbf' SIZE 629145600 LOGGING ONLINE PERMANENT BLOCKSIZE 8192 EXTENT MANAGEMENT LOCAL AUTOALLOCATE SEGMENT SPACE MANAGEMENT AUTO
ORA-39083: Object type TABLESPACE failed to create with error:
ORA-01119: error in creating database file '/oraappl/pca/vprod/vproddata/medical01.dbf'
ORA-27040: file create error, unable to create file
My question is: do we have to create the tablespaces, or should Data Pump use the default tablespace location already in use by the new database?
Hi Richard,
I am working creating my extra database using duplicate command as you suggested.
I got everything up until I got this error:
channel ORA_AUX_DISK_1: reading from backup piece /oraappl/pca/backups/weekly/vproddata/rman/VPR ackupset/2013_08_27/o1_mf_nnndf_TAG20130827T083750_91s7dz0r_.bkp
ORA-19870: error reading backup piece /oraappl/pca/backups/weekly/vproddata/rman/VPROD/backupset 3_08_27/o1_mf_nnndf_TAG20130827T083750_91s7dz0r_.bkp
ORA-19505: failed to identify file "/oraappl/pca/backups/weekly/vproddata/rman/VPROD/backupset/2 08_27/o1_mf_nnndf_TAG20130827T083750_91s7dz0r_.bkp"
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
It is defaulting to the recovery location of the production database, instead of the auxiliary db.
My next option was to catalog the backup files, but even that is not working. Any suggestions?
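For reference, cataloging backup pieces so the auxiliary instance can find them is typically done with CATALOG START WITH; a hedged sketch (the path is hypothetical and should point at the directory actually holding the backup pieces):
RMAN> CATALOG START WITH '/path/to/aux/backupset/';
-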
Data Pump Export issue - no streams pool created and cannot automatically create one
I am trying to use Data Pump on a 10.2.0.1 database that has VLM enabled, and I am getting the following error:
Export: Release 10.2.0.1.0 - Production on Tuesday, 20 April, 2010 10:52:08
Connected to: Oracle Database 10g Release 10.2.0.1.0 - Production
ORA-31626: job does not exist
ORA-31637: cannot create job SYS_EXPORT_TABLE_01 for user E_AGENT_SITE
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPV$FT_INT", line 600
ORA-39080: failed to create queues "KUPC$C_1_20100420105208" and "KUPC$S_1_20100420105208" for Data Pump job
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPC$QUE_INT", line 1555
ORA-00832: no streams pool created and cannot automatically create one
This is my script (that I currently use on other non vlm databases successfully):
expdp e_agent_site/<password>@orcl parfile=d:\DailySitePump.par
this is my parameter file :
DUMPFILE=site_pump%U.dmp
PARALLEL=1
LOGFILE=site_pump.log
STATUS=300
DIRECTORY=DATA_DUMP
QUERY=wwv_document$:"where last_updated > sysdate-18"
EXCLUDE=CONSTRAINT
EXCLUDE=INDEX
EXCLUDE=GRANT
TABLES=wwv_document$
FILESIZE=2000M
My oracle directory is created and the user has rights
Googling the issue suggests that the shared pool is too small or that streams_pool_size needs setting. shared_pool_size = 1200M, and when I query v$parameter it shows that streams_pool_size = 0.
I've tried ALTER SYSTEM SET streams_pool_size=1M; but I just get:
ORA-02097: parameter cannot be modified because specified value is invalid
ORA-04033: Insufficient memory to grow pool
The server is a Windows Enterprise box with 16GB RAM and VLM enabled; pfile memory parameters are listed below:
# resource
processes = 1250
job_queue_processes = 10
open_cursors = 1000 # no overhead if set too high
# sga
shared_pool_size = 1200M
large_pool_size = 150M
java_pool_size = 50M
# pga
pga_aggregate_target = 850M # custom
# System Managed Undo and Rollback Segments
undo_management=AUTO
undo_tablespace=UNDOTBS1
# vlm support
USE_INDIRECT_DATA_BUFFERS = TRUE
DB_BLOCK_BUFFERS = 1500000
Any ideas why I cannot run Data Pump? I am assuming that I just need to set streams_pool_size, but I don't understand why I cannot increase it on this DB. It is set to 0 on other databases that work fine, and there I can set it, which is why I am possibly linking the issue to VLM.
thanks
Robert
SGA_MAX_SIZE?
SQL> ALTER SYSTEM SET streams_pool_size=32M SCOPE=BOTH;
ALTER SYSTEM SET streams_pool_size=32M SCOPE=BOTH
ERROR at line 1:
ORA-02097: parameter cannot be modified because specified value is invalid
ORA-04033: Insufficient memory to grow pool
SQL> show parameter sga_max
NAME TYPE VALUE
sga_max_size big integer 480M
SQL> show parameter cache
NAME TYPE VALUE
db_16k_cache_size big integer 0
db_2k_cache_size big integer 0
db_32k_cache_size big integer 0
db_4k_cache_size big integer 0
db_8k_cache_size big integer 0
db_cache_advice string ON
db_cache_size big integer 256M
db_keep_cache_size big integer 0
db_recycle_cache_size big integer 0
object_cache_max_size_percent integer 10
object_cache_optimal_size integer 102400
session_cached_cursors integer 20
SQL> ALTER SYSTEM SET db_cache_size=224M SCOPE=both;
System altered.
SQL> ALTER SYSTEM SET streams_pool_size=32M SCOPE=both;
System altered.
Lukasz -
Hi
I am trying to import data into Oracle 11g Release 2 (11.2.0.1) using the impdp utility and am getting the below error:
UDI-00018: Data Pump client is incompatible with database version 11.2.0.1.0
The export dump was taken on a database with Oracle 11g Release 1 (11.1.0.7.0), and I am trying to import into a higher version of the database. Is there any parameter I have to set to avoid this error?
AUTHSTATE=compat
A__z=! LOGNAME
CLASSPATH=/app/oracle/11.2.0/jlib:.
HOME=/home/oracle
LANG=C
LC__FASTMSG=true
LD_LIBRARY_PATH=/app/oracle/11.2.0/lib:/app/oracle/11.2.0/network/lib:.
LIBPATH=/app/oracle/11.2.0/JDK/JRE/BIN:/app/oracle/11.2.0/jdk/jre/bin/classic:/app/oracle/11.2.0/lib32
LOCPATH=/usr/lib/nls/loc
LOGIN=oracle
LOGNAME=oracle
MAIL=/usr/spool/mail/oracle
MAILMSG=[YOU HAVE NEW MAIL]
NLSPATH=/usr/lib/nls/msg/%L/%N:/usr/lib/nls/msg/%L/%N.cat
NLS_DATE_FORMAT=DD-MON-RRRR HH24:MI:SS
ODMDIR=/etc/objrepos
ORACLE_BASE=/app/oracle
ORACLE_HOME=/app/oracle/11.2.0
ORACLE_SID=AMT6
ORACLE_TERM=xterm
ORA_NLS33=/app/oracle/11.2.0/nls/data
PATH=/app/oracle/11.2.0/bin:.:/usr/bin:/etc:/usr/sbin:/usr/ucb:/home/oracle/bin:/usr/bin/X11:/sbin:.:/usr/local/bin:/usr/ccs/bin
PS1=nbsud01[$PWD]:($ORACLE_SID)>
PWD=/nbsiar/nbimp
SHELL=/usr/bin/ksh
SHLIB_PATH=/app/oracle/11.2.0/lib:/usr/lib
TERM=xterm
TZ=Europe/London
USER=oracle
_=/usr/bin/env
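UDI-00018 usually means the expdp/impdp binary being executed does not match the target database's version, for example when PATH picks up a client from a different ORACLE_HOME. A hedged first check, using the ORACLE_HOME from the environment above (directory object and file names hypothetical):
which impdp
/app/oracle/11.2.0/bin/impdp system/password DIRECTORY=dp_dir DUMPFILE=exp_111.dmp LOGFILE=imp_111.log
-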
Data Pump - expdp and slow performance on specific tables
Hi there
I have a data pump export of a schema. Most of the 700 tables are exported very quickly (direct path), but a couple of them seem to be extremely slow.
I have checked:
- no lobs
- no long/raw
- no VPD
- no partitions
- no bitmapped index
- just date, number, varchar2's
I'm running with trace 400300
but I'm having trouble reading the output from it. It seems that some of the slow-performing tables are running with method 4??? Can anyone find an explanation for the method in the trace:
1 > direct path (i think)
2 > external table (i think)
4 > ?
others?
I have done some stats using v$filestat/v$session_wait (history) - and it seems that we always wait for 'db file sequential read' and are doing lots and lots of SINGLEBLKRDS. No undo is being read.
I have a table of 2.5 GB -> 3 minutes
and then this (in my eyes) similar table of 2.4 GB -> 1½ hrs.
There are 367,000 blocks (8 K) and avg rowlen = 71.
I'm on Oracle 11.2 on a Linux box with plenty of RAM and CPU power.
Trace file /opt/oracle112/diag/rdbms/prod/prod/trace/prod_dw00_24268.trc
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
ORACLE_HOME = /opt/oracle112/product/11.2.0.2/dbhome_1
System name: Linux
Node name: tiaprod.thi.somethingamt.dk
Release: 2.6.18-194.el5
Version: #1 SMP Mon Mar 29 22:10:29 EDT 2010
Machine: x86_64
VM name: Xen Version: 3.4 (HVM)
Instance name: prod
Redo thread mounted by this instance: 1
Oracle process number: 222
Unix process pid: 24268, image: [email protected] (DW00)
*** 2011-09-20 09:39:39.671
*** SESSION ID:(401.8395) 2011-09-20 09:39:39.671
*** CLIENT ID:() 2011-09-20 09:39:39.671
*** SERVICE NAME:(SYS$BACKGROUND) 2011-09-20 09:39:39.671
*** MODULE NAME:() 2011-09-20 09:39:39.671
*** ACTION NAME:() 2011-09-20 09:39:39.671
KUPP:09:39:39.670: Current trace/debug flags: 00400300 = 4195072
*** MODULE NAME:(Data Pump Worker) 2011-09-20 09:39:39.672
*** ACTION NAME:(SYS_EXPORT_SCHEMA_09) 2011-09-20 09:39:39.672
KUPW:09:39:39.672: 0: ALTER SESSION ENABLE PARALLEL DML called.
KUPW:09:39:39.672: 0: ALTER SESSION ENABLE PARALLEL DML returned.
KUPC:09:39:39.693: Setting remote flag for this process to FALSE
prvtaqis - Enter
prvtaqis subtab_name upd
prvtaqis sys table upd
KUPW:09:39:39.819: 0: KUPP$PROC.WHATS_MY_ID called.
KUPW:09:39:39.819: 1: KUPP$PROC.WHATS_MY_ID returned.
KUPW:09:39:39.820: 1: worker max message number: 1000
KUPW:09:39:39.822: 1: Full cluster access allowed
KUPW:09:39:39.823: 1: Original job start time: 11-SEP-20 09:39:38 AM
KUPW:09:39:39.862: 1: KUPP$PROC.WHATS_MY_NAME called.
KUPW:09:39:39.862: 1: KUPP$PROC.WHATS_MY_NAME returned. Process name: DW00
KUPW:09:39:39.862: 1: KUPV$FT_INT.GET_INSTANCE_ID called.
KUPW:09:39:39.866: 1: KUPV$FT_INT.GET_INSTANCE_ID returned. Instance name: prod
KUPW:09:39:39.870: 1: ALTER SESSION ENABLE RESUMABLE called.
KUPW:09:39:39.870: 1: ALTER SESSION ENABLE RESUMABLE returned.
KUPW:09:39:39.871: 1: KUPF$FILE.INIT called.
KUPW:09:39:39.996: 1: KUPF$FILE.INIT returned.
KUPW:09:39:39.998: 1: KUPF$FILE.GET_MAX_CSWIDTH called.
KUPW:09:39:39.998: 1: KUPF$FILE.GET_MAX_CSWIDTH returned.
KUPW:09:39:39.998: 1: Max character width: 1
KUPW:09:39:39.998: 1: Max clob fetch: 32757
KUPW:09:39:39.998: 1: Max varchar2a size: 32757
KUPW:09:39:39.998: 1: Max varchar2 size: 7990
KUPW:09:39:39.998: 1: In procedure GET_PARAMETERS
KUPW:09:39:40.000: 1: In procedure GET_METADATA_FILTERS
KUPW:09:39:40.001: 1: In procedure GET_METADATA_TRANSFORMS
KUPW:09:39:40.002: 1: In procedure GET_DATA_FILTERS
KUPW:09:39:40.004: 1: In procedure GET_DATA_REMAPS
KUPW:09:39:40.005: 1: In procedure PRINT_MT_PARAMS
KUPW:09:39:40.005: 1: Master table : "SYSTEM"."SYS_EXPORT_SCHEMA_09"
KUPW:09:39:40.005: 1: Metadata job mode : SCHEMA_EXPORT
KUPW:09:39:40.005: 1: Debug enable : TRUE
KUPW:09:39:40.005: 1: Profile enable : FALSE
KUPW:09:39:40.005: 1: Transportable enable : FALSE
KUPW:09:39:40.005: 1: Metrics enable : FALSE
KUPW:09:39:40.005: 1: db version : 11.2.0.2.0
KUPW:09:39:40.005: 1: job version : 11.2.0.0.0
KUPW:09:39:40.005: 1: service name :
KUPW:09:39:40.005: 1: Current Edition : ORA$BASE
KUPW:09:39:40.005: 1: Job Edition :
KUPW:09:39:40.005: 1: Abort Step : 0
KUPW:09:39:40.005: 1: Access Method : AUTOMATIC
KUPW:09:39:40.005: 1: Data Options : 0
KUPW:09:39:40.006: 1: Dumper directory :
KUPW:09:39:40.006: 1: Master only : FALSE
KUPW:09:39:40.006: 1: Data Only : FALSE
KUPW:09:39:40.006: 1: Metadata Only : FALSE
KUPW:09:39:40.006: 1: Estimate : BLOCKS
KUPW:09:39:40.006: 1: Data error logging table :
KUPW:09:39:40.006: 1: Remote Link :
KUPW:09:39:40.006: 1: Dumpfile present : TRUE
KUPW:09:39:40.006: 1: Table Exists Action :
KUPW:09:39:40.006: 1: Partition Options : NONE
KUPW:09:39:40.006: 1: Tablespace Datafile Count: 0
KUPW:09:39:40.006: 1: Metadata Filter Index : 1 Count : 10
KUPW:09:39:40.006: 1: 1 Name - INCLUDE_USER
KUPW:09:39:40.006: 1: Value - TRUE
KUPW:09:39:40.006: 1: Object Name - SCHEMA_EXPORT
KUPW:09:39:40.006: 1: 2 Name - SCHEMA_EXPR
KUPW:09:39:40.006: 1: Value - IN ('TIA')
KUPW:09:39:40.006: 1: 3 Name - NAME_EXPR
KUPW:09:39:40.006: 1: Value - ='ACC_PAYMENT_SPECIFICATION'
KUPW:09:39:40.006: 1: Object - TABLE
KUPW:09:39:40.006: 1: 4 Name - INCLUDE_PATH_EXPR
KUPW:09:39:40.006: 1: Value - IN ('TABLE')
KUPW:09:39:40.006: 1: 5 Name - ORDERED
KUPW:09:39:40.006: 1: Value - FALSE
KUPW:09:39:40.006: 1: Object - TABLE_DATA
KUPW:09:39:40.006: 1: 6 Name - NO_XML
KUPW:09:39:40.006: 1: Value - TRUE
KUPW:09:39:40.006: 1: Object - XMLSCHEMA/EXP_XMLSCHEMA
KUPW:09:39:40.006: 1: 7 Name - XML_OUTOFLINE
KUPW:09:39:40.006: 1: Value - FALSE
KUPW:09:39:40.006: 1: Object - TABLE/TABLE_DATA
KUPW:09:39:40.006: 1: 8 Name - XDB_GENERATED
KUPW:09:39:40.006: 1: Value - FALSE
KUPW:09:39:40.006: 1: Object - TABLE/TRIGGER
KUPW:09:39:40.007: 1: 9 Name - XDB_GENERATED
KUPW:09:39:40.007: 1: Value - FALSE
KUPW:09:39:40.007: 1: Object - TABLE/RLS_POLICY
KUPW:09:39:40.007: 1: 10 Name - PRIVILEGED_USER
KUPW:09:39:40.007: 1: Value - TRUE
KUPW:09:39:40.007: 1: MD remap schema Index : 4 Count : 0
KUPW:09:39:40.007: 1: MD remap other Index : 5 Count : 0
KUPW:09:39:40.007: 1: MD Transform ddl Index : 2 Count : 11
KUPW:09:39:40.007: 1: 1 Name - DBA
KUPW:09:39:40.007: 1: Value - TRUE
KUPW:09:39:40.007: 1: Object - JOB
KUPW:09:39:40.007: 1: 2 Name - EXPORT
KUPW:09:39:40.007: 1: Value - TRUE
KUPW:09:39:40.007: 1: 3 Name - PRETTY
KUPW:09:39:40.007: 1: Value - FALSE
KUPW:09:39:40.007: 1: 4 Name - SQLTERMINATOR
KUPW:09:39:40.007: 1: Value - FALSE
KUPW:09:39:40.007: 1: 5 Name - CONSTRAINTS
KUPW:09:39:40.007: 1: Value - FALSE
KUPW:09:39:40.007: 1: Object - TABLE
KUPW:09:39:40.007: 1: 6 Name - REF_CONSTRAINTS
KUPW:09:39:40.007: 1: Value - FALSE
KUPW:09:39:40.007: 1: Object - TABLE
KUPW:09:39:40.007: 1: 7 Name - OID
KUPW:09:39:40.007: 1: Value - TRUE
KUPW:09:39:40.007: 1: Object - TABLE
KUPW:09:39:40.007: 1: 8 Name - RESET_PARALLEL
KUPW:09:39:40.007: 1: Value - TRUE
KUPW:09:39:40.007: 1: Object - INDEX
KUPW:09:39:40.007: 1: 9 Name - OID
KUPW:09:39:40.007: 1: Value - TRUE
KUPW:09:39:40.007: 1: Object - TYPE
KUPW:09:39:40.007: 1: 10 Name - OID
KUPW:09:39:40.007: 1: Value - TRUE
KUPW:09:39:40.007: 1: Object - INC_TYPE
KUPW:09:39:40.007: 1: 11 Name - REVOKE_FROM
KUPW:09:39:40.008: 1: Value - SYSTEM
KUPW:09:39:40.008: 1: Object - ROLE
KUPW:09:39:40.008: 1: Data Filter Index : 6 Count : 0
KUPW:09:39:40.008: 1: Data Remap Index : 7 Count : 0
KUPW:09:39:40.008: 1: MD remap name Index : 8 Count : 0
KUPW:09:39:40.008: 1: In procedure DISPATCH_WORK_ITEMS
KUPW:09:39:40.009: 1: In procedure SEND_MSG. Fatal=0
KUPW:09:39:40.009: 1: KUPC$QUEUE.TRANSCEIVE called.
kwqberlst !retval block
kwqberlst rqan->lagno_kwqiia 7
kwqberlst rqan->lascn_kwqiia > 0 block
kwqberlst rqan->lascn_kwqiia 7
kwqberlst ascn -90145310 lascn 22
kwqberlst !retval block
kwqberlst rqan->lagno_kwqiia 7
KUPW:09:39:40.036: 1: KUPC$QUEUE.TRANSCEIVE returned. Received 2011
KUPW:09:39:40.036: 1: DBMS_LOB.CREATETEMPORARY called.
KUPW:09:39:40.037: 1: DBMS_LOB.CREATETEMPORARY returned.
KUPW:09:39:40.038: 1: Flags: 18
KUPW:09:39:40.038: 1: Start sequence number:
KUPW:09:39:40.038: 1: End sequence number:
KUPW:09:39:40.038: 1: Metadata Parallel: 1
KUPW:09:39:40.038: 1: Primary worker id: 1
KUPW:09:39:40.041: 1: In procedure GET_TABLE_DATA_OBJECTS
KUPW:09:39:40.041: 1: In procedure CREATE_MSG
KUPW:09:39:40.041: 1: KUPV$FT.MESSAGE_TEXT called.
KUPW:09:39:40.041: 1: KUPV$FT.MESSAGE_TEXT returned.
KUPW:09:39:40.041: 1: In procedure SEND_MSG. Fatal=0
KUPW:09:39:40.041: 1: KUPC$QUEUE_INT.SEND called.
kwqberlst !retval block
kwqberlst rqan->lagno_kwqiia 7
kwqberlst rqan->lascn_kwqiia > 0 block
kwqberlst rqan->lascn_kwqiia 7
kwqberlst ascn -90145310 lascn 22
kwqberlst !retval block
kwqberlst rqan->lagno_kwqiia 7
KUPW:09:39:40.044: 1: KUPC$QUEUE_INT.SEND returned.
KUPW:09:39:40.044: 1: Estimate in progress using BLOCKS method...
KUPW:09:39:40.044: 1: In procedure UPDATE_TYPE_COMPLETION_ROW
KUPW:09:39:40.044: 1: Old Seqno: 0 New Path: SCHEMA_EXPORT/TABLE/TABLE_DATA PO Num: -5 New Seqno: 62
KUPW:09:39:40.046: 1: Created type completion for duplicate 62
KUPW:09:39:40.046: 1: In procedure CREATE_MSG
KUPW:09:39:40.046: 1: KUPV$FT.MESSAGE_TEXT called.
KUPW:09:39:40.046: 1: KUPV$FT.MESSAGE_TEXT returned.
KUPW:09:39:40.046: 1: In procedure SEND_MSG. Fatal=0
KUPW:09:39:40.046: 1: KUPC$QUEUE_INT.SEND called.
kwqberlst !retval block
kwqberlst rqan->lagno_kwqiia 7
kwqberlst rqan->lascn_kwqiia > 0 block
kwqberlst rqan->lascn_kwqiia 7
kwqberlst ascn -90145310 lascn 22
kwqberlst !retval block
kwqberlst rqan->lagno_kwqiia 7
KUPW:09:39:40.047: 1: KUPC$QUEUE_INT.SEND returned.
KUPW:09:39:40.047: 1: Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
KUPW:09:39:40.048: 1: In procedure CONFIGURE_METADATA_UNLOAD
KUPW:09:39:40.048: 1: Phase: ESTIMATE_PHASE Filter Name: Filter Value:
KUPW:09:39:40.048: 1: DBMS_METADATA.OPEN11.2.0.0.0 called.
KUPW:09:39:40.182: 1: DBMS_METADATA.OPEN11.2.0.0.0 returned. Source handle: 100001
KUPW:09:39:40.182: 1: DBMS_METADATA.SET_FILTER called. metadata_phase: ESTIMATE_PHASE
KUPW:09:39:40.182: 1: DBMS_METADATA.SET_FILTER returned. In function GET_NOEXP_TABLE
KUPW:09:39:40.194: 1: DBMS_METADATA.SET_PARSE_ITEM called.
*** 2011-09-20 09:39:40.325
KUPW:09:39:40.325: 1: DBMS_METADATA.SET_PARSE_ITEM returned.
KUPW:09:39:40.325: 1: DBMS_METADATA.SET_COUNT called.
KUPW:09:39:40.328: 1: DBMS_METADATA.SET_COUNT returned.
KUPW:09:39:40.328: 1: DBMS_METADATA.FETCH_XML_CLOB called.
*** 2011-09-20 09:39:42.603
KUPW:09:39:42.603: 1: DBMS_METADATA.FETCH_XML_CLOB returned.
KUPW:09:39:42.603: 1: In procedure CREATE_TABLE_DATA_OBJECT_ROWS
KUPW:09:39:42.603: 1: In function GATHER_PARSE_ITEMS
KUPW:09:39:42.603: 1: In function CHECK_FOR_REMAP_NETWORK
KUPW:09:39:42.603: 1: Nothing to remap
KUPW:09:39:42.603: 1: In procedure BUILD_OBJECT_STRINGS
KUPW:09:39:42.604: 1: In procedure LOCATE_DATA_FILTERS
KUPW:09:39:42.604: 1: In function NEXT_PO_NUMBER
KUPW:09:39:42.620: 1: In procedure DETERMINE_METHOD_PARALLEL
KUPW:09:39:42.620: 1: flags mask: 0
KUPW:09:39:42.620: 1: dapi_possible_meth: 1
KUPW:09:39:42.620: 1: data_size: 3019898880
KUPW:09:39:42.620: 1: et_parallel: TRUE
KUPW:09:39:42.620: 1: object: TABLE_DATA:"TIA"."ACC_PAYMENT_SPECIFICATION" <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
KUPW:09:39:42.648: 1: l_dapi_bit_mask: 7
KUPW:09:39:42.648: 1: l_client_bit_mask: 7
KUPW:09:39:42.648: 1: TABLE_DATA:"TIA"."ACC_PAYMENT_SPECIFICATION" either, parallel: 12 <<<<< Here it says either (I thought that was the method?) <<<<<<<<<<<<<<<<
KUPW:09:39:42.648: 1: FORALL BULK INSERT called.
KUPW:09:39:42.658: 1: FORALL BULK INSERT returned.
KUPW:09:39:42.660: 1: DBMS_LOB.TRIM called. v_md_xml_clob
KUPW:09:39:42.660: 1: DBMS_LOB.TRIM returned.
KUPW:09:39:42.660: 1: DBMS_METADATA.FETCH_XML_CLOB called.
KUPW:09:39:42.678: 1: DBMS_METADATA.FETCH_XML_CLOB returned.
KUPW:09:39:42.678: 1: DBMS_LOB.CREATETEMPORARY called.
KUPW:09:39:42.678: 1: DBMS_LOB.CREATETEMPORARY returned.
KUPW:09:39:42.678: 1: In procedure UPDATE_TD_ROW_EXP with seqno: 62
KUPW:09:39:42.680: 1: 1 rows fetched
KUPW:09:39:42.680: 1: In function NEXT_PO_NUMBER
KUPW:09:39:42.680: 1: Next table data array entry: 1 Parallel: 12 Size: 3019898880 Method: 4 Creation_level: 0 <<<<<<<<<<<<<<<< HERE IT SAYS METHOD = 4 and PARALLEL=12 (I'm not using the parallel parameter ???) <<<<<<<<<<<<<<<<<<
KUPW:09:39:42.681: 1: In procedure UPDATE_TD_BASE_PO_INFO
KUPW:09:39:42.683: 1: Updated 1 td objects with bpo between 1 and 1
KUPW:09:39:42.684: 1: Send table_data_varray called. Count: 1
kwqberlst !retval block
kwqberlst rqan->lagno_kwqiia 7
kwqberlst rqan->lascn_kwqiia > 0 block
kwqberlst rqan->lascn_kwqiia 7
kwqberlst ascn -90145310 lascn 22
kwqberlst !retval block
kwqberlst rqan->lagno_kwqiia 7
KUPW:09:39:42.695: 1: Send table_data_varray returned.
KUPW:09:39:42.695: 1: In procedure SEND_MSG. Fatal=0
KUPW:09:39:42.695: 1: In procedure UPDATE_TYPE_COMPLETION_ROW
KUPW:09:39:42.695: 1: Old Seqno: 62 New Path: PO Num: -5 New Seqno: 0
KUPW:09:39:42.695: 1: Object count: 1
KUPW:09:39:42.697: 1: 1 completed for 62
KUPW:09:39:42.697: 1: DBMS_METADATA.CLOSE called. Handle: 100001
KUPW:09:39:42.697: 1: DBMS_METADATA.CLOSE returned.
KUPW:09:39:42.697: 1: In procedure CREATE_MSG
KUPW:09:39:42.697: 1: KUPV$FT.MESSAGE_TEXT called.
KUPW:09:39:42.698: 1: KUPV$FT.MESSAGE_TEXT returned.
KUPW:09:39:42.698: 1: In procedure SEND_MSG. Fatal=0
KUPW:09:39:42.698: 1: KUPC$QUEUE_INT.SEND called.
kwqberlst !retval block
kwqberlst rqan->lagno_kwqiia 7
kwqberlst rqan->lascn_kwqiia > 0 block
kwqberlst rqan->lascn_kwqiia 7
kwqberlst ascn -90145310 lascn 22
kwqberlst !retval block
kwqberlst rqan->lagno_kwqiia 7
KUPW:09:39:42.699: 1: KUPC$QUEUE_INT.SEND returned.
KUPW:09:39:42.699: 1: Total estimation using BLOCKS method: 2.812 GB
KUPW:09:39:42.699: 1: In procedure CONFIGURE_METADATA_UNLOAD
KUPW:09:39:42.699: 1: Phase: WORK_PHASE Filter Name: BEGIN_WITH Filter Value:
KUPW:09:39:42.699: 1: DBMS_METADATA.OPEN11.2.0.0.0 called.
KUPW:09:39:42.837: 1: DBMS_METADATA.OPEN11.2.0.0.0 returned. Source handle: 200001
KUPW:09:39:42.837: 1: DBMS_METADATA.SET_FILTER called. metadata_phase: WORK_PHASE
KUPW:09:39:42.837: 1: DBMS_METADATA.SET_FILTER returned. In function GET_NOEXP_TABLE
KUPW:09:39:42.847: 1: DBMS_METADATA.SET_PARSE_ITEM called.
KUPW:09:39:42.964: 1: DBMS_METADATA.SET_PARSE_ITEM returned.
KUPW:09:39:42.964: 1: DBMS_METADATA.SET_COUNT called.
KUPW:09:39:42.967: 1: DBMS_METADATA.SET_COUNT returned.
KUPW:09:39:42.967: 1: KUPF$FILE.OPEN_CONTEXT called.
KUPW:09:39:42.967: 1: KUPF$FILE.OPEN_CONTEXT returned.
KUPW:09:39:42.968: 1: DBMS_METADATA.FETCH_XML_CLOB called. Handle: 200001
*** 2011-09-20 09:40:01.798
KUPW:09:40:01.798: 1: DBMS_METADATA.FETCH_XML_CLOB returned.
KUPW:09:40:01.798: 1: Object seqno fetched:
KUPW:09:40:01.799: 1: Object path fetched:
KUPW:09:40:01.799: 1: In procedure SEND_MSG. Fatal=0
KUPW:09:40:01.799: 1: In procedure COMPLETE_EXP_OBJECT
KUPW:09:40:01.799: 1: KUPF$FILE.FLUSH_LOB called.
KUPW:09:40:01.815: 1: KUPF$FILE.FLUSH_LOB returned.
KUPW:09:40:01.815: 1: In procedure UPDATE_TYPE_COMPLETION_ROW
KUPW:09:40:01.815: 1: Old Seqno: 226 New Path: PO Num: -5 New Seqno: 0
KUPW:09:40:01.815: 1: Object count: 1
KUPW:09:40:01.815: 1: 1 completed for 226
KUPW:09:40:01.815: 1: DBMS_METADATA.CLOSE called. Handle: 200001
KUPW:09:40:01.816: 1: DBMS_METADATA.CLOSE returned.
KUPW:09:40:01.816: 1: KUPF$FILE.CLOSE_CONTEXT called.
KUPW:09:40:01.820: 1: KUPF$FILE.CLOSE_CONTEXT returned.
KUPW:09:40:01.821: 1: In procedure SEND_MSG. Fatal=0
KUPW:09:40:01.821: 1: KUPC$QUEUE.TRANSCEIVE called.
kwqberlst !retval block
kwqberlst rqan->lagno_kwqiia 7
kwqberlst rqan->lascn_kwqiia > 0 block
kwqberlst rqan->lascn_kwqiia 7
kwqberlst ascn -90145310 lascn 22
kwqberlst !retval block
kwqberlst rqan->lagno_kwqiia 7
KUPW:09:40:01.827: 1: KUPC$QUEUE.TRANSCEIVE returned. Received 2012
KUPW:09:40:01.827: 1: DBMS_LOB.CREATETEMPORARY called.
KUPW:09:40:01.828: 1: DBMS_LOB.CREATETEMPORARY returned.
KUPW:09:40:01.828: 1: Process order range: 1..1
KUPW:09:40:01.828: 1: Method: 1
KUPW:09:40:01.828: 1: Parallel: 1
KUPW:09:40:01.828: 1: Creation level: 0
KUPW:09:40:01.830: 1: BULK COLLECT called.
KUPW:09:40:01.830: 1: BULK COLLECT returned.
KUPW:09:40:01.830: 1: In procedure BUILD_OBJECT_STRINGS
KUPW:09:40:01.836: 1: In procedure MOVE_DATA UNLOADing process_order 1 TABLE_DATA:"TIA"."ACC_PAYMENT_SPECIFICATION" <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
KUPW:09:40:01.839: 1: KUPD$DATA.OPEN called.
KUPW:09:40:01.840: 1: KUPD$DATA.OPEN returned.
KUPW:09:40:01.840: 1: KUPD$DATA.SET_PARAMETER - common called.
KUPW:09:40:01.843: 1: KUPD$DATA.SET_PARAMETER - common returned.
KUPW:09:40:01.843: 1: KUPD$DATA.SET_PARAMETER - flags called.
KUPW:09:40:01.843: 1: KUPD$DATA.SET_PARAMETER - flags returned.
KUPW:09:40:01.843: 1: KUPD$DATA.START_JOB called.
KUPW:09:40:01.918: 1: KUPD$DATA.START_JOB returned. In procedure GET_JOB_VERSION
This is how I called expdp:
expdp system/xxxxxxxxx schemas=tia directory=expdp INCLUDE=TABLE:\" =\'ACC_PAYMENT_SPECIFICATION\'\" REUSE_DUMPFILES=Y LOGFILE=expdp:$LOGFILE TRACE=400300
Hi there ...
I have read the note - that's where I found the link to trace note 286496.1 - on how to set up a trace.
But I still need an explanation of the methods (1, 2, 4, etc.)
regards
Mette
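For what it's worth, Data Pump exposes an ACCESS_METHOD parameter (only formally documented in later releases) that can pin a table to one unload path instead of letting the worker choose; a hedged sketch for comparing timings, reusing the command from the thread:
expdp system/xxxxxxxxx schemas=tia directory=expdp INCLUDE=TABLE:\" =\'ACC_PAYMENT_SPECIFICATION\'\" REUSE_DUMPFILES=Y ACCESS_METHOD=DIRECT_PATH
Running the slow table once with DIRECT_PATH and once with EXTERNAL_TABLE should show which method the 1½-hour run corresponds to.
-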
How can I use the data pump export from external client?
I am trying to export a bunch of tables from a DB, but I can't figure out how to do it.
I don't have access to a shell terminal on the server itself; I can only log in using TOAD.
I am trying to use TOAD's Data Pump Export utility, but I keep getting this error:
ORA-39070: Unable to open the log file.
ORA-39087: directory name D:\TEMP\ is invalid
I don't understand if it's because I am setting up the parameter file wrong, or if the utility is trying to find that directory on the server, whereas I am thinking it's going to dump to my local filesystem where that directory exists.
I'd hate to have to use SQL*Loader to create ctl files for each and every table...
Here is my parameter file:
DUMPFILE="db_export.dmp"
LOGFILE="exp_db_export.log"
DIRECTORY="D:\temp\"
TABLES=ACCOUNT
CONTENT=ALL
(just trying to test it on one table so far...)
P.S. Oracle 11g
Edited by: trant on Jan 13, 2012 7:58 AM
ORA-39070: Unable to open the log file.
ORA-39087: directory name D:\TEMP\ is invalid
The directory here should not be a physical location; it is a logical representation (a server-side Oracle directory object).
For that you have to create a directory at the SQL level, e.g. CREATE DIRECTORY exp_dp ...
Then you have to use the directory created above: DIRECTORY=exp_dp
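A minimal sketch of that setup, run on the database server as a DBA (path and grantee hypothetical):
SQL> CREATE DIRECTORY exp_dp AS '/u01/app/oracle/exports';
SQL> GRANT READ, WRITE ON DIRECTORY exp_dp TO trant;
Note that the dump and log files will then land in that server-side path, not on the filesystem of the machine running TOAD.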
HTH -
Data Pump using 11gR2 and 11gR1 - question during migration
Our DBE put our test database on 11gR2 and our production on 11gR1. Due to an ingest failure we wanted to move the data in test (11gR2) to production (11gR1) using Data Pump; however, I was told that you cannot go from 11gR2 to 11gR1. I was also told that because the database contained public synonyms, I would have to recreate all public synonyms. He said it had something to do with lbascys. Can someone clarify this for me?
user11171364 wrote:
... Can I still use these parameters during the import ONLY, without having used them during the export?
Nope, read the restriction: during the import, "This parameter is valid only when the NETWORK_LINK parameter is also specified."
http://download.oracle.com/docs/cd/B19306_01/server.102/b14215/dp_import.htm#sthref299
Consequently, you cannot use it within your dumpfile.
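(As a hedged aside on the original 11gR2-to-11gR1 question: a dumpfile readable by the older release can be produced by exporting with the VERSION parameter; schema, directory, and credentials here are hypothetical.)
expdp system/password SCHEMAS=app_schema VERSION=11.1 DIRECTORY=dp_dir DUMPFILE=down_%U.dmp LOGFILE=down.log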
Nicolas. -
Data pump, Query "1=2" performance?
Hi guys
I am trying to export a schema using Data Pump; however, I need no data from a few of the tables, since they are irrelevant, but I'd still like to have the structure of the table itself along with any constraints and such.
I thought of using the QUERY parameter with a "1=2" query, making it so that I can filter out all data from certain tables in the export while getting everything else.
While this works, I wonder if Data Pump/Oracle is smart enough not to run this query through the entire table. If it does perform a full table scan, can anybody recommend another way of excluding just the data of certain tables while still getting the table structure itself along with anything else related to it?
I have been unable to find such information after searching the net for a good while.
Regards
Alex
Thanks.
Does that mean 1=2 actually scans the entire table so it should be avoided in the future?
Regards
Alex
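(A hedged alternative to QUERY filtering: EXCLUDE=TABLE_DATA skips the rows of the named tables entirely, so nothing is scanned, while their DDL, constraints, and grants are still exported; table names here are illustrative.)
expdp scott/tiger SCHEMAS=scott DIRECTORY=dp_dir DUMPFILE=scott_%U.dmp EXCLUDE=TABLE_DATA:\"IN ('BIG_LOG_TABLE','AUDIT_HISTORY')\"
-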
I am trying to perform a data pump export on a table using a query within a parfile and I am getting some odd behaviour. The database version is 10.2.0.4.3 and the OS is AIX 5.3. The query looks like this.
QUERY="POSDECLARATIONQUEUE:where SESSIONID in (select 'B.SESSIONID' from POSACCOUNT A, POSDECLARATIONQUEUE B, POSDECLARATIONSESSION C where 'B.SESSIONID' = 'C.ID' and 'C.ACCOUNTID' = 'A.ID' and 'A.SITE' = '10252')"
This works but gets 0 rows. If I run the query against the instance in an SQLPlus session as below then I get 0 rows returned.
select * from POSDECLARATIONQUEUE where SESSIONID in (select 'B.SESSIONID' from POSACCOUNT A, POSDECLARATIONQUEUE B, POSDECLARATIONSESSION C where 'B.SESSIONID' = 'C.ID' AND 'C.ACCOUNTID' = 'A.ID' and 'A.SITE' = '10252');
If I take out the single quotes from around the columns within the query against the instance within SQLPlus, I get over 2000 rows returned.
SQL> select count(*) from POSDECLARATIONQUEUE where SESSIONID in (select B.SESSIONID from POSACCOUNT A, POSDECLARATIONQUEUE B, POSDECLARATIONSESSION C where B.SESSIONID = C.ID and C.ACCOUNTID = A.ID and A.SITE = 10252);
COUNT(*)
2098
If I remove the single quotes from the parfile query then I get the following error within the data pump export.
UDE-00014: invalid value for parameter, 'schemas'.
The SCHEMAS option is not specified within the parfile and the TABLES option only specifies the table POSDECLARATIONQUEUE.
Can someone assist with this? I just can't seem to be able to get the syntax right for it to work within Data Pump.
Kind Regards.
Graeme.
Edited by: user12219844 on Apr 14, 2010 3:34 AM
It looks like your query might be a little wrong:
This is what you have:
QUERY="POSDECLARATIONQUEUE:where SESSIONID in (select 'B.SESSIONID' from POSACCOUNT A, POSDECLARATIONQUEUE B, POSDECLARATIONSESSION C where 'B.SESSIONID' = 'C.ID' and 'C.ACCOUNTID' = 'A.ID' and 'A.SITE' = '10252')"
This is what I would have thought it should look like:
QUERY=POSDECLARATIONQUEUE:"where SESSIONID in (select B.SESSIONID from POSACCOUNT A, POSDECLARATIONQUEUE B, POSDECLARATIONSESSION C where B.SESSIONID = C.ID and C.ACCOUNTID = A.ID and A.SITE = 10252)"
You want double quotes (") around the complete query, and you don't need the single quotes (') around all of the operands of the equals signs. The single quotes treat those values as strings, so
'B.SESSIONID' = 'C.ID'
asks whether the string 'B.SESSIONID' is equal to the string 'C.ID'.
The query that you used in SQL*Plus was
B.SESSIONID = C.ID
which asks whether the value stored in B.SESSIONID is equal to the value stored in C.ID,
which is what you want.
Dean -
Is it possible to Grant Nested Roles using Data Pump Export?
I'm on Oracle 10.2.0.5, trying various Data Pump parameters to obtain an export containing a statement like "GRANT ParentRole TO ChildRole;".
This is to Import to 11.2.0.2, on the Windows x64 Platform. I'm using SQLFILE= Parameter in an IMPDP to check the effect of various EXPDP Parameters.
I can get the "CREATE ROLE" Statements with a Full EXPDP using FULL=Y and INCLUDE=ROLE:"IN('ParentRole','ChildRole')"
I can get the Grants of Objects to Roles with a 2nd Schema EXPDP using SCHEMAS=('MySchema') - e.g. I get "GRANT SELECT ON MySchema.MyTable TO ParentRole;"
But I cannot get the parameters so that a role being granted to another role is exported.
Is this possible?
Can you give an example of the grants - a real example, so I can try to create this here? I'm thinking it is a grant that you want, but I'm not sure which grant. There are a bunch of different grants.
Dean -
Exporting whole database (10GB) using Data Pump export utility
Hi,
I have a requirement that we export the whole database (10GB) using the Data Pump export utility, because it is not possible to send the 10GB dump on a single CD/DVD to the system vendor of our application (to analyze a few issues we have).
Now, when I checked online, full export is available, but I am not able to understand how it works, as we have never used this Data Pump utility; we use the normal export method. Also, will Data Pump reduce the size of the dump file so it can fit on a DVD, or can we use a parallel full DB export to split the files and spread them across DVDs? Is that possible?
Please correct me if i am wrong and kindly help.
Thanks for your help in advance.
You need to create a directory object.
sqlplus user/password
create directory foo as '/path_here';
grant all on directory foo to public;
exit;
then run you expdp command.
Data Pump can compress the dumpfile if you are on 11.1 and have the appropriate options. The reason for specifying FILESIZE is to limit the size of each dumpfile. If you have 10G, are not compressing, and the total dumpfiles come to 10G, then by specifying 600MB you will just have 10G/600MB = 17 dumpfiles of 600MB each. You will have to send 17 CDs (probably a few more, since dumpfiles don't get filled up 100% due to parallelism).
Data Pump dumpfiles are written by the server, not the client, so the dumpfiles don't get created in the directory where the job is run.
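Putting the reply together, a hedged sketch of the split full export (credentials illustrative; the directory object is the one created above):
expdp system/password FULL=Y DIRECTORY=foo DUMPFILE=full_%U.dmp FILESIZE=600M LOGFILE=full_exp.log
With %U in the template, Data Pump starts a new file each time one reaches 600MB, producing the roughly 17 pieces described above.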
Dean -
Best Approach for using Data Pump
Hi,
I configured a new database which I set up with schemas that I imported in from another production database. Now, before this database becomes the new production database, I need to re-import the schemas so that the data is up-to-date.
Is there a way to use Data Pump so that I don't have to drop all the schemas first? Can I just export the schemas and somehow just overwrite what's in there already?
Thanks,
Nora
Hi, you can use the NETWORK_LINK parameter to import data from another remote database.
http://download.oracle.com/docs/cd/B19306_01/server.102/b14215/dp_import.htm#i1007380
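A hedged sketch of such a refresh over a database link (link name, schema, and credentials hypothetical; TABLE_EXISTS_ACTION=REPLACE overwrites tables that already exist, so no prior schema drop is needed):
impdp system/password NETWORK_LINK=prod_link SCHEMAS=app_schema TABLE_EXISTS_ACTION=REPLACE DIRECTORY=dp_dir LOGFILE=refresh.log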
Regards. -
Data pump with flashback_scn or flashback_time
Dear Gurus,
The Oracle database version in use is 11gR2. We don't have Flashback enabled for the database. However, to run a Data Pump export with consistency, can we turn on FLASHBACK_SCN or FLASHBACK_TIME?
Best
rac110g
How to set the parameter
The logical conclusion and field-tested best practice approved by Oracle Support is to set the UNDO_RETENTION parameter to at least the estimated time for the data pump import. Don’t forget to size your UNDO tablespace accordingly, since the retention only works as long as there is enough undo space available.
Note
IMPDP uses flashback technology (flashback table) on the source database to achieve consistency, so the UNDO tablespace there is worth a glance as well.
Check this link for expdp/impdp undo requirements and possible problems:
http://www.usn-it.de/index.php/2010/05/05/oracle-impdp-ora-1555-and-undo_retention/
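A hedged sketch tying these together (values illustrative; the SCN is captured on the source database just before the export starts):
SQL> ALTER SYSTEM SET undo_retention=7200;
SQL> SELECT dbms_flashback.get_system_change_number FROM dual;
expdp system/password FULL=Y FLASHBACK_SCN=1234567 DIRECTORY=dp_dir DUMPFILE=full_%U.dmp LOGFILE=full_cons.log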
Edited by: Asad99 on Mar 26, 2013 10:42 PM