Expdp/impdp oracle schema
Hello All,
We are running 11gr2 on HPUX.
I am supposed to take a dump of a schema (say XYZ) and import it into another server (server2) running the same database version on the same OS.
Just wanted to confirm the steps I am planning to follow.
1> expdp xyz/xyz schemas=xyz directory=dump_xyz dumpfile=xyz.dmp logfile=xyz.log
(one question here: after expdp, do I need to give system/pass or xyz/xyz?)
2> after the export is done, import it on the other side,
3> Once the import is done, create a SQL script on server1, move it to server2, and run it there so user XYZ has the same grants and privileges on server2 as well.
Or is there any way to export all its grants and privileges along with the objects when taking the dump on server1?
Thanks in advance!
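A note on steps 1-3 (not from the thread itself, so treat it as a sketch): a schema-mode Data Pump export taken by a privileged user normally carries the schema's object grants, system grants, and role grants inside the dump, so a separate grants script is often unnecessary. Using the names from the question:

```shell
# On server1: export as a privileged user (system) so the user definition
# and grants made by other users on XYZ's objects are captured as well
expdp system/<pass> schemas=XYZ directory=dump_xyz dumpfile=xyz.dmp logfile=xyz_exp.log

# Copy xyz.dmp into the directory mapped by dump_xyz on server2, then:
impdp system/<pass> schemas=XYZ directory=dump_xyz dumpfile=xyz.dmp logfile=xyz_imp.log
```

Running expdp as xyz/xyz also works for the user's own objects, but as far as I know an unprivileged self-export does not include the user definition and system grants, which is why the privileged account answers the question in step 1.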
Thanks buddy for this valuable information.
Will keep it in mind for future reference.
Now, per my scenario, I created user XYZ on the target first and then imported into it.
This user now has all the privileges listed above, but my basic requirement is that it must have only SELECT on all objects.
select privilege from dba_sys_privs where grantee='XYZ';
PRIVILEGE
CREATE DATABASE LINK
UNLIMITED TABLESPACE
CREATE SESSION
Which privileges do I need to revoke to prevent this user from creating or modifying objects, and what privilege do I need to grant so this user can select any object?
Please suggest.
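Not from the thread, but a sketch of the revoke/grant step being asked about (XYZ and the object names are placeholders; note that SELECT ANY TABLE is very broad):

```sql
-- Remove the ability to create database links and to consume space
REVOKE CREATE DATABASE LINK FROM xyz;
REVOKE UNLIMITED TABLESPACE FROM xyz;
-- Keep CREATE SESSION so the user can still log in

-- Broad option: read any schema's tables (powerful; use with care)
GRANT SELECT ANY TABLE TO xyz;
-- Narrower option: object-level grants instead, for example:
-- GRANT SELECT ON appowner.some_table TO xyz;
```

One caveat: an owner can always modify objects in his own schema, so if the imported objects live in XYZ itself, a truly read-only setup usually means holding the data in another schema and granting XYZ only SELECT on it.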
And also, while executing the suggested SQL on the target:
SQL> select DBMS_METADATA.GET ('USER','XYZ') from dual;
select DBMS_METADATA.GET ('USER','APPS_QUERY') from dual
ERROR at line 1:
ORA-00904: "DBMS_METADATA"."GET": invalid identifier
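The ORA-00904 is expected: DBMS_METADATA has no GET function. The documented call for pulling a user's DDL is DBMS_METADATA.GET_DDL:

```sql
-- Widen the LONG buffer so the CLOB result is not truncated in SQL*Plus
SET LONG 20000
SELECT DBMS_METADATA.GET_DDL('USER', 'XYZ') FROM dual;
```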
Similar Messages
-
EXPDP/IMPDP : Clone schema with datapump
Hello,
I have a full export with the following parameters:
DIRECTORY=dir$export
DUMPFILE=datapump_exp_FULL.dmp
LOGFILE=datapump_exp_FULL.log
FULL=Y
And I want to import only one schema into a new schema with the following parameters:
DIRECTORY=dir$export
DUMPFILE=datapump_exp_FULL.dmp
LOGFILE=datapump_imp_SCHEMA2.log
SCHEMAS=SCHEMA1
REMAP_SCHEMA=SCHEMA1:SCHEMA2
TABLE_EXISTS_ACTION=REPLACE
TRANSFORM=OID:N
But for whatever reason IMPDP tries to import other schemas like SYSMAN, APEX, etc. as well. This throws a load of ORA-31684 (object already exists) errors, which means TABLE_EXISTS_ACTION=REPLACE doesn't work either.
I also tried the import without SCHEMAS=SCHEMA1, but that didn't succeed either...
So, how can I force IMPDP to import only SCHEMA1, without touching the others ?
Regards,
Mynz
Either use the INCLUDE parameter as per the documentation:
http://docs.oracle.com/cd/E11882_01/server.112/e10701/dp_import.htm#i1007761
or:
EXCLUDE=SCHEMA:"IN ('OUTLN','SYSTEM','SYSMAN','FLOWS_FILES','APEX_030200','APEX_PUBLIC_USER','ANONYMOUS')"
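One way to sidestep shell-quoting trouble with EXCLUDE is to put everything in a parameter file (the file name here is illustrative):

```shell
# impdp_schema2.par -- run as: impdp system/<pass> parfile=impdp_schema2.par
DIRECTORY=dir$export
DUMPFILE=datapump_exp_FULL.dmp
LOGFILE=datapump_imp_SCHEMA2.log
REMAP_SCHEMA=SCHEMA1:SCHEMA2
TABLE_EXISTS_ACTION=REPLACE
TRANSFORM=OID:N
EXCLUDE=SCHEMA:"IN ('OUTLN','SYSTEM','SYSMAN','FLOWS_FILES','APEX_030200','APEX_PUBLIC_USER','ANONYMOUS')"
```

Inside a parameter file no shell escaping of the quotation marks is needed.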
Or use NOT LIKE with EXCLUDE. Take care of those quotation marks. -
Expdp & impdp of R12 schema into 11g DB
Hi,
i need to take a backup of GL schema from R12 instance using expdp &
import it into 11g DB( standalone DB) using impdp.
i have used the following for expdp in R12(10g DB)
$expdp system/<pass> schemas=gl directory=dump_dir dumpfile=gl.dmp logfile=gl.log
The export was successful.
How do I import it into the 11g DB? (This is on a different machine.)
regards,
Charan
Refer to the "Oracle® Database Utilities 11g Release 1 (11.1)" manual.
Data Pump Import
http://download.oracle.com/docs/cd/B28359_01/server.111/b28319/dp_import.htm#i1007653
Oracle® Database Utilities 11g Release 1 (11.1)
http://download.oracle.com/docs/cd/B28359_01/server.111/b28319/toc.htm -
Impdp import schema to different tablespace
Hi,
I have the PRTDB1 schema in the PRTDTS tablespace, and I need to encrypt this tablespace.
I have done these steps:
1. I created a new encrypted tablespace, PRTDTS_ENC.
2. I took an export of the PRTDB1 schema (expdp system/system schemas=PRTDB1 directory=datapump dumpfile=data.dmp logfile=log1.log).
Now I want to import this dump into the new PRTDTS_ENC tablespace.
Can you give me the impdp command I need?
Hello,
ORA-14223: Deferred segment creation is not supported for this table
Failing sql is:
CREATE TABLE "PRTDB1"."DATRE_CONT" ("BRON" NUMBER NOT NULL ENABLE, "PCDID" NUMBER(9,0) NOT NULL ENABLE, "OWNERSID" NUMBER NOT NULL ENABLE, "VALUESTR" VARCHAR2(4000 CHAR), "VALNUM" NUMBER, "VALDATE" DATE, "VALXML" "XMLTYPE") SEGMENT CREATION DEFERRED PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
So it means that these tables "need" a segment, but they were created with the SEGMENT CREATION DEFERRED option.
A workaround is to create a segment for these tables on the source database before exporting them. Then, at import time, you won't have the SEGMENT CREATION DEFERRED option. The following MOS note details this solution:
- *Error Ora-14223 Deferred Segment Creation Is Not Supported For This Table During Datapump Import [ID 1293326.1]*
The link below gives more information about DEFERRED SEGMENT CREATION:
http://docs.oracle.com/cd/E14072_01/server.112/e10595/tables002.htm#CHDGJAGB
With the DEFERRED SEGMENT CREATION feature, a Segment is created automatically at the first insert on the Table. So the Tables for which the error ORA-14223 occurs are empty.
You may also create them separately by using a script generated with the SQLFILE parameter, in which you change the option SEGMENT CREATION DEFERRED to SEGMENT CREATION IMMEDIATE.
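As for the impdp command originally asked for, a sketch that redirects the schema's segments into the encrypted tablespace via REMAP_TABLESPACE (names taken from the question; it assumes the PRTDB1 objects on the target have been dropped or are replaceable first):

```shell
impdp system/<pass> schemas=PRTDB1 directory=datapump dumpfile=data.dmp \
      logfile=imp_log1.log remap_tablespace=PRTDTS:PRTDTS_ENC
```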
Hope this helps.
Best regards,
Jean-Valentin -
Data Pump - expdp / impdp utility question
Hi All,
As a basic exercise to learn Data pump, I am trying to export schema scott and then want to load the dump file into a test_t schema in the same database.
--1. created dir object from sysdba
CREATE DIRECTORY dpump_dir1 AS 'C:\output_dir';
--2. created the directory on the OS as C:\output_dir
--3. run expdp
expdp system/*****@orcl schemas=scott DIRECTORY=dpump_dir1 JOB_NAME=hr DUMPFILE=scott_orcl_nov5.dmp PARALLEL=4
--4. create test_t schema and grant dba to it.
--5. run impdp
impdp system/*****@orcl schemas=test_t DIRECTORY=dpump_dir1 JOB_NAME=hr DUMPFILE=scott_orcl_nov5.dmp PARALLEL=8
It fails here with ORA-39165: schema TEST_T was not found. However, the schema test_t does exist.
So I am not sure why it gives this error. It seems that the Scott schema in the expdp dump file cannot be loaded into any schema other than one named scott. Is that right? If yes, how can I load all the objects of schema scott into another schema, say test_t? It would be helpful if you could show the respective expdp and impdp commands.
Thanks a lot
KS
The test_t schema does not exist in the export dump file; you should remap from the input scott schema to the target test_t schema.
REMAP_SCHEMA : http://download.oracle.com/docs/cd/E11882_01/server.112/e22490/dp_import.htm#SUTIL927
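Concretely, the fix is to drop SCHEMAS=test_t and remap instead - a sketch using the file names from the question:

```shell
impdp system/*****@orcl DIRECTORY=dpump_dir1 DUMPFILE=scott_orcl_nov5.dmp \
      REMAP_SCHEMA=scott:test_t LOGFILE=imp_test_t.log
```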
Nicolas. -
Hello,
i would like to use expdp and impdp.
As i installed XE11 on Linux, i unlocked the HR account:
ALTER USER hr ACCOUNT UNLOCK IDENTIFIED BY hr;
and use the expdp:
expdp hr/hr DUMPFILE=hrdump.dmp DIRECTORY=DATA_PUMP_DIR SCHEMAS=HR
LOGFILE=hrdump.log
This quits with:
ORA-39006: internal error
ORA-39213: Metadata processing is not available
The alert_XE.log reported:
ORA-12012: error on auto execute of job "SYS"."BSLN_MAINTAIN_STATS_JOB"
ORA-06550: line 1, column 807:
PLS-00201: identifier 'DBSNMP.BSLN_INTERNAL' must be declared
I read some entries here and did:
sqlplus sys/******* as sysdba @?/rdbms/admin/catnsnmp.sql
sqlplus sys/******* as sysdba @?/rdbms/admin/catsnmp.sql
sqlplus sys/******* as sysdba @?/rdbms/admin/catdpb.sql
sqlplus sys/******* as sysdba @?/rdbms/admin/utlrp.sql
I restarted the database, but the result of expdp was the same:
ORA-39006: internal error
ORA-39213: Metadata processing is not available
What's wrong with that? What can I do?
Do I need "BSLN_MAINTAIN_STATS_JOB" or can this be set to FALSE?
I created the database today on 24.07., and the next run for "BSLN_MAINTAIN_STATS_JOB" is on 29.07.?
In the Windows version it is working correctly, but not in the Linux version.
Best regards
Hello gentlemen,
back to the origin:
'Is expdp/impdp working on XE11'
The answer is simply yes.
After a few days I found out that:
- no installed stylesheets are required for this operation
- a simple installation is enough
And I did:
SHELL:
mkdir /u01/app > /dev/null 2>&1
mkdir /u01/app/oracle > /dev/null 2>&1
groupadd dba
useradd -g dba -d /u01/app/oracle oracle > /dev/null 2>&1
chown -R oracle:dba /u01/app/oracle
rpm -ivh oracle-xe-11.2.0-1.0.x86_64.rpm
/etc/init.d/./oracle-xe configure responseFile=xe.rsp
./sqlplus sys/********* as sysdba @/u01/app/oracle/product/11.2.0/xe/rdbms/admin/utlfile.sql
SQLPLUS:
ALTER USER hr IDENTIFIED BY hr ACCOUNT UNLOCK;
GRANT CONNECT, RESOURCE to hr;
GRANT read, write on DIRECTORY DATA_PUMP_DIR TO hr;
expdp hr/hr dumpfile=hr.dmp directory=DATA_PUMP_DIR schemas=hr logfile=hr_exp.log
impdp hr/hr dumpfile=hr.dmp directory=DATA_PUMP_DIR schemas=hr logfile=hr_imp.log
This was carried out on:
OEL5.8, OEL6.3, openSUSE 11.4
For explanation:
We did the stylesheet installation on XE10 to get the expdp/impdp functionality.
Thanks for your assistance
Best regards
Achim
Edited by: oelk on 16.08.2012 10:20 -
Error using expdp on Oracle XE
Hi,
When I try and do an export in Oracle XE I get the following error:
D:\>expdp SYSTEM/vodafone SCHEMAS=XE_DATA DIRECTORY=dmpdir DUMPFILE=xedata.dmp
Export: Release 10.2.0.1.0 - Production on Thursday, 29 June, 2006 10:47:10
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Express Edition Release 10.2.0.1.0 - Production
ORA-39002: invalid operation
ORA-39070: Unable to open the log file.
ORA-29283: invalid file operation
ORA-06512: at "SYS.UTL_FILE", line 475
ORA-29283: invalid file operation
Any ideas??
/John
Hi guys,
The problem was that Oracle didn't have access to the directory I created. So I created a directory in the Oracle home directory and it worked.
Thanks,
John -
Expdp/impdp :: Constraints in Parent child relationship
Hi ,
I have one table, parent1, and tables child1, child2, and child3 have foreign keys referencing parent1.
Now I want to do some deletion on parent1. But since the number of records in parent1 is very high, we are going with expdp/impdp with the QUERY option.
I have taken a query-level expdp of parent1. I then dropped parent1 with the CASCADE CONSTRAINTS option, and all the foreign keys on child1, 2, and 3 that referenced parent1 were automatically dropped.
Now, if I run impdp with the query-level dump file, will these foreign key constraints get created automatically on child1, 2, and 3, or do I need to re-create them manually?
Regards,
Anu
Hi,
The FKs will not be in the dump file - see the example code below, where I generate a SQLFILE following pretty much the process you would have used. This is because the FK is part of the DDL for the child table, not the parent.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning option
OPS$ORACLE@EMZA3>create table a (col1 number);
Table created.
OPS$ORACLE@EMZA3>alter table a add primary key (col1);
Table altered.
OPS$ORACLE@EMZA3>create table b (col1 number);
Table created.
OPS$ORACLE@EMZA3>alter table b add constraint x foreign key (col1) references a(col1);
Table altered.
OPS$ORACLE@EMZA3>
EMZA3:[/oracle/11.2.0.1.2.DB/bin]# expdp / include=TABLE:\"=\'A\'\"
Export: Release 11.2.0.3.0 - Production on Fri May 17 15:45:50 2013
Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning option
Starting "OPS$ORACLE"."SYS_EXPORT_SCHEMA_04": /******** include=TABLE:"='A'"
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 0 KB
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
. . exported "OPS$ORACLE"."A" 0 KB 0 rows
Master table "OPS$ORACLE"."SYS_EXPORT_SCHEMA_04" successfully loaded/unloaded
Dump file set for OPS$ORACLE.SYS_EXPORT_SCHEMA_04 is:
/oracle/11.2.0.3.0.DB/rdbms/log/expdat.dmp
Job "OPS$ORACLE"."SYS_EXPORT_SCHEMA_04" successfully completed at 15:45:58
Import: Release 11.2.0.3.0 - Production on Fri May 17 15:46:16 2013
Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning option
Master table "OPS$ORACLE"."SYS_SQL_FILE_FULL_01" successfully loaded/unloaded
Starting "OPS$ORACLE"."SYS_SQL_FILE_FULL_01": /******** sqlfile=a.sql
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Job "OPS$ORACLE"."SYS_SQL_FILE_FULL_01" successfully completed at 15:46:17
-- CONNECT OPS$ORACLE
ALTER SESSION SET EVENTS '10150 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10904 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '25475 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10407 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10851 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '22830 TRACE NAME CONTEXT FOREVER, LEVEL 192 ';
-- new object type path: SCHEMA_EXPORT/TABLE/TABLE
CREATE TABLE "OPS$ORACLE"."A"
( "COL1" NUMBER
) SEGMENT CREATION IMMEDIATE
PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
NOCOMPRESS LOGGING
STORAGE(INITIAL 16384 NEXT 16384 MINEXTENTS 1 MAXEXTENTS 505
PCTINCREASE 50 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "SYSTEM" ;
-- new object type path: SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
ALTER TABLE "OPS$ORACLE"."A" ADD PRIMARY KEY ("COL1")
USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE(INITIAL 16384 NEXT 16384 MINEXTENTS 1 MAXEXTENTS 505
PCTINCREASE 50 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "SYSTEM" ENABLE;
-- new object type path: SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
DECLARE I_N VARCHAR2(60);
I_O VARCHAR2(60);
NV VARCHAR2(1);
c DBMS_METADATA.T_VAR_COLL;
df varchar2(21) := 'YYYY-MM-DD:HH24:MI:SS';
stmt varchar2(300) := ' INSERT INTO "SYS"."IMPDP_STATS" (type,version,flags,c1,c2,c3,c5,n1,n2,n3,n4,n5,n6,n7,n8,n9,n10,n11,n12,d1,cl1) VALUES (''I'',6,:1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11,:12,:13,NULL,:14,:15,NULL,:16,:17)';
BEGIN
DELETE FROM "SYS"."IMPDP_STATS";
c(1) := 'COL1';
DBMS_METADATA.GET_STAT_INDNAME('OPS$ORACLE','A',c,1,i_o,i_n);
EXECUTE IMMEDIATE stmt USING 0,I_N,NV,NV,I_O,0,0,0,0,0,0,0,0,NV,NV,TO_DATE('2013-05-17 15:43:24',df),NV;
DBMS_STATS.IMPORT_INDEX_STATS('"' || i_o || '"','"' || i_n || '"',NULL,'"IMPDP_STATS"',NULL,'"SYS"');
DELETE FROM "SYS"."IMPDP_STATS";
END;
/
Regards,
Harry
http://dbaharrison.blogspot.com/ -
System generated Index names different on target database after expdp/impdp
After performing expdp/impdp to move data from one database (A) to another (B), the system-generated indexes have different names on the target database, which caused a major issue with GoldenGate. Could anyone provide any tricks on how to perform the expdp/impdp so that the system-generated index names are the same on both source and target?
Thanks in advance.
JL
While I do not agree with Sb's choice of wording, his solution is correct. I suggest you drop and recreate the objects using explicit naming; then you will get the same names on the target database after import for constraints, indexes, and FKs.
A detailed description of the problem this caused with GoldenGate would be interesting. The full Oracle and GoldenGate versions in use might also be important to the solution, if one exists other than explicit naming.
HTH -- Mark D Powell --
Edited by: Mark D Powell on May 30, 2012 12:26 PM -
Expdp+Impdp: Does the user have to have DBA privilege?
Is a "normal" user (=Without DBA privilege) allowed to export and import (with new expdp/impdp) his own schema?
Is a "normal" user (=Without DBA privilege) allowed to export and import (with new expdp/impdp) other (=not his own) schemas ?
If he is not allowed: Which GRANT is necessary to be able to perform such expdp/impdp operations?
Peter
Edited by: user559463 on Feb 28, 2010 7:49 AM
Hello,
Is a "normal" user (= without DBA privilege) allowed to export and import (with the new expdp/impdp) his own schema? Yes, a user can always export his own objects.
Is a "normal" user (= without DBA privilege) allowed to export and import (with the new expdp/impdp) other (= not his own) schemas? Yes, if this user has the EXP_FULL_DATABASE and IMP_FULL_DATABASE roles.
So, you can create a User and GRANT it EXP_FULL_DATABASE and IMP_FULL_DATABASE Roles and, being connected
to this User, you could export/import any Object from / to any Schemas.
On databases, on which there're a lot of export/import operations, I always create a special User with these Roles.
NB: with Data Pump you should also grant READ and WRITE privileges on the DIRECTORY object to the user (if you use a dump file).
Also, be accurate in your choice of words: as previously posted, DBA is a role, not a privilege, which has another meaning.
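Putting the reply together, a minimal setup sketch (the user name, password, and path are illustrative; run as a DBA):

```sql
CREATE USER dp_admin IDENTIFIED BY some_password;
GRANT CREATE SESSION, EXP_FULL_DATABASE, IMP_FULL_DATABASE TO dp_admin;
-- Directory object for the dump and log files
CREATE OR REPLACE DIRECTORY dp_dir AS '/u01/app/oracle/dpdump';
GRANT READ, WRITE ON DIRECTORY dp_dir TO dp_admin;
```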
Hope this helps.
Best regards,
Jean-Valentin -
Transfer db from windows to linux (expdp / impdp)
I am trying to copy my database schema from a Windows XP system to a Linux server (RHEL4).
I have Oracle 10g XE installed on both systems (Release 10.2.0.1.0). NLS_LANG is set to AMERICAN_AMERICA.AL32UTF8 on both systems.
As a test, I tried copying the HR schema to the HRDEV schema as described here: http://download-west.oracle.com/docs/cd/B25329_01/doc/admin.102/b25107/impexp.htm#sthref408
It worked on the Windows machine.
On the Linux machine, here is the command and error log:
[root@mydomain tmp]# expdp SYSTEM/******** SCHEMAS=hr DIRECTORY=dmpdir DUMPFILE=schema.dmp LOGFILE=expschema.log
Export: Release 10.2.0.1.0 - Production on Wednesday, 27 June, 2007 16:19:03
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Express Edition Release 10.2.0.1.0 - Production
ORA-39002: invalid operation
ORA-39070: Unable to open the log file.
ORA-29283: invalid file operation
ORA-06512: at "SYS.UTL_FILE", line 475
ORA-29283: invalid file operation
Any ideas? I am learning Linux and am not very advanced yet :--)
Thanks!
Here are two more attempts as the "oracle" user in Linux. In the second attempt, I set NOLOGFILE=y since I thought there might be a problem with the log file permission.
-bash-3.00$ sqlplus / as sysdba
SQL*Plus: Release 10.2.0.1.0 - Production on Wed Jun 27 19:38:00 2007
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to:
Oracle Database 10g Express Edition Release 10.2.0.1.0 - Production
SQL> select directory_name, directory_path from dba_directories;
DIRECTORY_NAME
DIRECTORY_PATH
DATA_PUMP_DIR
/usr/lib/oracle/xe/app/oracle/admin/XE/dpdump/
DMPDIR
/usr/lib/oracle/xe/tmp
SQL> grant read,write on directory dmpdir to hr;
Grant succeeded.
SQL> exit
Disconnected from Oracle Database 10g Express Edition Release 10.2.0.1.0 - Production
-bash-3.00$ expdp SYSTEM/password SCHEMAS=hr DIRECTORY=dmpdir DUMPFILE=schema1.dmp LOGFILE=expschema1.log
Export: Release 10.2.0.1.0 - Production on Wednesday, 27 June, 2007 19:41:21
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Express Edition Release 10.2.0.1.0 - Production
ORA-39002: invalid operation
ORA-39070: Unable to open the log file.
ORA-29283: invalid file operation
ORA-06512: at "SYS.UTL_FILE", line 475
ORA-29283: invalid file operation
-bash-3.00$ expdp SYSTEM/password SCHEMAS=hr DIRECTORY=dmpdir DUMPFILE=schema1.dmp nologfile=y
Export: Release 10.2.0.1.0 - Production on Wednesday, 27 June, 2007 19:42:56
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Express Edition Release 10.2.0.1.0 - Production
ORA-39001: invalid argument value
ORA-39000: bad dump file specification
ORA-31641: unable to create dump file "/usr/lib/oracle/xe/tmp/schema1.dmp"
ORA-27040: file create error, unable to create file
Linux Error: 13: Permission denied
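Linux Error 13 means the OS user running the database cannot write to the path the DIRECTORY object points at. A runnable sketch of the check, using a hypothetical /tmp path in place of /usr/lib/oracle/xe/tmp:

```shell
# Stand-in for the DIRECTORY object's path; on the real server this would be
# /usr/lib/oracle/xe/tmp and the owner would be the oracle OS user
DPDIR=/tmp/dp_demo
mkdir -p "$DPDIR"
# On the real server: chown oracle:dba "$DPDIR"
chmod 770 "$DPDIR"
# Simulate the server process creating the dump file
if touch "$DPDIR/schema1.dmp"; then
    echo "directory is writable"
fi
```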
-bash-3.00$ -
[ETL] TTS vs expdp/impdp vs ctas (dblink)
Hi, all.
The database is oracle 10gR2 on a unix machine.
Assuming that the DB size is about 1 terabyte (table: 500 GB, index: 500 GB),
how much faster is TTS (transportable tablespace) than expdp/impdp, and than CTAS (over a dblink)?
As you know, the speed of ETL depends on the hardware capacity (IO capacity, network bandwidth, number of CPUs).
I just would like to hear a general guide from your experience.
Thanks in advance.
Best Regards.
http://docs.oracle.com/cd/B19306_01/server.102/b14231/tspaces.htm#ADMIN01101
Moving data using transportable tablespaces is much faster than performing either an export/import or unload/load of the same data. This is because the datafiles containing all of the actual data are just copied to the destination location, and you use an export/import utility to transfer only the metadata of the tablespace objects to the new database.
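The flow the quote describes can be sketched as follows (the tablespace, directory, and datafile names are illustrative, not from the thread):

```shell
# 1. On the source: make the tablespace read-only
#    SQL> ALTER TABLESPACE app_data READ ONLY;
# 2. Export only the tablespace metadata
expdp system/<pass> directory=dp_dir transport_tablespaces=app_data \
      dumpfile=tts_app_data.dmp logfile=tts_exp.log
# 3. Copy the datafiles plus the dump file to the target host
# 4. On the target: plug the tablespace in
impdp system/<pass> directory=dp_dir dumpfile=tts_app_data.dmp \
      transport_datafiles='/u01/oradata/app_data01.dbf' logfile=tts_imp.log
# 5. SQL> ALTER TABLESPACE app_data READ WRITE;   -- on both sides as needed
```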
If you really want to know "how much faster" you're going to have to benchmark. Lots of variables come in to play so best to determine this in your actual environment.
Cheers, -
EXP/IMP..of table having LOB column to export and import using expdp/impdp
We have one table with a LOB column, and the LOB size is now approximately 550 GB.
As per our knowledge, LOB space cannot be reused, so we have already raised an SR for that.
We have come to the conclusion that we need to take a backup of this table, truncate the table, and then import it back.
We need help on the points below.
1) We are taking the backup with expdp using PARALLEL=4. Will this backup complete successfully? Are there any other parameters we need to set in expdp while taking the backup?
2) Once the truncate is done, will the import complete successfully?
Do we need to increase SGA, PGA, undo tablespace size, or undo retention to complete the import successfully? This is a production-critical database.
current SGA 2GB
PGA 398MB
undo retention 1800
undo tbs 6GB
Please suggest how to perform this activity without errors, and also which parameters to use during expdp/impdp.
Thanks in advance.
Hi,
From my experience, be prepared for a long outage to do this - expdp is pretty quick at getting LOBs out but very slow at getting them back in again - a lot of the speed optimizations that make Data Pump so excellent for normal objects are not available for LOBs. You really need to test this somewhere first - can you not expdp from live and load into some test area? You don't need the whole database, just the table/LOB in question. You don't want to find out after you truncate the table that it's going to take 3 days to import back in....
You might want to consider DBMS_REDEFINITION instead?
Here you precreate a temporary table (with the same definition as the existing one), load the data into it from the existing table, and then do a dictionary switch to swap them over - giving you minimal downtime. I think this should work fine with LOBs on 10g, but you should do some research and confirm. You'll need a lot of extra tablespace (temporarily) for this approach though.
Regards,
Harry -
Log file's format in expdp\impdp
Hi all,
I need to set the log file format for the expdp/impdp utilities. I have this format for my dump file - filename=<name>%U.dmp - which generates unique names for dump files. How can I generate unique names for log files? It would be best if the dump file name and the log file names were the same.
Regards,
rustam_tj
Hi Srini, thanks for the advice.
I read the doc you suggested. The only thing I found there is:
Log files and SQL files overwrite previously existing files.
So I can't keep previous log files?
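As far as I know there is no %U-style substitution for LOGFILE, so each run overwrites the previous log. A common workaround is to build a unique name in the shell and pass it to both DUMPFILE and LOGFILE - a runnable sketch (the schema and directory names are placeholders):

```shell
# Timestamped basename so each run keeps its own dump and log files
STAMP=$(date +%Y%m%d_%H%M%S)
BASE="myschema_${STAMP}"
# Print the command that would be run (drop the echo on a real system)
echo "expdp system/<pass> schemas=myschema directory=dp_dir \
dumpfile=${BASE}_%U.dmp logfile=${BASE}.log"
```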
My OS is HP-UX (11.3) and database version is 10.2.0.4
Regards,
rustam -
Bug report: Oracle schema parser/regex compiler failing valid xs:patterns
Hello,
The Oracle schema regex compiler does not allow valid patterns with more than a certain number of branches. An XSDException is thrown, namely 'invalid facet 'pattern' in element 'simpleType'.
I have also tried using multiple <xs:pattern> elements, which also conforms to the official schema specification, but no luck there either.
Here are my sample regexs which validate correctly against all branch examples in XML Spy 4.0:
<!-- as single multi branchpattern-->
<xs:pattern value="aqua|black|blue|fuchsia|gray|green|lime|maroon|navy|olive|purple|red|silver|teal|white|yellow|[A-Za-z]{1,2}[0-9]{2,4}|rgb\([0-9]{1,3}%{0,1},[0-9]{1,3}%{0,1},[0-9]{1,3}%{0,1}\)"/>
<!-- as multiple patterns-->
<xs:pattern value="aqua|black|blue|fuchsia|gray|green|lime|maroon"/>
<xs:pattern value="navy|olive|purple|red|silver|teal|white|yellow"/>
<xs:pattern value="[A-Za-z]{1,2}[0-9]{2,4}"/>
<xs:pattern value="rgb\([0-9]{1,3}%{0,1},[0-9]{1,3}%{0,1},[0-9]{1,3}%{0,1}\)"/>
(Note that the Oracle regex compiler will allow any one of these patterns, it is only when the compiler concatenates them as per the spec that the exception is thrown.)
Here are the W3 specifications for patterns/schema regex, which impose no limit on pattern branches whether they are part of one or multiple xs:pattern 'value' attributes:
(http://www.w3.org/TR/xmlschema-2/#regexs)
[Definition:] A regular expression is composed from zero or more ·branch·es, separated by | characters.
[Definition:] A branch consists of zero or more ·piece·s, concatenated together.
(http://www.w3.org/TR/2001/REC-xmlschema-2-20010502/datatypes.html#rf-pattern)
Schema Representation Constraint: Multiple patterns
If multiple <pattern> element information items appear as [children] of a <simpleType>, the [value]s should be combined as if they appeared in a single ·regular expression· as separate ·branch·es.
NOTE: It is a consequence of the schema representation constraint Multiple patterns (§4.3.4.3) and of the rules for ·restriction· that ·pattern· facets specified on the same step in a type derivation are ORed together, while ·pattern· facets specified on different steps of a type derivation are ANDed together.
Thus, to impose two ·pattern· constraints simultaneously, schema authors may either write a single ·pattern· which expresses the intersection of the two ·pattern·s they wish to impose, or define each ·pattern· on a separate type derivation step.
Many thanks
Kevin Smith