A file exported with expdp on Oracle 10g R2 raises errors when imported with impdp.
To, All Oracle DBAs
On an IBM AIX server, I exported the entire BIS schema with expdp using the following par file:
directory=bis
dumpfile=bis.dmp
logfile=bis.log
schemas=bis
CONTENT=ALL
Then, on Windows 7, I ran impdp with the imp.par file below:
directory=bis
dumpfile=bis.dmp
logfile=imp_bis.log
But the following errors appeared:
D:\bis> impdp bis/bis parfile=imp.par
Import: Release 10.2.0.3.0 - Production on Friday, 02 April, 2010 23:51:14
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Production
With the Partitioning, OLAP and Data Mining options
ORA-39002: invalid operation
ORA-31694: master table "BIS"."SYS_IMPORT_FULL_01" failed to load/unload
ORA-31640: unable to open dump file "d:\bis\bis.dmp" for read
ORA-19505: failed to identify file "d:\bis\bis.dmp"
ORA-27046: file size is not a multiple of logical block size
OSD-04012: file size mismatch (OS 1697225879)
Googling has not turned up the cause easily...
Edited by: Korean_Tramper
It turned out the problem was that the dump file had been downloaded over FTP in ASCII mode.
After transferring it again from the IBM server in FTP BINARY mode, the import succeeded...
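As a quick sanity check before re-running an import, you can test for this kind of corruption yourself: a dump file damaged by an ASCII-mode FTP transfer typically no longer has a size that is an exact multiple of the database's logical block size (4096 below is an assumed value; substitute yours). A minimal sketch:

```shell
# check_dump: warn if a dump file's size is not a multiple of the
# (assumed) logical block size - a symptom of an ASCII-mode transfer
BLOCK_SIZE=4096
check_dump() {
  local size
  size=$(wc -c < "$1")
  if [ $((size % BLOCK_SIZE)) -ne 0 ]; then
    echo "$1: $size bytes is not a multiple of $BLOCK_SIZE (likely corrupted in transfer)"
  else
    echo "$1: size OK"
  fi
}
```

If the check fails, re-transfer the file in binary mode (the `binary` command in a classic ftp client) instead of retrying the import.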
Similar Messages
-
EXP/IMP of a table having a LOB column: export and import using expdp/impdp
We have one table with a LOB column; the LOB segment is approx 550 GB.
As per our knowledge, LOB space cannot be reused, so we have already raised an SR on that.
We have come to the conclusion that we need to take a backup of this table, then truncate it, and then start the import.
We need help on the points below.
1) We are taking the backup with expdp using the PARALLEL parameter = 4. Will this backup complete successfully? Are there any other parameters we need to set in expdp while taking the backup?
2) Once the truncate is done, will the import complete successfully?
Do we need to increase the SGA, PGA, undo tablespace size, or undo retention to complete the import successfully? This is a production-critical database.
current SGA 2GB
PGA 398MB
undo retention 1800
undo tbs 6GB
Please can anyone give suggestions on how to perform this activity without errors, and also suggest the parameters to use during expdp/impdp.
Thanks in advance.
Hi,
From my experience, be prepared for a long outage to do this - expdp is pretty quick at getting LOBs out but very slow at getting them back in again; a lot of the speed optimizations that make Data Pump so excellent for normal objects are not available for LOBs. You really need to test this somewhere first - can you not expdp from live and load into some test area? You don't need the whole database, just the table/LOB in question. You don't want to find out after you truncate the table that it is going to take 3 days to import back in....
You might want to consider DBMS_REDEFINITION instead?
Here you precreate a temporary table (with the same definition as the existing one), load the data into it from the existing table, and then do a dictionary switch to swap them over - giving you minimal downtime. I think this should work fine with LOBs at 10g, but you should do some research to confirm. You'll need a lot of extra tablespace (temporarily) for this approach, though.
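For reference, the online redefinition flow described above looks roughly like this (a sketch only: the schema, table, and interim names are hypothetical, and each call should be checked against the DBMS_REDEFINITION documentation for your release):

```sql
-- 1. check the table can be redefined (by primary key here)
EXEC DBMS_REDEFINITION.CAN_REDEF_TABLE('APP', 'BIG_LOB_TABLE');

-- 2. precreate the interim table with the desired LOB storage
--    (same columns as the original; storage clauses elided)
CREATE TABLE app.big_lob_table_interim AS
  SELECT * FROM app.big_lob_table WHERE 1 = 0;

-- 3. start the redefinition (copies the data across)
EXEC DBMS_REDEFINITION.START_REDEF_TABLE('APP', 'BIG_LOB_TABLE', 'BIG_LOB_TABLE_INTERIM');

-- 4. copy indexes, constraints, triggers, grants
--    (DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS in 10g and later)

-- 5. swap the tables in the dictionary (only a brief lock)
EXEC DBMS_REDEFINITION.FINISH_REDEF_TABLE('APP', 'BIG_LOB_TABLE', 'BIG_LOB_TABLE_INTERIM');
```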
Regards,
Harry -
Expdp & impdp of R12 schema into 11g DB
Hi,
I need to take a backup of the GL schema from an R12 instance using expdp and
import it into an 11g standalone DB using impdp.
I used the following for expdp in R12 (10g DB):
$expdp system/<pass> schemas=gl directory=dump_dir dumpfile=gl.dmp logfile=gl.log
The export was successful.
How do I import it into an 11g DB? (This is on a different machine.)
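For the import side, the rough shape would be the following (a sketch only: the directory path, password, and log file name are placeholders, and gl.dmp must first be copied to the target machine):

```text
-- on the 11g target, as a privileged user:
CREATE DIRECTORY dump_dir AS '/u01/dumps';
GRANT READ, WRITE ON DIRECTORY dump_dir TO system;

-- then from the shell on the target machine:
impdp system/<pass> schemas=gl directory=dump_dir dumpfile=gl.dmp logfile=gl_imp.log
```

If the schema should land under a different name on the target, REMAP_SCHEMA=gl:<target_schema> can be added to the impdp command.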
regards,
Charan
Refer to the "Oracle® Database Utilities 11g Release 1 (11.1)" manual.
Data Pump Import
http://download.oracle.com/docs/cd/B28359_01/server.111/b28319/dp_import.htm#i1007653
Oracle® Database Utilities 11g Release 1 (11.1)
http://download.oracle.com/docs/cd/B28359_01/server.111/b28319/toc.htm -
RAC 11GR2 + expdp/impdp
Hi all,
I have installed RAC 11gR2 on Linux (Red Hat).
I have to migrate data from a 10.1 database to the 11.2 database.
Could you tell me which versions of expdp and impdp I should use?
Thanks.
You use the 10g Data Pump expdp on the 10g DB (source) and the 11g impdp on the 11g DB (target).
Kind regards
Uwe Hesse
http://uhesse.wordpress.com -
Log file format in expdp/impdp
Hi all,
I need to set the log file format for the expdp/impdp utilities. For my dump files I use filename=<name>%U.dmp, which generates unique names. How can I generate unique names for log files? Ideally the dump file and log file names would match.
Regards,
rustam_tj
Hi Srini, thanks for the advice.
I read the doc you suggested. The only thing I found there is:
Log files and SQL files overwrite previously existing files.
So I can't keep previous log files?
My OS is HP-UX (11.3) and database version is 10.2.0.4
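Data Pump expands the %U substitution only in DUMPFILE, not in LOGFILE, so one workaround (a sketch, not from the thread) is to build a unique timestamped stem in the shell and use it for both names:

```shell
# build matching dump/log file names from a timestamp stem
STEM="exp_$(date +%Y%m%d_%H%M%S)"
DUMPFILE="${STEM}_%U.dmp"
LOGFILE="${STEM}.log"
echo "dumpfile=${DUMPFILE}"
echo "logfile=${LOGFILE}"
# then pass them on: expdp ... dumpfile="$DUMPFILE" logfile="$LOGFILE"
```

This keeps each run's log instead of overwriting the previous one, and the shared stem makes the dump/log pairing obvious.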
Regards,
rustam -
Expdp/impdp :: Constraints in Parent child relationship
Hi ,
I have one table, parent1, and tables child1, child2, and child3 have foreign keys created against parent1.
Now I want to do some deletion on parent1. But since the number of records in parent1 is very high, we are going with expdp/impdp with the QUERY option.
I have taken a query-level expdp of parent1. I then dropped parent1 with the CASCADE CONSTRAINTS option, and all the foreign keys on child1, 2, and 3 that reference parent1 were automatically dropped.
Now, if I run impdp with the query-level dump file, will these foreign key constraints be created automatically on child1, 2, and 3, or do I need to re-create them manually?
Regards,
Anu
Hi,
The FKs will not be in the dump file - see the example code below, where I generate a SQL file following pretty much the process you would have done. This is because the FK is part of the DDL for the child table, not the parent.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning option
OPS$ORACLE@EMZA3>create table a (col1 number);
Table created.
OPS$ORACLE@EMZA3>alter table a add primary key (col1);
Table altered.
OPS$ORACLE@EMZA3>create table b (col1 number);
Table created.
OPS$ORACLE@EMZA3>alter table b add constraint x foreign key (col1) references a(col1);
Table altered.
OPS$ORACLE@EMZA3>
EMZA3:[/oracle/11.2.0.1.2.DB/bin]# expdp / include=TABLE:\"=\'A\'\"
Export: Release 11.2.0.3.0 - Production on Fri May 17 15:45:50 2013
Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning option
Starting "OPS$ORACLE"."SYS_EXPORT_SCHEMA_04": /******** include=TABLE:"='A'"
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 0 KB
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
. . exported "OPS$ORACLE"."A" 0 KB 0 rows
Master table "OPS$ORACLE"."SYS_EXPORT_SCHEMA_04" successfully loaded/unloaded
Dump file set for OPS$ORACLE.SYS_EXPORT_SCHEMA_04 is:
/oracle/11.2.0.3.0.DB/rdbms/log/expdat.dmp
Job "OPS$ORACLE"."SYS_EXPORT_SCHEMA_04" successfully completed at 15:45:58
Import: Release 11.2.0.3.0 - Production on Fri May 17 15:46:16 2013
Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning option
Master table "OPS$ORACLE"."SYS_SQL_FILE_FULL_01" successfully loaded/unloaded
Starting "OPS$ORACLE"."SYS_SQL_FILE_FULL_01": /******** sqlfile=a.sql
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Job "OPS$ORACLE"."SYS_SQL_FILE_FULL_01" successfully completed at 15:46:17
-- CONNECT OPS$ORACLE
ALTER SESSION SET EVENTS '10150 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10904 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '25475 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10407 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10851 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '22830 TRACE NAME CONTEXT FOREVER, LEVEL 192 ';
-- new object type path: SCHEMA_EXPORT/TABLE/TABLE
CREATE TABLE "OPS$ORACLE"."A"
( "COL1" NUMBER
) SEGMENT CREATION IMMEDIATE
PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
NOCOMPRESS LOGGING
STORAGE(INITIAL 16384 NEXT 16384 MINEXTENTS 1 MAXEXTENTS 505
PCTINCREASE 50 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "SYSTEM" ;
-- new object type path: SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
ALTER TABLE "OPS$ORACLE"."A" ADD PRIMARY KEY ("COL1")
USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE(INITIAL 16384 NEXT 16384 MINEXTENTS 1 MAXEXTENTS 505
PCTINCREASE 50 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "SYSTEM" ENABLE;
-- new object type path: SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
DECLARE I_N VARCHAR2(60);
I_O VARCHAR2(60);
NV VARCHAR2(1);
c DBMS_METADATA.T_VAR_COLL;
df varchar2(21) := 'YYYY-MM-DD:HH24:MI:SS';
stmt varchar2(300) := ' INSERT INTO "SYS"."IMPDP_STATS" (type,version,flags,c1,c2,c3,c5,n1,n2,n3,n4,n5,n6,n7,n8,n9,n10,n11,n12,d1,cl1) VALUES (''I'',6,:1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11,:12,:13,NULL,:14,:15,NULL,:16,:17)';
BEGIN
DELETE FROM "SYS"."IMPDP_STATS";
c(1) := 'COL1';
DBMS_METADATA.GET_STAT_INDNAME('OPS$ORACLE','A',c,1,i_o,i_n);
EXECUTE IMMEDIATE stmt USING 0,I_N,NV,NV,I_O,0,0,0,0,0,0,0,0,NV,NV,TO_DATE('2013-05-17 15:43:24',df),NV;
DBMS_STATS.IMPORT_INDEX_STATS('"' || i_o || '"','"' || i_n || '"',NULL,'"IMPDP_STATS"',NULL,'"SYS"');
DELETE FROM "SYS"."IMPDP_STATS";
END;
/
Regards,
Harry
http://dbaharrison.blogspot.com/ -
System generated Index names different on target database after expdp/impdp
After performing expdp/impdp to move data from one database (A) to another (B), the system-generated index names are different on the target database, which caused a major issue with GoldenGate. Could anyone provide any tricks on how to perform the expdp/impdp so that the same system-generated index names appear on both source and target?
Thanks in advance.
JL
While I do not agree with Sb's choice of wording, his solution is correct. I suggest you drop and recreate the objects using explicit naming; then you will get the same names on the target database after import for constraints, indexes, and FKs.
A detailed description of the problem this caused with GoldenGate would be interesting. The full Oracle and GoldenGate versions in use might also be important to the solution, if one exists other than explicit naming.
HTH -- Mark D Powell --
Edited by: Mark D Powell on May 30, 2012 12:26 PM -
Expdp+Impdp: Does the user have to have DBA privilege?
Is a "normal" user (=Without DBA privilege) allowed to export and import (with new expdp/impdp) his own schema?
Is a "normal" user (=Without DBA privilege) allowed to export and import (with new expdp/impdp) other (=not his own) schemas ?
If he is not allowed: Which GRANT is necessary to be able to perform such expdp/impdp operations?
Peter
Edited by: user559463 on Feb 28, 2010 7:49 AM
Hello,
Is a "normal" user (without the DBA privilege) allowed to export and import (with the new expdp/impdp) his own schema?
Yes, a user can always export his own objects.
Is a "normal" user (without the DBA privilege) allowed to export and import (with the new expdp/impdp) other (not his own) schemas?
Yes, if this user has the EXP_FULL_DATABASE and IMP_FULL_DATABASE roles.
So, you can create a user, grant it the EXP_FULL_DATABASE and IMP_FULL_DATABASE roles, and, connected
as this user, export/import any object from/to any schema.
On databases with a lot of export/import operations, I always create a special user with these roles.
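Put together, the setup described here might look like the following (a sketch: the user name, password, and directory name are examples, not from the thread):

```sql
-- dedicated export/import user (name and password are placeholders)
CREATE USER dp_operator IDENTIFIED BY change_me;
GRANT CREATE SESSION TO dp_operator;
GRANT EXP_FULL_DATABASE, IMP_FULL_DATABASE TO dp_operator;
-- Data Pump also needs access to the directory object holding the dump
GRANT READ, WRITE ON DIRECTORY data_pump_dir TO dp_operator;
```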
NB: With Data Pump you should also GRANT READ, WRITE privileges on the DIRECTORY (if you use a dump file) to the user.
Also, be accurate in your choice of words: as previously posted, DBA is a role, not a privilege, which has a different meaning.
Hope this helps.
Best regards,
Jean-Valentin -
Hello,
I would like to use expdp and impdp.
Having installed XE11 on Linux, I unlocked the HR account:
ALTER USER hr ACCOUNT UNLOCK IDENTIFIED BY hr;
and ran expdp:
expdp hr/hr DUMPFILE=hrdump.dmp DIRECTORY=DATA_PUMP_DIR SCHEMAS=HR
LOGFILE=hrdump.log
This quits with:
ORA-39006: internal error
ORA-39213: Metadata processing is not available
The alert_XE.log reported:
ORA-12012: error on auto execute of job "SYS"."BSLN_MAINTAIN_STATS_JOB"
ORA-06550: line 1, column 807:
PLS-00201: identifier 'DBSNMP.BSLN_INTERNAL' must be declared
I read some entries here and did:
sqlplus sys/******* as sysdba @?/rdbms/admin/catnsnmp.sql
sqlplus sys/******* as sysdba @?/rdbms/admin/catsnmp.sql
sqlplus sys/******* as sysdba @?/rdbms/admin/catdpb.sql
sqlplus sys/******* as sysdba @?/rdbms/admin/utlrp.sql
I restarted the database, but the result of expdp was the same:
ORA-39006: internal error
ORA-39213: Metadata processing is not available
What's wrong with that? What can I do?
Do I need "BSLN_MAINTAIN_STATS_JOB", or can it be set to FALSE?
I created the database today on 24.07., and the next run of "BSLN_MAINTAIN_STATS_JOB"
is on 29.07.?
In the Windows version it works correctly, but not in the Linux version.
Best regards
Hello gentlemen,
back to the origin:
'Does expdp/impdp work on XE11?'
The answer is simply yes.
After a few days I found out that:
- the stylesheet installation is not required for this operation
- a simple installation is enough
And I did:
SHELL:
mkdir /u01/app > /dev/null 2>&1
mkdir /u01/app/oracle > /dev/null 2>&1
groupadd dba
useradd -g dba -d /u01/app/oracle oracle > /dev/null 2>&1
chown -R oracle:dba /u01/app/oracle
rpm -ivh oracle-xe-11.2.0-1.0.x86_64.rpm
/etc/init.d/./oracle-xe configure responseFile=xe.rsp
./sqlplus sys/********* as sysdba @/u01/app/oracle/product/11.2.0/xe/rdbms/admin/utlfile.sql
SQLPLUS:
ALTER USER hr IDENTIFIED BY hr ACCOUNT UNLOCK;
GRANT CONNECT, RESOURCE to hr;
GRANT read, write on DIRECTORY DATA_PUMP_DIR TO hr;
expdp hr/hr dumpfile=hr.dmp directory=DATA_PUMP_DIR schemas=hr logfile=hr_exp.log
impdp hr/hr dumpfile=hr.dmp directory=DATA_PUMP_DIR schemas=hr logfile=hr_imp.log
This was carried out on:
OEL5.8, OEL6.3, openSUSE 11.4
For explanation:
For XE10 we had done the stylesheet installation to get the expdp/impdp functionality.
Thanks for your assistance
Best regards
Achim
Edited by: oelk on 16.08.2012 10:20 -
[ETL] TTS vs expdp/impdp vs ctas (dblink)
Hi, all.
The database is oracle 10gR2 on a unix machine.
Assuming that the DB size is about 1 terabyte (table: 500 GB, index: 500 GB),
how much faster is TTS (transportable tablespaces) than expdp/impdp, and than CTAS (over a dblink)?
As you know, the speed of ETL depends on hardware capacity (I/O capacity, network bandwidth, number of CPUs).
I would just like to hear general guidance from your experience.
Thanks in advance.
Best Regards.
869578 wrote:
Hi, all.
The database is oracle 10gR2 on a unix machine.
Assuming that the db size is about 1 tera bytes (table : 500 giga, index : 500 giga),
how much faster is TTS (transportable tablespace) over expdp/impdp, and over ctas (dblink) ?
As you know, the speed of etl depends on the hardware capacity. (io capacity, network bandwith, the number of cpu)
I just would like to hear general guide from your experience.
Thanks in advance.
Best Regards.
http://docs.oracle.com/cd/B19306_01/server.102/b14231/tspaces.htm#ADMIN01101
Moving data using transportable tablespaces is much faster than performing either an export/import or unload/load of the same data. This is because the datafiles containing all of the actual data are just copied to the destination location, and you use an export/import utility to transfer only the metadata of the tablespace objects to the new database.
If you really want to know "how much faster" you're going to have to benchmark. Lots of variables come in to play so best to determine this in your actual environment.
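As a rough shape of the TTS flow (a sketch: the tablespace, file, and path names are placeholders; see the manual linked above for the full procedure, including the transportability checks):

```text
-- source: make the tablespace read-only, then export only its metadata
ALTER TABLESPACE work_ts READ ONLY;
expdp system/<pass> directory=dump_dir dumpfile=tts.dmp transport_tablespaces=work_ts

-- copy the datafiles and tts.dmp to the target, then:
impdp system/<pass> directory=dump_dir dumpfile=tts.dmp transport_datafiles='/u01/oradata/work01.dbf'

-- target: make it read-write again
ALTER TABLESPACE work_ts READ WRITE;
```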
Cheers, -
expdp/impdp fails from 10g to 11g DB version
Hello folks,
Export DB Version : 10.2.0.4
Import DB Version : 11.2.0.1
Export Log File
Export: Release 10.2.0.4.0 - Production on Wednesday, 03 November, 2010 2:19:20
Copyright (c) 2003, 2007, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 45 GB
. . exported "DYM"."CYCLE_COUNT_MASTER" 39.14 GB 309618922 rows
Master table "DYM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
Dump file set for DYM.SYS_EXPORT_SCHEMA_01 is:
Job "DYM"."SYS_EXPORT_SCHEMA_01" successfully completed at 02:56:49
Import Log File
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SYSTEM"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
ORA-31693: Table data object "DYM_PRJ4"."CYCLE_COUNT_MASTER" failed to load/unload and is being skipped due to error:
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
Job "SYSTEM"."SYS_IMPORT_FULL_01" completed with 1 error(s) at 10:54:38
Is expdp from 10g to impdp on 11g not allowed? Any thoughts appreciated.
Nope, I do not see any error file.
Current log# 2 seq# 908 mem# 0:
Thu Nov 04 11:58:20 2010
DM00 started with pid=530, OS id=1659, job SYSTEM.SYS_IMPORT_FULL_02
Thu Nov 04 11:58:20 2010
DW00 started with pid=531, OS id=1661, wid=1, job SYSTEM.SYS_IMPORT_FULL_02
Thu Nov 04 11:58:55 2010
DM00 started with pid=513, OS id=1700, job SYSTEM.SYS_IMPORT_FULL_02
Thu Nov 04 11:58:55 2010
DW00 started with pid=520, OS id=1713, wid=1, job SYSTEM.SYS_IMPORT_FULL_02
Thu Nov 04 12:00:54 2010
Thread 1 cannot allocate new log, sequence 909
Private strand flush not complete
Current log# 2 seq# 908 mem# 0: ####################redo02.log
Thread 1 advanced to log sequence 909 (LGWR switch)
Current log# 3 seq# 909 mem# 0: ###################redo03.log
Thu Nov 04 12:01:51 2010
Thread 1 cannot allocate new log, sequence 910
Checkpoint not complete
Current log# 3 seq# 909 mem# 0:###################redo03.log -
Use expdp/impdp to reorganize a tablespace to remove additional datafile ?
Oracle 10g (10.2.0.1)
We had a tablespace with a single datafile, WORK1. WORK1 filled up, and a colleague added two datafiles, WORK2 and WORK3 (instead of resizing the original).
I resized WORK1, increasing it by 500 MB.
I was able to drop WORK3, but not WORK2 (ORA-03262: the file is non-empty).
My proposed solution is to expdp the tablespace, drop the tablespace and datafiles, recreate the tablespace with a correctly sized datafile and finally impdp the tablespace.
Is this solution valid ?
Any hints at syntax would be useful.
1. Map your datafile.
2. If there are no segments in the datafile, drop it and go to 6.
3. Shrink the datafile down to where the data ends.
4. Rebuild/move the last object in the data file.
5. Go to 1.
6. Done.
To map data file...
accept file_num char prompt 'File ID: ';
SET PAGESIZE 70
SET LINESIZE 132
SET NEWPAGE 0
SET VERIFY OFF
SET ECHO OFF
SET HEADING ON
SET FEEDBACK OFF
SET TERMOUT ON
COLUMN file_name FORMAT a50 HEADING 'File Name'
COLUMN owner FORMAT a10 TRUNC HEADING 'Owner'
COLUMN object FORMAT a30 TRUNC HEADING 'Object'
COLUMN obj_type FORMAT a2 HEADING ' '
COLUMN block_id FORMAT 9999999 HEADING 'Block|ID'
COLUMN blocks FORMAT 999,999 HEADING 'Blocks'
COLUMN mbytes FORMAT 9,999.99 HEADING 'M-Bytes'
SELECT 'free space' owner,
' ' object,
' ' obj_type,
f.file_name,
s.block_id,
s.blocks,
s.bytes/1048576 mbytes
FROM dba_free_space s,
dba_data_files f
WHERE s.file_id = TO_NUMBER(&file_num)
AND s.file_id = f.file_id
UNION
SELECT owner,
segment_name,
DECODE(segment_type, 'TABLE', 'T',
'INDEX', 'I',
'ROLLBACK', 'RB',
'CACHE', 'CH',
'CLUSTER', 'CL',
'LOBINDEX', 'LI',
'LOBSEGMENT', 'LS',
'TEMPORARY', 'TY',
'NESTED TABLE', 'NT',
'TYPE2 UNDO', 'U2',
'TABLE PARTITION','TP',
'INDEX PARTITION','IP', '?'),
f.file_name,
s.block_id,
s.blocks,
s.bytes/1048576
FROM dba_extents s,
dba_data_files f
WHERE s.file_id = TO_NUMBER(&file_num)
AND s.file_id = f.file_id
ORDER
BY block_id;
EXPDP/IMPDP run simultaneously
Are there any foreseeable problems running expdp and impdp simultaneously? I'd imagine that 10g could handle it, but maybe there are some deadlock issues. Purely a theoretical situation.
Captain Obvious: "Be kind, he's a newbie."
I had an import running that bumped up against a known bug that caused it to take a long time to complete. I was worried that it might cause a problem when the scheduled export kicked off later tonight.
Is it that the dump being imported was supposed to be included in the export scheduled for later? If yes, then only those tables that have already been imported by the time the export starts will be included in that export.
But if the import has finished, then you will get all the tables/objects in the export dump.
Anand
Edited by: Anand... on Oct 16, 2008 5:24 AM -
Hi Aman,
Sorry about that. Posting it as a new one:
SQL> ALTER USER SCOTT DEFAULT TABLESPACE TEST;
User altered.
SQL> ALTER USER TEST DEFAULT TABLESPACE TEST;
User altered.
SQL> ALTER TABLESPACE TEST
2 STORAGE
3 MAXEXTENTS UNLIMITED;
STORAGE
ERROR at line 2:
ORA-02142: missing or invalid ALTER TABLESPACE option
SQL> EXIT
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
C:\Documents and Settings\Rafialvi>expdp system/manager directory=MYDIR dumpfile=expdpf.dmp schemas=scott
Export: Release 10.2.0.1.0 - Production on Monday, 15 February, 2010 16:34:57
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
ORA-39002: invalid operation
ORA-39070: Unable to open the log file.
ORA-29283: invalid file operation
ORA-06512: at "SYS.UTL_FILE", line 475
ORA-29283: invalid file operation
I tried in linux still the same error persist:
[oracle@dbcl1n1 AUCD1 ~]$ expdp system/system directory=MYDIR dumpfile=expdpf.dmp schemas=adprod
Export: Release 10.2.0.4.0 - 64bit Production on Monday, 15 February, 2010 3:22:33
Copyright (c) 2003, 2007, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options
ORA-39002: invalid operation
ORA-39070: Unable to open the log file.
ORA-29283: invalid file operation
ORA-06512: at "SYS.UTL_FILE", line 488
ORA-29283: invalid file operation
Thanks,
Rafi.
C:\Documents and Settings\Rafialvi>
Thanks,
Rafi
Hi Khaja,
You were quite right, thanks. I was not creating the directory object. But I am still struggling with the problem below on Windows, while the export succeeds on Linux...
SQL*Plus: Release 10.2.0.1.0 - Production on Mon Feb 15 17:24:09 2010
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Enter user-name: /as sysdba
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
SQL> create directory mydir3 as 'C:\oracle\product\10.2.0\expdptest';
Directory created.
SQL> grant read,write on mydir3 to public;
grant read,write on mydir3 to public
ERROR at line 1:
ORA-00942: table or view does not exist
SQL> grant read,write on directory mydir3 to public;
Grant succeeded.
SQL> exit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
C:\Documents and Settings\Rafialvi>expdp system/manager directory=MYDIR3
Export: Release 10.2.0.1.0 - Production on Monday, 15 February, 2010 17:28:47
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01": system/******** directory=MYDIR3
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 320 KB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/SYNONYM/SYNONYM
Processing object type SCHEMA_EXPORT/TYPE/TYPE_SPEC
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/PRE_TABLE_ACTION
Processing object type SCHEMA_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type SCHEMA_EXPORT/TABLE/GRANT/CROSS_SCHEMA/OBJECT_GRANT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_SPEC
Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
Processing object type SCHEMA_EXPORT/PACKAGE/COMPILE_PACKAGE/PACKAGE_SPEC/ALTER_PACKAGE_SPEC
Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
Processing object type SCHEMA_EXPORT/VIEW/VIEW
Processing object type SCHEMA_EXPORT/VIEW/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type SCHEMA_EXPORT/VIEW/GRANT/CROSS_SCHEMA/OBJECT_GRANT
Processing object type SCHEMA_EXPORT/VIEW/COMMENT
Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_BODY
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/TRIGGER
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/POST_TABLE_ACTION
Processing object type SCHEMA_EXPORT/POST_SCHEMA/PROCACT_SCHEMA
. . exported "SYSTEM"."REPCAT$_AUDIT_ATTRIBUTE" 5.960 KB 2 rows
. . exported "SYSTEM"."REPCAT$_OBJECT_TYPES" 6.515 KB 28 rows
. . exported "SYSTEM"."REPCAT$_RESOLUTION_METHOD" 5.656 KB 19 rows
. . exported "SYSTEM"."REPCAT$_TEMPLATE_STATUS" 5.304 KB 3 rows
. . exported "SYSTEM"."REPCAT$_TEMPLATE_TYPES" 5.921 KB 2 rows
. . exported "SYSTEM"."DEF$_AQCALL" 0 KB 0 rows
. . exported "SYSTEM"."DEF$_AQERROR" 0 KB 0 rows
. . exported "SYSTEM"."DEF$_CALLDEST" 0 KB 0 rows
. . exported "SYSTEM"."DEF$_DEFAULTDEST" 0 KB 0 rows
. . exported "SYSTEM"."DEF$_DESTINATION" 0 KB 0 rows
. . exported "SYSTEM"."DEF$_ERROR" 0 KB 0 rows
. . exported "SYSTEM"."DEF$_LOB" 0 KB 0 rows
. . exported "SYSTEM"."DEF$_ORIGIN" 0 KB 0 rows
. . exported "SYSTEM"."DEF$_PROPAGATOR" 0 KB 0 rows
. . exported "SYSTEM"."DEF$_PUSHED_TRANSACTIONS" 0 KB 0 rows
. . exported "SYSTEM"."DEF$_TEMP$LOB" 0 KB 0 rows
. . exported "SYSTEM"."LOGSTDBY$APPLY_MILESTONE" 0 KB 0 rows
. . exported "SYSTEM"."LOGSTDBY$APPLY_PROGRESS":"P0" 0 KB 0 rows
. . exported "SYSTEM"."LOGSTDBY$EVENTS" 0 KB 0 rows
. . exported "SYSTEM"."LOGSTDBY$HISTORY" 0 KB 0 rows
. . exported "SYSTEM"."LOGSTDBY$PARAMETERS" 0 KB 0 rows
. . exported "SYSTEM"."LOGSTDBY$PLSQL" 0 KB 0 rows
. . exported "SYSTEM"."LOGSTDBY$SCN" 0 KB 0 rows
. . exported "SYSTEM"."LOGSTDBY$SKIP" 0 KB 0 rows
. . exported "SYSTEM"."LOGSTDBY$SKIP_TRANSACTION" 0 KB 0 rows
. . exported "SYSTEM"."MVIEW$_ADV_INDEX" 0 KB 0 rows
. . exported "SYSTEM"."MVIEW$_ADV_PARTITION" 0 KB 0 rows
. . exported "SYSTEM"."REPCAT$_AUDIT_COLUMN" 0 KB 0 rows
. . exported "SYSTEM"."REPCAT$_COLUMN_GROUP" 0 KB 0 rows
. . exported "SYSTEM"."REPCAT$_CONFLICT" 0 KB 0 rows
. . exported "SYSTEM"."REPCAT$_DDL" 0 KB 0 rows
. . exported "SYSTEM"."REPCAT$_EXCEPTIONS" 0 KB 0 rows
. . exported "SYSTEM"."REPCAT$_EXTENSION" 0 KB 0 rows
. . exported "SYSTEM"."REPCAT$_FLAVORS" 0 KB 0 rows
. . exported "SYSTEM"."REPCAT$_FLAVOR_OBJECTS" 0 KB 0 rows
. . exported "SYSTEM"."REPCAT$_GENERATED" 0 KB 0 rows
. . exported "SYSTEM"."REPCAT$_GROUPED_COLUMN" 0 KB 0 rows
. . exported "SYSTEM"."REPCAT$_INSTANTIATION_DDL" 0 KB 0 rows
. . exported "SYSTEM"."REPCAT$_KEY_COLUMNS" 0 KB 0 rows
. . exported "SYSTEM"."REPCAT$_OBJECT_PARMS" 0 KB 0 rows
. . exported "SYSTEM"."REPCAT$_PARAMETER_COLUMN" 0 KB 0 rows
. . exported "SYSTEM"."REPCAT$_PRIORITY" 0 KB 0 rows
. . exported "SYSTEM"."REPCAT$_PRIORITY_GROUP" 0 KB 0 rows
. . exported "SYSTEM"."REPCAT$_REFRESH_TEMPLATES" 0 KB 0 rows
. . exported "SYSTEM"."REPCAT$_REPCAT" 0 KB 0 rows
. . exported "SYSTEM"."REPCAT$_REPCATLOG" 0 KB 0 rows
. . exported "SYSTEM"."REPCAT$_REPCOLUMN" 0 KB 0 rows
. . exported "SYSTEM"."REPCAT$_REPGROUP_PRIVS" 0 KB 0 rows
. . exported "SYSTEM"."REPCAT$_REPOBJECT" 0 KB 0 rows
. . exported "SYSTEM"."REPCAT$_REPPROP" 0 KB 0 rows
. . exported "SYSTEM"."REPCAT$_REPSCHEMA" 0 KB 0 rows
. . exported "SYSTEM"."REPCAT$_RESOLUTION" 0 KB 0 rows
. . exported "SYSTEM"."REPCAT$_RESOLUTION_STATISTICS" 0 KB 0 rows
. . exported "SYSTEM"."REPCAT$_RESOL_STATS_CONTROL" 0 KB 0 rows
. . exported "SYSTEM"."REPCAT$_RUNTIME_PARMS" 0 KB 0 rows
. . exported "SYSTEM"."REPCAT$_SITES_NEW" 0 KB 0 rows
. . exported "SYSTEM"."REPCAT$_SITE_OBJECTS" 0 KB 0 rows
. . exported "SYSTEM"."REPCAT$_SNAPGROUP" 0 KB 0 rows
. . exported "SYSTEM"."REPCAT$_TEMPLATE_OBJECTS" 0 KB 0 rows
. . exported "SYSTEM"."REPCAT$_TEMPLATE_PARMS" 0 KB 0 rows
. . exported "SYSTEM"."REPCAT$_TEMPLATE_REFGROUPS" 0 KB 0 rows
. . exported "SYSTEM"."REPCAT$_TEMPLATE_SITES" 0 KB 0 rows
. . exported "SYSTEM"."REPCAT$_TEMPLATE_TARGETS" 0 KB 0 rows
. . exported "SYSTEM"."REPCAT$_USER_AUTHORIZATIONS" 0 KB 0 rows
. . exported "SYSTEM"."REPCAT$_USER_PARM_VALUES" 0 KB 0 rows
. . exported "SYSTEM"."SQLPLUS_PRODUCT_PROFILE" 0 KB 0 rows
Master table "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:
C:\ORACLE\PRODUCT\10.2.0\EXPDPTEST\EXPDAT.DMP
Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully completed at 17:29:13
C:\Documents and Settings\Rafialvi>dumpfile=expdpf.dmp schemas=scott
'dumpfile' is not recognized as an internal or external command,
operable program or batch file.
C:\Documents and Settings\Rafialvi>impdp system/manager directory=MYDIR3 dumpfile=expdpf.dmp remap_schema=scott:test
Import: Release 10.2.0.1.0 - Production on Monday, 15 February, 2010 17:30:46
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
ORA-39001: invalid argument value
ORA-39000: bad dump file specification
ORA-31640: unable to open dump file "C:\oracle\product\10.2.0\expdptest\expdpf.dmp" for read
ORA-27041: unable to open file
OSD-04002: unable to open file
O/S-Error: (OS 2) The system cannot find the file specified.
C:\Documents and Settings\Rafialvi>expdp scott/tiger directory=MYDIR3
Export: Release 10.2.0.1.0 - Production on Monday, 15 February, 2010 17:35:51
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
ORA-39001: invalid argument value
ORA-39000: bad dump file specification
ORA-31641: unable to create dump file "C:\oracle\product\10.2.0\expdptest\expdat.dmp"
ORA-27038: created file already exists
OSD-04010: <create> option specified, file already exists
C:\Documents and Settings\Rafialvi>dumpfile=expdpf.dmp schemas=scott
'dumpfile' is not recognized as an internal or external command,
operable program or batch file.
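The pattern in this transcript is the key: 'dumpfile' is not recognized because the DUMPFILE parameter was entered as a separate command, so expdp ran with only DIRECTORY set, fell back to the default expdat.dmp, and then failed with ORA-27038 because that default file already existed from the earlier run. Keeping every parameter on one line, or in a par file, avoids both errors (a sketch using the names from the thread):

```text
# exp_scott.par -- run as: expdp scott/tiger parfile=exp_scott.par
directory=MYDIR3
dumpfile=expdpf.dmp
schemas=scott
logfile=exp_scott.log
```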
C:\Documents and Settings\Rafialvi>expdp scott/tiger directory=MYDIR3
Export: Release 10.2.0.1.0 - Production on Monday, 15 February, 2010 17:36:19
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
ORA-39001: invalid argument value
ORA-39000: bad dump file specification
ORA-31641: unable to create dump file "C:\oracle\product\10.2.0\expdptest\expdat.dmp"
ORA-27038: created file already exists
OSD-04010: <create> option specified, file already exists
C:\Documents and Settings\Rafialvi>dumpfile=expdptest.dmp schemas=scott
'dumpfile' is not recognized as an internal or external command,
operable program or batch file.
C:\Documents and Settings\Rafialvi>expdp scott/tiger directory=MYDIR3
Export: Release 10.2.0.1.0 - Production on Monday, 15 February, 2010 17:36:43
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
ORA-39001: invalid argument value
ORA-39000: bad dump file specification
ORA-31641: unable to create dump file "C:\oracle\product\10.2.0\expdptest\expdat.dmp"
ORA-27038: created file already exists
OSD-04010: <create> option specified, file already exists
C:\Documents and Settings\Rafialvi>dumpfile=expdptest3.dmp schemas=scott
'dumpfile' is not recognized as an internal or external command,
operable program or batch file.
C:\Documents and Settings\Rafialvi>expdp scott/tiger directory=MYDIR3
Export: Release 10.2.0.1.0 - Production on Monday, 15 February, 2010 17:47:31
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
ORA-39001: invalid argument value
ORA-39000: bad dump file specification
ORA-31641: unable to create dump file "C:\oracle\product\10.2.0\expdptest\expdat.dmp"
ORA-27038: created file already exists
OSD-04010: <create> option specified, file already exists
C:\Documents and Settings\Rafialvi>dumpfile=expdptest13.dmp schemas=scott
'dumpfile' is not recognized as an internal or external command,
operable program or batch file.
C:\Documents and Settings\Rafialvi>expdp scott/tiger directory=MYDIR3
Export: Release 10.2.0.1.0 - Production on Monday, 15 February, 2010 17:48:32
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
ORA-39001: invalid argument value
ORA-39000: bad dump file specification
ORA-31641: unable to create dump file "C:\oracle\product\10.2.0\expdptest\expdat.dmp"
ORA-27038: created file already exists
OSD-04010: <create> option specified, file already exists
C:\Documents and Settings\Rafialvi>dumpfile=expdptest131.dmp schemas=scott
'dumpfile' is not recognized as an internal or external command,
operable program or batch file.
C:\Documents and Settings\Rafialvi>expdp scott/tiger directory=MYDIR3
Export: Release 10.2.0.1.0 - Production on Monday, 15 February, 2010 17:50:15
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
ORA-39001: invalid argument value
ORA-39000: bad dump file specification
ORA-31641: unable to create dump file "C:\oracle\product\10.2.0\expdptest\expdat.dmp"
ORA-27038: created file already exists
OSD-04010: <create> option specified, file already exists
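For what it is worth, the transcript shows two separate mistakes rather than a Data Pump bug: the `dumpfile=... schemas=scott` lines were typed as standalone OS commands, so cmd.exe rejected them and expdp never saw those parameters; with no DUMPFILE given, expdp fell back to the default expdat.dmp, which already existed in MYDIR3 (ORA-27038), and the earlier impdp failed because expdpf.dmp was never actually created (ORA-31640). A sketch of corrected invocations, reusing the directory and file names from the transcript:

```shell
# All parameters must appear on the expdp/impdp command line (or in a
# par file); typed on their own, cmd.exe treats "dumpfile=..." as a
# program name, and expdp silently uses the default expdat.dmp.
expdp scott/tiger directory=MYDIR3 dumpfile=expdpf.dmp logfile=expdpf.log schemas=scott

# Only after the export has actually written expdpf.dmp:
impdp system/manager directory=MYDIR3 dumpfile=expdpf.dmp remap_schema=scott:test
```

If an expdat.dmp from an earlier attempt is still in the way, delete it by hand on 10g (REUSE_DUMPFILES=Y only appears in 11g).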
How can I get rid of the errors above, Khaja?
Thanks,
Rafi. -
Datapump expdp impdp and sequences
Hi all. I have a 10g XE database on my dev machine with a whole load of test data in it. I want to copy the schema to another machine without the data, which I can do with expdp usr/pwd CONTENT=metadata_only, and I can import it with impdp and everything works fine. Except that all the sequences pick up where the test data left off. Can someone please tell me how to copy the schema with the sequences reset? I'm guessing I can either export the schema resetting the sequences (somehow) or export EXCLUDING the sequences and create them separately (somehow). Thanks for reading.
I don't think you can reset the sequences directly. You can run the import to an sql file and then use search/replace on it:
$ impdp user/pass dumpfile=test.dmp directory=MY_DIR include=SEQUENCE sqlfile=sequences.sql
You will have several lines like this inside "sequences.sql":
CREATE SEQUENCE "USER"."SEQ_NAME" MINVALUE 1 MAXVALUE 99999999 INCREMENT BY 1 START WITH 1857 CACHE 20 ORDER CYCLE ;
Then just use regular expressions to replace "START WITH NNNN" by "START WITH 1"
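That replacement can be done with sed; a minimal self-contained sketch, using the sample DDL line shown above written to a scratch file so the command can be tried outside the database:

```shell
# Sample line as impdp ... sqlfile=sequences.sql writes it
cat > sequences.sql <<'EOF'
CREATE SEQUENCE "USER"."SEQ_NAME" MINVALUE 1 MAXVALUE 99999999 INCREMENT BY 1 START WITH 1857 CACHE 20 ORDER CYCLE ;
EOF

# Reset every START WITH clause to 1 before running the script
# against the target database (-i.bak keeps the original as a backup)
sed -i.bak 's/START WITH [0-9][0-9]*/START WITH 1/g' sequences.sql
cat sequences.sql
```

The edited sequences.sql can then be run in SQL*Plus on the target after dropping or excluding the imported sequences.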