Data Pump Export with remap tables
Hello, we have two databases on different servers, both running the same platform: Windows 2008 with Oracle 11gR2. We have two star schemas, one on each server. Both schemas have partitioned tables, and every partition has its own tablespace. One star schema is empty; its name is, say, "Msoon". I want to populate Msoon with ONLY the DATA of the other star schema, say M2. I have tried different commands, but every time I get an error. Here is the command with which I extract one table from the database:
expdp M1396_1447/Aa123456 DIRECTORY=data_pump_dir DUMPFILE=quest_dg2.dmp tables=M1396_1447.M1396_1447_DG2 CONTENT=DATA_ONLY
Now I want to import its data into the Msoon star table MSOON02_DG2, which is identical; only the name is changed. But I get the following error:
impdp MSOON/Aa123456 DIRECTORY=data_pump_dir DUMPFILE=QUEST_DG22.dmp remap_table=M1396_1447.M1396_1447_DG2:MSOON02.MSOON02_DG2 CONTENT=DATA_ONLY
Import: Release 11.2.0.1.0 - Production on Wed Jan 12 20:34:17 2011
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "MSOON"."SYS_IMPORT_FULL_12" successfully loaded/unloaded
Starting "MSOON"."SYS_IMPORT_FULL_12": MSOON/******** DIRECTORY=data_pump_dir DUMPFILE=QUEST_DG22.dmp logfile=MY_L.log remap_table=M1396_1447.M1396_1447_DG2:MSOON02.MSOON02_DG2 CONTENT= DATA_ONLY
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
ORA-39126: Worker unexpected fatal error in KUPW$WORKER.UPATE_TD_ROW_IMP [15]
TABLE_DATA:"M1396_1447"."MSOON02.MSOON02_DG2":"DEF_PART_M1396_1447_DG2"
ORA-31603: object "MSOON02.MSOON02_DG2" of type TABLE not found in schema "M1396_1447"
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 105
ORA-06512: at "SYS.KUPW$WORKER", line 8171
----- PL/SQL Call Stack -----
object line object
handle number name
00000003B2AD5BB0 18990 package body SYS.KUPW$WORKER
00000003B2AD5BB0 8192 package body SYS.KUPW$WORKER
00000003B2AD5BB0 18552 package body SYS.KUPW$WORKER
00000003B2AD5BB0 4105 package body SYS.KUPW$WORKER
00000003B2AD5BB0 8875 package body SYS.KUPW$WORKER
00000003B2AD5BB0 1649 package body SYS.KUPW$WORKER
00000003B29D51D0 2 anonymous block
ORA-39126: Worker unexpected fatal error in KUPW$WORKER.UPATE_TD_ROW_IMP [15]
TABLE_DATA:"M1396_1447"."MSOON02.MSOON02_DG2":"DEF_PART_M1396_1447_DG2"
ORA-31603: object "MSOON02.MSOON02_DG2" of type TABLE not found in schema "M1396_1447"
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 105
ORA-06512: at "SYS.KUPW$WORKER", line 8171
----- PL/SQL Call Stack -----
object line object
handle number name
00000003B2AD5BB0 18990 package body SYS.KUPW$WORKER
00000003B2AD5BB0 8192 package body SYS.KUPW$WORKER
00000003B2AD5BB0 18552 package body SYS.KUPW$WORKER
00000003B2AD5BB0 4105 package body SYS.KUPW$WORKER
00000003B2AD5BB0 8875 package body SYS.KUPW$WORKER
00000003B2AD5BB0 1649 package body SYS.KUPW$WORKER
00000003B29D51D0 2 anonymous block
Job "MSOON"."SYS_IMPORT_FULL_12" stopped due to fatal error at 20:34:19
I also have a problem with Data Pump parallel processing. Here is my statement:
expdp M1396_1447/Aa123456 DIRECTORY=data_pump_dir DUMPFILE=SRV03Msoon02_jt%U.dmp logfile=jt.log tables=M1396_1447.M1396_1447_jt CONTENT=DATA_ONLY parallel=4
On server two I want to import it:
impdp MSOON02/Aa123456 DIRECTORY=data_pump_dir DUMPFILE=SRV03Msoon02_jt%U.dmp logfile=jtJT.log REMAP_SCHEMA=M1396_1447:MSOON02 remap_table=M1396_1447_JT:MSOON02_JT CONTENT=DATA_ONLY parallel=4
I get the following error if I use parallel. If I omit parallel, it takes too long to import the data: it took 2.5 hours to insert 2.35 GB of data into one partition of the table.
One more thing: if I use parallel, all partitions of the table containing no records import successfully, but all the partitions holding gigabytes of data fail with the following error. Please help me out of this problem. Here is the Data Pump log file:
Import: Release 11.2.0.1.0 - Production on Thu Jan 13 14:05:04 2011
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "MSOON02"."SYS_IMPORT_FULL_05" successfully loaded/unloaded
Starting "MSOON02"."SYS_IMPORT_FULL_05": msoon02/******** DIRECTORY=data_pump_dir DUMPFILE=SRV03 Msoon02_jt%U.dmp logfile= REMAP_SCHEMA=M1396_1447:MSOON02 remap_table=M1396_1447_JT:MSOON02_JT CONTENT= DATA_ONLY parallel=4
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
ORA-31693: Table data object "MSOON02"."MSOON02_JT":"PT_M1396_1447_JT_20091201" failed to load/unload and is being skipped due to error:
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-31693: Table data object "MSOON02"."MSOON02_JT":"PT_M1396_1447_JT_20100101" failed to load/unload and is being skipped due to error:
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-31693: Table data object "MSOON02"."MSOON02_JT":"PT_M1396_1447_JT_20091101" failed to load/unload and is being skipped due to error:
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
. . imported "MSOON02"."MSOON02_JT":"DEF_PART_M1396_1447_JT" 0 KB 0 rows
. . imported "MSOON02"."MSOON02_JT":"PT_M1396_1447_JT_20100201" 0 KB 0 rows
. . imported "MSOON02"."MSOON02_JT":"PT_M1396_1447_JT_20100301" 0 KB 0 rows
. . imported "MSOON02"."MSOON02_JT":"PT_M1396_1447_JT_20100401" 0 KB 0 rows
. . imported "MSOON02"."MSOON02_JT":"PT_M1396_1447_JT_20100501" 0 KB 0 rows
. . imported "MSOON02"."MSOON02_JT":"PT_M1396_1447_JT_20100601" 0 KB 0 rows
. . imported "MSOON02"."MSOON02_JT":"PT_M1396_1447_JT_20100701" 0 KB 0 rows
. . imported "MSOON02"."MSOON02_JT":"PT_M1396_1447_JT_20100801" 0 KB 0 rows
. . imported "MSOON02"."MSOON02_JT":"PT_M1396_1447_JT_20100901" 0 KB 0 rows
. . imported "MSOON02"."MSOON02_JT":"PT_M1396_1447_JT_20101001" 0 KB 0 rows
Job "MSOON02"."SYS_IMPORT_FULL_05" completed with 3 error(s) at 14:21:32Edited by: Oracle Studnet on Jan 13, 2011 1:20 AM
Similar Messages
-
Hi All,
I am new to DBA work, and when I run expdp for the full database I get the below error:
" ORA-39139: Data Pump does not support XMLSchema objects. TABLE_DATA:"APPS"."abc_TAB" will be skipped.
I am facing an issue with the exp utility, so I have to use expdp.
Kindly help me to avoid these errors.
Thanks.
Thanks Srini for the doc - a good one.
We have scheduled a daily full export backup. Now this is having problems.
-> exp expuser/exppaswd file=/ora/exp.dmp log=/ora/exp.log direct=Y statistics=none full=Y
Error:
Table window348_TAB will be exported in conventional path.
. . exporting table window348_TAB 0 rows exported
Table wtpParameters350_TAB will be exported in conventional path.
. . exporting table wtpParameters350_TAB 0 rows exported
. exporting synonyms
EXP-00008: ORACLE error 1406 encountered
ORA-01406: fetched column value was truncated
EXP-00000: Export terminated unsuccessfully
Oracle suggested we drop/disable the policy. But one policy is assigned to many tables, and one table can have more than one policy.
We haven't set any policies; all are default ones.
1. If I disable/drop the policies, how will it affect the application?
2. How can I disable or drop them for all the objects in the database?
3. How can I identify the policies - whether OLS or VPD? (See the queries just below.)
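For question 3, a hedged starting point, assuming DBA privileges (DBA_SA_TABLE_POLICIES exists only when Oracle Label Security is installed):

-- VPD (fine-grained access control) policies
SELECT object_owner, object_name, policy_name FROM dba_policies;
-- OLS policies, only if Oracle Label Security is installed
SELECT policy_name, schema_name, table_name FROM dba_sa_table_policies;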
Please help me get a daily full export backup, or suggest an alternative solution.
Thanks,
Asim -
Data Pump Export with Column Encryption
Hi All-
I have the following two questions:
- Can I export encrypted column data to a file, preserving the data encrypted? (ie. I don't want clear text in resulting file)
- If yes, what would another person need to decrypt the generated file? (ie. I want to give the resulting file to a customer)
Thanks a lot for your help!
Juan
Hi,
expdp and impdp work only with Oracle databases, so you need an Oracle database on both source and destination to use expdp/impdp. You can also use expdp with the ENCRYPTION_PASSWORD option, so that the password can be passed to the customer for running impdp at their end with the given password.
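A hedged sketch of both sides (table name and password are made up; the encrypted columns are re-encrypted in the dump under this password, and the customer supplies the same password on import):

expdp hr/hr TABLES=employees DIRECTORY=dpump_dir DUMPFILE=enc.dmp ENCRYPTION_PASSWORD=MySecret123
impdp hr/hr DIRECTORY=dpump_dir DUMPFILE=enc.dmp ENCRYPTION_PASSWORD=MySecret123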
Regards -
Hi all,
During a Data Pump export (big table), the dump totals 289 GB (FILESIZE=10G) and I got this error:
. . exported "TEST3"."INC_T_ZMLUVY_PARTNERI" 6.015 GB 61182910 rows
. . exported "TEST3"."PAR_T_VZTAHY" 5.798 GB 73121325 rows
ORA-31693: Table data object "TEST3"."INC_T_POISTENIE_PARAMETRE_H" failed to load/unload and is being skipped due to error:
ORA-31617: unable to open dump file "/u01/app/oracle/backup/STARINS/exp_polska03.dmp" for write
ORA-19505: failed to identify file "/u01/app/oracle/backup/STARINS/exp_polska03.dmp"
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
. . exported "TEST3"."PAY_T_UCET_POLOZKA" 5.344 GB 97337823 rows
and the export continued until:
Job "SYSTEM"."SYS_EXPORT_SCHEMA_02" completed with 1 error
(it took 8 hours)
Can you help me - is the dump OK now? Will impdp continue after reading exp_polska03.dmp? (The dump has 28 files, exp_polska1 - exp_polska28.) At most I would export this one table again; is that solution OK?
Thanks, Brano
What are the expdp parameters used?
What's the total export dump size? (Try ESTIMATE_ONLY=y.)
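For example, a hedged sketch of an estimate-only run (no dump file is written, so DUMPFILE is omitted; schema name taken from your log):

expdp system/password SCHEMAS=TEST3 ESTIMATE_ONLY=y ESTIMATE=BLOCKS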
ORA-31617: unable to open dump file "/u01/app/oracle/backup/STARINS/exp_polska03.dmp" for write
ORA-19505: failed to identify file "/u01/app/oracle/backup/STARINS/exp_polska03.dmp"
http://systemr.blogspot.com/2007/09/section-oracle-database-utilities-title.html
Check the above one.
I guess the file was removed from that location, or there are not enough permissions to write there. -
How can I use the data pump export from external client?
I am trying to export a bunch of tables from a DB, but I can't figure out how to do it.
I don't have access to a shell terminal on the server itself; I can only log in using TOAD.
I am trying to use TOAD's Data Pump Export utility but I keep getting this error:
ORA-39070: Unable to open the log file.
ORA-39087: directory name D:\TEMP\ is invalid
I don't understand if it's because I am setting up the parameter file wrong, or if the utility is trying to find that directory on the server, whereas I am thinking it's going to dump to my local filesystem, where that directory exists.
I'd hate to have to use SQL Loader to create ctl files for each and every table...
Here is my parameter file:
DUMPFILE="db_export.dmp"
LOGFILE="exp_db_export.log"
DIRECTORY="D:\temp\"
TABLES=ACCOUNT
CONTENT=ALL
(just trying to test it on one table so far...)
P.S. Oracle 11g
ORA-39070: Unable to open the log file.
ORA-39087: directory name D:\TEMP\ is invalid
The directory here should not be a physical location; it is a logical representation.
For that you have to create a directory object at the SQL level, e.g. CREATE DIRECTORY exp_dp ...
Then use the directory created above as DIRECTORY=exp_dp.
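A hedged sketch of the SQL (the path must exist on the database server, not on your PC; path and grantee are examples):

CREATE OR REPLACE DIRECTORY exp_dp AS 'D:\temp';
GRANT READ, WRITE ON DIRECTORY exp_dp TO your_user;

Then reference it in the parameter file as DIRECTORY=exp_dp instead of DIRECTORY="D:\temp\".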
HTH -
Interface Problems: DBA = Data Pump = Export Jobs (Job Name)
Hello Folks,
I need your help in troubleshooting an SQL Developer interface problem.
DBA => Data Pump => Export Jobs (Job Name) => Data Pump Export => Job Scheduler (Step):
-a- Job Name and Job Description fields are not visible. Well, the fields are there, but each of them is just 1/2 character wide. I can't see/enter anything in the fields.
Import Wizard:
-b- Job Name field under the first "Type" wizard's step looks exactly the same as in Export case.
-c- Can't see any row under "Chose Input Files" section (I see just ~1 mm of the first row and everything else is hidden).
My env:
-- Version 3.2.20.09, Build MAIN-09.87
-- Windows 7 (64 bit)
It could be related to the fact that I did change fonts in the Preferences. As I don't know what is the default font I can't change it back to the default and test (let me know what is the default and I will test it).
PS
-- Have tried to disable all extensions but DBA Navigator (11.2.0.09.87). It didn't help
-- There are no messages in the console if I run SQL Dev from cmd: "sqldeveloper\bin\sqldeveloper.exe"
Any help is appreciated,
Yury
Hi Yury,
a - I see those 1/2-character-wide text boxes (in my case on Frequency) when the pop-up dialog is too small - do they go away when you make it bigger?
b - On import, the name starts with IMPORT - if it is the half-character issue, have you tried making the dialog bigger?
c - I think it is size again, but my dialog at minimum size is already big enough.
Have you tried a smaller font, or making the dialogs bigger (resizing from the corners)?
I have a 3.2.1 version where I have not changed the fonts; Tools -> Preferences -> Code Editor -> Fonts appears to be:
Font Name: DialogInput
Font size: 12
Turloch
-SQLDeveloper Team -
Pre Checks before running Data Pump Export/Import
Hi,
Oracle: 11.2
OS: Windows
Kindly share the pre-checks required for data pump export and import which should be followed by a DBA.
Thanks
When you do a tablespace-mode export, Data Pump essentially does a table-mode export of all of the tables in the tablespaces mentioned. So if you have this:
tablespace a contains: table 1, table 2, index 3a (on table 3)
tablespace b contains: index 1a (on table 1), index 2a (on table 2), table 3
and if you expdp tablespaces=a ...
you will get table 1, table 2, index 1a, and index 2a.
My belief is that you will not get table 3 or index 3a. The way I understand the code to work is that you get the tables in the tablespaces you mention and their dependent objects, but not the other way around. You could easily verify this to make sure.
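One hedged way to check what lives in a given tablespace before exporting (tablespace name from the example above; the dictionary stores names in uppercase):

SELECT owner, segment_name, segment_type FROM dba_segments WHERE tablespace_name = 'A';

Tables listed here would be picked up by TABLESPACES=a, along with their dependent indexes wherever those are stored.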
Dean -
How-to list the contents of a Data Pump Export file?
How can I list the contents of a 10gR2 Data Pump Export file? I'm looking at the Syntax Diagram for Data Pump Import and can't see a list-only option.
Regards,
Al Malin
Use the SQLFILE parameter of impdp, which writes all the SQL DDL to the specified file.
http://download-west.oracle.com/docs/cd/B19306_01/server.102/b14215/dp_import.htm
SQLFILE
Default: none
Purpose
Specifies a file into which all of the SQL DDL that Import would have executed, based on other parameters, is written.
Syntax and Description
SQLFILE=[directory_object:]file_name
The file_name specifies where the import job will write the DDL that would be executed during the job. The SQL is not actually executed, and the target system remains unchanged. The file is written to the directory object specified in the DIRECTORY parameter, unless another directory_object is explicitly specified here. Any existing file that has a name matching the one specified with this parameter is overwritten.
Note that passwords are not included in the SQL file. For example, if a CONNECT statement is part of the DDL that was executed, it will be replaced by a comment with only the schema name shown. In the following example, the dashes indicate that a comment follows, and the hr schema name is shown, but not the password.
-- CONNECT hr
Therefore, before you can execute the SQL file, you must edit it by removing the dashes indicating a comment and adding the password for the hr schema (in this case, the password is also hr), as follows:
CONNECT hr/hr
For Streams and other Oracle database options, anonymous PL/SQL blocks may appear within the SQLFILE output. They should not be executed directly.
Example
The following is an example of using the SQLFILE parameter. You can create the expfull.dmp dump file used in this example by running the example provided for the Export FULL parameter. See FULL.
impdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp SQLFILE=dpump_dir2:expfull.sql
A SQL file named expfull.sql is written to dpump_dir2.
Ranga -
Exporting whole database (10GB) using Data Pump export utility
Hi,
I have a requirement to export the whole database (10 GB) using the Data Pump export utility, because it is not possible to send the 10 GB dump to the system vendor of our application (to analyze a few issues we have) on a single CD/DVD.
When I checked online, full export is available, but I am not able to understand how it works, as we have never used this Data Pump utility; we use the normal export method. Also, will Data Pump reduce the size of the dump file so it can fit on a DVD, or can we use a parallel full DB export to split the files and put them on DVDs - is that possible?
Please correct me if i am wrong and kindly help.
Thanks for your help in advance.
You need to create a directory object.
sqlplus user/password
create directory foo as '/path_here';
grant all on directory foo to public;
exit;
Then run your expdp command.
Data Pump can compress the dumpfile if you are on 11.1 and have the appropriate options. The reason for specifying FILESIZE is to limit the size of each dumpfile. If you have 10 GB and are not compressing, so the total dump is 10 GB, then by specifying 600 MB you will get 10 GB / 600 MB = 17 dumpfiles of 600 MB each. You will have to send 17 CDs (probably a few more, since dumpfiles don't fill up 100% due to parallelism).
Data Pump dumpfiles are written by the server, not the client, so the dumpfiles don't get created in the directory where the job is run.
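Putting that together, a hedged sketch of a full export split into DVD-sized pieces (directory name from the example above; add COMPRESSION=ALL only if you are on 11.1+ with the Advanced Compression option):

expdp system/password FULL=Y DIRECTORY=foo DUMPFILE=full_%U.dmp FILESIZE=600M LOGFILE=full_exp.log

The %U substitution generates numbered files (full_01.dmp, full_02.dmp, ...) as each one reaches FILESIZE.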
Dean -
Hi,
Can you please explain me about CONSISTENT and COMPRESS parameters in Data Pump Export.
Thanks,
Suresh Bommalata
Use FLASHBACK_SCN and FLASHBACK_TIME to get the functionality of the old CONSISTENT parameter.
http://oracledatapump.blogspot.com/2009/04/mapping-between-parameters-of-datapump.html
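For example, a hedged sketch using a parameter file (timestamp value is made up; a parameter file avoids shell-quoting problems with the nested quotes):

DIRECTORY=dpump_dir
DUMPFILE=hr.dmp
SCHEMAS=hr
FLASHBACK_TIME="TO_TIMESTAMP('2010-08-18 22:00:00', 'YYYY-MM-DD HH24:MI:SS')"

and then: expdp hr/hr PARFILE=exp_consistent.par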
-
HR master data change export with Interface-Toolbox PU12
Hello out there !
I implemented a HR master data change export with PU12 at one of my customers.
It works fine - with one little problem:
It is only possible to export one period (actual or selected).
Is there any possibility to export more than one period (also periods in the future) ?
The request is to export also master data with validity start date in a future period.
Thanks for any help.
Greetings, Holger Mächtig
Dear Holger,
I am very curious about your problem from last year. Is it possible to do a future-period export with PU12?
I am now working with PU12 for the first time and I have set up a export file in PU12.
But when I run an update without filling in the name of the file layout, I get the following error:
"An error occurred when opening the "export file"
When I fill in the file layout, I get the following error:
"An error occurred when writing to an export file" (Error Number E107).
"File processing: end of file".
When I do not run an update, the export file runs and seems right.
Do you know what I am doing wrong?
Thanks in advance!
Kind regards,
Yvette -
Check for directory for Data Pump Export
Hi
In order to perform a Data Pump export, it is required to create a directory object as a prerequisite.
e.g :
SQL> create directory expdp_dir as '/u01/backup/exports';
I need to know if there is any way to know if the folder is already created.
Please advise.
Regards,
Dheeraj
Dheeraj Kumar M wrote:
I need to know if there is any way to know if the folder is already created.
Do you mean see if the OS folder exists and is accessible to the oracle user? Try creating a file in that directory using UTL_FILE. If it succeeds, you are OK. Another option is writing a Java stored procedure to check OS folder existence and permissions.
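A hedged sketch of the UTL_FILE test (directory name from the example above; FOPEN in write mode proves the OS folder exists and is writable by the oracle user):

SET SERVEROUTPUT ON
DECLARE
  l_file UTL_FILE.FILE_TYPE;
BEGIN
  -- try to create, close, and remove a scratch file in the directory
  l_file := UTL_FILE.FOPEN('EXPDP_DIR', 'dir_check.tmp', 'w');
  UTL_FILE.FCLOSE(l_file);
  UTL_FILE.FREMOVE('EXPDP_DIR', 'dir_check.tmp');
  DBMS_OUTPUT.PUT_LINE('Directory exists and is writable');
EXCEPTION
  WHEN OTHERS THEN
    DBMS_OUTPUT.PUT_LINE('Directory problem: ' || SQLERRM);
END;
/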
SY. -
Data Pump Export error - network mounted path
Hi,
Please have a look at the Data Pump error I am getting while doing an export. I am running on version 11g. Please help with your feedback.
I am getting the error due to a network-mounted path for the directory; it works fine with a local path. I have given full permissions on the network path, and UTL_FILE is able to create files, but Data Pump fails with the error messages below.
Oracle 11g
Solaris 10
I am getting the error below:
ERROR at line 1:
ORA-39001: invalid argument value
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
ORA-06512: at "SYS.DBMS_DATAPUMP", line 3444
ORA-06512: at "SYS.DBMS_DATAPUMP", line 3693
ORA-06512: at line 64
DECLARE
p_part_name VARCHAR2(30);
p_msg VARCHAR2(512);
v_ret_period NUMBER;
v_arch_location VARCHAR2(512);
v_arch_directory VARCHAR2(20);
v_rec_count NUMBER;
v_partition_dumpfile VARCHAR2(35);
v_partition_dumplog VARCHAR2(35);
v_part_date VARCHAR2(30);
p_partition_name VARCHAR2(30);
v_partition_arch_location VARCHAR2(512);
h1 NUMBER; -- Data Pump job handle
job_state VARCHAR2(30); -- To keep track of job state
le ku$_LogEntry; -- For WIP and error messages
js ku$_JobStatus; -- The job status from get_status
jd ku$_JobDesc; -- The job description from get_status
sts ku$_Status; -- The status object returned by get_status
ind NUMBER; -- Loop index
percent_done NUMBER; -- Percentage of job complete
--check whether a previous dump file already exists in the directory
l_file utl_file.file_type;
l_file_name varchar2(20);
l_exists boolean;
l_length number;
l_blksize number;
BEGIN
p_part_name:='P2010110800';
p_partition_name := upper(p_part_name);
v_partition_dumpfile := chr(39)||p_partition_name||chr(39);
v_partition_dumplog := p_partition_name || '.LOG';
SELECT COUNT(*) INTO v_rec_count FROM HDB.PARTITION_BACKUP_MASTER WHERE PARTITION_ARCHIVAL_STATUS='Y';
IF v_rec_count != 0 THEN
SELECT
PARTITION_ARCHIVAL_PERIOD
,PARTITION_ARCHIVAL_LOCATION
,PARTITION_ARCHIVAL_DIRECTORY
INTO v_ret_period , v_arch_location , v_arch_directory
FROM HDB.PARTITION_BACKUP_MASTER WHERE PARTITION_ARCHIVAL_STATUS='Y';
END IF;
-- l_file_name was never assigned in the original; set it before checking for an existing dump file
l_file_name := p_partition_name || '.DMP';
utl_file.fgetattr('ORALOAD', l_file_name, l_exists, l_length, l_blksize);
IF (l_exists) THEN
utl_file.FRENAME('ORALOAD', l_file_name, 'ORALOAD', p_partition_name ||'_'|| to_char(systimestamp,'YYYYMMDDHH24MISS') ||'.DMP', TRUE);
END IF;
v_part_date := replace(p_partition_name,'P');
DBMS_OUTPUT.PUT_LINE('inside');
h1 := dbms_datapump.open (operation => 'EXPORT',
job_mode => 'TABLE'); -- the closing ");" was missing in the original post
dbms_datapump.add_file (handle => h1,
filename => p_partition_name ||'.DMP',
directory => v_arch_directory,
filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
dbms_datapump.add_file (handle => h1,
filename => p_partition_name||'.LOG',
directory => v_arch_directory,
filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
dbms_datapump.metadata_filter (handle => h1,
name => 'SCHEMA_EXPR',
value => 'IN (''HDB'')');
dbms_datapump.metadata_filter (handle => h1,
name => 'NAME_EXPR',
value => 'IN (''SUBSCRIBER_EVENT'')');
dbms_datapump.data_filter (handle => h1,
name => 'PARTITION_LIST',
value => v_partition_dumpfile,
table_name => 'SUBSCRIBER_EVENT',
schema_name => 'HDB');
dbms_datapump.set_parameter(handle => h1, name => 'COMPRESSION', value => 'ALL');
dbms_datapump.start_job (handle => h1);
dbms_datapump.detach (handle => h1);
END;
/
Hi,
I tried to generate the dump with expdp instead of the API and got more specific error logs,
but the log file did get created on the same path.
expdp hdb/hdb DUMPFILE=P2010110800.dmp DIRECTORY=ORALOAD TABLES=(SUBSCRIBER_EVENT:P2010110800) logfile=P2010110800.log
Export: Release 11.2.0.1.0 - Production on Wed Nov 10 01:26:13 2010
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Data Mining
and Real Application Testing options
ORA-39001: invalid argument value
ORA-39000: bad dump file specification
ORA-31641: unable to create dump file "/nfs_path/lims/backup/hdb/datapump/P2010110800.dmp"
ORA-27054: NFS file system where the file is created or resides is not mounted with correct options
Additional information: 3
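ORA-27054 usually means the NFS mount options do not match what Oracle requires. A hedged example of the kind of Solaris mount typically recommended in Oracle's NFS notes (server, export path, and mount point are made up; exact options depend on platform and NAS vendor):

mount -F nfs -o rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,vers=3 nfs_server:/export/backup /nfs_path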
Oracle 10g - Data Pump: Export / Import of Sequences ?
Hello,
I'm new to this forum and also to Oracle (version 10g). Since I could not find an answer to my question, I am opening this post hoping to get some help from the experienced users.
My question concerns the Data Pump Utility and what happens to sequences which were defined in the source database:
I have exported a schema with the following command:
"expdp <user>/<pass> DIRECTORY=DATA_PUMP_DIR DUMPFILE=dumpfile.dmp LOGFILE=logfile.log"
This worked fine and also the import seemed to work fine with the command:
"impdp <user>/<pass> DIRECTORY=DATA_PUMP_DIR DUMPFILE=dumpfile.dmp"
It loaded the exported objects directly into the schema of the target database.
BUT:
Something has happened to my sequences. :-(
When I want to use them, all sequences start again with the value "1". Since I have already loaded data with higher values into my tables, I get into trouble with the PKs of these tables, because I sometimes used sequences as primary keys.
My questions are:
1. Did I do something wrong with the Data Pump utility?
2. What is the correct way to export and import sequences so that they keep their actual values?
3. If the behaviour described here is correct, how can I fix the sequences to continue from the last value used in the source database?
Thanks a lot in advance for any help concerning this topic!
Best regards
FireFighter
P.S.
It might be that my English does not sound perfect, since it is not my native language. Sorry for that! ;-)
But I hope that someone can understand nevertheless. ;-)
1. Did I do something wrong with the Data Pump utility?
I do not think so. But maybe something is wrong in the existing target schema :-(
2. What is the correct way to export and import sequences so that they keep their actual values?
If the sequences exist in the target before the import, Oracle does not drop and recreate them. So you need to ensure that the sequences do not already exist in the target, or that the existing ones are dropped before the import.
3. If the behaviour described here is correct, how can I fix the sequences to continue from the last value used in the source database?
You can either refresh with the import after the above correction, or drop and manually recreate the sequences to START WITH the next value of the source sequences.
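A hedged sketch of the drop-and-recreate approach (sequence name and value are made up; read the current value from the source first):

-- on the source: last number written to the dictionary (covers cached values)
SELECT last_number FROM user_sequences WHERE sequence_name = 'MY_SEQ';
-- on the target: recreate the sequence so it continues from there
DROP SEQUENCE my_seq;
CREATE SEQUENCE my_seq START WITH 12346;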
The easier way is to generate such a script from the source, if you know how to do it. -
Data Pump issue with nls date format
Hi Friends,
I have a database with NLS date format 'DD/MM/YYYY HH24:MI:SS' from which I wish to export. I have a target database with NLS date format 'YYYY/MM/DD HH24:MI:SS'. I have a few tables whose CREATE statements have date fields with DEFAULT '01-Jan-1950', and these CREATE TABLE statements fail in my target database when processed by Data Pump. The tables are not created, due to this error:
Failing sql is:
CREATE TABLE "MCS_OWNER"."SECTOR_BLOCK_PEAK" ("AIRPORT_DEPART" VARCHAR2(4) NOT NULL ENABLE, "AIRPORT_ARRIVE" VARCHAR2(4) NOT NULL ENABLE, "CARRIER_CODE" VARCHAR2(3) NOT NULL ENABLE, "AC_TYPE_IATA" VARCHAR2(3) NOT NULL ENABLE, "PEAK_START" VARCHAR2(25) NOT NULL ENABLE, "PEAK_END" VARCHAR2(25), "BLOCK_TIME" VARCHAR2(25), "FLIGHT_TIME" VARCHAR2(25), "SEASON" VARC
ORA-39083: Object type TABLE failed to create with error:
ORA-01858: a non-numeric character was found where a numeric was expected
The table-create SQL adds a column as VALID_FROM DATE DEFAULT '01-jan-1970', which I think is the issue. I would appreciate it if someone could suggest a way to get around this. I have tried altering the NLS settings of the source DB to be the same as the target database; the impdp still fails.
Database is 10.2.0.1.0 on Linux X86 64 bit.
Thanks,
SSN
Change the DDL of the CREATE TABLE to include the TO_DATE() function.
With Oracle, characters between single quote marks are STRINGS!
'This is a string, 2009-12-31, not a date'
When a DATE datatype is desired, use the TO_DATE() function.
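For example, a hedged rewrite of the failing default from the post (column definition only; the explicit format mask makes the DDL independent of NLS_DATE_FORMAT and NLS_DATE_LANGUAGE):

VALID_FROM DATE DEFAULT TO_DATE('1970-01-01', 'YYYY-MM-DD')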