ORA-39000: bad dump file specification
Hi,
I exported the dump file with %U and it was generated successfully with suffixes 101, 102, ... up to 243.
But while importing in the other environment it searches for the files from 01, 02, and so on.
Could you please assist?
The actual file names are:
Dump file set for EXPDPUSER.EXPFULL_NW is:
+LMDRPRDERDWDB_ASM_DATA_DISK9/asm_dpump/prod_data_expdb_nw101.dmp
+LMDRPRDERDWDB_ASM_DATA_DISK8/asm_dpump/prod_data_expdb_nw201.dmp
+LMDRPRDERDWDB_ASM_DATA_DISK9/asm_dpump/prod_data_expdb_nw102.dmp
+LMDRPRDERDWDB_ASM_DATA_DISK8/asm_dpump/prod_data_expdb_nw202.dmp
+LMDRPRDERDWDB_ASM_DATA_DISK9/asm_dpump/prod_data_expdb_nw103.dmp
+LMDRPRDERDWDB_ASM_DATA_DISK8/asm_dpump/prod_data_expdb_nw203.dmp
+LMDRPRDERDWDB_ASM_DATA_DISK9/asm_dpump/prod_data_expdb_nw104.dmp
+LMDRPRDERDWDB_ASM_DATA_DISK8/asm_dpump/prod_data_expdb_nw204.dmp
+LMDRPRDERDWDB_ASM_DATA_DISK9/asm_dpump/prod_data_expdb_nw105.dmp
+LMDRPRDERDWDB_ASM_DATA_DISK8/asm_dpump/prod_data_expdb_nw242.dmp
+LMDRPRDERDWDB_ASM_DATA_DISK9/asm_dpump/prod_data_expdb_nw143.dmp
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Data Mining
and Real Application Testing options
ORA-39001: invalid argument value
ORA-39000: bad dump file specification
ORA-31640: unable to open dump file "+LMDRSITERDB_ASM_DATA1/dpump/PROD_data_expdb_NW01.dmp" for read
ORA-17503: ksfdopn:2 Failed to open file +LMDRSITERDB_ASM_DATA1/dpump/PROD_data_expdb_NW01.dmp
ORA-15173: entry 'PROD_data_expdb_NW01.dmp' does not exist in directory 'dpump'
Thanks,
Mishra wrote:
> I exported the dump file with %U and it generated suffixes 101 to 243, but the import searches for files numbered from 01 in the other environment.
Use the SAME format (with the %U) that you used for the export.
This:
"+LMDRSITERDB_ASM_DATA1/dpump/PROD_data_expdb_NW01.dmp"
does not look anything like these:
"+LMDRPRDERDWDB_ASM_DATA_DISK9/asm_dpump/prod_data_expdb_nw101.dmp" :p
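The reply above is the whole fix: Data Pump performs the %U substitution itself, so the import must be given the same dumpfile template, not literal file names. A minimal sketch of the matching pair (the passwords and the directory object names EXP_ASM/IMP_ASM are placeholders, not from the post):

```shell
# Each parallel export worker numbers its own files (hence 101..., 201...).
expdp expdpuser/*** full=y directory=EXP_ASM \
      dumpfile=prod_data_expdb_nw%U.dmp logfile=expfull_nw.log

# The import must repeat the identical template; note the failing import
# above also used a different base name and casing than the export wrote.
impdp expdpuser/*** full=y directory=IMP_ASM \
      dumpfile=prod_data_expdb_nw%U.dmp logfile=impfull_nw.log
```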
Similar Messages
-
Encountering ORA-39000: bad dump file specification while using datapump
Hello,
I am trying to use Data Pump to take an export of a schema (metadata only). However, I want the dump file to be named after the date and time of the export.
When I use the following command, the job runs perfectly:
expdp system@***** dumpfile=expdp-`date '+%d%m%Y_%H%M%S'`.dmp directory=EXP_DP logfile=expdp-`date '+%d%m%Y_%H%M%S'`.log SCHEMAS=MARTMGR CONTENT=METADATA_ONLY
However, I want to run the export using a parfile, but if I use the parfile below, I encounter the following errors.
USERID=system@*****
DIRECTORY=EXP_DP
SCHEMAS=TEST
dumpfile=expdp-`date '+%d%m%Y_%H%M%S'`
LOGFILE=MARTMGR.log
CONTENT=METADATA_ONLY
expdp parfile=martmgr.par
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning option
ORA-39001: invalid argument value
ORA-39000: bad dump file specification
ORA-39157: error appending extension to file "expdp-`date '+%d%m%Y_%H%M%S'`"
ORA-07225: sldext: translation error, unable to expand file name.
Additional information: 7217
How do I append the date and time to the dump file name when using a parfile?
Thanks
Rohit
I got the below error while using the dumpfile parameter as dumpfile=dump_$export_date.dmp
Export: Release 11.2.0.2.0 - Production on Thu Feb 7 16:46:22 2013
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning option
ORA-39001: invalid argument value
ORA-39000: bad dump file specification
ORA-39157: error appending extension to file "dump_$export_date.dmp"
ORA-07225: sldext: translation error, unable to expand file name.
Additional information: 7217
The script I used is as follows:
export ORACLE_HOME=/orcl01/app/oracle/product/11.2.0.2
export ORACLE_SID=$1
export PATH=$PATH:$ORACLE_HOME/bin
echo $ORACLE_HOME
export export_date=`date '+%d%m%Y_%H%M%S'`
echo $export_date
expdp parfile=$2
The parfile is:
DIRECTORY=EXP_DP
SCHEMAS=MARTMGR
dumpfile=dump_$export_date.dmp
LOGFILE=MARTMGR.log
CONTENT=METADATA_ONLY -
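The root cause in both attempts above is that backtick and $variable substitution is performed by the shell, never by expdp: the parfile is read by the utility itself, so the expressions arrive literally and fail filename expansion (ORA-39157/ORA-07225). A workaround sketch is to expand the timestamp in the shell and write the parfile on the fly (the /tmp/martmgr.par path is an assumption):

```shell
#!/bin/sh
# Expand the timestamp in the shell, then write a parfile that already
# contains the final literal values before expdp ever reads it.
export_date=`date '+%d%m%Y_%H%M%S'`

cat > /tmp/martmgr.par <<EOF
DIRECTORY=EXP_DP
SCHEMAS=MARTMGR
DUMPFILE=expdp-${export_date}.dmp
LOGFILE=expdp-${export_date}.log
CONTENT=METADATA_ONLY
EOF

# The parfile now holds e.g. DUMPFILE=expdp-07022013_164622.dmp
# expdp system@***** parfile=/tmp/martmgr.par   # uncomment on a real system
```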
ORA-39000: bad dump file specification in 11gRAC environment...
Hi All,
I have two versions of the database. The source database is a non-RAC environment and the target is a RAC environment. I want to perform expdp from 10g and impdp into the 11g RAC environment.
The expdp database is Oracle 10.2.0.4 (Enterprise Edition) and the impdp database is Oracle 11.2.0.1 (EE) in a RAC environment.
When I run impdp it gives this error:
ORA-39001: invalid argument value
ORA-39000: bad dump file specification
ORA-31640: unable to open dump file "/home/oracle/try_1.dmp" for read
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
and the below is the syntax for impdp:
impdp system/<password>@<service_name> remap_schema=scott10g:scott11g dumpfile=try_1.dmp logfile=scott11g.log directory=mydir
Is there any specific method to do impdp in RAC?
Thanks...
see the below method:
~~~~~~~~~~~~~~
SQL> select * from dba_directories;
SYS DATA_PUMP_DIR /u01/app/oracle/admin/prod_db/dpdump/
SYS DPUMP_DIR /u01/app/oracle/product/10.2.0/db_1/impdata_dir
SQL> !
[oracle@temple01 ~]$ expdp system/manager dumpfile=try_1.dmp logfile=try.log directory=data_pump_dir schemas=tom
Export: Release 10.2.0.1.0 - Production on Monday, 15 November, 2010 17:08:53
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01": system/******** dumpfile=try_1.dmp logfile=try.log directory=data_pump_dir schemas=tom
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 0 KB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Master table "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:
/u01/app/oracle/admin/prod_db/dpdump/try_1.dmp
Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully completed at 17:09:03
[oracle@temple01 ~]$ cd /u01/app/oracle/admin/prod_db/dpdump/
[oracle@temple01 dpdump]$ impdp system/manager dumpfile=try_1.dmp logfile=try_imp.log directory=data_pump_dir remap_schema=tom:test1
Import: Release 10.2.0.1.0 - Production on Monday, 15 November, 2010 17:10:33
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Master table "SYSTEM"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_FULL_01": system/******** dumpfile=try_1.dmp logfile=try_imp.log directory=data_pump_dir remap_schema=tom:test1
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Job "SYSTEM"."SYS_IMPORT_FULL_01" successfully completed at 17:10:34
Edited by: CKPT on Nov 15, 2010 5:20 PM -
ORA-31619: invalid dump file "D:\oracle-export\mitest-19-05-2009.dmp"
HI
I took a logical backup (export) on Linux and am importing it on Windows 2003 Server.
export command:
$expdp miqa/miqa schemas=miqa directory=backup_dir CONTENT=all dumpfile=miqa-`date +%d-%m-%Y`.dmp CONTENT=all logfile=miqa-`date +%d-%m-%Y`.log'
When I import on Windows 2003 Server I get the following error.
grant create any directory to rnddb;
create or replace directory backup_dir as 'd:\export';
GRANT READ, WRITE ON DIRECTORY backup_dir TO rnddb;
c:\>impdp rnddb/rnddb directory=backup_dir dumpfile=mitest-19-05-2009.dmp SCHEMAS=mitest REMAP_SCHEMA=mitest:rnddb CONTENT=all logfile=mitest-19-05-2009.log
I am getting the following error.
Import: Release 10.1.0.2.0 - Production on Wednesday, 17 June, 2009 14:45
Copyright (c) 2003, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - Produ
tion
With the Partitioning, OLAP and Data Mining options
ORA-39001: invalid argument value
ORA-39000: bad dump file specification
ORA-31619: invalid dump file "D:\oracle-export\mitest-19-05-2009.dmp"
Please could you help me find where I made a mistake?
Thanks
Settu Gopal
ORA-31619: invalid dump file "string"
Cause: Either the file was not generated by Export or it was corrupted.
Action: If the file was indeed generated by Export, report this as an Import bug and submit the export file to Oracle Customer Support.
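One transfer-related cause worth ruling out first: moving the .dmp in FTP ASCII mode rewrites bytes (line-ending translation) and produces exactly this ORA-31619 on the other side. A binary-safe transfer sketch; the host name, credentials, and paths here are hypothetical:

```shell
# scp/sftp are always binary-safe:
scp mitest-19-05-2009.dmp oracle@winhost:/export/

# If plain ftp must be used, switch to binary mode before the put:
ftp -n winhost <<'EOF'
user oracle secret
binary
put mitest-19-05-2009.dmp
bye
EOF
```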
How was the dump file transferred between Linux and Windows server? -
Hello,
I am getting the following error when I import the DB. The background: I had successfully imported the DB from the dmp file using the command below, after first unzipping the zzz.dmp.gz file. However, I needed to reinstall my Oracle 11g XE DB. The unzipped zzz.dmp.gz file was still in my folder, but I accidentally opened it in Notepad++. Now, every time I unzip this .dmp.gz file, the resulting .dmp is bad and throws the error below. Is there a solution? I am using Windows 7.
SQL> $impdp "' / as sysdba'" parfile=C:\xxx\yyy.prfl
Import: Release 11.2.0.2.0 - Production on Fri Jan 4 09:30:21 2013
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Express Edition Release 11.2.0.2.0 - Productio
n
ORA-39001: invalid argument value
ORA-39000: bad dump file specification
ORA-31640: unable to open dump file "c:\xxx\zzz.dmp" for read
ORA-27041: unable to open file
OSD-04002: unable to open file
O/S-Error: (OS 5) Access is denied.
SQL>
979963 wrote:
> O/S-Error: (OS 5) Access is denied.
The line above can indicate that another thread holds the file open.
You could resort to the typical Microsoft "solution" and give the system the three-fingered salute: Ctrl-Alt-Del and reboot. -
Impdp in RAC : can not specify dump file in node 1
I have an Oracle RAC cluster with 2 nodes: rac1 and rac2.
I created the directory on both nodes and copied the dump file to the directory on rac1.
When I run impdp from rac1, it does not use the dump file on rac1; it looks for the dump file on rac2, so it throws this error:
ORA-39000: bad dump file specification
ORA-31640: unable to open dump file "/u01/datapump/aa.DMP" for read
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory.
I had to copy the dump file to rac2; then the impdp ran normally.
What is the matter with RAC clustering? How do I configure RAC so that impdp reads the dump file on the node I log in to?
The dump file was exported on a separate standalone server.
I created the directory at the OS level first, then created the directory object in the database, then copied the dump file.
I think this problem relates to the cluster configuration; impdp does not check for the dump file on both nodes.
I found this problem when I had not created the datapump directory on rac2: it always returned an error that the log file could not be created. Once I created the datapump directory on rac2, impdp used only the path on rac2; the log file was created on rac2 and impdp did not use the datapump directory on rac1. -
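The behaviour described above is expected when Data Pump worker processes run on an instance that cannot see the file. Two workaround sketches, assuming the path from the post (the hostnames are illustrative):

```shell
# 1) Mirror the dump under the same path on every node:
scp /u01/datapump/aa.DMP rac2:/u01/datapump/

# 2) Better: put /u01/datapump on storage mounted by both nodes
#    (NFS/ACFS/shared filesystem), so any worker finds the same file.
```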
Dear Experts,
Can we read export dumps created using expdp with the ORACLE_DATAPUMP access driver (11.2.0.3)?
For example:
1) Export a table using expdp
expdp testuser/testuser DIRECTORY=TEST_DUR tables=TABLE_1 DUMPFILE=test.dmp LOGFILE=test.log
2) Read the dump file created in step-1
CREATE TABLE my_test(
id number)
ORGANIZATION EXTERNAL (
TYPE ORACLE_DATAPUMP
DEFAULT DIRECTORY test_dir
LOCATION ('test.dmp')
)REJECT LIMIT UNLIMITED;
select * from my_test;
ERROR at line 1:
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-31619: invalid dump file "/u08/export/test.dmp"
Here's the version info:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
PL/SQL Release 11.2.0.3.0 - Production
CORE 11.2.0.3.0 Production
TNS for Linux: Version 11.2.0.3.0 - Production
NLSRTL Version 11.2.0.3.0 - Production
Is it possible to read the dump file like this?
Appreciate your answer in this regard.
Thanks
P
Correct - from the Oracle docs:
Note:
When Data Pump uses external tables as the data access mechanism, it uses the ORACLE_DATAPUMP access driver. However, it is important to understand that the files that Data Pump creates when it uses external tables are not compatible with files created when you manually create an external table using the SQL CREATE TABLE ... ORGANIZATION EXTERNAL statement. One of the reasons for this is that a manually created external table unloads only data (no metadata), whereas Data Pump maintains both data and metadata information for all objects involved.
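To illustrate the distinction, here is a sketch of the supported round trip: unload with an ORACLE_DATAPUMP external table, then read that same file back. This is untested against a live instance, and the directory, table, and file names (test_dir, my_unload, my_read, ext.dmp) are illustrative, not from the post:

```shell
sqlplus -s testuser/testuser <<'EOF'
-- Unload: CTAS through the ORACLE_DATAPUMP driver writes ext.dmp
CREATE TABLE my_unload
  ORGANIZATION EXTERNAL (
    TYPE ORACLE_DATAPUMP
    DEFAULT DIRECTORY test_dir
    LOCATION ('ext.dmp'))
  AS SELECT id FROM table_1;

-- Read back: an external table over a file written this way works;
-- a file written by expdp itself does not, because it also carries metadata.
CREATE TABLE my_read (id NUMBER)
  ORGANIZATION EXTERNAL (
    TYPE ORACLE_DATAPUMP
    DEFAULT DIRECTORY test_dir
    LOCATION ('ext.dmp'))
  REJECT LIMIT UNLIMITED;

SELECT COUNT(*) FROM my_read;
EOF
```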
Cheers,
Harry -
I am getting an error ORA-39143 when importing a dump file
I get ORA-39143: dump file "c:\oraload\expdat.dmp" may be an original export dump file
when importing a dump file created by 9.2 exp into 10.2 database using impdp
I need to use the impdp utility with the table_exists_action option which doesn't appear to exist in imp
What is the proper mix of versions/flags/options to export from a 9.2 database and import into an existing 10.2 database while replacing existing data?
thanks
Well, it's in the link I provided:
"When tables are manually created before data is imported, the CREATE TABLE statement in the export dump file will fail because the table already exists. To avoid this failure and continue loading data into the table, set the import parameter IGNORE=y. Otherwise, no data will be loaded into the table because of the table creation error."
Original imp tries to append (insert); there is no imp parameter like table_exists_action=truncate/replace. So either:
1. drop all the tables (dynamic sql), so there will be no errors because of already existent tables
or
2. truncate all the tables (dynamic sql) + use IGNORE=Y
(or
3. drop the whole schema in target database, recreate it, then do the import) -
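For completeness, a hedged sketch of option 2 with classic imp (ORA-39143 means the 9.2 dump must be read by imp, not impdp; the user names and log file name are assumptions):

```shell
# After truncating the target tables, reload the 9.2 export with classic
# imp; IGNORE=y suppresses the "table already exists" creation errors so
# the rows are still inserted.
imp system/manager file=c:/oraload/expdat.dmp fromuser=scott touser=scott ignore=y log=imp.log
```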
Bad character set in dump file?
(Sorry for re-posting this question; posting in in the "Globalization" section was apparently a bad idea as nobody replied.)
Hi,
IMPDP-ing a dump file that someone has handed me over into Oracle XE results in special characters, i.e. Umlauts, being messed up.
In a hex editor, the dump file shows
a) the token WE8MSWIN1252 near the beginning, but
b) Umlauts obviously being encoded in DOS 850, for example "König" is encoded as 4b 94(!) 6e 69 67.
Does this prove that the dump file is badly formatted and that I have to resign myself to the complicated approach mentioned at the end of <https://forums.oracle.com/message/4142008#4142008>?
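The code-page observation in (b) can be double-checked with iconv (a quick OS-level sanity check, not Oracle tooling):

```shell
# 0x94 is "ö" in DOS code page 850; Windows-1252 (WE8MSWIN1252) uses 0xf6.
printf 'König' | iconv -f UTF-8 -t CP850  | od -An -tx1   # 4b 94 6e 69 67
printf 'König' | iconv -f UTF-8 -t CP1252 | od -An -tx1   # 4b f6 6e 69 67
```

So bytes encoded as CP850 inside a file whose header claims WE8MSWIN1252 would indeed indicate the data was stored pass-through in the wrong encoding.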
TIA
TA
>Pl post exact OS and database versions, along with the complete export
>and import commands used, and the first 15 lines of the export and import
>log files
Is this necessary to judge whether the dump file is consistent? But ok, here we go:
OS: Win XP
database versions: see log files
export command: see log file (further details, if any, unknown)
import command: impdp with parameter file
dumpfile=ddd.dmp
TABLES=MWEUSER.%
TABLE_EXISTS_ACTION=REPLACE
>first 15 lines of the export log file
Export: Release 11.2.0.1.0 - Production on Sat Jun 29 23:30:51 2013
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Release 11.2.0.1.0 - Production
Starting "SYSTEM"."SYS_EXPORT_FULL_01": system/******** directory=expdp_dir dumpfile=ddd.dmp full=yes logfile=ddd.log reuse_dumpfiles=y
Estimate in progress using BLOCKS method...
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 363.8 MB
Processing object type DATABASE_EXPORT/TABLESPACE
Processing object type DATABASE_EXPORT/PROFILE
Processing object type DATABASE_EXPORT/SYS_USER/USER
Processing object type DATABASE_EXPORT/SCHEMA/USER
Processing object type DATABASE_EXPORT/ROLE
Processing object type DATABASE_EXPORT/GRANT/SYSTEM_GRANT/PROC_SYSTEM_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/GRANT/SYSTEM_GRANT
>first 15 lines of the import log file
Import: Release 11.2.0.2.0 - Production on Mo Jul 22 22:18:29 2013
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Express Edition Release 11.2.0.2.0 - Production
Master table "SYS"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "SYS"."SYS_IMPORT_TABLE_01": sys/******** AS SYSDBA parfile=f:\ddd\repxe.par
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
. . imported "MWEUSER"."TABLE_A" 111.8 MB 1994065 rows
. . imported "MWEUSER"."TABLE_B" 23.46 MB 1559342 rows
. . imported "MWEUSER"."TABLE_C" 23.75 MB 159752 rows
. . imported "MWEUSER"."TABLE_D" 14.11 MB 10453 rows
. . imported "MWEUSER"."TABLE_E" 9.239 MB 58690 rows -
ORA-6522 -- module: Bad object file type
Hello!
I have created a external procedure. When I call it I get the following message:
ORA-06522: '/sys/app/oracle/product/9.2.0/lib/shell.o' is not a valid load
module: Bad object file type
I cannot find anything explaining what "Bad object file type" means. Do you have any suggestions?
Hi,
Version: 11.2.0.3
I am trying to import a schema into an Exadata machine using impdp.
impdp fails to open a dump file.
Starting "ICCS"."SYS_IMPORT_FULL_01": userid=iccs/******** full=y TABLE_EXISTS_ACTION=APPEND
DIRECTORY=EXP_FOR_EXADATA PARALLEL=4 DUMPFILE=exp_dataONLY_iccs_25062012_%U.dmp LOGFILE=exp_dataONLY_iccs_25062012.log
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
ORA-31693: Table data object "ICCS"."INF_CALL_ERR" failed to load/unload and is being skipped due to error:
ORA-31640: unable to open dump file "/restore/psdwh/exp_dataONLY_iccs_25062012_04.dmp" for read
ORA-19505: failed to identify file "/restore/psdwh/exp_dataONLY_iccs_25062012_04.dmp"
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
ORA-31693: Table data object "ICCS"."CC_AGG3" failed to load/unload and is being skipped due to error:
ORA-31640: unable to open dump file "/restore/psdwh/exp_dataONLY_iccs_25062012_02.dmp" for read
ORA-19505: failed to identify file "/restore/psdwh/exp_dataONLY_iccs_25062012_02.dmp"
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
. . imported "ICCS"."CUST_S" 19.78 MB 580379 rows
. . imported "ICCS"."CUST_SUM_CD" 16.79 MB 843460 rows
ORA-31693: Table data object "ICCS"."BILL_CUST_SALE_TMP_C" failed to load/unload and is being skipped due to error:
ORA-31640: unable to open dump file "/restore/psdwh/exp_dataONLY_iccs_25062012_02.dmp" for read
ORA-19505: failed to identify file "/restore/psdwh/exp_dataONLY_iccs_25062012_02.dmp"
.....
I verified that the file exists in the file system:
[oracle@dm01db01 psdwh]$ ls -l /restore/psdwh/exp_dataONLY_iccs_25062012_04.dmp
-rwxrwxrwx 1 oracle oinstall 701464576 Jun 25 10:36 /restore/psdwh/exp_dataONLY_iccs_25062012_04.dmp
I verified that the file has the same permissions as the oracle user:
dm01db01{oracle} /home/oracle >id
uid=1000(oracle) gid=1001(oinstall) groups=101(fuse),1001(oinstall),1002(dba)
I also verified that I have the right permissions on the directory above:
cd /restore
ls -l
drwxrwxrwx 3 oracle oinstall 9216 Jun 27 19:15 psdwh
I verified that the directory exists:
SQL> select * from dba_directories;
OWNER DIRECTORY_NAME DIRECTORY_PATH
SYS EXP_FOR_EXADATA /restore/psdwh
I have read and write permission on the directory granted to PUBLIC, and as you can see some tables were imported successfully:
. . imported "ICCS"."CUST_S" 19.78 MB 580379 rows
. . imported "ICCS"."CUST_SUM_CD" 16.79 MB 843460 rows
Please note that I get this error every time I use expdp with PARALLEL and %U in the dumpfile name.
e.g:
expdp userid=iccs/iccs@psdwh content=data_only EXCLUDE=statistics DIRECTORY=EXP_FOR_EXADATA
PARALLEL=4 DUMPFILE=exp_dataONLY_iccs_25062012_%U.dmp LOGFILE=exp_dataONLY_iccs_25062012.log
If I export without PARALLEL and %U, i.e. with just one dump file, I have no problem at all importing it.
Any Advice ?
Thanks
The issue is most likely that the directory is not accessible from all nodes in the RAC. If it works with parallel=1 but not parallel>1, then other nodes are being used to import the data; since the file exists on this node, please check that the disk is accessible from all nodes in the rack.
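Building on that point, on 11.2 you can also pin the whole job to the connected instance so no worker on another node tries to open the dump. A sketch reusing the parameters from the post (CLUSTER=N is the relevant addition; the password and log file name are placeholders):

```shell
impdp iccs/*** full=y table_exists_action=APPEND directory=EXP_FOR_EXADATA \
      parallel=4 cluster=n \
      dumpfile=exp_dataONLY_iccs_25062012_%U.dmp \
      logfile=imp_dataONLY_iccs_25062012.log
```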
Dean -
ORA-39095: Dump file space has been exhausted
Hello Friends,
DB Version : 10.2.0.4
OS : SUN Solaris 10
We have a server with 10 RAC databases running on it. From 1st node we have a scheduled export job for all the databases. All the databases are using a common script and we are passing parameters (example as below)
/opt/oracle/admin/DBA/dbatools/expdpdb.sh -d w111pr1 (here w111pr1 is the SID)
Out of the 10 databases, the full export job works fine for 9. Only for 1 database (which I mentioned above) does the export fail with the error below:
ORA-39095: Dump file space has been exhausted: Unable to allocate 16384 bytes
Job "OPS$ORACLE"."SYS_EXPORT_FULL_39" stopped due to fatal error at 00:06:13
After analyzing I found that the script does not use the '%U' option for the dump file, which might cause the problem. But if this were the real issue, it should fail for the other 9 databases also; even without the '%U' option, it works for the other databases.
One more thing: all the export backups run within half an hour of each other. Do we need to do anything with OS processes?
Anyone faced this kind of issue earlier? If so please let me know what can be a better way to handle it.
Thanks in advance...
P.S.: The export executes fine when issued manually.
What are your dumpfile, parallel and filesize parameter values in the shell script? Does the shell script have one-size-fits-all parameter settings? Also, are there any other errors? Can you share the expdp command?
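One way the missing %U causes exactly this error: with a single fixed DUMPFILE name (or a FILESIZE cap) the job has nowhere to extend once the file is full. A hedged sketch of the safer invocation (the directory and size values are assumptions):

```shell
# %U lets Data Pump create full_01.dmp, full_02.dmp, ... as space is
# needed, so ORA-39095 is avoided even when FILESIZE caps each piece.
expdp "/ as sysdba" full=y directory=DATA_PUMP_DIR \
      dumpfile=full_%U.dmp filesize=4g logfile=full.log
```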
Edited by: rjamya on Jun 7, 2012 4:41 AM -
How to avoid ORA-01918 and ORA-00959 while importing a dump file
While importing a dump file, I encountered the ORA-01918 (user does not exist) and ORA-00959 (tablespace does not exist) errors. This occurred because the user and tablespace mentioned in the dump file did not exist in my database.
Is there a way to avoid these errors via specifying some values to IMP parameters?
Appreciate help.
user7864190 wrote:
> Is there a way to avoid these errors via specifying some values to IMP parameters?
If you import with the FULL=Y parameter (and of course if you exported with FULL=Y), imp will create the tablespaces and users:
imp full=y file=xxxxxxxxxx.dmp
On the other hand, it could be better to create the tablespaces manually (according to your disk structure).
Btw, this is wrong forum. :)
Regards
Gokhan -
Hello,
Starting yesterday I started getting the BSOD. I have uploaded my dump files to this dropbox account:
www.dropbox.com/sh/w3ryago8pfcw2y2/AAAR5i6Crx5q-p--xdESJjnMa
There are lots of them as my PC crashed many times.
I attempted to analyze them with windbg but all I can see is:
Unable to load image \SystemRoot\system32\ntkrnlpa.exe, Win32 error 0n2
*** WARNING: Unable to verify timestamp for ntkrnlpa.exe
Also I ran sfc /scannow and it found no integrity violations.
After the first BSOD I was barely able to get Windows to load without another immediate crash. I couldn't boot my PC last night but today seems okay so far. CPU temp was fine, actually quite cool as I had my window open right by the PC with the side off
the case and it was cold last night. Today, so far, seems to be running okay...so far anyways..
Windows update recently installed a driver for my on-board video card (Nvidia GeForce 6150n) on the 18th I think as I started using a 2nd monitor using the on-board card in addition to my PCI Express Ati Radeon HD. To use the second screen, I switched
the primary video in CMOS to on-board instead of PCI-E 16x so the on-board Nvidia card would be enabled.
I also installed brand new ram (4gb DDR2) about a month ago
and a new power supply 2 months ago. The power supply is not "exactly" the same voltage as the one that was in there but it's dang close. But I don't have any power leeching components.
Other than that nothing new has happened. After I got the crash, I tried to enter CMOS yesterday and it glitched out on me twice. I hope it's not my motherboard..
I am willing to do a system re-format as I am running Windows 7 32bit on a 64bit AMD so I wouldn't mind having 64 bit Windows anyways but I'd like to know what the problem is before I do this. Thank you.
Update: Just did a MalwareByte's Anti-Malware scan and no malware but found these items:
Scan Date: 5/22/2014
Scan Time: 6:08:00 PM
Logfile:
OS: Windows 7 Service Pack 1
CPU: x86
File System: NTFS
Scan Type: Threat Scan
Result: Completed
Objects Scanned: 261539
Time Elapsed: 8 min, 10 sec
Processes: 0
(No malicious items detected)
Modules: 0
(No malicious items detected)
Registry Keys: 0
(No malicious items detected)
Registry Values: 0
(No malicious items detected)
Registry Data: 0
(No malicious items detected)
Folders: 2
PUP.Optional.CrossRider.A, C:\Users\Justin\AppData\Local\Google\Chrome\User Data\Default\Extensions\ecoccdldklbjglocbgbfpmpehjegkode, , [dd97c58f413a74c2503e7106649ec23e],
PUP.Optional.CrossRider.A, C:\Users\Justin\AppData\Local\Google\Chrome\User Data\Default\Extensions\ecoccdldklbjglocbgbfpmpehjegkode\0.1_0, , [dd97c58f413a74c2503e7106649ec23e],
Files: 9
PUP.Optional.OptimumInstaller.A, C:\Users\Justin\Downloads\Player-Chrome.exe, , [6b090a4a4437d363a05f361717ea718f],
PUP.Optional.Outbrowse, C:\Users\Justin\Downloads\install-flashplayer.exe, , [561ee173fe7d1125edf4d9a443be5aa6],
PUP.Optional.CrossRider.A, C:\Users\Justin\AppData\Local\Google\Chrome\User Data\Default\Extensions\ecoccdldklbjglocbgbfpmpehjegkode\0.1_0\icon-128.png, , [dd97c58f413a74c2503e7106649ec23e],
PUP.Optional.CrossRider.A, C:\Users\Justin\AppData\Local\Google\Chrome\User Data\Default\Extensions\ecoccdldklbjglocbgbfpmpehjegkode\0.1_0\icon-16.png, , [dd97c58f413a74c2503e7106649ec23e],
PUP.Optional.CrossRider.A, C:\Users\Justin\AppData\Local\Google\Chrome\User Data\Default\Extensions\ecoccdldklbjglocbgbfpmpehjegkode\0.1_0\icon-48.png, , [dd97c58f413a74c2503e7106649ec23e],
PUP.Optional.CrossRider.A, C:\Users\Justin\AppData\Local\Google\Chrome\User Data\Default\Extensions\ecoccdldklbjglocbgbfpmpehjegkode\0.1_0\jquery-1.10.2.min.js, , [dd97c58f413a74c2503e7106649ec23e],
PUP.Optional.CrossRider.A, C:\Users\Justin\AppData\Local\Google\Chrome\User Data\Default\Extensions\ecoccdldklbjglocbgbfpmpehjegkode\0.1_0\manifest.json, , [dd97c58f413a74c2503e7106649ec23e],
PUP.Optional.CrossRider.A, C:\Users\Justin\AppData\Local\Google\Chrome\User Data\Default\Extensions\ecoccdldklbjglocbgbfpmpehjegkode\0.1_0\tweetwiki.js, , [dd97c58f413a74c2503e7106649ec23e],
PUP.Optional.CrossRider.A, C:\Users\Justin\AppData\Local\Google\Chrome\User Data\Default\Extensions\ecoccdldklbjglocbgbfpmpehjegkode\0.1_0\_DS_Store, , [dd97c58f413a74c2503e7106649ec23e],
Physical Sectors: 0
(No malicious items detected)
Hi,
All of the attached DMP files are of the
WHEA_UNCORRECTABLE_ERROR (124) bug check.
A fatal hardware error has occurred. This fatal error displays data from the Windows Hardware Error Architecture (WHEA).
If we run an !errrec on the 2nd parameter of the bug check (address of the WER structure) we get the following:
BugCheck 124, {0, 867d2024, b2000010, 10c0f}
===============================================================================
Section 2 : x86/x64 MCA
Descriptor @ 867d2134
Section @ 867d22bc
Offset : 664
Length : 264
Flags : 0x00000000
Severity : Fatal
Error : BUSLG_OBS_ERR_*_NOTIMEOUT_ERR (Proc 0 Bank 4)
Status : 0xb200001000010c0f
This is a pretty complicated error compared to other possibilities it could have been, as it implies you have a hardware fault somewhere along the bus (not very definitive on its own).
In our case however, we can go a bit deeper with this one:
0: kd> .formats 0xb200001000010c0f
Evaluate expression:
Hex: b2000010`00010c0f
Decimal: -5620492266238833649
Octal: 1310000001000000206017
Binary: 10110010 00000000 00000000 00010000 00000000 00000001 00001100 00001111
Chars: ........
Time: ***** Invalid FILETIME
Float: low 9.61613e-041 high -7.45059e-009
Double: -7.41853e-068
Now that we have this info, we'd refer to the AMD manual:
63 VAL Valid
62 OVER Status Register Overflow
61 UC Uncorrected Error
60 EN Error Condition Enabled
59 MISCV Miscellaneous-Error Register Valid
58 ADDRV Error-Address Register Valid
57 PCC Processor-Context Corrupt
56–32 Other Information
31–16 Model-Specific Error Code
15–0 MCA Error Code
In our specific case, bit 63 of the status code is set, so it is valid. Bit 62 of the status code is
unset, therefore there's no overflow. Bit 61 implies an uncorrected error has occurred, and Bit 60 implies an error condition was enabled. Last but not least, Bit 57 is
set and implies a corrupted processor context.
If we take a look at the last 15 bits of the binary:
Binary: [trimming] 00001100 00001111
In our case, the MSB in binary (00001100 ), implies a bus error occurred (as I mentioned above). We need to further decode the bits to understand more:
[0000]1100 00001111
0000 = 1PPT RRRR IILL.
PP = Participation Processor, T = Timeout,
R = Memory Transaction Type, I = Memory and/or I/O, and finally
L = Cache Level.
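The bit tests above can be reproduced with ordinary shell arithmetic (the status word is the one reported by !errrec for Proc 0 Bank 4):

```shell
status=$(( 0xb200001000010c0f ))
echo "VAL  (bit 63): $(( (status >> 63) & 1 ))"    # 1 = register contents valid
echo "OVER (bit 62): $(( (status >> 62) & 1 ))"    # 0 = no status overflow
echo "UC   (bit 61): $(( (status >> 61) & 1 ))"    # 1 = uncorrected error
echo "EN   (bit 60): $(( (status >> 60) & 1 ))"    # 1 = error condition enabled
echo "PCC  (bit 57): $(( (status >> 57) & 1 ))"    # 1 = processor context corrupt
printf 'MCA error code (bits 15-0): 0x%04x\n' $(( status & 0xffff ))   # 0x0c0f
```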
With this said, it appears we actually have a faulty motherboard as the processor seems fine, etc. The only other thing I see possible is bad RAM, which you can figure out whether or not is the case by running no less than ~8 passes of Memtest:
Memtest86+:
Download Memtest86+ here:
http://www.memtest.org/
Which should I download?
You can either download the pre-compiled ISO that you would burn to a CD and then boot from the CD, or you can download the auto-installer for the USB key. What this will do is format your USB drive, make it a bootable device, and then install the necessary
files. Both do the same job, it's just up to you which you choose, or which you have available (whether it's CD or USB).
Do note that some older generation motherboards do not support USB-based booting, therefore your only option is CD (or Floppy if you really wanted to).
How Memtest works:
Memtest86 writes a series of test patterns to most memory addresses, reads back the data written, and compares it for errors.
The default pass does 9 different tests, varying in access patterns and test data. A tenth test, bit fade, is selectable from the menu. It writes all memory with zeroes, then sleeps for 90 minutes before checking to see if bits have changed (perhaps because
of refresh problems). This is repeated with all ones for a total time of 3 hours per pass.
Many chipsets can report RAM speeds and timings via SPD (Serial Presence Detect) or EPP (Enhanced Performance Profiles), and some even support changing the expected memory speed. If the expected memory speed is overclocked, Memtest86 can test that memory performance
is error-free with these faster settings.
Some hardware is able to report the "PAT status" (PAT: enabled or PAT: disabled). This is a reference to Intel Performance Acceleration Technology; there may be BIOS settings which affect this aspect of memory timing.
This information, if available to the program, can be displayed via a menu option.
Any other questions, they can most likely be answered by reading this great guide here:
http://forum.canardpc.com/threads/28864-FAQ-please-read-before-posting
Regards,
Patrick
“Be kind whenever possible. It is always possible.” - Dalai Lama -
HOW TO IMPORT DATA MORE THAN ONCE FROM THE SAME EXPORT DUMP FILE?
Before asking my question I'd like to mention that I'm a French speaker,
so my English is a little bit bad. Sorry for that.
My problem is: IMPORT.
How do I import data a SECOND TIME from an export dump file in Oracle?
My export dump file was made successfully (full export), and then I
tried to import the data for the first time.
I got the following message in my log file (I ADDED SOME COMMENTS):
Warning: the objects were exported by L1, not by you
. importing SYSTEM's objects into SYSTEM
REM ************** CREATING TABLESPACES *****
REM *********************************************
IMP-00015: following statement failed because the object already exists:
"CREATE TABLESPACE "USER_DATA" DATAFILE 'E:\ORANT\DATABASE\USR1ORCL.ORA' SI"
"ZE 3145728 DEFAULT STORAGE (INITIAL 10240 NEXT 10240 MINEXTENTS 1 MAX"
"EXTENTS 121 PCTINCREASE 50) ONLINE PERMANENT"
IMP-00015: following statement failed because the object already exists:
"CREATE TABLESPACE "ROLLBACK_DATA" DATAFILE 'E:\ORANT\DATABASE\RBS1ORCL.ORA"
"' SIZE 10485760 DEFAULT STORAGE (INITIAL 10240 NEXT 10240 MINEXTENTS "
"1 MAXEXTENTS 121 PCTINCREASE 50) ONLINE PERMANENT"
etc........
IMP-00017: following statement failed with ORACLE error 1119:
"CREATE TABLESPACE "L1" DATAFILE 'E:\ORADATA\L1.DBF' SIZE 1048576000 "
"DEFAULT STORAGE (INITIAL 10240 NEXT 10240 MINEXTENTS 1 MAXEXTENTS 121 PCTIN"
"CREASE 50) ONLINE PERMANENT"
IMP-00003: ORACLE error 1119 encountered
ORA-01119: error in creating database file 'E:\ORADATA\L1.DBF'
ORA-09200: sfccf: error creating file
OSD-04002: unable to open file
O/S-Error: (OS 3) The system cannot find the path specified
--->etc..........
-- the drive E: with the folder E:\ORADATA didn't exist; I created it
afterwards. See below, before my second IMPORT statement.
REM ********************* CREATING USER *********
REM ********************************************
IMP-00017: following statement failed with ORACLE error 959:
"CREATE USER "L1" IDENTIFIED BY VALUES 'A6E0DAA6865E7627' DEFAULT TABLESPACE"
" "L1" TEMPORARY TABLESPACE "TEMPORARY_DATA""
IMP-00003: ORACLE error 959 encountered
ORA-00959: tablespace 'L1' does not exist
IMP-00017: following statement failed with ORACLE error 959:
"CREATE USER "MLCO" IDENTIFIED BY VALUES '56AC6447B7D50467' DEFAULT TABLESPA"
"CE "MLCO" TEMPORARY TABLESPACE "TEMPORARY_DATA""
IMP-00003: ORACLE error 959 encountered
ORA-00959: tablespace 'MLCO' does not exist
ETC.......
REM ********************* GRANTING ROLES ***********
REM ************************************************
IMP-00017: following statement failed with ORACLE error 1917:
"GRANT ALTER ANY TABLE to "L1" "
IMP-00003: ORACLE error 1917 encountered
ORA-01917: user or role 'L1' does not exist
ETC.........
IMP-00017: following statement failed with ORACLE error 1918:
"ALTER USER "L1" DEFAULT ROLE ALL"
IMP-00003: ORACLE error 1918 encountered
ORA-01918: user 'L1' does not exist
-- that is normal, since the creation of the tablespace failed !!
REM******************************
IMP-00015: following statement failed because the object already exists:
"CREATE ROLLBACK SEGMENT RB_TEMP STORAGE (INITIAL 10240 NEXT 10240 MINEXTENT"
"S 2 MAXEXTENTS 121) TABLESPACE "SYSTEM""
IMP-00015: following statement failed because
. importing SCOTT's objects into SCOTT
IMP-00015: following statement failed because the object already exists:
"CREATE SEQUENCE "EVT_PROFILE_SEQ" MINVALUE 1 MAXVALUE 999999999999999999999"
"999999 INCREMENT BY 1 START WITH 21 CACHE 20 NOORDER NOCYCLE"
ETC............
importing L1's objects into L1
IMP-00017: following statement failed with ORACLE error 1435:
"ALTER SCHEMA = "L1""
IMP-00003: ORACLE error 1435 encountered
ORA-01435: user does not exist
REM *************** IMPORTING TABLES *******************
REM ****************************************************
. importing SYSTEM's objects into SYSTEM
. . importing table "AN1999_BDAT" 243 rows imported
. . importing table "BOPD" 112 rows imported
. . importing table "BOINFO_AP" 49
ETC................
. . importing table "BO_WHF" 2 rows imported
IMP-00015: following statement failed because the object already exists:
"CREATE TABLE "DEF$_CALL" ("DEFERRED_TRAN_DB" VARCHAR2(128),
IMP-00015: following statement failed because the object already exists:
"CREATE SYNONYM "DBA_ROLES" FOR "SYS"."DBA_ROLES""
IMP-00015: following statement failed because the object already exists:
"CREATE SYNONYM "DBA_ERRORS" FOR "SYS"."DBA_ERRORS""
IMP-00008: unrecognized statement in the export file:
. importing L1's objects into L1
IMP-00017: following statement failed with ORACLE error 1435:
"ALTER SCHEMA = "L1""
IMP-00008: unrecognized statement in the export file:
J
Import terminated successfully with warnings.
-------------------------------------
So after analysing this log file, I created the appropriate drives and
folders, since the import statement couldn't see them:
E:\ORADATA G:\ORDATA etc...
And I started to IMPORT ONE MORE TIME, with:
$ IMP73 sys/pssw Full=Y FILE=c:\temp\FOLD_1\data_1.dmp BUFFER=64000
COMMIT=Y INDEXFILE=c:\temp\FOLD_1\BOO_idx.sql
LOG=c:\temp\FOLD_1\BOO_log.LOG DESTROY=Y IGNORE=Y;
After that I could not see the users nor the tables created,
and the following message appeared in the log file:
Warning: the objects were exported by L1, not by you
. . skipping table "AN1999_BDAT"
. . skipping table "ANPK"
. . skipping table "BOAP"
. . skipping table "BOO_D"
ETC.....skipping all the tables
. . skipping table "THIN_PER0"
. . skipping table "UPDATE_TEMP"
Import terminated successfully without warnings.
and only 2 new tablespaces (originally 3) were created, without any
data in them (I checked that in Oracle Storage Manager: the tablespaces
exist with 0.002 used space; originally 60 MB each !!)
So:
How can I import data (with the full import option) successfully
MORE THAN ONE TIME from an exported dump file?
Even if we have to overwrite tablespaces, tables and users.
Thank you very much.

The Member Feedback forum is for suggestions and feedback for OTN Developer Services. This forum is not monitored by Oracle support or product teams, so Oracle product and technology related questions will not be answered. We recommend that you post this thread to the appropriate Database forum.
The main URL is:
http://forums.oracle.com/forums/index.jsp?cat=18
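For what it's worth, a common way to re-run a full import like the one above is to first create the tablespace whose datafile creation failed (ORA-01119), then re-run imp with IGNORE=Y so the "object already exists" (IMP-00015) errors are skipped and rows are loaded into the now-existing objects. This is a sketch only, reusing the names and paths from the log above (adjust for your environment; DESTROY=Y is deliberately dropped, since it overwrites existing datafiles):

```shell
REM Sketch only -- tablespace name, sizes and paths come from the log
REM above and may differ in your environment.
REM 1) Pre-create the tablespace that failed with ORA-01119
REM    (after making sure E:\ORADATA exists on the server):
SQLPLUS "sys/pssw as sysdba"
SQL> CREATE TABLESPACE "L1" DATAFILE 'E:\ORADATA\L1.DBF' SIZE 1048576000;
SQL> EXIT

REM 2) Re-run the full import; IGNORE=Y suppresses the IMP-00015
REM    "already exists" errors instead of skipping the whole object:
IMP73 sys/pssw FULL=Y FILE=c:\temp\FOLD_1\data_1.dmp BUFFER=64000
  COMMIT=Y IGNORE=Y LOG=c:\temp\FOLD_1\BOO_log2.LOG
```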