Data Pump Operation = 'SQL_FILE'
Hello! I need to export metadata from one schema to a SQL file.
DECLARE
  DPJob NUMBER;
BEGIN
  DPJob := DBMS_DataPump.open (Operation => 'SQL_FILE', Job_Mode => 'SCHEMA',
                               Job_Name => 'DPEXPJOB4');
  DBMS_DataPump.add_File (Handle => DPJob,
                          FileName => 'DDL.SQL', Directory => 'DATA_PUMP_DIR',
                          FileType => DBMS_DataPump.KU$_File_Type_SQL_File);
  DBMS_DataPump.metadata_Filter (Handle => DPJob,
                                 Name => 'SCHEMA_EXPR', Value => 'IN (''SCOTT'')');
  DBMS_DataPump.start_Job (Handle => DPJob);
END;
/
but I get an error when I try to execute this script:
ORA-39002: invalid operation
Cause: The current API cannot be executed because of inconsistencies between the API and the current definition of the job. Subsequent messages supplied by DBMS_DATAPUMP.GET_STATUS will further describe the error.
Action: Modify the API call to be consistent with the current job or redefine the job in a manner that will support the specified API.
Status of the Data Pump job:
select * from dba_datapump_jobs;
6 SYS DPEXPJOB4 SQL_FILE SCHEMA DEFINING 1 1 2
How can I get a SQL file for my schema?
Edited by: alligator on 11.11.2009 9:38
I connected as SYSDBA and it works; very happy ;)
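For anyone else hitting this: besides sufficient privileges (here it worked once connected as SYSDBA), note that a SQL_FILE job is an import-style operation; as I understand it, it reads an existing export dump file and writes the extracted DDL into the SQL file. A minimal sketch, assuming a dump of SCOTT already exists in DATA_PUMP_DIR (the file and job names are placeholders):

```sql
DECLARE
  dpjob NUMBER;
BEGIN
  -- SQL_FILE reads an existing export dump and writes its DDL to a script
  dpjob := DBMS_DATAPUMP.open(operation => 'SQL_FILE',
                              job_mode  => 'SCHEMA',
                              job_name  => 'DPSQLJOB1');

  -- source dump file (must already exist in the directory)
  DBMS_DATAPUMP.add_file(handle    => dpjob,
                         filename  => 'scott.dmp',
                         directory => 'DATA_PUMP_DIR',
                         filetype  => DBMS_DATAPUMP.ku$_file_type_dump_file);

  -- target SQL script that will receive the DDL
  DBMS_DATAPUMP.add_file(handle    => dpjob,
                         filename  => 'ddl.sql',
                         directory => 'DATA_PUMP_DIR',
                         filetype  => DBMS_DATAPUMP.ku$_file_type_sql_file);

  DBMS_DATAPUMP.metadata_filter(handle => dpjob,
                                name   => 'SCHEMA_EXPR',
                                value  => 'IN (''SCOTT'')');

  DBMS_DATAPUMP.start_job(dpjob);
  DBMS_DATAPUMP.detach(dpjob);
END;
/
```

The generated ddl.sql can then be reviewed or replayed with SQL*Plus against another database.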
Similar Messages
-
Help needed with Export Data Pump using API
Hi All,
I am trying to do a Data Pump export using the API.
While the export as well as the import works fine from the command line, it fails with the API.
This is the command line program:
expdp pxperf/dba@APPN QUERY=dev_pool_data:\"WHERE TIME_NUM > 1204884480100\" DUMPFILE=EXP_DEV.dmp tables=PXPERF.dev_pool_data
Could you help me with how I should achieve the same as above in the Oracle Data Pump API?
DECLARE
h1 NUMBER;
BEGIN
h1 := dbms_datapump.open('EXPORT','TABLE',NULL,'DP_EXAMPLE10','LATEST');
dbms_datapump.add_file(h1,'example3.dmp','DATA_PUMP_TEST',NULL,1);
dbms_datapump.add_file(h1,'example3_dump.log','DATA_PUMP_TEST',NULL,3);
dbms_datapump.metadata_filter(h1,'NAME_LIST','(''DEV_POOL_DATA'')');
dbms_datapump.start_job(h1);
dbms_datapump.detach(h1);
END;
/
Also, in the API I want to know how to export and import multiple tables (selective tables only) using one single criterion like "WHERE TIME_NUM > 1204884480100".
Yes, I have read the Oracle doc.
I was able to proceed as below, but it gives an error:
============================================================
SQL> SET SERVEROUTPUT ON SIZE 1000000
SQL> DECLARE
2 l_dp_handle NUMBER;
3 l_last_job_state VARCHAR2(30) := 'UNDEFINED';
4 l_job_state VARCHAR2(30) := 'UNDEFINED';
5 l_sts KU$_STATUS;
6 BEGIN
7 l_dp_handle := DBMS_DATAPUMP.open(
8 operation => 'EXPORT',
9 job_mode => 'TABLE',
10 remote_link => NULL,
11 job_name => '1835_XP_EXPORT',
12 version => 'LATEST');
13
14 DBMS_DATAPUMP.add_file(
15 handle => l_dp_handle,
16 filename => 'x1835_XP_EXPORT.dmp',
17 directory => 'DATA_PUMP_DIR');
18
19 DBMS_DATAPUMP.add_file(
20 handle => l_dp_handle,
21 filename => 'x1835_XP_EXPORT.log',
22 directory => 'DATA_PUMP_DIR',
23 filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
24
25 DBMS_DATAPUMP.data_filter(
26 handle => l_dp_handle,
27 name => 'SUBQUERY',
28 value => '(where "XP_TIME_NUM > 1204884480100")',
29 table_name => 'ldev_perf_data',
30 schema_name => 'XPSLPERF'
31 );
32
33 DBMS_DATAPUMP.start_job(l_dp_handle);
34
35 DBMS_DATAPUMP.detach(l_dp_handle);
36 END;
37 /
DECLARE
ERROR at line 1:
ORA-39001: invalid argument value
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
ORA-06512: at "SYS.DBMS_DATAPUMP", line 3043
ORA-06512: at "SYS.DBMS_DATAPUMP", line 3688
ORA-06512: at line 25
============================================================
I have a table called LDEV_PERF_DATA and it is in the schema XPSLPERF.
value => '(where "XP_TIME_NUM > 1204884480100")',
The above is the condition on which I want to filter the data.
However, the below snippet works fine.
============================================================
SET SERVEROUTPUT ON SIZE 1000000
DECLARE
l_dp_handle NUMBER;
l_last_job_state VARCHAR2(30) := 'UNDEFINED';
l_job_state VARCHAR2(30) := 'UNDEFINED';
l_sts KU$_STATUS;
BEGIN
l_dp_handle := DBMS_DATAPUMP.open(
operation => 'EXPORT',
job_mode => 'SCHEMA',
remote_link => NULL,
job_name => 'ldev_may20',
version => 'LATEST');
DBMS_DATAPUMP.add_file(
handle => l_dp_handle,
filename => 'ldev_may20.dmp',
directory => 'DATA_PUMP_DIR');
DBMS_DATAPUMP.add_file(
handle => l_dp_handle,
filename => 'ldev_may20.log',
directory => 'DATA_PUMP_DIR',
filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
DBMS_DATAPUMP.start_job(l_dp_handle);
DBMS_DATAPUMP.detach(l_dp_handle);
END;
============================================================
I don't want to export all contents as above; I want to export data based on some conditions, and only for selective tables.
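For the selective-tables question, two details in the failing block look suspicious to me: the double quotes wrap the whole predicate (turning it into a single quoted identifier), and table_name/schema_name are passed in lowercase while the dictionary stores them in uppercase. A hedged sketch of what I think was intended; the job name is made up, and the column name needs verifying (the command line used TIME_NUM, the block used XP_TIME_NUM):

```sql
DECLARE
  h1 NUMBER;
BEGIN
  h1 := DBMS_DATAPUMP.open(operation => 'EXPORT', job_mode => 'TABLE',
                           job_name  => 'XP_EXPORT_FIXED');

  DBMS_DATAPUMP.add_file(h1, 'x1835_xp_export.dmp', 'DATA_PUMP_DIR');
  DBMS_DATAPUMP.add_file(h1, 'x1835_xp_export.log', 'DATA_PUMP_DIR',
                         filetype => DBMS_DATAPUMP.ku$_file_type_log_file);

  -- limit the job to the selective tables (uppercase, as stored in the dictionary)
  DBMS_DATAPUMP.metadata_filter(h1, 'SCHEMA_EXPR', 'IN (''XPSLPERF'')');
  DBMS_DATAPUMP.metadata_filter(h1, 'NAME_EXPR',
                                'IN (''LDEV_PERF_DATA'', ''DEV_POOL_DATA'')',
                                'TABLE');

  -- one predicate for every table in the job: leave table_name unset,
  -- which (if I read the API docs right) applies the filter to all tables
  DBMS_DATAPUMP.data_filter(handle => h1,
                            name   => 'SUBQUERY',
                            value  => 'WHERE TIME_NUM > 1204884480100');

  DBMS_DATAPUMP.start_job(h1);
  DBMS_DATAPUMP.detach(h1);
END;
/
```

If the predicate should differ per table, call data_filter once per table with table_name/schema_name in uppercase.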
Any help is highly appreciated. -
Data Pump Export error - network mounted path
Hi,
Please have a look at the Data Pump error I am getting while doing an export. I am running version 11g. Please help with your feedback.
I am getting the error due to a network-mounted path for the directory ORALOAD; it works fine with a local path. I have given full permissions on the network path, and UTL_FILE is able to create files, but Data Pump fails with the error messages below.
Oracle 11g
Solaris 10
Getting below error :
ERROR at line 1:
ORA-39001: invalid argument value
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
ORA-06512: at "SYS.DBMS_DATAPUMP", line 3444
ORA-06512: at "SYS.DBMS_DATAPUMP", line 3693
ORA-06512: at line 64
DECLARE
p_part_name VARCHAR2(30);
p_msg VARCHAR2(512);
v_ret_period NUMBER;
v_arch_location VARCHAR2(512);
v_arch_directory VARCHAR2(20);
v_rec_count NUMBER;
v_partition_dumpfile VARCHAR2(35);
v_partition_dumplog VARCHAR2(35);
v_part_date VARCHAR2(30);
p_partition_name VARCHAR2(30);
v_partition_arch_location VARCHAR2(512);
h1 NUMBER; -- Data Pump job handle
job_state VARCHAR2(30); -- To keep track of job state
le ku$_LogEntry; -- For WIP and error messages
js ku$_JobStatus; -- The job status from get_status
jd ku$_JobDesc; -- The job description from get_status
sts ku$_Status; -- The status object returned by get_status
ind NUMBER; -- Loop index
percent_done NUMBER; -- Percentage of job complete
--check dump file exist on directory
l_file utl_file.file_type;
l_file_name varchar2(20);
l_exists boolean;
l_length number;
l_blksize number;
BEGIN
p_part_name:='P2010110800';
p_partition_name := upper(p_part_name);
v_partition_dumpfile := chr(39)||p_partition_name||chr(39);
v_partition_dumplog := p_partition_name || '.LOG';
SELECT COUNT(*) INTO v_rec_count FROM HDB.PARTITION_BACKUP_MASTER WHERE PARTITION_ARCHIVAL_STATUS='Y';
IF v_rec_count != 0 THEN
SELECT
PARTITION_ARCHIVAL_PERIOD
,PARTITION_ARCHIVAL_LOCATION
,PARTITION_ARCHIVAL_DIRECTORY
INTO v_ret_period , v_arch_location , v_arch_directory
FROM HDB.PARTITION_BACKUP_MASTER WHERE PARTITION_ARCHIVAL_STATUS='Y';
END IF;
utl_file.fgetattr('ORALOAD', l_file_name, l_exists, l_length, l_blksize);
IF (l_exists) THEN
utl_file.FRENAME('ORALOAD', l_file_name, 'ORALOAD', p_partition_name ||'_'|| to_char(systimestamp,'YYYYMMDDHH24MISS') ||'.DMP', TRUE);
END IF;
v_part_date := replace(p_partition_name,'P');
DBMS_OUTPUT.PUT_LINE('inside');
h1 := dbms_datapump.open (operation => 'EXPORT',
job_mode => 'TABLE');
dbms_datapump.add_file (handle => h1,
filename => p_partition_name ||'.DMP',
directory => v_arch_directory,
filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
dbms_datapump.add_file (handle => h1,
filename => p_partition_name||'.LOG',
directory => v_arch_directory,
filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
dbms_datapump.metadata_filter (handle => h1,
name => 'SCHEMA_EXPR',
value => 'IN (''HDB'')');
dbms_datapump.metadata_filter (handle => h1,
name => 'NAME_EXPR',
value => 'IN (''SUBSCRIBER_EVENT'')');
dbms_datapump.data_filter (handle => h1,
name => 'PARTITION_LIST',
value => v_partition_dumpfile,
table_name => 'SUBSCRIBER_EVENT',
schema_name => 'HDB');
dbms_datapump.set_parameter(handle => h1, name => 'COMPRESSION', value => 'ALL');
dbms_datapump.start_job (handle => h1);
dbms_datapump.detach (handle => h1);
END;
/
Hi,
I tried to generate the dump with expdp instead of the API, and got more specific error logs.
The log file did get created on the same path, though.
expdp hdb/hdb DUMPFILE=P2010110800.dmp DIRECTORY=ORALOAD TABLES=(SUBSCRIBER_EVENT:P2010110800) logfile=P2010110800.log
Export: Release 11.2.0.1.0 - Production on Wed Nov 10 01:26:13 2010
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Data Mining
and Real Application Testing options
ORA-39001: invalid argument value
ORA-39000: bad dump file specification
ORA-31641: unable to create dump file "/nfs_path/lims/backup/hdb/datapump/P2010110800.dmp"
ORA-27054: NFS file system where the file is created or resides is not mounted with correct options
Additional information: 3
Edited by: Sachin B on Nov 9, 2010 10:33 PM -
How to create a procedure for Data Pump
I run Data Pump through an anonymous block and it runs successfully, but when I try to do this through a procedure it fails. Whatever schema I want to export is passed in the arguments. Please help me find where I'm making the mistake.
CREATE or replace PROCEDURE DATA_PUMP (SCH varchar2) IS
h1 NUMBER;
BEGIN
h1 := dbms_datapump.open (
operation => 'EXPORT' ,job_mode => 'SCHEMA');
dbms_datapump.add_file(
handle => h1
,filename => 'FAHEEM01.dmp'
,directory => 'PUMP_DIR'
,filetype => 1);
dbms_datapump.metadata_filter(
handle => h1
,name => 'SCHEMA_EXPR'
,value => 'IN(''SCH'')');
dbms_datapump.start_job(
handle => h1);
dbms_datapump.detach(handle => h1);
end;
Oracle Student wrote:
i run the Data pump through annoymous block it run sucessful but when i try to this through procedure fail to do so. whatever schema i wanted to export pass in arrguments plz help where i'm commiting mistake
CREATE or replace PROCEDURE DATA_PUMP (SCH varchar2) IS
h1 NUMBER;
BEGIN
h1 := dbms_datapump.open (
operation => 'EXPORT' ,job_mode => 'SCHEMA');
dbms_datapump.add_file(
handle => h1
,filename => 'FAHEEM01.dmp'
,directory => 'PUMP_DIR'
,filetype => 1);
dbms_datapump.metadata_filter(
handle => h1
,name => 'SCHEMA_EXPR'
,value => 'IN(''SCH'')'); <--------------------------------------------------
dbms_datapump.start_job(
handle => h1);
dbms_datapump.detach(handle => h1);
end;
Your line above treats SCH as a literal schema name, not as a variable. Replace that line with:
value =>'IN('||''''||SCH||''''||')');
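Putting that fix together, the whole procedure would look roughly like this (a sketch; the UPPER() call is my addition). One more thing worth checking: inside a definer-rights procedure, privileges granted through roles are not active, so the procedure owner may need direct grants (e.g. CREATE TABLE for the job's master table) that an anonymous block picks up via a role.

```sql
CREATE OR REPLACE PROCEDURE data_pump (sch IN VARCHAR2) IS
  h1 NUMBER;
BEGIN
  h1 := DBMS_DATAPUMP.open(operation => 'EXPORT', job_mode => 'SCHEMA');
  DBMS_DATAPUMP.add_file(handle    => h1,
                         filename  => 'FAHEEM01.dmp',
                         directory => 'PUMP_DIR',
                         filetype  => 1);
  -- build the filter from the argument instead of the literal string 'SCH'
  DBMS_DATAPUMP.metadata_filter(handle => h1,
                                name   => 'SCHEMA_EXPR',
                                value  => 'IN (''' || UPPER(sch) || ''')');
  DBMS_DATAPUMP.start_job(handle => h1);
  DBMS_DATAPUMP.detach(handle => h1);
END;
/
```

Since the argument is concatenated into the filter, consider validating it (for example with DBMS_ASSERT) before use.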
Khurram -
Data pump error ORA-39065, status undefined after restart
Hi members,
The Data Pump full import job hung, CONTINUE_CLIENT also hung, and then all of a sudden the window exited.
;;; Import> status
;;; Import> help
;;; Import> status
;;; Import> continue_client
ORA-39065: unexpected master process exception in RECEIVE
ORA-39078: unable to dequeue message for agent MCP from queue "KUPC$C_1_20090923181336"
Job "SYSTEM"."SYS_IMPORT_FULL_01" stopped due to fatal error at 18:48:03
I increased the shared pool to 100M and then restarted the job with ATTACH=jobname. After restarting, I queried the status and found that everything is UNDEFINED. It still says UNDEFINED now, and the last log message says that the job has been reopened. That is the end of the log file; nothing else is being recorded. I am not sure what is happening now. Any ideas will be appreciated. This is version 10.2.0.3 on Windows. Thanks ...
Job SYS_IMPORT_FULL_01 has been reopened at Wednesday, 23 September, 2009 18:54
Import> status
Job: SYS_IMPORT_FULL_01
Operation: IMPORT
Mode: FULL
State: IDLING
Bytes Processed: 3,139,231,552
Percent Done: 33
Current Parallelism: 8
Job Error Count: 0
Dump File: D:\oracle\product\10.2.0\admin\devdb\dpdump\devtest%u.dmp
Dump File: D:\oracle\product\10.2.0\admin\devdb\dpdump\devtest01.dmp
Dump File: D:\oracle\product\10.2.0\admin\devdb\dpdump\devtest02.dmp
Dump File: D:\oracle\product\10.2.0\admin\devdb\dpdump\devtest03.dmp
Dump File: D:\oracle\product\10.2.0\admin\devdb\dpdump\devtest04.dmp
Dump File: D:\oracle\product\10.2.0\admin\devdb\dpdump\devtest05.dmp
Dump File: D:\oracle\product\10.2.0\admin\devdb\dpdump\devtest06.dmp
Dump File: D:\oracle\product\10.2.0\admin\devdb\dpdump\devtest07.dmp
Dump File: D:\oracle\product\10.2.0\admin\devdb\dpdump\devtest08.dmp
Worker 1 Status:
State: UNDEFINED
Worker 2 Status:
State: UNDEFINED
Object Schema: trm
Object Name: EVENT_DOCUMENT
Object Type: DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
Completed Objects: 1
Completed Rows: 78,026
Completed Bytes: 4,752,331,264
Percent Done: 100
Worker Parallelism: 1
Worker 3 Status:
State: UNDEFINED
Worker 4 Status:
State: UNDEFINED
Worker 5 Status:
State: UNDEFINED
Worker 6 Status:
State: UNDEFINED
Worker 7 Status:
State: UNDEFINED
Worker 8 Status:
State: UNDEFINED
39065, 00000, "unexpected master process exception in %s"
// *Cause: An unhandled exception was detected internally within the master
// control process for the Data Pump job. This is an internal error.
// messages will detail the problems.
// *Action: If problem persists, contact Oracle Customer Support. -
Schema export via Oracle data pump with Database Vault enabled question
Hi,
I have installed and configured Database Vault on Oracle 11gR2 (11.2.0.3) to protect a specific schema (SCHEMA_NAME) via a realm. I have followed the following doc:
http://www.oracle.com/technetwork/database/security/twp-databasevault-dba-bestpractices-199882.pdf
to ensure that the sys and system users have sufficient rights to complete a scheduled Oracle Data Pump export operation.
I.e. I have granted the following to sys and system:
execute dvsys.dbms_macadm.authorize_scheduler_user('sys','SCHEMA_NAME');
execute dvsys.dbms_macadm.authorize_scheduler_user('system','SCHEMA_NAME');
execute dvsys.dbms_macadm.authorize_datapump_user('sys','SCHEMA_NAME');
execute dvsys.dbms_macadm.authorize_datapump_user('system','SCHEMA_NAME');
I have also created a second realm on the same schema (SCHEMA_NAME) to allow sys and system to maintain indexes for realm-protected tables. This separate realm was created for all their index types (Index, Index Partition, and Indextype), and sys and system have been authorized as OWNER of this realm.
However, when I try to complete an Oracle Data Pump export operation on the schema, I get two errors directly after the following line is displayed in the export log:
Processing object type SCHEMA_EXPORT/TABLE/INDEX/DOMAIN_INDEX/INDEX:
ORA-39127: unexpected error from call to export_string :=SYS.DBMS_TRANSFORM_EXIMP.INSTANCE_INFO_EXP('AQ$_MGMT_NOTIFY_QTABLE_S','SYSMAN',1,1,'11.02.00.00.00',newblock)
ORA-01031: insufficient privileges
ORA-06512: at "SYS.DBMS_TRANSFORM_EXIMP", line 197
ORA-06512: at line 1
ORA-06512: at "SYS.DBMS_METADATA", line 9081
ORA-39127: unexpected error from call to export_string :=SYS.DBMS_TRANSFORM_EXIMP.INSTANCE_INFO_EXP('AQ$_MGMT_LOADER_QTABLE_S','SYSMAN',1,1,'11.02.00.00.00',newblock)
ORA-01031: insufficient privileges
ORA-06512: at "SYS.DBMS_TRANSFORM_EXIMP", line 197
ORA-06512: at line 1
ORA-06512: at "SYS.DBMS_METADATA", line 9081
The export completes, but with these errors.
Any help, suggestions, pointers, etc actually anything will be very welcome at this stage.
Thank you
Hi Srini,
Thank you very much for your help. Unfortunately, after following the instructions in the doc, I am still getting the same errors.
Nonetheless, thank you for your input.
I was also wondering if someone could tell me how to move this thread to the Database Security area of the forum; I feel I may have posted it in the wrong place, as it appears to be a Database Vault issue and not an imp/exp problem.
Edited by: zooid on May 20, 2012 10:33 PM
Edited by: zooid on May 20, 2012 10:36 PM -
We are seriously looking to begin using Data Pump and discovered via a Metalink bulletin that Data Pump does not support named pipes or OS compression utilities (Doc ID: Note 276521.1). Does anyone know a workaround for this, or have experience using Data Pump with very large databases and managing the size of the dump file?
With Oracle Data Pump you can set the maximum size of your dump files, and in this way also dump to multiple directories.
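To illustrate the size cap just mentioned, here is a rough API sketch using the FILESIZE parameter and a %U file template spread over two directories (the directory names DUMP_DIR_A/DUMP_DIR_B and the schema are placeholders; verify against your version's DBMS_DATAPUMP docs):

```sql
DECLARE
  h1 NUMBER;
BEGIN
  h1 := DBMS_DATAPUMP.open(operation => 'EXPORT', job_mode => 'SCHEMA');

  -- %U numbers the pieces 01, 02, ...; two templates spread the load
  -- over two directory objects
  DBMS_DATAPUMP.add_file(handle => h1, filename => 'exp_a_%U.dmp',
                         directory => 'DUMP_DIR_A');
  DBMS_DATAPUMP.add_file(handle => h1, filename => 'exp_b_%U.dmp',
                         directory => 'DUMP_DIR_B');

  -- cap each dump file piece at 4 GB
  DBMS_DATAPUMP.set_parameter(handle => h1, name => 'FILESIZE',
                              value => 4 * 1024 * 1024 * 1024);

  DBMS_DATAPUMP.metadata_filter(handle => h1, name => 'SCHEMA_EXPR',
                                value => 'IN (''SCOTT'')');
  DBMS_DATAPUMP.start_job(h1);
  DBMS_DATAPUMP.detach(h1);
END;
/
```

The command-line equivalent would be the FILESIZE and DUMPFILE=dir:name_%U.dmp parameters of expdp.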
I found the following on the AskTom website. It looks like the things you want are possible with Oracle 10g Release 2:
Fortunately, Oracle Database 10g Release 2 makes it easier to create and partially compress DMP files than the old tools ever did. EXPDP itself will now compress all metadata written to the dump file and IMPDP will decompress it automatically; no more messing around at the operating system level. And Oracle Database 10g Release 2 gives Oracle Database on Windows the ability to partially compress DMP files on the fly for the first time (named pipes are a feature of UNIX/Linux). -
Is it possible to export to an asm directory?
I tried this:
CREATE DIRECTORY "DGT01_EXP" AS '+DGT01/EXP';
(yes, this directory does exist in ASM)
But when I tried using this directory for expdp, I got this error.
$ expdp datapump/&pw schemas=test directory=DGT01_EXP dumpfile=test.dmp
Export: Release 10.2.0.1.0 - Production on Thursday, 27 December, 2007 14:44:53
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Release 10.2.0.1.0 - Production
With the Real Application Clusters option
ORA-39002: invalid operation
ORA-39070: Unable to open the log file.
ORA-29283: invalid file operation
ORA-06512: at "SYS.UTL_FILE", line 475
ORA-29283: invalid file operation
Please let me know if anyone has been able to do this.
Thanks.
Hi,
Sorry, I misread your first post. You can use Data Pump to export data to a disk directory, but not to ASM.
If you are trying to move data from one database to another, you may use NETWORK_LINK parameter with datapump to directly read from the source database and copy it to destination database.
Regards -
Consistent parameter in Data Pump.
Hi All,
As we know, there is no CONSISTENT parameter in Data Pump; can anyone tell me how Data Pump takes care of this?
From the net I got the below one-liner:
"Data Pump Export determines the current time and uses FLASHBACK_TIME." But I failed to understand what exactly it means.
Regards,
Sphinx
This is the equivalent of consistent=y in exp. Use flashback_time=systimestamp to get the Data Pump export to be "as of the point in time the export began; every table will be as of the same commit point in time".
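For the API side, the same consistency can be pinned to an SCN captured when the job is defined; a hedged sketch (the file name is a placeholder):

```sql
DECLARE
  h1  NUMBER;
  scn NUMBER;
BEGIN
  -- capture the current SCN; every table is then exported as of this point
  scn := DBMS_FLASHBACK.get_system_change_number;

  h1 := DBMS_DATAPUMP.open(operation => 'EXPORT', job_mode => 'SCHEMA');
  DBMS_DATAPUMP.add_file(handle => h1, filename => 'consistent.dmp',
                         directory => 'DATA_PUMP_DIR');
  DBMS_DATAPUMP.set_parameter(handle => h1, name => 'FLASHBACK_SCN',
                              value => scn);
  DBMS_DATAPUMP.start_job(h1);
  DBMS_DATAPUMP.detach(h1);
END;
/
```

Note that flashback exports depend on sufficient undo retention for the duration of the job.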
According to the docs:
“The SCN that most closely matches the specified time is found, and this SCN is used to enable the Flashback utility. The export operation is performed with data that is consistent as of this SCN.” -
INCLUDE & EXCLUDE in Data Pump
Any reason why Data Pump does not allow the INCLUDE parameter when you have the EXCLUDE parameter in your statement, and vice versa?
"Data Pump offers much greater metadata filtering than original Export and Import. The INCLUDE parameter allows you to specify which object (and its dependent objects) you want to keep in the export job. The EXCLUDE parameter allows you to specify which object (and its dependent objects) you want to keep out of the export job. You cannot mix the two parameters in one job. Both parameters work with Data Pump Import as well, and you can use different INCLUDE and EXCLUDE options for different operations on the same dump file."
Source: http://www.oracle.com/technology/products/database/utilities/pdf/datapump11g2007_quickstart.pdf
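For completeness, the PL/SQL API exposes the same pair as the INCLUDE_PATH_EXPR and EXCLUDE_PATH_EXPR metadata filters; a hedged sketch that keeps only tables (and their dependents) in a schema export (the file name is a placeholder):

```sql
DECLARE
  h1 NUMBER;
BEGIN
  h1 := DBMS_DATAPUMP.open(operation => 'EXPORT', job_mode => 'SCHEMA');
  DBMS_DATAPUMP.add_file(handle => h1, filename => 'tabs_only.dmp',
                         directory => 'DATA_PUMP_DIR');
  DBMS_DATAPUMP.metadata_filter(handle => h1, name => 'SCHEMA_EXPR',
                                value => 'IN (''SCOTT'')');
  -- keep only object paths matching TABLE; per the quoted note,
  -- an EXCLUDE_PATH_EXPR filter is the mirror image and the two
  -- cannot be mixed in one job
  DBMS_DATAPUMP.metadata_filter(handle => h1, name => 'INCLUDE_PATH_EXPR',
                                value => 'IN (''TABLE'')');
  DBMS_DATAPUMP.start_job(h1);
  DBMS_DATAPUMP.detach(h1);
END;
/
```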
Jonathan Ferreira
http://oracle4dbas.blogspot.com -
Why can't the Data Pump tool get in?
[oracle@hostp ~]$ impdp
Import: Release 10.2.0.1.0 - Production on Thursday, 18 September, 2008 22:30:05
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Username: system
Password:
UDI-00008: operation generated ORACLE error 1033
ORA-01033: ORACLE initialization or shutdown in progress
[oracle@hostp bdump]$ impdp sys/oracle directory='/u01/app/oracle/oradata/db10g/example01.dbf' dumpfile='dumpfile.dmp' transport_datafiles='disk1/transportdest/example01.dbf';
Import: Release 10.2.0.1.0 - Production on Thursday, 18 September, 2008 22:41:36
Copyright (c) 2003, 2005, Oracle. All rights reserved.
UDI-00008: operation generated ORACLE error 1033
ORA-01033: ORACLE initialization or shutdown in progress
Username: system
Password:
UDI-00008: operation generated ORACLE error 1033
ORA-01033: ORACLE initialization or shutdown in progress
The instance has finished mounting. Because data file 5 is lost, the database cannot be opened. Can I use the impdp tool while the database is in the mounted state?
SQL> startup
ORACLE instance started.
Total System Global Area 608174080 bytes
Fixed Size 1220820 bytes
Variable Size 213913388 bytes
Database Buffers 385875968 bytes
Redo Buffers 7163904 bytes
Database mounted.
ORA-01157: cannot identify/lock data file 5 - see DBWR trace file
ORA-01110: data file 5: '/u01/app/oracle/oradata/db10g/example01.dbf'
SQL> exit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
[oracle@hostp ~]$ impdp
Import: Release 10.2.0.1.0 - Production on Friday, 19 September, 2008 9:01:22
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Username: system
Password:
UDI-00008: operation generated ORACLE error 1033
ORA-01033: ORACLE initialization or shutdown in progress -
I was recently tasked with copying three schemas from one 10gR2 database to another 10gR2 database on the same RHEL server. I decided to use Data Pump in network mode.
After performing the transfer, the count of database objects for the schemas was the same EXCEPT for LOBs. There were significantly fewer on the target (new) database than on the source (old) database.
To be certain, I retried the operation using an intermediate dump file. Again, everything ran without error. The counts were the same as before - fewer LOBs. I also compared row counts of the tables, and they were identical on both systems.
Testing by the application user seemed to indicate everything was fine.
My assumption is that consolidation is going on when the data is imported, resulting in fewer LOBs; I haven't worked much (really, at all) with them before. I am just looking for confirmation that this is the case and that nothing is "missing".
Here are the results:
ORIGINAL SOURCE DATABASE
OWNER OBJECT_TYPE STATUS CNT
COGAUDIT_DEV INDEX VALID 6
LOB VALID 12
TABLE VALID 21
COGSTORE_DEV INDEX VALID 286
LOB VALID 390
SEQUENCE VALID 1
TABLE VALID 200
TRIGGER VALID 2
VIEW VALID 4
PLANNING_DEV INDEX VALID 37
LOB VALID 15
SEQUENCE VALID 3
TABLE VALID 31
TRIGGER VALID 3
14 rows selected.
Here are the counts on the BOPTBI (target) database:
NEW TARGET DATABASE
OWNER OBJECT_TYPE STATUS CNT
COGAUDIT_DEV INDEX VALID 6
LOB VALID 6
TABLE VALID 21
COGSTORE_DEV INDEX VALID 286
LOB VALID 98
SEQUENCE VALID 1
TABLE VALID 200
TRIGGER VALID 2
VIEW VALID 4
PLANNING_DEV INDEX VALID 37
LOB VALID 15
SEQUENCE VALID 3
TABLE VALID 31
TRIGGER VALID 3
14 rows selected.
We're just curious ... thanks for any insight on this!
Chris
Edited by: 877086 on Aug 3, 2011 4:38 PM
Edited by: 877086 on Aug 3, 2011 4:40 PM
Edited by: 877086 on Aug 3, 2011 4:40 PM
OK, here is the SQL that produced the object listing:
break on owner skip 1;
select owner,
object_type,
status,
count(*) cnt
from dba_objects
where owner in ('COGAUDIT_DEV', 'COGSTORE_DEV', 'PLANNING_DEV')
group by owner,
object_type,
status
order by 1, 2;
Here is the export parameter file:
# cog_all_exp.par
userid = chamilton
content = all
directory = xfer
dumpfile = cog_all_%U.dmp
full = n
job_name = cog_all_exp
logfile = cog_all_exp.log
parallel = 2
schemas = (cogaudit_dev, cogstore_dev, planning_dev)
Here is the import parameter file:
# cog_all_imp.par
userid = chamilton
content = all
directory = xfer
dumpfile = cog_all_%U.dmp
full = n
job_name = cog_all_imp
logfile = cog_all_imp.log
parallel = 2
reuse_datafiles = n
schemas = (cogaudit_dev, cogstore_dev, planning_dev)
skip_unusable_indexes = n
table_exists_action = replace
The above parameter files were for the dump file version. For the original network link version, I omitted the dumpfile parameter and substituted "network_link = boptcog_xfer".
Chris
Edited by: 877086 on Aug 3, 2011 6:18 PM -
I have exported the HR schema from one machine using the following code; it exported successfully, but when I tried to import the exported file on another machine it gave the error below. One thing more: how do I use a package to import the file, as we exported using the Data Pump package? Please help.
C:\Documents and Settings\remote>impdp hr/hr DIRECTORY=DATA_PUMP_DIR DUMPFILE=YAHOO.DMP full=y
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORA-31626: job does not exist
ORA-31637: cannot create job SYS_IMPORT_FULL_01 for user HR
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPV$FT_INT", line 663
ORA-39080: failed to create queues "KUPC$C_1_20090320121353" and "KUPC$S_1_20090
320121353" for Data Pump job
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPC$QUE_INT", line 1665
ORA-01658: unable to create INITIAL extent for segment in tablespace SYSTEM
DECLARE
handle NUMBER;
BEGIN
handle := dbms_datapump.open (
operation => 'EXPORT' ,job_mode => 'SCHEMA');
dbms_datapump.add_file(
handle => handle
,filename => 'YAHOO.dmp'
,directory => 'DATA_PUMP_DIR'
,filetype => 1);
dbms_datapump.metadata_filter(
handle => handle
,name => 'SCHEMA_EXPR'
,value => 'IN(''HR'')');
dbms_datapump.start_job(
handle => handle);
dbms_datapump.detach(handle => handle);
end;
<Moderator edit - deleted contents of MOS Doc 752374.1 - please do not post such contents - it is a violation of your Support agreement - locking this thread>
-
Data Pump - Trying to Export to *Another Server* Gives Error
hi experts,
I'm using 10.2.0.4 on Windows.
I want to create a Scheduler job to periodically refresh my test database from my production database which resides on a different Windows server.
Using the Database Control GUI to create the Export job, if I specify a Directory that is on the other server, the job fails to create and I get this error:
Export Submit Failed
Errors: ORA-39002: invalid operation ORA-39070: Unable to open the log file. ORA-29283: invalid file operation ORA-06512: at "SYS.UTL_FILE", line 488 ORA-29283: invalid file operation Exception : ORA-39002: invalid operation ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79 ORA-06512: at "SYS.DBMS_DATAPUMP", line 2953 ORA-06512: at "SYS.DBMS_DATAPUMP", line 3189 ORA-06512: at line 2
But if I use a local directory ie one that is on the same server that the export will run on.... no problem - job is created and it executes fine.
?? What is required to be able to export to a non-local destination?
Thanks, John
Thanks for the replies and ideas.
This is what the GUI generated:
declare
h1 NUMBER;
begin
begin
h1 := dbms_datapump.open (operation => 'EXPORT', job_mode => 'SCHEMA', job_name => 'DP_EXPORT_SSU', version => 'COMPATIBLE');
end;
begin
dbms_datapump.set_parallel(handle => h1, degree => 1);
end;
begin
dbms_datapump.add_file(handle => h1, filename => 'EXPSSU.LOG', directory => 'DP_FROMULTRAPRD', filetype => 3);
end;
begin
dbms_datapump.set_parameter(handle => h1, name => 'KEEP_MASTER', value => 0);
end;
begin
dbms_datapump.metadata_filter(handle => h1, name => 'SCHEMA_EXPR', value => 'IN(''SSU'')');
end;
begin
dbms_datapump.add_file(handle => h1, filename => 'EXPSSU%U.DMP', directory => 'DP_FROMULTRAPRD', filetype => 1);
end;
begin
dbms_datapump.set_parameter(handle => h1, name => 'INCLUDE_METADATA', value => 1);
end;
begin
dbms_datapump.set_parameter(handle => h1, name => 'DATA_ACCESS_METHOD', value => 'AUTOMATIC');
end;
begin
dbms_datapump.set_parameter(handle => h1, name => 'ESTIMATE', value => 'BLOCKS');
end;
begin
dbms_datapump.start_job(handle => h1, skip_current => 0, abort_step => 0);
end;
begin
dbms_datapump.detach(handle => h1);
end;
end;
After I finally got the job to execute (by specifying a local path), the export completed successfully... BUT the job does not appear as a Scheduler job. I expected to see it there. From my reading, it seems that to execute a DP job you first have to attach to it, then run dbms_datapump.start_job - is that correct?
** How can I see the data pump jobs that exist?
I will create a database link and tweak my tnsnames file.
Steve, I prefer to drive this from the Production server so I will use a Network Export from that server going to dump files on the Test server.
** But I'm confused by your statement saying "in either case, the job will run on the test server". ??
2 years from now, when I have forgotten about all of this, I want to only have to look at the scheduled jobs on the Production server to determine all the jobs that depend on that server.
I'll post back with the results of my little "experiment". Thanks for your suggestions.
John - Memphis TN USA
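On the question above about seeing which Data Pump jobs exist: the dictionary views list them, and a named job can be picked up again with ATTACH. A sketch (the job name is the one from the GUI-generated code; substitute your own):

```sql
-- list Data Pump jobs visible to a DBA
SELECT owner_name, job_name, operation, job_mode, state
  FROM dba_datapump_jobs;

-- re-attach to an existing job by name and resume it
DECLARE
  h1 NUMBER;
BEGIN
  h1 := DBMS_DATAPUMP.attach(job_name => 'DP_EXPORT_SSU', job_owner => USER);
  DBMS_DATAPUMP.start_job(h1);
  DBMS_DATAPUMP.detach(h1);
END;
/
```

Non-DBA users can query USER_DATAPUMP_JOBS instead.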
Edited by: user629010 on Dec 23, 2008 3:08 PM -
Data pump problem on sync data from windows to linux
Hi,
I have a production DB on a Windows 2003 server, and I sync data from the production Windows machine to a Linux server using Data Pump exports and imports.
I have written a procedure to export at the schema level,
and
a procedure for import like:
dbms_datapump.open(operation => 'IMPORT', job_mode => 'SCHEMA', job_name => 'IMP');
dbms_datapump.add_file(handle => h1, filename => 'imp.dmp', directory => 'DIR_DATAPUMP', filetype => dbms_datapump.ku$_file_type_dump_file);
dbms_datapump.metadata_filter(handle => h1, name => 'SCHEMA_EXPR', value => 'IN (''HR'',''OE'')', object_path => 'TABLE');
dbms_datapump.set_parameter(handle => h1, name => 'TABLE_EXISTS_ACTION', value => 'REPLACE');
When I run the above procedure, an error displays that the table already exists (ORA-31684),
but from the
impdp include=table table_exists_action=replace
I am able to do it.
Can you suggest what I am missing?
Regards
Ajay