Datapump start_job
Hi All,
Could you let me know the options that can be used with the Data Pump START_JOB procedure?
Please explain the START_JOB options in detail.
Thanks
Rangarajbk
Please post details of your OS and database versions. Please also elaborate on which portions of the documentation you did not understand.
http://docs.oracle.com/cd/E11882_01/server.112/e22490/dp_export.htm#SUTIL879
http://docs.oracle.com/cd/E11882_01/appdev.112/e25788/d_datpmp.htm#ARPLS66062
HTH
Srini
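To summarize the 11.2 PL/SQL Packages reference linked above: START_JOB takes the job handle plus a few restart-control arguments. A minimal hedged sketch (the job name is illustrative, and the restart only applies to a job that was previously stopped):

```sql
DECLARE
  h NUMBER;
BEGIN
  -- Attach to an existing (stopped) job owned by the current user.
  h := DBMS_DATAPUMP.ATTACH(job_name => 'MY_EXP_JOB', job_owner => USER);
  -- START_JOB parameters (per the 11.2 reference):
  --   skip_current: nonzero skips the action that was in progress when the
  --                 job stopped (restarts only; useful to step past a failure)
  --   abort_step:   intended for testing/debugging; leave at the default 0
  --   cluster_ok:   zero restricts the job to the current RAC instance
  --   service_name: restrict worker processes to instances of this service
  DBMS_DATAPUMP.START_JOB(handle       => h,
                          skip_current => 0,
                          abort_step   => 0);
  DBMS_DATAPUMP.DETACH(handle => h);
END;
/
```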
Similar Messages
-
Missing Role Grants after datapump
Hello OTN-Community,
I have a problem with Data Pump. I am using some include filters to get the relevant data exported. One of these filters includes the ROLES of my database that start with a certain expression.
After importing into another database, these roles exist, but all of the role grants and the grants to other users are missing. The object grants are exported correctly.
What am I doing wrong?
The export script:
declare
/* some declare specifications are not copied */
cursor curSchema is
select
distinct
t.Mdbn_Name Name
from
ProphetMaster.Dat_MdBn t
where
Upper(t.MDBN_Name) not in ('****', '***');
begin
-- Define schemas
SchemaList := '''****'',''***''';
if ExportAllProphetUsers then
for recSchema in curSchema loop
SchemaList := SchemaList||','''||recSchema.Name||'''';
end loop;
end if;
-- File size
FileSizeStr := to_char(MaxFileSize)||'M';
-- Directory
DirectoryName := 'PHT_PUMP_DIR';
execute immediate 'create or replace directory "'||DirectoryName||'" as '''|| PumpDir||'''';
-- JobName
JobName := 'PHT_EXPORT'||DateStr;
-- Filename
if not FilenameWithDateTime then
DateStr :='';
end if;
Filename := 'PHTDB'||DateStr||'_%U.DMP';
Logfilename := JobName||'.LOG';
-- Define and run the job
h1 := dbms_datapump.open (operation => 'EXPORT', job_mode => 'FULL', job_name => JobName, version => 'COMPATIBLE');
dbms_datapump.set_parallel(handle => h1, degree => ParallelExecutions);
dbms_datapump.add_file(handle => h1, filename => Logfilename, directory => DirectoryName, filetype => 3);
dbms_datapump.set_parameter(handle => h1, name => 'KEEP_MASTER', value => 0);
--10g
--dbms_datapump.add_file(handle => h1, filename => Filename, directory => DirectoryName, filesize => FileSizeStr, filetype => 1);
--11g
dbms_datapump.add_file(handle => h1, filename => Filename, directory => DirectoryName, filesize => FileSizeStr, filetype => 1, reusefile =>OverwriteFiles);
dbms_datapump.set_parameter(handle => h1, name => 'INCLUDE_METADATA', value => 1);
dbms_datapump.set_parameter(handle => h1, name => 'DATA_ACCESS_METHOD', value => 'AUTOMATIC');
-- Include Schemas
--dbms_datapump.metadata_filter(handle => h1, name => 'NAME_EXPR', value => 'IN('||SchemaList||')', object_type => 'DATABASE_EXPORT/SCHEMA');
dbms_datapump.metadata_filter(handle => h1, name => 'NAME_EXPR', value => 'IN('||SchemaList||')', object_type => 'DATABASE_EXPORT/SCHEMA');
dbms_datapump.metadata_filter(handle => h1, name => 'INCLUDE_PATH_EXPR', value => 'IN(''DATABASE_EXPORT/SCHEMA'')');
--Include Profiles
dbms_datapump.metadata_filter(handle => h1, name => 'NAME_EXPR', value => 'like ''PROFILE_%''', object_type => 'PROFILE');
dbms_datapump.metadata_filter(handle => h1, name => 'INCLUDE_PATH_EXPR', value => 'IN(''PROFILE'')');
--Include Roles
dbms_datapump.metadata_filter(handle => h1, name => 'NAME_EXPR', value => 'like ''***%''', object_type => 'ROLE');
dbms_datapump.metadata_filter(handle => h1, name => 'INCLUDE_PATH_EXPR', value => 'IN(''ROLE'')');
-- Size estimate
dbms_datapump.set_parameter(handle => h1, name => 'ESTIMATE', value => 'BLOCKS');
--Start Job
dbms_output.put_line('Export Job started; Logfile: '|| LogFileName);
dbms_datapump.start_job(handle => h1, skip_current => 0, abort_step => 0);
-- Wait for the job to finish
dbms_datapump.wait_for_job(handle=>h1,job_state =>job_state);
dbms_output.put_line('Job has completed');
dbms_output.put_line('Final job state = ' || job_state);
dbms_datapump.detach(handle => h1);
The Import Script:
begin
dbms_output.Enable(buffer_size => null);
-- Directory
DirectoryName := 'PHT_PUMP_DIR';
execute immediate 'create or replace directory "'||DirectoryName||'" as '''|| PumpDir||'''';
-- JobName
JobName := 'PHT_IMPORT'|| to_char(sysdate,'_yyyy-MM-DD-HH24-MI');
--FileNames
Filename := 'PHTDB'||FileNameDateStr||'_%U.DMP';
LogFilename := JobName||'.LOG';
h1 := dbms_datapump.open (operation => 'IMPORT', job_mode => 'FULL', job_name => JobName, version => 'COMPATIBLE');
--If the Data Pump import is executed on a standard version, this call line must be used
--h1 := dbms_datapump.open (operation => 'IMPORT', job_mode => 'FULL', job_name => JobName, version => '10.2');
dbms_datapump.set_parallel(handle => h1, degree => ParallelExecutions);
dbms_datapump.add_file(handle => h1, filename => Logfilename, directory => DirectoryName, filetype => 3);
dbms_datapump.set_parameter(handle => h1, name => 'KEEP_MASTER', value => 0);
dbms_datapump.add_file(handle => h1, filename => Filename, directory => DirectoryName, filetype => 1);
dbms_datapump.set_parameter(handle => h1, name => 'INCLUDE_METADATA', value => 1);
dbms_datapump.set_parameter(handle => h1, name => 'DATA_ACCESS_METHOD', value => 'AUTOMATIC');
dbms_datapump.set_parameter(handle => h1, name => 'REUSE_DATAFILES', value => 0);
dbms_datapump.set_parameter(handle => h1, name => 'TABLE_EXISTS_ACTION', value => 'REPLACE');
dbms_datapump.set_parameter(handle => h1, name => 'SKIP_UNUSABLE_INDEXES', value => 0);
--Start Job
dbms_output.put_line('Import Job started; Logfile: '|| LogFileName);
dbms_datapump.start_job(handle => h1, skip_current => 0, abort_step => 0);
-- Wait for the job to finish
dbms_datapump.wait_for_job(handle=>h1,job_state =>job_state);
dbms_output.put_line('Job has completed');
dbms_output.put_line('Final job state = ' || job_state);
dbms_datapump.detach(handle => h1);

Does nobody have any idea?
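(A possible cause, offered as a hedged sketch rather than a confirmed diagnosis: INCLUDE_PATH_EXPR filters restrict the job to exactly the listed object paths, so role grants, which live under their own object path in a full export rather than under ROLE, would be filtered out along with everything else not listed. The available paths can be checked against the DATABASE_EXPORT_OBJECTS view:)

```sql
-- List the role-related object paths available in a full
-- (DATABASE_EXPORT) job; a path such as ROLE_GRANT, if present,
-- would need to be included in the filter alongside ROLE.
SELECT object_path, comments
FROM   database_export_objects
WHERE  object_path LIKE '%ROLE%';
```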
-
Datapump help - cant get it to work
This is my script:
create or replace directory EXPORT as 'e:\aia\backup\dmp\';
declare
l_dp_handle NUMBER;
begin
l_dp_handle := DBMS_DATAPUMP.open(
operation => 'EXPORT',
job_mode => 'SCHEMA',
remote_link => NULL,
job_name => 'GTAIATB4_UPGRADE_EXP4',
version => 'LATEST'
);
DBMS_DATAPUMP.add_file(
handle => l_dp_handle,
filename => 'GTAIATB4_UPGRADE2.dmp',
directory => 'export',
filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE
);
DBMS_DATAPUMP.add_file(
handle => l_dp_handle,
filename => 'GTAIATB4_UPGRADE2.log',
directory => 'exportT',
filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE
);
DBMS_DATAPUMP.metadata_filter(
handle => l_dp_handle,
name => 'SCHEMA_EXPR',
value => 'IN (''GTAIATB4_UPGRADE'')');
DBMS_DATAPUMP.start_job(l_dp_handle);
DBMS_DATAPUMP.detach(l_dp_handle);
END;
/
I want to export everything that belongs to the schema called GTAIATB4_UPGRADE.
this is the result:
SQL> declare
2 l_dp_handle NUMBER;
3 begin
4 l_dp_handle := DBMS_DATAPUMP.open(
5 operation => 'EXPORT',
6 job_mode => 'SCHEMA',
7 remote_link => NULL,
8 job_name => 'GTAIATB4_UPGRADE_EXP4',
9 version => 'LATEST'
10 );
11 DBMS_DATAPUMP.add_file(
12 handle => l_dp_handle,
13 filename => 'GTAIATB4_UPGRADE2.dmp',
14 directory => 'export',
15 filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE
16 );
17 DBMS_DATAPUMP.add_file(
18 handle => l_dp_handle,
19 filename => 'GTAIATB4_UPGRADE2.log',
20 directory => 'exportT',
21 filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE
22 );
23 DBMS_DATAPUMP.metadata_filter(
24 handle => l_dp_handle,
25 name => 'SCHEMA_EXPR',
26 value => 'IN (''GTAIATB4_UPGRADE'')');
27 DBMS_DATAPUMP.start_job(l_dp_handle);
28 DBMS_DATAPUMP.detach(l_dp_handle);
29 END;
30
31
32 /
declare
ERROR at line 1:
ORA-39001: invalid argument value
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
ORA-06512: at "SYS.DBMS_DATAPUMP", line 3444
ORA-06512: at "SYS.DBMS_DATAPUMP", line 3693
ORA-06512: at line 11
Also, can someone tell me how I can delete the jobs that were created? As it is now, I have to keep changing job names, i.e. 'GTAIATB4_UPGRADE_EXP4' started as 'GTAIATB4_UPGRADE_EXP', then 'GTAIATB4_UPGRADE_EXP3', etc.

There could be lots of things wrong with this PL/SQL script. If you need to change the job name every time, then I can tell you that at least the OPEN is happening, because it is creating the master table. After that, I have no idea what is happening. One thing I would do is add some sort of debugging to the code so you can figure out what is happening. I use an internal tool to help debug PL/SQL, but a simple approach is to insert into a table after every call. Something like this:
first create a table that is owned by the schema running the job.
create table debug_tab (a number);
then in the code, I would put a statement like this after every datapump call:
insert into debug_tab values (1);
commit;
then
insert into debug_tab values (2);
commit;
etc. Then, after you run your script, look to see what is in debug_tab. This will tell you how far you got. One of the things I would check is that the directory exists.
Actually, I think I just saw it. For the dump file, you have the directory as 'export',
but for the log file, you have the directory as 'exportT' <---- note the stray capital T. This is probably your problem.
As for getting rid of existing jobs:
select * from user_datapump_jobs;
then drop the master table that has the same name as the job, e.g.:
drop table "JOB_NAME";
This will get rid of the Data Pump job.
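(An alternative cleanup sketch, assuming the job is stopped but still attachable rather than fully orphaned: attach to it and stop it with the option that drops the master table. The job name is illustrative; take it from USER_DATAPUMP_JOBS.)

```sql
DECLARE
  h NUMBER;
BEGIN
  h := DBMS_DATAPUMP.ATTACH(job_name  => 'GTAIATB4_UPGRADE_EXP',
                            job_owner => USER);
  -- keep_master => 0 removes the master table as part of stopping the job.
  DBMS_DATAPUMP.STOP_JOB(handle => h, immediate => 1, keep_master => 0);
END;
/
```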
Hope this helps.
Dean -
Datapump - expdp.open creates tables in schema
Hi
I am using Data Pump in Oracle 10g to archive old partitions from the main schema to another schema.
I notice that when dbms_datapump.open is called, a new table is created by dbms_datapump for internal purposes. This is verified in the Oracle documentation:
http://docs.oracle.com/cd/B12037_01/appdev.101/b10802/d_datpmp.htm#997383
Usage Notes
When the job is created, a master table is created for the job under the caller's schema within the caller's default tablespace. A handle referencing the job is returned that attaches the current session to the job. Once attached, the handle remains valid until either an explicit or implicit detach occurs. The handle is only valid in the caller's session. Other handles can be attached to the same job from a different session by using the ATTACH procedure.
Does anybody know whether this table can be removed by a "cleanup" Oracle dbms_datapump call, or whether it has to be cleaned up manually?

I can confirm that's what you do:
v_job_handle:= DBMS_DATAPUMP.OPEN('EXPORT', 'TABLE', NULL, v_job_name);
-- Set parallelism to 1 and add file
DBMS_DATAPUMP.SET_PARALLEL(v_job_handle, 1);
DBMS_DATAPUMP.ADD_FILE(v_job_handle, v_job_name || '_' || v_partition.partition_name || '.dmp', 'PARTITION_DUMPS');
-- Apply filter to process only a certain partition in the table
DBMS_DATAPUMP.METADATA_FILTER(v_job_handle, 'SCHEMA_EXPR', 'IN(''SIS_MAIN'')');
DBMS_DATAPUMP.METADATA_FILTER(v_job_handle, 'NAME_EXPR', 'LIKE ''' || t_archive_list(i) || '''');
DBMS_DATAPUMP.DATA_FILTER(v_job_handle, 'PARTITION_EXPR', 'IN (''' || v_partition.partition_name || ''')', t_archive_list(i), 'SIS_MAIN');
-- Use statistics (rather than blocks) to estimate time.
DBMS_DATAPUMP.SET_PARAMETER(v_job_handle,'ESTIMATE','STATISTICS');
-- Start the job. An exception is returned if something is not set up properly.
DBMS_DATAPUMP.START_JOB(v_job_handle);
-- The export job should now be running. We loop until its finished
v_percent:= 0;
v_job_state:= 'UNDEFINED';
WHILE (v_job_state != 'COMPLETED') and (v_job_state != 'STOPPED') LOOP
DBMS_DATAPUMP.get_status(v_job_handle,DBMS_DATAPUMP.ku$_status_job_error + DBMS_DATAPUMP.ku$_status_job_status + DBMS_DATAPUMP.ku$_status_wip,-1,v_job_state,sts);
js:= sts.job_status;
-- As the percentage-complete changes in this loop, the new value displays.
IF js.percent_done != v_percent THEN
v_percent:= js.percent_done;
END IF;
END LOOP;
-- When the job finishes, display status before detaching from job.
PRC_LOG(f1, t_archive_list(i) || ': Export complete with status: ' || v_job_state);
-- DBMS_DATAPUMP.DETACH(v_job_handle);
-- use STOP_JOB instead of DETACH otherwise the "master table" which is created when OPEN is called will not be removed.
DBMS_DATAPUMP.STOP_JOB(v_job_handle,0,0); -
Datapump exp and imp using API method
Good Day All,
I want to know the best way of error handling for Data Pump export and import using the API. I need to implement this in my current project, as there are a lot of limitations and the only way to verify the process worked is to write the code with error handling using exceptions. I have seen some examples on the web, but if there are practical examples or good links with examples that will work for sure, I would like to know and explore them. I have never used the API method, so I am not sure of it.
Thanks a lot for your time.
Maggie.

I wrote the procedure with error handling, but it does not output any status information while kicking off the expdp process. I have put dbms_output.put_line calls in as per the Oracle docs example, but it doesn't display any messages; it just kicks off and creates the dump files. As a happy path it's OK, but I need to be able to track if something goes wrong. I even ran SET SERVEROUTPUT ON in SQL*Plus. It doesn't even display that the job started. Please help me find where I made a mistake in displaying the status. Do I need to modify or add anything? Help!!
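(One hedged explanation: DBMS_OUTPUT writes to a session buffer that the client only fetches after the whole call returns, so nothing appears while the job is running. A common workaround sketch is to log progress to a table via an autonomous transaction and watch that table from another session; all names here are illustrative, not part of any Oracle API:)

```sql
CREATE TABLE dp_log (ts TIMESTAMP DEFAULT SYSTIMESTAMP, msg VARCHAR2(4000));

CREATE OR REPLACE PROCEDURE dp_log_msg (p_msg IN VARCHAR2) AS
  PRAGMA AUTONOMOUS_TRANSACTION;  -- commit without touching the caller's transaction
BEGIN
  INSERT INTO dp_log (msg) VALUES (p_msg);
  COMMIT;
END;
/
-- Inside the monitoring loop, call dp_log_msg(...) in place of
-- dbms_output.put_line(...), then query dp_log from another session
-- while the job runs.
```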
CREATE OR REPLACE PROCEDURE SCHEMAS_EXPORT_TEST AS
--Using Exception Handling During a Simple Schema Export
--This procedure shows a simple schema export using the Data Pump API.
--It extends to show how to use exception handling to catch the SUCCESS_WITH_INFO case,
--and how to use the GET_STATUS procedure to retrieve additional information about errors.
--If you want to get status up to the current point, but a handle has not yet been obtained,
--you can use NULL for DBMS_DATAPUMP.GET_STATUS.
--See http://docs.oracle.com/cd/B19306_01/server.102/b14215/dp_api.htm
h1 number; -- Data Pump job handle
l_handle number;
ind NUMBER; -- Loop index
spos NUMBER; -- String starting position
slen NUMBER; -- String length for output
percent_done NUMBER; -- Percentage of job complete
job_state VARCHAR2(30); -- To keep track of job state
sts ku$_Status; -- The status object returned by get_status
le ku$_LogEntry; -- For WIP and error messages
js ku$_JobStatus; -- The job status from get_status
jd ku$_JobDesc; -- The job description from get_status
BEGIN
h1 := dbms_datapump.open (operation => 'EXPORT',job_mode => 'SCHEMA');
dbms_datapump.add_file (handle => h1,filename => 'SCHEMA_BKP_%U.DMP',directory => 'BKP_SCHEMA_EXPIMP',filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
dbms_datapump.add_file (handle => h1,directory => 'BKP_SCHEMA_EXPIMP',filename => 'SCHEMA_BKP_EX.log',filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
---- A metadata filter is used to specify the schema that will be exported.
dbms_datapump.metadata_filter (handle => h1, name => 'SCHEMA_LIST',value => q'|'XXXXXXXXXX'|');
dbms_datapump.set_parallel( handle => h1, degree => 4);
-- Start the job. An exception will be returned if something is not set up
-- properly.One possible exception that will be handled differently is the
-- success_with_info exception. success_with_info means the job started
-- successfully, but more information is available through get_status about
-- conditions around the start_job that the user might want to be aware of.
begin
dbms_datapump.start_job (handle => h1);
dbms_output.put_line('Data Pump job started successfully');
exception
when others then
if sqlcode = dbms_datapump.success_with_info_num
then
dbms_output.put_line('Data Pump job started with info available:');
dbms_datapump.get_status(h1,
dbms_datapump.ku$_status_job_error,0,
job_state,sts);
if (bitand(sts.mask,dbms_datapump.ku$_status_job_error) != 0)
then
le := sts.error;
if le is not null
then
ind := le.FIRST;
while ind is not null loop
dbms_output.put_line(le(ind).LogText);
ind := le.NEXT(ind);
end loop;
end if;
end if;
else
raise;
end if;
end;
-- The export job should now be running. In the following loop, we will monitor the job until it completes.
-- In the meantime, progress information is displayed.
percent_done := 0;
job_state := 'UNDEFINED';
while (job_state != 'COMPLETED') and (job_state != 'STOPPED') loop
dbms_datapump.get_status(h1,
dbms_datapump.ku$_status_job_error +
dbms_datapump.ku$_status_job_status +
dbms_datapump.ku$_status_wip,-1,job_state,sts);
js := sts.job_status;
-- If the percentage done changed, display the new value.
if js.percent_done != percent_done
then
dbms_output.put_line('*** Job percent done = ' ||to_char(js.percent_done));
percent_done := js.percent_done;
end if;
-- Display any work-in-progress (WIP) or error messages that were received for
-- the job.
if (bitand(sts.mask,dbms_datapump.ku$_status_wip) != 0)
then
le := sts.wip;
else
if (bitand(sts.mask,dbms_datapump.ku$_status_job_error) != 0)
then
le := sts.error;
else
le := null;
end if;
end if;
if le is not null
then
ind := le.FIRST;
while ind is not null loop
dbms_output.put_line(le(ind).LogText);
ind := le.NEXT(ind);
end loop;
end if;
end loop;
-- Indicate that the job finished and detach from it.
dbms_output.put_line('Job has completed');
dbms_output.put_line('Final job state = ' || job_state);
dbms_datapump.detach (handle => h1);
-- Any exceptions that propagated to this point will be captured. The
-- details will be retrieved from get_status and displayed.
Exception
when others then
dbms_output.put_line('Exception in Data Pump job');
dbms_datapump.get_status(h1,dbms_datapump.ku$_status_job_error,0, job_state,sts);
if (bitand(sts.mask,dbms_datapump.ku$_status_job_error) != 0)
then
le := sts.error;
if le is not null
then
ind := le.FIRST;
while ind is not null loop
spos := 1;
slen := length(le(ind).LogText);
if slen > 255
then
slen := 255;
end if;
while slen > 0 loop
dbms_output.put_line(substr(le(ind).LogText,spos,slen));
spos := spos + 255;
slen := length(le(ind).LogText) + 1 - spos;
end loop;
ind := le.NEXT(ind);
end loop;
end if;
end if;
END SCHEMAS_EXPORT_TEST; -
View job status during datapump
Hi,
I was running Data Pump and watching the output, but then pressed Ctrl-C so that I could report on the status and see what was going on. I then entered CONTINUE_CLIENT and nothing happened, then pressed Ctrl-C again, and now I am back at my command prompt. I assume the job is still running in the background, but I wanted to know how I can jump back into Data Pump so that I can see the output again and also check the status via the export monitoring tool.
How can I do this?

I got it going.
from my prompt, I had to issue:
impdp system@??? attach=jobname
IMPORT > START_JOB
and if needed
IMPORT > CONTINUE_CLIENT -
DATAPUMP ERROR ::::PLZ HELP ME
HI all,
Every day, I generate a Data Pump export of my Oracle database (Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 on Red Hat Linux Server).
Suddenly, this morning I have a problem; my export procedure does not work any more. I get this Oracle error:
Error codes -29283: ORA-29283: invalid file operation
ORA-06512: at "SYS.UTL_FILE", line 475
ORA-29283: invalid file operation
ORA-31626: job does not exist
ORA-31626: job does not exist
Someone can help me please;
Thank you ;
Regards,
CREATE OR REPLACE PROCEDURE SP_DPUMP_EXPORT (
P_DIRECTORY_NAME IN VARCHAR2
)
AS
/*
|| Procedure: SP_DPUMP_EXPORT
||
|| Creates a nightly DataPump Export of all User schema
||
*/
idx NUMBER; -- Loop index
JobHandle NUMBER; -- Data Pump job handle
PctComplete NUMBER; -- Percentage of job complete
JobState VARCHAR2(30); -- To keep track of job state
LogEntry ku$_LogEntry; -- For WIP and error messages
JobStatus ku$_JobStatus; -- The job status from get_status
Status ku$_Status; -- The status object returned by get_status
BEGIN
-- Build a handle for the export job
JobHandle :=
DBMS_DATAPUMP.OPEN(
operation => 'EXPORT'
,job_mode => 'FULL'
,remote_link => NULL
,job_name => 'DEVXEN03_'||'OBJECTS'
,version => 'LATEST'
);
-- Using the job handle value obtained, specify multiple dump files for the job
-- and the directory to which the dump files should be written. Note that the
-- directory object must already exist and the user account running the job must
-- have WRITE access permissions to the directory
DBMS_DATAPUMP.ADD_FILE(
handle => JobHandle
,filename => 'DEVXEN03_'||TO_CHAR(SYSDATE-1,'YYYYMMDD')||'.dmp'
,directory => P_DIRECTORY_NAME
,filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE
/* ,filesize => '100M' */
);
DBMS_DATAPUMP.ADD_FILE(
handle => JobHandle
,filename => 'DEVXEN03_'||TO_CHAR(SYSDATE-1,'YYYYMMDD')||'.log'
,directory => P_DIRECTORY_NAME
,filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE
);
-- Apply a metadata filter to restrict the DataPump Export job to only return
-- selected tables and their dependent objects from the Miva schema
/* DBMS_DATAPUMP.METADATA_FILTER(
handle => JobHandle
,NAME => 'SCHEMA_EXPR'
,VALUE => '= ''ALL'''
-- ,object_type => 'TABLE'
); */
-- Initiate the DataPump Export job
DBMS_DATAPUMP.START_JOB(JobHandle);
-- If no exception has been returned when the job was initiated, this loop will
-- keep track of the job and return progress information until the job is done
PctComplete := 0;
JobState := 'UNDEFINED';
WHILE(JobState != 'COMPLETED') and (JobState != 'STOPPED')
LOOP
DBMS_DATAPUMP.GET_STATUS(
handle => JobHandle
,mask => 15 -- DBMS_DATAPUMP.ku$_status_job_error + DBMS_DATAPUMP.ku$_status_job_status + DBMS_DATAPUMP.ku$_status_wip
,timeout => NULL
,job_state => JobState
,status => Status
);
JobStatus := Status.job_status;
-- Whenever the PctComplete value has changed, display it
IF JobStatus.percent_done != PctComplete THEN
DBMS_OUTPUT.PUT_LINE('*** Job percent done = ' || TO_CHAR(JobStatus.percent_done));
PctComplete := JobStatus.percent_done;
END IF;
-- Whenever a work-in progress message or error message arises, display it
IF (BITAND(Status.mask,DBMS_DATAPUMP.ku$_status_wip) != 0) THEN
LogEntry := Status.wip;
ELSE
IF (BITAND(Status.mask,DBMS_DATAPUMP.ku$_status_job_error) != 0) THEN
LogEntry := Status.error;
ELSE
LogEntry := NULL;
END IF;
END IF;
IF LogEntry IS NOT NULL THEN
idx := LogEntry.FIRST;
WHILE idx IS NOT NULL
LOOP
DBMS_OUTPUT.PUT_LINE(LogEntry(idx).LogText);
idx := LogEntry.NEXT(idx);
END LOOP;
END IF;
END LOOP;
-- Successful DataPump Export job completion, so detach from the job
DBMS_OUTPUT.PUT_LINE('Job has successfully completed');
DBMS_OUTPUT.PUT_LINE('Final job state = ' || JobState);
DBMS_DATAPUMP.DETACH(JobHandle);
EXCEPTION
WHEN OTHERS THEN
DBMS_OUTPUT.PUT_LINE('ERROR DATA PUMP EXPORT : '||SQLERRM);
END SP_DPUMP_EXPORT;
Message was edited by:
HAGGAR

In your SQL prompt, logged in as SYSDBA:
--define a variable
variable JobHandle number
Execute each step as in your procedure, and check at which step you get an error.
1)
JobHandle :=
DBMS_DATAPUMP.OPEN(
operation => 'EXPORT'
,job_mode => 'FULL'
,remote_link => NULL
,job_name => 'DEVXEN03_'||'OBJECTS'
,version => 'LATEST'
);
-- Using the job handle value obtained, specify multiple dump files for the job
-- and the directory to which the dump files should be written. Note that the
-- directory object must already exist and the user account running the job must
-- have WRITE access permissions to the directory
2)
DBMS_DATAPUMP.ADD_FILE(
handle => JobHandle
,filename => 'DEVXEN03_'||TO_CHAR(SYSDATE-1,'YYYYMMDD')||'.dmp'
,directory => P_DIRECTORY_NAME
,filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE
/* ,filesize => '100M' */
);
3)
DBMS_DATAPUMP.ADD_FILE(
handle => JobHandle
,filename => 'DEVXEN03_'||TO_CHAR(SYSDATE-1,'YYYYMMDD')||'.log'
,directory => P_DIRECTORY_NAME
,filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE
);
-- Initiate the DataPump Export job
4)
DBMS_DATAPUMP.START_JOB(JobHandle);
Execute it manually, or include exception handling for each statement in the procedure, to know exactly where the error occurs.
SS -
Datapump network_link questions
I had a few high level datapump questions that I have been unable to find a straight answer for and it's been a little while since I last did a large migration. Database will be either 10gR2 or 11gR2.
- Since you are not using any files, does the Oracle user doing either the export or import even need a directory created and granted read/write on it to the user?
- Does the user even need any other permission besides create session? (like create table, index, etc) I wasn't sure if any create table statements are executed behind the scenes when the DDL is run.
- Just out of curiosity, is there any other way to use NETWORK_LINK without using a public database link? The system guys do not like us creating these; they see it as a security issue for some reason...
- Does using NETWORK_LINK lock the tables out for write access when doing a single schema level export/import or can it be done while the database is online without any consequences (besides some reduced performance I'd imagine)? I thought I read transportable tablespaces locked the tables for read access only (not that I'm trying to do that).
- We have at least 1 TB of raw data in the tablespaces to migrate. If I start the data pump using the NETWORK_LINK, and the network connection gets dropped unexpectedly (router outage, etc), and say I was at 900 GB, would I have to start over or is there any kind of savepoint and resume concept?
Thanks.

Hi,
is there any other way to use NETWORK_LINK without using a public database link?
Please find the document
http://www.oracle-base.com/articles/10g/oracle-data-pump-10g.php#NetworkExportsImports
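(For what it's worth, a hedged sketch: the NETWORK_LINK parameter also accepts a private database link owned by the user running the job, so a public link should not be required; all names below are illustrative.)

```sql
-- Private (non-PUBLIC) link owned by the exporting user:
CREATE DATABASE LINK src_link
  CONNECT TO remote_user IDENTIFIED BY remote_pwd
  USING 'SRC_TNS';
```

Then something along the lines of `expdp scott/tiger NETWORK_LINK=src_link SCHEMAS=scott DIRECTORY=dp_dir DUMPFILE=scott.dmp` should work without any PUBLIC link.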
is there any kind of savepoint and resume concept?
expdp help=y OR impdp help=y
START_JOB Start/resume current job.
Please refer oracle documentation
http://docs.oracle.com/cd/B28359_01/server.111/b28319/dp_export.htm
Best of Luck :)
Regards
Hitgon
Edited by: hitgon on Apr 26, 2012 9:34 AM -
Hi,
I've set up two stored procedures:
1. One to do a datapump iimport over a network link using the datapump api.
2. Another one to read the resulting log file on the server into a clob fiield in a table.
Individually, both of the above work fine.
The problem is when I try to call stored procedure number 2 from number 1. Since start_job returns before the import completes, the log file is still empty when the procedure that reads it into the CLOB runs.
I know I can monitor user_datapump_sessions for completion; any ideas how to make that work in a stored procedure? I thought maybe chained jobs, but I'm not sure.

I found the answer: use DBMS_DATAPUMP.WAIT_FOR_JOB after start_job.
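(A minimal hedged sketch of that pattern; the handle setup is elided and the variable names are illustrative:)

```sql
dbms_datapump.start_job(handle => h1);
-- WAIT_FOR_JOB blocks until the job completes or stops, then returns the
-- final state, so any code after this line (such as reading the log file
-- into the CLOB) sees a finished job.
dbms_datapump.wait_for_job(handle => h1, job_state => v_state);
```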
Thanks. -
I am wanting to do a Data Pump export from a procedure. Every time I run my procedure I get an error that says "job does not exist". From what I read online I thought I could set my procedure up like this, but I guess I can't. Thanks for the help.
PROCEDURE "NIGHTLY_SCHEMA_EXPORT" AS
l_dp_handle NUMBER;
l_last_job_state varchar2(30) := 'UNDEFINED';
l_job_state varchar2(30) := 'UNDEFINED';
l_sts KU$_STATUS;
begin
l_dp_handle := DBMS_DATAPUMP.OPEN(
operation => 'EXPORT',
job_mode => 'SCHEMA',
remote_link => NULL,
job_name => 'EMP_EXPORT',
version => 'LATEST');
DBMS_DATAPUMP.add_file(
handle => l_dp_handle,
filename => 'NIGHTLY_SCHEMA_EXPORT.DMP',
directory => 'WRITABLE_DIRECTORY');
DBMS_DATAPUMP.metadata_filter(
handle => l_dp_handle,
name => 'SCHEMA_EXPR',
value => '=''VFT''');
DBMS_DATAPUMP.start_job(l_dp_handle);
DBMS_DATAPUMP.detach(l_dp_handle);
end;
This is the complete error that i am getting
ORA-31626: job does not exist
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
ORA-06512: at "SYS.DBMS_DATAPUMP", line 911
ORA-06512: at "SYS.DBMS_DATAPUMP", line 4356
ORA-06512: at "DALLAS.NIGHTLY_SCHEMA_EXPORT", line 8
ORA-06512: at line 2

I have the same problem with a similar job:
declare
n_handle number;
begin
n_handle := dbms_datapump.open('EXPORT','SCHEMA',NULL,'REFRESH KONLOG','VALID');
end;
the error message is:
[1]: (Error): ORA-31626: job does not exist ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79 ORA-06512: at "SYS.DBMS_DATAPUMP", line 820 ORA-06512: at "SYS.DBMS_DATAPUMP", line 3742 ORA-06512: at line 4
Any ideas? Thanks in advance.
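(A hedged note on one common cause of ORA-31626 from inside a stored procedure: in a definer's-rights procedure, privileges granted through roles are disabled, so DBMS_DATAPUMP.OPEN can fail to create its master table unless the owning user holds the needed privileges directly. A sketch of the direct grants, with illustrative user and tablespace names:)

```sql
-- Grant directly to the procedure owner, not via a role:
GRANT CREATE TABLE TO dallas;
ALTER USER dallas QUOTA UNLIMITED ON users;
```

Separately, in the second example above, `version => 'VALID'` is not one of the documented values (a version number, 'COMPATIBLE', or 'LATEST'), and a job name containing a space may need careful quoting.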
Andreas -
Datapump API: Import all tables in schema
Hi,
how can I import all tables using a wildcard in the datapump-api?
Thanks in advance,
tensai_tensai_ wrote:
Thanks for the links, but I already know them...
My problem is that I couldn't find an example which shows how to perform an import via the API which imports all tables, but nothing else.
Can someone please help me with a code example?

I'm not sure what you mean by "imports all tables, but nothing else". It could mean that you only want to import the tables but not the data, and/or not the statistics, etc.
Using the samples provided in the manuals:
DECLARE
ind NUMBER; -- Loop index
h1 NUMBER; -- Data Pump job handle
percent_done NUMBER; -- Percentage of job complete
job_state VARCHAR2(30); -- To keep track of job state
le ku$_LogEntry; -- For WIP and error messages
js ku$_JobStatus; -- The job status from get_status
jd ku$_JobDesc; -- The job description from get_status
sts ku$_Status; -- The status object returned by get_status
spos NUMBER; -- String starting position
slen NUMBER; -- String length for output
BEGIN
-- Create a (user-named) Data Pump job to do a "schema" import
h1 := DBMS_DATAPUMP.OPEN('IMPORT','SCHEMA',NULL,'EXAMPLE8');
-- Specify the single dump file for the job (using the handle just returned)
-- and directory object, which must already be defined and accessible
-- to the user running this procedure. This is the dump file created by
-- the export operation in the first example.
DBMS_DATAPUMP.ADD_FILE(h1,'example1.dmp','DATA_PUMP_DIR');
-- A metadata remap will map all schema objects from one schema to another.
DBMS_DATAPUMP.METADATA_REMAP(h1,'REMAP_SCHEMA','RANDOLF','RANDOLF2');
-- Include and exclude
dbms_datapump.metadata_filter(h1,'INCLUDE_PATH_LIST','''TABLE''');
dbms_datapump.metadata_filter(h1,'EXCLUDE_PATH_EXPR','LIKE ''TABLE/C%''');
dbms_datapump.metadata_filter(h1,'EXCLUDE_PATH_EXPR','LIKE ''TABLE/F%''');
dbms_datapump.metadata_filter(h1,'EXCLUDE_PATH_EXPR','LIKE ''TABLE/G%''');
dbms_datapump.metadata_filter(h1,'EXCLUDE_PATH_EXPR','LIKE ''TABLE/I%''');
dbms_datapump.metadata_filter(h1,'EXCLUDE_PATH_EXPR','LIKE ''TABLE/M%''');
dbms_datapump.metadata_filter(h1,'EXCLUDE_PATH_EXPR','LIKE ''TABLE/P%''');
dbms_datapump.metadata_filter(h1,'EXCLUDE_PATH_EXPR','LIKE ''TABLE/R%''');
dbms_datapump.metadata_filter(h1,'EXCLUDE_PATH_EXPR','LIKE ''TABLE/TR%''');
dbms_datapump.metadata_filter(h1,'EXCLUDE_PATH_EXPR','LIKE ''TABLE/STAT%''');
-- no data please
DBMS_DATAPUMP.DATA_FILTER(h1, 'INCLUDE_ROWS', 0);
-- If a table already exists in the destination schema, skip it (leave
-- the preexisting table alone). This is the default, but it does not hurt
-- to specify it explicitly.
DBMS_DATAPUMP.SET_PARAMETER(h1,'TABLE_EXISTS_ACTION','SKIP');
-- Start the job. An exception is returned if something is not set up properly.
DBMS_DATAPUMP.START_JOB(h1);
-- The import job should now be running. In the following loop, the job is
-- monitored until it completes. In the meantime, progress information is
-- displayed. Note: this is identical to the export example.
percent_done := 0;
job_state := 'UNDEFINED';
while (job_state != 'COMPLETED') and (job_state != 'STOPPED') loop
dbms_datapump.get_status(h1,
dbms_datapump.ku$_status_job_error +
dbms_datapump.ku$_status_job_status +
dbms_datapump.ku$_status_wip,-1,job_state,sts);
js := sts.job_status;
-- If the percentage done changed, display the new value.
if js.percent_done != percent_done
then
dbms_output.put_line('*** Job percent done = ' ||
to_char(js.percent_done));
percent_done := js.percent_done;
end if;
-- If any work-in-progress (WIP) or Error messages were received for the job,
-- display them.
if (bitand(sts.mask,dbms_datapump.ku$_status_wip) != 0)
then
le := sts.wip;
else
if (bitand(sts.mask,dbms_datapump.ku$_status_job_error) != 0)
then
le := sts.error;
else
le := null;
end if;
end if;
if le is not null
then
ind := le.FIRST;
while ind is not null loop
dbms_output.put_line(le(ind).LogText);
ind := le.NEXT(ind);
end loop;
end if;
end loop;
-- Indicate that the job finished and gracefully detach from it.
dbms_output.put_line('Job has completed');
dbms_output.put_line('Final job state = ' || job_state);
dbms_datapump.detach(h1);
exception
when others then
dbms_output.put_line('Exception in Data Pump job');
dbms_datapump.get_status(h1,dbms_datapump.ku$_status_job_error,0,
job_state,sts);
if (bitand(sts.mask,dbms_datapump.ku$_status_job_error) != 0)
then
le := sts.error;
if le is not null
then
ind := le.FIRST;
while ind is not null loop
spos := 1;
slen := length(le(ind).LogText);
if slen > 255
then
slen := 255;
end if;
while slen > 0 loop
dbms_output.put_line(substr(le(ind).LogText,spos,slen));
spos := spos + 255;
slen := length(le(ind).LogText) + 1 - spos;
end loop;
ind := le.NEXT(ind);
end loop;
end if;
end if;
-- dbms_datapump.stop_job(h1);
dbms_datapump.detach(h1);
END;
/
This should import nothing but the tables (excluding the data and the table statistics) from a schema export (including the remapping shown here). You can play around with the EXCLUDE_PATH_EXPR expressions; check the serveroutput generated for possible values to use in EXCLUDE_PATH_EXPR.
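To see which object paths are valid in EXCLUDE_PATH_EXPR, you can also query the export-objects dictionary views directly (a sketch; assumes SELECT access to TABLE_EXPORT_OBJECTS):
{code}
-- List candidate object paths for a table-level export/import
SELECT object_path, comments
  FROM table_export_objects
 WHERE object_path LIKE 'TABLE/%';
{code}
The DATABASE_EXPORT_OBJECTS and SCHEMA_EXPORT_OBJECTS views serve the same purpose for full and schema mode jobs.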
Use the DBMS_DATAPUMP.DATA_FILTER procedure if you want to exclude the data.
For more samples, refer to the documentation:
http://download.oracle.com/docs/cd/B28359_01/server.111/b28319/dp_api.htm#i1006925
Regards,
Randolf
Oracle related stuff blog:
http://oracle-randolf.blogspot.com/
SQLTools++ for Oracle (Open source Oracle GUI for Windows):
http://www.sqltools-plusplus.org:7676/
http://sourceforge.net/projects/sqlt-pp/ -
Trying to import tables from a Data Pump file that was created in transportable mode
Hi, I am using impdp on Oracle 11.2.0.3 and have a dump file that contains an export of tables done in transportable tablespace mode.
I want to import 3 of the tables from the file concerned into another database using impdp, but it is not working.
Error
ORA-39002: invalid operation
ORA-39061: import mode FULL conflicts with export mode TRANSPORTABLE
{code}
userid=archive/MDbip25
DIRECTORY=TERMSPRD_EXTRACTS
DUMPFILE=archiveexppre.964.dmp
LOGFILE=por_200813.log
PARALLEL=16
TABLES=ZPX_RTRN_CDN_STG_BAK,ZPX_RTRN_STG_BAK,ZPX_STRN_STG_BAK
REMAP_TABLESPACE=BI_ARCHIVE_DATA:BI_ARCHIVE_LARGE_DATA
REMAP_TABLESPACE=BI_ARCHIVE_IDX:BI_ARCHIVE_LARGE_IDX
{code}
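For context, ORA-39061 indicates a mode mismatch: a dump written by a TRANSPORTABLE-mode export cannot be read by a table-mode or full-mode import job. A hedged sketch of a matching transportable-mode parfile (the datafile path is hypothetical and must point at the copied datafile; individual tables cannot be selected with TABLES= in this mode, the whole tablespace set comes in):
{code}
userid=archive/********
DIRECTORY=TERMSPRD_EXTRACTS
DUMPFILE=archiveexppre.964.dmp
LOGFILE=por_200813_tts.log
TRANSPORT_DATAFILES='/u01/oradata/archived_partitions01.dbf'
REMAP_TABLESPACE=BI_ARCHIVE_DATA:BI_ARCHIVE_LARGE_DATA
REMAP_TABLESPACE=BI_ARCHIVE_IDX:BI_ARCHIVE_LARGE_IDX
{code}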
Any ideas?
Hi,
Export command
{code}
procedure export_old_partitions_to_disk (pi_SEQ_num NUMBER)
is
h1 number; -- Datapump handle
dir_name CONSTANT ALL_directories.DIRECTORY_NAME%type :='DATA_EXPORTS_DIR'; -- Directory Name
v_file_name varchar2(100);
v_log_name varchar2(100);
v_job_status ku$_Status; -- The status object returned by get_status
v_job_state VARCHAR2(4000);
v_status ku$_Status1010;
v_logs ku$_LogEntry1010;
v_row PLS_INTEGER;
v_current_sequence_number archive_audit.aa_etl_run_num_seq%type;
v_jobState user_datapump_jobs.state%TYPE;
begin
-- Set to read only to make transportable
execute immediate ('alter tablespace ARCHIVED_PARTITIONS read only');
-- Get last etl_run_num_seq by querying public synonym ARCHIVE_ETL_RUN_NUM_SEQ
-- Need check no caching on etl_run_num_seq
select last_number - 1
into v_current_sequence_number
from ALL_SEQUENCES A
WHERE A.SEQUENCE_NAME = 'ETL_RUN_NUM_SEQ';
v_file_name := 'archiveexppre.'||PI_SEQ_NUM||'.dmp';--v_current_sequence_number;
v_log_name := 'archiveexpprelog.'||PI_SEQ_NUM||'.log';--v_current_sequence_number;
dbms_output.put_line(v_file_name);
dbms_output.put_line(v_log_name);
-- Create a (user-named) Data Pump job to do a schema export.
-- dir_name := 'DATA_EXPORTS_DIR';
h1 := dbms_datapump.open(operation =>'EXPORT',
job_mode =>'TRANSPORTABLE',
remote_link => NULL,
job_name => 'ARCHIVE_OLD_PARTITIONS_'||PI_SEQ_NUM);
dbms_datapump.add_file(handle =>h1,
filename => v_file_name,
directory => dir_name,
filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE,
reusefile => 1); -- value of 1 instructs to overwrite existing file
dbms_datapump.add_file(handle =>h1,
filename => v_log_name,
directory => dir_name,
filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE,
reusefile => 1); -- value of 1 instructs to overwrite existing file
dbms_datapump.metadata_filter(
handle => h1,
name => 'TABLESPACE_EXPR',
value => 'IN(''ARCHIVED_PARTITIONS'')'); -- the closing parenthesis was missing here
-- Note: 'TABLE_FILTER' is not a valid metadata filter name; NAME_LIST is used
-- here instead (the table name must be quoted inside the value)
dbms_datapump.metadata_filter(handle => h1,
name => 'NAME_LIST',
value => '''BATCH_AUDIT''');
--dbms_datapump.set_parameter(h1, 'TRANSPORTABLE', 'ALWAYS');
-- Start the datapump_job
dbms_datapump.start_job(h1);
-- Wait for the job to finish before reporting its final state
dbms_datapump.wait_for_job(h1, v_jobState);
dbms_output.put_line('Status '||v_jobState);
dbms_output.put_line('Job has completed');
execute immediate ('alter tablespace ARCHIVED_PARTITIONS read write');
exception
when others then
dbms_datapump.get_status(handle => h1,
mask => dbms_datapump.KU$_STATUS_WIP,
timeout=> 0,
job_state => v_job_state,
status => v_job_status);
dbms_output.put_line(v_job_state);
MISC_ROUTINES.record_error;
raise;
-- RAISE_APPLICATION_ERROR(-20010, DBMS_UTILITY.FORMAT_ERROR_BACKTRACE||' '||v_debug_table_name);
-- RAISE_APPLICATION_ERROR(-20010,DBMS_UTILITY.format_error_backtrace);
end export_old_partitions_to_disk;
{code} -
Hi,
I'm doing an import of a schema with one table with about 1 million (spatial) records. This import takes a very long time to complete: I ran the import on Friday, and when I looked at the process on Monday I saw it had finished at least 24 hours after I started it. Here's the output:
Z:\>impdp system/password@ora schemas=bla directory=dpump_dir1 dumpfile=schema8.dmp
Import: Release 10.1.0.2.0 - Production on Friday, 21 January, 2005 16:32
Copyright (c) 2003, Oracle. All rights reserved.
Connected to: Personal Oracle Database 10g Release 10.1.0.2.0 - Production
With the Partitioning, OLAP and Data Mining options
Master table "SYSTEM"."SYS_IMPORT_SCHEMA_02" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_SCHEMA_02": system/********@ora schemas=bla directory=dpump_dir1 dumpfile=schema8.dmp
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/SE_PRE_SCHEMA_PROCOBJACT/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "BLA"."NLD_NW" 266.1 MB 1217147 rows
Job "SYSTEM"."SYS_IMPORT_SCHEMA_02" successfully completed at 15:22
The dumpfile was created with:
Z:\>expdp system/password@orcl schemas=bla directory=dpump_dir1 dumpfile=schema8.dmp
The source DB is running on Linux and the target is running WinXP.
I thought datapump was meant to be really fast; what's going wrong in my situation?
I've just looked at the time needed to do the import into the same database from where I exported the dumpfile. That import operation takes 5 minutes.
-
Check the status expdp datapump job
All,
I have started expdp (Data Pump) on a 250GB schema. I want to know when the job will be completed.
I tried to find this from the views dba_datapump_sessions, v$session_longops and v$session, but all in vain.
Is it possible to find the completion time of an expdp job?
Your help is really appreciated.
Hi,
Have you started the job in interactive mode?
If yes, then you can get the execution status with the STATUS parameter. The default is zero; if you set it to a non-zero value, the status is redisplayed at that interval (in seconds).
Second, check "dba_datapump_jobs".
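For a rough completion estimate, long-running Data Pump operations also show up in v$session_longops; a sketch (the percentage math assumes totalwork is populated for the job's sessions):
{code}
-- Data Pump jobs currently defined in the database
SELECT owner_name, job_name, operation, state
  FROM dba_datapump_jobs;

-- Rough percent-done estimate for a running export
SELECT sid, serial#, sofar, totalwork,
       ROUND(sofar / totalwork * 100, 1) AS pct_done
  FROM v$session_longops
 WHERE opname LIKE '%EXPORT%'
   AND totalwork > 0
   AND sofar <> totalwork;
{code}
Alternatively, attach to the running job with "expdp ... ATTACH=<job_name>" and issue STATUS at the interactive prompt.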
- Pavan Kumar N -
How to load data in a oracle-datapump table ...
I have done following steps,
1) I have created a external table locating a .csv file
CREATE TABLE TB1 (
  COLLECTED_TIME VARCHAR2(8)
)
ORGANIZATION EXTERNAL (
  TYPE oracle_loader
  DEFAULT DIRECTORY SRC_DIR
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    BADFILE 'TB1.bad'
    LOGFILE 'TB1.log'
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '`'
    MISSING FIELD VALUES ARE NULL
    (COLLECTED_TIME)
  )
  LOCATION ('TB.csv')
);
2) I am creating a datapump table and .dmp file using the source from step1 table (TB1)
CREATE TABLE TB2
ORGANIZATION EXTERNAL
( TYPE oracle_datapump
DEFAULT DIRECTORY SRC_DIR
LOCATION ('TB.dmp')
) AS SELECT
to_date(decode(COLLECTED_TIME,'24:00:00','00:00:00',COLLECTED_TIME),'HH24:MI:SS') COLLECTED_TIME
FROM TB1;
3) Finally I have create a datapump table which will use TB.dmp as source created by step2
CREATE TABLE TB (
  COLLECTED_TIME DATE
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_DATAPUMP
  DEFAULT DIRECTORY SRC_DIR
  LOCATION ('TB.dmp')
);
Question: How do I create the TB.dmp file in SRC_DIR of step 2 using PL/SQL code?
How to get data in table TB of step3 using the PL/SQL code?
Thanks
abc
Hi,
Yes, I want to execute the SQL code of step 2 (as mentioned below) in a PL/SQL package:
CREATE TABLE TB2
ORGANIZATION EXTERNAL
( TYPE oracle_datapump
DEFAULT DIRECTORY SRC_DIR
LOCATION ('TB.dmp')
) AS SELECT
to_date(decode(COLLECTED_TIME,'24:00:00','00:00:00',COLLECTED_TIME),'HH24:MI:SS') COLLECTED_TIME
FROM TB1;
Please advise.
Thanks
abc
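Since CREATE TABLE is DDL, it cannot appear directly in a PL/SQL package body; one way to run step 2 from PL/SQL is native dynamic SQL (a sketch, assuming the SRC_DIR directory object already exists and TB1 is accessible):
{code}
BEGIN
  -- DDL cannot be executed directly in PL/SQL, so wrap it in EXECUTE IMMEDIATE.
  -- The q'[...]' quoting avoids having to double every single quote.
  EXECUTE IMMEDIATE q'[
    CREATE TABLE TB2
    ORGANIZATION EXTERNAL (
      TYPE oracle_datapump
      DEFAULT DIRECTORY SRC_DIR
      LOCATION ('TB.dmp')
    ) AS
    SELECT to_date(decode(COLLECTED_TIME, '24:00:00', '00:00:00',
                          COLLECTED_TIME), 'HH24:MI:SS') COLLECTED_TIME
      FROM TB1]';
END;
/
{code}
Creating TB2 writes TB.dmp into SRC_DIR as a side effect; the table TB of step 3 can then be created the same way (also via EXECUTE IMMEDIATE) and queried with a plain SELECT.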