IMPDP FOR Transportable Tablespace
Hi,
I have the following scenario:
From source database:
datafiles =2
expdp file = expdp.dmp
TABLESPACE NAME =AIC
Target database:
At the target location the AIC tablespace already exists, but in READ-WRITE mode.
My question is: before running impdp, do I need to put the AIC tablespace into READ ONLY mode and copy the source datafiles to that location?
impdp aic/pwd@DEV1 directory=aic_dump_dir
dumpfile=aic_data.dmp
transport_datafiles=/orag/u07/oradata/DEV1/aic_data01.dbf,/orag/u07/oradata/DEV1/aic_data02.dbf
Thanks in advance.
Hi,
The import will try to create that tablespace. If it already exists, the import job will fail. Like the previous reply said, one way to do this is to drop the existing tablespace. That is, if you can. If you can't then you can remap_tablespace, but if the objects in the tablespace you are importing already exist in the existing tablespace, you will get errors there.
Maybe you need to explain what outcome you want; that might help us find a solution.
Thanks
Dean
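If dropping the tablespace is acceptable, a minimal sketch of that route using the names from the original post (this assumes the existing AIC contents on DEV1 are disposable, and that the datafiles were copied from a source where AIC was READ ONLY at export time):

```
SQL> -- on the target, only if the existing AIC contents are disposable!
SQL> DROP TABLESPACE aic INCLUDING CONTENTS AND DATAFILES;

$ impdp aic/pwd@DEV1 directory=aic_dump_dir dumpfile=aic_data.dmp \
    transport_datafiles=/orag/u07/oradata/DEV1/aic_data01.dbf,/orag/u07/oradata/DEV1/aic_data02.dbf
```

Note that the target tablespace does not need to be made READ ONLY; it is the source tablespace that must be READ ONLY when the datafiles are copied and exported.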
Similar Messages
-
Error ORA-39125 and ORA-04063 during export for transportable tablespace
I'm using the Oracle Enterprise Manager (browser is IE) to create a tablespace transport file. Maintenance...Transport Tablespaces uses the wizard to walk me through each step. The job gets created and submitted.
The 'Prepare' and 'Convert Datafile(s)' job steps complete successfully. The Export step fails with the following error. Can anyone shed some light on this for me?
Thank you in advance!
=======================================================
Output Log
Export: Release 10.2.0.2.0 - Production on Sunday, 03 September, 2006 19:31:34
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Username:
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - Production
With the Partitioning, OLAP and Data Mining options
Starting "SYS"."GENERATETTS000024": SYS/******** AS SYSDBA dumpfile=EXPDAT_GENERATETTS000024.DMP directory=EM_TTS_DIR_OBJECT transport_tablespaces=SIEBEL job_name=GENERATETTS000024 logfile=EXPDAT.LOG
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
ORA-39125: Worker unexpected fatal error in KUPW$WORKER.UNLOAD_METADATA while calling DBMS_METADATA.FETCH_XML_CLOB [TABLE_STATISTICS]
ORA-04063: view "SYS.KU$_IOTABLE_VIEW" has errors
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 105
ORA-06512: at "SYS.KUPW$WORKER", line 6241
----- PL/SQL Call Stack -----
object line object
handle number name
2CF48130 14916 package body SYS.KUPW$WORKER
2CF48130 6300 package body SYS.KUPW$WORKER
2CF48130 2340 package body SYS.KUPW$WORKER
2CF48130 6861 package body SYS.KUPW$WORKER
2CF48130 1262 package body SYS.KUPW$WORKER
2CF0850C 2 anonymous block
Job "SYS"."GENERATETTS000024" stopped due to fatal error at 19:31:44
More information:
Using SQL Developer, I checked the view SYS.KU$_IOTABLE_VIEW referred to in the error message, and it does indeed report a problem with that view. The following code is the definition of that view. I have no idea what it's supposed to be doing, because it was part of the default installation. I certainly didn't write it. I did, however, execute the 'Test Syntax' button (on the Edit View screen), and the result was this error message:
=======================================================
The SQL syntax is valid, however the query is invalid or uses functionality that is not supported.
Unknown error(s) parsing SQL: oracle.javatools.parser.plsql.syntax.ParserException: Unexpected token
=======================================================
The SQL for the view looks like this:
REM SYS KU$_IOTABLE_VIEW
CREATE OR REPLACE FORCE VIEW "SYS"."KU$_IOTABLE_VIEW" OF "SYS"."KU$_IOTABLE_T"
WITH OBJECT IDENTIFIER (obj_num) AS
select '2','3',
t.obj#,
value(o),
-- if this is a secondary table, get base obj and ancestor obj
decode(bitand(o.flags, 16), 16,
(select value(oo) from ku$_schemaobj_view oo, secobj$ s
where o.obj_num=s.secobj#
and oo.obj_num=s.obj#),
null),
decode(bitand(o.flags, 16), 16,
(select value(oo) from ku$_schemaobj_view oo, ind$ i, secobj$ s
where o.obj_num=s.secobj#
and i.obj#=s.obj#
and oo.obj_num=i.bo#),
null),
(select value(s) from ku$_storage_view s
where i.file# = s.file_num
and i.block# = s.block_num
and i.ts# = s.ts_num),
ts.name, ts.blocksize,
i.dataobj#, t.bobj#, t.tab#, t.cols,
t.clucols, i.pctfree$, i.initrans, i.maxtrans,
mod(i.pctthres$,256), i.spare2, t.flags,
t.audit$, t.rowcnt, t.blkcnt, t.empcnt, t.avgspc, t.chncnt, t.avgrln,
t.avgspc_flb, t.flbcnt, t.analyzetime, t.samplesize, t.degree,
t.instances, t.intcols, t.kernelcols, t.property, 'N', t.trigflag,
t.spare1, t.spare2, t.spare3, t.spare4, t.spare5, t.spare6,
decode(bitand(t.trigflag, 65536), 65536,
(select e.encalg from sys.enc$ e where e.obj#=t.obj#),
null),
decode(bitand(t.trigflag, 65536), 65536,
(select e.intalg from sys.enc$ e where e.obj#=t.obj#),
null),
(select c.name from col$ c
where c.obj# = t.obj#
and c.col# = i.trunccnt and i.trunccnt != 0
and bitand(c.property,1)=0),
cast( multiset(select * from ku$_column_view c
where c.obj_num = t.obj#
order by c.col_num, c.intcol_num
) as ku$_column_list_t),
(select value(nt) from ku$_nt_parent_view nt
where nt.obj_num = t.obj#),
cast( multiset(select * from ku$_constraint0_view con
where con.obj_num = t.obj#
and con.contype not in (7,11)
) as ku$_constraint0_list_t),
cast( multiset(select * from ku$_constraint1_view con
where con.obj_num = t.obj#
) as ku$_constraint1_list_t),
cast( multiset(select * from ku$_constraint2_view con
where con.obj_num = t.obj#
) as ku$_constraint2_list_t),
cast( multiset(select * from ku$_pkref_constraint_view con
where con.obj_num = t.obj#
) as ku$_pkref_constraint_list_t),
(select value(ov) from ku$_ov_table_view ov
where ov.bobj_num = t.obj#
and bitand(t.property, 128) = 128), -- IOT has overflow
(select value(etv) from ku$_exttab_view etv
where etv.obj_num = o.obj_num)
from ku$_schemaobj_view o, tab$ t, ind$ i, ts$ ts
where t.obj# = o.obj_num
and t.pctused$ = i.obj# -- For IOTs, pctused has index obj#
and bitand(t.property, 32+64+512) = 64 -- IOT but not overflow
-- or partitioned (32)
and i.ts# = ts.ts#
AND (SYS_CONTEXT('USERENV','CURRENT_USERID') IN (o.owner_num, 0) OR
EXISTS ( SELECT * FROM session_roles
WHERE role='SELECT_CATALOG_ROLE' ));
GRANT SELECT ON "SYS"."KU$_IOTABLE_VIEW" TO PUBLIC; -
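Not from this thread, just a general first step worth trying when a SYS dictionary view such as KU$_IOTABLE_VIEW is invalid: list the invalid SYS objects and recompile them, for example:

```
SQL> SELECT object_name, object_type
       FROM dba_objects
      WHERE owner = 'SYS' AND status = 'INVALID';

SQL> ALTER VIEW sys.ku$_iotable_view COMPILE;
SQL> -- or recompile everything invalid in one pass:
SQL> @?/rdbms/admin/utlrp.sql
```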
10gR2 Transportable Tablespaces Certified for EBS 11i for Migration
Guys,
Good news: 10gR2 transportable tablespaces are now certified for EBS 11i.
Here is Steven Chan's blog link. I know personally how much I struggled with expdp and impdp on Linux, so this is good news for anyone migrating 11i.
http://blogs.oracle.com/stevenChan/2010/04/10gr2_xtts_ebs11i.html?utm_source=feedburner&utm_medium=email&utm_campaign=Feed%3A+OracleE-BusinessSuiteTechnology+%28Oracle+E-Business+Suite+Technology%29
To migrate the database you can either use transportable tablespaces or export/import.
Transportable Tablespaces
New Source Database Added for EBS 12 + 11gR2 Transportable Tablespaces
https://blogs.oracle.com/stevenChan/entry/new_source_database_added_for
New Source Databases Added for Transportable Tablespaces + EBS 11i
https://blogs.oracle.com/stevenChan/entry/new_source_databases_added_for
Database Migration using 11gR2 Transportable Tablespaces Now Certified for EBS 12
https://blogs.oracle.com/stevenChan/entry/database_migration_using_11gr2_transportable
Export/Import
Export/import process for 12.0 or 12.1 using 11gR1 or 11gR2 (Doc ID 741818.1)
Export/import notes on Applications 11i Database 11g (Doc ID 557738.1)
If your application will remain on the same OS (which is different than the target database OS) then please also see:
Oracle EBS R12 with Database Tier Only Platform on Oracle Database 11.2.0 (Doc ID 456347.1)
Using Oracle EBS with a Split Configuration Database Tier on 11gR2 (Doc ID 946413.1)
Oracle E-Business Suite Upgrades and Platform Migration (Doc ID 1377213.1)
Thanks,
Hussein -
Oracle 10gR2 - Any way to speed up Transportable Tablespace Import?
I have a nightly process that uses expdp/impdp with the transportable tablespace option to copy a full schema from one database to another. This process initially used plain expdp/impdp; however, index creation started to slow it down, at which time I switched to transportable tablespaces.
In the last 3 months, the number of objects in the schema has not changed, but the amount of data has increased 15.73%. At the same time, the transportable tablespace impdp has taken 30.77% longer to run.
Is there any way to speed up a Transportable Tablespace impdp in Oracle 10gR2? The docs show that I cannot use parallel with TTS. I have excluded STATISTICS in the export.
Eventually I will be moving away from doing this TTS expdp/impdp, however for now I need to do what I can to contain the amount of time it takes to process.
Environment: Oracle 10.2.0.4 (source and destination) on Sun Solaris SPARC v10. We are not using ASM currently. Tablespace is 6 datafiles for a total of 87.5GB, 791 tables, 1924 indexes, 1773 triggers, and 10 LOBs.
Any help would be greatly appreciated.
Thanks!
-Vorpel
My reading of this thread was that there was a need to be able to do offline setups when the client had lost Oracle Lite (we get this on PDAs all the time when they go totally dead), so the CD install is something that can be sent out.
Any install (CD or otherwise) will be empty in terms of applications and data; not much you can do about that on a CD. But depending on the type of device and the storage possibilities you have, you can secure the database and application files separately. For example, on a PDA, using the external SD or flash storage for the data and file storage will keep them when the Oracle directory is lost from main storage, so all that is needed after the Oracle Lite recovery is to recreate the ODBC file (and possibly the polite file if you have special parameters). Setting the CRR flags for 'real' data on the server is a trick to bring the database up to date without a total rebuild.
NOTE: if you use SD cards etc. on PDAs/small devices, beware of the Windows CE/Mobile power-up differential between main memory and external storage. If you power down in the middle of an update or query, you can get device read/write errors when the device powers up, because processing continues before the storage media is active -
Standby not in sync because test on primary of transportable tablespace etc
This environment is new build environment , have not in use yet.
db version is 11.2.0.3 in linux, both primary/standby are configured in RAC two nodes and storage are in ASM storage.
The primary db was tested by migrating data using the transportable tablespace method, so the imported datafiles were placed on a local filesystem, which I had to switch to ASM afterwards.
Because of that, the standby got the impression that the datafiles should be on the local filesystem, and it became invalidated.
Here is the error from standby log:
ORA-01157: cannot identify/lock data file 4 - see DBWR trace file
ORA-01110: data file 4: '/stage/index.341.785186191'
and datafiles names are all in local filesystem in standby: /stage/...../
However in primary datafile names are all in ASM instance:
+DAT/prd/datafile/arindex.320.788067609
How to resolve such situation?
Thanks
My steps for transportable tablespaces, after scp'ing the datafiles to Linux, are: import the transportable datafiles into the database, then use RMAN to copy them to '+DATA' in ASM, and then switch the database to the copy.
Those procedure all worked on primary.
The standby was built before the transportable tablespace operation, when there were only the SYSTEM, USERS, TOOLS etc. basic tablespaces. It worked and was in sync with the primary. However, every time the primary tested data migration with transportable tablespaces, it stopped working.
So what is the right way to perform this migration? How can I make the standby sync with the primary and read the datafiles in ASM storage?
Is there a way, before the RMAN step, to transfer the filesystem datafiles to ASM? Can I copy those local filesystem datafiles to ASM storage and then do the transportable datafile import with the files already on ASM?
If you added any datafiles manually then those will be created on the standby as well, as per the settings.
But here you are performing TTS. DML and DDL can of course be transferred to the standby by archive logs, but what about the actual datafiles? In this case the files exist on the primary but not on the standby.
AFAIK, you have to perform a couple of steps once your migration on the primary is complete. Do as follows (and do test it):
1) Complete your migration on primary
-- Check all data file status and location.
2) Now Restore standby controlfile & newly migrated tablespaces in standby Database
-- here you can directly restore to ASM using RMAN, because you are taking backup in primary using RMAN. So RMAN can restore them directly in ASM file system.
3) Make sure you have all the datafiles where you want and take care no one data file is missing. Crosscheck all tablespace information in primary & standby databases.
4) Now start *MRP*
Here is my above plan, But I suggest to test it...
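The restore-into-ASM idea can also be done with RMAN image copies once the files exist on the primary. A rough sketch for one datafile (file number 4 and the +DATA disk group are taken from the thread; the exact sequence is an assumption on my part, so test it first):

```
RMAN> BACKUP AS COPY DATAFILE 4 FORMAT '+DATA';
RMAN> SQL 'ALTER DATABASE DATAFILE 4 OFFLINE';
RMAN> SWITCH DATAFILE 4 TO COPY;
RMAN> RECOVER DATAFILE 4;
RMAN> SQL 'ALTER DATABASE DATAFILE 4 ONLINE';
```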
One question: after the *successful migration*, were you able to SYNC the standby with the primary earlier? -
Transportable tablespaces requirment
Hello All,
This is what I am running on.
BANNER
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
PL/SQL Release 10.2.0.4.0 - Production
CORE 10.2.0.4.0 Production
TNS for IBM/AIX RISC System/6000: Version 10.2.0.4.0 - Productio
NLSRTL Version 10.2.0.4.0 - Production
We have a data warehouse environment and we are thinking of doing transportable tablespaces. From what I understand, when you do transportable tablespaces the block size and the NLS character set have to be the same? I am sure there are more requirements that I am not aware of. If you have a helpful link, please let me know. My question is: what are the requirements for transportable tablespaces?
Thanks in advance
http://download.oracle.com/docs/cd/B19306_01/server.102/b14231/tspaces.htm#sthref1288
OR
http://download.oracle.com/docs/cd/B19306_01/server.102/b14231/tspaces.htm#i1007169 -
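Besides the block size and character set points, the tablespace set must be self-contained and the platform endian formats must be compatible (or the files converted with RMAN). A sketch of the usual pre-checks (the tablespace name MY_TS is illustrative):

```
SQL> -- compare endian formats of source and target platforms
SQL> SELECT platform_name, endian_format FROM v$transportable_platform;

SQL> -- verify the tablespace set is self-contained
SQL> EXEC dbms_tts.transport_set_check('MY_TS', TRUE);
SQL> SELECT * FROM transport_set_violations;
```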
Transportable tablespaces for migration and upgrade
Hello All,
I'm trying to find out some theory about doing a transportable tablespace move and an upgrade in a single step. I have to move a DB from a Windows server to a Linux server. Windows is running 10.1 and Linux 10.2. Is it possible to use RMAN to migrate the whole database and then, on the new node, run STARTUP UPGRADE and catupgrd.sql to upgrade the catalog?
Should work?
Thanks
Raj
The following doc for TDB says:
"Note: TDB requires the same Oracle Database software version on the source and target systems; hence, a database upgrade cannot be accomplished simultaneous with the platform migration. However, best practices for a database upgrade, as documented in the Oracle Upgrade Companion, also apply to any planned database maintenance activity, including platform migration."
http://www.oracle.com/technology/deploy/availability/pdf/MAA_WP_10gR2_PlatformMigrationTDB.pdf
There is also an Oracle Whitepaper called "Database Upgrade Using Transportable Tablespaces: Oracle Database 11g Release 1" but I am still trying to find a copy. -
Why transportable tablespace for platform migration of same endian format?
RDBMS Version : 10.2.0.4
We are planning to migrate our DB to a different platform. Both platforms are of BIG endian format. From googling , I came across the following link
http://levipereira.files.wordpress.com/2011/01/oracle_generic_migration_version_1.pdf
In this IBM document, they are migrating from Solaris 5.9 (SPARC) to AIX 6. Both are of BIG endian format. Since they are of the same endian format, can't they use TRANSPORTABLE DATABASE? Why are they using RMAN CONVERT DATAFILE (transportable tablespace)?
They are using transportable database: they are not importing data into the dictionary and not creating users. Instead of using CONVERT DATABASE, they used CONVERT DATAFILE to avoid converting all the datafiles (you only need to convert the undo and system tablespaces). There's a MOS note on this: Avoid Datafile Conversion during Transportable Database [ID 732053.1].
Basic steps for convert database:
1. Verify the prerequisites
2. Identify any external files and directories with DBMS_TDB.CHECK_EXTERNAL.
3. Shutdown (consistent) and restart the source database in READ ONLY mode.
4. Use DBMS_TDB.CHECK_DB to make sure the database is ready to be transported.
5. Run the RMAN convert database command.
6. Copy the converted files to the target database. Note that this implies that you will need 2x the storage on the source database for the converted files.
7. Copy the parameter file to the target database.
8. Adjust configuration files as required (parameter, listener.ora, tnsnames, etc).
9. Fire up the new database!
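Steps 3 to 5 above might look roughly like this (the target platform name and paths are illustrative; pick yours from v$transportable_platform):

```
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT
SQL> ALTER DATABASE OPEN READ ONLY;

SQL> SET SERVEROUTPUT ON
SQL> DECLARE
  2    ok BOOLEAN;
  3  BEGIN
  4    -- step 4: check the database is ready to transport
  5    ok := DBMS_TDB.CHECK_DB('Linux IA (32-bit)', DBMS_TDB.SKIP_READONLY);
  6  END;
  7  /

RMAN> CONVERT DATABASE NEW DATABASE 'newdb'
        TRANSPORT SCRIPT '/tmp/transport.sql'
        TO PLATFORM 'Linux IA (32-bit)'
        DB_FILE_NAME_CONVERT '/u01/oradata/' '/tmp/convert/';
```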
All other details are in:
http://docs.oracle.com/cd/B19306_01/backup.102/b14191/dbxptrn.htm#CHDFHBFI
Lukas -
hi all
I am new to Oracle.
I want to do a transportable tablespace export in Oracle 10g.
When I execute the expdp command, it gives me an error:
cmd>expdp scott@orcl/tiger directory=data_load dumpfile=emp.dmp transport_tablespace=users.
In the documentation I read about transport_datafiles. Please clarify: what does transport_datafiles mean, and what happens when we use this clause in the expdp command?
Thanks in advance.
Why don't you use exp and imp instead of expdp and impdp?
Because, as I understand from the documentation, transportable tablespaces are handled differently in the original exp and imp tools than in the Oracle 10g Data Pump expdp and impdp; I searched the documentation and found different commands in the original and Data Pump export and import utilities. Before that, please clarify what transportable tablespaces are and what they are used for. The documentation says transportable tablespaces work on metadata, but I don't know much about metadata.
Thanks for the reply, sir. -
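On the actual question: with Data Pump, TRANSPORT_TABLESPACES is an export-side parameter naming the tablespaces to transport, while TRANSPORT_DATAFILES is an import-side parameter naming the copied datafiles to plug in (note both are plural). A sketch with illustrative names and paths:

```
SQL> ALTER TABLESPACE users READ ONLY;

$ expdp system directory=data_load dumpfile=users_tts.dmp transport_tablespaces=users

$ # copy users01.dbf and users_tts.dmp to the target, then:
$ impdp system directory=data_load dumpfile=users_tts.dmp \
    transport_datafiles=/u01/oradata/targ/users01.dbf
```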
Error in Transport Tablespace from linux to windows
I am testing cross-platform transportable tablespaces. According to Oracle, we can transport a tablespace from Linux to Windows without conversion because both use the same endianness (little-endian).
But I am failing to transport a tablespace from Linux to Windows.
I am performing the transportable tablespace process as follows:
from the source Oracle database server (Red Hat Linux AS 4, 32-bit, Oracle version 10.2):
Sql> alter tablespace TEST read only;
$ expdp system/pass dumpfile=test.dmp directory=export_dir transport_tablespaces=test transport_full_check=y
After this I am copying test.dmp and the datafile (test.dbf) to the target machine (MS Windows XP 32-bit with Oracle 10.1).
On the target machine (with the MS Windows XP OS) I am giving the following command:
impdp system dumpfile=test.dmp directory=exp_dir transport_datafiles=/exp_dir/test.dbf
but it is giving the following error:
ora-39001: invalid argument value
ora-39000: bad dump file specification
ora-31619: invalid dump file "c:\pks\1103.dmp"
What may be wrong?
Prabhaker
Now, for version compatibility, I am including the version option with expdp:
expdp scott dumpfile=1103.dmp directory=pks transport_tablespaces=prabhu version=10.1.0.2.0
but now it is giving the following error:
Import: Release 10.1.0.2.0 - Production on Saturday, 11 March, 2006 19:07
Copyright (c) 2003, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - Production
With the Partitioning, OLAP and Data Mining options
Master table "SCOTT"."SYS_IMPORT_TRANSPORTABLE_01" successfully loaded/unloaded
Starting "SCOTT"."SYS_IMPORT_TRANSPORTABLE_01": scott/******** DUMPFILE=1103.DMP DIRECTORY=PKS TRANSPORT_DATAFILES=C:\PKS\PRABHU version=10.1.0
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
ORA-39123: Data Pump transportable tablespace job aborted
ORA-06550: line 2, column 2:
PLS-00306: wrong number or types of arguments in call to 'BEGINIMPORT'
ORA-06550: line 2, column 2:
PL/SQL: Statement ignored
Job "SCOTT"."SYS_IMPORT_TRANSPORTABLE_01" stopped due to fatal error at 19:07
regards
Prabhu -
Transportable tablespace system+sysaux
Hi
I want to clone a database via transportable tablespaces. I am unable to use transportable database since the endian formats are not the same.
I know that I cannot transport the SYSTEM and SYSAUX tablespaces.
Therefore, objects such as sequences, PL/SQL packages, and other objects that depend on the SYSTEM tablespace are not transported.
What additional steps do I need to perform to copy the necessary objects in the SYSTEM and SYSAUX tablespaces to make these two databases identical?
Thanks
Hi,
after you have transported the tablespaces you can use expdp and impdp for the remaining objects. Have a look at this to see how to specify the objects:
http://download.oracle.com/docs/cd/E11882_01/server.112/e16536/dp_export.htm#i1007837
Cheers
Jörg -
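A rough sketch of that two-stage approach (the schema name app_owner and the object list are illustrative; see the linked docs for the full INCLUDE syntax):

```
$ # after plugging in the user tablespaces with the transportable import:
$ expdp system directory=dp_dir dumpfile=meta.dmp content=metadata_only \
    schemas=app_owner include=SEQUENCE,PROCEDURE,FUNCTION,PACKAGE,VIEW,SYNONYM

$ impdp system directory=dp_dir dumpfile=meta.dmp
```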
Error in stored procedure while using dbms_datapump for transportable
Hi,
I'm facing following issue:
SQL> select * from v$version;
BANNER
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
PL/SQL Release 10.2.0.4.0 - Production
CORE 10.2.0.4.0 Production
TNS for Solaris: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production
====================================================================================
I'm trying to do transportable tablespace through stored procedure with help of DBMS_DATAPUMP, Following is the code :
==================================================================================
create or replace
procedure sp_tts_export(v_tbs_name varchar2) as
idx NUMBER; -- Loop index
JobHandle NUMBER; -- Data Pump job handle
PctComplete NUMBER; -- Percentage of job complete
JobState VARCHAR2(30); -- To keep track of job state
LogEntry ku$_LogEntry; -- For WIP and error messages
JobStatus ku$_JobStatus; -- The job status from get_status
Status ku$_Status; -- The status object returned by get_status
dts varchar2(140):=to_char(sysdate,'YYYYMMDDHH24MISS');
exp_dump_file varchar2(500):=v_tbs_name||'_tts_export_'||dts||'.dmp';
exp_log_file varchar2(500):=v_tbs_name||'_tts_export_'||dts||'.log';
exp_job_name varchar2(500):=v_tbs_name||'_tts_export_'||dts;
dp_dir varchar2(500):='DATA_PUMP_DIR';
log_file UTL_FILE.FILE_TYPE;
log_filename varchar2(500):=exp_job_name||'_main'||'.log';
err_log_file UTL_FILE.FILE_TYPE;
v_db_name varchar2(1000);
v_username varchar2(30);
t_dir_name VARCHAR2(4000);
t_file_name VARCHAR2(4000);
t_sep_pos NUMBER;
t_dir varchar2(30):='temp_0123456789';
v_sqlerrm varchar2(4000);
stmt varchar2(4000);
FUNCTION get_file(filename VARCHAR2, dir VARCHAR2 := 'TEMP')
RETURN VARCHAR2 IS
contents VARCHAR2(32767);
file BFILE := BFILENAME(dir, filename);
BEGIN
DBMS_LOB.FILEOPEN(file, DBMS_LOB.FILE_READONLY);
contents := UTL_RAW.CAST_TO_VARCHAR2(
DBMS_LOB.SUBSTR(file));
DBMS_LOB.CLOSE(file);
RETURN contents;
END;
begin
--execute immediate ('drop tablespace test including contents and datafiles');
--execute immediate ('create tablespace test datafile ''/home/smishr02/test.dbf'' size 10m');
--execute immediate ('create table prestg.test_table (a number) tablespace test');
--execute immediate ('insert into prestg.test_table values (1)');
--commit;
--execute immediate ('alter tablespace test read only');
--dbms_output.put_line('11111111111111111111');
dbms_output.put_line(log_filename||'>>>>>>>>>>>>>>>>>>>>>>>>>>>'|| dp_dir);
log_file:=UTL_FILE.FOPEN (dp_dir, log_filename, 'w');
UTL_FILE.PUT_LINE(log_file,'#####################################################################');
UTL_FILE.PUT_LINE(log_file,'REPORT: GENERATED ON ' || SYSDATE);
UTL_FILE.PUT_LINE(log_file,'#####################################################################');
select global_name,user into v_db_name,v_username from global_name;
UTL_FILE.PUT_LINE(log_file,'Database:'||v_db_name);
UTL_FILE.PUT_LINE(log_file,'user running the job:'||v_username);
UTL_FILE.PUT_LINE(log_file,'for tablespace:'||v_tbs_name);
UTL_FILE.NEW_LINE (log_file);
stmt:='ALTER TABLESPACE '||v_tbs_name || ' read only';
dbms_output.put_line('11111111111111111111'||stmt);
execute immediate (stmt);
UTL_FILE.PUT_LINE(log_file,' '||v_tbs_name || ' altered to read only mode.');
UTL_FILE.NEW_LINE (log_file);
UTL_FILE.PUT_LINE(log_file,'#####################################################################');
UTL_FILE.NEW_LINE (log_file);
UTL_FILE.PUT_LINE(log_file,' Initiating the Datapump engine for TTS export..............');
UTL_FILE.NEW_LINE (log_file);
dbms_output.put_line('11111111111111111111');
JobHandle :=
DBMS_DATAPUMP.OPEN(
operation => 'EXPORT'
,job_mode => 'TRANSPORTABLE'
,remote_link => NULL
,job_name => NULL
--,job_name => exp_job_name
-- ,version => 'LATEST'
);
UTL_FILE.PUT_LINE(log_file,'Done');
UTL_FILE.NEW_LINE (log_file);
UTL_FILE.PUT_LINE(log_file,' Allocating dumpfile................');
DBMS_DATAPUMP.ADD_FILE(
handle => JobHandle
,filename => exp_dump_file
,directory => dp_dir
,filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE
-- ,filesize => '100M'
);
UTL_FILE.PUT_LINE(log_file,'Done');
UTL_FILE.NEW_LINE (log_file);
UTL_FILE.PUT_LINE(log_file,' Allocating logfile................');
DBMS_DATAPUMP.ADD_FILE(
handle => JobHandle
,filename => exp_log_file
,directory => dp_dir
,filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE
);
UTL_FILE.PUT_LINE(log_file,'Done');
UTL_FILE.NEW_LINE (log_file);
UTL_FILE.PUT_LINE(log_file,' Setting attributes................');
DBMS_DATAPUMP.set_parameter(handle => JobHandle,
name=>'TTS_FULL_CHECK',
value=>1);
DBMS_DATAPUMP.METADATA_FILTER(
handle => JobHandle
,NAME => 'TABLESPACE_EXPR'
,VALUE => 'IN ('''||v_tbs_name||''')'
-- ,object_type => 'TABLE'
);
UTL_FILE.PUT_LINE(log_file,'Done');
UTL_FILE.NEW_LINE (log_file);
UTL_FILE.PUT_LINE(log_file,' Now starting datapump job................');
DBMS_DATAPUMP.START_JOB(JobHandle);
UTL_FILE.PUT_LINE(log_file,'Done');
UTL_FILE.NEW_LINE (log_file);
UTL_FILE.PUT_LINE(log_file,' Monitoring the job................');
--------------Monitor the job
PctComplete := 0;
JobState := 'UNDEFINED';
WHILE(JobState != 'COMPLETED') and (JobState != 'STOPPED')
LOOP
DBMS_DATAPUMP.GET_STATUS(
handle => JobHandle
,mask => 15 -- DBMS_DATAPUMP.ku$_status_job_error + DBMS_DATAPUMP.ku$_status_job_status + DBMS_DATAPUMP.ku$_status_wip
,timeout => NULL
,job_state => JobState
,status => Status
);
JobStatus := Status.job_status;
-- Whenever the PctComplete value has changed, display it
IF JobStatus.percent_done != PctComplete THEN
DBMS_OUTPUT.PUT_LINE('*** Job percent done = ' || TO_CHAR(JobStatus.percent_done));
PctComplete := JobStatus.percent_done;
END IF;
-- Whenever a work-in progress message or error message arises, display it
IF (BITAND(Status.mask,DBMS_DATAPUMP.ku$_status_wip) != 0) THEN
LogEntry := Status.wip;
ELSE
IF (BITAND(Status.mask,DBMS_DATAPUMP.ku$_status_job_error) != 0) THEN
LogEntry := Status.error;
ELSE
LogEntry := NULL;
END IF;
END IF;
IF LogEntry IS NOT NULL THEN
idx := LogEntry.FIRST;
WHILE idx IS NOT NULL
LOOP
DBMS_OUTPUT.PUT_LINE(LogEntry(idx).LogText);
idx := LogEntry.NEXT(idx);
END LOOP;
END IF;
END LOOP;
--copy the datafiles to data dump dir
UTL_FILE.PUT_LINE(log_file,'Done');
UTL_FILE.NEW_LINE (log_file);
UTL_FILE.PUT_LINE(log_file,' Copying datafiles to dump directory................');
-- grant select on dba_directories to prestg;
declare
cnt number;
begin
select count(*) into cnt from dba_directories
where directory_name=upper(t_dir);
if cnt=1 then
execute immediate('DROP DIRECTORY '||t_dir);
end if;
end;
FOR rec in (select file_name from sys.dba_data_files where tablespace_name=v_tbs_name)
LOOP
t_sep_pos:=instr(rec.file_name,'/',-1);
t_dir_name:=substr(rec.file_name,1,t_sep_pos-1);
t_file_name:=substr(rec.file_name,t_sep_pos+1,length(rec.file_name));
dbms_output.put_line(t_dir_name|| ' ' || t_dir);
dbms_output.put_line(t_file_name);
execute immediate('CREATE DIRECTORY '||t_dir||' AS '''||t_dir_name||'''');
UTL_FILE.PUT_LINE(log_file,' Copying '||rec.file_name||'................');
utl_file.fcopy(t_dir, t_file_name, dp_dir, t_file_name);
UTL_FILE.PUT(log_file,'Done');
execute immediate('DROP DIRECTORY '||t_dir);
END LOOP;
UTL_FILE.NEW_LINE (log_file);
UTL_FILE.PUT_LINE(log_file,' Altering tablespace to read write................');
execute immediate ('ALTER TABLESPACE '||v_tbs_name || ' read write');
UTL_FILE.PUT(log_file,' Done');
err_log_file:=utl_file.fopen(dp_dir, exp_log_file, 'r');
UTL_FILE.NEW_LINE (log_file);
UTL_FILE.PUT_LINE(log_file,' content of export logfile................');
loop
begin
utl_file.get_line(err_log_file,v_sqlerrm);
if v_sqlerrm is null then
exit;
end if;
UTL_FILE.PUT_LINE(log_file,v_sqlerrm);
EXCEPTION
WHEN NO_DATA_FOUND THEN
EXIT;
END;
end loop;
utl_file.fclose(err_log_file);
utl_file.fclose(log_file);
END;
I'm getting following error when DBMS_DATAPUMP.OPEN is called in procedure:
SQL> exec sp_tts_export('TEST');
BEGIN sp_tts_export('TEST'); END;
ERROR at line 1:
ORA-31626: job does not exist
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
ORA-06512: at "SYS.DBMS_DATAPUMP", line 938
ORA-06512: at "SYS.DBMS_DATAPUMP", line 4566
ORA-06512: at "PRESTG.SP_TTS_EXPORT", line 78
ORA-06512: at line 1
==============================================================================================
This procedure is part of user ABC. I get the above error when I run it under the ABC schema. However, I have tested the same procedure under the SYS schema: when I create it in the SYS schema it runs fine. I am clueless on this. Please help.
Thanks
Shailesh
Edited by: shaileshM on Jul 28, 2010 11:15 AM
Privileges acquired via ROLE do NOT apply within named PL/SQL procedures.
Explicit GRANT is required to resolve this issue. -
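To illustrate the point about roles: inside a definer's-rights stored procedure, role-granted privileges (such as the EXP_FULL_DATABASE role) are disabled, so either grant the needed privileges directly to the owning schema, or, as a commonly used alternative not mentioned in the thread, declare the procedure with invoker's rights so roles stay enabled:

```sql
-- Invoker's-rights variant (sketch; body as in the original post)
CREATE OR REPLACE PROCEDURE sp_tts_export(v_tbs_name VARCHAR2)
  AUTHID CURRENT_USER
AS
BEGIN
  NULL;  -- original DBMS_DATAPUMP logic goes here
END;
/
```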
Error while using Transport Tablespace
Hi,
I am running into the below errors when I do the transport tablespace in 11g using the OEM --> Data Movement --> Transport Tablespace.
RMAN-03009: failure of conversion at source command on ORA_DISK_1 channel at 04/22/2008 08:28:32
RMAN-20038: must specify FORMAT for CONVERT command
When I search for RMAN-03009 on OTN, Metalink, and Google, I get some unrelated hits with different messages, not the "failure of conversion..." message for this code.
Please advise if you have had the same issue and resolved it.
Thanks
Sheik
From the 11g Error Messages:
RMAN-20038: must specify FORMAT for CONVERT command
Cause: No FORMAT was specified when using CONVERT command.
Action: Resubmit the command using FORMAT clause. -
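When running the conversion manually with RMAN instead of through OEM, the FORMAT clause the message asks for might look like this (tablespace name, platform, and path are illustrative):

```
RMAN> CONVERT TABLESPACE my_ts
        TO PLATFORM 'Linux IA (32-bit)'
        FORMAT '/tmp/convert/%U';
```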
Transport tablespace related problem
Hi all,
I have a problem: I want to export tablespaces, as I have to migrate my DB from Oracle 9.2 to Oracle 10.2 and from Windows to RHEL 4, but I am getting an error. Please advise:
SQL> @G:\oracle\ora92\rdbms\admin\dbmsplts.sql;
Package created.
Package created.
Grant succeeded.
Package created.
Grant succeeded.
drop view sys.transport_set_violations
ERROR at line 1:
ORA-00942: table or view does not exist
PL/SQL procedure successfully completed.
drop table sys.transts_error$
ERROR at line 1:
ORA-00942: table or view does not exist
Package created.
SQL> exec sys.dbms_tts.transport_set_check('SRO',true);
BEGIN sys.dbms_tts.transport_set_check('SRO',true); END;
ERROR at line 1:
ORA-04068: existing state of packages has been discarded
ORA-04063: package body "SYS.DBMS_TTS" has errors
ORA-06508: PL/SQL: could not find program unit being called
ORA-06512: at line 1
SQL> desc dbms_tts
PROCEDURE DOWNGRADE
FUNCTION ISSELFCONTAINED RETURNS BOOLEAN
Argument Name Type In/Out Default?
TS_LIST CLOB IN
INCL_CONSTRAINTS BOOLEAN IN
FULL_CHECK BOOLEAN IN
PROCEDURE KFP_CKCMP
PROCEDURE TRANSPORT_SET_CHECK
Argument Name Type In/Out Default?
TS_LIST CLOB IN
INCL_CONSTRAINTS BOOLEAN IN DEFAULT
FULL_CHECK BOOLEAN IN DEFAULT
SQL> exec dbms_tts.transport_set_check('SRO',true);
BEGIN dbms_tts.transport_set_check('SRO',true); END;
ERROR at line 1:
ORA-04068: existing state of packages has been discarded
ORA-04063: package body "SYS.DBMS_TTS" has errors
ORA-06508: PL/SQL: could not find program unit being called
ORA-06512: at line 1
Are you, by any chance, running Oracle 9i installation code in an Oracle 10g database?
If so : why?
As far as I know you can only transport tablespaces to a database of an identical version, so it looks like this is not going to work.
Sybrand Bakker
Senior Oracle DBA -
Hii all
I configured controlfile autobackup ON, took a backup of the whole database with the "backup database;" command, and then ran:
RMAN> transport tablespace users tablespace destination 'c:\Transport' auxiliary destination 'c:\Auxi';
I am receiving:
RMAN-03015: error occurred in stored script Memory Script
RMAN-06026: some targets not found - aborting restore
RMAN-06024: no backup or copy of the control file found to restore
In the beginning lines of the RMAN output I saw "control_files=c:\Auxi/cntrl_tspitr_ST1_dFcB.f". It looks like it is using a Unix path format on Windows? Do you have any idea?
Best Regards
Oracle 10.2.0.3 on Vista SP2
hi
Yes, it is on. Besides, I took the backup with "backup database include current controlfile" and nothing changed.
RMAN> show all;
RMAN configuration parameters are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO 'C:\ORACLE\PRODUCT\10.2.0\DB_1\DATABASE\
NCFST1.ORA'; # default
Also, I have just tested the same command on an AIX platform and it works well, so I think this is related to the Windows platform?
Edited by: EB on Sep 25, 2009 4:56 PM