OWM and Transportable Tablespaces
Hi,
Are there any restrictions on using OWM with transportable tablespaces?
Thanks for any hint.
haitham
Hi,
OWM does not currently support transportable tablespaces. The reason is that only the table metadata and rows are preserved after importing the tablespace; none of the triggers, views, or procedures that Workspace Manager creates on the table are maintained.
Also, the metadata that is maintained would become out of sync with the imported data and would lead to unexpected results.
Workspace Manager currently supports only a full database export/import or a workspace-level export/import using the DBMS_WM.Export and DBMS_WM.Import procedures. Further documentation on this can be found in the user guide.
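For reference, a workspace-level export/import with DBMS_WM might look roughly like the sketch below; the parameter names, table names, and workspace name are illustrative assumptions and should be checked against the Workspace Manager user guide:

```sql
-- Sketch only: parameter names are from memory, verify against the guide.
-- On the source database, dump the versioned rows of a version-enabled
-- table into a staging table:
BEGIN
  DBMS_WM.Export(
    table_id      => 'SCOTT.EMP',    -- a version-enabled table (example)
    staging_table => 'EMP_STG',      -- staging table to hold exported rows
    workspace     => 'LIVE');        -- workspace whose rows to export
END;
/

-- Move EMP_STG to the target database (e.g. with exp/imp), then:
BEGIN
  DBMS_WM.Import(
    staging_table  => 'EMP_STG',
    to_table       => 'SCOTT.EMP',
    from_workspace => 'LIVE');
END;
/
```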
Regards,
Ben
Similar Messages
-
Hi all,
I am trying to transport a tablespace from one database to another.
I tried the following steps.
I made the tablespace that is going to be transported read-only.
Then I issued the exp command to export the metadata, but it throws the following errors:
With the Partitioning, OLAP and Data Mining options
Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
Note: table data (rows) will not be exported
About to export transportable tablespace metadata...
EXP-00008: ORACLE error 29341 encountered
ORA-29341: The transportable set is not self-contained
ORA-06512: at "SYS.DBMS_PLUGTS", line 1387
ORA-06512: at line 1
EXP-00000: Export terminated unsuccessfully
Could you please tell me any other settings I have to do for this export?
Thanks & Cheers
Antony
Hi,
Be aware of the following limitations as you plan to transport tablespaces:
The source and target database must use the same character set and national character set.
You cannot transport a tablespace to a target database in which a tablespace with the same name already exists. However, you can rename either the tablespace to be transported or the destination tablespace before the transport operation.
Objects with underlying objects (such as materialized views) or contained objects (such as partitioned tables) are not transportable unless all of the underlying or contained objects are in the tablespace set.
Beginning with Oracle Database 10g Release 2, you can transport tablespaces that contain XMLTypes, but you must use the IMP and EXP utilities, not Data Pump. When using EXP, ensure that the CONSTRAINTS and TRIGGERS parameters are set to Y (the default).
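As a small illustration of the rename workaround mentioned above (tablespace renaming is available from 10g onward; the names here are placeholders):

```sql
-- On the target, move the clashing tablespace out of the way
-- before plugging in the transported one:
ALTER TABLESPACE users RENAME TO users_old;
```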
The following query returns a list of tablespaces that contain XMLTypes:
select distinct p.tablespace_name
from dba_tablespaces p, dba_xml_tables x, dba_users u, all_all_tables t
where t.table_name = x.table_name
and t.tablespace_name = p.tablespace_name
and x.owner = u.username;
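Related to the ORA-29341 above: the transportable set can be checked for self-containment before exporting, and any violations listed. This is a standard-usage sketch; the tablespace name is a placeholder:

```sql
-- Check that the set is self-contained (TRUE also checks constraints):
EXEC DBMS_TTS.TRANSPORT_SET_CHECK('MY_TS', TRUE);

-- Any problems (e.g. an index whose table is in another tablespace)
-- show up here; an empty result means the set is transportable:
SELECT * FROM transport_set_violations;
```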
For your reference, go through this link:
http://youngcow.net/doc/oracle10g/server.102/b14231/tspaces.htm#i1007169
and "Transporting Tablespaces Between Databases"
Aman,
Sorry, I had not seen your reply due to the refresh; otherwise I would not have posted.
Thanks
Pavan Kumar N
Message was edited by:
Pavan Kumar -
Error ORA-39125 and ORA-04063 during export for transportable tablespace
I'm using Oracle Enterprise Manager (the browser is IE) to create a tablespace transport file. Maintenance > Transport Tablespaces uses a wizard to walk me through each step. The job gets created and submitted.
The 'Prepare' and 'Convert Datafile(s)' job steps complete successfully. The Export step fails with the following error. Can anyone shed some light on this for me?
Thank you in advance!
=======================================================
Output Log
Export: Release 10.2.0.2.0 - Production on Sunday, 03 September, 2006 19:31:34
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Username:
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - Production
With the Partitioning, OLAP and Data Mining options
Starting "SYS"."GENERATETTS000024": SYS/******** AS SYSDBA dumpfile=EXPDAT_GENERATETTS000024.DMP directory=EM_TTS_DIR_OBJECT transport_tablespaces=SIEBEL job_name=GENERATETTS000024 logfile=EXPDAT.LOG
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
ORA-39125: Worker unexpected fatal error in KUPW$WORKER.UNLOAD_METADATA while calling DBMS_METADATA.FETCH_XML_CLOB [TABLE_STATISTICS]
ORA-04063: view "SYS.KU$_IOTABLE_VIEW" has errors
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 105
ORA-06512: at "SYS.KUPW$WORKER", line 6241
----- PL/SQL Call Stack -----
object line object
handle number name
2CF48130 14916 package body SYS.KUPW$WORKER
2CF48130 6300 package body SYS.KUPW$WORKER
2CF48130 2340 package body SYS.KUPW$WORKER
2CF48130 6861 package body SYS.KUPW$WORKER
2CF48130 1262 package body SYS.KUPW$WORKER
2CF0850C 2 anonymous block
Job "SYS"."GENERATETTS000024" stopped due to fatal error at 19:31:44
More information:
Using SQL Developer, I checked the view SYS.KU$_IOTABLE_VIEW referred to in the error message, and it does indeed report a problem with that view. The following code is the definition of that view. I have no idea what it's supposed to be doing, because it was part of the default installation. I certainly didn't write it. I did, however, execute the 'Test Syntax' button (on the Edit View screen), and the result was this error message:
=======================================================
The SQL syntax is valid, however the query is invalid or uses functionality that is not supported.
Unknown error(s) parsing SQL: oracle.javatools.parser.plsql.syntax.ParserException: Unexpected token
=======================================================
The SQL for the view looks like this:
REM SYS KU$_IOTABLE_VIEW
CREATE OR REPLACE FORCE VIEW "SYS"."KU$_IOTABLE_VIEW" OF "SYS"."KU$_IOTABLE_T"
WITH OBJECT IDENTIFIER (obj_num) AS
select '2','3',
t.obj#,
value(o),
-- if this is a secondary table, get base obj and ancestor obj
decode(bitand(o.flags, 16), 16,
(select value(oo) from ku$_schemaobj_view oo, secobj$ s
where o.obj_num=s.secobj#
and oo.obj_num=s.obj#),
null),
decode(bitand(o.flags, 16), 16,
(select value(oo) from ku$_schemaobj_view oo, ind$ i, secobj$ s
where o.obj_num=s.secobj#
and i.obj#=s.obj#
and oo.obj_num=i.bo#),
null),
(select value(s) from ku$_storage_view s
where i.file# = s.file_num
and i.block# = s.block_num
and i.ts# = s.ts_num),
ts.name, ts.blocksize,
i.dataobj#, t.bobj#, t.tab#, t.cols,
t.clucols, i.pctfree$, i.initrans, i.maxtrans,
mod(i.pctthres$,256), i.spare2, t.flags,
t.audit$, t.rowcnt, t.blkcnt, t.empcnt, t.avgspc, t.chncnt, t.avgrln,
t.avgspc_flb, t.flbcnt, t.analyzetime, t.samplesize, t.degree,
t.instances, t.intcols, t.kernelcols, t.property, 'N', t.trigflag,
t.spare1, t.spare2, t.spare3, t.spare4, t.spare5, t.spare6,
decode(bitand(t.trigflag, 65536), 65536,
(select e.encalg from sys.enc$ e where e.obj#=t.obj#),
null),
decode(bitand(t.trigflag, 65536), 65536,
(select e.intalg from sys.enc$ e where e.obj#=t.obj#),
null),
(select c.name from col$ c
where c.obj# = t.obj#
and c.col# = i.trunccnt and i.trunccnt != 0
and bitand(c.property,1)=0),
cast( multiset(select * from ku$_column_view c
where c.obj_num = t.obj#
order by c.col_num, c.intcol_num
) as ku$_column_list_t
(select value(nt) from ku$_nt_parent_view nt
where nt.obj_num = t.obj#),
cast( multiset(select * from ku$_constraint0_view con
where con.obj_num = t.obj#
and con.contype not in (7,11)
) as ku$_constraint0_list_t
cast( multiset(select * from ku$_constraint1_view con
where con.obj_num = t.obj#
) as ku$_constraint1_list_t
cast( multiset(select * from ku$_constraint2_view con
where con.obj_num = t.obj#
) as ku$_constraint2_list_t
cast( multiset(select * from ku$_pkref_constraint_view con
where con.obj_num = t.obj#
) as ku$_pkref_constraint_list_t
(select value(ov) from ku$_ov_table_view ov
where ov.bobj_num = t.obj#
and bitand(t.property, 128) = 128), -- IOT has overflow
(select value(etv) from ku$_exttab_view etv
where etv.obj_num = o.obj_num)
from ku$_schemaobj_view o, tab$ t, ind$ i, ts$ ts
where t.obj# = o.obj_num
and t.pctused$ = i.obj# -- For IOTs, pctused has index obj#
and bitand(t.property, 32+64+512) = 64 -- IOT but not overflow
-- or partitioned (32)
and i.ts# = ts.ts#
AND (SYS_CONTEXT('USERENV','CURRENT_USERID') IN (o.owner_num, 0) OR
EXISTS ( SELECT * FROM session_roles
WHERE role='SELECT_CATALOG_ROLE' ));
GRANT SELECT ON "SYS"."KU$_IOTABLE_VIEW" TO PUBLIC; -
Transportable tablespaces for migration and upgrade
Hello All,
I'm trying to find out whether a transportable tablespace move and an upgrade can be done in a single step. I have to move a DB from a Windows server to a Linux server; Windows is running 10.1 and Linux 10.2. Is it possible to use RMAN to migrate the whole database and then, on the new node, run STARTUP UPGRADE and catupgrd.sql to upgrade the catalog?
Should this work?
Thanks
Raj
The following doc for TDB says:
"Note: TDB requires the same Oracle Database software version on the source and target systems; hence, a database upgrade cannot be accomplished simultaneous with the platform migration. However, best practices for a database upgrade, as documented in the Oracle Upgrade Companion, also apply to any planned database maintenance activity, including platform migration."
http://www.oracle.com/technology/deploy/availability/pdf/MAA_WP_10gR2_PlatformMigrationTDB.pdf
There is also an Oracle Whitepaper called "Database Upgrade Using Transportable Tablespaces: Oracle Database 11g Release 1" but I am still trying to find a copy. -
Transportable tablespace between 10.1 and 11.1 database
Platform: Solaris
From: 10.2.0.4
To: 11.1.0.7
I did some google searches and I don't see anything about transporting tablespaces between a 10.1 database and an 11.1 database
Edited by: user11990507 on Oct 19, 2010 4:24 AM
user11990507 wrote:
Platform: Solaris
From: 10.2.0.4
To: 11.1.0.7
I did some google searches and I don't see anything about transporting tablespaces between a 10.1 database and an 11.1 database
It should work fine. Check MOS:
Compatibility and New Features when Transporting Tablespaces with Export and Import [ID 291024.1]
Regards
Rajesh -
Transportable tablespace and partitioned objects
Hi,
I have to transport tablespaces that have partitioned objects from one server to another. If anyone has a proven process for this operation, please share it. The following is a sample of the violation report I got for one tablespace:
VIOLATIONS
Partitioned table ARIES.A_BI_BOARD_USAGE is partially contained in the transportable set: check table partitions by querying sys.dba_tab_partitions
Partitioned table ARIES.A_TESTING_SESSION is partially contained in the transportable set: check table partitions by querying sys.dba_tab_partitions
Default Partition (Table) Tablespace A_DATA_SML_000000 for A_BI_BOARD_USAGE not contained in transportable set
Default Partition (Table) Tablespace A_DATA_SML_000000 for A_TESTING_SESSION not contained in transportable set
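As the report suggests, the partition layout can be inspected like this (a sketch using the table names from the report above):

```sql
-- See which tablespaces hold partitions of the offending tables, so
-- every tablespace involved can be added to the transportable set:
SELECT table_name, partition_name, tablespace_name
  FROM sys.dba_tab_partitions
 WHERE table_owner = 'ARIES'
   AND table_name IN ('A_BI_BOARD_USAGE', 'A_TESTING_SESSION');
```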
Thanks
Satish
Hi,
I am trying to transport the whole tablespace. What happened is that during the duplication process, somehow a few tablespaces on the target were in offline mode, and so duplication did not create the files properly on the auxiliary server. Now I need to get the 3 tablespaces from production to the integration environment. These tablespaces have a lot of partitioned objects, so I am trying to find a way to transport them.
Thanks
Satish -
Need help on transportable tablespace, after convert and move datafile over
I need help on transportable tablespaces: after converting and moving the datafiles over to the target server, how do I plug them into the ASM instance?
And how do I start the database? What controlfile should I use?
Thanks.
I got an error like this:
RMAN> copy datafile '/oracle_backup/stage/ar.269.775942363' to '+RE_DAT';
Starting backup at 29-MAY-12
using channel ORA_DISK_1
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of backup command at 05/29/2012 17:46:21
RMAN-20201: datafile not found in the recovery catalog
RMAN-06010: error while looking up datafile: /oracle_backup/stage/ar.269.775942363
How to deal with this?
I deleted all the current target tablespaces including contents. Do I have to do anything to make the database recognize those converted datafiles? -
Transportable Tablespace and Packages/Functions etc
We are currently experimenting with transportable tablespaces. Is it possible to take functions/packages etc or are you limited to tables/data/triggers.
Thanks
Chris
Transportable tablespaces are available in version 9.2. In that version you need to put the tablespace in read-only mode, export the tablespace metadata, copy the tablespace files, and then import the metadata. Now, I don't believe this will transport the other objects. Someone else chime in if I am incorrect.
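The 9.2 flow described above would look roughly like this; the connect strings, tablespace, and file names are placeholders, so treat it as a sketch rather than a tested recipe:

```shell
# 1) On the source, make the tablespace read-only:
#      SQL> ALTER TABLESPACE my_ts READ ONLY;
# 2) Export only the tablespace metadata:
exp system/*** transport_tablespace=y tablespaces=my_ts file=my_ts_meta.dmp
# 3) Copy the datafiles of my_ts to the target, then plug them in:
imp system/*** transport_tablespace=y file=my_ts_meta.dmp \
    datafiles='/u02/oradata/my_ts01.dbf'
```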
What you can do is transport the tablespace and then run an export with rows=n,
and then import this empty dump (which will include the procedures/functions, etc.) with ignore=y -
Transportable Tablespace and Table Statistics
We are loading a large datawarehouse of about 2Tb, with tables up to 100Gb full of raster images. The database is 9.2.0.7 on Solaris, and we are using Transportable Tablespaces to move datafiles and tablespaces from data integration server to production server.
During data integration, we calculate statistics on the large tables, and it takes forever. We are wondering whether these statistics are moved along with the metadata from the data integration server to the production server. In other words, should we calculate statistics before the tablespace export, or after the tablespaces have been brought online on the production server?
Thanks,
Hi,
To my knowledge you don't need to recalculate the statistics.
The export & import commands even have an option for statistics.
So whenever you export, Oracle by default exports the statistics of the table.
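For completeness, the classic exp utility exposes this through its STATISTICS parameter (values ESTIMATE, COMPUTE, or NONE); the connect string and table name below are placeholders:

```shell
# Default behaviour ships statistics along with the dump:
exp system/*** tables=scott.big_tab file=with_stats.dmp statistics=estimate
# Skip them entirely and recalculate on the target instead:
exp system/*** tables=scott.big_tab file=no_stats.dmp statistics=none
```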
Hope this helps.
Regards
MMU -
Error while using Transport Tablespace
Hi,
I am running into the below errors when I do the transport tablespace in 11g using the OEM --> Data Movement --> Transport Tablespace.
RMAN-03009: failure of conversion at source command on ORA_DISK_1 channel at 04/22/2008 08:28:32
RMAN-20038: must specify FORMAT for CONVERT command
When I search for RMAN-03009 on OTN, Metalink, and Google, I get some unrelated hits with different messages, not the "failure of conversion..." message for this code.
Please advise if you have had the same issue and resolved it.
Thanks
Sheik
From the 11g Error Messages:
RMAN-20038: must specify FORMAT for CONVERT command
Cause: No FORMAT was specified when using CONVERT command.
Action: Resubmit the command using FORMAT clause. -
Steps to create Cross-Platform Transportable Tablespaces
hi
What are the steps to do cross-platform transportable tablespaces?
I want to perform this on my PC; I have Linux and Windows on the same PC, and I want to migrate from Linux to Windows and vice versa.
So can I get the complete steps and commands to perform cross-platform transportable tablespaces? I am doing this to get complete knowledge, so that I can handle migration issues when they come up at my office; once I have done it I will have the confidence to do the same at work, which saves me time.
veeresh s
oracle-dba
[email protected]
Hi, you can also review Note 413586.1 on the Metalink site.
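In outline, a cross-platform transport looks like the following sketch; the tablespace, directory, and platform names are placeholders, so check each step against the note and the docs:

```sql
-- 1) Compare endian formats of source and target platforms:
SELECT platform_name, endian_format FROM v$transportable_platform;

-- 2) On the source, freeze the tablespace and export its metadata:
ALTER TABLESPACE my_ts READ ONLY;
--   $ expdp system/*** directory=dp_dir transport_tablespaces=my_ts dumpfile=my_ts.dmp

-- 3) If the endian formats differ, convert the datafiles with RMAN:
--   RMAN> CONVERT TABLESPACE my_ts
--           TO PLATFORM 'Microsoft Windows IA (32-bit)' FORMAT '/stage/%U';

-- 4) Copy the dump file and (converted) datafiles to the target, then plug in:
--   $ impdp system/*** directory=dp_dir dumpfile=my_ts.dmp
--       transport_datafiles='/stage/my_ts01.dbf'

-- 5) Reopen the tablespace on the source when done:
ALTER TABLESPACE my_ts READ WRITE;
```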
Good luck.
Regards. -
Standby not in sync after testing transportable tablespaces on the primary
This is a newly built environment; it is not in use yet.
The db version is 11.2.0.3 on Linux; both primary and standby are two-node RAC configurations on ASM storage.
The primary db was tested by migrating data using the transportable tablespace method, so the imported datafiles were put on a local filesystem, which I had to switch to ASM afterwards.
In that way, the standby db got the impression that the datafiles should be on the local filesystem, and it became invalidated.
Here is the error from standby log:
ORA-01157: cannot identify/lock data file 4 - see DBWR trace file
ORA-01110: data file 4: '/stage/index.341.785186191'
and the datafile names on the standby are all on the local filesystem: /stage/...../
However, on the primary the datafile names are all in ASM:
+DAT/prd/datafile/arindex.320.788067609
How to resolve such situation?
Thanks
My steps for transportable tablespaces, after scp'ing the datafiles to Linux, were to import the transportable datafiles into the database, then use RMAN to copy them to '+DATA' in ASM, and then switch the database to the copy.
That procedure all worked on the primary.
The standby was built before the transportable tablespace work, when there were only the basic tablespaces (system, users, tools, etc.); it worked and was in sync with the primary. However, every time the primary tested data migration with transportable tablespaces, it stopped working.
So what is the right way to perform this migration? How can I keep the standby in sync with the primary and have it read the datafiles in ASM storage?
Is there a way, before the RMAN command that transfers the filesystem datafiles to ASM, to copy those local filesystem datafiles to ASM storage, and then do an import of the transportable datafiles from ASM?
If you added any datafiles manually, those will be created on the standby as well, as per the settings.
But here you are performing TTS; of course DMLs/DDLs can be transferred to the standby via archive logs, but what about the actual datafiles? In this case the files exist on the primary, and those files do not exist on the standby.
AFAIK, you have to perform a couple of steps. Once your migration is complete on the primary, do as follows (do test it):
1) Complete your migration on primary
-- Check all data file status and location.
2) Now restore the standby controlfile & the newly migrated tablespaces on the standby database.
-- Here you can restore directly to ASM using RMAN, because you are taking the backup on the primary using RMAN, so RMAN can restore them directly into the ASM file system.
3) Make sure you have all the datafiles where you want them, and take care that no datafile is missing. Crosscheck all tablespace information in the primary & standby databases.
4) Now start *MRP*
That is my plan above, but I suggest testing it...
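A hedged sketch of steps 2-4 above; the backup location and tablespace name are examples only, and the exact RMAN commands depend on how the primary backups were taken:

```sql
-- On the standby, in RMAN (shown as comments because they are RMAN,
-- not SQL, commands):
--   RMAN> RESTORE STANDBY CONTROLFILE FROM '/backup/stby_ctl.bkp';
--   RMAN> RESTORE TABLESPACE arindex;  -- lands in ASM via db_create_file_dest
-- Then, in SQL*Plus, restart managed recovery (step 4):
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
  USING CURRENT LOGFILE DISCONNECT FROM SESSION;
```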
One question: after a *successful migration*, were you able to sync the standby with the primary earlier?
Hi all,
I configured controlfile autobackup on, took a backup of the whole database with the "backup database;" command, and then ran:
RMAN> transport tablespace users tablespace destination 'c:\Transport' auxiliary destination 'c:\Auxi';
I am receiving:
RMAN-03015: error occurred in stored script Memory Script
RMAN-06026: some targets not found - aborting restore
RMAN-06024: no backup or copy of the control file found to restore
In the beginning lines of the RMAN output I saw "control_files=c:\Auxi/cntrl_tspitr_ST1_dFcB.f"; it looks like it is using a Unix path format on Windows? Do you have any idea?
Best Regards
Oracle 10.2.0.3 on Vista SP2
hi
Yes, it is on. Besides, I took a backup with "backup database include current controlfile" and nothing changed.
RMAN> show all;
RMAN configuration parameters are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO 'C:\ORACLE\PRODUCT\10.2.0\DB_1\DATABASE\
NCFST1.ORA'; # default
Also, I have just tested the same command on the AIX platform and it works well, so I think this is related to the Windows platform?
Edited by: EB on Sep 25, 2009 4:56 PM -
Oracle Upgrade V9 to V10 - Transportable Tablespaces
Hi,
I am upgrading a database from Oracle 9i v 9.2.0.6.0 to Oracle 10g 10.2.0.4.0. Both servers are running Windows 2003 Server 32Bit.
My plan is to upgrade using Transportable Tablespaces. This is what I have done.
1) exec DBMS_TTS.TRANSPORT_SET_CHECK('XXX,XXX,XXX,XXX,XXX,XXX,XXX,XXX,XXX,XXX,XXX',true); --> This checked out fine.
2) Set the above tablespaces as read only.
3) Copied all the datafiles to the new server (2TB)
4) Did the export: exp 'xxx@xxxx as sysdba' transport_tablespace=y tablespaces=(XXX,XXX,XXX,XXX,XXX ETC...) tts_full_check=y file=c:\EXP.dmp log=c:\exp.log
5) Here comes the problem: when I do the import, it imports the tables with the data and forces me to create the schemas manually. It does not import the users / packages / views / triggers, etc.
I need to do an export that will include everything (create schemas, packages, views, triggers, tables, data).
I know there is a FULL=y option; this will not work, as I do not have an additional 2TB for the export file.
Can someone please explain how to do this?
Kind Regards
Henk
Welcome to the forums!
Is there a reason you are using transportable tablespaces? Would it not be easier to do an in-place upgrade on the current server, then copy/clone the instance to the new server?
HTH
Srini -
Oracle 10gR2 - Any way to speed up Transportable Tablespace Import?
I have a nightly process that uses expdp/impdp with Transportable Tablespace option to copy a full schema from one database to another. This process was initially done with just expdp/impdp, however the creation of the indexes started to slow down the process at which time I switched to using Transportable Tablespace.
In the last 3 months, the number of objects in the schema has not changed, but the amount of data has increased 15.73%. At the same time, the Transportable Tablespace impdp has increased 30.77% in processing time.
Is there any way to speed up a Transportable Tablespace impdp in Oracle 10gR2? The docs show that I cannot use parallel with TTS. I have excluded STATISTICS in the export.
Eventually I will be moving away from doing this TTS expdp/impdp, however for now I need to do what I can to contain the amount of time it takes to process.
Environment: Oracle 10.2.0.4 (source and destination) on Sun Solaris SPARC v10. We are not using ASM currently. Tablespace is 6 datafiles for a total of 87.5GB, 791 tables, 1924 indexes, 1773 triggers, and 10 LOBs.
Any help would be greatly appreciated.
Thanks!
-Vorpel
My reading of this thread was that there was a need to be able to do offline setups when the client had lost Oracle Lite (we get this on PDAs all the time when they go totally dead), so the CD install is something that can be sent out.
Any install (CD or otherwise) will be empty in terms of applications and data; not much you can do about that on a CD. But depending on the type of device and the storage possibilities, you can secure the database and application files separately. For example, on a PDA, using the external SD or flash storage for the data and files will keep them when the oracle directory is lost from main storage, and therefore all that is needed after the Oracle Lite recovery is to recreate the odbc file (and possibly polite if you have special parameters). Setting the crr flags for 'real' data on the server is a trick to bring the database up to date without a total rebuild.
NOTE: if you use SD cards etc. on PDAs/small devices, beware of the Windows CE/Mobile power-up differential between main memory and external storage. If you power down in the middle of an update or query, you can get device read/write errors when the device powers up, due to processing continuing before the storage media is active.