Transportable tablespace number of datafiles

We are using transportable tablespaces to move tablespaces.
In our database, the data tablespace contains 21 datafiles and the index tablespace contains 18 datafiles.
Tablespace Name: TEST_DATA Datafiles: /oradata/test_data001.dbf, /oradata/test_data002.dbf, .... /oradata/test_data021.dbf
Tablespace Name: INDEX_DATA Datafiles: /oradata/index_data001.dbf, /oradata/index_data002.dbf, .... /oradata/index_data018.dbf
I take the export successfully using the command below:
exp "'/ as sysdba'" file=exp_tts.dmp log=exp_tts.log transport_tablespace=y \
tablespaces=TEST_DATA,INDEX_DATA
For example (sample datafile names shown):
imp "'/ as sysdba'" file=exp_tts.dmp log=imp_tts.log fromuser=xxx \
touser=xxx transport_tablespace=y datafiles=/oradata/test_data001.dbf,/oradata/test_data002.dbf, .... /oradata/test_data021.dbf, \
/oradata/index_data001.dbf,/oradata/index_data002.dbf, .... /oradata/index_data018.dbf
Overall we are using 39 datafiles in the transportable tablespace set.
My question is about the import side, where I have to give all the datafile names. It is very complicated (it's possible to miss a datafile name).
1. Is there any other way to run the import?
2. Is it possible to give one location for all datafiles (e.g., datafile=/oradata)?
3. We are using the same source and target locations.
Regards
Murali

Hi,
You can give your desired location during import, but you must give the exact location of the datafiles as they have been restored.
i.e.,
if you copied all datafiles to /u01/oradata/test_data001.dbf
then in the import you must mention the same: datafiles='/u01/oradata/test_data001.dbf'.....
The import will then complete successfully.
Otherwise, if cloning is an option, do an RMAN clone.
Regards,
Deepak
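
As an aside, one way to reduce the risk of missing a datafile name on the command line is to put the whole list in a parameter file. A minimal sketch, reusing the file names from the question; the parfile name imp_tts.par is illustrative only, and the elided names (....) must still be written out in full:

# imp_tts.par -- hypothetical parameter file
file=exp_tts.dmp
log=imp_tts.log
fromuser=xxx
touser=xxx
transport_tablespace=y
datafiles=(/oradata/test_data001.dbf, /oradata/test_data002.dbf, .... /oradata/index_data018.dbf)

imp "'/ as sysdba'" parfile=imp_tts.par

The parfile can be kept under version control and reviewed before the run, which makes a missed file much easier to spot than on a long command line.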

Similar Messages

  • Transportable Tablespace and Table Statistics

We are loading a large data warehouse of about 2 TB, with tables up to 100 GB full of raster images. The database is 9.2.0.7 on Solaris, and we are using Transportable Tablespaces to move datafiles and tablespaces from the data integration server to the production server.
During the data integration, we calculate statistics on large tables, and it takes forever. We are wondering if these statistics are moved along with the metadata from the data integration server to the production server. In other words, should we calculate statistics before the tablespace export, or after the tablespaces have been set online on the production server?
    Thanks,

Hi,
To my knowledge you don't need to recalculate statistics.
The export and import commands have an option for statistics,
so by default Oracle exports the statistics of the tables along with them.
Hope this helps.
Regards
MMU
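
For reference, the original exp utility exposes this through its STATISTICS parameter (for instance, STATISTICS=NONE skips them). A minimal sketch, where the tablespace and file names are placeholders rather than values from this thread:

exp "'/ as sysdba'" transport_tablespace=y tablespaces=MY_TS \
    statistics=none file=exp_tts.dmp log=exp_tts.log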

  • What is the best way to copy Oracle Transportable Tablespace datafiles betw

    Hi All,
    We are planning to implement Oracle Transportable Tablespace feature to copy huge data (around 900 GB) between the DBs.
    Our plan is to:
1. Take tablespace 1 and tablespace 2 offline on Linux BOX1.
2. Take an export of these two tablespaces using TTS on Linux BOX1.
3. Zip the datafiles of tablespaces 1 & 2 on Linux BOX1.
4. Copy the zipped datafiles and dump file to Linux BOX2.
5. Unzip the datafiles of tablespaces 1 & 2 on Linux BOX1.
6. Bring the tablespaces back online on Linux BOX1.
7. Unzip the datafiles of tablespaces 1 & 2 on Linux BOX2.
8. Import the dump file into the DB on Linux BOX2.
9. Bring tablespaces 1 & 2 online on Linux BOX2.
However, I do have the below queries before I proceed.
1. Do you see any issue with the above approach?
2. To improve the copying speed across the network, do you suggest any solution other than rcp and ftp?
    Our Environment: Oracle 9.2.0.8 running on Linux 2.6.5-7.308-bigsmp #1 SMP
    Please share your experiences.
    Thanks in advance for your time and suggestions :)
    Krishna Bussu.

    Take a look at Note:77523.1.
    Hope this Helps
    Regards
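
On question 2, one common trick (an assumption on my part, not something from the note) is to compress in flight rather than zipping to disk first, which collapses the zip/copy/unzip steps into a single pass; host and file name patterns here are placeholders:

$ cd /oradata
$ tar cf - ts1_*.dbf ts2_*.dbf | gzip -1 | \
    ssh box2 'cd /oradata && gunzip | tar xf -'

Datafiles usually compress well, so this can substantially cut the bytes sent over the wire compared to plain rcp or ftp.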

  • Need help on transportable tablespace, after convert and move datafile over

I need help with transportable tablespaces: after converting and moving the datafiles over to the target server, how do I plug them into the ASM instance?
And how do I start the database? Which controlfile should I use?
    Thanks.

    I got error like this:
    RMAN> copy datafile '/oracle_backup/stage/ar.269.775942363' to '+RE_DAT';
    Starting backup at 29-MAY-12
    using channel ORA_DISK_1
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of backup command at 05/29/2012 17:46:21
    RMAN-20201: datafile not found in the recovery catalog
    RMAN-06010: error while looking up datafile: /oracle_backup/stage/ar.269.775942363
    How to deal with this?
    I deleted all the current target tablespaces including contents. Do I have to do anything to make the database recognize those converted datafiles?
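
One hedged reading of the RMAN-20201: COPY DATAFILE only accepts files that the controlfile already knows about, and a staged transportable datafile is not part of the database until it has been plugged in. The usual alternative is RMAN CONVERT, which writes the staged file into the disk group; a sketch, assuming the path from the error above:

RMAN> convert datafile '/oracle_backup/stage/ar.269.775942363' format '+RE_DAT';

The conversion output then shows the generated '+RE_DAT/...' file name, which is what goes into the datafiles / transport_datafiles parameter of the import.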

  • Error ORA-39125 and ORA-04063 during export for transportable tablespace

    I'm using the Oracle Enterprise Manager (browser is IE) to create a tablespace transport file. Maintenance...Transport Tablespaces uses the wizard to walk me through each step. The job gets created and submitted.
    The 'Prepare' and 'Convert Datafile(s)' job steps complete successfully. The Export step fails with the following error. Can anyone shed some light on this for me?
    Thank you in advance!
    =======================================================
    Output Log
    Export: Release 10.2.0.2.0 - Production on Sunday, 03 September, 2006 19:31:34
    Copyright (c) 2003, 2005, Oracle. All rights reserved.
    Username:
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - Production
    With the Partitioning, OLAP and Data Mining options
    Starting "SYS"."GENERATETTS000024": SYS/******** AS SYSDBA dumpfile=EXPDAT_GENERATETTS000024.DMP directory=EM_TTS_DIR_OBJECT transport_tablespaces=SIEBEL job_name=GENERATETTS000024 logfile=EXPDAT.LOG
    Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
    Processing object type TRANSPORTABLE_EXPORT/TABLE
    Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
    ORA-39125: Worker unexpected fatal error in KUPW$WORKER.UNLOAD_METADATA while calling DBMS_METADATA.FETCH_XML_CLOB [TABLE_STATISTICS]
    ORA-04063: view "SYS.KU$_IOTABLE_VIEW" has errors
    ORA-06512: at "SYS.DBMS_SYS_ERROR", line 105
    ORA-06512: at "SYS.KUPW$WORKER", line 6241
    ----- PL/SQL Call Stack -----
object handle  line number  object name
2CF48130       14916        package body SYS.KUPW$WORKER
2CF48130       6300         package body SYS.KUPW$WORKER
2CF48130       2340         package body SYS.KUPW$WORKER
2CF48130       6861         package body SYS.KUPW$WORKER
2CF48130       1262         package body SYS.KUPW$WORKER
2CF0850C       2            anonymous block
    Job "SYS"."GENERATETTS000024" stopped due to fatal error at 19:31:44

    More information:
    Using SQL Developer, I checked the view SYS.KU$_IOTABLE_VIEW referred to in the error message, and it does indeed report a problem with that view. The following code is the definition of that view. I have no idea what it's supposed to be doing, because it was part of the default installation. I certainly didn't write it. I did, however, execute the 'Test Syntax' button (on the Edit View screen), and the result was this error message:
    =======================================================
    The SQL syntax is valid, however the query is invalid or uses functionality that is not supported.
    Unknown error(s) parsing SQL: oracle.javatools.parser.plsql.syntax.ParserException: Unexpected token
    =======================================================
    The SQL for the view looks like this:
    REM SYS KU$_IOTABLE_VIEW
    CREATE OR REPLACE FORCE VIEW "SYS"."KU$_IOTABLE_VIEW" OF "SYS"."KU$_IOTABLE_T"
    WITH OBJECT IDENTIFIER (obj_num) AS
    select '2','3',
    t.obj#,
    value(o),
    -- if this is a secondary table, get base obj and ancestor obj
    decode(bitand(o.flags, 16), 16,
    (select value(oo) from ku$_schemaobj_view oo, secobj$ s
    where o.obj_num=s.secobj#
    and oo.obj_num=s.obj#),
    null),
    decode(bitand(o.flags, 16), 16,
    (select value(oo) from ku$_schemaobj_view oo, ind$ i, secobj$ s
    where o.obj_num=s.secobj#
    and i.obj#=s.obj#
    and oo.obj_num=i.bo#),
    null),
    (select value(s) from ku$_storage_view s
    where i.file# = s.file_num
    and i.block# = s.block_num
    and i.ts# = s.ts_num),
    ts.name, ts.blocksize,
    i.dataobj#, t.bobj#, t.tab#, t.cols,
    t.clucols, i.pctfree$, i.initrans, i.maxtrans,
    mod(i.pctthres$,256), i.spare2, t.flags,
    t.audit$, t.rowcnt, t.blkcnt, t.empcnt, t.avgspc, t.chncnt, t.avgrln,
    t.avgspc_flb, t.flbcnt, t.analyzetime, t.samplesize, t.degree,
    t.instances, t.intcols, t.kernelcols, t.property, 'N', t.trigflag,
    t.spare1, t.spare2, t.spare3, t.spare4, t.spare5, t.spare6,
    decode(bitand(t.trigflag, 65536), 65536,
    (select e.encalg from sys.enc$ e where e.obj#=t.obj#),
    null),
    decode(bitand(t.trigflag, 65536), 65536,
    (select e.intalg from sys.enc$ e where e.obj#=t.obj#),
    null),
    (select c.name from col$ c
    where c.obj# = t.obj#
    and c.col# = i.trunccnt and i.trunccnt != 0
    and bitand(c.property,1)=0),
    cast( multiset(select * from ku$_column_view c
    where c.obj_num = t.obj#
    order by c.col_num, c.intcol_num
) as ku$_column_list_t),
    (select value(nt) from ku$_nt_parent_view nt
    where nt.obj_num = t.obj#),
    cast( multiset(select * from ku$_constraint0_view con
    where con.obj_num = t.obj#
    and con.contype not in (7,11)
) as ku$_constraint0_list_t),
    cast( multiset(select * from ku$_constraint1_view con
    where con.obj_num = t.obj#
) as ku$_constraint1_list_t),
    cast( multiset(select * from ku$_constraint2_view con
    where con.obj_num = t.obj#
) as ku$_constraint2_list_t),
    cast( multiset(select * from ku$_pkref_constraint_view con
    where con.obj_num = t.obj#
) as ku$_pkref_constraint_list_t),
    (select value(ov) from ku$_ov_table_view ov
    where ov.bobj_num = t.obj#
    and bitand(t.property, 128) = 128), -- IOT has overflow
    (select value(etv) from ku$_exttab_view etv
    where etv.obj_num = o.obj_num)
    from ku$_schemaobj_view o, tab$ t, ind$ i, ts$ ts
    where t.obj# = o.obj_num
    and t.pctused$ = i.obj# -- For IOTs, pctused has index obj#
    and bitand(t.property, 32+64+512) = 64 -- IOT but not overflow
    -- or partitioned (32)
    and i.ts# = ts.ts#
    AND (SYS_CONTEXT('USERENV','CURRENT_USERID') IN (o.owner_num, 0) OR
    EXISTS ( SELECT * FROM session_roles
    WHERE role='SELECT_CATALOG_ROLE' ));
    GRANT SELECT ON "SYS"."KU$_IOTABLE_VIEW" TO PUBLIC;
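
For anyone hitting the same wall, a hedged diagnostic path (my suggestion, not from this thread) is to see why the view is invalid and try recompiling it before re-running the export:

SQL> alter view sys.ku$_iotable_view compile;
SQL> select line, text from dba_errors
     where owner = 'SYS' and name = 'KU$_IOTABLE_VIEW';

If several KU$ metadata views are invalid, recompiling invalid objects with utlrp.sql is sometimes suggested, but changes to SYS objects are best confirmed with Oracle Support first.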

  • Oracle 10gR2 - Any way to speed up Transportable Tablespace Import?

I have a nightly process that uses expdp/impdp with the Transportable Tablespace option to copy a full schema from one database to another. This process was initially done with just expdp/impdp; however, the creation of the indexes started to slow down the process, at which time I switched to using Transportable Tablespace.
In the last 3 months, the number of objects in the schema has not changed; however, the amount of data has increased by 15.73%. At the same time, the Transportable Tablespace impdp has increased by 30.77% in processing time.
    Is there any way to speed up a Transportable Tablespace impdp in Oracle 10gR2? The docs show that I cannot use parallel with TTS. I have excluded STATISTICS in the export.
    Eventually I will be moving away from doing this TTS expdp/impdp, however for now I need to do what I can to contain the amount of time it takes to process.
    Environment: Oracle 10.2.0.4 (source and destination) on Sun Solaris SPARC v10. We are not using ASM currently. Tablespace is 6 datafiles for a total of 87.5GB, 791 tables, 1924 indexes, 1773 triggers, and 10 LOBs.
    Any help would be greatly appreciated.
    Thanks!
    -Vorpel

My reading of this thread was that there was a need to be able to do offline setups when the client had lost Oracle Lite (we get this on PDAs all the time when they go totally dead), so the CD install is something that can be sent out.
Any install (CD or otherwise) will be empty in terms of applications and data; not much you can do about that on a CD. But depending on the type of device and the storage possibilities you have, you can secure the database and application files separately. For example, on a PDA, using the external SD or flash for the data and file storage will keep them when the oracle directory is lost from main storage, and therefore all that is needed after the Oracle Lite recovery is to recreate the odbc file (and possibly polite if you have special parameters). Setting the crr flags for 'real' data on the server is a trick to bring the database up to date without a total rebuild.
NOTE: if you use SD cards etc. on PDAs/small devices, beware of the Windows CE/Mobile power-up differential between main memory and external storage. If you power down in the middle of an update or query, you can get device read/write errors when the device powers up, due to processing continuing before the storage media is active.

  • Transportable tablespace errors

    Hi experts
    Could you please tell me how to resolve this?
    connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Master table "ABCD"."SYS_IMPORT_TRANSPORTABLE_01" successfully loaded/unloaded
    Starting "XYZ"."SYS_IMPORT_TRANSPORTABLE_01": DDEEF/******** directory = tt_import_dir dumpfile = dev_tt.dmp transport_datafiles = /apps/oracle/oradata/dev/datafile1_data-01.dbf, /apps/oracle/oradata/dev/datafile2_data-01.dbf, /apps/oracle/oradata/dev/datafile3_data-01.dbf remap_schema=dev_xxx:xxx14 remap_schema=dev_yyyy:yyyyy14 remap_schema=dev_zzzz:zzzz14 remap_tablespace=dev_zzzzz_data:zzzz14_data remap_tablespace=dev_yyyyy_data:yyyyt14_data remap_tablespace=dev_zzzzz_data:zzzzz14_data
    Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
    ORA-39123: Data Pump transportable tablespace job aborted
    ORA-19721: Cannot find datafile with absolute file number 11 in tablespace REPORT14_DATA
    Job "ABDC"."SYS_IMPORT_TRANSPORTABLE_01" stopped due to fatal error at 10:48:35
    thanks
    seema

Seema,
This is what is mentioned about the error:
ORA-19721: Cannot find datafile with absolute file number string in tablespace string
Cause: Cannot find one of the datafiles that should be in the pluggable set.
Action: Make sure all datafiles are specified via the import command line option or parameter files.
So it seems the exported dump is not complete. Retry by taking the export again and then try importing it.
HTH
Aman....
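
To pin down which file is missing (a hedged suggestion, not from the thread), the absolute file number in the ORA-19721 can be matched against the source database's dictionary; every file returned must then appear in the transport_datafiles list:

SQL> select file_id, file_name
     from dba_data_files
     where tablespace_name = 'REPORT14_DATA';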

  • Standby not in sync because test on primary of transportable tablespace etc

This environment is newly built and not in use yet.
The db version is 11.2.0.3 on Linux; both primary and standby are configured as two-node RAC, and storage is in ASM.
The primary db was tested by migrating data using the transportable tablespace method. The imported datafiles were put on a local filesystem, which I then had to switch to ASM afterwards.
Because of this, the standby db got the impression that the datafiles should be on the local filesystem, and it got invalidated.
Here is the error from the standby log:
ORA-01157: cannot identify/lock data file 4 - see DBWR trace file
ORA-01110: data file 4: '/stage/index.341.785186191'
And the datafile names in the standby are all on the local filesystem: /stage/...../
However, in the primary the datafile names are all in the ASM instance:
+DAT/prd/datafile/arindex.320.788067609
How do I resolve this situation?
    Thanks

My steps for transportable tablespaces, after scp'ing the datafiles to Linux, were: import the transportable datafiles into the database, then use RMAN to copy them to '+DATA' in ASM, and then switch the database to the copy.
This procedure all worked on the primary.
The standby was built before the transportable tablespace work, when there were only the basic tablespaces (system, users, tools, etc.). It worked and was in sync with the primary. However, every time the primary tested data migration with the transportable tablespaces, it stopped working.
So what is the right way to perform this migration? How can I keep the standby in sync with the primary and have it read the datafiles in ASM storage?
Is there a way, before the RMAN step that transfers the filesystem datafiles to ASM, to copy those local filesystem datafiles to ASM storage first, and then do an import of the transportable datafiles with those on ASM?

If you add any datafiles manually, those will be created on the standby as well, as per the settings. But here you are performing TTS; of course DML/DDL can be transferred to the standby by archive logs, but what about the actual datafiles? In this case the files exist on the primary but not on the standby.
AFAIK, you have to perform a couple of steps. Once your complete migration is done on the primary, do as follows. (Do test it.)
1) Complete your migration on the primary.
-- Check all datafile status and locations.
2) Now restore the standby controlfile and the newly migrated tablespaces in the standby database.
-- Here you can restore directly to ASM using RMAN, because you are taking the backup on the primary using RMAN, so RMAN can restore them directly into the ASM file system.
3) Make sure you have all the datafiles where you want them and that no datafile is missing. Crosscheck all tablespace information in the primary and standby databases.
4) Now start MRP.
That is my plan above, but I suggest you test it...
One question: after a successful migration, were you able to SYNC the standby with the primary earlier?
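
A minimal sketch of steps 2 and 4 above on the standby side, offered with the caveat that every path and name here is a placeholder and the backups taken on the primary must be accessible to the standby:

RMAN> startup nomount;
RMAN> restore standby controlfile from '/backup/standby_ctl.bkp';
RMAN> alter database mount;
RMAN> restore tablespace arindex;
SQL> alter database recover managed standby database disconnect from session;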

  • Transport Tablespace via RMAN

Hi all,
I configured controlfile autobackup on, took a backup of the whole database with the "backup database;" command, and then ran:
RMAN> transport tablespace users tablespace destination 'c:\Transport' auxiliary destination 'c:\Auxi';
I am receiving:
RMAN-03015: error occurred in stored script Memory Script
RMAN-06026: some targets not found - aborting restore
RMAN-06024: no backup or copy of the control file found to restore
In the beginning lines of the RMAN output I saw "control_files=c:\Auxi/cntrl_tspitr_ST1_dFcB.f", which looks like a Unix path format on Windows? Do you have any idea?
Best Regards
Oracle 10.2.0.3 on Vista SP2

hi
Yes, it is on. Besides, I took the backup with "backup database include current controlfile" and nothing changed.
    RMAN> show all;
    RMAN configuration parameters are:
    CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
    CONFIGURE BACKUP OPTIMIZATION OFF; # default
    CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
    CONFIGURE CONTROLFILE AUTOBACKUP ON;
    CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
    CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET; # default
    CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
    CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
    CONFIGURE MAXSETSIZE TO UNLIMITED; # default
    CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
    CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
    CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO 'C:\ORACLE\PRODUCT\10.2.0\DB_1\DATABASE\NCFST1.ORA'; # default
Also, I have just tested the same command on an AIX platform and it works well, so I think this is related to the Windows platform?

  • Oracle Upgrade V9 to V10 - Transportable Tablespaces

    Hi,
    I am upgrading a database from Oracle 9i v 9.2.0.6.0 to Oracle 10g 10.2.0.4.0. Both servers are running Windows 2003 Server 32Bit.
    My plan is to upgrade using Transportable Tablespaces. This is what I have done.
    1) exec DBMS_TTS.TRANSPORT_SET_CHECK('XXX,XXX,XXX,XXX,XXX,XXX,XXX,XXX,XXX,XXX,XXX',true); --> This checked out fine.
    2) Set the above tablespaces as read only.
    3) Copied all the datafiles to the new server (2TB)
    4) Did the export: exp 'xxx@xxxx as sysdba' transport_tablespace=y tablespaces=(XXX,XXX,XXX,XXX,XXX ETC...) tts_full_check=y file=c:\EXP.dmp log=c:\exp.log
5) Here comes the problem: when I do the import, it imports the tables with the data but forces me to create the schemas manually. It does not import the users / packages / views / triggers etc...
I need to do an export that will include everything (creates schemas, packages, views, triggers, tables, data).
I know there is an option FULL=y, but this will not work as I do not have an additional 2TB for the export file.
    Can someone please explain how to do this?
    Kind Regards
    Henk

    Welcome to the forums !
    Is there a reason you are using transportable tablespaces ? Would it not be easier to do an in-place upgrade on the current server, then copy/clone the instance to the new server ?
    HTH
    Srini
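
One hedged workaround for the missing users/packages/views/triggers, not from this thread: a full export taken with ROWS=N contains metadata only (DDL for schemas, packages, views, triggers) and is therefore small, so the 2TB objection does not apply to it. For illustration, with the connect string pattern from the question:

exp 'xxx@xxxx as sysdba' full=y rows=n file=c:\meta.dmp log=c:\meta.log

The idea would be to import this metadata dump alongside the transportable tablespace import; the exact ordering needs testing.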

  • Stuck with transported tablespace and ASM

    Hi,
I'm on 10.2.0.5.4 and playing with the TTS feature.
So I've made a TTS set on the source DB 10.2.0.5 via the RMAN TTS feature (auxiliary instance), and copied it to a cloned database with ASM (same OS, Solaris 10).
Now my situation is as follows:
on the target I can see the TTS1 tablespace:
    FILE_NAME                                          TABLESPACE_NAME                STATUS
    +DG1/sol10p/datafile/users.262.763395469           USERS                          AVAILABLE
    +DG1/sol10p/datafile/sysaux.260.763395447          SYSAUX                         AVAILABLE
    +DG1/sol10p/datafile/undotbs1.259.763395423        UNDOTBS1                       AVAILABLE
    +DG1/sol10p/datafile/system.258.763395387          SYSTEM                         AVAILABLE
    +DG1/sol10p/datafile/example.261.763395463         EXAMPLE                        AVAILABLE
    +DG1/sol10p/datafile/assm.257.763395311            ASSM                           AVAILABLE
    /u01/tts/tts1.274.763488147                        TTS1                           AVAILABLE
/u01/tts/tts1.272.763487799                        TTS1                           AVAILABLE
but when trying to copy those files into ASM I'm getting:
    RMAN> report schema;
    Report of database schema
    List of Permanent Datafiles
    ===========================
    File Size(MB) Tablespace           RB segs Datafile Name
    1    510      SYSTEM               ***     +DG1/sol10p/datafile/system.258.763395387
    2    310      UNDOTBS1             ***     +DG1/sol10p/datafile/undotbs1.259.763395423
    3    250      SYSAUX               ***     +DG1/sol10p/datafile/sysaux.260.763395447
    4    62       USERS                ***     +DG1/sol10p/datafile/users.262.763395469
    5    100      EXAMPLE              ***     +DG1/sol10p/datafile/example.261.763395463
    6    2048     ASSM                 ***     +DG1/sol10p/datafile/assm.257.763395311
    List of Temporary Files
    =======================
    File Size(MB) Tablespace           Maxsize(MB) Tempfile Name
    1    512      TEMP                 512         +DG1/sol10p/tempfile/temp.265.763396585
    RMAN> backup as copy datafile '/u01/tts/tts1.274.763488147' format '+DG1';
    Starting backup at 03-OCT-11
    using channel ORA_DISK_1
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of backup command at 10/03/2011 20:00:34
    RMAN-20201: datafile not found in the recovery catalog
RMAN-06010: error while looking up datafile: /u01/tts/tts1.274.763488147
Please advise.
    Regards
    GregG

I'm not using a catalog (connecting with nocatalog). Hmm....
    The files are listed in V$DATAFILE and DBA_DATA_FILES but not in an RMAN REPORT SCHEMA ?
    That's curious. V$DATAFILE and RMAN REPORT SCHEMA would both be reading from the controlfile.
    What was the sequence of actions to transport the tablespace ? Have you tried reconnecting with RMAN ?
    Hemant K Chitale

  • Transportable tablespaces with Logical Standby

    Does anyone know whether or not transportable tablespaces can be used in a Logical Standby environment? I know they can be used with a Physical Standby, and the documentation covers how to do this, but there's nothing about Logical Standby. This is Oracle 10.2.0.4/SLES 10.
    Thanks.

Using transportable tablespaces in a logical standby environment: actually, there is a paragraph in the 10.2 Data Guard Concepts and Administration manual. I will summarize:
1. After creating the transportable tablespace export file on the source system, copy the datafiles to both the primary and logical standby systems.
2. On the Data Guard logical standby, within SQL*Plus, issue SQL> alter session disable guard;
3. Import the tablespaces into the logical standby database.
4. SQL> alter session enable guard;
5. Import the tablespaces into the primary database.
In 10.1, we imported the tablespaces into an environment with a physical standby and then converted the physical standby to a logical standby.
I tested the procedure for 10.2 and it seemed to work. Has anyone else done this, and if so, are you aware of any issues? Like I said, the documentation is one very brief paragraph.
    Thanks.
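
For what it's worth, the import in steps 3 and 5 above would be the standard transportable tablespace import; a minimal sketch with original imp, where the dump file and datafile paths are placeholders rather than values from this thread:

imp "'/ as sysdba'" transport_tablespace=y file=trans.dmp \
    datafiles='/u01/oradata/trans_ts01.dbf','/u01/oradata/trans_ts02.dbf'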

  • TRANSPORTABLE TABLESPACE IN 8.1

Product: ORACLE SERVER
Date: 2004-08-16
Oracle8i provides a feature that lets you move the datafiles that make up a tablespace and plug them into another database.
SCOPE
In Standard Edition (8i through 10g), only the import side of the transportable tablespace feature is supported.
USES
1. When large volumes of data have to flow within an enterprise information system - for example, from an OLTP database to a data warehouse, or from a data warehouse to a data mart - the options before 8.1 were to speed the job up with SQL*Loader direct path or parallel DML. With the 8.1 Transportable Tablespace feature, the job can be completed in roughly the time it takes to copy the datafiles to the new system.
2. Data that is changed and managed centrally but used at branch offices can be distributed on CD-ROM. For example, a tablespace holding product specifications and prices can be updated and saved centrally, distributed, and then plugged into the branch databases for use in an ordering system.
3. Content providers can ship their content in transportable tablespace form so that customers can plug it straight into their own databases.
CHARACTERISTICS AND RESTRICTIONS
1. The entire contents of the tablespace are moved.
2. Media recovery is supported.
3. The source and target databases must
- run on the same OS,
- be Oracle8i (8.1) or later,
- use the same block size,
- use the same character set.
PROCEDURE
1. Set the tablespace to read only.
This guarantees that no changes are made to the tablespace while its files are copied.
2. Export the metadata from the source database.
This dumps the dictionary information about the tablespace and the objects in it to a dump file.
3. Move the tablespace's datafiles to the target system.
4. Move the export dump file.
5. Import the metadata into the target database.
6. If needed, switch the tablespace back to read-write mode afterwards.
SAMPLE
Source database: dbA
Target database: dbB
Tablespace to move: TRANS_TS (consisting of /u01/data/trans_ts01.dbf and /u01/data/trans_ts02.dbf)
1. On dbA, set TRANS_TS to read only:
alter tablespace TRANS_TS read only ;
2. On dbA, export the metadata:
exp sys/manager file=trans.dmp transport_tablespace=y
tablespaces=trans_ts triggers=n constraints=n
If the version is 8.1.6 or later, use exp \'sys/manager as sysdba\' instead of exp system/manager.
Set transport_tablespace to Y.
List the tablespaces to be transported in the tablespaces parameter.
Also specify whether the triggers and constraints on the tables in the tablespace should be included.
3. Binary-copy the two datafiles of TRANS_TS to the system where dbB resides.
4. Binary-copy the dump file exported in step 2 above to the system where dbB resides.
5. Import the metadata into dbB:
imp sys/manager file=trans.dmp transport_tablespace=y
datafiles=/disk1/trans_ts01.dbf,/disk2/trans_ts02.dbf
For 8.1.6 or later, here too write \'sys/manager as sysdba\' instead of sys/manager.
Set transport_tablespace to Y.
The datafile names refer to the filenames as copied onto the dbB system.
6. If needed, switch the tablespace to read-write mode:
alter tablespace TRANS_TS read write ;
TRANSPORT SET
The set of tablespaces to be transported must be self-contained. If a partitioned table exists in the tablespace set, all partitions of that table must be inside these tablespaces; similarly, the data of LOB columns must reside in these tablespaces together with the table data. Having all related objects inside the tablespace set is what "self-contained" means.
A tablespace set that is not self-contained cannot be transported.
To check whether a transport tablespace set is self-contained, use the DBMS_TTS.TRANSPORT_SET_CHECK procedure.
For example, running
DBMS_TTS.TRANSPORT_SET_CHECK(ts_list=>'A,B,C',incl_constraints=>TRUE)
records whether the transport tablespace set consisting of the three tablespaces A, B, and C is self-contained in the TRANSPORT_SET_VIOLATIONS view.
If incl_constraints is set, referential (foreign key) constraints are also checked for self-containment.

you can do the following:
1. Export all objects in your tablespace, using the option tables=(x,y,...).
2. Drop your tablespace including contents.
3. Create a new tablespace with the desired name using the same datafile names (you can physically delete them first or use the REUSE option).
4. Create the objects you exported in the new tablespace.
5. Perform your import.
By creating the objects first, you guarantee that the import will populate the tables in the tablespace you need.
This is just a general plan; you have to clarify and confirm all the details.

  • Transportable tablespace /imp problem

Dear all,
Please correct me if I am wrong.
Can transportable tablespaces be used across different versions of the Oracle database (i.e., from a lower to a higher version, e.g., 10gR2 to 11gR2), taking the limitations of this task into consideration?
1. When this task is done on the same platform (Unix to Unix) with the same endianness (big to big), there is no need to convert the datafiles or tablespace. The task is done by making the tablespace read only, exporting the tablespace metadata, FTPing the datafiles and dump file to the new server, importing the dump, and then switching the tablespace back to read-write?
2. When this task is done from Windows to Unix, from 10gR2 to 11gR2: check self-containment on both sides, make the tablespace read only, export, FTP the datafiles and dump file to the new server, convert the datafiles there, then import, and finally switch the tablespace back to read-write?
Please give me your opinion on this.
I also need your advice on the import below:
oraicon@billdb03 $ imp USERID=\'sys/iti_ICON_sys as sysdba\' tablespaces=USERS transport_tablespace=y FROMUSER=sys@TABSTST as sysdba TOUSER=sys@PRODICON as sysdba file=/icon/appl/oracle/trans.dmp datafiles='/icon/appl/oracle/USERS01.dbf'
    Import: Release 10.2.0.3.0 - Production on Tue Dec 20 12:11:34 2011
    Copyright (c) 1982, 2005, Oracle. All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
    With the Partitioning, OLAP and Data Mining options
    Export file created by EXPORT:V10.02.01 via conventional path
    About to import transportable tablespace(s) metadata...
    import done in AL32UTF8 character set and AL16UTF16 NCHAR character set
    export client uses US7ASCII character set (possible charset conversion)
    IMP-00017: following statement failed with ORACLE error 29342:
    "BEGIN sys.dbms_plugts.checkUser('ORAESB'); END;"
    IMP-00003: ORACLE error 29342 encountered
    ORA-29342: user ORAESB does not exist in the database
    ORA-06512: at "SYS.DBMS_PLUGTS", line 1895
    ORA-06512: at line 1
    IMP-00000: Import terminated unsuccessfully
    oraicon@billdb03 $ imp USERID=\'sys/iti_ICON_sys as sysdba\' \ tablespaces=USERS transport_tablespace=y file=trans.dmp datafiles='/icon/appl/oracle/USERS01.dbf'
    Import: Release 10.2.0.3.0 - Production on Tue Dec 20 12:28:50 2011
    Copyright (c) 1982, 2005, Oracle. All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
    With the Partitioning, OLAP and Data Mining options
    Export file created by EXPORT:V10.02.01 via conventional path
    About to import transportable tablespace(s) metadata...
    import done in AL32UTF8 character set and AL16UTF16 NCHAR character set
    export client uses US7ASCII character set (possible charset conversion)
    . importing SYS's objects into SYS
    . importing SYS's objects into SYS
    IMP-00017: following statement failed with ORACLE error 29342:
    "BEGIN sys.dbms_plugts.checkUser('ORAESB'); END;"
    IMP-00003: ORACLE error 29342 encountered
    ORA-29342: user ORAESB does not exist in the database
    ORA-06512: at "SYS.DBMS_PLUGTS", line 1895
    ORA-06512: at line 1
    IMP-00000: Import terminated unsuccessfully
    oraicon@billdb03 $
    MANY THANKS

1. Read the documentation. You can use RMAN to convert the files.
2. Read the documentation. You can use RMAN to convert the files.
3. The error message is self-explanatory.
4. The error message is self-explanatory.
Please do your own research prior to posting.
Please include only one question per post.
    Sybrand Bakker
    Senior Oracle DBA
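
On the ORA-29342 itself: a hedged reading is that the transported tablespace contains objects owned by ORAESB, and the owning user must exist in the target database before the plug-in. For illustration only (the password and grants are placeholders, not from this thread):

SQL> create user oraesb identified by some_password;
SQL> grant create session, create table to oraesb;

Then re-run the imp command.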

  • Maximum number of datafiles

    Hi All,
We have a PROD database (Oracle version 10.2.0.2), and in this DB a tablespace has crossed the maximum number of datafiles (975 datafiles).
Any suggestions on what can be done to avoid the warnings, and does this have any impact on DB performance?
    Thanks and Regards,
    Nick.

sybrandb wrote:
On the other hand, Oracle DBAs administering products like SAP, PeopleSoft, and Oracle Financials tear their hair out and suffer from high blood pressure because they can't tame the beast. (Oracle Financials even used to have no foreign keys.)
Out of the box, PeopleSoft has no FKs either. And no PKs. And no triggers. And what's the issue? You don't have to update the data outside the application or the component interface.
Everything must be done with respect for the application layer.
Everything, including the database management.
As a PeopleSoft admin for the last few years, yes, I have to confirm some high blood pressure because that style of management can seem "strange" (which could explain some migraines I have, thanks for linking the two), but believe me, after several years you learn how to bypass the proprietary tool provided by the ERP and manage everything through SQL*Plus, though always while maintaining the ERP metamodel. And finally, the hair is still there.
    Nicolas.
