An impdp question

Hi,
Is it possible to import table data into a table with a different name? All attributes of the target table are the same as the source table except its name.
It seems not, but is there a workaround?
IBM AIX 5.3, Oracle 10.2.0.4.0
Regards,
erkan

Hi porzer,
Thanks for the help. However, it does not solve my problem. Let me explain my situation:
I have a table SHEMA.TABLE in database A. In database B there is a view SHEMA.TABLE which accesses SHEMA.TABLE on database A through a database link:
create or replace view SHEMA.TABLE as select * from SHEMA.TABLE@DB_LINK_A
I am about to set up replication between A and B. For the initial load, I want to export SHEMA.TABLE from A and import it into B as SHEMA.TABLE_PRE. Then, once the two tables are synchronized, I will instantly drop the view and rename table SHEMA.TABLE_PRE to TABLE in database B.
The SHEMA.TABLE view in database B is used 24/7, so I cannot touch it.
Here is a solution:
1) export the table
2) import it into a dummy database
3) rename the table and export again
4) import at B
On the other hand, there is not just one table; there are many huge tables to be copied. If I could do it in one export-import, that would be very helpful. Two export-imports double my work.
Regards,
Erkan
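For completeness: on Oracle 11g and later, Data Pump's REMAP_TABLE parameter does this in a single import; it does not exist on 10.2, so on that release the dummy-database rename above remains the workaround. A hedged sketch, with the directory object and dump file names invented for illustration:

```shell
# 11g+ only -- import the table under a new name in one step.
# DP_DIR and table_dump.dmp are placeholder names.
impdp system/****** DIRECTORY=DP_DIR DUMPFILE=table_dump.dmp \
  TABLES=SHEMA.TABLE \
  REMAP_TABLE=SHEMA.TABLE:TABLE_PRE
```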

Similar Messages

  • IMPDP question -- Replace all objects

    Question
    I have a situation where I would like to use Data Pump to refresh a test database schema weekly from our production instance schema, and I would like to set up a job to do this. I can use impdp with the NETWORK_LINK option with no issue. My question is: does anyone know of a way (or can recommend a way) to update all schema objects in this refresh job? For example, I want to bring over all tables, data, sequences, PL/SQL, etc. The impdp command allows TABLE_EXISTS_ACTION=REPLACE to replace existing tables/data; is there a similar command for sequences and other object types? If not, does anyone have ideas on how I can easily do this?
    I know I could write a sql script to drop and recreate all, but then I run into the problem of having to regrant privileges to sequences, synonyms, plsql, etc.
    Looking for ideas.
    Thanks.

    - Is TEST always the exact same schema (DDL) as PROD?
    YES, or should I say it had better be, as I do not typically allow direct changes to production without running them through test. Although if we are refreshing test from production, any DDL changes would eventually be updated in test from their values in prod anyway.
    - Have you considered using replication or CTAS instead?
    Replication, YES, but there is a lot of overhead for something I don't think warrants it. CTAS would work, but dropping and recreating is slower than IMPDP.
    - Just the data & PL/SQL, no DDL?
    I suppose DDL would also need to be refreshed.
    - There is IGNORE=N, which forces you to pre-drop the tables in TEST, thereby guaranteeing freshness.
    Does this option drop the objects, or does it just error out and require that you pre-drop them by some other means? I suppose it is the latter.
    Because IMPDP is good at bringing privileges over, I am considering the following.
    1. Connect to TEST, drop user XYZ cascade;
    2. impdp command using network_link option.
    Does doing this miss something?
    I agree that impdp log file will have to be interrogated for errors.
    Let me know.
    Thanks.
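    The two-step refresh described above might be sketched as follows; the link, directory, and schema names are placeholders, not taken from the thread:

    ```shell
    # 1. Drop the test schema so the import recreates everything fresh.
    sqlplus -s system/******@TEST <<'EOF'
    DROP USER xyz CASCADE;
    EOF

    # 2. Pull the schema straight from PROD over a database link;
    #    no dump file is written, only a log in the directory object.
    impdp system/******@TEST NETWORK_LINK=prod_link SCHEMAS=xyz \
      DIRECTORY=DP_DIR LOGFILE=refresh_xyz.log
    ```

    As noted above, the import log still has to be checked for errors afterwards.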

  • Impdp Question

    Hi,
    I have two databases on separate machines. Both databases are 11gR1 running on Oracle Solaris 10.
    One machine is test1 and the other is test2.
    On test1, I have taken a dump of the database using expdp. Now I want to import this dump into the database on the test2 machine using impdp.
    Can I use the impdp of test1 to import the dump into the database at test2?
    For some reason, the impdp command on the test2 server is not working and gives an ORA-00600 error.
    regards

    Fahd Mirza wrote:
    Thanks for the answer. The error is ORA-00600 and can be removed by applying the patch. I don't have the patch right now; I just want to verify whether it is possible to use the impdp of server test1 to import into the database at test2?
    Yes, it is possible.
    Data Pump runs only on the server side. You may initiate the export from a client, but the job(s) themselves run inside an Oracle server. No dump files (expdat.dmp) or log files are created on your local machine if you initiate a Data Pump Export (expdp) there.
    Oracle creates dump and log files through DIRECTORY objects, so before you can use Data Pump you must create a DIRECTORY object.
    You can invoke impdp from another machine, but your dump file must reside on the server where the database is running.

  • Expdp and impdp question

    Hi guys,
    I am taking an export with the expdp utility of an Oracle 11g database.
    It's important to mention that the source database has db_block_size=8K and the target db_block_size=16K, so I want to ask if there is any issue in this case.
    Thanks in advance.
    Regards

    ORV wrote the same question as above.
    No issue: db_block_size does not need to match between the source and target of a Data Pump export/import.

  • Impdp, Network_Link, and Data Append, security questions

    I am implementing a nightly table data transfer between two networked DB servers and have some questions:
    1. Can the impdp command have a NETWORK_LINK and still have TABLE_EXISTS_ACTION=APPEND and REMAP_TABLE=[schema.]old_tablename[.partition]:new_tablename?
    For example: impdp hr/hr NETWORK_LINK=DB_LINK TABLES=hr.employees REMAP_TABLE=hr.employees:emps TABLE_EXISTS_ACTION=APPEND
    2. How secure is the data move, is the data encrypted?

    893730 wrote:
    1. Can the impdp command use NETWORK_LINK together with TABLE_EXISTS_ACTION=APPEND and REMAP_TABLE?
    I am not sure this should fail. Did you encounter any issue while doing the export/import?
    For example: impdp hr/hr NETWORK_LINK=DB_LINK TABLES=hr.employees REMAP_TABLE=hr.employees:emps TABLE_EXISTS_ACTION=APPEND
    2. How secure is the data move; is the data encrypted?
    This excerpt is from the Oracle documentation:
    Caution:
    If an export operation is performed over an unencrypted network link, then all data is exported as clear text even if it is encrypted in the database.
    http://docs.oracle.com/cd/E11882_01/server.112/e22490/dp_export.htm#SUTIL856
    Regards,
    NC

  • 11.2.0.3 impdp table_exist_action question

    Hi everyone,
    Need some help with my question about TABLE_EXISTS_ACTION.
    My expdp is normally done as: FULL=Y CONTENT=ALL.
    My impdp is normally done as: FULL=Y CONTENT=ALL (or) SCHEMAS=xxxx CONTENT=ALL.
    I've been reading about the TABLE_EXISTS_ACTION parameter.
    If no table pre-exists in the database and I need to do an import, which one should I be using:
    TABLE_EXISTS_ACTION=SKIP
    TABLE_EXISTS_ACTION=REPLACE
    TABLE_EXISTS_ACTION=APPEND
    TABLE_EXISTS_ACTION=TRUNCATE
    If tables pre-exist in the database and I need to do an import, should I pick REPLACE or SKIP?
    Thanks,
    Joy

    If no table pre-exists in the database and I need to do an import, which one should I be using?
    If you don't have any of the tables already existing in your target database, then it really doesn't matter which one you use. The parameter tells Data Pump what to do if it tries to create a table and it already exists.
    If tables pre-exist in the database, should I pick REPLACE or SKIP?
    It depends on what you want Data Pump to do:
    If you want the table and the data only from the dump file, then you want to REPLACE the existing table with the table from the dump file.
    If you want the table and the data only from the existing database, then you want to SKIP the table if it already exists.
    If you want to truncate the existing data in the existing table and keep only the data from the dump file, then you want to TRUNCATE the existing table.
    If you want to add the data from the dump file to your existing table, then you want to APPEND to the existing data in the existing table.
    This only happens if the table already exists.
    Hope this helps.
    Dean
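    As a sketch, each behaviour is simply selected per run; the directory and dump file names here are placeholders:

    ```shell
    # Add the dump's rows to tables that already exist;
    # tables missing from the target are created as usual.
    impdp system/****** DIRECTORY=DP_DIR DUMPFILE=full.dmp FULL=Y \
      TABLE_EXISTS_ACTION=APPEND
    ```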

  • Question on expdp and impdp for a full database

    I have a 10.2.0.4.0 database (db1) on Solaris 10.
    I have to convert the database character set from WE8ISO8859P1 to AL32UTF8 by the export and import method per note [ID 260192.1].
    I will not move the database for the conversion.
    What I have done:
    - ran csscan for db1 (from WE8ISO8859P1 to AL32UTF8); the results are perfect -- no "lossy" or "truncation" entries
    - ran expdp with FULL=Y successfully
    - removed db1 with DBCA
    - created db1 with AL32UTF8 via DBCA, deselecting all the schema options; the tablespaces/datafiles are in exactly the same locations as in db1 before it was removed
    - I will run impdp with FULL=Y
    Questions:
    Some users exist in the original db1 (the source) but not in the newly created target; the same is true for some tablespaces.
    Do I have to manually create these users and tablespaces in the target database?
    I thought impdp FULL=Y would make a full load of the database, which should create the new users and tablespaces on the target, correct?
    Thanks for the help.
    TIA

    Actually, the answer for what you need to do depends in part on whether you are importing back into the original source after it was cleaned out, or importing into a new database.
    When the target is not the source, then on a full import, if the file paths exist and the import can allocate the data files, the import should create the tablespaces. However, if the same paths cannot be used, or if the source database is on the same server and you obviously do not want to overwrite the existing database's data files, then you would need to pre-create the target tablespaces in order to use different files.
    The users should be created. I do not have time to run a test for the case where the user's default tablespace does not exist, but I would expect the database to create the user, defaulting the user's default tablespace to whatever tablespace you have set for the database as a whole.
    I would pre-create owners and grant the usernames the necessary system privileges in advance, including quotas, since it is possible that existing owners may not have all the privileges necessary to re-create what already exists.
    HTH -- Mark D Powell --

  • Expdp/impdp quick question

    I just did an export of around 20 schemas from my production database.
    expdp userid=system/pwd schemas=('a','b',...,'x') dumpfile=xxx logfile=yyyy directory=....
    Is it possible for me to import just one particular schema into my test database by specifying the schema of interest (in this case schema a)?
    impdp userid=system/pwd schemas=('a') dumpfile=.... logfile=... directory=...

    Sorry, you'll need to close and repost. I am not authorized to move the thread.
    Regards,
    Brandye Barrington
    Certification Forum Moderator
    Certification Program Manager

  • Impdp command question

    Hi All,
    I am trying to import into an 11.2.0.2 database on Linux. I am trying to exclude 2 tables from 2 different users and exclude constraints as well. This is the syntax I have. Any help would be appreciated.
    nohup impdp bb_bb60@BB9TEST schemas=BBADMIN,BBINSTHOST,BB_BB60,BB_BB60_REPORT,BB_BB60_STATS,CMS,CMS_DOC,COURSELIFE,LCYCLE,MSC197 directory=TEST_DIR parallel=4 dumpfile=SCHEMAS_01.dmp,SCHEMAS_02.dmp,SCHEMAS_03.dmp,SCHEMAS_04.dmp logfile=impdpSCHEMAS1.log EXCLUDE=TABLE: "IN ('BB_BB60.ACTIVITY_ACCUMULATOR','BB_BB60_STATS.ACTIVITY_ACCUMULATOR')" content=DATA_ONLY &
    Thanks

    When you specify an EXCLUDE statement, it only accepts object names, not schema.object_name. So you can't really do what you want to do. If your dump file had:
    user1.table1
    user1.table2
    user2.table1
    user2.table2
    you could not exclude just user1.table1 and user2.table2. You can exclude table1,table2, but you would be excluding all 4 tables instead of 2.
    This is not supported.
    Dean
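    As a sketch of what the name-only filter means in practice, a parameter file avoids the OS-level quote escaping; this excludes ACTIVITY_ACCUMULATOR from every schema in the job, not just the two intended owners:

    ```shell
    # exclude.par -- note: no schema prefix is allowed in the filter.
    cat > exclude.par <<'EOF'
    DIRECTORY=TEST_DIR
    DUMPFILE=SCHEMAS_01.dmp
    CONTENT=DATA_ONLY
    EXCLUDE=TABLE:"IN ('ACTIVITY_ACCUMULATOR')"
    EOF

    impdp bb_bb60@BB9TEST parfile=exclude.par
    ```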

  • Data Pump - expdp / impdp utility question

    Hi All,
    As a basic exercise to learn Data Pump, I am trying to export the schema scott and then load the dump file into a test_t schema in the same database.
    --1. created a directory object as sysdba
    CREATE DIRECTORY dpump_dir1 AS 'C:\output_dir';
    --2. created the directory on the OS as C:\output_dir
    --3. ran expdp
    expdp system/*****@orcl schemas=scott DIRECTORY=dpump_dir1 JOB_NAME=hr DUMPFILE=scott_orcl_nov5.dmp PARALLEL=4
    --4. created the test_t schema and granted DBA to it
    --5. ran impdp
    impdp system/*****@orcl schemas=test_t DIRECTORY=dpump_dir1 JOB_NAME=hr DUMPFILE=scott_orcl_nov5.dmp PARALLEL=8
    It fails here with ORA-39165: schema test_t not found. However, the schema test_t does exist.
    So I am not sure why it gives this error. It seems that the schema scott from the expdp dump file cannot be loaded into any schema other than one named scott... Is that right? If yes, then how can I load all the objects of schema scott into another schema, say test_t? It would be helpful if you could show the respective expdp and impdp commands.
    Thanks a lot
    KS

    The test_t schema does not exist in the export dump file, you should remap from input scott schema to the target test_t schema.
    REMAP_SCHEMA : http://download.oracle.com/docs/cd/E11882_01/server.112/e22490/dp_import.htm#SUTIL927
    Nicolas.
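    A corrected version of step 5 using REMAP_SCHEMA might look like this (same names as in the thread):

    ```shell
    # SCHEMAS names what to read FROM the dump (scott);
    # REMAP_SCHEMA retargets those objects into test_t.
    impdp system/*****@orcl DIRECTORY=dpump_dir1 DUMPFILE=scott_orcl_nov5.dmp \
      SCHEMAS=scott REMAP_SCHEMA=scott:test_t
    ```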

  • Quick question on IMPDP

    Platform: Win2k3; Oracle 10gR2
    Am I imagining things, or did I read somewhere that I can import tables back into a schema but with a different name?
    Example:
    EXPDP - tableA, tableB
    Now, I want to IMPDP tableA to same schema but with different table name, say, tableA1.
    Is that possible? If so, pls let me know how.
    thanks.
    -andrew-

    impdp help=yes
    Import: Release 10.2.0.1.0 - Production on Tuesday, 02 March, 2010 13:13:44
    Copyright (c) 2003, 2005, Oracle.  All rights reserved.
    The Data Pump Import utility provides a mechanism for transferring data objects
    between Oracle databases. The utility is invoked with the following command:
         Example: impdp scott/tiger DIRECTORY=dmpdir DUMPFILE=scott.dmp
    You can control how Import runs by entering the 'impdp' command followed
    by various parameters. To specify parameters, you use keywords:
         Format:  impdp KEYWORD=value or KEYWORD=(value1,value2,...,valueN)
         Example: impdp scott/tiger DIRECTORY=dmpdir DUMPFILE=scott.dmp
    USERID must be the first parameter on the command line.
    Keyword               Description (Default)
    ATTACH                Attach to existing job, e.g. ATTACH [=job name].
    CONTENT               Specifies data to load where the valid keywords are:
                          (ALL), DATA_ONLY, and METADATA_ONLY.
    DIRECTORY             Directory object to be used for dump, log, and sql files.
    DUMPFILE              List of dumpfiles to import from (expdat.dmp),
                          e.g. DUMPFILE=scott1.dmp, scott2.dmp, dmpdir:scott3.dmp.
    ENCRYPTION_PASSWORD   Password key for accessing encrypted column data.
                          This parameter is not valid for network import jobs.
    ESTIMATE              Calculate job estimates where the valid keywords are:
                          (BLOCKS) and STATISTICS.
    EXCLUDE               Exclude specific object types, e.g. EXCLUDE=TABLE:EMP.
    FLASHBACK_SCN         SCN used to set session snapshot back to.
    FLASHBACK_TIME        Time used to get the SCN closest to the specified time.
    FULL                  Import everything from source (Y).
    HELP                  Display help messages (N).
    INCLUDE               Include specific object types, e.g. INCLUDE=TABLE_DATA.
    JOB_NAME              Name of import job to create.
    LOGFILE               Log file name (import.log).
    NETWORK_LINK          Name of remote database link to the source system.
    NOLOGFILE             Do not write logfile.
    PARALLEL              Change the number of active workers for current job.
    PARFILE               Specify parameter file.
    QUERY                 Predicate clause used to import a subset of a table.
    REMAP_DATAFILE        Redefine datafile references in all DDL statements.
    REMAP_SCHEMA          Objects from one schema are loaded into another schema.
    REMAP_TABLESPACE      Tablespace object are remapped to another tablespace.
    REUSE_DATAFILES       Tablespace will be initialized if it already exists (N).
    SCHEMAS               List of schemas to import.
    SKIP_UNUSABLE_INDEXES Skip indexes that were set to the Index Unusable state.
    SQLFILE               Write all the SQL DDL to a specified file.
    STATUS                Frequency (secs) job status is to be monitored where
                          the default (0) will show new status when available.
    STREAMS_CONFIGURATION Enable the loading of Streams metadata
    TABLE_EXISTS_ACTION   Action to take if imported object already exists.
                          Valid keywords: (SKIP), APPEND, REPLACE and TRUNCATE.
    TABLES                Identifies a list of tables to import.
    TABLESPACES           Identifies a list of tablespaces to import.
    TRANSFORM             Metadata transform to apply to applicable objects.
                          Valid transform keywords: SEGMENT_ATTRIBUTES, STORAGE,
                          OID, and PCTSPACE.
    TRANSPORT_DATAFILES   List of datafiles to be imported by transportable mode.
    TRANSPORT_FULL_CHECK  Verify storage segments of all tables (N).
    TRANSPORT_TABLESPACES List of tablespaces from which metadata will be loaded.
                          Only valid in NETWORK_LINK mode import operations.
    VERSION               Version of objects to export where valid keywords are:
                          (COMPATIBLE), LATEST, or any valid database version.
                          Only valid for NETWORK_LINK and SQLFILE.
    The following commands are valid while in interactive mode.
    Note: abbreviations are allowed
    Command               Description (Default)
    CONTINUE_CLIENT       Return to logging mode. Job will be re-started if idle.
    EXIT_CLIENT           Quit client session and leave job running.
    HELP                  Summarize interactive commands.
    KILL_JOB              Detach and delete job.
    PARALLEL              Change the number of active workers for current job.
                          PARALLEL=<number of workers>.
    START_JOB             Start/resume current job.
                          START_JOB=SKIP_CURRENT will start the job after skipping
                          any action which was in progress when job was stopped.
    STATUS                Frequency (secs) job status is to be monitored where
                          the default (0) will show new status when available.
                          STATUS[=interval]
    STOP_JOB              Orderly shutdown of job execution and exits the client.
                          STOP_JOB=IMMEDIATE performs an immediate shutdown of the
                          Data Pump job.

  • ORA-00001: unique constraint @ impdp with table_exists_action=truncate

    Hi everybody
    I can't understand why my Data Pump import with the parameter TABLE_EXISTS_ACTION=TRUNCATE returned ORA-00001 (unique constraint violation) while importing schema tables from a remote database, where the source tables have the same unique constraints as the corresponding ones on the target database.
    My question is: if the table is truncated, why do I get a unique constraint violation when inserting records from a table where the same unique constraint is already validated?
    Here are the used parameter file content and the impdp logfile.
    parfile
    {code}
    DIRECTORY=IMPEXP_LOG_COLL2
    CONTENT=DATA_ONLY
    NETWORK_LINK=PRODUCTION
    PARALLEL=1
    TABLE_EXISTS_ACTION=TRUNCATE
    EXCLUDE=STATISTICS
    {code}
    logfile
    {code}
    Import: Release 10.2.0.1.0 - Production on Giovedì 22 Ottobre, 2009 15:33:44
    Copyright (c) 2003, 2005, Oracle. All rights reserved.
    Connesso a: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    FLASHBACK automatically enabled to preserve database integrity.
    Starting "IMPEXP"."PROVA_REFRESH_DBCOLL": impexp/********@dbcoll SCHEMAS=test_pump LOGFILE=test_pump.log parfile=refresh_dbcoll.par JOB_NAME=prova_refresh_dbcoll
    Estimate in progress using BLOCKS method...
    Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 523 MB
    ORA-31693: Table data object "TEST_PUMP"."X10000000_TRIGGER" failed to load/unload and is being skipped due to error:
    ORA-00001: unique constraint (TEST_PUMP.SYS_C00726627) violated
    ORA-31693: Table data object "TEST_PUMP"."X10000000_BASIC" failed to load/unload and is being skipped due to error:
    ORA-00001: unique constraint (TEST_PUMP.SYS_C00726625) violated
    Job "IMPEXP"."PROVA_REFRESH_DBCOLL" completed with 2 error(s) at 15:34:04
    {code}
    Thank you
    Bye Alessandro

    I forgot to read the last two lines of the documentation about TABLE_EXISTS_ACTION, where it says:
    "TRUNCATE cannot be used on clustered tables or over network links."
    So it seems the clause was ignored because of the NETWORK_LINK usage, and Data Pump silently fell back to an APPEND action instead of throwing an error to highlight the conflicting parameters in the configuration.
    Bye Alessandro
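    One possible workaround, sketched with the table names from the log above: truncate the targets yourself before the network import, so that the effective APPEND no longer collides with existing rows (disable any referencing foreign keys first):

    ```sql
    -- Run on the TARGET database before the network-link import.
    TRUNCATE TABLE test_pump.x10000000_trigger;
    TRUNCATE TABLE test_pump.x10000000_basic;
    ```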

  • EXP/IMP..of table having LOB column to export and import using expdp/impdp

    We have one table with a LOB column; the LOB size is now approximately 550 GB.
    As far as we know, LOB space cannot be reused, so we have already raised an SR on that.
    We have come to the conclusion that we need to take a backup of this table, then truncate it, and then import it back.
    We need help on the points below:
    1) We are taking the backup with expdp using PARALLEL=4. Will this backup complete successfully? Are there any other parameters we should set in expdp while taking the backup?
    2) Once the truncate is done, will the import complete successfully?
    Do we need to increase SGA, PGA, undo tablespace size, or undo retention to complete the import successfully? This is a production-critical database.
    Current SGA: 2 GB
    PGA: 398 MB
    undo retention: 1800
    undo tablespace: 6 GB
    Please suggest how to perform this activity without errors, and which parameters to set during expdp/impdp.
    Thanks in advance.

    Hi,
    From my experience, be prepared for a long outage to do this: expdp is pretty quick at getting LOBs out but very slow at getting them back in again; a lot of the speed optimizations that make Data Pump so excellent for normal objects are not available for LOBs. You really need to test this somewhere first. Can you not expdp from live and load into some test area? You don't need the whole database, just the table/LOB in question. You don't want to find out after you truncate the table that it's going to take 3 days to import back in...
    You might want to consider DBMS_REDEFINITION instead.
    Here you pre-create an interim table (with the same definition as the existing one), load the data into it from the existing table, and then do a dictionary switch to swap them over, giving you minimal downtime. I think this should work fine with LOBs at 10g, but you should do some research to confirm. You'll need a lot of extra tablespace (temporarily) for this approach, though.
    Regards,
    Harry
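    The DBMS_REDEFINITION flow described above might be sketched like this; the owner and table names are placeholders, and error handling is omitted:

    ```sql
    -- 1. Check the table can be redefined (primary-key method).
    EXEC DBMS_REDEFINITION.CAN_REDEF_TABLE('APP', 'BIG_LOB_TAB');

    -- 2. Pre-create an empty interim table with the same shape
    --    (its LOB segment starts out fresh).
    CREATE TABLE app.big_lob_tab_interim AS
      SELECT * FROM app.big_lob_tab WHERE 1 = 0;

    -- 3. Copy the rows into the interim table.
    EXEC DBMS_REDEFINITION.START_REDEF_TABLE('APP', 'BIG_LOB_TAB', 'BIG_LOB_TAB_INTERIM');

    -- 4. Copy dependent objects (indexes, triggers, constraints, grants).
    VARIABLE errs NUMBER
    EXEC DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS('APP', 'BIG_LOB_TAB', -
      'BIG_LOB_TAB_INTERIM', DBMS_REDEFINITION.CONS_ORIG_PARAMS, -
      TRUE, TRUE, TRUE, FALSE, :errs);

    -- 5. Dictionary switch: the near-instant swap of the two tables.
    EXEC DBMS_REDEFINITION.FINISH_REDEF_TABLE('APP', 'BIG_LOB_TAB', 'BIG_LOB_TAB_INTERIM');
    ```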

  • Oracle 11g : Sequence Issues after impdp

    Hi All,
    We are migrating from Oracle 10g to Oracle 11g. As part of this, we took an expdp from 10g and successfully did an import using impdp into the Oracle 11g database.
    But the problem is that a few primary key violations occurred, and all of them relate to sequences.
    The maximum data in the tables and the last number in the sequences differed, which caused the issue.
    Upon investigation and browsing the web (http://www.nerdliness.com/article/2009/03/18/my-oracle-sequencedatapump-shenanigans), I understood it could be because the export was taken while the application was online and writing to the database.
    I reset all the failing sequences manually and it is fine now.
    My questions now are:
    1) Can we ascertain that an export taken when the source database is offline would eliminate the sequence issue?
    2) As this is being done in production, I would like to make a few checks to ensure that the sequences are properly imported.
    Again, reading a few websites and Oracle forums, I found the SQL below:
    select table_name, column_name, utl_raw.cast_to_number(high_value) as highval
    from dba_Tab_columns
    where owner = 'PRODUCTION_OWNER'
    AND DATA_TYPE= 'NUMBER'
    AND (OWNER, TABLE_NAME, COLUMN_NAME) IN
    (SELECT CC.OWNER, CC.TABLE_NAME, CC.COLUMN_NAME
    FROM DBA_CONS_COLUMNS CC
    JOIN DBA_CONSTRAINTS C
    ON CC.OWNER=C.OWNER
    AND CC.CONSTRAINT_NAME=C.CONSTRAINT_NAME
    WHERE C.CONSTRAINT_TYPE ='P')
    ORDER BY 3;
    SELECT SEQUENCE_NAME, MIN_VALUE, MAX_VALUE, LAST_NUMBER
    FROM DBA_SEQUENCES
    WHERE SEQUENCE_OWNER = 'PRODUCTION_OWNER'
    ORDER BY LAST_NUMBER
    If I relate the last_number with the highval and if they are same, does that mean the sequences are imported properly.
    Note: We have sequence caching done and we are on RAC.

    We are migrating from Oracle 10g to Oracle 11g. As part of this, we take a expdp from 10g and successfully did an import using impdp on to Oracle 11g database.
    But the problem is, there were few Primary key violations which occurred and all of them relate to sequences.
    The Maximum data in the tables and the last number in the sequences differed which caused the issue.
    it could be because of the export taken while the application is online and writing to the database.
    I don't know your database's size or what downtime is acceptable to you, but I could never propose such a migration technique to any of my customers, due to excessive downtime when databases are quite large.
    It's quite obvious that you get such errors when the export is done while people are working, unless you use the FLASHBACK_SCN or FLASHBACK_TIME parameter while exporting. But in that case, assuming you don't get errors (e.g. ORA-01555), you'll probably lose a lot of transactions.
    Assuming your 10g database is running in archivelog mode (which should be the default in production DBs), did you think of using RMAN to do the migration?
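    A consistent export, as suggested above, is usually done with the FLASHBACK_TIME parameter; on 10g the value typically has to be passed as a quoted TO_TIMESTAMP expression, most easily via a parameter file (directory and file names are placeholders):

    ```shell
    cat > flashback.par <<'EOF'
    DIRECTORY=DP_DIR
    DUMPFILE=prod_owner.dmp
    SCHEMAS=PRODUCTION_OWNER
    FLASHBACK_TIME="TO_TIMESTAMP(TO_CHAR(SYSDATE,'YYYY-MM-DD HH24:MI:SS'),'YYYY-MM-DD HH24:MI:SS')"
    EOF

    expdp system/****** parfile=flashback.par
    ```

    With the whole export pinned to one point in time, table data and sequence definitions are mutually consistent, which addresses question 1 above.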

  • Error : Temporary Tablespace is Empty  when doing expdp/impdp

    Hi all,
    I was doing expdp on my Oracle 10.1.0.2.0 DB on Win XP P. Although the user has a default temporary tablespace with a temp file and autoextend enabled, I got the message:
    ORA-25153: Temporary Tablespace is Empty
    Then I created a new temporary tablespace for the user with a 500M tempfile and autoextend enabled, and the expdp went through.
    Now I am doing the impdp for the same .dmp file to generate one sqlfile for the DB,
    and again I am facing the same error message:
    ORA-25153: Temporary Tablespace is Empty
    ----- PL/SQL Call Stack -----
    object line object
    handle number name
    17FE07EC 13460 package body SYS.KUPW$WORKER
    17FE07EC 5810 package body SYS.KUPW$WORKER
    17FE07EC 3080 package body SYS.KUPW$WORKER
    17FE07EC 3530 package body SYS.KUPW$WORKER
    17FE07EC 6395 package body SYS.KUPW$WORKER
    17FE07EC 1208 package body SYS.KUPW$WORKER
    17ABE058 2 anonymous block
    Job "CHECKUP"."SYS_SQL_FILE_FULL_02" stopped due to fatal error at 10:09
    The message indicates:
    ORA-25153: Temporary Tablespace is Empty
    Cause: An attempt was made to use space in a temporary tablespace with no files.
    Action: Add files to the tablespace using the ADD TEMPFILE command.
    So my question is: every time I do an imp/exp, do I have to add a temp file to my temporary tablespace? Will it not be cleared on completion of the job?
    Any advice please.

    Hi Sabdar,
    The result of the query is as...
    SQL> SELECT * FROM DATABASE_PROPERTIES where
    2 PROPERTY_NAME='DEFAULT_TEMP_TABLESPACE';
    PROPERTY_NAME
    PROPERTY_VALUE
    DESCRIPTION
    DEFAULT_TEMP_TABLESPACE
    TEMP
    Name of default temporary tablespace
    So the default temporary tablespace is TEMP, which does not have any tempfile because I cloned this DB from the primary DB. But the user I am using for the impdp is 'checkup', and the temporary tablespace for 'checkup' is 'checkup_temp1', which does have a tempfile.
    So why is the impdp job going to the database's default temporary tablespace instead of the user's temporary tablespace?
    Is there any way to check whether 'checkup_temp1' is the default temporary tablespace for 'checkup' or not?
    Can I create the user specifying a default temporary tablespace? Because it gives me an error:
    SQL> create user suman identified by suman
    2 default tablespace checkup_dflt
    3 default TEMPORARY TABLESPACE checkup_temp1;
    default TEMPORARY TABLESPACE checkup_temp1
    ERROR at line 3:
    ORA-00921: unexpected end of SQL command
    Then I did ...
    SQL> create user suman identified by suman
    2 default tablespace checkup_dflt
    3 TEMPORARY TABLESPACE checkup_temp1;
    User created.
    Regards
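    Since ORA-25153 here means the database-wide default temporary tablespace TEMP has no tempfile (Data Pump jobs run server-side through SYS-owned packages such as KUPW$WORKER, visible in the stack above, so they can end up on the database default rather than the invoking user's temporary tablespace), the direct fix is to give TEMP a file again; the path below is illustrative:

    ```sql
    -- Add a tempfile to the database default temporary tablespace.
    ALTER TABLESPACE temp
      ADD TEMPFILE 'C:\oradata\db1\temp01.dbf' SIZE 500M
      AUTOEXTEND ON NEXT 50M MAXSIZE UNLIMITED;
    ```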
