Fromuser/touser equivalent in expdp/impdp?

Hi,
I got a dump file from the following command:
expdp system/123456@SCHDB dumpfile=studentinfo.dmp logfile=studentinfo.log tables=school.studentmaster,school.studentmarks,school.studentleave directory=mydir
Now I want to import these tables into a different schema (old_school) using impdp.
What is the fromuser/touser equivalent in impdp/expdp?

The fromuser/touser equivalent in Data Pump is REMAP_SCHEMA. From the impdp help:
REMAP_DATA
Specify a data conversion function.
For example, REMAP_DATA=EMP.EMPNO:REMAPPKG.EMPNO.
REMAP_DATAFILE
Redefine data file references in all DDL statements.
REMAP_SCHEMA
Objects from one schema are loaded into another schema.
REMAP_TABLE
Table names are remapped to another table.
For example, REMAP_TABLE=HR.EMPLOYEES:EMPS.
REMAP_TABLESPACE
Tablespace objects are remapped to another tablespace.
REUSE_DATAFILES
Tablespace will be initialized if it already exists [N].
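A minimal sketch for the case in the question, reusing the directory, dump file and table names from the original expdp (the log file name here is made up):

impdp system/123456@SCHDB dumpfile=studentinfo.dmp logfile=studentinfo_imp.log directory=mydir tables=school.studentmaster,school.studentmarks,school.studentleave remap_schema=school:old_school

On import, objects exported from SCHOOL are created in OLD_SCHOOL; REMAP_SCHEMA replaces the old fromuser/touser pair.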

Similar Messages

  • EXP/IMP of a table having a LOB column using expdp/impdp

    We have one table with a LOB column; the LOB size is approx 550GB.
    As per our knowledge LOB space cannot be reused, so we have already raised an SR on that.
    We have come to the conclusion that we need to take a backup of this table, then truncate it, and then start the import.
    We need help on the below points.
    1) We are taking the backup with expdp using parallel=4. Will this backup complete successfully? Do we need any other parameters in expdp while taking the backup?
    2) Once the truncate is done, will the import complete successfully? Do we need to increase the SGA, PGA, undo tablespace size or undo retention for the import to complete successfully? This is a production-critical database.
    Current SGA 2GB
    PGA 398MB
    undo retention 1800
    undo tbs 6GB
    Please can anyone give suggestions on how to perform this activity without error, and suggest the parameters to use during expdp/impdp.
    Thanks in advance.

    Hi,
    From my experience, be prepared for a long outage to do this - expdp is pretty quick at getting LOBs out but very slow at getting them back in again; a lot of the speed optimizations that make datapump so excellent for normal objects are not available for LOBs. You really need to test this somewhere first - can you not expdp from live and load into some test area? You don't need the whole database, just the table/LOB in question. You don't want to find out after you truncate the table that it's going to take 3 days to import back in....
    You might want to consider DBMS_REDEFINITION instead?
    Here you precreate a temporary table (with the same definition as the existing one), load the data into it from the existing table and then do a dictionary switch to swap them over - giving you minimal downtime. I think this should work fine with LOBs at 10g, but you should do some research to confirm. You'll need a lot of extra tablespace (temporarily) for this approach though.
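    A minimal sketch of the online redefinition flow (schema, table and interim table names here are placeholders, not from the thread):
    -- check the table can be redefined via its primary key
    EXEC DBMS_REDEFINITION.CAN_REDEF_TABLE('SCHOOL', 'BIG_LOB_TAB', DBMS_REDEFINITION.CONS_USE_PK);
    -- precreate an empty interim table with the desired storage
    CREATE TABLE school.big_lob_tab_interim AS SELECT * FROM school.big_lob_tab WHERE 1 = 0;
    DECLARE
      num_errors PLS_INTEGER;
    BEGIN
      DBMS_REDEFINITION.START_REDEF_TABLE('SCHOOL', 'BIG_LOB_TAB', 'BIG_LOB_TAB_INTERIM');
      -- copy indexes, constraints, triggers and grants onto the interim table
      DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS('SCHOOL', 'BIG_LOB_TAB', 'BIG_LOB_TAB_INTERIM',
                                              num_errors => num_errors);
      -- dictionary switch: the interim table becomes the real one
      DBMS_REDEFINITION.FINISH_REDEF_TABLE('SCHOOL', 'BIG_LOB_TAB', 'BIG_LOB_TAB_INTERIM');
    END;
    /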
    Regards,
    Harry

  • Log file's format in expdp/impdp

    Hi all,
    I need to set the log file format for the expdp/impdp utility. I have this format for my dump file - dumpfile=<name>%U.dmp - which generates unique names for dump files. How can I generate unique names for log files? It would be better if the dump file name and log file name were the same.
    Regards,
    rustam_tj

    Hi Srini, thanks for the advice.
    I read the doc which you suggested. The only thing I found there is:
    "Log files and SQL files overwrite previously existing files."
    So I can't keep previous log files?
    My OS is HP-UX (11.3) and the database version is 10.2.0.4.
    Regards,
    rustam
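    The %U substitution only applies to dump files, not log files, so one common workaround (an assumption on my part, not something from this thread; the schema and directory names are made up) is to timestamp the log file name from the shell, e.g.:
    expdp system schemas=hr directory=DATA_PUMP_DIR dumpfile=hr_%U.dmp logfile=hr_$(date +%Y%m%d_%H%M%S).log
    That keeps every previous log and makes it easy to pair a log with its dump file set.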

  • Expdp/impdp :: Constraints in parent-child relationship

    Hi ,
    I have one table, parent1, and tables child1, child2 and child3 have foreign keys created on parent1.
    Now I want to do some deletion on parent1. But since the number of records on parent1 is very high, we are going with expdp/impdp with the query option.
    I have taken a query-level expdp of parent1. Then I dropped parent1 with the cascade constraints option, and all the foreign keys created by child1, 2 and 3 which reference parent1 were automatically dropped.
    Now, if I run impdp with the query-level dump file, will these foreign key constraints get created automatically on child1, 2 and 3, or do I need to manually re-create them?
    Regards,
    Anu

    Hi,
    The FKs will not be in the dumpfile - see the example below where I generate a sqlfile following pretty much the process you would have done. This is because the FK is part of the DDL for the child table, not the parent.
    Connected to:
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    With the Partitioning option
    OPS$ORACLE@EMZA3>create table a (col1 number);
    Table created.
    OPS$ORACLE@EMZA3>alter table a add primary key (col1);
    Table altered.
    OPS$ORACLE@EMZA3>create table b (col1 number);
    Table created.
    OPS$ORACLE@EMZA3>alter table b add constraint x foreign key (col1) references a(col1);
    Table altered.
    OPS$ORACLE@EMZA3>
    EMZA3:[/oracle/11.2.0.1.2.DB/bin]# expdp / include=TABLE:\"=\'A\'\"
    Export: Release 11.2.0.3.0 - Production on Fri May 17 15:45:50 2013
    Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    With the Partitioning option
    Starting "OPS$ORACLE"."SYS_EXPORT_SCHEMA_04":  /******** include=TABLE:"='A'"
    Estimate in progress using BLOCKS method...
    Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 0 KB
    Processing object type SCHEMA_EXPORT/TABLE/TABLE
    Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
    Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    . . exported "OPS$ORACLE"."A"                                0 KB       0 rows
    Master table "OPS$ORACLE"."SYS_EXPORT_SCHEMA_04" successfully loaded/unloaded
    Dump file set for OPS$ORACLE.SYS_EXPORT_SCHEMA_04 is:
      /oracle/11.2.0.3.0.DB/rdbms/log/expdat.dmp
    Job "OPS$ORACLE"."SYS_EXPORT_SCHEMA_04" successfully completed at 15:45:58
    Import: Release 11.2.0.3.0 - Production on Fri May 17 15:46:16 2013
    Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    With the Partitioning option
    Master table "OPS$ORACLE"."SYS_SQL_FILE_FULL_01" successfully loaded/unloaded
    Starting "OPS$ORACLE"."SYS_SQL_FILE_FULL_01":  /******** sqlfile=a.sql
    Processing object type SCHEMA_EXPORT/TABLE/TABLE
    Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
    Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    Job "OPS$ORACLE"."SYS_SQL_FILE_FULL_01" successfully completed at 15:46:17
    -- CONNECT OPS$ORACLE
    ALTER SESSION SET EVENTS '10150 TRACE NAME CONTEXT FOREVER, LEVEL 1';
    ALTER SESSION SET EVENTS '10904 TRACE NAME CONTEXT FOREVER, LEVEL 1';
    ALTER SESSION SET EVENTS '25475 TRACE NAME CONTEXT FOREVER, LEVEL 1';
    ALTER SESSION SET EVENTS '10407 TRACE NAME CONTEXT FOREVER, LEVEL 1';
    ALTER SESSION SET EVENTS '10851 TRACE NAME CONTEXT FOREVER, LEVEL 1';
    ALTER SESSION SET EVENTS '22830 TRACE NAME CONTEXT FOREVER, LEVEL 192 ';
    -- new object type path: SCHEMA_EXPORT/TABLE/TABLE
    CREATE TABLE "OPS$ORACLE"."A"
       (    "COL1" NUMBER
       ) SEGMENT CREATION IMMEDIATE
      PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
    NOCOMPRESS LOGGING
      STORAGE(INITIAL 16384 NEXT 16384 MINEXTENTS 1 MAXEXTENTS 505
      PCTINCREASE 50 FREELISTS 1 FREELIST GROUPS 1
      BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
      TABLESPACE "SYSTEM" ;
    -- new object type path: SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
    ALTER TABLE "OPS$ORACLE"."A" ADD PRIMARY KEY ("COL1")
      USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255
      STORAGE(INITIAL 16384 NEXT 16384 MINEXTENTS 1 MAXEXTENTS 505
      PCTINCREASE 50 FREELISTS 1 FREELIST GROUPS 1
      BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
      TABLESPACE "SYSTEM"  ENABLE;
    -- new object type path: SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    DECLARE I_N VARCHAR2(60);
      I_O VARCHAR2(60);
      NV VARCHAR2(1);
      c DBMS_METADATA.T_VAR_COLL;
      df varchar2(21) := 'YYYY-MM-DD:HH24:MI:SS';
    stmt varchar2(300) := ' INSERT INTO "SYS"."IMPDP_STATS" (type,version,flags,c1,c2,c3,c5,n1,n2,n3,n4,n5,n6,n7,n8,n9,n10,n11,n12,d1,cl1) VALUES (''I'',6,:1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11,:12,:13,NULL,:14,:15,NULL,:16,:17)';
    BEGIN
      DELETE FROM "SYS"."IMPDP_STATS";
      c(1) := 'COL1';
      DBMS_METADATA.GET_STAT_INDNAME('OPS$ORACLE','A',c,1,i_o,i_n);
      EXECUTE IMMEDIATE stmt USING 0,I_N,NV,NV,I_O,0,0,0,0,0,0,0,0,NV,NV,TO_DATE('2013-05-17 15:43:24',df),NV;
      DBMS_STATS.IMPORT_INDEX_STATS('"' || i_o || '"','"' || i_n || '"',NULL,'"IMPDP_STATS"',NULL,'"SYS"');
      DELETE FROM "SYS"."IMPDP_STATS";
    END;
    /
    Regards,
    Harry
    http://dbaharrison.blogspot.com/
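    If you do need the FK DDL back, one option (my suggestion, not from the thread - the child table and schema names are placeholders) is to capture it from the child tables before dropping the parent:
    SELECT DBMS_METADATA.GET_DEPENDENT_DDL('REF_CONSTRAINT', 'CHILD1', 'ANU') FROM dual;
    Run that for each child table and keep the output; after the impdp you can re-run the generated ALTER TABLE ... ADD CONSTRAINT statements.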

  • System generated Index names different on target database after expdp/impdp

    After performing an expdp/impdp to move data from one database (A) to another (B), the system-generated indexes have different names on the target database, which caused a major issue with GoldenGate. Could anyone provide any tricks on how to perform the expdp/impdp so that the same system-generated index names appear on both source and target?
    Thanks in advance.
    JL

    While I do not agree with Sb's choice of wording, his solution is correct. I suggest you drop and recreate the objects using explicit naming; then you will get the same names on the target database after import for constraints, indexes, and FKs.
    A detailed description of the problem this caused with GoldenGate would be interesting. The full Oracle and GoldenGate versions in use might also be important to the solution, if one exists other than explicit naming.
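    For example, a minimal sketch of explicit naming (table, column and constraint names are placeholders):
    ALTER TABLE t ADD CONSTRAINT t_pk PRIMARY KEY (id)
      USING INDEX (CREATE UNIQUE INDEX t_pk ON t (id));
    With the constraint and its index named explicitly, export/import creates them with the same names on the target.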
    HTH -- Mark D Powell --

  • Expdp+Impdp: Does the user have to have DBA privilege?

    Is a "normal" user (=Without DBA privilege) allowed to export and import (with new expdp/impdp) his own schema?
    Is a "normal" user (=Without DBA privilege) allowed to export and import (with new expdp/impdp) other (=not his own) schemas ?
    If he is not allowed: Which GRANT is necessary to be able to perform such expdp/impdp operations?
    Peter

    Hello,
    Is a "normal" user (=Without DBA privilege) allowed to export and import (with new expdp/impdp) his own schema?Yes, a User can always export its own objects.
    Is a "normal" user (=Without DBA privilege) allowed to export and import (with new expdp/impdp) other (=not his own) schemas ?Yes, if this User has EXP_FULL_DATABASE and IMP_FUL_DATABASE Roles.
    So, you can create a User and GRANT it EXP_FULL_DATABASE and IMP_FULL_DATABASE Roles and, being connected
    to this User, you could export/import any Object from / to any Schemas.
    On databases, on which there're a lot of export/import operations, I always create a special User with these Roles.
    NB: In DataPump you should GRANT also READ, WRITE Privileges on the DIRECTORY (if you use "dump") to the User.
    Else, be accurate on the choice of your words, as previously posted, DBA is a Role not a Privilege which has another meaning.
    Hope this help.
    Best regards,
    Jean-Valentin
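    A minimal sketch of setting up such a user (user name, password and directory name are placeholders):
    CREATE USER dp_admin IDENTIFIED BY some_password;
    GRANT CREATE SESSION, EXP_FULL_DATABASE, IMP_FULL_DATABASE TO dp_admin;
    GRANT READ, WRITE ON DIRECTORY data_pump_dir TO dp_admin;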

  • XE11: expdp/impdp

    Hello,
    i would like to use expdp and impdp.
    After installing XE11 on Linux, I unlocked the HR account:
    ALTER USER hr ACCOUNT UNLOCK IDENTIFIED BY hr;
    and ran expdp:
    expdp hr/hr DUMPFILE=hrdump.dmp DIRECTORY=DATA_PUMP_DIR SCHEMAS=HR LOGFILE=hrdump.log
    This quits with:
    ORA-39006: internal error
    ORA-39213: Metadata processing is not available
    The alert_XE.log reported:
    ORA-12012: error on auto execute of job "SYS"."BSLN_MAINTAIN_STATS_JOB"
    ORA-06550: line 1, column 807:
    PLS-00201: identifier 'DBSNMP.BSLN_INTERNAL' must be declared
    I read some entries here and did:
    sqlplus sys/******* as sysdba @?/rdbms/admin/catnsnmp.sql
    sqlplus sys/******* as sysdba @?/rdbms/admin/catsnmp.sql
    sqlplus sys/******* as sysdba @?/rdbms/admin/catdpb.sql
    sqlplus sys/******* as sysdba @?/rdbms/admin/utlrp.sql
    I restarted the database, but the result of expdp was the same:
    ORA-39006: internal error
    ORA-39213: Metadata processing is not available
    What's wrong here? What can I do?
    Do I need "BSLN_MAINTAIN_STATS_JOB", or can this be set to FALSE?
    I created the database today on 24.07, and the next run for "BSLN_MAINTAIN_STATS_JOB" is on 29.07?
    In the Windows version it works correctly, but not in the Linux version.
    Best regards

    Hello gentlemen,
    back to the origin:
    'Is expdp/impdp working on XE11?'
    The answer is simply yes.
    After a few days I found out that:
    - no stylesheet installation is required for this operation
    - a simple installation is enough
    And I did:
    SHELL:
    mkdir /u01/app > /dev/null 2>&1
    mkdir /u01/app/oracle > /dev/null 2>&1
    groupadd dba
    useradd -g dba -d /u01/app/oracle oracle > /dev/null 2>&1
    chown -R oracle:dba /u01/app/oracle
    rpm -ivh oracle-xe-11.2.0-1.0.x86_64.rpm
    /etc/init.d/./oracle-xe configure responseFile=xe.rsp
    ./sqlplus sys/********* as sysdba @/u01/app/oracle/product/11.2.0/xe/rdbms/admin/utlfile.sql
    SQLPLUS:
    ALTER USER hr IDENTIFIED BY hr ACCOUNT UNLOCK;
    GRANT CONNECT, RESOURCE to hr;
    GRANT read, write on DIRECTORY DATA_PUMP_DIR TO hr;
    SHELL:
    expdp hr/hr dumpfile=hr.dmp directory=DATA_PUMP_DIR schemas=hr logfile=hr_exp.log
    impdp hr/hr dumpfile=hr.dmp directory=DATA_PUMP_DIR schemas=hr logfile=hr_imp.log
    This was carried out on:
    OEL5.8, OEL6.3, openSUSE 11.4
    For explanation:
    We did the stylesheet installation for XE10 to have the expdp/impdp functionality.
    Thanks for your assistance
    Best regards
    Achim

  • [ETL] TTS vs expdp/impdp vs ctas (dblink)

    Hi, all.
    The database is Oracle 10gR2 on a Unix machine.
    Assuming that the DB size is about 1 terabyte (table: 500 GB, index: 500 GB),
    how much faster is TTS (transportable tablespace) than expdp/impdp, and than CTAS over a dblink?
    As you know, the speed of ETL depends on the hardware capacity (IO capacity, network bandwidth, number of CPUs).
    I would just like to hear general guidance from your experience.
    Thanks in advance.
    Best Regards.

    http://docs.oracle.com/cd/B19306_01/server.102/b14231/tspaces.htm#ADMIN01101
    Moving data using transportable tablespaces is much faster than performing either an export/import or unload/load of the same data. This is because the datafiles containing all of the actual data are just copied to the destination location, and you use an export/import utility to transfer only the metadata of the tablespace objects to the new database.
    If you really want to know "how much faster" you're going to have to benchmark. Lots of variables come in to play so best to determine this in your actual environment.
    Cheers,
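    As a rough sketch of the TTS flow (tablespace and file names are placeholders; the tablespace set must be self-contained and stays read-only during the copy):
    SQL> ALTER TABLESPACE work_data READ ONLY;
    $ expdp system directory=DATA_PUMP_DIR dumpfile=tts.dmp transport_tablespaces=work_data transport_full_check=y
    (copy the datafiles to the target host, then on the target)
    $ impdp system directory=DATA_PUMP_DIR dumpfile=tts.dmp transport_datafiles='/u01/oradata/work_data01.dbf'
    SQL> ALTER TABLESPACE work_data READ WRITE;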

  • Expdp/impdp Win 2003 Oracle 11g

    Hi Guys,
    I did the export from production and I want to restore it on dev. What is the syntax for impdp?
    expdp userid=system/system dumpfile=livelink.dmp schemas=livelink logfile=livelink.log
    impdp userid=system/system dumpfile=e:/oradata/livelink.dmp ........ Is there a fromuser=livelink touser=livelink?
    Thanx.

    Is there fromuser=livelink touser=livelink? No.
    check out SCHEMAS
    bcm@bcm-laptop:~$ impdp help=yes
    Import: Release 11.2.0.1.0 - Production on Thu Oct 7 08:44:28 2010
    Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
    The Data Pump Import utility provides a mechanism for transferring data objects
    between Oracle databases. The utility is invoked with the following command:
         Example: impdp scott/tiger DIRECTORY=dmpdir DUMPFILE=scott.dmp
    You can control how Import runs by entering the 'impdp' command followed
    by various parameters. To specify parameters, you use keywords:
         Format:  impdp KEYWORD=value or KEYWORD=(value1,value2,...,valueN)
         Example: impdp scott/tiger DIRECTORY=dmpdir DUMPFILE=scott.dmp
    USERID must be the first parameter on the command line.
    The available keywords and their descriptions follow. Default values are listed within square brackets.
    ATTACH
    Attach to an existing job.
    For example, ATTACH=job_name.
    CONTENT
    Specifies data to load.
    Valid keywords are: [ALL], DATA_ONLY and METADATA_ONLY.
    DATA_OPTIONS
    Data layer option flags.
    Valid keywords are: SKIP_CONSTRAINT_ERRORS.
    DIRECTORY
    Directory object to be used for dump, log and sql files.
    DUMPFILE
    List of dumpfiles to import from [expdat.dmp].
    For example, DUMPFILE=scott1.dmp, scott2.dmp, dmpdir:scott3.dmp.
    ENCRYPTION_PASSWORD
    Password key for accessing encrypted data within a dump file.
    Not valid for network import jobs.
    ESTIMATE
    Calculate job estimates.
    Valid keywords are: [BLOCKS] and STATISTICS.
    EXCLUDE
    Exclude specific object types.
    For example, EXCLUDE=SCHEMA:"='HR'".
    FLASHBACK_SCN
    SCN used to reset session snapshot.
    FLASHBACK_TIME
    Time used to find the closest corresponding SCN value.
    FULL
    Import everything from source [Y].
    HELP
    Display help messages [N].
    INCLUDE
    Include specific object types.
    For example, INCLUDE=TABLE_DATA.
    JOB_NAME
    Name of import job to create.
    LOGFILE
    Log file name [import.log].
    NETWORK_LINK
    Name of remote database link to the source system.
    NOLOGFILE
    Do not write log file [N].
    PARALLEL
    Change the number of active workers for current job.
    PARFILE
    Specify parameter file.
    PARTITION_OPTIONS
    Specify how partitions should be transformed.
    Valid keywords are: DEPARTITION, MERGE and [NONE].
    QUERY
    Predicate clause used to import a subset of a table.
    For example, QUERY=employees:"WHERE department_id > 10".
    REMAP_DATA
    Specify a data conversion function.
    For example, REMAP_DATA=EMP.EMPNO:REMAPPKG.EMPNO.
    REMAP_DATAFILE
    Redefine datafile references in all DDL statements.
    REMAP_SCHEMA
    Objects from one schema are loaded into another schema.
    REMAP_TABLE
    Table names are remapped to another table.
    For example, REMAP_TABLE=HR.EMPLOYEES:EMPS.
    REMAP_TABLESPACE
    Tablespace objects are remapped to another tablespace.
    REUSE_DATAFILES
    Tablespace will be initialized if it already exists [N].
    SCHEMAS
    List of schemas to import.
    SKIP_UNUSABLE_INDEXES
    Skip indexes that were set to the Index Unusable state.
    SOURCE_EDITION
    Edition to be used for extracting metadata.
    SQLFILE
    Write all the SQL DDL to a specified file.
    STATUS
    Frequency (secs) job status is to be monitored where
    the default [0] will show new status when available.
    STREAMS_CONFIGURATION
    Enable the loading of Streams metadata.
    TABLE_EXISTS_ACTION
    Action to take if imported object already exists.
    Valid keywords are: APPEND, REPLACE, [SKIP] and TRUNCATE.
    TABLES
    Identifies a list of tables to import.
    For example, TABLES=HR.EMPLOYEES,SH.SALES:SALES_1995.
    TABLESPACES
    Identifies a list of tablespaces to import.
    TARGET_EDITION
    Edition to be used for loading metadata.
    TRANSFORM
    Metadata transform to apply to applicable objects.
    Valid keywords are: OID, PCTSPACE, SEGMENT_ATTRIBUTES and STORAGE.
    TRANSPORTABLE
    Options for choosing transportable data movement.
    Valid keywords are: ALWAYS and [NEVER].
    Only valid in NETWORK_LINK mode import operations.
    TRANSPORT_DATAFILES
    List of datafiles to be imported by transportable mode.
    TRANSPORT_FULL_CHECK
    Verify storage segments of all tables [N].
    TRANSPORT_TABLESPACES
    List of tablespaces from which metadata will be loaded.
    Only valid in NETWORK_LINK mode import operations.
    VERSION
    Version of objects to import.
    Valid keywords are: [COMPATIBLE], LATEST or any valid database version.
    Only valid for NETWORK_LINK and SQLFILE.
    The following commands are valid while in interactive mode.
    Note: abbreviations are allowed.
    CONTINUE_CLIENT
    Return to logging mode. Job will be restarted if idle.
    EXIT_CLIENT
    Quit client session and leave job running.
    HELP
    Summarize interactive commands.
    KILL_JOB
    Detach and delete job.
    PARALLEL
    Change the number of active workers for current job.
    START_JOB
    Start or resume current job.
    Valid keywords are: SKIP_CURRENT.
    STATUS
    Frequency (secs) job status is to be monitored where
    the default [0] will show new status when available.
    STOP_JOB
    Orderly shutdown of job execution and exits the client.
    Valid keywords are: IMMEDIATE.
    bcm@bcm-laptop:~$
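    For the original question, a minimal sketch (assuming a directory object, here called DATA_PUMP_DIR, that points at e:\oradata):
    impdp system/system directory=DATA_PUMP_DIR dumpfile=livelink.dmp logfile=livelink_imp.log schemas=livelink
    Note that DUMPFILE takes a file name relative to a directory object, not an OS path; REMAP_SCHEMA is only needed if the target schema name differs from the source.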

  • Error: Temporary Tablespace is Empty when doing expdp/impdp

    Hi all,
    I was doing expdp on my Oracle 10.1.0.2.0 DB on Win XP P; although the user has a default temporary tablespace with a temp file on autoextend enabled, I got the message...
    ORA-25153: Temporary Tablespace is Empty
    Then I created a new temporary tablespace for the user with a 500M tempfile and autoextend enabled, and the expdp went through.
    Now I am doing the impdp for the same .dmp file to generate one sqlfile for the DB,
    and again I am facing the same error message...
    ORA-25153: Temporary Tablespace is Empty
    ----- PL/SQL Call Stack -----
    object line object
    handle number name
    17FE07EC 13460 package body SYS.KUPW$WORKER
    17FE07EC 5810 package body SYS.KUPW$WORKER
    17FE07EC 3080 package body SYS.KUPW$WORKER
    17FE07EC 3530 package body SYS.KUPW$WORKER
    17FE07EC 6395 package body SYS.KUPW$WORKER
    17FE07EC 1208 package body SYS.KUPW$WORKER
    17ABE058 2 anonymous block
    Job "CHECKUP"."SYS_SQL_FILE_FULL_02" stopped due to fatal error at 10:09
    The message indicates that...
    ORA-25153: Temporary Tablespace is Empty
    Cause: An attempt was made to use space in a temporary tablespace with no files.
    Action: Add files to the tablespace using ADD TEMPFILE command.
    So my question is: every time I do any import/export, do I have to add a temp file to my temporary tablespace? Will it not be cleared on completion of the job?
    Any advice please.

    Hi Sabdar,
    The result of the query is as follows:
    SQL> SELECT * FROM database_properties
      2  WHERE property_name = 'DEFAULT_TEMP_TABLESPACE';
    PROPERTY_NAME             PROPERTY_VALUE   DESCRIPTION
    DEFAULT_TEMP_TABLESPACE   TEMP             Name of default temporary tablespace
    So the default temporary tablespace is TEMP, which does not have any tempfile because I cloned this DB from the primary DB; but the user I am using for the impdp is 'checkup', and the temporary tablespace for 'checkup' is 'checkup_temp1', which has a tempfile.
    So why does the impdp job use the server's default temporary tablespace instead of the user's temporary tablespace?
    Is there any way to check whether 'checkup_temp1' is the default temporary tablespace for 'checkup' or not?
    Can I create the user specifying a default temporary tablespace? It gives me an error:
    SQL> create user suman identified by suman
    2 default tablespace checkup_dflt
    3 default TEMPORARY TABLESPACE checkup_temp1;
    default TEMPORARY TABLESPACE checkup_temp1
    ERROR at line 3:
    ORA-00921: unexpected end of SQL command
    Then I did ...
    SQL> create user suman identified by suman
    2 default tablespace checkup_dflt
    3 TEMPORARY TABLESPACE checkup_temp1;
    User created.
    Regards
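    A follow-up sketch (my addition, not from the thread; the tempfile path and size are placeholders): in CREATE USER the DEFAULT keyword applies only to the permanent tablespace clause - the temporary clause is just TEMPORARY TABLESPACE, hence the ORA-00921. To check and fix the temp setup:
    SELECT temporary_tablespace FROM dba_users WHERE username = 'CHECKUP';
    ALTER TABLESPACE temp ADD TEMPFILE 'C:\ORADATA\TEMP01.DBF' SIZE 500M AUTOEXTEND ON;
    ALTER DATABASE DEFAULT TEMPORARY TABLESPACE checkup_temp1;
    The ALTER TABLESPACE gives the empty default TEMP a tempfile; the ALTER DATABASE instead repoints the database default at a temporary tablespace that already has one.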

  • Expdp/impdp fails from 10g to 11g db version

    Hello folks,
    Export DB Version : 10.2.0.4
    Import DB Version : 11.2.0.1
    Export Log File
    Export: Release 10.2.0.4.0 - Production on Wednesday, 03 November, 2010 2:19:20
    Copyright (c) 2003, 2007, Oracle. All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
    With the Partitioning, Data Mining and Real Application Testing options
    Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 45 GB
    . . exported "DYM"."CYCLE_COUNT_MASTER" 39.14 GB 309618922 rows
    Master table "DYM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
    Dump file set for DYM.SYS_EXPORT_SCHEMA_01 is:
    Job "DYM"."SYS_EXPORT_SCHEMA_01" successfully completed at 02:56:49
    Import Log File
    Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Master table "SYSTEM"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
    Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
    ORA-31693: Table data object "DYM_PRJ4"."CYCLE_COUNT_MASTER" failed to load/unload and is being skipped due to error:
    ORA-29913: error in executing ODCIEXTTABLEOPEN callout
    Job "SYSTEM"."SYS_IMPORT_FULL_01" completed with 1 error(s) at 10:54:38
    Is importing a 10g expdp dump with an 11g impdp not allowed? Any thoughts appreciated.

    Nope, I do not see any error file.
    Current log# 2 seq# 908 mem# 0:
    Thu Nov 04 11:58:20 2010
    DM00 started with pid=530, OS id=1659, job SYSTEM.SYS_IMPORT_FULL_02
    Thu Nov 04 11:58:20 2010
    DW00 started with pid=531, OS id=1661, wid=1, job SYSTEM.SYS_IMPORT_FULL_02
    Thu Nov 04 11:58:55 2010
    DM00 started with pid=513, OS id=1700, job SYSTEM.SYS_IMPORT_FULL_02
    Thu Nov 04 11:58:55 2010
    DW00 started with pid=520, OS id=1713, wid=1, job SYSTEM.SYS_IMPORT_FULL_02
    Thu Nov 04 12:00:54 2010
    Thread 1 cannot allocate new log, sequence 909
    Private strand flush not complete
    Current log# 2 seq# 908 mem# 0: ####################redo02.log
    Thread 1 advanced to log sequence 909 (LGWR switch)
    Current log# 3 seq# 909 mem# 0: ###################redo03.log
    Thu Nov 04 12:01:51 2010
    Thread 1 cannot allocate new log, sequence 910
    Checkpoint not complete
    Current log# 3 seq# 909 mem# 0:###################redo03.log

  • Using expdp/impdp to backup schemas to new tablespace

    Hello,
    I have tablespace A for schemas A1 and A2, and I wish to back up these schemas to tablespace B using schema names B1 and B2 (so the contents of schemas A1 and A2 are copied into schemas B1 and B2, respectively, to use as backups in case something happens to schemas A1 or A2 or tablespace A).
    I began by creating tablespace B, and schemas B1 and B2. Then I attempted to populate schemas B1 and B2 by doing the following:
    EXPORT SCHEMAS:
    expdp a1/a1password@myIdentifier ESTIMATE=BLOCKS DUMPFILE=myDpumpDirectory:a1.dmp LOGFILE=myDpumpDirectory:a1_export.log SCHEMAS=a1 COMPRESSION=METADATA_ONLY
    expdp a2/a2password@myIdentifier ESTIMATE=BLOCKS DUMPFILE=myDpumpDirectory:a2.dmp LOGFILE=myDpumpDirectory:a2_export.log SCHEMAS=a2 COMPRESSION=METADATA_ONLY
    IMPORT SCHEMAS:
    impdp b1/b1password@myIdentifier DUMPFILE=myDpumpDirectory:a1.dmp LOGFILE=myDpumpDirectory:b1_import.log REMAP_SCHEMA=a1:b1
    impdp b2/b2password@myIdentifier DUMPFILE=myDpumpDirectory:a2.dmp LOGFILE=myDpumpDirectory:b2_import.log REMAP_SCHEMA=a2:b2
    This resulted in backing up schema A1 into schema B1, and schema A2 into B2, but the tablespaces for schemas B1 and B2 remained tablespace A (when I wanted them to be tablespace B).
    I will drop schemas B1 and B2, create new schemas, and try again. What command should I use to get the tablespace correct this time?
    Reviewing the documentation for data pump import
    http://docs.oracle.com/cd/E11882_01/server.112/e22490/dp_import.htm#SUTIL300
    specifically the section titled REMAP_TABLESPACE, I'm thinking that I could just add a switch to the above import commands to remap tablespace, such as:
    impdp b1/b1password@myIdentifier DUMPFILE=myDpumpDirectory:a1.dmp LOGFILE=myDpumpDirectory:b1_import.log REMAP_SCHEMA=a1:b1 REMAP_TABLESPACE=a:b
    impdp b2/b2password@myIdentifier DUMPFILE=myDpumpDirectory:a2.dmp LOGFILE=myDpumpDirectory:b2_import.log REMAP_SCHEMA=a2:b2 REMAP_TABLESPACE=a:b
    Is that correct?
    Also, is it OK to use the same export commands above, or should they change to support the REMAP_TABLESPACE?

    Hi,
    If I understand correctly, you want to import A1 into B1 and A2 into B2, with the tablespace remapped as well. The ESTIMATE parameter on your expdp has no bearing on this; the remapping happens at import time.
    You can use something like this with one dump file:
    expdp system/password directory=<myDpumpDirectory> dumpfile=A1_A2_Export.dmp logfile=A1_A2_Export.log schemas=A1,A2
    impdp system/password directory=<myDpumpDirectory> dumpfile=A1_A2_Export.dmp logfile=A1_A2_Import.log remap_schema=A1:B1,A2:B2 remap_tablespace=A:B
    HTH

  • Expdp / impdp - Private synonyms using DB link not imported

    Hi
    I'd appreciate any help with this.
    We are taking an export (expdp) of a database on Oracle EE 10.2.0.3 in Prod and then importing it into an identical DB on 10.2.0.5 in Dev.
    I doubt the minor version difference is an issue, but I mention it for completeness.
    Our expdp in prod used to look like this:
    expdp agdba/x directory=DATA_PUMP_DIR dumpfile=X.DMP logfile=X.LOG schemas=A,B,C,D
    We have changed it to:
    expdp agdba/x directory=DATA_PUMP_DIR dumpfile=X.DMP logfile=X.LOG full=y EXCLUDE=SCHEMA:"IN ('E', 'F')"
    ( so basically do a full export, but exclude schemas E & F )
    The impdp in dev has not changed (we import one schema at a time):
    impdp agdba/x DUMPFILE=X.DMP LOGFILE=imp_X.LOG SCHEMAS=A DIRECTORY=DATA_PUMP_DIR TABLE_EXISTS_ACTION=SKIP
    DB user AGDBA has been granted the DBA role...
    The Issue:_
    Private synonyms that use a DB link are NOT imported, but private synonyms that do NOT use a DB link ARE imported.
    I can fix the issue by simply recreating the private synonyms using the DB link - e.g.:
    CREATE SYNONYM EVENTLOGTBL FOR EVENTLOGTBL@FCISTOSMS;
    Things were working fine until we changed the expdp to use FULL. I see a few posts about synonyms and FULL=Y issues, but nothing quite like our problem.
    Any ideas?
    Thanks,
    Andreas

    Andreas Hess wrote:
    "So the problem is expdp FULL=Y for some reason does not export synonyms that refer to objects via DB links."
    I doubt that ... More than a few times a week I refresh non-prod databases from a prod db (it is a schema-level export, not FULL=Y); two of the schemas own db links, and others have synonyms pointing to them. I have never encountered a synonym problem (except that if db links are invalid then impdp takes an extremely long time to time out while compiling pl/sql code). We normally change db links after the impdp is done, or while it is importing tables via another session.
    I just re-ran the sqlfile option with include=db_link,synonym and I can see statements that create synonyms and create db links.
    However I think the order in which impdp runs might be the source of your problem ... see this:
    Starting "ME"."SYS_SQL_FILE_SCHEMA_01":  ME/******** dumpfile=xxxx.%u.dmp directory=xxxx_exp schemas=schema1,schema2,schema3 sqlfile=schema.sql include=db_link,synonym
    Processing object type SCHEMA_EXPORT/SYNONYM/SYNONYM       <<<< Synonyms are created first
    Processing object type SCHEMA_EXPORT/DB_LINK               <<<< db_links come later
    Job "ME"."SYS_SQL_FILE_SCHEMA_01" successfully completed at 08:32:02
    So, it is conceivable that if you drop synonyms/db_links from your schemas before impdp, some synonyms will fail to create properly. This could be a version-specific issue; mine is 11.2 and I don't see failures for synonym creation, i.e. no error messages during impdp. Next time you could try to drop all objects except db_links and see if you still have the same issue.
    Raj
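    Building on the sqlfile trick above, a minimal sketch for extracting just the synonym and db-link DDL from an existing dump (the dump file and directory names reuse the ones from the question; the sqlfile name is made up):
    impdp agdba/x directory=DATA_PUMP_DIR dumpfile=X.DMP sqlfile=syn_and_links.sql include=DB_LINK,SYNONYM
    The missing CREATE SYNONYM statements can then be run by hand from syn_and_links.sql.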

  • Use expdp/impdp to reorganize a tablespace to remove an additional datafile?

    Oracle 10g (10.2.0.1)
    We had a tablespace with a single datafile WORK1; WORK1 filled up, and a colleague added two datafiles WORK2 and WORK3 (instead of resizing the original to be larger).
    I resized WORK1, increasing by 500Mb.
    I was able to drop WORK3, but not WORK2 (ORA-03262: the file is non-empty)
    My proposed solution is to expdp the tablespace, drop the tablespace and datafiles, recreate the tablespace with a correctly sized datafile and finally impdp the tablespace.
    Is this solution valid ?
    Any hints at syntax would be useful

    1. Map your datafile.
    2. If there are no segments in the datafile, drop it and go to 6.
    3. Shrink the datafile down to where the data ends.
    4. Rebuild/move the last object in the data file.
    5. Go to 1.
    6. Fin.
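    A sketch of the commands behind steps 3 and 4 (the file, tablespace and object names are placeholders):
    ALTER DATABASE DATAFILE '/u01/oradata/work2_01.dbf' RESIZE 100M;
    ALTER TABLE some_owner.some_table MOVE TABLESPACE work1;
    ALTER INDEX some_owner.some_index REBUILD TABLESPACE work1;
    Remember that moving a table leaves its indexes UNUSABLE until they are rebuilt.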
    To map data file...
    accept file_num char prompt 'File ID: ';
    SET PAGESIZE   70
    SET LINESIZE   132
    SET NEWPAGE    0
    SET VERIFY     OFF
    SET ECHO       OFF
    SET HEADING    ON
    SET FEEDBACK   OFF
    SET TERMOUT    ON
    COLUMN file_name   FORMAT a50          HEADING 'File Name'
    COLUMN owner       FORMAT a10   TRUNC  HEADING 'Owner'
    COLUMN object      FORMAT a30   TRUNC  HEADING 'Object'
    COLUMN obj_type    FORMAT a2           HEADING ' '
    COLUMN block_id    FORMAT 9999999      HEADING 'Block|ID'
    COLUMN blocks      FORMAT 999,999      HEADING 'Blocks'
    COLUMN mbytes      FORMAT 9,999.99     HEADING 'M-Bytes'
    SELECT  'free space'      owner,
            ' '               object,
            ' '               obj_type,
            f.file_name,
            s.file_id,
            s.block_id,
            s.blocks,
            s.bytes/1048576   mbytes
      FROM  dba_free_space s,
            dba_data_files f
     WHERE  s.file_id = TO_NUMBER(&file_num)
       AND  s.file_id = f.file_id
    UNION
    SELECT  owner,
            segment_name,
            DECODE(segment_type, 'TABLE',          'T',
                                 'INDEX',          'I',
                                 'ROLLBACK',       'RB',
                                 'CACHE',          'CH',
                                 'CLUSTER',        'CL',
                                 'LOBINDEX',       'LI',
                                 'LOBSEGMENT',     'LS',
                                 'TEMPORARY',      'TY',
                                 'NESTED TABLE',   'NT',
                                 'TYPE2 UNDO',     'U2',
                                 'TABLE PARTITION','TP',
                                 'INDEX PARTITION','IP', '?'),
            f.file_name,
            s.file_id,
            s.block_id,
            s.blocks,
            s.bytes/1048576
      FROM  dba_extents s,
            dba_data_files f
     WHERE  s.file_id = TO_NUMBER(&file_num)
       AND  s.file_id = f.file_id
    ORDER
        BY  file_id,
            block_id;

  • EXPDP/IMPDP TIMESTAMP(0)

    Hi All,
    I encountered a strange problem when I was doing an export from a 10.2.0.3 database on Solaris to a 10.2.0.3 database on another Solaris box.
    expdp system dumpfile=db_20100617.dmp logfile=db_20100617.log directory=expdp full=y compression=metadata_only parallel=2 status=30
    and this is my import statement
    impdp system dumpfile=db_20100617.dmp logfile=db_20100617_imp.log directory=expdp full=y parallel=4
    I've followed this process twice, and every time, on my destination database, all columns with (source) data_type=TIMESTAMP(6) would be converted to data_type=TIMESTAMP(0), which causes a select that names the column (select column1 from tablea) to crash with an end-of-communication error.
    Fortunately, the workaround was to alter the table and modify the column back to timestamp(6), but I was wondering if anyone else had seen this before. My log shows no errors with table creation, only issues with package compilation and db links.
    Has anyone seen this problem before or know how to bypass it?

    Hi All,
    Just to finish off this thread: I've put it through Metalink and they haven't confirmed it as a bug. Their only resolution, according to Oracle support, was to recreate the table manually, because any table created using
    create table ... as select *
    also causes the issue during the export stage. Using the get_ddl function also produces the same error with timestamp(0).
    Anyway all, thanks for your time.
    Cheers.
