Impdp schemas

Hi,
Oracle version: 10g R2.
I used Data Pump export to export metadata only for 3 schemas (IGL, TEXXON, OLI).
This export was done by user IGL.
I tried to import (impdp) these 3 schemas into another database of the same Oracle version using the following command:
impdp system@twmin DIRECTORY=DBPUMPDIR DUMPFILE=twmin_schema_0120090109.dmp,twmin_schema_0220090109.dmp SCHEMAS=IGL,TEXXON,OLI
I hit a lot of ORA errors; the main ones are:
ORA-01435: user does not exist
ORA-31625: error changing username from SYSTEM to IGL
Then I used the REMAP_SCHEMA option:
impdp system@twmin DIRECTORY=DBPUMPDIR REMAP_SCHEMA=IGL:IGLTEST DUMPFILE=twmin_schema_0120090109.dmp,twmin_schema_0220090109.dmp
This option also throws a lot of ORA errors, because it complains about the other users, like OLI:
ORA-31625: error changing username from SYSTEM to OLI
ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"IGLTEST"."SYS_EXPORT_SCHEMA_02" creation failed
ORA-31625: error changing username from SYSTEM to TEXXON
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPW$WORKER", line 4715
ORA-01435: user does not exist
It imports the procedures and packages from IGL into IGLTEST, but it didn't create the table structure.
How can I import these 3 schemas into the other database without errors?
Rgds

Hi..
ORA-31625: error changing username from SYSTEM to OLI
ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"IGLTEST"."SYS_EXPORT_SCHEMA_02" creation failed
ORA-31625: error changing username from SYSTEM to TEXXON
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPW$WORKER", line 4715
ORA-01435: user does not exist
The last line reads ORA-01435: user does not exist. First create the target users (IGLTEST, and likewise TEXXON and OLI) in the destination database, then run the import.
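For example, a minimal sketch on the target database before the import (the passwords and the USERS tablespace here are assumptions; adjust to your environment):
CREATE USER igltest IDENTIFIED BY igltest DEFAULT TABLESPACE users QUOTA UNLIMITED ON users;
CREATE USER texxon IDENTIFIED BY texxon DEFAULT TABLESPACE users QUOTA UNLIMITED ON users;
CREATE USER oli IDENTIFIED BY oli DEFAULT TABLESPACE users QUOTA UNLIMITED ON users;
GRANT CREATE SESSION, CREATE TABLE, CREATE PROCEDURE TO igltest, texxon, oli;
Then rerun the impdp command (with REMAP_SCHEMA if you want the IGL objects in IGLTEST).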
HTH
Anand

Similar Messages

  • Impdp does not create user/schema

    I'm an Oracle noob. I'm trying to copy (expdp/impdp) a schema (no data) from one machine to another. I do not want any remapping. I just want the empty table structure created on targetHOST.
    Error I get is:
        ORA-39083: Object type PROCACT_SCHEMA failed to create with error:
        ORA-31625: Schema ZABBIX is needed to import this object, but is unaccessible
        ORA-01435: user does not exist
    My understanding is that if the user performing the import has the 'IMPORT FULL DATABASE' privilege, it'll create the users/schemas on the targetHOST. I granted it to 'scott' and tried with that too; same error.
    - DB version: SQL*Plus: Release 11.2.0.1.0 Production on Thu Oct 24 14:27:41 2013
    - If it matters, the DB version on sourceHOST is 11.2.0.3
    - OS: Linux targetHOST 2.6.18-308.el5 #1 SMP Fri Jan 27 17:17:51 EST 2012 x86_64 x86_64 x86_64 GNU/Linux
    Import errors:
        (0)oracle@targetHOST$  sqlplus system/manager
        SQL*Plus: Release 11.2.0.1.0 Production on Thu Oct 24 14:27:41 2013
        Copyright (c) 1982, 2009, Oracle.  All rights reserved.
        SQL> select * from session_privs;
        PRIVILEGE
        IMPORT FULL DATABASE
        CREATE SESSION
        .......200 other privileges.......
        202 rows selected.
        SQL> ^D
        (0)oracle@targetHOST$
        (0)oracle@targetHOST$ impdp system/oracle123 directory=TEST_DIR dumpfile=ZABBIX.dmp logfile=impdpZabbix.log
        Import: Release 11.2.0.1.0 - Production on Thu Oct 24 14:14:51 2013
        Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
        Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
        With the Partitioning, OLAP, Data Mining and Real Application Testing options
        Master table "SYSTEM"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
        Starting "SYSTEM"."SYS_IMPORT_FULL_01":  system/******** directory=TEST_DIR dumpfile=ZABBIX.dmp logfile=impdpZabbix.log
        Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
        ORA-39083: Object type PROCACT_SCHEMA failed to create with error:
        ORA-31625: Schema ZABBIX is needed to import this object, but is unaccessible
        ORA-01435: user does not exist
        Failing sql is:
        BEGIN
        sys.dbms_logrep_imp.instantiate_schema(schema_name=>SYS_CONTEXT('USERENV','CURRENT_SCHEMA'), export_db_name=>'XXX.YYY.COM', inst_scn=>'7788478540892');COMMIT; END;
        Processing object type SCHEMA_EXPORT/TABLE/TABLE
        ORA-39083: Object type TABLE:"ZABBIX"."TABLE1" failed to create with error:
        ORA-01918: user 'ZABBIX' does not exist
        Failing sql is:
        CREATE TABLE "ZABBIX"."TABLE1" ("COLUMN1" VARCHAR2(20 BYTE) NOT NULL ENABLE, "COLUMN2" VARCHAR2(20 BYTE), "COLUMN3" VARCHAR2(20 BYTE)) SEGMENT CREATION IMMEDIATE PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAU
        Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
        Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
        Job "SYSTEM"."SYS_IMPORT_FULL_01" completed with 2 error(s) at 14:14:53
        (5)oracle@targetHOST$
    Export was done using following command (on a different machine)
    (0)oracle@sourceHOST$ expdp zabbix/zabbix schemas=ZABBIX content=METADATA_ONLY directory=TEST_DIR dumpfile=ZABBIX.dmp logfile=expdpZABBIX.log
    Complete export log available here:

    I figured it out on my own. Required privilege is 'CREATE USER'. I don't have system's password so I granted CREATE USER to zabbix, made it DBA and it seems to be exporting more than it did last time:
    Processing object type SCHEMA_EXPORT/USER
    Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
    Processing object type SCHEMA_EXPORT/ROLE_GRANT
    Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
    Yet to import but looks good so far..
    Appreciate you taking the time..
    Thanks...
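    For reference, a minimal sketch of the grants described above, run as a privileged user on the source database before re-running the expdp (so the dump carries the user definition):
        GRANT CREATE USER TO zabbix;
        GRANT DBA TO zabbix;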

  • Impdp operation taking more tablespace size in compare to expdp...

    Hi All,
    I have one issue with an impdp operation. I am using an 11gR2 database and the schema's dump file size is 5G. When I start loading data through impdp, the schema's tablespace grows to more than 5G. I had to stop the impdp operation because of the growing tablespace size. No COMPRESSION parameter was passed during expdp. Lastly I set the tablespace MAXSIZE to unlimited, but it still wasn't sufficient and I had to add one more dbf, so the tablespace size as of now is 60G and the impdp operation is still running.
    Can anyone explain how, if the dump file size is 5G, the tablespace size could be more than 5G? My assumption was that if my dump file is 5G, then the tablespace into which I load my data (using impdp) should need no more than 5G.
    Thanks in advance.

    I was facing the same problem. After giving the parameter TRANSFORM=SEGMENT_ATTRIBUTES:n, the problem was resolved.
    TRANSFORM = transform_name:value[:object_type]
    The transform_name specifies the name of the transform. Some of the possible options are as follows:
    SEGMENT_ATTRIBUTES - If the value is specified as y, then segment attributes (physical attributes, storage attributes, tablespaces, and logging) are included, with appropriate DDL. The default is y. (If this is n, physical storage attributes are not included.)
    STORAGE - If the value is specified as y, the storage clauses are included, with appropriate DDL. The default is y. This parameter is ignored if SEGMENT_ATTRIBUTES=n.
    Although this thread is quite old, I am updating it in case someone needs to refer to it in future. My system parameter deferred_segment_creation is set to TRUE.
    Here is the complete command I used:
    impdp vygrdba/******* dumpfile=VYGRVS6I5_25DEC12.dmp logfile=VYGR_PT_09Jan13.log remap_schema=VYGRVS6I5:VYGR_PT TRANSFORM=SEGMENT_ATTRIBUTES:n
    Edited by: 980762 on 9 Jan, 2013 3:58 AM
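    To check the parameter mentioned above on your own database, a quick query (SHOW PARAMETER deferred_segment_creation in SQL*Plus does the same):
        SELECT name, value FROM v$parameter WHERE name = 'deferred_segment_creation';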

  • Verify impdp completed successfully then send email

    Hi,
    I am creating a daily job to refresh a test database.
    Basic steps:
    1. Kill all sessions/connections to the database.
    2. Drop schema 1 and schema 2.
    3. impdp schema 1 and schema 2.
    4. Verify that impdp completed successfully. If the job did not complete successfully, send an email to the DBA. If it did, send an email to the developers stating that the database has been refreshed.
    I am in a Windows environment so I do not have grep to read my log file. With a script, how do I determine if the refresh/impdp completed successfully?
    Edited by: user9934078 on Jan 14, 2010 1:58 PM

    I wanted to share this code that I created. It checks your expdp log file, then sends an HTML email based on the completion status.
    Hope this helps someone.
    --THIS CODE READS THE DATA PUMP EXPORT LOG AND CHECKS FOR ERRORS, THEN SENDS AN EMAIL.
    DECLARE
      vinhandle     utl_file.file_type;
      vnewline      VARCHAR2(32767);
      statusMessage VARCHAR2(32767);
      jobName       VARCHAR2(32767) := 'ORACLE Scheduled Task: MWFPROD backup completed on \\MWFDBPRODSVR ';
      styleCSS      VARCHAR2(32767) := '<style>BODY {font-family: Arial, Verdana; margin-top: -9px; background-color:#FFFFFF; font-size:12px;} A {font-family: Arial, Verdana;} A:link {text-decoration: underline; color:#003399;} A:visited {text-decoration: underline; color:#003399;} A:hover {text-decoration: underline; color:#003399;} A:active {text-decoration: underline; color:#003399;} B {font-size:12px;} TABLE {border-width: 0px; border-spacing: 0;border-style: solid; border-color: #E7E7E7; border-collapse: collapse;}TH {border-width: 0px; padding: 2px 10px 2px 5px ; border-style: solid; border-color:#333399; font-size: 12px; color:#FFFFFF; background-color:#333399; text-align:left;} TD {border-width: 0px; padding: 2px 10px 2px 5px ; border-style: solid; border-color:#E7E7E7; font-size: 12px;} .redAlert{FONT-WEIGHT: bolder;FONT-SIZE: 12px; COLOR: #FF4976;}</style>';
      jobCompleted  BOOLEAN := FALSE;
      startLine     VARCHAR2(32767);
      startTime     VARCHAR2(32767);
      endTime       VARCHAR2(32767);
      totTime       VARCHAR2(32767);
    BEGIN
      vinhandle := utl_file.fopen('DATA_PUMP_DIR', 'MWFPRODexport.log', 'R');
      LOOP
        BEGIN
          utl_file.get_line(vinhandle, vnewline);
          -- A Data Pump job that finishes cleanly logs "successfully completed"
          IF instr(vnewline, 'successfully completed') > 0 THEN
            jobCompleted := TRUE;
          END IF;
          -- The "Export:" banner line carries the start time
          IF instr(vnewline, 'Export:') > 0 THEN
            startLine := vnewline;
            startTime := vnewline;
          END IF;
          -- KEEP THE LAST LINE OF THE LOG. IT IS USED FOR THE END TIME AND MESSAGES
          statusMessage := vnewline;
        EXCEPTION
          WHEN NO_DATA_FOUND THEN  -- end of file reached
            EXIT;
        END;
      END LOOP;
      utl_file.fclose(vinhandle);
      startTime := substr(startTime, instr(startTime, ':', 1, 2) - 2, length(startTime));
      endTime   := substr(statusMessage, instr(statusMessage, ' at ') + 4, length(statusMessage));
      -- GET DURATION OF JOB (assumes the job does not run past midnight)
      -- totTime := to_char(trunc(sysdate) + (TO_DATE(endTime,'HH24:MI:SS') - TO_DATE(startTime,'HH24:MI:SS')), 'HH24:MI:SS');
      totTime := to_char(trunc(sysdate) + (TO_DATE(endTime, 'HH24:MI:SS') - TO_DATE(startTime, 'HH24:MI:SS')), 'HH24') || ' hours, ';
      totTime := totTime || to_char(trunc(sysdate) + (TO_DATE(endTime, 'HH24:MI:SS') - TO_DATE(startTime, 'HH24:MI:SS')), 'MI') || ' minutes, ';
      totTime := totTime || to_char(trunc(sysdate) + (TO_DATE(endTime, 'HH24:MI:SS') - TO_DATE(startTime, 'HH24:MI:SS')), 'SS') || ' seconds';
      IF jobCompleted THEN
        UTL_MAIL.send(sender     => '[email protected]',
                      recipients => '[email protected]',
                      subject    => jobName,
                      message    => styleCSS || '<br><table><tr><td><b>DURATION:</b></td><td>' || totTime || '</td></tr><tr><td><b>STATUS:</b></td><td> Succeeded </td></tr>' || '<tr><td><b>JOB RUN:</b></td><td>' || startLine || '</td></tr>' || '<tr><td><b>MESSAGES:</b></td><td>' || statusMessage || '</td></tr></table>',
                      mime_type  => 'text/html; charset=windows-1250');
      ELSE
        UTL_MAIL.send(sender     => '[email protected]',
                      recipients => '[email protected]',
                      subject    => jobName,
                      message    => styleCSS || '<br><table><tr><td><b>DURATION:</b></td><td>' || totTime || '</td></tr><tr><td><b>STATUS:</b></td><td class=redAlert> Failed </td></tr>' || '<tr><td><b>JOB RUN:</b></td><td>' || startLine || '</td></tr>' || '<tr><td><b>MESSAGES:</b></td><td>' || statusMessage || '</td></tr></table>',
                      mime_type  => 'text/html; charset=windows-1250');
      END IF;
    END;
    /
    exit
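    Since the goal is a daily refresh, one way to run a check like this on a schedule is DBMS_SCHEDULER. A minimal sketch, assuming the block above has been wrapped in a stored procedure (the procedure name, job name, and hour here are hypothetical):
        BEGIN
          DBMS_SCHEDULER.create_job(
            job_name        => 'CHECK_DP_LOG_JOB',     -- hypothetical job name
            job_type        => 'STORED_PROCEDURE',
            job_action      => 'CHECK_DP_LOG',         -- hypothetical wrapper procedure for the block above
            start_date      => SYSTIMESTAMP,
            repeat_interval => 'FREQ=DAILY;BYHOUR=6',  -- daily at 06:00
            enabled         => TRUE);
        END;
        /
    Note that UTL_MAIL must be installed and the SMTP_OUT_SERVER initialization parameter set for the emails to go out.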

  • Not able to import(getting error on impdp command)

    Hi,
    I have taken an export of my database db6 using the command below:
    expdp system/password@db6 SCHEMAS=(CODASYL,DNET,FINANCE,BILLPAY) DUMPFILE=qr_bkp_9nov.dmp DIRECTORY=qr_bkp LOGFILE=qr_bkp_9nov.log PARALLEL=4
    The dump was created successfully (I had also created the directory and granted permissions before executing the above command).
    Now on a different server I have created the directory as below:
    CREATE OR REPLACE DIRECTORY qr_bkp AS '/opt/quikremit';
    GRANT READ, WRITE ON DIRECTORY qr_bkp to HR;
    When executing the import command, I get the error below.
    bash-3.00$ impdp SCHEMAS=(CODASYL,DNET,FINANCE,BILLPAY) DIRECTORY=qr_bkp DUMPFILE=qr_bkp_9nov.dmp LOGFILE=qr_bkp_9nov1.LOG PARALLEL=4
    Import: Release 10.2.0.1.0 - 64bit Production on Thursday, 10 November, 2011 13:47:37
    Copyright (c) 2003, 2005, Oracle. All rights reserved.
    Username: HR
    Password:
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
    With the Partitioning, OLAP and Data Mining options
    ORA-39002: invalid operation
    ORA-39070: Unable to open the log file.
    ORA-29283: invalid file operation
    ORA-06512: at "SYS.UTL_FILE", line 475
    ORA-29283: invalid file operation
    Can someone please help?

    bash-3.00$ ls -la /opt/quikremit/
    total 2997776
    drwxrwxr-x 15 pqapp other 512 Nov 10 13:44 .
    drwxr-xr-x 15 root sys 512 Oct 24 13:18 ..
    drwxrwxr-x 13 pqapp other 1024 Nov 3 11:32 adodb
    drwxrwxr-x 17 pqapp other 512 Nov 3 12:16 apache2
    drwxrwxr-x 3 pqapp other 512 Nov 3 12:02 batch
    -rwxrwxr-x 1 pqapp other 451 Nov 3 18:21 conntest.php
    drwxrwxr-x 3 pqapp other 512 Nov 3 10:35 dev_envs
    drwxrwxr-x 12 pqapp other 1024 Oct 27 07:17 httpd-2.2.9
    drwxrwxr-x 2 pqapp other 512 Nov 3 10:20 logs
    drwxrwxr-x 2 pqapp other 512 Nov 3 10:26 mail_logs
    drwxrwxr-x 13 pqapp other 512 Oct 27 08:56 mysql-5.5.17-solaris10-sparc-64bit
    -rwxrwxr-x 1 pqapp other 1534005760 Oct 27 08:51 mysql-5.5.17-solaris10-sparc-64bit.tar
    drwxrwxr-x 3 pqapp other 512 Oct 27 09:12 mysql5.5.17
    drwxrwxr-x 23 pqapp other 2560 Nov 3 10:33 php
    drwxr-xr-x 2 pqapp other 512 Nov 10 13:44 qr_bkp
    -rw-r--r-- 1 pqapp other 66007 Nov 9 15:38 qr_bkp_9nov.log
    -rwxrwxrwx 1 pqapp other 134 Nov 10 11:46 qr_impdp.sh
    drwxrwxr-x 4 pqapp other 512 Nov 3 11:41 smarty_temp
    -rw-r--r-- 1 pqapp other 4003 Nov 10 13:32 sqlnet.log
    drwxrwxr-x 3 pqapp other 512 Nov 3 12:03 tmp
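    ORA-29283 on the log file usually means the OS user that runs the database cannot write to the directory path. In the listing above, /opt/quikremit/qr_bkp is owned by pqapp and is drwxr-xr-x, so only pqapp can write there; also, the dump file itself must be physically present in that directory on this server. A hedged check of the directory object (the OS permission fix itself happens at the shell, e.g. with chmod/chown):
        SELECT directory_name, directory_path FROM dba_directories WHERE directory_name = 'QR_BKP';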

  • Having problem with import via dblink

    This is the import command:
    impdp schemas=g2log network_link=HS5 directory=DATA_PUMP_DIR logfile=g2loggblink.log CONTENT=data_only
    here are the error messages that I am getting:
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
    With the Partitioning, OLAP and Data Mining options
    Starting "SYS"."SYS_IMPORT_SCHEMA_06":  /******** AS SYSDBA schemas=g2log network_link=HS5 directory=DATA_PUMP_DIR logfile=g2loggblink.log CONTENT=data_only
    Estimate in progress using BLOCKS method...
    Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
    ORA-39125: Worker unexpected fatal error in KUPW$WORKER.GET_TABLE_DATA_OBJECTS while calling DBMS_LOB.CREATETEMPORARY []
    ORA-01157: cannot identify/lock data file 1504 - see DBWR trace file
    ORA-01110: data file 1504: '/data/oracle/oradata/hs4/temp02_01.dbf'
    ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
    ORA-06512: at "SYS.KUPW$WORKER", line 6228
    ----- PL/SQL Call Stack -----
      object      line  object
      handle    number  name
    43045e5d0     14916  package body SYS.KUPW$WORKER
    43045e5d0      6293  package body SYS.KUPW$WORKER
    43045e5d0      9108  package body SYS.KUPW$WORKER
    43045e5d0      3870  package body SYS.KUPW$WORKER
    43045e5d0      6910  package body SYS.KUPW$WORKER
    43045e5d0      1259  package body SYS.KUPW$WORKER
    4303272b0         2  anonymous block
    Job "SYS"."SYS_IMPORT_SCHEMA_06" stopped due to fatal error at 14:34:40
    Can someone please help me with this? Thanks.

    I have tried with the g2log user and I still get the same error.
    I also added 30 GB to the temp tablespace as well.
    impdp g2log/****** schemas=g2log network_link=HS5 directory=DATA_PUMP_DIR logfile=g2loggblink.log CONTENT=data_only
    Import: Release 10.2.0.3.0 - 64bit Production on Friday, 11 October, 2013 6:50:12
    Copyright (c) 2003, 2005, Oracle.  All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
    With the Partitioning, OLAP and Data Mining options
    Starting "G2LOG"."SYS_IMPORT_SCHEMA_02":  g2log/******** schemas=g2log network_link=HS5 directory=DATA_PUMP_DIR logfile=g2loggblink.log CONTENT=data_only
    Estimate in progress using BLOCKS method...
    Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
    ORA-39125: Worker unexpected fatal error in KUPW$WORKER.GET_TABLE_DATA_OBJECTS while calling DBMS_LOB.CREATETEMPORARY []
    ORA-01157: cannot identify/lock data file 1513 - see DBWR trace file
    ORA-01110: data file 1513: '/data/oracle/oradata/hs4/temp02_10.dbf'
    ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
    ORA-06512: at "SYS.KUPW$WORKER", line 6228
    ----- PL/SQL Call Stack -----
      object      line  object
      handle    number  name
    43045e5d0     14916  package body SYS.KUPW$WORKER
    43045e5d0      6293  package body SYS.KUPW$WORKER
    43045e5d0      9108  package body SYS.KUPW$WORKER
    43045e5d0      3870  package body SYS.KUPW$WORKER
    43045e5d0      6910  package body SYS.KUPW$WORKER
    43045e5d0      1259  package body SYS.KUPW$WORKER
    429e4fd80         2  anonymous block
    Job "G2LOG"."SYS_IMPORT_SCHEMA_02" stopped due to fatal error at 06:50:16

  • Expdp data without storage meta_data

    Hi,
    I'm working on a process to move database 10.2 from HP-UX with filesystem to Exadata 11.2 with ASM.
    We have limited room for downtime on the database and would like to start on Exadata as fresh as possible.
    I've just started writing this process but wanted to check if someone had any ideas or comments on how to speed it up. Each database on HP-UX will be moved separately, one database at a time.
    Alternative one:
    1. Create ASM and database instance on Exadata
    2. Run create script for tablespaces
    3. Run create script for users
    4. Expdp relevant schemas (not system/sys and statistics)
    5. Move the dump files to Exadata (maybe connect the Exadata to our existing SAN)
    6. Impdp schemas
    7. create statistics
    8. Test database
    9. Production :-)
    Alternative two:
    1. Create ASM and database instance on Exadata
    2. Run create script for tablespaces
    3. Run create script for users
    4. export table, index, etc. metadata but without storage info (initial extent 4gb on an index and so on) with dbms_metadata
    5. export data based on user tables.
    6. Configure external tables in new database on Exadata with ASM
    6b. insert data with the use of dump files and select against external tables
    7. create statistics
    8. test database
    9. production
    Regarding alternative 1, I do know that it's possible to dump only metadata, but is it possible to dump metadata without the storage info, as described above?
    If you have another solution please feel free to add an alternative 3 and 4 :)
    Thanks all.

    If you want to have the storage attributes of your segments modified during Data Pump Import, you could do the following:
    1) create a tablespace with your desired storage attributes in the target database
    2) impdp with transform=storage:n and remap_tablespace
    That way, your imported segments can get initial extents of 8m, for example.
    (An initial extent of 4m will not give you a 4m initial extent on a tablespace with autoallocate - such tablespaces use 64k, 1m, 8m or 64m sized extents instead.)
    See here for the documentation of this - it's not Exadata specific:
    http://docs.oracle.com/cd/E11882_01/server.112/e22490/dp_import.htm#BEHEDGJJ
    Kind regards
    Uwe Hesse
    "Don't believe it, test it!"
    http://uhesse.com
    Edited by: Uwe Hesse on 07.05.2012 09:07 added the part with autoallocate
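    Put together, a hedged sketch of the import Uwe describes (the file, directory, and tablespace names are hypothetical):
        impdp system/password directory=DATA_PUMP_DIR dumpfile=mydb.dmp logfile=imp_storage.log transform=storage:n remap_tablespace=OLD_TS:NEW_TS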

  • Importing into partitioned table from unpartitioned table

    I have taken an export of an unpartitioned table using Data Pump.
    I want to import data from these dump files into a partitioned table using impdp.
    Can you please provide the basic script? I tried running the following and got this error:
    noarg impdp $SCHEMA/$pw TABLES=\(ITPCS.ITPCS_BOM_GRT_SCAN\) EXCLUDE=STATISTICS parallel=4 dumpfile=dumpdir:exptable%U.dmp logfile=logdir:imptab2.log job_name=imptables_3 content=DATA_ONLY EXCLUDE=constraint,index,materialized_view
    Processing object type TABLE_EXPORT/TABLE/TBL_TABLE_DATA/TABLE/TABLE_DATA
    ORA-31693: Table data object "ITPCS"."ITPCS_BOM_GRT_SCAN" failed to load/unload and is being skipped due to error:
    ORA-00942: table or view does not exist
    ORA-06512: at "SYS.KUPD$DATA", line 1167
    ORA-14400: inserted partition key does not map to any partition
    Job "SYSTEM"."IMPTABLES_3" completed with 1 error(s) at 08:02

    pankaj2086 wrote:
    I have taken export of a unpartitioned table using datapump.
    I want to import data using these dumpfiles into a partitioned table using impdp.
    Can you please provide me the basic script, I have tried running using the following script and I got the following error.
    noarg impdp $SCHEMA/$pw TABLES=\(ITPCS.ITPCS_BOM_GRT_SCAN\) EXCLUDE=STATISTICS parallel=4 dumpfile=dumpdir:exptable%U.dmp logfile=logdir:imptab2.log job_name=imptables_3 content=DATA_ONLY EXCLUDE=constraint,index,materialized_view
    Processing object type TABLE_EXPORT/TABLE/TBL_TABLE_DATA/TABLE/TABLE_DATA
    ORA-31693: Table data object "ITPCS"."ITPCS_BOM_GRT_SCAN" failed to load/unload and is being skipped due to error:
    ORA-00942: table or view does not exist
    What do you suppose the ora-00942 means? Looks pretty self explanatory to me. Have you looked it up? Have you verified that the target table exists in the target database?
    ORA-06512: at "SYS.KUPD$DATA", line 1167
    ORA-14400: inserted partition key does not map to any partition
    And if the target table does exist, what do you suppose the ora-14400 means? Looks pretty self explanatory to me. Have you looked it up?
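    For the ora-14400, a hedged way to see which partition bounds the target table actually has:
        SELECT partition_name, high_value
        FROM   dba_tab_partitions
        WHERE  table_owner = 'ITPCS' AND table_name = 'ITPCS_BOM_GRT_SCAN'
        ORDER  BY partition_position;
    Any incoming key that falls outside these bounds needs a new partition (or a MAXVALUE partition) before the data-only import can succeed.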

  • Impdp import schema to different tablespace

    Hi,
    I have the PRTDB1 schema in the PRTDTS tablespace, and I need to encrypt this tablespace.
    I have done these steps:
    1. I created a new encrypted PRTDTS_ENC tablespace.
    2. I took an export of the PRTDB1 schema. (expdp system/system schemas=PRTDB1 directory=datapump dumpfile=data.dmp logfile=log1.log)
    Now I want to import this dump into the new PRTDTS_ENC tablespace.
    Can you give me the impdp command I need?

    Hello,
    ORA-14223: Deferred segment creation is not supported for this table
    Failing sql is:
    CREATE TABLE "PRTDB1"."DATRE_CONT" ("BRON" NUMBER NOT NULL ENABLE, "PCDID" NUMBER(9,0) NOT NULL ENABLE, "OWNERSID" NUMBER NOT NULL ENABLE, "VALUESTR" VARCHAR2(4000 CHAR), "VALNUM" NUMBER, "VALDATE" DATE, "VALXML" "XMLTYPE") SEGMENT CREATION DEFERRED PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING So it means that these Tables "need" a Segment but, they are created with the option SEGMENT CREATION DEFERRED.
    A workaround is to create a Segment for these Tables on the Source Database before exporting them. Then, at the Import you won't have the option SEGMENT CREATION DEFERRED. The following Note of MOS detail this Solution:
    - *Error Ora-14223 Deferred Segment Creation Is Not Supported For This Table During Datapump Import [ID 1293326.1]*
    The link below gives more informations about the DEFERRED SEGMENT CREATION:
    http://docs.oracle.com/cd/E14072_01/server.112/e10595/tables002.htm#CHDGJAGB
    With the DEFERRED SEGMENT CREATION feature, a Segment is created automatically at the first insert on the Table. So the Tables for which the error ORA-14223 occurs are empty.
    You may also create them separately by using a script generated by the SQLFILE parameter in which you change the option SEGMENT CREATION DEFERRED by SEGMENT CREATION IMMEDIATE.
    Hope this help.
    Best regards,
    Jean-Valentin
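    A hedged sketch of the SQLFILE approach mentioned above, which also remaps to the encrypted tablespace from the original question (the SQLFILE name is hypothetical; no data is imported, the DDL is only written to a file you can edit and run):
        impdp system/system directory=datapump dumpfile=data.dmp sqlfile=prtdb1_ddl.sql remap_tablespace=PRTDTS:PRTDTS_ENC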

  • Using expdp/impdp to backup schemas to new tablespace

    Hello,
    I have tablespace A for schemas A1 and A2, and I wish to back up these schemas to tablespace B using schema names B1 and B2 (so the contents of schemas A1 and A2 are copied into schemas B1 and B2, respectively, to use as backups in case something happens to schemas A1 or A2 or tablespace A).
    I began by creating tablespace B, and schemas B1 and B2. Then I attempted to populate schemas B1 and B2 by doing the following:
    EXPORT SCHEMAS:
    expdp a1/a1password@myIdentifier ESTIMATE=BLOCKS DUMPFILE=myDpumpDirectory:a1.dmp LOGFILE=myDpumpDirectory:a1_export.log SCHEMAS=a1 COMPRESSION=METADATA_ONLY
    expdp a2/a2password@myIdentifier ESTIMATE=BLOCKS DUMPFILE=myDpumpDirectory:a2.dmp LOGFILE=myDpumpDirectory:a2_export.log SCHEMAS=a2 COMPRESSION=METADATA_ONLY
    IMPORT SCHEMAS:
    impdp b1/b1password@myIdentifier DUMPFILE=myDpumpDirectory:a1.dmp LOGFILE=myDpumpDirectory:b1_import.log REMAP_SCHEMA=a1:b1
    impdp b2/b2password@myIdentifier DUMPFILE=myDpumpDirectory:a2.dmp LOGFILE=myDpumpDirectory:b2_import.log REMAP_SCHEMA=a2:b2
    This resulted in backing up schema A1 into schema B1, and schema A2 into B2, but the tablespaces for schemas B1 and B2 remained tablespace A (when I wanted them to be tablespace B).
    I will drop schemas B1 and B2, create new schemas, and try again. What command should I use to get the tablespace correct this time?
    Reviewing the documentation for data pump import
    http://docs.oracle.com/cd/E11882_01/server.112/e22490/dp_import.htm#SUTIL300
    specifically the section titled REMAP_TABLESPACE, I'm thinking that I could just add a switch to the above import commands to remap tablespace, such as:
    impdp b1/b1password@myIdentifier DUMPFILE=myDpumpDirectory:a1.dmp LOGFILE=myDpumpDirectory:b1_import.log REMAP_SCHEMA=a1:b1 REMAP_TABLESPACE=a:b
    impdp b2/b2password@myIdentifier DUMPFILE=myDpumpDirectory:a2.dmp LOGFILE=myDpumpDirectory:b2_import.log REMAP_SCHEMA=a2:b2 REMAP_TABLESPACE=a:b
    Is that correct?
    Also, is it OK to use the same export commands above, or should they change to support the REMAP_TABLESPACE?

    Hi,
    If I understand correctly, you want to import A1:B1 and A2:B2 with the respective tablespaces. The ESTIMATE parameter you are using with expdp cannot help you with that.
    You can use something like this with one dump file:
    expdp system/password directory=<myDpumpDirectory> dumpfile=A1_A2_Export.dmp logfile=A1_A2_Export.log schemas=A1,A2
    impdp system/password directory=<myDpumpDirectory> dumpfile=A1_A2_Export.dmp logfile=A1_A2_Import.log remap_schema=<A1:B1,A2:B2> REMAP_TABLESPACE=<TAB1:TAB2>
    HTH

  • Schema Expdp-IMPdp , i have the below errors in my imp log

    Hello All,
    I am doing some exp/imp of schemas in an OBI environment, around 300GB per schema, after quite a long time, and I am not sure whether to ignore the errors in the import log, rerun my import with changes, or drop the schemas and reimport.
    My impdp statement is:
    impdp system/********@OBIDEV schemas=ODS,DWH,STG,GEOGRAPHY directory=dumpdir dumpfile=BI2011_SCHEMAS.dmp logfile=impBI2011_SCHEMAS.log
    Errors from the imp log :
    Processing object type SCHEMA_EXPORT/USER
    ORA-31684: Object type USER:"GEOGRAPHY" already exists
    ORA-31684: Object type USER:"DWH" already exists
    ORA-31684: Object type USER:"STG" already exists
    Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
    Processing object type SCHEMA_EXPORT/ROLE_GRANT
    ORA-39083: Object type ROLE_GRANT failed to create with error:
    ORA-01919: role 'ROLE_SRC' does not exist
    Failing sql is:
    GRANT "ROLE_SRC" TO "STG" WITH ADMIN OPTION
    ORA-39083: Object type ROLE_GRANT failed to create with error:
    ORA-01919: role 'ROLE_STG' does not exist
    Failing sql is:
    GRANT "ROLE_STG" TO "ODS" WITH ADMIN OPTION
    Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
    --- and after some lines, the below errors again; many failing SQLs to create the tables:
    ORA-39083: Object type TABLE:"STG"."STG_ART_STOCK_OIG_XT" failed to create with error:
    ORA-06564: object BI_SRC_FILES does not exist
    Failing sql is:
    CREATE TABLE "STG"."STG_ART_STOCK_OIG_XT" ("YEARWEEK" NUMBER, "ARTICLE_NUMBER" NUMBER, "BU_CODE" NUMBER, "ART_QTY_STOCK" NUMBER, "PM_CODE" VARCHAR2(10 BYTE)) ORGANIZATION EXTERNAL ( TYPE ORACLE_LOADER DEFAULT DIRECTORY "BI_SRC_FILES" ACCESS PARAMETERS ( RECORDS DELIMITED BY NEWLINE CHARACTERSET WE8MSWIN1252 STRING SIZES ARE IN BYTES BADFILE 'ART_STOCK_OIG.bad'
    ORA-39083: Object type TABLE:"STG"."STG_ART_STOCK_INGKA_XT" failed to create with error:
    ORA-06564: object BI_SRC_FILES does not exist
    ORA-39083: Object type TABLE:"STG"."STG_ART_PMCODE_INGKA_XT" failed to create with error:
    ORA-06564: object BI_SRC_FILES does not exist
    Failing sql is:
    CREATE TABLE "STG"."STG_IFPM_XT" ("YEAR" VARCHAR2(255 BYTE), "TERTIAL" VARCHAR2(255 BYTE), "YW" VARCHAR2(255 BYTE), "STORE_NAME" VARCHAR2(255 BYTE), "BUDGET" VARCHAR2(255 BYTE), "ACTUAL_SALES" VARCHAR2(255 BYTE), "CUSTOMERS" VARCHAR2(255 BYTE), "VISITORS" VARCHAR2(255 BYTE), "REST_SALES" VARCHAR2(255 BYTE), "EXIT_CAFE_SALES" VARCHAR2(255 BYTE), "SWE_FOOD_SALES" VARCHAR2
    ORA-39083: Object type TABLE:"STG"."HFB_PA_EXT" failed to create with error:
    ORA-06564: object TEMP does not exist
    Failing sql is:
    CREATE TABLE "STG"."HFB_PA_EXT" ("HFB_ID" NUMBER, "HFB_DESC" VARCHAR2(100 BYTE), "RA_ID" NUMBER, "RA_DESCR" VARCHAR2(100 BYTE), "PA_ID" NUMBER, "PA_DESCR" VARCHAR2(100 BYTE)) ORGANIZATION EXTERNAL ( TYPE ORACLE_LOADER DEFAULT DIRECTORY "TEMP" ACCESS PARAMETERS ( records delimited  by newline
        skip 1
        fields  terminated by ','
        missing field values are null
          ) LOCAT
    ORA-39151: Table "GEOGRAPHY"."CAHC" exists. All dependent metadata and data will be skipped due to table_exists_action of skip
    Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
    . . imported "ODS"."ZZ_ODS_ART_STOCK_T"                  26.81 GB 321434766 rows
    imported "ODS"."ODS_PM_CODE_T":"SYS_P15548"          259.4 MB 3418214 rows  (are the sys tables effected here)
    Then the rows started importing ....
    Please share all possible ways to avoid the above errors in the import log, as it has to be shared with the client. Thanks for the swift response and help in advance.
    Regards,
    RanG

    HI,
    Because the table already exists. You can use the TABLE_EXISTS_ACTION option, so you can tell impdp what to do if the table it is trying to create already exists.
    TABLE_EXISTS_ACTION={SKIP | APPEND | TRUNCATE | REPLACE}
    impdp system/********@OBIDEV schemas=ODS,DWH,STG,GEOGRAPHY directory=dumpdir dumpfile=BI2011_SCHEMAS.dmp logfile=impBI2011_SCHEMAS.log table_exists_action=replace
    Thank you
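    The ORA-01919 and ORA-06564 errors are different: they come from roles and directory objects that exist on the source database but not on the target. A hedged sketch of pre-creating them before the import (the directory path is hypothetical and must point at a real path on the target server):
        CREATE ROLE role_src;
        CREATE ROLE role_stg;
        CREATE OR REPLACE DIRECTORY bi_src_files AS '/path/on/target/src_files';
        GRANT READ, WRITE ON DIRECTORY bi_src_files TO stg;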

  • How to impdp data only into a specific schema

    Hello,
    How do I impdp tables and materialized views only into a specific schema? Also, how do I exclude certain tables in particular schemas from being imported? Thank you.

    Please check the link:
    http://download-uk.oracle.com/docs/cd/B19306_01/server.102/b14215/dp_import.htm#i1007865
    Thanks
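    As a hedged sketch of the kind of parameters that page describes (all names here are hypothetical; note that INCLUDE and EXCLUDE cannot be combined in one Data Pump job, and multiple INCLUDE lines are allowed in a parameter file passed with PARFILE=):
        SCHEMAS=HR
        INCLUDE=TABLE:"NOT IN ('EMP_HISTORY')"
        INCLUDE=MATERIALIZED_VIEW
        DIRECTORY=DATA_PUMP_DIR
        DUMPFILE=exp.dmp
        LOGFILE=imp_hr.log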

  • Impdp Performing a Schema-Mode Import Example

    The Oracle docs ( http://docs.oracle.com/cd/B12037_01/server.101/b10825/dp_import.htm#i1006564) has an example of performing a schema-mode import as follows:
    impdp hr/hr SCHEMAS=hr DIRECTORY=dpump_dir1 DUMPFILE=expschema.dmp
    EXCLUDE=CONSTRAINT, REF_CONSTRAINT, INDEX TABLE_EXISTS_ACTION=REPLACE
    Looking carefully at this example what does " INDEX TABLE_EXISTS_ACTION=REPLACE" mean, specifically with respect to "INDEX"?
    Does it mean it will drop the table and associated index if it already exists and then re-create and load it using the dump file contents?

    INDEX is an argument to EXCLUDE, and TABLE_EXISTS_ACTION is a separate option. In this example, it is excluding indexes during import; INDEX has nothing to do with TABLE_EXISTS_ACTION=REPLACE.
    Thanks
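    For clarity, the same example written as a parameter file (each object type is a value of EXCLUDE; TABLE_EXISTS_ACTION stands alone):
        SCHEMAS=hr
        DIRECTORY=dpump_dir1
        DUMPFILE=expschema.dmp
        EXCLUDE=CONSTRAINT
        EXCLUDE=REF_CONSTRAINT
        EXCLUDE=INDEX
        TABLE_EXISTS_ACTION=REPLACE
    With TABLE_EXISTS_ACTION=REPLACE, an existing table is dropped and re-created from the dump file; the excluded indexes are simply never imported.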

  • IMPDP into schema with truncated tables, best way to do this?

    Hello all,
    I'm a bit of a noob with Data Pump.
    I've got a schema where I've truncated all the tables. I have a full schema export I took a while back, and I'm wanting to import it into the schema to basically 'reset' it.
    On the first run, I got:
    ORA-39151: Table "xyz.tablename" exists. All dependent metadata and data will be skipped due to table_exists_action of skip
    I've been reading through, and see suggestions to add to the par file:
    CONTENT=DATA_ONLY TABLE_EXISTS_ACTION=APPEND
    And I've seen others use the option for:
    table_exists_action=replace
    I'm wondering if I could get suggestions on the best approach to use... I basically want to put the data back into the tables and have the indexes rebuilt. What is the best course of action here?
    Thank you in advance,
    cayenne

    If all you want is to reload the data, then
    impdp user/password ... content=data_only table_exists_action=append
    This will only load the data into the existing tables and it will append the data to those tables. If the table definitions may have changed, then you would remove the content=data_only and use table_exists_action=replace.
    If you need to load dependent objects like indexes, constraints, etc, then don't do content=data_only and use table_exists_action=replace.
    Hope this helps.
    Dean
