Expdp without estimate

Hi,
I'm trying to do an expdp of the full database.
The problem is that the whole expdp takes 2 hours, and of those 2 hours the estimate part takes 40 minutes, which sounds unreasonable.
I'm trying to find a way to cut the estimate time. I tried doing the estimate with STATISTICS instead of BLOCKS, but it didn't help.
Is there another possible solution? I'm willing to skip the estimate part completely, but I didn't find a way to do that.
The DB version I use is 11.2.0.2 on Red Hat Linux.
Will appreciate any help,
Arnon
Edited by: arnon82 on Jan 31, 2013 10:22 PM

Hi,
The estimate phase is a critical part of the export job. It is not only estimating the amount of data that is being exported. It is actually gathering data that is describing what we call TABLE_DATA objects. These are objects where data is stored. Let me explain:
for a table, it is the table itself
for a partitioned table, it is each partition
for a subpartitioned table, it is each subpartition
So, Data Pump starts by collecting this information. It then stores all of it in the Master Table so the job knows what data needs to be exported. This "phase" needs to be done sooner or later; we do it at the beginning of the job in case you specify PARALLEL. Once this data is collected, parallel worker processes can start to unload data. If it were done later, the job could not begin parallel operation until the collection finished. If the parallelism of the job is 1, then it could be done later, but it still needs to be done.
Now, having said all of that, there have been some performance improvements in this area from release to release. I'm not sure what version you are running, but you could check with Oracle Support to see if there is a patch available for your version.
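There is no supported switch to skip the phase outright, and the original poster noted STATISTICS didn't help much (the TABLE_DATA collection still happens), but when optimizer statistics are current the statistics-based estimate at least avoids block sampling. A minimal sketch, with hypothetical directory and file names (the stats refresh is only needed if yours are stale, and can itself take a while on a big database):

```shell
# Optionally refresh dictionary statistics so ESTIMATE=STATISTICS
# has accurate input to work with:
sqlplus -s "/ as sysdba" <<'EOF'
EXEC DBMS_STATS.GATHER_DATABASE_STATS(estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE);
EOF

# Then run the full export using statistics instead of scanning segment blocks:
expdp system FULL=Y DIRECTORY=dp_dir DUMPFILE=full.dmp LOGFILE=full.log ESTIMATE=STATISTICS
```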
Hope this explains things a bit and helps.... a little :^)
Dean

Similar Messages

  • Expdp without password

    Hi all,
    I have a problem running expdp without providing a password.
    On one instance the following command works without providing a password:
    export ORACLE_SID=DBLN
    expdp / parfile=/oralog/scripts/parfiles/expdp-DBLN.par
    On the other instance it doesn't work; it asks me for a username and password:
    export ORACLE_SID=TST
    expdp / parfile=/oralog/scripts/parfiles/expdp-TST.par
    UDE-00008: operation generated ORACLE error 1017
    ORA-01017: invalid username/password; logon denied
    Both instances run on the same machine, under the same user, and have the same DATA_PUMP_DIR directory.
    Can anyone give me some advice on how to run expdp without a password? I would like to run it from crontab (not from EM).
    Thank you,
    Miha

    To use that syntax (expdp /) you have to configure an Oracle user "identified externally"
    http://download-uk.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_8003.htm#i2065278
    http://download-uk.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_8003.htm#i2094003
    See this example on my test machine: the db102 DB has an external user, while the test DB doesn't:
    [ora102 work db102]$ export ORACLE_SID=db102
    [ora102 work db102]$ expdp /
    Export: Release 10.2.0.1.0 - Production on Friday, 03 November, 2006 11:13:07
    Copyright (c) 2003, 2005, Oracle.  All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning, OLAP and Data Mining options
    [ora102 work db102]$ export ORACLE_SID=test
    [ora102 work test]$ expdp /
    Export: Release 10.2.0.1.0 - Production on Friday, 03 November, 2006 11:13:24
    Copyright (c) 2003, 2005, Oracle.  All rights reserved.
    UDE-00008: operation generated ORACLE error 1017
    ORA-01017: invalid username/password; logon denied
    Username:                                                                        
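    For reference, a minimal sketch of creating such an externally identified user, assuming the default os_authent_prefix of "ops$" and an OS account named "oracle" (adjust names and privileges to your environment):

    ```shell
    sqlplus -s "/ as sysdba" <<'EOF'
    CREATE USER ops$oracle IDENTIFIED EXTERNALLY;
    GRANT CREATE SESSION TO ops$oracle;
    -- plus whatever the export job needs, e.g. for a full export:
    GRANT EXP_FULL_DATABASE TO ops$oracle;
    EOF
    ```

    After this, `expdp /` connects as OPS$ORACLE whenever the OS session user is "oracle", which is what makes the password-less crontab invocation possible.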

  • Expdp without some tables data

    Hi.
    Is it possible to get a schema export with no data in some tables? Is there a parameter with which I can do that?

    Hi,
    You can do something like this;
    in the following example it exports 0 rows from the EMP table.
    -- ------- -Parfile---------------
    directory=DATA_DD_DIR dumpfile=test_1.dmp logfile=imp_test_1.log schemas=ME query=EMP:"where (1=2)"
    Export: Release 11.2.0.2.0 - Production on Thu Aug 8 03:14:48 2013
    Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
    With the Partitioning, Data Mining and Real Application Testing options
    Starting "ME"."SYS_EXPORT_SCHEMA_01":  me/******** parfile=dd.par
    Estimate in progress using BLOCKS method...
    Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 768 KB
    Processing object type SCHEMA_EXPORT/USER
    Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
    Processing object type SCHEMA_EXPORT/ROLE_GRANT
    Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
    Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
    Processing object type SCHEMA_EXPORT/TABLE/TABLE
    Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
    Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
    Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    Processing object type SCHEMA_EXPORT/TABLE/COMMENT
    Processing object type SCHEMA_EXPORT/TABLE/AUDIT_OBJ
    Processing object type SCHEMA_EXPORT/FUNCTION/FUNCTION
    Processing object type SCHEMA_EXPORT/FUNCTION/ALTER_FUNCTION
    Processing object type SCHEMA_EXPORT/VIEW/VIEW
    Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
    . . exported "ME"."ALL_ONES"                             5.820 KB       2 rows
    . . exported "ME"."ALL_ZEROS"                            5.414 KB       2 rows
    . . exported "ME"."BONUS"                                6.195 KB       1 rows
    . . exported "ME"."DEPT"                                 5.945 KB       5 rows
    . . exported "ME"."EMP"                                  7.984 KB       0 rows
    . . exported "ME"."EMP_TEST"                             8.601 KB      15 rows
    . . exported "ME"."PROJECT_DATA_1"                       26.07 KB       3 rows
    . . exported "ME"."SALGRADE"                             5.859 KB       5 rows
    . . exported "ME"."SQLA"                                 8.734 KB       1 rows
    . . exported "ME"."T"                                    5.046 KB       3 rows
    . . exported "ME"."TBL_MD_STG"                           7.085 KB       1 rows
    . . exported "ME"."TEST"                                 5.539 KB      10 rows
    Master table "ME"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
    Dump file set for ME.SYS_EXPORT_SCHEMA_01 is:
      /home/oracle/MYSHELL/test_1.dmp
    Job "ME"."SYS_EXPORT_SCHEMA_01" successfully completed at 03:15:18
    HTH

  • EXPDP 10g - skip estimate

    I have a big database (1.6 TB), and many rows in many tables were recently deleted.
    The tablespaces are still very big, but I want to shrink them to free up some disk space.
    So I want to try to export the whole database and import it with TABLE_EXISTS_ACTION=REPLACE (after that I can shrink the tablespaces).
    When I start the export, the utility estimates the size of the dump before the export really starts.
    I can switch to estimate mode STATISTICS (as you can see in my example), but there seems to be no way to skip it completely!?
    expdp system/<secret_pwd> DUMPFILE=expdp_<mydbsid>.dmp LOGFILE=expdp_<mydbsid>_shrinktest.log DIRECTORY=EXPDP_SHRINK_TEST_DIR FULL=Y ESTIMATE=STATISTICS
    Export: Release 10.2.0.4.0 - 64bit Production on Wednesday, 01 December, 2010 15:15:05
    Copyright (c) 2003, 2007, Oracle.  All rights reserved.
    Connected to: Oracle Database 10g Release 10.2.0.4.0 - 64bit Production
    Starting "SYSTEM"."SYS_EXPORT_FULL_13":  system/<secret_pwd> DUMPFILE=expdp_<mydbsid>.dmp LOGFILE=expdp_<mydbsid>_shrinktest.log DIRECTORY=EXPDP_SHRINK_TES
    _DIR FULL=Y ESTIMATE=STATISTICS
    Estimate in progress using STATISTICS method...
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
    ...For this database the estimate process consumes a lot of time (some hours), and for a production database time is very expensive.
    So, is there a way to skip the default estimate process?
    (I'm not interested in an explanation of the expdp ESTIMATE parameter and so on... I also know there are other ways to shrink the tablespaces, such as "alter table move ..." or "alter table shrink ...".)
    Anyway, if there is a way to deactivate the estimation, I'd be interested in it.

    @N Gasparotto:
    :-/ So if there is no way, I will try some of the other shrinking methods.
    @sb92075:
    I don't know if I understand your question, but I will shrink the tablespaces only once. For this database such a case (deleting so many rows from the tables) will not occur again in the future.
    @sybrand_b:
    Thumbs up... big words from a senior DBA.

  • Using expdp/impdp to backup schemas to new tablespace

    Hello,
    I have tablespace A for schemas A1 and A2, and I wish to back up these schemas to tablespace B using schema names B1 and B2 (so the contents of schemas A1 and A2 are copied into schemas B1 and B2, respectively, to use as backups in case something happens to schemas A1 or A2 or tablespace A).
    I began by creating tablespace B, and schemas B1 and B2. Then I attempted to populate schemas B1 and B2 by doing the following:
    EXPORT SCHEMAS:
    expdp a1/a1password@myIdentifier ESTIMATE=BLOCKS DUMPFILE=myDpumpDirectory:a1.dmp LOGFILE=myDpumpDirectory:a1_export.log SCHEMAS=a1 COMPRESSION=METADATA_ONLY
    expdp a2/a2password@myIdentifier ESTIMATE=BLOCKS DUMPFILE=myDpumpDirectory:a2.dmp LOGFILE=myDpumpDirectory:a2_export.log SCHEMAS=a2 COMPRESSION=METADATA_ONLY
    IMPORT SCHEMAS:
    impdp b1/b1password@myIdentifier DUMPFILE=myDpumpDirectory:a1.dmp LOGFILE=myDpumpDirectory:b1_import.log REMAP_SCHEMA=a1:b1
    impdp b2/b2password@myIdentifier DUMPFILE=myDpumpDirectory:a2.dmp LOGFILE=myDpumpDirectory:b2_import.log REMAP_SCHEMA=a2:b2
    This resulted in backing up schema A1 into schema B1 and schema A2 into B2, but the objects in schemas B1 and B2 were still created in tablespace A (when I wanted them in tablespace B).
    I will drop schemas B1 and B2, create new schemas, and try again. What command should I use to get the tablespace correct this time?
    Reviewing the documentation for data pump import
    http://docs.oracle.com/cd/E11882_01/server.112/e22490/dp_import.htm#SUTIL300
    specifically the section titled REMAP_TABLESPACE, I'm thinking that I could just add a switch to the above import commands to remap tablespace, such as:
    impdp b1/b1password@myIdentifier DUMPFILE=myDpumpDirectory:a1.dmp LOGFILE=myDpumpDirectory:b1_import.log REMAP_SCHEMA=a1:b1 REMAP_TABLESPACE=a:b
    impdp b2/b2password@myIdentifier DUMPFILE=myDpumpDirectory:a2.dmp LOGFILE=myDpumpDirectory:b2_import.log REMAP_SCHEMA=a2:b2 REMAP_TABLESPACE=a:b
    Is that correct?
    Also, is it OK to use the same export commands above, or should they change to support the REMAP_TABLESPACE?

    Hi,
    If I understand correctly, you want to import A1 into B1 and A2 into B2, each in the respective tablespace. The ESTIMATE parameter you are using on expdp cannot help you with that.
    You can do something like this with one dump file:
    expdp system/password directory=<myDpumpDirectory> dumpfile=A1_A2_Export.dmp logfile=A1_A2_Export.log schemas=A1,A2
    impdp system/password directory=<myDpumpDirectory> dumpfile=A1_A2_Export.dmp logfile=A1_A2_Import.log remap_schema=A1:B1 remap_schema=A2:B2 remap_tablespace=<A:B>
    HTH

  • EXPDP errors

    Hi,
    I got an error when I tried this. Please help, as we need to back up the schema.
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Starting "SYSTEM"."SYS_EXPORT_SCHEMA_02": system/******** parfile=expdp.par
    Estimate in progress using BLOCKS method...
    Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
    ORA-39126: Worker unexpected fatal error in KUPW$WORKER.CONFIGURE_METADATA_UNLOAD [ESTIMATE_PHASE]
    ORA-04063: package body "SYS.DBMS_METADATA" has errors
    ORA-06508: PL/SQL: could not find program unit being called: "SYS.DBMS_METADATA"
    ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
    ORA-06512: at "SYS.KUPW$WORKER", line 8358
    ----- PL/SQL Call Stack -----
    object line object
    handle number name
    0x7b228110 19208 package body SYS.KUPW$WORKER
    0x7b228110 8385 package body SYS.KUPW$WORKER
    0x7b228110 6628 package body SYS.KUPW$WORKER
    0x7b228110 12605 package body SYS.KUPW$WORKER
    0x7b228110 2546 package body SYS.KUPW$WORKER
    0x7b228110 9054 package body SYS.KUPW$WORKER
    0x7b228110 1688 package body SYS.KUPW$WORKER
    0x7d909dc8 2 anonymous block
    Estimate in progress using BLOCKS method...
    Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
    ORA-39126: Worker unexpected fatal error in KUPW$WORKER.CONFIGURE_METADATA_UNLOAD [ESTIMATE_PHASE]
    ORA-04063: package body "SYS.DBMS_METADATA" has errors
    ORA-06508: PL/SQL: could not find program unit being called: "SYS.DBMS_METADATA"
    ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
    ORA-06512: at "SYS.KUPW$WORKER", line 8358
    ----- PL/SQL Call Stack -----
    object line object
    handle number name
    0x7b228110 19208 package body SYS.KUPW$WORKER
    0x7b228110 8385 package body SYS.KUPW$WORKER
    0x7b228110 6628 package body SYS.KUPW$WORKER
    0x7b228110 12605 package body SYS.KUPW$WORKER
    0x7b228110 2546 package body SYS.KUPW$WORKER
    0x7b228110 9054 package body SYS.KUPW$WORKER
    0x7b228110 1688 package body SYS.KUPW$WORKER
    0x7d909dc8 2 anonymous block
    Job "SYSTEM"."SYS_EXPORT_SCHEMA_02" stopped due to fatal error at 14:14:00
    [1]+ Exit 5 expdp parfile=expdp.par
    Thanks

    It is impossible not to have RMAN if you have any Oracle database that has been supported in the last decade.
    RMAN is built in, RMAN is free, and RMAN is the only tool you should be using.
    Worried about learning something new? Here's what is required to run RMAN:
    # rman target /
    RMAN> backup database;
    One can get a lot more detailed, but this is all that is actually required.

  • Export error - expdp

    Hello gurus...
    DB version: 11.2.0.3
    expdp file estimate: 70 GB
    Excludes: 500 tables
    Using a parfile
    Target: schema refresh
    I am trying to export a schema excluding 500-odd tables, with an expdp dump file size estimate of 70 GB.
    I created a temp table listing all tables that need to be excluded from the export and am using a subquery in the EXCLUDE parameter, e.g. EXCLUDE=TABLE:"IN (SELECT tbl FROM list_of_tables)"
    I am also using FLASHBACK_TIME=TO_TIMESTAMP(TO_CHAR(SYSDATE,'YYYY-MM-DD HH24:MI:SS'),'YYYY-MM-DD HH24:MI:SS') in the parfile.
    I tried a few different things:
    1. Used expdp with 3 parallel workers.
    Result: the export writes to 3 files until about 50 GB, then stops and writes to only 1 file after 2 hours. For the first 2 hours the writing speed is quick, then it suddenly slows down dramatically, and ORA-01555 appears after waiting 5 hours.
    2. Used expdp without parallelism.
    Result: same as above... the performance problem occurs after 2 hours; after 5 hours many tables are skipped and ORA-01555 (snapshot too old) happens.
    The trace file mentions waits on various blocks, but not consistently on one block.
    I've read hundreds of Metalink documents, but they all point to different places.
    Any other ways to tackle this problem?

    I'll hazard a guess: since the job is writing to only one file, it is still writing your largest table.
    A couple of ideas:
    1. Try using ACCESS_METHOD=EXTERNAL_TABLE (it may or may not help, but it's worth a try).
    2. Exclude your largest table as well, and run a separate expdp just for that one table with the PARALLEL option.
    Oracle creates an internal list of all tables to export in descending order of size and starts by allocating one table to each worker thread. That is why you see 3 files being written at the beginning; then 2 stop and only one keeps writing, because your largest table is probably still being unloaded. It also helps if your statistics are accurate (or relatively new), so that Oracle can make a good judgement call on size. Also, why only 3 files?
    Raj
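    A sketch of idea 2, with hypothetical schema, table, and file names (the %U wildcard lets parallel workers write separate files; command-line quoting of EXCLUDE varies by shell, and a parfile avoids the escaping entirely):

    ```shell
    # Main job: everything in the schema except the biggest table
    expdp system SCHEMAS=app DIRECTORY=dp_dir DUMPFILE=app_%U.dmp PARALLEL=3 \
          EXCLUDE=TABLE:\"= \'BIG_TABLE\'\"

    # Separate job: just the big table, with its own parallel workers
    expdp system TABLES=app.BIG_TABLE DIRECTORY=dp_dir DUMPFILE=big_%U.dmp PARALLEL=4
    ```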

  • Expdp - long time

    Hello
    I have a question about expdp. I exported my database with expdp without the parallel option; this 280 GB export ran for 11 hours. I have now upgraded my database to Enterprise Edition and export the DB with the parallel option (8 CPUs). But now the export runs for 12 hours. Is this normal?
    Thanks for the answer.
    Best regards.
    roger

    Export time mainly depends on the size of the database. As per your earlier update, you upgraded your database to Enterprise Edition; during that time your database size may have increased, which would explain the increase in export time. Please provide more details, such as the exact command you are using for the export.
    Honestly, a 1-hour increase in export duration should not be a big concern, as it could have been caused by other programs running on the server at the same time.
    HTH
    ~Ravi

  • Export error ORA-08186

    Hello gurus,
    Please help me with the export below.
    Here is the parfile for the export:
    cat expdp.par
    tables=USER1.TMP_01,USER1.TMP_02,USER1.TEMP_03
    directory=DATAPUMP
    dumpfile=expdp_%u.dmp
    logfile=expdp.log
    JOB_NAME=expdp_job
    FILESIZE=4G
    VERSION=LATEST
    FLASHBACK_SCN=61281521
    select current_scn from v$database;
    61292504
    Here is my export log:
    expdp "'/ as sysdba'" parfile=expdp_dsatrn.par
    Export: Release 11.2.0.2.0 - Production on Thu Apr 18 12:17:11 2013
    Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Starting "SYS"." EXPDP_JOB":  "/******** AS SYSDBA" parfile=expdp.par
    Estimate in progress using BLOCKS method...
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 384 KB
    Processing object type TABLE_EXPORT/TABLE/TABLE
    Processing object type TABLE_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
    Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
    Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
    Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    Processing object type TABLE_EXPORT/TABLE/INDEX/FUNCTIONAL_AND_BITMAP/INDEX
    Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/FUNCTIONAL_AND_BITMAP/INDEX_STATISTICS
    Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
    ORA-31693: Table data object "USER1"."TMP_01" failed to load/unload and is being skipped due to error:
    ORA-02354: error in exporting/importing data
    ORA-08186: invalid timestamp specified
    ORA-06512: at "SYS.TIMESTAMP_TO_SCN", line 1
    ORA-31693: Table data object "USER1"."TMP_02" failed to load/unload and is being skipped due to error:
    ORA-02354: error in exporting/importing data
    ORA-08186: invalid timestamp specified
    ORA-06512: at "SYS.TIMESTAMP_TO_SCN", line 1
    . . exported "USER1"."TEM_03"                  6.648 KB       0 rows
    Master table "SYS"." EXPDP_JOB" successfully loaded/unloaded
    Dump file set for SYS. EXPDP_JOB is:
      /db1/rdbm7/expdp_01.dmp
    Job "SYS"." EXPDP_JOB" completed with 2 error(s) at 12:17:41
    According to the suggestion in the thread below, I increased my undo from 2 GB to 8 GB, still with no result:
    Expdp fails with errors when using flashback_scn
    I also tried the following and got an error:
    flashback_time=\"TO_TIMESTAMP \(TO_CHAR \(SYSDATE, \'YYYY-MM-DD HH24:MI:SS\'\), \'YYYY-MM-DD HH24:MI:SS\'\)\"
    LRM-00112: multiple values not allowed for parameter 'flashback_time'
    LRM-00113: error when processing file 'expdp_dsatrn.par'
    and
    FLASHBACK_TIME=TO_TIMESTAMP(TO_CHAR(SYSDATE,'YYYY-MM-DD HH24:MI:SS'),'YYYY-MM-DD HH24:MI:SS')
    LRM-00116: syntax error at ')' following 'YYYY-MM-DD HH24:'
    LRM-00113: error when processing file 'expdp.par'
    Edited by: user3636719 on Apr 18, 2013 10:15 AM
    Edited by: user3636719 on Apr 18, 2013 12:36 PM

    No need to complicate it. Try it this way instead:
    FLASHBACK_TIME=SYSTIMESTAMP
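    The LRM errors above come from the commas in the TO_TIMESTAMP expression being parsed as multiple parameter values; putting the whole thing in a parfile, or simply using SYSTIMESTAMP, avoids that. A sketch reusing the table names and directory from the question's parfile:

    ```shell
    cat > expdp.par <<'EOF'
    tables=USER1.TMP_01,USER1.TMP_02,USER1.TEMP_03
    directory=DATAPUMP
    dumpfile=expdp_%u.dmp
    logfile=expdp.log
    flashback_time=systimestamp
    EOF
    expdp "'/ as sysdba'" parfile=expdp.par
    ```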

Impdp: objects created with compilation warnings

    expdp completed without any errors, but impdp did not:
    ORA-39082: Object type TRIGGER:"myuser_name"."AA_CUR_TRIGGER" created with compilation warnings.
    That is one of the compilation warnings I picked out; there are many other such errors, for example on views.
    Did I miss anything? Please help.

    The problem could just be ordering or missing information. expdp exports objects in the best order it can think of: if you are exporting a table, it will export the table before the index, and the index before the index statistics, etc. Sometimes, though, objects depend on other objects that have not been imported yet.
    Let's say you have a procedure that uses a view and a view that uses a procedure. The way expdp/impdp works (and I'm not sure of the order of these 2 object types) is that it creates all procedures and then, later on, all views (or the other way around: all views, then all procedures). If views are created before procedures, then a view that calls a procedure gets a compilation warning because the procedure has not been imported yet. If it is the other way around, the procedure gets the compilation warning because the view is not there yet.
    The objects will be created, and they will recompile the next time you use them. You could also go in after the impdp job is complete and issue
    alter <object_type> <schema>.<name> compile;
    and they should compile.
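    A sketch of doing that in bulk with DBMS_UTILITY.COMPILE_SCHEMA (the schema name is taken from the error message above; adjust as needed):

    ```shell
    sqlplus -s "/ as sysdba" <<'EOF'
    -- Recompile everything invalid in the imported schema...
    EXEC DBMS_UTILITY.COMPILE_SCHEMA(schema => 'MYUSER_NAME', compile_all => FALSE);
    -- ...then list anything still invalid:
    SELECT object_type, object_name
      FROM dba_objects
     WHERE owner = 'MYUSER_NAME' AND status = 'INVALID';
    EOF
    ```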
    The other problem you could be running into is that the object you are creating uses another object that is not in the database, or that a grant is missing. Let's say you create a procedure proc_a in the user1 schema. If you try to create a view in user2 that calls proc_a, and proc_a does not exist (or user2 lacks a grant on it), you can get the compilation warning.
    So, I don't think you missed anything.
    Hope this helps.
    Dean

  • Datpump Export,Import across lower and higher versions

    Versions involved : 10.2.0.4, 11.2.0.3
    Operating System  : AIX 6.1
    Let's say I export a schema from an 11.2 database using 11.2 expdp.
    Can I import the above dump (11.2) into a 10.2 database using 10.2 impdp?

    Setting VERSION=10.2 worked fine for me.
    Since I was curious, I also tested taking a table-level export dump from an 11.2.0.3 schema using 11.2.0.3 expdp without the VERSION parameter. I successfully managed to import those tables into a 10.2.0.2 schema, needless to say using 10.2.0.2 impdp.
    I don't know how this worked. Maybe exporting/importing from a higher to a lower version fails at the schema level; I need to test that when I have time.
    This is contrary to the Oracle documentation (11.2 Database Utilities guide):
    "Data Pump Import cannot read dump file sets created by a database release that is newer than the current database release, unless those dump file sets were created with the VERSION parameter set to the release of the target database. Therefore, the best way to perform a downgrade is to perform your Data Pump export with the VERSION parameter set to the release of the target database."
    But anyway, I am glad that it let me export/import from a higher to a lower version.

  • Error in Estimate of schema for EXPDP

    Hi all,
    OS: Windows
    DB: 11.2 Express Edition
    I needed to estimate the size of the dump file for schema A before exporting it, so I used the following command:
    expdp schema_A/schema_a estimate_only=y nologfile=y
    But I got the error below:
    Connected to: Oracle Database 11g Express Edition Release 11.2.0.2.0 - Production
    ORA-31626: job does not exist
    ORA-31633: unable to create master table "schema_A.SYS_EXPORT_SCHEMA_05"
    ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
    ORA-06512: at "SYS.KUPV$FT", line 1020
    ORA-01031: insufficient privileges
    Kindly help.
    Regards,
    Sphinx

    Also refer to this:
    http://arjudba.blogspot.in/2008/09/expdp-fails-with-ora-31626-ora-31633.html
    Privileges needed for Import and export

  • Re:No release of production order without standard cost estimate

    Hello Experts
    I do not want to release any production order without a standard cost estimate.
    I have come across the same problem raised by some other users; can you please suggest how to achieve this?
    Is there any way to achieve this without a user exit?
    Thanks and Regards
    Subbu

    Hello Subbu,
    In this case, you need to set the status to "B4" (Blocked) in the Costing 1 view of the material master when you create the costing view, and change it to status "03" when you release the cost estimate.
    Please note that this field can also be overwritten in the MRP view. Hence, please keep good control of the material master and good coordination with the PP team to achieve this.
    Please let me know if you require any clarifications.
    Thank you,
    Regards,
    Santosh
    Please reward points if helpful.

  • Re: create material cost estimate with ck11n without

    hi gurus,
    I need to create a material cost estimate with CK11N without a cost result, but first I need to check the BOM with material component 11111. Please also instruct me how to correctly maintain the BOM list to get the standard cost.
    If anybody has done the same scenario, please give me your golden inputs.
    thanks in advance,
    kumar.b

    Hi,
    As note 351835 explains, the calculation needs an info record to calculate prices from purchasing. Prices from outline agreements are generally not taken into account. According to note 499699, the system may consider entries in the source list, but it will ignore entered agreements (before this correction, any source list entries with an agreement were ignored).
    Prices from agreements can only be considered either via a modification or by using the user exit for material valuation (COPCP005).
    This, unfortunately, is a restriction resulting as a side effect of note 499699: as soon as an agreement is entered in the source list, the fixed indicator is no longer considered.
    You therefore have two options to overcome this:
    1) Create a second source list entry without an agreement; the fixed indicator will then be reflected. Report Z_CREATE_EORD from note 409960 could be used to do this automatically.
    2) Use user exit EXIT_SAPLMEQR_001 to "overrule" the standard search for the source of supply (enhancement LMEQR001).
    regards
    Waman

  • Using expdp to export a mix of tables with data and tables without data

    Hi,
    I would like to create a .dmp file using expdp, exporting one set of tables with data and another set without data. Is there a way to do this in a single .dmp file? For example, I want all the tables in a schema with data, but for the fact tables in that schema I only want the table objects, not the data. I thought it might be easier to create two separate .dmp files, one for each scenario, but it would be nice to have one .dmp file that satisfies the requirement. Any help is appreciated.
    Thanks,
    -Rodolfo
    Edited by: user6902559 on May 11, 2010 12:05 PM

    You could do this with WHERE clauses (via the QUERY parameter). Let's say you have 10 tables to export, 5 with data and 5 without data. I would do it like this:
    tab1_w_data
    tab2_w_data
    tab3_w_data
    tab4_w_data
    tab5_w_data
    tab1_wo_data
    tab2_wo_data
    tab3_wo_data
    tab4_wo_data
    tab5_wo_data
    I would make one generic query
    query="where rownum = 0"
    and I would make 5 specific queries
    query=tab1_w_data:"where rownum > 0"
    query=tab2_w_data:"where rownum > 0"
    query=tab3_w_data:"where rownum > 0"
    query=tab4_w_data:"where rownum > 0"
    query=tab5_w_data:"where rownum > 0"
    The first query is applied to every table that doesn't have its own specific query, so those tables export no rows; the next five each apply to the corresponding table.
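    Putting it together in a parfile might look like this (directory, dump file, and schema names are hypothetical, and the remaining tab*_w_data entries follow the same pattern):

    ```text
    directory=DATA_PUMP_DIR
    dumpfile=mixed.dmp
    logfile=mixed.log
    schemas=ME
    query="where rownum = 0"
    query=tab1_w_data:"where rownum > 0"
    query=tab2_w_data:"where rownum > 0"
    query=tab3_w_data:"where rownum > 0"
    query=tab4_w_data:"where rownum > 0"
    query=tab5_w_data:"where rownum > 0"
    ```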
    Dean
