IMPDP and GATHER_SCHEMA_STATS

1. Do I have to GATHER_SCHEMA_STATS after every IMPDP of data?
2. How frequently should we gather schema-level statistics?

1. If you import with STATISTICS=NO, then you should gather statistics for the imported objects/schema yourself. Even if you specify STATISTICS=YES, if your server configuration differs (in memory, file layout, or CPU count), you still want to gather statistics. Collecting statistics after each import job is the safer approach, because that way you have fresh statistics you can rely on.
2. How frequently you should gather statistics depends on how much data you are updating/inserting/deleting.
In 9i, you can run GATHER_SCHEMA_STATS every day with the STALE option; Oracle will automatically find the objects that need new statistics and gather them.
In 10g, statistics are gathered automatically, and that should work fine for almost every kind of database. You can still change the configuration; the statistics job runs through the new Scheduler.
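For instance, a minimal sketch of the STALE-based gathering mentioned above (the schema name HR is only an example):
BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(
    ownname => 'HR',            -- replace with the imported schema
    options => 'GATHER STALE',  -- only re-gather objects whose statistics are stale
    cascade => TRUE);           -- gather index statistics as well
END;
/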
Thanks
~Keyur

Similar Messages

  • Trigger body isn't remapped during IMPDP with REMAP_SCHEMA in Oracle 10g

    Hello,
    we are facing the following problem:
    During an IMPDP we remap the schema. The schema includes triggers. The trigger bodies use normal syntax without any reference to the schema/user.
    If we start the import with Data Pump using the REMAP_SCHEMA option, the triggers are remapped to the new schema, but the trigger bodies still point to the old schema.
    For example:
    CREATE OR REPLACE TRIGGER "OLD_SCHEMA".NAME_OF_TRIGGER
    BEFORE INSERT ON "OLD_SCHEMA"."TABLE_NAME" REFERENCING OLD AS OLD NEW AS NEW FOR EACH ROW BEGIN
    END;
    After using the REMAP_SCHEMA option:
    CREATE OR REPLACE TRIGGER "NEW_SCHEMA".NAME_OF_TRIGGER
    BEFORE INSERT ON "OLD_SCHEMA"."TABLE_NAME" REFERENCING OLD AS OLD NEW AS NEW FOR EACH ROW BEGIN
    END;
    So the trigger body in the NEW_SCHEMA still references the OLD_SCHEMA and the trigger doesn't work. We wrote the trigger without referencing a schema, so the OLD_SCHEMA resp. NEW_SCHEMA qualification must be added by Data Pump.
    The question: is there a way to remap the triggers correctly into the new schema, or is this behavior "normal" for Oracle 10g Data Pump?
    We are using Oracle Database 10g 10.2.0.4 with the latest patches on Windows Server 2003 SP2.
    Thank you for your attention.

    Hello,
    I had the same problem, and you can't fix it by changing an impdp parameter. You really need to change the source of those triggers so they don't use hard-coded schema references.
    The solution is to change the source code of those triggers, or to use the SQLFILE parameter of impdp and recreate the triggers yourself after the impdp has finished. See the sketch below.
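    A sketch of that SQLFILE approach (the directory, dump file, and output file names here are examples, not from the original post):
    impdp system/password directory=DATA_PUMP_DIR dumpfile=export.dmp include=TRIGGER sqlfile=triggers.sql
    This writes the trigger DDL into triggers.sql without executing anything; edit the schema references there and run the script after the main import.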
    For more info check Metalink note: 750783.1
    I hope this helps.
    Regards,
    Michiel.

  • I need help with IMPDP and the parameter 'EXCLUDE'...

    Hi all... I'm new to this forum and I'm looking for some help =)
    I'm working with Data Pump (EXPDP and IMPDP), and I'm importing some tables from HR. I'm able to import those tables without any problem, but I have to exclude the triggers and procedures. I'm using the parameter EXCLUDE=TRIGGER,PROCEDURE and it shows me an error message, because it's unable to find an object called "PROCEDURE"... if I remove EXCLUDE=PROCEDURE then it shows no error message... can someone help me?
    This is the script I'm using:
    IMPDP system/mypassword directory=DATA_PUMP REMAP_SCHEMA=HR:EXAMEN include=table:\"in('DEPARTAMENTS','EMPLOYEES','JOB_HISTORY','JOBS')\" EXCLUDE=TRIGGER,PROCEDURE DUMPFILE=IMP_HR.DMP LOGFILE=IMP_HR_.LOG

    Hi,
    Use:
    EXCLUDE=TRIGGER
    EXCLUDE=PROCEDURE
    Always read the docs first:
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14215/dp_import.htm#i1010670
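    Note also that in this release EXCLUDE and INCLUDE are documented as mutually exclusive, so a corrected sketch of the command (keeping the names from the original post) would drop the INCLUDE clause:
    IMPDP system/mypassword directory=DATA_PUMP REMAP_SCHEMA=HR:EXAMEN EXCLUDE=TRIGGER EXCLUDE=PROCEDURE DUMPFILE=IMP_HR.DMP LOGFILE=IMP_HR_.LOG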

  • Datapump expdp impdp and sequences

    Hi all. I have a 10g XE database on my dev machine with a whole load of test data in it. I want to copy the schema to another machine without the data, which I can do with expdp usr/pwd CONTENT=metadata_only, and I can import it with impdp and everything works tickety-boo. Except that all the sequences start where the test data left off. Can someone please tell me how to copy the schema with the sequences reset? I'm guessing either I can export the schema resetting the sequences (somehow) or export EXCLUDING the sequences and create them separately (somehow). Thanks for reading.

    I don't think you can reset the sequences directly. You can run the import to a SQL file and then use search/replace on it:
    $ impdp user/pass dumpfile=test.dmp directory=MY_DIR include=SEQUENCE sqlfile=sequences.sql
    You will have several lines like this inside "sequences.sql":
    CREATE SEQUENCE "USER"."SEQ_NAME" MINVALUE 1 MAXVALUE 99999999 INCREMENT BY 1 START WITH 1857 CACHE 20 ORDER CYCLE ;
    Then just use a regular expression to replace "START WITH NNNN" with "START WITH 1".
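    For example, a one-liner along these lines could do the replacement (a sketch; GNU sed syntax assumed):
    $ sed -E 's/START WITH [0-9]+/START WITH 1/' sequences.sql > sequences_reset.sql
    Review sequences_reset.sql, then run it in the target database to create the sequences.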

  • Impdp and commit=y

    Hello,
    I'm doing a large-scale import using Data Pump on a table with many partitions.
    In the original import utility there was COMMIT=Y, which prevented generating too much undo in huge loads. Is there something similar to use in Data Pump?
    Thanks

    no, "commit" hasn't translate in impdp
    a trick will be change "undo_retention" to a lower value before import and set to original value when the import finish.
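    A sketch of that trick (the values are examples only; note the unit is seconds):
    ALTER SYSTEM SET undo_retention = 300;  -- before the import: lower the retention target
    ALTER SYSTEM SET undo_retention = 900;  -- after the import: restore the original value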

  • Impdp and expdp failed due to the following error

    Hi to everyone,
    whenever I issue an impdp or expdp command it gives the error below.
    I have used Data Pump intensively, but I cannot figure out why this happens on this DB server only.
    Even if I just run impdp with a username and password, the error is the same. If anyone has any information, I am waiting for your reply.
    The error is:
    Import: Release 10.2.0.1.0 - Production on Friday, 24 February, 2006 18:53:06
    Copyright (c) 2003, 2005, Oracle. All rights reserved.
    Password:
    Connected to: Oracle Database 10g Release 10.2.0.1.0 - Production
    ORA-39006: internal error
    ORA-39065: unexpected master process exception in DISPATCH
    ORA-01775: looping chain of synonyms
    ORA-39097: Data Pump job encountered unexpected error -1775

    Hi,
    Error: ORA-01775: looping chain of synonyms
    Cause: You created a series of synonyms that resulted in a circular reference.
    Action: The options to resolve this Oracle error are:
    Correct the synonyms so that the circular reference is removed.
    A circular reference can occur as follows:
    CREATE SYNONYM syn1 for syn2;
    CREATE SYNONYM syn2 for syn3;
    CREATE SYNONYM syn3 for syn1;
    see user_synonyms ...
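    For instance, this query lists each synonym and its target, so the circular chain can be traced by hand (query dba_synonyms as a privileged user to cover all schemas):
    SELECT synonym_name, table_owner, table_name
    FROM user_synonyms
    ORDER BY synonym_name;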

  • Confusing impdp and expdp stuff

    Hello, I am kind of new to Oracle in general.
    I am building a test environment.
    I want to move a production schema to a test RAC DB I have built.
    I read up and decided that expdp would be the best method (correct me if I am wrong, please).
    I am on Oracle Linux.
    Now I have expdp'ed a schema successfully, but when running impdp on the other system I am getting a total of 735 errors.
    I have created the exact same user and am using the following command.
    Please let me know whether I am missing a step here.
    $>impdp system/<password> directory=impdp_dir dumpfile=UNIXDATA.DMP logfile=impdpUNIXdata.log fromuser='unixdata' touser='unixdata' commit=Y ignore=Y
    Would the log file help? Here is one of the errors:
    Processing object type SCHEMA_EXPORT/JOB
    ORA-39083: Object type JOB failed to create with error:
    ORA-00001: unique constraint (SYS.I_JOB_JOB) violated
    and also this:
    ORA-39083: Object type MATERIALIZED_VIEW failed to create with error:
    ORA-31625: Schema UNIXDATA is needed to import this object, but is unaccessible
    ORA-01435: user does not exist
    Thanks a lot for your help!
    Ali
    Edited by: user10270464 on Apr 16, 2010 5:40 AM

    Hi,
    You don't say what version of Oracle you are using, so trying to help without that info is more difficult. It also looks like you are mixing some old imp parameters with new impdp parameters. Your mixed command is:
    impdp system/<password> directory=impdp_dir dumpfile=UNIXDATA.DMP logfile=impdpUNIXdata.log fromuser='unixdata' touser='unixdata' commit=Y ignore=Y
    You said that you did a schema-mode export, so I would change the impdp command to:
    impdp system/<password> directory=impdp_dir dumpfile=UNIXDATA.DMP logfile=impdpUNIXdata.log table_exists_action=append
    The parameters I changed were:
    fromuser='unixdata' -- if the only thing in the dumpfile is what you want, then you don't need fromuser
    touser='unixdata' -- since you are importing into the same schema, you don't need touser
    commit=y -- there is no equivalent in Data Pump
    ignore=y -- the same as table_exists_action=append
    Now for your errors: the first one is not familiar to me, and I have seen the second one but can't remember the details. Having the version that you are running may help jog my memory.
    Dean

  • Impdp and scheduler maintenance window

    Environment: Oracle 11.1.0.7.0 running on HP-UX B.11.31 U ia64.
    I was performing an import using impdp. Before starting the import, I manually disabled the maintenance window.
    But once the import was over and I went to re-enable the maintenance window, I found it had already been enabled.
    On investigating, I found that the time it was enabled matches the import finish time.
    Before performing the import using impdp, I disabled the maintenance window using:
    DBMS_SCHEDULER.DISABLE(name => 'mwindow', force => TRUE);
    After the import, I checked the maintenance window using:
    SELECT
    w.window_name,
    w.repeat_interval,
    w.duration,
    w.enabled
    FROM dba_scheduler_windows w
    and the value was ENABLED = TRUE.
    Querying DBA_SCHEDULER_WINDOW_LOG, the LOG_DATE matches the impdp finish time; however, the ADDITIONAL_INFO column has the value REASON="manually enabled".

    Are you doing a system (full) import here?
    If you are doing a system import, then the properties of the maintenance window will also be imported, and that may include enabling the window.
    Since a system export exports system-level properties as well, you might need to disable the maintenance window before doing your system export.
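    For reference, a sketch of bracketing the import with an explicit disable and re-enable (the window name SYS.MONDAY_WINDOW is only an example):
    EXEC DBMS_SCHEDULER.DISABLE(name => 'SYS.MONDAY_WINDOW', force => TRUE);
    -- run the impdp job, then confirm the window state before re-enabling
    SELECT window_name, enabled FROM dba_scheduler_windows;
    EXEC DBMS_SCHEDULER.ENABLE(name => 'SYS.MONDAY_WINDOW');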
    Hope this helps,
    Ravi.

  • IMPDP and EXPDP

    Is there an option to be specified in EXPDP (Data Pump export) and IMPDP (Data Pump import) to export/import the packages and procedures from one user/schema to another within the same database?

    You can use the INCLUDE parameter and the REMAP_SCHEMA parameter (one INCLUDE clause per object type):
    expdp user/password include=procedure include=package ...
    If you have a loopback dblink, then you can do it all in one command without creating a dump file. No expdp is required. Do this:
    impdp user/password network_link=your_loopback_dblink include=package include=procedure remap_schema=old:new schemas=old ...
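    If the loopback dblink does not exist yet, it could be created along these lines (a sketch; the link name and the TNS alias orcl are assumptions):
    CREATE DATABASE LINK your_loopback_dblink CONNECT TO old_user IDENTIFIED BY old_password USING 'orcl';
    With network_link, impdp reads the source schema directly over the link, so no dump file is written.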
    Hope this helps
    Dean

  • Impdp and available memory

    Just a quick question about how available memory can affect a Data Pump import.
    Will increasing the SGA from 2 GB to 8 GB on a database into which data is being imported affect the total import time?
    Just a bit curious!
    Thanks

    Hi,
    What is the import method: external table path or direct path? If it is direct path, it will not use the buffer cache; it inserts directly above the HWM. If it is the external table path, it will use the SGA. I don't see the point of increasing the size of the SGA for the import process alone. If you have the luxury of an 8 GB instance for the import process, you can have it for normal database operations too. Rather, run the import at a less busy time, or address the questions below before increasing the size of the SGA for this process:
    1) How much data are you going to insert?
    2) How many users will be connected to the Oracle database when the import happens?
    3) How big is the data you are importing, and how will it affect performance?
    You can go for the direct path insert, which will be faster than the external table path.
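    As an aside, later Data Pump releases expose an ACCESS_METHOD parameter to request a load path explicitly (a sketch; availability depends on the version, and AUTOMATIC is the default):
    impdp user/password directory=DATA_PUMP_DIR dumpfile=big.dmp access_method=direct_path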
    Thanks,
    Vijay

  • Upgrading using expdp and impdp

    I am upgrading from 10.2 to 11.1 by creating a new 11.1 DB, then running expdp from the 10.2 DB and impdp into the 11.1 DB. Oracle Text is installed and in use in the 10g DB. Should I install Oracle Text in the 11.1 DB first before I do a full impdp? Or install Oracle Text and only do a schema-level impdp, skipping the import into the CTXSYS schema? I would like to know the correct way to do this w/o screwing up the Text indexes on the application schema.
    Thanks.

    Hi,
    CTXSYS is never part of an export (think about it: it is the same as with SYS). Only indexes used by application schemas are exported. The best way to handle Oracle Text during an upgrade is to script the indexes before the export using ctx_report.create_index_script. Do the export and import, and afterwards run the scripts for the Text indexes. The preferences for Text indexes are lost in the export (not exported), so you can get into trouble if you later need to change such indexes' preferences.
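    A sketch of scripting one Text index before the export (the index name MY_TEXT_IDX is only an example):
    SET LONG 1000000
    SPOOL my_text_idx.sql
    SELECT ctx_report.create_index_script('MY_TEXT_IDX') FROM dual;
    SPOOL OFF
    Running the spooled script after the import recreates the index together with its preferences.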
    Herald ten Dam
    http://htendam.wordpress.com

  • Using expdp and impdp instead of exp and imp

    Hi All,
    I am trying to use expdp and impdp instead of exp and imp.
    I am facing a few issues while using expdp. I have a job which exports data from one DB server and then imports it into another DB server. Both DB servers run on separate machines. The job runs on various client machines, not on either DB server.
    To use expdp we have to create a DIRECTORY object, and as I understand it, it has to be created on the DB server. The problem is that the job cannot access the DB server or files on the DB server. Also, the dump file created is moved by the job to other machines based on requirements (usually it goes to multiple DB servers).
    I need a way to create dump files on the machine where the job runs.
    If I am not using expdp correctly, please guide me. I am new to expdp/impdp and exp/imp.
    Regards,

    Thanks for the quick reply.
    The job executing expdp/impdp runs on Red Hat Enterprise Linux Server release 5.6 (Tikanga).
    The Oracle server release is 11.2.0.2.0.
    The job cannot access the Oracle server as it does not have privileges (in fact there is no user/password to access the Oracle server machines). Creating the dump on the Oracle server and moving it is not an option for this job; it has to keep the dump with itself.
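    For context, the directory object discussed above is created on the database server and points at a server-side path, which is exactly what this job cannot reach (the path and grantee are examples):
    CREATE DIRECTORY dump_dir AS '/u01/app/oracle/dumps';
    GRANT READ, WRITE ON DIRECTORY dump_dir TO job_user;
    Unlike exp/imp, expdp/impdp always read and write their files through such a directory object on the server, so a client-side dump file is not possible with Data Pump.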
    Regards,

  • Steps to export and import an Oracle 10g database from AIX to AIX and Linux

    Hi,
    I need the steps to export an Oracle 10g database from an AIX server to another AIX server and to a Linux server.
    Please give me all the steps, as this is my first export and import activity.
    thanks,

    For 10g there are two ways to do this:
    1) Regular exp/imp.
    2) Data Pump expdp/impdp.
    As this is your first export and import activity, understanding the relevant concepts first is essential.
    Documentation link:
    http://download-east.oracle.com/docs/cd/B19306_01/server.102/b14215/toc.htm
    Read chapters 1, 2, and 3 (Data Pump expdp/impdp) and chapter 19 (regular exp/imp). Good luck.

  • Changing database to noarchivelog mode during impdp

    Hi Guys,
    I am using Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production on RHEL5.
    I am in the middle of an impdp and I want my database to run in NOARCHIVELOG mode, as the import creates a lot of archive logs.
    Would it be proper to stop the import:
    e.g. Import> STOP_JOB=IMMEDIATE
    then shut down the database and put the database in NOARCHIVELOG mode,
    then open the database and run
    impdp hr/hr@mydb ATTACH=myfulljob
    Will these steps work fine?
    Or should I execute:
    alter system archive log stop;
    Please help!

    Vinod Dhandapani wrote:
    > Running the database in NOARCHIVELOG mode means you should be ready to lose data in case there is a crash. I would recommend changing the size of the redo logs so that frequent archive logs are not created.
    Which does nothing about the space required for the archive logs. If you are generating redo at the rate of 1 GB/hour, you will consume archivelog destination space at the rate of 1 GB/hour; it doesn't matter whether those files are 10 MB each or 100 MB each.
    > If space is the problem then use a shell script to move the archive logs to another location (tape)...
    And hopefully that shell script invokes RMAN to do the work, so that in the event of recovery RMAN will know that backups were taken and where they are located.
    > In case you want to convert the database from archivelog to noarchivelog mode, follow the steps provided in the following link.
    http://racadmin.blogspot.com/2011/06/convert-archivelog-mode-to-noarchivelog.html
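    For reference, the usual sequence for the switch itself (a sketch; stop the Data Pump job first, as described above):
    SHUTDOWN IMMEDIATE
    STARTUP MOUNT
    ALTER DATABASE NOARCHIVELOG;
    ALTER DATABASE OPEN;
    Switching back to ARCHIVELOG mode afterwards follows the same pattern, ideally followed by a full backup, since the NOARCHIVELOG period breaks the recovery chain.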
    Thanks

  • ORA-39166: Object was not found in impdp

    I am trying to run an impdp into another database.
    Both databases are 11g on AIX.
    This is the command I use:
    impdp &dUSR/&dPASS@&dCONN REMAP_SCHEMA=&uUSR:&dUSR directory=DATA_PUMP_DIR dumpfile=file.dmp logfile=...log content=DATA_ONLY table_exists_action=truncate 'TABLES=(table1,table2,...)'
    &dUSR is the user for the destination database.
    &uUSR is the user for the source database.
    I am getting the following errors:
    ======================
    ORA-39166: Object TAR.TABLE1 was not found.
    ORA-39166: Object TAR.TABLE2 was not found.
    ======================
    When the import command is modified (table1 -> &uUSR.table1) so that it becomes:
    impdp &dUSR/&dPASS@&dCONN REMAP_SCHEMA=&uUSR:&dUSR directory=DATA_PUMP_DIR dumpfile=file.dmp logfile=...log content=DATA_ONLY table_exists_action=truncate 'TABLES=(&uUSR.table1,&uUSR.table2,...)'
    the import works. I am already using REMAP_SCHEMA, so why is this necessary?

    VST,
    > Expdp was done as tables=foo rather than tables=user2.foo.
    This means that the foo table in the dumpfile would have to be in the schema running the export job. So, let's say your export was:
    expdp user1/pass1 tables=foo dumpfile=mydump.dmp ...
    This means that table user1.foo would have to exist (or you would get an export error), and this table would be in the dumpfile.
    > So the bottom line is: to transfer between databases and schemas I need to use the schema either during expdp or impdp, and remap_schema is still necessary?
    If you want the table user1.foo to be imported into the target database, all you need to do is run either this command:
    impdp any_prived_user/password full=y dumpfile=mydump.dmp ...
    -- this requires that user1 has already been created in the target database; it will create table user1.foo
    or this command:
    impdp user1/password full=y dumpfile=mydump.dmp ...
    -- this requires that user1 exists, because you are running the job as user1.
    If you want this to go into another schema, then you will need to use remap_schema in the impdp command.
    Hope this helps
    Dean
