Importing statistics using datapump

When importing production data into my test databases (schema imports) I want to import the production statistics for the imported schemas, then lock them, so that hopefully all testing will be done using the same explain plans.
When using expdp on production I do not specify any EXCLUDE objects, and when I impdp into my test DB it imported most of the stats, but for 3 tables and all their indexes it regathered the stats.
Any ideas why it recalculated stats for just those 3 tables (+ indexes), and how can I avoid this?
The schemas I import into are dropped and recreated prior to the import.

Hi,
You don't mention what version you are running. The following works on 10.2 and newer. The reason stats are regathered is due to some type of background process that regathers stats on tables if stats exist and the database is idle. (Or at least that is what I seem to remember reading.) I don't know all of the conditions, but that is what is happening. If you don't create stats, then they are not regathered.
To verify whether or not you have the lock code from Data Pump, you could run the impdp command and add:
content=metadata_only include=statistics sqlfile=stats.sql
Then edit the stats.sql file to see if there are calls to the dbms_stats lock procedures.
If the lock calls are present, you should see
dbms_stats.lock_table_stats for each table and
dbms_stats.lock_index_stats for each index.
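For example, the full command might look like this (a minimal sketch; the directory, dump file, and schema names are hypothetical):
impdp system/password directory=DATA_PUMP_DIR dumpfile=prod_schemas.dmp schemas=HR content=metadata_only include=statistics sqlfile=stats.sql
With SQLFILE, nothing is actually imported; the statistics DDL is just written to stats.sql for inspection.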
Dean
p.s. Here is a section of my sqlfile for the HR schema
DECLARE
c varchar2(60);
nv varchar2(1);
df varchar2(21) := 'YYYY-MM-DD:HH24:MI:SS';
s varchar2(60) := 'HR';
t varchar2(60) := 'REGIONS';
p varchar2(1);
sp varchar2(1);
stmt varchar2(300) := 'INSERT INTO "SYS"."IMPDP_STATS" (type,version,c1,c2,c3,c4,c5,n1,n2,n3,n4,n5,n6,n7,n8,n9,n10,n11,d1,r1,r2,ch1,flags) VALUES (''C'',5,:1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11,:12,:13,:14,:15,:16,:17,:18,:19,:20,:21)';
BEGIN
DELETE FROM "SYS"."IMPDP_STATS";
INSERT INTO "SYS"."IMPDP_STATS" (type,version,flags,c1,c2,c3,c5,n1,n2,n3,n4,n10,n11,n12,d1) VALUES ('T',5,2,t,p,sp,s,
4,1,14,4,NULL,NULL,NULL,
TO_DATE('2009-09-16 21:40:06',df));
c := 'REGION_ID';
EXECUTE IMMEDIATE stmt USING t,p,sp,c,s,
4,.25,4,4,0,1,4,3,nv,nv,nv,
TO_DATE('2009-09-16 21:40:06',df),'C102','C105',nv,2;
c := 'REGION_NAME';
EXECUTE IMMEDIATE stmt USING t,p,sp,c,s,
4,.25,4,4,0,3.39718115904673E+35,4.01944465011359E+35,11,nv,nv,nv,
TO_DATE('2009-09-16 21:40:06',df),'416D657269636173','4D6964646C65204561737420616E6420416672696361',nv,2;
DBMS_STATS.IMPORT_TABLE_STATS('"HR"','"REGIONS"',NULL,'"IMPDP_STATS"',NULL,NULL,'"SYS"');
DELETE FROM "SYS"."IMPDP_STATS";
END;
/
BEGIN
DBMS_STATS.LOCK_TABLE_STATS('"HR"','"REGIONS"','ALL');
END;
/
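To confirm afterwards that the imported stats really are locked, a dictionary query like this works on 10g and later:
SELECT table_name, stattype_locked FROM dba_tab_statistics WHERE owner = 'HR';
A value of ALL in STATTYPE_LOCKED means both data and cache statistics are locked.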

Similar Messages

  • Stats Not getting imported while importing partition using DATAPUMP API

    Hi,
    I am using the Data Pump API for exporting a partition of a table and then importing the partition into a table on another DB.
    The data is getting imported, but I can see that the partition stats are not getting imported. I am using TABLE_EXISTS_ACTION = APPEND.
    Any idea how to get the stats imported into the partition as well?
    Thanks!

  • Error while importing schemas using datapump

    Hi,
    I am trying to import a schema from QC to development. After importing, I got the following errors:
    Processing object type SCHEMA_EXPORT/TABLE/GRANT/WITH_GRANT_OPTION/OBJECT_GRANT
    Processing object type SCHEMA_EXPORT/TABLE/GRANT/CROSS_SCHEMA/OBJECT_GRANT
    Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
    ORA-39065: unexpected master process exception in RECEIVE
    ORA-39078: unable to dequeue message for agent MCP from queue "KUPC$C_2_20090421161917"
    Job "SYS"."uat.210409" stopped due to fatal error at 20:15:13
    ORA-39014: One or more workers have prematurely exited.
    ORA-39029: worker 2 with process name "DW02" prematurely terminated
    ORA-31671: Worker process DW02 had an unhandled exception.
    ORA-39078: unable to dequeue message for agent KUPC$A_2_20090421161934 from queue "KUPC$C_2_20090421161917"
    ORA-06512: at "SYS.KUPW$WORKER", line 1397
    ORA-06512: at line 2
    ORA-39029: worker 3 with process name "DW03" prematurely terminated
    ORA-31671: Worker process DW03 had an unhandled exception.
    ORA-39078: unable to dequeue message for agent KUPC$A_2_20090421162030 from queue "KUPC$C_2_20090421161917"
    ORA-06512: at "SYS.KUPW$WORKER", line 1397
    ORA-06512: at line 2
    ORA-39029: worker 4 with process name "DW04" prematurely terminated
    ORA-31671: Worker process DW04 had an unhandled exception.
    ORA-39078: unable to dequeue message for agent KUPC$A_2_20090421162031 from queue "KUPC$C_2_20090421161917"
    ORA-06512: at "SYS.KUPW$WORKER", line 1397
    ORA-06512: at line 2
    Did my import complete successfully or not? Please help...

    When a Data Pump job runs, it creates a table called the master table. It has the same name as the job name. This is used to keep track of where all of the information in the dumpfile is located. It is also used when restarting a job. For some reason, this table got dropped. I'm not sure why, but in most cases, Data Pump jobs are restartable. I don't know why the original message was reported, but I was hoping the job would be restartable. You could always just rerun the job. Since the job that failed already created tables and indexes, if you rerun the job, all of the objects that are dependent on those objects will not be created by default.
    Let's say you have table tab1 with an index ind1, and both table and index are analyzed. Since tab1 is already created, the Data Pump job will mark all of the objects dependent on tab1 to be skipped. This includes the index, table_statistics, and index_statistics. To get around this, you could say
    table_exists_action=replace
    but this will replace all tables that are in the dumpfile. Your other options are:
    table_exists_action=
    truncate -- truncate the data in the table and then just reload the data, but not the dependent objects
    append -- append the data from the dumpfile to the existing table, but do not import the dependent objects
    skip -- skip the data and dependent objects in the dumpfile.
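    For example, a rerun that keeps the existing tables and reloads only their data might look like this (a sketch; the directory, dump file, and schema names are hypothetical):
    impdp system/password directory=DATA_PUMP_DIR dumpfile=uat.dmp schemas=UATSCHEMA table_exists_action=truncate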
    Hope this helps.
    Dean

  • Hindi font data import by using datapump

    Dear All,
    I have a dmp file in which some tables contain data in a Hindi font (Mangal font).
    When I import the dmp file using Data Pump on another system, the Hindi font data shows up in a '???????' format.
    So please suggest how I should import the dmp file so that I get my data in the proper format.

    Character Set and Compatibility Issues
    Unlike previous tools, Data Pump uses the character set of the source database to ensure proper conversion to the target database. There can still be issues with character loss in the conversion process.
    Note 227332.1 NLS considerations in Import/Export - Frequently Asked Questions
    Note 457526.1 Possible data corruption with Impdp if importing NLS_CHARACTERSET is different from exporting NLS_CHARACTERSET
    Note 436240.1 ORA-2375 ORA-12899 ORA-2372 Errors While Datapump Import Done From Single Byte Characterset to Multi Byte Characterset Database
    Note 553337.1 Export/Import DataPump Parameter VERSION - Compatibility of Data Pump Between Different Oracle Versions [Video]
    Note 864582.1 Examples using Data pump VERSION parameter and its relationship to database compatible parameter
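    To see which character sets are involved, you can run this standard dictionary query on both the source and the target database:
    SELECT parameter, value FROM nls_database_parameters WHERE parameter IN ('NLS_CHARACTERSET','NLS_NCHAR_CHARACTERSET');
    If the target character set cannot represent Devanagari (for example, a single-byte character set rather than AL32UTF8), the data will display as '?' characters.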

  • Error in import data using datapump

    Hi all,
    I just installed Oracle 10gR2 on my local machine and tried to import data into a 9iR2 database, but it ended with errors. Here are the steps I followed.

    SQL> conn sys/ as sysdba
    Enter password:
    Connected.
    SQL> crete user amila identified by a;
    SP2-0734: unknown command beginning "crete user..." - rest of line ignored.
    SQL> create user amila identified by a;
    User created.
    SQL> grant connect,resource,create any directory to amila;
    Grant succeeded.
    SQL> create or replace directory dump_dir as 'E:\oracle\dump';
    Directory created.
    SQL> grant read,write on directory dump_dir to amila;
    Grant succeeded.
    -==========================================================
    C:\>impdp amila5/a@cd01 directory=dump_dir dumpfile=amila5.dmp logfile=amila5.log
    Import: Release 10.2.0.1.0 - Production on Friday, 19 October, 2007 13:27:17
    Copyright (c) 2003, 2005, Oracle. All rights reserved.
    Connected to: Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
    With the Partitioning, OLAP and Oracle Data Mining options
    JServer Release 9.2.0.1.0 - Production
    UDI-00008: operation generated ORACLE error 6550
    ORA-06550: line 1, column 52:
    PLS-00201: identifier 'SYS.DBMS_DATAPUMP' must be declared
    ORA-06550: line 1, column 52:
    PL/SQL: Statement ignored
    Please help with this issue.

  • Import fails using datapump from 10g to 11g.

    Wrong posting, moving it to the right one.
    How shall I move messages between different categories?

    You could be running into Bug 4950024. See MOS Doc 4950024.8 (Bug 4950024 - ORA-39083/PLS-302 during import of metric threshold settings)
    You might want to open an SR with Support.
    HTH
    Srini

  • Import Indexes only using datapump

    Hi Gurus,
    I have 2 identical schemas (same table structures, but the row counts and data are different), so I want to import the indexes from one schema to the other. Can you please advise how I can do that using Data Pump?
    Thanks in advance.

    It doesn't matter what data is in the tables, as long as the columns the indexes are on exist. The indexes get created and built using a normal
    create index ...
    statement. So, if the original table had 100 rows and the new table has 10, the index will be built using the 10 rows. All that gets stored in the dumpfile is the DDL (in XML format). No index data is stored in the dumpfile. (If the OP was using transportable tablespaces, that would be a different story.)
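    Since only DDL is needed, an index-only import is straightforward. A sketch (the dump file and schema names are hypothetical; REMAP_SCHEMA is only needed if the target schema name differs):
    impdp system/password directory=DATA_PUMP_DIR dumpfile=schema1.dmp include=INDEX remap_schema=SCHEMA1:SCHEMA2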
    Dean

  • Check for import errors programmatically using datapump api

    Hi,
    I have PL/SQL that imports some tables over a DB link using dbms_datapump. Is there a way to programmatically check that the import job had no errors at all, before proceeding to the next step after the import is completed?
    I saw several samples showing the progress of the import job using while loops, and whether it completed or not,
    but none explaining how to check whether any of the import steps failed, or if all steps completed successfully.
    Thanks,
    JGP

    check this
    [Datapump Demos|http://www.morganslibrary.org/reference/dbms_datapump.html]
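    For reference, the documented approach is to poll DBMS_DATAPUMP.GET_STATUS with the job-error mask and collect any errors from the returned ku$_Status. A condensed sketch, modeled on the GET_STATUS examples in the Utilities guide (it assumes h1 was obtained from DBMS_DATAPUMP.OPEN and the job was already started with START_JOB):
    DECLARE
      h1        NUMBER;                       -- assumed: handle from DBMS_DATAPUMP.OPEN, job started
      job_state VARCHAR2(30) := 'UNDEFINED';
      sts       ku$_Status;
      le        ku$_LogEntry;
      ind       NUMBER;
      err_count NUMBER := 0;
    BEGIN
      -- poll until the job finishes, harvesting any errors as they appear
      WHILE job_state NOT IN ('COMPLETED', 'STOPPED') LOOP
        DBMS_DATAPUMP.GET_STATUS(h1,
          DBMS_DATAPUMP.KU$_STATUS_JOB_ERROR + DBMS_DATAPUMP.KU$_STATUS_JOB_STATUS,
          -1, job_state, sts);
        IF BITAND(sts.mask, DBMS_DATAPUMP.KU$_STATUS_JOB_ERROR) != 0 THEN
          le := sts.error;
          IF le IS NOT NULL THEN
            ind := le.FIRST;
            WHILE ind IS NOT NULL LOOP
              err_count := err_count + 1;
              DBMS_OUTPUT.PUT_LINE(le(ind).LogText);  -- log each reported error
              ind := le.NEXT(ind);
            END LOOP;
          END IF;
        END IF;
      END LOOP;
      DBMS_DATAPUMP.DETACH(h1);
      -- any collected error means the import should not be treated as clean
      IF err_count > 0 THEN
        RAISE_APPLICATION_ERROR(-20001,
          'Data Pump import reported ' || err_count || ' error(s)');
      END IF;
    END;
    /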

  • Require help in understanding exporting and importing statistics.

    Hi all,
    I am a bit new to statistics.
    Can anyone please explain these commands to me in detail?
    1) exec DBMS_STATS.GATHER_TABLE_STATS (ownname => 'MRP' , tabname => 'MRP_ATP_DETAILS_TEMP', estimate_percent => 100 ,cascade => TRUE);
    2) exec DBMS_STATS.CREATE_STAT_TABLE ( ownname => 'MRP', stattab => 'MRP_ATP_3');
    3) exec DBMS_STATS.EXPORT_TABLE_STATS ( ownname => 'MRP', stattab => 'MRP_ATP_3', tabname => 'MRP_ATP_DETAILS_TEMP',statid => 'MRP27jan14');
    4) exec DBMS_STATS.IMPORT_TABLE_STATS ( ownname => 'MRP', stattab => 'MRP_ATP_3', tabname => 'MRP_ATP_DETAILS_TEMP');
    I understand that these commands are used to export and import table statistics,
    but could anyone please help me understand them in detail?
    Thanks in advance.
    Regards,
    Shiva.

    Shiva,
    Please post the details of the application release, database version and OS.
    Please see (FAQ: Statistics Gathering Frequently Asked Questions (Doc ID 1501712.1) -- What is the difference between DBMS_STATS and FND_STATS).
    For exporting/importing statistics summary of FND_STATS Subprograms can be found in (Doc ID 122371.1)
    http://etrm.oracle.com/pls/et1211d9/etrm_pnav.show_details?c_name=FND_STATS&c_owner=APPS&c_type=PACKAGE&c_detail_type=source
    http://etrm.oracle.com/pls/et1211d9/etrm_pnav.show_details?c_name=FND_STATS&c_owner=APPS&c_type=PACKAGE%20BODY&c_detail_type=source
    Thanks,
    Hussein

  • Total number of records of partition tables exported using datapump

    Hi All,
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
    OS: RHEL
    I exported a table with partitions using Data Pump, and I would like to verify the total number of records exported across all the partitions. The export has a query on it: WHERE ITEMDATE < TO_DATE('1-JAN-2010'). I need to compare that total with the exact number of records in the actual table, to confirm they are the same.
    Below is the log file of the exported table. It does not show the total number of rows exported, only the counts per partition.
    Starting "SYS"."SYS_EXPORT_TABLE_05": '/******** AS SYSDBA' dumpfile=data_pump_dir:GSDBA_APPROVED_TL.dmp nologfile=y tables=GSDBA.APPROVED_TL query=GSDBA.APPROVED_TL:"
    WHERE ITEMDATE< TO_DATE(\'1-JAN-2010\'\)"
    Estimate in progress using BLOCKS method...
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 517.6 MB
    Processing object type TABLE_EXPORT/TABLE/TABLE
    Processing object type TABLE_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
    Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
    Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
    . . exported "GSDBA"."APPROVED_TL":"APPROVED_TL_2008_Q3" 35.02 MB 1361311 rows
    . . exported "GSDBA"."APPROVED_TL":"APPROVED_TL_2008_Q4" 33.23 MB 1292051 rows
    . . exported "GSDBA"."APPROVED_TL":"APPROVED_TL_2010_Q4" 5.875 KB 0 rows
    . . exported "GSDBA"."APPROVED_TL":"APPROVED_TL_2011_Q1" 5.875 KB 0 rows
    . . exported "GSDBA"."APPROVED_TL":"APPROVED_TL_2009_Q3" 30.53 MB 1186974 rows
    . . exported "GSDBA"."APPROVED_TL":"APPROVED_TL_2010_Q3" 5.875 KB 0 rows
    . . exported "GSDBA"."APPROVED_TL":"APPROVED_TL_2009_Q1" 30.44 MB 1183811 rows
    . . exported "GSDBA"."APPROVED_TL":"APPROVED_TL_2009_Q2" 30.29 MB 1177468 rows
    . . exported "GSDBA"."APPROVED_TL":"APPROVED_TL_2009_Q4" 30.09 MB 1170470 rows
    . . exported "GSDBA"."APPROVED_TL":"APPROVED_TL_2010_Q2" 5.875 KB 0 rows
    . . exported "GSDBA"."APPROVED_TL":"APPROVED_TL_2011_Q2" 5.875 KB 0 rows
    . . exported "GSDBA"."APPROVED_TL":"APPROVED_TL_2010_Q1" 5.875 KB 0 rows
    . . exported "GSDBA"."APPROVED_TL":"APPROVED_TL_2011_Q3" 5.875 KB 0 rows
    . . exported "GSDBA"."APPROVED_TL":"APPROVED_TL_2011_Q4" 5.875 KB 0 rows
    . . exported "GSDBA"."APPROVED_TL":"APPROVED_TL_2012_Q1" 5.875 KB 0 rows
    . . exported "GSDBA"."APPROVED_TL":"APPROVED_TL_2006_Q3" 0 KB 0 rows
    . . exported "GSDBA"."APPROVED_TL":"APPROVED_TL_2006_Q4" 0 KB 0 rows
    . . exported "GSDBA"."APPROVED_TL":"APPROVED_TL_2007_Q1" 0 KB 0 rows
    . . exported "GSDBA"."APPROVED_TL":"APPROVED_TL_2007_Q2" 0 KB 0 rows
    . . exported "GSDBA"."APPROVED_TL":"APPROVED_TL_2007_Q3" 0 KB 0 rows
    . . exported "GSDBA"."APPROVED_TL":"APPROVED_TL_2007_Q4" 0 KB 0 rows
    . . exported "GSDBA"."APPROVED_TL":"APPROVED_TL_2008_Q1" 0 KB 0 rows
    . . exported "GSDBA"."APPROVED_TL":"APPROVED_TL_2008_Q2" 0 KB 0 rows
    . . exported "GSDBA"."APPROVED_TL":"APPROVED_TL_2012_Q2" 0 KB 0 rows
    . . exported "GSDBA"."APPROVED_TL":"APPROVED_TL_2012_Q3" 0 KB 0 rows
    . . exported "GSDBA"."APPROVED_TL":"APPROVED_TL_2012_Q4" 0 KB 0 rows
    . . exported "GSDBA"."APPROVED_TL":"APPROVED_TL_MAXVALUE" 0 KB 0 rows
    Master table "SYS"."SYS_EXPORT_TABLE_05" successfully loaded/unloaded
    Dump file set for SYS.SYS_EXPORT_TABLE_05 is:
    /u01/export/GSDBA_APPROVED_TL.dmp
    Job "SYS"."SYS_EXPORT_TABLE_05" successfully completed at 12:00:36

    I assume you want this so you can run a script to check the count? If not, and this is being done manually, then just add up the individual rows. I'm not very good at writing scripts, but I would think that someone here could come up with a script that would sum up the row counts for the partitions of the tables from your log file. This is not something that Data Pump writes to the log file.
    Dean
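    To get the expected total from the source side, you can simply count with the same predicate the export used (the date format mask here is an assumption; adjust it to your NLS settings):
    SELECT COUNT(*) FROM GSDBA.APPROVED_TL WHERE ITEMDATE < TO_DATE('1-JAN-2010','DD-MON-YYYY');
    If that count equals the sum of the per-partition row counts in the log above (7,372,085), the export captured everything.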

  • Transporting Georaster Data using datapump

    Hi all,
    I'm trying to use Data Pump to transport my schema from one database to another. However, I have a problem with my GeoRaster data.
    MAP
    Name Null? Type
    ID NUMBER(38)
    GEOR SDO_GEORASTER
    MAP_RDT
    Name Null? Type
    RASTERID NOT NULL NUMBER
    PYRAMIDLEVEL NOT NULL NUMBER
    BANDBLOCKNUMBER NOT NULL NUMBER
    ROWBLOCKNUMBER NOT NULL NUMBER
    COLUMNBLOCKNUMBER NOT NULL NUMBER
    BLOCKMBR MDSYS.SDO_GEOMETRY
    RASTERBLOCK BLOB
    SCHOOL
    Name Null? Type
    CODE NOT NULL NUMBER(38)
    ENG_NAME VARCHAR2(200)
    ADDRESS VARCHAR2(200)
    POSTAL VARCHAR2(200)
    TELEPHONE VARCHAR2(200)
    FAX VARCHAR2(200)
    EMAIL VARCHAR2(200)
    STATUS VARCHAR2(200)
    ZONE_ID NUMBER(38)
    GEOM SDO_GEOMETRY
    MG_ID NUMBER(38)
    The other tables just contain my normal data.
    Export statement:
    - expdp schools/** schemas=schools directory=exp_dir dumpfile=schools.dmp
    Import statement:
    - impdp schools/** schemas=schools directory=imp_dir dumpfile=schools.dmp
    parfile=exclude.par
    exclude.par content:
    exclude=trigger:"like 'GRDMLTR_%'"
    exclude=table:"IN('MAP','MAP_RDT')"
    - impdp schools/** schemas=schools directory=imp_dir dumpfile=schools.dmp
    parfile=include.par content=metadata_only
    include.par content:
    include=table:"IN('MAP','MAP_RDT')"
    - impdp schools/** schemas=schools directory=imp_dir dumpfile=schools.dmp
    parfile=include.par content=data_only
    However, after my import, I did a validation on the GeoRaster object:
    select sdo_geor.validategeoraster(geor) from map;
    SDO_GEOR.VALIDATEGEORASTER(GEOR)
    13442: RDT=MAP_RDT, RID=1
    So there is a problem in the GeoRaster object, but I don't quite understand the information I found online.
    After my import, the type of the GeoRaster changed to
    MAP
    Name Null? Type
    ID NUMBER(38)
    GEOR PUBLIC.SDO_GEORASTER
    SCHOOL
    Name Null? Type
    CODE NOT NULL NUMBER(38)
    ENG_NAME VARCHAR2(200)
    ADDRESS VARCHAR2(200)
    POSTAL VARCHAR2(200)
    TELEPHONE VARCHAR2(200)
    FAX VARCHAR2(200)
    EMAIL VARCHAR2(200)
    STATUS VARCHAR2(200)
    ZONE_ID NUMBER(38)
    GEOM PUBLIC.SDO_GEOMETRY
    MG_ID NUMBER(38)
    And I have a problem running my application that renders the GeoRaster with a theme created on the SCHOOL table. The error messages are below:
    In the mapviewer console
    2008-07-29 16:46:26.773 ERROR cannot find entry in ALL_SDO_GEOM_METADATA table for theme: SCHOOLS_LOCATION
    2008-07-29 16:46:26.773 ERROR using query:select SRID from ALL_SDO_GEOM_METADATA where TABLE_NAME=:1 and (COLUMN_NAME=:2 OR COLUMN_NAME=:3 or COLUMN_NAME=:4) and OWNER=:5
    2008-07-29 16:46:26.774 ERROR Exhausted Resultset
    2008-07-29 16:46:26.783 ERROR cannot find entry in USER_SDO_GEOM_METADATA table for theme: SCHOOLS_LOCATION
    2008-07-29 16:46:26.784 ERROR Exhausted Resultset
    I sincerely hope someone can provide me with some guidance. Thanks in advance =)

    This is probably almost a year too late, but I had the same problem and was able to resolve it by registering the recently imported GeoRaster data:
    SQL> select * from the (select sdo_geor_admin.listGeoRasterObjects from dual);
    This will likely show that you have 'unregistered' data, and the following will register it, after which your validation calls should all return TRUE instead of 13442:
    SQL> execute sdo_geor_admin.registerGeoRasterObjects;
    SORRY -- I just saw that you'd already done the registerGeoRasterObjects bit. msm

  • (9i) Export TABLESPACES option and Import STATISTICS option

    Product: ORACLE SERVER
    Date written: 2004-08-13
    (9i) Export TABLESPACES option and Import STATISTICS option
    ===========================================================
    PURPOSE
    This note explains the new Oracle 9i Import parameter STATISTICS and the
    new Export TABLESPACES parameter.
    SCOPE
    The Export Transportable Tablespace feature is not supported in Standard Edition from 8i through 10g.
    Explanation & Example
    1. Oracle 9i provides a new Import parameter called STATISTICS.
    (By comparison, Oracle 8i uses a combination of the ANALYZE and RECALCULATE_STATISTICS parameters.)
    - ALWAYS: always import the precomputed statistics, whether or not they are questionable. (Default)
    - NONE: do not import the precomputed statistics.
    - SAFE: if the precomputed statistics are questionable, recalculate them after importing the table; otherwise import the precomputed statistics.
    - RECALCULATE: instead of importing the precomputed statistics, recalculate the statistics on the table after the import.
    A quick note on when statistics are considered questionable: at export time, the precomputed statistics are flagged as questionable in the following cases:
    - there were row errors during the export
    - the client character set or NCHAR character set differs from the server character set or NCHAR character set
    - the QUERY option was used (a parameter introduced in 8i)
    - only specific partitions or subpartitions were exported
    (By comparison, the Export STATISTICS parameter takes one of three values: estimate, compute, or none, with estimate being the default.)
    2. Oracle 9i provides a new parameter, TABLESPACES, to export the tables in a specific tablespace.
    - All indexes associated with the tables in the tablespace are exported, regardless of which tablespace the indexes belong to.
    - Exporting in TABLESPACE mode requires the EXP_FULL_DATABASE privilege.
    (By comparison, the 8i TABLESPACES option only exports the metadata needed by the 8i TRANSPORTABLE TABLESPACE feature; data cannot be exported per tablespace. In 9i the TABLESPACES option is, of course, also used for the TRANSPORTABLE TABLESPACE feature.)
    RELATED DOCUMENTS
    <Note:159787.1>
    Oracle 9i New Features for Administrators.

    Hi
    1) If I use the 3rd option, it should gather stats for all the objects, including indexes. However, when I used that option it didn't gather stats for indexes. Do I need to use the cascade=>TRUE option?
    Yes, the default is FALSE, so you need to force TRUE.
    2) I need to back up my current stats and gather new stats for the entire DB. If I export DATABASE stats, will it export all stats, including index and system stats?
    Yes.
    3) If I collect stats using GATHER_DATABASE_STATS, do I need to explicitly collect other stats for indexes, system, etc.?
    No.
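    For example (9i-era syntax; note that in later releases the cascade default changes to DBMS_STATS.AUTO_CASCADE):
    exec DBMS_STATS.GATHER_DATABASE_STATS(estimate_percent => 100, cascade => TRUE);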

  • ORA-39112: Dependent object type OBJECT_GRANT - using Datapump

    Hi, all.
    I am importing a full database from 10g to 11g using Data Pump:
    impdp system/aroeira directory=DATA_PUMP_DIR full=y dumpfile=saodlx_full.dmp logfile=saodlx_full_imp.log PARALLEL=6
    I got error message below with partitioned table:
    ORA-39112: Dependent object type OBJECT_GRANT:"SHR04" skipped, base object type TABLE:"SHR04"."SHR04_ARMAZENAGEM_DTL" creation failed
    ORA-39112: Dependent object type OBJECT_GRANT:"SHR04" skipped, base object type TABLE:"SHR04"."SHR04_ARMAZENAGEM_DTL" creation failed
    ORA-39112: Dependent object type OBJECT_GRANT:"SHR04" skipped, base object type TABLE:"SHR04"."SHR04_ARMAZENAGEM_DTL" creation failed
    ORA-39112: Dependent object type OBJECT_GRANT:"SHR04" skipped, base object type TABLE:"SHR04"."SHR04_ARMAZENAGEM_DTL" creation failed
    ORA-39112: Dependent object type OBJECT_GRANT:"SHR04" skipped, base object type TABLE:"SHR04"."SHR04_ARMAZENAGEM_DTL" creation failed
    ORA-39112: Dependent object type OBJECT_GRANT:"SHR04" skipped, base object type TABLE:"SHR04"."SHR04_ARMAZENAGEM_DTL" creation failed
    Please, can anybody help me?
    Thanks in advance.
    Leonardo.

    Which exact versions are you dealing with?
    There are a couple of MOS notes about this topic: *Compatibility Matrix for Export And Import Between Different Oracle Versions [ID 132904.1]* and *Export/Import DataPump Parameter VERSION - Compatibility of Data Pump Between Different Oracle Versions [ID 553337.1]*
    It is recommended that both databases have the latest patchset installed.
    Also, you can troubleshoot the issue by generating a SQL file of the table-creation DDL from the original dump (SQLFILE parameter to impdp), then extracting and creating those tables alone to see if there is something wrong there.
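    For example, to pull just the DDL for the failing table out of the full dump (a sketch; SQLFILE writes the DDL to a file without importing anything):
    impdp system/password directory=DATA_PUMP_DIR dumpfile=saodlx_full.dmp tables=SHR04.SHR04_ARMAZENAGEM_DTL sqlfile=shr04_ddl.sql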
    Regards.
    Nelson

  • Import database using impdp

    Hello,
    I want to import a database using impdp. Can I import while logged in as system/<password>? Which database should I be logged into? I have a single database, 'orcl'.
    Thanks

    I want to import a database using impdp. Can I import while logged in as system/<password>? Which database should I be logged into? I have a single database, 'orcl'.
    You don't have to log in to any database to perform the import using Data Pump import.
    You just have to set the Oracle database environment where you want to import into.
    Suppose you already have an export dump generated using Data Pump export (expdp) and you want to import into the orcl database:
    set the Oracle environment to orcl.
    You have to have the directory object created, with read/write access granted to the user performing the export/import (expdp or impdp).
    Use impdp help=y at the command line to see the list of available options.
    http://download.oracle.com/docs/cd/B13789_01/server.101/b10825/dp_import.htm
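    A minimal end-to-end sketch (the directory path and dump file name are hypothetical):
    export ORACLE_SID=orcl        (on Windows: set ORACLE_SID=orcl)
    impdp system/password directory=dump_dir dumpfile=expdat.dmp full=y logfile=imp.log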
    -Anantha

  • Speedup the import process using imp tool

    hi,
    How can I speed up the import process using the imp tool in Oracle 9.2.0.8?

    Hi,
    Follow below guidelines also:
    IMPORT(imp):
    Create an indexfile so that you can create indexes AFTER you have imported the data. Do this by setting INDEXFILE to a filename and then running the import. No data will be imported, but a file containing the index definitions will be created. You must edit this file afterwards and supply the passwords for the schemas on all CONNECT statements.
    Place the file to be imported on a separate physical disk from the Oracle data files.
    Increase DB_CACHE_SIZE (DB_BLOCK_BUFFERS prior to 9i) considerably in the init$SID.ora file.
    Set LOG_BUFFER to a big value and restart Oracle.
    Stop redo log archiving if it is running (ALTER DATABASE NOARCHIVELOG;).
    Create a BIG tablespace with a BIG rollback segment inside. Set all other rollback segments offline (except the SYSTEM rollback segment, of course). The rollback segment must be as big as your biggest table (I think?).
    Use COMMIT=N in the import parameter file if you can afford it.
    Use STATISTICS=NONE in the import parameter file to avoid the time-consuming import of statistics.
    Remember to run the indexfile created previously.
    Note: Before following the above guidelines, check your requirements also.
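    Putting the indexfile advice together, a typical two-pass import might look like this (a sketch; the file names are hypothetical):
    imp system/password file=exp.dmp full=y indexfile=create_indexes.sql
    imp system/password file=exp.dmp full=y indexes=n commit=n statistics=none
    Then edit create_indexes.sql (supplying the CONNECT passwords) and run it in SQL*Plus once the data is loaded.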
    Best regards,
    Rafi.
    http://rafioracledba.blogspot.com/
