Does a Data Pump import of tables automatically rebuild indexes?

Dear all,
does a Data Pump import of tables automatically rebuild the indexes on the associated tables?
An urgent response would be appreciated.

Yes, indexes are rebuilt.
From dba-oracle:
Set indexes=n – Index creation can be postponed until after the import completes by specifying indexes=n. If indexes for the target table already exist at the time of execution, import performs index maintenance as data is inserted into the table. Setting indexes=n eliminates this maintenance overhead. You can also use the indexfile=filename parameter to rebuild all the indexes once, after the data is loaded. When editing the indexfile, add the nologging and parallel keywords (where parallel degree = cpu_count - 1).
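As a rough sketch of that workflow with the classic imp utility (the file, table, and script names here are illustrative, not from the thread):
imp scott/tiger file=expdat.dmp tables=emp indexes=n
imp scott/tiger file=expdat.dmp tables=emp indexfile=emp_indexes.sql
The second run imports nothing; it only writes the CREATE INDEX statements into emp_indexes.sql, which you can then edit (adding NOLOGGING and PARALLEL) and execute in SQL*Plus after the data load.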

Similar Messages

  • Rebuilding indexes after importing...

    My coworker and I are discussing whether it is necessary, or advised, to rebuild indexes after an import of the schema.
    My thinking is that the index data is put into fresh blocks thereby creating a very flat index tree without any fragmentation.
    But my coworker suspects that perhaps the blocks are built exactly as they existed in the source database.
    I could understand, perhaps, if the refresh were done by using RMAN which copies block by block, but even then I'm not sure.
    Can you help us understand this please?
    Thanks.
    Ed

    Hi,
Normally, the indexes are built and the statistics updated automatically after the import. You do not need to regenerate the statistics after the import unless you are running a very old version of the database.
    Regards
    [sfa-dev1:oracle:10.2.0] $ imp help=Y
    Import: Release 10.2.0.4.0 - Production on Fri Nov 13 11:49:56 2009
    Copyright (c) 1982, 2007, Oracle. All rights reserved.
    You can let Import prompt you for parameters by entering the IMP
    command followed by your username/password:
    Example: IMP SCOTT/TIGER
    Or, you can control how Import runs by entering the IMP command followed
    by various arguments. To specify parameters, you use keywords:
    Format: IMP KEYWORD=value or KEYWORD=(value1,value2,...,valueN)
    Example: IMP SCOTT/TIGER IGNORE=Y TABLES=(EMP,DEPT) FULL=N
    or TABLES=(T1:P1,T1:P2), if T1 is partitioned table
    USERID must be the first parameter on the command line.
    Keyword Description (Default) Keyword Description (Default)
    USERID username/password FULL import entire file (N)
    BUFFER size of data buffer FROMUSER list of owner usernames
    FILE input files (EXPDAT.DMP) TOUSER list of usernames
    SHOW just list file contents (N) TABLES list of table names
    IGNORE ignore create errors (N) RECORDLENGTH length of IO record
    GRANTS import grants (Y) INCTYPE incremental import type
    INDEXES import indexes (Y) COMMIT commit array insert (N)
    ROWS import data rows (Y) PARFILE parameter filename
    LOG log file of screen output CONSTRAINTS import constraints (Y)
    DESTROY overwrite tablespace data file (N)
    INDEXFILE write table/index info to specified file
    SKIP_UNUSABLE_INDEXES skip maintenance of unusable indexes (N)
    FEEDBACK display progress every x rows(0)
    TOID_NOVALIDATE skip validation of specified type ids
    FILESIZE maximum size of each dump file
STATISTICS import precomputed statistics (always)
(The above is the help output on 10gR2.)
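For Data Pump (impdp), the closest equivalent to the classic INDEXFILE parameter is SQLFILE combined with INCLUDE; a minimal sketch (directory and file names are assumptions):
impdp system/manager directory=DATA_PUMP_DIR dumpfile=expdat.dmp sqlfile=indexes.sql include=INDEX
This writes the CREATE INDEX statements from the dump file into indexes.sql without importing anything.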

Datapump import error on 2 partitioned tables

I am trying to run impdp to import two tables that are partitioned and use LOB types... for some reason it always errors out. Has anyone seen this issue in 11g?
    Here is the info:
    $ impdp parfile=elm_rt.par
    Master table "ELM"."SYS_IMPORT_TABLE_05" successfully loaded/unloaded
    Starting "ELM"."SYS_IMPORT_TABLE_05": elm/******** parfile=elm_rt.par
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/INDEX
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/AUDIT_OBJ
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/STATISTICS/TABLE_STATISTICS
    ORA-39014: One or more workers have prematurely exited.
    ORA-39029: worker 1 with process name "DW01" prematurely terminated
    ORA-31671: Worker process DW01 had an unhandled exception.
    ORA-04030: out of process memory when trying to allocate 120048 bytes (session heap,kuxLpxAlloc)
    ORA-06512: at "SYS.KUPW$WORKER", line 1602
    ORA-06512: at line 2
    ORA-39014: One or more workers have prematurely exited.
    ORA-39029: worker 2 with process name "DW01" prematurely terminated
    ORA-31671: Worker process DW01 had an unhandled exception.
    ORA-04030: out of process memory when trying to allocate 120048 bytes (session heap,kuxLpxAlloc)
    ORA-06512: at "SYS.KUPW$WORKER", line 1602
    ORA-06512: at line 2
    Job "ELM"."SYS_IMPORT_TABLE_05" stopped due to fatal error at 13:11:04
elm_rt.par:
    $ vi elm_rt.par
    "elm_rt.par" 25 lines, 1340 characters
    DIRECTORY=DP_REGRESSION_DATA_01
    DUMPFILE=ELM_MD1.dmp,ELM_MD2.dmp,ELM_MD3.dmp,ELM_MD4.dmp
    LOGFILE=DP_REGRESSION_LOG_01:ELM_RT.log
    DATA_OPTIONS=SKIP_CONSTRAINT_ERRORS
    CONTENT=METADATA_ONLY
    TABLES=RT_AUDIT_IN_HIST,RT_AUDIT_OUT_HIST
    REMAP_TABLESPACE=RT_AUDIT_IN_HIST_DAT01:RB_AUDIT_IN_HIST_DAT01
    REMAP_TABLESPACE=RT_AUDIT_IN_HIST_IDX04:RB_AUDIT_IN_HIST_IDX01
    REMAP_TABLESPACE=RT_AUDIT_OUT_HIST_DAT01:RB_AUDIT_OUT_HIST_DAT01
    PARALLEL=4

Read MetaLink note 286496.1 (Export/Import DataPump Parameter TRACE - How to Diagnose Oracle Data Pump).
It will help you generate a trace for the Data Pump job.
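As a sketch of what that note describes, tracing is typically enabled by adding the TRACE parameter to the parfile; the value 480300 below is the full master/worker tracing level commonly cited from that note, so confirm it against the note itself:
TRACE=480300
Then rerun impdp parfile=elm_rt.par and check the generated Data Pump trace files (*.trc) in the database trace directory.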

  • Datapump API: Import all tables in schema

    Hi,
How can I import all tables using a wildcard in the Data Pump API?
    Thanks in advance,
    tensai

    _tensai_ wrote:
    Thanks for the links, but I already know them...
    My problem is that I couldn't find an example which shows how to perform an import via the API which imports all tables, but nothing else.
Can someone please help me with a code example?
I'm not sure what you mean by "imports all tables, but nothing else". It could mean that you only want to import the tables, but not the data, and/or not the statistics etc.
    Using the samples provided in the manuals:
    DECLARE
      ind NUMBER;              -- Loop index
      h1 NUMBER;               -- Data Pump job handle
      percent_done NUMBER;     -- Percentage of job complete
      job_state VARCHAR2(30);  -- To keep track of job state
      le ku$_LogEntry;         -- For WIP and error messages
      js ku$_JobStatus;        -- The job status from get_status
      jd ku$_JobDesc;          -- The job description from get_status
      sts ku$_Status;          -- The status object returned by get_status
      spos NUMBER;             -- String starting position
      slen NUMBER;             -- String length for output
    BEGIN
    -- Create a (user-named) Data Pump job to do a "schema" import
      h1 := DBMS_DATAPUMP.OPEN('IMPORT','SCHEMA',NULL,'EXAMPLE8');
    -- Specify the single dump file for the job (using the handle just returned)
    -- and directory object, which must already be defined and accessible
    -- to the user running this procedure. This is the dump file created by
    -- the export operation in the first example.
      DBMS_DATAPUMP.ADD_FILE(h1,'example1.dmp','DATA_PUMP_DIR');
    -- A metadata remap will map all schema objects from one schema to another.
      DBMS_DATAPUMP.METADATA_REMAP(h1,'REMAP_SCHEMA','RANDOLF','RANDOLF2');
    -- Include and exclude
      dbms_datapump.metadata_filter(h1,'INCLUDE_PATH_LIST','''TABLE''');
      dbms_datapump.metadata_filter(h1,'EXCLUDE_PATH_EXPR','LIKE ''TABLE/C%''');
      dbms_datapump.metadata_filter(h1,'EXCLUDE_PATH_EXPR','LIKE ''TABLE/F%''');
      dbms_datapump.metadata_filter(h1,'EXCLUDE_PATH_EXPR','LIKE ''TABLE/G%''');
      dbms_datapump.metadata_filter(h1,'EXCLUDE_PATH_EXPR','LIKE ''TABLE/I%''');
      dbms_datapump.metadata_filter(h1,'EXCLUDE_PATH_EXPR','LIKE ''TABLE/M%''');
      dbms_datapump.metadata_filter(h1,'EXCLUDE_PATH_EXPR','LIKE ''TABLE/P%''');
      dbms_datapump.metadata_filter(h1,'EXCLUDE_PATH_EXPR','LIKE ''TABLE/R%''');
      dbms_datapump.metadata_filter(h1,'EXCLUDE_PATH_EXPR','LIKE ''TABLE/TR%''');
      dbms_datapump.metadata_filter(h1,'EXCLUDE_PATH_EXPR','LIKE ''TABLE/STAT%''');
    -- no data please
      DBMS_DATAPUMP.DATA_FILTER(h1, 'INCLUDE_ROWS', 0);
    -- If a table already exists in the destination schema, skip it (leave
    -- the preexisting table alone). This is the default, but it does not hurt
    -- to specify it explicitly.
      DBMS_DATAPUMP.SET_PARAMETER(h1,'TABLE_EXISTS_ACTION','SKIP');
    -- Start the job. An exception is returned if something is not set up properly.
      DBMS_DATAPUMP.START_JOB(h1);
    -- The import job should now be running. In the following loop, the job is
    -- monitored until it completes. In the meantime, progress information is
    -- displayed. Note: this is identical to the export example.
    percent_done := 0;
      job_state := 'UNDEFINED';
      while (job_state != 'COMPLETED') and (job_state != 'STOPPED') loop
        dbms_datapump.get_status(h1,
               dbms_datapump.ku$_status_job_error +
               dbms_datapump.ku$_status_job_status +
               dbms_datapump.ku$_status_wip,-1,job_state,sts);
        js := sts.job_status;
    -- If the percentage done changed, display the new value.
         if js.percent_done != percent_done
        then
          dbms_output.put_line('*** Job percent done = ' ||
                               to_char(js.percent_done));
          percent_done := js.percent_done;
        end if;
    -- If any work-in-progress (WIP) or Error messages were received for the job,
    -- display them.
           if (bitand(sts.mask,dbms_datapump.ku$_status_wip) != 0)
        then
          le := sts.wip;
        else
          if (bitand(sts.mask,dbms_datapump.ku$_status_job_error) != 0)
          then
            le := sts.error;
          else
            le := null;
          end if;
        end if;
        if le is not null
        then
          ind := le.FIRST;
          while ind is not null loop
            dbms_output.put_line(le(ind).LogText);
            ind := le.NEXT(ind);
          end loop;
        end if;
      end loop;
    -- Indicate that the job finished and gracefully detach from it.
      dbms_output.put_line('Job has completed');
      dbms_output.put_line('Final job state = ' || job_state);
      dbms_datapump.detach(h1);
    exception
      when others then
        dbms_output.put_line('Exception in Data Pump job');
        dbms_datapump.get_status(h1,dbms_datapump.ku$_status_job_error,0,
                                  job_state,sts);
        if (bitand(sts.mask,dbms_datapump.ku$_status_job_error) != 0)
        then
          le := sts.error;
          if le is not null
          then
            ind := le.FIRST;
            while ind is not null loop
              spos := 1;
              slen := length(le(ind).LogText);
              if slen > 255
              then
                slen := 255;
              end if;
              while slen > 0 loop
                dbms_output.put_line(substr(le(ind).LogText,spos,slen));
                spos := spos + 255;
                slen := length(le(ind).LogText) + 1 - spos;
              end loop;
              ind := le.NEXT(ind);
            end loop;
          end if;
        end if;
        -- dbms_datapump.stop_job(h1);
        dbms_datapump.detach(h1);
    END;
/
This should import nothing but the tables (excluding the data and the table statistics) from a schema export (including the remapping shown here); you can play around with the EXCLUDE_PATH_EXPR expressions. Check the serveroutput generated for possible values to use in EXCLUDE_PATH_EXPR.
    Use the DBMS_DATAPUMP.DATA_FILTER procedure if you want to exclude the data.
    For more samples, refer to the documentation:
    http://download.oracle.com/docs/cd/B28359_01/server.111/b28319/dp_api.htm#i1006925
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

Datapump import doesn't filter out table when mentioned in 'exclude'

The Data Pump import below fails to filter out the COUNTRIES table. I thought that was what it was supposed to do? Any suggestions?
    C:\Documents and Settings\>impdp remap_schema=hr:hrtmp dumpfile=hr1.dmp parfile='C:\Documents and Settings\para.par'
    Import: Release 10.2.0.1.0 - Production on Monday, 18 January, 2010 15:00:05
    Copyright (c) 2003, 2005, Oracle. All rights reserved.
    Username: / as sysdba
    Connected to: Oracle Database 10g Express Edition Release 10.2.0.1.0 - Production
    Master table "SYS"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
    Starting "SYS"."SYS_IMPORT_FULL_01": /******** AS SYSDBA remap_schema=hr:hrtmp dumpfile=hr1.dmp parfile='C:\Documents a
    nd Settings\para.par'
    Processing object type SCHEMA_EXPORT/USER
    Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
    Processing object type SCHEMA_EXPORT/ROLE_GRANT
    Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
    Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
    Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
    Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
    Processing object type SCHEMA_EXPORT/TABLE/TABLE
    Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
    . . imported "HRTMP"."COUNTRIES" 6.093 KB 25 rows
    . . imported "HRTMP"."DEPARTMENTS" 6.640 KB 27 rows
    . . imported "HRTMP"."EMPLOYEES" 15.77 KB 107 rows
    . . imported "HRTMP"."JOBS" 6.609 KB 19 rows
    . . imported "HRTMP"."JOB_HISTORY" 6.585 KB 10 rows
    . . imported "HRTMP"."LOCATIONS" 7.710 KB 23 rows
    . . imported "HRTMP"."REGIONS" 5.296 KB 4 rows
    Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
    Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
    Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    Processing object type SCHEMA_EXPORT/TABLE/COMMENT
    Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
    Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
    Processing object type SCHEMA_EXPORT/VIEW/VIEW
    Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
    Processing object type SCHEMA_EXPORT/TABLE/TRIGGER
    Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
    Job "SYS"."SYS_IMPORT_FULL_01" successfully completed at 15:00:13
    para.par
    ========
    exclude=TABLES:"like '%COUNTRIES%'"

    Hi,
    The first thing I see is that you excluded TABLES (plural). The object type is not pluralized in DataPump. The parfile should look like:
    exclude=TABLE:"like '%COUNTRIES%'"
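As a side note, if only that one table should be skipped, an exact-match filter is a hedged alternative to LIKE:
exclude=TABLE:"= 'COUNTRIES'"
Keeping the filter in the parfile, as you did, also avoids the OS-specific quote escaping that would be needed on the command line.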
    Dean

  • Will reorg of tables and reindex of indexes help ? And How?

Suddenly users reported that one of their applications has become very slow. They also identified millions of rows of history data, which they deleted, but it still did not help. Should we go for a reorg of the tables and a rebuild of the indexes? Will this help at all? Will it help to regain space, or will it help with speed? How should I do it?
Reorg tables
1. First take an export dump of all the tables of that particular schema.
2. Then truncate the tables.
3. Then import that export backup.
Rebuild indexes
1. Find out how much space is used by the indexes.
2. Add that much space to a different tablespace.
3. Then move the indexes there.
I am not sure which steps I have missed. I need some advice from you.
And how can I regain the space on the Unix server after this delete operation?

With problems like this, I think the first response should not be that we need to reorg tables/indexes. The steps are as follows.
1) Ask users what portion of the application is running slow, and how slow.
2) Also, as others have suggested, generate an AWR, Statspack, or bstat/estat report and see where the time is being spent.
3) The problem could be that the execution plans of some SQL statements have changed and they are performing badly; this is the most likely reason. Another possibility is that there is some other contention in the database, or an I/O problem. All of this you can find from these reports.
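For example, an AWR report covering the slow period can be generated from SQL*Plus as a DBA user (assuming the Diagnostics Pack is licensed; Statspack users would run spreport.sql instead):
SQL> @?/rdbms/admin/awrrpt.sql
The script prompts for the report format, the begin/end snapshot IDs, and the report name.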

  • Run Rebuild Index Task daily on database but about 77 tables still highly fragmented over 80% !!!

    Hello everyone
On our particular database server, we run the Rebuild Index Task (using the classic Maintenance Plan Designer) every night. Running the script below, I saw that about 77 tables had an avg_fragmentation_in_percent between 80% and 99%!
    SELECT OBJECT_NAME(ind.OBJECT_ID) AS TableName,
    ind.name AS IndexName, indexstats.index_type_desc AS IndexType,
    indexstats.avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, NULL) indexstats
    INNER JOIN sys.indexes ind
    ON ind.object_id = indexstats.object_id
    AND ind.index_id = indexstats.index_id
    WHERE indexstats.avg_fragmentation_in_percent > 30--You can specify the percent as you want
    ORDER BY indexstats.avg_fragmentation_in_percent DESC
I don't understand why these tables are highly fragmented after a daily index rebuild, unless the users are doing heavy inserts/updates/deletes during the day.
Does anyone have an idea of the possible causes of these results?
    Thank you all in advance

Hello Efyuzegeekgurl,
This is normal behaviour when your table is a small one. The reason for the behaviour is the following:
The DMV sys.dm_db_index_physical_stats counts each page that is not located immediately after the previous page of the same object as a "fragment". If the first data page is 1700 and the second one is located at 1705, SQL Server will count it as a fragment.
The basic allocation algorithm of Microsoft SQL Server is that new data pages are created in "mixed extents". An extent is a contiguous run of 8 consecutive data pages. This is for historical reasons, from when storage was expensive and data should use as much of the allocated storage as possible.
When a new table is created, the very first 8 pages are not located one after the other but "could" be located in different extents. If your table has only 9 data pages, then 8 pages will count as "fragmented" because of their non-consecutive allocation. After 8 pages have been allocated, Microsoft SQL Server will ALWAYS use a full extent for the next 8 pages (and so on!).
To understand this behaviour, use the following example, which creates a table with a record length of 8K. The commands below create 10 records (which means 10 data pages).
CREATE TABLE dbo.foo
(
    Id INT NOT NULL IDENTITY (1, 1),
    c1 CHAR(8000) NOT NULL DEFAULT ('filler')
);
GO
SET NOCOUNT ON;
GO
INSERT INTO dbo.foo DEFAULT VALUES;
GO 10
When you check the page allocation, you'll find the first 8 data pages not in consecutive order but "scrambled" across different extents:
SELECT database_id,
       index_id,
       partition_id,
       allocation_unit_type_desc,
       extent_page_id,
       allocated_page_page_id,
       is_mixed_page_allocation,
       page_type_desc
FROM sys.dm_db_database_page_allocations
(
    DB_ID(),
    OBJECT_ID('dbo.foo', 'U'),
    0,
    NULL,
    'DETAILED'
) AS DDDPA
WHERE DDDPA.is_allocated = 1
ORDER BY DDDPA.is_iam_page DESC,
         DDDPA.page_level DESC;
GO
You will see that the first 8 pages are located in a MIXED extent. Although the pages happen to be in consecutive order within extent 221112, this is not guaranteed!
You can see from the very first three records that they are not stored in a consecutive row.
Now the fragmentation of the data seems to be quite high, because we have 4 breaks in the rows. If we compare this to the fragmentation reported by sys.dm_db_index_physical_stats, we will see much the same result:
SELECT index_type_desc,
       alloc_unit_type_desc,
       DDIPS.avg_fragmentation_in_percent,
       DDIPS.fragment_count,
       DDIPS.page_count,
       avg_page_space_used_in_percent
FROM sys.dm_db_index_physical_stats
(
    DB_ID(),
    OBJECT_ID('dbo.foo', 'U'),
    NULL,
    NULL,
    'DETAILED'
) AS DDIPS;
So, just from my point of view: don't rely only on the fragmentation percentage, because if the table is too small it will ALWAYS be over 30%. Take into account the density of the pages and, as all the others have mentioned, the number of pages in the index.
I would only consider indexes with 1,000 pages or more.
Another tip: forget about the f... maintenance plans. These "plans" are not worth the time and money because you cannot control them in a precise way. Have a look at Ola Hallengren's solution and implement it - a great masterpiece of index maintenance:
    http://ola.hallengren.com
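As an illustration, invoking his IndexOptimize procedure might look like the sketch below; the parameter values shown are the defaults documented on that site at the time of writing, so verify them there before use:
EXECUTE dbo.IndexOptimize
    @Databases = 'USER_DATABASES',
    @FragmentationLow = NULL,
    @FragmentationMedium = 'INDEX_REORGANIZE,INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
    @FragmentationHigh = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
    @FragmentationLevel1 = 5,
    @FragmentationLevel2 = 30,
    @MinNumberOfPages = 1000;
Note how @MinNumberOfPages encodes the 1,000-page threshold mentioned above.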
    All the best to the community :)
    MCM - SQL Server 2008
    MCSE - SQL Server 2012
    db Berater GmbH
    SQL Server Blog (german only)

  • Goldengate Extracts reads slow during Table Data Archiving and Index Rebuilding Operations.

We have configured OGG on a near-DR server. The extracts are configured to work in ALO mode.
During the day, the extracts work as expected and are in sync. But during any daily maintenance task, the extracts start lagging and read the same archives very slowly.
    This usually happens during Table Data Archiving (DELETE from prod tables, INSERT into history tables) and during Index Rebuilding on those tables.
    Points to be noted:
    1) The Tables on which Archiving is done and whose Indexes are rebuilt are not captured by GoldenGate Extract.
2) The extracts are configured to capture DML operations. Only INSERT and UPDATE operations are captured; DELETEs are ignored by the extracts. Also, DDL extraction is not configured.
    3) There is no connection to PROD or DR Database
    4) System functions normally all the time, but just during table data archiving and index rebuild it starts lagging.
Q 1. As mentioned above, even though the tables are not part of the capture, the extract lags. What are the possible reasons for the lag?
Q 2. I understand that an index rebuild is a DDL operation, yet it still induces a lag into the system. How?
Q 3. We have been trying to find a way to overcome the lag, which ideally shouldn't have arisen. Is there any extract parameter or some workaround for this situation?

    Hi Nick.W,
The amount of redo generated is huge: approximately 200-250 GB in 45-60 minutes.
I agree that the extract has to parse the extra object IDs. During the day there is a redo switch every 2-3 minutes. The source is a 3-node RAC, so approximately 80-90 archives are generated in an hour.
The reason for mentioning this was that while reading these archives too, the extract would be parsing extra object IDs, as we are capturing data for only 3 tables. The effect of parsing extra object IDs should have been seen during the day as well, the reason being that the archive size is the same, the amount of data is the same, and the number of records to be scanned is the same.
The extract slows down and reads at half the speed. If it would normally take 45-50 secs to read an archive log from normal daily functioning, it takes approx 90-100 secs to read the archives from the mentioned activities.
    Regarding the 3rd point,
a. The extract is a classic extract; the archived logs are on a local file system. No ASM, no SAN/NAS.
b. We have added the "TRANLOGOPTIONS BUFSIZE" parameter to our extract. We'll update as soon as we see any kind of improvement.
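For reference, a minimal sketch of how that might look in a classic extract parameter file running in ALO mode (the extract name, trail path, table name, and buffer size here are assumptions, not values from this thread):
EXTRACT ext1
USERID ogg, PASSWORD ********
TRANLOGOPTIONS ARCHIVEDLOGONLY
TRANLOGOPTIONS BUFSIZE 4096000
EXTTRAIL ./dirdat/et
TABLE SCOTT.T1;
Check the BUFSIZE limits for your OGG version before settling on a value.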

Data Pump - Importing one table's index

Is it possible to import a single table's index alone (for any table, e.g. EMP)? If it can be done, what should the parameter file look like?
    Thanks and Regards
    harris

I can't think of anything that would prevent this from working. You just need to make sure that the large table does not have any referential constraints, or other associations with the other tables, that may get messed up while other users are using the database.
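If it helps, a hedged sketch of such a parameter file (the directory, dump file, schema, and index name are made-up examples):
DIRECTORY=DATA_PUMP_DIR
DUMPFILE=exp.dmp
SCHEMAS=SCOTT
INCLUDE=INDEX:"IN ('EMP_IDX')"
Running impdp with this parfile should import only the matching index; the dump file must of course contain it.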
    Dean

  • Importing a table and its index statistics, cannot import index stats

    Hi,
    Oracle 10.2.0.4 on Solaris.
I have asked the DBA to import table and index statistics for a table from prod into QA for further analysis. The stats for this table are locked in prod.
The DBA has used the following commands to export and import the table statistics:
    exec dbms_stats.export_table_stats('SCHEMA','TABLE',null,'TABLE_20130225',null,true,'INV');
exec dbms_stats.import_table_stats('SCHEMA','TABLE',null,'TABLE_20130225',null,true,'SCHEMA');
Although cascade is set to true above, this resulted in only the table stats being imported, no index stats. So we have imported the prod table-level stats but no index stats! FYI, the indexes in prod have stats (last_analyzed) set.
Next the DBA tried the export and import using export_index_stats and import_index_stats, but no luck. The DBA is advising me that the only option we have is to import the table itself from prod to QA. It seems that import with cascade does not work.
Is this a bug in 10g, or is there another way to get the index statistics as well?
    Thanks

    902986 wrote:
    Hi,
    Oracle 10.2.0.4 on Solaris.
I have asked the DBA to import table and index statistics for a table from prod into QA for further analysis. The stats for this table are locked in prod.
The DBA has used the following commands to export and import the table statistics:
    exec dbms_stats.export_table_stats('SCHEMA','TABLE',null,'TABLE_20130225',null,true,'INV');
exec dbms_stats.import_table_stats('SCHEMA','TABLE',null,'TABLE_20130225',null,true,'SCHEMA');
Although cascade is set to true above, this resulted in only the table stats being imported, no index stats.
Problem Exists Between Keyboard And Chair. From the documentation:
"Gather statistics on the indexes for this table. Index statistics gathering is not parallelized. Using this option is equivalent to running the GATHER_INDEX_STATS procedure on each of the table's indexes. Use the constant DBMS_STATS.AUTO_CASCADE to have Oracle determine whether index statistics are to be collected or not. This is the default. The default value can be changed using the SET_PARAM procedure."
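For completeness, a hedged sketch of moving the index statistics explicitly with export_index_stats/import_index_stats (the index name is a made-up example; the unlock call is an assumption, needed only if the stats are locked in the target as well):
exec dbms_stats.export_index_stats('SCHEMA','INDEX_NAME',null,'TABLE_20130225',null,'INV');
exec dbms_stats.unlock_table_stats('SCHEMA','TABLE');
exec dbms_stats.import_index_stats('SCHEMA','INDEX_NAME',null,'TABLE_20130225',null,'SCHEMA');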

When would I use an index-organized table?

When would I use an index-organized table?
What are the advantages of these?

See the site:
http://www.dba-oracle.com/t_index_organized_tables.htm
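For a quick illustration, a minimal sketch of creating an index-organized table (the table and column names are made up):
CREATE TABLE order_items (
  order_id   NUMBER,
  line_no    NUMBER,
  product_id NUMBER,
  qty        NUMBER,
  CONSTRAINT order_items_pk PRIMARY KEY (order_id, line_no)
) ORGANIZATION INDEX;
The whole row is stored in the primary-key index structure, so primary-key lookups avoid the extra table access; IOTs suit tables that are almost always accessed via their primary key.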

[10G] DATAPUMP IMPORT automatically creates the USER

Product: ORACLE SERVER
Date written: 2004-05-28
[10G] DATAPUMP IMPORT automatically creates the USER
===================================
PURPOSE
The following introduces this feature of DATAPUMP.
Explanation
With the imp utility, the target user had to exist.
With Data Pump, however, if the target user does not exist, it is created automatically.
    Example :
    =============
There is a user named TEST in the source database:
TEST (password: test, roles: connect, resource)
    Export the TEST schema using datapump:
    expdp system/oracle dumpfile=exp.dmp schemas=TEST
    Case I
    =======
If the TEST user does not exist, the import creates the TEST user automatically.
    impdp system/oracle dumpfile=exp.dmp
    *************TEST does not exist*************************************
    Import: Release 10.1.0.2.0 - Production on Friday, 28 May, 2004 1:02
    Copyright (c) 2003, Oracle. All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - Produc
    tion
    With the Data Mining option
    Master table "SYSTEM"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
    Starting "SYSTEM"."SYS_IMPORT_FULL_01": system/******** dumpfile=exp.dmp
    Processing object type SCHEMA_EXPORT/USER
    Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
    Processing object type SCHEMA_EXPORT/ROLE_GRANT
    Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
    Processing object type SCHEMA_EXPORT/SE_PRE_SCHEMA_PROCOBJACT/PROCACT_SCHEMA
    Processing object type SCHEMA_EXPORT/DB_LINK
    Processing object type SCHEMA_EXPORT/TABLE/TABLE
    Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
    . . imported "TEST"."DEPT" 5.648 KB 4 rows
    . . imported "TEST"."SALGRADE" 5.648 KB 10 rows
    . . imported "TEST"."BONUS" 0 KB 0 rows
    . . imported "TEST"."EMP" 0 KB 0 rows
    Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
    Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
    Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
    Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
    Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
    Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
    Job "SYSTEM"."SYS_IMPORT_FULL_01" successfully completed at 01:02
    connect TEST/TEST (on target database)
    => connected
    SQL> select * from session_roles;
    ROLE
    connect
    resource
    Case II
    ========
If the TEST user already exists in the target database, a warning message is raised and the import continues.
    impdp system/oracle dumpfile=exp.dmp
    *************user TEST already exists************************************
    Import: Release 10.1.0.2.0 - Production on Friday, 28 May, 2004 1:06
    Copyright (c) 2003, Oracle. All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - Produc
    tion
    With the Data Mining option
    Master table "SYSTEM"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
    Starting "SYSTEM"."SYS_IMPORT_FULL_01": system/******** dumpfile=exp.dmp
    Processing object type SCHEMA_EXPORT/USER
    ORA-31684: Object type USER:"TEST" already exists
    Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
    Processing object type SCHEMA_EXPORT/ROLE_GRANT
    Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
    Processing object type SCHEMA_EXPORT/SE_PRE_SCHEMA_PROCOBJACT/PROCACT_SCHEMA
    Processing object type SCHEMA_EXPORT/DB_LINK
    Processing object type SCHEMA_EXPORT/TABLE/TABLE
    Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
    . . imported "TEST"."DEPT" 5.648 KB 4 rows
    . . imported "TEST"."SALGRADE" 5.648 KB 10 rows
    . . imported "TEST"."BONUS" 0 KB 0 rows
    . . imported "TEST"."EMP" 0 KB 0 rows
    Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
    Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
    Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
    Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
    Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
    Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
    Job "SYSTEM"."SYS_IMPORT_FULL_01" completed with 1 error(s) at 01:06
You will receive an ORA-31684 error, but the import will continue.
    Case - III
    ===========
If the TEST user exists in the target database but you want to avoid the warning (ORA-31684), use EXCLUDE=USER.
    impdp system/oracle dumpfile=exp.dmp exclude=user
*********Disable the CREATE USER statement as user TEST already exists***********
    Import: Release 10.1.0.2.0 - Production on Friday, 28 May, 2004 1:11
    Copyright (c) 2003, Oracle. All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - Produc
    tion
    With the Data Mining option
    Master table "SYSTEM"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
    Starting "SYSTEM"."SYS_IMPORT_FULL_01": system/******** dumpfile=exp.dmp exclud
    e=user
    Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
    Processing object type SCHEMA_EXPORT/ROLE_GRANT
    Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
    Processing object type SCHEMA_EXPORT/SE_PRE_SCHEMA_PROCOBJACT/PROCACT_SCHEMA
    Processing object type SCHEMA_EXPORT/DB_LINK
    Processing object type SCHEMA_EXPORT/TABLE/TABLE
    Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
    . . imported "TEST"."DEPT" 5.648 KB 4 rows
    . . imported "TEST"."SALGRADE" 5.648 KB 10 rows
    . . imported "TEST"."BONUS" 0 KB 0 rows
    . . imported "TEST"."EMP" 0 KB 0 rows
    Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
    Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
    Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
    Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
    Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
    Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
    Job "SYSTEM"."SYS_IMPORT_FULL_01" successfully completed at 01:11
If the TEST user does not exist and you use the EXCLUDE=USER parameter, you will encounter error ORA-01917.
Reference Documents:
Note 272132.1: DATAPUMP import automatically creates schema

  • Regarding Exporting and Importing internal table

    Hello Experts,
    I have two programs:
1) Main program: it creates batch jobs through JOB_OPEN, SUBMIT, and JOB_CLOSE, passing the subprogram to SUBMIT.
In the subprogram I am using EXPORT it TO MEMORY ID 'MID' to export the internal table data to SAP memory.
The data is processed in the subprogram, which then exports it to SAP memory. I need this data in the main program (and am using IMPORT to get it, but it is not working).
I use IMPORT it1 FROM MEMORY ID 'MID' to import the table data in the main program after the job completes (SUBMIT subprogram AND RETURN).
The IMPORT does not get the data into the internal table.
Can you please suggest something to solve this issue?
    Thank you.
    Regards,
    Anand.

Hi,
This is the code I am using:
    DO g_f_packets TIMES.
    * Start Immediately
           IF NOT p_imm IS INITIAL .
             g_flg_start = 'X'.
           ENDIF.
           g_f_jobname = 'KZDO_INHERIT'.
           g_f_jobno = g_f_jobno + '001'.
           CONCATENATE g_f_jobname g_f_strtdate g_f_jobno INTO g_f_jobname
                                                  SEPARATED BY '_'.
           CONDENSE g_f_jobname NO-GAPS.
           p_psize1 = p_psize1 + p_psize.
           p_psize2 = p_psize1 - p_psize + 1.
           IF p_psize2 IS INITIAL.
             p_psize2  = 1.
           ENDIF.
           g_f_spname = 'MID'.
           g_f_spid = g_f_spid + '001'.
           CONDENSE g_f_spid NO-GAPS.
           CONCATENATE g_f_spname  g_f_spid INTO g_f_spname.
           CONDENSE g_f_spname NO-GAPS.
    * ... (1) Job creating...
           CALL FUNCTION 'JOB_OPEN'
             EXPORTING
               jobname          = g_f_jobname
             IMPORTING
               jobcount         = g_f_jobcount
             EXCEPTIONS
               cant_create_job  = 1
               invalid_job_data = 2
               jobname_missing  = 3
               OTHERS           = 4.
           IF sy-subrc <> 0.
             MESSAGE e469(9j) WITH g_f_jobname.
           ENDIF.
    * (2)Report start under job name
           SUBMIT (g_c_prog_kzdo)
                  WITH p_lgreg EQ p_lgreg
                  WITH s_grvsy IN s_grvsy
                  WITH s_prvsy IN s_prvsy
                  WITH s_prdat IN s_prdat
                  WITH s_datab IN s_datab
                  WITH p1      EQ p1
                  WITH p3      EQ p3
                  WITH p4      EQ p4
                  WITH p_mailid EQ g_f_mailid
                  WITH p_psize EQ p_psize
                  WITH p_psize1 EQ p_psize1
                  WITH p_psize2 EQ p_psize2
                  WITH spid     EQ g_f_spid
                  TO SAP-SPOOL WITHOUT SPOOL DYNPRO
                  VIA JOB g_f_jobname NUMBER g_f_jobcount AND RETURN.
    *(3)Job closed when starts Immediately
           IF NOT p_imm IS INITIAL.
             IF sy-index LE g_f_nojob.
               CALL FUNCTION 'JOB_CLOSE'
                 EXPORTING
                   jobcount             = g_f_jobcount
                   jobname              = g_f_jobname
                   strtimmed            = g_flg_start
                 EXCEPTIONS
                   cant_start_immediate = 1
                   invalid_startdate    = 2
                   jobname_missing      = 3
                   job_close_failed     = 4
                   job_nosteps          = 5
                   job_notex            = 6
                   lock_failed          = 7
                   OTHERS               = 8.
               gs_jobsts-jobcount = g_f_jobcount.
               gs_jobsts-jobname  = g_f_jobname.
               gs_jobsts-spname   = g_f_spname.
               APPEND gs_jobsts to gt_jobsts.
             ELSEIF sy-index GT g_f_nojob.
               CLEAR g_f_flg.
           DO.                         " Waiting until any job completes
                 LOOP AT gt_jobsts into gs_jobsts.
                   CLEAR g_f_status.
                   CALL FUNCTION 'BP_JOB_STATUS_GET'
                     EXPORTING
                       JOBCOUNT                         = gs_jobsts-jobcount
                       JOBNAME                          = gs_jobsts-jobname
                    IMPORTING
                       STATUS                           = g_f_status
    *            HAS_CHILD                        =
    *          EXCEPTIONS
    *            JOB_DOESNT_EXIST                 = 1
    *            UNKNOWN_ERROR                    = 2
    *            PARENT_CHILD_INCONSISTENCY       = 3
    *            OTHERS                           = 4
                   g_f_mid = gs_jobsts-spname.
                   IF g_f_status = 'F'.
                     IMPORT gt_final FROM MEMORY ID g_f_mid .
                     FREE MEMORY ID gs_jobsts-spname.
                     APPEND LINES OF gt_final to gt_final1.
                     REFRESH gt_prlist.
                     CALL FUNCTION 'JOB_CLOSE'
                       EXPORTING
                         jobcount             = g_f_jobcount
                         jobname              = g_f_jobname
                         strtimmed            = g_flg_start
                       EXCEPTIONS
                         cant_start_immediate = 1
                         invalid_startdate    = 2
                         jobname_missing      = 3
                         job_close_failed     = 4
                         job_nosteps          = 5
                         job_notex            = 6
                         lock_failed          = 7
                         OTHERS               = 8.
                     IF sy-subrc = 0.
                       g_f_flg = 'X'.
                       gs_jobsts1-jobcount = g_f_jobcount.
                       gs_jobsts1-jobname  = g_f_jobname.
                       gs_jobsts1-spname   = g_f_spname.
                       APPEND gs_jobsts1 TO gt_jobsts.
                       DELETE TABLE gt_jobsts FROM gs_jobsts.
                       EXIT.
                     ENDIF.
                   ENDIF.
                 ENDLOOP.
                 IF g_f_flg = 'X'.
                   CLEAR g_f_flg.
                   EXIT.
                 ENDIF.
               ENDDO.
             ENDIF.
           ENDIF.
           IF sy-subrc <> 0.
             MESSAGE e539(scpr) WITH g_f_jobname.
           ENDIF.
           COMMIT WORK .
         ENDDO.

  • Pagination query help needed for large table - force a different index

    I'm using a slight modification of the pagination query from over at Ask Tom's: [http://www.oracle.com/technology/oramag/oracle/07-jan/o17asktom.html]
    Mine looks like this when fetching the first 100 rows of all members with last name Smith, ordered by join date:
SELECT members.*
FROM members,
    ( SELECT RID, rownum rnum
      FROM
          ( SELECT rowid AS RID
            FROM members
            WHERE last_name = 'Smith'
            ORDER BY joindate )
      WHERE rownum <= 100 )
WHERE rnum >= 1
  AND RID = members.rowid
The difference between this and the one at Ask Tom's is that my innermost query just returns the ROWID. Then in the outermost query we join the ROWIDs returned to the members table, after we have pruned the ROWIDs down to only the chunk of 100 we want. This makes it MUCH faster (verifiably) on our large tables, as it is able to use the index on the innermost query (well... read on).
    The problem I have is this:
SELECT rowid AS RID
FROM members
WHERE last_name = 'Smith'
ORDER BY joindate
This will use the index on the predicate column (last_name) instead of the unique index I have defined on the joindate column (joindate, sequence). (Verifiable with explain plan.) It is much slower this way on a large table. So I can hint it using either of the following methods:
SELECT /*+ index(members, joindate_idx) */ rowid AS RID
FROM members
WHERE last_name = 'Smith'
ORDER BY joindate
or:
SELECT /*+ first_rows(100) */ rowid AS RID
FROM members
WHERE last_name = 'Smith'
ORDER BY joindate
Either way, it now uses the index on the ORDER BY column (joindate_idx), so now it is much faster as it does not have to do a sort (remember, VERY large table, millions of records). So that seems good. But now, on my outermost query, I join the rowid with the meaningful columns of data from the members table, as commented below:
SELECT members.*      -- Select all data from members table
FROM members,         -- members table added to FROM clause
    ( SELECT RID, rownum rnum
      FROM
          ( SELECT /*+ index(members, joindate_idx) */ rowid AS RID   -- Hint is ignored now that I am joining in the outer query
            FROM members
            WHERE last_name = 'Smith'
            ORDER BY joindate )
      WHERE rownum <= 100 )
WHERE rnum >= 1
  AND RID = members.rowid           -- Merge the members table on the rowid we pulled from the inner queries
Once I do this join, it goes back to using the predicate index (last_name) and has to perform the sort once it finds all matching values (which can be a lot in this table, there is high cardinality on some columns).
So my question is: in the full query above, is there any way I can get it to use the ORDER BY column's index to prevent it from having to do a sort? The join is what causes it to revert back to using the predicate index, even with hints. Remove the join and just return the ROWIDs for those 100 records and it flies, even on 10 million records.
It'd be great if there were some generic hint that could accomplish this, such that if we change the table/columns/indexes we don't need to change the hint (the FIRST_ROWS hint is a good example of this, while the INDEX hint is the opposite), but any help would be appreciated. I can provide explain plans for any of the above if needed.
    Thanks!

    Lakmal Rajapakse wrote:
    OK here is an example to illustrate the advantage:
    SQL> set autot traceonly
    SQL> select * from (
    2  select a.*, rownum x  from
    3  (
    4  select a.* from aoswf.events a
    5  order by EVENT_DATETIME
    6  ) a
    7  where rownum <= 1200
    8  )
    9  where x >= 1100
    10  /
    101 rows selected.
    Execution Plan
    Plan hash value: 3711662397
    | Id  | Operation                      | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT               |            |  1200 |   521K|   192   (0)| 00:00:03 |
    |*  1 |  VIEW                          |            |  1200 |   521K|   192   (0)| 00:00:03 |
    |*  2 |   COUNT STOPKEY                |            |       |       |            |          |
    |   3 |    VIEW                        |            |  1200 |   506K|   192   (0)| 00:00:03 |
    |   4 |     TABLE ACCESS BY INDEX ROWID| EVENTS     |   253M|    34G|   192   (0)| 00:00:03 |
    |   5 |      INDEX FULL SCAN           | EVEN_IDX02 |  1200 |       |     2   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    1 - filter("X">=1100)
    2 - filter(ROWNUM<=1200)
    Statistics
    0  recursive calls
    0  db block gets
    443  consistent gets
    0  physical reads
    0  redo size
    25203  bytes sent via SQL*Net to client
    281  bytes received via SQL*Net from client
    8  SQL*Net roundtrips to/from client
    0  sorts (memory)
    0  sorts (disk)
    101  rows processed
    SQL>
    SQL>
    SQL> select * from aoswf.events a, (
    2  select rid, rownum x  from
    3  (
    4  select rowid rid from aoswf.events a
    5  order by EVENT_DATETIME
    6  ) a
    7  where rownum <= 1200
    8  ) b
    9  where x >= 1100
    10  and a.rowid = rid
    11  /
    101 rows selected.
    Execution Plan
    Plan hash value: 2308864810
    | Id  | Operation                   | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT            |            |  1200 |   201K|   261K  (1)| 00:52:21 |
    |   1 |  NESTED LOOPS               |            |  1200 |   201K|   261K  (1)| 00:52:21 |
    |*  2 |   VIEW                      |            |  1200 | 30000 |   260K  (1)| 00:52:06 |
    |*  3 |    COUNT STOPKEY            |            |       |       |            |          |
    |   4 |     VIEW                    |            |   253M|  2895M|   260K  (1)| 00:52:06 |
    |   5 |      INDEX FULL SCAN        | EVEN_IDX02 |   253M|  4826M|   260K  (1)| 00:52:06 |
    |   6 |   TABLE ACCESS BY USER ROWID| EVENTS     |     1 |   147 |     1   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    2 - filter("X">=1100)
    3 - filter(ROWNUM<=1200)
    Statistics
    8  recursive calls
    0  db block gets
    117  consistent gets
    0  physical reads
    0  redo size
    27539  bytes sent via SQL*Net to client
    281  bytes received via SQL*Net from client
    8  SQL*Net roundtrips to/from client
    0  sorts (memory)
    0  sorts (disk)
    101  rows processed
    Lakmal (and OP),
Not sure what advantage you are trying to show here. But considering that we are talking about a pagination query here and the order of records is important, your 2 queries will not always generate output in the same order. Here is the test case:
    SQL> select * from v$version ;
    BANNER
    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod
    PL/SQL Release 10.2.0.1.0 - Production
    CORE     10.2.0.1.0     Production
    TNS for Linux: Version 10.2.0.1.0 - Production
    NLSRTL Version 10.2.0.1.0 - Production
    SQL> show parameter optimizer
    NAME                                 TYPE        VALUE
    optimizer_dynamic_sampling           integer     2
    optimizer_features_enable            string      10.2.0.1
    optimizer_index_caching              integer     0
    optimizer_index_cost_adj             integer     100
    optimizer_mode                       string      ALL_ROWS
    optimizer_secure_view_merging        boolean     TRUE
    SQL> show parameter pga
    NAME                                 TYPE        VALUE
    pga_aggregate_target                 big integer 103M
    SQL> create table t nologging as select * from all_objects where 1 = 2 ;
    Table created.
    SQL> create index t_idx on t(last_ddl_time) nologging ;
    Index created.
    SQL> insert /*+ APPEND */ into t (owner, object_name, object_id, created, last_ddl_time) select owner, object_name, object_id, created, sysdate - dbms_random.value(1, 100) from all_objects order by dbms_random.random;
    40617 rows created.
    SQL> commit ;
    Commit complete.
    SQL> exec dbms_stats.gather_table_stats(user, 'T', cascade=>true);
    PL/SQL procedure successfully completed.
    SQL> select object_id, object_name, created from t, (select rid, rownum rn from (select rowid rid from t order by created desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid ;
    OBJECT_ID OBJECT_NAME                    CREATED
         47686 ALL$OLAP2_JOIN_KEY_COLUMN_USES 28-JUL-2009 08:08:39
         47672 ALL$OLAP2_CUBE_DIM_USES        28-JUL-2009 08:08:39
         47681 ALL$OLAP2_CUBE_MEASURE_MAPS    28-JUL-2009 08:08:39
         47682 ALL$OLAP2_FACT_LEVEL_USES      28-JUL-2009 08:08:39
         47685 ALL$OLAP2_AGGREGATION_USES     28-JUL-2009 08:08:39
         47692 ALL$OLAP2_CATALOGS             28-JUL-2009 08:08:39
         47665 ALL$OLAPMR_FACTTBLKEYMAPS      28-JUL-2009 08:08:39
         47688 ALL$OLAP2_DIM_LEVEL_ATTR_MAPS  28-JUL-2009 08:08:39
         47689 ALL$OLAP2_DIM_LEVELS_KEYMAPS   28-JUL-2009 08:08:39
         47669 ALL$OLAP9I2_HIER_DIMENSIONS    28-JUL-2009 08:08:39
         47666 ALL$OLAP9I1_HIER_DIMENSIONS    28-JUL-2009 08:08:39
    11 rows selected.
    SQL> select object_id, object_name, last_ddl_time from t, (select rid, rownum rn from (select rowid rid from t order by last_ddl_time desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid ;
    OBJECT_ID OBJECT_NAME                    LAST_DDL_TIME
         11749 /b9fe5b99_OraRTStatementComman 06-FEB-2010 03:43:49
         13133 oracle/jdbc/driver/OracleLog$3 06-FEB-2010 03:45:44
         37534 com/sun/mail/smtp/SMTPMessage  06-FEB-2010 03:46:14
         36145 /4e492b6f_SerProfileToClassErr 06-FEB-2010 03:11:09
         26815 /7a628fb8_DefaultHSBChooserPan 06-FEB-2010 03:26:55
         16695 /2940a364_RepIdDelegator_1_3   06-FEB-2010 03:38:17
         36539 sun/io/ByteToCharMacHebrew     06-FEB-2010 03:28:57
         14044 /d29b81e1_OldHeaders           06-FEB-2010 03:12:12
         12920 /25f8f3a5_BasicSplitPaneUI     06-FEB-2010 03:11:06
         42266 SI_GETCLRHSTGRFTR              06-FEB-2010 03:40:20
         15752 /2f494dce_JDWPThreadReference  06-FEB-2010 03:09:31
    11 rows selected.
    SQL> select object_id, object_name, last_ddl_time from (select t1.*, rownum rn from (select * from t order by last_ddl_time desc) t1 where rownum <= 1200) where rn >= 1190 ;
    OBJECT_ID OBJECT_NAME                    LAST_DDL_TIME
         37534 com/sun/mail/smtp/SMTPMessage  06-FEB-2010 03:46:14
         13133 oracle/jdbc/driver/OracleLog$3 06-FEB-2010 03:45:44
         11749 /b9fe5b99_OraRTStatementComman 06-FEB-2010 03:43:49
         42266 SI_GETCLRHSTGRFTR              06-FEB-2010 03:40:20
         16695 /2940a364_RepIdDelegator_1_3   06-FEB-2010 03:38:17
         36539 sun/io/ByteToCharMacHebrew     06-FEB-2010 03:28:57
         26815 /7a628fb8_DefaultHSBChooserPan 06-FEB-2010 03:26:55
         14044 /d29b81e1_OldHeaders           06-FEB-2010 03:12:12
         36145 /4e492b6f_SerProfileToClassErr 06-FEB-2010 03:11:09
         12920 /25f8f3a5_BasicSplitPaneUI     06-FEB-2010 03:11:06
         15752 /2f494dce_JDWPThreadReference  06-FEB-2010 03:09:31
    11 rows selected.
    SQL> select object_id, object_name, last_ddl_time from t, (select rid, rownum rn from (select rowid rid from t order by last_ddl_time desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid order by last_ddl_time desc ;
    OBJECT_ID OBJECT_NAME                    LAST_DDL_TIME
         37534 com/sun/mail/smtp/SMTPMessage  06-FEB-2010 03:46:14
         13133 oracle/jdbc/driver/OracleLog$3 06-FEB-2010 03:45:44
         11749 /b9fe5b99_OraRTStatementComman 06-FEB-2010 03:43:49
         42266 SI_GETCLRHSTGRFTR              06-FEB-2010 03:40:20
         16695 /2940a364_RepIdDelegator_1_3   06-FEB-2010 03:38:17
         36539 sun/io/ByteToCharMacHebrew     06-FEB-2010 03:28:57
         26815 /7a628fb8_DefaultHSBChooserPan 06-FEB-2010 03:26:55
         14044 /d29b81e1_OldHeaders           06-FEB-2010 03:12:12
         36145 /4e492b6f_SerProfileToClassErr 06-FEB-2010 03:11:09
         12920 /25f8f3a5_BasicSplitPaneUI     06-FEB-2010 03:11:06
         15752 /2f494dce_JDWPThreadReference  06-FEB-2010 03:09:31
    11 rows selected.
    SQL> set autotrace traceonly
    SQL> select object_id, object_name, last_ddl_time from t, (select rid, rownum rn from (select rowid rid from t order by last_ddl_time desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid order by last_ddl_time desc
      2  ;
    11 rows selected.
    Execution Plan
    Plan hash value: 44968669
    | Id  | Operation                       | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                |       |  1200 | 91200 |   180   (2)| 00:00:03 |
    |   1 |  SORT ORDER BY                  |       |  1200 | 91200 |   180   (2)| 00:00:03 |
    |*  2 |   HASH JOIN                     |       |  1200 | 91200 |   179   (2)| 00:00:03 |
    |*  3 |    VIEW                         |       |  1200 | 30000 |    98   (0)| 00:00:02 |
    |*  4 |     COUNT STOPKEY               |       |       |       |            |          |
    |   5 |      VIEW                       |       | 40617 |   475K|    98   (0)| 00:00:02 |
    |   6 |       INDEX FULL SCAN DESCENDING| T_IDX | 40617 |   793K|    98   (0)| 00:00:02 |
    |   7 |    TABLE ACCESS FULL            | T     | 40617 |  2022K|    80   (2)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - access("T".ROWID="T1"."RID")
       3 - filter("RN">=1190)
       4 - filter(ROWNUM<=1200)
    Statistics
              1  recursive calls
              0  db block gets
            348  consistent gets
              0  physical reads
              0  redo size
           1063  bytes sent via SQL*Net to client
            385  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              1  sorts (memory)
              0  sorts (disk)
             11  rows processed
    SQL> select object_id, object_name, last_ddl_time from (select t1.*, rownum rn from (select * from t order by last_ddl_time desc) t1 where rownum <= 1200) where rn >= 1190 ;
    11 rows selected.
    Execution Plan
    Plan hash value: 882605040
    | Id  | Operation                | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT         |      |  1200 | 62400 |    80   (2)| 00:00:01 |
    |*  1 |  VIEW                    |      |  1200 | 62400 |    80   (2)| 00:00:01 |
    |*  2 |   COUNT STOPKEY          |      |       |       |            |          |
    |   3 |    VIEW                  |      | 40617 |  1546K|    80   (2)| 00:00:01 |
    |*  4 |     SORT ORDER BY STOPKEY|      | 40617 |  2062K|    80   (2)| 00:00:01 |
    |   5 |      TABLE ACCESS FULL   | T    | 40617 |  2062K|    80   (2)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter("RN">=1190)
       2 - filter(ROWNUM<=1200)
       4 - filter(ROWNUM<=1200)
    Statistics
              0  recursive calls
              0  db block gets
            343  consistent gets
              0  physical reads
              0  redo size
           1063  bytes sent via SQL*Net to client
            385  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              1  sorts (memory)
              0  sorts (disk)
             11  rows processed
    SQL> select object_id, object_name, last_ddl_time from t, (select rid, rownum rn from (select rowid rid from t order by last_ddl_time desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid ;
    11 rows selected.
    Execution Plan
    Plan hash value: 168880862
    | Id  | Operation                      | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT               |       |  1200 | 91200 |   179   (2)| 00:00:03 |
    |*  1 |  HASH JOIN                     |       |  1200 | 91200 |   179   (2)| 00:00:03 |
    |*  2 |   VIEW                         |       |  1200 | 30000 |    98   (0)| 00:00:02 |
    |*  3 |    COUNT STOPKEY               |       |       |       |            |          |
    |   4 |     VIEW                       |       | 40617 |   475K|    98   (0)| 00:00:02 |
    |   5 |      INDEX FULL SCAN DESCENDING| T_IDX | 40617 |   793K|    98   (0)| 00:00:02 |
    |   6 |   TABLE ACCESS FULL            | T     | 40617 |  2022K|    80   (2)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - access("T".ROWID="T1"."RID")
       2 - filter("RN">=1190)
       3 - filter(ROWNUM<=1200)
    Statistics
              0  recursive calls
              0  db block gets
            349  consistent gets
              0  physical reads
              0  redo size
           1063  bytes sent via SQL*Net to client
            385  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
             11  rows processed
    SQL> select object_id, object_name, last_ddl_time from (select t1.*, rownum rn from (select * from t order by last_ddl_time desc) t1 where rownum <= 1200) where rn >= 1190 order by last_ddl_time desc ;
    11 rows selected.
    Execution Plan
    Plan hash value: 882605040
| Id  | Operation                | Name | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT         |      |  1200 | 62400 |    80   (2)| 00:00:01 |
|*  1 |  VIEW                    |      |  1200 | 62400 |    80   (2)| 00:00:01 |
|*  2 |   COUNT STOPKEY          |      |       |       |            |          |
|   3 |    VIEW                  |      | 40617 |  1546K|    80   (2)| 00:00:01 |
|*  4 |     SORT ORDER BY STOPKEY|      | 40617 |  2062K|    80   (2)| 00:00:01 |
|   5 |      TABLE ACCESS FULL   | T    | 40617 |  2062K|    80   (2)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter("RN">=1190)
       2 - filter(ROWNUM<=1200)
       4 - filter(ROWNUM<=1200)
    Statistics
         175  recursive calls
           0  db block gets
         388  consistent gets
           0  physical reads
           0  redo size
           1063  bytes sent via SQL*Net to client
         385  bytes received via SQL*Net from client
           2  SQL*Net roundtrips to/from client
           4  sorts (memory)
           0  sorts (disk)
          11  rows processed
    SQL> set autotrace off
SQL> spool off
As you will see, the join query here has to have an ORDER BY clause at the end to ensure that records are correctly sorted. You cannot rely on the optimizer choosing the NESTED LOOP join method and, as the above example shows, when the optimizer chooses a HASH JOIN, Oracle is free to return rows in no particular order.
The query that does not involve a join always returns rows in the desired order. Adding an ORDER BY adds a step to the plan for the query using the join but does not affect the other query.
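Applying that conclusion back to the original members query, a sketch (using the table, column, and index names given earlier in the thread) would be:
SELECT members.*
FROM members,
    ( SELECT RID, rownum rnum
      FROM
          ( SELECT rowid AS RID
            FROM members
            WHERE last_name = 'Smith'
            ORDER BY joindate )
      WHERE rownum <= 100 )
WHERE rnum >= 1
  AND RID = members.rowid
ORDER BY joindate;
The outer ORDER BY guarantees the final ordering regardless of which join method the optimizer picks.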

  • "analyze index"  vs  "rebuild index"

    Hi,
I don't understand the difference between "analyze index" and "rebuild index".
I have a table where I do a lot of inserts, updates, and queries. What is the best thing to do?
    thanks
    Giordano

When you use the dbms_stats.gather_schema_stats package with the cascade=>true option, you are also collecting stats for the indexes; there is no need to collect stats separately using dbms_stats.gather_index_stats.
Of course, but I was referring to the rebuild index question. Therefore I only mentioned GATHER_INDEX_STATS.
Auto_sample_size has many problems/bugs in 9i.
OK, didn't know that - I'm using 10gR2.
    But this discussion made me curious. So I tried something (10gR2):
    CREATE TABLE BIG NOLOGGING AS
    WITH GEN AS (
    SELECT ROWNUM ID FROM ALL_OBJECTS WHERE ROWNUM <=10000)
    SELECT V1.ID,RPAD('A',10) C FROM GEN V1,GEN V2
    WHERE ROWNUM <=10000000;
    SELECT COUNT(*) FROM BIG;
    COUNT(*)
    10000000
    So I had a Table containing 10 Million rows. Now I indexed ID:
CREATE INDEX BIG_IDX ON BIG(ID);
    I tested two different methods:
    1.) GATHER_TABLE_STATS with estimate 10%
    EXEC DBMS_STATS.GATHER_TABLE_STATS(TABNAME=>'BIG',OWNNAME=>'DIMITRI',CASCADE=>TRUE,ESTIMATE_PERCENT=>10);
It took about 6 seconds (I only set timing on in SQL*Plus, no 10046 trace). Now I checked the estimated values:
    SELECT NUM_ROWS,SAMPLE_SIZE,ABS(10000000-NUM_ROWS)/100000 VARIANCE,'TABLE' OBJECT FROM USER_TABLES WHERE TABLE_NAME='BIG'
    UNION ALL
    SELECT NUM_ROWS,SAMPLE_SIZE,ABS(10000000-NUM_ROWS)/100000 VARIANCE,'INDEX' OBJECT FROM USER_INDEXES WHERE INDEX_NAME='BIG_IDX';
    NUM_ROWS SAMPLE_SIZE VARIANCE OBJEC
    9985220 998522 ,1478 TABLE
    9996210 999621 ,0379 INDEX
    2.) GATHER_TABLE_STATS with DBMS_STATS.AUTO_SAMPLE_SIZE
    EXEC DBMS_STATS.DELETE_TABLE_STATS(OWNNAME=>'DIMITRI',TABNAME=>'BIG');
    EXEC DBMS_STATS.GATHER_TABLE_STATS(TABNAME=>'BIG',OWNNAME=>'DIMITRI',CASCADE=>TRUE,ESTIMATE_PERCENT=>DBMS_STATS.AUTO_SAMPLE_SIZE);
    It took about 1,5 seconds. Now the results:
    NUM_ROWS SAMPLE_SIZE VARIANCE OBJEC
    9826851 4715 1,73149 TABLE
    10262432 561326 2,62432 INDEX
The estimate 10% was more exact - though a 1.7 and 2.6 percent variance is still OK. It's also very interesting that using AUTO_SAMPLE_SIZE causes Oracle to use a 5% estimate for the index and a 0.5% estimate for the table.
I tried again with a table containing only 1 million records, and Oracle did an estimate with 100% for the index.
So for me, I will continue using AUTO_SAMPLE_SIZE. It's very flexible, fast and accurate.
    Dim
    PS: Is there a way to format code like one can do in HTML using <code> or <pre>?
