Gather table stats taking longer for large tables

Version : 11.2
I've noticed that gathering stats (using dbms_stats.gather_table_stats) takes longer for large tables.
Since the row count needs to be calculated, a big table's stats collection would understandably take slightly longer (running SELECT COUNT(*) internally).
But for a non-partitioned table with 3 million rows, it took 12 minutes to collect the stats? Apart from the row count and index info, what other information does gather table stats collect?
Does table size actually matter for stats collection?

Max wrote:
Version : 11.2
I've noticed that gathering stats (using dbms_stats.gather_table_stats) takes longer for large tables.
Since the row count needs to be calculated, a big table's stats collection would understandably take slightly longer (running SELECT COUNT(*) internally).
But for a non-partitioned table with 3 million rows, it took 12 minutes to collect the stats? Apart from the row count and index info, what other information does gather table stats collect?
09:40:05 SQL> desc user_tables
 Name                           Null?    Type
 ------------------------------ -------- ------------
 TABLE_NAME                     NOT NULL VARCHAR2(30)
 TABLESPACE_NAME                         VARCHAR2(30)
 CLUSTER_NAME                            VARCHAR2(30)
 IOT_NAME                                VARCHAR2(30)
 STATUS                                  VARCHAR2(8)
 PCT_FREE                                NUMBER
 PCT_USED                                NUMBER
 INI_TRANS                               NUMBER
 MAX_TRANS                               NUMBER
 INITIAL_EXTENT                          NUMBER
 NEXT_EXTENT                             NUMBER
 MIN_EXTENTS                             NUMBER
 MAX_EXTENTS                             NUMBER
 PCT_INCREASE                            NUMBER
 FREELISTS                               NUMBER
 FREELIST_GROUPS                         NUMBER
 LOGGING                                 VARCHAR2(3)
 BACKED_UP                               VARCHAR2(1)
 NUM_ROWS                                NUMBER
 BLOCKS                                  NUMBER
 EMPTY_BLOCKS                            NUMBER
 AVG_SPACE                               NUMBER
 CHAIN_CNT                               NUMBER
 AVG_ROW_LEN                             NUMBER
 AVG_SPACE_FREELIST_BLOCKS               NUMBER
 NUM_FREELIST_BLOCKS                     NUMBER
 DEGREE                                  VARCHAR2(10)
 INSTANCES                               VARCHAR2(10)
 CACHE                                   VARCHAR2(5)
 TABLE_LOCK                              VARCHAR2(8)
 SAMPLE_SIZE                             NUMBER
 LAST_ANALYZED                           DATE
 PARTITIONED                             VARCHAR2(3)
 IOT_TYPE                                VARCHAR2(12)
 TEMPORARY                               VARCHAR2(1)
 SECONDARY                               VARCHAR2(1)
 NESTED                                  VARCHAR2(3)
 BUFFER_POOL                             VARCHAR2(7)
 FLASH_CACHE                             VARCHAR2(7)
 CELL_FLASH_CACHE                        VARCHAR2(7)
 ROW_MOVEMENT                            VARCHAR2(8)
 GLOBAL_STATS                            VARCHAR2(3)
 USER_STATS                              VARCHAR2(3)
 DURATION                                VARCHAR2(15)
 SKIP_CORRUPT                            VARCHAR2(8)
 MONITORING                              VARCHAR2(3)
 CLUSTER_OWNER                           VARCHAR2(30)
 DEPENDENCIES                            VARCHAR2(8)
 COMPRESSION                             VARCHAR2(8)
 COMPRESS_FOR                            VARCHAR2(12)
 DROPPED                                 VARCHAR2(3)
 READ_ONLY                               VARCHAR2(3)
 SEGMENT_CREATED                         VARCHAR2(3)
 RESULT_CACHE                            VARCHAR2(7)
 09:40:10 SQL>
Does table size actually matter for stats collection?
Yes.
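Beyond the row count, a gather also collects block counts, average row length, per-column statistics (number of distinct values, nulls, high/low values, histograms) and, with cascade, index statistics; those populate most of the columns above, and they are where the time goes. A minimal sketch of how to inspect what was collected, using EMP as a placeholder table name:
-- Gather stats, then look at what was collected (EMP is a placeholder).
exec dbms_stats.gather_table_stats(ownname => USER, tabname => 'EMP', cascade => TRUE);
-- Table-level stats: row count, blocks, average row length, sample size.
SELECT num_rows, blocks, avg_row_len, sample_size, last_analyzed
FROM user_tables
WHERE table_name = 'EMP';
-- Column-level stats: distinct values, nulls, and any histogram built.
SELECT column_name, num_distinct, num_nulls, histogram
FROM user_tab_col_statistics
WHERE table_name = 'EMP';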
Handle: Max
Status Level: Newbie
Registered: Nov 10, 2008
Total Posts: 155
Total Questions: 80 (49 unresolved)
Why so many unanswered questions?

Similar Messages

  • Gather table stats takes time for big table

The table has millions of records and is partitioned, with one index. When I analyze it using the following syntax, it takes more than 5 hrs.
    I tried auto sample size and also changed the estimate percentage to values like 20, 50, 70 etc., but the time is almost the same.
    exec dbms_stats.gather_table_stats(ownname=>'SYSADM', tabname=>'TEST', granularity=>'ALL', estimate_percent=>100, cascade=>TRUE);
    What should I do to reduce the analyze time for big tables? Can anyone help me?

Hello,
    The behaviour of ESTIMATE_PERCENT may change from one release to another.
    In some releases, when you specify a "too high" (>25%, ...) ESTIMATE_PERCENT, you in fact collect the statistics over 100% of the rows, as in COMPUTE mode:
    Using DBMS_STATS.GATHER_TABLE_STATS With ESTIMATE_PERCENT Parameter Samples All Rows [ID 223065.1]
    For later releases, *10g* or *11g*, you can use the following value:
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE
    In fact, you may use it even in *9.2*, but in that release it is recommended to use a specific estimate value.
    Moreover, starting with *10.1* it is possible to schedule the statistics collection using DBMS_SCHEDULER and specify a window so that the job doesn't run during production hours.
    So, the answer may depend on the Oracle release and also on the application (SAP, PeopleSoft, ...).
    Best regards,
    Jean-Valentin
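    For illustration, a minimal sketch of that suggestion applied to the table from the original post (SYSADM.TEST); granularity=>'AUTO' is an assumption here, letting Oracle decide which partitions to gather instead of forcing 'ALL':
    begin
      dbms_stats.gather_table_stats(
        ownname          => 'SYSADM',
        tabname          => 'TEST',
        estimate_percent => dbms_stats.auto_sample_size, -- let Oracle choose the sample
        granularity      => 'AUTO',                      -- gather only what is needed
        cascade          => true);                       -- include index statistics
    end;
    /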

  • CDHDR table query taking long time

    Hi all,
A select query on the CDHDR table is taking a long time. In the WHERE condition I am giving OBJECTCLAS = 'MAT_FULL', udate = sy-datum and langu = 'EN'.
    Any suggestion to improve the performance? I want to select all the articles which were changed on the current date.
    regards
    shibu

    This will always be slow for large data volumes, since CDHDR is designed for quick access by object ID (in this case material number), not by date.
    I'm afraid you would need to introduce a secondary index on OBJECTCLAS and UDATE, if that query is crucial enough to warrant the additional disk space and processing time taken by the new index.
    Greetings
    Thomas

  • HS connection to MySQL fails for large table

    Hello,
I have set up an HS connection to a MySQL database using an ODBC DSN (ODBC 3.51 driver). My Oracle box runs version 10.2.0.1 on Windows 2003 R2. The MySQL version is 4.1.22, running on a different machine with the same OS.
    I completed the connection through a database link, which works fine in SQL*Plus when selecting small MySQL tables. However, I keep getting an out-of-memory error when selecting certain large tables from the MySQL database. Previously, I had tested the DSN and run the same SELECT in Access without any error. This is the error thrown by SQL*Plus:
    SQL> select * from progressnotes@mysql_rmg where "encounterID" = 224720;
    select * from progressnotes@mysql_rmg where "encounterID" = 224720
    ERROR at line 1:
    ORA-00942: table or view does not exist
    [Generic Connectivity Using ODBC][MySQL][ODBC 3.51
        Driver][mysqld-4.1.22-community-nt]Lost connection to MySQL server during query
    (SQL State: S1T00; SQL Code: 2013)
    ORA-02063: preceding 2 lines from MYSQL_RMG
    I traced the HS connection and here is the result from the .trc file:
    Oracle Corporation --- THURSDAY JUN 12 2008 11:19:51.809
    Heterogeneous Agent Release
    10.2.0.1.0
    (0) [Generic Connectivity Using ODBC] version: 4.6.1.0.0070
    (0) connect string is: defTdpName=MYSQL_RMG;SYNTAX=(ORACLE8_HOA, BASED_ON=ORACLE8,
    (0) IDENTIFIER_QUOTE_CHAR="",
    (0) CASE_SENSITIVE=CASE_SENSITIVE_QUOTE);BINDING=<navobj><binding><datasources><da-
    (0) tasource name='MYSQL_RMG' type='ODBC'
    (0) connect='MYSQL_RMG'><driverProperties/></datasource></datasources><remoteMachi-
    (0) nes/><environment><optimizer noFlattener='true'/><misc year2000Policy='-1'
    (0) consumerApi='1' sessionBehavior='4'/><queryProcessor parserDepth='2000'
    (0) tokenSize='1000' noInsertParameterization='true'
    noThreadedReadAhead='true'
    (0) noCommandReuse='true'/></environment></binding></navobj>
    (0) ORACLE GENERIC GATEWAY Log File Started at 2008-06-12T11:19:51
    (0) hoadtab(26); Entered.
    (0) Table 1 - PROGRESSNOTES
    (0) [MySQL][ODBC 3.51 Driver][mysqld-4.1.22-community-nt]MySQL client ran out of
    (0) memory (SQL State: S1T00; SQL Code: 2008)
    (0) (Last message occurred 2 times)
    (0)
    (0) hoapars(15); Entered.
    (0) Sql Text is:
    (0) SELECT * FROM "PROGRESSNOTES"
    (0) [MySQL][ODBC 3.51 Driver][mysqld-4.1.22-community-nt]Lost connection to MySQL
    (0) server during query (SQL State: S1T00; SQL Code: 2013)
    (0) (Last message occurred 2 times)
    (0)
    (0) [A00D] Failed to open table MYSQL_RMG:PROGRESSNOTES
    (0)
    (0) [MySQL][ODBC 3.51 Driver]MySQL server has gone away (SQL State: S1T00; SQL
    (0) Code: 2006)
    (0) (Last message occurred 2 times)
    (0)
    (0) [MySQL][ODBC 3.51 Driver]MySQL server has gone away (SQL State: S1T00; SQL
    (0) Code: 2006)
    (0) (Last message occurred 2 times)
    (0)
    (0) [S1000] [9013]General error in nvITrans_Commit - rc = -1. Please refer to the
    (0) log file for details.
    (0) [MySQL][ODBC 3.51 Driver]MySQL server has gone away (SQL State: S1T00; SQL
    (0) Code: 2006)
    (0) (Last message occurred 2 times)
    (0)
    (0) [S1000] [9013]General error in nvITrans_Rollback - rc = -1. Please refer to
    (0) the log file for details.
    (0) Closing log file at THU JUN 12 11:20:38 2008.
I have read the MySQL documentation and apparently there's a "Don't Cache Result (forward only cursors)" parameter in the ODBC DSN that needs to be checked in order to keep the results on the MySQL server side instead of the driver side, but checking that parameter doesn't work for the HS connection. Instead, the SQL*Plus session throws the following message when selecting the same large table:
    SQL> select * from progressnotes@mysql_rmg where "encounterID" = 224720;
    select * from progressnotes@mysql_rmg where "encounterID" = 224720
    ERROR at line 1:
    ORA-02068: following severe error from MYSQL_RMG
    ORA-28511: lost RPC connection to heterogeneous remote agent using
    SID=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=10.0.0.120)(PORT=1521))(CONNECT_DATA=(SID=MYSQL_RMG)))
Curiously enough, after checking the parameter, the Access connection through the ODBC DSN seems to improve!
    Is there an additional parameter that needs to be set up in the inithsodbc.ora perhaps? These are the current HS parameters:
    # HS init parameters
    HS_FDS_CONNECT_INFO = MYSQL_RMG
    HS_FDS_TRACE_LEVEL = ON
    My SID_LIST_LISTENER entry is:
    (SID_DESC =
    (PROGRAM = HSODBC)
    (SID_NAME = MYSQL_RMG)
    (ORACLE_HOME = D:\oracle\product\10.2.0\db_1)
    Finally, here is my TNSNAMES.ORA entry for the HS connection:
    MYSQL_RMG =
    (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 10.0.0.120)(PORT = 1521))
    (CONNECT_DATA =
    (SID = MYSQL_RMG)
    (HS = OK)
Your advice will be greatly appreciated,
    Thanks,
    Luis

First of all, please be aware that HSODBC V10 has been desupported and DG4ODBC should be used instead.
    The root cause of the problem you describe could be a timeout in the ODBC driver (especially taking into account your comment that it happens only for larger tables):
    (0) [MySQL][ODBC 3.51 Driver]MySQL server has gone away (SQL State: S1T00; SQL
    (0) Code: 2006)
    This indicates that the driver or the database aborts the connection due to a timeout.
    Check the wait_timeout MySQL variable on the server and increase it.
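    For illustration, a quick sketch of checking and raising that timeout on the MySQL server (the 28800-second value is just an example; add it to my.cnf to make it survive a restart):
    -- Check the current timeout settings:
    SHOW VARIABLES LIKE '%timeout%';
    -- Raise the idle-connection timeouts (example value: 8 hours):
    SET GLOBAL wait_timeout = 28800;
    SET GLOBAL interactive_timeout = 28800;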

  • Is table maintenance generator only for custom table?

Hi,
    I have a doubt: is the table maintenance generator only for custom tables?

Hi swamya,
    The Table Maintenance Generator is used to create/change/delete entries in a particular table.
    In the production system, end users will not have access to transaction codes like SE11 and SE16, and developers will not have access to many transaction codes including those two. To view the contents of a database table in production we use SE16N. All these authorizations are maintained by the BASIS team via access profiles. So in order to edit or create the contents of a database table, we should go for the table maintenance generator. In real time, authorizations will be maintained in the production system.
    The second reason is that we can edit or create multiple entries at a time using the table maintenance generator.
    Apart from that, we have options like 'Enter conditions' in the table maintenance screen SM30.
    Hope this helps in clearing your doubt.
    Regards
    Saurabh

  • Pagination query help needed for large table - force a different index

    I'm using a slight modification of the pagination query from over at Ask Tom's: [http://www.oracle.com/technology/oramag/oracle/07-jan/o17asktom.html]
    Mine looks like this when fetching the first 100 rows of all members with last name Smith, ordered by join date:
SELECT members.*
    FROM members,
        (SELECT RID, rownum rnum
         FROM
            (SELECT rowid as RID
             FROM members
             WHERE last_name = 'Smith'
             ORDER BY joindate)
         WHERE rownum <= 100)
    WHERE rnum >= 1
    AND RID = members.rowid
    The difference between this and the one at Ask Tom's is that my innermost query just returns the ROWID. Then in the outermost query we join the ROWIDs returned to the members table, after we have pruned the ROWIDs down to only the chunk of 100 we want. This makes it MUCH faster (verifiably) on our large tables, as it is able to use the index on the innermost query (well... read on).
    The problem I have is this:
SELECT rowid as RID
    FROM members
    WHERE last_name = 'Smith'
    ORDER BY joindate
    This will use the index for the predicate column (last_name) instead of the unique index I have defined for the joindate column (joindate, sequence). (Verifiable with explain plan.) It is much slower this way on a large table. So I can hint it using either of the following methods:
    SELECT /*+ index(members, joindate_idx) */ rowid as RID
    FROM members
    WHERE last_name = 'Smith'
    ORDER BY joindate
    SELECT /*+ first_rows(100) */ rowid as RID
    FROM members
    WHERE last_name = 'Smith'
ORDER BY joindate
    Either way, it now uses the index of the ORDER BY column (joindate_idx), so now it is much faster as it does not have to do a sort (remember, VERY large table, millions of records). So that seems good. But now, on my outermost query, I join the rowid with the meaningful columns of data from the members table, as commented below:
SELECT members.*      -- Select all data from members table
    FROM members,         -- members table added to FROM clause
        (SELECT RID, rownum rnum
         FROM
            (SELECT /*+ index(members, joindate_idx) */ rowid as RID   -- Hint is ignored now that I am joining in the outer query
             FROM members
             WHERE last_name = 'Smith'
             ORDER BY joindate)
         WHERE rownum <= 100)
    WHERE rnum >= 1
    AND RID = members.rowid           -- Merge the members table on the rowid we pulled from the inner queries
    Once I do this join, it goes back to using the predicate index (last_name) and has to perform the sort once it finds all matching values (which can be a lot in this table, there is high cardinality on some columns).
    So my question is, in the full query above, is there any way I can get it to use the ORDER BY column for indexing to prevent it from having to do a sort? The join is what causes it to revert back to using the predicate index, even with hints. Remove the join and just return the ROWIDs for those 100 records and it flies, even on 10 million records.
    It'd be great if there was some generic hint that could accomplish this, such that if we change the table/columns/indexes, we don't need to change the hint (the FIRST_ROWS hint is a good example of this, while the INDEX hint is the opposite), but any help would be appreciated. I can provide explain plans for any of the above if needed.
    Thanks!

    Lakmal Rajapakse wrote:
    OK here is an example to illustrate the advantage:
    SQL> set autot traceonly
    SQL> select * from (
    2  select a.*, rownum x  from
    3  (
    4  select a.* from aoswf.events a
    5  order by EVENT_DATETIME
    6  ) a
    7  where rownum <= 1200
    8  )
    9  where x >= 1100
    10  /
    101 rows selected.
    Execution Plan
    Plan hash value: 3711662397
    | Id  | Operation                      | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT               |            |  1200 |   521K|   192   (0)| 00:00:03 |
    |*  1 |  VIEW                          |            |  1200 |   521K|   192   (0)| 00:00:03 |
    |*  2 |   COUNT STOPKEY                |            |       |       |            |          |
    |   3 |    VIEW                        |            |  1200 |   506K|   192   (0)| 00:00:03 |
    |   4 |     TABLE ACCESS BY INDEX ROWID| EVENTS     |   253M|    34G|   192   (0)| 00:00:03 |
    |   5 |      INDEX FULL SCAN           | EVEN_IDX02 |  1200 |       |     2   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    1 - filter("X">=1100)
    2 - filter(ROWNUM<=1200)
    Statistics
    0  recursive calls
    0  db block gets
    443  consistent gets
    0  physical reads
    0  redo size
    25203  bytes sent via SQL*Net to client
    281  bytes received via SQL*Net from client
    8  SQL*Net roundtrips to/from client
    0  sorts (memory)
    0  sorts (disk)
    101  rows processed
    SQL>
    SQL>
    SQL> select * from aoswf.events a, (
    2  select rid, rownum x  from
    3  (
    4  select rowid rid from aoswf.events a
    5  order by EVENT_DATETIME
    6  ) a
    7  where rownum <= 1200
    8  ) b
    9  where x >= 1100
    10  and a.rowid = rid
    11  /
    101 rows selected.
    Execution Plan
    Plan hash value: 2308864810
    | Id  | Operation                   | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT            |            |  1200 |   201K|   261K  (1)| 00:52:21 |
    |   1 |  NESTED LOOPS               |            |  1200 |   201K|   261K  (1)| 00:52:21 |
    |*  2 |   VIEW                      |            |  1200 | 30000 |   260K  (1)| 00:52:06 |
    |*  3 |    COUNT STOPKEY            |            |       |       |            |          |
    |   4 |     VIEW                    |            |   253M|  2895M|   260K  (1)| 00:52:06 |
    |   5 |      INDEX FULL SCAN        | EVEN_IDX02 |   253M|  4826M|   260K  (1)| 00:52:06 |
    |   6 |   TABLE ACCESS BY USER ROWID| EVENTS     |     1 |   147 |     1   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    2 - filter("X">=1100)
    3 - filter(ROWNUM<=1200)
    Statistics
    8  recursive calls
    0  db block gets
    117  consistent gets
    0  physical reads
    0  redo size
    27539  bytes sent via SQL*Net to client
    281  bytes received via SQL*Net from client
    8  SQL*Net roundtrips to/from client
    0  sorts (memory)
    0  sorts (disk)
    101  rows processed
    Lakmal (and OP),
Not sure what advantage you are trying to show here. But considering that we are talking about a pagination query here and the order of records is important, your two queries will not always generate output in the same order. Here is the test case:
    SQL> select * from v$version ;
    BANNER
    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod
    PL/SQL Release 10.2.0.1.0 - Production
    CORE     10.2.0.1.0     Production
    TNS for Linux: Version 10.2.0.1.0 - Production
    NLSRTL Version 10.2.0.1.0 - Production
    SQL> show parameter optimizer
    NAME                                 TYPE        VALUE
    optimizer_dynamic_sampling           integer     2
    optimizer_features_enable            string      10.2.0.1
    optimizer_index_caching              integer     0
    optimizer_index_cost_adj             integer     100
    optimizer_mode                       string      ALL_ROWS
    optimizer_secure_view_merging        boolean     TRUE
    SQL> show parameter pga
    NAME                                 TYPE        VALUE
    pga_aggregate_target                 big integer 103M
    SQL> create table t nologging as select * from all_objects where 1 = 2 ;
    Table created.
    SQL> create index t_idx on t(last_ddl_time) nologging ;
    Index created.
    SQL> insert /*+ APPEND */ into t (owner, object_name, object_id, created, last_ddl_time) select owner, object_name, object_id, created, sysdate - dbms_random.value(1, 100) from all_objects order by dbms_random.random;
    40617 rows created.
    SQL> commit ;
    Commit complete.
    SQL> exec dbms_stats.gather_table_stats(user, 'T', cascade=>true);
    PL/SQL procedure successfully completed.
    SQL> select object_id, object_name, created from t, (select rid, rownum rn from (select rowid rid from t order by created desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid ;
    OBJECT_ID OBJECT_NAME                    CREATED
         47686 ALL$OLAP2_JOIN_KEY_COLUMN_USES 28-JUL-2009 08:08:39
         47672 ALL$OLAP2_CUBE_DIM_USES        28-JUL-2009 08:08:39
         47681 ALL$OLAP2_CUBE_MEASURE_MAPS    28-JUL-2009 08:08:39
         47682 ALL$OLAP2_FACT_LEVEL_USES      28-JUL-2009 08:08:39
         47685 ALL$OLAP2_AGGREGATION_USES     28-JUL-2009 08:08:39
         47692 ALL$OLAP2_CATALOGS             28-JUL-2009 08:08:39
         47665 ALL$OLAPMR_FACTTBLKEYMAPS      28-JUL-2009 08:08:39
         47688 ALL$OLAP2_DIM_LEVEL_ATTR_MAPS  28-JUL-2009 08:08:39
         47689 ALL$OLAP2_DIM_LEVELS_KEYMAPS   28-JUL-2009 08:08:39
         47669 ALL$OLAP9I2_HIER_DIMENSIONS    28-JUL-2009 08:08:39
         47666 ALL$OLAP9I1_HIER_DIMENSIONS    28-JUL-2009 08:08:39
    11 rows selected.
    SQL> select object_id, object_name, last_ddl_time from t, (select rid, rownum rn from (select rowid rid from t order by last_ddl_time desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid ;
    OBJECT_ID OBJECT_NAME                    LAST_DDL_TIME
         11749 /b9fe5b99_OraRTStatementComman 06-FEB-2010 03:43:49
         13133 oracle/jdbc/driver/OracleLog$3 06-FEB-2010 03:45:44
         37534 com/sun/mail/smtp/SMTPMessage  06-FEB-2010 03:46:14
         36145 /4e492b6f_SerProfileToClassErr 06-FEB-2010 03:11:09
         26815 /7a628fb8_DefaultHSBChooserPan 06-FEB-2010 03:26:55
         16695 /2940a364_RepIdDelegator_1_3   06-FEB-2010 03:38:17
         36539 sun/io/ByteToCharMacHebrew     06-FEB-2010 03:28:57
         14044 /d29b81e1_OldHeaders           06-FEB-2010 03:12:12
         12920 /25f8f3a5_BasicSplitPaneUI     06-FEB-2010 03:11:06
         42266 SI_GETCLRHSTGRFTR              06-FEB-2010 03:40:20
         15752 /2f494dce_JDWPThreadReference  06-FEB-2010 03:09:31
    11 rows selected.
    SQL> select object_id, object_name, last_ddl_time from (select t1.*, rownum rn from (select * from t order by last_ddl_time desc) t1 where rownum <= 1200) where rn >= 1190 ;
    OBJECT_ID OBJECT_NAME                    LAST_DDL_TIME
         37534 com/sun/mail/smtp/SMTPMessage  06-FEB-2010 03:46:14
         13133 oracle/jdbc/driver/OracleLog$3 06-FEB-2010 03:45:44
         11749 /b9fe5b99_OraRTStatementComman 06-FEB-2010 03:43:49
         42266 SI_GETCLRHSTGRFTR              06-FEB-2010 03:40:20
         16695 /2940a364_RepIdDelegator_1_3   06-FEB-2010 03:38:17
         36539 sun/io/ByteToCharMacHebrew     06-FEB-2010 03:28:57
         26815 /7a628fb8_DefaultHSBChooserPan 06-FEB-2010 03:26:55
         14044 /d29b81e1_OldHeaders           06-FEB-2010 03:12:12
         36145 /4e492b6f_SerProfileToClassErr 06-FEB-2010 03:11:09
         12920 /25f8f3a5_BasicSplitPaneUI     06-FEB-2010 03:11:06
         15752 /2f494dce_JDWPThreadReference  06-FEB-2010 03:09:31
    11 rows selected.
    SQL> select object_id, object_name, last_ddl_time from t, (select rid, rownum rn from (select rowid rid from t order by last_ddl_time desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid order by last_ddl_time desc ;
    OBJECT_ID OBJECT_NAME                    LAST_DDL_TIME
         37534 com/sun/mail/smtp/SMTPMessage  06-FEB-2010 03:46:14
         13133 oracle/jdbc/driver/OracleLog$3 06-FEB-2010 03:45:44
         11749 /b9fe5b99_OraRTStatementComman 06-FEB-2010 03:43:49
         42266 SI_GETCLRHSTGRFTR              06-FEB-2010 03:40:20
         16695 /2940a364_RepIdDelegator_1_3   06-FEB-2010 03:38:17
         36539 sun/io/ByteToCharMacHebrew     06-FEB-2010 03:28:57
         26815 /7a628fb8_DefaultHSBChooserPan 06-FEB-2010 03:26:55
         14044 /d29b81e1_OldHeaders           06-FEB-2010 03:12:12
         36145 /4e492b6f_SerProfileToClassErr 06-FEB-2010 03:11:09
         12920 /25f8f3a5_BasicSplitPaneUI     06-FEB-2010 03:11:06
         15752 /2f494dce_JDWPThreadReference  06-FEB-2010 03:09:31
    11 rows selected.
    SQL> set autotrace traceonly
    SQL> select object_id, object_name, last_ddl_time from t, (select rid, rownum rn from (select rowid rid from t order by last_ddl_time desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid order by last_ddl_time desc
      2  ;
    11 rows selected.
    Execution Plan
    Plan hash value: 44968669
    | Id  | Operation                       | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                |       |  1200 | 91200 |   180   (2)| 00:00:03 |
    |   1 |  SORT ORDER BY                  |       |  1200 | 91200 |   180   (2)| 00:00:03 |
    |*  2 |   HASH JOIN                     |       |  1200 | 91200 |   179   (2)| 00:00:03 |
    |*  3 |    VIEW                         |       |  1200 | 30000 |    98   (0)| 00:00:02 |
    |*  4 |     COUNT STOPKEY               |       |       |       |            |          |
    |   5 |      VIEW                       |       | 40617 |   475K|    98   (0)| 00:00:02 |
    |   6 |       INDEX FULL SCAN DESCENDING| T_IDX | 40617 |   793K|    98   (0)| 00:00:02 |
    |   7 |    TABLE ACCESS FULL            | T     | 40617 |  2022K|    80   (2)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - access("T".ROWID="T1"."RID")
       3 - filter("RN">=1190)
       4 - filter(ROWNUM<=1200)
    Statistics
              1  recursive calls
              0  db block gets
            348  consistent gets
              0  physical reads
              0  redo size
           1063  bytes sent via SQL*Net to client
            385  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              1  sorts (memory)
              0  sorts (disk)
             11  rows processed
    SQL> select object_id, object_name, last_ddl_time from (select t1.*, rownum rn from (select * from t order by last_ddl_time desc) t1 where rownum <= 1200) where rn >= 1190 ;
    11 rows selected.
    Execution Plan
    Plan hash value: 882605040
    | Id  | Operation                | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT         |      |  1200 | 62400 |    80   (2)| 00:00:01 |
    |*  1 |  VIEW                    |      |  1200 | 62400 |    80   (2)| 00:00:01 |
    |*  2 |   COUNT STOPKEY          |      |       |       |            |          |
    |   3 |    VIEW                  |      | 40617 |  1546K|    80   (2)| 00:00:01 |
    |*  4 |     SORT ORDER BY STOPKEY|      | 40617 |  2062K|    80   (2)| 00:00:01 |
    |   5 |      TABLE ACCESS FULL   | T    | 40617 |  2062K|    80   (2)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter("RN">=1190)
       2 - filter(ROWNUM<=1200)
       4 - filter(ROWNUM<=1200)
    Statistics
              0  recursive calls
              0  db block gets
            343  consistent gets
              0  physical reads
              0  redo size
           1063  bytes sent via SQL*Net to client
            385  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              1  sorts (memory)
              0  sorts (disk)
             11  rows processed
    SQL> select object_id, object_name, last_ddl_time from t, (select rid, rownum rn from (select rowid rid from t order by last_ddl_time desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid ;
    11 rows selected.
    Execution Plan
    Plan hash value: 168880862
    | Id  | Operation                      | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT               |       |  1200 | 91200 |   179   (2)| 00:00:03 |
    |*  1 |  HASH JOIN                     |       |  1200 | 91200 |   179   (2)| 00:00:03 |
    |*  2 |   VIEW                         |       |  1200 | 30000 |    98   (0)| 00:00:02 |
    |*  3 |    COUNT STOPKEY               |       |       |       |            |          |
    |   4 |     VIEW                       |       | 40617 |   475K|    98   (0)| 00:00:02 |
    |   5 |      INDEX FULL SCAN DESCENDING| T_IDX | 40617 |   793K|    98   (0)| 00:00:02 |
    |   6 |   TABLE ACCESS FULL            | T     | 40617 |  2022K|    80   (2)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - access("T".ROWID="T1"."RID")
       2 - filter("RN">=1190)
       3 - filter(ROWNUM<=1200)
    Statistics
              0  recursive calls
              0  db block gets
            349  consistent gets
              0  physical reads
              0  redo size
           1063  bytes sent via SQL*Net to client
            385  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
             11  rows processed
    SQL> select object_id, object_name, last_ddl_time from (select t1.*, rownum rn from (select * from t order by last_ddl_time desc) t1 where rownum <= 1200) where rn >= 1190 order by last_ddl_time desc ;
    11 rows selected.
    Execution Plan
    Plan hash value: 882605040
| Id  | Operation                | Name | Rows  | Bytes | Cost (%CPU)| Time     |
     |   0 | SELECT STATEMENT         |      |  1200 | 62400 |    80   (2)| 00:00:01 |
     |*  1 |  VIEW                    |      |  1200 | 62400 |    80   (2)| 00:00:01 |
     |*  2 |   COUNT STOPKEY          |      |       |       |            |          |
     |   3 |    VIEW                  |      | 40617 |  1546K|    80   (2)| 00:00:01 |
     |*  4 |     SORT ORDER BY STOPKEY|      | 40617 |  2062K|    80   (2)| 00:00:01 |
     |   5 |      TABLE ACCESS FULL   | T    | 40617 |  2062K|    80   (2)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter("RN">=1190)
       2 - filter(ROWNUM<=1200)
       4 - filter(ROWNUM<=1200)
    Statistics
         175  recursive calls
           0  db block gets
         388  consistent gets
           0  physical reads
           0  redo size
           1063  bytes sent via SQL*Net to client
         385  bytes received via SQL*Net from client
           2  SQL*Net roundtrips to/from client
           4  sorts (memory)
           0  sorts (disk)
          11  rows processed
    SQL> set autotrace off
SQL> spool off
    As you will see, the join query here has to have an ORDER BY clause at the end to ensure that records are correctly sorted. You cannot rely on the optimizer choosing the NESTED LOOP join method and, as the above example shows, when the optimizer chooses a HASH JOIN, Oracle is free to return rows in no particular order.
The query that does not involve a join always returns rows in the desired order. Adding an ORDER BY adds a step to the plan of the join query but does not affect the other query.

  • Table valueset taking long time to open the LOV

    Hi,
We added a table value set to a concurrent program. The value set shows transaction numbers from the ra_interface_lines_all table. It has a long list, so we enabled the partial-string-entry prompt before the long list opens, but it is still taking a long time.
    Please any help on this highly appreciated.
    Thanks,
    Samba

    Hi
Try modifying the query or creating an index to speed up the process.
    Thanks & regards
    Rajan
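    For illustration only, a sketch of the kind of supporting index meant above; the trx_number column is an assumption about what the value set filters on, so verify the value set's actual table and WHERE clause first (and review Oracle's guidance before adding custom indexes to seeded EBS tables):
    -- Hypothetical index supporting a value set that filters on trx_number.
    CREATE INDEX xx_ra_int_lines_n1
      ON ra_interface_lines_all (trx_number);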

  • Sql access for large table

    hi,
If a table has more than 6,000,000 records, is there any way to optimise access so that it is faster?
    I try not to use SQL statements a lot; I prefer to manipulate data in an internal table, but I still need an initial SELECT statement to copy the data into the internal table.
    Any advice?
    thanks

Tips
    1) In the SELECT, include all primary keys in the WHERE condition to fetch the data.
    2) Declare the table without a header line and without the OCCURS statement, and use a work area to handle it.
    Ex:-
    TYPES: BEGIN OF gty_kna1,                          " General Data in Customer Master
             kunnr TYPE kna1-kunnr,                    " Payer Number
             name1 TYPE kna1-name1,                    " Name1
             telf1 TYPE kna1-telf1,                    " Communication
             konzs TYPE kna1-konzs,                    " Corporate Group
           END OF gty_kna1.
    DATA: gs_kna1 TYPE gty_kna1,                       " Work area
          gt_kna1 TYPE TABLE OF gty_kna1.              " Internal table
    Note:
    •     In a SELECT statement, only the fields (field-list) which are needed are selected in the order that they reside on the database, thus network load is considerably less. The number of fields can be restricted in two ways using a field list in the SELECT clause of the statement or by using a view defined in ABAP/4 Dictionary.  The usage of view has the advantage of better reusability.
    •     SELECT SINGLE is used instead of SELECT-ENDSELECT loop when the entire key is available. SELECT SINGLE requires one communication with the database system, whereas SELECT-ENDSELECT needs two
    •     Always specify the conditions in the WHERE-clause instead of checking them with check-statements, the database system can then use an index (if possible) and the network load is considerably less.  You should not check the conditions with the CHECK statement because the contents of the whole table must be read from the database files into DBMS cache and transferred over the network. If the conditions are specified in the where clause DBMS reads exactly the needed data.
    •     Complex code is not embedded within a SELECT / ENDSELECT statement.
    •     No complex WHERE clauses, since complex where clauses are poison for the statement optimizer in any database system.
    •     For all frequently used SELECT statements, try to use an index. You always use an index if you specify (a generic part of) the index fields concatenated with logical ANDs in the Select statement's WHERE clause
    •     When loading data into Internal table, INTO TABLE OR APPENDING TABLE is used instead of a SELECT/APPEND combination. It is always faster to use the INTO TABLE version of a Select statement than to use APPEND statements.                      
    •     Use a select list with aggregate functions instead of checking and computing, when trying to find the maximum, minimum, sum and average value or the count of a database column.
Reward if useful.
    Minal

  • Postgres' LIMIT .. OFFSET for large table

    Hi!
    I have a really large table (some millions of rows) which I'd like to present on a web page. I let the user choose a limit, say 25 lines per page, and present some buttons to go one page forward or backwards.
    Some years ago, I have done this using PostgreSQL. There's an easy way to do it using LIMIT .. OFFSET. In Oracle, there's no such functionality.
    Currently, my 'workaround' looks like this (a bit more complex in reality):
    1 SELECT * FROM (
    2 SELECT
    3 ROW_NUMBER() OVER ( ORDER BY MSG_RCV_TIME DESC) AS ROWNO,
    4 TO_CHAR(MSG_RCV_TIME) MSG_RCV
    5 FROM MSG_TABLE
    6* ORDER BY MSG_RCV_TIME DESC) WHERE ROWNO BETWEEN 1 AND 10
This gives back 10 rows, which does the job. The problem is: it takes AGES! The web server falls into a timeout before even printing one line. First, Oracle has to suck in all x*1'000'000 lines just to sort out the ones it doesn't need. That can't be the solution, can it?
    In this forum, I have read a few notes about PARTITION, CURSOR and such things, but I didn't really get what the use of it is.
    Any hints on that? This forum is based on Oracle, too (I hope), and it's fast. There must be a solution for this.
    Btw, the table I am talking about is being filled by syslog-ng, and it currently grows by 200MB per day (and it's still in the testing phase). I expect some hundred million lines to be present later.
    Thanks a lot in advance
    André

> See Tom Kyte's site for this
    Cool. Didn't know this one. How is he checking the performance of the queries?
> The one comment in there that I entirely agree with
    > is that such large result sets are meaningless to the
    > human eye so I would question exactly what you are
    > trying to achieve. As Tom rightly says, nobody is
    > ever going to scroll down to rows 999001 - 999010,
    > even if they could.
    Of course not. But you see, as an example, that if you type just one word into Google's mask, it returns loads of pages. As soon as you see that your query was not really a good one, you try with more specific words, and it returns fewer pages. That's exactly what my GUI is going to do. First, it gives you an overview; then it lets you refine the search.
    Anyway: As soon as I limit the output in the innermost query, I doubt it's useful: Say, I limit the number of rows to browse through to 1000, but syslog-ng is producing 2000 rows per minute - you'll miss the rows you were maybe looking for.
    It's essential to be able to see all the records. I don't mind if nobody ever looks at pages 200'000 to 1'000'000.
    Thanks again for the great link.
    André (who really starts to like Oracle and its community)
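    For reference, a sketch of the ROWNUM-based pagination idiom the link describes, applied to the poster's table (MSG_TABLE and MSG_RCV_TIME are from the original post; an index on MSG_RCV_TIME is assumed). The inner ROWNUM filter lets Oracle stop after the first rows (COUNT STOPKEY) instead of numbering and sorting the whole table:
    -- Page 1: rows 1-10, newest first.
    SELECT *
    FROM  (SELECT a.*, ROWNUM rnum
           FROM  (SELECT TO_CHAR(msg_rcv_time) msg_rcv
                  FROM   msg_table
                  ORDER BY msg_rcv_time DESC) a
           WHERE ROWNUM <= 10)
    WHERE rnum >= 1;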

  • ALV: how to save context space for large tables ?

Dear colleagues,
    We are displaying an ALV table that is quite large. Therefore, the corresponding DDIC structure and the WD context are large. This has an impact on performance and the load size of the program. Now we will enhance the ALV table again.
    Example: for an icon and its explaining tooltip displayed in the ALV, context fields like "SOURCE_FIELDNAME" are required for the tooltip as well as for the icon (they need a lot of characters for each tooltip and icon).
    Question: do you have an idea how to save context space for those ALV fields?
    Best regards,
    Christian

    >We are displaying an ALV table that is quite large.
Do you mean quite large as in a large number of columns or as in a large number of rows (or both)? I assume that the problem is probably more related to a large number of rows. For very large tables, you should consider using the standard Table UI element instead of the ALV. For very large tables you can even use a technique called context paging to keep only a subset of the data in the context memory at a time. Here is a recent blog that I created on the topic with demonstrations of different techniques for table sharing, shared memory, and context paging when dealing with large tables in Web Dynpro ABAP:
    Web Dynpro ABAP: How Fast Can You Consume 1 Million Rows?

  • Gather Schema Stat running long in R12

    Hi:
I scheduled Gather Schema Stats (ALL with 10%) on two PROD instances last night. One completed in about 2 hrs (this one normally has more activity) while the other has been running since 12:00am. Please give me the steps and commands for how I should troubleshoot. Is it still running? I have checked that there is no blocking. BTW, I ran one Gather Schema Stats for AP with 10% yesterday afternoon; it completed in less than 2 min. Both are on 12.1.3 and 11.1.0.7 on Linux. Thank you in advance.

    I guess I have to do one of these. Thank you.
Another question: why do some tables have no date under LAST_ANALYZED? When I checked ALL_TABLES and sorted on LAST_ANALYZED, I saw 1/3 of the tables with a blank date. They are from all kinds of schemas, e.g. the two tables below. But some AR or AP tables are analyzed.
    AR_SUBMISSION_CTRL_GT
    AP_PERIOD_CLOSE_EXCPS_GT
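    As an aside, a hedged sketch of how to list such tables; note that names ending in _GT, like the two above, are typically global temporary tables, which stats gathering deliberately skips:
    -- Tables with no optimizer statistics; TEMPORARY = 'Y' marks global
    -- temporary tables, which gather-stats jobs skip by design.
    SELECT owner, table_name, temporary
    FROM   all_tables
    WHERE  last_analyzed IS NULL
    ORDER  BY owner, table_name;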

  • Gather schema stats not running for custom schemas in EBS 12.1.1

    Hi All,
We are running the Gather Schema Statistics program periodically in our EBS system with the schema name "ALL", but it is not generating statistics for the custom schema. We have the custom schema registered in our EBS application. Can you please let us know if there is an issue with our setup, or if this is the standard behaviour of the Gather Schema Statistics concurrent program?
    Thanks,

    Hi,
What percentage are you using with the Gather Schema Statistics program, e.g. 10% or 20% (in the program's parameter form)? I think there were no updates on the tables of that custom schema, which is why the gather schema program ignored them. Can you check whether there were updates?
    Regards

  • Compare tables in two schemas for the table with particular column & value

    Hello All,
I have a query to find the list of tables in a given schema that have a particular search column.
    ex:
    SELECT OWNER, TABLE_NAME, COLUMN_NAME
    FROM ALL_TAB_COLUMNS
    WHERE OWNER = '<SCHEMA_NAME>'
    AND COLUMN_NAME = '<COLUMN_NAME>'
    I want to compare two schemas using the same query above.
    Can we write a query for this? I am using SQL Developer, which has a menu item (Tools > Database Diff) to find the differences between two schemas, but my requirement is to find the differences between two schemas for all tables matching a particular column (as given in the query).
    Appreciate your help.
    thanks/Kumar

    Hi, Kumar,
This is the SQL and PL/SQL forum. If you have a question about SQL Developer, then the SQL Developer forum is a better place to post it. Mark this thread as "Answered" before starting another thread for the same question.
    If SQL Developer has a tool for doing what you want, don't waste your time trying to devise a SQL solution. The SQL Developer way will probably be simpler, more efficient and more reliable.
    If you do need to try a SQL solution, then post some sample data (CREATE TABLE and INSERT statements for a table that resembles all_tab_columns; you can call it my_tab_columns) and the results you want from that data.
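    If a plain SQL sketch helps as a starting point, one way to find the mismatches (tables that have the column in one schema but not the other) is a full outer join on ALL_TAB_COLUMNS; SCHEMA_A, SCHEMA_B and <COLUMN_NAME> are placeholders:
    -- Tables owning <COLUMN_NAME> in one schema but not the other.
    SELECT COALESCE(a.table_name, b.table_name) AS table_name,
           NVL2(a.table_name, 'YES', 'NO')      AS in_schema_a,
           NVL2(b.table_name, 'YES', 'NO')      AS in_schema_b
    FROM  (SELECT table_name FROM all_tab_columns
           WHERE owner = 'SCHEMA_A' AND column_name = '<COLUMN_NAME>') a
    FULL OUTER JOIN
          (SELECT table_name FROM all_tab_columns
           WHERE owner = 'SCHEMA_B' AND column_name = '<COLUMN_NAME>') b
    ON    a.table_name = b.table_name
    WHERE a.table_name IS NULL OR b.table_name IS NULL;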

  • Unable to edit table data, but not for all tables

    I have multiple tables in a schema. For some tables, I am able make edits to table data directly, i.e., context menu Table | Open, and the Data tab. When I am able to edit, I do get a pencil icon inside the cell I am editing/typing (and am able to commit the changes). When I am not able to edit, it does nothing (no error messages, sound, or visual cue). I thought it had to do with who owns the table object, but I log in as the same owner of the affected table objects.
    Any pointers would be greatly appreciated so I am equipped when asking the DBA.
    Thanks,
    OS: Windows XP Professional SP2
    Java(TM) Platform: 1.6.0_11
    Oracle IDE: 2.1.1.64.45
    Versioning Support: 2.1.1.64.45

    Hello again,
    Here you are the DDL of the offending table:
    CREATE TABLE "DBADMEX"."T50SEC82"
    "COD_EMPRESA" CHAR(4 BYTE) DEFAULT ' ' NOT NULL ENABLE,
    "COD_EMPR_CONT" CHAR(4 BYTE) DEFAULT ' ' NOT NULL ENABLE,
    "COD_SECT_CONT" CHAR(2 BYTE) DEFAULT ' ' NOT NULL ENABLE,
    "NUM_CUEN_CONT" CHAR(18 BYTE) DEFAULT ' ' NOT NULL ENABLE,
    "COD_PAIS" CHAR(4 BYTE) DEFAULT ' ' NOT NULL ENABLE,
    "COD_SECTOR" CHAR(6 BYTE) DEFAULT ' ' NOT NULL ENABLE
    PCTFREE 10 PCTUSED 40 INITRANS 50 MAXTRANS 255 NOCOMPRESS LOGGING STORAGE
    INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT
    TABLESPACE "TS_50" ;
    CREATE UNIQUE INDEX "DBADMEX"."I5000082" ON "DBADMEX"."T50SEC82"
    "COD_EMPRESA", "COD_EMPR_CONT", "COD_SECT_CONT", "NUM_CUEN_CONT"
    PCTFREE 10 INITRANS 50 MAXTRANS 255 COMPUTE STATISTICS STORAGE
    INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT
    TABLESPACE "TS_50" ;
    We are using the following versions:
    Oracle database: 11.1.0.7.0
    Oracle Client: 11.2.0.1.0
    Windows (where the client runs): XP SP3 (version 5.1 Build 2600_spsp_sp3_gdr.080814-1236) in spanish.
    SQL Developer: 2.1.1.64 (MAIN-64.45)
    I think I haven't forgotten anything.
    Thanks in advance for your help!

  • I just received my iPhone 5 and it seems to be taking long for the apps to download with iCloud. Is that right?

I just received my iPhone 5 and am downloading my apps with iCloud, and it seems to be taking long... is that right?

    Hello JeenC,
I found an article with steps you can take when you experience issues with attachments on your iCloud calendar.
    I recommend reviewing the steps in the section titled "Troubleshooting Calendar attachments" in the following article:
    iCloud: Using and troubleshooting Calendar attachments
    http://support.apple.com/kb/HT5373
    Thank you for using Apple Support Communities.
    Best,
    Sheila M.
