Index size keeps growing while table size stays unchanged

Hi Guys,
I've got some simple, standard B-tree indexes that keep acquiring new extents (e.g. 4MB per week) while the base table size has stayed unchanged for years.
The base tables are working tables with daily DML activity and a nearly constant number of records.
I've analysed the schema in the test environment.
These indexes do not meet the criteria for rebuild, which are as follows:
- deleted entries represent 20% or more of the current entries
- the index depth is more than 4 levels
May I know what causes the index size to keep growing, and will the size of the index be reduced after a rebuild?
Grateful if someone can give me some advice.
Thanks a lot.
Best regards,
Timmy

Please read the documentation. COALESCE is available in 9.2.
Here is a demo of COALESCE in 10g.
YAS@10G>truncate table t;
Table truncated.
YAS@10G>select segment_name,bytes from user_segments where segment_name in ('T','TIND');
SEGMENT_NAME              BYTES
T                         65536
TIND                      65536
YAS@10G>insert into t select level from dual connect by level<=10000;
10000 rows created.
YAS@10G>commit;
Commit complete.
YAS@10G>
YAS@10G>select segment_name,bytes from user_segments where segment_name in ('T','TIND');
SEGMENT_NAME              BYTES
T                        196608
TIND                     196608

We have 10,000 rows now. Let's delete half of them and insert another 5,000 rows with higher keys.
YAS@10G>delete from t where mod(id,2)=0;
5000 rows deleted.
YAS@10G>commit;
Commit complete.
YAS@10G>insert into t select level+10000 from dual connect by level<=5000;
5000 rows created.
YAS@10G>commit;
Commit complete.
YAS@10G>select segment_name,bytes from user_segments where segment_name in ('T','TIND');
SEGMENT_NAME              BYTES
T                        196608
TIND                     327680

Table size is the same but the index size got bigger.
YAS@10G>exec show_space('TIND',user,'INDEX');
Unformatted Blocks .....................               0
FS1 Blocks (0-25)  .....................               0
FS2 Blocks (25-50) .....................               6
FS3 Blocks (50-75) .....................               0
FS4 Blocks (75-100).....................               0
Full Blocks        .....................              29
Total Blocks............................              40
Total Bytes.............................         327,680
Total MBytes............................               0
Unused Blocks...........................               0
Unused Bytes............................               0
Last Used Ext FileId....................               4
Last Used Ext BlockId...................          37,001
Last Used Block.........................               8
PL/SQL procedure successfully completed.

We have 29 full blocks. Let's coalesce.
YAS@10G>alter index tind coalesce;
Index altered.
YAS@10G>select segment_name,bytes from user_segments where segment_name in ('T','TIND');
SEGMENT_NAME              BYTES
T                        196608
TIND                     327680
YAS@10G>exec show_space('TIND',user,'INDEX');
Unformatted Blocks .....................               0
FS1 Blocks (0-25)  .....................               0
FS2 Blocks (25-50) .....................              13
FS3 Blocks (50-75) .....................               0
FS4 Blocks (75-100).....................               0
Full Blocks        .....................              22
Total Blocks............................              40
Total Bytes.............................         327,680
Total MBytes............................               0
Unused Blocks...........................               0
Unused Bytes............................               0
Last Used Ext FileId....................               4
Last Used Ext BlockId...................          37,001
Last Used Block.........................               8
PL/SQL procedure successfully completed.

The index size is still the same, but now we have 22 full and 13 empty blocks.
Insert another 5000 rows with higher key values.
YAS@10G>insert into t select level+15000 from dual connect by level<=5000;
5000 rows created.
YAS@10G>commit;
Commit complete.
YAS@10G>select segment_name,bytes from user_segments where segment_name in ('T','TIND');
SEGMENT_NAME              BYTES
T                        262144
TIND                     327680

Now the index did not get bigger, because it could use the free blocks for the new rows.
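To quantify how much of an index is deleted space - for example to test the "20% deleted entries" rebuild criterion above - one option is to validate the index structure and query INDEX_STATS. A minimal sketch (note that ANALYZE ... VALIDATE STRUCTURE locks the underlying table while it runs):
ANALYZE INDEX tind VALIDATE STRUCTURE;
-- INDEX_STATS holds a single row for the index last validated in this session
SELECT lf_rows, del_lf_rows,
       ROUND(del_lf_rows * 100 / lf_rows, 1) AS pct_deleted
FROM index_stats;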

Similar Messages

  • Why is index size bigger than table size?

    Dear All,
    I found that in my database the total size of all tables comes to around 30TB, and the total index size for the same tables is 60TB. This is a data warehousing environment.
    How do the index size and table size differ?
    Why do they differ? Why is the index size bigger than the table size?
    How do I manage the size?
    Please give me a clear explanation and the required information on the above.
    Regards
    Suresh

    There are many reasons why the total space allocated to indexes could be larger than the total space allocated to tables. Sometimes it's a mark of good design, sometimes it indicates a problem. In your position your first move is to spend as little time as possible deciding whether your high-level summary is indicative of a problem, so you need to look at a little more detail.
    As someone else pointed out - are you looking at the sizes because you are running out of space, or because you have a perceived performance problem? If neither, then your question is one of curiosity.
    If it's about performance, then you should be looking for code (either through statspack/AWR or sql_trace) that is performing badly and use the analysis of that code to help you identify suspect indexes.
    If it's about space, then you need to do some simple investigations aimed at finding a few indexes that can be "shrunk" or dropped. Pointers for this are:
    select
            table_owner, table_name, count(*)
    from
            dba_indexes
    group by
            table_owner, table_name
    having
            count(*) > 2   -- adjust to keep the output short
    order by
            count(*) desc;

    This tells you which tables have the most indexes - check the sizes of the tables and indexes, and then check the index definitions for the larger tables with lots of indexes.
    Second quick check - join dba_tables to dba_indexes by table_name, and report the table blocks and index leaf blocks in descending order of leaf block count. Look for indexes which are very big, and also bigger than their underlying tables. There are special cases (and bugs) that can cause indexes to be much bigger than they need to be ... this report may identify a couple of anomalies that could benefit from an emergency fix followed (possibly) by a strategic fix.
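    A sketch of that second check (it relies on optimizer statistics, so blocks and leaf_blocks are only as fresh as the last stats gathering):
    select
            dt.owner, dt.table_name, dt.blocks table_blocks,
            di.index_name, di.leaf_blocks
    from
            dba_tables dt, dba_indexes di
    where
            di.table_owner = dt.owner
    and     di.table_name  = dt.table_name
    order by
            di.leaf_blocks desc;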
    Regards
    Jonathan Lewis

  • Proxy 4 - Cache size keeps growing

    I may have a wrong cache setting somewhere, but I can't find it. I am running Proxy 4.0.2 (for Windows).
    Under Cache settings I have "Cache Size" set to 800MB, and under "Cache Capacity" I have it set to 1GB (500MB-2GB).
    The problem is that the physical cache size on the hard drive keeps growing and growing and is starting to fill the partition. At last count, the "cache" directory which holds the cache files is using 5.7GB of space and still growing.
    Am I misunderstanding something? I thought the maximum physical size would be a lot lower and would stop at a given size, but the cache directory is now close to 6GB and still growing day by day. When is it going to stop growing, or how do I stop it and put a cap on the physical size it can grow to on the hard drive?
    Thanks

    Until 4.03 is out, you can use this script.
    Warning: experimental - run this on a copy of the cache first to make sure that it works as you want.
    The first argument is the size in MBs that you want to remove.
    I assume your cachedir is "./cache"; if it is not, change the variable $cachedir to the correct value.
    ==============cut-here==========
    #!/usr/bin/perl
    use strict;
    use File::stat;

    my $cachedir = "./cache";
    my $gc_size;      # bytes left to remove
    my $verbose = 0;

    # Remove one cache file; stop as soon as enough space has been reclaimed.
    sub gc_file {
        my $file = shift;
        my $sb = stat($file);
        $gc_size -= $sb->size;
        unlink $file;
        print "$gc_size more after $file\n" if $verbose;
        exit 0 if $gc_size < 0;
    }

    sub main {
        my $size = shift;
        $gc_size = $size * 1024 * 1024;    # argument is in MBs
        opendir(DIR, $cachedir) || die "can't opendir $cachedir: $!";
        my @sects = grep {/^s[0-9]\.[0-9]{2}$/} readdir(DIR);
        closedir DIR;
        foreach my $sect (@sects) {
            chomp $sect;
            opendir(CDIR, "$cachedir/$sect") || die "can't opendir $cachedir/$sect: $!";
            my @ssects = grep {/^[A-F0-9]{2}$/} readdir(CDIR);
            closedir CDIR;
            foreach my $ssect (@ssects) {
                chomp $ssect;
                opendir(SCDIR, "$cachedir/$sect/$ssect") || die "can't opendir $cachedir/$sect/$ssect: $!";
                my @files = grep {/^[A-Z0-9]{16}$/} readdir(SCDIR);
                closedir SCDIR;
                foreach my $file (@files) {
                    gc_file("$cachedir/$sect/$ssect/$file");
                }
            }
        }
    }

    main($ARGV[0]) if $ARGV[0];
    =============cut-end==========

    On your second problem, the easiest way to recover a corrupted partition is to list out the sections in that partition and delete those sections that seem like odd ones
    eg:
    $ls ./cache
    s4.00 s4.01 s4.02 s4.03 s4.04 s4.05 s4.06 s4.07 s4.08 s4.09 s4.10 s4.11 s4.12 s4.13 s4.14 s4.15 s0.00
    Here the s0.00 is the odd one out, so remove the s0.00 section. Also keep an eye on the relative sizes of the sections: if the section to be removed is larger than the rest of the sections combined, you might not want to remove it.
    WARNING: anything you do, do on a copy

  • Index size increases more than table size

    Hi All,
    Let me know the possible reasons for an index size greater than the table size, and in some cases an index size smaller than the table size. ASAP
    Thanks in advance
    sherief

    hi,
    The size of an index depends on how inserts and deletes occur.
    With sequence-based indexes, when records are deleted randomly the space will not be reused, as all inserts go into the leading leaf block.
    When all the records in a leaf block have been deleted, the leaf block is freed (put on the index freelist) for reuse, reducing the overall percentage of free space.
    This means that if you are deleting aged sequence records at the same rate as you are inserting, the number of leaf blocks will stay approximately constant with a constant low percentage of free space. In this case it is most probably hardly ever worth rebuilding the index.
    With records being deleted randomly, the inefficiency of the index depends on how the index is used.
    If numerous full index (or range) scans are being done, the index should be rebuilt to reduce the leaf blocks read. This should be done before it significantly affects the performance of the system.
    If single-key index accesses are being done, it only needs to be rebuilt to stop the branch depth increasing or to recover the unused space.
    Here is an example of how index size can become larger than table size:
    Connected to Oracle Database 10g Enterprise Edition Release 10.2.0.3.0
    Connected as admin
    SQL> create table rich as select rownum c1,'Verde' c2 from all_objects;
    Table created
    SQL> create index rich_i on rich(c1);
    Index created
    SQL> select segment_type,bytes,blocks,extents from user_segments where segment_name like 'RICH%';
    SEGMENT_TYPE BYTES BLOCKS EXTENTS
    TABLE 1179648 144 9
    INDEX 1179648 144 9
    SQL> delete from rich where mod(c1,2)=0;
    29475 rows deleted
    SQL> commit;
    Commit complete
    SQL> select segment_type,bytes,blocks,extents from user_segments where segment_name like 'RICH%';
    SEGMENT_TYPE BYTES BLOCKS EXTENTS
    TABLE 1179648 144 9
    INDEX 1179648 144 9
    SQL> insert into rich select rownum+100000, 'qq' from all_objects;
    58952 rows inserted
    SQL> commit;
    Commit complete
    SQL> select segment_type,bytes,blocks,extents from user_segments where segment_name like 'RICH%';
    SEGMENT_TYPE BYTES BLOCKS EXTENTS
    TABLE 1703936 208 13
    INDEX 2097152 256 16
    SQL> insert into rich select rownum+200000, 'aa' from all_objects;
    58952 rows inserted
    SQL> select segment_type,bytes,blocks,extents from user_segments where segment_name like 'RICH%';
    SEGMENT_TYPE BYTES BLOCKS EXTENTS
    TABLE 2752512 336 21
    INDEX 3014656 368 23
    SQL> delete from rich where mod(c1,2)=0;
    58952 rows deleted
    SQL> commit;
    Commit complete
    SQL> select segment_type,bytes,blocks,extents from user_segments where segment_name like 'RICH%';
    SEGMENT_TYPE BYTES BLOCKS EXTENTS
    TABLE 2752512 336 21
    INDEX 3014656 368 23
    SQL> insert into rich select rownum+300000, 'hh' from all_objects;
    58952 rows inserted
    SQL> commit;
    Commit complete
    SQL> select segment_type,bytes,blocks,extents from user_segments where segment_name like 'RICH%';
    SEGMENT_TYPE BYTES BLOCKS EXTENTS
    TABLE 3014656 368 23
    INDEX 4063232 496 31
    SQL> alter index rich_i rebuild;
    Index altered
    SQL> select segment_type,bytes,blocks,extents from user_segments where segment_name like 'RICH%';
    SEGMENT_TYPE BYTES BLOCKS EXTENTS
    TABLE 3014656 368 23
    INDEX 2752512 336 21
    SQL>

  • Query to find indexes bigger in size than tables sizes

    Team -
    I am looking for a query to find the list of indexes, in a schema or in the entire database, which are bigger in size than their respective tables.
    Db version : Any
    Thanks
    Venkat

    results are the same in my case
      1  select di.owner, di.index_name, di.table_name
      2  from dba_indexes di, dba_segments ds
      3  where ds.blocks > (select dt.blocks
      4               from dba_tables dt
      5               where di.owner = dt.owner
      6               and  di.leaf_blocks > dt.blocks
      7               and   di.table_name = dt.table_name)
      8*  and ds.segment_name = di.index_name
    SQL> /
    OWNER                      INDEX_NAME                TABLE_NAME
    SYS                      I_CON1                     CON$
    SYS                      I_OBJAUTH1                OBJAUTH$
    SYS                      I_OBJAUTH2                OBJAUTH$
    SYS                      I_PROCEDUREINFO1            PROCEDUREINFO$
    SYS                      I_DEPENDENCY1                DEPENDENCY$
    SYS                      I_ACCESS1                ACCESS$
    SYS                      I_OID1                     OID$
    SYS                      I_PROCEDUREC$                PROCEDUREC$
    SYS                      I_PROCEDUREPLSQL$           PROCEDUREPLSQL$
    SYS                      I_WARNING_SETTINGS           WARNING_SETTINGS$
    SYS                      I_WRI$_OPTSTAT_TAB_OBJ#_ST     WRI$_OPTSTAT_TAB_HISTORY
    SYS                      I_WRI$_OPTSTAT_H_OBJ#_ICOL#_ST WRI$_OPTSTAT_HISTGRM_HISTORY
    SYS                      WRH$_PGASTAT_PK                WRH$_PGASTAT
    SYSMAN                      MGMT_STRING_METRIC_HISTORY_PK  MGMT_STRING_METRIC_HISTORY
    DBADMIN                  TSTNDX                     TSTTBL
    15 rows selected
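    For comparison, a simpler formulation based purely on optimizer statistics (so only as accurate as the last stats gathering) might be:
    select di.owner, di.index_name, di.table_name
    from dba_indexes di, dba_tables dt
    where dt.owner = di.table_owner
    and dt.table_name = di.table_name
    and di.leaf_blocks > dt.blocks
    order by di.leaf_blocks - dt.blocks desc;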

  • Bitmap index or Composite index better on a huge table

    Hi All,
    I got a question regarding the Bitmap index and Composite Index.
    I have a table CUSTOMER(group_no NUMBER, order_no NUMBER) which has only those two columns.
    This is a 100-million+ record table with 100K group_nos and 100 million unique order numbers, i.e. each group should have 1000 order numbers.
    I tested by creating a GLOBAL bitmap index on this huge table (more than 1.5GB in size). The bitmap index that got created is under 50MB, and when I query for a group number, say SELECT * FROM CUSTOMER WHERE group_no=67677; it takes 0.5 seconds to retrieve all 1000 rows. I checked different groups and it is the same.
    Now I dropped the bitmap index and re-created a composite index on (group_no, order_no). The index size is more than the table size, around 2GB, and when I query using the same select statement it takes 0.5 seconds to retrieve all 1000 rows.
    My question is which one is BETTER - B-tree or bitmap index - and WHY?
    Appreciate your valuable inputs on this one.
    Regards,
    Madhu K.

    Dear,
    First of all, bitmap indexes are not recommended for write-intensive OLTP applications due to the locking threat they can pose in such applications.
    You told us that this table is never updated; I suppose rows are never deleted either.
    Second, bitmap indexes are suitable for columns having low cardinality. The question is how we define "low cardinality"; you said that you have 100,000 distinct group_no values in a table of 100,000,000 rows.
    You have a cardinality of 100,000/100,000,000 = 0.001. The group_no column might be a good candidate for a bitmap index.
    You said that order_no is unique, so you have a very high cardinality on this column and it might not be a candidate for your bitmap index.
    Third, your query's where clause involves only the group_no column, so why are you including both columns when testing the bitmap and the B-tree index?
    Are you designing such an index in order to avoid visiting the table? But in your case the table is made up of only those two columns, so why not follow Hemant's advice and use an Index Organized Table (sketched below)?
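    A minimal sketch of such an IOT (hypothetical DDL, storage options omitted):
    -- the whole table lives in the primary-key B-tree; there is no separate table segment
    CREATE TABLE customer (
      group_no NUMBER,
      order_no NUMBER,
      CONSTRAINT customer_pk PRIMARY KEY (group_no, order_no)
    ) ORGANIZATION INDEX;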
    Finally, you can find more details about bitmap indexes in the following Richard Foote blog article:
    http://richardfoote.wordpress.com/2008/02/01/bitmap-indexes-with-many-distinct-column-values-wotsuh-the-deal/
    Best Regards
    Mohamed Houri

  • Index size greater than table size

    Hi all,
    We are running BI7.0 in our environment.
    One of our tables has an index size much greater than the size of the table itself. The details are listed below:
    Table Name: RSBERRORLOG
    Total Table Size: 141,795,392 KB
    Total Index Size: 299,300,576 KB
    Index:
    F5: Index Size / Allocated Size: 50%
    Is there any reason that the index should grow more than the table? If so, would reorganizing the index help, and can this be controlled?
    Please let me know, as I am not very clear on the DB side.
    Thanks and Regards,
    Raghavan

    Hi Hari
    It's basically a degenerated index. You can follow the steps below (a sketch for step 3 follows the list):
    1. Delete some entries from RSBERRORLOG.
    2. Reorganize the table with BRSPACE; afterwards the size of the table will be much smaller. I do not remember whether this table has a LONG RAW field (in that case an export/import of the table would be required). ---Basis job
    3. Delete and recreate the index on this table.
    You will gain a lot of space.
    I assumed you are on Oracle.
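    A sketch for step 3: a single online rebuild achieves the same effect as dropping and recreating the index, without blocking DML. The index name here is hypothetical - check DBA_INDEXES for the actual name of the F5 index on RSBERRORLOG:
    ALTER INDEX "RSBERRORLOG~F5" REBUILD ONLINE;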
    More information on reorganization is in the thread "TABLE SPACE REORGANIZATION !! QUICK EXPERT INPUTS".
    Regards
    Anindya

  • Index size greater than table size

    HI ,
    While checking the large segments, I came to know that index HZ_PARAM_TAB_N1 is larger than table HZ_PARAM_TAB. I think it is highly fragmented and requires defragmentation. I need your suggestions on how I can collect more information on this. Providing more information below.
    1.
    select sum(bytes)/1024/1024/1024,segment_name from dba_segments group by segment_name having sum(bytes)/1024/1024/1024 > 1 order by 1 desc;
    SUM(BYTES)/1024/1024/1024 SEGMENT_NAME
    81.2941895 HZ_PARAM_TAB_N1
    72.1064453 SYS_LOB0000066009C00004$$
    52.7703857 HZ_PARAM_TAB
    2. Index columns
    <pre>
    COLUMN_NAME COLUMN_POSITION
    ITEM_KEY 1
    PARAM_NAME 2
    </pre>
    Regards
    Rahul

    Hi ,
    Thanks. I know that a rebuild will defragment it. But as I'm at a new site, I was looking for some more supporting information before drafting the mail stating that it requires reorg activity. It should not be possible for the index to be larger than the table, as it contains only 2 column values + rowid, whereas the table contains 6 columns.
    <pre>
    Name             Datatype  Length  Mandatory  Comments
    ITEM_KEY         VARCHAR2  (240)   Yes        Unique identifier for the event raised
    PARAM_NAME       VARCHAR2  (2000)  Yes        Name of the parameter
    PARAM_CHAR       VARCHAR2  (4000)             Value of the parameter only if its data type is VARCHAR2.
    PARAM_NUM        NUMBER                       Value of the parameter only if its data type is NUM.
    PARAM_DATE       DATE                         Value of the parameter only if its data type is DATE.
    PARAM_INDICATOR  VARCHAR2  (3)     Yes        Indicates if the parameter contains existing, new or replacement values. OLD values currently exist. NEW values create initial values or replace existing values.</pre>
    Regds
    Rahul

  • Time machine keeps growing in size or backing up a size far greater than it

    Apologies if this is covered elsewhere, but after hours of searching I still haven't been able to find a resolution which actually fixes my problem.
    When my Time Machine tries to back up, the size of the backup just keeps growing and growing - a problem which has been reported several times, but none of the solutions fix it for me. I've completely reset Time Machine (disconnected the external hard drive, deleted the preferences from ~/Library/Preferences, then set it up again with my exclusions) and the problem just reappeared.
    I've gone through this routine several times and in most cases had no joy. On one occasion I thought the problem had been resolved, as it did the initial backup and then a couple of hourly backups, but then it suddenly returned a message saying it had insufficient space (trying to back up 470GB but the disk only has 300GB available), which was just ridiculous as the full backup size was under 50GB.
    I've verified the disk on several occasions and it always comes up clean. In a fit of desperation I've even tried reformatting the external drive, with no improvement.
    I'm at the point of deleting Time Machine from my laptop and going back to the old Backup program. Can anyone offer anything else I could try?

    Hi, and welcome to the forums.
    Have you Verified your internal HD (and Repaired any externals that are also being backed-up)?
    If so, it's probably something damaged or corrupted in your installation of OSX. I'd suggest downloading and installing the 10.6.4 "combo" update. That's the cleverly-named combination of all the updates to Snow Leopard since it was first released, so installing it should fix anything that's gone wrong since then, such as with one of the normal "point" updates. Info and download available at: http://support.apple.com/kb/DL1048 Be sure to do a +Repair Permissions+ via Disk Utility (in your Applications/Utilities folder) afterwards.
    If that doesn't help, reinstall OSX from your Snow Leopard Install disc (that won't affect anything else), then apply the "combo" again.

  • Table size exceeds Keep Pool Size (db_keep_cache_size)

    Hello,
    We have a situation where one of our applications started performing bad since last week.
    After some analysis, it was found this was due to data increase in a table that was stored in KEEP POOL.
    After the data increase, the table size exceeded db_keep_cache_size.
    I was of the opinion that in such cases KEEP POOL will still be used but the remaining data will be brought in as needed from the table.
    But, I ran some tests and found it is not the case. If the table size exceeds db_keep_cache_size, then KEEP POOL is not used at all.
    Is my inference correct here ?
    SQL> select * from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
    PL/SQL Release 11.2.0.2.0 - Production
    CORE    11.2.0.2.0      Production
    TNS for Linux: Version 11.2.0.2.0 - Production
    NLSRTL Version 11.2.0.2.0 - Production

    Setup
    SQL> show parameter keep                    
    NAME                                 TYPE        VALUE
    buffer_pool_keep                     string
    control_file_record_keep_time        integer     7
    db_keep_cache_size                   big integer 4M
    SQL>
    SQL>     
    SQL> create table t1 storage (buffer_pool keep) as select * from all_objects union all select * from all_objects;
    Table created.
    SQL> set autotrace on
    SQL>
    SQL> exec print_table('select * from user_segments where segment_name = ''T1''');
    PL/SQL procedure successfully completed.
    SQL> set serveroutput on
    SQL> exec print_table('select * from user_segments where segment_name = ''T1''');
    SEGMENT_NAME                  : T1
    PARTITION_NAME                :
    SEGMENT_TYPE                  : TABLE
    SEGMENT_SUBTYPE               : ASSM
    TABLESPACE_NAME               : HR_TBS
    BYTES                         : 16777216
    BLOCKS                        : 2048
    EXTENTS                       : 31
    INITIAL_EXTENT                : 65536
    NEXT_EXTENT                   : 1048576
    MIN_EXTENTS                   : 1
    MAX_EXTENTS                   : 2147483645
    MAX_SIZE                      : 2147483645
    RETENTION                     :
    MINRETENTION                  :
    PCT_INCREASE                  :
    FREELISTS                     :
    FREELIST_GROUPS               :
    BUFFER_POOL                   : KEEP
    FLASH_CACHE                   : DEFAULT
    CELL_FLASH_CACHE              : DEFAULT
    PL/SQL procedure successfully completed.

    DB_KEEP_CACHE_SIZE=4M
    SQL> select count(*) from t1;
      COUNT(*)
        135496
    Execution Plan
    Plan hash value: 3724264953
    | Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |   538   (1)| 00:00:07 |
    |   1 |  SORT AGGREGATE    |      |     1 |            |          |
    |   2 |   TABLE ACCESS FULL| T1   |   126K|   538   (1)| 00:00:07 |
    Note
       - dynamic sampling used for this statement (level=2)
    Statistics
              9  recursive calls
              0  db block gets
           2006  consistent gets
           2218  physical reads
              0  redo size
            424  bytes sent via SQL*Net to client
            419  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed
    SQL> /
      COUNT(*)
        135496
    Execution Plan
    Plan hash value: 3724264953
    | Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |   538   (1)| 00:00:07 |
    |   1 |  SORT AGGREGATE    |      |     1 |            |          |
    |   2 |   TABLE ACCESS FULL| T1   |   126K|   538   (1)| 00:00:07 |
    Note
       - dynamic sampling used for this statement (level=2)
    Statistics
              0  recursive calls
              0  db block gets
           1940  consistent gets
           1937  physical reads
              0  redo size
            424  bytes sent via SQL*Net to client
            419  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed

    DB_KEEP_CACHE_SIZE=10M
    SQL> connect / as sysdba
    Connected.
    SQL>
    SQL> alter system set db_keep_cache_size=10M scope=both;
    System altered.
    SQL>
    SQL> connect hr/hr@orcl
    Connected.
    SQL>
    SQL> show parameter keep
    NAME                                 TYPE        VALUE
    buffer_pool_keep                     string
    control_file_record_keep_time        integer     7
    db_keep_cache_size                   big integer 12M
    SQL>
    SQL> set autotrace on
    SQL>
    SQL> select count(*) from t1;
      COUNT(*)
        135496
    Execution Plan
    Plan hash value: 3724264953
    | Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |   538   (1)| 00:00:07 |
    |   1 |  SORT AGGREGATE    |      |     1 |            |          |
    |   2 |   TABLE ACCESS FULL| T1   |   126K|   538   (1)| 00:00:07 |
    Note
       - dynamic sampling used for this statement (level=2)
    Statistics
              0  recursive calls
              0  db block gets
           1940  consistent gets
           1937  physical reads
              0  redo size
            424  bytes sent via SQL*Net to client
            419  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed
    SQL> /
      COUNT(*)
        135496
    Execution Plan
    Plan hash value: 3724264953
    | Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |   538   (1)| 00:00:07 |
    |   1 |  SORT AGGREGATE    |      |     1 |            |          |
    |   2 |   TABLE ACCESS FULL| T1   |   126K|   538   (1)| 00:00:07 |
    Note
       - dynamic sampling used for this statement (level=2)
    Statistics
              0  recursive calls
              0  db block gets
           1940  consistent gets
           1937  physical reads
              0  redo size
            424  bytes sent via SQL*Net to client
            419  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed

    DB_KEEP_CACHE_SIZE=20M
    SQL> connect / as sysdba
    Connected.
    SQL>
    SQL> alter system set db_keep_cache_size=20M scope=both;
    System altered.
    SQL>
    SQL> connect hr/hr@orcl
    Connected.
    SQL>
    SQL> show parameter keep
    NAME                                 TYPE        VALUE
    buffer_pool_keep                     string
    control_file_record_keep_time        integer     7
    db_keep_cache_size                   big integer 20M
    SQL> set autotrace on
    SQL> select count(*) from t1;
      COUNT(*)
        135496
    Execution Plan
    Plan hash value: 3724264953
    | Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |   538   (1)| 00:00:07 |
    |   1 |  SORT AGGREGATE    |      |     1 |            |          |
    |   2 |   TABLE ACCESS FULL| T1   |   126K|   538   (1)| 00:00:07 |
    Note
       - dynamic sampling used for this statement (level=2)
    Statistics
              0  recursive calls
              0  db block gets
           1943  consistent gets
           1656  physical reads
              0  redo size
            424  bytes sent via SQL*Net to client
            419  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed
    SQL> /
      COUNT(*)
        135496
    Execution Plan
    Plan hash value: 3724264953
    | Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |   538   (1)| 00:00:07 |
    |   1 |  SORT AGGREGATE    |      |     1 |            |          |
    |   2 |   TABLE ACCESS FULL| T1   |   126K|   538   (1)| 00:00:07 |
    Note
       - dynamic sampling used for this statement (level=2)
    Statistics
              0  recursive calls
              0  db block gets
           1943  consistent gets
              0  physical reads
              0  redo size
            424  bytes sent via SQL*Net to client
            419  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed

    Only with a 20M db_keep_cache_size do I see no physical reads.
    Does it mean that if the db_keep_cache_size < table size, there is no caching for that table ?
    Or am I missing something ?
    Rgds,
    Gokul

    Hello Jonathan,
    Many thanks for your response.
    Here is the test I ran;
    SQL> select buffer_pool,blocks from dba_tables where owner = 'HR' and table_name = 'T1';
    BUFFER_     BLOCKS
    KEEP          1977
    SQL> select count(*) from v$bh where objd = (select data_object_id from dba_objects where owner = 'HR' and object_name = 'T1');
      COUNT(*)
          1939
    SQL> show parameter db_keep_cache_size
    NAME                                 TYPE        VALUE
    db_keep_cache_size                   big integer 20M
    SQL>
    SQL> alter system set db_keep_cache_size = 5M scope=both;
    System altered.
    SQL> select count(*) from hr.t1;
      COUNT(*)
        135496
    SQL> select count(*) from v$bh where objd = (select data_object_id from dba_objects where owner = 'HR' and object_name = 'T1');
      COUNT(*)
            992

    I think my inference is wrong and, as you said, I am indeed seeing the effect of the tail end flushing the start of the table.
    Rgds,
    Gokul
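    As a follow-up, one way to see how large the keep pool must be to hold the whole table is to convert its allocated blocks to MB (a sketch; DBA_SEGMENTS reports allocated blocks, slightly more than the blocks actually used):
    SELECT CEIL(s.blocks * t.block_size / 1024 / 1024) AS required_mb
    FROM dba_segments s, dba_tablespaces t
    WHERE t.tablespace_name = s.tablespace_name
    AND s.owner = 'HR'
    AND s.segment_name = 'T1';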

  • Getting same index size despite different table size

    Hello,
    this question arose from a different thread, but touches a different problem, which is why I have decided to post it as a separate thread.
    I have several tables of 3D points.
    The points roughly describe the same area but in different densities, which means the tables are of different sizes. The smallest contains around 3million entries and the largest around 37 million entries.
    I applied an index with
    CREATE INDEX <index name>
    ON <table name>(<column name>)
    INDEXTYPE is MDSYS.SPATIAL_INDEX
    PARAMETERS('sdo_indx_dims=3');
    My problem is that I am trying to see how much space the index occupies for each table.
    I used the following syntax to get the answer to this:
    SELECT usim.sdo_index_name segment_name, bytes/1024/1024 segment_size_mb
    FROM user_segments us, user_sdo_index_metadata usim
    WHERE usim.SDO_INDEX_NAME = <spatial index name>
    AND us.segment_name = usim.SDO_INDEX_TABLE;
    (thanks Reggie for supplying the sql)
    Now, the curious thing is that in all cases, I get the answer
    SEGMENT_NAME SEGMENT_SIZE_MB
    LIDAR_POINTS109_IDX .0625
    (obviously with a different segment name in each case).
    I tried to see what an estimated index size would be with
    SDO_TUNE.ESTIMATE_RTREE_INDEX_SIZE
    And I get estimates ranging from 230MB in the case of 3 million records up to 2.9GB in the case of 37 million records.
    Does anyone have an idea why I am not getting a different actual index size for the different tables?
    Any help is greatly appreciated!!!
    Cheers,
    F.

    It looks like your indexes didn't actually create properly. Spatial indexes are a bit different to 'normal' indexes in this regard. A BTree index will either create or not. However, when creating a spatial index, something may fail, but the index structure will remain and it will appear to be valid according to the data dictionary.
    Consider the following example in which the SRID has a problem:
    SQL> CREATE TABLE INDEX_TEST (
      2  ID NUMBER PRIMARY KEY,
      3  GEOMETRY SDO_GEOMETRY);
    Table created.
    SQL>
    SQL> INSERT INTO INDEX_TEST (ID, GEOMETRY) VALUES (1,
      2  SDO_GEOMETRY(2001, 99999, SDO_POINT_TYPE(569278.141, 836920.735, NULL), NULL, NULL)
      3  );
    SQL> INSERT INTO user_sdo_geom_metadata VALUES ('INDEX_TEST','GEOMETRY',
      2     MDSYS.SDO_DIM_ARRAY(
      3     MDSYS.SDO_DIM_ELEMENT('X',0, 1000, 0.0005),
      4     MDSYS.SDO_DIM_ELEMENT('Y',0, 1000, 0.0005)
      5  ), 88888);
    1 row created.
    SQL>
    SQL> CREATE INDEX INDEX_TEST_SPIND ON INDEX_TEST(GEOMETRY) INDEXTYPE IS MDSYS.SPATIAL_INDEX;
    CREATE INDEX INDEX_TEST_SPIND ON INDEX_TEST(GEOMETRY) INDEXTYPE IS MDSYS.SPATIAL_INDEX
    ERROR at line 1:
    ORA-29855: error occurred in the execution of ODCIINDEXCREATE routine
    ORA-13249: SRID 88888 does not exist in MDSYS.CS_SRS table
    ORA-29400: data cartridge error
    Error - OCI_NODATA
    ORA-06512: at "MDSYS.SDO_INDEX_METHOD_10I", line 10
    SQL> SELECT usim.sdo_index_name segment_name, bytes/1024/1024 segment_size_mb,
      2  usim.sdo_index_status
      3  FROM user_segments us, user_sdo_index_metadata usim
      4  WHERE usim.SDO_INDEX_NAME = 'INDEX_TEST_SPIND'
      5  AND us.segment_name = usim.SDO_INDEX_TABLE;
    SEGMENT_NAME                     SEGMENT_SIZE_MB SDO_INDEX_STATUS
    INDEX_TEST_SPIND                           .0625 VALID
    1 row selected.
    SQL>

    When you ran the CREATE INDEX statement, did it say "Index created." afterwards, or did you get an error?
    Did you run the CREATE INDEX statement in SQL*Plus yourself or was it run by some software?
    I suggest you drop the indexes and try creating them again, watching out for any errors (a sketch follows below). Chances are it's an SRID issue.
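    A sketch of that drop-and-recreate, reusing the PARAMETERS clause from the original post (the table and column names here are placeholders for your own):
    DROP INDEX lidar_points109_idx;
    CREATE INDEX lidar_points109_idx
    ON lidar_points109(geometry)
    INDEXTYPE IS MDSYS.SPATIAL_INDEX
    PARAMETERS('sdo_indx_dims=3');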

  • Calculating TABLE SIZE and INDEX SIZE

    Product: ORACLE SERVER
    Date written: 2002-10-15
    Calculating TABLE SIZE and INDEX SIZE
    ===================================
    1. TABLE SIZE calculation formula (assuming an ORACLE BLOCK SIZE of 2K)
    $ sqlplus scott/tiger
    SQL> SELECT GREATEST(4, ceil(ROW_COUNT /
    ((round(((1958 - (initrans * 23)) *
    ((100 - PCTFREE) /100)) / ADJ_ROW_SIZE)))) * BLOCK_SIZE)
    TableSize_Kbytes
    FROM dual;
    *. Bytes available in one block: 1958
    *. Each initrans takes 23 bytes
    *. PCTFREE : the table's pctfree value (default 10)
    *. ADJ_ROW_SIZE : estimated average size of each row
    *. ROW_COUNT : number of rows in the table
    *. BLOCK_SIZE : size of one block (unit: K)
    Example) when the table name is EMP:
    ROW_COUNT : select count(*) from emp;
    ADJ_ROW_SIZE :
    analyze table emp compute statistics;
    (or, when there are very many rows, use estimate instead of compute)
    select avg_row_len
    from user_tables
    where table_name='EMP';
    2. INDEX SIZE calculation formula
    SQL> SELECT GREATEST(4, (1.01) * ((ROW_COUNT /
    ((floor(((2048 - 113 - (initrans * 23)) *
    (1 - (PCTFREE/100))) /
    ((10 + uniqueness) + number_col_index +
    (total_col_length)))))) * DB_BLOCK_SIZE))
    IndexSize_Kbytes
    FROM dual;
    *. Bytes available in one block: 1935 (i.e. 2048 - 113)
    *. Each initrans takes 23 bytes
    *. ROW_COUNT : number of rows in the table
    *. PCTFREE : the index's pctfree value (default 10)
    *. number_col_index : number of columns in the index
    *. total_col_length : estimated total length of the index columns
    *. uniqueness : 1 for a unique index, 0 for a non-unique index
    *. DB_BLOCK_SIZE : size of one block (unit: K)

    Looking at the data block layout:
    The block header consists of the cache layer and the transaction layer.
    The data layer is divided into the table directory, row directory, free space, and row data.
    In v$type_size, the sizes of the KCB and KTB components are the actual header sizes.
    Here they add up to 92, so 2048 - 92 = 1956.
    Perhaps that is where the value comes from? Though it differs by 2 bytes from the 1958 above.
    COMPONENT  TYPE   DESCRIPTION                  TYPE_SIZE
    KCB        KCBH   BLOCK COMMON HEADER                 20
    KTB        KTBIT  TRANSACTION VARIABLE HEADER         24
    KTB        KTBBH  TRANSACTION FIXED HEADER            48
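    For reference, those header sizes can be read directly from v$type_size; a minimal sketch:
    SELECT component, type, description, type_size
    FROM v$type_size
    WHERE component IN ('KCB', 'KTB');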

  • The size of the target table grows abnormaly

    hi all,
    I am currently using OWB (version 9.2.0.4) to feed some tables.
    We have created a new 9.2.0.5 database for a new data warehouse.
    I have an issue that I really cannot explain about the increasing size of the target tables.
    Take the example of a parameter table that contains 4 fields and only 12 rows.
    CREATE TABLE SSD_DIM_ACT_INS (
      ID_ACT_INS INTEGER,
      COD_ACT_INS VARCHAR2(10 BYTE),
      LIB_ACT_INS VARCHAR2(80 BYTE),
      CT_ACT_INS VARCHAR2(10 BYTE)
    )
    TABLESPACE IOW_OIN_DAT
    PCTUSED 0
    PCTFREE 10
    INITRANS 1
    MAXTRANS 255
    STORAGE (
      INITIAL 1M
      MINEXTENTS 1
      MAXEXTENTS 2147483645
      PCTINCREASE 0
      BUFFER_POOL DEFAULT
    )
    LOGGING
    NOCACHE
    NOPARALLEL;
    This table is fed by a mapping, and I use the update/insert option, which generates a MERGE.
    At first the table is empty; I run the mapping and it adds 14 lines.
    The size of the table is now 5 MB!!
    Then I delete 2 lines by SQL with TOAD.
    I run the mapping again. It updates 12 lines and adds 2 lines.
    At this point the size of the table has increased by 2 MB (1 MB per line!!).
    The size of the table is now 7 MB!!
    I do the same again and I get a 9 MB table.
    When I delete 2 lines with a SQL statement and create them manually, the size of the table does not change.
    When I create a copy of the table with an INSERT ... SELECT statement, the size comes out at 1 MB, which is normal.
    Could someone explain to me how this is possible?
    Is it a problem with the database? With the configuration of OWB?
    What should I check?
    Thank you for your help.

    Hi all
    We have found the reason for the increase.
    Each mapping has a HINT which defaults to PARALLEL APPEND. As I understand it, OWB uses it to determine whether an insert allocates new space for the table when it runs the insert.
    We have changed each one to PARALLEL NOAPPEND and now it's correct.
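    To illustrate the difference (a sketch; stg_act_ins is a hypothetical staging table with matching columns):
    -- direct-path insert (APPEND): rows are written above the high-water mark,
    -- so the segment grows even when deletes have left free space in existing blocks
    INSERT /*+ APPEND */ INTO ssd_dim_act_ins
    SELECT id_act_ins, cod_act_ins, lib_act_ins, ct_act_ins FROM stg_act_ins;
    COMMIT;
    -- conventional insert (NOAPPEND): free space below the high-water mark is reused
    INSERT INTO ssd_dim_act_ins
    SELECT id_act_ins, cod_act_ins, lib_act_ins, ct_act_ins FROM stg_act_ins;
    COMMIT;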

  • How to reduce table size after deleting data in table

    In one of our environments we have a 300 GB table which contains 50 columns; some of the columns are large-object (LOB) columns. This table contains data for the past year and is growing by 40 GB every month. Due to this we have space issues. We would like to reduce the table size by keeping only the most recent two months of data. What are the possible ways to reduce the table size while keeping only 2 months of data? Database version 10.2.0.4 on RHEL 4.

    kumar wrote:
    Finally we dont have down time to do by exp/imp method.
    You have two problems to address:
    - How you get from where you are now to where you want to be
    - Figuring out what you want to do when you get there so that you can stay there.
    Technically a simple strategy to "delete all data more than 64 days old" could be perfect - once you've got your table (and lob segments) down to the correct size for two months of data. If you've got the licencing and can use local indexing it might be even better to use (for example) daily partitioning by date.
    To GET to the 2-month data set you need to do something big and nasty - this will probably give you the choice between blocking access for a while and getting the job done relatively quickly (e.g. CTAS) or leaving the system run slowly for a relatively long time while generating huge amounts of redo. (e.g. delete 10 months of data, then shrink / compact). You also have a choice between using NO extra space to get the job done (shrink/compact) or doing something which effectively copies the last two months of data.
    Think about the side effects you're prepared to run with, then we can tell you which options might be most appropriate.
    Regards
    Jonathan Lewis
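    As a rough sketch of the two routes (table and column names are hypothetical; LOB segments need their own attention, and SHRINK SPACE requires an ASSM tablespace):
    -- route 1: copy out the two months you want to keep, then swap the tables
    -- (fast, but needs downtime and extra space)
    CREATE TABLE big_table_keep AS
    SELECT * FROM big_table
    WHERE creation_date >= ADD_MONTHS(TRUNC(SYSDATE, 'MM'), -2);
    -- route 2: delete in place, then reclaim the space
    -- (no extra space, but slow and redo-heavy)
    DELETE FROM big_table
    WHERE creation_date < ADD_MONTHS(TRUNC(SYSDATE, 'MM'), -2);
    COMMIT;
    ALTER TABLE big_table ENABLE ROW MOVEMENT;
    ALTER TABLE big_table SHRINK SPACE CASCADE;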

  • My view size keeps on minimizing--why?

    My screen size keeps on changing; fonts, pictures, etc. keep getting smaller while scrolling.

    Make sure that you do not press the Ctrl key while scrolling the page with the mouse wheel.
    You may also need to check the mouse settings in Control Panel > Mouse
    Other things that need your attention:
    Your above posted system details show outdated plugin(s) with known security and stability risks that you should update.
    # Shockwave Flash 10.0 r45
    # Next Generation Java Plug-in 1.6.0_23 for Mozilla browsers
    Update the [[Managing the Flash plugin|Flash]] plugin to the latest version.
    *http://kb.mozillazine.org/Flash
    *http://www.adobe.com/software/flash/about/
    Update the [[Java]] plugin to the latest version.
    *http://kb.mozillazine.org/Java
    *http://www.oracle.com/technetwork/java/javase/downloads/index.html (Java Platform: Download JRE)
