Estimate table size

I'm trying to pick a handful of tables in a particular schema which are taking up quite a bit of space, and move them onto another tablespace. There's one I know of in particular, but I'd like to move a couple more if they're also using up a good deal of space. Could I use a query like the following to determine which tables are taking up the majority of the space:
select table_name, num_rows, blocks, num_rows/blocks as div
from dba_tables
where owner = <schema>
  and blocks > 0 and num_rows > 10000
order by 4
It's mostly a blunt instrument to find the 10 or so most space-intensive tables. Or is there a better way, e.g. multiplying (row count) * (max bytes per row), to get a better estimate?
Thanks
--=Chuck

Hello,
select table_name, num_rows, blocks, num_rows/blocks as div from dba_tables
Be aware that the columns num_rows and blocks are only populated when you collect statistics on the table (with DBMS_STATS, for instance). So you need fresh statistics if you want reliable values in these columns.
Hope this helps.
Best regards,
Jean-Valentin
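
For what it's worth, a sketch of both approaches (the schema and table names here are just placeholders): refresh the statistics first if you rely on num_rows/blocks, or query dba_segments, which reports allocated space directly and does not depend on statistics at all:
-- refresh statistics so num_rows and blocks are current
exec dbms_stats.gather_table_stats(ownname => 'SCOTT', tabname => 'BIG_TABLE');

-- or skip statistics entirely: dba_segments shows allocated space
select *
  from (select segment_name, sum(bytes)/1024/1024 as mb
          from dba_segments
         where owner = 'SCOTT'
           and segment_type = 'TABLE'
         group by segment_name
         order by mb desc)
 where rownum <= 10;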

Similar Messages

  • Estimate table size for last 4 years

    Hi,
    I am on Oracle 10g
    I need to estimate a table size for the last 4 years. My plan is to get a count of the rows in the table for the last 4 years and then multiply that value by avg_row_len to get the total size for 4 years. Is this technique correct, or do I need to add some overhead?
    Thanks

    Yes, the technique is correct, but it is better to account for some overhead. I usually multiply the results by 10 :)
    The most important thing to check is whether there is any trend in data volumes. Was the count of records 4 years ago more or less equal to last year's? Is the business growing or steady? How fast is it growing? What are the prospects for the future? The last year is not always 25% of the last 4 years; it happens that the last year is more than the 3 other years added together.
    The other, technical issue is the internal organisation of data in Oracle datafiles: the famous PCTFREE. If you expect the data to be updated, then it is much better to keep some unused space in each database block in case some of your records get larger. This is much better for performance reasons. For example, you leave 10% of each database block free, and when you update a record with a longer value (like replacing a NULL column with an actual 25-character string), your record still fits into the same block. You should account for this and add it to your estimates.
    On the other hand, if your records never get updated and you load them in batch, then maybe they can be ORDERed before insert and you can set up the table with the COMPRESS clause. Oracle's COMPRESS clause has very little in common with zip/gzip utilities, but it can bring you significant space savings.
    Finally, there is no point in making estimates too accurate. They are only estimates, and reality will almost always differ. In general, it is better to overestimate and have some disk space unused than to underestimate and need people to deal with the issue. Disks are cheap; people on the project are expensive.
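    As a sketch of that calculation (the table and column names here are made up), you can let the database do the arithmetic, padding the result for block overhead and pctfree:
    -- rows from the last 4 years times average row length, plus ~20% overhead
    select count(*) * t.avg_row_len * 1.20 / 1024 / 1024 as est_mb
      from orders o, user_tables t
     where t.table_name = 'ORDERS'
       and o.order_date >= add_months(sysdate, -48)
     group by t.avg_row_len;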

  • Estimate index size on a table column before creating it

    Hi
    Is it possible to estimate the size of an index before actually creating it on a table column?
    I tried the query below, but it gives the size of the index only after creating it.
    SELECT (SUM(bytes)/1048576)/1024 Gigs, segment_name
    FROM user_extents
    WHERE segment_name = 'IDX_NAME'
    GROUP BY segment_name;
    Can anyone throw some light on which system table will give this information?

    You can get an approximation by estimating the number of rows to be indexed, the average column lengths of the columns in the index, and the overheads for an index entry - once you have some reasonable stats on the table.
    I wrote a piece of code to demonstrate the method a few years ago - it's got some errors, but I've highlighted them in an update to the note: http://www.jlcomp.demon.co.uk/index_efficiency_2.html
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
    To post code, statspack/AWR report, execution plans or trace files, start and end the section with the tag {noformat}{noformat} (lowercase, curly brackets, no spaces) so that the text appears in fixed format.
    "Science is more than a body of knowledge; it is a way of thinking"
    Carl Sagan
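    On 10g and later you could also try DBMS_SPACE.CREATE_INDEX_COST, which predicts the index size from the table's existing statistics before you build anything. A minimal sketch (the DDL is a made-up example; the table needs statistics gathered first):
    set serveroutput on
    declare
      l_used  number;   -- bytes the index entries themselves will occupy
      l_alloc number;   -- bytes that will be allocated in the tablespace
    begin
      dbms_space.create_index_cost(
        ddl         => 'create index idx_name on my_table (my_col)',
        used_bytes  => l_used,
        alloc_bytes => l_alloc);
      dbms_output.put_line('used bytes:      ' || l_used);
      dbms_output.put_line('allocated bytes: ' || l_alloc);
    end;
    /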

  • How can I estimate the table size?

    Hi guys,
    how can I estimate table size? I have a table which contains char, numc, string, dec and so on.
    The table will contain about 1 million rows. How much space will this table need?
    How can I calculate it? Is there any ABAP command to get it?
    Any hint is appreciated!
    Regards,
    Youyou

    Hi
    Perhaps this part of the statement DESCRIBE TABLE from SAP help can be useful for you:
    DESCRIBE TABLE
    Syntax
    DESCRIBE TABLE itab [KIND knd] [LINES lin] [OCCURS n].
    Extras:
    1. ... KIND knd
    2. ... LINES lin
    3. ... OCCURS n
    Effect
    This statement determines some properties of the internal table itab and assigns them to the specified variables. The various additions enable you to determine the table type, the number of currently filled rows and the initial memory requirement.
    In addition, the system fields sy-tfill and sy-tleng are filled with the current number of table rows and the length of a table row in bytes.
    So the system field sy-tleng holds the size of a single row in bytes, which means the total table size can be calculated by multiplying that field by the total number of records in the table:
    parameters: p_rec type i.
    DATA: T_BSEG TYPE TABLE OF BSEG.
    data: size type sy-tleng. " total size in bytes
    SELECT * UP TO p_rec rows into table t_bseg
      FROM bseg where bukrs = 'MAAB'.
    * fills sy-tfill (row count) and sy-tleng (row length) as side effects
    describe table t_bseg LINES sy-tabix.
    size = sy-tleng * sy-tabix.
    write:   'Nr. records:', sy-tfill, sy-tabix,
           / 'Nr. bytes:',   size.
    Max

  • Table size estimate

    hi
    can I estimate the size of a table if I know the number of rows?

    In the case where the table does not yet exist and good sample data may not be known, try this:
    Oracle Table Sizing Estimation Formula
    Abbreviations
    AVIL = Available space in block to hold rows
    OBS = Oracle block size
    RS = Row size
    Ovhd = Fixed plus variable block overhead
    TBR = Total blocks required
    Expected size = (((RS * number of rows) / AVIL) * OBS) / K or M
    where K = 1024 and M = 1048576
    Figure RS as
    for varchar2 expected number of characters for column
    for number, where p = number of digits and s = 0 for a positive number, 1 for a negative number:
    round((length(p) + s) / 2) + 1
    for date use 7
    + 1 byte per column in row
    + 3 byte row overhead per row
    [Or use the dba_tables.avg_row_len value]
    Figure number of bytes for block as
    pctfree = decimal value of pctfree parameter * OBS
    The variable area is mostly made up of 23 bytes per initrans area and 2 bytes per row for the row table entry. For 1 to 4 initrans I have calculated block overhead of 86 to 156 bytes, so I just use a constant for this value. Try 113 to start.
    Figure AVIL as OBS - ovhd - pctfree
    Total bytes = number of expected rows * RS
    TBR = Total Bytes / AVIL
    Expected Size = TBR * OBS / 1024 [for K]
    This is one way and it is fairly quick and works pretty well. The formula can be improved by adjusting the variable area size for the number of initrans and for the number of expected rows in the block, but using a constant works well for us.
    HTH -- Mark D Powell --
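    A worked example of the formula, assuming an 8K block, pctfree 10, the suggested 113-byte overhead constant, a 100-byte average row and 1,000,000 rows:
    -- AVIL = 8192 - 113 - (8192 * 0.10) = ~7260 bytes usable per block
    -- TBR  = (100 * 1000000) / 7260     = ~13775 blocks
    -- Size = 13775 * 8192 / 1048576     = ~108 MB
    select ceil((100 * 1000000) / (8192 - 113 - 8192 * 0.10)) * 8192 / 1048576 as est_mb
      from dual;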

  • How to calculate table size given the structure.

    Today I was asked how to estimate the size of a table given the expected transactions.
    The table has 90 columns, all VARCHAR2(10).
    How do you calculate the size of a table which has no data?

    hi,
    SQL> create table test as select * from all_objects ;
    Table created.
    SQL> create table test1 as select * from all_objects where 1 = 2 ;
    Table created.
    SQL> select segment_name, bytes/1024/1024 "size", blocks
         from dba_segments
         where segment_name in ('TEST','TEST1');
    SEGMENT_NAME      size     BLOCKS
    TEST1            .0625          8
    TEST                 6        768
    regards
    Taj

  • Calculating TABLE SIZE and INDEX SIZE

    Product: ORACLE SERVER
    Date written: 2002-10-15
    Calculating TABLE SIZE and INDEX SIZE
    ===================================
    1. TABLE SIZE calculation formula (assuming an ORACLE BLOCK SIZE of 2K)
    $ sqlplus scott/tiger
    SQL> SELECT GREATEST(4, ceil(ROW_COUNT /
         round(((1958 - (INITRANS * 23)) *
         ((100 - PCTFREE) / 100)) / ADJ_ROW_SIZE)) * BLOCK_SIZE)
         TableSize_Kbytes
         FROM dual;
    *. Available bytes in one block: 1958
    *. Each INITRANS takes 23 bytes
    *. PCTFREE: the table's pctfree value (default 10)
    *. ADJ_ROW_SIZE: estimated average size of each row
    *. ROW_COUNT: number of rows in the table
    *. BLOCK_SIZE: size of one block (unit: K)
    Example) if the table name is EMP:
    ROW_COUNT: select count(*) from emp;
    ADJ_ROW_SIZE:
    analyze table emp compute statistics;
    (or use estimate instead of compute when the row count is very large)
    select avg_row_len
    from user_tables
    where table_name='EMP';
    2. INDEX SIZE calculation formula
    SQL> SELECT GREATEST(4, (1.01) * (ROW_COUNT /
         floor(((2048 - 113 - (INITRANS * 23)) *
         (1 - (PCTFREE/100))) /
         ((10 + UNIQUENESS) + NUMBER_COL_INDEX +
         (TOTAL_COL_LENGTH)))) * DB_BLOCK_SIZE)
         IndexSize_Kbytes
         FROM dual;
    *. Available bytes in one block: 1935, i.e. 2048 - 113
    *. Each INITRANS takes 23 bytes
    *. ROW_COUNT: number of rows in the table
    *. PCTFREE: the index's pctfree value (default 10)
    *. NUMBER_COL_INDEX: number of columns in the index
    *. TOTAL_COL_LENGTH: estimated length of an index entry
    *. UNIQUENESS: 1 for a unique index, 0 for a non-unique index
    *. DB_BLOCK_SIZE: size of one block (unit: K)

    Looking at the data block layout:
    Data Block Layout
    The block header holds the cache layer and the transaction layer.
    The data layer is divided into the table directory, row directory, free space, and row data.
    In v$type_size, the sizes of KCB and KTB are the actual header sizes.
    Here that works out to 92, giving 2048 - 92 = 1956.
    Might that be where the value comes from? Though it differs by 2 bytes.
    COMPONENT  TYPE   DESCRIPTION                  TYPE_SIZE
    KCB        KCBH   BLOCK COMMON HEADER                 20
    KTB        KTBIT  TRANSACTION VARIABLE HEADER         24
    KTB        KTBBH  TRANSACTION FIXED HEADER            48

  • DB Size and Table Size Estimation

    Hi all,
    Please tell me a help link or spreadsheet for estimating DB size and table size.
    Regards

    Please tell me a help link or spreadsheet for estimating DB size and table size.
    What size are you looking for?
    1) An estimate of the physical disk space used by an existing database schema?
    2) An estimate of how much physical disk space will be required for some arbitrary data in order to create a new database?
    Something else?
    You need to be clear about your requirements.
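    For case 1, a quick sketch against dba_segments (the schema name is illustrative):
    -- allocated space per segment type for one schema, in MB
    select segment_type, sum(bytes)/1024/1024 as mb
      from dba_segments
     where owner = 'SCOTT'
     group by segment_type
     order by mb desc;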

  • Estimate Database Size

    We are planning to use an Oracle database (Oracle 10g on Unix). We are in the process of estimating the hard disk capacity (in GB) required, based on past data for each business process. But we are faced with the following queries:
    1. Assuming the structure for a table is as follows :-
    create table temp1(
    Name varchar2(4),
    Age Number(2),
    Salary Number(8,2),
    DOB Date)
    The estimated number of records per year is assumed to be 500. How can we estimate the size that will be required for this table?
    2. We are planning to allocate 20% space for indexes on each table. Is that OK?
    3. Audit logs (to keep track of changes made through updates): should they be kept in a different partition/hard disk?
    Is there anything else to consider? Is there a better way to estimate the required hard disk capacity more accurately?
    Our current database in Informix takes around 100GB per year, but there is a lot of redundant data, and due to a business process change we cannot take that into consideration.
    Kindly guide. Thanks in advance.

    Well, you can estimate the size of a table by estimating the average row size, multiplying this by the expected number of rows, and then adding in overhead.
    3 for the row header + 4 for name + 2 for age + 5 for Salary + 7 for a date + 1 for each column null/length indicator is about 25 bytes per row (single byte character set) * 500 rows * 1.20 (20% overhead) = small.
    The overhead is the fixed block header, the ITL, the row table (2 bytes per row) and the pctfree. You can estimate this but I just said 20% which is probably a little high.
    The total space needed by indexes often equals or surpasses the space needed by tables. You need to know the design and be pretty sure additional indexes will not be necessary for performance reasons before you allocate such a small percentage of the table space to index space.
    Where you keep audit tables or extracts is totally dependent on your disk setup.
    HTH -- Mark D Powell --
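    For question 1, on 10g you can also let the database apply the block-level overhead for you with DBMS_SPACE.CREATE_TABLE_COST. A minimal sketch, assuming the ~25-byte average row worked out above and a tablespace named USERS:
    set serveroutput on
    declare
      l_used  number;
      l_alloc number;
    begin
      dbms_space.create_table_cost(
        tablespace_name => 'USERS',
        avg_row_size    => 25,     -- from the estimate above
        row_count       => 500,    -- expected rows per year
        pct_free        => 10,
        used_bytes      => l_used,
        alloc_bytes     => l_alloc);
      dbms_output.put_line('used bytes:      ' || l_used);
      dbms_output.put_line('allocated bytes: ' || l_alloc);
    end;
    /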

  • How to estimate the size of the database object, before creating it?

    A typical question arises:
    As DBAs, we all know how to determine an object's size from the database.
    But before creating an object in a database or schema, we do an analysis of the object's size in relation to data growth, i.e. the size we estimate for the object we are about to create.
    for example,
    Create table Test1 (Id Number, Name Varchar2(25), Gender Char(2), DOB Date, Salary Number(7));
    A table is created.
    Now, what is the maximum size of a record for this table, i.e. the maximum row length for one record? And how do we estimate this?
    Please help me on this...

    To estimate a table size before you create it you can do the following.  For each variable character column try to figure out the average size of the data.  Say on average the name will be 20 out of the allowed 25 characters.  Add 7 for each date column.  For numbers
    p = number of digits in value
    s = 0 for positive number and 1 for a negative number
    round((length(p) + s) / 2) + 1
    Now add one byte for the null indicator for each column plus 3 bytes for the row header.  This is your row length.
    Multiply by the expected number of rows to get a rough size, which you need to adjust for the pctfree factor that will be used in each block and for block overhead.  With an 8K Oracle block size, if you use the default pctfree of 10 then you lose 819 bytes of storage.  So 8192 - 819 - 108 estimated overhead = 7265 usable.  Now divide 7265 by the average row length to get an estimate of the number of rows that will fit in this space, and reduce the number of usable bytes by 2 bytes for each row.  This is your new usable space.
    So number of rows X estimate row length divided by usable block space in blocks.  Convert to Megabytes or Gigabytes as desired.
    HTH -- Mark D Powell --
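    As a rough illustration of that method on the Test1 table above (positive values assumed; an unconstrained NUMBER can hold up to 38 digits):
    -- Id     NUMBER        round(38/2) + 1 = 20 bytes
    -- Name   VARCHAR2(25)                    25 bytes
    -- Gender CHAR(2)                          2 bytes
    -- DOB    DATE                             7 bytes
    -- Salary NUMBER(7)     round(7/2) + 1  =  5 bytes
    -- null/length indicators, 1 per column +  5 bytes
    -- row header                           +  3 bytes
    -- maximum row length                   = 67 bytes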

  • "Convert Text to Table" Size limit issue?

    Alphabetize a List
    I’ve been using this well-known workaround for years.
    Select your list and in the Menu bar click Format>Table>Convert Text to Table
    Select one of the column’s cells (1st click selects entire table, 2nd click selects individual cell)
    Open “Table Inspector” (Click Table icon at top of Pages document)
    Make sure “table” button is selected, not “format” button
    Choose Sort Ascending from the Edit Rows & Columns pop-up menu
    Finally, click Format>Table>Convert Table to Text.
    A few days ago I added items & my list was 999 items long, ~22 pages.
    Tonight, I added 4 more items. Still the same # pages but now 1,003 items long.
    Unable to Convert Text to Table! Tried for 45 minutes. I think there is a list length limit, perhaps 999 items?
    I tried closing the document without any changes, re-opening Pages, and re-adding my new items to the end of the list as always, and once again when I highlight the list and click Format>Table>Convert Text to Table ..... nothing happens! If I highlight part of the list, up to 999 items, and leave the 4 new items unhighlighted, it works. I pasted the list into a new doc and copied a few items from the middle of the list and added them to the end of my new 999-item list to make it 1,003 items long (but different items), and it did NOT work. I even attempted to add a single new item, making the list an even 1,000 items long, and nope, not working. Even restarted my iMac, no luck.
    I can get it to work with 999 or fewer items easily as always but no way when I add even a single new item.
    Anyone else have this problem?  It should be easy to test out. If you have a list of say, 100 items, just copy & repeatedly paste into a new document multiple times to get over 1,000 & see if you can select all & then convert it from text to table.
    Thanks!
    Pages 08 v 3.03
    OS 10.6.8

    G,
    Yes, Pages has a table size limit, as you have discovered. Numbers has a much greater capacity for table length, so if you do your sort in Numbers you won't have any practical limitation.
    A better approach than switching to Numbers for the sort would be to download, install and activate Devon Wordservice. Then you could sort your list without converting it to a table.
    Jerry

  • Table size exceeds Keep Pool Size (db_keep_cache_size)

    Hello,
    We have a situation where one of our applications started performing bad since last week.
    After some analysis, it was found this was due to data increase in a table that was stored in KEEP POOL.
    After the data increase, the table size exceeded db_keep_cache_size.
    I was of the opinion that in such cases KEEP POOL will still be used but the remaining data will be brought in as needed from the table.
    But, I ran some tests and found it is not the case. If the table size exceeds db_keep_cache_size, then KEEP POOL is not used at all.
    Is my inference correct here ?
    SQL> select * from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
    PL/SQL Release 11.2.0.2.0 - Production
    CORE    11.2.0.2.0      Production
    TNS for Linux: Version 11.2.0.2.0 - Production
    NLSRTL Version 11.2.0.2.0 - Production
    Setup
    SQL> show parameter keep                    
    NAME                                 TYPE        VALUE
    buffer_pool_keep                     string
    control_file_record_keep_time        integer     7
    db_keep_cache_size                   big integer 4M
    SQL>
    SQL>     
    SQL> create table t1 storage (buffer_pool keep) as select * from all_objects union all select * from all_objects;
    Table created.
    SQL> set autotrace on
    SQL>
    SQL> exec print_table('select * from user_segments where segment_name = ''T1''');
    PL/SQL procedure successfully completed.
    SQL> set serveroutput on
    SQL> exec print_table('select * from user_segments where segment_name = ''T1''');
    SEGMENT_NAME                  : T1
    PARTITION_NAME                :
    SEGMENT_TYPE                  : TABLE
    SEGMENT_SUBTYPE               : ASSM
    TABLESPACE_NAME               : HR_TBS
    BYTES                         : 16777216
    BLOCKS                        : 2048
    EXTENTS                       : 31
    INITIAL_EXTENT                : 65536
    NEXT_EXTENT                   : 1048576
    MIN_EXTENTS                   : 1
    MAX_EXTENTS                   : 2147483645
    MAX_SIZE                      : 2147483645
    RETENTION                     :
    MINRETENTION                  :
    PCT_INCREASE                  :
    FREELISTS                     :
    FREELIST_GROUPS               :
    BUFFER_POOL                   : KEEP
    FLASH_CACHE                   : DEFAULT
    CELL_FLASH_CACHE              : DEFAULT
    PL/SQL procedure successfully completed.
    DB_KEEP_CACHE_SIZE=4M
    SQL> select count(*) from t1;
      COUNT(*)
        135496
    Execution Plan
    Plan hash value: 3724264953
    | Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |   538   (1)| 00:00:07 |
    |   1 |  SORT AGGREGATE    |      |     1 |            |          |
    |   2 |   TABLE ACCESS FULL| T1   |   126K|   538   (1)| 00:00:07 |
    Note
       - dynamic sampling used for this statement (level=2)
    Statistics
              9  recursive calls
              0  db block gets
           2006  consistent gets
           2218  physical reads
              0  redo size
            424  bytes sent via SQL*Net to client
            419  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed
    SQL> /
      COUNT(*)
        135496
    Execution Plan
    Plan hash value: 3724264953
    | Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |   538   (1)| 00:00:07 |
    |   1 |  SORT AGGREGATE    |      |     1 |            |          |
    |   2 |   TABLE ACCESS FULL| T1   |   126K|   538   (1)| 00:00:07 |
    Note
       - dynamic sampling used for this statement (level=2)
    Statistics
              0  recursive calls
              0  db block gets
           1940  consistent gets
           1937  physical reads
              0  redo size
            424  bytes sent via SQL*Net to client
            419  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed
    DB_KEEP_CACHE_SIZE=10M
    SQL> connect / as sysdba
    Connected.
    SQL>
    SQL> alter system set db_keep_cache_size=10M scope=both;
    System altered.
    SQL>
    SQL> connect hr/hr@orcl
    Connected.
    SQL>
    SQL> show parameter keep
    NAME                                 TYPE        VALUE
    buffer_pool_keep                     string
    control_file_record_keep_time        integer     7
    db_keep_cache_size                   big integer 12M
    SQL>
    SQL> set autotrace on
    SQL>
    SQL> select count(*) from t1;
      COUNT(*)
        135496
    Execution Plan
    Plan hash value: 3724264953
    | Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |   538   (1)| 00:00:07 |
    |   1 |  SORT AGGREGATE    |      |     1 |            |          |
    |   2 |   TABLE ACCESS FULL| T1   |   126K|   538   (1)| 00:00:07 |
    Note
       - dynamic sampling used for this statement (level=2)
    Statistics
              0  recursive calls
              0  db block gets
           1940  consistent gets
           1937  physical reads
              0  redo size
            424  bytes sent via SQL*Net to client
            419  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed
    SQL> /
      COUNT(*)
        135496
    Execution Plan
    Plan hash value: 3724264953
    | Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |   538   (1)| 00:00:07 |
    |   1 |  SORT AGGREGATE    |      |     1 |            |          |
    |   2 |   TABLE ACCESS FULL| T1   |   126K|   538   (1)| 00:00:07 |
    Note
       - dynamic sampling used for this statement (level=2)
    Statistics
              0  recursive calls
              0  db block gets
           1940  consistent gets
           1937  physical reads
              0  redo size
            424  bytes sent via SQL*Net to client
            419  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed
    DB_KEEP_CACHE_SIZE=20M
    SQL> connect / as sysdba
    Connected.
    SQL>
    SQL> alter system set db_keep_cache_size=20M scope=both;
    System altered.
    SQL>
    SQL> connect hr/hr@orcl
    Connected.
    SQL>
    SQL> show parameter keep
    NAME                                 TYPE        VALUE
    buffer_pool_keep                     string
    control_file_record_keep_time        integer     7
    db_keep_cache_size                   big integer 20M
    SQL> set autotrace on
    SQL> select count(*) from t1;
      COUNT(*)
        135496
    Execution Plan
    Plan hash value: 3724264953
    | Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |   538   (1)| 00:00:07 |
    |   1 |  SORT AGGREGATE    |      |     1 |            |          |
    |   2 |   TABLE ACCESS FULL| T1   |   126K|   538   (1)| 00:00:07 |
    Note
       - dynamic sampling used for this statement (level=2)
    Statistics
              0  recursive calls
              0  db block gets
           1943  consistent gets
           1656  physical reads
              0  redo size
            424  bytes sent via SQL*Net to client
            419  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed
    SQL> /
      COUNT(*)
        135496
    Execution Plan
    Plan hash value: 3724264953
    | Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |   538   (1)| 00:00:07 |
    |   1 |  SORT AGGREGATE    |      |     1 |            |          |
    |   2 |   TABLE ACCESS FULL| T1   |   126K|   538   (1)| 00:00:07 |
    Note
       - dynamic sampling used for this statement (level=2)
    Statistics
              0  recursive calls
              0  db block gets
           1943  consistent gets
              0  physical reads
              0  redo size
            424  bytes sent via SQL*Net to client
            419  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed
    Only with db_keep_cache_size at 20M do I see no physical reads.
    Does it mean that if the db_keep_cache_size < table size, there is no caching for that table ?
    Or am I missing something ?
    Rgds,
    Gokul

    Hello Jonathan,
    Many thanks for your response.
    Here is the test I ran;
    SQL> select buffer_pool,blocks from dba_tables where owner = 'HR' and table_name = 'T1';
    BUFFER_POOL     BLOCKS
    KEEP              1977
    SQL> select count(*) from v$bh where objd = (select data_object_id from dba_objects where owner = 'HR' and object_name = 'T1');
      COUNT(*)
          1939
    SQL> show parameter db_keep_cache_size
    NAME                                 TYPE        VALUE
    db_keep_cache_size                   big integer 20M
    SQL>
    SQL> alter system set db_keep_cache_size = 5M scope=both;
    System altered.
    SQL> select count(*) from hr.t1;
      COUNT(*)
        135496
    SQL> select count(*) from v$bh where objd = (select data_object_id from dba_objects where owner = 'HR' and object_name = 'T1');
      COUNT(*)
           992
    I think my inference was wrong; as you said, I am indeed seeing the effect of the tail end flushing the start of the table.
    Rgds,
    Gokul
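    A sketch of the check Gokul is running, generalized so you can watch how much of any one table is currently in the buffer cache (the owner and table names are illustrative):
    -- cached blocks vs. total blocks for a single table
    select t.blocks as total_blocks, count(b.block#) as cached_blocks
      from dba_tables t, dba_objects o, v$bh b
     where t.owner = 'HR' and t.table_name = 'T1'
       and o.owner = t.owner and o.object_name = t.table_name
       and b.objd = o.data_object_id
     group by t.blocks;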

  • Change table size and headers in type def cluster

    Is it possible to change a table size and headers that are inside a type def cluster?
    I have a vi that loads test parameters from a csv file. The original program used an AC load, so there was a column for power factor. I now have to convert this same program to be used with a DC load, so there is no power factor column.
    I have modified the vi to adjust the "test table" dynamically based on the input file. But the "test table" in the cluster does not update its size or column headers.
    The "test table" in the cluster is used throughout the main program to set the values for each test step and display the current step by highlighting the row.
    Attachments:
    Load Test Parms.JPG 199 KB
    Table Cluster.JPG 122 KB

    Nevermind, I figured it out...
    I was doing it wrong from the start. In an effort to save time writing the original program, I simply copied the "test table" to my type def cluster.  This worked but was not really as universal as I thought it would be, as the table was now engraved in stone since the cluster is a type def.
    I should not have done that, but rather used an array in the cluster and only used the table in the top level VI where it's displayed on the screen.

  • Table size not reducing after delete

    The table size in dba_segments is not reducing after we delete the data from the table. How can I regain the space after deleting the data from a table?
    Regards,
    Natesh

    I think when you do DELETE it removes the data but it's not releasing any used space and it's still marked as used space. I think reorganizing would help to compress and pack all blocks and release any unused space in blocks.
    Why do you think that? Deleting data will create space that can be reused by subsequent insert/update operations. It is not going to release space back to the tablespace to make it available for inserts into other tables in the tablespace, but that's not generally an issue unless you are permanently decreasing the size of a table, which is pretty rare.
    Would you also please explain the difference between LOB and LONG? Or point me to any link which explains it.
    From the Oracle Concepts manual's section on the LONG data type:
    "Note:
    Do not create tables with LONG columns. Use LOB columns (CLOB, NCLOB) instead. LONG columns are supported only for backward compatibility.
    Oracle also recommends that you convert existing LONG columns to LOB columns. LOB columns are subject to far fewer restrictions than LONG columns. Further, LOB functionality is enhanced in every release, whereas LONG functionality has been static for several releases."
    LONG was a very badly implemented solution to storing large amounts of data. LOBs are a much, much better designed solution-- you should always be using LOBs.
    Justin
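    If you really do need to hand the space back to the tablespace, a sketch of the usual options (the names are placeholders; segment shrink requires ASSM and row movement, and does not work on tables with LONG columns, while MOVE invalidates indexes so they must be rebuilt):
    -- option 1: online segment shrink
    alter table my_table enable row movement;
    alter table my_table shrink space;
    -- option 2: rebuild the segment, then rebuild its indexes
    alter table my_table move;
    alter index my_table_idx rebuild;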

  • TABLE SIZE NOT DECREASING AFTER DELETION. BLOCKS NOT BEING RE-USED

    Hi ,
    Problem:
    Table size before deletion: 40GB
    Total rows before deletion: over 200000
    Rows deleted=190000 rows
    The table size after deletion is larger (as new data was inserted in the meantime).
    Purpose of table:
    This table is a sort of transaction table.
    Whenever an SR is raised by CSR, data gets inserted into this table and is removed when the status is cleared.
    So there is constant insertion and purging will happen on this table.
    We are using ASSM and tablespace is LOCAL.
    This Table has a LONG column also.
    Is this problem because of the LONG column?
    So here there are 2 problems.
    1) INSERTs are not using the space created by DELETE.
    2) New INSERTs are taking much more space than expected?
    Let me have your suggestion
    Thanks,

