ROW CHAINING and ROW MIGRATION

Product: ORACLE SERVER
Date written: 2002-04-10
ROW CHAINING and ROW MIGRATION
=============================
Purpose
Understand row chaining and row migration and look at ways to reduce them.
Problem Description
Row chaining occurs when a row in a table grows so long that it can no
longer fit in its data block. The RDBMS then finds another data block
and links it to the original one. The problem is that the database now
has to perform two I/O operations to do the work of one, and this can
degrade performance quickly.
Row migration occurs when a row in a data block is updated so that it
grows while the block has no free space left. The row's data is then
migrated to a new block large enough to hold the entire row, and the
original location keeps only a pointer to it. Because Oracle must read
more than one data block to retrieve such a row, I/O performance
decreases.
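The extra block visits caused by chained and migrated rows show up in the 'table fetch continued row' statistic; for example:
select name, value
from v$sysstat
where name = 'table fetch continued row';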
Solution Description
1. Identifying row chaining and migration
1) Run ?/rdbms/admin/utlchain.sql to create the CHAINED_ROWS table.
2) Use the ANALYZE command to list the chained and migrated rows:
analyze table emp list chained rows;
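For example (a minimal sketch; it assumes the ANALYZE above has been run), the listed rows can be counted per table with:
select table_name, count(*) chained_or_migrated
from chained_rows
group by table_name;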
2. Resolution procedure (a SQL sketch follows this list)
1) Using the ROWIDs recorded in the CHAINED_ROWS table, copy the
affected rows into an intermediate table that has the same row
structure as the original table.
2) Delete those rows from the original table, again using the ROWIDs
in the CHAINED_ROWS table.
3) Insert the rows from the intermediate table back into the original table.
4) Drop the intermediate table.
5) Delete the corresponding records from the CHAINED_ROWS table.
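A minimal sketch of steps 1) through 5), assuming the table is called EMP and CHAINED_ROWS has already been populated by ANALYZE:
-- 1) copy the affected rows into an intermediate table
create table emp_temp as
  select * from emp
  where rowid in (select head_rowid from chained_rows
                  where table_name = 'EMP');
-- 2) delete them from the original table
delete from emp
where rowid in (select head_rowid from chained_rows
                where table_name = 'EMP');
-- 3) insert them back from the intermediate table
insert into emp select * from emp_temp;
commit;
-- 4) drop the intermediate table
drop table emp_temp;
-- 5) clear the CHAINED_ROWS records
delete from chained_rows where table_name = 'EMP';
commit;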
After this procedure the ANALYZE command must be run again. If rows
still show up in the CHAINED_ROWS table, no block has enough free
space to hold the entire row. Either a single row is simply too long
for one data block, or the table's PCTFREE is not appropriate. In the
former case chaining cannot be avoided; in the latter case adjust
PCTFREE as follows.
3. When the PCTFREE value needs to be adjusted
1) Decide on a better percent free factor for the table.
2) Export the whole table together with all of its dependencies (for
example indexes, grants, and constraints).
3) Drop the original table.
4) Recreate it with the new specification.
5) Import the table.
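On releases that support it, ALTER TABLE ... MOVE (also mentioned in the messages below) is an in-place alternative to the export/recreate/import procedure above; a hedged sketch with illustrative object names:
alter table emp pctfree 20;   -- reserve more free space in each block for row growth
alter table emp move;         -- rebuild the segment so existing rows pick up the new PCTFREE
alter index emp_pk rebuild;   -- MOVE leaves indexes unusable, so rebuild them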

Hi,
SQL> SELECT name, value FROM v$sysstat WHERE name = 'table fetch continued row';
NAME                            VALUE
table fetch continued row         163
Does this mean that 163 tables contain chained rows?
Please advise.
Thanks
KSG

Similar Messages

  • About row-chaining, row- migration in a block

What happens during row chaining when a record is inserted into a block? And for row migration, exactly where in the block does the update of the row take place?

    Hi,
Why do you ask documentation questions everywhere? You had better read some Oracle documentation.

  • Migrated/chained rows causing double I/O

    " You have 3,454,496 table fetch continued row actions during this period. Migrated/chained rows always cause double the I/O for a row fetch and "table fetch continued row" (chained row fetch) happens when we fetch BLOB/CLOB columns (if the avg_row_len > db_block_size), when we have tables with > 255 columns, and when PCTFREE is too small. You may need to reorganize the affected tables with the dbms_redefintion utility and re-set your PCTFREE parameters to prevent future row chaining.
    What is migration and row chaining and when does this happen?
Is there a query to find the affected tables, i.e. those with migrated and chained rows?
Is there a query to find tables whose PCTFREE is too small?
How do I determine the optimal PCTFREE value for these affected tables?

    user3390467 wrote:
    " You have 3,454,496 table fetch continued row actions during this period. Migrated/chained rows always cause double the I/O for a row fetch and "table fetch continued row" (chained row fetch) happens when we fetch BLOB/CLOB columns (if the avg_row_len > db_block_size), when we have tables with > 255 columns, and when PCTFREE is too small. You may need to reorganize the affected tables with the dbms_redefintion utility and re-set your PCTFREE parameters to prevent future row chaining.
This is one of the better observations that you can get from the Statspack Analyzer. It would be helpful, though, if it compared the number of continued fetches with the number of rows fetched by rowid and rows fetched by tablescan to produce some idea of the relative impact of the continued fetches.
    It is possible that this advice is a waste of space, though --- and we can't tell because (a) we don't know how long the interval was, and (b) we don't know where your system spent its time.
    If you care to post your statspack report, we might be able to give you some suggestions of the issues that are worth addressing. If you choose to do this (a) you may want to edit some of the text to make the report anonymous (database name, instance name, components of filenames, all but the first few words of each "SQL ordered by" statement).
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
    "The greatest enemy of knowledge is not ignorance, it is the illusion of knowledge."
    Stephen Hawking

  • How can we reduce Row Chaining?

    In a 10gR2 db, how can i reduce row chaining in tables?

    Hi,
First, the prevention techniques for chained rows vs. migrated rows are a bit different. Note that both chained rows and migrated (relocated) rows manifest as "table fetch continued row" in v$sysstat and stats$sysstat for STATSPACK and dba_hist_sysstat for AWR.
Preventing chained rows - Chained rows can occur when a row is too large for a data block. In these cases, moving large objects into a tablespace with a larger blocksize can often relieve chained rows.
Preventing migrated rows - Migrated rows occur when a row expands (usually with a VARCHAR2 data type) and there is not enough reserve defined by PCTFREE for the row to expand. In this case, you adjust PCTFREE to ensure that future rows will have room to expand, and reorganize the table to remove the fragments.
On some tables which are stored tiny and grow huge, you may need to set PCTFREE to a "large" value, so that only one row is stored per block. For example, if I have a row with a varchar2 that is stored at 2k and grows to 30k, I would need to use a 32k blocksize and set PCTFREE=95 so that only one row is stored on each data block. That way, at update time, there will be room for the row to expand without fragmenting.
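A minimal sketch of that approach (names, sizes, and file paths are illustrative; a buffer cache for the non-default 32K block size must be configured before the tablespace can be created):
ALTER SYSTEM SET db_32k_cache_size = 64M;          -- cache for the non-default block size
CREATE TABLESPACE ts_32k
  DATAFILE '/u01/oradata/orcl/ts_32k_01.dbf' SIZE 1G
  BLOCKSIZE 32K;
-- rebuild the fast-growing table with room for every row to expand
ALTER TABLE big_growing_tab MOVE TABLESPACE ts_32k PCTFREE 95;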
    Operationally, Oracle consultant Steve Adams offers this tip for finding the difference between chained and migrated rows:
    http://www.freelists.org/archives/oracle-l/10-2008/msg00750.html
    +"You can tell the difference between row migration and chaining by listing the chained rows with ANALYZE table LIST CHAINED ROWS and then fetching the first column from each "chained row" in a single query.+
    +The count of continued row fetches will be incremented for every migrated row, but not for most chained rows (unless the first cut point happens to fall with the first column, which should be rare)."+
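A hedged sketch of that check for a hypothetical table T (FIRST_COL stands in for its first column):
ANALYZE TABLE t LIST CHAINED ROWS;
-- baseline of the session statistic
SELECT ms.value
  FROM v$mystat ms, v$statname sn
 WHERE sn.name = 'table fetch continued row'
   AND sn.statistic# = ms.statistic#;
-- fetch only the first column of every listed row via its head rowid
SELECT t.first_col
  FROM t, chained_rows cr
 WHERE t.rowid = cr.head_rowid
   AND cr.table_name = 'T';
-- re-run the statistic query: the increase approximates the number of
-- migrated rows, while truly chained rows mostly leave it unchanged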
    Hope this helps . . .
    Donald K. Burleson
    Oracle Press author
    Author of "Oracle Tuning: The Definitive Reference"
    http://www.rampant-books.com/book_2005_1_awr_proactive_tuning.htm

  • Row chaining and row migration in Oracle 10g R2/11g R2

    Hi,
Due to business rule changes, one of the numeric columns in a large table (20 million rows) will be expanded from NUMBER(8) to NUMBER(10). Also, the value of this column in each row will be updated from 6 digits to 10 digits. All the indexes that use this column will be dropped and recreated after the update. I would like to know if there is any row chaining or row migration issue in Oracle 10g R2 / 11g R2.
    Thanks for your help

    neemin wrote:
    Hi,
Due to business rule changes, one of the numeric columns in a large table (20 million rows) will be expanded from NUMBER(8) to NUMBER(10). Also, the value of this column in each row will be updated from 6 digits to 10 digits. All the indexes that use this column will be dropped and recreated after the update. I would like to know if there is any row chaining or row migration issue in Oracle 10g R2 / 11g R2.
Thanks for your help
It depends.
What do you observe after TESTING against the development DB?
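A minimal sketch of such a test on a development copy (table and column names are illustrative):
ALTER TABLE big_tab MODIFY (num_col NUMBER(10));
-- grow the existing 6-digit values to 10 digits
UPDATE big_tab SET num_col = num_col * 10000;
COMMIT;
-- ANALYZE (not DBMS_STATS) populates CHAIN_CNT
ANALYZE TABLE big_tab COMPUTE STATISTICS;
SELECT chain_cnt, num_rows
  FROM user_tables
 WHERE table_name = 'BIG_TAB';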

  • Row chaining and row migration ???

    hi
Can someone tell me what the options are to overcome row chaining and row migration in 10g and 11g databases?
    thanx in advance.
    s

    WIP  wrote:
    hi
Can someone tell me what the options are to overcome row chaining and row migration in 10g and 11g databases?
thanx in advance.
s
Hi. A chained row is a row that is too large to fit into a single database data block. Row migration means that an update to a row would cause it to no longer fit in its block, so the row is moved to a new address. For more information see the links below:
    http://blog.tanelpoder.com/2009/11/04/detect-chained-and-migrated-rows-in-oracle/
    http://www.akadia.com/services/ora_chained_rows.html

  • Row chaining & Migration

    My production is running on 8i.
I have observed that some of the tables reach chain_cnt > 200,000.
I brought the chain count to zero by creating a temporary table and
moving all the migrated rows into it, then deleting the migrated rows from the original table and inserting the rows from the temporary table back into the original table.
Then I analyzed the table with COMPUTE STATISTICS.
Can anybody guide me on how I can prevent row migration/chaining in the future?
    What are the parameters I have to consider while creating a table?
    Thanks in advance,
    chotu

Row Chaining and Migration are two different things with different causes. Based on your description you were having Row Chaining.
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14220/logical.htm#i15914
Since you are on 8i you might be using manually managed tablespaces. For those, you mainly reduce row migration by tweaking PCTFREE and PCTUSED.
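For example (values are illustrative and depend on the expected row growth), those attributes are set at creation time like this:
CREATE TABLE orders (
  order_id NUMBER,
  status   VARCHAR2(400)
)
PCTFREE 20    -- keep 20% of each block free for rows that grow on update (reduces migration)
PCTUSED 40    -- put the block back on the freelist once it drops below 40% used
TABLESPACE users;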

  • Row chaining and Row migrate

    Hi,
How do I tell the difference between row chaining and row migration?
In what table or view can I see the difference?

    http://asktom.oracle.com/pls/ask/f?p=4950:8:::::F4950_P8_DISPLAYID:4423420997870

  • Monitor chained rows and migrated rows of tables

Hi all,
How do I monitor the chained and migrated rows of tables? I think some big tables have chained or migrated rows. What is the benchmark for recreating the tables? Is there any script to identify chained and migrated rows?
    Please help?

Sorry, I forgot to post the query; here it is:
select
   owner              c1,
   table_name         c2,
   pct_free           c3,
   pct_used           c4,
   avg_row_len        c5,
   num_rows           c6,
   chain_cnt          c7,
   chain_cnt/num_rows c8
from dba_tables
where owner not in ('SYS','SYSTEM')
and table_name not in
    (select table_name from dba_tab_columns
     where data_type in ('RAW','LONG RAW'))
and chain_cnt > 0
order by chain_cnt desc;
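Note that CHAIN_CNT in DBA_TABLES is only populated by the ANALYZE command (DBMS_STATS leaves it untouched), so something like the following needs to be run per table first (table name is illustrative):
analyze table scott.emp compute statistics;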
    Regards
    jafar

  • Buffer busy waits and chained rows

    Hi,
I have a DB with many buffer busy waits events.
This is caused by the application that runs on it and by many tablespaces that are in MSSM.
Many tables suffer from chained rows.
My question is, can chained rows create further impact on buffer busy waits?
    Thanks.

Hi Stefan,
> Caused by the application due to what? High amount of INSERTs or what? Insufficient MSSM settings by database object creation? Bad physical database design (e.g. > 255 columns, column types)?
Applications and jobs perform DELETE, UPDATE and INSERT every 30s. Tablespaces are in Manual Segment Space Management, not AUTO (I think a wrong database design).
> It depends. Do you mean intra-block row chaining or row chaining across various blocks? What kind of access path? Do you really experience chained rows and not migrated rows (it is mixed up a lot of times)?
Migrated rows and row chaining across various blocks, caused by frequent UPDATEs and DELETEs. Migration is resolved with ALTER TABLE MOVE or exp/imp.
    Thank you

  • What to do if row chaining is found?

Hello All,
If I find row chaining in my table, what do I have to do?
Also, in my database there is one table which contains 20,000,000 records, so is it advisable to partition this table for faster searching?
And how do I check the performance of an Oracle 10g database? Since it was installed I have not checked anything in the database.
How do I check which patches are applied to the database?
Can anybody give me basic guidance so that I can check whether my database works fine or not? I want to check its response time and everything performance related. Currently I am getting a very slow response from my database.

If I find row chaining in my table, what do I have to do?
In most cases chaining is unavoidable, especially when this involves tables
    with large columns such as LONGS, LOBs, etc. When you have a lot of chained
    rows in different tables and the average row length of these tables is not
    that large, then you might consider rebuilding the database with a larger
    blocksize.
    e.g.: You have a database with a 2K block size. Different tables have multiple
    large varchar columns with an average row length of more than 2K. Then this
means that you will have a lot of chained rows because your block size is
    too small. Rebuilding the database with a larger block size can give you
    a significant performance benefit.
    Migration is caused by PCTFREE being set too low, there is not enough room in
    the block for updates. To avoid migration, all tables that are updated should
    have their PCTFREE set so that there is enough space within the block for updates.
    You need to increase PCTFREE to avoid migrated rows. If you leave more free
    space available in the block for updates, then the row will have more room to
    grow.
    SQL Script to eliminate row migration/chaining :
    Get the name of the table with migrated rows:
    ACCEPT table_name PROMPT 'Enter the name of the table with migrated rows: '
    -- Clean up from last execution
    set echo off
    DROP TABLE migrated_rows;
    DROP TABLE chained_rows;
    -- Create the CHAINED_ROWS table
    @.../rdbms/admin/utlchain.sql
    set echo on
    spool fix_mig
    -- List the chained and migrated rows
    ANALYZE TABLE &table_name LIST CHAINED ROWS;
    -- Copy the chained/migrated rows to another table
    create table migrated_rows as
    SELECT orig.*
    FROM &table_name orig, chained_rows cr
    WHERE orig.rowid = cr.head_rowid
    AND cr.table_name = upper('&table_name');
    -- Delete the chained/migrated rows from the original table
    DELETE FROM &table_name WHERE rowid IN (SELECT head_rowid FROM chained_rows);
    -- Copy the chained/migrated rows back into the original table
    INSERT INTO &table_name SELECT * FROM migrated_rows;
    spool off
Also, in my database there is one table which contains 20,000,000 records, so is it advisable to partition this table for faster searching?
download-uk.oracle.com/docs/cd/B19306_01/server.102/b14220/partconc.htm
And how do I check the performance of an Oracle 10g database? Since it was installed I have not checked anything in the database.
Can anybody give me basic guidance so that I can check whether my database works fine or not? I want to check its response time and everything performance related. Currently I am getting a very slow response from my database.
download-uk.oracle.com/docs/cd/B19306_01/server.102/b14211/toc.htm
    Jafar

  • Row chaining problem

    hi
As far as I know, row chaining is the process in which, when a row is unable to fit in one data block, it is split into more than one block as a set of chunks. I just want to know whether this allocation is always done in contiguous blocks or whether the pieces are spread across various locations in the datafile, because if so then row chaining also resembles row migration (if the pieces end up in various locations of the datafile which are not contiguous).
Now another question arises:
If the row is stored in contiguous locations, then what will happen if it is unable to fit even after using the contiguous free blocks? I mean, will Oracle search for a long chain of contiguous free blocks and move the complete row into it, or will there be partial row migration?
I hope you understand my requirement. Thanks a lot for the clarification.
    thanks
    aps

    Hi
I only quoted a part of the text by D. Burleson.
Of course if Don is the source I will never see the test case I would like to see ;-)
    In the whole context it is described like your one:
    ======================
    You also need to understand how new free blocks are
    added to the freelist chain. At table extension time,
    the high-water mark for the table is increased, and
    new blocks are moved onto the master freelist, where
    they are, in turn, moved to process freelists. For
    tables that do not contain multiple freelists, the
    transfer is done five blocks at a time. For tables
    with multiple freelists, the transfer is done in
    sizes (5*(number of freelists + 1)). For example, in
    a table with 20 freelists, 105 blocks will be moved
    onto the master freelist each time that a table
    increases its high-water mark.
======================
As I wrote, IMHO, the information is wrong. So, let's have a look at an example (executed on 10.2.0.3 Linux x86_64):
    1) create a new tablespace and a table in it
    SQL> CREATE TABLESPACE t
      2  DATAFILE SIZE 10M AUTOEXTEND ON
      3  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1m
      4  SEGMENT SPACE MANAGEMENT MANUAL
      5  BLOCKSIZE 8K;
SQL> CREATE TABLE t (v varchar2(100)) TABLESPACE t STORAGE (FREELISTS 20);
2) where is the table stored?
    SQL> SELECT file_id, block_id, blocks
      2  FROM dba_extents
      3  WHERE owner = user
      4  AND segment_name = 'T';
       FILE_ID   BLOCK_ID     BLOCKS
         8          9        128
3) fill 5 blocks (this is necessary because for the first 5 increases of the HWM a single block is allocated)
    SQL> INSERT INTO t SELECT rpad('A',100,'A') FROM all_objects WHERE rownum <= 68;
    SQL> INSERT INTO t SELECT rpad('A',100,'A') FROM all_objects WHERE rownum <= 68;
    SQL> INSERT INTO t SELECT rpad('A',100,'A') FROM all_objects WHERE rownum <= 68;
    SQL> INSERT INTO t SELECT rpad('A',100,'A') FROM all_objects WHERE rownum <= 68;
SQL> INSERT INTO t SELECT rpad('A',100,'A') FROM all_objects WHERE rownum <= 68;
4) dump the header block to know the current HWM and the status of the freelists
SQL> ALTER SYSTEM DUMP DATAFILE 8 BLOCK 9;
5) the trace file contains the following information (notice the HWM and that all freelists except one process freelist are "empty")
      Extent Control Header
      Extent Header:: spare1: 0      spare2: 0      #extents: 1      #blocks: 127
                      last map  0x00000000  #maps: 0      offset: 4128
          Highwater:: 0x0200000f ext#: 0 blk#: 5      ext size: 127
      #blocks in seg. hdr's freelists: 1
      #blocks below: 5
      mapblk  0x00000000  offset: 0
                       Unlocked
         Map Header:: next  0x00000000  #extents: 1    obj#: 12493  flag: 0x40000000
      Extent Map
       0x0200000a  length: 127
      nfl = 20, nfb = 1 typ = 1 nxf = 0 ccnt = 0
      SEG LST:: flg: UNUSED lhd: 0x00000000 ltl: 0x00000000
      SEG LST:: flg: UNUSED lhd: 0x00000000 ltl: 0x00000000
      SEG LST:: flg: UNUSED lhd: 0x00000000 ltl: 0x00000000
      SEG LST:: flg: UNUSED lhd: 0x00000000 ltl: 0x00000000
      SEG LST:: flg: UNUSED lhd: 0x00000000 ltl: 0x00000000
      SEG LST:: flg: UNUSED lhd: 0x00000000 ltl: 0x00000000
      SEG LST:: flg: UNUSED lhd: 0x00000000 ltl: 0x00000000
      SEG LST:: flg: UNUSED lhd: 0x00000000 ltl: 0x00000000
      SEG LST:: flg: UNUSED lhd: 0x00000000 ltl: 0x00000000
      SEG LST:: flg: UNUSED lhd: 0x00000000 ltl: 0x00000000
      SEG LST:: flg: UNUSED lhd: 0x00000000 ltl: 0x00000000
      SEG LST:: flg: UNUSED lhd: 0x00000000 ltl: 0x00000000
      SEG LST:: flg: UNUSED lhd: 0x00000000 ltl: 0x00000000
      SEG LST:: flg: UNUSED lhd: 0x00000000 ltl: 0x00000000
      SEG LST:: flg: UNUSED lhd: 0x00000000 ltl: 0x00000000
      SEG LST:: flg: UNUSED lhd: 0x00000000 ltl: 0x00000000
      SEG LST:: flg: UNUSED lhd: 0x00000000 ltl: 0x00000000
      SEG LST:: flg: USED lhd: 0x0200000e ltl: 0x0200000e
      SEG LST:: flg: UNUSED lhd: 0x00000000 ltl: 0x00000000
      SEG LST:: flg: UNUSED lhd: 0x00000000 ltl: 0x00000000
  SEG LST:: flg: UNUSED lhd: 0x00000000 ltl: 0x00000000
6) fill one more block
SQL> INSERT INTO t SELECT rpad('A',100,'A') FROM all_objects WHERE rownum <= 68;
7) re-dump the header block
SQL> ALTER SYSTEM DUMP DATAFILE 8 BLOCK 9;
8) now the trace file contains the following information (notice that the HWM has increased by 5 blocks and not 100; in addition, as before, only one process freelist is not "empty")
      Extent Control Header
      Extent Header:: spare1: 0      spare2: 0      #extents: 1      #blocks: 127
                      last map  0x00000000  #maps: 0      offset: 4128
          Highwater:: 0x02000014 ext#: 0 blk#: 10     ext size: 127
      #blocks in seg. hdr's freelists: 5
      #blocks below: 10
      mapblk  0x00000000  offset: 0
                       Unlocked
         Map Header:: next  0x00000000  #extents: 1    obj#: 12493  flag: 0x40000000
      Extent Map
       0x0200000a  length: 127
      nfl = 20, nfb = 1 typ = 1 nxf = 0 ccnt = 0
      SEG LST:: flg: UNUSED lhd: 0x00000000 ltl: 0x00000000
      SEG LST:: flg: UNUSED lhd: 0x00000000 ltl: 0x00000000
      SEG LST:: flg: UNUSED lhd: 0x00000000 ltl: 0x00000000
      SEG LST:: flg: UNUSED lhd: 0x00000000 ltl: 0x00000000
      SEG LST:: flg: UNUSED lhd: 0x00000000 ltl: 0x00000000
      SEG LST:: flg: UNUSED lhd: 0x00000000 ltl: 0x00000000
      SEG LST:: flg: UNUSED lhd: 0x00000000 ltl: 0x00000000
      SEG LST:: flg: UNUSED lhd: 0x00000000 ltl: 0x00000000
      SEG LST:: flg: UNUSED lhd: 0x00000000 ltl: 0x00000000
      SEG LST:: flg: UNUSED lhd: 0x00000000 ltl: 0x00000000
      SEG LST:: flg: UNUSED lhd: 0x00000000 ltl: 0x00000000
      SEG LST:: flg: UNUSED lhd: 0x00000000 ltl: 0x00000000
      SEG LST:: flg: UNUSED lhd: 0x00000000 ltl: 0x00000000
      SEG LST:: flg: UNUSED lhd: 0x00000000 ltl: 0x00000000
      SEG LST:: flg: UNUSED lhd: 0x00000000 ltl: 0x00000000
      SEG LST:: flg: UNUSED lhd: 0x00000000 ltl: 0x00000000
      SEG LST:: flg: UNUSED lhd: 0x00000000 ltl: 0x00000000
      SEG LST:: flg: USED lhd: 0x0200000f ltl: 0x02000013
      SEG LST:: flg: UNUSED lhd: 0x00000000 ltl: 0x00000000
      SEG LST:: flg: UNUSED lhd: 0x00000000 ltl: 0x00000000
  SEG LST:: flg: UNUSED lhd: 0x00000000 ltl: 0x00000000
Best regards,
    Chris

  • Is this too much chained rows ? How to prevent chained rows ?

    Hi,
Due to a performance issue on my database, I came across "chained/migrated rows" articles ... and ran a script to check chained rows .....
I have chained rows in 2 tables but only one is worth mentioning. It is a table that has 50 CLOB columns and 1.1 million records .....
After running the script for chained rows I get 500,000 chained rows out of these 1.1 million ....
I will now do as explained in the forums and books and reinsert these rows ..... to try to fix this.
So my question would be: what do I need to do to prevent this, if I can actually do anything at all, so as not to get so many chained rows? I understand that chaining cannot be prevented for some rows ...
The database block size is 8192 ..... The average row length (stats) of this table is 6093, est. size 8.9G .... PCTFREE is 10 by default ...
At this moment I am getting the warning "PCTFREE too low for a table" and it is at 1.3 ...
Do I need to increase the database block size and/or increase PCTFREE to somewhere between 20-25? If yes, can I somehow increase the block size only for this table, because recreating a database that is 79GB would take some time ...?
Performance is a big issue, disk space is not ...
    Thank you.
    Kris

    user10702996 wrote:
The whole inserted row contains data about one newspaper article ..... So what we did for better search performance is to "cache" every word from the article into defined CLOB columns, ordered by first character ... so words starting with A are in the CHAR_1 CLOB column, B is in CHAR_B, and so on ....
    How are you querying the data ?
From your description, it looks as if you need to look at Oracle's "text" indexing - I am basing this comment on the assumption that you are trying to do things like: "find all articles that reference aardvarks and zebras", and turning this into a search of the "A lob" and the "Z lob" of every row in the table. (I'm guessing that your biggest performance problem is actually the need to examine every row, rather than the problem of chained rows - obviously I may be wrong).
    If you use context (or intermedia, the name changes with version) you need only store the news item once as a LOB then create a text index on it - leaving Oracle to build supporting structures that allow you to run such queries fairly efficiently.
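A hedged sketch of what that might look like (table, column, and index names are illustrative):
CREATE INDEX article_txt_idx ON articles (article_text)
  INDEXTYPE IS CTXSYS.CONTEXT;
SELECT article_id
  FROM articles
 WHERE CONTAINS(article_text, 'aardvarks AND zebras') > 0;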
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
    To post code, statspack/AWR report, execution plans or trace files, start and end the section with the tag {noformat}{noformat} (lowercase, curly brackets, no spaces) so that the text appears in fixed format.
    "Science is more than a body of knowledge; it is a way of thinking"
Carl Sagan

  • Row chaining in table with more than 255 columns

    Hi,
    I have a table with 1000 columns.
I saw the following citation: "Any table with more than 255 columns will have chained
rows (we break really wide tables up)."
If I insert a row populated with only the first 3 columns (the others are null), does row chaining occur?
I tried to insert a row as described above and no row chaining occurred.
As I understand it, row chaining occurs in a table with 1000 columns only when the populated data exceeds
the block size OR when more than 255 columns are populated. Am I right?
    Thanks
    dyahav

    user10952094 wrote:
    Hi,
    I have a table with 1000 columns.
    I saw the following citation: "Any table with more then 255 columns will have chained
    rows (we break really wide tables up)."
    If I insert a row populated with only the first 3 columns (the others are null), is a row chaining occurred?
    I tried to insert a row described above and no row chaining occurred.
    As I understand, a row chaining occurs in a table with 1000 columns only when the populated data increases
    the block size OR when more than 255 columns are populated. Am I right?
    Thanks
dyahav
Yesterday, I stated this on the forum: "Tables with more than 255 columns will always have chained rows." My statement needs clarification. It was based on the following:
    http://download.oracle.com/docs/cd/B28359_01/server.111/b28318/schema.htm#i4383
    "Oracle Database can only store 255 columns in a row piece. Thus, if you insert a row into a table that has 1000 columns, then the database creates 4 row pieces, typically chained over multiple blocks."
    And this paraphrase from "Practical Oracle 8i":
    V$SYSSTAT will show increasing values for CONTINUED ROW FETCH as table rows are read for tables containing more than 255 columns.
    Related information may also be found here:
    http://download.oracle.com/docs/cd/B10501_01/server.920/a96524/c11schem.htm
    "When a table has more than 255 columns, rows that have data after the 255th column are likely to be chained within the same block. This is called intra-block chaining. A chained row's pieces are chained together using the rowids of the pieces. With intra-block chaining, users receive all the data in the same block. If the row fits in the block, users do not see an effect in I/O performance, because no extra I/O operation is required to retrieve the rest of the row."
    http://download.oracle.com/docs/html/B14340_01/data.htm
    "For a table with several columns, the key question to consider is the (average) row length, not the number of columns. Having more than 255 columns in a table built with a smaller block size typically results in intrablock chaining.
    Oracle stores multiple row pieces in the same block, but the overhead to maintain the column information is minimal as long as all row pieces fit in a single data block. If the rows don't fit in a single data block, you may consider using a larger database block size (or use multiple block sizes in the same database). "
    Why not a test case?
    Create a test table named T4 with 1000 columns.
    With the table created, insert 1,000 rows into the table, populating the first 257 columns each with a random 3 byte string which should result in an average row length of about 771 bytes.
    SPOOL C:\TESTME.TXT
    SELECT
      SN.NAME,
      MS.VALUE
    FROM
      V$MYSTAT MS,
      V$STATNAME SN
    WHERE
      SN.NAME = 'table fetch continued row'
      AND SN.STATISTIC#=MS.STATISTIC#;
INSERT INTO T4 (
COL1,
COL2,
COL3,
-- columns COL4 through COL254 elided in the original listing
COL255,
COL256,
COL257)
SELECT
DBMS_RANDOM.STRING('A',3),
DBMS_RANDOM.STRING('A',3),
DBMS_RANDOM.STRING('A',3),
-- matching values for COL4 through COL254 elided; one DBMS_RANDOM.STRING('A',3) per column
DBMS_RANDOM.STRING('A',3),
DBMS_RANDOM.STRING('A',3),
DBMS_RANDOM.STRING('A',3)
    FROM
      DUAL
    CONNECT BY
      LEVEL<=1000;
    SELECT
      SN.NAME,
      MS.VALUE
    FROM
      V$MYSTAT MS,
      V$STATNAME SN
    WHERE
      SN.NAME = 'table fetch continued row'
      AND SN.STATISTIC#=MS.STATISTIC#;
    SET AUTOTRACE TRACEONLY STATISTICS
SELECT *
    FROM
      T4;
    SET AUTOTRACE OFF
    SELECT
      SN.NAME,
      SN.STATISTIC#,
      MS.VALUE
    FROM
      V$MYSTAT MS,
      V$STATNAME SN
    WHERE
      SN.NAME = 'table fetch continued row'
      AND SN.STATISTIC#=MS.STATISTIC#;
SPOOL OFF
What are the results of the above?
    Before the insert:
    NAME                      VALUE                                                
    table fetch continue        166
    After the insert:
    NAME                      VALUE                                                
    table fetch continue        166                                                
    After the select:
    NAME                 STATISTIC#      VALUE                                     
table fetch continue        252        332
Another test, this time with an average row length of about 12 bytes:
    DELETE FROM T4;
    COMMIT;
    SPOOL C:\TESTME2.TXT
    SELECT
      SN.NAME,
      MS.VALUE
    FROM
      V$MYSTAT MS,
      V$STATNAME SN
    WHERE
      SN.NAME = 'table fetch continued row'
      AND SN.STATISTIC#=MS.STATISTIC#;
    INSERT INTO T4 (
      COL1,
      COL256,
      COL257,
      COL999)
    SELECT
    DBMS_RANDOM.STRING('A',3),
    DBMS_RANDOM.STRING('A',3),
    DBMS_RANDOM.STRING('A',3),
    DBMS_RANDOM.STRING('A',3)
    FROM
      DUAL
    CONNECT BY
      LEVEL<=100000;
    SELECT
      SN.NAME,
      MS.VALUE
    FROM
      V$MYSTAT MS,
      V$STATNAME SN
    WHERE
      SN.NAME = 'table fetch continued row'
      AND SN.STATISTIC#=MS.STATISTIC#;
    SET AUTOTRACE TRACEONLY STATISTICS
SELECT *
    FROM
      T4;
    SET AUTOTRACE OFF
    SELECT
      SN.NAME,
      SN.STATISTIC#,
      MS.VALUE
    FROM
      V$MYSTAT MS,
      V$STATNAME SN
    WHERE
      SN.NAME = 'table fetch continued row'
      AND SN.STATISTIC#=MS.STATISTIC#;
SPOOL OFF
With 100,000 rows each containing about 12 bytes, what should the 'table fetch continued row' statistic show?
    Before the insert:
    NAME                      VALUE                                                
    table fetch continue        332 
    After the insert:
    NAME                      VALUE                                                
    table fetch continue        332
    After the select:
    NAME                 STATISTIC#      VALUE                                     
table fetch continue        252      33695
The final test only inserts data into the first 4 columns:
    DELETE FROM T4;
    COMMIT;
    SPOOL C:\TESTME3.TXT
    SELECT
      SN.NAME,
      MS.VALUE
    FROM
      V$MYSTAT MS,
      V$STATNAME SN
    WHERE
      SN.NAME = 'table fetch continued row'
      AND SN.STATISTIC#=MS.STATISTIC#;
    INSERT INTO T4 (
      COL1,
      COL2,
      COL3,
      COL4)
    SELECT
    DBMS_RANDOM.STRING('A',3),
    DBMS_RANDOM.STRING('A',3),
    DBMS_RANDOM.STRING('A',3),
    DBMS_RANDOM.STRING('A',3)
    FROM
      DUAL
    CONNECT BY
      LEVEL<=100000;
    SELECT
      SN.NAME,
      MS.VALUE
    FROM
      V$MYSTAT MS,
      V$STATNAME SN
    WHERE
      SN.NAME = 'table fetch continued row'
      AND SN.STATISTIC#=MS.STATISTIC#;
    SET AUTOTRACE TRACEONLY STATISTICS
SELECT *
    FROM
      T4;
    SET AUTOTRACE OFF
    SELECT
      SN.NAME,
      SN.STATISTIC#,
      MS.VALUE
    FROM
      V$MYSTAT MS,
      V$STATNAME SN
    WHERE
      SN.NAME = 'table fetch continued row'
      AND SN.STATISTIC#=MS.STATISTIC#;
SPOOL OFF
What should the 'table fetch continued row' show?
    Before the insert:
    NAME                      VALUE                                                
    table fetch continue      33695
    After the insert:
    NAME                      VALUE                                                
    table fetch continue      33695
    After the select:
    NAME                 STATISTIC#      VALUE                                     
table fetch continue        252      33695
My statement "Tables with more than 255 columns will always have chained rows." needs to be clarified:
    "Tables with more than 255 columns will always have chained rows +(row pieces)+ if a column beyond column 255 is used, but the 'table fetch continued row' statistic +may+ only increase in value if the remaining row pieces are found in a different block."
    Charles Hooper
    IT Manager/Oracle DBA
    K&M Machine-Fabricating, Inc.
    Edited by: Charles Hooper on Aug 5, 2009 9:52 AM
    Paraphrase misspelled the view name "V$SYSSTAT", corrected a couple minor typos, and changed "will" to "may" in the closing paragraph as this appears to be the behavior based on the test case.

  • Row chaining issue in Oracle 10g

    Hello All,
I was seeing a row chaining issue in one of our production DBs. Row chaining was present in all tables having LONG RAW columns.
As of now I am not supposed to change these to BLOB/CLOB, so I did exp/imp to solve the issue. However, we are repeating this exercise once every quarter, and now it is time we put a permanent fix in place.
    One of such tables has below storage parameters:
    PCTUSED    0
    PCTFREE    10
    INITRANS   1
    MAXTRANS   255
    STORAGE    (
                INITIAL          40K
                MINEXTENTS       1
                MAXEXTENTS       UNLIMITED
                PCTINCREASE      0
                BUFFER_POOL      DEFAULT
            )
Can I be advised what the tuning options would be for the above? Note: all of these tables are in the GBs.
    For any inputs, please let me know.
    Thanks,
    Suddhasatwa

    SELECT table_name,
           Round(( blocks * 8 ) / 1024 / 1024, 2)
           "Physical Size (GB)",
           Round(( num_rows * avg_row_len / 1024 / 1024 / 1024 ), 2)
           "Actual Size (GB)",
           ( Round(( blocks * 8 ) / 1024 / 1024, 2) - Round((
             num_rows * avg_row_len / 1024 / 1024 / 1024 ), 2) )
           "Wasted Space (GB)"
    FROM   dba_tables
    WHERE  owner = 'SYSADM'
           AND ( Round(( blocks * 8 ) / 1024, 2) - Round(
                     ( num_rows * avg_row_len / 1024 / 1024 )
                                                       , 2) ) > 20
           AND table_name IN (SELECT table_name
                              FROM   dba_tab_columns
                              WHERE  data_type IN ( 'RAW', 'LONG RAW', 'LONG' ))
           AND table_name IN (SELECT table_name
                              FROM   dba_tab_columns
                              WHERE  data_type LIKE '%LONG%')
    ORDER  BY ( Round(( blocks * 8 ) / 1024, 2) - Round(
                          ( num_rows * avg_row_len / 1024 / 1024 )
                                                             , 2) ) DESC;
Is the air inside the gas tank on your car considered "Wasted Space"?
Would your car run any better if the size of the gas tank got reduced as gasoline was consumed?
Realize and understand that Oracle does reuse FREE SPACE without any manual intervention.
It appears you suffer from Compulsive Tuning Disorder!
