Migrated/chained rows causing double I/O

" You have 3,454,496 table fetch continued row actions during this period. Migrated/chained rows always cause double the I/O for a row fetch and "table fetch continued row" (chained row fetch) happens when we fetch BLOB/CLOB columns (if the avg_row_len > db_block_size), when we have tables with > 255 columns, and when PCTFREE is too small. You may need to reorganize the affected tables with the dbms_redefintion utility and re-set your PCTFREE parameters to prevent future row chaining.
What are row migration and row chaining, and when do they happen?
Is there a query to find the affected tables, i.e. those with migrated and chained rows?
Is there a query to find tables whose PCTFREE is too small?
How do I determine the optimal PCTFREE value for these affected tables?

user3390467 wrote:
" You have 3,454,496 table fetch continued row actions during this period. Migrated/chained rows always cause double the I/O for a row fetch and "table fetch continued row" (chained row fetch) happens when we fetch BLOB/CLOB columns (if the avg_row_len > db_block_size), when we have tables with > 255 columns, and when PCTFREE is too small. You may need to reorganize the affected tables with the dbms_redefintion utility and re-set your PCTFREE parameters to prevent future row chaining.
This is one of the better observations that you can get from the Statspack Analyzer. It would be helpful, though, if it compared the number of continued fetches with the number of rows fetched by rowid and the number of rows fetched by tablescan, to give some idea of the relative impact of the continued fetches.
It is possible that this advice is a waste of space, though --- and we can't tell because (a) we don't know how long the interval was, and (b) we don't know where your system spent its time.
If you care to post your statspack report, we might be able to give you some suggestions about the issues that are worth addressing. If you choose to do this, you may want to edit some of the text to make the report anonymous (database name, instance name, components of filenames, all but the first few words of each "SQL ordered by" statement).
Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
http://www.jlcomp.demon.co.uk
"The greatest enemy of knowledge is not ignorance, it is the illusion of knowledge."
Stephen Hawking
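
A hedged sketch of the comparison Jonathan suggests, using v$sysstat statistic names that I believe exist in this form (verify them on your own version):

select name, value
from   v$sysstat
where  name in ('table fetch continued row',  -- fetches that had to follow a chain
                'table fetch by rowid',       -- ordinary single-row fetches
                'table scan rows gotten');    -- rows returned by full scans

If 'table fetch continued row' is only a tiny fraction of the other two values, the continued fetches are probably not worth chasing.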

Similar Messages

  • W_gl_balance_f has 2 balance rows, causing doubling in dashboard rpts

    Hi all,
    Thanks for taking the time to review my post.
    Environment
    Oracle BI Applications 7.9.6 Financial Analytics
    Oracle E-Business Suite 12
    Query
    select ledger_id, gl.code_combination_id, period_name, period_num, segment3 account, segment1, segment2, segment4,
    (begin_balance_dr - begin_balance_CR), (period_net_dr - period_net_CR),
    ((begin_balance_dr - begin_balance_CR) + (period_net_dr - period_net_CR)) ending_balance
    from gl_balances bal, gl_code_combinations gl, fnd_flex_values fv1, fnd_flex_values fv2
    where gl.code_combination_id = bal.code_combination_id and
    gl.segment1 = fv1.flex_value and fv1.flex_value_set_id = 1014868 and
    gl.segment3 = fv2.flex_value and fv2.flex_value_set_id = 1014870
    The results show the correct balance in one row with acct_curr_code = 'USD', but there is a duplicate row, identical except for a balance_type of null, acct_curr_code = '?', and an integration_id that does not start with GL. This is causing the amounts to be doubled in the OBIEE dashboard reports.
    This was a full ETL load
    Thanks for any help
    Mike
    ? in acct_curr_code

    The dimensions I added are aliases of their respective W_XXX_D tables and I have the content levels set for the LTS of my Dims (DimA and DimB) at the detail level. One thing I am not able to figure out is why is it going after "W_GL_OTHER_F /* Fact_W_GL_OTHER_F */" when it should go to W_GL_BALANCE_F and W_GL_BALANCE_A.
    The "GL Account Number" is sourced from the W_GL_ACCOUNT_D table and the other columns "Fin Statement Item" (FIN_STMT_ITEM_CODE) and "GL Account Category" (GL_ACCOUNT_CAT_CODE) are sourced from that same table too. But these 2 cols are on a higher level of hierarchy than the "GL Account Number" in the "GL Accounts" Dim-Hierarchy. But, as soon as I remove the "GL Account Number" logical column from the report the closing amounts show fine although the "Fin Statement Item" and "GL Account Category" cols from the W_GL_ACCOUNT_D table remain. Then the query does not show the W_GL_OTHER_F table in the log. Is there something I need to do to this dim-hierarchy?

  • ROW CHAINING and ROW MIGRATION

    Product: ORACLE SERVER
    Date written: 2002-04-10
    ROW CHAINING and ROW MIGRATION
    =============================
    Purpose
    To understand row chaining and row migration, and to see how to reduce them.
    Problem Description
    Row chaining occurs when the length of a particular row in a table grows so that the row can no longer fit into a single data block. The RDBMS then finds another data block, which is linked to the original block. The problem is that two I/O operations are now needed to do the work of a single I/O, and this situation will quickly degrade your database's performance.
    Row migration occurs when a row in a data block is updated so that its length grows while the block's free space is at 0%. The row's data is migrated to a new block large enough to hold the entire row. Because ORACLE must then read more than one data block to fetch the row's data, I/O performance decreases.
    Solution Description
    1. Identifying row chaining and migration
    1) run ?/rdbms/admin/utlchain.sql
    2) Use the ANALYZE command to count the chained and migrated rows:
    analyze table emp list chained rows;
    2. Resolution steps
    1) Using the ROWIDs recorded in the CHAINED_ROWS table, copy the affected rows into an intermediate table with the same row structure as the original table.
    2) Delete the copied rows from the original table, again using the ROWIDs in the CHAINED_ROWS table.
    3) Insert the rows from the intermediate table back into the original table.
    4) Drop the intermediate table.
    5) Delete the records from the CHAINED_ROWS table.
    After this procedure has been carried out, the analyze command should be run again. If rows accumulate in the CHAINED_ROWS table again, it is because no block has enough free space to hold an entire row: either a single row is too long for one data block, or the table's PCTFREE is not appropriate. In the former case chaining is unavoidable; in the latter case adjust PCTFREE as follows.
    3. When the PCTFREE value must be adjusted
    1) Determine a better percent free factor for the table.
    2) Export the entire table together with all its dependencies (for example indexes, grants, constraints, and so on).
    3) Drop the original table.
    4) Recreate it with the new specification.
    5) Import the table.
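
    A minimal SQL sketch of the repair procedure above, assuming the CHAINED_ROWS table has already been created by utlchain.sql and using a hypothetical EMP table:

    -- 1) record the chained/migrated rows
    analyze table emp list chained rows into chained_rows;
    -- 2) copy the affected rows into an intermediate table
    create table emp_tmp as
      select * from emp
      where rowid in (select head_rowid from chained_rows
                      where table_name = 'EMP');
    -- 3) delete them from the original table, then re-insert them
    delete from emp
      where rowid in (select head_rowid from chained_rows
                      where table_name = 'EMP');
    insert into emp select * from emp_tmp;
    commit;
    -- 4) clean up and re-check
    drop table emp_tmp;
    delete from chained_rows where table_name = 'EMP';
    analyze table emp list chained rows into chained_rows;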

    Hi,
    SQL> SELECT name, value FROM v$sysstat WHERE name = 'table fetch continued row';
    NAME                          VALUE
    ----------------------------- -----
    table fetch continued row       163
    Does this mean that 163 tables contain chained rows?
    Please suggest me.
    Thanks
    KSG

  • Row migration & chaining

    Hi,
    we have a database with over 600 tables (all in the same schema). Every day DML statements run against nearly 400 of those tables. How can we find row migration and chaining with A SINGLE QUERY?
    I would like to check this every day. Is it possible without ANALYZE?
    I know the ANALYZE command, but with it we have to name every table.

    Lubiez Jean-Valentin wrote:
    "However, the columns CHAIN_CNT and NUM_ROWS are populated by ANALYZE statement or DBMS_STATS package."
    Are you sure that DBMS_STATS can be used to populate CHAIN_CNT? A default DBMS_STATS.GATHER_SCHEMA_STATS does not populate CHAIN_CNT, for example, and I'm not aware of any way to populate it without using ANALYZE.
    user3266490 - Why do you want to check this sort of thing on a daily basis? That seems exceptionally odd. Even checking it quarterly would seem like overkill. Are you actually having problems with excessive migrated rows (you can't fix chained rows, so those are basically irrelevant)? If you are, then whatever tables are having problems probably just need to be rebuilt with a more appropriate PCTFREE setting.
    Justin
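
    Since CHAIN_CNT is populated only by ANALYZE, one hedged sketch for the "single query" request is to generate the ANALYZE calls and then read dba_tables; APPUSER is a hypothetical schema name, and note that ANALYZE ... COMPUTE STATISTICS overwrites optimizer statistics, so you may want to re-gather them with DBMS_STATS afterwards:

    begin
      for t in (select table_name from dba_tables where owner = 'APPUSER') loop
        execute immediate 'analyze table appuser.' || t.table_name ||
                          ' compute statistics';
      end loop;
    end;
    /
    -- a single query then gives the per-table picture
    select table_name, num_rows, chain_cnt,
           round(100 * chain_cnt / num_rows, 2) pct_chained
    from   dba_tables
    where  owner = 'APPUSER'
    and    chain_cnt > 0
    order  by chain_cnt desc;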

  • Monitor chained rows and migrated rows of tables

    Hi all,
    How do I monitor the chained rows and migrated rows of tables? I think some big tables have chained or migrated rows. What is the benchmark for recreating the tables? Is there any script to identify chained rows and migrated rows?
    Please help.

    Sorry, I forgot to post the query; here it is:
    select
    owner c1,
    table_name c2,
    pct_free c3,
    pct_used c4,
    avg_row_len c5,
    num_rows c6,
    chain_cnt c7,
    chain_cnt/num_rows c8
    from dba_tables
    where
    owner not in ('SYS','SYSTEM')
    and
    table_name not in
    (select table_name from dba_tab_columns
    where
    data_type in ('RAW','LONG RAW'))
    and
    chain_cnt > 0
    order by chain_cnt desc;
    Regards
    jafar

  • Buffer busy waits and chained rows

    Hi,
    I have a DB with many buffer busy wait events.
    This is caused by the application that runs on it and by many tablespaces that are in MSSM.
    Many tables suffer from chained rows.
    My question is: can chained rows create further impact on buffer busy waits?
    Thanks.

    Hi Stefan,
    > Caused by the application due to what? High amount of INSERTs or what? Insufficient MSSM settings by database object creation? Bad physical database design (e.g. > 255 columns, column types)?
    Applications and jobs perform DELETE, UPDATE and INSERT every 30s. The tablespaces are in Manual Segment Space Management, not AUTO (I think this is poor database design).
    > It depends. Do you mean intra-block row chaining or row chaining across various blocks? What kind of access path? Do you really experience chained rows and not migrated rows (it is mixed up a lot of times)?
    Migrated rows, and row chaining across various blocks, caused by frequent updates and deletes. The migration is resolved with ALTER TABLE MOVE or exp/imp.
    Thank you
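
    For reference, a hedged sketch of the ALTER TABLE MOVE fix mentioned above; T and T_PK are hypothetical names, and since MOVE changes all rowids, every index on the table must be rebuilt afterwards:

    -- rebuild the table, reserving more free space for future updates
    alter table t move pctfree 20;
    -- all indexes on T are now UNUSABLE and must be rebuilt
    select index_name from user_indexes
    where  table_name = 'T' and status = 'UNUSABLE';
    alter index t_pk rebuild;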

  • Is this too much chained rows ? How to prevent chained rows ?

    Hi,
    Due to a performance issue on my database, I came across articles on chained/migrated rows and ran a script to check for chained rows.
    I have chained rows in 2 tables, but only one is worth mentioning. It is a table that has 50 CLOB columns and 1.1 million records.
    After running the script I get 500,000 chained rows out of that 1.1 million.
    I will now do as explained in the forums and books and reinsert these rows to try to fix this.
    So my question is: what do I need to do to prevent getting so many chained rows, if I can do anything at all? I understand that chaining cannot be prevented for some rows.
    The database block size is 8192. The average row length (from stats) of this table is 6093, estimated size 8.9G. PCTFREE is 10 by default.
    At this moment I am getting the warning "PCTFREE too low for a table", and it is at 1.3.
    Do I need to increase the database block size and/or increase PCTFREE to somewhere between 20 and 25? If yes, can I somehow increase the block size only for this table, because recreating a database that is 79GB would take some time?
    Performance is the big issue; disk space is not.
    Thank you.
    Kris
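
    As a rough capacity check on the numbers quoted above (the ~100 bytes of block overhead is an assumption, not a measured figure):

    -- with avg_row_len 6093 in an 8192-byte block, at most one row fits per
    -- block, so growth beyond the remaining slack must chain across blocks
    select floor((8192 - 100) / 6093) as est_rows_per_block,
           (8192 - 100) - 6093       as slack_bytes
    from   dual;

    With roughly one row per block, raising PCTFREE changes little; rows whose length approaches the usable block size will chain regardless.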

    user10702996 wrote:
    The whole inserted row contains data about one newspaper article. So what we did for better search performance is to "cache" every word from the article into a defined CLOB column, ordered by first character ... so words starting with A are in the CHAR_1 CLOB column, B is in CHAR_B, and so on ....
    How are you querying the data ?
    From your description, it looks as if you need to look at Oracle's "text" indexing - I am basing this comment on the assumption that you are trying to do things like: "find all articles that reference aardvarks and zebras", and turning this into a search of the "A lob" and the "Z lob" of every row in the table. (I'm guessing that your biggest performance problem is actually the need to examine every row, rather than the problem of chained rows - obviously I may be wrong).
    If you use context (or intermedia, the name changes with version) you need only store the news item once as a LOB then create a text index on it - leaving Oracle to build supporting structures that allow you to run such queries fairly efficiently.
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
    To post code, statspack/AWR report, execution plans or trace files, start and end the section with the tag {noformat}{noformat} (lowercase, curly brackets, no spaces) so that the text appears in fixed format.
    "Science is more than a body of knowledge; it is a way of thinking"
    Carl Sagan
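
    For what it's worth, a minimal sketch of the Oracle Text approach Jonathan describes, assuming the CTXSYS components are installed and using hypothetical table and column names:

    create table articles (
      id   number primary key,
      body clob
    );
    -- a context index lets CONTAINS() search words inside the CLOB
    create index articles_txt on articles(body)
      indextype is ctxsys.context;
    select id from articles
    where  contains(body, 'aardvarks and zebras') > 0;
    -- note: rows added after index creation only become searchable once
    -- the index is synced (e.g. with ctx_ddl.sync_index)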

  • Effect with Chain row

    Hi All,
    What is the problem if my tables are affected by chained rows?
    How do I solve the problem?
    What is the solution?
    Regards.....

    Chained rows mean your table has had a lot of updated rows, and the space reserved by PCTFREE is not enough to hold all the updated fragments within the same Oracle block, so Oracle places the updated row in the "nearest" possible block, leaving behind just a pointer to the new location.
    When you try to access the row, the old block is searched, and instead of the row itself only a pointer to the new location is retrieved, causing the system to look for the updated row at the next location.
    The result: double the I/O operations = performance degradation.
    In order to avoid this, increase the value of PCTFREE, so that from now on new Oracle blocks have more space to handle updates to their own rows.
    On the other hand, to fix the already chained rows, use:
    1. export/import
    2. CTAS - create table as select ... (see the sketch below)
    3. analyze table ... list chained rows (the utlchain.sql script is required) and fix only the affected rows.
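
    A minimal sketch of option 2 (CTAS), assuming a table named T and remembering that indexes, constraints and grants must be recreated afterwards:

    -- rebuild the table with more space reserved for future updates
    create table t_new pctfree 20 as select * from t;
    drop table t;
    rename t_new to t;
    -- recreate indexes, constraints and grants, then re-gather statistics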

  • Avoid chained Rows

    Hi,
    I have listed out the chained rows from my database. The result is given below...
    TABLE_NAME              ROW_COUNT
    MCGZC01_CALLSLOG               19
    MCGZT01_DLERINVHED            358
    MCGZT06_AVLPMSCOUP           5656
    MCGZT07_ADHOCPOINT            560
    MCGZT08_EXDWARRDET              8
    MCGZT09_POINTSTRAN          44158
    MCGZT01_DLERINVHED_OLD        138
    To avoid these chained and migrated rows, what do I have to do? Is there any specific value to determine the criticality? Because for table MCGZC01_CALLSLOG the row count is 19 and for MCGZT06_AVLPMSCOUP the row count is 5656. What does the row count mean? Please explain.
    Regards,
    Karthik

    No, I am not sure. I have not received any feedback from users. I think they may not know about the performance of the query and the response time of the query because they are working from a web application....
    Hmmm, I think users always complain, regardless of client-server or web app, if things are going slowly. So listen to your users and tune the problems they have.
    OK speaking about chained rows of course oracle docs can be consulted :
    http://download-west.oracle.com/docs/cd/B19306_01/server.102/b14211/instance_tune.htm
    Search for chain in this page.
    And here is a small example when chained rows really affect performance:
    let's create a table with a large varchar2 column and populate it initially with a tiny value. Then we scan the whole table via index scan and look at the statistic table fetch continued row.
    SQL> create table t (
      2  id number primary key,
      3  a varchar2(4000));
    Table created.
    Elapsed: 00:00:00.02
    SQL> insert into t select rownum, 'a' from all_objects;
    54999 rows created.
    Elapsed: 00:00:03.04
    SQL> select * from v$statname where upper(name) like '%CONTIN%';
    STATISTIC# NAME                                                                  CLASS    STAT_ID
           252 table fetch continued row                                                64 1413702393
    Elapsed: 00:00:00.00
    SQL> select * from v$mystat where STATISTIC# = 252;
           SID STATISTIC#      VALUE
           104        252         65
    Elapsed: 00:00:00.00
    SQL> declare
      2  g varchar2(4000);
      3  begin
      4  for i in (select id from t) loop
      5    select a into g from t where id = i.id;
      6  end loop;
      7  end;
      8  /
    PL/SQL procedure successfully completed.
    Elapsed: 00:00:02.04
    SQL> select * from v$mystat where STATISTIC# = 252;
           SID STATISTIC#      VALUE
           104        252         65
    Elapsed: 00:00:00.00
    OK, the value is 65 and the anonymous block took 2.04 secs to run.
    Now let's update the table to expand the row and see how much now takes the same anonymous block
    SQL> update t set a = lpad(a, 4000, a);
    54999 rows updated.
    Elapsed: 00:00:34.05
    SQL> commit;
    Commit complete.
    Elapsed: 00:00:00.00
    SQL> select * from v$mystat where STATISTIC# = 252;
           SID STATISTIC#      VALUE
           104        252         65
    Elapsed: 00:00:00.00
    SQL> declare
      2  g varchar2(4000);
      3  begin
      4  for i in (select id from t) loop
      5    select a into g from t where id = i.id;
      6  end loop;
      7  end;
      8  /
    PL/SQL procedure successfully completed.
    Elapsed: 00:00:17.09
    SQL> select * from v$mystat where STATISTIC# = 252;
           SID STATISTIC#      VALUE
           104        252      55063
    Elapsed: 00:00:00.00
    SQL>
    So it took 17.09 secs and the value for table fetch continued row has bumped way up.
    BUT
    This was an arbitrary, extreme, artificially created case; this is not a normal scenario. If you know this will be your scenario, you'll need to adjust the pctfree parameter accordingly to avoid it.
    And a relatively small number of chained rows won't affect your table reads by rowid. Remember also that full scans are not affected at all, because Oracle reads the whole table anyway, so chained rows don't matter at all in that case.
    Gints Plivna
    http://www.gplivna.eu

  • Getting Chained Rows with SQL-Loader

    Hi, I'm getting chained rows when loading data into an empty table. I always thought this could only happen when updating rows in a table. I'm loading into a data warehouse, so I set pctfree and pctused to use as much of the data blocks as possible. Here are the specs:
    pctfree 5, pctused 90 (blocksize 8k).
    The average rowsize is 1481 bytes.
    There will never be updates or deletes on this table. The table is always created new or truncated. Why does Oracle chain about 30 percent of the rows with the above values? I had to go to pctfree 50 in order to get chained rows = 0.
    tia,
    Danny Smith

    - Check whether your table has check constraints (like column not null). If you trust the data in the file you have to load, you can disable these constraints and re-enable them after the load.
    - Check whether you can modify the table and place it in NOLOGGING mode (generates less redo, but ONLY in SOME conditions).
    Hope it helps
    Rui Madaleno
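
    As a rough sanity check on the block arithmetic in the question (the ~100 bytes of per-block overhead is an assumption; the exact figure varies by version and block settings):

    -- pctfree 5 on an 8k block leaves roughly 8192*0.95 - 100 = 7682 usable
    -- bytes, i.e. about 5 rows of 1481 bytes; pctfree 50 leaves about 2 rows
    select floor((8192 * 0.95 - 100) / 1481) as rows_per_block_pctfree_5,
           floor((8192 * 0.50 - 100) / 1481) as rows_per_block_pctfree_50
    from   dual;

    On its own this arithmetic does not predict 30 percent chaining from inserts alone, so it is worth confirming how the chained rows were counted and whether some rows are much longer than the 1481-byte average.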

  • CHAINED ROWS

    HI
    I have some chained rows in some of my tables. I have exported, truncated, and imported back the data in the table, then collected the stats.
    But I have the same chained count; nothing has changed. This is a small table (around 52k), with no LONG or RAW datatypes in the table. I am running Oracle 10gR2.
    Any ideas?
    regards,
    Fabrice

    Hemant K Chitale wrote:
    For such a generic question, is there a difference between 10gR2 and 10.2.0.4 ?
    BTW, is 10.2.0.4 a "Version"?
    I would say yes, it is a version (or more accurately, 10.2.0.4.0); at least to Oracle it is considered a "version."
    SQL > SELECT * FROM PRODUCT_COMPONENT_VERSION where PRODUCT like 'Oracle%';
    PRODUCT                                            VERSION         STATUS
    Oracle Database 10g Enterprise Edition             10.2.0.4.0      Prod
    I was trying to be consistent, as well as other forum users, with what Oracle considers a version. The SQL statement above is one example. Another is if you look at the [CPU Jan 2009 information|http://www.oracle.com/technology/deploy/security/critical-patch-updates/cpujan2009.html] Oracle has this listed under Category I:
    Oracle Database 10g Release 2, versions 10.2.0.2, 10.2.0.3, 10.2.0.4
    Either way I don't want to hijack this thread and get into a semantics discussion. I had no intention in my original post of being condescending or anything of that nature when requesting the version information. Sometimes, as you probably know, there are bugs from version to version that may make this a pertinent question to ask.
    Thanks! :)

  • Getting chained rows

    Hello
    Is there a difference between getting the chained tables from the two queries below?
    1-) analyze all tables and then
    select table_name, chain_cnt from dba_tables;
    2-) analyze table ... list chained rows;
    select * from chained_rows;

    The CHAINED_ROWS table must be manually pre-created using UTLCHAIN.sql --- still present in 10g.
    SQL> desc chained_rows;
    ERROR:
    ORA-04043: object chained_rows does not exist
    SQL> @%ORACLE_HOME%\rdbms\admin\utlchain
    Table created.
    SQL> desc chained_Rows
    Name                                      Null?    Type
    OWNER_NAME                                         VARCHAR2(30)
    TABLE_NAME                                         VARCHAR2(30)
    CLUSTER_NAME                                       VARCHAR2(30)
    PARTITION_NAME                                     VARCHAR2(30)
    SUBPARTITION_NAME                                  VARCHAR2(30)
    HEAD_ROWID                                         ROWID
    ANALYZE_TIMESTAMP                                  DATE
    SQL>
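
    To make the difference concrete, a hedged sketch (EMP is a hypothetical table): dba_tables.CHAIN_CNT gives only a per-table count, and only after ANALYZE ... COMPUTE STATISTICS, whereas LIST CHAINED ROWS records the address of each affected row:

    -- approach 1: per-table counts (CHAIN_CNT is populated by ANALYZE,
    -- not by DBMS_STATS)
    analyze table emp compute statistics;
    select table_name, chain_cnt from user_tables where chain_cnt > 0;
    -- approach 2: the individual row addresses
    analyze table emp list chained rows into chained_rows;
    select head_rowid from chained_rows where table_name = 'EMP';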

  • About row-chaining, row- migration in a block

    What happens during row chaining, and during the insert of a record into a block? And for row migration, where exactly in the block does the update of a row take place?

    Hi,
    rather than asking doc questions everywhere, you had better read some of the Oracle docs.
