What to do if row chaining is found?
Hello All,
If I find row chaining in my table, what should I do?
Also, my database has one table containing 2,00,00,000 (20 million) records. Is it advisable to partition this table for faster searching?
And how do I check the performance of an Oracle 10g database? Since it was installed I have not checked anything in the database.
How can I check which patches have been applied to the database?
Can anybody give me basic guidance so that I can check whether my database is working fine? I want to check its response time and overall performance. Currently I am getting very slow response from my database.
If I find row chaining in my table, what should I do?

In most cases chaining is unavoidable, especially when it involves tables with large columns such as LONGs, LOBs, etc. When you have a lot of chained rows in different tables and the average row length of those tables is not that large, you might consider rebuilding the database with a larger block size.
For example: you have a database with a 2K block size, and several tables have multiple large VARCHAR2 columns with an average row length of more than 2K. This means you will have a lot of chained rows because your block size is too small. Rebuilding the database with a larger block size can give you a significant performance benefit.
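Since Oracle 9i you do not necessarily have to rebuild the whole database: a tablespace can use a nonstandard block size, provided a matching buffer cache is configured first. A rough sketch (the cache size, tablespace name, and table name below are illustrative assumptions, not values from this thread):

```sql
-- A nonstandard block size needs its own buffer cache before the
-- tablespace can be created (size here is an assumption).
ALTER SYSTEM SET db_16k_cache_size = 64M;

-- Create a 16K-block tablespace and move only the wide table into it,
-- instead of rebuilding the entire database.
CREATE TABLESPACE big_block_ts
  DATAFILE SIZE 500M AUTOEXTEND ON
  BLOCKSIZE 16K;

ALTER TABLE my_wide_table MOVE TABLESPACE big_block_ts;
-- Remember: MOVE invalidates the table's indexes, so rebuild them afterwards.
```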
Migration, on the other hand, is caused by PCTFREE being set too low: there is not enough room in the block for updates. To avoid migration, all tables that are updated should have their PCTFREE set so that there is enough space within the block for updates. You need to increase PCTFREE to avoid migrated rows: if you leave more free space available in the block for updates, the row will have more room to grow in place.
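As a hedged sketch of the PCTFREE fix (the table name, index name, and the value 30 are illustrative assumptions; pick a value based on how much your rows actually grow on update):

```sql
-- Leave 30% of each block free for future row growth.
ALTER TABLE my_table PCTFREE 30;

-- PCTFREE only affects blocks formatted from now on, so rebuild the
-- segment to apply it to existing rows; MOVE invalidates indexes.
ALTER TABLE my_table MOVE;
ALTER INDEX my_table_pk REBUILD;

-- Re-gather statistics after the reorganization.
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'MY_TABLE');
```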
SQL Script to eliminate row migration/chaining :
Get the name of the table with migrated rows:
ACCEPT table_name PROMPT 'Enter the name of the table with migrated rows: '
-- Clean up from last execution
set echo off
DROP TABLE migrated_rows;
DROP TABLE chained_rows;
-- Create the CHAINED_ROWS table
@.../rdbms/admin/utlchain.sql
set echo on
spool fix_mig
-- List the chained and migrated rows
ANALYZE TABLE &table_name LIST CHAINED ROWS;
-- Copy the chained/migrated rows to another table
create table migrated_rows as
SELECT orig.*
FROM &table_name orig, chained_rows cr
WHERE orig.rowid = cr.head_rowid
AND cr.table_name = upper('&table_name');
-- Delete the chained/migrated rows from the original table
DELETE FROM &table_name WHERE rowid IN (SELECT head_rowid FROM chained_rows);
-- Copy the chained/migrated rows back into the original table
INSERT INTO &table_name SELECT * FROM migrated_rows;
spool off
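To see how bad the problem is before running the script above, CHAIN_CNT in the dictionary can be checked per table. Note that CHAIN_CNT is only populated by ANALYZE ... COMPUTE STATISTICS, not by DBMS_STATS, so analyze the tables of interest first (a sketch, not specific to this poster's schema):

```sql
-- Tables with chained/migrated rows, worst first.
SELECT table_name, num_rows, chain_cnt,
       ROUND(100 * chain_cnt / GREATEST(num_rows, 1), 2) AS pct_chained
FROM   user_tables
WHERE  chain_cnt > 0
ORDER  BY chain_cnt DESC;
```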
Also, my database has one table containing 2,00,00,000 (20 million) records. Is it advisable to partition this table for faster searching?

See the partitioning concepts guide: download-uk.oracle.com/docs/cd/B19306_01/server.102/b14220/partconc.htm
And how do I check the performance of an Oracle 10g database? Since it was installed I have not checked anything in the database.
Can anybody give me basic guidance so that I can check whether my database works fine? I want to check its response time and overall performance; currently I am getting very slow response from my database.

See the Performance Tuning Guide: download-uk.oracle.com/docs/cd/B19306_01/server.102/b14211/toc.htm
Jafar
Similar Messages
-
Row chaining in table with more than 255 columns
Hi,
I have a table with 1000 columns.
I saw the following citation: "Any table with more than 255 columns will have chained
rows (we break really wide tables up)."
If I insert a row populated with only the first 3 columns (the others are null), does row chaining occur?
I tried to insert a row as described above and no row chaining occurred.
As I understand it, row chaining occurs in a table with 1000 columns only when the populated data exceeds
the block size OR when more than 255 columns are populated. Am I right?
Thanks
dyahav

user10952094 wrote:
Hi,
I have a table with 1000 columns.
I saw the following citation: "Any table with more than 255 columns will have chained
rows (we break really wide tables up)."
If I insert a row populated with only the first 3 columns (the others are null), does row chaining occur?
I tried to insert a row as described above and no row chaining occurred.
As I understand it, row chaining occurs in a table with 1000 columns only when the populated data exceeds
the block size OR when more than 255 columns are populated. Am I right?
Thanks
dyahav

Yesterday, I stated this on the forum: "Tables with more than 255 columns will always have chained rows." My statement needs clarification. It was based on the following:
http://download.oracle.com/docs/cd/B28359_01/server.111/b28318/schema.htm#i4383
"Oracle Database can only store 255 columns in a row piece. Thus, if you insert a row into a table that has 1000 columns, then the database creates 4 row pieces, typically chained over multiple blocks."
And this paraphrase from "Practical Oracle 8i":
V$SYSSTAT will show increasing values for CONTINUED ROW FETCH as table rows are read for tables containing more than 255 columns.
Related information may also be found here:
http://download.oracle.com/docs/cd/B10501_01/server.920/a96524/c11schem.htm
"When a table has more than 255 columns, rows that have data after the 255th column are likely to be chained within the same block. This is called intra-block chaining. A chained row's pieces are chained together using the rowids of the pieces. With intra-block chaining, users receive all the data in the same block. If the row fits in the block, users do not see an effect in I/O performance, because no extra I/O operation is required to retrieve the rest of the row."
http://download.oracle.com/docs/html/B14340_01/data.htm
"For a table with several columns, the key question to consider is the (average) row length, not the number of columns. Having more than 255 columns in a table built with a smaller block size typically results in intrablock chaining.
Oracle stores multiple row pieces in the same block, but the overhead to maintain the column information is minimal as long as all row pieces fit in a single data block. If the rows don't fit in a single data block, you may consider using a larger database block size (or use multiple block sizes in the same database). "
Why not a test case?
Create a test table named T4 with 1000 columns.
With the table created, insert 1,000 rows into the table, populating the first 257 columns each with a random 3 byte string which should result in an average row length of about 771 bytes.
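Writing out COL1 through COL1000 by hand is impractical; a sketch of how the T4 test table might be generated dynamically (the column type VARCHAR2(10) is an assumption consistent with the 3-byte random strings used below):

```sql
-- Hypothetical helper: build the 1000-column test table T4 dynamically.
DECLARE
  l_sql VARCHAR2(32767) := 'CREATE TABLE t4 (';
BEGIN
  FOR i IN 1 .. 1000 LOOP
    l_sql := l_sql || 'col' || i || ' VARCHAR2(10),';
  END LOOP;
  -- Replace the trailing comma with the closing parenthesis.
  l_sql := RTRIM(l_sql, ',') || ')';
  EXECUTE IMMEDIATE l_sql;
END;
/
```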
SPOOL C:\TESTME.TXT
SELECT
SN.NAME,
MS.VALUE
FROM
V$MYSTAT MS,
V$STATNAME SN
WHERE
SN.NAME = 'table fetch continued row'
AND SN.STATISTIC#=MS.STATISTIC#;
INSERT INTO T4 (
COL1,
COL2,
COL3,
/* ... COL4 through COL254 elided in this excerpt ... */
COL255,
COL256,
COL257)
SELECT
DBMS_RANDOM.STRING('A',3),
DBMS_RANDOM.STRING('A',3),
DBMS_RANDOM.STRING('A',3),
/* ... one DBMS_RANDOM.STRING('A',3) per elided column ... */
DBMS_RANDOM.STRING('A',3),
DBMS_RANDOM.STRING('A',3),
DBMS_RANDOM.STRING('A',3)
FROM
DUAL
CONNECT BY
LEVEL<=1000;
SELECT
SN.NAME,
MS.VALUE
FROM
V$MYSTAT MS,
V$STATNAME SN
WHERE
SN.NAME = 'table fetch continued row'
AND SN.STATISTIC#=MS.STATISTIC#;
SET AUTOTRACE TRACEONLY STATISTICS
SELECT
*
FROM
T4;
SET AUTOTRACE OFF
SELECT
SN.NAME,
SN.STATISTIC#,
MS.VALUE
FROM
V$MYSTAT MS,
V$STATNAME SN
WHERE
SN.NAME = 'table fetch continued row'
AND SN.STATISTIC#=MS.STATISTIC#;
SPOOL OFF

What are the results of the above?
Before the insert:
NAME VALUE
table fetch continue 166
After the insert:
NAME VALUE
table fetch continue 166
After the select:
NAME STATISTIC# VALUE
table fetch continue 252 332

Another test, this time with an average row length of about 12 bytes:
DELETE FROM T4;
COMMIT;
SPOOL C:\TESTME2.TXT
SELECT
SN.NAME,
MS.VALUE
FROM
V$MYSTAT MS,
V$STATNAME SN
WHERE
SN.NAME = 'table fetch continued row'
AND SN.STATISTIC#=MS.STATISTIC#;
INSERT INTO T4 (
COL1,
COL256,
COL257,
COL999)
SELECT
DBMS_RANDOM.STRING('A',3),
DBMS_RANDOM.STRING('A',3),
DBMS_RANDOM.STRING('A',3),
DBMS_RANDOM.STRING('A',3)
FROM
DUAL
CONNECT BY
LEVEL<=100000;
SELECT
SN.NAME,
MS.VALUE
FROM
V$MYSTAT MS,
V$STATNAME SN
WHERE
SN.NAME = 'table fetch continued row'
AND SN.STATISTIC#=MS.STATISTIC#;
SET AUTOTRACE TRACEONLY STATISTICS
SELECT
*
FROM
T4;
SET AUTOTRACE OFF
SELECT
SN.NAME,
SN.STATISTIC#,
MS.VALUE
FROM
V$MYSTAT MS,
V$STATNAME SN
WHERE
SN.NAME = 'table fetch continued row'
AND SN.STATISTIC#=MS.STATISTIC#;
SPOOL OFF

With 100,000 rows each containing about 12 bytes, what should the 'table fetch continued row' statistic show?
Before the insert:
NAME VALUE
table fetch continue 332
After the insert:
NAME VALUE
table fetch continue 332
After the select:
NAME STATISTIC# VALUE
table fetch continue 252 33695

The final test only inserts data into the first 4 columns:
DELETE FROM T4;
COMMIT;
SPOOL C:\TESTME3.TXT
SELECT
SN.NAME,
MS.VALUE
FROM
V$MYSTAT MS,
V$STATNAME SN
WHERE
SN.NAME = 'table fetch continued row'
AND SN.STATISTIC#=MS.STATISTIC#;
INSERT INTO T4 (
COL1,
COL2,
COL3,
COL4)
SELECT
DBMS_RANDOM.STRING('A',3),
DBMS_RANDOM.STRING('A',3),
DBMS_RANDOM.STRING('A',3),
DBMS_RANDOM.STRING('A',3)
FROM
DUAL
CONNECT BY
LEVEL<=100000;
SELECT
SN.NAME,
MS.VALUE
FROM
V$MYSTAT MS,
V$STATNAME SN
WHERE
SN.NAME = 'table fetch continued row'
AND SN.STATISTIC#=MS.STATISTIC#;
SET AUTOTRACE TRACEONLY STATISTICS
SELECT
*
FROM
T4;
SET AUTOTRACE OFF
SELECT
SN.NAME,
SN.STATISTIC#,
MS.VALUE
FROM
V$MYSTAT MS,
V$STATNAME SN
WHERE
SN.NAME = 'table fetch continued row'
AND SN.STATISTIC#=MS.STATISTIC#;
SPOOL OFF

What should the 'table fetch continued row' statistic show?
Before the insert:
NAME VALUE
table fetch continue 33695
After the insert:
NAME VALUE
table fetch continue 33695
After the select:
NAME STATISTIC# VALUE
table fetch continue 252 33695

My statement "Tables with more than 255 columns will always have chained rows." needs to be clarified:
"Tables with more than 255 columns will always have chained rows +(row pieces)+ if a column beyond column 255 is used, but the 'table fetch continued row' statistic +may+ only increase in value if the remaining row pieces are found in a different block."
Charles Hooper
IT Manager/Oracle DBA
K&M Machine-Fabricating, Inc.
Edited by: Charles Hooper on Aug 5, 2009 9:52 AM
Paraphrase misspelled the view name "V$SYSSTAT", corrected a couple minor typos, and changed "will" to "may" in the closing paragraph as this appears to be the behavior based on the test case. -
Row chaining and row migration in Oracle 10g R2/11g R2
Hi,
Due to business rule changes, one numeric column in a large table (20 million rows) will be expanded from NUMBER(8) to NUMBER(10). Also, the value of this column in each row will be updated from 6 digits to 10 digits. All indexes that use this column will be dropped and recreated after the update. I would like to know whether there are any row chaining or row migration issues in Oracle 10g R2/11g R2.
Thanks for your help

neemin wrote:
Hi,
Due to business rule changes, one numeric column in a large table (20 million rows) will be expanded from NUMBER(8) to NUMBER(10). Also, the value of this column in each row will be updated from 6 digits to 10 digits. All indexes that use this column will be dropped and recreated after the update. I would like to know whether there are any row chaining or row migration issues in Oracle 10g R2/11g R2.
Thanks for your help

It depends.
What do you observe after TESTING against the development DB?
Row chaining issue in Oracle 10g
Hello All,
I was seeing a row chaining issue in one of our production DBs. Row chaining was present in all tables having LONG RAW columns.
As of now I am not supposed to change these to BLOB/CLOB, so I did exp/imp to solve the issue. However, we repeat this exercise once every quarter, and now it is time we put a permanent fix in place.
One of such tables has below storage parameters:
PCTUSED 0
PCTFREE 10
INITRANS 1
MAXTRANS 255
STORAGE (
INITIAL 40K
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
BUFFER_POOL DEFAULT
)

Can I be advised what the tuning options would be for the above? Note: all of these tables are in the GBs.
For any inputs, please let me know.
Thanks,
Suddhasatwa

SELECT table_name,
Round(( blocks * 8 ) / 1024 / 1024, 2)
"Physical Size (GB)",
Round(( num_rows * avg_row_len / 1024 / 1024 / 1024 ), 2)
"Actual Size (GB)",
( Round(( blocks * 8 ) / 1024 / 1024, 2) - Round((
num_rows * avg_row_len / 1024 / 1024 / 1024 ), 2) )
"Wasted Space (GB)"
FROM dba_tables
WHERE owner = 'SYSADM'
AND ( Round(( blocks * 8 ) / 1024, 2) - Round(
( num_rows * avg_row_len / 1024 / 1024 )
, 2) ) > 20
AND table_name IN (SELECT table_name
FROM dba_tab_columns
WHERE data_type IN ( 'RAW', 'LONG RAW', 'LONG' ))
AND table_name IN (SELECT table_name
FROM dba_tab_columns
WHERE data_type LIKE '%LONG%')
ORDER BY ( Round(( blocks * 8 ) / 1024, 2) - Round(
( num_rows * avg_row_len / 1024 / 1024 )
, 2) ) DESC;

Is the air inside the gas tank on your car considered "Wasted Space"?
Would your car run any better if the size of the gas tank were reduced as gasoline was consumed?
Realize and understand that Oracle does reuse FREE SPACE without any manual intervention.
It appears you suffer from Compulsive Tuning Disorder!
How to avoid row chaining in _LT tables
Hi,
I am seeing a lot of row chaining in the _LT tables. What is the best method to avoid row chaining with Workspace Manager tables? Also, can I move the _LT tables using the "alter table move" command? Will this affect the versioned tables' metadata information?
Thanks.

Hi,
Have you been able to determine what operation is causing the rows to be chained? Typically this would be from dml inserts, but these could be due to any number of Workspace Manager operations. Regardless, there really isn't anything special that needs to be done in regards to row chaining for versioned tables. Using a higher PCTFREE on the table will frequently be beneficial. I might be able to add additional suggestions if you are able to offer some details as to when the rows are being chained, as well as details on the table itself.
We do support 'alter table move' using our DDL procedures(dbms_wm.beginDDL, dbms_wm.commitDDL). You can view our user guide for additional details about those procedures, if you do not already know how it works.
Regards,
Ben -
Row chaining and row migration ???
hi
Can someone tell me what the options are to overcome row chaining and row migration in 10g and 11g databases?
thanx in advance.
s

WIP wrote:
hi
Can someone tell me what the options are to overcome row chaining and row migration in 10g and 11g databases?
thanx in advance.
s

Hi. A chained row is a row that is too large to fit into a single database block. Row migration means that updating a row causes it to no longer fit in its block, so the entire row moves to a new address (leaving a forwarding pointer behind). For more information see the links below:
http://blog.tanelpoder.com/2009/11/04/detect-chained-and-migrated-rows-in-oracle/
http://www.akadia.com/services/ora_chained_rows.html -
hi
As far as I know, row chaining is the process in which a row that is unable to fit in one data block is split into chunks across more than one block. I just want to know whether this allocation is always done in contiguous blocks or whether the pieces are spread across various locations in the datafile, because if so, row chaining also resembles row migration (when the pieces end up in non-contiguous blocks).
Now another question arises: if the pieces are stored in contiguous locations, what happens if the row is unable to fit even after using the contiguous free blocks? Will Oracle search for a long chain of contiguous free blocks and move the complete row into it, or will there be partial row migration?
I hope you understand my question. Thanks a lot for the clarification.
thanks
aps

Hi

"i only quoted a part of the text by D. Burleson."

Of course, if Don is the source, I will never see the test case I would like to see ;-)
In the full context it is described like yours:
======================
You also need to understand how new free blocks are
added to the freelist chain. At table extension time,
the high-water mark for the table is increased, and
new blocks are moved onto the master freelist, where
they are, in turn, moved to process freelists. For
tables that do not contain multiple freelists, the
transfer is done five blocks at a time. For tables
with multiple freelists, the transfer is done in
sizes (5*(number of freelists + 1)). For example, in
a table with 20 freelists, 105 blocks will be moved
onto the master freelist each time that a table
increases its high-water mark.
As I wrote, IMHO, the information is wrong. So, let's have a look at an example (executed on 10.2.0.3 Linux x86_64):
1) create a new tablespace and a table in it
SQL> CREATE TABLESPACE t
2 DATAFILE SIZE 10M AUTOEXTEND ON
3 EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1m
4 SEGMENT SPACE MANAGEMENT MANUAL
5 BLOCKSIZE 8K;
SQL> CREATE TABLE t (v varchar2(100)) TABLESPACE t STORAGE (FREELISTS 20);

2) Where is the table stored?
SQL> SELECT file_id, block_id, blocks
2 FROM dba_extents
3 WHERE owner = user
4 AND segment_name = 'T';
FILE_ID BLOCK_ID BLOCKS
8 9 128

3) Fill 5 blocks (this is necessary because for each of the first 5 increases of the HWM a single block is allocated)
SQL> INSERT INTO t SELECT rpad('A',100,'A') FROM all_objects WHERE rownum <= 68;
SQL> INSERT INTO t SELECT rpad('A',100,'A') FROM all_objects WHERE rownum <= 68;
SQL> INSERT INTO t SELECT rpad('A',100,'A') FROM all_objects WHERE rownum <= 68;
SQL> INSERT INTO t SELECT rpad('A',100,'A') FROM all_objects WHERE rownum <= 68;
SQL> INSERT INTO t SELECT rpad('A',100,'A') FROM all_objects WHERE rownum <= 68;

4) Dump the header block to see the current HWM and the status of the freelists
SQL> ALTER SYSTEM DUMP DATAFILE 8 BLOCK 9;

5) The trace file contains the following information (notice the HWM and that all freelists except one process freelist are "empty")
Extent Control Header
Extent Header:: spare1: 0 spare2: 0 #extents: 1 #blocks: 127
last map 0x00000000 #maps: 0 offset: 4128
Highwater:: 0x0200000f ext#: 0 blk#: 5 ext size: 127
#blocks in seg. hdr's freelists: 1
#blocks below: 5
mapblk 0x00000000 offset: 0
Unlocked
Map Header:: next 0x00000000 #extents: 1 obj#: 12493 flag: 0x40000000
Extent Map
0x0200000a length: 127
nfl = 20, nfb = 1 typ = 1 nxf = 0 ccnt = 0
SEG LST:: flg: UNUSED lhd: 0x00000000 ltl: 0x00000000
SEG LST:: flg: UNUSED lhd: 0x00000000 ltl: 0x00000000
SEG LST:: flg: UNUSED lhd: 0x00000000 ltl: 0x00000000
SEG LST:: flg: UNUSED lhd: 0x00000000 ltl: 0x00000000
SEG LST:: flg: UNUSED lhd: 0x00000000 ltl: 0x00000000
SEG LST:: flg: UNUSED lhd: 0x00000000 ltl: 0x00000000
SEG LST:: flg: UNUSED lhd: 0x00000000 ltl: 0x00000000
SEG LST:: flg: UNUSED lhd: 0x00000000 ltl: 0x00000000
SEG LST:: flg: UNUSED lhd: 0x00000000 ltl: 0x00000000
SEG LST:: flg: UNUSED lhd: 0x00000000 ltl: 0x00000000
SEG LST:: flg: UNUSED lhd: 0x00000000 ltl: 0x00000000
SEG LST:: flg: UNUSED lhd: 0x00000000 ltl: 0x00000000
SEG LST:: flg: UNUSED lhd: 0x00000000 ltl: 0x00000000
SEG LST:: flg: UNUSED lhd: 0x00000000 ltl: 0x00000000
SEG LST:: flg: UNUSED lhd: 0x00000000 ltl: 0x00000000
SEG LST:: flg: UNUSED lhd: 0x00000000 ltl: 0x00000000
SEG LST:: flg: UNUSED lhd: 0x00000000 ltl: 0x00000000
SEG LST:: flg: USED lhd: 0x0200000e ltl: 0x0200000e
SEG LST:: flg: UNUSED lhd: 0x00000000 ltl: 0x00000000
SEG LST:: flg: UNUSED lhd: 0x00000000 ltl: 0x00000000
SEG LST:: flg: UNUSED lhd: 0x00000000 ltl: 0x00000000

6) Fill one more block
SQL> INSERT INTO t SELECT rpad('A',100,'A') FROM all_objects WHERE rownum <= 68;

7) Re-dump the header block
SQL> ALTER SYSTEM DUMP DATAFILE 8 BLOCK 9;

8) Now the trace file contains the following information (notice that the HWM has increased by 5 blocks, not 100; in addition, as before, only one process freelist is not "empty")
Extent Control Header
Extent Header:: spare1: 0 spare2: 0 #extents: 1 #blocks: 127
last map 0x00000000 #maps: 0 offset: 4128
Highwater:: 0x02000014 ext#: 0 blk#: 10 ext size: 127
#blocks in seg. hdr's freelists: 5
#blocks below: 10
mapblk 0x00000000 offset: 0
Unlocked
Map Header:: next 0x00000000 #extents: 1 obj#: 12493 flag: 0x40000000
Extent Map
0x0200000a length: 127
nfl = 20, nfb = 1 typ = 1 nxf = 0 ccnt = 0
SEG LST:: flg: UNUSED lhd: 0x00000000 ltl: 0x00000000
SEG LST:: flg: UNUSED lhd: 0x00000000 ltl: 0x00000000
SEG LST:: flg: UNUSED lhd: 0x00000000 ltl: 0x00000000
SEG LST:: flg: UNUSED lhd: 0x00000000 ltl: 0x00000000
SEG LST:: flg: UNUSED lhd: 0x00000000 ltl: 0x00000000
SEG LST:: flg: UNUSED lhd: 0x00000000 ltl: 0x00000000
SEG LST:: flg: UNUSED lhd: 0x00000000 ltl: 0x00000000
SEG LST:: flg: UNUSED lhd: 0x00000000 ltl: 0x00000000
SEG LST:: flg: UNUSED lhd: 0x00000000 ltl: 0x00000000
SEG LST:: flg: UNUSED lhd: 0x00000000 ltl: 0x00000000
SEG LST:: flg: UNUSED lhd: 0x00000000 ltl: 0x00000000
SEG LST:: flg: UNUSED lhd: 0x00000000 ltl: 0x00000000
SEG LST:: flg: UNUSED lhd: 0x00000000 ltl: 0x00000000
SEG LST:: flg: UNUSED lhd: 0x00000000 ltl: 0x00000000
SEG LST:: flg: UNUSED lhd: 0x00000000 ltl: 0x00000000
SEG LST:: flg: UNUSED lhd: 0x00000000 ltl: 0x00000000
SEG LST:: flg: UNUSED lhd: 0x00000000 ltl: 0x00000000
SEG LST:: flg: USED lhd: 0x0200000f ltl: 0x02000013
SEG LST:: flg: UNUSED lhd: 0x00000000 ltl: 0x00000000
SEG LST:: flg: UNUSED lhd: 0x00000000 ltl: 0x00000000
SEG LST:: flg: UNUSED lhd: 0x00000000 ltl: 0x00000000

Best regards,
Chris -
My production is running on 8i.
I have observed that some of the tables reach chain_cnt > 2,00,000.
I brought the chain count to zero by creating a temporary table,
moving all the migrated rows into it, then deleting the migrated rows from the original table and inserting all the rows from the temporary table back into the original table.
Then I analyzed the table with COMPUTE STATISTICS.
Can anybody guide me on how to prevent row migration/chaining in the future?
What parameters do I have to consider while creating a table?
Thanks in advance,
chotu

Row chaining and migration are two different things with different causes. Based on your description you were having row chaining.
http://download.oracle.com/docs/cd/B19306_01/server.102/b14220/logical.htm#i15914
Since you are on 8i you might be using manually managed tablespaces. For those, you mainly reduce row chaining and migration by tweaking PCTFREE and PCTUSED.
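As a rough sketch of setting those parameters at creation time (the table name and the values 25/40 are illustrative assumptions; note that PCTUSED is honored only in manually managed, not ASSM, tablespaces):

```sql
-- Reserve 25% of each block for row growth (reduces migration) and
-- let a block rejoin the freelist only when it drops below 40% used.
CREATE TABLE orders_hist (
  id   NUMBER,
  note VARCHAR2(2000)
) PCTFREE 25 PCTUSED 40;
```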
About row-chaining, row- migration in a block
What happens during row chaining when inserting a record into a block? And for row migration, exactly where in a block does the update of a row take place?
Hi,
Why ask everywhere? You are asking documentation questions; you would do better to read some Oracle documentation.
The "row was not found at the Subscriber" error keeps popping up and stops synchronization even after inserting the missing record at the subscriber (transactional replication)
First error thrown: grab the exact sequence number, find the row, and insert it at the subscriber...
It starts synchronizing and runs fine for a while, then stops again with the same error and a different sequence number; repeat step 1 again.......
How can we stop this and make it run without this error?
Please advise!!!

Hi,
This means that your database is out of sync. You can use the continue on data consistency error profile to skip errors. However, Microsoft recommends that you use -SkipErrors parameter cautiously and only when you have a good understanding of the following:
What the error indicates.
Why the error occurs.
Why it is better to skip the error instead of solving it.
If you do not know the answers to these items, inappropriate use of the
-SkipErrors parameter may cause data inconsistency between the Publisher and Subscriber. This article describes some problems that can occur when you incorrectly use the
-SkipErrors parameter.
Use the "-SkipErrors" parameter in Distribution Agent cautiously
http://support.microsoft.com/kb/327817/en-us
Here are two similar threads you may refer to:
http://social.technet.microsoft.com/Forums/en-US/af531f69-6caf-4dd7-af74-fd6ebe7418da/sqlserver-replication-error-the-row-was-not-found-at-the-subscriber-when-applying-the-replicated
http://social.technet.microsoft.com/Forums/en-US/f48c2592-bad7-44ea-bc6d-7eb99b2348a1/the-row-was-not-found-at-the-subscriber-when-applying-the-replicated-command
Thanks.
Tracy Cai
TechNet Community Support -
Total # of rows: about 30 million
1. ANALYZE TABLE WLD.World_Test LIST CHAINED ROWS INTO chained_rows;
2. select count(1) from chained_rows;
-- 1 million rows
3. Altered the table's default attributes to set PCTFREE to 30; it was set to 20 about 3 weeks back.
I had removed row chaining from this table ~3 weeks back.
4. Index PCTFREE is set to 10.
5. The tablespace is MSSM.
Please advise how to make sure that row chaining does not happen in .

Hi,
"Yes, that is a good thought; however, it requires downtime."

Use dbms_redefinition for online redefinition of the table in a new tablespace with ASSM.

"The bigger question is what is causing it; we do have other tablespaces which are MSSM as well."

What is your block size?
If the block size is 2K, 30% PCTFREE leaves 1433 bytes of space in your data block, and if you have two rows with a size of 800 bytes, the second row would certainly be chained to another block. Can you tell the average row length from the user_tables view for this table? Also see the following thread from Tom Kyte:
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:358341515662
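The row-length-versus-block-size comparison described above can be checked from the dictionary. A sketch (the SYSADM owner is carried over from the query earlier in this thread; statistics must be current for avg_row_len to be meaningful):

```sql
-- Flag tables whose average row no longer fits in the space a block
-- leaves after PCTFREE is reserved.
SELECT t.table_name,
       t.avg_row_len,
       ts.block_size,
       ts.block_size * (100 - t.pct_free) / 100 AS approx_usable_bytes
FROM   dba_tables t
JOIN   dba_tablespaces ts ON ts.tablespace_name = t.tablespace_name
WHERE  t.owner = 'SYSADM'
AND    t.avg_row_len > ts.block_size * (100 - t.pct_free) / 100;
```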
"Are there any disadvantages to ASSM?"

See the following about its limitations:
http://www.dba-oracle.com/art_builder_assm.htm
Salman -
Hi,
How do I tell the difference between row chaining and row migration?
In which table can I see the difference?

http://asktom.oracle.com/pls/ask/f?p=4950:8:::::F4950_P8_DISPLAYID:4423420997870
-
TS1042 what happened to "front row" when I updated to OS X Lion?
what happened to "front row" when I updated to OS X Lion?
Lion doesn't have it anymore. When Mountain Lion comes out on July 25, it will have Front Row again!
-
What happen to front row when i went to mountain lion?
What happen to front row when I went to mountain lion?
Front Row is no longer supported. However, there are several applications that can do the same job for you (quite nicely, actually) and that work with ML. Try searching the Mac App Store. Also: there are uncountable threads about this over at the Front Row forum.
-
How can we reduce Row Chaining?
In a 10gR2 db, how can i reduce row chaining in tables?
Hi,
First, the prevention techniques for chained rows vs. migrated rows are a bit different. Note that both chained rows and migrated (relocated) rows manifest as "table fetch continued row" in v$sysstat, stats$sysstat for STATSPACK, and dba_hist_sysstat for AWR.
Preventing chained rows - Chained rows can occur when a row is too large for a data block. In these cases, moving large objects into a tablespace with a larger blocksize can often relieve chained rows.
Preventing migrated rows - Migrated rows occur when a row expands (usually with a varchar2 data type) and there is not enough reserve defined by PCTFREE for the row to expand. In this case, you adjust PCTFREE to ensure that future rows will have room to expand, and reorganize the table to remove the fragments.
On some tables which are stored tiny and grow huge, you may need to set PCTFREE to a "large" value, so that only one row is stored per block. For example, if I have a row with a varchar2 that is stored at 2k and grows to 30k, I would need to use a 32k blocksize and set PCTFREE=95 so that only one row is stored on each data block. That way, at update time, there will be room for the row to expand without fragmenting.
Operationally, Oracle consultant Steve Adams offers this tip for finding the difference between chained and migrated rows:
http://www.freelists.org/archives/oracle-l/10-2008/msg00750.html
+"You can tell the difference between row migration and chaining by listing the chained rows with ANALYZE table LIST CHAINED ROWS and then fetching the first column from each "chained row" in a single query.+
+The count of continued row fetches will be incremented for every migrated row, but not for most chained rows (unless the first cut point happens to fall with the first column, which should be rare)."+
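A hedged sketch of Adams' technique (the table name, its first column, and the index name are placeholders, not values from this thread; CHAINED_ROWS is the table created by utlchain.sql):

```sql
-- 1) Populate CHAINED_ROWS for the table of interest.
ANALYZE TABLE my_table LIST CHAINED ROWS INTO chained_rows;

-- 2) Note the session's current continued-row fetch count.
SELECT ms.value
FROM   v$mystat ms, v$statname sn
WHERE  sn.name = 'table fetch continued row'
AND    sn.statistic# = ms.statistic#;

-- 3) Fetch only the FIRST column of each listed row by rowid.
--    Migrated rows increment the statistic; most chained rows do not.
SELECT t.first_col
FROM   my_table t, chained_rows cr
WHERE  t.rowid = cr.head_rowid
AND    cr.table_name = 'MY_TABLE';

-- 4) Re-run the query from step 2: the increase approximates the
--    number of migrated (as opposed to merely chained) rows.
```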
Hope this helps . . .
Donald K. Burleson
Oracle Press author
Author of "Oracle Tuning: The Definitive Reference"
http://www.rampant-books.com/book_2005_1_awr_proactive_tuning.htm