Fragmented tables

Hi,
We are using the following query to check for fragmented tables. Is it correct?
Oracle 10.2.0.4 (64bit) and OS AIX 6.1 (64bit).
select owner,
       table_name,
       blocks,
       num_rows,
       avg_row_len,
       round(blocks*16/1024, 2)||'MB' "TOTAL_SIZE",
       round(num_rows*avg_row_len/1024/1024, 2)||'MB' "ACTUAL_SIZE",
       round((blocks*16/1024) - (num_rows*avg_row_len/1024/1024), 2)||'MB' "FRAGMENTED_SPACE"
from all_tables
where owner in ('USERNAME');
I am using alter table move with the parallel option (due to the huge size of the data) and then rebuilding the indexes, for de-fragmentation. Is it OK?

Hemant K Chitale,
Your multiplier of 16 is correct if the block size of the tablespace containing that table is 16KB.
DB block size is 16 kb.
What amount / proportion of "FRAGMENTED_SPACE" would you consider as qualifying a table for a rebuild? Why? What if you insert more rows into the table?
Following is the output of the query I posted in my first post. These are partitioned tables.
TABLE_NAME     BLOCKS    NUM_ROWS      AVG_ROW_LEN  TOTAL_SIZE   ACTUAL_SIZE   FRAGMENTED_SPACE
CRBT_DLY_SU    3426310   1134350200    104          53536.09MB   112507.27Mb   -58971.17MB
PST_VOICE_SU   5619763   373006533     376          87808.8MB    133753.26Mb   -45944.46MB
PRE_WEB_SU     2545843   627771700     125          39778.8MB    74836.22Mb    -35057.42MB
After the end of the month I move the partitions of these tables into a newly created tablespace (alter table ... move partition with the parallel option), then rebuild the indexes with the parallel option. Now I am confused as to how the tables can be fragmented with such a huge amount of data.
Are the tables in tablespaces with LMT and Segment Space Management AUTO ?  OR do you need to consider PCTUSED (which is ignored with Segment Space Management AUTO) ?   Are the tables set to the default PCTFREE of 10 ?
Yes tablespaces are LMT and Segment Space Management is Auto.
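As a side note on the query above: instead of hard-coding the 16 KB multiplier, the block size can be read per tablespace from the dictionary. A sketch (assumes access to the DBA views; for partitioned tables the tablespace is reported per partition, so this is only an approximation):

```sql
-- Variant of the size query that derives the multiplier from the
-- actual block size of each table's tablespace instead of assuming 16 KB.
select t.owner,
       t.table_name,
       round(t.blocks * ts.block_size / 1024 / 1024, 2)     as total_size_mb,
       round(t.num_rows * t.avg_row_len / 1024 / 1024, 2)   as actual_size_mb,
       round(t.blocks * ts.block_size / 1024 / 1024
             - t.num_rows * t.avg_row_len / 1024 / 1024, 2) as diff_mb
from   dba_tables t
       join dba_tablespaces ts
         on ts.tablespace_name = t.tablespace_name
where  t.owner = 'USERNAME';
```

A negative diff_mb, as in the output above, usually indicates stale statistics (num_rows * avg_row_len larger than the allocated space) rather than fragmentation.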

Similar Messages

  • How to search for the most fragmented tables in a database (10g)

    How to search for the most fragmented tables in a database (10g), and the query.

    I mean: most DML operations (mainly deletions) happened, by which the HWM is set for the table. I know that by rebuilding, the table segment can be compressed and we gain free space in the tablespace too.

    OK, but to what end do you gain that free space in the tablespace? Say you had a table of 1,000,000 rows, and you deleted 900,000 of those rows, emptying out 'x' of 'y' extents. If you would expect the table to again grow to 1,000,000 rows (not an unreasonable assumption) then you will just need to reclaim again (by grabbing new extents) the space you freed up with your reorg.
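    To illustrate the point about the HWM: on 10g with Segment Space Management AUTO, the space below the HWM left by mass deletes can be reclaimed in place with a segment shrink instead of a full rebuild. A sketch (schema and table names are invented):

    ```sql
    -- Refresh statistics so num_rows / avg_row_len reflect reality first.
    exec dbms_stats.gather_table_stats('SCOTT', 'BIG_LOG');

    -- SHRINK SPACE compacts the segment and lowers the HWM;
    -- it requires row movement to be enabled on the table.
    alter table scott.big_log enable row movement;
    alter table scott.big_log shrink space;
    ```

    Note the caveat in the reply above still applies: if the table will grow back to its old size, reclaiming the space only to re-allocate it later gains little.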

  • How to get a list of most fragmented tables in Oracle?

    Is there an SQL on how to get a list of most fragmented tables in Oracle DBMS?

    Thanks! I would just like to ask you, what do the negative values mean in wasted space?
    Is there an easy way to improve defragmentation state?
    TABLE NAME          SIZE    ACTUAL DATA    WASTED SPACE
    TREE                0       0              0
    GC_S                3744    4651.9         -907.9
    TRAIL               104     113.04         -9.04
    ASSOCIATION_RULES   272     353            -81
    ATTRIBUTES          1728    2528.12        -800.12
    AUDITACTION         128     208.48         -80.48
    DV                  18608   36266.47       -17658.47
    S134                728     903.08         -175.08
    A178                344     518.75         -174.75
    S129                728     896.48         -168.48
    AGS_NODES           2864    4510.33        -1646.33
    S149                472     633.79         -161.79
    S127                728     871.62         -143.62
    tu                  2232    3619.76        -1387.76
    PCd_DATA            3112    4371.75        -1259.75
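    Regarding the negative values asked about above: they typically mean num_rows * avg_row_len is larger than the allocated size, which points to stale optimizer statistics rather than real wasted space. A sketch of refreshing them before re-running the size query (the schema name is illustrative):

    ```sql
    -- Gather fresh statistics for the whole schema, then re-run
    -- the wasted-space query; negative values should disappear.
    begin
      dbms_stats.gather_schema_stats(ownname => 'APP_OWNER');
    end;
    /
    ```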

  • Fragmented tables or indexes

    Dear all,
    I am using the following command to find fragmented tables and indexes, which of course returns the names of the tables and indexes that are fragmented.
    select * from dba_segments where extents > 10 and owner like '%DB%'
    output is 43 rows
    Next I rebuilt certain indexes, and still when I run the above SQL statement I see that the index which was rebuilt is still present, which means its extents are still greater than 10.
    I also tried first dropping the index and creating it again. The index is still reported as fragmented.
    What can I do?
    Regards
    SL

    1) The fact that an object has more than 10 extents is completely unrelated to whether it is fragmented or not.
    2) Since you haven't specified the Oracle version or the type of tablespace, assuming you've got a recent version of Oracle and are using locally managed tablespaces, the number of extents allocated to an object is pretty much irrelevant, particularly if you've got automatic extent allocation.
    3) If you are on a recent version of Oracle using LMT's, it is essentially impossible for objects to be fragmented for most any reasonable definition of "fragmented".
    Justin

  • Create fragmented table data

    Hi guys,
    I would like to create a table and populate it with a lot of data to run some tests, but I would also like the table to be highly fragmented, to simulate this.
    is there any script that create this environment?
    thanks

    Hi,
    Please try this (not tested):
    1. Create the table without any index.
    2. Insert data.
    3. Create an index.
    4. Update some data.
    5. Drop the index.
    6. Create the index again.
    7. Insert more data.
    Then check: select * from dba_segments where extents > 10;
    Regards
    faheem latif
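    A minimal sketch of building such a test table (untested; all names are invented). The mass delete is what actually leaves free space under the HWM:

    ```sql
    -- 1) Create a test table with no index and load it.
    create table frag_test (id number, pad varchar2(200));

    insert into frag_test
    select level, rpad('x', 200, 'x')
    from   dual
    connect by level <= 500000;
    commit;

    -- 2) Delete most rows: the HWM stays where it was,
    --    leaving mostly-empty blocks below it.
    delete from frag_test where mod(id, 10) <> 0;
    commit;

    -- 3) A full scan still reads up to the old HWM
    --    despite the few rows that remain.
    select count(*) from frag_test;
    ```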

  • SQL for fragmented tables

    Hello,
    Can someone give me the SQL to determine which tables and/or tablespaces are, say, 15% fragmented? I'm on Oracle 9i.
    thanks...

    Are you using locally managed tablespaces? If so, you can stop worrying about tablespace fragmentation, particularly if you are using UNIFORM extents.
    Can you define "table fragmentation"? I'm hard-pressed to figure out how a table can be fragmented.
    Assuming you need to worry about fragmentation in the first place, either because you are still using dictionary managed tablespaces or because you have "table fragmentation" for some definition of the term, can you define what you mean by percentage fragmentation? If you have a dictionary managed tablespace, it is not obvious what it would mean to have a tablespace that was 15% fragmented.
    Justin
    Distributed Database Consulting, Inc.
    http://www.ddbcinc.com/askDDBC
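    Before chasing tablespace fragmentation on 9i, it is worth confirming whether the tablespaces are dictionary or locally managed, since the answer above hinges on that:

    ```sql
    -- EXTENT_MANAGEMENT = 'LOCAL' means classic tablespace fragmentation
    -- is a non-issue; ALLOCATION_TYPE shows UNIFORM vs SYSTEM (autoallocate).
    select tablespace_name, extent_management, allocation_type
    from   dba_tablespaces;
    ```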

  • Can one logical table had bothfragmented and non-fragmented table sources?

    Hi,
    I created one logical table with three logical table sources as explained below.
    1. Inventory Item logical table source with fragmentation clause of ITEM_TYPE='INVENTORY', I checked source combination feature for this LTS.
    2. Punch out logical table source with fragmentation clause of ITEM_TYPE='PUNCHOUT', I checked source combination feature for this LTS.
    3. Category logical table source without any fragmentation.
    The relation between category and item is one to many.
    I am getting errors in Answers if I try to query all attributes of the item for a category in a dimension-only query. Could somebody validate whether I can create one logical table with fragmented as well as non-fragmented logical table sources?

    Can you share the error messages you are getting?
    regards
    John
    http://obiee101.blogspot.com

  • Sequences in fragmented tables in oracle9i

    How can I create a sequence that somehow checks what the next value is, as I have the same customer table in two databases? Do I need to build a trigger?
    I tried to use LAST_NUMBER from USER_SEQUENCES but it is not updated during normal DB operation (according to the Oracle Complete Reference).
    many thanks for your help!

    The SQL and PL/SQL forum at
    PL/SQL
    may be the best place to ask this question.
    -- CJ
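    One conventional workaround for this situation (not from the reply above, just a common approach): give each database an interleaved sequence range via offset and increment, so the two sites can never generate the same value:

    ```sql
    -- Site A generates odd values, site B even values;
    -- no cross-database coordination or trigger is needed.

    -- On database A:
    create sequence cust_seq start with 1 increment by 2;

    -- On database B:
    create sequence cust_seq start with 2 increment by 2;
    ```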

  • Performance degradation due to table fragmentation

    Dear all,
    We use a table in Oracle to store session IDs for various web applications. This is a very busy table because several rows are inserted and updated almost every single second by web applications. Due to this, the disk space containing the table apparently gets fragmented, which results in poor performance of our web applications. Whenever this table is freshly rebuilt, the performance of our web applications returns to its normal level.
    Can someone kindly advise whether this is normal behaviour of highly fragmented tables using ASM (Automatic Storage Management)? Should the performance of applications degrade if tables are fragmented? Also, are there any suggestions for a better solution than rebuilding the table every month?
    We use Oracle 10.1.0.4 using Real Application Clusters. Our storage system is based on Automatic storage Management (ASM).
    Thanks and regards

    Thanks for the Reply.
    No, there is no union. Let me take an example:
    there is a table BANK owned by AD1, and we created a synonym in AP1 with the same name, BANK, granting only DML access to AP1.
    Second thing: we created new users. For those, do we also need to gather statistics?

  • What is the difference between Table & Tablespace Fragmentation

    What is the difference between table fragmentation and tablespace fragmentation?
    What causes table fragmentation, and what causes tablespace fragmentation?
    How can we avoid table fragmentation and tablespace fragmentation?
    How can we fix already fragmented tables and fragmented tablespaces?
    Thanks
    Naveen

    Unless you are using an exceptionally old version of Oracle or are still using dictionary managed tablespaces or are using some interesting definitions of "fragmentation", fragmentation is practically impossible in Oracle.
    Justin

  • How fragment a table in Oracle RAC

    Hello to all
    I have to use a DB cluster in a project and I'm using Oracle RAC for this, but I have a question: is it possible to fragment tables in Oracle RAC? If so, how can I do it?
    Thanks all for your help.

    If by "fragment" you mean doing what is typically referred to as "sharding" and storing different subsets of the table on different nodes, no, you can't.
    Unlike how many databases do clustering with a "shared nothing" architecture, a RAC cluster involves multiple instances (software processes running on each node) to access a single database (set of data files) on a shared storage system. Sharding a table in this context doesn't make sense-- every instance is going to have access to the entire table. Sharding makes sense in other approaches to clustering where the storage system is not shared across multiple nodes.
    If you are trying to ensure that different nodes don't conflict with each other when requesting particular blocks, or at least to minimize that contention, you can potentially create different services that run on different nodes and have different users connect to those services. So, for example, if you create a different service for different geographic regions, configured those services to run on particular nodes, configured users in those geographic regions to connect to the service appropriate for their region and the data in a table is naturally separated by region, you would end up with each node preferentially caching blocks that have data associated with their particular region and you would have relatively few cache fusion requests where one node asks for a cached block from a different node.
    If you are trying to improve query performance, you can use partitioning instead of sharding the data (you can do this without using RAC as well). This allows Oracle to store different subsets of the data for a table in different physical segments so your queries can hit individual partitions rather than the entire table to retrieve the data they're after.
    Justin
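    As a sketch of the partitioning approach Justin mentions (table, column, and partition names are invented), a list-partitioned table keeps each region's rows in its own physical segment:

    ```sql
    -- Each region's rows land in a separate segment, so queries that
    -- filter on region_id can be pruned to a single partition.
    create table orders_by_region (
      order_id   number,
      region_id  number,
      order_date date
    )
    partition by list (region_id) (
      partition p_east  values (1),
      partition p_west  values (2),
      partition p_other values (default)
    );
    ```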

  • Reg : Table Fragmentation

    Hi Basis Guru's
                            What is meant by table fragmentation? Please help with the steps.

    Hello,
    Table Fragmentation
    Table fragmentation is the inability of the system to lay out related data sequentially (contiguously), an inherent phenomenon in storage-backed file systems that allow in-place modification of their contents.
    The correction to existing fragmentation is to compress tables and free space back into contiguous areas, a process called defragmentation.
    Table fragmentation will result in longer query times when a full table scan is performed. Since data is not as evenly packed in the data blocks, many blocks may have to be read during a scan to satisfy the query. These blocks may be distributed on various extents. In this case, Oracle must issue recursive calls to locate the address of the next extent in the table to scan.
    Recent studies have shown that table fragmentation has hardly any effect on the performance of the database system. This is mainly because full table scans are somewhat rare in an SAP system since data is accessed using an index.
    REWARD POINTS IF HELPFUL
    Regards
    Sai

  • How to expdp table with a BLOB field when table is larger than UNDO tbs?

    We have a 4-node RAC instance and are on 11.1. We have a 100 GB schema with a few hundred tables. One table contains about 80 GB of data; the table has pictures in it (a BLOB column). Our 4-node RAC has four 12 GB undo tablespaces.
    We run out of undo when exporting the schema, or just this table, due to the size of the table.
    According to metalink note ID 1086414.1 this can happen on fragmented tables. According to segment advisor, we are all good and not fragmented at all.
    I also followed the troubleshooting advice in ID 833635.1 and ID 846079.1, but everything turned out ok.
    LOBs and ORA-01555 troubleshooting [ID 846079.1]
    Export Fails With ORA-02354 ORA-01555 ORA-22924 and How To Confirm LOB Segment Corruption Using Export Utility? [ID 833635.1]
    initially we tried just to export it without special parameters.
    expdp MY_SCHEMA/********@RACINSTANC DUMPFILE=MYFILE.dmp PARALLEL=8 directory=DATA_PUMP_DIR SCHEMAS=MY_SCHEMA
    ORA-31693: Table data object "MY_SCHEMA"."BIGLOBTABLE" failed to load/unload and is being skipped due to error:
    ORA-02354: error in exporting/importing data
    ORA-01555: snapshot too old: rollback segment number 71 with name "_SYSSMU71_1268406335$" too small
    then tried to export just the table into 8 files of 8G each (the failing table is about 90% of the schema size)
    expdp MY_SCHEMA/******@RACINSTANCE DUMPFILE=MYFILE_%U.dmp PARALLEL=8 FILESIZE=8G directory=DATA_PUMP_DIR INCLUDE=TABLE:\"IN ('BIGLOBTABLE') \"
    ORA-31693: Table data object "MY_SCHEMA"."BIGLOBTABLE" failed to load/unload and is being skipped due to error:
    ORA-02354: error in exporting/importing data
    ORA-01555: snapshot too old: rollback segment number 71 with name "_SYSSMU71_1268406335$" too small
    We eventually resorted to exporting chunks out of the table by using the QUERY parameter
    QUERY=BIGLOBTABLE:"WHERE BIGLOBTABLEPK > 1 AND BIGLOBTABLEPK <=100000"
    and that worked but it is a kludge.
    Since we will have to export this again down the road I was wondering if there is an easier way to export.
    Any suggestions are appreciated.

    Note that undo data for LOBs is not stored in the UNDO tablespace but in the LOB segments themselves, so I am not sure ORA-01555 is directly linked to the LOB data.
    What is your undo_retention parameter?
    How long does EXPDP run before getting ORA-01555?
    You could try increasing the undo_retention parameter to avoid ORA-01555.
    Are you running Enterprise Edition? If yes, transporting the tablespace storing the table could be a solution.
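    To follow the suggestions above: undo_retention can be checked and raised online, and for the LOB column itself the RETENTION setting ties old-version retention to undo_retention. A sketch (the value and the column name pic_blob are illustrative, not from the post):

    ```sql
    -- Check the current setting (in seconds); SHOW PARAMETER is SQL*Plus syntax.
    show parameter undo_retention;

    -- Raise it so long-running expdp reads can still build consistent images.
    alter system set undo_retention = 14400;

    -- Keep old LOB versions per the undo retention setting
    -- instead of the default PCTVERSION percentage.
    alter table my_schema.biglobtable modify lob (pic_blob) (retention);
    ```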

  • Delete on very large table

    Hi all,
    One table in my database has grown to 20 GB; this table has held logs for applications since 2005, so we decided to archive and delete all the logs from 2005 and 2006.
    The table:
    CREATE TABLE WORKMG.PP_TRANSFAUX
    (
      NID_TRANSF   NUMBER(28)          NOT NULL,
      VTRANSFTYPE  VARCHAR2(200 BYTE)  NOT NULL,
      VTRANSF      VARCHAR2(4000 BYTE) NOT NULL,
      DTRANSFDATE  DATE                NOT NULL
    )
    TABLESPACE TBS_TABWM_ALL;
    The command:
    delete from workmg.pp_transfaux
    where dtransfdate < to_date('20070101 00:00', 'yyyymmdd hh24:mi');
    My question is: what are the "best practices" for this operation? Such a huge delete can "flood fill" the redo logs, and I can't avoid that with "alter table pp_transfaux nologging"...
    So I could delete small chunks of data, say 6 months at a time, but then I'll get a big fragmented table...
    my environment:
    oracle 9.2.0.1 under windows 2000
    Best Regards
    Rui Madaleno

    Since this is log data I am assuming you don't need it all online at a given time, and that you don't have a partitioning license:
    <Online>
    0. Backup the database.
    1. Create an empty duplicate table 'A'.
    <maintenance window>
    2. Exchange A and the primary table.
    <Online>
    3. insert-as-select-compress-nologging the data to keep from the primary table to A.
    4. create-table-as-select-compress-nologging the data to archive from the primary table to B.
    5. Drop the primary table.
    6. Archive and drop B.
    7. Backup the database.
    <Future>
    8. Upgrade to 11g.
    9. License and utilize the automated partition generation feature to give each period its own partition.
    10. Periodically archive and drop partitions from the log table.
    If you have partitioning:
    <Online>
    1. Create an empty duplicate table, we'll call it A. It should be partitioned either by month or year.
    2. insert-as-select-compress-nologging the primary table into A.
    <maintenance window>
    3. exchange A and the primary table.
    <Online>
    4. Drop the primary table.
    5. archive and drop the partitions of A you no longer need.
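    Without partitioning, the core of the recipe above is a nologging CTAS of the rows to keep, followed by a swap. A minimal sketch (indexes, constraints, and grants must be re-created afterwards; assumes a maintenance window):

    ```sql
    -- Keep only the rows from 2007 onwards in a fresh, compact segment.
    create table pp_transfaux_new
      nologging
    as
    select * from pp_transfaux
    where  dtransfdate >= to_date('20070101', 'yyyymmdd');

    -- Swap the tables. Run this connected as the owning schema (WORKMG),
    -- since RENAME does not accept schema-qualified names.
    rename pp_transfaux to pp_transfaux_old;
    rename pp_transfaux_new to pp_transfaux;
    ```

    After verifying the archive of pp_transfaux_old, it can be exported and dropped, matching steps 4-6 in the non-partitioned recipe above.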

  • Fragmentation In transactional replication

    Fragmentation is happening in transactional replication.
    Sometimes fragmentation happens even without DML operations.
    Please let me know: does transactional replication cause fragmentation?

    Fragmentation is primarily a problem with large scan operations and on large tables. Is your workload on the highly fragmented tables characterized by large scan operations?
    You might want to defrag some of the large tables. Typically small tables are easily fragmented and defragging them is a losing battle.
    looking for a book on SQL Server 2008 Administration?
    http://www.amazon.com/Microsoft-Server-2008-Management-Administration/dp/067233044X looking for a book on SQL Server 2008 Full-Text Search?
    http://www.amazon.com/Pro-Full-Text-Search-Server-2008/dp/1430215941
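    On the SQL Server side, index fragmentation is typically measured with sys.dm_db_index_physical_stats and addressed with ALTER INDEX. A sketch (the thresholds mentioned in the comment are the commonly cited rules of thumb, not from the post; dbo.MyLargeTable is a placeholder):

    ```sql
    -- Report logical fragmentation for every index in the current database.
    SELECT object_name(ips.object_id) AS table_name,
           i.name                     AS index_name,
           ips.avg_fragmentation_in_percent
    FROM   sys.dm_db_index_physical_stats(db_id(), NULL, NULL, NULL, 'LIMITED') ips
    JOIN   sys.indexes i
           ON i.object_id = ips.object_id AND i.index_id = ips.index_id
    WHERE  ips.index_id > 0;

    -- Common rule of thumb: REORGANIZE between ~5% and ~30% fragmentation,
    -- REBUILD above ~30%. Defragging small tables rarely pays off.
    ALTER INDEX ALL ON dbo.MyLargeTable REORGANIZE;
    ```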
