Index Block Splits

How can we reduce or tune index block splits? I mean the wait event for leaf node splits: a session might be waiting to read the block which is getting split. And I guess an index block split is natural and we cannot avoid it; for example, the value 'ginger' has to be inserted between 'finger' and 'hello', so to insert 'ginger' Oracle will split the block in two, inserting 'ginger' and moving 'hello' to another block. Please help or explain, experts.

842638 wrote:
How can we reduce or tune index block splits? I mean the wait event for leaf node splits: a session might be waiting to read the block which is getting split. And I guess an index block split is natural and we cannot avoid it; for example, the value 'ginger' has to be inserted between 'finger' and 'hello', so to insert 'ginger' Oracle will split the block in two, inserting 'ginger' and moving 'hello' to another block. Please help or explain, experts.
Block splits will happen, and if I remember correctly there are two types, the 50-50 split and the 90-10 split; which one you get depends on the range of the data you are inserting. I would suggest that you read Richard Foote's presentation [url http://richardfoote.files.wordpress.com/2007/12/index-internals-rebuilding-the-truth.pdf] Index Internals to learn more about all this.
Aman....
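To make the two split types concrete, here is a minimal toy model in Python (my own sketch with absurdly small blocks, not Oracle's actual algorithm): when a full rightmost leaf receives a new maximum key, the split is 90-10 (the full block is left as-is and the new key starts a fresh block); anywhere else the split is 50-50, moving half the entries to a new block.

```python
from bisect import bisect_right, insort

CAPACITY = 4  # entries per leaf block; tiny on purpose, real blocks hold hundreds

def insert(blocks, key):
    """Insert key into a list of sorted leaf blocks; return split type, if any."""
    if not blocks:
        blocks.append([key])
        return None
    # pick the leaf whose low key is closest below the new key
    i = max(0, bisect_right([b[0] for b in blocks], key) - 1)
    block = blocks[i]
    if len(block) < CAPACITY:
        insort(block, key)
        return None
    if i == len(blocks) - 1 and key > block[-1]:
        # rightmost leaf, new maximum key: 90-10 split --
        # leave the full block untouched, start a fresh block with the key
        blocks.append([key])
        return "90-10"
    # otherwise: 50-50 split -- move the upper half to a new block
    half = CAPACITY // 2
    blocks.insert(i + 1, block[half:])
    del block[half:]
    insort(block if key < blocks[i + 1][0] else blocks[i + 1], key)
    return "50-50"
```

With monotonically increasing keys (sequences, timestamps) every split is 90-10 and blocks stay densely packed; inserts into the middle of the key range force 50-50 splits and leave blocks roughly half empty.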

Similar Messages

  • Contention on index block splits consuming significant database time

    Hi Guys,
    Can anybody suggest how to remove contention on index block splits? It is causing many issues on my production DB: CPU usage shoots up and the application hangs for a few minutes.
    DB is 10.2.0.3 and OS is IBM AIX 5.3

    I found this; it might be useful:
    One possibility is that this is caused by shared CBC latching peculiarities:
    1) during normal selects your index root block can be examined under a
    shared cache buffers chains latch.
    So as long as everybody is only reading the index root block, everybody can
    do it concurrently (without pinning the block). The "current holder count"
    in the CBC latch structure is just increased by one for every read only
    latch get and decreased by one on every release. 0 value means that nobody
    has this latch taken currently.
    Nobody has to wait for others when reading the index root block in the all-read-only
    case. That greatly helps to combat hot index root block issues.
    2) Now if a branch block split happens a level below the root block, the
    root block has to be pinned in exclusive mode for reflecting this change in
    it. In order to pin a block you need to get the corresponding CBC latch in
    exclusive mode.
    If there are already a bunch of readers on the latch, then the exclusive
    latch getter will just flip a bit in the CBC latch structure, stating its
    interest in an exclusive get.
    Every read-only latch get will check for this bit; if it's set, then the
    getters will just spin instead, waiting for this bit to be cleared (they may
    yield or sleep immediately as well, I haven't checked). Now the exclusive
    getter has to spin/wait until all the shared getters have released the latch
    and the "current holder count" drops to zero. Once it's zero (and the getter
    manages to get onto the CPU) it can get the latch, do its work and release the
    latch.
    During all that time, starting from when the "exclusive interest" bit was
    set, nobody could access this index's root block except the processes which
    already had the latch in shared mode. Depending on the latch spin/sleep strategy
    for this particular case and the OSD implementation, this could mean that all
    those "4000 readers per second" just start spinning on that latch, causing a
    heavy spike in CPU usage, and they all queue up.
    How to diagnose that:
    You could sample v$latch_misses to see whether the "kcbgtcr:
    kslbegin shared" nowait fails/sleeps counter takes an exceptional jump
    once you observe this hiccup.
    How to fix that once diagnosed:
    The usual stuff, like partitioning if possible, or creating a single-table
    hash cluster instead.
    If you see that the problem comes from excessive spinning, think about
    reducing the spinning overhead (by reducing the spin count, for example). This
    could affect your other database functions, though.
    If you can't do the above, and you have an off-peak window, then analyse the
    indexes (using a treedump for a start) and, if you see a block split coming in a
    branch below the root block, force the branch block to split during the
    off-peak window by inserting carefully picked values into the index tree,
    values which fall exactly in the range that causes the proper block to split. Then
    you can just roll back your transaction: the block splits are not rolled
    back nor coalesced, as they are done in a separate recursive
    transaction.
    And this
    With indexes, the story is more complicated, since you can't just insert a
    row into any free block available as you can with tables. Multiple freelists on
    tables help us spread inserts across different data blocks, since every
    freelist has its own distinct set of data blocks. With indexes, the
    inserted key has to go exactly to the block where the structure of the B-tree
    index dictates, so multiple freelists can't help to spread contention here.
    When any of the index blocks has to split, a new block has to be allocated
    from the freelist (and possibly unlinked from its previous location in the index),
    causing an update to the freelist entry in the segment header block. Now even if
    you had defined multiple freelists for your segment, they would still all live in
    the single segment header block, and if you had several simultaneous block
    splits, the segment header would become the bottleneck.
    You could relieve this by having multiple freelist groups (spreading the
    freelists across multiple blocks after the segment header), but this approach has
    its problems as well: a server process which maps to freelist group 1
    doesn't see free blocks in freelist group 2, possibly wasting space in
    some cases...
    So, if you have huge contention on regular index blocks, then you should
    rethink the logical design (avoid right-hand indexes, for example) or the physical
    design (partition the index); increasing freelists won't help here.
    But if you have contention on the index segment's header block because of block
    splits/freelist operations, then either partition the index or use multiple
    freelist groups; again, adding freelists won't help here. Note that adding
    freelist groups requires a segment rebuild.

  • Simple question about block splitting

    Hello Experts,
    My question is: aren't the following DML statements performing 90-10 block splits? Because it has been said that the first one performs 50-50 block splitting.
    SQL> CREATE TABLE album_sales_IOT(album_id number, country_id number, total_sals number, album_colour varchar2(20),
         CONSTRAINT album_sales_iot_pk PRIMARY KEY(album_id, country_id)) ORGANIZATION INDEX;
    Table created.
    SQL> BEGIN 
      2    FOR i IN 5001..10000 LOOP
      3      FOR c IN 201..300 LOOP
      4        INSERT INTO album_sales_iot VALUES(i,c,ceil(dbms_random.value(1,5000000)), 'Yet more new rows');
      5      END LOOP;
      6    END LOOP;
      7    COMMIT;
      8  END;
      9  /
    PL/SQL procedure successfully completed.
    SQL> BEGIN 
      2    FOR i IN 1..5000 LOOP
      3       FOR c IN 101..200 LOOP
      4          INSERT INTO album_sales_iot
      5          VALUES(i,c,ceil(dbms_random.value(1,5000000)), 'Some new rows');
      6       END LOOP;
      7    END LOOP;
      8    COMMIT;
      9  END;
    10  /
    PL/SQL procedure successfully completed.

    Hi NightWing,
    > Because it has been said that the first one performs 50-50 block splitting.
    Who said it? Anyway, this can easily be answered with Snapper by Tanel Poder.
    First PL/SQL procedure
        SID, USERNAME  , TYPE, STATISTIC                                                 ,         DELTA, HDELTA/SEC,    %TIME, GRAPH       , NUM_WAITS,  WAITS/SEC,
          8, SYS       , STAT, leaf node splits                                          ,          2427,      12.14,         ,             ,          ,           ,
          8, SYS       , STAT, leaf node 90-10 splits                                    ,          2427,      12.14,         ,             ,          ,           ,
          8, SYS       , STAT, branch node splits                                        ,             4,        .02,         ,             ,          ,           ,
          8, SYS       , STAT, root node splits                                          ,             1,        .01,         ,             ,          ,           ,
    Second PL/SQL procedure
        SID, USERNAME  , TYPE, STATISTIC                                                 ,         DELTA, HDELTA/SEC,    %TIME, GRAPH       , NUM_WAITS,  WAITS/SEC,
          8, SYS       , STAT, leaf node splits                                          ,          2179,       10.9,         ,             ,          ,           ,
          8, SYS       , STAT, branch node splits                                        ,             8,        .04,         ,             ,          ,           ,
    Regards
    Stefan
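Stefan's figures fit the pattern: the first procedure inserts (album_id, country_id) keys entirely above the existing maximum, so the rightmost leaf keeps splitting 90-10, while the second procedure inserts below that maximum, forcing 50-50 splits. A rough rule-of-thumb classifier (my own sketch; it ignores deletes and per-leaf effects):

```python
def expected_split_type(keys, existing_max=None):
    """Guess the dominant leaf-split type for a stream of index keys:
    keys arriving strictly above the current maximum hit the rightmost
    leaf (90-10 splits); anything else lands mid-structure (50-50)."""
    mx = existing_max
    for k in keys:
        if mx is not None and k <= mx:
            return "50-50"
        mx = k
    return "90-10"

# first PL/SQL loop (truncated here): ids 5001.. appended above all existing keys
first = [(i, c) for i in range(5001, 5003) for c in range(201, 301)]
# second loop (truncated): ids 1.. inserted below the then-current maximum key
second = [(i, c) for i in range(1, 3) for c in range(101, 201)]
```

`expected_split_type(first)` gives "90-10"; with the table already holding keys up to (10000, 300), `expected_split_type(second, existing_max=(10000, 300))` gives "50-50", matching the Snapper statistics.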

  • How should index block size be set in warehouse databases?

    Hi,
    We have Warehouse database.
    I cannot find out our index block size.
    1. Where can I find our index block sizes?
    2. How can I enlarge index block sizes? Is it related to the tablespace?
    After your suggestion, do I need to increase or set the buffer cache / keep pool according to the block sizes? Can 2K, 4K, 8K, 16K and 32K be specified?
    Could you help me please?
    thanks and regards,

    See the BLOCK_SIZE column in DBA_TABLESPACES.
    You can't "increase" the block size. You'd have
    a) to allocate DB_xK_cache_size for the new "x"K block size
    b) create a new tablespace explicitly specifying the block size in the CREATE TABLESPACE command
    c) rebuild your indexes into the new tablespace.
    Indexes created in a tablespace with a larger block size have more entries in each block.
    You may get better performance.
    You may get worse performance.
    You may see no difference in performance.
    You may encounter bugs.
    "increasing block size" is an option to be evaluated and tested thoroughly. It is not, per se, a solution.
    Hemant K Chitale
    http://hemantoracledba.blogspot.com

  • V$BH view: number of index blocks... help

    hello guys,
    I have queried the view V$BH and it seems there are block classes 0-36 present. Can someone help me identify the class of the index blocks that are present in the buffer cache?

    Check this link,
    www.juliandyke.com/Internals/BlockClasses.html
    HTH
    Aman....

  • Index block dump: "header address" doesn't match rdba

    I did a dump of an index leaf block, and I found that the "header address" doesn't match the rdba. What is the "header address"? I also found several leaf blocks with the same "header address".
    buffer tsn: 11 rdba: 0x1684d120 (90/315680)
    ========> 0x1684d120 (1)
    header address 4403265988=0x1067481c4
    ========> 0x1067481c4 (2)
    *** SERVICE NAME:(SYS$USERS) 2009-08-04 04:37:36.335
    *** SESSION ID:(14234.24426) 2009-08-04 04:37:36.335
    Start dump data blocks tsn: 11 file#: 90 minblk 315680 maxblk 315680
    buffer tsn: 11 rdba: 0x1684d120 (90/315680) 
      ========>  0x1684d120  (1)
    scn: 0x0324.dda9ec3d seq: 0x01 flg: 0x04 tail: 0xec3d0601
    frmt: 0x02 chkval: 0xeb2a type: 0x06=trans data
    Hex dump of block: st=0, typ_found=1
    Block header dump:  0x1684d120
    Object id on Block? Y
    seg/obj: 0x7ca10  csc: 0x324.dda9ec3d  itc: 17  flg: O  typ: 2 - INDEX
         fsl: 0  fnx: 0x1684cf72 ver: 0x01
    Itl           Xid                  Uba         Flag  Lck        Scn/Fsc
    Leaf block dump
    ===============
    header address 4403265988=0x1067481c4         
    ========>  0x1067481c4  (2)
    kdxcolev 0
    KDXCOLEV Flags = - - -
    kdxcolok 0
    kdxcoopc 0x90: opcode=0: iot flags=I-- is converted=Y
    kdxconco 2
    kdxcosdc 5
    kdxconro 0
    kdxcofbo 36=0x24
    kdxcofeo 7672=0x1df8
    kdxcoavs 7636
    kdxlespl 0
    kdxlende 0
    kdxlenxt 373579108=0x16445d64
    kdxleprv 377801347=0x1684ca83
    kdxledsz 0
    kdxlebksz 7672
    ----- end of leaf block dump -----
    Thanks,
    Daniel

    Hi user646745,
    You didn't say why you need to do an index block dump.
    Also take care that block structures and dumps sometimes differ from version to version (e.g. 9i vs 10g), unless you know exactly what you are looking for.
    Thanks

  • Dumping Index Blocks

    Hi,
    I'm trying to dump index blocks, but the generated trace file contains an error.
    How can I resolve this issue?
    Following is what I've done and got:
    SQL> SELECT object_id FROM USER_objects WHERE object_name = 'NAME_5'
    OBJECT_ID
         71142
    SQL> ALTER SESSION SET EVENTS 'immediate trace name treedump level 71142' ;
    Trace file e:\oracle\diag\rdbms\ora11g\ora11g\trace\ora11g_ora_3700.trc
    Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Windows Server 2003 Version V5.2 Service Pack 2
    ----- begin tree dump
    2010-04-08 01:21:53.043: [  OCROSD]utgdv:11:could not read reg value ocrmirrorconfig_loc os error= The system could not find the environment option that was entered.
    2010-04-08 01:21:53.059: [  OCROSD]utgdv:11:could not read reg value ocrmirrorconfig_loc os error= The system could not find the environment option that was entered.
    leaf: 0x18057e4 25188324 (0: nrow: 10 rrow: 10)
    ----- end tree dump

    ahb72 wrote:
    SQL> SELECT object_id FROM USER_objects WHERE object_name = 'NAME_5'
    OBJECT_ID
    71142
    SQL> ALTER SESSION SET EVENTS 'immediate trace name treedump level 71142' ;
    Trace file e:\oracle\diag\rdbms\ora11g\ora11g\trace\ora11g_ora_3700.trc
    Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Windows Server 2003 Version V5.2 Service Pack 2
    ----- begin tree dump
    2010-04-08 01:21:53.043: [  OCROSD]utgdv:11:could not read reg value ocrmirrorconfig_loc os error= The system could not find the environment option that was entered.
    2010-04-08 01:21:53.059: [  OCROSD]utgdv:11:could not read reg value ocrmirrorconfig_loc os error= The system could not find the environment option that was entered.
    leaf: 0x18057e4 25188324 (0: nrow: 10 rrow: 10)
    ----- end tree dump
    If your table has 10 rows, then this leaf block is the entire index and the two error lines are probably irrelevant.
    Create a table with a few thousand rows and see if the errors appear for every line in the tree dump, or just once at the start. If the former, then you can probably live with it.
    Regards
    Jonathan Lewis

  • DBV reports index blocks failing... rows locked by itl

    Hi all,
    for two nights the RMAN backup on a database (8.1.7.4) has been failing due to corrupted blocks. I've run DBV and found this situation:
    Block Checking: DBA = 272791174, Block Type = KTB-managed data block
    **** actual rows locked by itl 2 = 1 != # in trans. header = 0
    **** actual free space credit for itl 2 = 26 != # in trans. hdr = 0
    **** actual free space = 4434 < kdxcoavs = 4460
    ---- end index block validation
    Page 161414 failed with check code 6401
    Block Checking: DBA = 272791728, Block Type = KTB-managed data block
    **** actual rows locked by itl 2 = 126 != # in trans. header = 125
    ---- end index block validation
    Page 161968 failed with check code 6401
    Block Checking: DBA = 272791736, Block Type = KTB-managed data block
    **** actual rows locked by itl 2 = 167 != # in trans. header = 166
    ---- end index block validation
    Page 161976 failed with check code 6401
    Block Checking: DBA = 272793483, Block Type = KTB-managed data block
    **** actual rows locked by itl 2 = 273 != # in trans. header = 272
    ---- end index block validation
    Page 163723 failed with check code 6401
    Block Checking: DBA = 272793485, Block Type = KTB-managed data block
    **** actual rows locked by itl 2 = 257 != # in trans. header = 256
    ---- end index block validation
    Page 163725 failed with check code 6401
    Block Checking: DBA = 272793496, Block Type = KTB-managed data block
    **** actual rows locked by itl 2 = 267 != # in trans. header = 266
    ---- end index block validation
    I've tried to drop and restore/recover the tablespace without success. Any hints about this issue? I've looked in Metalink for suggestions, also without success...
    Thanks Steve

    Can you do a /dev/null export to identify the corrupted segment?
    If you find any corrupted segment, check this Metalink note: 28814.1 - Handling Oracle Block Corruptions in Oracle7/8/8i/9i/10g

  • Full-text index: CONTAINS query with "(S)" returns no rows

    Hi,
    My table column has the value "school(s)".
    I have created a full-text index for this table.
    When executing the queries below, the first query returns no records and the second query does return records.
    select * from TABLENAME WHERE CONTAINS(ColumName, '"school(S)"') 
    select * from TABLENAME WHERE CONTAINS(ColumName, '"school(s)"') 
    The only difference between the two queries is the case of the (S).
    Thanks in advance

    Hi Radhakrishnan,
    Full-text queries are not case-sensitive. For example, searching for "Aluminum" or "aluminum" returns the same results. Please refer to the article below:
    Full-Text Search (SQL Server):http://msdn.microsoft.com/en-us/library/ms142571.aspx
    I tried to repro this issue in my test environment, but it returned the expected result. Here is the sample code for your reference, please see:
    CREATE DATABASE FULL_SEARCH_TEST
    USE [FULL_SEARCH_TEST]
    Create Table FullText_Search
    (
    Id Int Primary Key Identity(1,1),
    Name Nvarchar(50)
    )
    GO
    INSERT INTO FullText_Search VALUES('school(s)')
    INSERT INTO FullText_Search VALUES('Test1')
    INSERT INTO FullText_Search VALUES('Test2')
    SELECT * FROM FullText_Search
    GO
    CREATE FULLTEXT CATALOG FTSearch
    GO
    --Show all constraint names
    SELECT * FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS
    CREATE FULLTEXT INDEX ON FullText_Search
    (Name) KEY INDEX PK__FullText__3214EC077EC2BD1F
    ON FTSearch;
    SELECT * FROM FullText_Search WHERE CONTAINS(Name, '"school(s)"')
    GO
    SELECT * FROM FullText_Search WHERE CONTAINS(Name, '"school(S)"')
    I would suggest you share your DDL+DML to us for further investigation. Don't forget to let us know the version of your SQL Server.
    Regards,
    Elvis Long
    TechNet Community Support

  • How to measure undo at a session level

    Below is what we are trying to do.
    We are trying to implement Oracle's table block compression feature.
    In doing so, in one of our tests we discovered that the session performing the DML (inserts) generated almost 30x the undo.
    We measured this undo using the query below (before the transaction committed).
    SELECT a.sid, a.username, used_ublk, used_ublk*8192 bytes
    FROM v$session a, v$transaction b
    WHERE a.saddr = b.ses_addr
    However, the above is at a transaction level; since the transaction is still not committed, we would lose this value once it is either committed or rolled back. For this reason, we are trying to find an equivalent statistic at the session level.
    1. What we are trying to find out is whether an equivalent session-level statistic exists to measure the amount of undo generated.
    2. Is the undo generated always in terms of "undo blocks?"
    3. When querying v$statname for name like '%undo%' we came across several statistics, the closest one being:
    undo change vector size - in bytes?
    4. desc test_table;
    Name Type
    ID NUMBER
    sql> insert into test_table values (1);
    5. However when we run the query against:
    SELECT s.username,sn.name, ss.value
    FROM v$sesstat ss, v$session s, v$statname sn
    WHERE ss.sid = s.sid
    AND sn.statistic# = ss.statistic#
    AND s.sid =204
    AND sn.name ='undo change vector size'
    SID USERNAME NAME BYTES
    204 NP4 undo change vector size 492
    6. Query against: v$transaction
    SELECT a.sid, a.username, used_ublk, used_ublk*8192 bytes
    FROM v$session a, v$transaction b
    WHERE a.saddr = b.ses_addr
    SID USED_UBLK BYTES
    204 1 8192
    What we are trying to understand is:
    1. How can we or what is the correct statistic to determine how many undo blocks were generated by particular session?
    2. What is the statistic: undo change vector size? What does it really mean? or measure?

    Any transaction that generates Undo will use Undo Blocks in multiples of 1 --- i.e. the minimum allocation on disk is 8KB.
    Furthermore, an Undo_Rec does not translate to a Table Row. The Undo has to capture changes to Indexes, block splits, other actions. Multiple changes to the same table/index block may be collapsed into one undo record/block etc etc.
    Therefore, a transaction that generated 492 bytes of Undo would use 8KB of undo space because that is the minimum allocation.
    You need to test with larger transactions.
    SQL> update P_10 set col_2='ABC2' where mod(col_1,10)=0;
    250000 rows updated.
    SQL>
    SQL> @active_transactions
           SID    SERIAL# SPID         USERNAME     PROGRAM                       XIDUSN  USED_UBLK  USED_UREC
           143        542 17159        HEMANT       sqlplus@DG844 (TNS V1-V3)          6       5176     500000
    Statistic : db block changes                                      1,009,903
    Statistic : db block gets                                         1,469,623
    Statistic : redo entries                                            502,507
    Statistic : redo size                                           117,922,016
    Statistic : undo change vector size                              41,000,368
    Statistic : table scan blocks gotten                                 51,954
    Statistic : table scan rows gotten                               10,075,245
    Hemant K Chitale
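The arithmetic above can be sketched as follows (Python; assumes 8KB undo blocks): used_ublk counts whole blocks, so the byte figure is just used_ublk * block_size, while the "undo change vector size" statistic gives only a lower bound on the blocks consumed.

```python
import math

BLOCK_SIZE = 8192  # assumes an 8KB block size for the undo tablespace

def undo_bytes(used_ublk, block_size=BLOCK_SIZE):
    """v$transaction.used_ublk counts whole undo blocks, so even a
    492-byte change is charged one full block."""
    return used_ublk * block_size

def min_undo_blocks(change_vector_bytes, block_size=BLOCK_SIZE):
    """Lower bound on undo blocks implied by the 'undo change vector
    size' statistic; real usage is higher (record headers, ITLs, etc.)."""
    return max(1, math.ceil(change_vector_bytes / block_size))
```

For the single-row insert above, min_undo_blocks(492) is 1 block (8KB); for the 250,000-row update, min_undo_blocks(41000368) is 5005 blocks, consistent with (and below) the 5176 blocks v$transaction actually reported.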

  • Error with transaction

    Hi!
    An error has occurred! Why did you transfer money from my card again? I paid 9,735 HKD for everything (Mac mini, mouse, keyboard and adapter). But then you transferred money again: 738 HKD and 370 HKD. What is that for? Can you comment on this?
    Order Number: W226194711
    login: [email protected]

    The 8177 error does not seem to be a rarity when using transaction isolation serializable...
    Yes, if you elect to use serializable you can just about bet that ORA-8177 will, at some point, come along for the ride. It is just a "fact of life" when using serializable (including "false positives"). The only way to avoid it (as far as I know) is to not use serializable. If this is not possible, then the application will have to deal with it in some fashion.
    It is relatively easy to demonstrate that you can get the error using SQL*Plus. On one of my test systems, the following reliably raises ORA-8177:
    Session 1:
    SQL> create table test (a number, b number);
    Table created.
    SQL> insert into test values (1,1);
    1 row created.
    SQL> commit;
    Commit complete.
    SQL> begin
      2    -- use serializable for this test
      3    set transaction isolation level serializable;
      4
      5    -- sleep for 10 seconds to perform actions in session 2
      6    dbms_lock.sleep(10);
      7
      8    -- this will likely fail with ORA-8177
      9    update test set b = 2 where a = 1;
    10  end;
    11  /
    While the anonymous pl/sql block is executing...
    Session 2:
    SQL> insert into test values (2,2);
    1 row created.
    SQL> commit;
    Commit complete.
    Now, back in Session 1...
    begin
    ERROR at line 1:
    ORA-08177: can't serialize access for this transaction
    ORA-06512: at line 9
    I'm not sure what you mean about using serializable to push concurrency problems into the database.
    Regards,
    Mark
    [EDIT]
    Also, I believe index block splits during the serializable transaction can also result in ORA-8177...
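If serializable has to stay, the usual coping strategy is to wrap each transaction in a retry loop. A minimal sketch in Python, where SerializationError is a hypothetical stand-in for whatever exception your driver raises on ORA-08177:

```python
class SerializationError(Exception):
    """Hypothetical stand-in for the driver error raised on ORA-08177."""

def run_with_retry(txn, attempts=3):
    """Run txn(); on a serialization failure, retry up to 'attempts'
    times, re-raising after the last attempt so callers still see it."""
    for i in range(attempts):
        try:
            return txn()
        except SerializationError:
            if i == attempts - 1:
                raise
            # a real application would roll back and perhaps back off here
```

The retried callable must be safe to re-run from the start, i.e. it should contain the whole transaction, not a fragment of one.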

  • Interpreting an index data block dump

    I have seen a few postings about reading index data blocks; mine doesn't quite look like those.
    OK: 11gR1 (Linux)
    Tracing down a hot block issue with an index, I performed
    alter system dump datafile 11 block 4030208;
    Looking at the Web page "Index Block Dump: Index Only Section Part II (Station To Station)" and others they show a dump like this:
    row#0[8021] flag: -----, lock: 0, len=15
    col 0; len 5; (5): 42 4f 57 49 45
    col 1; len 6; (6): 02 01 48 8a 00 00
    row#1[8002] flag: -----, lock: 0, len=19
    col 0; len 9; (9): 4d 41 4a 4f 52 20 54 4f 4d
    col 1; len 6; (6): 02 01 48 8a 00 02
    row#2[7987] flag: -----, lock: 0, len=15
    col 0; len 5; (5): 5a 49 47 47 59
    col 1; len 6; (6): 02 01 48 8a 00 01
    ----- end of leaf block dump -----
    End dump data blocks tsn: 8 file#: 8 minblk 84234 maxblk 84234
    I don't see anything that "obvious" in my dump. Am I looking at something other than a leaf block, perhaps?
    I am expecting/hoping to see key/rowid pairs for an index like X(y number, z number).
    Block dump from cache:
    Dump of buffer cache at level 4 for tsn=6, rdba=50167552
    BH (0x275f2aec8) file#: 11 rdba: 0x02fd7f00 (11/4030208) class: 4 ba: 0x274992000
      set: 111 bsz: 8192 bsi: 0 sflg: 0 pwc: 0, 25 lid: 0x00000000,0x00000000
      dbwrid: 2 obj: 127499 objn: 77784 tsn: 6 afn: 11
      hash: [0x403d34650,0x403d34650] lru: [0x333f32878,0x209f4ea88]
      lru-flags: hot_buffer
      ckptq: [NULL] fileq: [NULL] objq: [0x22dede3f8,0x30ff9c3f8]
      st: XCURRENT md: NULL tch: 2
      flags: block_written_once redo_since_read gotten_in_current_mode
      LRBA: [0x0.0.0] LSCN: [0x0.0] HSCN: [0xffff.ffffffff] HSUB: [34]
      cr pin refcnt: 0 sh pin refcnt: 0
      buffer tsn: 6 rdba: 0x02fd7f00 (11/4030208)
      scn: 0x0001.19bccf84 seq: 0x02 flg: 0x04 tail: 0xcf841002
      frmt: 0x02 chkval: 0x987f type: 0x10=DATA SEGMENT HEADER - UNLIMITED
    Hex dump of block: st=0, typ_found=1
    Dump of memory from 0x0000000274992000 to 0x0000000274994000
    274992000 0000A210 02FD7F00 19BCCF84 04020001  [................]
    274993FF0 00000000 00000000 00000000 CF841002  [................]
      Extent Control Header
      Extent Header:: spare1: 0      spare2: 0      #extents: 66     #blocks: 10239
                      last map  0x00000000  #maps: 0      offset: 4128
          Highwater::  0x047feb5b  ext#: 65     blk#: 731    ext size: 1024
      #blocks in seg. hdr's freelists: 0
      #blocks below: 9946
      mapblk  0x00000000  offset: 65
                       Unlocked
         Map Header:: next  0x00000000  #extents: 66   obj#: 127499 flag: 0x40000000
      Extent Map
       0x02fd7f01  length: 127
       0x0339ea80  length: 128
    ...

    Some time ago, I wrote a Python script to print the decimal integer values from an index block dump. I don't know if it will help you, but it may be a start. It only prints the integer equivalent of the first column in the index, as that is what I needed at the time.
    It is called like this...
    18:55:31 oracle@oh1xcwcdb01 /u02/admin/wcperf/udump >./blockdump.py wcperf1_ora_21618.trc
    col  0: [ 4]  c4 48 2a 53 converts to 71418200 on line #526 in the block dump.
    col  0: [ 5]  c4 48 2a 53 1d converts to 71418228 on line #640 in the block dump.
    col  0: [ 6]  c5 08 02 20 61 3f converts to 701319662 on line #648 in the block dump.
    col  0: [ 6]  c5 08 03 2f 33 17 converts to 702465022 on line #785 in the block dump.
    col  0: [ 6]  c5 08 03 2f 33 5f converts to 702465094 on line #793 in the block dump.
    col  0: [ 6]  c5 08 03 2f 40 38 converts to 702466355 on line #801 in the block dump.
    col  0: [ 6]  c5 08 03 30 09 5c converts to 702470891 on line #809 in the block dump.
    col  0: [ 6]  c5 08 03 32 61 05 converts to 702499604 on line #817 in the block dump.
    col  0: [ 6]  c5 08 03 33 0b 06 converts to 702501005 on line #827 in the block dump.
    col  0: [ 6]  c5 08 03 33 19 4b converts to 702502474 on line #835 in the block dump.
    col  0: [ 6]  c5 08 03 33 44 3d converts to 702506760 on line #843 in the block dump.
    col  0: [ 6]  c5 08 03 33 45 08 converts to 702506807 on line #851 in the block dump.
    col  0: [ 6]  c5 08 03 33 4e 5a converts to 702507789 on line #859 in the block dump.
    col  0: [ 6]  c5 08 03 33 5f 3b converts to 702509458 on line #867 in the block dump.
    col  0: [ 6]  c5 09 01 01 21 64 converts to 800003299 on line #875 in the block dump.
    col  0: [ 6]  c5 09 01 01 22 3b converts to 800003358 on line #883 in the block dump.
    18:55:41 oracle@oh1xcwcdb01 /u02/admin/wcperf/udump >
    ...and the script itself is below...
    #!/usr/bin/python
    #Author:        Steve Howard
    #Date:          March 23, 2009
    #Organization:  AppCrawler
    #Purpose:       Simple script to print integer equivalents of block dump values in index.
    import re
    import sys

    with open(sys.argv[1]) as trace:
        for j, line in enumerate(trace, start=1):
            if re.match(r'^col  0:', line):
                # tokens after the "]": the exponent byte (c1..c9), then the
                # base-100 mantissa digits of the Oracle NUMBER, one byte each
                cols = line.split("]")[1].split()
                exp = int(cols[0].replace("c", "")) - 1  # valid for exponents c1..c9 only
                tot = 0
                for col in cols[1:]:
                    tot += (int(col, 16) - 1) * (100 ** exp)
                    exp -= 1
                print(line.rstrip("\n") + " converts to " + str(tot)
                      + " on line #" + str(j) + " in the block dump.")
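For positive integers, the decoding the script performs can be condensed into a single function (a sketch covering positive whole values only; negative and fractional Oracle NUMBER encodings use a different scheme): the exponent byte minus 0xC1 gives the power of 100 of the leading digit, and each mantissa byte stores a base-100 digit plus one.

```python
def decode_oracle_number(hex_bytes):
    """Decode a positive integer Oracle NUMBER from its dump bytes,
    e.g. 'c4 48 2a 53'."""
    b = [int(x, 16) for x in hex_bytes.split()]
    exp = b[0] - 0xC1                 # power of 100 of the leading digit
    total = 0
    for i, digit in enumerate(b[1:]):
        total += (digit - 1) * 100 ** (exp - i)
    return total
```

The first lines of the script's output above check out: decode_oracle_number("c4 48 2a 53") returns 71418200, "c4 48 2a 53 1d" returns 71418228, and "c5 08 02 20 61 3f" returns 701319662.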

  • Index / Root Block - Branch Nodes - Leaf Nodes

    Hello guys,
    I have read a lot of documentation about indexes and performance issues, and in many cases I could optimize my queries.
    But one thing I didn't find while searching the documents.
    What value/fact determines the following:
    1) How many branch nodes can a root block contain/handle?
    2) How many leaf nodes can a branch node contain/handle?
    3) How many values can a leaf node contain/handle?
    Is there any rule that specifies at which point a branch block / leaf block should be split?
    Maybe you can give me a link to documentation about that topic or explain it yourself...
    Thanks :-)
    Regards
    Stefan

    All of these will be a function of:
    1.) block_size
    2.) PCTFREE specified at index creation time.
    3.) Size of the key values being indexed.
    An index on a very small table will start out as one block. The root block will itself be a leaf block. When it fills, Oracle will split it, and you'll end up with a root block pointing to two different leaf blocks. When one of those leaf blocks fills, Oracle will split again, and you'll end up with one root block pointing to three leaf blocks. After some time, and a lot of data, you'll end up splitting so many times that the root block will be filled with pointers to leaf blocks. At this point, when a leaf block splits, there will be no room to add a pointer to the new leaf block in the root block. So, Oracle will recursively execute another split, this time on the root block, and this is when the BLEVEL of the index will grow. So, Oracle will always split a block on demand, when there is no space left in the block.
    Also, a block split can happen if there is not an open ITL slot available on the block, but I won't go into those details now.
    Hope that helps,
    -Mark
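As Mark says, all three counts fall out of block size, PCTFREE and key size. A back-of-the-envelope estimate can be sketched like this (the fixed per-block overhead and the 20-byte entry size are my own illustrative assumptions, not Oracle constants):

```python
def approx_entries_per_block(block_size=8192, pctfree=10,
                             entry_bytes=20, block_overhead=150):
    """Rough count of index entries per block: usable space after the
    fixed block overhead and the PCTFREE reservation, divided by the
    entry size (key bytes + rowid + per-row overhead)."""
    usable = (block_size - block_overhead) * (100 - pctfree) // 100
    return usable // entry_bytes
```

With these assumptions an 8KB block holds a few hundred entries, so a three-level index (root, branch, leaf) can already address tens of millions of keys; doubling the key size roughly halves every count.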

  • How to get the list of block identifiers for an empty table and an empty index

    We have an application that has an issue with ITL waits: this application runs many INSERT statements on a table that has 2 NUMBER columns and one primary key index. The application is designed to run INSERT statements that are never committed (this is a software package).
    To check which ITL slots are really allocated, I know that I can dump a data block, but I don't know how to get the block identifiers/numbers for an "empty" table and an "empty" index. Does someone know how to do that?
    PS: I already had a look at the Metalink notes and I have a Metalink SR open for this, but maybe the OTN forum is faster?

    You should be able to find the first data/index block with the following, even on an empty table/index.
    select header_file, header_block +1
    from dba_segments
    where segment_name = '<index or table name>';
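    Once you have those numbers, one way to inspect the ITL slots is a block dump (a sketch; the file/block values below are placeholders for whatever the dba_segments query returns, and you need an appropriately privileged session):

    ```sql
    -- Placeholders: substitute the header_file / header_block+1 values
    -- returned by the dba_segments query for your own segment
    ALTER SYSTEM DUMP DATAFILE 7 BLOCK 131;
    -- Then inspect the generated trace file (user_dump_dest, or the diag
    -- trace directory on 11g+) and look for the "Itl" list at the top of
    -- the block dump: one line per ITL slot, with its transaction id and
    -- lock count.
    ```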

  • Logical block corruption in an unused block which is a part of index

    Hi All,
    During an RMAN level 0 backup I am getting a corrupted block in my DB:
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03009: failure of backup command on t20 channel at 07/22/2009 21:30:49
    ORA-19566: exceeded limit of 0 corrupt blocks for file /oracle/oradata/DB2/plind05_02.dbf
    SQL> select * from v$database_block_corruption;
    FILE# BLOCK# BLOCKS CORRUPTION_CHANGE# CORRUPTIO
    2950 1879477 1 1.0124E+13 LOGICAL
    SQL> SELECT tablespace_name, partition_name,segment_type, owner, segment_name FROM dba_extents WHERE file_id = 2950 and 1879477 between block_id AND block_id + blocks - 1;
    no rows selected
    So this block does not belong to any object.
    SQL> select * from dba_free_space where file_id = 2950 and 1879477 between block_id and block_id + blocks - 1;
    TABLESPACE_NAME FILE_ID BLOCK_ID BYTES BLOCKS RELATIVE_FNO
    USAGIDX_200907 2950 1879433 1048576 128 909
    But it exists in dba_free_space so it belongs to file space usage bitmap.
    DB Verify shows:
    myserver:/oracle/rman/DB2:DBINST1> dbv file=/oracle/oradata/DB2/plind05_02.dbf BLOCKSIZE=8192
    DBVERIFY: Release 10.2.0.4.0 - Production on Wed Jul 29 13:47:38 2009
    Copyright (c) 1982, 2007, Oracle. All rights reserved.
    DBVERIFY - Verification starting : FILE = /oracle/oradata/DB2/plind05_02.dbf
    Block Checking: DBA = -480465494, Block Type = KTB-managed data block
    **** row 2: key out of order
    ---- end index block validation
    Page 1879477 failed with check code 6401
    DBVERIFY - Verification complete
    Total Pages Examined : 4194176
    Total Pages Processed (Data) : 0
    Total Pages Failing (Data) : 0
    Total Pages Processed (Index): 3404935
    Total Pages Failing (Index): 1
    Total Pages Processed (Other): 569
    Total Pages Processed (Seg) : 0
    Total Pages Failing (Seg) : 0
    Total Pages Empty : 788672
    Total Pages Marked Corrupt : 0
    Total Pages Influx : 0
    Highest block SCN : 1795222745 (2360.1795222745)
    Now, I have identified that this block belongs to an index subpartition, so I rebuilt it with alter index ... rebuild subpartition... However, the RMAN backup still fails and DBV still reports the error.
    I know we could simply recreate the index, but the problem is that it is quite big (>6GB, on a table >7TB).
    My strong feeling is that RMAN and DBV will keep reporting the corrupted block until it is reused and reformatted.
    My question is:
    How can I reuse or reformat a block which does not belong to any object?

    Hi,
    Yes, you're right, you need to reformat that block.
    To do that, you need to allocate the block to a table, and fill that table with data until the high water mark goes past block 1879477.
    This is the way I've done it once:
    1) check the free space size below that block:
    select sum(bytes)/1024/1024 before from dba_free_space where file_id = 2950 and block_id <= 1879477;
    Let's say it is 6000 MB
    2) create a dummy table, allocate enough extents to fill the size returned from the previous query
    This does not format blocks, but the advantage of allocate extents is that you can specify size and datafile:
    alter table <dummy_table> allocate extent (size 6000M datafile '/oracle/oradata/DB2/plind05_02.dbf');
    you can check dba_extents to see if it covers block 1879477. If not, try to add a little more extents.
    3) fill the table with data to fill those extents.
    One idea is to insert one row into the table, then use 'alter table test minimize records_per_block;' so that each block will hold at most 2 rows.
    check the number of blocks (from dba_segments). Say you have 768000 blocks. Then you need to insert 768000/2 rows:
    insert into ... select ... from dual connect by level < (768000/2)
    4) check that the high water mark has reached the end of all extents (compare dba_tables.blocks and dba_segments.blocks)
    5) if not enough, add a few more rows.
    Be careful not to go too far (especially if the datafile is autoextensible). Unfortunately, maxextents is ignored on LMT :(
    6) now, your block should be reformatted. Just drop the dummy table.
    Regards,
    Franck.
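    Putting Franck's steps together, the whole procedure might look like the sketch below (the dummy table name is made up, the file/block numbers come from this thread, and the row count assumes the 768000-block example above; verify every number against your own dictionary queries before running anything):

    ```sql
    -- 1) Free space below the corrupt block in the affected file
    SELECT SUM(bytes)/1024/1024 AS mb_before
      FROM dba_free_space
     WHERE file_id = 2950 AND block_id <= 1879477;

    -- 2) Dummy table in the same tablespace; claim the space in that datafile
    CREATE TABLE block_fixup (n NUMBER) TABLESPACE USAGIDX_200907;
    ALTER TABLE block_fixup
      ALLOCATE EXTENT (SIZE 6000M DATAFILE '/oracle/oradata/DB2/plind05_02.dbf');

    -- 3) Confirm an extent now covers the corrupt block
    SELECT block_id, blocks
      FROM dba_extents
     WHERE segment_name = 'BLOCK_FIXUP'
       AND file_id = 2950
       AND 1879477 BETWEEN block_id AND block_id + blocks - 1;

    -- 4) Fill the extents so the high water mark passes the corrupt block
    --    (768000 blocks at 2 rows per block = 384000 rows), then clean up
    INSERT INTO block_fixup SELECT level FROM dual CONNECT BY level <= 384000;
    COMMIT;
    DROP TABLE block_fixup PURGE;
    ```

    After the drop, rerun DBV against the datafile to confirm the block no longer fails validation.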
