Leaf Block Dump / Output varies

Hello guys,
I have run into the following issue with leaf block dumps:
First)
row#0[8024] flag: ------, lock: 0, len=12, data:(6): 00 40 76 fa 00 00
col 0; len 2; (2): c1 02
col 1; NULL
row#1[7998] flag: ------, lock: 0, len=14, data:(6): 00 40 76 fa 00 02
col 0; len 2; (2): c1 03
col 1; len 2; (2): c1 02
.......
Second)
1103F5A90 3D91D858 000C0100 03303130 0944454C  [=..X.....010.DEL]
1103F5AA0 494E535F 494E0832 30303730 36303106  [0505.20070601.]
1103F5AB0 30303535 3331063D 91D85800 0D010003  [005531.=..X.....]
1103F5AC0 30313009 44454C49 4E535F49 4E083230  [010.DAT050_IN.20]
1103F5AD0 30373036 30310630 30353533 34063D91  [070601.005534.=.]
1103F5AE0 D858000E 01000330 31300944 454C494E  [.X.....010.DAT03]
1103F5AF0 535F494E 08323030 37303630 31063030  [S_IN.20070601.00]
1103F5B00 35353334 063D91D8 58000F01 00033031  [5534.=..X.....01]
1103F5B10 30094445 4C494E53 5F494E08 32303037  [0.DAT050_IN.2007]

Sometimes I get the row#n output as in the first example, and sometimes I get only the raw data as in the second example when I dump a leaf block. The two examples are from different indexes...
When does each form occur? Does anyone have any idea?
Regards
Stefan

Hi,
"File is truncated by trace file limit." - Yeah, but the file is not that big... I only dumped one block, and the limit is set to 10 MB.
"But as far as I'm aware, a memory dump is normally not included in a block dump. Did you check what's at the top?" - Really interesting... I have done the same thing on our test system on the same index, and there the leaf block dump section does include the information. Same command, same index (I cloned the database to our test system three days ago):
Leaf block dump
===============
row#0[3514] flag: ---D--, lock: 2, len=39
col 0; len 3; (3):  30 31 30
col 1; len 9; (9):  44 45 4c 49 4e 53 5f 49 4e
col 2; len 8; (8):  32 30 30 37 30 36 30 31
col 3; len 6; (6):  30 30 35 35 31 39
col 4; len 6; (6):  3d 91 d7 d8 00 13
----- end of leaf block dump -----

Could this leaf block have been cleaned out (by deletes) so that it no longer contains any pointers?
I have compared the leaf block attributes:
-> kdxcoavs 3298 (with row#n entries)
-> kdxcoavs 6988 (without row#n entries)
So the dump with no row#n entries has much more available space - this would confirm my guess.
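One way I could probably confirm this is a treedump of the index, which lists nrow and rrow per leaf block (rrow = 0 for leaves whose entries have all been deleted), followed by a fresh dump of the block to compare kdxconro and kdxcoavs. Just a sketch - the index name, object_id, file and block numbers are placeholders:

SELECT object_id FROM dba_objects
 WHERE object_name = 'MY_INDEX' AND object_type = 'INDEX';

ALTER SESSION SET EVENTS 'immediate trace name treedump level <object_id>';

ALTER SYSTEM DUMP DATAFILE <file#> BLOCK <block#>;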
Regards
Stefan

Similar Messages

  • Interpreting an index data block dump

    I have seen a few postings about reading index data blocks; mine doesn't quite look like those.
    OK: 11gR1 (Linux)
    Tracing down a hot block issue with an index, I performed
    alter system dump datafile 11 block 4030208;
    Looking at the Web page "Index Block Dump: Index Only Section Part II (Station To Station)" and others they show a dump like this:
    row#0[8021] flag: ------, lock: 0, len=15
    col 0; len 5; (5): 42 4f 57 49 45
    col 1; len 6; (6): 02 01 48 8a 00 00
    row#1[8002] flag: ------, lock: 0, len=19
    col 0; len 9; (9): 4d 41 4a 4f 52 20 54 4f 4d
    col 1; len 6; (6): 02 01 48 8a 00 02
    row#2[7987] flag: ------, lock: 0, len=15
    col 0; len 5; (5): 5a 49 47 47 59
    col 1; len 6; (6): 02 01 48 8a 00 01
    ----- end of leaf block dump -----
    End dump data blocks tsn: 8 file#: 8 minblk 84234 maxblk 84234
    I don't see anything that "obvious" in my dump. Am I looking at something other than a leaf block, perhaps? (See also the dba_extents check after the dump below.)
    I am expecting/hoping to see some sort of pairs for an index like X(y number, z number)
    Block dump from cache:
    Dump of buffer cache at level 4 for tsn=6, rdba=50167552
    BH (0x275f2aec8) file#: 11 rdba: 0x02fd7f00 (11/4030208) class: 4 ba: 0x274992000
      set: 111 bsz: 8192 bsi: 0 sflg: 0 pwc: 0, 25 lid: 0x00000000,0x00000000
      dbwrid: 2 obj: 127499 objn: 77784 tsn: 6 afn: 11
      hash: [0x403d34650,0x403d34650] lru: [0x333f32878,0x209f4ea88]
      lru-flags: hot_buffer
      ckptq: [NULL] fileq: [NULL] objq: [0x22dede3f8,0x30ff9c3f8]
      st: XCURRENT md: NULL tch: 2
      flags: block_written_once redo_since_read gotten_in_current_mode
      LRBA: [0x0.0.0] LSCN: [0x0.0] HSCN: [0xffff.ffffffff] HSUB: [34]
      cr pin refcnt: 0 sh pin refcnt: 0
      buffer tsn: 6 rdba: 0x02fd7f00 (11/4030208)
      scn: 0x0001.19bccf84 seq: 0x02 flg: 0x04 tail: 0xcf841002
      frmt: 0x02 chkval: 0x987f type: 0x10=DATA SEGMENT HEADER - UNLIMITED
    Hex dump of block: st=0, typ_found=1
    Dump of memory from 0x0000000274992000 to 0x0000000274994000
    274992000 0000A210 02FD7F00 19BCCF84 04020001  [................]
    274993FF0 00000000 00000000 00000000 CF841002  [................]
      Extent Control Header
      Extent Header:: spare1: 0      spare2: 0      #extents: 66     #blocks: 10239
                      last map  0x00000000  #maps: 0      offset: 4128
          Highwater::  0x047feb5b  ext#: 65     blk#: 731    ext size: 1024
      #blocks in seg. hdr's freelists: 0
      #blocks below: 9946
      mapblk  0x00000000  offset: 65
                       Unlocked
         Map Header:: next  0x00000000  #extents: 66   obj#: 127499 flag: 0x40000000
      Extent Map
       0x02fd7f01  length: 127
       0x0339ea80  length: 128
    ...
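    For what it's worth, one way to double-check which segment a given file#/block# belongs to should be dba_extents (file and block numbers as in my dump above):

    SELECT owner, segment_name, segment_type
      FROM dba_extents
     WHERE file_id = 11
       AND 4030208 BETWEEN block_id AND block_id + blocks - 1;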

    Some time ago, I wrote a python script to print decimal form integer values from an index block dump. I don't know if it will help you, but it may be a start. It only prints the integer equivalent of the first column in the index, as that is what I needed at the time.
    It is called as...
    18:55:31 oracle@oh1xcwcdb01 /u02/admin/wcperf/udump >./blockdump.py wcperf1_ora_21618.trc
    col  0: [ 4]  c4 48 2a 53 converts to 71418200 on line #526 in the block dump.
    col  0: [ 5]  c4 48 2a 53 1d converts to 71418228 on line #640 in the block dump.
    col  0: [ 6]  c5 08 02 20 61 3f converts to 701319662 on line #648 in the block dump.
    col  0: [ 6]  c5 08 03 2f 33 17 converts to 702465022 on line #785 in the block dump.
    col  0: [ 6]  c5 08 03 2f 33 5f converts to 702465094 on line #793 in the block dump.
    col  0: [ 6]  c5 08 03 2f 40 38 converts to 702466355 on line #801 in the block dump.
    col  0: [ 6]  c5 08 03 30 09 5c converts to 702470891 on line #809 in the block dump.
    col  0: [ 6]  c5 08 03 32 61 05 converts to 702499604 on line #817 in the block dump.
    col  0: [ 6]  c5 08 03 33 0b 06 converts to 702501005 on line #827 in the block dump.
    col  0: [ 6]  c5 08 03 33 19 4b converts to 702502474 on line #835 in the block dump.
    col  0: [ 6]  c5 08 03 33 44 3d converts to 702506760 on line #843 in the block dump.
    col  0: [ 6]  c5 08 03 33 45 08 converts to 702506807 on line #851 in the block dump.
    col  0: [ 6]  c5 08 03 33 4e 5a converts to 702507789 on line #859 in the block dump.
    col  0: [ 6]  c5 08 03 33 5f 3b converts to 702509458 on line #867 in the block dump.
    col  0: [ 6]  c5 09 01 01 21 64 converts to 800003299 on line #875 in the block dump.
    col  0: [ 6]  c5 09 01 01 22 3b converts to 800003358 on line #883 in the block dump.
    18:55:41 oracle@oh1xcwcdb01 /u02/admin/wcperf/udump >
    ...and the script itself is below...
    #!/usr/bin/python
    #Author:        Steve Howard
    #Date:          March 23, 2009
    #Organization:  AppCrawler
    #Purpose:       Simple script to print integer equivalents of block dump values in index.
    #Note:          Python 2 syntax (string module functions, print statement).
    import fileinput
    import string
    import sys
    import re

    j=0                                     # current line number in the trace file
    for line in fileinput.input([sys.argv[1:][0]]):
      j=j+1
      if re.match('^col  0:', line):
        # Exponent byte of the Oracle NUMBER, e.g. "c5": strip the "c" and subtract 1
        # to get the power of 100 applied to the first mantissa byte
        # (works for positive integers with exponent bytes c1..c9).
        dep=int(string.replace(string.split(string.split(line,"]")[1])[0],"c","")) - 1
        i=0
        tot=0
        exp=dep
        # Each mantissa byte stores a base-100 digit as digit+1; skip the exponent byte (i == 0).
        for col in string.split(string.split(line,"]")[1]):
          if i > 0:
            tot = tot + ((int(col, 16) - 1) * (100**exp))
            exp = exp - 1
          i = i + 1
        print line.rstrip("\n") + " converts to " + str(tot) + " on line #" + str(j) + " in the block dump."
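    If you just want to sanity-check a single value, the database can also decode its own internal NUMBER bytes - assuming UTL_RAW is available to you:

    SELECT utl_raw.cast_to_number('C4482A53') AS val FROM dual;
    -- should return 71418200, matching the script's output for "c4 48 2a 53"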

  • Index block dump: "header address" doesn't match rdba

    I did a dump of an index leaf block and found that the "header address" doesn't match the rdba. What is the "header address"? I also found that several leaf blocks have the same "header address".
    buffer tsn: 11 rdba: 0x1684d120 (90/315680)
    ========> 0x1684d120 (1)
    header address 4403265988=0x1067481c4
    ========> 0x1067481c4 (2)
    *** SERVICE NAME:(SYS$USERS) 2009-08-04 04:37:36.335
    *** SESSION ID:(14234.24426) 2009-08-04 04:37:36.335
    Start dump data blocks tsn: 11 file#: 90 minblk 315680 maxblk 315680
    buffer tsn: 11 rdba: 0x1684d120 (90/315680) 
      ========>  0x1684d120  (1)
    scn: 0x0324.dda9ec3d seq: 0x01 flg: 0x04 tail: 0xec3d0601
    frmt: 0x02 chkval: 0xeb2a type: 0x06=trans data
    Hex dump of block: st=0, typ_found=1
    Block header dump:  0x1684d120
    Object id on Block? Y
    seg/obj: 0x7ca10  csc: 0x324.dda9ec3d  itc: 17  flg: O  typ: 2 - INDEX
         fsl: 0  fnx: 0x1684cf72 ver: 0x01
    Itl           Xid                  Uba         Flag  Lck        Scn/Fsc
    Leaf block dump
    ===============
    header address 4403265988=0x1067481c4         
    ========>  0x1067481c4  (2)
    kdxcolev 0
    KDXCOLEV Flags = - - -
    kdxcolok 0
    kdxcoopc 0x90: opcode=0: iot flags=I-- is converted=Y
    kdxconco 2
    kdxcosdc 5
    kdxconro 0
    kdxcofbo 36=0x24
    kdxcofeo 7672=0x1df8
    kdxcoavs 7636
    kdxlespl 0
    kdxlende 0
    kdxlenxt 373579108=0x16445d64
    kdxleprv 377801347=0x1684ca83
    kdxledsz 0
    kdxlebksz 7672
    ----- end of leaf block dump -----

    Thanks,
    Daniel

    Hi user646745,
    You didn't say why you need to do an index block dump?
    Also take care that block structures and dumps sometimes differ from version to version, e.g. between 9i and 10g, unless you know exactly what you are looking for.
    Thanks

  • Index Range Scan / Deleted Leaf Blocks

    Hello guys,
    I have a scenario on a big index / table which I cannot reproduce on my test database, so I need to know how Oracle handles the index range scan.
    For example:
    TABLE TAB with the following columns NR (number), I_DATE (date), TEXT (VARCHAR2(50))
    INDEX I_TAB on the column I_DATE.
    Now the index has blevel 2 and many leaf blocks. And now my question.
    Query: SQL> SELECT * from TAB WHERE I_DATE < 10.10.2004
    The index used to contain some values from before 2003, but those rows have already been deleted (so the emptied leaf blocks went to the freelist); the index was not reorganized.
    The execution plan is an INDEX RANGE SCAN on the index I_TAB. Do the branch blocks still have pointers to the deleted leaf blocks that previously contained only 2003 values (so that the INDEX RANGE SCAN scans all these blocks too), or are the pointers to these leaf blocks removed from the branch blocks?
    Thanks and Regards
    Stefan

    You can verify it yourself. See the following:
    SELECT count(*) FROM index_test;
    ==> 1569408
    SELECT count(*) FROM index_test WHERE id <= 2;
    ==> 12
    -- Delete all except first 12 rows
    DELETE FROM index_test WHERE id > 2;
    -- Query and SQL Trace
    BEGIN
    FOR C IN (SELECT /*+index(index_test index_test_idx) deleted */ * FROM INDEX_TEST WHERE ID < 1000000) LOOP
    NULL;
    END LOOP;
    END;
    SELECT /*+index(index_test index_test_idx) deleted */ *
    FROM
    INDEX_TEST WHERE ID < 1000000
    call         count    cpu  elapsed   disk  query  current   rows
    Parse            1   0.00     0.00      0      0        0      0
    Execute          1   0.00     0.00      0      0        0      0
    Fetch            1   0.00     0.00      0   3490        0     12
    total            3   0.00     0.01      0   3490        0     12
    ==> 3490 logical reads only for 12 rows and range scan??
    -- Index tree dump
    ALTER SESSION SET EVENTS 'IMMEDIATE TRACE NAME TREEDUMP LEVEL 67513'
    ----- begin tree dump
    branch: 0x1000124 16777508 (0: nrow: 6, level: 2)
    branch: 0x100b1ca 16822730 (-1: nrow: 557, level: 1)
    leaf: 0x1000125 16777509 (-1: nrow: 512 rrow: 12)
    leaf: 0x1000126 16777510 (0: nrow: 484 rrow: 0)
    leaf: 0x1000127 16777511 (1: nrow: 479 rrow: 0)
    leaf: 0x1000128 16777512 (2: nrow: 479 rrow: 0)
    leaf: 0x1000139 16777529 (3: nrow: 479 rrow: 0)
    leaf: 0x100013a 16777530 (4: nrow: 478 rrow: 0)
    branch: 0x100b401 16823297 (0: nrow: 558, level: 1)
    leaf: 0x100b1c9 16822729 (-1: nrow: 449 rrow: 0)
    leaf: 0x100b1cb 16822731 (0: nrow: 449 rrow: 0)
    leaf: 0x100b1cc 16822732 (1: nrow: 449 rrow: 0)
    ==> leaf:3488, branch: 7
    This means that almost all the branch and leaf blocks are read for only 12 keys.
    You can cross-check this with the output of event "10200", which traces CR reads. You would find that the blocks read by the query are exactly the same as all the index blocks.
    Is this what you meant - that the deleted leaf blocks (which contain no actual data) are read by the range scan? Based on this simple test, the answer is "yes".
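    For reference, a minimal setup along these lines should let you reproduce the test - the padding column and the row count here are only an example, not the exact original data:

    CREATE TABLE index_test (id NUMBER, pad VARCHAR2(100));

    INSERT INTO index_test (id, pad)
      SELECT level, RPAD('x', 100, 'x')
        FROM dual CONNECT BY level <= 1500000;
    COMMIT;

    CREATE INDEX index_test_idx ON index_test (id);

    -- optionally trace consistent reads while the query runs:
    ALTER SESSION SET EVENTS '10200 trace name context forever, level 1';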

  • Block the output in R/3 if order is not compliant

    Hello All
    We are currently implementing the SAP GTS compliance module. To integrate SAP GTS with the rest of the solution, we want to block the output for purchase orders and sales orders as well as the creation of the subsequent documents.
    For the creation of the subsequent documents, there are some SAP notes related to it, and it doesn't really create an issue for us.
    For the output, we are currently struggling to find a correct solution.
    So far, we have found OSS note 900555 to block the printed output for the purchase order, but how can we do it for the others?
    In addition, I didn't find any solution for the sales order output (especially since we have some EDI messages that we want to block).
    Currently we are investigating another way to do it:
    - Send back a user status that we will use as a condition in output determination. We still have some concerns about this solution, because it means that by default we would have to set this user status to "non-compliant" for all orders, and only when the status is sent back from GTS can we execute the output.
    An issue that the developer also raised to me is that in R/3 the output determination is executed before the first call to GTS.
    I hope that some members of the community have a better solution, ideally one that covers POs and SOs with a real-time call.
    Thank you in advance
    Nicolas

    Hello Bastian,
    We had the same issue and didn't find anything useful to block the outputs completely.
    Instead, for EDI outputs you can probably have an extra text segment added to the IDoc structure and put the document compliance status as Yes/No in that segment. The document status will then be available.
    However, as we expected, the outputs themselves cannot be blocked. But at least a notification about the status can be sent to the third party. You can liaise with the third party to have logic implemented that checks the status of this field and passes the information on to the destination.
    Regards
    Dhilipan

  • How to download the blocked ALV output to PDF file.

    How can I download blocked ALV output to a PDF file?
    I am able to download the blocked ALV output in PDF format,
    but each block in the ALV is displayed on a different page of the PDF.
    In my report I have 4 blocks on 1 page; I can see the output in the PDF, but on different pages.
    How can I avoid the page break in the PDF?
    Thanks,
    Ravi Yasoda.

    hi,
    I believe that you have 4 containers on the screen, each with an individual ALV display. In this case, there is no way to get a combined PDF output, to my knowledge.
    However, you can use a Smartform/SAPscript as output, which would allow you to display the ALV in blocks and also print it as one.
    Regards,
    Nirmal

  • Could not read boot block (input/output error) PowerBook G4

    Hello
    My friend gave me a PowerBook G4 and a copy of OS X Lion. I would like to install it, but the internal hard drive is not available within the "Select Destination" area of the installation process. I tried to repair the disk, which is labelled disk0s1, but shortly after the repair is underway I receive this message:
    **/dev/disk0s1 could not read boot block (input/output error) Error: The underlying task reported failure on exit. 1 non HFS volume checked, 1 volume could not be repaired because of an error.
    Within Disk Utility, disk0s1 is grey; I can try to repair it, but the above error occurs. Do you know what the problem is and how I can fix it?

    Since it sounds like you are going to install the system fresh, you should erase the drive.  Boot from the installer disc, then select the language but do not start the installer.  From the menu bar, you should have either an Applications or an Installer menu. From one of the menus, you can then start Disk Utility.
    From the drives and volumes pane, select the internal drive hardware listing. 
    This will now allow you to partition or erase the disc.  Now click the Erase tab and select Mac OS Extended (Journaled) as the format, then click the "Erase" button.  (You can name the volume that will actually be created, for instance "MacintoshHD" or whatever you want.)
    Once the erase has completed, you need to check the information for the hard drive at the bottom of the window.  Be sure that the partition scheme is Apple Partition Map.  If for some reason it is something else, like GUID, you won't be able to install MacOS on it.  You will need to click the Partition tab, click the "Options..." button and select Apple Partition Map, then click the "Partition" button.
    Once this is done, then you can quit Disk Utility and you should be able to install Tiger on that HD.

  • How to obtain a BLOCK DUMP of a DATAFILE in ORACLE7 / ORACLE8 / at the OS level

    Product: ORACLE SERVER
    Date written: 1999-05-24
    How to obtain a block dump of a datafile in Oracle7 / Oracle8 / at the OS level
    A 'block dump' lets you dump the contents of a block inside the database.
    The dump obtained this way contains everything stored in the block. Note, however, that it is not an
    OS-level image dump: Oracle renders the contents using predefined symbols intended to help the user
    understand them, so interpreting the dump is Oracle-dependent and requires separate reference material.
    This note does not cover how to interpret the dump; it only describes how to obtain one.
    1. How to take a block dump in Oracle7
    In Oracle7 a block dump can only be taken while the database is OPEN, and the datafile
    must also be ONLINE.
    (1) Obtain the DBA of the block to be dumped as a decimal value (see the example after step (3) below).
    To derive the DBA from the file number and block number, see <Bulletin:11508>.
    (2) Run the following statement to generate a trace file containing the dump:
    sqlplus system/manager
    SQL>alter session set events 'immediate trace name BLOCKDUMP level <DBA>';
    (3) Check the location where the trace file was created, as follows:
    os>svrmgrl
    SVRMGR>connect internal
    SVRMGR>show parameter user_dump_dest
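    As an illustration of step (1), the decimal DBA can be computed from a file and block number with DBMS_UTILITY - file 1 and block 5586 below are only example values:
    SELECT dbms_utility.make_data_block_address(1, 5586) AS dba FROM dual;
    The value returned is what goes into the BLOCKDUMP event shown in step (2).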
    2. BLOCK DUMP in Oracle8
    In Oracle7 you first had to obtain the DBA, and the syntax of the SQL statement that generates the
    dump from the DBA was not exactly familiar. Oracle8 makes block dumps possible in a simpler,
    more user-friendly form.
    While connected with sqlplus, you can obtain a dump by specifying the file name or number and the
    block number or range, as listed below:
    - ALTER SYSTEM DUMP DATAFILE {'filename'}|{filenumber};
    - ALTER SYSTEM DUMP DATAFILE {'filename'}|{filenumber} BLOCK {blockno};
    - ALTER SYSTEM DUMP DATAFILE {'filename'}|{filenumber} BLOCK MIN {blockno}
    BLOCK MAX {blockno};
    Here blockno is the decimal number of the block to be dumped.
    For example:
    - ALTER SYSTEM DUMP DATAFILE 1 BLOCK 5586;
    - ALTER SYSTEM DUMP DATAFILE 1 BLOCK MIN 5585 BLOCK MAX 5586;
    - ALTER SYSTEM DUMP DATAFILE '/u01/oradata/MYDB/system01.dbf' BLOCK 98;
    Note: Even for an invalid command, e.g. entering a wrong file or block number, the normal
    'Statement processed' message is still returned. In such a case the trace file contains an
    error message like the following instead of the dump:
    Error: alter system dump datafile: input file # 100 is too big
    If the BLOCK clause is omitted, a dump is generated for every block of the datafile.
    In that case the size of the trace file is limited by MAX_DUMP_FILE_SIZE, so the resulting dump
    may not cover all blocks.
    When using {filenumber}, the database must be OPEN and the file must be ONLINE. Also, {filenumber}
    must be the absolute file number, not the number relative to the tablespace.
    Using 'filename', you can even dump a datafile of another database that has the same block size.
    In that case the instance performing the dump must be at least in NOMOUNT state, i.e.:
    SVRMGR>startup nomount
    SVRMGR>alter system dump datafile 'mnt3/rctest73/server/eykim/test01.dbf'
    block 88;
    The trace file containing the dump output is created in the USER_DUMP_DEST directory.
    How to check USER_DUMP_DEST was described in step 1-(3) (Oracle7) above.
    3. OS block dump
    UNIX: dd if=dbfile.dbf bs=2k skip={block} count=1 | od -x > dump.out
    Here {block} specifies the number of blocks to skip, i.e. the number of blocks before the one
    you want to dump.
    NT: tools such as the following can be used:
    MKS Toolkit , FileView (Shareware), HEdit

  • [svn] 2477: Addendum to SDK-16086 - clear up a block scope-related var name warning.

    Revision: 2477
    Author: [email protected]
    Date: 2008-07-14 14:40:18 -0700 (Mon, 14 Jul 2008)
    Log Message:
    Addendum to SDK-16086 - clear up a block scope-related var name warning.
    Thanks Deepa :)
    Ticket Links:
    http://bugs.adobe.com/jira/browse/SDK-16086
    Modified Paths:
    flex/sdk/branches/3.1.0/frameworks/projects/rpc/src/mx/messaging/ChannelSet.as

    Remember that Arch Arm is a different distribution, but we try to bend the rules and provide limited support for them.  This may or may not be unique to Arch Arm, so you might try asking on their forums as well.

  • Leaf blocks,blevel,clustering factor

    Hi,
    I am trying to understand the meaning of
    leaf_blocks, clustering_factor, and blevel.
    Regards
    MMU

    For a non-technical but still pretty useful explanation of the CLUSTERING_FACTOR (and how not to use it!), see: http://www.dizwell.com/prod/node/22
    Leaf blocks are the index equivalent of table blocks: blocks of disk space in which your index entries are actually stored. Indexes also have branch blocks and a root block which act as 'signposts' to the leaf nodes: "If you're looking for the entries beginning with E, those are in that leaf block over there. Entries beginning with F start in that other leaf block there, though".
    Depending on how many 'signposts' you have to read before you get to the actual leaf block entry you're interested in, the index can have a smaller or bigger "blevel". Sometimes, we talk about the index having a "height": if you have to visit a root node, which points you off to one of the branch nodes, which then points you to the leaf entry you want, that's a height of 3. It also happens to be a blevel of 2. The difference is probably not worth worrying about for now. Suffice it to say: an index with a blevel of (say) 2 is going to be a lot quicker and more efficient to use than one with a blevel of (say) 5.
    The optimiser uses those statistics to decide whether or not it's worth its while using the index to satisfy a query or not.
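    If it helps, you can look at these three statistics for your own indexes once statistics have been gathered - the table name below is just a placeholder:
    SELECT index_name, blevel, leaf_blocks, clustering_factor
      FROM user_indexes
     WHERE table_name = 'MY_TABLE';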

  • Index leaf blocks???

    Can anybody explain the details of index leaf blocks to me?
    What are the possible usages of those?

    Hi DKar,
    An index leaf block is the lowest-level block of a B-tree index; leaf blocks store the indexed key values together with the rowids of the indexed rows.
    Their usage is transparent to you.
    I suggest to take a look here:
    http://download-uk.oracle.com/docs/cd/B14117_01/server.101/b10743/schema.htm#sthref883
    search for: The Internal Structure of Indexes
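    You can also check how full the leaf blocks of one of your own indexes are - note that VALIDATE STRUCTURE locks the table while it runs, and the index name is a placeholder:
    ANALYZE INDEX my_index VALIDATE STRUCTURE;
    SELECT lf_blks, lf_rows, del_lf_rows, pct_used
      FROM index_stats;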
    Regards
    Acr

  • Why Block Video Output?

    I have just purchased a brand new shiny Yamaha AV amp and Yamaha iPod dock (which was supposed to be video compatible) in the hope of using the on-screen facility to select music tracks. But oh no, thanks to Apple's ludicrous decision to block video output unless you're using Apple's own dock and cables, I can't!
    Apple - please remove this ridiculous restriction!

    Hi, 
    I'm not condoning it, but in my estimation this will not change. Apple implemented this to secure the video so that they could reach the agreements they have with the film companies regarding online purchasing and rentals. If the limit were ever officially removed, I'm sure Apple would lose the iTunes Store content overnight. It's for this reason that the first video iPod (the fifth generation) can't accept rental films.
    mrtotes

  • Need explanation meaning from output "/var/crash/`hostname'/*"

    Dear IT Experts,
    Once, I ran the command "ls -al /var/crash/`hostname`/*".
    This command gives me the following output:
    -rw-r--r-- 1 root root 2 May 8 2007 /var/crash/drserv1/bounds
    -rw-r--r-- 1 root root 784890 Jan 5 2005 /var/crash/drserv1/unix.2
    -rw-r--r-- 1 root root 776755 May 8 2007 /var/crash/drserv1/unix.3
    -rw-r--r-- 1 root root 419938304 Jan 5 2005 /var/crash/drserv1/vmcore.2
    -rw-r--r-- 1 root root 276365312 May 8 2007 /var/crash/drserv1/vmcore.3
    Can anyone help me understand what this output means? What can I do to prevent these "crash" files
    from appearing anymore? When I try to access one of those files, I get strange characters
    that no person can understand.
    Please help
    Thanks for any response.
    Regards,
    Ferianto

    1. "You said that I can delete the unix.* and vmcore.* files. How about the "bounds" file?
    Can I delete it, and what is this file used for?"
    The bounds file is just a counter. On the first crash, "bounds" is 0, so the files are named unix.0 and vmcore.0. On the next crash, bounds is incremented to 1, so the files are named unix.1 and vmcore.1. It's best not to touch this file.
    2. "You are right that I have no ability or tools to analyze it. Could you
    tell me, are there any tools that we can use? (Maybe we can find the tools using "google"?) Or must we send them to SUN support?"
    The way I understand it, back in the cowboy days the tools used to be available to everyone. Over the years, more and more 3rd-party support vendors have popped up and started using the tools that were Sun's bread and butter.
    As you can imagine, these tools were developed by Sun and were being used by their competitors to erode their market share, so Sun has rightfully withdrawn them from the public arena and has reinforced agreements with customers, i.e. 3rd parties can no longer use explorer to provide support for Sun hosts.
    So the answer to your question is yes, you can only send crash dumps to Sun for analysis. And you can obviously only do this if you have a valid support contract.
    3. "Can you tell me what I should check if I want to know that my machine is in a healthy condition, please?"
    I've answered this in your other post on "explorer".
    Welcome to Solaris!
    Cheers,
    Erick Ramirez
    Melbourne, Australia

  • Wsdl problem - why do i need to name output vars?

    I have a web service that is working (WSDL: http://getanagram.com/wsdevel.wsdl).
    I can access it from ColdFusion but not in the standard way. See
    below for examples. The problem is that I don't want to specify the
    output variables up front inside the call to the web service. I
    want the two output variables to go into a single struct like all
    the coldfusion documentation implies.
    I'd like to be able to access it this way:
    <cfinvoke
    webservice="http://getanagram.com/wsdevel.wsdl"
    method="GetTypeScores"
    returnvariable="foo">
    <cfinvokeargument name="text" value="123 456 7890"/>
    </cfinvoke>
    Output: <cfoutput>#foo#</cfoutput>
    But I get the error "Web service operation "GetTypeScores" with parameters {text={123 456 7890}} could not be found."
    Accessing it this way works PERFECTLY without errors, but it requires that I name the output variables in the call, which I do not want:
    <cfscript>
    ws = CreateObject("webservice", "http://getanagram.com/wsdevel.wsdl");
    ws.GetTypeScores(text='123 456 7890', ContactScore="ContactScore", EventScore="EventScore");
    </cfscript>
    Contact Score: <cfdump var="#ContactScore#" /><br>
    Event Score: <cfdump var="#EventScore#" />
    Exactly why is this happening?

    aparsons wrote:
    > I have a web service that is working (WSDL:
    >
    http://getanagram.com/wsdevel.wsdl).
    I can access it from ColdFusion but not in
    > the standard way. See below for examples. The problem is
    that I don't want to
    > specify the output variables up front inside the call to
    the web service. I
    > want the two output variables to go into a single struct
    like all the
    > coldfusion documentation implies.
    >
    > I'd like to be able to access it this way:
    >
    > <cfinvoke
    > webservice="
    http://getanagram.com/wsdevel.wsdl"
    > method="GetTypeScores"
    > returnvariable="foo">
    > <cfinvokeargument name="text" value="123 456
    7890"/>
    > </cfinvoke>
    > Output: <cfoutput>#foo#</cfoutput>
    The web service requires those parameters so you need to send them (nothing wrong with CF here):
    <element name="TypeScores">
      <complexType>
        <sequence>
          <element name="ContactScore" type="xsd:int" minOccurs="1" maxOccurs="1"/>
          <element name="EventScore" type="xsd:int" minOccurs="1" maxOccurs="1"/>
        </sequence>
      </complexType>
    </element>
    <mack />

  • How to dump output to OC4J logging?

    I wrote a webservice implementation class and tried to add some output to the log file. I assume the log file is log.xml, but I couldn't find any of my output. I was told that I could just use System.out.println for dumping the logging. It doesn't seem to work.
    Please help.
    Thanks,
    Jason

    It depends on the version of OAS I believe.
    Now in my case, we were on OAS 10.1.2.x. OC4J is not standalone. It is a part of the OAS.
    The System.out and System.err output goes to
    OASHOME/opmn/logs/OC4J~home~default~island~1ETC
    This is true if you do not redirect out and err to some other specific files; any redirection would be specified in the App Server OC4J instance JVM options (check opmn.xml).
    Some more references
    10.1.3
    http://download-west.oracle.com/docs/cd/B31017_01/web.1013/b28950/logadmin.htm
    http://download-west.oracle.com/docs/cd/B31017_01/core.1013/b28944/appendix.htm
    10.1.2
    http://download-west.oracle.com/docs/cd/B14099_19/web.1012/b14011/advanced.htm#i1027867
    I am not sure whether this is true for the webservices implementation, though.
    Let us know what you find out.
