Using large block sizes for index and table spaces

" You are not using large blocksizes for your index tablespaces. Oracle research proves that indexes will build flatter tree structures in larger blocksizes.
Is this a generic statement that I can use for all tables or indexes? I also have batch and online activity. My primary target is batch and it should not impact online. Not sure if both have common tables.
How to find the current block size used for tables and index? is there a v$parameter query?
What is an optimal block size value for batch?
How do I know when flatter tree str has been achieved using above changes? Is there a query to determine this?
What about tables, what is the success criterion for tables. can we use the same flat tree str criterion? Is there a query for this?

user3390467 wrote:
" You are not using large blocksizes for your index tablespaces. Oracle research proves that indexes will build flatter tree structures in larger blocksizes.
Is this a generic statement that I can use for all tables or indexes? This is a generic statement used by some consultants. Unfortunately, it is riddled with exceptions and other considerations.
One consultant in particular seems to have anecdotal evidence that using different block sizes for index (big) and data (small) can yield almost miraculous improvements. However, that can not be backed up due to NDA. Many of the rest of us can not duplicate the improvements, and indeed some find situations where that results in a degradation (esp with high insert/update rates from separated transactions).
I also have batch and online activity. My primary target is batch and it should not impact online. Not sure if both have common tables.
How to find the current block size used for tables and index? is there a v$parameter query?
What is an optimal block size value for batch?
How do I know when flatter tree str has been achieved using above changes? Is there a query to determine this?
What about tables, what is the success criterion for tables. can we use the same flat tree str criterion? Is there a query for this?I'd strongly recommend that you
1) stop using generic tools to analyze specific problems
2) define your problem in detail (what are you really trying to accomplish? It seems like performance tuning, but you never really state that)
3) define the OS and DB version - in detail. Give rev levels and patch levels.
If you are having a serious performance issue, I strongly recommend you look at some performance tuning specialists like "http://www.method-r.com/", "http://www.miracleas.dk/", "http://www.hotsos.com/", "http://www.pythian.com/", or even Oracle's Performance Tuning consultants. Definitely worth the price of admission.
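As for the mechanical part of the question (finding the block sizes currently in use), here is a minimal sketch against the standard dictionary views; db_block_size is the instance default, and each tablespace records its own (the owner name below is hypothetical):
-- Instance-wide default block size (the v$parameter query you asked about)
SELECT value FROM v$parameter WHERE name = 'db_block_size';
-- Block size per tablespace; tables and indexes inherit the block size
-- of the tablespace they live in
SELECT tablespace_name, block_size FROM dba_tablespaces ORDER BY tablespace_name;
-- Which tablespace (and hence which block size) a given segment uses
SELECT s.segment_name, s.segment_type, s.tablespace_name, t.block_size
FROM dba_segments s, dba_tablespaces t
WHERE t.tablespace_name = s.tablespace_name
AND s.owner = 'SCOTT';  -- hypothetical owner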

Similar Messages

  • Any reliable cases where larger block sizes are useful?

    So I did some googling around to read up on 16KB or larger blocksizes. I found a series of articles by Jonathan Lewis and Richard Foote (plus other DBAs whose posts I trust) debunking the use of larger blocksizes. I have not been able to find a single article, blog post, or forum post with a legitimate case where a larger block size actually improves performance. It's hard to google this stuff because the good stuff is buried beneath all the trash.
    So have any of the Oak Table people, or other guys who write articles where they do quality testing, found cases where larger block sizes are useful?
    I don't have a specific need. I'm just curious. Every time I look this up I get buried in generic copy-and-paste blog posts that copy the docs, the generic test cases that were debunked by the guys above, and other junk. So it's hard to look for this.

    Guess2 wrote:
    So I did some googling around to read up on 16KB or larger blocksizes. I found a series of articles by Jonathan Lewis and Richard Foote (plus other DBAs whose posts I trust) debunking the use of larger blocksizes. I have not been able to find a single article, blog post, or forum post with a legitimate case where a larger block size actually improves performance. It's hard to google this stuff because the good stuff is buried beneath all the trash.
    So have any of the Oak Table people, or other guys who write articles where they do quality testing, found cases where larger block sizes are useful?
    Lurking in the various things I've written about block sizes there are a couple of comments about using different block sizes (occasionally) for LOBs - though this might be bigger or smaller depending on the sizes of the actual LOBs and the usage pattern: it also means you automatically separate the LOB cache from the main cache, which can be very helpful.
    I've also suggested that for IOTs (index organized tables), where the index entries can be fairly large and you don't want to create an overflow segment, you may get some benefit if the larger block size typically allows all rows for a given (partial) key value to reside in just one or two blocks. The same argument can apply, though with slightly less strength, for "fat" indexes (i.e. ones you've added columns to in order to avoid visiting the table for very important time-critical queries). The drawback in these two cases is that you're second-guessing, and to an extent choking, the LRU algorithms, and you may find that the gain on the specific indexes is obliterated by the loss on the rest of the caching activity.
    Regards
    Jonathan Lewis
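    To make the LOB point concrete, here is a sketch of moving LOB storage into its own 16K tablespace; the cache size, file path, datafile size, and object names are all illustrative, and a matching db_16k_cache_size must be set before the tablespace is usable:
    -- A non-default block size needs its own buffer cache configured first
    ALTER SYSTEM SET db_16k_cache_size = 128M;
    -- Tablespace with a 16K block size
    CREATE TABLESPACE lob_16k DATAFILE '/u01/oradata/orcl/lob_16k_01.dbf' SIZE 1G BLOCKSIZE 16K;
    -- Direct a table's LOB segment into it, which also separates the LOB
    -- cache from the main cache as described above
    CREATE TABLE docs (id NUMBER PRIMARY KEY, body CLOB)
    LOB (body) STORE AS (TABLESPACE lob_16k);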

  • Larger block size = faster DB?

    hi guys,
    (re Oracle 9.2.0.7)
    It seems that a larger block size makes the database perform operations faster. Is this correct? If so, why would anyone use 2K block sizes?
    thanks

    Hi Howard,
    it's uncharted territory, especially at the higher blocksizes which seem to be less-well-tested than the smaller ones
    Yup, Oracle releases junkware all the time, untested poo . . .
    You complain, file a bug report, and wait for years while NOTHING happens . . .
    Tell me Howard, how incompetent does Oracle Corporation have to be not to test something as fundamental as blocksizes?
    I've seen Oracle tech support in action; it's like watching the Keystone Cops.
    Oracle does not reveal the depth of their quality assurance testing, but many Oracle customers believe that Oracle does complete regression testing on major features of their software, such as blocksize. However, Oracle ACE Director Daniel Morgan says that “The right size is 8K because that is the only size Oracle tests”, a serious allegation, given that the Oracle documentation, Oracle University and MetaLink all recommend non-standard blocksizes under special circumstances:
    - Large blocks give more data transfer per I/O call.
    - Larger blocksizes produce less fragmentation (row chaining and row migration) of large objects (LOB, BLOB, CLOB).
    - Indexes like big blocks because index height can be lower and more space exists within the index branch nodes.
    - Moving indexes to a larger blocksize saves disk space. Oracle says "you will conserve about 4% of data storage (4GB on every 100GB) for every large index in your database by moving from a 2KB database block size to an 8KB database block size."
    So, does Oracle really not do testing with non-standard blocksizes? Oracle ACE Director Morgan says that he was quoting Bryn Llewellyn of Oracle Corporation:
    “Which brings us full circle to the statement Brynn made to me and that I have repeated several times in this thread. Oracle only tests 8K blocks.”
    Wow.
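    Whatever one makes of the claims above, the "index height" effect is at least easy to measure before and after a move. A sketch (the table name is hypothetical; gather statistics first so the values are current):
    -- BLEVEL is the number of branch levels between the root block and the
    -- leaf blocks; a "flatter" B-tree shows a lower BLEVEL
    SELECT index_name, blevel, leaf_blocks
    FROM user_indexes
    WHERE table_name = 'MY_TABLE'  -- hypothetical
    ORDER BY blevel DESC;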

  • Choosing block size for RAID 0 & Final Cut

    Hi.
    I now have 3 500GB internal Seagate drives in bays 2/3/4 and want to make a striped 1.5TB RAID to use with Final Cut Studio 2. The help page talks about choosing a "large" data block size for use with video, but makes no specific size suggestion. What value would you recommend that I select for the block size? I haven't been in there yet so I don't know what the choices are.
    Any other settings I should be aware of that will optimize the RAID performance for video capture and editing? Thanks!
    Fred

    If you're using Disk Utility to set up your RAID, when you go to the RAID tab you'll see an Options button near the bottom of the window. Clicking this will open a small menu where you can set the data block size; the largest is 256K, which is what you'd want to use.
    As for your other question, have a look at this website: http://bytepile.com/raid_class.php
    Note that Disk Utility can only set up RAID 0 and RAID 1 (if I remember rightly).

  • Default block size for UFS format in OSX?

    Hi,
    I formatted an external drive as "unix format" (UFS) using Disk Utility, which subsequently became unrecognizable by the FireWire controller. Now, on a Unix box, the superblock of this drive is coming up as corrupted or nonexistent. Does anyone know where I can find the default block size for UFS in OS X? I need to specify a backup superblock. Thanks!
    -Dan

    If it is just scratch, run some benchmarks with it set to 128K and 256K and see how it feels with each. The default is too small, though some find it acceptable for small images. For larger files you want larger blocks, and for PS scratch you definitely want 128K or 256K.

  • ORA-00349: failure obtaining block size for '+Z'  in Oracle XE

    Hello,
    I am attempting to move the online redo log files to a new flash recovery area location created on network drive "Z" (Oracle Database 10g Express Edition Release 10.2.0.1.0).
    When I run @?/sqlplus/admin/movelogs; in SQL*Plus as a local sysdba, I get the following errors:
    ERROR at line 1:
    ORA-00349: failure obtaining block size for '+Z'
    ORA-06512: at line 14
    Please let me know how to go about resolving this issue.
    Thank you.
    See below for detail:
    Connected.
    SQL> @?/sqlplus/admin/movelogs;
    SQL> Rem
    SQL> Rem $Header: movelogs.sql 19-jan-2006.00:23:11 banand Exp $
    SQL> Rem
    SQL> Rem movelogs.sql
    SQL> Rem
    SQL> Rem Copyright (c) 2006, Oracle. All rights reserved.
    SQL> Rem
    SQL> Rem NAME
    SQL> Rem movelogs.sql - move online logs to new Flash Recovery Area
    SQL> Rem
    SQL> Rem DESCRIPTION
    SQL> Rem This script can be used to move online logs from old online log
    SQL> Rem location to Flash Recovery Area. It assumes that the database
    SQL> Rem instance is started with new Flash Recovery Area location.
    SQL> Rem
    SQL> Rem NOTES
    SQL> Rem For use to rename online logs after moving Flash Recovery Area.
    SQL> Rem The script can be executed using following command
    SQL> Rem sqlplus '/ as sysdba' @movelogs.sql
    SQL> Rem
    SQL> Rem MODIFIED (MM/DD/YY)
    SQL> Rem banand 01/19/06 - Created
    SQL> Rem
    SQL>
    SQL> SET ECHO ON
    SQL> SET FEEDBACK 1
    SQL> SET NUMWIDTH 10
    SQL> SET LINESIZE 80
    SQL> SET TRIMSPOOL ON
    SQL> SET TAB OFF
    SQL> SET PAGESIZE 100
    SQL> declare
    2 cursor rlc is
    3 select group# grp, thread# thr, bytes/1024 bytes_k
    4 from v$log
    5 order by 1;
    6 stmt varchar2(2048);
    7 swtstmt varchar2(1024) := 'alter system switch logfile';
    8 ckpstmt varchar2(1024) := 'alter system checkpoint global';
    9 begin
    10 for rlcRec in rlc loop
    11 stmt := 'alter database add logfile thread ' ||
    12 rlcRec.thr || ' size ' ||
    13 rlcRec.bytes_k || 'K';
    14 execute immediate stmt;
    15 begin
    16 stmt := 'alter database drop logfile group ' || rlcRec.grp;
    17 execute immediate stmt;
    18 exception
    19 when others then
    20 execute immediate swtstmt;
    21 execute immediate ckpstmt;
    22 execute immediate stmt;
    23 end;
    24 execute immediate swtstmt;
    25 end loop;
    26 end;
    27 /
    declare
    ERROR at line 1:
    ORA-00349: failure obtaining block size for '+Z'
    ORA-06512: at line 14
    Can someone point me in the right direction as to what I may be doing wrong here - Thank you!

    888442 wrote:
    I am trying to drop and recreate ONLINE redo logs on my STANDBY DATABASE (11.1.0.7), but I am getting the error below.
    On primary we have done the changes, i.e. we added a new logfile with a bigger size and 3 members. When trying to do the same on the standby we get this error.
    Our database is in Active DG read-only mode and the Oracle version is 11.1.0.7.
    I have deferred the log apply and cancelled managed recovery, and DG is in manual mode.
    SQL> alter database Add LOGFILE GROUP 4 ('+DT_DG1','+DT_DG2','+DT_DG3') SIZE 1024M;
    alter database Add LOGFILE GROUP 4 ('+DT_DG1','+DT_DG2','+DT_DG3') SIZE 1024M
    ERROR at line 1:
    ORA-00349: failure obtaining block size for '+DT_DG1'
    First, why are you dropping and recreating online redo log files on the standby?
    On a standby, only standby redo log files will be used. Not sure what you are trying to do.
    Here is an example of how to create online redo log files. Check that the diskgroup is mounted and has sufficient space to create them.
    sys@ORCL> select member from v$logfile;
    MEMBER
    C:\ORACLE\ORADATA\ORCL\REDO03.LOG
    C:\ORACLE\ORADATA\ORCL\REDO02.LOG
    C:\ORACLE\ORADATA\ORCL\REDO01.LOG
    sys@ORCL> alter database add logfile group 4 (
      2     'C:\ORACLE\ORADATA\ORCL\redo_g01a.log',
      3     'C:\ORACLE\ORADATA\ORCL\redo_g01b.log',
      4     'C:\ORACLE\ORADATA\ORCL\redo_g01c.log') size 10m;
    Database altered.
    sys@ORCL> select member from v$logfile;
    MEMBER
    C:\ORACLE\ORADATA\ORCL\REDO03.LOG
    C:\ORACLE\ORADATA\ORCL\REDO02.LOG
    C:\ORACLE\ORADATA\ORCL\REDO01.LOG
    C:\ORACLE\ORADATA\ORCL\REDO_G01A.LOG
    C:\ORACLE\ORADATA\ORCL\REDO_G01B.LOG
    C:\ORACLE\ORADATA\ORCL\REDO_G01C.LOG
    6 rows selected.
    sys@ORCL>
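    To verify the diskgroup state mentioned above before adding the logfile group, a quick sketch against the standard view (the diskgroup names are taken from the failing statement):
    -- Confirm the target diskgroups are MOUNTED and have free space
    SELECT name, state, total_mb, free_mb
    FROM v$asm_diskgroup
    WHERE name IN ('DT_DG1', 'DT_DG2', 'DT_DG3');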
    Your profile: 888442 (Newbie), registered Sep 29, 2011, 12 total posts, 8 total questions (7 unresolved).
    Close your threads if answered; keep the forum clean.

  • Question about Global index and Table Partitions

    I have created a global index for a partitioned table. In the future, partitions will be dropped from the table. Do I need to do anything to the global index? Does it need to be rebuilt, or will it be OK if partitions are dropped from the table?

    >
    I have created a global index for a partitioned table. In the future, partitions will be dropped from the table. Do I need to do anything to the global index? Does it need to be rebuilt, or will it be OK if partitions are dropped from the table?
    >
    You can use the UPDATE INDEXES clause. That allows users to keep using the table, and Oracle will keep the global indexes updated.
    Otherwise, as already stated, all global indexes will be marked UNUSABLE.
    See 'Dropping Partitions' in the VLDB and Partitioning Guide
    http://docs.oracle.com/cd/E11882_01/server.112/e25523/part_admin002.htm#i1007479
    >
    If local indexes are defined for the table, then this statement also drops the matching partition or subpartitions from the local index. All global indexes, or all partitions of partitioned global indexes, are marked UNUSABLE unless either of the following is true:
    You specify UPDATE INDEXES (Cannot be specified for index-organized tables. Use UPDATE GLOBAL INDEXES instead.)
    The partition being dropped or its subpartitions are empty
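    A sketch of the documented clause in use (table and partition names are hypothetical):
    -- Drop a partition while keeping global indexes usable; slower than a
    -- plain drop, since Oracle maintains the index entries as it goes
    ALTER TABLE sales DROP PARTITION sales_q1_2010 UPDATE INDEXES;
    -- Confirm nothing was left UNUSABLE
    SELECT index_name, status FROM user_indexes WHERE table_name = 'SALES';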

  • Optimal NTFS block size for Oracle 11G on Windows 2008 R2 (OLTP)

    Hi All,
    We are currently setting up an Oracle 11G instance on a Windows 2008 R2 server and were looking to see if there was an optimal NTFS block size. I've read the following: http://docs.oracle.com/cd/E11882_01/win.112/e10845/specs.htm
    But it only mentions the block sizes that can be used (2K to 16K). Basically, what I got out of it was that the different block sizes affect the maximum number of database files possible for each database.
    Is there an optimal NTFS block size for Oracle 11G OLTP system on Windows?
    Thanks in advance

    Is there an optimal NTFS block size for Oracle 11G OLTP system on Windows?
    Ideally the FS block size should equal the Oracle tablespace block size,
    or at least divide evenly into the Oracle block size.
    For example, if the Oracle block size is 8K then the NTFS block size is best at 8K, but 4K or 2K will also work.
    Both should also be a whole multiple of the disk sector size. Older disks had 512-byte sectors; contemporary HDDs usually have an internal sector size of 4K.

  • Gathering statistics on interMedia indexes and tables

    Has anyone found any differences (like which one is better or worse) between using the ANALYZE SQL command, the dbms_utility package, or the dbms_stats package to compute or estimate statistics for interMedia text indexes and tables on 8.1.6? I've read the documentation on the subject, but it is still unclear which method should be used. The interMedia text docs say the ANALYZE command should be used, and the dbms_stats docs say that dbms_stats should be used.
    Any help or past experience would be appreciated.
    Thanks,
    jj

    According to the Support Document "Using statistics with Oracle Text" (Doc ID 139979.1), no:
    Q. Should we gather statistics on the underlying DR$/DR# tables? If yes/no, why?
    A. The recommendation is NO. All internal recursive queries have hints to fix the plans that are deemed most optimal. We have seen in the past that statistics on the underlying DR$ tables may cause query plan changes leading to serious query performance problems.
    Q. Should we gather statistics on Text domain indexes ( in our example above, BOOKS_INDEX)? Does it have any effect?
    A: As documented in the reference manual, gathering statistics on Text domain index will help CBO to estimate selectivity and costs for processing a CONTAINS() predicate. If the Text index does not have statistics collected, default selectivity and cost will be used.
    So 'No' on the DR$ tables and indexes, 'yes' on the user table being indexed.
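    Following that recommendation, a sketch of gathering statistics on the indexed base table only (owner and table names are hypothetical):
    -- Gather statistics on the user table carrying the Text index, but
    -- leave the internal DR$ tables alone
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname => 'SCOTT',  -- hypothetical owner
        tabname => 'BOOKS',  -- hypothetical indexed table
        cascade => TRUE);    -- also gathers statistics on its indexes
    END;
    /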

  • Primary key constraint for index-organized tables or sorted hash cluster

    We had a few tables dropped without using CASCADE CONSTRAINTS. Now when we try to recreate a table we get an error message stating that the name is "already used by an existing constraint". We cannot delete the constraint because it gives us the error "ORA-25188: cannot drop/disable/defer the primary key constraint for index-organized tables or sorted hash cluster". Is there some way around this? What can be done to correct this problem?

    What version of Oracle are you on?
    And have you searched for the constraint to see what it's currently attached to?
    select * from all_constraints where constraint_name = :NAME;
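    For what it's worth, ORA-25188 appears because the primary key of an index-organized table is the table's physical structure, so it cannot be dropped separately; you drop the IOT itself instead. A slightly wider diagnostic sketch, assuming access to the DBA views:
    -- See what the conflicting name is attached to before deciding
    -- whether the owning object can be dropped
    SELECT owner, constraint_name, constraint_type, table_name, status
    FROM dba_constraints
    WHERE constraint_name = :NAME;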

  • Use of block sizes with APO pgm /SAPAPO/RDMCPPROCESS

    I am using APO V5.1.
    I am using the standard program /SAPAPO/RDMCPPROCESS to send APO planning data back to the connected ECC system.
    One of the parameters is block size.
    I am using the default of 1000.
    But what are the performance implications of using different block sizes? What is 'good practice' for using this block size parameter?
    Thanks for any advice on this point...

    Hi,
    The F1 help itself gives good information on this topic.
    The block size indicates how many change pointers are processed in one shot (one block).
    So, if you increase the block size, more change pointers are processed in one shot, meaning that the memory requirement is higher. On the other hand, since more change pointers are processed per shot, fewer blocks are required. So, if your system has sufficient capacity to process the number of change pointers that you specify in a block, increasing the block size increases the load on memory but makes the process faster, meaning that your change pointer job finishes sooner.
    Vice versa is also true.
    Your system performance (mostly linked to memory availability) could be negatively impacted if you increase the block size. If you see that your system performance goes down with this block size of 1000, it most likely means that your memory resources are already exhausted by other processes, and you could reduce the block size to 500. With this you free up memory and the system performs faster, though your change pointer job takes a little more time.
    Try first with the default block size. Unless you face an issue, there's no point in changing the default value.
    Hope this explains.
    PS: In our case, we mostly use 1000, but during some peak system load times, we use 500 as block size.
    Thanks - Pawan

  • Firefox is using large amounts of CPU time and disk access, and I need to know how to shut down most of this so I can actually use the browser.

    Firefox is a very busy piece of software. It's using large amounts of CPU time and disk access. It puts my usage at low priority, so I have to wait for some time to be able to use my pointer or keyboard. I don't know what it uses all that CPU and disk access time for, but it's of no use to me. It often takes off with massive use of resources when I'm not doing anything, and I may not have use of my pointer for several minutes. How can I shut down most of this so I can use the browser to get my work done? I just want to use the web site access part of the software and drop all the extra. I don't want Firefox to be able to recover after a crash. I just want to browse with a minimum of interference from Firefox. I would think that this is the most commonly asked question.

    Firefox consumes a lot of CPU resources
    * https://support.mozilla.com/en-US/kb/Firefox%20consumes%20a%20lot%20of%20CPU%20resources
    High memory usage
    * https://support.mozilla.com/en-US/kb/High%20memory%20usage
    Check and tell if it's working.

  • Use of Cell Variant for a normal table

    Hi Experts,
    I have a requirement in a normal table where a row is first in read-only mode; then, on click of an Edit button (given on that row),
    the row should become editable, the Edit button should become invisible, and two more buttons (Update/Cancel) should become visible. How can this be achieved using a cell variant for a normal table?
    A detailed reply would be appreciated.
    Thanks in Advance,
    Mirza Hyder.

    Hi Mirza,
    The cell variant property is used to change the properties of a cell, nothing else. That means that in one cell you can have a checkbox and in another cell a textview; you can change the property of each cell as you want.
    Test the standard program WDR_TEST_TABLE, in particular the test view for cell variants, to build your understanding.
    Coming to your requirement, you can do it this way.
    Check the thread below, in which I gave code to make an entire row editable when you click on the lead selection of the row. The code is for ALV, but I hope it is useful for you:
    How to make all columns of alv editable
    To make toolbar buttons disable and enable, write your logic in the method handler of the lead selection event, e.g. change the visible property of the button to none.
    Note: in the event handler method you don't get a reference to the view. You only get a reference to the view in WDDOMODIFYVIEW, so take a global variable of type ref to if_wd_view and populate that global variable in WDDOMODIFYVIEW. That way you can access any view element in any method.
    For cell variants in ALV, check this article written by me:
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/f0e7461d-5e6c-2b10-dda9-9e99df4d136d

  • Optimal Block Size for Xserve's RAID hosting Final Cut Server

    What would be the optimal block size for the software RAID on the machine that will be hosting Final Cut Server? The default is 36K. Since FCS is essentially a database, what would be the optimal settings? Any glimpse of what size data chunks FCS writes to the disk?

    Actually, I meant the block size for the internal startup volume where FCS is installed, not the Xsan volumes. As to optimal settings for Xsan volumes, it really depends on the type of data you store on Xsan, and if it is primarily video, on the format: SD or HD.

  • When I buy the full CC, can I use it on both my laptop and my iMac and iPad?

    When I buy the full CC, can I use it on both my laptop and my iMac and iPad? I hope that I do not need to pay for CC on every device.

    You have two activations and can use it on two systems. iPad apps are separate and don't count.
    Mylenium
