Any reliable cases where larger block sizes are useful?

So I did some googling around to read up on 16KB or larger block sizes. I found a series of articles by Jonathan Lewis and Richard Foote (plus other DBAs whose posts I trust) debunking the use of larger block sizes. I have not been able to find a single article, blog post, or forum post with a legitimate case where a larger block size actually improves performance. It's hard to google this stuff because the good material is buried beneath all the trash.
So have any of the Oak Table people, or the other folks who write articles with quality testing, found cases where larger block sizes are useful?
I don't have a specific need; I'm just curious. Every time I look this up I get buried in generic copy-and-paste blog posts that parrot the docs, the generic test cases that were debunked by the people above, and other junk. So it's hard to research.

Guess2 wrote:
So I did some googling around to read up on 16KB or larger block sizes. I found a series of articles by Jonathan Lewis and Richard Foote (plus other DBAs whose posts I trust) debunking the use of larger block sizes. I have not been able to find a single article, blog post, or forum post with a legitimate case where a larger block size actually improves performance. It's hard to google this stuff because the good material is buried beneath all the trash.
So have any of the Oak Table people, or the other folks who write articles with quality testing, found cases where larger block sizes are useful?
Lurking in the various things I've written about block sizes there are a couple of comments about using different block sizes (occasionally) for LOBs - though this might be bigger or smaller depending on the sizes of the actual LOBs and the usage pattern: it also means you automatically separate the LOB cache from the main cache, which can be very helpful.
I've also suggested that for IOTs (index organized tables), where the index entries can be fairly large and you don't want to create an overflow segment, you may get some benefit if the larger block size typically allows all rows for a given (partial) key value to reside in just one or two blocks.  The same argument can apply, though with slightly less strength, to "fat" indexes (i.e. ones you've added columns to in order to avoid visiting the table for very important time-critical queries).  The drawback in these two cases is that you're second-guessing, and to an extent choking, the LRU algorithms, and you may find that the gain on the specific indexes is obliterated by the loss on the rest of the caching activity.
Regards
Jonathan Lewis

Similar Messages

  • Using large block sizes for index and table spaces

    " You are not using large blocksizes for your index tablespaces. Oracle research proves that indexes will build flatter tree structures in larger blocksizes.
    Is this a generic statement that I can use for all tables or indexes? I also have batch and online activity. My primary target is batch, and it should not impact online. Not sure if both have common tables.
    How do I find the current block size used for tables and indexes? Is there a v$parameter query?
    What is an optimal block size value for batch?
    How do I know when a flatter tree structure has been achieved using the above changes? Is there a query to determine this?
    What about tables? What is the success criterion for tables? Can we use the same flat-tree-structure criterion? Is there a query for this?

    user3390467 wrote:
    " You are not using large blocksizes for your index tablespaces. Oracle research proves that indexes will build flatter tree structures in larger blocksizes.
    Is this a generic statement that I can use for all tables or indexes? This is a generic statement used by some consultants. Unfortunately, it is riddled with exceptions and other considerations.
    One consultant in particular seems to have anecdotal evidence that using different block sizes for index (big) and data (small) can yield almost miraculous improvements. However, that cannot be backed up due to NDA. Many of the rest of us cannot duplicate the improvements, and indeed some find situations where it results in a degradation (esp. with high insert/update rates from separated transactions).
    I also have batch and online activity. My primary target is batch, and it should not impact online. Not sure if both have common tables.
    How do I find the current block size used for tables and indexes? Is there a v$parameter query?
    What is an optimal block size value for batch?
    How do I know when a flatter tree structure has been achieved using the above changes? Is there a query to determine this?
    What about tables? What is the success criterion for tables? Can we use the same flat-tree-structure criterion? Is there a query for this?
    I'd strongly recommend that you
    1) stop using generic tools to analyze specific problems;
    2) define your problem in detail (what are you really trying to accomplish? It seems like performance tuning, but you never really state that);
    3) define the OS and DB version in detail, giving revision levels and patch levels.
    If you are having a serious performance issue, I strongly recommend you look at some performance tuning specialists like "http://www.method-r.com/", "http://www.miracleas.dk/", "http://www.hotsos.com/", "http://www.pythian.com/", or even Oracle's Performance Tuning consultants. Definitely worth the price of admission.
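On the v$parameter question above: a quick sketch of the dictionary queries involved (assuming 9i or later, where the per-tablespace BLOCK_SIZE column exists; SCOTT is just an example schema):

```sql
-- Database default block size (fixed when the database is created)
SELECT value FROM v$parameter WHERE name = 'db_block_size';

-- Per-tablespace block sizes (non-standard sizes show up here; 9i+)
SELECT tablespace_name, block_size FROM dba_tablespaces;

-- Which tablespace (and therefore which block size) each table/index lives in
SELECT segment_name, segment_type, tablespace_name
FROM   dba_segments
WHERE  owner = 'SCOTT';
```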

  • Larger block size = faster DB?

    hi guys,
    (re Oracle 9.2.0.7)
    It seems that a larger block_size makes the database perform operations faster? Is this correct? If so, why would anyone use 2K block sizes?
    thanks

    Hi Howard,
    it's uncharted territory, especially at the higher blocksizes, which seem to be less well-tested than the smaller ones
    Yup, Oracle releases junkware all the time, untested poo . . . You complain, file a bug report, and wait for years while NOTHING happens . . .
    Tell me, Howard, how incompetent does Oracle Corporation have to be not to test something as fundamental as blocksizes?
    I've seen Oracle tech support in action; it's like watching the Keystone Cops:
    Oracle does not reveal the depth of their quality assurance testing, but many Oracle customers believe that Oracle does complete regression testing on major features of their software, such as blocksize. However, Oracle ACE Director Daniel Morgan, says that “The right size is 8K because that is the only size Oracle tests”, a serious allegation, given that the Oracle documentation, Oracle University and MetaLink all recommend non-standard blocksizes under special circumstances:
    - Large blocks give more data transfer per I/O call.
    - Larger blocksizes produce less fragmentation (row chaining and row migration) of large objects (LOB, BLOB, CLOB).
    - Indexes like big blocks because index height can be lower and more space exists within the index branch nodes.
    - Moving indexes to a larger blocksize saves disk space. Oracle says "you will conserve about 4% of data storage (4GB on every 100GB) for every large index in your database by moving from a 2KB database block size to an 8KB database block size."
    So, does Oracle really not do testing with non-standard blocksizes? Oracle ACE Director Morgan says that he was quoting Bryn Llewellyn of Oracle Corporation:
    “Which brings us full circle to the statement Brynn made to me and that I have repeated several times in this thread. Oracle only tests 8K blocks.”
    Wow.
    Edited by: jkestely on Sep 1, 2008 2:52 PM
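For what it's worth, the "flatter tree" claim above is checkable rather than a matter of belief: index height is visible in the dictionary once statistics have been gathered. A quick sketch (SCOTT is just an example schema):

```sql
-- BLEVEL is the number of branch levels; index height = BLEVEL + 1.
-- Compare before and after rebuilding an index in a larger-blocksize
-- tablespace to see whether the tree actually became flatter.
SELECT index_name, blevel, leaf_blocks
FROM   dba_indexes
WHERE  owner = 'SCOTT'
ORDER  BY blevel DESC;
```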

  • Any scenario where the Condition Contracts are used?

    Hi, I am new to this forum. I have been through the definition of Condition Contracts but am still not able to understand them. Please shed some light on this with an example.

    The condition contract is a document with which you can record conditions that a partner e.g a vendor grants to a set of eligible partners e.g customers.
    A condition contract consists of 3 components:
    1. The condition contract header, which contains information about the owner of the condition contract, a validity interval, the document status, currency information, the exchange rate, terms of payment, etc.
    2. The list of eligible partners, containing the customers or vendors who are entitled to use the condition contract.
    3. The conditions, i.e. the condition records.
    TYPICAL SCENARIO WHERE CONDITION CONTRACTS ARE USED:
    Suppose a pharmaceutical company grants discounts and special prices to a group of hospitals for a range of medicines. A wholesaler must grant these special conditions whenever one of these hospitals places an order. Since the wholesaler has obtained the medicines from the manufacturer at the standard price, the manufacturer must refund part of the purchase price to the wholesaler if the latter has sold them at special conditions. In order to secure these special conditions, the wholesaler employs the condition contract.
    At header level, the pharmaceutical company is specified as the owner/vendor. Information relevant for settlement, such as terms of payment, purchasing organization, settlement currency and exchange rates, is also saved in the contract header. The eligible hospitals are either listed separately within the contract or are defined as an eligible partner list externally and included as a list. The special prices and/or discounts are defined in the conditions area of the condition contract. The period of time for which the agreement is valid is defined in the condition contract validity interval.
    If the special conditions are dependent solely on the type of medicine, you can use condition table A163. This contains, besides the condition contract number, only the material in the access. A release step can follow the document entry, since the conditions are available only after the condition contract is released. Then the special conditions are found in pricing for the sales order and the sales invoice if an eligible hospital has ordered one of the specified medicines.
    Kirti hope this answers your question to some extent.
    regards
    PARAM

  • Any waterproof cases for ipod 5g for everyday use?

    Any good waterproof cases for the iPod touch 5G for everyday use?

    Hey Avenger1104, I like the Griffin Survivor + Catalyst: watertight to 9 feet, and it offers shock, dust, weather and scratch protection. Also take a look at the Lifeproof cases. Note: no matter which case you buy, it's only as good as its seals, and there is no warning when or if the seal has failed. Good luck. Cheers.

  • Can i Know any standard Table where all the Subroutines are present?

    Hi Folks,
    Please let me know the standard table name where I can find all the forms (subroutines).
    Thanks,
    Naresh

    Hi,
    There is no standard table in SAP that stores all the PERFORMs used in a program.
    Generally we use the SCAN statement to find all the PERFORMs used in a program.
    reward if useful
    regards,
    Anji

  • How can I see where Paragraph / Character Styles are used?

    I am editing a 40-page document and have many paragraph styles. I have already deleted the unused paragraph styles and now would like to see where the remaining styles are used, to edit the text directly. Is there a way to see it? (InDesign CS4 or CS5)
    Thanks,
    Marina

    Marine von Koenig wrote:
    I am editing a 40-page document and have many paragraph styles. I have already deleted the unused paragraph styles and now would like to see where the remaining styles are used, to edit the text directly. Is there a way to see it? (InDesign CS4 or CS5)
    Thanks,
    Marina
    Save two copies of the file.
    Close the original file. In a saved copy, [EDIT] use the paragraph style panel to [/EDIT] change the text color of every paragraph style to a new color whose swatch you name for the paragraph style. For example, Head1 paragraph style's text color becomes your newly-made swatch Head1color.
    Every paragraph in the document will appear in the color you assigned. This should give you a picture of which paragraphs are which style.
    Save the colored version of the file if you want to keep it.
    HTH
    Regards,
    Peter
    Peter Gold
    KnowHow ProServices
    Message was edited by: peter at knowhowpro

  • Redemption code was scratched off: any idea on where to contact someone about using the card?

    Anybody have an idea how to contact Apple regarding an over-scratched iTunes gift card? The code cannot be read to redeem it.

    With any luck, the following document may be of some assistance with that:
    iTunes Store: Invalid, inactive, or illegible codes

  • Design Patterns that are used in standard j2se/j2ee classes/interfaces

    Hi All,
    I am studying the following design patterns (used within standard j2se/j2ee):
    Adapter
    Facade
    Composite
    Bridge
    Singleton
    Observer
    Mediator
    Proxy
    Chain of Responsibility
    Flyweight
    Builder
    Factory Method
    Abstract Factory
    Prototype
    Memento
    Template Method
    State
    Strategy
    Command
    Interpreter
    Decorator
    Iterator
    Visitor
    I want to see if/where these design patterns are used in j2se/j2ee classes/interfaces. I am looking for a few examples of standard Java classes/interfaces/cases where these design patterns are used by the JDK developers.
    for e.g.
    WindowAdapter class is an example of Adapter DP.
    JOptionPane is an example of the Facade DP.
    MouseListener is an example of the Observer DP.
    Similarly, where can I find examples of JDK classes/interfaces for the remaining DPs?
    I searched a lot of books, but they explain the DPs by creating their own classes/interfaces.
    I would like to see where these DPs are already utilised in std j2se/j2ee
    thanks,
    Madhu_1980

    877316 wrote:
    I searched a lot of books, but they explain the DPs by creating their own classes/interfaces.
    I would like to see where these DPs are already utilised in std j2se/j2ee
    Well, you can go through the javadocs first; they sometimes mention the pattern used.
    Then you can get the sources for the JDK and go through the classes yourself, identifying the patterns.

  • What are the essential cases where we must have use update module F.M.

    Dear sap Friends ,
    I want to update or insert a bill number in a Z table when creating a bill through VF01.
    I have done it through an INSERT SQL query as well as through an update-module-enabled function module.
    What are the essential cases where we must use an update-module-enabled function module, and why do we use IN UPDATE TASK?
    Mainly I want to know in which cases I must use an update-module-enabled function module rather than a simple INSERT or MODIFY query.
    Thanks ,
    Amit Ranjan .

    Hi Amit,
    I sincerely advise you to search SCN or anywhere else, as there is a big theory behind this.
    A few related points:
    If you are using more than one screen to collect the data to update a particular table and the corresponding underlying child table, then after every screen change an implicit database commit takes place while you are still collecting data from the other screens. In such cases you use IN UPDATE TASK.
    Please go through SAP LUW
    Regards
    Ramchander Rao.K

  • Raid storage usage and block size

    We have two XServe RAID units Raid 5 and we are adding a new 16 bay ACNC raid with 16 1.5TB drives in Raid 6 + Hot Spare. I initialized the Raid 6 with 128K block size. The total data moving from the older raid volumes is around 5.7TB, but on the new Raid it is taking around 7.4TB of space. Is this due to the 128K block size? This is a prepress server so most of the files are quite large, but there may be lots of small files as well.

    Hi
    RAID 0 does indeed offer best performance, however if any one drive of the striped set fails you will lose all your data. If you have not considered a backup strategy now would be the time to do so. For redundancy RAID 1 Mirror might be a better option as this will offer a safety net in case of a single drive failure. A RAID is not a backup and you should always consider a workable backup strategy.
    Purchase another 2x1TB drives and you could consider a RAID 10? Two Stripes mirrored.
    Not all your files will be large ones as I'm guessing you'll be using this workstation for the usual mundane matters such as email etc? Selecting a larger block size with small file sizes usually decreases performance. You have to consider all applications and file sizes, in which case the best block size would be 32k.
    My 2p
    Tony

  • RAID, ASM, and Block Size

    * This was posted in the "Installation" Thread, but I copied it here to see if I can get more responses, Thank you.*
    Hello,
    I am about to set up a new Oracle 10.2 database server. In the past, I used RAID 5 since 1) it was a fairly small database, 2) there were not a lot of writes, 3) high availability, 4) it wasted less space compared to other RAID techniques.
    However, even though our database is still small (around 100GB), we are noticing that when we update our data, the time it takes is starting to grow, to the point where an update that used to take about an hour now takes 10-12 hours or more. One thing we noticed is that if we created another tablespace with a block size of 16KB versus our normal tablespace's 8KB block size, we almost cut the update time in half.
    So, we decided that we should really start from scratch on a new server and tune it optimally. Here are some questions I have:
    1) Our server is a DELL PowerEdge 2850 with 4x146GB Hard Drives (584GB total). What is the best way to set up the disks? Should I use RAID 1+0 for everything? Should I use ASM? If I use ASM, how is the RAID configured? Do I use RAID0 for ASM since ASM handles mirroring and striping? How should I setup the directory structure? How about partitioning?
    2) I am installing this on Linux and when I tried on my old system to use 32K block size, it said I could only use 16K due to my OS. Is there a way to use a 32K block size with Linux? Should I use a 32K block size?
    Thanks!

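For reference, creating the 16KB tablespace mentioned in the question needs a matching buffer cache configured first. A minimal sketch (the file path and cache size are example values; this requires 9i or later):

```sql
-- A non-standard block size needs its own buffer cache before a
-- tablespace of that block size can be created
ALTER SYSTEM SET db_16k_cache_size = 64M;

CREATE TABLESPACE ts_16k
  DATAFILE '/u01/oradata/orcl/ts_16k01.dbf' SIZE 1G
  BLOCKSIZE 16K;
```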

  • Choosing block size for RAID 0 & Final Cut

    Hi.
    I now have 3 500GB internal Seagate drives in bays 2/3/4 and want to make a striped 1.5TB RAID to use with Final Cut Studio 2. The help page talks about choosing a "large" data block size for use with video, but makes no specific size suggestion. What value would you recommend that I select for the block size? I haven't been in there yet so I don't know what the choices are.
    Any other settings I should be aware of that will optimize the RAID performance for video capture and editing? Thanks!
    Fred
    Message was edited by: FredGarvin

    If you're using Disk Utility to set up your RAID, when you go to the RAID tab you'll see an Options button near the bottom of the window... clicking this will open a small menu where you can set the data block size... the largest is 256K, which is what you'd want to use.
    As for your other question... have a look at this website: http://bytepile.com/raid_class.php
    Note that Disk Utility can only set up RAID 0 & RAID 1 (if I remember rightly).

  • 8k block size upgrade

    Hello All,
    Our team is quickly approaching an 8K block size upgrade of a fairly large production database (800 GB). This is one step of several to improve the performance of a long-running (44 hour) batch cycle. My concern is the following, and I am hoping people can tell me why it should or shouldn't be a concern. We do not have a place at the moment where we can test this upgrade. My fear is that Oracle may in some cases alter the execution plans for the queries in our batch due to the new larger block size and make bad choices. I am afraid it may choose table scans instead of an index scan (as an example) at the wrong time and cause our batch to run much longer than normal. While we can resolve these issues, having this happen in production the first time we run 8K would be a big issue. I might be able to deal with a problem here or there, but several issues may cause us to miss our service level agreement. Should I be concerned about this with an upgrade from 4K to 8K? Should I cancel the upgrade for now? -- ORACLE 8i.
    Is there anything else I should stay up at night worry about with this upgrade?
    Thanks,
    Jeff Vacha

    If all you are doing in this upgrade is changing the database block size, and you don't already have some compelling evidence that changing the block size is going to improve performance, I wouldn't bother. Changing the database block size is a lot of work and is very rarely going to have a noticeable impact.
    Justin
    Distributed Database Consulting, Inc.
    http://www.ddbcinc.com/askDDBC
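One low-risk way to address the plan-stability worry above is to capture the execution plans of the critical batch queries before the change and compare them afterwards. A minimal sketch (the orders query is a made-up stand-in for a real batch query; on 8i the plan table comes from utlxplan.sql, since DBMS_XPLAN only arrived in 9i):

```sql
-- Create the plan table once (8i):  @?/rdbms/admin/utlxplan.sql

-- Capture the plan for one critical batch query
EXPLAIN PLAN SET STATEMENT_ID = 'batch_q1' FOR
SELECT COUNT(*) FROM orders WHERE order_date > SYSDATE - 7;

-- Display it; rerun after the block size change and diff the output
SELECT LPAD(' ', 2 * LEVEL) || operation || ' ' || options
       || ' ' || object_name AS plan_line
FROM   plan_table
WHERE  statement_id = 'batch_q1'
START  WITH id = 0 AND statement_id = 'batch_q1'
CONNECT BY PRIOR id = parent_id AND statement_id = 'batch_q1';
```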

  • Finding appropriate block size?

    Hi All,
    I believe this might be a basic question: how do I find the appropriate block size when building a database for a specific application?
    I have always seen the default 8K block size used everywhere (around 300-350 databases I have seen till now)... but why, and how do they estimate this block size blindly before creating a production database?
    Also, in the same way, how are memory settings finalized before creating the database?
    -Yasser

    Yasser,
    I have been very fortunate to buy and read several very high quality Oracle books which not only correctly state the way something works, but also manage to provide a logical, reasoned explanation for why things happen as they do, when it is appropriate, and when it is not. While not the first book I read on the topic of Oracle, the book “Oracle Performance Tuning 101” by Gaja Vaidyanatha marked the start of logical reasoning in performance tuning exercises for me. A couple years later I learned that Gaja was a member of the Oaktable Network.
    I read the book “Expert Oracle One on One” by Tom Kyte and was impressed with the test cases presented in the book, which help readers understand the logic of why Oracle behaves as it does, and I also enjoyed the performance tuning stories in the book. A couple years later I found Tom Kyte’s “Expert Oracle Database Architecture” book at a book store and bought it without a second thought; some repetition from his previous book, fewer performance tuning stories, but a lot of great, logically reasoned information. A couple years later I learned that Tom was a member of the Oaktable Network.
    I read the book “Optimizing Oracle Performance” by Cary Millsap, a book that once again marked a distinct turning point in the method I used for performance tuning – the logic made all of the book easy to understand. A couple years later I learned that Cary was a member of the Oaktable Network.
    I read the book “Cost-Based Oracle Fundamentals” by Jonathan Lewis, a book whose title made it seem too much like a beginner’s book until I read the review by Tom Kyte. Needless to say, the book also marked a turning point in the way I approach problem solving through logical reasoning, asking and answering the question – “What is Oracle thinking?”. Jonathan is a member of the Oaktable Network; a pattern is starting to develop here. At this point I started looking for anything written in book or blog form by members of the Oaktable Network.
    I found Richard Foote’s blog, which somehow managed to make Oracle indexes interesting for me - probably through the use of logic and test cases which allowed me to reproduce what I was reading about. I found Jonathan Lewis’ blog, which covers so many interesting topics about Oracle, all of which leverage logical approaches to help understanding. I also found the blogs of Kevin Closson, Greg Rahn, Tanel Poder, and a number of other members of the Oaktable Network. The draw to the performance tuning side of Oracle administration was primarily a search for the elusive condition known as Compulsive Tuning Disorder, a term coined in the book written by Gaja. There were, of course, many other books which contributed to my knowledge – I reviewed at least 8 of the Oracle related books on the amazon.com website.
    Motivation… it is interesting to read what people write about Oracle. Sometimes what is written directly contradicts what one knows about Oracle. In such cases, it may be a fun exercise to determine if what was written is correct (and why it is logically correct), or why it is wrong (and why it is logically incorrect). Take, for example, the “Top 5 Timed Events” seen in this book (no, I have not read this book, I bumped into it a couple times when performing Google searches):
    http://books.google.com/books?id=bxHDtttb0ZAC&pg=PA17#v=onepage&q=&f=false
    The text of the book states that the “Top 5 Timed Events” shown indicates a CPU Constrained Database (side note: if a database is a series of files stored physically on a disk, can it ever be CPU constrained?). From the “Top 5 Timed Events”, we see that there were 4,851 waits on the CPU for a total time of 4,042 seconds, and this represented 55.76% of the wait time. Someone reading the book might be left thinking one of:
    * “That obviously means that the CPU is overwhelmed!”
    * “Wow 4,851 wait events on the CPU, that sure is a lot!”
    * “Wow wait events on the CPU, I didn’t know that was possible?”
    * “Hey, something is wrong with this ‘Top 5 Timed Events’ output as Oracle never reports the number of waits on CPU.”
    * “Something is really wrong with this ‘Top 5 Timed Events’ output as we do not know the number of CPUs in the server (what if there are 32 CPUs), the time range of the statistics, and why the average time for a single block read is more than a second!”
    A Google search then might take place to determine if anyone else reports the number of waits for the CPU in an Oracle instance:
    http://www.google.com/search?num=100&q=Event+Waits+Time+CPU+time+4%2C851+4%2C042
    So, it must be correct… or is it? What does the documentation show?
    Another page from the same book:
    http://books.google.com/books?id=bxHDtttb0ZAC&pg=PA28#v=onepage&q=&f=false
    Shows the command:
    alter system set optimizer_index_cost_adj=20 scope = pfile;
    Someone reading the book might be left thinking one of:
    * That looks like an easy to implement solution.
    * I thought that it was only possible to alter parameters in the spfile with an ALTER SYSTEM command, neat.
    * That command will never execute, and should return an “ORA-00922: missing or invalid option” error.
    * Why would the author suggest a value of 20 for OPTIMIZER_INDEX_COST_ADJ and not 1, 5, 10, 12, 50, or 100? Are there any side effects? Why isn’t the author recommending the use of system (CPU) statistics to correct the cost of full table scans?
    A Google search finds this book (I have not read this book either, just bumped into it during a search) by a different author which also shows that it is possible to alter the pfile through an ALTER SYSTEM command:
    http://books.google.com/books?id=ufz5-hXw2_UC&pg=PA158#v=onepage&q=&f=false
    So, it must be correct… or is it? What does the documentation show?
    Regarding the question of updating my knowledge, I read a lot of books on a wide range of subjects including Oracle, programming, Windows and Linux administration, ERP systems, Microsoft Exchange, telephone systems, etc. I also try to follow Oracle blogs and answer questions in this and other forums (there are a lot of very smart people out there contributing to forums, and I feel fortunate to learn from those people). As long as the book or blog offers logical reasoning, it is fairly easy to tie new material into one’s pre-existing knowledge.
    Charles Hooper
    IT Manager/Oracle DBA
    K&M Machine-Fabricating, Inc.
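On the SCOPE point raised above: the valid options are MEMORY, SPFILE, and BOTH; "pfile" is not a valid option and raises ORA-00922. A minimal sketch of the correct forms:

```sql
-- Change the running instance only (lost at the next restart)
ALTER SYSTEM SET optimizer_index_cost_adj = 20 SCOPE = MEMORY;

-- Record the change in the spfile only (takes effect at next startup)
ALTER SYSTEM SET optimizer_index_cost_adj = 20 SCOPE = SPFILE;

-- Both at once (the default for dynamic parameters when an spfile is in use)
ALTER SYSTEM SET optimizer_index_cost_adj = 20 SCOPE = BOTH;
```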
