Larger block size = faster DB?

hi guys,
(re Oracle 9.2.0.7)
It seems that a larger db_block_size makes the database perform operations faster. Is this correct? If so, why would anyone use 2K block sizes?
thanks

Hi Howard,
"it's uncharted territory, especially at the higher blocksizes which seem to be less-well-tested than the smaller ones"
Yup, Oracle releases junkware all the time, untested poo . . .
You complain, file a bug report, and wait for years while NOTHING happens . . .
Tell me, Howard: how incompetent does Oracle Corporation have to be not to test something as fundamental as blocksizes?
I've seen Oracle tech support in action; it's like watching the Keystone Cops:
Oracle does not reveal the depth of their quality assurance testing, but many Oracle customers believe that Oracle does complete regression testing on major features of their software, such as blocksize. However, Oracle ACE Director Daniel Morgan says that “The right size is 8K because that is the only size Oracle tests”, a serious allegation, given that the Oracle documentation, Oracle University and MetaLink all recommend non-standard blocksizes under special circumstances:
- Large blocks give more data transfer per I/O call.
- Larger blocksizes provide less fragmentation (row chaining and row migration) of large objects (LOB, BLOB, CLOB).
- Indexes like big blocks because index height can be lower and more space exists within the index branch nodes.
- Moving indexes to a larger blocksize saves disk space. Oracle says "you will conserve about 4% of data storage (4GB on every 100GB) for every large index in your database by moving from a 2KB database block size to an 8KB database block size."
So, does Oracle really not do testing with non-standard blocksizes? Oracle ACE Director Morgan says that he was quoting Bryn Llewellyn of Oracle Corporation:
“Which brings us full circle to the statement Brynn made to me and that I have repeated several times in this thread. Oracle only tests 8K blocks.”
Wow.

Similar Messages

  • Any reliable cases where larger block sizes are useful?

    So I did some googling around to read up on 16KB or larger block sizes. I found a series of articles by Jonathan Lewis and Richard Foote (plus other DBAs whose posts I trust) debunking the use of larger block sizes. I have not been able to find a single article, blog post, or forum post with a legitimate case where a larger block size actually improves performance. It's hard to google this stuff because the good stuff is buried beneath all the trash.
    So have any of the Oak Table people, or the other folks who write carefully tested articles, found cases where larger block sizes are useful?
    I don't have a specific need. I'm just curious. Every time I look this up I get buried in generic copy-and-paste blog posts that copy the docs, the generic test cases that were debunked by the guys above, and other junk. So it's hard to look for this.

    Guess2 wrote:
    So I did some googling around to read up on 16KB or larger block sizes. I found a series of articles by Jonathan Lewis and Richard Foote (plus other DBAs whose posts I trust) debunking the use of larger block sizes. I have not been able to find a single article, blog post, or forum post with a legitimate case where a larger block size actually improves performance. It's hard to google this stuff because the good stuff is buried beneath all the trash.
    So have any of the Oak Table people, or the other folks who write carefully tested articles, found cases where larger block sizes are useful?
    Lurking in the various things I've written about block sizes there are a couple of comments about using different block sizes (occasionally) for LOBs - though this might be bigger or smaller depending on the sizes of the actual LOBs and the usage pattern: it also means you automatically separate the LOB cache from the main cache, which can be very helpful.
    I've also suggested that for IOTs (index organized tables), where the index entries can be fairly large and you don't want to create an overflow segment, you may get some benefit if the larger block size typically allows all rows for a given (partial) key value to reside in just one or two blocks.  The same argument can apply, though with slightly less strength, for "fat" indexes (i.e. ones you've added columns to in order to avoid visiting the table for very important, time-critical queries).  The drawback in these two cases is that you're second-guessing, and to an extent choking, the LRU algorithms, and you may find that the gain on the specific indexes is obliterated by the loss on the rest of the caching activity.
    Regards
    Jonathan Lewis
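    To make the LOB suggestion above concrete, here is a minimal sketch (the tablespace name, file path, sizes, and table are my own illustrative assumptions, not details from Jonathan's post) of giving LOBs their own tablespace, and therefore their own cache, with a non-default block size:

        -- A buffer cache for the non-default block size must exist first.
        ALTER SYSTEM SET db_16k_cache_size = 64M;

        -- A dedicated tablespace whose block size differs from db_block_size.
        CREATE TABLESPACE lob_ts
          DATAFILE '/u01/oradata/ORCL/lob_ts01.dbf' SIZE 1G
          BLOCKSIZE 16K;

        -- Only the LOB segment goes into the 16K tablespace;
        -- the table itself stays in its default tablespace.
        CREATE TABLE documents (
          id  NUMBER PRIMARY KEY,
          doc CLOB
        )
        LOB (doc) STORE AS (
          TABLESPACE lob_ts
          DISABLE STORAGE IN ROW
        );

    As Jonathan notes, whether this helps depends on the LOB sizes and the usage pattern; the one automatic effect is that the LOB cache is separated from the main cache.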

  • Using large block sizes for index and table spaces

    " You are not using large blocksizes for your index tablespaces. Oracle research proves that indexes will build flatter tree structures in larger blocksizes.
    Is this a generic statement that I can use for all tables or indexes? I also have batch and online activity. My primary target is batch, and it should not impact online. I'm not sure if both have common tables.
    How do I find the current block size used for tables and indexes? Is there a v$parameter query?
    What is an optimal block size value for batch?
    How do I know when a flatter tree structure has been achieved after the above changes? Is there a query to determine this?
    What about tables? What is the success criterion for tables? Can we use the same flatter-tree-structure criterion? Is there a query for this?

    user3390467 wrote:
    " You are not using large blocksizes for your index tablespaces. Oracle research proves that indexes will build flatter tree structures in larger blocksizes.
    Is this a generic statement that I can use for all tables or indexes? This is a generic statement used by some consultants. Unfortunately, it is riddled with exceptions and other considerations.
    One consultant in particular seems to have anecdotal evidence that using different block sizes for index (big) and data (small) can yield almost miraculous improvements. However, that can not be backed up due to NDA. Many of the rest of us can not duplicate the improvements, and indeed some find situations where that results in a degradation (esp with high insert/update rates from separated transactions).
    I also have batch and online activity. My primary target is batch and it should not impact online. Not sure if both have common tables.
    How to find the current block size used for tables and index? is there a v$parameter query?
    What is an optimal block size value for batch?
    How do I know when flatter tree str has been achieved using above changes? Is there a query to determine this?
    What about tables? What is the success criterion for tables? Can we use the same flatter-tree-structure criterion? Is there a query for this?
    I'd strongly recommend that you
    1) stop using generic tools to analyze specific problems
    2) define your problem in detail (what are you really trying to accomplish? It seems like performance tuning, but you never really state that)
    3) define the OS and DB version - in detail. Give rev levels and patch levels.
    If you are having a serious performance issue, I strongly recommend you look at some performance tuning specialists like "http://www.method-r.com/", "http://www.miracleas.dk/", "http://www.hotsos.com/", "http://www.pythian.com/", or even Oracle's Performance Tuning consultants. Definitely worth the price of admission.
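    For the "is there a v$parameter query?" question above, a hedged sketch of the standard lookups (the SCOTT owner is just a placeholder):

        -- Default block size of the database (fixed at creation time).
        SELECT value FROM v$parameter WHERE name = 'db_block_size';

        -- Per-tablespace block sizes; non-standard sizes show up here.
        SELECT tablespace_name, block_size FROM dba_tablespaces;

        -- Which tablespace (and hence which block size) each table or index uses.
        SELECT s.segment_name, s.segment_type, s.tablespace_name, t.block_size
        FROM   dba_segments s
        JOIN   dba_tablespaces t ON t.tablespace_name = s.tablespace_name
        WHERE  s.owner = 'SCOTT';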

  • DASYLAB QUERIES on Sampling Rate and Block Size

    HELP!!!! I have been wrestling with DASYLab for a few weeks over several problems, but haven't come to any conclusion. I hope someone will be able to help. Lots of thanks!
    1. I need to have more data points, so I increase the sampling rate (SR). When the sampling rate is increased, the block size (BS) increases correspondingly.
    For low sampling rates (SR < 100 Hz) and a block size of 1, the recorded time in DASYLab and the real experimental time are the same. But the problem starts when SR > 100 Hz with BS = 1: I realized that the recorded time in DASYLab differs from the real time. To solve the time difference problem, I've decided to use "AUTO" block size.
    Qn1: Is there any way to solve the time difference problem for high SR?
    Qn2: For Auto block size, is the recorded result in DASYLab at one time moment the actual value, or has it been overwritten by the value from the previous block when AUTO BS is chosen?
    2. I've tried getting the result both for BS = 1 and when BS is auto. Regardless of the sampling rate, the values obtained with BS = 1 are always larger than those with Auto block size. Qn1: Which is the actual result of the test?
    Qn2: Is there any best combination of block size and sampling rate that can be used?
    Hope someone is able to help me with the above problem.
    Thanks-a-million!!!!!

    Generally, the DASYLab sampling rate to block size ratio should be between 2:1 and 10:1.
    If your sample rate is 1000, the block size should be 500 to no smaller than 100.
    Very large block sizes that encompass more than 1 second worth of data often cause display delays that frustrate users.
    Very small block sizes that have less than 10 ms of data cause DASYLab to bog down.
    Sample rate of 100 samples / second and a block size of 1 is going to cause DASYLab to bog down.
    There are many factors that contribute to performance, or lack thereof: the speed and on-board buffers of the data acquisition device; the speed, memory, and video capabilities of the computer; and the complexity of the worksheet. As a result, we cannot be more specific, other than to provide you with the rule of thumb above and suggest that you experiment with various settings, as you have done.
    Usually the only reason that you want a small block size is for closed loop control applications. My usual advice is that DASYLab control is around 1 to 10 samples/second. Much faster, and delays start to set in. If you need fast, tight control loops, there are better solutions that don't involve Microsoft Windows and DASYLab.
    Q1 - Without knowing more about your hardware, I cannot answer the question, but see above. Keep the block size ratio between 2:1 and 10:1.
    Q2 - Without knowing more about your hardware and the driver, I'm not sure that I can fully answer the question. In general, the DASYLab driver instructs the DAQ device driver to program the DAQ device to a certain sampling rate and buffer size. The DASYLab driver then retrieves the data from the intermediate buffers and feeds it to the DASYLab A/D Input module. If the intermediate buffers are too small, or the sample rate exceeds the capability of the built-in buffers on the hardware, then data might be overwritten. You should have received warning or error messages from the driver.
    Q3 - See above.
    It may be that your hardware driver is not configured correctly. What DAQ device, driver, DASYLab version, and operating system are you using? How much memory do you have? How complex is your worksheet? Are you doing control?
    Have you contacted your DASYLab reseller for more help? They should know your hardware better than I do.
    - cj
    Measurement Computing (MCC) has free technical support. Visit www.mccdaq.com and click on the "Support" tab for all support options, including DASYLab.

  • Raid block size ?

    I just purchased two FireWire 800 500GB external hard drives.
    I want to use them in a RAID setup for recording audio (vocals, guitars, etc.), or for keeping my Komplete Ultimate 9 sound library on them.
    Either way, I have no idea what RAID block size to use.
    hope everyone is having a great new year ;-)

    Hello d rock,
    When creating a RAID array, you'll typically want your block size to match (as closely as reasonable) the size of the files being stored on the array.
    ...specify an optimal storage block size for the data stored on the set. Set the block size to match the size of data stored on the set. For example, a database might store small units of data, so a small block size might be best. A video processing application might require fast throughput of large amounts of data, so a larger block size might be best.
    Disk Utility 12.x: Create a RAID set
    http://support.apple.com/kb/PH5834
    Cheers,
    Allen

  • Raid storage usage and block size

    We have two XServe RAID units in RAID 5, and we are adding a new 16-bay ACNC RAID with 16 1.5TB drives in RAID 6 + hot spare. I initialized the RAID 6 with a 128K block size. The total data moving from the older RAID volumes is around 5.7TB, but on the new RAID it is taking around 7.4TB of space. Is this due to the 128K block size? This is a prepress server, so most of the files are quite large, but there may be lots of small files as well.

    Hi
    RAID 0 does indeed offer best performance, however if any one drive of the striped set fails you will lose all your data. If you have not considered a backup strategy now would be the time to do so. For redundancy RAID 1 Mirror might be a better option as this will offer a safety net in case of a single drive failure. A RAID is not a backup and you should always consider a workable backup strategy.
    Purchase another two 1TB drives and you could consider RAID 10? Two stripes, mirrored.
    Not all your files will be large ones, as I'm guessing you'll be using this workstation for the usual mundane matters such as email, etc. Selecting a larger block size with small file sizes usually decreases performance. You have to consider all applications and file sizes, in which case the best block size would be 32K.
    My 2p
    Tony

  • RAID, ASM, and Block Size

    * This was posted in the "Installation" thread, but I copied it here to see if I can get more responses. Thank you. *
    Hello,
    I am about to set up a new Oracle 10.2 database server. In the past, I used RAID 5 since 1) it was a fairly small database, 2) there were not a lot of writes, 3) it offered high availability, and 4) it wasted less space compared to other RAID techniques.
    However, even though our database is still small (around 100GB), we are noticing that when we update our data, the time it takes is starting to grow to the point where an update that used to take about an hour now takes 10-12 hours or more. One thing we noticed is that if we created another tablespace with a block size of 16KB versus our normal tablespace with a block size of 8KB, we almost cut the update time in half.
    So, we decided that we should really start from scratch on a new server and tune it optimally. Here are some questions I have:
    1) Our server is a DELL PowerEdge 2850 with 4x146GB hard drives (584GB total). What is the best way to set up the disks? Should I use RAID 1+0 for everything? Should I use ASM? If I use ASM, how is the RAID configured? Do I use RAID 0 for ASM since ASM handles mirroring and striping? How should I set up the directory structure? How about partitioning?
    2) I am installing this on Linux, and when I tried to use a 32K block size on my old system, it said I could only use 16K due to my OS. Is there a way to use a 32K block size with Linux? Should I use a 32K block size?
    Thanks!


  • RAID: striping and block size

    I just finished the setup of a RAID striped array of two 500MB disks. My question is about the RAID block size: is there a noticeable difference in performance from choosing a larger block size than the 32KB default? I chose 64KB, but will 128KB make any noticeable difference?

    Are you in the right place? The reason I ask is that you can only ever have one disk inside an MBP.
    However, if you're talking about external drive(s), then it would depend on what you are doing with the drive. For a boot/Photoshop/general drive I would recommend 32K, or at most 64K. Otherwise, if you are doing large sequential transfers such as video, then a larger block size will help.

  • 8K Database Block Size in 11.5.2

    The Release Notes for 11.5.2 state the minimum database block size is 8K. However, this was not a requirement for 11.5.1, and instead patch 1301168 was created for customers with a database block size less than 8K.
    In 11.5.2, is it MANDATORY that the database block size be 8K or greater, or can patch 1301168 be applied in lieu of migrating to a larger block size?

    4K to 8K:
    Use 7.3.4 Export using a Unix pipe to create the export. Files may have to be split.
    Rebuild the 8.1.6 database. Good opportunity to consolidate files.
    Import using a Unix pipe with the 8.1.6 import utility.
    Set data files to autoextend, unlimited, including SYSTEM. Do not autoextend RBS, TEMP and log.
    Use COMMIT and IGNORE on import.
    Use COMPRESS on export.
    Use very large buffers on both export and import.
    Ignore messages on the SYS and SYSTEM tablespaces and other definitions.
    If you are using 10.7 with 8.1.6, use the interop patch and other module patches (see release notes). Run adprepdb.sql after import.
    Compile all invalid modules till they match the 10.7 invalid modules. Fix invalid modules like the ones documented in the release notes.
    If you are doing it just for 11.5.2, just do export/import in 8.1.6 with large file support enabled on your file system and you should be fine. AIX supports large file systems.

  • Tuning block size in Tablespace

    Hi
    Currently my tablespaces have an 8K block size. How do I tune this? How do I choose another size?
    Regards
    Denis

    You can't choose a different block size for existing tablespaces, but you can do that if you create new tablespaces. Generally speaking, a smaller block size can be suitable for OLTP systems, while a larger block size can be adequate for a data warehouse.
    See http://download-uk.oracle.com/docs/cd/B19306_01/server.102/b14211/iodesign.htm#sthref736
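    As a sketch of what creating a new tablespace with a different block size involves (the name, path, and sizes are illustrative assumptions, not values from this thread), note that a buffer cache for the new block size must be allocated first:

        -- A cache for the non-default block size is a prerequisite.
        ALTER SYSTEM SET db_16k_cache_size = 128M;

        -- Only then can a tablespace with that block size be created.
        CREATE TABLESPACE ts_16k
          DATAFILE '/u01/oradata/ORCL/ts_16k01.dbf' SIZE 1G
          BLOCKSIZE 16K;

    Existing objects then have to be moved or rebuilt into the new tablespace; the block size of an existing tablespace cannot be changed.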

  • Best Block Size in Raid for Photo files

    I am setting up my two drive striped RAID 0 and came to a screeching halt at the raid block size.
    This RAID is strictly for photo scans and PS CS2 photo files, mostly high-res, some medium JPEGs.
    Adobe says PS CS2's default block size is 64K, if I can believe the technical support guy, who said it off the top of his head after not understanding what I was talking about.
    Apple Tech support first knew nothing about it. Then, after checking all over for quite some time, said 32K is adequate for what I am doing but 64K is alright. In other words, he said nothing.
    What would be the best block size for my purpose and why.
    One scan file size that I just checked is 135.2MB, another 134.6MB, and that is typical. JPEGs are, of course, smaller, ca. 284KB. Photos with the Canon EOS-1Ds Mk II run 9MB up to 200MB after processing. No other types of files will be on this drive.
    What would be the ideal block size and why?
    Thanks much,
    Mark

    The default 32K is for the small random I/O pattern of a server. Use 128/256K for audio and video files, and 64K for workstation use.
    The larger block size gives the best performance for sequential I/O. Someone mentioned an AMUG review of CS2 tests that showed 64K performing best.
    Because this is probably a scratch volume, you could always test for yourself, then rebuild the RAID later and try a different scheme. Sometimes that is the best way to match your drives, your workflow, and your system. There are a couple of CS2 scripts and benchmark utilities to help get an idea of how long each step or operation takes.

  • How should be set Index block size in Warehouse databases?

    Hi,
    We have a warehouse database.
    I cannot find out the index block size.
    1. Where can I find out our index block sizes?
    2. How can I enlarge index block sizes? Is it related to the tablespace?
    Following your suggestion, do I need to increase or set a buffer cache keep pool according to the block sizes? Can 2K, 4K, 8K, 16K and 32K be specified?
    could you help me please?
    thanks and regards,

    See the BLOCK_SIZE column in DBA_TABLESPACES.
    You can't "increase" the block size. You'd have
    a) to allocate DB_xK_cache_size for the new "x"K block size
    b) create a new tablespace explicitly specifying the block size in the CREATE TABLESPACE command
    c) rebuild your indexes into the new tablespace.
    Indexes created in a tablespace with a larger block size have more entries in each block.
    You may get better performance.
    You may get worse performance.
    You may see no difference in performance.
    You may encounter bugs.
    "increasing block size" is an option to be evaluated and tested thoroughly. It is not, per se, a solution.
    Hemant K Chitale
    http://hemantoracledba.blogspot.com
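    A minimal sketch of step c) and one way to check whether an index actually got flatter (owner, index, and tablespace names are illustrative assumptions; steps a) and b) are assumed done, i.e. a db_16k_cache_size is set and a 16K tablespace IDX_16K exists):

        -- c) Rebuild the index into the new-blocksize tablespace.
        ALTER INDEX scott.emp_name_ix REBUILD TABLESPACE idx_16k;

        -- Compare BLEVEL (index height minus 1) before and after the
        -- rebuild; this requires up-to-date optimizer statistics.
        SELECT index_name, blevel, leaf_blocks
        FROM   dba_indexes
        WHERE  owner = 'SCOTT' AND index_name = 'EMP_NAME_IX';

    Per Hemant's caveats above, a lower BLEVEL does not by itself guarantee better performance.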

  • Need to understand the relation between bigger block sizes like 16K or 32K

    Hi
    How can we determine which block size is good for the database, especially for a reporting DB on which real-time replication is being performed?
    I would really appreciate it if someone could help me identify this. Are there any ways to find out the correct DB block size by querying any DB objects?

    I think that part of the decision will have to be based on knowledge of your application and how it interacts with the database. If your database does a lot of single reads/updates of records, then the smaller block size will probably be more appropriate.
    However, if your application does bulk reads and insertions, then you might benefit from larger block sizes.
    There are several threads already on this subject, and references as well:
    size of db_block_size
    how to decide size of db_block_size of a block.
    Re: DB_BLOCK_SIZE
    do a search in the search field on the main thread page for more.
    Regards
    tim

  • How to install 10g database on windows with db block size 16k

    Hi,
    Can someone help me install an Oracle 10g database on Windows XP with a DB block size of 16K?
    I need this database because it is one of the recommendations for installing OWB (Oracle Warehouse Builder).
    Thanks,
    Philip.

    1) In the initialization parameter file:
    DB_BLOCK_SIZE=8192 (8K) or 16384 (16K) - one way.
    2) The other way: once you install Oracle 10g, at the time you create the database with the Database Configuration Assistant, you can modify the block size on the Initialization Parameters screen:
    - Sizing tab
    Block Size 8K/16K
    *** Remember, by default Oracle 10g uses an 8K block size.
    You use the options on the Sizing tab to configure the block size of your database
    and the number of processes that can connect to this database. The Block Size setting corresponds to the smallest unit of storage within the Oracle database. All storage of database objects (tables, indexes, and so on) is governed by the block size. The block size defaults to 8KB, but you can modify it. Once the database is created, you cannot modify this setting.
    The maximum and minimum size of an Oracle block depends on the operating system. Generally, 8KB is sufficient for most transaction-oriented applications, and larger block sizes such as 16KB and higher are used in data warehouse–type applications.
    The Processes setting specifies the maximum number of simultaneous operating system processes that can be connected to this Oracle database. You must include at least six processes for each of the Oracle background processes. You can increase this number on the Initialization parameter screen.
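    As a concrete sketch of method 1 (the value below is an example, not a recommendation): DB_BLOCK_SIZE is an initialization parameter that must be in place when the database is created, because it cannot be changed afterwards.

        # init.ora - must be set before the database is created
        db_block_size=16384

    After creation you can confirm the setting with: SELECT value FROM v$parameter WHERE name = 'db_block_size';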

  • Database Block Size

    4K vs. 8K block size
    The Database Configuration Assistant sets the block size to 4K for OLTP and 8K for Data Warehouse. I have seen conflicting information about the optimal block size for OLTP.
    What is the difference in the DCA when choosing "OLTP" vs. "Data Warehouse"?
    We have an OLTP database and would like to keep our 8K block size, as the databases are already created. What are the issues with OLTP using the Data Warehouse option in the DCA?
    Thanks

    "You should try to keep your block size the same as how the OS is set up."
    That's true if you're using buffered reads (not desirable in itself), but not otherwise, I think, Paul.
    A key advantage of large block sizes in DWh/DSS systems is that it improves the ratio you can achieve with data segment compression -- you can consider going to 16KB or even 32KB.
    For OLTP systems, where you are generally retrieving single rows, a small block size makes the buffer cache more efficient by allowing Oracle to retrieve fewer rows in a single I/O.
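    To illustrate the compression point (names, sizes, and the source table are my own assumptions, the achievable ratio depends entirely on the data, and not every platform supports a 32K block size), a larger-blocksize tablespace for compressed warehouse segments might look like this:

        -- Cache and tablespace for the 32K block size.
        ALTER SYSTEM SET db_32k_cache_size = 256M;

        CREATE TABLESPACE dw_32k
          DATAFILE '/u01/oradata/DW/dw_32k01.dbf' SIZE 10G
          BLOCKSIZE 32K;

        -- Block-level segment compression: more rows per (bigger) block
        -- generally means more repeated values to compress in each block.
        CREATE TABLE sales_history
          TABLESPACE dw_32k
          COMPRESS
          AS SELECT * FROM sales;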
