What is Optimal Block Size?

Given the hardware improvements of the past several years, should we still stick with a block size of 8 to 100 KB per the DBAG? I would think we can go higher.
But how high is too high?
We are designing a database, and if I stick with Acc (dense) and Time (dense) I will have a block size of around 700k, and this may grow in the future as more accounts are added.
What is your recommendation? Please include the pros and cons of your recommendation.
Server Config:
OS: Sun Solaris 10
RAM: 32 GB
CPUs: 4
Thanks in advance for your ideas and answers!
Venu

If you're in a Planning environment, you really have to benchmark form performance when Accounts and Time are the form axes but are not dense -- sometimes pulling multiple blocks per form is okay, sometimes not. In turn, that performance drives what goes into the block and thus the block size.
The "right" answer depends on perspective -- is calc time or retrieve time the performance measure, or both? Where you fall on that multidimensional continuum determines block size. This is the art part of sizing.
Regards,
Cameron Lackpour

Similar Messages

  • Optimal Block Size for Xserve's RAID hosting Final Cut Server

    What would be the optimal block size for the software RAID on the machine that will be hosting Final Cut Server? The default is 36K. Since FCS is essentially a database, what would be the optimal settings? Any idea what size chunks FCS writes to disk?

    Actually I meant the block size for the internal startup volume where FCS is installed, not the Xsan volumes. As for optimal settings for Xsan volumes, it really depends on the type of data you store on Xsan and, if it is primarily video, what format: SD or HD.

  • Using large block sizes for index and table spaces

    " You are not using large blocksizes for your index tablespaces. Oracle research proves that indexes will build flatter tree structures in larger blocksizes.
    Is this a generic statement that I can use for all tables or indexes? I also have batch and online activity. My primary target is batch and it should not impact online. Not sure if both have common tables.
    How to find the current block size used for tables and index? is there a v$parameter query?
    What is an optimal block size value for batch?
    How do I know when flatter tree str has been achieved using above changes? Is there a query to determine this?
    What about tables, what is the success criterion for tables. can we use the same flat tree str criterion? Is there a query for this?

    user3390467 wrote:
    "You are not using large blocksizes for your index tablespaces. Oracle research proves that indexes will build flatter tree structures in larger blocksizes."
    Is this a generic statement that I can apply to all tables and indexes?
    It is a generic statement used by some consultants. Unfortunately, it is riddled with exceptions and other considerations.
    One consultant in particular seems to have anecdotal evidence that using different block sizes for indexes (large) and data (small) can yield almost miraculous improvements. However, that cannot be backed up due to NDA. Many of the rest of us cannot duplicate the improvements, and some find situations where it results in a degradation (especially with high insert/update rates from separate transactions).
    As for your remaining questions (batch versus online activity, finding the current block sizes, an optimal block size for batch, and how to tell whether a flatter tree structure has been achieved for indexes or tables), I'd strongly recommend that you:
    1) stop using generic tools to analyze specific problems;
    2) define your problem in detail (what are you really trying to accomplish? It seems like performance tuning, but you never actually state that);
    3) define the OS and DB versions in detail, including revision and patch levels.
    If you are having a serious performance issue, I strongly recommend you look at performance tuning specialists such as "http://www.method-r.com/", "http://www.miracleas.dk/", "http://www.hotsos.com/", "http://www.pythian.com/", or even Oracle's own Performance Tuning consultants. Definitely worth the price of admission.
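    On the narrower question of how to find the current block sizes, they are visible in the data dictionary. A minimal sketch (Python with the cx_Oracle driver; the connection details are placeholders) that reads the database default from v$parameter and the per-tablespace values from DBA_TABLESPACES:

        import cx_Oracle

        # Placeholder credentials/DSN -- substitute your own.
        conn = cx_Oracle.connect("system", "password", "dbhost:1521/orcl")
        cur = conn.cursor()

        # Database-wide default block size.
        cur.execute("SELECT value FROM v$parameter WHERE name = 'db_block_size'")
        print("db_block_size:", cur.fetchone()[0])

        # Per-tablespace block sizes (these may differ from the default).
        cur.execute("SELECT tablespace_name, block_size FROM dba_tablespaces ORDER BY tablespace_name")
        for name, block_size in cur:
            print(name, block_size)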

  • DPM transfer block size

    I am setting up DPM 2010 and am using an MSA 2000 for storage. The MSA best practices suggest setting the virtual disk chunk size to match the transfer block size of the application using it. Available chunk sizes are 16 KB, 32 KB, and 64 KB; 64 KB is the default on the MSA. Does anyone know what the DPM transfer block size is?

    DPM requests data at the file-system level, so there are many layers before it reaches the storage/hardware devices, and the request size may change at any of them. In any case, most DPM operations use a 16 KB block size.
    Thanks, Praveen D [MSFT]

  • Recommended Block Size For RAID 0

    I am setting up a RAID configuration (striping, no parity, Mac G5, OS X) and was curious what the recommended block size should be. Content is primarily (but not limited to) images created with Adobe Photoshop CS2, ranging in size from 1.5 MB to over 20 MB. The default for OS X is 32K chunks of data.
    Drives are External FW-400.
    Many thanks, and Happy Holidays to all!

    If it is just scratch, run some benchmarks with it set to 128K and 256K and see how it feels with each. The default is too small, though some find it acceptable for small images. For larger files you want a larger block size -- and for PS scratch you definitely want 128K or 256K.
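    If you want numbers rather than feel, here is a quick-and-dirty sketch (Python; "/Volumes/Scratch/testfile" is a hypothetical path on the RAID) that times sequential writes at a few chunk sizes. It only measures sequential throughput, and OS caching can skew results, so treat it as a rough comparison:

        import os
        import time

        def write_throughput(path, block_kb, total_mb=512):
            """Write total_mb of data in block_kb-sized chunks and return MB/s."""
            block = os.urandom(block_kb * 1024)
            count = (total_mb * 1024) // block_kb
            start = time.time()
            with open(path, "wb") as f:
                for _ in range(count):
                    f.write(block)
                f.flush()
                os.fsync(f.fileno())  # force the data to disk before stopping the clock
            elapsed = time.time() - start
            os.remove(path)
            return total_mb / elapsed

        for kb in (32, 128, 256):
            print(kb, "KB chunks:", round(write_throughput("/Volumes/Scratch/testfile", kb), 1), "MB/s")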

  • Do tablespace block size and database block size have different meanings?

    At the time of database creation we can define the database block size, which cannot be changed afterwards. For a tablespace we can also define a block size, which may be the same as or different from the block size defined at database creation. If it is different, then what is the actual block size used by the Oracle database?
    Can anyone explain in detail?
    Thanks in advance
    Regards

    You can't meaningfully name things when there's nothing to compare and contrast them with. If there is no keep or recycle cache, then whilst I can't stop you saying, 'I only have a default cache'... well, you've really just got a cache. By definition, it's the default, because it's the only thing you've got! Saying it's "the default cache" is simply linguistically redundant!
    So if you want to say that, when you set db_nk_cache_size, you are creating a 'default cache for nK blocks', be my guest. But since there's no other bits of nk cache floating around the place (of the same value of n, that is) to be used as an alternate, the designation 'default' is pointless.
    Of course, at some point Oracle may introduce keep and recycle caches for non-standard block sizes, and then the differentiator 'default' will become meaningful. But it isn't yet.
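    To make the mechanics concrete: a tablespace can only use a non-standard block size if a matching nK buffer cache has been configured first. A rough sketch (Python with cx_Oracle; the connection details, cache size, tablespace name, and datafile path are all hypothetical), assuming a database whose standard block size is 8K:

        import cx_Oracle

        # Placeholder credentials/DSN -- connect as a suitably privileged user.
        conn = cx_Oracle.connect("system", "password", "dbhost:1521/orcl")
        cur = conn.cursor()

        # A 16K tablespace cannot be created until a 16K buffer cache exists.
        cur.execute("ALTER SYSTEM SET db_16k_cache_size = 64M")

        # This tablespace then uses 16K blocks even though the database default is 8K.
        cur.execute(
            "CREATE TABLESPACE ts_16k "
            "DATAFILE '/u01/oradata/orcl/ts_16k01.dbf' SIZE 100M "
            "BLOCKSIZE 16K"
        )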

  • Cache block size

    In the Sun Ultra IIIi, what is the block size that is transferred between the L1 and higher caches? The data sheet says there is a JBus of width 128b. Is this the block size? Is the b here bits or bytes?

    I would say this figure is bits; Sun usually describes bandwidth in bytes, as MB/s or similar. Check the document again -- I seem to remember the US IIIi and JBus architecture being described pretty clearly in the data sheet. I'll see if I can come up with more from the product JTF.

  • OS BLOCK SIZE SOLARIS 8

    What is the block size of Solaris 8.0? Is there any command to find out the OS block size?

    If you want to know the filesystem block size, it is 8K by default, and the fragment size is 1K. To check any filesystem, you can use the "fstyp -v <device>" command. Note that the device is the block or raw device of the filesystem.
    Here is some sample output (the program dumps a lot more information):
    sblkno 16 cblkno 24 iblkno 32 dblkno 720
    sbsize 2048 cgsize 8192 cgoffset 32 cgmask 0xfffffff0
    ncg 104 size 4608072 blocks 4534838
    bsize 8192 shift 13 mask 0xffffe000
    fsize 1024 shift 10 mask 0xfffffc00
    bsize is the blocksize and fsize is the fragment size.
    Alan
    Sun Developer Technical Support
    http://www.sun.com/developers
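    If you'd rather check programmatically than parse fstyp output, a small sketch (Python, run against any mounted filesystem path) that reads the same values via statvfs:

        import os

        # statvfs exposes the filesystem's preferred block size and fragment size.
        st = os.statvfs("/")
        print("block size (f_bsize):   ", st.f_bsize)   # typically 8192 on UFS
        print("fragment size (f_frsize):", st.f_frsize)  # typically 1024 on UFS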

  • "MediaKit reports block size error" when formatting drive

    I got a new 2TB hard drive, and I'm trying to format it as HFS+ journaled case sensitive encrypted, each time I try I get the following error:
    "MediaKit reports block size error, usually caused by not being a multiple of 512."
    This is a brand new drive, it has a GUID partition map with one partition, that was created by Disk Utility, and the disk passed a full hardware scan from Drive Genius.
    Normally when I get an error I can google it, but this particular error only gives me links to RAID problems, and I'm not doing anything with RAID.
    What determines the block size? I did not get an option to set it anywhere that I could see.
    How do I fix this error and get my drive encrypted?

    This does indeed seem to be the problem.
    Before you replied, I RMAed the Seagate disk assuming it to be faulty, because, well, 4K is a multiple of 512 bytes and all. Today my new Western Digital disk arrived -- it gives the IDENTICAL error. The chances of two disks from two vendors both being faulty seem low, so I think we can safely say OS X has a nasty security bug that has been languishing for some time.
    It seems the only way I can protect my backups is to use third-party software, or to find an old 2TB disk somewhere that still has 512-byte blocks rather than 4K blocks.
    Hopefully Apple get this bug fixed soon, and hopefully others will find this post when they google the error message and at least understand that their disk is probably fine.

  • SSD erase block size

    I've just had a new T410 delivered with a 128 GB solid-state drive. Linux reports the drive model as "SAMSUNG MMCRE28G". Can someone tell me what the erase block size is, please, so I can get things aligned properly? I can't find any mention of this drive at all, let alone specs, on Samsung's rather useless website. It's easy enough to align the partitions on multiples of 1 MB to be safe, but I'm not sure what "stripe-width" to use to tell the ext4 filesystem to align files appropriately. It's too bad this basic information doesn't seem to be published anywhere...
    Thanks!
    James.

    Did you read the section immediately after the one you linked? https://wiki.archlinux.org/index.php/Pa … ng_tools_2
    In the past, proper alignment required manual calculation and intervention when partitioning. Many of the common partitioning tools now handle partition alignment automatically:
    To verify a partition is aligned, query it using /usr/bin/blockdev as shown below; if a '0' is returned, the partition is aligned.
    Is that not sufficient for your purposes?
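    For reference, a small sketch of that verification (Python on Linux; "sda" and "sda1" are placeholder device names), first via blockdev's alignment-offset query and then cross-checking the partition's start sector from sysfs against a 1 MiB boundary:

        import subprocess

        DEV = "sda"     # hypothetical disk
        PART = "sda1"   # hypothetical partition on that disk

        # blockdev --getalignoff prints 0 when the partition is aligned.
        offset = subprocess.run(
            ["blockdev", "--getalignoff", f"/dev/{PART}"],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        print("alignment offset:", offset)

        # Cross-check: the start sector (always reported in 512-byte units)
        # should land on a 1 MiB boundary.
        with open(f"/sys/block/{DEV}/{PART}/start") as f:
            start_sector = int(f.read())
        print("aligned to 1 MiB:", (start_sector * 512) % (1024 * 1024) == 0)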

  • Optimize Block Size

    Hi, could someone explain how we can optimize a growing block size?
    Thanks
    Radhika

    I disagree that the optimal block size is under 100k; with the servers available these days you can easily run blocks much bigger than that. Block size tuning will affect both calculation and retrieval times, so extensive testing is needed before and after changing the dense/sparse settings.
    Have you tried optimizing the caches? There is no quick fix -- all tuning requires extensive before-and-after testing.

  • Optimal NTFS block size for Oracle 11G on Windows 2008 R2 (OLTP)

    Hi All,
    We are currently setting up an Oracle 11G instance on a Windows 2008 R2 server and were looking to see if there is an optimal NTFS block size. I've read the following: http://docs.oracle.com/cd/E11882_01/win.112/e10845/specs.htm
    But it only mentions the block sizes that can be used (2K - 16K), and basically what I got out of it was that the different block sizes affect the maximum number of database files possible for each database.
    Is there an optimal NTFS block size for Oracle 11G OLTP system on Windows?
    Thanks in advance

    Is there an optimal NTFS block size for an Oracle 11G OLTP system on Windows?
    Ideally the FS block size should equal the Oracle tablespace block size, or at least divide evenly into the Oracle block size.
    For example, if the Oracle block size is 8K, then an NTFS block size of 8K is best, but 4K or 2K will also work.
    Both must also be whole multiples of the disk sector size. Older disks had 512-byte sectors; contemporary HDDs usually have an internal sector size of 4K.
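    A tiny sketch of those divisibility rules (plain Python; the sizes are example values to substitute with your own):

        def aligned(oracle_bs, fs_block, sector):
            """True when the Oracle block size is a whole multiple of the FS block size,
            and both are whole multiples of the disk sector size."""
            return (oracle_bs % fs_block == 0
                    and fs_block % sector == 0
                    and oracle_bs % sector == 0)

        # 8K Oracle blocks on 4K NTFS clusters and a 4K-sector ("Advanced Format") disk.
        print(aligned(oracle_bs=8192, fs_block=4096, sector=4096))   # True
        # A 16K NTFS cluster does not divide evenly into an 8K Oracle block.
        print(aligned(oracle_bs=8192, fs_block=16384, sector=512))   # False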

  • What should the "raid block size" be

    I just installed three 750 GB SATA Barracudas in my hour-old 8-core Mac. I am striping them together, along with the OS, into a RAID 0.
    It is asking me what "RAID block size" to use:
    16k
    32k
    64k
    128k
    256k
    The help menu suggests that for the most throughput, a higher number might be best...
    I will be using this RAID for DV & HD video with Final Cut Studio 2 & AFX CS3.
    Not sure which value to use.
    Thanks,
    Steve

    I would not include the OS on your HD & DV storage RAID.
    Apple RAID keeps improving, so you can use a smaller default block size for non-video/audio uses such as CS3 scratch disks.
    Booting from a RAID has limitations and problems; I would prefer a dedicated fast boot drive instead and isolate the system activity there as well, which also helps.

  • Software RAID0 for Video Editing+Storage: What Stripe Block Size ?

    Hi
    I'm planning to set up a RAID 0 with two 2TB HDDs to store and edit the videos from my digital camera. Most files will be 5 to 15 GB in size. In the past I had RAID 0 arrays on RHEL with the (old) default stripe block size of 64 KB. Since the new array will only contain very big files, would it be better to go with a 512 KB stripe block size? 512 is also the default setting now in the GNOME disk utility, which I use for my partitions.
    Another question: does the so-called "RAID lag" exist? I think I've seen occasional stuttering in movies when they're played from a RAID 0/5 without caching enabled in the player (with caching it's fine). Games also seem to freeze occasionally when installed on a RAID (I had this problem in the past with Wine games on a RAID 0; they sometimes froze when loading data, which never happened on a single HDD).
    Many thanks in advance for your suggestions

    That is a hard question to answer; nothing is best for everyone. However, if I am to generalize, I would put it this way: if you want to cut everything from short promos to Hollywood pictures, you need:
    A high-end Windows PC (only because the Mac Pro hasn't been updated in ages)
    Avid Media Composer with Nitrus DX
    Two monitors
    A broadcast monitor
    An HD deck
    Pimping 5.1 speakers
    A good mixer
    You are looking at over 70,000 or 80,000, and it could approach even more -- HD decks alone run at least 15k.
    If price is not an issue, then there you go...
    However, this is not realistic for most people, nor the best solution by any means. I run a MacBook Pro with Avid (as primary), Final Cut 7, Final Cut X (for practice; I haven't had to use it on a job yet), and Premiere (just in case).
    I am a Final Cut child who grew up on it and love it; however, everything I have been doing in the last few years is on Avid.
    I have a second monitor.
    I am very portable, and the rest of the gear I usually get wherever I work. I am looking into getting a good broadcast monitor connected via AJA Thunderbolt.
    Like I said, this is a very open question; there is no (BEST), it all depends on what you will be doing. If you get Avid (which can do everything, but is clunky as **** and counterintuitive) and you are only cutting wedding videos and short-format stuff, it would be overkill galore. Just get FCP X in that case... simple, easy, one app...
    Be more specific and you will get clearer answers.

  • What is the optimal cel size?

    I'm working with a program called Natural Scene Designer Pro v5.0 which allows the user to create a series of still rendered ray tracings into an animated AVI. I would like to edit these AVI clips in Premiere Elements 10, but before going terribly far into this project, I was curious about a couple of things.
    What is the optimal cel size that I should instruct NSD to produce that will most satisfy Premiere Elements?
    What are the optimal Project Settings in PE10 that should be used in editing this type of "footage" ?
    Initial tests have been done at 320x240 @ 30 fps (which does not appear to be supported among PE10's presets), but eventually I'd like to make things as large as is practical and for the broadest audience.
    It has been several years since I played around with things like this... back then it was on another Windows machine with Premiere Elements 2. I need to get reacquainted with Premiere Elements; version 10 looks a little more mature, and myself, more gray.
    Thank you very much for any replies.
    Kind regards,
    Kelly

    Kelly,
    Thank you for that clarification. I had mis-read, above, and thought that it WAS an MS CODEC pack.
    The problem with too many CODEC packs is that they often install "stuff," besides the CODEC's, and some of that stuff is not good for a Video editing computer. Also, many (most?) of the CODEC's installed are hacked, or reverse-engineered versions, and will often overwrite good, commercial CODEC's, that are installed. This is why Adobe hides its Adobe MainConcept CODEC's - so that other programs cannot overwrite them. However, their "priority" is still vulnerable, and many CODEC packs will change that priority in the Registry, so that its CODEC's are chosen first, by the system.
    I am a big fan of the Lagarith Lossless CODEC, when using intermediate files. The same for another one, UT Lossless. They DO compress, but at a lossless level, i.e. the quality stays the same. Two others that I like are both Apple QuickTime CODEC's, Animation and Photo-PNG, which are also lossless. However, on my PC, either the Lagarith, or the UT are my first choices. Both of those are wrapped in the AVI format, where the two QT CODEC's are wrapped in the MOV format. [I buy 3D work from several artists, who are on Mac's, and specify the MOV Animation CODEC. Those have always worked 100% on my PC.]
    Now, I see that you have FFDShow installed. One power-user here, Neale, installed that, at the direction of Adobe Technical Support, and all has been great for him. However, and especially in the PrPro Forum, many users have experienced very bad things with it, and for most, PrPro would not even run. Uninstalling that CODEC has proved very difficult, and some have had to wipe their HDD, and reinstall everything from scratch. I shudder, when mention is made of it, BUT Neale has had zero issues, and its installation DID fix a problem, that he had. I avoid it, but that is my personal choice.
    Now, back to the Lagarith Lossless CODEC. One issue with some CODEC packs is that what they install is not even close to the latest version of a particular CODEC. Some are very, very old versions. Before I did much work with the Lagarith, I would check out their site, and compare version numbers, between the one installed with the CODEC pack, and what is currently the newest version. It, like the UT Lossless, is free, so it would cost nothing to install the latest version, if you did not get that one. This article goes into a bit more info on Lagarith and UT: http://forums.adobe.com/thread/875797?tstart=0
    Hope that helps, and thank you for all of the info that you have provided, and for your patience.
    Good luck,
    Hunt
