Cache block size

On the Sun UltraSPARC IIIi, what is the block size that is transferred between the L1 and higher-level caches? The data sheet says that there is a JBus of width 128b. Is this the block size? Is the b here bits or bytes?

I would say this figure is bits; Sun usually describes bandwidth in bytes, as MB/s or similar. Check the document again, I seem to remember the US IIIi and JBus architecture being described pretty clearly in the data sheet. I'll see if I can come up with more from the product JTF.
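To put rough numbers on the bits-versus-bytes question (a sketch only; the exact bus clock depends on the model): 128 bits is 16 bytes per bus cycle, so at, say, a 200 MHz bus clock the peak transfer rate would be 16 B x 200 MHz = 3.2 GB/s. Either way, the bus width is a separate parameter from the cache line (block) size: a 64-byte line, for example, would simply move as four 16-byte transfers over a 128-bit bus.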

Similar Messages

  • How can I modify the cache block size to 64k in the 3510 array

    Which option should I modify if I want to set the cache block size to 64k on the 3510 array?
    My stripe size was set to 64k, so I want to set my cache block size to 64k as well.
    The documentation mentions that it can be modified, but gives no instructions on how,
    and I checked all the cache-related options on the 3510 and didn't find it... :(
    Could anybody tell me how to do it?
    Thx!!

    Could anybody help me?

  • Data block size

    I have just started reading the Concepts guide and I learned that Oracle database data is stored in data blocks. The standard block size is specified by the
    DB_BLOCK_SIZE init parameter. Additionally, we can specify up to four other block sizes using the DB_nK_CACHE_SIZE parameters.
    Let us say I define in the init.ora
    DB_BLOCK_SIZE = 8K
    DB_CACHE_SIZE = 4G
    DB_4K_CACHE_SIZE=1G
    DB_16K_CACHE_SIZE=1G
    Questions:
    a) Does this mean I can create tablespaces with 8K, 4K and 16K block sizes only?
    b) Whenever I query data from these tablespaces, will the blocks go and sit in the respective caches?
    Thanks in advance.
    Neel

    Yes, it will give an error message if you create a tablespace with a non-standard block size without specifying the corresponding DB_nK_CACHE_SIZE parameter in the init parameter file.
    Use the BLOCKSIZE clause of the CREATE TABLESPACE statement to create a tablespace with a block size different from the database standard block size. In order for the BLOCKSIZE clause to succeed, you must have already set the DB_CACHE_SIZE and at least one DB_nK_CACHE_SIZE initialization parameter. Further, the integer you specify in the BLOCKSIZE clause must correspond with the setting of one DB_nK_CACHE_SIZE parameter. Although redundant, specifying a BLOCKSIZE equal to the standard block size, as specified by the DB_BLOCK_SIZE initialization parameter, is allowed.
    The following statement creates tablespace lmtbsb, but specifies a block size that differs from the standard database block size (as specified by the DB_BLOCK_SIZE initialization parameter):
    CREATE TABLESPACE lmtbsb DATAFILE '/u02/oracle/data/lmtbsb01.dbf' SIZE 50M
    EXTENT MANAGEMENT LOCAL UNIFORM SIZE 128K
    BLOCKSIZE 8K;
    Reference: http://download.oracle.com/docs/cd/B28359_01/server.111/b28310/tspaces003.htm
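    A small sketch illustrating (a) and (b) under the settings above (the tablespace names and datafile paths are hypothetical):
    -- With DB_BLOCK_SIZE=8K, DB_4K_CACHE_SIZE and DB_16K_CACHE_SIZE set,
    -- tablespaces can use 8K (the standard), 4K or 16K blocks only.
    CREATE TABLESPACE ts4k  DATAFILE '/u02/oracle/data/ts4k01.dbf'  SIZE 50M BLOCKSIZE 4K;
    CREATE TABLESPACE ts16k DATAFILE '/u02/oracle/data/ts16k01.dbf' SIZE 50M BLOCKSIZE 16K;
    -- A BLOCKSIZE 32K tablespace would fail here, because no DB_32K_CACHE_SIZE is configured.
    -- Blocks read from ts4k and ts16k are cached in their own pools, which you can see in:
    SELECT name, block_size, current_size FROM v$buffer_pool;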

  • Install Recommendations (RAID, ASM, Block Size etc)

    Hello,
    I am about to set up a new Oracle 10.2 database server. In the past, I used RAID 5 since 1) it was a fairly small database, 2) there were not a lot of writes, 3) high availability, and 4) it wasted less space compared to other RAID techniques.
    However, even though our database is still small (around 100GB), we are noticing that when we update our data, the time it takes is starting to grow to the point where an update that used to take about an hour now takes 10-12 hours or more. One thing we noticed is that if we created another tablespace with a block size of 16KB versus our normal tablespace with a block size of 8KB, we almost cut the update time in half.
    So, we decided that we should really start from scratch on a new server and tune it optimally. Here are some questions I have:
    1) Our server is a DELL PowerEdge 2850 with 4x146GB Hard Drives (584GB total). What is the best way to set up the disks? Should I use RAID 1+0 for everything? Should I use ASM? If I use ASM, how is the RAID configured? Do I use RAID0 for ASM since ASM handles mirroring and striping? How should I setup the directory structure? How about partitioning?
    2) I am installing this on Linux and when I tried on my old system to use 32K block size, it said I could only use 16K due to my OS. Is there a way to use a 32K block size with Linux? Should I use a 32K block size?
    Thanks!

    The way I usually handle databases of that size, if you don't feel like migrating to ASM redundancy, is to use RAID 10. RAID 5 is HORRIBLY slow (your redo logs will hate you), and if your controller is any good, a RAID 10 will be the same speed as a RAID 0 on reads, and almost as fast on writes. Also, when you create your array, make the stripe blocks as close to 1MB as you can. Modern disks can usually cache 1MB pretty easily, and that will speed up the performance of your array by a lot.
    I just never got into ASM, not sure why. But I'd say build your array as a RAID-10 (you have the capacity) and you'll notice a huge difference.
    16k block size should be good enough. If you have recordsets that are that large, you might want to consider tweaking your multiblock read count.
    ~Jer
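    If you do experiment with the multiblock read count mentioned above, a minimal sketch (the value 128 is only an example; on 10.2 you can also leave the parameter unset and let Oracle self-tune it):
    -- Check the current setting (SQL*Plus)
    SHOW PARAMETER db_file_multiblock_read_count
    -- Example only: target roughly 1MB multiblock reads with an 8K block size (128 * 8K = 1M)
    ALTER SYSTEM SET db_file_multiblock_read_count = 128;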

  • Drives, block size and raptor300 choice

    Hi,
    Got a MacPro2,1 ('07 flavour) and was planning on upgrading some internal drives. My boot is currently using two striped WD 500GBs with a block size of 16k. If I were to replace these with newer drives and changed the block size to 32k, would there be any issues to speak of? Thinking of SuperDuper backups, Adobe CS3 licensing, Time Machine doing a whole boot update, etc.
    Alternatively I may go for a pared-down boot and use one Raptor 300GB, but which one?
    WD3000BLFS / WD3000HLFS / WD3000GLFS? - HLFS looks like the one with regularly placed SATA connections, but unsure which fits the MacPro sleds. Also, is there a link for further isolation solutions, e.g. vibration dampeners?
    Do the RAID Editions of WD drives still cut the mustard in terms of performance (1TB Western Digital WD1002FBYS RE3, SATA 3Gb/s, 7200 rpm, 32MB cache, 4.20 ms)?
    Many thanks
    J

    16k used to be slightly better for a boot drive. The trouble with Apple's RAID is that I don't know how to change it on the fly like I can with SoftRAID.
    Okay, so you probably want 4 WD VelociRaptors for scratch, or 3 SSDs.
    Any WD Black or RE3 should be just fine, 500GB up to 1TB, and then you get into the 2TB Green RE4 (yes, an RE Green drive edition).
    The other factor is you want even more than 16GB RAM to be used as cache for primary 'scratch'.
    There is a guide to photoshop acceleration and optimizing up on
    http://www.macgurus.com - lower left side panel of links to articles.

  • Buffer pool keep and multiple db block sizes

    I have a tablespace with an 8k block size (the database default) and a tablespace with a 16k block size. I have db_cache_size and db_16k_cache_size set (obviously).
    I also have the buffer pool KEEP configured in the database.
    Question: if a table is placed in the tablespace with the 16k block size and its buffer pool is KEEP, does it end up in the KEEP pool (like tables from the 8k tablespace with the KEEP pool set), or does it end up in the 16k buffer cache?

    You can find the answer in the following online manual (in short: KEEP and RECYCLE pools exist only for the standard block size, so an object in a non-standard block size tablespace is cached in the corresponding DB_nK cache):
    http://download-west.oracle.com/docs/cd/B19306_01/server.102/b14220/memory.htm#i16408

  • Raid 0 (Stripe) for OS X boot disk? Best Performance and block size

    Hi,
    so this is a new thread to an older question I had and would like some feedback on;
    I have a new Mac Pro with 4 matched 1TB Caviar Black drives. I WILL be doing full Time Machine backups, as well as a regular independent full-system backup.
    That being said, I have 4 drives open and am looking for suggestions. I am leaning toward 2 sets of stripes (one for the OS and one for 'work space'), the former with a 32k stripe block size, the latter with 64k (it will hold video, audio, scratch, and, yes, games).
    Does this sound alright? Is there an issue with striping the boot drive? Is the block size of 32k (or 64k) optimal?
    Thanks!
    Dan

    Hi D3 Shooter, regarding your question,
    D3 Shooter wrote:
    You brought to mind something I did not take into consideration, Time Machine. I really like the simplicity of TM as it saved me once before. So, could you tell me, for photo files and some video, how much does striping (% wise) improve the accessing and filing of such files compared to no striping, just using internal drives (7200/WD/1TB/Caviar)? I have not done striping before and want to weigh in because of the backup storage issues now. Thanks.
    Just give it a try and see if it is worth it for you.
    Striping:
    • just reduces access/transfer time, because in practice the access is distributed in parallel across several DDMs (old school, but it works great!). I think for video and file work the advantage is that you can access the whole object sooner (rather than faster).
    • this distribution also removes a lot of old-style queuing on the device over the path. This was resolved in the late 1980s, so no real rocket science here.
    The issues with striping are few, and are basically common to all the RAID implementations (except JBOD, which of course is not RAID) when compared to a single spindle. The discussions are enormous and plentiful via Google, and experiences and opinions vary widely.
    For the I.T. people, it's the advantage they get for access using a smart disk controller that caches goodies like indexes and such, so that they can sustain a zillion trivial transactions/sec (i.e. banking and internet stuff)... stuff that is of no interest to me.
    For the creative people and the many applications that deal in BLOBs (like video, film and remote sensing objects), getting use of the objects sooner (not faster) is of prime importance for workflow efficiency. If you have this need, then striping stuff across disks is for you!
    Time Machine works fine, as it seems fairly agnostic to what is implemented under the disk file system. My issue with Time Machine is that I don't want it looking after my production stuff, only keeping an eye on my admin I.T.-type stuff such as ~/ and data files.
    As posted on this thread:
    • availability is the major concern with any file system (cloud or RAID or other). RAID with parity and double-parity schemes (RAID 1, 3, 5, 6) and implementations such as RAID 6 + LSF (log-structured file) are all wonderful for the business workflows that need it.
    • timely access in a workflow is another
    • cost benefit is another
    However, a *great benefit* for me of *consolidating small storage components under one huge file system is that you don't have to COPY anything around*. This is marvelous, especially when you think you have to move 2TB of stuff from one place to another. That takes a lot of time with el cheapo disks that don't have fast interfaces such as SATA/SAS or FC, for example.
    As always, and as has been addressed by others on this thread (Hatter), if you lose a component storage device the whole file system is hosed or severely degraded, unless you spend a lot of money on full ranks of DDMs with hot spares and a very good RAID controller card. Again, it's money.
    Yeah, sure, you can carry some parity RAID implementation around across 3 disks, but the storage capacity usage is dreadful. This is why the more complex RAID implementations are in groups of 10+ DDMs (yep, people can argue, but this is the mainstream).
    My external disk arrays are merely two LUNs (SAS domains) with two file systems implemented using 2 x 4TB of 1TB DDMs, all RAID 0, no parity (no availability); I just want speed. I look after my own "availability" with my archive solution. If the operation dies, I start again. I'm happy with that. RAID 5 has the well-known write penalty performance hit (update in place), and RAID 6+ is lousy for huge objects but good for I.T., although it is OK if you lose two disks in a stripe (rank).
    They all have their flaws... and mirroring a RAID 0 (RAID 1/0) seems to be popular with storage vendors because they can sell you more disk, and that's fair enough when a proper business workflow depends on it.
    However, you can achieve this stuff if you change your workflow slightly.
    Other than these, the rest is tech specs and stuff under the covers.
    So do what is right for you and your business.
    I don't like spending money on nasty el cheapo FW800 LaCie disk enclosures with their junky components and their ilk, having been burned badly by several corrupted devices and losing TBs of content; this is why I invested in a high-speed LTO-4 Ultrium data tape archive solution.
    Sorry for the long post...
    w

  • How to change existing database block size in all tablespaces

    Hi,
    Need help to change the block size for my existing database, which has an 8KB block size.
    I have read that we can only set the block size during database creation, but I want to change it after database installation,
    because for some reason I don't want to change the database installation script.
    Can anyone list the steps to change the database block size for all existing tablespaces (except SYSTEM and TEMP)?
    I want to change it to 32KB.
    Thank you for you time.
    -Rushang Kansara

    > We are facing more and more physical reads, I thought by using 32K block size
    we would resolve that..
    A physical read reported by Oracle may not be a true physical read: it could well be a read satisfied from the O/S file system cache. With raw devices, for example, a physical I/O reported by Oracle is indeed one, as there is no O/S cache for raw devices. So one needs to be careful how one interprets numbers like physical reads.
    Lots of physical reads may not necessarily be a bad thing. In contrast, a high percentage of "good/fast" logical reads (i.e. a high % buffer cache hit ratio) may indicate a serious problem with application design, as the application is churning through the exact same data again and again and again. Applications should typically only make a single pass through a data set.
    The best way to deal with physical reads is to make them fewer. Simple example: a database deals with a lot of inserts. Some bright developer decided to over-index a table. Numerous indexes for the same columns exist in different physical column orders.
    Oracle now spends a lot of time dealing with (reading) these indexes when inserting (or updating) a row. A single write I/O may incur a 100 read I/Os as a result of these indexes needing to be maintained.
    The bottom line is that "more and more physical I/O" is merely a symptom of a problem. Trying to speed these reads up could well be a wasted exercise. Besides, the most optimal approach to "lots of I/O" is to tune the application to do less I/O.
    I/O is the most expensive operation for an RDBMS. It is very difficult to make this expense less (i.e. make I/Os faster). It is more effective to make sure that you use this expensive resource in an optimal way.
    Simple example: a single very large table with 4 indexes is not a very efficient design I/O-wise. A single very large partitioned table with local indexes can reduce I/O on that table by up to 80% in my experience.
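    As a rough illustration of that last point (the table and column names here are hypothetical), a partitioned table with a local index might look like this:
    -- Range-partitioned by month, with a LOCAL index, so that scans and
    -- index maintenance only touch the partitions they actually need.
    CREATE TABLE sales_fact (
      sale_id   NUMBER,
      sale_date DATE,
      amount    NUMBER
    )
    PARTITION BY RANGE (sale_date) (
      PARTITION p_jan VALUES LESS THAN (DATE '2008-02-01'),
      PARTITION p_feb VALUES LESS THAN (DATE '2008-03-01'),
      PARTITION p_max VALUES LESS THAN (MAXVALUE)
    );
    CREATE INDEX sales_fact_date_ix ON sales_fact (sale_date) LOCAL;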

  • How should the index block size be set in warehouse databases?

    Hi,
    We have a warehouse database.
    I cannot find out the index block size.
    1. Where can I find out our index block sizes?
    2. How can I enlarge the index block sizes? Is this related to the tablespace?
    After your suggestion, do I need to increase or set the buffer cache KEEP pool according to the block sizes? Can 2K, 4K, 8K, 16K and 32K be specified?
    Could you help me please?
    thanks and regards,

    See the BLOCK_SIZE column in DBA_TABLESPACES.
    You can't "increase" the block size. You'd have
    a) to allocate DB_xK_cache_size for the new "x"K block size
    b) create a new tablespace explicitly specifying the block size in the CREATE TABLESPACE command
    c) rebuild your indexes into the new tablespace.
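    A minimal sketch of those steps (the cache size, tablespace name, datafile path and index name are all placeholders; test before trying this on a production warehouse):
    -- a) check the current block sizes, then allocate a cache for the new size
    SELECT tablespace_name, block_size FROM dba_tablespaces;
    ALTER SYSTEM SET db_16k_cache_size = 512M;
    -- b) create a tablespace with the non-standard block size
    CREATE TABLESPACE idx_16k
      DATAFILE '/u03/oracle/data/idx_16k01.dbf' SIZE 2G
      EXTENT MANAGEMENT LOCAL
      BLOCKSIZE 16K;
    -- c) rebuild an index into the new tablespace
    ALTER INDEX my_big_index REBUILD TABLESPACE idx_16k;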
    Indexes created in a tablespace with a larger block size have more entries in each block.
    You may get better performance.
    You may get worse performance.
    You may see no difference in performance.
    You may encounter bugs.
    "increasing block size" is an option to be evaluated and tested thoroughly. It is not, per se, a solution.
    Hemant K Chitale
    http://hemantoracledba.blogspot.com

  • Database Block Size Smaller Than Operating System Block Size

    Finding that your database block size should be in multiples of your operating system block size is easy...
    But what if the reverse were the case, with the database block size smaller than the operating system block size?
    What happens when you store an Oracle Data Block that is 2 KB in an 8 KB Operating System Block?  Does it waste 6 KB or are there 4 Oracle Data Blocks stored in 1 Operating System Block?
    Is it different if you use ASM?
    I'd like to introduce a 2 KB block size into a RAC Exadata environment for a small set of highly transactional tables and indexes to reduce contention on blocks being requested in the Global Cache.  I've witnessed horrendous wait times for a plethora of sessions when a block was highly active.
    One index in particular has a column that indicates the "state" of the record, it is a very dense index.  Records will flood in, and then multiple processes will poll, do work, and change the state of the record.  The record eventually reaches a final state and is never updated again.
    I know that I can fill up the block with fluff by adjusting the percent free, percent used, and initrans, but that seems like a lazy hack to me and I'd like to do it right if possible.
    Any thoughts or wisdom is much appreciated.
    "The database requests data in multiples of data blocks, not operating system blocks."
    "In contrast, an Oracle block is a logical storage structure whose size and structure are not known to the operating system."
    http://docs.oracle.com/cd/E11882_01/server.112/e25789/logical.htm#BABDCGIB

    You could have answered your own questions if you had just read the top of the page in the doc you posted the link for:
    >
    At the finest level of granularity, Oracle Database stores data in data blocks. One logical data block corresponds to a specific number of bytes of physical disk space, for example, 2 KB. Data blocks are the smallest units of storage that Oracle Database can use or allocate.
    An extent is a set of logically contiguous data blocks allocated for storing a specific type of information. In Figure 12-2, the 24 KB extent has 12 data blocks, while the 72 KB extent has 36 data blocks.
    >
    There isn't any 'wasted' space using 2KB Oracle blocks on 8KB OS blocks. As the doc says, Oracle allocates 'extents', and an extent, depending on your space management, is going to be a substantial multiple of blocks. You might typically have extents that are multiples of 64 KB, and that would be 8 OS blocks for your example. Yes, it is possible that the very first OS block and the very last block might not map exactly to the Oracle blocks, but for a table of any size that is unlikely to be much of an issue.
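    If you want to see how the extents would actually map, a hypothetical check (the tablespace name is a placeholder) is to look at the extent sizes for a segment in the 2K tablespace; a 64 KB extent spans 32 contiguous 2K Oracle blocks and 8 contiguous 8K OS blocks:
    SELECT segment_name, extent_id, bytes, blocks
      FROM dba_extents
     WHERE tablespace_name = 'TS_2K'
     ORDER BY segment_name, extent_id;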
    The single-block reads used for some index accesses could affect performance since the read of a 2K Oracle block will result in an 8K OS block being read but that 8K block is also likely to be part of the same index.
    The thing is though that an index entry that is 'hot' is going to be hot whether the block it is in is 2K or 8K so any 'contention' for that entry will exist regardless of the block size.
    You will need to conduct tests using a 2K (or other) block and cache size for your index tablespaces and see which gives you the best results for your access patterns.
    You should use the standard block size for ALL tablespaces unless you can substantiate the need for a non-standard size. Indexes and LOB storage are indeed the primary use cases for using non-standard block sizes for one or more tablespaces. Don't forget that you need to allocate the appropriate buffer cache.

  • Can the tablespace block size be different from the database block size?

    I have a 10.2.0.3 database on a Unix system.
    I created the database using the default block size of 8k. However, the client application requires a 16k block size database. Can I work around this by creating a tablespace that has a 16k block size, instead of dropping and recreating the database?
    Thanks a lot!

    As Steven pointed out, you certainly can.
    I would generally question, though, whether you should.
    - Why does the application require 16k block sizes? If this is a custom application, it almost certainly doesn't really require 16k blocks. If this is a packaged application, it probably doesn't really require 16k blocks. If 16k blocks are a requirement for support, I would wager that having the application's objects in 16k block size tablespaces in a database with 8k blocks would not be supported.
    - Mixing block sizes increases the management complexity of your database, potentially substantially. You need to specify a completely separate buffer cache for the 16k blocks, a buffer cache that would not be integrated with Oracle's automatic SGA management functionality. Figuring out how to split up the buffer cache between 8k and 16k blocks tends to be rather hard (particularly if the mix changes over time), which means that DBAs are going to be spending substantially more time managing the SGA in this sort of system than in a vanilla 10.2.0.3 system. And that DBAs will have many more opportunities to set things up incorrectly.
    Justin

  • Database Block Size

    4k vs. 8K block size
    The Database Configuration Assistant sets the block size to 4K for OLTP and 8K for Data Warehouse. I have seen conflicting information about the optimal block size for OLTP.
    What is the difference in the DCA when choosing "OLTP" vs "Data Warehouse"?
    We have an OLTP database and would like to keep our 8K block size as the databases are already created. What are the issues with OLTP using the Data Warehouse option in the DCA?
    Thanks

    "You should try to keep your block size the same as how the OS is set up."
    That's true if you're using buffered reads (not desirable in itself), but not otherwise, I think, Paul.
    A key advantage of large block sizes in DWh/DSS systems is that it improves the ratio you can achieve with data segment compression; you can consider going to 16KB or even 32KB.
    For OLTP systems, where you are generally retrieving single rows, a small block size makes the buffer cache more efficient by allowing Oracle to retrieve fewer rows in a single I/O.
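    For example (a sketch only; the tablespace, datafile and source table names are hypothetical, and DB_32K_CACHE_SIZE must already be set if 32K is not your standard block size), a compressed fact table in a 32K tablespace might be created like this:
    CREATE TABLESPACE dw_32k
      DATAFILE '/u04/oracle/data/dw_32k01.dbf' SIZE 1G
      BLOCKSIZE 32K;
    CREATE TABLE sales_history
      TABLESPACE dw_32k
      COMPRESS
      AS SELECT * FROM sales;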

  • Will block size affect the calc script performance?

    Hi Experts,
    I have a cube called RCI_LA:RCI_LA, and I have created calc scripts that are working fine. But those calc scripts are taking much more time than expected (normally they should not take more than 15 min, but some calc scripts are taking nearly 1 hr or more).
    In the database properties I found that the block size is 155896 B, i.e. about 152 KB, but this size should be 8 to 100 KB, and the block density is 0.72%.
    If the block size exceeds 100 KB, will it impact the performance of calc scripts?
    I think the answer to the above question is "yes". In this case, what do I need to do to improve calc script performance?
    Could you please share your experience here with me so I can get out of this problem?
    Thanks in advance.
    Ram

    I believe Sandeep was trying to say "Dynamic" rather than "Intelligent".
    The ideal block size is a factor in all calcs, but the contributing reasons are many (The main three are CPU caching, Data I/O overhead, Index I/O overhead).
    Generally speaking, the ideal block size is achieved when you minimize the combination of data I/O overhead and index I/O overhead. A block size that is too large will incur too much data I/O, while a block size that is too small will incur too much index I/O. If your index file is small, increasing your block size may help. The commonly accepted block size is between 8K and 64K, but this is just a guideline.
    In other words, if you test it with something right in the middle and your index file is tiny, you might want to test it with a smaller block size. If your index file is very large (i.e. 400 MB or more), you may want to increase the block size and retest.
    The ways to increase/decrease it are also many. Obviously, changing the dense/sparse settings is the main way, but there are some considerations that make this a touchy process. Another way is to use dynamic calc members in the dense dimensions. I say start at the top of your smallest dense dimension and keep the number of DIMENSIONS that you use dynamic calc on limited. Using dynamic calc members in a dense dimension does NOT increase the index file, so it could be considered a "free" reduction in block size; the penalty is paid on the retrieve side (there is no free ride).

  • Do tablespace block size and database block size have different meanings?

    At the time of database creation we can define the database block size, which cannot be changed afterwards. For a tablespace we can also define a block size, which may be different from or the same as the block size defined at database creation.
    If the tablespace block size is different from the one defined at database creation time, then what is the actual block size used by the Oracle database?
    Can anyone explain in detail?
    Thanks in Advance
    Regards

    You can't meaningfully name things when there's nothing to compare and contrast them with. If there is no keep or recycle cache, then whilst I can't stop you saying, 'I only have a default cache'... well, you've really just got a cache. By definition, it's the default, because it's the only thing you've got! Saying it's "the default cache" is simply linguistically redundant!
    So if you want to say that, when you set db_nk_cache_size, you are creating a 'default cache for nK blocks', be my guest. But since there's no other bits of nk cache floating around the place (of the same value of n, that is) to be used as an alternate, the designation 'default' is pointless.
    Of course, at some point, Oracle may introduce keep and recycle caches for non-standard caches, and then the use of the differentiator 'default' becomes meaningful. But not yet it isn't.
