Multiple block sizes

Hi,
When I use a tablespace with a non-default block size (32K), I have to set the DB_32K_CACHE_SIZE parameter.
Assume the size of the buffer cache is 500M. If I set DB_32K_CACHE_SIZE to 200M, will there be only 300M available in the buffer cache? How does the allocation work?

The vast majority of databases do not need to deploy multiple block sizes.
As noted here, it is not for all databases, only specific cases: http://www.dba-oracle.com/t_multiple_blocksizes_summary.htm
When in doubt, consult the official documentation and Metalink.
Here is IBM's documentation on Oracle multiple block sizes: http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP100883
While most customers only use the default database block size, it is possible to use up to 5 different database block sizes for different objects within the same database.
Having multiple database block sizes adds administrative complexity and (if poorly designed and implemented) can have adverse performance consequences. Therefore, using multiple block sizes should only be done after careful planning and performance evaluation.
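
To make the mechanics concrete, here is a minimal sketch of the settings involved (the tablespace name, datafile path and sizes are hypothetical). Note that DB_32K_CACHE_SIZE is a separate pool: DB_CACHE_SIZE sizes only the default-block-size cache, so the 32K cache is allocated in addition to it and both pools must fit within the SGA. Under automatic shared memory management the non-default caches are still sized manually and are not auto-tuned.
-- Size a separate buffer cache for 32K blocks (value is illustrative).
ALTER SYSTEM SET db_32k_cache_size = 200M SCOPE=BOTH;
-- Only after the 32K cache exists can a 32K tablespace be created.
CREATE TABLESPACE ts_data_32k
  DATAFILE '/u01/oradata/ORCL/ts_data_32k01.dbf' SIZE 1G
  BLOCKSIZE 32K;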

Similar Messages

  • ORA-27046: file size is not a multiple of logical block size

    Hi All,
    Getting the error below while creating the control file after a database restore. Permissions and ownership of the CONTROL.SQL file are 777 and ora<sid>:dba.
    ERROR -->
    SQL> !pwd
    /oracle/SID/sapreorg
    SQL> @CONTROL.SQL
    ORACLE instance started.
    Total System Global Area 3539992576 bytes
    Fixed Size                  2088096 bytes
    Variable Size            1778385760 bytes
    Database Buffers         1744830464 bytes
    Redo Buffers               14688256 bytes
    CREATE CONTROLFILE SET DATABASE "SID" RESETLOGS  ARCHIVELOG
    ERROR at line 1:
    ORA-01503: CREATE CONTROLFILE failed
    ORA-01565: error in identifying file
    '/oracle/SID/sapdata5/p11_19/p11.data19.dbf'
    ORA-27046: file size is not a multiple of logical block size
    Additional information: 1
    Additional information: 1895833576
    Additional information: 8192
    Checked init<SID>.ora on the target system and found db_block_size is 8192. Also checked init<SID>.ora on the source system and db_block_size is 8192 there as well.
    /oracle/SID/102_64/dbs$ grep -i block initSID.ora
    Kindly look into the issue.
    Regards,
    Soumya

    Please check the following things:
    1. SPFILE corruption:
    Start the DB in NOMOUNT using the pfile (i.e. init<sid>.ora), create the spfile from the pfile, then restart the instance in NOMOUNT state (see the sketch below).
    Then create the control file from the script.
    2. Check the ulimit on the target server; the filesize parameter for ulimit should be unlimited.
    3. Has the db_block_size parameter been changed in the init file by any chance?
    Regards
    Kausik
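
    For step 1, a minimal SQL*Plus sketch. The pfile path below is taken from the directory shown earlier in the thread; treat it and the exact sequence as illustrative rather than prescriptive.
    -- Start from the pfile, then build an spfile from it.
    STARTUP NOMOUNT PFILE='/oracle/SID/102_64/dbs/initSID.ora';
    CREATE SPFILE FROM PFILE='/oracle/SID/102_64/dbs/initSID.ora';
    -- Restart so the next startup uses the fresh spfile, then rerun the script
    -- (in this thread CONTROL.SQL issues its own STARTUP before CREATE CONTROLFILE).
    SHUTDOWN IMMEDIATE;
    @CONTROL.SQL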

  • DB Cloning.file size is not a multiple of logical block size

    Dear All,
    I am trying to create a database on Windows XP from the datafiles of a database running on Linux.
    When I try to create the control file, I get the following errors.
    ORA-01503: CREATE CONTROLFILE failed
    ORA-01565: error in identifying file
    'D:\oracle\orcl\oradata\orcl\system01.dbf'
    ORA-27046: file size is not a multiple of logical block size
    OSD-04012: file size mismatch (OS 367009792)
    Please tell me the workarounds.
    Thanks
    Sathis.

    Hi ,
    I created the database service with oradim. Now I am trying to create the control file, after editing the CREATE CONTROLFILE script with the locations of the Windows datafiles (copied from Linux).
    Thanks,
    Sathis.

  • Buffer pool keep and multiple db block sizes

    I have a tablespace with an 8K block size (the database default) and a tablespace with a 16K block size. I have db_cache_size and db_16k_cache_size set (obviously).
    I also have the KEEP buffer pool configured in the database.
    Question: If a table is placed in the tablespace with the 16K block size and its buffer pool is KEEP, does it end up in the keep pool (like tables from the 8K tablespace with the KEEP pool set), or does it end up in the 16K buffer cache?

    You can find the answer in the following online manual:
    http://download-west.oracle.com/docs/cd/B19306_01/server.102/b14220/memory.htm#i16408
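
    For reference, a minimal sketch of the two settings involved; the table name and sizes are hypothetical. Per the memory chapter linked above, the KEEP and RECYCLE pools exist only for the default block size, so an object in a 16K tablespace is cached in the 16K cache regardless of its BUFFER_POOL attribute.
    -- The KEEP pool (default block size only) and the 16K cache are separate pools.
    ALTER SYSTEM SET db_keep_cache_size = 100M SCOPE=BOTH;
    ALTER SYSTEM SET db_16k_cache_size  = 200M SCOPE=BOTH;
    -- Hypothetical table stored in the 16K tablespace, with KEEP requested:
    ALTER TABLE some_table STORAGE (BUFFER_POOL KEEP);
    -- Its blocks still go to the 16K cache; the KEEP setting only takes effect
    -- for objects in default-block-size tablespaces.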

  • Mirrored RAID:  MediaKit reports block size error

    I am trying to create a 2nd set of backup drives for my photos.  I have two new Iomega 2TB drives, which look essentially identical to the drives I'm currently using as my primary backups as a mirrored RAID set.
    I can start the process with freshly erased and reformatted drives (with the default mac format, extended, journaled, unencrypted, not case-sensitive).  And after a minute or three, I see
    "MediaKit reports block size error, usually caused by not being a multiple of 512."
    The RAID options are Mirrored RAID, Mac extended journaled, and options settings are default.
    I see several series of posts with complaints about encrypting RAIDs and disk block sizes, but not unencrypted errors.   I actually started out trying to do this with the 2006 MBP running 10.6.8 and got a different error:  "POSIX reports:  the operation couldn't be completed. Operation not permitted."  I wasn't sure whether the 2TB RAID I already have was set up with the older or newer computer--it was definitely before I put Lion on this one--so I tried this one and now have a different error.
    Any idea what the problem might be? 

    Update:  I spent some time on the phone with an Apple support RAID expert, and couldn't figure out what the error was; we couldn't bypass it by playing with partitions on the drives, or any of another couple of maneuvers that I've already forgotten.  He noted that his own searches were showing a lot of mentions of similar problems but only with Iomega drives, and he was finding the same links I found earlier about problems creating encrypted drives.  Now trying to decide if it's worth throwing more good money after bad for a call with Iomega support, and waiting to see if the Iomega forum is at all helpful.

  • OSD-04001: invalid logical block size (OS 2800189884)

    My Windows 2003 crashed which was running Oracle XE.
    I installed Oracle XE on Windows XP on another machine.
    I copied my D:\oracle\XE10g\oradata folder from the Win2003 machine to the same location on the WinXP machine.
    When I start the database on WinXP using SQL*Plus I get the following message
    SQL> startup
    ORACLE instance started.
    Total System Global Area 146800640 bytes
    Fixed Size 1286220 bytes
    Variable Size 62918580 bytes
    Database Buffers 79691776 bytes
    Redo Buffers 2904064 bytes
    ORA-00205: error in identifying control file, check alert log for more info
    In my D:\oracle\XE10g\app\oracle\admin\XE\bdump\alert_xe log I found the following errors
    starting up 1 dispatcher(s) for network address '(ADDRESS=(PARTIAL=YES)(PROTOCOL=TCP))'...
    starting up 4 shared server(s) ...
    Oracle Data Guard is not available in this edition of Oracle.
    Wed Apr 25 18:38:36 2007
    ALTER DATABASE MOUNT
    Wed Apr 25 18:38:36 2007
    ORA-00202: control file: 'D:\ORACLE\XE10G\ORADATA\XE\CONTROL.DBF'
    ORA-27047: unable to read the header block of file
    OSD-04001: invalid logical block size (OS 2800189884)
    Wed Apr 25 18:38:36 2007
    ORA-205 signalled during: ALTER DATABASE MOUNT...
    ORA-00202: control file: 'D:\ORACLE\XE10G\ORADATA\XE\CONTROL.DBF'
    ORA-27047: unable to read the header block of file
    OSD-04001: invalid logical block size (OS 2800189884)
    Please help.
    Regards,
    Zulqarnain

    Hi Zulqarnain,
    Error OSD-04001 is a Windows-specific Oracle message. It means that the logical block size is not a multiple of 512 bytes, or that it is too large.
    So what can you do? You could try changing the value of DB_BLOCK_SIZE in the initialization parameter file.
    Regards

  • Best Raid Block Size for video editing

    I cannot seem to get my head around which RAID block size I should set my striped RAID 50 configuration to.
    There seems to be very little info about this, but what info there is seems to imply that it could seriously affect the performance of the RAID.
    I have initialized two RAID arrays to RAID 5 and was about to stripe them together using Disk Utility, when I decided to click on Options in the bottom left of the Disk Utility window. This is where you can set the RAID block size.
    The default is 32K, but it states that there could be 'performance benefits' if this setting is changed to better match my configuration.
    What exactly does this mean?
    I want to read multiple DV streams from my RAID 50 - any ideas which block size I should allocate?
    Should I just leave it as the default 32K??
    Any help will be appreciated
    Cheers
    Adam

    My main concern is really to have as many editors as possible reading DV footage from the Raid simultaneously (up to 5 at once).
    I understand that we may struggle at times, but Xsan isn't an option and I just need to get the best out of a limited budget!
    Cheers
    Adam

  • Best RAID block size for media drive?

    What block size give you best performance when it comes to pushing data?
    For a striped RAID setup.
    32 is standard but since most of the media files are big and consistent would a higher value like 128 or even 256 KB be better?

    "fools step in where angels fear to tred"
    Well I'm not volunteering to be one of those.
    Jerry, if your fiber channel raid is giving you the throughput that you need, don't be concerned.
    (I had a quick look through your manual and I'm also confused. But I can't afford that kind of setup so...?)
    I have a simple two 250GB LaCie d2 raid 0 set via SoftRaid and firewire 800 using the G5 port and a LaCie card firewire 800 for dual channel setup.
    This houses my media for FCP.
    My stripe size is set to 128K simply because that's what the SoftRaid manual recommended for video applications.
    This two drive setup is fine for multiple SD streams of DV, but can only manage a single 8 bit uncompressed HD 1080i stream without dropped frames.

  • Transaction execution time and block size

    Hi,
    I have Oracle Database 11g R2 64 bit database on Oracle Linux 5.6. My system has ONE hard drive.
    Recently I experimented with an 8.5 GB database in a TPC-E test. I was watching transaction times for 2K, 4K and 8K Oracle block sizes. Each time I started a new test with a different block size, I created a new database from scratch to avoid messing something up (each time the SGA and PGA parameters were identical).
    In all experiments I gave my own tablespace (NEWTS) a different configuration because of Oracle block/datafile size limits:
    2K oracle block database had 3 datafiles, each 7GB.
    4K oracle block database had 2 datafiles, each 10GB.
    8K oracle block database had 1 datafile of 20GB.
    The best transaction execution time was with the 8K block; the 4K block had a slightly longer transaction time, but the 2K Oracle block definitely had the worst transaction time.
    I identified a SQL query (when using the 2K and 4K blocks) that was creating hot segments on the E_TRANSACTION table, which is the largest table in the database (2.9 GB), and it executed slowly (the number of executions was low compared to the 8K numbers).
    Now here is my question. Is it possible that multiple datafiles are the reason for these slow transaction times? I have AWR reports from that period, but as someone who is still learning about DBA work, I would like to ask how I could identify this multi-datafile problem (if that is THE problem) by looking at the AWR statistics.
    THX to all.

    It's always interesting to see the results of serious attempts to quantify the effects of variation in block sizes, but it's hard to do proper tests and eliminate side effects.
    > My system has ONE hard drive.
    A single drive does make it a little too easy for apparently random variation in performance.
    > Each time I started a new test with a different block size, I created a new database from scratch to avoid messing something up
    Did you do anything to ensure that the physical location of the data files was a very close match across databases - inner tracks vs. outer tracks could make a difference.
    > (each time the SGA and PGA parameters were identical)
    Can you give us the list of parameters you set? As you change the block size, identical parameters DON'T necessarily result in the same configuration. Typically a large change in response time turns out to be due to changes in execution plan, and this can often be associated with a different configuration. Did you also check that the system statistics were appropriately matched (which doesn't mean identical across all databases)?
    > In all experiments I gave my own tablespace (NEWTS) a different configuration because of Oracle block/datafile size limits
    If you use bigfile tablespaces I think you can get 8TB in a single file for a tablespace.
    > The best transaction execution time was with the 8K block ... the 2K Oracle block definitely had the worst transaction time.
    We need some values here, not just "best/worst" - it doesn't even begin to get interesting unless you have at least a 5% variation - and then it has to be consistent and reproducible.
    > I identified a SQL query (when using the 2K and 4K blocks) that was creating hot segments on the E_TRANSACTION table
    Query, or DML? What do you mean by "hot"? Is E_TRANSACTION a partitioned table - if not then it consists of one segment, so did you mean to say "blocks" rather than segments? If blocks, which class of blocks?
    > Is it possible that multiple datafiles are the reason for these slow transaction times? ... how could I identify this multi-datafile problem (if that is THE problem) by looking at the AWR statistics?
    On a single disc drive I could probably set something up that ensured you got different performance because of different numbers of files per tablespace. As SB has pointed out, there are some aspects of extent allocation that could have an effect - roughly speaking, extents for a single object go round-robin on the files, so if you have small extent sizes for a large object then a tablescan is more likely to result in larger (slower) head movements if the tablespace is made from multiple files.
    If the results are reproducible, then enable extended tracing (dbms_monitor, with waits) and show us what the tkprof summaries for the slow transactions look like. That may give us some clues.
    Regards
    Jonathan Lewis
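
    As a concrete illustration of the tracing suggestion above, a minimal sketch; the SID and SERIAL# values are placeholders to be looked up in V$SESSION first.
    -- Enable extended SQL trace (with wait events) for the session of interest.
    EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => 123, serial_num => 4567, waits => TRUE, binds => FALSE);
    -- ... run the slow TPC-E transactions ...
    EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => 123, serial_num => 4567);
    -- Then summarise the resulting trace file from the trace directory with tkprof.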

  • How to find block size in OS

    I know that Oracle's db_block_size and db_file_multiblock_read_count should be a multiple of the OS (operating system) block size.
    Can anybody tell me how I can find out the block size on Windows or Linux?
    Thanks in advance
    Tinku

    SQL> show parameter db_block_size
    This is the setting from your init.ora file. This parameter is set at database creation time and cannot be altered afterwards.
    On a Linux system use
    dumpe2fs -fh /dev/hdb
    to get information about your block size.
    In my case it was 4K, so my db_block_size will be a multiple of 4K.
    http://www.dizwell.com/html/db_block_size.html
    Thanks
    Gopal
    visit
    http://dba.shilpatech.com/

  • Oracle Block Size - question for experts

    Hi ,
    For years I thought that my file system block size was 8K.
    Lately, due to an HP-UX bug, I found that the file system block size is just ... 1K
    (HP DocId: DCLKBRC00006913 fstyp(1m) returns unexpected block size (f_bsize) for VXFS )
    My instance is currently 10.2.0.4, but previously it went 7.3 --> 8 --> 8.1.7.4 --> 10.2.0.4.
    Since it is an old instance, its block size is just 4 KB.
    We are planning to create a new file system with a block size of 8K.
    The instance size is about 2 TB.
    Recreating the whole database with an 8 KB block size is impossible since it is a 24*7 instance.
    Do you think that I should move just a few important tables to a new tablespace with an 8K block size, or should I leave it at 4 KB?
    Thanks

    Given that your Oracle Database Block_Size (4K) is a multiple of the FileSystem Block_Size (1K), there should be no inherent significant issue, as such.
    Yes, it would have been nice to have an 8KB Oracle Database Block_Size but whether you should recreate your FileSystems to 8KB is a difficult question. There would be implications on PreFetch that the OS does and on how the underlying Storage (must be a SAN, I presume) handles those requests.
    A thorough test would be well advised (if you can set up a test environment for 2TB such that it does NOT share the same hardware and doesn't complicate prefetches in the existing SAN).
    Otherwise, check with HP and Veritas support whether there are known issues and/or any desupport plans for this combination.
    Oracle, obviously, would have issues with Index Key Length sizes if the Block Size is 4KB. Presumably you do not need to add any new indexes with very large keys.
    Having said that, you would have read all those posts about how Oracle doesn't (or really does ?) test every different block-size ! However, Oracle had, before 8i, been using 2K and 4K block sizes. Except that the new features (LMT, ASSM etc) may not have been well tested.
    Since you upgraded from 7.3 in place without changing the Block_Size, I would venture to say that your database is still using Dictionary Managed and Manual Allocation and Segment Space Management Manual ?
    Hemant K Chitale
    http://hemantoracledba.blogspot.com
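
    If you do decide to trial an 8K tablespace for a handful of important tables, the mechanics are roughly as sketched below; the cache size, tablespace name, datafile path, table and index names are all hypothetical.
    -- An 8K buffer cache must exist alongside the 4K default cache.
    ALTER SYSTEM SET db_8k_cache_size = 256M SCOPE=BOTH;
    CREATE TABLESPACE ts_hot_8k
      DATAFILE '/oradata/PROD/ts_hot_8k01.dbf' SIZE 10G
      BLOCKSIZE 8K;
    -- Move one table at a time; a move leaves its indexes UNUSABLE, so rebuild them.
    ALTER TABLE important_table MOVE TABLESPACE ts_hot_8k;
    ALTER INDEX important_table_pk REBUILD;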

  • Database Block Size Smaller Than Operating System Block Size

    Finding that your database block size should be in multiples of your operating system block size is easy...
    But what if the reverse of the image below were the case?
    What happens when you store an Oracle Data Block that is 2 KB in an 8 KB Operating System Block?  Does it waste 6 KB or are there 4 Oracle Data Blocks stored in 1 Operating System Block?
    Is it different if you use ASM?
    I'd like to introduce a 2 KB block size into a RAC Exadata environment for a small set of highly transactional tables and indexes to reduce contention on blocks being requested in the Global Cache.  I've witnessed horrendous wait times for a plethora of sessions when a block was highly active.
    One index in particular has a column that indicates the "state" of the record, it is a very dense index.  Records will flood in, and then multiple processes will poll, do work, and change the state of the record.  The record eventually reaches a final state and is never updated again.
    I know that I can fill up the block with fluff by adjusting the percent free, percent used, and initrans, but that seems like a lazy hack to me and I'd like to do it right if possible.
    Any thoughts or wisdom is much appreciated.
    "The database requests data in multiples of data blocks, not operating system blocks."
    "In contrast, an Oracle block is a logical storage structure whose size and structure are not known to the operating system."
    http://docs.oracle.com/cd/E11882_01/server.112/e25789/logical.htm#BABDCGIB

    You could have answered your own questions if you had just read the top of the page in that doc you posted the link for:
    >
    At the finest level of granularity, Oracle Database stores data in data blocks. One logical data block corresponds to a specific number of bytes of physical disk space, for example, 2 KB. Data blocks are the smallest units of storage that Oracle Database can use or allocate.
    An extent is a set of logically contiguous data blocks allocated for storing a specific type of information. In Figure 12-2, the 24 KB extent has 12 data blocks, while the 72 KB extent has 36 data blocks.
    >
    There isn't any 'wasted' space using 2KB Oracle blocks for 8KB OS blocks. As the doc says Oracle allocates 'extents' and an extent, depending on your space management, is going to be a substantial multiple of blocks. You might typically have extents that are multiples of 64 KB and that would be 8 OS blocks for your example. Yes - it is possible that the very first OS block and the very last block might not map exactly to the Oracle blocks  but for a table of any size that is unlikely to be much of an issue.
    The single-block reads used for some index accesses could affect performance since the read of a 2K Oracle block will result in an 8K OS block being read but that 8K block is also likely to be part of the same index.
    The thing is though that an index entry that is 'hot' is going to be hot whether the block it is in is 2K or 8K so any 'contention' for that entry will exist regardless of the block size.
    You will need to conduct tests using a 2K (or other) block and cache size for your index tablespaces and see which gives you the best results for your access patterns.
    You should use the standard block size for ALL tablespaces unless you can substantiate the need for a non-standard size. Indexes and LOB storage are indeed the primary use cases for using non-standard block sizes for one or more tablespaces. Don't forget that you need to allocate the appropriate buffer cache.
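
    If you do run that comparison, a minimal sketch of the setup; the cache size, tablespace name, disk group and index name are hypothetical.
    -- Allocate the 2K buffer cache before creating any 2K tablespace.
    ALTER SYSTEM SET db_2k_cache_size = 512M SCOPE=BOTH;
    CREATE TABLESPACE ts_idx_2k
      DATAFILE '+DATA' SIZE 20G      -- ASM disk group; use a file path if not on ASM
      BLOCKSIZE 2K;
    -- Rebuild the hot "state" index into the 2K tablespace for the test.
    ALTER INDEX state_poll_idx REBUILD TABLESPACE ts_idx_2k ONLINE;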

  • OS Block Size finding command

    Hi,
    can anybody tell me the command to find the OS block size?
    (DB block size should be multiple of OS block size).
    PS: i use Red Hat Linux OS. (would be fantastic if you give Sun OS command too)
    Regards
    Suresh

    df -g or fstyp -v
    Nicolas.

  • What is Optimal Block Size?

    Given the technological improvements to hardware in the past several years, should we stick with a block size of 8 to 100 KB per the DBAG? I would think we can go higher.
    But how high is very high?
    We are designing a database, and if I stick with Acc (D) and Time (D) I will have a block size of around 700 KB; this may grow in the future if more accounts are added.
    What is your recommendation? Please include the pros and cons of your recommendation.
    Server Config:
    OS: Sun Solaris 10
    RAM: 32 GB
    CPUs: 4
    Thanks in advance for your ideas and answers!
    Venu

    If you're in a Planning environment, you really have to benchmark form performance when Accounts and Time are the axes if they're not dense -- sometimes pulling multiple blocks per form is okay, sometimes not. And in turn that performance drives what is in the block and thus block size.
    The "right" answer is based on perspective -- is it calc time or retrieve time that is the performance measure, or both? Where you fall in that multidimensional continuum determines block size. This would be the art part of sizing.
    Regards,
    Cameron Lackpour

  • Tablespace creation with different block size

    OS - rhel 4.3
    kernel - 2.6.9
    Oracle - 10.2
    Hardware - IBM X series 346.
    Defined block size for the db - 8kb.
    Question: Can I create a tablespace with a 16K block size?
    regards,
    Lily

    You can create a tablespace with 16k blocks if, as Daljit suggests, you create a 16k block buffer cache.
    I would, however, ask why you would want to create such a tablespace? The ability to have tablespaces with different block sizes in a single database was created primarily to allow the use of transportable tablespaces in situations where different databases had different block sizes. If you are trying to create a 16k block size for a different reason, there are a couple of things to be aware of
    - While there are theories out there that tablespaces with different block sizes can have dramatic effects on performance, I've never seen any solid evidence that backs up this theory.
    - Using different block sizes in the same database can significantly increase the complexity of managing a database. You'll have multiple buffer caches that you'll have to size; for example, automatic SGA management doesn't work with non-standard block size buffer caches.
    Justin
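
    For completeness, a minimal sketch of the steps; names and sizes are hypothetical, and the 16K cache must exist before the tablespace can be created.
    ALTER SYSTEM SET db_16k_cache_size = 128M SCOPE=BOTH;
    CREATE TABLESPACE ts_16k
      DATAFILE '/u02/oradata/DB/ts_16k01.dbf' SIZE 2G
      BLOCKSIZE 16K;
    -- Verify which block sizes and caches are now in use:
    SELECT tablespace_name, block_size FROM dba_tablespaces;
    SELECT name, block_size, current_size FROM v$buffer_pool;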
