8K Database Block Size in 11.5.2

The Release Notes for 11.5.2 state the minimum database block size is 8K. However, this was not a requirement for 11.5.1 and instead patch 1301168 was created for customers with a database block size less than 8K.
In 11.5.2 is it MANDATORY that the database block size be 8K or greater, or can patch 1301168 be applied in lieu of migrating to a larger block size?

4K to 8K:
Use 7.3.4 Export through a Unix pipe to create the export. Files may have to be split.
Rebuild the 8.1.6 database. This is a good opportunity to consolidate files.
Import using the 8.1.6 import utility, again through a Unix pipe.
Set data files to autoextend, unlimited, including SYSTEM. Do not autoextend RBS, TEMP, and the log files.
Use COMMIT and IGNORE on import.
Use COMPRESS on export.
Use very large buffers on both export and import.
Ignore messages on SYS and SYSTEM tablespace objects and other existing definitions.
If you are using 10.7 with 8.1.6, apply the interop patch and other module patches (see release notes). Run adprepdb.sql after import.
Compile all invalid modules until they match the 10.7 invalid modules. Fix invalid modules like the ones documented in the release notes.
If you are doing this just for 11.5.2, do the export/import in 8.1.6 with large-file support enabled on your file system and you should be fine. AIX supports large file systems.
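The Unix-pipe steps above can be sketched generically. This is a minimal sketch: in a real migration the producer would be the Oracle `exp` utility writing to the pipe (and `imp` reading one on the way back in); here a plain file copy stands in so the pattern runs without a database, and all file names are illustrative.

```shell
# Sketch of the "export through a Unix pipe" pattern: a named pipe
# lets the exporter stream straight into a compressor (or 'split'),
# so no huge intermediate dump file is ever written to disk.
set -e
workdir=$(mktemp -d)
cd "$workdir"

mkfifo exp.pipe                      # the named pipe

# Consumer: compress whatever flows through the pipe. A real
# migration might pipe into 'split' as well if the dump would
# exceed the file-size limit.
gzip < exp.pipe > exp.dmp.gz &

# Producer: in practice this would be something like
#   exp system/<pwd> file=exp.pipe full=y ...
# Here a plain file copy stands in for the export utility.
printf 'table data stands in here\n' > source.dat
cat source.dat > exp.pipe
wait                                 # let gzip drain and finish

# The import side reads the pipe in the other direction, e.g.
#   mkfifo imp.pipe; gunzip < exp.dmp.gz > imp.pipe &
#   imp system/<pwd> file=imp.pipe ...
gunzip -c exp.dmp.gz > restored.dat
cmp source.dat restored.dat && echo "round-trip OK"
```

The same producer/consumer layout works unchanged whether the consumer is gzip, split, or both chained together.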

Similar Messages

  • How to change existing database block size in all tablespaces

    Hi,
    Need Help to change block size for my existing database which is in 8kb of block size.
    I have read that we can only change block size during database creation, but i want to change it after database installation.
    because for some reason i dont want to change the database installation script.
    Can any one list the steps to change database block size for all existing table space (except system, temp ).
    want to change it to 32kb.
    Thank you for you time.
    -Rushang Kansara

    > We are facing more and more physical reads, I thought by using 32K block size
    we would resolve that..
A physical read reported by Oracle may not be one - it could well be a logical read from the o/s file system cache and not a true physical read. With raw devices, for example, a physical I/O reported by Oracle is indeed one, as there is no o/s cache for raw devices. So one needs to be careful how one interprets numbers like physical reads.
    Lots of physical reads may not necessarily be a bad thing. In contrast, a high percentage of "good/fast" logical reads (i.e. a high % buffer cache hit ratio) may indicate a serious problem with application design - as the application is churning through the exact same data again and again and again. Applications should typically only make a single pass through a data set.
The best way to deal with physical reads is to make them fewer. Simple example: a database deals with a lot of inserts. Some bright developer decided to over-index a table. Numerous indexes for the same columns exist in different physical column orders.
    Oracle now spends a lot of time dealing (reading) with these indexes when inserting (or updating a row). A single write I/O may incur a 100 read I/Os as a result of these indexes needing to be maintained.
    The bottom line is that "more and more physical I/O" is merely a symptom of a problem. Trying to speed these up could well be a wasted exercise. Besides, the most optimal approach to "lots of I/O" is to tune it to make less I/O.
    I/O is the most expensive operation for a RDBMS. It is very difficult to make this expense less (i.e. make I/Os faster). It is more effective to make sure that you use this expensive resource in an optimal way.
    Simple example. Single very large table with 4 indexes. Not very efficient design I/O wise. Single very large partitioned table with local indexes. This can reduce I/O on that table by up to 80% in my experience.

  • Database Block Size Smaller Than Operating System Block Size

    Finding that your database block size should be in multiples of your operating system block size is easy...
    But what if the reverse of the image below were the case?
    What happens when you store an Oracle Data Block that is 2 KB in an 8 KB Operating System Block?  Does it waste 6 KB or are there 4 Oracle Data Blocks stored in 1 Operating System Block?
    Is it different if you use ASM?
    I'd like to introduce a 2 KB block size into a RAC Exadata environment for a small set of highly transactional tables and indexes to reduce contention on blocks being requested in the Global Cache.  I've witnessed horrendous wait times for a plethora of sessions when a block was highly active.
    One index in particular has a column that indicates the "state" of the record, it is a very dense index.  Records will flood in, and then multiple processes will poll, do work, and change the state of the record.  The record eventually reaches a final state and is never updated again.
    I know that I can fill up the block with fluff by adjusting the percent free, percent used, and initrans, but that seems like a lazy hack to me and I'd like to do it right if possible.
    Any thoughts or wisdom is much appreciated.
    "The database requests data in multiples of data blocks, not operating system blocks."
    "In contrast, an Oracle block is a logical storage structure whose size and structure are not known to the operating system."
    http://docs.oracle.com/cd/E11882_01/server.112/e25789/logical.htm#BABDCGIB

You could have answered your own questions if you had just read the top of the page in the doc you posted the link for.
    >
    At the finest level of granularity, Oracle Database stores data in data blocks. One logical data block corresponds to a specific number of bytes of physical disk space, for example, 2 KB. Data blocks are the smallest units of storage that Oracle Database can use or allocate.
    An extent is a set of logically contiguous data blocks allocated for storing a specific type of information. In Figure 12-2, the 24 KB extent has 12 data blocks, while the 72 KB extent has 36 data blocks.
    >
There isn't any 'wasted' space using 2KB Oracle blocks on 8KB OS blocks. As the doc says, Oracle allocates 'extents', and an extent, depending on your space management, is going to be a substantial multiple of blocks. You might typically have extents that are multiples of 64 KB, and that would be 8 OS blocks for your example. Yes, it is possible that the very first OS block and the very last block might not map exactly to the Oracle blocks, but for a table of any size that is unlikely to be much of an issue.
    The single-block reads used for some index accesses could affect performance since the read of a 2K Oracle block will result in an 8K OS block being read but that 8K block is also likely to be part of the same index.
    The thing is though that an index entry that is 'hot' is going to be hot whether the block it is in is 2K or 8K so any 'contention' for that entry will exist regardless of the block size.
    You will need to conduct tests using a 2K (or other) block and cache size for your index tablespaces and see which gives you the best results for your access patterns.
You should use the standard block size for ALL tablespaces unless you can substantiate the need for a non-standard size. Indexes and LOB storage are indeed the primary use cases for non-standard block sizes for one or more tablespaces. Don't forget that you need to allocate the appropriate buffer cache.
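    The extent arithmetic in the reply above can be checked with a few lines of shell; the 64 KB extent size is just the illustrative figure used there.

    ```shell
    # Arithmetic behind the reply above: with 2 KB Oracle blocks,
    # 8 KB OS blocks, and an illustrative 64 KB extent, the extent
    # maps onto whole OS blocks, so nothing inside it is wasted.
    oracle_block=2048
    os_block=8192
    extent=65536

    echo "Oracle blocks per extent:   $(( extent / oracle_block ))"   # 32
    echo "OS blocks per extent:       $(( extent / os_block ))"       # 8
    echo "Oracle blocks per OS block: $(( os_block / oracle_block ))" # 4
    ```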

  • Is tablespace block size and database block size have different meanings

At the time of database creation we can define the database block size, which cannot be changed afterwards. In a tablespace we can also define a block size, which may be different from or the same as the block size defined at database creation. If it is a different block size from the one set at database creation, then what is the actual block size used by the Oracle database?
Can anyone explain in detail?
    Thanks in Advance
    Regards

    You can't meaningfully name things when there's nothing to compare and contrast them with. If there is no keep or recycle cache, then whilst I can't stop you saying, 'I only have a default cache'... well, you've really just got a cache. By definition, it's the default, because it's the only thing you've got! Saying it's "the default cache" is simply linguistically redundant!
    So if you want to say that, when you set db_nk_cache_size, you are creating a 'default cache for nK blocks', be my guest. But since there's no other bits of nk cache floating around the place (of the same value of n, that is) to be used as an alternate, the designation 'default' is pointless.
    Of course, at some point, Oracle may introduce keep and recycle caches for non-standard caches, and then the use of the differentiator 'default' becomes meaningful. But not yet it isn't.

  • Database block size and swap

    Hi,
I changed the db_block_size from 4k (Oracle 9.2.0.1) to 8k on my Solaris machine. Swap use has increased dramatically. I am wondering whether there is any relation between db_block_size and swap.

Duplicate of "Database Block Size" - please check our replies in the other post.

  • Can tablespace block size different from the database block size

    I have a 10.2.0.3 database in Unix system.
I created a database using the default block size of 8k. However, the client application requires a 16k block size. Is there a workaround to create a tablespace that has a 16k block size, instead of dropping and recreating the database?
    Thanks a lot!

    As Steven pointed out, you certainly can.
    I would generally question, though, whether you should.
    - Why does the application require 16k block sizes? If this is a custom application, it almost certainly doesn't really require 16k blocks. If this is a packaged application, it probably doesn't really require 16k blocks. If 16k blocks are a requirement for support, I would wager that having the application's objects in 16k block size tablespaces in a database with 8k blocks would not be supported.
    - Mixing block sizes increases the management complexity of your database, potentially substantially. You need to specify a completely separate buffer cache for the 16k blocks, a buffer cache that would not be integrated with Oracle's automatic SGA management functionality. Figuring out how to split up the buffer cache between 8k and 16k blocks tends to be rather hard (particularly if the mix changes over time), which means that DBAs are going to be spending substantially more time managing the SGA in this sort of system than in a vanilla 10.2.0.3 system. And that DBAs will have many more opportunities to set things up incorrectly.
    Justin

  • Database Block Size

    4k vs. 8K block size
The Database Configuration Assistant sets the block size to 4K for OLTP and 8K for Data Warehouse. I have seen conflicting information about the optimal block size for OLTP.
What is the difference in the DCA when choosing "OLTP" vs "Data Warehouse"?
We have an OLTP database and would like to keep our 8K block size, as the databases are already created. What are the issues with OLTP using the Data Warehouse option in the DCA?
    Thanks

> You should try to keep your block size the same as how the OS is set up.
That's true if you're using buffered reads (not desirable in itself), but not otherwise, I think, Paul.
A key advantage of large block sizes in DWh/DSS systems is that it improves the ratio you can achieve with data segment compression -- you can consider going to 16kb or even 32kb.
For OLTP systems, where you are generally retrieving single rows, a small block size makes the buffer cache more efficient, because each single-block read drags fewer unrequested rows into the cache.
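    A rough way to see the OLTP argument above, assuming a 200-byte row (an illustrative figure, ignoring block header and PCTFREE overhead):

    ```shell
    # The larger the block, the more unrequested rows a random
    # single-row read drags into the buffer cache (assumed 200-byte
    # rows; real blocks also carry header/PCTFREE overhead).
    row=200
    for block in 2048 8192 32768; do
      echo "$block-byte block: ~$(( block / row )) rows per block"
    done
    ```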

  • Database Block Size 4K vs 8K

    4K vs. 8K block size
The Database Configuration Assistant sets the block size to 4K for OLTP and 8K for Data Warehouse. I have seen conflicting information about the optimal block size for OLTP.
What is the difference in the DCA when choosing "OLTP" vs "Data Warehouse"?
We have an OLTP database and would like to keep our 8K block size, as the databases are already created. What are the issues with OLTP using the Data Warehouse option in the DCA?
    Thanks

    8k is fine for OLTP systems
The general rule of thumb is to keep your block size small for OLTP systems and large for DSS systems.

  • Database block size vs. tablespace blocksize

    dear colleques
    the block size of my database is 8192 (db_block_size=8192), all the tablespace that i have, have the same block size, now i want to create a new tablespace with a different block size, and when i do so i receive the following error
    ORA-29339: tablespace block size 4096 does not match configured block size
    please help solving the problems
    cheers
    asif

    $ oerr ora 29339
    29339, 00000, "tablespace block size %s does not match configured block sizes"
    // *Cause:  The block size of the tablespace to be plugged in or
    //          created does not match the block sizes configured in the
    //          database.
    // *Action:Configure the appropriate cache for the block size of this
    //         tablespace using one of the various (db_2k_cache_size,
    //         db_4k_cache_size, db_8k_cache_size, db_16k_cache_size,
    //         db_32K_cache_size) parameters.
$
You have to configure db_4k_cache_size to a value different from zero. See http://download-uk.oracle.com/docs/cd/B19306_01/server.102/b14237/initparams037.htm#sthref161

  • How to Calculate the Database block size??

Can someone please tell me how, during the Oracle installation, one calculates db_block_size. Thank you.

One way to find out is to right-click the drive in Windows Explorer and click Format (without actually formatting it).
    In the popup window, you would see the "Allocation unit size" for the selected drive. Click "Close" to exit when done.
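    On Unix-like systems, a rough analogue of the Windows trick above is to ask the kernel for the file system block size. A sketch assuming GNU coreutils `stat`; the flag syntax differs on BSD/macOS.

    ```shell
    # Report the block size of the file system holding the current
    # directory (GNU stat; on BSD/macOS use: stat -f '%k' .).
    fs_block=$(stat -f -c '%s' .)
    echo "file system block size: $fs_block bytes"
    ```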

  • Can't change default block size in dbca

    10.1.0.3
    solaris
I am using the DBCA to create a database. When I go to the sizing screen and try to change the default block size, the option is always greyed out at 8k.
Does anyone know why? This happens even when I pick a Data Warehouse template.

    There is a reason Oracle uses 8K as the default database block size for their warehouse template. Changing the default block size to a larger size generally does not result in better performance when both databases are allocated the exact same SGA memory allocations.
    HTH -- Mark D Powell --

  • Data block size

    I have just started reading the concepts and I got to know that Oracle database data is stored in data blocks. The standard block size is specified by
DB_BLOCK_SIZE init parameter. Additionally, we can specify up to four other block sizes using the DB_nK_CACHE_SIZE parameters.
    Let us say I define in the init.ora
    DB_BLOCK_SIZE = 8K
    DB_CACHE_SIZE = 4G
    DB_4K_CACHE_SIZE=1G
    DB_16K_CACHE_SIZE=1G
    Questions:
    a) Does this mean I can create tablespaces with 8K, 4K and 16K block sizes only?
    b) whenever I query data from these tablespaces, it will go and sit in these respective cache sizes?
    Thanks in advance.
    Neel

Yes, it will give an error message if you create a tablespace with a non-standard block size without specifying the corresponding DB_nK_CACHE_SIZE parameter in the init parameter file.
Use the BLOCKSIZE clause of the CREATE TABLESPACE statement to create a tablespace with a block size different from the database standard block size. In order for the BLOCKSIZE clause to succeed, you must have already set the DB_CACHE_SIZE and at least one DB_nK_CACHE_SIZE initialization parameter. Further, the integer you specify in the BLOCKSIZE clause must correspond with the setting of one DB_nK_CACHE_SIZE parameter setting. Although redundant, specifying a BLOCKSIZE equal to the standard block size, as specified by the DB_BLOCK_SIZE initialization parameter, is allowed.
    The following statement creates tablespace lmtbsb, but specifies a block size that differs from the standard database block size (as specified by the DB_BLOCK_SIZE initialization parameter):
    CREATE TABLESPACE lmtbsb DATAFILE '/u02/oracle/data/lmtbsb01.dbf' SIZE 50M
    EXTENT MANAGEMENT LOCAL UNIFORM SIZE 128K
    BLOCKSIZE 8K;
Reference: http://download.oracle.com/docs/cd/B28359_01/server.111/b28310/tspaces003.htm

  • 8k block size upgrade

    Hello All,
Our team is quickly approaching an 8k block size upgrade of a fairly large production database (800 GB). This is one step of several to improve the performance of a long-running (44-hour) batch cycle. My concern is the following, and I am hoping people can tell me why it should or shouldn't be a concern. We do not have a place at the moment where we can test this upgrade. My fear is that Oracle may in some cases alter the execution plans for the queries in our batch due to the new, larger block size and make bad choices. I am afraid it may choose table scans instead of an index scan (as an example) at the wrong time and cause our batch to run much longer than normal. While we can resolve these issues, having this happen in production the first time we run 8k would be a big issue. I might be able to deal with a problem here or there, but several issues may cause us to not meet our service level agreement. Should I be concerned about this with an upgrade from 4k to 8k? Should I cancel the upgrade for now? -- Oracle 8i.
Is there anything else I should stay up at night worrying about with this upgrade?
    Thanks,
    Jeff Vacha

If all you are doing in this upgrade is changing the database block size, and you don't already have some compelling evidence that changing the block size is going to improve performance, I wouldn't bother. Changing the database block size is a lot of work and is very rarely going to have a noticeable impact.
    Justin
    Distributed Database Consulting, Inc.
    http://www.ddbcinc.com/askDDBC

  • Multiple block size

    Hi
When I use multiple block size tablespaces (32K),
I have to set the DB_32K_CACHE_SIZE parameter.
Assume the size of the buffer cache is 500M.
If I set DB_32K_CACHE_SIZE to 200M,
will there be only 300M available in the buffer cache? How does the allocation work?

The vast majority of databases do not need to deploy multiple blocksizes.
As noted here, it is not for all databases, only specific cases: http://www.dba-oracle.com/t_multiple_blocksizes_summary.htm
> I have some doubts in this issue...
When in doubt, consult the official documentation and Metalink.
    Here is IBM's Oracle documentation on multiple blocksizes: http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP100883
    While most customers only use the default database block size, it is possible to use up to 5 different database block sizes for different objects within the same database.
    Having multiple database block sizes adds administrative complexity and (if poorly designed and implemented) can have adverse performance consequences. Therefore, using multiple block sizes should only be done after careful planning and performance evaluation.

  • 8k to 32k block size

    Hello Everyone,
I have a project coming up where I need to upgrade from an 8k to a 32k block size. I am on database version 10.2.0.3. The database is on raw devices. The size of the database is 2TB. I need to know: are there any docs on how to upgrade the block size, or anything else that is useful (scripts, suggestions), etc.?
    Thanks

> I need to know is there any docs on how to upgrade block size or anything that is useful (scripts,suggestion) etc.
You don't mention WHY. It would be interesting to know.
1) Database block size is locked in at database create time. To change it you have the opportunity to rebuild the database from scratch.
2) Tablespace block size is locked in at tablespace create time. To change it you have the opportunity to create a replacement tablespace from scratch.
3) Be sure to quantify your benefits in a test environment before jumping to a new block size. Several discussions have indicated that block size does not really matter [in terms of general performance] other than in some very specialized situations. The major benefit in general seems to be the opportunity to visit some block-size-related bugs.
