Tuning block size in Tablespace

Hi,
My tablespaces currently have an 8K block size. How do I tune this? How do I choose another block size?
Regards,
Denis

You can't change the block size of an existing tablespace, but you can choose a different block size when creating new tablespaces. Generally speaking, a smaller block size tends to suit OLTP systems, while a larger block size tends to suit data warehouses.
See http://download-uk.oracle.com/docs/cd/B19306_01/server.102/b14211/iodesign.htm#sthref736
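
For illustration, a minimal sketch (tablespace and file names hypothetical): check the current block sizes from the data dictionary, then create a new tablespace with a non-default block size. Note that the matching DB_nK_CACHE_SIZE parameter must be set before the CREATE TABLESPACE will succeed.

    -- Current block size of each tablespace
    SELECT tablespace_name, block_size FROM dba_tablespaces;

    -- A buffer cache for the new block size must exist first
    ALTER SYSTEM SET db_16k_cache_size = 64M;

    -- Now a tablespace with a non-default block size can be created
    CREATE TABLESPACE ts_16k DATAFILE '/u02/oracle/data/ts_16k01.dbf' SIZE 100M
    BLOCKSIZE 16K;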

Similar Messages

  • Database block size vs. tablespace blocksize

    Dear colleagues,
    The block size of my database is 8192 (db_block_size=8192), and all of my tablespaces have the same block size. Now I want to create a new tablespace with a different block size, and when I do so I receive the following error:
    ORA-29339: tablespace block size 4096 does not match configured block size
    Please help me solve this problem.
    Cheers,
    Asif

    $ oerr ora 29339
    29339, 00000, "tablespace block size %s does not match configured block sizes"
    // *Cause:  The block size of the tablespace to be plugged in or
    //          created does not match the block sizes configured in the
    //          database.
    // *Action:Configure the appropriate cache for the block size of this
    //         tablespace using one of the various (db_2k_cache_size,
    //         db_4k_cache_size, db_8k_cache_size, db_16k_cache_size,
    //         db_32K_cache_size) parameters.
    $

    You have to set db_4k_cache_size to a non-zero value. See http://download-uk.oracle.com/docs/cd/B19306_01/server.102/b14237/initparams037.htm#sthref161
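
    A minimal sketch of that fix (cache size and file name illustrative): allocate a 4K buffer cache, then retry the create.

    -- Without a 4K cache, CREATE TABLESPACE ... BLOCKSIZE 4K raises ORA-29339
    ALTER SYSTEM SET db_4k_cache_size = 32M;
    CREATE TABLESPACE ts_4k DATAFILE '/u01/oracle/data/ts_4k01.dbf' SIZE 50M
    BLOCKSIZE 4K;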

  • Database Tablespaces with different block sizes...

    Hi ,
    What are the pros and cons of giving tablespaces different block sizes, and where might this be useful?
    Thanks,
    Simon

    http://asktom.oracle.com/pls/ask/f?p=4950:8:101384681997409236::NO::F4950_P8_DISPLAYID,F4950_P8_CRITERIA:44779427427730
    Generally speaking, tablespaces with block sizes different from the database standard are for transportable tablespaces only.
    However, Burleson will tell you to micromanage your memory allocations with multiple block-sized tablespaces and various keep/recycle caches.
    http://www.dbazine.com/oracle/or-articles/burleson2
    I think it's overkill, and you can most likely get performance improvements from looking at other things, such as the SQL your applications are running against the db. My rule of thumb is that 70% of your performance improvements are going to come from SQL tuning. You can focus on the other 30% once the vast majority of your queries use 5 LIO per row per join. (Millsap's ratio, not mine.)

  • Tablespaces and block size in Data Warehouse

    We are preparing to implement a data warehouse on Oracle 11g R2 and I am currently trying to work out a storage strategy - unfortunately I have very little experience with that. The question is: what is the general advice on tablespaces and block size in such a setting? I did some research and it is hard to find a clear answer; some resources say that block size is not important and can be left small (8 KB), while others state that it is crucial and should be as big as possible (64 KB).
    The other question is what part of the data should be placed where. Many resources state that keeping indexes apart from their data is a myth and a bad practice that may decrease performance; others say that although there is no performance benefit, index tablespaces do not need to be backed up and that's why they should be split off. The next idea is to have separate tablespaces for big tables, small tables, and tables accessed frequently and infrequently. How should I organize partitions in terms of tablespaces? Is it a good idea to have "old" (read-only) data partitions in separate tablespaces?
    Any help is highly appreciated; thank you in advance.

    Wojtus-J wrote:
    > We are preparing to implement a data warehouse on Oracle 11g R2 and I am currently trying to work out a storage strategy - unfortunately I have very little experience with that.
    With little experience, the key feature is to avoid big mistakes - don't try to get too clever.
    > The question is: what is the general advice on tablespaces and block size in such a setting?
    If you need to ask about block sizes, use the default (i.e. 8KB).
    > I did some research and it is hard to find a clear answer;
    But if you get contradictory advice from this forum, how would you decide which bits to follow?
    A couple of sensible guidelines when researching on the internet: look for material that is datestamped with recent dates (the last couple of years), or that references recent - or at least relevant - versions of Oracle. Give preference to material that explains WHY an idea might be relevant, and greater preference to material that DEMONSTRATES why an idea might be relevant. Check that any explanations and demonstrations are relevant to your planned setup.
    > The other question is what part of the data should be placed where. [...] Is it a good idea to have "old" (read-only) data partitions in separate tablespaces?
    It is often convenient, and sometimes very important, to separate data into different tablespaces based on some aspect of functionality. The performance argument was mooted (badly) in an era when discs were small and (disk) partitions were hard; but all your other examples of reasons to split are potentially valid for administrative purposes: big/small, table/index, old/new, read-only/read-write, fact/dimension, etc.
    For data warehouses a fairly common practice is to identify some sort of aging pattern for the data, and try to pick a boundary that allows you to partition the data so that a large fraction of it can eventually be made read-only: using tablespaces to mark time boundaries can be a great convenience - note that the tablespace boundary need not match the partition boundary, e.g. daily partitions in a monthly tablespace. If you take this type of approach, you might have a "working" tablespace for recent data, and then copy the older data to a "time-specific" tablespace, packing it and making it read-only as you do so.
    Tablespaces are (broadly speaking) about strategy, not performance. (Temporary tablespaces / tablespace groups are probably the exception to this thought.)
    Regards
    Jonathan Lewis
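
    A minimal sketch of that aging pattern (table, partition, and tablespace names hypothetical): move an old partition into a time-specific tablespace, rebuild its local index partition, then make the tablespace read-only.

    ALTER TABLE sales MOVE PARTITION sales_2010_q1 TABLESPACE ts_2010;
    ALTER INDEX sales_ix REBUILD PARTITION sales_2010_q1 TABLESPACE ts_2010;
    ALTER TABLESPACE ts_2010 READ ONLY;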

  • Can the tablespace block size differ from the database block size?

    I have a 10.2.0.3 database on a Unix system.
    I created a database with the default block size of 8k. However, the client application requires a 16k block size. Can I work around this by creating a tablespace with a 16k block size, instead of dropping and recreating the database?
    Thanks a lot!

    As Steven pointed out, you certainly can.
    I would generally question, though, whether you should.
    - Why does the application require 16k block sizes? If this is a custom application, it almost certainly doesn't really require 16k blocks. If this is a packaged application, it probably doesn't really require 16k blocks. If 16k blocks are a requirement for support, I would wager that having the application's objects in 16k block size tablespaces in a database with 8k blocks would not be supported.
    - Mixing block sizes increases the management complexity of your database, potentially substantially. You need to specify a completely separate buffer cache for the 16k blocks, a buffer cache that would not be integrated with Oracle's automatic SGA management functionality. Figuring out how to split up the buffer cache between 8k and 16k blocks tends to be rather hard (particularly if the mix changes over time), which means that DBAs are going to be spending substantially more time managing the SGA in this sort of system than in a vanilla 10.2.0.3 system. And that DBAs will have many more opportunities to set things up incorrectly.
    Justin

  • Do tablespace block size and database block size have different meanings?

    At database creation time we can define the database block size, which cannot be changed afterwards. For a tablespace we can also define a block size, which may be the same as or different from the block size defined at database creation. If it is different from the database block size, then which block size does the Oracle database actually use?
    Can anyone explain in detail?
    Thanks in advance.
    Regards

    You can't meaningfully name things when there's nothing to compare and contrast them with. If there is no keep or recycle cache, then whilst I can't stop you saying, 'I only have a default cache'... well, you've really just got a cache. By definition, it's the default, because it's the only thing you've got! Saying it's "the default cache" is simply linguistically redundant!
    So if you want to say that, when you set db_nk_cache_size, you are creating a 'default cache for nK blocks', be my guest. But since there's no other bits of nk cache floating around the place (of the same value of n, that is) to be used as an alternate, the designation 'default' is pointless.
    Of course, at some point, Oracle may introduce keep and recycle caches for non-standard caches, and then the use of the differentiator 'default' becomes meaningful. But not yet it isn't.

  • Tablespace creation with different block size

    OS - rhel 4.3
    kernel - 2.6.9
    Oracle - 10.2
    Hardware - IBM X series 346.
    Defined block size for the db - 8kb.
    Question: Can I create a tablespace with 16k?
    regards,
    Lily

    You can create a tablespace with 16k blocks if, as Daljit suggests, you create a 16k block buffer cache.
    I would, however, ask why you would want to create such a tablespace. The ability to have tablespaces with different block sizes in a single database was created primarily to allow the use of transportable tablespaces in situations where different databases had different block sizes. If you are trying to create a 16k block size for a different reason, there are a couple of things to be aware of:
    - While there are theories out there that tablespaces with different block sizes can have dramatic effects on performance, I've never seen any solid evidence that backs this up.
    - Using different block sizes in the same database can significantly increase the complexity of managing it. You'll have multiple buffer caches that you'll have to size; automatic SGA management doesn't work with non-standard block size buffer caches; etc.
    Justin

  • Tablespace - Block size

    Is it possible to have tablespaces with different block sizes in a given database? If so, what is the advantage/usage? Thanks!

    Possible? Yes.
    A smart thing to do? In almost every conceivable case it is a terrible idea.
    Unfortunately our profession has a few self-anointed talking heads promoting the idea, but as a Beta tester I can tell you that Oracle develops ONLY using 8K blocks, and Beta testing, in almost every case, is done with 8K blocks. If there are bugs related to other block sizes - and they are found from time to time - you should not be surprised. Additionally, it can make a bit of a mess of memory management.
    The only two cases where I would ever consider using block sizes other than 8K are with Advanced Compression in 11g where there are some advantages to 32K blocks and in Real Application Clusters where there are some possible advantages in improving node affinity with 2K or 4K blocks.
    The experts on this subject are Oak Table members and ACE Directors Jonathan Lewis and Richard Foote. I would advise you to google them for their comments if you wish to investigate further.

  • How to change existing database block size in all tablespaces

    Hi,
    I need help changing the block size of my existing database, which currently has an 8kb block size.
    I have read that the block size can only be chosen at database creation, but I want to change it after installation, because for certain reasons I don't want to change the database installation script.
    Can anyone list the steps to change the database block size for all existing tablespaces (except SYSTEM and TEMP)? I want to change it to 32kb.
    Thank you for your time.
    -Rushang Kansara

    > We are facing more and more physical reads, I thought by using 32K block size we would resolve that..
    A physical read reported by Oracle may not be one - it could well be a logical read from the o/s file system cache and not a true physical read. With raw devices, for example, a physical I/O reported by Oracle is indeed one, as there is no o/s cache for raw devices. So one needs to be careful how one interprets numbers like physical reads.
    Lots of physical reads may not necessarily be a bad thing. In contrast, a high percentage of "good/fast" logical reads (i.e. a high % buffer cache hit ratio) may indicate a serious problem with application design - the application is churning through the exact same data again and again. Applications should typically make only a single pass through a data set.
    The best way to deal with physical reads is to make fewer of them. Simple example: a database deals with a lot of inserts, and some bright developer decided to over-index a table, so numerous indexes for the same columns exist in different physical column orders. Oracle now spends a lot of time reading these indexes when inserting or updating a row. A single write I/O may incur a hundred read I/Os as a result of these indexes needing to be maintained.
    The bottom line is that "more and more physical I/O" is merely a symptom of a problem. Trying to speed these reads up could well be a wasted exercise; the most optimal approach to "lots of I/O" is to tune the workload to do less I/O.
    I/O is the most expensive operation for an RDBMS. It is very difficult to make this expense less (i.e. make I/Os faster); it is more effective to make sure that you use this expensive resource in an optimal way.
    Simple example: a single very large table with 4 indexes is not a very efficient design I/O-wise; a single very large partitioned table with local indexes can reduce I/O on that table by up to 80% in my experience.
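
    A minimal sketch of that last point (schema hypothetical): a range-partitioned table with a local index, so that scans and index maintenance touch only the relevant partitions.

    CREATE TABLE orders (
      order_id   NUMBER,
      order_date DATE,
      amount     NUMBER
    )
    PARTITION BY RANGE (order_date) (
      PARTITION p2006 VALUES LESS THAN (DATE '2007-01-01'),
      PARTITION p2007 VALUES LESS THAN (DATE '2008-01-01'),
      PARTITION pmax  VALUES LESS THAN (MAXVALUE)
    );

    CREATE INDEX orders_date_ix ON orders (order_date) LOCAL;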

  • Using large block sizes for index and table spaces

    " You are not using large blocksizes for your index tablespaces. Oracle research proves that indexes will build flatter tree structures in larger blocksizes.
    Is this a generic statement that I can use for all tables or indexes? I also have batch and online activity. My primary target is batch and it should not impact online. Not sure if both have common tables.
    How to find the current block size used for tables and index? is there a v$parameter query?
    What is an optimal block size value for batch?
    How do I know when flatter tree str has been achieved using above changes? Is there a query to determine this?
    What about tables, what is the success criterion for tables. can we use the same flat tree str criterion? Is there a query for this?

    user3390467 wrote:
    > "You are not using large blocksizes for your index tablespaces. Oracle research proves that indexes will build flatter tree structures in larger blocksizes."
    > Is this a generic statement that I can use for all tables and indexes?
    This is a generic statement used by some consultants. Unfortunately, it is riddled with exceptions and other considerations.
    One consultant in particular seems to have anecdotal evidence that using different block sizes for indexes (big) and data (small) can yield almost miraculous improvements. However, that cannot be backed up due to NDA. Many of the rest of us cannot duplicate the improvements, and indeed some find situations where it results in a degradation (especially with high insert/update rates from separate transactions).
    > I also have batch and online activity. [...] Is there a query for this?
    I'd strongly recommend that you
    1) stop using generic tools to analyze specific problems
    2) define your problem in detail (what are you really trying to accomplish? It seems like performance tuning, but you never really state that)
    3) define the OS and DB version - in detail; give rev levels and patch levels.
    If you are having a serious performance issue, I strongly recommend you look at performance tuning specialists such as "http://www.method-r.com/", "http://www.miracleas.dk/", "http://www.hotsos.com/", "http://www.pythian.com/", or even Oracle's Performance Tuning consultants. Definitely worth the price of admission.
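
    For the "current block size" and "flatter tree" questions above, a couple of standard data dictionary queries may help (a sketch; the owner name is illustrative). BLEVEL is the number of branch levels, so a "flatter" index simply means a lower BLEVEL.

    -- Block size is a property of the tablespace, not of an individual table or index
    SELECT tablespace_name, block_size FROM dba_tablespaces;

    -- The database standard block size
    SELECT value FROM v$parameter WHERE name = 'db_block_size';

    -- Index height: compare BLEVEL before and after rebuilding in another tablespace
    SELECT index_name, blevel FROM dba_indexes WHERE owner = 'SCOTT';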

  • Data block size

    I have just started reading the Concepts guide and learned that Oracle database data is stored in data blocks. The standard block size is specified by the
    DB_BLOCK_SIZE init parameter. Additionally, we can specify up to four other block sizes using the DB_nK_CACHE_SIZE parameters.
    Let us say I define in the init.ora
    DB_BLOCK_SIZE = 8K
    DB_CACHE_SIZE = 4G
    DB_4K_CACHE_SIZE=1G
    DB_16K_CACHE_SIZE=1G
    Questions:
    a) Does this mean I can create tablespaces with 8K, 4K and 16K block sizes only?
    b) Whenever I query data from these tablespaces, will the blocks go and sit in the respective caches?
    Thanks in advance.
    Neel

    Yes - you will get an error message if you try to create a tablespace with a non-standard block size without specifying the corresponding db_nk_cache_size parameter in the init parameter file.
    Use the BLOCKSIZE clause of the CREATE TABLESPACE statement to create a tablespace with a block size different from the database standard block size. In order for the BLOCKSIZE clause to succeed, you must have already set the DB_CACHE_SIZE and at least one DB_nK_CACHE_SIZE initialization parameter, and the integer you specify in the BLOCKSIZE clause must correspond to the setting of one DB_nK_CACHE_SIZE parameter. Although redundant, specifying a BLOCKSIZE equal to the standard block size, as specified by the DB_BLOCK_SIZE initialization parameter, is allowed.
    The following statement creates tablespace lmtbsb, but specifies a block size that differs from the standard database block size (as specified by the DB_BLOCK_SIZE initialization parameter):
    CREATE TABLESPACE lmtbsb DATAFILE '/u02/oracle/data/lmtbsb01.dbf' SIZE 50M
    EXTENT MANAGEMENT LOCAL UNIFORM SIZE 128K
    BLOCKSIZE 8K;
    Reference: http://download.oracle.com/docs/cd/B28359_01/server.111/b28310/tspaces003.htm

  • RAID, ASM, and Block Size

    *This was posted in the "Installation" thread, but I copied it here to see if I can get more responses. Thank you.*
    Hello,
    I am about to set up a new Oracle 10.2 database server. In the past, I used RAID5 since 1) it was a fairly small database, 2) there were not a lot of writes, 3) high availability, and 4) it wasted less space compared to other RAID techniques.
    However, even though our database is still small (around 100GB), we are noticing that when we update our data, the time it takes is starting to grow to the point where an update that used to take about an hour now takes 10-12 hours or more. One thing we noticed was that if we created another tablespace with a block size of 16KB, versus our normal tablespace block size of 8KB, we almost cut the update time in half.
    So, we decided that we should really start from scratch on a new server and tune it optimally. Here are some questions I have:
    1) Our server is a DELL PowerEdge 2850 with 4x146GB Hard Drives (584GB total). What is the best way to set up the disks? Should I use RAID 1+0 for everything? Should I use ASM? If I use ASM, how is the RAID configured? Do I use RAID0 for ASM since ASM handles mirroring and striping? How should I setup the directory structure? How about partitioning?
    2) I am installing this on Linux and when I tried on my old system to use 32K block size, it said I could only use 16K due to my OS. Is there a way to use a 32K block size with Linux? Should I use a 32K block size?
    Thanks!

    Hi
    RAID 0 does indeed offer best performance, however if any one drive of the striped set fails you will lose all your data. If you have not considered a backup strategy now would be the time to do so. For redundancy RAID 1 Mirror might be a better option as this will offer a safety net in case of a single drive failure. A RAID is not a backup and you should always consider a workable backup strategy.
    Purchase another 2x1TB drives and you could consider a RAID 10? Two Stripes mirrored.
    Not all your files will be large ones as I'm guessing you'll be using this workstation for the usual mundane matters such as email etc? Selecting a larger block size with small file sizes usually decreases performance. You have to consider all applications and file sizes, in which case the best block size would be 32k.
    My 2p
    Tony

  • Install Recommendations (RAID, ASM, Block Size etc)

    Hello,
    I am about to set up a new Oracle 10.2 database server. In the past, I used RAID5 since 1) it was a fairly small database, 2) there were not a lot of writes, 3) high availability, and 4) it wasted less space compared to other RAID techniques.
    However, even though our database is still small (around 100GB), we are noticing that when we update our data, the time it takes is starting to grow to the point where an update that used to take about an hour now takes 10-12 hours or more. One thing we noticed was that if we created another tablespace with a block size of 16KB, versus our normal tablespace block size of 8KB, we almost cut the update time in half.
    So, we decided that we should really start from scratch on a new server and tune it optimally. Here are some questions I have:
    1) Our server is a DELL PowerEdge 2850 with 4x146GB Hard Drives (584GB total). What is the best way to set up the disks? Should I use RAID 1+0 for everything? Should I use ASM? If I use ASM, how is the RAID configured? Do I use RAID0 for ASM since ASM handles mirroring and striping? How should I setup the directory structure? How about partitioning?
    2) I am installing this on Linux and when I tried on my old system to use 32K block size, it said I could only use 16K due to my OS. Is there a way to use a 32K block size with Linux? Should I use a 32K block size?
    Thanks!

    The way I usually handle databases of that size if you don't feel like migrating to ASM redundancy is to use RAID-10. RAID5 is HORRIBLY slow (your redo logs will hate you) and if your controller is any good, a RAID-10 will be the same speed as a RAID-0 on reads, and almost as fast on writes. Also, when you create your array, make the stripe blocks as close to 1MB as you can. Modern disks can usually cache 1MB pretty easily, and that will speed the performance of your array by a lot.
    I just never got into ASM, not sure why. But I'd say build your array as a RAID-10 (you have the capacity) and you'll notice a huge difference.
    16k block size should be good enough. If you have recordsets that are that large, you might want to consider tweaking your multiblock read count.
    ~Jer
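
    If you do experiment with the multiblock read count mentioned above, a minimal sketch (the value is illustrative; from 10gR2 onward Oracle self-tunes this parameter when it is left unset):

    ALTER SYSTEM SET db_file_multiblock_read_count = 64;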

  • Buffer pool keep and multiple db block sizes

    I have a tablespace with an 8k block size (the database default) and a tablespace with a 16k block size. I have db_cache_size and db_16k_cache_size set (obviously).
    I also have a KEEP buffer pool configured in the database.
    Question: if a table is placed in the tablespace with the 16k block size and its buffer pool is KEEP, does it end up in the keep pool (like tables from the 8k tablespace with the keep pool set), or does it end up in the 16k buffer?

    It ends up in the 16k buffer: the KEEP and RECYCLE pools exist only for the standard block size, so the BUFFER_POOL KEEP attribute has no effect on a segment in a non-standard block size tablespace. You can find the details in the following online manual:
    http://download-west.oracle.com/docs/cd/B19306_01/server.102/b14220/memory.htm#i16408
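
    A quick way to check the attribute (segment name hypothetical; note that BUFFER_POOL shows what was requested, not where non-standard block size segments are actually cached):

    SELECT segment_name, tablespace_name, buffer_pool
    FROM   dba_segments
    WHERE  segment_name = 'MY_TABLE';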

  • Transaction execution time and block size

    Hi,
    I have an Oracle Database 11g R2 64-bit database on Oracle Linux 5.6. My system has ONE hard drive.
    Recently I experimented with an 8.5 GB database in a TPC-E test. I was watching transaction times for 2K, 4K, and 8K Oracle block sizes. Each time I started a new test with a different block size, I created a new database from scratch to avoid messing something up (each time the SGA and PGA parameters were identical).
    In all experiments I gave my own tablespace (NEWTS) a different configuration because of Oracle block/datafile size limits:
    The 2K Oracle block database had 3 datafiles, each 7GB.
    The 4K Oracle block database had 2 datafiles, each 10GB.
    The 8K Oracle block database had 1 datafile of 20GB.
    The best transaction execution time was with the 8K block; the 4K block had slightly longer transaction times, but the 2K Oracle block definitely had the worst transaction times.
    I identified a SQL query (when using 2K and 4K blocks) that was creating hot segments on the E_TRANSACTION table, which is the largest table in the database (2.9GB), and was executed slowly (the number of executions was low compared to the 8K numbers).
    Now here is my question: is it possible that the multiple datafiles are the reason for these slow transaction times? I have AWR reports from that period, but as someone who is still learning about DBA work, I would like to ask how I could identify this multi-datafile problem (if that is THE problem) by looking at the AWR statistics.
    THX to all.

    It's always interesting to see the results of serious attempts to quantify the effects of variation in block sizes, but it's hard to do proper tests and eliminate side effects.
    > My system has ONE hard drive.
    A single drive does make it a little too easy for apparently random variation in performance to creep in.
    > Each time I started a new test with a different block size, I created a new database from scratch to avoid messing something up
    Did you do anything to ensure that the physical location of the data files was a very close match across databases? Inner tracks vs. outer tracks could make a difference.
    > (each time the SGA and PGA parameters were identical).
    Can you give us the list of parameters you set? As you change the block size, identical parameters DON'T necessarily result in the same configuration. Typically a large change in response time turns out to be due to a change in execution plan, and this can often be associated with a different configuration. Did you also check that the system statistics were appropriately matched (which doesn't mean identical across all databases)?
    > In all experiments I gave my own tablespace (NEWTS) a different configuration because of Oracle block/datafile size limits [...]
    If you use bigfile tablespaces I think you can get 8TB in a single file for a tablespace.
    > The best transaction execution time was with the 8K block; the 4K block had slightly longer transaction times, but the 2K Oracle block definitely had the worst transaction times.
    We need some values here, not just "best/worst" - it doesn't even begin to get interesting unless you have at least a 5% variation, and then it has to be consistent and reproducible.
    > I identified a SQL query (when using 2K and 4K blocks) that was creating hot segments on the E_TRANSACTION table [...]
    Query, or DML? What do you mean by "hot"? Is E_TRANSACTION a partitioned table? If not, then it consists of one segment, so did you mean to say "blocks" rather than "segments"? If blocks, which class of blocks?
    > Is it possible that the multiple datafiles are the reason for these slow transaction times? [...]
    On a single disc drive I could probably set something up that ensured you got different performance because of different numbers of files per tablespace. As SB has pointed out, there are some aspects of extent allocation that could have an effect: roughly speaking, extents for a single object go round-robin across the files, so if you have small extent sizes for a large object then a tablescan is more likely to result in larger (slower) head movements when the tablespace is made from multiple files.
    If the results are reproducible, then enable extended tracing (dbms_monitor, with waits) and show us what the tkprof summaries for the slow transactions look like. That may give us some clues.
    Regards
    Jonathan Lewis
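
    A minimal sketch of the tracing step suggested above (session id and serial# illustrative):

    -- Enable extended SQL trace, including wait events, for the test session
    EXEC DBMS_MONITOR.session_trace_enable(session_id => 123, serial_num => 456, waits => TRUE, binds => FALSE);

    -- ... run the slow transactions, then switch tracing off
    EXEC DBMS_MONITOR.session_trace_disable(session_id => 123, serial_num => 456);

    -- Summarize the raw trace file at the OS prompt:
    -- tkprof orcl_ora_12345.trc trace_summary.txt sort=exeela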
