Data block size

I have just started reading the Concepts guide and learned that Oracle database data is stored in data blocks. The standard block size is specified by
the DB_BLOCK_SIZE init parameter. Additionally, we can support up to four other block sizes using the DB_nK_CACHE_SIZE parameters.
Let us say I define in the init.ora
DB_BLOCK_SIZE = 8K
DB_CACHE_SIZE = 4G
DB_4K_CACHE_SIZE=1G
DB_16K_CACHE_SIZE=1G
Questions:
a) Does this mean I can create tablespaces with 8K, 4K and 16K block sizes only?
b) Whenever I query data from these tablespaces, will the blocks be cached in the respective buffer caches?
Thanks in advance.
Neel

Yes. Oracle will raise an error if you create a tablespace with a non-standard block size without first specifying the corresponding DB_nK_CACHE_SIZE parameter in the init parameter file. And yes, blocks read from such a tablespace are cached in the buffer cache configured for that block size.
Use the BLOCKSIZE clause of the CREATE TABLESPACE statement to create a tablespace with a block size different from the database standard block size. In order for the BLOCKSIZE clause to succeed, you must have already set the DB_CACHE_SIZE and at least one DB_nK_CACHE_SIZE initialization parameter. Further, the integer you specify in the BLOCKSIZE clause must correspond to the setting of one DB_nK_CACHE_SIZE parameter. Although redundant, specifying a BLOCKSIZE equal to the standard block size, as specified by the DB_BLOCK_SIZE initialization parameter, is allowed.
The following statement creates tablespace lmtbsb, but specifies a block size that differs from the standard database block size (as specified by the DB_BLOCK_SIZE initialization parameter):
CREATE TABLESPACE lmtbsb DATAFILE '/u02/oracle/data/lmtbsb01.dbf' SIZE 50M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 128K
BLOCKSIZE 8K;
Reference: http://download.oracle.com/docs/cd/B28359_01/server.111/b28310/tspaces003.htm
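To make (b) concrete, here is a minimal sketch assuming the init.ora settings from the question (the datafile path and tablespace name are illustrative). Blocks read from a 16K tablespace are cached in the buffer pool sized by DB_16K_CACHE_SIZE, and each configured block size has its own pool:
CREATE TABLESPACE ts16k
  DATAFILE '/u02/oracle/data/ts16k01.dbf' SIZE 100M
  BLOCKSIZE 16K;

-- one buffer pool per configured block size
SELECT name, block_size, current_size
FROM   v$buffer_pool;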

Similar Messages

  • Data Block size differences

    My Block size is 92160
    but the logs state the following
    [Tue Jan 10 21:52:48 2012]Local/RFPWork/RFP/admin@Native Directory/2564/Info(1010008)
    Maximum Declared Blocks is [428197512113280] with data block size of [27132]
    [Tue Jan 10 21:52:48 2012]Local/RFPWork/RFP/admin@Native Directory/2564/Info(1010007)
    Maximum Actual Possible Blocks is [377251820487360] with data block size of [11520]
    Why is there a difference between the three values?
    Please advise

    Correction: the values in the logs are in bytes; your block size is in kilobytes. Sorry.

  • Query Execution/Elapsed Time and Oracle Data Blocks

    Hi,
    I have created 3 tables with one column only. As an example Table 1 below:
    SQL> create table T1 ( x char(2000));
    So 3 tables are created in this way i.e. T1,T2 and T3.
    T1 = created in the default database tablespace of 8k (11g v11.1.0.6.0 - Production) (OS = Windows).
    T2 = created in a tablespace with block size 16k.
    T3 = created in a tablespace with block size 4k, in the same instance.
    Each table has approx. 500 rows (so the table sizes are the same in all cases, to test query execution time). As these 3 tables are created with different data block sizes, the ALLOCATED number of data blocks differs in each case.
    T1 =  8k = 256 blocks = 00:00:04.76 (query execution/elapsed time)
    T2 = 16k = 121 blocks = 00:00:04.64
    T3 =  4k = 490 blocks = 00:00:04.91
    Table access is FULL, i.e. I used select * from table_name; in all 3 cases. No index, nothing.
    My question is: why is the query execution time nearly the same in all 3 cases, when Oracle has to read all the data blocks in each case to fetch the records and there is such a large difference in the allocated number of blocks?
    In the 16k block size case, Oracle has to read just 121 blocks, and it takes nearly the same time as reading 490 blocks???
    This is just 1 example with different data block sizes. I have around 40 tables in each block-size tablespace and the results are nearly the same. It's very strange to me, because there is a big difference in the number of allocated blocks but the execution time is almost the same, differing only by milliseconds.
    I'll highly appreciate the expert opinions.
    Bundle of thanks in advance.
    Best Regards,

    Hi Chris,
    No, I'm not using separate databases; it's an 8k database with non-standard block sizes of 16k and 4k.
    Actually I wanted to test the elapsed time for these 3 tables, so I tried to create tables of the same size.
    I equalized them by creating a one-column table with char(2000).
    555 MB is the figure I wanted for all 3 tables (no special figure, just bigger than the RAM used by my db at startup, to be sure the records are not retrieved from cache).
    So row size with overhead is 2006 bytes * 290,000 rows = 581,740,000 bytes / 1024 = 568,105 KB / 1024 = 555 MB.
    Through this calculation I assumed that would be the total table size, so I created the same number of rows in all 3 block sizes.
    If that's wrong then what a mess, because I have been calculating table sizes this way for the last few months.
    Can you please explain a little how you found the table sizes in the different block sizes? I did understand how you calculated the size in MB from these 3 block sizes:
    T8K  =  97177 blocks =  759 MB (97177 * 8 = 777,416 KB / 1024 = 759 MB)
    T16K =  41639 blocks =  650 MB
    T4K  = 293656 blocks = 1147 MB
    Calculating the size of a table is new to me. Can you please tell me how many rows I should create in each of these 3 tables to make them equal in MB, so I can test the elapsed time?
    Then I'll run my test again and post the results here. If I've calculated the table sizes wrongly, there is no point talking about elapsed time; first I must equalize the table sizes properly.
    SQL> select sum(bytes)/1024/1024 "Size in MB" from dba_segments
      2  where segment_name = 'T16K';
    Size in MB
           655
    Is the above SQL correct for calculating the size? Is it a valid alternative to your method?
    I created the same table again, with everything the same, and the result is:
    SQL> select num_rows, blocks from user_tables where table_name = 'T16K';
      NUM_ROWS     BLOCKS
        290000      41703
    64 more blocks are allocated this time, so maybe that's why it shows a total size of 655 instead of 650.
    Thanks a lot for your help.
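    One way to compare the three tables directly (a sketch; the table names T8K, T16K and T4K are taken from this thread) is to size each segment using its own tablespace's block size instead of assuming the database default:
    SELECT s.segment_name,
           s.blocks,
           t.block_size,
           ROUND(s.blocks * t.block_size / 1024 / 1024) AS size_mb
    FROM   dba_segments s
           JOIN dba_tablespaces t ON t.tablespace_name = s.tablespace_name
    WHERE  s.segment_name IN ('T8K', 'T16K', 'T4K');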
    Best Regards,
    KAm

  • Choosing block size for RAID 0 & Final Cut

    Hi.
    I now have 3 500GB internal Seagate drives in bays 2/3/4 and want to make a striped 1.5TB RAID to use with Final Cut Studio 2. The help page talks about choosing a "large" data block size for use with video, but makes no specific size suggestion. What value would you recommend that I select for the block size? I haven't been in there yet so I don't know what the choices are.
    Any other settings I should be aware of that will optimize the RAID performance for video capture and editing? Thanks!
    Fred

    If you're using Disk Utility to set up your RAID, when you go to the RAID tab you'll see an Options button near the bottom of the window. Clicking this opens a small menu where you can set the data block size; the largest is 256K, which is what you'd want to use.
    As for your other question, have a look at this website: http://bytepile.com/raid_class.php
    Note that Disk Utility can only set up RAID 0 and RAID 1 (if I remember rightly).

  • Dynamic Calc processor cannot lock more than [100] ESM blocks during the calculation, please increase CalcLockBlock setting and then retry(a small data cache setting could also cause this problem, please check the data cache size setting).

    Hi,
    Our environment is Essbase 11.1.2.2, working with the Essbase EAS and Shared Services components. One of our users tried to run the calc script of one application and hit this error:
    Dynamic Calc processor cannot lock more than [100] ESM blocks during the calculation, please increase CalcLockBlock setting and then retry(a small data cache setting could also cause this problem, please check the data cache size setting).
    I did some Googling and found that we need to add something to the essbase.cfg file, as described below.
    1012704 Dynamic Calc processor cannot lock more than number ESM blocks during the calculation, please increase CalcLockBlock setting and then retry (a small data cache setting could also cause this problem, please check the data cache size setting).
    Possible Problems
    Analytic Services could not lock enough blocks to perform the calculation.
    Possible Solutions
    Increase the number of blocks that Analytic Services can allocate for a calculation:
    Set the maximum number of blocks that Analytic Services can allocate to at least 500. 
    If you do not have an $ARBORPATH/bin/essbase.cfg file on the server computer, create one using a text editor.
    In the essbase.cfg file on the server computer, set CALCLOCKBLOCKHIGH to 500.
    Stop and restart Analytic Server.
    Add the SET LOCKBLOCK HIGH command to the beginning of the calculation script.
    Set the data cache large enough to hold all the blocks specified in the CALCLOCKBLOCKHIGH setting. 
    Determine the block size.
    Set the data cache size.
    Actually, our server config file (essbase.cfg) does not currently contain the settings below:
    CalcLockBlockHigh 2000
    CalcLockBlockDefault 200
    CalcLockBlocklow 50
    So my question is: if we edit the essbase.cfg file, add the above settings, and restart the services, will it work? And if so, why should we change the server config file when the problem is with one application's calc script? Please guide me on how to proceed.
    Regards,
    Naveen

    Your calculation needs to hold more blocks in memory than your current set up allows.
    From the docs (quoting so I don't have to write it, not to be a smarta***):
    CALCLOCKBLOCK specifies the number of blocks that can be fixed at each level of the SET LOCKBLOCK HIGH | DEFAULT | LOW calculation script command.
    When a block is calculated, Essbase fixes (gets addressability to) the block along with the blocks containing its children. Essbase calculates the block and then releases it along with the blocks containing its children. By default, Essbase allows up to 100 blocks to be fixed concurrently when calculating a block. This is sufficient for most database calculations. However, you may want to set a number higher than 100 if you are consolidating very large numbers of children in a formula calculation. This ensures that Essbase can fix all the required blocks when calculating a data block and that performance will not be impaired.
    Example
    If the essbase.cfg file contains the following settings:
    CALCLOCKBLOCKHIGH 500
    CALCLOCKBLOCKDEFAULT 200
    CALCLOCKBLOCKLOW 50
    then you can use the following SET LOCKBLOCK setting commands in a calculation script:
    SET LOCKBLOCK HIGH; 
    means that Essbase can fix up to 500 data blocks when calculating one block.
    The support doc is saying to change your config file so that those settings are available for any calc script to use.
    On a side note, if this was working previously and now isn't, it is worth investigating whether this is simply due to standard growth or to a recent change that has had an unexpectedly significant impact.
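    Putting the pieces of this thread together, a sketch of the two changes (the 2000/200/50 values are the ones quoted earlier in the thread, not a universal recommendation; stop and restart the Essbase server after editing the file):
    ; in $ARBORPATH/bin/essbase.cfg on the server
    CALCLOCKBLOCKHIGH 2000
    CALCLOCKBLOCKDEFAULT 200
    CALCLOCKBLOCKLOW 50
    /* at the top of the failing calc script */
    SET LOCKBLOCK HIGH;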

  • Tablespaces and block size in Data Warehouse

    We are preparing to implement a data warehouse on Oracle 11g R2, and currently I am trying to work out a storage strategy; unfortunately I have very little experience with that. The question is: what is the general advice regarding tablespaces and block size? I did some research and it is hard to find a clear answer; some resources advise that block size is not important and can be left small (8 KB), while others state that it is crucial and should be the biggest possible (32 KB, Oracle's maximum). The other question is which data should be placed where. Many resources state that keeping indexes apart from their data is a myth and a bad practice that may decrease performance; others say that although there is no performance benefit, index tablespaces do not need to be backed up, and that is why they should be split off. The next idea is to have separate tablespaces for big tables, small tables, and tables accessed frequently and infrequently. How should I organize partitions in terms of tablespaces? Is it a good idea to keep "old" (read-only) data partitions in separate tablespaces?
    Any help highly appreciated and thank you in advance.

    Wojtus-J wrote:
    "We are preparing to implement a data warehouse on Oracle 11g R2, and currently I am trying to work out a storage strategy - unfortunately I have very little experience with that."
    With little experience, the key feature is to avoid big mistakes - don't try to get too clever.
    "The question is: what is the general advice regarding tablespaces and block size?"
    If you need to ask about block sizes, use the default (i.e. 8KB).
    "I did some research and it is hard to find a clear answer."
    But if you get contradictory advice from this forum, how would you decide which bits to follow?
    A couple of sensible guidelines when researching on the internet: look for material that is datestamped with recent dates (the last couple of years), or that references recent - or at least relevant - versions of Oracle. Give preference to material that explains WHY an idea might be relevant, and greater preference to material that DEMONSTRATES why an idea might be relevant. Check that any explanations and demonstrations are relevant to your planned setup.
    "The other question is which data should be placed where? ... Is it a good idea to keep "old" (read-only) data partitions in separate tablespaces?"
    It is often convenient, and sometimes very important, to separate data into different tablespaces based on some aspect of functionality. The performance argument was mooted (badly) in an era when discs were small and (disk) partitions were hard; but all your other examples of why to split are potentially valid for administrative reasons: big/small, table/index, old/new, read-only/read-write, fact/dimension, etc.
    For data warehouses a fairly common practice is to identify some sort of aging pattern for the data, and try to pick a boundary that allows you to partition the data so that a large fraction of it can eventually be made read-only. Using tablespaces to mark time boundaries can be a great convenience - note that the tablespace boundary need not match the partition boundary, e.g. daily partitions in a monthly tablespace. If you take this type of approach, you might have a "working" tablespace for recent data, and then copy the older data to a "time-specific" tablespace, packing it and making it read-only as you do so.
    Tablespaces are (broadly speaking) about strategy, not performance. (Temporary tablespaces / tablespace groups are probably the exception to this thought.)
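    To make the aging pattern concrete, here is a hypothetical sketch (all object names invented) of packing an old partition into a time-specific tablespace and then freezing it:
    -- move last January's data, rebuild its local index partition,
    -- then mark the tablespace read-only so it needs only one final backup
    ALTER TABLE sales MOVE PARTITION sales_2011_01 TABLESPACE ts_2011_01;
    ALTER INDEX sales_ix REBUILD PARTITION sales_2011_01;
    ALTER TABLESPACE ts_2011_01 READ ONLY;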
    Regards
    Jonathan Lewis

  • Block size in tt for writing data to transaction log and checkpoint files

    Hello,
    What block size does TimesTen use when writing data to the transaction log and checkpoint files? Does it use a fixed block size for filesystem writes?

    Although in theory logging can write 2 KB blocks, in almost all circumstances it will write 4 KB or larger, so yes, a filesystem with a 4 KB block size is fine for both checkpointing and logging.
    Chris

  • Space utilization and optimal row size in Data Blocks in Oracle 11g

    Hi,
    My main concern so far is to find the optimal number of rows/tuples per Oracle data block of size 8192. For that purpose I'm doing different kinds of testing.
    I created one table:
    SQL> create table t5
    2 (x char(2000));
    Table created.
    Inserted 4 rows in it.
    Queried to check which block numbers these 4 rows are in.
    SQL> SELECT x,DBMS_ROWID.ROWID_BLOCK_NUMBER(rowid) "Block No."
    2 from t5;
    X  Block No.
    A        422
    B        422
    C        422
    D        423
    SQL> analyze table t5 compute statistics;
    Table analyzed.
    SQL> select LOGGING,BACKED_UP,NUM_ROWS,BLOCKS,EMPTY_BLOCKS, AVG_SPACE,CHAIN_CNT, AVG_ROW_LEN
    2 from user_tables
    3 where table_name = 'T5';
    LOG B NUM_ROWS BLOCKS EMPTY_BLOCKS AVG_SPACE CHAIN_CNT AVG_ROW_LEN
    YES N        4      5            3      6466         0        2006
    8 data blocks are initially allocated, but my question is: even when creating small tables, every time I see that BLOCKS (used) is 5.
    Why are 3 blocks reported as EMPTY_BLOCKS all the time?
    AVG_SPACE is the free space in every used block, if I'm not wrong?
    In this scenario 3 rows (each around 2006 bytes) are inserted into one block of 8192, i.e. 8192 - 6006 = 2186. Is 2186 the PCTFREE, or what?
    How can I see a CHAIN_CNT column value other than 0?
    How can I actually find the optimal number of rows/tuples in 1 data block that will not increase overhead on the system?
    I'll highly appreciate the genuine help and solid suggestions.
    Thanks in Advance.
    Best Regards,
    Kam

    kamy555 wrote:
    "If you want to suggest something it'll be nice of you. I can't disclose my main concern :)"
    If you can't disclose what you mean by "optimal" and you can't disclose what "overhead" you are concerned about, I'm not sure that anyone could answer your question.
    "In this scenario 3 rows (each around 2006 bytes) are inserted into one block of 8192, i.e. 8192 - 6006 = 2186. Is 2186 the PCTFREE, or what?"
    PCTFREE is a percentage. You haven't specified what you set it to, but it decreases the space in a block available for inserts to 8192 * (1 - PCTFREE/100).
    "How can I see a CHAIN_CNT column value other than 0?"
    Why do you believe there would be chained rows? You would need to use the ANALYZE command to populate the CHAIN_CNT column.
    "How can I actually find the optimal number of rows/tuples in 1 data block that will not increase overhead on the system?"
    Since you can't disclose what "optimal" means or what "overhead" you're concerned with, I don't see how this question could be answered.
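    For a rough starting point, a sketch (assuming the default PCTFREE of 10 and ignoring the block header, which is on the order of 100 bytes) that estimates rows per block from the statistics gathered above:
    SELECT table_name,
           avg_row_len,
           FLOOR(8192 * 0.9 / avg_row_len) AS approx_rows_per_block
    FROM   user_tables
    WHERE  table_name = 'T5';
    For T5 this gives FLOOR(7372.8 / 2006) = 3, which matches the three rows observed in block 422.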
    Justin

  • How to find Tuple(row) size in a Data Block

    hi,
    I want to find out the size of 1 row, and the total size of a table, within data blocks in Oracle 11g. For example, I have an Employee table with 300 rows. I want to find out how much space these rows occupy in data blocks (8192 bytes by default) and also the size of 1 individual row.
    I'd appreciate any suggestions.
    Thanks a lot.
    Regards,
    Kamran

    Hi,
    Welcome to the forum!
    See Calculating Oracle average row length (http://www.dba-oracle.com/t_average_row_length.htm), I hope it helps
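    As a sketch (assuming the table is named EMPLOYEE and the block size is the 8K default), the same numbers can be read from the optimizer statistics:
    EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'EMPLOYEE');
    SELECT avg_row_len,                          -- average size of one row, in bytes
           num_rows,
           avg_row_len * num_rows AS data_bytes, -- space the rows themselves occupy
           blocks * 8192 AS allocated_bytes      -- space allocated to the segment
    FROM   user_tables
    WHERE  table_name = 'EMPLOYEE';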
    Regards,

  • Buffer data before charting it, and block size

    I hope you can help me with this situation, because I have been stuck on it for two days and I don't think I'll see the light any time soon, and time is a scarce resource.
    I want to use an NI DAQCard-AI-16XE-50 (20 kS/s according to the specifications). To acquire data in DASYLab I've been using the OPC DA system, but when I try to chart the signal I get awful results.
    I guess the origin of the problem is that the PC is not powerful enough to generate a chart in real time, so is there a block to buffer the data and then graph it, without using the "Write Data" block, to avoid writing data to disk?
    Another cause could be an incorrect block size value, but in my view, with 10 kHz and a block size of 4096, that is more than enough to acquire a 26 Hz signal (shown in the photo). If I reduce the block size to 1, the graph shows a constant at the first acquired value. Why could that be?
    Thanks in advance for your answers, 

    Is there someone who can help me?
    I connected a CN606TC2 and DASYLab 11 using the RS232 cable, and have followed the CN606TC2 instruction manual.
    In the DASYLab RS232 module there are boxes for:
    1) Measurement data request,
    2) Measurement data format ...
    What should I write in these boxes so that the DASYLab digital meter modules can read from the CN606TC2?
    To start communication, the command module must send the alert code ASCII [L], hex 4C.
    Commands requesting data from the scanner (CN606TC2):
     ASCII [A] hex 41 = Zones/ Alarms/ Scan time
     ASCII [M] hex 4D = Model/ Password/ ID#/ # of zones
     ASCII [S] hex 53 = Setpoints
     ASCII [T] hex 54 = Temperature
    I do not understand the program and the ASCII codes.
    I send [T] in the RS232 monitor and DASYLab returns 54.
    I am very grateful for your help

  • ORA-00607: Internal error occurred while making a change to a data block

    hi
    I have been using Oracle 10g XE on Unix for testing, but when I connected to the database today the following error occurred:
    SQL> startup
    ORACLE instance started.
    Total System Global Area 83886080 bytes
    Fixed Size 1257284 bytes
    Variable Size 75497660 bytes
    Database Buffers 4194304 bytes
    Redo Buffers 2936832 bytes
    Database mounted.
    ORA-00607: Internal error occurred while making a change to a data block
    ORA-00600: internal error code, arguments: [4193], [625], [640], [], [], [],
    I have checked the alert log, but no proper solution is given there. How can I start the database?

    Duh, and of course I missed the significance of:
    "oracle 10 XE in unix"
    XE is only supported on Linux - according to the downloads page that means "Debian, Mandriva, Novell, Red Hat and Ubuntu".
    Oracle has not publicly stated the reasons why XE is only available on Linux and Windows, but my guess is that it is something to do with the Intel chip architecture. If so, it would be possible for Oracle to port XE to run on Solaris for x86, but they have not done so yet.
    Which means, alas, that you are unlikely to get any help for this one, because XE is not supported on any Unix platform.
    Good luck, APC

  • Raid storage usage and block size

    We have two Xserve RAID units in RAID 5, and we are adding a new 16-bay ACNC RAID with 16 1.5TB drives in RAID 6 + hot spare. I initialized the RAID 6 with a 128K block size. The total data moving from the older RAID volumes is around 5.7TB, but on the new RAID it is taking around 7.4TB of space. Is this due to the 128K block size? This is a prepress server, so most of the files are quite large, but there may be lots of small files as well.

    Hi
    RAID 0 does indeed offer the best performance; however, if any one drive of the striped set fails you will lose all your data. If you have not considered a backup strategy, now would be the time to do so. For redundancy, a RAID 1 mirror might be a better option, as it offers a safety net in case of a single drive failure. A RAID is not a backup, and you should always have a workable backup strategy.
    Purchase another 2x1TB drives and you could consider RAID 10 - two stripes, mirrored.
    Not all your files will be large ones, as I'm guessing you'll be using this workstation for the usual mundane matters such as email etc.? Selecting a larger block size with small file sizes usually decreases performance. You have to consider all applications and file sizes, in which case the best block size would be 32k.
    My 2p
    Tony

  • Can't change default block size in dbca

    10.1.0.3
    solaris
    I am using dbca to create a database. When I go to the sizing screen and try to change the default block size, the option is always greyed out at 8k.
    Does anyone know why? This happens even when I pick a data warehouse template.

    The option is greyed out because templates that include datafiles (such as the data warehouse template) carry a pre-created seed database whose block size is already fixed; to pick a non-8K block size you need a template without datafiles, i.e. Custom Database. That said, there is a reason Oracle uses 8K as the default database block size for its warehouse template: changing the default block size to a larger one generally does not result in better performance when both databases are allocated exactly the same SGA memory.
    HTH -- Mark D Powell --

  • Change block size for several log-files simultaneously?

    Hi,
    I'm using SignalExpress to record and analyze data.
    Sometimes I want to analyze the recorded data both over a short period of time and over a longer one.
    (Imagine creating an average of every second first, and then an average of every 10 seconds.)
    Then I need to change all the log files, and also the specific parts of each log file. See the attachment.
    I sometimes have up to 1000 log files containing signals from 4 different modules; that makes 4000 adjustments to change from block size 10000 to block size 1000.
    Is there any way to adjust the block size of all the log files at once?
    Many thanks!
    Anders Hansson
    Engineer

    Hi,
    Isn't anyone else interested in a solution to this?
    I reported this to the NI feedback service and they advised me to post the request here to get a quicker reply.
    So...
    Best regards
    Ingenjör Hansson

  • Maximum Data file size in 10g,11g

    DB Versions:10g, 11g
    OS & versions: Aix 6.1, Sun OS 5.9, Solaris 10
    This is what Oracle 11g Documentation
    http://download.oracle.com/docs/cd/B28359_01/server.111/b28320/limits002.htm
    says about the maximum data file size:
    Operating system dependent. Limited by maximum operating system file size; typically 2^22 or 4 million blocks.
    I don't understand what this 2^22 thing is.
    In our AIX machine and ulimit command show
    $ ulimit -a
    time(seconds)        unlimited
    file(blocks)         unlimited  <-------------------------------------------
    data(kbytes)         unlimited
    stack(kbytes)        4194304
    memory(kbytes)       unlimited
    coredump(blocks)     unlimited
    nofiles(descriptors) unlimited
    threads(per process) unlimited
    processes(per user)  unlimited
    So this means that, on AIX, both the OS and Oracle can create a data file of any size. Right?
    What about 10g and 11g DBs running on SunOS 5.9 and Solaris 10? Is there any limit on the data file size?

    "How do I determine the maximum number of blocks for an OS?"
    df -g would give you the block size. The OS block size is 512 bytes on AIX.
    "Let's say the db_block_size is 8k. What would the maximum file size for a data file in a smallfile tablespace and a bigfile tablespace be?"
    Smallfile (traditional) tablespaces: a smallfile tablespace is a traditional Oracle tablespace, which can contain 1022 datafiles or tempfiles, each of which can contain up to approximately 4 million (2^22) blocks - so 32 GB per file with 8K blocks.
    A bigfile tablespace contains only one datafile or tempfile, which can contain up to approximately 4 billion (2^32) blocks. The maximum size of the single datafile or tempfile is 128 terabytes (TB) for a tablespace with 32K blocks and 32 TB for a tablespace with 8K blocks.
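    As a quick check of those numbers, a sketch that works out both limits for an 8K block size:
    -- 2^22 blocks per smallfile datafile, 2^32 blocks per bigfile datafile
    SELECT POWER(2, 22) * 8192 / POWER(1024, 3) AS smallfile_max_gb,  -- 32
           POWER(2, 32) * 8192 / POWER(1024, 4) AS bigfile_max_tb     -- 32
    FROM   dual;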
    HTH
    -Anantha
