Available blocks in buffer cache
Hi.
I need to find the number of available blocks in the buffer cache. I cannot query x$bh since I am not a SYSDBA user. Does anyone have an idea how to get this information? I tried querying the v$bh view but I cannot get it right.
Anyone with a good idea?
Rgds
Kjell Ove
No,
When you have a 100M buffer cache, it means you can buffer 100M/8K blocks of your database in the cache, so you don't need to read them from disk.
When the cache gets full, Oracle uses a modified Least Recently Used (LRU) algorithm to determine which blocks can be flushed.
If a block is unmodified (not dirty) it is simply removed; if it is modified (dirty) it is written to disk first.
When you insert a record (you seem to be really obsessed by this):
- Oracle will look for free space in the current segment. When it finds a block that is not yet in the cache, it will read that block into the cache.
- If there is no space, Oracle will allocate a new extent and read blocks from the new extent into the cache. Simply put: each buffer in the cache has an RDBA (relative data block address), which points to a block on disk.
- When it can't allocate an extent, Oracle will try to extend the tablespace (actually the datafile). If this doesn't succeed, Oracle raises an error and sends the error number and error text to the client program. The failing statement is rolled back automatically.
Hth
Sybrand Bakker
Senior Oracle DBA
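A sketch of what the OP could run without SYSDBA, assuming SELECT access on v$bh (e.g. via a grant or SELECT ANY DICTIONARY): grouping v$bh by status gives a rough free/used picture, with the caveat discussed later in this thread that v$bh only lists buffer headers, not the full configured cache.

```sql
-- Rough sketch, not an exact free-space measure: v$bh only reports
-- buffer headers, so 'free' here is an approximation.
select status, count(*) as buffers
from   v$bh
group  by status
order  by count(*) desc;
```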
Similar Messages
-
Find available space in buffer cache
Hi.
I want to find the available space in the buffer cache. My first thought was to make it 8i-9i compatible by using v$bh (rather than x$bh) to calculate the total memory and the available space.
I have the following pl/sql block to calculate the values:
declare
  num_free_blck  integer;
  num_all_blck   integer;
  num_used_blck  integer;
  overal_cache   number := 0;
  used_cache     number := 0;
  free_cache     number := 0;
  blck_size      integer;
  pct_free       number := 0;
begin
  select count(1) into num_free_blck from v$bh where status = 'free';
  select count(1) into num_all_blck  from v$bh;
  select count(1) into num_used_blck from v$bh where status <> 'free';
  select value    into blck_size     from v$parameter where name = 'db_block_size';
  used_cache   := (blck_size * num_used_blck) / (1024*1024);
  free_cache   := (blck_size * num_free_blck) / (1024*1024);
  overal_cache := (blck_size * num_all_blck)  / (1024*1024);
  pct_free     := (free_cache / overal_cache) * 100;
  dbms_output.put_line('There are '||num_free_blck||' free blocks in buffer cache');
  dbms_output.put_line('There are '||num_used_blck||' used block in buffer cache');
  dbms_output.put_line('There are totally '||num_all_blck||' blocks in buffer cache');
  dbms_output.put_line('Overall cache size is '||to_char(overal_cache,'999.9')|| 'mb');
  dbms_output.put_line('Used cache is '||to_char(used_cache,'999.9')||' mb');
  dbms_output.put_line('Free cache is '||to_char(free_cache,'999.9')||' mb');
  dbms_output.put_line('Percent free db_cache is '||to_char(pct_free,'99.9')||' %');
end;
/
The result of the execution is:
SQL> @c:\temp\bh
There are 3819 free blocks in buffer cache
There are 4189 used block in buffer cache
There are totally 8008 blocks in buffer cache
Overall cache size is 62.6mb
Used cache is 32.7 mb
Free cache is 29.8 mb
Percent free db_cache is 47.7 %
PL/SQL procedure successfully completed.
SQL>
This is not correct according to the actual size of the buffer cache:
SQL> select name,value from v$parameter where name='db_cache_size';
NAME
VALUE
db_cache_size
67108864
SQL>
Does anyone have an idea about this?
Thanks
Kjell Ove

Mark D Powell wrote:
select decode(state,0,'Free',
1,'Read and Modified',
2,'Read and Not Modified',
3,'Currently being Modified',
'Other'
) buffer_state,
count(*) buffer_count
from sys.xx_bh
group by decode(state,0,'Free',
1,'Read and Modified',
2,'Read and Not Modified',
3,'Currently being Modified',
'Other'
);
Provided the OP figures out that xx_bh is probably a view defined by SYS on top of x$bh, this will get him the number of free buffers - which may be what he wants - but apart from that your query is at least 10 years short of complete, and the decode() of state 3 is definitely wrong.
The decode of x$bh.state for 10g is:
decode(state,
0,'free',
1,'xcur',
2,'scur',
3,'cr',
4,'read',
5,'mrec',
6,'irec',
7,'write',
8,'pi',
9,'memory',
10,'mwrite',
11,'donated'
), and for 11g it is:
decode(state,
0, 'free',
1, 'xcur',
2, 'scur',
3, 'cr',
4, 'read',
5, 'mrec',
6, 'irec',
7, 'write',
8, 'pi',
9, 'memory',
10, 'mwrite',
11, 'donated',
12, 'protected',
13, 'securefile',
14, 'siop',
15, 'recckpt',
16, 'flashfree',
17, 'flashcur',
18, 'flashna'
), (At least, that was the last time I looked - they may have changed again in 10.2.0.5 and 11.2.0.2)
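A sketch applying the 10g decode above directly to x$bh (SYSDBA only; the state values are the ones listed above, which may differ by patch level):

```sql
-- Buffer-state breakdown per Jonathan's 10g decode of x$bh.state.
select decode(state,
         0,'free',   1,'xcur',   2,'scur',    3,'cr',
         4,'read',   5,'mrec',   6,'irec',    7,'write',
         8,'pi',     9,'memory', 10,'mwrite', 11,'donated',
         'other') buffer_state,
       count(*)  buffer_count
from   x$bh
group  by state
order  by count(*) desc;
```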
Regards
Jonathan Lewis -
How to remove blocks from buffer cache for a specific object
hi everybody,
is it possible to remove blocks which belong to a specific object (a table, for example) from the buffer cache?
as you know, there is the
alter system flush buffer_cache;
command, but it does its job on the whole buffer cache. If you ask me why I want this: for tuning reasons. I want to test some PL/SQL code as if it were running for the first time (reading from disk).
ps: I use Oracle 11g R2

Hi mustafa,
Your performance will not degrade if you run the query a second time (if I understood correctly, you worry about the performance when you execute the procedure a second time). Executing the code/SQL statements over and over again has the following two benefits:
1) It avoids hard parsing (hard parsing is a resource-intensive operation and generally increases the overall processing time).
2) It avoids physical read I/O (you will see the benefit when the data blocks are already cached and you don't have to spend time reading blocks from disk; reading from disk is a much costlier and more time-consuming operation than reading from RAM).
Having said that, sometimes badly written queries will acquire more blocks than required and consume most of the buffer cache, and this can sometimes affect other important blocks and force them to be flushed out of the buffer cache.
Oracle has built in some intelligence for large full table scan operations: for example, while doing a full table scan (I hope you already know what an FTS is), Oracle will put the table's blocks at the end of the LRU chain, so these buffers will be flushed out before any others.
From oracle documentation:
"When the user process is performing a full table scan, it reads the blocks of the table into buffers and puts them on the LRU end (instead of the MRU end) of the LRU list. This is because a fully scanned table usually is needed only briefly, so the blocks should be moved out quickly to leave more frequently used blocks in the cache.
You can control this default behavior of blocks involved in table scans on a table-by-table basis. To specify that blocks of the table are to be placed at the MRU end of the list during a full table scan, use the CACHE clause when creating or altering a table or cluster. You can specify this behavior for small lookup tables or large static historical tables to avoid I/O on subsequent accesses of the table."
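The CACHE clause described in that documentation can be sketched as follows; MY_LOOKUP is a hypothetical table name used only for illustration:

```sql
-- Keep a small lookup table's blocks at the MRU end during full table scans.
alter table my_lookup cache;

-- Verify the setting (CACHE column shows Y when enabled).
select table_name, cache from user_tables where table_name = 'MY_LOOKUP';

-- Revert to the default behaviour.
alter table my_lookup nocache;
```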
Regards
Edited by: 909592 on Feb 6, 2012 4:37 PM -
AWR's buffer cache reads and logical reads
In an AWR report, under the "Segments by Logical Reads" section, there is a total logical reads figure, which I assume is in units of blocks. Under the "IOStat by Function summary" section, buffer cache reads are given in bytes. Shouldn't the number of logical reads x 8K (if the block size is 8K) equal the number of buffer cache reads?
They are not equal, not even close; does anybody know why? Thanks

Hi,
buffer gets = the number of times a block was requested from the buffer cache. A buffer get always results in a logical read. Depending on whether or not a copy of the block is available in the buffer cache, a logical read may or may not involve a physical read. So "buffer gets" and "logical reads" are basically synonyms and are often used interchangeably.
Oracle doesn't have a special "undo buffer". Undo blocks are stored in rollback segments in UNDO tablespace, and are managed in the same way data blocks are (they're even protected by redo). If a consistent get requires reading from UNDO tablespace, then statistics counters will show that, i.e. there will be one more consistent get in your autotrace.
For more information and some examples, see a thread at askTom:
http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:549546900346542976
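The buffer gets / logical reads relationship can be sketched directly from v$sysstat (standard statistic names; values are instance-dependent):

```sql
-- logical reads = 'db block gets' + 'consistent gets';
-- 'physical reads' is the subset that actually had to come from disk.
select name, value
from   v$sysstat
where  name in ('db block gets', 'consistent gets', 'physical reads');
```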
Best regards,
Nikolay -
Product: ORACLE SERVER
Date written: 2004-10-13
FINDING HOT BLOCKS IN THE BUFFER CACHE
=============================
PURPOSE
Oracle uses the buffer cache to manage data blocks efficiently.
Each buffer in the cache has a buffer handle and is managed on a buffer cache chain. To find a desired block quickly, Oracle applies a hash function, using the block address as the hash key, to locate the specific buffer chain, and then scans that list to find the block. A given block is therefore managed by exactly one buffer chain.
Cache buffers chains latch contention can usually be divided into two cases: many buffers being managed by one buffer chain, or a hot block (hot buffer).
The following explains how to find hot blocks.
Explanation
1. In v$latch_children, find the cache buffers chains latches with the most contention.
select * from
(select l.child#, l.addr, l.sleeps, l.sleep1, l.sleep3
from v$latch_children l
where l.latch# = 66
order by l.sleeps desc)
where rownum < 11;
Run this query at regular intervals to pick out the latches with the most sleeps at the current point in time.
CHILD# ADDR SLEEPS SLEEP1 SLEEP3
29406 33F84420 301809 242441 4041
2532 33BEDB68 233945 213363 651
3642 33C13A70 215950 181390 2330
3575 33C115CC 198600 102161 14556
25216 33EF50B8 195763 176779 796
33288 34008F18 180123 155168 1735
18475 33E0EA3C 169387 156205 405
32301 33FE7354 152855 137046 616
20770 33E5D150 91845 74889 889
2533 33BEDBF4 78774 74539 110
10 rows selected.
CHILD# ADDR SLEEPS SLEEP1 SLEEP3
29406 33F84420 301809 242441 4041 *
2532 33BEDB68 234272 213670 651
3642 33C13A70 216086 181520 2330
3575 33C115CC 198600 102161 14556 *
25216 33EF50B8 196068 177069 797
33288 34008F18 180123 155168 1735 *
18475 33E0EA3C 169598 156408 405
32301 33FE7354 152855 137046 616 *
20770 33E5D150 91845 74889 889 *
2533 33BEDBF4 78855 74618 110
10 rows selected.
The latches showing a large increase in sleeps at this point are child# 2532, 3642, 25216, 18475 and 2533.
2. Now find the hot blocks among the buffers managed by these latches.
select b.hladdr, l.sleeps, l.sleep3, b.tch,
b.dbarfil, b.dbablk, b.state
from x$bh b, v$latch_children l
where l.child# in (2532, 3642, 25216, 18475, 2533)
and b.hladdr = l.addr
order by tch;
Since tch in x$bh is the touch count, the buffers with the largest values can be considered the hot blocks.
HLADDR   SLEEPS  SLEEP3  TCH  DBARFIL  DBABLK  STATE
33E0EA3C 169818  405     0    192      102764  1
33E0EA3C 169818  405     0    70       97847   1
33E0EA3C 169818  405     0    106      38012   1
33EF50B8 196361  797     1    193      115327  1
33C13A70 216224  2330    91   33       4494    1
33BEDBF4 78952   110     104  33       3385    1
33BEDB68 234617  651     132  33       3384    1
33EF50B8 196361  797     146  24       25614   1
33E0EA3C 169818  405     164  32       10107   1
3. Run the query below to find the object associated with the hot block.
select segment_name, segment_type, owner
from sys.dba_extents
where file_id = 32
and 10107 between block_id and (block_id + (blocks - 1));
SEGMENT_NAME SEGMENT_TYPE OWNER
FRED_TABLE TABLE MARY
4. Check whether the identified hot block is the object's segment header block.
select header_file, header_block, freelist_groups, freelists
from dba_segments
where segment_name = 'FRED_TABLE'
and owner = 'MARY';
HEADER_FILE HEADER_BLOCK FREELIST_GROUPS FREELISTS
32 4769 4 4
Since the block in question (10107) does not fall between the segment header block (4769) and the segment header block (4769) + freelist groups (4), it is a data block, not a header block.
Example
none
Reference Documents
none -
Buffer cache vs data blocks that can be cache
Hi Guys,
Does it mean that if we have 2 GB of buffer cache allocated to Oracle, we can only store up to 2 GB of data in memory?
thanks

dbaing wrote:
Hi Guys,
Does it mean that if we have 2 GB of buffer cache allocated to Oracle, we can only store up to 2 GB of data in memory?

Yes, this means that at any point in time you can have 2 GB of cached blocks in memory.
Aman.... -
Pinning blocks to the Buffer Cache
Does anyone remember how you can pin data buffers in the buffer cache so they don't get cycled out? The shared pool has dbms_shared_pool.KEEP for library caches. I thought there was something like this for the buffer cache as well, but I can't find it.
thx,
RP.

Use ALTER TABLE <table_name> CACHE;
and that will cache the table in the buffer pool. Or, in the future (or if you recreate the table as CREATE TABLE ... AS SELECT), the CACHE argument can be added to the CREATE TABLE statement. -
Hi All,
My DB Version: 10.2.0
OS: Windows Server 2003
I run the following script to get hit ratios:
SELECT cur.inst_id, 'Buffer Cache Hit Ratio ' "Ratio", to_char(ROUND((1-(phy.value / (cur.value + con.value)))*100,2)) "Value"
FROM gv$sysstat cur, gv$sysstat con, gv$sysstat phy
WHERE cur.name = 'db block gets'
AND con.name = 'consistent gets'
AND phy.name = 'physical reads'
and phy.inst_id=1
and cur.inst_id=1
and con.inst_id=1
union all
SELECT cur.inst_id,'Buffer Cache Hit Ratio ' "Ratio", to_char(ROUND((1-(phy.value / (cur.value + con.value)))*100,2)) "Buffer Cache Hit Ratio"
FROM gv$sysstat cur, gv$sysstat con, gv$sysstat phy
WHERE cur.name = 'db block gets'
AND con.name = 'consistent gets'
AND phy.name = 'physical reads'
and phy.inst_id=2
and cur.inst_id=2
and con.inst_id=2
union
SELECT inst_id, 'Library Cache Hit Ratio ' "Ratio", to_char(Round(sum(pins) / (sum(pins)+sum(reloads)) * 100,2)) "Library Cache Hit Ratio"
FROM gv$librarycache group by inst_id
union
SELECT inst_id,'Dictionary Cache Hit Ratio ' "Ratio", to_char(ROUND ((1 - (SUM (getmisses) / SUM (gets))) * 100, 2)) "Percentage"
FROM gv$rowcache group by inst_id
union
Select inst_id, 'Get Hit Ratio ' "Ratio",to_char(round((sum(GETHITRATIO))*100,2)) "Get Hit"--, round((sum(PINHITRATIO))*100,2)"Pin Hit"
FROM GV$librarycache
where namespace in ('SQL AREA')
group by inst_id
union
Select inst_id, 'Pin Hit Ratio ' "Ratio", to_char(round((sum(PINHITRATIO))*100,2))"Pin Hit"
FROM GV$librarycache
where namespace in ('SQL AREA')
group by inst_id
union
select a.inst_id,'Soft-Parse Ratio ' "Ratio", to_char(round(100 * ((a.value - b.value) / a.value ),2)) "Soft-Parse Ratio"
from (select inst_id,value from gv$sysstat where name like 'parse count (total)') a,
(select inst_id, value from gv$sysstat where name like 'parse count (hard)') b
where a.inst_id = b.inst_id
union
select a.inst_id,'Execute Parse Ratio ' "Ratio", to_char(round(100 - ((a.value / b.value)* 100),2)) "Execute Parse Ratio"
from (Select inst_id, value from gv$sysstat where name like 'parse count (total)') a,
(select inst_id, value from gv$sysstat where name like 'execute count') b
where a.inst_id = b.inst_id
union
select a.inst_id,'Parse CPU to Elapsed Ratio ' "Ratio", to_char(round((a.value / b.value)* 100,2)) "Parse CPU to Elapsed Ratio"
from (Select inst_id, value from gv$sysstat where name like 'parse time cpu') a,
(select inst_id, value from gv$sysstat where name like 'parse time elapsed') b
where a.inst_id = b.inst_id
union
Select a.inst_id,'Chained Row Ratio ' "Ratio", to_char(round((a.val/b.val)*100,2)) "Chained Row Ratio"
from (SELECT inst_id, SUM(value) val FROM gV$SYSSTAT WHERE name = 'table fetch continued row' group by inst_id) a,
(SELECT inst_id, SUM(value) val FROM gV$SYSSTAT WHERE name IN ('table scan rows gotten', 'table fetch by rowid') group by inst_id) b
where a.inst_id = b.inst_id
union
Select inst_id,'Latch Hit Ratio ' "Ratio", to_char(round(((sum(gets) - sum(misses))/sum(gets))*100,2)) "Latch Hit Ratio"
from gv$latch
group by inst_id
/* Available from 10g
union
select inst_id, metric_name, to_char(value)
from gv$sysmetric
where metric_name in ( 'Database Wait Time Ratio', 'Database CPU Time Ratio')
and intsize_csec = (select max(intsize_csec) from gv$sysmetric)
order by inst_id
*/
/
What I am getting after this is:
INST_ID Ratio Value
1 Buffer Cache Hit Ratio .83
1 Chained Row Ratio 0
1 Dictionary Cache Hit Ratio 77.5
1 Execute Parse Ratio 45.32
1 Get Hit Ratio 75.88
1 Latch Hit Ratio 100
1 Library Cache Hit Ratio 99.52
1 Parse CPU to Elapsed Ratio 24.35
1 Pin Hit Ratio 95.24
1 Soft-Parse Ratio 89.73
I have a doubt about the buffer cache hit ratio; can anyone please help me to understand this?

Buffer Cache Hit Ratio .83

Quite a weird value. It seems your system is doing almost all physical reads, which seems unrealistic.
I had a 10.2.0.1 database where I saw this kind of result for the cache hit ratio, and after patching it to 10.2.0.4 it started showing results correctly.
It could be some Oracle 10g bug which caused this odd display of the hit ratio information in the data dictionary. You could try patching your database to the latest 10g PSU, or contact Oracle Support for a one-off patch for this problem.
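As an alternative to hand-rolled ratios, 10g computes these figures itself in v$sysmetric, as the commented-out section of the script above hints; a sketch:

```sql
-- Let Oracle compute the ratios over the most recent long interval.
select metric_name, round(value, 2) as value
from   v$sysmetric
where  metric_name in ('Buffer Cache Hit Ratio',
                       'Library Cache Hit Ratio',
                       'Database CPU Time Ratio')
and    intsize_csec = (select max(intsize_csec) from v$sysmetric);
```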
Salman -
Is there any way that I could signal the data buffer cache to write all data to the data files if the amount of dirty blocks reaches, say, 50 MB?
I am processing BLOBs, one blob at a time, some of which have sizes exceeding 100 MB, and the difficult thing is that I cannot write to disk until the whole blob is finished, as it is one transaction.
Well, if anyone is going to suggest open, process, close, commit... well, I tried that, but it also gives the error "no free buffers in buffer pool", and this happens with a file of twice the buffer size: a 100 MB file when db_cache_size is 50 MB.
Any ideas?

Hello,
I am using Oracle 9.0.1.3.1.
I am getting the error ORA-00379: no free buffers available in buffer pool DEFAULT for block size 8K.
My Init.ora file is
# Copyright (c) 1991, 2001 by Oracle Corporation
# Cache and I/O
db_block_size=8192
db_cache_size=104857600
# Cursors and Library Cache
open_cursors=300
# Diagnostics and Statistics
background_dump_dest=C:\oracle\admin\iasdb\bdump
core_dump_dest=C:\oracle\admin\iasdb\cdump
timed_statistics=TRUE
user_dump_dest=C:\oracle\admin\iasdb\udump
# Distributed, Replication and Snapshot
db_domain="removed"
remote_login_passwordfile=EXCLUSIVE
# File Configuration
control_files=("C:\oracle\oradata\iasdb\CONTROL01.CTL", "C:\oracle\oradata\iasdb\CONTROL02.CTL", "C:\oracle\oradata\iasdb\CONTROL03.CTL")
# Job Queues
job_queue_processes=4
# MTS
dispatchers="(PROTOCOL=TCP)(PRE=oracle.aurora.server.GiopServer)", "(PROTOCOL=TCP)(PRE=oracle.aurora.server.SGiopServer)"
# Miscellaneous
aq_tm_processes=1
compatible=9.0.0
db_name=iasdb
# Network Registration
instance_name=iasdb
# Pools
java_pool_size=41943040
shared_pool_size=33554432
# Processes and Sessions
processes=150
# Redo Log and Recovery
fast_start_mttr_target=300
# Sort, Hash Joins, Bitmap Indexes
pga_aggregate_target=33554432
sort_area_size=524288
# System Managed Undo and Rollback Segments
undo_management=AUTO
undo_tablespace=UNDOTBS -
Data being fetched bigger than DB Buffer Cache
DB Version: 10.2.0.4
OS: Solaris 5.10
We have a DB with 1gb set for DB_CACHE_SIZE . Automatic Shared Memory Management is Disabled (SGA_TARGET = 0).
If a query is fired against a table which is going to retrieve 2 GB of data, will that session hang? How will Oracle handle this?

Tom wrote:
If the retrieved blocks get automatically removed from the buffer cache after they are fetched, as per the LRU algorithm, then Oracle should handle this without any issues. Right?

Yes. No issues, in that the "size of a fetch" (e.g. selecting 2 GB worth of rows) does not need to fit completely in the db buffer cache (only 1 GB in size).
As Sybrand mentioned - everything in that case will be flushed as newer data blocks are read... and those will be flushed again shortly afterwards as even newer data blocks are read.
The cache hit ratio will thus be low.
But this will not cause Oracle errors or problems - simply that performance degrades as the data volume being processed exceeds the capacity of the cache.
It is like running a very large program that requires more RAM than what is available on a PC. The "extra RAM" comes from the swap file on disk. App program will be slow as its memory pages (some on disk) needs to be swapped into and out of memory as needed. It will work faster if the PC has sufficient RAM. However, the o/s is designed to deal with this exact situation where more RAM is needed than what physically available.
Similar situation with processing larger data chunks than what the buffer cache has capacity for. -
Unexpected CR copies in buffer cache
Hello,
While trying to understand the mechanisms of the Oracle buffer cache I ran a small experiment and observed an unexpected outcome. I believe that my expectation was wrong and I would therefore appreciate, if someone could explain me what I misunderstood.
From what I understood, a consistent read (CR) copy of a buffer in the cache is created, when the old content of a buffer is to be read, e.g. in order to ignore the changes made by a yet uncommitted transaction when querying a table. I also thought, that CR copies in the buffer cache may be reused by subsequent queries that need a rolled back image of the corresponding block.
Now I ran the following experiment on an otherwise idle 10.2 DB.
1. I create a table BC_TEST (in a non-ASSM tablespace)
-> V$BH shows one buffer A with status XCUR for this table - V$BH.CLASS# is 4, which indicates a segment header according to various sources on the internet.
2. Session 1 inserts a row in the table (and doesn't commit)
-> Now V$BH shows 8 buffers belonging to table BC_TEST. I believe these are the blocks from an extent being allocated to the table (I would have expected only one data block to be loaded into the cache in addition to the header that was already there from step 1). There is still the buffer A with CLASS# = 4 from step 1, one buffer B with status XCUR and CLASS# = 1, which indicates a data block according to various sources on the internet, and 6 additional buffers with status FREE and CLASS# = 14 (this value is decoded differently in various internet sources).
3. Session 2 issues a "select * from bc_test"
-> V$BH shows 2 additional buffers with status CR and the identical FILE#/BLOCK# as buffer B from step 2. I understand that one consistent read copy needs to be done in order to revert the uncommitted changes from step 2 - I don't however understand why *2* such copies are created.
Note: With a small variation of the experiment, if I run "select * from bc_test" in Session 2 between step 1 and 2, then I will subsequently only get 1 CR copy in step 3 (as I would expect).
4. Session 2 issues "select * from bc_test" again
-> V$BH shows yet another additional buffer with status CR and the identical FILE#/BLOCK# as buffer B from step 2 (i.e. 3 such buffers in total). Here I don't understand, why the query can't reuse the CR copy already created in step 3 (which already shows buffer B without the changes from the uncommitted transaction in step 2).
5. Session 2 repeatedly issues "select * from bc_test" again
-> The number of buffers with status CR and the identical FILE#/BLOCK# as buffer B from step 2 increases by one with each additional query, up to a total of 5. After that the number of those buffers remains constant across further queries. However, various statistics for session 2 ('consistent gets', 'CR blocks created', 'consistent changes', 'data blocks consistent reads - undo records applied', 'no work - consistent read gets') suggest that session 2 continues to generate consistent read copies with every "select * from bc_test" (are the buffers in the buffer cache maybe just reused from that point on?).
To summarize I have the following question:
(I) Why does the insert of a single row (in step 2) load 8 blocks into the buffer cache - and what does the CLASS# = 14 indicate?
(II) Why does the first select on the table (step 3) create 2 CR copies of the (single used) data block of the table (rather than one as I would expect)?
(III)) Why do further queries create CR copies of that single data block (rather than reusing the CR copy created by the first select statement)?
(IV) What limits the number of created CR copies to 5 (is there some parameter controlling this value, is it depending on some cache sizing or is it simply hardcoded)?
(V) What exactly triggers the creation of a CR copy of a buffer in the buffer cache?
Thanks a lot for any reply
kind regards
Martin
P.S. Please find below the protocol of my experiment
Control Session
SQL> drop table bc_test;
Table dropped.
SQL> create table bc_test (col number(9)) tablespace local01;
Table created.
SQL> SELECT bh.file#, bh.block#, bh.class#, bh.status, bh.dirty, bh.temp, bh.ping, bh.stale, bh.direct, bh.new
2 FROM V$BH bh
3 ,dba_objects o
4 WHERE bh.OBJD = o.data_object_id
5 and o.object_name = 'BC_TEST'
6 order by bh.block#;
FILE# BLOCK# CLASS# STATUS D T P S D N
5 209 4 xcur Y N N N N N
Session 1
SQL> insert into bc_test values (1);
1 row created.
Control Session
SQL> /
FILE# BLOCK# CLASS# STATUS D T P S D N
5 209 4 xcur Y N N N N N
5 210 1 xcur Y N N N N N
5 211 14 free N N N N N N
5 212 14 free N N N N N N
5 213 14 free N N N N N N
5 214 14 free N N N N N N
5 215 14 free N N N N N N
5 216 14 free N N N N N N
8 rows selected.
Session 2
SQL> select * from bc_test;
no rows selected
Statistics
28 recursive calls
0 db block gets
13 consistent gets
0 physical reads
172 redo size
272 bytes sent via SQL*Net to client
374 bytes received via SQL*Net from client
1 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
0 rows processed
Control Session
SQL> /
FILE# BLOCK# CLASS# STATUS D T P S D N
5 209 4 xcur N N N N N N
5 210 1 cr N N N N N N
5 210 1 cr N N N N N N
5 210 1 xcur N N N N N N
5 211 14 free N N N N N N
5 212 14 free N N N N N N
5 213 14 free N N N N N N
5 214 14 free N N N N N N
8 rows selected.
Session 2
SQL> /
no rows selected
Statistics
0 recursive calls
0 db block gets
5 consistent gets
0 physical reads
108 redo size
272 bytes sent via SQL*Net to client
374 bytes received via SQL*Net from client
1 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
0 rows processed
SQL>
Control Session
SQL> /
FILE# BLOCK# CLASS# STATUS D T P S D N
5 209 4 xcur N N N N N N
5 210 1 cr N N N N N N
5 210 1 cr N N N N N N
5 210 1 cr N N N N N N
5 210 1 xcur Y N N N N N
5 211 14 free N N N N N N
5 213 14 free N N N N N N
5 214 14 free N N N N N N
8 rows selected.
SQL>
Session 2
SQL> select * from bc_test;
no rows selected
Statistics
0 recursive calls
0 db block gets
5 consistent gets
0 physical reads
108 redo size
272 bytes sent via SQL*Net to client
374 bytes received via SQL*Net from client
1 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
0 rows processed
SQL>
Control Session
SQL> /
FILE# BLOCK# CLASS# STATUS D T P S D N
5 209 4 xcur N N N N N N
5 210 1 cr N N N N N N
5 210 1 cr N N N N N N
5 210 1 cr N N N N N N
5 210 1 cr N N N N N N
5 210 1 xcur Y N N N N N
5 211 14 free N N N N N N
5 213 14 free N N N N N N
8 rows selected.
Session 2
SQL> select * from bc_test;
no rows selected
Statistics
0 recursive calls
0 db block gets
5 consistent gets
0 physical reads
108 redo size
272 bytes sent via SQL*Net to client
374 bytes received via SQL*Net from client
1 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
0 rows processed
Control Session
SQL> /
FILE# BLOCK# CLASS# STATUS D T P S D N
5 209 4 xcur N N N N N N
5 210 1 xcur Y N N N N N
5 210 1 cr N N N N N N
5 210 1 cr N N N N N N
5 210 1 cr N N N N N N
5 210 1 cr N N N N N N
5 210 1 cr N N N N N N
5 211 14 free N N N N N N
5 213 14 free N N N N N N
9 rows selected.
Session 2
SQL> select * from bc_test;
no rows selected
Statistics
0 recursive calls
0 db block gets
5 consistent gets
0 physical reads
108 redo size
272 bytes sent via SQL*Net to client
374 bytes received via SQL*Net from client
1 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
0 rows processed
SQL>
Control Session
SQL> /
FILE# BLOCK# CLASS# STATUS D T P S D N
5 209 4 xcur N N N N N N
5 210 1 cr N N N N N N
5 210 1 cr N N N N N N
5 210 1 cr N N N N N N
5 210 1 cr N N N N N N
5 210 1 cr N N N N N N
5 210 1 xcur Y N N N N N
7 rows selected.
Session 2
SQL> /
no rows selected
Statistics
0 recursive calls
0 db block gets
5 consistent gets
0 physical reads
108 redo size
272 bytes sent via SQL*Net to client
374 bytes received via SQL*Net from client
1 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
0 rows processed
SQL>
Control Session
SQL> /
FILE# BLOCK# CLASS# STATUS D T P S D N
5 209 4 xcur N N N N N N
5 210 1 xcur Y N N N N N
5 210 1 cr N N N N N N
5 210 1 cr N N N N N N
5 210 1 cr N N N N N N
5 210 1 cr N N N N N N
5 210 1 cr N N N N N N
7 rows selected.

hi,
To make your code/query results more readable, please enclose them in code tags.
I also thought, that CR copies in the buffer cache may be reused by subsequent queries that need a rolled back image of the corresponding block.

I don't think there is any reusable CR copy in the cache. Every time a new query reads a consistent image of data, it reads rollback blocks from the buffer cache to generate the consistent image, so there is no CR copy that could be reused. To test this: with every "select * from BC_TEST" from the second session, you will see a new CR buffer in v$bh.
It is quite difficult to comment on these topics with merely an iota of resources available on the internet.
(I) Why does the insert of a single row (in step 2) load 8 blocks into the buffer cache - and what does the CLASS# = 14 indicate?

Difficult to say about class# 14; as you know, there is no official documentation available.
To insert a row in a block, Oracle picks up blocks which are free for data insertion. How, and how many blocks it picks up, is not documented.
(II) Why does the first select on the table (step 3) create 2 CR copies of the (single used) data block of the table (rather than one as I would expect)?

Quite difficult to answer; someone like Tom Kyte could answer this, I think. The first time there are 2 CR copies, but later only one CR copy per select statement.
(III) Why do further queries create CR copies of that single data block (rather than reusing the CR copy created by the first select statement)?

Because at a given point in time, a single block may have many versions available in the cache (one session updates one row, creating a version of the block; another session inserts a row in the same block, creating another version). At every read, Oracle is required to create the latest read-consistent image for the session wanting to access the block.
(IV) What limits the number of created CR copies to 5 (is there some parameter controlling this value, is it depending on some cache sizing, or is it simply hardcoded)?

As far as I know, there is no parameter for this; it is Oracle internal architecture which is undocumented.
(V) What exactly triggers the creation of a CR copy of a buffer in the buffer cache?

As you know, when a session changes a data block (by performing DML on one or more rows), the old image of the block is sent to the rollback blocks in the buffer cache and the data is modified in the actual block in the cache. When another session wants to access data from the same block, it should not see data that has not been committed by the first session, so Oracle needs to build an image of this data block for session 2 in its original shape, with only the committed data.
If session 2 also modifies some rows in the block, there is another version of this block in the cache, and for session 3 yet another read-consistent image needs to be built.
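The version-counting Martin did in his control session can be sketched as a single query; this assumes the test table BC_TEST from the experiment above:

```sql
-- Count current (xcur) vs. consistent-read (cr) copies per block of BC_TEST.
select bh.file#, bh.block#, bh.status, count(*) as copies
from   v$bh bh, dba_objects o
where  bh.objd = o.data_object_id
and    o.object_name = 'BC_TEST'
group  by bh.file#, bh.block#, bh.status
order  by bh.file#, bh.block#, bh.status;
```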
Salman -
10G NEW FEATURE-HOW TO FLUSH THE BUFFER CACHE
Product: ORACLE SERVER
Date written: 2004-05-25
10G NEW FEATURE-HOW TO FLUSH THE BUFFER CACHE
===============================================
PURPOSE
This note describes the Oracle 10g new feature that allows the buffer cache to be flushed manually.
Explanation
Introduced as a new feature in Oracle 10g, all data in the SGA's buffer cache can be cleared with a single command.
The "alter system" privilege is required for this operation.
The command to flush the buffer cache is as follows.
Note: this operation can affect database performance, so use it with care.
SQL> alter system flush buffer_cache;
Example
Query x$bh to check the information present in the buffer cache.
The x$bh view exposes buffer cache header information.
First, create a test table and perform some inserts,
then query the dbarfil column (relative file number of the block) and file# from x$bh.
1) Create the test table
SQL> Create table Test_buffer (a number)
2 tablespace USERS;
Table created.
2) Insert into the test table
SQL> begin
2 for i in 1..1000
3 loop
4 insert into test_buffer values (i);
5 end loop;
6 commit;
7 end;
8 /
PL/SQL procedure successfully completed.
3) Check the object_id
SQL> select OBJECT_id from dba_objects
2 where object_name='TEST_BUFFER';
OBJECT_ID
42817
4) Query x$bh for the DBARFIL (relative file number of the block) entries currently in the buffer cache.
SQL> select ts#,file#,dbarfil,dbablk,class,state,mode_held,obj
2 from x$bh where obj= 42817;
TS# FILE# DBARFIL DBABLK CLASS STATE MODE_HELD OBJ
9 23 23 1297 8 1 0 42817
9 23 23 1298 9 1 0 42817
9 23 23 1299 4 1 0 42817
9 23 23 1300 1 1 0 42817
9 23 23 1301 1 1 0 42817
9 23 23 1302 1 1 0 42817
9 23 23 1303 1 1 0 42817
9 23 23 1304 1 1 0 42817
8 rows selected.
5) Flush the buffer cache as follows and rerun the query above.
SQL > alter system flush buffer_cache ;
SQL> select ts#,file#,dbarfil,dbablk,class,state,mode_held,obj
2 from x$bh where obj= 42817;
6) Verify that the state column in x$bh is now 0.
0 means a free buffer. By confirming that state is 0 after the flush,
you can verify that the flush was performed manually via the command.
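As a toy model of what the flush does to buffer states (loosely mirroring the x$bh convention above, where 1 means a current buffer and 0 means free; the dictionary and function here are illustrative, not an Oracle API):

```python
# dbablk -> state, as in the x$bh rows above (1 = current, 0 = free)
cache = {1297: 1, 1298: 1, 1299: 1, 1300: 1}

def flush_buffer_cache(cache):
    # the flush does not discard the buffer headers;
    # it marks every buffer free, so the next read must come from disk
    for blk in cache:
        cache[blk] = 0

flush_buffer_cache(cache)
print(all(state == 0 for state in cache.values()))  # True
```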
Reference Documents
<NOTE. 251326.1>
I am also having the same issue. Can this be addressed, or does BEA provide 'almost' working code for the bargain price of $80k/cpu?
"Prashanth " <[email protected]> wrote:
>
Hi ALL,
I am using the wl:cache tag for caching purposes. My requirement is such that I have to flush the cache based on user activity.
I have tried all the combinations, but could not achieve the desired result.
Can somebody guide me on how we can flush the cache?
TIA, Prashanth Bhat. -
What else is stored in the database buffer cache?
What else is stored in the database buffer cache besides the data blocks read from datafiles?
That is a good idea.
SQL> desc v$BH;
Name Null? Type
FILE# NUMBER
BLOCK# NUMBER
CLASS# NUMBER
STATUS VARCHAR2(10)
XNC NUMBER
FORCED_READS NUMBER
FORCED_WRITES NUMBER
LOCK_ELEMENT_ADDR RAW(4)
LOCK_ELEMENT_NAME NUMBER
LOCK_ELEMENT_CLASS NUMBER
DIRTY VARCHAR2(1)
TEMP VARCHAR2(1)
PING VARCHAR2(1)
STALE VARCHAR2(1)
DIRECT VARCHAR2(1)
NEW CHAR(1)
OBJD NUMBER
TS# NUMBER
TEMP VARCHAR2(1) Y - temporary block
PING VARCHAR2(1) Y - block pinged
STALE VARCHAR2(1) Y - block is stale
DIRECT VARCHAR2(1) Y - direct block
My question is what are temporary block and direct block?
Is it true that some blocks in temp tablespace are stored in the data buffer? -
Hello -
We have 3 x EX2010 SP3 RU5 nodes in a cross-site DAG.
Multi-role servers with 18 GB RAM [increased from 16 GB in an attempt to clear this warning without success].
We run nightly backups on both nodes at the Primary Site.
Node 1 backup covers all mailbox databases [active & passive].
Node 2 backup covers the Public Folders database.
The backups for each database are timed so they do not overlap.
During each backup we get several of these event log warnings:
Log Name: Application
Source: ESE
Date: 23/04/2014 00:47:22
Event ID: 906
Task Category: Performance
Level: Warning
Keywords: Classic
User: N/A
Computer: EX1.xxx.com
Description:
Information Store (5012) A significant portion of the database buffer cache has been written out to the system paging file. This may result in severe performance degradation.
See help link for complete details of possible causes.
Resident cache has fallen by 42523 buffers (or 27%) in the last 903 seconds.
Current Total Percent Resident: 26% (110122 of 421303 buffers)
We've rescheduled the backups, and the warning message occurrences simply move with the backup schedules.
We're not aware of any perceived end-user performance degradation; overnight backups in this time zone coincide with the business day for mailbox users in SEA.
I raised a call with the Microsoft Enterprise Support folks; they had a look at the BPA output and the output from their diagnostics tool. We have enough RAM and no major issues were detected.
They suggested McAfee AV could be the root of our problems, but we have v8.8 with EX2010 exceptions configured.
Backup software is Asigra V12.2 with latest hotfixes.
We're trying to clear up these warnings as they're throwing SCOM alerts and making a mess of availability reporting.
Any suggestions please?
Thanks in advance
Having said all that, a colleague has suggested we just limit the amount of RAM available for the EX2010 DB cache.
Then it won't have to start releasing RAM when the backup runs, and won't throw SCOM alerts
This attribute should do it...
msExchESEParamCacheSizeMax
http://technet.microsoft.com/en-us/library/ee832793.aspx
Give me a shout if this is a bad idea
Thanks -
Will I increase my Buffer Cache ?
Oracle 9i
Shared Pool 2112 Mb
Buffer Cache 1728 Mb
Large Pool 32Mb
Java Pool 32 Mb
Total 3907.358 Mb
SGA Max Size 17011.494 Mb
PGA
Aggregate PGA Target 2450 Mb
Current PGA Allocated 3286059 KB
Maximum PGA Allocated (since Startup) 3462747 KB
Cache Hit Percentage 98.71%
The Buffer Cache Size advisory is telling me that if I increase the Buffer Cache to 1930 Mb I will get an 8.83% decrease in physical reads (and it gets better the more I increase it).
The question is: can I safely increase it (in light of my current memory allocations)? Is it worth it?
Two things stand out:
Your sga max size is 17Gb, but you are only using about 4Gb of it - so you seem to have 13Gb that you are not making best use of.
Your pga aggregate target is 2.4Gb, but you've already hit a peak of 3.4Gb - which means your target may be too small - so it's lucky you had all that spare memory which hadn't gone into the SGA. Despite the availability of memory, some of your queries may have been rationed at run-time to try to minimise the excess demand.
Is this OLTP or DSS - where do you really need the memory ? (Have a look in v$process to see the pga usage on a process by process level).
How many processes are allowed to connect to the database ? (You ought to allow about 2Mb - 4Mb per process to the pga_aggregate_target for OLTP) and at least 1Mb per process for the buffer cache.
Where do you see time lost ? time on disk I/O, or time on CPU ? What type of disk I/O, what's the nature of the CPU usage. These figures alone do not tell us what you should do with the spare memory you seem to have.
A simple response to your original question would be that you probably need to increase the pga_aggregate_target, and you might as well increase the buffer size since you seem to have the memory for both.
On the downside, changing the pga_aggregate_target could cause some execution plans to change; and changing the buffer size does change the limit size on a 'short' table, which can cause an increase in I/O as an unlucky side effect if you're a little heavy on "long" tablescans.
Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
http://www.jlcomp.demon.co.uk
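The per-process rule of thumb in Jonathan's reply (2-4 MB of PGA per OLTP process, at least 1 MB per process for the buffer cache) can be turned into a quick back-of-envelope calculation. The process count of 500 below is a made-up example, not a figure from this thread:

```python
def suggested_memory_mb(processes, pga_per_proc_mb=3, cache_per_proc_mb=1):
    # 2-4 MB of PGA per OLTP process (3 taken as a midpoint) and
    # at least 1 MB per process for the buffer cache
    return {
        "pga_aggregate_target_mb": processes * pga_per_proc_mb,
        "min_buffer_cache_mb": processes * cache_per_proc_mb,
    }

print(suggested_memory_mb(500))
# {'pga_aggregate_target_mb': 1500, 'min_buffer_cache_mb': 500}
```

With 500 processes this would suggest roughly a 1.5 Gb pga_aggregate_target, which is in line with the observation above that the configured 2.4 Gb target was already being exceeded at peak.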