Global cache average current block request
Hi all experts,
I am getting this alert frequently through 12c EM. Please tell me how I can suppress it so that I don't get it in the future.
Name=<sid>_<instancename>
Type=Database Instance
Host=<servername>.<domainname>
Metric=Global Cache Average Current Block Request Time (centi-seconds)
Timestamp xxxxxxxxxxxxxxxxxxxxxxxx
Severity=Critical
Message=Metrics "Global Cache Average Current Get Time 62" is at 1.46154
Rule Name=XXProd
Rule Owner=SYS
This metric needs to be disabled in EM at the target level for each instance. You can disable the "Global Cache Statistics" metrics in two different ways:
Navigate to the instance, then click the Oracle Database drop-down -> Monitoring -> All Metrics -> (expand) Global Cache Statistics
-> Global Cache Average CR Block Request Time (centi-seconds)
-> Global Cache Average Current Block Request Time (centi-seconds)
-> Global Cache Blocks Corrupt
-> Global Cache Blocks Lost
a. At the top level (Global Cache Statistics), click the [Modify] button, then select the (*) Disable radio button to disable the entire group.
b. To disable only an individual metric (e.g. Global Cache Average CR Block Request Time (centi-seconds)), click the Modify Thresholds button and delete the values. Note that empty thresholds will disable alerts for that metric.
The same can be done by navigating to the cluster database instance -> Oracle Database drop-down -> Monitoring -> Metric and Collection Settings -> Global Cache Statistics.
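Before suppressing the alert, it can be worth checking what the underlying block-transfer times actually are. A rough sketch against gv$sysstat (statistic names as in 10g/11g; the EM metric reports the same ratio in centi-seconds):

```sql
-- Approximate average receive time per current block, in ms.
-- 'gc current block receive time' is recorded in centi-seconds,
-- so (time / blocks) * 10 converts the per-block average to ms.
SELECT t.inst_id,
       r.value AS blocks_received,
       t.value AS receive_time_cs,
       ROUND(t.value / NULLIF(r.value, 0) * 10, 2) AS avg_receive_ms
FROM   gv$sysstat t,
       gv$sysstat r
WHERE  t.name    = 'gc current block receive time'
AND    r.name    = 'gc current blocks received'
AND    r.inst_id = t.inst_id;
```

If the averages stay in the low single-digit milliseconds, raising or removing the thresholds is usually reasonable; sustained high values point at an interconnect or workload problem instead.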
Similar Messages
-
EM Alert: Critical: Metrics "Global Cache Average Current Get Time" is at X
I frequently get this notification email alert from the enterprise manager. What does this mean and should I ignore it or work on it?
Name=<sid>_<instancename>
Type=Database Instance
Host=<servername>.<domainname>
Metric=Global Cache Average Current Block Request Time (centi-seconds)
Timestamp=Jan 27, 2009 9:27:20 PM GST
Severity=Critical
Message=Metrics "Global Cache Average Current Get Time" is at 1.46154
Rule Name=XXProd
Rule Owner=SYS
Those metrics show the effect of accessing blocks in the global cache and maintaining cache coherency.
Basically, the response time for Cache Fusion transfers is determined by the messaging time and
processing time imposed by the physical interconnect components, the IPC protocol, and the
GCS protocol.
Therefore, inter-instance performance issues can be caused by:
1) Under-configured network settings at the OS level
2) Dropped packets, retransmits, or cyclic redundancy check (CRC) errors
3) A large number of processes in the run queue waiting for CPU
4) A high value for DB_FILE_MULTIBLOCK_READ_COUNT
Note also that poor SQL or bad optimization paths can cause additional block gets via the interconnect, as can not using ASSM for locally managed tablespaces, or not using CACHE NOORDER for sequences. -
Global Cache Average critical message at "idle nodes" in RAC
We receive lots of critical messages such as Metrics "Global Cache Average Current Get Time" is at 1.125
and Metrics "Global Cache Average CR Get Time" is at 1.07843 from the EM metrics.
We have a 4-node 11.1 RAC on Red Hat. However, the two nodes that display these critical messages do not serve the application; they are secondary nodes in failover mode.
I also do not see any user sessions on these nodes in gv$session/v$session. I cannot understand why these "idle" nodes got critical messages.
Thanks for explaining
Jin
Check this: http://download.oracle.com/docs/cd/B19306_01/em.102/b25986/oracle_database.htm#sthref902
-
Getting *Metrics "Global Cache Average CR Get Time" is at* Alerts
Hi ,
I am getting *Metrics "Global Cache Average CR Get Time" is at* kind of alerts from Grid Control on my RAC prod DB.
Can anybody tell me what I should do about these critical alerts? Will they affect the performance of the applications running in my DB?
This metric refers to the average time necessary to get blocks from the global cache.
By itself this does not mean anything.
Are your applications running on this DB experiencing performance issues?
If not, you might consider:
- increasing the thresholds for this specific metric (use Monitoring Templates for this)
- disabling this metric (nullify the thresholds)
regards
Rob
http://oemgc.wordpress.com -
Cluster multi-block requests were consuming significant database time
Hi,
DB : 10.2.0.4 RAC ASM
OS : AIX 5.2 64-bit
We are facing severe performance issues and CPU idle time has dropped to 20%. Based on the AWR report, the top 5 events show that the problem is on the cluster side. I have placed the first node's AWR report here for your suggestions.
WORKLOAD REPOSITORY report for
DB Name DB Id Instance Inst Num Release RAC Host
PROD 1251728398 PROD1 1 10.2.0.4.0 YES msprod1
Snap Id Snap Time Sessions Curs/Sess
Begin Snap: 26177 26-Jul-11 14:29:02 142 37.7
End Snap: 26178 26-Jul-11 15:29:11 159 49.1
Elapsed: 60.15 (mins)
DB Time: 915.85 (mins)
Cache Sizes
~~~~~~~~~~~ Begin End
Buffer Cache: 23,504M 23,504M Std Block Size: 8K
Shared Pool Size: 27,584M 27,584M Log Buffer: 14,248K
Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
Redo size: 28,126.82 2,675.18
Logical reads: 526,807.26 50,105.44
Block changes: 3,080.07 292.95
Physical reads: 962.90 91.58
Physical writes: 157.66 15.00
User calls: 1,392.75 132.47
Parses: 246.05 23.40
Hard parses: 11.03 1.05
Sorts: 42.07 4.00
Logons: 0.68 0.07
Executes: 930.74 88.52
Transactions: 10.51
% Blocks changed per Read: 0.58 Recursive Call %: 32.31
Rollback per transaction %: 9.68 Rows per Sort: 4276.06
Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 99.87 Redo NoWait %: 100.00
Buffer Hit %: 99.84 In-memory Sort %: 99.99
Library Hit %: 98.25 Soft Parse %: 95.52
Execute to Parse %: 73.56 Latch Hit %: 99.51
Parse CPU to Parse Elapsd %: 9.22 % Non-Parse CPU: 99.94
Shared Pool Statistics Begin End
Memory Usage %: 68.11 71.55
% SQL with executions>1: 94.54 92.31
% Memory for SQL w/exec>1: 98.79 98.74
Top 5 Timed Events Avg %Total
~~~~~~~~~~~~~~~~~~ wait Call
Event Waits Time (s) (ms) Time Wait Class
CPU time 18,798 34.2
gc cr multi block request 46,184,663 18,075 0 32.9 Cluster
gc buffer busy 2,468,308 6,897 3 12.6 Cluster
gc current block 2-way 1,826,433 4,422 2 8.0 Cluster
db file sequential read 142,632 366 3 0.7 User I/O
RAC Statistics DB/Inst: PROD/PROD1 Snaps: 26177-26178
Begin End
Number of Instances: 2 2
Global Cache Load Profile
~~~~~~~~~~~~~~~~~~~~~~~~~ Per Second Per Transaction
Global Cache blocks received: 14,112.50 1,342.26
Global Cache blocks served: 619.72 58.94
GCS/GES messages received: 2,099.38 199.68
GCS/GES messages sent: 23,341.11 2,220.01
DBWR Fusion writes: 3.43 0.33
Estd Interconnect traffic (KB) 122,826.57
Global Cache Efficiency Percentages (Target local+remote 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer access - local cache %: 97.16
Buffer access - remote cache %: 2.68
Buffer access - disk %: 0.16
Global Cache and Enqueue Services - Workload Characteristics
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Avg global enqueue get time (ms): 0.6
Avg global cache cr block receive time (ms): 2.8
Avg global cache current block receive time (ms): 3.0
Avg global cache cr block build time (ms): 0.0
Avg global cache cr block send time (ms): 0.0
Global cache log flushes for cr blocks served %: 11.3
Avg global cache cr block flush time (ms): 1.7
Avg global cache current block pin time (ms): 0.0
Avg global cache current block send time (ms): 0.0
Global cache log flushes for current blocks served %: 0.0
Avg global cache current block flush time (ms): 4.1
Global Cache and Enqueue Services - Messaging Statistics
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Avg message sent queue time (ms): 0.1
Avg message sent queue time on ksxp (ms): 2.4
Avg message received queue time (ms): 0.0
Avg GCS message process time (ms): 0.0
Avg GES message process time (ms): 0.0
% of direct sent messages: 6.27
% of indirect sent messages: 93.48
% of flow controlled messages: 0.25
Time Model Statistics DB/Inst: PROD/PROD1 Snaps: 26177-26178
-> Total time in database user-calls (DB Time): 54951s
-> Statistics including the word "background" measure background process
time, and so do not contribute to the DB time statistic
-> Ordered by % or DB time desc, Statistic name
Statistic Name Time (s) % of DB Time
sql execute elapsed time 54,618.2 99.4
DB CPU 18,798.1 34.2
parse time elapsed 494.3 .9
hard parse elapsed time 397.4 .7
PL/SQL execution elapsed time 38.6 .1
hard parse (sharing criteria) elapsed time 27.3 .0
sequence load elapsed time 5.0 .0
failed parse elapsed time 3.3 .0
PL/SQL compilation elapsed time 2.1 .0
inbound PL/SQL rpc elapsed time 1.2 .0
repeated bind elapsed time 0.8 .0
connection management call elapsed time 0.6 .0
hard parse (bind mismatch) elapsed time 0.3 .0
DB time 54,951.0 N/A
background elapsed time 1,027.9 N/A
background cpu time 518.1 N/A
Wait Class DB/Inst: PROD/PROD1 Snaps: 26177-26178
-> s - second
-> cs - centisecond - 100th of a second
-> ms - millisecond - 1000th of a second
-> us - microsecond - 1000000th of a second
-> ordered by wait time desc, waits desc
Avg
%Time Total Wait wait Waits
Wait Class Waits -outs Time (s) (ms) /txn
Cluster 50,666,311 .0 30,236 1 1,335.4
User I/O 419,542 .0 811 2 11.1
Network 4,824,383 .0 242 0 127.2
Other 797,753 88.5 208 0 21.0
Concurrency 212,350 .1 121 1 5.6
Commit 16,215 .0 53 3 0.4
System I/O 60,831 .0 29 0 1.6
Application 6,069 .0 6 1 0.2
Configuration 763 97.0 0 0 0.0
The second node's top 5 events are as below:
Top 5 Timed Events
Avg %Total
~~~~~~~~~~~~~~~~~~ wait Call
Event Waits Time (s) (ms) Time Wait Class
CPU time 25,959 42.2
db file sequential read 2,288,168 5,587 2 9.1 User I/O
gc current block 2-way 822,985 2,232 3 3.6 Cluster
read by other session 345,338 1,166 3 1.9 User I/O
gc cr multi block request 991,270 831 1 1.4 Cluster
My RAM is 95 GB on each node, the SGA is 51 GB, and the PGA is 14 GB.
Any inputs from your side would be greatly helpful, please.
Thanks,
Sunand
Hi Forstmann,
Thanks for your update.
I have also collected an ADDM report; an extract of the node 1 report is below.
FINDING 1: 40% impact (22193 seconds)
Cluster multi-block requests were consuming significant database time.
RECOMMENDATION 1: SQL Tuning, 6% benefit (3313 seconds)
ACTION: Run SQL Tuning Advisor on the SQL statement with SQL_ID
"59qd3x0jg40h1". Look for an alternative plan that does not use
object scans.
SYMPTOMS THAT LED TO THE FINDING:
SYMPTOM: Inter-instance messaging was consuming significant database
time on this instance. (55% impact [30269 seconds])
SYMPTOM: Wait class "Cluster" was consuming significant database
time. (55% impact [30271 seconds])
FINDING 3: 13% impact (7008 seconds)
Read and write contention on database blocks was consuming significant
database time.
NO RECOMMENDATIONS AVAILABLE
SYMPTOMS THAT LED TO THE FINDING:
SYMPTOM: Inter-instance messaging was consuming significant database
time on this instance. (55% impact [30269 seconds])
SYMPTOM: Wait class "Cluster" was consuming significant database
time. (55% impact [30271 seconds])
Any help from your side, please?
Thanks,
Sunand -
Hi ,
I have a two-node Oracle RAC. The version is 11.2.0.2.
When I do an online inquiry transaction under higher load, I am observing 'gc current block busy' in AWR as one of the top events. GV$SESSION_WAIT
doesn't show anything, as below.
Kindly let me know how to tune it.
AWR top events.
DB CPU 32 64.34
log file sync 5,972 9 2 18.36 Commit
gc current block busy 2,448 7 23 13.32 Cluster
gc current block 2-way 4,665 3 1 5.41 Cluster
gc current grant busy 2,446 1 0 2.20 Cluster
SQL> SELECT
INST_ID,
EVENT,
P1 FILE_NUMBER,
P2 BLOCK_NUMBER,
WAIT_TIME
FROM
GV$SESSION_WAIT
WHERE
EVENT IN ('buffer busy global cr', 'global cache busy',
'buffer busy global cache');
no rows selected
Thanks
user10698496 wrote:
I have a two-node Oracle RAC. The version is 11.2.0.2.
When I do an online inquiry transaction under higher load, I am observing 'gc current block busy' in AWR as one of the top events. GV$SESSION_WAIT
doesn't show anything, as below.
Kindly let me know how to tune it.
(Note: Your query against v$session_wait doesn't seem to match the events listed in the AWR.)
(Note 2: Your snapshot seems to cover a very short time, which doesn't really give an idea of how serious the problem may be in the bigger picture.)
There are many GC events that you won't see in v$session_wait because the session doesn't know the event consuming the time until after the wait has completed - so when you check v$session_wait you will often see "gc cr request", or "gc current request" (I think I may have the names wrong, but I don't have an instance in front of me right now) - these are known as "placeholders" to Oracle and may change to things like "gc current block 2-way" or "gc current block 3-way".
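As a sketch of what that looks like in practice, the in-flight waits can be watched with the 10g-style event names instead of the pre-10g names used in the original query (event names vary by version; check v$event_name on your release):

```sql
-- Current GC waits across all instances; while a transfer is in flight
-- you typically see the placeholder events ('gc cr request',
-- 'gc current request') rather than the fixed-up 2-way/3-way names.
SELECT inst_id, sid, event, p1 file#, p2 block#, wait_time
FROM   gv$session_wait
WHERE  event LIKE 'gc%'
ORDER  BY inst_id, sid;
```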
You've looked at the ADDM (so you're licensed for the diagnostic), and seen the top object. Had you not done so I would have suggested looking at the "Segments by .." sections of the AWR report, checking the two sections on CR and CUR traffic; then check the "SQL ordered by ...." sections for the sections on global cache time. If your offenders are insert statements that match primary key indexes then you probably need to stop the instances from inserting into the same index leaf block at the same time.
A key question to ask is whether the primary key is a meaningless value generated by an Oracle sequence; if so have you set the sequence cache size to a large enough value (in the order of thousands or tens of thousands). This is the first step in resolving sequence-based RAC issues. (There are other strategies - but we need more information to determine best action.)
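A minimal sketch of that first step (the owner and sequence names here are hypothetical):

```sql
-- Assumption: ORDER_SEQ feeds the hot primary key. A large cache gives
-- each instance its own range of numbers, so the instances insert into
-- different index leaf blocks instead of fighting over the same one.
ALTER SEQUENCE app_owner.order_seq CACHE 10000;

-- Verify the current cache and ordering settings
SELECT sequence_name, cache_size, order_flag
FROM   dba_sequences
WHERE  sequence_owner = 'APP_OWNER';
```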
Regards
Jonathan Lewis
P.S. The best place on the Internet for information about details of how RAC works and the interpretation of RAC events is probably Riyaj Shamsudeen's blog at: http://orainternals.wordpress.com/
Edited by: Jonathan Lewis on May 14, 2012 8:24 AM -
Strange 'gc cr multi block request' during dictionary queries
Hi,
here goes the case :
I've got a 4-node 10.2.0.3 RAC, a DDL-intensive (CTAS, DROP/TRUNCATE) database.
It's about 20 TB of data, with millions of partitions and objects.
When I'm doing queries against dictionary tables like select * from
dba_segments where owner = 'A' and segment_name = 'B'
I'm observing 'gc cr multi block request'; as far as I know it's a kind of
scattered read, but using the interconnect to gather the data.
What's bothering me is the profile of those 'gc cr multi block request' waits; here
are some lines from a 10046 trace:
WAIT #6: nam='gc cr multi block request' ela= 861 file#=1 block#=34788 class#=1 obj#=8 tim=18733123446958
WAIT #6: nam='gc cr multi block request' ela= 69 file#=1 block#=34788 class#=1 obj#=8 tim=18733123447083
WAIT #6: nam='gc cr multi block request' ela= 60 file#=1 block#=34788 class#=1 obj#=8 tim=18733123447220
WAIT #6: nam='gc cr multi block request' ela= 99 file#=1 block#=34788 class#=1 obj#=8 tim=18733123447347
WAIT #6: nam='gc cr multi block request' ela= 111 file#=1 block#=34788 class#=1 obj#=8 tim=18733123447482
WAIT #6: nam='gc cr multi block request' ela= 193 file#=1 block#=34788 class#=1 obj#=8 tim=18733123447704
WAIT #6: nam='gc cr multi block request' ela= 84 file#=1 block#=34788 class#=1 obj#=8 tim=18733123447820
WAIT #6: nam='gc cr multi block request' ela= 81 file#=1 block#=34788 class#=1 obj#=8 tim=18733123447931
WAIT #6: nam='gc cr multi block request' ela= 108 file#=1 block#=34788 class#=1 obj#=8 tim=18733123448065
WAIT #6: nam='gc cr multi block request' ela= 111 file#=1 block#=34788 class#=1 obj#=8 tim=18733123448199
WAIT #6: nam='gc cr multi block request' ela= 105 file#=1 block#=34788 class#=1 obj#=8 tim=18733123448328
WAIT #6: nam='gc cr multi block request' ela= 100 file#=1 block#=34788 class#=1 obj#=8 tim=18733123448458
WAIT #6: nam='gc cr multi block request' ela= 151 file#=1 block#=34788 class#=1 obj#=8 tim=18733123448639
WAIT #6: nam='gc cr multi block request' ela= 84 file#=1 block#=34788 class#=1 obj#=8 tim=18733123448750
WAIT #6: nam='gc cr multi block request' ela= 90 file#=1 block#=34788 class#=1 obj#=8 tim=18733123448867
WAIT #6: nam='gc cr multi block request' ela= 98 file#=1 block#=34788 class#=1 obj#=8 tim=18733123448994
and that pattern repeats with different block#
The question is why Oracle is requesting the same block over and over (16
times); I think the 16 comes from MBRC, which is 16, and I've got a 16 KB block size.
Looks like a bug :) doesn't it?
Generally all queries against dictionary tables are slow (minutes); I'm not
observing any lost-block issues, so it's probably a bad-plans issue.
Any comments ?
Regards
GregG
It looks like this:
select *
from
dba_segments where owner = 'INS' and segment_name = 'T'
call count cpu elapsed disk query current rows
Parse 1 0.10 0.10 0 6 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 11.05 79.71 74418 96116 0 1
total 4 11.16 79.82 74418 96122 0 1
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 842
Rows Row Source Operation
1 VIEW SYS_DBA_SEGS (cr=96116 pr=74418 pw=0 time=3397685 us)
1 UNION-ALL (cr=96116 pr=74418 pw=0 time=3397673 us)
1 FILTER (cr=91749 pr=74315 pw=0 time=3397634 us)
11 HASH JOIN RIGHT OUTER (cr=91749 pr=74315 pw=0 time=3397620 us)
1317 TABLE ACCESS FULL USER$ (cr=34 pr=0 pw=0 time=6738 us)
11 HASH JOIN (cr=91715 pr=74315 pw=0 time=3392292 us)
262 TABLE ACCESS FULL FILE$ (cr=3 pr=0 pw=0 time=1087 us)
11 HASH JOIN (cr=91712 pr=74315 pw=0 time=3389434 us)
75 TABLE ACCESS FULL TS$ (cr=82 pr=0 pw=0 time=2693 us)
11 NESTED LOOPS (cr=91630 pr=74315 pw=0 time=3386237 us)
12 HASH JOIN (cr=91592 pr=74313 pw=0 time=23765169 us)
20 TABLE ACCESS BY INDEX ROWID OBJ$ (cr=424 pr=21 pw=0 time=261735 us)
20 INDEX SKIP SCAN I_OBJ2 (cr=406 pr=21 pw=0 time=261140 us)(object id 37)
966949 VIEW SYS_OBJECTS (cr=91168 pr=74292 pw=0 time=52219753 us)
966949 UNION-ALL (cr=91168 pr=74292 pw=0 time=50285852 us)
113050 TABLE ACCESS FULL TAB$ (cr=20294 pr=16672 pw=0 time=18883720 us)
627116 TABLE ACCESS FULL TABPART$ (cr=7188 pr=5576 pw=0 time=5652638 us)
25 TABLE ACCESS FULL CLU$ (cr=20293 pr=16520 pw=0 time=1019 us)
13835 TABLE ACCESS FULL IND$ (cr=20322 pr=16550 pw=0 time=11400216 us)
195932 TABLE ACCESS FULL INDPART$ (cr=2523 pr=2421 pw=0 time=1967714 us)
1213 TABLE ACCESS FULL LOB$ (cr=20350 pr=16553 pw=0 time=22044772 us)
8363 TABLE ACCESS FULL TABSUBPART$ (cr=122 pr=0 pw=0 time=9995 us)
7160 TABLE ACCESS FULL INDSUBPART$ (cr=72 pr=0 pw=0 time=9572 us)
255 TABLE ACCESS FULL LOBFRAG$ (cr=4 pr=0 pw=0 time=2060 us)
11 TABLE ACCESS CLUSTER SEG$ (cr=38 pr=2 pw=0 time=32200 us)
11 INDEX UNIQUE SCAN I_FILE#_BLOCK# (cr=26 pr=0 pw=0 time=11331 us)(object id 9)
0 NESTED LOOPS (cr=1 pr=1 pw=0 time=8349 us)
0 NESTED LOOPS (cr=1 pr=1 pw=0 time=8344 us)
0 FILTER (cr=1 pr=1 pw=0 time=8340 us)
0 NESTED LOOPS OUTER (cr=1 pr=1 pw=0 time=8334 us)
0 NESTED LOOPS (cr=1 pr=1 pw=0 time=8327 us)
0 TABLE ACCESS BY INDEX ROWID UNDO$ (cr=1 pr=1 pw=0 time=8322 us)
0 INDEX RANGE SCAN I_UNDO2 (cr=1 pr=1 pw=0 time=8314 us)(object id 35)
0 TABLE ACCESS CLUSTER SEG$ (cr=0 pr=0 pw=0 time=0 us)
0 INDEX UNIQUE SCAN I_FILE#_BLOCK# (cr=0 pr=0 pw=0 time=0 us)(object id 9)
0 TABLE ACCESS CLUSTER USER$ (cr=0 pr=0 pw=0 time=0 us)
0 INDEX UNIQUE SCAN I_USER# (cr=0 pr=0 pw=0 time=0 us)(object id 11)
0 TABLE ACCESS BY INDEX ROWID FILE$ (cr=0 pr=0 pw=0 time=0 us)
0 INDEX UNIQUE SCAN I_FILE2 (cr=0 pr=0 pw=0 time=0 us)(object id 42)
0 TABLE ACCESS CLUSTER TS$ (cr=0 pr=0 pw=0 time=0 us)
0 INDEX UNIQUE SCAN I_TS# (cr=0 pr=0 pw=0 time=0 us)(object id 7)
0 FILTER (cr=4366 pr=102 pw=0 time=4366839 us)
0 HASH JOIN RIGHT OUTER (cr=4366 pr=102 pw=0 time=4366833 us)
1317 TABLE ACCESS FULL USER$ (cr=60 pr=0 pw=0 time=1397 us)
0 HASH JOIN (cr=4306 pr=102 pw=0 time=4361061 us)
75 TABLE ACCESS FULL TS$ (cr=82 pr=0 pw=0 time=745 us)
0 NESTED LOOPS (cr=4224 pr=102 pw=0 time=4357971 us)
262 TABLE ACCESS FULL FILE$ (cr=3 pr=0 pw=0 time=1738 us)
0 TABLE ACCESS CLUSTER SEG$ (cr=4221 pr=102 pw=0 time=4355035 us)
0 INDEX RANGE SCAN I_FILE#_BLOCK# (cr=4221 pr=102 pw=0 time=4351804 us)(object id 9)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
library cache lock 18 0.00 0.00
SQL*Net message to client 2 0.00 0.00
gc current block 2-way 2299 0.00 1.16
gc current block 3-way 1597 0.00 1.28
gc cr block busy 4 0.01 0.03
gc cr grant 2-way 313 0.00 0.11
db file sequential read 627 0.06 2.94
gc cr block 2-way 37 0.00 0.02
gc cr block 3-way 27 0.00 0.02
gc current grant 2-way 1 0.00 0.00
gc cr multi block request 23073 0.01 7.07
db file scattered read 1756 0.10 13.70
db file parallel read 4708 0.19 44.89
latch: KCL gc element parent latch 33 0.00 0.00
SQL*Net message from client 2 1492.36 1492.36
latch free 4 0.00 0.00
latch: gcs resource hash 2 0.00 0.00
latch: cache buffers chains 5 0.00 0.00
gc buffer busy 1 0.00 0.00
latch: cache buffers lru chain 2 0.00 0.00
gc cr disk read 2 0.00 0.00
and in raw trace:
WAIT #4: nam='gc cr multi block request' ela= 18 file#=1 block#=3396 class#=1 obj#=4 tim=1318904416456439
WAIT #4: nam='gc cr multi block request' ela= 320 file#=1 block#=3396 class#=1 obj#=4 tim=1318904416456780
WAIT #4: nam='gc cr multi block request' ela= 146 file#=1 block#=3396 class#=1 obj#=4 tim=1318904416456949
WAIT #4: nam='gc cr multi block request' ela= 14 file#=1 block#=3396 class#=1 obj#=4 tim=1318904416456979
WAIT #4: nam='gc cr multi block request' ela= 6 file#=1 block#=3396 class#=1 obj#=4 tim=1318904416457000
WAIT #4: nam='gc cr multi block request' ela= 107 file#=1 block#=3396 class#=1 obj#=4 tim=1318904416457119
WAIT #4: nam='gc cr multi block request' ela= 171 file#=1 block#=3396 class#=1 obj#=4 tim=1318904416457594
WAIT #4: nam='gc cr multi block request' ela= 1119 file#=1 block#=3387 class#=1 obj#=4 tim=1318904416458819
WAIT #4: nam='db file parallel read' ela= 9411 files=1 blocks=10 requests=10 obj#=4 tim=1318904416468429
WAIT #4: nam='gc cr multi block request' ela= 592 file#=1 block#=4244 class#=1 obj#=4 tim=1318904416470099
WAIT #4: nam='gc cr multi block request' ela= 7 file#=1 block#=4244 class#=1 obj#=4 tim=1318904416470136
WAIT #4: nam='gc cr multi block request' ela= 203 file#=1 block#=4244 class#=1 obj#=4 tim=1318904416470353
WAIT #4: nam='gc cr multi block request' ela= 69 file#=1 block#=4244 class#=1 obj#=4 tim=1318904416470487
WAIT #4: nam='gc cr multi block request' ela= 962 file#=1 block#=4244 class#=1 obj#=4 tim=1318904416471499
WAIT #4: nam='gc cr multi block request' ela= 49 file#=1 block#=4244 class#=1 obj#=4 tim=1318904416471650
Any ideas? :)
Regards
GregG -
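A general note on the dictionary-query case above (not from the thread itself): when queries against dba_segments settle on full scans of TAB$, TABPART$, IND$ and friends, one common avenue is to refresh the statistics the optimizer holds for dictionary and fixed objects, since on a database with millions of partitions these can be badly out of date. A sketch, to be tested outside production first:

```sql
-- Gather statistics on the real dictionary tables (TAB$, TABPART$, ...)
EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;

-- Gather statistics on the X$ fixed tables, ideally under
-- a representative workload
EXEC DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;
```

After regathering, compare the new execution plan of the dba_segments query to confirm the object scans have been replaced by indexed access. -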
Hi,
i've a question about global cache (11g)
Are blocks shared to all nodes when one instance requests them (a select), or only when there are transactions?
I think in both cases, correct? In a doc I read about cache synchronization:
"In an Oracle RAC environment, when users execute queries from different instances, instead of the DBWR process having to retrieve data from the I/O subsystem every single time, data is transferred (traditionally) over the interconnect from one instance to another. (In Oracle Database 11g Release 2, the new "bypass reader" algorithm used in the cache fusion technology bypasses data transfer when large numbers of rows are being read and instead uses the local I/O subsystem from the requesting instance to retrieve data.) This provides considerable performance benefits, because latency of retrieving data from an I/O subsystem is much higher compared to transferring data over the network. Basically, network latency is much lower compared to I/O latency."
So blocks are shared across all instances in the case of queries that retrieve small numbers of rows. -
Long waits on 'gc cr multi block request' elapsed time anomaly - tkprof inc
Hi,
in my 4-node 10.2.0.3 RAC, when I run this query (that's part of the user_segments view):
select o.name,
o.subname,
so.object_type, s.type#,
ts.ts#, ts.name, ts.blocksize,
s.file#, s.block#,
s.blocks * ts.blocksize, s.blocks, s.extents,
s.iniexts * ts.blocksize,
decode(bitand(ts.flags, 3), 1, to_number(NULL),
s.extsize * ts.blocksize),
s.minexts, s.maxexts,
decode(bitand(ts.flags, 3), 1, to_number(NULL),
s.extpct),
decode(bitand(ts.flags, 32), 32, to_number(NULL),
decode(s.lists, 0, 1, s.lists)),
decode(bitand(ts.flags, 32), 32, to_number(NULL),
decode(s.groups, 0, 1, s.groups)),
s.cachehint, NVL(s.spare1, 0), s.hwmincr
from sys.obj$ o, sys.ts$ ts, sys.sys_objects so, sys.seg$ s
where s.file# = so.header_file
and s.block# = so.header_block
and s.ts# = so.ts_number
and s.ts# = ts.ts#
and o.obj# = so.object_id
and o.owner# = userenv('SCHEMAID')
and s.type# = so.segment_type_id
and o.type# = so.object_type_id
call count cpu elapsed disk query current rows
Parse 1 0.03 0.05 2 5 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 5 11.59 529.83 9580 115021 0 1957
total 7 11.62 529.88 9582 115026 0 1957
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: SYS
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
library cache lock 13 0.00 0.00
SQL*Net message to client 5 0.00 0.00
gc cr multi block request 63408 1.22 507.30
gc cr grant 2-way 75 0.00 0.02
db file sequential read 983 0.04 6.13
gc current block 3-way 97 0.00 0.07
gc current block 2-way 111 0.00 0.06
db file scattered read 522 0.17 4.28
db file parallel read 249 0.07 2.19
cr request retry 81 0.00 0.02
latch: cache buffers lru chain 5 0.00 0.00
latch: gcs resource hash 5 0.00 0.00
latch: KCL gc element parent latch 22 0.00 0.00
latch free 3 0.00 0.00
gc cr block 3-way 10 0.00 0.00
gc cr block 2-way 101 0.00 0.05
gc cr disk read 64 0.00 0.02
SQL*Net message from client 4 0.04 0.14
SQL*Net more data to client 50 0.00 0.00
********************************************************************************
In the raw trace file I can see:
WAIT #1: nam='gc cr multi block request' ela= 81 file#=1 block#=223460 class#=1 obj#=4 tim=1273765932709496
WAIT #1: nam='gc cr multi block request' ela= 1221367 file#=1 block#=223456 class#=1 obj#=4 tim=1273765933930894
WAIT #1: nam='gc cr multi block request' ela= 1221273 file#=1 block#=223456 class#=1 obj#=4 tim=1273765935152287
WAIT #1: nam='gc cr multi block request' ela= 1223855 file#=1 block#=223456 class#=1 obj#=4 tim=1273765936376232
WAIT #1: nam='gc cr multi block request' ela= 1180460 file#=1 block#=223456 class#=1 obj#=4 tim=1273765937556773
WAIT #1: nam='gc cr multi block request' ela= 406 file#=1 block#=223476 class#=1 obj#=4 tim=1273765937566092
WAIT #1: nam='gc cr multi block request' ela= 8039 file#=1 block#=263972 class#=1 obj#=4 tim=1273765937742971
WAIT #1: nam='gc cr multi block request' ela= 1221983 file#=1 block#=263972 class#=1 obj#=4 tim=1273765938965148
WAIT #1: nam='gc cr multi block request' ela= 1221642 file#=1 block#=263972 class#=1 obj#=4 tim=1273765940186879
WAIT #1: nam='gc cr multi block request' ela= 1221656 file#=1 block#=263972 class#=1 obj#=4 tim=1273765941408618
WAIT #1: nam='gc cr multi block request' ela= 1221654 file#=1 block#=263972 class#=1 obj#=4 tim=1273765942630357
WAIT #1: nam='gc cr multi block request' ela= 1221657 file#=1 block#=263972 class#=1 obj#=4 tim=1273765943852096
WAIT #1: nam='gc cr multi block request' ela= 578770 file#=1 block#=263972 class#=1 obj#=4 tim=1273765944430948
WAIT #1: nam='gc cr multi block request' ela= 101 file#=1 block#=351188 class#=1 obj#=4 tim=1273765944898180
WAIT #1: nam='gc cr multi block request' ela= 2287 file#=1 block#=351188 class#=1 obj#=4 tim=1273765944900500
WAIT #1: nam='gc cr multi block request' ela= 1221984 file#=1 block#=351188 class#=1 obj#=4 tim=1273765946122713
WAIT #1: nam='gc cr multi block request' ela= 1221641 file#=1 block#=351188 class#=1 obj#=4 tim=1273765947344453
WAIT #1: nam='gc cr multi block request' ela= 1221670 file#=1 block#=351188 class#=1 obj#=4 tim=1273765948566201
WAIT #1: nam='gc cr multi block request' ela= 1221663 file#=1 block#=351188 class#=1 obj#=4 tim=1273765949787950
WAIT #1: nam='gc cr multi block request' ela= 1221670 file#=1 block#=351188 class#=1 obj#=4 tim=1273765951009697
WAIT #1: nam='gc cr multi block request' ela= 182726 file#=1 block#=351188 class#=1 obj#=4 tim=1273765951192501
WAIT #1: nam='gc cr multi block request' ela= 558 file#=1 block#=351188 class#=1 obj#=4 tim=1273765951194110
same for obj#5
WAIT #1: nam='gc cr multi block request' ela= 3103 file#=1 block#=35284 class#=1 obj#=5 tim=1273766101728096
WAIT #1: nam='gc cr multi block request' ela= 1221789 file#=1 block#=35281 class#=1 obj#=5 tim=1273766102950203
WAIT #1: nam='gc cr multi block request' ela= 1221668 file#=1 block#=35281 class#=1 obj#=5 tim=1273766104171949
WAIT #1: nam='gc cr multi block request' ela= 1221639 file#=1 block#=35281 class#=1 obj#=5 tim=1273766105393674
WAIT #1: nam='gc cr multi block request' ela= 1221612 file#=1 block#=35281 class#=1 obj#=5 tim=1273766106615404
WAIT #1: nam='gc cr multi block request' ela= 838378 file#=1 block#=35281 class#=1 obj#=5 tim=1273766107453869
WAIT #1: nam='gc cr multi block request' ela= 695 file#=1 block#=35281 class#=1 obj#=5 tim=1273766107455175
Any ideas?
Regards
GregG
Hi Greg;
You may have hit a bug. What is your OS? If Linux, then please see:
Bug 6268172: INCONSISTENT QUERY PERFORMANCE IN 4 NODE RAC
https://support.oracle.com/CSP/main/article?cmd=show&type=BUG&id=6268172&productFamily=Oracle
Also see:
Bug 9838876: WAITING FOR 'GC CR MULTI BLOCK REQUEST'
https://support.oracle.com/CSP/main/article?cmd=show&type=BUG&id=9838876&productFamily=Oracle
Regards
Helios -
Gc current block 2-way + gc remaster waits
Hi,
My DB version is 10.2.0.5.
I see these waits spike during a 5-minute period in a 2-node RAC cluster.
What may be the reason?
Is there a workaround?
Thanks,
Here is the AWR report:
WORKLOAD REPOSITORY report for
DB Name DB Id Instance Inst Num Release RAC Host
XXXXXXX 985096646 YYYYYY 1 10.2.0.5.0 YES ZZZZZZ
Snap Id Snap Time Sessions Curs/Sess
Begin Snap: 60910 02-Aug-11 10:00:15 511 12.5
End Snap: 60912 02-Aug-11 11:00:11 496 9.3
Elapsed: 59.93 (mins)
DB Time: 658.65 (mins)
Cache Sizes
~~~~~~~~~~~ Begin End
Buffer Cache: 75,136M 75,136M Std Block Size: 8K
Shared Pool Size: 6,720M 6,720M Log Buffer: 14,176K
Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
Redo size: 199,638.95 4,481.69
Logical reads: 111,069.18 2,493.39
Block changes: 1,139.51 25.58
Physical reads: 4,477.47 100.51
Physical writes: 200.85 4.51
User calls: 1,681.64 37.75
Parses: 295.75 6.64
Hard parses: 18.27 0.41
Sorts: 229.12 5.14
Logons: 3.70 0.08
Executes: 723.08 16.23
Transactions: 44.55
% Blocks changed per Read: 1.03 Recursive Call %: 58.29
Rollback per transaction %: 0.12 Rows per Sort: 662.19
Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 99.99 Redo NoWait %: 100.00
Buffer Hit %: 98.72 In-memory Sort %: 100.00
Library Hit %: 94.36 Soft Parse %: 93.82
Execute to Parse %: 59.10 Latch Hit %: 99.80
Parse CPU to Parse Elapsd %: 24.11 % Non-Parse CPU: 91.61
Shared Pool Statistics Begin End
Memory Usage %: 57.92 57.86
% SQL with executions>1: 74.08 77.89
% Memory for SQL w/exec>1: 68.45 68.87
Top 5 Timed Events Avg %Total
~~~~~~~~~~~~~~~~~~ wait Call
Event Waits Time (s) (ms) Time Wait Class
db file sequential read 2,180,370 8,450 4 21.4 User I/O
CPU time 5,562 14.1
gc current block 2-way 1,368,857 4,049 3 10.2 Cluster
gc remaster 1,606 2,776 1728 7.0 Cluster
gc cr block 2-way 312,527 1,615 5 4.1 Cluster
------------------------------------------------------------- -
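On the 'gc remaster' waits above: the 1.7 s average wait suggests dynamic remastering activity, which can be inspected directly. A sketch (assuming SYSDBA access; the view exists in 10g and later):

```sql
-- Which objects have changed master instance, and how often.
-- Frequent remastering of hot objects typically accompanies
-- 'gc remaster' waits; join DATA_OBJECT_ID to DBA_OBJECTS to name them.
SELECT data_object_id, current_master, previous_master, remaster_cnt
FROM   v$gcspfmaster_info
ORDER  BY remaster_cnt DESC;
```

If a handful of objects dominate the remaster count, the usual follow-up is to look at which workload touches them from both instances. -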
Hi
We are designing an application which does not have a web front end and need some
suggestions on how to do global caching
The environment is a cluster of 2 Weblogic 7.02 SP2 app servers.
We need to cache some application-wide data. We thought of using JNDI as a global
cache but later realised that this would not work in a clustered environment.
The other option is a global application cache, which would have to be maintained
on both servers and on any other instances added to the cluster. This cache is not
static; rather, it is updated at runtime as requests come in off a JMS queue.
Therefore it needs to be a truly global cache, i.e. we cannot maintain the same
read-only cache on all servers in the cluster. Another option would be to use
a stateless bean with JDBC / an entity bean to talk to a database, or a DAO talking
to LDAP.
Can anyone provide suggestions?
Thanks in advance.
If you need to manage state efficiently in a WebLogic cluster, I suggest you
evaluate our Coherence product:
http://www.tangosol.com/coherence.jsp
You can share data and manage the concurrent access to it from all nodes in
the cluster, and it provides data replication and load balancing without any
single points of failure. Sites like http://www.theserverside.com use it to
cluster effectively.
Peace,
Cameron Purdy
Tangosol, Inc.
http://www.tangosol.com/coherence.jsp
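For readers comparing the options in this thread: the "truly global cache" the poster describes is usually built as a per-node map plus an update broadcast, which is roughly the job a product like Coherence takes over (with real failover and concurrency control). A minimal single-JVM sketch of the pattern - the `UpdateBus` below is a stand-in for the JMS topic in the poster's setup, not a WebLogic or Coherence API:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;

// Stand-in for a JMS topic: every published update reaches every subscriber.
class UpdateBus {
    private final List<GlobalCacheNode> subscribers = new CopyOnWriteArrayList<>();
    void subscribe(GlobalCacheNode node) { subscribers.add(node); }
    void publish(String key, String value) {
        for (GlobalCacheNode n : subscribers) {
            n.apply(key, value);
        }
    }
}

// One cluster member's view of the "global" cache.
class GlobalCacheNode {
    private final Map<String, String> local = new ConcurrentHashMap<>();
    private final UpdateBus bus;

    GlobalCacheNode(UpdateBus bus) {
        this.bus = bus;
        bus.subscribe(this);
    }

    // Writers publish; the update is applied on every node, including this one.
    void put(String key, String value) { bus.publish(key, value); }
    void apply(String key, String value) { local.put(key, value); }
    String get(String key) { return local.get(key); }
}
```

With this shape, a message arriving off the queue on either server updates both copies; the hard parts a clustered cache product handles for you are ordering, failover, and membership changes.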
-
Global-Cache-Manager for Multi-Environment Applications
Hi,
Within our server implementation we provide a "multi-project" environment. Each project is fully isolated from the rest of the server, e.g. in terms of file-system usage, backup and other resources. As one might expect, the way to go is using a single VM with multiple BDB environments.
Obviously each JE environment uses its own cache. In our environment, with a dynamic number of active projects, this causes a problem: the optimal cache configuration within a given memory frame depends on the JE environments in use, BUT there is no way to define a global JE cache for ALL JE environments.
Our "plan of attack" is to implement a Global-Cache-Manager to dynamically configure the cache sizes of all active BDB environments depending on the given global cache size.
Like Federico proposed, the starting point for determining the optimal cache setting at load time will be a modification to the DbCacheSize utility so that the return value can be picked up easily, rather than printed to stdout. After that, EnvironmentMutableConfig.setCacheSize will be used to set the cache size. If there is enough cache RAM available we could even set a larger cache, but I do not know if that really makes sense.
If cache memory is getting tight, loading another BDB environment means decreasing the cache sizes of the already loaded environments. This is also done via EnvironmentMutableConfig.setCacheSize. Are there any timing conditions one should obey before assuming the memory is really available? To determine whether any BDB environments are not using their cache, one could query each cache's utilization using EnvironmentStats.getCacheDataBytes() and getCacheTotalBytes().
Are there any comments on this plan? Is there perhaps a better solution or even an implementation?
Do you think a global cache manager is something worth back-donating?
Related Postings: Multiple envs in one process?
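(An illustrative sketch of the resize bookkeeping this plan describes: given a global budget and a per-environment floor, such as a DbCacheSize internal-node estimate, compute what each active environment may use. The `split` helper and its numbers are hypothetical, not JE API; each resulting size would then be applied via EnvironmentMutableConfig.setCacheSize and Environment.setMutableConfig.)

```java
// Divide a global cache budget across the active environments: each gets its
// floor (e.g. the DbCacheSize internal-node minimum) and the slack is shared
// evenly, with the remainder spread over the first environments.
class CacheBudget {
    static long[] split(long globalBudget, long[] floors) {
        long[] sizes = new long[floors.length];
        long committed = 0;
        for (int i = 0; i < floors.length; i++) {
            sizes[i] = floors[i];
            committed += floors[i];
        }
        if (committed > globalBudget) {
            throw new IllegalStateException("Global budget below sum of floors");
        }
        long slack = globalBudget - committed;
        long share = slack / floors.length;
        long rem = slack % floors.length;
        for (int i = 0; i < sizes.length; i++) {
            sizes[i] += share + (i < rem ? 1 : 0); // spread the remainder
        }
        return sizes;
    }
}
```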
Stefan Walgenbach

Here is the updated DbCacheSize.java to allow calling it with an API.
Charles Lamb
/*-
 * See the file LICENSE for redistribution information.
 *
 * Copyright (c) 2005-2006 Oracle Corporation.  All rights reserved.
 *
 * $Id: DbCacheSize.java,v 1.8 2006/09/12 19:16:59 cwl Exp $
 */
package com.sleepycat.je.util;
import java.io.File;
import java.io.PrintStream;
import java.math.BigInteger;
import java.text.NumberFormat;
import java.util.Random;
import com.sleepycat.je.Database;
import com.sleepycat.je.DatabaseConfig;
import com.sleepycat.je.DatabaseEntry;
import com.sleepycat.je.DatabaseException;
import com.sleepycat.je.Environment;
import com.sleepycat.je.EnvironmentConfig;
import com.sleepycat.je.EnvironmentStats;
import com.sleepycat.je.OperationStatus;
import com.sleepycat.je.dbi.MemoryBudget;
import com.sleepycat.je.utilint.CmdUtil;
/**
 * Estimating JE in-memory sizes as a function of key and data size is not
 * straightforward for two reasons.  There is some fixed overhead for each
 * btree internal node, so tree fanout and degree of node sparseness impacts
 * memory consumption.  In addition, JE compresses some of the internal nodes
 * where possible, but compression depends on on-disk layouts.
 *
 * DbCacheSize is an aid for estimating cache sizes.  To get an estimate of
 * the in-memory footprint for a given database, specify the number of records
 * and record characteristics and DbCacheSize will return a minimum and
 * maximum estimate of the cache size required for holding the database in
 * memory.  If the user specifies the record's data size, the utility will
 * return both values for holding just the internal nodes of the btree, and
 * for holding the entire database in cache.
 *
 * Note that "cache size" is a percentage more than "btree size", to cover
 * general environment resources like log buffers.  Each invocation of the
 * utility returns an estimate for a single database in an environment.  For
 * an environment with multiple databases, run the utility for each database,
 * add up the btree sizes, and then add 10 percent.
 *
 * Note that the utility does not yet cover duplicate records and the API is
 * subject to change release to release.
 *
 * The only required parameters are the number of records and key size.
 * Data size, non-tree cache overhead, btree fanout, and other parameters
 * can also be provided.  For example:
 *
 * $ java DbCacheSize -records 554719 -key 16 -data 100
 * Inputs: records=554719 keySize=16 dataSize=100 nodeMax=128 density=80%
 *         overhead=10%
 *
 *     Cache Size      Btree Size  Description
 *     30,547,440      27,492,696  Minimum, internal nodes only
 *     41,460,720      37,314,648  Maximum, internal nodes only
 *    114,371,644     102,934,480  Minimum, internal nodes and leaf nodes
 *    125,284,924     112,756,432  Maximum, internal nodes and leaf nodes
 *
 * Btree levels: 3
 *
 * This says that the minimum cache size to hold only the internal nodes of
 * the btree in cache is approximately 30MB.  The maximum size to hold the
 * entire database in cache, both internal nodes and data records, is 125MB.
 */
public class DbCacheSize {
private static final NumberFormat INT_FORMAT =
NumberFormat.getIntegerInstance();
    private static final String HEADER =
        "    Cache Size      Btree Size  Description\n" +
        "--------------  --------------  -----------";
    //   12345678901234  12345678901234

    private static final int COLUMN_WIDTH = 14;
    private static final int COLUMN_SEPARATOR = 2;
private long records;
private int keySize;
private int dataSize;
private int nodeMax;
private int density;
private long overhead;
private long minInBtreeSize;
private long maxInBtreeSize;
private long minInCacheSize;
private long maxInCacheSize;
private long maxInBtreeSizeWithData;
private long maxInCacheSizeWithData;
private long minInBtreeSizeWithData;
private long minInCacheSizeWithData;
private int nLevels = 1;
    public DbCacheSize(long records,
                       int keySize,
                       int dataSize,
                       int nodeMax,
                       int density,
                       long overhead) {
        this.records = records;
        this.keySize = keySize;
        this.dataSize = dataSize;
        this.nodeMax = nodeMax;
        this.density = density;
        this.overhead = overhead;
    }

    public long getMinCacheSizeInternalNodesOnly() {
        return minInCacheSize;
    }

    public long getMaxCacheSizeInternalNodesOnly() {
        return maxInCacheSize;
    }

    public long getMinBtreeSizeInternalNodesOnly() {
        return minInBtreeSize;
    }

    public long getMaxBtreeSizeInternalNodesOnly() {
        return maxInBtreeSize;
    }

    public long getMinCacheSizeWithData() {
        return minInCacheSizeWithData;
    }

    public long getMaxCacheSizeWithData() {
        return maxInCacheSizeWithData;
    }

    public long getMinBtreeSizeWithData() {
        return minInBtreeSizeWithData;
    }

    public long getMaxBtreeSizeWithData() {
        return maxInBtreeSizeWithData;
    }

    public int getNLevels() {
        return nLevels;
    }
    public static void main(String[] args) {
        try {
            long records = 0;
            int keySize = 0;
            int dataSize = 0;
            int nodeMax = 128;
            int density = 80;
            long overhead = 0;
            File measureDir = null;
            boolean measureRandom = false;

            for (int i = 0; i < args.length; i += 1) {
                String name = args[i];
                String val = null;
                if (i < args.length - 1 && !args[i + 1].startsWith("-")) {
                    i += 1;
                    val = args[i];
                }
                if (name.equals("-records")) {
                    if (val == null) {
                        usage("No value after -records");
                    }
                    try {
                        records = Long.parseLong(val);
                    } catch (NumberFormatException e) {
                        usage(val + " is not a number");
                    }
                    if (records <= 0) {
                        usage(val + " is not a positive integer");
                    }
                } else if (name.equals("-key")) {
                    if (val == null) {
                        usage("No value after -key");
                    }
                    try {
                        keySize = Integer.parseInt(val);
                    } catch (NumberFormatException e) {
                        usage(val + " is not a number");
                    }
                    if (keySize <= 0) {
                        usage(val + " is not a positive integer");
                    }
                } else if (name.equals("-data")) {
                    if (val == null) {
                        usage("No value after -data");
                    }
                    try {
                        dataSize = Integer.parseInt(val);
                    } catch (NumberFormatException e) {
                        usage(val + " is not a number");
                    }
                    if (dataSize <= 0) {
                        usage(val + " is not a positive integer");
                    }
                } else if (name.equals("-nodemax")) {
                    if (val == null) {
                        usage("No value after -nodemax");
                    }
                    try {
                        nodeMax = Integer.parseInt(val);
                    } catch (NumberFormatException e) {
                        usage(val + " is not a number");
                    }
                    if (nodeMax <= 0) {
                        usage(val + " is not a positive integer");
                    }
                } else if (name.equals("-density")) {
                    if (val == null) {
                        usage("No value after -density");
                    }
                    try {
                        density = Integer.parseInt(val);
                    } catch (NumberFormatException e) {
                        usage(val + " is not a number");
                    }
                    if (density < 1 || density > 100) {
                        usage(val + " is not between 1 and 100");
                    }
                } else if (name.equals("-overhead")) {
                    if (val == null) {
                        usage("No value after -overhead");
                    }
                    try {
                        overhead = Long.parseLong(val);
                    } catch (NumberFormatException e) {
                        usage(val + " is not a number");
                    }
                    if (overhead < 0) {
                        usage(val + " is not a non-negative integer");
                    }
                } else if (name.equals("-measure")) {
                    if (val == null) {
                        usage("No value after -measure");
                    }
                    measureDir = new File(val);
                } else if (name.equals("-measurerandom")) {
                    measureRandom = true;
                } else {
                    usage("Unknown arg: " + name);
                }
            }

            if (records == 0) {
                usage("-records not specified");
            }
            if (keySize == 0) {
                usage("-key not specified");
            }

            DbCacheSize dbCacheSize = new DbCacheSize
                (records, keySize, dataSize, nodeMax, density, overhead);
            dbCacheSize.caclulateCacheSizes();
            dbCacheSize.printCacheSizes(System.out);

            if (measureDir != null) {
                measure(System.out, measureDir, records, keySize, dataSize,
                        nodeMax, measureRandom);
            }
        } catch (Throwable e) {
            e.printStackTrace(System.out);
        }
    }
    private static void usage(String msg) {
        if (msg != null) {
            System.out.println(msg);
        }

        System.out.println
            ("usage:" +
             "\njava " + CmdUtil.getJavaCommand(DbCacheSize.class) +
             "\n   -records <count>" +
             "\n      # Total records (key/data pairs); required" +
             "\n   -key <bytes> " +
             "\n      # Average key bytes per record; required" +
             "\n   [-data <bytes>]" +
             "\n      # Average data bytes per record; if omitted no leaf" +
             "\n      # node sizes are included in the output" +
             "\n   [-nodemax <entries>]" +
             "\n      # Number of entries per Btree node; default: 128" +
             "\n   [-density <percentage>]" +
             "\n      # Percentage of node entries occupied; default: 80" +
             "\n   [-overhead <bytes>]" +
             "\n      # Overhead of non-Btree objects (log buffers, locks," +
             "\n      # etc); default: 10% of total cache size" +
             "\n   [-measure <environmentHomeDirectory>]" +
             "\n      # An empty directory used to write a database to find" +
             "\n      # the actual cache size; default: do not measure" +
             "\n   [-measurerandom]" +
             "\n      # With -measure insert randomly generated keys;" +
             "\n      # default: insert sequential keys");

        System.exit(2);
    }
    private void caclulateCacheSizes() {
        int nodeAvg = (nodeMax * density) / 100;
        long nBinEntries = (records * nodeMax) / nodeAvg;
        long nBinNodes = (nBinEntries + nodeMax - 1) / nodeMax;
        long nInNodes = 0;
        long lnSize = 0;

        for (long n = nBinNodes; n > 0; n /= nodeMax) {
            nInNodes += n;
            nLevels += 1;
        }

        minInBtreeSize = nInNodes *
            calcInSize(nodeMax, nodeAvg, keySize, true);
        maxInBtreeSize = nInNodes *
            calcInSize(nodeMax, nodeAvg, keySize, false);
        minInCacheSize = calculateOverhead(minInBtreeSize, overhead);
        maxInCacheSize = calculateOverhead(maxInBtreeSize, overhead);

        if (dataSize > 0) {
            lnSize = records * calcLnSize(dataSize);
            maxInBtreeSizeWithData = maxInBtreeSize + lnSize;
            maxInCacheSizeWithData = calculateOverhead(maxInBtreeSizeWithData,
                                                       overhead);
            minInBtreeSizeWithData = minInBtreeSize + lnSize;
            minInCacheSizeWithData = calculateOverhead(minInBtreeSizeWithData,
                                                       overhead);
        }
    }
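To make the fanout arithmetic concrete, the level-counting loop above can be run standalone with the inputs from the class comment (554,719 records, nodeMax 128, density 80%), which reproduces the "Btree levels: 3" line of the example output. This is an isolated illustration, not part of the posted file:

```java
// The nLevels loop from caclulateCacheSizes(), isolated for illustration.
class BtreeEstimate {
    static int levels(long records, int nodeMax, int density) {
        int nodeAvg = (nodeMax * density) / 100;          // occupied entries per node
        long nBinEntries = (records * nodeMax) / nodeAvg; // entries spread over BINs
        long nBinNodes = (nBinEntries + nodeMax - 1) / nodeMax;
        int nLevels = 1;
        for (long n = nBinNodes; n > 0; n /= nodeMax) {
            nLevels += 1;                                 // one level per fanout step
        }
        return nLevels;
    }
}
```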
    private void printCacheSizes(PrintStream out) {
        out.println("Inputs:" +
                    " records=" + records +
                    " keySize=" + keySize +
                    " dataSize=" + dataSize +
                    " nodeMax=" + nodeMax +
                    " density=" + density + '%' +
                    " overhead=" + ((overhead > 0) ? overhead : 10) + "%");
        out.println();
        out.println(HEADER);
        out.println(line(minInBtreeSize, minInCacheSize,
                         "Minimum, internal nodes only"));
        out.println(line(maxInBtreeSize, maxInCacheSize,
                         "Maximum, internal nodes only"));
        if (dataSize > 0) {
            out.println(line(minInBtreeSizeWithData,
                             minInCacheSizeWithData,
                             "Minimum, internal nodes and leaf nodes"));
            out.println(line(maxInBtreeSizeWithData,
                             maxInCacheSizeWithData,
                             "Maximum, internal nodes and leaf nodes"));
        } else {
            out.println("\nTo get leaf node sizing specify -data");
        }
        out.println("\nBtree levels: " + nLevels);
    }
    private int calcInSize(int nodeMax,
                           int nodeAvg,
                           int keySize,
                           boolean lsnCompression) {

        /* Fixed overhead */
        int size = MemoryBudget.IN_FIXED_OVERHEAD;

        /* Byte state array plus keys and nodes arrays */
        size += MemoryBudget.byteArraySize(nodeMax) +
                (nodeMax * (2 * MemoryBudget.ARRAY_ITEM_OVERHEAD));

        /* LSN array */
        if (lsnCompression) {
            size += MemoryBudget.byteArraySize(nodeMax * 2);
        } else {
            size += MemoryBudget.BYTE_ARRAY_OVERHEAD +
                    (nodeMax * MemoryBudget.LONG_OVERHEAD);
        }

        /* Keys for populated entries plus the identifier key */
        size += (nodeAvg + 1) * MemoryBudget.byteArraySize(keySize);

        return size;
    }
    private int calcLnSize(int dataSize) {
        return MemoryBudget.LN_OVERHEAD +
               MemoryBudget.byteArraySize(dataSize);
    }

    private long calculateOverhead(long btreeSize, long overhead) {
        long cacheSize;
        if (overhead == 0) {
            cacheSize = (100 * btreeSize) / 90;
        } else {
            cacheSize = btreeSize + overhead;
        }
        return cacheSize;
    }
    private String line(long btreeSize,
                        long cacheSize,
                        String comment) {
        StringBuffer buf = new StringBuffer(100);
        column(buf, INT_FORMAT.format(cacheSize));
        column(buf, INT_FORMAT.format(btreeSize));
        column(buf, comment);
        return buf.toString();
    }

    private void column(StringBuffer buf, String str) {
        int start = buf.length();
        while (buf.length() - start + str.length() < COLUMN_WIDTH) {
            buf.append(' ');
        }
        buf.append(str);
        for (int i = 0; i < COLUMN_SEPARATOR; i += 1) {
            buf.append(' ');
        }
    }
    private static void measure(PrintStream out,
                                File dir,
                                long records,
                                int keySize,
                                int dataSize,
                                int nodeMax,
                                boolean randomKeys)
        throws DatabaseException {

        String[] fileNames = dir.list();
        if (fileNames != null && fileNames.length > 0) {
            usage("Directory is not empty: " + dir);
        }

        Environment env = openEnvironment(dir, true);
        Database db = openDatabase(env, nodeMax, true);
        try {
            out.println("\nMeasuring with cache size: " +
                        INT_FORMAT.format(env.getConfig().getCacheSize()));
            insertRecords(out, env, db, records, keySize, dataSize, randomKeys);
            printStats(out, env,
                       "Stats for internal and leaf nodes (after insert)");

            db.close();
            env.close();
            env = openEnvironment(dir, false);
            db = openDatabase(env, nodeMax, false);

            out.println("\nPreloading with cache size: " +
                        INT_FORMAT.format(env.getConfig().getCacheSize()));
            preloadRecords(out, db);
            printStats(out, env,
                       "Stats for internal nodes only (after preload)");
        } finally {
            try {
                db.close();
                env.close();
            } catch (Exception e) {
                out.println("During close: " + e);
            }
        }
    }
    private static Environment openEnvironment(File dir, boolean allowCreate)
        throws DatabaseException {
        EnvironmentConfig envConfig = new EnvironmentConfig();
        envConfig.setAllowCreate(allowCreate);
        envConfig.setCachePercent(90);
        return new Environment(dir, envConfig);
    }

    private static Database openDatabase(Environment env, int nodeMax,
                                         boolean allowCreate)
        throws DatabaseException {
        DatabaseConfig dbConfig = new DatabaseConfig();
        dbConfig.setAllowCreate(allowCreate);
        dbConfig.setNodeMaxEntries(nodeMax);
        return env.openDatabase(null, "foo", dbConfig);
    }
    private static void insertRecords(PrintStream out,
                                      Environment env,
                                      Database db,
                                      long records,
                                      int keySize,
                                      int dataSize,
                                      boolean randomKeys)
        throws DatabaseException {

        DatabaseEntry key = new DatabaseEntry();
        DatabaseEntry data = new DatabaseEntry(new byte[dataSize]);
        BigInteger bigInt = BigInteger.ZERO;
        Random rnd = new Random(123);

        for (int i = 0; i < records; i += 1) {
            if (randomKeys) {
                byte[] a = new byte[keySize];
                rnd.nextBytes(a);
                key.setData(a);
            } else {
                bigInt = bigInt.add(BigInteger.ONE);
                byte[] a = bigInt.toByteArray();
                if (a.length < keySize) {
                    byte[] a2 = new byte[keySize];
                    System.arraycopy(a, 0, a2, a2.length - a.length, a.length);
                    a = a2;
                } else if (a.length > keySize) {
                    out.println("*** Key doesn't fit value=" + bigInt +
                                " byte length=" + a.length);
                    return;
                }
                key.setData(a);
            }
            OperationStatus status = db.putNoOverwrite(null, key, data);
            if (status == OperationStatus.KEYEXIST && randomKeys) {
                i -= 1;
                out.println("Random key already exists -- retrying");
                continue;
            }
            if (status != OperationStatus.SUCCESS) {
                out.println("*** " + status);
                return;
            }
            if (i % 10000 == 0) {
                EnvironmentStats stats = env.getStats(null);
                if (stats.getNNodesScanned() > 0) {
                    out.println("*** Ran out of cache memory at record " + i +
                                " -- try increasing the Java heap size ***");
                    return;
                }
                out.print(".");
                out.flush();
            }
        }
    }
    private static void preloadRecords(final PrintStream out,
                                       final Database db)
        throws DatabaseException {

        Thread thread = new Thread() {
            public void run() {
                while (true) {
                    try {
                        out.print(".");
                        out.flush();
                        Thread.sleep(5 * 1000);
                    } catch (InterruptedException e) {
                        break;
                    }
                }
            }
        };

        thread.start();
        db.preload(0);
        thread.interrupt();

        try {
            thread.join();
        } catch (InterruptedException e) {
            e.printStackTrace(out);
        }
    }
    private static void printStats(PrintStream out,
                                   Environment env,
                                   String msg)
        throws DatabaseException {

        out.println();
        out.println(msg + ':');
        EnvironmentStats stats = env.getStats(null);
        out.println("CacheSize=" +
                    INT_FORMAT.format(stats.getCacheTotalBytes()) +
                    " BtreeSize=" +
                    INT_FORMAT.format(stats.getCacheDataBytes()));
        if (stats.getNNodesScanned() > 0) {
            out.println("*** All records did not fit in the cache ***");
        }
    }
}
-
Top time events showing global cache buffer busy waits
Can anyone guide me to find the root cause for global cache buffer busy waits?

Post the "Segments by Global Cache Buffer Busy" output from an AWR report,
and let us know how many CPUs you have. If you don't want to reveal the names of the objects, then change them (but do it in a way that still shows when two indexes belong to the same table). The distribution of waits is significant.
How many of the indexes in the "Segments" output are based on an Oracle sequence? Check the values for the CACHE size of those sequences; they probably ought to be at least 1,000.
See note below about producing readable output.
Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
http://www.jlcomp.demon.co.uk
To post code, statspack/AWR report, execution plans or trace files, start and end the section with the tag {noformat}{noformat} (lowercase, curly brackets, no spaces) so that the text appears in fixed format.
There is a +"Preview"+ tab at the top of the text entry panel. Use this to check what your message will look like before you post the message. If it looks a complete mess you're unlikely to get a response. (Click on the +"Plain text"+ tab if you want to edit the text to tidy it up.) -
Current block not in the list of the block_menu LOV
Hi all,
In the key-crerec trigger of my form I call do_key('block_menu'). I perform some actions and then invoke the block_menu built-in in the key-menu trigger. What I noticed is that the current block, where I launched the key-crerec trigger, is not listed in the block_menu LOV.
What is the reason?
Thank you.

I'm not sure, but maybe it's because there's no point in jumping to the current block you're already in.
-
Can someone help me please?
I have no idea what I must do.
"An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code."

The exception handler gave all the info that you need. No need to print the whole stack trace.
The exception handler says
Exception Details: java.lang.IllegalArgumentException
TABLE1.NAME
Look in the session bean (assuming that is where your underlying rowset is). Look in the _init() method for statements similar to the following:
personRowSet.setCommand("SELECT * FROM TRAVEL.PERSON");
personRowSet.setTableName("PERSON");
What do you have?