"latch: row cache objects" and high "VERSION_COUNT"
Hello,
we are facing a situation where the database spends most of its time waiting for latches in the shared pool (as seen in the AWR report).
All statements issued by the application use bind variables, but what we can see in V$SQL is that some of them nevertheless have a relatively high VERSION_COUNT (> 300) and many invalidations (100 - 200), even though the tables involved are very small (some with no more than 3 or 4 rows).
Here is some (hopefully enough) information about the environment:
Version: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production (on RedHat EL 5)
Parameters:
cursor_bind_capture_destination memory+disk
cursor_sharing EXACT
cursor_space_for_time FALSE
filesystemio_options none
hi_shared_memory_address 0
memory_max_target 12288M
memory_target 12288M
object_cache_optimal_size 102400
open_cursors 300
optimizer_capture_sql_plan_baselines FALSE
optimizer_dynamic_sampling 2
optimizer_features_enable 11.2.0.2
optimizer_index_caching 0
optimizer_index_cost_adj 100
optimizer_mode ALL_ROWS
optimizer_secure_view_merging TRUE
optimizer_use_invisible_indexes FALSE
optimizer_use_pending_statistics FALSE
optimizer_use_sql_plan_baselines TRUE
plsql_optimize_level 2
session_cached_cursors 50
shared_memory_address 0
The shared pool size (according to AWR) is 4,832M
The buffer cache is 3,008M
Now, my question: is a VERSION_COUNT of > 300 a problem? (We have about 10-15 such statements out of a total of ~7000 in V$SQLAREA.) Those are also the statements listed at the top of the AWR report in the sections "SQL ordered by Version Count" and "SQL ordered by Sharable Memory".
Is it possible that those statements are causing the latch contention in the shared pool?
I went through https://blogs.oracle.com/optimizer/entry/why_are_there_more_cursors_in_11g_for_my_query_containing_bind_variables_1
The tables involved are fairly small and all the execution plans for each cursor are identical.
I can understand some of the invalidations that happen, because we have 7 schemas that have identical tables, but from my understanding that shouldn't cause such a high invalidation number. Or am I mistaken?
I'm not that experienced with Oracle tuning at this level, so I would appreciate any pointers on how I can find out where exactly the latch problem occurs.
After flushing the shared pool, the problem seems to go away for a while. But apparently that is only fighting symptoms, not fixing the root cause of the problem.
Some of the statements in question:
SELECT * FROM QRTZ_SIMPLE_TRIGGERS WHERE TRIGGER_NAME = :1 AND TRIGGER_GROUP = :2
UPDATE QRTZ_TRIGGERS SET TRIGGER_STATE = :1 WHERE TRIGGER_NAME = :2 AND TRIGGER_GROUP = :3 AND TRIGGER_STATE = :4
UPDATE QRTZ_TRIGGERS SET TRIGGER_STATE = :1 WHERE JOB_NAME = :2 AND JOB_GROUP = :3 AND TRIGGER_STATE = :4
SELECT TRIGGER_STATE FROM QRTZ_TRIGGERS WHERE TRIGGER_NAME = :1 AND TRIGGER_GROUP = :2
UPDATE QRTZ_SIMPLE_TRIGGERS SET REPEAT_COUNT = :1, REPEAT_INTERVAL = :2, TIMES_TRIGGERED = :3 WHERE TRIGGER_NAME = :4 AND TRIGGER_GROUP = :5
DELETE FROM QRTZ_TRIGGER_LISTENERS WHERE TRIGGER_NAME = :1 AND TRIGGER_GROUP = :2
So all of them are using bind variables.
I have seen that the columns used in the WHERE clauses all have histograms. Would removing them reduce the number of invalidations?
Unfortunately I did not save the information from v$sql_shared_cursor before the shared pool was flushed, but most of the invalidations showed up in the ROLL_INVALID_MISMATCH column, if that is of any help. There are also some invalidations reported for AUTH_CHECK_MISMATCH and TRANSLATION_MISMATCH, but to my understanding those are caused by executing the statement against different schemas.
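Next time, before flushing the shared pool, the mismatch reasons could be captured with something along these lines (a sketch; `&sql_id` is a placeholder to substitute with the SQL_ID of one of the high version-count statements, and the columns listed are the ones mentioned above):

```sql
-- Sketch: list the sharing-mismatch flags for each child cursor of one
-- of the high VERSION_COUNT statements. Run before flushing the shared
-- pool so the reasons are preserved.
SELECT child_number,
       roll_invalid_mismatch,
       auth_check_mismatch,
       translation_mismatch,
       bind_mismatch
FROM   v$sql_shared_cursor
WHERE  sql_id = '&sql_id'
ORDER  BY child_number;
```

Any column showing 'Y' for a child explains why that child could not share an existing cursor; lots of 'Y's under ROLL_INVALID_MISMATCH would point at rolling invalidation after statistics gathering.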
Looking at v$latch_misses, most of the waits for parent = 'row cache objects' are for "kqrpre: find obj" and "kqreqd: reget".
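For reference, the per-location miss counts quoted above can be pulled with a query along these lines (a sketch against v$latch_misses; LOCATION is the kernel function name such as "kqrpre: find obj"):

```sql
-- Sketch: where in the kernel the "row cache objects" latch misses
-- occur, ordered by how often a process had to sleep there.
SELECT location, sleep_count, wtr_slp_count
FROM   v$latch_misses
WHERE  parent_name = 'row cache objects'
ORDER  BY sleep_count DESC;
```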
> In the AWR report, what does the Dictionary Cache Stats section say?
Here they are:
Dictionary Cache Stats
Cache Get Requests Pct Miss Scan Reqs Mod Reqs Final Usage
dc_awr_control 65 0.00 0 2 1
dc_constraints 729 33.33 0 729 1
dc_global_oids 60 23.33 0 0 31
dc_histogram_data 7,397 10.53 0 0 2,514
dc_histogram_defs 21,797 9.83 0 0 5,239
dc_object_grants 4 25.00 0 0 12
dc_objects 27,683 2.29 0 223 2,581
dc_profiles 1,842 0.00 0 0 1
dc_rollback_segments 1,634 0.00 0 0 39
dc_segments 7,335 6.94 0 360 1,679
dc_sequences 139 5.76 0 139 19
dc_table_scns 53 100.00 0 0 0
dc_tablespace_quotas 1,956 0.10 0 0 4
dc_tablespaces 17,488 0.00 0 0 11
dc_users 58,013 0.03 0 0 164
global database name 4,261 0.00 0 0 1
outstanding_alerts 54 0.00 0 0 9
sch_lj_oids 4 0.00 0 0 2
Library Cache Activity
Namespace Get Requests Pct Miss Pin Requests Pct Miss Reloads Invalidations
ACCOUNT_STATUS 3,664 0.03 0 0 0
BODY 560 2.14 2,343 0.60 0 0
CLUSTER 52 0.00 52 0.00 0 0
DBLINK 3,668 0.00 0 0 0
EDITION 1,857 0.00 3,697 0.00 0 0
INDEX 99 19.19 99 19.19 0 0
OBJECT ID 68 100.00 0 0 0
SCHEMA 2,646 0.00 0 0 0
SQL AREA 32,996 2.26 1,142,497 0.21 189 226
SQL AREA BUILD 848 62.15 0 0 0
SQL AREA STATS 860 82.09 860 82.09 0 0
TABLE/PROCEDURE 17,713 2.62 26,112 4.88 61 0
TRIGGER 1,704 2.00 6,737 0.52 1 0
Similar Messages
-
Latch: row cache objects
Hello everyone,
Note: apologies for the bad formatting; I tried, but it seems I forgot how to use it.
BANNER
Oracle Database 11g Release 11.2.0.2.0 - 64bit Production
I saw high "*latch: row cache objects*" waits in the Statspack/ASH reports from ~14 hours ago, when the users were unable to connect to the database. There were
WARNING: inbound connection timed out (ORA-3136)
Time: 30-APR-2012 02:24:36
Tracing not turned on.
Tns error struct:
errors all over the alert log for the duration of 6 minutes of the problem.
I've put a few records in bold, from which I concluded that the problem was with dc_users.
Can anybody tell me how/where I should proceed from here?
SP report:
Instance Efficiency Indicators
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 100.00 Redo NoWait %: 100.00
Buffer Hit %: 99.84 Optimal W/A Exec %: 100.00
Library Hit %: 97.43 Soft Parse %: 87.86
Execute to Parse %: 22.54 Latch Hit %: 99.95
Parse CPU to Parse Elapsd %: 0.30 % Non-Parse CPU: 87.83
Shared Pool Statistics Begin End
Memory Usage %: 45.09 46.98
% SQL with executions>1: 11.49 13.15
% Memory for SQL w/exec>1: 72.96 21.33
Top 5 Timed Events Avg %Total
~~~~~~~~~~~~~~~~~~ wait Call
Event Waits Time (s) (ms) Time
latch: row cache objects 6,655 634,260 95306 97.0
log file sync 289,923 6,469 22 1.0
CPU time 5,039 .8
db file sequential read 310,084 2,840 9 .4
log file parallel write 451,706 1,144 3 .2
ASH Report
Analysis Begin Time: 30-Apr-12 02:24:00
Analysis End Time: 30-Apr-12 02:30:00
Elapsed Time: 6.0 (mins)
Begin Data Source: DBA_HIST_ACTIVE_SESS_HISTORY
in AWR snapshot 12185
End Data Source: DBA_HIST_ACTIVE_SESS_HISTORY
in AWR snapshot 12185
Sample Count: 1,385
Average Active Sessions: 38.47
Avg. Active Session per CPU: 1.60
Report Target: None specified
Top User Events DB/Inst: NIKU/niku (Apr 30 02:24 to 02:30)
Avg Active
Event Event Class % Event Sessions
latch: row cache objects Concurrency 75.45 29.03
CPU + Wait for CPU CPU 9.75 3.75
log file sync Commit 3.83 1.47
db file sequential read User I/O 3.61 1.39
Top Event P1/P2/P3 Values DB/Inst: NIKU/niku (Apr 30 02:24 to 02:30)
Event % Event P1 Value, P2 Value, P3 Value % Activity
Parameter 1 Parameter 2 Parameter 3
latch: row cache objects 75.60 "42287858200","279","0" 75.60
address number tries
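As a side note, the decimal P1 value from the ASH report can be mapped back to a specific child latch with a sketch like this (42287858200 is the P1 value shown above; it should resolve to child# 9 in the listing below):

```sql
-- Sketch: convert the decimal P1 (latch address) from ASH back to the
-- RAW address in v$latch_children to identify the hot child latch.
SELECT addr, child#, gets, misses
FROM   v$latch_children
WHERE  name = 'row cache objects'
AND    to_number(rawtohex(addr), 'XXXXXXXXXXXXXXXX') = 42287858200;
```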
1* select addr, latch#, child#, name, misses, gets from v$latch_children where name like '%row%cache%objec%' order by gets , misses
niku> /
ADDR LATCH# CHILD# NAME MISSES GETS
0000000A16FF21C8 279 26 row cache objects 0 0
0000000A16FF14C8 279 2 row cache objects 0 0
00000009D88D7ED8 279 3 row cache objects 0 0
0000000A16FF1B48 279 14 row cache objects 0 0
00000009D88D8558 279 15 row cache objects 0 0
0000000A16FF1CE8 279 17 row cache objects 0 0
0000000A26265A28 279 19 row cache objects 0 0
0000000A16FF1E88 279 20 row cache objects 0 0
00000009D88D8898 279 21 row cache objects 0 0
0000000A26265BC8 279 22 row cache objects 0 0
0000000A16FF2028 279 23 row cache objects 0 0
00000009D88D8A38 279 24 row cache objects 0 0
0000000A26265D68 279 25 row cache objects 0 0
00000009D88D8BD8 279 27 row cache objects 0 0
0000000A26265F08 279 28 row cache objects 0 0
00000009D88D8D78 279 30 row cache objects 0 0
0000000A262660A8 279 31 row cache objects 0 0
0000000A16FF2508 279 32 row cache objects 0 0
0000000A16FF26A8 279 35 row cache objects 0 0
00000009D88D90B8 279 36 row cache objects 0 0
0000000A262663E8 279 37 row cache objects 0 0
0000000A262668C8 279 46 row cache objects 0 0
0000000A26266A68 279 49 row cache objects 0 0
0000000A16FF2368 279 29 row cache objects 0 11
0000000A16FF2848 279 38 row cache objects 0 116
0000000A16FF29E8 279 41 row cache objects 0 200
00000009D88D93F8 279 42 row cache objects 0 318
00000009D88D9258 279 39 row cache objects 0 1010
0000000A16FF2EC8 279 50 row cache objects 0 1406
00000009D88D9598 279 45 row cache objects 0 1472
0000000A26266588 279 40 row cache objects 0 1705
0000000A26266728 279 43 row cache objects 0 7383
0000000A16FF2B88 279 44 row cache objects 0 32346
00000009D88D98D8 279 51 row cache objects 19 63948
0000000A26265888 279 16 row cache objects 0 88045
0000000A26266248 279 34 row cache objects 0 141176
00000009D88D9738 279 48 row cache objects 0 326672
0000000A16FF19A8 279 11 row cache objects 867 1770385
00000009D88D8078 279 6 row cache objects 9 1979542
0000000A16FF2D28 279 47 row cache objects 2 3435018
00000009D88D86F8 279 18 row cache objects 2557 14956121
0000000A26265068 279 1 row cache objects 224 24335868
0000000A262653A8 279 7 row cache objects 29760 133991553
00000009D88D8F18 279 33 row cache objects 60612 677263122
00000009D88D83B8 279 12 row cache objects 23981 739014460
0000000A26265208 279 4 row cache objects 19973399 852043775
0000000A26265548 279 10 row cache objects 280137 856097342
00000009D88D8218 279 9 row cache objects 715879777 1219000976
0000000A262656E8 279 13 row cache objects 3856073 2397402780
0000000A16FF1668 279 5 row cache objects 12763217 2920278217
*0000000A16FF1808 279 8 row cache objects 67329804 4145389092*
51 rows selected.
niku> list
1 select addr, latch#, child#, name, misses, gets from v$latch_children where name like '%row%cache%objec%' order by gets , misses
niku> select distinct s.kqrstcln latch#, r.cache#, r.parameter name, r.type, r.subordinate#
from v$rowcache r, x$kqrst s
where r.cache# = s.kqrstcid
order by 1,4,5;
LATCH# CACHE# NAME TYPE SUBORDINATE#
1 3 dc_rollback_segments PARENT
2 1 dc_free_extents PARENT
3 4 dc_used_extents PARENT
4 2 dc_segments PARENT
5 0 dc_tablespaces PARENT
6 5 dc_tablespace_quotas PARENT
7 6 dc_files PARENT
*8 10 dc_users PARENT*
*8 7 dc_users SUBORDINATE 0*
*8 7 dc_users SUBORDINATE 1*
*8 7 dc_users SUBORDINATE 2*
9 8 dc_objects PARENT
9 8 dc_object_grants SUBORDINATE 0
10 17 dc_global_oids PARENT
11 12 dc_constraints PARENT
12 13 dc_sequences PARENT
13 16 dc_histogram_defs PARENT
13 16 dc_histogram_data SUBORDINATE 0
13 16 dc_histogram_data SUBORDINATE 1
14 54 dc_sql_prs_errors PARENT
15 32 kqlsubheap_object PARENT
16 19 dc_table_scns PARENT
16 19 dc_partition_scns SUBORDINATE 0
17 18 dc_outlines PARENT
18 14 dc_profiles PARENT
19 47 realm cache PARENT
19 47 realm auth SUBORDINATE 0
20 48 Command rule cache PARENT
21 49 Realm Object cache PARENT
21 49 Realm Subordinate Cache SUBORDINATE 0
22 46 Rule Set Cache PARENT
23 34 extensible security user and rol PARENT
24 35 extensible security principal pa PARENT
25 37 extensible security UID to princ PARENT
26 36 extensible security principal na PARENT
27 33 extensible security principal ne PARENT
28 38 XS security class privilege PARENT
29 39 extensible security midtier cach PARENT
30 43 AV row cache 1 PARENT
31 44 AV row cache 2 PARENT
32 45 AV row cache 3 PARENT
33 15 global database name PARENT
34 20 rule_info PARENT
35 21 rule_or_piece PARENT
35 21 rule_fast_operators SUBORDINATE 0
36 23 dc_qmc_ldap_cache_entries PARENT
37 52 qmc_app_cache_entries PARENT
38 53 qmc_app_cache_entries PARENT
39 27 qmtmrcin_cache_entries PARENT
40 28 qmtmrctn_cache_entries PARENT
41 29 qmtmrcip_cache_entries PARENT
42 30 qmtmrctp_cache_entries PARENT
43 31 qmtmrciq_cache_entries PARENT
44 26 qmtmrctq_cache_entries PARENT
45 9 qmrc_cache_entries PARENT
46 50 qmemod_cache_entries PARENT
47 24 outstanding_alerts PARENT
48 22 dc_awr_control PARENT
49 25 SMO rowcache PARENT
50 40 sch_lj_objs PARENT
51 41 sch_lj_oids PARENT
61 rows selected.
niku> select parameter, gets from v$rowcache order by gets desc;
PARAMETER GETS
dc_users 2802019571
dc_tablespaces 2405092307
dc_objects 1815427326
jjk wrote:
I've already been through the link that you've mentioned and unfortunately couldn't make much use of it.
I didn't think it was really likely to be relevant, but there was always a long shot that it might have given you a clue.
Considering that "dc_users" had the maximum gets, I thought (rather, as per the internet) that it might be the point of contention. However, I did observe high misses on child# 9, which is "dc_objects".
It's often the case that the misses are more important than the gets when you see lots of gets and misses on a few latches/caches - the bit that might have been most instructive was the dictionary cache section from the AWR showing gets, misses, scans, scan misses etc. It might have told us a little about what was going in and out of the dictionary cache and let us guess why.
In alert log:
Sun Apr 29 02:20:00 2012
29-APR-2012 02:20:00 -- xxxxxxx package - REGRANT_READONLY Begin re-grant read only roles
Sun Apr 29 02:24:34 2012
29-APR-2012 02:24:34 -- xxxxxxx package - REGRANT_READONLY End re-grant read only roles
Sun Apr 29 02:30:00 2012
29-APR-2012 02:30:00 -- xxxxxxx package - REGRANT_READWRITE Begin re-grant read write roles
Sun Apr 29 02:32:02 2012
29-APR-2012 02:32:02 -- xxxxxxx package - REGRANT_READWRITE End re-grant read write roles
Is this code that "regrants" roles to users who already have them? That's what it sounds like, and that sounds like something that would impact various parts of the dictionary cache, especially dc_users, and possibly dc_objects.
CPU per Elap per Old
Executions Rows Processed Rows per Exec Exec (s) Exec (s) Hash Value
161,198 1,244 0.0 0.00 0.00 978935325
select /*+ rule */ c.name, u.name from con$ c, cdef$ cd, user$ u
where c.con# = cd.con# and cd.enabled = :1 and c.owner# = u.us
er#
159,955 159,952 1.0 0.00 0.00 2458412332
select o.name, u.name from obj$ o, user$ u where o.obj# = :1 an
d o.owner# = u.user#
159,932 6 0.0 0.00 0.00 2636710067
insert into objauth$(option$,grantor#,obj#,privilege#,grantee#,c
ol#,sequence#) values(decode(:1,0,null,:1),:2,:3,:4,:5,decode(:6
,0,null,:6),object_grant.nextval)
147,168 147,168 1.0 0.00 0.00 3468666020
select text from view$ where rowid=:1
124,635 124,635 1.0 0.00 0.00 564166580
select count(*) from ( select u.
name from registry$ r, us
er$ u where r.status in (1,3,5)
and r.namespace = 'SERVER'
The first one looks like a response to a constraint being breached.
The third one looks like something that might happen when you grant a privilege on an object to a user - and maybe the first one happens if the user has already got it and the insert raises a "duplicate key" error. The fourth one commonly happens when you have to re-optimize a query containing a view - and when you execute DDL (such as changing privileges on an object) you invalidate SQL and have to re-optimize it eventually. I can't remember where I've seen the second one appearing.
If you have a process that tries to do a lot of grants on objects to users and roles in a very short time, it's quite likely to create havoc in the dictionary cache - check what that package was up to and why it runs.
What is the missing information?
When I looked at some of your posting, the output didn't match the query; some of the later columns had gone missing - this might have been my browser rather than your input, though.
Regards
Jonathan Lewis -
Row cache lock acquired for more than 1 hour
Hi, could someone please let me know what a ROW CACHE LOCK is, and in what situations it happens? And also, what does the dc_histogram_defs enqueue mean - what is happening internally?
I am facing a problem in our database (11g R1) with code running for more than 1 hour, but nothing is actually happening to our objects; the only info I can see is a ROW CACHE LOCK held for more than 3000 seconds:
select p1text,p1,p2text,p2,p3text,p3 from v$session where event = 'row cache lock' and sid=37
P1TEXT P1 P2TEXT P2 P3TEXT P3
cache id 16 mode 0 request 3
select type,parameter,count,usage,gets,getmisses,scans,scanmisses,flushes,dlm_requests from v$rowcache where cache#=16
TYPE PARAMETER COUNT USAGE GETS GETMISSES SCANS SCANMISSES FLUSHES DLM_REQUESTS
PARENT dc_histogram_defs 4,497 4,497 12,426,122 1,446,845 0 0 210,040 1,706,801
SUBORDINATE dc_histogram_data 1,965 1,965 8,995,128 500,660 0 0 91,463 0
SUBORDINATE dc_histogram_data 297 297 3,500,090 46,371 0 0 6,591 0
hi,
could you take a look at this topic
row cache lock
regards, -
I recently have bought a new Macbook Pro (Version 10.10.1) with the OS X Yosemite. The computer comes with the new Pages (version 5.5.1).
Here is the problem: I like to create artwork using the shapes on Pages. Previously, on my old mac, I used Pages 4.3 to create objects, which I would copy then paste to Photoshop and it would become a vector smart object. However, in the new Pages (version 5.5.1), when I copy objects, they would appear on Photoshop as instead, a layer and it would not be in full resolution.
Also, I know there is nothing wrong with the Pages file itself because I have converted the document to PDF form and it is high resolution when inserted into Photoshop that way.
Does anyone know how I can copy individual objects from Pages (5.5.1) and paste it into Photoshop as a vector smart object with high resolution as I have done before?
Thanks!
ghotiz wrote:
copy the image and have it in a high-quality PNG format that does not include the background from the Pages document.
Oh, well if you don't actually need vector objects then it looks like this is possible. As I said earlier, Pages is putting a PNG on the clipboard. I tested it and it does paste into Photoshop as a transparent layer, because I can see the transparent background of the pasted PNG graphic if I either turn off all layers behind it in Photoshop, or if I start a new Photoshop document to paste into but make sure I choose Transparent for the Background Contents in the New Document dialog. -
Caching RAW and LONG RAW objects
Hi,
Is there any way to cache RAW and LONG RAW object like BLOB caching?
Thanks
Is there any way to cache RAW and LONG RAW objects like BLOB caching?
What is the version?
To fetch a LONG column piecewise (a given fetch size of bytes) you must identify the row using one of the three below:
1)Primary key
2)ROWID
3)Unique columns -
Cache destroy and removing objects based on filter
Hi,
I have lined up a few questions to understand the ways we can destroy/remove cache entries or cache itself on a collective basis and not just the cache.remove(key).
1. How do I remove a group of keys from the cache? Say I either know the keySet, or have a keySet obtained from a filter. I can see the Javadoc says NamedCache also implements QueryMap, and that supports keySet(Filter), but unlike Map.keySet(), the set returned by this method may not be backed by the map, so changes to the set may not be reflected in the map.
http://download.oracle.com/otn_hosted_doc/coherence/353/com/tangosol/util/QueryMap.html#keySet(com.tangosol.util.Filter)
Now that also means cache.keySet().removeAll() may not work. Can we confirm that? Another article, http://wiki.tangosol.com/display/COH35UG/Data+Affinity, shows an example using entrySet or keySet, with this line:
cacheLineItems.keySet().removeAll(setLineItemKeys).
2. Is namedcache.destroy() blocking or non-blocking? Because in one of our tests, we created a for loop of the following code:
a. create cache.
b. use cache.
c. destroy cache.
And we expected at any point in time to have only one cache active on the cluster; however, with 500M as the high unit, we never saw evictions and indeed saw an out-of-memory error. We had about 100 iterations. We expected destroying the cache to delete it, thereby freeing the memory on the JVM. We only had one proxy and one storage node in our test.
3. Is namedcache.clear() a blocking or non-blocking call? Although this does not necessarily remove the cache, it only unmaps all the entries in the cache.
Hi,
You can remove multiple cache entries based on a filter like this...
Filter filter = ...
cache.invokeAll(filter, new ConditionalRemove(AlwaysFilter.INSTANCE));
...or on a collection of keys like this...
Collection keys = ...
cache.invokeAll(keys, new ConditionalRemove(AlwaysFilter.INSTANCE));
Regarding non-blocking calls, presumably for clear() I would think that the call will not return until the cache is cleared (i.e. empty), otherwise you would get all sorts of potential problems. For cache.destroy() I am not sure what state the cache will be in when the call returns. I suspect if you call this from a client it only destroys the cache locally on that client and not throughout the rest of the cluster, though - but I am sure someone could confirm this.
JK -
Child Objects and TopLink Cache
All,
I have a problem RE the TopLink cache:
Object A has a Vector of Object Bs (1:M) and Object B has a Vector of Object Cs(1:M). I am using ValueHolderInterface and indirection pattern for each of these Vectors.
When I update an Object C, it is not refreshed the next time I read Object A using the readObject(expression). I can see the changes in the database. Can someone tell me the best way to refresh the cache to get the updated Object Cs that belong to Object A.
What I am doing is updating the C objects that belong to object A (thru Vector B) and then retrieving them again in the very next method call. Hope this makes sense!
Thanks!
J
There is something wrong with your test case; I've seen this before - if you update a C, then without fail the cached version of C is updated, and if you have a handle to the cached A that has the B that has the C in question, then you will see the update. It sounds like perhaps you're not actually looking at the cached A, but instead looking at it from a UnitOfWork, etc.
Send me an email, it's simply my firstname . lastname at Oracle.com. I'll send you a UOW primer that should help better understand these semantics...
- Don -
Poor performance and high number of gets on seemingly simple insert/select
Versions & config:
Database : 10.2.0.4.0
Application : Oracle E-Business Suite 11.5.10.2
2 node RAC, IBM AIX 5.3
Here's the insert/select; I'm struggling to explain why it's taking 6 seconds, and why it needs to get > 24,000 blocks:
INSERT INTO WF_ITEM_ATTRIBUTE_VALUES ( ITEM_TYPE, ITEM_KEY, NAME, TEXT_VALUE,
NUMBER_VALUE, DATE_VALUE ) SELECT :B1 , :B2 , WIA.NAME, WIA.TEXT_DEFAULT,
WIA.NUMBER_DEFAULT, WIA.DATE_DEFAULT FROM WF_ITEM_ATTRIBUTES WIA WHERE
WIA.ITEM_TYPE = :B1
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 4 0
Execute 2 3.44 6.36 2 24297 198 36
Fetch 0 0.00 0.00 0 0 0 0
total 3 3.44 6.36 2 24297 202 36
Misses in library cache during parse: 1
Misses in library cache during execute: 2
Also from the tkprof output, the explain plan and waits - virtually zero waits:
Rows Execution Plan
0 INSERT STATEMENT MODE: ALL_ROWS
0 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'WF_ITEM_ATTRIBUTES' (TABLE)
0 INDEX MODE: ANALYZED (RANGE SCAN) OF 'WF_ITEM_ATTRIBUTES_PK' (INDEX (UNIQUE))
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
library cache lock 12 0.00 0.00
gc current block 2-way 14 0.00 0.00
db file sequential read 2 0.01 0.01
row cache lock 24 0.00 0.01
library cache pin 2 0.00 0.00
rdbms ipc reply 1 0.00 0.00
gc cr block 2-way 4 0.00 0.00
gc current grant busy 1 0.00 0.00
********************************************************************************
The statement was executed 2 times. I know from slicing up the trc file that:
exe #1 : elapsed = 0.02s, query = 25, current = 47, rows = 11
exe #2 : elapsed = 6.34s, query = 24272, current = 151, rows = 25
If I run just the select portion of the statement, using bind values from exe #2, I get small number of gets (< 10), and < 0.1 secs elapsed.
If I make the insert into an empty, non-partitioned table, I get :
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.01 0.08 0 137 53 25
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.01 0.08 0 137 53 25
and the same explain plan - using an index range scan on WF_Item_Attributes_PK.
This problem is part of testing of a database upgrade and country go-live. On a 10.2.0.3 test system (non-RAC), the same insert/select - using the real WF_Item_Attributes_Value table takes :
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.10 10 27 136 25
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.00 0.10 10 27 136 25So I'm struggling to understand why the performance on the 10.2.0.4 RAC system is so much worse for this query, and why it's doing so many gets. Suggestions, thoughts, ideas welcomed.
I've verified system level things - CPUs weren't/aren't max'd out, no significant paging/swapping activity, run queue not long. AWR report for the time period shows nothing unusual.
further info on the objects concerned:
query source table :
WF_Item_Attributes_PK : unique index on Item_Type, Name. Index has 144 blocks, non-partitioned
WF_Item_Attributes tbl : non-partitioned, 160 blocks
insert destination table:
WF_Item_Attribute_Values:
range partitioned on Item_Type, and hash sub-partitioned on Item_Key
both executions of the insert hit the partition with the most data : 127,691 blocks total ; 8 sub-partitions with 15,896 to 16,055 blocks per sub-partition.
WF_Item_Attribute_Values_PK : unique index on columns Item_Type, Item_Key, Name. Range/hash partitioned as per table.
Bind values:
exe #1 : Item_Type (:B1) = OEOH, Item_Key (:B2) = 1048671
exe #2 : Item_Type (:B1) = OEOL, Item_Key (:B2) = 4253168
number of rows in WF_Item_Attribute_Values for Item_Type = OEOH : 1132587
number of rows in WF_Item_Attribute_Values for Item_Type = OEOL : 18763670
The non-RAC 10.2.0.3 test system (clone of Production from last night) has higher row counts for these 2.
thanks and regards
Ivan
hi Sven,
Thanks for your input.
1) I guess so, but I haven't lifted the lid to delve inside the form as to which one. I don't think it's the cause though, as I got poor performance running the insert statement with my own value (same statement, using my own bind value).
2) In every execution plan I've seen, checked, re-checked, it uses a range scan on the primary key. It is the most efficient I think, but the source table is small in any case - table 160 blocks, PK index 144 blocks. So I think it's the partitioned destination table that's the problem - but we only see this issue on the 10.2.0.4 pre-production (RAC) system. The 10.2.0.3 (RAC) Production system doesn't have it. This is why it's so puzzling to me - the source table read is fast, and does few gets.
3) table storage details below - the Item_Types being used were 'OEOH' (fast execution) and 'OEOL' (slow execution). Both hit partition WF_ITEM49, hence I've only expanded the subpartition info for that one (there are over 600 sub-partitions).
============= From DBA_Part_Tables : Partition Type / Count =============
PARTITI SUBPART PARTITION_COUNT DEF_TABLESPACE_NAME
RANGE HASH 77 APPS_TS_TX_DATA
1 row selected.
============= From DBA_Tab_Partitions : Partition Names / Tablespaces =============
Partition Name TS Name High Value High Val Len
WF_ITEM1 APPS_TS_TX_DATA 'A1' 4
WF_ITEM2 APPS_TS_TX_DATA 'AM' 4
WF_ITEM3 APPS_TS_TX_DATA 'AP' 4
WF_ITEM47 APPS_TS_TX_DATA 'OB' 4
WF_ITEM48 APPS_TS_TX_DATA 'OE' 4
WF_ITEM49 APPS_TS_TX_DATA 'OF' 4
WF_ITEM50 APPS_TS_TX_DATA 'OK' 4
WF_ITEM75 APPS_TS_TX_DATA 'WI' 4
WF_ITEM76 APPS_TS_TX_DATA 'WS' 4
WF_ITEM77 APPS_TS_TX_DATA MAXVALUE 8
77 rows selected.
============= From dba_part_key_columns : Partition Columns =============
NAME OBJEC Column Name COLUMN_POSITION
WF_ITEM_ATTRIBUTE_VALUES TABLE ITEM_TYPE 1
1 row selected.
PPR1 sql> @q_tabsubpart wf_item_attribute_values WF_ITEM49
============= From DBA_Tab_SubPartitions : SubPartition Names / Tablespaces =============
Partition Name SUBPARTITION_NAME TS Name High Value High Val Len
WF_ITEM49 SYS_SUBP3326 APPS_TS_TX_DATA 0
WF_ITEM49 SYS_SUBP3328 APPS_TS_TX_DATA 0
WF_ITEM49 SYS_SUBP3332 APPS_TS_TX_DATA 0
WF_ITEM49 SYS_SUBP3331 APPS_TS_TX_DATA 0
WF_ITEM49 SYS_SUBP3330 APPS_TS_TX_DATA 0
WF_ITEM49 SYS_SUBP3329 APPS_TS_TX_DATA 0
WF_ITEM49 SYS_SUBP3327 APPS_TS_TX_DATA 0
WF_ITEM49 SYS_SUBP3325 APPS_TS_TX_DATA 0
8 rows selected.
============= From dba_part_key_columns : Partition Columns =============
NAME OBJEC Column Name COLUMN_POSITION
WF_ITEM_ATTRIBUTE_VALUES TABLE ITEM_KEY 1
1 row selected.
from DBA_Segments - just for partition WF_ITEM49 :
Segment Name TSname Partition Name Segment Type BLOCKS Mbytes EXTENTS Next Ext(Mb)
WF_ITEM_ATTRIBUTE_VALUES @TX_DATA SYS_SUBP3332 TblSubPart 16096 125.75 1006 .125
WF_ITEM_ATTRIBUTE_VALUES @TX_DATA SYS_SUBP3331 TblSubPart 16160 126.25 1010 .125
WF_ITEM_ATTRIBUTE_VALUES @TX_DATA SYS_SUBP3330 TblSubPart 16160 126.25 1010 .125
WF_ITEM_ATTRIBUTE_VALUES @TX_DATA SYS_SUBP3329 TblSubPart 16112 125.875 1007 .125
WF_ITEM_ATTRIBUTE_VALUES @TX_DATA SYS_SUBP3328 TblSubPart 16096 125.75 1006 .125
WF_ITEM_ATTRIBUTE_VALUES @TX_DATA SYS_SUBP3327 TblSubPart 16224 126.75 1014 .125
WF_ITEM_ATTRIBUTE_VALUES @TX_DATA SYS_SUBP3326 TblSubPart 16208 126.625 1013 .125
WF_ITEM_ATTRIBUTE_VALUES @TX_DATA SYS_SUBP3325 TblSubPart 16128 126 1008 .125
WF_ITEM_ATTRIBUTE_VALUES_PK @TX_IDX SYS_SUBP3332 IdxSubPart 59424 464.25 3714 .125
WF_ITEM_ATTRIBUTE_VALUES_PK @TX_IDX SYS_SUBP3331 IdxSubPart 59296 463.25 3706 .125
WF_ITEM_ATTRIBUTE_VALUES_PK @TX_IDX SYS_SUBP3330 IdxSubPart 59520 465 3720 .125
WF_ITEM_ATTRIBUTE_VALUES_PK @TX_IDX SYS_SUBP3329 IdxSubPart 59104 461.75 3694 .125
WF_ITEM_ATTRIBUTE_VALUES_PK @TX_IDX SYS_SUBP3328 IdxSubPart 59456 464.5 3716 .125
WF_ITEM_ATTRIBUTE_VALUES_PK @TX_IDX SYS_SUBP3327 IdxSubPart 60016 468.875 3751 .125
WF_ITEM_ATTRIBUTE_VALUES_PK @TX_IDX SYS_SUBP3326 IdxSubPart 59616 465.75 3726 .125
WF_ITEM_ATTRIBUTE_VALUES_PK @TX_IDX SYS_SUBP3325 IdxSubPart 59376 463.875 3711 .125
sum 4726.5
[the @ in the TS Name is my shortcode, as Apps stupidly prefixes every ts with "APPS_TS_"]
The tablespaces used for all subpartitions are UNIFORM extent mgmt, AUTO segment_space_management; LOCAL extent mgmt.
regards
Ivan -
ROW CACHE ENQUEUE LOCK / library cache load lock leads to database hang
We faced a database hang on a 3-node 11i ERP 9i RAC database.
We saw "library cache load lock" timed-out events reported in the alert log.
Then a few ORA-600 errors, and later a ROW CACHE ENQUEUE LOCK timed-out event. Eventually the database was hung and we had to bounce the services.
We created support SR 7845542.992 for RCA.
Support says to increase the shared pool size to avoid shared pool fragmentation and reloads, and additionally to upgrade to a 10g database.
I am not convinced that adding additional pool size would solve this, or that an upgrade to 10g would; furthermore, even 10g has such issues reported.
I saw a couple of bugs mentioning that such an issue can happen due to a deadlock of sessions holding latches.
Kindly let me know your view on the issue.
If required I can attach a statspack report for more information.
Many thanks, I was keen to have your update.
There are 8 CPUs on each node. Reloads were very high during the time period, but normally reloads are not high.
Statspack details for 3 nodes
STATSPACK report for
DB Name DB Id Instance Inst Num Release Cluster Host
PROD 21184234 PROD1 1 9.2.0.8.0 YES npi-or-db-p-
11.npi.corp
Snap Id Snap Time Sessions Curs/Sess Comment
Begin Snap: 149817 30-Oct-09 13:00:09 574 #########
End Snap: 149837 30-Oct-09 14:00:17 602 #########
Elapsed: 60.13 (mins)
Cache Sizes (end)
~~~~~~~~~~~~~~~~~
Buffer Cache: 8,192M Std Block Size: 8K
Shared Pool Size: 1,024M Log Buffer: 10,240K
Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
Redo size: 122,414.93 11,449.13
Logical reads: 69,550.76 6,504.89
Block changes: 928.41 86.83
Physical reads: 196.24 18.35
Physical writes: 28.65 2.68
User calls: 343.97 32.17
Parses: 558.61 52.25
Hard parses: 43.48 4.07
Sorts: 467.24 43.70
Logons: 0.63 0.06
Executes: 2,046.99 191.45
Transactions: 10.69
% Blocks changed per Read: 1.33 Recursive Call %: 97.59
Rollback per transaction %: 5.07 Rows per Sort: 15.85
Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 100.00 Redo NoWait %: 100.00
Buffer Hit %: 99.72 In-memory Sort %: 100.00
Library Hit %: 96.79 Soft Parse %: 92.22
Execute to Parse %: 72.71 Latch Hit %: 99.77
Parse CPU to Parse Elapsd %: 60.10 % Non-Parse CPU: 78.07
-> s - second
-> cs - centisecond - 100th of a second
-> ms - millisecond - 1000th of a second
-> us - microsecond - 1000000th of a second
-> ordered by wait time desc, waits desc (idle events last)
Avg
Total Wait wait Waits
Event Waits Timeouts Time (s) (ms) /txn
db file sequential read 249,234 0 1,537 6 6.5
db file scattered read 61,776 0 769 12 1.6
row cache lock 780,098 10 566 1 20.2
library cache lock 697,849 157 432 1 18.1
latch free 127,926 4,715 387 3 3.3
global cache cr request 370,770 3,091 309 1 9.6
PL/SQL lock timer 59 58 112 1903 0.0
wait for scn from all nodes 303,572 18 103 0 7.9
library cache pin 26,231 2 100 4 0.7
global cache null to x 17,717 716 92 5 0.5
buffer busy waits 5,388 18 74 14 0.1
db file parallel read 5,245 0 69 13 0.1
log file sync 20,407 29 66 3 0.5
enqueue 52,200 70 60 1 1.4
buffer busy global CR 4,845 33 55 11 0.1
CGS wait for IPC msg 412,512 407,106 50 0 10.7
ksxr poll remote instances 1,279,565 483,046 48 0 33.2
log file parallel write 160,040 0 42 0 4.1
library cache load lock 1,491 2 29 20 0.0
global cache open x 19,507 344 28 1 0.5
buffer busy global cache 957 0 22 23 0.0
global cache s to x 16,516 180 20 1 0.4
db file parallel write 11,120 0 12 1 0.3
log file sequential read 618 0 11 18 0.0
DFS lock handle 23,768 0 10 0 0.6
control file sequential read 8,563 0 4 0 0.2
KJC: Wait for msg sends to c 1,549 57 4 3 0.0
lock escalate retry 76 76 4 52 0.0
SQL*Net break/reset to clien 12,546 0 3 0 0.3
SQL*Net more data to client 85,773 0 3 0 2.2
control file parallel write 1,265 0 2 1 0.0
global cache null to s 648 23 1 2 0.0
global cache busy 200 0 1 5 0.0
global cache open s 1,493 28 1 1 0.0
log file switch completion 12 0 1 61 0.0
PX Deq Credit: send blkd 161 70 1 4 0.0
kksfbc child completion 119 118 1 5 0.0
PX Deq: reap credit 5,948 5,456 0 0 0.2
PX Deq: Execute Reply 83 29 0 3 0.0
process startup 8 0 0 25 0.0
LGWR wait for redo copy 992 12 0 0 0.0
IPC send completion sync 450 450 0 0 0.0
PX Deq: Parse Reply 100 28 0 1 0.0
undo segment extension 10,380 10,372 0 0 0.3
PX Deq: Join ACK 146 65 0 1 0.0
buffer deadlock 222 221 0 0 0.0
async disk IO 1,179 0 0 0 0.0
wait list latch free 2 0 0 16 0.0
PX Deq: Msg Fragment 112 28 0 0 0.0
Library Cache Activity for DB: PROD Instance: PROD1 Snaps: 149817 -149837
->"Pct Misses" should be very low
Get Pct Pin Pct Invali-
Namespace Requests Miss Requests Miss Reloads dations
BODY 116,007 1.1 133,347 19.9 24,338 0
CLUSTER 4,224 0.6 5,131 1.0 0 0
INDEX 15,048 24.1 13,798 26.4 2 0
JAVA DATA 82 0.0 692 39.6 136 0
JAVA RESOURCE 66 39.4 206 25.2 12 0
PIPE 1,140 0.5 1,160 0.5 0 0
SQL AREA 1,197,908 12.6 13,517,660 1.5 111,833 73
TABLE/PROCEDURE 3,847,439 0.8 4,230,265 7.9 142,200 0
TRIGGER 8,444 2.4 8,657 18.5 1,274 0
GES Lock GES Pin GES Pin GES Inval GES Invali-
Namespace Requests Requests Releases Requests dations
BODY 1 1,234 1,258 985 0
CLUSTER 3,222 25 25 25 0
INDEX 13,792 3,641 3,631 3,629 0
JAVA DATA 0 0 0 0 0
JAVA RESOURCE 0 26 25 0 0
PIPE 0 0 0 0 0
SQL AREA 0 0 0 0 0
TABLE/PROCEDURE 857,137 13,130 13,264 10,762 0
TRIGGER 0 200 202 200 0
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
STATSPACK report for
DB Name DB Id Instance Inst Num Release Cluster Host
PROD 21184234 PROD2 2 9.2.0.8.0 YES npi-or-db-p-
12.npi.corp
Snap Id Snap Time Sessions Curs/Sess Comment
Begin Snap: 149847 30-Oct-09 14:00:05 493 #########
End Snap: 149857 30-Oct-09 15:00:02 432 #########
Elapsed: 59.95 (mins)
Cache Sizes (end)
~~~~~~~~~~~~~~~~~
Buffer Cache: 8,192M Std Block Size: 8K
Shared Pool Size: 1,024M Log Buffer: 10,240K
Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
Redo size: 71,853.44 32,058.65
Logical reads: 273,904.84 122,207.36
Block changes: 889.13 396.70
Physical reads: 40.40 18.03
Physical writes: 20.97 9.35
User calls: 153.74 68.60
Parses: 66.19 29.53
Hard parses: 2.66 1.19
Sorts: 25.70 11.47
Logons: 0.16 0.07
Executes: 726.41 324.10
Transactions: 2.24
% Blocks changed per Read: 0.32 Recursive Call %: 92.41
Rollback per transaction %: 4.84 Rows per Sort: 193.55
Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 100.00 Redo NoWait %: 99.99
Buffer Hit %: 99.99 In-memory Sort %: 100.00
Library Hit %: 99.35 Soft Parse %: 95.97
Execute to Parse %: 90.89 Latch Hit %: 99.99
Parse CPU to Parse Elapsd %: 36.55 % Non-Parse CPU: 98.28
Wait Events for DB: PROD Instance: PROD2 Snaps: 149847 -149857
-> s - second
-> cs - centisecond - 100th of a second
-> ms - millisecond - 1000th of a second
-> us - microsecond - 1000000th of a second
-> ordered by wait time desc, waits desc (idle events last)
Avg
Total Wait wait Waits
Event Waits Timeouts Time (s) (ms) /txn
enqueue 65,823 33,667 90,459 1374 8.2
row cache lock 38,996 560 1,795 46 4.8
PX Deq Credit: send blkd 522 499 1,223 2344 0.1
PX Deq: Parse Reply 466 416 987 2117 0.1
db file sequential read 50,130 0 421 8 6.2
library cache lock 78,842 172 210 3 9.8
db file scattered read 6,904 0 152 22 0.9
global cache cr request 84,801 575 113 1 10.5
latch free 8,096 736 65 8 1.0
log file sync 5,676 27 41 7 0.7
wait for scn from all nodes 18,891 10 24 1 2.3
CGS wait for IPC msg 394,678 392,142 21 0 49.0
library cache pin 1,339 0 17 13 0.2
global cache null to x 2,145 48 16 8 0.3
global cache s to x 3,242 32 16 5 0.4
buffer busy waits 366 10 15 40 0.0
ksxr poll remote instances 70,990 31,295 14 0 8.8
db file parallel read 359 0 11 31 0.0
global cache open x 2,708 55 10 4 0.3
async disk IO 3,474 0 8 2 0.4
global cache open s 3,470 10 6 2 0.4
log file parallel write 13,076 0 5 0 1.6
global cache busy 58 40 5 90 0.0
PL/SQL lock timer 1 1 5 4877 0.0
DFS lock handle 3,362 0 5 1 0.4
log file sequential read 412 0 4 10 0.1
db file parallel write 2,774 0 3 1 0.3
library cache load lock 59 0 3 58 0.0
buffer busy global CR 722 0 3 4 0.1
control file sequential read 6,398 0 3 0 0.8
SQL*Net break/reset to clien 16,078 0 2 0 2.0
name-service call wait 26 0 2 67 0.0
control file parallel write 1,248 0 2 1 0.2
process startup 24 0 1 49 0.0
KJC: Wait for msg sends to c 3,491 4 1 0 0.4
SQL*Net more data to client 23,724 0 1 0 2.9
buffer busy global cache 23 0 0 19 0.0
global cache null to s 114 0 0 4 0.0
PX Deq: reap credit 5,646 5,509 0 0 0.7
log file switch completion 4 0 0 58 0.0
lock escalate retry 54 54 0 1 0.0
IPC send completion sync 119 118 0 0 0.0
direct path read 2,820 0 0 0 0.3
direct path read (lob) 3,632 0 0 0 0.5
PX Deq: Join ACK 88 37 0 0 0.0
direct path write 2,470 0 0 0 0.3
kksfbc child completion 6 6 0 6 0.0
buffer deadlock 3 3 0 11 0.0
global cache quiesce wait 4 4 0 8 0.0
Library Cache Activity for DB: PROD Instance: PROD2 Snaps: 149847 -149857
->"Pct Misses" should be very low
Get Pct Pin Pct Invali-
Namespace Requests Miss Requests Miss Reloads dations
BODY 27,353 0.5 28,091 6.5 1,643 0
CLUSTER 203 1.0 269 1.5 0 0
INDEX 526 9.9 271 19.9 0 0
JAVA DATA 18 0.0 120 6.7 4 0
JAVA RESOURCE 20 45.0 56 26.8 3 0
JAVA SOURCE 1 100.0 1 100.0 0 0
PIPE 999 0.4 1,043 0.4 0 0
SQL AREA 131,793 7.6 3,406,577 0.4 7,012 0
TABLE/PROCEDURE 926,987 0.2 1,907,993 1.0 8,845 0
TRIGGER 1,519 0.1 1,532 4.9 69 0
GES Lock GES Pin GES Pin GES Inval GES Invali-
Namespace Requests Requests Releases Requests dations
BODY 1 129 277 117 0
CLUSTER 168 2 2 2 0
INDEX 271 52 56 52 0
JAVA DATA 0 0 0 0 0
JAVA RESOURCE 0 9 6 0 0
JAVA SOURCE 0 1 1 1 0
PIPE 0 0 0 0 0
SQL AREA 0 0 0 0 0
TABLE/PROCEDURE 89,523 764 868 460 0
TRIGGER 0 2 14 2 0
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
DB Name DB Id Instance Inst Num Release Cluster Host
PROD 21184234 PROD3 3 9.2.0.8.0 YES npi-or-db-p-
13.npi.corp
Snap Id Snap Time Sessions Curs/Sess Comment
Begin Snap: 149808 30-Oct-09 14:00:00 31 #########
End Snap: 149809 30-Oct-09 15:00:02 34 11,831.4
Elapsed: 60.03 (mins)
Cache Sizes (end)
~~~~~~~~~~~~~~~~~
Buffer Cache: 8,192M Std Block Size: 8K
Shared Pool Size: 1,024M Log Buffer: 10,240K
Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
Redo size: 1,518.14 36,700.35
Logical reads: 1,333.43 32,235.02
Block changes: 5.09 123.01
Physical reads: 54.31 1,312.88
Physical writes: 3.91 94.44
User calls: 1.46 35.40
Parses: 2.24 54.21
Hard parses: 0.04 0.93
Sorts: 0.84 20.28
Logons: 0.06 1.45
Executes: 3.11 75.23
Transactions: 0.04
% Blocks changed per Read: 0.38 Recursive Call %: 94.31
Rollback per transaction %: 45.64 Rows per Sort: 215.97
Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 99.99 Redo NoWait %: 100.00
Buffer Hit %: 96.21 In-memory Sort %: 100.00
Library Hit %: 99.07 Soft Parse %: 98.29
Execute to Parse %: 27.94 Latch Hit %: 99.98
Parse CPU to Parse Elapsd %: 69.88 % Non-Parse CPU: 97.92
Wait Events for DB: PROD Instance: PROD3 Snaps: 149808 -149809
-> s - second
-> cs - centisecond - 100th of a second
-> ms - millisecond - 1000th of a second
-> us - microsecond - 1000000th of a second
-> ordered by wait time desc, waits desc (idle events last)
Avg
Total Wait wait Waits
Event Waits Timeouts Time (s) (ms) /txn
enqueue 19,510 7,472 15,509 795 130.9
PX Deq: Parse Reply 1,152 1,071 2,577 2237 7.7
row cache lock 2,202 518 1,579 717 14.8
db file scattered read 31,556 0 354 11 211.8
db file sequential read 17,272 0 67 4 115.9
db file parallel read 1,722 0 34 20 11.6
global cache cr request 53,754 91 32 1 360.8
wait for scn from all nodes 1,897 13 10 5 12.7
CGS wait for IPC msg 403,358 401,478 10 0 2,707.1
DFS lock handle 4,753 0 8 2 31.9
direct path read 1,248 0 6 5 8.4
PX Deq: Execute Reply 110 38 6 51 0.7
global cache open s 160 10 5 31 1.1
control file sequential read 6,442 0 3 0 43.2
name-service call wait 26 0 2 78 0.2
latch free 129 109 2 13 0.9
KJC: Wait for msg sends to c 153 24 1 9 1.0
control file parallel write 1,245 0 1 1 8.4
buffer busy waits 199 0 1 6 1.3
process startup 20 0 1 44 0.1
global cache null to x 74 2 1 9 0.5
global cache null to s 19 0 1 29 0.1
global cache open x 268 1 1 2 1.8
library cache lock 1,150 0 0 0 7.7
PX Deq: Join ACK 129 48 0 3 0.9
log file parallel write 1,157 0 0 0 7.8
async disk IO 219 0 0 1 1.5
direct path write 1,024 0 0 0 6.9
ksxr poll remote instances 6,740 4,595 0 0 45.2
PX Deq: reap credit 6,580 6,511 0 0 44.2
buffer busy global CR 73 0 0 2 0.5
log file sequential read 11 0 0 10 0.1
log file sync 100 0 0 1 0.7
global cache s to x 282 2 0 0 1.9
db file parallel write 95 0 0 1 0.6
library cache pin 142 0 0 0 1.0
SQL*Net break/reset to clien 28 0 0 1 0.2
IPC send completion sync 81 81 0 0 0.5
PX Deq: Signal ACK 32 14 0 1 0.2
PX Deq Credit: send blkd 3 1 0 7 0.0
SQL*Net more data to client 841 0 0 0 5.6
PX Deq: Msg Fragment 37 17 0 0 0.2
log file single write 4 0 0 1 0.0
db file single write 1 0 0 1 0.0
SQL*Net message from client 4,213 0 13,673 3246 28.3
gcs remote message 214,784 75,745 7,016 33 1,441.5
wakeup time manager 233 233 6,812 29237 1.6
PX Idle Wait 2,338 2,294 5,686 2432 15.7
PX Deq: Execution Msg 2,151 1,979 4,796 2229 14.4
Library Cache Activity for DB: PROD Instance: PROD3 Snaps: 149808 -149809
->"Pct Misses" should be very low
Get Pct Pin Pct Invali-
Namespace Requests Miss Requests Miss Reloads dations
BODY 1,290 0.0 1,290 0.0 0 0
CLUSTER 18 0.0 8 0.0 0 0
SQL AREA 4,893 2.0 36,371 0.5 2 0
TABLE/PROCEDURE 1,555 3.9 3,834 4.9 71 0
TRIGGER 286 0.0 286 0.0 0 0
GES Lock GES Pin GES Pin GES Inval GES Invali-
Namespace Requests Requests Releases Requests dations
BODY 1 0 0 0 0
CLUSTER 4 0 0 0 0
SQL AREA 0 0 0 0 0
TABLE/PROCEDURE 863 224 42 42 0
TRIGGER 0 0 0 0 0
------------------------------------------------------------- -
Please recommend solutions for Cache Connect and ?
---> Solution I
2 servers create Cache Connect to the RDBMS
---> Solution II
1 server creates Cache Connect to the RDBMS and forms an active standby pair with another server
Hi,
If you only need READONLY caching in TimesTen and all updates will be made in Oracle then you have two main options:
Multiple READONLY Caches
For this you have one or more separate TimesTen caches, each with a READONLY cache group defined against the Oracle DBMS. Each cache can cache different tables/data, or they can cache the same tables/data as required.
This architecture is very flexible (adding or removing TimesTen servers is very simple) and very scalable. It also provides very good HA; if one cache is down applications can just access a different cache.
However, due to the asynchronous, time-based nature of the refresh from Oracle to TimesTen, at any moment in time the data in all the caches may not be 100% consistent with each other or with Oracle.
By this I mean the following:
- Assume that you have 2 (or more) READONLY caches caching the same data from Oracle, with an AUTOREFRESH interval of T1
- At some time, T2, you update, in Oracle, one of the rows cached by the caches.
- At some later time, T3, you query the updated row via both caches
If (T3 - T2) < T1 then the values returned by your query may differ between the caches (depending on where exactly they are in the autorefresh interval when the update is done).
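The timing condition above can be checked mechanically; here is a minimal sketch of just the staleness test (the class and method names are illustrative, not any TimesTen API):

```java
public class AutorefreshStaleness {
    // Returns true when a query at time t3 of a row updated in Oracle at time t2
    // may still see stale data in a cache with autorefresh interval t1
    // (all times in the same unit, e.g. seconds).
    static boolean mayBeStale(double t2, double t3, double t1) {
        return (t3 - t2) < t1;
    }

    public static void main(String[] args) {
        // Update at t=10s, query at t=12s, autorefresh every 5s: a refresh may not
        // have run yet on every cache, so the caches can disagree.
        System.out.println(mayBeStale(10, 12, 5));  // true
        // Query at t=20s: at least one full refresh interval has elapsed, so
        // every cache has picked up the change.
        System.out.println(mayBeStale(10, 20, 5));  // false
    }
}
```

The point of the sketch: only when a full autorefresh interval has elapsed since the Oracle update are all READONLY caches guaranteed to agree.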
Active/Standby pair using 2-SAFE replication with READONLY cache group and optional read-only subscribers
With this architecture you define a TimesTen Active/Standby replicated pair using 2-safe replication and containing the READONLY cache group. 'Scale out' is accomplished in one of three ways:
1. Adding further A/S pairs with a READONLY cache group
2. Adding read-only subscriber datastores to the original A/S pair
3. A mixture of (1) and (2)
The main advantages of this architecture are as follows:
1. When 2-Safe is used within the A/S pair, queries to either cache will always return consistent results (i.e. the consistency issue that I described for the first scenario does not exist in this configuration). However, there can still be inconsistencies in results between the A/S pair and any read-only subscribers (since the replication to them is asynchronous), but given the high performance of TimesTen replication, the latency between a change appearing at the A/S pair and at the read-only subscribers will typically be a few ms, rather than the potentially several seconds of the multiple-cache scenario.
2. The loading on the central Oracle DBMS arising from AUTOREFRESH processing is reduced compared to the multiple-cache scenario. The difference in loading between this solution and the multiple-cache solution grows as more TimesTen servers are deployed.
It should be noted that the operational management of this solution is a little more complex than for the first scenario, since the A/S pair must be monitored and a failover triggered if there is some failure within the pair.
Hope that helps a little.
Chris -
Reflection objects and thread safety
Hi,
I believe that I saw that Field and Method objects are thread-safe (i.e., can safely have methods called against a single object instance concurrently from multiple threads), but am having trouble finding a statement of that fact in the JDK javadocs.
I'm assuming that all thread-specific 'state' would be managed by the Object target passed to methods like invoke()/get()/set() and not kept on the actual Field and Method objects themselves. Ideally, I'd like to look up fields and methods only once reflectively, and thereafter just use the same reflection object instances to access their target objects at runtime as a performance optimization - possibly in different threads and at the same time - without having to pay the cost of looking them up again. I should be able to do that provided Method.invoke() is thread safe. Otherwise, I'd probably be forced to call Class.getMethod() to get a new Method object to use against each object instance, which would be more costly both from a memory standpoint (more Method objects) and from a lookup-cost perspective.
Given that lots of existing performance-critical enterprise infrastructure code, such as OR database APIs, IoC frameworks and J2EE containers use reflection to decouple the generic code from any app specific code (from a compile time perspective) as an alternative to code generation, it's surprising that there's no obvious statement about thread safety in these classes. If I look at the source code for Method, it appears to be thread safe, but I can only get so far with this analysis, as the critical code in Method appears to be implemented using a class named 'sun.reflect.MethodAccessor', whose source I don't have access to.
I know it's possible to invoke a method against multiple objects by calling Method.invoke() against each of the target objects in question. However, there's no mention as to whether it's safe to use a single Method object instance to invoke a method against multiple target object instances at the same time (i.e., from different threads running in parallel). This would fail, for instance, if the Method object had data members that were used to communicate information between internal calls without any synchronization, as the values might be used by one thread while another was changing them.
Just to clarify (as i've seen some confusion in other forum discussions on this topic):
I completely understand that the thread safety of a target object's method (read, small 'm') is entirely dependent upon its implementation and not the mechanism by which it's invoked - i.e., whether a method is invoked by an explicit compiled-in call against an instance of the target object in some Java source file, or indirectly via Method object-based reflection, is immaterial to the method's thread safety.
What I'm asking about is the thread safety of the Method.invoke() call itself (read, big 'M'). Same question wrt Field.get()/.set() as well. These calls should be thread-safe if they're stateless wrt the Method and Field object instances that they are invoked against.
In general, if a Java API is silent about multi-threading, it is intended to be thread-safe. See the javadoc for HashMap for an example of an explicit warning.
It is true that Java code can have bugs that show up only on unusual implementations of the Java memory model, such as relaxed memory model machines. Most (if not all) implementations of the JDK have been deployed principally on platforms with strong memory models. (Perhaps not coincidentally, those are also the machines that have market share.) There are even bugs found occasionally in the JDK core, so draw your own conclusions about the bug-tail of our software stack on systems with relaxed memory models!
One of the more likely bugs to run into on highly optimized systems is failure of timely initialization of non-final fields in objects which are shared in an unsynchronized manner. See http://www.cs.umd.edu/~pugh/java/memoryModel/jsr-133-faq.html#finalRight and related pages. JDK core programmers (at Sun, to my personal knowledge) take care not to write code with such bugs, but application programmers might.
And, yes, caching your own Method objects is a good idea, if only because their lookup is generally cumbersome and slow. If you are very performance sensitive, you'll end up generating a bytecode "shim" between your callers and the desired target methods. I expect that http://openjdk.java.net/projects/mlvm/ (an OpenJDK project we are just starting) will provide some relief for this; stay tuned.
Finally, since Method objects have no state to speak of (except their "accessible" bit, which is an ahead-of-time configuration), it would be really, really surprising if they could create a race condition of some sort. If you expect race conditions in formally stateless data structures, you are certifiably paranoid. (A normal state on some platforms, hopefully not on Java.)
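The look-up-once, invoke-many-times pattern discussed above can be sketched as follows (the cache class and key scheme here are illustrative, not from the JDK):

```java
import java.lang.reflect.Method;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class MethodCache {
    // One-time lookup per (class, method name); the Method objects are then
    // shared freely across threads, relying on Method.invoke() being stateless.
    private static final Map<String, Method> CACHE = new ConcurrentHashMap<>();

    public static Object call(Object target, String methodName) throws Exception {
        String key = target.getClass().getName() + "#" + methodName;
        Method m = CACHE.computeIfAbsent(key, k -> {
            try {
                return target.getClass().getMethod(methodName);
            } catch (NoSuchMethodException e) {
                throw new IllegalArgumentException(e);
            }
        });
        return m.invoke(target); // invoked concurrently from many threads
    }

    public static void main(String[] args) throws Exception {
        // Exercise the single cached Method concurrently: each thread
        // reflectively invokes String.length() on the same Method instance.
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                try {
                    call("hello", "length");
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        System.out.println(call("hello", "length")); // prints 5
    }
}
```

Note the cache key deliberately includes the class name, since the same method name looked up against different classes yields different Method objects.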
For more information about method calls, including reflective methods, see my blog post: http://blogs.sun.com/jrose/entry/anatomy_of_a_call_site
Best wishes... -
Performance issues; waited too long for a row cache enqueue lock!
hi Experts,
OS: Oracle Solaris on SPARC (64-bit)
DB version:
SQL> select * from V$VERSION;
BANNER
Oracle Database 11g Release 11.2.0.1.0 - 64bit Production
PL/SQL Release 11.2.0.1.0 - Production
CORE 11.2.0.1.0 Production
TNS for Solaris: Version 11.2.0.1.0 - Production
NLSRTL Version 11.2.0.1.0 - Production
SQL>
We have seen 100% CPU usage and a high database load, so I checked the instance and saw many blocking sessions and more than 71 sessions running the same select:
select tablespace_name as tbsname from (select tablespace_name, sum(bytes)/1024/1024 free_mb, 0 total_mb, 0 max_mb from dba_free_space group by tablespace_name union select tablespace_name, 0 current_mb, sum(bytes)/1024/1024 total_mb, sum(decode(maxbytes, 0, bytes, maxbytes))/1024/1024 max_mb from dba_data_files group by tablespace_name) group by tablespace_name having round((sum(total_mb)-sum(free_mb))/sum(max_mb)*100) > 95
Blocking sessions are running queries like this:
SELECT * from MYTABLE WHERE MYCOL=:1 FOR UPDATE;
These select queries come from a cron job running every 10 minutes to check the tablespaces, so I first killed (kill -9 pid) those select statements and the load decreased to 13% CPU usage. The blocking sessions were still there and I didn't kill them, waiting for the app guys' confirmation... after a few hours the CPU usage never went below 13%, and I saw many errors:
WAITED TOO LONG FOR A ROW CACHE ENQUEUE LOCK! pid=... System State dumped to trace file .....trc
After that, we decided to restart the DB to release the locks!
I would like to understand why, during the load, we were not able to run those select statements, why the scheduled statspack snapshot reports were not able to finish, nor the automatic database statistics... why did 5 FOR UPDATE statements lock the whole DB?
user12035575 wrote:
SELECT FOR UPDATE will only lock the table row until the transaction is completed.
"WAITED TOO LONG FOR A ROW CACHE ENQUEUE LOCK" happens when a session needs to acquire a lock on the data dictionary. Did you check the trace file associated with the statement?
The trace file is very long; which information should I focus on? -
Need help with inserting rows in ResultSet and JTable
hello Guru!
I have inserted a row in my result set and I want my JTable to show this row promptly after I have inserted it into the result set.
But when I use the following code on my result set:
rs.moveToInsertRow();
rs.updateInt(1, nr);
rs.updateString(2, name);
rs.insertRow();
the record is inserted into the result set and the database, but it is not shown in my JTable.
Does anyone have a clue how I can display the inserted row in the JTable without re-executing the query?
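One common approach to the problem described above is to mirror the inserted values into the JTable's model right after insertRow(), so the table repaints without re-running the query. A minimal headless sketch using DefaultTableModel (the column names and values here are illustrative; the JDBC calls themselves are assumed, not shown):

```java
import javax.swing.table.DefaultTableModel;

public class InsertRowDemo {
    public static void main(String[] args) {
        // Model backing the JTable; in the real app this is the table's model.
        DefaultTableModel model = new DefaultTableModel(new Object[] {"NR", "NAME"}, 0);
        model.addRow(new Object[] {1, "first"});

        // After rs.moveToInsertRow(); rs.updateInt(1, nr); rs.updateString(2, name);
        // rs.insertRow(); -- add the same values to the model; addRow fires
        // fireTableRowsInserted for us, so the JTable updates immediately.
        int nr = 2;
        String name = "second";
        model.addRow(new Object[] {nr, name});

        System.out.println(model.getRowCount());    // 2
        System.out.println(model.getValueAt(1, 1)); // second
    }
}
```

This sidesteps the result-set visibility rules entirely: the JTable is driven by its TableModel, not by the ResultSet.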
http://download-west.oracle.com/docs/cd/A87860_01/doc/java.817/a83724/resltse7.h
I have referred to the following links but am still clueless; help, Guru!
I am really in trouble. I am just near the solution using the DatabaseMetaData,
but couldn't get the idea.
==================================================
http://download-west.oracle.com/docs/cd/A87860_01/doc/java.817/a83724/resltse7.htm
Seeing Database Changes Made Internally and Externally
This section discusses the ability of a result set to see the following:
its own changes (DELETE, UPDATE, or INSERT operations within the result set), referred to as internal changes
changes made from elsewhere (either from your own transaction outside the result set, or from other committed transactions), referred to as external changes
Near the end of the section is a summary table.
Note:
External changes are referred to as "other's changes" in the Sun Microsystems JDBC 2.0 specification.
Seeing Internal Changes
The ability of an updatable result set to see its own changes depends on both the result set type and the kind of change (UPDATE, DELETE, or INSERT). This is discussed at various points throughout the "Updating Result Sets" section beginning on , and is summarized as follows:
Internal DELETE operations are visible for scrollable result sets (scroll-sensitive or scroll-insensitive), but are not visible for forward-only result sets.
After you delete a row in a scrollable result set, the preceding row becomes the new current row, and subsequent row numbers are updated accordingly.
Internal UPDATE operations are always visible, regardless of the result set type (forward-only, scroll-sensitive, or scroll-insensitive).
Internal INSERT operations are never visible, regardless of the result set type (neither forward-only, scroll-sensitive, nor scroll-insensitive).
An internal change being "visible" essentially means that a subsequent getXXX() call will see the data changed by a preceding updateXXX() call on the same data item.
JDBC 2.0 DatabaseMetaData objects include the following methods to verify this. Each takes a result set type as input (ResultSet.TYPE_FORWARD_ONLY, ResultSet.TYPE_SCROLL_SENSITIVE, or ResultSet.TYPE_SCROLL_INSENSITIVE).
boolean ownDeletesAreVisible(int) throws SQLException
boolean ownUpdatesAreVisible(int) throws SQLException
boolean ownInsertsAreVisible(int) throws SQLException
Note:
When you make an internal change that causes a trigger to execute, the trigger changes are effectively external changes. However, if the trigger affects data in the row you are updating, you will see those changes for any scrollable/updatable result set, because an implicit row refetch occurs after the update.
Seeing External Changes
Only a scroll-sensitive result set can see external changes to the underlying database, and it can only see the changes from external UPDATE operations. Changes from external DELETE or INSERT operations are never visible.
Note:
Any discussion of seeing changes from outside the enclosing transaction presumes the transaction itself has an isolation level setting that allows the changes to be visible.
For implementation details of scroll-sensitive result sets, including exactly how and how soon external updates become visible, see "Oracle Implementation of Scroll-Sensitive Result Sets".
JDBC 2.0 DatabaseMetaData objects include the following methods to verify this. Each takes a result set type as input (ResultSet.TYPE_FORWARD_ONLY, ResultSet.TYPE_SCROLL_SENSITIVE, or ResultSet.TYPE_SCROLL_INSENSITIVE).
boolean othersDeletesAreVisible(int) throws SQLException
boolean othersUpdatesAreVisible(int) throws SQLException
boolean othersInsertsAreVisible(int) throws SQLException
Note:
Explicit use of the refreshRow() method, described in "Refetching Rows", is distinct from this discussion of visibility. For example, even though external updates are "invisible" to a scroll-insensitive result set, you can explicitly refetch rows in a scroll-insensitive/updatable result set and retrieve external changes that have been made. "Visibility" refers only to the fact that the scroll-insensitive/updatable result set would not see such changes automatically and implicitly.
Visibility versus Detection of External Changes
Regarding changes made to the underlying database by external sources, there are two similar but distinct concepts with respect to visibility of the changes from your local result set:
visibility of changes
detection of changes
A change being "visible" means that when you look at a row in the result set, you can see new data values from changes made by external sources to the corresponding row in the database.
A change being "detected", however, means that the result set is aware that this is a new value since the result set was first populated.
With Oracle8i release 8.1.6 and higher, even when an Oracle result set sees new data (as with an external UPDATE in a scroll-sensitive result set), it has no awareness that this data has changed since the result set was populated. Such changes are not "detected".
JDBC 2.0 DatabaseMetaData objects include the following methods to verify this. Each takes a result set type as input (ResultSet.TYPE_FORWARD_ONLY, ResultSet.TYPE_SCROLL_SENSITIVE, or ResultSet.TYPE_SCROLL_INSENSITIVE).
boolean deletesAreDetected(int) throws SQLException
boolean updatesAreDetected(int) throws SQLException
boolean insertsAreDetected(int) throws SQLException
It follows, then, that result set methods specified by JDBC 2.0 to detect changes--rowDeleted(), rowUpdated(), and rowInserted()--will always return false with the 8.1.6 Oracle JDBC drivers. There is no use in calling them.
Summary of Visibility of Internal and External Changes
Table 12-1 summarizes the discussion in the preceding sections regarding whether a result set object in the Oracle JDBC implementation can see changes made internally through the result set itself, and changes made externally to the underlying database from elsewhere in your transaction or from other committed transactions.
Table 12-1 Visibility of Internal and External Changes for Oracle JDBC
Result Set Type    | Internal DELETE | Internal UPDATE | Internal INSERT | External DELETE | External UPDATE | External INSERT
forward-only       | no              | yes             | no              | no              | no              | no
scroll-sensitive   | yes             | yes             | no              | no              | yes             | no
scroll-insensitive | yes             | yes             | no              | no              | no              | no
(each cell answers: can the result set see this kind of change?)
For implementation details of scroll-sensitive result sets, including exactly how and how soon external updates become visible, see "Oracle Implementation of Scroll-Sensitive Result Sets".
Notes:
Remember that explicit use of the refreshRow() method, described in "Refetching Rows", is distinct from the concept of "visibility" of external changes. This is discussed in "Seeing External Changes".
Remember that even when external changes are "visible", as with UPDATE operations underlying a scroll-sensitive result set, they are not "detected". The result set rowDeleted(), rowUpdated(), and rowInserted() methods always return false. This is further discussed in "Visibility versus Detection of External Changes".
Oracle Implementation of Scroll-Sensitive Result Sets
The Oracle implementation of scroll-sensitive result sets involves the concept of a window, with a window size that is based on the fetch size. The window size affects how often rows are updated in the result set.
Once you establish a current row by moving to a specified row (as described in "Positioning in a Scrollable Result Set"), the window consists of the N rows in the result set starting with that row, where N is the fetch size being used by the result set (see "Fetch Size"). Note that there is no current row, and therefore no window, when a result set is first created. The default position is before the first row, which is not a valid current row.
As you move from row to row, the window remains unchanged as long as the current row stays within that window. However, once you move to a new current row outside the window, you redefine the window to be the N rows starting with the new current row.
Whenever the window is redefined, the N rows in the database corresponding to the rows in the new window are automatically refetched through an implicit call to the refreshRow() method (described in "Refetching Rows"), thereby updating the data throughout the new window.
So external updates are not instantaneously visible in a scroll-sensitive result set; they are only visible after the automatic refetches just described.
For a sample application that demonstrates the functionality of a scroll-sensitive result set, see "Scroll-Sensitive Result Set--ResultSet5.java".
Note:
Because this kind of refetching is not a highly efficient or optimized methodology, there are significant performance concerns. Consider carefully before using scroll-sensitive result sets as currently implemented. There is also a significant tradeoff between sensitivity and performance. The most sensitive result set is one with a fetch size of 1, which would result in the new current row being refetched every time you move between rows. However, this would have a significant impact on the performance of your application.
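The window mechanics described above can be sketched as a small simulation. This is a hypothetical illustration of the refetch rule only, not Oracle's actual implementation: the window is the N rows starting at the current row (N = fetch size), and moving outside the window redefines it and triggers one implicit refetch.

```java
// WindowSimulator.java -- hypothetical sketch, NOT Oracle's implementation.
// Models when a scroll-sensitive result set would refetch its window.
public class WindowSimulator {
    private final int fetchSize;
    private int windowStart = -1;   // -1: no window yet (positioned before first row)
    private int refetchCount = 0;

    public WindowSimulator(int fetchSize) {
        this.fetchSize = fetchSize;
    }

    /** Move the current row to absolute position row (0-based). */
    public void moveTo(int row) {
        // If the new current row is inside the current window, nothing happens.
        if (windowStart >= 0 && row >= windowStart && row < windowStart + fetchSize) {
            return;
        }
        // Otherwise the window is redefined to the N rows starting at the new
        // current row, and those rows are refetched (implicit refreshRow()).
        windowStart = row;
        refetchCount++;
    }

    public int getRefetchCount() {
        return refetchCount;
    }

    public static void main(String[] args) {
        WindowSimulator sim = new WindowSimulator(10);
        for (int r = 0; r < 25; r++) sim.moveTo(r);   // iterate rows 0..24
        // Windows: [0,10) at row 0, [10,20) at row 10, [20,30) at row 20
        System.out.println(sim.getRefetchCount());     // prints 3
    }
}
```

With a fetch size of 1, every call to moveTo() would land outside the one-row window, reproducing the worst-case behavior the note warns about.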
How can I implement this?
JDBC 2.0 DatabaseMetaData objects include the following methods to verify this. Each takes a result set type as input (ResultSet.TYPE_FORWARD_ONLY, ResultSet.TYPE_SCROLL_SENSITIVE, or ResultSet.TYPE_SCROLL_INSENSITIVE).
boolean deletesAreDetected(int) throws SQLException
boolean updatesAreDetected(int) throws SQLException
boolean insertsAreDetected(int) throws SQLException
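In real code you would obtain the metadata from your connection with conn.getMetaData() and call these three methods. A sketch, using a reflective stub in place of a live Oracle connection so the snippet is self-contained (the stub simply answers false, which is what the Oracle driver reports for these result set types):

```java
import java.lang.reflect.Proxy;
import java.sql.DatabaseMetaData;
import java.sql.ResultSet;

// Sketch: querying a driver's change-detection support for a result set type.
public class DetectionCheck {

    /** Print what the given metadata reports for a result set type. */
    public static void report(DatabaseMetaData md, int type) throws Exception {
        System.out.println("deletes detected: " + md.deletesAreDetected(type));
        System.out.println("updates detected: " + md.updatesAreDetected(type));
        System.out.println("inserts detected: " + md.insertsAreDetected(type));
    }

    /** Stub metadata that answers false to every *AreDetected call. */
    public static DatabaseMetaData stub() {
        return (DatabaseMetaData) Proxy.newProxyInstance(
                DatabaseMetaData.class.getClassLoader(),
                new Class<?>[] { DatabaseMetaData.class },
                (proxy, method, args) -> {
                    if (method.getName().endsWith("AreDetected")) return false;
                    throw new UnsupportedOperationException(method.getName());
                });
    }

    public static void main(String[] args) throws Exception {
        // With a real database: DatabaseMetaData md = conn.getMetaData();
        report(stub(), ResultSet.TYPE_SCROLL_SENSITIVE);
    }
}
```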
Navigation Cache - Object Size
We're currently in the middle of our upgrade to SP15, and one of the new features we're implementing is the navigation cache. By default, the number of objects to be cached is set at 5000. So far the behavior is that all navigation objects (pages) count as objects. Then, for each user entering the portal with a unique role combination, an additional set of objects (equal to the number of roles they have) is added. In our dev environment, where most users are superadmins, we're at around 1300 objects.
My question is: what object count can the cache reach before memory usage causes a performance hit? Is 5000 a safe limit, or can it be higher? And what happens when the object limit is reached? Will it behave like a queue, where the oldest cached object gets deleted when a new one is added?
Any info on this subject is welcome. Any experiences with high availability environments using the navigation cache would be appreciated. Thanks

Hi,
The primary objective of the navigation cache is to improve performance on the server side. By keeping the navigation nodes in memory, it reduces the number of calls to the PCD or any other backend systems.
The cache is implemented in a first-in, first-out (FIFO) manner, so once the limit is reached, the oldest cached object is evicted when a new one is added.
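A FIFO eviction policy like this can be sketched with a LinkedHashMap. This is an illustration of the eviction behavior only, not SAP's implementation; the limit of 2 is chosen just to make the eviction visible:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal FIFO cache sketch: once the object limit is reached, adding a
// new entry evicts the oldest one (insertion order, not access order).
public class FifoCache<K, V> extends LinkedHashMap<K, V> {
    private final int limit;

    public FifoCache(int limit) {
        super(16, 0.75f, false);  // false = iterate (and evict) in insertion order
        this.limit = limit;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > limit;    // evict the oldest entry once past the limit
    }

    public static void main(String[] args) {
        FifoCache<String, String> cache = new FifoCache<>(2);
        cache.put("nodeA", "...");
        cache.put("nodeB", "...");
        cache.put("nodeC", "...");           // evicts nodeA (the oldest)
        System.out.println(cache.keySet());  // prints [nodeB, nodeC]
    }
}
```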
Try this link for more information:
http://help.sap.com/saphelp_erp2005/helpdata/en/5f/2720a513ea4ce9a5a4e5d285a1c09c/frameset.htm
Hope it helps.
Best Regards,
Shimon.
Memory Notification: Library Cache Object loaded in Heap size 2262K exceeds notification threshold
Dear all
I am facing the following problem. I am using Oracle 10gR2 on Windows.
Please help me.
Memory Notification: Library Cache Object loaded into SGA
Heap size 2262K exceeds notification threshold (2048K)
KGL object name :XDB.XDbD/PLZ01TcHgNAgAIIegtw==
Thanks

This is a normal warning message in release 10.2.0.1.0. It is caused by a bug: the default value of the hidden instance parameter _kgl_large_heap_warning_threshold is set too low (2048K). The bug is harmless, but it fills the alert.log with these messages, which makes the file difficult to read and the real errors hard to spot.
Just set the _kgl_large_heap_warning_threshold undocumented instance parameter to a higher value, for example 8388608 (8MB), or anything above the largest heap size reported. This is meant to be corrected in 10.2.0.2.0, but in the meantime you can raise the parameter manually.
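Assuming the workaround described above, the change would look something like this (a sketch; adjust the value and scope to your environment, and test on a non-production system first since this is an undocumented parameter):

```sql
-- Raise the hidden threshold to 8 MB; hidden parameters need the quoted name.
ALTER SYSTEM SET "_kgl_large_heap_warning_threshold" = 8388608 SCOPE = SPFILE;
-- Restart the instance (or use an appropriate SCOPE) for it to take effect.
```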
For further reference, take a look at this Metalink note:
Memory Notification: Library Cache Object Loaded Into Sga
Doc ID: Note:330239.1
~ Madrid
http://hrivera99.blogspot.com/