How to cache objects?
Hi,
I have an application that connects to a database using JDBC. It basically implements JSR-170, and what I want to do is cache the objects so that I don't have to make a trip to the database every time. How can I accomplish this? I looked into Java object caching, but it looks like I would still need to do a lot on my side: first, to cache the objects, and then to retrieve them using an index. If I want to store the objects for a tree hierarchy and for multiple users, what would I need to do?
Thanks.
For a simple cache, you might do a google for objectcache.java. This will give you some idea of what a cache does and how it's coded.
I have no idea what you mean by 'retrieve them using an index'. If you want to cache a tree hierarchy, you might cache DefaultMutableTreeNodes. I'm not sure what you mean by multiple users. Are you talking about server-side processing?
Similar Messages
-
How to cache the objects MANUALLY?
hello
Some O/R mapping tools can cache the objects that have been queried, so the next time those objects are required they don't need to access the database again; the tools can also monitor database updates.
I wonder how I can implement such a "cache" function MANUALLY? Because I DON'T want to use ANY O/R mapping tools. I only use JDBC to query the database, then generate the objects.
Can anyone give me some clues? Or articles? Or sample code?
Thank you!!!!
No, you don't understand me. What I want to know is the mechanism of the cache, and how to implement it myself without using the O/R mapping tools.
The DAO pattern can encapsulate the database access, but it can NOT cache the objects.
First you need to define how the caching occurs.
- Can the data in the database change without going through your code?
- Are there multiple copies of your app running at the same time? If yes, what happens when one of them updates the data?
- How many of these can there be and what impact will this have on memory?
- etc.
You also need to identify the 'identity' of an object.
A simple strategy....
- Some layer requests an object using the 'identity'.
- The database layer code looks in a hash for the 'identity'. If it finds it, it returns it.
- If it doesn't find it, it uses a DAO to load it, puts it in the hash, then returns it. -
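The simple strategy above can be sketched in plain Java. This is only a sketch, not a production cache: the `Dao` interface and the class names are hypothetical stand-ins for whatever your JDBC layer provides, there is no eviction policy, and thread safety is just a synchronized map.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical DAO interface -- stands in for your JDBC-backed loader.
interface Dao<K, V> {
    V load(K identity);
}

// A minimal identity-keyed cache: look in the hash first, fall back to the DAO.
class SimpleObjectCache<K, V> {
    private final Map<K, V> cache = new HashMap<>();
    private final Dao<K, V> dao;

    SimpleObjectCache(Dao<K, V> dao) {
        this.dao = dao;
    }

    synchronized V get(K identity) {
        V value = cache.get(identity);
        if (value == null) {            // not cached yet
            value = dao.load(identity); // trip to the database
            cache.put(identity, value); // remember it for next time
        }
        return value;
    }

    // Needed when the database can change behind your back.
    synchronized void invalidate(K identity) {
        cache.remove(identity);
    }
}
```

Note this only stays correct if all updates go through your code; if the data can change outside your app, you need an invalidation strategy, which is exactly why the questions above matter.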
How to remove objects cached in data cache?
With the following query, I can find out how the table is cached in the data cache, along with its indexes:
Select CacheName,DBName,OwnerName,ObjectName,IndexID,sum(CachedKB) as "CachedKb"
from master..monCachedObject
where ObjectName = 'invent'
group by CacheName,DBName,OwnerName,ObjectName,IndexID
order by CacheName,DBName,OwnerName,ObjectName,IndexID
Is it possible to remove the cached data for a particular index from the data cache? I want to verify that the performance issue is on a particular index and reproduce it.
Simply unbind the objects from their named cache, then bind the objects back to their named cache again. The "re-bind" step doesn't have to be done if we are talking about the "default data cache".
For objects bound to named cache other than "default data cache":
exec sp_unbindcache <dbname>,<tablename>,<indexname>
exec sp_bindcache <cachename>,<dbname>,<tablename>,<indexname>
Now data will be read from disk on first retrieval.
For "default data cache" objects:
exec sp_unbindcache <dbname>,<tablename>,<indexname> -
After REFRESH the cached object is not consistent with the database table
After REFRESH, the cached object is not consistent with the database table. Why?
I created a JDBC connection with the Oracle database (HR schema) using JDeveloper(10.1.3) and then I created an offline database (HR schema)
in JDeveloper from the existing database tables (HR schema). Then I made some updates to the JOBS database table using SQL*Plus.
Then I returned to the JDeveloper tool and refreshed the HR connection, but I found that none of the changes made to the JOBS database table showed up in the offline table in JDeveloper.
How can I make JDeveloper's offline tables synchronize with the underlying database tables?
qkc,
Once you create an offline table, it's just a copy of a table definition as of the point in time you brought it in from the database. Refreshing the connection, as you describe it, just refreshes the database browser, not any offline objects. If you want to synchronize the offline table, right-click the offline table and choose "Generate or Reconcile Objects" to reconcile the object to the database. I just tried this in 10.1.3.3 (not the latest 10.1.3, I know), and it works properly.
John -
Re: Update Cache Objects in Delta Process Doesn't work
Hi All,
Re: Update Cache Objects in Delta Process doesn't work.
BI 7 - SP 17
This is the scenario I am working on: I am running a BEx query on a cube (via a multi) with a bunch of aggregates.
The daily extraction and aggregate rollup are correct, but when I run the BEx query it displays incorrect key figure values compared to what we see in LISTCUBE for the InfoCube.
So when I ran the same query in RSRT with "Do not use Cache", it gave correct results; when I then ran the BEx query again, it fixed itself and displayed correctly.
InfoCube - standard & No compression for requests
Query Properties are
Read Mode - H
Req Status - 1
Cache - Main Memory Cache Without swapping
Update Cache Objects in Delta Process (Flag selected)
SP grouping - 1
This problem occurs once every couple of weeks, and my question is: is there a permanent fix for it?
OR should we turn the cache off?
Can anyone please help.
Thanking You.
Rao
Hi Kevin/Rao,
We are currently experiencing problems with the 'Update Cache Objects in Delta' process. Did either of you manage to resolve your issues, and if so, how? -
TopLink cached object changes are not committed to the database
Hello,
I'm using TopLink 10 and I have a writing issue with a use case:
1. I read an object using TopLink that is in the IdentityMap
2. Using JSF this object is edited through a web form.
3. I give the modified object to the data layer and try to modify it inside a unit of work:
UnitOfWork uow = session.acquireUnitOfWork();
//laspEtapeDef comes from JSF and has been modified previously
LaspEtapeDef laspEtapeDefClone = uow.readObject( laspEtapeDef );
//I update the clone field
laspEtapeDefClone.setDescription(laspEtapeDef.getDescription());
uow.commit();
4. I then use the same object again to display it once modified.
The object is modified in the cache, but the modified fields are never committed to the database. This code works only if I disable the cache.
So, I've modified my JSF form to send the fields instead of modifying directly the object.
My question: is there a way to commit changes made to a cached object?
I've found the following section in the documentation that explains the problem but doesn't give the solution:
http://docs.oracle.com/cd/E14571_01/web.1111/b32441/uowadv.htm#CACGDJJH
Any idea?
How are you reading in the object initially? The problem is likely that you are modifying an object from the session cache. When you then read the object from the uow, it uses the object in the session cache as the backup, so there will not appear to be any changes to persist to the database.
You will need to make a copy of the object for modification, or use the copy from the UnitOfWork to make the changes instead of working directly on the object in the session. Disabling the cache means there is no copy in the session cache to use as a backup, so the uow read has to build the object from the database.
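The pattern described above, editing a detached copy and then applying the changes to the unit-of-work clone rather than mutating the cached original, can be sketched in plain Java. The class below is a simplified stand-in for the mapped object in this thread, and the actual TopLink calls (acquireUnitOfWork, readObject, commit) are omitted; this only illustrates the copy discipline, not the TopLink API.

```java
// Simplified stand-in for the mapped class in the thread above.
class LaspEtapeDef {
    private String description;

    public String getDescription() { return description; }
    public void setDescription(String d) { this.description = d; }

    // Detached copy handed to the web form, so the cached original
    // stays untouched and can keep serving as the backup copy.
    public LaspEtapeDef copyForEdit() {
        LaspEtapeDef c = new LaspEtapeDef();
        c.description = this.description;
        return c;
    }

    // Applied to the working clone at commit time: copy the edited
    // fields onto the clone instead of mutating the cached original.
    public void applyEditsFrom(LaspEtapeDef edited) {
        this.description = edited.getDescription();
    }
}
```

Because the cached original is never touched by the form, the persistence layer can still diff the working clone against it and detect what changed.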
Best Regards,
Chris -
Latch: row cache objects
Hello everyone,
Note: apologies for the bad formatting; I tried, but it seems I forgot how to use it.
BANNER
Oracle Database 11g Release 11.2.0.2.0 - 64bit Production
I've seen high "*latch: row cache objects*" waits in the SP/ASH report from about 14 hours back, when the users were unable to connect to the database. There were
WARNING: inbound connection timed out (ORA-3136)
Time: 30-APR-2012 02:24:36
Tracing not turned on.
Tns error struct:
errors like the above all over the alert log for the six-minute duration of the problem.
I've put a few records in bold, from which I concluded that the problem was with the "dc_users" cache.
Can anybody tell me how/where I should proceed forward ?
SP report:
Instance Efficiency Indicators
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 100.00 Redo NoWait %: 100.00
Buffer Hit %: 99.84 Optimal W/A Exec %: 100.00
Library Hit %: 97.43 Soft Parse %: 87.86
Execute to Parse %: 22.54 Latch Hit %: 99.95
Parse CPU to Parse Elapsd %: 0.30 % Non-Parse CPU: 87.83
Shared Pool Statistics Begin End
Memory Usage %: 45.09 46.98
% SQL with executions>1: 11.49 13.15
% Memory for SQL w/exec>1: 72.96 21.33
Top 5 Timed Events Avg %Total
~~~~~~~~~~~~~~~~~~ wait Call
Event Waits Time (s) (ms) Time
latch: row cache objects 6,655 634,260 95306 97.0
log file sync 289,923 6,469 22 1.0
CPU time 5,039 .8
db file sequential read 310,084 2,840 9 .4
log file parallel write 451,706 1,144 3 .2
ASH Report
Analysis Begin Time: 30-Apr-12 02:24:00
Analysis End Time: 30-Apr-12 02:30:00
Elapsed Time: 6.0 (mins)
Begin Data Source: DBA_HIST_ACTIVE_SESS_HISTORY
in AWR snapshot 12185
End Data Source: DBA_HIST_ACTIVE_SESS_HISTORY
in AWR snapshot 12185
Sample Count: 1,385
Average Active Sessions: 38.47
Avg. Active Session per CPU: 1.60
Report Target: None specified
Top User Events DB/Inst: NIKU/niku (Apr 30 02:24 to 02:30)
Avg Active
Event Event Class % Event Sessions
latch: row cache objects Concurrency 75.45 29.03
CPU + Wait for CPU CPU 9.75 3.75
log file sync Commit 3.83 1.47
db file sequential read User I/O 3.61 1.39
Top Event P1/P2/P3 Values DB/Inst: NIKU/niku (Apr 30 02:24 to 02:30)
Event % Event P1 Value, P2 Value, P3 Value % Activity
Parameter 1 Parameter 2 Parameter 3
latch: row cache objects 75.60 "42287858200","279","0" 75.60
address number tries
1* select addr, latch#, child#, name, misses, gets from v$latch_children where name like '%row%cache%objec%' order by gets , misses
niku> /
ADDR LATCH# CHILD# NAME MISSES GETS
0000000A16FF21C8 279 26 row cache objects 0 0
0000000A16FF14C8 279 2 row cache objects 0 0
00000009D88D7ED8 279 3 row cache objects 0 0
0000000A16FF1B48 279 14 row cache objects 0 0
00000009D88D8558 279 15 row cache objects 0 0
0000000A16FF1CE8 279 17 row cache objects 0 0
0000000A26265A28 279 19 row cache objects 0 0
0000000A16FF1E88 279 20 row cache objects 0 0
00000009D88D8898 279 21 row cache objects 0 0
0000000A26265BC8 279 22 row cache objects 0 0
0000000A16FF2028 279 23 row cache objects 0 0
00000009D88D8A38 279 24 row cache objects 0 0
0000000A26265D68 279 25 row cache objects 0 0
00000009D88D8BD8 279 27 row cache objects 0 0
0000000A26265F08 279 28 row cache objects 0 0
00000009D88D8D78 279 30 row cache objects 0 0
0000000A262660A8 279 31 row cache objects 0 0
0000000A16FF2508 279 32 row cache objects 0 0
0000000A16FF26A8 279 35 row cache objects 0 0
00000009D88D90B8 279 36 row cache objects 0 0
0000000A262663E8 279 37 row cache objects 0 0
0000000A262668C8 279 46 row cache objects 0 0
0000000A26266A68 279 49 row cache objects 0 0
0000000A16FF2368 279 29 row cache objects 0 11
0000000A16FF2848 279 38 row cache objects 0 116
0000000A16FF29E8 279 41 row cache objects 0 200
00000009D88D93F8 279 42 row cache objects 0 318
00000009D88D9258 279 39 row cache objects 0 1010
0000000A16FF2EC8 279 50 row cache objects 0 1406
00000009D88D9598 279 45 row cache objects 0 1472
0000000A26266588 279 40 row cache objects 0 1705
0000000A26266728 279 43 row cache objects 0 7383
0000000A16FF2B88 279 44 row cache objects 0 32346
00000009D88D98D8 279 51 row cache objects 19 63948
0000000A26265888 279 16 row cache objects 0 88045
0000000A26266248 279 34 row cache objects 0 141176
00000009D88D9738 279 48 row cache objects 0 326672
0000000A16FF19A8 279 11 row cache objects 867 1770385
00000009D88D8078 279 6 row cache objects 9 1979542
0000000A16FF2D28 279 47 row cache objects 2 3435018
00000009D88D86F8 279 18 row cache objects 2557 14956121
0000000A26265068 279 1 row cache objects 224 24335868
0000000A262653A8 279 7 row cache objects 29760 133991553
00000009D88D8F18 279 33 row cache objects 60612 677263122
00000009D88D83B8 279 12 row cache objects 23981 739014460
0000000A26265208 279 4 row cache objects 19973399 852043775
0000000A26265548 279 10 row cache objects 280137 856097342
00000009D88D8218 279 9 row cache objects 715879777 1219000976
0000000A262656E8 279 13 row cache objects 3856073 2397402780
0000000A16FF1668 279 5 row cache objects 12763217 2920278217
*0000000A16FF1808 279 8 row cache objects 67329804 4145389092*
51 rows selected.
niku> list
1 select addr, latch#, child#, name, misses, gets from v$latch_children where name like '%row%cache%objec%' order by gets , misses
niku> select distinct s.kqrstcln latch#,r.cache#,r.parameter name,r.type,r.subordinate#
from v$rowcache r,x$kqrst s
where r.cache#=s.kqrstcid
order by 1,4,5;
LATCH# CACHE# NAME TYPE SUBORDINATE#
1 3 dc_rollback_segments PARENT
2 1 dc_free_extents PARENT
3 4 dc_used_extents PARENT
4 2 dc_segments PARENT
5 0 dc_tablespaces PARENT
6 5 dc_tablespace_quotas PARENT
7 6 dc_files PARENT
*8 10 dc_users PARENT*
*8 7 dc_users SUBORDINATE 0*
*8 7 dc_users SUBORDINATE 1*
*8 7 dc_users SUBORDINATE 2*
9 8 dc_objects PARENT
9 8 dc_object_grants SUBORDINATE 0
10 17 dc_global_oids PARENT
11 12 dc_constraints PARENT
12 13 dc_sequences PARENT
13 16 dc_histogram_defs PARENT
13 16 dc_histogram_data SUBORDINATE 0
13 16 dc_histogram_data SUBORDINATE 1
14 54 dc_sql_prs_errors PARENT
15 32 kqlsubheap_object PARENT
16 19 dc_table_scns PARENT
16 19 dc_partition_scns SUBORDINATE 0
17 18 dc_outlines PARENT
18 14 dc_profiles PARENT
19 47 realm cache PARENT
19 47 realm auth SUBORDINATE 0
20 48 Command rule cache PARENT
21 49 Realm Object cache PARENT
21 49 Realm Subordinate Cache SUBORDINATE 0
22 46 Rule Set Cache PARENT
23 34 extensible security user and rol PARENT
24 35 extensible security principal pa PARENT
25 37 extensible security UID to princ PARENT
26 36 extensible security principal na PARENT
27 33 extensible security principal ne PARENT
28 38 XS security class privilege PARENT
29 39 extensible security midtier cach PARENT
30 43 AV row cache 1 PARENT
31 44 AV row cache 2 PARENT
32 45 AV row cache 3 PARENT
33 15 global database name PARENT
34 20 rule_info PARENT
35 21 rule_or_piece PARENT
35 21 rule_fast_operators SUBORDINATE 0
36 23 dc_qmc_ldap_cache_entries PARENT
37 52 qmc_app_cache_entries PARENT
38 53 qmc_app_cache_entries PARENT
39 27 qmtmrcin_cache_entries PARENT
40 28 qmtmrctn_cache_entries PARENT
41 29 qmtmrcip_cache_entries PARENT
42 30 qmtmrctp_cache_entries PARENT
43 31 qmtmrciq_cache_entries PARENT
44 26 qmtmrctq_cache_entries PARENT
45 9 qmrc_cache_entries PARENT
46 50 qmemod_cache_entries PARENT
47 24 outstanding_alerts PARENT
48 22 dc_awr_control PARENT
49 25 SMO rowcache PARENT
50 40 sch_lj_objs PARENT
51 41 sch_lj_oids PARENT
61 rows selected.
niku> select parameter, gets from v$rowcache order by gets desc;
PARAMETER GETS
dc_users 2802019571
dc_tablespaces 2405092307
dc_objects 1815427326
jjk wrote:
I've already been thru the link that you've mentioned and unfortunately couldn't make much use of it.
I didn't think it was really likely to be relevant, but there was always a long shot that it might have given you a clue.
Considering that "dc_users" had the maximum gets, I thought (going by what I read on the internet) that it might be the point of contention. However, I did observe high misses on child# 9, which is "dc_objects".
It's often the case that the misses are more important than the gets when you see lots of gets and misses on a few latches/caches. The bit that might have been most instructive was the dictionary cache section from the AWR showing gets, misses, scans, scan misses, etc. It might have told us a little about what was going in and out of the dictionary cache and let us guess why.
In alert log:
Sun Apr 29 02:20:00 2012
29-APR-2012 02:20:00 -- xxxxxxx package - REGRANT_READONLY Begin re-grant read only roles
Sun Apr 29 02:24:34 2012
29-APR-2012 02:24:34 -- xxxxxxx package - REGRANT_READONLY End re-grant read only roles
Sun Apr 29 02:30:00 2012
29-APR-2012 02:30:00 -- xxxxxxx package - REGRANT_READWRITE Begin re-grant read write roles
Sun Apr 29 02:32:02 2012
29-APR-2012 02:32:02 -- xxxxxxx package - REGRANT_READWRITE End re-grant read write roles
Is this code that "regrants" roles to users who already have them? That's what it sounds like, and that sounds like something that would impact various parts of the dictionary cache, especially dc_users, and possibly dc_objects.
CPU per Elap per Old
Executions Rows Processed Rows per Exec Exec (s) Exec (s) Hash Value
161,198 1,244 0.0 0.00 0.00 978935325
select /*+ rule */ c.name, u.name from con$ c, cdef$ cd, user$ u
where c.con# = cd.con# and cd.enabled = :1 and c.owner# = u.us
er#
159,955 159,952 1.0 0.00 0.00 2458412332
select o.name, u.name from obj$ o, user$ u where o.obj# = :1 an
d o.owner# = u.user#
159,932 6 0.0 0.00 0.00 2636710067
insert into objauth$(option$,grantor#,obj#,privilege#,grantee#,c
ol#,sequence#) values(decode(:1,0,null,:1),:2,:3,:4,:5,decode(:6
,0,null,:6),object_grant.nextval)
147,168 147,168 1.0 0.00 0.00 3468666020
select text from view$ where rowid=:1
124,635 124,635 1.0 0.00 0.00 564166580
select count(*) from ( select u.
name from registry$ r, us
er$ u where r.status in (1,3,5)
and r.namespace = 'SERVER'
The first one looks like a response to a constraint being breached.
The third one looks like something that might happen when you grant a privilege on an object to a user - and maybe the first one happens if the user has already got it and the insert raises a "duplicate key" error. The fourth one commonly happens when you have to re-optimize a query containing a view - and when you execute DDL (such as changing privileges on an object) you invalidate SQL and have to re-optimize it eventually. I can't remember where I've seen the second one appearing.
If you have a process that tries to do a lot of grants on objects to users and roles in a very short time, it's quite likely to create havoc in the dictionary cache - check what that package was up to and why it runs.
What is the missing information?
When I looked at some of your postings, the output didn't match the query; some of the later columns had gone missing. This might have been my browser rather than your input, though.
Regards
Jonathan Lewis -
How the cache.invoke() works?
Hi, I am trying to play with Coherence 3.4.2 to find out how cache.invoke() works. I modified the SimpleCacheExplorer.java that ships as example code, as follows:
1) I defined a distributed cache as follows:
<?xml version="1.0"?>
<!DOCTYPE cache-config SYSTEM "cache-config.dtd">
<cache-config>
<caching-scheme-mapping>
<!--
Caches with any name will be created as default replicated.
-->
<cache-mapping>
<cache-name>*</cache-name>
<scheme-name>default-dist</scheme-name>
</cache-mapping>
</caching-scheme-mapping>
<caching-schemes>
<!--
Default Replicated caching scheme.
-->
<distributed-scheme>
<scheme-name>default-dist</scheme-name>
<service-name>DistributedCache</service-name>
<backing-map-scheme>
<class-scheme>
<scheme-ref>default-backing-map</scheme-ref>
</class-scheme>
</backing-map-scheme>
</distributed-scheme>
<!--
Default backing map scheme definition used by all
The caches that do not require any eviction policies
-->
<class-scheme>
<scheme-name>default-backing-map</scheme-name>
<class-name>com.tangosol.util.SafeHashMap</class-name>
</class-scheme>
</caching-schemes>
</cache-config>
2) I modified SimpleCacheExplorer.java as follows:
class MyProcessor extends com.tangosol.util.processor.AbstractProcessor {
    private static final long serialVersionUID = 8004040647128795431L;
    private String cacheName = null;
    public MyProcessor(String cacheName) {
        this.cacheName = cacheName;
    }
    public Object process(Entry entry) {
        NamedCache cache = CacheFactory.getCache(cacheName);
        cache.clear();
        return null;
    }
}
public class SimpleCacheExplorer {
    /**
     * Entry point.
     * @param asArg command line arguments
     */
    public static void main(String[] asArg) throws Exception {
        NamedCache cache = CacheFactory.getCache("Test");
        cache.put("1", "one");
        cache.invoke("1", new MyProcessor("Test"));
        System.out.println("cache size = " + cache.size());
    }
}
3) Then I got the following exception:
com.tangosol.util.AssertionException: poll() is a blocking call and cannot be called on the Service thread
Is there a way I can do the cache clear in the invocation way?
Thanks
The EntryProcessors operate against a particular Entry and usually execute on the node where the entry is stored. What you can do is remove the particular entry this entry processor is aimed at by calling entry.remove().
Here is a modified entry processor that removes an entry:
class MyProcessor extends com.tangosol.util.processor.AbstractProcessor {
    private static final long serialVersionUID = 8004040647128795431L;
    public MyProcessor() {}
    public Object process(Entry entry) {
        if (entry.isPresent()) {
            entry.remove(false);
        }
        return null;
    }
}
Getting hold of the CacheFactory while executing an EntryProcessor is a re-entrant call into the DistributedCacheService, which is what raises the exception.
More on re-entrancy restrictions and recommendations in this whitepaper: [Coherence Planning: From Proof of Concept to Production|http://www.oracle.com/technology/products/coherence/pdf/Oracle_Coherence_Planning_WP.pdf] -
"latch: row cache objects" and high "VERSION_COUNT"
Hello,
we are being faced with a situation where the database spends most of its time waiting for latches in the shared pool (as seen in the AWR report).
All statements issued by the application use bind variables, but what we can see in V$SQL is that even so, some of them have a relatively high version_count (> 300) and many invalidations (100 - 200), even though the tables involved are very small (some no more than 3 or 4 rows).
Here is some (hopefully enough) information about the environment
Version: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production (on RedHat EL 5)
Parameters:
cursor_bind_capture_destination memory+disk
cursor_sharing EXACT
cursor_space_for_time FALSE
filesystemio_options none
hi_shared_memory_address 0
memory_max_target 12288M
memory_target 12288M
object_cache_optimal_size 102400
open_cursors 300
optimizer_capture_sql_plan_baselines FALSE
optimizer_dynamic_sampling 2
optimizer_features_enable 11.2.0.2
optimizer_index_caching 0
optimizer_index_cost_adj 100
optimizer_mode ALL_ROWS
optimizer_secure_view_merging TRUE
optimizer_use_invisible_indexes FALSE
optimizer_use_pending_statistics FALSE
optimizer_use_sql_plan_baselines TRUE
plsql_optimize_level 2
session_cached_cursors 50
shared_memory_address 0
The shared pool size (according to AWR) is 4,832M
The buffer cache is 3,008M
Now, my question: is a version_count of > 300 a problem (we have about 10-15 of those out of a total of ~7000 statements in v$sqlarea)? Those are also the statements listed at the top of the AWR report in the sections "SQL ordered by Version Count" and "SQL ordered by Sharable Memory".
Is it possible that those statements are causing the latch contention in the shared pool?
I went through https://blogs.oracle.com/optimizer/entry/why_are_there_more_cursors_in_11g_for_my_query_containing_bind_variables_1
The tables involved are fairly small and all the execution plans for each cursor are identical.
I can understand some of the invalidations that happen, because we have 7 schemas that have identical tables, but from my understanding that shouldn't cause such a high invalidation number. Or am I mistaken?
I'm not that experienced with Oracle tuning at this level, so I would appreciate any pointers on how I can find out where exactly the latch problem occurs.
After flushing the shared pool, the problem seems to go away for a while. But apparently that is only fighting symptoms, not fixing the root cause of the problem.
Some of the statements in question:
SELECT * FROM QRTZ_SIMPLE_TRIGGERS WHERE TRIGGER_NAME = :1 AND TRIGGER_GROUP = :2
UPDATE QRTZ_TRIGGERS SET TRIGGER_STATE = :1 WHERE TRIGGER_NAME = :2 AND TRIGGER_GROUP = :3 AND TRIGGER_STATE = :4
UPDATE QRTZ_TRIGGERS SET TRIGGER_STATE = :1 WHERE JOB_NAME = :2 AND JOB_GROUP = :3 AND TRIGGER_STATE = :4
SELECT TRIGGER_STATE FROM QRTZ_TRIGGERS WHERE TRIGGER_NAME = :1 AND TRIGGER_GROUP = :2
UPDATE QRTZ_SIMPLE_TRIGGERS SET REPEAT_COUNT = :1, REPEAT_INTERVAL = :2, TIMES_TRIGGERED = :3 WHERE TRIGGER_NAME = :4 AND TRIGGER_GROUP = :5
DELETE FROM QRTZ_TRIGGER_LISTENERS WHERE TRIGGER_NAME = :1 AND TRIGGER_GROUP = :2
So all of them are using bind variables.
I have seen that the columns used in the where clause all have histograms available. Would removing them reduce the number of invalidations?
Unfortunately I did not save the information from v$sql_shared_cursor before the shared pool was flushed, but most of the invalidations occurred in the ROLL_INVALID_MISMATCH column, if that is of any help. There are some invalidations reported for AUTH_CHECK_MISMATCH and TRANSLATION_MISMATCH, but to my understanding those are caused by executing the statement against different schemas.
Looking at v$latch_misses, most of the waits for parent = 'row cache objects' are for "kqrpre: find obj" and "kqreqd: reget".
> In the AWR report, what does the Dictionary Cache Stats section say?
Here they are:
Dictionary Cache Stats
Cache Get Requests Pct Miss Scan Reqs Mod Reqs Final Usage
dc_awr_control 65 0.00 0 2 1
dc_constraints 729 33.33 0 729 1
dc_global_oids 60 23.33 0 0 31
dc_histogram_data 7,397 10.53 0 0 2,514
dc_histogram_defs 21,797 9.83 0 0 5,239
dc_object_grants 4 25.00 0 0 12
dc_objects 27,683 2.29 0 223 2,581
dc_profiles 1,842 0.00 0 0 1
dc_rollback_segments 1,634 0.00 0 0 39
dc_segments 7,335 6.94 0 360 1,679
dc_sequences 139 5.76 0 139 19
dc_table_scns 53 100.00 0 0 0
dc_tablespace_quotas 1,956 0.10 0 0 4
dc_tablespaces 17,488 0.00 0 0 11
dc_users 58,013 0.03 0 0 164
global database name 4,261 0.00 0 0 1
outstanding_alerts 54 0.00 0 0 9
sch_lj_oids 4 0.00 0 0 2
Library Cache Activity
Namespace Get Requests Pct Miss Pin Requests Pct Miss Reloads Invalidations
ACCOUNT_STATUS 3,664 0.03 0 0 0
BODY 560 2.14 2,343 0.60 0 0
CLUSTER 52 0.00 52 0.00 0 0
DBLINK 3,668 0.00 0 0 0
EDITION 1,857 0.00 3,697 0.00 0 0
INDEX 99 19.19 99 19.19 0 0
OBJECT ID 68 100.00 0 0 0
SCHEMA 2,646 0.00 0 0 0
SQL AREA 32,996 2.26 1,142,497 0.21 189 226
SQL AREA BUILD 848 62.15 0 0 0
SQL AREA STATS 860 82.09 860 82.09 0 0
TABLE/PROCEDURE 17,713 2.62 26,112 4.88 61 0
TRIGGER 1,704 2.00 6,737 0.52 1 0 -
Hi!
before we started with J2EE I used Castor as my O/R mapping tool. It cached the objects nicely, so the load on the database was low.
Now we have started using J2EE, first using CMP for our entity beans (J2EE server in the SEAS 8 PE) with the built-in JDO implementation. Here is my first question: does this implementation (or any JDO implementation) also cache the objects?
For a new project we have to use BMP for the entity beans because we have to version all updates in the database (a setXXX() on an entity bean creates a new row (= version) for the object in the database). We coded the SQL statements directly in the bean (in ejbStore, ejbLoad, ...).
Any idea how we can cache objects in our BMP approach? Should we also use JDO here?
Thank you in advance for any comments about this....
E.g. we store information about orders in the database which is not often changed; this information can be cached by the persistence layer. But when I access the db directly with SQL statements (the db is Oracle here), we have two db accesses every time we access an order: one for the PK search and one for getting the data (when using value objects).
-
RMAN receives: OSB error: UUID not found OB cached object manager
I am receiving an error when backing up:
Starting backup at 17-MAR-2009 10:00:00
allocated channel: ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: sid=137 devtype=SBT_TAPE
channel ORA_SBT_TAPE_1: Oracle Secure Backup
channel ORA_SBT_TAPE_1: starting full datafile backupset
channel ORA_SBT_TAPE_1: specifying datafile(s) in backupset
input datafile fno=00001 name=+DGROUP1/seattle/datafile/system.826.679693097
channel ORA_SBT_TAPE_1: starting piece 1 at 17-MAR-2009 10:00:02
RMAN-03009: failure of backup command on ORA_SBT_TAPE_1 channel at 03/17/2009 10:02:57
ORA-27191: sbtinfo2 returned error
Additional information: 2
ORA-19511: Error received from media manager layer, error text:
sbt__rpc_cat_query: Query for piece 07ka4pt2_1_1 failed.
*(Oracle Secure Backup error: 'UUID not found (OB cached object manager)').*
Prior to this (when everything was working) I merely tried to re-label a tape. Why this caused the problem I do not know, but I can't seem to fix it.
Does anybody know what has happened and what the fix is?
On the HTTP administration page, when I try to configure the device I get the following error message:
Error: cannot read location object associated with device - UUID not found
It looks as though the device definition has been corrupted somehow.
The fix has been found (from Oracle Support). The cause is not yet understood.
I document it here for others who may run into the same problem.
It seems that the device "went missing". The fix was to add it.
ob> mkloc dat72
It is still being investigated and I will update the notes when I am in possession of more information. -
How to show object creation in UML
How to show object creation in UML
In a sequence diagram, it's a line (with arrow) pointing to the new object and the <creates> or <new> tag as mentioned above.
| obj 1 |
|
| <creates> ----------
| --------------> | obj 2 |
| ----------or----------
| obj 1 |
|
| <new> ----------
| --------------> | obj 2 |
| ---------- -
Please help. How do I disable SQLJ statement caching on WebLogic Server 10.3?
Please help.
How do I disable statement caching by SQLJ on WebLogic Server?
Here is the actual problem:
1. create or replace view vtest as select object_name from dba_objects where rownum<200
2. test.sqlj
#sql dx testIterator = {
    select object_name from vtest
};
int cnt = 0;
while (testIterator.next()) {
    cnt++;
}
System.out.println("Count: " + cnt);
3. Restart WebLogic and deploy project
4. Run test on server, in log file
"*Count: 199*"
5. create or replace view vtest as select object_name from dba_objects where rownum<10
6. Run test on server, in log file
"*Count: 199*"
7. Restart WebLogic
8. Run test on server, in log file
"*Count: 9*"
Hi bud,
Have you tried using WLST for what you are trying to achieve?
Please take a look at the following links:
http://docs.oracle.com/cd/E11035_01/wls100/config_scripting/domains.html
http://docs.oracle.com/cd/E13222_01/wls/docs91/config_scripting/domains.html
http://docs.oracle.com/cd/E13179_01/common/docs21/interm/config.html
Hope this helps.
Thanks,
Cris -
How to "get object" -- ResourceBundle
According to the ResourceBundle API:
ResourceBundle myResources = ResourceBundle.getBundle("MyResources");
Besides getString, ResourceBundle also provides ... a generic getObject method for any other type of object. When using getObject, you'll have to cast the result to the appropriate type. For example:
int[] myIntegers = (int[]) myResources.getObject("intList");
Elsewhere,
getBundle attempts to locate a property resource file.
Does it make sense / is it possible to getObject from a property resource file? If so, how is the object "expressed" in the property resource file?
For example, if the resource file looks like
1=Person
then the key 1 is associated with the String "Person"
If
class Person {
    String firstName;
    String lastName;
    ...
}
is it possible to associate "1" with a Person object, where firstName is "Foo" and lastName is "Bar"?
If so, how would it look in the property resource file?
Thanks in advance,
C
In the resource file, you can save the data like this:
numberofperson=2
firstname1=Foo
firstname2=dummyfirst
lastname1=Bar
lastname2=dummylast
For programming, you can code like this:
public class Person {
    private String firstname;
    private String lastname;
    public Person(String firstname, String lastname) {
        this.firstname = firstname;
        this.lastname = lastname;
    }
}
int personNum = Integer.parseInt(myResources.getString("numberofperson"));
Person[] personArr = new Person[personNum];
for (int i = 1; i <= personNum; i++) {
    personArr[i - 1] = new Person(myResources.getString("firstname" + i),
            myResources.getString("lastname" + i));
} -
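As a side note to the ResourceBundle thread above: a plain .properties file can only map keys to strings, so getObject against it will only ever return a String. To have getObject return an arbitrary object, the bundle has to be class-based, i.e. a ListResourceBundle subclass. A minimal sketch follows; the class name and keys are illustrative, not from the original thread.

```java
import java.util.ListResourceBundle;
import java.util.ResourceBundle;

// Class-based bundle: unlike a .properties file, it can hold any object.
class MyObjectResources extends ListResourceBundle {
    protected Object[][] getContents() {
        return new Object[][] {
            // key "1" maps to a whole object (here a String[]), not just a String
            { "1", new String[] { "Foo", "Bar" } },
            { "greeting", "Hello" }
        };
    }
}
```

ResourceBundle.getBundle("MyObjectResources") locates such a class by name (it then needs to be public with a no-arg constructor), exactly as it would find MyObjectResources.properties; the caller retrieves the object with getObject("1") and casts it.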
How to create objects in ABAP Webdynpro?
Hi,
I want to create an object of the class CL_GUI_FRONTEND_SERVICES.
Then I want to call the file_save_dialog method.
How should I write the code, please?
I have written this code:
DATA v_guiobj TYPE REF TO cl_gui_frontend_services.
?????????????
v_guiobj->file_save_dialog( ... ).
How do I create the object in the place of the ?????????????
Because when I run this I am getting:
Access via Null object reference not possible.
Maybe you are looking for
-
Adding new attribute to Master data - Not present in ECC
Hi, We want to add a new field in the master data object 0plant. the new field is not present in the R/3 side. So we are planning to maintain it manually. Is it feasible? Can someone explain how to approach the situation? Since it is not going to com
-
Plz Help me. Very urgent
Hi Frds, when I am creating a generic data source for master data table KNKK, I am getting the following error: Invalid extract structure template KNKK of data source ZRF_KNKK_CRDMNGT. Why am I getting this error? In the table/view entered KNKK. may be this is wr
-
I have to find the name of the trigger which uses the table "tbl_job_run"
-
Increaseing the height and width of textfield and password fields
Dear friends, in APEX 4.1 when we create an application it automatically creates a login page with username and password fields in it. So my question is: can I increase the height and width of the username and password fields on that login page? If it is po
-
How do I get Pixelmator to work with Photos?
I can't seem to get the photo editing program Pixelmator to work with my Photos. It won't let me save any of the edited photos into any albums.