Losing Cached Objects
I'm having a problem with losing cached references to JNDI objects (home interfaces,
JMS connection factories, data sources, etc.).
I have a singleton (let's call it Cache) that stores the cached objects.
A client calls Cache.lookup, passing in the JNDI name.
If the object isn't cached, the Cache looks it up via the JNDI context; otherwise
it returns the cached reference.
My problem is that once it returns the cached version, the client receives a null
object.
Can anyone help?
- Anuj
It sounds to me as if your cache isn't working and it's not really related to
JNDI.
After the first call, the cache should be storing the object and not using
JNDI any more, right?
Have you tried stepping through your cache code with a debugger?
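For comparison, here is a minimal sketch of such a singleton (all names are illustrative, and this assumes a simple map-backed cache, not Anuj's actual code). The classic bug that produces exactly this symptom is caching or returning null after a failed first lookup, so that every later call looks like a cache hit and hands the client a null:

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import javax.naming.InitialContext;
import javax.naming.NamingException;

// Hypothetical reconstruction of such a singleton JNDI cache.
public final class Cache {
    private static final Cache INSTANCE = new Cache();
    private final ConcurrentMap<String, Object> entries =
            new ConcurrentHashMap<String, Object>();

    private Cache() { }

    public static Cache getInstance() {
        return INSTANCE;
    }

    public Object lookup(String jndiName) throws NamingException {
        Object cached = entries.get(jndiName);
        if (cached != null) {
            return cached; // cache hit
        }
        // Cache miss: resolve via JNDI. A failed lookup throws rather than
        // caching null, so a bad entry can never be served later.
        Object resolved = new InitialContext().lookup(jndiName);
        entries.putIfAbsent(jndiName, resolved);
        return resolved;
    }
}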
"Anuj Mehta" <[email protected]> wrote in message
news:[email protected]..
> I'm having a problem with losing cached references to JNDI objects (home
> interfaces, JMS connection factories, data sources, etc.).
> I have a singleton (let's call it Cache) that stores the cached objects.
> A client calls Cache.lookup, passing in the JNDI name.
> If the object isn't cached, the Cache looks it up via the JNDI context;
> otherwise it returns the cached reference.
> My problem is that once it returns the cached version, the client receives
> a null object.
> Can anyone help?
> - Anuj
Similar Messages
-
After REFRESH the cached object is not consistent with the database table
After REFRESH, the cached object is not consistent with the database table. Why?
I created a JDBC connection to the Oracle database (HR schema) using JDeveloper (10.1.3) and then created an offline database (HR schema)
in JDeveloper from the existing database tables (HR schema). Then I made some updates to the JOBS database table using SQL*Plus.
When I returned to the JDeveloper tool and refreshed the HR connection, none of the changes appeared in the offline database table JOBS in
JDeveloper.
How can I make JDeveloper's offline tables synchronize with the underlying database tables?
qkc,
Once you create an offline table, it's just a copy of a table definition as of the point in time you brought it in from the database. Refreshing the connection, as you describe it, just refreshes the database browser, not any offline objects. If you want to synchronize the offline table, right-click it and choose "Generate or Reconcile Objects" to reconcile the object to the database. I just tried this in 10.1.3.3 (not the latest 10.1.3, I know), and it works properly.
John -
Re: Update Cache Objects in Delta Process Doesn't work
Hi All,
Re: Update Cache Objects in Delta Process doesn't work.
BI 7 - SP 17
This is the scenario I am working on: I am running a BEx query on a cube (via a multiprovider) with a bunch of aggregates.
The daily extraction and aggregate rollup are correct, but when I run the BEx query it displays incorrect key figure values compared to what we see in LISTCUBE for the InfoCube.
When I ran the same query in RSRT with "Do not use cache", it gave correct results, and when I then ran the BEx query again it had fixed itself and displayed correctly.
InfoCube - standard & No compression for requests
Query Properties are
Read Mode - H
Req Status - 1
Cache - Main Memory Cache Without Swapping
Update Cache Objects in Delta Process (Flag selected)
SP grouping - 1
This problem occurs once every couple of weeks. My question: is there a permanent fix for it, or should we turn the cache off?
Can anyone please help?
Thank you.
Rao
Hi Kevin/Rao,
We are currently experiencing problems with the 'Update Cache Objects in Delta' process. Did either of you manage to resolve your issues and, if so, how? -
TopLink cached object changes are not committed to the database
Hello,
I'm using TopLink 10 and I have a writing issue with a use case:
1. I read an object using TopLink that is in the identity map.
2. Using JSF, this object is edited through a web form.
3. I hand the modified object to the data layer and try to modify it inside a unit of work:
UnitOfWork uow = session.acquireUnitOfWork();
//laspEtapeDef comes from JSF and has been modified previously
LaspEtapeDef laspEtapeDefClone = uow.readObject( laspEtapeDef );
//I update the clone field
laspEtapeDefClone.setDescription(laspEtapeDef.getDescription());
uow.commit();
4. I use the same object again to display it once modified.
The object is modified in the cache, but the modified fields are never committed to the database. This code works only if I disable the cache.
So I've modified my JSF form to send the fields instead of modifying the object directly.
My question: is there a way to commit changes made to a cached object?
I found the following section in the documentation, which explains the problem but doesn't give the solution:
http://docs.oracle.com/cd/E14571_01/web.1111/b32441/uowadv.htm#CACGDJJH
Any idea?
How are you reading in the object initially? The problem is likely that you are modifying an object from the session cache. When you then read the object in from the uow, it uses the object in the session cache as the back-up, so there will not appear to be any changes to persist to the database.
You will need to make a copy of the object for modification, or use the copy from the unit of work to make the changes instead of working directly on the object in the session. Disabling the cache means there is no copy in the session cache to use as a back-up, so the uow read has to build the object from the database.
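To make that concrete, here is a rough sketch of the pattern described above, reusing the TopLink calls already shown in the thread (acquireUnitOfWork, readObject, commit); the service class, method, and parameter names are hypothetical:

import oracle.toplink.sessions.Session;
import oracle.toplink.sessions.UnitOfWork;

// Hypothetical service wrapping the update. The key point: never mutate the
// instance that lives in the session cache; read a working clone through the
// unit of work and apply the form's values to that clone.
public class LaspEtapeDefService {
    public void updateDescription(Session session,
                                  LaspEtapeDef cached,
                                  String newDescription) {
        UnitOfWork uow = session.acquireUnitOfWork();
        // Registers the object and returns the unit-of-work clone; TopLink
        // keeps a separate back-up copy of it for change detection.
        LaspEtapeDef clone = (LaspEtapeDef) uow.readObject(cached);
        // Edit the clone, not 'cached', so the commit sees a real change.
        clone.setDescription(newDescription);
        uow.commit();
    }
}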
Best Regards,
Chris -
Latch: row cache objects
Hello everyone,
Note: apologies for the bad formatting; I tried, but it seems I've forgotten how to use it.
BANNER
Oracle Database 11g Release 11.2.0.2.0 - 64bit Production
I've seen high "*latch: row cache objects*" waits in the SP/ASH reports from ~14 hours back, when users were unable to connect to the database. There were
WARNING: inbound connection timed out (ORA-3136)
Time: 30-APR-2012 02:24:36
Tracing not turned on.
Tns error struct:
errors all over the alert log for the six minutes the problem lasted.
I've put a few records in bold; from those I concluded that the problem was with the "dc_users" cache.
Can anybody tell me how/where I should proceed?
SP report:
Instance Efficiency Indicators
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 100.00 Redo NoWait %: 100.00
Buffer Hit %: 99.84 Optimal W/A Exec %: 100.00
Library Hit %: 97.43 Soft Parse %: 87.86
Execute to Parse %: 22.54 Latch Hit %: 99.95
Parse CPU to Parse Elapsd %: 0.30 % Non-Parse CPU: 87.83
Shared Pool Statistics Begin End
Memory Usage %: 45.09 46.98
% SQL with executions>1: 11.49 13.15
% Memory for SQL w/exec>1: 72.96 21.33
Top 5 Timed Events Avg %Total
~~~~~~~~~~~~~~~~~~ wait Call
Event Waits Time (s) (ms) Time
latch: row cache objects 6,655 634,260 95306 97.0
log file sync 289,923 6,469 22 1.0
CPU time 5,039 .8
db file sequential read 310,084 2,840 9 .4
log file parallel write 451,706 1,144 3 .2
ASH Report
Analysis Begin Time: 30-Apr-12 02:24:00
Analysis End Time: 30-Apr-12 02:30:00
Elapsed Time: 6.0 (mins)
Begin Data Source: DBA_HIST_ACTIVE_SESS_HISTORY
in AWR snapshot 12185
End Data Source: DBA_HIST_ACTIVE_SESS_HISTORY
in AWR snapshot 12185
Sample Count: 1,385
Average Active Sessions: 38.47
Avg. Active Session per CPU: 1.60
Report Target: None specified
Top User Events DB/Inst: NIKU/niku (Apr 30 02:24 to 02:30)
Avg Active
Event Event Class % Event Sessions
latch: row cache objects Concurrency 75.45 29.03
CPU + Wait for CPU CPU 9.75 3.75
log file sync Commit 3.83 1.47
db file sequential read User I/O 3.61 1.39
Top Event P1/P2/P3 Values DB/Inst: NIKU/niku (Apr 30 02:24 to 02:30)
Event % Event P1 Value, P2 Value, P3 Value % Activity
Parameter 1 Parameter 2 Parameter 3
latch: row cache objects 75.60 "42287858200","279","0" 75.60
address number tries
1* select addr, latch#, child#, name, misses, gets from v$latch_children where name like '%row%cache%objec%' order by gets , misses
niku> /
ADDR LATCH# CHILD# NAME MISSES GETS
0000000A16FF21C8 279 26 row cache objects 0 0
0000000A16FF14C8 279 2 row cache objects 0 0
00000009D88D7ED8 279 3 row cache objects 0 0
0000000A16FF1B48 279 14 row cache objects 0 0
00000009D88D8558 279 15 row cache objects 0 0
0000000A16FF1CE8 279 17 row cache objects 0 0
0000000A26265A28 279 19 row cache objects 0 0
0000000A16FF1E88 279 20 row cache objects 0 0
00000009D88D8898 279 21 row cache objects 0 0
0000000A26265BC8 279 22 row cache objects 0 0
0000000A16FF2028 279 23 row cache objects 0 0
00000009D88D8A38 279 24 row cache objects 0 0
0000000A26265D68 279 25 row cache objects 0 0
00000009D88D8BD8 279 27 row cache objects 0 0
0000000A26265F08 279 28 row cache objects 0 0
00000009D88D8D78 279 30 row cache objects 0 0
0000000A262660A8 279 31 row cache objects 0 0
0000000A16FF2508 279 32 row cache objects 0 0
0000000A16FF26A8 279 35 row cache objects 0 0
00000009D88D90B8 279 36 row cache objects 0 0
0000000A262663E8 279 37 row cache objects 0 0
0000000A262668C8 279 46 row cache objects 0 0
0000000A26266A68 279 49 row cache objects 0 0
0000000A16FF2368 279 29 row cache objects 0 11
0000000A16FF2848 279 38 row cache objects 0 116
0000000A16FF29E8 279 41 row cache objects 0 200
00000009D88D93F8 279 42 row cache objects 0 318
00000009D88D9258 279 39 row cache objects 0 1010
0000000A16FF2EC8 279 50 row cache objects 0 1406
00000009D88D9598 279 45 row cache objects 0 1472
0000000A26266588 279 40 row cache objects 0 1705
0000000A26266728 279 43 row cache objects 0 7383
0000000A16FF2B88 279 44 row cache objects 0 32346
00000009D88D98D8 279 51 row cache objects 19 63948
0000000A26265888 279 16 row cache objects 0 88045
0000000A26266248 279 34 row cache objects 0 141176
00000009D88D9738 279 48 row cache objects 0 326672
0000000A16FF19A8 279 11 row cache objects 867 1770385
00000009D88D8078 279 6 row cache objects 9 1979542
0000000A16FF2D28 279 47 row cache objects 2 3435018
00000009D88D86F8 279 18 row cache objects 2557 14956121
0000000A26265068 279 1 row cache objects 224 24335868
0000000A262653A8 279 7 row cache objects 29760 133991553
00000009D88D8F18 279 33 row cache objects 60612 677263122
00000009D88D83B8 279 12 row cache objects 23981 739014460
0000000A26265208 279 4 row cache objects 19973399 852043775
0000000A26265548 279 10 row cache objects 280137 856097342
00000009D88D8218 279 9 row cache objects 715879777 1219000976
0000000A262656E8 279 13 row cache objects 3856073 2397402780
0000000A16FF1668 279 5 row cache objects 12763217 2920278217
*0000000A16FF1808 279 8 row cache objects 67329804 4145389092*
51 rows selected.
niku> list
1 select addr, latch#, child#, name, misses, gets from v$latch_children where name like '%row%cache%objec%' order by gets , misses
niku> select distinct s.kqrstcln latch#, r.cache#, r.parameter name, r.type, r.subordinate#
from v$rowcache r, x$kqrst s
where r.cache# = s.kqrstcid
order by 1,4,5;
LATCH# CACHE# NAME TYPE SUBORDINATE#
1 3 dc_rollback_segments PARENT
2 1 dc_free_extents PARENT
3 4 dc_used_extents PARENT
4 2 dc_segments PARENT
5 0 dc_tablespaces PARENT
6 5 dc_tablespace_quotas PARENT
7 6 dc_files PARENT
*8 10 dc_users PARENT*
*8 7 dc_users SUBORDINATE 0*
*8 7 dc_users SUBORDINATE 1*
*8 7 dc_users SUBORDINATE 2*
9 8 dc_objects PARENT
9 8 dc_object_grants SUBORDINATE 0
10 17 dc_global_oids PARENT
11 12 dc_constraints PARENT
12 13 dc_sequences PARENT
13 16 dc_histogram_defs PARENT
13 16 dc_histogram_data SUBORDINATE 0
13 16 dc_histogram_data SUBORDINATE 1
14 54 dc_sql_prs_errors PARENT
15 32 kqlsubheap_object PARENT
16 19 dc_table_scns PARENT
16 19 dc_partition_scns SUBORDINATE 0
17 18 dc_outlines PARENT
18 14 dc_profiles PARENT
19 47 realm cache PARENT
19 47 realm auth SUBORDINATE 0
20 48 Command rule cache PARENT
21 49 Realm Object cache PARENT
21 49 Realm Subordinate Cache SUBORDINATE 0
22 46 Rule Set Cache PARENT
23 34 extensible security user and rol PARENT
24 35 extensible security principal pa PARENT
25 37 extensible security UID to princ PARENT
26 36 extensible security principal na PARENT
27 33 extensible security principal ne PARENT
28 38 XS security class privilege PARENT
29 39 extensible security midtier cach PARENT
30 43 AV row cache 1 PARENT
31 44 AV row cache 2 PARENT
32 45 AV row cache 3 PARENT
33 15 global database name PARENT
34 20 rule_info PARENT
35 21 rule_or_piece PARENT
35 21 rule_fast_operators SUBORDINATE 0
36 23 dc_qmc_ldap_cache_entries PARENT
37 52 qmc_app_cache_entries PARENT
38 53 qmc_app_cache_entries PARENT
39 27 qmtmrcin_cache_entries PARENT
40 28 qmtmrctn_cache_entries PARENT
41 29 qmtmrcip_cache_entries PARENT
42 30 qmtmrctp_cache_entries PARENT
43 31 qmtmrciq_cache_entries PARENT
44 26 qmtmrctq_cache_entries PARENT
45 9 qmrc_cache_entries PARENT
46 50 qmemod_cache_entries PARENT
47 24 outstanding_alerts PARENT
48 22 dc_awr_control PARENT
49 25 SMO rowcache PARENT
50 40 sch_lj_objs PARENT
51 41 sch_lj_oids PARENT
61 rows selected.
niku> select parameter, gets from v$rowcache order by gets desc;
PARAMETER GETS
dc_users 2802019571
dc_tablespaces 2405092307
dc_objects 1815427326
jjk wrote:
I've already been through the link that you mentioned and unfortunately couldn't make much use of it.
I didn't think it was really likely to be relevant, but there was always a long shot that it might have given you a clue.
Considering that "dc_users" had the maximum gets, I thought (rather, as per the internet) that it might be the point of contention. However, I did observe high misses on child# 9, which is "dc_objects".
It's often the case that the misses are more important than the gets when you see lots of gets and misses on a few latches/caches. The bit that might have been most instructive was the dictionary cache section from the AWR report showing gets, misses, scans, scan misses, etc. It might have told us a little about what was going in and out of the dictionary cache and let us guess why.
In alert log:
Sun Apr 29 02:20:00 2012
29-APR-2012 02:20:00 -- xxxxxxx package - REGRANT_READONLY Begin re-grant read only roles
Sun Apr 29 02:24:34 2012
29-APR-2012 02:24:34 -- xxxxxxx package - REGRANT_READONLY End re-grant read only roles
Sun Apr 29 02:30:00 2012
29-APR-2012 02:30:00 -- xxxxxxx package - REGRANT_READWRITE Begin re-grant read write roles
Sun Apr 29 02:32:02 2012
29-APR-2012 02:32:02 -- xxxxxxx package - REGRANT_READWRITE End re-grant read write roles
Is this code that "regrants" roles to users who already have them? That's what it sounds like, and that sounds like something that would impact various parts of the dictionary cache, especially dc_users and possibly dc_objects.
CPU per Elap per Old
Executions Rows Processed Rows per Exec Exec (s) Exec (s) Hash Value
161,198 1,244 0.0 0.00 0.00 978935325
select /*+ rule */ c.name, u.name from con$ c, cdef$ cd, user$ u where c.con# = cd.con# and cd.enabled = :1 and c.owner# = u.user#
159,955 159,952 1.0 0.00 0.00 2458412332
select o.name, u.name from obj$ o, user$ u where o.obj# = :1 and o.owner# = u.user#
159,932 6 0.0 0.00 0.00 2636710067
insert into objauth$(option$,grantor#,obj#,privilege#,grantee#,col#,sequence#) values(decode(:1,0,null,:1),:2,:3,:4,:5,decode(:6,0,null,:6),object_grant.nextval)
147,168 147,168 1.0 0.00 0.00 3468666020
select text from view$ where rowid=:1
124,635 124,635 1.0 0.00 0.00 564166580
select count(*) from ( select u.name from registry$ r, user$ u where r.status in (1,3,5) and r.namespace = 'SERVER'
The first one looks like a response to a constraint being breached.
The third one looks like something that might happen when you grant a privilege on an object to a user - and maybe the first one happens if the user has already got it and the insert raises a "duplicate key" error. The fourth one commonly happens when you have to re-optimize a query containing a view - and when you execute DDL (such as changing privileges on an object) you invalidate SQL and have to re-optimize it eventually. I can't remember where I've seen the second one appearing.
If you have a process that tries to do a lot of grants on objects to users and roles in a very short time, it's quite likely to create havoc in the dictionary cache - check what that package was up to and why it runs.
What is the missing information?
When I looked at some of your postings, the output didn't match the query; some of the later columns had gone missing. This might have been my browser rather than your input, though.
Regards
Jonathan Lewis -
"latch: row cache objects" and high "VERSION_COUNT"
Hello,
we are being faced with a situation where the database spends most of its time waiting for latches in the shared pool (as seen in the AWR report).
All statements issued by the application use bind variables, but what we can see in V$SQL is that even so, some of them have a relatively high VERSION_COUNT (> 300) and many invalidations (100 - 200), even though the tables involved are very small (some no more than 3 or 4 rows).
Here is some (hopefully enough) information about the environment
Version: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production (on RedHat EL 5)
Parameters:
cursor_bind_capture_destination memory+disk
cursor_sharing EXACT
cursor_space_for_time FALSE
filesystemio_options none
hi_shared_memory_address 0
memory_max_target 12288M
memory_target 12288M
object_cache_optimal_size 102400
open_cursors 300
optimizer_capture_sql_plan_baselines FALSE
optimizer_dynamic_sampling 2
optimizer_features_enable 11.2.0.2
optimizer_index_caching 0
optimizer_index_cost_adj 100
optimizer_mode ALL_ROWS
optimizer_secure_view_merging TRUE
optimizer_use_invisible_indexes FALSE
optimizer_use_pending_statistics FALSE
optimizer_use_sql_plan_baselines TRUE
plsql_optimize_level 2
session_cached_cursors 50
shared_memory_address 0
The shared pool size (according to AWR) is 4,832M.
The buffer cache is 3,008M
Now, my question: is a version_count of > 300 a problem? (We have about 10-15 of those out of a total of ~7000 statements in v$sqlarea.) Those are also the statements listed at the top of the AWR report in the sections "SQL ordered by Version Count" and "SQL ordered by Sharable Memory".
Is it possible that those statements are causing the latch contention in the shared pool?
I went through https://blogs.oracle.com/optimizer/entry/why_are_there_more_cursors_in_11g_for_my_query_containing_bind_variables_1
The tables involved are fairly small and all the execution plans for each cursor are identical.
I can understand some of the invalidations that happen, because we have 7 schemas that have identical tables, but from my understanding that shouldn't cause such a high invalidation number. Or am I mistaken?
I'm not that experienced with Oracle tuning at this level, so I would appreciate any pointers on how to find out where exactly the latch problem occurs.
After flushing the shared pool, the problem seems to go away for a while. But apparently that is only fighting symptoms, not fixing the root cause of the problem.
Some of the statements in question:
SELECT * FROM QRTZ_SIMPLE_TRIGGERS WHERE TRIGGER_NAME = :1 AND TRIGGER_GROUP = :2
UPDATE QRTZ_TRIGGERS SET TRIGGER_STATE = :1 WHERE TRIGGER_NAME = :2 AND TRIGGER_GROUP = :3 AND TRIGGER_STATE = :4
UPDATE QRTZ_TRIGGERS SET TRIGGER_STATE = :1 WHERE JOB_NAME = :2 AND JOB_GROUP = :3 AND TRIGGER_STATE = :4
SELECT TRIGGER_STATE FROM QRTZ_TRIGGERS WHERE TRIGGER_NAME = :1 AND TRIGGER_GROUP = :2
UPDATE QRTZ_SIMPLE_TRIGGERS SET REPEAT_COUNT = :1, REPEAT_INTERVAL = :2, TIMES_TRIGGERED = :3 WHERE TRIGGER_NAME = :4 AND TRIGGER_GROUP = :5
DELETE FROM QRTZ_TRIGGER_LISTENERS WHERE TRIGGER_NAME = :1 AND TRIGGER_GROUP = :2
So all of them are using bind variables.
I have seen that the columns used in the where clause all have histograms available. Would removing them reduce the number of invalidations?
Unfortunately I did not save the information from V$SQL_SHARED_CURSOR before the shared pool was flushed, but most of the invalidations occurred in the ROLL_INVALID_MISMATCH column, if that is of any help. There are some invalidations reported for AUTH_CHECK_MISMATCH and TRANSLATION_MISMATCH, but to my understanding those are caused by executing the statement for different schemas.
Looking at v$latch_misses, most of the waits for parent = 'row cache objects' are for "kqrpre: find obj" and "kqreqd: reget".
> In the AWR report, what does the Dictionary Cache Stats section say?
Here they are:
Dictionary Cache Stats
Cache Get Requests Pct Miss Scan Reqs Mod Reqs Final Usage
dc_awr_control 65 0.00 0 2 1
dc_constraints 729 33.33 0 729 1
dc_global_oids 60 23.33 0 0 31
dc_histogram_data 7,397 10.53 0 0 2,514
dc_histogram_defs 21,797 9.83 0 0 5,239
dc_object_grants 4 25.00 0 0 12
dc_objects 27,683 2.29 0 223 2,581
dc_profiles 1,842 0.00 0 0 1
dc_rollback_segments 1,634 0.00 0 0 39
dc_segments 7,335 6.94 0 360 1,679
dc_sequences 139 5.76 0 139 19
dc_table_scns 53 100.00 0 0 0
dc_tablespace_quotas 1,956 0.10 0 0 4
dc_tablespaces 17,488 0.00 0 0 11
dc_users 58,013 0.03 0 0 164
global database name 4,261 0.00 0 0 1
outstanding_alerts 54 0.00 0 0 9
sch_lj_oids 4 0.00 0 0 2
Library Cache Activity
Namespace Get Requests Pct Miss Pin Requests Pct Miss Reloads Invalidations
ACCOUNT_STATUS 3,664 0.03 0 0 0
BODY 560 2.14 2,343 0.60 0 0
CLUSTER 52 0.00 52 0.00 0 0
DBLINK 3,668 0.00 0 0 0
EDITION 1,857 0.00 3,697 0.00 0 0
INDEX 99 19.19 99 19.19 0 0
OBJECT ID 68 100.00 0 0 0
SCHEMA 2,646 0.00 0 0 0
SQL AREA 32,996 2.26 1,142,497 0.21 189 226
SQL AREA BUILD 848 62.15 0 0 0
SQL AREA STATS 860 82.09 860 82.09 0 0
TABLE/PROCEDURE 17,713 2.62 26,112 4.88 61 0
TRIGGER 1,704 2.00 6,737 0.52 1 0 -
Mozilla Firefox 32.0.1 caching objects without cache control headers
Mozilla Firefox is caching objects without any Cache-Control or Expires header in the response. The response does contain ETag and Date headers but doesn't indicate anything about how long the object should be cached. An example URL is https://www.priceless.com/content/dam/priceless/us/en/newyork/component/backgroundimages/NewYork_1920x596.jpg
Am I missing something very obvious here?
That is a beautiful picture. I understand that you are looking for a response header with expiration information. The about:cache page will show that information; afaik there is a column with the expiration time. Some entries expire and some don't, and I am pretty sure the latter is the case you are seeing.
-
RMAN receives: OSB error: UUID not found OB cached object manager
I am receiving an error when backing up:
Starting backup at 17-MAR-2009 10:00:00
allocated channel: ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: sid=137 devtype=SBT_TAPE
channel ORA_SBT_TAPE_1: Oracle Secure Backup
channel ORA_SBT_TAPE_1: starting full datafile backupset
channel ORA_SBT_TAPE_1: specifying datafile(s) in backupset
input datafile fno=00001 name=+DGROUP1/seattle/datafile/system.826.679693097
channel ORA_SBT_TAPE_1: starting piece 1 at 17-MAR-2009 10:00:02
RMAN-03009: failure of backup command on ORA_SBT_TAPE_1 channel at 03/17/2009 10:02:57
ORA-27191: sbtinfo2 returned error
Additional information: 2
ORA-19511: Error received from media manager layer, error text:
sbt__rpc_cat_query: Query for piece 07ka4pt2_1_1 failed.
*(Oracle Secure Backup error: 'UUID not found (OB cached object manager)').*
Prior to this (when everything was working) I merely tried to re-label a tape. Why this has caused the problem I do not know but I can't seem to fix it.
Does anybody know what has happened and what the fix is?
On the HTTP administration page, when I try to configure the device I get the following error message:
Error: cannot read location object associated with device - UUID not found
It looks as though the device definition has been corrupted somehow.
The fix has been found (from Oracle Support). The cause is not yet understood.
I document it here for others who may run into the same problem.
It seems that the device "went missing". The fix was to add it.
ob> mkloc dat72
It is still being investigated and I will update the notes when I am in possession of more information. -
Navigation Cache - Object Size
We're currently in the middle of our upgrade to SP15, and one of the new features we're implementing is the navigation cache. By default, the number of objects to be cached is set at 5000. So far the behavior is that all navigation objects (pages) count as objects. Then, for each user entering the portal with a unique role combination, an additional set of objects (equal to the number of roles they have) is added. In our dev environment, where most users are super-admins, we're at around 1,300 objects.
My question is: what object count can we reach before there's a performance hit from too much memory usage? Is 5000 a safe limit, or can it be higher? And what happens when the object limit is reached; is it like a queue where the oldest cached object gets deleted when a new one is added?
Any info on this subject is welcome. Any experience with high-availability environments using the navigation cache would be appreciated. Thanks
Hi,
The primary objective of the navigation cache is to improve performance on the server side. By keeping the navigation nodes in memory, the number of calls to the PCD or any other backend system is reduced.
The cache is implemented in a first-in, first-out (FIFO) manner.
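Purely to illustrate that FIFO eviction behavior in plain Java (this is not SAP's implementation, just a sketch of the semantics):

import java.util.LinkedHashMap;
import java.util.Map;

// Illustration only -- a FIFO-bounded map. Once the limit is reached, the
// oldest entry is dropped as each new one is added.
public class FifoCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxObjects;

    public FifoCache(int maxObjects) {
        super(16, 0.75f, false); // accessOrder=false => insertion (FIFO) order
        this.maxObjects = maxObjects;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxObjects;
    }
}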
Try this link for more information:
http://help.sap.com/saphelp_erp2005/helpdata/en/5f/2720a513ea4ce9a5a4e5d285a1c09c/frameset.htm
Hope it helps
Best Regards,
Shimon. -
Memory Notification: Library Cache Object loaded into SGA
Dear Gurus
I am noticing the following message in my database. The database version is Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit; the host is Sun Solaris.
Memory Notification: Library Cache Object loaded into SGA
Heap size 2905K exceeds notification threshold (2048K)
Details in trace file /orafs/app/oracle/admin/pblsw/bdump/pblsw_dw01_14545.trc
KGL object name :SELECT /*+rule*/ SYS_XMLGEN(VALUE(KU$), XMLFORMAT.createFormat2('VIEW_T', '7')), KU$.OBJ_NUM ,KU$.SCHEMA_OBJ.NAME ,KU$.SCHEMA_OBJ.NAME ,'VIEW' ,KU$.SCHEMA_OBJ.OWNER_NAME FROM SYS.KU$_VIEW_VIEW KU$ WHERE KU$.OBJ_NUM IN (SELECT * FROM TABLE(DBMS_METADATA.FETCH_OBJNUMS(200001)))
Regards
Rabi
Refer to
Memory Notification: Library Cache Object Loaded Into Sga (Doc ID 330239.1)
http://support.oracle.com -
Notification about cache objects changes when node dies
Hi Guys,
Coherence 3.3.1/389
.Net API 3.3.1.2
Sorry, I did not find anything similar to my question in the forum.
Well, I have this situation:
I have 8 Coherence nodes.
I have one client connected to Coherence node number 1.
The client has been listening for notifications about cache object changes.
Coherence node number 1 died for some reason (not enough memory).
What happens to the client that was connected to this node?
I think it just reconnects to another one, but what happens to the notifications that occur before the client reconnects?
Regards,
Dmitry.
Hi Dmitry,
Notifications are delivered only while the client is connected, so if the client or its proxy fails, the client will need to recover appropriately upon reconnection.
If you're using Coherence's built-in client-side data management features (such as Near cache or ContinuousQueryCache), Coherence will do this for you automatically (resynchronizing the local datasets).
One other comment, the reconnection attempt is lazy and the client will not reconnect until your application code touches a clustered resource.
(EDIT: If store-and-forward guarantees are required, then you can queue those messages on the server on a per-client basis in a dedicated NamedCache, which the client can then consume at its leisure whenever connected. This is an application-level construct.)
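As a rough sketch of the built-in client-side option mentioned above (the cache name and listener body are illustrative assumptions, using the Coherence 3.x Java API):

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;
import com.tangosol.net.cache.ContinuousQueryCache;
import com.tangosol.util.MapEvent;
import com.tangosol.util.MapListener;
import com.tangosol.util.filter.AlwaysFilter;

// A ContinuousQueryCache keeps a local view of the cache and resynchronizes
// its data (and keeps delivering events) after the client reconnects.
public class OrderListenerClient {
    public static void main(String[] args) {
        NamedCache cache = CacheFactory.getCache("orders"); // name is illustrative
        ContinuousQueryCache cqc = new ContinuousQueryCache(cache, new AlwaysFilter());
        cqc.addMapListener(new MapListener() {
            public void entryInserted(MapEvent e) { System.out.println("inserted: " + e.getKey()); }
            public void entryUpdated(MapEvent e)  { System.out.println("updated:  " + e.getKey()); }
            public void entryDeleted(MapEvent e)  { System.out.println("deleted:  " + e.getKey()); }
        });
        // ... application work; events arrive on the listener above ...
    }
}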
Jon Purdy
Oracle -
Memory Notification: Library Cache Object loaded into SGA - Heap size 2262K exceeds notification threshold
Dear all
I am facing the following problem. I am using Oracle 10gR2 on Windows.
Please help me.
Memory Notification: Library Cache Object loaded into SGA
Heap size 2262K exceeds notification threshold (2048K)
KGL object name :XDB.XDbD/PLZ01TcHgNAgAIIegtw==
Thanks
This is a normal warning message displayed in release 10.2.0.1.0; it is just a bug, which by default has declared the _kgl_large_heap_warning_threshold instance parameter as 8388608. The bug is harmless, but the problem is that you will see a lot of these messages in the alert.log file, which renders the file difficult to read and makes it harder to spot the real errors.
Just declare a higher value for the _kgl_large_heap_warning_threshold undocumented instance parameter. This is meant to be corrected in 10.2.0.2.0, but you can manually increase this parameter to a value higher than the highest value reported.
For further references take a look at this metalink note:
Memory Notification: Library Cache Object Loaded Into Sga
Doc ID: Note:330239.1
~ Madrid
http://hrivera99.blogspot.com/ -
Best practice - caching objects
What is the best practice when many transactions require a persistent
object that does not change?
For example, in an ASP model supporting many organizations, the organization is
required by many persistent objects in the model. I would rather look the
organization object up once and keep it around.
It is my understanding that once the persistence manager is closed, the
organization can no longer be part of new transactions with other
persistence managers. Aside from looking it up for every transaction, is
there a better solution?
Thanks in advance
Gary
The problem with using object-id fields instead of PC object references in your
object model is that it makes your object model less useful and intuitive.
Taken to the extreme (replacing all object references with their IDs), you
will end up with objects like rows in a JDBC result set. Plus, if you use a PM per
HTTP request, it will not do you any good, since the organization data won't be in
the PM anyway; it might even be slower (no optimizations such as Kodo batch
loads).
So we do not do it.
What you can do:
1. Do nothing special; just use the JVM-level or distributed cache provided by
Kodo. You will not need to access the database to get your organization data, but
the object-creation cost in each PM is still there (do not forget that the cache we
are talking about is a state cache, not a PC object cache). Good because it is
transparent.
2. Designate a single application-wide PM for all your read-only big
things (lookup screens etc.) and use a PM per request for the rest. Not
transparent: it affects your application design.
3. If a large portion of your system is read-only, use PM pooling. We did it
pretty successfully. The requirement is to be able to recognize all PCs
which are updateable and evict/makeTransient those when the PM is returned to
the pool (Kodo has a nice extension in PersistenceManagerImpl for removing
all managed objects of a certain class), so you do not have stale data in your
PM. You can use Apache Commons Pool to do the pooling; make sure your pool
is able to shrink. It is transparent and increases performance considerably.
That is one approach we use. (A rough sketch of option 2 follows below.)
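A rough sketch of option 2 under plain JDO (the class names and the properties source are hypothetical; Kodo-specific tuning is omitted):

import java.util.Properties;
import javax.jdo.JDOHelper;
import javax.jdo.PersistenceManager;
import javax.jdo.PersistenceManagerFactory;

// One long-lived PM reserved for read-only lookup data (e.g. Organization),
// plus per-request PMs for everything else.
public final class PMHolder {
    private static final PersistenceManagerFactory PMF =
            JDOHelper.getPersistenceManagerFactory(loadJdoProperties());
    private static final PersistenceManager READ_ONLY_PM =
            PMF.getPersistenceManager();

    private PMHolder() { }

    // Lookup-style objects come from the long-lived PM, so they stay usable
    // across requests instead of dying with a per-request PM.
    public static synchronized Object lookup(Object oid) {
        return READ_ONLY_PM.getObjectById(oid, true);
    }

    // All transactional work still gets its own short-lived PM.
    public static PersistenceManager newRequestPM() {
        return PMF.getPersistenceManager();
    }

    private static Properties loadJdoProperties() {
        Properties props = new Properties();
        // ... populate kodo.properties / JDO settings here ...
        return props;
    }
}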
"Gary" <[email protected]> wrote in message
news:[email protected]...
> What is the best practice when many transactions require a persistent
> object that does not change?
> For example, in an ASP model supporting many organizations, the
> organization is required by many persistent objects in the model. I would
> rather look the organization object up once and keep it around.
> It is my understanding that once the persistence manager is closed, the
> organization can no longer be part of new transactions with other
> persistence managers. Aside from looking it up for every transaction, is
> there a better solution?
> Thanks in advance
> Gary -
Setting the load factor for a HashMap used to cache objects
I intend to use a HashMap to cache a small number of objects, and I am trying to initialize it so that it will execute lookups with minimal cost (time). I intend to initialize the HashMap with an initial capacity of 4, but I would appreciate any insight into the appropriate load factor to use to achieve the desired low-cost lookups. (As well, if anyone has suggestions for a better method of caching a small number of objects, I would, again, be appreciative.)
Shaun
Your initial capacity is 4, so by a "small number of objects" you must mean "less than 10", I suppose. In this case pretty much any lookup will find the object almost immediately, including an O(n) sequential array search. So the only reason to care about the speed of this operation is if you are going to be doing it an extremely large number of times; is that the case?
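A tiny JDK-only illustration of the sizing trade-off (the expected count below is a hypothetical value):

import java.util.HashMap;
import java.util.Map;

public class CacheSizing {
    public static void main(String[] args) {
        // A HashMap resizes once its size exceeds capacity * loadFactor, so
        // an initial capacity of 4 with the default 0.75 load factor rehashes
        // on the fourth insert. Sizing the capacity from the expected count
        // avoids any resize while keeping lookups O(1).
        int expected = 8; // hypothetical "small number of objects"
        Map<String, Object> cache =
                new HashMap<String, Object>((int) (expected / 0.75f) + 1, 0.75f);
        for (int i = 0; i < expected; i++) {
            cache.put("key" + i, new Object());
        }
        System.out.println(cache.size()); // 8, with no rehash along the way
    }
}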
-
Caching objects in the data cache as a result of an extent.
Patrick -
I wanted to post this since it's related to a question I posted about extents and the data cache on
11/8.
I discovered that the com.solarmetric.kodo.DefaultFetchBatchSize setting affects how many objects
get put into the data cache as a result of running an extent (in 2.3.2). If I have:
com.solarmetric.kodo.DefaultFetchBatchSize=20
then as soon as I execute the second line below:
Iterator anIterator = results.iterator();
Object anObject = anIterator.next();
I see 20 objects in my data cache. In a prior reply you indicated that you were going to check this
behavior in 2.4 so I wanted to send you this additional information. This behavior isn't a problem
for me.
Les
Les,
This is expected behavior -- the DefaultFetchBatchSize setting instructs Kodo to
retrieve objects from the scrollable ResultSet in groups of 20. So getting
the first item from the iterator will cause a page of 20 objects to be
pulled from the result set.
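As a toy stand-in (not Kodo's actual code) for the mechanism just described, an iterator that fetches from its source in fixed-size batches behaves like this; the first next() call is what makes a whole page of objects appear at once:

import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.NoSuchElementException;

public class BatchingIterator<T> implements Iterator<T> {
    private final Iterator<T> source; // stands in for the scrollable ResultSet
    private final int batchSize;
    private final List<T> page = new ArrayList<T>();
    private int pos = 0;

    public BatchingIterator(Iterator<T> source, int batchSize) {
        this.source = source;
        this.batchSize = batchSize;
    }

    public boolean hasNext() {
        return pos < page.size() || source.hasNext();
    }

    public T next() {
        if (pos == page.size()) {
            if (!source.hasNext()) {
                throw new NoSuchElementException();
            }
            // Refill: pull up to batchSize items from the source in one go.
            page.clear();
            pos = 0;
            for (int i = 0; i < batchSize && source.hasNext(); i++) {
                page.add(source.next());
            }
        }
        return page.get(pos++);
    }

    public void remove() {
        throw new UnsupportedOperationException();
    }
}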
-Patrick
Les Selecky wrote:
> Patrick -
> I wanted to post this since it's related to a question I posted about
> extents and the data cache on 11/8.
> I discovered that the com.solarmetric.kodo.DefaultFetchBatchSize setting
> affects how many objects get put into the data cache as a result of
> running an extent (in 2.3.2). If I have:
> com.solarmetric.kodo.DefaultFetchBatchSize=20
> then as soon as I execute the second line below:
> Iterator anIterator = results.iterator();
> Object anObject = anIterator.next();
> I see 20 objects in my data cache. In a prior reply you indicated that
> you were going to check this behavior in 2.4, so I wanted to send you
> this additional information. This behavior isn't a problem for me.
> Les
Patrick Linskey [email protected]
SolarMetric Inc. http://www.solarmetric.com