Data Cache & Procedure Cache
All,
How do I view the size of the data cache and the procedure cache?
And how do I estimate the cache size taken by a given query? For example, if I run a query, I just want to know how much cache memory it took.
karthi_mrkg wrote:
All,
How do I view the size of the data cache and the procedure cache?
And how do I estimate the cache size taken by a given query? For example, if I run a query, I just want to know how much cache memory it took.
Judging from this, and your other question on locking, you seem to have a background in some other SQL DBMS, other than Oracle...
It might be a good idea to spend a day or two studying the Oracle Concepts guide.
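Since the question is about Oracle, the rough equivalents can be read from the SGA dynamic performance views. A hedged sketch: the views and columns below are standard in 10g and later, but the literal `name` values and the comment labeling of the query can vary by version, so treat this as a starting point rather than an exact recipe.

```sql
-- Approximate sizes of the buffer cache ("data cache") and the shared pool
-- (which contains the library cache, Oracle's analogue of a procedure cache)
SELECT name, bytes
FROM   v$sgainfo
WHERE  name IN ('Buffer Cache Size', 'Shared Pool Size');

-- Memory held by an individual cached statement can be inspected via v$sql;
-- the LIKE pattern here is only an illustrative way to find your statement
SELECT sql_id, sharable_mem, persistent_mem, runtime_mem
FROM   v$sql
WHERE  sql_text LIKE 'SELECT /* my_query */%';
```

Per-query "cache taken" is not a single number in Oracle: a query consumes shared pool memory for its cursor (the v$sql columns above) and buffer cache space for the blocks it touches, which are shared with other sessions.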
Similar Messages
-
SQL Cache/Procedure Cache Limit
Does anyone know if there is a memory limit on cached table memory? The reason I ask is that I have a stored procedure that returns ORA-22813. It appears that the in-memory cached table has reached its full capacity (30K). Is there a way to increase the cached table size?
-
Hi all,
I have an Application Server 11g and a Database 11g. We are using the mod_plsql package, and I have already set PlsqlCacheEnable to Off in cache.conf. But I still see caching for the procedure... What can I do to avoid this?
Thanks for the help.
-
Best size for the procedure cache?
here is my dbcc memusage output:
DBCC execution completed. If DBCC printed error messages, contact a user with System Administrator (SA) role.
Memory Usage:
                              Meg.  2K Blks        Bytes
Configured Memory:      14648.4375  7500000  15360000000
Non Dynamic Structures:     5.5655     2850      5835893
Dynamic Structures:        70.4297    36060     73850880
Cache Memory:           13352.4844  6836472  14001094656
Proc Cache Memory:         85.1484    43596     89284608
Unused Memory:           1133.9844   580600   1189068800
So is my proc cache too small? I could move the 1133 MB of unused memory to the proc cache, but many have suggested that the proc cache should be 20% of total memory.
I am not sure whether that means 20% of max memory or of total named cache memory?
Hi,
Database size: 268288.0 MB
Procedure Cache size is ..
1> sp_configure 'procedure cache size'
2> go
Parameter Name Default Memory Used Config Value Run Value Unit Type
procedure cache size 7000 3362132 1494221 1494221 Memory pages(2k) dynamic
1> sp_monitorconfig 'procedure cache size'
2> go
Usage information at date and time: May 15 2014 11:48AM.
Name Num_free Num_active Pct_act Max_Used Reuse_cnt Instance_Name
procedure cache size 1101704 392517 26.27 787437 746136 NULL
1> sp_configure 'total logical memory'
2> go
Parameter Name Default Memory Used Config Value Run Value Unit Type
total logical memory 73728 15624170 7812085 7838533 memory pages(2k) read-only
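The run values above are in 2K pages, so they are easy to misread. A small sketch, using the numbers from this sp_configure output, to convert them to megabytes and compare against the 20%-of-total rule of thumb (the 20% figure is only the guideline quoted in this thread, not a hard rule):

```java
public class ProcCacheCheck {
    // Convert ASE 2K memory pages to megabytes.
    static double pagesToMB(long pages2k) {
        return pages2k * 2048.0 / (1024 * 1024);
    }

    public static void main(String[] args) {
        // Run values taken from the sp_configure output above.
        long totalLogicalPages = 7812085L; // 'total logical memory'
        long procCachePages = 1494221L;    // 'procedure cache size'
        double totalMB = pagesToMB(totalLogicalPages);
        double procMB = pagesToMB(procCachePages);
        System.out.printf("total=%.0f MB, proc cache=%.0f MB (%.1f%% of total)%n",
                totalMB, procMB, 100.0 * procMB / totalMB);
    }
}
```

By this arithmetic the configured procedure cache is already about 19% of total logical memory, so whether you need more depends on the Reuse_cnt trend rather than on the percentage alone.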
I was told by an ASE expert that the 'Reuse_cnt' parameter should be zero.
Please advise, with an explanation, whether I need to increase the procedure cache.
Thanks
Rajesh -
Error 701, not enough procedure cache (*Urgent*)
Hi All,
Is there a SQL that I can use to determine the SPID that is the root cause of a 701 (not enough procedure cache)?
Additional information:
This is happening in 2 servers; 15G memory with 2G procedure cache and 10G memory with 1.25G procedure cache.
Environment: Sun Solaris 10 running ASE 15.0.3 ESD#4
Monitoring has been enabled.
Sybase recommends either increasing the procedure cache or running dbcc proc_cache(free_unused) periodically.
I would rather identify the process causing it or the configuration setting in these 2 environment that is causing this to happen.
Thanks in advance.
Anil
Another thing you might try is the dbcc memusage command. It will display the 20 biggest objects in procedure cache, so if a few extraordinarily large cached procedures are the cause of the 701s, this may identify what those objects are. It won't identify who first ran them, though, or even show whether any spid is currently executing any of those 20 items.
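If you capture that output to a file, the per-object sizes can be ranked mechanically instead of by eyeball. A hedged Java sketch that parses the "Object Name:" / "Size of trees:" / "Size of plans:" lines of the textual format shown in the example that follows (it assumes exactly that layout; adjust the patterns if your ASE version prints differently):

```java
import java.util.*;
import java.util.regex.*;

public class MemusageRank {
    // Sums the tree and plan sizes (in MB) reported for each object in
    // captured "dbcc memusage" output and returns names sorted largest-first.
    static List<Map.Entry<String, Double>> rank(String output) {
        Map<String, Double> sizes = new LinkedHashMap<>();
        String current = null;
        Pattern size = Pattern.compile("Size of (?:trees|plans): ([0-9.]+) Mb");
        for (String line : output.split("\\R")) {
            line = line.trim();
            if (line.startsWith("Object Name: ")) {
                current = line.substring("Object Name: ".length());
            } else if (current != null) {
                Matcher m = size.matcher(line);
                if (m.find()) {
                    sizes.merge(current, Double.parseDouble(m.group(1)), Double::sum);
                }
            }
        }
        List<Map.Entry<String, Double>> ranked = new ArrayList<>(sizes.entrySet());
        ranked.sort(Map.Entry.<String, Double>comparingByValue().reversed());
        return ranked;
    }
}
```

Running this over several captures taken near the 701 occurrences should make any unusually large procedure stand out.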
Example:
1> set switch on 3604
2> go
Switch 3604 ('print_output_to_client') is turned on.
All supplied switches are successfully turned on.
1> dbcc memusage
2> go
Memory Usage:
                            Meg.  2K Blks      Bytes
Configured Memory:      156.2500    80000  163840000
Non Dynamic Structures:   5.3751     2753    5636252
Dynamic Structures:     123.1582    63057  129140736
Cache Memory:             9.0098     4613    9447424
Proc Cache Memory:       17.5703     8996   18423808
Unused Memory:            1.1191      573    1173504
Buffer Cache Memory, Top 8:
Cache           Buf Pool   DB Id  Partition Id  Index Id    Meg.
default data c             31515             8         0  0.0117
                2K         31515             8         0  0.0117
default data c                 1             8         0  0.0039
                2K             1             8         0  0.0039
default data c                 4             8         0  0.0039
                2K             4             8         0  0.0039
default data c                 2             8         0  0.0020
                2K             2             8         0  0.0020
default data c                 3             8         0  0.0020
                2K             3             8         0  0.0020
default data c                 5             8         0  0.0020
                2K             5             8         0  0.0020
default data c                14             8         0  0.0020
                2K            14             8         0  0.0020
default data c             31514             8         0  0.0020
                2K         31514             8         0  0.0020
Procedure Cache, Top 20:
Database Id: 31514
Object Id: 121048436
Object Name: sp_downgrade_esd
Version: 1
Uid: 1
Type: stored procedure
Number of trees: 0
Size of trees: 0.000000 Mb, 0.000000 bytes, 0 pages
Bytes lost for alignment 0 (Percentage of total: 0.000000)
Number of plans: 1
Size of plans: 1.217173 Mb, 1276298.000000 bytes, 628 pages
Bytes lost for alignment 6488 (Percentage of total: 0.508345)
Database Id: 31514
Object Id: 905051229
Object Name: sp_do_poolconfig
Version: 1
Uid: 1
Type: stored procedure
Number of trees: 0
Size of trees: 0.000000 Mb, 0.000000 bytes, 0 pages
Bytes lost for alignment 0 (Percentage of total: 0.000000)
Number of plans: 1
Size of plans: 1.080750 Mb, 1133248.000000 bytes, 582 pages
Bytes lost for alignment 3674 (Percentage of total: 0.324201)
Database Id: 31515
Object Id: 1344004788
Object Name: sp_dbcc_run_faultreport
Version: 1
Uid: 1
Type: stored procedure
Number of trees: 1
Size of trees: 0.972553 Mb, 1019796.000000 bytes, 501 pages
Bytes lost for alignment 4609 (Percentage of total: 0.451953)
Number of plans: 0
Size of plans: 0.000000 Mb, 0.000000 bytes, 0 pages
Bytes lost for alignment 0 (Percentage of total: 0.000000) -
Problem getting data from a cache in Hibernate
I am storing data inside a cache, but when I get the data back from the cache it returns null.
How can I retrieve the data from the cache?
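As an aside on the symptom itself: a null from a cache get usually just means nothing was ever stored under that exact key (or the entry expired or was evicted). A minimal stdlib sketch of the general get/put contract; this is an illustration of the idea, not EHCache's actual API:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Toy cache illustrating why get() can return null: a value must be
// put (or loaded) under the exact same key before it can be read back.
public class SimpleCache<K, V> {
    private final ConcurrentHashMap<K, V> store = new ConcurrentHashMap<>();

    public V get(K key) {
        return store.get(key); // null when the key was never stored or was removed
    }

    public void put(K key, V value) {
        store.put(key, value);
    }

    // Read-through style access: compute and cache the value on first use.
    public V getOrLoad(K key, Function<K, V> loader) {
        return store.computeIfAbsent(key, loader);
    }
}
```

With a read-through loader, the first get triggers the load and later gets hit the cache; in EHCache the same effect is typically achieved with a configured loader or an explicit put after a miss.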
I am using EHCache.
Hi,
You have made one mistake while setting the input parameters for the BAPI. Follow these steps to set the input for the BAPI:
Bapi_Goodsmvt_Getitems_Input input = new Bapi_Goodsmvt_Getitems_Input();
wdContext.nodeBapi_Goodsmvt_Getitems_Input().bind(input);
Bapi2017_Gm_Material_Ra input1 = new Bapi2017_Gm_Material_Ra();
wdContext.nodeBapi2017_Gm_Material_Ra().bind(input1);
Bapi2017_Gm_Move_Type_Ra input2 = new Bapi2017_Gm_Move_Type_Ra();
wdContext.nodeBapi2017_Gm_Move_Type_Ra().bind(input2);
Bapi2017_Gm_Plant_Ra input3 = new Bapi2017_Gm_Plant_Ra();
wdContext.nodeBapi2017_Gm_Plant_Ra().bind(input3);
Bapi2017_Gm_Spec_Stock_Ra input4 = new Bapi2017_Gm_Spec_Stock_Ra();
wdContext.nodeBapi2017_Gm_Spec_Stock_Ra().bind(input4);
input1.setSign("I");
input1.setOption("EQ");
input1.setLow("1857");
input2.setSign("I");
input2.setOption("EQ");
input2.setLow("M110");
input3.setSign("I");
input3.setOption("EQ");
input3.setLow("309");
input4.setSign("I");
input4.setOption("EQ");
input4.setLow("W");
wdThis.wdGetWdsdgoodsmvmtcustController().execute_BAPI_GOODSMOVEMENT_GETITEMS();
Finally, invalidate your output node like:
wdContext.node<output node name>.invalidate();
Also, put your code inside a try/catch block to display any exception, if one occurs.
Regards,
Amit Bagati -
Does the client automatically cache the data it gets from the cache server?
Hi expert,
I have 2 questions about the client local cache. Would you please help to give me some suggestion?
1. Will the client automatically cache locally the data it gets from the cache server the first time, and automatically update the data in its local cache when it gets the same data from the cache server again? I have gone through the API reference but cannot find any API to query the data currently held in the local cache.
2. If the client automatically caches the data it gets from the cache server, is there any way for the client to receive the events that happen in its local cache, such as entry created, entry deleted, and entry updated? In my opinion, when an entry is fetched from the cache server the first time, the MapListener's entry-created event should be triggered; when the same entry is fetched again, the entry-updated event should be triggered.
However, I have tried a client with a replicated cache, a client with a partitioned cache, an extend client with a remote cache, and a client with a local cache (the front-cache part of a near cache); the client (whose NamedCache object has the MapListener set) does not get any event notification after getting data from the cache server. By the way, my listener is fine, since the entry-created and entry-updated events are triggered when putting data.
Your suggestion is very appreciated. :)
Hi,
If I were you I would read this http://download.oracle.com/docs/cd/E14526_01/coh.350/e14510/toc.htm
and particularly the section about Near Caching here http://download.oracle.com/docs/cd/E14526_01/coh.350/e14510/nearcache.htm#CDEFEAJG
which is what you are asking about in your question.
Near Caching is how Coherence stores data locally, which is the answer to your first question. How Near Caching works is explained in the documentation.
Events, which you ask about in your second question are explained here http://download.oracle.com/docs/cd/E14526_01/coh.350/e14510/delivereventsjava.htm#CBBIIEFA
It might be that ContinuousQueryCache is closer to what you want. This is explained here: http://download.oracle.com/docs/cd/E14526_01/coh.350/e14510/queryabledatafabric.htm#sthref38 A ContinuousQueryCache is like having a subset of the underlying cache on the local client, which you can then listen to, etc.
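The observed behavior (events fire on put, nothing fires on get) can be modeled with a toy listener map: reads are not mutations, so they generate no events; only changes to the map's contents notify listeners. A plain-Java sketch of that principle; this is a model, not the Coherence MapListener API:

```java
import java.util.*;
import java.util.function.BiConsumer;

// Toy observable map: listeners see inserts and updates, but plain reads
// never generate events, mirroring the behavior described above.
public class ObservableMap<K, V> {
    private final Map<K, V> store = new HashMap<>();
    private final List<BiConsumer<String, K>> listeners = new ArrayList<>();

    public void addListener(BiConsumer<String, K> listener) {
        listeners.add(listener);
    }

    public V get(K key) {
        return store.get(key); // a read: no event fired
    }

    public void put(K key, V value) {
        String event = store.containsKey(key) ? "update" : "insert";
        store.put(key, value);
        for (BiConsumer<String, K> l : listeners) l.accept(event, key);
    }
}
```

A client-side get that merely copies data into a front map is, from the listener's point of view, still a read of the back map, which is why no entry-created event appears.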
JK -
How does Firefox decide the Expires date for a cached JavaScript file?
Can you please help me understand how Firefox decides on the Expires date for a cached JavaScript file? (My server did not set any Expires header, but Firefox set one.) I have tried to understand this, but different JavaScript files get different Expires date values when they are cached. Please help, as I have tried a lot and could not get a proper answer. Thanks in advance.
Try posting at the Web Development / Standards Evangelism forum at MozillaZine. The helpers over there are more knowledgeable about web page development issues with Firefox.
http://forums.mozillazine.org/viewforum.php?f=25
You'll need to register and login to be able to post in that forum. -
How to access and display data stored in the cache for a SUP 2.0 workflow?
HI to all.
I have an application with a menu item which obtains data through an online request. The result is shown in a listview.
My problem is when my BlackBerry has no connection (offline scenario). When I select the menu item, I get an error.
How can I access and display data stored in the cache for my MBO? I have read that I can use getMessageValueCollection in custom.js to access my data, but once I get the data, how can I associate it with a listview like an online request? Do I have to develop my own screen in HTML, or how?
Thanks.
I'm not entirely clear on what you mean by "cache" in this context. I'm going to assume that what you are really referring to is the contents of the workflow message, so correct me if I'm wrong. There is, in later releases, the ability to set a device-side request cache time so that if you issue an online request it'll store the results in an on-device cache, and if you subsequently reissue the same online request with the same parameter values within that timeout period it'll get the data from the cache rather than going to the server; but my gut instinct is that this is not what you are referring to.
To access the data in the workflow message, you are correct, you would call getMessageValueCollection(). It will return an object hierarchy with objects defined in WorkflowMessage.js. Note that if your online request fails, the data won't magically appear in your workflow message.
To use the data in the workflow message to update a listview, feel free to examine the code in the listview widgets and in API.js. You can also create a custom listview as follows:
function customBeforeNavigateForward(screenKey, destScreenKey) {
    // In this example, we only want to replace the listview on the "My Approvals" screen
    if (destScreenKey == 'My_Approvals') {
        // First, we get the MessageValueCollection that we are currently operating on
        var message = getCurrentMessageValueCollection();
        // Next, we'll get the list MessageValue from that MessageValueCollection
        var itemList = message.getData("LeaveApprovalItem3");
        // Because it's a list, the Value of the MessageValue will be an array
        var items = itemList.getValue();
        // Figure out how many items are in the list
        var numOfItems = items.length;
        // Iterate through the results and build our list
        var i = 0;
        var htmlOutput = '<div><ul data-role="listview" data-theme="k" data-filter="true">';
        while (i < numOfItems) {
            // Get the current item. This will be a MessageValueCollection.
            var currItem = items[i];
            // Get the properties of the current item.
            var owner = currItem.getData("LeaveApprovalItem_owner_attribKey").getValue();
            var type = currItem.getData("LeaveApprovalItem_itemType_attribKey").getValue();
            var status = currItem.getData("LeaveApprovalItem_itemStatus_attribKey").getValue();
            var startDate = currItem.getData("LeaveApprovalItem_startDate_attribKey").getValue();
            var endDate = currItem.getData("LeaveApprovalItem_endDate_attribKey").getValue();
            // Format the data in a specific presentation
            var formatStartDate = Date.parse(startDate).toString('MMM/d/yyyy');
            var formatEndDate = Date.parse(endDate).toString('MMM/d/yyyy');
            // Decide which thumbnail image to use
            var imageToUse = '';
            if (status == 'Pending') {
                imageToUse = 'pending.png';
            } else if (status == 'Rejected') {
                imageToUse = 'rejected.png';
            } else {
                imageToUse = 'approved.png';
            }
            // Add a new line to the listview for this item
            htmlOutput += '<li><a id="' + currItem.getKey() + '" class="listClick">';
            htmlOutput += '<img src="./images/' + imageToUse + '" class="ui-li-thumb">';
            htmlOutput += '<h3 class="listTitle">' + type;
            htmlOutput += ' ( ' + owner + ' ) ';
            htmlOutput += '</h3>';
            htmlOutput += '<p>' + formatStartDate + ' : ' + formatEndDate + '</p>';
            htmlOutput += '</a></li>';
            i++;
        }
        htmlOutput += '</ul></div>';
        // Remove the old listview and add in the new one.
        // Note: this is suboptimal and should be fixed if you want to use it in production.
        $('#My_ApprovalsForm').children().eq(2).hide();
        $('#My_ApprovalsForm').children().eq(1).after(htmlOutput);
        // Add in a handler so that when a line is clicked on, it'll go to the right details screen
        $(".listClick").click(function() {
            currListDivID = $(this).parent().parent();
            $(this).parent().parent().addClass("ui-btn-active");
            navigateForward("Request_Details", this.id);
            if (isBlackBerry()) {
                return;
            }
        });
    }
    // All done.
    return true;
}
-
Good Afternoon,
One of my clients has a SAP NetWeaver CE 7.1 (EHP1) system, and there are several development teams using CAF on it (several web services, etc.).
One of the development teams is requesting a NetWeaver Java cache cleanup. The argument is that some objects developed by them are held in the system cache, and the cache needs to be cleared in order to reset those objects. So they are asking for an instance restart in a productive system, and they are asking for it on a regular basis.
My administration knowledge is related to SAP ABAP systems; I don't have deep knowledge of SAP Java instances. I've tried to find information about this, but I can't find any procedure to do it manually without restarting the SAP system.
Does this cleanup of the NetWeaver Java cache make sense as a request? Is there any standard SAP procedure to do this without restarting the SAP instance?
Any help will be appreciated.
Thanks,
Pedro Gaspar
From the Control Panel, go to Java -> General tab, Temporary Internet Files
-> View... -> Delete all the content there.
Then go to:
> Integration Builder page (http://<host>:<port>/rep)
> Administration
> Tab - Directory (Repository as well)
> Java Web Start Administration
> Restore Archives and Generate New Signature
Portal Caches
The main portal caches are:
• Navigation cache
• PCD cache
• Portal runtime cache
• Database cache
• UME cache
Navigation cache
To clear the Navigation cache:
System Administration -> Navigation
Clear the cache for the ROLES connector.
PRT cache
The content admin can enable portal content to be cached in memory by the portal. This content will be stored in the portal runtime cache, which actually holds iView content.
To clear the PRT cache:
http:// server:port /irj/servlet/prt/portal/prtroot/com.sap.portal.prt.cache.PRTRegionMemoryClear
PCD cache
To clear the PCD cache:
System Administration -> Support -> Support Desk -> Portal Content Directory -> PCD Administration
Database cache
To clear the DB cache:
http:// server:port /irj/servlet/prt/portal/prtroot/com.sap.portal.prt.cache.PRTRegionDBClear
UME cache
To clear this cache:
System Administration -> System Configuration -> UME Configuration -> Support -> Invalidate cluster wide cache
HTTP Provider Cache
This is the cache for static files like CSS, JavaScript, images, and HTML. You'll have to clear this cache when you redeploy an application and receive JavaScript errors. To clear this cache:
Visual Administrator -> Cluster -> Server # -> Services -> HTTP Provider
Regards,
Dinesh -
Error when Publish data to passive cache
Hi,
I am using active-passive push replication. When I add data to the active cache, the data is added to the cache, but I get the error below when the data is published to the passive cache.
2011-03-31 22:38:06.014/291.473 Oracle Coherence GE 3.6.1.0 <Error> (thread=Proxy:ExtendTcpProxyService:TcpAcceptor, member=1): Failed to publish EntryOperation{siteName=site1, clusterName=Cluster1, cacheName=dist-contact-cache, operation=Insert, publishableEntry=PublishableEntry{key=Binary(length=12, value=0x154E094D65656E616B736869), value=Binary(length=88, value=0x12813A15A90F00004E094D65656E61
6B736869014E07506C6F74203137024E074368656E6E6169034E0954616D696C4E616475044E06363030303432401A155B014E0524737263244E0E73697465312D436C757374657231), originalValue=Binary(length=0, value=0x)}} to Cache dist-contact-cache because of
(Wrapped) java.io.StreamCorruptedException: invalid type: 78 Class:com.oracle.coherence.patterns.pushreplication.publishers.cache.AbstractCachePublisher
2011-03-31 22:38:06.014/291.473 Oracle Coherence GE 3.6.1.0 <D5> (thread=Proxy:ExtendTcpProxyService:TcpAcceptor, member=1): An exception occurred while processing a InvocationRequest for Service=Proxy:ExtendTcpProxyService:TcpAcceptor: (Wrapped: Failed to publish a batch with the publisher [Active Publisher] on cache[dist-contact-cache]) java.lang.IllegalStateException: Attempted to publish to cache dist-contact-cache
Here is my Coherence cache config XML file:
<!DOCTYPE cache-config SYSTEM "cache-config.dtd">
<cache-config xmlns:sync="class:com.oracle.coherence.patterns.pushreplication.configuration.PushReplicationNamespaceContentHandler">
<caching-schemes>
<sync:provider pof-enabled="true">
<sync:coherence-provider />
</sync:provider>
<caching-scheme-mapping>
<cache-mapping>
<cache-name>dist-contact-cache</cache-name>
<scheme-name>distributed-scheme-with-publishing-cachestore</scheme-name>
<sync:publisher>
<sync:publisher-name>Active Publisher</sync:publisher-name>
<sync:publisher-scheme>
<sync:remote-cluster-publisher-scheme>
<sync:remote-invocation-service-name>remote-site2</sync:remote-invocation-service-name>
<sync:remote-publisher-scheme>
<sync:local-cache-publisher-scheme>
<sync:target-cache-name>dist-contact-cache</sync:target-cache-name>
</sync:local-cache-publisher-scheme>
</sync:remote-publisher-scheme>
<sync:autostart>true</sync:autostart>
</sync:remote-cluster-publisher-scheme>
</sync:publisher-scheme>
</sync:publisher>
</cache-mapping>
</caching-scheme-mapping>
<proxy-scheme>
<service-name>ExtendTcpProxyService</service-name>
<thread-count>5</thread-count>
<acceptor-config>
<tcp-acceptor>
<local-address>
<address>localhost</address>
<port>9099</port>
</local-address>
</tcp-acceptor>
</acceptor-config>
<autostart>true</autostart>
</proxy-scheme>
<remote-invocation-scheme>
<service-name>remote-site2</service-name>
<initiator-config>
<tcp-initiator>
<remote-addresses>
<socket-address>
<address>localhost</address>
<port>5000</port>
</socket-address>
</remote-addresses>
<connect-timeout>2s</connect-timeout>
</tcp-initiator>
<outgoing-message-handler>
<request-timeout>5s</request-timeout>
</outgoing-message-handler>
</initiator-config>
<serializer>
<class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
</serializer>
</remote-invocation-scheme>
</caching-schemes>
</cache-config>
Please find below the active and passive server start commands and log files.
Thanks.
Active server start command
java -server -showversion -Xms128m -Xmx128m -Dtangosol.coherence.ttl=0 -Dtangosol.coherence.cacheconfig=active-cache-config.xml
-Dtangosol.coherence.cluster="Cluster1" -Dtangosol.coherence.site="site1" -Dtangosol.coherence.clusteraddress="224.3.6.2"
-Dtangosol.coherence.clusterport="3001" -Dtangosol.pof.config=pof-config.xml -cp "config;lib\custom-types.jar;lib\coherence.jar;lib/coherence-common-1.7.3.20019.jar;lib/coherence-pushreplicationpattern-3.0.3.20019.jar;lib/coherence-messagingpattern-2.7.3.20019.jar;" com.tangosol.net.DefaultCacheServer
Passive server start command
java -server -showversion -Xms128m -Xmx128m -Dtangosol.coherence.ttl=0 -Dtangosol.coherence.cacheconfig=passive-cache-config.xml
-Dtangosol.coherence.cluster="Cluster2" -Dtangosol.coherence.site="site2" -Dtangosol.coherence.clusteraddress="224.3.6.3"
-Dtangosol.coherence.clusterport="3003" -Dtangosol.pof.config=pof-config.xml -cp "config;lib\custom-types.jar;
lib\coherence.jar;lib/coherence-common-1.7.3.20019.jar;lib/coherence-pushreplicationpattern-3.0.3.20019.jar;
lib/coherence-messagingpattern-2.7.3.20019.jar;" com.tangosol.net.DefaultCacheServer
Active Server log
<Error> (thread=PublishingService:Thread-3, member=1): Failed to publish the range MessageTracker{MsgId{40-1} } of messages for subscription SubscriptionIdentifier{destinationIdentifier=Identifier{dist-contact-cache}, subscriberIdentifier=Identifier{Active Publisher}} with publisher com.oracle.coherence.patterns.pushreplication.publishers.RemoteClusterPublisher@423d4f Class:com.oracle.coherence.patterns.pushreplication.providers.coherence.CoherencePublishingService
2011-04-01 20:23:20.172/65.972 Oracle Coherence GE 3.6.1.0 <Info> (thread=PublishingService:Thread-3, member=1): Publisher Exception was as follows Class:com.oracle.coherence.patterns.pushreplication.providers.coherence.CoherencePublishingService
(Wrapped: Failed to publish a batch with the publisher [Active Publisher] on cache [dist-contact-cache]) java.lang.IllegalStateException: Attempted to publish t
o cache dist-contact-cache
at com.tangosol.util.Base.ensureRuntimeException(Base.java:293)
at com.oracle.coherence.patterns.pushreplication.publishers.RemoteClusterPublisher$RemotePublishingAgent.run(RemoteClusterPublisher.java:348)
at com.tangosol.coherence.component.net.extend.proxy.serviceProxy.InvocationServiceProxy.query(InvocationServiceProxy.CDB:6)
at com.tangosol.coherence.component.net.extend.messageFactory.InvocationServiceFactory$InvocationRequest.onRun(InvocationServiceFactory.CDB:12)
at com.tangosol.coherence.component.net.extend.message.Request.run(Request.CDB:4)
at com.tangosol.coherence.component.net.extend.proxy.serviceProxy.InvocationServiceProxy.onMessage(InvocationServiceProxy.CDB:9)
at com.tangosol.coherence.component.net.extend.Channel.execute(Channel.CDB:39)
at com.tangosol.coherence.component.net.extend.Channel.receive(Channel.CDB:26)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Peer.onNotify(Peer.CDB:96)
at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
at java.lang.Thread.run(Thread.java:619)
Caused by: java.lang.IllegalStateException: Attempted to publish to cache dist-contact-cache
at com.oracle.coherence.patterns.pushreplication.publishers.cache.AbstractCachePublisher.publishBatch(AbstractCachePublisher.java:163)
at com.oracle.coherence.patterns.pushreplication.publishers.RemoteClusterPublisher$RemotePublishingAgent.run(RemoteClusterPublisher.java:343)
... 9 more
Caused by: (Wrapped) java.io.StreamCorruptedException: invalid type: 78
at com.tangosol.util.ExternalizableHelper.fromBinary(ExternalizableHelper.java:266)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.PartitionedService$ConverterKeyToBinary.convert(PartitionedService.CDB:16)
at com.tangosol.util.ConverterCollections$ConverterInvocableMap.invoke(ConverterCollections.java:2156)
at com.tangosol.util.ConverterCollections$ConverterNamedCache.invoke(ConverterCollections.java:2622)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$ViewMap.invoke(PartitionedCache.CDB:11)
at com.tangosol.coherence.component.util.SafeNamedCache.invoke(SafeNamedCache.CDB:1)
at com.oracle.coherence.patterns.pushreplication.publishers.cache.AbstractCachePublisher.publishBatch(AbstractCachePublisher.java:142)
... 10 more
Caused by: java.io.StreamCorruptedException: invalid type: 78
at com.tangosol.util.ExternalizableHelper.readObjectInternal(ExternalizableHelper.java:2266)
at com.tangosol.util.ExternalizableHelper.readObject(ExternalizableHelper.java:2254)
at com.tangosol.io.DefaultSerializer.deserialize(DefaultSerializer.java:74)
at com.tangosol.util.ExternalizableHelper.deserializeInternal(ExternalizableHelper.java:2708)
at com.tangosol.util.ExternalizableHelper.fromBinary(ExternalizableHelper.java:262)
... 16 more
Passive Server log
2011-04-01 20:23:20.141/56.925 Oracle Coherence GE 3.6.1.0 <Error> (thread=Proxy:ExtendTcpProxyService:TcpAcceptor, member=1): Failed to publish EntryOperation{siteName=site1, clusterName=Cluster1, cacheName=dist-contact-cache, operation=Insert, publishableEntry=PublishableEntry{key=Binary(length=12, value=0x154E094D65656E616B736869), value=Binary(length=88, value=0x12813A15A90F00004E094D65656E616
B736869014E07506C6F74203137024E074368656E6E6169034E0954616D696C4E616475044E06363030303432401A155B014E0524737263244E0E73697465312D436C757374657231), originalValue=Binary(length=0, value=0x)}} to Cache dist-contact-cache because of (Wrapped) java.io.StreamCorruptedException: invalid type: 78 Class:com.oracle.coherence.patterns.pushreplication.publishers.cache.AbstractCachePublisher
2011-04-01 20:23:20.141/56.925 Oracle Coherence GE 3.6.1.0 <D5> (thread=Proxy:ExtendTcpProxyService:TcpAcceptor, member=1): An exception occurred while processing a InvocationRequest for Service=Proxy:ExtendTcpProxyService:TcpAcceptor: (Wrapped: Failed to publish a batch with the publisher [Active Publisher] on cache [
dist-contact-cache]) java.lang.IllegalStateException: Attempted to publish to cache dist-contact-cache
at com.tangosol.util.Base.ensureRuntimeException(Base.java:293)
at com.oracle.coherence.patterns.pushreplication.publishers.RemoteClusterPublisher$RemotePublishingAgent.run(RemoteClusterPublisher.java:348)
at com.tangosol.coherence.component.net.extend.proxy.serviceProxy.InvocationServiceProxy.query(InvocationServiceProxy.CDB:6)
at com.tangosol.coherence.component.net.extend.messageFactory.InvocationServiceFactory$InvocationRequest.onRun(InvocationServiceFactory.CDB:12)
at com.tangosol.coherence.component.net.extend.message.Request.run(Request.CDB:4)
at com.tangosol.coherence.component.net.extend.proxy.serviceProxy.InvocationServiceProxy.onMessage(InvocationServiceProxy.CDB:9)
at com.tangosol.coherence.component.net.extend.Channel.execute(Channel.CDB:39)
at com.tangosol.coherence.component.net.extend.Channel.receive(Channel.CDB:26)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Peer.onNotify(Peer.CDB:96)
at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
at java.lang.Thread.run(Thread.java:619)
Caused by: java.lang.IllegalStateException: Attempted to publish to cache dist-contact-cache
at com.oracle.coherence.patterns.pushreplication.publishers.cache.AbstractCachePublisher.publishBatch(AbstractCachePublisher.java:163)
at com.oracle.coherence.patterns.pushreplication.publishers.RemoteClusterPublisher$RemotePublishingAgent.run(RemoteClusterPublisher.java:343)
... 9 more
Caused by: (Wrapped) java.io.StreamCorruptedException: invalid type: 78
at com.tangosol.util.ExternalizableHelper.fromBinary(ExternalizableHelper.java:266)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.PartitionedService$ConverterKeyToBinary.convert(PartitionedService.CDB:16)
at com.tangosol.util.ConverterCollections$ConverterInvocableMap.invoke(ConverterCollections.java:2156)
at com.tangosol.util.ConverterCollections$ConverterNamedCache.invoke(ConverterCollections.java:2622)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$ViewMap.invoke(PartitionedCache.CDB:11)
at com.tangosol.coherence.component.util.SafeNamedCache.invoke(SafeNamedCache.CDB:1)
at com.oracle.coherence.patterns.pushreplication.publishers.cache.AbstractCachePublisher.publishBatch(AbstractCachePublisher.java:142)
... 10 more
Caused by: java.io.StreamCorruptedException: invalid type: 78
at com.tangosol.util.ExternalizableHelper.readObjectInternal(ExternalizableHelper.java:2266)
at com.tangosol.util.ExternalizableHelper.readObject(ExternalizableHelper.java:2254)
at com.tangosol.io.DefaultSerializer.deserialize(DefaultSerializer.java:74)
at com.tangosol.util.ExternalizableHelper.deserializeInternal(ExternalizableHelper.java:2708)
at com.tangosol.util.ExternalizableHelper.fromBinary(ExternalizableHelper.java:262)
... 16 more -
Hi
Please let me know the FM/BAPI which will get me the MRP data from liveCache.
My requirement is to get the date and quantity for each element; the MRP elements to be used are VC, VE, and LF, based on the plant and material.
Thanks in advance
Amit
Hello Sebastian,
FM '/SAPAPO/OM_PEG_CAT_GET_ORDERS' seems to cover my requirements and I'm happy I found out about it here. Unfortunately I have some problems testing with SE37:
Specifying only PEGID together with needed categories was not successful. Searching for calling programs of this FM I found FM '/SAPAPO/ATP_DISP_LC_SINGLE' which additionally sets parameter IS_GEN_PARAMS-SIMVERSION to '000' and receives the correct values.
Now I have the problem that, with the same input values set, I still receive the exception 'LC_APPL_ERROR' and don't know why. Since I'm new to APO it could be some basic setting, but I'm still wondering why this happens. Hopefully you can provide some helpful information; otherwise I think I'm lost.
Thanks in advance,
eddy -
Keep Oracle Spatial data in Coherence Caches?
Can I keep Oracle Spatial data in Coherence Caches?
You can store Oracle Spatial data in Coherence caches, but creating spatial indexes in Coherence needs too much effort, I guess. How would you create the Spatial geocoding package, map symbols, map styling rules, and the Spatial network model?
Edited by: junez on 07-Jan-2010 12:24 -
Stale Near Cache data when all Cache Server fails and are restarted
Hi,
We are currently making use of Coherence 3.6.1. We are seeing an issue. The following is the scenario:
- caching servers and proxy server are all restarted.
- one of the caches, say "TestCache", is bulk-loaded with some data (e.g. key/value pairs key1/value1, key2/value2, key3/value3) on the caching servers.
- a near cache client connects to the server cluster via the Extend proxy server.
- the near cache client is primed with all data from the server-side "TestCache". Hence, the near cache client now has all key/values locally (i.e. key1/value1, key2/value2, key3/value3).
- all caching servers in the cluster go down, but the Extend proxy server stays up.
- all caching servers in the cluster come back up.
- we reload all cache data into "TestCache" on the caching servers, but this time it only has the key/values key1/value1 and key2/value2.
- So the caching servers' state for "TestCache" is that it should only have key1/value1 and key2/value2, but the near cache client still thinks it has key1/value1, key2/value2, key3/value3. In effect, it still knows about key3/value3, which no longer exists.
Is there any way for the near cache client to invalidate key3/value3 automatically? This scenario happens because the Extend proxy server is not actually down, only the caching servers are, but the near cache client apparently does not notice this and does not invalidate its near cache data.
Can anyone help?
Thanks
Regards
Wilson.
Hi,
I do have the invalidation strategy set to "ALL". Remember, this cache client is connected via the Extend proxy server, whose connectivity is still OK; it is only the caching servers holding the storage data in the cluster that are down.
Please let me know what else we can try.
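The staleness described in this thread can be modelled with two plain maps: the near (front) map only forgets a key if an invalidation event arrives, and no per-key event is ever generated for a key that simply never returns after the storage tier is reloaded. Here is a stdlib-only sketch of the problem, plus one blunt remedy (dropping the whole front map when a storage-tier restart is detected); the class and method names are illustrative, not Coherence APIs.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class NearCacheSketch {

    final Map<String, String> back = new ConcurrentHashMap<>();  // storage tier
    final Map<String, String> front = new ConcurrentHashMap<>(); // near cache

    // Read-through: serve from the front map, falling back to the back map.
    // A key cached in front is never re-checked against back.
    String get(String key) {
        return front.computeIfAbsent(key, back::get);
    }

    // Blunt remedy: if the storage tier is known to have restarted,
    // drop everything in the near cache so stale keys cannot be served.
    void onStorageRestart() {
        front.clear();
    }

    public static void main(String[] args) {
        NearCacheSketch c = new NearCacheSketch();
        c.back.put("key1", "value1");
        c.back.put("key3", "value3");
        c.get("key1");
        c.get("key3");                // near cache is now primed with both keys

        c.back.clear();               // storage tier goes down...
        c.back.put("key1", "value1"); // ...and is reloaded without key3

        System.out.println(c.get("key3")); // still "value3": stale!
        c.onStorageRestart();
        System.out.println(c.get("key3")); // null: stale entry gone
    }
}
```

In a real deployment the `onStorageRestart()` trigger would have to come from somewhere; reacting to Coherence member/service events on the client is one direction to investigate, but treat that as a lead rather than a confirmed fix.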
Thanks
Regards
Wilson.
-
Choosing the type of data valid for caching
During discussions with other members of our team, we have been questioning what objects we should and shouldn't cache.
One suggestion has been to cache account details while they are in use (like a user session) and then either remove the objects or let them expire from the cache. This would allow system components to access the object without having to pass it around.
To me, this isn't the sort of data that is traditionally cached but I'd be interested to hear other views. Coherence does seem to allow the boundaries to be pushed.
Mike
Hi Mike,
Coherence is commonly used for a number of scenarios:
1) The most obvious, data caching (near cache)
2) Metadata and security information (replicated cache)
3) Session management (near cache)
4) Messaging and queueing (distributed cache)
Coherence is very well suited for most situations that involve sharing data between cluster members or communicating between members. It fits pretty much any scenario outside of bulk video streaming (though people do ask about that occasionally).
In your situation, Coherence makes data sharing extremely simple, which is a further benefit (no need for RMI, etc).
The decision is based on whether it is cheaper to pull data from cache than from an external data source. The answer is almost always yes, for two reasons: (1) cached data is very efficient to manage and access and (2) the application tier is very inexpensive (cost-wise) in terms of capital and recurring expenses relative to the back-end tier(s).
Jon Purdy
Tangosol, Inc.
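The cost argument above — a cached read is far cheaper than a back-end read — is essentially memoization. A minimal single-JVM sketch in plain Java, where `slowLookup` is a hypothetical stand-in for a database or service call:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class ReadThroughSketch {

    static final AtomicInteger backEndHits = new AtomicInteger();
    static final Map<String, String> cache = new ConcurrentHashMap<>();

    // Stand-in for an expensive back-end read (database, web service, ...).
    static String slowLookup(String key) {
        backEndHits.incrementAndGet();
        return "value-for-" + key;
    }

    // Read-through: consult the cache first, fetch and remember on a miss.
    static String get(String key) {
        return cache.computeIfAbsent(key, ReadThroughSketch::slowLookup);
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) {
            get("account-42"); // only the first call reaches the back end
        }
        System.out.println(backEndHits.get()); // prints 1
    }
}
```

The same read-through shape is what a clustered cache provides across many JVMs; the counter here just makes the saving visible on one.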