Cache config for distributed cache and TCP*Extend
Hi,
I want to use a distributed cache with TCP*Extend. We have defined "remote-cache-scheme" as the default cache scheme, and I want the distributed cache to work together with a cache store. The configuration I used for my scheme was:
<distributed-scheme>
  <scheme-name>MyScheme</scheme-name>
  <backing-map-scheme>
    <read-write-backing-map-scheme>
      <internal-cache-scheme>
        <class-scheme>
          <class-name>com.tangosol.util.ObservableHashMap</class-name>
        </class-scheme>
      </internal-cache-scheme>
      <cachestore-scheme>
        <class-scheme>
          <class-name>MyCacheStore</class-name>
        </class-scheme>
        <remote-cache-scheme>
          <scheme-ref>default-scheme</scheme-ref>
        </remote-cache-scheme>
      </cachestore-scheme>
      <rollback-cachestore-failures>true</rollback-cachestore-failures>
    </read-write-backing-map-scheme>
  </backing-map-scheme>
</distributed-scheme>
<remote-cache-scheme>
  <scheme-name>default-scheme</scheme-name>
  <initiator-config>
    <tcp-initiator>
      <remote-addresses>
        <socket-address>
          <address>XYZ</address>
          <port>9909</port>
        </socket-address>
      </remote-addresses>
    </tcp-initiator>
  </initiator-config>
</remote-cache-scheme>
I know that the configuration defined for "MyScheme" is wrong, but I do not know how to configure "MyScheme" correctly so that my distributed cache becomes part of the same cluster that all the other caches, which use the default scheme, have joined. Currently this is not happening.
Thanks.
RG
Hi,
Do I need to define my distributed scheme with the CacheStore in the server-side coherence-cache-config.xml, and then on the client side use a remote cache scheme to connect to that distributed cache?
Thanks,
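That split is the usual Coherence*Extend layout. The following sketch shows what the two sides might look like; the service names, the proxy host/port, and the use of a plain local-scheme backing map are illustrative assumptions, not taken from the original post:

```xml
<!-- Server-side cache config: the cluster owns the data and the cache store -->
<distributed-scheme>
  <scheme-name>MyScheme</scheme-name>
  <service-name>MyDistributedService</service-name>
  <backing-map-scheme>
    <read-write-backing-map-scheme>
      <internal-cache-scheme>
        <local-scheme/>
      </internal-cache-scheme>
      <cachestore-scheme>
        <class-scheme>
          <class-name>MyCacheStore</class-name>
        </class-scheme>
      </cachestore-scheme>
      <rollback-cachestore-failures>true</rollback-cachestore-failures>
    </read-write-backing-map-scheme>
  </backing-map-scheme>
  <autostart>true</autostart>
</distributed-scheme>

<!-- Server-side proxy that Extend clients connect to -->
<proxy-scheme>
  <service-name>ExtendTcpProxyService</service-name>
  <acceptor-config>
    <tcp-acceptor>
      <local-address>
        <address>XYZ</address>
        <port>9909</port>
      </local-address>
    </tcp-acceptor>
  </acceptor-config>
  <autostart>true</autostart>
</proxy-scheme>

<!-- Client-side cache config: a remote scheme only, no backing map or cache store -->
<remote-cache-scheme>
  <scheme-name>default-scheme</scheme-name>
  <initiator-config>
    <tcp-initiator>
      <remote-addresses>
        <socket-address>
          <address>XYZ</address>
          <port>9909</port>
        </socket-address>
      </remote-addresses>
    </tcp-initiator>
  </initiator-config>
</remote-cache-scheme>
```

With this layout the client never defines the distributed scheme at all; it reaches the cluster-side cache (and its cache store) through the proxy.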
Similar Messages
-
Set request timeout for distributed cache
Hi,
Coherence provides 3 parameters we can tune for the distributed cache
tangosol.coherence.distributed.request.timeout: the default client request timeout for distributed cache services
tangosol.coherence.distributed.task.timeout: the default server execution timeout for distributed cache services
tangosol.coherence.distributed.task.hung: the default time before a thread is reported as hung by distributed cache services
It seems these timeout values are used for both system activities (node discovery, data re-balance etc.) and user activities (get, put). We would like to set the request timeout for get/put. But a low threshold like 10 ms sometimes causes the system activities to fail. Is there a way for us to separately set the timeout values? Or even is it possible to setup timeout on individual calls (like get(key, timeout))?
-thanks

Hi,
not necessarily for the get and put methods, but for queries, entry processors, entry aggregators, and invocable agent sending, you can make the filter, aggregator, entry processor, or agent you send implement PriorityTask, which allows you to make your QoS expectations known to Coherence. Most or all stock aggregators and entry processors implement PriorityTask, if I remember correctly.
For more info, look at the documentation of PriorityTask.
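For plain get/put calls, one option (a sketch, not from the original reply; the scheme and service names are made up) is to scope the request timeout to a single service in the cache configuration, so a tight client timeout does not affect other services or cluster-wide activities:

```xml
<distributed-scheme>
  <scheme-name>low-latency-distributed</scheme-name>
  <service-name>LowLatencyCache</service-name>
  <!-- applies to requests issued against this service only -->
  <request-timeout>10ms</request-timeout>
  <backing-map-scheme>
    <local-scheme/>
  </backing-map-scheme>
  <autostart>true</autostart>
</distributed-scheme>
```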
Best regards,
Robert -
How to use the same services-config for the local and remote servers.
My Flex project works fine using the configuration below, but when I upload my Flash file to the server it doesn't work. All the relative paths and files are the same, except that the remote machine is a Linux server.
<?xml version="1.0" encoding="UTF-8"?>
<services-config>
  <services>
    <service id="amfphp-flashremoting-service"
             class="flex.messaging.services.RemotingService"
             messageTypes="flex.messaging.messages.RemotingMessage">
      <destination id="amfphp">
        <channels>
          <channel ref="my-amfphp"/>
        </channels>
        <properties>
          <source>*</source>
        </properties>
      </destination>
    </service>
  </services>
  <channels>
    <channel-definition id="my-amfphp" class="mx.messaging.channels.AMFChannel">
      <endpoint uri="http://localhost/domainn.org/amfphp/gateway.php" class="flex.messaging.endpoints.AMFEndpoint"/>
    </channel-definition>
  </channels>
</services-config>
I think the problem is the line
<endpoint uri="http://localhost/domainn.org/amfphp/gateway.php" class="flex.messaging.endpoints.AMFEndpoint"/>
but I'm not sure how to use the same services-config for the local and remote servers.

paul.williams wrote:
You are confusing "served from a web-server" with "compiled on a web-server". Served from a web-server means you are downloading a file from the web-server, it does not necessarily mean that the files has been generated / compiled on the server.
The server.name and server.port tokens are replaced at runtime (i.e. on the client, once the swf has been downloaded and is running), not at compile time (i.e. while mxmlc / ant / the web-tier compiler is running). You do not need to compile on the server to take advantage of this.
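Assuming the standard token mechanism described above, the endpoint in services-config.xml would use the tokens instead of a hard-coded host, for example:

```xml
<channel-definition id="my-amfphp" class="mx.messaging.channels.AMFChannel">
  <endpoint uri="http://{server.name}:{server.port}/amfphp/gateway.php"
            class="flex.messaging.endpoints.AMFEndpoint"/>
</channel-definition>
```

At runtime the player substitutes the host and port the swf was served from, so the same file can work both locally and on the Linux server, provided the swf is actually served over HTTP.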
Hi Paul,
In Flex there is a feature that lets the developer bake all of the services-config.xml configuration into the swf file, with
-services=path/to/services-config.xml
If services-config.xml contains tokens, the user has not specified an additional
-context-root
and the swf file is not served from a web application server (like Tomcat, for example), then it will not work.
The Flash Player has no way to replace the token values of services-config.xml at runtime if that file was baked into the swf during compilation.
For example, during development you can launch your swf in the browser over the file:// protocol and still access BlazeDS services if
-services=path/to/services-config.xml
was specified during compilation.
I don't know a better way to explain this, but in summary there are two places where you can tell the swf about the service configuration:
1) pass the -services=path/to/services-config.xml parameter to the compiler; this way you tell the swf file up front about all of that configuration;
or 2) put the file on the web server (in this case, yes, you should have replacement tokens in that file) and the tokens will be replaced at runtime.
Need Help regarding initial configuration for distributed cache
Hi ,
I am new to Tangosol and am trying to set up a basic partitioned distributed cache, but I have not been able to do so.
Here is my Scenario,
My application data server creates the instance of the Tangosol cache.
I have this config.xml set on the machine where my application starts.
<?xml version="1.0"?>
<!DOCTYPE cache-config SYSTEM "cache-config.dtd">
<cache-config>
  <caching-scheme-mapping>
    <!-- Caches with any name will be created as default near. -->
    <cache-mapping>
      <cache-name>*</cache-name>
      <scheme-name>default-distributed</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>
  <caching-schemes>
    <!-- Default Distributed caching scheme. -->
    <distributed-scheme>
      <scheme-name>default-distributed</scheme-name>
      <service-name>DistributedCache</service-name>
      <backing-map-scheme>
        <class-scheme>
          <scheme-ref>default-backing-map</scheme-ref>
        </class-scheme>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>
    <!-- Default backing map scheme definition used by all the caches that do
         not require any eviction policies. -->
    <class-scheme>
      <scheme-name>default-backing-map</scheme-name>
      <class-name>com.tangosol.util.SafeHashMap</class-name>
      <init-params></init-params>
    </class-scheme>
  </caching-schemes>
</cache-config>
Now on the same machine I start a different client using the command
java -Dtangosol.coherence.distributed.localstorage=false -Dtangosol.coherence.cacheconfig=near-cache-config.xml -classpath
"C:/calypso/software/release/build" -jar ../lib/coherence.jar
The problem I am facing is
1) Even if I do not start the client, my application server caches the data. My config.xml is set to distributed, so under no circumstances should it cache the data locally...
2) I want to bind different caches to different processes on different machines,
say, for example:
machine1 should cache the cache1 objects
machine2 should cache the cache2 objects
and so on... but I could not find any documentation that explains how to do this. Can someone give me an example of
how to do it?
3) I want to know the details of the caches stored on any particular node. How do I find out, for example, that machine1 contains
such-and-such caches and their corresponding object values, etc.?
Regards
Mahesh

Hi, thanks for the answer.
After digging into the wiki a lot, I found something related to KeyAssociation. I think what I need is an implementation of KeyAssociation that
stores a particular cache type's objects on a particular node or group of nodes.
Say, for example, I want this kind of setup:
Cache1 -> node1, node2; as I forecast this will take a lot of memory, I assign these JVMs something like 10 G
Cache2 -> node3, assigned a small amount of memory (like 2 G)
and so on ...
From the wiki documentation I see:
Key Association
By default the specific set of entries assigned to each partition is transparent to the application. In some cases it may be advantageous to keep certain related entries within the same cluster node. A key-associator may be used to indicate related entries, the partitioned cache service will ensure that associated entries reside on the same partition, and thus on the same cluster node. Alternatively, key association may be specified from within the application code by using keys which implement the com.tangosol.net.cache.KeyAssociation interface.
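As a sketch of the configuration side of this (the associator class name is hypothetical; alternatively, the cache keys themselves can implement com.tangosol.net.cache.KeyAssociation), a key-associator is declared on the partitioned service like so:

```xml
<distributed-scheme>
  <scheme-name>associated-distributed</scheme-name>
  <service-name>AssociatedCacheService</service-name>
  <!-- entries the associator groups together land in the same partition,
       and therefore on the same cluster node -->
  <key-associator>
    <class-name>com.example.OrderKeyAssociator</class-name>
  </key-associator>
  <backing-map-scheme>
    <local-scheme/>
  </backing-map-scheme>
  <autostart>true</autostart>
</distributed-scheme>
```

Note that key association pins related entries together; pinning whole caches to specific machines is a different problem, usually approached by putting the caches on separate services and enabling local storage only on the desired nodes.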
Does someone have an example of how this is done in the simplest way? -
Error handling for distributed cache synchronization
Hello,
Can somebody explain to me how the error handling works for distributed cache synchronization?
Say I have four nodes of a WebLogic cluster and 4 different sessions, one on each of those nodes.
On node A an update happens on object B. This update is going to be propagated to all the other nodes B, C, D. But for some reason the connection between node A and node B is lost.
In the following xml
<cache-synchronization-manager>
<clustering-service>...</clustering-service>
<should-remove-connection-on-error>true</should-remove-connection-on-error>
If I set this to true, does this mean that TopLink will stop sending updates from node A to node B? I presume all of this is transparent, and that in order to handle this kind of error I do not have to write any code to capture it.
Is that correct?
Aswin.

This "should-remove-connection-on-error" option mainly applies to RMI or RMI_IIOP cache synchronization. If you use JMS for cache synchronization, then connectivity and error handling are provided by the JMS service.
For RMI, when this is set to true (which is the default) if a communication exception occurs in sending the cache synchronization to a server, that server will be removed and no longer synchronized with. The assumption is that the server has gone down, and when it comes back up it will rejoin the cluster and reconnect to this server and resume synchronization. Since it will have an empty cache when it starts back up, it will not have missed anything.
You do not have to perform any error handling, however if you wish to handle cache synchronization errors you can use a TopLink Session ExceptionHandler. Any cache synchronization errors will be sent to the session's exception handler and allow it to handle the error or be notified of the error. Any errors will also be logged to the TopLink session's log. -
Using Tangosol Coherence in conjunction with Kodo JDO for distributing caching
JDO currently has a perception problem in terms of performance. Transparent
persistence is perceived to have a significant performance overhead compared
to hand-coded JDBC. That was certainly true a while ago, when the first JDO
implementations were evaluated. They typically performed about half as well
and with higher resource requirements. No doubt JDO vendors have closed that
gap by caching PreparedStatements, queries, data, and by using other
optimizations.
Aside from the ease of programming through transparent persistence, I
believe that using JDO in conjunction with distributed caching techniques in
a J2EE managed environment has the opportunity to transparently give
scalability, performance, and availability improvements that would otherwise
be much more difficult to realize through other persistence techniques.
In particular, it looks like Tangosol is doing a lot of good work in the
area of distributed caching for J2EE. For example, executing parallelized
searches in a cluster is a capability that is pretty unique and potentially
very valuable to many applications. It would appear to me to be a lot of
synergy between Kodo JDO and Tangosol Coherence. Using Coherence as an
implementation of Kodo JDO's distributed cache would be a natural desire for
enterprise applications that have J2EE clustering requirements for high
scalability, performance, and availability.
I'm wondering if Solarmetric has any ideas or plans for closer integration
(e.g., pluggability) of Tangosol Coherence into Kodo JDO. This is just my
personal opinion, but I think a partnership between your two organizations
to do this integration would be mutually advantageous, and it would
potentially be very attractive to your customers.
Ben

Marc,
Thanks for pointing that out. That is truly excellent!
Ben
"Marc Prud'hommeaux" <[email protected]> wrote in message
news:[email protected]...
Ben-
We do currently have a plug-in for backing our data cache with a
Tangosol cache.
See: http://docs.solarmetric.com/manual.html#datastore_cache_config
--
Marc Prud'hommeaux [email protected]
SolarMetric Inc. http://www.solarmetric.com -
Local Cache containing all Distributed Cache entries
Hello all,
I am seeing what appears to be some sort of problem. I have 2 JVMs running, one for the application and the other serving as a Coherence cache JVM (near-cache scheme).
When I stop the cache JVM, the local JVM displays all 1200 entries even though the <high-units> for that cache is set to 300.
Does the local JVM keep a copy of the Distributed Data?
Can anyone explain this?
Thanks

Hi,
I have configured a near cache with a front scheme and a back scheme. In the front scheme I have used a local cache and in the back scheme I have used the distributed cache. My idea is to have a distributed cache on the Coherence servers.
I have one JVM which runs the WebLogic app server, while a second JVM runs 4 Coherence servers, all forming the cluster.
Q1: Where is the local cache data stored? Is it on the WebLogic app server or on the Coherence servers (SSI)?
Q2: Although I have shut down my 4 Coherence servers, I am still able to get the data in the app, so I have a feeling that the data is also stored locally, on the first JVM where the WebLogic server is running.
Q3: Do both the client apps and the Coherence servers need to use the same coherence-cache-config.xml?
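For reference, a near scheme of the kind described might look like the following sketch (all names and sizes are illustrative). The front (local) portion lives in the client JVM's heap, which would explain still seeing data after the cache servers are stopped:

```xml
<near-scheme>
  <scheme-name>example-near</scheme-name>
  <front-scheme>
    <!-- held in the client JVM's heap -->
    <local-scheme>
      <high-units>300</high-units>
    </local-scheme>
  </front-scheme>
  <back-scheme>
    <!-- partitioned across the storage-enabled Coherence servers -->
    <distributed-scheme>
      <scheme-name>example-distributed</scheme-name>
      <backing-map-scheme>
        <local-scheme/>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>
  </back-scheme>
</near-scheme>
```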
Can somebody help me with these questions? I appreciate your time. -
Anyone have definitive config for Neo2 Platinum and Speedfan
I've read almost equivalent postings indicating different info for Temp 1, 2, and 3.
I've got Temp 1 = NB, Temp 2 = CPU, Temp 3 = System.
Based on my readings (NB = 14C, CPU = 38C, and System = 34C) the above makes sense. But many people say Temp 3 is the CPU. I can't believe my CPU would be cooler than my system temp, although I guess it depends on where that temp is read from?
Can anyone confirm 100%?
What about Fans 1, 2, and 3? How do those correlate to CPU, NB, and SYSFAN and SYSFAN2 on the mobo?
I also have a "Temp" that reads a constant -1 C and shows as part of nForce2 SMBUS. What's that?
Also with the fans, Fan 4 and 5 show as part of nForce2 SMBUS with no readings. What's with those?

Quote
Originally posted by syar2003
16C heh .. means that SpeedFan doesn't do the readout correctly.
It's just not possible with air cooling that something is lower than ambient temperature. (Mine is 30C with a self-mounted small fan on the chipset running at 4300 rpm; this is about 5C over ambient temperature.)
CoreCenter reads these two sensor values right.
So does MBM5 5.3.7.0 when set up with the wizard and the right motherboard chosen.
Make sure that speedfan is setup to monitor :
Sensor Type - Winbond W83627THF (through ISA 290h)
Sensors winbond1 , winbond2-diode
Edit : From your first post :
CPU = 38C, and System = 34C
This is the two real connected sensors , the other with 14/16C is just a bogus reading.
Your last comment doesn't fit with your other posts: the temp sensors giving me the 34/38 readings are 2 and 3, yet you say only #1/#2 are tied to the Winbond, with 1 = NB and 2 = CPU. So how is it that #1 is giving me bogus readings in SpeedFan?
I looked at the config for the Winbond: sensor 1 = thermistor diode, 2 = PII diode, and 3 = thermistor diode. These are the defaults it came up with. Is this incorrect? If I change these settings at all I get some 110+ readings.
TransactionMap and TCP*Extend
Hi Guys,
From what I understand, from a Real Time Client I can't make use of local cache transactions, either single- or multi-cache. I can appreciate that: if for no other reason :), death detection isn't as capable on an RTC node, so determining rollback conditions might not be robust enough. Is it possible to use the invocation service to proxy that operation onto a node in the cache cluster?
Kind Regards,
Max

Max,
This is definitely a very good use case for utilizing the Invocation service over Coherence*Extend.
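A client-side configuration along those lines might look like the following sketch (the service name, host, and port are placeholders); the Invocable sent through this service executes on a cluster member, where cluster-side transaction facilities are available:

```xml
<remote-invocation-scheme>
  <scheme-name>extend-invocation</scheme-name>
  <service-name>ExtendTcpInvocationService</service-name>
  <initiator-config>
    <tcp-initiator>
      <remote-addresses>
        <socket-address>
          <address>proxy-host</address>
          <port>9099</port>
        </socket-address>
      </remote-addresses>
    </tcp-initiator>
  </initiator-config>
</remote-invocation-scheme>
```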
Regards,
Gene -
ACE Probe Config for Blue Coat Proxy TCP Port 74 NETRJS-4
We are running 4710s with A5(2.2). We use Blue Coat proxies for our internet connections, specifically TCP port 74. So when we open a browser connection to www.cisco.com, the HTTP GET is actually encapsulated in TCP port 74 (netrjs-4). We want to load-balance these proxies with ACE, and I'm trying to set up health probes, but the only ones that work are the TCP probes PROXY_BCC_PROBE and PROXY_PROBE. I'd like to have health probes that hit external websites, but I'm confused about whether the "ip address" probe subcommand is all I need, and whether netrjs is a simple encapsulation of the HTTP request (which is what it looks like on a sniffer). Does anyone have Blue Coat proxies and ACE working together? If so, how are your probes configured?
Thanks,
probe tcp PROXY_BCC_PROBE
port 8084
interval 3
passdetect interval 3
probe http PROXY_HTTP1_PROBE
ip address 198.133.219.25
port 74
interval 3
passdetect interval 3
request method head url /index.html
expect status 200 299
probe http PROXY_HTTP2_PROBE
ip address 198.133.219.25
port 74
interval 3
request method get url /
expect status 200 299
probe tcp PROXY_PROBE
port 74
interval 3
passdetect interval 3

Hi,
I have seen this working for one of our customers.
probe http HTTPGET
description Tests that www.gmail.com returns 302 redirect
interval 10
request method get url http://www.gmail.com
expect status 302 302
If I modify your probe :
probe http PROXY_HTTP1_PROBE
ip address 198.133.219.25
port 74
interval 3
passdetect interval 3
request method get url http://www.gmail.com
expect status 302 302
Give it a try and see if that helps.
regards,
Ajay Kumar -
Config for mov 601 and 101 simultaneously while PGI with sales GL entry
Hi,
Is there any possibility, with an STO, to post the 601 and 101 movements as separate documents, where the accounting entries below would be generated?
Plant-X on 601 mvt
COGS Dr
WIP Inventory Cr
Inter plant Dr
Sales Cr
Plant-Y on 101 mvt
RM Inventory Dr
Interplant Cr
Pls suggest

Hi,
The customer wants to hit the sales G/L in this case, which is not possible.
Is there any other way around it? I tried to fulfill the requirement by changing the movement types assigned in the schedule line category from 641 to 601 and from 647 to 101, so that the material could be sold from one plant and received in the other plant with movement 101.
But SAP is not allowing this.
Pls guide whether there is any other way around it. -
Issues with element:introduce-cache-config
We have a common cache configuration file (log4j-server-cache-config.xml) and we want to include it in other cache config files, but we have tried different approaches and none of them seems to work. Here are the configuration files we use:
<?xml version='1.0'?>
<cache-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
              xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config"
              xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-cache-config coherence-cache-config.xsd"
              xmlns:element="class://com.oracle.coherence.environment.extensible.namespaces.XmlElementProcessingNamespaceContentHandler"
              element:introduce-cache-config="log4j-server-cache-config.xml">
  <defaults>
    <serializer>
      <instance>
        <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
        <init-params>
          <init-param>
            <param-type>String</param-type>
            <param-value>etlm-pof-config.xml</param-value>
          </init-param>
        </init-params>
      </instance>
    </serializer>
  </defaults>
  <caching-scheme-mapping>
    <cache-mapping>
      <cache-name>etlm-codelist</cache-name>
      <scheme-name>etlm-distribute-scheme</scheme-name>
      <init-params>
        <init-param>
          <param-name>max-size</param-name>
          <param-value>5m</param-value>
        </init-param>
        <init-param>
          <param-name>expiry-delay</param-name>
          <param-value>5m</param-value>
        </init-param>
      </init-params>
    </cache-mapping>
    <cache-mapping>
      <cache-name>etlm-*</cache-name>
      <scheme-name>etlm-distribute-scheme</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>
  <caching-schemes>
    <local-scheme>
      <scheme-name>unlimited-local</scheme-name>
      <high-units>{max-size 50m}</high-units>
      <unit-calculator>BINARY</unit-calculator>
      <expiry-delay>{expiry-delay 0}</expiry-delay>
    </local-scheme>
    <distributed-scheme>
      <scheme-name>etlm-distribute-scheme</scheme-name>
      <scheme-ref>etlm-base</scheme-ref>
      <autostart>true</autostart>
    </distributed-scheme>
    <distributed-scheme>
      <scheme-name>etlm-base</scheme-name>
      <service-name>ETLMDistributedCache</service-name>
      <partition-count>257</partition-count>
      <backing-map-scheme>
        <local-scheme>
          <scheme-ref>unlimited-local</scheme-ref>
        </local-scheme>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>
    <proxy-scheme>
      <service-name>ExtendTcpProxyService</service-name>
      <thread-count>20</thread-count>
      <acceptor-config>
        <tcp-acceptor>
          <local-address>
            <address system-property="tangosol.coherence.proxy.host">localhost</address>
            <port system-property="tangosol.coherence.proxy.port">18030</port>
          </local-address>
        </tcp-acceptor>
      </acceptor-config>
      <autostart system-property="tangosol.coherence.proxy">true</autostart>
    </proxy-scheme>
  </caching-schemes>
</cache-config>
Following is the log4j-server-cache-config.xml
<?xml version='1.0'?>
<cache-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
              xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config"
              xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-cache-config coherence-cache-config.xsd">
  <caching-scheme-mapping>
    <cache-mapping>
      <cache-name>dist-log4j</cache-name>
      <scheme-name>engineering-distributed</scheme-name>
      <init-params>
        <init-param>
          <param-name>write-delay</param-name>
          <param-value>1s</param-value>
        </init-param>
        <init-param>
          <param-name>write-batch-factor</param-name>
          <param-value>0.5</param-value>
        </init-param>
      </init-params>
    </cache-mapping>
  </caching-scheme-mapping>
  <caching-schemes>
    <local-scheme>
      <scheme-name>unlimited-local</scheme-name>
      <unit-calculator>BINARY</unit-calculator>
    </local-scheme>
    <distributed-scheme>
      <scheme-name>engineering-distributed</scheme-name>
      <service-name>Log4JDistributedCache</service-name>
      <thread-count>20</thread-count>
      <backing-map-scheme>
        <read-write-backing-map-scheme>
          <internal-cache-scheme>
            <local-scheme/>
          </internal-cache-scheme>
          <cachestore-scheme>
            <class-scheme>
              <class-name>com.adp.cache.log4j.LoggingCacheStore</class-name>
              <init-params>
                <init-param>
                  <param-type>java.lang.String</param-type>
                  <param-value>jdbc:oracle:thin:@10.17.134.215:1521:poc</param-value>
                </init-param>
                <init-param>
                  <param-type>java.lang.String</param-type>
                  <param-value>oracle.jdbc.driver.OracleDriver</param-value>
                </init-param>
                <init-param>
                  <param-type>java.lang.String</param-type>
                  <param-value>poc</param-value>
                </init-param>
                <init-param>
                  <param-type>java.lang.String</param-type>
                  <param-value>tiger</param-value>
                </init-param>
                <init-param>
                  <param-type>java.lang.String</param-type>
                  <param-value>INSERT INTO LOG_JDBC (LogTime, LogLevel, ClassName, LogMessage) VALUES (?)</param-value>
                </init-param>
              </init-params>
            </class-scheme>
          </cachestore-scheme>
          <write-delay>{write-delay 0}</write-delay>
          <write-batch-factor>{write-batch-factor 0}</write-batch-factor>
          <write-requeue-threshold>{write-requeue-threshold 0}</write-requeue-threshold>
        </read-write-backing-map-scheme>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>
    <proxy-scheme>
      <service-name>Log4JTcpProxyService</service-name>
      <thread-count>20</thread-count>
      <acceptor-config>
        <tcp-acceptor>
          <local-address>
            <address system-property="tangosol.coherence.proxy.host">localhost</address>
            <port system-property="tangosol.coherence.proxy.port">20170</port>
          </local-address>
        </tcp-acceptor>
      </acceptor-config>
      <autostart system-property="tangosol.coherence.proxy.enabled">true</autostart>
    </proxy-scheme>
  </caching-schemes>
</cache-config>
I included the coherence-common 2.1.1.288 jar file in front of all of the other jar files. The server starts up successfully without any errors or exceptions, but I don't see any of the services from log4j-server-cache-config.xml in the startup console. Here is the output from server startup:
C:\coherence\coherence3.7.1\bin>"C:\Progra~1\Java\jdk1.6.0_21\bin\java" -server -showversion "-Xms512m -Xmx512m" -Dtangosol.coherence.log.level=9 -Dtangosol.coherence.override=etlm\etlm-server-config.xml -Dtangosol.coherence.localhost=10.17.134.215 -Dtangosol.coherence.localport=8088 -cp "etlm\coherence-common.jar;C:\coherence\coherence3.7.1\bin\..\lib\coherence.jar;etlm\CacheService.jar;etlm\commons-pool-1.5.6.jar;etlm\commons-dbcp-1.4.jar;etlm\commons-lang3-3.1.jar;etlm\log4j-1.2.16.jar;etlm\ojdbc6.jar;etlm\etlm.jar" com.tangosol.net.DefaultCacheServer
java version "1.6.0_21"
Java(TM) SE Runtime Environment (build 1.6.0_21-b07)
Java HotSpot(TM) 64-Bit Server VM (build 17.0-b17, mixed mode)
2012-01-03 10:18:21.256/0.461 Oracle Coherence 3.7.1.0 <Info> (thread=main, member=n/a): Loaded operational configuration from "jar:file:/C:/coherence/coherence3.7.1/lib/coherence.jar!/tangosol-coherence.xml"
2012-01-03 10:18:21.344/0.549 Oracle Coherence 3.7.1.0 <Info> (thread=main, member=n/a): Loaded operational overrides from "file:/C:/coherence/coherence3.7.1/bin/etlm/etlm-server-config.xml"
2012-01-03 10:18:21.349/0.554 Oracle Coherence 3.7.1.0 <D5> (thread=main, member=n/a): Optional configuration override "/custom-mbeans.xml" is not specified
2012-01-03 10:18:21.349/0.554 Oracle Coherence 3.7.1.0 <D6> (thread=main, member=n/a): Loaded edition data from "jar:file:/C:/coherence/coherence3.7.1/lib/coherence.jar!/coherence-grid.xml"
Oracle Coherence Version 3.7.1.0 Build 27797
Grid Edition: Development mode
Copyright (c) 2000, 2011, Oracle and/or its affiliates. All rights reserved.
2012-01-03 10:18:21.672/0.877 Oracle Coherence GE 3.7.1.0 <Info> (thread=main, member=n/a): Loaded cache configuration from "file:/C:/coherence/coherence3.7.1/bin/etlm/etlm-server-cache-config.xml"
2012-01-03 10:18:22.313/1.518 Oracle Coherence GE 3.7.1.0 <D4> (thread=main, member=n/a): TCMP bound to /10.17.134.215:8088 using SystemSocketProvider
2012-01-03 10:18:22.559/1.764 Oracle Coherence GE 3.7.1.0 <D7> (thread=PacketListener1, member=n/a): Growing MultiplexingWriteBufferPool segment '65536' to 2 generations
2012-01-03 10:18:27.622/6.827 Oracle Coherence GE 3.7.1.0 <Info> (thread=Cluster, member=n/a): Created a new cluster "ETLMDIT" with Member(Id=1, Timestamp=2012-01-03 10:18:22.373, Address=10.17.134.215:8088, MachineId=27829, Location=site:,machine:leon-desk,process:9064, Role=CoherenceServer, Edition=Grid Edition, Mode=Development, CpuCount=8, SocketCount=8) UID=0x0A1186D700000134A42653A56CB51F98
2012-01-03 10:18:27.626/6.831 Oracle Coherence GE 3.7.1.0 <Info> (thread=main, member=n/a): Started cluster Name=ETLMDIT
WellKnownAddressList(Size=1,
WKA{Address=10.17.134.215, Port=8088}
MasterMemberSet(
ThisMember=Member(Id=1, Timestamp=2012-01-03 10:18:22.373, Address=10.17.134.215:8088, MachineId=27829, Location=site:,machine:leon-desk,process:9064, Role=CoherenceServer)
OldestMember=Member(Id=1, Timestamp=2012-01-03 10:18:22.373, Address=10.17.134.215:8088, MachineId=27829, Location=site:,machine:leon-desk,process:9064, Role=CoherenceServer)
ActualMemberSet=MemberSet(Size=1
Member(Id=1, Timestamp=2012-01-03 10:18:22.373, Address=10.17.134.215:8088, MachineId=27829, Location=site:,machine:leon-desk,process:9064, Role=CoherenceServer)
MemberId|ServiceVersion|ServiceJoined|MemberState
1|3.7.1|2012-01-03 10:18:27.622|JOINED
RecycleMillis=1200000
RecycleSet=MemberSet(Size=0
TcpRing{Connections=[]}
IpMonitor{AddressListSize=0}
2012-01-03 10:18:27.657/6.862 Oracle Coherence GE 3.7.1.0 <D5> (thread=Invocation:Management, member=1): Service Management joined the cluster with senior service member 1
2012-01-03 10:18:27.893/7.098 Oracle Coherence GE 3.7.1.0 <Info> (thread=DistributedCache:ETLMDistributedCache, member=1): Loaded POF configuration from "jar:file:/C:/coherence/coherence3.7.1/bin/etlm/etlm.jar!/etlm-pof-config.xml"; this document does not refer to any schema definition and has not been validated.
2012-01-03 10:18:27.929/7.134 Oracle Coherence GE 3.7.1.0 <Info> (thread=DistributedCache:ETLMDistributedCache, member=1): Loaded included POF configuration from "jar:file:/C:/coherence/coherence3.7.1/lib/coherence.jar!/coherence-pof-config.xml"
2012-01-03 10:18:27.995/7.200 Oracle Coherence GE 3.7.1.0 <D5> (thread=DistributedCache:ETLMDistributedCache, member=1): Service ETLMDistributedCache joined the cluster with senior service member 1
2012-01-03 10:18:28.021/7.226 Oracle Coherence GE 3.7.1.0 <D6> (thread=DistributedCache:ETLMDistributedCache, member=1): Service ETLMDistributedCache: sending PartitionConfig ConfigSync to all
2012-01-03 10:18:28.275/7.480 Oracle Coherence GE 3.7.1.0 <Info> (thread=Proxy:ExtendTcpProxyService:TcpAcceptor, member=1): TcpAcceptor now listening for connections on 10.17.134.215:18030
2012-01-03 10:18:28.414/7.619 Oracle Coherence GE 3.7.1.0 <D5> (thread=Proxy:ExtendTcpProxyService:TcpAcceptor, member=1): Started: TcpAcceptor{Name=Proxy:ExtendTcpProxyService:TcpAcceptor, State=(SERVICE_STARTED), ThreadCount=20, HungThreshold=0, TaskTimeout=0, Codec=Codec(Format=POF), Serializer=com.tangosol.io.pof.ConfigurablePofContext, PingInterval=0, PingTimeout=30000, RequestTimeout=30000, SocketProvider=SystemSocketProvider, LocalAddress=[WRTVDCDVMJPCA33/10.17.134.215:18030], SocketOptions{LingerTimeout=0, KeepAliveEnabled=true, TcpDelayEnabled=false}, ListenBacklog=0, BufferPoolIn=BufferPool(BufferSize=2KB, BufferType=DIRECT, Capacity=Unlimited), BufferPoolOut=BufferPool(BufferSize=2KB, BufferType=DIRECT, Capacity=Unlimited)}
2012-01-03 10:18:28.419/7.624 Oracle Coherence GE 3.7.1.0 <D5> (thread=Proxy:ExtendTcpProxyService, member=1): Service ExtendTcpProxyService joined the cluster with senior service member 1
2012-01-03 10:18:28.422/7.627 Oracle Coherence GE 3.7.1.0 <Info> (thread=main, member=1):
Services
ClusterService{Name=Cluster, State=(SERVICE_STARTED, STATE_JOINED), Id=0, Version=3.7.1, OldestMemberId=1}
InvocationService{Name=Management, State=(SERVICE_STARTED), Id=1, Version=3.1, OldestMemberId=1}
PartitionedCache{Name=ETLMDistributedCache, State=(SERVICE_STARTED), LocalStorage=enabled, PartitionCount=257, BackupCount=1, AssignedPartitions=257, BackupPartitions=0}
ProxyService{Name=ExtendTcpProxyService, State=(SERVICE_STARTED), Id=3, Version=3.7, OldestMemberId=1}
Started DefaultCacheServer...
It seems to me class XmlElementProcessingNamespaceContentHandler is not invoked.
thanks for the help
Hi Leon,
Your problem might be that you have a custom overrides file so you are not including the overrides file from the Incubator Commons...
2012-01-03 10:18:21.344/0.549 Oracle Coherence 3.7.1.0 <Info> (thread=main, member=n/a): Loaded operational overrides from "file:/C:/coherence/coherence3.7.1/bin/etlm/etlm-server-config.xml"
The tangosol-coherence-override file in the Incubator looks like this:
<coherence>
    <configurable-cache-factory-config>
        <class-name>com.oracle.coherence.environment.extensible.ExtensibleEnvironment</class-name>
        <init-params>
            <init-param>
                <param-type>java.lang.String</param-type>
                <param-value system-property="tangosol.coherence.cacheconfig">coherence-cache-config.xml</param-value>
            </init-param>
        </init-params>
    </configurable-cache-factory-config>
</coherence>
If you do not have the configuration shown above, then the Incubator com.oracle.coherence.environment.extensible.ExtensibleEnvironment will not be used as the Cache Factory and most of the Incubator functionality will not work.
You have two options: either make sure that the <configurable-cache-factory-config> section from the Incubator configuration is added to your etlm-server-config.xml, or make the first part of your etlm-server-config.xml file look like this...
<coherence xml-override="/tangosol-coherence-override.xml">
...if you do this, then your etlm-server-config.xml file will be read in first, and the tangosol-coherence-override file from the Incubator will be read in too.
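For the first option, the top of etlm-server-config.xml might then look like the sketch below. This simply combines the <configurable-cache-factory-config> section quoted earlier in this thread with a placeholder comment for your existing override elements; it is a sketch, not a complete file:

```xml
<coherence>
    <configurable-cache-factory-config>
        <class-name>com.oracle.coherence.environment.extensible.ExtensibleEnvironment</class-name>
        <init-params>
            <init-param>
                <param-type>java.lang.String</param-type>
                <param-value system-property="tangosol.coherence.cacheconfig">coherence-cache-config.xml</param-value>
            </init-param>
        </init-params>
    </configurable-cache-factory-config>
    <!-- ...your existing etlm-server-config.xml override elements follow here... -->
</coherence>
```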
JK -
Problem with Expiry Period for Multiple Caches in One Configuration File
I need a cache system with multiple expiry periods, i.e. a few records should exist for, say, 1 hour, some for 3 hours, and others for 6 hours. To achieve this, I am trying to define multiple caches in the config file. Based on the data, I choose the cache with the appropriate expiry period. That is where I am facing this problem. I am able to create the caches in the config file. They have different eviction policies, i.e. for Cache1 it is 1 hour and for Cache2 it is 3 hours. However, the data stored in Cache1 does not expire after 1 hour. It expires after the expiry period of the other cache, i.e. Cache2.
Please correct me if I am not following the correct way of achieving this. I am attaching the config file here.
Attachment: near-cache-config1.xml (To use this attachment you will need to rename 142.bin to near-cache-config1.xml after the download is complete.)
Hi Rajneesh,
In your cache mapping section, you have two wildcard mappings ("*"). These provide an ambiguous mapping for all cache names.
Rather than doing this, you should have a cache mapping for each cache scheme that you are using -- in your case the 1-hour and 3-hour schemes.
I would suggest removing one (or both) of the "*" mappings and adding entries along the lines of:
<cache-mapping>
    <cache-name>near-1hr-*</cache-name>
    <scheme-name>default-near</scheme-name>
</cache-mapping>
<cache-mapping>
    <cache-name>near-3hr-*</cache-name>
    <scheme-name>default-away</scheme-name>
</cache-mapping>
With this scheme, any cache whose name starts with "near-1hr-" (e.g. "near-1hr-Cache1") will have 1-hour expiry, and any cache whose name starts with "near-3hr-" will have 3-hour expiry. Alternatively, to map your cache schemes on a per-cache basis, you may replace "near-1hr-*" and "near-3hr-*" with Cache1 and Cache2 (respectively).
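To illustrate why explicit name-based mappings remove the ambiguity, here is a small self-contained sketch of prefix-wildcard resolution. This is only an illustration of the idea, not Coherence's actual mapping code; the class and method names are invented for this example:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class SchemeResolver {
    // Resolve a cache name to a scheme name: an exact mapping wins outright,
    // otherwise the wildcard mapping with the longest matching prefix is chosen.
    public static String resolveScheme(String cacheName, Map<String, String> mappings) {
        String best = null;
        int bestLen = -1;
        for (Map.Entry<String, String> e : mappings.entrySet()) {
            String pattern = e.getKey();
            if (pattern.equals(cacheName)) {
                return e.getValue(); // exact match
            }
            if (pattern.endsWith("*")) {
                String prefix = pattern.substring(0, pattern.length() - 1);
                if (cacheName.startsWith(prefix) && prefix.length() > bestLen) {
                    best = e.getValue(); // most specific wildcard so far
                    bestLen = prefix.length();
                }
            }
        }
        return best;
    }
}
```

With the two mappings from the snippet above, "near-1hr-Cache1" resolves to default-near and "near-3hr-Cache2" to default-away, whereas two bare "*" patterns would be equally specific and leave the choice ambiguous.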
Jon Purdy
Tangosol, Inc. -
Split cache config file support in 12.1.2?
With 3.7.1 you could use the coherence-common library from incubator 11 to split cache config files.
This was done with the introduce-cache-config XML element:
<?xml version="1.0"?>
<cache-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config"
xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-cache-config coherence-cache-config.xsd"
xmlns:element="class://com.oracle.coherence.environment.extensible.namespaces.XmlElementProcessingNamespaceContentHandler"
element:introduce-cache-config="cache-config-1.xml, cache-config-2.xml, cache-config-3.xml">
</cache-config>
But this will not be available in Incubator 12; the classes used for it will not be ported from Incubator 11 (Coherence 3.7.x) to Incubator 12 (Coherence 12.x):
http://coherence.oracle.com/download/attachments/14188570/Incubator+Update.pdf?version=2&modificationDate=1353937318977
How to achieve this with 12.1.2?
Hi,
It is still possible to do this, as I did it with the 12.1.2 beta and a beta of Incubator 12, so unless either of those changed at the last minute you should still be able to import cache configuration files.
The top of your XML needs to look something like this...
<?xml version="1.0"?>
<cache-config xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config"
xmlns:element="class://com.oracle.coherence.common.namespace.preprocessing.XmlPreprocessingNamespaceHandler"
element:introduce-cache-config="config/cluster.xml">
where this file is importing the "config/cluster.xml" file.
If you use custom namespaces in your configuration files, then there is a bit more work to do, as the XmlPreprocessingNamespaceHandler only knows how to merge the standard Coherence XML, so you need to add extra code to your own namespace handlers. Specifically, you need to implement...
public void mergeConfiguration(ProcessingContext processingContext, String sFromURI, XmlElement element, XmlElement xmlIntoCacheConfig, QualifiedName originatedFrom)
In the implementation you need to merge your custom XML (the element parameter) across to the main XML (the xmlIntoCacheConfig parameter). A very simple implementation of this would be...
@Override
public void mergeConfiguration(ProcessingContext processingContext, String sFromURI, XmlElement element, XmlElement xmlIntoCacheConfig, QualifiedName originatedFrom) {
    // clone the element to merge
    XmlElement xmlMergeElement = (XmlElement) element.clone();
    // annotate the origin of the merging element
    xmlMergeElement.addAttribute(originatedFrom.getName()).setString(sFromURI);
    // add the cloned element to the XML configuration
    xmlIntoCacheConfig.getElementList().add(xmlMergeElement);
}
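The same merge idea can be sketched with plain JDK DOM types for readers who want to see the mechanics without the Incubator on the classpath. This is only an analogy to the mergeConfiguration implementation above; the XmlMergeSketch class and the "originated-from" attribute name are invented for this example:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class XmlMergeSketch {
    // Clone an element from one document into another and record its origin,
    // analogous to cloning the XmlElement and annotating it above.
    public static void merge(Document target, Element toMerge, String fromUri) {
        Element imported = (Element) target.importNode(toMerge, true); // deep clone into target
        imported.setAttribute("originated-from", fromUri);             // annotate the origin
        target.getDocumentElement().appendChild(imported);
    }

    public static Document parse(String xml) throws Exception {
        return DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
    }
}
```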
JK -
Distributed cache in custom developement.
Why it is not recommended to use distributed caching in custom development in SP2013? Is there a reason, other than the concern of overwriting the cache used by SharePoint?
If that would be the concern, any precautions we can take to overcome it?I think without a scope and priority defined for SharePoint named caches, custom named caches in SharePoint Distributed Cache cluster may evict SharePoint named caches if their usage is higher than that of SharePoint. I think the eviction algorithm is
based on 2 criteria: the named caches which has been accessed for the least number of times or named caches which were not accessed for the longest time.
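The "not accessed for the longest time" criterion (least recently used) can be sketched in a few lines of generic Java. This illustrates the policy only; it is not the actual SharePoint Distributed Cache eviction code:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// A minimal LRU cache: with access-order iteration, the eldest entry is the
// least recently used one, and it is evicted once capacity is exceeded.
public class LruCacheSketch<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public LruCacheSketch(int maxEntries) {
        super(16, 0.75f, true); // true = access order, so reads refresh recency
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries;
    }
}
```

With a capacity of two, reading an entry refreshes its recency, so inserting a third entry evicts whichever of the other two was read least recently.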
This post is my own opinion and does not necessarily reflect the opinion or view of Slalom.