Coherence and WLP

Hi,
I'm looking for information about which version of Coherence is supported with WLP. Can someone point me to any documentation (if any)?
Thanks!

Please take a look at the latest version (v3.7) of the Coherence Integration Guide, Chapter 4, Integrating WebLogic Portal and Oracle Coherence:
http://download.oracle.com/docs/cd/E18686_01/coh.37/e18691/wlportal.htm#sthref104
It has information on, and links to, the supported Oracle WebLogic Portal release (10.3.2).
-Luk

Similar Messages

  • Using Coherence and Oracle Database as the CacheStore

    We are working on implementing a solution using Coherence and an Oracle database as the CacheStore. We initially implemented the cache as a distributed-scheme, which in turn uses a backing-map-scheme. We are trying to introduce transaction management, and I used a scheme-ref in a transactional-scheme to point to an already existing distributed-scheme. However, when I bring up the server, my custom coherence-cache-config.xml file is not recognized and Coherence comes up with the default settings. Given below is a snippet of my configuration file.
    1) I would like to understand why the configuration below doesn't work. Am I doing it the right way, and if not, what is the correct way of doing it?
    2) There are multiple transaction management options given in the documentation. Which ones will work with a distributed-scheme and a read-write-backing-map-scheme?
    3) If transactional-schemes cannot work with a distributed-scheme, what is the best way to have a distributed cache with an Oracle database as a cache store?
    <caching-scheme-mapping>
      <cache-mapping>
        <cache-name>id</cache-name>
        <scheme-name>example-transactional</scheme-name>
      </cache-mapping>
    </caching-scheme-mapping>
    <caching-schemes>
      <transactional-scheme>
        <scheme-name>example-transactional</scheme-name>
        <scheme-ref>distributedcustomcache</scheme-ref>
        <thread-count>10</thread-count>
      </transactional-scheme>
      <distributed-scheme>
        <scheme-name>distributedcustomcache</scheme-name>
        <service-name>DistributedCache</service-name>
        <backing-map-scheme>
          <read-write-backing-map-scheme>
            <internal-cache-scheme>
              <local-scheme>
                <!--scheme-ref>categories-eviction</scheme-ref-->
                <scheme-name>inMemory</scheme-name>
              </local-scheme>
            </internal-cache-scheme>
            <cachestore-scheme>
              <class-scheme>
                <class-name>spring-bean:coherenceCacheStore</class-name>
                <init-params>
                  <init-param>
                    <param-name>setEntityName</param-name>
                    <param-value>{cache-name}</param-value>
                  </init-param>
                </init-params>
              </class-scheme>
            </cachestore-scheme>
            <!--refresh-ahead-factor>0.5</refresh-ahead-factor-->
          </read-write-backing-map-scheme>
        </backing-map-scheme>
        <autostart>true</autostart>
      </distributed-scheme>

    Hi,
    If you look at the documentation for transactional-scheme here: http://docs.oracle.com/cd/E24290_01/coh.371/e22837/appendix_cacheconfig.htm#BHCIABHA
    you will see that it says "The transactional-scheme element defines a transactional cache, which is a specialized distributed cache." That means that a transactional-scheme is already a distributed-scheme.
    You will also see from the same documentation that there is no way in a transactional-scheme to configure things like cache stores or listeners, or even the backing-map-scheme, as these are not supported on a transactional-scheme - so you cannot use a cache store.
    Personally, I would not use a transactional-scheme unless you have some really big reason to do so - the restrictions far outweigh any perceived advantage of having a transaction. There are better ways to build applications so that they do not require transactions; that is what we have been doing with Coherence for years, and there is no real reason to change that.
    JK
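    For completeness: a plain distributed-scheme with a read-write-backing-map-scheme can still use a cache store through the normal NamedCache API, without any transactional-scheme. Below is a rough, hypothetical sketch of a JDBC-backed CacheStore of the kind the cachestore-scheme above would point at; the class name, JDBC URL, credentials, table, and columns are placeholders and not part of the original configuration.

    import com.tangosol.net.cache.CacheStore;

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.util.Collection;
    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical JDBC-backed CacheStore; URL, table, and column names are placeholders.
    public class SimpleJdbcCacheStore implements CacheStore {

        private static final String JDBC_URL = "jdbc:oracle:thin:@//dbhost:1521/ORCL";
        private static final String USER     = "scott";
        private static final String PASSWORD = "tiger";

        private Connection connect() throws Exception {
            return DriverManager.getConnection(JDBC_URL, USER, PASSWORD);
        }

        // Called on a cache miss (read-through).
        public Object load(Object key) {
            try (Connection con = connect();
                 PreparedStatement ps = con.prepareStatement(
                         "SELECT payload FROM cache_table WHERE cache_key = ?")) {
                ps.setString(1, String.valueOf(key));
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next() ? rs.getString(1) : null;
                }
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        }

        public Map loadAll(Collection keys) {
            Map results = new HashMap();
            for (Object key : keys) {
                Object value = load(key);
                if (value != null) {
                    results.put(key, value);
                }
            }
            return results;
        }

        // Called on put (write-through) or asynchronously (write-behind).
        public void store(Object key, Object value) {
            try (Connection con = connect();
                 PreparedStatement ps = con.prepareStatement(
                         "MERGE INTO cache_table t USING dual ON (t.cache_key = ?) "
                       + "WHEN MATCHED THEN UPDATE SET t.payload = ? "
                       + "WHEN NOT MATCHED THEN INSERT (cache_key, payload) VALUES (?, ?)")) {
                ps.setString(1, String.valueOf(key));
                ps.setString(2, String.valueOf(value));
                ps.setString(3, String.valueOf(key));
                ps.setString(4, String.valueOf(value));
                ps.executeUpdate();
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        }

        public void storeAll(Map entries) {
            for (Object o : entries.entrySet()) {
                Map.Entry e = (Map.Entry) o;
                store(e.getKey(), e.getValue());
            }
        }

        public void erase(Object key) {
            try (Connection con = connect();
                 PreparedStatement ps = con.prepareStatement(
                         "DELETE FROM cache_table WHERE cache_key = ?")) {
                ps.setString(1, String.valueOf(key));
                ps.executeUpdate();
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        }

        public void eraseAll(Collection keys) {
            for (Object key : keys) {
                erase(key);
            }
        }
    }

    A class like this would be referenced from the class-name element of the cachestore-scheme, or wired up as the Spring bean that the spring-bean: reference in the original configuration points at.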

  • Relationship between coherence and NIC teaming

    Hi,
    We are using Tangosol Coherence for clustering purposes in our product, webMethods Integration Server.
    When our server starts up, it tries to join the cluster.
    Our scenario is this:
    We have 2 servers running on 2 separate boxes, A and B.
    They are on the same network segment.
    The multicast test works properly.
    The issue is that only one of the nodes (the one started first) becomes part of the cluster; the other one remains disabled.
    We found out that NIC teaming was disabled on the boxes.
    When we enabled NIC teaming with smart load balancing, both nodes were able to join the cluster.
    My specific question is:
    Is there any relationship between Tangosol Coherence and NIC teaming? If yes, what is the relationship?
    Regards,
    Ritwik Bhattacharyya

    I did some tinkering a while back trying to get 4Gb/s bonded etherchannels going on Linux boxes, but I had issues with out-of-order and missing packets:
    4Gb/s bonded ethernet test results - finally...
    But to answer your question, there is no reason you would need NIC teaming turned on in order to make Coherence work. It sounds like something is not configured correctly with your NIC or switch. Maybe try connecting the machines with a crossover cable instead of a switch, just to eliminate the switch as a possible problem. It sounds like you may simply be using the wrong Ethernet port on a server, or something similar.
    -Andrew
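    If the cluster keeps picking the wrong interface, one thing worth trying (independent of NIC teaming) is pinning each Coherence node to a specific local address. A minimal sketch, assuming the 3.x-era tangosol.coherence.localhost system property; the IP address below is a placeholder:

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.Cluster;

    public class PinnedNode {
        public static void main(String[] args) {
            // Bind cluster traffic to a specific NIC address (placeholder IP);
            // equivalent to -Dtangosol.coherence.localhost=192.168.1.10 on the command line.
            System.setProperty("tangosol.coherence.localhost", "192.168.1.10");

            Cluster cluster = CacheFactory.ensureCluster();
            System.out.println("Joined cluster " + cluster.getClusterName()
                    + " as member " + cluster.getLocalMember().getId());
        }
    }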

  • Oracle Coherence and OSB caching

    Hi Guys,
    I have a few queries regarding Oracle Coherence.
    1. Does OSB business service caching internally use Oracle Coherence?
    2. If the answer to the above question is yes, can I install WebLogic without Oracle Coherence?
    3. Again, if the answer to the above question is yes, will OSB caching work without Oracle Coherence, and if so, how?
    4. Can we create a clustered WebLogic domain without Oracle Coherence?
    Regards

    Thanks Durga/Abhinav,
    The links clarify a lot.
    What I need to know now is this: when I installed WebLogic Server (default configuration) and created an OSB server as a managed server in the WLS domain, I could use OSB caching in business services without any issue. However, there is no Coherence server under "Coherence Servers" in the WebLogic console. How is caching working in OSB when I haven't configured any Coherence server on WebLogic?
    Regards

  • Coherence and myrinet/infiniband?

    We have some networks that run TCP/IP over Myrinet and InfiniBand. Has anybody tried using these kinds of networks with Coherence, and if so, what was the experience, performance- as well as reliability-wise? What configuration changes did you make in Coherence to get maximum performance?
    Best Regards
    Magnus

    Hi Magnus,
    In some small-scale performance testing on IB, I was able to achieve throughput of around 600MB/s between two IB-connected Coherence JVMs. In order to get rates this high, though, I needed to use jumbo frames of the maximum size (16KB, I think) and correspondingly big data objects.
    Mark | Oracle Coherence

  • Coherence and XTP(Extreme Transaction Processing)

    There are many articles and reports about Coherence and XTP (Extreme Transaction Processing). However, I think that in many cases Coherence can dramatically improve performance for query operations but not for write operations, because there are many restrictions on cache persistence through Coherence, for example global transactions, XA datasources, and transactions spanning many caches. So I think there is a long way to go if Coherence wants to implement real XTP. Am I right? Any comments?

    any comments?

  • Coherence and JRockit

    We toyed with Coherence and JRockit last Friday.
    Have a look at how a simple switch of the JVM to JRockit improves both performance AND predictability.
    http://jp.gal.free.fr/pub/Coherence-JRTt.htm
    (sorry no attachments allowed here)
    Manh-Kiet, JP

    Hi Nick,
    There is an FAQ for JRockit that should answer your questions here:
    http://www.oracle.com/technology/software/products/jrockit/FAQ.html
    As far as I know, none of us JRockit developers are actively monitoring this forum; I stumbled across this by chance. If you have any JRockit-specific questions, it would be great if you could post them in the appropriate JRockit-related forums:
    JRockit - JRockit
    JRockit Mission Control - http://forums.oracle.com/forums/forum.jspa?forumID=563
    JRockit Real Time - http://forums.oracle.com/forums/forum.jspa?forumID=564
    Hope this helps! :)
    Kind regards,
    Marcus

  • Coherence and database backend updates

    Hi
    I am new to Coherence. I liked its features such as the replicated cache, read-through caching, etc.
    My question is: if I am using Coherence with read-through and partitioned caching, and there is a back-end update to the data through an Oracle database stored procedure, how does the Coherence cache get the latest data changed by the stored procedure? Is there any event-driven mechanism to invalidate the cache so that it reloads the data, or is this not a good practice in this scenario?
    Rgds
    Anil

    Hi Anil,
    it really depends on what you need to achieve.
    There is a very good wiki which describes most of the things you can do with Coherence at the url: http://wiki.tangosol.com/display/COH33UG/Coherence+3.3+Home
    However, since you have an existing database model which you want to retain, because you want the data to still reside in the database, depending on the consistency requirements you might not be totally free in how you represent data in Coherence.
    The best feature of Coherence for significantly reducing the load on the database is the write-behind cache.
    Write-behind functionality allows you to coalesce multiple updates to the same DB row into a single update, as data is written out only after a certain amount of time, thereby combining the changes from multiple updates into one.
    It also allows ripe (due) updates to multiple cached entries whose primary copies reside in the same cache node to be written out in the same database operation (preferably in batch mode).
    Due to these behaviors, write-behind has a profound effect on write-heavy applications.
    However, that mode of operation requires that any logic that needs to query the data-set consistently, and all operations changing the data-set, go to the cache, because the database is not guaranteed to be consistent. Therefore it might not be good for you.
    Another approach, if you want to make your DB changes directly in the DB, is to simply cache data in whatever structures suit your access patterns in a read-through cache, and whenever there are changes to the database, invalidate the entries which have become stale.
    The cache structures can be whatever you choose as appropriate for your logic: you can cache single entries, entire top-down object hierarchies, or query results keyed by the query parameters.
    The point is that you are free to choose the most appropriate structure for what you cache, as opposed to the caching features of other frameworks, which choose caching structures aligned to their classes rather than to your needs.
    Just keep in mind that, without doing serious locking (which adversely affects both read and write performance), between reading any two or more entries from the cache a change might have occurred to one or more of those entries. This means that when using multiple entries from the cache, there might not be any transaction-set in the database which contained all of those entries in the state in which you read them.
    So if you need any such guarantees, the data you need them on must reside in a single cache entry, and that cache entry must have been retrieved from the database with a transaction which actually provides those guarantees (if you read data from the database with READ_COMMITTED isolation and multiple queries, you do not get that consistency even from the database, as some of the entries read by earlier operations in the transaction might have been overwritten by another transaction that committed before your subsequent reads).
    There can be other approaches as well.
    It really all depends on your access patterns and without knowing more about that it is hard to suggest the correct solution.
    Best regards,
    Robert
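    As a small illustration of the invalidation approach described above: when an out-of-band database change is detected (how it is detected - trigger, polling, queue - is up to the application), the stale entries are simply removed so that the next read loads fresh data through the read-through cache. The cache name and keys below are placeholders:

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;

    import java.util.Collection;

    public class StaleEntryInvalidator {

        // Call this when an external process (e.g. a stored procedure) has changed rows
        // whose primary keys are in 'staleKeys'; the cache name is a placeholder.
        public static void invalidate(Collection staleKeys) {
            NamedCache cache = CacheFactory.getCache("customers");
            // Removing the entries forces a read-through reload on the next get().
            cache.keySet().removeAll(staleKeys);
        }
    }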

  • Coherence and EclipseLink - JTA Transaction Manager - slow response times

    A colleague and I are updating a transactional web service to use Coherence as an underlying L2 cache. The application has the following characteristics:
    Java 1.7
    Using Spring Framework 4.0.5
    EclipseLink 12.1.2
    TopLink grid 12.1.2
    Coherence 12.1.2
    javax.persistence 12.1.2
    The application is split, with a GAR in a WebLogic environment and the actual web service application deployed into IBM WebSphere 8.5.
    When we execute a GET from the server for a decently sized piece of data, the response time is roughly 20-25 seconds. From looking into DynaTrace, it appears that we're hitting a brick wall at the "calculateChanges" method within EclipseLink. Looking further, we appear to be having issues with the transaction manager, but we're not sure what they are. With a local resource transaction manager, the response time is roughly 500 milliseconds for the exact same request; when the JTA transaction manager is involved, it's 20-25 seconds.
    Is there a recommendation on how to configure the transaction manager when incorporating Coherence into a web service application of this type?

    Hi Volker/Markus,
    Thanks a lot for the response.
    Yeah Volker, you are absolutely right. The 10-12 seconds happen when we have not used the transaction for several minutes... It looks like the transactions are moved out of the SAP buffer, or something similar, in a very short time.
    And yes, the ABAP WPs are running in Pool 2 (*BASE), and the Java server I have set up in another memory pool of 7 GB.
    I would say the performance of the Java part is much better than that of the ABAP part.
    Should I just remove the ABAP part of SOLMAN from memory pool 2 and assign the Java/ABAP stack a separate, huge memory pool of, say, 12-13 GB?
    Is that likely to improve my performance?
    No, I have not changed RSDB_TDB in TCOLL from twice daily to once weekly on all systems on this box. It is still running twice daily.
    Should I change it to once weekly on all the systems on this box? How is that going to help me? The only thing I can think of is that it will save me some CPU utilization, as considerable CPU resources are needed for this program to run.
    But my CPU utilization is anyway only around 30% on average. It is i570 hardware, currently running 5 CPUs.
    So you still think I should change this job from twice daily to once weekly on all systems on this box?
    Markus, did you open any messages with SAP on this issue?
    I remember working on version 3.2 of Solution Manager on change management, and the response times were much better than they are now on 4.0.
    Let me know, guys, and once again, thanks a lot for your help and valuable input.
    Abhi

  • Coherence and iptable firewall Question

    We have a Coherence deployment on 3 Linux virtual servers running behind a firewall. The deployment is as follows:
    Server 1 - 2 WKA nodes (cache servers) and 7 storage-disabled application nodes
    Server 2 - 1 storage-disabled application node
    Server 3 - 2 WKA nodes (cache servers) and 1 storage-disabled application node
    Now the question is: do we need to open up the firewall for all the local ports? Is there a way to avoid opening up so many ports?

    My take on this one: if the router is working fine, don't upgrade the firmware, because whenever you upgrade the firmware of a router there is an itty-bitty chance of bricking it, and since you told me it is about 3 years old, it is already out of warranty. If you do want to upgrade the firmware, you can get it at linksys.com/download. If you are just using the router for basic internet access and you are not changing any advanced configuration, I say stick with your current firmware, especially if you are not having problems with the router.
    "Love your job but never love your company. Because you never know when your company stops loving you"

  • MS SQLServer 7.0 and WLPS

    Hi,
    I am in the process of starting a new service and am having some trouble deciding whether I should use Oracle or MS SQL Server as the database. I would prefer to use MS SQL Server for its ease of use and because load will not be an issue.
    My only concern is how hard it would be to port the WLPS DB schemas to MS SQL Server.
    Has anyone done this, and how hard is it?
    Are there any other problems I might expect if I go for the MS SQL Server solution?
    Please note that this is not an invitation to an Oracle vs. MS SQL flame war.
    regards,
    Lars Hansson
    CTO
    Compost Marketing AB

    You may need some 'Actions' assigned to a role you have in the UME to be able to deploy drivers:
    JDBC Drivers:
    XMII_JDBCDriver_R
    XMII_JDBCDriver_RWD
    XMII_JDBCDriver_deploy
    XMII_JDBCDriver_all
    http://help.sap.com/saphelp_mii122sp01/helpdata/en/48/d0c6efbcb810b6e10000000a421138/content.htm?frameset=/en/45/5a399bec592a4de10000000a11466f/frameset.htm&current_toc=/en/44/e13108c4cb19d7e10000000a422035/plain.htm&node_id=7

  • ClassNotFound Exception integrating Coherence and Eclipselink with composite key entity objects

    I am hooking up Coherence as an L2 cache for EclipseLink in WebLogic 12c (using the latest released WebLogic and EclipseLink 2.4.2.v20130514-5956486). I have my application WAR and Coherence GAR packaged in the same EAR file. For entity objects with single primary keys (Longs), the Coherence integration works as expected. However, I have several multi-part-key entity objects that use an IdClass to represent the key. When these objects get serialized, Coherence throws a ClassNotFoundException. I'm assuming it's because the cache key used is an instance of my application's IdClass, and the WebLogic classloader doesn't have access to it. Since EclipseLink hides the cache integration with Coherence, I cannot pass my classloader to Coherence (as I do with other caches I'm using directly with Coherence).
    How can I get around this problem? 
    I saw this option in ExternalizableHelper.xml, but modifying it directly had no effect:
      <!-- if deploying Coherence in CLASSPATH and deploying application
           classes within a hot-redeployable archive (e.g. ".ear"), set this to
           true -->
      <!-- *** WARNING *** all cluster nodes must use the same setting -->
      <force-classloader-resolving>false</force-classloader-resolving>
    Here is the stack trace:
    ClassLoader: null) java.io.IOException: readObject failed: java.lang.ClassNotFoundException: com.oracle.pgbu.common.data.OverlayIdClass
      at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
      at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
      at java.security.AccessController.doPrivileged(Native Method)
      at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
      at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
      at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
      at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
      at java.lang.Class.forName0(Native Method)
      at java.lang.Class.forName(Class.java:270)
      at java.io.ObjectInputStream.resolveClass(ObjectInputStream.java:623)
      at weblogic.coherence.service.internal.io.WLSObjectInputStream.resolveClass(WLSObjectInputStream.java:45)
      at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1610)
      at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1515)
      at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1769)
      at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1348)
      at java.io.ObjectInputStream.readArray(ObjectInputStream.java:1704)
      at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1342)
      at java.io.ObjectInputStream.readObject(ObjectInputStream.java:370)
      at com.tangosol.util.ExternalizableHelper.readSerializable(ExternalizableHelper.java:2262)
      at com.tangosol.util.ExternalizableHelper.readObjectInternal(ExternalizableHelper.java:2393)
      at com.tangosol.util.ExternalizableHelper.readObject(ExternalizableHelper.java:2336)
      at com.tangosol.util.ExternalizableHelper.readObject(ExternalizableHelper.java:2315)
      at oracle.eclipselink.coherence.integrated.internal.cache.RelationshipUpdateProcessor.readExternal(RelationshipUpdateProcessor.java:82)
      at com.tangosol.util.ExternalizableHelper.readExternalizableLite(ExternalizableHelper.java:2086)
      at com.tangosol.util.ExternalizableHelper.readObjectInternal(ExternalizableHelper.java:2390)
      at com.tangosol.util.ExternalizableHelper.readObject(ExternalizableHelper.java:2336)
      at oracle.eclipselink.coherence.integrated.cache.WrapperSerializer.deserialize(WrapperSerializer.java:79)
      at com.tangosol.util.ExternalizableHelper.deserializeInternal(ExternalizableHelper.java:2791)
      at com.tangosol.util.ExternalizableHelper.fromBinary(ExternalizableHelper.java:266)
    ClassLoader: null
      at com.tangosol.util.Base.ensureRuntimeException(Base.java:286)
      at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.tagException(Grid.CDB:50)
      at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.onInvokeRequest(PartitionedCache.CDB:61)
      at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$InvokeRequest.run(PartitionedCache.CDB:1)
      at com.tangosol.coherence.component.util.DaemonPool.add(DaemonPool.CDB:20)
      at com.tangosol.coherence.component.util.DaemonPool.add(DaemonPool.CDB:1)
      at com.tangosol.coherence.component.net.message.requestMessage.DistributedCacheKeyRequest.onReceived(DistributedCacheKeyRequest.CDB:2)
      at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onMessage(Grid.CDB:38)
      at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:23)
      at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.PartitionedService.onNotify(PartitionedService.CDB:3)
      at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.onNotify(PartitionedCache.CDB:3)
      at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:51)
      at java.lang.Thread.run(Thread.java:724)
    Caused by: java.io.IOException: readObject failed: java.lang.ClassNotFoundException: com.oracle.pgbu.common.data.OverlayIdClass
      at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
      at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
      at java.security.AccessController.doPrivileged(Native Method)
      at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
      at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
      at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
      at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
      at java.lang.Class.forName0(Native Method)
      at java.lang.Class.forName(Class.java:270)
      at java.io.ObjectInputStream.resolveClass(ObjectInputStream.java:623)
      at weblogic.coherence.service.internal.io.WLSObjectInputStream.resolveClass(WLSObjectInputStream.java:45)
      at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1610)
      at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1515)
      at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1769)
      at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1348)
      at java.io.ObjectInputStream.readArray(ObjectInputStream.java:1704)
      at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1342)
      at java.io.ObjectInputStream.readObject(ObjectInputStream.java:370)
      at com.tangosol.util.ExternalizableHelper.readSerializable(ExternalizableHelper.java:2262)
      at com.tangosol.util.ExternalizableHelper.readObjectInternal(ExternalizableHelper.java:2393)
      at com.tangosol.util.ExternalizableHelper.readObject(ExternalizableHelper.java:2336)
      at com.tangosol.util.ExternalizableHelper.readObject(ExternalizableHelper.java:2315)
      at oracle.eclipselink.coherence.integrated.internal.cache.RelationshipUpdateProcessor.readExternal(RelationshipUpdateProcessor.java:82)
      at com.tangosol.util.ExternalizableHelper.readExternalizableLite(ExternalizableHelper.java:2086)
      at com.tangosol.util.ExternalizableHelper.readObjectInternal(ExternalizableHelper.java:2390)
      at com.tangosol.util.ExternalizableHelper.readObject(ExternalizableHelper.java:2336)
      at oracle.eclipselink.coherence.integrated.cache.WrapperSerializer.deserialize(WrapperSerializer.java:79)
      at com.tangosol.util.ExternalizableHelper.deserializeInternal(ExternalizableHelper.java:2791)
      at com.tangosol.util.ExternalizableHelper.fromBinary(ExternalizableHelper.java:266)
    ClassLoader: null
      at com.tangosol.util.ExternalizableHelper.readSerializable(ExternalizableHelper.java:2270)
      at com.tangosol.util.ExternalizableHelper.readObjectInternal(ExternalizableHelper.java:2393)
      at com.tangosol.util.ExternalizableHelper.readObject(ExternalizableHelper.java:2336)
      at com.tangosol.util.ExternalizableHelper.readObject(ExternalizableHelper.java:2315)
      at oracle.eclipselink.coherence.integrated.internal.cache.RelationshipUpdateProcessor.readExternal(RelationshipUpdateProcessor.java:82)
      at com.tangosol.util.ExternalizableHelper.readExternalizableLite(ExternalizableHelper.java:2086)
      at com.tangosol.util.ExternalizableHelper.readObjectInternal(ExternalizableHelper.java:2390)
      at com.tangosol.util.ExternalizableHelper.readObject(ExternalizableHelper.java:2336)
      at oracle.eclipselink.coherence.integrated.cache.WrapperSerializer.deserialize(WrapperSerializer.java:79)
      at com.tangosol.util.ExternalizableHelper.deserializeInternal(ExternalizableHelper.java:2791)
      at com.tangosol.util.ExternalizableHelper.fromBinary(ExternalizableHelper.java:266)
      at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$InvokeRequest.deserializeProcessor(PartitionedCache.CDB:7)
      at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.onInvokeRequest(PartitionedCache.CDB:37)
      ... 10 more
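    For reference, the "passing my classloader" pattern mentioned above, as used when talking to Coherence directly (outside the EclipseLink integration), looks roughly like the sketch below; the cache name is a placeholder, and this on its own does not solve the GAR/EAR key-class visibility problem:

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;

    public class DirectCacheAccess {
        public NamedCache lookupWithAppClassLoader() {
            // When using a cache directly, the application classloader can be supplied
            // so that cached application classes (e.g. composite key classes) resolve.
            ClassLoader appLoader = Thread.currentThread().getContextClassLoader();
            return CacheFactory.getCache("my-direct-cache", appLoader);
        }
    }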

  • Coherence and OC4J container integration - is it possible ?

    I'm planning to use Coherence as a clustered cache solution for objects assembled from the database.
    In order to do this I use a combination of an event listener and an invocation map, where I put a specially tailored message with connection information into the cache; each node then picks it up and spawns its own listening process. But there is a catch: before we put this connection message with the database login/password into the cache, we need to pull these values from somewhere. Currently I'm thinking about installing the Coherence server in our OC4J 10.1.3 cluster, which we're already using, and utilizing its features, like datasource configuration management, etc.
    So, the question is: is it possible to install the Coherence server on the OC4J app server? Are any FAQs/docs available on that?

    Hi,
    Let me restate what I think you are asking, to make sure we are on the same page.
    What you want to do is use OC4J to manage credentials which ultimately need to be picked up by a Coherence cluster, so that a backing map listener has the access information it needs to connect to a database.
    If this is what you are proposing, then I would suggest that you keep the server environments discrete (i.e. living in separate JVMs) and have your Coherence logic living in OC4J run as a client,
    i.e. -Dtangosol.coherence.distributed.localstorage=false
    Typically it is a really bad idea to overload a JVM with two types of servers; by definition they are extremely complex and will compete for a variety of resources.
    Regards,
    Bob
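    A small sketch of the client-only arrangement suggested above: the OC4J-hosted code joins the cluster storage-disabled, pulls connection details from an OC4J-managed datasource via JNDI, and publishes them into a cache for the standalone Coherence nodes to pick up. The JNDI name, cache name, key, and message format are placeholders:

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;

    import javax.naming.InitialContext;
    import javax.sql.DataSource;
    import java.sql.Connection;

    public class CredentialPublisher {
        public void publishConnectionInfo() throws Exception {
            // The OC4J JVM joins storage-disabled, equivalent to passing
            // -Dtangosol.coherence.distributed.localstorage=false on the command line.
            System.setProperty("tangosol.coherence.distributed.localstorage", "false");

            // Placeholder JNDI name for an OC4J-managed datasource; the login/password
            // come from container configuration instead of being hard-coded.
            DataSource ds = (DataSource) new InitialContext().lookup("jdbc/MyDataSource");

            Connection con = ds.getConnection();
            try {
                // Placeholder cache name, key, and message content; the real message is
                // whatever the listener nodes expect to find.
                NamedCache cache = CacheFactory.getCache("connection-info");
                cache.put("primary-db", con.getMetaData().getURL());
            } finally {
                con.close();
            }
        }
    }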

  • Coherence and Multicast Issues

    We are using Coherence 3.7.1 as part of an EAR application, deployed on WebLogic servers using multicast. We have observed that if multicast is disabled on a node at deployment time but enabled later, the node does not join the cluster; a redeployment is required after multicast is re-enabled.
    Any suggestions on this would be appreciated.
    One option is to run the multicast-test.cmd utility manually before deployment and view the result, but is there a way to automate this process without manual intervention?
    Using unicast and WKA is not possible.
    Looking forward to suggestions.

    Raff. wrote:
    Can you try to run your application linked to the 10.1 LCCS lib on a machine that only has Flash Player 10.0 installed?
    Can an application which is linked to LCCS for Player 10.1 run on a 10.0 player?
    This sounds like a problem in LCCS. Could you check the difference in code in WebcamSubscriber#playStream() between 10.0 and 10.1? (I cannot do this, as 10.1 is not open source.) Whatever the problem is, it's initiated from that method.
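    Returning to the original multicast question: since the Coherence multicast test is a manual step, a rough automated pre-check can be scripted with plain JDK multicast sockets before deployment. This is only a sketch, not the Coherence utility itself; the group address, port, and TTL are placeholders and should match the cluster's actual settings:

    import java.net.DatagramPacket;
    import java.net.InetAddress;
    import java.net.MulticastSocket;

    public class MulticastPreCheck {

        // Returns true if this host can send to and receive from the multicast group
        // (relies on multicast loopback, which is normally enabled by default).
        public static boolean canUseMulticast(String group, int port, int ttl) {
            try (MulticastSocket socket = new MulticastSocket(port)) {
                InetAddress groupAddr = InetAddress.getByName(group);
                socket.setTimeToLive(ttl);
                socket.setSoTimeout(3000);          // wait at most 3 seconds for an echo
                socket.joinGroup(groupAddr);

                byte[] probe = "coherence-precheck".getBytes("UTF-8");
                socket.send(new DatagramPacket(probe, probe.length, groupAddr, port));

                byte[] buf = new byte[probe.length];
                socket.receive(new DatagramPacket(buf, buf.length));
                return true;                        // something was heard on the group
            } catch (Exception e) {
                return false;                       // timeout, or multicast disabled
            }
        }

        public static void main(String[] args) {
            // Placeholder group/port/TTL; use the cluster's actual multicast settings.
            System.out.println("multicast ok: " + canUseMulticast("237.0.0.1", 35465, 4));
        }
    }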

  • Coherence and ADF 11g

    Has anyone used Coherence for ADF BC 11g?
    What are the steps to follow, and how are BC components enabled for Coherence use?
    Why is there no sample example available on ADF 11g and Coherence - a step-by-step guide covering install, config, and ADF BC using Coherence?
    thx
    dd

    What other ways are there to boost ADF BC scalability and performance on a high-traffic, multi-user site?
    Actually, second-level caching only gives a boost for read-mostly classes. If you have data that is updated much more often than it is read, do not enable the second-level cache: the price of maintaining the cache during updates can outweigh the performance benefit of faster reads. Furthermore, the second-level cache can be dangerous in systems that share the database with other writing applications. So you must exercise careful judgment here for each class and collection you want to enable caching for.
    When you have a highly multi-threaded environment, you can always use a cluster and a load balancer. This is probably the scalability (increasing the capacity of your application) you are looking for.
