POF serialization with replicated cache?

Sorry again for the newbie question.
Can you use POF serialized objects in a replicated cache?
All of the examples show POF serialized objects being used with a partitioned cache.
If you can do this, are there any caveats involved with the "replication" cache? I assume it would have to be started using the same configuration as the "master" cache.

Thanks Rob.
So you just start up the Coherence instance at the replication site, using the same configuration as the "master" (with the appropriate classpath and so on set correctly)?

Similar Messages

  • Can I enable pof serialization for one cache and other JAVA serialization

    I have a Coherence cluster with a few caches. Is there any way I can enable POF serialization for one cache and have the others use normal Java serialization?

    839051 wrote:
    I have a Coherence cluster with a few caches. Is there any way I can enable POF serialization for one cache and have the others use normal Java serialization?
    Hi,
    you can control serialization on a service-by-service basis. You can specify which serializer to use for a service with the <serializer> element in the service-scheme element corresponding to that service in the cache configuration file.
    Be aware, though, that if you use Coherence*Extend, and the serializer configuration of the proxy service does not match the serializer configuration of the service which you are proxying to the extend client, then the proxy node has to deserialize and re-serialize the data it moves between the service and the client.
    Best regards,
    Robert
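
    To make that concrete, here is a minimal sketch of two service schemes in one cache configuration file - one with an explicit POF serializer, and one without, which falls back to standard Java serialization. The scheme and service names are illustrative only:

     <distributed-scheme>
          <scheme-name>pof-distributed-scheme</scheme-name>
          <service-name>PofDistributedCache</service-name>
          <!-- This service serializes its data with POF -->
          <serializer>
               <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
               <init-params>
                    <init-param>
                         <param-type>String</param-type>
                         <param-value>pof-config.xml</param-value>
                    </init-param>
               </init-params>
          </serializer>
          <backing-map-scheme>
               <local-scheme/>
          </backing-map-scheme>
          <autostart>true</autostart>
     </distributed-scheme>
     <distributed-scheme>
          <scheme-name>java-distributed-scheme</scheme-name>
          <service-name>JavaDistributedCache</service-name>
          <!-- No <serializer> element: this service uses default Java serialization -->
          <backing-map-scheme>
               <local-scheme/>
          </backing-map-scheme>
          <autostart>true</autostart>
     </distributed-scheme>

    Caches mapped to the first scheme are serialized with POF; caches mapped to the second use Java serialization.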

  • Basic use of locks with replicated cache

    Hi,
    I'm in the process of evaluating Coherence and I have a few fundamental questions about best locking practices with a simple replicated cache. I've been through the docs & forums a few times, but I'm still confused.
    The docs say that a replicated cache is "fully coherent". What, exactly, does this mean? Does this imply that Coherence handles all the locking for you? Under what situations do I have to lock a node?
    Thanks.

    Hi,
    There are generally two reasons for desiring full synchronicity:
    Fault tolerance ... however, note that by the time the put returns, the data is held by both the client and the issuer. Also, we recommend the use of the Distributed cache topology for transactional data (and not Replicated). Distributed is in fact fully synchronous.
    The other reason people usually ask about this is concern over the logical view of data updates (simultaneous state across the cluster). Even within a single JVM running on a single CPU, data updates are not simultaneous or even synchronous. This is quite intentional (in fact most of the complexity of the Java Memory Model arises from the desire to avoid the requirement for simultaneous state changes).
    In the JVM, synchronized blocks are required to avoid race conditions and inconsistent views of data, and in Coherence, locks are required to do the same.
    The key point to remember is, from the point of view of a single cluster member, data updates are in fact synchronous and simultaneous (with or without locking).
    I'm assuming your question relates to one of those two issues (as those are the most common concerns that we hear); if not, could you provide a bit more background on your requirements?
    Jon Purdy
    Tangosol, Inc.

  • Error while putting an object in Replicated Cache

    Hi,
    I am running just a single node of Coherence with a replicated cache, but when I try to add an object to it I get the exception below. I don't get this error when doing the same thing with a distributed cache. Can someone please tell me what I could be doing wrong here?
    Caused by: java.io.IOException: readObject failed: java.lang.ClassNotFoundException: com.test.abc.pkg.RRSCachedObject
         at java.net.URLClassLoader$1.run(URLClassLoader.java:200)
         at java.security.AccessController.doPrivileged(Native Method)
         at java.net.URLClassLoader.findClass(URLClassLoader.java:188)
         at java.lang.ClassLoader.loadClass(ClassLoader.java:316)
         at java.lang.ClassLoader.loadClass(ClassLoader.java:251)
         at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:374)
         at java.lang.Class.forName0(Native Method)
         at java.lang.Class.forName(Class.java:242)
         at java.io.ObjectInputStream.resolveClass(ObjectInputStream.java:585)
         at com.tangosol.io.ResolvingObjectInputStream.resolveClass(ResolvingObjectInputStream.java:68)
         at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1544)
         at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1466)
         at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1699)
         at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1305)
         at java.io.ObjectInputStream.readObject(ObjectInputStream.java:348)
         at com.tangosol.util.ExternalizableHelper.readSerializable(ExternalizableHelper.java:2145)
         at com.tangosol.util.ExternalizableHelper.readObjectInternal(ExternalizableHelper.java:2276)
         at com.tangosol.util.ExternalizableHelper.deserializeInternal(ExternalizableHelper.java:2673)
         at com.tangosol.util.ExternalizableHelper.fromBinary(ExternalizableHelper.java:257)
         at com.tangosol.coherence.component.net.extend.proxy.CacheServiceProxy$ConverterFromBinary.convert(CacheServiceProxy.CDB:4)
         at com.tangosol.util.ConverterCollections$ConverterCacheMap.put(ConverterCollections.java:2433)
         at com.tangosol.coherence.component.util.collections.wrapperMap.WrapperNamedCache.put(WrapperNamedCache.CDB:1)
         at com.tangosol.coherence.component.net.extend.proxy.CacheServiceProxy$WrapperNamedCache.put(CacheServiceProxy.CDB:2)
         at com.tangosol.coherence.component.net.extend.messageFactory.NamedCacheFactory$PutRequest.onRun(NamedCacheFactory.CDB:6)
         at com.tangosol.coherence.component.net.extend.message.Request.run(Request.CDB:4)
         at com.tangosol.coherence.component.net.extend.proxy.NamedCacheProxy.onMessage(NamedCacheProxy.CDB:11)
         at com.tangosol.coherence.component.net.extend.Channel.execute(Channel.CDB:28)
         at com.tangosol.coherence.component.net.extend.Channel.receive(Channel.CDB:26)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Peer$DaemonPool$WrapperTask.run(Peer.CDB:9)
         at com.tangosol.coherence.component.util.DaemonPool$WrapperTask.run(DaemonPool.CDB:32)
         at com.tangosol.coherence.component.util.DaemonPool$Daemon.onNotify(DaemonPool.CDB:69)
         at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
         at java.lang.Thread.run(Thread.java:613)
    ClassLoader: java.net.URLClassLoader@b5f53a
         at com.tangosol.util.ExternalizableHelper.fromBinary(ExternalizableHelper.java:261)
         at com.tangosol.coherence.component.net.extend.proxy.CacheServiceProxy$ConverterFromBinary.convert(CacheServiceProxy.CDB:4)
         at com.tangosol.util.ConverterCollections$ConverterCacheMap.put(ConverterCollections.java:2433)
         at com.tangosol.coherence.component.util.collections.wrapperMap.WrapperNamedCache.put(WrapperNamedCache.CDB:1)
         at com.tangosol.coherence.component.net.extend.proxy.CacheServiceProxy$WrapperNamedCache.put(CacheServiceProxy.CDB:2)
         at com.tangosol.coherence.component.net.extend.messageFactory.NamedCacheFactory$PutRequest.onRun(NamedCacheFactory.CDB:6)
         at com.tangosol.coherence.component.net.extend.message.Request.run(Request.CDB:4)
         at com.tangosol.coherence.component.net.extend.proxy.NamedCacheProxy.onMessage(NamedCacheProxy.CDB:11)
         at com.tangosol.coherence.component.net.extend.Channel.execute(Channel.CDB:28)
         at com.tangosol.coherence.component.net.extend.Channel.receive(Channel.CDB:26)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Peer$DaemonPool$WrapperTask.run(Peer.CDB:9)
         at com.tangosol.coherence.component.util.DaemonPool$WrapperTask.run(DaemonPool.CDB:32)
         at com.tangosol.coherence.component.util.DaemonPool$Daemon.onNotify(DaemonPool.CDB:69)
         at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
         at java.lang.Thread.run(Thread.java:613)
    This is my config file -
    <cache-config>
         <caching-scheme-mapping>
              <cache-mapping>
                   <cache-name>*</cache-name>
                   <scheme-name>MY-replicated-cache-scheme</scheme-name>
              </cache-mapping>
         </caching-scheme-mapping>
         <caching-schemes>
              <!--
    Replicated caching scheme.
    -->
              <replicated-scheme>
                   <scheme-name>MY-replicated-cache-scheme</scheme-name>
                   <service-name>ReplicatedCache</service-name>
                   <backing-map-scheme>
                        <local-scheme>
                        </local-scheme>
                   </backing-map-scheme>
                   <lease-granularity>member</lease-granularity>
                   <autostart>true</autostart>
              </replicated-scheme>
              <proxy-scheme>
                   <service-name>ExtendTcpProxyService</service-name>
                   <thread-count>5</thread-count>
                   <acceptor-config>
                        <tcp-acceptor>
                             <local-address>
                                  <address>server</address>
                                  <port>port</port>
                             </local-address>
                             <receive-buffer-size>768k</receive-buffer-size>
                             <send-buffer-size>768k</send-buffer-size>
                        </tcp-acceptor>
                   </acceptor-config>
                   <autostart>true</autostart>
              </proxy-scheme>
         </caching-schemes>
    </cache-config>

    By default, it should have used FIXED as the unit-calculator, but from the trace it seems your replicated cache was using BINARY as the unit-calculator.
    Could you try adding <unit-calculator>FIXED</unit-calculator> in your cache config for the replicated cache?
    Or just try inserting an object (both key and value) which implements Binary.
    Check the unit-calculator part on this link
    http://wiki.tangosol.com/display/COH35UG/local-scheme
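
    For reference, a sketch of the scheme above with the unit calculator set explicitly (everything else unchanged):

     <replicated-scheme>
          <scheme-name>MY-replicated-cache-scheme</scheme-name>
          <service-name>ReplicatedCache</service-name>
          <backing-map-scheme>
               <local-scheme>
                    <!-- Count each entry as one unit rather than measuring its binary size -->
                    <unit-calculator>FIXED</unit-calculator>
               </local-scheme>
          </backing-map-scheme>
          <lease-granularity>member</lease-granularity>
          <autostart>true</autostart>
     </replicated-scheme>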

  • Query regarding Replicated Caches that Persist on Disk

    I need to build a simple fault tolerant system that will replicate cache
    entries across a small set of systems. I want the cache to be persistent
    even if all cluster members are brought down.
    Is this something that requires writing a custom CacheStore implementation
    or is there a configuration that I can use that uses off-the-shelf pluggable
    caches? The documentation was somewhat vague about this.
    If I need or want to write my own CacheStore, when there is a cache write-through
    operation how does Coherence figure out which member of the cluster will do
    the actual work and persist a particular object?

    Hi rhanckel,
    write-through and cache stores are not supported with replicated caches; you need to use a partitioned (distributed) cache for cache stores.
    You can use a number of out-of-the-box cache stores (Hibernate, TopLink, etc) or you can write your own if you don't find a suitable one. Configuration is the same, you need to specify the cache store class name appropriately in the <cache-store-scheme> child element of the <read-write-backing-map> element.
    You can look at the documentation for it on the following urls:
    http://wiki.tangosol.com/display/COH34UG/cachestore-scheme
    http://wiki.tangosol.com/display/COH34UG/read-write-backing-map-scheme
    http://wiki.tangosol.com/display/COH34UG/Read-Through%2C+Write-Through%2C+Write-Behind+Caching+and+Refresh-Ahead
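    As a sketch of where that element sits (the cache store class name here is hypothetical):

     <distributed-scheme>
          <scheme-name>persistent-scheme</scheme-name>
          <service-name>DistributedCache</service-name>
          <backing-map-scheme>
               <read-write-backing-map-scheme>
                    <internal-cache-scheme>
                         <local-scheme/>
                    </internal-cache-scheme>
                    <cachestore-scheme>
                         <class-scheme>
                              <!-- Your CacheStore or an out-of-the-box store implementation -->
                              <class-name>com.example.MyCacheStore</class-name>
                         </class-scheme>
                    </cachestore-scheme>
               </read-write-backing-map-scheme>
          </backing-map-scheme>
          <autostart>true</autostart>
     </distributed-scheme>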
    As for how Coherence figures out which member needs to write:
    in a partitioned cache, each cache key has an owner node which is algorithmically determined from the key itself and the distribution of partitions among nodes (these two do not depend on the actual data in the cache). More specifically, any key is always mapped to the same partition (provided you did not change the partition-count or the partition affinity-related settings, although if you did the latter, then it is arguably not the same key anymore). Therefore Coherence just needs to know, who owns a certain partition. Hence, the owner of that partition is the owner of the key, and that node is tasked with every operation related to that key.
    Best regards,
    Robert
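
    If you want to see that ownership from code, the partitioned service exposes it directly. A minimal sketch (the cache name and key are illustrative):

     import com.tangosol.net.CacheFactory;
     import com.tangosol.net.Member;
     import com.tangosol.net.NamedCache;
     import com.tangosol.net.PartitionedService;

     // Ask the partitioned service which cluster member currently owns a key.
     NamedCache cache = CacheFactory.getCache("example-cache");
     PartitionedService service = (PartitionedService) cache.getCacheService();
     Member owner = service.getKeyOwner("some-key");
     System.out.println("Key is owned by member " + owner.getId());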

  • Coherence 3.3.1 Version, Write Behind Replicated Cache Error

    Hi,
    I am using Coherence version 3.3.1. I have a write-behind cache, and the put method is throwing the following exception:
    java.lang.IllegalArgumentException: Invalid internal format: Inactive
    at com.tangosol.coherence.component.util.BackingMapManagerContext.addInternalValueDecoration(BackingMapManagerCo
    ntext.CDB:11)
    at com.tangosol.net.cache.ReadWriteBackingMap.put(ReadWriteBackingMap.java:737)
    at com.tangosol.coherence.component.util.CacheHandler.onLeaseUpdate(CacheHandler.CDB:52)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.ReplicatedCache.performUpdate(ReplicatedC
    ache.CDB:11)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.ReplicatedCache.onLeaseUpdateRequest(Repl
    icatedCache.CDB:22)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.ReplicatedCache$LeaseUpdateRequest.onRece
    ived(ReplicatedCache.CDB:5)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.onMessage(Service.CDB:9)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.onNotify(Service.CDB:123)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.ReplicatedCache.onNotify(ReplicatedCache.
    CDB:3)
    at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:35)
    at java.lang.Thread.run(Thread.java:534)
    ============
    The same cache works fine if I change the value of the <write-delay-seconds> parameter to 0, i.e. if I make the cache write-through.
    Could someone help me out with this issue?
    -thanks
    Krishan

    Write-behind caching is not supported with a replicated cache. Even with write-through, you'll end up generating replicated writes back to the back-end database, drastically increasing the load.
    For more details, please see:
    http://wiki.tangosol.com/display/COH33UG/Read-Through,+Write-Through,+Refresh-Ahead+and+Write-Behind+Caching
    For applications where write-behind would be used, the partitioned (distributed) cache is almost always a far better option. Is there a reason to not use this?
    Jon Purdy
    Oracle
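
    For comparison, a sketch of the backing map on a distributed scheme with write-behind enabled (the store class name is hypothetical):

     <read-write-backing-map-scheme>
          <internal-cache-scheme>
               <local-scheme/>
          </internal-cache-scheme>
          <cachestore-scheme>
               <class-scheme>
                    <class-name>com.example.MyCacheStore</class-name>
               </class-scheme>
          </cachestore-scheme>
          <!-- 0 means write-through; any positive delay enables write-behind -->
          <write-delay-seconds>10</write-delay-seconds>
     </read-write-backing-map-scheme>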

  • Replicated cache with cache store configuration

    Hi All,
    I have two different applications. One is an Admin-style module from which data will be inserted/updated, and the other application will read data from the Coherence cache.
    My requirement is to use a replicated cache, and the data also needs to be stored in the database. I am configuring the cache with a cache store that performs the DB operations.
    I have the following Coherence configuration. It works fine; the other application is able to read updated data. But when the second application tries to join the first application's Coherence cluster, I get the following exception in the cache store. If I use a distributed cache, the same cache store works fine without any issues.
    Also note that even though it is throwing the exception, the application is working as expected. One other thing: I am preloading data on application startup in the first application.
    Let me know if you need any further information.
    Thanks in advance.
    coherence-cache-config.xml
     <?xml version="1.0"?>
     <!DOCTYPE cache-config SYSTEM "cache-config.dtd">
     <cache-config>
          <caching-scheme-mapping>
               <cache-mapping>
                    <cache-name>TestCache</cache-name>
                    <scheme-name>TestCacheDB</scheme-name>
               </cache-mapping>
          </caching-scheme-mapping>
          <caching-schemes>
               <replicated-scheme>
                    <scheme-name>TestCacheDB</scheme-name>
                    <service-name>ReplicatedCache</service-name>
                    <backing-map-scheme>
                         <local-scheme>
                              <scheme-name>TestDBLocal</scheme-name>
                              <cachestore-scheme>
                                   <class-scheme>
                                        <class-name>test.TestCacheStore</class-name>
                                        <init-params>
                                             <init-param>
                                                  <param-type>java.lang.String</param-type>
                                                  <param-value>TEST_SUPPORT</param-value>
                                             </init-param>
                                        </init-params>
                                   </class-scheme>
                              </cachestore-scheme>
                         </local-scheme>
                    </backing-map-scheme>
                    <listener/>
                    <autostart>true</autostart>
               </replicated-scheme>
               <!--
               Proxy Service scheme that allows remote clients to connect to the
               cluster over TCP/IP.
               -->
               <proxy-scheme>
                    <scheme-name>proxy</scheme-name>
                    <service-name>ProxyService</service-name>
                    <thread-count system-property="tangosol.coherence.extend.threads">10</thread-count>
                    <acceptor-config>
                         <tcp-acceptor>
                              <local-address>
                                   <address system-property="tangosol.coherence.extend.address">localhost</address>
                                   <port system-property="tangosol.coherence.extend.port">7001</port>
                                   <reusable>true</reusable>
                              </local-address>
                         </tcp-acceptor>
                         <serializer>
                              <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
                              <init-params>
                                   <init-param>
                                        <param-type>String</param-type>
                                        <param-value>pof-config.xml</param-value>
                                   </init-param>
                              </init-params>
                         </serializer>
                    </acceptor-config>
                    <autostart system-property="tangosol.coherence.extend.enabled">false</autostart>
               </proxy-scheme>
          </caching-schemes>
     </cache-config>
    Exception:
    2010-08-31 10:46:09.062/171.859 Oracle Coherence GE 3.5.2/463 <Error> (thread=ReplicatedCache, member=2): java.lang.Clas
    sCastException: com.tangosol.util.Binary cannot be cast to test.TestBean
    at test.TestCacheStore.store(TestCacheStore.java:137)
    at com.tangosol.net.cache.LocalCache$InternalListener.onModify(LocalCache.java:637)
    at com.tangosol.net.cache.LocalCache$InternalListener.entryInserted(LocalCache.java:599)
    at com.tangosol.util.MapEvent.dispatch(MapEvent.java:206)
    at com.tangosol.util.MapEvent.dispatch(MapEvent.java:166)
    at com.tangosol.util.MapListenerSupport.fireEvent(MapListenerSupport.java:556)
    at com.tangosol.net.cache.OldCache.dispatchEvent(OldCache.java:1916)
    at com.tangosol.net.cache.OldCache$Entry.onAdd(OldCache.java:1985)
    at com.tangosol.util.SafeHashMap.put(SafeHashMap.java:244)
    at com.tangosol.net.cache.OldCache.put(OldCache.java:266)
    at com.tangosol.coherence.component.util.CacheHandler.onLeaseUpdate(CacheHandler.CDB:42)
    at com.tangosol.coherence.component.util.CacheHandler.populateCache(CacheHandler.CDB:33)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.ReplicatedCache$CacheUpdate.onReceiv
    ed(ReplicatedCache.CDB:5)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onMessage(Grid.CDB:9)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:136)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.ReplicatedCache.onNotify(ReplicatedC
    ache.CDB:3)
    at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
    at java.lang.Thread.run(Thread.java:619)
    2010-08-31 10:46:09.203/216.735 Oracle Coherence GE 3.5.2/463 <D5> (thread=DistributedCache, member=2): Deferring the di
    stribution due to 128 pending configuration updates
    TestBean.java
     public class TestBean implements PortableObject, Serializable {
          private static final long serialVersionUID = 1L;
          private String name;
          private String number;
          private String taskType;
          public String getName() {
               return name;
          }
          public void setName(String name) {
               this.name = name;
          }
          public String getNumber() {
               return number;
          }
          public void setNumber(String number) {
               this.number = number;
          }
          public String getTaskType() {
               return taskType;
          }
          public void setTaskType(String taskType) {
               this.taskType = taskType;
          }
          @Override
          public void readExternal(PofReader reader) throws IOException {
               name = reader.readString(0);
               number = reader.readString(1);
               taskType = reader.readString(2);
          }
          @Override
          public void writeExternal(PofWriter writer) throws IOException {
               writer.writeString(0, name);
               writer.writeString(1, number);
               writer.writeString(2, taskType);
          }
     }
    TestCacheStore.java
     public class TestCacheStore extends Base implements CacheStore {
          @Override
          public void store(Object oKey, Object oValue) {
               if (logger.isInfoEnabled())
                    logger.info("store :" + oKey);
               TestBean testBean = (TestBean) oValue; // Giving ClassCastException here
               // Doing some processing here over testBean
               ConnectionFactory connectionFactory = ConnectionFactory.getInstance();
               // Get the connection
               Connection con = connectionFactory.getConnection();
               if (con != null) {
                    // Code to insert into the database
               } else {
                    logger.error("Connection is NULL");
               }
          }
          // (the logger field and the remaining CacheStore methods were omitted in the original post)
     }

    Hello,
    The problem is that replicated caches are not supported with read-write backing maps.
    Please refer to the link below for more information.
    http://download.oracle.com/docs/cd/E15357_01/coh.360/e15723/cache_rtwtwbra.htm#CFHEJHCI
    Best regards,
    -Dave

  • Pof Serialization Error leads to partial cache updates in XA Tran

    I am using the Coherence JCA adapter to enlist in XA transactions with the database operations. The data is being stored in distributed caches, with the cache member running on a WebLogic server with storage disabled. POF is being used for serialization. As part of a single transaction, multiple caches, which are obtained from the CacheAdapter, are updated. The application code does explicit updates to the cache and the database within the same transaction, with the write to the cache happening after the write to the database has been executed.
    It is being observed that when an exception happens during the serialization of an object, the cache updates prior to this error are not rolled back. Namely,
    I have Cache A for Object A, Cache B for Object B and Cache C for Object C.
    I am updating A, B, C within the same transaction and in the same order as the objects are listed. So the database for A is updated followed by the cache update for A, the database for B is updated followed by the cache update for B, and similarly for C.
    If there is an error while serializing C, all the database updates are rolled back; however, updates to A and B are committed to the cache.
    Why aren't all the cache updates being rolled back? Has this been fixed in Coherence 3.6?
    Thanks,
    Shamsur
    Application Server: Weblogic 10.3
    jdbc driver : XA thin driver
    coherence : 3.5.0
    Caused by: java.lang.IllegalStateException: decimal value exceeds IEEE754r 128-bit range: 7777777788888888888899999999999900000000000000044444444444447777777777777.00
    at com.tangosol.io.pof.PofHelper.calcDecimalSize(PofHelper.java:1517)
    at com.tangosol.io.pof.PofBufferWriter.writeBigDecimal(PofBufferWriter.java:562)
    at com.tangosol.io.pof.PofBufferWriter.writeObject(PofBufferWriter.java:1325)
    at com.tangosol.io.pof.PofBufferWriter$UserTypeWriter.writeObject(PofBufferWriter.java:2092)
    at com.apx.core.datalayer.data.basictypes.BigDecimalMoneyImpl.writeExternal(BigDecimalMoneyImpl.java:127)
    at com.tangosol.io.pof.PortableObjectSerializer.serialize(PortableObjectSerializer.java:88)
    at com.tangosol.io.pof.PofBufferWriter.writeObject(PofBufferWriter.java:1439)
    at com.tangosol.io.pof.PofBufferWriter$UserTypeWriter.writeObject(PofBufferWriter.java:2092)
    at com.apx.instrument.datalayer.data.domain.impl.DOtcInstrumentImpl.writeExternal(DOtcInstrumentImpl.java:200)
    at com.apx.core.datalayer.data.impl.AbstractDomainObject.writeExternal(AbstractDomainObject.java:109)
    ... 185 more
    at com.tangosol.coherence.ra.component.connector.resourceAdapter.cciAdapter.CacheAdapter$ManagedConnection$LocalTransaction.commit(CacheAdapter.CDB:37)
    at weblogic.connector.security.layer.AdapterLayer.commit(AdapterLayer.java:570)
    at weblogic.connector.transaction.outbound.NonXAWrapper.commit(NonXAWrapper.java:84)
    at weblogic.transaction.internal.NonXAServerResourceInfo.commit(NonXAServerResourceInfo.java:330)
    at weblogic.transaction.internal.ServerTransactionImpl.globalPrepare(ServerTransactionImpl.java:2251)
    at weblogic.transaction.internal.ServerTransactionImpl.internalCommit(ServerTransactionImpl.java:270)
    at weblogic.transaction.internal.ServerTransactionImpl.commit(ServerTransactionImpl.java:230)
    at weblogic.transaction.internal.TransactionManagerImpl.commit(TransactionManagerImpl.java:283)
    at $Proxy150.create(Unknown Source)
    javax.resource.spi.LocalTransactionException: CoherenceRA: Commit failed:
    java.lang.RuntimeException: error with the class: com.apx.instrument.datalayer.data.domain.impl.DOtcInstrumentImpl
    at com.apx.core.datalayer.data.impl.AbstractDomainObject.writeExternal(AbstractDomainObject.java:111)
    at com.tangosol.io.pof.PortableObjectSerializer.serialize(PortableObjectSerializer.java:88)
    at com.tangosol.io.pof.PofBufferWriter.writeObject(PofBufferWriter.java:1439)
    at com.tangosol.io.pof.ConfigurablePofContext.serialize(ConfigurablePofContext.java:338)
    at com.tangosol.util.ExternalizableHelper.serializeInternal(ExternalizableHelper.java:2508)
    at com.tangosol.util.ExternalizableHelper.toBinary(ExternalizableHelper.java:205)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$ConverterValueToBinary.convert(DistributedCache.CDB:3)
    at com.tangosol.util.ConverterCollections$AbstractConverterEntry.getValue(ConverterCollections.java:3333)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.putAll(DistributedCache.CDB:19)
    at com.tangosol.util.ConverterCollections$ConverterMap.putAll(ConverterCollections.java:1570)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$ViewMap.putAll(DistributedCache.CDB:1)
    at com.tangosol.coherence.component.util.SafeNamedCache.putAll(SafeNamedCache.CDB:1)
    at com.tangosol.coherence.component.util.collections.WrapperMap.putAll(WrapperMap.CDB:1)
    at com.tangosol.coherence.component.util.DeltaMap.resolve(DeltaMap.CDB:9)
    at com.tangosol.coherence.component.util.deltaMap.TransactionMap.commit(TransactionMap.CDB:1)
    at com.tangosol.coherence.component.util.TransactionCache.commit(TransactionCache.CDB:14)
    at com.tangosol.coherence.component.util.transactionCache.Local.commit(Local.CDB:1)
    at com.tangosol.coherence.ra.component.connector.resourceAdapter.cciAdapter.CacheAdapter$ManagedConnection$LocalTransaction.commit(CacheAdapter.CDB:25)
    at weblogic.connector.security.layer.AdapterLayer.commit(AdapterLayer.java:570)
    at weblogic.connector.transaction.outbound.NonXAWrapper.commit(NonXAWrapper.java:84)
    at weblogic.transaction.internal.NonXAServerResourceInfo.commit(NonXAServerResourceInfo.java:330)
    at weblogic.transaction.internal.ServerTransactionImpl.globalPrepare(ServerTransactionImpl.java:2251)
    at weblogic.transaction.internal.ServerTransactionImpl.internalCommit(ServerTransactionImpl.java:270)
    at weblogic.transaction.internal.ServerTransactionImpl.commit(ServerTransactionImpl.java:230)
    at weblogic.transaction.internal.TransactionManagerImpl.commit(TransactionManagerImpl.java:283)
    at org.springframework.transaction.jta.JtaTransactionManager.doCommit(JtaTransactionManager.java:1009)
    at org.springframework.transaction.support.AbstractPlatformTransactionManager.processCommit(AbstractPlatformTransactionManager.java:754)
    at org.springframework.transaction.support.AbstractPlatformTransactionManager.commit(AbstractPlatformTransactionManager.java:723)
    at org.springframework.transaction.interceptor.TransactionAspectSupport.commitTransactionAfterReturning(TransactionAspectSupport.java:374)
    at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:120)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172)

    Hi SR-APX
    The problem is that, even though you are using the JCA adaptor, Coherence (pre-3.6) is not really transactional. Once you commit, all that data is still being pushed out to the distributed cluster members, which all work independently. An error on one member will not stop data being written successfully to others.
    In Coherence 3.6 there are real transactions but you would need to see if the limitations on them fit your use-cases.
    JK

  • Replacing our Replicated Caches with Distributed+CQC

    Hi,
    I've been advised on this forum to replace our Replicated caches with Distributed+CQC with an AlwaysFilter. This should give me the "zero-latency" get() performance which really should be part of a Replicated Cache, but apparently isn't (the lease-model and serialization get in the way of this).
    My concern is now storage efficiency - my understanding is that my storage footprint will double, as the same information is now stored in two places (partitioned backing map and cqc front map). Is this correct? If so, I'm unsure why it would be considered a good replacement for the Replicated scheme.
    Thanks,
    Matt

    The second link looks like it helped you out
    - Re: The effects of "leases" on the read-performance of Replicated Caches
    Also, if you are looking at (near) zero latency when garbage collections occur, you might want to have a look at the Zing JVM
    - http://www.azulsystems.com/products/zing/virtual-machine
    Latency due to garbage collection (i.e., introduced pause times) could break your (extreme) low-latency demand.
    You could try the CQC with the always filter:
    NamedCache cache = CacheFactory.getCache("somecache");
    ContinuousQueryCache localCache = new ContinuousQueryCache(cache, AlwaysFilter.INSTANCE);
    The preceding code will result in a locally materialized view of the cache data that satisfies the specified filter. By default, both keys and values will be cached locally.
    If you want to cache only keys and retrieve values from the back cache as needed, which might be the best option if the values are large and accessed infrequently, or if you only care about having an up-to-date keyset locally, you can pass false as the third argument to the CQC constructor.
    To get data from the CQC you can use
    Iterator<Map.Entry<Integer, Klant>> data = localCache.entrySet().iterator();
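    For instance, a keys-only view would be created like this (a sketch; the cache name is illustrative):

     import com.tangosol.net.CacheFactory;
     import com.tangosol.net.NamedCache;
     import com.tangosol.net.cache.ContinuousQueryCache;
     import com.tangosol.util.filter.AlwaysFilter;

     // Keys are kept up to date locally; values are fetched from the back
     // cache on demand because the third argument (fCacheValues) is false.
     NamedCache cache = CacheFactory.getCache("somecache");
     ContinuousQueryCache keysOnly =
          new ContinuousQueryCache(cache, AlwaysFilter.INSTANCE, false);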

  • Unknown user type with POF serialization

    Hi all,
    I'm using 3.6 and am just starting to implement POF.  In general it has been pretty easy but I seem to have a problem with my near scheme and POF.  Things work ok in my unit tests, but it doesn't work when I deploy to a single instance of WebLogic 12 on my laptop.  Here is an example scheme:
    <near-scheme>
      <scheme-name>prod-near</scheme-name>
      <autostart>true</autostart>
      <front-scheme>
        <local-scheme>
          <high-units>{high-units 2000}</high-units>
          <expiry-delay>{expiry-delay 2h}</expiry-delay>
        </local-scheme>
      </front-scheme>
      <back-scheme>
        <distributed-scheme>
          <backing-map-scheme>
            <local-scheme>
              <high-units>{high-units 10000}</high-units>
              <expiry-delay>{expiry-delay 2h}</expiry-delay>
            </local-scheme>
          </backing-map-scheme>
          <serializer>
            <instance>
              <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
              <init-params>
                <init-param>
              <param-type>String</param-type>
              <param-value>/Bus/pof-config.xml</param-value>
                </init-param>
              </init-params>
            </instance>
          </serializer>
        </distributed-scheme>
      </back-scheme>
    </near-scheme>
    I don't know if it matters, but some of my caches use another scheme that references this one as a parent:
    <near-scheme>
      <scheme-name>daily-near</scheme-name>
      <scheme-ref>prod-near</scheme-ref>
      <autostart>true</autostart>
      <back-scheme>
        <distributed-scheme>
          <backing-map-scheme>
            <local-scheme>
              <high-units system-property="daily-near-high-units">{high-units 10000}</high-units>
              <expiry-delay>{expiry-delay 1d}</expiry-delay>
            </local-scheme>
          </backing-map-scheme>
          <serializer>
            <instance>
              <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
              <init-params>
                <init-param>
              <param-type>String</param-type>
              <param-value>/Bus/pof-config.xml</param-value>
                </init-param>
              </init-params>
            </instance>
          </serializer>
        </distributed-scheme>
      </back-scheme>
    </near-scheme>
    Those schemes have existed for years.  I'm only now adding the serializers.  I use this same cache config file in my unit tests, as well as the same pof config file.  My unit tests do ExternalizableHelper.toBinary(o, pofContext) and ExternalizableHelper.fromBinary(b, pofContext).  I create the test pof context by doing new ConfigurablePofContext("/Bus/pof-config.xml").  I've also tried actually putting and getting an object to and from a cache in my unit tests.  Everything works as expected.
    My type definition looks like this:
    <user-type>
      <type-id>1016</type-id>
      <class-name>com.mycompany.mydepartment.bus.service.role.RoleResource</class-name>
    </user-type>
    I'm not using the tangosol.pof.enabled system property because I don't think it's necessary with the explicit serializers.
    Here is part of a stack trace:
    (Wrapped) java.io.IOException: unknown user type: com.mycompany.mydepartment.bus.service.role.RoleResource
        at com.tangosol.util.ExternalizableHelper.toBinary(ExternalizableHelper.java:214)
        at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$ConverterValueToBinary.convert(PartitionedCache.CDB:3)
        at com.tangosol.util.ConverterCollections$ConverterCacheMap.put(ConverterCollections.java:2486)
        at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$ViewMap.put(PartitionedCache.CDB:1)
        at com.tangosol.coherence.component.util.SafeNamedCache.put(SafeNamedCache.CDB:1)
        at com.tangosol.net.cache.CachingMap.put(CachingMap.java:943)
        at com.tangosol.net.cache.CachingMap.put(CachingMap.java:902)
        at com.tangosol.net.cache.CachingMap.put(CachingMap.java:814)
    Any idea what I'm missing?
    Thanks
    John

    SR-APX wrote:
    Aleks,
    thanks for your response.
    However the include element needs to be present inside the <user-type-list> tag:
    <user-type-list> <include>coherence-pof-config.xml</include></user-type-list>
    For all other interested users, the following property, tangosol.pof.enabled=true, must also be set for POF serialization to work correctly.
    Thanks again...
    Shamsur
    Hi Shamsur,
    it is not mandatory to use tangosol.pof.enabled=true; you can alternatively specify the serializer for the clustered services to be configured for POF (not necessarily all of them) on a service-by-service basis in the cache configuration file explicitly with the following element:
    <serializer>com.tangosol.io.pof.ConfigurablePofContext</serializer>
    Best regards,
    Robert

  • Issue with POF serialization - Failure to deserialize an Invocable object: java.io.StreamCorruptedException

    I am running into the following exception even after following all the guidelines for implementing POF. The main objective is to perform distributed bulk cache loading.
    Oracle Coherence GE 3.7.1.10 <Error> (thread=Invocation:InvocationService, member=1): Failure to deserialize an Invocable object: java.io.StreamCorruptedException: unknown user type: 1001
    java.io.StreamCorruptedException: unknown user type: 1001
      at com.tangosol.io.pof.PofBufferReader.readAsObject(PofBufferReader.java:3312)
      at com.tangosol.io.pof.PofBufferReader.readObject(PofBufferReader.java:2604)
      at com.tangosol.io.pof.ConfigurablePofContext.deserialize(ConfigurablePofContext.java:371)
      at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.readObject(Service.CDB:1)
      at com.tangosol.coherence.component.net.Message.readObject(Message.CDB:1)
      at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.InvocationService$InvocationRequest.read(InvocationService.CDB:8)
      at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.deserializeMessage(Grid.CDB:19)
      at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:31)
      at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
      at java.lang.Thread.run(Thread.java:662)
    Following is the pof-config.xml
    <?xml version="1.0" encoding="UTF-8" ?>
    <pof-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                xmlns="http://xmlns.oracle.com/coherence/coherence-pof-config"
                xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-pof-config/1.1/coherence-pof-config.xsd">
      <user-type-list>
        <include>coherence-pof-config.xml</include>
        <user-type>
          <type-id>1001</type-id>
          <class-name>com.westgroup.coherence.bermuda.loader.DistributedLoaderAgent</class-name>
          <serializer>
            <class-name>com.tangosol.io.pof.PofAnnotationSerializer</class-name>     
            <init-params>
              <init-param>
                <param-type>int</param-type>
                <param-value>{type-id}</param-value>
              </init-param>
              <init-param>
                <param-type>java.lang.Class</param-type>
                <param-value>{class}</param-value>
              </init-param>
              <init-param>
                <param-type>boolean</param-type>
                <param-value>true</param-value>
              </init-param>
            </init-params>
          </serializer>
        </user-type>
         <user-type>
          <type-id>1002</type-id>
          <class-name>com.westgroup.coherence.bermuda.profile.lpa.LPACacheProfile</class-name>
          <serializer>
            <class-name>com.tangosol.io.pof.PofAnnotationSerializer</class-name>     
            <init-params>
              <init-param>
                <param-type>int</param-type>
                <param-value>{type-id}</param-value>
              </init-param>
              <init-param>
                <param-type>java.lang.Class</param-type>
                <param-value>{class}</param-value>
              </init-param>
              <init-param>
                <param-type>boolean</param-type>
                <param-value>true</param-value>
              </init-param>
            </init-params>
          </serializer>
        </user-type>
         <user-type>
          <type-id>1003</type-id>
          <class-name>com.westgroup.coherence.bermuda.profile.lpa.Address</class-name>
          <serializer>
            <class-name>com.tangosol.io.pof.PofAnnotationSerializer</class-name>     
            <init-params>
              <init-param>
                <param-type>int</param-type>
                <param-value>{type-id}</param-value>
              </init-param>
              <init-param>
                <param-type>java.lang.Class</param-type>
                <param-value>{class}</param-value>
              </init-param>
              <init-param>
                <param-type>boolean</param-type>
                <param-value>true</param-value>
              </init-param>
            </init-params>
          </serializer>
        </user-type>
         <user-type>
          <type-id>1004</type-id>
          <class-name>com.westgroup.coherence.bermuda.profile.lpa.Discipline</class-name>
          <serializer>
            <class-name>com.tangosol.io.pof.PofAnnotationSerializer</class-name>     
            <init-params>
              <init-param>
                <param-type>int</param-type>
                <param-value>{type-id}</param-value>
              </init-param>
              <init-param>
                <param-type>java.lang.Class</param-type>
                <param-value>{class}</param-value>
              </init-param>
              <init-param>
                <param-type>boolean</param-type>
                <param-value>true</param-value>
              </init-param>
            </init-params>
          </serializer>
        </user-type>
         <user-type>
          <type-id>1005</type-id>
          <class-name>com.westgroup.coherence.bermuda.profile.lpa.Employment</class-name>
          <serializer>
            <class-name>com.tangosol.io.pof.PofAnnotationSerializer</class-name>     
            <init-params>
              <init-param>
                <param-type>int</param-type>
                <param-value>{type-id}</param-value>
              </init-param>
              <init-param>
                <param-type>java.lang.Class</param-type>
                <param-value>{class}</param-value>
              </init-param>
              <init-param>
                <param-type>boolean</param-type>
                <param-value>true</param-value>
              </init-param>
            </init-params>
          </serializer>
        </user-type>
      </user-type-list>
      <allow-interfaces>true</allow-interfaces>
      <allow-subclasses>true</allow-subclasses>
    </pof-config>
    cache-config.xml
     <cache-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config"
       xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-cache-config http://xmlns.oracle.com/coherence/coherence-cache-config/1.1/coherence-cache-config.xsd">
       <defaults>
         <serializer>pof</serializer>
       </defaults>
       <caching-scheme-mapping>
         <cache-mapping>
           <cache-name>DistributedLPACache</cache-name>
           <scheme-name>LPANewCache</scheme-name>
           <init-params>
             <init-param>
               <param-name>back-size-limit</param-name>
               <param-value>250MB</param-value>
             </init-param>
           </init-params>
         </cache-mapping>
       </caching-scheme-mapping>
       <caching-schemes>
         <!-- Distributed caching scheme. -->
         <distributed-scheme>
           <scheme-name>LPANewCache</scheme-name>
           <service-name>HBaseLPACache</service-name>
           <serializer>
             <instance>
               <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
               <init-params>
                 <init-param>
                   <param-type>java.lang.String</param-type>
                   <param-value>pof-config.xml</param-value>
                 </init-param>
               </init-params>
             </instance>
           </serializer>
           <backing-map-scheme>
             <read-write-backing-map-scheme>
               <internal-cache-scheme>
                 <class-scheme>
                   <class-name>com.tangosol.util.ObservableHashMap</class-name>
                 </class-scheme>
               </internal-cache-scheme>
               <cachestore-scheme>
                 <class-scheme>
                   <class-name>com.westgroup.coherence.bermuda.profile.lpa.LPACacheProfile</class-name>
                 </class-scheme>
               </cachestore-scheme>
               <read-only>false</read-only>
               <write-delay-seconds>0</write-delay-seconds>
             </read-write-backing-map-scheme>
           </backing-map-scheme>
           <autostart>true</autostart>
         </distributed-scheme>
         <invocation-scheme>
           <scheme-name>InvocationService</scheme-name>
           <service-name>InvocationService</service-name>
           <thread-count>5</thread-count>
           <autostart>true</autostart>
         </invocation-scheme>
       </caching-schemes>
     </cache-config>
    DistributedLoaderAgent (user type 1001)
     import java.io.IOException;
     import java.io.Serializable;
     import java.lang.annotation.Annotation;
     import org.apache.log4j.Logger;
     import com.tangosol.io.pof.PofReader;
     import com.tangosol.io.pof.PofWriter;
     import com.tangosol.io.pof.PortableObject;
     import com.tangosol.io.pof.annotation.Portable;
     import com.tangosol.io.pof.annotation.PortableProperty;
     import com.tangosol.net.AbstractInvocable;
     import com.tangosol.net.InvocationService;

     @Portable
     public class DistributedLoaderAgent extends AbstractInvocable implements PortableObject {
       private static final long serialVersionUID = 10L;
       private static Logger m_logger = Logger.getLogger(DistributedLoaderAgent.class);

       @PortableProperty(0)
       public String partDumpFileName = null;

       public String getPartDumpFileName() {
         return partDumpFileName;
       }

       public void setPartDumpFileName(String partDumpFileName) {
         this.partDumpFileName = partDumpFileName;
       }

       public DistributedLoaderAgent() {
         super();
         m_logger.debug("Configuring this loader ");
       }

       public DistributedLoaderAgent(String partDumpFile) {
         super();
         m_logger.debug("Configuring this loader to load dump file " + partDumpFile);
         partDumpFileName = partDumpFile;
       }

       @Override
       public void init(InvocationService service) {
         super.init(service);
       }

       @Override
       public void run() {
         try {
           m_logger.debug("Invoked DistributedLoaderAgent");
           MetadataTranslatorService service = new MetadataTranslatorService(false, "LPA");
           m_logger.debug("Invoking service.loadLPACache");
           service.loadLPACache(partDumpFileName);
         } catch (Exception e) {
           m_logger.debug("Exception in DistributedLoaderAgent " + e.getMessage());
         }
       }

       @Override
       public void readExternal(PofReader arg0) throws IOException {
         setPartDumpFileName(arg0.readString(0));
       }

       @Override
       public void writeExternal(PofWriter arg0) throws IOException {
         arg0.writeString(0, getPartDumpFileName());
       }
     }
    Please assist.

    OK, I have two suggestions.
    1. Always create and flush the ObjectOutputStream before creating the ObjectInputStream.
    2. Always close the output before you close the input. Actually, once you close the output stream, both the input stream and the socket are closed anyway, so you can economize on this code. In the above you have out.writeObject() followed by input.close() followed by out.close(). Change this to out.writeObject() followed by out.close(). It may be that something needed flushing and the input.close() prevented the flush from happening.
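
    A minimal sketch of that ordering over a plain socket (host and port are hypothetical):

     import java.io.ObjectInputStream;
     import java.io.ObjectOutputStream;
     import java.net.Socket;

     Socket socket = new Socket("localhost", 9000);

     // Create and flush the output stream first, so the peer's
     // ObjectInputStream can read the stream header without blocking.
     ObjectOutputStream out = new ObjectOutputStream(socket.getOutputStream());
     out.flush();
     ObjectInputStream in = new ObjectInputStream(socket.getInputStream());

     out.writeObject("payload");
     out.close(); // closing the output also closes the socket and the input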

  • Read through Scheme in Replicated Cache with berkely db

    Hi, I have 20 GB of data, and when restarting the server I need to populate all of it into the Coherence cache. If I write a preload Java class, it will take 30 minutes to an hour to load the data into the cache; while it is loading, how can I respond to requests that come in? I have gone through the read-through scheme and it looks good, but I don't know how to implement it with a replicated cache. Is it possible to implement read-through + replicated cache + Berkeley DB? If yes, please post sample code with full references. Thanks in advance.

    If you read the documentation for the replicated scheme configuration here: http://docs.oracle.com/cd/E24290_01/coh.371/e22837/appendix_cacheconfig.htm#BABGHEJE and specifically the part about the <backing-map> configuration, you will see:
    http://docs.oracle.com/cd/E24290_01/coh.371/e22837/appendix_cacheconfig.htm#BABGHEJE:
    To ensure cache coherence, the backing-map of a replicated cache must not use a read-through pattern to load cache entries. Either use a cache-aside pattern from outside the cache service, or switch to the distributed-scheme, which supports read-through clustered caching.
    So it would appear that you cannot do read-through with a replicated cache - which makes sense really when you think about it.
    As I already asked - why are you trying to put 20GB in a replicated cache?
    Presumably you do not have JVMs with heaps well over 20GB to hold all that data, or do you - in which case you must have tuned GC very well. You say you are bothered about NFRs and network latency yet you are building a system that will require either very big heaps, and hence at some point there will be long GCs or you cannot hold all the data in memory so you have to configure expiry and then you have the latency of reading the data from the DB.
    If you are using read-through then presumably your use-case does not require all the data to be in the cache - i.e. all your data access is by key-based gets and you do not do any filter queries. If this is the case then use a distributed cache, where you can store all the data, or use read-through. If all your access is by key-based gets then you do not need to co-locate the caches and your application in a single JVM - have separate cache server JVMs to hold the data and configure near-caches in your application.
    There are various ways to hold 20GB of data that would be much more efficient than you are suggesting with a replicated cache.
    JK
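
    For completeness, a minimal cache-aside sketch as described in the quoted documentation (the cache name and DAO call are hypothetical):

     import com.tangosol.net.CacheFactory;
     import com.tangosol.net.NamedCache;

     // Cache-aside: the application, not the cache service, loads on a miss.
     NamedCache cache = CacheFactory.getCache("example-cache");
     Object value = cache.get(key);
     if (value == null) {
          value = dao.loadFromDatabase(key); // hypothetical data-access call
          if (value != null) {
               cache.put(key, value);
          }
     }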

  • Concurrency with Entry Processors on Replicated Cache

    Hi,
    In the documentation it says that entry processors in replicated caches are executed on the initiating node.
    How is concurrency handled in this situation?
    What happens if two or more nodes are asked to execute something on an entry at the same time?
    What happens if the node initiating the execution is a storage disabled node?
    Thanks!

    Jonathan.Knight wrote:
    In a distributed cache the entry processor runs on the node that owns the cache entry. In a replicated cache the same entries are on all nodes, so I think one of the questions was what happens in this scenario. I presume the EP only executes on one of the nodes - it would not make sense for it to execute on all nodes - but which one does it use? Is there still a concept of owner for a replicated cache, or is it random?
    At this point I would have coded a quick experiment to prove what happens, but unfortunately I am a tad busy right now.
    JK
    Hi Jonathan,
    in the replicated cache there is still a notion of ownership of an entry; in Coherence terms it is called a lease. It is always owned by the last node to have carried out a successful modification on it, where a modification may be a put/remove but can also be a lock operation. Lease granularity is per entry.
    Practically, the lock operation in the code Dimitri pasted serves two purposes. First, it ensures no other nodes can lock it; second, it brings the lease to the locking node, so it can correctly execute the entry processor locally on the entry.
    Best regards,
    Robert

  • Are put()'s to a replicated cache atomic?

    We are using Coherence for a storage application and we're observing what may be a Coherence issue with latency on put()'s to a replicated cache.
    We have two software nodes on different servers sharing information in the cache and I need to know if I can count on a put() being atomic with respect to all software and hardware nodes in the grid.
    Does anyone know if these operations are guaranteed to be atomic on replicated caches, and that an entry will be visible to ALL nodes when the put() returns?
    Thanks,
    Stacy Maydew

    You could use explicit locking, for example:
    if (cache.lock(key, timeout)) {
        try {
            Object value = cache.get(key);
            cache.put(key, value);
        } finally {
            cache.unlock(key);
        }
    } else {
        // decide what to do if you cannot obtain a lock
    }
    Note that when using explicit locking, you will require multiple trips to the cache server: to lock the entry, to retrieve the value, to update it, and to unlock it, which increases the latency.
    You can also use an entry processor which carries the information needed for the update.
    An entry processor eliminates the need for explicit concurrency control.
    An example:
    public class KlantNaamEntryProcessor extends AbstractProcessor implements Serializable {
        public KlantNaamEntryProcessor() {
        }

        public Object process(InvocableMap.Entry entry) {
            Klant klant = (Klant) entry.getValue();
            klant.setNaam(klant.getNaam().toLowerCase());
            entry.setValue(klant);
            return klant;
        }
    }
    and its usage:
    cache.invokeAll(filter, new KlantNaamEntryProcessor()).entrySet().iterator();
    Better examples can be found here: http://wiki.tangosol.com/display/COH33UG/Transactions,+Locks+and+Concurrency

  • Default to Java Serialization in case Pof Serialization not defined

    Is this possible to do - i.e., essentially, if POF serialization is not defined for a certain class, use Java serialization instead?
    Or, to turn it around, is it possible to define POF serialization only for certain classes in a distributed cache and use Java serialization for the rest?

    Hi,
    the problem for this is that Java serialization is not aware of POF (or for that matter even ExternalizableLite), so if you have a Java-serialized class which has a member which is supposed to be POF-serializable, it in fact will not be serialized with POF, because Java serialization will not delegate to POF.
    So it is very hard to mix the two together. You can do it for top-level objects by providing a special PofSerializer instance for the non-POF class which serializes to byte array and you write the byte array as a POF attribute, but it is not possible for POF-aware objects contained within a non-POF aware object to be POF serialized.
    Also, if you attempt to do this, then you can kiss goodbye to platform independence. You must use Java on both ends and have all the libraries which the classes used in the state want to pull in.
    Best regards,
    Robert
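
    As a sketch of the byte-array wrapping approach Robert describes (the class is illustrative, not a standard Coherence serializer):

     import java.io.ByteArrayInputStream;
     import java.io.ByteArrayOutputStream;
     import java.io.IOException;
     import java.io.ObjectInputStream;
     import java.io.ObjectOutputStream;
     import com.tangosol.io.pof.PofReader;
     import com.tangosol.io.pof.PofSerializer;
     import com.tangosol.io.pof.PofWriter;

     // Wraps standard Java serialization inside a single POF byte-array
     // attribute, so a non-POF class can be stored by a POF-configured service.
     public class JavaSerializationPofSerializer implements PofSerializer {
          public void serialize(PofWriter out, Object o) throws IOException {
               ByteArrayOutputStream baos = new ByteArrayOutputStream();
               ObjectOutputStream oos = new ObjectOutputStream(baos);
               oos.writeObject(o);
               oos.close();
               out.writeByteArray(0, baos.toByteArray());
               out.writeRemainder(null);
          }

          public Object deserialize(PofReader in) throws IOException {
               byte[] ab = in.readByteArray(0);
               in.readRemainder();
               try {
                    return new ObjectInputStream(new ByteArrayInputStream(ab)).readObject();
               } catch (ClassNotFoundException e) {
                    throw new IOException(e.toString());
               }
          }
     }

    You would register such a serializer for the class's user type in the pof-config file. Note Robert's caveat still applies: POF-aware members nested inside such an object will not be POF-serialized, and you give up platform independence.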
