Default to Java serialization in case POF serialization is not defined
Is it possible to do this, i.e. if POF serialization is not defined for a certain class, fall back to Java serialization instead?
Or to turn it around, is it possible to define POF serialization only for certain classes in a distributed cache and use Java serialization for the rest?
Hi,
the problem is that Java serialization is not aware of POF (or, for that matter, even ExternalizableLite). If you have a Java-serialized class with a member that is supposed to be POF-serializable, that member will in fact not be serialized with POF, because Java serialization does not delegate to POF.
So it is very hard to mix the two together. You can do it for top-level objects by providing a special PofSerializer for the non-POF class which Java-serializes the object to a byte array and writes that byte array as a single POF attribute, but it is not possible for POF-aware objects contained within a non-POF-aware object to be POF-serialized.
Also, if you attempt this, you can kiss platform independence goodbye. You must use Java on both ends and have all the libraries that the classes in the serialized state want to pull in.
Best regards,
Robert
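The byte-array wrapping Robert mentions is easy to sketch: a PofSerializer for the non-POF class would Java-serialize the object and write the resulting byte[] as a single POF attribute, reversing this on read. The helper below shows only the plain-Java round trip such a serializer would delegate to; the class and method names are illustrative, not a Coherence API:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;

public class JavaSerializationBridge {
    // Java-serialize any Serializable object into a byte array.
    public static byte[] toBytes(Object o) throws IOException {
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(baos)) {
            oos.writeObject(o);
        }
        return baos.toByteArray();
    }

    // Restore the object from the byte array.
    public static Object fromBytes(byte[] ab) throws IOException, ClassNotFoundException {
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(ab))) {
            return ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        Object copy = fromBytes(toBytes("hello"));
        System.out.println(copy); // prints "hello"
    }
}
```

Robert's caveat still applies: both cluster ends must be Java and must have the wrapped class (and everything it references) on the classpath.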
Similar Messages
-
Why do we need a Java class for C++ POF serialization?
Hi,
I'm really confused why we need a Java class which implements PortableObject to support complex objects in C++. If we are not using any queries or entry processors in the application, can't we keep the object in serialized byte format and retrieve it through the C++ deserialization?
Please share your thoughts on whether there's a way to skip any Java implementation.
regards,
Sura
I feel both are doing the same work. Also, can anyone tell me what is the difference between Serialization and Externalization?
If you need someone to tell you the difference, (a) how can you possibly 'feel both are doing the same work'? and (b) why don't you look it up in the Javadoc?
It's not a secret. -
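Regarding the Serialization vs. Externalization question above, the difference in code is this: Serializable lets the runtime write the fields reflectively, while Externalizable hands full control (and the obligation of a public no-arg constructor) to the class. A minimal, self-contained sketch:

```java
import java.io.*;

public class SerializationVsExternalization {
    // Serializable: the runtime writes the instance fields reflectively.
    public static class Point implements Serializable {
        public int x, y;
        public Point(int x, int y) { this.x = x; this.y = y; }
    }

    // Externalizable: the class decides exactly what is written and read,
    // and must provide a public no-arg constructor for deserialization.
    public static class ExtPoint implements Externalizable {
        public int x, y;
        public ExtPoint() { }
        public ExtPoint(int x, int y) { this.x = x; this.y = y; }
        @Override public void writeExternal(ObjectOutput out) throws IOException {
            out.writeInt(x);
            out.writeInt(y);
        }
        @Override public void readExternal(ObjectInput in) throws IOException {
            x = in.readInt();
            y = in.readInt();
        }
    }

    // Serialize an object to bytes and read it back.
    public static Object roundTrip(Object o) throws Exception {
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(baos)) {
            oos.writeObject(o);
        }
        try (ObjectInputStream ois = new ObjectInputStream(
                new ByteArrayInputStream(baos.toByteArray()))) {
            return ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        Point p = (Point) roundTrip(new Point(1, 2));
        ExtPoint e = (ExtPoint) roundTrip(new ExtPoint(3, 4));
        System.out.println(p.x + "," + p.y + " " + e.x + "," + e.y); // prints "1,2 3,4"
    }
}
```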
I am running into the following exception even after following all guidelines to implement POF. The main objective is to perform distributed bulk cache loading.
Oracle Coherence GE 3.7.1.10 <Error> (thread=Invocation:InvocationService, member=1): Failure to deserialize an Invocable object: java.io.StreamCorruptedException: unknown user type: 1001
java.io.StreamCorruptedException: unknown user type: 1001
at com.tangosol.io.pof.PofBufferReader.readAsObject(PofBufferReader.java:3312)
at com.tangosol.io.pof.PofBufferReader.readObject(PofBufferReader.java:2604)
at com.tangosol.io.pof.ConfigurablePofContext.deserialize(ConfigurablePofContext.java:371)
at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.readObject(Service.CDB:1)
at com.tangosol.coherence.component.net.Message.readObject(Message.CDB:1)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.InvocationService$InvocationRequest.read(InvocationService.CDB:8)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.deserializeMessage(Grid.CDB:19)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:31)
at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
at java.lang.Thread.run(Thread.java:662)
Following is the pof-config.xml
<?xml version="1.0" encoding="UTF-8" ?>
<pof-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns="http://xmlns.oracle.com/coherence/coherence-pof-config"
xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-pof-config/1.1/coherence-pof-config.xsd">
<user-type-list>
<include>coherence-pof-config.xml</include>
<user-type>
<type-id>1001</type-id>
<class-name>com.westgroup.coherence.bermuda.loader.DistributedLoaderAgent</class-name>
<serializer>
<class-name>com.tangosol.io.pof.PofAnnotationSerializer</class-name>
<init-params>
<init-param>
<param-type>int</param-type>
<param-value>{type-id}</param-value>
</init-param>
<init-param>
<param-type>java.lang.Class</param-type>
<param-value>{class}</param-value>
</init-param>
<init-param>
<param-type>boolean</param-type>
<param-value>true</param-value>
</init-param>
</init-params>
</serializer>
</user-type>
<user-type>
<type-id>1002</type-id>
<class-name>com.westgroup.coherence.bermuda.profile.lpa.LPACacheProfile</class-name>
<serializer>
<class-name>com.tangosol.io.pof.PofAnnotationSerializer</class-name>
<init-params>
<init-param>
<param-type>int</param-type>
<param-value>{type-id}</param-value>
</init-param>
<init-param>
<param-type>java.lang.Class</param-type>
<param-value>{class}</param-value>
</init-param>
<init-param>
<param-type>boolean</param-type>
<param-value>true</param-value>
</init-param>
</init-params>
</serializer>
</user-type>
<user-type>
<type-id>1003</type-id>
<class-name>com.westgroup.coherence.bermuda.profile.lpa.Address</class-name>
<serializer>
<class-name>com.tangosol.io.pof.PofAnnotationSerializer</class-name>
<init-params>
<init-param>
<param-type>int</param-type>
<param-value>{type-id}</param-value>
</init-param>
<init-param>
<param-type>java.lang.Class</param-type>
<param-value>{class}</param-value>
</init-param>
<init-param>
<param-type>boolean</param-type>
<param-value>true</param-value>
</init-param>
</init-params>
</serializer>
</user-type>
<user-type>
<type-id>1004</type-id>
<class-name>com.westgroup.coherence.bermuda.profile.lpa.Discipline</class-name>
<serializer>
<class-name>com.tangosol.io.pof.PofAnnotationSerializer</class-name>
<init-params>
<init-param>
<param-type>int</param-type>
<param-value>{type-id}</param-value>
</init-param>
<init-param>
<param-type>java.lang.Class</param-type>
<param-value>{class}</param-value>
</init-param>
<init-param>
<param-type>boolean</param-type>
<param-value>true</param-value>
</init-param>
</init-params>
</serializer>
</user-type>
<user-type>
<type-id>1005</type-id>
<class-name>com.westgroup.coherence.bermuda.profile.lpa.Employment</class-name>
<serializer>
<class-name>com.tangosol.io.pof.PofAnnotationSerializer</class-name>
<init-params>
<init-param>
<param-type>int</param-type>
<param-value>{type-id}</param-value>
</init-param>
<init-param>
<param-type>java.lang.Class</param-type>
<param-value>{class}</param-value>
</init-param>
<init-param>
<param-type>boolean</param-type>
<param-value>true</param-value>
</init-param>
</init-params>
</serializer>
</user-type>
</user-type-list>
<allow-interfaces>true</allow-interfaces>
<allow-subclasses>true</allow-subclasses>
</pof-config>
cache-config.xml
<cache-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config"
xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-cache-config http://xmlns.oracle.com/coherence/coherence-cache-config/1.1/coherence-cache-config.xsd">
<defaults>
<serializer>pof</serializer>
</defaults>
<caching-scheme-mapping>
<cache-mapping>
<cache-name>DistributedLPACache</cache-name>
<scheme-name>LPANewCache</scheme-name>
<init-params>
<init-param>
<param-name>back-size-limit</param-name>
<param-value>250MB</param-value>
</init-param>
</init-params>
</cache-mapping>
</caching-scheme-mapping>
<caching-schemes>
<!-- Distributed caching scheme. -->
<distributed-scheme>
<scheme-name>LPANewCache</scheme-name>
<service-name>HBaseLPACache</service-name>
<serializer>
<instance>
<class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
<init-params>
<init-param>
<param-type>java.lang.String</param-type>
<param-value>pof-config.xml</param-value>
</init-param>
</init-params>
</instance>
</serializer>
<backing-map-scheme>
<read-write-backing-map-scheme>
<internal-cache-scheme>
<class-scheme>
<class-name>com.tangosol.util.ObservableHashMap</class-name>
</class-scheme>
</internal-cache-scheme>
<cachestore-scheme>
<class-scheme>
<class-name>com.westgroup.coherence.bermuda.profile.lpa.LPACacheProfile</class-name>
</class-scheme>
</cachestore-scheme>
<read-only>false</read-only>
<write-delay-seconds>0</write-delay-seconds>
</read-write-backing-map-scheme>
</backing-map-scheme>
<autostart>true</autostart>
</distributed-scheme>
<invocation-scheme>
<scheme-name>InvocationService</scheme-name>
<service-name>InvocationService</service-name>
<thread-count>5</thread-count>
<autostart>true</autostart>
</invocation-scheme>
</caching-schemes>
</cache-config>
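One hedged observation on the configuration above: the invocation-scheme carries no <serializer> element, so the InvocationService falls back to a default POF context that does not know user type 1001 (DistributedLoaderAgent), which matches the "unknown user type: 1001" failure on the Invocation:InvocationService thread. A sketch of the likely fix, reusing the same ConfigurablePofContext as the distributed scheme:

```xml
<invocation-scheme>
  <scheme-name>InvocationService</scheme-name>
  <service-name>InvocationService</service-name>
  <thread-count>5</thread-count>
  <serializer>
    <instance>
      <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
      <init-params>
        <init-param>
          <param-type>java.lang.String</param-type>
          <param-value>pof-config.xml</param-value>
        </init-param>
      </init-params>
    </instance>
  </serializer>
  <autostart>true</autostart>
</invocation-scheme>
```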
DistributedLoaderAgent (user type 1001)
import java.io.IOException;
import org.apache.log4j.Logger;
import com.tangosol.io.pof.PofReader;
import com.tangosol.io.pof.PofWriter;
import com.tangosol.io.pof.PortableObject;
import com.tangosol.io.pof.annotation.Portable;
import com.tangosol.io.pof.annotation.PortableProperty;
import com.tangosol.net.AbstractInvocable;
import com.tangosol.net.InvocationService;

@Portable
public class DistributedLoaderAgent extends AbstractInvocable implements PortableObject {

    private static final long serialVersionUID = 10L;
    private static Logger m_logger = Logger.getLogger(DistributedLoaderAgent.class);

    @PortableProperty(0)
    public String partDumpFileName = null;

    public String getPartDumpFileName() {
        return partDumpFileName;
    }

    public void setPartDumpFileName(String partDumpFileName) {
        this.partDumpFileName = partDumpFileName;
    }

    public DistributedLoaderAgent() {
        super();
        m_logger.debug("Configuring this loader");
    }

    public DistributedLoaderAgent(String partDumpFile) {
        super();
        m_logger.debug("Configuring this loader to load dump file " + partDumpFile);
        partDumpFileName = partDumpFile;
    }

    @Override
    public void init(InvocationService service) {
        super.init(service);
    }

    @Override
    public void run() {
        try {
            m_logger.debug("Invoked DistributedLoaderAgent");
            MetadataTranslatorService service = new MetadataTranslatorService(false, "LPA");
            m_logger.debug("Invoking service.loadLPACache");
            service.loadLPACache(partDumpFileName);
        } catch (Exception e) {
            m_logger.debug("Exception in DistributedLoaderAgent " + e.getMessage());
        }
    }

    @Override
    public void readExternal(PofReader arg0) throws IOException {
        setPartDumpFileName(arg0.readString(0));
    }

    @Override
    public void writeExternal(PofWriter arg0) throws IOException {
        arg0.writeString(0, getPartDumpFileName());
    }
}
Please assist.
OK, I have two suggestions:
1. Always create and flush the ObjectOutputStream before creating the ObjectInputStream.
2. Always close the output before you close the input. Actually, once you close the output stream, both the input stream and the socket are closed anyway, so you can economize on this code. In the above you have out.writeObject() followed by input.close() followed by out.close(). Change this to out.writeObject() followed by out.close(). It may be that something needed flushing and the input.close() prevented the flush from happening. -
Can I enable pof serialization for one cache and other JAVA serialization
I have a Coherence cluster with a few caches. Is there any way I can enable POF serialization for one cache and have the others use normal Java serialization?
839051 wrote:
I had coherence cluster with few cache , Is there any way i can enable pof serialization for one cache and other to use normal JAVA serialization
Hi,
you can control serialization on a service-by-service basis. You can specify which serializer to use for a service with the <serializer> element in the service-scheme element corresponding to that service in the cache configuration file.
Be aware, though, that if you use Coherence*Extend and the serializer configuration for the proxy service does not match the serializer configuration of the service you are proxying to the extend client, then the proxy node has to deserialize and reserialize the data it moves between the service and the client.
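Concretely, with Coherence 3.7+ the named serializer shortcuts make the per-service choice a one-liner; the scheme and service names below are illustrative:

```xml
<caching-schemes>
  <!-- Caches mapped to this scheme use POF -->
  <distributed-scheme>
    <scheme-name>pof-scheme</scheme-name>
    <service-name>PofDistributedService</service-name>
    <serializer>pof</serializer>
    <backing-map-scheme><local-scheme/></backing-map-scheme>
    <autostart>true</autostart>
  </distributed-scheme>
  <!-- Caches mapped to this scheme use plain Java serialization -->
  <distributed-scheme>
    <scheme-name>java-scheme</scheme-name>
    <service-name>JavaDistributedService</service-name>
    <serializer>java</serializer>
    <backing-map-scheme><local-scheme/></backing-map-scheme>
    <autostart>true</autostart>
  </distributed-scheme>
</caching-schemes>
```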
Best regards,
Robert -
Overriding pof serialization in subclasses?
When subclassing a class that supports POF serialization, how would I know what property index I can use for the additional elements added by the subclass, assuming that I do not have the source code of the superclass (nor does the information exist in the javadoc of the superclass)?
Is it for instance possible to read the last property index used from the stream (PofWriter) somehow?
I tried to find anything in the PofContext and PofWriter interface but did not see anything that looked relevant...
Examples of situations were this would apply is for instance when creating a custom filter using a Coherence built in filter as base-class.
Or am I missing something obvious about how to handle subclassing and POF-serialization that makes this a non-issue?
Best Regards
Magnus
MagnusE wrote:
Thanks for the VERY quick answer Robert - I hardly posted before you answered :-)
I think it sounds a bit fragile to rely on documentation for this (on the other hand, most cases of extending classes you don't have the source to are more or less fragile!).
Since it does not sound too complicated to provide a method that returns the highest property index used so far from the stream (or is there some inherent problem with that solution?), I would prefer that over simply documenting it.
Hi Magnus,
On the contrary.
IMHO, documentation of the first property index a subclass may use is the only correct way to do this.
This is because it is perfectly valid to encode null values by not writing anything to the PofWriter. Therefore the property index that was last written is not necessarily the greatest one that can ever be used, in which case you can't just continue from the value following the last one written: that value depends on the superclass state, whereas the mapping of property indexes to attributes must not depend on state. At least, the intent of the specification and the direction of ongoing development is that the mapping is constant, and dependencies will most probably be introduced on this in upcoming versions (sorry, can't tell anything more specific about this yet).
As for also providing a method which returns this CONSTANT value, that can be requested as an enhancement, but the last-written property index is not a good candidate for your purposes.
For Evolvable implementors, this is even trickier, as all properties from later versions must follow all properties from earlier versions, otherwise they wouldn't end up in the remainder and Evolvable couldn't function as it should. This also means that all properties of all subclasses from later implementation versions must have larger property indexes than all properties from preceding implementation versions.
Therefore, for Evolvable implementors, the first property index a subclass may use must be documented separately for each implementation version.
Actually, this also poses a theoretical problem, as it means the superclass must contain information derived from all subclasses, which is impossible to do correctly when the subclass is from another vendor and not from the superclass vendor. The superclass vendor may allocate a large gap for subclass properties, but when another attribute is added to a later version of the superclass, it is theoretically possible that the gap is not enough.
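The documented-constant convention Robert recommends can be made concrete. In this hypothetical sketch, the superclass javadoc reserves indexes 0..9 for itself and publishes the first index a subclass may use:

```java
public class PofIndexes {
    public static class BaseFilter {
        // Javadoc contract (hypothetical): indexes 0..9 are reserved for BaseFilter.
        protected static final int FIRST_SUBCLASS_INDEX = 10;
    }

    public static class MyFilter extends BaseFilter {
        // The subclass builds its indexes on the documented constant.
        public static final int PROP_VALUE = FIRST_SUBCLASS_INDEX;          // 10
        public static final int PROP_THRESHOLD = FIRST_SUBCLASS_INDEX + 1;  // 11
    }

    public static void main(String[] args) {
        // The subclass indexes are compile-time constants, independent of instance state.
        System.out.println(MyFilter.PROP_VALUE + " " + MyFilter.PROP_THRESHOLD); // prints "10 11"
    }
}
```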
Best regards,
Robert -
Hi, I am getting the following error while doing POF serialization. Please help.
java.lang.StackOverflowError
at com.tangosol.io.pof.PofBufferWriter$UserTypeWriter.isEvolvable(PofBufferWriter.java:2753)
... (the same frame repeats until the stack overflows)
at com.tangosol.io.pof.PofBufferWriter$UserTypeWriter.isEvolvable(PofBufferWriter.java:2753)
saiyansharwan wrote:
Hi, I am getting the following error while doing POF serialization. Please help.
java.lang.StackOverflowError
at com.tangosol.io.pof.PofBufferWriter$UserTypeWriter.isEvolvable(PofBufferWriter.java:2753)
at com.tangosol.io.pof.PofBufferWriter$UserTypeWriter.isEvolvable(PofBufferWriter.java:2753)
Hi saiyansharwan,
which Coherence version is this?
Best regards,
Robert -
Unknown user type with POF serialization
Hi all,
I'm using 3.6 and am just starting to implement POF. In general it has been pretty easy but I seem to have a problem with my near scheme and POF. Things work ok in my unit tests, but it doesn't work when I deploy to a single instance of WebLogic 12 on my laptop. Here is an example scheme:
<near-scheme>
<scheme-name>prod-near</scheme-name>
<autostart>true</autostart>
<front-scheme>
<local-scheme>
<high-units>{high-units 2000}</high-units>
<expiry-delay>{expiry-delay 2h}</expiry-delay>
</local-scheme>
</front-scheme>
<back-scheme>
<distributed-scheme>
<backing-map-scheme>
<local-scheme>
<high-units>{high-units 10000}</high-units>
<expiry-delay>{expiry-delay 2h}</expiry-delay>
</local-scheme>
</backing-map-scheme>
<serializer>
<instance>
<class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
<init-params>
<init-param>
<param-type>java.lang.String</param-type>
<param-value>/Bus/pof-config.xml</param-value>
</init-param>
</init-params>
</instance>
</serializer>
</distributed-scheme>
</back-scheme>
</near-scheme>
I don't know if it matters, but some of my caches use another scheme that references this one as a parent:
<near-scheme>
<scheme-name>daily-near</scheme-name>
<scheme-ref>prod-near</scheme-ref>
<autostart>true</autostart>
<back-scheme>
<distributed-scheme>
<backing-map-scheme>
<local-scheme>
<high-units system-property="daily-near-high-units">{high-units 10000}</high-units>
<expiry-delay>{expiry-delay 1d}</expiry-delay>
</local-scheme>
</backing-map-scheme>
<serializer>
<instance>
<class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
<init-params>
<init-param>
<param-type>java.lang.String</param-type>
<param-value>/Bus/pof-config.xml</param-value>
</init-param>
</init-params>
</instance>
</serializer>
</distributed-scheme>
</back-scheme>
</near-scheme>
Those schemes have existed for years. I'm only now adding the serializers. I use this same cache config file in my unit tests, as well as the same pof config file. My unit tests do ExternalizableHelper.toBinary(o, pofContext) and ExternalizableHelper.fromBinary(b, pofContext). I create the test pof context by doing new ConfigurablePofContext("/Bus/pof-config.xml"). I've also tried actually putting and getting an object to and from a cache in my unit tests. Everything works as expected.
My type definition looks like this:
<user-type>
<type-id>1016</type-id>
<class-name>com.mycompany.mydepartment.bus.service.role.RoleResource</class-name>
</user-type>
I'm not using the tangosol.pof.enabled system property because I don't think it's necessary with the explicit serializers.
Here is part of a stack trace:
(Wrapped) java.io.IOException: unknown user type: com.mycompany.mydepartment.bus.service.role.RoleResource
at com.tangosol.util.ExternalizableHelper.toBinary(ExternalizableHelper.java:214)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$ConverterValueToBinary.convert(PartitionedCache.CDB:3)
at com.tangosol.util.ConverterCollections$ConverterCacheMap.put(ConverterCollections.java:2486)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$ViewMap.put(PartitionedCache.CDB:1)
at com.tangosol.coherence.component.util.SafeNamedCache.put(SafeNamedCache.CDB:1)
at com.tangosol.net.cache.CachingMap.put(CachingMap.java:943)
at com.tangosol.net.cache.CachingMap.put(CachingMap.java:902)
at com.tangosol.net.cache.CachingMap.put(CachingMap.java:814)
Any idea what I'm missing?
Thanks
John
SR-APX wrote:
Aleks
thanks for your response.
However the include property needs to be present inside the <user-type-list> tag...
<user-type-list><include>coherence-pof-config.xml</include></user-type-list>
For all other interested users: the property tangosol.pof.enabled=true must also be set for POF serialization to work correctly.
Thanks again...
Shamsur
Hi Shamsur,
it is not mandatory to use tangosol.pof.enabled=true; you can alternatively specify the serializer explicitly, on a service-by-service basis, for the clustered services which should be configured for POF (not necessarily all of them) in the cache configuration file with the following element:
<serializer>com.tangosol.io.pof.ConfigurablePofContext</serializer>
Best regards,
Robert -
I am pretty frustrated about having to write my own Serializable classes. I'm not sure if this is the right place to ask, but will the next version of Java support Serializable 2D objects?
Further, I was trying to write my own class extending java.awt.geom.GeneralPath to make it Serializable, but it's declared "final". What should I do? (I had no problems with Rectangle2D.Double, Line2D.Double, etc.)
Any help is greatly appreciated.
Selwyn
Your code for serializing the state of the GeneralPath forgets two things:
1. the winding rule
2. the segment types!
You could use a Vector, but I just wrote directly to the stream:
private void writeObject(ObjectOutputStream out) throws IOException {
    out.defaultWriteObject();
    // write state of transient GeneralPath _gp
    out.writeInt(_gp.getWindingRule());
    float[] coords = new float[6];
    PathIterator i = _gp.getPathIterator(null);
    while (!i.isDone()) {
        int seg = i.currentSegment(coords);
        out.writeInt(seg);
        // switch on seg, writing the correct # of floats from coords
        i.next();
    }
    out.writeInt(-1); // sentinel for end-of-data: SEG_LINETO etc. are [0,4]
}

private void readObject(ObjectInputStream in) throws IOException, ClassNotFoundException {
    in.defaultReadObject();
    int rule = in.readInt();
    _gp = new GeneralPath(rule);
    // etc...
}
3. I'm just winging this code -- haven't tested it
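Here is a self-contained, compilable version of this approach (the PathCodec name is illustrative); it fills in the segment-type switch the sketch leaves as a comment and handles all five PathIterator segment types:

```java
import java.awt.geom.GeneralPath;
import java.awt.geom.PathIterator;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class PathCodec {
    // Coordinates used by each segment type: MOVETO/LINETO 2, QUADTO 4, CUBICTO 6, CLOSE 0.
    static int coordCount(int seg) {
        switch (seg) {
            case PathIterator.SEG_MOVETO:
            case PathIterator.SEG_LINETO:  return 2;
            case PathIterator.SEG_QUADTO:  return 4;
            case PathIterator.SEG_CUBICTO: return 6;
            default:                       return 0; // SEG_CLOSE
        }
    }

    // Write the winding rule, then (segment-type, coords) pairs, then a -1 sentinel.
    public static void write(GeneralPath gp, DataOutputStream out) throws IOException {
        out.writeInt(gp.getWindingRule());
        float[] coords = new float[6];
        for (PathIterator i = gp.getPathIterator(null); !i.isDone(); i.next()) {
            int seg = i.currentSegment(coords);
            out.writeInt(seg);
            for (int k = 0; k < coordCount(seg); k++) {
                out.writeFloat(coords[k]);
            }
        }
        out.writeInt(-1); // sentinel: segment types are [0,4]
    }

    // Rebuild the path by replaying the recorded segments.
    public static GeneralPath read(DataInputStream in) throws IOException {
        GeneralPath gp = new GeneralPath(in.readInt());
        float[] c = new float[6];
        for (int seg = in.readInt(); seg != -1; seg = in.readInt()) {
            for (int k = 0; k < coordCount(seg); k++) {
                c[k] = in.readFloat();
            }
            switch (seg) {
                case PathIterator.SEG_MOVETO:  gp.moveTo(c[0], c[1]); break;
                case PathIterator.SEG_LINETO:  gp.lineTo(c[0], c[1]); break;
                case PathIterator.SEG_QUADTO:  gp.quadTo(c[0], c[1], c[2], c[3]); break;
                case PathIterator.SEG_CUBICTO: gp.curveTo(c[0], c[1], c[2], c[3], c[4], c[5]); break;
                default:                       gp.closePath(); break;
            }
        }
        return gp;
    }

    public static void main(String[] args) throws IOException {
        GeneralPath gp = new GeneralPath(GeneralPath.WIND_EVEN_ODD);
        gp.moveTo(0f, 0f);
        gp.lineTo(10f, 0f);
        gp.quadTo(10f, 10f, 0f, 10f);
        gp.closePath();
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        write(gp, new DataOutputStream(baos));
        GeneralPath copy = read(new DataInputStream(new ByteArrayInputStream(baos.toByteArray())));
        System.out.println(copy.getBounds().equals(gp.getBounds())); // prints "true"
    }
}
```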
--Nax -
hi
I would like to write an integration test for POF serialization.
Is it possible to make Coherence serialize all objects I put into a named cache?
Or do I need to start two nodes (in two classloaders) in my test?
If you want to just test the POF serialization of your classes, you can simply serialize, deserialize and check the results,
e.g.
MyPofClass original = new MyPofClass();
// populate fields of original
// Serialize to Binary
ConfigurablePofContext pofContext = new ConfigurablePofContext("name of pof config file .xml");
Binary binary = ExternalizableHelper.toBinary(original, pofContext);
// Deserialize back to the original class
MyPofClass deserialized = (MyPofClass) ExternalizableHelper.fromBinary(binary, pofContext);
// now do some asserts to check that the fields in the original match the fields in the deserialized instance
I usually do that test for all but the most simple POF classes I write.
JK -
I have a strange issue using a POF serializer where one of the fields of the class is not populated when the object is retrieved from the cache.
The class looks like:
public class RegionSerializer extends AbstractPersistentEntitySerializer<RegionImpl> {

    private static final int REGION_CODE = 11;
    private static final int NAME = 12;
    private static final int ID_REGION = 13;
    private static final int ID_CCY_RPT = 14;

    @Override
    protected RegionImpl createInstance() {
        return new RegionImpl();
    }

    public void serialize(PofWriter out, Object o) throws IOException {
        RegionImpl obj = (RegionImpl) o;
        super.serialize(out, obj);
        out.writeObject(REGION_CODE, obj.getRegionCode());
        out.writeString(NAME, obj.getName());
        out.writeInt(ID_REGION, obj.getIdRegion());
        out.writeObject(ID_CCY_RPT, obj.getIdCcyRpt());
    }

    public Object deserialize(PofReader in) throws IOException {
        RegionImpl obj = (RegionImpl) super.deserialize(in);
        obj.setRegionCode((RegionCode) in.readObject(REGION_CODE));
        obj.setName(in.readString(NAME));
        obj.setIdRegion(in.readInt(ID_REGION));
        obj.setIdCcyRpt((CurrencyCode) in.readObject(ID_CCY_RPT));
        return obj;
    }
}
and the RegionCodeSerializer...
public class RegionCodeSerializer implements PofSerializer {

    private static final int CODE = 11;

    public void serialize(PofWriter out, Object o) throws IOException {
        RegionCode obj = (RegionCode) o;
        out.writeString(CODE, obj.getCode());
    }

    public Object deserialize(PofReader in) throws IOException {
        RegionCode obj = new RegionCode();
        obj.setCode(in.readString(CODE));
        return obj;
    }
}
the output from the log after inserting into and retrieving from the cache is
06-Oct-2010 10:11:28,277 BST DEBUG refdata.RefDataServiceImpl main - Region count:4
06-Oct-2010 10:11:28,277 BST INFO cache.TestCacheStartUp main - REGION FROM DAO: RegionImpl[RegionImpl[objectId=5],regionCode=EMEA,name=LONDON]
06-Oct-2010 10:11:28,277 BST INFO cache.TestCacheStartUp main - REGION FROM DAO: RegionImpl[RegionImpl[objectId=6],regionCode=US,name=NEW YORK]
06-Oct-2010 10:11:28,277 BST INFO cache.TestCacheStartUp main - REGION FROM DAO: RegionImpl[RegionImpl[objectId=7],regionCode=APAC,name=TOKYO]
06-Oct-2010 10:11:28,277 BST INFO cache.TestCacheStartUp main - REGION FROM DAO: RegionImpl[RegionImpl[objectId=8],regionCode=NON_PYRAMID_DESKS,name=NON PYRAMID DESKS]
06-Oct-2010 10:11:28,293 BST INFO cache.TestCacheStartUp main - Is Cache empty?: false
06-Oct-2010 10:11:28,293 BST INFO cache.TestCacheStartUp main - Cache Size is: 4
06-Oct-2010 10:11:28,324 BST INFO cache.TestCacheStartUp main - REGION FROM CACHE: RegionImpl[RegionImpl[objectId=6],regionCode=US,name=<null>]
06-Oct-2010 10:11:28,324 BST INFO cache.TestCacheStartUp main - REGION FROM CACHE: RegionImpl[RegionImpl[objectId=7],regionCode=APAC,name=<null>]
06-Oct-2010 10:11:28,324 BST INFO cache.TestCacheStartUp main - REGION FROM CACHE: RegionImpl[RegionImpl[objectId=8],regionCode=NON_PYRAMID_DESKS,name=<null>]
06-Oct-2010 10:11:28,324 BST INFO cache.TestCacheStartUp main - REGION FROM CACHE: RegionImpl[RegionImpl[objectId=5],regionCode=EMEA,name=<null>]
As can be seen from the output, the name field is null after retrieval. It seems that the 3 remaining fields after regionCode are being ignored by the deserialize (or serialize) method in the serializer class above, but I can't see why. Any ideas?
Hi,
You need to call read/writeRemainder() at the end of deserialization/serialization to properly terminate reading/writing of a user type.
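Applied to the RegionCodeSerializer above, Wei's fix is to terminate the user type in both directions. This fragment is a sketch (it assumes the Coherence classes from the original post and is not runnable standalone):

```java
public void serialize(PofWriter out, Object o) throws IOException {
    RegionCode obj = (RegionCode) o;
    out.writeString(CODE, obj.getCode());
    out.writeRemainder(null);   // terminate writing of the user type
}

public Object deserialize(PofReader in) throws IOException {
    RegionCode obj = new RegionCode();
    obj.setCode(in.readString(CODE));
    in.readRemainder();         // consume the remainder before returning
    return obj;
}
```

Without the remainder calls, the stream position is left inside the nested user type, which would explain why every field after the nested regionCode came back null.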
Thanks,
Wei -
XMLEncoder on Java 1.5.0 does not serialize the minimumSize property
Hi,
XMLEncoder on Java 1.5.0 does not serialize the minimumSize, preferredSize and maximumSize properties!
Example with outputs on Java 1.4.2_04 and 1.5.0:
import javax.swing.JButton;
import java.beans.XMLEncoder;
import java.awt.Dimension;
public class SerializeMinimumSize {
    public static void main(String[] args) {
        XMLEncoder enc = new XMLEncoder(System.out);
        JButton btt = new JButton("miao!");
        btt.setMinimumSize(new Dimension(32, 64));
        enc.writeObject(btt);
        enc.close();
    }
}
With 1.4.2_04:
<?xml version="1.0" encoding="UTF-8"?>
<java version="1.4.2_04" class="java.beans.XMLDecoder">
<object class="javax.swing.JButton">
<string>miao!</string>
<void property="minimumSize">
<object class="java.awt.Dimension">
<int>32</int>
<int>64</int>
</object>
</void>
</object>
</java>
With 1.5.0:
<?xml version="1.0" encoding="UTF-8"?>
<java version="1.5.0" class="java.beans.XMLDecoder">
<object class="javax.swing.JButton">
<string>miao!</string>
</object>
</java>
It omits the minimumSize property. Why?
I've found the solution in the java.beans.MetaData source.
I've created my own PersistenceDelegate for my component. It is simpler than it may appear!
bye! -
POF Serialization for Exceptions thrown
If any Java exceptions are thrown, they need to be shown to clients in some proper way.
How do I make these Java exceptions POF-enabled?
Is com.tangosol.io.pof.ThrowablePofSerializer meant for this?
I can have my custom Exception class implement the Portable interface, but I wanted to know if there is a way to POF-enable existing Java exceptions, e.g. java.util.ConcurrentModificationException.
Edited by: Khangharoth on Jul 19, 2009 10:40 PM
Is com.tangosol.io.pof.ThrowablePofSerializer meant for this? Yes, and with it we can have any Java Throwable POF-enabled.
But there is a catch: "Any deserialized exception will lose type information, and simply be represented as a PortableException."
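Registering ThrowablePofSerializer for an existing JDK exception is a plain pof-config entry; the 2001 type-id below is an arbitrary unused id (a sketch, not a tested configuration):

```xml
<user-type>
  <type-id>2001</type-id>
  <class-name>java.util.ConcurrentModificationException</class-name>
  <serializer>
    <class-name>com.tangosol.io.pof.ThrowablePofSerializer</class-name>
  </serializer>
</user-type>
```

As noted above, the exception deserializes on the far side as a PortableException, so the concrete type is lost.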
The steps to enable Java Webstart in OSX Mavericks are not working.
The steps located here: http://support.apple.com/kb/ht5559 are not re-enabling the Java Webstart for Java 6 in OSX Mavericks. I have tried all four steps about 20 times and looked at all the settings in all my browsers (chrome, safari, and firefox) and all say "missing plugin" when I go to any website that requires Java -- in this case, the NY Times Crossword Puzzle's "solve with a friend", which is incompatible with Java 7 so upgrading does not help me. I've tried searching for other answers and all keep directing me to the same four steps that have not worked.
A note, those steps worked when I had the same problem after upgrading to Mountain Lion. Should something be altered for Mavericks?
Please advise.

You'll possibly need to remove Oracle Java altogether first; see these notes on Oracle's website:
http://www.java.com/en/download/help/mac_uninstall_java.xml
After you remove it, go through the steps to re-enable Apple Java again; and, inside Safari, you can also go to its menu and choose "Reset Safari".
Hope this helps. -
XMLTable default values for timestamp results in ORA-01843: not a valid month
When I try to provide a default for a timestamp value in the XMLTABLE function, I am greeted with an error (ORA-01843: not a valid month) no matter how I provide that default value. Whether there is a value present in the XML or not is irrelevant for this bug to occur. It appears to be an incomplete fix of bug number 9745897 (thread).
select x.*
from
xmltable('/DOC' passing xmltype('<DOC><DT>2013-08-14T15:08:31</DT></DOC>')
COLUMNS dt timestamp default sysdate) x;
select x.*
from
xmltable('/DOC' passing xmltype('<DOC><DT>2013-08-14T15:08:31</DT></DOC>')
COLUMNS dt timestamp default systimestamp) x;
select x.*
from
xmltable('/DOC' passing xmltype('<DOC><DT>2013-08-14T15:08:31</DT></DOC>')
COLUMNS dt timestamp default to_char(systimestamp, 'YYYY-MM-DD"T"HH24:MI:SS') ) x;
Edit: A little more followup.
This works:
select x.*
from
xmltable('/DOC' passing xmltype('<DOC></DOC>')
COLUMNS dt date default sysdate) x;
This also works, except it's just the date, not the date/time
select x.*
from
xmltable('/DOC' passing xmltype('<DOC></DOC>')
COLUMNS dt timestamp default sysdate) x;
This doesn't work
select x.*
from
xmltable('/DOC' passing xmltype('<DOC></DOC>')
COLUMNS dt timestamp default systimestamp) x;
ORA-01861: literal does not match format string

Hi,
First of all, let's check the manual for the DEFAULT clause :
XMLTABLE SQL/XML Function in Oracle XML DB
The optional DEFAULT clause specifies the value to use when the PATH expression results in an empty sequence (or NULL). Its expr is an XQuery expression that is evaluated to produce the default value.
According to the documentation, the DEFAULT clause should specify an XQuery expression.
However, that is wrong, the actual implementation only expects an expression that resolves to a string, the content is not interpreted.
So, bottom line is if we don't directly specify a string, the expression will be implicitly converted to one, and we all know how bad things can go when implicit conversions occur, especially when dates or timestamps are involved.
Now let's focus on how the DEFAULT clause affects the query evaluation.
When a DEFAULT clause is specified, Oracle rewrites the projection differently and does not use the native xs:dateTime format to convert the value :
select x.*
from
xmltable('/DOC' passing xmltype('<DOC><DT>2013-08-14T15:08:31</DT></DOC>')
COLUMNS dt timestamp default systimestamp
) x
becomes :
SELECT CASE EXISTSNODE(VALUE(KOKBF$),'/DOC/DT')
WHEN 1 THEN CAST(TO_TIMESTAMP(SYS_XQ_UPKXML2SQL(SYS_XQEXVAL(SYS_XQEXTRACT(VALUE(KOKBF$),'/DOC/DT')),50,1,2)) AS timestamp )
ELSE CAST(TO_TIMESTAMP(TO_CHAR(SYSTIMESTAMP(6)),'SYYYY-MM-DD"T"HH24:MI:SSXFF') AS timestamp )
END "DT"
FROM TABLE("SYS"."XQSEQUENCE"(EXTRACT("SYS"."XMLTYPE"('<DOC><DT>2013-08-14T15:08:31</DT></DOC>'),'/DOC'))) "KOKBF$"
Note the TO_TIMESTAMP call in the ELSE branch: it doesn't use the format parameter, so the conversion relies on the session's NLS settings.
When there's no DEFAULT clause, the TO_TIMESTAMP function uses an explicit format :
select x.*
from
xmltable('/DOC' passing xmltype('<DOC><DT>2013-08-14T15:08:31</DT></DOC>')
COLUMNS dt timestamp --default systimestamp
) x
rewritten to :
SELECT CAST(
         TO_TIMESTAMP(
           SYS_XQ_UPKXML2SQL(SYS_XQEXVAL(SYS_XQEXTRACT(VALUE(KOKBF$),'/DOC/DT'),0,0,20971520,0),50,1,2)
         , 'SYYYY-MM-DD"T"HH24:MI:SSXFF'
         )
       AS timestamp) "DT"
FROM TABLE("SYS"."XQSEQUENCE"(EXTRACT("SYS"."XMLTYPE"('<DOC><DT>2013-08-14T15:08:31</DT></DOC>'),'/DOC'))) "KOKBF$"
so yes, maybe there's a bug here.
Edit: A little more followup.
This works:
select x.*
from
xmltable('/DOC' passing xmltype('<DOC></DOC>')
COLUMNS dt date default sysdate) x;
Actually no, it doesn't work. Granted, maybe it doesn't produce any error, but the result is incorrect.
As explained, the conversion relies on the session NLS (NLS_DATE_FORMAT in this case) :
SQL> show parameters nls_date_format
NAME TYPE VALUE
nls_date_format string DD/MM/RR
SQL>
SQL> select sysdate from dual;
SYSDATE
16/08/13
SQL> select x.*
2 from
3 xmltable('/DOC' passing xmltype('<DOC></DOC>')
4 COLUMNS dt date default sysdate) x;
DT
13/08/16
Oracle first converts SYSDATE to a string using current NLS_DATE_FORMAT, resulting in '16/08/13'
Then this string is converted to a DATE using the xs:date format 'YYYY-MM-DD' resulting in 13/08/0016 (August 13, 0016) which is incorrect.
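The double conversion can be sketched outside the database. This is a hypothetical java.time illustration of the mechanism only, not Oracle's actual code path; the date and NLS pattern come from the example above.

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

public class Demo {
    public static void main(String[] args) {
        // Step 1: render the date with the session's NLS_DATE_FORMAT (DD/MM/RR).
        LocalDate sysdate = LocalDate.of(2013, 8, 16);
        String rendered = sysdate.format(DateTimeFormatter.ofPattern("dd/MM/yy"));
        System.out.println(rendered); // 16/08/13
        // Step 2: the rewritten query then reads that string positionally as
        // the xs:date layout YYYY-MM-DD, i.e. year 16, month 08, day 13.
        String[] parts = rendered.split("/");
        LocalDate misread = LocalDate.of(Integer.parseInt(parts[0]),
                Integer.parseInt(parts[1]), Integer.parseInt(parts[2]));
        System.out.println(misread); // 0016-08-13
    }
}
```

The value silently lands in the 1st century, which is why the bug can go unnoticed when no error is raised.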
The obvious workaround to this issue is to control how Oracle implicitly converts from string to date/timestamp format :
SQL> alter session set NLS_TIMESTAMP_FORMAT= 'YYYY-MM-DD"T"HH24:MI:SS';
Session altered.
SQL> select x.*
2 from
3 xmltable('/DOC' passing xmltype('<DOC><DT>2013-08-14T15:08:31</DT></DOC>')
4 COLUMNS dt timestamp default systimestamp
5 ) x;
DT
2013-08-14T15:08:31
SQL> select x.*
2 from
3 xmltable('/DOC' passing xmltype('<DOC></DOC>')
4 COLUMNS dt timestamp default systimestamp) x;
COLUMNS dt timestamp default systimestamp) x
ERROR at line 4:
ORA-01861: literal does not match format string
SQL> select x.*
2 from
3 xmltable('/DOC' passing xmltype('<DOC></DOC>')
4 COLUMNS dt timestamp default cast(systimestamp as timestamp)) x;
DT
2013-08-16T12:32:58 -
Bug: generate java objects generates error and does not terminate
During the build of java object generation...the following error occurs (below),
the generation progress dialog does not close,
and the process does not terminate.
Any suggestions?
Thank you.
Albert
java.lang.NullPointerException
at oracle.ideimpl.log.TabbedLogManager.getMsgPage(TabbedLogManager.java:101)
at oracle.toplink.addin.log.POJOGenerationLoggingAdapter.updateTask(POJOGenerationLoggingAdapter.java:42)
at oracle.toplink.addin.mappingcreation.MappingCreatorImpl.fireTaskUpdated(MappingCreatorImpl.java:1049)
at oracle.toplink.addin.mappingcreation.MappingCreatorImpl.generateMappedDescriptorsForTables(MappingCreatorImpl.java:231)
at oracle.toplink.addin.mappingcreation.MappingCreatorImpl.generateMappedDescriptorsForTables(MappingCreatorImpl.java:201)
at oracle.toplink.addin.wizard.jobgeneration.JobWizard$1.construct(JobWizard.java:401)
at oracle.ide.util.SwingWorker$1.run(SwingWorker.java:119)
at java.lang.Thread.run(Thread.java:595)
ADF configuration: Release 3 (10.1.3)
ADF Business Components 10.1.3.36.73
CVS Version Internal to Oracle JDeveloper 10g (client-only)
Java Platform 1.5.0_05
Oracle IDE 10.1.3.36.73
PMD JDeveloper Extension 1.8
Struts Modeler Version 10.1.3.36.73
UML Modelers Version 10.1.3.36.73
Versioning Support 10.1.3.36.73
Other Configuration:
Os Name Microsoft Windows Xp Home Edition
Version 5.1.2600 Service Pack 2 Build 2600
Os Manufacturer Microsoft Corporation
Oracle Database 10g Express Edition Release 10.2.0.1.0 - Production
Schema:
Create Table Example.Questions (
  Qid Number Not Null,
  Testid Number,
  Question Varchar2(4000),
  Answer Char(4) Default 'Zzzz' Not Null
);
Create Table Example.Testgenhistory (
  Testgenhistory_Testid Number Not Null,
  Testid Number Not Null,
  Requestor Varchar2(100),
  Daterequested Date Default Systimestamp
);
Create Table Example.Testmaster (
  Testid Number Not Null,
  Testname Varchar2(100),
  Ownerrequestor Varchar2(100) Default '[email protected]' Not Null,
  Testdatelastmodified Date
);
Create Table Example.Users (
  Requestor Varchar2(100) Not Null,
  Adminauthority Number Default 0 Not Null
);
Alter Table Example.Questions Add Constraint Questions_Pk Primary Key (Qid) Enable;
Alter Table Example.Testgenhistory Add Constraint Testgenhistory_Pk Primary Key (Testgenhistory_Testid) Enable;
Alter Table Example.Testmaster Add Constraint Testmaster_Pk Primary Key (Testid) Enable;
Alter Table Example.Users Add Constraint Users_Pk Primary Key (Requestor) Enable;
Alter Table Example.Questions Add Constraint Questions_Testmaster_Fk1 Foreign Key (Testid) References Myschema.Testmaster (Testid) Enable;
Alter Table Example.Testgenhistory Add Constraint Testgenhistory_Users_Fk1 Foreign Key (Requestor) References Myschema.Users (Requestor) Enable;
Alter Table Example.Testgenhistory Add Constraint Testgenhistory_Testmaster_Fk Foreign Key (Testid) References Myschema.Testmaster (Testid) Enable;
Create Index Example.Testmaster_Index1 On Example.Testmaster (Testid);
Create Sequence Example.Qidseq Increment By 1 Start With 1 Minvalue 1;
Create Sequence Example.Testidseq Increment By 1 Start With 1 Minvalue 1;

Hi Anuj,
Sorry for the reply delay. I didn't get a notification of reply on the post.
I am still able to get the error message dialog, along with the failure for the generation to terminate. (reproduced today 4/22/06 and others seem to be seeing it as well).
I can't identify anything specific in the steps.
Basically...
created a new application with ejb, adf, toplink
choose new project
choose toplink generate java objects
choose an existing validated database connection
select objects (all 4 tables in my little schema)
click through (...next...next...) to finish
locks up and dialog appears
I made a video of the steps, including verifying the database connection. If you want to see it, I'll email it to you.
here it is today (I've applied all updates available up to today):
java.lang.NullPointerException
at oracle.ideimpl.log.TabbedLogManager.getMsgPage(TabbedLogManager.java:101)
at oracle.toplink.addin.log.POJOGenerationLoggingAdapter.updateTask(POJOGenerationLoggingAdapter.java:42)
at oracle.toplink.addin.mappingcreation.MappingCreatorImpl.fireTaskUpdated(MappingCreatorImpl.java:1049)
at oracle.toplink.addin.mappingcreation.MappingCreatorImpl.generateMappedDescriptorsForTables(MappingCreatorImpl.java:231)
at oracle.toplink.addin.mappingcreation.MappingCreatorImpl.generateMappedDescriptorsForTables(MappingCreatorImpl.java:201)
at oracle.toplink.addin.wizard.jobgeneration.JobWizard$1.construct(JobWizard.java:401)
at oracle.ide.util.SwingWorker$1.run(SwingWorker.java:119)
at java.lang.Thread.run(Thread.java:595)