Generics Java 5
Just started using Java 5 and generics with kodo JDO 3.3.3 and in one of my
classes (Army) used:
private Set<Corps> corps = new HashSet<Corps>();
When I tried to make the mapping I got an error dialog that just said "Kodo
Error" (nothing else) and a warning message:
WARNING: Collection/array field "model.units.Army.corps" has an unknown or
unsupported element type; reverting to a BLOB mapping.
It did not create the mapping file.
Hi Dashi,
How are you building the mapping? Do you get a stack trace in any log
files? Finally, does this work if you run mappingtool from the command line?
-Patrick
Dashi wrote:
Just started using Java 5 and generics with kodo JDO 3.3.3 and in one of my
classes (Army) used:
private Set<Corps> corps = new HashSet<Corps>();
When I tried to make the mapping I got an error dialog that just said "Kodo
Error" (nothing else) and a warning message:
WARNING: Collection/array field "model.units.Army.corps" has an unknown or
unsupported element type; reverting to a BLOB mapping.
It did not create the mapping file.
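A common workaround for this warning in JDO tools of that era is to declare the collection's element type explicitly in the .jdo metadata, rather than relying on the tool to read the Java 5 generic signature. A sketch (field and class names taken from the post; exact extension attributes depend on the Kodo version):

```xml
<!-- hypothetical fragment of the .jdo metadata for model.units.Army -->
<class name="Army">
  <field name="corps">
    <!-- spell out the element type so the mapping tool does not
         fall back to a BLOB mapping -->
    <collection element-type="model.units.Corps"/>
  </field>
</class>
```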
Similar Messages
-
Issue about Invoking a BPEL Process with the Generic Java API
I am invoking a BPEL process with the generic Java API and Apache Axis or Axis2,
and it turns up an error as follows:
org.apache.axis2.AxisFault: ORABPEL-08021
Cannot find partner wsdl.
parnterLink "BPELProcess1" is not found in process "BPELProcess1" (revision "v2008_11_17__38943").
Please check the deployment descriptor of the process to find the correct partnerLink name.
at org.apache.axis2.util.Utils.getInboundFaultFromMessageContext(Utils.java:512)
at org.apache.axis2.description.OutInAxisOperationClient.handleResponse(OutInAxisOperation.java:370)
at org.apache.axis2.description.OutInAxisOperationClient.send(OutInAxisOperation.java:416)
at org.apache.axis2.description.OutInAxisOperationClient.executeImpl(OutInAxisOperation.java:228)
at org.apache.axis2.client.OperationClient.execute(OperationClient.java:163)
at org.apache.axis2.client.ServiceClient.sendReceive(ServiceClient.java:548)
at org.apache.axis2.client.ServiceClient.sendReceive(ServiceClient.java:528)
at wf.Test_axis2_callws.main(Test_axis2_callws.java:41)
I can't solve it!
What's the problem?
Has anyone used Java code to invoke a BPEL process successfully? Can you show me an example?
Thanks
chan

Hi,
Check the link below; it may help you solve your problem.
http://www.activevos.com/cec/samples/content/sample-invoke/doc/index.html
Regards -
What do people think about the different Generic Java approaches?
I have seen a lot of different approaches for Generic Java, and when people find problems with each approach the normal response has been: "the other approaches are worse, with such and such a problem; do you have a better way?"
The different approaches I have seen are: (in no particular order)
Please correct me if I am wrong and add other approaches if they are worthy of mention.
1) PolyJ - by MIT
This is a completely different approach than the others, that introduces a new where clause for bounding the types, and involves changing java byte codes in order to meet its goals.
Main comments were that it was not a java way of doing things and far too great a risk making such big changes.
2) Pizza - by Odersky & Wadler
This aims at extending java in more ways than just adding Generics. The generic part of this was replaced by GJ, but with Pizza's ability to use primitives as generic types removed, and much bigger changes allowing GJ to interface with java.
Main comments were that Pizza doesn't work well with java, and many things in Pizza were done in parallel with java, hence were no longer applicable.
3) GJ - by Bracha, Odersky, Stoutamire & Wadler
This creates classes with erased types and bridging methods, and inserts casts when required when going back to normal java code.
Main comments are that type dependent operations such as new, instanceof, casting etc can't be done with parametric types, also it is not a very intuitive approach and it is difficult to work out what code should do.
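To make the GJ limitation concrete, here is a small illustration (not from the original post) of what erasure means in practice: all instantiations share one class, so type-dependent operations on the parameter are impossible.

```java
import java.util.ArrayList;
import java.util.List;

public class ErasureDemo {
    // Under GJ-style erasure there is one class for all instantiations,
    // so type-dependent operations on the type parameter do not compile:
    //   o instanceof T   -- rejected by the compiler
    //   new T()          -- rejected by the compiler
    //   (T) o            -- compiles, but checks nothing at runtime
    public static boolean sameErasedClass() {
        List<String> strings = new ArrayList<String>();
        List<Integer> numbers = new ArrayList<Integer>();
        // Both lists are instances of the single erased ArrayList class.
        return strings.getClass() == numbers.getClass();
    }

    public static void main(String[] args) {
        System.out.println(sameErasedClass()); // prints "true"
    }
}
```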
4) Runtime Generic Information - by Natali & Viroli
Each instance holds information about its Runtime Type.
Main comments from people were that this consumes way too much memory as each instance holds extra information about its type, and the performance would be bad due to checking Type information at runtime that would have been known at compile time.
5) NextGen - by Cartwright & Steele
For each parameterized class an abstract base class with types erased is made and then for each new type a lightweight wrapper class and interface are created re-using code from the base class to keep the code small.
Main comments from people were that this approach isn't as backwards compatible as GJ due to replacing the legacy classes with abstract base classes which can't be instantiated.
6) .NET common runtime - by Kennedy & Syme
This was written for adding Generics to C#, however the spec is also targeted at other languages such as VB.
Main comments from people were that this approach isn't java, hence it is not subject to the restrictions of changing the JVM like java is.
7) Fully Generated Generic Classes - by Agesen, Freund & Mitchell
For each new type a new class is generated by a custom class loader, with all the code duplicated for each different type.
Main comments from people were that the generated code size gets too big, and that it is lacking a base class for integration with legacy code.
8) JSR-14 - by Sun
This is meant to come up with a Generics solution to be used in java. Currently it is heavily based on GJ and suffering from all the same problems as GJ, along with the fact that it is constantly undergoing change and so no one knows what to expect.
See this forum for comments about it.
As if we didn't have enough approaches already, here is yet another one that hopefully has all of the benefits, and none of the problems of the other approaches. It uses information learnt while experimenting with the other approaches. Now when people ask me if I think I have a better approach, I will have somewhere to point them to.
(I will be happy to answer questions concerning this approach).
9) Approach #x - by Phillips
At compile time 1 type is made per generic type with the same name.
e.g. class HashSet<TypeA> extends AbstractSet<TypeA> implements Cloneable, Serializable will be translated to a type:
class HashSet extends AbstractSet implements Cloneable, Serializable
An instance of the class using Object as TypeA can now be created in 2 different ways.
e.g. Set a = new HashSet();
Set<Object> b = new HashSet<Object>();
// a.getClass().equals(b.getClass()) is true
This means that legacy class files don't even need to be re-compiled in order to work with the new classes. This approach is completely backwards compatible.
Inside each type that was created from a generic type there is also some synthetic information.
Information about each of the bounding types is stored in a synthetic field.
Note that each bounding type may be bounded by a class and any number of interfaces, hence a ';' is used to separate bounding types. If there is no class Object is implied.
e.g. class MyClass<TypeA extends Button implements Comparable, Runnable; TypeB> will be translated to a type:
class MyClass {
    public static final Class[][] $GENERIC_DESCRIPTOR = {{Button.class, Comparable.class, Runnable.class}, {Object.class}};
}
This information is used by a Custom Class Loader before generating a new class in order to ensure that the generic types are bounded correctly. It also gets used to establish if this class can be returned instead of a generated class (occurs when the generic types are the same as the bounding types, like for new HashSet<Object> above).
There is another synthetic field of type byte[] that stores bytes in order for the Custom Class Loader to generate the new Type.
There are also static methods corresponding to each method that contain the implementation for each method. These methods take parameters as required to gain access to fields, constructors, other methods, the calling object, the calling object class etc. Fields are passed to get and set values in the calling object. Constructors are passed to create new instances of the calling object. Other methods are passed when super methods are called from within the class. The calling object is almost always passed for non static methods, in order to do things with it. The class is passed when things like instanceof the generated type need to be done.
Also in this class are any non private methods that were there before, using the Base Bounded Types, in order that the class can be used exactly as it was before Generics.
Notes: the time consuming reflection stuff is only done once per class (not per instance) and stored in static fields. The other reflection stuff getting done is very quick in JDK1.4.1 (the same cannot be said of some earlier JDKs).
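The once-per-class caching described here can be sketched as follows (a minimal illustration, not from the post; the field name is hypothetical): the expensive lookup and setAccessible call happen in a static initializer, mirroring the static $0/$1/$2 fields used later in the examples.

```java
import java.lang.reflect.Field;

public class CachedField {
    private int elementCount = 3;

    // The expensive lookup happens once per class, in a static
    // initializer, not once per instance or per call.
    private static final Field ELEMENT_COUNT;
    static {
        try {
            ELEMENT_COUNT = CachedField.class.getDeclaredField("elementCount");
            ELEMENT_COUNT.setAccessible(true);
        } catch (NoSuchFieldException ex) {
            throw new Error(ex);
        }
    }

    public static int read(CachedField target) throws IllegalAccessException {
        return ELEMENT_COUNT.getInt(target); // cheap per-call reflection
    }

    public static void main(String[] args) throws Exception {
        System.out.println(read(new CachedField())); // prints 3
    }
}
```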
Also these static methods can call each other in many circumstances (for example when the method getting called is private, final or static).
As well as the ClassLoader and other classes required by it there is a Reflection class. This class is used to do things that are known to be safe (assuming the compiler generated the classes correctly) without throwing any exceptions.
Here is a cut down version of the Reflection class:
public final class Reflection {
    public static final Field getDeclaredField(Class aClass, String aName) {
        try {
            Field field = aClass.getDeclaredField(aName);
            field.setAccessible(true);
            return field;
        } catch (Exception ex) {
            throw new Error(ex);
        }
    }

    public static final Object get(Field aField, Object anObject) {
        try {
            return aField.get(anObject);
        } catch (Exception ex) {
            throw new Error(ex);
        }
    }

    public static final void set(Field aField, Object anObject, Object aValue) {
        try {
            aField.set(anObject, aValue);
        } catch (Exception ex) {
            throw new Error(ex);
        }
    }

    public static final int getInt(Field aField, Object anObject) {
        try {
            return aField.getInt(anObject);
        } catch (Exception ex) {
            throw new Error(ex);
        }
    }

    public static final void setInt(Field aField, Object anObject, int aValue) {
        try {
            aField.setInt(anObject, aValue);
        } catch (Exception ex) {
            throw new Error(ex);
        }
    }
}

Last but not least, at Runtime one very lightweight wrapper class per type is created as required by the custom class loader. Basically the class loader uses the Generic Bytes as the template replacing the erased types with the new types. This can be even faster than loading a normal class file from disk, and creating it.
Each of these classes has any non private methods that were there before, making calls to the generating class to perform their work. The reason they don't have any real code themselves is because that would lead to code bloat, however for very small methods they can keep their code inside their wrapper without affecting functionality.
My final example assumes the following class name mangling convention:
* A<component type> - Array
* b - byte
* c - char
* C<class name length><class name> - Class
* d - double
* f - float
* i - int
* l - long
* z - boolean
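The post only gives the naming scheme, not an implementation; a hypothetical helper following the convention above might look like this (note the post's list does not cover short or void, and neither does this sketch):

```java
public class NameMangler {
    // Hypothetical helper for the mangling convention listed above:
    // primitives get a single letter, arrays get an "A" prefix, and
    // classes get "C" + name length + name with '.' replaced by '_'.
    public static String mangle(Class<?> type) {
        if (type.isArray()) {
            return "A" + mangle(type.getComponentType());
        }
        if (type == byte.class)    return "b";
        if (type == char.class)    return "c";
        if (type == double.class)  return "d";
        if (type == float.class)   return "f";
        if (type == int.class)     return "i";
        if (type == long.class)    return "l";
        if (type == boolean.class) return "z";
        String name = type.getName().replace('.', '_');
        return "C" + name.length() + name;
    }

    public static void main(String[] args) {
        System.out.println(mangle(String.class)); // C16java_lang_String
        System.out.println(mangle(int[].class));  // Ai
    }
}
```

This reproduces the names used later in the post, e.g. Vector$$C16java_lang_String for new Vector<String>.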
Final Example: (very cut down version of Vector)
public class Vector<TypeA> extends AbstractList<TypeA> implements RandomAccess, Cloneable, Serializable {
    protected Object[] elementData;
    protected int elementCount;
    protected int capacityIncrement;

    public Vector<TypeA>(int anInitialCapacity, int aCapacityIncrement) {
        if (anInitialCapacity < 0) {
            throw new IllegalArgumentException("Illegal Capacity: " + anInitialCapacity);
        }
        elementData = new Object[anInitialCapacity];
        capacityIncrement = aCapacityIncrement;
    }

    public synchronized void setElementAt(TypeA anObject, int anIndex) {
        if (anIndex >= elementCount) {
            throw new ArrayIndexOutOfBoundsException(anIndex + " >= " + elementCount);
        }
        elementData[anIndex] = anObject;
    }
}

would get translated as:
public class Vector extends AbstractList implements RandomAccess, Cloneable, Serializable {
    public static final Class[][] $GENERIC_DESCRIPTOR = {{Object.class}};
    public static final byte[] $GENERIC_BYTES = {/*Generic Bytes Go Here*/};
    protected Object[] elementData;
    protected int elementCount;
    protected int capacityIncrement;
    private static final Field $0 = Reflection.getDeclaredField(Vector.class, "elementData"),
                               $1 = Reflection.getDeclaredField(Vector.class, "elementCount"),
                               $2 = Reflection.getDeclaredField(Vector.class, "capacityIncrement");

    static void $3(int _0, Field _1, Object _2, Field _3, int _4) {
        if (_0 < 0) {
            throw new IllegalArgumentException("Illegal Capacity: " + _0);
        }
        Reflection.set(_1, _2, new Object[_0]);
        Reflection.setInt(_3, _2, _4);
    }

    static void $4(int _0, Field _1, Object _2, Field _3, Object _4) {
        if (_0 >= Reflection.getInt(_1, _2)) {
            throw new ArrayIndexOutOfBoundsException(_0 + " >= " + Reflection.getInt(_1, _2));
        }
        ((Object[])Reflection.get(_3, _2))[_0] = _4;
    }

    public Vector(int anInitialCapacity, int aCapacityIncrement) {
        $3(anInitialCapacity, $0, this, $2, aCapacityIncrement);
    }

    public synchronized void setElementAt(Object anObject, int anIndex) {
        $4(anIndex, $1, this, $0, anObject);
    }
}

and new Vector<String> would get generated as:
public class Vector$$C16java_lang_String extends AbstractList$$C16java_lang_String implements RandomAccess, Cloneable, Serializable {
    protected Object[] elementData;
    protected int elementCount;
    protected int capacityIncrement;
    private static final Field $0 = Reflection.getDeclaredField(Vector$$C16java_lang_String.class, "elementData"),
                               $1 = Reflection.getDeclaredField(Vector$$C16java_lang_String.class, "elementCount"),
                               $2 = Reflection.getDeclaredField(Vector$$C16java_lang_String.class, "capacityIncrement");

    public Vector$$C16java_lang_String(int anInitialCapacity, int aCapacityIncrement) {
        Vector.$3(anInitialCapacity, $0, this, $2, aCapacityIncrement);
    }

    public synchronized void setElementAt(String anObject, int anIndex) {
        Vector.$4(anIndex, $1, this, $0, anObject);
    }
}

Comparisons with other approaches:
Compared with PolyJ this is a very java way of doing things, and furthermore it requires no changes to the JVM or the byte codes.
Compared with Pizza this works very well with java and has been designed using the latest java technologies.
Compared with GJ all type dependent operations can be done, and it is very intuitive, code does exactly the same thing it would have done if it was written by hand.
Compared with Runtime Generic Information no extra information is stored in each instance and hence no extra runtime checks need to get done.
Compared with NextGen this approach is completely backwards compatible. NextGen looks like it was trying to achieve the same goals, but aside from non backwards compatibility also suffered from the fact that Vector<String> didn't extend AbstractList<String> causing other minor problems. Also this approach doesn't create 2 types per new types like NextGen does (although this wasn't a big deal anyway). All that said NextGen was in my opinion a much better approach than GJ and most of the others.
Compared to .NET common runtime this is java and doesn't require changes to the JVM.
Compared to Fully Generated Generic Classes the classes generated by this approach are very lightweight wrappers, not full blown classes and also it does have a base class making integration with legacy code simple. It should be noted that the functionality of the Fully Generated Generic Classes is the same as this approach, that can't be said for the other approaches.
Compared with JSR-14, this approach doesn't suffer from GJ's problems, and it should be clear what to expect from this approach. Hopefully JSR-14 can be changed before it is too late.

(a) How you intend generic methods to be translated.
Given that Vector and Vector<Object> are unrelated types,
what would that type be represented as in the byte code of
the method?
In my approach Vector and Vector<Object> are related types. In fact the byte code signature of the existing method is exactly the same as it was in the legacy code using Vector.
To re-emphasize what I had said when explaining my approach:
System.out.println(Vector.class == Vector<Object>.class); // displays true
System.out.println(Vector.class == Vector<String>.class); // displays false
Vector vector1 = new Vector<Object>(); // legal
Vector<Object> vector2 = new Vector(); // legal
// Vector vector3 = new Vector<String>(); // illegal
// Vector<String> vector4 = new Vector(); // illegal
Vector<String> vector5 = new Vector<String>(); // legal
You must also handle the case where the type parameter is itself a parameterized type in which the type parameter is not statically bound to a ground instantiation.
This is also very straightforward (let me know if I have misunderstood you):
(translation of Vector given in my initial description)
public class SampleClass<TypeA> {
    public static void main(String[] args) {
        System.out.println(new Vector<Vector<TypeA>>(10, 10));
    }
}

would get translated as:
public class SampleClass {
    public static final Class[][] $GENERIC_DESCRIPTOR = {{Object.class}};
    public static final byte[] $GENERIC_BYTES = {/*Generic Bytes Go Here*/};
    private static final Constructor $0 = Reflection.getDeclaredConstructor(Vector$$C16java_util_Vector.class, new Class[] {int.class, int.class});

    static void $1(Constructor _0, int _1, int _2) {
        try {
            System.out.println(Reflection.newInstance(_0, new Object[] {new Integer(_1), new Integer(_2)}));
        } catch (Exception ex) {
            throw (RuntimeException)ex;
        }
    }

    public static void main(String[] args) {
        $1($0, 10, 10);
    }
}

and SampleClass<String> would get generated as:
public class SampleClass$$C16java_lang_String {
    private static final Constructor $0 = Reflection.getConstructor(Vector$$C37java_util_Vector$$C16java_lang_String.class, new Class[] {int.class, int.class});

    public static void main(String[] args) {
        SampleClass.$1($0, 10, 10);
    }
}
Also describe the implementation strategy for when these methods are public or protected (i.e. virtual).
As I said in my initial description, for non-final, non-static, non-private method invocations a Method may be passed into the implementing synthetic method as a parameter.
Note: the following main method will display 'in B'.
class A {
    public void foo() {
        System.out.println("in A");
    }
}

class B extends A {
    public void foo() {
        System.out.println("in B");
    }
}

public class QuickTest {
    public static void main(String[] args) {
        try {
            A.class.getMethod("foo", null).invoke(new B(), null);
        } catch (Exception ex) {}
    }
}

This is very important as foo() may be overridden by a subclass, as it is here. By passing a Method to the synthetic implementation this guarantees that covariance, invariance and contravariance all work exactly the same way as in java. This is a fundamental problem with many other approaches.
(b) The runtime overhead associated with your translation
As we don't have a working solution to compare this to, performance comments are hard to state, but I hope this helps anyway.
The Class Load time is affected in 4 ways:
i) All the Generic Bytes exist in the Base Class, hence they don't need to be read from storage.
ii) The custom class loader: time to parse the name and failed finds before it finally gets to define the class.
iii) The generation of the generic bytes to parametric bytes (basically involves changing bytes from the Constant Pool worked out from a new Parametric type; Utf8, Class and the new Parametric Constant types may all be affected).
iv) Time to do the static Reflection stuff (this is the main source of the overhead).
Basically this once-per-class overhead is nothing to be concerned with, and Sun could always optimize this part further.
The normal Runtime overhead (once Classes have been loaded) is affected mainly by reflection: On older JDKs the reflection was a lot slower, and so might have made a noticeable impact. On newer JDKs (since 1.4 I think), the reflection performance has been significantly improved. All the time consuming reflection is done once per class (stored in static fields). The normal reflection is very quick (almost identical to what is getting done without reflection). As the wrappers simply include a single method call to another method, these can be in-lined and hence made irrelevant. Furthermore it is not too difficult to make a parameter that would include small methods in the wrapper classes, as this does not affect functionality in the slightest, however in my testing I have found this to be unnecessary.
(c) The space overhead (per instantiation)
There are very small wrapper classes (one per new Type) that simply contain all non private methods, with single method calls to the implementing synthetic method. They also include any fields that were in the original class along with other synthetic fields used to store reflected information, so that the slow reflection only gets done once per new Type.
(d) The per-instance space overhead
None.
(e) Evidence that the proposed translation is sound and well-defined for all relevant cases (see below)
Hope this is enough; if not, let me know what extra proof you need.
(f) Evidence for backward compatibility
(For example, how does an old class file that passes a Vector
to some method handle the case when the method receives a Vector<T>
where T is a type parameter? In your translation these types are unrelated.)
As explained above, in my approach these are only unrelated for T != Object; in the legacy case T == Object, hence legacy code passing in Vector is exactly the same as passing in Vector<Object>.
(g) Evidence for forward compatibility
(How, exactly, do class files that are compiled with a generics compiler run on an old VM?)
They run exactly the same way; the byte codes from this approach are all legal java, and all legal java is also legal in this approach. In order to take advantage of the Generics the Custom Class Loader would need to be used, or else one would get ClassNotFoundExceptions, the same way one would trying to use the Collections on an old VM without the Collections there. The Custom Class Loader even works on older VMs (note it may run somewhat slower on older VMs).
(h) A viable implementation strategy
Type specific instantiations happen at Class Load time: when the Custom Class Loader gets asked for a new Class, it generates it.
The type specific instantiations are never shipped as they never get persisted. If you really wanted to save them all you need to do is save them with the same name (with the $$ and _'s etc), then the class loader would find them instead of generating them. There is little to be gained by doing this and the only reason I can think of for doing such a thing would be if there was some reason why the target VM couldn't use the Custom Class Loader (the Reflection class would still need to be sent as well, but that is nothing special). Basically they are always generated at Runtime unless a Class with the same name already exists in which case it would be used.
The $GENERIC_DESCRIPTOR and $GENERIC_BYTES from the base class, along with the new Type name, are all that is required to generate the classes at runtime. However many other approaches can achieve the same thing for the generation, and approaches such as NextGen's template approach may be better. As this generation is only done once per class I didn't put much research into this area. The way it currently works is that the $GENERIC_DESCRIPTOR is basically used to verify that a malicious class file is not trying to create a non Type Safe Type, ie new Sample<Object>() when the class definition said class Sample<TypeA extends Button>. The $GENERIC_BYTES basically correspond to the normal bytes of a wrapper class file, except that in the constant pool it has some constants of a new Parametric Constant type that get replaced at class load time. These parametric constants (along with possibly Utf8 and Class constants) are replaced by the Classes at the end of the new type name; it is a little more complex than that, but you probably get the general idea.
These fine implementation details don't affect the approach so much anyway, as they basically come down to class load time performance. Much of the information in the $GENERIC_BYTES could have been worked out by reflection on the base type, however at least for now simply storing the bytes is a lot easier.
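The loading strategy described above can be sketched with a small custom ClassLoader (a hypothetical illustration, not the post's actual code; the specialize step is a placeholder, since the post leaves the constant-pool rewriting details open, and findClass is widened to public for easy testing):

```java
import java.lang.reflect.Field;

public class GenericClassLoader extends ClassLoader {
    // On a request for a mangled name like Vector$$C16java_lang_String:
    // load the base class, read its $GENERIC_BYTES template, specialize
    // the constant pool for the type arguments, and define the new class.
    @Override
    public Class<?> findClass(String name) throws ClassNotFoundException {
        int split = name.indexOf("$$");
        if (split < 0) {
            throw new ClassNotFoundException(name); // not a generated type
        }
        try {
            Class<?> base = loadClass(name.substring(0, split));
            Field bytesField = base.getField("$GENERIC_BYTES");
            byte[] template = (byte[]) bytesField.get(null);
            byte[] specialized = specialize(template, name);
            return defineClass(name, specialized, 0, specialized.length);
        } catch (Exception ex) {
            throw new ClassNotFoundException(name, ex);
        }
    }

    private byte[] specialize(byte[] template, String mangledName) {
        // Placeholder: would rewrite the Parametric/Utf8/Class
        // constant-pool entries for the requested type arguments.
        return template;
    }
}
```

Names without the "$$" marker fall through to the normal delegation model, which is what keeps legacy class files working unchanged.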
Note: I have made a small syntax change to the requested class:
public T(X datum) --> public T<X>(X datum)
class T<X> {
    private X datum;

    public T<X>(X datum) {
        this.datum = datum;
    }

    public T<T<X>> box() {
        return new T<T<X>>(this);
    }

    public String toString() {
        return datum.toString();
    }

    public static void main(String[] args) {
        T<String> t = new T<String>("boo!");
        System.out.println(t.box().box());
    }
}

would get translated as:
class T {
    public static final Class[][] $GENERIC_DESCRIPTOR = {{Object.class}};
    public static final byte[] $GENERIC_BYTES = {/*Generic Bytes Go Here*/};
    private Object datum;
    private static final Field $0 = Reflection.getDeclaredField(T.class, "datum");
    private static final Constructor $1 = Reflection.getDeclaredConstructor(T$$C1T.class, new Class[] {T.class});

    static void $2(Field _0, Object _1, Object _2) {
        Reflection.set(_0, _1, _2);
    }

    static Object $3(Constructor _0, Object _1) {
        try {
            return Reflection.newInstance(_0, new Object[] {_1});
        } catch (Exception ex) {
            throw (RuntimeException)ex;
        }
    }

    static String $4(Field _0, Object _1) {
        return Reflection.get(_0, _1).toString();
    }

    static void $5() {
        T$$C16java_lang_String t = new T$$C16java_lang_String("boo!");
        System.out.println(t.box().box());
    }

    public T(Object datum) {
        $2($0, this, datum);
    }

    public T$$C1T box() {
        return (T$$C1T)$3($1, this);
    }

    public String toString() {
        return $4($0, this);
    }

    public static void main(String[] args) {
        $5();
    }
}

As the generic bytes aren't very meaningful and by no means a requirement of this approach (NextGen's template method for generation may work just as well), here are the generated classes with some unused code commented out instead:
class T$$C28T$$C22T$$C16java_lang_String {
    private T$$C22T$$C16java_lang_String datum;
    private static final Field $0 = Reflection.getDeclaredField(T$$C28T$$C22T$$C16java_lang_String.class, "datum");
    // private static final Constructor $1 = Reflection.getDeclaredConstructor(T$$C34T$$C28T$$C22T$$C16java_lang_String.class, new Class[] {T$$C28T$$C22T$$C16java_lang_String.class});

    public T$$C28T$$C22T$$C16java_lang_String(T$$C22T$$C16java_lang_String datum) {
        T.$2($0, this, datum);
    }

    // public T$$C34T$$C28T$$C22T$$C16java_lang_String box() {
    //     return (T$$C34T$$C28T$$C22T$$C16java_lang_String)T.$3($1, this);
    // }

    public String toString() {
        return T.$4($0, this);
    }

    public static void main(String[] args) {
        T.$5();
    }
}

class T$$C22T$$C16java_lang_String {
    private T$$C16java_lang_String datum;
    private static final Field $0 = Reflection.getDeclaredField(T$$C22T$$C16java_lang_String.class, "datum");
    private static final Constructor $1 = Reflection.getDeclaredConstructor(T$$C28T$$C22T$$C16java_lang_String.class, new Class[] {T$$C22T$$C16java_lang_String.class});

    public T$$C22T$$C16java_lang_String(T$$C16java_lang_String datum) {
        T.$2($0, this, datum);
    }

    public T$$C28T$$C22T$$C16java_lang_String box() {
        return (T$$C28T$$C22T$$C16java_lang_String)T.$3($1, this);
    }

    public String toString() {
        return T.$4($0, this);
    }

    public static void main(String[] args) {
        T.$5();
    }
}

class T$$C1T {
    private T datum;
    private static final Field $0 = Reflection.getDeclaredField(T$$C1T.class, "datum");
    // private static final Constructor $1 = Reflection.getDeclaredConstructor(T$$C6T$$C1T.class, new Class[] {T$$C1T.class});

    public T$$C1T(T datum) {
        T.$2($0, this, datum);
    }

    // public T$$C6T$$C1T box() {
    //     return (T$$C6T$$C1T)T.$3($1, this);
    // }

    public String toString() {
        return T.$4($0, this);
    }

    public static void main(String[] args) {
        T.$5();
    }
}

class T$$C16java_lang_String {
    private String datum;
    private static final Field $0 = Reflection.getDeclaredField(T$$C16java_lang_String.class, "datum");
    private static final Constructor $1 = Reflection.getDeclaredConstructor(T$$C22T$$C16java_lang_String.class, new Class[] {T$$C16java_lang_String.class});

    public T$$C16java_lang_String(String datum) {
        T.$2($0, this, datum);
    }

    public T$$C22T$$C16java_lang_String box() {
        return (T$$C22T$$C16java_lang_String)T.$3($1, this);
    }

    public String toString() {
        return T.$4($0, this);
    }

    public static void main(String[] args) {
        T.$5();
    }
}

The methods from the Reflection class used in these answers not given in my initial description are:
public static final Object newInstance(Constructor aConstructor, Object[] anArgsArray) throws Exception {
    try {
        return aConstructor.newInstance(anArgsArray);
    } catch (InvocationTargetException ex) {
        Throwable cause = ex.getCause();
        if (cause instanceof Exception) {
            throw (Exception)cause;
        }
        throw new Error(cause);
    } catch (Exception ex) {
        throw new Error(ex);
    }
}

public static final Constructor getDeclaredConstructor(Class aClass, Class[] aParameterTypesArray) {
    try {
        Constructor constructor = aClass.getDeclaredConstructor(aParameterTypesArray);
        constructor.setAccessible(true);
        return constructor;
    } catch (Exception ex) {
        throw new Error(ex);
    }
}
-
Generic Java for Oracle Data Integrator 10g (10.1.3.5.0)
Hi,
How can I install the Oracle Data Integrator Client on my MacBook? The generic version should be the right one.
But there are versions for Windows, Linux, Solaris, HPUX and AIX... but no generic Java version.
The version 10.1.3.4.0 has a generic version.
How can I fix this issue? I tried the Linux version but it doesn't install.
Thanks.

Hi,
Install the 10.1.3.4 and do a "manual install" after that. That means copying the oracledi directory over the old oracledi directory.
There are detailed instructions in the install manual...
Have you already tried this?
Cezar Santos
[www.odiexperts.com] -
Generic Java class for working with Context Nodes
Hi, all
I would like to write generic Java class for working with Context Nodes:
populating node,
add element to node,
update node element,
remove node element
Any ideas how I can do it?

Hi, Armin
Thanks for your answer.
I have many nodes with the same structure,but different data.
I don't want to work with each one of them individually.
This is the main reason.
Regards,
Michael
Any ideas? -
Hi, I am trying generic java downloaded from
http://developer.java.sun.com/developer/earlyAccess/adding_generics/
it works fine with "Test.java", an example included in the package, but it gives "cannot resolve symbol" error message when the source code contains something like "packagename.classname" in the type. For example:
List<network.Node> nodeList = new ArrayList<network.Node>();
it'll complain: cannot resolve symbol Node.
The reason to put a package name before the class name is because there are classes in different packages with the same name.
Any help with this problem would be appreciated.

This is probably a dumb question, but are you using the fully qualified package name? If not, you must.
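For illustration, here is the fully qualified usage in question, as it should compile once the prototype accepts it (java.awt.Point stands in for the post's network.Node, so the sketch is self-contained):

```java
import java.util.ArrayList;

public class QualifiedTypeArgs {
    // A qualified class name used inside a type argument, exactly as the
    // post does with List<network.Node>; qualifying the name is how you
    // disambiguate same-named classes from different packages.
    public static int count() {
        java.util.List<java.awt.Point> points =
            new ArrayList<java.awt.Point>();
        points.add(new java.awt.Point(1, 2));
        return points.size();
    }

    public static void main(String[] args) {
        System.out.println(count()); // prints 1
    }
}
```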
Chuck -
Hello,
I'm currently using Java 5.0 (especially for the Generics part) on a new Java/J2EE project, but I'm having a strange issue with code that worked previously in a Java 1.4 project.
Below is an overriding of the toString() method provided by the Object class which allow me to view nicely in debug (dev. mode) the contents of my Transfer Objects (all the TO's must extend this ATO abstract class).
Previously this code displayed me something like:
[field1 => value1, field2 => value2] ... for a TO (sort of "Javabean") having e.g. two String fields with values initialized to "value1" (resp. "value2").
But unfortunately, this does not (or seems not to) work anymore, giving a display such as:
[field1 => null, field2 => null]
I tried to debug, and the problem is that the call fieldValue = field.get(this); returns null while it should return the actual value of the field.
I think it is strongly related to Generics, but at the moment I cannot find out how/why it does not work.
May someone help...? Thanks.
public abstract class ATO {
    // Reflection for field value display
    public String toString() {
        StringBuffer sb = new StringBuffer("[");
        MessageFormat mf = new MessageFormat("{0} => {1}, ");
        Field[] fields = this.getClass().getDeclaredFields();
        for (int i = 0; i < fields.length; i++) {
            Field field = (Field) fields[i];
            String fieldName = field.getName();
            Object fieldValue = null;
            try {
                fieldValue = field.get(this);
            } catch (IllegalArgumentException e) {
            } catch (IllegalAccessException e) {
            }
            mf.format(new Object[] { fieldName, fieldValue }, sb, null);
        }
        if (sb.length() > 1) {
            sb.setLength(sb.length() - 2);
        }
        sb.append("]");
        return sb.toString();
    }
}
ejp wrote:
Field field = (Field) fields[i];
This cast is unnecessary.
Indeed, I hadn't noticed that. Fixed.
} catch (IllegalArgumentException e) {
} catch (IllegalAccessException e) {
}
Either the field value really is null or you are getting one of these exceptions, which you are ignoring. Never write empty catch blocks.
That's true, I missed something. Fixed, with some code to log any exceptions.
Thanks for your answer. -
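A likely cause in this thread is an IllegalAccessException swallowed by those empty catch blocks: code in the abstract ATO class cannot read a subclass's private fields via reflection unless setAccessible(true) is called first, so field.get(this) never succeeds and the value stays null. A minimal sketch of the fix, assuming a hypothetical Sample class (names ReflectiveToString and Sample are invented for illustration):

```java
import java.lang.reflect.Field;

public class ReflectiveToString {
    // Builds "[name => value, ...]" from an object's declared fields.
    // setAccessible(true) is the key step: without it, field.get(...) throws
    // IllegalAccessException on private fields, and an empty catch block
    // silently leaves the value null -- the symptom described above.
    static String describe(Object o) {
        StringBuilder sb = new StringBuilder("[");
        Field[] fields = o.getClass().getDeclaredFields();
        for (Field field : fields) {
            field.setAccessible(true);
            Object value;
            try {
                value = field.get(o);
            } catch (IllegalAccessException e) {
                value = "<inaccessible: " + e + ">"; // log instead of swallowing
            }
            sb.append(field.getName()).append(" => ").append(value).append(", ");
        }
        if (sb.length() > 1) {
            sb.setLength(sb.length() - 2);
        }
        return sb.append("]").toString();
    }

    // Hypothetical stand-in for one of the poster's Transfer Objects.
    static class Sample {
        private String field1 = "value1";
        private String field2 = "value2";
    }

    public static void main(String[] args) {
        System.out.println(describe(new Sample()));
    }
}
```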
Generic Java client to invoke web service
Hi,
1.We have a web service HelloWorld registered in UDDI registry (private) of weblogic.
2.We have written a client that returns the WSDL URL and it is inturn passed to another piece of code where it invokes the the remote method of webservice.
In the above case, our problem is that the client we wrote is specific to that web service (a HelloWorld port, with everything specific to the HelloWorld web service).
But our intention is to write a client generic enough to access any web service, given only the service name and business name.
The UDDI publishing and querying work well; our code returns the WSDL URL. After this we have to write generic code to invoke the remote method on the web service.
I would appreciate any solution.
Thanks
VR
Hi,
The WSDL is available at
http://www.webservicex.net/stockquote.asmx?WSDL.
This WSDL has multiple ports and bindings. How do I know which port/binding I should use?
Hmm, it's hard to give a general answer. Each <port> element is linked to a single <binding> element. So, if you want to handle a certain binding, you have to access the corresponding port. Taking your WSDL example: if you want to access the web service over, let's say, HTTP/GET, you must access the <port name="StockQuoteHttpGet"> element. If you want to send a SOAP message, you must access the <port name="StockQuoteSoap"> element, and so on.
So depending on what type of message you want/have to send to the server, you have to access the corresponding <port> element.
In my code I am fetching the list of ports using service.getPorts(). Then I am iterating through the list and fetching the last port available in the list. I am using this port further in my program. Is this the right way? Can I use any port among the ports available in the service?
Generally speaking, no! As I explained above, the actual <port> element of interest depends on the type of service you want/have to call. Very seldom will the web service provider offer an HTTP version of a certain operation. Mostly you have to hit a SOAP endpoint to get your response.
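The rule described above can be sketched as a small selection helper: prefer the port whose binding uses the SOAP/HTTP transport instead of blindly taking the last entry from getPorts(). This is an illustrative standalone sketch, not the JAX-RPC API itself; the class name PortSelector is invented, and the map of port names to binding transport URIs is assumed to have been read from the WSDL beforehand:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class PortSelector {
    // Standard transport URI for SOAP over HTTP bindings in WSDL 1.1.
    static final String SOAP_TRANSPORT = "http://schemas.xmlsoap.org/soap/http";

    // portTransports maps each port name to the transport URI of its binding.
    // Returns the first port bound to SOAP/HTTP, or null if none exists,
    // rather than whichever port happens to come last in iteration order.
    static String chooseSoapPort(Map<String, String> portTransports) {
        for (Map.Entry<String, String> e : portTransports.entrySet()) {
            if (SOAP_TRANSPORT.equals(e.getValue())) {
                return e.getKey();
            }
        }
        return null;
    }

    public static void main(String[] args) {
        Map<String, String> ports = new LinkedHashMap<String, String>();
        ports.put("StockQuoteSoap", SOAP_TRANSPORT);
        ports.put("StockQuoteHttpGet", "http://schemas.xmlsoap.org/wsdl/http/");
        ports.put("StockQuoteHttpPost", "http://schemas.xmlsoap.org/wsdl/http/");
        System.out.println(chooseSoapPort(ports));
    }
}
```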
When I use the above approach I have a problem with the output format. When executing the web service client, the output format received and the output format mentioned in the WSDL for that portType are different. I mean, I get an output format defined by some other portType than the portType my program is bound to.
When parsing the above-mentioned WSDL, my program binds to the last port, i.e. StockQuoteHttpPost, and identifies the output format as the "Body" tag.
But when I print the output from the web service it is different. It looks like this:
OK, that looks strange indeed. I can't imagine that the web server would respond with a SOAP message to your HTTP/POST request. However, technically it is possible, because everything defined in the corresponding WSDL section says the response will contain an xsd:string. "Body" is the name of the part, used to identify it, and does not need to occur in the response. So the response is probably OK...
<wsdl:message name="GetQuoteHttpPostOut">
<wsdl:part name="Body" element="tns:string"/>
</wsdl:message>
Anyway, I would have expected a list of key-value pairs instead:
Symbol=SBYN
Last=3.17
Date=6/22/2005
Time=3:59pm
Change=0.00
Open=N/A
High=N/A
Low=N/A
Volume=0
MktCap=272.0M
PreviousClose=3.17
PercentageChange=0.00%
AnnRange=2.53 - 4.20
Earns=0.04
P-E=79.25
Name=SEEBEYOND TECH CO
cheerz, r. -
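If the HTTP/POST port really does return the key-value listing above, it can be split into a map with a few lines. A hedged sketch (the class name QuoteParser is invented; the exact response format depends on the binding actually hit):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class QuoteParser {
    // Splits "Key=Value" lines into an ordered map. Illustrative only --
    // lines without an '=' (or starting with one) are skipped.
    static Map<String, String> parse(String body) {
        Map<String, String> result = new LinkedHashMap<String, String>();
        for (String line : body.split("\n")) {
            int eq = line.indexOf('=');
            if (eq > 0) {
                result.put(line.substring(0, eq).trim(), line.substring(eq + 1).trim());
            }
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, String> quote = parse("Symbol=SBYN\nLast=3.17\nChange=0.00");
        System.out.println(quote.get("Last"));
    }
}
```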
LinkedList Integer [] (Generic Java)
Hi guys,
I read that we need to do this: LinkedList<Integer> test = new LinkedList<Integer>(); in order to specify the element type. What if we have an array of linked lists?
My init code is something like this: LinkedList[] list;
int size, max_length, total_elements;
// Constructor
public SortedList(int n) {
// Set Total Elements
total_elements = 0;
// Determine The Size Of The Linked List Array
size = (int) Math.sqrt(n);
// Determine The Max Length Of Individual Linked List
max_length = (int) n/size;
// Create The Linked List
list = new LinkedList[size];
// Init The Linked List
for(int i = 0; i < size; i++) {
list[i] = new LinkedList();
}
Sorry to trouble you.
My code now is:
// define your data structures here
LinkedList<Integer>[] list;
int size, max_length, total_elements;
// Constructor
public SortedList(int n) {
// Set Total Elements
total_elements = 0;
// Determine The Size Of The Linked List Array
size = (int) Math.sqrt(n);
// Determine The Max Length Of Individual Linked List
max_length = (int) n/size;
// Create The Linked List
list = new LinkedList[size];
// Init The Linked List
for(int i = 0; i < size; i++) {
list[i] = new LinkedList<Integer>();
}
But I still get that error; after compiling with -Xlint:unchecked I still get the warning, as printed below:
illegal start of expression
list = new LinkedList[size];
---------^
1 error
any ideas? -
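An "illegal start of expression" at that line usually means an unbalanced brace earlier in the file (the constructor shown above is missing its closing braces), while the -Xlint:unchecked warning comes from new LinkedList[size] itself: Java forbids new LinkedList<Integer>[size] outright, so a raw array plus an unchecked cast is the standard idiom. Two common workarounds, sketched under the assumption that a fixed-size table of buckets is wanted (the class name SortedListBuckets is invented):

```java
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

public class SortedListBuckets {
    // Option 1: raw array plus an unchecked cast -- the standard idiom,
    // safe here because only LinkedList<Integer> instances are stored.
    @SuppressWarnings("unchecked")
    static LinkedList<Integer>[] makeBucketsArray(int size) {
        LinkedList<Integer>[] buckets = (LinkedList<Integer>[]) new LinkedList[size];
        for (int i = 0; i < size; i++) {
            buckets[i] = new LinkedList<Integer>();
        }
        return buckets;
    }

    // Option 2: avoid arrays of generic types entirely with a List of lists.
    static List<LinkedList<Integer>> makeBucketsList(int size) {
        List<LinkedList<Integer>> buckets = new ArrayList<LinkedList<Integer>>();
        for (int i = 0; i < size; i++) {
            buckets.add(new LinkedList<Integer>());
        }
        return buckets;
    }

    public static void main(String[] args) {
        System.out.println(makeBucketsArray(4).length);
        System.out.println(makeBucketsList(4).size());
    }
}
```

Option 2 is generally preferred: it sidesteps the array/generics mismatch instead of suppressing it.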
Error while Shell executing from Oracle Using java proc
Hi All,
After a long time I'm here again.
I'm facing one strange problem.
SQL>
SQL> select * from v$version;
BANNER
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
PL/SQL Release 11.2.0.1.0 - Production
CORE 11.2.0.1.0 Production
TNS for Linux: Version 11.2.0.1.0 - Production
NLSRTL Version 11.2.0.1.0 - Production
Elapsed: 00:00:00.01
SQL>
SQL>
And the purpose of this procedure is to execute shell commands from a PL/SQL application.
It seems that this procedure is able to execute shell commands like *'pwd'*.
But, when it comes to other commands like *'ls -lrt'* or *'sqlldr'* it throws error.
Kindly find the following details -
SQL> declare
2 err_cd number;
3 err_desc varchar2(500);
4 begin
5 dbms_java.set_output(1000000);
6 host(p_command => '/a/mis/Sqlloader_script/loader_command/ST_det.sh');
7 dbms_output.put_line('Successfully Executed SQL Loader Command');
8 exception
9 when others then
10 err_cd := 1;
11 err_desc := substr(sqlerrm,1,500);
12 dbms_output.put_line(err_desc);
13 end;
14 /
Process err :/a/mis/Sqlloader_script/loader_command/ST_det.sh: line 19: sqlldr: No such file or directory
Successfully Executed SQL Loader Command
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.11
SQL>
SQL> declare
2 err_cd number;
3 err_desc varchar2(500);
4 begin
5 dbms_java.set_output(1000000);
6 host(p_command => 'pwd');
7 dbms_output.put_line('Successfully Executed SQL Loader Command');
8 exception
9 when others then
10 err_cd := 1;
11 err_desc := substr(sqlerrm,1,500);
12 dbms_output.put_line(err_desc);
13 end;
14 /
Process out :/home/oracle/app/oracle/product/11.2.0/dbhome_1/dbs
Successfully Executed SQL Loader Command
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.11
SQL>
But the file exists in the following path -
SQL> !ls -lrt /a/mis/Sqlloader_script/loader_command/ST_det.sh
-rwxrwxr-x 1 oracle root 1315 Apr 13 13:40 /a/mis/Sqlloader_script/loader_command/ST_det.sh
Any idea?
What privileges do I need to grant to the current user?
When I execute SQL*Loader directly from the OS as the oracle user, it runs successfully.
[oracle@pult loader_command]$ /a/mis/Sqlloader_script/loader_command/ST_det.sh
SQL*Loader: Release 11.2.0.1.0 - Production on Fri Apr 13 14:59:19 2012
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Commit point reached - logical record count 64
Commit point reached - logical record count 128
Commit point reached - logical record count 192
Commit point reached - logical record count 256
Commit point reached - logical record count 320
Commit point reached - logical record count 384
Commit point reached - logical record count 448
Commit point reached - logical record count 512
Commit point reached - logical record count 576
Commit point reached - logical record count 640
Commit point reached - logical record count 704
Commit point reached - logical record count 732
Succesful execution of Loading.
[oracle@pult loader_command]$
[oracle@pult loader_command]$
Thanks in advance.
This is a generic Java proc.
It works well on Windows.
But I am facing this problem on Linux.
The procedure looks like this -
create or replace and compile java source named host as
import java.io.*;
public class Host {
    public static void executeCommand(String command) {
        try {
            String[] finalCommand;
            if (isWindows()) {
                finalCommand = new String[4];
                finalCommand[0] = "C:\\windows\\system32\\cmd.exe"; // Windows XP/2003
                finalCommand[1] = "/y";
                finalCommand[2] = "/c";
                finalCommand[3] = command;
            } else {
                finalCommand = new String[3];
                finalCommand[0] = "/bin/bash";
                finalCommand[1] = "-c";
                finalCommand[2] = command;
            }
            final Process pr = Runtime.getRuntime().exec(finalCommand);
            pr.waitFor();
            new Thread(new Runnable() {
                public void run() {
                    BufferedReader br_in = null;
                    try {
                        br_in = new BufferedReader(new InputStreamReader(pr.getInputStream()));
                        String buff = null;
                        while ((buff = br_in.readLine()) != null) {
                            System.out.println("Process out :" + buff);
                            try { Thread.sleep(100); } catch (Exception e) {}
                        }
                    } catch (IOException ioe) {
                        System.out.println("Failed to print.");
                        ioe.printStackTrace();
                    } finally {
                        try {
                            br_in.close();
                        } catch (Exception ex) {}
                    }
                }
            }).start();
            new Thread(new Runnable() {
                public void run() {
                    BufferedReader br_err = null;
                    try {
                        br_err = new BufferedReader(new InputStreamReader(pr.getErrorStream()));
                        String buff = null;
                        while ((buff = br_err.readLine()) != null) {
                            System.out.println("Process err :" + buff);
                            try { Thread.sleep(100); } catch (Exception e) {}
                        }
                    } catch (IOException ioe) {
                        System.out.println("Process error.");
                        ioe.printStackTrace();
                    } finally {
                        try {
                            br_err.close();
                        } catch (Exception ex) {}
                    }
                }
            }).start();
        } catch (Exception ex) {
            System.out.println(ex.getLocalizedMessage());
        }
    }

    public static boolean isWindows() {
        if (System.getProperty("os.name").toLowerCase().indexOf("windows") != -1)
            return true;
        else
            return false;
    }
};
Looking for your reply. -
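The "sqlldr: No such file or directory" symptom is consistent with the Oracle JVM spawning /bin/bash with a minimal environment: the login PATH that makes sqlldr resolvable from an interactive shell is not inherited. The usual fixes are calling sqlldr by its absolute path inside ST_det.sh, or exporting PATH explicitly before running the command. A sketch of the latter using ProcessBuilder (Java 5+); the class name ShellWithPath is invented, and the extra directory is an assumption, so substitute your own $ORACLE_HOME/bin:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

public class ShellWithPath {
    // Runs a shell command with an explicit directory prepended to PATH,
    // so binaries such as sqlldr resolve even when the parent process
    // (e.g. the Oracle JVM) carries a minimal environment.
    static String run(String command, String extraDir) throws Exception {
        ProcessBuilder pb = new ProcessBuilder("/bin/sh", "-c", command);
        String path = pb.environment().get("PATH");
        pb.environment().put("PATH", extraDir + ":" + (path == null ? "" : path));
        pb.redirectErrorStream(true); // merge stderr into stdout
        Process p = pb.start();
        BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()));
        StringBuilder out = new StringBuilder();
        String line;
        while ((line = r.readLine()) != null) {
            out.append(line).append('\n');
        }
        p.waitFor();
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        // Prints the PATH the child process actually sees.
        System.out.print(run("echo $PATH", "/tmp/fakebin"));
    }
}
```

Reading the output before waitFor() returns also avoids the deadlock that can occur when a child fills its output buffer while the parent is blocked waiting.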
21700 while calling java stored procedure in 9i
I created a Java stored procedure that takes a varray of a rowtype. When I call the procedure, I get an ORA-21700. If I remove the varray from the parameter list, I can call the procedure okay. My goal is to create a generic Java procedure that can take a rowtype from any table and create XML. Once I get the oracle.sql.ARRAY into my code, I am okay. I just can't seem to pass it in. I have pasted my code below:
Java:
import oracle.sql.*;
public class test {
    public static String sayHello() {
        return("Hello, World!");
    }
    public static void genXML(oracle.sql.ARRAY a, oracle.sql.CLOB c) throws Exception {
        c.putString(1, "<test>This is a test!</test>");
    }
}
PL/SQL:
create or replace package mike is
type dummy_record is varray (1) of dummy%rowtype;
PROCEDURE GENERATE_XML(i dummy_record, c CLOB);
FUNCTION SAYHELLO RETURN varchar2;
end mike;
show errors
create or replace package body mike is
PROCEDURE GENERATE_XML(i dummy_record,c CLOB)
AS LANGUAGE JAVA
NAME 'test.genXML(oracle.sql.ARRAY,oracle.sql.CLOB)';
FUNCTION SAYHELLO RETURN varchar2
AS LANGUAGE JAVA
NAME 'test.sayHello() return java.lang.String';
end mike;
show errors
DDL:
SQL> describe dummy
Name Null? Type
USERNAME VARCHAR2(20)
ID NUMBER
Test Script:
declare
m Mike.dummy_record := Mike.DUMMY_RECORD();
c CLOB;
begin
m.extend;
select xml into c from t_clob;
for rec in (select * from dummy)
LOOP
m(1) :=rec;
mike.generate_xml(m,c);
end loop;
end;
And finally, the output:
declare
ERROR at line 1:
ORA-21700: object does not exist or is marked for delete
ORA-06512: at "MMANGINO.MIKE", line 0
ORA-06512: at line 10
Sorry this post is so long, but I wanted it to be complete!
Mike
The first solution is not to do that in Java in the first place.
DDL should be in script files which are applied to oracle outside of java.
Other than that, I believe there are some existing stored procedures in Oracle that take DDL strings and process them. Your user has to have permission, of course. You can track them down via the documentation.
Hi Folks,
Is there a limitation in BEA's web services implementation? I have a simple web
service that returns an array of java objects. However I am calling another middle
tier API that returns a Set. I convert this Set into array of object and return
it via the web service.
However the .jws file that implements the webservice does not compile. I get the
following error message:
java.util.Set is an interface. This interface is not supported.
Is there a limitation on using Collections within the .jws file? If that is the
case it is a severe limitation.
Note my Web Service API returns an array of java objects with no collections in
them.
Sanjay
Hello,
Generic java collections can only be handled in a very generic, weakly
typed manner.
Take a look at the
http://workshop.bea.com/xmlbeans/guide/conXMLBeansSupportBuiltInSchemaTypes.html
and also
http://workshop.bea.com/xmlbeans/guide/conJavaTypesGeneratedFromUserDerived.html
You might also ask your question to the XMLBeans newsgroup:
http://newsgroups.bea.com/cgi-bin/dnewsweb?cmd=xover&group=weblogic.developer.interest.xmlbeans
Regards,
Bruce
Sanjay wrote:
>
Hi Folks,
Is there a limitation in BEA's web services implementation? I have a simple web
service that returns an array of java objects. However I am calling another middle
tier API that returns a Set. I convert this Set into array of object and return
it via the web service.
However the .jws file that implements the webservice does not compile. I get the
following error message:
java.util.Set is an interface. This interface is not supported.
Is there a limitation on using Collections within the .jws file? If that is the
case it is a severe limitation.
Note my Web Service API returns an array of java objects with no collections in
them.
Sanjay -
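The conversion Sanjay describes, turning the middle tier's Set into the array the .jws layer can map, is the standard workaround. A minimal sketch with String elements (the real element type would be the service's own bean class; SetToArray is an invented name):

```java
import java.util.HashSet;
import java.util.Set;

public class SetToArray {
    // Converts a Set<String> to the String[] a .jws method can return.
    // toArray(new String[0]) yields a correctly typed array; a plain
    // toArray() would return Object[], which the web service layer
    // also cannot map.
    static String[] toStringArray(Set<String> set) {
        return set.toArray(new String[0]);
    }

    public static void main(String[] args) {
        Set<String> s = new HashSet<String>();
        s.add("a");
        s.add("b");
        System.out.println(toStringArray(s).length);
    }
}
```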
hi,
I got an error in the following Java program, dmtreedemo.java; the code and the error are shown below.
I have installed Oracle 10g R2 and I am using JDK 1.4.2. I set the classpath for jdm.jar and ojdm_api.jar from the Oracle 10g R2 software and compiled successfully, but at the execution stage I got this error:
F:\Mallari\DATA MINING demos\java\samples>java dmtreedemo localhost:1521:orcl scott tiger
--- Build Model - using cost matrix ---
javax.datamining.JDMException: Generic Error.
at oracle.dmt.jdm.resource.OraExceptionHandler.createException(OraExceptionHandler.java:142)
at oracle.dmt.jdm.resource.OraExceptionHandler.createException(OraExceptionHandler.java:91)
at oracle.dmt.jdm.OraDMObject.createException(OraDMObject.java:111)
at oracle.dmt.jdm.base.OraTask.saveObjectInDatabase(OraTask.java:204)
at oracle.dmt.jdm.OraMiningObject.saveObjectInDatabase(OraMiningObject.java:164)
at oracle.dmt.jdm.resource.OraPersistanceManagerImpl.saveObject(OraPersistanceManagerImpl.java:245)
at oracle.dmt.jdm.resource.OraConnection.saveObject(OraConnection.java:383)
at dmtreedemo.executeTask(dmtreedemo.java:622)
at dmtreedemo.buildModel(dmtreedemo.java:304)
at dmtreedemo.main(dmtreedemo.java:199)
Caused by: java.sql.SQLException: Unsupported feature
at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:134)
at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:179)
at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:269)
at oracle.jdbc.dbaccess.DBError.throwUnsupportedFeatureSqlException(DBError.java:690)
at oracle.jdbc.driver.OracleCallableStatement.setString(OracleCallableStatement.java:1337)
at oracle.dmt.jdm.utils.OraSQLUtils.createCallableStatement(OraSQLUtils.java:126)
at oracle.dmt.jdm.utils.OraSQLUtils.executeCallableStatement(OraSQLUtils.java:532)
at oracle.dmt.jdm.scheduler.OraProgramJob.createJob(OraProgramJob.java:77)
at oracle.dmt.jdm.scheduler.OraJob.saveJob(OraJob.java:107)
at oracle.dmt.jdm.scheduler.OraProgramJob.saveJob(OraProgramJob.java:85)
at oracle.dmt.jdm.scheduler.OraProgramJob.saveJob(OraProgramJob.java:290)
at oracle.dmt.jdm.base.OraTask.saveObjectInDatabase(OraTask.java:199)
... 6 more
So please help me out with this; I will be very thankful.
===========================================================
the sample code is
// Copyright (c) 2004, 2005, Oracle. All rights reserved.
// File: dmtreedemo.java
* This demo program describes how to use the Oracle Data Mining (ODM) Java API
* to solve a classification problem using Decision Tree (DT) algorithm.
* PROBLEM DEFINITION
* How to predict whether a customer responds or not to the new affinity card
* program using a classifier based on DT algorithm?
* DATA DESCRIPTION
* Data for this demo is composed from base tables in the Sales History (SH)
* schema. The SH schema is an Oracle Database Sample Schema that has the customer
* demographics, purchasing, and response details for the previous affinity card
* programs. Data exploration and preparing the data is a common step before
* doing data mining. Here in this demo, the following views are created in the user
* schema using CUSTOMERS, COUNTRIES, and SUPPLIMENTARY_DEMOGRAPHICS tables.
* MINING_DATA_BUILD_V:
* This view collects the previous customers' demographics, purchasing, and affinity
* card response details for building the model.
* MINING_DATA_TEST_V:
* This view collects the previous customers' demographics, purchasing, and affinity
* card response details for testing the model.
* MINING_DATA_APPLY_V:
* This view collects the prospective customers' demographics and purchasing
* details for predicting response for the new affinity card program.
* DATA MINING PROCESS
* Prepare Data:
* 1. Missing Value treatment for predictors
* See dmsvcdemo.java for a definition of missing values, and the steps to be
* taken for missing value imputation. SVM interprets all NULL values for a
* given attribute as "sparse". Sparse data is not suitable for decision
* trees, but it will accept sparse data nevertheless. Decision Tree
* implementation in ODM handles missing predictor values (by penalizing
* predictors which have missing values) and missing target values (by simple
* discarding records with missing target values). We skip missing values
* treatment in this demo.
* 2. Outlier/Clipping treatment for predictors
* See dmsvcdemo.java for a discussion on outlier treatment. For decision
* trees, outlier treatment is not really necessary. We skip outlier treatment
* in this demo.
* 3. Binning high cardinality data
* No data preparation for the types we accept is necessary - even for high
* cardinality predictors. Preprocessing to reduce the cardinality
* (e.g., binning) can improve the performance of the build, but it could
* penalize the accuracy of the resulting model.
* The PrepareData() method in this demo program illustrates the preparation of the
* build, test, and apply data. We skip PrepareData() since the decision tree
* algorithm is very capable of handling data which has not been specially
* prepared. For this demo, no data preparation will be performed.
* Build Model:
* Mining Model is the prime object in data mining. The buildModel() method
* illustrates how to build a classification model using DT algorithm.
* Test Model:
* Classification model performance can be evaluated by computing test
* metrics like accuracy, confusion matrix, lift and ROC. The testModel() or
* computeTestMetrics() method illustrates how to perform a test operation to
* compute various metrics.
* Apply Model:
* Predicting the target attribute values is the prime function of
* classification models. The applyModel() method illustrates how to
* predict the customer response for affinity card program.
* EXECUTING DEMO PROGRAM
* Refer to Oracle Data Mining Administrator's Guide
* for guidelines for executing this demo program.
// Generic Java api imports
import java.math.BigDecimal;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.SQLException;
import java.sql.Statement;
import java.text.DecimalFormat;
import java.text.MessageFormat;
import java.util.Collection;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Stack;
// Java Data Mining (JDM) standard api imports
import javax.datamining.ExecutionHandle;
import javax.datamining.ExecutionState;
import javax.datamining.ExecutionStatus;
import javax.datamining.JDMException;
import javax.datamining.MiningAlgorithm;
import javax.datamining.MiningFunction;
import javax.datamining.NamedObject;
import javax.datamining.SizeUnit;
import javax.datamining.algorithm.tree.TreeHomogeneityMetric;
import javax.datamining.algorithm.tree.TreeSettings;
import javax.datamining.algorithm.tree.TreeSettingsFactory;
import javax.datamining.base.AlgorithmSettings;
import javax.datamining.base.Model;
import javax.datamining.base.Task;
import javax.datamining.data.AttributeDataType;
import javax.datamining.data.CategoryProperty;
import javax.datamining.data.CategorySet;
import javax.datamining.data.CategorySetFactory;
import javax.datamining.data.ModelSignature;
import javax.datamining.data.PhysicalAttribute;
import javax.datamining.data.PhysicalAttributeFactory;
import javax.datamining.data.PhysicalAttributeRole;
import javax.datamining.data.PhysicalDataSet;
import javax.datamining.data.PhysicalDataSetFactory;
import javax.datamining.data.SignatureAttribute;
import javax.datamining.modeldetail.tree.TreeModelDetail;
import javax.datamining.modeldetail.tree.TreeNode;
import javax.datamining.resource.Connection;
import javax.datamining.resource.ConnectionFactory;
import javax.datamining.resource.ConnectionSpec;
import javax.datamining.rule.Predicate;
import javax.datamining.rule.Rule;
import javax.datamining.supervised.classification.ClassificationApplySettings;
import javax.datamining.supervised.classification.ClassificationApplySettingsFactory;
import javax.datamining.supervised.classification.ClassificationModel;
import javax.datamining.supervised.classification.ClassificationSettings;
import javax.datamining.supervised.classification.ClassificationSettingsFactory;
import javax.datamining.supervised.classification.ClassificationTestMetricOption;
import javax.datamining.supervised.classification.ClassificationTestMetrics;
import javax.datamining.supervised.classification.ClassificationTestMetricsTask;
import javax.datamining.supervised.classification.ClassificationTestMetricsTaskFactory;
import javax.datamining.supervised.classification.ClassificationTestTaskFactory;
import javax.datamining.supervised.classification.ConfusionMatrix;
import javax.datamining.supervised.classification.CostMatrix;
import javax.datamining.supervised.classification.CostMatrixFactory;
import javax.datamining.supervised.classification.Lift;
import javax.datamining.supervised.classification.ReceiverOperatingCharacterics;
import javax.datamining.task.BuildTask;
import javax.datamining.task.BuildTaskFactory;
import javax.datamining.task.apply.DataSetApplyTask;
import javax.datamining.task.apply.DataSetApplyTaskFactory;
// Oracle Java Data Mining (JDM) implemented api imports
import oracle.dmt.jdm.algorithm.tree.OraTreeSettings;
import oracle.dmt.jdm.resource.OraConnection;
import oracle.dmt.jdm.resource.OraConnectionFactory;
import oracle.dmt.jdm.modeldetail.tree.OraTreeModelDetail;
public class dmtreedemo
//Connection related data members
private static Connection m_dmeConn;
private static ConnectionFactory m_dmeConnFactory;
//Object factories used in this demo program
private static PhysicalDataSetFactory m_pdsFactory;
private static PhysicalAttributeFactory m_paFactory;
private static ClassificationSettingsFactory m_clasFactory;
private static TreeSettingsFactory m_treeFactory;
private static BuildTaskFactory m_buildFactory;
private static DataSetApplyTaskFactory m_dsApplyFactory;
private static ClassificationTestTaskFactory m_testFactory;
private static ClassificationApplySettingsFactory m_applySettingsFactory;
private static CostMatrixFactory m_costMatrixFactory;
private static CategorySetFactory m_catSetFactory;
private static ClassificationTestMetricsTaskFactory m_testMetricsTaskFactory;
// Global constants
private static DecimalFormat m_df = new DecimalFormat("##.####");
private static String m_costMatrixName = null;
public static void main( String args[] )
try
if ( args.length != 3 ) {
System.out.println("Usage: java dmsvrdemo <Host name>:<Port>:<SID> <User Name> <Password>");
return;
String uri = args[0];
String name = args[1];
String password = args[2];
// 1. Login to the Data Mining Engine
m_dmeConnFactory = new OraConnectionFactory();
ConnectionSpec connSpec = m_dmeConnFactory.getConnectionSpec();
connSpec.setURI("jdbc:oracle:thin:@"+uri);
connSpec.setName(name);
connSpec.setPassword(password);
m_dmeConn = m_dmeConnFactory.getConnection(connSpec);
// 2. Clean up all previously created demo objects
clean();
// 3. Initialize factories for mining objects
initFactories();
m_costMatrixName = createCostMatrix();
// 4. Build model with supplied cost matrix
buildModel();
// 5. Test model - To compute accuracy and confusion matrix, lift result
// and ROC for the model from apply output data.
// Please see dnnbdemo.java to see how to test the model
// with a test input data and cost matrix.
// Test the model with cost matrix
computeTestMetrics("DT_TEST_APPLY_OUTPUT_COST_JDM",
"dtTestMetricsWithCost_jdm", m_costMatrixName);
// Test the model without cost matrix
computeTestMetrics("DT_TEST_APPLY_OUTPUT_JDM",
"dtTestMetrics_jdm", null);
// 6. Apply the model
applyModel();
} catch(Exception anyExp) {
anyExp.printStackTrace(System.out);
} finally {
try {
//6. Logout from the Data Mining Engine
m_dmeConn.close();
} catch(Exception anyExp1) { }//Ignore
* Initialize all object factories used in the demo program.
* @exception JDMException if factory initialization failed
public static void initFactories() throws JDMException
m_pdsFactory = (PhysicalDataSetFactory)m_dmeConn.getFactory(
"javax.datamining.data.PhysicalDataSet");
m_paFactory = (PhysicalAttributeFactory)m_dmeConn.getFactory(
"javax.datamining.data.PhysicalAttribute");
m_clasFactory = (ClassificationSettingsFactory)m_dmeConn.getFactory(
"javax.datamining.supervised.classification.ClassificationSettings");
m_treeFactory = (TreeSettingsFactory) m_dmeConn.getFactory(
"javax.datamining.algorithm.tree.TreeSettings");
m_buildFactory = (BuildTaskFactory)m_dmeConn.getFactory(
"javax.datamining.task.BuildTask");
m_dsApplyFactory = (DataSetApplyTaskFactory)m_dmeConn.getFactory(
"javax.datamining.task.apply.DataSetApplyTask");
m_testFactory = (ClassificationTestTaskFactory)m_dmeConn.getFactory(
"javax.datamining.supervised.classification.ClassificationTestTask");
m_applySettingsFactory = (ClassificationApplySettingsFactory)m_dmeConn.getFactory(
"javax.datamining.supervised.classification.ClassificationApplySettings");
m_costMatrixFactory = (CostMatrixFactory)m_dmeConn.getFactory(
"javax.datamining.supervised.classification.CostMatrix");
m_catSetFactory = (CategorySetFactory)m_dmeConn.getFactory(
"javax.datamining.data.CategorySet" );
m_testMetricsTaskFactory = (ClassificationTestMetricsTaskFactory)m_dmeConn.getFactory(
"javax.datamining.supervised.classification.ClassificationTestMetricsTask");
* This method illustrates how to build a mining model using the
* MINING_DATA_BUILD_V dataset and classification settings with
* DT algorithm.
* @exception JDMException if model build failed
public static void buildModel() throws JDMException
System.out.println("---------------------------------------------------");
System.out.println("--- Build Model - using cost matrix ---");
System.out.println("---------------------------------------------------");
// 1. Create & save PhysicalDataSpecification
PhysicalDataSet buildData =
m_pdsFactory.create("MINING_DATA_BUILD_V", false);
PhysicalAttribute pa = m_paFactory.create("CUST_ID",
AttributeDataType.integerType, PhysicalAttributeRole.caseId );
buildData.addAttribute(pa);
m_dmeConn.saveObject("treeBuildData_jdm", buildData, true);
//2. Create & save Mining Function Settings
// Create tree algorithm settings
TreeSettings treeAlgo = m_treeFactory.create();
// By default, tree algorithm will have the following settings:
// treeAlgo.setBuildHomogeneityMetric(TreeHomogeneityMetric.gini);
// treeAlgo.setMaxDepth(7);
// ((OraTreeSettings)treeAlgo).setMinDecreaseInImpurity(0.1, SizeUnit.percentage);
// treeAlgo.setMinNodeSize( 0.05, SizeUnit.percentage );
// treeAlgo.setMinNodeSize( 10, SizeUnit.count );
// ((OraTreeSettings)treeAlgo).setMinDecreaseInImpurity(20, SizeUnit.count);
// Set cost matrix. A cost matrix is used to influence the weighting of
// misclassification during model creation (and scoring).
// See Oracle Data Mining Concepts Guide for more details.
String costMatrixName = m_costMatrixName;
// Create ClassificationSettings
ClassificationSettings buildSettings = m_clasFactory.create();
buildSettings.setAlgorithmSettings(treeAlgo);
buildSettings.setCostMatrixName(costMatrixName);
buildSettings.setTargetAttributeName("AFFINITY_CARD");
m_dmeConn.saveObject("treeBuildSettings_jdm", buildSettings, true);
// 3. Create, save & execute Build Task
BuildTask buildTask = m_buildFactory.create(
"treeBuildData_jdm", // Build data specification
"treeBuildSettings_jdm", // Mining function settings name
"treeModel_jdm" // Mining model name
buildTask.setDescription("treeBuildTask_jdm");
executeTask(buildTask, "treeBuildTask_jdm");
//4. Restore the model from the DME and explore the details of the model
ClassificationModel model =
(ClassificationModel)m_dmeConn.retrieveObject(
"treeModel_jdm", NamedObject.model);
// Display model build settings
ClassificationSettings retrievedBuildSettings =
(ClassificationSettings)model.getBuildSettings();
if(buildSettings == null)
System.out.println("Failure to restore build settings.");
else
displayBuildSettings(retrievedBuildSettings, "treeBuildSettings_jdm");
// Display model signature
displayModelSignature((Model)model);
// Display model detail
TreeModelDetail treeModelDetails = (TreeModelDetail) model.getModelDetail();
displayTreeModelDetailsExtensions(treeModelDetails);
* Create and save cost matrix.
* Consider an example where it costs $10 to mail a promotion to a
* prospective customer and if the prospect becomes a customer, the
* typical sale including the promotion, is worth $100. Then the cost
* of missing a customer (i.e. missing a $100 sale) is 10x that of
* incorrectly indicating that a person is good prospect (spending
* $10 for the promo). In this case, all prediction errors made by
* the model are NOT equal. To act on what the model determines to
* be the most likely (probable) outcome may be a poor choice.
* Suppose that the probability of a BUY response is 10% for a given
* prospect. Then the expected revenue from the prospect is:
* .10 * $100 - .90 * $10 = $1.
* The optimal action, given the cost matrix, is to simply mail the
* promotion to the customer, because the action is profitable ($1).
* In contrast, without the cost matrix, all that can be said is
* that the most likely response is NO BUY, so don't send the
* promotion. This shows that cost matrices can be very important.
* The caveat in all this is that the model predicted probabilities
* may NOT be accurate. For binary targets, a systematic approach to
* this issue exists. It is ROC, illustrated below.
* With ROC computed on a test set, the user can see how various model
* predicted probability thresholds affect the action of mailing a promotion.
* Suppose I promote when the probability to BUY exceeds 5%, 10%, 15%, etc.;
* what return can I expect? Note that the answer to this question does
* not rely on the predicted probabilities being accurate, only that
* they are in approximately the correct rank order.
* Assuming that the predicted probabilities are accurate, provide the
* cost matrix table name as input to the RANK_APPLY procedure to get
* appropriate costed scoring results to determine the most appropriate
* action.
* In this demo, we will create the following cost matrix
* ActualTarget PredictedTarget Cost
* 0 0 0
* 0 1 1
* 1 0 8
* 1 1 0
private static String createCostMatrix() throws JDMException
String costMatrixName = "treeCostMatrix";
// Create categorySet
CategorySet catSet = m_catSetFactory.create(AttributeDataType.integerType);
// Add category values
catSet.addCategory(new Integer(0), CategoryProperty.valid);
catSet.addCategory(new Integer(1), CategoryProperty.valid);
// Create cost matrix
CostMatrix costMatrix = m_costMatrixFactory.create(catSet);
// ActualTarget PredictedTarget Cost
costMatrix.setValue(new Integer(0), new Integer(0), 0);
costMatrix.setValue(new Integer(0), new Integer(1), 1);
costMatrix.setValue(new Integer(1), new Integer(0), 8);
costMatrix.setValue(new Integer(1), new Integer(1), 0);
//save cost matrix
m_dmeConn.saveObject(costMatrixName, costMatrix, true);
return costMatrixName;
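The expected-revenue arithmetic in the comment block above (.10 * $100 - .90 * $10 = $1) can be sketched as a standalone calculation. This is an illustrative helper, not part of the JDM demo; the figures are the comment's example numbers:

```java
// Sketch of the expected-revenue arithmetic from the cost matrix discussion.
// pBuy, saleValue and mailCost are the demo's illustrative figures.
public class ExpectedRevenue {
    static double expectedRevenue(double pBuy, double saleValue, double mailCost) {
        // revenue if the prospect buys, minus cost wasted on non-buyers
        return pBuy * saleValue - (1.0 - pBuy) * mailCost;
    }

    public static void main(String[] args) {
        // .10 * $100 - .90 * $10 = $1, so mailing is profitable
        System.out.println(expectedRevenue(0.10, 100.0, 10.0));
    }
}
```

A 5% response rate with the same figures gives .05 * $100 - .95 * $10 = -$4.50, i.e. an unprofitable mailing, which is why the break-even probability threshold matters.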
* This method illustrates how to compute test metrics using
* an apply output table that has actual and predicted target values. Here the
* apply operation is done on the MINING_DATA_TEST_V dataset. It creates
* an apply output table with actual and predicted target values. Using
* ClassificationTestMetricsTask test metrics are computed. This produces
* the same test metrics results as ClassificationTestTask.
* @param applyOutputName apply output table name
* @param testResultName test result name
* @param costMatrixName table name of the supplied cost matrix
* @exception JDMException if model test failed
public static void computeTestMetrics(String applyOutputName,
String testResultName, String costMatrixName) throws JDMException
if (costMatrixName != null) {
System.out.println("---------------------------------------------------");
System.out.println("--- Test Model - using apply output table ---");
System.out.println("--- - using cost matrix table ---");
System.out.println("---------------------------------------------------");
else {
System.out.println("---------------------------------------------------");
System.out.println("--- Test Model - using apply output table ---");
System.out.println("--- - using no cost matrix table ---");
System.out.println("---------------------------------------------------");
// 1. Do the apply on test data to create an apply output table
// Create & save PhysicalDataSpecification
PhysicalDataSet applyData =
m_pdsFactory.create( "MINING_DATA_TEST_V", false );
PhysicalAttribute pa = m_paFactory.create("CUST_ID",
AttributeDataType.integerType, PhysicalAttributeRole.caseId );
applyData.addAttribute( pa );
m_dmeConn.saveObject( "treeTestApplyData_jdm", applyData, true );
// 2 Create & save ClassificationApplySettings
ClassificationApplySettings clasAS = m_applySettingsFactory.create();
HashMap sourceAttrMap = new HashMap();
sourceAttrMap.put( "AFFINITY_CARD", "AFFINITY_CARD" );
clasAS.setSourceDestinationMap( sourceAttrMap );
m_dmeConn.saveObject( "treeTestApplySettings_jdm", clasAS, true);
// 3 Create, store & execute apply Task
DataSetApplyTask applyTask = m_dsApplyFactory.create(
"treeTestApplyData_jdm",
"treeModel_jdm",
"treeTestApplySettings_jdm",
applyOutputName);
if(executeTask(applyTask, "treeTestApplyTask_jdm"))
// Compute test metrics on new created apply output table
// 4. Create & save PhysicalDataSpecification
PhysicalDataSet applyOutputData = m_pdsFactory.create(
applyOutputName, false );
applyOutputData.addAttribute( pa );
m_dmeConn.saveObject( "treeTestApplyOutput_jdm", applyOutputData, true );
// 5. Create a ClassificationTestMetricsTask
ClassificationTestMetricsTask testMetricsTask =
m_testMetricsTaskFactory.create( "treeTestApplyOutput_jdm", // apply output data used as input
"AFFINITY_CARD", // actual target column
"PREDICTION", // predicted target column
testResultName );             // test metrics result name
testMetricsTask.computeMetric( // enable confusion matrix computation
ClassificationTestMetricOption.confusionMatrix, true );
testMetricsTask.computeMetric( // enable lift computation
ClassificationTestMetricOption.lift, true );
testMetricsTask.computeMetric( // enable ROC computation
ClassificationTestMetricOption.receiverOperatingCharacteristics, true );
testMetricsTask.setPositiveTargetValue( new Integer(1) );
testMetricsTask.setNumberOfLiftQuantiles( 10 );
testMetricsTask.setPredictionRankingAttrName( "PROBABILITY" );
if (costMatrixName != null) {
testMetricsTask.setCostMatrixName(costMatrixName);
displayTable(costMatrixName, "", "order by ACTUAL_TARGET_VALUE, PREDICTED_TARGET_VALUE");
// Store & execute the task
boolean isTaskSuccess = executeTask(testMetricsTask, "treeTestMetricsTask_jdm");
if( isTaskSuccess ) {
// Restore & display test metrics
ClassificationTestMetrics testMetrics = (ClassificationTestMetrics)
m_dmeConn.retrieveObject( testResultName, NamedObject.testMetrics );
// Display classification test metrics
displayTestMetricDetails(testMetrics);
* This method illustrates how to apply the mining model on the
* MINING_DATA_APPLY_V dataset to predict customer
* response. After completion of the task apply output table with the
* predicted results is created at the user specified location.
* @exception JDMException if model apply failed
public static void applyModel() throws JDMException
System.out.println("---------------------------------------------------");
System.out.println("--- Apply Model ---");
System.out.println("---------------------------------------------------");
System.out.println("---------------------------------------------------");
System.out.println("--- Business case 1 ---");
System.out.println("--- Find the 10 customers who live in Italy ---");
System.out.println("--- that are least expensive to be convinced to ---");
System.out.println("--- use an affinity card. ---");
System.out.println("---------------------------------------------------");
// 1. Create & save PhysicalDataSpecification
PhysicalDataSet applyData =
m_pdsFactory.create( "MINING_DATA_APPLY_V", false );
PhysicalAttribute pa = m_paFactory.create("CUST_ID",
AttributeDataType.integerType, PhysicalAttributeRole.caseId );
applyData.addAttribute( pa );
m_dmeConn.saveObject( "treeApplyData_jdm", applyData, true );
// 2. Create & save ClassificationApplySettings
ClassificationApplySettings clasAS = m_applySettingsFactory.create();
// Add source attributes
HashMap sourceAttrMap = new HashMap();
sourceAttrMap.put( "COUNTRY_NAME", "COUNTRY_NAME" );
clasAS.setSourceDestinationMap( sourceAttrMap );
// Add cost matrix
clasAS.setCostMatrixName( m_costMatrixName );
m_dmeConn.saveObject( "treeApplySettings_jdm", clasAS, true);
// 3. Create, store & execute apply Task
DataSetApplyTask applyTask = m_dsApplyFactory.create(
"treeApplyData_jdm", "treeModel_jdm",
"treeApplySettings_jdm", "TREE_APPLY_OUTPUT1_JDM");
executeTask(applyTask, "treeApplyTask_jdm");
// 4. Display apply result -- Note that APPLY results do not need to be
// reverse transformed, as done in the case of model details. This is
// because class values of a classification target were not (required to
// be) binned or normalized.
// Find the 10 customers who live in Italy that are least expensive to be
// convinced to use an affinity card.
displayTable("TREE_APPLY_OUTPUT1_JDM",
"where COUNTRY_NAME='Italy' and ROWNUM < 11 ",
"order by COST");
System.out.println("---------------------------------------------------");
System.out.println("--- Business case 2 ---");
System.out.println("--- List ten customers (ordered by their id) ---");
System.out.println("--- along with likelihood and cost to use or ---");
System.out.println("--- reject the affinity card. ---");
System.out.println("---------------------------------------------------");
// 1. Create & save PhysicalDataSpecification
applyData =
m_pdsFactory.create( "MINING_DATA_APPLY_V", false );
pa = m_paFactory.create("CUST_ID",
AttributeDataType.integerType, PhysicalAttributeRole.caseId );
applyData.addAttribute( pa );
m_dmeConn.saveObject( "treeApplyData_jdm", applyData, true );
// 2. Create & save ClassificationApplySettings
clasAS = m_applySettingsFactory.create();
// Add cost matrix
clasAS.setCostMatrixName( m_costMatrixName );
m_dmeConn.saveObject( "treeApplySettings_jdm", clasAS, true);
// 3. Create, store & execute apply Task
applyTask = m_dsApplyFactory.create(
"treeApplyData_jdm", "treeModel_jdm",
"treeApplySettings_jdm", "TREE_APPLY_OUTPUT2_JDM");
executeTask(applyTask, "treeApplyTask_jdm");
// 4. Display apply result -- Note that APPLY results do not need to be
// reverse transformed, as done in the case of model details. This is
// because class values of a classification target were not (required to
// be) binned or normalized.
// List ten customers (ordered by their id) along with likelihood and cost
// to use or reject the affinity card (Note: while this example has a
// binary target, such a query is useful in multi-class classification -
// Low, Med, High for example).
displayTable("TREE_APPLY_OUTPUT2_JDM",
"where ROWNUM < 21",
"order by CUST_ID, PREDICTION");
System.out.println("---------------------------------------------------");
System.out.println("--- Business case 3 ---");
System.out.println("--- Find the customers who work in Tech support ---");
System.out.println("--- and are under 25, who are likely to respond ---");
System.out.println("--- to the new affinity card program. ---");
System.out.println("---------------------------------------------------");
// 1. Create & save PhysicalDataSpecification
applyData =
m_pdsFactory.create( "MINING_DATA_APPLY_V", false );
pa = m_paFactory.create("CUST_ID",
AttributeDataType.integerType, PhysicalAttributeRole.caseId );
applyData.addAttribute( pa );
m_dmeConn.saveObject( "treeApplyData_jdm", applyData, true );
// 2. Create & save ClassificationApplySettings
clasAS = m_applySettingsFactory.create();
// Add source attributes
sourceAttrMap = new HashMap();
sourceAttrMap.put( "AGE", "AGE" );
sourceAttrMap.put( "OCCUPATION", "OCCUPATION" );
clasAS.setSourceDestinationMap( sourceAttrMap );
m_dmeConn.saveObject( "treeApplySettings_jdm", clasAS, true);
// 3. Create, store & execute apply Task
applyTask = m_dsApplyFactory.create(
"treeApplyData_jdm", "treeModel_jdm",
"treeApplySettings_jdm", "TREE_APPLY_OUTPUT3_JDM");
executeTask(applyTask, "treeApplyTask_jdm");
// 4. Display apply result -- Note that APPLY results do not need to be
// reverse transformed, as done in the case of model details. This is
// because class values of a classification target were not (required to
// be) binned or normalized.
// Find the customers who work in Tech support and are under 25 who are
// likely to respond to the new affinity card program.
displayTable("TREE_APPLY_OUTPUT3_JDM",
"where OCCUPATION = 'TechSup' " +
"and AGE < 25 " +
"and PREDICTION = 1 ",
"order by CUST_ID");
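Attaching a cost matrix to the apply settings (the basis of business case 1's "least expensive" ordering) amounts to minimum-expected-cost scoring: the predicted class is the one minimizing expected cost, not the one with the highest probability. A minimal standalone sketch of that decision rule, using the demo's 2x2 matrix but hypothetical class probabilities:

```java
// Sketch of costed scoring: pick the prediction with minimum expected cost.
// cost[actual][predicted] is the demo's matrix; prob[] is hypothetical.
public class CostedScoring {
    static int minCostPrediction(double[] prob, double[][] cost) {
        int best = 0;
        double bestCost = Double.MAX_VALUE;
        for (int p = 0; p < cost[0].length; p++) {
            double expected = 0;
            for (int a = 0; a < cost.length; a++)
                expected += prob[a] * cost[a][p];   // sum over actual classes
            if (expected < bestCost) { bestCost = expected; best = p; }
        }
        return best;
    }

    public static void main(String[] args) {
        // With the demo's matrix {{0,1},{8,0}}, even a 20% chance of class 1
        // flips the prediction to 1, because missing a buyer costs 8x.
        double[][] cost = { {0, 1}, {8, 0} };
        System.out.println(minCostPrediction(new double[]{0.8, 0.2}, cost));
    }
}
```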
* This method stores the given task with the specified name in the DMS
* and submits the task for asynchronous execution in the DMS. After
* completing the task successfully it returns true. If there is a task
* failure, then it prints error description and returns false.
* @param taskObj task object
* @param taskName name of the task
* @return boolean returns true when the task is successful
* @exception JDMException if task execution failed
public static boolean executeTask(Task taskObj, String taskName)
throws JDMException
boolean isTaskSuccess = false;
m_dmeConn.saveObject(taskName, taskObj, true);
ExecutionHandle execHandle = m_dmeConn.execute(taskName);
System.out.print(taskName + " is started, please wait. ");
//Wait for completion of the task
ExecutionStatus status = execHandle.waitForCompletion(Integer.MAX_VALUE);
//Check the status of the task after completion
isTaskSuccess = status.getState().equals(ExecutionState.success);
if( isTaskSuccess ) //Task completed successfully
System.out.println(taskName + " is successful.");
else //Task failed
System.out.println(taskName + " failed.\nFailure Description: " +
status.getDescription() );
return isTaskSuccess;
private static void displayBuildSettings(
ClassificationSettings clasSettings, String buildSettingsName)
System.out.println("BuildSettings Details from the "
+ buildSettingsName + " table:");
displayTable(buildSettingsName, "", "order by SETTING_NAME");
System.out.println("BuildSettings Details from the "
+ buildSettingsName + " model build settings object:");
String objName = clasSettings.getName();
if(objName != null)
System.out.println("Name = " + objName);
String objDescription = clasSettings.getDescription();
if(objDescription != null)
System.out.println("Description = " + objDescription);
java.util.Date creationDate = clasSettings.getCreationDate();
String creator = clasSettings.getCreatorInfo();
String targetAttrName = clasSettings.getTargetAttributeName();
System.out.println("Target attribute name = " + targetAttrName);
AlgorithmSettings algoSettings = clasSettings.getAlgorithmSettings();
if(algoSettings == null)
System.out.println("Failure: clasSettings.getAlgorithmSettings() returns null");
MiningAlgorithm algo = algoSettings.getMiningAlgorithm();
if(algo == null) System.out.println("Failure: algoSettings.getMiningAlgorithm() returns null");
System.out.println("Algorithm Name: " + algo.name());
MiningFunction function = clasSettings.getMiningFunction();
if(function == null) System.out.println("Failure: clasSettings.getMiningFunction() returns null");
System.out.println("Function Name: " + function.name());
try {
String costMatrixName = clasSettings.getCostMatrixName();
if(costMatrixName != null) {
System.out.println("Cost Matrix Details from the " + costMatrixName
+ " table:");
displayTable(costMatrixName, "", "order by ACTUAL_TARGET_VALUE, PREDICTED_TARGET_VALUE");
} catch(Exception jdmExp)
System.out.println("Failure: clasSettings.getCostMatrixName()throws exception");
jdmExp.printStackTrace();
// List of DT algorithm settings
// treeAlgo.setBuildHomogeneityMetric(TreeHomogeneityMetric.gini);
// treeAlgo.setMaxDepth(7);
// ((OraTreeSettings)treeAlgo).setMinDecreaseInImpurity(0.1, SizeUnit.percentage);
// treeAlgo.setMinNodeSize( 0.05, SizeUnit.percentage );
// treeAlgo.setMinNodeSize( 10, SizeUnit.count );
// ((OraTreeSettings)treeAlgo).setMinDecreaseInImpurity(20, SizeUnit.count);
TreeHomogeneityMetric homogeneityMetric = ((OraTreeSettings)algoSettings).getBuildHomogeneityMetric();
System.out.println("Homogeneity Metric: " + homogeneityMetric.name());
int intValue = ((OraTreeSettings)algoSettings).getMaxDepth();
System.out.println("Max Depth: " + intValue);
double doubleValue = ((OraTreeSettings)algoSettings).getMinNodeSizeForSplit(SizeUnit.percentage);
System.out.println("MinNodeSizeForSplit (percentage): " + m_df.format(doubleValue));
doubleValue = ((OraTreeSettings)algoSettings).getMinNodeSizeForSplit(SizeUnit.count);
System.out.println("MinNodeSizeForSplit (count): " + m_df.format(doubleValue));
doubleValue = ((OraTreeSettings)algoSettings).getMinNodeSize();
SizeUnit unit = ((OraTreeSettings)algoSettings).getMinNodeSizeUnit();
System.out.println("Min Node Size (" + unit.name() +"): " + m_df.format(doubleValue));
doubleValue = ((OraTreeSettings)algoSettings).getMinNodeSize( SizeUnit.count );
System.out.println("Min Node Size (" + SizeUnit.count.name() +"): " + m_df.format(doubleValue));
doubleValue = ((OraTreeSettings)algoSettings).getMinNodeSize( SizeUnit.percentage );
System.out.println("Min Node Size (" + SizeUnit.percentage.name() +"): " + m_df.format(doubleValue));
* This method displays the DT model signature.
* @param model model object
* @exception JDMException if failed to retrieve model signature
public static void displayModelSignature(Model model) throws JDMException
String modelName = model.getName();
System.out.println("Model Name: " + modelName);
ModelSignature modelSignature = model.getSignature();
System.out.println("ModelSignature Details: ( Attribute Name, Attribute Type )");
MessageFormat mfSign = new MessageFormat(" ( {0}, {1} )");
String[] vals = new String[3];
Collection sortedSet = modelSignature.getAttributes();
Iterator attrIterator = sortedSet.iterator();
while(attrIterator.hasNext())
SignatureAttribute attr = (SignatureAttribute)attrIterator.next();
vals[0] = attr.getName();
vals[1] = attr.getDataType().name();
System.out.println( mfSign.format(vals) );
* This method displays the DT model details.
* @param treeModelDetails tree model details object
* @exception JDMException if failed to retrieve model details
public static void displayTreeModelDetailsExtensions(TreeModelDetail treeModelDetails)
throws JDMException
System.out.println( "\nTreeModelDetail: Model name=" + "treeModel_jdm" );
TreeNode root = treeModelDetails.getRootNode();
System.out.println( "\nRoot node: " + root.getIdentifier() );
// get the info for the tree model
int treeDepth = ((OraTreeModelDetail) treeModelDetails).getTreeDepth();
System.out.println( "Tree depth: " + treeDepth );
int totalNodes = ((OraTreeModelDetail) treeModelDetails).getNumberOfNodes();
System.out.println( "Total number of nodes: " + totalNodes );
int totalLeaves = ((OraTreeModelDetail) treeModelDetails).getNumberOfLeafNodes();
System.out.println( "Total number of leaf nodes: " + totalLeaves );
Stack nodeStack = new Stack();
nodeStack.push( root);
while( !nodeStack.empty() )
TreeNode node = (TreeNode) nodeStack.pop();
// display this node
int nodeId = node.getIdentifier();
long caseCount = node.getCaseCount();
Object prediction = node.getPrediction();
int level = node.getLevel();
int children = node.getNumberOfChildren();
TreeNode parent = node.getParent();
System.out.println( "\nNode id=" + nodeId + " at level " + level );
if( parent != null )
System.out.println( "parent: " + parent.getIdentifier() +
", children=" + children );
System.out.println( "Case count: " + caseCount + ", prediction: " + prediction );
Predicate predicate = node.getPredicate();
System.out.println( "Predicate: " + predicate.toString() );
Predicate[] surrogates = node.getSurrogates();
if( surrogates != null )
for( int i=0; i<surrogates.length; i++ )
System.out.println( "Surrogate[" + i + "]: " + surrogates[i] );
// add child nodes in the stack
if( children > 0 )
TreeNode[] childNodes = node.getChildren();
for( int i=0; i<childNodes.length; i++ )
nodeStack.push( childNodes[i] );
TreeNode[] allNodes = treeModelDetails.getNodes();
System.out.print( "\nNode identifiers by getNodes():" );
for( int i=0; i<allNodes.length; i++ )
System.out.print( " " + allNodes[i].getIdentifier() );
System.out.println();
// display the node identifiers
int[] nodeIds = treeModelDetails.getNodeIdentifiers();
System.out.print( "Node identifiers by getNodeIdentifiers():" );
for( int i=0; i<nodeIds.length; i++ )
System.out.print( " " + nodeIds[i] );
System.out.println();
TreeNode node = treeModelDetails.getNode(nodeIds.length-1);
System.out.println( "Node identifier by getNode(" + (nodeIds.length-1) +
"): " + node.getIdentifier() );
Rule rule2 = treeModelDetails.getRule(nodeIds.length-1);
System.out.println( "Rule identifier by getRule(" + (nodeIds.length-1) +
"): " + rule2.getRuleIdentifier() );
// get the rules and display them
Collection ruleColl = treeModelDetails.getRules();
Iterator ruleIterator = ruleColl.iterator();
while( ruleIterator.hasNext() )
Rule rule = (Rule) ruleIterator.next();
int ruleId = rule.getRuleIdentifier();
Predicate antecedent = (Predicate) rule.getAntecedent();
Predicate consequent = (Predicate) rule.getConsequent();
System.out.println( "\nRULE " + ruleId + ": support=" +
rule.getSupport() + " (abs=" + rule.getAbsoluteSupport() +
"), confidence=" + rule.getConfidence() );
System.out.println( antecedent );
System.out.println( "=======>" );
System.out.println( consequent );
* Display classification test metrics object
* @param testMetrics classification test metrics object
* @exception JDMException if failed to retrieve test metric details
public static void displayTestMetricDetails(
ClassificationTestMetrics testMetrics) throws JDMException
// Retrieve Oracle DT model test metrics details extensions
// Test Metrics Name
System.out.println("Test Metrics Name = " + testMetrics.getName());
// Model Name
System.out.println("Model Name = " + testMetrics.getModelName());
// Test Data Name
System.out.println("Test Data Name = " + testMetrics.getTestDataName());
// Accuracy
System.out.println("Accuracy = " + m_df.format(testMetrics.getAccuracy().doubleValue()));
// Confusion Matrix
ConfusionMatrix confusionMatrix = testMetrics.getConfusionMatrix();
Collection categories = confusionMatrix.getCategories();
Iterator xIterator = categories.iterator();
System.out.println("Confusion Matrix: Accuracy = " + m_df.format(confusionMatrix.getAccuracy()));
System.out.println("Confusion Matrix: Error = " + m_df.format(confusionMatrix.getError()));
System.out.println("Confusion Matrix: ( Actual, Prediction, Value )");
MessageFormat mf = new MessageFormat(" ( {0}, {1}, {2} )");
String[] vals = new String[3];
while(xIterator.hasNext())
Object actual = xIterator.next();
vals[0] = actual.toString();
Iterator yIterator = categories.iterator();
while(yIterator.hasNext())
Object predicted = yIterator.next();
vals[1] = predicted.toString();
long number = confusionMatrix.getNumberOfPredictions(actual, predicted);
vals[2] = Long.toString(number);
System.out.println(mf.format(vals));
// Lift
Lift lift = testMetrics.getLift();
System.out.println("Lift Details:");
System.out.println("Lift: Target Attribute Name = " + lift.getTargetAttributeName());
System.out.println("Lift: Positive Target Value = " + lift.getPositiveTargetValue());
System.out.println("Lift: Total Cases = " + lift.getTotalCases());
System.out.println("Lift: Total Positive Cases = " + lift.getTotalPositiveCases());
int numberOfQuantiles = lift.getNumberOfQuantiles();
System.out.println("Lift: Number Of Quantiles = " + numberOfQuantiles);
System.out.println("Lift: ( QUANTILE_NUMBER, QUANTILE_TOTAL_COUNT, QUANTILE_TARGET_COUNT, PERCENTAGE_RECORDS_CUMULATIVE,CUMULATIVE_LIFT,CUMULATIVE_TARGET_DENSITY,TARGETS_CUMULATIVE, NON_TARGETS_CUMULATIVE, LIFT_QUANTILE, TARGET_DENSITY )");
MessageFormat mfLift = new MessageFormat(" ( {0}, {1}, {2}, {3}, {4}, {5}, {6}, {7}, {8}, {9} )");
String[] liftVals = new String[10];
for(int iQuantile=1; iQuantile<= numberOfQuantiles; iQuantile++)
liftVals[0] = Integer.toString(iQuantile); //QUANTILE_NUMBER
liftVals[1] = Long.toString(lift.getCases((iQuantile-1), iQuantile));//QUANTILE_TOTAL_COUNT
liftVals[2] = Long.toString(lift.getNumberOfPositiveCases((iQuantile-1), iQuantile));//QUANTILE_TARGET_COUNT
liftVals[3] = m_df.format(lift.getCumulativePercentageSize(iQuantile).doubleValue());//PERCENTAGE_RECORDS_CUMULATIVE
liftVals[4] = m_df.format(lift.getCumulativeLift(iQuantile).doubleValue());//CUMULATIVE_LIFT
liftVals[5] = m_df.format(lift.getCumulativeTargetDensity(iQuantile).doubleValue());//CUMULATIVE_TARGET_DENSITY
liftVals[6] = Long.toString(lift.getCumulativePositiveCases(iQuantile));//TARGETS_CUMULATIVE
liftVals[7] = Long.toString(lift.getCumulativeNegativeCases(iQuantile));//NON_TARGETS_CUMULATIVE
liftVals[8] = m_df.format(lift.getLift(iQuantile, iQuantile).doubleValue());//LIFT_QUANTILE
liftVals[9] = m_df.format(lift.getTargetDensity(iQuantile, iQuantile).doubleValue());//TARGET_DENSITY
System.out.println(mfLift.format(liftVals));
// ROC
ReceiverOperatingCharacterics roc = testMetrics.getROC();
System.out.println("ROC Details:");
System.out.println("ROC: Area Under Curve = " + m_df.format(roc.getAreaUnderCurve()));
int nROCThresh = roc.getNumberOfThresholdCandidates();
System.out.println("ROC: Number Of Threshold Candidates = " + nROCThresh);
System.out.println("ROC: ( INDEX, PROBABILITY, TRUE_POSITIVES, FALSE_NEGATIVES, FALSE_POSITIVES, TRUE_NEGATIVES, TRUE_POSITIVE_FRACTION, FALSE_POSITIVE_FRACTION )");
MessageFormat mfROC = new MessageFormat(" ( {0}, {1}, {2}, {3}, {4}, {5}, {6}, {7} )");
String[] rocVals = new String[8];
for(int iROC=1; iROC <= nROCThresh; iROC++)
rocVals[0] = Integer.toString(iROC); //INDEX
rocVals[1] = m_df.format(roc.getProbabilityThreshold(iROC));//PROBABILITY
rocVals[2] = Long.toString(roc.getPositives(iROC, true));//TRUE_POSITIVES
rocVals[3] = Long.toString(roc.getNegatives(iROC, false));//FALSE_NEGATIVES
rocVals[4] = Long.toString(roc.getPositives(iROC, false));//FALSE_POSITIVES
rocVals[5] = Long.toString(roc.getNegatives(iROC, true));//TRUE_NEGATIVES
rocVals[6] = m_df.format(roc.getHitRate(iROC));//TRUE_POSITIVE_FRACTION
rocVals[7] = m_df.format(roc.getFalseAlarmRate(iROC));//FALSE_POSITIVE_FRACTION
System.out.println(mfROC.format(rocVals));
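The accuracy and error figures printed from the ConfusionMatrix object reduce to simple counts over the matrix. A standalone sketch of that arithmetic with a hypothetical 2x2 count table (not demo output):

```java
// Sketch: accuracy and error from a confusion matrix stored as
// counts[actual][predicted] -- hypothetical numbers, not demo output.
public class ConfusionStats {
    static double accuracy(long[][] counts) {
        long correct = 0, total = 0;
        for (int a = 0; a < counts.length; a++)
            for (int p = 0; p < counts[a].length; p++) {
                total += counts[a][p];
                if (a == p) correct += counts[a][p];   // diagonal = correct
            }
        return (double) correct / total;
    }

    static double error(long[][] counts) {
        return 1.0 - accuracy(counts);
    }

    public static void main(String[] args) {
        long[][] m = { {900, 100}, {50, 950} };   // 2000 cases, 1850 correct
        System.out.println(accuracy(m));          // 0.925
        System.out.println(error(m));             // 0.075
    }
}
```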
private static void displayTable(String tableName, String whereClause, String orderByColumn)
StringBuffer emptyCol = new StringBuffer(" ");
java.sql.Connection dbConn =
((OraConnection)m_dmeConn).getDatabaseConnection();
PreparedStatement pStmt = null;
ResultSet rs = null;
try
pStmt = dbConn.prepareStatement("SELECT * FROM " + tableName + " " + whereClause + " " + orderByColumn);
rs = pStmt.executeQuery();
ResultSetMetaData rsMeta = rs.getMetaData();
int colCount = rsMeta.getColumnCount();
StringBuffer header = new StringBuffer();
System.out.println("Table : " + tableName);
//Build table header
for(int iCol=1; iCol<=colCount; iCol++)
String colName = rsMeta.getColumnName(iCol);
header.append(emptyCol.replace(0, colName.length(), colName));
emptyCol = new StringBuffer(" ");
System.out.println(header.toString());
//Write table data
while(rs.next())
StringBuffer rowContent = new StringBuffer();
for(int iCol=1; iCol<=colCount; iCol++)
int sqlType = rsMeta.getColumnType(iCol);
Object obj = rs.getObject(iCol);
String colContent = null;
if(obj instanceof java.lang.Number) {
  try {
    BigDecimal bd = (BigDecimal)obj;
    if(bd.scale() > 5) {
      colContent = m_df.format(obj);
    } else {
      colContent = bd.toString();
    }
  } catch(Exception anyExp) {
    colContent = m_df.format(obj);
  }
} else {
  if(obj == null)
    colContent = "NULL";
  else
    colContent = obj.toString();
}
rowContent.append(" "+emptyCol.replace(0, colContent.length(), colContent));
emptyCol = new StringBuffer(" ");
System.out.println(rowContent.toString());
} catch(Exception anySqlExp) {
anySqlExp.printStackTrace();
}//Ignore
private static void createTableForTestMetrics(String applyOutputTableName,
String testDataName,
String testMetricsInputTableName)
//0. need to execute the following in the schema
String sqlCreate =
"create table " + testMetricsInputTableName + " as " +
"select a.id as id, prediction, probability, affinity_card " +
"from " + testDataName + " a, " + applyOutputTableName + " b " +
"where a.id = b.id";
java.sql.Connection dbConn = ((OraConnection) m_dmeConn).getDatabaseConnection();
Statement stmt = null;
try
stmt = dbConn.createStatement();
stmt.executeUpdate( sqlCreate );
catch( Exception anySqlExp )
System.out.println( anySqlExp.getMessage() );
anySqlExp.printStackTrace();
finally
try
stmt.close();
catch( SQLException sqlExp ) {}
private static void clean()
java.sql.Connection dbConn =
((OraConnection) m_dmeConn).getDatabaseConnection();
Statement stmt = null;
// Drop apply output table
try
stmt = dbConn.createStatement();
stmt.executeUpdate("DROP TABLE TREE_APPLY_OUTPUT1_JDM");
} catch(Exception anySqlExp) {}//Ignore
finally
try
stmt.close();
catch( SQLException sqlExp ) {}
try
stmt = dbConn.createStatement();
stmt.executeUpdate("DROP TABLE TREE_APPLY_OUTPUT2_JDM");
} catch(Exception anySqlExp) {}//Ignore
finally
try
stmt.close();
catch( SQLException sqlExp ) {}
try
stmt = dbConn.createStatement();
stmt.executeUpdate("DROP TABLE TREE_APPLY_OUTPUT3_JDM");
} catch(Exception anySqlExp) {}//Ignore
finally
try
stmt.close();
catch( SQLException sqlExp ) {}
// Drop apply output table created for test metrics task
try
stmt = dbConn.createStatement();
stmt.executeUpdate("DROP TABLE DT_TEST_APPLY_OUTPUT_COST_JDM");
} catch(Exception anySqlExp) {}//Ignore
finally
try
stmt.close();
catch( SQLException sqlExp ) {}
try
stmt = dbConn.createStatement();
stmt.executeUpdate("DROP TABLE DT_TEST_APPLY_OUTPUT_JDM");
} catch(Exception anySqlExp) {}//Ignore
finally
try
stmt.close();
catch( SQLException sqlExp ) {}
//Drop the model
try {
m_dmeConn.removeObject( "treeModel_jdm", NamedObject.model );
} catch(Exception jdmExp) {}
// drop test metrics result: created by TestMetricsTask
try {
m_dmeConn.removeObject( "dtTestMetricsWithCost_jdm", NamedObject.testMetrics );
} catch(Exception jdmExp) {}
try {
m_dmeConn.removeObject( "dtTestMetrics_jdm", NamedObject.testMetrics );
} catch(Exception jdmExp) {}

Hi,
I am not sure whether this will help, but someone else was getting an error with a java.sql.SQLException: Unsupported feature. Here is a link to the fix: http://saloon.javaranch.com/cgi-bin/ubb/ultimatebb.cgi?ubb=get_topic&f=3&t=007947
Best wishes
Michael -
Java RFC lookup from XSLT mapping
I tried to implement a generic Java RFC Lookup class to be called as a Java extension from my XSLT mapping. I found the How-To-Guide "Easy RFC lookup from XSLT mappings using a Java helper class" ([Easy RFC lookup pdf site|http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/05a3d62e-0a01-0010-14bc-adc8efd4ee14?quicklink=index&overridelayout=true]) and tailored the source code to my XSLT program. I am getting the error "Variable '$inputparam' has not been bound to a value". Can anyone tell me what I am missing? I am not familiar with java at all, and only moderately familiar with XSLT.
> I am getting the error "Variable '$inputparam' has not been bound to a value". Can anyone tell me what I am missing?
At runtime the variable does not get a value; you need to assign a value to the variable inputparam. Just curious why you don't use the graphical RFC lookup, which is very easy and requires no Java programming. You only need to configure a receiver RFC communication channel. -
Call RFC from java (j2ee) / call to j2ee from R/3
hello
I've browsed the forum for some time to find out how to:
1.) call an EJB from an R/3 system via RFC
2.) call RFC-enabled function modules on R/3 from within a J2EE environment
but I didn't quite get it, because I was a bit confused by all the techniques mentioned.
What I found out about:
1.) use an EJB (session bean) and JNDI; configure the RFC-Engine service (we use SAP WebAS)
2.) use JCo / JCA
(or all RFC-enabled RFMs are available as web services, but I didn't find anything about this)
So my questions:
are these the preferred techniques to connect J2EE (WebAS) <-> R/3? If not, are there any other, maybe easier, methods?
And last but not least: are there any good online tutorials for this topic?
thanks in advance
franz

Just as a short partial reply.
The generic Java --> RFC method is JCO (it will work on older versions as well), you can think of it as a JDBC driver where R/3 is the database, it behaves very similar in many ways.
EJB development on SAP WAS is really not any different from EJB development on any other J2EE server. The deploy tool is superb: very easy to use, and the JNDI registry, etc., are standard stuff...
ABAP to EJB calls: I haven't looked at this in over a year now, but back then we did a proof of concept based on information at http://help.sap.com and it did work indeed. The only thing back then was that you needed to do a few tweaks to get it to work properly.
As mentioned above, look at the JCO examples and then you can ask more specific questions once you get stuck.
Good Luck!
Cheers,
Kalle