Mixing generic and concrete classes

I am going over the generics tutorial by Gilad Bracha offered by Sun. Something strikes me as wrong
Collection c;
Collection<Part> k = c; //compile-time unchecked warning
Collection<Part> k = (Collection<Part>) c; //compile-time unchecked warning
For me, the first unchecked warning is mildly acceptable, but my honest opinion is that it should be an error, not a warning. Isn't this effectively an automatic narrowing cast? I personally like my casts to be inside of (), not hidden. I could be wrong.
The second unchecked warning, however, should not even be given. If the second warning is acceptable, then why not flag all casts as 'unchecked warnings'?
I have a feeling this has to do with this 'erasure' stuff and the resultant class files. I would appreciate any light you could shed on this for me.

Well, Java is really a hobby for me and I end up doing other stuff for several months at a time. And now I am back into Java. I'm finding that I usually do it in the fall and winter, strange.
Plus 1.5 is so exciting, I'm like a baby all over again with so much to learn!
I am learning that generics is all compile-time, so I kind of understand that there is really no such thing as casting a generic type. Or that it makes no sense since casting and generics live in different "times" for lack of a better term.
I think I get it. The second would give the false impression that a cast is actually taking place, and would thus pollute the notion of casting with this fake cast. And the first is not a warning about that line in particular, but a warning about the fact that you are taking your type safety into your own hands when you subvert the system in this fashion. Which also applies to the second. In fact the cast is immaterial.
OK, I see now how they are the same thing. But why is generic casting even allowed if it's meaningless?
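A small sketch of why both forms are only warnings (the Part class and the string element here are made up): after erasure the two lines compile to exactly the same bytecode, nothing is checked at the "cast", and the failure only shows up later when an element is read back.
import java.util.ArrayList;
import java.util.Collection;

class Part {}

public class ErasureDemo {
    public static void main(String[] args) {
        Collection c = new ArrayList<String>();
        c.add("not a Part");                          // raw add, itself an unchecked warning

        // With or without the explicit cast, the bytecode is the same and nothing is checked here.
        Collection<Part> k = (Collection<Part>) c;    // compile-time unchecked warning

        // The real failure is deferred to the use site, far away from the "cast".
        Part p = k.iterator().next();                 // ClassCastException at runtime
    }
}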

Similar Messages

  • Abstract classes, Interfaces, and concrete classes

    I have another technical interview tomorrow and somehow I keep getting asked the same question and I feel my answer is not really up to par. The question is:
    "What is the advantage of subclassing an abstract class versus concrete class?"
    "What is the difference of using an interface versus an abstract class, which is better to use?"
    For the first question, I usually answer performance is the advantage because you don't have to instantiate the class.
    For the second question, I usually say that you can put implementation in an abstract class and you can't in an interface. I really can't answer the second part to this question.
    Any ideas?

For the first question, I usually answer performance is the advantage because you don't have to instantiate the class.
Try invoking the class B in the following somewhere in another class:
abstract class A {
   A() {
      System.out.println("abstract instantiated");
   }
}
class B extends A {
   B() { super(); }
}
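A minimal sketch of "invoking the class B" from another class (the Demo class name is made up); it shows that constructing the concrete subclass still runs the abstract superclass's constructor:
public class Demo {
    public static void main(String[] args) {
        B b = new B();   // prints "abstract instantiated" -
                         // the abstract superclass constructor runs as part of new B()
    }
}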

  • Generics and inner classes?

How can I say my inner class uses the same type as its genericized host class?
    Should I just not declare a "generic" type in the inner class?
    The code
public class LinkedList<E> implements java.util.List<E> {
  ... code omitted for brevity ...
  /** An internal implementation of java.util.Iterator. */
  private class Iterator<E> implements java.util.Iterator<E> {
    protected Node<E> current;
    public Iterator() {
      this.current = head; // error here
    }
  ... code omitted for brevity ...
    produces the compiler error
    C:\Java\home\src\linkedlist\LinkedList.java:59: incompatible types
    found   : linkedlist.LinkedList.Node<E>
    required: linkedlist.LinkedList.Node<E>
          this.current = head;
                         ^
I understand the meaning of the compiler error... it's effectively saying that "E" is not the same type within the Iterator class as it is in the parent LinkedList class... What I don't understand is how to make E the same type within the Iterator... if I just leave the <E> off of Iterator<E> then it throws "unchecked operation" warnings... do I just have to put up with these warnings... but no, that can't be right, because java.util.LinkedList has an iterator and it's not throwing unchecked operation compiler warnings... so there has to be a way...
    Thanx all. Keith.
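A minimal sketch of the usual fix, under these assumptions: the list has a Node<E> class and a head field, and the sketch implements Iterable<E> rather than the full java.util.List<E> just to stay short. The key point is that a non-static inner class already sees the outer class's E, so it should not redeclare its own type parameter (and a name other than Iterator avoids clashing with java.util.Iterator):
public class LinkedList<E> implements Iterable<E> {
  private static class Node<E> { E item; Node<E> next; }
  private Node<E> head;

  /** Inner class reuses the outer E instead of declaring its own <E>. */
  private class Itr implements java.util.Iterator<E> {
    private Node<E> current = head;          // same E as the outer LinkedList<E>, no error
    public boolean hasNext() { return current != null; }
    public E next() { E item = current.item; current = current.next; return item; }
    public void remove() { throw new UnsupportedOperationException(); }
  }

  public java.util.Iterator<E> iterator() { return new Itr(); }
}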

    One more dumbshit question...
    Is there a way to do this without the warnings OR the @SuppressWarnings({"unchecked"})
  /**
   * Returns the index of the last occurrence of the specified element in this
   * list, or -1 if this list does not contain the element.
   */
  //@SuppressWarnings({"unchecked"})
  public int lastIndexOf(Object object) {
    int i = 0;
    int last = -1;
    for (Node<E> node = this.head.next; node != null; node = node.next) {
      if (node.item.equals((E) object)) {
        last = i;
      }
      i++;
    }
    return last;
  }
    produces the warning
    C:\Java\home\src\linkedlist\LinkedList.java:313: warning: [unchecked] unchecked cast
    found   : java.lang.Object
    required: E
          if (node.item.equals((E) object)) {
                                   ^
... remembering that List specifies public int lastIndexOf(Object object); as taking a raw Object, not an E element, as I would have expected.
    Thanx all. Keith.
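One observation, offered as a sketch rather than a definitive answer: equals() is itself declared as equals(Object), so the (E) cast buys nothing; dropping it leaves the behaviour unchanged and removes the unchecked warning without @SuppressWarnings. Assuming the same Node/head members as the posted class:
  public int lastIndexOf(Object object) {
    int i = 0;
    int last = -1;
    for (Node<E> node = this.head.next; node != null; node = node.next) {
      if (node.item.equals(object)) {   // equals(Object) accepts any Object; no cast, no warning
        last = i;
      }
      i++;
    }
    return last;
  }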

  • Javadoc, generics and inner classes

    I have implemented a generic class DiGraph with inner classes Vertex and Edge:
public class DiGraph<V,E> implements Iterable<DiGraph<V,E>.Vertex> {
   public Vertex addVertex(V value) {...}
   public Iterator<DiGraph<V,E>.Vertex> iterator() {...}
   public class Vertex implements Iterable<Edge> {
      V value;
   }
   public class Edge {
      E value;
   }
}
When I run Javadoc it yields the following documentation of the class DiGraph:
public class DiGraph extends Object implements Iterable<DiGraph.Vertex>
What I expected was
public class DiGraph extends Object implements Iterable<DiGraph<V,E>.Vertex>
In a similar way the method addVertex appears as:
public DiGraph.Vertex addVertex(V value)
instead of
public DiGraph<V,E>.Vertex addVertex(V value)
This is very confusing, because if I create a graph DiGraph<String,Integer> myGraph and then use, e.g., the method addVertex, I must write:
DiGraph<String,Integer>.Vertex v = myGraph.addVertex("a");
Can anyone explain why the documentation lacks <V,E>?

  • Subclass Vs Concrete class

    Hi,
    What is the exact difference between subclass and concrete class?
    What is the need in subclassing abstract class to again abstract class?
    Thanks in Advance,
    venu.

    Hi,
What is the exact difference between subclass and concrete class?
"Subclass" defines a class's relationship to another class.
class A extends B
A is a subclass of B.
Every class in Java except Object is a subclass of some other class.
A concrete class is one that is not abstract.
class A {}
abstract class B {}
Class A is concrete. Class B is not.
A class can be both a subclass and concrete. In fact, every single concrete class except Object is a subclass.
What is the need in subclassing abstract class to again abstract class?
You would do that when you're able to provide concrete implementations for some but not all of the abstract methods.
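A small sketch of that "abstract extends abstract" case; the class names here are made up:
abstract class Shape {
    abstract double area();
    abstract String label();
}

// Still abstract: it implements label() but leaves area() to its concrete subclasses.
abstract class LabelledShape extends Shape {
    String label() { return getClass().getSimpleName(); }
}

class Square extends LabelledShape {
    double side = 2.0;
    double area() { return side * side; }   // last abstract method implemented, so Square is concrete
}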

  • Problem in generic value copier class / reflection and generic classes

    Hello experts,
I am trying to achieve the following and have been struggling for quite some time now. Can someone please give an assessment of whether this is possible?
I am trying to write a generic data copy method. It searches for all (parameterless) getter methods in the source object that have a corresponding setter method (with same name but prefixed by "set" instead of "get" and with exactly one parameter) in the destination object.
For each pair I find I do the following: If the param type of the setter (T2) is assignable from the return type of the getter (T1), I just assign the value. If the types are not compatible, I want to instantiate a new instance of T2, assign it via the setter, and invoke copyData recursively on the object I get from the getter (as source) and the newly created instance (as destination). The assumption here is that the occurring source and destination objects are incompatible but have matching getter and setter names, and that at the leaves of the object tree the types of the getters and setters are compatible, so that the recursion ends.
The core of the problem I am struggling with is the step where I instantiate the new destination object. If T2 is a non-generic type, this is straightforward. However, imagine T1 and T2 are parametrized collections: T1 is List<T3> and T2 is List<T4>. Then I need special handling of the collection. I can easily iterate over the elements of the source List and get the types of the elements, but I can only instantiate a raw, unparametrized version of the destination List. Further, I cannot create elements of T4, add them to the list of T2 and go into recursion, since the information that the inner type of the destination list is T4 is not available at run-time.
public class Source {
   T1 getA();
   void setA(T1 x);
}
public class Dest {
   T2 getA();
   void setA(T2 x);
}
public class BeanDataCopier {
   public static void copyData(Object source, Object destination) {
      for (Method getterMethod : sourceGetterMethods) {
         ... // find matching getter and setter names
         Class sourceParamT = [class of return value of the getter];
         Class destParamT = [class of single param of the setter];
         // special handling for collections - I could use some help here
         // if types are not compatible
         Object destParam = destParamT.newInstance();
         Object sourceParam = source.[invoke getter method];
         copyData(sourceParam, destParam);
      }
   }
}
// usage of the method
Source s = new Source(); // this is an example, I do not know the type of s at copying time
Dest d = new Dest();     // the same is true for d
// initialize s in a complicated way (actually JAX-B does this)
// copy values of s to d
BeanDataCopier.copyData(s, d);
// now d should have copied values from s
Can you provide me with any alternative approaches to implement this "duck typing" behaviour when copying properties?
    Best regards,
    Patrik
PS: You might have guessed my overall use case: I am sending an object tree over a web service. On the server side, the web service operation has a deeply nested object structure as the return type. On the client side, the resulting object tree has instances not of the original classes, but of client classes generated by Axis. The original and generated classes are of different types but have identically named getter and setter methods (which again have incompatible parameter types that nevertheless have consistent names). On the client side, I want to simply create an object of the original class and have the values of the client object (including the whole object tree) copied into it.
    Edited by: Patrik_Spiess on Sep 3, 2008 5:09 AM
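One detail that may help with the "inner type of the destination list is not available at run-time" part: erasure removes the element type from the list instance, but the declared signature of the setter still carries it and can be read reflectively. A minimal sketch under that assumption (the class and helper names are made up):
import java.lang.reflect.Method;
import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;

public class SetterIntrospection {
    /** Returns the declared element type of a one-argument setter like setA(List<T4> x), or null. */
    static Class<?> elementTypeOfSetter(Method setter) {
        Type param = setter.getGenericParameterTypes()[0];
        if (param instanceof ParameterizedType) {
            Type arg = ((ParameterizedType) param).getActualTypeArguments()[0];
            if (arg instanceof Class) {
                return (Class<?>) arg;   // e.g. T4, resolved to a concrete class on the Dest side
            }
        }
        return null;                     // raw type or unresolved type variable: nothing recoverable
    }
}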

    As I understand your use case this is already supported by Axis with beanMapping [http://ws.apache.org/axis/java/user-guide.html#EncodingYourBeansTheBeanSerializer]
    - Roy

  • Concrete classes implement abstract class and implements the interface

    I have one query..
In the Java collection framework, concrete classes extend the abstract classes and implement the interface. What is the reason behind extending the class and implementing the interface when the abstract class actually claims to implement that interface?
For example:
Class Vector extends AbstractList and implements List, ...
But the abstract class AbstractList implements List, so the class Vector need not explicitly implement the interface List.
    So, what is the reason behind this explicit definition...?
    If anybody knows please let me know..
    Thanx
    Rajendra.

    Why do you post this question again? You already asked this once in another thread and it has been extensively debated in that thread: http://forum.java.sun.com/thread.jsp?forum=31&thread=347682

  • Concrete classes

Hey, can someone show me the code for how to write a concrete class for automobiles? I've written the main abstract superclass, which should contain the commonalities of serial number and quantity, but I'm not sure how to write out the first concrete subclass and what it should contain. If someone can provide the code, that'd be cool, thanks.
    { from EdenMonaro }
    "I'd start with a pencil and a blank sheet of paper. An abstract Asset class and concrete child classes seem to be a good place to start. What information is common to all your assets? For example, asset number, asset description, purchase date, purchase price, accumulated depreciation, current location.
    What information is unique to the child classes? Serial numbers on automobiles and electronic gear? Automobiles have lots of identification numbers: engine numbers, VINs, registration, insurance. CDs have a title and artist(s) with tracks. Furniture is different again, type (desk, chair, etc), colour, finish.
    You need another class with collections of assets, perhaps AssetRegister. This is the class you put methods in to list and search assets. "
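A small sketch of the structure that advice describes, using the fields from the question plus one automobile-specific field; all names here are illustrative, not a definitive design:
abstract class Asset {
    private final String serialNumber;
    private int quantity;

    Asset(String serialNumber, int quantity) {
        this.serialNumber = serialNumber;
        this.quantity = quantity;
    }

    String getSerialNumber() { return serialNumber; }
    int getQuantity() { return quantity; }
}

class Automobile extends Asset {
    private final String vin;   // unique to automobiles

    Automobile(String serialNumber, int quantity, String vin) {
        super(serialNumber, quantity);
        this.vin = vin;
    }

    String getVin() { return vin; }
}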

Implement the asset tracking program. Allow the user
to add, modify, and delete electronics, automobiles,
furniture, and CDs. Allow the user to list the assets
by category and search for an asset by its serial
number.
Write an add method, a modify method, a delete method, and your search methods. Make these generic so that they can be used for all categories. For the GUI, are you going to use a servlet/jsp/html or an application/applet? You need to start writing the pieces that I mentioned and come back with specific implementation questions.

  • Adding a jar to the classpath of an executable jar (mixing -jar and -cp)

    Hello,
    frankly I hesitated over posting this to "New to Java"; my apologies (but also, eternal gratefulness) if there is an ultra-simple answer I have overlooked...
    I integrate a black-box app (I'm not supposed to have the source) that comes packaged as an executable jar (with a Manifest.MF that specifies the main class and a bunch of dependent jars), along with a few dependent jars and a startup script. Long story short, the application code supports adding jars in the classpath, but I can't find a painless way to add a jar in its "classpath".
    The app's "vendor" (another department of my customer company) has a slow turnaround on support requests, so while waiting for their suggestion as to how exactly to integrate custom jars, I'm trying to find a solution at the pure Java level.
The startup script features a "just run the jar" launch line:
java -jar startup.jar
I tried tweaking this line to add a custom jar to the classpath:
java -cp mycustomclasses.jar -jar startup.jar
But that didn't seem to work (NoClassDefFound at the point where the extension class is supposed to be loaded).
I tried various combinations of order, -cp/-classpath, using the CLASSPATH environment variable, ... and eventually gave up and devised a manual launch line, which obviously worked:
java -cp startup.jar;dependency1.jar;dependency2.jar;mycustomclasses.jar fully.qualified.name.of.StartupClass
I resent this approach though, which not only makes me have to know the main class of the app, but also forces me to specify all the dependencies explicitly (the whole content of the manifest's Class-Path entry).
    I'm surprised there isn't another approach: really, can't I mix -jar and -cp options?
- [url http://download.oracle.com/javase/6/docs/technotes/tools/windows/classpath.html]This document (apparently a bible on the CLASSPATH), pointed out by a reputed forum member recently, does not document the -jar option.
- the [url http://download.oracle.com/javase/tutorial/deployment/jar/run.html]Java tutorial describes how to use the -jar option, but does not mention how it could play along with -cp.
    Thanks in advance, and best regards,
    J.
    Edited by: jduprez on Dec 7, 2010 11:35 PM
    Ahem, the "Java application launcher" page bundled with the JDK doc (http://download.oracle.com/javase/6/docs/technotes/tools/windows/java.html) specifies that +When you use [the -jar] option, the JAR file is the source of all user classes, and other user class path settings are ignored+
    So this behavior is deliberate indeed... my chances diminish to find a way around other than specifying the full classpath and main class...
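For what it's worth, one common workaround (sketched here with made-up file names, and not necessarily the trick mentioned in the replies) is a tiny wrapper JAR whose manifest names the real main class and lists the other JARs on its Class-Path, so the launch line can stay a plain java -jar. It still means copying the Main-Class (and the dependency list) out of startup.jar's manifest once.
MANIFEST.MF of wrapper.jar (Class-Path entries are resolved relative to wrapper.jar's directory):
Main-Class: fully.qualified.name.of.StartupClass
Class-Path: startup.jar dependency1.jar dependency2.jar mycustomclasses.jar
Build and launch:
jar cfm wrapper.jar MANIFEST.MF
java -jar wrapper.jar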

I would have thought that the main-class attribute of the JAR you name in the -jar option is the one that is executed.
Then I still have the burden of copying that from the initial startup.jar's manifest. Slightly less annoying than copying the whole Class-Path entry, but it's an impediment to integrating it as a "black box".
    The 'cascading' behavior is implicit in the specification
    I know at least one regular in addition to me that would issue some irony about putting those terms together :o)
    Anyway, thank you for confirming the original issue, and merci beaucoup for your handy "wrapper" trick.
    I'll revisit the post markers once I've actually tried it.
    Best regards,
    Jérôme

  • Abstract Class and polymorphism - class diagram

    Hi,
I'm trying to draw a class diagram for my overall system but I'm not too sure how to deal with classes derived from abstract classes. I'll explain my problem using the classic shape inheritance scenario...
myClass has a member of type Shape, but when the program is running Shape gets instantiated to a circle, square or triangle, for example:
Shape s = new Circle();
since they are shapes. But at no stage is the class Shape instantiated.
    I then call things like s.Area();
    On my class diagram should there be any lines going from myClass directly to circle or triangle, or should a line just be joining myClass to Shape class?
    BTW - is s.Area() polymorphism?
    Thanks,
    Conor.

    Sorry, my drawing did not display very well.
    If you have MyClass, and it has a class variable of type Shape, and the class is responsible for creating MyClass, use Composition on your UML diagram to link Shape and MyClass.
    If you have MyClass, and it has a class variable of type Shape, and the class is created elsewhere, use Aggregation on your UML diagram to link Shape and MyClass.
If you have MyClass, and it is used in method signatures, but it is not a class variable, use Dependency on your UML diagram to link Shape and MyClass. The arrow will point to Shape.
    Shape and instances of Circle, Triangle, Square, etc. will be linked using a Generalization on your UML diagram. The arrow will always point to Shape.
    Anything that is abstract (class, method, variable, etc.) should be italicized. Concrete items (same list) should be formatted normally.
    BTW, the distinction between Composition, Aggregation and Dependency will vary from project to project or class to class. It's a gray area. Just be consistent or follow whatever guidelines have been established.
    - Saish

  • Generic and Inheritance, how to use them together?

Hi guys, I am trying to design some components, and they will use both Generics and Inheritance.
    Basically I have a class
public class GModel<C> {
    protected C data;
    public C getData() { return data; }
}
// And its subclass
public class ABCModel<C> extends GModel {
}
// My guess is that when I do
ABCModel abcModel = new ABCModel<MyObject>();
the data attribute should be of MyObject type, is that right?
So, using NetBeans, when I do
MyObject obj = abcModel.getData();
I should not need any casting, since the generics would tell that the return object from getData would be MyObject.
Is this right? If yes, did someone try to do that on NetBeans?
    Thanks and Regards

public class GModel <C>{
public class ABCModel <C> extends GModel{
should be
public class ABCModel <C> extends GModel<C>{
ABCModel abcModel = new ABCModel<MyObject>();
should be
ABCModel<MyObject> abcModel = new ABCModel<MyObject>();
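A quick sketch of why that correction matters (MyObject is just a placeholder class here): extending the raw GModel erases C in the subclass, so getData() would return Object; passing C through to the superclass keeps the type, and declaring the variable raw would likewise throw the type information away.
class MyObject {}

class GModel<C> {
    protected C data;
    public C getData() { return data; }
}

// Passes its own C through to the superclass, so data keeps its type.
class ABCModel<C> extends GModel<C> {
}

class Usage {
    void demo() {
        ABCModel<MyObject> abcModel = new ABCModel<MyObject>();
        MyObject obj = abcModel.getData();   // compiles with no cast
    }
}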

  • How to build secondary keys dynamicaly in a generic entity definition class

    Hi developers,
I am trying to rebuild a generic entity definition class, where every instantiated entity type can have its own set of secondary fields. At first it seemed to be no problem, but now my problem is: how do I bind the annotation @SecondaryKey... to the particular fields while constructing the entity that shall be used?
I hope I could make the problem clear?
    Thanks for any help!
    ApeVeloce

    Hi Joachim,
    After understanding more about your application (off forum) I think I can answer your question more clearly. The general problem is: How can a dynamic, user defined schema be represented in Berkeley DB for multiple entity types, each with any number of properties, and arbitrary secondary keys for selected properties?
    There are many ways to do this and I'll describe one approach. I do not recommend using DPL for this case. In our documentation,
    http://www.oracle.com/technology/documentation/berkeley-db/je/PersistenceAPI/introduction.html
    we say: "Note that we recommend you use the DPL if all you want to do is make classes with a relatively static schema to be persistent." For a dynamic schema that is not represented using Java classes, the base API provides more flexibility.
    The common requirements for a dynamic schema are:
    1) Each entity has a type, which is roughly equivalent to a Java class. But unlike classes, types may be added dynamically. Queries are normally limited to entities of a particular type; this is not true for all applications, but I will make this assumption here.
    2) Each entity contains, at least logically, a Map of key-value properties. The property name is equivalent to a Java field name and the property value is equivalent to the field value. Properties may be added dynamically.
    3) Any number of properties for a given type may be defined as secondary keys. Secondary keys may be added dynamically.
    One approach for implementing this is as follows.
A) A Database per entity type is created. This allows queries against a given type to be performed within a single database. For simplicity the database name can be set to the name of the entity type.
    The alternative is to have a single Database for all entity types. This works well for applications that want to query against properties that are common to all entity types. While such a schema is not common, it does occur; for example, an LDAP schema has this characteristic. If you do store all types in a single Database, and you want to query against a single entity type, you'll need to make the entity type a secondary key and perform a BDB "join" (see Database.join) that includes the type key.
    When queries are performed against a single type it is more efficient to have a Database per type. This avoids the extra secondary key for entity type, and the overhead of using it in each query.
    Another advantage to using a Database per type is that you can remove the entire Database in one step if the type is deleted. Calling Environment.removeDatabase is more efficient than removing the individual records.
    B) The properties for each entity can be stored as serialized key-value pairs. There are several ways to do this, but they are all roughly equivalent:
B-1) If you use a SerialBinding, Java serialization does the work for you, but Java serialization produces larger data and is slower than using a TupleBinding.
    B-2) A TupleBinding can be implemented to write the key-value pairs. Each key is the string name. The value can also be a String, if all your data values are already Strings or if you convert values to/from Strings when they are used.
    B-3) If you have typed values (the type of each property is known) and you don't want to store values as Strings, you can write each value as a particular type in your TupleBinding. See TupleInput and TupleOutput for details.
    If your property values are not simple types (String, Integer, etc) then this approach is more complex. When using a TupleBinding, you will need custom marshaling code for each complex type. If this is too complex, then consider using a SerialBinding. Java serialization will handle arbitrary complex types, as long as each type is serializable.
Another approach is to store each property in a record of its own in a Properties database. This is a natural idea to consider, because this is what you might do with a relational database. But I do not recommend this approach, because it will be inefficient to reconstruct the entire entity record from the individual property records. With BDB, we are not limited to the relational model, so we can store the entire key-value map easily in a single record.
However, there is one advantage to using a Properties database: if you commonly add or remove a single property, and entities normally have a large number of properties, and you do not commonly reconstruct the entire entity (all properties), then the Properties database approach may be more efficient. I do not think this is common, so I won't describe this approach in detail.
    C) To support any number of secondary keys, you will need to know which properties are secondary keys and you will need to implement the SecondaryKeyCreator or SecondaryMultiKeyCreator interface.
    C-1) Assuming that you have a Database per entity type, the simplest approach is to create a single SecondaryDatabase for each entity type that is associated with the primary Database for that type. The name of the SecondaryDatabase could be the name of the primary Database with a special suffix added.
    In this case, you should implement SecondaryMultiKeyCreator that returns all secondary key properties for a given entity. A two-part key should be returned for each property that is a secondary key -- {name, value} -- where name is property name and value is the value (either as a String or as a binary type using TupleInput/TupleOutput). When you perform queries for a given property, you will need to construct the {name, value} key to do the query.
    The SecondaryDatabase will need to permit duplicates (see SecondaryConfig.setSortedDuplicates) to allow any of the secondary keys to have the same value for more than one entity. If you have secondary keys that are defined to be unique (only one value per entity) then you will need to enforce that restriction yourself by performing a lookup.
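A rough sketch of the C-1 key creator described above, assuming JE's base API; the class name and the two helper methods (deserialize, isSecondaryKey) are placeholders you would back with your own binding and metadata store:
import com.sleepycat.bind.tuple.TupleOutput;
import com.sleepycat.je.DatabaseEntry;
import com.sleepycat.je.SecondaryDatabase;
import com.sleepycat.je.SecondaryMultiKeyCreator;
import java.util.Map;
import java.util.Set;

public class PropertyKeyCreator implements SecondaryMultiKeyCreator {

    public void createSecondaryKeys(SecondaryDatabase secondary,
                                     DatabaseEntry primaryKey,
                                     DatabaseEntry primaryData,
                                     Set<DatabaseEntry> results) {
        // Rebuild the property map from the entity record, then emit one
        // two-part {name, value} key per property that is a secondary key.
        Map<String, String> props = deserialize(primaryData);
        for (Map.Entry<String, String> e : props.entrySet()) {
            if (isSecondaryKey(e.getKey())) {
                TupleOutput out = new TupleOutput();
                out.writeString(e.getKey());
                out.writeString(e.getValue());
                results.add(new DatabaseEntry(out.toByteArray()));
            }
        }
    }

    private Map<String, String> deserialize(DatabaseEntry data) { /* your TupleBinding/SerialBinding */ return null; }
    private boolean isSecondaryKey(String name) { /* consult the stored metadata */ return false; }
}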
    C-2) The alternative is to create a SecondaryDatabase for every property that is a secondary key. The database name could be the primary database name with a suffix identifying the secondary key.
    In this case you can implement either SecondaryKeyCreator or SecondaryMultiKeyCreator, depending on whether multiple values for each secondary key are permitted. And you can configure duplicates for the secondary database only if more than one entity may have the same secondary key value.
    Each secondary key consists only of its value, the name is not included because it is implied by the SecondaryDatabase in which it is stored. This approach is therefore more efficient than C-1 where we have a single SecondaryDatabase for all properties, because the secondary keys are smaller. However, there is the added work of maintaining the SecondaryDatabases each time the schema is changed: You must create and remove the secondary databases as the schema changes.
    For best performance, I recommend C-2.
    D) If you have a dynamic schema, you must store your metadata: the information that describes the names of each entity type, the properties allowed for that type (if restricted), and the list of which properties are secondary keys.
    I recommend storing metadata in a Database rather than in a file. This database could have a special name that cannot be used for any entity type. When metadata changes, you will be adding and removing databases. To ensure consistency, you should change the metadata and change the databases in a single transaction. This is possible only if you store the metadata in a Database.
The Berkeley DB base API provides the flexibility to implement a dynamic schema in a very efficient manner. But be warned that implementing a data store with a dynamic schema is a complex task. As requirements grow over time, the metadata becomes more complex. You may even have to support evolution of metadata. If possible, try to keep things as simple as possible.
    If you want complete flexibility in your schema and/or you want to store complex data types, you may want to consider storing XML. This is possible with Berkeley DB Java Edition since you can store arbitrary data in a Database, but you must implement secondary indexes for the XML data yourself -- this could be a very complex task. If you don't require pure Java, you should consider the Berkeley DB XML product. BDB XML supports many kinds of indexes as well as large documents, XPath, XQuery, and other features.
    Mark

  • How to improve speed of queries that use ORM one table per concrete class

    Hi,
Many tools that do ORM (Object Relational Mapping), like Castor, Hibernate, TopLink, JPOX, etc., have the one-table-per-concrete-class feature that maps objects to the following structure:
CREATE TABLE ABSTRACTPRODUCT (
    ID VARCHAR(8) NOT NULL,
    DESCRIPTION VARCHAR(60) NOT NULL,
    PRIMARY KEY(ID)
)
CREATE TABLE PRODUCT (
    ID VARCHAR(8) NOT NULL REFERENCES ABSTRACTPRODUCT(ID),
    CODE VARCHAR(10) NOT NULL,
    PRICE DECIMAL(12,2),
    PRIMARY KEY(ID)
)
CREATE UNIQUE INDEX iProduct ON Product(code)
CREATE TABLE BOOK (
    ID VARCHAR(8) NOT NULL REFERENCES PRODUCT(ID),
    AUTHOR VARCHAR(60) NOT NULL,
    PRIMARY KEY (ID)
)
CREATE TABLE COMPACTDISK (
    ID VARCHAR(8) NOT NULL REFERENCES PRODUCT(ID),
    ARTIST VARCHAR(60) NOT NULL,
    PRIMARY KEY(ID)
)
Is there a way to improve queries like
    SELECT
        pd.code CODE,   
        abpd.description DESCRIPTION,
        DECODE(bk.id,NULL,cd.artist,bk.author) PERSON
    FROM
        ABSTRACTPRODUCT abpd,
        PRODUCT pd,
        BOOK bk,
        COMPACTDISK cd
    WHERE
        pd.id = abpd.id AND
        bk.id(+) = abpd.id AND
        cd.id(+) = abpd.id AND
        pd.code like '101%'
    or like this:
    SELECT
        pd.code CODE,   
        abpd.description DESCRIPTION,
        DECODE(bk.id,NULL,cd.artist,bk.author) PERSON
    FROM
        ABSTRACTPRODUCT abpd,
        PRODUCT pd,
        BOOK bk,
        COMPACTDISK cd
    WHERE
        pd.id = abpd.id AND
        bk.id(+) = abpd.id AND
        cd.id(+) = abpd.id AND
        abpd.description like '%STARS%' AND
        pd.price BETWEEN 1 AND 10
Think of a table with many rows: is there something inside MaxDB to improve this type of query? Like some annotations in SQL? Or declaring tables that extend another by PK? On other databases I managed this using materialized views, but I think this can be faster just using the PK, am I wrong? Is it better to consolidate all tables into one table? What is the impact on database size with this consolidation?
Note: with consolidation I will lose the NOT NULL constraints on the database side.
Thanks for any insight.
    Clóvis

    Hi Lars,
I don't understand why the optimizer picks that index for TM in the execution plan, and why it doesn't use the join via the KEY column. Note the WHERE clause is "TM.OID = MF.MY_TIPO_MOVIMENTO", on the key column, yet the optimizer uses an index whose indexed column is ID_SYS, which isn't and can't be a primary key because it is not UNIQUE. These are the index columns:
    indexes of TipoMovimento
    INDEXNAME     COLUMNNAME          SORT     COLUMNNO     DATATYPE     LEN     INDEX_USED     FILESTATE     DISABLED
    ITIPOMOVIMENTO     TIPO               ASC     1          VARCHAR          2     220546          OK          NO
    ITIPOMOVIMENTO     ID_SYS               ASC     2          CHAR          6     220546          OK          NO
    ITIPOMOVIMENTO     MY_CONTA_DEBITO          ASC     3          CHAR          8     220546          OK          NO
    ITIPOMOVIMENTO     MY_CONTA_CREDITO     ASC     4          CHAR          8     220546          OK          NO
    ITIPOMOVIMENTO1     ID_SYS               ASC     1          CHAR          6     567358          OK          NO
    ITIPOMOVIMENTO2     DESCRICAO          ASC     1          VARCHAR          60     94692          OK          NO
After I created the index iTituloCobrancaX7 on TituloCobranca(OID,DATA_VENCIMENTO) in a backup instance, I was surprised by the following explain plan:
    OWNER     TABLENAME     COLUMN_OR_INDEX          STRATEGY                    PAGECOUNT     
         TC          ITITULOCOBRANCA1     RANGE CONDITION FOR INDEX          5368     
                   DATA_VENCIMENTO               (USED INDEX COLUMN)          
         MF          OID               JOIN VIA KEY COLUMN               9427     
         TM          OID               JOIN VIA KEY COLUMN               22     
                                  TABLE HASHED          
         PS          OID               JOIN VIA KEY COLUMN               1350     
         BOL          OID               JOIN VIA KEY COLUMN               497     
                                       NO TEMPORARY RESULTS CREATED          
         JDBC_CURSOR_19                    RESULT IS COPIED   , COSTVALUE IS     988
Note that now the optimizer picks the index ITITULOCOBRANCA1 as I expected; if I drop the new index iTituloCobrancaX7 the optimizer still produces this execution plan, and with it the query executes in 110 ms. With that great news I did the same thing in the production system, but the execution plan doesn't change, and I still get a long execution time, this time 413516 ms. Maybe the problem is how the optimizer measures my tables.
I checked in DBAnalyser that the problem is the catalog cache hit rate (we discussed this at [catalog cache hit rate, how to increase?|;
) and the low selectivity of this SQL command. Because of this, to achieve better selectivity I must have an index on MF.MY_SACADO, MF.TIPO and TC.DATA_VENCIMENTO, as explained in previous posts. Since this type of index isn't possible inside MaxDB, I have no way to speed up this type of query without changing the table structure.
Can the MaxDB developers develop this type of index? Or are there no plans to build a feature like this?
If not, I must create another schema and consolidate tables to speed up queries on my system, but with this consolidation I will get more overhead. I must solve the low selectivity, because I think that if the data in the tables grows, the query becomes impossible. I see that CREATE INDEX supports FUNCTION; maybe a FUNCTION that joins data from two tables can solve this?
    about instance configuration it is:
    Machine:
    Version:       '64BIT Kernel'
    Version:       'X64/LIX86 7.6.03   Build 007-123-157-515'
    Version:       'FAST'
    Machine:       'x86_64'
    Processors:    2 ( logical: 8, cores: 8 )
    data volumes:
    ID     MODE     CONFIGUREDSIZE     USABLESIZE     USEDSIZE     USEDSIZEPERCENTAGE     DROPVOLUME     TOTALCLUSTERAREASIZE     RESERVEDCLUSTERAREASIZE     USEDCLUSTERAREASIZE     PATH     
    1     NORMAL     4194304          4194288          379464          9               NO          0               0               0               /db/SPDT/data/data01.dat     
    2     NORMAL     4194304          4194288          380432          9               NO          0               0               0               /db/SPDT/data/data02.dat     
    3     NORMAL     4194304          4194288          379184          9               NO          0               0               0               /db/SPDT/data/data03.dat     
    4     NORMAL     4194304          4194288          379624          9               NO          0               0               0               /db/SPDT/data/data04.dat     
    5     NORMAL     4194304          4194288          380024          9               NO          0               0               0               /db/SPDT/data/data05.dat
    log volumes:
    ID     CONFIGUREDSIZE     USABLESIZE     PATH               MIRRORPATH
    1     51200          51176          /db/SPDT/log/log01.dat     ?
    parameters:
    KERNELVERSION                         KERNEL    7.6.03   BUILD 007-123-157-515
    INSTANCE_TYPE                         OLTP
    MCOD                                  NO
    _SERVERDB_FOR_SAP                     YES
    _UNICODE                              NO
    DEFAULT_CODE                          ASCII
    DATE_TIME_FORMAT                      ISO
    CONTROLUSERID                         DBM
    CONTROLPASSWORD                       
    MAXLOGVOLUMES                         2
    MAXDATAVOLUMES                        11
    LOG_VOLUME_NAME_001                   /db/SPDT/log/log01.dat
    LOG_VOLUME_TYPE_001                   F
    LOG_VOLUME_SIZE_001                   6400
    DATA_VOLUME_NAME_0005                 /db/SPDT/data/data05.dat
    DATA_VOLUME_NAME_0004                 /db/SPDT/data/data04.dat
    DATA_VOLUME_NAME_0003                 /db/SPDT/data/data03.dat
    DATA_VOLUME_NAME_0002                 /db/SPDT/data/data02.dat
    DATA_VOLUME_NAME_0001                 /db/SPDT/data/data01.dat
    DATA_VOLUME_TYPE_0005                 F
    DATA_VOLUME_TYPE_0004                 F
    DATA_VOLUME_TYPE_0003                 F
    DATA_VOLUME_TYPE_0002                 F
    DATA_VOLUME_TYPE_0001                 F
    DATA_VOLUME_SIZE_0005                 524288
    DATA_VOLUME_SIZE_0004                 524288
    DATA_VOLUME_SIZE_0003                 524288
    DATA_VOLUME_SIZE_0002                 524288
    DATA_VOLUME_SIZE_0001                 524288
    DATA_VOLUME_MODE_0005                 NORMAL
    DATA_VOLUME_MODE_0004                 NORMAL
    DATA_VOLUME_MODE_0003                 NORMAL
    DATA_VOLUME_MODE_0002                 NORMAL
    DATA_VOLUME_MODE_0001                 NORMAL
    DATA_VOLUME_GROUPS                    1
    LOG_BACKUP_TO_PIPE                    NO
    MAXBACKUPDEVS                         2
    LOG_MIRRORED                          NO
    MAXVOLUMES                            14
    LOG_IO_BLOCK_COUNT                    8
    DATA_IO_BLOCK_COUNT                   64
    BACKUP_BLOCK_CNT                      64
    _DELAY_LOGWRITER                      0
    LOG_IO_QUEUE                          50
    _RESTART_TIME                         600
    MAXCPU                                8
    MAX_LOG_QUEUE_COUNT                   0
    USED_MAX_LOG_QUEUE_COUNT              8
    LOG_QUEUE_COUNT                       1
    MAXUSERTASKS                          500
    _TRANS_RGNS                           8
    _TAB_RGNS                             8
    _OMS_REGIONS                          0
    _OMS_RGNS                             7
    OMS_HEAP_LIMIT                        0
    OMS_HEAP_COUNT                        8
    OMS_HEAP_BLOCKSIZE                    10000
    OMS_HEAP_THRESHOLD                    100
    OMS_VERS_THRESHOLD                    2097152
    HEAP_CHECK_LEVEL                      0
    _ROW_RGNS                             8
    RESERVEDSERVERTASKS                   16
    MINSERVERTASKS                        28
    MAXSERVERTASKS                        28
    _MAXGARBAGE_COLL                      1
    _MAXTRANS                             4008
    MAXLOCKS                              120080
    _LOCK_SUPPLY_BLOCK                    100
    DEADLOCK_DETECTION                    4
    SESSION_TIMEOUT                       180
    OMS_STREAM_TIMEOUT                    30
    REQUEST_TIMEOUT                       5000
    _IOPROCS_PER_DEV                      2
    _IOPROCS_FOR_PRIO                     0
    _IOPROCS_FOR_READER                   0
    _USE_IOPROCS_ONLY                     NO
    _IOPROCS_SWITCH                       2
    LRU_FOR_SCAN                          NO
    _PAGE_SIZE                            8192
    _PACKET_SIZE                          131072
    _MINREPLY_SIZE                        4096
    _MBLOCK_DATA_SIZE                     32768
    _MBLOCK_QUAL_SIZE                     32768
    _MBLOCK_STACK_SIZE                    32768
    _MBLOCK_STRAT_SIZE                    16384
    _WORKSTACK_SIZE                       8192
    _WORKDATA_SIZE                        8192
    _CAT_CACHE_MINSIZE                    262144
    CAT_CACHE_SUPPLY                      131072
    INIT_ALLOCATORSIZE                    262144
    ALLOW_MULTIPLE_SERVERTASK_UKTS        NO
    _TASKCLUSTER_01                       tw;al;ut;2000*sv,100*bup;10*ev,10*gc;
    _TASKCLUSTER_02                       ti,100*dw;63*us;
    _TASKCLUSTER_03                       equalize
    _DYN_TASK_STACK                       NO
    _MP_RGN_QUEUE                         YES
    _MP_RGN_DIRTY_READ                    DEFAULT
    _MP_RGN_BUSY_WAIT                     DEFAULT
    _MP_DISP_LOOPS                        2
    _MP_DISP_PRIO                         DEFAULT
    MP_RGN_LOOP                           -1
    _MP_RGN_PRIO                          DEFAULT
    MAXRGN_REQUEST                        -1
    _PRIO_BASE_U2U                        100
    _PRIO_BASE_IOC                        80
    _PRIO_BASE_RAV                        80
    _PRIO_BASE_REX                        40
    _PRIO_BASE_COM                        10
    _PRIO_FACTOR                          80
    _DELAY_COMMIT                         NO
    _MAXTASK_STACK                        512
    MAX_SERVERTASK_STACK                  500
    MAX_SPECIALTASK_STACK                 500
    _DW_IO_AREA_SIZE                      50
    _DW_IO_AREA_FLUSH                     50
    FBM_VOLUME_COMPRESSION                50
    FBM_VOLUME_BALANCE                    10
    _FBM_LOW_IO_RATE                      10
    CACHE_SIZE                            262144
    _DW_LRU_TAIL_FLUSH                    25
    XP_DATA_CACHE_RGNS                    0
    _DATA_CACHE_RGNS                      64
    XP_CONVERTER_REGIONS                  0
    CONVERTER_REGIONS                     8
    XP_MAXPAGER                           0
    MAXPAGER                              64
    SEQUENCE_CACHE                        1
    _IDXFILE_LIST_SIZE                    2048
    VOLUMENO_BIT_COUNT                    8
    OPTIM_MAX_MERGE                       500
    OPTIM_INV_ONLY                        YES
    OPTIM_CACHE                           NO
    OPTIM_JOIN_FETCH                      0
    JOIN_SEARCH_LEVEL                     0
    JOIN_MAXTAB_LEVEL4                    16
    JOIN_MAXTAB_LEVEL9                    5
    _READAHEAD_BLOBS                      32
    CLUSTER_WRITE_THRESHOLD               80
    CLUSTERED_LOBS                        NO
    RUNDIRECTORY                          /var/opt/sdb/data/wrk/SPDT
    OPMSG1                                /dev/console
    OPMSG2                                /dev/null
    _KERNELDIAGFILE                       knldiag
    KERNELDIAGSIZE                        800
    _EVENTFILE                            knldiag.evt
    _EVENTSIZE                            0
    _MAXEVENTTASKS                        2
    _MAXEVENTS                            100
    _KERNELTRACEFILE                      knltrace
    TRACE_PAGES_TI                        2
    TRACE_PAGES_GC                        20
    TRACE_PAGES_LW                        5
    TRACE_PAGES_PG                        3
    TRACE_PAGES_US                        10
    TRACE_PAGES_UT                        5
    TRACE_PAGES_SV                        5
    TRACE_PAGES_EV                        2
    TRACE_PAGES_BUP                       0
    KERNELTRACESIZE                       5369
    EXTERNAL_DUMP_REQUEST                 NO
    _AK_DUMP_ALLOWED                      YES
    _KERNELDUMPFILE                       knldump
    _RTEDUMPFILE                          rtedump
    _UTILITY_PROTFILE                     dbm.utl
    UTILITY_PROTSIZE                      100
    _BACKUP_HISTFILE                      dbm.knl
    _BACKUP_MED_DEF                       dbm.mdf
    _MAX_MESSAGE_FILES                    0
    _SHMKERNEL                            44601
    __PARAM_CHANGED___                    0
    __PARAM_VERIFIED__                    2008-05-03 23:12:55
    DIAG_HISTORY_NUM                      2
    DIAG_HISTORY_PATH                     /var/opt/sdb/data/wrk/SPDT/DIAGHISTORY
    _DIAG_SEM                             1
    SHOW_MAX_STACK_USE                    NO
    SHOW_MAX_KB_STACK_USE                 NO
    LOG_SEGMENT_SIZE                      2133
    _COMMENT                              
    SUPPRESS_CORE                         YES
    FORMATTING_MODE                       PARALLEL
    FORMAT_DATAVOLUME                     YES
    OFFICIAL_NODE                         
    UKT_CPU_RELATIONSHIP                  NONE
    HIRES_TIMER_TYPE                      CPU
    LOAD_BALANCING_CHK                    30
    LOAD_BALANCING_DIF                    10
    LOAD_BALANCING_EQ                     5
    HS_STORAGE_DLL                        libhsscopy
    HS_SYNC_INTERVAL                      50
    USE_OPEN_DIRECT                       YES
    USE_OPEN_DIRECT_FOR_BACKUP            NO
    SYMBOL_DEMANGLING                     NO
    EXPAND_COM_TRACE                      NO
    JOIN_TABLEBUFFER                      128
    SET_VOLUME_LOCK                       YES
    SHAREDSQL                             YES
    SHAREDSQL_CLEANUPTHRESHOLD            25
    SHAREDSQL_COMMANDCACHESIZE            262144
    MEMORY_ALLOCATION_LIMIT               0
    USE_SYSTEM_PAGE_CACHE                 YES
    USE_COROUTINES                        YES
    FORBID_LOAD_BALANCING                 YES
    MIN_RETENTION_TIME                    60
    MAX_RETENTION_TIME                    480
    MAX_SINGLE_HASHTABLE_SIZE             512
    MAX_HASHTABLE_MEMORY                  5120
    ENABLE_CHECK_INSTANCE                 YES
    RTE_TEST_REGIONS                      0
    HASHED_RESULTSET                      YES
    HASHED_RESULTSET_CACHESIZE            262144
    CHECK_HASHED_RESULTSET                0
    AUTO_RECREATE_BAD_INDEXES             NO
    AUTHENTICATION_ALLOW                  
    AUTHENTICATION_DENY                   
    TRACE_AK                              NO
    TRACE_DEFAULT                         NO
    TRACE_DELETE                          NO
    TRACE_INDEX                           NO
    TRACE_INSERT                          NO
    TRACE_LOCK                            NO
    TRACE_LONG                            NO
    TRACE_OBJECT                          NO
    TRACE_OBJECT_ADD                      NO
    TRACE_OBJECT_ALTER                    NO
    TRACE_OBJECT_FREE                     NO
    TRACE_OBJECT_GET                      NO
    TRACE_OPTIMIZE                        NO
    TRACE_ORDER                           NO
    TRACE_ORDER_STANDARD                  NO
    TRACE_PAGES                           NO
    TRACE_PRIMARY_TREE                    NO
    TRACE_SELECT                          NO
    TRACE_TIME                            NO
    TRACE_UPDATE                          NO
    TRACE_STOP_ERRORCODE                  0
    TRACE_ALLOCATOR                       0
    TRACE_CATALOG                         0
    TRACE_CLIENTKERNELCOM                 0
    TRACE_COMMON                          0
    TRACE_COMMUNICATION                   0
    TRACE_CONVERTER                       0
    TRACE_DATACHAIN                       0
    TRACE_DATACACHE                       0
    TRACE_DATAPAM                         0
    TRACE_DATATREE                        0
    TRACE_DATAINDEX                       0
    TRACE_DBPROC                          0
    TRACE_FBM                             0
    TRACE_FILEDIR                         0
    TRACE_FRAMECTRL                       0
    TRACE_IOMAN                           0
    TRACE_IPC                             0
    TRACE_JOIN                            0
    TRACE_KSQL                            0
    TRACE_LOGACTION                       0
    TRACE_LOGHISTORY                      0
    TRACE_LOGPAGE                         0
    TRACE_LOGTRANS                        0
    TRACE_LOGVOLUME                       0
    TRACE_MEMORY                          0
    TRACE_MESSAGES                        0
    TRACE_OBJECTCONTAINER                 0
    TRACE_OMS_CONTAINERDIR                0
    TRACE_OMS_CONTEXT                     0
    TRACE_OMS_ERROR                       0
    TRACE_OMS_FLUSHCACHE                  0
    TRACE_OMS_INTERFACE                   0
    TRACE_OMS_KEY                         0
    TRACE_OMS_KEYRANGE                    0
    TRACE_OMS_LOCK                        0
    TRACE_OMS_MEMORY                      0
    TRACE_OMS_NEWOBJ                      0
    TRACE_OMS_SESSION                     0
    TRACE_OMS_STREAM                      0
    TRACE_OMS_VAROBJECT                   0
    TRACE_OMS_VERSION                     0
    TRACE_PAGER                           0
    TRACE_RUNTIME                         0
    TRACE_SHAREDSQL                       0
    TRACE_SQLMANAGER                      0
    TRACE_SRVTASKS                        0
    TRACE_SYNCHRONISATION                 0
    TRACE_SYSVIEW                         0
    TRACE_TABLE                           0
    TRACE_VOLUME                          0
    CHECK_BACKUP                          NO
    CHECK_DATACACHE                       NO
    CHECK_KB_REGIONS                      NO
    CHECK_LOCK                            NO
    CHECK_LOCK_SUPPLY                     NO
    CHECK_REGIONS                         NO
    CHECK_TASK_SPECIFIC_CATALOGCACHE      NO
    CHECK_TRANSLIST                       NO
    CHECK_TREE                            NO
    CHECK_TREE_LOCKS                      NO
    CHECK_COMMON                          0
    CHECK_CONVERTER                       0
    CHECK_DATAPAGELOG                     0
    CHECK_DATAINDEX                       0
    CHECK_FBM                             0
    CHECK_IOMAN                           0
    CHECK_LOGHISTORY                      0
    CHECK_LOGPAGE                         0
    CHECK_LOGTRANS                        0
    CHECK_LOGVOLUME                       0
    CHECK_SRVTASKS                        0
    OPTIMIZE_AGGREGATION                  YES
    OPTIMIZE_FETCH_REVERSE                YES
    OPTIMIZE_STAR_JOIN                    YES
    OPTIMIZE_JOIN_ONEPHASE                YES
    OPTIMIZE_JOIN_OUTER                   YES
    OPTIMIZE_MIN_MAX                      YES
    OPTIMIZE_FIRST_ROWS                   YES
    OPTIMIZE_OPERATOR_JOIN                YES
    OPTIMIZE_JOIN_HASHTABLE               YES
    OPTIMIZE_JOIN_HASH_MINIMAL_RATIO      1
    OPTIMIZE_OPERATOR_JOIN_COSTFUNC       YES
    OPTIMIZE_JOIN_PARALLEL_MINSIZE        1000000
    OPTIMIZE_JOIN_PARALLEL_SERVERS        0
    OPTIMIZE_JOIN_OPERATOR_SORT           YES
    OPTIMIZE_QUAL_ON_INDEX                YES
    DDLTRIGGER                            YES
    SUBTREE_LOCKS                         NO
    MONITOR_READ                          2147483647
    MONITOR_TIME                          2147483647
    MONITOR_SELECTIVITY                   0
    MONITOR_ROWNO                         0
    CALLSTACKLEVEL                        0
    OMS_RUN_IN_UDE_SERVER                 NO
    OPTIMIZE_QUERYREWRITE                 OPERATOR
    TRACE_QUERYREWRITE                    0
    CHECK_QUERYREWRITE                    0
    PROTECT_DATACACHE_MEMORY              NO
    LOCAL_REDO_LOG_BUFFER_SIZE            0
    FILEDIR_SPINLOCKPOOL_SIZE             10
    TRANS_HISTORY_SIZE                    0
    TRANS_THRESHOLD_VALUE                 60
    ENABLE_SYSTEM_TRIGGERS                YES
    DBFILLINGABOVELIMIT                   70L80M85M90H95H96H97H98H99H
    DBFILLINGBELOWLIMIT                   70L80L85L90L95L
    LOGABOVELIMIT                         50L75L90M95M96H97H98H99H
    AUTOSAVE                              1
    BACKUPRESULT                          1
    CHECKDATA                             1
    EVENT                                 1
    ADMIN                                 1
    ONLINE                                1
    UPDSTATWANTED                         1
    OUTOFSESSIONS                         3
    ERROR                                 3
    SYSTEMERROR                           3
    DATABASEFULL                          1
    LOGFULL                               1
    LOGSEGMENTFULL                        1
    STANDBY                               1
    USESELECTFETCH                        YES
    USEVARIABLEINPUT                      NO
    UPDATESTAT_PARALLEL_SERVERS           0
    UPDATESTAT_SAMPLE_ALGO                1
    SIMULATE_VECTORIO                     IF_OPEN_DIRECT_OR_RAW_DEVICE
    COLUMNCOMPRESSION                     YES
    TIME_MEASUREMENT                      NO
    CHECK_TABLE_WIDTH                     NO
    MAX_MESSAGE_LIST_LENGTH               100
    SYMBOL_RESOLUTION                     YES
    PREALLOCATE_IOWORKER                  NO
    CACHE_IN_SHARED_MEMORY                NO
    INDEX_LEAF_CACHING                    2
    NO_SYNC_TO_DISK_WANTED                NO
    SPINLOCK_LOOP_COUNT                   30000
    SPINLOCK_BACKOFF_BASE                 1
    SPINLOCK_BACKOFF_FACTOR               2
    SPINLOCK_BACKOFF_MAXIMUM              64
    ROW_LOCKS_PER_TRANSACTION             50
    USEUNICODECOLUMNCOMPRESSION           NO
About sending you the data from the tables: I don't have permission to do that, since all the data is in a production system and the customer doesn't give me the rights to send any information. Sorry about that.
    best regards
    Clóvis

  • Datastore id and flat class mapping

    Hi,
    I have
    - an abstract persistent class A with 2 concrete persistent subclasses A1
    and A2. I'm using datastore identity and flat class mapping.
    - a class B that has a field fb with a one-many mapping to A1 objects
    (Hashset).
    - a class C that has a field fc with a one-many mapping to A objects
    (Hashset).
    - an instance a1 of A1 (id = 5)
    - an instance b of B in which fb contains a1
    - an instance c of C in which fc contains a1
When loading b and then c, I happen to have 2 instances representing a1 in
the same persistence manager: the one loaded via b has A1-5 as ObjectId and
the one loaded via c has A-5 as ObjectId. Thus those two objects have
different object ids while they represent the same data.
I would expect to find only one.
    Do you have any idea ?
    Thanks,
    Laurent Czinczenheim

    I found the problem! There is no more jdo-1.0.1.jar in the kodo rar :-)
    Czinczenheim wrote:
I have only Kodo in the rar. If I put in the Kodo rar 3.1.3, I can deploy it.
If I put in the Kodo rar 3.2.0, I cannot, and I get the previous exception. Is
there any difference in the packages used by Kodo 3.2.0 (other than kodo
packages) that could interfere with the ones I could have in my JBoss lib
directories?
    thanks
    laurent
    Stephen Kim wrote:
    Kodo should either not be in the classpath and only in the rar or
    viceversa. It still seems like a classpath issue. Can you inspect your
    kodo-jdo-runtime.jars for the existence of kodo/util/FatalUserException?
    Czinczenheim wrote:
    I have only one version of Kodo in my classpath. Therefore, when I replace
    the rar by the one from version 3.1.3 (or any older version), I don't have
    any problem deploying the kodo resource adapter.
    Stephen Kim wrote:
    It appears that you may be having classpath problems. Do you have
    multiple versions of Kodo in the classpath or ear/rar?
    Czinczenheim wrote:
    Marc,
    I wanted to try it with the new 3.2 beta version, but I can't even deploy
    kodo 3.2.b1 in JBoss 3.2.3. Here is the stack trace I get when deploying
    the rar (my kodo-ds.xml is the same as the one I used with kodo 3.1.3):
    11:47:52,975 INFO [RARDeployment] Starting
    11:47:53,036 WARN [ServiceController] Problem starting service
    jboss.jca:service=ManagedConnectionFactory,name=jdo/pmf/prisma01
    java.lang.NoClassDefFoundError: kodo/util/FatalUserException
         at java.lang.Class.getDeclaredConstructors0(Native Method)
         at java.lang.Class.privateGetDeclaredConstructors(Class.java:1610)
         at java.lang.Class.getConstructor0(Class.java:1922)
         at java.lang.Class.newInstance0(Class.java:278)
         at java.lang.Class.newInstance(Class.java:261)
         at
    org.jboss.resource.connectionmanager.RARDeployment.startService(RARDeployment.java:533)
         at
    org.jboss.system.ServiceMBeanSupport.start(ServiceMBeanSupport.java:192)
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
         at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
         at java.lang.reflect.Method.invoke(Method.java:324)
         at
    org.jboss.mx.capability.ReflectedMBeanDispatcher.invoke(ReflectedMBeanDispatcher.java:284)
         at org.jboss.mx.server.MBeanServerImpl.invoke(MBeanServerImpl.java:546)
         at
    org.jboss.system.ServiceController$ServiceProxy.invoke(ServiceController.java:976)
         at $Proxy12.start(Unknown Source)
         at org.jboss.system.ServiceController.start(ServiceController.java:394)
         at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
         at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
         at java.lang.reflect.Method.invoke(Method.java:324)
         at
    org.jboss.mx.capability.ReflectedMBeanDispatcher.invoke(ReflectedMBeanDispatcher.java:284)
         at org.jboss.mx.server.MBeanServerImpl.invoke(MBeanServerImpl.java:546)
         at org.jboss.mx.util.MBeanProxyExt.invoke(MBeanProxyExt.java:177)
         at $Proxy4.start(Unknown Source)
         at org.jboss.deployment.SARDeployer.start(SARDeployer.java:226)
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
         at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
         at java.lang.reflect.Method.invoke(Method.java:324)
         at
    org.jboss.mx.capability.ReflectedMBeanDispatcher.invoke(ReflectedMBeanDispatcher.java:284)
         at org.jboss.mx.server.MBeanServerImpl.invoke(MBeanServerImpl.java:546)
         at
    org.jboss.mx.util.JMXInvocationHandler.invoke(JMXInvocationHandler.java:177)
         at $Proxy18.start(Unknown Source)
         at org.jboss.deployment.XSLSubDeployer.start(XSLSubDeployer.java:231)
         at org.jboss.deployment.MainDeployer.start(MainDeployer.java:824)
         at org.jboss.deployment.MainDeployer.deploy(MainDeployer.java:632)
         at org.jboss.deployment.MainDeployer.deploy(MainDeployer.java:605)
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
         at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
         at java.lang.reflect.Method.invoke(Method.java:324)
         at
    org.jboss.mx.capability.ReflectedMBeanDispatcher.invoke(ReflectedMBeanDispatcher.java:284)
         at org.jboss.mx.server.MBeanServerImpl.invoke(MBeanServerImpl.java:546)
         at org.jboss.mx.util.MBeanProxyExt.invoke(MBeanProxyExt.java:177)
         at $Proxy6.deploy(Unknown Source)
         at
    org.jboss.deployment.scanner.URLDeploymentScanner.deploy(URLDeploymentScanner.java:302)
         at
    org.jboss.deployment.scanner.URLDeploymentScanner.scan(URLDeploymentScanner.java:476)
         at
    org.jboss.deployment.scanner.AbstractDeploymentScanner$ScannerThread.doScan(AbstractDeploymentScanner.java:201)
         at
    org.jboss.deployment.scanner.AbstractDeploymentScanner.startService(AbstractDeploymentScanner.java:274)
         at
    org.jboss.system.ServiceMBeanSupport.start(ServiceMBeanSupport.java:192)
         at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
         at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
         at java.lang.reflect.Method.invoke(Method.java:324)
         at
    org.jboss.mx.capability.ReflectedMBeanDispatcher.invoke(ReflectedMBeanDispatcher.java:284)
         at org.jboss.mx.server.MBeanServerImpl.invoke(MBeanServerImpl.java:546)
         at
    org.jboss.system.ServiceController$ServiceProxy.invoke(ServiceController.java:976)
         at $Proxy0.start(Unknown Source)
         at org.jboss.system.ServiceController.start(ServiceController.java:394)
         at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
         at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
         at java.lang.reflect.Method.invoke(Method.java:324)
         at
    org.jboss.mx.capability.ReflectedMBeanDispatcher.invoke(ReflectedMBeanDispatcher.java:284)
         at org.jboss.mx.server.MBeanServerImpl.invoke(MBeanServerImpl.java:546)
         at org.jboss.mx.util.MBeanProxyExt.invoke(MBeanProxyExt.java:177)
         at $Proxy4.start(Unknown Source)
         at org.jboss.deployment.SARDeployer.start(SARDeployer.java:226)
         at org.jboss.deployment.MainDeployer.start(MainDeployer.java:824)
         at org.jboss.deployment.MainDeployer.deploy(MainDeployer.java:632)
         at org.jboss.deployment.MainDeployer.deploy(MainDeployer.java:605)
         at org.jboss.deployment.MainDeployer.deploy(MainDeployer.java:589)
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
         at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
         at java.lang.reflect.Method.invoke(Method.java:324)
         at
    org.jboss.mx.capability.ReflectedMBeanDispatcher.invoke(ReflectedMBeanDispatcher.java:284)
         at org.jboss.mx.server.MBeanServerImpl.invoke(MBeanServerImpl.java:546)
         at org.jboss.mx.util.MBeanProxyExt.invoke(MBeanProxyExt.java:177)
         at $Proxy5.deploy(Unknown Source)
         at org.jboss.system.server.ServerImpl.doStart(ServerImpl.java:384)
         at org.jboss.system.server.ServerImpl.start(ServerImpl.java:291)
         at org.jboss.Main.boot(Main.java:150)
         at org.jboss.Main$1.run(Main.java:388)
         at java.lang.Thread.run(Thread.java:534)
    Thanks for your help, since the initial bug I described is critical for us.
    Laurent
    Marc Prud'hommeaux wrote:
    Laurent-
    I believe I have seen that problem, but I can't recall the exact
    symptoms (or the exact bug number). However, I do think that it was
    fixed for Kodo 3.2. Can you download the 3.2 beta and see if the problem
    still occurs?
    If it does still happen, can you provide us with your .jdo, .mapping,
    and .java files for the classes so we can take a look?
    In article <[email protected]>, Czinczenheim wrote:
    Hi,
    I have
    - an abstract persistent class A with 2 concrete persistent subclasses A1
    and A2. I'm using datastore identity and flat class mapping.
    - a class B that has a field fb with a one-many mapping to A1 objects
    (HashSet).
    - a class C that has a field fc with a one-many mapping to A objects
    (HashSet).
    - an instance a1 of A1 (id = 5)
    - an instance b of B in which fb contains a1
    - an instance c of C in which fc contains a1
    When loading b and then c, I end up with 2 instances representing a1 in
    the same persistence manager. The one loaded in b has A1-5 as ObjectId and
    the one loaded in c has A-5 as ObjectId. Thus those two objects have
    different object ids while they represent the same data.
    I would expect to find only one.
    Do you have any idea?
    Thanks,
    Laurent Czinczenheim
    Marc Prud'hommeaux
    SolarMetric Inc.
    Steve Kim
    [email protected]
    SolarMetric Inc.
    http://www.solarmetric.com

  • Generics and static methods

    Hi,
    I need a sanity check to make sure I have not missed a design pattern.
    I believe that it is not possible to call a static method defined in a generic type.
    For example:
    public class Red extends Color {
        public static Color getHue() { ... }
    }
    public class Green extends Color {
        public static Color getHue() { ... }
    }
    public class GenericColorTest<C extends Color> {
        public void fooBar() {
            Color hue = C.getHue();  // Is this a valid method call?
        }
    }
    Since the base class Color cannot declare getHue() as an abstract static method, it seems reasonable that a generic type parameter can't make the call to getHue(). Or am I missing something? Is there a way for the class GenericColorTest's generic type C to call a static method?
    Thanks
    HB

    I think I can be even more specific than gafter on this.
    Your call, "C.getHue()", just plain makes no sense. C is "some kind of Color object", and the class "Color" has no function (static or not) called "getHue()". So calling "C.getHue()" is a simple case of trying to call a function that isn't in the class.
    In fact you can call static functions, but only if they are in the base class required by the generic definition (erm, sorry, can't remember the exact term right now). So, for example, if there were a "getHue()" static function in your class "Color", then sure, you could call "C.getHue()" in your code. But since static functions can't be overridden, "C.getHue()" would do exactly the same thing no matter which actual class C is at the moment, which doesn't seem to be what you want.
    The right thing to do is just make "getHue()" non-static. Then things will work great.
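    A minimal sketch of that non-static approach (the method bodies are assumptions; the original post only shows the signatures):

    abstract class Color {
        // Instance method instead of static, so each subclass supplies its own hue.
        public abstract Color getHue();
    }

    class Red extends Color {
        public Color getHue() { return this; }
    }

    class Green extends Color {
        public Color getHue() { return this; }
    }

    class GenericColorTest<C extends Color> {
        private final C color;

        GenericColorTest(C color) { this.color = color; }

        public void fooBar() {
            // Dispatches on the actual instance, so Red and Green can differ.
            Color hue = color.getHue();
            System.out.println(hue);
        }
    }

    Calling new GenericColorTest<Red>(new Red()).fooBar() then resolves getHue() on the actual Red instance at run time.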
