Collection framework for primitive types

Hi,
I am developing an open source collection library for primitive data types. It has just moved out of beta and into production development, meaning that from now on the API is backwards compatible.
The sources, binaries, and docs can be downloaded from: http://pcj.sourceforge.net.
Comments and suggestions are always welcome!
Regards,
Søren Bak

Thanks, I'm downloading it now.

Similar Messages

  • Create array of arbitrary dimensions / get Class object for primitive types

    Here's an interesting problem: is there any way to code for the creation of an n-dimensional array where n is only known at runtime (input by user, etc.)?
    java.lang.reflect.Array has a newInstance method which returns an array of an arbitrary number of dimensions specified by the int[] argument, which would give me what I want. However, the other argument is a Class object representing the component type of the array. If you want to use this method to get an array of a primitive type, is it possible to get a Class object representing a primitive type?

    That doesn't help, since the whole problem is that the number of dimensions isn't known until runtime. You can't simply declare a variable of type int[][] or use an int[][] cast, since the number of []'s is variable.
    The int.class construction was partly what I was looking for. This returns the array needed (assuming some read method):
    int dimensions = read();
    int size = read();
    int[] index = new int[dimensions];
    for (int i = 0; i < dimensions; i++)
        index[i] = size;
    Object myArray = Array.newInstance(int.class, index);
    The only problem is, as an Object, I can't use myArray usefully as an array. And I can't use a cast, since that requires a fixed number of dimensions.
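    One way to work with the result (a rough sketch; the dimension count, size, and position below are made up for illustration) is to keep myArray as an Object and walk it one dimension at a time with java.lang.reflect.Array.get, using Array.setInt/Array.getInt on the innermost int[]:
    import java.lang.reflect.Array;
    public class DynamicArrayDemo {
        public static void main(String[] args) {
            int dimensions = 3;                      // would normally come from read()
            int size = 4;
            int[] lengths = new int[dimensions];
            for (int i = 0; i < dimensions; i++)
                lengths[i] = size;
            Object myArray = Array.newInstance(int.class, lengths);
            // Write 999 at position [1][2][3]: peel off all but the last
            // dimension with Array.get, then set into the innermost int[].
            int[] position = {1, 2, 3};
            Object inner = myArray;
            for (int i = 0; i < position.length - 1; i++)
                inner = Array.get(inner, position[i]);
            Array.setInt(inner, position[position.length - 1], 999);
            // Read it back the same way.
            inner = myArray;
            for (int i = 0; i < position.length - 1; i++)
                inner = Array.get(inner, position[i]);
            System.out.println(Array.getInt(inner, position[position.length - 1])); // 999
        }
    }
    It is clumsier than real subscripting, but it works for any number of dimensions that is only known at runtime.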

  • Arrays of arbitrary dimensions / Class object for primitive types

    Here's an interesting problem: is there any way to code for the creation of an n-dimensional array where n is only known at runtime (input by user, etc.)?
    java.lang.reflect.Array has a newInstance method which returns an array of an arbitrary number of dimensions specified by the int[] argument, which would give me what I want. However, the other argument is a Class object representing the component type of the array. If you want to use this method to get an array of a primitive type, is it possible to get a Class object representing a primitive type?

    Devil is in the details:
    public class Test2 {
      public static void main(String[] args) {
        Object[] myArray = new Object[10];
        fillDimensionalArray(myArray, 5, 5);
      }
      public static void fillDimensionalArray(Object[] array, int dim, int size) {
        if (dim==1) {
          // innermost dimension: each slot holds an int[] leaf
          for (int i=0; i<array.length; i++) {
            array[i] = new int[size];
            for (int j=0; j<size; j++) ((int[])array[i])[j]=999;
          }
        } else {
          // each slot holds an Object[] one dimension lower; recurse into it
          for (int i=0; i<array.length; i++) {
            array[i] = new Object[size];
            fillDimensionalArray( (Object[]) array[i], dim - 1, size);
          }
        }
      }
    }

  • Container for primitive datatypes?

    Which is the best way to store primitive datatypes if I need the container to be able to expand and shrink?
    I've tried to use Vector and ArrayList, but I have to convert each value to its wrapper and add them one by one.
    Isn't there some container which has a method like Container.add(primitive[] myprimitive)?

    Sort of related: JDK 1.4 has some new IO packages that have primitive buffers for primitive types (java.nio). Not exactly what you're looking for, but just thought I'd throw in my two cents.
    Best bet is to write your own container that acts like a Vector, but only provide add and get methods which accept and return doubles/ints/etc.
    class DoubleContainer {
        private java.util.Vector vec = new java.util.Vector();
        public DoubleContainer() {
        }
        public void add(double d) {
            vec.add(new Double(d));
        }
        public double get(int index) {
            return ((Double) vec.get(index)).doubleValue();
        }
        // some more methods might be needed here, like insert, remove...
    }
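    If the boxing overhead matters, another option is to back the container with a plain double[] and grow it on demand; a minimal sketch (the class and method names are purely illustrative, not from any library):
    import java.util.Arrays;
    public class DoubleList {
        private double[] data = new double[16];
        private int size;
        public void add(double d) {
            if (size == data.length)
                data = Arrays.copyOf(data, data.length * 2);   // grow the backing array
            data[size++] = d;
        }
        public void addAll(double[] values) {                  // bulk add, e.g. list.addAll(myprimitive)
            for (double d : values) add(d);
        }
        public double get(int index) {
            if (index < 0 || index >= size) throw new IndexOutOfBoundsException("" + index);
            return data[index];
        }
        public int size() { return size; }
    }
    An addAll(double[]) method like the one above is also the closest thing to the Container.add(primitive[] myprimitive) call asked about.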

  • Need suggestion for variable size array of primitive type.

    Hi all,
    I am working on a problem in which I need to keep an array of primitives (type int) sorted. I also insert new elements while keeping it sorted, and at times I merge two similar arrays with unique elements.
    I would welcome suggestions on the implementation aspects as to which type/class to use. My primary consideration is performance. Here are my observations:
    1. type int
    Efficient operations, but arrays are fixed size, so it makes for clumsy code and overhead when the array is expanded or merged with another.
    Methods in class Arrays can be used, incl. binary search and sort.
    2. Vector, ArrayList, List, LinkedList
    Accept objects only. I can use Integer instead of int, but it will involve overhead, and I'll need to write my own sort and search routines (already done).
    For now, I choose option 2 using Vectors. Is there a better choice, or are there other options, given the conditions and considerations enumerated at the beginning?
    Appreciate any input or comments.

    mathmate wrote:
    I have not had the occasion to use them in parallel to recognize the difference. It pays to talk to people!
    A small benchmark is easily constructed:
    import bak.pcj.list.IntArrayList;
    import java.util.Random;
    import java.util.ArrayList;
    public class CollectionTest {
        static void testObjectsList(ArrayList<Integer> list, int n, long seed) {
            long start = System.currentTimeMillis();
            Random rand = new Random(seed);
            for(int i = 0; i < n; i++)
                list.add(new Integer(rand.nextInt()));
            for(int i = 0; i < n/2; i++)
                list.remove(rand.nextInt(list.size()));
            long end = System.currentTimeMillis();
            System.out.println("objectsList -> "+(end-start)+" ms.");
        }
        static void testPrimitiveList(IntArrayList list, int n, long seed) {
            long start = System.currentTimeMillis();
            Random rand = new Random(seed);
            for(int i = 0; i < n; i++)
                list.add(rand.nextInt());
            for(int i = 0; i < n/2; i++)
                list.remove(rand.nextInt(list.size()));
            long end = System.currentTimeMillis();
            System.out.println("primitiveList -> "+(end-start)+" ms.");
        }
        public static void main(String[] args) {
            long seed = System.currentTimeMillis();
            int numTests = 100000;
            testObjectsList(new ArrayList<Integer>(), numTests, seed);
            testPrimitiveList(new IntArrayList(), numTests, seed);
        }
    }
    In the test above, Java's ArrayList is about 5 times faster. But when commenting out the remove(...) calls, the primitive 3rd party IntArrayList is about 4 to 5 times faster.
    Give it a spin.
    Again: just try to do it with the standard classes first; if performance is lacking, look at something else.

  • Collection query for computers with windows management framework 3.0

    Hi,
    I am trying to build a collection query for computers with Windows Management Framework 3.0, but I can't find a way. I can't see it in the inventory data under SMS_G_System_ADD_REMOVE_PROGRAMS_64.DisplayName.
    So is there any way to get the computers with Windows Management Framework 3.0?
    /SaiTech

    select SMS_R_SYSTEM.ResourceID,SMS_R_SYSTEM.ResourceType,SMS_R_SYSTEM.Name,SMS_R_SYSTEM.SMSUniqueIdentifier,SMS_R_SYSTEM.ResourceDomainORWorkgroup,SMS_R_SYSTEM.Client from SMS_R_System inner join SMS_G_System_SoftwareProduct on SMS_G_System_SoftwareProduct.ResourceId
    = SMS_R_System.ResourceId where SMS_G_System_SoftwareProduct.ProductName like "Windows Management Framework 3.0%"
    That won't work because, as mentioned, it doesn't appear in ARP.
    Torsten's suggestion will work, or you can resort to software/hardware inventory using the info at
    http://serverfault.com/questions/555100/methods-to-detect-version-of-windows-management-framework
    Jason | http://blog.configmgrftw.com

  • Managing statistics for object collections used as table types in SQL

    Hi All,
    Is there a way to manage statistics for collections used as table types in SQL?
    Below is my test case
    Oracle Version :
    SQL> select * from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    PL/SQL Release 11.2.0.3.0 - Production
    CORE    11.2.0.3.0      Production
    TNS for IBM/AIX RISC System/6000: Version 11.2.0.3.0 - Production
    NLSRTL Version 11.2.0.3.0 - Production
    SQL>
    Original Query:
    SELECT
         9999,
         tbl_typ.FILE_ID,
         tf.FILE_NM ,
         tf.MIME_TYPE ,
         dbms_lob.getlength(tfd.FILE_DATA)
    FROM
         TG_FILE tf,
         TG_FILE_DATA tfd,
          (
               SELECT *
               FROM
                    TABLE (
                         SELECT
                              CAST(TABLE_ESC_ATTACH(OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
                              OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL)) AS TABLE_ESC_ATTACH)
                         FROM
                              dual
                    )
          ) tbl_typ
    WHERE
         tf.FILE_ID     = tfd.FILE_ID
    AND tf.FILE_ID  = tbl_typ.FILE_ID
    AND tfd.FILE_ID = tbl_typ.FILE_ID;
    Elapsed: 00:00:02.90
    Execution Plan
    Plan hash value: 3970072279
    | Id  | Operation                                | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                         |              |     1 |   194 |  4567   (2)| 00:00:55 |
    |*  1 |  HASH JOIN                               |              |     1 |   194 |  4567   (2)| 00:00:55 |
    |*  2 |   HASH JOIN                              |              |  8168 |   287K|   695   (3)| 00:00:09 |
    |   3 |    VIEW                                  |              |  8168 |   103K|    29   (0)| 00:00:01 |
    |   4 |     COLLECTION ITERATOR CONSTRUCTOR FETCH|              |  8168 | 16336 |    29   (0)| 00:00:01 |
    |   5 |      FAST DUAL                           |              |     1 |       |     2   (0)| 00:00:01 |
    |   6 |    TABLE ACCESS FULL                     | TG_FILE      |   565K|    12M|   659   (2)| 00:00:08 |
    |   7 |   TABLE ACCESS FULL                      | TG_FILE_DATA |   852K|   128M|  3863   (1)| 00:00:47 |
    Predicate Information (identified by operation id):
       1 - access("TF"."FILE_ID"="TFD"."FILE_ID" AND "TFD"."FILE_ID"="TBL_TYP"."FILE_ID")
       2 - access("TF"."FILE_ID"="TBL_TYP"."FILE_ID")
    Statistics
              7  recursive calls
              0  db block gets
          16783  consistent gets
          16779  physical reads
              0  redo size
            916  bytes sent via SQL*Net to client
            524  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              2  rows processed
    Indexes are present on column FILE_ID in both tables (TG_FILE, TG_FILE_DATA).
    select
         index_name,blevel,leaf_blocks,DISTINCT_KEYS,clustering_factor,num_rows,sample_size
    from
         all_indexes
    where table_name in ('TG_FILE','TG_FILE_DATA');
    INDEX_NAME                     BLEVEL LEAF_BLOCKS DISTINCT_KEYS CLUSTERING_FACTOR     NUM_ROWS SAMPLE_SIZE
    TG_FILE_PK                          2        2160        552842             21401       552842      285428
    TG_FILE_DATA_PK                     2        3544        852297             61437       852297      852297
    Ideally the view should have used a NESTED LOOP so the indexes could be used, since the number of rows coming from the object collection is only 2.
    But the optimizer assumes the default of 8168 rows, leading to a HASH join between the tables and full table scans.
    So my question is: is there any way to change the statistics while using collections in SQL?
    I can use hints to force the indexes, but I am planning to avoid that for now. Currently the time shown in the explain plan is not accurate.
    Modified query with hints :
    SELECT    
        /*+ index(tf TG_FILE_PK ) index(tfd TG_FILE_DATA_PK) */
        9999,
        tbl_typ.FILE_ID,
        tf.FILE_NM ,
        tf.MIME_TYPE ,
        dbms_lob.getlength(tfd.FILE_DATA)
    FROM
        TG_FILE tf,
        TG_FILE_DATA tfd,
        (
            SELECT *
            FROM
                TABLE (
                        SELECT
                             CAST(TABLE_ESC_ATTACH(OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
                             OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL)) AS TABLE_ESC_ATTACH)
                        FROM
                             dual
                )
        ) tbl_typ
    WHERE
        tf.FILE_ID     = tfd.FILE_ID
    AND tf.FILE_ID  = tbl_typ.FILE_ID
    AND tfd.FILE_ID = tbl_typ.FILE_ID;
    Elapsed: 00:00:00.01
    Execution Plan
    Plan hash value: 1670128954
    | Id  | Operation                                 | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                          |                 |     1 |   194 | 29978   (1)| 00:06:00 |
    |   1 |  NESTED LOOPS                             |                 |       |       |            |          |
    |   2 |   NESTED LOOPS                            |                 |     1 |   194 | 29978   (1)| 00:06:00 |
    |   3 |    NESTED LOOPS                           |                 |  8168 |  1363K| 16379   (1)| 00:03:17 |
    |   4 |     VIEW                                  |                 |  8168 |   103K|    29   (0)| 00:00:01 |
    |   5 |      COLLECTION ITERATOR CONSTRUCTOR FETCH|                 |  8168 | 16336 |    29   (0)| 00:00:01 |
    |   6 |       FAST DUAL                           |                 |     1 |       |     2   (0)| 00:00:01 |
    |   7 |     TABLE ACCESS BY INDEX ROWID           | TG_FILE_DATA    |     1 |   158 |     2   (0)| 00:00:01 |
    |*  8 |      INDEX UNIQUE SCAN                    | TG_FILE_DATA_PK |     1 |       |     1   (0)| 00:00:01 |
    |*  9 |    INDEX UNIQUE SCAN                      | TG_FILE_PK      |     1 |       |     1   (0)| 00:00:01 |
    |  10 |   TABLE ACCESS BY INDEX ROWID             | TG_FILE         |     1 |    23 |     2   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       8 - access("TFD"."FILE_ID"="TBL_TYP"."FILE_ID")
       9 - access("TF"."FILE_ID"="TBL_TYP"."FILE_ID")
           filter("TF"."FILE_ID"="TFD"."FILE_ID")
    Statistics
              0  recursive calls
              0  db block gets
             16  consistent gets
              8  physical reads
              0  redo size
            916  bytes sent via SQL*Net to client
            524  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              2  rows processed
    Thanks,
    B

    Thanks Tubby,
    While searching I had found that we can use the CARDINALITY hint to set statistics for a TABLE function.
    But I preferred not to mention it, as it is currently an undocumented hint. I now think I should have mentioned it when posting the first time.
    http://www.oracle-developer.net/display.php?id=427
    Going through that article, it mentions the following ways to set the statistics:
    1) CARDINALITY (undocumented)
    2) OPT_ESTIMATE (undocumented)
    3) DYNAMIC_SAMPLING (documented)
    4) Extensible Optimiser
    I tried it out with the different hints and it is working as expected,
    i.e. CARDINALITY and OPT_ESTIMATE take the value set in the hint,
    but the DYNAMIC_SAMPLING hint provides the most accurate estimate of the rows (which is 2 in this particular case).
    With CARDINALITY hint
    SELECT
        /*+ cardinality( e, 5) */*
    FROM
     TABLE (
              SELECT
                   CAST(TABLE_ESC_ATTACH(OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
                   OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL)) AS TABLE_ESC_ATTACH)
              FROM
                   dual
         ) e ;
    Elapsed: 00:00:00.00
    Execution Plan
    Plan hash value: 1467416936
    | Id  | Operation                             | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                      |      |     5 |    10 |    29   (0)| 00:00:01 |
    |   1 |  COLLECTION ITERATOR CONSTRUCTOR FETCH|      |     5 |    10 |    29   (0)| 00:00:01 |
    |   2 |   FAST DUAL                           |      |     1 |       |     2   (0)| 00:00:01 |
    With OPT_ESTIMATE hint
    SELECT
         /*+ opt_estimate(table, e, scale_rows=0.0006) */*
    FROM
     TABLE (
              SELECT
                   CAST(TABLE_ESC_ATTACH(OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
                   OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL)) AS TABLE_ESC_ATTACH)
              FROM
                   dual
         ) e ;
    Execution Plan
    Plan hash value: 4043204977
    | Id  | Operation                              | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                       |      |     5 |   485 |    29   (0)| 00:00:01 |
    |   1 |  VIEW                                  |      |     5 |   485 |    29   (0)| 00:00:01 |
    |   2 |   COLLECTION ITERATOR CONSTRUCTOR FETCH|      |     5 |    10 |    29   (0)| 00:00:01 |
    |   3 |    FAST DUAL                           |      |     1 |       |     2   (0)| 00:00:01 |
    With DYNAMIC_SAMPLING hint
    SELECT
        /*+ dynamic_sampling( e, 5) */*
    FROM
     TABLE (
              SELECT
                   CAST(TABLE_ESC_ATTACH(OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
                   OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL)) AS TABLE_ESC_ATTACH)
              FROM
                   dual
         ) e ;
    Elapsed: 00:00:00.00
    Execution Plan
    Plan hash value: 1467416936
    | Id  | Operation                             | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                      |      |     2 |     4 |    11   (0)| 00:00:01 |
    |   1 |  COLLECTION ITERATOR CONSTRUCTOR FETCH|      |     2 |     4 |    11   (0)| 00:00:01 |
    |   2 |   FAST DUAL                           |      |     1 |       |     2   (0)| 00:00:01 |
    Note
       - dynamic sampling used for this statement (level=2)
    I will be testing the last option, the Extensible Optimizer, and will put my findings here.
    I hope Oracle improves the statistics gathering for collections used in DML in future releases, rather than just defaulting to an estimate based on the block size.
    By the way, do you know why it uses the default based on the block size? Is it because that is the smallest granular unit which Oracle provides?
    Regards,
    B

  • Configure array of primitive type for POF

    My impression is that Coherence's POF implementation can handle Java primitives and String by default, so we don't need to provide a customized POF configuration file if the object being put into the cache is a Java primitive type.
    How about an array of a primitive type, for example byte[]? Can Coherence's POF implementation handle byte[] as well?

    Just asking since we plan to move from using Externalizable to POF, and I am not sure if we need special handling for the byte[] type.
    Thanks for the reply.
    Regards,
    Chen
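    For what it's worth, when the byte[] is a field of one of your own cached classes, a PortableObject normally reads and writes it explicitly through the PofReader/PofWriter pair; a rough sketch, assuming the standard com.tangosol.io.pof interfaces (the class name and property index here are invented):
    import java.io.IOException;
    import com.tangosol.io.pof.PofReader;
    import com.tangosol.io.pof.PofWriter;
    import com.tangosol.io.pof.PortableObject;
    public class BlobValue implements PortableObject {
        private byte[] payload;
        public BlobValue() {                     // POF deserialization needs a public no-arg constructor
        }
        public BlobValue(byte[] payload) {
            this.payload = payload;
        }
        public byte[] getPayload() {
            return payload;
        }
        public void readExternal(PofReader in) throws IOException {
            payload = in.readByteArray(0);       // property index 0, chosen arbitrarily here
        }
        public void writeExternal(PofWriter out) throws IOException {
            out.writeByteArray(0, payload);
        }
    }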

  • TCode for collective changes in inspection type in Material Master

    Hi
    Can anyone tell me the TCode for making mass changes to the inspection type in the Material Master?
    Charuhas

    You can use MM17, go through LSMW, or create a BDC through ABAP.
    Regards,
    qsm sap

  • Redesigning the Collections Framework

    Hi!
    I'm sort of an experienced Java programmer, in the sense that I program regularly in Java. However, I am not experienced enough to understand the small design specifics of the Collections Framework (or other parts of Java's standard library).
    There's been a number of minor things that bugged me about the framework, and all these minor things added up to a big annoyance, after which I've decided to (try to) design and implement my own framework.
    The thing is however, that since I don't understand many design specifics about the Collection Framework and the individual collection implementations, I risk coming up short with my own.
    (And if you are still reading this, then I thank you for your time, because already now I know that this entry is going to be long. : ) )
    Since I'm doing my collection framework nearly from scratch, I don't have to worry too much about the issue of backwards compatibility (although I should consider making some parts similar to the Collections Framework as it is today, and provide a wrapper that implements the original collection interfaces).
    I also have certain options for optimizing several of the collections, but then again, there may be very specific design issues concerning performance and usability that the developers of the framework (or other more experienced Java programmers) knew about, that I don't know.
    So I'm going to share all of my thoughts here. I hope this will start an interesting discussion : )
    (I'm also not going to make a fuss about the source code of my progress. I will happily share it with anyone who is interested. It is probably even necessary in order for others to understand how I've intended to solve my annoyances (or understand what these annoyances were in the first place).)
    (I've read the "Java Collections API Design FAQ", btw).
    Below, I'm going to go through all of the things that I've thought about, and what I've decided to do.
    1.
    The Collections class is a class that consists only of static utility methods.
    Several of them return wrapper classes. However the majority of them work on collections implementing the List interface.
    So why weren't they built into the List interface (the same goes for methods working only with the Collection interface, etc.)? Several of them could even be implemented more efficiently, for example calling rotate on a LinkedList.
    If the LinkedList is circular, using a sentinel node connecting the head and tail, rotate is done simply by relocating the sentinel node (rotating by one element would require a single operation). The Collections class makes several calls to the reverse method instead (because it lacks access to the internal workings of a LinkedList).
    If it were done this way, the Collections class would be much smaller, and contain mostly methods that return wrapped classes.
    After thinking about it a while, I think I can answer this question myself. The List interface would become rather bloated, and would force an implementation of methods that the user may not need.
    At any rate, I intend to try to do some sort of clean-up. Exactly how, is something I'm still thinking about. Maybe two versions of List interfaces (one "light", and the other advanced), or maybe making the internal fields package private and generally more accessible to other package members, so that methods in other classes can do some optimizations with the additional information.
    2.
    At one point, I realized that the PriorityQueue didn't support increase/decrease key operations. Of course, elements would need to know where in the backing data structure they were located, and this is implementation specific. However, I was rather disappointed that this wasn't supported somehow, so I figured out a way to support it anyway, and implemented it.
    Basically, I've wrapped the elements in a class that contains this info, and if the element would want to increase its key, it would call a method on the wrapping class it was contained in. It worked fine.
    It may cause some overhead, but at least I don't have to re-implement such a data structure and fiddle around so much with the element classes just because I want to try something with a PriorityQueue.
    I can do the same thing to implement a reusable BinomialHeap, FibonacciHeap, and other data structures that usually require the elements to contain some implementation-specific fields and methods.
    And this would all become part of the framework.
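    A minimal sketch of that wrapping idea (all class and method names are hypothetical, not from any existing framework): a Handle records the element's position in the backing array, so a decrease-key only sifts the entry up in O(log n) instead of doing a linear search first.
    import java.util.ArrayList;
    public class IndexedMinHeap<E> {
        public final class Handle {
            final E element;
            long key;
            int index;                            // position in the heap array, kept up to date
            Handle(E element, long key) { this.element = element; this.key = key; }
            public E element() { return element; }
            public void decreaseKey(long newKey) {
                if (newKey > key) throw new IllegalArgumentException("key must not increase");
                key = newKey;
                siftUp(index);                    // O(log n), no search needed
            }
        }
        private final ArrayList<Handle> heap = new ArrayList<Handle>();
        public Handle insert(E element, long key) {
            Handle h = new Handle(element, key);
            heap.add(h);
            h.index = heap.size() - 1;
            siftUp(h.index);
            return h;
        }
        public Handle peekMin() {
            return heap.isEmpty() ? null : heap.get(0);
        }
        private void siftUp(int i) {
            while (i > 0) {
                int parent = (i - 1) / 2;
                if (heap.get(parent).key <= heap.get(i).key) break;
                swap(i, parent);
                i = parent;
            }
        }
        private void swap(int a, int b) {
            Handle ha = heap.get(a), hb = heap.get(b);
            heap.set(a, hb);
            heap.set(b, ha);
            ha.index = b;                         // keep the recorded positions consistent
            hb.index = a;
        }
        public static void main(String[] args) {
            IndexedMinHeap<String> pq = new IndexedMinHeap<String>();
            pq.insert("a", 10);
            IndexedMinHeap<String>.Handle b = pq.insert("b", 20);
            b.decreaseKey(5);                               // floats "b" to the top
            System.out.println(pq.peekMin().element());     // prints "b"
        }
    }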
    3.
    This one is difficult to explain.
    It basically revolves around the first question in the "Java Collections API Design FAQ".
    It has always bothered me that the Collection interface contained methods that "maybe" would be implemented.
    To me it didn't make sense. The Collection should only contain methods that are general for all Collections, such as the contains method. All methods that request, and manipulate the Collection, belonged in other interfaces.
    However, I also realized that the whole smart thing about the current Collection interface is that you can transfer elements from one Collection to another without needing to know what type of Collection you were transferring from/to.
    But I still felt it violated "something" out there, even if it was in the name of convenience. If this convenience was to be provided, it should be done by making a separate wrapper interface with the purpose of grouping the various Collection types together.
    If you don't know what I'm trying to say then you might have to see the interfaces I've made.
    And while I was at it, I also fiddled with the various method names.
    For example, add(int index, E element): I felt it should be insert(int index, E element). This type of minor thing caused a lot of confusion for me back then, so I cared enough about this to change it to something I thought more appropriate. But I have no idea how appropriate my approach may seem to others. : )
    4.
    I see an iterator as something that iterates through a collection, and nothing else.
    Therefore, it bothered me that the Iterator interface had an optional remove method.
    I myself have never needed it, so maybe I just don't know how to appreciate it. How much is it used? If it's heavily used, I guess I'm going to have to include it somehow.
    5.
    A LinkedList doesn't support random access. But true random access is when you access elements randomly relative to the first index.
    Iterating from the first to the last with a for statement isn't really random access, but it still causes bad performance in the current LinkedList implementation. One would have to use the ListIterator to achieve this.
    But even here, if you want a ListIterator that starts in the middle of the list, you still need to traverse the list to reach that point.
    So I've come up with a LinkedList that remembers the last accessed element through the basic methods get, set, remove, etc., and can use it to access elements relative to it.
    Basically, there is now a special internal "ListIterator" that is used to access elements when the basic methods are used. This gives way to several improvements (although that may depend on how you look at it).
    It introduces some overhead in the form of if-else statements, but otherwise, I'm hoping that it will generally outperform the current LinkedList class (when using lists with a large number of elements).
    6.
    I've also played around with the ArrayList class.
    I've implemented it in a way that is something like a random-access Deque. This has made it possible to improve certain methods, like inserting an element/Collection at some index.
    Instead of always shifting subsequent elements to the right, elements can be shifted left as well. That means that inserting at index 0 only requires a single operation, instead of work proportional to the length of the list.
    Again, this introduces some overhead with if-else statements, but is still better in many cases (again, the list must be large for this to pay off).
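    A minimal sketch of that idea (a hypothetical fixed-capacity class, not a drop-in List replacement): back the list with a circular array, so an insert near the front moves the head pointer left instead of shifting every subsequent element right.
    public class CircularArrayList<E> {
        private final Object[] buf;
        private int head;                 // physical index of logical element 0
        private int size;
        public CircularArrayList(int capacity) {
            buf = new Object[capacity];
        }
        private int physical(int logical) {
            return (head + logical) % buf.length;
        }
        public void add(int index, E element) {
            if (size == buf.length) throw new IllegalStateException("full");
            if (index < 0 || index > size) throw new IndexOutOfBoundsException();
            if (index <= size / 2) {
                // Cheap near the front: move head left, shift only the first 'index' elements.
                head = (head - 1 + buf.length) % buf.length;
                for (int i = 0; i < index; i++)
                    buf[physical(i)] = buf[physical(i + 1)];
            } else {
                // Cheap near the back: shift only the elements after 'index'.
                for (int i = size; i > index; i--)
                    buf[physical(i)] = buf[physical(i - 1)];
            }
            buf[physical(index)] = element;
            size++;
        }
        @SuppressWarnings("unchecked")
        public E get(int index) {
            if (index < 0 || index >= size) throw new IndexOutOfBoundsException();
            return (E) buf[physical(index)];
        }
        public int size() { return size; }
        public static void main(String[] args) {
            CircularArrayList<String> list = new CircularArrayList<String>(8);
            list.add(0, "b");
            list.add(0, "a");             // single-slot operation: only the head pointer and one slot change
            list.add(2, "c");
            for (int i = 0; i < list.size(); i++) System.out.println(list.get(i));   // a b c
        }
    }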
    7.
    I'm also trying to do a hybrid between an ArrayList and a LinkedList, hopefully allowing mostly constant-time insertion, but constant true random access as well. It requires more than twice the memory, since it is backed by both an ArrayList and a LinkedList.
    The overhead introduced, and the fact that worst-case random access is no better than that of a pure LinkedList (which occurs when elements are inserted at the same index many times, and you then try to access these elements), may make this class infeasible.
    It was mostly the first three points that pushed me over the edge, and made me throw myself at this project.
    You're free to comment as much as you like.
    If no real discussion starts, that's ok.
    It's not like I'm not having fun with this thing on my own : )
    I've started from scratch several times because of design problems discovered too late, so if you request to see some of the source code, it is still in the works and I would have to scurry off and add a lot of java-comments as well, to explain code.
    Great. Thanks!

    This sort of reply has great value : )
    Some of them show me that I need to take some other things into consideration. Some of them however, aren't resolved yet, some because I'm probably misunderstanding some of your arguments.
    Here goes:
    1.
    a)
    Are you saying that they were made static, and therefore were implemented in a utility class? Isn't it the other way around? Suppose that I did put them into the List interface; that would mean they don't need to be static anymore, right?
    b)
    A bloated List interface is a problem. Many of the methods will however have a default, not-always-efficient implementation in an abstract base class.
    Many of the list algorithms dump sequential lists into an array, execute the algorithm, and dump the finished list back into a sequential list.
    I believe that there are several of them where one of the "dumps" can be avoided.
    And even if other algorithms would effectively just be moved around, it wouldn't necessarily be in the name of performance (some of them cannot really be done better), but in the name of consistency/convenience.
    Regarding convenience, I'm getting the message that some may think it more convenient to have these "extra" methods grouped in a utility class. That can be arranged.
    But when it comes to consistency with method names (which concerns usability as well), I felt it is something else entirely.
    For example, take the two methods fill and replaceAll in the Collections class. They both set specific elements (or all of them) to some value. So they're both related to the set method, but use method names that are very different. For me it makes sense to have a method called setAll(...) and overload it. And since the List interface has a set method, I would very much like to group all these related methods together.
    Can you follow my idea?
    And well, the Collections class would become smaller. If you ask me, it's rather bloated right now, and supports a huge mixed bag of related and unrelated utility methods. If we took this to the extreme, then the Collections class and the Arrays class should be merged.
    No, right? That would be hell : )
    2.
    At a first glance, your route might do the trick. But there are several things here that aren't right.
    a)
    In order to delete an object, you need to know where it is. The only remove method supported by PriorityQueue actually does a linear search. Increase and decrease operations are supposed to be O(log n); doing a linear search would ruin that.
    You need a method something like removeAt(int i), where i would be the index in the backing array (assuming you're using an array). The element itself would need to know that int, meaning that it needs an internal int field, even though this field is only required due to the internal workings of PriorityQueue. Every time you want to insert some element, you need to add a field that really has nothing to do with that element from an object-oriented view.
    b)
    Even if you had such a remove method, using it to increase/decrease a key would use up to twice the operations necessary.
    Increasing a key, for example, only requires you to float the element up the heap. You don't need to remove it first, which would require an additional log(n) operations.
    3.
    I've read the link before, and I agree with them. But I feel that there are other ways to avoid an insane number of interfaces. I also think I know why I arrive at other design choices.
    The Collection interface as it is now is smart because it covers a wide range of collection types with add and remove methods. This is useful because you can exchange elements between collections without knowing the type of the collection.
    The drawback is of course that not all collections are, e.g., modifiable.
    What I think the problem is, is that the Collection interface is trying to be two things at once.
    On one side, it wants to be the base interface. But on the other side, it wants to cast a wide net over all the collection types.
    So what I've done is make a Collection interface that is in fact a true base interface, only supporting methods that all collection types have in common.
    Then I have a separate interface that tries to support methods for exchanging elements between collections of unknown type.
    There isn't necessarily any performance benefit (actually, it may even introduce some overhead), but it certainly is easier to grasp, for me at least, since it is more logically grouped.
    I know that I'm basically challenging the design choices of Java programmers who have much more experience than me. Hell, they have probably already considered and rejected what I'm considering now. In that case, I defend myself by mentioning that it isn't described as a possibility in the FAQ : )
    4.
    This point is actually related to point 3, because if I want the Collection interface to only support common methods, then I can't have an Iterator with a remove method.
    But okay... I need to support it somehow. No way around it.
    5. 6. & 7.
    The message I'm getting here is that if I implement these changes to LinkedList and ArrayList, then they aren't really LinkedList and ArrayList anymore.
    And finally, why do that, when I'm going to do a class that (hopefully) can simulate both anyway?
    I hadn't thought of the names as being the problem.
    My line of thought was that, okay, you have this ArrayList that performs lousily on insertion and removal, and so you avoid it.
    But occasionally you need it (don't ask me how often this type of situation arises; rarely?), and so you would appreciate it if it performed "ok". It would still be linear, but would often perform much better (in extreme cases it would be in constant time).
    But these improvements would almost certainly change the way one would use LinkedList and ArrayList, and I guess that requires different names for them.
    Great input. That I wouldn't have thought of. Thanks.
    There are however some comments I should respond to:
    "And what happens if something is suibsequently inserted or removed between that element and the one you want?"
    Then it would perform just like one would expect from a LinkedList. However if that index were closer to the last indexed position, it would be faster. As it is now, LinkedList only chooses either the first index or the last to start the traversal from.
    If you're working with a small number of elements, then this is definitely not worth it.
    "It sounds to me like this (the hybrid list) is what you really want and indeed all you really need."
    You may be right. I felt that since the hybrid list would use twice as much memory, it would not always be the best choice.
    I'm going to think about that one. Thanks.

  • Primitive type Changing???

    One of my friends sent me this e-mail. Can anyone help??
    ok wood, here's one for you:
    The software I'm writing gets data from one of two processors on a satellite. These processors use different numbers of bytes for their primitive types.
    bool - 1 byte
    short - 2 bytes
    int - 2 bytes
    long - 4 bytes
    float - 4 bytes.
    So, as you know, the JVM uses 4 bytes for an int and 8 for a long. I want the people who are developing sims on top of my framework to be able to use a primitive type, such as an int, and have it only be 2 bytes... now I'm pretty sure this isn't possible. You can't overload a primitive type in Java, right? I have some ideas here on what to do (create whole new objects holding byte arrays and use java.nio to manage them), but it just seems like such a hassle. I guess I was wondering if you have ever seen this kind of thing before and if you had any advice. Thanks, man.

    In C, primitive types can vary in size, based on the platform being used. In Java, the size of primitives is static. If he wants 16-bit storage, short is the way to go. Well, a char is a couple of bytes, too, but it's unsigned, intended for Unicode characters, and probably not what he's looking for anyway...
    :o)
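    If he does go the java.nio route mentioned in the mail, the idea is roughly this (the field layout and the little-endian byte order below are assumptions made purely for illustration):
    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;
    public class TelemetryDecode {
        public static void main(String[] args) {
            // 1-byte bool, then a 2-byte int (12345), then a 4-byte long (10000), little-endian.
            byte[] raw = {0x01, 0x39, 0x30, 0x10, 0x27, 0x00, 0x00};
            ByteBuffer buf = ByteBuffer.wrap(raw).order(ByteOrder.LITTLE_ENDIAN);
            boolean flag  = buf.get() != 0;      // their 1-byte bool
            short   value = buf.getShort();      // their 2-byte int -> Java short
            int     count = buf.getInt();        // their 4-byte long -> Java int
            System.out.println(flag + " " + value + " " + count);   // true 12345 10000
        }
    }
    getShort and getInt always consume exactly 2 and 4 bytes regardless of platform, which is usually all that is needed on the Java side.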

  • Does TopLink support arrays of primitive types?

    Does TopLink support arrays of primitive types?

    You must use a collection.
    If you really wanted to, you could store the data as an array and use get/set methods to convert between the array and a collection, or customize your own ContainerPolicy; however, arrays are very inefficient to add/remove from, so TopLink only provides support for collections.

  • Goods Receipt Collective Slip for Mat Doc is not printing

    HI All,
    I have a strange issue. I'm trying to print a collective slip for a Goods Receipt which is created against a PO.
    I am able to print an individual slip when I create the GR (material document) against the PO with movement type 101, but I'm not able to print the collective slip when I create the GR against the PO with the same movement type. I'm saving the GR document with "collective slip" selected. The error I'm getting is "The system could not create output".
    I checked the condition records in MN21; the condition record is in place.
    I checked OMB5; the configuration for the movement type and material document is there.
    I checked NACR; the condition record is there.
    I found that in table NAST, the entry for the GR document created using a collective slip is not available, but the individual slip GR document entries are there in the NAST table.
    Any help on this matter is highly appreciated.
    I checked SDN and Googled it, but no result.
    Thanks in Advance.

    Check this:
    2. Before you set up the output determination, you have to run the program RM07NCUS for each client in which you want to use the output determination function.
    This program copies the basic settings from the SAP standard client (000).
    From:
    http://help.sap.com/saphelp_46c/helpdata/en/ed/6cec6eb435d1118b3f0060b03ca329/content.htm
    Also check that you have defined a printer for the output.

  • Collective deliveries for open sales orders

    Is there any process or transaction code for creating collective picking for open sales orders?
    In my company we are not using Warehouse Management, so we pick the goods manually.
    If there is any way, please post the answer.

    Check if VG01 helps you if you enter "K" as the Group Type.
    Cheers

  • Selecting #of bits for data types in Java

    Hi,
    I would like to know if we can constrain the compiler's allocation of memory for basic data types such as int, float, short, etc. I would like to read an exactly four-byte integer into an int, a two-byte short into a short, etc. I need to read these parameters at exactly their specified number of bytes, no more and no less; if I read anything more or less, I will be reading part of another parameter.
    I know that C allows us to do this by letting us specify the number of bits we would like to allocate for each data type (e.g. unsigned int:16; means I need an unsigned int of exactly 16 bits (2 bytes), no more and no less).
    Is there anything similar I can do in Java?
    Any suggestion is greatly appreciated.
    Thanks,

    All primitive types in Java are well-defined. In contrast to C/C++, an int in Java is always 32 bits, using two's complement, a char is always 16 bits, etc.
    Java does not give you direct access to physical memory; that would compromise the basic design of Java.
    You can read individual bytes from files, and you can do all the integer arithmetic in Java that you can in C/C++. To convert e.g. four bytes to one int, you could do the following:
    byte b1 = (some byte value),
         b2,
         b3,
         b4;
    int i = ((b1 & 0xFF) << 24) | ((b2 & 0xFF) << 16) | ((b3 & 0xFF) << 8) | (b4 & 0xFF);
    b1 being the most significant byte, b4 being the least significant (the & 0xFF masks prevent sign extension from corrupting the higher bytes).
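    For completeness, a stream-based sketch of the same conversion: DataInputStream reads fixed-width, big-endian primitives, so a 4-byte field is exactly readInt() and a 2-byte field is exactly readShort() (mask with & 0xFFFF if it is unsigned). The sample record bytes below are made up:
    import java.io.ByteArrayInputStream;
    import java.io.DataInputStream;
    import java.io.IOException;
    public class FixedWidthRead {
        public static void main(String[] args) throws IOException {
            byte[] record = {0x00, 0x00, 0x30, 0x39,        // 4-byte int: 12345
                             (byte) 0xFF, (byte) 0xFE};     // 2-byte field: -2 signed, 65534 unsigned
            DataInputStream in = new DataInputStream(new ByteArrayInputStream(record));
            int   fourByte = in.readInt();
            short twoByte  = in.readShort();
            int   unsigned = twoByte & 0xFFFF;
            System.out.println(fourByte + " " + twoByte + " " + unsigned);   // 12345 -2 65534
        }
    }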
    Did this answer your questions?
