Heap implementation

Hi, I'm trying to implement a min heap and a max heap. So far I am inserting nodes into an array and setting them up so each node knows its parent and its left and right children.
My problem is that when it tries to set the parent, I get a NullPointerException. It has to be something simple that I'm just missing.
Here is my code:
public class Node {
    Node leftChild;
    Node rightChild;
    Node parent;
    int nData;
    boolean Root;
    boolean Last;

    public Node(int nData) {
        this.nData = nData;
    }

    public void root() {
        Root = true;
    }

    public void last() {
        Last = true;
    }

    public void removeLast() {
        Last = false;
    }

    public void checkRoot(Node node) {
    }
}
public class Test {
    Node[] nodes;
    int i = 1;

    public Test() {
        nodes = new Node[25];
        addNode(new Node(10));
        addNode(new Node(15));
        addNode(new Node(12));
        addNode(new Node(9));
        // Prints node info
        for (int j = 1; j < 5; j++) {
            System.out.println(nodes[j].nData);
            if (nodes[j].leftChild != null)
                System.out.println("leftchild" + nodes[j].leftChild.nData);
            if (nodes[j].rightChild != null)
                System.out.println("right child" + nodes[j].rightChild.nData);
            if (nodes[j].parent != null)
                System.out.println("parent" + nodes[j].parent.nData);
            System.out.println("root" + nodes[j].Root);
            System.out.println("last" + nodes[j].Last);
        }
    }

    // Sets Root and Last nodes
    public void addNode(Node node) {
        nodes[i] = node;
        if (i == 1)
            node.root();
        if (nodes[i + 1] == null)
            node.last();
        if (nodes[i - 1] != null)
            nodes[i - 1].removeLast();
        update();
        i++;
    }

    // Sets left child, right child and parent nodes
    public void update() {
        for (int k = 1; k < 5; k++) {
            if (nodes[2 * k] != null)
                nodes[k].leftChild = nodes[2 * k];
            if (nodes[2 * k + 1] != null)
                nodes[k].rightChild = nodes[2 * k + 1];
            if (nodes[k / 2] != null)
                nodes[k].parent = nodes[k / 2];
        }
    }
}
Thank you for any replies, and I apologize for not being able to solve something so simple, hah.

> The last line of the Test class is where the null pointer is coming in: nodes[k].parent = nodes[k/2];

If that's where you're getting the NPE, then nodes[k] must be null. Either nodes[k] never had a non-null value, or it had one and you overwrote it with null. Examine your code to see if you ever assign a value to nodes[k].
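In this case nodes[k] is null because update() is called after every insert and always loops k from 1 to 4, so after the first insert it already tries nodes[2].parent = nodes[1] while nodes[2] is still empty. A minimal sketch of a guarded version (not the poster's exact fix; it keeps the same fields and 1-based array layout, walks the whole backing array, and simply skips empty slots):

public void update() {
    for (int k = 1; k < nodes.length; k++) {
        if (nodes[k] == null) {
            continue;                                // slot not filled yet
        }
        if (2 * k < nodes.length && nodes[2 * k] != null) {
            nodes[k].leftChild = nodes[2 * k];
        }
        if (2 * k + 1 < nodes.length && nodes[2 * k + 1] != null) {
            nodes[k].rightChild = nodes[2 * k + 1];
        }
        if (k > 1 && nodes[k / 2] != null) {         // index 1 is the root; it has no parent
            nodes[k].parent = nodes[k / 2];
        }
    }
}

An equally valid approach would be to set leftChild, rightChild, and parent only for the node just inserted inside addNode, instead of re-linking the whole array each time.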

Similar Messages

  • Heap implementation in ABAP

    I am trying to implement a heap in ABAP. I created a class which simulates this heap, but I am interested to know whether ABAP already has a ready-made heap class.
    My class definition has these methods:
    - GET_LAST_TOKEN
    - ADD_TOKEN
    - RELEASE_TOKEN
    - CHECK_IS_EMPTY
    - FLIP_HEAP
    Attributes:
    - HEAP //table
    - TOTAL //counter
    - C_FALSE //boolean false
    - C_TRUE //boolean true
    Implementation (only overall view)
    method ADD_TOKEN .
      APPEND i_token to heap.
      IF sy-subrc = 0.
        total = total + 1.
      ENDIF.
    endmethod.
    METHOD release_token .
      IF total > 0.
        DELETE heap INDEX total.
        IF sy-subrc = 0.
          total = total - 1.
        ENDIF.
      ENDIF.
    ENDMETHOD.
    METHOD get_last_token .
      DATA: l_token LIKE LINE OF heap.
      IF total is not INITIAL.
        READ TABLE heap
          INDEX total
          INTO e_token.
      ENDIF.
    ENDMETHOD.
    Is this a good idea?

    Hi,
    Good. Go through this link about heaps; I hope this will help you to solve your problem:
    http://www.redbooks.ibm.com/redbooks/pdfs/sg246291.pdf
    Thanks,
    mrutyun^

  • Heap implementation problem

    Could anyone give me some idea about heap implementation with a linked structure? As far as I know, I had no problem with the "HeapNode" class, but I have a lot of difficulty implementing the heap class itself. Also, I got the following code segment from a programming book, but I can't really keep everything in my mind.
    public void insertItem(Object k, Object e)
            throws InvalidKeyException {
        if (!comparator.isComparable(k))
            throw new InvalidKeyException("Invalid Key");
        Position z; // Position to insert
        if (isEmpty())
            z = root();
        else {
            z = last;
            while (!isRoot(z) && !isLeftChild(z))
                z = parent(z);
            if (!isRoot(z))
                z = rightChild(parent(z));
            while (!isExternal(z))
                z = leftChild(z);
        }
        expandExternal(z);
        replace(z, new KeyElementPair(k, e));
        last = z;
        // Up-heap bubbling comes next
        Position u;
        while (!isRoot(z)) { // Up-heap bubble
            u = parent(z);
            if (comparator.isLessThanOrEqualTo(
                    keyOfPosition(u),
                    keyOfPosition(z)))
                break;
            swap(u, z);
            z = u;
        }
    }

    private Object keyOfPosition(Position p)
            throws InvalidPositionException {
        return ((KeyElementPair) p.element()).key();
        // the cast on the line above should not fail
    }
    Any help would be greatly appreciated.

    I'm just using the -xml option, but I don't know what the "datapath" element is.
    This is my command line:
    java -Xmx1024M -classpath ".\UploadTest\classes;E:\software\apache\java\xalan-j_2_7_1-bin\xalan.jar;E:\software\apache\java\xalan-j_2_7_1-bin\serializer.jar" -Djavax.xml.transform.TransformerFactory=org.apache.xalan.xsltc.trax.TransformerFactoryImpl com.adobe.adept.upload.UploadTest http://************:8080/packaging/Package "E:\Documenti\eBook personali\****\*****.pdf" -pass ******** -xml -jpg
    The xalan implementation for xml serialization is more efficient. The -xml that you suggested is already on the command line.
    The xml content is:
    <package xmlns="http://ns.adobe.com/adept">
    <metadata xmlns:dc="http://purl.org/dc/elements/1.1/">
      <dc:title>*******</dc:title>
      <dc:creator>**************</dc:creator>
      <dc:publisher>**********</dc:publisher>
      <dc:format>application/pdf</dc:format>
    </metadata>
    <permissions>
      <display>
       <device/>
       <until>2013-01-13T13:10:49-07:00</until>
      </display>
      <excerpt>
       <until>2013-01-13T13:10:49-07:00</until>  
      </excerpt>
      <print>
       <count initial="10" max="20" incrementInterval="3600"/>
       <maxResolution>300</maxResolution>
      </print>
    </permissions>
    </package>
    What's "the path of the using the dataPath element"?
    Thank you.
    Gianpiero

  • Java priority queue

    Java provides PriorityQueue, and I have gone through its API.
    The implementation of PriorityQueue in Java does not provide methods to increase or decrease a key,
    and there must be a reason for it.
    But when I go through books on data structures, a lot of them talk about the increase/decrease-key operation
    of a priority queue.
    So I am just wondering why the increase/decrease-key operation is not provided in PriorityQueue. I cannot come up with a reason for it, but I think there must be one. Does anybody have any thoughts on this? Or is it just
    because the designers thought it's not needed?
    I checked the source for the Priority Queue and the heapify() method was declared private.

    lupansansei wrote:
    > I have used Java's priority queue and I have written my own, but I have never come across the terms "increase or decrease key". Do you mean something like 'upheap' or 'downheap' in relation to a 'heap' implementation, meaning move an entry to its correct position if the key changes? If so, then one should make the 'key' immutable so that those functions are not needed.
    Yes, by increase or decrease key I mean 'upheap' or 'downheap'. Sorry,
    maybe my choice of words was not correct.
    I couldn't get what you mean by making the 'key' immutable. Can you please explain it?
    If the key cannot change (i.e. it is immutable) then there is no need to ever change the position of an element.
    >
    Correct. Since PriorityQueue does not need to be implemented using a 'heap', there is no need for the heapify() method to be exposed. If one implemented it using a balanced tree or a skip list, the heapify() method would not be applicable.
    I am using PriorityQueue and I need to update the priority of the elements, and I was wondering whether to implement the whole queue
    myself or look for a better way of using the PriorityQueue class.
    Do you have any suggestions for efficiently updating the priority of elements?
    I have a priority queue implementation where elements know they are in a heap and know where they are in the heap. By doing this I can modify 'keys' and then move a value to its correct place in the queue in a very short time. The limitations this feature imposes on the elements and the possibility of corrupting the heap mean I don't often use this these days. It is far too error prone.
    These days in my simulations I normally remove an element from the queue, process the element and then create new elements and insert them back into the queue. This sometimes takes 2 lots of Log(n) operations where my specialized priority queue takes just Log(n) operations. The code is just so much more maintainable and I accept the hit.
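    A minimal sketch of that remove-and-reinsert approach with java.util.PriorityQueue (the Task class and its fields are made up purely for illustration):

    import java.util.Comparator;
    import java.util.PriorityQueue;

    public class ReinsertDemo {
        // Hypothetical task type used only for this illustration.
        static class Task {
            final String name;
            int priority;
            Task(String name, int priority) { this.name = name; this.priority = priority; }
        }

        public static void main(String[] args) {
            PriorityQueue<Task> queue =
                    new PriorityQueue<>(Comparator.comparingInt((Task t) -> t.priority));
            Task a = new Task("a", 5);
            Task b = new Task("b", 1);
            queue.add(a);
            queue.add(b);

            // "Decrease key" by removing the element, changing its priority,
            // and adding it back; remove(Object) is O(n), add is O(log n).
            queue.remove(a);
            a.priority = 0;
            queue.add(a);

            System.out.println(queue.poll().name); // prints "a"
        }
    }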

  • Best shortest path algorithm ??

    Hi
    I am developing an application to guide a user from one point to another. I need to work out the shortest path between these two points.
    Is Dijkstra's algorithm the best to use? or should I be looking for a different one? I am looking for speed.
    Thanks

    Choosing a "single source shortest path" algorithm depends on if you are going to use a weighted or not weighted graph, if all weights are non-negative or negative etc. If negative -weight edges are allowed, Djikstra's algorithm can not be used. Instead you can use the Bellman-Ford algorithm.
    The running time of Djikstra's algorithm is lower than that of the Bellman-Ford algorithm. Runningtime for the Bellman-Ford algorithm is in general O(V^2E). Because Djikstra's algorithm always chooses the "closest" vertex (in a weigthed directed graph G=(V,E)) , we say it uses a greedy strategy. This strategy does not always yield optimal result. How fast the Djiktra algorithm is, depends how the min-priority queue is implemented. f.example: if you use the linear array implementation of the min-priority queue, the runningtime for Djikstra's algorithm is O(v^3) while a binary min-heap implementation of min-priority queue gives a runningtime of O(V E lg V).
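    A compact sketch of Dijkstra's algorithm over an adjacency list, using java.util.PriorityQueue as the min-priority queue (the graph representation here is made up for illustration, and it assumes non-negative edge weights):

    import java.util.*;

    public class Dijkstra {
        // edges[u] holds {v, weight} pairs; weights must be non-negative.
        static long[] shortestPaths(List<int[]>[] edges, int source) {
            long[] dist = new long[edges.length];
            Arrays.fill(dist, Long.MAX_VALUE);
            dist[source] = 0;

            // Queue entries are {vertex, distance}; stale entries are skipped below.
            PriorityQueue<long[]> pq = new PriorityQueue<>(Comparator.comparingLong((long[] e) -> e[1]));
            pq.add(new long[]{source, 0});

            while (!pq.isEmpty()) {
                long[] top = pq.poll();
                int u = (int) top[0];
                if (top[1] > dist[u]) continue;           // outdated queue entry
                for (int[] edge : edges[u]) {
                    int v = edge[0];
                    long candidate = dist[u] + edge[1];
                    if (candidate < dist[v]) {
                        dist[v] = candidate;
                        pq.add(new long[]{v, candidate}); // re-insert instead of decrease-key
                    }
                }
            }
            return dist;
        }

        public static void main(String[] args) {
            @SuppressWarnings("unchecked")
            List<int[]>[] edges = new List[3];
            for (int i = 0; i < edges.length; i++) edges[i] = new ArrayList<>();
            edges[0].add(new int[]{1, 4});
            edges[0].add(new int[]{2, 1});
            edges[2].add(new int[]{1, 2});
            System.out.println(Arrays.toString(shortestPaths(edges, 0))); // [0, 3, 1]
        }
    }

    Re-inserting with a stale-entry check is the usual workaround for PriorityQueue's missing decrease-key operation.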

  • D-ary heap with Priority Queue implementation

    I have to construct a program that finds the k-th smallest integer in a given set S of numbers; it reads from the standard input a first line containing positive integers N, k, and d separated by spaces. Each of the following N lines contains a positive integer of the set S. I have to implement a generic d-ary heap class that implements all methods of the priority queue interface.
    I have the following code, but the insertion bubbling doesn't seem to work right.
    Any help would be great:
    import java.util.*;   
    public class Heap {
        static Element[] heap;
        int N;
        static int k;
        int d;
        static int size = 0;
        Compare comp;

        public Heap(int nodes, int max, Compare c) {
            N = max;
            d = nodes;
            heap = new Element[N];
            comp = c;
        }

        public static void main(String args[]) {
            Scanner _scan = new Scanner(System.in);
            // String Nkd = _scan.nextLine();
            // Scanner _scanNkd = new Scanner(Nkd);
            int _N = 0;
            int _d = 0;
            Compare _c = new Compare();
            _N = _scan.nextInt();
            k = _scan.nextInt();
            _d = _scan.nextInt();
            Heap _heap = new Heap(_d, _N, _c);
            int i = 0;
            int num = 0;
            while (_scan.hasNextLine() && num < _N) {
                System.out.println("test" + _scan.nextInt());
                _heap.insert(i, _scan.nextInt());
                i++;
                size++;
                num++;
            }
            for (int z = 0; z < _N; z++) {
                // System.out.println(heap[z].getKey());
            }
            Element kth = null;
            for (int j = 1; j <= k; j++) {
                kth = _heap.removeMin();
            }
            System.out.print(kth.getKey());
            System.out.print('\n');
            /*System.out.print(k);
            System.out.print('\n');
            System.out.print(_heap.size());
            System.out.print('\n');
            System.out.print('\n');
            System.out.print(heap[0].getKey());
            System.out.print('\n');
            System.out.print(heap[1].getKey());
            System.out.print('\n');
            System.out.print(heap[2].getKey());
            System.out.print('\n');
            System.out.print(heap[3].getKey());
            System.out.print('\n');
            System.out.print(heap[4].getKey());
            System.out.print('\n');
            System.out.print(heap[5].getKey());*/
        }

        public void insert(int i, int e) {
            heap[i] = new Element(e, i);
            this.bubbleUp(heap[i]);
        }

        public int size() { return size; }
        public boolean isEmpty() { return (size == 0); }
        public int min() { return heap[0].getKey(); }

        public Element remove() {
            int i = size - 1;
            size--;
            return heap[i];
        }

        public Element removeMin() {
            Element min = this.root();
            if (size == 1) {
                this.remove();
            } else {
                this.replace(this.root(), this.remove());
                this.bubbleDown(this.root());
            }
            return min;
        }

        public Element replace(Element a, Element b) {
            a.setIndex(b.getIndex());
            a.setKey(b.getKey());
            return a;
        }

        public void bubbleUp(Element e) {
            Element f;
            while (!e.isRoot(e.getIndex())) {
                f = this.getParent(e.getIndex());
                if (comp.compare(f, e) <= 0) {
                    break;
                } else {
                    int temp = f.getIndex();
                    f.setIndex(e.getIndex());
                    e.setIndex(temp);
                    swap(f, e);
                    System.out.println("bubbling");
                    e = f;
                }
            }
        }

        public void bubbleDown(Element e) {
            int i = e.getIndex();
            while (e.isInternal(i, size)) {
                Element s;
                if (!e.hasRight(i, size)) {
                    s = this.getLeft(i);
                } else if (comp.compare(this.getLeft(i), this.getRight(i)) <= 0) {
                    s = this.getLeft(i);
                } else {
                    s = this.getRight(i);
                }
                if (comp.compare(s, e) < 0) {
                    swap(e, s);
                    e = s;
                } else {
                    break;
                }
            }
        }

        public void swap(Element x, Element y) {
            int temp = x.getIndex();
            x.setIndex(y.getIndex());
            y.setIndex(temp);
        }

        public Element root() { return heap[0]; }
        public Element getLeft(int i) { return heap[i * 2]; }
        public Element getRight(int i) { return heap[i * 2 + 1]; }
        public Element getParent(int i) { return heap[i / 2]; }
    }
    class Element {
        private int key;
        private int index;

        public Element(int k, int i) {
            key = k;
            index = i;
        }

        public int getKey() { return key; }
        public void setKey(int k) { key = k; }
        public int getIndex() { return index; }
        public void setIndex(int i) { index = i; }

        public boolean isRoot(int i) {
            if (i == 0)
                return true;
            else
                return false;
            // return i == 1;
        }

        public boolean hasLeft(int i, int size) { return 2 * i <= size; }
        public boolean hasRight(int i, int size) { return 2 * i + 1 <= size; }
        public boolean isInternal(int i, int size) { return hasLeft(i, size); }
        public boolean isExternal(int i, int size) { return !isInternal(i, size); }
    }
    class Compare implements Comparator<Element> {
        public Compare() {}

        public int compare(Element a, Element b) {
            int x = 0;
            if (a.getKey() < b.getKey())
                x = -1;
            else if (a.getKey() == b.getKey())
                x = 0;
            else if (a.getKey() > b.getKey())
                x = 1;
            return x;
        }
    }
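    Since the assignment asks for a d-ary heap, note that the parent/child index arithmetic differs from the binary case used above. A small sketch of the 0-based index math and a sift-up, independent of the code above (the plain int array and comparison are simplified placeholders):

    // Sketch of 0-based d-ary heap index math and sift-up (not the poster's code).
    final class DaryHeapMath {
        static int parent(int i, int d) { return (i - 1) / d; }          // root is index 0
        static int child(int i, int d, int j) { return d * i + j + 1; }  // j-th child, j in [0, d)

        // Sift the element at position i up until the min-heap property holds.
        static void siftUp(int[] a, int i, int d) {
            while (i > 0 && a[i] < a[parent(i, d)]) {
                int p = parent(i, d);
                int tmp = a[i];
                a[i] = a[p];
                a[p] = tmp;
                i = p;
            }
        }
    }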

    Well, this might be a swifty thing to do, unfortunately the Java Dudes in their infinite wisdom decided that asynchronous servlets were a bad thing. I disagree mind you. So while you could do what you wanted, you still have all these threads hangin' out waiting for their work to be done, which is just really lamo.
    Anyhoo, to do this, just add a reference to the socket in the entry class, and when you pick up an entry from the heap, you can fetch the socket again, and send the results back to that socket. Of course you're probably going to moof up session info, and timeouts et. cetera, but it might work.

  • What is the best way to verify default heap size in Java

    Hi All,
    What is the best way to verify the default heap size in Java? Does it vary from JVM to JVM? I was reading this article http://javarevisited.blogspot.sg/2011/05/java-heap-space-memory-size-jvm.html , and it says the default size is 128 MB, but when I run the following code:
    public static void main(String args[]) {
        int MB = 1024*1024;
        System.out.println(Runtime.getRuntime().totalMemory()/MB);
    }
    It prints "870", i.e. 870 MB.
    I am a bit confused; what is the best way to verify the default heap size in any JVM?
    Edited by: 938864 on Jun 5, 2012 11:16 PM

    938864 wrote:
    > Hi Kayaman,
    > Sorry, but I don't agree with you on the verification part. Why can't I verify it? To me, default means the value when I don't specify -Xms and -Xmx, and by the way, I was testing that program on 32-bit JRE 1.6 on Windows. I am also curious about the significant difference between 128MB and 870MB that I saw; do you see anything obviously wrong?
    That spec is outdated. Since Java 6 update 18 (Sun/Oracle implementation) the default maximum heap space is calculated based on total memory availability, but never more than 1GB on 32-bit JVMs / client VMs. On a 64-bit server VM the default can go as high as 32GB.
    The best way to verify ANYTHING is to consult multiple sources of information, especially those produced by the source, not some page you find on the big bad internet. Even Wikipedia is a whole lot better than any random internet site, IMO. That's common sense; I can't believe you put so little thought into it that you have to ask in a forum.
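    A small way to check the effective default from inside a program is Runtime.maxMemory(); totalMemory() only reports the currently committed heap, which is one reason the numbers people quote can differ. A minimal sketch (plain Java, nothing vendor-specific assumed):

    public class MaxHeapCheck {
        public static void main(String[] args) {
            long mb = 1024 * 1024;
            // maxMemory() is the ceiling the JVM will try to use (roughly -Xmx),
            // totalMemory() is only what has been committed so far.
            System.out.println("max heap  : " + Runtime.getRuntime().maxMemory() / mb + " MB");
            System.out.println("committed : " + Runtime.getRuntime().totalMemory() / mb + " MB");
        }
    }

    On a HotSpot JVM you can also print the value the JVM actually chose with java -XX:+PrintFlagsFinal -version and look for MaxHeapSize in the output.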

  • Threaded inner classes & heap memory exhaustion

    (_) How can I maximize my threading without running out of
    heap memory?
    Push it to the limit, but throttle back before a
    java.lang.OutOfMemoryError.
    (_) Within one threaded class, ThreadClass, I have two threaded inner classes. For each instance of ThreadClass I only
    start one instance of each inner class.
    And I start hundreds of ThreadClass instances, but not until the previously running ThreadClass object exits, so only one should be running at any given time.
    So, what about threaded inner classes?
    Are they good? Bad? Do they cause OutOfMemoryErrors?
    Are those inner threads not dying?
    What are common causes of
    java.lang.OutOfMemoryError: Java heap space?
    My program runs for about 5 minutes, then
    bails with the memory error.
    How can I drill down and see what
    is eating up all my memory?
    Thanks.

    > A Thread class is not the same as a thread of
    > execution. Those inner-class-based threads of
    > execution are not dying.
    Maybe. But this is the way I test a thread's life:
    public void run() {
        System.out.println("thread start");
        System.out.println("thread dies and releases memory");
    }
    For each inner thread, and the outer thread, this approach for
    testing thread life reveals that they die.
    > Why don't you use a thread pool?
    OK. I will think about how to do this.
    > If not, you need to ensure those inner threads have
    > exited and completed.
    What is a 100% sure check to guarantee a thread exits, other than
    the one I use above?
    Note:
    The outer thread is running on a remote host, and the inner threads
    are running locally. Here are the details:
    public class BB implements Runnable, FinInterface {
      public void run() {
        // do some work on the remote machine
      }
      private void startResultsHandler(OisXoos oisX) {
         ResultHandler rh = new ResultHandler(oisX);
         rh.start();
      }
      public void startDataProxy(OisXoos oisX, String query) {
         DataProxy dp = new DataProxy(oisX, query);
         dp.start();
      }
      public class ResultsHandler extends Thread {
          // runs locally; waits for results from servers
          public void run() {
              ObjectInputStream ois = new ObjectInputStream(oisX.input);
              Set result = (Set) ois.readObject();
          }
      }  // ____ class :: _ ResultsHandler _ :: class ____
      public class DataProxy extends Thread {
          // runs locally; performs db queries on behalf of servers
          public void run() {
              ObjectOutputStream oos = new ObjectOutputStream(oisX.output);
              while(moreData) {
                  .... // sql queries
                  oos.writeObject(data);
              }
              startResultsHandler(oisX);
          }
      } // _____ class  :: _ DataProxy _ :: class _____
    }
    Now, the BB class is not started locally.
    The inner threads are started locally both to service data requests
    by the BB thread and to wait for its results.
    (_) So, maybe the inner threads cannot exit (but they sure look
    like they exit) until their parent BB thread exits.
    (_) Yet, those inner threads have no knowledge that the BB
    thread is running.
    Externalizing those inner thread classes would put two weeks of work
    in the dust bin. I want to keep them internal.
    Thanks.
    Here is the piece of code that controls everything:
    while(moreData) {
      FinObjects finObj = new BB();
      String symb = (String) data_ois.readObject();
      OisXoos oisX = RSAdmin.getServer();
      oisX.xoos.writeObject(finObj);
      finObj.startDataProxy(finObj, oisX, symb);
    }
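    Since a thread pool was suggested above, here is a minimal sketch of the idea with java.util.concurrent (the task body is a placeholder; the point is that a fixed pool caps how many threads ever exist instead of creating hundreds of Thread objects):

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    public class PoolSketch {
        public static void main(String[] args) throws InterruptedException {
            // At most 4 worker threads exist, no matter how many tasks are submitted.
            ExecutorService pool = Executors.newFixedThreadPool(4);
            for (int i = 0; i < 100; i++) {
                final int taskId = i;
                pool.submit(() -> {
                    // placeholder for the real work (e.g. one DataProxy/ResultsHandler cycle)
                    System.out.println("task " + taskId + " on " + Thread.currentThread().getName());
                });
            }
            pool.shutdown();                             // stop accepting new tasks
            pool.awaitTermination(1, TimeUnit.MINUTES);  // wait for submitted tasks to finish
        }
    }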

  • JVM heap size limit under Windows

    Hi,
    I'm looking either for some help with a workaround, or
    confirmation that the information I've found is still the case for the
    current state of Java.
    Development machine is Win XP Pro, 2G RAM.
    Biggest heap I can allocate is about 1.6G, and that is not large enough for this
    app.
    I have a Swing application that
    1) must run on Win XP, 32 bit
    2) must implement an editor (similar to Excel but with fewer features) to handle large csv files
    ( up to about 800Mb).
    3) Strong preference for Java 5, though higher could conceivably be supported.
    Research so far tells me that this is the result of process memory limitations
    of Windows and the JVM, and that I might be able to squeeze a little more heap with
    Windows' rebase command, but probably not enough and I would start running the
    risk of conflicts with other applications on my users' systems. Ugh.
    Also I read about the Windows /3GB switch, but posts say that the JDKs available are not
    built to be able to use that feature. I haven't had a chance to add memory to
    test that yet. However, I'm also under the impression that I should be able to
    allocate a heap larger than physical RAM ... except for that process size limit.
    So ... my information is basically that I'm stuck with a limit of about 1.6G for
    heap size, regardless of the RAM on my computer.
    Can anyone confirm whether that is still correct, preferably with a pointer to some
    official reference ?
    Or better yet, point me toward a workaround?
    Thanks!
    -tom

    Some bookmarks I have on this topic.
    http://sinewalker.wordpress.com/2007/03/04/32-bit-windows-and-jvm-virtual-memory-limit/
    http://stackoverflow.com/questions/171205/java-maximum-memory-on-windows-xp
    The first link pulled together what I found in lots of bits and pieces elsewhere, nice to have a coherent summary :)
    The second link offered a bit of insight into the JVM that I hadn't seen yet.
    Thanks!

  • Implementation & performance of StoredCollection.removeAll()

    I'm working with an app that uses StoredMap extensively. In one case, I have a StoredMap with about 10 million entries, and then roughly 10% of the entries with arbitrary keys need to be removed.
    Using StoredMap.remove() works just fine, although it is somewhat slow. I have a large known sequence of keys to remove so I tried using StoredMap.keySet().removeAll(), and encountered OOM errors every time. I expected that at worst removeAll() would perform the same as iterating over remove(), and I was hoping that it would be much faster similar to how putAll() is faster than iterating over put(), presumably because multiple elements are inserted in a single transaction.
    Looking at the source for StoredCollection.removeAll(), it appears that it iterates over the entire keySet and tests whether each iterated key exists in the collection argument. My naive assumption would be that it should iterate over the collection argument, not the stored key set. The only time the existing approach might make sense is if the collection argument is larger than the StoredCollection itself.
    I'm also not sure why it's generating OOM errors. The test below is with only a 64MB heap size, but even in my app with a multi-gigabyte heap I'm seeing OOM errors generated when using removeAll().
    Below is a simple test class demonstrating the poor performance. With a StoredMap of 100,000 items, my results are:
    removeUsingRemoveOnMap 92ms
    removeUsingRemoveAllOnKeySet 735ms
    With a StoredMap of 1,000,000 items, my results are:
    removeUsingRemoveOnMap 1475ms
    Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
         at com.sleepycat.je.log.LogUtils.readByteArray(LogUtils.java:333)
         at com.sleepycat.je.tree.IN.readFromLog(IN.java:3438)
         at com.sleepycat.je.log.entry.INLogEntry.readEntry(INLogEntry.java:96)
         at com.sleepycat.je.log.LogManager.getLogEntryFromLogSource(LogManager.java:747)
         at com.sleepycat.je.log.LogManager.getLogEntry(LogManager.java:664)
         at com.sleepycat.je.tree.IN.fetchTarget(IN.java:1215)
         at com.sleepycat.je.tree.Tree.getNextBinInternal(Tree.java:1419)
         at com.sleepycat.je.tree.Tree.getNextBin(Tree.java:1282)
         at com.sleepycat.je.dbi.CursorImpl.getNextWithKeyChangeStatus(CursorImpl.java:1629)
         at com.sleepycat.je.dbi.CursorImpl.getNext(CursorImpl.java:1499)
         at com.sleepycat.je.dbi.CursorImpl.getNextNoDup(CursorImpl.java:1688)
         at com.sleepycat.je.Cursor.retrieveNextAllowPhantoms(Cursor.java:2189)
         at com.sleepycat.je.Cursor.retrieveNext(Cursor.java:1987)
         at com.sleepycat.je.Cursor.getNextNoDup(Cursor.java:872)
         at com.sleepycat.util.keyrange.RangeCursor.doGetNextNoDup(RangeCursor.java:920)
         at com.sleepycat.util.keyrange.RangeCursor.getNextNoDup(RangeCursor.java:475)
         at com.sleepycat.collections.DataCursor.getNextNoDup(DataCursor.java:463)
         at com.sleepycat.collections.StoredIterator.move(StoredIterator.java:619)
         at com.sleepycat.collections.StoredIterator.hasNext(StoredIterator.java:151)
         at com.sleepycat.collections.StoredCollection.removeAll(StoredCollection.java:354)
         at com.sleepycat.collections.StoredCollection.removeAll(StoredCollection.java:330)
         at JETester.removeUsingRemoveAllOnKeySet(JETester.java:54)
         at JETester.main(JETester.java:37)
    import com.sleepycat.bind.tuple.TupleBinding;
    import com.sleepycat.collections.StoredMap;
    import com.sleepycat.je.*;
    import java.io.File;
    import java.util.ArrayList;
    import java.util.List;

    public class JETester {

        public static void main(final String[] args) throws DatabaseException {
            final StoredMap<String,String> storedMap = setupMap(new File("c:\\temp"));
            populateMap(storedMap, 100000);
            removeUsingRemoveOnMap(storedMap);
            removeUsingRemoveAllOnKeySet(storedMap);
        }

        private static void removeUsingRemoveOnMap(final StoredMap<String,String> storedMap) {
            final long startTime = System.currentTimeMillis();
            for (int i = 0; i < 100; i++) {
                storedMap.remove(String.valueOf(i));
            }
            System.out.println("removeUsingRemoveOnMap " + (System.currentTimeMillis() - startTime) + "ms");
        }

        private static void removeUsingRemoveAllOnKeySet(final StoredMap<String,String> storedMap) {
            final List<String> removalList = new ArrayList<String>();
            for (int i = 100; i < 200; i++) {
                removalList.add(String.valueOf(i));
            }
            final long startTime = System.currentTimeMillis();
            storedMap.keySet().removeAll(removalList);
            System.out.println("removeUsingRemoveAllOnKeySet " + (System.currentTimeMillis() - startTime) + "ms");
        }

        private static void populateMap(final StoredMap<String, String> storedMap, final int size) {
            for (int i = 0; i <= size; i++) {
                storedMap.put(String.valueOf(i), String.valueOf(i));
            }
        }

        private static StoredMap<String,String> setupMap(final File databaseDirectory)
                throws DatabaseException {
            final EnvironmentConfig envConfig = new EnvironmentConfig();
            envConfig.setAllowCreate(true);
            envConfig.setTransactional(true);
            final Environment environment = new Environment(databaseDirectory, envConfig);
            final DatabaseConfig dbConfig = new DatabaseConfig();
            dbConfig.setAllowCreate(true);
            dbConfig.setTransactional(true);
            final Database database = environment.openDatabase(null, "test", dbConfig);
            final TupleBinding<String> stringBinding = TupleBinding.getPrimitiveBinding(String.class);
            return new StoredMap<String,String>(database, stringBinding, stringBinding, true);
        }
    }

    Hi,
    Thank you for posting a complete test!
    I'll take a look at why removeAll is implemented the way it is later on, and report back here.
    For now I just wanted to say that the OOME is probably caused by the large transaction. removeAll (if the database is transactional) must do the entire operation within a single transaction. Each record is locked, which takes up memory. In general, large transactions do take up a lot of memory. So if you don't need to delete the entire set of keys atomically, you're better off not to use removeAll (use remove instead).
    To be honest, the bulk collections APIs (putAll, removeAll, etc) were not implemented for performance reasons, but mainly to adhere to the Java collections interface spec for compatibility. It would be great to make these also provide performance benefits, and we should continue to improve them -- your test will be useful for doing that, thanks for that. But you should know that a lot of effort has not been put into that aspect of their implementation so far.
    --mark
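    A minimal sketch of the per-key alternative described above: StoredMap implements java.util.Map, so removing the keys one at a time (assuming no enclosing transaction) avoids the single large transaction that removeAll requires. The names follow the test class above.

    import com.sleepycat.collections.StoredMap;
    import java.util.List;

    public class PerKeyRemove {
        // Removes each key individually instead of keySet().removeAll(keys),
        // avoiding one large transaction that must lock every record at once.
        static void removeKeys(StoredMap<String, String> storedMap, List<String> keysToRemove) {
            for (String key : keysToRemove) {
                storedMap.remove(key);
            }
        }
    }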

  • Large heap sizes, GC tuning and best practices

    Hello,
    I’ve read in the best practices document that the recommended heap size (without JVM GC tuning) is 512M. It also indicates that GC tuning, object number/size, and hardware configuration play a significant role in determining what the optimal heap size is. My particular Coherence implementation contains a static data set that is fairly large in size (150-300k per entry). Our hardware platform contains 16G physical RAM available and we want to dedicate at least 1G to the system and 512M for a proxy instance (localstorage=false) which our TCP*Extend clients will use to connect to the cache. This leaves us 14.5G available for our cache instances.
    We’re trying to determine the proper balance of heap size vs. the number of cache instances and have ended up with the following configuration: 7 cache instances per node running with a 2G heap using a high-units value of 1.5G. Our testing has shown that using the Concurrent Mark Sweep GC algorithm produces no substantial GC pauses, and we have also done testing with a heap fragmentation inducer (http://www.azulsystems.com/e2e/docs/Fragger.java) which also shows no significant pauses.
    The reason we opted for a larger heap was to cut down on the cluster communication and context switching overhead as well as the administration challenges that 28 separate JVM processes would create. Although our testing has shown successful results, my concern here is that we’re straying from the best practices recommendations and I’m wondering what others thoughts are about the configuration outlined above.
    Thanks,
    - Allen Bettilyon


  • NetBeans and Heap Size

    I'm trying to write a program using NetBeans and I keep getting OutOfMemoryErrors. I have 2GB of RAM and I've attempted to modify the netbeans.conf file to increase the heap size at startup. The changes I've made aren't being reflected in the memory manager tool (it still indicates a maximum size of ~50MB). The pertinent code from the config file is as follows:
    netbeans_default_options="-J-Xms512m -J-Xmx1024m -J-XX:PermSize=128m -J-XX:MaxPermSize=512m -J-Xverify:none -J-Dapple.laf.useScreenMenuBar=true -J-XX:NewSize256m"
    Assuming my memory usage won't exceed these limits, any suggestions as to how to make sure the changes are being implemented?

    I would subscribe to the NetBeans user mailing list and pose your question to that list. The Sun NetBeans gurus live there.
    I would also use a separate email account to subscribe to the list, as it generates a lot of messages daily. If that's not an option, definitely try to set up a rule that separates messages from the list from your regular email.
    The main NetBeans top-level mailing list web page is here with all the instructions to subscribe and unsubscribe.
    Good luck,
    JJ
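    One thing worth double-checking in the posted line (a guess on my part, not something the reply above addressed): -J-XX:NewSize256m is missing the '=' sign; the option syntax is -XX:NewSize=256m, and a malformed option is a common reason JVM settings are not picked up as expected. A corrected line might look like:

    netbeans_default_options="-J-Xms512m -J-Xmx1024m -J-XX:PermSize=128m -J-XX:MaxPermSize=512m -J-Xverify:none -J-Dapple.laf.useScreenMenuBar=true -J-XX:NewSize=256m"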

  • Relationship between Dynamic Memory Heap and Heap Data Structure

    This question is not strictly related to Java, but rather to programming in general, and I tend to get better answers from this community than any where else.
    My school and industry experience have somehow not given me the opportunity to explore and understand heaps (the data structure), so I'm investigating them now, and in particular, I've been looking at applications. I know they can be used for priority queues, heap sorts, and shortest path searches. However, I would have thought that, obviously, there must be some sort of relationship between the heap data structure and the dynamic memory heap. Otherwise, I can think of no good reason why the dynamic memory heap would be named "heap". Surprisingly, after searching the web for 90 minutes or so, I've seen vague references, but nothing conclusive (the trouble seems to be that it's hard to get Google to understand that I'm using the word "heap" in two different contexts, and similarly, it would not likely understand that web authors would use the word in two different contexts).
    The Java Virtual Machine Spec is silent on the subject, as "The Java virtual machine assumes no particular type of automatic storage management system, and the storage management technique may be chosen according to the implementor's system requirements."
    I've seen things like:
    [of dynamic memory] "All the blocks of a particular size are kept in a sorted linked list or tree (I extrapolate that sorted tree could imply heap)"
    [of dynamic memory] "The free and reserved areas of memory are maintained in a data structure similar to binary trees called a heap"
    [of dynamic memory] "This is not related to the heap data structure"
    [of dynamic memory] "Not to be confused with the data structure known as a "heap"
    [of data structure] "Not to be confused with the dynamic memory pool, often known as TheHeap"
    At this point, I've come to surmise that some (but not all) memory management algorithms use heaps to track which (pages? blocks? bytes?) of memory are used, and which are not. However, the point of a heap is to store data so that the max (or min) key is at the root of the heap. But we might want to allocate memory of different sizes at different times, so it wouldn't make sense to key on the amount of available memory in a particular region of the free store.
    I must assume then that there would be a different heap maintained for each size of memory block that can be allocated, and the key must have something to do with the attractiveness of the particular memory block in the heap (perhaps the lowest address, resulting, hopefully, in growing the free store space less often, leaving more space for the stack to grow, or perhaps keyed based on the fragmentation, to hopefully result in less fragmentation, and therefore more efficient use of the memory space, or perhaps based on page boundaries, keeping as much data in the same page as possible, etc).
    So at this point, I have a few questions I've been unable to resolve completely:
    1. Am I correct that the heap was so named because (perhaps at one point in time), a heap is/was commonly used to track the available memory in the free store?
    2. If so, would it be correct that there would be a heap per standard block size?
    3. Also, at what level of granularity would a heap typically be used (memory page, memory blocks, individual words (4-bytes))?
    4. What would be the most likely property one would use as a key. That is, what makes the root item on the heap ideal?
    5. Would an industrial-strength system like the JVM use a (perhaps modified or tuned) heap for this sort of task, or would this typically be too naive for a real-world solution today?
    Any insight would be awesome!
    Thanks,
    A.

    jschell wrote:
    I think you are not only mixing terms but domains.
    For starters the OS allocs memory. Applications, regardless of language, request memory from the OS and use it in various ways.
    There are many variations of the term "heap" like the following.
    [http://en.wikipedia.org/wiki/Heap_(data_structure)]
    [http://en.wikipedia.org/wiki/Dynamic_memory_allocation]
    A Java VM will request memory from the OS (from a 'heap') and use it in its application 'heap' (C/C++) and then create the Java 'heap'. There can be variations of that along the way that can and likely will include variations of how each heap is used, potentially code that creates its own heap, and potentially other allocators which use something which is not a heap.
    This last part I find a bit confusing. By "use something which is not a heap", do you mean the heap data structure, or the dynamic memory pool meaning of heap? If the former, then you would be implying that it would be common for a heap data structure to be used to manage the heap dynamic memory pool. If the latter, what would this "something which is not a heap" be? The best definition of "heap" I've found simply states that it is a pool of memory that can be dynamically allocated. If there is some other way of allocating dynamic memory, then it would suggest that the previous definition of "heap" is incomplete.
    >
    So to terms.
    1. Am I correct that the heap was so named because (perhaps at one point in time), a heap is/was commonly used to track the available memory in the free store?
    Which 'heap'? The VM one? It is probably named that because the implementors of the Sun VM were familiar with how C++ and Smalltalk allocated memory.
    Okay, but that begs the question: was the heap in C++ and/or Smalltalk so named for the above queried reason?
    >
    2. If so, would it be correct that there would be a heap per standard block size?
    Not sure what you are referring to, but probably a detail of the implementation. And since there are different levels the question doesn't mean much.
    However, OS allocations are always by block, if that helps. After that it requires making the question much, much more specific.
    3. Also, at what level of granularity would a heap typically be used (memory page, memory blocks, individual words (4 bytes))?
    Again not specific enough. A typical standard implementation of a heap could not be at the word level. And it is unlikely, but not impossible, that variations would support word-size allocations.
    The VM heap might use word boundaries (but not size), whereas the application heap certainly does (word boundary).
    My understanding of it is that the application would request blocks from the OS, and then something like malloc would manage the memory within the allocated blocks. malloc (or whatever equivalent Java uses) would have to keep track of the memory it has allocated somehow, and I would think it would have to do this at the word level, since it's most commonly going to allocate memory at the word level to be references to other objects, etc.
    So I guess my question here would really be: if the dynamic memory heap is so named because there has been a memory management strategy that relied upon a heap data structure (which I've found no proof of, but have found some suggestive literature), then would that probably have applied at the OS page-fault level, tracking allocated blocks, or would that have applied at the malloc level, allocating individual words as necessary?
    >
    4. What would be the most likely property one would use as a key? That is, what makes the root item on the heap ideal?
    "Key" is not a term that will apply in this discussion.
    You appear to be referring to strategies for effective allocation of memory such as allocations from different regions by size comparison.
    It is possible that all levels might use such an allocator. General purpose applications do not sort allocations though (as per your one reference that mentions 'key').
    Sorry, I got the term "key" from an article I read regarding heaps, which indicates that a "key" is used to sort the elements, which I guess would be a more generalized way to make a heap than assuming a natural ordering on the elements in the heap. I'm not sure if the terminology is standard.
    >
    5. Would an industrial-strength system like the JVM use a (perhaps modified or tuned) heap for this sort of task, or would this typically be too naive for a real-world solution today?
    Again too indefinite. The Sun VM uses a rather complicated allocator, the model for which originated after years of preceding research, certainly in Smalltalk and in Lisp as well, both commercially and academically.
    I am sure the default is rules-driven, either explicitly or implicitly, as well. So it is self-tuning.
    There are command line options that allow you to change how it works as well.
    I guess perhaps I could attempt to clarify my initial question a bit.
    There is a 1:1 correspondence between the runtime stack and a stack data structure. That is, when you call a function, it pushes a stack frame onto the runtime stack. When you return from a function, it pops a stack frame from the runtime stack. This is almost certainly the reason the runtime stack is named as it is.
    The question is: is there, or has there ever been, a 1:1 correspondence between some aspect of the dynamic memory heap, or how it is managed, and a heap data structure? If so, it would explain the name, but I'm a bit puzzled as to how a heap data structure would be of assistance in creating or managing the dynamic memory heap. If not, on the other hand, then does anybody know where the name "heap" came from, as it applies to the dynamic memory pool?
    A.

  • Large and long lived objects, heap vs. NIO?

    Hi,
    Our application keeps a large number of objects in cache (based on Oracle Coherence), and the objects are related to subscriber information, which means they are supposed to stay in the cache for a very long time.
    The heap size we are looking at is about 3+ GB, and probably the majority of the heap will be used to hold the cached objects. Is there any way to let the JRockit GC know about the 'cache' concept so it won't spend too much effort on these 'cached' objects?
    Any JRockit tuning tips for this kind of application?
    Is NIO a better approach, meaning use NIO to hold the cached objects rather than the heap?
    But every NIO-backed object also consumes some heap (a Java object is created), at least in the Sun HotSpot implementation; I'm not sure what the trade-off is. What is JRockit's standpoint on NIO, given its GC advantages over Sun HotSpot, namely JRockit's 'guaranteed pause time' combined with the ability to use a large heap size?
    NIO seems to allow going beyond the traditional concern about long pauses when the heap size is large.
    Our application is supposed to handle at least several hundred GB in a distributed configuration (a cluster based on Oracle Coherence).
    Is this something where JRockit may or may not have an advantage or disadvantage?
    We heard about 'distributed GC' in a distributed environment, meaning the dependency on other JVMs may cause one local GC to depend on a remote GC on a remote node if GC happens at the same time on different nodes. Any thoughts from a JRockit perspective?
    Regards,
    Jasper

    Thanks a lot, Stefan
    I appreciate the extra miles you went beyond just the JRockit...
    One clarification to my original question: I actually mean a large number of objects rather than large-sized objects. The objects range from 50 bytes to a few KB, and the total number of objects is likely in the several millions.
    We have been testing Heap vs. NIO based on Sun JVM.
    But we are not able to see clear advantages one over the other at this point.
    -Using NIO does use less heap but the overall process memory between HEAP and NIO is not much different in our case.
    -Our original thinking in using NIO is that we want to cache more than 2 GB of objects per JVM so we can make good use of standard server configurations (8/16 cores, 32/64 GB RAM) without a huge number of JVMs. We're targeting 100+ million subscribers in the telecom space. The 2 GB is basically the limit that normal GC (HotSpot) can probably handle without causing long pauses; otherwise it will break our SLA (the latency requirement is in the range of milliseconds).
    -From reading about the HotSpot JVM, it turns out using NIO is not totally heap-free, meaning additional book-keeping for the NIO-resident objects is created in the form of Java objects, which consumes heap too. So GC is not totally immune. Also there is some more overhead in handling the NIO objects (create/delete) via the JNI calls.
    -Even though we are not using NIO directly, we use Coherence, and Coherence provides the HEAP and NIO options for the cached objects, but ultimately it's up to the JVM. Therefore I try to bring this up with JRockit. I think the NIO size limit is probably because one direct memory allocation can only be as large as an 'int' can cover, but if one allocation call is not big enough, multiple allocation calls should do (probably more complexity/effort...). But an 'int' covers up to 4GB, which is already good enough in our case for a JVM caching objects in NIO.
    -Our ultimate decision is based on TCO, low CPU/memory with high throughput and low latency meet the SLAs.
    We will come back to share our test results based on JRockit RealTime later and may ask for more of your insights then.
    Best Regards,
    Jasper
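    For reference, the 'NIO' option discussed here means keeping serialized cache data in direct (off-heap) buffers. A tiny sketch of the underlying mechanism, independent of Coherence or JRockit (the sizes and the length-prefixed layout are illustrative only):

    import java.nio.ByteBuffer;

    public class OffHeapSketch {
        public static void main(String[] args) {
            // Direct buffers live outside the Java heap, so the GC never copies
            // their contents; only the small ByteBuffer wrapper object is on-heap.
            ByteBuffer offHeap = ByteBuffer.allocateDirect(64 * 1024 * 1024); // 64 MB

            byte[] serializedEntry = "example cache value".getBytes();
            offHeap.putInt(serializedEntry.length);   // simple length-prefixed layout
            offHeap.put(serializedEntry);

            offHeap.flip();                           // switch from writing to reading
            byte[] readBack = new byte[offHeap.getInt()];
            offHeap.get(readBack);
            System.out.println(new String(readBack));
        }
    }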

  • Object in cache count and total heap size consumed.

    Hi
    Can you please help me extract the total number of custom value objects (count) and the total heap space they consume? How do we extract this information (from jrcmd or the JRockit command control - JRockit JVM)?
    Do we need to use com.tangosol.net.management.Registry? Can you share some example/documentation in this regard?
    Thanks
    sunder

    Hi NJ,
    Thanks for your response - if I have to implement a unit-calculator, I need to use the instrumentation API to compute the size of an entry, which will affect the performance of the system.
    Here is my problem.
    1 - Have a com.tangosol.net.DefaultCacheServer (distributed cache) instance running
    2 - A client joins the distributed cluster and inserts records into cluster and leaves the cluster.
    When I do jrcmd on "DefaultCacheServer" I don't see any of the custom Java components (beans put in the distributed cache) - I want to have visibility of the objects being put in the cache and the actual space consumed. Is there a way to get this?
    [xx@yyy ~]$ jrcmd 10693 print_object_summary
    10693:
    --------- Detailed Heap Statistics: ---------
    64.7% 6565k 2571 +6565k [B
    10.2% 1034k 14816 +1034k [C
    3.7% 374k 3199 +374k java/lang/Class
    3.6% 364k 15560 +364k java/lang/String
    2.8% 280k 3468 +280k [Ljava/lang/Object;
    1.2% 119k 118 +119k [Lcom/tangosol/util/RecyclingLinkedList$Node;
    1.0% 98k 2510 +98k com/tangosol/io/ByteArrayWriteBuffer
    0.9% 94k 2418 +94k com/tangosol/run/xml/SimpleElement
    0.8% 81k 2088 +81k com/tangosol/run/xml/SimpleElement$AttributeMap
    0.8% 80k 696 +80k [Ljava/util/HashMap$Entry;
    0.6% 64k 2740 +64k java/util/HashMap$Entry
    Note that no custom components are listed in the DefaultCacheServer instance.
    Thanks
    sunder
