Running out of memory - where do I not free objects?

Hello there,
I am analyzing data on an Itanium with 1GB of memory using the JSDK 1.4 (Java(TM) 2 Runtime Environment, Standard Edition, build 1.4.2-beta-b19). I have a lot of memory trouble reading data as small as 120MB and processing it (using the Pattern class and an ArrayList or a static String array), so I wrote this little program to test the memory:
               print("Starting memcheck - press key...");
               int border= 100000;
               int bCount = border;
               int limit = 18000000;
               String tst = new String("Hi, Tester!");
               System.in.read();
               print("Key for 1-String-Buffer...");
               System.in.read();
               StringBuffer checkSt = new StringBuffer(limit*tst.length());
               print("1-String-Buffer created. Key for clearance...");
               System.in.read();
               checkSt = null;               
               System.gc();
               print("Key for Strings...");
               System.in.read();
               String [] checkS = new String[limit];
               for (int i=0; i<limit; i++) {
                    checkS[i] = new String(tst);
                    if (--bCount == 0) {
                         bCount = border;
                         print("Created: "+i+" strings!");
                    }
               }
               print("Strings created. Key for clearance...");
               System.in.read();
               checkS = null;
               System.gc();
As I originally thought, this uses about 800MB with the static String array and 471MB with the StringBuffer, but it crashes (out of memory) with an ArrayList even when permitted up to 1000MB.
The odd thing is that when I put the static String array part after the StringBuffer part, it crashes too (out of memory). So I guess I have not fully understood how to free memory in Java yet and am overlooking something here; can someone please help me? I have already tried for more than a week to fix this problem, but I must be overlooking the obvious...
According to my math, if I read 100MB, it should use at most 2*100MB plus overhead (2* because of Java's internal Unicode representation). Where am I going wrong?
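For comparison, here is a small sketch of my own (not part of the original test program) that measures heap usage via Runtime and presizes the list, since ArrayList growth briefly keeps both the old and the new backing array alive:

```java
import java.util.ArrayList;

public class MemSketch {
    // Heap currently in use by this JVM, in bytes.
    static long used() {
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory();
    }

    public static void main(String[] args) {
        long before = used();
        String tst = "Hi, Tester!";
        // Presizing avoids repeated backing-array reallocations while adding.
        ArrayList list = new ArrayList(1000000);
        for (int i = 0; i < 1000000; i++) {
            list.add(tst); // share one String instead of new String(tst) per slot
        }
        System.out.println("Grew by ~" + (used() - before) / (1024 * 1024) + " MB");
        list = null;   // drop the only reference
        System.gc();   // a hint only; the JVM may collect later or not at all
    }
}
```

Note that even after a successful collection, the process size reported by the OS often stays at its high-water mark, because the JVM does not necessarily return freed heap to the operating system.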
Thanks in advance!

Double posted,
http://forum.java.sun.com/thread.jsp?forum=31&thread=409677&tstart=0&trange=15

Similar Messages

  • CVI 2013 Error: The compiler has run out of memory.

    Hello,
    I get this error in a source file I'd like to debug:
    1, 1 Error: The compiler has run out of memory.
    1, 1 Note: You may be able to work around the problem:
    1, 1 A. Set the debugging level to 'no run-time checking'.
    1, 1 B. Split your source file into smaller files.
    1, 1 C. Enable the 'O' option for your source file in the project.
    1, 1 D. Move large static data structures into new files and
    1, 1 enable the 'O' option for the new files.
    Options A and C mostly disable debugging aids, and I don't dare edit the file.
    So is there any possibility to increase the memory limit?
    /* Nothing past this point should fail if the code is working as intended */

    This is the "strange code"
    #pragma pack (push,8)
    typedef struct
    {
        int struct1int;
    } STRUCT1;
    typedef struct
    {
        STRUCT1* s1_ptr;
    } STRUCT2;
    typedef struct
    {
        char c[2];
        int i;
        STRUCT2 s2;
    } STRUCT3;
    #pragma pack (pop)
    static STRUCT3 s3_global;
    void SomeFunc(void)
    {
        s3_global.i = 0;
    }
    I believe clang fails to compute the struct padding.

  • I have a file where I am running out of memory can anyone take a look at this file and see?

    I am trying to make this file 4'x8'.
    Please let me know if anyone can help me get this file to that size.
    I have a quad core processor with 6 gig of ram and have gotten the file to 50"x20", but I run out of memory shortly thereafter.  Any help would be appreciated.
    Thanks,

    Where to begin? You should look into using a swatch pattern instead of those repeating circles. Also, I see that each circle in your pattern is actually a stack of four circles, but I see no reason why. Perhaps Illustrator is choking on the huge number of objects required to make the pattern as you have constructed it.
    Here is a four-foot by eight-foot Illustrator file using a swatch pattern. Note that, despite the larger artboard, the file is less than one sixteenth the size.

  • We updated our phones and now we are running out of memory.  Never happened before the update and we have not added anything new.  We have hardly anything on our phones.

    We updated our phones and now we are running out of memory. This never happened before the update and we have not added anything new. We have hardly anything on our phones. Why am I having to remove stuff that has always been on my phone?

    Thanks for the reply TJBUSMC1973. 
    I guess that means back to the shop.  Ridiculous to sell a phone that can't handle the new iOS more efficiently (or vice versa).  I can't imagine anyone going near a 5C now (I have a feeling mine had already been used and taken back once and sold as new at O2).
    Charlie

  • Generating large amounts of XML without running out of memory

    Hi there,
    I need some advice from the experienced XDB users around here. I'm trying to map large amounts of data inside the DB (Oracle 11.2.0.1.0), and by large I mean files up to several GB. I compared the "low level" mapping via PL/SQL in combination with ExtractValue/XMLQuery against the elegant XML View Mapping, and the View Mapping using the XMLTABLE XQuery PATH constructs gave me the best performance. So now I have a view that lies on several BINARY XMLTYPE columns (where the XML files are stored) for the mapping, and another view above this mapping view which constructs the nested XML result document via XMLELEMENT(), XMLAGG() etc. Example code for better understanding:
    CREATE OR REPLACE VIEW MAPPING AS
    SELECT  type, (...)  FROM XMLTYPE_BINARY,  XMLTABLE ('/ROOT/ITEM' passing xml
         COLUMNS
          type       VARCHAR2(50)          PATH 'for $x in .
                                                                let $one := substring($x/b012,1,1)
                                                                let $two := substring($x/b012,1,2)
                                                                return
                                                                    if ($one eq "A")
                                                                      then "A"
                                                                    else if ($one eq "B" and not($two eq "BJ"))
                                                                      then "AA"
                                                                    else if (...)
    CREATE OR REPLACE VIEW RESULT AS
    select XMLELEMENT("RESULTDOC",
                     (SELECT XMLAGG(
                             XMLELEMENT("ITEM",
                                          XMLFOREST(
                                               type "ITEMTYPE",
    ) as RESULTDOC FROM MAPPING;
    Now all I want to do is materialize this document by inserting it into an XMLTYPE table/column:
    insert into bla select * from RESULT;
    Sounds pretty easy, but I can't get it to work: the DB seems to load a full DOM representation into RAM every time I perform a select, an insert into, or use the xmlgen tool. This representation takes more than 1 GB for a 200 MB XML file, and eventually I'm running out of memory with an
    ORA-19202: Error occurred in XML PROCESSING
    ORA-04030: out of process memory
    My question is: how can I get the result document into the table without memory exhaustion? I thought the DB would be smart enough to generate some kind of serialization/data stream to perform this task without loading everything into RAM.
    Best regards

    The file import is performed via JDBC; CLOB and binary storage are possible up to several GB, while the OR storage gives me ORA-22813 when loading files with more than 100 MB. I use a plain prepared statement:
            File f = new File( path );
            PreparedStatement pstmt = CON.prepareStatement( "insert into " + table + " values ('" + id + "', XMLTYPE(?) )" );
            pstmt.setClob( 1, new FileReader(f), (int)f.length() );
            pstmt.executeUpdate();
            pstmt.close();
    DB version is 11.2.0.1.0 as mentioned in the initial post.
    But this isn't my main problem; the above one is. I prefer using binary XMLType anyway, as it is much easier to index. Does anyone have an idea how to get the large document from the view into an XMLType table?
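    As a general illustration of the DOM-versus-streaming difference described above, a SAX-style parse visits nodes via callbacks without ever building the tree in memory. This is my own minimal Java sketch (not Oracle-specific; for the actual insert, Oracle's own XMLType streaming options would be the thing to investigate):

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

public class StreamCount {
    // Count elements without materializing a DOM tree.
    public static int countElements(byte[] xml) throws Exception {
        final int[] count = {0};
        SAXParser parser = SAXParserFactory.newInstance().newSAXParser();
        parser.parse(new ByteArrayInputStream(xml), new DefaultHandler() {
            public void startElement(String uri, String local, String qName, Attributes atts) {
                count[0]++; // one callback per element; nothing is retained
            }
        });
        return count[0];
    }

    public static void main(String[] args) throws Exception {
        byte[] doc = "<ROOT><ITEM/><ITEM/></ROOT>".getBytes("UTF-8");
        System.out.println(countElements(doc)); // 3: ROOT plus two ITEMs
    }
}
```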

  • ORA-27102: out of memory SVR4 Error: 12: Not enough space

    We got an image copy of one of our production servers that runs on Solaris 9, and our SA guys restored it and handed it over to us (DBAs). There is only one database running on the source server. I have to bring up the database on the new server. While starting the database I'm getting the following error.
    ====================================================================
    SQL*Plus: Release 10.2.0.1.0 - Production on Fri Aug 6 16:36:14 2010
    Copyright (c) 1982, 2005, Oracle. All rights reserved.
    Connected to an idle instance.
    SQL> startup
    ORA-27102: out of memory
    SVR4 Error: 12: Not enough space
    SQL>
    ====================================================================
    ABOUT THE SERVER AND DATABASE
    Server:
    uname -a
    SunOS ush** 5.9 Generic_Virtual sun4u sparc SUNW,T5240
    Database: Oracle 10.2.0.1.0
    I'm giving the "top" command output below:
    Before attempting to start the database:
    load averages: 2.85, 9.39, 5.50 16:35:46
    31 processes: 30 sleeping, 1 on cpu
    CPU states: 98.9% idle, 0.7% user, 0.4% kernel, 0.0% iowait, 0.0% swap
    Memory: 52G real, 239G free, 49M swap in use, 16G swap free
    the moment I run the "startup" command
    load averages: 1.54, 7.88, 5.20 16:36:44
    33 processes: 31 sleeping, 2 on cpu
    CPU states: 98.8% idle, 0.0% user, 1.2% kernel, 0.0% iowait, 0.0% swap
    Memory: 52G real, 224G free, 15G swap in use, 771M swap free
    I compared the semaphores and kernel parameters in /etc/system; both are identical.
    ulimit -a gives the following:
    root@ush**> ulimit -a
    time(seconds) unlimited
    file(blocks) unlimited
    data(kbytes) unlimited
    stack(kbytes) 8192
    coredump(blocks) unlimited
    nofiles(descriptors) 256
    memory(kbytes) unlimited
    root@ush**>
    and ipcs shows nothing, as below:
    root@ush**> ipcs
    IPC status from <running system> as of Fri Aug 6 19:45:06 PDT 2010
    T ID KEY MODE OWNER GROUP
    Message Queues:
    Shared Memory:
    Semaphores:
    Finally, the alert log shows nothing but "instance starting"...
    Please let us know where else I should check for the root cause... Thank you.

    "I compared the Semaphores and Kernel Parameters in /etc/system. Both are identical." Is an initSID.ora or spfile being used to start the DB?
    Clues indicate Oracle is requesting more shared memory than OS can provide.
    Do any additional clues exist within alert_SID.log file?

  • Workflow Iterate to subprocess runs out of memory

    I have a workflow that returns all suspended tasks and then calls a subprocess for each task. The subprocess decides whether the task needs to be deleted and if so it processes the task in various ways before deleting the taskinstance.
    I have no issues when there are not too many tasks returned by the query but when the workflow returns 2000+ items, I run out of memory.
    What is the best way for the workflow to call the subprocess without running out of memory?
    Do I need to clean up something at the end of the subprocess?
    Do I need to add something in the workflow to break up the list of tasks into smaller chunks?
    <Activity id='3' name='ProcessTasks'>
    <Action id='0' name='processTasks' process='processTheTask'>
    <Iterate for='taskInstanceName' in='mytasks'/>
    <Argument name='taskInstanceName' value='$(taskInstanceName)'/>
    </Action>
    </Activity>
    Edited by: user1937458 on Mar 14, 2012 3:12 PM

    I didn't think that this would put that much stress on the system.
    1) Use IDM best practice to generate low memory tasks, use exposedVariables and extendedVariables in manual actions to generate low memory tasks, that will save lots of memory.
    No manual action. This is a scheduled task.
    2) Run this workflow on dedicated server which is responsible to run this task only.
    I have run this when no one else was using the system but that did not help either.
    3) You can put some more conditions to get the limited return data which your server can handle in one go.
    We normally have 8000 tasks in the system. About 5000 are completed, so I can ignore those in the workflow. The rest need to be looked at to determine if we need to update the request. Say I use a rule in the workflow, before the subprocess is called, to narrow this down and end up with a list of 500 taskinstance names; I think the process will still run out of memory unless there is some other solution.
    2000 task names in a list should not take up that much space. I am pretty sure that the subprocess which determines if the task needs to be deleted is chewing up resources. This is going to be a scheduled task with no manual actions.
    My thinking was that workflow calls the subprocess and the subprocess does a lot of work as far as canceling a request, disabling accounts in some cases, auditing and notifying users that their request was cancelled. Upon return to the workflow to get the next taskinstance name, there is probably some variable that keeps getting larger with each iteration.
    I have run smaller lists and the flow diagram that returns at the end shows the flowchart for every item that was deleted so that is probably 1 place where the variable keeps getting larger.
    Is there a way to clean everything so that each subprocess acts as if it was the 1st and only time it was getting called?
    I tried the following at the end of the subprocess but that did not help:
    <Action id='0' name='CleanUp'>
    <expression>
    <set name='WF_CASE_RESULT'/>
    </expression>
    </Action>
    I will try to debug and see what variables are getting larger and larger but any other suggestions are appreciated.

  • GC isn't working: WLS runs out of memory and dies

    Periodically the web servers just run out of memory and die. It looks like garbage collection isn't working correctly and never gets kicked off.
    Where can I configure the GC in WebLogic?
    I have the min and max heap set to 512m, on WL 9.2 MP1 with JDK 1.5.0_06, 40-50 concurrent users, and this is a Financials system.
    Full thread dump Java HotSpot(TM) Server VM (1.5.0_04-b05 mixed mode):
    "NwWriter" daemon prio=5 tid=0x1abc9008 nid=0x1220 in Object.wait() [0x1fbdf000..0x1fbdfc1c]
         at java.lang.Object.wait(Native Method)
         - waiting on <0x19e40930> (a bea.jolt.OutQ)
         at java.lang.Object.wait(Object.java:474)
         at bea.jolt.OutQ.getFromQ(OutQ.java:89)
         - locked <0x19e40930> (a bea.jolt.OutQ)
         at bea.jolt.NwWriter.run(NwHdlr.java:3980)
    "NwReader" daemon prio=5 tid=0x1d644e48 nid=0x2f8 runnable [0x1fb9f000..0x1fb9fc9c]
         at java.net.SocketInputStream.socketRead0(Native Method)
         at java.net.SocketInputStream.read(SocketInputStream.java:129)
         at java.io.DataInputStream.readFully(DataInputStream.java:176)
         at bea.jolt.NwReader.run(NwHdlr.java:3625)
    "NwWriter" daemon prio=5 tid=0x1cf5e388 nid=0x1098 in Object.wait() [0x1f7df000..0x1f7dfa1c]
         at java.lang.Object.wait(Native Method)
         - waiting on <0x0def6e90> (a bea.jolt.OutQ)
         at java.lang.Object.wait(Object.java:474)
         at bea.jolt.OutQ.getFromQ(OutQ.java:89)
         - locked <0x0def6e90> (a bea.jolt.OutQ)
         at bea.jolt.NwWriter.run(NwHdlr.java:3980)
    "NwReader" daemon prio=5 tid=0x1ced8be0 nid=0x12c4 runnable [0x1f79f000..0x1f79fa9c]
         at java.net.SocketInputStream.socketRead0(Native Method)
         at java.net.SocketInputStream.read(SocketInputStream.java:129)
         at java.io.DataInputStream.readFully(DataInputStream.java:176)
         at bea.jolt.NwReader.run(NwHdlr.java:3625)
    "NwWriter" daemon prio=5 tid=0x1ed1c408 nid=0x1494 in Object.wait() [0x1fadf000..0x1fadfc1c]
         at java.lang.Object.wait(Native Method)
         - waiting on <0x0dee6e30> (a bea.jolt.OutQ)
         at java.lang.Object.wait(Object.java:474)
         at bea.jolt.OutQ.getFromQ(OutQ.java:89)
         - locked <0x0dee6e30> (a bea.jolt.OutQ)
         at bea.jolt.NwWriter.run(NwHdlr.java:3980)
    "NwReader" daemon prio=5 tid=0x1abc8b80 nid=0x8ec runnable [0x1fa9f000..0x1fa9fc9c]
         at java.net.SocketInputStream.socketRead0(Native Method)
         at java.net.SocketInputStream.read(SocketInputStream.java:129)
         at java.io.DataInputStream.readFully(DataInputStream.java:176)
         at bea.jolt.NwReader.run(NwHdlr.java:3625)
    "NwWriter" daemon prio=5 tid=0x1bf71bd8 nid=0x134 in Object.wait() [0x1fa5f000..0x1fa5fa1c]
         at java.lang.Object.wait(Native Method)
         - waiting on <0x0dee9db8> (a bea.jolt.OutQ)
         at java.lang.Object.wait(Object.java:474)
         at bea.jolt.OutQ.getFromQ(OutQ.java:89)
         - locked <0x0dee9db8> (a bea.jolt.OutQ)
         at bea.jolt.NwWriter.run(NwHdlr.java:3980)
    "NwReader" daemon prio=5 tid=0x1c6a7d98 nid=0x10cc runnable [0x1fa1f000..0x1fa1fa9c]
         at java.net.SocketInputStream.socketRead0(Native Method)
         at java.net.SocketInputStream.read(SocketInputStream.java:129)
         at java.io.DataInputStream.readFully(DataInputStream.java:176)
         at bea.jolt.NwReader.run(NwHdlr.java:3625)
    "NwWriter" daemon prio=5 tid=0x1c5d2008 nid=0x8b4 in Object.wait() [0x1f6df000..0x1f6dfb1c]
         at java.lang.Object.wait(Native Method)
         - waiting on <0x0dee2370> (a bea.jolt.OutQ)
         at java.lang.Object.wait(Object.java:474)
         at bea.jolt.OutQ.getFromQ(OutQ.java:89)
         - locked <0x0dee2370> (a bea.jolt.OutQ)
         at bea.jolt.NwWriter.run(NwHdlr.java:3980)
    "NwReader" daemon prio=5 tid=0x1c88ed98 nid=0x8a0 runnable [0x1f69f000..0x1f69fb9c]
         at java.net.SocketInputStream.socketRead0(Native Method)
         at java.net.SocketInputStream.read(SocketInputStream.java:129)
         at java.io.DataInputStream.readFully(DataInputStream.java:176)
         at bea.jolt.NwReader.run(NwHdlr.java:3625)
    "NwWriter" daemon prio=5 tid=0x006a2b58 nid=0x270 in Object.wait() [0x1f9df000..0x1f9dfc1c]
         at java.lang.Object.wait(Native Method)
         - waiting on <0x0decaf68> (a bea.jolt.OutQ)
         at java.lang.Object.wait(Object.java:474)
         at bea.jolt.OutQ.getFromQ(OutQ.java:89)
         - locked <0x0decaf68> (a bea.jolt.OutQ)
         at bea.jolt.NwWriter.run(NwHdlr.java:3980)
    "NwReader" daemon prio=5 tid=0x1c958920 nid=0x1680 runnable [0x1f99f000..0x1f99fc9c]
         at java.net.SocketInputStream.socketRead0(Native Method)
         at java.net.SocketInputStream.read(SocketInputStream.java:129)
         at java.io.DataInputStream.readFully(DataInputStream.java:176)
         at bea.jolt.NwReader.run(NwHdlr.java:3625)
    "NwWriter" daemon prio=5 tid=0x1d9c0428 nid=0x17a8 in Object.wait() [0x1f85f000..0x1f85fb1c]
         at java.lang.Object.wait(Native Method)
         - waiting on <0x0decfa98> (a bea.jolt.OutQ)
         at java.lang.Object.wait(Object.java:474)
         at bea.jolt.OutQ.getFromQ(OutQ.java:89)
         - locked <0x0decfa98> (a bea.jolt.OutQ)
         at bea.jolt.NwWriter.run(NwHdlr.java:3980)
    "NwReader" daemon prio=5 tid=0x1abede28 nid=0x11d0 runnable [0x1f81f000..0x1f81fb9c]
         at java.net.SocketInputStream.socketRead0(Native Method)
         at java.net.SocketInputStream.read(SocketInputStream.java:129)
         at java.io.DataInputStream.readFully(DataInputStream.java:176)
         at bea.jolt.NwReader.run(NwHdlr.java:3625)
    "NwWriter" daemon prio=5 tid=0x1c7b8540 nid=0x11f8 in Object.wait() [0x1fd9f000..0x1fd9fb1c]
         at java.lang.Object.wait(Native Method)
         - waiting on <0x0de98618> (a bea.jolt.OutQ)
         at java.lang.Object.wait(Object.java:474)
         at bea.jolt.OutQ.getFromQ(OutQ.java:89)
         - locked <0x0de98618> (a bea.jolt.OutQ)
         at bea.jolt.NwWriter.run(NwHdlr.java:3980)
    "NwReader" daemon prio=5 tid=0x1bf85510 nid=0x370 runnable [0x1fd5f000..0x1fd5fb9c]
         at java.net.SocketInputStream.socketRead0(Native Method)
         at java.net.SocketInputStream.read(SocketInputStream.java:129)
         at java.io.DataInputStream.readFully(DataInputStream.java:176)
         at bea.jolt.NwReader.run(NwHdlr.java:3625)
    "NwWriter" daemon prio=5 tid=0x1c391b48 nid=0x1768 in Object.wait() [0x1fd1f000..0x1fd1fa1c]
         at java.lang.Object.wait(Native Method)
         - waiting on <0x0de9ff48> (a bea.jolt.OutQ)
         at java.lang.Object.wait(Object.java:474)
         at bea.jolt.OutQ.getFromQ(OutQ.java:89)
         - locked <0x0de9ff48> (a bea.jolt.OutQ)
         at bea.jolt.NwWriter.run(NwHdlr.java:3980)
    "NwReader" daemon prio=5 tid=0x1be90440 nid=0x10d4 runnable [0x1fcdf000..0x1fcdfa9c]
         at java.net.SocketInputStream.socketRead0(Native Method)
         at java.net.SocketInputStream.read(SocketInputStream.java:129)
         at java.io.DataInputStream.readFully(DataInputStream.java:176)
         at bea.jolt.NwReader.run(NwHdlr.java:3625)
    "NwWriter" daemon prio=5 tid=0x1d2d0bd0 nid=0x1020 in Object.wait() [0x1f75f000..0x1f75fd1c]
         at java.lang.Object.wait(Native Method)
         - waiting on <0x0de5d3c0> (a bea.jolt.OutQ)
         at java.lang.Object.wait(Object.java:474)
         at bea.jolt.OutQ.getFromQ(OutQ.java:89)
         - locked <0x0de5d3c0> (a bea.jolt.OutQ)
         at bea.jolt.NwWriter.run(NwHdlr.java:3980)
    "NwReader" daemon prio=5 tid=0x1d3472d0 nid=0x10e0 runnable [0x1f71f000..0x1f71fd9c]
         at java.net.SocketInputStream.socketRead0(Native Method)
         at java.net.SocketInputStream.read(SocketInputStream.java:129)
         at java.io.DataInputStream.readFully(DataInputStream.java:176)
         at bea.jolt.NwReader.run(NwHdlr.java:3625)
    "NwWriter" daemon prio=5 tid=0x1bf71a30 nid=0x1b0 in Object.wait() [0x1f95f000..0x1f95fd1c]
         at java.lang.Object.wait(Native Method)
         - waiting on <0x0de11e90> (a bea.jolt.OutQ)
         at java.lang.Object.wait(Object.java:474)
         at bea.jolt.OutQ.getFromQ(OutQ.java:89)
         - locked <0x0de11e90> (a bea.jolt.OutQ)
         at bea.jolt.NwWriter.run(NwHdlr.java:3980)
    "NwReader" daemon prio=5 tid=0x1ac06ab8 nid=0x17ec runnable [0x1f91f000..0x1f91fd9c]
         at java.net.SocketInputStream.socketRead0(Native Method)
         at java.net.SocketInputStream.read(SocketInputStream.java:129)
         at java.io.DataInputStream.readFully(DataInputStream.java:176)
         at bea.jolt.NwReader.run(NwHdlr.java:3625)
    "NwWriter" daemon prio=5 tid=0x1bddfde8 nid=0x133c in Object.wait() [0x1ff9f000..0x1ff9fb1c]
         at java.lang.Object.wait(Native Method)
         - waiting on <0x0d4a71f0> (a bea.jolt.OutQ)
         at java.lang.Object.wait(Object.java:474)
         at bea.jolt.OutQ.getFromQ(OutQ.java:89)
         - locked <0x0d4a71f0> (a bea.jolt.OutQ)
         at bea.jolt.NwWrite

    There's nothing wrong with how GC works. If you don't give it anything to collect, it won't collect anything. You are simply not allowing enough memory to get anything done. As the other responder said, bump up the max mem to at least 1g. If it still fails, set it even higher. I can't tell what platform you're on, but if you're on Windows, you may be limited to about 1536m.
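    To see which cap the server JVM actually started with, it can help to log Runtime.maxMemory(), which reports the effective -Xmx ceiling. A quick sketch of my own (the flag values are just examples, e.g. -Xms1024m -Xmx1024m in the WebLogic start script; setting min equal to max avoids heap-resize pauses):

```java
public class HeapInfo {
    public static void main(String[] args) {
        // maxMemory() is the most heap the JVM will attempt to use (the -Xmx cap).
        long maxMb = Runtime.getRuntime().maxMemory() / (1024 * 1024);
        System.out.println("Max heap: " + maxMb + " MB");
    }
}
```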

  • How can I avoid running out of memory when creating components dynamically

    Hello everyone,
    Recently, I am planning to design a web application. It will be used by all middle school teachers in a region to make examination papers and it must contain the following main functions.
    1) Generate test questions dynamically. For instance, a teacher who logs on to the web application will only see a select-one menu and a Next Quiz button. The former is used for determining the number of options for the current multiple/single choice question. The latter is dedicated to creating the appropriate input text elements according to the selected option number. That is to say, if the teacher selects 4 in the menu and presses the Next Quiz button, 5 input text form elements will appear. The first one is for the question to be asked, such as "1. What is the biggest planet in the solar system?"; the others are optional answers like a) Uranus, b) Saturn, c) Jupiter, d) Earth. Each answer stands for an input text element. When the teacher fills in the fourth answer, another select-one menu and Next Quiz button will emerge on the fly just under this answer, allowing the teacher to make the second question. The same thing repeats for the following questions.
    2) Undo and Redo. Whenever a teacher wants to roll back or redo what he has done, he just presses the Undo or Redo button. In the previous example, if the teacher selects the third answer and presses the Delete button to drop this answer, it will delete both the literal string content and the input text element, changing answer d to c automatically. After that, if he decides to get back the original answer c, Jupiter, he can just click the Undo button as if he hadn't made the deletion.
    3) Save the unfinished work on the client side. If a teacher has done half of his work, he can press the Save button to store what he has done on his own computer. The reason for doing so is simply to alleviate the burden on the server. Although all finished test papers must be saved in a database on the server, sometimes the unfinished papers could be dropped forever, or could form the ultimate test papers only after several months. If these papers were kept on the server, they would waste the server's storage. Next time, the teacher can press the Restore button on the page to get the previously stored part of the test paper from his own computer and continue to finish the whole paper.
    4)Allow at least 1,000 teachers to make test papers at the same time. The maximum question number per examination paper is 60.
    Here are my two rough solutions,
    A. Using JSF.
    B. Using JavaScript and plain JSP without JSF.
    The comparison of the two solutions:
    1) Both schemes can implement the first and the second requirements. In the JSF page I could add a standard panelGrid tag and use its binding attribute. In the backing bean, the method specified by the binding attribute is responsible for generating HtmlInput objects and adding them to the HtmlPanelGrid object on the fly. Every HtmlInput object corresponds to a question subject or an optional answer. The method is called by an actionListener, registered on the Next Quiz commandButton and triggered by clicking that button on the client side. JSF also makes it easy to manage the HtmlInput objects; e.g. panelGrid.getChildren().add(htmlInput) and panelGrid.getChildren().remove(htmlInput) correspond to undoing the deletion of an optional answer and redoing the deletion, respectively. I know JavaScript can also achieve these goals, but it could be more complex for me since I don't know JavaScript well.
    2) I cannot find a way to meet the third demand right now. I am eager to hear your suggestions.
    3) Using JSF, I think, can't allow 1,000 teachers to do their own papers at the same time. In this scenario, supposing each questionnaire has 60 questions and 4 answers per question, there will be approximately 300,000 HtmlInput objects (1,000 x 60 x (4+1)) created on the server side. The server would undoubtedly run out of memory. To make things better, we could use a custom component that renders a whole question including all of its optional answers; that is to say, one custom component on the server side stands for a whole question on the client side. Even so, about 60,000 (1,000 x 60) such custom components would be created progressively and dynamically, plus other UISelectOne and UICommand objects, which most servers can't afford either. Do I have to use JavaScript to avoid occupying the server's memory in this way? If so, I have to go back to using JavaScript and plain JSP without JSF.
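    The arithmetic above can be sanity-checked directly; the per-component byte figure in this sketch of mine is a rough assumption, not a measured number:

```java
public class ComponentMath {
    public static void main(String[] args) {
        int teachers = 1000;
        int questions = 60;
        int inputsPerQuestion = 5; // the question text plus 4 answers
        int components = teachers * questions * inputsPerQuestion;
        System.out.println(components + " HtmlInput objects"); // 300000
        // Assume roughly 1 KB of retained state per component (a guess):
        long approxMb = (long) components * 1024 / (1024 * 1024);
        System.out.println("~" + approxMb + " MB of component state");
    }
}
```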
    Thank you in advance!
    Best Regards, Ailsa
    2007/5/4

    Thank you for your quick response, BalusC. I really appreciate your answer.
    Yes, you are right. If I manually coded the same number of those components in the JSF pages instead of generating them dynamically, the server would still run out of memory. That is to say, JSF pages might not accommodate a great deal of concurrent visits. If I upgrade the server just to allow 1,000 teachers to make their own test papers at the same time, then when over 2,000 students take the same questionnaire simultaneously, the server will need another upgrade. So I have to do what you have told me: use JS+DOM instead of upgrading the server endlessly.
    Best Regards, Ailsa

  • Lightroom 5 permanently runs out of memory

    Lightroom 5 on Windows 7 32 Bit and 8 Gigabytes of memory (more than the 32 Bit system can use) permanently runs out of memory when doing some more complex edits on a RAW file, especially when exporting to 16 Bit TIFF. The RAW files were created by cameras with 10 up to 16 megapixel sensors with bit depths between 12 and 14.
    After exporting one or two images to 16 Bit uncompressed TIFF an error message "Not enough memory" will be displayed and only a Lightroom restart solves that - for the next one to two exports. If an image has much brush stroke edits, every additional stroke takes more and more time to see the result until the image disappears followed by the same "Not enough memory" error message.
    A tab character in the XMP sidecar file is *not* the reason (I ensured that), as mentioned in a post. It seems that Lightroom in general does not allocate enough memory and frees what it has allocated too little and too late.
    Please fix that bug; it's not productive to permanently quit and restart Lightroom when editing/exporting a few RAW files. Versions prior to Lightroom 4 did not have that bug.
    P.S. Posting here because it was not possible to post at http://feedback.photoshop.com/photoshop_family/topics/new It's very bad design to let a user take much time to write and then say "Log in", when a log-in with the Adobe ID and password does not work (creating accounts on Facebook etc. is not an acceptable option; the Adobe ID should be enough). Also, a bug tracker such as Bugzilla would be a much better tool for improving the software and finding relevant issues to avoid duplicate postings.

    First of all: I personally agree with your comments regarding the feedback webpage. But that is out of our hands since this is a user-to-user forum, and there is nothing we users can do about it.
    Regarding your RAM: You are running Win7 32-bit, so 4 GB of your 8 GB of RAM sit idle since the system cannot use it. And, frankly, 4 GB is very scant for running Lr, considering that the system uses 1 GB of that. So there's only 3 GB for Lr - and that only if you are not running any other programs at the same time.
    Since you have an 8 GB system already, why don't you go for Win7 64-bit? Then you can also install the 64-bit Lr, and that - together with 8 GB of RAM - will bring a great boost in Lr performance.
    Adobe recommends to run Lr in the 64-bit version. For more on their suggestion on improving Lr performance see here:
    http://helpx.adobe.com/lightroom/kb/performance-hints.html?sdid=KBQWU
    for more: http://forums.adobe.com/thread/1110408?tstart=0

  • My mac's run out of memory and I can't find the culprit!

    Hi, I'm in serious need of some help! I'm sure this is simple, but I'm about to break down over it – I use my mac for everything. I've got a 200gb 2009 macbook (running iOS7), and it's told me it's run out of memory. The storage tab in 'about this mac' tells me 108GB is being used for video – but I can't find them! My iPhoto has about 17GB of movies, my iTunes has around 20GB, and I've got maybe another 10GB in files within finder – but that's still only half the videos my mac is saying it has? How do I find the rest? I've got 80GB being used by 'other' as well – is that just pages and numbers documents, along with the iOS? Is there a way of finding exactly what all my memory's being allocated to?
    I've got the entire mac backed up on an external hard drive, but I'm terrified of deleting anything from the mac in case that fails. I plan on getting a second external HD, but even then I think I'll be too worried (I've heard about so many hard drives continuously failing). How does anyone manage all their stuff?!?
    Thank you in advance, for any help you can offer.

    Just a slight correction to start: you're not running iOS 7. You're running a version of OS X; iOS is for mobile devices like iPhones and iPads. To find out which version of OS X you're running, click the Apple menu at the top left and select About This Mac.
    This http://pondini.org/OSX/LionStorage.html should help you understand "Other".

  • Oracle 9i running out of memory

    Folks!
    I have a simple 3-table schema with a few thousand entries each. After dedicating gigabytes of hard disk space and 50% of my 1+ GB of memory, I do a few simple Oracle Text "contains" searches (see below) on these tables, and Oracle seems to grow by some 25 MB after each query (which typically returns less than a dozen rows) until it eventually runs out of memory and I have to reboot the system (Sun Solaris).
    This is on Solaris 9/SPARC with Oracle 9.2. My query is a simple outer join. I think the memory growth is related to Oracle Text indexing/caching, since memory utilization seems pretty stable with simple like '%xx%' queries.
    "top" shows a dozen or so processes, each with about 400 MB RSS/SIZE. It has been a while since I did Oracle DBA work, but I am doing nothing special here. The database has all the default settings that you get when you create an Oracle database.
    I have played with SGA sizes, and no matter how large or small the SGA/PGA, Oracle runs out of memory and crashes the system. Pretty poor for an enterprise database to die like that.
    Any clue on how to arrest the fatal growth of memory for Oracle 9i r2?
    thanks a lot.
    -Sanjay
    PS: The query is:
    SELECT substr(sdn_name,1,32) as name, substr(alt_name,1,32) as alt_name, sdn.ent_num, alt_num, score(1), score(2)
    FROM sdn, alt
    where sdn.ent_num = alt.ent_num(+)
    and (contains(sdn_name,'$BIN, $LADEN',1) > 0 or
    contains(alt_name,'$BIN, $LADEN',2) > 0)
    order by ent_num, score(1), score(2) desc;
    There are following two indexes on the two tables:
    create index sdn_name on sdn(sdn_name) indextype is ctxsys.context;
    create index alt_name on alt(alt_name) indextype is ctxsys.context;

    I am already using MTS.
    Attached is the init.ora file below.
    Maybe I should repost this article with the subject "memory leak in Oracle" to catch developer attention. I posted this a few weeks back in the Oracle Text group and got no response there either.
    Thanks for your help.
    -Sanjay
    # Copyright (c) 1991, 2001, 2002 by Oracle Corporation
    # Cache and I/O
    db_block_size=8192
    db_cache_size=33554432
    db_file_multiblock_read_count=16
    # Cursors and Library Cache
    open_cursors=300
    # Database Identification
    db_domain=""
    db_name=ofac
    # Diagnostics and Statistics
    background_dump_dest=/space/oracle/admin/ofac/bdump
    core_dump_dest=/space/oracle/admin/ofac/cdump
    timed_statistics=TRUE
    user_dump_dest=/space/oracle/admin/ofac/udump
    # File Configuration
    control_files=("/space/oracle/oradata/ofac/control01.ctl", "/space/oracle/oradata/ofac/control02.ctl", "/space/oracle/oradata/ofac/control03.ctl")
    # Instance Identification
    instance_name=ofac
    # Job Queues
    job_queue_processes=10
    # MTS
    dispatchers="(PROTOCOL=TCP) (SERVICE=ofacXDB)"
    # Miscellaneous
    aq_tm_processes=1
    compatible=9.2.0.0.0
    # Optimizer
    hash_join_enabled=TRUE
    query_rewrite_enabled=FALSE
    star_transformation_enabled=FALSE
    # Pools
    java_pool_size=117440512
    large_pool_size=16777216
    shared_pool_size=117440512
    # Processes and Sessions
    processes=150
    # Redo Log and Recovery
    fast_start_mttr_target=300
    # Security and Auditing
    remote_login_passwordfile=EXCLUSIVE
    # Sort, Hash Joins, Bitmap Indexes
    pga_aggregate_target=25165824
    sort_area_size=524288
    # System Managed Undo and Rollback Segments
    undo_management=AUTO
    undo_retention=10800
    undo_tablespace=UNDOTBS1

  • Running out of memory building csv file

    I'm attempting to write a script that does a query on my
    database. It will generally be working with about 10,000 - 15,000
    records. It then checks to see if a certain file exists. If it
    does, it will add the record to an array. When its done looping
    over all the records, it takes the array that was created and
    outputs a csv file (usually with about 5,000 - 10,000 lines).
    But... before that ever happens, it runs out of memory. What can I
    do to make it not run out of memory?

    quote:
    Originally posted by:
    nozavroni
    I'm attempting to write a script that does a query on my
    database. It will generally be working with about 10,000 - 15,000
    records. It then checks to see if a certain file exists.
    Sounds pretty inefficient to me. Is there no way you can
    modify the query so that it only selects the records for which the
    file exists?
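    One way around the memory ceiling, whatever language the script is written in, is to write each qualifying row to the CSV as it is found instead of accumulating everything in an array and dumping it at the end. A minimal sketch in Java (the record layout and the "file exists" filter are made up for illustration; the real script would stream rows from its database query the same way):

    ```java
    import java.io.BufferedWriter;
    import java.io.IOException;
    import java.io.StringWriter;
    import java.io.Writer;
    import java.util.Arrays;
    import java.util.List;

    public class CsvStream {
        // Stream each matching record straight into the CSV writer, so
        // memory use stays roughly constant no matter how many records
        // the query returns.
        static int export(List<String[]> records, Writer out) throws IOException {
            BufferedWriter w = new BufferedWriter(out);
            int written = 0;
            for (String[] rec : records) {
                // hypothetical filter standing in for the "file exists" check
                if (rec[1] == null || rec[1].isEmpty()) continue;
                w.write(String.join(",", rec));
                w.write("\n");
                written++;
            }
            w.flush();
            return written;
        }

        public static void main(String[] args) throws IOException {
            List<String[]> records = Arrays.asList(
                    new String[]{"1", "a.txt"},
                    new String[]{"2", ""},       // dropped by the filter
                    new String[]{"3", "b.txt"});
            StringWriter sw = new StringWriter();
            int kept = export(records, sw);
            System.out.println(kept + " rows written");
            System.out.print(sw);
        }
    }
    ```

    Combined with the suggestion above (filtering in the query itself), the script never needs to hold the full result set in memory at once.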

  • Running out of memory after latest update

    First of all:
    Why doesn't anybody answer my questions from Dec. 26th?? They are not that hard, I believe...
    After I installed Update No. 5, my system runs out of memory after a certain time.
    I'm working on a 1.7 GHz Centrino with 1 GB of memory...
    Is it because of the update? Does it change so many things?
    Hope for an answer this time...
    Mark.

    Hi Mark
    Apologies for not responding to your earlier post on Debugging Rowset. I am still working on that. I am sure I can give you something today if there is any straightforward solution.
    OK, coming to the OutOfMemoryExceptions: yes, this has been observed because of the preview feature added in Update 5. Look at http://swforum.sun.com/jive/thread.jspa?forumID=123&threadID=50422 for more details.
    Thanks
    Srinivas

  • Running out of memory with Tomcat !!!!!

    Hello gurus and good folk:
    How can I ensure that a JSP page that uses a ResultSet doesn't run out of memory? I have set the JVM's -Xmx flag to 1024 MB and it still runs out of memory! The size of the data being queried is only 30 MB. One would think the JDBC driver would be optimized for large ResultSets. Any pointers will be very helpful.
    Many thanks
    Murthy

    Hi
    As far as I can tell, 30 MB of data is pretty big for an online app. If you have too many rows in your ResultSet, you could (or should) consider implementing paging and fetching x records at a time. Or you could just set a maximum limit on the records to be fetched (typically useful for 'search and list' type apps) using Statement.setMaxRows(). This should ensure that out-of-memory errors do not happen.
    If your data chunk per row is large, consider displaying only a summary in the result and fetching the 'BIG' data column only when required (e.g. fetch the column value for a particular row only when that row is clicked).
    Hope this helps !
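
    The paging idea from the reply can be sketched without any database plumbing. In real JDBC code the page boundary would come from Statement.setMaxRows() or a ROWNUM/LIMIT clause in the query; the class and method names below are made up, but the slicing logic is the same:

    ```java
    import java.util.ArrayList;
    import java.util.List;

    public class Pager {
        // Return only the rows of one page: [page*size, page*size + size).
        // Holding a single page at a time is what keeps memory bounded,
        // instead of materializing the whole result set in the JSP.
        static <T> List<T> page(List<T> rows, int page, int size) {
            int from = Math.min(page * size, rows.size());
            int to = Math.min(from + size, rows.size());
            return new ArrayList<>(rows.subList(from, to));
        }

        public static void main(String[] args) {
            List<Integer> rows = new ArrayList<>();
            for (int i = 0; i < 10; i++) rows.add(i);   // pretend query result
            System.out.println(page(rows, 2, 3));       // [6, 7, 8]
        }
    }
    ```

    The same arithmetic drives a "next page" link in the JSP: each click re-runs the query for one page rather than re-fetching all 30 MB.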
