OutOfMemory

We are facing an "OutOfMemoryError" problem under the following circumstances.
As part of our application we generate PDF and Excel reports. Report generation is carried out using the Actuate reporting tool. The target environment is Solaris with the Xvfb emulator and WebLogic 6.1 SP2.
The problem occurs when we generate reports continuously. We have taken care to flush and close the output stream after printing each report.

You could simply be riding the edge of the available heap, in which case increasing the Java heap size (-Xmx) will solve it.
There could also be a leak in the third-party libraries (C or Java) that uses up the memory. One workaround is to generate each report in a separate process, for example by using Runtime.exec() to spawn another Java process, so that any leaked memory is released when that process exits.
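A minimal sketch of that per-process approach, assuming a hypothetical ReportWorker class whose main() generates a single report and exits (the class name, heap size and arguments are illustrative, not part of the original setup):

    // Spawn a short-lived JVM per report so native/Java leaks die with the process.
    public class ReportLauncher {
        public static int generateInSeparateProcess(String reportName) throws Exception {
            String javaHome = System.getProperty("java.home");
            String[] cmd = {
                javaHome + "/bin/java",
                "-Xmx256m",                                   // cap the worker heap independently
                "-cp", System.getProperty("java.class.path"),
                "ReportWorker",                               // hypothetical worker main class
                reportName
            };
            Process p = Runtime.getRuntime().exec(cmd);
            // In real code, drain p.getInputStream()/getErrorStream() to avoid blocking on full pipes.
            return p.waitFor();                               // non-zero exit signals a failed report
        }
    }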

Similar Messages

  • OutOfMemory error in java.awt.image.DataBufferInt.<init>

    We have an applet application that performs Print Preview of the images in the canvas. The images are like a network of entities (it has pictures of the entities involved (say, Person) and how they link to other entities). We are using IE to launch the applet.
    We set the min heap space to 128MB and the JVM max heap space to 256MB, and set the Java plugin max heap space to 256MB using Control Panel > Java.
    When the canvas width is about 54860 and the height is 1644 and we perform Print Preview, it throws an OutOfMemoryError in java.awt.image.DataBufferInt.<init>, so the Print Preview page is not shown. The complete stack trace (and logs) is as follows:
    Width: 54860 H: 1644
    Max heap: 254 # using Runtime.getRuntime().maxMemory()
    javaplugin.maxHeapSize: 256M # using System.getProperties("javaplugin.maxHeapSize")
    n page x n page : 1x1
    Exception in thread "AWT-EventQueue-2" java.lang.OutOfMemoryError: Java heap space
         at java.awt.image.DataBufferInt.<init>(Unknown Source)
         at java.awt.image.Raster.createPackedRaster(Unknown Source)
         at java.awt.image.DirectColorModel.createCompatibleWritableRaster(Unknown Source)
         at java.awt.image.BufferedImage.<init>(Unknown Source)
         at com.azeus.gdi.chart.GDIChart.preparePreview(GDIChart.java:731)
         at com.azeus.gdi.chart.GDIChart.getPreview(GDIChart.java:893)
         at com.azeus.gdi.ui.GDIUserInterface.printPreviewOp(GDIUserInterface.java:1526)
         at com.azeus.gdi.ui.GDIUserInterface$21.actionPerformed(GDIUserInterface.java:1438)
         at javax.swing.AbstractButton.fireActionPerformed(Unknown Source)
         at javax.swing.AbstractButton$Handler.actionPerformed(Unknown Source)
         at javax.swing.DefaultButtonModel.fireActionPerformed(Unknown Source)
         at javax.swing.DefaultButtonModel.setPressed(Unknown Source)
         at javax.swing.plaf.basic.BasicButtonListener.mouseReleased(Unknown Source)
         at java.awt.Component.processMouseEvent(Unknown Source)
         at javax.swing.JComponent.processMouseEvent(Unknown Source)
         at java.awt.Component.processEvent(Unknown Source)
         at java.awt.Container.processEvent(Unknown Source)
         at java.awt.Component.dispatchEventImpl(Unknown Source)
         at java.awt.Container.dispatchEventImpl(Unknown Source)
         at java.awt.Component.dispatchEvent(Unknown Source)
         at java.awt.LightweightDispatcher.retargetMouseEvent(Unknown Source)
         at java.awt.LightweightDispatcher.processMouseEvent(Unknown Source)
         at java.awt.LightweightDispatcher.dispatchEvent(Unknown Source)
         at java.awt.Container.dispatchEventImpl(Unknown Source)
         at java.awt.Component.dispatchEvent(Unknown Source)
         at java.awt.EventQueue.dispatchEvent(Unknown Source)
         at java.awt.EventDispatchThread.pumpOneEventForFilters(Unknown Source)
         at java.awt.EventDispatchThread.pumpEventsForFilter(Unknown Source)
         at java.awt.EventDispatchThread.pumpEventsForHierarchy(Unknown Source)
         at java.awt.EventDispatchThread.pumpEvents(Unknown Source)
         at java.awt.EventDispatchThread.pumpEvents(Unknown Source)
         at java.awt.EventDispatchThread.run(Unknown Source)
    Drilling down into the cause of the problem: the OutOfMemoryError occurred in the constructor of DataBufferInt when it tried to create an int array:
    public DataBufferInt(int size) {
        super(STABLE, TYPE_INT, size);
        data = new int[size]; // this line throws the OutOfMemoryError when size = width * height
        bankdata = new int[1][];
        bankdata[0] = data;
    }
    The OutOfMemory error occurred when size is width * height (54860 X 1644) which is 90,189,840 bytes (~86MB).
    I can replicate the OutOfMemoryError by allocating an int array of that size in a test class when it uses the default max heap space, but if I increase the heap space to 256MB it cannot be replicated in the test class.
    Using a smaller width and height whose product does not exceed 64MB, the applet can perform Print Preview successfully.
    Given this, I think the Java applet is not using the value assigned to javaplugin.maxHeapSize to set the max heap space; it still uses the default max heap size and throws OutOfMemory on the int array allocation when the size exceeds the default max heap space, which is 64MB.
    For additional information, below are some of the Java properties (shown when pressing S in the Java applet console):
    browser = sun.plugin
    browser.vendor = Sun Microsystems, Inc.
    browser.version = 1.1
    java.awt.graphicsenv = sun.awt.Win32GraphicsEnvironment
    java.awt.printerjob = sun.awt.windows.WPrinterJob
    java.class.path = C:\PROGRA~1\Java\jre6\classes
    java.class.version = 50.0
    java.class.version.applet = true
    java.runtime.name = Java(TM) SE Runtime Environment
    java.runtime.version = 1.6.0_17-b04
    java.specification.version = 1.6
    java.vendor.applet = true
    java.version = 1.6.0_17
    java.version.applet = true
    javaplugin.maxHeapSpace = 256M
    javaplugin.nodotversion = 160_17
    javaplugin.version = 1.6.0_17
    javaplugin.vm.options = -Xms128M -Djavaplugin.maxHeapSpace=256M -Xmx256m -Xms128M
    javawebstart.version = javaws-1.6.0_17
    Kindly advise whether this is a bug in the JRE or a wrong setting. If it is a wrong setting, please advise on the proper way to set the heap space to prevent the OutOfMemoryError when initializing the int array.
    Thanks a lot.

    rei_xanther wrote:
    > ..But the maximum value of the int data type is 2,147,483,647.
    That is the maximum positive integer value that can be stored in (the 4 bytes of) a signed int, but its only connection with RAM is that each int requires 4 bytes of memory to hold it.
    > ..The value that I passed in the int array size is only 90,189,840... new int[size] -- size is 90,189,840
    Sure. So the number of bytes required to hold those 90,189,840 ints is 360,759,360.
    > I assumed that one element in the int array is 1 byte.
    Your assumption is wrong. How could it be possible to store 32 bits (4 bytes) in 8 bits (1 byte)? (a)
    a) Short of some clever compression algorithm applied to the data.
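    A quick way to sanity-check that arithmetic before building the preview image is to compute the size of the backing int[] up front; this is an illustrative sketch, not code from the original applet:

        // Estimate the heap needed for the int[] behind a TYPE_INT_RGB/ARGB BufferedImage.
        public class RasterMemoryCheck {
            public static void main(String[] args) {
                int width = 54860, height = 1644;
                long pixels = (long) width * height;   // 90,189,840 elements
                long bytes = pixels * 4L;              // each int is 4 bytes -> 360,759,360 bytes (~344 MB)
                long maxHeap = Runtime.getRuntime().maxMemory();
                System.out.println("pixels=" + pixels + " bytes=" + bytes
                        + " (" + (bytes >> 20) + " MB), maxHeap=" + (maxHeap >> 20) + " MB");
                if (bytes > maxHeap) {
                    System.out.println("Full-size preview will not fit; render the preview in tiles instead.");
                }
            }
        }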

  • SSAS Tabular : MDX query goes OutOfMemory for a larger dataset

    Hello all,
    I am using SSAS 2012 Tabular to build a cube to support the organizational reporting requirements. Right now the server is Windows 2008 x64 with 16GB of RAM installed. I have the following MDX query. What this query does is get the member caption of the "OrderGroupNumber" non-key attribute as a measure, where the order group numbers pertain to a specific day and occur in specific seconds of that day. As I want to find in which second I have order group numbers, I cross the time dimension's members with a specific day and filter the tuples using the transaction count. The transaction count is non-zero if an Order Group Number occurs within a specific second of the selected day.
    At present [TransactionsInflight].[OrderGroupNumber].[OrderGroupNumber] has 170+ million members (potentially this could grow rapidly) and the time dimension has 86,400 members.
    WITH
    MEMBER [Measures].[OrderGroupNumber]
    AS IIF([Measures].[Transaction Count] > 0, [TransactionsInflight].[OrderGroupNumber].CURRENTMEMBER.MEMBER_CAPTION,
    NULL)
    SELECT
    NON EMPTY{[TransactionsInflight].[OrderGroupNumber].[OrderGroupNumber].MEMBERS}
    ON COLUMNS,
    {FILTER(([Date].[Calendar Hierarchy].[Date].&[2012-07-05T00:00:00], [Time].[Time].[Time].MEMBERS),
    [Measures].[Transaction Count] > 0) } ON
    ROWS
    FROM [OrgDataCube]
    WHERE [Measures].[OrderGroupNumber]
    After I run this query it reaches a dead end and freezes the server (sometimes the SSAS server throws an OutOfMemory exception, but sometimes it does not). Even though I have 16GB of memory, it uses all of the memory while getting nowhere, and I have to hard-reset the server to bring it back online. Even if I limit the time members using the ":" range operator, the machine still freezes. I have run out of ideas for fine-tuning the design. Could you provide some guidelines for optimizing this query? I am willing to make a design change if necessary.
    Thanks and best regards,
    Chandima

    Hi Greg,
    Finally I found out why the query goes out of memory in tabular mode. I guess this information will be helpful for others, so I am posting my findings.
    Some of the non-key attribute columns in the tabular model tables (mainly the tables which form dimensions) do not have pretty names. So for the non-key attribute columns that needed pretty names, I renamed the columns to something else.
    For example, in my date dimension there is a non-key attribute named "DateAltKey". This is the date column which I am using. As this is not pretty for the client tools, I renamed this column to "Date" inside the designer (dimension design screen). I deployed the cube, processed the cube, and there was no problem.
    Now here comes the fun part. For every table, inside the Tables node (Tabular SSAS Database > Tables) you can view the partition details. You have a single partition per dimension table if you do not create extra partitions. I opened the partitions screen, clicked on the "Edit" icon and performed a syntax check. Surprisingly it failed: it complained about the renamed column, saying that "Date" cannot be found in the source. So I realized that I cannot simply rename the columns like that.
    After that I created calculated columns (with pretty names) for all the columns that were complained about, and hid all of their source columns from the client tools. I deployed the cube, processed the cube and performed a syntax check. No errors; everything was perfect.
    I ran the query which gave me trouble and guess what... it executed within 5 seconds. My problem is solved. I really do not know why this improves the performance, but the trick worked for me.
    Thanks a lot for your support.
    Chandima

  • OutOfMemory error while executing sql query

    Hello!
    My program fetches several sets of data from the database every ten minutes and stores them in memory for hundreds of users, who request the data via a web interface simultaneously.
    I don't have access to change database structures, write stored procedures, etc.; I can only read from the db.
    There is a table in the database with many millions of rows, and sometimes when I execute a SELECT on this table it takes minutes to get the result back.
    To avoid waiting on the database server for a long time, I set the query timeout to 30 seconds.
    If the server aborts the execution with a query timeout exception, I want to 'forget' this data; a value of 0 is acceptable because a fast run is more important. So I added the boolean broken variable to track whether there is a problem with the db server.
    The size of the used memory is about 150 MB when things are going well, but I set the max heap to 512 MB just in case.
    I'm logging all threads' stack traces and the free/used/allocated memory sizes every 5 seconds (threadwatching.log, excerpted below).
    Sometimes, though not in every case (I don't know what this depends on), when the next phase of refreshing the cached data runs (you can see it below), the process reaches the first checkpoint (marked in the code below), starts to execute the SQL query, and never reaches the second checkpoint; instead the used memory grows by 50-60 MB every 5 seconds, as I can see in threadwatching.log, until it reaches the max memory and an OutOfMemoryError: Java heap space is thrown.
    I'm using DbConnectionBroker for connection pooling, SQLCommandBean for handling Statements, PreparedStatements, etc., and the jTDS JDBC driver.
    SQLCommandBean closes statements and result sets, so these objects don't stay open.
    I can't figure out what causes the memory leak; if someone has an idea, please help me.
    1. Part of the cached data refreshing (DataFactory.createPCVPPMforSiemens()):
        PCVElement element = new PCVElement(m, ProcessControlView.PPM);
        String s = DateTime.getDate(interval.getStartDate());
        boolean broken = false;
        int value = 0;
        for (int j = 0; j < 48; j++) {
            try {
                if (!broken) {
                    d1 = DateTime.getDate(new Date(start + ((j + 1) * 600000)));
                    sqlBean = new SQLCommandBean();
                    conn = broker.getConnection();
                    sqlBean.setConnection(conn);
                    sqlBean.setQueryTimeOut(30);
                    System.out.println(DateTime.getDate(new Date()) + " " + m.getName() + "   " + j); // first checkpoint
                    value = SiemensWorks.getPCVPPM(sqlBean, statId, s, d1);
                    System.out.println(DateTime.getDate(new Date()) + " " + m.getName() + "   " + j); // second checkpoint
                } else value = 0;
            } catch (Exception ex) {
                System.out.println("ERROR: DataFactory.createPCVPPMforSiemens 1 :" + ex.getMessage());
                ex.printStackTrace();
                value = 0;
                broken = true;
            } finally {
                try {
                    broker.freeConnection(conn);
                } catch (Exception ex) {}
            }
            element.getAvgValues()[j] = value;
        }
    2. SiemensWorks.getPCVPPM():
        public static int getPCVPPM(SQLCommandBean sqlBean, int statID, String start, String end)
                throws SQLException, UnsupportedTypeException, NoSuchColumnException {
            sqlBean.setSqlValue(SiemensSQL.PCV_PPM);
            Vector values = new Vector();
            values.add(new StringValue(statID + ""));
            values.add(new StringValue(start));
            values.add(new StringValue(end));
            sqlBean.setValues(values);
            Vector rows = sqlBean.executeQuery();
            if (rows == null || rows.size() == 0) return 0;
            Row row = (Row) rows.firstElement();
            try {
                float ret = Float.parseFloat(row.getString(1));
                if (ret <= 0) ret = 0;
                return Math.round(ret);
            } catch (Exception ex) {
                return 0;
            }
        }
    3. Part of Threadwatching.log:
    2006-10-13 16:46:56 Name: SMT Refreshing Threads
    2006-10-13 16:46:56 Thread count: 4
    2006-10-13 16:46:56 Active count: 4
    2006-10-13 16:46:56 Active group count: 0
    2006-10-13 16:46:56 Daemon: false
    2006-10-13 16:46:56 Priority: 5
    2006-10-13 16:46:57 Free memory: 192,228,944 bytes
    2006-10-13 16:46:57 Max memory: 332,988,416 bytes
    2006-10-13 16:46:57 Memory in use: 140,759,472 bytes
    2006-10-13 16:46:57 ---------------------------------
    2006-10-13 16:46:57 0. Name: CachedLayerTimer
    2006-10-13 16:46:57 0. Id: 19
    2006-10-13 16:46:57 0. Priority: 5
    2006-10-13 16:46:57 0. Parent: SMT Refreshing Threads
    2006-10-13 16:46:57 0. State: RUNNABLE
    2006-10-13 16:46:57 0. Alive: true
    2006-10-13 16:46:57 java.io.FileOutputStream.close0(Native Method)
    2006-10-13 16:46:57 java.io.FileOutputStream.close(Unknown Source)
    2006-10-13 16:46:57 sun.nio.cs.StreamEncoder$CharsetSE.implClose(Unknown Source)
    2006-10-13 16:46:57 sun.nio.cs.StreamEncoder.close(Unknown Source)
    2006-10-13 16:46:57 java.io.OutputStreamWriter.close(Unknown Source)
    2006-10-13 16:46:57 xcompany.smtmonitor.chart.ChartCreator.createChart(ChartCreator.java:663)
    2006-10-13 16:46:57 xcompany.smtmonitor.chart.ChartCreator.create(ChartCreator.java:441)
    2006-10-13 16:46:57 xcompany.smtmonitor.CachedLayerRefreshenerTask.run(CachedLayerRefreshenerTask.java:463)
    2006-10-13 16:46:57 java.util.TimerThread.mainLoop(Unknown Source)
    2006-10-13 16:46:57 java.util.TimerThread.run(Unknown Source)
    The software runs well until execution reaches the DataFactory.createPCVPPMforSiemens function in my code ->
    2006-10-13 16:47:01 Name: SMT Refreshing Threads
    2006-10-13 16:47:01 Thread count: 4
    2006-10-13 16:47:01 Active count: 4
    2006-10-13 16:47:01 Active group count: 0
    2006-10-13 16:47:01 Daemon: false
    2006-10-13 16:47:01 Priority: 5
    2006-10-13 16:47:02 Free memory: 189,253,304 bytes
    2006-10-13 16:47:02 Max memory: 332,988,416 bytes
    2006-10-13 16:47:02 Memory in use: 143,735,112 bytes
    2006-10-13 16:47:02 ---------------------------------
    2006-10-13 16:47:02 0. Name: CachedLayerTimer
    2006-10-13 16:47:02 0. Id: 19
    2006-10-13 16:47:02 0. Priority: 5
    2006-10-13 16:47:02 0. Parent: SMT Refreshing Threads
    2006-10-13 16:47:02 0. State: RUNNABLE
    2006-10-13 16:47:02 0. Alive: true
    2006-10-13 16:47:02 java.util.LinkedList$ListItr.previous(Unknown Source)
    2006-10-13 16:47:02 net.sourceforge.jtds.util.TimerThread.setTimer(TimerThread.java:174)
    2006-10-13 16:47:02 net.sourceforge.jtds.jdbc.TdsCore.wait(TdsCore.java:3734)
    2006-10-13 16:47:02 net.sourceforge.jtds.jdbc.TdsCore.executeSQL(TdsCore.java:997)
    2006-10-13 16:47:02 net.sourceforge.jtds.jdbc.JtdsStatement.executeSQLQuery(JtdsStatement.java:320)
    2006-10-13 16:47:02 net.sourceforge.jtds.jdbc.JtdsPreparedStatement.executeQuery(JtdsPreparedStatement.java:667)
    2006-10-13 16:47:02 xcompany.database.sql.SQLCommandBean.executeQuery(SQLCommandBean.java:91)
    2006-10-13 16:47:02 xcompany.smtmonitor.data.SiemensWorks.getPCVPPM(SiemensWorks.java:409)
    2006-10-13 16:47:02 xcompany.smtmonitor.data.DataFactory.createPCVPPMforSiemens(DataFactory.java:6103)
    2006-10-13 16:47:02 xcompany.smtmonitor.data.DataFactory.refreshProcessControlView(DataFactory.java:5791)
    2006-10-13 16:47:02 xcompany.smtmonitor.CachedLayerRefreshenerTask.run(CachedLayerRefreshenerTask.java:514)
    2006-10-13 16:47:02 java.util.TimerThread.mainLoop(Unknown Source)
    2006-10-13 16:47:02 java.util.TimerThread.run(Unknown Source)
    2006-10-13 16:47:06 Name: SMT Refreshing Threads
    2006-10-13 16:47:06 Thread count: 4
    2006-10-13 16:47:06 Active count: 4
    2006-10-13 16:47:06 Active group count: 0
    2006-10-13 16:47:06 Daemon: false
    2006-10-13 16:47:06 Priority: 5
    2006-10-13 16:47:08 Free memory: 127,428,192 bytes
    2006-10-13 16:47:08 Max memory: 332,988,416 bytes
    2006-10-13 16:47:08 Memory in use: 205,560,224 bytes
    2006-10-13 16:47:08 ---------------------------------
    2006-10-13 16:47:08 0. Name: CachedLayerTimer
    2006-10-13 16:47:08 0. Id: 19
    2006-10-13 16:47:08 0. Priority: 5
    2006-10-13 16:47:08 0. Parent: SMT Refreshing Threads
    2006-10-13 16:47:08 0. State: RUNNABLE
    2006-10-13 16:47:08 0. Alive: true
    2006-10-13 16:47:08 java.util.LinkedList$ListItr.previous(Unknown Source)
    2006-10-13 16:47:08 net.sourceforge.jtds.util.TimerThread.setTimer(TimerThread.java:174)
    2006-10-13 16:47:08 net.sourceforge.jtds.jdbc.TdsCore.wait(TdsCore.java:3734)
    2006-10-13 16:47:08 net.sourceforge.jtds.jdbc.TdsCore.executeSQL(TdsCore.java:997)
    2006-10-13 16:47:08 net.sourceforge.jtds.jdbc.JtdsStatement.executeSQLQuery(JtdsStatement.java:320)
    2006-10-13 16:47:08 net.sourceforge.jtds.jdbc.JtdsPreparedStatement.executeQuery(JtdsPreparedStatement.java:667)
    2006-10-13 16:47:08 xcompany.database.sql.SQLCommandBean.executeQuery(SQLCommandBean.java:91)
    2006-10-13 16:47:08 xcompany.smtmonitor.data.SiemensWorks.getPCVPPM(SiemensWorks.java:409)
    2006-10-13 16:47:08 xcompany.smtmonitor.data.DataFactory.createPCVPPMforSiemens(DataFactory.java:6103)
    2006-10-13 16:47:08 xcompany.smtmonitor.data.DataFactory.refreshProcessControlView(DataFactory.java:5791)
    2006-10-13 16:47:08 xcompany.smtmonitor.CachedLayerRefreshenerTask.run(CachedLayerRefreshenerTask.java:514)
    2006-10-13 16:47:08 java.util.TimerThread.mainLoop(Unknown Source)
    2006-10-13 16:47:08 java.util.TimerThread.run(Unknown Source)
    2006-10-13 16:47:12 Name: SMT Refreshing Threads
    2006-10-13 16:47:12 Thread count: 4
    2006-10-13 16:47:12 Active count: 4
    2006-10-13 16:47:12 Active group count: 0
    2006-10-13 16:47:12 Daemon: false
    2006-10-13 16:47:12 Priority: 5
    2006-10-13 16:47:15 Free memory: 66,760,208 bytes
    2006-10-13 16:47:15 Max memory: 332,988,416 bytes
    2006-10-13 16:47:15 Memory in use: 266,228,208 bytes
    2006-10-13 16:47:15 ---------------------------------
    2006-10-13 16:47:15 0. Name: CachedLayerTimer
    2006-10-13 16:47:15 0. Id: 19
    2006-10-13 16:47:15 0. Priority: 5
    2006-10-13 16:47:15 0. Parent: SMT Refreshing Threads
    2006-10-13 16:47:15 0. State: RUNNABLE
    2006-10-13 16:47:15 0. Alive: true
    2006-10-13 16:47:15 java.util.LinkedList.addBefore(Unknown Source)
    2006-10-13 16:47:15 java.util.LinkedList.access$300(Unknown Source)
    2006-10-13 16:47:15 java.util.LinkedList$ListItr.add(Unknown Source)
    2006-10-13 16:47:15 net.sourceforge.jtds.util.TimerThread.setTimer(TimerThread.java:175)
    2006-10-13 16:47:15 net.sourceforge.jtds.jdbc.TdsCore.wait(TdsCore.java:3734)
    2006-10-13 16:47:15 net.sourceforge.jtds.jdbc.TdsCore.executeSQL(TdsCore.java:997)
    2006-10-13 16:47:15 net.sourceforge.jtds.jdbc.JtdsStatement.executeSQLQuery(JtdsStatement.java:320)
    2006-10-13 16:47:15 net.sourceforge.jtds.jdbc.JtdsPreparedStatement.executeQuery(JtdsPreparedStatement.java:667)
    2006-10-13 16:47:15 xcompany.database.sql.SQLCommandBean.executeQuery(SQLCommandBean.java:91)
    2006-10-13 16:47:15 xcompany.smtmonitor.data.SiemensWorks.getPCVPPM(SiemensWorks.java:409)
    2006-10-13 16:47:15 xcompany.smtmonitor.data.DataFactory.createPCVPPMforSiemens(DataFactory.java:6103)
    2006-10-13 16:47:15 xcompany.smtmonitor.data.DataFactory.refreshProcessControlView(DataFactory.java:5791)
    2006-10-13 16:47:15 xcompany.smtmonitor.CachedLayerRefreshenerTask.run(CachedLayerRefreshenerTask.java:514)
    2006-10-13 16:47:15 java.util.TimerThread.mainLoop(Unknown Source)
    2006-10-13 16:47:15 java.util.TimerThread.run(Unknown Source)
    2006-10-13 16:47:17 Name: SMT Refreshing Threads
    2006-10-13 16:47:17 Thread count: 4
    2006-10-13 16:47:17 Active count: 4
    2006-10-13 16:47:17 Active group count: 0
    2006-10-13 16:47:17 Daemon: false
    2006-10-13 16:47:17 Priority: 5
    2006-10-13 16:47:20 Free memory: 23,232,496 bytes
    2006-10-13 16:47:20 Max memory: 332,988,416 bytes
    2006-10-13 16:47:20 Memory in use: 309,755,920 bytes
    2006-10-13 16:47:20 ---------------------------------
    2006-10-13 16:47:20 0. Name: CachedLayerTimer
    2006-10-13 16:47:20 0. Id: 19
    2006-10-13 16:47:20 0. Priority: 5
    2006-10-13 16:47:20 0. Parent: SMT Refreshing Threads
    2006-10-13 16:47:20 0. State: RUNNABLE
    2006-10-13 16:47:20 0. Alive: true
    2006-10-13 16:47:20 net.sourceforge.jtds.util.TimerThread.setTimer(TimerThread.java:171)
    2006-10-13 16:47:20 net.sourceforge.jtds.jdbc.TdsCore.wait(TdsCore.java:3734)
    2006-10-13 16:47:20 net.sourceforge.jtds.jdbc.TdsCore.executeSQL(TdsCore.java:997)
    2006-10-13 16:47:20 net.sourceforge.jtds.jdbc.JtdsStatement.executeSQLQuery(JtdsStatement.java:320)
    2006-10-13 16:47:20 net.sourceforge.jtds.jdbc.JtdsPreparedStatement.executeQuery(JtdsPreparedStatement.java:667)
    2006-10-13 16:47:20 xcompany.database.sql.SQLCommandBean.executeQuery(SQLCommandBean.java:91)
    2006-10-13 16:47:20 xcompany.smtmonitor.data.SiemensWorks.getPCVPPM(SiemensWorks.java:409)
    2006-10-13 16:47:20 xcompany.smtmonitor.data.DataFactory.createPCVPPMforSiemens(DataFactory.java:6103)
    2006-10-13 16:47:20 xcompany.smtmonitor.data.DataFactory.refreshProcessControlView(DataFactory.java:5791)
    2006-10-13 16:47:20 xcompany.smtmonitor.CachedLayerRefreshenerTask.run(CachedLayerRefreshenerTask.java:514)
    2006-10-13 16:47:20 java.util.TimerThread.mainLoop(Unknown Source)
    2006-10-13 16:47:20 java.util.TimerThread.run(Unknown Source)
    2006-10-13 16:47:23 Name: SMT Refreshing Threads
    2006-10-13 16:47:23 Thread count: 4
    2006-10-13 16:47:23 Active count: 4
    2006-10-13 16:47:23 Active group count: 0
    2006-10-13 16:47:23 Daemon: false
    2006-10-13 16:47:23 Priority: 5
    2006-10-13 16:47:26 Free memory: 4,907,336 bytes
    2006-10-13 16:47:26 Max memory: 332,988,416 bytes
    2006-10-13 16:47:26 Memory in use: 328,083,768 bytes
    2006-10-13 16:47:26 ---------------------------------
    2006-10-13 16:47:26 0. Name: CachedLayerTimer
    2006-10-13 16:47:26 0. Id: 19
    2006-10-13 16:47:26 0. Priority: 5
    2006-10-13 16:47:26 0. Parent: SMT Refreshing Threads
    2006-10-13 16:47:26 0. State: RUNNABLE
    2006-10-13 16:47:26 0. Alive: true
    2006-10-13 16:47:26 java.util.LinkedList.addBefore(Unknown Source)
    2006-10-13 16:47:26 java.util.LinkedList.access$300(Unknown Source)
    2006-10-13 16:47:26 java.util.LinkedList$ListItr.add(Unknown Source)
    2006-10-13 16:47:26 net.sourceforge.jtds.util.TimerThread.setTimer(TimerThread.java:175)
    2006-10-13 16:47:26 net.sourceforge.jtds.jdbc.TdsCore.wait(TdsCore.java:3734)
    2006-10-13 16:47:26 net.sourceforge.jtds.jdbc.TdsCore.executeSQL(TdsCore.java:997)
    2006-10-13 16:47:26 net.sourceforge.jtds.jdbc.JtdsStatement.executeSQLQuery(JtdsStatement.java:320)
    2006-10-13 16:47:26 net.sourceforge.jtds.jdbc.JtdsPreparedStatement.executeQuery(JtdsPreparedStatement.java:667)
    2006-10-13 16:47:26 xcompany.database.sql.SQLCommandBean.executeQuery(SQLCommandBean.java:91)
    2006-10-13 16:47:26 xcompany.smtmonitor.data.SiemensWorks.getPCVPPM(SiemensWorks.java:409)
    2006-10-13 16:47:26 xcompany.smtmonitor.data.DataFactory.createPCVPPMforSiemens(DataFactory.java:6103)
    2006-10-13 16:47:26 xcompany.smtmonitor.data.DataFactory.refreshProcessControlView(DataFactory.java:5791)
    2006-10-13 16:47:26 xcompany.smtmonitor.CachedLayerRefreshenerTask.run(CachedLayerRefreshenerTask.java:514)
    2006-10-13 16:47:26 java.util.TimerThread.mainLoop(Unknown Source)
    2006-10-13 16:47:26 java.util.TimerThread.run(Unknown Source)
    2006-10-13 16:47:35 Name: SMT Refreshing Threads
    2006-10-13 16:47:37 Thread count: 4
    2006-10-13 16:47:38 Active count: 4
    2006-10-13 16:47:38 Active group count: 0
    2006-10-13 16:47:38 Daemon: false
    2006-10-13 16:47:38 Priority: 5
    2006-10-13 16:47:42 Free memory: 35,316,120 bytes
    2006-10-13 16:47:42 Max memory: 332,988,416 bytes
    2006-10-13 16:47:42 Memory in use: 297,672,296 bytes
    2006-10-13 16:47:42 ---------------------------------
    2006-10-13 16:47:42 0. Name: CachedLayerTimer
    2006-10-13 16:47:42 0. Id: 19
    2006-10-13 16:47:42 0. Priority: 5
    2006-10-13 16:47:42 0. Parent: SMT Refreshing Threads
    2006-10-13 16:47:42 0. State: TIMED_WAITING
    2006-10-13 16:47:42 0. Alive: true
    2006-10-13 16:47:42 java.lang.Object.wait(Native Method)
    2006-10-13 16:47:42 java.util.TimerThread.mainLoop(Unknown Source)
    2006-10-13 16:47:42 java.util.TimerThread.run(Unknown Source)
    4. Tomcat default logging file:
    2006-10-13 16:47:36 ERROR CachedLayerRefreshenerTask: external error: Java heap space
    5. DbConnectionBroker (connection pooling) logging file:
    Handing out connection 1 --> 10/13/2006 04:47:01 PM
    Handing out connection 0 --> 10/13/2006 04:47:01 PM
    Handing out connection 1 --> 10/13/2006 04:47:01 PM
    Handing out connection 0 --> 10/13/2006 04:47:02 PM
    Warning. Connection 0 in use for 3141 ms
    Warning. Connection 0 in use for 24891 ms
    ----> Error: Could not free connection!!!
    I would appreciate any help.

    What does your query bring back from this table?
    > This is the query:
    > SELECT case sum(c.picked) when 0 then 0 else
    > ((sum(c.picked)-(sum(c.picked)-(sum(c.vacuum)+sum(c.ident))))*cast((1000000/cast(sum(c.picked) as float)) as bigint)) end as PPM
    > FROM sip_comp c
    > LEFT JOIN sip_pcb pc ON pc.id=c.pcbid
    > LEFT JOIN sip_period p on p.id=pc.periodid
    > WHERE p.stationid=? AND pc.time BETWEEN ? AND ?
    Has anybody who knows SQL tried EXPLAIN PLAN to optimize this query? You're joining on a table with a million rows and you're wondering why the performance is poor? What is the index situation with these tables?
    > When I execute it from query manager, it takes from 1 to 60 secs depending on server availability.
    So how will that be any different for JDBC and Java?
    > You're right. That's why I am here.
    What I mean by that is we can't read minds, either. You need to get some hard data to tell you where the bottleneck is. Asking at a forum won't help.
    > But tell me, if the java process enters this query execution and doesn't quit until the OOM is thrown, how can the problem be in caching?
    I was guessing about caching, because I didn't know what the query was. You expect a lot.
    > No.
    Then how do you ever expect to solve this?
    > I tried YourKit Profiler at home, where I'm developing the software, but this OOM is never thrown there, even if I have the same database size.
    Then you aren't replicating the problem. You have to run it on the system that has the problem if you're going to solve it. YourKit isn't an industry leader. How well do you know how to use it?
    > It just happened at the company where the system runs, and I cannot run this profiler there because the PC where my Tomcat runs slowed down dramatically.
    You have to run something to figure out what the problem is. What about Log4J, some trace logging statements and a batch job to harvest the log?
    Bottom line: you've got to be a scientist and get some real data. We can theorize all we want here, but that won't get you to a solution.
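    For completeness, one way to bound the memory a single JDBC query can consume while you gather that hard data is to cap the rows and the fetch size on the statement itself; this is a generic JDBC sketch (driver behaviour varies, and jTDS buffers results differently depending on its settings), not code from the poster's SQLCommandBean:

        import java.sql.*;

        public class BoundedQuery {
            // Runs a query with a row cap, a fetch-size hint and a timeout,
            // closing everything in finally so nothing is left open on errors.
            public static int countRows(Connection conn, String sql) throws SQLException {
                PreparedStatement ps = null;
                ResultSet rs = null;
                try {
                    ps = conn.prepareStatement(sql);
                    ps.setQueryTimeout(30);  // give up after 30 seconds, as in the original code
                    ps.setMaxRows(10000);    // hard cap on the rows the driver will return
                    ps.setFetchSize(100);    // hint: stream rows in small batches where supported
                    rs = ps.executeQuery();
                    int n = 0;
                    while (rs.next()) n++;
                    return n;
                } finally {
                    if (rs != null) try { rs.close(); } catch (SQLException ignore) {}
                    if (ps != null) try { ps.close(); } catch (SQLException ignore) {}
                }
            }
        }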

  • Java.lang.OutOfMemory error while retrieving data from a large table

    Hi,
    I am trying to fetch data from the database into a ResultSet using "executeQuery()". But since the data in that table is large, I am receiving a "java.lang.OutOfMemory" error. To resolve that, I used "setMaxRows()" on my Statement object. This resolved the error, but I don't receive the entire data, and if I call "executeQuery()" again, I receive the same data. I don't even know a filtering criterion whereby I could filter the data for each "executeQuery()".
    How can I resolve this problem?
    Thanks in advance
    --Chaitanya

    Either use some criteria you develop related to one of the keys on the table or use some sort of record limiting method.
    Note that the method of limiting will vary depending on the database you are using; you will have to look at the documentation.
    For example, I am told this will work in MySQL to get 200 records starting at record 100:
    SELECT * FROM myTable ORDER BY whatever ASC LIMIT 100,200
    Because you are running out of memory, I assume the table is large.
    I am not sure what impact the above will have on performance, because if the ORDER BY is not based on an index at the server level, all the records will be selected and sorted before the rows are limited.
    I would make sure you have an appropriate index.
    If you search the user forums for "resultset paging", possibly together with the database you are using, you should be able to get some ideas.
    I hope this makes sense to you.
    rykk
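    A minimal sketch of that key-based, limited-page style of fetching over JDBC, assuming a hypothetical table myTable with a numeric primary key id (the table, columns and MySQL-specific LIMIT syntax are illustrative placeholders):

        import java.sql.*;
        import java.util.*;

        public class PagedFetch {
            // Fetches one page at a time, remembering the last key seen so each
            // executeQuery() returns the next slice instead of the same rows.
            public static List fetchPage(Connection conn, long afterId, int pageSize)
                    throws SQLException {
                String sql = "SELECT id, name FROM myTable WHERE id > ? ORDER BY id ASC LIMIT ?";
                PreparedStatement ps = conn.prepareStatement(sql);
                try {
                    ps.setLong(1, afterId);
                    ps.setInt(2, pageSize);
                    ResultSet rs = ps.executeQuery();
                    List names = new ArrayList();
                    while (rs.next()) {
                        names.add(rs.getString("name")); // only pageSize rows are ever held in memory
                    }
                    rs.close();
                    return names;
                } finally {
                    ps.close();
                }
            }
        }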

  • Bad Performance/OutOfMemory Error in CMP Entity Bean with Large DB

    Hello:
    I have a CMP entity bean deployed on WLS 7.0.
    The entity bean maps to a table that has 97,480 records.
    It has a finder: findAll() -- SELECT OBJECT(e) FROM Equipment e
    I have a JSP client that invokes the findAll().
    The performance is very poor: ~150 seconds just to perform the findAll() (benchmarked from within the JSP code).
    If more than one simultaneous call is made, I get an OutOfMemory error.
    WLS is started with a max memory of 512MB.
    The EJB is deployed with <max-beans-in-cache>100000</max-beans-in-cache>
    (without the max-beans-in-cache directive the performance is worse).
    Is there any documentation available to help us in deploying CMP entity beans with a very large number of records (instances)?
    Any help is greatly appreciated.
    Regards
    Rajan

    Hi
    You should use a Select Method, it does support cursors.
    Or a Home Select Method combination.
    Regards
    Thomas
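    For illustration, a rough EJB 2.x sketch of the select/home-method combination being suggested, where the home method hands the client only a bounded page instead of 97,480 entity instances; the method names, page handling and return contents are assumptions, not code from the original bean:

        import java.util.*;
        import javax.ejb.*;

        public abstract class EquipmentBean implements EntityBean {
            // ejbSelect method, backed in ejb-jar.xml by: SELECT OBJECT(e) FROM Equipment e
            public abstract Collection ejbSelectAllEquipment() throws FinderException;

            // Exposed on the home interface as getEquipmentPage(int); copies out at most
            // maxRows entries so the JSP never receives the whole table.
            public Collection ejbHomeGetEquipmentPage(int maxRows) throws FinderException {
                Collection all = ejbSelectAllEquipment();
                List page = new ArrayList();
                Iterator it = all.iterator();
                while (it.hasNext() && page.size() < maxRows) {
                    // In practice copy lightweight fields (e.g. into a value object), not the entity itself.
                    page.add(it.next());
                }
                return page;
            }
        }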

  • Jrockit crashes due to outofmemory and illegal memory acces

    Hello,
    We have been using jrmc-3.1.2-1.6.0 and lately we are seeing JVM crashes every couple of days. Note: we have only started seeing these crashes recently; for the last year we did not see such crashes.
    The following are the different issues that we have seen over the last few days:
    1. Crash due to illegal memory access
    2. Crash due to out of memory error
    3. Server becoming unresponsive/idle
    For the illegal memory access and out of memory crashes, a JRockit dump file was created, and the dump seems to point at the JRockit libjvm.so module for the exception/crash.
    Any help would be appreciated!
    *** Dump for Illegal Memory Access
    Error Message: Illegal memory access. [54]
    Signal info : si_signo=11, si_code=1 si_addr=0x10
    Version : BEA JRockit(R) R27.6.5-32_o-121899-1.6.0_14-20091001-2113-linux-ia32
    CPU : Intel Core i7 (HT) SSE SSE2 SSE3 SSSE3 SSE4.1 SSE4.2 Core Intel64
    Number CPUs : 4
    Tot Phys Mem : 12762509312 (12171 MB)
    OS version : Red Hat Enterprise Linux Server release 5.6 (Tikanga)
    Linux version 2.6.18-164.15.1.el5PAE ([email protected]) (gcc version 4.1.2 20080704 (Red Hat 4.1.2-46)) #1 SMP Mon Mar 1 11:14:09 EST 2010 (i686)
    Thread System: NPTL
    Java locking : Lazy unlocking enabled (class banning) (transfer banning)
    State : JVM is running
    Command Line : -Djava.util.logging.config.file=/usr/local/springsource/tcServer-6.0/zplus/conf/logging.properties -Djava.util.logging.manager=com.springsource.tcserver.serviceability.logging.TcServerLogManager -Xmx2048m -Xms2048m -Djava.rmi.server.hostname=300714-web8.echovox.com -XgcPrio:pausetime -DTERRACOTTA_URL=338449-web10.echovox.com:9510,338450-web11.echovox.com:9510 -Dapp_version= -Xmanagement -Dcom.sun.management.jmxremote.port=7091 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dehcache.monitor.enabled=True -Djava.endorsed.dirs=/usr/local/springsource/tcServer-6.0/tomcat-6.0.20.A/endorsed -Dcatalina.base=/usr/local/springsource/tcServer-6.0/zplus -Dcatalina.home=/usr/local/springsource/tcServer-6.0/tomcat-6.0.20.A -Djava.io.tmpdir=/usr/local/springsource/tcServer-6.0/zplus/temp -Dsun.java.launcher=SUN_STANDARD org.apache.catalina.startup.Bootstrap start
    java.home : /usr/local/jrmc-3.1.2-1.6.0/jre
    j.class.path : :/usr/local/springsource/tcServer-6.0/tomcat-6.0.20.A/bin/bootstrap.jar
    j.lib.path : /usr/local/jrmc-3.1.2-1.6.0/jre/lib/i386/jrockit:/usr/local/jrmc-3.1.2-1.6.0/jre/lib/i386:/usr/local/jrmc-3.1.2-1.6.0/jre/../lib/i386
    JAVA_HOME : <not set>
    JAVAOPTIONS: <not set>
    LD_LIBRARY_PATH: /usr/local/jrmc-3.1.2-1.6.0/jre/lib/i386/jrockit:/usr/local/jrmc-3.1.2-1.6.0/jre/lib/i386:/usr/local/jrmc-3.1.2-1.6.0/jre/../lib/i386
    LD_PRELOAD : <not set>
    LD_ASSUME_KERNEL: <not set>
    StackOverFlow: 0 StackOverFlowErrors have occured
    OutOfMemory : 0 OutOfMemoryErrors have occured
    C Heap : Good; no memory allocations have failed
    GC Strategy : Mode: pausetime. Currently using strategy: genconcon
    GC Status : OC currently running, in phase: marking. This is OC#2905.
    : YC is not running. Last finished YC was YC#100045.
    OC History : Strategy genconcon was used for OC#2776.
    : Strategy genconpar was used for OC#2777 to OC#2781.
    : Strategy genconcon was used for OC#2782 to OC#2783.
    : Strategy genconpar was used for OC#2784 to OC#2790.
    : Strategy genconcon was used for OC#2791 to OC#2905.
    YC History : Ran 1 YCs before OC#2901.
    : Ran 0 YCs before OC#2902.
    : Ran 1 YCs before OC#2903.
    : Ran 0 YCs before OC#2904.
    : Ran 2 YCs before OC#2905.
    YC Promotion : Last YC successfully promoted all objects
    Heap : 0x8b00000 - 0x88b00000 (Size: 2048 MB)
    Compaction : 0x49b00000 - 0x51b00000 (Current compaction type: internal)
    NurseryList : 0xaf67e18 - 0x8226c988
    KeepArea : 0x817031e8 - 0x8226c988
    NurseryMarker: [ 0x80d031f0,  0x817031e8 ]
    CompRefs : References are 32-bit.
    Registers (from ThreadContext: 0x96b75420 / OS context: 0x96b7551c):
    eax = 00000000 ecx = ffffffcc edx = 8b80eb78 ebx = 8b80eb78
    esp = 96b75814 ebp = 96b75960 esi = 96b75a48 edi = 00000000
    es = 0000007b cs = 00000073 ss = 0000007b ds = 0000007b
    fs = 00000000 gs = 00000033
    eip = b7d3a397 eflags = 00210296
    Loaded modules:
    (* denotes the module causing the exception)
    08048000-08058233 /usr/local/jrmc-3.1.2-1.6.0/jre/bin/java
    b7f24000-b7f2462b /usr/local/jrmc-3.1.2-1.6.0/jre/bin/java
    00754000-00768f17 /lib/libpthread.so.0
    006b2000-006d8a23 /lib/libm.so.6
    006ab000-006ad0fb /lib/libdl.so.2
    00550000-006a2723 /lib/libc.so.6
    00531000-0054b4f7 /lib/ld-linux.so.2
    b7c46000-b7e9fea7 */usr/local/jrmc-3.1.2-1.6.0/jre/lib/i386/jrockit/libjvm.so
    007a3000-007a9ebf /lib/librt.so.1
    b722f000-b723839b /lib/libnss_files.so.2
    b7118000-b71229bb /usr/local/jrmc-3.1.2-1.6.0/jre/lib/i386/libverify.so
    b70f3000-b7115f57 /usr/local/jrmc-3.1.2-1.6.0/jre/lib/i386/libjava.so
    007ae000-007c27a7 /lib/libnsl.so.1
    b723e000-b7243ef0 /usr/local/jrmc-3.1.2-1.6.0/jre/lib/i386/native_threads/libhpi.so
    b59f0000-b59fe3e4 /usr/local/jrmc-3.1.2-1.6.0/jre/lib/i386/libzip.so
    b5805000-b580a666 /usr/local/jrmc-3.1.2-1.6.0/jre/lib/i386/libmanagement.so
    b5224000-b5236a18 /usr/local/jrmc-3.1.2-1.6.0/jre/lib/i386/libnet.so
    b723c000-b723c6ad /usr/local/jrmc-3.1.2-1.6.0/jre/lib/i386/librmi.so
    b2d42000-b2d45c2f /lib/libnss_dns.so.2
    b2d2e000-b2d3d74b /lib/libresolv.so.2
    b2d4a000-b2d503a4 /usr/local/jrmc-3.1.2-1.6.0/jre/lib/i386/libnio.so
    b1f9d000-b1fc5857 /usr/local/springsource/tcServer-6.0/zplus/temp/tmpSigarJars841342613926113968186721535772/libsigar-x86-linux-1.6.4.so
    Stack:
    (* marks the word pointed to by the stack pointer)
    96b75814: b7cbce41* 00000000 006772e2 b04dbab8 006a4ff4 00000030
    96b7582c: 00000002 96b758a0 00000000 00000000 00010000 0061eea8
    96b75844: b7e48510 00010000 00000000 0061eeb9 b7d8f0f8 20000000
    96b7585c: 00010000 00000000 00004022 ffffffff 00000000 96b75890
    Code:
    (* marks the word pointed to by the instruction pointer)
    b7d3a364: c0a108ec e8b7f0fe ffffffa0 f0fed4a1 ff96e8b7 b8c9ffff
    b7d3a37c: 00000001 900debc3 90909090 90909090 90909090 8be58955
    b7d3a394: 8b5d0845* d2851050 0fc0950f b60fc0b6 768dc3c0 27bc8d00
    b7d3a3ac: 00000000 53e58955 0134ec81 9d8d0000 fffffee8 04245c89
    "RMI TCP Connection(idle)" id=440712 idx=0x7c4 tid=825 lastJavaFrame=0x96b75ecc
    Stack 0: start=0x96b54000, end=0x96b78000, guards=0x96b59000 (ok), forbidden=0x96b57000
    Thread Stack Trace:
    at jniExceptionCheck+7()@0xb7d3a397
    at cmgrGenerateCode+260()@0xb7cbe024
    at generate_code2+937()@0xb7da6c99
    at generate_code+97()@0xb7da6f11
    at get_runnable_codeinfo2+275()@0xb7da74c3
    at call_java+317()@0xb7d42f0d
    at jniInvoke+110()@0xb7d451be
    -- Java stack --
    at jrockit/vm/Reflect.invokeMethod(Ljava/lang/Object;Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;(Native Method)
        at sun/reflect/NativeConstructorAccessorImpl.newInstance0(Ljava/lang/reflect/Constructor;[Ljava/lang/Object;)Ljava/lang/Object;(Native Method)
        at sun/reflect/NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
        at sun/reflect/DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)[optimized]
    at java/lang/reflect/Constructor.newInstance(Constructor.java:513)[optimized]
    at java/lang/Class.newInstance0(Class.java:355)[inlined]
    at java/lang/Class.newInstance(Class.java:308)[optimized]
    at sun/reflect/MethodAccessorGenerator$1.run(MethodAccessorGenerator.java:381)
    at jrockit/vm/AccessController.doPrivileged(AccessController.java:233)[inlined]
    at jrockit/vm/AccessController.doPrivileged(AccessController.java:241)[inlined]
    at sun/reflect/MethodAccessorGenerator.generate(MethodAccessorGenerator.java:377)[optimized]
    at sun/reflect/MethodAccessorGenerator.generateMethod(MethodAccessorGenerator.java:59)
    at sun/reflect/NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:28)
    at sun/reflect/DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)[optimized]
    at java/lang/reflect/Method.invoke(Method.java:597)[inlined]
    at java/io/ObjectStreamClass.invokeWriteObject(ObjectStreamClass.java:945)[inlined]
    at java/io/ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1461)[optimized]
    at java/io/ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1392)[inlined]
    at java/io/ObjectOutputStream.writeObject0(ObjectOutputStream.java:1150)[inlined]
    at java/io/ObjectOutputStream.writeObject(ObjectOutputStream.java:326)[optimized]
    at sun/rmi/server/UnicastRef.marshalValue(UnicastRef.java:274)
    at sun/rmi/server/UnicastServerRef.dispatch(UnicastServerRef.java:315)
    at sun/rmi/transport/Transport$1.run(Transport.java:159)
    at jrockit/vm/AccessController.doPrivileged(AccessController.java:255)[optimized]
    at sun/rmi/transport/Transport.serviceCall(Transport.java:155)
    at sun/rmi/transport/tcp/TCPTransport.handleMessages(TCPTransport.java:535)
    at sun/rmi/transport/tcp/TCPTransport$ConnectionHandler.run0(TCPTransport.java:790)
    at sun/rmi/transport/tcp/TCPTransport$ConnectionHandler.run(TCPTransport.java:649)
    at java/util/concurrent/ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)[inlined]
    at java/util/concurrent/ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)[optimized]
    at java/lang/Thread.run(Thread.java:619)[optimized]
    at jrockit/vm/RNI.c2java(IIIII)V(Native Method)
    -- end of trace
    Extended, platform specific info:
    libc release: 2.5-stable
    Elf headers:
    libc ehdrs: EI: 7f454c46010101000000000000000000 ET: 3 EM: 3 V: 1 ENTRY: 00565fe0 PHOFF: 00000034 SHOFF: 0019ccbc EF: 0x0 HS: 52 PS: 32 PHN; 10 SS: 40 SHN: 75 STIDX: 74
    libpthread ehdrs: EI: 7f454c46010101000000000000000000 ET: 3 EM: 3 V: 1 ENTRY: 00758850 PHOFF: 00000034 SHOFF: 00021474 EF: 0x0 HS: 52 PS: 32 PHN; 9 SS: 40 SHN: 40 STIDX: 39
    libjvm ehdrs: EI: 7f454c46010101000000000000000000 ET: 3 EM: 3 V: 1 ENTRY: 0004c3a0 PHOFF: 00000034 SHOFF: 012c9a58 EF: 0x0 HS: 52 PS: 32 PHN; 4 SS: 40 SHN: 29 STIDX: 26
    * If you see this dump, please go to *
    * http://edocs.bea.com/jrockit/go2troubleshooting.html *
    * for troubleshooting information. *
    ===== END DUMP ===============================================================
    ****** Dump for OutOfMemory error
    Error Message: Out of memory [68]
    Signal info : si_signo=11, si_code=1 si_addr=(nil)
    Fatal Error : Reference Iteration refIterInit src/jvm/code/runtime/refiter.c:167
    Version : BEA JRockit(R) R27.6.5-32_o-121899-1.6.0_14-20091001-2113-linux-ia32
    CPU : Intel Core i7 (HT) SSE SSE2 SSE3 SSSE3 SSE4.1 SSE4.2 Core Intel64
    Number CPUs : 4
    Tot Phys Mem : 12762509312 (12171 MB)
    OS version : Red Hat Enterprise Linux Server release 5.6 (Tikanga)
    Linux version 2.6.18-164.15.1.el5PAE ([email protected]) (gcc version 4.1.2 20080704 (Red Hat 4.1.2-46)) #1 SMP Mon Mar 1 11:14:09 EST 2010 (i686)
    Thread System: NPTL
    Java locking : Lazy unlocking enabled (class banning) (transfer banning)
    State : JVM is running
    Command Line : -Djava.util.logging.config.file=/usr/local/springsource/tcServer-6.0/zplus/conf/logging.properties -Djava.util.logging.manager=com.springsource.tcserver.serviceability.logging.TcServerLogManager -Xmx2048m -Xms2048m -Djava.rmi.server.hostname=300714-web8.echovox.com -XgcPrio:pausetime -DTERRACOTTA_URL=338449-web10.echovox.com:9510,338450-web11.echovox.com:9510 -Dapp_version= -Xmanagement -Dcom.sun.management.jmxremote.port=7091 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dehcache.monitor.enabled=True -Djava.endorsed.dirs=/usr/local/springsource/tcServer-6.0/tomcat-6.0.20.A/endorsed -Dcatalina.base=/usr/local/springsource/tcServer-6.0/zplus -Dcatalina.home=/usr/local/springsource/tcServer-6.0/tomcat-6.0.20.A -Djava.io.tmpdir=/usr/local/springsource/tcServer-6.0/zplus/temp -Dsun.java.launcher=SUN_STANDARD org.apache.catalina.startup.Bootstrap start
    java.home : /usr/local/jrmc-3.1.2-1.6.0/jre
    j.class.path : :/usr/local/springsource/tcServer-6.0/tomcat-6.0.20.A/bin/bootstrap.jar
    j.lib.path : /usr/local/jrmc-3.1.2-1.6.0/jre/lib/i386/jrockit:/usr/local/jrmc-3.1.2-1.6.0/jre/lib/i386:/usr/local/jrmc-3.1.2-1.6.0/jre/../lib/i386
    JAVA_HOME : <not set>
    JAVAOPTIONS: <not set>
    LD_LIBRARY_PATH: /usr/local/jrmc-3.1.2-1.6.0/jre/lib/i386/jrockit:/usr/local/jrmc-3.1.2-1.6.0/jre/lib/i386:/usr/local/jrmc-3.1.2-1.6.0/jre/../lib/i386
    LD_PRELOAD : <not set>
    LD_ASSUME_KERNEL: <not set>
    StackOverFlow: 0 StackOverFlowErrors have occured
    OutOfMemory : 0 OutOfMemoryErrors have occured
    C Heap : 1 memory allocations have failed
    : First failure was a mmMalloc of 20 bytes
    : Last failure was a mmMalloc of 20 bytes
    GC Strategy : Mode: pausetime. Currently using strategy: genconpar
    GC Status : OC is not running. Last finished OC was OC#1945.
    : YC is not running. Last finished YC was YC#90824.
    OC History : Strategy genconpar was used for OC#1745 to OC#1755.
    : Strategy genparpar was used for OC#1756 to OC#1757.
    : Strategy genconpar was used for OC#1758 to OC#1938.
    : Strategy genparpar was used for OC#1939 to OC#1940.
    : Strategy genconpar was used for OC#1941 to OC#1945.
    YC History : Ran 1 YCs before OC#1941.
    : Ran 6 YCs before OC#1942.
    : Ran 4 YCs before OC#1943.
    : Ran 6 YCs before OC#1944.
    : Ran 4 YCs before OC#1945.
    : Ran 5 YCs since last OC.
    YC Promotion : Last YC successfully promoted all objects
    Heap : 0x8100000 - 0x88100000 (Size: 2048 MB)
    Compaction : 0x64100000 - 0x68100020 (Current compaction type: internal)
    NurseryList : 0x8214640 - 0x84438f20
    KeepArea : (no keeparea in use)
    NurseryMarker: [ 0x827bfd10,  0x83996a00 ]
    CompRefs : References are 32-bit.
    Registers (from ThreadContext: 0x2a96c20 / OS context: 0x2a96d1c):
    eax = 00001267 ecx = 00000000 edx = 00000042 ebx = b7eb0de4
    esp = 02a97010 ebp = 02a97028 esi = 00000044 edi = 02a97078
    es = 0000007b cs = 00000073 ss = 0000007b ds = 0000007b
    fs = 00000000 gs = 00000033
    eip = b7cfca45 eflags = 00010206
    Loaded modules:
    (* denotes the module causing the exception)
    08048000-08058233 /usr/local/jrmc-3.1.2-1.6.0/jre/bin/java
    b7f44000-b7f4462b /usr/local/jrmc-3.1.2-1.6.0/jre/bin/java
    00754000-00768f17 /lib/libpthread.so.0
    006b2000-006d8a23 /lib/libm.so.6
    006ab000-006ad0fb /lib/libdl.so.2
    00550000-006a2723 /lib/libc.so.6
    00531000-0054b4f7 /lib/ld-linux.so.2
    b7c66000-b7ebfea7 */usr/local/jrmc-3.1.2-1.6.0/jre/lib/i386/jrockit/libjvm.so
    007a3000-007a9ebf /lib/librt.so.1
    b724f000-b725839b /lib/libnss_files.so.2
    b7138000-b71429bb /usr/local/jrmc-3.1.2-1.6.0/jre/lib/i386/libverify.so
    b7113000-b7135f57 /usr/local/jrmc-3.1.2-1.6.0/jre/lib/i386/libjava.so
    007ae000-007c27a7 /lib/libnsl.so.1
    b725e000-b7263ef0 /usr/local/jrmc-3.1.2-1.6.0/jre/lib/i386/native_threads/libhpi.so
    b5972000-b59803e4 /usr/local/jrmc-3.1.2-1.6.0/jre/lib/i386/libzip.so
    b5803000-b5808666 /usr/local/jrmc-3.1.2-1.6.0/jre/lib/i386/libmanagement.so
    b51a8000-b51baa18 /usr/local/jrmc-3.1.2-1.6.0/jre/lib/i386/libnet.so
    b725c000-b725c6ad /usr/local/jrmc-3.1.2-1.6.0/jre/lib/i386/librmi.so
    b2dae000-b2db1c2f /lib/libnss_dns.so.2
    b2d9a000-b2da974b /lib/libresolv.so.2
    b47ae000-b47b43a4 /usr/local/jrmc-3.1.2-1.6.0/jre/lib/i386/libnio.so
    b1ec7000-b1eef857 /usr/local/springsource/tcServer-6.0/zplus/temp/tmpSigarJars551489478996731294326116727652638760/libsigar-x86-linux-1.6.4.so
    Stack:
    (* marks the word pointed to by the stack pointer)
    02a97010: b7ec5040* 00000200 b7eb0de4 02a97078 02a97078 02a97034
    02a97028: 02a97048 b7e78894 00000044 b7eb0de4 02a97078 00000001
    02a97040: 02a970d0 02a97110 02a97068 b7e788bf 00000044 b7eb0de4
    02a97058: 02a97078 b666692a 00000001 02a970d0 02a97088 b7e2147a
    Code:
    (* marks the word pointed to by the instruction pointer)
    b7cfca14: 4c892074 458b0824 2404c710 b7ec5040 0c244489 000200b8
    b7cfca2c: 24448900 b5dae804 01b80017 a3000000 b7ec5024 001267b8
    b7cfca44: 0000a300* 04c70000 00003f24 bdcae800 768d0017 27bc8d00
    b7cfca5c: 00000000 e589fc55 53c03157 b9e87d8d 00000004 00c0ec81
    "tomcat-http--119" id=24059 idx=0x60c tid=8771 lastJavaFrame=0xfffffffc
    Stack 0: start=0x2a74000, end=0x2a98000, guards=0x2a79000 (ok), forbidden=0x2a77000
    Thread Stack Trace:
    at dumpForceDump+117()@0xb7cfca45
    at vmFatalErrorMsgV+84()@0xb7e78894
    at vmFatalErrorMsg+31()@0xb7e788bf
    at fatalError+42()@0xb7e2147a
    at refIterInit+111()@0xb7e215bf
    at trProcessLocksForThread+41()@0xb7e300f9
    at get_all_locks+106()@0xb7d476ca
    at javaLockConvertLazyToThin+99()@0xb7d477b3
    at javaLockUnmatchedLock+802()@0xb7d488f2
    at jniMonitorEnter+48()@0xb7d66cf0
    at vmtiDetachFromThreadObject+85()@0xb7d9ec85
    at tsiThreadStub+147()@0xb7d9f0f3
    at ptiThreadStub+18()@0xb7e0e1d2
    at start_thread+226()@0x759832
    at __clone+94()@0x62245e
    -- Java stack --

    From the command line (-Dehcache.monitor.enabled=True) it looks like you are using some form of caching.
    The out-of-memory occurred because the JVM was unable to allocate an object: "C Heap : 1 memory allocations have failed".
    Could you check how the live data set is trending (or use the memory leak detector)?
    Some considerations for tuning a JVM that runs a cache can be found here: http://middlewaremagic.com/weblogic/?p=7083
    Note that the example given discusses Coherence, but it can be adapted to other caching mechanisms as well.
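    One lightweight way to watch the live data set over time without attaching a profiler is to poll heap usage through the standard java.lang.management beans; this is a generic monitoring sketch (interval and output are arbitrary), not something specific to JRockit or Ehcache:

        import java.lang.management.ManagementFactory;
        import java.lang.management.MemoryMXBean;
        import java.lang.management.MemoryUsage;

        public class HeapWatcher implements Runnable {
            // Periodically logs heap usage; a steadily rising "used" trend across
            // collections usually means the live data set (or a leak) is growing.
            public void run() {
                MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
                while (!Thread.currentThread().isInterrupted()) {
                    MemoryUsage heap = mem.getHeapMemoryUsage();
                    System.out.println("heap used=" + (heap.getUsed() >> 20) + " MB"
                            + " committed=" + (heap.getCommitted() >> 20) + " MB"
                            + " max=" + (heap.getMax() >> 20) + " MB");
                    try {
                        Thread.sleep(10000);
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
            }

            public static void main(String[] args) {
                new Thread(new HeapWatcher(), "heap-watcher").start();
            }
        }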

  • OutOfMemory error when trying to display large tables

    We use JDeveloper 10.1.3. Our project uses ADF Faces + EJB3 Session Facade + TopLink.
    We have a large table (over 100K rows) which we try to show to the user via an ADF Read-only Table. We build the page by dragging the facade findAllXXX method's result onto the page and choosing "ADF Read-only Table".
    The problem is that during execution we get an OutOfMemory error. The Facade method attempts to extract the whole result set and to transfer it to a List. But the result set is simply too large. There's not enough memory.
    Initially, I was under the impression that the table iterator would be running queries that automatically fetch just a chunk of the db table data at a time. Sadly, this is not the case. Apparently, all the data gets fetched. And then the iterator simply iterates through a List in memory. This is not what we needed.
    So, I'd like to ask: is there a way for us to show a very large database table inside an ADF Table? And when the user clicks on "Next", to have the iterator automatically execute queries against the database and fetch the next chunk of data, if necessary?
    If that is not possible with ADF components, it looks like we'll have to either write our own component or simply use the old code that we have which supports paging for huge tables by simply running new queries whenever necessary. Alternatively, each time the user clicks on "Next" or "Previous", we might have to intercept the event and manually send range information to a facade method which would then fetch the appropriate data from the database. I don't know how easy or difficult that would be to implement.
    Naturally, I'd prefer to have that functionality available in ADF Faces. I hope there's a way to do this. But I'm still a novice and I would appreciate any advice.

    Hi Shay,
    We do use search pages and we do give the users the opportunity to specify search criteria.
    The trouble comes when the search criteria are not specific enough and the result set is huge. Transferring the whole result set into memory will be disastrous, especially for servers used by hundreds of users simultaneously. So, we'll have to limit the number of rows fetched at a time. We should do this either by setting the Maximum Rows option for the TopLink query (or using rownum<=XXX inside the SQL), or through using a data provider that supports paging.
    I don't like the first approach very much because I don't have a good recipe for calculating the optimum number of Maximum Rows for each query. By specifying some average number of, say, 500 rows, I risk fetching too many rows at once and I also risk filling the TopLink cache with objects that are not necessary. I can use methods like query.dontMaintainCache() but in my case this is a workaround, not a solution.
    I would prefer fetching relatively small chunks of data at a time and not limiting the user to a certain number of maximum rows. Furthermore, this way I won't fetch large amounts of data at the very beginning and I won't be forced to turn off the caching for the query.
    Regarding the "ADF Developer's Guide", I read there that "To create a table using a data control, you must bind to a method on the data control that returns a collection. JDeveloper allows you to do this declaratively by dragging and dropping a collection from the Data Control Palette."
    So, it looks like I'll have to implement a collection which, in turn, implements the paging functionality that I need. Is the TopLink object you are referring to some type of collection? I know that I can specify a collection class that TopLink should use for queries through the query.useCollectionClass(...) method. But if TopLink doesn't provide the collection I need, I will have to write that collection myself. I still haven't found the section in the TopLink documentation that says what types of Collections are natively provided by TopLink. I can see other collections like oracle.toplink.indirection.IndirectList, for example. But I have not found a specific discussion on large result sets with the exception of Streams and Cursors and I feel uneasy about maintaining cursors between client requests.
    And I completely agree with you about reading the docs first and doing the programming afterwards. Whenever time permits, I always do that. I have already read the "ADF Developer's Guide" with the exception of chapters 20 and 21. And I switched to the "TopLink Developer's Guide" because it seems that we must focus on the model. Unfortunately, because of the circumstances, I've spent a lot of time reading and not enough time practicing what I read. So, my knowledge is kind of shaky at the moment and perhaps I'm not seeing things that are obvious to you. That's why I tried using this forum -- to ask the experts for advice on the best method for implementing paging. And I'm thankful to everyone who replied to my post so far.

  • Urgent! Please help. JVM Perm size OutOfMemory with wls9.1

    Sorry for posting this here since I could not find a general weblogic JVM troubleshooting newsgroup. Basically we have an issue in production where an OutOfMemoryError occurred in the Perm space after the server had been up for half an hour. We recently upgraded from wls8.1 + Sun HotSpot 1.4.2 to wls9.1 + Sun HotSpot 1.5.0_06. Originally under wls8.1, perm space usage was pretty stable at about 80MB (Perm space was set to 128MB). But now 128MB seems to fill up very quickly. I noticed the increased size of weblogic.jar and rt.jar from both the WebLogic upgrade and the Sun JRE upgrade. So last night the production server's perm space size was increased to 192MB. This morning everything looked fine, and perm space usage was stable at about 120MB. However, at noon, the perm space on 2 servers suddenly started filling up quickly. Half an hour later, the other app server followed.
    Has anybody seen this issue before? Could it be an issue in some weblogic subsystem with JRE 1.5.0_06, since no code changes were made to the application? Is 192MB too aggressive, and could that cause some issue?
    Thanks in advance. Any help will be greatly appreciated.
    Bing

    Hi Bing, I wish I could help you more directly, but I suggest you contact BEA official support. They are the best at sifting through all possibly relevant changes and/or fixes for a given set of symptoms.
    Joe
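    (A hedged diagnostic sketch for the question above: perm space is filled mostly by loaded classes, so turning on class-loading verbosity in the server start script shows what is being loaded when usage jumps. -verbose:class is a standard HotSpot option; the 192MB perm size is the value already in use, and the rest of the start command is elided.)

        JAVA_HOME/bin/java -server -XX:MaxPermSize=192m -verbose:class weblogic.Server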

  • JSF: partial page rendering is causing memory leak leading to outofmemory

    JDeveloper 10.1.3.2.0
    JDK: 1.6.0_06
    Operating System: Windows XP.
    I test my application for memory leaks. For that purpose, I use jconsole to monitor java heap space. I have an edit page that has two dependent list components. One displays all countries and the other displays cities of the selected country.
    I noticed that the Java heap space keeps growing as I change the country in the country list.
    I run garbage collection and memory usage does not go down. If I keep changing the country for 5 minutes, I hit a Java heap space OutOfMemory exception.
    To narrow down the problem, I removed the second (city) component and the problem still exists.
    To narrow it down further, I removed the autoSubmit attribute from the country component, and then memory usage stopped increasing as I change the country.
    Country/city partial page rendering is just an example; I am able to reproduce the same problem on every page where I use partial page rendering. My conclusion is that PPR, or at least the autoSubmit attribute, is causing a memory leak.
    This is really bad. Has anyone out there experienced the same issue? Any help/advice is highly appreciated!!
    Thanks
    <af:panelLabelAndMessage inlineStyle="font-weight:bold;"
                             label="Country:"
                             tip=" "
                             showRequired="true"
                             for="CountryId">
      <af:selectOneChoice id="CountryId"
                          valuePassThru="true"
                          value="#{bindings.CountryId.inputValue}"
                          autoSubmit="true"
                          inlineStyle="width:221px"
                          simple="true">
        <af:forEach var="item"
                    items="#{bindings.CountriesListIterator.allRowsInRange}">
          <af:selectItem value="#{item.countryId}"
                         label="#{item.countryName}"/>
        </af:forEach>
      </af:selectOneChoice>
    </af:panelLabelAndMessage>
    <af:panelLabelAndMessage inlineStyle="font-weight:bold;"
                             label="City:"
                             tip=" "
                             showRequired="true"
                             for="CityId">
      <af:selectOneChoice id="CityId"
                          valuePassThru="true"
                          value="#{bindings.CityId.inputValue}"
                          partialTriggers="CountryId"
                          autoSubmit="true"
                          inlineStyle="width:221px"
                          unselectedLabel="--Select City--"
                          simple="true">
        <f:selectItems value="#{backing_CountryCityBean.citiesSelectItems}"/>
      </af:selectOneChoice>
    </af:panelLabelAndMessage>

    Samsam,
    I haven't seen this problem myself, no.
    To clarify - are you seeing this behaviour when running your app in JDeveloper, or when running in an application server? If in JDeveloper, a couple of suggestions:
    * (may not matter, but...) It's not supported to run JDev 10g with JDK 6
    * have you tried the memory profiler? (http://www.oracle.com/technology/pub/articles/masterj2ee/j2ee_wk11.html)
    Best,
    John
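    As a hedged sketch of one thing worth trying (names here are illustrative, not the poster's code): build the country SelectItem list once in a managed bean and bind it with f:selectItems, the way the city list already is, instead of re-creating one af:selectItem per row with af:forEach on every partial submit.

        import java.util.ArrayList;
        import java.util.List;
        import javax.faces.model.SelectItem;

        // Hypothetical backing bean: the country list is built once and cached,
        // so a partial submit does not rebuild it row by row.
        public class CountryListBean {

            private List countrySelectItems;   // holds SelectItem instances

            public List getCountrySelectItems() {
                if (countrySelectItems == null) {
                    countrySelectItems = new ArrayList();
                    // Placeholder for the real lookup (ADF binding or TopLink query).
                    countrySelectItems.add(new SelectItem(new Long(1), "Canada"));
                    countrySelectItems.add(new SelectItem(new Long(2), "France"));
                }
                return countrySelectItems;
            }
        }

    Whether this actually removes the heap growth would need to be confirmed with the same jconsole test; it only narrows down whether the af:forEach/autoSubmit combination is the trigger.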

  • Ejbc outofmemory error in weblogic 8.1 sp2 when compiling cmp entity beans

    I have a set of entity beans which compiled fine (the ejbc step) under WebLogic 7.0 SP4. But when I try to do the same thing under WebLogic 8.1, I get an OutOfMemory error. Any pointers?

    I am using 8.1 SP4.
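    (A hedged workaround sketch: ejbc runs inside whatever JVM launches it, so the usual first step is to give that JVM a larger heap. The heap sizes and jar names below are placeholders.)

        java -Xms128m -Xmx512m weblogic.ejbc myEJB-source.jar myEJB-deployable.jar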

  • How can I get heap dump for 1.4.2_11 when OutOfMemory Occured

    Hi guys,
    How can I get a heap dump for 1.4.2_11 when an OutOfMemory occurs, since it has no options like -XX:+HeapDumpOnOutOfMemoryError and -XX:+HeapDumpOnCtrlBreak?
    We are running WebLogic 8.1 SP3 applications on this Sun 1.4.2_11 JVM and it's throwing OutOfMemory errors, but we cannot find a heap dump. The application is running as a service on Windows Server 2003. How can I do some more analysis of this issue?
    Thanks.

    The HeapDumpOnOutOfMemoryError option was added to 1.4.2 in update 12. Further work to support all collectors was done in update 15.
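    (Once the JVM is on 1.4.2_12 or later, the option can simply be added to the Windows service's Java arguments; a sketch, with the heap settings left as placeholders. By default the dump is written as java_pid<pid>.hprof in the JVM's working directory.)

        java -Xms512m -Xmx512m -XX:+HeapDumpOnOutOfMemoryError weblogic.Server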

  • SNMP trap on OutOfMemory Error Log record

    I would like to implement SNMP trap on OutOfMemory Error Log record.
    In theory SNMP LogFilter with Severity Level "Error" and Message Substring "OutOfMemory" should do the trick.
    In reality it does not work (doh)(see explanations below), I wonder if someone managed to make it work.
    Log entry has following format:
    ----------- entry begin ----------
    ####<Nov 12, 2003 3:09:23 PM EST> <Error> <HTTP> <ustrwd2021> <local> <ExecuteThread: '14' for queue: 'default'> <> <> <101020> <[WebAppServletContext(747136,logs2,/logs2)] Servlet failed with Exception>
    java.lang.OutOfMemoryError
         <<no stack trace available>>
    ------------ entry end ------------
    Notice that the java.lang.OutOfMemoryError line is NOT part of the log record; it seems that the exception stack trace is not part of the log record. Thus the filter can be applied only to the "[WebAppServletContext(747136,logs2,/logs2)] Servlet failed with Exception" string, which is really useless.
    Here is fragment of trap data (i had to remove Message Substring in order to get Error trap to work)
    1.3.6.1.4.1.140.625.100.50: trapLogMessage: [WebAppServletContext(747136,logs2,/logs2)] Servlet failed with Exception

    Andriy,
    I don't think you can do much here: since OutOfMemory is not part of the
    log record, the SNMP agent cannot filter on it. I would be curious to hear
    if anyone got it to work using SNMP.
    sorry,
    -satya

  • OutofMemory Error in Business connector

    Hi,
    I am facing an OutOfMemory error while installing a package (3MB) on a Business Connector 4.0.1 system on Windows 2000, and I see that none of the services are loaded.
    I have set the maximum RAM to 1024M in server.sh and also installed the latest SR7, but the error still occurs.
    Please help if someone knows what the solution could be.

    Hi,
    you should take care to sweep the SAP transaction log regularly. There is a transaction 'sweepTransaction' which can be scheduled on a regular basis to get rid of old transactions.
    Another possible reason is that your service is consuming vast amounts of main memory; this can happen with very large objects, e.g. extremely big XML files.
    Refer to this thread from an external website, which has answers to your query.
    http://www.wmusers.com/wmusers/messages/117/48840.shtml?1106329484
    Please reward points if it helps
    Thanks
    Vikranth

  • OutOfMemory from UnitOfWork and strange Merge behavior.

    My object model is like this:
    Object A has a list of object B (one-to-many mapping, read-only for this attribute).
    Each object B refers back to A (bi-directional, one-to-one mapping).
    My application has only one DatabaseSession.
    In one thread, I need to go through a loop about 200 to 500 times.
    The code is like this:
        int n = 200;
        for (int i = 0; i < n; i++) {
            UnitOfWork uow = session.acquireUnitOfWork();
            A aClone = (A) uow.registerExistingObject(a);
            B b = ...;       // construction of the new B (elided in the original)
            aClone.addB(b);  // b's A will be set to a inside this method
            long startTime = System.currentTimeMillis();
            uow.commit();
            long endTime = System.currentTimeMillis();
            System.out.println(String.valueOf(endTime - startTime));
        }
    If n is less than 100, TopLink behaves consistently:
    it takes less than 100 milliseconds to add an object B to the database.
    However, if n is 200 or bigger, the time for each UnitOfWork.commit() increases sharply
    (from 1000 to 8000 milliseconds). Finally it threw an OutOfMemory exception.
    I have a DescriptorEventListener registered for my class. This DescriptorEventListener monitors
    the postMerge event. I expected that in each loop, the merge of the clone into the
    original object would be called only for the one new object I inserted. It is indeed called
    for the object b. However, the merge is called for all existing bs
    associated with A. Maybe that is why it takes more and more time to add a new B to A atomically.
    Does anyone have the same experience or any suggestion to solve this problem?
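    For comparison, a hedged sketch of the registration pattern TopLink generally expects when a new B is added to an existing A inside a UnitOfWork (A and B are the poster's classes; the imports use the TopLink 10g interface names and may need adjusting for 9.0.3): both the parent clone and the new child live in the same UnitOfWork object space before they are related.

        import oracle.toplink.sessions.Session;
        import oracle.toplink.sessions.UnitOfWork;

        // Sketch only: register the new child as well as the existing parent,
        // then relate the clones rather than the originals.
        public class AddChildSketch {

            public void addOneB(Session session, A a) {
                UnitOfWork uow = session.acquireUnitOfWork();

                A aClone = (A) uow.registerExistingObject(a);  // clone of the existing parent
                B b = (B) uow.registerObject(new B());         // new child, registered up front

                aClone.addB(b);  // per the original code, this also sets b's back-reference

                uow.commit();    // the UnitOfWork is released after commit
            }
        }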

    When I added the nth b to A, A only had (n-1) bs associated with it (the bs I added in the loop).
    I tried your suggestion. It did not work.
    I got this exception:
    EXCEPTION [TOPLINK-6004] (TopLink - 9.0.3 (Build 423)): oracle.toplink.exceptions.QueryException
    EXCEPTION DESCRIPTION: The object [com.test.A@7c4c51], of class [class com.test.A], with identity hashcode (System.identityHashCode()) [8,146,001],
    is not from this UnitOfWork object space, but the parent session's. The object was never registered in this UnitOfWork,
    but read from the parent session and related to an object registered in the UnitOfWork. Ensure that you are correctly
    registering your objects. If you are still having problems, you can use the UnitOfWork.validateObjectSpace() method to
    help debug where the error occurred. For more information, see the manual or FAQ.
    QUERY: WriteObjectQuery(com.test.Parent1@7c4c51)
         at oracle.toplink.exceptions.QueryException.backupCloneIsOriginalFromParent(Unknown Source)
         at oracle.toplink.publicinterface.UnitOfWork.getBackupClone(Unknown Source)
         at oracle.toplink.publicinterface.UnitOfWork.getBackupCloneForCommit(Unknown Source)
         at oracle.toplink.queryframework.ObjectLevelModifyQuery.prepareForExecution(Unknown Source)
         at oracle.toplink.queryframework.WriteObjectQuery.prepareForExecution(Unknown Source)
         at oracle.toplink.queryframework.DatabaseQuery.execute(Unknown Source)
         at oracle.toplink.publicinterface.Session.internalExecuteQuery(Unknown Source)
         at oracle.toplink.publicinterface.UnitOfWork.internalExecuteQuery(Unknown Source)
         at oracle.toplink.publicinterface.Session.executeQuery(Unknown Source)
         at oracle.toplink.publicinterface.Session.executeQuery(Unknown Source)
         at oracle.toplink.internal.sessions.CommitManager.commitAllObjects(Unknown Source)
         at oracle.toplink.publicinterface.Session.writeAllObjects(Unknown Source)
         at oracle.toplink.publicinterface.UnitOfWork.commitToDatabase(Unknown Source)
         at oracle.toplink.publicinterface.UnitOfWork.commitRootUnitOfWork(Unknown Source)
         at oracle.toplink.publicinterface.UnitOfWork.commit(Unknown Source)
         at TestAdd.main(TestAdd.java:58)
    That means that I have to register the object A. This is a really bad feature of UnitOfWork. I just want to add a new object B. Why do I need the burden of A?
    This brings up another question. If I do not want the burden of A, I need to break the association between A and B and maintain the relationship manually. However, I really wish my persistence layer could handle the association transparently for me.
    I also notice that you suggest adding all 200 bs in one UnitOfWork. In my situation, I need 200 UnitOfWorks: I need to add each b in a separate UnitOfWork. When I call UnitOfWork.commit(), the UnitOfWork is released cleanly. Is that right?
    Thanks

  • OutOfMemory error in Weblogic jvm even when enough heap is still available

    Hi Everyone,
    We get a strange OutOfMemory error that has happened only once on our WebLogic server in the production environment. The server immediately recovers and there is no issue as far as the application functionality is concerned.
    This is the exact error as seen in weblogic console.log file :
    caused by: java.lang.OutOfMemoryError: Java heap space
    I enabled verbose GC logging too, with the JVM parameter "-XX:+PrintHeapAtGC". I am copying the gc.log excerpt from when we got the OutOfMemory issue in console.log.
    gc.log when error was seen in console.log:
    172047.050: [GC {Heap before gc invocations=28392:
    par new generation   total 49088K, used 49024K [0xb8000000, 0xbb000000, 0xbb000000)
      eden space 49024K, 100% used [0xb8000000, 0xbafe0000, 0xbafe0000)
      from space 64K,   0% used [0xbaff0000, 0xbaff0000, 0xbb000000)
      to   space 64K,   0% used [0xbafe0000, 0xbafe0000, 0xbaff0000)
    concurrent mark-sweep generation total 737280K, used 364191K [0xbb000000, 0xe8000000, 0xe8000000)
    concurrent-mark-sweep perm gen total 262144K, used 166556K [0xe8000000, 0xf8000000, 0xf8000000)
    172047.050: [ParNew: 49024K->0K(49088K), 0.1014386 secs] 413215K->382087K(786368K)Heap after gc invocations=28393:
    par new generation total 49088K, used 0K [0xb8000000, 0xbb000000, 0xbb000000)
    eden space 49024K, 0% used [0xb8000000, 0xb8000000, 0xbafe0000)
    from space 64K, 0% used [0xbafe0000, 0xbafe0000, 0xbaff0000)
    to space 64K, 0% used [0xbaff0000, 0xbaff0000, 0xbb000000)
    concurrent mark-sweep generation total 737280K, used 382087K [0xbb000000, 0xe8000000, 0xe8000000)
    concurrent-mark-sweep perm gen total 262144K, used 166556K [0xe8000000, 0xf8000000, 0xf8000000)
    , 0.1021810 secs]
    172055.735: [GC {Heap before gc invocations=28393:
    par new generation   total 49088K, used 49008K [0xb8000000, 0xbb000000, 0xbb000000)
      eden space 49024K,  99% used [0xb8000000, 0xbafdc128, 0xbafe0000)
      from space 64K,   0% used [0xbafe0000, 0xbafe0000, 0xbaff0000)
      to   space 64K,   0% used [0xbaff0000, 0xbaff0000, 0xbb000000)
    concurrent mark-sweep generation total 737280K, used 382087K [0xbb000000, 0xe8000000, 0xe8000000)
    concurrent-mark-sweep perm gen total 262144K, used 166616K [0xe8000000, 0xf8000000, 0xf8000000)
    172055.735: [ParNew: 49008K->0K(49088K), 0.0536844 secs] 431095K->387656K(786368K)Heap after gc invocations=28394:
    par new generation total 49088K, used 0K [0xb8000000, 0xbb000000, 0xbb000000)
    eden space 49024K, 0% used [0xb8000000, 0xb8000000, 0xbafe0000)
    from space 64K, 0% used [0xbaff0000, 0xbaff0000, 0xbb000000)
    to space 64K, 0% used [0xbafe0000, 0xbafe0000, 0xbaff0000)
    concurrent mark-sweep generation total 737280K, used 387656K [0xbb000000, 0xe8000000, 0xe8000000)
    concurrent-mark-sweep perm gen total 262144K, used 166616K [0xe8000000, 0xf8000000, 0xf8000000)
    , 0.0544288 secs]
    172059.106: [Full GC {Heap before gc invocations=28394:
    par new generation   total 49088K, used 22668K [0xb8000000, 0xbb000000, 0xbb000000)
      eden space 49024K,  46% used [0xb8000000, 0xb9623158, 0xbafe0000)
      from space 64K,   0% used [0xbaff0000, 0xbaff0000, 0xbb000000)
      to   space 64K,   0% used [0xbafe0000, 0xbafe0000, 0xbaff0000)
    concurrent mark-sweep generation total 737280K, used 387656K [0xbb000000, 0xe8000000, 0xe8000000)
    concurrent-mark-sweep perm gen total 262144K, used 166616K [0xe8000000, 0xf8000000, 0xf8000000)
    172059.106: [CMS: 387656K->335512K(737280K), 6.1797673 secs] 410325K->335512K(786368K), [CMS Perm : 166616K->166119K(262144K)]Heap after gc invocations=28395:
    par new generation total 49088K, used 0K [0xb8000000, 0xbb000000, 0xbb000000)
    eden space 49024K, 0% used [0xb8000000, 0xb8000000, 0xbafe0000)
    from space 64K, 0% used [0xbaff0000, 0xbaff0000, 0xbb000000)
    to space 64K, 0% used [0xbafe0000, 0xbafe0000, 0xbaff0000)
    concurrent mark-sweep generation total 737280K, used 335512K [0xbb000000, 0xe8000000, 0xe8000000)
    concurrent-mark-sweep perm gen total 262144K, used 166119K [0xe8000000, 0xf8000000, 0xf8000000)
    , 6.1804623 secs]
    172073.913: [GC {Heap before gc invocations=28395:
    par new generation   total 49088K, used 49024K [0xb8000000, 0xbb000000, 0xbb000000)
      eden space 49024K, 100% used [0xb8000000, 0xbafe0000, 0xbafe0000)
      from space 64K,   0% used [0xbaff0000, 0xbaff0000, 0xbb000000)
      to   space 64K,   0% used [0xbafe0000, 0xbafe0000, 0xbaff0000)
    concurrent mark-sweep generation total 737280K, used 335512K [0xbb000000, 0xe8000000, 0xe8000000)
    concurrent-mark-sweep perm gen total 262144K, used 166146K [0xe8000000, 0xf8000000, 0xf8000000)
    172073.913: [ParNew: 49024K->0K(49088K), 0.0616737 secs] 384536K->341716K(786368K)Heap after gc invocations=28396:
    par new generation total 49088K, used 0K [0xb8000000, 0xbb000000, 0xbb000000)
    eden space 49024K, 0% used [0xb8000000, 0xb8000000, 0xbafe0000)
    from space 64K, 0% used [0xbafe0000, 0xbafe0000, 0xbaff0000)
    to space 64K, 0% used [0xbaff0000, 0xbaff0000, 0xbb000000)
    concurrent mark-sweep generation total 737280K, used 341716K [0xbb000000, 0xe8000000, 0xe8000000)
    concurrent-mark-sweep perm gen total 262144K, used 166146K [0xe8000000, 0xf8000000, 0xf8000000)
    , 0.0623851 secs]
    I read on some forums that heap fragmentation could be the reason for the OutOfMemory error. Also, if GC is consuming more than 98% of the time and releasing less than 2% of the heap, this error can be thrown.
    Could either of the above be the cause here? We don't have the flexibility of increasing Xmx any more.
    We are using the Concurrent Mark Sweep collector, as can be seen from the script below.
    Here are the jvm parameters am using in Server start script:
    JAVA_HOME/bin/java -server -Xms768m -Xmx768m -XX:MaxPermSize=256m -XX:MaxNewSize=64m -Xbootclasspath/a:$BOOTCLASSPATH -classpath $CLASSPATH -Dbea.home=$BEA_HOME -Dweblogic.Domain=$DOMAIN_NAME -Dweblogic.Name=$SERVER_NAME -Djava.security.auth.login.config=../properties/login.config -Dcom.sun.management.jmxremote.port=7003 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -XX:+UsePerfData -verbose:gc -XX:+PrintGCDetails -XX:+PrintHeapAtGC -XX:+PrintGCTimeStamps -Xloggc:../log/gc.log -Djava.security.policy==$WL_HOME/lib/weblogic.policy -Dweblogic.ProductionModeEnabled=true -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled -XX:+CMSPermGenSweepingEnabled -Dweblogic.management.discover=false -XX:-TraceClassUnloading -Dweblogic.management.server=scc-prd-admin.XXX.com:7001 weblogic.Server
    We are using jdk1.5.0_06 version of java with Weblogic.
    Could someone please help resolve this issue? Any thoughts?

    Are you using RMI? Have you adjusted sun.rmi.dgc.server.gcInterval and sun.rmi.dgc.client.gcInterval from their default values for calling System.gc()? I gather (from the format of your logs) that you are running an older JVM, where the default RMI GC intervals were 60000 milliseconds (1 minute). In later JVMs, the default RMI GC interval was increased to 1 hour. See the Java RMI Release Notes for JDK 6 (http://java.sun.com/javase/6/docs/technotes/guides/rmi/relnotes.html) for some details on this.
    Otherwise you might be using some third party code that calls System.gc() periodically. A later JVM will show "(System)" in the full GC output for collections that were caused by a call to System.gc(). You can effectively disable that behavior with the -XX:+DisableExplicitGC command line option, though that might have bad effects on your application if the third party code needed those full collections. But it would be diagnostic.
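    (A sketch of the two settings mentioned above, added to the existing server start command; 3600000 ms is the one-hour default used by later JVMs.)

        -Dsun.rmi.dgc.client.gcInterval=3600000
        -Dsun.rmi.dgc.server.gcInterval=3600000

    -XX:+DisableExplicitGC could then be added purely as a diagnostic, with the caveat above about code that genuinely needs those explicit collections.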
