Running out of memory with Tomcat !!!!!

Hello gurus and good folk:
How can I ensure that a JSP page that builds a ResultSet doesn't run out of memory? I have set the JVM's -Xmx flag to 1024 MB and it still runs out of memory! The size of the data being queried is only 30 MB; one would think the JDBC driver would be optimized for large ResultSets. Any pointers would be very helpful.
Many thanks
Murthy

Hi
As far as I know, 30 MB of data is pretty big for an online app. If you have too many rows in your ResultSet, you could (or should) consider implementing paging and fetching x records at a time. Or you could simply set a maximum limit on the number of records to be fetched (typically useful for 'search and list' type apps) using Statement.setMaxRows(). This should ensure that out-of-memory errors do not happen.
If the data per row is large, consider displaying only a summary in the result and fetching the 'big' data column only when required (e.g. fetch the column value for a particular row only when that row is clicked).
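A rough, untested sketch of both ideas in plain JDBC (here conn is assumed to be an already-open java.sql.Connection, out is the JSP writer, and the table, columns and limits are made up for illustration):
    Statement stmt = conn.createStatement();
    stmt.setMaxRows(500);      // hard cap on how many rows the driver will ever return
    stmt.setFetchSize(100);    // hint: fetch rows from the server in small batches
    ResultSet rs = stmt.executeQuery("SELECT id, name FROM orders ORDER BY id");
    while (rs.next()) {
        // write each row to the response as it arrives instead of buffering everything
        out.println(rs.getString("name") + "<br/>");
    }
    rs.close();
    stmt.close();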
Hope this helps !

Similar Messages

  • What causes my iPhone 5s to keep running out of memory with any new download? I recently updated to iOS 8.2

    What causes my iPhone 5s to keep running out of memory without any new download? I recently updated to iOS 8.2

    I meant to say without downloading or receiving anything

  • Generating large amounts of XML without running out of memory

    Hi there,
    I need some advice from the experienced XDB users around here. I'm trying to map large amounts of data inside the DB (Oracle 11.2.0.1.0), and by large I mean files up to several GB. I compared the "low level" mapping via PL/SQL in combination with ExtractValue/XMLQuery with the elegant XML View Mapping, and the best performance came from the View Mapping using the XMLTABLE XQuery PATH constructs. So now I have a view that lies on several BINARY XMLTYPE columns (where the XML files are stored) for the mapping, and another view which lies above this mapping view and constructs the nested XML result document via XMLELEMENT(), XMLAGG() etc. Example code for better understanding:
    CREATE OR REPLACE VIEW MAPPING AS
    SELECT type, (...) FROM XMLTYPE_BINARY, XMLTABLE ('/ROOT/ITEM' passing xml
      COLUMNS
        type VARCHAR2(50) PATH 'for $x in .
          let $one := substring($x/b012,1,1)
          let $two := substring($x/b012,1,2)
          return
            if ($one eq "A")
              then "A"
            else if ($one eq "B" and not($two eq "BJ"))
              then "AA"
            else if (...)
    CREATE OR REPLACE VIEW RESULT AS
    select XMLELEMENT("RESULTDOC",
             (SELECT XMLAGG(
               XMLELEMENT("ITEM",
                 XMLFOREST(
                   type "ITEMTYPE",
    ) as RESULTDOC FROM MAPPING;
    Now all I want to do is materialize this document by inserting it into an XMLTYPE table/column.
    insert into bla select * from RESULT;
    Sounds pretty easy, but I can't get it to work; the DB seems to load a full DOM representation into RAM every time I perform a select, an insert into, or use the xmlgen tool. This representation takes more than 1 GB for a 200 MB XML file, and eventually I'm running out of memory with an
    ORA-19202: Error occurred in XML PROCESSING
    ORA-04030: out of process memory
    My question is: how can I get the result document into the table without memory exhaustion? I thought the DB would be smart enough to use some kind of serialization/data stream to perform this task without loading everything into RAM.
    Best regards

    The file import is performed via JDBC. CLOB and binary storage work up to several GB, but the OR storage gives me ORA-22813 when loading files with more than 100 MB. I use a plain prepared statement:
        File f = new File( path );
        PreparedStatement pstmt = CON.prepareStatement( "insert into " + table + " values ('" + id + "', XMLTYPE(?) )" );
        pstmt.setClob( 1, new FileReader(f), (int)f.length() );
        pstmt.executeUpdate();
        pstmt.close();
    The DB version is 11.2.0.1.0, as mentioned in the initial post.
    But this isn't my main problem; the one above is. I prefer using binary XMLType anyway, as it's much easier to index. Does anyone have an idea how to get the large document from the view into an XMLType table?
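    For what it's worth, here is a minimal variant of the insert above that binds both values as parameters and streams the file contents, rather than concatenating the id into the SQL string. It assumes the same CON, path, id and table variables and the same two-column table as the snippet above, and it only tidies the insert itself; it does not address the ORA-04030 raised while materializing the view.
        // Sketch only: the same insert as above, fully parameterized and streamed.
        File f = new File( path );
        PreparedStatement pstmt = CON.prepareStatement(
                "insert into " + table + " values (?, XMLTYPE(?))" );
        pstmt.setString( 1, id );
        pstmt.setCharacterStream( 2, new FileReader(f), f.length() );  // JDBC 4.0 streaming setter
        pstmt.executeUpdate();
        pstmt.close();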

  • System running out of memory

    I have deployed Windows Embedded Standard 7 on an x64 machine. My answer file includes the File Based Write Filter (FBWF) and my system has 8GB of RAM installed. I have excluded some working folders for a specific piece of software, and other than that no big changes happen in the system. I have set the overlay size of FBWF to 1GB.
    Now my problem is that after the system works for some time, the amount of free memory starts to decline, and after around 7-8 hours the available memory reaches a critical level, the system becomes unusable, and I have to reset it manually. I have increased the size of the overlay to 2GB, but this happens again.
    Is it possible that this problem is due to FBWF? If I set the overlay size to 2GB, the system should not use more than that 2GB, so I should never run out of memory with 8GB of installed RAM. Am I right?

    Would you please take a look at my situation and give me a possible diagnosis:
    1- I have "File Based Write Filter" on Windows Embedded Standard 7 x64 SP1.
    2- The installed RAM is 8GB and size of overlay of FBWF is set to 2GB.
    3- When the system is giving the critical memory message the conditions are as follows:
    a) The consumed memory in task manager is somewhere around 4 to 4.5 GB out of 8GB
    b) A process schedule.exe (from our software) is running more than a hundred times and is consuming memory, but its .exe file is located inside an unprotected folder.
    c) executing fbwfmgr.exe /overlaydetail is reporting that only 135MB of overlay volume is full!
    Memory consumed by directory structure: 35.6 MB
    Memory consumed by file data: 135 MB
    d) The CPU usage is normal
    I don't know what exactly is full. Memory has free space, the FBWF overlay volume has free space, so which memory is full?
    P.S.: I checked my answer file and the paging file is disabled, as required.

  • Problems with PNGs... Overall compression + Running out of memory!

    We're having a number of issues with PNGs while working on our first iPhone project, and any assistance would be greatly appreciated!
    Our game is using a large number of PNG assets, some of which are full-frame (the full-frame files tend to be mostly transparent/use alphas, though apparently that doesn't help memory much).
    We're running into two huge problems -
    1) We're running out of memory on device when calling to these full frame sequences, which tend to be anywhere from 10-40 frames each at anywhere from 50 to 250kb in size.
    2) Our overall package size is huge, sitting at around 60mb. I've already compressed the PNGs through Photoshop to the best of my ability, and I'm not having much luck with downloadable compressors like pngcrush. Is there a way to compress all the PNGs through Xcode/C++/Objective-C? The programmers are informing me right now that the only compression possible is whatever I apply directly to my PNGs on my end - nothing through code.
    I'm stumped as to how I'm seeing seemingly-complex apps with plenty of content at 1-5mbs, and running smoothly with full-frame animations. I'm imagining the problem is that we're not using a proprietary engine to properly manage things, but I'm wondering if there is a simple solution.
    Thanks in advance, guys!
    P.S. - Already done a bunch of research on my own, and not having much luck. Just wondering if there is something obvious that both the programmers and I are missing!

    * Make the image frame smaller? A fully transparent border around an image is pure overhead -- there is still data in the actual pixels (even though you cannot see them), plus the transparent border itself. All it needs is an x/y coordinate and a tiny code adjustment.
    * Use fewer transparency bits? Perhaps you could do with 1-bit transparency on some images. It'll use about 1/8 of the memory for the transparency data.
    * Use fewer colors? 24-bit color might compress down to 16 bits without visual artifacts (esp. when you don't have lots of gradients). Perhaps even 8-bit palettized.
    * Make images smaller? You might be able to store some images at a smaller size and get away with enlarging them at display time.
    Just a few things you could check right away, without major rewrites.
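    As a purely illustrative aside on why the on-disk PNG size (50-250kb per frame) is not the number that matters: once decoded, an image occupies roughly width x height x bytes-per-pixel in RAM, whatever its file size. The dimensions, bit depth and frame count below are assumptions for the sake of the arithmetic, not figures from the post:
        int width = 320, height = 480;   // assumed full-frame size
        int bytesPerPixel = 4;           // 32-bit RGBA
        int frames = 40;                 // top of the 10-40 frame range mentioned above
        long perFrame = (long) width * height * bytesPerPixel;  // 614,400 bytes (~600 KB) per decoded frame
        long perSequence = perFrame * frames;                    // ~24 MB for a single sequence
        System.out.println(perFrame + " bytes/frame, " + perSequence + " bytes/sequence");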

  • I am running out of memory on my hard drive and need to delete files. How can I see all the files/applications on my hard drive so I can see what is taking up a lot of room?

    I am running out of memory on my hard drive and need to delete files. How can I see all the files/applications on my hard drive so I can see what is taking up a lot of room?
    Thanks!
    David

    Either of these should help.
    http://grandperspectiv.sourceforge.net/
    http://www.whatsizemac.com/
    Or search 'disk size' in the App Store.
    Be careful with what you delete, and have a backup BEFORE you start. You may also want to reboot to try to free any memory that may have been written to disk.

  • Lightroom 5 constantly runs out of memory

    Lightroom 5 on Windows 7 32 Bit with 8 gigabytes of memory (more than the 32-bit system can use) constantly runs out of memory when doing some more complex edits on a RAW file, especially when exporting to 16-bit TIFF. The RAW files were created by cameras with 10 to 16 megapixel sensors with bit depths between 12 and 14.
    After exporting one or two images to 16-bit uncompressed TIFF, an error message "Not enough memory" is displayed and only a Lightroom restart solves it - for the next one or two exports. If an image has many brush stroke edits, every additional stroke takes more and more time to show its result, until the image disappears, followed by the same "Not enough memory" error message.
    A tab character in the XMP sidecar file is *not* the reason (I made sure of that), as mentioned in another post. It seems that Lightroom in general does not allocate enough memory and frees what it has allocated too little and too late.
    Please fix that bug; it's not productive to constantly quit and restart Lightroom when editing/exporting a few RAW files. Versions prior to Lightroom 4 did not have that bug.
    P.S. Posting here, because it was not possible to post it at http://feedback.photoshop.com/photoshop_family/topics/new It's very bad design, to let a user take much time to write and then say: "Log in", but a log in with the Adobe ID and password does not work (creating accounts on Facebook etc. is not an acceptable option, Adobe ID should be enough). Also a bugtracker such as Bugzilla would be a much better tool for improving a software and finding relevant issues to avoid duplicate postings.

    First of all: I personally agree with your comments regarding the feedback webpage. But that is out of our hands since this is a user-to-user forum, and there is nothing we users can do about it.
    Regarding your RAM: You are running Win7 32-bit, so 4 GB of your 8 GB of RAM sit idle since the system cannot use it. And, frankly, 4 GB is very scant for running Lr, considering that the system uses 1 GB of that. So there's only 3 GB for Lr - and that only if you are not running any other programs at the same time.
    Since you have a 8 GB system already, why don't you go for Win7 64-bit. Then you can also install Lr 64-bit and that - together with 8 GB of RAM - will bring a great boost in Lr performance.
    Adobe recommends to run Lr in the 64-bit version. For more on their suggestion on improving Lr performance see here:
    http://helpx.adobe.com/lightroom/kb/performance-hints.html?sdid=KBQWU
    for more: http://forums.adobe.com/thread/1110408?tstart=0

  • I have a file where I am running out of memory can anyone take a look at this file and see?

    I am trying to make this file 4'x8'.
    Please let me know if anyone can help me get this file to that size.
    I have a quad core processor with 6 gig of ram and have gotten the file to 50"x20", but I run out of memory shortly thereafter.  Any help would be appreciated.
    Thanks,

    Where to begin? You should look into using a pattern swatch instead of those repeating circles. Also, I see that each circle in your pattern is actually a stack of four circles, but I see no reason why. Perhaps Illustrator is choking on the huge number of objects required to make the pattern as you have constructed it.
    Here is a four foot by eight foot Illustrator file using a swatch pattern. Note that, despite the larger dimensions, the file is less than one sixteenth the size.

  • Out of memory with no swap causes disk activity

    Can someone explain what exactly is being read/written from/to disk in this situation?
    I have 2 GB of RAM and no swap partitions. Occasionally I'll forget how inefficient gwenview is at displaying very large images and accidentally double-click one. The entire system freezes; even alt+sysrq keystrokes are ineffective (and yes I do have them enabled).
    For about 5 minutes, the system is locked up and the hard drive light is flickering. That scares me a bit, because with no swap, what could it possibly be doing for 5 straight minutes? I used to think it was synching before doing the OOM-killing, but there's no way a sync could take that long. Judging from the sound of the hard drive, it's hda (the drive / and all the other system partitions are on).
    A few times after recovering from this I've run extensive data verification and never found any evidence of corruption, but I'd like to know for sure that the kernel isn't randomly deciding to use some filesystem as swap space.
    In the mean time, I'm playing with disabling overcommit -- setting vm.overcommit_memory = 2 in /etc/sysctl.conf. That enforces a hard memory commit limit of swap size + overcommit_ratio * ram size (so I've read) -- and I've also read that the default overcommit_ratio is only 50%. What the bloody hell? It's almost like someone thinks swap is more important than RAM -- hell-llo, I have 2 GB of RAM so that I can get *away* from swap!
    Anyway, I've set the ratio to 97% and so far things seem happy -- if I deliberately run out of memory, the process that did it always gets killed instantly and the system doesn't freeze up on OOM anymore.
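    As a back-of-the-envelope sketch of the commit-limit formula described above (the formula is the documented one for vm.overcommit_memory = 2; the arithmetic just plugs in the numbers from this post):
        long swap = 0L;                              // no swap partitions
        long ram  = 2L * 1024 * 1024 * 1024;         // 2 GB of RAM
        long limitDefault = swap + ram * 50 / 100;   // default overcommit_ratio = 50 -> ~1 GB commit limit
        long limitTuned   = swap + ram * 97 / 100;   // overcommit_ratio = 97         -> ~1.94 GB commit limit
        System.out.println(limitDefault + " vs " + limitTuned + " bytes");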
    Another thing -- in all my out of memory situations so far, VMWare has been running. I suppose it's possible that VMWare is the one doing the swappage; I'll have to investigate that further.
    ~Felix.

    I think I've finally figured this out. It's a kernel bug -- I'm guessing that under normal circumstances, the "cached" column in the free command "doesn't count" towards how much memory the system thinks it's using. After all, it's just cached copies of stuff that should be elsewhere, and if you run out of memory, you can safely dump that, right? Unfortunately, /dev/shm is counted under cached rather than used memory (as I discovered in an earlier post).
    So if I've got 500 MB of stuff in /dev/shm * (which is where I mount my /tmp), there's now 500MB of stuff in the "cached" column that really does count -- system reaches all RAM full, decides it needs to dump cache, and suddenly finds that the 500MB it thought it could use isn't usable. For some reason it takes about 5 minutes of hard drive thrashing (probably because it's already chucked all of the system libraries, etc. out of cached and needs to re-read them from disk every time) before something finally figures out that it really is out of memory and that that 500MB isn't letting go and invokes OOM-killer.
    *: VMWare does this; it creates a 512MB file (the amount of RAM in my virtual machine) then hides it by keeping the file open and deleting it, so the inode's still there, but you can't see it and it makes the df command really perplexing... but that's another story.
    I haven't had a chance to try this with a newer kernel (maybe they've fixed it now?); I'm still running 2.6.23-ARCH here. (pacman -Syu upgrades are a major production for me because I have lots of RAID arrays and things, and an nvidia graphics card, and I use gnucash which sometimes needs manual recompiling, and so on...)

  • My mac's run out of memory and I can't find the culprit!

    Hi, I'm in serious need of some help! I'm sure this is simple, but I'm about to break down over it – I use my mac for everything. I've got a 200gb 2009 macbook (running iOS7), and it's told me it's run out of memory. The storage tab in 'about this mac' tells me 108GB is being used for video – but I can't find them! My iPhoto has about 17GB of movies, my iTunes has around 20GB, and I've got maybe another 10GB in files within finder – but that's still only half the videos my mac is saying it has? How do I find the rest? I've got 80GB being used by 'other' as well – is that just pages and numbers documents, along with the iOS? Is there a way of finding exactly what all my memory's being allocated to?
    I've got the entire mac backed up on an external hard drive, but I'm terrified of deleting anything from the mac in case that fails. I plan on getting a second external HD, but even then I think I'll be too worried (I've heard about so many hard drives continuously failing). How does anyone manage all their stuff?!?
    Thank you in advance, for any help you can offer.

    Just a slight correction to start: you're not running iOS 7. You're running a version of OS X; iOS is for mobile devices like iPhones and iPads. To find out which version of OS X you're running, click the Apple menu at the top left and select About This Mac.
    This http://pondini.org/OSX/LionStorage.html should help you understand "Other".

  • Oracle 9i running out of memory

    Folks !
    I have a simple 3-table schema with a few thousand entries each. After dedicating gigabytes of hard disk space and 50% of my 1+ GB of memory, I do a few simple Oracle Text "contains" searches (see below) on these tables, and Oracle seems to grow by some 25 MB after each query (which typically returns less than a dozen rows), till it eventually runs out of memory and I have to reboot the system (Sun Solaris).
    This is on Solaris 9/SPARC with Oracle 9.2. My query is a simple right outer join. I think the memory growth is related to Oracle Text index/caching, since memory utilization seems pretty stable with simple like '%xx%' queries.
    "top" shows a dozen or so processes, each with about 400MB RSS/SIZE. It has been a while since I did Oracle DBA work, but I am not doing anything special here. The database has all the default settings that you get when you create an Oracle database.
    I have played with SGA sizes, and no matter how large or small the SGA/PGA, Oracle runs out of memory and crashes the system. Pretty bad for an enterprise database to die like that.
    Any clue on how to arrest the fatal growth of memory for Oracle 9i r2?
    thanks a lot.
    -Sanjay
    PS: The query is:
    SELECT substr(sdn_name,1,32) as name, substr(alt_name,1,32) as alt_name, sdn.ent_num, alt_num, score(1), score(2)
    FROM sdn, alt
    where sdn.ent_num = alt.ent_num(+)
    and (contains(sdn_name,'$BIN, $LADEN',1) > 0 or
    contains(alt_name,'$BIN, $LADEN',2) > 0)
    order by ent_num, score(1), score(2) desc;
    There are following two indexes on the two tables:
    create index sdn_name on sdn(sdn_name) indextype is ctxsys.context;
    create index alt_name on alt(alt_name) indextype is ctxsys.context;

    I am already using MTS.
    Attached is the init.ora file below.
    Maybe I should repost this with the subject "memory leak in Oracle" to catch developer attention. I posted this a few weeks back in the Oracle Text group and got no response there either.
    Thanks for your help.
    -Sanjay
    # Copyright (c) 1991, 2001, 2002 by Oracle Corporation
    # Cache and I/O
    db_block_size=8192
    db_cache_size=33554432
    db_file_multiblock_read_count=16
    # Cursors and Library Cache
    open_cursors=300
    # Database Identification
    db_domain=""
    db_name=ofac
    # Diagnostics and Statistics
    background_dump_dest=/space/oracle/admin/ofac/bdump
    core_dump_dest=/space/oracle/admin/ofac/cdump
    timed_statistics=TRUE
    user_dump_dest=/space/oracle/admin/ofac/udump
    # File Configuration
    control_files=("/space/oracle/oradata/ofac/control01.ctl", "/space/oracle/oradata/ofac/control02.ctl", "/space/oracle/oradata/ofac/control03.ctl")
    # Instance Identification
    instance_name=ofac
    # Job Queues
    job_queue_processes=10
    # MTS
    dispatchers="(PROTOCOL=TCP) (SERVICE=ofacXDB)"
    # Miscellaneous
    aq_tm_processes=1
    compatible=9.2.0.0.0
    # Optimizer
    hash_join_enabled=TRUE
    query_rewrite_enabled=FALSE
    star_transformation_enabled=FALSE
    # Pools
    java_pool_size=117440512
    large_pool_size=16777216
    shared_pool_size=117440512
    # Processes and Sessions
    processes=150
    # Redo Log and Recovery
    fast_start_mttr_target=300
    # Security and Auditing
    remote_login_passwordfile=EXCLUSIVE
    # Sort, Hash Joins, Bitmap Indexes
    pga_aggregate_target=25165824
    sort_area_size=524288
    # System Managed Undo and Rollback Segments
    undo_management=AUTO
    undo_retention=10800
    undo_tablespace=UNDOTBS1

  • Running out of memory building csv file

    I'm attempting to write a script that does a query on my database. It will generally be working with about 10,000 - 15,000 records. It then checks to see if a certain file exists. If it does, it will add the record to an array. When it's done looping over all the records, it takes the array that was created and outputs a csv file (usually with about 5,000 - 10,000 lines). But... before that ever happens, it runs out of memory. What can I do to make it not run out of memory?

    quote:
    Originally posted by: nozavroni
    I'm attempting to write a script that does a query on my database. It will generally be working with about 10,000 - 15,000 records. It then checks to see if a certain file exists.
    Sounds pretty inefficient to me. Is there no way you can modify the query so that it only selects the records for which the file exists?
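    One way to keep memory flat, whether or not the query can be filtered, is to stream each qualifying record straight to the CSV file as it is read, instead of collecting everything into an array first. A rough Java/JDBC sketch of that idea (the original script is presumably not Java, and the connection details, query, column names and paths below are invented purely for illustration):
        // Assumes: import java.io.*; import java.sql.*; and jdbcUrl/user/pass/baseDir defined elsewhere.
        try (Connection conn = DriverManager.getConnection(jdbcUrl, user, pass);
             Statement stmt = conn.createStatement();
             PrintWriter csv = new PrintWriter(new BufferedWriter(new FileWriter("report.csv")))) {
            stmt.setFetchSize(500);  // hint the driver to fetch rows in batches rather than all at once
            try (ResultSet rs = stmt.executeQuery("SELECT id, filename, title FROM docs")) {
                while (rs.next()) {
                    File f = new File(baseDir, rs.getString("filename"));  // the per-record file check from the post
                    if (f.exists()) {
                        csv.println(rs.getInt("id") + "," + rs.getString("title"));
                    }
                }
            }
        }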

  • Running out of memory after latest update

    First of all:
    Why doesn't anybody answer my questions from Dec. 26th? They are not that hard, I believe...
    After I installed Update 5, my system runs out of memory after a certain time.
    I'm working on a 1.7 GHz Centrino with 1 GB of memory...
    Is it because of the update? Does the update change so many things?
    Hope for an answer this time...
    Mark.

    Hi Mark
    Apologies for not responding to your earlier post on Debugging Rowset. I am still working on that. I am sure I can give you something today if there is any straightforward solution.
    OK, coming to the OutOfMemoryExceptions: yes, this has been observed because of the preview feature added in Update 5. Look at http://swforum.sun.com/jive/thread.jspa?forumID=123&threadID=50422 for more details.
    Thanks
    Srinivas

  • Running Out of Memory Since Yosemite

    Let me start by saying I was originally part of the Yosemite Beta and was running into the same issue.
    After running my system for >20-25 minutes, a menu pops up and says I've run out of memory and it has paused my programs. Looking at my Activity Monitor, it says my Mail is using 64+ GB of memory. When I restart my system, Mail ranges from 64 MB - 120 MB, then it somehow creeps up to 64 GB and crashes.
    When the final release of Yosemite was released I did a complete clean install, thinking that maybe that was the issue.  Tonight I received the same error.  After searching online I didn't really find anything of help.  I'm hoping someone in this community can help.
    Thanks.
    My System:
    rMBP- 2.6 GHz i7 - 16 GB ram - 1TB SSD

    I'm having the exact same issue, on both a 2013 MacBook Air and a 2009 iMac. I've used Activity Monitor, and can observe the Mail app increasing in memory usage from 200MB under normal conditions to a sudden rise to 60+GB. Same Activity Monitor screens as in this post. If I force quit the Mail app, everything returns to normal, but this happens at least once every hour. So my assumption is that 1) yes, it is Mail.app, 2) it's happening to quite a few people, 3) it's happening on a range of recent as well as older machines, 4) it was introduced with Yosemite, 5) it's not a "plugin" as someone suggested in other posts, 6) no help from clearing caches, clean installs, or deleting preferences or container folders in the library.
    I would like to think Apple will address this issue, but find it alarming that someone in this thread has raised 12 tickets about it in beta without receiving a response. For those of us affected, we might be in for a long wait.
    Apple, please help!

  • Running out of memory despite having set je.maxMemory to a moderate value

    I have set je.maxMemory to 20MB (je.maxMemory=20000000) and allowed a max heap size of 512MB (-Xms256M -Xmx512M).
    After two hours of running my web service, I'm running out of memory. After having profiled my service (using Yourkit Java Profiler 1.10.6), I can see the following:
    Name                                               Objects ShallowSize  RetainedSize
    byte[]                                               16711   124124880     124124880
    com.sleepycat.je.tree.BIN                              181       24616     116254200
    com.sleepycat.je.tree.Node[]                           187       98736     115743184
    com.sleepycat.je.tree.LN                              7092      226944     115253600
    java.util.concurrent.ConcurrentHashMap$HashEntry       554       17728      78328944
    java.util.concurrent.ConcurrentHashMap$HashEntry[]    1053       34728      77489632
    java.util.concurrent.ConcurrentHashMap                 117        5616      71812072
    java.util.concurrent.ConcurrentHashMap$Segment[]       118       10304      71807912
    java.util.concurrent.ConcurrentHashMap$Segment        1052       42080      71798808
    com.sleepycat.je.tree.IN                                 6         672      45592352
    java.lang.String                                    135888     4348416      14152664
    The memory profiler further claims that com.sleepycat.je.tree.BIN is responsible for 71% of all heap memory.
    In any case, com.sleepycat.je.tree.BIN claims ~116MB of heap memory, which by any measure exceeds the limit of 20MB.
    How can this be?
    How is JE ensuring that the limit is not exceeded? Is there a timer (thread) running which checks the memory used once in a while and then cleans up, or is memory usage checked when creating a com.sleepycat.je.tree.BIN object?
    My environment:
    BDB JE 4.0.92 - used as cache loader within Jboss Cache (3.2.7.GA), running on a JBOSS Application Server, Java 1.6 (IBM) on Linux. Further details are listed in the system properties below (except some deleted security items).
    System properties:
    (java.lang.String, int, java.lang.StringBuffer, int)=contains
    DestroyJavaVM helper thread=(java.lang.String, java.security.KeyStore$Entry, java.security.KeyStore$ProtectionParameter)
    base.collection.name=CD2JAVA
    bind.address=10.12.25.130
    catalina.base=/work/ocrgws_test/server0
    catalina.ext.dirs=/work/ocrgws_test/server0/lib
    catalina.home=/work/ocrgws_test/server0
    catalina.useNaming=false
    com.arjuna.ats.arjuna.objectstore.objectStoreDir=/work/ocrgws_test/server0/data/tx-object-store
    com.arjuna.ats.jta.lastResourceOptimisationInterface=org.jboss.tm.LastResource
    com.arjuna.ats.tsmx.agentimpl=com.arjuna.ats.internal.jbossatx.agent.LocalJBossAgentImpl
    com.arjuna.common.util.logger=log4j_releveler
    com.arjuna.common.util.logging.DebugLevel=0x00000000
    com.arjuna.common.util.logging.FacilityLevel=0xffffffff
    com.arjuna.common.util.logging.VisibilityLevel=0xffffffff
    com.ibm.cpu.endian=little
    com.ibm.jcl.checkClassPath=
    com.ibm.oti.configuration=scar
    com.ibm.oti.jcl.build=20100326_1904
    com.ibm.oti.shared.enabled=false
    com.ibm.oti.vm.bootstrap.library.path=/opt/ibm/java-x86_64-60/jre/lib/amd64/compressedrefs:/opt/ibm/java-x86_64-60/jre/lib/amd64
    com.ibm.oti.vm.library.version=24
    com.ibm.util.extralibs.properties=
    com.ibm.vm.bitmode=64
    common.loader=${catalina.home}/lib,${catalina.home}/lib/*.jar
    epo.jboss.deploymentscanner.extradirs=/work/ocrgws_test/app/
    external.cert.ldap.* = ***************
    file.encoding=UTF-8
    file.separator=/
    flipflop.activation.time=16:30
    hibernate.bytecode.provider=javassist
    ibm.signalhandling.rs=false
    ibm.signalhandling.sigchain=true
    ibm.signalhandling.sigint=true
    ibm.system.encoding=UTF-8
    jacorb.config.log.verbosity=0
    java.assistive=ON
    java.awt.fonts=
    java.awt.graphicsenv=sun.awt.X11GraphicsEnvironment
    java.awt.printerjob=sun.print.PSPrinterJob
    java.class.path=/work/ocrgws_test/config:/usr/local/jboss-eap-4.3-cp07/bin/run.jar:/opt/ibm/java-x86_64-60/lib/tools.jar
    java.class.version=50.0
    java.compiler=j9jit24
    java.endorsed.dirs=/usr/local/jboss-eap-4.3-cp07/lib/endorsed
    java.ext.dirs=/opt/ibm/java-x86_64-60/jre/lib/ext
    java.fullversion=JRE 1.6.0 IBM J9 2.4 Linux amd64-64 jvmxa6460sr8-20100401_55940 (JIT enabled, AOT enabled)
    J9VM - 20100401_055940
    JIT - r9_20100401_15339
    GC - 20100308_AA_CMPRSS
    java.home=/opt/ibm/java-x86_64-60/jre
    java.io.tmpdir=/tmp
    java.jcl.version=20100408_01
    java.library.path=/opt/ibm/java-x86_64-60/jre/lib/amd64/compressedrefs:/opt/ibm/java-x86_64-60/jre/lib/amd64:/usr/lib64/mpi/gcc/openmpi/lib64:/usr/lib
    java.naming.factory.initial=org.jnp.interfaces.NamingContextFactory
    java.naming.factory.url.pkgs=org.jboss.naming:org.jnp.interfaces
    java.net.preferIPv4Stack=true
    java.protocol.handler.pkgs=org.jboss.net.protocol
    java.rmi.server.codebase=http://10.12.25.130:8083/
    java.rmi.server.hostname=10.12.25.130
    java.rmi.server.randomIDs=true
    java.runtime.name=Java(TM) SE Runtime Environment
    java.runtime.version=pxa6460sr8-20100409_01 (SR8)
    java.security.krb5.conf=/usr/local/jboss/etc/krb5.conf
    java.specification.name=Java Platform API Specification
    java.specification.vendor=Sun Microsystems Inc.
    java.specification.version=1.6
    java.util.prefs.PreferencesFactory=java.util.prefs.FileSystemPreferencesFactory
    java.vendor.url=http://www.ibm.com/
    java.vendor=IBM Corporation
    java.version=1.6.0
    java.vm.info=JRE 1.6.0 IBM J9 2.4 Linux amd64-64 jvmxa6460sr8-20100401_55940 (JIT enabled, AOT enabled)
    J9VM - 20100401_055940
    JIT - r9_20100401_15339
    GC - 20100308_AA_CMPRSS
    java.vm.name=IBM J9 VM
    java.vm.specification.name=Java Virtual Machine Specification
    java.vm.specification.vendor=Sun Microsystems Inc.
    java.vm.specification.version=1.0
    java.vm.vendor=IBM Corporation
    java.vm.version=2.4
    javax.management.builder.initial=org.jboss.mx.server.MBeanServerBuilderImpl
    javax.net.ssl.trustStore=/usr/local/jboss/etc/ldap.truststore
    javax.net.ssl.trustStorePassword=password
    jboss.bind.address=10.12.25.130
    jboss.home.dir=/usr/local/jboss-eap-4.3-cp07
    jboss.home.url=file:/usr/local/jboss-eap-4.3-cp07/
    jboss.identity=30df88bc0a52e350x6e2ff59cx136c17794d5x-8000757
    jboss.lib.url=file:/usr/local/jboss-eap-4.3-cp07/lib/
    jboss.messaging.controlchanneludpaddress=239.1.200.4
    jboss.messaging.datachanneludpaddress=239.1.200.4
    jboss.partition.name=ocrgws_test_Partition
    jboss.partition.udpGroup=239.1.200.4
    jboss.remoting.domain=JBOSS
    jboss.remoting.instanceid=30df88bc0a52e350x6e2ff59cx136c17794d5x-8000757
    jboss.remoting.jmxid=luu002t.internal.epo.org_1334685694459
    jboss.remoting.version=22
    jboss.security.disable.secdomain.option=true
    jboss.server.config.url=file:/work/ocrgws_test/server0/conf/
    jboss.server.data.dir=/work/ocrgws_test/server0/data
    jboss.server.home.dir=/work/ocrgws_test/server0
    jboss.server.home.url=file:/work/ocrgws_test/server0/
    jboss.server.lib.url=file:/work/ocrgws_test/server0/lib/
    jboss.server.log.dir=/work/ocrgws_test/server0/log
    jboss.server.name=luu002t_ocrgws_test_server0
    jboss.server.temp.dir=/work/ocrgws_test/server0/tmp
    jboss.tomcat.udpGroup=239.1.200.4
    jbossmx.loader.repository.class=org.jboss.mx.loading.UnifiedLoaderRepository3
    je.maxMemory=20000000
    jgroups.bind_addr=10.12.25.130
    jmx.console.bindcredential=3bpwdmpc
    jmx.console.binddn=cn=jbossauth-ro,ou=accounts,ou=auth,dc=epo,dc=org
    jmx.console.rolesctxdn=ou=roles-test,ou=jboss,ou=applications,ou=internal,dc=epo,dc=org
    jndi.datasource.name=java:MainframeDS
    jnp.disableDiscovery=true
    jxe.current.romimage.version=15
    jxe.lowest.romimage.version=15
    line.separator=
    mainframelogin.password=720652a1e842fc7f
    mainframelogin.username=test_t
    org.apache.commons.logging.Log=org.apache.commons.logging.impl.Log4JLogger
    org.apache.tomcat.util.http.ServerCookie.VERSION_SWITCH=true
    org.epo.jboss.application.home=/work/ocrgws_test
    org.hyperic.sigar.path=/work/ocrgws_test/server0/./deploy/hyperic-hq.war/native-lib
    org.jboss.ORBSingletonDelegate=org.jacorb.orb.ORBSingleton
    org.omg.CORBA.ORBClass=org.jacorb.orb.ORB
    org.omg.CORBA.ORBSingletonClass=org.jboss.system.ORBSingleton
    org.w3c.dom.DOMImplementationSourceList=org.apache.xerces.dom.DOMXSImplementationSourceImpl
    os.arch=amd64
    os.name=Linux
    os.version=2.6.32.46-0.3-xen
    package.access=sun.,org.apache.catalina.,org.apache.coyote.,org.apache.tomcat.,org.apache.jasper.,sun.beans.
    package.definition=sun.,java.,org.apache.catalina.,org.apache.coyote.,org.apache.tomcat.,org.apache.jasper.
    path.separator=:
    poll.interval.milliseconds=300000
    program.name=run.sh
    server.loader=
    shared.loader=
    spnego.config=/usr/local/jboss/etc/spnego.properties
    sun.arch.data.model=64
    sun.boot.class.path=/usr/local/jboss-eap-4.3-cp07/lib/endorsed/xercesImpl.jar:/usr/local/jboss-eap-4.3-cp07/lib/endorsed/xalan.jar:/usr/local/jboss-eap-4.3-cp07/lib/endorsed/serializer.jar:/opt/ibm/java-x86_64-60/jre/lib/amd64/compressedrefs/jclSC160/vm.jar:/opt/ibm/java-x86_64-60/jre/lib/annotation.jar:/opt/ibm/java-x86_64-60/jre/lib/beans.jar:/opt/ibm/java-x86_64-60/jre/lib/java.util.jar:/opt/ibm/java-x86_64-60/jre/lib/jndi.jar:/opt/ibm/java-x86_64-60/jre/lib/logging.jar:/opt/ibm/java-x86_64-60/jre/lib/security.jar:/opt/ibm/java-x86_64-60/jre/lib/sql.jar:/opt/ibm/java-x86_64-60/jre/lib/ibmorb.jar:/opt/ibm/java-x86_64-60/jre/lib/ibmorbapi.jar:/opt/ibm/java-x86_64-60/jre/lib/ibmcfw.jar:/opt/ibm/java-x86_64-60/jre/lib/rt.jar:/opt/ibm/java-x86_64-60/jre/lib/charsets.jar:/opt/ibm/java-x86_64-60/jre/lib/resources.jar:/opt/ibm/java-x86_64-60/jre/lib/ibmpkcs.jar:/opt/ibm/java-x86_64-60/jre/lib/ibmcertpathfw.jar:/opt/ibm/java-x86_64-60/jre/lib/ibmjgssfw.jar:/opt/ibm/java-x86_64-60/jre/lib/ibmjssefw.jar:/opt/ibm/java-x86_64-60/jre/lib/ibmsaslfw.jar:/opt/ibm/java-x86_64-60/jre/lib/ibmjcefw.jar:/opt/ibm/java-x86_64-60/jre/lib/ibmjgssprovider.jar:/opt/ibm/java-x86_64-60/jre/lib/ibmjsseprovider2.jar:/opt/ibm/java-x86_64-60/jre/lib/ibmcertpathprovider.jar:/opt/ibm/java-x86_64-60/jre/lib/ibmxmlcrypto.jar:/opt/ibm/java-x86_64-60/jre/lib/management-agent.jar:/opt/ibm/java-x86_64-60/jre/lib/xml.jar:/opt/ibm/java-x86_64-60/jre/lib/jlm.jar:/opt/ibm/java-x86_64-60/jre/lib/javascript.jar:/tmp/yjp201202191932.jar
    sun.boot.library.path=/opt/ibm/java-x86_64-60/jre/lib/amd64/compressedrefs:/opt/ibm/java-x86_64-60/jre/lib/amd64
    sun.io.unicode.encoding=UnicodeLittle
    sun.java.command=org.jboss.Main -b 10.12.25.130 -Djboss.server.home.dir=/work/ocrgws_test/server0 -Djboss.server.home.url=file:/work/ocrgws_test/server0 -Djboss.server.name=luu002t_ocrgws_test_server0 -Djboss.partition.name=ocrgws_test_Partition -Depo.jboss.deploymentscanner.extradirs=/work/ocrgws_test/app/ -Dorg.epo.jboss.application.home=/work/ocrgws_test
    sun.java.launcher.pid=17781
    sun.java.launcher=SUN_STANDARD
    sun.java2d.fontpath=
    sun.jnu.encoding=UTF-8
    sun.rmi.dgc.client.gcInterval=3685000
    sun.rmi.dgc.server.gcInterval=3685000
    system=java.io.ObjectStreamField
    tomcat.util.buf.StringCache.byte.enabled=true
    user.country=US
    user.dir=/work/ocrgws_test
    user.home=*****************
    user.language=en
    user.name=***********
    user.timezone=Europe/Berlin
    user.variant=

    The memory profiler claims further, that com.sleepycat.je.tree.BIN is responsible for 71% of all heap memory. In any case, com.sleepycat.je.tree.BIN claims ~ 116MB of heap memory, which is by any goodwill, exceeded the limit of 20MB. >
    I'm not sure whether the profiler is reporting live objects only (referenced) or all objects (including those not yet reclaimed). If the latter, it isn't telling you how much memory is actually referenced by the JE cache.
    Please look at the JE stats to see what the cache usage is, from JE's point of view.
    If you believe there is a bug in JE cache management, you'll need to write a small standalone test to demonstrate it and submit it to us, since we don't know of any such bug. Also note that we'll have difficulty supporting JE 4.0 (without a support contract anyway). Please use JE 5.0, or at least 4.1.
    Eviction occurs as objects are allocated, as well as in background threads. Eviction in background threads and concurrent eviction were greatly improved in JE 4.1.
    --mark
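    For reference, a minimal sketch of reading the cache statistics mentioned above, to see what JE itself believes its cache is using (it assumes an already-open com.sleepycat.je.Environment; the class and method names are illustrative only):
        import com.sleepycat.je.Environment;
        import com.sleepycat.je.EnvironmentStats;
        import com.sleepycat.je.StatsConfig;

        class JeCacheCheck {
            // Print the cache usage as reported by JE's own statistics.
            static void printCacheUsage(Environment env) {
                StatsConfig cfg = new StatsConfig();
                cfg.setFast(true);                          // cheap stats; skip the expensive counters
                EnvironmentStats stats = env.getStats(cfg);
                System.out.println("JE cache total bytes: " + stats.getCacheTotalBytes());
            }
        }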
