What needs to be released to avoid memory leaks in the JVM?

I've got a C application server that makes Java calls to process requests.
Eventually, the JVM runs out of memory, I assume because my C code is leaking references.
My question is... what references should I be freeing? I am freeing local/global object references, but it still leaks memory.
Should I also be freeing classIds? Method ids? Is DeleteLocalRef() the way to do that?
Thanks,
Jeff

The simple answer is that you are responsible for deallocating every single reference (basically anything that isn't a fieldID or a methodID) that is returned from a JNI call. Every JNI call that returns an object reference hands you a newly allocated local reference, which you must delete with a call to DeleteLocalRef(). For example, you must delete the jclass returned by FindClass() or the jthrowable returned by ExceptionOccurred(). In addition, you must also delete any global references that you explicitly allocate yourself, using DeleteGlobalRef().
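To make that concrete, here is a minimal sketch in C (the helper name make_greeting is made up for illustration) of code running outside any native-method context -- say, on a C server thread attached to the JVM -- that deletes every local reference it receives:

#include <jni.h>

/* Hypothetical helper: builds a java.lang.String and returns it to the caller.
 * Every object reference a JNI call hands back is a new local reference, so we
 * delete the ones we no longer need before returning. */
jstring make_greeting(JNIEnv *env)
{
    /* FindClass returns a new local reference to the jclass. */
    jclass stringClass = (*env)->FindClass(env, "java/lang/String");
    if (stringClass == NULL)
        return NULL;                                   /* exception pending */

    /* Method IDs (and field IDs) are not references: never delete or free them. */
    jmethodID ctor = (*env)->GetMethodID(env, stringClass, "<init>",
                                         "(Ljava/lang/String;)V");
    if (ctor == NULL) {
        (*env)->DeleteLocalRef(env, stringClass);
        return NULL;
    }

    jstring literal = (*env)->NewStringUTF(env, "hello");     /* new local ref */
    jstring greeting = (jstring)(*env)->NewObject(env, stringClass, ctor,
                                                  literal);   /* new local ref */

    /* Done with these two: release them explicitly. */
    (*env)->DeleteLocalRef(env, literal);
    (*env)->DeleteLocalRef(env, stringClass);

    /* If the result must outlive this thread, promote it with
     *     jobject global = (*env)->NewGlobalRef(env, greeting);
     * and balance that later with DeleteGlobalRef(env, global). */
    return greeting;
}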
Note that the rules change when you are executing within the context of a native method invoked from the JVM. In that case, the JVM will automatically clean up any local references that are left over once the native method returns. Even then, it's smart to be careful with local references: because none of them are deallocated by the VM until the native method actually returns, you can easily exceed the maximum number of local refs, hold on to an excessive number of objects, or hold onto objects for an excessive amount of time.
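Here is a sketch of that case (the Java_com_example_Worker_processAll entry point is hypothetical): the VM would reclaim any leftover local references when the method returns, but inside a long loop it pays to delete them eagerly so the local-reference table never fills up:

#include <jni.h>

/* Hypothetical native method: iterates over a (possibly huge) object array. */
JNIEXPORT void JNICALL
Java_com_example_Worker_processAll(JNIEnv *env, jobject self, jobjectArray items)
{
    jsize count = (*env)->GetArrayLength(env, items);
    jsize i;
    for (i = 0; i < count; i++) {
        /* Each call creates a fresh local reference. */
        jobject item = (*env)->GetObjectArrayElement(env, items, i);
        if (item == NULL)
            continue;

        /* ... do something with item ... */

        /* Release it now instead of letting thousands pile up until return. */
        (*env)->DeleteLocalRef(env, item);
    }
}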
Local reference management is a real pain in the butt in JNI. For example, it took me quite some time to get it right in Jace.
God bless,
-Toby Reyelts
Check out the free, open-source, JNI toolkit, Jace - http://jace.reyelts.com/jace

Similar Messages

  • Memory leak in the JVM leading to system oom

    Hi,
    We are running application server using java 1.5, tomcat 5.5 ...
    The problem is that the JVM is allocating memory continuously.
    If we look at the JVM memory from JConsole, everything is OK: the amount of memory allocated in the heap does not grow significantly.
    If we look at the JVM memory consumption from the system perspective, the memory usage grows until the system runs out of memory and kills the JVM. The JVM memory usage goes beyond 1.5 GB of memory whatever options we use when launching the JVM.
    The memory allocation is so intense that it can lead to a crash in a few hours with very few users connected to the application.
    We have tried JDK 1.5.05 and 1.5.06; still the same.
    We are running on a Debian Linux system with a 2.6.8 kernel.
    Any idea of where all this memory goes ?
    Any idea on how to track and solve this memory leak ?

    Hi Martinux,
    Tiger and Mustang come with a number of diagnostic tools that could help
    you spot memory problems. In particular there's something called 'jmap', which can
    take a snapshot of the JVM memory.
    Danny Coward has recently written a nice blog post to emphasize the existence
    of these new tools - see
    "Crash Course: Java SE Monitoring, Management and Troubleshooting"
    http://blogs.sun.com/roller/page/dannycoward?entry=crash_course_java_se_monitoring
    Of course if the problem isn't in the JVM...
    BTW: you did also look at the non-heap memory, right? and also at the total
    number of loaded classes?
    hope this helps,
    -- daniel
    JMX, SNMP, Java, etc...
    http://blogs.sun.com/roller/page/jmxetc

  • What needs to be done to avoid the vulnerability discovered according to hacking story yesterday?

    what needs to be done to avoid the vulnerability discovered according to hacking story yesterday?

    abombaci wrote:
    This software update is only available for people that have Java installed on their Macs. Since someone like me doesn't have it, I don't get the update, because I don't need it.
    Then you don't need to worry about the Java vulnerability being exploited on your Mac.

  • What needs to be done to avoid WIFI drop-offs with Apple AirPort Extreme Base station with Extended Wifi

    Hi there,
    can anybody please advise me what needs to be done to avoid Wi-Fi drop-offs every half hour or hour.
    I use VoIP phone calls a lot over my Wi-Fi. However, my AirPort Wi-Fi connection drops off while I am in the middle of Wi-Fi VoIP calls. This is causing significant disturbance in my family.
    Please assist.
    In detail, I have extended my Wi-Fi range using 2 Apple AirPort Extreme Base Stations.
    The main base is an A1408 (5th Gen), and I have extended my Wi-Fi coverage in my L-shaped house by connecting another Apple AirPort, an A1354 (4th Gen), to the A1408.
    Note 1:
    > when I am close to the A1408, my iPhone 4 connects to the A1408 automatically
    > when I am close to the A1354, my iPhone 4 connects to the A1354 automatically
    Note 2: these drop-offs never occurred when I had only 1 Apple AirPort Extreme Base Station
    I think this auto-switching is causing the slight drop-outs in my Wi-Fi when I am on a VoIP call.
    Do you think that something is going wrong with my extended Apple AirPort Wi-Fi?
    Much appreciated

    To help eliminate drop-offs you need to ensure that your extended network is operating at its peak bandwidth performance. Base station placement is critical in an extended network. Please check out the following AirPort User Tip for details.

  • How can I avoid memory leak problem ?

    I use JDev 10.1.2. I have a memory leak problem with ADF.
    My application is very large. We have at least 30 application modules, and each application module contains many view objects,
    and I have to support a lot of concurrent users.
    As far as I know, ADF stores view object data in the HTTP session,
    and the HTTP session lives quite long. When I use the application for a while, it raises an Out of Memory error.
    I am new to ADF.
    I try to use clearCache() on a view object when I don't use it any more,
    and call resetState() when I don't use an Application Module any more.
    I don't know much about the behavior of clearCache() and resetState(),
    so I am not sure whether they can avoid the memory leak or not.
    Do you have suggestions to avoid this problem?

    ADF does not store data in the HTTP session.
    See Chapter 28 "Application Module State Management" in the ADF Developer's Guide for Forms/4GL Developers on the ADF Learning Center at http://download-uk.oracle.com/docs/html/B25947_01/toc.htm for more information.
    See Chapter 29 "Understanding Application Module Pooling" to learn how you can tune the pooling parameters to control how many modules are used and how many modules "hang around" for what periods of time.

  • Free memory after using GetRS232ErrorString() to avoid memory leak?

    Hello,
    Is it necessary to free memory after using the function GetRS232ErrorString() to avoid a memory leak?
    Example 1:
    int main (void)
    {
        char *strError = NULL;
        strError = GetRS232ErrorString (55);   /* just an example error code */
        free (strError);                       /* Do I need to free this pointer? */
        return 0;
    }
    Example 2:
    int main (void)
    {
        MessagePopup ("Error", GetRS232ErrorString (55));   /* Will I get a memory leak with this function call? */
        return 0;
    }
    BR
    Frank

    It's a pity that the documentation is indeed so poor in this case, but testing shows that it always returns the same pointer no matter the error code, so it seems to be using an internal buffer and you are not supposed to free the string (but you do need to copy it before the next call to GetRS232ErrorString if you want to keep the text). It does, however, return a different pointer for every thread, so at least it seems to be thread-safe.
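    In other words, treat the returned pointer as read-only memory owned by the library. Here is a minimal sketch along those lines, assuming the internal-buffer behaviour described above (the helper name RememberRS232Error is made up):

    #include <string.h>
    #include <rs232.h>                       /* LabWindows/CVI RS-232 library */

    static char lastError[256];

    /* Copy the library-owned message into our own buffer; never free() it. */
    void RememberRS232Error (int errCode)
    {
        char *msg = GetRS232ErrorString (errCode);
        strncpy (lastError, msg, sizeof (lastError) - 1);
        lastError[sizeof (lastError) - 1] = '\0';
        /* no free(msg): the buffer belongs to GetRS232ErrorString */
    }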
    Cheers, Marcel 

  • Memory leaks in the app

    While doing memory leak profiling in an application, I came across leaks attributed to certain methods like readHeaderBytes or httpProtocolStart from CFNetwork, GSEventRunModal from GraphicsServices, etc. What should I do to correct these memory leaks?

    Find the objects that you are allocating that aren't being freed and call release on them. Check out this link:
    http://developer.apple.com/documentation/Cocoa/Conceptual/MemoryMgmt/MemoryMgmt.html

  • Memory leak on the native side

    Hello,
    I am hoping someone here can offer some troubleshooting advice, as I am completely stumped. I am running JBoss 5.1 with JDK 1.6_u18 (same problem with u17 and u16 too)
    - 32-bit, Linux (RHEL 5).
    - min/max heap setting of 1024M
    - permgen max of 256M
    - Thread stack size of 128K
    - No JNI
    My problem: The memory footprint of the JVM slowly grows until it hits the 3G OS limit. This takes about 8 hours under moderate load. At this time, it of course dies as it has no more addressable memory left.
    Here is the strange part: I have used every possible memory debugging tool (jmap, Eclipse MAT, etc.) and nothing looks out of the ordinary in my Java heap. The thread count stays at a reasonable 350 threads, and the Java heap size stabilizes at about 500M. For the first hour or so, the JVM footprint stays at about 1.7G, which makes sense. After that it starts to slowly grow until it exceeds the 3G limit.
    What can I do to figure out where the leak is occurring? There is clearly some native resource that is being allocated but not freed. As I indicated, all the Java analysis tools report a healthy, stable heap and thread count.
    Thanks in advance.
    Jon

    Thanks for the reply.
    I have confirmed with -verbose:jni that the only JNI libraries getting loaded are those belonging to the JDK. My application does make heavy use of the ProcessClassLoader from the JBoss jBPM library, but classes all seem to be unloading normally, and my permgen usage stays very low and stable. Is there anything I should look at in regards to this class loader? I have tried both leaving the GC parameters unset and specifying the concurrent mark-sweep collector, with the same results.

  • Are there any memory leaks in version 4.6.21?

    Hi All:
    My English is not so good, so I will try my best to make my meaning clear...
    I added replication to my app recently.
    There seem to be memory leaks in version 4.6.21.
    There is one env in my app's database, and there are 2000 DBs in the env. The 2000 DBs are distributed across 200 directories, and the 200 directories are subdirectories under the DATADIR.
    In the app, client and master create all subdirectories before the env is opened. In my test, if the master app uses relative paths to create DBs and read/write data, the data is correctly sent to the client, and the data in the master and client stays identical.
    But there are many memory leaks in the app. In my test, the app uses up all memory, which leads to malloc failures and a crash.
    The app uses DB_LOG_AUTOREMOVE, DB_LOG_INMEMORY, and DB_REPMGR_ACKS_NONE.
    cache_size = 500M.
    void event_callback(DbEnv* dbenv, u_int32_t which, void *info)
    {
         bool *isMaster = (bool *)dbenv->get_app_private();
         info = NULL;                    /* Currently unused. */
         switch (which) {
         case DB_EVENT_REP_MASTER:
              *isMaster = 1;
              dbenv->errx("switch to master mode");
              break;
         case DB_EVENT_REP_CLIENT:
              *isMaster = 0;
              dbenv->errx("switch to slave mode");
              break;
         case DB_EVENT_REP_STARTUPDONE: /* FALLTHROUGH */
         case DB_EVENT_REP_NEWMASTER:
         case DB_EVENT_REP_PERM_FAILED:
              // I don't care about these, for now.
              break;
         default:
              dbenv->errx("ignoring event %d", which);
              break;
         }
    }
    Thanks
    d.j

    > I set up logdir, datadir and envdir in different directories with
    > the functions set_data_dir and set_log_dir. I created two directories
    > under the datadir and created more than 100 sub-directories
    > respectively for the two dirs, where the 2000 databases are located.
    By the way, even though it doesn't seem likely to be the cause of the
    memory leak, you should still fix this illegal usage of subdirectories
    before we get much further into the investigation. We might as well eliminate
    any possible source of problems, no matter how unlikely it seems.
    Also, just as an experiment, I would be very tempted to try running
    again without using any separate directories. In other words, don't
    call set_data_dir or set_log_dir at all, and just let everything be
    put into the one single directory. If that changes the results, that
    will be a big clue to help us know where to look for the problem.
    > I'm writing a demo, but it takes time.
    > I will reply later. Thank you.
    Take your time -- we'll be here.
    Alan Bram
    Oracle

  • How to get memory usage of the JVM...

    Hi,
    I want to know the total amount of memory used by the JVM while I am performing a particular transaction in my web application.
    One way is to use the methods in java.lang.Runtime, but the output seems to fluctuate to a significant extent.
    Is there an alternative way to accomplish the same.
    TIA,
    Basu.

    Run the JVM with the -verbose:gc option;
    whenever the GC runs, it prints out the amount of heap actually in use before the collection, the amount in use after it, and the total heap size.

  • How to limit the maximum memory used by the JVM

    Dears
    we enabled the Sybase web service and started it, and now find that the process is using a lot of memory.
    How can I limit the maximum memory used by the JVM?
    aidcd02:/> prstat -p 8164 -p 17768
       PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/NLWP      
      8164 sybase   1079M  848M sleep   59    0   0:10:44 0.0% java/30  
    aidcd02:/> pargs 8164
    8164:   /sybase/shared/JRE-6_0_24/bin/java -Dsybase.home=/sybase -Docs.home=OCS-15_0 -D
    argv[0]: /sybase/shared/JRE-6_0_24/bin/java
    argv[1]: -Dsybase.home=/sybase
    argv[2]: -Docs.home=OCS-15_0
    argv[3]: -Dcom.sybase.ase.ws.libtcl=
    argv[4]: -classpath
    argv[5]: /sybase/WS-15_0/lib/axis.jar:/sybase/WS-15_0/lib/jaxrpc.jar:/sybase/WS-15_0/lib/saaj.jar:/sybase/WS-15_0/lib/commons-logging-1.0.4.jar:/sybase/WS-15_0/lib/commons-discovery-0.2.jar:/sybase/WS-15_0/lib/log4j-1.2.4.jar:/sybase/WS-15_0/lib/wsdl4j-1.5.1.jar:/sybase/jConnect-6_0/classes/jconn3.jar:/sybase/jConnect-6_0/classes/jTDS3.jar:/sybase/WS-15_0/lib/xercesImpl.jar:/sybase/WS-15_0/lib/xmlParserAPIs.jar:/sybase/WS-15_0/lib/server.jar:/sybase/WS-15_0/lib/sqlx.jar:/sybase/WS-15_0/lib/flexlm.jar:/sybase/WS-15_0/lib/servlet-api-2.5-20081211.jar:/sybase/WS-15_0/lib/jetty-6.1.22.jar:/sybase/WS-15_0/lib/jetty-sslengine-6.1.22.jar:/sybase/WS-15_0/lib/jetty-util-6.1.22.jar:/sybase/WS-15_0/lib/tools.jar:/sybase/WS-15_0/lib/jcert.jar:/sybase/WS-15_0/lib/jnet.jar:/sybase/WS-15_0/lib/jsse.jar:/sybase/WS-15_0/lib/jce1_2_2.jar:/sybase/WS-15_0/lib/sunjce_provider.jar:/sybase/WS-15_0/lib/ws.jar:/sybase/shared/lib/dsparser.jar:/sybase/WS-15_0/props:/sybase/WS-15_0/producer/WEB-INF
    thanks.

    You can limit the maximum memory used via command-line options to the VM; the -Xmx option sets the maximum heap size (for example, -Xmx512m).
    Note, of course, that if the application is actually using that memory, reducing the limit will typically have some negative impact on your application, up to and including catastrophic failure.
    Conversely, if the application is not actively using that memory, then limiting it just because some tool reports that the OS has reserved it is pointless.

  • Huge memory leak in java jvm after update 2 for Snow leopard

    Since I updated to Java Update 2 for Snow Leopard, my JVM suddenly grows massive (10GB+ real memory with -Xmx=3500m), consuming all memory and rendering my iMac unusable. This does not happen predictably, but it does happen several times a day now, requiring me to power off and on again.
    I had been living happily with update 1 with no such problem.
    I need to either go back to update 1 (should have it on Time Machine) or find a solution for this problem.

    Our application is a J2EE-based commercial application facing specific customers, with about 120 access requests an hour.
    We're doing a stress test on the test server. The strange memory leak occurred at 1:20 am this morning while we were out of the company, and no job was scheduled to run at that time. So I tend to imagine that something happened inside OC4J.
    I have used OptimizeIt to monitor the heap status. However, as the memory leak problem occurs only very occasionally, and that tool badly slows our server, we are currently using no profiling tools.

  • Possible of memory leak in the loop

    Recently my application has been getting an OutOfMemory issue. I realized that the memory keeps stacking up: in Task Manager, jlaunch keeps growing and won't drop back. That day I was running a search function which queries a table to retrieve data. jlaunch shot from 500MB to 2.2GB and now remains there. I'm wondering whether, since the query populates at least 10,000 records into an ArrayList, that memory gets allocated, and whether, once I finish running the function, the allocated memory is cleared for re-use.
    public ArrayList ejbHomeInJDBCConnection(Map map) {
        ArrayList beanList = new ArrayList();
        try {
            Context ctx = new InitialContext();
            DataSource ds = (DataSource) ctx.lookup("jdbc/POOL");
            Connection con = ds.getConnection();
            String query = "SELECT * FROM USER WHERE ";
            for (Iterator iterator = map.keySet().iterator(); iterator.hasNext();) {
                Object key = iterator.next();
                if (key.toString().startsWith("TIME") || key.toString().startsWith("TIMEIN")) {
                    long longValue = Long.parseLong(map.get(key).toString());
                    query += key.toString() + longValue + " AND ";
                } else {
                    String value = (String) map.get(key);
                    query += key.toString() + value + " AND ";
                }
            }
            String newquery = query.substring(0, query.length() - 5);
            newquery += " ORDER BY TIMEIN DESC";
            Statement stmt = con.createStatement();
            try {
                ResultSet rs = stmt.executeQuery(newquery);
                try {
                    while (rs.next()) {
                        InBean bean = new InBean();
                        bean.setSmsId(rs.getString("EMP"));
                        beanList.add(bean);
                    }
                } finally {
                    rs.close();
                }
            } finally {
                stmt.close();
            }
        } catch (Exception e) {
            System.err.println(e.fillInStackTrace());
        }
        return beanList;
    }
    I'm wondering whether the InBean objects will cause a memory leak: if there are 10,000 records, 10,000 objects are created, and once each one is added to the ArrayList the previous reference is no longer in use. Will the GC clear them even though I didn't set them to null? Do I need to do something like reallocating/defragmenting the memory?
    Thanks.

    Hi,
    I'm sure a "count" would not generate the overhead you are concerned about.
    To understand some aspects, you need to read the Java source files and understand how the backing array behaves in your case.
    Every time you "add" an element to your list, the implementation runs ensureCapacity and grows the list as needed. Understand that the list is an array with a lot more functions; underneath, you are still working with an array, and it needs to have a defined size. Whenever the backing array fills up, adding does a System.arraycopy (and all the rest), so you can save that work if you create your List with the right size up front.
    Note that this is not an issue for small lists of small objects, but when working with large lists you can feel the slowdowns.
    About the GC stuff, well, I'm sure you can do some reading on how it works. One good starting point would be
    Link: [http://java.sun.com/docs/hotspot/gc1.4.2/]
    I'm sure you don't need that, but still, it's good reading. Maybe you should just increase your heap size, or you can manually clear the List using list.clear();
    Rgds,
    Daniel

  • Memory Leak in Spawned JVM?

    I have set up the Activatable RMI example form Sun and noticed something unusual while monitoring the Windows Task Manager.
    When I run the client program for the first time, two JVMs are spawned instead of one. When the client program finishes executing, one JVM terminates while the other is left running. Every time I execute the client program after that, a new JVM is spawned and terminates (I assume this is the client program), while the other JVM grows by about 4-28k. Finally, when I terminate the RMI daemon, the 'other' JVM terminates.
    What is running in this 'other' JVM?
    Why does it appear only after the first execution of the client program and not when the rmi daemon is started or when the setup program is run?
    How can I terminate this JVM without killing the rmi daemon?
    What is causing the memory leak?
    Any ideas would be greatly apreciated, thanks.

    Obviously the second JVM is for the Activatable object!
    OK.
    So when I run the client for the first time, two JVMs are started: one for the client program and one for the activatable object. When the client program finishes executing, its JVM terminates. Meanwhile the activatable object's JVM still hangs around (and leaks memory).
    I don't want this JVM to hang around too long. Ideally I would like to control how long an activatable object hangs around after the client is finished with it. Is there any way to shut down the activatable object's JVM programmatically?

  • No need for Xerces jars since Sun includes it in the jvm?

    Hey Everybody,
    If one would like to use Xerces in a java app, why should they download it from Apache when it's included in Sun's jvm releases as of 1.4?
    I noticed that the "rt.jar" in Sun's jvm for Windows contains an apache folder: jdk1.6.0_11\jre\lib\rt.jar\com\sun\org\apache\,
    that contains the following apache folders: bcel, regexp, xalan, xerces, xml, xpath
    How does one tell what version of Xerces is being released with a particular Sun JVM release? I looked all over the Sun site - it seems they haven't documented that they are using files from Xerces. They even took the trouble of burying the "org/apache" folder down under the "com/sun" folder -- as if to hide the fact that Apache is being used... maybe to take credit for it, to make it look like the functionality was developed by Sun?
    Is there a chance that one could successfully compile Java code that references Xerces classes when the developer's project does not contain the Xerces jars, relying instead on the Sun-bundled Xerces classes -- and while that code would compile fine, there could exist runtime exceptions due to classes from Xerces that Sun did not include in its JVM, whereas if the project did include the Xerces jars there would be no runtime errors?
    Thanks!

    EnNuages wrote:
    > Hey Everybody,
    > If one would like to use Xerces in a java app, why should they download it from Apache when it's included in Sun's jvm releases as of 1.4?
    One reason could be that the Apache one is newer and thus better in some way.
    > I noticed that the "rt.jar" in Sun's jvm for Windows contains an apache folder: jdk1.6.0_11\jre\lib\rt.jar\com\sun\org\apache\,
    > that contains the following apache folders: bcel, regexp, xalan, xerces, xml, xpath
    > How does one tell what version of Xerces is being released with a particular Sun JVM release? I looked all over the Sun site - it seems they haven't documented that they are using files from Xerces. They even took the trouble of burying the "org/apache" folder down under the "com/sun" folder -- as if to hide the fact that Apache is being used... maybe to take credit for it, to make it look like the functionality was developed by Sun?
    No. More likely the choice of location was made without much thought, or because they might want to change or remove it.
    > Is there a chance that one could successfully compile Java code that references Xerces classes when the developer's project does not contain the Xerces jars, relying instead on the Sun-bundled Xerces classes -- and while that code would compile fine, there could exist runtime exceptions due to classes from Xerces that Sun did not include in its JVM, whereas if the project did include the Xerces jars there would be no runtime errors?
    No. You can't compile using classes that don't exist.
