Solaris JVM Process Growth

Hi,
I am investigating a problem where we experience continual growth of our JVM process. The overall process size and native heap size of the JVM process grow continually, at the same rate. I am monitoring these using the commands 'ps -o pid,vsz,rss' and 'pmap -x' respectively. The increases are in multiples of 8 KB (the SPARC page size).
I have checked our java application using Optimizeit and it is not leaking memory. I have also monitored the size of the VM java heap using the '-verbose:gc' garbage collection debugging option. Garbage collection appears normal and the VM heap size remains below that specified by the '-Xmx' option.
It appears that the memory growth is occurring in native code of the JVM process, but I am at a loss as to how to determine what is causing this. Can anyone advise me on what may be causing this JVM process growth, or on ways to find out?
I am using JRE 1.4.2 SE (1.4.2_08_b03) on Solaris 8. Within the JVM we are running our web app in Tomcat 4.1.
The shared libraries loaded by the JVM (as shown by pldd) are:
/usr/lib/libthread.so.1
/usr/lib/libdl.so.1
/usr/lib/libc.so.1
/usr/platform/sun4u/lib/libc_psr.so.1
/vob/ntg-thirdparty/java/j2sdk1.4.2_08/jre/lib/sparc/client/libjvm.so
/usr/lib/libCrun.so.1
/usr/lib/libsocket.so.1
/usr/lib/libnsl.so.1
/usr/lib/libm.so.1
/usr/lib/libsched.so.1
/usr/lib/libmp.so.2
/vob/ntg-thirdparty/java/j2sdk1.4.2_08/jre/lib/sparc/native_threads/libhpi.so
/vob/ntg-thirdparty/java/j2sdk1.4.2_08/jre/lib/sparc/libverify.so
/vob/ntg-thirdparty/java/j2sdk1.4.2_08/jre/lib/sparc/libjava.so
/vob/ntg-thirdparty/java/j2sdk1.4.2_08/jre/lib/sparc/libzip.so
/vob/ntg-thirdparty/java/j2sdk1.4.2_08/jre/lib/sparc/libjdwp.so
/vob/ntg-thirdparty/java/j2sdk1.4.2_08/jre/lib/sparc/libdt_socket.so
/usr/lib/nss_files.so.1
/vob/ntg-thirdparty/java/j2sdk1.4.2_08/jre/lib/sparc/libnet.so
/vob/ntg-thirdparty/tibco/rv-7.1/sol28/lib/libtibrvj.so
/vob/ntg-thirdparty/tibco/rv-7.1/sol28/lib/libtibrvcmq.so
/vob/ntg-thirdparty/tibco/rv-7.1/sol28/lib/libtibrvcm.so
/vob/ntg-thirdparty/tibco/rv-7.1/sol28/lib/libtibrvft.so
/vob/ntg-thirdparty/tibco/rv-7.1/sol28/lib/libtibrv.so
/usr/lib/libpthread.so.1
/vob/ntg-thirdparty/java/j2sdk1.4.2_08/jre/lib/sparc/libnio.so
/usr/lib/librt.so.1
/usr/lib/libaio.so.1
/usr/lib/libsendfile.so.1
/vob/ntg/dev/resources/lib/sol8gcc/libjavaperljni.so
/vob/ntg/dev/thirdparty/perl-5.8.0-gcc-thread/lib/libperl.so
/usr/lib/libw.so.1
/vob/ntg/dev/resources/lib/sol8gcc/libstdc++.so.2.10.0
/vob/ntg-thirdparty/java/j2sdk1.4.2_08/jre/lib/sparc/libioser12.so
/vob/ntg-thirdparty/java/j2sdk1.4.2_08/jre/lib/sparc/libawt.so
/vob/ntg-thirdparty/java/j2sdk1.4.2_08/jre/lib/sparc/libmlib_image.so
/vob/ntg-thirdparty/java/j2sdk1.4.2_08/jre/lib/sparc/headless/libmawt.so
/vob/ntg-thirdparty/java/j2sdk1.4.2_08/jre/lib/sparc/libcmm.so
/vob/ntg-thirdparty/java/j2sdk1.4.2_08/jre/lib/sparc/libfontmanager.so
/vob/ntg-thirdparty/java/j2sdk1.4.2_08/jre/lib/sparc/libdcpr.so
/vob/ntg-thirdparty/java/j2sdk1.4.2_08/jre/lib/sparc/libjpeg.so
Any help is much appreciated.

Hi
If you can, use 1.4.2_10 (the latest as of now). There is a bug, 6250517, fixed in _09. Not sure if you are making any calls to NetworkInterface.getNetworkInterfaces.
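For a quick check, a minimal probe is below (the class name and loop are mine; if that bug is what you are hitting, running this on _08 should make the native heap grow while the Java heap stays flat):

import java.net.NetworkInterface;
import java.net.SocketException;
import java.util.Enumeration;

public class IfProbe {
    public static void main(String[] args) throws SocketException {
        for (int pass = 0; pass < 10000; pass++) {
            // the enumeration is backed by native code; bug 6250517 leaked there
            Enumeration ifs = NetworkInterface.getNetworkInterfaces();
            while (ifs.hasMoreElements()) {
                NetworkInterface ni = (NetworkInterface) ifs.nextElement();
                if (pass == 0) {
                    System.out.println(ni.getName());
                }
            }
        }
    }
}

Watch vsz/rss with ps while it runs; steady growth would point at that bug.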
Also, I noticed that you are using Tibco. How about adding -Xcheck:jni and seeing whether it picks up anything?
Unfortunately Solaris 8 didn't have libumem for tracking memory allocation. If you have any Solaris 9/10 boxes, you can use libumem to track it down.
http://access1.sun.com/techarticles/libumem.html
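When you do get a Solaris 9/10 box, the workflow is roughly this (a sketch; the mdb step assumes a stock install):

LD_PRELOAD=libumem.so.1 UMEM_DEBUG=audit java <your usual options>    # interpose libumem with auditing
gcore <jvm-pid>                                                       # snapshot the process once it has grown
mdb core.<jvm-pid>                                                    # then run ::findleaks at the mdb prompt

::findleaks reports allocation stacks for buffers that were never freed, which usually points straight at the leaking native library.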

Similar Messages

  • How to determine the size of the JVM process?

    Hi,
    How to determine the total process size of the JVM process (that includes Heap, Non Heap and Native memory)?
    Is there any command to obtain this value on Solaris (for Sun JVM)?
    I am referring to the process size shown at http://middlewaremagic.com/weblogic/wp-content/uploads/2010/11/Java_Heap_Diagram_12.jpg here.
    Many thanks for your help in advance!

    Hi,
    Make sure that you count total heap + native memory together as the total memory.
    In 32-bit you have at most a 4 GB address space for the process, part of which is reserved for the OS.
    So let's assume you have 4 GB of RAM: out of the 4 GB you can allocate 2 GB as heap and 512 MB as perm (in the case of HotSpot), and the remainder will be the native memory.
    But in the case of 64-bit this changes: you will have a good amount of memory, so you can use plenty of heap and perm space.
    If you still have a query, let me know.
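    For the command part of the question: the same tools shown in the original post at the top of this page apply; a minimal sketch (substitute the real pid):
    ps -o pid,vsz,rss -p <jvm-pid>    # total virtual (VSZ) and resident (RSS) process size
    pmap -x <jvm-pid>                 # per-segment breakdown: Java heap, thread stacks, mapped libraries, native heap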
    Regards,
    Kal

  • How to increase JVM Process size for WebLogic running SOA Applications.

    Hi,
    I believe a 32-bit OS can address up to 4 GB of memory, so theoretically a 32-bit JVM can use 4 GB, but the practical convention is 2 GB, since the other 2 GB is used by the OS itself; also, this default JVM process size is set somewhere. I also believe that if the JVM is 32-bit on a 64-bit OS, the JVM still runs as a 32-bit virtual machine, so it does not know that it is on a 64-bit OS; in that case, again, it can use a maximum default process size of up to 2 GB.
    And for a 64-bit JVM, I can allocate more than 4 GB, depending on my available RAM size, to the Xmx and MaxPermSize parameters passed to java.exe, and after that I can set the same values in “setSOADomainEnv.cmd” or “setDomainEnv.cmd”.
    But I am 99% sure that just assigning a bigger value to Xmx and MaxPermSize in “setSOADomainEnv.cmd” alone won’t work (without setting Xmx for java.exe). If it did work, then in my case, when I assigned 1536 to Xmx in “setSOADomainEnv.cmd”, why was it showing an out-of-memory error? I think that is because it was only taking the default 2 GB for my 32-bit JVM, not 3 GB or 4 GB. So I think I have to change the default memory size the JVM can use (<http://www.wikihow.com/Increase-Java-Memory-in-Windows-7>, but I am using Windows 8, so I don’t know the option to change this default process size there).
    I also believe that the JVM starts, and before it starts it checks how much memory it can use from its own -Xmx parameter, in some configuration or in java.exe itself; after that it allocates that much JVM process memory in RAM, and then it loads WebLogic or Java applications into its heap + non-heap + native areas, which are parts of the JVM process memory.
    I read the posts at <http://stackoverflow.com/questions/3143579/how-can-jvm-use-more-than-4gb-of-memory> and <http://alvinalexander.com/blog/post/java/java-xmx-xms-memory-heap-size-control>.
    Both used:
    java -Xmx64m -classpath ".:${THE_CLASSPATH}" ${PROGRAM_NAME}
    java -Xmx6g     // command which calls the java/JVM launcher, which reads the -Xmx parameter to set the heap size
                    // for the JVM before the JVM process is created in memory (JVM process memory)
    Now my question: can I manually open some configuration file, or java.exe itself, the same way as “setSOADomainEnv.cmd” or “setDomainEnv.cmd” (I know that since java.exe is an exe I can’t simply open it, but I want a similar workaround), so that I don’t need to type java -Xmx6g every time I run WebLogic? (Then later I can change the WebLogic “setDomainEnv.cmd” Xmx and PermSize values above the defaults, to 5 GB or 6 GB, in the case of a 64-bit OS.)
    Please correct me if I am wrong in my understanding.
    Thanks.

    These days the VM will detect a "server" machine and set up the memory appropriately for that.
    You can create a simple java console application and have it print the memory settings (find the appropriate java class for that.)
    There is of course the possibility that your application is running out of memory because it is doing something wrong and not because the VM doesn't have enough memory.  You can force that in a test setup by LOWERING the maximum amount of memory and thus making it more likely that an out of memory exception will occur.
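    A minimal sketch of such a console application (the class name is mine; the Runtime calls are standard Java):
    public class MemSettings {
        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();
            // maxMemory() reflects -Xmx (or the VM default when no flag is given)
            System.out.println("max heap:   " + rt.maxMemory() / (1024 * 1024) + " MB");
            System.out.println("total heap: " + rt.totalMemory() / (1024 * 1024) + " MB");
            System.out.println("free heap:  " + rt.freeMemory() / (1024 * 1024) + " MB");
        }
    }
    Running java -Xmx6g MemSettings on a 64-bit JVM should report a max heap near 6 GB; if the flag is being ignored you will see the platform default instead.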

  • Capturing stdout from JVM process

    Hi,
    Using JNI native methods I call functions in a shared C library from my Java program. I do not have access to the shared library source code. The shared library writes informational messages to stdout. I want to be able to capture these messages and display them in my Java GUI as they occur. I need a cross-platform solution because the Java program needs to run on both Windows and Linux.
    I have googled and searched the JavaSoft forums but I cannot find an answer to what I am trying to do. I have seen answers on how to do it if you are using Runtime.exec methods, but I am not doing that. Also, redirection on the command line will not work since I want to show these messages as they occur in my GUI.
    I have thought of redirecting stdout to a pipe in the JNI code and using select to read the bytes off the pipe. Then sending the bytes up to a Java object. All this would run in a separate thread. But this seems overly complicated.
    Any suggestions?
    charlie

    I developed a solution to this problem using named pipes. It works well on Linux (2.6 kernel) and it may work on Windows but I don't know. Example code follows. I would be most interested in any feedback on this solution or on the code itself.
    There are 2 files, StdoutRedirect.java and StdoutRedirect.c.
    1) Compile the java file and run javah on it to get StdoutRedirect.h.
    2) Compile the C file into a shared library; here's a makefile (StdoutRedirect.o is built by make's implicit rule, so the JNI include directories need to be on the compiler's include path):
    StdoutRedirect: StdoutRedirect.o
    	gcc -shared -o libStdoutRedirect.so StdoutRedirect.o
    3) Run the java class file (e.g. java -Djava.library.path=. StdoutRedirect, assuming libStdoutRedirect.so is in the current directory).
    charlie
    **** StdoutRedirect.java ****
    import java.io.FileInputStream;
    import java.io.FileNotFoundException;
    import java.io.IOException;

    /**
     * This class, along with its JNI library, demonstrates a method of redirecting stdout of the JVM process to a Java
     * Reader thread. Using this method the stdout bytes can be sent anywhere in the Java program; e.g., displayed in a GUI.
     * This has only been tested on a Linux 2.6 kernel.
     */
    public class StdoutRedirect {
        static {
            System.loadLibrary("StdoutRedirect");
        }

        final static public String NAMED_PIPE = "/tmp/stdoutRedirect";

        native private void setupNamedPipe();
        native private void redirectStdout();
        native public void someRoutine();

        // Flag to indicate to Reader thread when to terminate (volatile so the
        // reader thread sees the update made by the main thread)
        protected volatile boolean keepReading = true;

        public static void main(String[] args) throws IOException {
            StdoutRedirect redir = new StdoutRedirect();
            redir.setupNamedPipe();
            // The first reader or writer to connect to the named pipe will block. So, the reader
            // must be opened first and must be in a new thread. We want it to be in a separate
            // thread anyways so we can receive data asynchronously.
            redir.openReader();
            // At this point, the reader thread is blocked on creating the FileInputStream
            // because it is the first thing to connect to the named pipe. We grab the lock
            // here and redirect stdout to the named pipe. This opens a writer on the named
            // pipe and the reader thread will unblock. We want to wait for the reader thread
            // to unblock and be ready to receive data before continuing.
            synchronized (redir) {
                redir.redirectStdout();
                try {
                    // wait for the reader thread to be ready to receive data
                    redir.wait();
                } catch (InterruptedException e) {
                }
            }
            // write some data to stdout in our C routine
            redir.someRoutine();
            // All done now, so indicate this with our flag
            redir.keepReading = false;
            // The reader thread may be blocked waiting for something to read and not see
            // the flag. So, wake it up.
            System.out.println("Shut down");
            // Make sure everything is out of stdout and then close it.
            System.out.flush();
            System.out.close();
            // stdout is closed. This will not be visible.
            System.out.println("Won't see this.");
        }

        /**
         * Starts the reader thread which listens to the named pipe and spits the data
         * it receives out to stderr.
         */
        private void openReader() {
            new Thread() {
                public void run() {
                    try {
                        int BUFF_SIZE = 256;
                        byte[] bytes = new byte[BUFF_SIZE];
                        int numRead = 0;
                        // At this point there is no writer connected to the named pipe so this statement
                        // will block until there is.
                        FileInputStream fis = new FileInputStream(NAMED_PIPE);
                        // The reader thread is ready to accept data. Notify the main thread.
                        synchronized (StdoutRedirect.this) {
                            StdoutRedirect.this.notify();
                        }
                        // Keep reading data until EOF or we're told to quit and there is no more data to read
                        while (numRead != -1 && (StdoutRedirect.this.keepReading || fis.available() != 0)) {
                            numRead = fis.read(bytes, 0, BUFF_SIZE);
                            if (numRead > 0) {
                                System.err.print("Received - " + new String(bytes, 0, numRead));
                            }
                        }
                        if (fis != null) {
                            fis.close();
                        }
                        System.err.println("Receiver shut down");
                    } catch (FileNotFoundException e) {
                        e.printStackTrace();
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                }
            }.start();
        }
    } // class StdoutRedirect

    **** StdoutRedirect.c ****
    #include "StdoutRedirect.h"
    #include <sys/types.h>
    #include <sys/stat.h>
    #include <stdio.h>
    #include <fcntl.h>
    #include <string.h>
    #include <errno.h>
    // The filesystem location for the named pipe
    const char *namedPipe = "/tmp/stdoutRedirect";
    * Create the named pipe we're going to redirect stdout through. After this
    * method completes, the pipe will exist but nothing will be connected to it.
    JNIEXPORT void JNICALL Java_StdoutRedirect_setupNamedPipe(JNIEnv *env, jobject obj) {
      // make sure there is no pre-existing file in our way
      remove(namedPipe);
      // create the named pipe for reading and writing
      mkfifo(namedPipe, S_IRWXU);
    * Redirect stdout to our named pipe. After this method completes, stdout
    * and the named pipe will be identical.
    JNIEXPORT void JNICALL Java_StdoutRedirect_redirectStdout(JNIEnv *env, jobject obj) {
      // Open the write end of the named pipe
      int  namedPipeFD = open(namedPipe, O_WRONLY);
      printf("Before redirection...\n");
      // make sure there is nothing left in stdout
      fflush(stdout);
      // duplicate stdout onto our named pipe
      if ( dup2(namedPipeFD, fileno(stdout)) == -1 ) {
        fprintf(stderr, "errno %s.\n", strerror(errno));
        fprintf(stderr, "Couldn't dup stdout\n");
      printf("After redirection.\n");
      // flushing is necessary, otherwise output does not stay in sync with Java layer
      fflush(stdout);
    * Do some random writing to stdout.
    JNIEXPORT void JNICALL Java_StdoutRedirect_someRoutine(JNIEnv *env, jobject obj) {
      int i;
      for ( i = 0; i < 3; i++ ) {
        printf("Message %d\n", i);
      printf("End of messages\n");
      // flushing is necessary, otherwise output does not stay in sync with Java layer
      fflush(stdout);
    }

  • File comparison works on win32 JVM, not on 64-bit Solaris JVM

    Hi all!
    I have the following code comparing 2 files. It works on Win2000 but doesn't work on Solaris (same JVM version).
    The win32 JVM is 32-bit, the Solaris JVM is 64-bit.
    private boolean fichierIdentiqueBytePourByte( InputStream in1, InputStream in2 ) throws IOException {
        int a = 0, b = 0;
        while (true) {
            // read the next byte from both streams
            a = in1.read();
            b = in2.read();
            // if they differ then the files are different and we are done
            if (a != b) return false;
            // if both are at EOF then everything checked must be the same
            if (a == -1 && b == -1) return true;
            // if either is at EOF then they are different sizes
            if (a == -1 || b == -1) return false;
        }
    }
    Any ideas of what the problem might be?
    Thanks

    Presumably when you say it didn't work you mean that in one case it returned true and in the other it returned false.
    If something else then you need to explain what it is.
    Naturally one obvious source of the problem is that in fact the files are not the same on the 64-bit machine. For example, they were transferred using FTP and one was transferred in text mode and the other in binary mode. Then the lengths would be different.
    Another possibility is that you are not running the code that you think you are.
    If the files are very large then it is possible there is some difference there.

  • Solaris JVM question

    Hi,
    I am a newbie on Solaris and have found that there is a significant degradation in performance when my Java application is run on a Solaris JVM as opposed to a Windows JVM.
    I'm running with the -server option which has helped, but was wondering if there is some other tuning that I could do in order to make the HotSpot JVM as fast as possible.
    Thanks,
    Tom

    Check your JVM settings, they make a lot of difference to the performance. It also depends on the load you have on the system. Windows does not do as well when compared with Sun when there is a high load.
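    As a sketch only (the flags are real HotSpot options of that era, but the values are illustrative, not a recommendation for your load):
    java -server -Xms512m -Xmx512m -XX:+UseParallelGC MyApp
    Setting -Xms equal to -Xmx avoids heap-resizing pauses, and on a multi-CPU Solaris box the parallel collector often makes a bigger difference than it would on a Windows desktop.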

  • Can anybody tell me when the JVM process will disappear?

    1. The system is RHEL 5.1 and the PC is my private machine, not connected to the internet or a local network.
    2. I was running a Tomcat performance test; I logged in as "tomcat" and started the Tomcat server, and its pid was 8902.
    3. After 12 hours, I logged in to the server again but could not find process 8902.
    4. I could not find any hs_err_pid.log or core.pid file under the "tomcat" home, and only I use this machine.
    5. The JVM process has enough privileges to read and write the files.
    So I am lost: why did the JVM process disappear without leaving any trace? Thank you for helping me!

    Roy_Li wrote:
    1. The system is RHEL 5.1 and the PC is my private machine, not connected to the internet or a local network.
    2. I was running a Tomcat performance test; I logged in as "tomcat" and started the Tomcat server, and its pid was 8902.
    I don't know how you start it, but it's possible that your program gets a hangup signal and quits when you log out. You should in that case start it with nohup; see the example after the link below.
    [http://en.wikipedia.org/wiki/Nohup]
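    For example, assuming Tomcat's standard startup script (path and log name are illustrative):
    nohup $CATALINA_HOME/bin/startup.sh > tomcat.out 2>&1 &
    With nohup the process ignores the hangup signal sent at logout, and anything it writes to stdout/stderr survives in tomcat.out.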
    Kaj

  • Sometimes "target applet or jvm process exited abruptly" liveconnect

    I have a problem with an applet which is embedded in my webpage and which I am using to set up a socket connection to a server. I am communicating from the webpage to the applet with LiveConnect to send data, and from the applet to the page with LiveConnect for the received data.
    This works fine most of the time, but occasionally at the customer's site I get the error:
    "target applet or jvm process exited abruptly"
    which seems to indicate the JVM has crashed. I searched the client for hs_err* files but did not find them. The client is running the latest JRE, 1.7_05.
    Does anyone have any ideas how to find the root cause of this problem ?
    Kind regards,
    Marco

    Enable full tracing and see what is happening with the applet around the time you get this message. See also the other troubleshooting hints in
    http://docs.oracle.com/javase/7/docs/webnotes/tsg/TSG-Desktop/html/plugin.html
    Maybe the browser "reloads" the page and re-instantiates the applet, or it could be a bug in the applet or the Java platform.
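    Tracing can usually be switched on without touching the applet, via the per-user deployment.properties file (its location varies by platform; the property names below are from the Java deployment guide as I recall them, so treat this as a sketch):
    deployment.trace=true
    deployment.trace.level=all
    The trace files then show up under the deployment log directory, which is easier than searching the whole client machine for them.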

  • When OS Kills JVM Process?

    When OS Kills JVM Process?

    How can we write an OS scheduler to run as a cron job? It should work in a platform-independent way.
    These two statements are not compatible. Cron is not cross-platform.

  • What's in the JVM Process's Memory Space?

    Hello
    I'm noticing the following behavior on an NT system. On application startup, I see:
    Total Heap 9 MB
    Used Heap 5.5 MB
    java.exe memory (from NT Task Manager) 36 MB
    After a "login" operation which loads a few more classes:
    Total Heap 12.5 MB
    Used Heap 8.2 MB
    java.exe memory (from NT Task Manager) 53 MB
    Heap memory leaks have been ruthlessly suppressed (thanks to OptimizeIt and careful programming). The behavior I do not understand is that the NT process (java.exe) increased in size by 17 MB when the Java heap increased by only 3 MB. The .jar file in which the application resides is less than 1 MB, so this 14 MB growth cannot be attributed to new classes being loaded.
    Does anyone know what is going into the process memory space of java.exe? It tends to grow larger and larger.
    Should I even care? Do I want a large allocation of process memory for java.exe, or will that hamper performance of machines with less physical memory? Posts on related topics in this forum have sometimes advocated allocating a lot of memory to the JVM with -X options.
    Thanks

    Hello
    I am facing exactly the same problem on NT. However, on 2000 Server this problem doesn't seem to exist. Are you, by any chance, using JNI? We are extensively using JNI in our servlets and found that there is definitely some memory leak there. We could not figure out any substantial leak on the Java side. In NT's Task Manager java.exe is always listed first, and very rarely does the memory usage seem to come down. On 2000 Server the performance is far better.
    Please visit this link:
    http://forum.java.sun.com/thread.jsp?forum=33&thread=211330
    Regards
    Manish Bhatnagar

  • Switching back from 64-bit JVM process to 32-bit process

    My customer has gone live on WLS 9.2 MP2 on Sun Solaris 10, 64-bit. We have faced several crashes for different reasons. I had applied the 3X4R patch from WebLogic as well. Now, because they are in production and a server crash is unacceptable, I need to roll back to the 32-bit version, which is more stable. So, can anyone point me to documentation that states the steps for rolling back from a 64-bit process to a 32-bit process? The concern here is to ensure that rolling back, along with the 64-bit patches from WebLogic, doesn't create any new issues.
    Please let me know.
    Thanks.

    First of all I am kind of surprised you've got a speed gain of 20% using the 64-bit mode. I thought in general there was not much speed gain and sometimes even a loss of performance. My own reason for using 64-bit mode is to be able to address more than 4 GB of memory.
    That said, it seems you have the choice of either running in 64-bit mode and connecting somehow (RMI?) to a 32-bit JVM that will run your 32-bit native libraries. This seems to be a bit of a fragile and dingy solution to me, which probably quickly loses any speed gains you get from running in 64-bit mode.
    Or you just go for the 32-bit mode all over, and hope that some day in the future your third-party libraries become available in 64-bit mode.
    Honestly, unless you're going to address lots of memory, I think this last solution is the most viable option.
    Good advice!

  • Solaris/JVM/JNI crashes

    Hello,
    I am experiencing periodic crashes in an application while using Solaris 2.8 and JVM 1.3.0. The application is invoking multi-threaded java code from C++ and the conflict causing the crashes seems to be in how the threads are being handled. Is anybody aware of problems using Solaris 2.8 with JVM 1.3 for this purpose?
    Thanks.

    By the way, that presumes that your JNI code is actually robust (either through long-time use or extensive testing/profiling), to rule out a problem with it. If not, then the problem is probably in the JNI code and has nothing to do with Java.

  • Solaris 8 processes memory consumption

    Hi,
    I'm working with Solaris 8 and I have the following problem. On a random basis, one of the users logged on to the server starts a dtterm process that uses a lot of the available memory, and the only solution is to log that user out and then log in again. Here is the ps -eafl output for the processes involved:
    8 S ais 24241 24233 0 57 20 ? 148286 ? 10:13:19 ? 0:00 /usr/dt/bin/dtterm -session dtjmaac
    8 S ais 24243 24233 0 67 20 ? 148286 ? 10:13:19 ? 0:00 /usr/dt/bin/dtterm -session dt8maWb
    and the parent process:
    8 S ais 24233 24210 0 48 20 ? 970 ? 10:13:14 pts/14 0:00 /usr/dt/bin/dtsession.
    Sorry if this problem has already been discussed, but I'm a newbie on this forum. Thanks for any help...
    Andrea Parrini

    tzzhc4 wrote:
    prtmem was part of the MEMTOOLS package you just listed; I believe it relies on a kernel module in that package and doesn't work on any of the newer kernel revisions.
    But it certainly works on 8, right? And that's the OS you were referring to, so I assumed you were thinking of something else.
    From that page:
    System Requirements:     SPARC/Solaris 2.6
                   SPARC/Solaris 7
                   SPARC/Solaris 8
                   SPARC/Solaris 9
                   x86 /Solaris 8
                   x86 /Solaris 9
    So if that's what you want to use, go for it!
    I thought freemem didn't include pages that had an identity, so there could be more memory free than was actually listed in freemem.
    What do you mean by 'identity'? Most pages are either allocated/reserved by a process (in use) or used by the disk cache. Under Solaris 7 and earlier, both reduced the 'freemem' number. Under 8 and later, only the first one does.
    Darren

  • Segmentation Fault on Solaris JVM

    Hi,
    We have a Java application that executes all 'C' code through JNI code.
    It works fine on NT, but on Solaris the JVM suddenly crashes with a segmentation fault. The crashes are random. We did a lot of debugging to ensure that the JVM doesn't crash while we are in the 'C' code.
    We are using jdk1.4.0-b92. Any ideas?
    Here's the stack trace from gdb:
    Program received signal SIGSEGV, Segmentation fault.
    0xfa535000 in ?? ()
    (gdb) bt
    #0 0xfa535000 in ?? ()
    #1 0xfa53908c in ?? ()
    #2 0xfa538f50 in ?? ()
    #3 0xfa534480 in ?? ()
    #4 0xfa52d560 in ?? ()
    #5 0xfa405c54 in ?? ()
    #6 0xfa405b88 in ?? ()
    #7 0xfa405da8 in ?? ()
    #8 0xfa405da8 in ?? ()
    #9 0xfa405da8 in ?? ()
    #10 0xfa400440 in ?? ()
    #11 0xfe0fd9ac in __1cJJavaCallsLcall_helper6FpnJJavaValue_pnMmethodHandle_pnRJavaCallArguments_pnGThread__v_ ()
    #12 0xfe10f644 in __1cJJavaCallsMcall_virtual6FpnJJavaValue_nLKlassHandle_nMsymbolHandle_4pnRJavaCallArguments_pnGThread__v_ ()
    #13 0xfe10f4a4 in __1cJJavaCallsMcall_virtual6FpnJJavaValue_nGHandle_nLKlassHandle_nMsymbolHandle_5pnGThread__v_ ()
    #14 0xfe10f42c in __1cMthread_entry6FpnKJavaThread_pnGThread__v_ ()
    #15 0xfe10f13c in __1cKJavaThreadDrun6M_v_ ()
    #16 0xfe0fc284 in _start ()

    I had the same segmentation fault and found that the length of a value assigned to a char array variable exceeded the size of the array. After I fixed that, the segmentation fault went away.
    On Solaris, when I use the Java JFileChooser to select a file, the file path often includes a lot of "../", which can make the path over 200 characters long. When I assigned that path to a char array variable that was only 200 characters long, the segmentation fault happened.

  • Solaris 10 Process Creation Cost (SMP)

    Hey there,
    I'm writing an authentication adapter for an in-house application in C.
    The first throwaway prototype forks a child and execs a shell script which uses the Solaris ldapsearch utility to query an AD controller.
    An alternative would be to implement the whole logic in the adapter itself. This way no new processes get forked.
    The nice thing about the script approach is that I get configurability and easy debugging for free.
    The bad thing is that ~2 additional processes get created for each user that logs in.
    ~400 users with similar usage patterns over the day, with a peak between 8:00 and 10:00, I guess.
    Now the question: besides the CPU and memory resources each process needs, are there any Solaris / Sun hardware-specific issues to take into account?
    I heard horror stories about CPU cross-calls halting all n CPUs while a process gets created, and similar things.
    Linux is / was known for a pretty low process creation overhead. How's Solaris doing in this field?
    Thanks for any hints / pointers whatever
    Regards Robert

    Well, since you asked....
    I benchmarked the exact same machine, once under Linux (Fedora Core 7 with a vanilla kernel) and once under Solaris (Solaris 10 125101-10). The machine used is based on a Tyan Tiger MP motherboard (AMD760MPX chipset) with 2 Athlon MP2400+ CPUs and 2GB DDR memory. The HD used under Solaris is an 80GB Seagate Barracuda running on the AMD chipset's ATA100 bus, and the filesystem is UFS with logging. The Linux install is running off of a pair of 250GB Seagate Barracudas connected to a Promise SATA300+4 controller (which I am in the process of writing a Solaris driver for). The filesystem is ext3 on a mirrored volume. Compilation on both systems was performed with GCC 4.2.1 using identical compiler flags, and bash was the default shell on both systems.
    Here's the Linux results:
    BYTE UNIX Benchmarks (Version 4.1.0)
    System -- Linux defiant 2.6.22.1 #1 SMP PREEMPT Sun Aug 5 16:34:04 MST 2007 i686 athlon i386 GNU/Linux
    Start Benchmark Run: Mon Aug 27 20:43:40 MST 2007
      1 interactive users.
      20:43:40 up 3 min,  1 user,  load average: 1.20, 0.83, 0.34
    lrwxrwxrwx 1 root root 4 2007-08-05 04:07 /bin/sh -> bash
    /bin/sh: symbolic link to `bash'
                          227873832 116041816 100069944  54% /
    Dhrystone 2 using register variables     5778801.2 lps   (10.0 secs, 10 samples)
    Double-Precision Whetstone                 1681.3 MWIPS (10.0 secs, 10 samples)
    System Call Overhead                     813180.9 lps   (10.0 secs, 10 samples)
    Pipe Throughput                          415162.7 lps   (10.0 secs, 10 samples)
    Pipe-based Context Switching              64908.7 lps   (10.0 secs, 10 samples)
    Process Creation                           6424.4 lps   (30.0 secs, 3 samples)
    Execl Throughput                           1421.4 lps   (29.9 secs, 3 samples)
    File Read 1024 bufsize 2000 maxblocks    282682.0 KBps  (30.0 secs, 3 samples)
    File Write 1024 bufsize 2000 maxblocks   197633.0 KBps  (30.0 secs, 3 samples)
    File Copy 1024 bufsize 2000 maxblocks    113833.0 KBps  (30.0 secs, 3 samples)
    File Read 256 bufsize 500 maxblocks      140090.0 KBps  (30.0 secs, 3 samples)
    File Write 256 bufsize 500 maxblocks      84382.0 KBps  (30.0 secs, 3 samples)
    File Copy 256 bufsize 500 maxblocks       50505.0 KBps  (30.0 secs, 3 samples)
    File Read 4096 bufsize 8000 maxblocks    384206.0 KBps  (30.0 secs, 3 samples)
    File Write 4096 bufsize 8000 maxblocks   305153.0 KBps  (30.0 secs, 3 samples)
    File Copy 4096 bufsize 8000 maxblocks    164475.0 KBps  (30.0 secs, 3 samples)
    Shell Scripts (1 concurrent)               2355.7 lpm   (60.0 secs, 3 samples)
    Shell Scripts (8 concurrent)                458.3 lpm   (60.0 secs, 3 samples)
    Shell Scripts (16 concurrent)               236.0 lpm   (60.0 secs, 3 samples)
    Arithmetic Test (type = short)           376603.4 lps   (10.0 secs, 3 samples)
    Arithmetic Test (type = int)             397807.5 lps   (10.0 secs, 3 samples)
    Arithmetic Test (type = long)            391745.4 lps   (10.0 secs, 3 samples)
    Arithmetic Test (type = float)           783164.0 lps   (10.0 secs, 3 samples)
    Arithmetic Test (type = double)          786266.0 lps   (10.0 secs, 3 samples)
    Arithoh                                       0.0 lps   (10.0 secs, 3 samples)
    C Compiler Throughput                       460.0 lpm   (60.0 secs, 3 samples)
    Dc: sqrt(2) to 99 decimal places          42626.4 lpm   (30.0 secs, 3 samples)
    Recursion Test--Tower of Hanoi            99086.0 lps   (20.0 secs, 3 samples)
                         INDEX VALUES
    TEST                                        BASELINE     RESULT      INDEX
    Dhrystone 2 using register variables        116700.0  5778801.2      495.2
    Double-Precision Whetstone                      55.0     1681.3      305.7
    Execl Throughput                                43.0     1421.4      330.6
    File Copy 1024 bufsize 2000 maxblocks         3960.0   113833.0      287.5
    File Copy 256 bufsize 500 maxblocks           1655.0    50505.0      305.2
    File Copy 4096 bufsize 8000 maxblocks         5800.0   164475.0      283.6
    Pipe Throughput                              12440.0   415162.7      333.7
    Process Creation                               126.0     6424.4      509.9
    Shell Scripts (8 concurrent)                     6.0      458.3      763.8
    System Call Overhead                         15000.0   813180.9      542.1
                                                                    =========
        FINAL SCORE                                                     392.9
    Here are the Solaris results:
      BYTE UNIX Benchmarks (Version 4.1.0)
      System -- SunOS defiant 5.10 Generic_125101-10 i86pc i386 i86pc
      Start Benchmark Run: Monday, August 27, 2007  9:50:37 PM MST
       4 interactive users.
        9:50pm  up 10 min(s),  1 user,  load average: 0.16, 0.17, 0.13
      lrwxrwxrwx   1 root     root           4 Aug 27 21:49 /bin/sh -> bash
      /bin/sh:      ELF 32-bit LSB executable 80386 Version 1, dynamically linked, stripped
      /dev/dsk/c1d0s0      16525422 13567043 2793125    83%    /
    Dhrystone 2 using register variables     5209324.3 lps   (10.0 secs, 10 samples)
    Double-Precision Whetstone                 1677.5 MWIPS (10.0 secs, 10 samples)
    System Call Overhead                     562730.0 lps   (10.0 secs, 10 samples)
    Pipe Throughput                          573459.5 lps   (10.0 secs, 10 samples)
    Pipe-based Context Switching              32378.9 lps   (10.0 secs, 10 samples)
    Process Creation                            964.9 lps   (30.0 secs, 3 samples)
    Execl Throughput                            493.8 lps   (29.9 secs, 3 samples)
    File Read 1024 bufsize 2000 maxblocks    270440.0 KBps  (30.0 secs, 3 samples)
    File Write 1024 bufsize 2000 maxblocks   164019.0 KBps  (30.0 secs, 3 samples)
    File Copy 1024 bufsize 2000 maxblocks     86882.0 KBps  (30.0 secs, 3 samples)
    File Read 256 bufsize 500 maxblocks      138906.0 KBps  (30.0 secs, 3 samples)
    File Write 256 bufsize 500 maxblocks      87541.0 KBps  (30.0 secs, 3 samples)
    File Copy 256 bufsize 500 maxblocks       44631.0 KBps  (30.0 secs, 3 samples)
    File Read 4096 bufsize 8000 maxblocks    348029.0 KBps  (30.0 secs, 3 samples)
    File Write 4096 bufsize 8000 maxblocks   214112.0 KBps  (30.0 secs, 3 samples)
    File Copy 4096 bufsize 8000 maxblocks    119302.0 KBps  (30.0 secs, 3 samples)
    Shell Scripts (1 concurrent)                578.7 lpm   (60.0 secs, 3 samples)
    Shell Scripts (8 concurrent)                112.7 lpm   (60.0 secs, 3 samples)
    Shell Scripts (16 concurrent)                57.0 lpm   (60.0 secs, 3 samples)
    Arithmetic Test (type = short)           391136.1 lps   (10.0 secs, 3 samples)
    Arithmetic Test (type = int)             407158.5 lps   (10.0 secs, 3 samples)
    Arithmetic Test (type = long)            407443.4 lps   (10.0 secs, 3 samples)
    Arithmetic Test (type = float)           837535.1 lps   (10.0 secs, 3 samples)
    Arithmetic Test (type = double)          837482.8 lps   (10.0 secs, 3 samples)
    Arithoh                                       0.0 lps   (10.0 secs, 3 samples)
    C Compiler Throughput                       617.7 lpm   (60.0 secs, 3 samples)
    Dc: sqrt(2) to 99 decimal places          14656.0 lpm   (30.0 secs, 3 samples)
    Recursion Test--Tower of Hanoi           100918.0 lps   (20.0 secs, 3 samples)
                         INDEX VALUES
    TEST                                        BASELINE     RESULT      INDEX
    Dhrystone 2 using register variables        116700.0  5209324.3      446.4
    Double-Precision Whetstone                      55.0     1677.5      305.0
    Execl Throughput                                43.0      493.8      114.8
    File Copy 1024 bufsize 2000 maxblocks         3960.0    86882.0      219.4
    File Copy 256 bufsize 500 maxblocks           1655.0    44631.0      269.7
    File Copy 4096 bufsize 8000 maxblocks         5800.0   119302.0      205.7
    Pipe Throughput                              12440.0   573459.5      461.0
    Pipe-based Context Switching                  4000.0    32378.9       80.9
    Process Creation                               126.0      964.9       76.6
    Shell Scripts (8 concurrent)                     6.0      112.7      187.8
    System Call Overhead                         15000.0   562730.0      375.2
                                                                     =========
         FINAL SCORE                                                     211.7
    You can see that Linux is much faster than Solaris when it comes to process creation. Linux has traditionally had a pretty lightweight process creation. Solaris process creation is slower, although it doesn't seem to be "heavy". I ran 20 concurrent instances of "spawn" (the benchmark for process creation) under Solaris and despite 100% CPU usage the system was completely responsive. The creation of processes didn't monopolize the CPUs if other processes needed to run. It is also interesting to note that no matter whether I run one instance or many instances of spawn the total number of processes created per second is always around 900ish. This suggests to me that the processes are spending most of their time waiting on mutexes. The locks are no doubt adaptive, so at least one creation thread is spinning if it can, which causes the high CPU utilization. Nevertheless, the scheduler will make sure that any other processes that need CPU time get it regardless.
    What this boils down to is that a lot of concurrent process creation isn't going to make much impact on system performance despite the high usage reported by top. It also means that, unless you need to create more than several hundred processes per second you should be fine. Linux might be able to create six times as many processes per second, but not when those processes actually want CPU cycles. Finally, keep in mind that Solaris is more thread-centric, and thread creation on any system is always cheaper than process creation.
    This is what top reads as I type this (I also opened Sun Studio, just to add some load to the system):
    load averages: 32.16, 21.47, 17.61                                     20:33:19
    144 processes: 105 sleeping, 28 running, 8 zombie, 1 stopped, 2 on cpu
    CPU states:  0.0% idle,  7.7% user, 92.3% kernel,  0.0% iowait,  0.0% swap
    Memory: 1791M real, 1039M free, 808M swap in use, 2670M swap free
    I don't even notice it. So, unless making the LDAP calls is extremely processor intensive, I would say you should be fine.
