Extremely Large Memory Footprint under Linux

I have not experienced this problem myself, as I don't use Linux regularly, but a friend of mine has dismissed Java as too slow because of it. He says that after he launches Forte or JBuilder (both Java apps), they take about 500 MB of RAM. I know Forte is a memory hog, but something is very wrong here. He says that he's using IBM's JRE 1.3.1 and some Debian distro of Linux.
Also, he said that he found info on Java object "headers" each taking up a huge amount of memory. A plain Java object takes only 8 bytes of memory, and I've never heard of this header business before. This is the main reason I posted in the advanced forum.
Has anyone seen this type of problem before? I have no problem running Java apps on Windows (the performance is very good), and I'm assuming many people are running them successfully on Linux as well. Any info on this is much appreciated.
Thanks.

Your friend is mistaken; the typical JVM footprint on Linux is not what he thinks it is.
Part of the "problem" is the way Linux reports threads in CLI programs like ps and top. If you look at the output from those programs, you'd think the JVM is eating you alive - but it's not. I have 512MB of memory and I constantly run one JVM for a small home-control client, not to mention firing off an IDE to work and test in - with no memory problems whatsoever.
For example, I just started NetBeans 3.3.1 up while I was typing this and "top" reports this (sorted by memory usage):
  PID USER     PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME COMMAND
5722 crackers  20   0 98772  96M 41968 S     0.0 19.2   0:00 java
5723 crackers  15   0 98772  96M 41968 S     0.0 19.2   0:00 java
5724 crackers  15   0 98772  96M 41968 S     0.0 19.2   0:03 java
5725 crackers  15   0 98772  96M 41968 S     0.0 19.2   0:00 java
5726 crackers  15   0 98772  96M 41968 S     0.0 19.2   0:00 java
5727 crackers  15   0 98772  96M 41968 S     0.0 19.2   0:00 java
5728 crackers  20   0 98772  96M 41968 S     0.0 19.2   0:00 java
5729 crackers  20   0 98772  96M 41968 S     0.0 19.2   0:00 java
5730 crackers  15   0 98772  96M 41968 S     0.0 19.2   0:03 java
5732 crackers  15   0 98772  96M 41968 S     0.0 19.2   0:00 java
5733 crackers  15   0 98772  96M 41968 S     0.0 19.2   0:00 java
5735 crackers  15   0 98772  96M 41968 S     0.0 19.2   0:00 java
5736 crackers  15   0 98772  96M 41968 S     0.0 19.2   0:10 java
5737 crackers  15   0 98772  96M 41968 S     0.0 19.2   0:00 java
5738 crackers  15   0 98772  96M 41968 S     0.0 19.2   0:00 java
5739 crackers  15   0 98772  96M 41968 S     0.0 19.2   0:00 java
5740 crackers  15   0 98772  96M 41968 S     0.0 19.2   0:02 java
5741 crackers  16   0 98772  96M 41968 S     0.0 19.2   0:00 java
5742 crackers  16   0 98772  96M 41968 S     0.0 19.2   0:00 java
5743 crackers  15   0 98772  96M 41968 S     0.0 19.2   0:00 java
5744 crackers  15   0 98772  96M 41968 S     0.0 19.2   0:00 java
5746 crackers  15   0 98772  96M 41968 S     0.0 19.2   0:00 java
5747 crackers  15   0 98772  96M 41968 S     0.0 19.2   0:00 java
5749 crackers  15   0 98772  96M 41968 S     0.0 19.2   0:00 java
5750 crackers  15   0 98772  96M 41968 S     0.0 19.2   0:00 java
5751 crackers  15   0 98772  96M 41968 S     0.0 19.2   0:00 java
5752 crackers  16   0 98772  96M 41968 S     0.0 19.2   0:00 java
5754 crackers  20   0 98772  96M 41968 S     0.0 19.2   0:00 java
5755 crackers  15   0 98772  96M 41968 S     0.0 19.2   0:00 java
5756 crackers  15   0 98772  96M 41968 S     0.0 19.2   0:00 java
5757 crackers  16   0 98772  96M 41968 S     0.0 19.2   0:00 java
Amazing how all those java "processes" each take 19.2% of the memory - which would mean that I'm using 614% of it. In reality each line is a thread of the same JVM sharing the same memory, so the actual footprint is what just one of those lines reports.
Java is no slower (and actually a trifle faster) on Linux than it is on a Windows machine.
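If you want to verify this on your own box, here is a minimal sketch of my own (not from the original posts; it only uses the standard Runtime and Thread APIs): run it, find the process in top, and compare a single java line against what the JVM itself reports.

public class JvmFootprint {
    public static void main(String[] args) throws Exception {
        Runtime rt = Runtime.getRuntime();
        // The many identical "java" lines in old Linux tops are threads of one
        // process; their SIZE/RSS columns all describe the same shared memory.
        System.out.println("threads alive : " + Thread.activeCount());
        System.out.println("heap used  kB : " + (rt.totalMemory() - rt.freeMemory()) / 1024);
        System.out.println("heap total kB : " + rt.totalMemory() / 1024);
        Thread.sleep(60000); // keep the process alive so you can inspect it in top
    }
}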

Similar Messages

  • Lightroom's large memory footprint

    After massaging many pictures in "develop" mode, the system became very slow (locking up for 30 seconds at a time). I opened Process Explorer and found Lightroom was consuming 1.8 GB of virtual memory and had a working set of about 1.2 GB. This seems quite excessive for general photo editing; I'm really only performing simple adjustments like color and contrast.
    I closed down Lightroom and restarted it, and it then worked fine again for another 50 or 60 pictures, at which point the slowness occurred again and the memory footprint was up again. Now that I know what to expect, I'm shutting LR down every 30 pictures or so to avoid the excessive memory consumption.
    I suspect there is a memory leak or creep in LR.
    I have a machine with 4 GB of RAM, running Vista Ultimate.

    EricP,
    LR does "accumulated background work" when nothing else is going on, especially if you have the Library set to All Photos. It also appears that LR is very sensitive to where the pagefile(s) are located and their size. I can only speak to XP Pro, though; Vista is a different animal. You might try putting a controlled size [1.5x RAM size for both Min and Max values] on both [or more] HDs you have. Also set the system to keep as much of the kernel in RAM as possible, and set the system to optimize for Applications. Those changes helped me. If they can be accomplished in Vista, they may help you too.
    Good luck and keep us informed if you get any fixes working.
    Mel

  • ADF UIX WebStart - large memory footprint

    Hi everyone,
    I am running a three-tier model JClient app with Java Web Start, Java SDK 5.0 with JVM 5.0. It creates a large footprint and then reports that there is no more memory left to allocate to the app.
    I looked at whether the JVM was the cause, but I am using the Java HotSpot virtual machine. So what I am wondering is: where is the problem? Is it the model, etc.?
    Any help would be appreciated. Thanks!

    Hi there,
    This observation may be coming a little late to be of use to you, but we thought we'd post it here for others' benefit.
    We encountered a similar situation with our ADF application. In the end, the following tweaking helped reduce the heap size and brought back our app's GUI performance.
    1. Added the following options to the Sun JDK:
    -XX:SoftRefLRUPolicyMSPerMB=100 -XX:+ParallelRefProcEnabled
    This did the magic. Over and above this, we also tried the following option settings to tune ADF Security, but they didn't seem to give any further improvement:
    -DUSE_JAAS=false -Djps.policystore.hybrid.mode=false -Djps.combiner.optimize.lazyeval=true -Djps.combiner.optimize=true -Djps.authz=ACC -Djbo.debugoutput=silent
    2. Alternatively, we also tried the JRockit JVM, and interestingly enough, JRockit handled the soft-reference clearing very well out of the box. No tweaking was required.
    We suspect this could be an issue with the configuration of security in our app. As of now, we are not sure yet, but we have a temporary workaround.
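    For anyone wondering what -XX:SoftRefLRUPolicyMSPerMB actually changes: HotSpot clears a soft reference once it has gone unused for roughly that many milliseconds per megabyte of free heap, so lowering it from the default 1000 makes soft-reference caches empty out sooner. A toy sketch to watch the effect (my own illustration, not code from the ADF app in this thread):

    import java.lang.ref.SoftReference;
    import java.util.ArrayList;
    import java.util.List;

    public class SoftRefDemo {
        public static void main(String[] args) {
            // 1 MB chunks held only through soft references: the collector may
            // clear them instead of throwing OutOfMemoryError. How eagerly it
            // does so is what -XX:SoftRefLRUPolicyMSPerMB tunes.
            List<SoftReference<byte[]>> cache = new ArrayList<SoftReference<byte[]>>();
            for (int i = 0; i < 10000; i++) {
                cache.add(new SoftReference<byte[]>(new byte[1024 * 1024]));
            }
            int alive = 0;
            for (SoftReference<byte[]> ref : cache) {
                if (ref.get() != null) alive++;
            }
            // Run with e.g. -Xmx64m -XX:SoftRefLRUPolicyMSPerMB=100, then with
            // the default value, and compare the survivor counts.
            System.out.println(alive + " of " + cache.size() + " chunks still cached");
        }
    }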

  • Large Memory Pages and Mapped Byte Buffers worthwhile?

    Hi
    I have just been reading about large memory pages, which let the OS work with memory pages of 2 MB instead of 4 kB, thus reducing translation lookaside buffer (TLB) misses on the CPU. However, my use case revolves around using mapped byte buffers on a machine with 32 GB of memory and file sizes of 400 GB or more for the file underlying the mapped buffer. My program is very memory intensive and causes the OS to do a lot of paging, hence my desire for any way to speed up access. Would using large memory pages on Linux and the Java VM option -XX:+UseLargePages have any effect on reading/writing data from mapped byte buffers, or is it only applicable to data that is always resident in memory?
    Thanks in advance

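    For readers who land here with the same question: as far as I know, -XX:+UseLargePages governs JVM-internal regions such as the Java heap, not the kernel's page cache behind an mmap'd file, so it is unlikely to change MappedByteBuffer traffic by itself. A minimal mapping sketch for reference; the file path and window size are made up:

    import java.io.RandomAccessFile;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;

    public class MapWindow {
        public static void main(String[] args) throws Exception {
            // Map a 1 GB window of a huge file and touch every page once.
            // The mapping is demand-paged by the kernel; JVM large-page
            // options govern the Java heap, not these file-backed pages.
            try (RandomAccessFile raf = new RandomAccessFile("/data/huge.bin", "r");
                 FileChannel ch = raf.getChannel()) {
                long offset = 0L;
                long window = 1L << 30; // each window must stay under Integer.MAX_VALUE
                MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_ONLY, offset, window);
                long sum = 0;
                for (int i = 0; i < buf.limit(); i += 4096) { // one touch per 4 kB page
                    sum += buf.get(i);
                }
                System.out.println("checksum: " + sum);
            }
        }
    }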

  • Safari on Windows has huge memory footprint

    Running Gmail app:
    Safari - 85,304K
    IE7 - 33,884K
    Firefox - 28,572K
    I know it's a beta, but I've noticed that Firefox still seems to be the best. It has the smallest memory footprint, it's as fast as Safari (as far as I can humanly tell), and it's the most standards-compliant.
    There are still some web pages with heavy AJAX controls and other JavaScript that Safari doesn't handle well. Hopefully those will get ironed out during the beta, and the memory footprint brought under better control.
    IBM ThinkPad Intel Duo   Windows Vista  

    Hmmm... Firefox conservative with memory? That's a good joke!
    The other day I was surfing the web, and doing little else, when I noticed a marked slowdown in performance (I have an E6700 and 2 GB of RAM), so I was annoyed that my system could be faltering just from using Firefox alone.
    One look at Task Manager made my jaw hit the floor! That crafty fox had hogged almost 1.35 GB of RAM!!!! With the optional extras I load at startup, I was left with 34 MB to play with! Now that's a ridiculous memory footprint!
    Alas, this problem has been around since the stone age; the dev guys at Mozilla seem unwilling or unable to sort it out.

  • Getting JACE examples to compile under linux

    Hi,
    I've been very interested in using the JACE classes (jace.sf.net), but I haven't been able to get any help from their users. I'm stuck trying to get the basic examples to compile under Red Hat 9 Linux. In the examples folders there are various scripts called "compile.sh" and "link.sh", but with no documentation or usage tips, and the scripts have ill-defined paths to the sources and includes. I'm interested in learning whether other people have been able to use this seemingly interesting but largely unsupported library under Linux, and how they got it to work.
    cheers all

    It looks like you don't have your classpath set to the proper value.
    There are many postings in these forums that discuss setting the classpath. Try searching for the keyword classpath.

  • Why does Firefox use an extremely large amount of memory?

    Why does Firefox use such an extremely large amount of memory?
    I'm running Firefox 7.0.1 on Windows 7, and it sits more or less constantly above 1.5 GB of memory utilization, twice as much as version 6.
    Best regards
    Jonas Walther

    Hi musicfan,
    Sorry you are having problems with Firefox. Maybe you should have asked earlier and we could have fixed it.
    Reading your comments, I do not see that rolling back to an insecure Firefox 22 will actually help you much. You are probably best off using IE, unless you have also damaged that.
    *[[Export bookmarks to Internet Explorer]]
    You should not use old versions; they are insecure, because security fixes are publicised and the flaws they patch remain exploitable in older releases.
    * [[Install an older version of Firefox]]
    * https://www.mozilla.org/security/known-vulnerabilities/firefox.html
    Most others will not be having such problems. We can now say that with confidence because, after developers missed a regression in Firefox 4, telemetry was introduced so that data could be gathered. It may be an idea to turn on telemetry, if you have not already done so, and to stick with Firefox.
    *[[Send performance data to Mozilla to help improve Firefox]]
    Trying safe mode takes seconds. Unfortunately, if you are not willing to do even rudimentary troubleshooting, there is not anything we can do to help you.
    *[[Troubleshoot Firefox issues using Safe Mode]]

  • Memory leak under GNU/Linux when using exec()

    Hi,
    We detected that our application was taking all the free memory on the machine when we used the exec() method intensively and periodically to execute OS commands. The machine runs a GNU/Linux-based OS.
    So, in order to do some monitoring, we decided to write a simple program that called exec() an infinite number of times, and using the NetBeans profiler we saw a memory leak in the program: the number of surviving generations increased over the whole execution time. The classes with the most surviving generations are java.lang.ref.Finalizer, java.io.FileDescriptor and byte[].
    We also tested this simple program on Windows, and on that OS the memory leak disappeared: the number of surviving generations was almost stable.
    I attach the code of the program.
    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStream;
    import java.io.InputStreamReader;

    public class testExec {
        public static void main(String[] args) throws IOException, InterruptedException {
            Runtime runtime = Runtime.getRuntime();
            while (true) {
                Process process = null;
                InputStream is = null;
                InputStreamReader isr = null;
                BufferedReader br = null;
                try {
                    process = runtime.exec("ls");
                    //process = runtime.exec("cmd /c dir");
                    is = process.getInputStream();
                    isr = new InputStreamReader(is);
                    br = new BufferedReader(isr);
                    String line;
                    while ((line = br.readLine()) != null) {
                        System.out.println(line);
                    }
                } finally {
                    if (process != null) {
                        process.waitFor();
                    }
                    if (br != null) {
                        br.close();
                    }
                    if (isr != null) {
                        isr.close();
                    }
                    if (is != null) {
                        is.close();
                    }
                    if (process != null) {
                        process.destroy();
                    }
                }
            }
        }
    }
    Is there anything wrong with the test program we wrote? (We know it is not usual to call ls/dir an infinite number of times, but it's just a test.)
    Why do we have a memory leak on Linux but not on Windows?
    I will appreciate any help or ideas. Thanks in advance.
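    A commonly suggested workaround for this symptom (a sketch of mine, not from this thread) is to drain and close all three streams of the child process, stderr and stdin included, so the file descriptors are released immediately instead of waiting for the finalizer thread:

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.InputStream;

    public class ExecOnce {
        static String run(String command) throws IOException, InterruptedException {
            Process p = Runtime.getRuntime().exec(command);
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buf = new byte[4096];
            InputStream stdout = p.getInputStream();
            InputStream stderr = p.getErrorStream();
            try {
                p.getOutputStream().close();   // child's stdin: unused, close right away
                for (int n; (n = stdout.read(buf)) != -1; ) out.write(buf, 0, n);
                // For commands that write a lot to stderr, drain the two
                // streams concurrently to avoid a pipe-buffer deadlock.
                while (stderr.read(buf) != -1) { /* drain and discard */ }
            } finally {
                stdout.close();
                stderr.close();
                p.waitFor();
                p.destroy();
            }
            return out.toString();
        }

        public static void main(String[] args) throws Exception {
            // Same endless-exec experiment as the test program above.
            for (int i = 0; i < 10000; i++) {
                run("ls");
            }
            System.out.println("done");
        }
    }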

    Hi Joby,
    From our last profiling results, we haven't yet found a proper solution. We think the problem is probably caused by the byte[]s/FileInputStreams created by the UNIXProcess class to manage the stdin, stdout and stderr streams. It seems these byte arrays cannot be collected correctly by the garbage collector, and they grow bigger and bigger until, in the end, they take all the memory in the system.
    We downloaded the latest version of OpenJDK 6 (build b19) and modified UNIXProcess.java.linux so that when its destroy() method is called, we assign null to those streams. We did that to indicate to the garbage collector that these objects could be collected, as we saw that the close() methods don't do anything in their implementation.
    public void destroy() {
        // There is a risk that pid will be recycled, causing us to
        // kill the wrong process!  So we only terminate processes
        // that appear to still be running.  Even with this check,
        // there is an unavoidable race condition here, but the window
        // is very small, and OSes try hard to not recycle pids too
        // soon, so this is quite safe.
        synchronized (this) {
            if (!hasExited)
                destroyProcess(pid);
        }
        try {
            stdin_stream.close();
            stdout_stream.close();
            stderr_stream.close();
            // LINES WE ADDED
            stdin_stream = null;
            stdout_stream = null;
            stderr_stream = null;
        } catch (IOException e) {
            // ignore
            e.printStackTrace();
        }
    }
    But this didn't work at all. We saw that we were able to run our application for a long time and the free memory of the system wasn't decreasing as before, but when we profiled this custom JVM with the test application we still saw more or less the same behaviour: lots of surviving generations, at some point the used heap grows to the maximum allowed, and finally the test app crashes.
    So sadly, we still don't have a solution for this problem. You could try to compile OpenJDK 6, modify it, and try it with your program to see if the latest version works for you. Compiling OpenJDK 6 on Linux is quite easy: you just have to download the source and the binaries from here and configure your environment with something like this:
    export ANT_HOME=/opt/apache-ant-1.7.1/
    export ALT_BOOTDIR=/usr/lib/jvm/java-6-sun
    export ALT_OUTPUTDIR=/tmp/openjdk
    export ALT_BINARY_PLUGS_PATH=/opt/openjdk-binary-plugs/
    export ALT_JDK_IMPORT_PATH=/usr/lib/jvm/java-6-sun
    export LD_LIBRARY_PATH=
    export CLASSPATH=
    export JAVA_HOME=
    export LANG=C
    export CC=/usr/bin/gcc-4.3
    export CXX=/usr/bin/g++-4.3
    Hope it helps, Joby :)
    Cheers.

  • Query is allocating too large memory error in OBIEE 11g

    Hi ,
    We have one pivot table (A) in our dashboard displaying revenue against an entity hierarchy (8 levels under the hierarchy), and another pivot table (B) displaying revenue against a customer hierarchy (3 levels under it).
    Both tables run fine in our OBIEE 11.1.1.6 environment (Windows).
    After deploying the same code (RPD & catalog) on a Unix OBIEE 11.1.1.6 server, it throws the below error while populating pivot table A:
    Error Codes: OPR4ONWY:U9IM8TAC:OI2DL65P
    *State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred. [nQSError: 43113] Message returned from OBIS. [nQSError: 96002] Essbase Error: Internal error: Query is allocating too large memory ( > 4GB) and cannot be executed. Query allocation exceeds allocation limits. (HY000)*
    But pivot table B runs fine. Help please!
    data source used : essbase 11.1.2.1
    Thanks
    sayak

    Hi Dpka,
    Yes! Our Linux OBIEE environment is hitting a separate Essbase server.
    I'll execute the query in Essbase and get back to you!
    Thanks
    sayak

  • When do WL streams release byte[]? Smaller footprint under JBoss.

    We have a customer that started having memory issues. When we profiled with JProbe, we saw that half the memory usage they were seeing on the client was occupied by the byte[]s corresponding to the objects streamed back from the stateless session bean (SSB) running on the application server. In fact, several calls to different SSBs that return very large amounts of data were responsible for large amounts of data hanging around on the client. I understand that WebLogic has to hold onto these data structures to resolve multiple streams of the same object on the server side to the same object on the client. My question is: what controls garbage collection of this data? Under what circumstances will it be released?
    When we ran the same tests under JBoss, the memory footprint was much smaller. If we could understand the BEA mechanism and force the stream to be reset (or do something that indirectly forces the stream to be reset), we would be happy.
    Thoughts?
    Thanks in advance.
    Eric
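    On the general mechanism, independent of WebLogic: JDK object serialization keeps a handle table of every object written, so a long-lived ObjectOutputStream pins everything it has ever sent until reset() is called. A plain-JDK sketch of that behaviour (standard serialization only; BEA's T3 stream internals, which this thread is really about, may differ):

    import java.io.ObjectOutputStream;
    import java.io.OutputStream;

    public class StreamReset {
        public static void main(String[] args) throws Exception {
            OutputStream sink = new OutputStream() {      // /dev/null-style sink
                public void write(int b) {}
                public void write(byte[] b, int off, int len) {}
            };
            ObjectOutputStream oos = new ObjectOutputStream(sink);
            for (int i = 0; i < 1000; i++) {
                // The stream's handle table references every object written,
                // so repeats can be sent as back-references. reset() discards
                // that table on both ends of the connection.
                oos.writeObject(new byte[1024 * 1024]);   // 1 MB payload per call
                oos.reset();   // without this line, all 1000 arrays stay reachable
            }
            oos.close();
            System.out.println("wrote 1000 payloads without retaining them");
        }
    }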

    Excellent Blog. Thank You
    Small clarification on Step **6) Oracle Home Directory, ...a) Resize the Root Partition**
    Ubuntu 11.10 has GParted available as an Ubuntu software download; DON'T use that while trying the above step. Instead, download the ISO file from http://sourceforge.net/projects/gparted/files/gparted-live-stable/ gparted-live-0.12.0-5.iso (124.6 MB).
    Burn that ISO file to a blank DVD and reboot Ubuntu, selecting the Boot from DVD option during startup if it is not already selected. This will take you to the boot menu of GParted Live; select the first menu option, which allows you to do further actions such as resizing.
    And once you have chosen and executed step a), do NOT also run step b), that is, "Setup External Storage".
    I hope this minor clarification can avoid some confusion.
    Regards
    Madhusudhan Rao
    Edited by: MadhusudhanRao on Mar 24, 2012 11:30 PM

  • Memory footprint is HUGE

    I just wanted to see if anyone else has a concern about the memory footprint and when/if this will be addressed. We have an ADF web app, and now when we try to run it under JDeveloper 11g, the combination of JDeveloper and the WebLogic Java process is over 900 MB and grows whenever you click around. Under the previous TP4 release this was less than half.
    I have Windows XP with Firefox, Oracle XE, and JDeveloper/WebLogic, and the memory footprint is at 2 GB. We already had to upgrade our systems; do we need to upgrade yet again???

    It seems that the command line for starting the embedded WebLogic has two instances of the -Xmx and -Xms parameters. I think the last one is the one that is used, and it is set to 1024M, which is large for a large portion of development projects.
    The parameters are present in setDomainEnv.sh/cmd. It is situated in <JDEV_HOME?>system11.1.1.0.31.51.56/DefaultDomain/bin
    I've seen this directory show up in funny places, so search for it if you can't find it.
    I've set the second set of parameters to the same as the first: -Xms256m -Xmx512m.
    Trygve
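    A quick way to check which of the duplicated flags won (a snippet of my own, not from this thread) is to print the heap ceiling the JVM actually granted; on HotSpot the right-most occurrence of a repeated -Xmx generally takes effect:

    // Try: java -Xms256m -Xmx512m -Xms256m -Xmx1024m MaxHeap
    // On HotSpot this typically prints roughly the last -Xmx value (~1024 MB;
    // maxMemory() may report slightly less than the flag asked for).
    public class MaxHeap {
        public static void main(String[] args) {
            long max = Runtime.getRuntime().maxMemory();
            System.out.println("max heap: " + max / (1024 * 1024) + " MB");
        }
    }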

  • Query is allocating too large memory

    I’m building an Analysis in OBIEE against an ASO cube and am seeing the following error:
    Query is allocating too large memory ( > 4GB) and cannot be executed. Query allocation exceeds allocation limits
    The report we're trying to build is intended to show information from eight dimensions. However, when I try to add just a few of the dimensions, we get the "Query is allocating too large memory" error. Even if I filter the information down so that I have only 1 or 2 rows in the Analysis, I still get the error. It seems like something is causing our queries to become bloated. We're using OBIEE 11.1.1.6.0.
    Any help would be appreciated.

    950121 wrote:
    Query is allocating too large memory ( > 4GB) and cannot be executed. Query allocation exceeds allocation limits
    Hi,
    This sounds like known Bug 13331507 : RFA - DEBUGGING 'QUERY IS ALLOCATING TOO LARGE MEMORY ( > 4GB)' FROM ESSBASE.
    Cause:
    A filter has been added on several lines in the 'Data Filters' tab of the 'Users Permissions' screen in the Administration Tool (click on Manage and then the Identity menu items). This caused the MDX filter statement to be added several times to the MDX issued to the underlying database, which in turn caused too much memory to be used in processing the request.
    Refer to Doc ID 1389873.1 on My Oracle Support for more information.

  • Too many resources used under Linux

    I have tested the Sun JVM under Linux Red Hat 6.2 and 7.1 (both listed by Sun as officially supported platforms). The JVMs I tested ranged from 1.3.0 to 1.4.1.
    I have written a simple application server that runs Java applications by starting a new JVM. When a Java application runs (with the -server or -hotspot option), I see that 10 java processes are instantiated (10 JVMs?). This is a real problem, because for 4 users running 4 applications there are 40 JVMs running (each one allocating about 20 MB of memory) and the system becomes very slow.
    With the -classic option the situation is better: only 1 JVM per application is started (the application is the same; I have changed only the -server option to -classic), but the JVM seems to be less stable: I get a lot of segmentation fault errors (with core dumps). With the new distribution of the Sun JVM (1.4.1), the -classic option is no longer supported.
    Does anyone know whether there is a way to have only one JVM per application? (-classic does not work well.) Are there other JVMs for Linux that are more stable than Sun's?
    PS: I have tested the same code under Windows 2000 using JVM 1.3.1_03 and the -classic option. All goes well (the same application): I have only 1 process per application, without JVM runtime errors. Now my doubt is: does Sun believe in the Linux world? The JVM for Windows is much more stable!!!
    Thank you in advance and best regards.

    I am assuming that you believe that there are 10 processes because that is what ps shows. However on Linux, each thread in a java program shows up in ps as a separate process. The memory for each "ps process" corresponding to a java thread is shared among all the threads of the JVM. Therefore if you see 10 processes with 20MB each you are not using 200MB, just 20.
    If you are running 4 JVMs then you would be using 40MB not 800 as shown in ps.
    I think Sun's JVM for linux is quite good (as I use it every day). If you need to run so many JVMs you should invest in more memory.
    BTW, How much memory is installed in your system?
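    A small sketch of my own (any Sun-style JVM will do) that makes the sharing visible: spawn a pile of threads and watch the heap barely move, even though a LinuxThreads-era ps would list every one of them as a separate "process":

    public class ThreadsShareHeap {
        public static void main(String[] args) throws InterruptedException {
            Runtime rt = Runtime.getRuntime();
            long before = rt.totalMemory() - rt.freeMemory();
            for (int i = 0; i < 50; i++) {
                Thread t = new Thread(new Runnable() {
                    public void run() {
                        try { Thread.sleep(60000); } catch (InterruptedException e) {}
                    }
                });
                t.setDaemon(true);
                t.start();
            }
            long after = rt.totalMemory() - rt.freeMemory();
            // 50 sleeping threads add almost nothing to the heap. Their stacks
            // do reserve *virtual* memory (-Xss each), but that is per thread,
            // not duplicated per "process" line shown by old ps/top.
            System.out.println("heap delta after 50 threads: " + (after - before) / 1024 + " kB");
        }
    }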

  • ORA-00385: cannot enable Very Large Memory with new buffer cache 11.2.0.2

    [oracle@bnl11237dat01][DWH11]$ sqlplus / as sysdba
    SQL*Plus: Release 11.2.0.2.0 Production on Mon Jun 20 09:19:49 2011
    Copyright (c) 1982, 2010, Oracle. All rights reserved.
    Connected to an idle instance.
    SQL> startup mount pfile=/u01/app/oracle/product/11.2.0/dbhome_1/dbs//initDWH11.ora
    ORA-00385: cannot enable Very Large Memory with new buffer cache parameters
    DWH12.__large_pool_size=16777216
    DWH11.__large_pool_size=16777216
    DWH11.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
    DWH12.__pga_aggregate_target=2902458368
    DWH11.__pga_aggregate_target=2902458368
    DWH12.__sga_target=4328521728
    DWH11.__sga_target=4328521728
    DWH12.__shared_io_pool_size=0
    DWH11.__shared_io_pool_size=0
    DWH12.__shared_pool_size=956301312
    DWH11.__shared_pool_size=956301312
    DWH12.__streams_pool_size=0
    DWH11.__streams_pool_size=134217728
    #*._realfree_heap_pagesize_hint=262144
    #*._use_realfree_heap=TRUE
    *.audit_file_dest='/u01/app/oracle/admin/DWH/adump'
    *.audit_trail='db'
    *.cluster_database=true
    *.compatible='11.2.0.0.0'
    *.control_files='/dborafiles/mdm_bn/dwh/oradata01/DWH/control01.ctl','/dborafiles/mdm_bn/dwh/orareco/DWH/control02.ctl'
    *.db_block_size=8192
    *.db_domain=''
    *.db_name='DWH'
    *.db_recovery_file_dest='/dborafiles/mdm_bn/dwh/orareco'
    *.db_recovery_file_dest_size=7373586432
    *.diagnostic_dest='/u01/app/oracle'
    *.dispatchers='(PROTOCOL=TCP) (SERVICE=DWH1XDB)'
    DWH12.instance_number=2
    DWH11.instance_number=1
    DWH11.local_listener='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=bnl11237dat01-vip)(PORT=1521))))'
    DWH12.local_listener='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=bnl11237dat02-vip)(PORT=1521))))'
    *.log_archive_dest_1='LOCATION=/dborafiles/mdm_bn/dwh/oraarch'
    *.log_archive_format='DWH_%t_%s_%r.arc'
    #*.memory_max_target=7226785792
    *.memory_target=7226785792
    *.open_cursors=1000
    *.processes=500
    *.remote_listener='LISTENERS_SCAN'
    *.remote_login_passwordfile='exclusive'
    *.sessions=555
    DWH12.thread=2
    DWH11.thread=1
    DWH12.undo_tablespace='UNDOTBS2'
    DWH11.undo_tablespace='UNDOTBS1'
    SPFILE='/dborafiles/mdm_bn/dwh/oradata01/DWH/spfileDWH1.ora' # line added by Agent
    [oracle@bnl11237dat01][DWH11]$ cat /etc/sysctl.conf
    # Kernel sysctl configuration file for Red Hat Linux
    # For binary values, 0 is disabled, 1 is enabled. See sysctl(8) and
    # sysctl.conf(5) for more details.
    # Controls IP packet forwarding
    net.ipv4.ip_forward = 0
    # Controls source route verification
    net.ipv4.conf.default.rp_filter = 1
    # Do not accept source routing
    net.ipv4.conf.default.accept_source_route = 0
    # Controls the System Request debugging functionality of the kernel
    kernel.sysrq = 0
    # Controls whether core dumps will append the PID to the core filename
    # Useful for debugging multi-threaded applications
    kernel.core_uses_pid = 1
    # Controls the use of TCP syncookies
    net.ipv4.tcp_syncookies = 1
    # Controls the maximum size of a message, in bytes
    kernel.msgmnb = 65536
    # Controls the default maxmimum size of a mesage queue
    kernel.msgmax = 65536
    # Controls the maximum shared segment size, in bytes
    kernel.shmmax = 68719476736
    # Controls the maximum number of shared memory segments, in pages
    #kernel.shmall = 4294967296
    kernel.shmall = 8250344
    # Oracle kernel parameters
    fs.aio-max-nr = 1048576
    fs.file-max = 6815744
    kernel.shmmni = 4096
    kernel.sem = 250 32000 100 128
    kernel.shmmax = 536870912
    net.ipv4.ip_local_port_range = 9000 65500
    net.core.rmem_default = 262144
    net.core.rmem_max = 4194304
    net.core.wmem_default = 262144
    net.core.wmem_max = 1048586
    net.ipv4.tcp_wmem = 262144 262144 262144
    net.ipv4.tcp_rmem = 4194304 4194304 4194304
    Please, can someone tell me how to resolve this error?

    CAUSE: User specified one or more of { db_cache_size , db_recycle_cache_size, db_keep_cache_size, db_nk_cache_size (where n is one of 2,4,8,16,32) } AND use_indirect_data_buffers is set to TRUE. This is illegal.
    ACTION: Very Large Memory can only be enabled with the old (pre-Oracle_8.2) parameters

  • JVM virtual memory footprint

    I'm running Java processes (specifically Tomcat) on a Linux system without much available memory, and I run into lots of problems, as Java seems to consume a lot more memory than it should need.
    To understand the problem I created this "unit test":
    [root@vps download]# ulimit -v unlimited
    [root@vps download]# java -version
    java version "1.5.0_14"
    Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_14-b03)
    Java HotSpot(TM) Client VM (build 1.5.0_14-b03, mixed mode)
    [root@vps download]# ulimit -v 230000
    [root@vps download]# java -version
    java version "1.5.0_14"
    Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_14-b03)
    Java HotSpot(TM) Client VM (build 1.5.0_14-b03, mixed mode)
    [root@vps download]# ulimit -v 220000
    [root@vps download]# java -version
    Error occurred during initialization of VM
    Could not reserve enough space for object heap
    Could not create the Java virtual machine.
    As you can see, the JVM won't start up when the allowed virtual memory is under 230 MB, which is an amount of memory I can't give Java on a virtual server. I have tried limiting the memory consumption with every memory option I could find (starting with the famous -Xmx and -Xms ones), but always with the same results and problems.
    By running Tomcat and using "top", I noticed Java requests over 250 MB of virtual memory while actually using only around 35 MB. I would be glad to allow this process to reserve up to 100 MB of virtual memory, but beyond that seems crazy, since I know perfectly well my Tomcat instance will never require that much memory, and since I cannot "afford" that much memory usage from a single process. With that kind of consumption I can't even stop Tomcat using the ordinary script (shutdown.sh starts another VM to send the stop signal to Tomcat, but since the first one is already consuming 250 MB of virtual memory and the second one tries to allocate as much, I get the heap error message, as my server can't allocate that much total memory on top of the other processes).
    What solutions are there to stop Java from reserving memory it won't use?
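    For readers hitting the same wall: on Linux you can see the reserved-versus-resident split from inside the process. A sketch of my own, assuming a /proc filesystem; the big VmSize number is largely the -Xmx heap reservation plus the per-thread -Xss stacks:

    import java.io.BufferedReader;
    import java.io.FileReader;

    public class ReservedVsResident {
        public static void main(String[] args) throws Exception {
            // VmSize is what the process has *reserved* (the ~250 MB above);
            // VmRSS is what actually occupies physical RAM (the ~35 MB).
            BufferedReader r = new BufferedReader(new FileReader("/proc/self/status"));
            try {
                for (String line; (line = r.readLine()) != null; ) {
                    if (line.startsWith("VmSize") || line.startsWith("VmRSS")) {
                        System.out.println(line);
                    }
                }
            } finally {
                r.close();
            }
        }
    }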

    ingoio256 wrote:
    First I quote you from another (somewhat similar) thread: "Under Linux, the OS allows over-allocation of memory (which means it can reserve memory that it doesn't have the physical RAM or swap file to back it up with)."
    If I were able to do that, my problems would be solved: in fact my system doesn't have enough physical+swap memory, but since a lot less will actually be used by the Java processes, if I could instruct the system to allocate memory even when it's not available, my problems would be solved. Any idea where I can find help on doing that?
    It may already be doing that; under Linux, I believe it only looks for memory to back up the allocation when the page is touched. Whether the java process touches all the memory it allocates, I don't know. Maybe it behaves differently when there's no page file.
    The second question was the following:
    I noticed Java processes on the Windows system don't use as much virtual memory as happens on my VPS: the reported virtual memory allocation is just slightly above the physical memory used. Does that depend on a different implementation, or does it happen after the JVM has been running for a while, with unused virtual memory being given back to the system? If the latter is right, it would partly ease the problem, as I would just need the Java process to run for a while and then some system resources would be released. If it's not the case... how can I make Linux behave similarly to Windows? In this case it seems more efficient ;)
    Regarding the memory usage difference, I don't know why that is. I've seen people say that the Sun JVM doesn't return memory to the OS after it allocates it, but I'm not sure of this. This is another aspect that a different JVM implementation could help with. I think IBM and BEA both have their own implementations you could check out.
