Out of Memory Runtime Error

Hello,
I'm new to Java — this is only my third time using the language, and my first time writing an applet. What I'm trying to do is create an applet that will plot a set of 2D coordinates based on an input string. Inexplicably, the VM gives an "Out of Memory" error while running. I urgently need a solution to this problem (as in, in the next two days... by August 9th, 2001). Any help or suggestions would be greatly appreciated.
-Mark Radtke, radrik2001<REMOVE>@yahoo.com
import java.awt.*;
import java.applet.*;
import java.io.*;
import Orbit;

public class TwoDApplet extends Applet {
     private Orbit stringHolder = new Orbit();

     public void init() {
          System.out.println("execution");
          stringToDraw();
          System.out.println("Done executing.");
     }

     public boolean stringToDraw() {
          Graphics g = getGraphics();
          g.setColor(Color.black);
          StringReader sReader = new StringReader(stringHolder.case1);
          int XYline = 0, pArrayPlace = 0;
          int X = 0, Y = 0, last_x = 0, last_y = 0;
          float temp_x = 0, temp_y = 0;
          final char lineBreaker = '%', separator = ' ';     //these chars denote line breaks and the spaces between words, '\n' and ' ' by default
          StringBuffer buffer = new StringBuffer();
          final StringBuffer blankBuffer = new StringBuffer(" ");
          char temp;
          boolean flag = false;
          try {
               temp = (char)sReader.read();
          } catch(IOException e) {
               System.out.println("\n\n\tERROR: " + e);
               return false;
          }
          try {
               do {
                    switch(temp) {
                         case lineBreaker:
                              switch(XYline) {
                                   case 0:
                                        last_x = X;
                                        temp_x = Float.parseFloat(buffer.toString());
                                        X = (int)(temp_x * Math.pow(10.0, 11.0));
                                        g.drawLine(last_x, 0, X, 0);
                                        break;
                                   case 1:
                                        last_y = Y;
                                        temp_y = Float.parseFloat(buffer.toString());
                                        Y = (int)(temp_y * Math.pow(10.0, 11.0));
                                        g.drawLine(last_x, last_y, X, Y);
                                        break;
                                   default: //anything beyond 2 numbers in a row is ignored
                                        g.drawLine(last_x, last_y, X, Y);
                                        break;
                              } //end of nested switch
                              buffer = blankBuffer;
                              XYline = 0;
                              break;
                         case separator:
                              switch(XYline) {
                                   case 0:
                                        System.out.print(buffer.toString());
                                        last_x = X;
                                        temp_x = Float.parseFloat(buffer.toString());
                                        X = (int)(temp_x * Math.pow(10.0, 11.0));
                                        System.out.print("X = " + X + " ");
                                        break;
                                   case 1:
                                        last_y = Y;
                                        temp_y = Float.parseFloat(buffer.toString());
                                        Y = (int)(temp_y * Math.pow(10.0, 11.0));
                                        System.out.print("Y = " + Y + " ");
                                        break;
                                   default:
                                        break;
                              } //end of nested switch
                              buffer = blankBuffer;
                              XYline++;
                              System.out.println("OK\n");
                              break;
                         default:
                              if(buffer.toString().equals(" ")) //I used to test buffer == new StringBuffer(" "), but I figured that may have been part of the problem... it didn't help
                                   buffer.setCharAt(0, temp);
                              else
                                   buffer.append(temp);
                              break;
                    } //end of switch
                    try {
                         temp = (char)sReader.read();
                    } catch(IOException e) {
                         System.out.println("IOException caught... throwing...");
                         flag = true;
                         throw e;
                    } catch(NullPointerException e) {
                         System.out.println("Caught NullPointerException. You suck.");
                    }
               } while(flag == false); //end of do-while
          } //end of try
          catch(NullPointerException e) {
               System.out.println("Caught NullPointerException.");
          }
          catch(IOException e) {
               System.out.println("Caught IOException: " + e + "\n");
          }
          return true;
     }
}

//Orbit.java
public class Orbit { //when totally completed, this class will just hold a few strings, most of which are much, much larger than this one
     public String case1 = "3.0745036727705142e-009 3.9417146050976244e-009%4.9852836681565192e-009 3.5573952047714837e-010%3.6200601148208685e-009 3.4445682318680393e-009%8.1006295549636224e-011 4.9953846385005008e-009%3.7105805578461100e-009 3.3440347969092934e-009%4.9772361223912569e-009 4.1037439683639425e-010%3.1322774963314919e-009 3.8884603960830088e-009%";
}

Do you need to hold those values in a String? If not, simply store them in a two-dimensional array and avoid parsing the string altogether (see the sample applet below). If you do need to keep the values in a String, consider using a StringTokenizer for parsing.
import java.awt.*;
import java.applet.*;
import java.io.*;

public class TwoDApplet extends Applet implements Runnable {
     Thread th;
     int x;
     int y;
     int lastX;
     int lastY;
     double coordinates[][] = {{3.0745036727705142e-009, 3.9417146050976244e-009},
                               {4.9852836681565192e-009, 3.5573952047714837e-010},
                               {3.6200601148208685e-009, 3.4445682318680393e-009},
                               {8.1006295549636224e-011, 4.9953846385005008e-009},
                               {3.7105805578461100e-009, 3.3440347969092934e-009},
                               {4.9772361223912569e-009, 4.1037439683639425e-010},
                               {3.1322774963314919e-009, 3.8884603960830088e-009}};
     final double operand = Math.pow(10.0, 11.0);

     public void start() {
          if (th == null) {
               th = new Thread(this);
               th.start();
          }
     }

     public void stop() {
          if (th != null)
               th = null;
     }

     public synchronized void run() {
          for (int z = 0; z < coordinates.length; z++) {
               lastX = x;
               lastY = y;
               x = (int)(coordinates[z][0] * operand);
               y = (int)(coordinates[z][1] * operand);
               repaint();
               try {
                    wait();
                    Thread.sleep(100);
               } catch(InterruptedException ie) {
               }
          }
     }

     public void update(Graphics g) {
          paint(g);
     }

     public void paint(Graphics g) {
          g.drawLine(lastX, lastY, x, y);
          synchronized(this) {
               notifyAll();
          }
     }
}
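If the string format must stay, the parsing can be done with a StringTokenizer instead of the original character-by-character loop. Note also that StringReader.read() returns -1 at end of input rather than throwing an IOException, so a loop that only stops on an exception never terminates and its buffer grows without bound — a likely cause of the OutOfMemoryError. A minimal sketch (the class name CoordParser and the short sample string are made up for illustration):

```java
import java.util.StringTokenizer;

public class CoordParser {
    // Splits "x y%x y%..." into pairs: '%' ends a pair, ' ' separates x from y.
    public static double[][] parse(String data) {
        StringTokenizer pairs = new StringTokenizer(data, "%");
        double[][] out = new double[pairs.countTokens()][2];
        for (int i = 0; pairs.hasMoreTokens(); i++) {
            StringTokenizer xy = new StringTokenizer(pairs.nextToken(), " ");
            out[i][0] = Double.parseDouble(xy.nextToken());
            out[i][1] = Double.parseDouble(xy.nextToken());
        }
        return out;
    }

    public static void main(String[] args) {
        double[][] pts = parse("3.07e-009 3.94e-009%4.98e-009 3.55e-010%");
        System.out.println(pts.length + " pairs parsed");
    }
}
```

The loop ends when hasMoreTokens() returns false, so end-of-input is handled for free.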

Similar Messages

  • ORA-27102: out of memory SVR4 Error: 12: Not enough space

    We got an image copy of one of our production servers that runs on Solaris 9; our SA team restored it and handed it over to us (DBAs). There is only one database running on the source server. I have to bring the database up on the new server. While starting the database I get the following error:
    ====================================================================
    SQL*Plus: Release 10.2.0.1.0 - Production on Fri Aug 6 16:36:14 2010
    Copyright (c) 1982, 2005, Oracle. All rights reserved.
    Connected to an idle instance.
    SQL> startup
    ORA-27102: out of memory
    SVR4 Error: 12: Not enough space
    SQL>
    ====================================================================
    ABOUT THE SERVER AND DATABASE
    Server:
    uname -a
    SunOS ush** 5.9 Generic_Virtual sun4u sparc SUNW,T5240
    Database: Oracle 10.2.0.1.0
    I'm giving the "top" command output below:
    Before attempt to start the database:
    load averages: 2.85, 9.39, 5.50 16:35:46
    31 processes: 30 sleeping, 1 on cpu
    CPU states: 98.9% idle, 0.7% user, 0.4% kernel, 0.0% iowait, 0.0% swap
    Memory: 52G real, 239G free, 49M swap in use, 16G swap free
    the moment I run the "startup" command
    load averages: 1.54, 7.88, 5.20 16:36:44
    33 processes: 31 sleeping, 2 on cpu
    CPU states: 98.8% idle, 0.0% user, 1.2% kernel, 0.0% iowait, 0.0% swap
    Memory: 52G real, 224G free, 15G swap in use, 771M swap free
    and I compared the semaphores and kernel parameters in /etc/system. Both are identical.
    and ulimit -a gives the following:
    root@ush**> ulimit -a
    time(seconds) unlimited
    file(blocks) unlimited
    data(kbytes) unlimited
    stack(kbytes) 8192
    coredump(blocks) unlimited
    nofiles(descriptors) 256
    memory(kbytes) unlimited
    root@ush**>
    and ipcs shows nothing as below:
    root@ush**> ipcs
    IPC status from <running system> as of Fri Aug 6 19:45:06 PDT 2010
    T ID KEY MODE OWNER GROUP
    Message Queues:
    Shared Memory:
    Semaphores:
    Finally, the alert log shows nothing but "instance starting"...
    Please let us know where else I should check for the root cause... Thank you.

    > and I compared the Semaphores and Kernel Parameters in /etc/system. Both are identical.
    Is an initSID.ora or an spfile being used to start the DB?
    Clues indicate Oracle is requesting more shared memory than the OS can provide.
    Do any additional clues exist within the alert_SID.log file?

  • ORA-27102: out of memory SVR4 Error: 22: Invalid argument

    Hi all,
    I'm doing an install of a Solaris 10.2, Oracle 10.2 system. During the Create Database phase, I am getting;
    ORA-27102: out of memory SVR4 Error: 22: Invalid argument
    Doing some research, and reading through the details here:
    http://technopark02.blogspot.com/2006/09/solaris-10oracle-fixing-ora-27102-out.html
    I think my issue is my SHM parameters, reinforced by the repeated entry in the Oracle alert log when the create fails:
    WARNING: EINVAL creating segment of size 0x0000000085000000
    fix shm parameters in /etc/system or equivalent
    I am not familiar with Solaris' new project mechanism, although from what I have read, it seems to be set up properly.
    Here are my server details:
    # prtconf | grep Mem
    Memory size: 8192 Megabytes
    # prctl -n project.max-shm-memory -i project 200
    project: 200: QBI
    NAME    PRIVILEGE       VALUE    FLAG   ACTION                       RECIPIENT
    project.max-shm-memory
            privileged      10.0EB      -   deny                                 -
            system          16.0EB    max   deny                                 -
    And as for Oracle:
    shared_pool_size = 1522029035
    shared_pool_reserved_size = 152202903
    pga_aggregate_target = 2705829396
    sga_max_size = 3439329280
    db_cache_size = 1159641169
    During the course of troubleshooting, I have:
    1. Increased the amount of SHM allocated in the project. I have tried 16 GB, 8 GB, 10 GB, 11 GB, etc., to no effect, so I reset it to 10 GB (as seen above) and focused my efforts elsewhere.
    2. SHARED_POOL_SIZE - I have decreased this by roughly 75% from the original value, again to no effect.
    3. PGA and SGA sizes - I have increased these from the original values by an increment of 25%.
    Following the advice from the referenced blog (which does a good job of explaining the logic behind the actions), I have determined that the alert log error message is telling me that it is lacking 2231369728 bytes (the decimal conversion of the hex value, which I think I need to read as roughly 2 GB, not 100% sure).
    I've increased my project allocation and the PGA sizes; did I just not do it enough?
    Any advice?
    Thanks for any input,
    Troy Shane

    Hi,
    check following sap note
    Note 546006 - Problems with Oracle due to operating system errors
    Note 743328 - Composite SAP note: ORA-27102
    regards,
    kaushal

  • Final Cut Pro Error: 'out of memory'/'general error' on export

    I'm trying to export a simple timeline from FCP but keep getting "Final Cut Pro Error: 'out of memory'/'general error'". I'm exporting a 5-second timeline (yes, just a short trailer) via QuickTime Conversion, H.264 at 1080. Strangely, if I export exactly the same timeline at 720 it's fine. Original media is ProRes 422 1080.
    Have exported at 1080 before OK. No nested files involved; re-connected all media; no JPEGs involved; plenty of memory (all of which I have heard can be issues).
    Any advice please, as I've read all of the FAQs and nothing is helping so far?!
    Spec: new Mac Pro 12-core; 12GB RAM; 2TB drive; FCP7

    Thanks for the workflow clarity, Shane, but when you export as a self-contained QT movie, what do the 'Current Settings' refer to? Is this the current settings of your (source media in the) timeline (i.e., in my case ProRes 422) or the last settings used by the export? Apparently 'If you deselect the Recompress All Frames option and choose Current Settings from the Setting pop-up menu, Final Cut Pro simply copies frames from existing media files into the new file with no recompression.' But when I did this I got a 994.1 MB QT movie which, when I got 'more info', told me MPEG-2 Video (?!) Apparently a compressed file when it's supposed to be uncompressed (?!) When I exported manually, setting to ProRes, I got 6.77 GB. Clearly uncompressed Apple ProRes 422 (HQ). Do I have to manually set the settings to ProRes to match my source media and ensure I get an uncompressed file? I'm searching the boards to get some clarity on this too: all references so far advise using 'current settings' but don't actually say what those are.

  • 10g XE: ORA-27102: out of memory, Linux Error: 28: No space left on device

    Hi,
    I just installed oracle-xe-10.2.0.1-1.0.i386.rpm on a virtual CentOS box. After I run "/etc/init.d/oracle-xe configure", only the listener is running, not the instance itself, and $ORACLE_HOME/config/log/CloneRmanRestore.log says:
    ORA-27102: out of memory
    Linux Error: 28: No space left on device
    According to installation requirements (http://download.oracle.com/docs/cd/B25329_01/doc/install.102/b25144/toc.htm) I checked
    RAM and swap:
    [root@56 ~]# free -m
    total used free shared buffers cached
    Mem: 7961 7453 507 0 39 923
    -/+ buffers/cache: 6491 1470
    Swap: 16378 7012 9366
    Available disk space:
    [root@56 ~]# df -h
    Filesystem Size Used Avail Use% Mounted on
    /dev/vzfs 2.9G 969M 1.9G 34% /
    Kernel params:
    [root@56 ~]# sysctl -a | egrep 'sem|shm'
    error: "Operation not permitted" reading key "kernel.cap-bound"
    kernel.sem = 250 32000 100 128
    kernel.shmmni = 4096
    kernel.shmall = 2097152
    kernel.shmmax = 536870912
    So it all looks to be OK.
    Any ideas what can be wrong?
    Thanks,
    Radek

    rskokan wrote:
    > The distro version is "CentOS release 5.4 (Final)".
    > Unfortunately the wmem/rmem params can't be changed; as I learned, it's a limitation of the virtual hosting, Virtuozzo. Anyway, they are "just" network settings, not RAM.
    Yes - but worth noting once you run into your next problem. :-)
    > Couldn't the problem be that I really don't have enough disk space? Only 1.9 GB free.
    It's unlikely.
    I suspect that the SHMMAX setting needs to be increased to 1/2 physical RAM, as described in the doc I referenced.
    I suspect the installer is looking at the physical RAM and guessing the target SGA size based on that. It would therefore assume, for DB creation, that it should crank the SGA to around 700MB (1GB max, leave some for PGA, but use some SGA because of Shared Server for APEX).
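    The SHMMAX comparison above can be done mechanically. A minimal sketch, assuming a Linux box with the usual /proc layout (the "half of physical RAM" figure follows the doc referenced above; applying the change requires root):

    ```shell
    # Compute half of physical RAM in bytes and compare it to the current SHMMAX.
    mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
    half_bytes=$(( mem_kb / 2 * 1024 ))
    cur=$(cat /proc/sys/kernel/shmmax)
    echo "current shmmax: $cur, suggested: $half_bytes"
    # To apply (as root): sysctl -w kernel.shmmax=$half_bytes
    # and persist it:     echo "kernel.shmmax = $half_bytes" >> /etc/sysctl.conf
    ```

    In the output quoted earlier, kernel.shmmax = 536870912 (512 MB), well below half of the 8 GB of RAM, which fits the symptom.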

  • "Out of memory!" error message when trying to save...

    Yesterday I got the Autotune 5 plug-in, and when I use it on a track and then try to save the GarageBand file, the error message "out of memory!" comes up. I have like 35 gigs of space left on my hard drive, so that can't be the reason. And on top of that, the track that I spent all day recording (the one I was using Autotune 5 on) was deleted without my permission. This is just super frustrating, and I can't figure out why it would say I have no memory left when I clearly do. And I'm afraid to keep using Autotune for fear of another track being deleted and me not being able to save because of the error message.
    If anyone can help me that would be awesome.

    Dang! Yeah, it seemed like a good step.
    Like I said, I've never seen GB display an Out of Memory error, and actually, I don't recall anyone ever posting that, so I'm puzzled.
    One other thought: generally a log-out clears everything, but have you tried from a complete reboot? IOW restart the machine, don't run ANYTHING other than GB, and try again.
    My gut kind of says Autotune is the culprit, just because I also don't recall anyone else here using it, but I hate not being able to figure out a puzzle, and like you said, with me here and you way over there, it's tough to guess at all the possibilities. B-(>

  • Out of Memory! error messages

    For a few months I have been getting a series of about 40 error messages when Logic 7.2.1 is starting up. They appear at the end of the startup process. I dismiss each one and at the end of all the messages, Logic appears and seems to work then with no problems. The messages are:
    Out of Memory!
    Out of memory! Couldn't insert or delete data
    Couldn't creat Region
    Out of memory! Couldn't insert or delete data
    These appear with Logic in Native mode and also if I add the DAE audio engine.
    I have trashed prefs. I have cleared out all plugins. The error messages do not go away.
    Does anyone have any ideas as to why these error messages are appearing?

    Is there a specific song it's trying to load on startup? Maybe that's corrupted?
    Thanks for the suggestion. There is no Autoload song to load in any of the folders that might have it. The errors appear after I have trashed ALL Logic preferences and Restarted the G5 and then started up Logic.
    If you hold down the Option key near the end of Logic starting up (while it is checking MIDI), this causes Logic to appear without any song loaded. In this case I get no memory error messages. If I then select File / New and load in a generic new song, the error messages appear again at that point.
    So, I think you are correct in thinking it relates to the song that is being loaded but why the generic new song should cause these error messages is not clear to me yet.

  • SharePoint 2013 Search - Zip - Parser server ran out of memory - Processing this item failed because of a IFilter parser error

    Moving content databases from 2010 to 2013 August CU. Have 7 databases attached and ready to go; all the content is crawled successfully except zip files. Getting errors such as:
    Processing this item failed because of a IFilter parser error. ( Error parsing document 'http://sharepoint/file1.zip'. Error loading IFilter for extension '.zip' (Error code is 0x80CB4204). The function encountered an unknown error.; ; SearchID = 7A541F21-1CD3-4300-A95C-7E2A67B2563C
    Processing this item failed because the parser server ran out of memory. ( Error parsing document 'http://sharepoint/file2.zip'. Document failed to be processed. It probably crashed the server.; ; SearchID = 91B5D685-1C1A-4C43-9505-DA5414E40169 )
    SharePoint 2013 in a single instance out-of-the-box. Didn't install custom iFilters as 2013 supports zip. No other extensions have this issue. Range in file size from 60-90MB per zip. They contain mp3 files. I can download and unzip the file as needed. 
    Should I care that the index isn't being populated with these items since they contain no metadata? I am thinking I should just omit these from the crawl. 

    This issue came back up for me as my results aren't displaying since this data is not part of the search index.
    Curious if anyone knows of a way to increase the parser server memory in SharePoint 2013 search?
    http://sharepoint/materials-ca/HPSActiveCDs/Votrevieprofessionnelleetvotrecarrireenregistrement.zip
    Processing this item failed because the parser server ran out of memory. ( Error parsing document 'http://sharepoint/materials-ca/HPSActiveCDs/Votrevieprofessionnelleetvotrecarrireenregistrement.zip'. Document failed to be processed. It probably crashed the
    server.; ; SearchID = 097AE4B0-9EB0-4AEC-AECE-AEFA631D4AA6 )
    http://sharepoint/materials-ca/HPSActiveCDs/Travaillerauseindunequipemultignrationnelle.zip
    Processing this item failed because of a IFilter parser error. ( Error parsing document 'http://sharepoint/materials-ca/HPSActiveCDs/Travaillerauseindunequipemultignrationnelle.zip'. Error loading IFilter for extension '.zip' (Error code is 0x80CB4204). The
    function encountered an unknown error.; ; SearchID = 4A0C99B1-CF44-4C8B-A6FF-E42309F97B72 )

  • SAP AS JAVA installation in Solaris zone environment throws Out of Memory error.

    Hi,
    We are installing SAP NW2004s 7.0 SR3 AS JAVA on Solaris 10 in a zone environment. This is a prod server build; on top of it we will install GRC 5.3.
    We faced no issues in the development build.
    But during the prod build, at the database create step, we are getting the below error:
    ORA-27102: out of memory
    SVR4 Error: 22: Invalid argument
    Disconnected
    SAPinst log entries where error messages started:
    04:43:58.128
    Execute step createDatabase of component |NW_Onehost|ind|ind|ind|ind|0|0|NW_Onehost_System|ind|ind|ind|ind|2|0|NW_CreateDBandLoad|ind|ind|ind|ind|10|0|NW_CreateDB|ind|ind|ind|ind|0|0|NW_OraDBCheck|ind|ind|ind|ind|0|0|NW_OraDBMain|ind|ind|ind|ind|0|0|NW_OraDBStd|ind|ind|ind|ind|3|0|NW_OraDbBuild|ind|ind|ind|ind|5|0
    INFO 2011-04-01 04:45:14.590
    Working directory changed to /tmp/sapinst_exe.16718.1301647358.
    INFO 2011-04-01 04:45:14.595
    Working directory changed to /tmp/sapinst_instdir/NW04S/SYSTEM/ORA/CENTRAL/AS.
    INFO 2011-04-01 04:45:14.609
    Working directory changed to /tmp/sapinst_exe.16718.1301647358.
    INFO 2011-04-01 04:45:14.621
    Working directory changed to /tmp/sapinst_instdir/NW04S/SYSTEM/ORA/CENTRAL/AS.
    INFO 2011-04-01 04:45:14.850
    Account oraac5 already exists.
    INFO 2011-04-01 04:45:14.852
    Account dba already exists.
    INFO 2011-04-01 04:45:14.852
    Account oraac5 already exists.
    INFO 2011-04-01 04:45:14.853
    Account dba already exists.
    INFO 2011-04-01 04:45:14.867
    Working directory changed to /tmp/sapinst_exe.16718.1301647358.
    INFO 2011-04-01 04:45:14.899
    Working directory changed to /tmp/sapinst_instdir/NW04S/SYSTEM/ORA/CENTRAL/AS.
    ERROR 2011-04-01 04:45:32.280
    CJS-00084  SQL statement or script failed. DIAGNOSIS: Error message: ORA-27102: out of memory
    SVR4 Error: 22: Invalid argument
    Disconnected
    . SOLUTION: See ora_sql_results.log and the Oracle documentation for details.
    ERROR 2011-04-01 04:45:32.286
    MUT-03025  Caught ESAPinstException in Modulecall: ORA-27102: out of memory
    SVR4 Error: 22: Invalid argument
    Disconnected
    ERROR 2011-04-01 04:45:32.453
    FCO-00011  The step createDatabase with step key |NW_Onehost|ind|ind|ind|ind|0|0|NW_Onehost_System|ind|ind|ind|ind|2|0|NW_CreateDBandLoad|ind|ind|ind|ind|10|0|NW_CreateDB|ind|ind|ind|ind|0|0|NW_OraDBCheck|ind|ind|ind|ind|0|0|NW_OraDBMain|ind|ind|ind|ind|0|0|NW_OraDBStd|ind|ind|ind|ind|3|0|NW_OraDbBuild|ind|ind|ind|ind|5|0|createDatabase was executed with status ERROR ( Last error reported by the step :Caught ESAPinstException in Modulecall: ORA-27102: out of memory
    SVR4 Error: 22: Invalid argument
    Disconnected
    ora_sql_results.log
    04:45:15 SAPINST ORACLE start logging for
    SHUTDOWN IMMEDIATE;
    exit;
    Output of SQL executing program:
    SQL*Plus: Release 10.2.0.4.0 - Production on Fri Apr 1 04:45:15 2011
    Copyright (c) 1982, 2007, Oracle.  All Rights Reserved.
    Connected to an idle instance.
    ORA-01034: ORACLE not available
    ORA-27101: shared memory realm does not exist
    SVR4 Error: 2: No such file or directory
    Disconnected
    SAPINST: End of output of SQL executing program /oracle/AC5/102_64/bin/sqlplus.
    SAPINST found errors.
    SAPINST The current process environment may be found in sapinst_ora_environment.log.
    2011-04-01, 04:45:15 SAPINST ORACLE stop logging
    ================================================================================
    2011-04-01, 04:45:15 SAPINST ORACLE start logging for
    STARTUP NOMOUNT;
    exit;
    Output of SQL executing program:
    SQL*Plus: Release 10.2.0.4.0 - Production on Fri Apr 1 04:45:15 2011
    Copyright (c) 1982, 2007, Oracle.  All Rights Reserved.
    Connected to an idle instance.
    ORA-27102: out of memory
    SVR4 Error: 22: Invalid argument
    Disconnected
    SAPINST: End of output of SQL executing program /oracle/AC5/102_64/bin/sqlplus.
    SAPINST found errors.
    SAPINST The current process environment may be found in sapinst_ora_environment.log.
    2011-04-01, 04:45:32 SAPINST ORACLE stop logging
    Already viewed S-notes:
    724713 - parameter settings for Solaris 10 - (parameters are set as per this note)
    743328 - Composite SAP note: ORA-27102 - (not much information on memory problems in zones)
    Please provide your suggestions/resolution.
    Thank you.

    Hi,
    @Sunny: Thanks for the response; the referred note was already checked and the parameters are in sync as per the note.
    @Mohit: SAPinst wouldn't proceed to the create-database step if the Oracle software was not installed. Thanks for the response.
    @Markus: Thanks, I agree with you, but I have a doubt in this area. Isn't project.max-shm-memory the new parameter we need to set in the local zone, rather than using shmsys:shminfo_shmmax in /etc/system? Do we still need to maintain this parameter in /etc/system in the global zone?
    As per SUN doc below parameter was obsolete from Solaris 10.
    The following parameters are obsolete.
    ■ shmsys:shminfo_shmmni
    ■ shmsys:shminfo_shmmax
    As per your suggestion, do we need to set the below parameters in that case? Please clarify.
    Parameter                           Replaced by Resource Control      Recommended Value
    semsys:seminfo_semmni   project.max-sem-ids                      100
    semsys:seminfo_semmsl   process.max-sem-nsems               256
    shmsys:shminfo_shmmax  project.max-shm-memory             4294967295
    shmsys:shminfo_shmmni   project.max-shm-ids                       100
    Also, the findings of /etc/release:
    more /etc/release
    Solaris 10 10/08 s10s_u6wos_07b SPARC
    Regards,
    Sitarama.
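    For reference, the /etc/system-to-resource-control move described in the table above looks roughly like this in practice. A sketch only: the project name group.dba and the 8 GB cap are assumptions for illustration, not values from the post, and the commands are run inside the local zone where Oracle lives:

    ```shell
    # Solaris 10: set the shared-memory cap as a project resource control
    # instead of shmsys:shminfo_shmmax in /etc/system.
    projmod -s -K "project.max-shm-memory=(privileged,8G,deny)" group.dba

    # Verify the active value for that project:
    prctl -n project.max-shm-memory -i project group.dba
    ```

    Unlike the obsolete /etc/system parameters, this takes effect per project and can be set differently in each zone.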

  • Out of memory errors

    I'm having problems starting 8i. When started from /etc/rc.d/init.d/dbora during bootup or manually by root, I get an out of memory error.
    If I start it from the dbadmin user (oracle), I either get the same out of memory error, or I end up with several hundred shell logins and the database still doesn't respond.
    This is RedHat Linux 6.0, kernel 2.2.5-22smp. Here is a sample of what happens when I get the out of memory error:
    Mem: 160448K av, 53488K used, 106960K free, 26264K shrd,
    6532K buff
    Swap: 656360K av, 0K used, 656360K free
    36328K cached
    Oracle Server Manager Release 3.1.5.0.0 - Production
    (c) Copyright 1997, Oracle Corporation. All Rights Reserved.
    Oracle8i Release 8.1.5.0.0 - Production
    With the Java option
    PL/SQL Release 8.1.5.0.0 - Production
    SVRMGR> Connected.
    SVRMGR> ORA-27102: out of memory
    Linux Error: 22: Invalid argument
    SVRMGR>
    Server Manager complete.
    Database "ORCL" warm started.

    It turns out that the problem I was having with a large number of shell processes being created was due to the use of oraenv in my .bashrc file (so much for following the instructions!) It was calling itself recursively until the process limit was reached.
    However, even with this fixed, the out of memory error still exists.
    max (guest) wrote:
    : dan....
    : check your init.ora......
    Aside from comments, it has these lines, which were created by
    dbassist:
    db_name = test
    instance_name = ORCL
    service_names = test
    control_files = ("/u02/oradata/test/control01.ctl", "/u02/oradata/
    test/control02.ctl")
    db_block_buffers = 8192
    shared_pool_size = 4194304
    log_checkpoint_interval = 10000
    log_checkpoint_timeout = 1800
    # I reduced processes to see if it would help
    processes = 10
    log_buffer = 163840
    background_dump_dest = /u01/admin/test/bdump
    core_dump_dest = /u01/admin/test/cdump
    user_dump_dest = /u01/admin/test/udump
    db_block_size = 2048
    remote_login_passwordfile = exclusive
    os_authent_prefix = ""
    compatible = "8.1.0"
    : also check ulimit
    Here's ulimit -a:
    core file size (blocks) 1000000
    data seg size (kbytes) unlimited
    file size (blocks) unlimited
    max memory size (kbytes) unlimited
    stack size (kbytes) 8192
    cpu time (seconds) unlimited
    max user processes 256
    pipe size (512 bytes) 8
    open files 1024
    virtual memory (kbytes) 2105343
    Everything looks pretty large to me.

  • Adobe CS6.0 out of memory error on iMac (early 2009)

    Hi:
    Adobe CS6.0 Photoshop is returning "Out of Memory (RAM)" errors and is unusable on a lab of iMacs.
    The system exceeds Adobe minimum specs (1GB ram).
    Our school has a lab of iMacs (early 2009) with both Adobe CS5.5 and Adobe CS6.0 Master Collection installed.
    The computers all have 4GB of memory and come with an NVidia GeForce 9400M 256MB video card.
    I notice that this video card shares ram with the main system.
    CS5.5 runs perfectly well but CS6.0 is not co-operating.
    We have updated Photoshop to version 13.04 and have run Apple's Software Update (latest version).
    I have looked online, and this is a known problem with computers that have Adobe CS5 & 6 installed on Mac OS X 10.7.x, but only with certain video cards.
    There are people out there with 32 GB ram having the same problem.
    I'm guessing the answer may be to update the video card drivers somehow.
    Your input is greatly appreciated.
    Regards
    ODrew

    Hi ODrew,
    I'm shocked you can operate OK with only 4GB of RAM; that certainly wasn't enough to keep my 2007 iMac from continually slowing down!
    CS6 system requirements...
    Mac OS
    Multicore Intel processor with 64-bit support
    Mac OS X v10.6.8, v10.7, or v10.8**
    4GB of RAM (8GB recommended)
    4GB of available hard-disk space for installation; additional free space required during installation (cannot install on a volume that uses a case-sensitive file system or on removable flash storage devices)
    Additional disk space required for preview files and other working files (10GB recommended)
    1280x900 display
    7200 RPM hard drive (multiple fast disk drives, preferably RAID 0 configured, recommended)
    OpenGL 2.0–capable system
    DVD-ROM drive compatible with dual-layer DVDs (SuperDrive for burning DVDs; Blu-ray burner for creating Blu-ray Disc media)
    QuickTime 7.6.6 software required for QuickTime features
    Optional: Adobe-certified GPU card for GPU-accelerated performance
    And I don't see that Video card supported...
    http://www.adobe.com/products/premiere/tech-specs.html
    I don't think you can upgrade that video card...
    http://www.ifixit.com/Answers/View/93250/Upgrade+the+NVIDIA+GEFORCE+9400M+GRAPHICS+PROCESS

  • FRM-40024 out of memory error

    I am trying to upgrade Forms 10g to 11g.
    After installing WebLogic 10.3.2 and Fusion Middleware 11.1.1.2.0, I recompiled the old forms against the new version.
    There were no compile errors and the .fmx files were generated successfully.
    I ran the form on my local computer with no problems.
    So I uploaded the .fmx to the WAS server, and when I opened the form screen from the menu, I got an error:
    "frm-40024 out of memory error"
    What is the problem?

    Hi user13763783
    The error is direct and straightforward:
    Error Message: FRM-40024: Out of memory.
    Error Cause: Internal error. Your computer does not have enough memory to run the form.
    Action: The designer might be able to modify the form so that it will run. If that is not feasible, your installation must make more memory available, either by modifying the operating-system parameters or by adding more memory to the computer.
    Level: >25  Type: Error
    Hope this helps...
    Regards,
    Abdetu...

  • Cannot see images occasionally: "Out of memory"

    Hi,
    I love Lightroom, but I have a problem that is really interfering with my work. When I look at a larger collection of images (100+), every now and then the image area will be greyed out, with 'Out Of Memory' shown in red (upside down and back to front) in the lower-left corner.
    This never happens when I am looking at thumbnails in Library mode. It does happen when I am looking at one image in Library mode, in Develop mode, and frequently in Slideshow mode. Interestingly enough, when an image is not visible in Library mode, chances are it is visible in Develop mode and Slideshow (and vice versa).
    I use Lightroom for a lot of subtle retouching, so I can imagine the files are large. In Slideshow I would understand that the CPU simply doesn't have time to render the image with all the alterations I apply to it -- but in Develop and Library mode I can wait and wait, and the image never shows up. Then I go to Develop, and there it is!
    I store my images on a separate dedicated external hard disk connected through USB 2.0. I have already browsed through the forum and found a suggested fix in changing the Windows paging-file setting. I tried it, but it made no difference; the problem still occurs with the same frequency. I have not installed LR 1.1 yet; the negative reports on this forum have scared me off, and I really don't need my database messed up right now.
    Any suggestions you might have will be greatly appreciated!
    Rogier Bos
    Rotterdam, The Netherlands.

    Guys,
    Before I start this rant... I LOVE LIGHTROOM... but no more ADOBE, I cannot stand it anymore... sort out the OOM errors now, before I put my Lightroom CD into the shredder and then post this SHAM on every ADOBE forum I can find.
    I have experienced the out of memory error 11 times today. I am working with 30,000+ images in Lightroom 1.1 on XP, but only looking through 58 files while doing an import of 100 files I get the "OUT OF MEMORY A2" error. This looks suspiciously like another of the, now famous, debug errors used by the developers to track the 'pain' issues. I would challenge Adobe that they know EXACTLY why this happens, a limitation in the programming techniques for their chosen development platform is highlighting issues in the Windows Memory sub-system and they cannot do diddly squat about it... they are just 'attempting' to clean up memory leaks where they can, by putting markers (A2) into the code. It would seem that they are down a creek without the proverbial paddle.
    Tom Hogarty & ADOBE, listen up... stabilize the memory issues and stop listening to your marketing bods, who are probably leaning over your shoulder as I write these words. You are going to suffer at the hands of Aperture and Capture One if you don't get your act together. I have used both of the above with little or NO errors at all.
    Tell us, ADOBE, why do none of your engineers or developers talk openly about OOM errors? Stop silencing them... let's get a discussion going; there are some very capable developers out there willing to help for FREE! Are you mad... admit the problems and get people to help... Open Standards could teach you guys a thing or two!
    Remember: for every one of us that actually has the balls to complain, there are a thousand who just sit and suffer, not knowing what to do!
    Fix the OOM issues before you do anything else this week!! Provide an interim 1.1.x update and make people happy they bought ADOBE and above all else remember at the end of the day these people PAY your wages.
    ...Now where is my Capture One license key...
    Damon Knight
    Photographer
    London

  • Out of memory - Cannot allocate 2GB memory to SGA - SuSE 9 / 10gR2

    I need help, please!
    I cannot allocate more than 2GB of memory to the SGA.
    SHMMAX:
    SUsE:/home/oracle # /sbin/sysctl -p
    kernel.shmall = 2097152
    kernel.shmmax = 3221225472
    kernel.shmmni = 4096
    kernel.sem = 250 32000 100 128
    fs.file-max = 65536
    net.ipv4.ip_local_port_range = 1024 65000
    net.core.rmem_default = 1048576
    net.core.rmem_max = 1048576
    net.core.wmem_default = 262144
    net.core.wmem_max = 262144
    vm.disable_cap_mlock = 1
    SUsE:/home/oracle #
    When starting up the database:
    ORA-27102: out of memory
    Linux Error: 12: Cannot allocate memory
    Additional information: 1
    Additional information: 4620291
    Dell PowerEdge 2800 - 6GB memory
    SuSE 9
    Oracle 10g Std.
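    With shmmax already set to 3 GB, "Linux Error: 12: Cannot allocate memory" (ENOMEM) usually points at the oracle process's own address space rather than the kernel parameters: on a 32-bit kernel a process gets roughly 3 GB of user space, and the default mapped base leaves under 2 GB of contiguous room for a single SGA segment, which matches the ~2 GB wall described here. A hedged sketch of the checks follows; the addresses and sizes are typical x86 defaults, not measured values from this system:

```shell
# Sketch: distinguish kernel shared-memory limits from per-process
# address-space limits when ORA-27102 reports ENOMEM.
uname -m     # i686 => 32-bit kernel; x86_64 => 64-bit
ipcs -lm     # kernel-side shared-memory limits, human readable

# Rough 32-bit arithmetic (typical x86 defaults, illustrative only):
MAPPED_BASE=$(( 0x40000000 ))              # default start of the mmap area
USER_SPACE=$(( 3 * 1024 * 1024 * 1024 ))   # ~3 GB of user address space
echo "room above mapped base: $(( (USER_SPACE - MAPPED_BASE) / 1024 / 1024 )) MB"
# Shared libraries also live in that region, so the usable contiguous
# window for one SGA segment is smaller still in practice.
```

    On 64-bit SuSE with a 64-bit Oracle build this address-space ceiling effectively disappears, which is the usual long-term fix; on 32-bit systems, the commonly cited workarounds involve lowering the process's mapped base or relinking Oracle to attach the SGA at a lower address.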

    You're most likely correct - I have been running without issue with the existing kernel params and SGA set to 1536M - but at this point, I just want to get back to my original settings to start the first node, then do additional research on the kernel params and SGA settings. So, any help in setting the SGA back to what I had previously would be most appreciated.
    Here are my kernel params:
    kernel.shmall = 2097152
    kernel.shmmax = 2147483648
    kernel.shmmni = 4096
    kernel.sem = 250 32000 100 128
    net.ipv4.ip_local_port_range = 1024 65500
    net.core.rmem_default = 1048576
    net.core.rmem_max = 4194304
    net.core.wmem_default = 262144
    net.core.wmem_max = 1048536

  • Out of memory Exception - Citrix

    In our organization we have about 200 Primavera users; we deliver the application via a farm of 3 load-balanced Citrix servers. The database sits on Oracle 10g. We recently upgraded to P6.2.1. The upgrade was successful, except that users sometimes receive the following error:
    TCVirtualTable.LoadDelyedMemo: table=(Activity
    Resource Summary),sql=(SELECT spread_data FROM
    trsrcsum WHERE taskrsrc_sum_id= :taskrsrc_sum_id)
    , Exception=(out of memory)
    The error generally occurs when a Filter or Layout is applied.
    Further information:
    Behavior is temperamental, sometimes Filters / Layout load ok.
    Error cannot be reproduced by logging directly onto Citrix.
    Error cannot be reproduced by using stand alone client instead of Citrix.
    Does anyone have any experience of dealing with this / similar out of memory errors?
    Many thanks

    user12211922 wrote:
    In our organization we have about 200 Primavera users; we deliver the application via a farm of 3 load-balanced Citrix servers. The database sits on Oracle 10g. We recently upgraded to P6.2.1. The upgrade was successful, except that users sometimes receive the following error:
    TCVirtualTable.LoadDelyedMemo: table=(Activity
    Resource Summary),sql=(SELECT spread_data FROM
    trsrcsum WHERE taskrsrc_sum_id= :taskrsrc_sum_id)
    , Exception=(out of memory)
    The table that error is referencing (TRSRCSUM) holds summary information about your activity resource assignments.
    It's quite possible there is something wrong with your summary data, causing this error. What I would do is delete all summary data (highlight all of your projects, then right-click and select 'Delete project summaries...') and then resummarize them by highlighting all projects and selecting 'Summarize Project'.
    Note: This could take a long time depending on how many projects you have and how large they are.
    See if this helps at all and post back.

Maybe you are looking for

  • List all fonts used in the illustrator document

    Hi All, Having searched google I can't seem to find what I am after but am hoping someone here might be able to help me. Due to a recent issue with a designer using a font that wasn't part of the brand guidelines for a particular company we would now

  • Problem: Plugging an LED-display into my MBP 15"

    Hello! I recently bought a BenQ G2222HDL 21.5" LED Monitor and I have encountered some problems when trying to connect it to my MBP via the Mini DisplayPort to DVI-adapter. My MBP (spring 2010) won't find the display but when I try to connect the dis

  • Paste ( cmd+v ) not working

    For some reason I can not paste into any application anymore using cmd + v on my MacBook running Lion 10.7. Copy ( cmd+c ) seems to work though. So far I have tried repairing permission and setting all keyboard shortcuts to default. Didn´t help. Anyo

  • File write error.

    I am trying to write a flat file out from a BPEL process. I get the following error when I invoke my partner link which is the Oracle File adaptor: <bindingFault> <part name="code" > <code>null</code> </part> <part name="summary" > <summary>file:/C:/

  • Audit logs for read operation on tables

    I have a requirement of implementing audit logs for tables on read / select operation in addition to insert,update,delete operations. Is there any way to achieve this since triggers are present only for insert,update and delete ? thanks in advance