Out of memory with lots of memory

This is the log output:
05/08/27 19:18:05 Runtime.getRuntime().freeMemory()=856 M
05/08/27 19:18:05 Runtime.getRuntime().totalMemory()=1012 M
05/08/27 19:18:05 In use=156 M
05/08/27 19:18:05 Runtime.getRuntime().maxMemory()=1012 M
05/08/27 19:18:09 java.lang.OutOfMemoryError
As you can see, there is plenty of free memory, and yet the OutOfMemoryError still occurs.
I run a standalone server with 1.5 GB of RAM. The heap size is set to 1024 MB, and -Xss and -XX:MaxPermSize are also set to appropriate values.
What could be the source of the error?

That must be puzzling. I do not have the answer, but a little more data from you would be helpful.
If you are using Windows, can you check Task Manager to verify how much memory your Java process is using? On Unix, use the "top" command. Set your MaxPermSize to 256M just to be sure. If the process hangs when the OutOfMemoryError happens, take a full thread dump with Ctrl+Break on Windows or Ctrl+\ on Unix/Linux to see what is happening. If a stack trace of the thread that causes the OutOfMemoryError can be obtained, that will be the most helpful.
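One way to capture that stack trace even when the error is swallowed is a default uncaught-exception handler that logs the heap state at the moment of failure. A minimal sketch, using modern Java syntax for brevity (the class name and log format are illustrative, not from the original post):

    import java.io.PrintWriter;
    import java.io.StringWriter;

    public class OomTrap {
        public static void install() {
            Thread.setDefaultUncaughtExceptionHandler((thread, err) -> {
                if (err instanceof OutOfMemoryError) {
                    Runtime rt = Runtime.getRuntime();
                    // Record which thread died and the heap state at that moment.
                    System.err.printf("OOM in thread %s: free=%dM total=%dM max=%dM%n",
                            thread.getName(),
                            rt.freeMemory() / (1024 * 1024),
                            rt.totalMemory() / (1024 * 1024),
                            rt.maxMemory() / (1024 * 1024));
                }
                StringWriter sw = new StringWriter();
                err.printStackTrace(new PrintWriter(sw, true));
                System.err.println(sw); // full stack trace of the failing thread
            });
        }
    }

An OutOfMemoryError thrown while plenty of heap is free often points at native memory or thread stacks rather than the Java heap, and the thread name and trace from such a handler help tell those cases apart.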

Similar Messages

  • I am running out of memory on my hard drive and need to delete files. How can I see all the files/applications on my hard drive so I can see what is taking up a lot of room?

    Thanks!
    David

    Either of these should help.
    http://grandperspectiv.sourceforge.net/
    http://www.whatsizemac.com/
    Or search 'disk size' in the App Store.
    Be careful with what you delete, and have a backup BEFORE you start. You may also want to reboot to try to free any memory that may have been written to disk.

  • Problem with out of memory and reservation of memory

    Hi,
    we are running a very simple Java program on HP-UX that does some text substitution, replacing special characters with other characters.
    The files being converted are sometimes very large, and we have now reached the point where the Java server doing the work crashes with an "Out of memory" message (no stack trace) when it processes a single 500 MB file.
    I have encountered this error before (with smaller files) and fixed it by enlarging the maximum heap, but now when I try to set the heap to 4000M
    I get the message:
    "Error occurred during initialization of VM
    Could not reserve enough space for old generation heap"
    When it crashes with this message, my settings are:
    -XX:NewSize=500m -XX:MaxNewSize=1000m -XX:SurvivorRatio=8 -Xms1000m -Xmx4000m
    If I run with -Xmx3000m instead, the Java program starts, but I get an out of memory error like:
    java.lang.OutOfMemoryError
    <<no stack trace available>>
    The GC log file created when it crashes looks like:
    <GC: -1 31.547669 1 218103808 32 219735744 0 419430400 0 945040 52428800 0 109051904 524288000 877008 877008 1048576 0.934021 >
    <GC: -1 62.579563 2 436207616 32 218103808 0 419430400 945040 944592 52428800 109051904 327155712 524288000 877008 877008 1048576 2.517598 >
    <GC: 1 65.097909 1 436207616 32 0 0 419430400 944592 0 52428800 327155712 219048400 524288000 877008 877008 1048576 2.061976 >
    <GC: 1 67.160178 2 436207616 32 0 0 419430400 0 0 52428800 219048400 219048400 524288000 877008 877008 1048576 0.041408 >
    <GC: -1 128.133097 3 872415232 32 0 0 419430400 0 0 52428800 655256016 655256016 960495616 877008 877008 1048576 0.029950 >
    <GC: 1 128.163584 3 872415232 32 0 0 419430400 0 0 52428800 655256016 437152208 960495616 877008 877008 1048576 3.971305 >
    <GC: 1 132.135106 4 872415232 32 0 0 419430400 0 0 52428800 437152208 437152208 960495616 877008 876656 1048576 0.064635 >
    <GC: -1 256.378152 4 1744830464 32 0 0 419430400 0 0 52428800 1309567440 1309567440 1832910848 876656 876656 1048576 0.058970 >
    <GC: 1 256.437652 5 1744830464 32 0 0 733282304 0 0 91619328 1309567440 873359824 1832910848 876656 876656 1048576 8.255321 >
    <GC: 1 264.693275 6 1744830464 32 0 0 733282304 0 0 91619328 873359824 873359824 1832910848 876656 876656 1048576 0.103764 >
    We are running:
    java version "1.3.1.02"
    Java(TM) 2 Runtime Environment, Standard Edition (build 1.3.1.02-011206-02:17)
    Java HotSpot(TM) Server VM (build 1.3.1 1.3.1.02-JPSE_1.3.1.02_20011206 PA2.0, mixed mode)
    We have 132 GB of physical memory and a lot of unused swap space, so I can't imagine that is the problem.
    Can anyone suggest how to proceed with troubleshooting, or which settings to change? I'm not really into Java, so I could use some help.
    Usually the Java program handles thousands of smaller files (around 500 KB to 1 MB each).
    Thanks!

    You have a one-to-one mapping, where one character is replaced with another?
    And all you do is read the file, replace, and then write?
    Then there is no reason to have the entire file in memory; see the sketch after this reply.
    Other than that, you need to determine whether the VM (which is not a Sun VM) has an upper memory bound, i.e. a limit the VM will not go beyond regardless of the memory in the system.
    "We have 132GB of physical memory and a lot of not used Swap space"
    One would wonder why you have swap space at all.
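    To make that concrete: a minimal sketch of streaming replacement, assuming the one-to-one character mapping described above. The buffer size is an arbitrary choice, and it uses modern try-with-resources for brevity; on the 1.3-era VM in this thread you would close the streams in a finally block instead.

        import java.io.*;

        public class StreamingReplace {
            // Copies input to output, replacing characters through a small buffer,
            // so memory use stays constant regardless of file size.
            public static void replace(File in, File out, char from, char to) throws IOException {
                try (Reader r = new BufferedReader(new FileReader(in));
                     Writer w = new BufferedWriter(new FileWriter(out))) {
                    char[] buf = new char[64 * 1024]; // 64 KB working buffer
                    int n;
                    while ((n = r.read(buf)) != -1) {
                        for (int i = 0; i < n; i++) {
                            if (buf[i] == from) buf[i] = to;
                        }
                        w.write(buf, 0, n);
                    }
                }
            }
        }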

  • Out of memory with no swap causes disk activity

    Can someone explain what exactly is being read/written from/to disk in this situation?
    I have 2 GB of RAM and no swap partitions. Occasionally I'll forget how inefficient gwenview is at displaying very large images and accidentally double-click one. The entire system freezes; even alt+sysrq keystrokes are ineffective (and yes I do have them enabled).
    For about 5 minutes, the system is locked up and the hard drive light is flickering. That scares me a bit, because with no swap, what could it possibly be doing for 5 straight minutes? I used to think it was syncing before doing the OOM-killing, but there's no way a sync could take that long. Judging from the sound of the hard drive, it's hda (the drive that / and all the other system partitions are on).
    A few times after recovering from this I've run extensive data verification and never found any evidence of corruption, but I'd like to know for sure that the kernel isn't randomly deciding to use some filesystem as swap space.
    In the meantime, I'm playing with disabling overcommit, setting vm.overcommit_memory = 2 in /etc/sysctl.conf. That enforces a hard memory commit limit of swap size + overcommit_ratio * RAM size (so I've read), and I've also read that the default overcommit_ratio is only 50%. What the bloody hell? It's almost like someone thinks swap is more important than RAM. Hell-llo, I have 2 GB of RAM so that I can get *away* from swap!
    Anyway, I've set the ratio to 97% and so far things seem happy. If I deliberately run out of memory, the process that did it always gets killed instantly, and the system doesn't freeze up on OOM anymore.
    Another thing: in all my out of memory situations so far, VMware has been running. I suppose it's possible that VMware is the one doing the swapping; I'll have to investigate that further.
    ~Felix.
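    To spell out the arithmetic behind that complaint: with no swap and the defaults, the commit limit is 0 + 0.5 × 2 GB = 1 GB, so half of the RAM is off-limits for committed allocations; raising overcommit_ratio to 97% lifts the limit to roughly 0.97 × 2 GB, or about 1.94 GB.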

    I think I've finally figured this out. It's a kernel bug. I'm guessing that under normal circumstances, the "cached" column in the free command "doesn't count" towards how much memory the system thinks it's using. After all, it's just cached copies of stuff that lives elsewhere, and if you run out of memory, you can safely dump it, right? Unfortunately, /dev/shm is counted under cached rather than used memory (as I discovered in an earlier post).
    So if I've got 500 MB of stuff in /dev/shm* (which is where I mount my /tmp), there's now 500 MB of stuff in the "cached" column that really does count. The system fills all of its RAM, decides it needs to drop caches, and suddenly finds that the 500 MB it thought it could use isn't reclaimable. For some reason it takes about 5 minutes of hard drive thrashing (probably because it has already chucked all of the system libraries and so on out of the cache and needs to re-read them from disk every time) before something finally figures out that it really is out of memory, that that 500 MB isn't letting go, and invokes the OOM killer.
    *: VMware does this; it creates a 512 MB file (the amount of RAM in my virtual machine), then hides it by keeping the file open and deleting it, so the inode is still there, but you can't see it, and it makes the df command really perplexing... but that's another story.
    I haven't had a chance to try this with a newer kernel (maybe they've fixed it by now?); I'm still running 2.6.23-ARCH here. (pacman -Syu upgrades are a major production for me because I have lots of RAID arrays, an nvidia graphics card, gnucash, which sometimes needs manual recompiling, and so on...)

  • Server goes out of memory when annotating TIFF File. Help with Tiled Images

    I am new to JAI and have a problem with the system going out of memory.
    Objective:
    1) Load a TIFF file (each approx. 5 to 8 MB when compressed with CCITT T.6 compression)
    2) Annotate the image (consider it a simple drawString with the Graphics2D object of the RenderedImage)
    3) Send it to the servlet OutputStream
    Problem:
    The server goes out of memory when 5 threads try to access it concurrently.
    Runtime conditions:
    VM param set to -Xmx1024m
    Observation:
    Writing the files takes a lot of time compared to reading them.
    Some more information:
    1) I need to do the annotating at pre-defined positions on the images (e.g. in the first quadrant, or maybe in the second).
    2) I know that using the TiledImage class it's possible to load a portion of the image and process it.
    Things I need help with:
    I do not know how to send the whole file back to the servlet output stream after annotating a tile of the image.
    If I write the tiled image back to a file, or to the output stream, I get only the portion of the tile I read in and watermarked, not the whole image file.
    I have attached the code I use when I load the whole image.
    Could somebody please help with the TiledImage solution?
    Thanks
    public void annotateFile(File file, String wText, OutputStream out, AnnotationParameter param) throws Throwable {
        ImageReader imgReader = null;
        ImageWriter imgWriter = null;
        TiledImage in_image = null, out_image = null;
        IIOMetadata metadata = null;
        ImageOutputStream ios = null;
        try {
            // Read the source TIFF and its metadata.
            Iterator readIter = ImageIO.getImageReadersBySuffix("tif");
            imgReader = (ImageReader) readIter.next();
            imgReader.setInput(ImageIO.createImageInputStream(file));
            metadata = imgReader.getImageMetadata(0);
            in_image = new TiledImage(JAI.create("fileload", file.getPath()), true);
            System.out.println("Image Read!");
            // Annotate the image.
            Annotater annotater = new Annotater(in_image);
            out_image = annotater.annotate(wText, param);
            // Write the annotated image to the servlet output stream.
            Iterator writeIter = ImageIO.getImageWritersBySuffix("tif");
            if (writeIter.hasNext()) {
                imgWriter = (ImageWriter) writeIter.next();
                ios = ImageIO.createImageOutputStream(out);
                imgWriter.setOutput(ios);
                ImageWriteParam iwparam = imgWriter.getDefaultWriteParam();
                if (iwparam instanceof TIFFImageWriteParam) {
                    // Carry the compression type over from the source TIFF directory.
                    iwparam.setCompressionMode(ImageWriteParam.MODE_EXPLICIT);
                    TIFFDirectory dir = (TIFFDirectory) out_image.getProperty("tiff_directory");
                    double compressionParam = dir.getFieldAsDouble(BaselineTIFFTagSet.TAG_COMPRESSION);
                    setTIFFCompression(iwparam, (int) compressionParam);
                } else {
                    iwparam.setCompressionMode(ImageWriteParam.MODE_COPY_FROM_METADATA);
                }
                System.out.println("Trying to write Image ....");
                imgWriter.write(null, new IIOImage(out_image, null, metadata), iwparam);
                System.out.println("Image written....");
            }
        } finally {
            if (imgWriter != null)
                imgWriter.dispose();
            if (imgReader != null)
                imgReader.dispose();
            if (ios != null) {
                ios.flush();
                ios.close();
            }
        }
    }

    user8684061 wrote:
    "You are right, the SGA is too large for my server. I guess Oracle set the SGA automatically when I chose the default installation, but why would the SGA be so big? Is Oracle not smart enough?"
    The default database configuration reserves 40% of physical memory for an instance's SGA, which you as a user can always change. I don't see anything wrong with that, or any reason to call Oracle not smart.
    "If I don't decrease the SGA, but increase max-shm-memory, would it work?"
    That needs support from the CPU architecture (32-bit or 64-bit) and from the kernel as well. Read more about huge pages.

  • I'm getting an "Out of memory" error message when I try to render - there's lots of memory

    I'm getting an "Out of memory" error message when I try to render. There's lots of memory, and I've never gotten this before. Any ideas?

    Thanks. I did go through and change all the profiles to Apple RGB (there are several to choose from), but I'm still having problems. I think it has to do with corrupt images, so I'm backing up and starting over. I've worked with images large and small for years and never had this problem. Thanks.

  • Out of memory issues with PSE 8

    I am using PSE 8 on a 64-bit Dell desktop computer with an Intel Core i7 920 CPU at 2.67 GHz and 8 GB of RAM. My operating system is Windows 7, 64-bit.
    My problem is that I get out-of-memory or insufficient-RAM messages from some tools in the PSE Editor when my memory utilization reaches 37 to 38%. In other words, even though my computer is telling me I have almost 4 GB of memory left, PSE is saying it does not have enough memory to complete the operation. It looks to me as if PSE is only using 4 GB of my 8 GB of RAM. Is this true, and what do I need to do to allow PSE to utilize all of my available RAM?

    Thanks, that does answer what the problem is, but it doesn't necessarily give a solution. I like working with 8 to 10 pictures (files) in the editor tray at a time. I make whatever changes are needed to each, and then group 4 or 5 into an 8.5 x 11 collage. Each picture in the collage is a separate layer, and each picture may have multiple layers of its own. I print the collage on 8.5 x 11 photo paper and then put the page in a photo album. I like the pictures in different sizes, orientations, and sometimes shapes, so the album and multiple-picture options offered in PSE are not much help. My process eats a lot of memory, which I mistakenly thought my 8 GB of RAM would solve.
    Anyway, now that I know the limitations, I can adjust the process to avoid the memory issue, and hopefully a future version of Elements will accommodate 64-bit.
    I am wondering, though: do I need to look at other programs, or am I missing a PSE function that would make my chore easier?

  • Out of memory Error with jdk 1.6

    Hello,
    I have a Swing application launched on the client via Java Web Start. The application works fine on JRE 1.4 and JRE 1.5. The heap sizes are:
    initial-heap-size="5m" max-heap-size="24m"
    But when I run it using JRE 1.6.0_05-b13, I get an OutOfMemoryError (Java heap space), and I see memory usage growing rapidly, which I didn't notice on the other JRE versions (1.4 and 1.5).
    Does anyone have any idea about this?
    Thanks in advance,
    MR.

    Thanks for your response, Peter. During my continued testing I found that the error happens on JDK 1.5 as well. I have increased the min heap size to 24 MB and the max heap size to 64 MB, but I still get the out of memory error. The interesting thing is that the heap never grows beyond the initial 24 MB, and there is a lot of free memory as well:
    Memory: 24,448K Free: 12,714K (52%) ... completed.
    The OutOfMemoryError is thrown from the reader thread, which does the job of reading data from the InputStream. One thing to note: we are continuously pushing more data onto the output stream from the other end. Is there any limit on how much data an InputStream can hold?
    Please shed some light on this.
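    On that last question: an InputStream itself does not buffer unbounded data (the sender blocks once the socket or pipe buffer fills), but a reader that accumulates everything it reads into one ever-growing buffer will exhaust a 24 to 64 MB heap. A minimal sketch of bounded, chunk-at-a-time reading; the process method is a hypothetical placeholder, not from the original application:

        import java.io.IOException;
        import java.io.InputStream;

        public class ChunkedReader {
            // Reads the stream in 8 KB chunks; memory use stays bounded because
            // no chunk is retained after it has been processed.
            static void drain(InputStream in) throws IOException {
                byte[] buf = new byte[8192];
                int n;
                while ((n = in.read(buf)) != -1) {
                    process(buf, n); // handle the chunk, do not store it
                }
            }

            static void process(byte[] buf, int len) {
                // application-specific handling goes here
            }
        }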

  • Out Of Memory with tons of free mem

    I need help diagnosing a problem I'm having with a server application.
    The problem:
    I'm getting general flakiness and sometimes "OutOfMemory: cannot create new native thread" after the application has been running for several hours (with 150 or so logins and 300 or so signs rendered).
    The environment:
    We run Tomcat 3.2.4, Xalan 2, JAXP 1.1, MySQL with the mm.mysql JDBC driver v1.2.0.13, and JDK 1.3.1_03
    on a SunFire V880 with 4 processors and 8 GB of RAM, running Solaris 8. JVM args are set to -server -Xms3g -Xmx3g -XX:NewSize=1024m -XX:MaxNewSize=1024m -XX:SurvivorRatio=16 -XX:+DisableExplicitGC
    What I've tried:
    I've run under OptimizeIt and found only small amounts of memory being consumed over time, something the GC ought to be able to handle. I've played with the GC parameters in an effort to get this problem under control. I monitor free memory and can see it go up and down throughout the day. The lowest the free memory has come down to is 900 MB; it then bounces back up to 2.7 GB or so. I suspected fragmentation, but from what I've read, HotSpot shouldn't allow that to happen.
    That exception is thrown from native code in "jvm.cpp", in the JVM_StartThread() method. The comments say it can happen when memory is low (duh).
    Our App:
    We render in-store signs for a major US department store. Our app provides simplified sign composition. We render the signs for printing (generating PostScript for the printers). Sometimes we deal with buffers as large as 112 MB. We use XML for sign templates and jobs. We emit XML from servlets, using XSLT to transform it to HTML for the browser. Our sign templates use some XSLT to add common things into the templates as they are loaded. We support image upload to a repository that the sign creators can use in their signs. So we deal with lots of image data and XML throughout the app, and we hit the database throughout the workflow.
    HELP!

    Hi,
    I had the same problem under Linux, but found a solution:
    There are two different factors in this problem:
    the "-Xss" option of Java, and the "stack size" as reported/set by ulimit:
    ulimit -a
    core file size (blocks) 0
    data seg size (kbytes) unlimited
    file size (blocks) unlimited
    max locked memory (kbytes) unlimited
    max memory size (kbytes) unlimited
    open files 1024
    pipe size (512 bytes) 8
    stack size (kbytes) 8192
    cpu time (seconds) unlimited
    max user processes 16896
    virtual memory (kbytes) unlimited
    When using Java 1.3.1, the "-Xss" option is ignored, so only the "stack size" from ulimit counts. Setting the stack size with ulimit is done using the -s option:
    ulimit -s 8192
    This sets the stack size per Java thread to 8 MB.
    The default stack size in Red Hat Enterprise Server 3.1 is 10 MB. When a Java thread is started, this 10 MB is not taken immediately, but the Linux kernel does guarantee the thread 10 MB if it needs it. Suppose you have 2 GB of memory in your machine; then after starting 200 Java threads, the lights go out even though "top" reports 1.5 GB of free memory. The memory is free, but the kernel has already promised all of it (200 x 10 MB = 2 GB) to the 200 executing Java threads.
    Under Java 1.4.2, the -Xss option is not ignored. It has the same effect as the "stack size" set by ulimit, so specifying a large amount of memory for -Xss will limit the number of threads you can start.
    Using Java 1.4.2, it seems both settings have to be tuned.
    Greetz,
    Chris Twigt
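    The thread-stack limit Chris describes is easy to reproduce. A minimal, hypothetical sketch (modern Java syntax for brevity; the class name is illustrative) that parks threads until the VM gives up, so you can watch how the count changes as you vary -Xss or ulimit -s on a VM that honors them:

        public class ThreadLimit {
            public static void main(String[] args) {
                int count = 0;
                try {
                    while (true) {
                        Thread t = new Thread(() -> {
                            try {
                                Thread.sleep(Long.MAX_VALUE); // park so the stack stays reserved
                            } catch (InterruptedException ignored) {
                            }
                        });
                        t.setDaemon(true);
                        t.start();
                        count++;
                    }
                } catch (OutOfMemoryError e) {
                    System.out.println("Created " + count + " threads before: " + e);
                }
            }
        }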

  • NIGHTMARE with ERROR OUT OF MEMORY

    I was about to finish an 80-hour editing project, and suddenly this morning my FCP says ERROR: OUT OF MEMORY! So NOW I can't open the editing sequence anymore.
    HELP! I am desperate!
    Any ideas??

    Thanks a lot! I read another post more or less like this, and somebody mentioned a little program named FCP Rescue, and it works really well!
    I erased all render files, then with this program (I still don't understand exactly how it works) deleted the FCP preferences (with FCP closed; no instructions were available for this little program), and then took the sequence offline.
    YES! I think the problem was a CMYK graphic I had. So I opened the sequence (everything was offline, as it was supposed to be), opened a new project under a different name (as a precaution), and copy-pasted all the edits of the original editing sequence into the new project, but I left out the CMYK graphic. It is now working fine in the new project.
    I have to say that the old project was still having troubles even after I erased the CMYK graphic, so...
    it's better to work in a new project.
    Thanks a lot for your responses, VERY VERY USEFUL!!
    Ah, one more thing: Apple should warn users that FCP is not compatible with CMYK graphics and that they will cause conflicts.

  • Data merge with football tickets. Printing causes out of memory errors. Can only print 10 at a time!

    Here's the deal. I'm printing football tickets for the local high school: 10 tickets per 8.5 x 11 sheet, all black and white. I couldn't data merge into one whole document of tickets because the merge kept failing, so I ended up with 3 documents. The first two have 50 pages of tickets and the last has 37 pages.
    When I go to print, I can only print 10 pages at a time, or InDesign straight up crashes, or I get an "Out of Memory" error, which is bull because I've got plenty of RAM available (unless it's complaining about a lack of HD space for some reason).
    Incredibly frustrating.
    Each ticket has 3 images on it. Two of the images are the same; those two are .ai files of about 103 KB each, very very simple. The other image is a TIFF of about 76 KB. So there are 1000 .ai files and 500 TIFFs per document.
    Here's a video I took of it ripping: http://www.youtube.com/watch?v=UCjvdmXuaYs&feature=c4-overview&list=UUf_1UFp80YLZJfCyIxXEgkg
    Also, when printing, I have Send Data set to Optimized Subsampling instead of "All". That doesn't seem to help.
    Any thoughts?
    IDCS 5.5
    OSX 10.8.4

    I just ran a sample on my laptop.
    The AI file, duplicated, is 284 KB. The image is roughly 200 KB. The CSV has 2054 records, and the merged document with 10 per page is 250 MB. The PDF is just under 50 MB.
    This laptop has only 4 GB of memory but a lot of free disk space. It took a good amount of time to do the merge, and a bit longer to create the PDF/X-1a PDF.
    Mike

  • Problems with PNGs... Overall compression + Running out of memory!

    We're having a number of issues with PNGs while working on our first iPhone project, and any assistance would be greatly appreciated!
    Our game uses a large number of PNG assets, some of which are full-frame (though the full-frame files tend to be mostly transparent/use alphas; apparently that doesn't help memory issues much).
    We're running into two huge problems:
    1) We're running out of memory on the device when calling up these full-frame sequences, which tend to be anywhere from 10 to 40 frames each, at anywhere from 50 to 250 KB per frame.
    2) Our overall package size is huge, sitting at around 60 MB. I've already compressed the PNGs through Photoshop to the best of my ability, and I'm not having much luck with downloadable compressors like pngcrush. Is there a way to compress all the PNGs through Xcode/C++/Objective-C? The programmers are telling me that the only compression possible is whatever I apply directly to the PNGs on my end; nothing through code.
    I'm stumped as to how I'm seeing seemingly complex apps with plenty of content at 1 to 5 MB, running smoothly with full-frame animations. I imagine the problem is that we're not using a proprietary engine to properly manage things, but I'm wondering if there is a simple solution.
    Thanks in advance, guys!
    P.S. I've already done a bunch of research on my own without much luck. Just wondering if there is something obvious that both the programmers and I are missing!

    * Make the image frame smaller? A fully transparent border around an image is pure overhead: there is still data in the actual pixels (even though you cannot see them), plus the transparent border itself. All it needs is an x/y coordinate and a tiny code adjustment.
    * Use fewer transparency bits? Perhaps you could do with 1-bit transparency on some images. It'll use 1/8 of the memory for the transparency data.
    * Use fewer colors? 24-bit color might compress down to 16 bits without visual artifacts (especially when you don't have lots of gradients). Perhaps even 8-bit palettized.
    * Make the images smaller? You might be able to get away with storing some images at a smaller size and enlarging them at display time.
    Just a few things you could check right away, without major rewrites.
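    One number worth keeping in mind while weighing these options: a PNG's file size says little about its memory cost, because the image is decompressed on load. A full-frame 480 x 320 image at 32 bits per pixel occupies 480 x 320 x 4 = 614,400 bytes, roughly 600 KB of RAM, so a 40-frame sequence needs about 24 MB no matter how small the files are on disk.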

  • "Out of memory" - with silly proportions

    Hello everyone,
    Even though I'm not the most frequent user of Encore, I do DVD productions in my studies from time to time. So I was surprised to get an error on my new computer configuration, and I cannot help but think there might be something wrong that I do not know of.
    I run a Wintel machine with:
    * Windows 7 Ultimate, 64-bit (updated, genuine build; no beta/RC)
    * 8 GB RAM
    * HDDs with at least 20 GB left on the system drive and 60 GB left on the drive with the projects
    * CS4 Master edition (updated, student licence)
    and I am trying to render a project with:
    * ONE menu, 6 chapters
    * a 3-second transition effect
    * a pre-rendered (in Premiere) DVD MPEG-2 file that is 1.1 GB in size
    No matter how I do it (making new timelines, re-rendering the main file, burning directly vs. to an ISO file vs. to a folder), I get an "Out of memory" failure at the end of the build.
    So how can it go wrong? Is Encore CS4 not compatible with Win7 64-bit, or what? Win7 and 8 GB of RAM are the only things that are new since I last rendered a DVD.
    Thank you for your quick replies; I need to get this sorted for a presentation today... preferably.
    Best regards

    >Windows 7, 64-bit Ultimate
    Scan the forum for titles, or do a search... LOTS of problems with Win7 64-bit and Encore.
    Try this: http://www.microsoft.com/windows/virtual-pc/download.aspx
    I have not used it (I'm building a Win7 computer next year), but it MAY help.
    Some other ideas...
    Win7 Help: http://social.technet.microsoft.com/Forums/en-US/category/w7itpro/
    Compatibility: http://www.microsoft.com/windows/compatibility/windows-7/en-us/Default.aspx
    Adobe Notes: http://kb2.adobe.com/cps/508/cpsid_50853.html#tech
    Optimizing: http://www.blackviper.com/Windows_7/servicecfg.htm

  • Create database fails with ORA-27102 -out of memory

    Hi,
    I have a Solaris 10 server with 16 GB of RAM. There are 10 databases running on it (8 of them 9.2.0.7 and 2 of them 10.2.0.4), but they have small SGAs, 300 MB each (some even smaller, 200 MB or so). Now I have to create two more databases on it. When I try to create a DB, it fails with the error:
    Connected to an idle instance.
    ORA-27102: out of memory
    SVR4 Error: 22: Invalid argument
    And the alert log has messages like this:
    Starting ORACLE instance (normal)
    Tue May 26 07:37:39 2009
    WARNING: EINVAL creating segment of size 0x0000000029002000
    fix shm parameters in /etc/system or equivalent
    Also see the output of this command:
    prctl -n project.max-shm-memory -i project user.root
    project: 1: user.root
    NAME    PRIVILEGE       VALUE    FLAG   ACTION                       RECIPIENT
    project.max-shm-memory
    privileged      3.92GB      -   deny                                 -
    system          16.0EB    max   deny                                 -
    I tried to change this with the following command (as suggested in the installation guide):
    prctl -n project.max-shm-memory -v 8gb -r -i project user.root
    but I still get the same error. So I referred to Metalink document 399895.1. It says to manually change the settings in /etc/system; this requires a reboot, and I have permission to do the reboot tomorrow. But my question is: what values should I be putting in this file? As suggested in the note, should I use the values below?
    For example, the sample values mentioned in the note (an /etc/system entry setting SHMMAX to 6 GB) are:
    set shmsys:shminfo_shmmax=6442450944
    set semsys:seminfo_semmni=1024
    set semsys:seminfo_semmsl=1024
    set shmsys:shminfo_shmmni=100
    Or should I put some other values (for all the parameters like semmni, semmsl, etc.)? I am not clear on which values I should be specifying.
    Thanks

    Wow! Your help comes like an angel's helping hand! Thank you so much. I am not very knowledgeable about Solaris, so a few questions:
    Currently there is no project set up for Oracle on this server, so to make the changes permanent, the steps I need to do are:
    # projadd -c "Oracle" 'user.oracle'
    # projmod -s -K "project.max-shm-memory=(privileged,8GB,deny)" 'user.oracle'
    Correct?
    Thanks a lot again

  • I've a few issues: my old computer was out of memory and I changed to a new one which was already on another iTunes account. When I upgraded to iOS 5 my library was gone, as it had synced with the new computer. How do I restore the old iPad information?

    I've a few issues; my old computer was out of memory. I changed to a new one which already had another iTunes account (for my business). When I upgraded to iOS 5 my library disappeared, as it synced with the new computer. Is there any way I can restore the information previously on my iPad? I'd really appreciate any advice. Thanks in advance.

    Lynda,
    If you still have the old computer, you should be able to connect the iPad, run iTunes, and restore from a backup from a previous sync. Now that you have it updated, you will not need to do that again.
    Fred
