UCCX 7 Heap Memory Usage Exceeded Error

UCCX 7.0(1) SR5
Getting the following error when updating or adding new script applications:
"It is not recommended to update the application as Engine heap memory usage exceeded configured threshold. Click OK to continue and Cancel to exit."
Apparently this is an alert that was built into SR4 and is configurable under the System Parameters.
Does anyone have information on what processes use the heap memory in UCCX or how to monitor the usage?

As Tom can attest to by now, this is something of an iceberg with big sharp edges below the surface.
The Java heap is fixed at 256 MB on CCX. The Java heap is used by Tomcat as execution memory; in addition, applications, scripts, and other repository data are loaded into the heap at runtime. Depending on your environment, you may be approaching the limits of the heap, which cannot be changed. If the heap limit is reached, the heap is dumped and calls are impacted.
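The heap itself cannot be resized, but to make the mechanics concrete, here is a minimal generic JVM sketch (not UCCX-specific; the class name and the 85% threshold are illustrative assumptions) of how a process can read its own heap usage and compare it against a configured threshold, which is essentially what the SR4 alert does:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

public class HeapCheck {
    public static void main(String[] args) {
        // Read the JVM's current heap usage via the standard MemoryMXBean.
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        double usedPct = 100.0 * heap.getUsed() / heap.getMax();
        System.out.printf("Heap used: %d of %d bytes (%.1f%%)%n",
                heap.getUsed(), heap.getMax(), usedPct);
        if (usedPct > 85.0) { // 85% is an illustrative threshold, not the CCX default
            System.out.println("Warning: heap usage exceeded threshold");
        }
    }
}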
What have you been doing as of late on your CCX server? How many applications and scripts do you have? Are any of these using XML files extensively?
Note there is also a possible bug where the MIVR engine does not properly release all objects loaded into the heap at the end of a script execution, leading to a memory leak of sorts. The debate over this behavior is ongoing. As of this week, it may be tracked under
CSCte49231. If it is, this may qualify as the most poorly described defect ever.

Similar Messages

  • OAS Heap memory issue: An error "java.lang.OutOfMemoryError: GC overhead limit exceeded"

    OAS - 10.1.3.4.0
    We are running out of heap memory and seeing lots of full GCs and out-of-memory events.
    Verbose GC is on.
    Users don't know what they are doing to cause this.
    We have 30-40 users per server and 1.5 GB of heap memory allocated.
    There are no other applications on the machine, only the PRD instance with 1.5 GB allocated to the JVM. We do not have any issue with memory on the server and we could increase the heap, but we don't want to go over 1.5 GB, since I understood that to be the high end of what is recommended. We only have 30-40 users on each machine. There are eight servers, and on a typical heavy-usage day we may have one or two machines that show the out-of-memory errors or continuous full GCs in the logs. When this occurs, the phones light up with the people on those machines experiencing slowness.
    Below is an example of what we see in a file created in the OPMN log folder on the JAS server when this occurs. I think this is the log created when verbose GC is turned on. I can send you the full log or anything else you need. Thanks
    1194751K->1187561K(1365376K), 4.6044738 secs]
    java.lang.OutOfMemoryError: GC overhead limit exceeded
    Dumping heap to java_pid10644.hprof ...
    [Full GC 1194751K->1188321K(1365376K), 4.7488200 secs]
    Heap dump file created [1326230812 bytes in 47.602 secs]
    [Full GC 1194751K->1177641K(1365376K), 5.6128944 secs]
    [Full GC 1194751K->986239K(1365376K), 4.6376179 secs]
    [Full GC 1156991K->991906K(1365376K), 4.5989155 secs]
    [Full GC 1162658K->1008331K(1365376K), 4.1139016 secs]
    [Full GC 1179083K->970476K(1365376K), 4.9670050 secs]
    [GC 1141228K->990237K(1365376K), 0.0561096 secs]
    [GC 1160989K->1012405K(1365376K), 0.0920553 secs]
    [Full GC 1012405K->1012274K(1365376K), 4.1170216 secs]
    [Full GC 1183026K->1032000K(1365376K), 4.4166454 secs]
    [Full GC 1194739K->1061736K(1365376K), 4.4009954 secs]
    [Full GC 1194739K->1056175K(1365376K), 5.1124431 secs]
    [Full GC 1194752K->1079807K(1365376K), 4.5160851 secs]
    In addition to the 'overhead limit exceeded' errors, we also see:
    [Full GC 1194751K->1194751K(1365376K), 4.6785776 secs]
    [Full GC 1194751K->1188062K(1365376K), 5.4413659 secs]
    [Full GC 1194751K->1194751K(1365376K), 4.5800033 secs]
    [Full GC 1194751K->1194751K(1365376K), 4.4951213 secs]
    [Full GC 1194751K->1194751K(1365376K), 4.5227857 secs]
    [Full GC 1194751K->1171773K(1365376K), 5.5696274 secs]
    11/07/25 11:07:04 java.lang.OutOfMemoryError: Java heap space
    [Full GC 1194751K->1183306K(1365376K), 4.5841678 secs]
    [Full GC 1194751K->1184329K(1365376K), 4.5469164 secs]
    [Full GC 1194751K->1184831K(1365376K), 4.6415273 secs]
    [Full GC 1194751K->1174738K(1365376K), 5.3647290 secs]
    [Full GC 1194751K->1183878K(1365376K), 4.5660217 secs]
    [Full GC 1194751K->1184651K(1365376K), 4.5619460 secs]
    [Full GC 1194751K->1185795K(1365376K), 4.4341158 secs]

    There's an Oracle support note with a very similar MO:
    WebLogic Server: Getting "java.lang.OutOfMemoryError: GC overhead limit exceeded" exception with Sun JDK 1.6 [ID 1242994.1]
    If I search for "java.lang.OutOfMemoryError: GC overhead" on Oracle Support, it returns at least 12 documents.
    It might be bug 6065704; search Oracle Support for this bug number.
    Best Regards
    mseberg
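    To put numbers behind "most of the time is spent in GC", the state that precedes this error, a small watcher along these lines (standard java.lang.management API; the class name and 5-second interval are assumptions) can be run inside the affected JVM:

    import java.lang.management.GarbageCollectorMXBean;
    import java.lang.management.ManagementFactory;

    public class GcWatcher {
        public static void main(String[] args) throws InterruptedException {
            for (;;) {
                // Cumulative counts and times per collector; rapidly growing
                // full-GC time with little heap reclaimed is the pattern behind
                // "GC overhead limit exceeded".
                for (GarbageCollectorMXBean gc :
                        ManagementFactory.getGarbageCollectorMXBeans()) {
                    System.out.printf("%s: %d collections, %d ms total%n",
                            gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
                }
                Thread.sleep(5000);
            }
        }
    }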

  • High memory usage and error creating access rules

    Hi guys
    I'm having a problem with memory, and also when trying to create some rules on the Cisco ASA. The installed version was 8.2.5.33 on a Cisco 5520 with 512 MB of RAM. Memory usage is at 99% used, 1% free, and because of that, when I try to create a new rule the firewall gives me the error below.
    So what I did was downgrade to version 8.2(4)4, and the memory went down a little (82% used, 18% free), but I still get the error when creating an access rule on the device. One thing I'm not sure about is whether the number of access lists and object groups that are created could affect performance.
    I already opened a case with Cisco TAC and they are checking whether the problem is memory capacity or maybe a memory leak.
    The doubt I still have: with the memory I have available now, should I be able to create access rules, or is 82% still too high to create a rule or an object group?
    Regards

    Hi,
    Can you check how many ACEs you have in the ACLs in use?
    I think if you use the command "show access-list" the first line should give you the total number of ACEs in the ACL.
    - Jouni
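    For illustration, the first line Jouni mentions looks roughly like this (a sketch from memory; the ACL name and count are made up, and the exact format varies by ASA version):

    ciscoasa# show access-list outside_in
    access-list outside_in; 1024 elements

    The "elements" figure is the ACE count; because the ASA expands object groups into individual ACEs, it can be far larger than the number of configured lines, which matters for memory.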

  • Memory usage exceeds -Xmx option in Windows 2000.

    Hi,
    I run a Java application with the -Xms300M -Xmx600M options on Windows 2000.
    When I start the application, its virtual memory size is 300 MB.
    (In the Windows Task Manager, choose the Processes tab, select "Select Columns..." in the View menu, and check "Virtual Memory Size".)
    After several tests I found that its virtual memory size is bigger than 600 MB in the Windows Task Manager.
    Its virtual memory size is about 650 MB.
    Why is the application's memory usage bigger than 600 MB despite -Xmx600M?

    The -Xms and -Xmx options control the Java heap size only.
    The stack does not go on the heap, and the VM needs some memory of its own as well.
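    To see the distinction directly, a tiny sketch (the class name is an assumption) that prints the heap ceiling the JVM believes it has, for comparison with the Task Manager figure:

    public class MaxHeap {
        public static void main(String[] args) {
            // With -Xmx600M this reports roughly 600 MB, while the OS-level
            // virtual size also includes thread stacks, JIT-compiled code,
            // and the VM's own bookkeeping.
            long maxHeap = Runtime.getRuntime().maxMemory();
            System.out.printf("Max heap (-Xmx): %d MB%n", maxHeap / (1024 * 1024));
        }
    }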

  • LabVIEW 2009 build memory usage exceeds Windows limit

    We have a large application that builds without problems using 8.6.1, but failed to build using LabVIEW 2009 with the error "Not enough memory to build this application".
    We solved the problem by adding the /3GB parameter to the Windows XP boot.ini file. This increases the memory available to a Windows XP application from 2 GB to 3 GB.
    It appears that LabVIEW 2009 uses more memory during the build process and, in our case, this took it over the 2 GB limit.
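    For reference, the /3GB switch goes on the OS entry in boot.ini; the edited line can look roughly like this (the ARC path shown is an example; yours will differ):

    [operating systems]
    multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP Professional" /fastdetect /3GB

    Note that an executable must also be linked with the LARGEADDRESSAWARE flag to actually use the third gigabyte; the result above suggests the LabVIEW 2009 build is.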

    We are also facing the same problem.
    Can National Instruments help us?
    The following error occurs while creating the exe:
    "Build was unsuccessful
    An error occurred while saving the following file:
    D:\BIS_Main_2009_V3.XY\branches\BIS_Main_2009_V3.61_Trans\User Interface Panels\Inspection.vi
    Invoke Node in AB_Source_VI.lvclass:Close_Reference.vi->AB_Build.lvclass:Copy_Files.vi->AB_Application.lvclass:Copy_Files.vi->AB_EXE.lvclass:Copy_Files.vi->AB_Build.lvclass:Build.vi->AB_Application.lvclass:Build.vi->AB_EXE.lvclass:Build.vi->AB_Build.lvclass:Build_from_Wizard.vi->AB_UI_Frmwk_Build.lvclass:Build.vi->AB_UI_FRAMEWORK.vi->AB_Item_OnDoProperties.vi->AB_Item_OnDoProperties.vi.ProxyCaller
    <APPEND>
    Method Name: Save:Target Instrument"
    Please suggest!

  • Monitoring heap memory

    Hi,
    I would like to monitor the heap memory usage of my XI server on a regular basis. Do you know whether this is possible (in CCMS, for instance) and how?
    Thanks for your help,
    Best regards,
    Guislain

    Hi,
    As explained above, use transaction ST02, which is the Tune Summary.
    At the bottom of that screen you can see the heap memory usage, with minimum and maximum values.
    You can also see the server analysis at http://host:port/nwa.
    For the CCMS monitor, see the links below:
    /people/sap.user72/blog/2005/11/24/xi-configuring-ccms-monitoring-for-xi-part-i - XI: Configuring CCMS Monitoring for XI - Part I
    /people/sap.india5/blog/2005/12/06/xi-ccms-alert-monitoring-overview-and-features - CCMS Alert Monitoring
    Regards
    Chilla

  • JRockit out of native memory errors - what effects native memory usage

    I've seen postings suggesting that one way to combat out-of-native-memory errors is to reduce the heap size. That is fine, but if some parts of your application run fine while others run the system out of native memory almost immediately, then you may have a code problem. So the question is: what, if anything, in the Java code affects the JVM's usage of native memory?

    There are lots of things that affect native memory usage, for example: the number of classes, the size of the classes, the complexity of the type hierarchy, the number of threads, the amount of code generated, the complexity of the methods, and so on.
    It may also be an error in the JVM that either leaks memory or temporarily uses too much memory (for example while generating code for a java method).
    Not sure this was very helpful...
    /Staffan
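    One concrete, heap-independent example: every Java thread reserves a native stack (sized by -Xss) outside the heap, so code that spawns threads freely can exhaust native memory while the heap stays nearly empty. A deliberately pathological sketch (class name assumed; don't run it on a shared machine):

    public class NativeStacks {
        public static void main(String[] args) {
            // Each thread parks forever, holding its native stack; this
            // eventually fails with an OutOfMemoryError complaining about
            // native thread creation rather than heap space.
            for (int i = 1; ; i++) {
                new Thread(new Runnable() {
                    public void run() {
                        try {
                            Thread.sleep(Long.MAX_VALUE);
                        } catch (InterruptedException e) {
                            // exit quietly
                        }
                    }
                }).start();
                if (i % 100 == 0) {
                    System.out.println("Started " + i + " threads");
                }
            }
        }
    }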

  • Memory Notification: Library Cache Object loaded in Heap size 2262K exceeds

    Dear all
    I am facing the following problem. I am using Oracle 10gR2 on Windows.
    Please help me.
    Memory Notification: Library Cache Object loaded into SGA
    Heap size 2262K exceeds notification threshold (2048K)
    KGL object name :XDB.XDbD/PLZ01TcHgNAgAIIegtw==
    Thanks

    This is a normal warning message displayed for release 10.2.0.1.0; it is just a bug in the default value of the kgllarge_heap_warning_threshold instance parameter. The bug is harmless, but the problem is that you will see a lot of these messages in the alert.log file, which makes the file difficult to read and makes it hard to spot the real errors.
    Just declare a higher value for the kgllarge_heap_warning_threshold undocumented instance parameter (8388608, for example). This is meant to be corrected in 10.2.0.2.0, but until then you can manually set this parameter to a value higher than the highest heap size reported.
    For further references take a look at this metalink note:
    Memory Notification: Library Cache Object Loaded Into Sga
         Doc ID:      Note:330239.1
    ~ Madrid
    http://hrivera99.blogspot.com/

  • High UCCX engine memory usage in UCCX 8.2 SU4

    Hi all,
    We are facing high UCCX Engine memory usage. Our system version is UCCX 8.2 SU4, and whenever this problem happens we also see all agent desktops fail. Is anyone else facing this same issue? RTMT screenshot attached.
    Thanks.

    Hi Renji,
    This is a known defect, CSCtn87921, and it is caused by a memory leak in the BIPPA service.
    - Please follow the workaround of restarting the BIPPA service from the Serviceability section, on both servers if present
    - Check whether the alert disappears
    - It should have been fixed in SU4, but apparently not
    Keep me posted
    Thanks,
    Prashanth

  • Nested tables and memory usage (ORA-04030 error)

    Dear All,
    I have a table with approximately 5,000,000 records
    and am trying to Bulk Collect part of it into a nested table in PL/SQL; the code is below.
    Declare
         Type TcardRec Is Record(
              serno Pls_Integer,
              numberx Char(16),
              caccserno Pls_Integer
         );
         Type TcardList Is Table Of TcardRec;
         fcardInfo TcardList;
    Begin
         Select c.serno, substr(c.numberx, 1, 16), c.caccserno
         Bulk Collect Into fcardInfo
         From cardx c;
    End;
    After reading approx. 80% of the rows it fails with the error
    ORA-04030: out of process memory when trying to allocate 16396 bytes (koh-kghu call ,pmucalm coll)
    I have 2 GB of memory; is it really not enough?
    How can I tune memory usage for the collection?
    How can I estimate the maximum size of collection that will fit into memory?
    Thank you in advance for any help
    Artem

    Declare it as a cursor.
    Open the cursor.
    Use FETCH ... BULK COLLECT with the LIMIT option in a loop.
    In your case, you could do something like this:
    Declare
    Cursor c1 Is
    Select c.serno, substr(c.numberx, 1, 16), c.caccserno
    From cardx c;
    Type TcardList Is Table Of c1%rowtype;
    fcardInfo TcardList;
    Begin
    Open c1;
    Loop
    Fetch c1 Bulk Collect Into fcardInfo Limit 10000;
    Exit When fcardInfo.Count = 0; -- testing c1%notfound here would drop the final partial batch
    -- Do some processing here.
    End Loop;
    Close c1;
    End;
    I hope this helps.

  • Memory Usage in Windows Task Manager vs heap size

    hello,
    I have exactly the same problem as in the topic "Windows Task Manager vs. Heap", posted Dec 10, 2004, 5:17 AM by jorgeHX.
    Here are the symptoms:
    1) My application starts at 20 MB as seen in the Windows Task Manager.
    2) I use a profiler to monitor the heap. The heap is always working very healthily: heap usage in the profiler increases by a minimum until the GC comes around, and then the used heap size drops down again.
    3) However, after doing a relatively memory-consuming operation (a loop of string indexing, pattern matching, etc.), the memory usage in the Windows Task Manager goes up a couple of MB and never drops down.
    4) Then I manually trigger garbage collection (System.gc()), and I can see the GC freeing heap. However, the memory in the Windows Task Manager does not change, no matter how many times I force garbage collection.
    This is a bad thing: if my application keeps doing that memory-consuming operation again and again, the memory seen in the Windows Task Manager will grow to hundreds of MB until Windows warns "low on virtual memory".
    I have tried everything already. I set every instance to null at the end, I delete every reference, but the memory just keeps increasing! WHY?????
    Can anyone help me characterize the problem? I am so in the dark!
    Ryan-Chi

    I guess my problem can also be interpreted as
    "Why doesn't JVM return memory to OS?"
    It does, depending on the setting of the -XX:MaxHeapFreeRatio option. "Normal" operation using the default setting does not usually cause memory to be returned to the os.
    Search for this option term for explanations.

  • How to figure out the size of an object - java out of heap memory error

    Hi all,
    I am using an object from a library that I didn't create, so I don't know its internal state or members.
    I created a single one of these objects, and I call announce() on it, which just sends a UDP announcement over the channel to notify listeners. However, after message #124,288 I get a Java out-of-heap-memory error.
    I am wondering if this announce() method is causing the state of the object to grow with every call. It seems unlikely, but I want to check whether it's reserving a growing amount of heap memory without ever freeing it.
    My question is: how can I check how big the object is from within my program? I'd like to check, for instance, every 10,000 messages sent, how much memory the object is taking up. Is there a method call for that? Would I have to use some kind of debugger or memory monitor? I would like something easy to use.
    Please let me know, and thanks in advance.
    Julian

    jboolean23 wrote:
    Thanks for the quick reply.
    I say it's unlikely that the methods I call are filling up heap memory because I have one Message object. This one Message has a myMsg String member. Whenever I want to change the message I call myMsg = "anewstringhere" and then myMessageObject.announce(). For that reason I say I only have one Message object. The only things added to the heap would be the strings that I replace myMsg with. The old strings referenced by myMsg are no longer reachable and should be garbage collected.
    Unless of course you are calling intern() on them.
    So here's my train of thought (and this isn't what my actual code looks like):
    myMessageObject.myMsg = "hello" // creates a string on the heap? I assume this is equivalent to saying myMsg = new String("hello")
    No, they are not the same.
    The text literal will be in the intern space. Both code fragments would put it there.
    The second example would also create a second instance of String. That second instance would be cleaned up when no longer referenced, but the literal will not.
    Is my thought process correct?
    You are calling a third-party library, right? So mock it (write a simple replacement with the minimal correct functionality), substitute it in your code, and run. If it still fails, the problem is in your code. If not, it is either in the library or in the way that you use the library.
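    If mocking the library is impractical, a crude heap-sampling harness can at least show whether memory grows with the message count. A minimal sketch, where sendOneMessage() is a hypothetical stand-in for the real announce() call:

    public class HeapSampler {
        // Best-effort used-heap reading; System.gc() is only a hint, so
        // treat the numbers as a trend, not an exact object size.
        static long usedHeap() {
            Runtime rt = Runtime.getRuntime();
            System.gc();
            return rt.totalMemory() - rt.freeMemory();
        }

        public static void main(String[] args) {
            long baseline = usedHeap();
            for (int i = 1; i <= 100000; i++) {
                sendOneMessage();
                if (i % 10000 == 0) {
                    System.out.printf("After %d messages: heap delta %d bytes%n",
                            i, usedHeap() - baseline);
                }
            }
        }

        private static void sendOneMessage() {
            // placeholder: call the library's announce() here
        }
    }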

  • Memory Usage:  Heap space vs System Monitor

    Hi guys
    I'm trying to wrap my head around something which seems pretty complicated. We have an enterprise app making use of Jasper Reports. When a report is generated, the app's memory usage in the Linux (Ubuntu 8.10) System Monitor shoots up by around 20MB (the more data on the report, the higher that number). After the report is closed, memory usage doesn't decrease. This is resulting in performance issues, and memory usage given by the System Monitor doesn't match up with the results given by the NetBeans profiler. There is a huge difference between the values indicated by the System Monitor, and the heap space given by the profiler. I've done some fairly detailed checking using the profiler, and according to that, there are 0 instances of our report generating classes in memory after closing a generated report. I've read a bunch of articles on the net, but I'm still stumped. My best guess is that something weird is happening somewhere in the Jasper code, but I can't really think of a way to confirm that. Does anyone have any idea what could be causing this and how to prevent it? Any help will be much appreciated; thanks.
    P.S. I've posted on the Jasper forums as well, but to no avail. [http://jasperforge.org/plugins/espforum/view.php?group_id=102&forumid=103&topicid=65335]
    Edit: We're using Java 6 (we've recreated the issue on a few releases of Java 6), and Jasper Reports 3.0.1. (Just in case that is important.)

    One thing about your test is that using -Xms110m will cause the JVM to NEVER give back any memory, because it won't ever go below 110 MB. I altered your test a bit:
    import java.util.LinkedList;
    public class MemTest {
       private static LinkedList<byte[]> mem = new LinkedList<byte[]>();
       public static void main(String[] args) {
            try {
                System.out.println("Initial:");
                print();
                useMem();
                System.out.println("Allocated:");
                print();
                clearMem();
                System.out.println("Deallocated:");
                print();
                for (;;) {
                   Thread.sleep(10000);
                   System.out.println("After 10 seconds: ");
                   print();
                }
            } catch (Exception ex) {
                ex.printStackTrace();
            }
       }
       private static void useMem() {
          for (;;) {
             try {
                mem.add(new byte[1024*1024]);
             } catch (OutOfMemoryError e) {
                // free up a few MB to make sure we don't get an OOME while trying to print, etc.
                mem.removeFirst();
                mem.removeFirst();
                mem.removeFirst();
                break;
             }
          }
       }
       private static void clearMem() {
          mem = null;
          System.gc();
          System.gc();
          System.gc();
       }
       private static void print() {
          long free = Runtime.getRuntime().freeMemory();
          long total = Runtime.getRuntime().totalMemory();
          double freeFraction = (((double)free) / (total)) * 100;
          System.out.printf("\tAvailable: %d\tTotal: %d\tFree: %4.2f%%%n", free, total,
                freeFraction);
       }
    }
    When I ran it with
    -Xms16m -Xmx1024m -XX:MinHeapFreeRatio=20 -XX:MaxHeapFreeRatio=30
    I got these results:
    Initial:
         Available: 16518016     Total: 16711680     Free: 98.84%
    Allocated:
         Available: 2070952     Total: 1065484288     Free: 0.19%
    Deallocated:
         Available: 582957376     Total: 583073792     Free: 99.98%
    After 10 seconds:
         Available: 582953224     Total: 583073792     Free: 99.98%
    As you can see, the JVM did return about half of the memory. ProcExp shows ~592 MB used, with a peak of ~1000 MB. But right now it's still running, and still holding at 583 MB.
    I've read bug reports like this, but honestly have no idea what the people at Sun are talking about. I don't see the footprint being reduced. I don't know, maybe if I wait a couple of days...

  • Premiere crashes when exceeding ~1.2GB memory usage

    This is extremely annoying. I have found Premiere to crash when it hits around 1.2GB to 1.5GB of memory usage on medialayer.dll or something. I was able to crash Premiere in just a matter of seconds after launching it and opening my project. I work quickly, so this happens frequently.
    Q9450 Yorkfield Quadcore (12MB Cache), clocked at 3.2GHz
    4GB of 1200 RAM
    All my scratch disks are set to my second partition, which has at least 200GB of free space. My page file is set to about 4GB. My machine is defragmented daily by Diskeeper Pro 2008.
    I am dealing with 26 video files and have frame thumbnails displaying in the timeline (it makes everything much easier for what I am specifically doing), adding up to around 9 to 10 hours of video and audio in total. All I'm basically doing is chopping the intro, credits, and midtro out of each video.
    However, I am currently downloading the latest update (which is like 50.3MB) that might possibly fix this issue. At least I hope it does. :( This crash/memory leak is going to drive me insane if it cannot be fixed.
    EDIT :: I have observed that every time I save, the memory usage goes up by 200KB to 500KB. For each frame thumbnail that Premiere loads, the memory usage goes up by a whopping 2MB+. This will and does quickly add up for the work I'm doing. Premiere never unloads any of this memory!
    EDIT2 :: By the way, the frame thumbnails are 88x65 sized thumbnails.
    EDIT3 :: Even with the latest version and update, Premiere Pro CS3 still crashes for the same reason. I haven't seen the memory usage at 1GB though.
    EDIT4 :: The latest updates do help, but do not fix the problem. When I minimize Premiere, usage is 17 MB. When I bring it back up, usage is 32 MB. I can keep resetting the usage to 17/32 MB simply by minimizing Premiere and bringing it back up again (and the memory usage reset is nearly instant). So far I've reset it numerous times when reaching levels of around 600 MB to 800 MB of memory usage, and I haven't crashed in my current session due to the memory leak yet.
    This seems to be only a temporary solution.
    http://www.hlrse.net/Qwerty/premiere-memoryleak-workspace.gif
    That is my current workspace. I literally keep task manager open with Premiere highlighted like that so that I can watch the memory usage levels (to know when to 'reset' the memory usage) as I work. Right now all I'm doing is zooming in and out, and splitting videos in two (or multiple pieces).

    Q9450 Yorkfield (12MB Cache) @ 3.2GHz
    2GB G.Skill PC2-6400 (400MHz) x2 (4GB total, running with 3.5GB due to 32-bit limitations)
    NVIDIA FX5200 (temporary, waiting for ATI HD4850 to come back from RMA)
    500GB Seagate Barracuda 7200.1 SATA-II w/ SATA-II enabled
    I have my 500GB harddrive partitioned into two partitions: 40GB (C) and 425GB (D). C is for Windows XP (includes drivers, codecs, and things that relate heavily to the core of Windows; stuff that isn't important enough to backup). D is for everything else, which includes games, utilities/tools, media (images, video, music), developer material, etc.
    On D, you would find the following logical layout of folders:
    Developer, Games, Media, Network, Utilities, Mozilla Firefox, Temp.
    Temp is so that I can (in an organized fashion) set the temporary locations and scratch disks for programs here. All scratch disks are set to D:\Temp\Premiere, and currently reports 121.9GB of free space. I can easily free up 114GB to 240GB of material on D though (which I plan to do sometime anyway).
    What information do you want to know? Be specific, I'm not stupid and illiterate like most people.
    EDIT :: I didn't mention this in my post, but I did take the precaution of searching through the Troubleshooting database for Premiere, and I did read through the optimizations-for-Windows-XP article (I had already applied the same suggestions to my computer since the dawn of time, so that wasn't really useful).
    EDIT2 :: I checked the troubleshooting link and none of those really apply. (Do you even know what a memory leak is?)
    =P

  • Heap Memory Issue in weblogic 9.2 for a JSF 1.1 web application

    Hi,
    We are running a JSF application (MyFaces, Facelets, Tomahawk, RichFaces & iBATIS) on a WebLogic 9.2 server on Solaris 10. This application is deployed in production and works fine under normal circumstances, but under heavy user load we face a memory issue. Memory usage gradually increases, and when it reaches the max, full GC kicks in again and again, which chokes all requests. We don't save anything in session scope. All our backing beans are saved in request scope, so they should be garbage collected after each request is done, but this is not happening.
    We took a heap dump from production after this issue and analyzed it. In my analysis, I found that none of the objects set on the request object are being garbage collected, and the root referrer of all these objects is weblogic.servlet.internal.MuxableSocketHTTP.
    I reproduced similar behaviour in one of our development environments using JMeter. I ran 100 concurrent users in JMeter for almost an hour and saw the same behaviour. Below is the count of WebLogic objects still hanging in the heap after the test was over (I also ran a manual garbage collection from the admin server).
    1) weblogic.servlet.internal.MuxableSocketHTTP - 1774 objects - retained heap (1 GB)
    2) weblogic.servlet.internal.ServletRequestImpl - 1774 objects - retained heap (1 GB)
    My understanding is that every request made to the WebLogic server goes through the MuxableSocketHTTP object, which creates a ServletRequestImpl to serve it. Once the request is served, these objects are supposed to be removed; because they are not, whatever is saved in the request also remains reachable.
    I am not able to understand why these objects are still hanging around after the request is done. Could anybody answer my question? I appreciate your help in advance.
    The GC setting for weblogic server while startup is:
    -XX:MaxTenuringThreshold=15 -XX:+PrintTenuringDistribution -XX:+AggressiveHeap -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:PermSize=128m -XX:MaxPermSize=128m -Xms3g -Xmx3g -XX:NewSize=512m -XX:MaxNewSize=1024m
    Thanks MaKK

    What happened with this issue? We are seeing something similar on WebLogic 9.2 MP1 on Solaris (JDK 1.5, patch 10, 32-bit): OutOfMemoryErrors with thousands of instances of weblogic.socket.MuxableSocket hanging around.
    Our thinking was initially the Java heap; then we thought that maybe the sockets weren't being closed properly, possibly in WebLogic or in LiveCycle.
    Any info would be greatly appreciated.
    Snippet of our stack trace:
    <16-Feb-2010 04:30:13 o'clock GMT> <Error> <Kernel> <BEA-000802> <ExecuteRequest failed
    java.lang.OutOfMemoryError: Java heap space.
    java.lang.OutOfMemoryError: Java heap space
    >
    javax.ejb.EJBException: EJB encountered System Exception: : java.lang.OutOfMemoryError: Java heap space
         at weblogic.ejb.container.internal.EJBRuntimeUtils.throwEJBException(EJBRuntimeUtils.java:145)
         at weblogic.ejb.container.internal.BaseLocalObject.postInvokeCleanup(BaseLocalObject.java:550)
         at weblogic.ejb.container.internal.BaseLocalObject.postInvokeCleanup(BaseLocalObject.java:496)
         at com.adobe.idp.um.businesslogic.directoryservices.DirectorySynchronizationManagerBean_f5g74_ELOImpl.synchronizeProviders(DirectorySynchronizationManagerBean_f5g74_ELOImpl.java:267)
    Joel
