JRockit for applications with very large heaps

I am using JRockit for an application that acts as an in-memory database, storing a large amount of data in RAM (50 GB). Out of the box we got about a 25% performance increase compared to the HotSpot JVM (great work, guys). Once the server starts up, almost all of the objects are stored in the old generation and a smaller number in the nursery. The operation we are trying to optimize needs to visit basically every object in RAM, and we want to optimize for throughput (total time to run the operation, not worrying about GC pauses).

Currently we are using huge pages, -XXaggressive, and -XX:+UseCallProfiling, and we are giving the application 50 GB of RAM for both the max and min heap. I tried adjusting the TLA size to be larger, which seemed to degrade performance. I also tried a few other GC schemes, including singlepar, which also had negative effects (currently we are using the default, which optimizes for throughput).
I used JRMC to profile the operation, and here are the results I thought were interesting:
live set: 30%
heap fragmentation: 2.5%
average GC pause time: 600 ms
max GC pause time: 2.5 s
It had to do four young-generation collections, which were very fast, and then two old-generation collections, each about 2.5 s (the entire operation takes 45 s).
For the long old-generation collections, about 50% of the time was spent in mark and 50% in sweep. Drilling down to sub-level 2, 1.3 seconds were spent in objects and 1.1 seconds in external compaction.
Heap usage: although 50 GB is committed, usage fluctuates between 20 GB and 32 GB. To give you an idea of what is stored in the heap, about 50% of it is char[] and another 20% is int[] and long[].
My question is: are there any other flags I could try that might help improve performance, or is there anything I should look at more closely in JRMC to help tune this application? Are there any specific tips for applications with large heaps? We can also assume that memory could be doubled or even tripled if that would improve performance, but we noticed that larger heaps did not always help.
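For reference, the launch line amounts to something like the following (a reconstruction from the flags named above; the large-pages option is omitted because its exact spelling varies by JRockit version, and the jar name is made up):
java -Xms50g -Xmx50g -XXaggressive -XX:+UseCallProfiling -jar imdb-server.jar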
Thanks in advance for any help you can provide.

Similar Messages

  • Best data Structor for dealing with very large CSV files

    Hi, I'm writing an object that stores data from a very large CSV file. The idea is that you initialize the object with the CSV file, and it then has lots of methods to make manipulating and working with the CSV file simpler: operations like copy column, eliminate rows, perform some equation on all values in a certain column, etc., plus a method for printing back to a file.
    However, the CSV files will probably be in the 10 MB range, maybe larger, so simply loading one into an array isn't possible, as it produces an OutOfMemoryError.
    Does anyone have a data structure they could recommend that can store the large amount of data required and is easily writable? I've currently been using a RandomAccessFile, but it is awkward to write to, as well as needing an external file which would have to be cleaned up after the object is removed (something very hard to guarantee occurs).
    Any suggestions would be greatly appreciated.
    Message was edited by:
    ninjarob

    How much internal storage ("RAM") is in the computer where your program should run? I think I have 640 MB in mine, and I can't believe loading 10 MB of data would be prohibitive, not even if the size doubles when the data comes into Java variables.
    If the data size turns out to be prohibitive of loading into memory, how about a relational database?
    Another thing you may want to consider is more object-oriented (in the sense of domain-oriented) analysis and design. If the data is concerned with real-life things (persons, projects, monsters, whatever), row and column operations may be fine for now, but future requirements could easily make you prefer something else (for example, a requirement to sort projects by budget or monsters by proximity to the hero).
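    If loading everything at once does turn out to be prohibitive, a middle ground is to stream the file one row at a time and keep only derived results in memory. A minimal sketch (the file name and column index are made up, and the naive split would need a real CSV parser to handle quoted fields):
    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;

    public class CsvColumnSum {
        public static void main(String[] args) throws IOException {
            double sum = 0;
            // Stream line by line instead of loading the whole file into memory.
            try (BufferedReader in = new BufferedReader(new FileReader("data.csv"))) {
                String line;
                while ((line = in.readLine()) != null) {
                    String[] cols = line.split(",");    // naive: no quoted-field handling
                    sum += Double.parseDouble(cols[2]); // e.g. an equation on the third column
                }
            }
            System.out.println("Sum of column 3: " + sum);
        }
    }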

  • iPod touch 4th gen, running iOS 5, boots up with very large icons, impossible to navigate; how do I get back the standard-sized home screen?

    iPod touch 4th gen, running iOS 5, boots up with very large icons, impossible to navigate; how do I return to the standard-sized home screen?

    Triple-click the Home button and then go to Settings > General > Accessibility and turn Zoom off. If problems persist, see:
    iPhone: Configuring accessibility features (including VoiceOver and Zoom)

  • Very Large Heap Dump

    Hi,
    I have to analyze a huge heap dump file (ca. 1 GB).
    I've tried some tools (HAT, YourKit, Profiler4J, etc.).
    An OutOfMemoryError always occurs.
    My machine has 2 GB physical memory and 3 GB swap space, on a dual-core Intel processor at 2.6 GHz.
    Is there a way to load the file on my machine, or is there a tool which is able to load dump files partially?
    ThanX ToX

    1GB isn't very large :-) When you say you tried HAT, did you mean jhat? Just checking as jhat requires less memory than the original HAT. Also, did you include an option such as -J-mx1500m to increase the maximum heap for the tools that you tried? Another one to try is the HeapWalker tool in VisualVM. That does a better job than HAT/jhat with bigger heaps.
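    For example, to give jhat a bigger heap while opening a dump (assuming the dump file is named heap.hprof):
    jhat -J-mx1500m heap.hprof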

  • Working with VERY LARGE tables - is it possible to bypass row counting?

    Hello!
    For working with large result sets, ADF provides the `Range Paging` mechanism for views, described in section 27.1.5 of the Developer's Guide For Forms/4GL Developers.
    It works well, but in its usual mode it counts the total row count to allow paging. In some cases the query `select count(1) from (SELECT ...)...` can take a very, very long time.
    But if a view object doesn't know the row count (for example, if we override the getEstimatedRowCount() method), the paging controls don't appear in the user interface.
    Meanwhile, I believe it should be possible to display just two paging links, Prev and Next, without knowing the row count. Is there a way to do it?
    Thanks in advance,
    Ilya Rodionov.

    Hi Ilya,
    while you wait for Frank to dig up the right sample, you can read this thread:
    Re: ADF BC: Performance issue with getEstimatedRowCount (ER?)
    There we discuss the exact issue.
    Timo
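    For reference, a minimal sketch of the override mentioned in the question (the class name is made up; how your ADF version treats the returned estimate should be verified, e.g. against the thread above):
    import oracle.jbo.server.ViewObjectImpl;

    public class LargeTableVOImpl extends ViewObjectImpl {
        @Override
        public long getEstimatedRowCount() {
            // Skip the expensive "select count(1) from (SELECT ...)" round trip
            // by returning a fixed estimate instead of the default behavior.
            return 1000000L;
        }
    }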

  • Any suggestions on calculating with very large or small numbers?

    It seems that double values have about 17 decimal digits (1e17) of precision.
    Is there a way in iPhone calculations to get more precision for very large and small numbers, like 1e80 and so forth? I know that's more than the entire number of atoms in the universe, but still.
    I tried "long double", but that didn't seem to make any difference.
    Just a limitation?
    Thanks,
    doug

    Hmmm... maybe I was just having a problem with my formatted string then?
    I was using the NSString %g format, which is supposed to print in exponential notation if the number is greater than 1e4 or less than 1e-4, or something like that.
    But I was not getting any exponents greater than 1e17, and then I was apparently getting overflows because the numbers were coming out with negative mantissas.
    All the variables involved were doubles...
    How did you "look at" z?
    Thanks,
    doug

  • Problem signing code for application with embedded runtime

    Hi,
    I have an Adobe Air application which I am publishing with the runtime embedded.  There are lots of reasons for publishing this way.
    I have already seen an "unknown publisher" message after signing on Windows 8, and a tester reported the same thing on Windows 7. According to the cert issuer (DigiCert), I need an Extended Validation (EV) certificate to get past the Windows 8 issue, but there was no explanation for the Windows 7 issue.
    As the Windows file is essentially a folder with ".app" appended to its name, I'm not sure how signing (which I am doing with the Signature tab in the Flash GUI) ensures that the executable contained in that file is signed.
    Can anyone tell me anything about code signing when publishing an AIR application with embedded runtime?
    Best,
    Chris McLaughlin

    Not that I can think of.  What OS are they running?  Do they have the shared runtime also installed?

  • Problem in compilation with very large number of method parameters

    I have a Java file which I created using WSDL2Java. Since the actual WSDL has a complex type with a large number of elements (around 600) in it, the resulting Java file (from WSDL2Java) has a method that takes 600 parameters of various types. When I try to compile it using javac at the command prompt, it says "Too many parameters" and doesn't compile. The same file compiles successfully using JBuilder X. The only way I could compile successfully at the command prompt was by reducing the number of parameters to around 250, but unfortunately that's not a workable solution. Does Sun specify any upper bound on the number of parameters that can be passed to a method?

    "... a method that takes 600 parameters ..."
    Not compatible with the spec; see Method Descriptors. (The JVM caps a method at 255 parameter slots, with long and double each taking two slots.)
    "When I try to compile it using javac at command prompt, it says 'Too many parameters' and doesn't compile."
    As it should.
    "The same is compiling successfully using JBuilder X."
    If JBuilder produces a class file, that class file may very well be invalid.
    "The only way I could compile successfully at command prompt is by reducing the number of parameters to around 250."
    Which is what the spec says.
    "But unfortunately that's not a workable solution."
    Pass an array of objects - an array is just one object.
    "Does Sun specify any upper bound on number of parameters that can be passed to a method?"
    Yes.
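    A minimal sketch of that workaround (all names are made up):
    public class WideCall {
        // Instead of 600 individual parameters, a single array parameter
        // carries every value in one object reference.
        static void invoke(Object[] args) {
            String first = (String) args[0]; // the callee picks values out by index
            System.out.println("got " + args.length + " values, first = " + first);
        }

        public static void main(String[] argv) {
            Object[] params = new Object[600];
            params[0] = "value-1";
            invoke(params);
        }
    }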

  • "Add to bookmarks - button" for application with application alias

    How do I add an "add to bookmarks" button to the login page of an application that has an application alias?
    The idea is to ensure that end users store the application alias in their bookmarks rather than the direct link to the application id, which might change.
    rgrds Paavo

    Jari, here are some of my trials, hosted now on apex.oracle.com. All URLs to be bookmarked are hardcoded into the login page HTML header's javascript function.
    Sample db application - URL with workspace name:
    http://apex.oracle.com/pls/apex/f?p=COM_ORACLE_APEX_PRODUCT_PORTAL:LOGIN:0&c=PAAVO_POC
    - the LOGIN alias defaults to the login page
    - the session id is 0, as recommended here: http://docs.oracle.com/cd/E23903_01/doc/doc.41/e21674/concept_url.htm#HTMDB03020
    - the workspace name is actually Internal in the administration, so omitting the workspace name leads to the error "The application alias "COM_ORACLE_APEX_PRODUCT_PORTAL" can not be converted to a unique application ID."; perhaps other workspaces also have the same sample app alias
    - so I used PAAVO_POC as the workspace name (got it during registration)
    - the LOGIN page HTML header is:
    <script type="text/javascript">
    function bookmark() {
      // note: window.external.AddFavorite works in Internet Explorer only
      window.external.AddFavorite("http://apex.oracle.com/pls/apex/f?p=COM_ORACLE_APEX_PRODUCT_PORTAL:LOGIN:0&c=PAAVO_POC","COM_ORACLE_APEX_PRODUCT_PORTAL in ws paavo_poc");
    }
    </script>
    - the LOGIN page HTML Body attribute is:
    <form>
    <input type="button" onclick="bookmark()" value="Bookmark me">
    </form>
    - I also tried to add a "Click here to bookmark me" button in the login region with the following code (I couldn't figure out how to create a button for this via the APEX developer):
    <form>
    <input type="button" onclick="bookmark()" value="Click here to bookmark me">
    </form>
    - login doesn't work anymore
    A copy of the same sample application, but with a different / more unique alias and without the "Click here to bookmark me" button in the Login region:
    http://apex.oracle.com/pls/apex/f?p=paavos_product_portal:LOGIN:0&c=PAAVO_POC
    - the bookmark could be stored without the workspace name, but it might break if the same app alias appears in some other workspace: http://apex.oracle.com/pls/apex/f?p=paavos_product_portal:LOGIN:0
    - login works now because there is no hassling with the button code in the Login region :)
    Then a fresh new application with the following URLs and the same symptoms:
    http://apex.oracle.com/pls/apex/f?p=POCTEST_NO_B_BUTTON:LOGIN:0&c=PAAVO_POC
    http://apex.oracle.com/pls/apex/f?p=poctest:LOGIN:0&c=PAAVO_POC
    - Enter with "demo" and "ApexIsGood". Please don't kill my workspace with lots of 'fishes' :).
    It would be good to have the URL constructed dynamically, e.g. fetching the application alias name and the correct workspace to be used.
    In a perfect world it would also be good to have a feature for redirecting the user from an old version to the production application alias, with a dynamic action requesting the user to update his/her bookmark.
    But as said, I am a bit stuck...
    rgrds Paavo
    Edited by: paavo on Apr 7, 2012 3:20 PM
    Edited by: paavo on Apr 10, 2012 4:05 PM

  • Workflow for creating a very large collage for a wall paper

    I have a bunch of photos shot with a Canon T4i. My plan is to blow up all the photos and make a collage to print out on wallpaper for a wall. I shot all of these photos in RAW. The dimensions of the wall are 101" high x 106" wide. Yes, this collage bogs down my computer to the max; it's ridiculous. Anyway, I am having trouble retaining sharpness in the photos in my collage. I am a beginner at editing raw photos, so that might be a problem.
    This is what I did:
    Open a new Photoshop document (101x106")
    Edit all photos in Camera Raw
    Drag the photos from Bridge onto Photoshop
    Place and adjust the size of each photo
    What would you do differently to retain sharpness? It is important that it looks as good as possible because it will be in a small room in the foyer of our office.
    Thanks,
    Riley

    I see two problems. First, the print size exceeds all the printers I know of: the widest printers Epson makes can only print 64" wide, even if you cut your collage in two with some overlap for alignment, and it would be a nightmare applying 54"-wide wallpaper to a wall. There might be a third problem with cost: Epson printers that can print 54"-wide rolls start at $10,000.00 and go up to $25,000.00. Expensive wallpaper, and you still need the paper and ink.
    Second, for a collage that large you do not need to print at 300 DPI, because you will not be viewing a wall that big, and above your height, up close where the human eye can resolve down to 300 DPI. Your collage also shows only three images across the top row, some wider than others. With your Canon T4i that would mean some images print between 35" and 40" wide; without interpolation, your 18 MP images printed at that size would come out in the 130 to 150 DPI range. To interpolate from 130-150 DPI up to 300 DPI would require increasing the pixel count by 400 to 500 percent. That is a lot of interpolation; I would not do that. I would actually create the collage at 150 DPI. That resolution should be good for an image that large, and the collage would not tax Photoshop as much. The 300 DPI collage image would be (106x300) x (101x300) = 963,540,000 pixels; at 150 DPI the same image would be (106x150) x (101x150) = 240,885,000 pixels, 1/4 the number of pixels.
    I would flatten the collage and cut it into 7 overlapping strips around 16" wide.

  • Need help with "Very large content bounds" error...

    Hey guys,
    I've been having an issue with Adobe Muse [v7.0, build 314, CL 778523]: one of the widgets I tried from the Exchange library seemed to bug out and created a large content box.
    This resulted in this error:
    Assert: "Very large content bounds (W:532767.1999999997 H:147446.49743999972) detected in BoxUtils::childBounds"
    Does anyone know how I could fix this issue?

    Hi there -
    Your file has been repaired and emailed back to you. Please let us know if you run into any other issues.
    Thanks!
    -Sam

  • Status and messaging for systems with a large number of classes

    A very common dilemma we coders face when creating
    systems involving a large number of classes:
    is there any standard framework to take care of the global status of the whole application and of GUI subsystems,
    for both status handling and reporting messages to users?
    Something light if possible... and not too many threads.

    Ah, I see.
    I find a JPanel with CardLayout or a JTabbedPane very good for controlling several GUIs in an application. As an alternative organization tool I use a JTree, which serves for both selecting and organizing certain tasks or data in the application. Tasks are normally done with the data associated with them (that is what an object is for), so basically a click on a node in this JTree invokes an interface method of the object (the userObject) associated with that node.
    Event handling should be done by the event-handling thread only, as far as possible - it is responsible for it, so leave this job to it. This will give you control over the order in which the events are handled. Sometimes it needs a bit more work to obey this rule - for example, communication coming from the outside (think of a chat channel) must normally be converted to an event source driven by a thread. As soon as it is an event source, you can leave its event handling to the event-handling thread again, and the problems of concurrent programming are minimized.
    It is the same with manipulating components or models of components - leave it to the event-handling thread using a Runnable and SwingUtilities.invokeLater(Runnable). This way you can be sure that each manipulation is done one after the other, in the order in which you transferred it to the event-handling thread.
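    For instance, a minimal sketch of this pattern (the label and message are made up):
    import javax.swing.JLabel;
    import javax.swing.SwingUtilities;

    public class StatusReporter {
        private final JLabel statusLabel = new JLabel("idle");

        // Safe to call from any thread: the actual component update is
        // queued so that only the event-handling thread touches Swing.
        public void report(final String message) {
            SwingUtilities.invokeLater(new Runnable() {
                public void run() {
                    statusLabel.setText(message);
                }
            });
        }
    }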
    When you do this consistently, most of your threads will idle most of the time - so give them a break using Thread.sleep(...). Not all platforms provide preemptive multitasking, and this way it is guaranteed that the event-handling thread will get a chance to run most of the time - which results in fast GUI updates and fast event handling.
    Another thing is that you should use "divide and conquer" within a single GUI panel too - place components in subpanels and transfer the responsibility for the components in each panel to exactly that subpanel. Think of a team manager who makes his employees work together: he reports up to his own manager and turns global orders from his boss into specific tasks by delegating to the components he is managing. If you have this in mind when you design classes, you will have fewer problems - each class is responsible for a certain task; define it clearly, and define to whom it reports (its listeners) and what those listeners may be interested in.
    When you design the communication structure within your hierarchy of classes (directors, managers, team managers and workers), keep in mind that the communication structure should not break the management hierarchy. A director gives global orders to a manager, which delegates several tasks to the team managers, which make their workers do what is needed. This structure keeps a big company controllable by its directors - the same principles can also keep control within an application.
    greetings Marsian

  • Heap dumps on very large heap memory

    We are experiencing memory leak issues with one of our applications deployed on JBoss (Sun JVM 1.5, Win32 OS). The application is already memory intensive and consumes the maximum heap (1.5 GB) allowed for a 32-bit JVM on Win32.
    This leaves very little memory for the heap dump, and the JVM crashes with a "malloc error" whenever we try adding the heap dump flag (-agentlib:).
    Has anyone faced a scenario like this?
    Alternatively, for investigation purposes, we are trying to deploy it on Windows x64 - but the vendor advises running only on a 32-bit JVM. Here are my questions:
    1) Can we run a 32-bit JVM on Windows 64? Even if we can, can I allocate more than 2 GB for heap memory?
    2) I don't see the rationale for why we cannot run on a 64-bit JVM - Java programs are supposed to be 'platform-independent', and the application, in the form of byte code, should run no matter whether it is a 32-bit or 64-bit JVM.
    3) Do we have any better tools (other than HPROF heap dumps) to analyze memory leaks? We tried using profiling tools, but they too fail because of the very little memory available.
    Any help is really appreciated! :-)
    Anush

    anush_tv wrote:
    "1) Can we run a 32-bit JVM on Windows 64? Even if we can, can I allocate more than 2 GB for heap memory?"
    Yes, but you're limited to 2 GB like any other 32-bit process.
    "2) I don't see the rationale for why we cannot run on a 64-bit JVM..."
    It's probably related to JBoss itself, which is likely using native code. I don't have experience with JBoss, though.
    "3) Do we have any better tools (other than HPROF heap dumps) to analyze memory leaks?"
    You could try "jmap", which can dump the heap.
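    For example, with a JDK 6 or later jmap (the 1.5 syntax differs), dumping a running process by pid looks like this:
    jmap -dump:format=b,file=heap.hprof <pid>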

  • Bridge CS4 with very large image archves

    I have just catalogued 420,000 images from 3 TB of my photographic collection. For those interested, the master cache files became quite large and took about 9 days of continuous processing:
    cache size: 140 GB
    file count: 991,000
    folder count: 3,000
    All cache files were also exported to the disk directories.
    My primary intent was to use the exported cache files as a "quick" browsing mechanism with Bridge. Of course, "quick" is a rather optimistic word; however, it is very significantly faster than having Bridge rebuild caches as needed.
    I am now trying to decide if it is worth keeping the master Bridge cache, because of the limitations of the Bridge implementation, which is not very flexible as to where and when the master cache adds new temporary cache entries.
    Any suggestions as to the value of keeping the master cache would be appreciated. I don't really need keyword or other rating systems, since I presently use a simple external database for this kind of image location.
    I am also interested in knowing whether the "500,000" entry cache limitation is real - or whether more than 500,000 images can be stored in the master cache, since I will be exceeding this image count next year.

    I have a Bridge 5 cache system with 600,000 images over 8 TB of networked disk. I too use this to "speed up" the browsing process, and I rely primarily on keyword processing to group images. The metadata indexing is, for practical purposes, totally useless (it never ceases to amaze me why Adobe thinks it useful for me to know how many images were taken with a lens focal length of 73mm, or some other equally useless statistic). The one serious missing keyword-indexing feature I can think of is the ability to associate a keyword with a directory. For example, I have shot many dozens of dance, theatre, music and sports productions; it would be much more useful to catalogue a directory with the keywords "Theatre" and "Romeo and Juliette" than to attempt to keyword each individual image. It is, of course, possible to work around the restriction, but that is very unclean and certainly less than desirable. Keywording a project (i.e. a directory) is a totally different kettle of fish from keywording an image. I also find the concept of the "collection" very useful and well implemented.
    I do maintain a complete cache build of my system. It is spread over two master caches, one for the first 400,000 images and a second for the next 400,000 (I want to stay within the 500,000 cache size limit - it is probably associated with the MySQL component of Bridge, and I think it may have problems if you exceed the limit by a substantial amount. With Bridge on CS3, when the limit was exceeded, the cache system self-destructed and I had to rebuild).
    The only other issue I can think of (and it seems to be part of Adobe's design) is that Bridge will rebuild the master cache for a working directory "for no apparent reason", such as when certain (unknown) changes are made to ACR. Other automatic rebuilds have been reported by others, but Adobe does not comment on when or what causes a rebuild. Of course, this has a serious impact on getting work done - it is a bloody pain to have Bridge suddenly process 1,500 thumbs and preview extracts simply to keep the master cache completely and perfectly synchronized (in terms of image quality) with what might be displayed if you happen to load a raw image into Photoshop. This strategy is IMHO completely out of step with how (at least I) use the browsing features of Bridge.
    It may be of concern that Adobe may, for design reasons, change the format of the directory cache files, and you will have to completely rebuild all master and directory caches yet again - which is a real problem if you have many hundreds of thousands of images. This happened when the cache system changed from CS3 to CS4, and Adobe did not provide a conversion programme to migrate the old format to the new. This significantly adds to the rebuild time, since each raw image must be completely reprocessed. My current rebuild of the master cache has taken over two elapsed weeks of continuous running.
    It would be nice if Adobe would allow some control over what is recorded in the master cache - for example, "do you wish metadata to be indexed?".
    ((As an aside, Adobe does not comment on why using Bridge to import images from a CF card results in the building of a .xmp file containing nothing but metadata for each raw file. I am at a loss to speculate what really useful thing results, other than maybe speeding up the processing of the (IMHO useless) aspects of metadata.))
    To answer your question, I do think the master cache is worth keeping - and we can pray that Adobe puts more thought into why the master cache exists and who uses the present type of information indexed within the cache.

  • Since getting version 22 (release update channel) I had a few days with very large text; today, Firefox will not start. How do I get it running again?

    I've searched for an .exe file in Firefox program files, but there doesn't seem to be one anywhere. I'd like to uninstall, download a new program and reinstall, but I'd rather not lose bookmarks and other settings.
    Running Windows 7 on a Sony Vaio laptop. Any suggestions? Thanks in advance.

    Certain Firefox problems can be solved by performing a ''Clean reinstall''. This means you remove the Firefox program files and then reinstall Firefox; you WILL NOT lose any bookmarks, history, or settings. Please follow these steps:
    '''Note:''' You might want to print these steps or view them in another browser.
    #Download the latest Desktop version of Firefox from http://www.mozilla.org and save the setup file to your computer.
    #After the download finishes, close all Firefox windows (click Exit from the Firefox or File menu).
    #Delete the Firefox installation folder, which is located in one of these locations, by default:
    #*'''Windows:'''
    #**C:\Program Files\Mozilla Firefox
    #**C:\Program Files (x86)\Mozilla Firefox
    #*'''Mac:''' Delete Firefox from the Applications folder.
    #*'''Linux:''' If you installed Firefox with the distro-based package manager, you should use the same way to uninstall it - see [[Installing Firefox on Linux]]. If you downloaded and installed the binary package from the [http://www.mozilla.org/firefox#desktop Firefox download page], simply remove the folder ''firefox'' in your home directory.
    #Now, go ahead and reinstall Firefox:
    ##Double-click the downloaded installation file and go through the steps of the installation wizard.
    ##Once the wizard is finished, choose to directly open Firefox after clicking the Finish button.
    Please report back to see if this helped you!
