RTSJ ver RTS-2.2u1 - Running with large heap size

We installed the RTSJ version that should support 64-bit, on Solaris S10X_u8 with 8GB of memory.
We expected to be able to define a heap size greater than 4GB.
But when we ran with the following flags: -D64 -Xms4g -Xmx4g
we got the following message:
Invalid maximum heapsize:-Xmx4g
The specified size exceeds the maximum representable size
Could not create the java virtual machine

Gabi,
That should be -d64 (small 'd'). The -D arguments define Java property settings.
Also note that in JRTS we don't dynamically grow the heap, so you only need one of -Xms or -Xmx to set the heap size.
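For example, the corrected invocation would look like this (MyApp is just an illustrative stand-in for the real main class):
java -d64 -Xmx4g MyApp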
David

Similar Messages

  • Large heap sizes, GC tuning and best practices

    Hello,
    I’ve read in the best practices document that the recommended heap size (without JVM GC tuning) is 512M. It also indicates that GC tuning, object number/size, and hardware configuration play a significant role in determining what the optimal heap size is. My particular Coherence implementation contains a static data set that is fairly large in size (150-300k per entry). Our hardware platform contains 16G physical RAM available and we want to dedicate at least 1G to the system and 512M for a proxy instance (localstorage=false) which our TCP*Extend clients will use to connect to the cache. This leaves us 14.5G available for our cache instances.
    We’re trying to determine the proper balance of heap size vs. number of cache instances and have ended up with the following configuration: 7 cache instances per node running with a 2G heap and a high-units value of 1.5G. Our testing has shown that the Concurrent Mark Sweep GC algorithm produces no substantial GC pauses, and we have also tested with a heap fragmentation inducer (http://www.azulsystems.com/e2e/docs/Fragger.java), which likewise shows no significant pauses.
    The reason we opted for a larger heap was to cut down on the cluster communication and context switching overhead, as well as the administration challenges that 28 separate JVM processes would create. Although our testing has shown successful results, my concern is that we’re straying from the best practices recommendations, and I’m wondering what others' thoughts are on the configuration outlined above.
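    (A quick sanity check on the arithmetic, assuming the 28-JVM figure implies 4 storage nodes: 7 instances x 2G heap = 14G per node, which just fits inside the 14.5G left after the 1G system and 512M proxy reservations.)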
    Thanks,
    - Allen Bettilyon


  • XSLT with large file size

    Hello all,
    I am very new to XSL; I have been reading a lot about it. I have a small transformation program which takes an XML file and an XSL file and performs the transformation. Everything works fine when the file size is on the order of KB. When I tried the same program with a file size of around 118MB, it gave me an out-of-memory exception. I would appreciate any comments on making my program work for bigger file sizes. I am posting my Java code and XSL file.
    import java.io.FileInputStream;
    import java.io.FileNotFoundException;
    import java.io.FileOutputStream;
    import javax.xml.transform.Result;
    import javax.xml.transform.Source;
    import javax.xml.transform.SourceLocator;
    import javax.xml.transform.Templates;
    import javax.xml.transform.Transformer;
    import javax.xml.transform.TransformerConfigurationException;
    import javax.xml.transform.TransformerException;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.stream.StreamResult;
    import javax.xml.transform.stream.StreamSource;

    public static void xsl(String inFilename, String outFilename, String xslFilename) {
        try {
            // Create the transformer factory
            TransformerFactory factory = TransformerFactory.newInstance();
            // Use the factory to compile the xsl file into a template
            Templates template = factory.newTemplates(new StreamSource(
                    new FileInputStream(xslFilename)));
            // Use the template to create a transformer
            Transformer xformer = template.newTransformer();
            // Prepare the input and output files
            Source source = new StreamSource(new FileInputStream(inFilename));
            Result result = new StreamResult(new FileOutputStream(outFilename));
            // Apply the xsl file to the source file and write the result to the output file
            xformer.transform(source, result);
        } catch (FileNotFoundException e) {
            System.out.println("Exception " + e);
        } catch (TransformerConfigurationException e) {
            // An error occurred in the XSL file
            System.out.println("Exception " + e);
        } catch (TransformerException e) {
            // An error occurred while applying the XSL file;
            // the locator (when present) gives the position of the error in the input
            SourceLocator locator = e.getLocator();
            System.out.println("Exception " + e);
            if (locator != null) {
                System.out.println("line : " + locator.getLineNumber());
                System.out.println("col : " + locator.getColumnNumber());
                System.out.println("publicId : " + locator.getPublicId());
                System.out.println("systemId : " + locator.getSystemId());
            }
        }
    }
    XSL file:
    <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
    <xsl:output method="xml" indent="yes"/>
    <xsl:template match="/">
    <xsl:element name="Hosts">
    <xsl:apply-templates select="//maps/map/hosts"/>
    </xsl:element>
    </xsl:template>
    <xsl:template match="//maps/map/hosts">
    <xsl:for-each select="*">
    <xsl:element name="host">
    <xsl:element name="ip"><xsl:value-of select="./@ip"/></xsl:element>
    <xsl:element name="known"><xsl:value-of select="./known/@v"/></xsl:element>
    <xsl:element name="targeted"><xsl:value-of select="./targeted/@v"/></xsl:element>
    <xsl:element name="asn"><xsl:value-of select="./asn/@v"/></xsl:element>
    <xsl:element name="reverse_dns"><xsl:value-of select="./reverse_dns/@v"/></xsl:element>
    </xsl:element>
    </xsl:for-each>
    </xsl:template>
    </xsl:stylesheet>
    Thanks,
    Namrata

    One thing you could try is to avoid XPath expressions like "//" and "*".
    I had many problems with memory consumption and performance with XPaths like those above.
    Although it is a little more work to code your XSLT with explicit paths, it performs better, and you will probably write it once and run it many times.
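    If explicit paths alone don't tame a 118MB input, another option (a different technique from the XSLT above, sketched here on stated assumptions) is to stream the extraction with StAX, which keeps only the current event in memory instead of the whole tree. The element and attribute names (hosts, ip, v, and so on) are taken from the stylesheet above; the HostsExtractor class name is made up for illustration.
    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import javax.xml.stream.XMLInputFactory;
    import javax.xml.stream.XMLOutputFactory;
    import javax.xml.stream.XMLStreamConstants;
    import javax.xml.stream.XMLStreamException;
    import javax.xml.stream.XMLStreamReader;
    import javax.xml.stream.XMLStreamWriter;

    public class HostsExtractor {

        public static void main(String[] args) throws Exception {
            XMLStreamReader in = XMLInputFactory.newInstance()
                    .createXMLStreamReader(new FileInputStream(args[0]));
            XMLStreamWriter out = XMLOutputFactory.newInstance()
                    .createXMLStreamWriter(new FileOutputStream(args[1]), "UTF-8");

            // Note: output is not pretty-printed, unlike indent="yes" in the stylesheet
            out.writeStartDocument();
            out.writeStartElement("Hosts");

            boolean inHosts = false;
            int depth = 0; // nesting depth relative to the enclosing <hosts>

            while (in.hasNext()) {
                int event = in.next();
                if (event == XMLStreamConstants.START_ELEMENT) {
                    String name = in.getLocalName();
                    if (!inHosts) {
                        if (name.equals("hosts")) {
                            inHosts = true;
                            depth = 0;
                        }
                    } else {
                        depth++;
                        if (depth == 1) {
                            // each direct child of <hosts> is one host entry
                            out.writeStartElement("host");
                            writeField(out, "ip", in.getAttributeValue(null, "ip"));
                        } else if (depth == 2) {
                            // known, targeted, asn, reverse_dns carry their value in @v
                            writeField(out, name, in.getAttributeValue(null, "v"));
                        }
                    }
                } else if (event == XMLStreamConstants.END_ELEMENT) {
                    if (inHosts) {
                        if (depth == 0) {
                            inHosts = false;           // closing </hosts>
                        } else {
                            if (depth == 1) {
                                out.writeEndElement(); // closing </host>
                            }
                            depth--;
                        }
                    }
                }
            }
            out.writeEndElement(); // </Hosts>
            out.writeEndDocument();
            out.close();
            in.close();
        }

        private static void writeField(XMLStreamWriter out, String name, String value)
                throws XMLStreamException {
            out.writeStartElement(name);
            out.writeCharacters(value == null ? "" : value);
            out.writeEndElement();
        }
    }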

  • How large heap size for running a WL Server?

    Hi friends,
    I am running a 5.1 WL server, which usually uses more than 170MB, or even more if I deploy EJBs.
    Is it usual for a server to use this much memory?
    If I want to reduce heap usage, what can I do?
    Thanks!
    James

    Hi Sathya,
    If you do not have any memory leaks and a fair amount of memory is already allocated to one server for your application, then I would suggest increasing the number of servers in the cluster. That way you would have a distributed architecture, and performance would increase.
    For better tuning you can have a look at the below links
    Topic: Optimizing WebLogic Server Performance: JVM tuning
    http://middlewaremagic.com/weblogic/?p=6388
    Topic: Tuning the WebLogic Server Performance
    http://middlewaremagic.com/weblogic/?p=6384
    Regards,
    Ravish Mody

  • Using JRockit Real Time with Large Heap

    I want to know whether JRockit Real Time can work with an extremely large heap, e.g. 100G, and still provide effective deterministic GC, before proceeding with an evaluation for my application.
    Thanks in advance!
    Wing

    Hi Wing,
    In general, extremely large heaps can make all GCs slower. This is true for Deterministic GC as well. However, it is very application dependent. It is above our standard recommendation, but you can download JRockit Real Time and give it a try.
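    (A hedged aside, since flag spellings vary by release: the deterministic collector is normally selected with -XgcPrio:deterministic, optionally together with a pause target such as -XpauseTarget=30ms; check the JRockit Real Time documentation for your version before relying on these exact spellings.)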

  • I need to input text from PC to iPad with large font size. Too much to ask?

    It works well from PC to PC using software like Windows Live Messenger, but not with the iPad, because I haven't seen a Messenger app for iPad that can display large fonts or has a custom font size option. They all use a small font size that can't be changed.
    I've also tried many VNC apps, writing the text on my PC in Notepad and using VNC to display the large text on the iPad. The problem is that VNC apps won't work well over 3G.
    Is this really too much to ask from an iPad?
    If some app developer reads this, I'll buy an app for $50 that has a white background and can receive text sent from a PC with custom font sizes. It doesn't need any other features: just receive text from a PC using existing IM software such as Windows Live, or simple custom software.

    I was wondering if you could help me out with my current situation; it seems similar to your previous I/O dilemma. I think I might have to parse the data, but I don't know how to do that yet. I basically have to put the information from one text file into a database: it is formatted one way, and I need to reformat it in a different way that SQL statements can read. Here are 2 entries of the roughly 12,000 that I have in one text file:
    1
    UI - 4
    TI - 117-28
    Y2 - -32676
    RP - NOT IN FILE
    SO - <None Specified> -1 ;():
    2
    UI - 11721
    TI - The mechanism of genetic recombination in Escherichia coli.
    MH - mechanism
    MH - recombination
    MH - Escherichia coli
    RP - NOT IN FILE
    SO - Cold Spring Harb Symp Quant Biol 1953 ;18():75-93
    These are only 2 of the records. Notice that the record number comes first, followed by field tags (e.g. UI), with the data stored in each field after the "-".
    Thanks for your help
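    A minimal parsing sketch (not from the original thread), assuming the format shown above: a bare number starts each record and " - " separates each tag from its value. It just prints INSERT statements; the table and column names (records, record_id, tag, value) are hypothetical placeholders, and real code should use a JDBC PreparedStatement rather than string concatenation.
    import java.io.BufferedReader;
    import java.io.FileReader;

    public class RecordParser {
        public static void main(String[] args) throws Exception {
            try (BufferedReader in = new BufferedReader(new FileReader(args[0]))) {
                int recordId = 0;
                String line;
                while ((line = in.readLine()) != null) {
                    line = line.trim();
                    if (line.isEmpty()) continue;
                    if (line.matches("\\d+")) {
                        // a bare number marks the start of a new record
                        recordId = Integer.parseInt(line);
                    } else {
                        int sep = line.indexOf(" - ");
                        if (sep < 0) continue; // skip lines that are not "TAG - value"
                        String tag = line.substring(0, sep).trim();
                        String value = line.substring(sep + 3).trim();
                        // escape single quotes; a PreparedStatement would handle this safely
                        System.out.printf("INSERT INTO records (record_id, tag, value)"
                                + " VALUES (%d, '%s', '%s');%n",
                                recordId, tag, value.replace("'", "''"));
                    }
                }
            }
        }
    }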

  • Large Heap size

    Does keeping the maximum heap size at a big value, say 512m, create any problems?
    A gentleman from BEA has asked us to keep this value at 128m (mx);
    keeping it high creates problems for garbage collection.
    TIA,
    Vikas

    Only load testing will show. Throw massive load at it and see what your full GC times are. If they are acceptable, run with the largest heap practical and cache the world.
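    (A hedged aside: adding -verbose:gc to the server's java command line makes the JVM print a line per collection, which is enough to read the full-GC times off directly.)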
    Peace,
    Cameron Purdy
    Tangosol Inc.
    Tangosol Coherence: Clustered Coherent Cache for J2EE
    Information at http://www.tangosol.com/
    "Vikas Chawla" <[email protected]> wrote in message
    news:[email protected]..
    Does keeping the maximum size to big value say 512m creates any problem?
    A gentleman from BEA has asked to keep this value to 128m (mx), keeping
    it high creates the problems for Garbage collection.

  • Problem with image resolution with larger screen size

    I was working on the Basic Lens Sample for a Windows Phone 8 app. In this sample the camera is set to the highest resolution by default.
    I just have a couple of questions on this:
    1.) How do I change basic settings like resolution, ISO, etc. in this app? During initial setup it only takes the default values, and when I try to change them it throws an error. Also, a code comment written at camera initialization says to take only these default values. So when should these settings be changed, and how?
    2.) How do I adjust the resolution for phablets with 6" screens? When I run the code on a Lumia 1320, the image resolution is not that good and the image breaks up. How do I adjust screen resolutions to cater for different smartphones with different resolutions and screen sizes?
    Any solutions, suggestions, or snippets are very much appreciated.
    Thanks in advance.

    Hello,
    For your first question, you can use the GetAvailableCaptureResolutions method of PhotoCaptureDevice to get the resolutions available on the device. That code sample returns the largest resolution by default; you just need to choose an appropriate one. You can find it in the CameraController class, in the Models folder of the project.
    For your second question, I tested the sample on a Lumia 1320 and didn't see the image break up.
    >> So how to adjust screen resolutions for catering different smartphones with different resolutions and screen sizes.
    You can capture an image at an appropriate resolution and let the OS downscale it for you. You can find the supported resolutions at the following link, under the Supported resolutions section:
    http://msdn.microsoft.com/en-us/library/windows/apps/jj206974(v=vs.105).aspx
    If you want to display a whole large-resolution image on a lower-resolution device, you can use the ScrollViewer control. Use the Application.Current.Host.Content.ActualWidth and ActualHeight properties to get the device resolution; then you can tell whether your app is running on a lower-resolution device.
    Regards,

  • Linux AMD64, JDK 1.5_03: slow performance with large heap

    A Tomcat app server running on JDK 1.4.2 on 32-bit Linux, configured with mx1750m, ms1750m, runs fast: it returns 2MB of data through an HttpServlet in under 30 secs.
    Moving the same app server to 64-bit on JDK 1.5.0_03, configured with mx13000m, ms10000m, the same request for data takes 5-20 minutes, and I'm not sure why the timing is inconsistent. If the app server is configured with mx1750m, ms1750m, performance is about 60 secs or less.
    I checked the Java settings through jstat; -d64 is the default. Why would increasing the heap cause such slow performance? Physical memory on the box = 32MB.
    It looks like it's definitely Java related, since a Perl app making an HTTP request to the server takes under a minute to run. We moved to 64-bit to get around the 1.7GB limitation of 32-bit Linux, but now performance is unacceptable.

    I agree, an AMD64 with only 32 MB of memory would be a very strange beast indeed; heck, my graphics card has 4 times that, and it's not the most up to date.
    Keep in mind that switching to 64-bit does not only mean a bigger memory space but also bigger pointers (below the Java level) and probably more padding in your memory, which leads to bigger memory consumption, which in turn leads to more bus traffic, which slows things down. This might be a cause for your slowdown, but it should not usually result in a slowdown as severe as the one you noticed.
    Maybe it's also simply a question of a not-yet-completely-optimized JDK for amd64.

  • Heap sizes with Planning 11.1.1.3 & Weblogic help

    I'm used to having the xms and xmx heap sizes in the registry with Planning. But I installed Planning 11.1.1.3 with Weblogic 9.2 (all 32-bit) and deployed it as a service. However, the registry does not have the usual heap settings automatically added. I can modify the setDomainEnv script to bump up the heap sizes, but running that script leaves the command window running in the foreground, which is something I don't want. I tried manually adding the heap size settings to the Hyperion Planning registry settings, but they do not take effect when starting the web server via the Services console. Anyone have any ideas? I peeked around the Weblogic admin console, but I didn't really want to try to set the heap sizes through there; plus, I'm not sure any settings in there will take effect when starting from the Services console. I thought all the arguments for a service were derived from the registry.

    EssbaseInAz wrote:
    "We are using 64-bit Windows 2003 with Weblogic 9.2 as well. What are the heap size suggestions for Hyperion Planning 11.1.1.3?"
    Are the heap sizes already present in your Planning registry settings? Did you use the JRockit JVM or the Sun JVM when configuring Weblogic? I'm trying to track down what caused my install not to create the heap size JVM option in my registry. I know Weblogic recommends the JRockit JVM for performance reasons, but if I can't get the Planning service to run with larger heaps, I may have to change JVMs. Having the Planning web application running in the foreground seems too risky to me (someone logging out and shutting down any running windows). We are using the JRockit JVM, and the heap size is set up at around 1GB.
    As John mentioned, 1.8GB may be the max even though you are using the 64-bit version.

  • Analyse large heap dump file

    Hi,
    I have to analyse a large heap dump file (3.6GB) from a production environment. However, when I open it in Eclipse MAT, it gives an OutOfMemoryError. I tried to increase the Eclipse workbench Java heap size as well, but it doesn't help. I also tried VisualVM. Can we split the heap dump file into smaller parts? Or is there any way to set a maximum heap dump file size in the JVM options so that we collect heap dumps of a reasonable size?
    Thanks,
    Prasad

    Hi Prasad,
    Have you tried opening it in 64-bit MAT on a 64-bit platform, with a large heap size and the CMS GC policy set in the MemoryAnalyzer.ini file? MAT is a good toolkit for analysing Java heap dump files; if it can't cope, you can try Memory Dump Diagnostic for Java (MDD4J) in the 64-bit IBM Support Assistant, also with a large heap size.
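    For reference, a minimal sketch of what that MemoryAnalyzer.ini change might look like (the 8g figure is illustrative, anything comfortably above the 3.6GB dump should do; as in any Eclipse-style .ini file, the JVM options go after the -vmargs marker):
    -vmargs
    -Xmx8g
    -XX:+UseConcMarkSweepGC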

  • Serial VISA 'Write' - why is it slow to return even with a large buffer?

    Hi,
    I'm writing a serial data transfer code 'module' that will run 'in the background' on a cRIO-9014.  I'm a bit perplexed about how VISA write in particular seems to work.
    What I'm seeing is that the VISA Write takes about 177ms to 'return' from a 4096 byte write, even though my write buffer has been set to >> 4096.
    My expectation would be that the write completes near instantly as long as the VISA driver available buffer space is greater than the bytes waiting to be written, and that the write function would only 'slow down' up to the defined VISA timeout value if there was no room in the buffer.
    As such, I thought it would be possible to 'pre-load' the transmit buffer at a high rate, then, by careful selection of the time-out value relative to the baud rate, it would self-throttle once the buffer fills up?
    Based on my testing this is not the case, which leaves me wondering:
    a) If you try to set the transmit buffer to an unsupported value, will you get an error?
    b) Assuming 'yes' to a, what the heck is the purpose of the serial write buffer? I see no difference running with serial buffer size == data chunk size and serial buffer size >> data chunk size??
    QFang
    CLD LabVIEW 7.1 to 2013

    Hi, I can quickly show the low-level part as a PNG. It's a sub-VI for transferring file segments. Some things, like the thin 'in-line' VI with (s) as the icon, were added to help me look at where the hold-up is. I cropped the image to make it more readable; the cut-off left and right sides are just the input and output clusters.
    In a nutshell, the VISA Write takes as much time to 'return' as it would take to transfer x bytes over y baud rate. In other words, even though there is supposed to be a (software or hardware) write and read buffer on the com port, the VISA Write function seems to block until the message has physically left the port (or it writes TO the buffer at the same speed the buffer writes out of the port). This is very unexpected to me, and is what prompted me to ask what the point of the write buffer is in the first place. The observations are on a 9014 RT target's built-in serial port; I'm not sure whether the same is observed on other targets or other OSes. [edit: and the observation holds even if transmitting block sizes of, say, 4096 with a buffer size of 4096 or 2*4096 or 10*4096, etc. I also tried smaller and larger block sizes with larger still buffers. I was able to verify that the buffer re-size function does error out if I give it an insane input buffer size request, so I'm taking that to mean that when I assign e.g. a 4MiB buffer space with no error, the write buffer actually IS 4MiB, but I have not found a property to read back what the HW buffer is, so all I have to base that on is the lack of an error while setting the buffer size. /edit]
    The rest of the code is somewhat irrelevant to this discussion; however, to better understand it: the idea is that the remote side of the connection will request various things, including a file. The remote side can request a file as a stream of messages, each of size 'Block Size (bytes)', or it can request a particular block (for handling e.g. re-transmission if the file's MD5 checksum does not match). The other main reason for doing block transfers is that VISA Write hogs a substantial amount of CPU, so if you were to attempt to write e.g. a 4MiB file out the serial port (assuming your VISA time-out is sufficiently long for that size of transfer), the write would succeed, but you would see ~50% CPU from this one thread alone, and (depending on baud rates) it could remain at that level for a very long time. So, by transferring smaller segments at a time, I can arbitrarily insert delays between segments to let the CPU sleep (at the expense of longer transfer times). The first inner case shown, which opens the file, only runs for new transfers; the open file ref is kept on a shift register in the calling VI. The 'get file offset' function after the read was just something I was looking at during (continued) development, and it is not required for the functionality I'm describing.
    QFang
    CLD LabVIEW 7.1 to 2013

  • JRockit for applications with very large heaps

    I am using JRockit for an application that acts as an in-memory database, storing a large amount of data in RAM (50GB). Out of the box we got about a 25% performance increase compared to the HotSpot JVM (great work, guys). Once the server starts up, almost all of the objects will be stored in the old generation, and a smaller number will be stored in the nursery. The operation that we are trying to optimize needs to visit basically every object in RAM, and we want to optimize for throughput (total time to run this operation, not worrying about GC pauses). Currently we are using huge pages, -XXaggressive and -XX:+UseCallProfiling. We are giving the application 50GB of RAM for both the max and min. I tried adjusting the TLA size to be larger, which seemed to degrade performance. I also tried a few other GC schemes, including singlepar, which also had negative effects (currently we use the default, which optimizes for throughput).
    I used the JRMC to profile the operation and here were the results that I thought were interesting:
    liveset 30%
    heap fragmentation 2.5%
    GC Pause time average 600ms
    GC Pause time max 2.5 sec
    It had to do 4 young generation collections, which were very fast, and then 2 old generation collections, which were each about 2.5s (the entire operation takes 45s).
    For the long old generation collections, about 50% of the time was spent in mark and 50% in sweep. Going down a sub-level, 1.3 seconds were spent in objects and 1.1 seconds in external compaction.
    Heap usage: although 50GB is committed, usage fluctuates between 32GB and 20GB. To give you an idea of what is stored in the heap, about 50% of it is char[] and another 20% is int[] and long[].
    My question is: are there any other flags that I could try that might help improve performance, or is there anything I should be looking at more closely in JRMC to help tune this application? Are there any specific tips for applications with large heaps? We can also assume that memory could be doubled or even tripled if that would improve performance, but we noticed that larger heaps did not always improve performance.
    Thanks in advance for any help you can provide.

    Any suggestions for using JRockit with very large heaps?

  • OutOfMemoryError with large number of databases

    Hey,
    I was wondering how the databases themselves are tracked in an environment, and whether caching is handled differently for this information. We have one environment with many (~16,000) databases; each database only has a few entries. When we start our process with a 64MB heap size, it gets an OutOfMemoryError just opening the environment, even with maxMemoryPercent set to 10%. It seems like BDB is not handling the database info well. Any ideas would be helpful.
    thanks,
    -james

    Hi James,
    Presently, we never evict a DB once it has been opened or encountered during recovery. Each DB takes about 2,000 bytes, so if you have 16K DBs you need approximately 32MB of memory, assuming all of them could be opened or recovered during the process lifetime. Unfortunately, even closing them does not cause them to be evicted, and encountering them during recovery will also pull them into memory.
    We have an FAQ entry on this:
    http://www.oracle.com/technology/products/berkeley-db/faq/je_faq.html#37
    So you will need a bigger cache size. If you are encountering this during recovery, then you could try a more frequent checkpoint interval.
    I hope this is useful.
    Regards,
    Charles Lamb
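    As a hedged illustration of "a bigger cache size": the je.maxMemory property below is the byte-valued counterpart of the je.maxMemoryPercent setting mentioned in the question, the 128MB figure is illustrative rather than tuned, and the OpenLargeEnv class name is made up. Run the JVM with a matching -Xmx (say -Xmx256m) so the cache actually fits.
    import java.io.File;
    import com.sleepycat.je.Environment;
    import com.sleepycat.je.EnvironmentConfig;

    public class OpenLargeEnv {
        public static void main(String[] args) throws Exception {
            EnvironmentConfig cfg = new EnvironmentConfig();
            // ~2,000 bytes per DB x 16,000 DBs is roughly 32MB just for the
            // database metadata, so leave generous headroom beyond that
            cfg.setConfigParam("je.maxMemory",
                    String.valueOf(128L * 1024 * 1024));
            Environment env = new Environment(new File(args[0]), cfg);
            try {
                // ... open databases and do work ...
            } finally {
                env.close();
            }
        }
    }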

  • Larger image size and disk performance

    hey all,
    Granted, the majority of users here won't experience issues with larger image sizes, but I'm hoping a few may be able to share some experience.
    I work with an H3D, a Mamiya RB (with various Leaf backs), and drum-scanned film. The issue is that a single image can often be anything from 150MB up to 500MB, excluding any post-production changes.
    My current Aperture library is on a FireWire 800 disk, but disk I/O is crippling the box and the program at the moment. I'm looking at the ExpressCard slot and wondering who here is running a disk off that, and what they feel it's like from a performance perspective.
    An example is a recent location shoot with a handful of images above 500MB each. Aperture takes around 10-15 minutes to start up when this folder is selected (constantly reprocessing the images every time it starts), and this leads to a totally unresponsive OS.
    How are you handling large files with Aperture?
      Mac OS X (10.3.9)  

    On the Quad I process 250MB+ TIFF scans; not often, but often enough. I use external 7200rpm drives in a Sonnet 500P enclosure attached to an eSATA multiport card (Sonnet E2P). Performance is equal to internal storage, as far as I can judge.
    I recall that the PCMCIA and then PC Card bus speeds were horrendously slow. I'm not sure what the ExpressCard bus speed is, but it would be a crying shame to attach drives capable of 300Gb/s bursts (or RAID 5 driving 200MB+ continuous) to a backend bus capable of only a few Mb.
    As Alan notes, a MacBook Pro may be OK for field work and tethered shooting, but for the image sizes you have, the preferred solution would be a Mac Pro.
    G.
