Idle server 1 meg/second memory consumption

WL 6.1 sp2
Solaris 2.8
JDK 1.3.1_02 -server, 1 gig heap
I noticed lately through the console that an idle WL server, with our
application deployed but no client/sockets connecting to it other than the
web console, is consuming memory at about 1 meg/second. Is this the norm?
Seems a bit voracious to me....
Gene

Damn it, I added one too many zeros! I was looking at the performance graph
in /console and thought I was seeing 300 megs when it was actually 30 megs!
So in actuality my idle server is consuming .1 meg/sec, which seems a bit
more like it... Can I make a feature request: have the console show
comma separators for those big numbers? :-)
Actually this is a lead-in to my real question: in production we have a
couple of servers that are true memory hogs; they churn through a 1 gig
heap in 20 seconds! This is causing a lot of issues, obviously: GC occurs
every 20 seconds, with a 3-5 second pause each time, so we spend roughly
15-25% of wall-clock time paused. Hence we have an inordinate amount of
"downtime", even if we cluster 2-3 servers, each experiencing this kind of
memory consumption. Here's what I want:
1) I'd like to capture daily and weekly graphs of GC frequency and duration.
The Java applet console does not record such history, so I'm wondering if
there is an MBean I can use, or whether someone has written one that does this?
(A low-tech fallback I've sketched below is to run with -verbose:gc and
summarize the log offline.)
2) How can I profile my 50 SLSB EJBs to find which one(s) are the memory
hogs? They aren't leaking, because GC always brings them back down to
baseline; they just suck up a lot of memory! I've tried using -Xrunhprof
and JProbe, but both slow the server down to the point where it's unbearable
to run (on dev; I don't do this on production :-)). Do you guys have other
tricks to find memory consumers?
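Here's the sketch for question 1. It's a minimal one (class and variable names are illustrative), and it assumes the classic HotSpot -verbose:gc line format, e.g. [GC 325407K->83000K(776768K), 0.0459067 secs]; the exact format varies by JDK build, so treat the parsing as an assumption:

import java.io.BufferedReader;
import java.io.FileReader;

// Summarize GC frequency and total pause time from a -verbose:gc log.
public class GcLogSummary {
    public static void main(String[] args) throws Exception {
        BufferedReader in = new BufferedReader(new FileReader(args[0]));
        int count = 0;
        double totalPause = 0.0;
        String line;
        while ((line = in.readLine()) != null) {
            // Expect lines like: [GC 325407K->83000K(776768K), 0.0459067 secs]
            int comma = line.lastIndexOf(',');
            int secs = line.indexOf(" secs]");
            if (line.startsWith("[") && comma >= 0 && secs > comma) {
                count++;
                totalPause += Double.parseDouble(line.substring(comma + 1, secs).trim());
            }
        }
        in.close();
        System.out.println(count + " collections, " + totalPause + " secs total pause");
    }
}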
Thanks,
Gene
"Rob Woollen" <[email protected]> wrote in message
news:[email protected]..
1 MB/s does seem like a lot for an idle server. You might try taking
some thread dumps when it's supposedly idle and see what it's up to.
-- Rob
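(For reference: on Solaris a thread dump can be forced by sending the JVM process a SIGQUIT, e.g. kill -3 <pid>; the dump is written to the server's stdout log.)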
Gene Chuang wrote:
WL 6.1 sp2
Solaris 2.8
JDK 1.3.1_02 -server, 1 gig heap
I noticed lately through the console that an idle WL server, with our
application deployed but no client/sockets connecting to it other than the
web console, is consuming memory at about 1 meg/second. Is this the norm?
Seems a bit voracious to me....
Gene

Similar Messages

  • Continuously refreshing a tab after an interval leads to high memory consumption (400MB to 800MB in 30 seconds for 3 refreshes at 10 secs interval), why?

    Environment:
    MAC OSX 10.9.5
    Firefox 32.0.3
    Firefox keeps consuming a lot of memory when you keep refreshing a tab at an interval.
    I opened a single tab in Firefox and logged into my Gmail account. At this stage the memory consumption was about 400MB. I refreshed the page after 10 seconds and it went to 580MB. When I refreshed again after 10 seconds, it was 690MB. Finally, when I refreshed a 3rd time after 10 seconds, it was showing 800MB.
    Nothing had changed on the page (no new email, chat conversation, etc.). Somehow I feel that Firefox is not doing a good job at garbage collection. I tested this use case with a lot of other applications and websites and got similar results. Other browsers like Google Chrome, Safari, etc. work just fine.
    For one of my applications with three tabs open, Firefox literally crashed after the high memory consumption (around 2GB).
    Can someone tell me if this is a known issue in Firefox? And is Firefox planning to fix it? Right now, is there any workaround or fix for this?

    Hi FredMcD,
    Thanks for the reply. Unfortunately, I don't see any crash reports in about:crashes. I am trying to reproduce the issue that makes the browser crash, but somehow it's not happening anymore; the browser just gets stuck at a point. Here is what I am doing:
    - 3 tabs are open with the same page of my application. The page has several panels with charts, and the JavaScript libraries used for this page are backbone.js, underscore.js, require.js, highcharts.js
    - The page automatically reloads every 30 seconds
    - After the first load of these three tabs, the memory consumption is 600MB. But after 5 minutes, the memory consumption goes to 1.6GB and stays at that level.
    - After some time, the page won't load completely in any of the tabs. At this stage the browser becomes very slow and I have to either hard refresh the tabs or restart the browser.

  • Very high memory consumption of B1i and cockpit widgets

    Hi all,
    I have finally managed to install B1i successfully, but I think something is wrong though.
    Memory consumption in my test environment (Win2003, 1024 MB RAM), while no other applications and no SAP addons are started:
    tomcat5.exe 305 MB
    SAP B1 client 315 MB
    SAP B1DIProxy.exe 115 MB
    sqlservr.exe 40 MB
    SAPB1iEventSender.exe 15 MB
    others less than 6 MB and almost only system based processes...
    For each widget I open (3 default widgets, one on each standard cockpit), the tomcat process grows bigger and leaves less for the SQL server, which has to fetch all the data (several seconds at 100% CPU usage).
    Is this heavy memory consumption normal? What happens if several users are logged into SAP B1 using widgets?
    Thanks in advance
    Regards
    Sebastian

    Hi Gordon,
    so this is normal? Then I guess the dashboards are not suitable for many customers, especially those working on a terminal server infrastructure. Even if the tomcat server has this memory consumption only on the SAP server, when each client needs about 300 MB (plus a few hundred more for the several addons they need!), I could not activate the widgets. And SAP B1 is generally not the only application running at the customer's site. Suggesting that they buy more memory for some Xcelsius dashboards won't convince the customer.
    I hope this feature will be improved in the future; otherwise the cockpit is just an extension of the old user menu (except for the brilliant quickfinder at the top of the screen).
    Regards
    Sebastian

  • Server goes out of memory when annotating TIFF File. Help with Tiled Images

    I am new to JAI and have a problem with the system going out of memory.
    Objective:
    1) Load up a TIFF file (each approx 5-8 MB when compressed with CCITT.6 compression)
    2) Annotate the image (consider it as a simple drawString with the Graphics2D object of the RenderedImage)
    3) Send it to the servlet outputStream
    Problem:
    Server goes out of memory when 5 threads try to access it concurrently
    Runtime conditions:
    VM param set to -Xmx1024m
    Observation:
    Writing the files takes a lot of time compared to reading the files.
    Some more information:
    1) I need to do the annotating at pre-defined positions on the images (e.g. in the first quadrant, or maybe in the second quadrant).
    2) I know that using the TiledImage class it's possible to load up a portion of the image and process it.
    Things I need help with:
    I do not know how to send the whole file back to the servlet output stream after annotating a tile of the image.
    If I write the tiled image back to a file, or to the output stream, it gives me only the portion of the tile I read in and watermarked, not the whole image file.
    I have attached the code I use when I load up the whole image.
    Could somebody please help with the TiledImage solution?
    Thx
    public void annotateFile(File file, String wText, OutputStream out, AnnotationParameter param) throws Throwable {
        ImageReader imgReader = null;
        ImageWriter imgWriter = null;
        TiledImage in_image = null, out_image = null;
        IIOMetadata metadata = null;
        ImageOutputStream ios = null;
        try {
            Iterator readIter = ImageIO.getImageReadersBySuffix("tif");
            imgReader = (ImageReader) readIter.next();
            imgReader.setInput(ImageIO.createImageInputStream(file));
            metadata = imgReader.getImageMetadata(0);
            // Load the whole image as a writable TiledImage
            in_image = new TiledImage(JAI.create("fileload", file.getPath()), true);
            System.out.println("Image Read!");
            Annotater annotater = new Annotater(in_image);
            out_image = annotater.annotate(wText, param);
            Iterator writeIter = ImageIO.getImageWritersBySuffix("tif");
            if (writeIter.hasNext()) {
                imgWriter = (ImageWriter) writeIter.next();
                ios = ImageIO.createImageOutputStream(out);
                imgWriter.setOutput(ios);
                ImageWriteParam iwparam = imgWriter.getDefaultWriteParam();
                if (iwparam instanceof TIFFImageWriteParam) {
                    // Carry the source TIFF's compression over to the output
                    iwparam.setCompressionMode(ImageWriteParam.MODE_EXPLICIT);
                    TIFFDirectory dir = (TIFFDirectory) out_image.getProperty("tiff_directory");
                    double compressionParam = dir.getFieldAsDouble(BaselineTIFFTagSet.TAG_COMPRESSION);
                    setTIFFCompression(iwparam, (int) compressionParam);
                } else {
                    iwparam.setCompressionMode(ImageWriteParam.MODE_COPY_FROM_METADATA);
                }
                System.out.println("Trying to write Image ....");
                imgWriter.write(null, new IIOImage(out_image, null, metadata), iwparam);
                System.out.println("Image written....");
            }
        } finally {
            if (imgWriter != null)
                imgWriter.dispose();
            if (imgReader != null)
                imgReader.dispose();
            if (ios != null) {
                ios.flush();
                ios.close();
            }
        }
    }

    user8684061 wrote:
    You are right, SGA is too large for my server. I guess Oracle set the SGA automatically when I chose the default installation, but why would the SGA be so big? Is Oracle not smart enough?
    The default database configuration reserves 40% of physical memory for the SGA of an instance, which you as a user can always change. I don't see anything wrong with that, or grounds to say Oracle is not smart.
    If I don't decrease the SGA but increase max-shm-memory, would it work?
    That needs support from the CPU architecture (32-bit or 64-bit) and the kernel as well. Read more about huge pages.

  • Query on memory consumption during SQL

    Hi SAP Gurus,
    Could I kindly request your inputs concerning the following scenario?
    To put it quite simply, we have a program where we're required to retrieve all the fields from a lengthy custom table, i.e. the select statement uses an asterisk.  Unfortunately, there isn't really a way to avoid this short of a total overhaul of the code, so we had to settle for this (for now).
    The program retrieves from the database table using a where clause filtering on a single company code value.  Kindly note that company code is not the only key in the table.  In order to help with memory consumption, the original developer employed retrieval by packages (also note that the total length of each record is 1803...).
    The problem encountered is as follows:
    - Using company code A, retrieving 700k entries in packages of 277, the program ran without any issues.
    - However, using company code B, retrieving 1.8m entries in packages of 277, the program encountered a TSV_TNEW_PAGE_ALLOC_FAILED short dump.  This error is encountered the very first time the program goes through the select statement, ergo it has not even been able to pass through any additional internal table processing yet.
    The biggest difference between the two company codes is the number of corresponding records they have in the table.  I've checked whether company code B had more values in its columns than company code A, but they're just the same.
    What I do not quite understand is why memory consumption changed just by changing the company code in the selection.  I thought that the memory consumed by both company codes should be the same... at least in the beginning, considering that we're retrieving by packages, so we're not trying to get all of the records at once.  However, the fact that it failed at the very beginning has shown me that I'm gravely mistaken.
    Could someone please enlighten me on how memory is consumed during database retrieval?
    Thanks!

    Hi,
    with FAE (FOR ALL ENTRIES) the whole query, even for a single record in the itab, is executed, and all results for
    the company code are transferred from the database to the DBI, since the duplicates are removed by the DBI,
    not by the database.
    If you use PACKAGE SIZE, the result set is buffered in a system table in the DBI (which allocates memory from your user quota), and from there the packages are built and handed over to your application (into table lt_temp).
    see recent ABAP documentation:
    Since duplicate rows are only removed on the application server, all rows specified using the WHERE condition are sometimes transferred to an internal system table and aggregated here. This system table has the same maximum size as the normal internal tables. The system table is always required if addition PACKAGE SIZE or UP TO n ROWS is used at the same time. These do not affect the amount of rows transferred from the database server to the application server; instead, they are used to transfer the rows from the system table to the actual target area.
    What you should do:
    calculate the size needed for your big company code B: the number of rows multiplied by the line length, here roughly 1,800,000 rows x 1803 bytes, i.e. about 3.2 GB.
    That is the minimum amount you need for your user memory quota. (Quotas can be checked with
    ABAP report RSMEMORY.) If the amount of memory is sufficient, then try without PACKAGE SIZE:
    SELECT * FROM <custom table>
    INTO TABLE lt_temp
    FOR ALL ENTRIES IN lt_bukrs
    WHERE bukrs = lt_bukrs-bukrs
    ORDER BY primary key.
    This might actually use less memory than the PACKAGE SIZE option with FOR ALL ENTRIES.
    Since with FAE the result is buffered anyway in the DBI (and subtracted from your quota), you can
    do it right away and avoid saving portions twice (in the DBI buffer, and a portion of that again in
    the package in lt_temp).
    If the amount of memory is still too big, you have to either increase the quotas, or select
    less data (additional WHERE conditions), or avoid using FAE in this case in order not to read all
    the data in one go.
    Hope this helps,
    Hermann

  • Integration Builder Memory Consumption

    Hello,
    we are experiencing very high memory consumption in the Java IR designer (not the directory), especially when loading normal graphical IDoc-to-EDI mappings, but also for normal IDoc-to-IDoc mappings. Examples (RAM on the client side):
    - open normal idoc to idoc mapping: + 40 MB
    - idoc to edi orders d93a: + 70 MB
    - a second idoc to edi orders d93a: + 70 MB
    - Execute those mappings: no additional consumption
    - third edi to edi orders d93a: + 100 MB
    (all mappings in the same namespace)
    After three more mappings, RAM on the client side reaches 580 MB and then a Java heap error occurs. Sometimes also OutOfMemory, and then you have to terminate the application.
    Obviously the mapping editor is not well optimized for RAM usage. It seems not to cache the in/out message structures, or it loads a great deal of dedicated functionality for every mapping.
    So we cannot really call that fun; working is very slow.
    Do you have similar experiences? Are there workarounds? I know the JNLP memory setting parameters, but the problem is the high load of each mapping, not only the overall maximum memory.
    And we are using only graphical mappings, no XSLT!
    We are on XI 3.0 SP 21
    CSY

    Hi,
    apart from raising the tablespace, see
    Note 425207 - SAP memory management, current parameter ranges.
    You can also configure operation modes to change work processes dynamically using RZ03/RZ04.
    Please see the link below:
    http://help.sap.com/saphelp_nw04s/helpdata/en/c4/3a7f53505211d189550000e829fbbd/frameset.htm
    You can contact your Basis administrator for the necessary action.

  • High memory consumption in XSL transformations (XSLT)

    Hello colleagues!
    We have a problem of very high memory consumption when transforming XML
    files with CALL TRANSFORMATION.
    Code example:
    CALL TRANSFORMATION /ipro/wml_translate_cls_ilfo
                SOURCE XML lx_clause_text
                RESULT XML lx_temp.
    lx_clause_text is a WordML xstring (i.e. it is a Microsoft Word file in XML
    format) and can therefore not easily be split into several parts.
    Unfortunately this string can get very big (e.g. 50MB). The problem is that
    CALL TRANSFORMATION seems to allocate memory for the source and result
    xstrings but doesn't free it after the transformation.
    So in this example the transformation allocates ~100MB of memory (50MB for
    the source, ~50MB for the result) and doesn't free it. Multiply this by a
    couple of transformations and a good number of users and you see how we
    get into trouble.
    I found this note regarding the problem: 1081257
    But we couldn't figure out how this problem could be solved in our case. The
    note proposes to "use several short-running programs". What is meant by
    this? By the way, our application is built with Web Dynpro for ABAP.
    Thank you very much!
    With best regards,
    Mario Düssel

    Hi,
    q1. How come the RAM consumption increased to 99% on all three boxes?
    If we continue with the theory that network connectivity was lost between the hosts, the Coherence servers on the local hosts would form their own clusters. Prior to the "split", each cache server would hold 1/12 of the primary and 1/12 of the backup (assuming you have one backup). Since Coherence avoids selecting a backup on the same host as the primary when possible, the 4 servers on each host would hold 2/3 of the cache. After the split, each server would hold 1/6 of the primary and 1/6 of the backup, i.e., twice the memory it previously consumed for the cache. It is also possible that a substantial portion of the missing 1/3 of the cache may be restored from the near caches, in which case each server would then hold 1/4 of the primary and 1/4 of the backup, i.e., three times the memory it previously consumed for the cache.
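    (To make the arithmetic concrete, assuming 12 cache servers, 4 per host, and one backup copy: each server goes from 1/12 + 1/12 = 1/6 of the data to 1/6 + 1/6 = 1/3, so its cache memory doubles; and if the missing third is later restored from the near caches, 1/4 + 1/4 = 1/2, i.e. three times the original 1/6.)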
    q2. Where is the cache data stored in the Coherence servers, and in which memory?
    The cache data is typically stored in the JVM's heap memory area.
    Have you reviewed the logs?
    Regards,
    Harv

  • Dbxml memory consumption

    I have a query that returns about 10MB worth of data when run against my db; it looks something like the following:
    'for $doc in collection("VcObjStore")/doc
    where $doc[@type="Foo"]
    return <item>{$doc}</item>'
    When I run this query in dbxml.exe, I see the memory footprint (of dbxml.exe) increase by 125MB. Once the query finishes, it comes back down.
    I expected memory consumption to be somewhat larger than what the query actually returns, but this seems quite extreme.
    Is this behavior expected? What is a general rule of thumb on memory usage with respect to result size (is it really 10x)? Any way to make it less of a hog?
    Thanks

    Hi Ron,
    Thanks for a quick reply!
    - I wasn't actually benchmarking DBXML. We've observed large memory consumption during query execution in our test application and verified the same issue with dbxml.exe. Since dbxml.exe is well understood by everyone familiar with DBXML, I thought it would help to start with that.
    - Yes, an environment was created for this db. Here is the code we used to set it up:
    EnvironmentConfig envConfig = new EnvironmentConfig();
    envConfig.setInitializeLocking(true);
    envConfig.setInitializeCache(true);
    envConfig.setAllowCreate(true);
    envConfig.setErrorStream(System.err);
    envConfig.setCacheSize(1024 * 1024 * 100);
    - I'd like an explanation of the reasons behind the performance difference between these two queries:
    Query 1:
    dbxml> time query 'for $doc in collection("VcObjStore")/doc
    where $doc[@type="VirtualMachine"]
    return $doc'
    552 objects... <snip>
    Time in seconds for command 'query': 0.031
    Query 2:
    dbxml> time query 'for $doc in collection("VcObjStore")/doc
    where $doc[@type="VirtualMachine"]
    return <val>{$doc}</val>'
    552 objects... <snip>
    Time in seconds for command 'query': 5.797
    - Any way to make query #2 go as fast as #1?
    Thanks!

  • Query memory consumption

    Hi,
    Need some SQL expertise here. May I know how much memory (RAM) a simple query like 'SELECT SUM(Balance) FROM OCRD' consumes?
    What about a query like
    select (select sum(doctotal) from ordr) + (select sum(doctotal) from odln) + (select sum(doctotal) from oinv)
    How much memory would it normally take? The reason is that I have a query quite similar to this and it will be run quite often, so I wonder if it is feasible to use this type of query without bringing the server to a crawl.
    Please note that the real query would include JOINs and such. Thanks.
    Any information is appreciated.

    Hi Melvin,
    Not sure I'd call myself an expert, but I'll have a go at an answer.
    I think you are going to need to set up a test environment and then stress test your solution to see what happens. There are so many different variables affecting memory consumption that no-one is likely to be able to say just what the impact will be on your server. SQL Server by default will allocate 1024KB to each query but, of course, quite a number of factors affect whether SQL needs more memory than this to execute a particular query (e.g. the number of joins, the locks created, whether the data is grouped or sorted, the size of the data, etc.). Also, SQL will release memory as soon as it can (based on its own algorithms), so a query that is run periodically has much less impact on the server than a query run concurrently by multiple users. For these reasons, the impact can only really be assessed if you test it in a real-world scenario.
    If you've ever watched SQL Server memory usage while XL Reporter is running a very large report, you'll know that this is a very memory-hungry operation. XL Reporter bombards SQL with a huge number of separate little queries, and SQL Server starts grabbing significant amounts of memory to fulfill them. As the queries come in so fast, SQL hasn't yet got around to releasing the memory used by previous queries, so it instead grabs available memory from the server.
    You'll get better performance and scalability by using stored procedures, but SDK certification does not allow the use of SPs in the SBO databases.
    Hope this helps,
    Owen

  • Portal Session Memory Consumption

    Dear All,
    I want to see users' session memory consumption for Portal 7.0, i.e. if a portal user opens a session, how much memory is consumed by him/her. How can I check this? Is there any default value associated with this?
    Also, will the backend system memory load get added to the portal's consumption, or to that specific backend system's memory consumption?
    Thanks in Advance......
    Vinayak

    I'm seeing the exact same thing with our setup (it's essentially the same
    as yours). The WLS5.1 documentation indicates that Java objects that
    aren't serializable aren't supported with in-memory replication. My
    testing has indicated that the <web_context>._SERVLET_AUTHENTICATION_
    session value (which is of class type
    weblogic.servlet.security.ServletAuthentication) is not being
    replicated. From what I can tell in the WLS5.1 API Javadocs, this class
    is a subclass of java.lang.Object (with no mention of Serializable) as of
    SP9.
    When <web_context>._SERVLET_AUTHENTICATION_ doesn't come up in the
    SECONDARY cluster instance, the <web_context>.SERVICEMANAGER.LOGGED.IN
    value gets set to false.
    I'm wondering if WLCS3.2 can only use file or JDBC for failover.
    Either way, if you learn anything more about this, will you keep me
    informed? I'd really appreciate it.
    >
    Hi,
    We have clustered two instances of WLCS in our development environment with
    the properties file configured for "in-memory replication" of session data. Both
    instances come up properly and join the cluster properly. But the problem is
    with the in-memory replication: it looks like the session data of the portal is
    not getting replicated.
    We tried with simplesession.jsp in this cluster and its session data is properly
    replicated.
    So the problem seems to be with the session data put by the Portal
    (and that is the reason why I am posting it here). Every time, the "logged in"
    check fails with the removal of one of the instances serving the request. Is
    there a known bug/patch for the session data serialization of WLCS? We are using
    3.2 with Apache as the proxy.
    Your help is very much appreciated.--
    Greg
    GREGORY K. CRIDER, Emerging Digital Concepts
    Systems Integration/Enterprise Solutions/Web & Telephony Integration
    (e-mail) gcrider@[NO_SPAM]EmergingDigital.com
    (web) http://www.EmergingDigital.com

  • Memory consumption using cvitdms.dll

    Hi all!
    I am using a DIAdem library to create .tdms files.
    Through the DLL, I create and open a file and then start to append data values.
    When I look at the Task Manager, I can see that the memory consumption of my application keeps increasing until I stop my program.
    I also tried running the program without starting the logging methods that use the TDMS libraries, and then this behavior does not occur.
    I flush the data every 30 seconds or every 500 registers.
    How can I solve this problem of memory consumption?
    Regards
    Gustavo

    Hey Gustavo,
    Do you have a small program that demonstrates this behavior? If so, could you please upload the CVI source so I can reproduce your issue here? Also, what version of CVI are you using? I look forward to hearing back from you!
    Best Regards,
    Jett R
    Software Engineer

  • Memory consumption is more with oracle database compared to Sybase

    Hi,
    We are executing the same Java source code with a Sybase backend and with an Oracle database, but with the Oracle database it consumes more memory than with Sybase.
    We are currently using 11g R2 with the ojdbc6.jar driver.
    Can you please provide information on how to optimize memory consumption when using Oracle?
    Thanks,
    Nagaraj

    user12569889 wrote:
    We are executing the same java source code with backend sybase and the oracle database. But with oracle database it is consuming more memory than Sybase.
    That is not saying anything at all.
    What memory in Oracle? Shared pool? Buffer cache? PGA? UGA? Library cache? Something else?
    What memory in Sybase did you compare it to?
    What type of client-server connection to Oracle was made? Dedicated or shared?
    What commands/methods were used to determine the memory consumption in Oracle and then Sybase?
    What serves as the baseline for comparison?
    Comparing product A with product B is a COMPLEX thing to do, and IMO beyond the abilities of the majority of developers, as they lack the technical expertise to extract usable metrics and correctly compare these between products that can work VERY differently.
    And unless you can provide technical details to back up your claim that "memory consumption is more with oracle database compared to Sybase", I would say that you have no idea what you are actually observing and are in no position to deduce that Oracle consumes more memory.

  • Excessive memory consumption when loading Customers through Component Inter

    Hi All,
    I'm facing a big problem with high memory consumption when loading Customers, Companies and Sites using the Component Interfaces delivered by the product (RD_CONSUMER_CI_API, RD_COMPANY_CI_API, RD_SITE_CI_API) within Application Engine programs. I'm loading about 7 million customers, an amount that is not so big in my opinion, but the memory consumption is too high.
    We have 3 batch servers, each running under Red Hat OS with 32 GB of RAM plus 32 GB of swap. We are running 2 processes per server, and within a day and a half the servers crash with 100% of memory consumed (RAM and swap).
    Is there a good practice for using Component Interfaces in a heavy load process?
    Are there parameters in the process scheduler configuration file that could help reduce memory consumption?
    Is there a way to free the memory through PeopleCode or by running another process?
    Thanking you in advance.

    You may want to try cutting down the input data to ascertain whether the load itself is the problem.
    You may try using the garbage collector, but it might not help in your case.
    To get an idea of the size allocated in the buffer for the Rowset being used, you may want to check the memory overhead ...
    Also, you could check which process is consuming a lot of memory.

  • Measure thread's memory consumption

    Hello.
    Nice to see you here.
    Please tell me, is there any way to measure a thread's memory consumption?
    I'm trying to tune an application server.
    In total, the physical server, with Power AIX 5.3 on board, has 8GB of memory.
    For example, I allocate 1408m for the application server's Java heap (-Xms1408m -Xmx1408m).
    Then I tune the application server thread pools (web threads, EJB threads, EJB alarm threads, etc...).
    As I understand it, Java threads live in native memory, not in the Java heap.
    I would like to know how to measure the size of a thread in native memory.
    After that I can set the size of the thread pools (to avoid OutOfMemory, native or heap).

    holod wrote:
    As I understood Java treads live in native memory, not in Java heap.
    The data the JVM uses to manage threads may live in the JVM's own memory outside of the Java heap. However, that data will be a very tiny fraction of what the JVM is consuming (unless you have a huge number of threads, which are all using very, very little memory).
    I would like to know how to measure size of thread in native memory.
    It will almost certainly be so small as to not matter.
    After that I can set size of thread pools (to avoid OutOfMemory native or heap).
    No, that will almost certainly not help at all.
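    If you want to gauge it empirically anyway, here is a minimal sketch (the class and names are illustrative, not any vendor's API): start a batch of idle threads and compare the process size reported by the OS (e.g. ps -o vsz -p <pid> on AIX) before and after; the delta divided by the thread count approximates the per-thread native cost, which is dominated by the -Xss stack reservation.
    public class ThreadCostProbe {
        public static void main(String[] args) throws Exception {
            int n = (args.length > 0) ? Integer.parseInt(args[0]) : 500;
            System.out.println("Sample the process size now, then press Enter...");
            System.in.read();
            for (int i = 0; i < n; i++) {
                Thread t = new Thread(new Runnable() {
                    public void run() {
                        try {
                            Thread.sleep(Long.MAX_VALUE); // stay idle forever
                        } catch (InterruptedException e) {
                            // interrupted: just exit
                        }
                    }
                });
                t.setDaemon(true); // don't keep the VM alive on exit
                t.start();
            }
            System.out.println(n + " idle threads started; sample the process size again.");
            Thread.sleep(60000); // leave time to sample VSZ/RSS externally
        }
    }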

  • Huge Memory Consumption for Transparent Stage

    When executing the test below, the memory consumption in the Task Manager grows to 2GB within seconds on my test system; the issue does not occur when the stage style is changed from TRANSPARENT to UNDECORATED. Tested on Win7, 8GB, with FX 8b116.
    public class TransparentStageMemoryIssueTest extends Application {
      public static void main(String[] args) {
        Application.launch(args);
      }
      @Override
      public void start(Stage stage) {
        System.err.println(System.getProperty("javafx.runtime.version"));
        //stage.initStyle(StageStyle.UNDECORATED);
        stage.initStyle(StageStyle.TRANSPARENT);
        final BorderPane root = new BorderPane();
        root.setStyle("-fx-background-color:#808080;-fx-background-radius:30;-fx-border-radius:30;-fx-border-width:14;-fx-border-color:blue;");
        Scene scene = new Scene(root, 800, 600);
        stage.setScene(scene);
        stage.setX(10);
        stage.setY(10);
        stage.show();
        // Resize the window repeatedly from the FX thread to provoke the growth
        new Thread() {
          public void run() {
            for (int i = 0; i < 1000; i++) {
              Platform.runLater(new Runnable() {
                @Override
                public void run() {
                  int minSize = 500;
                  int maxSize = 1000;
                  int w = minSize + new Random().nextInt(maxSize - minSize);
                  int h = minSize + new Random().nextInt(maxSize - minSize);
                  Window win = root.getScene().getWindow();
                  win.setWidth(w);
                  win.setHeight(h);
                }
              });
              try {
                sleep(50);
              } catch (InterruptedException e) {
                e.printStackTrace();
              }
            }
          }
        }.start();
      }
    }

    Could you post this issue to the JavaFX Jira as well as an email to openjfx-dev?
    Thanks,
    -- Jonathan
