Differences in JVM memory management in 1.4.2 and 1.5?

Hi all,
We have developed some code that calls a third-party DLL through JNI. The code runs fine on JRE 1.4.2. When we test it on JRE 1.5, the DLL gives us a heap-memory-exhausted message. We have set -Xmx to 1024m due to performance issues. If we lower -Xmx from 1024m to 912m, the program runs fine. So my question is: is memory being handled differently in 1.4.2 and 1.5?
Environment: Windows XP, JRE 1.4.2 and 1.5, 1 GB memory
Thanks all!
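
A quick way to see how much of the process address space the heap reservation itself claims is the standard Runtime API (a minimal sketch; run it with the same -Xmx you give the real application):

public class HeapCheck {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // -Xmx: the address-space reservation the JVM makes up front
        System.out.println("max heap (-Xmx):   " + rt.maxMemory() + " bytes");
        System.out.println("committed heap:    " + rt.totalMemory() + " bytes");
        System.out.println("free in committed: " + rt.freeMemory() + " bytes");
        // On 32-bit Windows a process gets roughly 2 GB of user address space;
        // whatever the heap reserves is unavailable to the DLL's native
        // allocations, which is consistent with 912m working where 1024m fails.
    }
}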

Hi,
I follow some rules while developing JNI code:
- minimize use of the local heap; try to use global or virtual memory,
- minimize the number of local/global references used; make shallow copies of the same reference, as is done in Windows COM,
- minimize the number of stack variables,
- be sure that JNI code has no memory leaks (a crude Java-side check is sketched below).
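
Native leaks don't show up in the Java heap, but you can spot them from the Java side by hammering the native call while watching both the heap and the process size (a minimal sketch; NativeLib.nativeCall is a hypothetical stand-in for your own JNI method):

public class JniLeakCheck {
    public static void main(String[] args) throws InterruptedException {
        Runtime rt = Runtime.getRuntime();
        for (int round = 0; round < 10; round++) {
            for (int i = 0; i < 100000; i++) {
                // NativeLib.nativeCall();  // exercise the JNI code under test
            }
            System.gc();                    // settle the Java heap between rounds
            Thread.sleep(200);
            System.out.println("used heap: " + (rt.totalMemory() - rt.freeMemory()));
        }
        // If the used heap stays flat here but the process size in Task Manager
        // keeps climbing, the leak is on the native (JNI) side.
    }
}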
To solve these problems I have developed tools for MS Visual Studio:
http://www.simtel.net/product.php[id]93174[SiteID]simtel.net
http://www.simtel.net/product.php[id]94368[sekid]0[SiteID]simtel.net
The tools take care of these problems and are used by other people in my company for developing JNI code; they do not need to know anything about these issues.

Similar Messages

  • Memory Management comparison between Database 9208 and 11gR2 on Sun Solaris

    Hi All,
    I need some case studies which would help me understand how memory management is done in 9208 and 11gR2 on Sun Solaris SPARC.
    I also want some real-world data showing that 11gR2 manages memory and CPU better than 9208, and some comparison graphs between 9i and 11gR2.
    Any information will be of great help.
    Thanks everyone for your support.
    Thanks
    Abdul

    Please see if the links below help:
    http://www.oracle.com/global/de/upgradecommunity/artikel/upgrade11gr1_workshop2.pdf
    http://www.dba-oracle.com/oracle11g/oracle_11g_memory_target_parameter.htm
    Regards
    Rajesh

  • Differences in CIN management between LabVIEW 5.0 and LabVIEW 7.1 ?

    Hi,
    I'm using a VI on a two-photon microscope. It steers the laser beam in the sample (much like a TV scan: scanning a fast line on the X axis, then moving the line down), controlling two mirrors via the PCI-MIO-16E-4 multifunction card on Traditional DAQ, and at the same time it handles the acquisition through a non-NI DSP multifunction board with on-board buffers and high data throughput (70 Mb/s).
    It is this part of the program which gives us strong headaches since we have upgraded to a new PC.
    On the old machine we used Win2k and Labview 5.0; on the new machine we planned to use WinXP Pro and Labview 7.1.
    The acquisition part of the VI uses 2 CINs. One is used to configure the FPGA on the Multifunctions board at initialization.
    The second CIN reads the buffers received from the board and puts them into the shape of an image.
    Now, since we tried WinXP Pro and LabVIEW 7.1, we get some weird bugs: either there is a big problem with the synchronization of the mirrors with the acquisition, or with the image reconstruction. We have already tried:
    1. Putting everything back into the old machine to verify there has not been any damage to the boards. OK
    2. Using the boards in the new machine but with Win2000 and LabVIEW 5.0, just to be sure that there are no conflicting elements in the hardware. OK
    As we wanted to check everything slowly but surely, we then installed LV 7.1 on the new machine but still with Win2k. And here the problem appears!
    First we thought the drivers for the multifunction board (ATEME Adr128C6x) were not compatible with WinXP Pro, so at this point we were quite surprised to notice the problem seems to be caused by the upgrade from LV 5.0 to 7.1!
    Now, finally, my question is: are there any major changes in how LabVIEW handles CINs between LV 5.0 and 7.1?
    Can the problem be caused by the

    So, no answer,...
    Anyhow, we found the problem on our own. For anyone else experiencing this kind of behaviour:
    The problem did not have anything to do with our CINs, nor with XP.
    In fact, we noticed that the analog ground and the digital ground had to be connected. For some reason this did not cause a problem in LabVIEW 5.0, but in LabVIEW 7.1 the somewhat unstable ground broke the synchronization completely. We don't know why but, hey, the problem is gone.

  • Optimization of the JVM memory - Garbage Collector

    Hi ,
    Just a question about JVM memory management.
    There seems to be a limitation on JVM memory usage (around 1.6 GB).
    Is there any way to use the garbage-collector mechanism to optimize memory usage?
    Or any other suggestions for JVM memory optimization?
    Thanks a lot!!!

    nicolasmichael wrote:
    Hi,
    the "memory limitation" does not have anything to do with garbage collection, but with the address space your operating system provides. On a 32-bit operating system, your virtual address space is limited to <= 4 GB, depending on your operating system (for example, 1.something GB on Windows and 3.something GB on Solaris).
    No. Windows 32-bit has a 2 GB application space and can be configured to allow a 3 GB space.
    The Sun VM does not allow more because of the way that the executable is linked.
    [http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=4358809]

  • eMac has poor memory management

    I use an eMac and a G5. I would expect the G5 to be much faster than the eMac but here's a funny thing: I find that the G5 also manages memory much better than the eMac! For example, when I run OmniWeb for a long time on my eMac and really put it through its paces, loading up multiple pages and tabs, I find that OmniWeb starts running slower. Not surprising, but the rest of the computer starts running slower too! For example, creating a new folder may take several seconds.
    Even stranger: after I quit all of my programs, the sluggishness continues. My eMac does not recover until I completely restart. Maybe OmniWeb has a memory leak or something, but this doesn't happen on the G5! I can put OmniWeb through its paces on the G5 and launch twice as many programs, including Photoshop, with no noticeable slowdown.
    Now here's the weirdest thing of all: both computers have exactly the same amount of memory (768 MB) and both have plenty of free hard drive space (over 40 GB). All the memory on my eMac is Apple brand. I have even clean-reinstalled the OS on my eMac, to no avail.
    Is it normal that memory management would be so much better on a G5 than an eMac?

    The improvements in each OS generation in memory management involve swapping tasks in and out of memory more efficiently, not using less memory --- invariably more memory is actually used (to do more tasks). It would've been clearer had I noted that each OS generation does more with the memory and places more stringent demands on the RAM timing. Memory chips that were good enough under 10.1 failed when 10.2 came out; ditto with 10.3/10.2 and 10.4/10.3.
    See what Console and/or Activity Monitor tell you is using CPU cycles.

  • Difference between nio-file-manager  and nio-memory-manager

    Hi,
    what's the difference between nio-file-manager and nio-memory-manager? The documentation doesn't really discuss the differences as far as I know. They both use NIO to store memory-mapped files, don't they? What are the advantages/disadvantages of each?
    When should one choose the first and when the second for storing a large amount of data? Can both be used to query data with the Filter API? Are there size limits on both?
    Best regards
    Jan

    Hi Jan,
    The difference is that one uses a memory mapped file and one uses direct nio memory (as part of the memory allocated by the JVM process) to store the data. Both allow storing cache data off heap making it possible to store more data with a single cache node (JVM) without long GC pauses.
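    The distinction maps onto two standard Java NIO primitives; a minimal sketch of just those primitives (the file name and size are made up, and this is plain NIO, not Coherence configuration):

    import java.io.RandomAccessFile;
    import java.nio.ByteBuffer;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;

    public class OffHeapBuffers {
        public static void main(String[] args) throws Exception {
            int size = 64 * 1024 * 1024; // 64 MB

            // nio-memory-manager style: direct (off-heap) memory in the JVM process
            ByteBuffer direct = ByteBuffer.allocateDirect(size);

            // nio-file-manager style: a memory-mapped file on disk
            RandomAccessFile raf = new RandomAccessFile("cache.dat", "rw");
            MappedByteBuffer mapped = raf.getChannel()
                    .map(FileChannel.MapMode.READ_WRITE, 0, size);

            // Neither buffer's contents live in the Java heap, so the data is
            // invisible to the garbage collector and adds nothing to GC pauses.
            direct.putLong(0, 42L);
            mapped.putLong(0, 42L);
            raf.close();
        }
    }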
    If you are using a 32 bit JVM, the JVM process will be limited to a total of ~3GB on Windows and 4GB on Linux/Solaris. This includes heap and off heap memory allocation.
    Regarding the size limitations for the nio-file-manager, please see the following doc for more information.
    With the release of 3.5 there is now the idea of a partitioned backing map, which helps create larger backing maps (up to 8 GB of capacity) for NIO storage. Please refer to the following doc.
    Both can be used to query data but it should be noted that the indexes will be stored in heap.
    hth,
    -Dave

  • What is difference between 32 bit and 64 bit sql server memory management

    What is difference between 32 bit and 64 bit sql server memory management
    Thanks
    Shashikala

    This is the basic difference; check if it helps:
    A 32-bit CPU running 32-bit software (also known as the x86 platform) is so named because it is based on an architecture that can manipulate values that are up to 32 bits in length. This means that a 32-bit memory pointer can store a value between 0 and 4,294,967,295 to reference a memory address, which equates to a maximum addressable space of 4 GB on 32-bit platforms.
    A 64-bit pointer, on the other hand, has a limit of 18,446,744,073,709,551,616. This number is so large that in memory/storage terminology it equates to 16 exabytes. You don't come across that term very often, so to help understand the scale, here is the value converted to more commonly used measurements: 16 exabytes = 16,777,216 petabytes (16 million PB) = 17,179,869,184 terabytes (17 billion TB) = 17,592,186,044,416 gigabytes (17 trillion GB).
    As you can see, it is significantly larger than the 4 GB virtual address space usable in 32-bit systems; it's so large, in fact, that any hardware capable of using it all is sadly restricted to the realm of science fiction. Because of this, processor manufacturers decided to implement only a 44-bit address bus, which provides a virtual address space on 64-bit systems of 16 TB. This was regarded as more than enough address space for the foreseeable future, and logically it is split into an 8 TB range for user mode and 8 TB for kernel mode. Each 64-bit process running on an x64 platform can thus address up to 8 TB of VAS.
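    For anyone who wants to check the arithmetic, these figures are just powers of two (a quick sketch):

    import java.math.BigInteger;

    public class AddressSpace {
        public static void main(String[] args) {
            // 2^32 = 4,294,967,296 bytes = 4 GB  (32-bit pointer)
            System.out.println("2^32 = " + BigInteger.ONE.shiftLeft(32));
            // 2^44 = 17,592,186,044,416 bytes = 16 TB  (44-bit address bus)
            System.out.println("2^44 = " + BigInteger.ONE.shiftLeft(44));
            // 2^64 = 18,446,744,073,709,551,616 bytes = 16 EB  (full 64-bit pointer)
            System.out.println("2^64 = " + BigInteger.ONE.shiftLeft(64));
        }
    }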

  • Anyone use nio-memory-manager ?? what's it good for?

    Can someone give me an example of when the nio-memory-manager should be used?
    Thanks,
    Andrew

    If I remember the outcome of my experiments with NIO right, the situation is as follows:
    1. Allocating/releasing huge shared memory blocks over and over can lead to OS/JVM issues. To avoid this I allocated the max size I wanted from the start (this is an option when configuring "off-heap" storage, I believe). When doing it this way I had no reliability issues with the NIO memory manager in my tests.
    2. Tangosol/Oracle used to claim that off-heap (the NIO memory manager) performs worse than on-heap. I could not see any clear indication of this, but it may be application dependent. For our app, the reduced number of JVMs per server (reducing network communication, number of threads, the risk of any JVM performing GC at a given time, etc.) seemed to more than offset the allegedly slower memory manager, resulting in MUCH BETTER performance! A lot of queries anyhow (at least for us) mainly work against indexes, which are always stored on-heap...
    3. There is a limitation of 2 GB per NIO block (at least in 32-bit JVMs; not sure about 64-bit - I have never seen any point in using them, since heaps larger than 2 GB seldom work well anyhow, and each pointer consumes double the space in the heap and in CPU caches), but this is per CACHE and separate for PRIMARY and BACKUP, I believe! So my understanding is that if you (using a 64-bit OS) for instance have two (equally big) caches, you could allocate at most 2 * 2 * 2 = 8 GB of off-heap memory for holding data per JVM (without ANY impact on GC pauses!) and in addition use as much heap as you can get away with (given GC pause times) for holding the indexes to that data. This would make a huge difference in JVM count! For example, we today have to run 10+ JVMs per server using on-heap storage, while using off-heap storage we could probably get that down to one or two JVMs per server!
    4. There may be both OS and JVM parameters that you need to set (depending on the OS and JVM used!) in order to allocate large amounts of shared memory using NIO (the default is rather small).
    As for the question about de-allocation, I never saw any sign of memory leaks with the NIO memory manager (i.e. space previously occupied by deleted objects was reused for new objects), but as I mentioned above, you had better allocate the max-size NIO memory block you intend to use up front; that memory will then remain allocated for this use, so if your amount of cache data varies and you would like to use the memory for other purposes (like heap!) at some point, you may be better off sticking with on-heap storage, which is more flexible in that respect.
    As I previously mentioned, off-heap is today (until Oracle implements the improvement request!) really only an option if you do not plan to use "overflow protection" or if your objects are of fixed size :-(
    And if you are interested in using servers with a lot of memory and would like to use off-heap storage, please talk to your Oracle sales rep about it! If enough people do that, it may allow the Coherence developers to assign more time to making off-heap storage better! With this feature in place, Coherence will be even more of a "killer application" than it already is!
    Best Regards
    Magnus

  • JVM memory breakup

    Hi,
    My Java application (with JDK 1.5.0_03) shoots up to 169 MB of process size (as seen in Task Manager),
    while the heap size, when printed (Runtime.totalMemory() - Runtime.freeMemory()),
    shows as 68 MB. I assume that the process size is the total JVM memory.
    I would like to know the exact breakup of JVM memory; is it possible to find out the individual parameters that contribute to the total process memory?
    Thanks
    Srila
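    Since JDK 5, the java.lang.management API can at least break down the memory the JVM itself knows about (heap plus non-heap areas such as the permanent generation and code cache); the remainder of the process size is thread stacks, loaded DLLs, and native/JNI allocations, which the JVM cannot report. A minimal sketch:

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryMXBean;
    import java.lang.management.MemoryPoolMXBean;

    public class MemoryBreakup {
        public static void main(String[] args) {
            MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
            System.out.println("Heap:     " + mem.getHeapMemoryUsage());
            System.out.println("Non-heap: " + mem.getNonHeapMemoryUsage());

            // Per-pool breakdown: eden, survivor, tenured, perm gen, code cache, ...
            for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
                System.out.println(pool.getName() + ": " + pool.getUsage());
            }
        }
    }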

    Hi,
    I have a similar kind of problem.
    We have a web application running on JDK 1.5 and Tomcat 5. I want my application to take no more than 48 MB of memory on my machine. I used the -Xmx48m option, but it only controls the heap memory. What about the non-heap memory? Here are a few things I have noticed:
    -> When I use JConsole to monitor the application, I found that my application's non-heap memory max is set to 96 MB.
    -> When I use the "top" command to monitor the memory footprint of my application, the RSS (which shows the PHYSICAL MEMORY USED) shows an alarming 125 MB of usage (sometimes, when I use the -server option, it goes up to 250 MB), which may harm my other memory-sensitive applications. Even when the GC runs, Java does not free any memory for other applications.
    -> I have tried many options, like
         -XX:+UseConcMarkSweepGC
         -XX:NewSize=8m -XX:MaxNewSize=8m -XX:SurvivorRatio=2 -Xms48m -Xmx48m -Xss64k
         -XX:MaxHeapFreeRatio=20 -XX:MinHeapFreeRatio=10 -XX:NewSize=32m -XX:SurvivorRatio=32 -Xss256k -Xms48m -Xmx48m
         -XX:+UseParallelGC -XX:GCTimeRatio=20 -Xms30m -Xmx30m -Xss2048k
         but failed to restrict it to 48 MB.
    Before using Sun JDK 5 we were using IBM JDK 1.3 with Tomcat 3 with green threads, so we were able to configure it with just one option, -mx48m. I am really stuck. I cannot believe that Sun has not given us any control over maximum memory usage. Please help me.
    Regards
    Purav Gandhi

  • RE: (forte-users) memory management

    Brenda,
    When a partition starts, it reserves the MinimumAllocation. Within this
    memory space, objects are created, and more and more of this memory is
    actually used. When objects are no longer referenced, they remain in memory
    and the space they occupy remains unusable.
    When the amount of free memory drops below a certain point, the garbage
    collector kicks in, which will free the space occupied by all objects that
    are no longer referenced.
    If garbage collecting can't free enough memory to hold the additional data
    loaded into memory, then the partition will request another block of memory,
    equal to the IncrementAllocation size. The partition will try to stay within
    this new boundary by garbage collecting every time the available part of this
    memory drops below a certain point. If the partition can't free enough
    memory, it will again request another block of memory.
    This process repeats itself until the partition reaches MaximumAllocation.
    If that amount of memory still isn't enough, then the partition crashes.
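    For clarity, here is the growth policy described above as a rough Java sketch (the names mirror the Forte settings; the mechanics and thresholds are simplified assumptions, not Forte internals):

    /** Simplified model of a partition's memory growth policy. */
    class PartitionMemory {
        private long reserved;              // requested from the OS; never returned
        private long used;                  // occupied by live objects
        private final long increment;       // IncrementAllocation
        private final long maximum;         // MaximumAllocation

        PartitionMemory(long minimum, long increment, long maximum) {
            this.reserved = minimum;        // MinimumAllocation reserved at startup
            this.increment = increment;
            this.maximum = maximum;
        }

        void allocate(long bytes) {
            while (used + bytes > reserved) {
                garbageCollect();                       // free unreferenced objects first
                if (used + bytes <= reserved) break;
                if (reserved + increment > maximum)     // can't grow past MaximumAllocation
                    throw new OutOfMemoryError("partition crashes");
                reserved += increment;                  // request another block from the OS
            }
            used += bytes;
        }

        private void garbageCollect() {
            // stand-in: in Forte this frees the space occupied by unreferenced objects
        }
    }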
    The ActivePages instrument shows the memory reserved by the partition.
    AllocatedPages shows the part of that memory actually used.
    AvailablePages shows the part of that memory which is free.
    Note that once memory is requested from the operating system, it's never
    released again. Within this memory owned by the partition, the part actually
    used will always be smaller. But this part will increase steadily until the
    garbage collector is started and a part of it is freed again.
    There are some settings that determine when the garbage collector is
    started, but I'm not sure which ones they are.
    The garbage collector can be started from TOOL using
    "task.Part.OperatingSystem.RecoverMemory()", but I'm not sure if that will
    always actually start the garbage collector.
    If you track AllocatedPages of a partition, it's always growing, even if the
    partition isn't doing anything. I don't know why.
    If you add AllocatedPages and AvailablePages, you should get the value of
    ActivePages, but you won't. You always get a lower number, sometimes even
    considerably lower. I don't know why.
    Pascal Rottier
    Atos Origin Nederland (BAS/West End User Computing)
    Tel. +31 (0)10-2661223
    Fax. +31 (0)10-2661199
    E-mail: Pascal.Rottier@nl.origin-it.com
    ++++++++++++++++++++++++++++
    Philip Morris (Afd. MIS)
    Tel. +31 (0)164-295149
    Fax. +31 (0)164-294444
    E-mail: Rottier.Pascal@pmintl.ch
    -----Original Message-----
    From: Brenda Cumming [mailto:brenda_cumming@transcanada.com]
    Sent: Tuesday, January 23, 2001 6:40 PM
    To: Forte User group
    Subject: (forte-users) memory management
    I have been reading up on memory management and the
    OperatingSystemAgent, and could use some clarification...
    When a partition is brought online, is the ActivePages value set to the
    MinimumAllocation value, and expanded as required?
    And what is the difference between the ExpandAtPercent and
    ContractAtPercent functions?
    Thanks in advance,
    Brenda
    For the archives, go to: http://lists.xpedior.com/forte-users and use
    the login: forte and the password: archive. To unsubscribe, send in a new
    email the word: 'Unsubscribe' to: forte-users-request@lists.xpedior.com

    The Forte runtime is millions of lines of compiled C++ code, packaged into
    shared libraries (DLLs) which are a number of megabytes in size. The
    space is taken by the application binary, plus the loaded DLLs, plus
    whatever the current size of garbage-collected memory is.
    Forte allocates a garbage-collected heap that must be bigger than the size
    of the allocated objects. So if you start with an 8MB heap, you will always
    have at least 8MB allocated, no matter what objects you actually
    instantiate. See "Memory Issues" in the Forte System Management Guide.
    -tdc
    Tom Childers
    iPlanet Integration Server Engineering
    At 10:37 PM 6/11/01 +0200, [email protected] wrote:
    Hi all,
    I was wondering if anyone had any experience in deploying clients on NT
    concerning
    the memory use of these client apps.
    What is the influence of the various compiler options (optimum
    performance, memory use etc)?
    We seem to see that a lot of memory is taken by the Forte client apps (seen
    in the Task Manager of NT) compared to the other native Windows apps. For example, an
    executable of approx 4 MB takes up to
    15 MB of memory. When I look at the objects retained in memory after
    garbage collection, these come to about
    2 MB. Where do the other MBs come from?

  • LabVIEW memory management changes in 2009-2011?

    I'm upgrading a project that was running in LV 8.6. As part of this, I need to import a customer database and fix it. The DB has no relationships in it, and the new software does, so I import the old DB, create the relationships, fix any broken ones, and write out to the new DB.
    I started getting memory crashes in the program, so I started looking at Task Manager. The LabVIEW 8.6 code on my machine will peak at 630 MB of memory when the database is fully loaded. In LabVIEW 2011, it varies: the lowest I have gotten it is 1.2 GB, but it will go up to 1.5 GB and crash. I tried LV 2010 and LV 2009 and see the same behavior.
    I thought it might be the DB toolkit, as it looks like it had some changes made to it after 8.6, but that wasn't it (I copied the LV 8.6 version into 2011 and saw the same problems). I'm pretty sure it is now a difference in how LabVIEW is handling memory in these subVIs. I modified the code to still do the DB SELECTs, but do nothing with the data, and there is still a huge difference in memory usage.
    I have started dropping memory deallocation VIs into the subVIs and that is helping, but I still cannot get back to the LV 8.6 numbers.  The biggest savings was by dropping one in the DB toolkit's fetch subVI.
    What changed in LabVIEW 2009 to cause this change in memory handling?  Is there a way to address it?

    I created a couple of VIs which will demonstrate the issue.
    For Memory Test 1, here's the memory (according to Task Manager):
                  Pre-run   Run 1     Run 2      Run 3
    LabVIEW 8.6   55504     246060    248900     248900
    LabVIEW 2011  93120     705408    1101260    1101260
    This gives me the relative memory increase of:
                  Delta Run 1   Delta Run 2   Delta Run 3
    LabVIEW 8.6   190556        193396        193396
    LabVIEW 2011  612288        1008140       1008140
    For Memory Test 2, it's the same except drop the array of variants:
                  Pre-run   Run 1     Run 2     Run 3
    LabVIEW 8.6   57244     89864     92060     92060
    LabVIEW 2011  90432     612348    617872    621852
    This gives us deltas of:
                  Delta Run 1   Delta Run 2   Delta Run 3
    LabVIEW 8.6   32620         34816         34816
    LabVIEW 2011  521916        527440        531420
    What I found interesting in Memory Test #1 was that LabVIEW 2011 used more memory for the second run before it stopped. I started with Test 1 because it more closely resembled what the DB toolkit was doing, since it passes out variants that I then convert. I thought maybe LabVIEW no longer stores variants internally the same way. I dropped the indicator thinking it would make a huge difference in Memory Test 2, but it didn't make a huge difference.
    So what is happening? I see similar behavior in LV 2009 and LV 2010. LV 2009 was the worst (significantly); LV 2010 was slightly better than 2011, but still significantly worse than 8.6.
    Attachments:
    Memory Test.vi ‏8 KB
    Memory Test2.vi ‏8 KB

  • Memory management of WEB AS 6.20

    Hello,
    does anybody have information about the memory management of the Web AS 6.20 or the JVM?
    How can I see when a garbage collection takes place? How much memory should be allocated by the server nodes?
    Best regards
    Olaf

    Hi Olaf,
    > does anybody have information about the memory management
    > of the Web AS 6.20 or the JVM?
    There is an article here on SDN that covers such stuff:
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/documents/a1-8-4/how to tune sap j2ee engine 6.20.pdf
    > How can I see when a garbage collection takes place?
    You can't.
    > How much memory should be allocated by the server
    > nodes?
    See the article. Anyway, this depends on a lot of factors.
    Benny
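    One note on the second question: on Sun JVMs the -verbose:gc flag prints a line for every collection, and from JDK 5 on you can also poll the GC beans from code (JDK 5 may well be newer than what Web AS 6.20 ships with, so treat this as a sketch, not a Web AS recipe):

    import java.lang.management.GarbageCollectorMXBean;
    import java.lang.management.ManagementFactory;

    public class GcWatch {
        public static void main(String[] args) throws InterruptedException {
            while (true) {
                // One bean per collector (e.g. young and old generation)
                for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                    System.out.println(gc.getName() + ": " + gc.getCollectionCount()
                            + " collections, " + gc.getCollectionTime() + " ms total");
                }
                Thread.sleep(5000); // poll every 5 seconds; growing counts mark GC activity
            }
        }
    }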

  • Memory Management Questions

    Hello All!
    I read the Memory Management Programming Guide for Cocoa - several times. But some things are still not really clear, and I would like and need to have a deeper understanding. So I hope someone can help me. The problem is that I had to get rid of several (..) memory leaks in my app, and now I am a bit confused and unsure about my skills.
    1.
    What is the difference between sayHello1, sayHello2, getHello1, getHello2, and getHello3, and which one is "better" (and why)? Please don't try to interpret the logic/sense of the methods themselves:
    - (NSString *) sayHello1 {
        return [[[NSString alloc] initWithString:@"Hello"] autorelease];
    }
    - (NSString *) sayHello2 {
        return [[[NSString alloc] initWithString:@"Hello"] retain];
    }
    - (void) getHello1 {
        NSString *hello = [self sayHello1];
        [hello release];
    }
    - (void) getHello2 {
        NSString *hello = [self sayHello2];
        [hello release];
    }
    - (void) getHello3:(NSString *)hello {
        [hello retain];
        NSLog(@"%@", hello);
        [hello release];
    }
    Concerning this, there are several questions:
    2.
    If I have to release everything I retain/alloc, why do I have a memory leak if I return an object (which was allocated with alloc and init) from a method without autorelease? The object is still in memory, but the following method won't work, which I accept. But the object is, if returned, not reachable, yet also not released. Why is it not automatically released? (I don't mean autorelease.)
    - (NSString *) sayHello1 {
        return [[NSString alloc] initWithString:@"Hello"];
    }
    - (void) getHello {
        NSString *hello = [self sayHello1]; // won't work: the object is not there, but also not released. WHERE is it?
        [hello release];
    }
    3.
    When is a delegate released if I have no variable I can use to release it? So, if I have nothing to access the delegate, like an NSURLConnection delegate?
    Should I, for example, call [self release]?
    - (void)startParser {
        Parser *parser = [[Parser alloc] init];
        [parser start];
        // should I use [parser autorelease] or [parser retain] here?
    }
    - (void)parserDidEndDocument:(NSXMLParser *)parser {
        // do some things with the parser stuff
        [self release];
    }
    4.
    *And the last question:*
    Where can I see in Instruments which elements have retain counts > 1 and are potential leaks? I was reading the Instruments guide, but there is only theoretical stuff; no practical guidance like "your app should not have/use more than x megabytes of RAM". For example: my app gets slower and slower the longer I use it -> this indicates memory leaks.

    A leak is only a leak if the reference to the object is lost.
    https://devforums.apple.com/message/189661#189661

  • Memory Management of SAP HANA

    Hi All,
    I went through one of the documents on SAP HANA memory management:
    http://www.saphana.com/servlet/JiveServlet/download/2299-2-12806/HANA_Memory_Usage_v2.pdf
    This gave me a really good understanding of the memory management of HANA, the queries for used and resident memory, and the comparison with the Overview tab numbers.
    I had a few questions, which were almost answered in other discussions, but I still have a few questions about resident and used memory.
    Used memory: code + tables + DB management.
    Resident: what is the formula or content?
    What does this picture refer to?
    In fact, the statements below are a bit confusing:
    "When memory is required for table growth or for temporary computations, the SAP HANA code obtains it from the existing memory pool. When the pool cannot satisfy the request, the HANA memory manager will request and reserve more memory from the operating system. At this point, the virtual memory size of the HANA processes grows."
    "Once a temporary computation completes or a table is dropped, the freed memory is returned to the memory manager, who recycles it to its pool, usually without informing Linux. Thus, from SAP HANA's perspective, the amount of Used Memory shrinks, but the process' virtual and resident sizes are not affected. This creates a situation where the Used Memory may even shrink to below the size of SAP HANA's resident memory, which is perfectly normal."
    My doubt here is how, at any given point in time, used memory can go below resident memory: resident memory is always loaded with what is in used memory, so when used memory itself is less, what does resident contain extra?
    Also, how do I relate HANA used memory, database resident memory, and virtual memory?
    In case of a memory issue, what should we check: the used memory of HANA or the resident memory of HANA?
    Thanks,
    Razal

    Hi all,
    I am trying to understand the memory part in a bit more detail, as I am building a complete monitoring infrastructure for HANA, and memory is at the core of HANA.
    Can you also help me understand the difference between used memory in HANA and resident memory?
    When we say that resident memory is something from the OS point of view, this is the memory of the OS which is really being used.
    So if the used memory from the HANA perspective is full, the OS still has some free memory which can be used; how is that part managed?
    When I say we are out of memory, are both the used memory from HANA and the resident memory from the OS full?
    Or is the used memory simply a calculation of code + tables + etc. from the HANA point of view?
    When I execute the query:
    SELECT SERVICE_NAME,
           ROUND(SUM(TOTAL_MEMORY_USED_SIZE/1024/1024/1024), 2) AS "Used Memory GB",
           ROUND(SUM(PHYSICAL_MEMORY_SIZE/1024/1024/1024), 2) AS "DB RESIDENT Memory GB"
    FROM SYS.M_SERVICE_MEMORY
    GROUP BY SERVICE_NAME
      SERVICE_NAME       Used Memory GB   DB RESIDENT Memory GB
    1 nameserver         6.73             1.7
    2 preprocessor       5.38             0.24
    3 indexserver        9.19             4.35
    4 scriptserver       7.52             1.83
    5 statisticsserver   8.52             3.87
    6 xsengine           7.92             1.82
    7 compileserver      5.31             0.23
    On top of all this, in the Admin view I get 17.87 as used memory and 18.89 as peak.
    How is this used memory summed up in the Admin view?
    I am using version 70.
    Thanks,
    Razal

  • 4.2 seems to have vastly improved memory management

    This may not be something most users will notice, but by using the 'MemoryFree' app I have noticed that the upgrade to 4.2 leaves much more free memory on my iPhone 4 than 4.1 did. I would guess that this would tend to boost performance as well, but I haven't noticed any perceptible change there.

    sn4p2k wrote:
    I'm not updating to 4.2 and I have no problems with performance on 4.1
    I don't think you really read what I posted!
    I didn't say I had issues with 4.1 that went away after I upgraded to 4.2. I just pointed out that there were obviously major improvements made between 4.1 and 4.2 'under the hood', and that this was demonstrated in part by the clear improvement in memory management. I stressed that I had not noticed any perceptible difference, but that the improvements were obviously there.
