Memory handling

Hi!
How does the JVM handle memory EXCEPT for the heap?
I have an application that runs with JDK 1.4.2 on Windows 2000. We ran a 48-hour performance test with a lot of CORBA communication and a lot of data transfer to this application.
We started the application with an initial heap size of 256 MB and also set the maximum heap size to 256 MB. Looking at Task Manager, we saw that the application consumed about 340 MB, which is OK; I presume that what I see is the sum of the heap, stack and the loaded classes. After 48 hours we had a quite different situation: we could still see the JVM consuming about 340 MB, but the total allocated memory was 1.2 GB! When I killed the application, the whole memory was freed.
What could be the possible cause of this?
br,
/Anders Jansson

> I presume that what I see is the sum of the heap, stack
> and the loaded classes.
Nope. Classes, the stack and everything else in Java go on the heap.
There is, however, stuff in the JVM that is not Java, and that doesn't go on the Java heap.
There is also Windows stuff that doesn't go on the Java heap. Try the following test:
- Open notepad
- Use task manager to look at the "memory"
- Minimize notepad
- Use task manager to look at the "memory"
On my computer when notepad is open it uses 2-10 times more memory than when minimized.
> What could be the possible cause of this?
Sometimes the cause is a memory leak.
Sometimes the cause is that Task Manager should not be used to evaluate memory usage in depth. Does your application produce an out-of-memory error? If so, you have a problem and you should get an automated profiling tool to determine where the leak occurs. If not, then you don't have a problem.
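If you want numbers you can trust more than Task Manager's, one cheap option is to log the Java heap from inside the JVM while the 48-hour test runs: a steadily growing "used" figure points at a leak in your own code, while a flat heap with a growing process size points at native memory (JNI, the CORBA runtime, Windows) instead. A minimal, JDK 1.4-compatible sketch; the class name and the one-minute interval are my own choices, and it only sees the Java heap, not native allocations:

    public class HeapLogger extends Thread {
        public HeapLogger() {
            setDaemon(true); // don't keep the JVM alive just for logging
        }

        public void run() {
            Runtime rt = Runtime.getRuntime();
            while (true) {
                long used = rt.totalMemory() - rt.freeMemory();
                System.out.println("heap used " + (used / 1024) + " KB"
                        + ", committed " + (rt.totalMemory() / 1024) + " KB"
                        + ", max " + (rt.maxMemory() / 1024) + " KB");
                try {
                    Thread.sleep(60000L); // roughly once a minute
                } catch (InterruptedException e) {
                    return;
                }
            }
        }
    }

Start it once at application startup with new HeapLogger().start(); and compare its output against what Task Manager shows for the whole process.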

Similar Messages

  • Memory Handling with multiple images

    How is memory handled by Flash Lite when we load an image from an external folder into an application?
    Basically the same image is loaded twice (once as a thumbnail, once as a full image), but by different classes. In this case, will Flash Lite treat each image instance as a separate image when allocating memory? Or, because the image is the same, even though it is loaded twice by two separate classes, will the memory be counted for one image only?
    Regards,
    Mariam

    In Adobe LiveCycle, add a new subform at the page level. Put your images there.
    You can also click Insert > New Body Page.

  • Fireworks CS5 crashes often due to buggy memory handling. Will there be a patch?

    Fireworks CS5 has a major bug that causes a memory leak or memory "cap" where the application stops functioning once it has used approximately 1.5 GB of memory. (I read that 2 GB was the cap, but I have never seen Fireworks make it to 2 GB.) I only have 10 undos set in Fireworks, so I know that's not the problem. I have searched high and low to resolve this, but it seems like Adobe just hasn't recognized this as a high priority.
    I read other discussions about the same problem and I now believe that Adobe either doesn't read the forums or they don't care. I hope this is not the case because I truly love the product. But, for the record, I shouldn't have to:
    - "Limit the number of undos to less than 100" (I have limited mine to 10, but after 6 months of trial and error I have found that to be irrelevant. The errors occur anyway). Don't encumber your users with such an archaic request. This is like a software bug from 1985.
    - "Save, shut down, and restart Fireworks every hour or so". Are you kidding? I can't believe someone on the forums even suggested this. This isn't a solution to the Fireworks bug. This is a work around that does nothing to prevent the error from occuring. Sometimes the error occurs on the very first click after opening a file in Fireworks, after a full system restart. A 5MB file takes 3 minutes to open in Fireworks on my quad-core 3Ghz machine with 8GB of memory. I don't mind that, but I shouldn't have to do it every 30 to 60 minutes. Users shouldn't have to look for "workarounds" to keep a software application running without crashing, especially such an expensive application. I think Fireworks is worth every penny, but only if it runs without crashing.
    Perhaps most importantly, I cannot - ever - open two 5MB files in Fireworks at one time. Which makes it extremely cumbersome to impossible to copy/paste content between two files. In fact, I am designing a web application in Fireworks, and I've had to split the app into approximately 15 different files because Fireworks can't handle files bigger than 6 or 7MB.
    I had the same problem with CS4 on a different machine, so I know it's not a new issue.  I have Fireworks installed on two PCs, both have quad core 3Ghz processors, one PC has 6GB of RAM, the other has 8GB. Once an hour I have to shut down what I'm working on and wait several minutes for Fireworks to reopen the file.  Sometimes Fireworks just stops responding and won't let me save my work at all. And since there is no autosave in Fireworks, I probably lose work at least once a day.
    Please, please, please address this... I love Fireworks, and I'd really like to keep using it.

    I am using Fireworks CS5 in Windows 7 to sketch a large site, and it was taking an extremely long time to do anything and seemed to crash on me once or twice every hour. I saved every couple of minutes and could at least rely on my last save being able to be reloaded. However, a few days ago, I lost a lot of work when Fireworks crashed halfway through the save and destroyed the file. At the time, my most recent backup copy that I had on another drive was a day old, so that crash was not only frustrating but hazardous to my project deadlines (needless to say, I've since implemented a more aggressive backup policy). Anyway, I searched for answers and forums about the problem, and I'm disappointed that it seems to be a common problem for a lot of folks that has been around for a very long time throughout multiple releases of the software. Memory and resources also seem to be a common factor contributing to the crashes and are also a theme in the few solution responses that I was able to find. I hope that Adobe will respond with a once-and-for-all fix / a much more efficient and stable way for Fireworks to do its magic. I typically have had very positive things to say about Adobe software and was even the one who recommended the suite for my work, but especially as "well seasoned" and *expensive* as the software is, there is no excuse for any major, well-tested component of it by now to be crashing so consistently and causing so many problems for so many people.
    The solution to the crashing that seems to work for me so far - and I hope that it works for you as well: I doubled the size of my swap/paging file to ~4GB and set the maximum to double that (~8GB). Since I did that, Fireworks has only crashed once (only once.. as if that is even acceptable considering the short time since then). It still runs fairly slowly (granted, I'm working on a large project, so I can tolerate that more), but at least I can use it again.
    Hope that helps! Best,
    -j

  • Has Photoshop CS5's Memory Handling Gotten Worse with Updates?

    Hi,
    For the last year + I've been running:
    Photoshop CS5
    Windows 7
    12 GB RAM - 7 GB allocated to Photoshop, although Windows Task Manager can show as much as 11 GB being used by Photoshop.
    Asus P6T Deluxe v2 Motherboard
    Latest nVidia and now latest ATI video card
    Around February I was working on a 2.5 GB file - 8000 x 6400 with hundreds of layers. It was handled just fine in Photoshop all day long - 12 hours.
    Recently I've been working on smaller 500 MB files. After maybe 6-7 hours of work, Photoshop reaches a point where palettes are slow to open and close. Everything seems to be hesitating. I fix the problem by restarting Photoshop.
    So I'm thinking about just upgrading to 24 GB, since that seems to work for some people. Before doing that though, I'm wondering if there are any other solutions I've missed.
    Thanks for your help!

    Another thing:
    If I hit Edit > Purge > All, my system isn't affected at all. The RAM amount used by Photoshop stays exactly the same.
    Basically it looks like the RAM usage just keeps building until Photoshop needs to be restarted. I even reinstalled Photoshop to make sure no hidden plugins, etc. were installed. It didn't change the memory usage.
    I did find some other threads in the forum on this RAM allocation topic - it sounds like it's a mystery problem with no solutions yet.

  • Memory handling and garbage collection?

    Sorry, these are the correct snippets of code, of course:
     public class Test {
          public static void main(String[] args) {
               byte[] b = new byte[10000000];
               b = null;
               while (true) {}
          }
     }

     public class Test {
          public static void main(String[] args) {
               new Test();
          }

          public Test() {
               byte[] b = new byte[10000000];
               b = null;
               zzz();
          }

          public synchronized void zzz() {
               try {
                    wait();
               } catch (Throwable t) {}
          }
     }

    Oh god this is all messed up... Original message then:
    Hi all!
    I'm just interested in the way that Java handles garbage and frees memory, as I have seen some problems with this in the chat server I'm building. I was under the impression that you could remove a reference to something and the memory allocated by it would automatically free up.
    I wrote a stupid little test program that allocates a 10MB byte array and then immediately removes the reference to it. Using the Windows Task Manager I just compared the memory usage when allocating the huge array, to the usage when not. When using the array my program eats about 15MB of memory, while the amount when not using the array is about 5MB. So it's obvious that no memory at all is freed when I remove the reference to that array.
     public class Test {
          public static void main(String[] args) {
               byte[] b = new byte[10000000];
               b = null;
               while (true) {}
          }
     }
    Ok, so perhaps the garbage collector doesn't operate while in an endless loop. A little change to let the program get stuck in a wait() instead:
     public class Test {
          public static void main(String[] args) {
               new Test();
          }

          public Test() {
               byte[] b = new byte[10000000];
               b = null;
               zzz();
          }

          public synchronized void zzz() {
               try {
                    wait();
               } catch (Exception e) {}
          }
     }
    Unfortunately, the result is the same: the program eats 10 MB of memory too much, and it didn't free up in the few minutes I waited, anyway.
    Anyone at all have thoughts on this?
    Thanks

  • Problem with trees and memory handling.

    Hi there, I just want to know: when I create a large binary tree and then re-"point" the root pointer to another node somewhere below in that same tree (say, a terminal node), will the "upper" parts of the tree (everything above the new root node) be removed/deleted from memory? Please explain your answers.
    Thank you.

    > If I changed the root to B, will A and its children be
    > deleted from memory?
    If you do root = B, AND if nothing else is referring to A or C or anything else beneath them, then A, C, and all of C's children will become eligible for garbage collection.
    Whether the memory actually gets cleaned up is something you can't really predict or control, but it doesn't really matter. If it's needed, it will get cleaned up. If it's not needed, it may or may not get cleaned up, but your program won't care.
    So, in short, yes, for all intents and purposes, A's, C's, and C's descendants' memory is released when you re-root to B.
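    A tiny sketch of that idea (the Node class and the tree shape are made up for illustration, not taken from the original question):

     class Node {
          Object value;
          Node left, right; // children are reachable only through their parent
     }

     public class ReRoot {
          public static void main(String[] args) {
               Node a = new Node();      // old root A
               a.left = new Node();      // B
               a.right = new Node();     // C (imagine C has children of its own)

               Node root = a;
               root = a.left;            // re-root the tree at B
               a = null;                 // drop the last direct reference to A

               // A, C and anything reachable only through them are now
               // unreachable, so they are eligible for garbage collection.
               // B's subtree stays live because root still refers to it.
               System.out.println(root);
          }
     }

    Whether and when the collector actually reclaims them is still up to the JVM, exactly as described above.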

  • Has AE CS6 fixed the memory handling problems on Mac?

    I'm disgusted with the poor performance of AE CS5.5 on Mac. The memory handling is primitive. Cinema4D is smooth, and just works. In AE CS5.5, I render at 50% and quarter resolution and I still can't get 30 fps. I have a Mac Pro with 32 GB RAM (yes, 32 GB RAM) and AE still stutters and crawls. Why should I dick around with memory allocations? I've tried all combinations of multiple frame renders and not. C4D just works with no dicking around with preferences (you know, unlike MS DOS...), so competent memory processing is possible on the Mac platform.
    Anyone see any improvement with CS6 on Mac?
    Any info gratefully received.

    Changing screen resolutions is not necessary once I set up a workstation. You are stretching the concept of "configuring". The configuring you mentioned earlier is doing something like grokking your /unix/-r--f-s-t\\wini.ininini.config.sys files. Such is not required on a Mac. That's the actual point of a Mac. It's the same reason that I need not know how to rewire the phone closet jumpers in order to make a long-distance phone call: it's not my job. It's the job of OS and program makers to hide such behind-the-scenes fiddling.
    The rest of your remarks amount to "dismantle your advanced workstation so it dumbs down enough so that AE works." Suggesting I use only one screen so AE works is absolute nonsense. I need 3 screens for my current project (cutting and pasting between 4 spreadsheet files). Further, every other non-Adobe program works with no problems on my current configuration: C4d, RealFlow, Vue xStream, Parallels, Dictate, and even MS Excel. Ergo, it's AE that's the problem, not my "configuration". Nor are QuickTime components the problem; nothing else has a problem with QT. Once upon a time AE worked flawlessly on my Mac. In recent years, it has become problematic. When Adobe added Almighty Flash capabilities to AE (and who in their right mind uses Flash as a source file format?), QT and AIFF stopped working in the timeline. Huh? AIFF, which pre-dates AE? QT, which works on everything else? Look, Ma, no QA or beta testing.
    I am looking for rational suggestions as to why AE doesn't work on my Mac. A third-party clipboard utility, perhaps? Reasonable. Multiple screens? Not reasonable.
    Can anyone else shed light on poor Adobe programming?
    PS: Mylenium, if you don't use a Mac, why do you leap to AE's defence and offer advice on how to trouble-shoot Macs?

  • Memory handling for big footage, wrong settings, bug?

    Hi
    I am getting huge time differences in rendering: a full-res render takes about 100 times longer than a half-res one, where I would normally expect maybe 4 times longer.
    Here's the details:
    I'm making a 30-second animation in NTSC; the material is high-resolution illustrations in .psd format with lots of layers. I parent and animate the layers in After Effects, i.e. no cel animation or sequences.
    A couple of characters with loose limbs and some backgrounds, and the comp is probably a few hundred layers nested within comps etc.
    All this is fine when previewing in half resolution, it takes a little while for the program to load all the layers for every new scene, and then it ticks along good, since it's static illustrations with position and rotation keys.
    Previewing the 30 secs in half res takes a couple of minutes tops. Totally fine.
    But when switching to full res everything goes unbelievably slow. It takes more than 1 hour 30 minutes. The exact same animation that previewed in 1-2 minutes in half res.
    I am using
    AE CS3, 8.0.2.27
    8-core Mac Pro, Leopard 10.5.3
    10 GB Ram
    Multiprocessor rendering is switched on.
    I see all the AE instances in the Activity Monitor, most of the time they're working on less than 10%.
    So I'm guessing the cache is sufficient for the half-res stuff, and for full res it has to load everything for every single frame? Still, it seems that it's even slower than that. It seems like there's some sort of bottleneck.
    Some good advice would be greatly appreciated.
    Thanks, L

    Turn off multiprocessing. Loading and unloading your hunky files may take way longer than the actual processing. For the half-res previews, they would be loaded from the cache in many situations, thus not affecting I/O speed at all. Also, your math is not really right. Depending on how an effect works, render times can go up indefinitely, e.g. if it works strictly pixel-per-pixel instead of a coherent single one-pass buffer. In any case, the res is 2^2 times as high, so a multiplier of 8 is to be expected on memory consumption and possibly processing as well... Not sure where your project goes wrong, though; that would require some more info about footage types, blend modes, effects and so on.
    Mylenium

  • Better memory handling in JAXB

    Hello everybody!!
    I'm using JAXB to process a large (4 MB) XML file, and memory usage is an issue. The thing is, after parsing the file into objects I get a lot of memory usage, and my Java profiler says that 400,000 String objects exist. So I think this happens because JAXB holds all that data in the form of objects in its structure. So, is it possible to destroy each object in the JAXB structure after I iterate over it?
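    You cannot destroy objects explicitly in Java; all you can do is drop your references to them as you iterate and let the garbage collector reclaim them. A minimal sketch of that pattern on a plain java.util.List, used here only as a stand-in for whatever structure JAXB built (the data is invented):

     import java.util.ArrayList;
     import java.util.Iterator;
     import java.util.List;

     public class ReleaseWhileIterating {
          public static void main(String[] args) {
               // Stand-in for the objects unmarshalled from the 4 MB file.
               List records = new ArrayList();
               for (int i = 0; i < 400000; i++) {
                    records.add("record-" + i);
               }

               for (Iterator it = records.iterator(); it.hasNext();) {
                    String rec = (String) it.next();
                    // ... do the real per-record work with 'rec' here ...
                    it.remove(); // drop the reference so this element can be collected
               }
               // records is now empty; the Strings become eligible for
               // collection even while the program keeps running.
          }
     }

    Whether this actually lowers the footprint depends on whether anything else (the JAXB-bound parent objects, for example) still holds references to the same elements.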

    Hello Raj,
    I have done the following Steps:
    1. I created a VI with the following code:
    I set the dimension size to 10,000,000.
    After running it, I clicked on Edit - Make Current Values Default and saved it. The VI Properties then showed:
    2. I closed LabVIEW, restarted it and checked its memory usage:
    3. I opened my VI and checked LabVIEW's memory usage again:
    The difference is somewhere around 310 MB, which is twice the amount of memory stated in the Memory Usage window of the VI. So LabVIEW seems to buffer the data in two different places.
    4. I created another blank VI, placed my first VI on the block diagram, saved it and closed LabVIEW again.
    5. After starting LabVIEW, the Windows Task Manager shows a memory usage of 75,132 KB, and after loading the super-VI 387,408 KB; the difference is 312 MB.
    This shows there is no difference between loading the VI itself and loading it as a subVI.
    P.S. Very interesting is the file size of the VI itself: it is only 165 KB on the hard disk. That lets me assume that LabVIEW compresses the data when saving the VI to disk.
    So I altered the VI to the following code:
    I let it run, saved the current values as default and saved the VI. The Memory Usage page of the VI Properties showed me the following:
    The data is now almost incompressible, so the VI size on disk is almost the same as in memory.
    As you can see, the dimension size is 1,000,000, which is a tenth of the first VI. LabVIEW is able to calculate 10 million elements and is able to set the control's value to default with 10 million elements, but it is not able to store that VI to disk.
    This seems to be a problem with the compression algorithm of LabVIEW.
    Hope this helps,
    Greets, Dave

  • Memory handling in java

    How do you increase the amount of memory available to a Java program? Is it possible?
    I faced this question in an interview.
    Waiting for your answer...

    Type java -X on the command line. You should get this:
        -Xmixed           mixed mode execution (default)
        -Xint             interpreted mode execution only
        -Xbootclasspath:<directories and zip/jar files separated by ;>
                          set search path for bootstrap classes and resources
        -Xbootclasspath/a:<directories and zip/jar files separated by ;>
                          append to end of bootstrap class path
        -Xbootclasspath/p:<directories and zip/jar files separated by ;>
                          prepend in front of bootstrap class path
        -Xnoclassgc       disable class garbage collection
        -Xincgc           enable incremental garbage collection
        -Xloggc:<file>    log GC status to a file with time stamps
        -Xbatch           disable background compilation
        -Xms<size>        set initial Java heap size
        -Xmx<size>        set maximum Java heap size
        -Xss<size>        set java thread stack size
        -Xprof            output cpu profiling data
        -Xrunhprof[:help]|[:<option>=<value>, ...]
                          perform JVMPI heap, cpu, or monitor profiling
        -Xdebug           enable remote debugging
        -Xfuture          enable strictest checks, anticipating future default
        -Xrs              reduce use of OS signals by Java/VM (see documentation)
        -Xcheck:jni       perform additional checks for JNI functions
    The -X options are non-standard and subject to change without notice.
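    For example (MyProgram is just a placeholder class name), to start a program with a 64 MB initial heap and a 512 MB maximum heap:
        java -Xms64m -Xmx512m MyProgram
    -Xss controls the per-thread stack size in the same way. As the listing says, the -X options are non-standard, so the exact set can differ between JVMs.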

  • Is the Memory Suite thread safe?

    Hi all,
    Is the memory suite thread safe (at least when used from the Exporter context)?
    I ask because I have many threads getting and freeing memory, and I've found that I sometimes get back NULL. This, I suspect, is the problem behind all the talk in the user forum about CS6 crashing with CUDA enabled. I'm starting to suspect that there is a memory management problem when there is also a lot of memory allocation and freeing going on by the CUDA driver. It seems that the faster the nVidia card, the more likely it is to crash. That would suggest the CUDA driver (i.e. the code that manages the scheduling of the CUDA kernels) is in some way coupled to the memory use by Adobe or by Windows alloc|free too.
    I replaced the memory functions with _aligned_malloc|free and it seems far more reliable. Maybe it's because the OS malloc|free are thread safe, or maybe it's because it's pulling from a different pool of memory (vs the Memory Suite's pool or the CUDA pool).
    comments?
    Edward

    Zac Lam wrote:
    The Memory Suite does pull from a specific memory pool that is set based on the user-specified Memory settings in the Preferences.  If you use standard OS calls, then you could end up allocating memory beyond the user-specified settings, whereas using the Memory Suite will help you stick to the Memory settings in the Preferences.
    When you get back NULL when allocating memory, are you hitting the upper boundaries of your memory usage?  Are you getting any error code returned from the function calls themselves?
    I am not hitting the upper memory bounds - I have several customers that have tens of GB free.
    There is no error return code from the ->NewPtr() call.
         PrMemoryPtr (*NewPtr)(csSDK_uint32 byteCount);
    A NULL pointer is how you detect a problem.
    Note that changing the size of the ->ReserveMemory() doesn't seem to make any difference as to whether you'll get a memory ptr or NULL back.
    btw my NewPtr size is either
         W x H x sizeof(PrPixelFormat_YUVA4444_32f)
         W x H x sizeof(PrPixelFormat_YUVA4444_8u)
    and this happens concurrently on as many threads as there are CPUs (e.g. 16 to 32 instances at once is pretty common).
    The more processing power the nVidia card has, the faster it seems to fall over;
    e.g. I don't see it at all on a GTS 250, but I do on a GTX 480, Quadro 4000 & 5000, and a GTX 660.
    I think there is a threading issue, and an issue with how the Memory Suite's pool interacts with the CUDA memory pool. Note that CUDA reserves (i.e. locks) memory, which can easily cause a fragmentation problem if you're not using the OS memory handler.

  • Memory Leak Acrobat 9.3.1 Batch Conversion

    I set Acrobat to the task of converting ~3.5k text documents to PDFs, using the File | Create PDF | Batch Create Multiple Files menu option.
    The interface was slow handling the list of 3500 items, but it began processing them at a rate of about 1 per second. After about 45 minutes, ~2500 were processed and an error appeared, "Out of Memory" (with the only option being to acknowledge with OK).
    When I click OK, the conversion status bar briefly reappears (hence how I can tell that approximately 2500 were processed), but disappears so quickly that I cannot click the Stop button before the "Out of Memory" error appears again.
    Checking the Task Manager, I see that Acrobat.exe *32 is using almost 1.7GB of memory.
    The largest text file to be converted is 135k. The average size is 19k.
    It appears my only options are to click OK 1,000 times or to kill the process.
    Clearly, this is a bug. Please fix as soon as possible.
    If I can provide additional information, please let me know.

    This is a user-to-user forum. If you want to report a bug to Adobe, you can
    do so here:
    https://www.adobe.com/cfusion/mmform/index.cfm?name=wishform
    However, you should know that this is not a new bug. It has existed in many
    previous versions. Acrobat's memory handling functions are quite bad,
    especially in batch processes. Good luck, though...
    Edit: When this happens, aren't you able to kill the process using the Task Manager?

  • FIM: Freeing invalid memory in OCIServerAttach

    Hi,
    I have the following piece of code for making a connection to the Oracle server:
    retCode = OCIEnvCreate(envHandle, OCI_THREADED, (dvoid *)0, 0, 0, 0, (size_t)0, (dvoid **)0);
    if (retCode == OCI_SUCCESS)
        isEnvAllocated = PMTRUE;
    retCode = OCIHandleAlloc((dvoid *) envHandle, (dvoid *) &errhp, OCI_HTYPE_ERROR, (size_t) 0, (dvoid **) 0);
    // server contexts
    retCode = OCIHandleAlloc((dvoid *) envHandle, (dvoid *) &srvhp, OCI_HTYPE_SERVER, (size_t) 0, (dvoid **) 0);
    retCode = OCIHandleAlloc((dvoid *) envHandle, (dvoid *) &svchp, OCI_HTYPE_SVCCTX, (size_t) 0, (dvoid **) 0);
    pError = checkError(*envHandle, errhp, OCIServerAttach(srvhp, errhp, (ptext *)dbName, strlen(dbName), 0));
    Purify is reporting FIM: Freeing invalid memory in LocalFree {1 occurrence} and the stack trace is -
    [E] FIM: Freeing invalid memory in LocalFree {1 occurrence}
    Address 0x001430f8 points into a HeapAlloc'd block in unallocated region of the default heap
    Location of free attempt
    LocalFree [KERNEL32.dll]
    ??? [security.dll ip=0x76e71a7f]
    AcquireCredentialsHandleA [security.dll]
    naunts5 [orannts8.dll]
    naunts [orannts8.dll]
    sntseltst [oran8.dll]
    naconnect [oran8.dll]
    naconnect [oran8.dll]
    naconnect [oran8.dll]
    nsmore2recv [oran8.dll]
    nsmore2recv [oran8.dll]
    nscall [oran8.dll]
    niotns [oran8.dll]
    osncon [oran8.dll]
    xaolog [OraClient8.Dll]
    xaolog [OraClient8.Dll]
    upiah0 [OraClient8.Dll]
    kpuatch [OraClient8.Dll]
    OCIServerAttach [OraClient8.Dll]
    OCIServerAttach [OCI.dll]
    Does anyone have an idea why it is giving this error?
    Also, there is a leak associated with it, and the stack trace for that is:
    MPK: Potential memory leak of 4140 bytes from 2 blocks allocated in nsbfree
    Distribution of potentially leaked blocks
    Allocation location
    calloc [msvcrt.dll]
    nsbfree [oran8.dll]
    nsbfree [oran8.dll]
    sntseltst [oran8.dll]
    sntseltst [oran8.dll]
    nsdo [oran8.dll]
    nscall [oran8.dll]
    nscall [oran8.dll]
    niotns [oran8.dll]
    osncon [oran8.dll]
    xaolog [OraClient8.Dll]
    xaolog [OraClient8.Dll]
    upiah0 [OraClient8.Dll]
    kpuatch [OraClient8.Dll]
    OCIServerAttach [OraClient8.Dll]
    OCIServerAttach [OCI.dll]
    Is this a known leak inside OCI.dll, or is it a usage issue that can be solved by calling OCIServerAttach in a different way?
    Any help in this matter is greatly appreciated.
    Thanks
    Anil
    [email protected]

    I believe that both issues are actually the result of false positives. In general, it's not really possible to tell whether C++ code is actually leaking memory or freeing invalid memory handles. Tools like Purify, BoundChecker, etc. will generally give this sort of false positive when the code you're profiling does something 'dangerous'. I believe you can ignore both messages-- at least the Oracle ODBC driver development group did when I was there.
    Justin

  • LabVIEW memory management changes in 2009-2011?

    I'm upgrading a project that was running in LV8.6. As part of this, I need to import a customer database and fix it. The DB has no relationships in it, and the new software does, so I import the old DB, create the relationships, fixing any broken ones, and write to the new DB.
    I started getting memory crashes in the program, so I started looking at Task Manager. The LabVIEW 8.6 code on my machine will peak at 630 MB of memory when the database is fully loaded. In LabVIEW 2011, it varies: the lowest I have gotten it is 1.2 GB, but it will go up to 1.5 GB and crash. I tried LV 2010 and LV 2009 and see the same behavior.
    I thought it may be the DB toolkit, as it looks like it had some changes made to it after 8.6, but that wasn't it (I copied the LV8.6 version into 2011 and saw the same problems).  I'm pretty sure it is now a difference in how LabVIEW is handling memory in these subVIs.  I modified the code to still do the DB SELECTS, but do nothing with the data, and there is still a huge difference in memory usage.
    I have started dropping memory deallocation VIs into the subVIs and that is helping, but I still cannot get back to the LV 8.6 numbers.  The biggest savings was by dropping one in the DB toolkit's fetch subVI.
    What changed in LabVIEW 2009 to cause this change in memory handling?  Is there a way to address it?

    I created a couple of VIs which will demonstrate the issue.
    For Memory Test 1, here's the memory (according to Task Manager):

                      Pre-run    Run 1      Run 2       Run 3
      LabVIEW 8.6     55504      246060     248900      248900
      LabVIEW 2011    93120      705408     1101260     1101260

    This gives me the relative memory increase of:

                      Delta Run 1    Delta Run 2    Delta Run 3
      LabVIEW 8.6     190556         193396         193396
      LabVIEW 2011    612288         1008140        1008140
    For Memory Test 2, it's the same except I drop the array of variants:

                      Pre-run    Run 1      Run 2      Run 3
      LabVIEW 8.6     57244      89864      92060      92060
      LabVIEW 2011    90432      612348     617872     621852

    This gives us deltas of:

                      Delta Run 1    Delta Run 2    Delta Run 3
      LabVIEW 8.6     32620          34816          34816
      LabVIEW 2011    521916         527440         531420
    What I found interesting in Memory Test #1 was that LabVIEW used more memory for the second run in LV2011 before it stopped. I started with Test 1 because it more resembled what the DB toolkit was doing, since it passes out variants that I then convert. I thought maybe LabVIEW didn't store variants internally the same way any more. I dropped the indicator thinking it would make a huge difference in Memory Test 2, and it didn't make a huge difference.
    So what is happening? I see similar behavior in LV2009 and LV2010. LV2009 was the worst (significantly), LV2010 was slightly better than 2011, but still significantly worse than 8.6.
    Attachments:
    Memory Test.vi 8 KB
    Memory Test2.vi 8 KB

  • Continual Memory Issues

    Continual Error - "Can't Finish Previewing.  there Isn't enough Memory ID=-108"
    Illustrator CS5; Windows 7; 2x quad-core Xeon; 24 GB RAM.
    Only a small amount of system resources appears to be in use. Is this a bug, or do I need to modify a setting?

    My limited understanding is that memory is distributed evenly between cores.
    I could go into some detail about thread handling and memory allocation, but let's just say that's not how things work. That aside, AI's memory handling is just not efficient - whatever that could possibly mean. That 1.8 GB is a clear sign that you are approaching the overall limit of 2.37 GB or so (content and the app itself) and the app simply struggles to find any free memory at all. As Jesse suggested, seriously consider upgrading to CS6 if all those issues seem memory-related. There are other and some new annoyances in CS6, but at least being able to work at all goes a long way to improve one's mood...
    Mylenium
