Memory Usage Difference

I was checking the difference in VI memory usage when controls are placed as icons versus terminals. Surprisingly, when the controls are shown as icons, the memory usage (shown in VI Properties) is lower than when they are placed as plain terminals ("View As Icon" unchecked). Can anyone explain why this happens? I expected the opposite.
The best solution is the one you find it by yourself
Attachments:
BD_as_Icons.png ‏11 KB
BD_as_Icons_Disabled.png ‏11 KB

No, I didn't make any change; I was just switching the control terminals to icons and back. I now see the memory increase when I switch the icons back to terminals without closing the VI (confusing). I have now closed and reopened the VI, and it shows less than before; I tried both icons and terminals, and both show the same usage.

Similar Messages

  • Bizarre... Memory usage difference between rMBP15 and rMBP13

    Hey guys
    Just noticed something very bizarre. So, I have a MacBook Pro Retina 13" (2012) and a MacBook Pro Retina 15" (2014). The 13-inch is the 2.5 GHz Intel Core i5 with 8 GB of memory; the 15-inch is the base model for the current year with 16 GB of RAM.
    They both have similar apps installed (actually the 13 has slightly more, but that's irrelevant to the memory question).
    With nothing running except Dropbox on both, the 15-inch is using more memory than the 13-inch. Why? Folks with more computer knowledge may be able to enlighten me!
    Activity monitor on 15:
    Physical Memory: 16 GB
    Memory Used: 5.44 GB
    Virtual Memory: 16 GB
    Swap Used: 0 bytes
    Activity monitor on 13:
    Physical Memory: 8 GB
    Memory Used: 3.52 GB
    Virtual Memory: 8.49
    Swap Used: 22.8
    Also worth mentioning: the 15-inch's Activity Monitor shows these numbers after a more recent reboot (up and running for about 10 minutes) than the 13-inch, which had been up and running for about 6 hours.
    Why do you think this may be the case? 
    Also, kernel_task is about 740 MB on 15, and is about 530 MB on 13. 
    If someone could shed some light regarding the reason for such difference with relatively same apps running in the background (only dropbox actually), please share!  (perhaps the 15 uses more memory in general?)
    Thanks in advance!

    In theory this makes the computer's use of memory more efficient, thus increasing performance. OS X likes to grab as much memory as feasible, depending upon the amount of RAM installed, then uses the memory manager to allocate memory as it is needed. Conventionally, we used to judge whether there was enough memory by looking at how much memory was used, but OS X makes us change our understanding of memory management.
    For your second question, it's really just a matter of OS X using more memory because more is available to service the system demands. At least that's my understanding of it. I'm not a programmer so I am not an expert. But I do have some knowledge of what's going on - enough to be dangerous.

  • Memory Usage.

    Good day to you all,
    I'm using the MIRACL library for my program (www.shamus.ie); it is a C/C++ library.
    I compiled its DLL and made it run on Windows.
    I also compiled it on Linux, but I didn't use the exact same source code because compiling on Linux is different.
    My problem is that memory usage on Windows is 4 MB per client, while on Linux it is 23 MB per client.
    Is it because I compiled the Linux version wrong?
    Should Windows memory usage be that much lower than Linux memory usage?
    Should I run my app on Linux or Windows?
    Is JNI really faster on Windows?
    Thanks
    -Aldrich


  • Memory Usage And Hard Drive Activity Increase After Latest Upgrade

    I upgraded to Firefox 3.6.15 and the latest Adobe Flash Player. Then I noticed that plugin-container started to take a lot of memory, as much as Firefox itself, totaling around 600 MB. Then I saw intermittent hard drive activity slowing down my laptop and making it unresponsive for short periods of time. I have 50 tabs open. I have disabled Flash, and since then plugin-container takes very little memory and I don't see constant hard drive activity.
    I need Flash and I don't want to disable it. Is there a solution to this?

    I am seeking a solution for the same problem. When playing simple online games such as solitaire, my memory usage for the plugin container is 300-600 MB while Firefox itself may only be 80-120 MB (with several windows or tabs open, mostly text pages). If I disable the Adobe add-ons, memory usage is greatly reduced. If I understood correctly, it is possible to disable the plugin container and run Adobe via the old method, and another user thought that would solve the problem. I do not understand how it would make a difference, but if you haven't found a solution you may want to try it. See these 2 threads in the Mozilla forum:
    A question and brief explanation, and how to disable plugin container in config
    I've previously had problems with Adobe crashing Firefox. At least now, with Adobe running in the plugin container, Firefox does not crash along with it.
    I have another issue: I can sometimes still hear the music from a game after I have exited the game and closed the window. I would like to know how to release the resources used by Adobe, kind of like Firefox's RamBack.
    Well, good luck to you (and me!)

  • BackgroundMediaPlayer high memory usage

    Hi, I have a problem with a WP8.1 background task using BackgroundMediaPlayer. Even with the simplest app, which has virtually zero logic in both the task and the app, the task takes 15 MB of memory once started. When playback starts, it jumps to 21 MB, which leaves me about 4 MB (since the limit is 25 MB). This is just ridiculous. (I use MemoryManager.AppMemoryUsage to get the used memory.)
    What I find even more absurd is that when I run the app in the emulator, memory usage is 5 MB on start and 11 MB after playback starts, so I am left with 14 MB, which is OK.
    I created a simple project, if you are interested, download it here: http://bezysoftware.net/files/App13.zip It will notify you about memory usage in background task using toasts.
    Any ideas what to do? Beside rewriting everything in C++ - I already have an app ready in C#.

    Hi bezysoftware,
    It looks like the emulator performs better than the real device? I don't quite understand what you mean by "playback" start: what's the difference between task start and playback start?
    Also, sorry, I cannot reproduce the issue on my emulator; it is always 7.7 MB instead of the 11 MB you mentioned.
    I also tested with my Lumia 630; it only uses 3 MB on start and 7 MB when running, so I cannot reproduce the issue.
    I would also suggest you tell us what kind of device you are using; perhaps it is related to the test device.
    --James

  • High Eden Java Memory Usage/Garbage Collection

    Hi,
    I am trying to make sure that my ColdFusion server is optimised to the max, and to find out what the normal limits are.
    Basically, it looks like my servers can run slowly at times, but it is possible that this is caused by a very old, bloated code base.
    JRun can sometimes have very high CPU usage, so I purchased FusionReactor to see what is going on under the hood.
    Here are my current Java settings (running v6u24):
    java.args=-server -Xmx4096m -Xms4096m -XX:MaxPermSize=256m -XX:PermSize=256m -Dsun.rmi.dgc.client.gcInterval=600000 -Dsun.rmi.dgc.server.gcInterval=600000 -Dsun.io.useCanonCaches=false -XX:+UseParallelGC -Xbatch ........
    With regards Memory, the only memory that seems to be running a lot of Garbage Collection is the Eden Memory Space. It climbs to nearly 1.2GB in total just under every minute at which time it looks like GC kicks in and the usage drops to about 100MB.
    Survivor memory grows to about 80-100MB over the space of 10 minutes but drops to 0 after the scheduled full GC runs. Old Gen memory fluctuates between 225MB and 350MB with small steps (~50MB) up or down when full GC runs every 10 minutes.
    I initially had the heap set to 2 GB in total, giving about 600 MB to the Eden space. When I looked at the graphs from FusionReactor I could see that there was (minor) garbage collection about 2-3 times a minute whenever memory usage maxed out the entire 600 MB, which seemed a high frequency to my untrained eye. I then upped the memory to 4 GB in total (~1.2 GB automatically given to the Eden space) to see the difference, and saw that GC happened 1-2 times per minute.
    Is it normal in Coldfusion that the Eden memory would grow so quickly and have garbage collection run so often? i.e do these graphs look normal?
    Also should I somehow redistribute the memory available to give the Eden memory more since it seems to be where all the action is?
    Any other advice for performance improvements would be much appreciated.
    Note: These graphs are not from a period where jrun had high CPU.
    Here are the graphs:
    PS Eden Space Graph
    PS Survivor Space Graph
    PS Old Gen Graph
    PS Perm Gen Graph
    Heap Memory Graph
    Heap/Non Heap Memory Graph
    CPU Graph
    Request Average Execution Time Graph
    Request Activity Graph
    Code Cache Graph

    Hi,
    >Is it normal in Coldfusion that the Eden memory would grow so quickly and have garbage collection run so often?
    Yes normal to garbage collect Eden often. That is a minor garbage collection.
    >Also should I somehow redistribute the memory available to give the Eden memory more since it seems to be where all the action is?
    Sometimes it is good to set Eden (Eden and its two survivor spaces combined make up the New, or Young, Generation part of the JVM heap) to a smaller size. I know what you're thinking: why make it smaller when I want to make it bigger? Give less a try (sometimes less = more, and bigger is not always better) and monitor the situation. I like to use the -Xmn switch; some sources say to use other methods. Perhaps you could try java.args=-server -Xmx4096m -Xms4096m -Xmn172m etc. I had better mention: make a backup copy of jvm.config before applying changes. Having said that, now you know how you can set the size bigger if you want.
    I think the JVM is perhaps making some poor decisions in sizing the heap. With Eden growing to 1 GB and then being evacuated, not many objects are surviving and therefore not being promoted to the Old Generation. This ultimately means an object will need to be loaded into Eden again later rather than being referenced from the Old Generation part of the heap. That adds up to poor performance.
    >Any other advice for performance improvements would be much appreciated.
    You are using Parallel garbage collector. Perhaps you could enable that to run multi-threaded reducing the time duration of the garbage collections, jvm args ...-XX:+UseParallelGC -XX:ParallelGCThreads=N etc where N = CPU cores (eg quad core = 4).
    HTH, Carl.
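    For reference, a minimal sketch of what the quoted jvm.config line might look like with both of Carl's suggestions applied on a quad-core box (the -Xmn value and the thread count are the example figures from his reply, not tuned values; back up jvm.config first):

```
java.args=-server -Xmx4096m -Xms4096m -Xmn172m -XX:MaxPermSize=256m -XX:PermSize=256m -XX:+UseParallelGC -XX:ParallelGCThreads=4
```

    After any change, monitor the Eden and Old Gen graphs again before settling on a size.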

  • Itunes 11.1.5.5 memory usage windows 7

    I have a very large iTunes library.  Lots of music, movies, tv shows, etc.  It links to the AppleTV in the media room.
    Lately, since around two updates ago, iTunes' memory usage has gone all wonky. When first started it uses around 100K. Within a few hours it's over 1.5 GB and stops working. It happens faster if we're using the AppleTV.
    It's becoming really, really frustrating, to the point where my wife has just quit using it because when iTunes locks up AppleTV stops working.
    I've seen this issue crop up elsewhere on the forums, but have yet to see a real solution.
    Turning off album view or poster view makes no difference. This happens no matter what screen iTunes is displaying.
    MH

    Thanks for your help. I started iTunes in safe mode and it still closed down after approximately an hour. There are no add-ons or plug-ins installed. I had previously checked safe mode and also uninstalled all Apple software in the order that one of the help pages indicated, then redownloaded and installed iTunes; it made no difference.
    After opening, iTunes works fine with no problems. What baffles me is that it then stops working even if the software is not active at the time.

  • Linux memory usage

    Hey all,
    I have a 1.6 GHz desktop running with 128 MB of memory and a Core 2 Duo laptop with 1 GB of memory. Both machines are running Arch Linux and Fluxbox. At boot, both machines are using roughly the same amount of memory, around 26-30 MB, but as soon as Fluxbox is started the desktop is running around 40-50 MB while the laptop is sitting at 130-140 MB. Those numbers do not include buffers/cache. They both have similar configurations, so I'm curious: why is there such a drastic difference in memory usage between the systems when Fluxbox is started?
    -Vincent

    What graphics card do you use on your laptop? Any integrated graphics with shared memory by chance? Then you have your answer.

  • Memory usage: how do I free it?

    I'm developing a C/S web application; I discovered that memory usage tends to keep increasing, sometimes to the point that I have to restart the web server.
    I added the line
    System.gc();
    but this did not seem to make any difference.
    What should I do about it?

    >Is memory used by static variables and methods less likely to be released than non-static variables and methods?
    Memory used by static variables is usually never released, but it seldom amounts to more than a few kilobytes. I have never heard that objects referenced by static variables are less likely to be released.
    >How do I make sure that object references are set to null, fischert?
    If they are stored in your own classes, just set them to 'null'. If they are stored in collections (maps, vectors, lists, etc.) then there is a remove() method. In other cases there are native resources, as MsTingle wrote. It depends on the situation.
    >Is there anything like "delete" in C++ that explicitly terminates an object and frees the memory used by it?
    No, this is not required.
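    A minimal Java sketch of the two points above (clearing a plain field and clearing a collection); the ReportCache class and its buffers are hypothetical, purely for illustration:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: once nothing references an object, the GC is free to reclaim it.
class ReportCache {
    private List<byte[]> pages = new ArrayList<>();
    private byte[] current;

    void load() {
        current = new byte[1024 * 1024];   // 1 MB working buffer
        pages.add(new byte[1024 * 1024]);  // one cached page
    }

    // Drop the references so the buffers become eligible for collection.
    void release() {
        current = null;   // clear a plain field
        pages.clear();    // for collections, use remove()/clear()
    }

    int cachedPages() {
        return pages.size();
    }
}
```

    Once release() has run, nothing reachable refers to the buffers, so the garbage collector may reclaim them on its next pass; there is still no way to force collection at a specific moment.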

  • SQL Memory Usage

    Hello
    I am updating some items with the DTW; 9,000 items in total. I have made 9 files of 1,000 records each, and I am just updating the picturname field. When I start to upload the files, SQL Server memory usage starts to grow more and more. By the time I try to upload the third file, memory has grown too much, and the DTW says the file updated OK but it doesn't actually update the database. So I need to restart the SQL service (mssqlserver) and do it again. Sometimes that doesn't work, so I need to restart the server.
    Any ideas??
    Jacobo

    hi,
    Check this information about DI API memory consumption in the SDK FAQ wiki:
    https://www.sdn.sap.com/irj/scn/wiki?path=/display/b1/faq_sdk
    DI API
    Memory consumption
    New connection mechanism in 2007 vs. classical
    1. In the old method the DI API was loaded into the same process with the Add-On and was an "actual part" of it, so calls to the DI-API were very quick and direct
    2. In the new method there is one common DI-API which is part of the Core B1. All the Add-Ons will use the same DI-API IF (!!!) they work in the new method. The calls to the DI-API have now to "go between" two processes. This means that they go through more processing and although it's all in the same machine and no actual communication (i.e., network traffic) is actually happening, the system's CPU and memory are "working harder".
    The impact is extremely small on an individual call level, but for an Add-On that makes a large amount of calls this difference accumulates...
    There is no huge additional CPU or Memory consumption. Most of the impact is on the Response Time level. Some of it is CPU consumption and some of it is Context Switch waiting.
    3. This trade-off between Memory consumption and Response Time is actually another reason why R&D thought it's a good idea to leave the new method to be optional based on a decision from the developer.
    Jeyakanthan

  • 0.5 MB program has 832 MB memory usage even before it is run

    I just uninstalled and re-installed LabVIEW 2011. I was hoping this would do something to lower the memory footprint. Alas, it did not make any difference. This is what I see:
    - When I launch LabVIEW, it uses 94 MB even before I have opened a single VI, i.e. only the front end uses this much memory.
    - When I open the VI (the size of the .vi file is 549 KB), memory usage goes up to 836.4 MB. This is before I have even run the program.
    - Finally, when I run the program, memory usage jumps to 1.17 GB.
    I find this memory usage grotesquely high. Can anyone please shed some light on what can be done? I have uploaded the VI. It's a rather simple VI.
    Thanks,
    Neil
    Attachments:
    IP QL to charge - from img.vi ‏446 KB

    nbf wrote:
    1) If I understand correctly, what you are saying is that when I open the VI, the data structures associated with the intensity charts and waveform graphs are initialized to default values (e.g. 0), so there is memory usage even before the program is run and anything displayed, right?
    Yes, a huge array still takes a lot of memory, even though it compresses well for storage inside the VI. Once the VI is ready to run, these arrays need to be allocated in memory.
    nbf wrote:
    2) Is there an alternative to using the value property node that doesn't duplicate the memory content? (e.g. something like pointer or reference in C++?)
    One possibility is data value references. You can open a DVR and fill it inside the subVI, then extract slices later as needed. If done properly, you can also keep the 2D data in a shift register (yes, please learn about them!).
    nbf wrote:
    4) Regarding the clearing of the indicators, could you please tell me how to do that, without removing the color ramp settings?
    The color ramps can be set programmatically using property nodes based on array min&max and you can assign colors for the ramp values at will.
    nbf wrote:
    5) You are correct that I am displaying 4000x2000 array data in an intensity graph with 400x200 pixels. But then I change the min/max of the charts/graphs to zoom into the region of interest; I can identify the region of interest only after displaying the full 4000x2000 array of data. I don't know if there is a way around this.
    Do you really need to show the 2D data in three different transforms? Maybe one intensity graph is sufficient. You can apply the other transforms to the extracted slices with <<1% of the computing effort. What are the range and resolution of your data? Do you really need DBL, or is a smaller representation sufficient (SGL, I16, etc.)? From looking at the subVI, the raw data is U16. The subVI also involves a coercion to DBL, requiring another data copy in memory. When running, make absolutely sure that the subVI front panel is closed, else additional memory is used for the huge array indicator there. For better performance, you might want to flatten the subVI into the main diagram or at least inline it.
    nbf wrote:
    6) Yes, I use the continuous run mode instead of a toplevel while loop. Is there a disadvantage to doing that?
    As Samuel Goldwyn ("A hospital is no place to be sick") might have said: "Run continuously is no way to run a VI"
    Run continuously is a debugging tool and has no place in typical use. What it basically does is restart the VI automatically whenever it completes, and as fast as the CPU allows. It is mostly useful to quickly test subVIs that don't have a toplevel loop and play with inputs to verify results. Any toplevel VI needs a while loop surrounding the core code. Period!
    So, my suggested plan of action would be:
    Use a while loop around your toplevel code
    Decide on the minimal representation needed to faithfully show the data.
    Keep a single copy of the 2D array in a DVR
    You optionally might want to undersample the data for display according to the pixel resolution and use other means of zooming and panning. For zooming, you could extract a differently sampled small array subset and adjust x0 and dx of the axes accordingly.
    Use a single 2D graph (if you need to see the various transforms, add a ring selector and transform the 2D array in place). Never show more than one intensity graph.
    See how far you get.
    LabVIEW Champion. Do more with less code and in less time.

  • ABAP Memory Usage

    Hello,
    I am trying to find the difference in memory usage between statically declaring work areas using the DATA statement and dynamically declaring work areas using data references.
    How can I track the amount of memory an ABAP program is consuming at runtime?

    Hi,
    Go to transaction SE30 and check your program.
    It will give the time / memory consumption of every line of code at runtime, line by line.
    Hope this helps you.
    Cheers,
    Venkat

  • How to meter memory usage

    Hi,
    I want to meter the memory usage of some XML parsers. Thus I do something like:
    long before = Runtime.getRuntime().freeMemory();
    // parse
    long after = Runtime.getRuntime().freeMemory();
    but the difference (before - after) always comes out negative, which means that the parsing creates memory ;-)
    First I guessed that during parsing the GC became active and collected more objects than were created. So I called the GC right before the code above and waited 2000 ms to ensure the GC did something, but the result was the same.
    I tried this with kXML 2.0, XParse-J and ASXMLP with different input sizes, but got the same result.
    What goes wrong there?
    Kay

    Where are you trying it out?
    Some phones, especially high-end phones like Series 60, allocate more heap memory dynamically while the program is running, so freeMemory() keeps getting bigger the more memory you allocate.
    Other than that, there really is no way to make sure the garbage collector runs after you call System.gc(). Implementations are free to completely ignore a call like this, even if you give them time to do it.
    shmoove
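    On desktop Java (as opposed to J2ME) a common workaround is to measure used memory (totalMemory() - freeMemory()) and call gc() a few times before each reading. This is only a sketch; gc() remains a hint, so the numbers are approximate at best:

```java
// Sketch: estimate the memory used by a task. gc() is only a hint,
// so repeated calls merely encourage a collection before each reading.
public class MemMeter {
    static long usedMemory() {
        Runtime rt = Runtime.getRuntime();
        for (int i = 0; i < 3; i++) {
            rt.gc();               // nudge the collector a few times
        }
        return rt.totalMemory() - rt.freeMemory();
    }

    public static void main(String[] args) {
        long before = usedMemory();
        int[] data = new int[1_000_000];   // stand-in for "parse"
        long after = usedMemory();
        System.out.println("approx bytes used: " + (after - before));
        if (data.length > 0) { /* keep 'data' live so it isn't collected early */ }
    }
}
```

    Even so, treat the result as an estimate; a profiler gives far more reliable per-object numbers.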

  • Some questions about the limit of memory usage of Adobe flash player in different OS & Web Browser

    Hi Adobe experts,
    I'm from HP, and now using Adobe flash player making some products about massive data displaying.
    For my target, I need to show more than 200K rows in the client web browser, using AdvancedDataGrid.
    That may need more than 200 MB to cache the data in the web browser's memory space.
    So, my questions are:
    Does Flash Player have any memory usage limit?
    Say, if we have 4 GB of physical memory in the machine, how much can Flash Player use on Windows?
    Likewise, with 4 GB of physical memory on Linux, how much can Flash Player use?
    Does it depend on the web browser?
    Say, is there any difference between different web browsers?
    If a limit exists, can we control it?
    Say, can we define some parameters in the tag in the web page to raise the limit?
    Or can we try to control the limit through the Flash Player settings in the Windows Control Panel?
    Best Regards
    Huang Haixu
    +86 18616735091
    [email protected]

    1. Yes. The Toolkit for CreateJS is an extra downloadable extension for Flash CS6. It will publish html and js files that will provide the animation instead of the swf that you would normally publish. The success or failure of the resulting javascript version of your animation is the result of working within the constraints of the toolkit. You are pretty much constrained to using the timeline in Flash for your animations. If you work only in Actionscript, then the output will be very disappointing.
    2. Edge outputs JavaScript, CSS, and HTML to give you an animation. The user interface allows you to design within the constraints of what Edge can do. You can preview, adjust, and tweak your animation as you work. Edge is an HTML5 tool. It can create HTML5 animation. It is not a replacement for Flash. It is something that you can use instead of Flash to embed animation in HTML.
    The problems and benefits of each are unique. Neither is a good substitute to learning javascript, css and html5. If you don't understand the code that is created from each of these tools, you can easily end up with huge, bloated, files that perform poorly. I'm very biased toward actually knowing what is going on. If I need to edit something, I want to be able to go into the code and make a change, not add an additional chunk of code to work around what was there. I don't use the timeline at all, and so nothing that I have will publish using CreateJS. Well, it will publish, but nothing happens because there is nothing on the timeline to translate.
    You can download a trial of Flash CS6 and try the Toolkit for yourself. Edge is still in free preview, you can get a copy at http://labs.adobe.com and try it to see how it works.
    Also, if you're not using the Greensock Animation Platform with Flash, have a look at that. It has recently been extended to provide Javascript analogues for most of the Libraries. http://www.greensock.com/

  • Memory usage of ApplyLogOnInfo and TestConnectivity

    Hi,
    Our application is coded in C++ (VS 2005)  and we access the Crystal Report 2008 API using .NET (CR 2008 SP1). In the application, when a report needs to be printed, it will
            Load the report
            Login to the database
            Set the parameters
            Print the report
            Close the report
            Perform garbage collection
    One of our customers has a very large report (.rpt > 2 MB), which has many subreports, and each subreport accesses a number of tables. While observing the memory usage in Task Manager on Windows 2008, I can see that loading the report itself eats up about 100 MB, then logging in to the database consumes another 400 MB. When the report is closed, the memory drops back a little and stays at around 500 MB. Re-running the same report, the memory goes up by another 200 MB, mainly during the login step, then stays at around 700 MB for subsequent print requests.
    For reports which are small (< 100 KB) and do not have a lot of subreports, memory usage is around 150 MB. I am wondering why the login step uses so much memory, and why it does not get freed up when the report is closed before the second report request.
    The following is the code for the login step:
     void CRptWriterIntf::SetDBConnectionInfo()
     {
         m_TableLogOnInfo = gcnew TableLogOnInfo;
         m_TableLogOnInfo->ConnectionInfo->ServerName   = gcnew System::String(m_serverName.getBuffer());
         m_TableLogOnInfo->ConnectionInfo->DatabaseName = gcnew System::String(m_dbName.getBuffer());
         m_TableLogOnInfo->ConnectionInfo->UserID       = gcnew System::String(m_dbUser.getBuffer());
         m_TableLogOnInfo->ConnectionInfo->Password     = gcnew System::String(m_dbPass.getBuffer());
     }
     void CRptWriterIntf::ReportLogon(ReportDocument^ reportDocument)
     {
         SetReportLogonInfo(reportDocument);
         for (int i = 0; i < reportDocument->Subreports->Count; i++)
         {
             ReportDocument^ subReport = reportDocument->Subreports[i];
             SetReportLogonInfo(subReport);
         }
     }
     void CRptWriterIntf::SetReportLogonInfo(ReportDocument^ reportDocument)
     {
         for (int i = 0; i < reportDocument->Database->Tables->Count; i++)
         {
             Table^ table = reportDocument->Database->Tables[i];
             table->ApplyLogOnInfo(m_TableLogOnInfo);
             if (!table->TestConnectivity())
             {
                 LOG_ERROR();
                 CleanEngine(reportDocument);
                 throw gcnew System::Exception("report table connectivity failed");
             }
         }
     }
    Thanks
    Kin H Chan

    Thank you for the suggestion.
    I updated my Crystal Reports 2008 to SP4, but that does not seem to make any difference in terms of memory usage.
    I cannot find any function to close the DB connection in the Crystal Reports 2008 .NET API Reference. Am I missing something?
    The Dispose method is a protected method which I cannot access directly from the ReportDocument object. Instead, it is called in the destructor of ReportDocument, according to the documentation.
    In my codes, after the report is generated, I always perform:
    Close the report by:
         reportDocument->Close()
    and
    Perform garbage collection
         System::GC::Collect();
    If I skip either of the two calls below, the report fails to access the database:
         table->ApplyLogOnInfo(m_TableLogOnInfo);
         if (!table->TestConnectivity())
    That is the reason I have them there. Is there an alternative to this approach?
    By the way, how can I attach file (e.g. screen shots, sample codes, sample reports) to my question?
    Thanks
    Kin H Chan
