Regarding normal memory usage...

My iMac:
2.8GHz Intel Core 2 Duo
2GB 667MHz DDR2 SDRAM
OS X 10.6.3
I just rebooted my iMac, opened Activity Monitor and Safari...
My current virtual memory (VM) size is 133.73GB, is this normal?
Current memory stats: Free = 800 MB, Wired = 170 MB, Active = 580 MB, Inactive = 510 MB
Thanks.

Sounds normal. If you are worried you don't have enough memory, you can upgrade your machine to 4 GB. If you choose to do so, I'd recommend OWC; they're very reputable and even offer a trade-in on your old memory.
Regards,
Roger
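Roger's "sounds normal" checks out arithmetically: the four Activity Monitor figures in the question add up to the full 2 GB of installed RAM, while the 133.73 GB VM size is reserved virtual address space rather than consumed RAM. A quick back-of-the-envelope check, using only the numbers quoted above:

```python
# Activity Monitor figures from the question, in MB
free, wired, active, inactive = 800, 170, 580, 510

physical = free + wired + active + inactive
print(physical)  # 2060 MB, i.e. roughly the installed 2 GB

# The 133.73 GB "VM size" is reserved virtual address space shared
# across all processes (memory-mapped files, shared libraries, etc.),
# so it can legitimately dwarf physical RAM.
```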

Similar Messages

  • Normal memory usage?

    Hi...
    Could anyone tell me what the normal memory usage of the server should be
    just after startup?
    I only have a few EJBs deployed, and just after startup my server memory
    jumps to 150MB, so I'm wondering if this is normal because it looks to me
    like a lot of memory...
    I used to work with another application server with the same EJBs deployed,
    and even after several days up, the server process never took more than
    75-80MB of memory. WebLogic is taking about twice this amount just after
    startup... maybe my tuning is wrong. Is there anything I could do to
    lower the memory usage?
    By the way, I'm running WLS 7.0 on a Win2K machine.
    thanks!
    Mathieu Girard
    [email protected]

    Hi Mathieu,
    Generally there is no hard rule, across all app servers, for
    how much memory should be used after startup. Why would you
    need to lower the memory usage? Do you have a specific
    problem with the numbers you're seeing?
    Regards,
    Slava Imeshev

  • Am running Firefox 7.0.1 now. Normal memory usage is 157MB, but when running Facebook (not running apps or games) it goes up to 553MB. Why?


    I encountered the same type of problem: Firefox running terribly slowly and slowing down my entire machine (Core i5 with 256GB SSD). Searching the forums, I found a couple of things about troubleshooting performance issues, one of which was '''hardware acceleration''', which is on by default. It was turned on on my PC, '''so I tried deactivating it, and it worked!'''
    So doing the exact opposite of what Mozilla support said solved the problem. It is a real pain to work with Firefox now. I'm using it because I have no choice, but I'd recommend IE and Chrome over Firefox... Whatever, the market will decide once Firefox has become too crappy...

  • Best practices for using .load() and .unload() in regards to memory usage...

    Hi,
    I'm struggling to understand this, so I'm hoping someone can explain how to further enhance the functionality of my simple unload function, or maybe just point out some best practices in unloading external content.
    The scenario is that I'm loading and unloading external SWFs into my movie (many, many times over). In order to load my external content, I am doing the following:
    Declare a global loader:
    var assetLdr:Loader = new Loader();
    Load the content using this function:
    function loadAsset(evt:String):void {
        var assetName:String = evt;
        if (assetName != null) {
            assetLdr = new Loader();
            var assetURL:String = assetName;
            var assetURLReq:URLRequest = new URLRequest(assetURL);
            assetLdr.load(assetURLReq);
            assetLdr.contentLoaderInfo.addEventListener(Event.INIT, loaded);
            assetLdr.contentLoaderInfo.addEventListener(ProgressEvent.PROGRESS, displayAssetLoaderProgress);
            function loaded(event:Event):void {
                var targetLoader:Loader = Loader(event.target.loader);
                assetWindow.addChild(targetLoader);
            }
        }
    }
    Unload the content using this function:
    function unloadAsset(evt:Loader) {
        trace("UNLOADED!");
        evt.unload();
    }
    Do the unload by calling the function via:
    unloadAsset(assetLdr);
    This all seems to work pretty well, but at the same time I am suspicious that the content is not truly unloaded, and that some remnants of my previously loaded content are still consuming memory. Given my load and unload functions, can anyone suggest any tips, tricks or pointers on what to add to my unload function to reclaim the consumed memory better than I'm doing right now, or how to make this function more efficient at clearing the memory?
    Thanks,
    ~Chipleh

    Since you use a single variable for the loader, from a GC standpoint the only thing you can add is unloadAndStop().
    Besides that, your code has several inefficiencies.
    First, you add listeners AFTER you call the load() method. Given the asynchronous character of the loading process, especially on the web, you should always call load() AFTER all the listeners are added; otherwise you subject yourself to unpredictable results and bugs that are difficult to find.
    Second, nested functions are evil. Try to NEVER use nested functions. Nested functions can easily be the cause of memory management problems.
    Third, you should strive to name variables in a manner that makes your code readable. For whatever reason you name function parameters evt, although a better way would be to give them names descriptive of the parameter.
    And, please, when you post code, indent it so that other people have an easier time going through it.
    With that said, your code should look something like this:
    function loadAsset(assetName:String):void {
         if (assetName) {
              assetLdr = new Loader();
              assetLdr.contentLoaderInfo.addEventListener(Event.INIT, loaded);
              assetLdr.contentLoaderInfo.addEventListener(ProgressEvent.PROGRESS, displayAssetLoaderProgress);
              // load() method MUST BE CALLED AFTER listeners are added
              assetLdr.load(new URLRequest(assetName));
         }
    }
    // functions should be outside of other functions
    function loaded(e:Event):void {
         var targetLoader:Loader = Loader(e.target.loader);
         assetWindow.addChild(targetLoader);
    }
    function unloadAsset(loader:Loader):void {
         trace("UNLOADED!");
         loader.unload();
         loader.unloadAndStop();
    }

  • I'm here regarding my memory usage. I checked my memory and added it up, but it doesn't match the amount I calculated; it shows about 7GB more. I don't understand where it went. Can someone please explain it to me?

    What I mean is, my iPad shows 16GB of memory used, but when I add it up myself it is not that value; I calculate 10GB. Can someone help me explain where the other 6GB went?

    Connect the iPad to your computer and look at the storage used by the various categories, as shown in the colored bar graph in iTunes.
    How does that agree with what is shown on the iPad? How large is the "Other" category?

  • Z87-G45, 80%+ Memory Usage in Task Manager

    Hey guys, I've run into another problem.
    I'm using Windows 8.1 (a fresh install) and I'm running into a high memory usage problem. I was running at 99% while idling earlier (the system was lagging hard), so I system restored to the previous day and everything was fine until now. I can see it slowly creeping up; while typing this I'm running at 84% memory usage, which seems absurdly high.
    I'm using 8GB of G.Skill RAM, so I tried to follow the sticky above, but my BIOS would not let me change anything under the "Advanced DRAM settings" (not sure what that is either).
    Anyone got a quick fix for this problem?

    Just updated my Killer network drivers because I saw the sticky about people having problems with them.
    This dropped my RAM usage down to 18% while typing this at this very moment. I'll let you know if it creeps up again.
    My next question, I guess, is: what is normal memory usage on Windows 8.1?

  • IPS shows memory usage of 80%, is that normal?

    Hi there.
    I have two 5525-X's configured as active/standby and both IPS modules configured with defaults. There is no internet connection to them and no traffic passing through, but the IPS shows memory usage of 80%. Is that normal?

    Could be, every environment is different when it comes to IPS. It all depends on what signatures you have configured and tuned.
    Sent from Cisco Technical Support Android App

  • Are there any benefits of static declarations regarding memory usage?

    Hi,
    I am new to Java programming and I am currently working on code optimization to improve application performance and reduce runtime memory usage, so I would appreciate tips on performance improvement and memory consumption.
    Is static method declaration helpful in reducing memory usage and improving performance?
    Please reply.
    Thanks and Regards
    Dagadu Akambe

    BigDaddyLoveHandles wrote:
    georgemc wrote:
    Write good, straightforward object-oriented code that doesn't use tricks, and don't try to optimise it yourself. Seriously.
    That's exactly what Brian Goetz expressed in the article I linked to in reply #8.
    I've always wondered about the newbie obsession with optimization.
    This is not directed at the OP especially, but I've seen it happen many times: a newbie is still writing inelegant, if not tortured, code, and they get this bee in their bonnet that they have to optimize it, so they pound the code until it is beyond incomprehensible. Then they time it and it runs more slowly! Perhaps they're crafting O(n^3) sort routines, and, not knowing about things like appropriate data structures and algorithms, big-O measurement, etc., the only conclusion they're able to draw is that their code is slow because they're creating so many objects, declaring variables inside of loops, etc.

  • IDS 4235 showing 98% memory usage, is it normal?

    IDS 4235 with 4.1.5.S191 showing
    Using 908922880 out of 921522176 bytes of available memory (98% usage)
    Is it normal ?

    There is a known bug in 4.x where the reported memory usage is incorrect.
    The actual memory usage can be determined from the service account by entering the following command:
    bash-2.05a$ free
                 total       used       free     shared    buffers     cached
    Mem:       1934076    1424896     509180          0      18284    1214536
    -/+ buffers/cache:     192076    1742000
    Swap:       522072          0     522072
    The "Mem:" row, "used" column is the amount of memory (in kilobytes) that
    the "show version" command reports. However, this total includes the
    "cached" amount.
    So in the above example, the actual memory used is ( 1424896 - 1214536 ), or
    210360 KB. This is ( 210360 / 1934076 * 100 ), or 10.9% of total memory.
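The subtraction described above can be scripted; this sketch just reproduces the calculation with the sample figures quoted in this reply (it does not pull live numbers from a sensor):

```python
# Values from the sample `free` output above, in kilobytes
total = 1934076
used = 1424896    # the figure the "show version" command reports
cached = 1214536  # file cache, reclaimable on demand

# Actual usage excludes the cache, matching the reply's arithmetic
actual_used = used - cached
percent = actual_used / total * 100

print(actual_used)        # 210360 KB
print(round(percent, 1))  # 10.9 (percent of total memory)
```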

  • SA520W - High memory usage, possible fix in 2.2.0 firmware?

    As suggested by Thomas Watts, I'm starting a new thread to discuss the new SA520W firmware (2.2.0) and a possible resolution to high memory usage I'm experiencing on my network.
    My current setup is: 16Mbit DSL > SA520W > SA300-10, all with stock settings (no fancy VLANs, etc.)
    I have 4 CentOS 5/6 servers and a Windows 7 Ultimate station connected to the switch. I use CIFS to connect from the Windows station to the Linux servers and send large files. I currently notice the following behavior:
    When the file transfer starts, the Intel 1Gbit NIC is nearly saturated, hitting 115MB/sec. After a few seconds, the data transfer comes to a halt and the transfer speed drops to around 50MB/sec. If I check the memory usage before the file transfer, it is approximately 50-60% (on a fresh router reboot). Every time I send large files to other machines, the router memory consumption increases, and it does not drop after a reasonable delay. I end up with memory usage near 90%, and the only solution I have is to reboot the router in order to bring it back to 50%.
    Now, Thomas told me that this is simply a cosmetic issue and the memory is not actually 90% used. Yet, when the memory hits this threshold, I'm not able to send files at the normal LAN speeds I'm used to. Rebooting the router allows me to send data at the expected LAN speeds only ONCE (and for a few seconds).
    I would appreciate any input from Cisco engineers as well as other users who experience the same issue. I would also like to know if any related work was done in the 2.2.0 firmware, and when we can expect it to be released to users.
    Regards,
    Floren Munteanu

    Hi Tom,
    See below the answers.
    Are you currently running the 2.1.71 code?
    Yes
    Are you using IPS?
    No, the LAN is for internal use (no external users allowed)
    Are you using Protectlink services?
    No
    Hardware-wise, I did not change anything on the machines. All boxes have dual Intel EXPI9301CT NICs (LACP was planned), but I currently use single connections for sanity reasons (the disks won't allow greater speeds anyway). Previous to Cisco, I used a Netgear ProSafe router + switch which did not encounter the issues I mention. Honestly, at first I thought I was dealing with some stupid disk issues on Windows, so I ran a quick test and the stats are proper:
    > winsat disk -drive c
    > Disk  Sequential 64.0 Read                   109.62 MB/s        6.5
    > Disk  Random 16.0 Read                       2.47 MB/s          4.4
    > Responsiveness: Average IO Rate              2.12 ms/IO         6.9
    > Responsiveness: Grouped IOs                  8.34 units         7.4
    > Responsiveness: Long IOs                     5.59 units         7.7
    > Responsiveness: Overall                      46.63 units        7.1
    > Responsiveness: PenaltyFactor                0.0
    > Disk  Sequential 64.0 Write                  117.03 MB/s        6.7
    > Average Read Time with Sequential Writes     6.977 ms           5.3
    > Latency: 95th Percentile                     32.720 ms          3.0
    > Latency: Maximum                             118.231 ms         7.6
    > Average Read Time with Random Writes         13.346 ms          3.7
    > Total Run Time 00:01:39.50
    As I mentioned before, everything is pretty much stock on the router/switch settings. If you have any tips that would allow me to identify the cause, I would appreciate the input. What puzzles me is the speed drop and quick memory usage increase. It occurs 7-10 seconds after the transfer begins. It looks like the data transfer hangs for a very short period of time (less than half a second) and the transfer speed decreases from 110-115MB/sec to 50-60MB/sec. The transfer is completed at this speed. No matter how many other files I try to transfer afterwards, the speed won't go higher than 60MB/sec. If I reboot the router, I get the same cycle.

  • High Eden Java Memory Usage/Garbage Collection

    Hi,
    I am trying to make sure that my ColdFusion server is optimised to the max, and to find out what the normal limits are.
    Basically, it looks like at times my servers can run slow, but it is possible that this is caused by a very old, bloated code base.
    JRun can sometimes have very high CPU usage, so I purchased Fusion Reactor to see what is going on under the hood.
    Here are my current Java settings (running v6u24):
    java.args=-server -Xmx4096m -Xms4096m -XX:MaxPermSize=256m -XX:PermSize=256m -Dsun.rmi.dgc.client.gcInterval=600000 -Dsun.rmi.dgc.server.gcInterval=600000 -Dsun.io.useCanonCaches=false -XX:+UseParallelGC -Xbatch ........
    With regards Memory, the only memory that seems to be running a lot of Garbage Collection is the Eden Memory Space. It climbs to nearly 1.2GB in total just under every minute at which time it looks like GC kicks in and the usage drops to about 100MB.
    Survivor memory grows to about 80-100MB over the space of 10 minutes but drops to 0 after the scheduled full GC runs. Old Gen memory fluctuates between 225MB and 350MB with small steps (~50MB) up or down when full GC runs every 10 minutes.
    I had the heap set to 2GB initally in total giving about 600MB to the Eden Space. When I looked at the graphs from Fusion Reactor I could see that there was (minor) Garbage Collection about 2-3 times a minute when the memory usage maxed out the entire 600MB which seemed a high frequency to my untrained eye. I then upped the memory to 4GB in total (~1.2GB auto given to Eden space) to see the difference and saw that GC happened 1-2 times per minute.
    Is it normal in Coldfusion that the Eden memory would grow so quickly and have garbage collection run so often? i.e do these graphs look normal?
    Also should I somehow redistribute the memory available to give the Eden memory more since it seems to be where all the action is?
    Any other advice for performance improvements would be much appreciated.
    Note: These graphs are not from a period where jrun had high CPU.
    Here are the graphs:
    PS Eden Space Graph
    PS Survivor Space Graph
    PS Old Gen Graph
    PS Perm Gen Graph
    Heap Memory Graph
    Heap/Non Heap Memory Graph
    CPU Graph
    Request Average Execution Time Graph
    Request Activity Graph
    Code Cache Graph

    Hi,
    >Is it normal in Coldfusion that the Eden memory would grow so quickly and have garbage collection run so often?
    Yes, it is normal to garbage collect Eden often. That is a minor garbage collection.
    >Also should I somehow redistribute the memory available to give the Eden memory more since it seems to be where all the action is?
    Sometimes it is good to set Eden (Eden and its two Survivor Spaces combined make up the New or Young Generation part of the JVM heap) to a smaller size. I know what you're thinking: why make it less when I want to make it bigger? Give less a try (sometimes less = more, and bigger is not always better) and monitor the situation. I like to use the -Xmn switch; some sources say to use other methods. Perhaps you could try java.args=-server -Xmx4096m -Xms4096m -Xmn172m etc. I'd better mention: make a backup copy of jvm.config before applying changes. Having said that, now you know how you can set the size bigger if you want.
    I think the JVM is perhaps making some poor decisions when sizing the heap. With Eden growing to 1GB and then being evacuated, not many objects are surviving and therefore not being promoted to the Old Generation. This ultimately means an object will need to be loaded into Eden again later, rather than being referenced in the Old Generation part of the heap. That adds up to poor performance.
    >Any other advice for performance improvements would be much appreciated.
    You are using the Parallel garbage collector. Perhaps you could enable it to run multi-threaded, reducing the duration of the garbage collections: jvm args ...-XX:+UseParallelGC -XX:ParallelGCThreads=N etc., where N = CPU cores (e.g. quad core = 4).
    HTH, Carl.
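    As a rough sanity check on the suggested flags (the actual generation layout also depends on survivor ratios, so treat this as back-of-the-envelope only):

    ```python
    # Heap figures from the suggested flags, in MB
    heap_mb = 4096   # -Xmx4096m / -Xms4096m: total heap
    young_mb = 172   # -Xmn172m: Eden plus the two survivor spaces

    # Everything not given to the Young Generation is Old Generation
    old_gen_mb = heap_mb - young_mb
    print(old_gen_mb)  # 3924 MB left for the Old Generation
    ```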

  • SQL SERVER PHYSICAL MEMORY USAGE HIGH

    Hi teams,
    I am looking into high physical memory usage on one of my production SQL Servers. It is always
    around 90%. When I reboot the server it is initially fine, but by the end of the day it is at around 95 to 98% physical memory usage.
    Please advise.
    Regards,
    DBA

    This is expected/normal behaviour on a SQL Server box. Memory management is highly dynamic in SQL Server, and it will use all of the memory allocated to it. It is also important to set MAX SERVER MEMORY for the SQL Server instance. You may not need to worry about this unless you find any performance issues.
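    For reference, MAX SERVER MEMORY can be set from T-SQL with sp_configure. The 8192 MB below is purely an example value, not a recommendation; size it to leave headroom for the OS and anything else running on the box:

    ```sql
    -- Cap the memory SQL Server will take, so Windows and other
    -- processes keep some RAM. 8192 MB is an example value only.
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max server memory (MB)', 8192;
    RECONFIGURE;
    ```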
    Please mark this reply as answer if it solved your issue or vote as helpful if it helped.
     [Blog]

  • Diagnostics Workload Analysis - Java Memory Usage gives BI query input

    Dears
    I have set up diagnostics (aka root cause analysis) at a customer site, and I'm bumping into the problem that on the Java Memory Usage tab in Workload Analysis, the BI query input overview is shown.
    Sol Man 7.0 EHP1 SPS20 (ST component SP19)
    Wily Introscope 8.2.3.5
    Introscope Agent 8.2.3.5
    Diagnostics Agent 7.20
    When I click on the check button there I get the following:
    Value "JAVA MEMORY USAGE" for variable "E2E Metric Type Variable" is invalid
    I already checked multiple SAP Notes like the implementation of the latest EWA EA WA xml file for the Sol Man stack version.
    I already reactivated BI content using report CCMS_BI_SETUP_E2E and it gave no errors.
    The content is getting filled in Wily Introscope, extractors on Solution Manager are running and capturing records (>0).
    Did anyone come across this issue already?
    ERROR MESSAGE:
    Diagnosis
    Characteristic value "JAVA MEMORY USAGE" is not valid for variable E2E Metric Type Variable.
    Procedure
    Enter a valid value for the characteristic. The value help, for example, provides you with suggestions. If no information is available here, then perhaps no characteristic values exist for the characteristic.
    If the variable for 0DATE or 0CALDAY has been created and is being used as a key date for a hierarchy, check whether the hierarchies used are valid for this characteristic. The same is valid for variables that refer to the hierarchy version.
      Notification Number BRAIN 643 
    Kind regards
    Tom
    Edited by: Tom Cenens on Mar 10, 2011 2:30 PM

    Hello Paul
    I checked the guide earlier today. I also asked someone with more BI knowledge to take a look with me, but it seems the root cause analysis data fetching isn't really the same as what is normally done in BI with BI cubes, so it's hard to determine why the data fetch is not working properly.
    The extractors are running fine, I couldn't find any more errors in the diagnostics agent log files (in debug mode) and I don't find other errors for the SAP system.
    I tried reactivating the BI content but it seems to be fine (no errors). I reran the managed system setup which also works.
    One of the problems I did notice is the fact that the managed SAP systems are half virtualized. They aren't completely virtualized (no separate IP address), but they are using virtual hostnames, which also causes issues with Root Cause Analysis: I cannot install only one agent because I cannot assign it to the managed systems, and when I install one agent per SAP system I get the message that there are already agents reporting to the Enterprise Manager residing on the same host. I don't know if this could influence the data extractor; I doubt it, because in Wily the data is being fetched fine.
    The only thing that is not working at the moment is the workload analysis - java memory analysis tab. It holds the Key Performance Indicators for the J2EE engine (garbage collection %). I can see them in Wily Introscope, where they are available and fine.
    When I looked at the infocubes together with a BI team member, it seemed the infocube for daily stats on performance was getting filled properly (through RSA1) but the infocube for hourly stats wasn't getting filled properly. This is also visible in the workload analysis, data from yesterday displays fine in workload analysis overview for example but data from an hour ago doesn't.
    I do have to state the Solution Manager doesn't match the prerequisites (post processing notes are not present after SP-stack update, SLD content is not up to date) but I could not push through those changes within a short timeframe as the Solution Manager is also used for other scenarios and it would be too disruptive at this moment.
    If I can't fix it I will have to explain to the customer why some parts are not working and request them to handle the missing items so the prerequisites are met.
    One of the notes I found described a similar issue and noted it could be caused due to an old XML file structure so I updated the XML file to the latest version.
    Strangely enough, SAPOscol also threw errors in the beginning. I had the Host Agent installed and updated, and the SAPOscol service was running properly through the Host Agent as a service. The diagnostics agent tries to start SAPOscol in /usr/sap/<SID>/SMDA<instance number>/exe, which does not hold the SAPOscol executable. I suppose it's a bug from SAP? After copying SAPOscol from the Host Agent to the location of the SMD Agent, the error disappeared. Instead, the agent tries to start SAPOscol, then notices SAPOscol is already running, and writes in the log that SAPOscol is already running properly and a startup is not necessary.
    To me it comes down the point where I have little faith in the scenario if the Solution Manager and the managed SAP systems are not maintained and up to date 100%. I could open a customer message but the first advice will be to patch the Solution Manager and meet the prerequisites.
    Another pain point is the fact that if the managed SAP systems are not 100% correct in transaction SMSY, it also causes heaps of issues. Changing the SAP system there isn't a fast operation, as it can be included in numerous logical components, projects and scenarios (ChaRM), and it causes disruption to daily work.
    All in all I have mixed feelings about the implementation, I want to deliver a fully working scenario but it's near impossible due to the fact that the prerequisites are not met. I hope the customer will still be happy with what is delivered.
    I sure do hope some of these issues are handled in Solution Manager 7.1. I will certainly mail my concerns to the development team and hope they can handle some or all of them.
    Kind regards
    Tom

  • Server 2008 R2 Memory Usage

    We have 10 Server 2008 R2 servers all exhibiting the same behaviour.
    The servers run at almost 100% memory utilization. Only one has had SP1 installed, which I did this morning, and after about 1 hour the memory was maxed out. The server initially had 14GB of memory, but after installing SP1 and seeing more of the same, I decided to try installing more RAM. I put in another 8GB, and while it takes a little longer to bottom out, it still bottoms out. Perfmon is showing less than 100MB of available memory, and, as expected, performance is not good.
    These servers are strictly file servers. We work with images of hard drives, which are split into 2GB chunks. An easy way to replicate the issue is to load a hard drive image and do a re-acquisition. We do this sometimes because a drive may be acquired in the field without compression, and we will re-acquire it with compression. Once this process is started, in about 1 hour I have about 100MB of free memory.
    Using RamMap, Mapped File is using 21GB of memory. When I look at the file summary, the server is caching the 2GB image files and not letting go of them in a timely manner. I am the _only_ person accessing this server; it just gets worse when more people are.
    As noted above, performance is terrible, is this supposed to be normal?
    Server is an IBM x3550 M2 with dual quad-core CPU, 22GB RAM, OS is on a RAID 1 mirror of two 76GB 15K SAS drives, and the server has two 14TB arrays attached. OS is Windows Server 2008 R2 SP1.  Only the File Server Role is installed.
    Thanks,
    Brian

    Hi,
    I would like to confirm: do you have Exchange Server installed on the servers?
    If you do have Exchange Server installed, this behavior is normal. The Exchange store.exe grabs as much RAM on the server as it possibly can, because store.exe needs it to optimize performance. For more information, please refer to the following Microsoft TechNet blogs:
    Why is Exchange Store.exe so RAM hungry?
    http://blogs.technet.com/b/exchange/archive/2004/08/02/206012.aspx
    Understanding Exchange 2007 Memory Usage and its use of the Paging File
    http://blogs.technet.com/b/exchange/archive/2008/08/06/3406010.aspx
    If you do not have Exchange Server installed, please check Task Manager and let us know what application is using the large amount of memory. In addition, you may run Process Explorer to monitor the memory usage. You can download and install it from the following link:
    http://technet.microsoft.com/en-us/sysinternals/bb896653.aspx
    After that, please let us know the suspect.
    Regards,

  • When I run an exe built in LabVIEW 7.1, the memory usage keeps increasing

    Dear Sir/Madam
    I have built an application in LabVIEW 7.1. When I run this application and view the memory usage in Task Manager, I find that the memory usage keeps increasing.
    Please help.
    With Regards
    Ravindra Kumbhar

    Hi, Ravindra,
    It looks like you have a memory leak in your application.
    There are lots of possible reasons for memory leaks - references opened in a cycle and not closed, continuously growing arrays, memory allocation in your own DLLs, etc.
    Normally you should see the same behaviour in the development environment as well (the memory of LabVIEW.exe increases continuously).
    You should check the code which is executed repeatedly (while/for loops) for allocated but not closed resources. What you can do is the following: remove (or isolate) the parts of your code executed in cycles, then check whether the leak is still present. Step by step, you will find the place where the leak occurs.
    best regards,
    Andrey. 
