Memory leak when transferring files in Lion

Hi all, help! I'm experiencing a memory leak since upgrading to Lion when transferring shared files between my MacBook Pro and iMac. This occurs over Ethernet (wireless and wired) and FireWire 800. No other apps are open except Activity Monitor and Finder. I am trying to transfer about 100 GB of iTunes music/video files and pictures. Time Machine backups to my Time Capsule do not have this effect.
When the transfer starts, the free memory slowly decreases while the inactive memory increases. Eventually the free memory drops below 10 MB, inactive memory is approx. 5-6 GB, and wired/active is about 2 GB. Basically, as the free memory decreases, the inactive memory increases but never gets reallocated, so the Mac grinds to a halt and dies with lots of page outs, etc.
This only happens when trying to transfer files. I have tried whole folders and just small individual files and noticed the same behaviour. I am struggling to find the cause of the problem - any ideas?
Mac OS X 10.7.1
Processor: 2.53 GHz Intel Core i5
Memory: 8 GB 1067 MHz DDR3
Cheers, Kevin

Similar Messages

  • Memory leak when transferring pdf...help needed

    Dear All,
    I'm a newbie at developing Java apps. I'm making a web-based reporting application using JSP: Java 1.6.0_02, JasperReports 2.0.4, iReport 2.0.0.
    Client OS: Win XP, memory 512 MB
    Server: Tomcat 6.0
    DB server: SQL Server 2005
    Server OS: Windows Server 2003, memory 1 GB
    Report spec: PDF-based, with up to 1 million records
    Here's the code:
    String query;
    try {
        String filereport = request.getRealPath("division/accounting/template/POReport.jrxml");
        InputStream input = new FileInputStream(filereport);
        Class.forName(odbcDriver);
        Connection conn = DriverManager.getConnection(odbcURL, username, passwd);
        ResultSet rset = null;
        CallableStatement cs = null;
        query = "{ call sp_tpo_list }";
        cs = conn.prepareCall(query);
        rset = cs.executeQuery();
        JRDataSource dataSource = new JRResultSetDataSource(rset);
        JasperDesign design = JRXmlLoader.load(input);
        JasperReport report = JasperCompileManager.compileReport(design);
        JasperPrint print = JasperFillManager.fillReport(report, null, dataSource);
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        JasperExportManager.exportReportToPdfStream(print, baos);
        response.setContentType("application/pdf");
        response.setContentLength(baos.size());
        ServletOutputStream sos;
        sos = response.getOutputStream();
        baos.writeTo(sos);
        sos.flush();
        rset.close();
        cs.close();
        conn.close();
        sos.close();
        baos.close();
        input.close();
    }
    catch (FileNotFoundException fe) {}
    catch (JRException jre) {}
    catch (ClassNotFoundException cnfe) {}
    catch (SQLException sqle) {}
    catch (IOException ioe) {}
    I've already increased the heap memory in Tomcat Manager (-Xms64m, -Xmx512m).
    But when I test with 1 million records, an out-of-heap-memory error comes up.
    When I test with 500,000 records it works, but only when one client accesses it. With more than one client, the system generates an out-of-heap-memory error.
    How can this happen? How can I detect memory leaks in the program?
    And please let me know if you see something missing in the code above.
    Thank you

    I don't think you have a memory leak. I think the problem is that the XML file is 1,000,000 records long and takes up too much memory. Even if you find a way to increase the memory size, you are loading down the server too much for other people's applications to run. I suggest instead reading up on XML and learning how to read in only a few records at a time, processing them, and fetching the next set of records to process. There are two ways to parse an XML file with an XML parser: one is to parse it all and hold it in memory (DOM-style), and the other is to process one record at a time (SAX-style; an XML book explains it better).
    However, I question why you have reports that are 1,000,000 records long. End users cannot use such reports effectively (you can't scroll through 1,000,000 records). I suggest finding a way to greatly decrease the number of records in each file, such as by providing just the records a particular user needs to do his job rather than all records. For instance, put a text field on his screen to let him fetch only the records within a certain date range, as sketched below.
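    In JDBC terms, that date-range filter might look roughly like the sketch below. The table and column names are invented for illustration (the real schema is hidden behind sp_tpo_list), and it reuses the conn object from the code above:

    // Hypothetical example: fetch only the records in a user-chosen date range
    // instead of the full table. Table/column names are made up.
    String sql = "SELECT po_number, po_date, amount FROM purchase_orders"
               + " WHERE po_date BETWEEN ? AND ? ORDER BY po_date";
    PreparedStatement ps = conn.prepareStatement(sql);
    ps.setDate(1, java.sql.Date.valueOf("2008-01-01")); // would come from the user's text field
    ps.setDate(2, java.sql.Date.valueOf("2008-01-31"));
    ResultSet rs = ps.executeQuery();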
    Lastly, I suggest putting your code in a try/catch/finally block where the finally block actually closes the objects. Example:
    finally {
        try {
            if (conn != null) conn.close();
        } catch (SQLException e) { /* nothing more we can do if close fails */ }
    }
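    One more thing worth looking into, though nobody mentioned it above: JasperReports ships a "report virtualizer" that swaps filled report pages out to disk instead of holding them all in heap, which targets exactly this kind of large-report OutOfMemoryError. A minimal sketch (check the docs for your JasperReports version for the exact API):

    import java.util.HashMap;
    import java.util.Map;
    import net.sf.jasperreports.engine.JRParameter;
    import net.sf.jasperreports.engine.fill.JRFileVirtualizer;

    // Keep at most 100 filled pages in memory; swap the rest to /tmp.
    JRFileVirtualizer virtualizer = new JRFileVirtualizer(100, "/tmp");
    Map<String, Object> params = new HashMap<String, Object>();
    params.put(JRParameter.REPORT_VIRTUALIZER, virtualizer);
    JasperPrint print = JasperFillManager.fillReport(report, params, dataSource);
    // ... export to PDF as in the original code ...
    virtualizer.cleanup(); // delete the swap files when finished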

  • Pavilion p6130y: System crash (w/memory dump) when transferring files

    Someone told me it's my nVIDIA card that is the problem but I wanted to post here to see what others say.
    I am running Windows 7 64-bit.
    When I transfer large photo (JPEG) and video (MOV) files, or large numbers of those files, from a flash memory card to my portable hard drive or regular hard drive, my computer crashes (bluescreen with memory dump text), restarts, and then cannot find a "solution" to the problem.
    If it is a graphics card problem, what other kind should I consider getting that is compatible with my model? Is it easy to install? According to the nVIDIA website, I have the latest driver. I read that nVIDIA graphics cards are junk and that ATI is better. My p6130y is about 4-5 months old.
    Is there more information that is needed to troubleshoot my problem?

    @monareaves.
    My husband has this model as well; it's a computer we've owned since August of '09, and it did this in September. Two years old, probably 40 hours of usage on it, and it just completely won't boot up. Today it gave me an "insert boot disc, etc." message, but now I can't get it to do it again. That was after I'd had it unplugged for a month. I feel like this is a common occurrence. I will be checking the other forums. But regardless, it's BS.

  • I am experiencing a memory leak when I write to a file in a loop.

    I am conducting a cycling test; each cycle represents about 48 KB of memory. I collect a selectable number of cycles into a shift register. When the chosen number of cycles is reached, I write the data to text files and then clear out the shift register. Unfortunately, I do not free up any memory space when this happens. I have been able to isolate the memory leak to the file-writing function. I thought I was closing the reference when I closed the file, but apparently that is not happening. What space I free up by clearing the buffer is cancelled out by the memory required to make a reference to the file (I am guessing). This is all happening in a loop that must run as many as 500,000 times. Ideally, I would like to write as many as 1,500 records at a time. At this point the memory leak makes this almost impossible. I have ordered some more memory, but I would rather plug the leak. I have read that LabVIEW leaves a reference open to the file even after the file is closed. What can I do to clear the memory used up by leaving the reference open? Any insights would be much appreciated.

    The behaviour you describe is strange. Something to try: open a reference to the file once before entering the loop, then inside the loop use that reference to do all the writing, and finally close the reference when the loop completes. If this doesn't work, post your code in 6.0 format and I'll be glad to look at it. Mike...
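    Since LabVIEW block diagrams don't paste well as text, here is the open-once, write-many, close-once shape of that suggestion as a rough Java analogue (the file name, record format, and loop count are made up for illustration):

    import java.io.BufferedWriter;
    import java.io.FileWriter;
    import java.io.IOException;

    public class CycleLogger {
        public static void main(String[] args) throws IOException {
            // Open the file reference once, before the loop.
            BufferedWriter out = new BufferedWriter(new FileWriter("cycles.txt"));
            try {
                for (int cycle = 0; cycle < 1500; cycle++) {
                    // Reuse the same reference for every record written.
                    out.write("cycle " + cycle + " data");
                    out.newLine();
                }
            } finally {
                // Close the reference exactly once, after the loop completes.
                out.close();
            }
        }
    }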
    Certified Professional Instructor
    Certified LabVIEW Architect
    LabVIEW Champion
    "... after all, He's not a tame lion..."
    Be thinking ahead and mark your dance card for NI Week 2015 now: TS 6139 - Object Oriented First Steps

  • Is there a way to turn off using RAM as an intermediary when transferring files between 2 internal HDDs? (write cache is off on all drives)

    Right forum? First time here; none of the options seemed perfect, so I guess this applies to 'setup.' I have tried to describe verbosely what happens from the start of a file transfer to the completion of writing the file to the second hard drive.
    Win 7 64-bit Home, i7-2600K 3.4 GHz, 8 GB of RAM, 2 HDDs, 1 SSD
    I have this problem when transferring files between two internal hard drives. One is unhealthy (slow write speeds, but I'm not looking for advice to replace it, because it serves its unimportant purpose), so the unhealthy drive drops into PIO mode; that's acceptable.
    However, a little over 1 GB of any file transfer is cached in RAM during the transfer. After the file-transfer window closes, indicating the transfer is "complete" (HA!), it still has 1 GB to write from RAM, which takes about 30 minutes. This would not be a problem if it did not also earmark an additional 5 GB of RAM (never in use), leaving 1 GB or less 'free' for programs. This needlessly makes my PC sluggish, more so than a typical file transfer between two healthy drives. I have Windows write caching turned off on all drives, so this is a different setting that I can't figure out nor find after two hours of Google searches.
    Info from Task Manager and Resource Monitor (values in MB: total, cached, available, free):
    Idle estimates: 8175, 532, 7218, 6771, and the graph in Task Manager shows about 1 GB of memory in use.
    At the start of a file transfer: 8175, 2661, 6070, 4511, and ~2 GB of RAM used in the graph. No problems yet.
    However, as the transfer goes on, the free RAM value drops to less than 1 GB (1 GB normal + 1 GB temporary transfer cache + 5 GB of unused earmarked space = ~7 GB), the cached value increases, but the amount of used RAM remains relatively unchanged (2 GB during the transfer, slowly dropping to idle values as the remaining bits are written to the second hard drive). The free value is even slower to return to idle norms after the transfer finishes writing data to the second hard drive from RAM. So, it's earmarking an additional 5 GB of RAM that is completely unused during this process. *****This is my problem*****
    Is there any way to turn this function off or limit its maximum size? In addition to sluggishness, it poses a risk to data integrity from system errors/power loss, and it makes it difficult to discern the actual time of transfer completion, which makes it difficult to know when it's safe to shut down my PC (any data left in RAM is lost and the file transfer is ruined; as of now I have to use Resource Monitor and look through what's writing to disk 2, and it's sometimes easy to forget about it when the transfer window closed 20-30 minutes ago and the file is still being written to the second disk).
    Any solution would be nice, and a little extra info, like whether it can be applied to only one hard drive, would be excellent.

    Thanks for the reply.
    (Although I have an undergrad degree in computers, it's been 15 years and my vocab is terrible, so I will try my best. Keep an open mind; it's not my occupation, so I rarely have to communicate ideas regarding PCs.)
    It operates the same way regardless of the write-cache option being enabled. It's not using the 5 GB for a read/write buffer; it's merely bloating standby memory during the transfer process, at a rate similar to the write speed of the destination (in my situation).
    At this point I don't expect a solution. I've tried to look through lists of known memory leaks, but I don't have the vocabulary to be 100% certain this problem is not documented. As of now it can't affect many people: NASes on low-bandwidth networks, USB-attached storage, etc. Do bugs get forwarded from these forums? Below I outline the consistent and repeatable nature of the problem, not only on my PC but on others' PCs as well (from a 2012 forum post I found).
    I've been testing and paying a little more attention, and I can describe it better:
    Just the Facts
    Resmon memory info:
    "In Use" stays consistent at ~1 GB (the idle amount, and roughly the same when nothing else is running during a file transfer).
    "Modified" contains file transfer data (metadata?), which remains consistent at a little over 1 GB (minor fluctuations due to working as a buffer). After the file-transfer window closes, "Modified" slowly diminishes as it finishes lazy writing (I believe that's the term). I forget the idle PC amount, but after a transfer this is only 58 MB.
    "Standby": as the transfer starts, it immediately rises to ~2 GB. I'm sure this initial jump is normal. However, with a large enough transfer it will bloat to well over 5 GB, increasing at a consistent rate during the entire transfer process. This is the crux of the matter.
    "Free" will drop as far as 35-50 megabytes over time.
    As the transfer starts, "Standby" increases by an immediate chunk, then at a slow rate throughout the entire transfer process (~1 MB/s). Once metadata is no longer being written to RAM, the "Modified" RAM slowly diminishes (at ~500 KB/s, matching resmon disk activity for that file write) as it finishes lazy writing. After the file is 100% written to the destination drive, "Standby" remains a bloated figure long after.
    A 1.4 GB transfer filled 3677 MB of "Standby" by the time writing finished and modified RAM cleared. After 20 minutes, it was still bloated at 3696 MB. After 30-40 minutes it was down to 2300 MB, which is about what it jumps to immediately when a transfer starts, and it now remains at that level until I reboot.
    I notice the "Standby" memory is available to programs, but they do not operate well. E.g. a 480p trailer on IMDB.com will stop and go every 2-3 seconds (the stream buffers fine/fast); this is during the worst-case scenario of 35-50 MB of "Free" RAM. My PC isn't and never was the latest and greatest, but choppy video never happens, even with one or two resource hogs running (video processing/encoding, games, etc.).
    Conjecture Below
    I think it's a problem when one device is significantly slower at writing than the source device; this is the factor I share with others having this problem. When data is written to modified RAM and then sent to the destination, standby memory expands until it completely or nearly fills all available RAM, if the transfer size is large enough relative to how slow the destination's write speed is. Otherwise it fills up as much as the file size/write speed issue allows. The term "memory leak" is used below and may not technically be correct, but it's an apt description in layman's terms.
    I saw a similar post in these forums (link at the end). My problem is repeatable and consistent with others' reports. I wasn't sure if I should revive it with a reply; some of these online message boards (maybe not this one) are extremely picky and sensitive, lol. The world will end if an old thread revives, even for a good reason.
    I can answer some of the ancillary issues. One person (Friday, September 21, 2012 8:33 PM) mentions not being able to shut down; I assume he means being stuck on the shutdown screen. This is because lazy writing has not completed: his NAS write speed is significantly slower than reading from the source, and the last bits of data left in RAM still need to be written to the destination. Shutdown will stall for as long as needed until the data finishes writing to the destination, to prevent data loss.
    Another person (Monday, September 24, 2012 6:31 PM) mentions the rate of the leak, but the rate is more likely a function of the source's read speed relative to the destination's write speed, which explains why my standby expands at closer to a 1:1 ratio compared to his 1:100 (he said 10 MB per 1000 MB).
    We all have the exact same results/behaviour, but slightly different rates of bloating/leaking. As the file is written from RAM to the destination, standby increases; this is not a problem if read and write speeds are roughly equal (unless you're transferring terabytes, in which case I bet the problem could still rear its head). When writing lags, standby RAM gets the opportunity to bloat, with no maximum except the amount of installed RAM. The slower the write speed, the worse the problem.
    The reply on Wednesday, September 26, 2012 3:04 AM has before and after pictures of exactly what I described in "Just the Facts", specifically the resmon image showing the Memory tab.
    The KB2647452 hotfix seems to do some weird things relative to this problem. In the posts that mention applying it, after the file transfer completes it looks like the "Standby" bloat becomes "In Use" bloat; as per info from Tuesday, October 09, 2012 10:36 PM, bobmtl applied the patch in an earlier post. Compare images from earlier posts to his post on that date; it seems like a worse problem. Also, his process list indicates it's very unlikely the processes add up to the ~4 GB listed in the colour-coded bar. Where are the extra GBs coming from? Likely the same culprit that filled up "Standby" memory for me and others. Relative to this problem, it looks like the patch merely recategorizes the bloat; it just changes the title it falls under.
    Link:
    https://social.technet.microsoft.com/Forums/windows/en-US/955b600f-976d-4c09-86dc-2ff19e726baf/memory-leak-in-windows-7-64bit-when-writing-to-a-network-shared-disk?forum=w7itpronetworking

  • Computer freezes when transferring files

    Aside from a whole host of other problems I am having with Lion (on a new drive with a complete clean install, twice!!), I seem to be having a very serious one: whenever I try to transfer a file (USB drive, external drive, mounted web drive), the computer becomes inoperable during the transfer. While the computer doesn't completely freeze up, it is impossible to do any work, as clicking on a window may take 4-7 minutes for it to respond to that click.
    I sort of feel that this problem is probably what is causing all my others. The computer becomes totally unresponsive, and I often have to wait 30-50 seconds for it to catch up on what I'm doing. It is as if it cannot manage multiple tasks. Thinking that this was due to my user account, I logged into a guest account and had pretty much the exact same experience, especially when transferring files.
    Does anyone have an idea as to what I can do to fix this?

    They are usually video files, ranging from 700 MB to the very rare 2 GB. Sometimes it's 4-5 of them at a time.
    Model Name: MacBook Pro
    Model Identifier: MacBookPro5,3
    Processor Name: Intel Core 2 Duo
    Processor Speed: 2.66 GHz
    Number Of Processors: 1
    Total Number Of Cores: 2
    L2 Cache: 3 MB
    Memory: 4 GB
    Bus Speed: 1.07 GHz
    Boot ROM Version: MBP53.00AC.B03
    SMC Version (system): 1.48f2
    Serial Number (system): xxx
    Hardware UUID: xxx
    Sudden Motion Sensor:
    State: Enabled

  • Memory Leak when running Contacts

    I am having a big memory leak when running the Contacts app on a MacBook Air. It gobbles up 2 GB of RAM in just a few minutes, forcing a reboot. I have re-installed Mountain Lion 10.8.2 and it still leaks memory. Watching Activity Monitor shows the rapid increase in RAM being gobbled up by Contacts. I used MacKeeper to clear caches and also ran Cocktail, all to no avail. Any tips would be greatly appreciated.

    The size of oracle.exe is not an indication of how the Java VM GC works, so you are not comparing apples to apples. It would take too long to explain here, but in my upcoming book (see hereafter) I give a detailed explanation of the various memory areas the Java VM uses, how these are GCed, and also how you can measure their size (not all of them, though).
    In short, you want to use OracleRuntime methods such as:
    OracleRuntime.getSessionSize(); // the current size of Sessionspace
    OracleRuntime.getNewspaceSize(); // the current size of Newspace
    There are other memory areas described in the book.
    http://www.oracle.com/technology/pub/articles/mensah_dws.html
    http://www.elsevier.com/wps/find/bookdescription.cws_home/706089/description#description
    Sample chapter: http://www.oracle.com/technology/books/pdfs/mensah_ch1.pdf
    Kuassi

  • TestStand 2010 Memory Leak when calling sequence in New Thread or New Execution

    Version:  TestStand 4.5.0.310
    OS:  Windows XP
    Steps to reproduce:
    1) Unzip 2 attached sequences into this folder:  C:\New Thread Memory Leak
    2) Open "New Thread Memory Leak - Client" SEQ file in TestStand 2010
    3) Open Task Manager, click the Processes tab, sort A-Z (important), and highlight the "SeqEdit.exe" process.  Note the memory usage.
    4) Be ready to click Terminate All in TestStand after you see the memory start jumping.
    5) Run the "New Thread Memory Leak - Client" sequence.
    6) After seeing the memory consumption increase rapidly in Task Manager, press Terminate All in TestStand.
    7) Right click the "While Loop - No Wait (New Thread)" step and set Run Mode » Skip
    8) Right click the "While Loop - No Wait (New Execution)" step and set Run Mode » Normal
    9) Repeat steps 3 through 6
    I've removed all steps from the While Loop to isolate the problem.  I've also tried the other methods you'll see in the ZIP file, but all of them cause the memory leak (with the exception of the Message Popup).
    I have not installed the f1 patch, but none of the bug fixes listed appear to address this issue.  NI Applications Engineering has been able to reproduce the issue (on Windows 7) and is working on it in parallel.  That said, are we missing something?
    Any ideas?
    Certified LabVIEW Architect
    Wait for Flag / Set Flag
    Separate Views from Implementation for Strict Type Defs
    Attachments:
    New Thread Memory Leak.zip (14 KB)

    Good point, Doug.  In this case parallel sequences are being launched at the beginning of the sequential process model, but I'll keep that in mind for later.  Takeaway: be intentional about when to wait at the end of the sequence for threads to complete.
    Certified LabVIEW Architect
    Wait for Flag / Set Flag
    Separate Views from Implementation for Strict Type Defs

  • Mac Mini Mid 2011: General system slowdown when transferring files?

    Hi, I have a stock Mac Mini mid-2011 (2.5 GHz i5, 4 GB RAM, 500 GB HDD, OS 10.7.1), and I've noticed a general system slowdown when transferring files on 10.7 and the latest 10.7.1 update (from USB to HDD, from another computer on my local network to the Mini, from an external DVD to the Mini, etc.).
    I could describe the slowdown like this: mouse pointer movement suffers and is "laggy", applications take longer to start, even menus in them take longer to appear, and when writing something on the keyboard I have to wait 5-10 seconds for the words to start appearing in the app (e.g. TextEdit).
    I wanted to know if any other new Mac Mini owner has had this issue, and if there is any solution (one that would not involve buying an SSD, for example)?
    I'm not really sure if this is a hardware or system issue, but it really is getting on my nerves (I know I could just wait for the files to transfer, which is what I do now, but I don't think that should be necessary), since I also have an old MacBook from 2007 (2.0 GHz C2D, 2 GB RAM, 250 GB HDD, OS 10.6.8) and have never had this problem before.
    Thanks in advance.
    Let me know if I need to give more info on this subject.
    Edit: I forgot to mention, the funny thing is that doing a Time Machine backup to my external HDD, or transferring files from my HDD to USB, doesn't give me any kind of slowdown.

    Hi Rick, I'd see if it's related to this, which has many symptoms...
    Try this: "sudo pmset autopoweroff 0" and "sudo pmset standby 0"
    http://xlr8yourmac.com/archives/sep13/091313.html#10.8.5SleepEjectTip

  • Sound Buzz when transferring file from Trash to desktop

    I get an occasional sound buzz when transferring a file from the Trash to the desktop.
    It's a short "grounding" buzz, similar to slowly plugging in an electric guitar while the amp is on, but it's loud!
    I am just using the internal sound.

    Brett - thanks for that input, but as stated in my original post, this happens whether I do the copy within Finder or from inside Bridge.
    Your note did inspire me to try one thing, though: I unchecked my Bridge preference to export caches to folders when possible, then I went to the folder I was trying to move (on the NAS) and did Tools > Cache > Purge Cache. I found that if the folder contained only files and no other folders, then I could move the folder to my desktop. However, if the folder in question had nested folders, then I still run into the same conflict.
    Question: since this conflict does not happen when moving a folder from one computer to another computer (but only from the NAS to a computer), I'm not sure how your logic holds when you say "Bridge is trying to cache the files you are moving and creates a ".BridgeCacheT" file before the remote version is copied".
    To answer your other question, we use AFP, or at least it says "afp://server.local" when we connect. My apologies for not being an expert on this. The icon of the NAS on the desktop looks like the icons for the other computers.
    Thanks, I'm still trying to solve this. It's really become a major problem for us, as we now have to drag files individually (rather than by folder) when we want to do something with them. -- Jim

  • I purchased a new MacBook Pro, and when transferring files, iPhoto did not transfer. I have a question mark in place of its icon. My pictures are in the library, but I can't properly access iPhoto. Thank you.

    Sure, glad to help you. You will not lose any data by changing syncing from the iMac to the MacBook Pro. You have set up Time Machine, right? That's how you'd do your backup, so I was told, and how I do my backup on my Mac. You should be able to set a password for it; save it. Your stuff should be saved there. So if you want to make your MacBook Pro your primary computer, I suppose: back up your stuff with Time Machine, turn off Time Machine on the iMac, turn it on on the new MacBook Pro, select the hard drive in your Time Capsule, enter your password, and do a backup from there. It might work, and it might take a while, but it should go. As for clogging the hard drive, I can't say; it depends how much stuff you have and the hard drive's capacity. Moving syncing from your iMac to your MacBook Pro should be the same. Your phone uses iTunes to sync, so that data should be in the cloud. You can move your iTunes library to your new MacBook Pro, and you should be able to sync your phone on it. I don't know if you can move the older backups yet; maybe try asking someone else.
    This handy article from Apple explains how:
    How to move your iTunes library to a new computer - Apple Support
    Don't forget to de-authorize your iMac if you don't want to play purchased stuff there, and re-authorize your new MacBook Pro.
    Time Machine is an application and should be found in the Applications folder. It is built into OS X, so there is nothing else to buy. Double-click on it, get it going, choose the hard drive in your Time Capsule/AirPort as your Time Machine backup, and go for it. You should see a circle with an arrow at the top right of your screen (the desktop), next to the Bluetooth icon, just after the Wi-Fi and eject icons (it looks sort of like a clock face). This will do automatic backups of your stuff.

  • App Memory Leak When Opening iPhoto

    Hi everyone,
    Has anyone experienced an app memory leak when opening iPhoto?  My free memory immediately dropped from 5,000 MB to 15 MB when I opened iPhoto, and the app never opened.  If I force quit iPhoto, everything returns to normal and works fine.  I only have the iPhoto app running; I'm not sure what caused the memory leak??? 
    Looking for help. 
    Thanks
    JHML

    Actually, the new library was a test to see whether the problem occurred only with your current library or with all libraries.  The fact that switching libraries cleared up the problem is just serendipitous. 

  • Memory leak when using Threads?

    I did an experiment and noticed a memory leak when I was using threads. Here's what I did:
    ======================================
    while (true) {
        Scanner sc = new Scanner(System.in);
        String answer;
        System.out.print("Press Enter to continue...");
        answer = sc.next();
        new TestThread();
    }
    ========================================
    And TestThread is the following:
    ========================================
    import java.io.*;
    import java.net.*;

    public class TestThread extends Thread {
        public TestThread() { start(); }
        public void run() { }
    }
    =====================================
    When I open the Windows Task Manager, every time a new thread starts and stops, java.exe increases its Mem Usage. It's a memory leak!? What is going on in this situation? If I start a thread and it stops, and then I start a new thread, why does it use more memory?
    -Brian

    > Move "Scanner sc = new Scanner(System.in);" out of the loop:
    > Scanner sc = new Scanner(System.in);
    > while (true) {
    That won't matter in any meaningful way.
    Every loop iteration creates a new Scanner, but it also makes a Scanner eligible for GC, so the net memory requirement of the program is constant.
    Now, of course, it's possible that the VM won't bother GCing until 64 MB worth of Scanners have been created, but we don't care about that. If we're allowing the GC 64 MB, then we don't care how it uses it or when it cleans it up.
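    If you want to watch that happen, here is a small, self-contained sketch (not from this thread; the heap numbers are VM-dependent approximations, and System.gc() is only a hint to the VM):

    public class GcDemo {
        public static void main(String[] args) throws InterruptedException {
            Runtime rt = Runtime.getRuntime();
            // Start and join a burst of short-lived threads, as in the original experiment.
            for (int i = 0; i < 10000; i++) {
                Thread t = new Thread();
                t.start();
                t.join();
            }
            long before = rt.totalMemory() - rt.freeMemory();
            System.gc(); // request a collection; the dead threads' objects become reclaimable
            Thread.sleep(1000);
            long after = rt.totalMemory() - rt.freeMemory();
            System.out.println("used before GC: " + (before / 1024) + " KB");
            System.out.println("used after GC:  " + (after / 1024) + " KB");
        }
    }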

  • Memory Leak when TOMCAT connects to Oracle 10g RAC using JDBC Thin driver.

    We experienced a memory leak when an Oracle 10g (10.2.0.3) RAC node was evicted. The Tomcat app server connects to the Oracle 10g RAC database instances using the JDBC 10.2.0.3 thin driver.
    Anyone had similar experience?
    Any ideas? Any bugs reported/fixed?
    Thanks,
    Raj

    If you're doing XA, we absolutely do not support driver-level load balancing OR failover; use neither. For non-XA, you can use driver-level failover. For non-XA, you could also set load balancing, but it won't help, because we get connections from the driver and keep them indefinitely, so the driver never gets the chance to affect which connections the pool uses after that.

  • Photoshop CS6 memory leak when idle and nothing open

    Photoshop CS6 runs away with memory after being used and then going idle. If I open PS and leave it, it will be OK, but as soon as I open any file, the memory usage goes up (which is normal); when I close all files and hide PS, the memory stays high and never goes back down. When I close PS and re-open it (no files open), it idles at 300 MB of memory, but when I open a file, then close it, and then hide/idle PS, it rises and stays around 1.25-1.5 GB, if not more.
    I have tried Purge All, and even hid all menus, to no avail. I have also tried closing Suitcase (to eliminate any font issues) and still have the same problem. I am running PS bone stock, no extra plug-ins. 
    Any ideas on why it would be doing this would be greatly appreciated!
    My Computer:
    Photoshop 13.0.1
    MacBookPro
    OS 10.6.8
    CPU: 2.66 GHz Intel Core 2 Duo
    Mem: 4 GB 1067 MHz DDR3
    HD: 300GB (30GB Free)

    Photoshop is not supposed to free memory when you close documents; that's normal, because the memory gets reused.
    Yes, opening a file makes the memory usage go up, because space is needed for the document and its window.
    None of what you said describes a leak; it sounds like perfectly normal behavior.
