Memory Leak... Big Time

I recently purchased Elgato's EyeTV HDHomeRun and its associated software to record shows and then export the content to an Apple TV 2. The system can record two shows simultaneously and automatically export to iTunes for streaming to the Apple TV. That's where the fun begins. Recording works just fine, but when the export to iTunes begins, memory leaks, no, gushes out of my system, turning my 12 GB of RAM into 12 MB within an hour. Paging begins, finally leading to a kernel panic that shuts down my system (and loses my recordings). A restart clears the RAM, but starting the export again just wipes out all of my memory once more. I've tried repairing permissions (with the Snow Leopard install disk), but the problem persists in the days that follow. I've also purchased and installed new RAM from OWC, so I know that's not it, and I even reinstalled the EyeTV software after contacting their tech support. No other applications are running. Can anyone shed light on what is going on here? I'm considering a fresh install of 10.6.8 or moving to Lion, but I don't know whether either would make any difference. Please HELP!

It is entirely possible that Elgato's software drives the memory controller in a way that panics the machine if certain bits on the board are marginal. The fact that the issue recurs even with new RAM may indicate the problem is not the RAM but the RAM controller itself! From my C programming days: malloc is the memory-allocation function. They should be digging into their malloc calls, making sure they aren't requesting pointers into memory that doesn't exist, or wrapping every malloc call in debug routines until one of them trips on your machine.

Similar Messages

  • Memory Leak with RDC (11.5.11.1732) using SetDataSource

    I believe I have identified a 20-byte memory leak that occurs each time a VB6 application calls Report.Database.SetDataSource and passes an ADO recordset (in craxdrt.dll version 11.5.11.1732 and earlier).
    I've created a simple VB application which demonstrates the problem. The program repeatedly loads a report and optionally calls SetDataSource.
    When running the program and monitoring for leaks with DebugDiag, I observe the same 20-byte leak that I see in my live application if I enable the calls to SetDataSource. Without the calls there is no leak.
    I have uploaded the VB source code for the test application, RPT file used for the test and the reports from DebugDiag for both the leaky and non-leaky runs to [http://www.turambar.co.uk/download/crystal/CrystalTest.zip]
    The two DebugDiag reports are as follows:
    Memory_Report__Project1.exe__10212009140853486.mht - run with no calls to SetDataSource
    Memory_Report__Project1.exe__10212009140007780.mht - run with calls to SetDataSource
    Looking at the second of these reports it can be seen that the leak originates at "Project1!Form1::SDS+ec C:\CrystalTest\Form1.frm @ 89". That function contains a single line of code which calls SetDataSource.
    Our live application is using CRXI R1 RTM (which has other leaks too but those disappear when upgrading to SP4).
    For this test application I have used CRXI R2 with FixPack 5.6 and the error still manifests.
    Option Explicit
    Dim objApp As CRAXDRT.Application
    Private Sub Form_Load()
        Set objApp = New CRAXDRT.Application
    End Sub
    Private Sub Form_Unload(Cancel As Integer)
        Set objApp = Nothing
    End Sub
    Private Sub Command1_Click()
        Dim i As Long
        Command1.Enabled = False
        DoEvents
        For i = 1 To CLng(Text1.Text)
            Call Stress
        Next
        Command1.Enabled = True
    End Sub
    Sub Stress()
        Dim objRep As CRAXDRT.Report
        Dim objRec As ADODB.Recordset
        Set objRec = New ADODB.Recordset
        Set objRep = objApp.OpenReport("test.rpt")
        If Check1.Value = vbChecked Then
             Call SDS(objRep, objRec)
        End If
        Set objRep = Nothing
    End Sub
    Sub SDS(objRep As CRAXDRT.Report, objRec As ADODB.Recordset)
        Call objRep.Database.SetDataSource(objRec)
    End Sub

    I've retested using VB.NET 2005. Still using ADO recordsets to pass the data through to Crystal.
            Dim objRep As ReportDocument
            Dim objRec As ADODB.Recordset
            Dim objConn As ADODB.Connection
            objConn = New ADODB.Connection
            objConn.Open("Provider=SQLOLEDB.1;Data Source=DB-HOST;Initial Catalog=TestDB", "simon", "mypass")
            objRec = New ADODB.Recordset
            objRec.CursorLocation = ADODB.CursorLocationEnum.adUseClient
            objRec.CursorType = ADODB.CursorTypeEnum.adOpenStatic
            objRec.LockType = ADODB.LockTypeEnum.adLockBatchOptimistic
            objRec.Open("SELECT TOP 1 * FROM simon.mytable", objConn)
            objRep = New ReportDocument
            objRep.Load("test.rpt")
            If Check1.CheckState = System.Windows.Forms.CheckState.Checked Then
                objRep.SetDataSource(objRec)
            End If
            objRep.Close()
            objRep.Dispose()
            objRep = Nothing
            objRec.ActiveConnection = Nothing
            objRec.Close()
            objRec = Nothing
            objConn.Close()
            objConn = Nothing
    With the calls to SetDataSource in place DebugDiag is again reporting various leaks which do not happen when that call is not made. I'm not too hot on .NET code so I hope I'm doing all the right things to clear down before I do the leak analysis. My main calling form forces GC cleanup before I start the dump:
            GC.Collect()
            GC.WaitForPendingFinalizers()
    I'm running this on Windows Server 2003 R2 SP2 with hotfix 939527 installed. For Crystal I installed CrystalReports11_5_NET_2005.msi from the 5.6 fixpack.
    Simon

  • OBVIOUS MEMORY LEAK in iTunes 10.6.3.25 (upgraded from 32-bit to 64-bit), but the iTunes*32 process is in memory. Clicking on a song to play, or right-clicking it, adds 5 MB. Did the 32-bit version have this problem and fail to uninstall correctly, or is the 64-bit version broken too?

    OBVIOUS MEMORY LEAK
    iTunes about: 10.6.3.25
    Windows 64-bit Home on an AMD64 HP G60 laptop
    Downloaded and installed "iTunes32Setup.exe",
    saw the problem and uninstalled it.
    Downloaded and installed "iTunes64Setup.exe".
    iTunes about: 10.6.3.25
    Now I see the process "iTunes.exe*32".
    It is OK when it plays through a song list,
    but it eats up memory when I click on another song to play; apparently it doesn't release the previous player thread.
    OR,
    if you right-click on a song, the memory will blow up another 6 MB and won't release it... you can do it all day long
    -- until it runs out of memory and iTunes freezes... without any of my changes.
    Sooooo... is it the uninstaller, or is iTunes64Setup.exe mispackaged, or do both have memory problems on AMD64?

    Unfortunately I was able to reproduce the memory leak even with the toolbar turned off. To replicate the problem, I watched the memory leak each time I actively selected a new song with iTunes open. I could repeat the leak with and without Spotify and TuneUp Companion running. The only thing I noticed is that when both Spotify and iTunes were open, the iTunes leak added insult to injury: Spotify kept trying to sync with the iTunes local library, which would crash Spotify as well. But I can confirm the memory leak is in iTunes 10.6.3.25 (looking forward to 10.6.4 fixing the problem).

  • Memory leak issues persist in Safari 6.0.5 (it's about time that Apple actually fixed this)

    Safari has been notorious for its memory leaks for years, as I'm sure many of you know. I stopped using Safari as my main browser around 2010 and switched to Chrome. However, I recently had to use Safari to access some webpages, and I neglected to quit it. A few hours later, my computer slowed to a complete crawl. I was confused, because my computer never slows down like that, so I opened Activity Monitor and found the 'Safari Web Content' process using all of my available RAM, which was nearly 6 GB. Needless to say, Safari was force quit after that. I've heard that some plugins can cause this, but Flash was my only active plugin. Flash is, indeed, a heaping pile of crap that will readily eat resources as it sees fit, but this never happens to me in Chrome, and I use Flash all the time in Chrome. This was after Safari had been sitting idle for quite some time, and I imagine it would have used even more RAM if it could.

    Using 6.0.5 here without any of those issues ...
    Regardless of whether Flash is being used, from your Safari menu bar click Help > Installed Plug-ins.
    Try troubleshooting extensions and third party plug-ins.
    From your Safari menu bar click Safari > Preferences, then select the Extensions tab. If any extensions are installed, turn them off, then quit and relaunch Safari to test. If that helped, turn extensions back on, then uninstall them one at a time to test.
    If it's not an extensions issue, try troubleshooting third party plug-ins.
    Back to Safari > Preferences. This time select the Security tab. Deselect:  Allow all other plug-ins. Quit and relaunch Safari to test.
    If that made a difference, instructions for troubleshooting plugins here.
    my computer slowed to a complete crawl.
    Have you checked exactly how much disk space is available?
    Click your Apple menu icon top left in your screen. From the drop down menu click About This Mac > More Info > Storage
    Make sure there's at least 15% free disk space.
    Checked the startup disk lately?
    Launch Disk Utility located in HD > Applications > Utilities
    Select the startup disk on the left then select the First Aid tab.
    Click:  Verify Disk  (not Verify Disk Permissions)
    If the disk needs repairing, restart your Mac while holding down the Command + R keys. From there you can access the built in utilities in OS X Recovery to repair the startup disk.
    message edited by:  cs

  • Memory leak in Real-Time caused by VISA Read and Timed Loop data nodes? Doesn't make sense.

    Working with LV 8.2.1 Real-Time to develop a host of applications that monitor or emulate computers on RS-422 busses. The following screenshots were taken from an application that monitors a 200 Hz transmission. After a few hours, the PXI station would crash with an awesome array of angry messages... most implying something about a loss of memory. After much hair pulling and passing of the buck, while watching the available memory on the controller, my associate discovered that memory loss was occurring with every loop containing a VISA Read and error propagation using the Timed Loop's data nodes (see Memory Leak.jpg). He found that if he switched the error propagation to regular old-fashioned shift registers, the available memory was rock solid (a la No Memory Leak.jpg).
    Any ideas what could be causing this?  Do you see any problems with the way we code these sorts of loops?  We are always attempting to optimize the way we use memory on our time-critical applications and VISA reads and DAQmx Reads give us the most heartache as we are never able to preallocate memory for these VIs.  Any tips?
    Dan Marlow
    GDLS
    Solved!
    Attachments:
    Memory Leak.JPG ‏136 KB
    No Memory Leak.JPG ‏137 KB

    Hi thisisnotadream,
    This problem has been reported, and you seem to be reproducing exactly the conditions required to see it. It was reported to R&D (# 134314) for further investigation. There are multiple possible workarounds, one of which you have already found: wiring the error directly into the loop. Other situations that result in no memory leak are:
    1.  If the Bytes at Port property node is removed and a read simply happens every iteration, with any resulting timeouts ignored.
    2.  If the case structure is removed and the code blindly checks Bytes at Port and reads every iteration.
    3.  If the Timed Loop is turned into a While Loop.
    Thanks for the feedback!
    Regards, Stephen S.
    National Instruments
    Applications Engineering

  • Yosemite Time Machine memory leak chewing through memory?

    I installed Yosemite 10.10 on my 27" iMac (Intel Core i5) with 12 GB of RAM. After plugging in my external drive to make a backup with Time Machine, I noticed the indicator in my menu bar from Memory Clean was red and showing less than 15 MB of free memory available! I ran Memory Clean to recover some memory while the backup was in progress and it made roughly 3 GB available. Then the rapid drop in available memory began again as Time Machine continued to back up 40+ GB of data.
    When finished, the available memory was very slowly released and crept up to around 220 MB. After 5 minutes without seeing a major release of memory I then ran Memory Clean again and it made 1.96 GB free for the system to use.
    Before the Yosemite upgrade I didn't have Memory Clean installed so I don't think I ever checked to see if system memory was being chewed through in such a massive way under Mavericks. Is this common or is this yet another memory leak associated with Yosemite?
    EDIT: In case this might be helpful, the drive is a Seagate Backup Plus 2 TB USB 3.0.

    Similar machine, though with 16 GB of RAM. The leak seems to be related to a long sleep (I used to never turn it off) followed by Time Machine, so again, similar. Time Machine (to a LaCie 2 TB via USB) wanted to re-archive the whole hard drive, and via Memory Clean I could watch the RAM get eaten up. Interesting tidbit: Activity Monitor is oblivious to the leak... completely oblivious.
    Today I begin the fun part of copying the 800 GB of data and reinstalling Yosemite. (Sarcasm) That should only set me back an hour or two... sheesh.
    Of course, I cannot make any edits, NONE, in the new Notification Center. I can only get local-area weather, the stocks Apple provides, and the time in Cupertino, two time zones away.
    BTW, should you read this: we're out in the cold on Continuity, though every now and again I can see my iMac on my 4th-gen iPad. Weird.

  • [SOLVED] Big Memory Leak with Pacman - Xterm - Openbox

    Hi Everyone,
    I don't know if this is the place to post this; if not, please correct me.
    I've installed Arch Linux on a virtual machine HDD using VirtualBox on Linux Mint. Then I made a simple install with Openbox and SLiM just to run some tests before putting the whole thing on my laptop.
    So after installing SLiM and Openbox, I checked free -m and it told me the system was using 80 MB of RAM (really cool ^^). Then I started installing some apps just to test.
    I started installing LibreOffice with "sudo pacman -S libreoffice" in one xterm terminal, while in another xterm terminal I had "watch free -m" running, just to watch the RAM usage evolve...
    And then, during the download/installation, things got weird... during the download the RAM usage grew to 120 MB, and during the installation it kept growing more and more, finishing at 770 MB!
    It doesn't make any sense! But I don't really know whether this is a pacman, Openbox, or xterm problem :S...
    Can someone give me any information about that?
    Thanks in advance!
    PS: after a "sudo pacman -Scc" the RAM usage dropped to 633 MB...
    Luis Da Costa
    Last edited by aliasbody (2011-07-07 18:33:56)

    Leonid.I wrote:
    aliasbody wrote:
    I'm sorry, I'm not making myself clear - my problem isn't the usage of Swap itself. I know that when RAM reaches its maximum, it uses Swap. I know that. And that's not what I'm "complaining" about.
    The problem is: why does updating via pacman use so much RAM? I think there's a memory leak in here somewhere, and that's what I was aiming at. I'm basically asking whether it's just me or this has happened to someone else, and whether it's really a problem or has a simple explanation...
    Well, pacman's memory usage depends on the particular package in question. Installing TeX Live, for instance, takes a tremendous amount of RAM/swap compared to netcfg. Pacman does not simply copy files; it also executes scripts.
    It is not true that swap is only used when the RAM is full. The kernel decides which data in RAM is relevant, and the irrelevant data goes to swap, even if you still have 300 MB of RAM available. /proc/sys/vm/swappiness controls this logic.
    On your Atom netbook you have only 3 MB in swap... this cannot be the cause of a slowdown.
    Thank you for your answer, this makes me understand the whole thing better ^^
    I was just scared about a possible memory leak; I didn't know that about pacman.
    Thanks for everything, I will mark it as Solved then!

  • BIG memory leak in develop module 3.2RC

    I've been getting "unknown error occurred" crashes ever since I upgraded to LR3.  Today I'm doing (supposedly) a rush job and getting crashes after processing every 5 or 6 images.  So I've done some investigation.
    Here's the set up. 
    Win XP SP3.  4Gb memory installed (yes, I know XP can't address all of it)  /3Gb option NOT set.
    Intel Core2 Quad CPU, 2.4GHz
    Lightroom 3.2RC.
    Windows Task Manager open on the processes tab
    Lightroom is the ONLY significant application running on the PC (yes, there is AV software and a host of other bits in the right side of the task bar, such as printer monitors, but Task Manager shows that only Lightroom uses any significant CPU cycles, and memory used by other processes is fairly minimal).
    Folder of 296 images selected, filtered to only show the flagged images (99 of them)
    Here's the memory used BY LIGHTROOM as reported by Windows Task Manager....  (Memory figures were after CPU usage from lightroom had dropped to zero and the memory figure had become steady)
    Open Lightroom         455,816K
    Incr Brightness        457,988K
    Next Image             620,796K
    Incr Fill,Bright,Exp   647,768K
    Next Image             814,968K
    Incr Fill,Bright       828,112K
    Set rating             490,328K  (See Note)
    Next Image             646,952K
    Incr Bright            664,564K
    Next Image             848,140K
    Incr Bright,Blacks     854,784K
    Next Image           1,003,988K
    Incr Bright,Exp      1,005,800K
    Next Image           1,166,936K
    Incr Fill,Bright     1,153,431K
    Next Image           1,286,465K
    short pause... "An Unknown Error Occurred"
    Memory drops back to 1,171,380K
    Repeat after reboot of PC
    Open Lightroom         455,592K
    Incr Fill              475,124K
    Next Image             640,126K briefly then 446,704K  (See Note)
    Incr Fill,WB           467,632K
    Next Image             632,144K
    Incr Fill,Bright,WB    654,412K
    Next Image             814,136K
    Incr Bright,Fill,WB    839,540K
    Next Image           1,000,780K
    Incr Bright,Fill,WB  1,006,766K
    Next Image           1,165,644K
    Incr Fill,WB         1,169,664K
    Next Image           1,287,056K
    short pause... "An Unknown Error Occurred"
    Memory drops back to 1,169,376K
    I've done this many times and watched the same pattern each time (I've only noted the figures down twice, as shown above). It is noticeable that the memory 'loss' appears to happen at the point when I move from one image to the next. The memory lost each time is consistently around 180,000K.
    NOTE: Once on each of the runs where I was recording the memory usage I did see memory released. The first time was in response to setting a rating for the image. This is the ONLY time I have seen that happen at the same time as setting a rating, and I'm inclined to think it a coincidence. On the second run above, the memory was released after moving to the next image on one occasion. I have not found any pattern to when the occasional release of memory occurs.
    What is very consistent is that whenever the memory used by Lightroom goes over 1,200,000K, the program displays the "Unknown Error Occurred" message and either hangs or becomes so unresponsive that the only option is to shut down Lightroom and restart.
    Ian.  :-(

    Hi,
    I can confirm this problem on a Windows 7 64-bit system. With 4 GB of memory LR stays usable for a while; with 2 GB, problems start very quickly.
    Apart from that the RC is very good and integrates the much needed SONY NEX support.
    Regards

  • Getting rid of  JFrame (big memory leak)

    Hi,
    I have an app that has a main frame with a button. When you press the button, another JFrame is created. When you close this JFrame, it is not garbage collected because references to it are held by some Swing internals and also, it appears, by all the listeners that have been registered. Is there any way of getting rid of these references so I can recover the memory? This is causing a huge memory leak and the app eventually dies...
    Thanks
    Simon
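
    The listener side of this can be seen without any Swing at all. Below is a minimal sketch of the general pattern (illustrative names, not Swing's actual internals): a long-lived event source holds strong references to its registered listeners, so a listener, and anything it references such as a closed frame, stays reachable until it is explicitly deregistered.

```java
import java.lang.ref.WeakReference;
import java.util.ArrayList;
import java.util.List;

// Sketch: a long-lived event source keeps strong references to its
// listeners, so a "closed" window they belong to cannot be collected
// until the listeners are removed.
public class ListenerLeakDemo {
    interface Listener { void onEvent(); }

    private static final List<Listener> listeners = new ArrayList<>();

    static void addListener(Listener l)    { listeners.add(l); }
    static void removeListener(Listener l) { listeners.remove(l); }

    // True while the referent is still strongly reachable after a GC.
    static boolean retained(WeakReference<?> ref) {
        System.gc();
        return ref.get() != null;
    }

    // Register a listener, drop our own reference, and report whether the
    // source still keeps it alive (it does, until removeListener is called).
    static boolean retainedWhileRegistered() {
        Listener l = new Listener() { public void onEvent() { } };
        WeakReference<Listener> ref = new WeakReference<>(l);
        addListener(l);
        l = null;                       // only the source references it now
        boolean alive = retained(ref);
        removeListener(ref.get());      // the fix: deregister so it can be collected
        return alive;
    }

    public static void main(String[] args) {
        System.out.println("retained while registered: " + retainedWhileRegistered());
    }
}
```

    In Swing terms: when the secondary JFrame closes, remove the listeners you registered (and call dispose()), so nothing long-lived still points at the frame.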

    Don't know if you are using the Kunststoff LAF and a 1.4 JVM, but Kunststoff has a memory leak when using toolbars.
    I sent a mail in August 2002 to them, but it seems no bugfix release is coming out.
    Copy of the relevant part of the mail:
    While looking for a memory leak (JFrames not being garbage collected), I noticed a bug in the kunststoff 2.0.1 laf:
    The KunststoffToolBarUI class is a singleton, but the base MetalToolBarUI class is not (at least in jdk 1.4, I haven't looked at earlier versions).
    The MetalToolBarUI has several hashtables containing ui components, and the static reference in KunststoffToolBarUI is preventing the components in this hashtables from being garbage collected.
    After changing KunststoffToolBarUI.createUI to always return a new instance of KunststoffToolBarUI, the memory leak disappears and the JFrames are garbage collected.

  • Memory leak in redeploy

    I am doing some memory profiling for an application running inside OC4J standalone. When the application is redeployed to OC4J, OC4J does not release memory for the servlets or for any instances defined as class variables, e.g. private static Object s_instance. So if the application is redeployed 5 times, I see 5 instances in memory that garbage collection cannot collect, and eventually the application runs out of memory if redeployed multiple times without restarting OC4J. I am wondering if anyone has encountered this issue before?
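
    The class-variable retention described above can be reproduced outside any container. Below is a minimal sketch in plain Java (not OC4J-specific; the class and field names are illustrative): an object referenced from a static field stays strongly reachable, so the garbage collector can never reclaim it, which is what happens to each redeployed copy of a class that parks state in statics.

```java
import java.lang.ref.WeakReference;

// Sketch: a static field pins an object for the lifetime of the class,
// so the garbage collector can never reclaim it.
public class StaticLeakDemo {
    private static Object s_instance;   // survives as long as the class is loaded

    static boolean staticRefSurvivesGc() {
        s_instance = new byte[1024];
        WeakReference<Object> ref = new WeakReference<>(s_instance);
        System.gc();                    // request a collection
        return ref.get() != null;       // still reachable via the static field
    }

    public static void main(String[] args) {
        System.out.println("pinned by static field: " + staticRefSurvivesGc());
    }
}
```

    In a container, each redeploy loads the classes again in a fresh classloader; the old classloader (and every static it holds) cannot be unloaded while anything in the server still references it, so each redeploy can leak one full copy of the application, as observed above.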

    Hello cguo:
    Thanks for reporting the problem.
    This is a recognized minor problem in OC4J standalone (and hence also in the application server). It is believed that the memory leak is not big enough to be a serious concern, unless an application is redeployed many times in a production environment, which is unlikely. In any case, this bug should be fixed in the next OC4J production release. Please try the next production release when it comes (when? I am not sure) and tell me if the problem still persists.

  • How to fix huge iTunes memory leak in 64-bit Windows 7?

    iTunes likes to allocate as much as 1.6 GB of memory on my dual quad-core Xeon, 8 GB, 64-bit Windows computer and then becomes unresponsive.
    This can happen several times a day and has been going on for as long as I can remember.  No other software that I use does this - only Apple's iTunes.  Each version I have installed of iTunes appears to have this same memory leak.  Currently I am running version 10.7.0.21.
    I love iTunes when it works.  But having to constantly kill and relaunch the app throughout the day is bringing me down.
    Searching for a fix for this on the internet just surfaces more and more complaints about this problem - but without a solution.
    Having written shrinkwrapped software for end users as well as for large corporations and governments for more than 25 years I know a thing or two about software.  A leak like this should take no more than a day or two to locate using modern software tools and double that to fix it.  So why with each new version of iTunes does this problem persist?  iTunes for Windows is the flagship software product Apple makes for non-Mac users - yet they continue to pass up each opportunity they have had over the years with each new release to fix this issue.  Why is this?
    Either the software engineers are not that good or they have been told NOT to spend time on this issue.  I personally believe that the engineers at Apple are very good, and therefore am left thinking that the latter is more likely the case.  Maybe this is to coax people to purchase a Mac so that they can finally run iTunes without these egregious memory leaks.  I would like to offer another issue to consider.
    Just as Amazon sold Kindles and Google sold Nexus tablets at low cost, not counting on margin for profit but instead aiming to saturate the marketplace with tools that make future purchases of content almost trivial, Apple counts on this model with its pricier hardware. But Apple also has iTunes. Instead of trying to get people to switch to a Mac by continuing to avoid fixing this glaring issue in iTunes for Windows, I would suggest that by allowing their engineers to address this issue, Apple will help keep Windows users from jumping ship to another music app. The profit to be made by keeping those Windows users happy and wedded to the iTunes Store is obvious.
    By continuing to leave this leak in iTunes for Windows, all Apple does is lower my esteem for the company and make me wonder whether the software is just as buggy on Macs.

    I have the same issue. It's been ongoing for more than a year, and I'm currently running iTunes 11.3.
    My PC is a Dell OptiPlex 990, i7 processor, 8 GB RAM, Windows 7 64-bit [always kept patched up to the latest OS updates etc.].
    I use this iTunes install to stream music, videos etc. to multiple Apple TVs, iPads, iPhones etc. via Home Sharing.
    I store all my media, including music, videos and apps, on a separate NAS, so the iTunes running on the PC is only playing the traffic-cop role, streaming and using files stored on the NAS. This creates lots of I/O across my network.
    Previous troubleshooting suggests possible contributing causes include:
    a) Podcast updates... until recently I had auto-updates on for multiple podcast subscriptions; presumably iTunes would flow these from the PC to save on the NAS across the network. If the memory leak is in the iTunes network communication layer (Bonjour?), it may be sensitive to I/O that would not normally occur if iTunes were saving files locally on the same PC.
    b) App updates... I have 200+ apps in my library and there is always a batch of updates, some 100s of MB in size; I routinely see 500 MB to 1 GB of updates in a single update run... all my apps are
    c) Streaming music/movies... it seems that when we ramp up streaming of music or movies, the memory leak grows faster, i.e. within hours of a clean start.
    d) Large syncs of music or videos to iPads or iPhones... I've noticed big problems when I rebuild an iPad. I typically have 60+ GB of data in terms of apps / music / videos to load, and have to do the rebuild in phases due to periodic lockups.

  • Memory leak versus inactive memory

    Hi:
    My iMac was becoming painfully slow a year or so ago. This seemed to be due to a lack of RAM, so I upgraded (2 GB to 4 GB total). This worked great, but now I seem to be running out of memory again, even when there are not a lot of programs open.
    I’ve started tracking memory usage with Activity Monitor, and typically a huge chunk of my memory is “Inactive”. I.e., at this time I have 215 MB free, 450 MB Wired, 1.6 GB active, and 1.7 GB inactive. This despite only having Word, Outlook, OmniOutliner, Safari and iTunes open. Under System Memory (in Activity Monitor) it says Finder at 250 MB, Word at 131 MB, iTunes at 120 MB, Outlook at 80 MB, OO at 46 MB, and Dropbox at 34 MB (then a bunch of smaller usages). Even being very generous, the total memory usage only seems to add up to ~1GB, far short of the 4 GB installed.
    I have no idea what a real “memory leak” actually is, but I’ve heard the term bandied about. I’ve had some weird, nonreproducible system crashes in the last few months where the system just totally freezes, often (but not always) putting a nice colour pattern on the monitors. Looking around, some folks have said that this might be due to a memory problem since everything else seems to check out AOK.
    Thus, several questions:
    Might I have a “memory leak”? If so, how do I diagnose and fix it?
    What is the 1.7 GB of “inactive” memory being used for? Why is my Free memory so small when this big chunk of inactive memory is sitting there?
    Thank you very much!
    OS 10.8.2, 2.8 GHz Intel Core 2 Duo with 4 GB 800 MHz DDR2 SDRAM

    The way Safari accumulates memory is normal. The way it is trying to page the memory to disc and failing is not. I think the integrity of your disc volume / catalog and directories may be to blame.
    Try a bit of basic maintenance:
    Repairing permissions is important, and should always be carried out both before and after any software installation or update.
    Go to Disk Utility (this is in your Utilities Folder in your Application folder) and click on the icon of your hard disk (not the one with all the numbers).
    In First Aid, click on Repair Permissions.
    This only takes a minute or two in Tiger, but much longer in Leopard.
    Background information here:
    http://docs.info.apple.com/article.html?artnum=25751
    and here:
    http://docs.info.apple.com/article.html?artnum=302672
    An article on troubleshooting Permissions can be found here:
    http://support.apple.com/kb/HT2963
    By the way, you can ignore any messages about SUID or ACL file permissions, as explained here:
    http://support.apple.com/kb/TS1448?viewlocale=en_US
    If you were having any serious problems with your Mac you might as well complete the exercise by repairing your hard disk as well. You cannot do this from the same start-up disk. Reboot from your install disk (holding down the C key). Once it opens, select your language, and then go to Disk Utility from the Utilities menu. Select your hard disk as before and click Repair.
    Once that is complete reboot again from your usual start-up disk.
    More useful reading here:
    Resolve startup issues and perform disk maintenance with Disk Utility and fsck
    http://support.apple.com/kb/TS1417?viewlocale=en_US
    For a full description of how to resolve Disk, Permission and Cache Corruption, you should read this FAQ from the X Lab:
    http://www.thexlab.com/faqs/repairprocess.html

  • HTMLEditorKit memory leak

    We are using the HTMLEditorKit class to parse documents in a Java program using Java version 1.4.1_02. While the class seems to work correctly, we eventually run out of memory when processing large numbers of documents. The problem seems to worsen when we process big documents. We had the same problem with Java version 1.4.0_03. In our class constructor we have the following code:
    // create the HTML document
    kit = new HTMLEditorKit();
    doc = kit.createDefaultDocument();
    Inside our processing loop we do the following:
    if (doc.getLength() > 0) {
        doc.remove(0, doc.getLength());
    }
    // Create a reader on the HTML content.
    Reader rd = getReader(inFilePath);
    // Parse the HTML. -- Here is a memory leak
    kit.read(rd, doc, 0);
    // Close the reader.
    rd.close();
    Any help will be much appreciated.
    Michael Sperling
    516-998-4803

    Hi,
    if you process HTML files in a loop, I assume you do not want to actually show them on screen. It seems you rather do something with each HTML file, such as reading the HTML code and changing / storing / transforming it as you go, in a batch-type manner over multiple files.
    If the above is true, I would not recommend using HTMLEditorKit.read at all, because it does not only parse the HTML; it also builds an HTMLDocument with the respective element structure each time. That in turn is only necessary for showing the HTML file in a text component and working on the document interactively.
    Depending on the actual purpose of your processing loop, you might want to simply parse the HTML file using the class HTMLEditorKit.ParserCallback. By creating a subclass of HTMLEditorKit.ParserCallback you can implement / customize the way you would like to process HTML files.
    The customized subclass of HTMLEditorKit.ParserCallback then can be used by calling
    new ParserDelegator().parse(new FileReader(srcFileName), new MyParserCallback(), true);
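    A minimal sketch of such a subclass (the class names, the collect-the-text behavior, and the sample input are illustrative only -- adapt the handle* methods to whatever your loop actually needs):

```java
import java.io.Reader;
import java.io.StringReader;
import javax.swing.text.html.HTMLEditorKit;
import javax.swing.text.html.parser.ParserDelegator;

// Illustrative callback: collects the plain text of a document without
// ever building an HTMLDocument / element structure.
class MyParserCallback extends HTMLEditorKit.ParserCallback {
    final StringBuilder text = new StringBuilder();

    @Override
    public void handleText(char[] data, int pos) {
        text.append(data);
    }
}

public class BatchParse {
    public static void main(String[] args) throws Exception {
        Reader rd = new StringReader("<html><body><p>Hello</p></body></html>");
        MyParserCallback cb = new MyParserCallback();
        // 'true' tells the parser to ignore a mismatched charset declaration.
        new ParserDelegator().parse(rd, cb, true);
        rd.close();
        System.out.println(cb.text); // prints "Hello"
    }
}
```

    Because no document model is created per file, the per-iteration allocations stay small and short-lived.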
    HTH
    Ulrich

  • Memory leak puzzle

    Hello guys,
    I'm having a very, very difficult time debugging this memory leak on my server. I'm pretty sure it's related to Berkeley DB XML, but I just couldn't prove this in numbers.
    My server: Debian GNU/Linux 2.4.27-2-386 #1 Wed Aug 17 09:33:35 UTC 2005 i686 GNU/Linux, 512 MB of RAM, Pentium 4-class processor.
    Here's the behaviour. I start Tomcat 5.5.15, the application is loaded, current memory status:
    delegaciainterativa:~# free -m
    total used free shared buffers cached
    Mem: 440 346 94 0 14 282
    -/+ buffers/cache: 49 391
    Swap: 909 4 905
    So, I've got 94 MB of free memory. Now, here's this JSP I'm running:
    <%@ page language="java" %>
    <%@ page session="true" %>
    <%@ page import="br.gov.al.delegaciainterativa.admin.*"%>
    <%@ page import="br.gov.al.delegaciainterativa.controles.*"%>
    <%@ page import="br.gov.al.delegaciainterativa.admin.areas.*"%>
    <%@ page import="br.gov.al.delegaciainterativa.controles.BdbXmlAmbiente"%>
    <%@ page import="java.util.*"%>
    <%@ page import="javax.servlet.RequestDispatcher"%>
    <%@ page import="com.sleepycat.dbxml.XmlContainer"%>
    <%@ page import="com.sleepycat.dbxml.XmlDocument"%>
    <%@ page import="com.sleepycat.dbxml.XmlException"%>
    <%@ page import="com.sleepycat.dbxml.XmlIndexDeclaration"%>
    <%@ page import="com.sleepycat.dbxml.XmlIndexSpecification"%>
    <%@ page import="com.sleepycat.dbxml.XmlQueryContext"%>
    <%@ page import="com.sleepycat.dbxml.XmlQueryExpression"%>
    <%@ page import="com.sleepycat.dbxml.XmlResults"%>
    <%@ page import="com.sleepycat.dbxml.XmlTransaction"%>
    <%@ page import="com.sleepycat.dbxml.XmlUpdateContext"%>
    <%@ page import="com.sleepycat.dbxml.XmlValue"%>
    <%@ page import="com.sleepycat.dbxml.XmlUpdateContext"%>
    <%@ page import="com.sleepycat.dbxml.XmlQueryContext"%>
    <%@ include file="config.jsp"%>
    <%!
         final String _grupoPagina = "visualizaLogsUsuario";
    %>
    <font face="verdana" size="1">
    <%
              HttpSession sessao = request.getSession(true);
              BdbXmlAmbiente bdbxmlAmbiente = new BdbXmlAmbiente();
              XmlContainer container = bdbxmlAmbiente.abrirContainer(Config.getNomeContainerBO());
              String fullquery = "";
              fullquery = "for $i in collection('"+ Config.getNomeContainerBO() +"')/BO \n";     
              fullquery += "where $i/cadastro/fim/data/date >= '2006-00-00' \n";
              fullquery += "return $i";
    ArrayList docs = new ArrayList();
    XmlResults results = null;
    XmlValue value = null;
    XmlDocument document = null;
    try {
        XmlQueryContext context = bdbxmlAmbiente.getEnvironmentInit().getManager().createQueryContext();
        context.setEvaluationType(XmlQueryContext.Eager);
        XmlQueryExpression xqe = bdbxmlAmbiente.getEnvironmentInit().getManager().prepare(fullquery, context);
        results = xqe.execute(context);
        out.println("<p>fez busca, deletando...</p>");
        while (results.hasNext()) {
            value = results.next();
            document = value.asDocument();
            document.fetchAllData();
            out.println(document.getName() + "<br>");
        }
        //out.println("<p>total: " + results.size() + "</p>");
    } catch (Throwable e) {
        out.println(e);
        e.printStackTrace();
    } finally {
        results.delete();
    }
    %>
    This basically loads about 1600 small documents (~5 KB each) from the container, and guess what the memory looks like now:
    delegaciainterativa:~# free -m
    total used free shared buffers cached
    Mem: 440 429 11 0 14 282
    -/+ buffers/cache: 132 308
    Swap: 909 4 905
    That's about 83 MB consumed! And if I perform other searches like this, it keeps eating more and more memory, then swapping and getting slow... and sometimes crashing Tomcat.
    Here's the big mystery: the memory NEVER GETS RELEASED. Even calling delete() explicitly, as I did, doesn't release the memory. It keeps increasing as searches are performed.
    Now, let's go to the profiler. I've installed JProfiler, run it on this server, and here are the results: http://freeunix.com.br/All_Objects.html
    Even more puzzling, the profiler doesn't show any impressive amount of memory consumption.
    So, what can I do to figure this out? The only way to release memory is to restart the server once in a while, which is far from ideal.
    I'd really appreciate any help with this, since this server is in production.
    cheers,
    -- Breno Jacinto

    Hello George,
        I tried this:
    <%
              HttpSession sessao = request.getSession(true);
              BdbXmlAmbiente bdbxmlAmbiente = new BdbXmlAmbiente();
              XmlContainer container = bdbxmlAmbiente.abrirContainer(Config.getNomeContainerBO());
              String fullquery = "";
              fullquery = "for $i in collection('"+ Config.getNomeContainerBO() +"')/BO \n";     
              fullquery += "where $i/cadastro/fim/data/date >= '2006-00-00' \n";
              fullquery += "return $i";
            ArrayList docs = new ArrayList();
            XmlResults results = null;
             XmlValue value = null;
             XmlDocument document = null;
            try {
                 XmlQueryContext context = bdbxmlAmbiente.getEnvironmentInit().getManager().createQueryContext();
                 context.setEvaluationType(XmlQueryContext.Eager);
                 XmlQueryExpression xqe = bdbxmlAmbiente.getEnvironmentInit().getManager().prepare(fullquery, context);
                 results = xqe.execute(context);
                   out.println("<p>fez busca, deletando...</p>");
            while (results.hasNext()) {
                value = results.next();
                document = value.asDocument();
                document.fetchAllData();
                out.println(document.getName() + "<br>");
                value.delete();
                document.delete();
            }
            //out.println("<p>total: " + results.size() + "</p>");
            context.delete();
            xqe.delete();
            results.delete();
        } catch (Throwable e) {
            out.println(e);
            e.printStackTrace();
        }
    %>
         Don't you think it's too much to consume ~80 MB of RAM for a search that returns ~1600 5 KB documents? And worse, this amount is never released.
         I'm wondering if there's a configuration issue here, such as the JVM's heap size or the BDB XML environment initialization?
    thanks,
    -- Breno
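
    One thing worth checking before tuning the JVM heap: BDB XML's Java classes are JNI wrappers around a C++ library, so its cache and result buffers are allocated in native memory that heap profilers such as JProfiler never see. That would explain a large `free -m` delta alongside an unremarkable heap profile. The same effect can be demonstrated with plain direct buffers standing in for JNI allocations (a sketch of the general phenomenon, not BDB XML specific):

```java
import java.nio.ByteBuffer;

public class NativeVsHeap {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long heapBefore = rt.totalMemory() - rt.freeMemory();

        // 64 MB of native (off-heap) memory, roughly what a JNI-backed
        // library might allocate outside the Java heap.
        ByteBuffer buf = ByteBuffer.allocateDirect(64 * 1024 * 1024);

        long heapDelta = (rt.totalMemory() - rt.freeMemory()) - heapBefore;
        // The process has grown by ~64 MB at the OS level, but the heap
        // numbers a Java profiler reports barely move.
        System.out.println("heap delta in bytes: " + heapDelta);
        System.out.println("native allocation: " + buf.capacity());
    }
}
```

    If the native side really is the culprit, the knobs that matter are the ones passed to the BDB environment (e.g. its cache size at environment initialization) rather than -Xmx.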

  • Memory leak using pthreads on Solaris 7

    I'm on Solaris 7
    uname -a:
    SunOS zelda 5.7 Generic_106541-18 sun4u sparc SUNW,Ultra-250
    compiling with g++ (2.95.2)
    purify 5.2
    I consistently get memory leaks related (apparently) to the pthreads lib. I've set the error reporting chain length to 30 (way big); the chain in every case starts with threadstart [ libthread.so.1 ].
    When I end my process, I join the threads before deleting them. What else can I do to get rid of these leaks?
    thanks,
    rich

    Is it worth upgrading to a 64-bit OS with more RAM?
    Well, you're talking about a fairly hefty investment; in fact, your best bet is buying a new machine at that point, since the motherboard would need to be upgraded as well as the memory and OS.
    Now I run a 64-bit OS with 12 GB of RAM, but I love my plug-ins, and most are 32-bit, so you would still be using 32-bit Photoshop if you want the majority of plug-ins to work. While the 32-bit version still has a limit on how much memory it can use, because it's running on a 64-bit OS, the full amount of that limit is available to the application.
    I can run several different memory-intensive applications at one time and normally not have an issue. I say normally because sometimes I will crash my graphics driver if I open one too many 3D apps hooked into the Nvidia drivers.
    I normally only reboot maybe once a week.
    So in short, would it help you to go 64-bit with more RAM? Most certainly, and even more so if you don't care about plug-ins and want to use the 64-bit version. Should you go to 16 GB of RAM? That depends on your budget, really.
    Personally, I always plan to upgrade when I build my systems, putting in the largest chips I can without filling all the slots, leaving room for upgrading if needed. That way you're not filling all your slots with cheaper, lower-capacity RAM that you'd have to replace entirely to upgrade.
    Hope this helps a bit.

Maybe you are looking for

  • Awr questions

    Hi, I have some questions related to AWR: 1) In AWR there is a section on "SQL ordered by Sharable Memory". I am not clear on the meaning of SQLs ordered by sharable memory, and how that information could be used for a tuning opportunity? 2

  • Need to profile the jdk 1.1.8

    Hello Friends, I need to profile a Java app which is targeted for JDK 1.1.8. In the absence of the JVMPI interface, is there a way I could do it through the native header files (in the Include directory)? Studying those header files, and the native

  • Pre order status

    Just wondering what everybody's pre-order status is. Mine still says "in process"

  • Patch problems

    I haven't installed a newconfig patchwise successfully in a few patches (iMS and UWC), and I'm having problems uninstalling the patches, so I might try to reapply them properly. Uninstall newconfig works, but reinstalling says it's still installed

  • Can OS X server fulfill my needs?

    I have tons of music and movies that I would like my friends and family to use/copy, but I am unsure how to go about doing this. Can OS X Server do some file sharing and provide basic security?