Total system memory *decreases* when using NVIDIA 9600M GT GPU

I have 4 GB of RAM in my MBP. I understand that when using the integrated video processor (NVIDIA 9400M), 250 MB of system RAM is set aside as video memory. So when my total system RAM showed 3.8 GB, that made sense. However, when I switched to the 9600M GT GPU, which has its own 256 MB of dedicated VRAM, total system memory actually decreased to 3729 MB!
How can that be, and why doesn't the total amount of RAM installed show as 4096 MB?

Hi Odysseus,
I am not completely sure here, but it sounds like you are seeing an issue related to 32-bit addressing. I am not well-versed in Mac architecture, but I am when it comes to PC architecture. A system that uses a 32-bit kernel (like Mac OS X and 32-bit versions of Windows) can natively address up to 4 GB of RAM (2^32 bytes). Here is where I get sketchy: I know that 32-bit Windows (including Vista) uses what is termed a flat memory model. In order for the system to access the video RAM, the kernel maps the video memory over the upper addresses of the 4 GB memory space. It does this backwards, so for a 256 MB video card it takes the addresses from 4096 MB down to 3840 MB. That prevents these addresses from being used by the system, so you effectively lose that RAM.

There are two traditional fixes. The first is to use more than 32 bits of address space and allow the video RAM to be mapped above the 4 GB barrier; this is what the Intel Santa Rosa chipset (used in all older MBPs that support 4 GB of RAM) does. The second is to use a 64-bit kernel, which extends the address space to the 2^64 range (around 17.2 billion GB of RAM).

As previously mentioned, I am not well-versed in the Mac architecture, but I wonder if the behavior you are seeing is some combination of the 9400M taking its chunk of system memory and the 9600M GT taking up addresses due to this addressing limitation? We won't really know for sure until Snow Leopard comes out, as it finally makes the OS X kernel 64-bit, which should ease any addressing limitations. Good luck!
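To put rough numbers on the top-down mapping described above, here is a minimal sketch (the 256 MB aperture size comes from the question; the class name is just for illustration):

    // A 32-bit kernel maps the 256 MB VRAM aperture down from the 4 GB
    // line, shadowing the physical RAM underneath those addresses.
    public class ApertureDemo {
        public static void main(String[] args) {
            final int addressSpaceMB = 4096;   // 2^32 bytes = 4 GB
            final int vramMB = 256;            // 9600M GT dedicated VRAM
            final int apertureStart = addressSpaceMB - vramMB;
            System.out.printf("VRAM aperture: %d-%d MB%n", apertureStart, addressSpaceMB);
            System.out.printf("RAM visible to the kernel: at most %d MB%n", apertureStart);
        }
    }

Chipset and PCI device apertures sitting below the VRAM window typically claim a bit more, which would square with the 3729 MB reported rather than a clean 3840 MB.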
Rich S.

Similar Messages

  • Memory problems when using alias channel strips (3.0.2)

    I'm having memory issues when using alias channel strips. What is happening is I create one instance of a plugin, say Massive, then make a bunch of aliases of it on different patches. Looking at the memory usage, every time I add an alias it increases memory usage by an amount the same as the original plugin, i.e. add 3 aliases and you have 3 times the memory usage. See attached photos.
    Original plugin instance: ~80 MB memory usage
    Adding 3 alias channel strips: ~330 MB memory usage
    By comparison, here is the usage after adding 3 brand-new instances of the plugin: ~160 MB memory usage
    So it is using less memory with 4 separate instances of the plugin than with 3 aliases of the original. This doesn't make sense, since the whole point of an alias is to save memory by only loading one instance of the plugin.
    Anyone have any thoughts?
    Jon

    What's interesting is that if I start with a blank concert, add a Massive to a patch and then make an alias, the memory is as you say. But if I make a duplicate of the initial patch and then make aliases of that, the memory is lower.
    Are you seeing the same with the built-in plug-ins?

  • High Virtual memory usage when using Pages 2.0.2

    Hey there,
    I was just wondering whether there have been any other reports of unusually high memory usage when using Pages 2.0.2, specifically virtual memory. I am running iWork 06 on the Mac listed below and Pages has been running really slowly recently. I checked Activity Monitor and Pages is using hardly any physical memory but loads of virtual memory, so much so that the page outs are almost as high as the page ins (roughly 51500 page ins / 51000 page outs).
    Any known problems, solutions or comments for this problem? Thanks in advance

    I don't know if this is specifically what you're seeing, but all Cocoa applications, such as Pages, have an effectively infinite Undo. If you have any document that you've been working on for a long time without closing, that could be responsible for a large amount of memory usage.
    While it's good practice to save on a regular basis, if you're making large numbers of changes it's also a good idea to close and reopen your document every once in a while, simply to clear the undo history. I've heard of some people seeing sluggish behavior after working on a document for several days, which cleared up when the document was closed and reopened.
    Titanium PowerBook   Mac OS X (10.4.8)  

  • System keeps crashing when using SIMS?

    My system keeps crashing when using SIMS. Is there anything we can do to prevent this?

    Responses from 2 years ago - how do I get back to those?

  • Massive memory usage when using latest version of Final Cut, 10 minutes after restart.

    Hey I am really struggling with using Final Cut Pro X and I suspect there is some software issue.
    I am editing a one-minute video clip; the video files are 1080 H.264. I restart my computer and open only Final Cut. After 10 minutes of editing I can barely navigate my computer anymore, it is going so slow, until eventually I have to force-reboot. I can scarcely export anything out of Final Cut; when I do, my computer feels like it's about to die, and if it does export I seem to get broken frames and/or missing media.
    I reinstalled my Mac only about 2 months ago.
    System specs:
    MacBook Pro
    13-inch, Late 2011
    Processor: 2.4 GHz Intel Core i5
    Memory: 4 GB 1333 MHz DDR3
    Graphics: Intel HD Graphics 3000, 384 MB
    On my previous MacBook Pro (Early 2010, 4 GB RAM), running Snow Leopard and previous versions of Final Cut, I was able to edit three 1-hour documentaries with very little problem, but now I can't even manage a one-minute clip.
    Not sure what to do. Here is a screenshot of my memory usage. I'm getting massive swap memory and lots of inactive memory. It seems very strange, especially after only 10 minutes running only Final Cut.

    FCP is only using 409 MB of memory out of your 4 GB. But whatever else you have open is using up virtually all the physical RAM you have. This is forcing the memory manager to page memory out to the disk-based VM file. That's a major reason for slowdowns.
    You have 4 GB of RAM. How many applications are you running concurrently? If you look at Page outs: you will note the positive number in parentheses. That means your computer is hitting the hard drive a lot because the memory demands are too high. If possible, cut down on how many applications you run concurrently, or put more RAM into the computer if that's possible.
    About OS X Memory Management and Usage
    Using Activity Monitor to read System Memory & determine how much RAM is used
    Memory Management in Mac OS X
    Performance Guidelines: Memory Management in Mac OS X
    A detailed look at memory usage in OS X
    Understanding top output in the Terminal
    The amount of available RAM for applications is the sum of Free RAM and Inactive RAM. This will change as applications are opened and closed or change from active to inactive status.
    The Swap figure represents an estimate of the total amount of swap space required for VM if used, but does not necessarily indicate the actual size of the existing swap file.
    If you are really in need of more RAM, that would be indicated by how frequently the system uses VM. If you open the Terminal and run the top command at the prompt, you will find information reported on Pageins () and Pageouts (). Pageouts () is the important figure. If the value in the parentheses is 0 (zero), then OS X is not making instantaneous use of VM, which means you have adequate physical RAM for the system with the applications you have loaded. If the figure in parentheses is running positive and your hard drive is constantly being used (thrashing), then you need more physical RAM.
    Adding RAM only makes it possible to run more programs concurrently.  It doesn't speed up the computer nor make games run faster.  What it can do is prevent the system from having to use disk-based VM when it runs out of RAM because you are trying to run too many applications concurrently or using applications that are extremely RAM dependent.  It will improve the performance of applications that run mostly in RAM or when loading programs.

  • Memory leak when using Threads?

    I did an experiment and noticed a memory leak when I was using threads. Here's what I did.
    ======================================
     // inside main(); requires import java.util.Scanner
     while (true) {
          Scanner sc = new Scanner(System.in);
          System.out.print("Press Enter to continue...");
          String answer = sc.next();   // blocks until input is entered
          new TestThread();            // spawns a thread that starts itself
     }
    ========================================
    And TestThread is the following:
    ========================================
     public class TestThread extends Thread {
         public TestThread() { start(); }   // starts itself on construction
         public void run() { }              // does no work; the thread exits immediately
     }
    =====================================
    When I open Windows Task Manager, every time a new thread starts and stops, java.exe's Mem Usage increases. Is it a memory leak!? What is going on in this situation? If I start a thread and then it stops, and then I start a new thread, why does it use more memory?
    -Brian

    Move
         Scanner sc = new Scanner(System.in);
    out of the loop:
         Scanner sc = new Scanner(System.in);
         while (true) {
             ...
    That won't matter in any meaningful way.
    Every loop iteration creates a new Scanner, but it also makes a Scanner eligible for GC, so the net memory requirement of the program is constant.
    Now, of course, it's possible that the VM won't bother GCing until 64 MB worth of Scanners have been created, but we don't care about that. If we're allowing the GC 64 MB, then we don't care how it uses it or when it cleans it up.
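    If you want to see that for yourself, here is a minimal sketch (the iteration count is arbitrary, and System.gc() is only a request, not a guarantee):

        public class GcDemo {
            public static void main(String[] args) throws InterruptedException {
                Runtime rt = Runtime.getRuntime();
                for (int i = 0; i < 10_000; i++) {
                    Thread t = new Thread(() -> { /* no work */ });
                    t.start();
                    t.join();    // thread terminates; its memory becomes reclaimable
                }
                System.gc();     // politely ask for a collection
                Thread.sleep(500);
                long usedKb = (rt.totalMemory() - rt.freeMemory()) / 1024;
                System.out.println("Used heap after GC: " + usedKb + " KB");
            }
        }

    Also note that Task Manager shows memory the JVM has reserved from the OS, which it typically does not hand back even after a collection, so a rising Mem Usage column there is not by itself proof of a leak.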

  • Need help with internal HD memory problems when using Premiere Pro?

    When using PP I keep losing space on my internal HD.
    Now this seems strange to me since I have everything, all my video and audio files, on external HDs.
    Each time I make a new project I end up with less space on my internal HD.
    Information related to these projects is somehow remaining on my internal HD.
    Anyone got any ideas about what I might be doing wrong?
    Dimitrije

    Premiere will slowly compile various files to help the project along, and the default place is usually your internal hard drive. Make sure your scratch disks are pointed at an external hard drive if that is what you want; also make sure the Media Cache Files are being created on your external as well (and not the default location, which is on your local drive).
    Premiere Preferences > Media
    Media Cache Files & Media Cache Database should be changed to an external disk if you don't want them created on your local disk. There are many tutorials and explanations about all of these aspects of Premiere on these forums and from other sources. Hope that helps!

  • Further info needed on the System Time Problem when using Boot Camp

    Hi, my query is an extension of the previously reported problems, and the fix, for the system time being incorrect when using Boot Camp.
    As respected user 'SideStepSociety' very helpfully pointed out and posted in another thread, the cure for this problem is as follows:
    In Windows, try this:
    Run > RegEdit
    Find: HKEY_LOCAL_MACHINE > SYSTEM > CurrentControlSet > Control > TimeZoneInformation
    Add key (REG_DWORD type): RealTimeIsUniversal
    Double-click and set value to 1
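    Equivalently, the same value can be added in one line from an elevated Command Prompt using the stock reg.exe tool (a sketch of the identical change described above):

        reg add "HKLM\SYSTEM\CurrentControlSet\Control\TimeZoneInformation" /v RealTimeIsUniversal /t REG_DWORD /d 1 /f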
    However, unless I'm mistaken, I have seen somewhere that the REG_DWORD key type is 32-bit, and I am running 64-bit Windows 7 (x64) under Boot Camp. Do I therefore have to enter the 64-bit REG_QWORD key type instead of the 32-bit REG_DWORD type? Or is it perfectly acceptable to enter both, to cover myself should any of the components revert from 64-bit to 32-bit? I have experienced that happening once with Windows Gadgets, which normally always run 64-bit but once reverted to 32-bit.
    Advice appreciated, thanks.

    Thanks for the advice.
    I had gone ahead and entered the DWORD value in the registry just after my post, and after a couple of weeks of booting between the two OSes there have been no more problems with the time differing between OS X and Boot Camp.
    It's a shame this most frustrating of little problems was not resolved in a Boot Camp update patch; I had been experiencing it for a long, long time, not knowing how to fix it and thinking my own MacBook Pro was at fault. I do sympathise now with the possibly hundreds or even thousands of Mac Boot Camp users suffering this without knowing that only a manually applied registry fix will cure it. This is tacky. I do wonder about Apple sometimes.

  • Bug report & possible patch: Wrong memory allocation when using BerkeleyDB in concurrent processes

    When using the BerkeleyDB shared environment in parallel processes, the processes get an "out of memory" error even when there is plenty of free memory available. This results in possible database corruption.
    A typical use case where this bug manifests is when BerkeleyDB is used by rpm, which is installing an rpm package into a custom location, or calls another rpm instance during the installation process.
    The bug seems to originate in the env/env_region.c file (the version of the file is from BDB 4.7.25, although the culprit code is the same in newer versions too):
    330     /*
    331      * Allocate room for REGION structures plus overhead.
    332      *
    333      * XXX
    334      * Overhead is so high because encryption passwds, replication vote
    335      * arrays and the thread control block table are all stored in the
    336      * base environment region.  This is a bug, at the least replication
    337      * should have its own region.
    338      *
    339      * Allocate space for thread info blocks.  Max is only advisory,
    340      * so we allocate 25% more.
    341      */
    342     memset(&tregion, 0, sizeof(tregion));
    343     nregions = __memp_max_regions(env) + 10;
    344     size = nregions * sizeof(REGION);
    345     size += dbenv->passwd_len;
    346     size += (dbenv->thr_max + dbenv->thr_max / 4) *
    347         __env_alloc_size(sizeof(DB_THREAD_INFO));
    348     size += env->thr_nbucket * __env_alloc_size(sizeof(DB_HASHTAB));
    349     size += 16 * 1024;
    350     tregion.size = size;
    Usage from rpm's perspective:
    Line 346 calculates how much memory we need for DB_THREAD_INFO structures. A DB_THREAD_INFO structure is allocated for every process calling the db4 library. These structures are never deallocated, but when the number of processes exceeds dbenv->thr_max we try to reuse a structure belonging to a process that is already dead (or no longer uses db4). However, the DB_THREAD_INFOs are kept in hash buckets, and a DB_THREAD_INFO can only be reused if it is in the same hash bucket as the new DB_THREAD_INFO. So line 346 should read:
    346     size += env->thr_nbucket * (dbenv->thr_max + dbenv->thr_max / 4) *
    347         __env_alloc_size(sizeof(DB_THREAD_INFO));
    Why don't we encounter this problem earlier? There are some magic reserves, as you can see on line 349, and some additional space comes from aligning to blocks. But suppose we have two processes running at the same time that end up in the same hash bucket, and we repeat this process enough times to fill all hash buckets with two DB_THREAD_INFOs each. Then we have 2 * env->thr_nbucket (37) = 74 DB_THREAD_INFOs, which is much more than the dbenv->thr_max (8) + dbenv->thr_max (8) / 4 = 10 that were allocated; add the allocation from dbc_put and we are out of memory.
    And how do we get two processes to end up in the same hash bucket? Start one process (rpm -i) and then, in a scriptlet, start many processes (rpm -q ...) in a loop; one of them will land in the same hash bucket as the first process (rpm -i).
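    To make the arithmetic concrete, here is a minimal sketch using the numbers from the post (thr_max = 8, 37 hash buckets; the class name is just for illustration):

        // Slots allocated by line 346 vs. worst-case demand with per-bucket reuse.
        public class BdbSlotMath {
            public static void main(String[] args) {
                int thrMax = 8, nBuckets = 37;
                int allocated = thrMax + thrMax / 4;   // 10 DB_THREAD_INFO slots
                int worstCase = 2 * nBuckets;          // 2 per bucket = 74 slots
                System.out.println("allocated=" + allocated + ", needed=" + worstCase
                        + ", out of memory: " + (worstCase > allocated));
            }
        }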
    I would like to know your opinion on this issue, and if the proposed fix would be acceptable.
    Thanks in advance for answers.

    The attached patch for db-4.7 makes two changes:
      it allows enough room for each bucket to hold the configured number of threads, and
      it initializes env->thr_nbuckets, which previously had not been initialized.
    Please let us know how it works for you.
    Regards,
    Charles

  • Memory leak when "Use JSSE SSL" is enabled

    I'm investigating a memory leak that occurs in WebLogic 11g (10.3.3 and 10.3.5) when "Use JSSE SSL" is checked using the Sun/Oracle JVM and JCE/JSSE providers. The leak is reproducible just by hitting the WebLogic Admin Console login page repeatedly using SSL. Running the app server under JProfiler shows byte arrays (among other objects) leaking from the socket handling code. I thought it might be a general problem with the default JSSE provider, but Tomcat does not exhibit the problem.
    Anyone else seeing this?

    Yes, we are seeing it as well on Oracle 11g while running a GWT 2.1.1 application using GWT RPC. Our current fix is to remove the JSSE SSL configuration check; however, this might not be an option if you really need it for your application. Have you found anything else about it?

  • InDesign CS5 'out of memory' error when using preflight

    I have been regularly getting an 'out of memory' error when I choose to use my bespoke preflight profile.
    I have 4 GB of RAM and run InDesign CS5 on OS X 10.6.8.
    Does anyone know a workaround?
    As soon as I select anything beyond the basic default profile, I get the beach ball from hell for 10 minutes, then it kindly lets me know that I am out of memory, sends a crash report to Adobe and then asks if I want to relaunch. I'm stuck in a vicious circle. I must have sent my 4th crash report by now, and no feedback from anyone at Adobe.

    I have replaced my preferences, but still the problem persists. I tried switching my view from typical display to fast display before I selected a profile; I thought this might give me the extra memory needed to avoid the inevitable crash. I learnt that 2 files were indeed RGB instead of CMYK before it crashed again, so I switched them to CMYK and tried again, selected my bespoke profile, but yet again it crashed. I think the problem lies with the file, not InDesign, as I have tried the same profile on a different file and the program doesn't crash and runs as it should. So if in future I need to use said crashing file again, first I will need to try Peter's isolation fix method; otherwise I'll never be able to progress to a successful PDF.

  • Are there any memory restrictions when using Invoke-Command?

    Hi, I'm using the Invoke-PSCommandSample runbook to run a batch file inside a VM.
    The batch file inside the VM runs a Java program.
    The batch file works fine when I run it manually in the VM.
    However, when I use the Invoke-PSCommandSample runbook to run the batch file, I get the following error:
    Error occurred during initialization of VM
    Could not reserve enough space for object heap
    errorlevel=1
    Press any key to continue . . .
    Does anybody know if there are any memory restrictions when invoking commands inside a VM via runbook?
    Thanks in advance.

    Hi Joe, I'll give some more background information. I'm doing load testing with JMeter in Azure and I want to automate the task. This is my runbook:
    workflow Invoke-JMeter {
        $Cmd = "& 'C:\Program Files (x86)\apache-jmeter-2.11\bin\jmeter-runbook.bat'"
        $Cred = Get-AutomationPSCredential -Name "[email protected]"
        Invoke-PSCommandSample -AzureSubscriptionName "mysubscription" -ServiceName "myservice" -VMName "mymachine" -VMCredentialName "myuser" -PSCommand $Cmd -AzureOrgIdCredential $Cred
    }
    This is my batch file inside the VM:
    set JAVA_HOME=C:\Program Files\Java\jdk1.7.0_71
    set PATH=%JAVA_HOME%\bin;%PATH%
    %COMSPEC% /c "%~dp0jmeter.bat" -n -t build-web-test-plan.jmx -l build-web-test-plan.jtl -j build-web-test-plan.log
    Initially I tried to run JMeter with "-Xms2048m -Xmx2048m". As that didn't work, I lowered the memory allocation, but even with "-Xms128m -Xmx128m" it does not work. I have tried with local PowerShell ISE as you suggested, but I'm running into certificate issues, which I'm currently looking at. Here's my local script:
    Add-AzureAccount
    Select-AzureSubscription -SubscriptionName "mysubscription"
    $Uri = Get-AzureWinRMUri -ServiceName "myservice" -Name "mymachine"
    $Credential = Get-Credential
    $Cmd = "& 'C:\Program Files (x86)\apache-jmeter-2.11\bin\jmeter-runbook.bat'"
    Invoke-Command -ConnectionUri $Uri -Credential $Credential -ScriptBlock {
        Invoke-Expression $Args[0]
    } -Args $Cmd
    With this, I get the following error (translated from German):
    [myservice.cloudapp.net] Connecting to the remote server "myservice.cloudapp.net" failed with the following error:
    The server certificate on the destination computer (myservice.cloudapp.net:666) has the following errors:
    The SSL certificate is signed by an unknown certificate authority. For more information, see the
    about_Remote_Troubleshooting Help topic.
        + CategoryInfo          : OpenError: (myservice.cloudapp.net:String) [], PSRemotingTransportException
        + FullyQualifiedErrorId : 12175,PSSessionStateBroken

  • Keep System.in open when using redirection

    I am using redirection for System.in, but when I go to read a second line the stream has closed, so all I get is null.

    You need to explain a bit better what you are asking.
    System.in only has three read methods and none of them can return null, so you must be referring to something else.
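    One common way to end up with null here (a guess at the original code, since it isn't shown): System.in itself returns -1 at end of stream, but wrapping it in a BufferedReader gives a readLine() that returns null once the redirected input is exhausted.

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.io.InputStreamReader;

        public class RedirectDemo {
            public static void main(String[] args) throws IOException {
                BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
                String first = in.readLine();    // first line of the redirected file
                String second = in.readLine();   // null once the input is exhausted
                System.out.println("first=" + first + ", second=" + second);
            }
        }

    Run as "java RedirectDemo < somefile.txt" (file name illustrative) with a one-line file, and the second readLine() comes back null, which may be what is being described.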

  • Performance hit when using a FirePro GPU?

    Hey there!
    I'm interested in purchasing a cheap ATI FirePro V4900 or similar GPU in order to take advantage of my 10-bit TFT when using Photoshop. I was wondering if I have to expect a performance hit when using PS 6.0 with such a card, as opposed to a normal "gamer" GPU like the NVIDIA GTX 670, when:
    - performing normal file handling, opening PSD files, panning, zooming, brushing, etc.
    - applying some of the newer GPU-enhanced filters like Liquify, Oil Paint, Iris Blur, 3D enhancements, etc.
    - actually applying/rendering a more demanding filter such as Iris Blur?
    Does anyone know about this? I'm afraid I could not find any benchmarks at all, except for the one on Tom's Hardware regarding OCL, but that one does not include professional GPUs...
    Thanks for any info in advance!

    "There will not be synchronization when the method of A is being called. The method 1) certainly saves memory space, but will the performance be hurt since there will only be one object accessed by multiple threads? Or maybe it doesn't matter?"
    If there is no synchronization, it will not matter. Threads execute methods; methods do not run on objects. The object is just data that is implicitly linked to the method.
    Just make sure it's safe to keep the method unsynchronized.
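    For what it's worth, here is a minimal Java sketch of that point (the class and method names are made up for illustration): a method that touches only its parameters and locals is safe to call unsynchronized from many threads on one shared object; it is shared mutable state that needs synchronization.

        class Shared {
            // Safe unsynchronized: reads only its parameter and locals.
            int square(int x) { return x * x; }

            // Needs synchronization: counter++ is a read-modify-write
            // on state shared between threads.
            private int counter = 0;
            synchronized void increment() { counter++; }
        }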

  • Makequeues memory leak when using Lexmark Z53

    Behavior with printer switched off while connected to built-in USB port:
    Problem 1: makequeues has a memory leak, starting out with between 3 and 5 MB of RAM. This is observed by connecting the printer while logged in and watching via Activity Monitor.
    Initially makequeues has sent 255 Mach messages. Every ~10.5 seconds it sends 59 more messages and gets 53 in reply. With each send/receive cycle its RAM usage grows by 8 KB, a rate of about 780 KiB/min.
    Problem 2: The process is started each time the same printer is reconnected.
    Symptoms: Disconnecting the printer doesn't make the process exit (any instance). If left alone, the memory usage cycle continues.
    The memory leak caught my attention after it had taken up over half a gigabyte of RAM when I left the Mac running overnight.
    Resolution:
    (1) Disconnect USB cable
    (2) Kill each instance of the process
    Results:
    (1) The process launches after reconnection of the printer with 996 KB of RAM, immediately having sent 60 Mach messages and received 56.
    (2) During this run time, the process may use less and less RAM, down to a minimum of 912 KB.
    (3) After sending 64 more messages without replies, one every 4 seconds, the process exits, having sent a total of 120 messages.
    (4) This behavior (1 through 3) is the same for each process launched as a result of printer reconnection.
    This behavior isn't shown during Safe Mode boot - makequeues never starts up as a result of printer connection.
    PowerMac G4 QuickSilver 2001, 733 MHz

    If this is meant to be a bug report, please file a bug report in the proper place. This is a user-to-user discussion forum.
    As to the source of the running 'makequeues' (/System/Library/SystemConfiguration/PrinterNotifications.bundle/Contents/MacOS/makequeues), I would look at the Lexmark software. I currently have two Epson printers connected via USB with CUPS running, and I see no 'makequeues' process appearing.
    Matt
