Problems freeing up memory

I've been having a problem of late with my iPhone 5s 16 GB. I keep receiving a warning box advising me that my memory is low; the problem is, no matter how many apps I delete, I'm still told that I have no memory available, even though I'm sure I've freed up at least 3 GB. Because the phone thinks that more memory should be freed up, I can't do anything of substance on the phone without getting hung up by the warning box. If anyone can shed any light on what is going on here and point to a possible fix, I would appreciate it.
Thanks,
Dave

Similar Messages

  • Freeing up Memory During CFFILE

    I have a process that generates flat files using CFFILE. The
    process uses a database reference table that feeds the query
    parameters. I have several loops in this process which I thought
    would be more efficient and free up memory after each CFFILE
    "action append" but before execution of the next query. (execute
    query then write all the files, loop to query then write all files,
    etc....)
    Problem is that it is not freeing up memory and I am getting
    java.lang.OutOfMemoryError errors after about 15 minutes of run
    time.
    The current solution is to run this process in batches. This
    unfortunately requires someone to babysit the process, change a
    parameter, and resubmit.
    Is there any way, after one of the loops is complete (the query
    has completed and the first group of files has been written) and
    before the next execution of the query, that I can free memory?
    I need to trick the process into thinking that the page is
    complete and then immediately initiate the next query to process the
    files.
    We have tuned the box as best we can and applied all patches
    (MX7). The process runs great in batches but just can't handle the
    volume. Millions of rows are being processed.
    Thanks
    Bill

    Current solution is to run this process in batches. This
    unfortunately
    requires someone to babysit the process, change a parameter,
    and submit.
    java.lang.OutOfMemoryError means that there are too many
    objects in memory at one time. However, it is difficult to remove
    objects in one continuous process while it is still running.
    The word
    batches carries a hint. In the circumstances, I would break
    the process up into smaller jobs and use cfschedule for each. Each
    job could store the data that subsequent jobs need in a file or in
    a database table specifically created for that purpose. Use cached
    queries for static data. ColdFusion will then read the data from
    memory, rather than from a file or database, sparing you the
    creation of yet more objects.
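    As a rough illustration of that approach (untested; the task name, URL, and datasource below are made-up placeholders), each scheduled request would process one batch, and the static reference table would be cached so repeat runs read it from memory:
    <!--- cache the reference/parameter table for an hour so every batch run reuses it --->
    <cfquery name="qParams" datasource="myDSN" cachedwithin="#createTimespan(0,1,0,0)#">
        SELECT param_id, param_value FROM batch_parameters
    </cfquery>
    <!--- register a recurring job; each run processes the next batch of files --->
    <cfschedule action="update"
                task="flatFileBatch"
                operation="HTTPRequest"
                url="http://localhost/batch/writeFiles.cfm"
                startdate="08/01/2008"
                starttime="01:00 AM"
                interval="3600">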

  • Problems with device memory with new apps and upgrades!

    Dear all,
    I'm having problems with my device memory. I have installed these apps:
    - BlackBerry Battery Watch
    - Bejeweled
    - Google Maps
    - Level
    - Ispeech translator
    - Octopuzzle
    - Shazam Encore
    - Simcity Deluxe
    - The Light flashlight
    - Unit converter premium
    - Vlingo
    - Weather eye
    - Whatsapp Messenger
    And it says my 128 MB of device memory is full. How can I clean this up so I can continue upgrading and make my device a little more organized?
    Thanks!

    Hey marcoaramini,
    Welcome to the forums. 
    This article should help you with freeing up some memory on your BlackBerry smartphone.
    How to Maximize Battery Life and Free Memory on the BlackBerry smartphone
    -SR

  • Problem in ABAP memory

    Hi Experts,
    This is a problem about ABAP memory.
    I have two programs. Program-A & Program-B
    Program-A sets value to variable and EXPORT command is used to set this variable in memory.
    EXPORT variable TO DATABASE indx(st) ID 'KEYVALUE'.
    Program-B gets variable using IMPORT command from memory.
    IMPORT variable FROM DATABASE indx(st) ID 'KEYVALUE'.
    User runs Program-A in SE38. Program-A calls Program-B using a button click event (SUBMIT).
    The scenario is..
    User1 executes Program-A,
    which sets variable = User1 in memory.
    User2 executes Program-A,
    which sets variable = User2 in memory.
    User2 clicks the button to call Program-B,
    which imports variable = User2 from memory.
    User1 clicks the button to call Program-B,
    which also imports variable = User2 from memory
    (but User1 expects variable = User1).
    So User1 gets the wrong variable value, set by another user.
    How can I handle this situation? How can I set memory variables to be user-specific? I will appreciate all helpful answers.
    Thanks in advance
    Hari.

    What you are using is global (cross-user) memory; if you don't want other sessions to see it, then you have to use a memory ID instead.  This will work when submitting Program-B using the SUBMIT statement.
    export variable to memory id 'ZRICHTEST'.
    import variable from memory id 'ZRICHTEST'.
    Or you can simply make your KEYVALUE unique by giving the USERID as part of it.
    Regards,
    Rich Heilman
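    If you prefer to keep the INDX-based EXPORT/IMPORT, a minimal, untested sketch of making the key user-specific (the variable name is made up) could look like:
    DATA lv_key TYPE indx-srtfd.
    CONCATENATE 'KEYVALUE_' sy-uname INTO lv_key.       " per-user key, e.g. KEYVALUE_USER1
    EXPORT variable TO DATABASE indx(st) ID lv_key.     " Program-A stores under the user's key
    IMPORT variable FROM DATABASE indx(st) ID lv_key.   " Program-B reads the same user's key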

  • FIM: Freeing invalid memory in OCIServerAttach

    Hi,
    I have the following piece of code for making a connection to the Oracle server -
    retCode = OCIEnvCreate(envHandle, OCI_THREADED, (dvoid *)0, 0, 0, 0, (size_t) 0, (dvoid **)0);
    if(retCode == OCI_SUCCESS)
    isEnvAllocated = PMTRUE;
    retCode = OCIHandleAlloc( (dvoid *) envHandle, (dvoid *) &errhp, OCI_HTYPE_ERROR, (size_t) 0, (dvoid **) 0);
    // server contexts
    retCode = OCIHandleAlloc( (dvoid *) envHandle, (dvoid *) &srvhp, OCI_HTYPE_SERVER,(size_t) 0, (dvoid **) 0);
    retCode = OCIHandleAlloc( (dvoid *) envHandle, (dvoid *) &svchp, OCI_HTYPE_SVCCTX,(size_t) 0, (dvoid **) 0);
    pError = checkError(*envHandle, errhp, OCIServerAttach(srvhp, errhp, (ptext *)dbName, strlen(dbName), 0));
    Purify is reporting FIM: Freeing invalid memory in LocalFree {1 occurrence} and the stack trace is -
    [E] FIM: Freeing invalid memory in LocalFree {1 occurrence}
    Address 0x001430f8 points into a HeapAlloc'd block in unallocated region of the default heap
    Location of free attempt
    LocalFree [KERNEL32.dll]
    ??? [security.dll ip=0x76e71a7f]
    AcquireCredentialsHandleA [security.dll]
    naunts5 [orannts8.dll]
    naunts [orannts8.dll]
    sntseltst [oran8.dll]
    naconnect [oran8.dll]
    naconnect [oran8.dll]
    naconnect [oran8.dll]
    nsmore2recv [oran8.dll]
    nsmore2recv [oran8.dll]
    nscall [oran8.dll]
    niotns [oran8.dll]
    osncon [oran8.dll]
    xaolog [OraClient8.Dll]
    xaolog [OraClient8.Dll]
    upiah0 [OraClient8.Dll]
    kpuatch [OraClient8.Dll]
    OCIServerAttach [OraClient8.Dll]
    OCIServerAttach [OCI.dll]
    Does anyone have any idea why it is giving that error?
    Also there is a leak associated with it too and the stack trace for that is -
    MPK: Potential memory leak of 4140 bytes from 2 blocks allocated in nsbfree
    Distribution of potentially leaked blocks
    Allocation location
    calloc [msvcrt.dll]
    nsbfree [oran8.dll]
    nsbfree [oran8.dll]
    sntseltst [oran8.dll]
    sntseltst [oran8.dll]
    nsdo [oran8.dll]
    nscall [oran8.dll]
    nscall [oran8.dll]
    niotns [oran8.dll]
    osncon [oran8.dll]
    xaolog [OraClient8.Dll]
    xaolog [OraClient8.Dll]
    upiah0 [OraClient8.Dll]
    kpuatch [OraClient8.Dll]
    OCIServerAttach [OraClient8.Dll]
    OCIServerAttach [OCI.dll]
    Is it a standard leak that is happening in OCI.dll, or is it a usage issue that could be solved by using OCIServerAttach in a different way?
    Any help in this matter is greatly appreciated.
    Thanks
    Anil
    [email protected]

    I believe that both issues are actually the result of false positives. In general, it's not really possible to tell whether C++ code is actually leaking memory or freeing invalid memory handles. Tools like Purify, BoundChecker, etc. will generally give this sort of false positive when the code you're profiling does something 'dangerous'. I believe you can ignore both messages-- at least the Oracle ODBC driver development group did when I was there.
    Justin

  • TR: Problem with free memory

    DARTIGUENAVE Antoine wrote:
    Dear Forte users,
    If my first mail about our "free pointer memory with Forte" problem was not
    clear enough,
    I will try to give details of the circumstances in which it occurs:
    Our Forte program calls the method Open_Uni_session twice; it
    returns a pointer to a structure (defined in the wrapped library).
    The first time, an exception is raised after reading the structure;
    freeing the memory before (or after) raising the exception works fine.
    The second time, an exception raised after reading the structure causes a
    segmentation/access-violation exception, and freeing the memory also causes a
    segmentation/access-violation exception.
    I am trying to work around the problem in the following way:
    Concerning my problem of freeing the memory for a pointer to a structure
    after the wrapped method returns,
    there is no longer a problem if I initialise the pointed-to structure before
    calling free(unResultat):
    unResultat->ret_type = 0;
    unResultat->ret_info =0;
    free(unResultat);
    raise Exc;

    Forgot to mention:
    before my iPhone I had a Samsung Galaxy Ace.
    With my Samsung I had no problem at the same places.

  • HP 25 Calculator - Hard problem related to memory

    Hi All,
    I have an old HP 25 calculator, and I am trying to revive it after it has sat unused for 20 years.
    There is a problem related to a memory malfunction: the store functions and the program functions don't work.
    Can any one of you help me?
    I would like to know if there is an electrical schematic available so I can understand and analyse what is probably happening, and also whether there is a workshop here in São Paulo, Brazil, that could fix this old calculator.
    I live in Brazil, in São Paulo city.
    Thanks in advance.

    Hi,
    I think you will find the HP Calculator Museum forums a good place to ask:
    http://www.hpmuseum.org/forum/
    (ask your question in the General Forum).
    I am sure someone there will give you some guidance.
    Best regards.
    Note: I do not work for HP, I just like playing with calculators :-)

  • Problem freeing mapping bios

    I am trying to upgrade the BIOS on my K9N NeoV2 motherboard
    using the Live Flash utility,
    which I installed from the CD that came with the motherboard,
    and I get this message:
    problem freeing mapping bios
    What is the problem and how can I fix it?
    THANKS

    Quote from: metaphysic on 24-July-08, 04:46:52
    I rolled the driver back to MPS Multi..., but when I try to update I only see
    MPS Multiprocessor PC and
    Standard PC.
    What do I need to do to get the ACPI Multi... driver back?
    Is it this way:
    https://forum-en.msi.com/index.php?topic=114197.msg855961#msg855961
    (Sorry, my English is not very good.)
    Yes, it is done that way, but it will not work from the currently used HAL {Standard PC}. (https://forum-en.msi.com/index.php?topic=118643.msg895041#msg895041)
    Do a fresh new installation. {Not a repair!!}
    Before proceeding, enter the BIOS and "LOAD BIOS OPTIMIZED DEFAULTS".
    Verify options:
    In "Advanced BIOS Features":
    IOAPIC Function = Enabled
    MPS Table Version = 1.4
    In "Power Management Setup":
    ACPI Function = Enabled

  • Problem Installing Additional Memory -- Older G4 15"

    I am having a problem installing additional memory in this computer:
    Machine Name: Powerbook G4 15"
    Machine Model: Powerbook 5,2
    CPU Type: PowerPC G4 (1.1)
    Number Of CPUs: 1
    CPU Speed: 1 GHz
    L2 Cache (per CPU): 512 KB
    Memory: 256 MB
    Bus Speed: 167 MHz
    Boot ROM Version: 4.7.1f1
    Serial Number: V734***RY
    The symptoms are the same as those described here:
    http://docs.info.apple.com/article.html?artnum=303173
    Any suggestions?
    Thanks.
    Tom
    Powerbook 5,2   Mac OS X (10.4.5)  

    tguild,
    Our PBs are sensitive to the memory used - cheap memory typically doesn't work, even though the raw specs, PC2700 DDR SDRAM, may meet Apple's published specs. Three brands that seem to consistently work in our PBs are Crucial, Kingston, and Samsung (which is used by Apple as factory fill). Both Crucial and Kingston have memory selectors on their sites, so you get the correct RAM (Crucial's fitment guarantee requires that you use their memory selector).
    Crucial is available from www.crucial.com, Kingston from www.kingston.com (choose specials/promotions/25% off notebook memory for a reduced price), and one source for Samsung chips is OWC at www.macsales.com.
    So, make sure you've got quality memory. Your PB serial number is outside the range covered by the Apple repair program. I would make sure that with quality memory, it's doing what you want it to do before shipping it off for repair. Some with the lower socket issue have chosen to use one RAM strip in the upper slot, and not pay for repair. I've seen estimates of $350 and up to repair it for those not covered by the repair program, and some have lucked out by having their local Apple store fix it, even though it's outside the serial number range.
    You only have 256 MB today - OS X really likes more memory, so if you can upgrade to at least 512 MB, or even 1 GB, you'll be happier with the performance.

  • I'm having problems with my memory load

    I'm having problems with my memory load: I have only a few photos on my iPhone 5 (32GB), but it shows that I am using 10GB for photos, even though I only have 1,000 photos. Could someone help me?

    Hi Fehmi76,
    The article linked below details how to go about seeing what apps on your iPhone are using what amounts of space.
    See how much storage you've used on your iPhone, iPad, and iPod touch
    https://support.apple.com/en-us/HT201656
    Regards,
    Allen

  • Problem with Java Memory "Could not reserve enough space for object heap"

    Hi gurus,
    I am not an expert in Java configuration, and I have a situation that I don't understand. First of all, I am working on CentOS 6.2 with JDK 1.6 and Tomcat 7.
    The problem is...
    - If I run Tomcat with JAVA_OPTS="-Xmx128m" (in catalina.sh), everything works fine.
    - If I run Tomcat with JAVA_OPTS="-Xmx512m" (in catalina.sh), an error appears:
    Error occurred during initialization of VM
    Could not reserve enough space for object heap
    Could not create the Java virtual machine.
    This appears when I run java -version or when I try to stop Tomcat, and Tomcat isn't able to stop.
    The strange thing is that my server has more than 200M of free physical memory. So why isn't Tomcat able to stop, and why doesn't Java use the free memory on my server?
    Thanks in advance.

    Hello EJP, thanks for your answer.
    I explained it badly.
    The server has 703M free when Tomcat is stopped. What I meant is that my server has more than 200M of free physical memory while Tomcat is running with JAVA_OPTS="-Xmx512m", so I don't understand why these errors appear.
    Do you understand me?
    Recently I checked the swap memory, and it is disabled. Even though swap is disabled, I think Java shouldn't need it, because there is free physical memory.
    Thanks again.
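    For what it's worth, a generic, untested sketch of checking what is really available and sizing the heap accordingly (the 256m value is only a placeholder):
    # check memory actually available to new processes (buffers/cache can be reclaimed)
    free -m
    # in catalina.sh, or better in a bin/setenv.sh next to it, keep -Xmx below the
    # truly free physical memory, since swap is disabled on this box
    JAVA_OPTS="-Xms256m -Xmx256m"
    export JAVA_OPTS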

  • Problem with PermGen memory (Java) - Tomcat Server - Business Object XI

    We have installed BusinessObjects XI on Windows 2003 Standard with SP2 (x86), using Tomcat as the web server and MySQL as the DBMS. The server has 4 GB of RAM and a dual-core processor. From the beginning we have had memory problems with the Java virtual machine; the exception that occurs is a "java.lang.OutOfMemoryError: PermGen space" failure. We set the environment variables (JAVA_OPTS) -XX:PermSize=256m and -XX:MaxPermSize=512m and installed the LambdaProbe monitor to watch memory use. We noticed that the PermGen size never reached the values indicated by those variables; it stayed at the default 64 MB. For this reason we decided to replace the Tomcat 5.5 included with BO with version 6.0.20, and we also updated Java to version 1.6.0_18-b07 and redeployed with the wdeploy.bat file that comes with BusinessObjects (changing config.tomcat55 and tomcat55.xml). The deployment was successful and everything works, but as with the previous Java/Tomcat version, the PermGen memory soon fills up and the server "hangs" again. In this latest version Tomcat is installed as a service and does not use environment variables; instead there is an application, "Configure Tomcat", with a tab for passing parameters to the JVM. After looking at many sites, I saw that parameters should be entered with "-D". Currently this is my configuration:
    -Dcatalina.home = C: \ Program Files \ Apache Software Foundation \ Tomcat 6.0-Dcatalina.base = C: \ Program Files \ Apache Software Foundation \ Tomcat 6.0-Djava.endorsed.dirs = C : \ Program Files \ Apache Software Foundation \ Tomcat 6.0 \ endorsed-Djava.io.tmpdir = C: \ Program Files \ Apache Software Foundation \ Tomcat 6.0 \ temp-Djava.util.logging.manager = org.apache.juli . ClassLoaderLogManager-Djava.util.logging.config.file = C: \ Program Files \ Apache Software Foundation \ Tomcat 6.0 \ conf \ logging.properties-Dcom.sun.management.jmxremote-D-Xms2g-D-Xmx2g-D -XX: + UseConcMarkSweepGC-D-XX:-D PermSize = 256m-XX: MaxPermSize = 512m-Daf.configdir = C: / Program Files / Business Objects / Dashboard and Analytics 12.0-D-verbose: gc-D-XX : + PrintGCTimeStamps-D-XX: + PrintGCDetails
    I tried changing the values of -XX:PermSize and -XX:MaxPermSize, and modifying the various Garbage Collector policies (-XX:+UseConcMarkSweepGC, -XX:+UseParNewGC, -XX:+UseParallelGC, ...), but nothing changed. Any idea how to get the value to actually change? Or how to solve this problem?
    Thank you!

    Victor,
    Is the Product you are using BusinessObjects Enterprise XI (Release 1)?
    With XIR2 and XI31, Tomcat 5.0 and Tomcat 5.5 are included with the software, and when installed with the BOE installer, you will get an application installed to the startmenu named "Tomcat Configuration".
    Using this "Tomcat Configuration" utility, there are several configuration options available.  On the JAVA tab, you will see the JAVA_OPTS that are set (These are prefixed with "-D") and also your initial and max memory sizes are listed at the bottom (Max 1024 by default in XI31).
    Here is the default setting for permsize in XI31 Tomcat:
    -XX:MaxPermSize=256M
    From your post, your issue might be the spaces in between your values (there should be no spaces, and each "-" parameter should be on its own line).
    I would suggest starting Tomcat and reviewing your stdout.log file to review what options were set.
    Hope that helps
    -Brian
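    To illustrate Brian's point, the Java options from the post above would be entered in the "Tomcat Configuration" Java tab roughly like this (paths and sizes are the ones from the post; one option per line, no stray spaces around "=" or inside option names, and no extra "-D" prefixed to the -X/-XX switches; the heap sizes go in the tab's dedicated initial/maximum memory fields rather than here):
    -Dcatalina.home=C:\Program Files\Apache Software Foundation\Tomcat 6.0
    -Dcatalina.base=C:\Program Files\Apache Software Foundation\Tomcat 6.0
    -Djava.io.tmpdir=C:\Program Files\Apache Software Foundation\Tomcat 6.0\temp
    -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager
    -Daf.configdir=C:/Program Files/Business Objects/Dashboard and Analytics 12.0
    -XX:PermSize=256m
    -XX:MaxPermSize=512m
    -XX:+UseConcMarkSweepGC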

  • 3D performance problems after upgrading memory

    I recently purchased an additional 2GB of memory to try and extend the life of my aging computer.  I installed the memory yesterday and Windows seems to recognize it (reporting now 3.3GB) but when I dropped into WoW (pretty much the only game I have) the 3D performance was down from the usual 60FPS @ 1600x1080 to a bleak 20 (at best) and the CPU utilization went to about 80% on both cores (with ~20% kernel usages).  Basically WoW was being software 3D rendered!!!
    I went through the usual reinstall drivers, reboot, etc... and couldn't find a fix.  I powered down, pulled out 2 of the memory sticks, booted up, and dropped into WoW - it ran at the full 60FPS and CPU utilization was very low (i.e. back to GPU Hardware 3D rendering).  I powered down again, swapped the 2 sticks for the other 2 sticks, booted up, and dropped into WoW - again it ran 100% fine.  So I powered down, put all four sticks in, booted back up, and when I dropped into WoW it was running in the software 3D rendering mode (20FPS at best and High CPU/Kernel usage).
    I've tried the /PAE option in boot.ini - no joy.  I've tried /MAXMEM = to 3300, 3072, 3000, and even 2048 - no joy in any of those cases.  Has anyone seen anything like this before?  Or have suggestions to fix (other than going to Win7-64)?
    All info in signature is up to date.
    Thanks in advance for any help!

    Quote
    Well his last post was a little over 6 hours ago so he was up pretty late.
    Looks like nothing one does in here goes completely unnoticed.   
    Anyway, I am done sleeping now.
    Quote
    his 2 Pfennig's worth.  I know, I know it's Euro's now.
    Yeah, and what used to be "Pfennige" is now also called "Cents" and here are mine:
    Quote
    I've tried the /PAE option in boot.ini - no joy.  I've tried /MAXMEM = to 3300, 3072, 3000, and even 2048 - no joy in any of those cases.  Has anyone seen anything like this before?  Or have suggestions to fix (other than going to Win7-64)?
    PAE or Physical Address Extension will not do anything, as Microsoft has castrated this feature to such an extent that it has nothing to do with memory addressing anymore when it comes to Windows XP:
    http://en.wikipedia.org/wiki/Physical_Address_Extension#Microsoft_Windows
    Quote
    Windows XP Service Pack 2 and later, by default, on processors with the no-execute (NX) or execute-disable (XD) feature, runs in PAE mode in order to allow NX. The NX (or XD) bit resides in bit 63 of the page table entry and, without PAE, page table entries only have 32 bits; therefore PAE mode is required if the NX feature is to be exploited. However, desktop versions of Windows (Windows XP, Windows Vista) limit physical address space to 4 GiB for driver compatibility reasons.
    The feature is already automatically enabled.  But since its original function (address extension) no longer exists in the desktop versions of Windows XP, it won't really do anything you would ever notice.
    About the /MAXMEM switch: it actually limits the amount of physical memory Windows will use, so it will not raise any limit. The switch that addresses the per-process limit is /3GB: in 32-bit Windows every process is limited to 2GB of user address space, and /3GB (optionally tuned with /USERVA) extends that to up to 3GB. However, only applications that have been programmed (or compiled) accordingly can take advantage of it; a special flag ("large address aware") has to be set in the executable. Otherwise, the application stays restricted to 2GB even though the switch has been set to extend the 2GB limit to 3GB. Most 32-bit applications come without the "large address aware" flag, and that is why setting the switch usually won't change anything.
    In any case, it is unlikely that /PAE (even if it were not castrated), /MAXMEM or /3GB would have an impact on your actual issue, because I doubt that it has much to do with either memory addressing or the memory limit of an individual Windows process.
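    For reference only (the disk and partition numbers are just an example), a boot.ini entry using those switches would look something like this:
    [operating systems]
    multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP Professional" /fastdetect /3GB /USERVA=2900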
    Quote
    the 3D performance was down from the usual 60FPS @ 1600x1080 to a bleak 20 (at best) and the CPU utilization went to about 80% on both cores (with ~20% kernel usages).
    There are a couple of hardware based explanations to consider here.  Let's start with the most obvious one:
    1. 975X Memory Controller
    The main reason that the system chooses to automatically set the memory speed to DDR2-667 even though DDR2-800 modules are installed is that, by design, the memory controller of the Intel 975X chipset does not natively support DDR2-800 modules; see
    >>Intel® 975X Express Chipset Datasheet - For the Intel® 82975X Memory Controller Hub (MCH)<< [Page 20]
    This means, that from the point of view of the memory controller, operating the memory @DDR2-800 actually means overclocking it (with all potential side effects).
    Basically, if your initial problem disappears as soon as you reduce the memory speed to DDR2-667, the design limitation of the memory controller may explain your findings.
    2. Different memory modules
    If I read your signature correctly, you are actually mixing two different kits/models of RAM (CM2X1024-6400C4DHX and CM2X1024-6400C4).  This can work, of course, but in practice it does not necessarily do so under all circumstances. 
    This list (-> http://ramlist.i4memory.com/ddr2/) indicates that there are at least 14 different module types/revisions of Corsair DDR2-800 / CL4 modules that utilize a wide range of different memory chips (Elpida, ProMOS, Micron, Infineon, Powerchip, Qimonda, Samsung, etc.).  Even though the superficial specifications for these chips appear to be pretty similar (DDR2-800 / CL5 / CL4), this does not necessarily mean that the modules will respond to the same operating conditions in the same way. There may be small differences in sub-timings/sub-latencies and/or the general responsiveness of the ICs, which may affect the operating behaviour of the memory controller (which, by the way, also includes the PCI-Express interface that your video card is hooked up to).
    And again:  If running the system @DDR2-667 solves your issue, the possible explanation is that higher clock speeds may amplify (or trigger) potential performance problems that could have to do with the use of non-identical memory modules.
    Furthermore: It is also possible that the memory controller's design limitations and the potential compatibility problems that may be attributed to mixing different modules types may reinforce each other in terms of reduced system performance.
    3. The BIOS may have an impact as well
    There has been known issue with the use of certain video cards in conjunction with 4GB of system memory on this mainboard:
    https://forum-en.msi.com/index.php?topic=107301.0
    https://forum-en.msi.com/index.php?topic=105955.0
    https://forum-en.msi.com/index.php?topic=99818.msg798951#msg798951
    What may have come out as graphics/display corruption in earlier BIOS Releases may come out as reduced system performance when using the latest BIOS Release.  Of course, this is hard to prove, but I thought I'd mention it anyway.  May I ask what amount of video memory your card has onboard?
    Fortunately, there is a BIOS version that you could consider trying in this matter.  It is not only the last BIOS Release that could be used in order to avoid the corruption issue, but it is (in my opinion) the best BIOS version that was ever released for the 975X Platinum PUE Mainboard:  W7246IMS.716 [v7.1b6].  I have been using this mainboard for almost two years, have tested almost every BIOS Release that ever came out, and I always went back to v7.1b6 as "ground zero". 
    It will properly support your E6600 (so you don't have to worry about that) and as far as I remember, there are no known compatibility issues with other components.  So maybe, you want to give this a shot.
    The bottom line is that in a worst case scenario, the problem you describe could be caused by all of the above things at the same time.  You cannot really do anything about the 975X Chipset Specifications and the only way to rule out explanation #2 is to test modules that are actually identical (same model number, revision and memory chips).  A test of the 7.1b6 BIOS Release is something you should consider.  It may be the only way to test the BIOS Hypothesis.
    This post turned out to be longer than I intended, but then again, I am well-rested after a good sleep and the wake-up coffee is kicking in pretty good.

  • Warning about bizarre eMac Tiger installation problem due to memory

    I'm a fairly proficient user. I've upgraded the OS on my 800MHz eMac (original version with NVIDIA GeForce2 MX graphics) many times. This time, while upgrading to Tiger, I blew almost an entire weekend trouble-shooting the machine. I'm posting this so others on the forum can hopefully save themselves some time.
    My machine came from the factory with 256MB RAM and a Superdrive. I had the AppleStore add another 256MB of RAM when I bought the machine in Nov 2002. This was Apple-branded RAM, not third-party.
    When I tried to upgrade to Tiger, the installation would proceed through all the DVD media and system hard-drive tests, then successfully install one or more packages. It would then fail with the "Please try to install again" message. The installation log was showing messages about file corruption. I was able to re-install Panther from CD, which eliminated optical drive failure from the list of possible problems.
    I ran all of the hard-drive checks in Drive Genius and my drive passed. I ran all of the Apple Hardware Tests from my original system CD. Then I found some warnings on the Apple site about marginal third-party RAM causing this kind of problem. I first rejected this line of approach, since I had Apple RAM.
    Eventually I ran out of "normal" trouble-shooting, so I decided to pull the extra memory module as a last resort prior to replacing the hard drive. Once I did this, the installation ran flawlessly using the original 256MB of RAM.
    The message here is that EMAC MEMORY PLAYS A MAJOR ROLE IN EITHER DVD I/O OR TIGER-SPECIFIC INSTALLATION. Check your memory before cracking open the rest of the machine, even if all the Apple hardware tests say the memory is OK.
    I have new memory on the way from OWC and will follow-up with status when I get some runtime on it.
    As an aside, I upgraded to Tiger from Panther mainly to get the increased parental controls (this machine is destined for my kids). Now that I have Tiger, I am pleasantly impressed by the other refinements as well.
    E-Mac   Mac OS X (10.4.8)  

    The RAM test on the Apple Hardware Test disc is a compromise between being thorough and completing in a reasonable amount of time. Memtest (especially if run in single-user mode as per its instructions) offers a more thorough test.
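    If you want to try that, a rough sketch (assuming the third-party memtest tool is already installed, e.g. at /usr/local/bin/memtest): boot holding Cmd-S to reach single-user mode, then run something like:
    # two full passes over all the RAM the tool can lock
    /usr/local/bin/memtest all 2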

  • Problem of FOP memory with APEX ...

    Hello everyone !!!!
    I installed FOP on Oracle Application Server and configured APEX to use FOP to generate PDF files.
    However, FOP's memory is very limited, because it can't create PDF files if there are more than about 1,200 lines in the report I want to print.
    I found on the net that there are a couple of things to try for this:
    - launch the JVM with "java -Xmx1G" to use 1 GB of memory
    - create FOP_OPTS="-Xmx1G" in the file $HOME/.foprc
    But that file doesn't exist, and I don't have access to any JVM ...
    Could anyone help me figure out what to do?
    Thanks in advance,
    Pierre C.
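    (On the .foprc option from the question: the file can simply be created. A minimal, untested sketch, where the 1024m value is only an example; this only applies when FOP is launched through the fop command-line script rather than inside the application server:)
    # create ~/.foprc so the fop wrapper script starts the JVM with a bigger heap
    echo 'FOP_OPTS="-Xmx1024m"' > $HOME/.foprc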

    Hi Varad,
    I tried your solution, however it didn't work:
    I found the opmn.xml file and changed the java-options for the OC4J container of my application, setting them to "... -Xms512m -Xmx512m".
    Then I tried to print a PDF report with about 1,700 lines, but the file was "corrupted"; with 1,300 lines it's OK.
    I also tried the Metalink note 744866.1.
    Is there anything else to do in order to solve this problem?
    Thanks in advance,
    Pierre C.
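    For anyone following along, the opmn.xml change described above usually ends up looking roughly like this (the process-type id is a placeholder; the java-options value is the relevant part):
    <process-type id="OC4J_APEX" module-id="OC4J">
      <module-data>
        <category id="start-parameters">
          <data id="java-options" value="-server -Xms512m -Xmx512m"/>
        </category>
      </module-data>
    </process-type>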
