Oracle memory settings

Hi experts,
My database server setup is as follows:
Database: Oracle Database 10.2.0.4.0 Standard - 64bit
Server OS: Windows 2008 R2 64bit
RAM: 20GB
4 processors
Running on a VM
If I were to specify the SGA as 10GB, does Oracle know the 10GB of allocated memory is available to use? Do I need to set the /AWE or /PAE switch in the Windows boot.ini?
Does Windows 2008 64-bit have the 4GB memory limitation that would require the above?
As for the 4 processors, how do I tell Oracle to use those CPUs efficiently?
Thank you,
Faizal.

user3657459 wrote:
If I were to specify the SGA as 10GB, does Oracle know the 10GB of allocated memory is available to use? Do I need to set the /AWE or /PAE switch in the Windows boot.ini?
Does Windows 2008 64-bit have the 4GB memory limitation that would require the above?
No need, as 4GB is a limitation of using a 32-bit number to address memory. 64-bit can (in theory) address 16 EB (exabytes) of memory.
64-bit Windows can address far more than 4GB. See the MSDN memory limits article for details.
As for the 4 processors, how do I tell Oracle to use those CPUs efficiently?
No need to. Oracle will detect the number of CPUs available and use that as a baseline for configuring a number of parameters.
Also, the issue is not whether Oracle is using RAM efficiently or using CPUs efficiently.
The issue is, are YOU and your code using Oracle efficiently?
That determines how well Oracle can perform and how well Oracle can scale.
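To put numbers on the addressing limits mentioned in the reply, here is a quick sketch (plain arithmetic, nothing Oracle-specific):

```python
# Maximum addressable memory is 2**bits bytes for a flat address space.
def max_addressable_bytes(bits):
    return 2 ** bits

GIB = 2 ** 30  # gibibyte
EIB = 2 ** 60  # exbibyte

print(max_addressable_bytes(32) // GIB)  # 4  -> the 32-bit 4GB ceiling
print(max_addressable_bytes(64) // EIB)  # 16 -> the theoretical 16 EB 64-bit limit
```

This is why no /AWE or /PAE boot switch is needed on 64-bit Windows: a 10GB SGA fits comfortably inside the 64-bit address space.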

Similar Messages

  • Shared Memory Settings for Oracle - Machine Spec Specific?

    Hi,
    I've been asked to apply some shared memory settings for Oracle on a new V210. The DBA has asked that we simply copy the settings from one system file to the new machine.
My problem is that I'm not entirely sure whether this is the best course of action. The machine the settings are taken from is an E280R with 2GB of memory and 2 x 900MHz processors. The new V210 has 2GB and 2 x 1.33GHz.
    Surely the memory settings should be different? Don't worry, I've copied the relevant settings - not the whole system file!
    Thanks for any advice,
    John

    Hi John,
Normally, when applying shared memory settings on production machines, we consider about one third of your physical memory. For example, if you have 16GB of physical memory, I would consider around 4GB for shm_max.
    Thanks,
    Keshav .N
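As a sketch of the one-third rule of thumb above (note the reply rounds 1/3 of 16GB down to about 4GB; the exact fraction is a judgment call):

```python
# Rough shared-memory sizing: cap shm_max at about one third of physical RAM.
def suggested_shmmax_bytes(ram_gb, fraction=1 / 3):
    return int(ram_gb * fraction * 2 ** 30)

print(suggested_shmmax_bytes(16) // 2 ** 30)  # 5 -> roughly 5 GiB for a 16 GiB box
```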

  • Installation of central instance - memory settings

    Hi All,
I have a question about the memory settings regarding the ABAP / Java Add-In installation (central system).
    Installation of CI:
    When I install the central instance, I provide a value for the Instance Memory Management.
    Installation of DI:
    In the step where I install the database instance, I provide again a value for Instance Memory.
    Q1: The memory value for the Central instance later is visible in the profile for the CI. But where does the DI value go? How could I change it?
    Q2: The value for the Java JVM (heap size) - is this value additional to the value of the CI, or part of the CI's memory. So could I set heap size to 2M while the CI's memory is set to 1M?
    Any hints are appreciated.
    Thx.
    KB
    System
    - Win2K (Win32)
    - Oracle

    Q1: The memory value for the Central instance later is visible in the profile for the CI. But where does the DI value go? How could I change it?
--> You can ignore this; there is no INSTANCE memory for your database. If you want to configure the memory settings for your RDBMS software, it depends on which software it is... For SQL Server it's in the Enterprise Manager; in Oracle you can edit the init<SID>.ora file...
    Q2: The value for the Java JVM (heap size) - is this value additional to the value of the CI, or part of the CI's memory. So could I set heap size to 2M while the CI's memory is set to 1M?
    --> This value is specific to your JVM heap, so it's not additive or related to your CI.

  • Linux memory settings

    We installed the suite on a linux server in a managed configuration (AdminServer, soa_server1, bam_server1).
At first we did not change the memory settings, so all three processes ran with:
-Xms512m -Xmx1024m -XX:PermSize=128m -XX:MaxPermSize=512m
With these settings the suite ran OK at first, but gradually the performance deteriorated.
    Using jconsole I looked at the memory consumption of the processes and the soa_server1 was using up all memory while the AdminServer and bam_server1 had plenty of memory.
I changed the memory settings specifically for each server:
    AdminServer, bam_server1
    -Xms768m -Xmx768m -XX:PermSize=128m -XX:MaxPermSize=256m
    soa_server1
    -Xms1536m -Xmx1536m -XX:PermSize=128m -XX:MaxPermSize=512m
The processes have been running stably up to now.
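A quick sanity check of why the revised split helps on an 8 GB box: the worst-case heap-plus-PermGen footprint of the three JVMs drops. (The per-JVM model below ignores thread stacks and native overhead, which add more on top.)

```python
# Worst-case committed Java memory per JVM ~= -Xmx + -XX:MaxPermSize.
def jvm_footprint_mb(xmx_mb, max_perm_mb):
    return xmx_mb + max_perm_mb

# Original settings: all three servers at -Xmx1024m, MaxPermSize=512m.
before = 3 * jvm_footprint_mb(1024, 512)
# Revised: AdminServer and bam_server1 shrunk, soa_server1 enlarged.
after = 2 * jvm_footprint_mb(768, 256) + jvm_footprint_mb(1536, 512)

print(before)  # 4608 (MB)
print(after)   # 4096 (MB) -- more headroom for the OS on the 8 GB server
```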
    The server:
OS: Oracle Enterprise Server release 4 update 7
CPU: 4 x E7340 @ 2.4 GHz
RAM: 8 GB
    Any other experiences with memory settings on linux and/or using a single server setup vs the shared configuration?
    Gr,
    Gert Jan Kersten

About rebuilding the kernel on Linux 7.0:
    1. Try to look at file /usr/src/linux/include/asm/shmparam.h
    Update parameters SHMMAX, SHMMIN, SHMMNI, SHMSEG.
    SHMMAX is set to 50% of RAM
    SHMMIN 1
    SHMMNI 100
    SHMSEG 10
    2. edit file /usr/src/linux/include/linux/sem.h
    and set param
    SEMMSL 250
    SEMOPM 100
    SEMVMX 32767
3. Then configure the kernel, selecting the modules and drivers you need.
    >cd /usr/src/linux
    >make xconfig
    4. Save configure.
    5. >cd /usr/src/linux
    >make clean
    >make dep
    >make bzImage
6. The make bzImage command creates the new kernel as the file bzImage in the directory /usr/src/linux/arch/i386/boot.
7. Copy the new kernel to the /boot directory.
8. Edit /etc/lilo.conf and add a new entry for the new kernel.
    9. write lilo to disk
    >cd /etc
    > lilo -c
10. Be sure that you can still boot the old kernel !!!!!
    11. restart.
12. After a successful reboot into the new kernel, check /proc/sys/kernel/shmmax, which holds the maximum shared memory segment size.
On Linux 7.1 the files to edit have different paths and names:
    /usr/src/linux-2.4/include/linux/sem.h
    /usr/src/linux-2.4/include/linux/shm.h

  • Messaging memory settings can block filetransfer

I, as probably most businessmen/women do, save (some of) my SMS messages to my memory card for safety. Therefore I've adjusted the memory settings of Messaging.
This causes an error when I want to use direct file transfer. This error blocks, for instance, a lot of software installations. And I don't feel like changing the settings twice every time I want to use file transfer.
Is this another bug, or is there a solution?

    Hi,
    In general a good starting point for allocating memory is as follows
    OLTP system
    20% memory for OS, 80% for Oracle
    DSS
    30% memory for OS, 70% for Oracle
So at a rough guess you will have between 22 and 26 GB of RAM available for Oracle. The next thing to think about is the split between SGA and PGA. Another rule of thumb is as follows:
    - For OLTP systems
    PGA_AGGREGATE_TARGET = (<Total Physical Memory > * 80%) * 20%
    - For DSS systems
    PGA_AGGREGATE_TARGET = (<Total Physical Memory > * 80%) * 50%
So this arrives at PGA figures between roughly 5GB and 13GB.
This then leaves the rest of the memory for SGA_MAX_SIZE, and you can start your database with an SGA_TARGET of something less than SGA_MAX_SIZE. These figures are a guide only.
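The rules of thumb above are simple percentages. A sketch (the 32 GB total is an assumption chosen to match the 5-13 GB range quoted):

```python
# Rule-of-thumb PGA_AGGREGATE_TARGET sizing from the reply:
#   OLTP: (total physical memory * 80%) * 20%
#   DSS:  (total physical memory * 80%) * 50%
def pga_target_gb(total_gb, workload):
    oracle_share_gb = total_gb * 0.80  # memory left after the OS slice
    factor = {"OLTP": 0.20, "DSS": 0.50}[workload]
    return oracle_share_gb * factor

print(pga_target_gb(32, "OLTP"))  # ~5.1 GB
print(pga_target_gb(32, "DSS"))   # ~12.8 GB
```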
    for tuning the PGA you can look at the following view for a quick estimate of what the system thinks is the best value.
    The following select statement can be used to find this information
    SELECT round(PGA_TARGET_FOR_ESTIMATE/1024/1024) target_mb,
    ESTD_PGA_CACHE_HIT_PERCENTAGE cache_hit_perc,
    ESTD_OVERALLOC_COUNT
    FROM v$pga_target_advice;
    The output of this query might look like the following:
    TARGET_MB CACHE_HIT_PERC ESTD_OVERALLOC_COUNT
    63 23 367
    125 24 30
    250 30 3
    375 39 0
The smallest target where ESTD_OVERALLOC_COUNT is 0 is the figure the PGA should be set to.
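Reading the advisor output mechanically: take the smallest estimated target whose ESTD_OVERALLOC_COUNT is zero. A sketch over the sample rows above:

```python
# Rows of (target_mb, cache_hit_perc, estd_overalloc_count), as returned by
# the v$pga_target_advice query above.
advice = [(63, 23, 367), (125, 24, 30), (250, 30, 3), (375, 39, 0)]

def min_safe_pga_target_mb(rows):
    # Smallest target the advisor estimates would never over-allocate.
    safe = [target for target, _, overalloc in rows if overalloc == 0]
    return min(safe) if safe else None

print(min_safe_pga_target_mb(advice))  # 375
```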
    rgds
    alan

  • Oracle memory gets trimmed every 6 hours

    We have a very strange behaviour in our SAP R/3 Enterprise 4.7 production system (SAP_BASIS 620).
    SAP runs on Windows Server 2003 Enterprise Edition.
    10 GB RAM, PAE enabled (Physical Address Extension).
    The affected server is the database server, which also runs some working processes (DIA, BTC and UPD).
    There are also 6 Windows application servers (x32, x64 and Itanium).
    After a normal SAP start, all Windows processes bit by bit allocate their memory.
    oracle.exe starts with a Mem Usage 236 MB (VM Size 1.900 MB).
    You can see this in Windows Task Manager.
    After about 30 minutes oracle.exe reaches its average value of about 2 GB.
The value ranges from 1.9 GB up to 2.5 GB.
    Then, about every 6 hours the following happens:
    oracle.exe deallocates its memory completely !
    No answer in SAPGUI, no reaction on the console for about 5 Minutes.
Then, when I get a first look at the Task Manager, I see that oracle.exe has allocated about 80 MB.
Over the next 20 minutes, Mem Usage rises back to the average value of about 2 GB.
    During this time, the performance comes up again step by step.
Not only Oracle is affected; at least every disp+work process also frees all allocated memory.
    But it seems as if Oracle would be the first to free up its memory and then drags down the SAP Kernel processes.
We have made no changes to the SAP kernel, and we did not apply any Windows updates.
    SAP operated error-free for the last 2 years in this configuration.
    The only thing we did, was to apply several SAP Support Packages (Basis + Application).
This behaviour occurred the day after we imported those packages.
So we have to suspect these packages, although the symptoms point to a problem with the SAP kernel, Oracle, or the Windows memory management.
SAP Support advised us to reduce the load on the server, so we suspended some work processes.
    Result: no improvement.
    Next we reduced the Oracle cache size by 250 MB.
Result: the situation became even worse; the error occurred every hour.
So we increased the cache size to 1.36 GB.
    Result: could be an improvement, not sure yet.
    I am wondering what must happen, that all processes on a Windows Server deallocate their memory.
Can an ABAP report provoke this error?
    Has anybody else ever seen such a behaviour ?
    Any ideas ?

    Thx for your interest in this issue.
    For clarification:
    - Database version is 9.2.0.7.0
    - We will upgrade to 64 Bit in the next months, but we still need a solution for our 32 Bit system.
- We did not add new application servers. These servers were up and running before and after the problem occurred.
    - I don't think that Oracle restarts. There are no ORA-entries in the Oracle Log and there is no Oracle-Usertrace.
The system slows down because every byte that is backed up in the paging file (as far as I know, in MS terms this is called the "standby list") has become invalid and must be read from disk.
    Not only Oracle is affected, every process trimmed its working set.
    For example Terminal Services is unresponsive for about 4 minutes.
    In the end all processes continue their work, but it takes some time until their working set has been restored from the paging file.
    No errors occur, no Dumps, no EventLog or SystemLog entries.
    There are just some TIMEOUTs, caused by the unresponsiveness of the server in the first minutes of the memory crash.
    @Markus:
Yes, I also think that we reached some kind of Oracle memory limit.
Since we increased the Oracle cache size, the frequency of the error has been significantly reduced.
But I am still wondering what funny things can happen.
    I would expect Oracle to crash, Windows to bluescreen, SAP to dump.
    But freeing the memory of all processes is something completely new to me.
    Edited by: Leonhard Bernhart on Jan 8, 2008 5:10 PM
    Edited by: Leonhard Bernhart on Jan 8, 2008 5:11 PM

  • How to configure the memory settings based on the number of VSAs for Cisco service control Subscriber Manager?

Hey, good day to all,
Please help with this: when installing the Service Control Management Suite Subscriber Manager (scms-sm), at the second step, where you have to determine the system memory settings, there are several attributes to be considered:
the maximum number of subscribers
with or without quota manager
the number of VSAs used
So, my question is about the memory configuration parameters versus the number of VSAs used. You must multiply by certain values, set out in a table on the Cisco website, and as shown in the example under the table these values are multiplied into all the attributes, except that the example doesn't show the value of the temp-size memory.
So please confirm this for me:
the temporary memory size "temp_size" is not related to the number of VSAs implemented!
This is a screenshot from the Cisco website:
Thank you in advance for helping

    Hi Tessitori,
The best way to cache, index, and query that amount of data in Coherence is to use a number of stand-alone JVMs (i.e. com.tangosol.net.DefaultCacheServer instances) to 'manage' the data, then access (query) that cache from your application server instances. For an indexing and querying example, take a look at this FAQ item.
    If you would like to discuss this further please email me at [email protected]
    Later,
    Rob Misek
    Tangosol, Inc.
    Coherence: Cluster your Work. Work your Cluster.

  • No Bios Update for Satellite: L755D-S5204 / Inability to change Memory Settings in Bios

OK, so I recently bought two new BLUE Kingston HyperX 1600 MHZ memory modules (8GB total) with heat spreaders from Newegg.
I installed them, and my system recognizes the full 8 GB, but when running CPU-Z the memory MHZ registers at 665.5 MHZ.
I want to be able to run my memory at at least the STANDARD minimum of 1333 MHZ, if not be able to change the MHZ to 1600 MHZ if possible. I CANNOT DO THIS WITH THE BIOS THAT CAME WITH MY SYSTEM, AND I CANNOT FIND A BIOS UPDATE ANYWHERE ON THE TOSHIBA SITE THAT WILL LET ME DO THIS.
My Satellite L755D-S5204 laptop has the basic BIOS that came with the system. However, I cannot find any flash updates for the BIOS so I can change these settings.
My question to Toshiba Tech Support: is it possible to clock my system memory higher, and how do I do this with a BIOS that doesn't let me make any changes to the voltage of my memory, and doesn't let me make any changes to system memory or memory timing settings?
When you figure that out, if you could possibly help me resolve this problem, please email me, because as a computer technician I am really intrigued as to why Toshiba doesn't have a FLASH BIOS UPDATE that will let you change the memory settings and memory timings!!
    Thanks,
    Michael Richins
    Comptia A+ Computer Technician

The BIOS does not provide such an option!
I have no idea what graphics card you have, but for example the Sat P850-138 was equipped with an NVIDIA GeForce GT 630M graphics card.
This GPU supports dedicated VRAM (default 2048MB).
The available graphics memory can be expanded using system memory, through TurboCache.
If the system memory were expanded to 6GB RAM, the TurboCache technology could use up to 4,095 MB of VRAM.

  • PI 7.1 memory settings ?

    Hi Guys,
I have installed PI 7.1 and I am looking for the ABAP and Java memory settings. I am facing a lot of short dump errors on the ABAP side when installing the support packs.
I have restored the image and now I want to set the memory parameters on both the ABAP and Java sides before I start patching again.
Any help or links for the memory settings on both the ABAP and Java sides would be a great help.
    Thanks,
    Srini

Hi Charles,
The PI 7.1 default JVM heap size is 2GB.
You can check your available heap memory in the Web MMC.
    http://<yourhostname>:5<SIDnumber>13
    Select your Java server node, you can check the memory.
Please check the following notes:
    note 894509 - XI Performance Check
    note 1060264 - PI Troubleshooting Guide 7.1
    note 1248926 - AS Java VM Parameters for NetWeaver 7.1 based products
    Best Regards,
    Michikuni

  • Oracle memory

The Oracle 10g on AIX installation document asks to pre-check that the server has at least 1GB of memory. My question: is that 1GB just for Oracle itself, i.e. will Oracle itself take at least 1GB from the total memory of the box? Are the SGA and PGA included in this amount, or are they extra memory usage on top of it?
For example, the box has 4GB of memory and I see 58% used; does that mean Oracle itself takes about 1GB, plus sga_max=1GB + pga_aggregate_target=0.5GB? Is this correct?

    Dear Betty,
Oracle will mostly take up its shared pool on the server as shared memory.
So if you want to calculate the Oracle memory requirements for the server, consider:
Total Server Memory = SGA_MAX_SIZE + PGA_SIZE + DEDICATED_USER_CONNECTIONS + OTHER_SERVER_APP
The DEDICATED_USER_CONNECTIONS term applies if your users connect in dedicated server mode via their tnsnames. If so, you should consider this additional space that Oracle will request from the server in order to create those user connections.
AIX process memory is not easy to inspect: what is really occupied, what is free, what is cached, and how many users can connect to the server. You should use pmap to map each process's private and shared areas and calculate how much is used, shared, and free.
The other thing is that the SGA should be pinned in RAM, so the database takes all its space at startup and your server avoids 'allocating new pages' issues. Set the LOCK_SGA=true parameter if you want to lock the SGA into memory.
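The budget formula above is a straight sum. A sketch with illustrative figures (the session count and per-session size are assumptions, not from the post):

```python
# Total Server Memory = SGA_MAX_SIZE + PGA + dedicated user connections + other apps.
def oracle_memory_budget_gb(sga_gb, pga_gb, sessions, mb_per_session, other_gb):
    dedicated_gb = sessions * mb_per_session / 1024  # dedicated-server connections
    return sga_gb + pga_gb + dedicated_gb + other_gb

# e.g. 1.5 GB SGA, 0.5 GB PGA target, 100 dedicated sessions at ~5 MB each,
# 0.5 GB for everything else on the box:
print(oracle_memory_budget_gb(1.5, 0.5, 100, 5, 0.5))  # ~3 GB of a 4 GB server
```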
    Cheers,
    Ricardo Rodriguez

  • Passing memory settings from the Node Manager to the Managed Server

    All,
    I am trying to start a Managed server from the Node Manager. The configuration is as follows:
I have a domain with two managed servers (M1, M2). I could start both of them using the shell scripts without any problems. Both of the servers have custom memory settings (-Xms3072m -Xmx3072m -XX:PermSize=256m) to facilitate deployment and effective operation of the managed servers (the applications being deployed are fairly large).
    Now, I am trying to start the same two servers using Admin Console:
    In the admin console: I created a Unix Machine(U1) assigned the managed servers (M1, M2) to U1. I started the Node Manager on U1.
When I try to start a managed server from the console, the server starts, but the memory settings (-Xms3072m -Xmx3072m -XX:PermSize=256m) are not passed to the managed server startup, so the application deployment fails with out-of-memory errors. I tried assigning the same memory settings to the Node Manager, assuming it would pass that configuration on to the managed server, without any luck.
    Any help is appreciated.
    Thanks
    Raj

    Starting Managed Server from Node Manager

  • SQL Server Max Memory Settings

    Hi,
I'd like to check whether SQL Server will consume more memory than the configured max memory setting. And if so, when does SQL consume it and how much would it consume?
    Regards,
    Jay

    Hi,
I'd like to check whether SQL Server will consume more memory than the configured max memory setting. And if so, when does SQL consume it and how much would it consume?
    Hi
Can you please tell us the version and edition of SQL Server here? If it is 2012, it is a little difficult to reproduce a scenario where SQL Server 2012 takes more than the max server memory setting, because many features that used to take memory outside the buffer pool before SQL 2012 now take memory from the buffer pool. A lot also depends on whether the system is 32-bit or 64-bit.
For SQL Server versions below 2012 (not SS2000) you might get lucky with the following (taken from here):
    1. COM Objects
    2. SQL Server CLR
    3. Memory allocated by Linked Server OLEDB Providers and third party DLL’s loaded in SQL Server process
    4. Extended Stored Procedures:
    5. Network Packets
6. Memory consumed by memory managers, if the memory request is greater than 8 KB and needs contiguous allocation.
    7. Backup
If you heavily use the above features you might see SQL Server memory utilization crossing the max server memory setting. Of all the above, SQLCLR and extended stored procedures would be my bet; if you use them heavily you might see what you describe. Extended stored procedures have performance issues, so use them at your own risk. Use the query below to check SQL Server memory utilization (works on SS 2008 and above):
select
    (physical_memory_in_use_kb / 1024) Memory_usedby_Sqlserver_MB,
    (locked_page_allocations_kb / 1024) Locked_pages_used_Sqlserver_MB,
    (total_virtual_address_space_kb / 1024) Total_VAS_in_MB,
    process_physical_memory_low,
    process_virtual_memory_low
from sys.dm_os_process_memory

  • Release Oracle memory

    Hi All,
After searching the internet, I found that the maximum memory allowed for Oracle on 32-bit Linux is 2GB. Actually I don't know which components are included in the 2GB! But when I mistakenly set PGA_AGGREGATE_TARGET to 7GB, Oracle still allowed me to set it! (I only wanted to set 700MB.)
After I set it to 700MB, I found that total PGA inuse and total PGA allocated from v$pgastat grew continuously and then the instance died. I also checked that the values generated by this SQL were growing too! It died after total PGA inuse reached around 1.7GB.
SELECT ROUND(pga_target_for_estimate /(1024*1024)), estd_pga_cache_hit_percentage, estd_overalloc_count
FROM v$pga_target_advice;
1. Will Oracle release memory? (after I performed lots of INSERTs with GROUP BY statements)
2. How do I prevent Oracle from dying after several LGWR switches? (Alert log message: "ORA-04030: out of process memory when trying to allocate 8512 bytes (pga heap,kgh stack)")
    Thanks!

    linuxos wrote:
Do you mean each non-SGA process (PMON, SMON, LGWR) can have 2GB, and all Oracle memory processes may be over 2GB in total, e.g. 20GB, 40GB?
No. There are two basic memory areas that are used by "Oracle" - which is a collection of processes.
The "brains" of Oracle is a shared memory segment called the SGA. Each Oracle process will attach itself to it. Depending on the process listing you do (and how it displays memory utilisation), it may look like there is 20GB of memory being used. Make sure to differentiate between what is shared memory (a single global shared memory structure) and what is not.
The second basic memory area is the process image loaded by the kernel. This has a code segment (fixed for that executable) and a data segment. The latter can be dynamic, depending on whether or not the process dynamically allocates memory to itself. This allocated memory is private process memory - not shared with any other process. In Oracle terminology this is referred to as the PGA.
But how can I make the large amount of DML run slowly, or recycle the memory, so that the memory can be released instantly? Actually, the PGA cannot go above 1.7GB!
You can't. What you can and should do is size the memory areas for Oracle correctly, given the available resources of the server and the expected utilisation of Oracle.
    For example, if you service a lot of data warehouse type processes (complex and slow running queries), that would mean using Oracle dedicated server - and if there are a 100 of these processes, you will have a 100 PGAs to cater for. Otoh, if you have a 500 users all running short and fast OLTP transactions, you would rather want to use shared server processes where perhaps 50 shared server processes can service 500 concurrent session - thus you would need to size for 50 PGAs.
    Perhaps you cannot cater for 50 PGAs without reducing the SGA (and in turn the size of the db buffer cache and various other caches). This can affect performance.
Thus there is a balance in terms of performance when deciding how much memory you should assign to the SGA and how much can be reserved (as free kernel memory) for PGA usage.
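The dedicated-versus-shared trade-off above is easy to put in numbers. A sketch with an assumed 10 MB of PGA per server process (that figure is illustrative only):

```python
# Memory to reserve for PGAs = number of server processes * PGA per process.
# Dedicated server: one process per session. Shared server: a small pool of
# processes services many sessions.
def pga_reservation_mb(server_processes, pga_mb_per_process=10):
    return server_processes * pga_mb_per_process

print(pga_reservation_mb(500))  # 5000 MB: 500 OLTP sessions on dedicated servers
print(pga_reservation_mb(50))   # 500 MB: 50 shared servers handling the same load
```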
    As for running large DMLs... that should not place heavy strain on memory at all - as that will be using the db buffer cache, residing in the SGA. This is a fixed memory area sized up front. Not something that can grow by itself. Obviously a buffer cache plays a role in reducing physical I/Os - and that needs to be sized accordingly if you want to reduce PIOs and increase performance.
    However, if these DMLs are a result of poorly written PL/SQL code that attempts to "better performance" by bulk processing, this code can seriously dent memory growth as this processing (by the Oracle server process running that PL/SQL code) will require to increase PGA to cater for bulk processing.
    Get the bulk processing wrong and run just a couple of these bad bulk processes, and the kernel can spend over 90% of its time on swapping.
    I suggest that you read Memory Architecture in the [Oracle® Database Concepts|http://download.oracle.com/docs/cd/B19306_01/server.102/b14220/memory.htm#i12483] guide.

  • CPO Oracle database settings

    Hi,
    The CPO install guide mention to modify the Oracle database settings as follow:
    "For the Oracle DB instance dedicated to TEO, use the following script to change the case-sensitive settings in the Oracle database. The user must have database administrative rights to execute this script.
    After changing the settings, the user must reset the database server.
    Script
    ALTER SYSTEM SET NLS_COMP=LINGUISTIC scope=spfile;
    ALTER SYSTEM SET NLS_SORT=BINARY_CI scope=spfile; "
We are deploying TEO in an existing database containing other applications. Modifying the database properties as indicated above may (we are still investigating) create issues with the other existing applications.
What is the impact of not applying the above changes to the database?
    Thank you,
    JB

    Julio suggested checking out these two bugs:
    CSCtr89012 Oracle database is case sensitive
    CSCts03785 Operations -> search is case sensitive in Oracle
He said it was close to the end of regression and it was safer for them to ask for case-insensitive settings.
The drawback of not having this would be that some SELECT-type statements would not work, or some things in tasks might not work...
i.e. "task" would not match up with "Task"
    I would suggest contacting Don Freeman and/or Julio Silveria for more information.

  • K8T Neo-FIS2R - 3200+ - Mushkin Level 1 RAM: Optimal Memory Settings

Just got a K8T Neo-FIS2R with an AMD Athlon 64 3200+ and 1GB (512MB x 2) of Mushkin Level 1 PC3500 RAM, as recommended by Anandtech in his High End System roundup:
www.anandtech.com/guides/showdoc.html?i=2041
I am having HUGE problems with the memory settings in the BIOS.
On AUTO, the memory is only autodetected at DDR200, which I found odd.
I then took the memory settings into my own hands, and things don't seem much better. I made these changes:
MEMCLOCK: DDR400
BANK INTERLEAVING: AUTO
BURST LENGTH: 8 BEAT
CL: 2
TRCD: 3
TRAS: 5
TRP: 2
This was unstable at first, so I bumped the DDR voltage to 2.7 and moved the sticks to DIMM1 and DIMM3 (instead of DIMM2). It now seems stable, but not very fast according to Sandra scores.
Any recommendations for memory settings? Why would the motherboard not autodetect DDR400 RAM? I am not looking to overclock... just to maximize the CPU and memory I have.
Maybe I just need to get new memory. What is the best memory for this motherboard?
Thanks,
Steve

I'm running the same mem (see my sig). Since it's PC3500, I attempted to tighten the settings to those you had listed (2,3,2,5). It was not Prime95 torture-test stable even at the stock 200 MHz FSB using these settings. Upping the voltage to 2.7 did not increase the stability of my system at these settings. The memory ran perfectly at AUTO settings, but was clocked at 2.5,3,3,8 I believe.
Keep in mind that Level 1 is advertised at 2,3,3. My system is 100% stable at 2.7V up to 210 MHz FSB at these settings (I haven't tried any faster). My TRAS is set to 6.
Hope this helps a fellow Mushkin user
