RAM for RAC implementation

Dear All,
Is it a core requirement of RAC that the RAM in both machines be exactly the same?
I have 2 linux machines with 16 GB and 14 GB RAM respectively.
After installation the RAM might be updated to 16 GB in both machines.
Regards,
Imran

misterimran wrote:
Dear All,
Is it a core requirement of RAC that the RAM in both machines be exactly the same?
I have 2 linux machines with 16 GB and 14 GB RAM respectively.
After installation the RAM might be updated to 16 GB in both machines.
Regards,
Imran
Hi,
The nodes don't have to match in terms of memory, but every node must meet the minimum memory requirement.
Cheers
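A quick way to confirm each node's memory before installing: the grep works on any Linux node, while the cluvfy invocation is only illustrative (the staging path and node names are assumptions):

```shell
# Show total physical memory on this node (Linux)
grep MemTotal /proc/meminfo

# Or let the Cluster Verification Utility check all prerequisites for you
# (staging path and node names below are hypothetical):
#   ./runcluvfy.sh stage -pre crsinst -n node1,node2 -verbose
```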

Similar Messages

  • OVM disks for RAC implementation

    Dear All
    Is there any guide available on how to create the disks for RAC ASM in OVM 3.3.1 using Fibre Channel block-level storage?
    Thanks
    George

    You are right, you can't use virtual disks for RAC configuration. Have a look here (especially page 18):
    http://www.oracle.com/technetwork/products/clustering/oracle-rac-in-oracle-vm-environment-131948.pdf
    Using physical disks means that you create LUNs on your storage array connected to the Oracle VM Servers over Fibre Channel. You map these LUNs to all servers in the pool, or to all standalone servers where you are going to install the virtual machines that will be Clusterware nodes. Then you rediscover storage in Oracle VM Manager, mark these LUNs as "shared" in OVMM and add them to your virtual machines as "Physical disks" (by editing guest properties in OVMM).
    Alternatively you can directly map iSCSI or NFS storage to your guests. By "directly" I mean you use IP addresses and software in your guests as iSCSI initiator or NFS client - without engaging Oracle VM in the middle.
    Regards,
    Michal

  • Oracle 11gR2 (2 node) RAC Architecture requirements for ASM implementation

    My architecture is the following:
    * RAC1 server of Red Hat Linux Enterprise 5.5 64bit
    * RAC2 server of Red Hat Linux Enterprise 5.5 64bit
    * NFS Server (SAN) with Red Hat Linux Enterprise 5.5 64bit
    - Exported files systems for Shared Data, OCR and Voting Disks to nodes RAC1 and RAC2
    I've installed the ASM packages onto my NFS server (SAN) and then realized that I still need a way to share the storage. It is my understanding that Oracle ASM for Red Hat Linux Enterprise 5.5 64bit DOES NOT PROVIDE shared storage to the other nodes (RAC1 and RAC2). It seems that I need something else?
    I've implemented Oracle RAC 11gR2 using NFS and wanted to try building Oracle RAC using ASM in my playground (home servers).
    Does anyone have any ideas on how I might be able to use ASM without having true network shared storage? I'm using NFS because it is part of Red Hat Enterprise 5.5.
    Any ideas are appreciated !!!

    Hi,
    I wouldn't recommend NFS for RAC in an enterprise solution. However, if you are doing this for your playpen, then this is what you can do.
    Once the NFS shares are presented to the servers, you need to mount your filesystems on the RAC servers to access the NFS shares.
    e.g., if the following NFS shares are presented to the RAC nodes:
    /mnt/disk1
    /mnt/disk2
    And let's say you want to have your OCR and voting disks on /mnt/disk1 and the database on /mnt/disk2.
    First, you need to mount the shares on the RAC nodes as follows. I will mount /mnt/disk1 as /u01/shared for my OCR and voting disks and /mnt/disk2 as /u01/asmdata, by adding the following to my fstab file:
    nfs_server:/mnt/disk1 /u01/shared nfs rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,noac,vers=3,timeo=600 0 0
    nfs_server:/mnt/disk2 /u01/asmdata nfs rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,vers=3,timeo=600 0 0
    Mount the filesystems on both servers and then create the disk files in each filesystem (on NFS you use zero-filled files rather than real block devices). You don't need to run oracleasm to create disks on them; instead, change your ASM discovery path to the location of the disk files and mount your ASM diskgroups.
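    For example, creating the disk files and pointing ASM at them might look like this; the paths, sizes and diskgroup name are illustrative assumptions, not from the post:

    ```shell
    # Mount the NFS shares on every node (uses the fstab entries above)
    mount /u01/shared
    mount /u01/asmdata

    # Create zero-filled files to serve as ASM disks, e.g. four 5 GB disks
    for i in 1 2 3 4; do
      dd if=/dev/zero of=/u01/asmdata/asm_disk$i bs=1M count=5120
    done
    chown oracle:dba /u01/asmdata/asm_disk*

    # Then, in the ASM instance, point the discovery string at the files:
    #   ALTER SYSTEM SET asm_diskstring = '/u01/asmdata/asm_disk*';
    #   CREATE DISKGROUP data EXTERNAL REDUNDANCY
    #     DISK '/u01/asmdata/asm_disk1', '/u01/asmdata/asm_disk2';
    ```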
    That is how I got my 11g RAC setup.
    Hope that helps.
    Pranilesh

  • RAC implementation learning

    Hi Gurus,
    I would like to upgrade my skills to learn RAC, as I am eager to do a RAC implementation.
    I would like to first implement it on a local PC. Is it possible on a local laptop?
    Laptop OS: Vista 32-bit,
    3 GB RAM and 250 GB hard disk.
    I would like to carry out 11g/10g RAC implementation.
    Could you please provide me a link (if any, by Oracle) which covers a detailed RAC implementation along with the software and so on?
    Will the 11g database have the RAC-related components, or do any other components need to be downloaded?
    Please help me
    Thanks & Regards
    New beginner at RAC

    I would like to upgrade my skills to learn RAC, as I am eager to do a RAC implementation.
    I would like to first implement it on a local PC. Is it possible on a local laptop?
    Laptop OS: Vista 32-bit,
    3 GB RAM and 250 GB hard disk.
    Install VMware and test with 10gR2/11gR1 RAC:
    http://www.oracle-base.com/articles/10g/OracleDB10gR2RACInstallationOnCentos4UsingVMware.php
    http://www.oracle-base.com/articles/10g/OracleDB10gR2RACInstallationOnWindows2003UsingVMware.php
    Oracle software:
    http://www.oracle.com/technology/software/products/database/index.html
    DOCs
    http://tahiti.oracle.com

  • Oracle 10g RAC implementation running out of space

    I have an Oracle RAC implementation set up on a Sun StorageTek 6140 for storage. I have allocated 100 GB of space to Oracle but am constantly running out of space during operations. I know that I can allocate additional disk space using Common Array Manager for the StorageTek. How do I get ASM to recognize that there is additional space available to it?

    Hi buddy,
    How do I get ASM to recognize the fact that there is additional space available to it?
    You have two options:
    1- The first one is to create a new LUN, configure it at the OS level and add the disk to the diskgroup:
    alter diskgroup <DISK_GROUP_NAME> add disk '<DISK_DEVICE_PATH>';
    2- The second one is to increase the LUN size (if possible, of course) and resize the disk:
    alter diskgroup <DISK_GROUP_NAME> resize disk '<DISK_NAME>' size <NEW_SIZE>;
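    Filled in with concrete names, the two options might look like this; the diskgroup DATA, disk path, disk name and sizes below are hypothetical, for illustration only:

    ```shell
    # Run as the Grid/ASM owner; names and paths are made up for the example.
    sqlplus -s / as sysasm <<'EOF'
    -- Option 1: a new LUN, already partitioned and visible to ASM discovery
    ALTER DISKGROUP data ADD DISK '/dev/oracleasm/disks/DISK5' REBALANCE POWER 4;
    -- Option 2: the existing LUN was grown on the array, so grow the ASM disk
    ALTER DISKGROUP data RESIZE DISK data_0001 SIZE 200G;
    -- Verify the new capacity
    SELECT name, total_mb, free_mb FROM v$asm_diskgroup;
    EOF
    ```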
    Hope it helps,
    Cerreia

  • Setup script document for Sourcing implementation

    Hi,
    Can someone please share a setup script document for a Sourcing implementation?
    Regards.

    Hi Ram,
    See wiki page:
    General PPDS wiki page
    http://wiki.sdn.sap.com/wiki/display/SCM/APO-PPDS
    Setup Matrix Generation in a Complex Manufacturing Environment
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/00a618c4-8aad-2b10-6ebb-f70cb4470195
    Official doc
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/00a618c4-8aad-2b10-6ebb-f70cb4470195?quicklink=index&overridelayout=true
    Hope this helps.
    Luiz Giani

  • Oracle RAC implementation experience

    Dear All,
    Has anyone successfully implemented Oracle RAC on Solaris 10/SPARC on a production box for SAP ECC/SRM etc.? Can you please share your experience? Will Oracle RAC be supported in the future, or is it going to be phased out by SAP? How robust is the Oracle RAC implementation? Is it advisable to take the Oracle RAC route?
    Thanks for all your inputs.
    Regards
    Velu

    Hi
    Are you asking for the implementation methodology used by SAP XI? It is known as the
    AcceleratedSAP (ASAP) Roadmap.
    The ASAP Methodology has five phases:
    1. Project Preparation – project formally initiated and planning well under way.
    2. Business Blueprint – project team gathers requirements and conducts conceptual design of the solution.
    3. Realization – system solution is built and integration tested, end users trained.
    4. Final Preparation – final check before cutover to the new system solution.
    5. Go Live & Support – solution confirmation, on-going support in place and project closing.
    The ASAP Implementation Roadmap for Exchange Infrastructure provides guidance for the implementation teams embarking on implementation project of SAP XI Solution.
    http://www.sap-basis-abap.com/sapgeneral/what-is-asap.htm
    Refer link below for a brief ASAP stages
    ASAP SAP IMPLEMENTATION
    http://help.sap.com/printdocu/core/Print46c/en/data/pdf/SVASAP/SVASAP.pdf 
    http://media.sdn.sap.com/html/submitted_docs/Implementation_Roadmap_XI/index.htm
    Thanks

  • Dedicated switches needed for RAC interconnect or not?

    Currently working on an Extended RAC cluster design implementation, I asked the network engineer for dedicated switches for the RAC interconnects.
    Here is a little background:
    There are 28 RAC clusters over 2x13 physical RAC nodes, with a separate Oracle_Home for each instance and at least 2+ instances on each RAC node. So 13 RAC nodes will be in each site (data center). This is basically an Extended RAC solution for SAP databases on RHEL 6 using ASM and Clusterware for Oracle 11gR2. The RAC nodes are blades in a c7000 enclosure (in each site). The distance between the sites is 55+ km.
    Oracle recommends InfiniBand (20 Gbps) as the network backbone, but here DWDM will be used with 2x10 Gbps links for the RAC interconnect between the sites. There will be a separate 2x1 Gbps redundant link for the production network and 2x2 Gbps FC (Fibre Channel) redundant links for the SAN/storage network (ASM traffic will go here). There will be separate switches for the public/production network and the SAN network.
    Oracle recommends dedicated switches (which will give acceptable latency/bandwidth) with switch redundancy to route the dedicated/non-routable VLANs for the RAC interconnect (private/heartbeat/global cache transfer) network. Since the DWDM inter-site links are 2x10 Gbps, do I still need the dedicated switches?
    If yes, then how many?
    Your inputs will be greatly appreciated.. and help me take a decision.
    Many Thanks in advance..
    Abhijit

    Absolutely agree. The chances of overload in an HA (RAC) solution and ultimately RAC node eviction are very high (with very high latency), and for exactly this reason I even suggested inexpensive switches to route the VLANs for the RAC interconnect. The ASM traffic will get routed through the 2x2 Gbps FC links through SAN directors (one in each site).
    I suggested the network folks use uplinks from the c7000 enclosure and route the RAC VLAN through these inexpensive switches for the interconnect traffic. We have another challenge here: HP has certified using the VirtualConnect/Flex-Fabric architecture for blades in the c7000 to allocate VLANs for the RAC interconnect. But this is only for one site, and does not span production/DR sites separated over a distance.
    Btw, do you have any standard switch model to select from, and how many would you go for in a RAC configuration of 13 Extended RAC clusters, with each cluster hosting 2+ RAC instances, to host a total of 28 SAP instances?
    Many Thanks again!
    Abhijit

  • Solutions for access as ROOT for RAC DBA duties

    Our Networking Team and Applications Team are going through some growing pains. We are trying to resolve what permissions should be given to a RAC DBA. Our RAC DBA is responsible for Oracle Clusterware, Oracle Automatic Storage
    Management and Oracle RDBMS software. The OS, Server and Storage Subsystem are the responsibility of the System Administrator. We have the following Environment:
    Production and Test (RAC)
    Oracle Enterprise Linux 5 update 2
    Oracle Clusterware 11.2.0.2 -- Grid Infrastructure
    Oracle ASM 11.2.0.2
    Oracle Database 11.2.0.2 EE
    Development (Single Instance)
    Oracle Enterprise Linux 5 update 2
    Oracle ASM 11.2.0.2 -- Grid Infrastructure
    Oracle Database 11.2.0.2 EE
    As the RAC DBA, I have identified the following areas that require ROOT for RAC and Single Instance DB's; however, I understand there may be more:
    diagcollection.pl
    - diagnostic tool for Oracle Clusterware and may be requested by Oracle Support
    ocrconfig
    - to repair ocr configuration issue (add, replace and remove requires root)
    srvctl modify
    - required root to change ip address
    tar
    - TAR Grid Infrastructure Directory structure preserving files with ROOT ownership
    cluvfy
    - cluvfy fix it scripts need to run as ROOT
    - some cluvfy commands under 11gr1 would only run properly for -post cfs check as ROOT in our last installation
    ASM Libraries
    - ROOT required to install and configure ASM libraries
    fdisk -l
    - this is used to see disks attached which is relevant when ASM disks are not mounted
    /etc/sysconfig/oracleasm
    - oracleasm loading configuration file
    /usr/sbin/oracleasm
    - to make disks available to ASMLIB (scandisks etc.)
    /usr/sbin/asmtool
    - asm config tool due to bug
    asm cluster file system
    - some commands require ROOT (mounting etc.)
    - acfsutil
    /var/log/messages
    - loading errors ohas and oracleasm would be logged here
    cvuqdisk
    - needs to be loaded for new install
    root.sh
    - script needed to run at install, upgrades and patching
    oraInstRoot.sh
    - script needed to run at install
    rootupgrade.sh
    - upgrade script
    roothas.pl
    - upgrade script
    ocrcheck
    - check for ocr corruption
    - corrupt check portion requires ROOT
    - oracle local registry
    Grid Infrastructure
    - .runInstaller from Grid Infrastructure
    - includes upgrades
    asm configuration assistant (asmca)
    - configuration of asm diskgroups
    - vol mgr for asm disks
    ocrconfig
    - ocr configuration tool
    - ocr import
    - ocr export
    - oracle local registry
    ocrdump
    - used to check ocr backup file
    - oracle local registry
    opatch
    - patching grid control requires ROOT
    crsctl
    - Startup and Shutdown Oracle Clusterware, Oracle ASM and Database/Instance
    - restore voting disk
    - restore ocr
    - set log for dynamic debugging
    - check install periodically
    srvctl
    - modify nodeapps (ex. ip address change)
    - add filesystem (acfs)
    What solutions have people found so that the RAC DBA can perform these responsibilities yet not have the root password?

    In all the environments I've worked in, I either had direct su access with knowledge of the root password or used sudo. I really can't imagine an environment that would require something other than either of those two options.
    In places with stricter auditing requirements we used sudo in conjunction with the sudosh shell wrapper to log all activities to syslog, but this was used by everyone and not just the DBA
    Is sudo the only solution? Every command needs to be entered into the sudo config files, which itself necessitates root access.
    As I demonstrated in the other thread, giving the oracle user sudo access to files that are writable by the oracle user (e.g. root.sh) gives them the ability to obtain a root shell. It is good to implement a "minimum privileges necessary" policy in your organization, but it has to be within reason. The minimum privilege necessary for running and maintaining CRS is root.
    Edited by: AllYourDataBase on Apr 18, 2011 1:44 PM
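    If sudo is used anyway, a minimal sudoers fragment along these lines is one common starting point. The Grid home path and command list below are assumptions, and per the warning above, never grant sudo on scripts writable by the oracle user (such as root.sh):

    ```shell
    # Illustrative /etc/sudoers.d/racdba fragment (edit with: visudo -f).
    # Paths assume a hypothetical 11.2 Grid home at /u01/app/11.2.0/grid.
    #
    #   Cmnd_Alias RACCMDS = /u01/app/11.2.0/grid/bin/crsctl, \
    #                        /u01/app/11.2.0/grid/bin/srvctl, \
    #                        /usr/sbin/oracleasm, /sbin/fdisk -l
    #   oracle ALL=(root) NOPASSWD: RACCMDS
    ```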

  • Are there suggestions for managing multiple VMware Fusion systems sharing files with a MacBook? Suppose I have enough RAM for 3 servers to share files.

    Suppose I have enough RAM for two or three VMware Fusion systems that could benefit from file sharing. Is it possible to configure each VMware Fusion instance to have a common read-only file share on my desktop and a common read/write area on my desktop?
    This would allow me to extend the power and muscle of my MacBook Air (Retina display) system to its best advantage, whether native Mac processing or specialty Linux or Windows systems as needed.
    I am having trouble allowing more than one system to share the same read-only and the same read/write areas so that software does the file locking and merge controls. Even if these do not share a common read/write area, it would be truly handy for drive space management for them to share a common read-only area.
    Thanks for your thoughtful input.
    20 year IT Professional that loves Mac visual interfaces, outstanding performance and a *nix command line. 
    Don

    It really depends on the OS running in the VM and what the permissions are set to on those shares.
    I run a few VMs on both a Mac at home and Windows at work, and all other computers in my home or at work can connect to the VMs without any problems, even multiple computers at the same time.
    It also depends on how you have set the VM up for networking. If you are setting it up as NAT instead of bridged, that can also cause problems accessing it from multiple systems. With NAT the VM is on its own network branch.

  • How can I have a collective AWR report for RAC database in 10gR2 and 11gR1?

    Please correct me here:
    awrrpt.sql takes the snapshot at the instance level, so in case we have 5 instances we have to take 5 AWR reports for a particular period. Correct?
    If the above is true, is there any way to collect a single collective AWR report for a RAC database which includes information on all the instances in Oracle 10gR2 or 11gR1?
    Thanks in advance
    Gagan

    I have never come across a way to do this, though I can't say it is not there.
    But I guess it may not be feasible, because as we know the current AWR report contains data which is specific to one instance:
    various hit ratios, top events, instance efficiency reports... what not.
    It would be really nice to see something in a new format where it lists values from each instance in a single report. I guess such a thing does not exist as of now.
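    Short of a combined report, one workaround is to generate a per-instance text report for the same snapshot range from a single session. The DBID, instance number and snapshot IDs below are placeholders; look up real values in dba_hist_snapshot:

    ```shell
    # Generates one AWR text report for instance 1; repeat per instance,
    # changing l_inst_num and the spool file name each time.
    sqlplus -s / as sysdba <<'EOF'
    SET LONG 1000000 LONGCHUNKSIZE 1000000 PAGESIZE 0 TRIMSPOOL ON
    SPOOL awr_inst1.txt
    SELECT output FROM TABLE(
      DBMS_WORKLOAD_REPOSITORY.AWR_REPORT_TEXT(
        l_dbid     => 1234567890,
        l_inst_num => 1,
        l_bid      => 100,
        l_eid      => 101));
    SPOOL OFF
    EOF
    ```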

  • What are the thread safety requirements for container implementation?

    I rarely see references to thread-safety requirements in the TopLink documentation, and it's no different for container implementations.
    The default TopLink implementation for:
    - List is Vector
    - Set is HashSet
    - Collection is Vector
    - Map is HashMap
    Half of them are thread-safe implementations (List/Collection) and the other half are not thread-safe (Set/Map).
    So if I choose my own implementation, do I need a thread-safe implementation for:
    - List ?
    - Set ?
    - Collection ?
    - Map ?
    Our application is always reading and writing via a UOW. So if TopLink synchronizes updates on client session objects, we should be safe with non-thread-safe implementations for any type; does TopLink synchronize updates on client session objects?
    The only thing we are certain of is that it is not thread-safe to read a client session object, or a read-only UOW object, if they are ever expired or refreshed.
    We got the stack dump below in an application that is always reading and writing objects via a UOW, so we believe that TopLink doesn't synchronize correctly when it is updating the client session objects.
    java.util.ConcurrentModificationException
    at java.util.AbstractList$Itr.checkForComodification(AbstractList.java:449)
    at java.util.AbstractList$Itr.next(AbstractList.java:420)
    at oracle.toplink.internal.queryframework.InterfaceContainerPolicy.next(InterfaceContainerPolicy.java:149)
    at oracle.toplink.internal.queryframework.ContainerPolicy.next(ContainerPolicy.java:460)
    at oracle.toplink.internal.helper.WriteLockManager.traverseRelatedLocks(WriteLockManager.java:140)
    at oracle.toplink.internal.helper.WriteLockManager.acquireLockAndRelatedLocks(WriteLockManager.java:116)
    at oracle.toplink.internal.helper.WriteLockManager.checkAndLockObject(WriteLockManager.java:349)
    at oracle.toplink.internal.helper.WriteLockManager.traverseRelatedLocks(WriteLockManager.java:144)
    at oracle.toplink.internal.helper.WriteLockManager.acquireLockAndRelatedLocks(WriteLockManager.java:116)
    at oracle.toplink.internal.helper.WriteLockManager.checkAndLockObject(WriteLockManager.java:349)
    at oracle.toplink.internal.helper.WriteLockManager.traverseRelatedLocks(WriteLockManager.java:144)
    at oracle.toplink.internal.helper.WriteLockManager.acquireLockAndRelatedLocks(WriteLockManager.java:116)
    at oracle.toplink.internal.helper.WriteLockManager.acquireLocksForClone(WriteLockManager.java:56)
    at oracle.toplink.publicinterface.UnitOfWork.cloneAndRegisterObject(UnitOfWork.java:756)
    at oracle.toplink.publicinterface.UnitOfWork.cloneAndRegisterObject(UnitOfWork.java:714)
    at oracle.toplink.internal.sessions.UnitOfWorkIdentityMapAccessor.getAndCloneCacheKeyFromParent(UnitOfWorkIdentityMapAccessor.java:153)
    at oracle.toplink.internal.sessions.UnitOfWorkIdentityMapAccessor.getFromIdentityMap(UnitOfWorkIdentityMapAccessor.java:99)
    at oracle.toplink.internal.sessions.IdentityMapAccessor.getFromIdentityMap(IdentityMapAccessor.java:265)
    at oracle.toplink.publicinterface.UnitOfWork.registerExistingObject(UnitOfWork.java:3543)
    at oracle.toplink.publicinterface.UnitOfWork.registerExistingObject(UnitOfWork.java:3503)
    at oracle.toplink.queryframework.ObjectLevelReadQuery.registerIndividualResult(ObjectLevelReadQuery.java:1812)
    at oracle.toplink.internal.descriptors.ObjectBuilder.buildWorkingCopyCloneNormally(ObjectBuilder.java:455)
    at oracle.toplink.internal.descriptors.ObjectBuilder.buildObjectInUnitOfWork(ObjectBuilder.java:419)
    at oracle.toplink.internal.descriptors.ObjectBuilder.buildObject(ObjectBuilder.java:379)
    at oracle.toplink.queryframework.ObjectLevelReadQuery.buildObject(ObjectLevelReadQuery.java:455)
    at oracle.toplink.queryframework.ObjectLevelReadQuery.conformIndividualResult(ObjectLevelReadQuery.java:622)
    at oracle.toplink.queryframework.ReadObjectQuery.conformResult(ReadObjectQuery.java:339)
    at oracle.toplink.queryframework.ReadObjectQuery.registerResultInUnitOfWork(ReadObjectQuery.java:604)
    at oracle.toplink.queryframework.ReadObjectQuery.executeObjectLevelReadQuery(ReadObjectQuery.java:421)
    at oracle.toplink.queryframework.ObjectLevelReadQuery.executeDatabaseQuery(ObjectLevelReadQuery.java:811)
    at oracle.toplink.queryframework.DatabaseQuery.execute(DatabaseQuery.java:620)
    at oracle.toplink.queryframework.ObjectLevelReadQuery.execute(ObjectLevelReadQuery.java:779)
    at oracle.toplink.queryframework.ReadObjectQuery.execute(ReadObjectQuery.java:388)
    at oracle.toplink.queryframework.ObjectLevelReadQuery.executeInUnitOfWork(ObjectLevelReadQuery.java:836)
    at oracle.toplink.publicinterface.UnitOfWork.internalExecuteQuery(UnitOfWork.java:2604)
    at oracle.toplink.publicinterface.Session.executeQuery(Session.java:993)
    at oracle.toplink.publicinterface.Session.executeQuery(Session.java:950)

    Hi Lionel,
    As a general rule of thumb, the ATI Rage 128 Pro will not support a 20" LCD. That being said, there are reports of it doing just that (possibly the edition that went into the cube).
    I'm not that familiar with the ins and outs of the Cube, so I can't give you authoritative information on it.
    A good place to start looking for answers is:
    http://cubeowner.com/kbase_2/
    Cheers!
    Karl

  • Urgent help needed in configuring X1151A for RAC cluster

    For RAC requirements I have to configure this card to use the interface name ce1, but I have tried changing slots and putting an /etc/hostname.ce1 file out there and it fails; it always comes up as ce0.
    My question is: how can I configure this card to come up as ce1? On the other box I had to do nothing; I just placed the /etc/hostname.ce1 file with the hostname and it works perfectly.
    can you please email me a copy of your response at [email protected] ?
    thanks
    Sami

    Look for "ce" instances in the /etc/path_to_inst file. You'll need to change the lines so that the hardware path you want has instance number "1".
    Keep a backup and write down the pathname. You can give that path to a 'boot -a' prompt if anything bad happens.
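    For illustration, the relevant entries might look like the following; the hardware paths are made-up examples, not from Sami's system:

    ```shell
    # Back up first, as suggested above (Solaris, run as root):
    cp /etc/path_to_inst /etc/path_to_inst.orig

    # /etc/path_to_inst maps hardware paths to instance numbers, e.g.:
    #   "/pci@8,700000/network@0" 0 "ce"
    #   "/pci@8,600000/network@1" 1 "ce"
    # Swap the 0 and 1 so the card you want becomes instance 1 (ce1),
    # then reboot. If anything goes wrong, supply the saved copy's path
    # at the interactive 'boot -a' prompts.
    ```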
    Darren

  • How can I allocate more RAM to my onboard video card on a 2011 MacBook Pro 13-inch running Windows 7 via Boot Camp?

    I noticed that Macs in general use EFI instead of a BIOS. I need to allocate some more RAM to the onboard video card, since it is currently running with an estimated 50 MB. I would like to increase this to 512 MB and cut the remaining system RAM from 3.95 GB down to 3.5 GB (allowing me to run more graphics-intensive games and such). I know this could slow down the entire system (I am aware of this) and I have a third-party application to control and close unneeded processes during gaming. I just need more RAM for my onboard video card. Any help?

    You don't allocate more RAM to your video card; you put more RAM into the machine, and some machines will then bump the CPU graphics RAM allocation up a little.
    The problem is you bought an integrated-graphics-only computer. If you want to play 3D games you need a computer with dedicated graphics and its own dedicated VRAM.
    If you're this serious about 3D gaming, you're on the wrong platform.
    Get a Windows 7 64-bit i7 8 GB RAM expandable 3D gaming tower with a good power supply and a video card upgrade path; this way, every few years you buy and stick in a new video card to play the latest games.
    Macs are not gaming machines and never will be, because they would last too long if we could upgrade them.
    http://www.cbscores.com/index.php?sort=ogl&order=desc

  • Bought RAM for MacBook 4,1 and it won't boot up

    Hey guys, I just bought some new RAM to upgrade my MacBook (2.1 GHz Intel Core 2 Duo running Snow Leopard 10.6.8) and it says it's the right RAM for it, but it doesn't boot up. The RAM I bought is 2 GB PC5300/667 MHz DDR2. I bought 2 modules to equal 4 GB and it doesn't boot. I tried one stick, both sticks, one new and one old, still nothing. The disk drive runs but that's all. So either I got bad RAM or they labeled it wrong? What can I do besides return it?

    If you bought your RAM at a place that also sells PC RAM, you may have gotten mislabelled RAM. PCs can handle different RAM speeds, so vendors will often label higher-speed RAM as a lower speed rather than make two different speeds of RAM. But Macs are much more picky: they require a RAM stick to be exactly 667 MHz, not 675 MHz or 800 MHz. The way to tell is to put one of your old RAM sticks in; it will force the new RAM to run at the correct speed.
    This is from one review of PNY RAM:
    "These modules are actually 800Mhz. PNY no longer makes or sells 667Mhz modules. Not all computers that require 667Mhz are compatible with 2 800Mhz modules. They refuse to down clock properly. This is especially true with a number of Core 2 Duo MacBooks. Spoke to PNY support, they flat out told me that yes, they sell 800Mhz modules in 667Mhz packaging. If you RMA a module that is 667Mhz (or supposed to be 667Mhz) they will replace it with an 800Mhz module as they no longer have any 667Mhz SODIMMs, not even for RMA replacement!"
    These are good online stores for Mac compatible RAM
    OWC 667Mhz RAM
    http://eshop.macsales.com/shop/memory/MacBook/DDR2/ - They offer Mac tested RAM at very good prices.
    Crucial Memory http://www.crucial.com/ - good place to buy RAM from all over the world. They also have an excellent memory selector that allows you to choose memory based on your computer's model
    Data Memory Systems http://www.datamemorysystems.com/apple-memory.asp - another good, cheap place to buy RAM if you live in the U.S.
