Using MAT with locked memory timings

If you disable the SPD setting so the BIOS does not mess with the timings, can you still benefit from using MAT? It seems like all MAT does is tighten the memory timings, but if you lock the timings yourself, what good is MAT?

You can still benefit from MAT.
If you lock the timings, that means you do not want to manually adjust or overclock the memory; MAT is a separate internal optimization that the motherboard sorts out by itself.

Similar Messages

  • Possible deadlocks with in-memory database using Java

    I've written a completely in-memory database using the Java API on BDB 4.6 and 4.7 for Windows and Linux (x86). The completely in-memory database means the database content and logs are entirely in-memory and the overflow pages will not be written to a disk file.
    The database environment and the database are configured to be transactional. All database access methods are specified to be auto-commit by setting the transaction argument to null. The environment is configured to be multi-threaded (which is the default when using the Java API).
    When run with a single-threaded client, the application works correctly on both Windows and Linux for BDB 4.6 and 4.7.
    When run with a multi-threaded client that uses two threads for database access, I run into a deadlock inside the call to the Database.delete method about half the time.
    I am assuming that in the "auto-commit" mode, a deadlock should not be possible.
    Are there any reported problems with using Java with an in-memory database?
    Thanks.
    Hisur

    Hi Hisur,
    If you are using transactions and multiple threads, you will have to deal with deadlock. In this particular case, it's likely that a delete is causing two btree pages to be merged (called a "reverse split"). Auto-commit makes no difference in this case -- the application must retry the operation.
    Regards,
    Michael Cahill, Oracle.
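    A minimal retry sketch of what Michael describes (not from the original thread), assuming the com.sleepycat.db Java API where the losing thread sees a DeadlockException; the class name, helper name, and MAX_RETRIES budget are made up for illustration:

    import com.sleepycat.db.Database;
    import com.sleepycat.db.DatabaseEntry;
    import com.sleepycat.db.DatabaseException;
    import com.sleepycat.db.DeadlockException;
    import com.sleepycat.db.OperationStatus;

    public class DeleteWithRetry {
        private static final int MAX_RETRIES = 10;   // arbitrary retry budget

        static OperationStatus deleteWithRetry(Database db, DatabaseEntry key)
                throws DatabaseException {
            for (int attempt = 0; attempt < MAX_RETRIES; attempt++) {
                try {
                    // Auto-commit delete (null transaction); it can still lose a
                    // deadlock race, e.g. during a btree reverse split.
                    return db.delete(null, key);
                } catch (DeadlockException e) {
                    // This thread was chosen as the deadlock victim and the
                    // auto-commit transaction was aborted, so simply retry.
                }
            }
            throw new DatabaseException("delete still deadlocked after " + MAX_RETRIES + " attempts");
        }
    }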

  • Lock a ztable using function module or memory id

    hello experts,
    I am facing a problem with an editable ALV grid output: while one user is editing and saving data into the ztable, other users should not be able to save data, and an error message should be displayed.
    I am using the function modules ENQUEUE_EZTABLE and DEQUEUE_EZTABLE to lock the whole table. I can see the table as locked in SM12, but the other user can still update and save the data.
    Is there any other function module to work with?
    Another way I am trying is:
    DATA: text4 TYPE c,
            text5 TYPE c.
      IMPORT text4 TO text5 FROM MEMORY ID 'Y222'.
      IF sy-subrc = 0.
        IF text5 = 'X'.
          MESSAGE e000(z1) WITH 'SOME USER IS ACCESSING THE PROGRAM'.
        ENDIF.
      ELSE.
        EXPORT text4 = 'X' TO MEMORY ID 'Y222'.
        PERFORM update_dbase.
        FREE MEMORY ID 'Y222'.
      ENDIF.
    This method is also not working.
    Please help me: how do I prevent other users from accessing the data while one user is working on it?
    Thanks.

    Hi,
    Create a lock object for the table and use the method below:
      " Lock table ZWMDASHBOARD
      CALL FUNCTION 'ENQUEUE_EZWMDASHBOARD'
        EXPORTING
          mode_zwmdashboard = 'E'
          mandt             = sy-mandt
        EXCEPTIONS
          foreign_lock      = 1
          system_failure    = 2
          OTHERS            = 3.
      IF sy-subrc <> 0.
        RAISE unable_to_lock.     " Exception
      ELSE.
        " Modify table ZWMDASHBOARD
        MODIFY zwmdashboard FROM TABLE t_zwmdashboard[].
        IF sy-subrc <> 0.
          " Do nothing
        ENDIF.
        " Unlock table ZWMDASHBOARD
        CALL FUNCTION 'DEQUEUE_EZWMDASHBOARD'
          EXPORTING
            mode_zwmdashboard = 'E'
            mandt             = sy-mandt.
      ENDIF.
    Hope this helps.

  • JDK6 locks use a LOT more memory than JDK5

    I'm a happy user of the Java 5 concurrency utilities - especially read/write locks. We have a system with hundreds of thousands of objects (each protected by a read/write lock) and hundreds of threads. I tried to upgrade the system to JDK 6 today and, to my surprise, most of the memory reported by jmap -histo was used by thread locals and the locks' internal objects...
    As it turns out, in Java 5 every lock had just a counter of readers and writers. In Java 6, it seems that every lock has a separate thread local for itself - which means that two objects are allocated for each lock for each thread that ever touches it... In our case, memory usage went up by 600 MB just because of that.
    I have attached small test program below. Running it under jdk5 gives following results:
    Memory at startup 114
    After init 4214
    One thread 4214
    Ten threads 4216
    With JDK 6 it is:
    Memory at startup 124
    After init 5398
    One thread 8638
    Ten threads 39450
    This problem alone makes JDK 6 completely unusable for us. What I'm considering is taking the ReentrantReadWriteLock implementation from JDK 5 and using it with the rest of JDK 6. There are two basic choices - either renaming it and changing our code to allocate the other class (cleanest from a deployment point of view) or putting a different version in the bootclasspath. Will renaming the class (and moving it to a different package) work correctly with jstack/deadlock detection tools, or do they expect only the JDK implementation of Lock? Is there any code in the new JDK depending on the particular implementation of RRWL?
    Why was this change made, by the way? The only reason I can see is to not allow threads to release a read lock taken by another thread. This is a nice feature, but is it worth wasting a gigabyte of heap? How would this scale to a really big number of threads?
    Test program
    import java.util.concurrent.atomic.AtomicInteger;
    import java.util.concurrent.locks.*;
    public class LockTest {
      static AtomicInteger counter = new AtomicInteger(0);
      static Object foreverLock = new Object();

      public static void main(String[] args) throws Exception {
        dumpMemory("Memory at startup ");
        final ReadWriteLock[] locks = new ReadWriteLock[50000];
        for (int i = 0; i < locks.length; i++) {
          locks[i] = new ReentrantReadWriteLock();
        }
        dumpMemory("After init ");
        Runnable run = new Runnable() {
          public void run() {
            // touch every lock once from this thread
            for (int i = 0; i < locks.length; i++) {
              locks[i].readLock().lock();
              locks[i].readLock().unlock();
            }
            counter.incrementAndGet();
            // keep the thread alive so its per-thread lock state stays reachable
            synchronized (foreverLock) {
              try {
                foreverLock.wait();
              } catch (InterruptedException e) {
                e.printStackTrace();
              }
            }
          }
        };
        new Thread(run).start();
        while (counter.get() != 1) {
          Thread.sleep(1000);
        }
        dumpMemory("One thread ");
        for (int i = 0; i < 9; i++) {
          new Thread(run).start();
        }
        while (counter.get() != 10) {
          Thread.sleep(1000);
        }
        dumpMemory("Ten threads ");
        System.exit(0);
      }

      private static void dumpMemory(String txt) {
        System.gc();
        System.gc();
        System.gc();
        System.out.println(txt + (Runtime.getRuntime().totalMemory() - Runtime.getRuntime().freeMemory()) / 1024);
      }
    }

    Controlling access/update to data is what DBMSs are all about.
    And our framework is more or less a DBMS.
    Imagine that you need a SQL database with following extensions:
    If any row you have ever requested is modified, you should get a new version transparently plus get notified about the change (what fields have changed)
    If any query you have ever done would return different rows then previously, the result collection should be modified and you should be notified about the change (delta to previous contents).
    It is distributed-cache-meets-DBMS framework.
    Some of the entities are backed by an actual database for persistence, but others are not (they are in transient memory only, or are views onto data managed by completely different systems).
    We could stay with R/W locks for the lists and plain locks for objects - but even the number of lists in the system (5-10k) could already have some effect when multiplied by the number of threads - and originally the cost of having an R/W lock per object was relatively small, and it seemed cleaner and more scalable.
    Just off the top of my head I can give an example where I was searching a list of objects for the index at which to insert a new one under a write lock, but I switched to searching the list under a read lock, then upgrading to a write lock and searching the area around the previously found place (as the list could be modified at the moment the lock is upgraded, but in most cases I have to search only 1-2 indices around it). This change had an incredible perceived performance impact (as the rendering code for a JTable was using a model based on the same list with a read lock). For single-object locking it is not so obvious, but there are still objects which can be locked for reading from many threads concurrently.

  • I want to ask something about Firefox. Why does Firefox use so much memory? Can you develop it to reduce memory consumption? This problem is very disruptive on my PC with low memory.

    I want to ask something about Firefox.
    Why does Firefox use so much memory?
    Can you develop it to reduce memory consumption?
    This problem is very disruptive on my PC with low memory.
    == This happened ==
    Every time Firefox opened

    How much memory is Firefox using right now?
    # Press '''CTRL+SHIFT+ESC''' to load the Task Manager window
    # Click the Processes tab at the top. (Click once near the top of the window if you don't see the tabs.)
    # Find firefox.exe, and see how many kilobytes of memory it's using.
    Showing around 80MB when Firefox first starts is normal. Right now, I have 75 tabs open and it's using 500MB - this varies a lot depending on what you have in the tabs.
    Other than high memory usage, what other problems are you experiencing? (Examples include slowness, high CPU usage, and failure to load certain sites)
    Many of these issues, including high memory usage, can be caused by misbehaving add-ons. To see if this is the case, try the steps at [[Troubleshooting extensions and themes]]. Outdated plugins are another cause of this issue - you can check for this at http://www.mozilla.com/plugincheck

  • My internal drive is NOT showing up in the Event Library in Final Cut X. I am also getting "The operation couldn't be completed. Cannot allocate memory". Any ideas? I am using an iMac Core i7 with 8 GB of memory, OS version 10.6.8

    My internal boot drive is NOT showing up in the Event Library in Final Cut X. My external raid drive is showing up.
    I am also getting error message "The operation couldn’t be completed. Cannot allocate memory"  when attempting to create a new "event".   Any ideas?
    If I had an application disc, I would uninstall and reinstall FCPX. I am assuming there is a way to do this without a disc - I just have not been able to easily find out how.
    I am using an iMac Core i7 with 8 GB of memory, OS version 10.6.8
    Thanks

    Well, it did NOT work. Here is a screen capture as FCPX loads, showing the drive while it loads. Then, once it is loaded, NO drive shows up in the Event Library (I have turned off the external drive).
    ANY ideas . . . Anybody . . . APPLE?

  • Good evening. I bought a used iPad 4 from a licensed, legal shop in the UAE. During use it shows that the machine is locked by the previous user's iCloud, and I cannot deal with it. Please help.

    Good evening,
    I bought a used iPad 4 from a licensed, legal shop in the UAE. During use it shows that the machine is locked by the previous user's iCloud, and I cannot deal with it. Please help.

    amr35 wrote:
    But I want to figure out a solution to this problem, because I was not able to reach the first user (I do not know this person) or to get back to the store where it was purchased. Please help, because I paid an exorbitant amount to buy it.
    There is no solution other than having the device cleared by the former owner. If you are trying to activate an iPad or iPhone and it is asking for a previous owner's Apple ID and password, you have encountered the Activation Lock. This is a security feature that prevents thieves from setting up and using a stolen or lost iPad or iPhone. You have no alternative. You must contact the previous owner to get permission to use the device. If you cannot contact the previous owner, return the device to where you bought it and get a refund. You will never be able to activate the device, and no one can help you do it.

  • How to use Vivado hls::Mat with AXI-Stream interfaces (not AXI4 video stream)?

      Hello, everyone. I am trying to design an image processing IP core with Vivado HLS 2014.4. From XAPP1167, I have learned that the video functions provided by Vivado HLS should be used with AXI4 video streams and VDMA. However, I want to write/read image data to/from the IP core through AXI-Stream interfaces and AXI DMA for some special reasons.
      To verify feasibility, a test IP core named detectTest was designed as follows. The function of this IP core is to read a 320x240 8-bit gray image (bits 7-0 of INPUT_STREAM_TDATA) from the AXIS port "INPUT_STREAM" and then output it with no changes. I built a Vivado project for the ZedBoard and then tested the IP core with an AXI DMA. Experimental results show that the IP core works normally, so it seems possible to use hls::Mat with AXIS.
    #include "hls_video.h"
    #include "hls_math.h"
    typedef ap_axiu<32, 1, 1, 1> AXI_VAL;
    typedef hls::Scalar<HLS_MAT_CN(HLS_8U), HLS_TNAME(HLS_8U)> GRAY_PIXEL;
    typedef hls::Mat<240, 320, HLS_8U> GRAY_IMAGE;
    #define HEIGHT 240
    #define WIDTH 320
    #define COMPRESS_SIZE 2
    template<typename T, int U, int TI, int TD>
    inline T pop_stream(ap_axiu<sizeof(T) * 8, U, TI, TD> const &e) {
    #pragma HLS INLINE off
    assert(sizeof(T) == sizeof(int));
    union {
    int ival;
    T oval;
    } converter;
    converter.ival = e.data;
    T ret = converter.oval;
    volatile ap_uint<sizeof(T)> strb = e.strb;
    volatile ap_uint<sizeof(T)> keep = e.keep;
    volatile ap_uint<U> user = e.user;
    volatile ap_uint<1> last = e.last;
    volatile ap_uint<TI> id = e.id;
    volatile ap_uint<TD> dest = e.dest;
    return ret;
    template<typename T, int U, int TI, int TD>
    inline ap_axiu<sizeof(T) * 8, U, TI, TD> push_stream(T const &v, bool last =
    false) {
    #pragma HLS INLINE off
    ap_axiu<sizeof(T) * 8, U, TI, TD> e;
    assert(sizeof(T) == sizeof(int));
    union {
    int oval;
    T ival;
    } converter;
    converter.ival = v;
    e.data = converter.oval;
    // set it to sizeof(T) ones
    e.strb = -1;
    e.keep = 15; //e.strb;
    e.user = 0;
    e.last = last ? 1 : 0;
    e.id = 0;
    e.dest = 0;
    return e;
    GRAY_IMAGE mframe(HEIGHT, WIDTH);
    void detectTest(AXI_VAL INPUT_STREAM[HEIGHT * WIDTH], AXI_VAL RESULT_STREAM[HEIGHT * WIDTH]) {
    #pragma HLS INTERFACE ap_fifo port=RESULT_STREAM
    #pragma HLS INTERFACE ap_fifo port=INPUT_STREAM
    #pragma HLS RESOURCE variable=RESULT_STREAM core=AXI4Stream metadata="-bus_bundle RESULT_STREAM"
    #pragma HLS RESOURCE variable=INPUT_STREAM core=AXI4Stream metadata="-bus_bundle INPUT_STREAM"
    #pragma HLS RESOURCE variable=return core=AXI4LiteS metadata="-bus_bundle CONTROL_STREAM"
    int i, j;
    for (i = 0; i < HEIGHT * WIDTH; i++) {
    unsigned int instream_value = pop_stream<unsigned int, 1, 1, 1>(INPUT_STREAM[i]);
    hls::Scalar<HLS_MAT_CN(HLS_8U), HLS_TNAME(HLS_8U)> pixel_in;
    *(pixel_in.val) = (unsigned char) instream_value;
    mframe << pixel_in;
    hls::Scalar<HLS_MAT_CN(HLS_8U), HLS_TNAME(HLS_8U)> pixel_out;
    mframe >> pixel_out;
    unsigned int outstream_value = (unsigned int) *(pixel_out.val);
    RESULT_STREAM[i] = push_stream<unsigned int, 1, 1, 1>(
    (unsigned int) outstream_value, i == HEIGHT * WIDTH - 1);
    return;
      Then I tried to modify the detectTest function as follows. The function of the modified IP core is to resize the input image and then recover its original size. However, it did not work in the AXI-DMA test. The waveform captured by ChipScope shows that the ready signal of INPUT_STREAM was cleared after receiving several pixels.
    GRAY_IMAGE mframe(HEIGHT, WIDTH);
    GRAY_IMAGE mframe_resize(HEIGHT / COMPRESS_SIZE, WIDTH / COMPRESS_SIZE);

    void detectTest(AXI_VAL INPUT_STREAM[HEIGHT * WIDTH], AXI_VAL RESULT_STREAM[HEIGHT * WIDTH]) {
    #pragma HLS INTERFACE ap_fifo port=RESULT_STREAM
    #pragma HLS INTERFACE ap_fifo port=INPUT_STREAM
    #pragma HLS RESOURCE variable=RESULT_STREAM core=AXI4Stream metadata="-bus_bundle RESULT_STREAM"
    #pragma HLS RESOURCE variable=INPUT_STREAM core=AXI4Stream metadata="-bus_bundle INPUT_STREAM"
    #pragma HLS RESOURCE variable=return core=AXI4LiteS metadata="-bus_bundle CONTROL_STREAM"
        int i, j;
        for (i = 0; i < HEIGHT * WIDTH; i++) { // receiving block
            unsigned int instream_value = pop_stream<unsigned int, 1, 1, 1>(INPUT_STREAM[i]);
            hls::Scalar<HLS_MAT_CN(HLS_8U), HLS_TNAME(HLS_8U)> pixel_in;
            *(pixel_in.val) = (unsigned char) instream_value;
            mframe << pixel_in;
        }
        hls::Resize(mframe, mframe_resize);
        hls::Resize(mframe_resize, mframe);
        for (i = 0; i < HEIGHT * WIDTH; i++) { // transmitting block
            hls::Scalar<HLS_MAT_CN(HLS_8U), HLS_TNAME(HLS_8U)> pixel_out;
            mframe >> pixel_out;
            unsigned char outstream_value = *(pixel_out.val);
            RESULT_STREAM[i] = push_stream<unsigned int, 1, 1, 1>((unsigned int) outstream_value, i == HEIGHT * WIDTH - 1);
        }
        return;
    }
      I also tried to delete or modify the following two lines in the modified IP core, but the transmitting problem persisted. It seems that the IP core cannot work normally if the receiving block and the transmitting block are in different "for" loops. But if I cannot solve this problem, the image processing functions cannot be added to the IP core either. The document XAPP1167 mentions that "the hls::Mat<> datatype used to model images is internally defined as a stream of pixels". Does that cause the problem? And how can I solve it? Thanks a lot!
    hls::Resize(mframe, mframe_resize);
    hls::Resize(mframe_resize, mframe);
     

    Hello
    So the major concept that you need to learn/remember is that hls::Mat<> is basically "only" an HLS stream -- hls::stream<>. It's actually an array of N channels (and you have N=1).
    Next, streams are FIFOs; in software they are modeled as infinite queues, but in HW they have a finite size.
    The default value is a depth of 2 (IIRC).
    In your first code you do:
    for all pixels loop {
      .. something to read pixel_in
       mframe takes pixel_in
       pixel_out is read from mframe
       .. write out pixel_out
    } // end loop
    If you notice, mframe never has more than one pixel element inside, since as soon as you write to it, you unload it. In other terms, mframe never contains a full frame of pixels (but a full frame flows through it!).
    In your second coding, mframe has to actually contain all the pixels, as you have two for loops and you don't start unloading the pixels until the first loop is complete.
    Needless to say, your FIFO has a depth of 2, so you actually never read more than 3 pixels in.
    That's why you see the ready signal of the input stream drop after a few pixels; that's the back pressure being applied by the VHLS block.
    Where to go from there?
    Well, first: stop doing FPGA tests and ChipScope if you have not run cosim first and had it pass.
    Had you run cosim and had it fail - or get stuck - you would have debugged there, rather than waiting for a bitstream to implement.
    Check UG902 about cosim and self-checking testbenches. Maybe for video you can't have self-checking, so at least you need visual checks of the generated pictures - you can adapt XAPP1167 for that.
    For your design, you could increase the depth of the stream - XAPP1167 explains that - but here it's impractical or sometimes impossible to buffer a full-size frame.
    If you check the XAPP carefully, the design operates in "dataflow" mode; check UG902 as to what this means.
    In short, dataflow means that the HW functions operate in parallel: here the second loop starts executing as soon as data has been generated by the first loop. The link between the loops is a stream/FIFO, so as soon as a datum is produced by the first loop, the second loop can process it; this is possible because the processing happens in sequential order. A rough sketch of this restructuring follows.
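    As a sketch only (not from the original post): the two loops could be moved into separate functions connected by hls::Mat streams and run under #pragma HLS DATAFLOW. It reuses the poster's AXI_VAL/GRAY_IMAGE/GRAY_PIXEL typedefs and the pop_stream/push_stream helpers; the read_frame/write_frame names and the extra "restored" Mat are invented for illustration; the original interface/resource pragmas on the top function are omitted for brevity; and the second Resize writes into a separate Mat because the same hls::Mat stream cannot be both consumed and produced by one call.

    void read_frame(AXI_VAL in[HEIGHT * WIDTH], GRAY_IMAGE &img) {
        for (int i = 0; i < HEIGHT * WIDTH; i++) {
            GRAY_PIXEL p;
            *(p.val) = (unsigned char) pop_stream<unsigned int, 1, 1, 1>(in[i]);
            img << p;   // producer: push pixels into the stream
        }
    }

    void write_frame(GRAY_IMAGE &img, AXI_VAL out[HEIGHT * WIDTH]) {
        for (int i = 0; i < HEIGHT * WIDTH; i++) {
            GRAY_PIXEL p;
            img >> p;   // consumer: pop pixels as soon as they are available
            out[i] = push_stream<unsigned int, 1, 1, 1>((unsigned int) *(p.val), i == HEIGHT * WIDTH - 1);
        }
    }

    void detectTest(AXI_VAL INPUT_STREAM[HEIGHT * WIDTH], AXI_VAL RESULT_STREAM[HEIGHT * WIDTH]) {
    #pragma HLS DATAFLOW
        // Mats declared locally so the dataflow region can chain them as FIFOs
        GRAY_IMAGE img(HEIGHT, WIDTH);
        GRAY_IMAGE scaled(HEIGHT / COMPRESS_SIZE, WIDTH / COMPRESS_SIZE);
        GRAY_IMAGE restored(HEIGHT, WIDTH);
        read_frame(INPUT_STREAM, img);
        hls::Resize(img, scaled);        // downscale
        hls::Resize(scaled, restored);   // upscale back to the original size
        write_frame(restored, RESULT_STREAM);
    }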
    Well I leave you to read more.
    I hope this helps....

  • How can I unlock my 4S? It's locked to Bluegrass Cellular, and I want to use it with Straight Talk.

    How can I unlock my iPhone 4S? It's locked to Bluegrass Cellular, and I want to use it with Straight Talk.

    If your iPhone is locked to Bluegrass, you will need to contact them and ask if they unlock iPhones and if so what their requirements and procedure are. Apple's latest information is that Bluegrass does not unlock iPhones, but you can contact them and ask.
    Regards.

  • Using PowerShell to set multiple timed.servers with variables

    Having an issue using PowerShell to set 3 timed.servers which are defined in a variable. Running the commands:
    $TimeServers = "IPaddress1,IPaddress2,IPaddress3"
    Set-NaOption -OptionName timed.servers -OptionValue $TimeServers
    Thanks in advance!

    Hi,
    The Set-NaOption cmdlet's -OptionValue parameter expects a string, and it shouldn't matter if that's a comma-delimited string containing multiple IP addresses. I noticed that whilst the cmdlet throws an error, it does actually set the option value for all servers, so this seems like it could be a bug (IMO). It might be possible to invoke the API using "Invoke-NaSystemApi", but I checked the ZAPI and noticed this also fails using ZExplore from the SDK:
    ZAPI Request:
    <?xml version="1.0" encoding="UTF-8"?>
    <netapp  xmlns="http://www.netapp.com/filer/admin" version="1.21">
      <options-set>
        <name>timed.servers</name>
        <value>192.168.100.10,192.168.100.11,192.168.100.12</value>
      </options-set>
    </netapp>
    ZAPI Results:
    <?xml version='1.0' encoding='UTF-8' ?>
    <netapp version='1.1' xmlns='http://www.netapp.com/filer/admin'>
        <!-- Output of options-set [Execution Time: 8610 ms] -->
        <results reason='Unable to set option: timed.servers' errno='13001' status='failed'>
            <cluster-constraint>same_preferred</cluster-constraint>
            <cluster_constraint>same_preferred</cluster_constraint>
            <message>1 entry was deleted.
    </message>
        </results>
    </netapp>
    So I think the options are either using the "Set-NaOption" cmdlet with the -SilentlyContinue parameter or the "Invoke-NaSsh" cmdlet with -ErrorAction Stop. As a workaround I'd recommend something like:
    [String]$servers = "192.168.100.10,192.168.100.11,192.168.100.12"
    [String]$command = "options timed.servers $servers"
    Try{
       Invoke-NaSsh -Command $command -ErrorAction Stop
       Write-Host "Executed Command: $command"
    }Catch{
       Write-Warning -Message $("Failed Executing Command: $command. Error " + $_.Exception.Message)
    }
    Hope that helps
    /matt

  • I am using Xcode 5 on a MacBook with 5 GB 1067 MHz DDR3 memory and a 2.4 GHz Intel Core 2 Duo processor. It hangs every time I work on a Storyboard. My storyboard consists of 25 scenes. Please help, I am suffering.

    I am using Xcode 5 on a MacBook with 5 GB 1067 MHz DDR3 memory and a 2.4 GHz Intel Core 2 Duo processor. It hangs every time I work on a Storyboard, and even clicking on any of the controllers takes time to open. My storyboard consists of 25 scenes. I searched a lot but did not get any results. I have upgraded to Xcode 5.0.2, which is bug-free compared to Xcode 5. Please help, I am struggling with this problem a lot.

    iTunes for Mac
    http://support.apple.com/en-us/HT201693
    http://www.apple.com/itunes/download/
    Was it installed before? Does it show up in Software Update?
    Do other updates show up there (realizing that Lion is not getting all the updates anymore as it is no longer being supported).
    Apple Downloads
    - Always a place to manual download.
    MacBook Pro
    Mac OS X Lion System Communities

  • HT201263 All of these steps were followed correctly. But a warning popped up and said that it could not be restored because it is locked with a passcode, and that we must enter a passcode before we can use it with iTunes. What should I do?

    All of these steps were followed correctly. But a warning popped up and said that it could not be restored because it is locked with a passcode, and that we must enter a passcode before we can use it with iTunes. What should I do?

    Place the iPod in Recovery Mode and restore via iTunes.
    iOS: Wrong passcode results in red disabled screen
    If not successful, try DFU mode.
    How to put iPod touch / iPhone into DFU mode « Karthik's scribblings

  • When I try to use FaceTime on my iMac, it comes up with "FaceTime not responding" and starts to use nearly 2 GB of memory. Any ideas, please?

    When I try to use FaceTime on my iMac, it comes up with "FaceTime not responding" and starts to use nearly 2 GB of memory. Any ideas, please?
    Messages also does the same thing.

    Hello BassoonPlayer,
    Since you are using one of the school's MacBooks, it is quite possible that the time and date are not properly set on the computer that you are using.  FaceTime will not work if you do not have the proper time zone set up for the location that you are in.  This past week, there were two other MacBook users I helped by simply telling them to set the Date/Time properly.  By the way, you described your problem very well, which makes it easier for us to help you.  Hope this solves your problem -- if not, post back and I can suggest other remedies.
    Wuz

  • I want to get a MacBook Air with 4 GB of memory and a 125 GB hard drive. Is it okay to use it with Premiere?

    I want to get a MacBook Air with 4 GB of memory and a 125 GB hard drive. Is it okay to use it with Premiere?

    Hi Mayelita521,
    Some have had success with light projects with the MacBook Air. Check out this article: WalterBiscardi.com | Yes Virginia, You Can Edit Video with a MacBook Air
    I'd love to have one for small personal projects.
    Thanks,
    Kevin

  • I have a 1.5-year-old MacBook Pro on version 10.7.5, and now I have to manually force quit after an hour or so of use to clear up memory (the unit slows down). Should I upgrade the OS, and if so, to what?

    I have a 1.5-year-old MacBook Pro on version 10.7.5, and now I have to manually force quit after an hour or so of use to clear up memory (the unit slows down). Should I upgrade the OS, and if so, to what, and what is the correct procedure?  Rick

    To determine if it is, boot into Safe Mode and see what happens. http://support.apple.com/kb/HT1455
    In the Activity Monitor, what are the Page Outs and Page Ins?
