0% Cache Hits?

I noticed my MacBook Pro getting slow lately. Rebooting seemed to help briefly, then it would slow down again. Laggy, lots of spinning beach balls of doom.
Checking MenuMeters, it shows the following:
Memory Usage:
   1,518.9MB used, 527.9MB free, 2,048MB total
Memory Pages:
   1,275.7MB active, 244.2MB wired
   404.2MB inactive, 123.7MB free
VM Statistics:
   1,463,651 pageins, 1,048,788 pageouts
   406,346 cache lookups, 65 cache hits (0.0%)
   624,972,406 page faults, 4,063,543 copy-on-writes
Swap Files:
   9 encrypted swap files present in /private/var/vm
   9 swap files at peak usage
   5,120MB total swap space (3,982MB used)
I rebooted yesterday and the 0% cache hits continue. Computer still feels slow.
I upgraded from Leopard to Snow Leopard about a month ago. Don't know if that has anything to do with it.
Is this normal behavior? I expected a much bigger number of cache hits. Something wrong with my cache, perhaps?
Thanks!
MacBookPro3,1 
2.2 GHz
2 GB RAM
Snow Leopard 10.6.8

I can't speak to why the software indicates 0% cache hits, but something is causing the system to overflow your RAM in a big way, judging by the 9(!) swapfiles.
My guess is that it's not Snow Leopard but Safari 5.1 that's causing the memory problems. The previous Safari worked fine for me, but the current version uses a crazy amount of RAM; even on my iMac with 4 GB of RAM it would create swap files left and right, and I'd have to reboot daily and quit Safari every few hours (there is a very long thread about that somewhere in this forum). You can watch memory usage balloon in Activity Monitor. Another culprit seems to be Quick Look Helper; the screen saver using iPhoto images would often cause pageouts.
My initial workaround was to use a different browser, but a better solution/workaround was adding substantially more RAM, bringing the machine up to 12 GB.
Safari still uses a load of RAM, but not enough to have to hit the hard drive. The machine runs like a champ now, and I've not restarted it in a week, despite leaving many apps open all the time.

Similar Messages

  • How can I increase my Library Cache Hit Ratio?

    I was wondering if anyone can help me out regarding the values that I am getting for my library cache hit stats.
    Half of the samples that I have taken at periodic intervals today have ranged from 89% to 96%.
    The SQL that I have used is:
    SELECT sysdate,
           SUM(pins - reloads) / SUM(pins) * 100
    FROM   v$librarycache;
    Also, running the AWR report for 4am to 4pm, see below:
    Shared Pool Statistics (AWR report)
                                  Begin    End
    Memory Usage %:               50.83    42.43
    % SQL with executions>1:      55.56    77.13
    % Memory for SQL w/exec>1:    74.12
    Regarding the current SGA settings:
    SQL> show parameter sga_target;
    NAME        TYPE         VALUE
    sga_target  big integer  1184M
    SQL> select pool, name, bytes/1048576 "Size in MB"
         from v$sgastat where name = 'free memory';
    POOL         NAME         Size in MB
    shared pool  free memory  135.742641
    large pool   free memory  15.9389648
    java pool    free memory  16
    The main questions are:
    a) Is the low library cache hit ratio particularly low?
    b) If I want to improve this figure, it is advised that the SHARED_POOL_SIZE parameter should be increased. Obviously Oracle itself is in charge of this at present, so what can I do to improve it?
    c) Are there any really good links to help me understand the figures that appear in the AWR report?

    a) Is the low library cache hit ratio particularly low?
    I didn't understand this. Can you please rephrase?
    b)
    Well, indeed the shared pool controls the allocation and everything else about the library cache, but that doesn't mean that increasing its value will stop all the issues. It's among the hardest parameters to tune, in fact, because what primarily goes into it, SQL statements, code and all that, is not written entirely by a DBA/tuner. It's written by developers, who sometimes do not-so-good things from the standpoint of making the shared pool work properly. A very commonly occurring mistake is the lack of bind variables and the constant use of literals. In that case we will eventually have a hard parse of every statement, which will eat up the shared pool sooner or later; no matter what size it may be, the result will be the same. The hit ratio is a guiding factor, not the end goal of tuning. It has been documented in so many places, here, in other forums, even in OU books, that looking at and tuning the hit ratio alone may not produce the expected or correct results. You should look at the parse statistics in the AWR report and see how they behave. How many Parse (hard) and Parse (total) operations are coming up? What are the SQL execute-to-parse figures, elapsed time, and related statistics? They will be helpful in getting things sorted out more nicely and correctly.
    I am sure I have missed far more than I said; surely you will get better advice on this. Have patience and wait.
    c) The documentation will be a good starting point; the Performance Tuning Guide there is a good resource.
    http://www.oracle.com/pls/db102/to_toc?pathname=server.102%2Fb14211%2Ftoc.htm&remark=portal+%28Getting+Started%29
    I am not sure about a specific book on AWR, but this one is good for overall knowledge of Oracle tuning.
    http://www.mcgraw-hill.co.uk/html/007222729X.html
    Aman....
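
    For reference, a minimal sketch of the parse-statistics check described above, assuming access to v$sysstat (the statistic names are standard in 10g/11g):
    -- Hedged sketch: compare hard parses to total parses instance-wide.
    -- A high hard-parse percentage suggests missing bind variables.
    SELECT tot.value  AS total_parses,
           hard.value AS hard_parses,
           ROUND(hard.value / NULLIF(tot.value, 0) * 100, 2) AS hard_parse_pct
    FROM   v$sysstat tot, v$sysstat hard
    WHERE  tot.name  = 'parse count (total)'
    AND    hard.name = 'parse count (hard)';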

  • Negative Values of Library Cache Hit Ratio in AWR Report

    Hi all,
    We are getting negative values for the library cache hit ratio in the AWR report of 11g (11.2.0.3) on Solaris[tm] OE (64-bit).
    Please suggest why it shows a negative value.
    Instance Efficiency Percentages (Target 100%)
                Buffer Nowait %:    99.87       Redo NoWait %:   99.99
                Buffer  Hit   %:    92.17    In-memory Sort %:  100.00
                Library Hit   %: -3,321.23      Soft Parse %:    81.95
             Execute to Parse %:    92.88         Latch Hit %:   95.11
    Parse CPU to Parse Elapsd %:    87.25     % Non-Parse CPU:   81.39
    Regards,
    Madhan
    Edited by: user12078989 on Jul 25, 2012 7:40 AM

    Hi Aman,
    The DB was not restarted during this time. We don't have production access; we receive the AWR report automatically from the customer daily. I am also seeing the issue in our local environment: the v$rowcache view shows negative values for the dc_users and dc_tablespaces parameters, so maybe it's related to the production issue...
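
    For reference, a minimal sketch of the dictionary-cache check implied above, assuming access to v$rowcache; negative counters for dc_users or dc_tablespaces would match the symptom described:
    -- Hedged sketch: list dictionary-cache rows whose counters look wrapped.
    SELECT parameter, gets, getmisses
    FROM   v$rowcache
    WHERE  parameter IN ('dc_users', 'dc_tablespaces')
       OR  gets < 0
       OR  getmisses < 0;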

  • Problem with the cache hit ratio

    Hello,
    I am having a problem with the cache hit ratio I am getting. I am sure, 100% sure, that something has got to be wrong with the cache hit ratio I am fetching!
    1) I will post the code that I am using to retrieve the cache hit ratio. I've seen about a thousand different equations, all equivalent in the end.
    In Oracle the cache hit ratio seems to be:
    cache hits / cache lookups,
    where cache hits <=> logical IO - physical reads
    and cache lookups <=> logical IO.
    Now some people use the 'session logical reads' stat from the view v$sysstat; others use db block gets + consistent gets; whatever. At the end of the day it's all the same, and this is what I use:
    SELECT (P1.value + P2.value - P3.value) AS CACHE_HITS,
           (P1.value + P2.value)            AS CACHE_LOOKUPS,
           P4.value                         AS MAX_BUFFS_SIZEB
    FROM   v$sysstat P1, v$sysstat P2, v$sysstat P3, v$parameter P4
    WHERE  P1.name = 'db block gets'
    AND    P2.name = 'consistent gets'
    AND    P3.name = 'physical reads'
    AND    P4.name = 'sga_max_size';
    2) The problem:
    The cache hit ratio I am retrieving cannot be correct. In this case I was benchmarking a HUGELY inefficient query, consisting of the union of 5 projections over the same source table, and Oracle is configured with a relatively small SGA of 300 MB. The query plan is awful; the database will read the source table 5 times.
    And I can see in the physical I/O statistics of the source tablespace that the total bytes read is approximately 5 times the size of the text file that I used to bulk load data into the database.
    Some of the relevant stats and wait events:
    db file scattered read: 1,129.93 seconds
    Elapsed time: 1,311.9 seconds
    CPU time: 179.84 seconds
    SGA max size: 314,572,800 bytes
    Total bytes read: 77,771,964,416 B (approximately 72 gigabytes)
    The source txt file loaded into the database was approximately 16 GB, so the number of reads was about 4.5 times the source datafile.
    I would say this: given the difference between CPU time and elapsed time, it is clear that the query spent almost all of its time doing db file scattered reads. How is it possible that I get the following cache hit ratio?
    Cache hit ratio: 0.92
    Cache hits: 109,680,186
    Cache lookups: 119,173,819
    I mean, only 8% of that logical I/O corresponded to physical I/O? It is just not possible.
    3) Procedure for taking stats:
    To retrieve these stats I snapshot the system twice: once before the query, and once after.
    But this is not done in a single session. In total 3 sessions are created: one session to retrieve the stats before the query, one session to run the query, and a last session to snapshot after the query.
    Could the problem, assuming there is one, be related to this:
    "The V$SESSTAT view contains statistics on a per-session basis and is only valid for the session currently connected. When a session disconnects all statistics for the session are updated in V$SYSSTAT. The values for the statistics are cleared until the next session uses them."
    What does this paragraph mean? Does it mean that v$sysstat only shows you the stats of the last session that closed? Or does it mean that v$sysstat is incremented with the statistics of each v$sesstat once a session terminates? If so, then my procedure for gathering those stats should be correct.
    Can anyone help me sort out the origin of such a high cache hit ratio, with so much I/O being done?

    sono99 wrote:
    Hi,
    First off, let me start by saying that there were many things you mentioned in your post that I could not understand: 1. because I am not an Oracle expert, I use whatever RDBMS whenever I need to; 2. because another problem has come up and, right now, I cannot inform myself enough to comprehend it all.
    Well, could it be that you need to understand the database you are working on in order to comprehend it? That is why we strongly advise you to read the Concepts manual first; you need to understand the architecture that Oracle uses, as well as the basic concepts of how Oracle does locking and maintains read consistency. It does these differently than other database engines, and some things become nonsense if looked at from the viewpoint of a single user.
    >
    quote:
    It would be useful to see the execution plan jhust in case you have simplified the problem so much that a critical detail is missing.
    First, the query code:
    [2009-10-20 15:11:59,141 INFO]: OUTPUT_GOBLER:>SQL> CREATE TABLE FAVFRIEND
    [2009-10-20 15:11:59,141 INFO]: OUTPUT_GOBLER:> 2 NOLOGGING TABLESPACE TARGET
    [2009-10-20 15:11:59,141 INFO]: OUTPUT_GOBLER:> 3 AS
    [2009-10-20 15:11:59,141 INFO]: OUTPUT_GOBLER:> 4 SELECT ID as USRID, FAVF1 as FAVF FROM PROFILE
    [2009-10-20 15:11:59,141 INFO]: OUTPUT_GOBLER:> 5 UNION ALL
    [2009-10-20 15:11:59,141 INFO]: OUTPUT_GOBLER:> 6 SELECT ID as USRID, FAVF2 AS FAVF FROM PROFILE
    [2009-10-20 15:11:59,141 INFO]: OUTPUT_GOBLER:> 7 UNION ALL
    [2009-10-20 15:11:59,141 INFO]: OUTPUT_GOBLER:> 8 SELECT ID as USRID, FAVF3 AS FAVF FROM PROFILE
    [2009-10-20 15:11:59,141 INFO]: OUTPUT_GOBLER:> 9 UNION ALL
    [2009-10-20 15:11:59,141 INFO]: OUTPUT_GOBLER:> 10 SELECT ID as USRID, FAVF4 AS FAVF FROM PROFILE
    [2009-10-20 15:11:59,141 INFO]: OUTPUT_GOBLER:> 11 UNION ALL
    [2009-10-20 15:11:59,141 INFO]: OUTPUT_GOBLER:> 12 SELECT ID as USRID, FAVF5 AS FAVF FROM PROFILE
    [2009-10-20 15:11:59,141 INFO]: OUTPUT_GOBLER:> 13 ;
    Now, although it is clear from the query that the statement is executed with NOLOGGING, I have disabled logging entirely for the tablespace.
    There are certain rules about nologging that may not be obvious. Again, this derives from the basic Oracle architecture, and if you use the wrong definitions of things like logging, you will be led down the primrose path to confusion.
    >
    Furthermore, yes, the RDBMS is a test RDBMS... I have dropped the database a few times... and I am constantly deleting and re-inserting data into the source database table named PROFILE.
    >
    I also make sure to check all the datafile statistics, and for this query the amount of RedoLog, Undo "Log", Templife used is negligible, practically zero.
    Create table is DDL, which has implied commits before and afterwards. There is a lot going on, some of it dependent on the volume of data returned. The Oracle database writer writes things out when it feels like it; there are situations where it might just leave it in memory for a while. With nologging, Oracle may not care that you can't perform recovery if it is interrupted. So you might want to look into statspack or EM to tell you what is going on; the datafile statistics may not be all that informative in this case.
    >
    Most of the I/O is reading; a little of it is writing.
    My idea is not to optimize this query; it is to understand how it performs.
    Well, have you read the Concepts manual?
    I have other implementations to test; namely, I'm having trouble with one of them.
    Furthermore, I doubt the query plan Oracle is using actually involves tablescans (as I'd like it to do), because in the wait events most of the wait time for this query is spent doing "db file scattered read", and I think this is different from a tablescan.
    Please look up the definition of [db file scattered read|http://download.oracle.com/docs/cd/B19306_01/server.102/b14211/instance_tune.htm#sthref703].
    >
    Do you really have to use sessions external to the query session ? Can you query v$mystat joined to v$statname from the session itself.
    No, I don't want to do that!
    I avoid as much as possible having the code I execute be implemented in Java.
    Why do you think Java has anything to do with this? In your session, desc v$mystat and v$statname; these are views you can look at.
    When I can avoid it I don't query the database directly through JDBC; I use the RDBMS command line client, which is supposed to be very robust.
    Er, is that sqlplus?
    So yes, I only connect to the database with JDBC... in the very last session.
    Of course, I could have put both the gather-stats-before-query and gather-stats-after-query steps in a single script: the script that would also be running the query.
    But that would cause me a number of problems, namely that some of the SQL I build has to be generated dynamically. And I don't want to be replicating the snapshotting code into every query script I make. This way I have one SQL script with the snapshotting code, and multiple scripts for running each query. I avoid code replication in this manner.
    Instrumentation is a large subject; dynamic SQL generation is something to be avoided if possible. Remember, Oracle is written with the idea that many people are going to be sharing code and the database, so it is optimized in that way. For SQL parsing in particular, if every SQL is different, you get a performance problem called "hard parsing". You can (and generally should, and sometimes can't avoid) use bind variables so that Oracle doesn't need to hard parse every SQL. In fact, this is one of those things that applies to other engines besides Oracle. I would recommend you read Tom Kyte's books; he explains what is going on in detail, including in some places the non-Oracle viewpoint.
    >
    Furthermore, since the database is not a production database, it is there so I can do my tests. I don't have to be concerned with what other sessions may be doing to my system. There are only the sessions I control.
    No, there are sessions Oracle controls. If you are on Unix you can easily see this, but there are ways to see it on Windows, too. In some cases, your own sessions can affect themselves.
    >
    then what is the array fetch size? If the array fetch size is large enough, the number of block visits would be similar to the number of physical block reads.
    I don't know what the arraysize you mention is. I have not touched that parameter, so whatever it is, it's the default.
    You should find out! You can go to http://tahiti.oracle.com and type "array fetch size" into the search box. You can also go to http://asktom.oracle.com and do the same thing, with some more interesting detail.
    >
    By the way, I don't get the query results into my client; the query results are dumped into a target output table.
    So, if the arraysize has something to do with the number of rows that Oracle returns to the client in each step... I think it doesn't matter.
    You may hear this phrase a lot:
    "It depends."
    >
    As for the query plan, if I am not mistaken, you can't get query plans for queries that are create-table-as-select.
    What?
    JG@TTST> explain plan for create table jjj as select * from product_master;
    Explained.
    JG@TTST> select count(*) from plan_table;
      COUNT(*)
             3
    I can, however, omit the CREATE TABLE part and just ask for the evaluation of the SELECT part of the query; I believe it should be the same.
    "Optimizer"     "Cost"     "Cardinality"     "Bytes"     "Partition Start"     "Partition Stop"     "Partition Id"     "ACCESS PREDICATES"     "FILTER PREDICATES"
    "SELECT STATEMENT"     "ALL_ROWS"     "2563"     "586110"     "15238860"     ""     ""     ""     ""     ""
    "UNION-ALL"     ""     ""     ""     ""     ""     ""     ""     ""     ""
    "TABLE ACCESS(FULL) SONO99.PROFILE"     ""     "512"     "117222"     "3047772"     ""     ""     ""     ""     ""
    "TABLE ACCESS(FULL) SONO99.PROFILE"     ""     "513"     "117222"     "3047772"     ""     ""     ""     ""     ""
    "TABLE ACCESS(FULL) SONO99.PROFILE"     ""     "513"     "117222"     "3047772"     ""     ""     ""     ""     ""
    "TABLE ACCESS(FULL) SONO99.PROFILE"     ""     "513"     "117222"     "3047772"     ""     ""     ""     ""     ""
    "TABLE ACCESS(FULL) SONO99.PROFILE"     ""     "513"     "117222"     "3047772"     ""     ""     ""     ""     ""
    This query plan was taken from sql developer, exported to txt, and the PROFILE table here has only 100k tuples.
    Right now I am more concerned with testing the MODEL query, which Oracle doesn't seem to be able to run any more... but that is a matter for another thread.
    Regarding this plan: the UNION ALL seems to be more than just a binary operator... it seems to be n-ary.
    The UNION ALL in that execution plan seems to take as its leaves five 99sono.PROFILE tables and do a table scan on them all. So I'd say that the RDBMS should only scan each database block once, not 5 times.
    But it doesn't seem to be so. It seems like what Oracle is doing is scanning each table completely and then moving on to the next SELECT statement in the UNION ALL, because the amount of the source table read was 5 times the size of the source table; Oracle didn't reuse the blocks it had read.
    But this is just my feeling.
    Your feeling is uninteresting. Telling us what you really hope to accomplish might be more interesting.
    Anyway, in terms of consistent gets, how many consistent gets should the RDBMS be doing? 5? One for each table block?
    It depends.
    >
    My best regards,
    Nuno (99sono xp).
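
    For reference, a minimal sketch of the same-session approach suggested in the reply, assuming the session has access to v$mystat and v$statname; snapshot these values before and after the query and diff them, with no second or third session needed:
    -- Hedged sketch: read this session's own counters.
    SELECT sn.name, ms.value
    FROM   v$mystat ms, v$statname sn
    WHERE  sn.statistic# = ms.statistic#
    AND    sn.name IN ('db block gets', 'consistent gets', 'physical reads');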

  • Buffer cache hit ratio query

    Hi,
    I would appreciate some advice please.
    My Oracle 10.2.0.4.0 database starts the day with a buffer cache hit ratio of almost 100%, but this drops gradually over the course of the day as the system gets busier. Is this something I need to be concerned about if no performance problem has been reported by the users?
    I would have thought this is a normal situation, i.e. as the system gets busier, I would expect fewer calls to the buffer cache to be satisfied, because many more calls are competing for the memory?
    Please note that I've done some reading up on this but would like some suggestions from more experienced people than myself on what I should normally expect and what should or should not be a concern.
    thanks

    user8869798 wrote:
    hi,
    thanks again for your response, and apologies for not being able to get back yesterday.
    No problem :).
    >
    - What I am doing and how - it is the backend database for our Finance application; many users with various transactions.
    Okay, that sounds like a "normal" database with a normal workload.
    - Configuring the buffer cache size - I haven't done anything manually yet; it's all been as installed. This is what I'm trying to figure out: whether it is something I should be looking into, simply because of the dropping hit ratio and not because of any reported performance problem.
    If you don't have much knowledge of this, it's best to make use of the advisories, which will tell you in a better, graphical way whether you should be worried. Look at the view v$db_cache_advice, which can suggest whether you need to tweak the buffer cache or not.
    >
    - Looking at the queries - What is the easiest way of doing this in an environment where many users are running different queries? What's the easiest way to identify queries that we may need to have a closer look at?
    Easiest way? Well, let the users come back to you ;-).
    >
    - Basically, I'm just trying to ascertain whether or not I need to be concerned that my hit ratio drops below 89% even though no performance problem has been reported. If it is something that I should look into, then what is the best way to go about it?
    I believe that's been answered by a couple of us already: nope.
    Aman....
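
    For reference, a minimal sketch of the advisory query suggested above, assuming an 8 KB block size and db_cache_advice left at its default of ON (column names per the 10g documentation):
    -- Hedged sketch: estimated physical reads for candidate cache sizes.
    SELECT size_for_estimate AS cache_mb,
           size_factor,
           estd_physical_reads
    FROM   v$db_cache_advice
    WHERE  name = 'DEFAULT'
    AND    block_size = 8192
    ORDER  BY size_for_estimate;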

  • Buffer Cache hit Ratio

    Hi All,
    My DB Version: 10.2.0
    OS: Windows Server 2003
    I run the following script to get hit ratios:
    SELECT cur.inst_id, 'Buffer Cache Hit Ratio ' "Ratio", to_char(ROUND((1-(phy.value / (cur.value + con.value)))*100,2)) "Value"
    FROM gv$sysstat cur, gv$sysstat con, gv$sysstat phy
    WHERE cur.name = 'db block gets'
    AND con.name = 'consistent gets'
    AND phy.name = 'physical reads'
    and phy.inst_id=1
    and cur.inst_id=1
    and con.inst_id=1
    union all
    SELECT cur.inst_id,'Buffer Cache Hit Ratio ' "Ratio", to_char(ROUND((1-(phy.value / (cur.value + con.value)))*100,2)) "Buffer Cache Hit Ratio"
    FROM gv$sysstat cur, gv$sysstat con, gv$sysstat phy
    WHERE cur.name = 'db block gets'
    AND con.name = 'consistent gets'
    AND phy.name = 'physical reads'
    and phy.inst_id=2
    and cur.inst_id=2
    and con.inst_id=2
    union
    SELECT inst_id, 'Library Cache Hit Ratio ' "Ratio", to_char(Round(sum(pins) / (sum(pins)+sum(reloads)) * 100,2)) "Library Cache Hit Ratio"
    FROM gv$librarycache group by inst_id
    union
    SELECT inst_id,'Dictionary Cache Hit Ratio ' "Ratio", to_char(ROUND ((1 - (SUM (getmisses) / SUM (gets))) * 100, 2)) "Percentage"
    FROM gv$rowcache group by inst_id
    union
    Select inst_id, 'Get Hit Ratio ' "Ratio",to_char(round((sum(GETHITRATIO))*100,2)) "Get Hit"--, round((sum(PINHITRATIO))*100,2)"Pin Hit"
    FROM GV$librarycache
    where namespace in ('SQL AREA')
    group by inst_id
    union
    Select inst_id, 'Pin Hit Ratio ' "Ratio", to_char(round((sum(PINHITRATIO))*100,2))"Pin Hit"
    FROM GV$librarycache
    where namespace in ('SQL AREA')
    group by inst_id
    union
    select a.inst_id,'Soft-Parse Ratio ' "Ratio", to_char(round(100 * ((a.value - b.value) / a.value ),2)) "Soft-Parse Ratio"
    from (select inst_id,value from gv$sysstat where name like 'parse count (total)') a,
    (select inst_id, value from gv$sysstat where name like 'parse count (hard)') b
    where a.inst_id = b.inst_id
    union
    select a.inst_id,'Execute Parse Ratio ' "Ratio", to_char(round(100 - ((a.value / b.value)* 100),2)) "Execute Parse Ratio"
    from (Select inst_id, value from gv$sysstat where name like 'parse count (total)') a,
    (select inst_id, value from gv$sysstat where name like 'execute count') b
    where a.inst_id = b.inst_id
    union
    select a.inst_id,'Parse CPU to Elapsed Ratio ' "Ratio", to_char(round((a.value / b.value)* 100,2)) "Parse CPU to Elapsed Ratio"
    from (Select inst_id, value from gv$sysstat where name like 'parse time cpu') a,
    (select inst_id, value from gv$sysstat where name like 'parse time elapsed') b
    where a.inst_id = b.inst_id
    union
    Select a.inst_id,'Chained Row Ratio ' "Ratio", to_char(round((a.val/b.val)*100,2)) "Chained Row Ratio"
    from (SELECT inst_id, SUM(value) val FROM gV$SYSSTAT WHERE name = 'table fetch continued row' group by inst_id) a,
    (SELECT inst_id, SUM(value) val FROM gV$SYSSTAT WHERE name IN ('table scan rows gotten', 'table fetch by rowid') group by inst_id) b
    where a.inst_id = b.inst_id
    union
    Select inst_id,'Latch Hit Ratio ' "Ratio", to_char(round(((sum(gets) - sum(misses))/sum(gets))*100,2)) "Latch Hit Ratio"
    from gv$latch
    group by inst_id
    /* Available from 10g
    union
    select inst_id, metric_name, to_char(value)
    from gv$sysmetric
    where metric_name in ( 'Database Wait Time Ratio', 'Database CPU Time Ratio')
    and intsize_csec = (select max(intsize_csec) from gv$sysmetric)
    */
    order by inst_id
    /
    What I am getting after this is:
    INST_ID  Ratio                        Value
    1        Buffer Cache Hit Ratio       .83
    1        Chained Row Ratio            0
    1        Dictionary Cache Hit Ratio   77.5
    1        Execute Parse Ratio          45.32
    1        Get Hit Ratio                75.88
    1        Latch Hit Ratio              100
    1        Library Cache Hit Ratio      99.52
    1        Parse CPU to Elapsed Ratio   24.35
    1        Pin Hit Ratio                95.24
    1        Soft-Parse Ratio             89.73
    I have a doubt about the Buffer Cache Hit Ratio; can anyone please help me understand it?

    Buffer Cache Hit Ratio .83
    That is quite a weird value. It seems your system is doing almost nothing but physical reads, which seems unrealistic.
    I had a 10.2.0.1 database where I saw this kind of result for the cache hit ratio, and after patching it to 10.2.0.4 it started showing results correctly.
    It could be some Oracle 10g bug that made the hit ratio display oddly in the data dictionary. Can you try patching your database to the latest 10g PSU, or contact Oracle Support for a one-off patch for this problem?
    Salman

  • Buffer Cache hit ratio low (40.42%)

    Can someone help me with a good guide or advice on how to increase my buffer cache hit ratio? If I am not wrong, databases should have an 85%+ hit ratio for good performance.
    thanks

    Here is a script that will bump up your BCHR:
    http://www.oracledba.co.uk/tips/choose.htm
    I think you understood from that that you shouldn't worry about your BCHR, but should instead focus on your real problems, i.e. the parts of your app that run slowly while people are expecting a quicker response.
    Gints Plivna
    http://www.gplivna.eu

  • Buffer Cache Hit ratio 48

    I have just checked, and the buffer cache hit ratio is 48% in my production database. Is that any problem?
    Thank You
    Prasad

    Maybe, maybe not.
    You might want to post your question into a more appropriate forum : General Database Discussions
    Nicolas.

  • Low database cache hit ratio (85%)

    Hi Guys,
    I understand that a high db cache hit ratio doesn't indicate that the database is healthy.
    The database might be doing additional "physical" reads due to un-tuned SQL.
    However, can someone explain why a low cache hit ratio might not indicate the db is unhealthy, as in the db needing additional memory allocated?
    What I can think of is:
    1. The database might query different data most of the time. As such, the data is not read again from the cache before it ages out. Even if I add more memory, the data might not be read again (from memory).
    2. ?
    3. ?
    I'm quite reluctant to list the databases with below-90% hit ratios as part of the monthly report to management. To them, below 90% means unhealthy.
    If these ratios are to be used in the monthly report, there will have to be a long section explaining why these ratios are not met even though there is no performance concern.
    As such, I need your expert advice on this.
    thanks
    Edited by: Chewy on Mar 13, 2012 1:23 AM

    Nikolay Savvinov wrote:
    In isolation, ratios are useless, but trends in ratios can point to potential problem. If your BCHR is steadily degrading over time, this is something to worry about (you'll have to examine your application for scalability issues)
    I used to think that there was a case for trending a ratio in the days when databases were small, simple and (by modern standards) not very busy. But I'm no longer sure it was even a good idea then. How much of a change do you need to see before you start worrying, and what time granularity would you take as your baseline? When a ratio varies between 98% and 99% during daylight hours, how do you spot a very large problem that's only going to make a change of 0.01% over the course of a couple of weeks?
    I really don't think there's any good SIMPLE way of producing a management sound-bite for every database in the system; each database needs a personal touch, and the number of figures you need to supply on each is not going to be easy to grasp without some graphic assistance. A suggestion I have is simply to pick three "representative" queries from the application (one "small", one "medium" and one "large") and run them once every hour, capturing the plan_hash_value, elapsed time, disk reads, buffer gets, and CPU for each. A daily graph, 4 lines each, of each query will give management the big picture of variation in response time; a longer-term graph based on the daily average with (say) best and worst excluded will give a trend warning. Obviously each database (or even each application within a database) needs its own three queries, and there may be periods during the day when it is not relevant to worry about a particular query.
    (NB In the past I've run baseline queries from a pl/sql package called by dbms_job, or dbms_scheduler, and stored the resulting cost figures in the database - capturing all the session stats, wait event and time model information)
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    Author: Oracle Core
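
    For reference, a minimal sketch of the baseline harness Jonathan describes; BASELINE_LOG, CAPTURE_BASELINE and the sample query are hypothetical names, and a real version would also capture plan_hash_value, wait events and time-model data:
    -- Hedged sketch: run one representative query each hour and log its cost.
    -- Note: the owning schema needs direct SELECT grants on v_$mystat and
    -- v_$statname for this to compile inside a stored procedure.
    CREATE TABLE baseline_log (
      sample_time DATE,
      elapsed_cs  NUMBER,   -- centiseconds, from dbms_utility.get_time
      disk_reads  NUMBER,
      buffer_gets NUMBER
    );

    CREATE OR REPLACE PROCEDURE capture_baseline AS
      t0 NUMBER;
      n  NUMBER;
      dr NUMBER;
      bg NUMBER;
      FUNCTION mystat(p_name VARCHAR2) RETURN NUMBER IS
        v NUMBER;
      BEGIN
        SELECT ms.value INTO v
        FROM   v$mystat ms, v$statname sn
        WHERE  sn.statistic# = ms.statistic#
        AND    sn.name = p_name;
        RETURN v;
      END;
    BEGIN
      dr := mystat('physical reads');
      bg := mystat('session logical reads');
      t0 := dbms_utility.get_time;
      SELECT COUNT(*) INTO n FROM all_objects;  -- stand-in for the "medium" query
      INSERT INTO baseline_log VALUES (
        SYSDATE,
        dbms_utility.get_time - t0,
        mystat('physical reads') - dr,
        mystat('session logical reads') - bg);
      COMMIT;
    END;
    /

    BEGIN
      dbms_scheduler.create_job(
        job_name        => 'BASELINE_JOB',
        job_type        => 'STORED_PROCEDURE',
        job_action      => 'CAPTURE_BASELINE',
        repeat_interval => 'FREQ=HOURLY',
        enabled         => TRUE);
    END;
    /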

  • MaxDB Catalog cache hit ratio too low

    Hello,
    We use MaxDB versions 7.6 and 7.7 on various systems.
    Each week the EarlyWatch report indicates to me that the catalog cache hit ratio is too low (< 60%).
    We modified the value of the parameter CAT_CACHE_SIZE from 10000 to 20000, then 50000, then 100000, then 200000, then 400000.
    That did not have any effect on the catalog cache hit ratio.
    The SDN forums and SAP notes give no method to improve this ratio other than continually increasing the value of the parameter CAT_CACHE_SIZE.
    Do you have another solution?
    Thank you in advance
    Best regards
    Frédéric Blaise
    e-Kenz S.A.

    Hi,
    Don't be too afraid of this value. If you use (internally) some statements where all tables have to be checked, like update statistics (with certain options), or do some selects on statistical info, then this value goes down, as there are so many tables in an R/3 system that no CAT_CACHE_SIZE value will be sufficient to hold all their descriptions in the cache. With some of these commands the cache is therefore turned over again and again, reducing the hit ratio. It is a good idea to increase the value so as to hold many of the normally used descriptions in parallel, but unlike with other caches, the hit ratio will always be low.
    Elke

  • Manual calculation of 'Buffer Cache hit ratio'

    Using: Oracle 10.2.0.1.0, Redhat 4, 64bit.
    The manual calculation of 'Buffer Cache hit ratio' is very far off from what is shown in statspack.
    Statspack shows:
    Instance Efficiency Percentages
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                Buffer Nowait %:  100.00       Redo NoWait %:   99.97
                Buffer  Hit   %:   99.18    In-memory Sort %:  100.00
                Library Hit   %:   90.43        Soft Parse %:   52.56
             Execute to Parse %:   80.14         Latch Hit %:   99.98
    Parse CPU to Parse Elapsd %:   96.21     % Non-Parse CPU:   56.89
    Manual calculation (Got this formula from Sybex PT book, page 275):
    SQL> select name, 1-(PHYSICAL_READS/(DB_BLOCK_GETS+CONSISTENT_GETS))
         from v$buffer_pool_statistics;
    NAME     1-(PHYSICAL_READS/(DB_BLOCK_GETS+CONSISTENT_GETS))
    DEFAULT  .700247215
    Any idea why using v$buffer_pool_statistics gives the wrong result?
    Thanks, regards,

    SYS@oradocms11> select (P1.value + P2.value - P3.value)/(P1.value + P2.value)*100
    2 from v$sysstat P1, v$sysstat P2, v$sysstat P3
    3 where P1.name = 'db block gets'
    4 and P2.name = 'consistent gets'
    5 and P3.name = 'physical reads';
    (P1.VALUE+P2.VALUE-P3.VALUE)/(P1.VALUE+P2.VALUE)*100
    99.6977839
    SYS@oradocms11> select name, 1 - (PHYSICAL_READS/(DB_BLOCK_GETS+CONSISTENT_GETS)) from v$buffer_pool_statistics
    NAME 1-(PHYSICAL_READS/(DB_BLOCK_GETS+CONSISTENT_GETS))
    DEFAULT .997009542
    In my case, both give almost the same result.
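
    One likely explanation for the original discrepancy is the time window: the statspack figure covers only the snapshot interval, while both manual formulas aggregate everything since instance startup. Another adjustment sometimes applied is excluding direct-path reads, which bypass the buffer cache; a hedged variant using that adjustment (the same formula appears in a later thread on this page):
    SELECT ROUND((1 - (phy.value - phyd.value) /
                      (cur.value + con.value)) * 100, 2) AS bchr_pct
    FROM   v$sysstat cur, v$sysstat con, v$sysstat phy, v$sysstat phyd
    WHERE  cur.name  = 'db block gets'
    AND    con.name  = 'consistent gets'
    AND    phy.name  = 'physical reads'
    AND    phyd.name = 'physical reads direct';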

  • Counter negative cache hits and long searches

    hi my name is sumit.
    In my network there is a frame-relay PVC on a 7206 VXR router, on which two T1s are terminated (from an S8700 EPABX); the bandwidth of this PVC is 600 kbps. We are using RTP and TCP header compression (codec g729r8). The 7206 router is connected to an IGX through an HSSI cable, and there is IGX-to-IGX connectivity from India to the US. At the US site there is the same setup, connected onward to the AT&T cloud. The traffic on this PVC is mainly inbound, coming from the US AT&T cloud.
    We are facing a lot of voice quality issues, and when I use "show frame-relay ip rtp header-compression", the "negative cache hits" and "long searches" counters increase rapidly.
    Can somebody help me solve the problem, and tell me whether there is any effect on voice quality if these counters increase rapidly?

    hi Neo, thanks for the reply.
    Are you from Datacraft? I am also from Datacraft.
    I have already used this URL, but it gives only one statement about the counters.
    What I am asking is: what could be the impact of these counters on voice?
    Please reply.
    thanks

  • Effect of negative cache hits and long searches

    hi my name is sumit.
    In my network there is a frame-relay PVC on a 7206 VXR router, on which two T1s are terminated (from an S8700 EPABX); the bandwidth of this PVC is 600 kbps. We are using RTP and TCP header compression (codec g729r8). The 7206 router is connected to an IGX through an HSSI cable, and there is IGX-to-IGX connectivity from India to the US. At the US site there is the same setup, connected onward to the AT&T cloud. The traffic on this PVC is mainly inbound, coming from the US AT&T cloud.
    We are facing a lot of voice quality issues, and when I use "show frame-relay ip rtp header-compression", the "negative cache hits" and "long searches" counters increase rapidly.
    Can somebody help me solve the problem, and tell me whether there is any effect on voice quality if these counters increase rapidly?

    Is there anyone who can answer this query?
    Any update?

  • Cache hit percentage below 60 %

    Hello everybody!
    I have too many "checkpoint not complete" messages in the alert log file, and too-frequent logfile switching during peak time.
    The redo log file size is 500MB.
    Also, I found the cache hit percentage is 59%.
    It is a 10g R2 DB on Unix.
    Any advice on that?
    I am a beginner and appreciate your responses.
    Thank you

    Check the log switch frequency per hour during the peak hours. If it is more than 10 per hour, you need to increase the size of your redo log files. Also, for the "checkpoint not complete" message, increase the size of your redo logs and add more redo log groups.
    what should be the redo log size in an OLTP environment like banks? (10g)
    Read the link.
    For the low cache hit percentage, check for full table scans; you can also increase the buffer cache.
    Regards
    Asif Kabir
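
    For reference, a minimal sketch of the log-switch check suggested above, assuming access to v$log_history:
    -- Hedged sketch: log switches per hour; more than ~10/hour suggests
    -- the redo logs are too small for the workload.
    SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour,
           COUNT(*)                               AS switches
    FROM   v$log_history
    GROUP  BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
    ORDER  BY 1;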

  • Database cache hit ratio is very low

    Hi All,
    my live database cache hit ratio is very low. Can someone propose a relevant solution for that?
    SELECT (1-((phy.value-phyd.value) / (cur.value + con.value))) * 100 "Cache Hit ratio"
    FROM v$sysstat cur, v$sysstat con, v$sysstat phy, v$sysstat phyd
    WHERE cur.name = 'db block gets'
    AND con.name = 'consistent gets'
    AND phy.name = 'physical reads'
    AND phyd.name = 'physical reads direct';
    Cache Hit ratio
    47.99717699362490769958384625413175240442
    select NAME, VALUE from v$parameter where name like '%db_cache_size%';
    NAME           VALUE
    db_cache_size  335544320
    select name, value from v$sysstat
    where name in ('db block gets', 'consistent gets', 'physical reads');
    NAME             VALUE
    db block gets    30047032214
    consistent gets  691770165681
    physical reads   376120181932
    Thanks

    user1093072 wrote:
    Hi all, thanks a lot for your answers. I am getting some query timeouts, and while investigating got low ratios. Can someone tell me how to make a decision using these ratios?
    Buffer Hit % 95.71
    Cache Hit ratio 47.99
    The fact that you get two different values for "the same thing" is a clue that (a) your SQL statement is suspect and (b) your logic is flawed.
    Your SQL is going to give you some kind of (meaningless) average since the database started up. The report you showed looks like part of an AWR or statspack report, which will be covering a short interval of time. The difference between the two percentages may mean the two approaches are using different formulae, or it may mean that the result is highly dependent on the time window.
    Since you have access to AWR/statspack - run off a report for a (short) interval when the system is performing well, and for when the system is performing badly, and compare them.
    As a starting point you could post the two sections "Load Profile" and "Top 5 Timed Events" to the forum for initial comment. If you do so, please make sure you use the "code" tags (see end of post) to make the output readable or you may get no replies.
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
    To post code, statspack/AWR report, execution plans or trace files, start and end the section with the tag {noformat}{noformat} (lowercase, curly brackets, no spaces) so that the text appears in fixed format.
    There is a +"Preview"+ tab at the top of the text entry panel. Use this to check what your message will look like before you post the message. If it looks a complete mess you're unlikely to get a response. (Click on the +"Plain text"+ tab if you want to edit the text to tidy it up.)                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    
                                                                                                                                                                                                                                                                                                                                                                                                                                               
