Data Buffer Cache Quality

Hi All,
Can somebody please tell me some ways in which I can improve the data buffer quality? Presently it is 51.2%. The DB version is 10.2.0.2.0.
I want to know what factors I need to keep in mind if I want to increase DB_CACHE_SIZE.
Also, I want to know how I can find out the cache hit ratio.
Further, I want to know which are the most frequently accessed objects in my DB.
Thanks and Regards,
Nick.
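
For the cache hit ratio question, here is a minimal sketch using v$sysstat (the figure is cumulative since instance startup, so treat it as a rough indicator rather than a tuning target):
SELECT ROUND((1 - phy.value / (cur.value + con.value)) * 100, 2) AS "Hit Ratio %"
  FROM v$sysstat cur, v$sysstat con, v$sysstat phy
 WHERE cur.name = 'db block gets'
   AND con.name = 'consistent gets'
   AND phy.name = 'physical reads';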

Nick-- wud b DBA wrote:
Hi Aman,
Thanks. Can you please give the appropriate query for that?
And moreover, when I run:
SQL> desc V$SEGMENT-STATISTICS;
it gives the following error:
SP2-0565: Illegal identifier.
Regards,
Nick.

LOL dude, I put that in by mistake. It's a dash (-) sign but we need an underscore (_) sign: V$SEGMENT_STATISTICS.
About the query, it depends on what you really mean by "most used object". If you mean the objects that are undergoing lots of reads and writes, this may help:
SELECT Rownum AS Rank,
Seg_Lio.*
FROM (SELECT St.Owner,
St.Obj#,
St.Object_Type,
St.Object_Name,
St.VALUE,
'LIO' AS Unit
FROM V$segment_Statistics St
WHERE St.Statistic_Name = 'logical reads'
ORDER BY St.VALUE DESC) Seg_Lio
WHERE Rownum <= 10
UNION ALL
SELECT Rownum AS Rank,
Seq_Pio_r.*
FROM (SELECT St.Owner,
St.Obj#,
St.Object_Type,
St.Object_Name,
St.VALUE,
'PIO Reads' AS Unit
FROM V$segment_Statistics St
WHERE St.Statistic_Name = 'physical reads'
ORDER BY St.VALUE DESC) Seq_Pio_r
WHERE Rownum <= 10
UNION ALL
SELECT Rownum AS Rank,
Seq_Pio_w.*
FROM (SELECT St.Owner,
St.Obj#,
St.Object_Type,
St.Object_Name,
St.VALUE,
'PIO Writes' AS Unit
FROM V$segment_Statistics St
WHERE St.Statistic_Name = 'physical writes'
ORDER BY St.VALUE DESC) Seq_Pio_w
WHERE Rownum <= 10;

But if you are looking for the objects seeing the most waits, then this query may help:
select * from
   (select
      DECODE
      (GROUPING(a.object_name), 1, 'All Objects', a.object_name)
   AS "Object",
sum(case when
   a.statistic_name = 'ITL waits'
then
   a.value else null end) "ITL Waits",
sum(case when
   a.statistic_name = 'buffer busy waits'
then
   a.value else null end) "Buffer Busy Waits",
sum(case when
   a.statistic_name = 'row lock waits'
then
   a.value else null end) "Row Lock Waits",
sum(case when
   a.statistic_name = 'physical reads'
then
   a.value else null end) "Physical Reads",
sum(case when
   a.statistic_name = 'logical reads'
then
   a.value else null end) "Logical Reads"
from
   v$segment_statistics a
where
   a.owner like upper('&owner')
group by
   rollup(a.object_name)) b
where (b."ITL Waits">0 or b."Buffer Busy Waits">0);

This query's reference: http://www.dba-oracle.com/t_object_wait_v_segment_statistics.htm
So it depends on what basis you want to rank the objects.
About the cache increase, are you seeing any wait events related to buffer cache or DBWR in the statspack report?
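Also, before growing DB_CACHE_SIZE, one useful sketch (assuming db_cache_advice is ON, which it is by default when statistics_level is TYPICAL) is the buffer cache advisory, which estimates physical reads at a range of cache sizes:
SELECT size_for_estimate AS cache_mb,
       size_factor,
       estd_physical_read_factor,
       estd_physical_reads
  FROM v$db_cache_advice
 WHERE name = 'DEFAULT'
 ORDER BY size_for_estimate;
If estd_physical_read_factor barely improves for size_factor values above 1, a bigger cache is unlikely to help, and the 51.2% quality figure is better attacked through the SQL doing the reads.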
HTH
Aman....

Similar Messages

  • Data Buffer Cache Error Message

    I'm using a load rule that builds a dimension on the fly and getting the following error: "Not enough memory to allocate the Data Buffer Cache [adDatInitCacheParamsAborted]".
    I've got 4 other databases which are set up the same as this one and I'm not getting this error. I've checked all the settings and I think they're all the same.
    Anyone have any idea what this error could mean? I can be reached at [email protected]

    Hi,
    Same issue, running Vista too. This problem is recent. It may be due to the last iTunes update (iTunes 11.2.23).

  • Data buffer cache has undo information...??

    Does the data buffer in the SGA hold any undo information?

    920273 wrote:
    in the SGA we have the data buffer..... in the data buffer cache, if I am updating a block, is any undo information maintained in the buffer cache regarding the change I have made to the block?
    Let's take an example: if I am updating a block, changing the value '2' to '4', and DBWR has not yet written to the datafiles, is any undo information maintained in the buffer cache in this case?

    Hmmm.. When you update the value 2 to 4 and it is still uncommitted, the transaction is active and the modified block stays in the buffer cache. Of course, if the cache fills up or a checkpoint is performed, this modified data is also written to the data files. So you should not assume DBWR never writes to the datafiles; you can't predict when the cache will fill or when a checkpoint will occur.
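    Incidentally, one way to see that undo does live in the buffer cache is to group v$bh on its class column (a sketch; class 1 is ordinary data blocks, while higher class numbers cover things like segment headers and undo header/undo blocks):
    SQL> select class#, count(*) from v$bh group by class# order by class#;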

  • Data Buffer Cache!

    Hi Guru(s),
    Can anyone please tell me how we can check the size of the data buffer cache?
    Please help.
    Regards,
    Rajeev K

    Hi,
    Here is what I would do:
    First, let's have a look at what views we have available to us:
    SQL> select table_name from dict where table_name like '%SGA%';
    TABLE_NAME
    DBA_HIST_SGA
    DBA_HIST_SGASTAT
    DBA_HIST_SGA_TARGET_ADVICE
    V$SGA
    V$SGAINFO
    V$SGASTAT
    V$SGA_CURRENT_RESIZE_OPS
    V$SGA_DYNAMIC_COMPONENTS
    V$SGA_DYNAMIC_FREE_MEMORY
    V$SGA_RESIZE_OPS
    V$SGA_TARGET_ADVICE
    GV$SGA
    GV$SGAINFO
    GV$SGASTAT
    GV$SGA_CURRENT_RESIZE_OPS
    GV$SGA_DYNAMIC_COMPONENTS
    GV$SGA_DYNAMIC_FREE_MEMORY
    GV$SGA_RESIZE_OPS
    GV$SGA_TARGET_ADVICE
    19 rows selected.

    v$sgainfo looks pretty promising; what does it contain?
    SQL> desc v$sgainfo
    Name                                                              Null?    Type
    NAME                                                                       VARCHAR2(32)
    BYTES                                                                      NUMBER
    RESIZEABLE                                                                 VARCHAR2(3)

    And how many rows are in it?
    SQL> select count(*) from v$sgainfo;
      COUNT(*)
            12

    Not too many, so let's look at them all, with a little formatting:
    SQL> select name, round(bytes/1024/1024,0) MB, resizeable from v$sgainfo order by name;

    Hope that helps,
    Rob
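
    If you just want the single number, two quicker checks (a sketch; the row name below is as it appears in v$sgainfo on 10g):
    SQL> select name, round(bytes/1024/1024) MB from v$sgainfo where name = 'Buffer Cache Size';
    SQL> show parameter db_cache_size
    Note that under automatic SGA management (SGA_TARGET > 0) the v$sgainfo figure reflects the current actual size, while db_cache_size shows only the configured minimum.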

  • Data Buffer Cache Limit

    Is there any way that I could signal the data buffer cache to write all data to the data files if the amount of dirty blocks reaches, say, 50 MB?
    I am processing BLOBs, one blob at a time, some of which exceed 100 MB in size, and the difficult thing is that I cannot write to disk until the whole blob is finished, as it is one transaction.
    If anyone is going to suggest open, process, close, commit... well, I tried that, but it also gives the error "no free buffers in buffer pool", and this comes at twice the buffer size: a 100 MB file when db_cache_size is 50 MB.
    Any ideas?

    Hello,
    I am using Oracle 9.0.1.3.1.
    I am getting the error ORA-00379: no free buffers available in buffer pool DEFAULT for block size 8K.
    My init.ora file is:
    # Copyright (c) 1991, 2001 by Oracle Corporation
    # Cache and I/O
    db_block_size=8192
    db_cache_size=104857600
    # Cursors and Library Cache
    open_cursors=300
    # Diagnostics and Statistics
    background_dump_dest=C:\oracle\admin\iasdb\bdump
    core_dump_dest=C:\oracle\admin\iasdb\cdump
    timed_statistics=TRUE
    user_dump_dest=C:\oracle\admin\iasdb\udump
    # Distributed, Replication and Snapshot
    db_domain="removed"
    remote_login_passwordfile=EXCLUSIVE
    # File Configuration
    control_files=("C:\oracle\oradata\iasdb\CONTROL01.CTL", "C:\oracle\oradata\iasdb\CONTROL02.CTL", "C:\oracle\oradata\iasdb\CONTROL03.CTL")
    # Job Queues
    job_queue_processes=4
    # MTS
    dispatchers="(PROTOCOL=TCP)(PRE=oracle.aurora.server.GiopServer)", "(PROTOCOL=TCP)(PRE=oracle.aurora.server.SGiopServer)"
    # Miscellaneous
    aq_tm_processes=1
    compatible=9.0.0
    db_name=iasdb
    # Network Registration
    instance_name=iasdb
    # Pools
    java_pool_size=41943040
    shared_pool_size=33554432
    # Processes and Sessions
    processes=150
    # Redo Log and Recovery
    fast_start_mttr_target=300
    # Sort, Hash Joins, Bitmap Indexes
    pga_aggregate_target=33554432
    sort_area_size=524288
    # System Managed Undo and Rollback Segments
    undo_management=AUTO
    undo_tablespace=UNDOTBS
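
    Two sketches that may help here (the table and column names below are hypothetical): either raise db_cache_size above the largest BLOB you process, or take the LOB out of the buffer cache with NOCACHE storage, so LOB reads and writes use direct I/O instead of competing for buffers:
    -- in init.ora: db_cache_size=209715200 (200 MB, larger than the biggest BLOB)
    ALTER TABLE blob_docs MODIFY LOB (doc_body) (NOCACHE);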

  • How data buffer cache is handled

    hi all,
    I have a doubt about how the buffer cache is managed. I read that the server process will read blocks from the data file and keep them in the buffer cache if they are not already present there. Suppose two users from different sessions issue the same statement at the same time, they are picked up by different server processes, and upon checking both find that the block is not present in the SGA, so they both go and read the block. Isn't that unnecessary, or am I wrong? Please let me know how this is handled.
    Regards
    Suresh

    Hi tinku,
    I understand what you mean.. but my problem is different.
    suppose there are two sessions
    first session issues a statement:
    Select * from emp where empno = 12;
    and second session issues
    select * from emp where empno in (12,13);
    Because these two statements are different, they are parsed separately. After parsing, each server process will see if the block corresponding to empno 12 is already in memory; if it is not, it will read it from the data file and keep it in the buffer cache. Suppose these two server processes find at the same time that the block is not in the SGA and both try to read the block from the data file; there would then be two copies of the block in the SGA, wouldn't there? I don't think this is what actually happens, so I need to know how the server processes coordinate with each other to read only the necessary blocks without redundancy.
    Regards
    Suresh

  • Database buffer Cache

    Hi Guru,
    Can anyone tell me what the actual definitions of the data buffer cache and the log buffer cache are, and how they work in Oracle 10g?
    Please
    Regards,
    Rajeev,India
    Edited by: 970371 on Nov 8, 2012 7:06 PM

    vlethakula wrote:
    The data buffer cache contains the blocks which are read from the physical data files.

    The database buffer cache contains buffers that hold the blocks read from disk.
    The log buffer contains changes made to the database.
    E.g.: you try to update a row; the changes made to that row are written in the form of change vectors to the log buffer, and from there, on certain rules (like commit), the LGWR background process writes those changes from the log buffer to the redo log files.
    A block which is modified in the database buffer cache will be written to the physical files by the DBWR process after certain rules are met (like a checkpoint).

    The reason I didn't give the explanation, or links to the same in the docs, is that I wanted the OP to come up with some sort of his own understanding of the two caches first.
    Aman....

  • Buffer cache of SGA

    Can anyone please clarify my doubt: which parameter defines the size of the db buffer cache in the SGA? Does db_cache_size directly define the size, or is it the size of the cache of standard blocks (specified by the db_block_size parameter)?

    DB_BLOCK_BUFFERS specifies the number of blocks to allocate for the data buffer. This parameter's value is then multiplied by DB_BLOCK_SIZE to calculate the size of the data buffer.
    DB_CACHE_SIZE specifies the size value itself, directly in units of KB, MB, or GB. This parameter alone is enough to determine the data buffer cache size.
    DB_BLOCK_BUFFERS can only create a buffer cache in units of blocks based on the single parameter DB_BLOCK_SIZE. On the other hand, multiple data buffer caches can be created by using the DB_nK_CACHE_SIZE parameters, where n is the block size for that buffer cache. So, for example, one can allocate X MB of buffer cache with an 8K block size and Y MB of buffer cache with 16K blocks. This helps when you have tablespaces of varying block sizes, which is not possible using DB_BLOCK_BUFFERS, as DB_BLOCK_SIZE is not modifiable (see the sketch below).
    DB_CACHE_SIZE can also work along with the SGA_TARGET parameter (which decides the SGA size). If DB_CACHE_SIZE is 0, its value varies based on usage; if a value is set, that value becomes a minimum. This is not possible using DB_BLOCK_BUFFERS.
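
    For example, a minimal sketch of running a 16K-block tablespace alongside the standard 8K cache (sizes and file path are hypothetical):
    -- standard block size cache (db_block_size = 8192)
    alter system set db_cache_size = 800M;
    -- separate cache for 16K-block tablespaces
    alter system set db_16k_cache_size = 256M;
    create tablespace ts16k datafile '/u01/oradata/ts16k01.dbf' size 1G blocksize 16K;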

  • Data Buffer Quality got down below 90%

    Hi,
    The data buffer quality in ST04 is getting reduced gradually and has come down below 90%. There are 8 app servers on our production system. How can we improve the data buffer quality? Please suggest.
    Thanks,
    Pavan

    Hi Pavan,
    Are you running update optimizer stats daily?
    Please check the links below; they should be helpful:
    http://sap.ittoolbox.com/groups/technical-functional/sap-basis/st04-database-performance-analysis-shared-pool-and-data-buffers-770596?cv=expanded
    http://www.uber-goober.com/~ubergoob/forums/showthread.php?t=105003
    Regards,
    Srinu

  • Buffer cache vs data blocks that can be cache

    Hi Guys,
    Does it mean that if we have 2 GB of buffer cache allocated to Oracle, we can only store up to 2 GB of data in memory?
    thanks

    dbaing wrote:
    Hi Guys,
    Does it mean that if we have 2 GB of buffer cache allocated to Oracle, we can only store up to 2 GB of data in memory?

    Yes, it means that at any given time you can have up to 2 GB of cached blocks in your memory.
    Aman....

  • Data being fetched bigger than DB Buffer Cache

    DB Version: 10.2.0.4
    OS: Solaris 5.10
    We have a DB with 1 GB set for DB_CACHE_SIZE. Automatic Shared Memory Management is disabled (SGA_TARGET = 0).
    If a query is fired against a table which is going to retrieve 2 GB of data, will that session hang? How will Oracle handle this?

    Tom wrote:
    If the retrieved blocks get automatically removed from the buffer cache after they are fetched, as per the LRU algorithm, then Oracle should handle this without any issues. Right?

    Yes. There is no requirement that the size of a fetch (e.g. selecting 2 GB worth of rows) fit completely in the db buffer cache (only 1 GB in size).
    As Sybrand mentioned, everything in that case will be flushed as newer data blocks are read... and those will be flushed again shortly afterwards as even newer data blocks are read.
    The cache hit ratio will thus be low.
    But this will not cause Oracle errors or problems; performance simply degrades as the data volume being processed exceeds the capacity of the cache.
    It is like running a very large program that requires more RAM than is available on a PC. The "extra RAM" comes from the swap file on disk. The program will be slow, as its memory pages (some on disk) need to be swapped into and out of memory as needed. It would work faster if the PC had sufficient RAM; however, the OS is designed to deal with exactly this situation, where more RAM is needed than is physically available.
    It is a similar situation when processing larger data chunks than the buffer cache has capacity for.

  • 10G NEW FEATURE-HOW TO FLUSH THE BUFFER CACHE

    Product: ORACLE SERVER
    Date written: 2004-05-25
    10G NEW FEATURE - HOW TO FLUSH THE BUFFER CACHE
    ===============================================
    PURPOSE
    This note describes the Oracle 10g new feature that allows the buffer cache to be flushed manually.
    Explanation
    Introduced as a new feature in Oracle 10g, all data in the buffer cache within the SGA can be cleared by running a command.
    The "alter system" privilege is required for this operation.
    The command to flush the buffer cache is as follows.
    Caution: this operation can affect database performance, so use it with care.
    SQL> alter system flush buffer_cache;
    Example
    Query x$bh to check what is currently present in the buffer cache.
    The x$bh view exposes the buffer cache header information.
    First, create a test table, run some inserts, and then query the dbarfil column (relative file number of the block) and file# from x$bh.
    1) Create the test table
    SQL> Create table Test_buffer (a number)
    2 tablespace USERS;
    Table created.
    2) Insert into the test table
    SQL> begin
    2 for i in 1..1000
    3 loop
    4 insert into test_buffer values (i);
    5 end loop;
    6 commit;
    7 end;
    8 /
    PL/SQL procedure successfully completed.
    3) Check the object_id
    SQL> select OBJECT_id from dba_objects
    2 where object_name='TEST_BUFFER';
    OBJECT_ID
    42817
    4) Query x$bh for the DBARFIL (file number of block) entries currently in the buffer cache.
    SQL> select ts#,file#,dbarfil,dbablk,class,state,mode_held,obj
    2 from x$bh where obj= 42817;
    TS# FILE# DBARFIL DBABLK CLASS STATE MODE_HELD J
    9 23 23 1297 8 1 0 7
    9 23 23 1298 9 1 0 7
    9 23 23 1299 4 1 0 7
    9 23 23 1300 1 1 0 7
    9 23 23 1301 1 1 0 7
    9 23 23 1302 1 1 0 7
    9 23 23 1303 1 1 0 7
    9 23 23 1304 1 1 0 7
    8 rows selected.
    5) Flush the buffer cache as follows and re-run the query above.
    SQL> alter system flush buffer_cache;
    SQL> select ts#,file#,dbarfil,dbablk,class,state,mode_held,obj
    2 from x$bh where obj= 42817;
    6) Check that the state column in x$bh is now 0.
    0 means a free buffer. By confirming that state is 0 after the flush, you can verify that the flush was indeed performed manually via the command.
    Reference Documents
    <NOTE. 251326.1>

    I am also having the same issue. Can this be addressed, or does BEA provide 'almost' working code for the bargain price of $80k/CPU?
    "Prashanth " <[email protected]> wrote:
    Hi All,
    I am using the wl:cache tag for caching purposes. My requirement is such that I have to flush the cache based on user activity.
    I have tried all the combinations, but could not achieve the desired result.
    Can somebody guide me on how we can flush the cache?
    TIA, Prashanth Bhat.

  • What else are stored in the database buffer cache?

    What else is stored in the database buffer cache besides the data blocks read from datafiles?

    That is a good idea.
    SQL> desc v$BH;
    Name                                                                                                      Null?    Type
    FILE#                                                                                                              NUMBER
    BLOCK#                                                                                                             NUMBER
    CLASS#                                                                                                             NUMBER
    STATUS                                                                                                             VARCHAR2(10)
    XNC                                                                                                                NUMBER
    FORCED_READS                                                                                                       NUMBER
    FORCED_WRITES                                                                                                      NUMBER
    LOCK_ELEMENT_ADDR                                                                                                  RAW(4)
    LOCK_ELEMENT_NAME                                                                                                  NUMBER
    LOCK_ELEMENT_CLASS                                                                                                 NUMBER
    DIRTY                                                                                                              VARCHAR2(1)
    TEMP                                                                                                               VARCHAR2(1)
    PING                                                                                                               VARCHAR2(1)
    STALE                                                                                                              VARCHAR2(1)
    DIRECT                                                                                                             VARCHAR2(1)
    NEW                                                                                                                CHAR(1)
    OBJD                                                                                                               NUMBER
    TS#                                                                                                                NUMBER

    TEMP      VARCHAR2(1)      Y - temporary block
    PING      VARCHAR2(1)      Y - block pinged
    STALE      VARCHAR2(1)      Y - block is stale
    DIRECT      VARCHAR2(1)      Y - direct block
    My question is what are temporary block and direct block?
    Is it true that some blocks in temp tablespace are stored in the data buffer?
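
    One quick way to check (a sketch) whether any temporary or direct blocks are currently in your cache is to group v$bh on those flag columns:
    SQL> select temp, direct, count(*) from v$bh group by temp, direct;
    A 'Y' under TEMP with a nonzero count would confirm that blocks of temporary segments can indeed appear in the data buffer.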

  • ESE - Event Log Warning: 906 - A significant portion of the database buffer cache has been written out to the system paging file...

    Hello -
    We have 3 x EX2010 SP3 RU5 nodes in a cross-site DAG.
    Multi-role servers with 18 GB RAM [increased from 16 GB in an attempt to clear this warning without success].
    We run nightly backups on both nodes at the Primary Site.
    Node 1 backup covers all mailbox databases [active & passive].
    Node 2 backup covers the Public Folders database.
    The backups for each database are timed so they do not overlap.
    During each backup we get several of these event log warnings:
     Log Name:      Application
     Source:        ESE
     Date:          23/04/2014 00:47:22
     Event ID:      906
     Task Category: Performance
     Level:         Warning
     Keywords:      Classic
     User:          N/A
     Computer:      EX1.xxx.com
     Description:
     Information Store (5012) A significant portion of the database buffer cache has been written out to the system paging file.  This may result  in severe performance degradation.
     See help link for complete details of possible causes.
     Resident cache has fallen by 42523 buffers (or 27%) in the last 903 seconds.
     Current Total Percent Resident: 26% (110122 of 421303 buffers)
    We've rescheduled the backups, and the warning message occurrences just move with the backup schedules.
    We're not aware of any perceived end-user performance degradation; overnight backups in this time zone coincide with the business day for mailbox users in SEA.
    I raised a call with the Microsoft Enterprise Support folks; they had a look at the BPA output and at output from their diagnostics tool. We have enough RAM, and no major issues were detected.
    They suggested McAfee AV could be the root of our problems, but we have v8.8 with the EX2010 exceptions configured.
    Backup software is Asigra V12.2 with latest hotfixes.
    We're trying to clear up these warnings as they're throwing SCOM alerts and making a mess of availability reporting.
    Any suggestions please?
    Thanks in advance

    Having said all that, a colleague has suggested we just limit the amount of RAM available for the EX2010 DB cache.
    Then it won't have to start releasing RAM when the backup runs, and won't throw SCOM alerts.
    This attribute should do it:
    msExchESEParamCacheSizeMax
    http://technet.microsoft.com/en-us/library/ee832793.aspx
    Give me a shout if this is a bad idea
    Thanks

  • Data Buffer error USER_AUTH_FAILED: User account for logonid "SYSTEM"

    All, I have the following errors on both the Quality and the Production system in our data buffer job:
    com.sap.security.api.NoSuchUserException: USER_AUTH_FAILED: User account for logonid "SYSTEM" not found!
    These entries will not process because they generate an error that the logonid for the username SYSTEM is not found.
    So I am thinking that somehow the MII system is not capturing the correct username when the entries are added to the Data Buffer Jobs, or there is something I am overlooking when I set up the data buffering.
    Other entries that were in the data buffer jobs were listed as using the RS1000SVC-QMUSBATCH and RS1630SVC-PMIIBATCH user accounts. These are the accounts that our scheduled tasks run under.
    Those entries process OK out of the data buffer jobs.
    I did notice a similarity between the data buffer jobs in the quality and production systems as it pertains to the following transactions.
    Production MII ver 12.0.7 (Build 20)
    Muscatine%2FIntegration%2FSAP%2FPROD_CONFIRMED_INPUT_InsertQuery
    Which is called from the MIIC1043_IDOC Message Processing Rule.
    Muscatine%2FIntegration%2FSAP%2FHEADER_InsertQuery
    Which is called from the MIIC1043_Control_Recipe_Download Message Processing Rule.
    Quality MII 12.0.11 (Build 14)
    Muscatine%2FIntegration%2FSAP%2FPROD_CONFIRMED_INPUT_InsertQuery
    Which is called from the MIIC1043_IDOC Message Processing Rule.
    So the commonality is that these transactions are being initiated by the message processing rules.
    Are there known issues with data buffering from transactions initiated by Message Processing Rules?
    Is anyone successfully using data buffering of transactions called by message processing rules?
    Any help is appreciated.
    Bob

    Jeremy, thanks for your reply.
    There doesn't seem to be much detailed information on the use of Categories with processing rules in Help or in the forums, so let me see if I understand your suggestion correctly.
    On the MII server, create a processing rule for the message using a category instead of a transaction. The message received by the message listener will be placed in a buffer. I am assuming these messages would show up in the message monitor and not in the Data Buffer jobs/entries.
    So in my transaction, which normally processes this data, I could add logic to access the message data using the Message Service (Query, Read, Update and Delete) action blocks. I could pare down the selection by selecting messages based on the MessageCategory that I defined in the message processing rule. This will allow me to access the stored message data.
    Finally, use a scheduled job to execute the transaction. The scheduled job would run with a valid userID and password, so if the connection to the external database failed, the entries would be placed in the data buffer jobs with valid userID credentials.
    Does this sound like what you had in mind?
