Many "Flushing buffer cache" in  11.1.0.7

Hello,
I am getting "ALTER SYSTEM: Flushing buffer cache" in out alert log continuously . I have not done any buffer pool flushing  but still it coming . does anyone know is there any oracle scheduled job will do this ? or this will happen only by issuing a manual command ?
Any thoughts will be highly appreciated.
Thu Jul 11 03:46:27 2013
Archived Log entry 151129 added for thread 1 sequence 92387 ID 0xc7afa6e dest 1:
Thu Jul 11 03:48:07 2013
ALTER SYSTEM: Flushing buffer cache
Thu Jul 11 03:50:28 2013
ALTER SYSTEM: Flushing buffer cache
Thu Jul 11 03:51:29 2013
ALTER SYSTEM: Flushing buffer cache
Thu Jul 11 03:52:25 2013
ALTER SYSTEM: Flushing buffer cache
Thu Jul 11 03:53:00 2013
ALTER SYSTEM: Flushing buffer cache
Thu Jul 11 03:53:29 2013
ALTER SYSTEM: Flushing buffer cache
Thu Jul 11 03:57:27 2013
Thanks
Aju

This is not normal. It can be issued manually, by scheduled jobs, or by third-party software. Are you running PeopleSoft?
As advised already, check AUDIT_TRAIL, confirm that auditing is enabled first, and issue one flush manually to be sure that this action is logged.
Bug 12530225 : ALTER SYSTEM: FLUSHING BUFFER CACHE MESSAGES IN ALERT.LOG
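To catch the culprit, a minimal auditing sketch (assuming AUDIT_TRAIL is set to DB or DB,EXTENDED, which needs an instance restart if it is not already set):
SQL> show parameter audit_trail
SQL> audit alter system;
-- wait for the next flush, then see who issued it:
SQL> select os_username, username, userhost, timestamp, action_name from dba_audit_trail where action_name = 'ALTER SYSTEM' order by timestamp desc;
With AUDIT_TRAIL=DB,EXTENDED the SQL_TEXT column will also show the flush statement itself.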
Regards
Ed

Similar Messages

  • Selectively flushing BUFFER CACHE

We use SQL EXPRESS 2008, so we are limited to 1 GB of memory. The following query lists the top databases using the most buffer cache. In my case, two insignificant databases are listed at the top. The critical database is number 3, and it performs slowly. I don't mind the other two performing slowly: they run a process twice a day, and do nothing else. This heavy process blows up the buffer cache. Is there a way to selectively flush the buffer cache? I'm already using
 DBCC FREESYSTEMCACHE('SQL Plans')
 DBCC FLUSHPROCINDB()
but they were not much help.
select db_name(database_id) as dbname,
       count(page_id) as pages,
       convert(decimal(20,2),count(page_id)*8192.0/1048576) as Mb
from sys.dm_os_buffer_descriptors
group by database_id
order by convert(decimal(20,2),count(page_id)*8192.0/1048576) desc

Sorry, I have to comment as I also have 1200+ stores with Express edition (2005 though :sad face:).
What's interesting is that the data is staying in cache. If the data isn't being used it'll eventually get aged out, as all the others have said. I would start by investigating what is using it outside of the one or two processes you've already stated you're running. Possibly by tuning or shutting those down (the ones that maybe aren't supposed to run after those processes) you could alleviate this issue.
The next step is to have the developers (if possible) go back and tune their queries. This is extremely important on Express edition, as a select * from a huge table will effectively push all other data out of the buffer pool.
One alternative that I dare even mention: since you can't selectively flush the buffer pool, you can take a database offline and then online again (see the sketch below). When this happens all data will be flushed to disk and all entries removed from the buffer pool and plan cache.
This is extremely heavy-handed, and honestly, if this fixes your issue I would go back to my first point and look at what queries are running that NEED that data, since as you've said it isn't needed.
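A minimal T-SQL sketch of that offline/online cycle (database name hypothetical; this kills every connection to the database, so only do it in a maintenance window):
ALTER DATABASE [StoreDB] SET OFFLINE WITH ROLLBACK IMMEDIATE;
ALTER DATABASE [StoreDB] SET ONLINE;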
Others have suggested indexes, which I will reiterate, but honestly, with some of the terrible queries in the retail world it's a moot point.
Keep us updated - I know your pain.
    Sean Gallardy | Blog |
    Twitter

  • Buffer Cache Flush

    Hi All,
What is the benefit of flushing the buffer cache? What are its effects, and is it advisable to do it on a production system?
    Thanks in Advance

    Asif,
You wrote, "All blocks which reside in the buffer cache get flushed" - I think this needs correction: "However a buffer cannot become free if it is "pinned" (i.e. actually in use) or if it is "dirty" (i.e. needs to be written to disc). There's nothing that Oracle can do about the pinned buffers, but it can write the content of the dirty buffers to disk before freeing them." - Sir Jonathan @ flush buffer cache
    @ Vikas,
In the thread above, Sir Jonathan has written many valuable inputs. Please check it.
I think you will get more details from the link below too (almost equal to the docs):
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:7413988573867
Flushing the cache is generally used in testing environments, unless there is a specific need to do so elsewhere.
Flushing the buffer helps produce more consistent results for SQL traces.
If you want to compare two different cases for performance, flush the buffer cache and shared pool before executing each of them.
But this won't reduce LIOs (logical I/Os).
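For reference, the two statements are (both need the ALTER SYSTEM privilege):
SQL> alter system flush buffer_cache;
SQL> alter system flush shared_pool;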
    Regards
    Girish Sharma

  • 10G NEW FEATURE-HOW TO FLUSH THE BUFFER CACHE

Product: ORACLE SERVER
Date written: 2004-05-25
10G NEW FEATURE-HOW TO FLUSH THE BUFFER CACHE
===============================================
PURPOSE
This note describes the Oracle 10g new feature that makes it possible
to flush the buffer cache manually.
Explanation
Introduced as a new feature in Oracle 10g, this lets you clear all data
in the buffer cache within the SGA by running a single command.
The "alter system" privilege is required for this operation.
The command to flush the buffer cache is as follows.
Note) This operation can affect database performance, so use it with caution.
SQL> alter system flush buffer_cache;
Example
Query x$bh to check the information present in the buffer cache.
The x$bh view exposes the buffer cache header information.
First, create a test table, run some inserts,
and then query the dbarfil column (relative file number of block) and file# from x$bh.
1) Create the test table
    SQL> Create table Test_buffer (a number)
    2 tablespace USERS;
    Table created.
2) Insert into the test table
    SQL> begin
    2 for i in 1..1000
    3 loop
    4 insert into test_buffer values (i);
    5 end loop;
    6 commit;
    7 end;
    8 /
    PL/SQL procedure successfully completed.
3) Check the object_id
    SQL> select OBJECT_id from dba_objects
    2 where object_name='TEST_BUFFER';
    OBJECT_ID
    42817
4) Query x$bh for the DBARFIL (file number of block) entries currently present in the buffer cache.
    SQL> select ts#,file#,dbarfil,dbablk,class,state,mode_held,obj
    2 from x$bh where obj= 42817;
    TS# FILE# DBARFIL DBABLK CLASS STATE MODE_HELD J
    9 23 23 1297 8 1 0 7
    9 23 23 1298 9 1 0 7
    9 23 23 1299 4 1 0 7
    9 23 23 1300 1 1 0 7
    9 23 23 1301 1 1 0 7
    9 23 23 1302 1 1 0 7
    9 23 23 1303 1 1 0 7
    9 23 23 1304 1 1 0 7
    8 rows selected.
5) Flush the buffer cache as follows and re-run the query above.
SQL> alter system flush buffer_cache;
    SQL> select ts#,file#,dbarfil,dbablk,class,state,mode_held,obj
    2 from x$bh where obj= 42817;
6) Check that the state column in x$bh is now 0.
0 means a free buffer. By confirming that the state is 0 after the flush,
you can verify that the flushing was performed manually via the command.
    Reference Documents
    <NOTE. 251326.1>

I am also having the same issue. Can this be addressed, or does BEA provide 'almost' working code for the bargain price of $80k/cpu?
"Prashanth " <[email protected]> wrote:
>
Hi ALL,
I am using the wl:cache tag for caching purposes. My requirement is such that I have to flush the cache based on user activity.
I have tried all the combinations, but could not achieve the desired result.
Can somebody guide me on how we can flush the cache??
TIA, Prashanth Bhat.

  • Flushing Database Buffer Cache

    I am trying out variants of a SQL statement in an attempt to tune it. Each variant involves joins across a different combination of tables, although some tables are common across all variants. In order to be able to do a valid comparison of the TKPROF outputs for the variants, I believe I need to flush the database buffer cache between variants so that the db block gets, consistent gets and physical reads parameters are true for each variant. By doing this, data retrieved for one variant is not already in the buffer cache for the next variant, thus not influencing the above parameters for the next variant.
    Is it possible to flush the buffer cache? The shared pool can be flushed with the ALTER SYSTEM FLUSH SHARED_POOL command. I've searched but have not been able to find an equivalent for the buffer cache. The NOCACHE option to the ALTER TABLE command only pushes retrieved data to the LRU list in the buffer cache, but does not remove it from the buffer cache.
    I'm hoping to be able to do this without bouncing the database between variants. It is a development instance, and I have it to myself after hours.
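(In 10g and later there is a direct equivalent, as the note in the thread above shows; a minimal sequence between variants would be:
SQL> alter system flush buffer_cache;
SQL> alter system flush shared_pool;
Flushing the shared pool too forces each variant to be hard parsed afresh.)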

    Hi,
I never tried this before, but if you want to make a test you can try corrupting the blocks returned by one of the two queries below:
This query shows the ID of the block where each row is located - try corrupting one of those:
select dbms_rowid.rowid_block_number(rowid) from hr.regions;
This query shows the ID of the block containing the segment header - try corrupting that:
select s.owner, t.ts#, s.header_file, s.header_block
from v$tablespace t, dba_segments s
where s.segment_name = 'REGIONS'
and s.owner = 'HR'
and t.name = s.tablespace_name;
Legatti
    Cheers

  • LRU and CKPTQ in database buffer cache

    Hi experts out here,
This question concerns the database buffer cache in Oracle 10.2 or greater.
Sources: OTN forums and the 11.2 Concepts guide.
As per my readings: to improve its functionality, the database buffer cache is divided into several areas called workareas. Zooming in further, each workarea stores multiple lists to manage the buffers inside the database buffer cache.
Each workarea can have one or more lists to maintain the working order; the lists each workarea has are the LRU list and the CKPTQ list. The LRU list is a list of pinned, free and dirty buffers, and the CKPTQ is a list of dirty buffers. We can say the CKPTQ is a bundle of dirty buffers in low RBA order, ready to be flushed from cache to disk.
The CKPTQ list is maintained in low RBA order.
Being a novice, let me first clarify low RBA and high RBA:
The RBA is stored in the block header and gives us information about when the block was changed and how many times it was changed.
Low RBA: the address of the redo for the first change that was applied to the block since it was last clean.
High RBA: the address of the redo for the most recent change to have been applied to the block.
Now back to CKPTQ.
It can be pictured like this (a crude diagram of CKPTQ):
lowRBA==================================High RBA
(Head of CKPTQ)                         (Tail of CKPTQ)
CKPTQ is a list of dirty buffers. As per the RBA concept, the most recently modified buffer is at the tail of the CKPTQ.
Now an Oracle process starts and tries to get a buffer from the DB cache. If it gets the buffer, it puts the buffer at the MRU end of the LRU list, and the buffer becomes the most recently used.
If the process can't find the required buffer, it first tries to find a free buffer on the LRU. If it finds one, it's done: it places the data block from the datafile where the free buffer was sitting. (Good enough.)
If the process can't find a free buffer on the LRU, the first step is to take some dirty buffers from the LRU end of the LRU list and place them on the CKPTQ (remember, it arranges them on the checkpoint queue in low RBA order). The process can then take the required buffer and place it at the MRU end of the LRU list (because space has been reclaimed by moving dirty buffers to the CKPTQ).
I am sure that from the CKPTQ the buffers (to be more accurate, dirty buffers) move to the datafiles, and that all the buffers are lined up on the CKPTQ lowest-RBA first. But in which manner, and on which event, are they flushed to the datafiles?
This is what I understand after the last three days of flicking through blogs, forums and the Concepts guide. Please tell me what I am missing; apart from that, I can't link the following functionality into this flow:
1) How does the incremental checkpoint work with this CKPTQ?
2) What is the 3-second timeout?
(Every 3 seconds the DBWR process wakes and sees whether there is anything to write to the datafiles; for this, DBWR only checks the CKPTQ.)
3) Apart from the 3-second rule, when are the CKPTQ buffers moved? (Is it when a process can't find any space on the CKPTQ to put buffers from the LRU that buffers from the CKPTQ are moved to disk?)
4) Can you please relate when the control file is updated with the checkpoint, so that it can reduce recovery time?
Too many questions, but I am trying to build up the whole process in my mind. I may be wrong at any phase, at any step, so please correct me and take me to the end of the flow.
    THANKS
    Kamesh

    Hi Amansir,
So I am back with my bunch of questions. Again I can't ask just one, because as you know it's a flow, so I can't end up with a single doubt. Thanks for your last reply.
Yes, Amansir, the first doubt is cleared: the buffer is inserted at the midpoint. For this I found one nice document (PDF) named "All about Oracle's touch count algorithm" by Craig A. Shallahamer. It is quite a nice PDF, all about hot and cold buffers and buffer movements inside the LRU list; I am pretty much clear on that point now, thank you. For incremental checkpoints I read the PPT by Harald van Breederode, a person from Oracle, which you shared on one of your threads; that was a nice reference.
Flicking through threads I came across the term REPL and its variation REPL-AUX (the thread was for Oracle 9i). Is the REPL-AUX variation deprecated in 10g? So, if I am not wrong, for each work area only the two main lists exist, the LRU and the CKPTQ, and no other types?
For a non-RAC database, is a thread checkpoint a full checkpoint?
I read about incremental checkpointing; here it is in my own words, in brief. Incremental checkpointing means writing only selected buffers from the CKPTQ to the datafiles. From the CKPTQ a few of the lowest-RBA buffers are selected and checkpointed (buffers are checkpointed on many conditions), and when the next checkpoint occurs those buffers are flushed to disk. This (checkpointing a few buffers and flushing them to disk) can happen multiple times within three seconds, so after 3 seconds (this is the 3-second concept I was asking about at the start of the thread - can this time be changed, and if yes, with which parameter?) the checkpoint RBA, i.e. the checkpoint position (the point up to which the database buffers have been flushed to disk), is updated in the control file header (and the datafile headers too) by the CKPT process. That checkpoint is then used for instance recovery, which can dramatically bring down the instance recovery time.
Every 3 seconds the control file is updated with the checkpoint, and that checkpoint is the point from which we have to start the recovery process in Oracle from the redo log. I am aware that incremental checkpointing is controlled by the fast_start_mttr_target parameter, that it is auto-tuned for 10.2 and above, and that the smaller the value I keep, the less time my instance will take.
Are the two paragraphs above right in what I understood? If wrong, correct me.
What I understand is that after three seconds it will take some buffers from the CKPTQ (from the low RBA end) and flush them to disk. Apart from this there are many other conditions under which data is flushed to disk:
1) the CKPTQ is full;
2) a process can't find a free buffer on the LRU;
3) DBWR writes to advance the checkpoint.
Correct me if I'm wrong?
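A minimal way to watch this on a live instance (assuming 10.2+, where the incremental checkpoint is self-tuned):
SQL> show parameter fast_start_mttr_target
SQL> select target_mttr, estimated_mttr, ckpt_block_writes, writes_mttr, writes_autotune from v$instance_recovery;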
    THANKS
    Kamesh

  • Will I increase my Buffer Cache ?

    Oracle 9i
    Shared Pool 2112 Mb
    Buffer Cache 1728 Mb
    Large Pool 32Mb
    Java Pool 32 Mb
    Total 3907.358 Mb
    SGA Max Size 17011.494 Mb
    PGA
    Aggregate PGA Target 2450 Mb
    Current PGA Allocated 3286059 KB
    Maximum PGA Allocated (since Startup) 3462747 KB
    Cache Hit Percentage 98.71%
The Buffer Cache Size Advice is telling me that if I increase the Buffer Cache to 1930Mb I will get an 8.83% decrease in physical reads (and it gets better the more I increase it).
The question is: can I safely increase it, in light of my current memory allocations? Is it worth it?

    Two things stand out:
    Your sga max size is 17Gb, but you are only using about 4Gb of it - so you seem to have 13Gb that you are not making best use of.
    Your pga aggregate target is 2.4Gb, but you've already hit a peak of 3.4Gb - which means your target may be too small - so it's lucky you had all that spare memory which hadn't gone into the SGA. Despite the availability of memory, some of your queries may have been rationed at run-time to try to minimise the excess demand.
    Is this OLTP or DSS - where do you really need the memory ? (Have a look in v$process to see the pga usage on a process by process level).
How many processes are allowed to connect to the database? (You ought to allow about 2Mb - 4Mb per process in the pga_aggregate_target for OLTP, and at least 1Mb per process for the buffer cache.)
    Where do you see time lost ? time on disk I/O, or time on CPU ? What type of disk I/O, what's the nature of the CPU usage. These figures alone do not tell us what you should do with the spare memory you seem to have.
    A simple response to your original question would be that you probably need to increase the pga_aggregate_target, and you might as well increase the buffer size since you seem to have the memory for both.
    On the downside, changing the pga_aggregate_target could cause some execution plans to change; and changing the buffer size does change the limit size on a 'short' table, which can cause an increase in I/O as an unlucky side effect if you're a little heavy on "long" tablescans.
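A quick way to do the per-process check mentioned above (a sketch using v$process):
SQL> select pid, spid, pga_used_mem, pga_alloc_mem, pga_max_mem from v$process order by pga_max_mem desc;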
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk

  • SCOM reports "A significant portion of the database buffer cache has been written out to the system paging file. This may result in severe performance degradation"

    This was discussed here, with no resolution
    http://social.technet.microsoft.com/Forums/en-US/exchange2010/thread/bb073c59-b88f-471b-a209-d7b5d9e5aa28?prof=required
    I have the same issue.  This is a single-purpose physical mailbox server with 320 users and 72GB of RAM.  That should be plenty.  I've checked and there are no manual settings for the database cache.  There are no other problems with
    the server, nothing reported in the logs, except for the aforementioned error (see below).
    The server is sluggish.  A reboot will clear up the problem temporarily.  The only processes using any significant amount of memory are store.exe (using 53GB), regsvc (using 5) and W3 and Monitoringhost.exe using 1 GB each.  Does anyone have
    any ideas on this?
    Warning ESE Event ID 906. 
    Information Store (1497076) A significant portion of the database buffer cache has been written out to the system paging file.  This may result in severe performance degradation. See help link for complete details of possible causes. Resident cache
    has fallen by 213107 buffers (or 11%) in the last 207168 seconds. Current Total Percent Resident: 79% (1574197 of 1969409 buffers)

    Brian,
    We had this event log entry as well which SCOM picked up on, and 10 seconds before it the Forefront Protection 2010 for Exchange updated all of its engines.
    We are running Exchange 2010 SP2 RU3 with no file system antivirus (the boxes are restricted and have UAC turned on as mitigations). We are running the servers primarily as Hub Transport servers with 16GB of RAM, but they do have the mailbox role installed
    for the sole purpose of serving as our public folder servers.
So we theorized the STORE process was just grabbing a ton of RAM, and occasionally it was told to dump the memory so the other processes could grab some - thus generating the alert. Up until last night we thought nothing of it, but ~25 seconds after the cache flush to the paging file, we got the following alert:
    Log Name:      Application
    Source:        MSExchangeTransport
    Date:          8/2/2012 2:08:14 AM
    Event ID:      17012
    Task Category: Storage
    Level:         Error
    Keywords:      Classic
    User:          N/A
    Computer:      HTS1.company.com
    Description:
    Transport Mail Database: The database could not allocate memory. Please close some applications to make sure you have enough memory for Exchange Server. The exception is Microsoft.Exchange.Isam.IsamOutOfMemoryException: Out of Memory (-1011)
       at Microsoft.Exchange.Isam.JetInterop.CallW(Int32 errFn)
       at Microsoft.Exchange.Isam.JetInterop.MJetOpenDatabase(MJET_SESID sesid, String file, String connect, MJET_GRBIT grbit, MJET_WRN& wrn)
       at Microsoft.Exchange.Isam.JetInterop.MJetOpenDatabase(MJET_SESID sesid, String file, MJET_GRBIT grbit)
       at Microsoft.Exchange.Isam.JetInterop.MJetOpenDatabase(MJET_SESID sesid, String file)
       at Microsoft.Exchange.Isam.Interop.MJetOpenDatabase(MJET_SESID sesid, String file)
       at Microsoft.Exchange.Transport.Storage.DataConnection..ctor(MJET_INSTANCE instance, DataSource source).
    Followed by:
    Log Name:      Application
    Source:        MSExchangeTransport
    Date:          8/2/2012 2:08:15 AM
    Event ID:      17106
    Task Category: Storage
    Level:         Information
    Keywords:      Classic
    User:          N/A
    Computer:      HTS1.company.com
    Description:
    Transport Mail Database: MSExchangeTransport has detected a critical storage error, updated the registry key (SOFTWARE\Microsoft\ExchangeServer\v14\Transport\QueueDatabase) and as a result, will attempt self-healing after process restart.
    Log Name:      Application
    Source:        MSExchangeTransport
    Date:          8/2/2012 2:13:50 AM
    Event ID:      17102
    Task Category: Storage
    Level:         Warning
    Keywords:      Classic
    User:          N/A
    Computer:      HTS1.company.com
    Description:
    Transport Mail Database: MSExchangeTransport has detected a critical storage error and has taken an automated recovery action.  This recovery action will not be repeated until the target folders are renamed or deleted. Directory path:E:\EXCHSRVR\TransportRoles\Data\Queue
    is moved to directory path:E:\EXCHSRVR\TransportRoles\Data\Queue\Queue.old.
So it seems as if Forefront Protection 2010 for Exchange inadvertently triggered the cache flush, which didn't appear to happen quickly or thoroughly enough for the transport service to do what it needed to do, so it freaked out and performed the subsequent actions.
Do you have any ideas on how to prevent this 906 warning, which cascaded into a transport service outage?
    Thanks!

  • How to remove blocks from buffer cache for a specific object

    hi everybody,
is it possible to remove blocks which belong to a specific object (a table, for example) from the buffer cache?
as you know, there is the
alter system flush buffer_cache;
command, but it does its job on the whole buffer cache. if you ask me why i want this: for tuning reasons. I want to test some plsql code as if it were running for the first time (reading from disk).
    ps: I use oracle 11g r2

    Hi mustafa,
Your performance will not degrade if you run the query a second time (if I understood correctly, you worry about performance when you execute the procedure a second time). Executing/running the code/SQL statements over and over again has the following two benefits:
1) It avoids hard parsing. (Hard parsing is a resource-intensive operation and generally increases the overall processing time.)
2) It avoids physical read I/O. (You see the benefit when data blocks are already cached and you don't have to spend time reading blocks from disk. Reading from disk is a much costlier and more time-consuming operation than reading data in RAM.)
Having said that, sometimes badly written queries will acquire more blocks than required and consume most of the buffer cache, and this can sometimes affect the other important blocks and force them to be flushed out of the buffer cache.
Oracle has built in some intelligence for large full table scan operations: while doing a full table scan (I hope you already know what an FTS is), Oracle will put its blocks at the end of the LRU chain. So these buffers will be flushed out first, before any others.
    From oracle documentation:
    "When the user process is performing a full table scan, it reads the blocks of the table into buffers and puts them on the LRU end (instead of the MRU end) of the LRU list. This is because a fully scanned table usually is needed only briefly, so the blocks should be moved out quickly to leave more frequently used blocks in the cache.
    You can control this default behavior of blocks involved in table scans on a table-by-table basis. To specify that blocks of the table are to be placed at the MRU end of the list during a full table scan, use the CACHE clause when creating or altering a table or cluster. You can specify this behavior for small lookup tables or large static historical tables to avoid I/O on subsequent accesses of the table."
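A minimal illustration of the clause quoted above (table name hypothetical):
SQL> alter table small_lookup cache;
SQL> alter table small_lookup nocache;  -- back to the default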
    Regards

  • Buffer cache hit ratio query

    Hi,
    I would appreciate some advice please.
    My Oracle 10.2.0.4.0 database starts off in the day with a buffer cache hit ratio of almost 100% but this drops gradually in the course of the day as the system gets busier. Is this something I need to be concerned about if no performance problem has been reported by the users?
I would have thought this should be a normal situation, i.e. as the system gets busier, I would expect fewer calls to the buffer cache to be satisfied because many more calls are using up the memory?
    Please note that I've done some reading up on this but would like some suggestions from more experienced people than myself on what I should normally expect and what should or should not be a concern.
    thanks

user8869798 wrote:
> hi, thanks again for your response and apologies for not being able to get back yesterday.
No problem :) .
> What i am doing and how - it is the backend database for our Finance application, many users with various transactions.
Okay, that sounds like a "normal" database with a normal workload.
> configuring buffer cache size - I haven't done anything manually yet, it's all been as installed; this is what I'm trying to figure out - whether it is something I should be looking into doing, simply because of the dropping hit ratio and not because of any reported performance problem.
If you don't have much knowledge of it, it's best to make use of the advisories, which will tell you in a better and more graphical way whether you should be worried. Look at the view v$db_cache_advice, which can suggest whether you need to tweak the buffer cache or not.
> looking at the queries - What is the easiest way of doing this in an environment where many users are running different queries? What's the easiest way to identify queries that we may need to have a closer look at?
Easiest way? Well, let the users come back to you ;-) .
> Basically, I'm just trying to ascertain whether or not I need to be concerned that my hit ratio drops below 89% even though no performance problem has been reported. If it is something that I should look into, then what is the best way to go about it?
I believe that's been answered by a couple of us already: no.
Aman....
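A minimal look at the advisory mentioned above (assuming an 8K default block size):
SQL> select size_for_estimate, size_factor, buffers_for_estimate, estd_physical_read_factor, estd_physical_reads from v$db_cache_advice where name = 'DEFAULT' and block_size = 8192 order by size_for_estimate;
The row with size_factor = 1 is the current cache size; rows below it estimate the effect of shrinking the cache, rows above it the effect of growing it.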

  • How to turn off unified buffer caching

    It's clear from many other forums that the unified buffer cache is supposed to 'just work' and in general it does. However, I have a situation where I really want the option of turning off caching either at the device or file system level.
    Is this possible?
    I have seen mention of using F_NOCACHE when opening files but that either does not work or, in any case, is not an option in this case.
    Thanks.

    It actually indexes the "cache" files, so although not creating them might improve indexing speed, you wouldn't get anything indexed which is probably not very useful.

  • Available blocks in buffer cache

    Hi.
I need to find the available blocks in the buffer cache. I cannot query x$bh as a non-SYSDBA user. Does anyone have an idea how to get this information? I tried querying the v$bh view but I cannot get it right.
    Anyone with a good idea?
    Rgds
    Kjell OVe
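A minimal v$bh query that may be what you are after - free buffers show up with status 'free' (v$bh is accessible without SYSDBA if you have SELECT_CATALOG_ROLE):
SQL> select status, count(*) from v$bh group by status;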

    No,
    When you have a 100m buffer cache, it means you can buffer 100m/8k blocks of your database in cache, and you don't need to read them from disk.
    When the cache gets full Oracle will use a modified Least Recently Used algorithm to determine which blocks can be flushed.
    If the block is unmodified (not dirty) it will simply be removed, if it is modified it will be written to disk.
When you insert a record (you seem to be really obsessed by this):
- Oracle will look for free space in the current segment.
When it finds a block that is not in the cache, it will read it into the cache.
- If there is no space, Oracle will allocate a new extent.
It will read blocks from the new extent into the cache. Simply put: each block in the cache has an RBA (relative block address). The RBA points to a block on disk.
- When it can't allocate an extent, Oracle will try to extend the tablespace (actually the datafile).
If this doesn't succeed Oracle will raise an error, and send the error number and error text to the client program.
The failing statement will be rolled back automatically.
    Hth
    Sybrand Bakker
    Senior Oracle DBA

  • Data Buffer Cache!

    Hi Guru(s),
Can anyone please tell me how we can check the size of the data buffer cache?
    Please help.
    Regards,
    Rajeev K

    Hi,
    Here is what I would do:
    First, let's have a look what views we have available to us
    SQL> select table_name from dict where table_name like '%SGA%';
    TABLE_NAME
    DBA_HIST_SGA
    DBA_HIST_SGASTAT
    DBA_HIST_SGA_TARGET_ADVICE
    V$SGA
    V$SGAINFO
    V$SGASTAT
    V$SGA_CURRENT_RESIZE_OPS
    V$SGA_DYNAMIC_COMPONENTS
    V$SGA_DYNAMIC_FREE_MEMORY
    V$SGA_RESIZE_OPS
    V$SGA_TARGET_ADVICE
    GV$SGA
    GV$SGAINFO
    GV$SGASTAT
    GV$SGA_CURRENT_RESIZE_OPS
    GV$SGA_DYNAMIC_COMPONENTS
    GV$SGA_DYNAMIC_FREE_MEMORY
    GV$SGA_RESIZE_OPS
    GV$SGA_TARGET_ADVICE
19 rows selected.
v$sgainfo looks pretty promising - what does it contain?
    SQL> desc v$sgainfo
    Name                                                              Null?    Type
    NAME                                                                       VARCHAR2(32)
    BYTES                                                                      NUMBER
RESIZEABLE                                                                 VARCHAR2(3)
And how many rows are in it?
    SQL> select count(*) from v$sgainfo;
      COUNT(*)
        12
Not too many, so let's look at them all, with a little formatting:
SQL> select name, round(bytes/1024/1024,0) MB, resizeable from v$sgainfo order by name;
Hope that helps,
    Rob
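For the buffer cache line specifically, a minimal filter on the same view (row name as it appears in recent versions) is:
SQL> select name, round(bytes/1024/1024) MB from v$sgainfo where name = 'Buffer Cache Size';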

  • Pin the sequence WIP_JOB_NUMBER_S in the DB Buffer Cache

    hai,
how do you pin a sequence in the db buffer cache?????
(The job numbers skip to the next set of 100 when a user exits and then logs in.)
To resolve this issue, pin the sequence WIP_JOB_NUMBER_S in the DB buffer cache so it does not get flushed.
    regards
    dba

How did you pin it? Have you done the following?
dbms_shared_pool.keep('WIP_JOB_NUMBER_S','Q')
When you shut down a database normally, with either "SHUTDOWN NORMAL" or "SHUTDOWN IMMEDIATE", the database takes care of making sure all sequences are "in sequence".
If the database experiences instance failure or a "SHUTDOWN ABORT" statement is issued, you lose any unused cached sequence values.
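If the number gaps themselves are the real problem, an alternative with a performance trade-off is to remove the sequence cache (a sketch; every nextval then issues a recursive update, so it is slower):
SQL> alter sequence WIP_JOB_NUMBER_S nocache;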
    fadi

  • Buffer cache writes

    Hi,
From the ADDM report I saw the following message:
    Buffer cache writes due to small log files were consuming significant database
    time.
    Recommendation 1: Database Configuration
    Estimated benefit is .31 active sessions, 2.99% of total activity.
    Action
    Increase the size of the log files to 2048 M to hold at least 20 minutes
    of redo information.
    From AWR which section would tell me about Buffer cache writes?
    Thanks

    user10698496 wrote:
    From ADDM report i saw following message.
    Buffer cache writes due to small log files were consuming significant database
    time.
    Recommendation 1: Database Configuration
    Estimated benefit is .31 active sessions, 2.99% of total activity.
    Action
    Increase the size of the log files to 2048 M to hold at least 20 minutes
    of redo information.
    From AWR which section would tell me about Buffer cache writes?
As Nikolay has pointed out, the estimated benefit is only three percent; however it's not (usually) a difficult, risky, or time-consuming change to make, so you might as well follow the advice.
One of the triggers that ultimately causes the database writer (dbwr) to write is the log file switch - which means that if your log files are small you may find dbwr writing blocks more aggressively than needed. Oracle may be analysing history to count examples of database blocks being written more than once before being flushed from the buffer to produce this conclusion.
Jgarry has already mentioned v$instance_recovery - one of the columns in that view is "writes_logfile_size", the number of writes due to the size of the log file; this is the figure that could be reduced by increasing the log file size. This doesn't get reported in the AWR, though. I think the best you can get is the instance activity statistic "DBWR thread checkpoint buffers written" - but there's a complicated precedence of which features cause which counter to increment, so I'm not even sure how helpful that figure may be.
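A quick way to check the current log size and the write counter mentioned above:
SQL> select group#, bytes/1024/1024 mb from v$log;
SQL> select writes_mttr, writes_logfile_size from v$instance_recovery;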
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    Author: <b><em>Oracle Core</em></b>
