Does the data buffer cache hold undo information?

Does the data buffer cache in the SGA hold any undo information?

920273 wrote:
in the SGA we have the data buffer cache. If I am updating a block, is any undo information maintained in the buffer cache for the change I have made to the block?
Let's take an example: I am updating a block, changing the value '2' to '4', and DBWR has not yet written it to the datafiles. In this case, is any undo information maintained in the buffer cache?

Hmmm.. When you update the value 2 to 4 and it is still uncommitted, the transaction is active and the modified block stays in the buffer cache. And yes, undo is maintained: the change generates undo in an undo segment, and undo segment blocks are cached in the buffer cache just like data blocks. Of course, if the cache fills up or a checkpoint is performed, this modified data is also written to the datafiles. So you should not assume that DBWR has not written to the datafiles; you cannot predict when the cache will fill or when a checkpoint will occur.
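A quick way to see both effects for yourself, assuming you can query the V$ views and that your undo tablespace is named UNDOTBS1 (adjust the name for your database): V$BH exposes one row per cached buffer, undo blocks included, and DIRTY = 'Y' marks buffers modified but not yet written by DBWR.

-- Cached buffers by status; DIRTY = 'Y' means modified, not yet written out.
SELECT status, dirty, COUNT(*) AS buffers
FROM v$bh
GROUP BY status, dirty;

-- Undo blocks are cached too: count buffers belonging to the undo tablespace.
SELECT COUNT(*) AS undo_buffers
FROM v$bh b
JOIN dba_data_files f ON f.file_id = b.file#
WHERE f.tablespace_name = 'UNDOTBS1';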

Similar Messages

  • ESE - Event Log Warning: 906 - A significant portion of the database buffer cache has been written out to the system paging file...

    Hello -
    We have 3 x EX2010 SP3 RU5 nodes in a cross-site DAG.
    Multi-role servers with 18 GB RAM [increased from 16 GB in an attempt to clear this warning without success].
    We run nightly backups on both nodes at the Primary Site.
    Node 1 backup covers all mailbox databases [active & passive].
    Node 2 backup covers the Public Folders database.
    The backups for each database are timed so they do not overlap.
    During each backup we get several of these event log warnings:
     Log Name:      Application
     Source:        ESE
     Date:          23/04/2014 00:47:22
     Event ID:      906
     Task Category: Performance
     Level:         Warning
     Keywords:      Classic
     User:          N/A
     Computer:      EX1.xxx.com
     Description:
     Information Store (5012) A significant portion of the database buffer cache has been written out to the system paging file.  This may result  in severe performance degradation.
     See help link for complete details of possible causes.
     Resident cache has fallen by 42523 buffers (or 27%) in the last 903 seconds.
     Current Total Percent Resident: 26% (110122 of 421303 buffers)
We've rescheduled the backups, and the warning message occurrences just move with the backup schedules.
We're not aware of any perceived end-user performance degradation; overnight backups in this time zone coincide with the business day for mailbox users in SEA.
I raised a call with the Microsoft Enterprise Support folks; they had a look at the BPA output and at data from their diagnostics tool. We have enough RAM and no major issues were detected.
    They suggested McAfee AV could be the root of our problems, but we have v8.8 with EX2010 exceptions configured.
    Backup software is Asigra V12.2 with latest hotfixes.
    We're trying to clear up these warnings as they're throwing SCOM alerts and making a mess of availability reporting.
    Any suggestions please?
    Thanks in advance

    Having said all that, a colleague has suggested we just limit the amount of RAM available for the EX2010 DB cache
    Then it won't have to start releasing RAM when the backup runs, and won't throw SCOM alerts
    This attribute should do it...
    msExchESEParamCacheSizeMax
    http://technet.microsoft.com/en-us/library/ee832793.aspx
    Give me a shout if this is a bad idea
    Thanks

  • SCOM reports "A significant portion of the database buffer cache has been written out to the system paging file. This may result in severe performance degradation"

    This was discussed here, with no resolution
    http://social.technet.microsoft.com/Forums/en-US/exchange2010/thread/bb073c59-b88f-471b-a209-d7b5d9e5aa28?prof=required
    I have the same issue.  This is a single-purpose physical mailbox server with 320 users and 72GB of RAM.  That should be plenty.  I've checked and there are no manual settings for the database cache.  There are no other problems with
    the server, nothing reported in the logs, except for the aforementioned error (see below).
    The server is sluggish.  A reboot will clear up the problem temporarily.  The only processes using any significant amount of memory are store.exe (using 53GB), regsvc (using 5) and W3 and Monitoringhost.exe using 1 GB each.  Does anyone have
    any ideas on this?
    Warning ESE Event ID 906. 
    Information Store (1497076) A significant portion of the database buffer cache has been written out to the system paging file.  This may result in severe performance degradation. See help link for complete details of possible causes. Resident cache
    has fallen by 213107 buffers (or 11%) in the last 207168 seconds. Current Total Percent Resident: 79% (1574197 of 1969409 buffers)

    Brian,
We had this event log entry as well, which SCOM picked up on, and 10 seconds before it Forefront Protection 2010 for Exchange updated all of its engines.
    We are running Exchange 2010 SP2 RU3 with no file system antivirus (the boxes are restricted and have UAC turned on as mitigations). We are running the servers primarily as Hub Transport servers with 16GB of RAM, but they do have the mailbox role installed
    for the sole purpose of serving as our public folder servers.
So we theorized the STORE process was just grabbing a ton of RAM, and occasionally it was told to dump the memory so the other processes could grab some - thus generating the alert. Up until last night we thought nothing of it, but ~25 seconds after the
cache flush to the paging file, we got the following alert:
    Log Name:      Application
    Source:        MSExchangeTransport
    Date:          8/2/2012 2:08:14 AM
    Event ID:      17012
    Task Category: Storage
    Level:         Error
    Keywords:      Classic
    User:          N/A
    Computer:      HTS1.company.com
    Description:
    Transport Mail Database: The database could not allocate memory. Please close some applications to make sure you have enough memory for Exchange Server. The exception is Microsoft.Exchange.Isam.IsamOutOfMemoryException: Out of Memory (-1011)
       at Microsoft.Exchange.Isam.JetInterop.CallW(Int32 errFn)
       at Microsoft.Exchange.Isam.JetInterop.MJetOpenDatabase(MJET_SESID sesid, String file, String connect, MJET_GRBIT grbit, MJET_WRN& wrn)
       at Microsoft.Exchange.Isam.JetInterop.MJetOpenDatabase(MJET_SESID sesid, String file, MJET_GRBIT grbit)
       at Microsoft.Exchange.Isam.JetInterop.MJetOpenDatabase(MJET_SESID sesid, String file)
       at Microsoft.Exchange.Isam.Interop.MJetOpenDatabase(MJET_SESID sesid, String file)
       at Microsoft.Exchange.Transport.Storage.DataConnection..ctor(MJET_INSTANCE instance, DataSource source).
    Followed by:
    Log Name:      Application
    Source:        MSExchangeTransport
    Date:          8/2/2012 2:08:15 AM
    Event ID:      17106
    Task Category: Storage
    Level:         Information
    Keywords:      Classic
    User:          N/A
    Computer:      HTS1.company.com
    Description:
    Transport Mail Database: MSExchangeTransport has detected a critical storage error, updated the registry key (SOFTWARE\Microsoft\ExchangeServer\v14\Transport\QueueDatabase) and as a result, will attempt self-healing after process restart.
    Log Name:      Application
    Source:        MSExchangeTransport
    Date:          8/2/2012 2:13:50 AM
    Event ID:      17102
    Task Category: Storage
    Level:         Warning
    Keywords:      Classic
    User:          N/A
    Computer:      HTS1.company.com
    Description:
    Transport Mail Database: MSExchangeTransport has detected a critical storage error and has taken an automated recovery action.  This recovery action will not be repeated until the target folders are renamed or deleted. Directory path:E:\EXCHSRVR\TransportRoles\Data\Queue
    is moved to directory path:E:\EXCHSRVR\TransportRoles\Data\Queue\Queue.old.
So it seems as if Forefront Protection 2010 for Exchange inadvertently triggered the cache flush, which didn't appear to happen quickly or thoroughly enough for the transport service to do what it needed to do, so it freaked out and performed the subsequent actions.
    Do you have any ideas on how to prevent this 906 warning, which cascaded into a transport service outage?
    Thanks!

  • A significant portion of the database buffer cache has been written out to the system paging file.

    Hi,
We seem to get this error through SCOM every couple of weeks. It doesn't correlate with the AV updates, so I'm not sure what's eating up the memory. The server has been patched to the latest rollup and service pack. The mailbox servers
have been provisioned with more than enough memory. Currently they just slow down until the databases activate on another mailbox server.
    A significant portion of the database buffer cache has been written out to the system paging file.
    Any ideas?

I've seen this with properly sized servers running very little Exchange load. It could be a number of different things. Here are some items to check:
    Confirm that the server hardware has the latest BIOS, drivers, firmware, etc
    Confirm that the Windows OS is running the recommended hotfixes.  Here is an older post that might still apply to you
    http://blogs.technet.com/b/dblanch/archive/2012/02/27/a-few-hotfixes-to-consider.aspx
    http://support.microsoft.com/kb/2699780/en-us
Set up perfmon to capture data from the server. Look for disk performance issues, excessive paging, CPU/processor spikes, and more. Use the PAL tool to collect and analyze the perf data -
    http://pal.codeplex.com/
    Include looking for other applications or processes that might be consuming system resources (AV, Backup, security, etc)
Be sure that the disks are properly aligned -
    http://blogs.technet.com/b/mikelag/archive/2011/02/09/how-fragmentation-on-incorrectly-formatted-ntfs-volumes-affects-exchange.aspx
Check that the network is properly configured for Exchange Server. You might be surprised how the network config can cause perf and SCOM alerts.
    Make sure that you did not (improperly) statically set msExchESEParamCacheSizeMax and msExchESEParamCacheSizeMin attributes in Active Directory -
    http://technet.microsoft.com/en-us/library/ee832793(v=exchg.141).aspx
    Be sure that hyperthreading is NOT enabled -
    http://technet.microsoft.com/en-us/library/dd346699(v=exchg.141).aspx#Hyper
    Check that there are no hardware issues on the server (RAM, CPU, etc).  You might need to run some vendor specific utilities/tools to validate.
Proper paging file configuration should be considered for Exchange servers. You can use perfmon to see just how much paging is occurring.
    These will usually lead you in the right direction. Good Luck!

  • Data Buffer Cache Error Message

I'm using a load rule that builds a dimension on the fly and getting the following error: "Not enough memory to allocate the Data Buffer Cache [adDatInitCacheParamsAborted]". I've got 4 other databases which are set up the same as this one, and I'm not getting this error. I've checked all the settings and I think they're all the same. Anyone have any idea what this error could mean? I can be reached at [email protected]


  • Data Buffer Cache!

    Hi Guru(s),
Please, can anyone tell me how we can check the size of the data buffer cache?
    Please help.
    Regards,
    Rajeev K

    Hi,
    Here is what I would do:
First, let's have a look at what views we have available to us:
    SQL> select table_name from dict where table_name like '%SGA%';
    TABLE_NAME
    DBA_HIST_SGA
    DBA_HIST_SGASTAT
    DBA_HIST_SGA_TARGET_ADVICE
    V$SGA
    V$SGAINFO
    V$SGASTAT
    V$SGA_CURRENT_RESIZE_OPS
    V$SGA_DYNAMIC_COMPONENTS
    V$SGA_DYNAMIC_FREE_MEMORY
    V$SGA_RESIZE_OPS
    V$SGA_TARGET_ADVICE
    GV$SGA
    GV$SGAINFO
    GV$SGASTAT
    GV$SGA_CURRENT_RESIZE_OPS
    GV$SGA_DYNAMIC_COMPONENTS
    GV$SGA_DYNAMIC_FREE_MEMORY
    GV$SGA_RESIZE_OPS
    GV$SGA_TARGET_ADVICE
19 rows selected.
v$sgainfo looks pretty promising; what does it contain?
    SQL> desc v$sgainfo
    Name                                                              Null?    Type
    NAME                                                                       VARCHAR2(32)
    BYTES                                                                      NUMBER
RESIZEABLE                                                                 VARCHAR2(3)
And how many rows are in it?
    SQL> select count(*) from v$sgainfo;
      COUNT(*)
        12
Not too many, so let's look at them all, with a little formatting:
SQL> select name, round(bytes/1024/1024,0) MB, resizeable from v$sgainfo order by name;
Hope that helps,
    Rob
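
If all you want is the buffer cache component itself, here are a couple of more direct probes (a sketch, assuming SELECT privilege on the V$ views; component names as of 10g):
-- The configured parameter:
show parameter db_cache_size
-- The live size of the default buffer cache, in MB:
SELECT component, current_size/1024/1024 AS mb
FROM v$sga_dynamic_components
WHERE component LIKE '%buffer cache%';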

  • Data Buffer Cache Limit

Is there any way that I could signal the data buffer cache to write all data to the data files if the amount of dirty blocks reaches, say, 50 MB?
I am processing BLOBs, one blob at a time, some of which exceed 100 MB in size, and the difficult thing is that I cannot write to disk until the whole blob is finished, as it is one transaction.
If anyone is going to suggest open, process, close, commit... well, I tried that, but it also gives a "no free buffers in buffer pool" error, and this happens once the file is about twice the cache size: a 100 MB file when db_cache_size is 50 MB.
Any ideas?

    Hello,
I am using Oracle 9.0.1.3.1.
I am getting the error ORA-00379: no free buffers available in buffer pool DEFAULT for block size 8K.
My init.ora file is:
    # Copyright (c) 1991, 2001 by Oracle Corporation
    # Cache and I/O
    db_block_size=8192
    db_cache_size=104857600
    # Cursors and Library Cache
    open_cursors=300
    # Diagnostics and Statistics
    background_dump_dest=C:\oracle\admin\iasdb\bdump
    core_dump_dest=C:\oracle\admin\iasdb\cdump
    timed_statistics=TRUE
    user_dump_dest=C:\oracle\admin\iasdb\udump
    # Distributed, Replication and Snapshot
    db_domain="removed"
    remote_login_passwordfile=EXCLUSIVE
    # File Configuration
    control_files=("C:\oracle\oradata\iasdb\CONTROL01.CTL", "C:\oracle\oradata\iasdb\CONTROL02.CTL", "C:\oracle\oradata\iasdb\CONTROL03.CTL")
    # Job Queues
    job_queue_processes=4
    # MTS
    dispatchers="(PROTOCOL=TCP)(PRE=oracle.aurora.server.GiopServer)", "(PROTOCOL=TCP)(PRE=oracle.aurora.server.SGiopServer)"
    # Miscellaneous
    aq_tm_processes=1
    compatible=9.0.0
    db_name=iasdb
    # Network Registration
    instance_name=iasdb
    # Pools
    java_pool_size=41943040
    shared_pool_size=33554432
    # Processes and Sessions
    processes=150
    # Redo Log and Recovery
    fast_start_mttr_target=300
    # Sort, Hash Joins, Bitmap Indexes
    pga_aggregate_target=33554432
    sort_area_size=524288
    # System Managed Undo and Rollback Segments
    undo_management=AUTO
    undo_tablespace=UNDOTBS
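
As far as I know there is no parameter that flushes the cache at a dirty-block threshold (incremental checkpointing via FAST_START_MTTR_TARGET is the closest), but two blunt instruments may help. This is a sketch, not a per-transaction control; the resize assumes a dynamic SGA with headroom under SGA_MAX_SIZE, and with a plain init.ora the change is memory-only:
-- Ask the checkpoint/DBWR machinery to write all dirty buffers now:
ALTER SYSTEM CHECKPOINT;
-- Or grow the default cache online so the staged LOB fits (9i and later):
ALTER SYSTEM SET db_cache_size = 200M;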

  • Data Buffer Cache Quality

    Hi All,
Can somebody please tell me some ways in which I can improve the data buffer cache quality? Presently it is 51.2%. The DB is 10.2.0.2.0.
What factors do I need to keep in mind if I want to increase DB_CACHE_SIZE?
Also, how can I find out the cache hit ratio?
Further, which are the most frequently accessed objects in my DB?
    Thanks and Regards,
    Nick.

Nick-- wud b DBA wrote:
Hi Aman,
Thanks. Can you please give the appropriate query for that?
Moreover, when I run:
SQL> desc V$SEGMENT-STATISTICS;
it gives the following error:
SP2-0565: Illegal identifier.
Regards,
Nick.
LOL dude, I put that in by mistake. It has a dash (-), but we need an underscore (_): V$SEGMENT_STATISTICS.
About the query, it varies with what you really mean by "most used object". If you mean the objects that undergo lots of reads and writes, then this may help:
SELECT Rownum AS Rank, Seg_Lio.*
FROM (SELECT St.Owner,
             St.Obj#,
             St.Object_Type,
             St.Object_Name,
             St.VALUE,
             'LIO' AS Unit
      FROM V$segment_Statistics St
      WHERE St.Statistic_Name = 'logical reads'
      ORDER BY St.VALUE DESC) Seg_Lio
WHERE Rownum <= 10
UNION ALL
SELECT Rownum AS Rank, Seq_Pio_r.*
FROM (SELECT St.Owner,
             St.Obj#,
             St.Object_Type,
             St.Object_Name,
             St.VALUE,
             'PIO Reads' AS Unit
      FROM V$segment_Statistics St
      WHERE St.Statistic_Name = 'physical reads'
      ORDER BY St.VALUE DESC) Seq_Pio_r
WHERE Rownum <= 10
UNION ALL
SELECT Rownum AS Rank, Seq_Pio_w.*
FROM (SELECT St.Owner,
             St.Obj#,
             St.Object_Type,
             St.Object_Name,
             St.VALUE,
             'PIO Writes' AS Unit
      FROM V$segment_Statistics St
      WHERE St.Statistic_Name = 'physical writes'
      ORDER BY St.VALUE DESC) Seq_Pio_w
WHERE Rownum <= 10;
But if you are looking for the objects which figure most highly in the waits, then this query may help:
select *
from (select decode(grouping(a.object_name), 1, 'All Objects', a.object_name) as "Object",
             sum(case when a.statistic_name = 'ITL waits'
                 then a.value else null end) "ITL Waits",
             sum(case when a.statistic_name = 'buffer busy waits'
                 then a.value else null end) "Buffer Busy Waits",
             sum(case when a.statistic_name = 'row lock waits'
                 then a.value else null end) "Row Lock Waits",
             sum(case when a.statistic_name = 'physical reads'
                 then a.value else null end) "Physical Reads",
             sum(case when a.statistic_name = 'logical reads'
                 then a.value else null end) "Logical Reads"
      from v$segment_statistics a
      where a.owner like upper('&owner')
      group by rollup(a.object_name)) b
where (b."ITL Waits" > 0 or b."Buffer Busy Waits" > 0);
This query's reference: http://www.dba-oracle.com/t_object_wait_v_segment_statistics.htm
So it depends on what basis you want to rank the objects.
    About the cache increase, are you seeing any wait events related to buffer cache or DBWR in the statspack report?
    HTH
    Aman....
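
Since Nick also asked how to find the cache hit ratio, here is the classic sketch from V$SYSSTAT. It is only a rough instance-wide number since startup, and, as the advice above implies, wait events are a better guide than the ratio alone:
-- Buffer cache hit ratio = 1 - physical reads / (db block gets + consistent gets)
SELECT ROUND(1 - (phy.value / (cur.value + con.value)), 4) AS buffer_cache_hit_ratio
FROM v$sysstat cur, v$sysstat con, v$sysstat phy
WHERE cur.name = 'db block gets'
AND con.name = 'consistent gets'
AND phy.name = 'physical reads';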

  • How data buffer cache is handled

    hi all,
I have a doubt about how the buffer cache is managed. I read that the server process will read blocks from the data file and keep them in the buffer cache if they are not already present in the buffer cache. Suppose two users from different sessions issue the same statement at the same time, and they are picked up by different server processes; upon checking, both will find that the block is not present in the SGA, so they will both go and read the block. Isn't that unnecessary? Or am I wrong? Please let me know how this is handled.
    Regards
    Suresh

    Hi tinku,
    I understand what you mean.. but my problem is different.
    suppose there are two sessions
    first session issues a statement:
    Select * from emp where empno = 12;
    and second session issues
    select * from emp where empno in (12,13);
Because these two statements are different, they are parsed separately. After parsing, each of the server processes will check whether the block containing empno 12 is already in memory; if it is not, it will read it from the data file and place it in the buffer cache. Suppose these two server processes find at the same time that the block is not in the SGA and both try to read it from the data file: there would then be two copies of the block in the SGA, wouldn't there? I don't think that is what actually happens, so I need to know how the server processes coordinate with each other to read only the necessary blocks, without redundancy.
    Regards
    Suresh
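
For what it's worth, my understanding (worth verifying against the docs for your version) is that Oracle serializes this case: the first server process allocates the buffer and pins it in 'read' state, and any other session needing the same block waits for that read to finish instead of issuing a second physical read. In 10g that wait is reported under its own event, which you can check:
-- Sessions that waited on a block already being read by another session:
SELECT event, total_waits
FROM v$system_event
WHERE event = 'read by other session';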

  • Data being fetched bigger than DB Buffer Cache

    DB Version: 10.2.0.4
OS: Solaris 5.10
We have a DB with DB_CACHE_SIZE set to 1 GB. Automatic Shared Memory Management is disabled (SGA_TARGET = 0).
If a query is fired against a table and is going to retrieve 2 GB of data, will that session hang? How will Oracle handle this?

Tom wrote:
If the retrieved blocks get automatically removed from the buffer cache after they are fetched, as per the LRU algorithm, then Oracle should handle this without any issues. Right?
Yes. No issues, in that the "size of a fetch" (e.g. selecting 2 GB worth of rows) need not fit completely in the db buffer cache (only 1 GB in size).
    As Sybrand mentioned - everything in that case will be flushed as newer data blocks will be read... and that will be flushed again shortly afterward as even newer data blocks are read.
    The cache hit ratio will thus be low.
But this will not cause Oracle errors or problems - simply that performance degrades as the data volume being processed exceeds the capacity of the cache.
It is like running a very large program that requires more RAM than what is available on a PC. The "extra RAM" comes from the swap file on disk. The program will be slow, as its memory pages (some on disk) need to be swapped into and out of memory as needed. It will work faster if the PC has sufficient RAM; however, the o/s is designed to deal with this exact situation, where more RAM is needed than what is physically available.
    Similar situation with processing larger data chunks than what the buffer cache has capacity for.
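
If you want to estimate what that constant aging-out costs, V$DB_CACHE_ADVICE predicts the physical reads you would do at other cache sizes (a sketch; assumes DB_CACHE_ADVICE is ON and an 8K default block size):
SELECT size_for_estimate AS cache_mb, estd_physical_reads
FROM v$db_cache_advice
WHERE name = 'DEFAULT'
AND block_size = 8192
ORDER BY size_for_estimate;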

  • Database buffer Cache

    Hi Guru,
Can anyone tell me the actual definitions of the data buffer cache and the log buffer cache, and how they work in Oracle 10g?
    Please
    Regards,
    Rajeev,India
    Edited by: 970371 on Nov 8, 2012 7:06 PM

vlethakula wrote:
The database buffer cache contains buffers that hold copies of the blocks read from the physical data files.
The log buffer contains the changes made to the database.
E.g.: you try to update a row; the changes made to that row are written in the form of change vectors to the log buffer, and from there, on certain rules (like a commit), the LGWR background process writes those changes from the log buffer to the redo log files.
The block which is modified in the database buffer cache will be written to the physical files by the DBWR process after certain rules are met (like a checkpoint).
The reason that I didn't give the explanation, or links to the same from the docs, is that I wanted the OP to come up with some sort of understanding of the two caches on his own first.
Aman....
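
To put rough numbers on the two caches described above, both sizes are visible in V$SGAINFO (a sketch; row names as of 10g):
SELECT name, ROUND(bytes/1024/1024) AS mb
FROM v$sgainfo
WHERE name IN ('Buffer Cache Size', 'Redo Buffers');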

  • Data File Cache Significance

Hi Everyone, I have been away from Essbase for a little while and in the meantime our client made the move to Essbase v6. Could somebody please explain the significance, and the correct way to set, the Data File Cache? Also, have the rules for setting the Data Cache changed as a result? Any advice would be appreciated - I understood Essbase v5 quite well but have been out of the loop as far as Essbase v6 goes.

The data file cache has significance only if your client is using Direct I/O. Direct I/O was introduced beginning with v6 to allow DBAs to explicitly manage data file caching on a database-by-database basis rather than allowing the operating system to do it. The previous I/O management scheme (Buffered I/O) was still supported, but Direct I/O was the default.
Unfortunately, in installations with multiple applications per server, optimizing the data file cache for each database proved to be a headache, so Hyperion decided to revert to Buffered I/O as the default beginning with v6.2. So in Essbase versions 6.0-6.1 Direct I/O is the default and you have to change it in essbase.cfg to use Buffered I/O; in version 6.2 and later the reverse is true (please correct me if I'm wrong, everyone).
If you choose to use Direct I/O, I believe the conventional wisdom is to make the data file cache big enough to hold all the .pag files in your database. Otherwise, set it as large as possible.
Good luck,
Bruce

  • Buffer cache of SGA

Can anyone please clarify my doubt: which parameter defines the size of the db buffer cache in the SGA? Does DB_CACHE_SIZE directly define the size, or is it the size of the cache of standard blocks (specified by the DB_BLOCK_SIZE parameter)?

DB_BLOCK_BUFFERS specifies the number of blocks to allocate for the data buffer. This parameter's value is then multiplied by DB_BLOCK_SIZE to calculate the size of the data buffer.
DB_CACHE_SIZE specifies the size value itself directly, in units of KB, MB, or GB. This parameter alone is enough to determine the data buffer cache size.
DB_BLOCK_BUFFERS can only create a buffer cache in units of blocks, based on the one parameter DB_BLOCK_SIZE. On the other hand, multiple data buffer caches can be created by using the DB_nK_CACHE_SIZE parameters, where n is the block size for that buffer cache. So, for example, one can allocate an X MB buffer cache of 8K block size and have a Y MB buffer cache of 16K blocks. This helps when you have tablespaces of varying block sizes (this is not possible using DB_BLOCK_BUFFERS, as DB_BLOCK_SIZE is not modifiable).
DB_CACHE_SIZE can work along with the SGA_TARGET parameter (which decides the SGA size). If DB_CACHE_SIZE is 0, then its value varies based on usage; if a value is set, that value becomes a minimum. This is not possible using DB_BLOCK_BUFFERS.
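
A hypothetical init.ora fragment contrasting the two styles (values are illustrative only; note the two sizing parameters are alternatives and an instance cannot set both):
# Old style: cache sized in blocks
db_block_size=8192
db_block_buffers=12800      # 12800 x 8K = 100 MB data buffer
# New style: cache sized directly, plus a second cache for 16K tablespaces
db_cache_size=100M
db_16k_cache_size=32M
sga_target=600M             # with this set, db_cache_size acts as a minimum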

  • Database Buffer Cache Full

    Hi,
What if an update statement fills the existing db cache with the updated information (in this case an uncommitted transaction)? In that case, if another user runs an update statement, where is that update information stored before being written to the datafile?

No, this is not correct. A block/buffer, when it undergoes any kind of DML, is marked as dirty. Once marked dirty, there are various events which push it to the dirty list, from where it has to be flushed out. So if the buffer is marked as dirty, it will be written to the datafile for sure, without waiting for the transaction to be committed. That's why, because of this very behaviour, we need to roll back the changes with the roll-backward process, to make sure that only the committed data stays in the datafile, not the uncommitted data.
The only buffers which are not flushed to the datafile are the buffers used by Oracle's read consistency mechanism, the snapshot buffers. The snapshot buffers are created in the same hash bucket as the dirty buffers, but they are not flushed out to the datafile; they are simply thrown away.
OK... so the dirty buffers are written to datafiles irrespective of the state of the transaction (committed or uncommitted).
If this is true, then when you roll back the transaction, where does Oracle get the previous data from? The previous data must be stored somewhere in order to replace the rolled-back changes; in this case it is stored in the UNDO tablespace, which also provides read consistency. If this is not true, how is read consistency provided by the snapshot buffers?
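
To watch that behaviour live, snapshot a couple of DBWR counters from V$SYSSTAT before and after the update and diff them (a sketch; statistic names as of 10g):
SELECT name, value
FROM v$sysstat
WHERE name IN ('physical writes from cache', 'DBWR checkpoint buffers written');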

  • What information is brought into the database buffer cache?

    Hi,
What information is brought into the database buffer cache when a user performs any one of the operations "insert", "update", "delete", or "select"?
Is only the data block to be modified brought into the cache, or are all the data blocks of a table brought into the cache during the operations I mentioned above?
What is the purpose of the SQL area? What information is brought into the SQL area?
Please explain the logic behind the questions I asked above.
    thanks in advance,
    nvseenu

Documentation is your friend. Why not start by reading the Memory Architecture chapter:
http://download.oracle.com/docs/cd/B19306_01/server.102/b14220/memory.htm
Message was edited by:
orafad
    Hi orafad,
I have read the Memory Architecture chapter.
In that documentation, the following explanation is given:
    The database buffer cache is the portion of the SGA that holds copies of data blocks read from datafiles.
But I would like to know whether all or only a few data blocks are brought into the cache.
    thanks in advance,
    nvseenu
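
In short, Oracle reads only the blocks the execution plan needs: an indexed read brings in just the visited index and table blocks, while a full table scan reads all of the table's blocks below the high-water mark (in multiblock chunks). Here is a sketch to see how many blocks of one table are currently cached (assumes an EMP table visible to you in DBA_OBJECTS):
SELECT o.object_name, COUNT(*) AS cached_blocks
FROM v$bh b
JOIN dba_objects o ON o.data_object_id = b.objd
WHERE o.object_name = 'EMP'
GROUP BY o.object_name;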
