About DBWn & Checkpoint

Hi All,
I am confused by these two statements.
DBWn defers writing to the data files until one of the following events occurs:
• Incremental or normal checkpoint
Checkpoint
An event called a checkpoint occurs when the Oracle background process DBWn writes all the modified database buffers in the SGA, including both committed and uncommitted data, to the data files.
I want to know whether
DBWn causes a checkpoint to occur,
or
a checkpoint causes DBWn to write.
Regards
Rama

DBWn executes most of the checkpoint tasks and CKPT does the rest.
Checkpoints are triggered by the LOG_CHECKPOINT_INTERVAL parameter and by redo log switches (from http://download-uk.oracle.com/docs/cd/B19306_01/server.102/b14237/initparams109.htm#sthref455):
LOG_CHECKPOINT_INTERVAL specifies the frequency of checkpoints in terms of the number of redo log file blocks that can exist between an incremental checkpoint and the last block written to the redo log. This number refers to physical operating system blocks, not database blocks.
Regardless of this value, a checkpoint always occurs when switching from one online redo log file to another. Therefore, if the value exceeds the actual redo log file size, checkpoints occur only when switching logs. Checkpoint frequency is one of the factors that influence the time required for the database to recover from an unexpected failure.
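As a quick illustration of the triggers described above, both of the following statements cause checkpoint activity (a sketch from SQL*Plus; the parameter display is illustrative and output depends on your instance):

```sql
-- Show the checkpoint-related initialization parameters
SHOW PARAMETER log_checkpoint

-- Request a full checkpoint explicitly
ALTER SYSTEM CHECKPOINT;

-- Force a log switch, which also drives checkpoint activity
ALTER SYSTEM SWITCH LOGFILE;
```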
Message was edited by:
Pierre Forstmann

Similar Messages

  • Confused about transaction, checkpoint, normal recovery.

    After reading the documentation PDF, I am getting confused by its description.
    Rephrased from the paragraph in the transactions PDF:
    "When database records are created, modified, or deleted, the modifications are represented in the BTree's leaf nodes. Beyond leaf node changes, database record modifications can also cause changes to other BTree nodes and structures"
    "if your writes are transaction-protected, then every time a transaction is committed the leaf nodes(and only leaf nodes) modified by that transaction are written to JE logfiles on disk."
    "Normal recovery, then is the process of recreating the entire BTree from the information available in the leaf nodes."
    According to the above description, I have following concerns:
    1. If I open a new environment and db, insert/modify/delete several million records, and do not reopen the environment, then normal recovery is not run. Does that mean that, so far, the BTree is not complete? Will that affect query efficiency? Or, even worse, will it output incorrect results?
    2. If my thinking above is correct, then every time I finish committing transactions I need to let the checkpoint run in order to recreate the whole BTree. If my thinking is not correct, then I don't need to care about anything: just call transaction.commit(), or db.sync(), and let JE take care of all the details. (I hope this is true :>)
    michael.

    http://www.oracle.com/technology/documentation/berkeley-db/je/TransactionGettingStarted/chkpoint.html
    Checkpoints are normally performed by the checkpointer background thread, which is always running. Like all background threads, it is managed using the je.properties file. Currently, the only checkpointer property that you may want to manage is je.checkpointer.bytesInterval. This property identifies how much JE's log files can grow before a checkpoint is run. Its value is specified in bytes. Decreasing this value causes the checkpointer thread to run checkpoints more frequently. This will improve the time that it takes to run recovery, but it also increases the system resources (notably, I/O) required by JE.
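    For instance, the interval described above could be lowered in je.properties roughly like this (the value 10000000 is an arbitrary example for illustration, not a recommendation):

    ```
    # je.properties (in the environment home directory)
    # Run a checkpoint after roughly 10 MB of log growth
    # instead of the default 20 MB
    je.checkpointer.bytesInterval=10000000
    ```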
    """

  • Doubt about database point in time recovery using rman

    Hi Everyone,
    I have been practising various RMAN restore and recovery scenarios, and I have a doubt regarding database point-in-time recovery using RMAN. Imagine I have a full database backup, including the controlfile, scheduled to run at 10 PM every day, and today is 20 Dec 2013. Say I want to restore the database to a prior point in time (18 Dec, up to 8 AM). I would restore all the datafiles from the backup taken on the night of the 17th and apply archives until 8 AM on 18 Dec. In this scenario, should I restore the controlfile too from the 17 Dec backup (I am assuming yes), or can I use the current controlfile (assuming it is intact)? I found the below in the Oracle docs.
    Performing Point-in-Time Recovery with a Current Control File
    The database must be closed to perform database point-in-time recovery. If you are recovering to a time, then you should set the time format environment variables before invoking RMAN. The following are sample Globalization Support settings:
    NLS_LANG = american_america.us7ascii
    NLS_DATE_FORMAT="Mon DD YYYY HH24:MI:SS"
    To recover the database until a specified time, SCN, or log sequence number:
    After connecting to the target database and, optionally, the recovery catalog database, ensure that the database is mounted. If the database is open, shut it down and then mount it:
    SHUTDOWN IMMEDIATE;
    STARTUP MOUNT;
    Determine the time, SCN, or log sequence that should end recovery. For example, if you discover that a user accidentally dropped a tablespace at 9:02 a.m., then you can recover to 9 a.m.--just before the drop occurred. You will lose all changes to the database made after that time.
    You can also examine the alert.log to find the SCN of an event and recover to a prior SCN. Alternatively, you can determine the log sequence number that contains the recovery termination SCN, and then recover through that log. For example, query V$LOG_HISTORY to view the logs that you have archived. 
    RECID      STAMP      THREAD#    SEQUENCE#  FIRST_CHANGE# FIRST_TIME NEXT_CHANGE#
             1  344890611          1          1          20037 24-SEP-02         20043
             2  344890615          1          2          20043 24-SEP-02         20045
             3  344890618          1          3          20045 24-SEP-02         20046
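    A query along these lines would produce the listing above (the column list is assumed from the headings shown):

    ```sql
    SELECT recid, stamp, thread#, sequence#,
           first_change#, first_time, next_change#
      FROM v$log_history
     ORDER BY recid;
    ```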
    Perform the following operations within a RUN command:
    Set the end recovery time, SCN, or log sequence. If specifying a time, then use the date format specified in the NLS_LANG and NLS_DATE_FORMAT environment variables.
    If automatic channels are not configured, then manually allocate one or more channels.
    Restore and recover the database.
      The following example performs an incomplete recovery until November 15 at 9 a.m. 
    RUN
    {
      SET UNTIL TIME 'Nov 15 2002 09:00:00';
      # SET UNTIL SCN 1000;       # alternatively, specify SCN
      # SET UNTIL SEQUENCE 9923;  # alternatively, specify log sequence number
      RESTORE DATABASE;
      RECOVER DATABASE;
    }
    If recovery was successful, then open the database and reset the online logs:
    ALTER DATABASE OPEN RESETLOGS;
    I did not quite understand why the above scenario uses the current controlfile, since after the restore and recovery the checkpoint SCN in the current controlfile and the checkpoint SCN in the datafile headers do not match. Thanks in advance for your help.
    Thanks
    satya

    Thanks for the reply, but what about the checkpoint SCN in the controlfile? My understanding is that unless the checkpoint SCN in the controlfile and the datafile headers match, the database will not open. So assuming the checkpoint SCN in my current controlfile is 1500 and I want to recover my database to SCN 1200, the SCN in the datafiles (1200) does not match the SCN in the controlfile (1500). Will the database open in such cases?
    Thanks
    Satya
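    One way to see the SCNs being compared in the question above is to query the controlfile's view of the datafiles against the datafile headers themselves while the database is mounted (a sketch; the values will differ per database):

    ```sql
    -- Checkpoint SCN the controlfile records for each datafile
    SELECT file#, checkpoint_change# FROM v$datafile;

    -- Checkpoint SCN actually stored in each datafile header
    SELECT file#, checkpoint_change# FROM v$datafile_header;
    ```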

  • Incremental checkpoint and SCN

    Hi,
    I am getting incremental checkpoint messages, with an SCN, in my alert log.
    >
    Completed checkpoint up to RBA [0x125de6.2.10], SCN: 445135162445
    >
    Does this mean that all dirty blocks whose initial changes were made before this SCN (445135162445) have been written to disk, so that instance recovery can begin from the SCN at which the checkpoint completed? Or is it the other way round: the incremental checkpoint occurred at SCN 445135162445?
    Sekar

    user13485610 wrote:
    As per my knowledge, the checkpoint is classified as below (correct me if I am wrong somewhere)
    Checkpoint types can be divided as INCREMENTAL and COMPLETE.
    Also COMPLETE CHECKPOINT can be divided further into
    PARTIAL and FULL.
    It would be convenient to have a reference to the documents where you picked up this information. There may be further reading in them that clarifies the meaning. The terms have been around for a long time, of course, but it's always hard to get any sort of definitive description together - in your case, for example, you don't make any comment about which checkpoints lead to high priority writes and which to low, but the description of any type of checkpoint is incomplete without some reference to the write priority.
    As far as classifying checkpoints by name - I'm not too concerned that there is still some confusion in the different ways that people name or categorise them, provided that they can describe what's going on to ensure that there is no ambiguity. In this context I think there are only three options to consider:
    a) does the particular type of checkpoint walk along the checkpoint queue (CKPTQ) in order to pick the blocks that need to be written to disc.
    b) does the particular type of checkpoint use a different queue (such as an object queue or file queue) to pick the blocks that need to be written to disc.
    c) is there any other mechanism for picking the blocks to be written - such as walking the LRU and identifying all dirty blocks.
    To my mind, an incremental checkpoint should probably have a definition that says it walks the checkpoint queue.
    I dislike the term "complete" if it then leads to the option for "partial" - how much clarity can you read into the statement "at this point Oracle does a partial complete checkpoint" (or should that be a "complete partial checkpoint") - but I can understand the need for a term of that sort to distinguish a checkpoint that is based on one of the other queues.
    But my doubt is mentioned below.
    2.At the time of log switch - Sometimes log switches may trigger a complete checkpoint , if the
    next log where the log switch is to take place is Active.
    Why does it behave in this fashion? (Any internal thoughts on this, please.)
    This, in part, is why I'd like to see the reference document - I think that the term "complete" may have been given a different meaning at this point. If the logfile you want to use is still active, checkpoint activity MUST take place urgently, but it need only be a checkpoint that walks the CKPTQ up to the point where the content of the target redo log can be discarded. This is no different from any other checkpointing due to log file switch - but it could have a higher degree of urgency. (The need to differentiate this special case on log file switch probably came about at the time that Oracle stopped triggering an automatic checkpoint at every log file switch.)
    Regards
    Jonathan Lewis

  • Checkpoints for conversion of load from Full to delta

    Hi Guys,
    I need to change the Delivery item data load from daily Full to daily Delta (3.x design).
    The catch is that the same cube has multiple other full data loads from billing and sales order extractors.
    Please let me know the checkpoints, if any, before changing from full update to delta update.
    Will there be any issues if I perform a delta load from one extractor and full loads from the others?
    For e.g., a common key figure being loaded from 2 different extractors.
    I don't think there would be any issue, but I still wanted a second opinion about the checkpoints required before conversion.
    Thanks.

    Hi Neeraj
    Yes, you can load a Full load from one data source and Init + Delta from another data source into a data target / info provider. However, when you say that 'similar' key figures are getting updated from both extractors, you need to keep in mind that the data might get overwritten or added for these key figures. Your info provider design plays a big role in this. If your design isn't capable of storing data from these data sources as different records, then you could have data validity issues.
    As for moving from full to init/delta there are no issues.
    Cheers
    Umesh

  • Understanding Oracle checkpoint in depth

    Dear All,
    I am trying to find some good information about Oracle checkpoints and SCNs to have a thorough and deep understanding of the same.
    Could someone provide me with some good links.
    Thanks.

    Hi;
    Hussein Sawwan already answered your question, In addition to his great post
    Please see:
    Difference between SCN and checkpoint.
    http://sai-oracle.blogspot.com/2006/03/difference-between-scn-and-checkpoint.html
    Oracle Checkpoint mechanisim
    http://www.adp-gmbh.ch/ora/concepts/checkpoint.html
    http://www.orafaq.com/wiki/Checkpoint
    Regard
    Helios

  • 'fast_start_mttr_target=300'

    HI all,
    I've been reading about the parameter 'fast_start_mttr_target=300', but I couldn't get a proper idea of it.
    Does it just mean that if Oracle is taking a long time to recover, it should not go beyond this time to restart, or is it something else?
    In the meantime I got the idea of the incremental checkpoint and cache recovery (the rolling-forward stage),
    but I have only vague knowledge of 'fast_start_mttr_target'.
    How exactly does it help to reduce recovery time, and what exactly does it do?
    it will be a great help for me!!
    Thanx in Advance!!!

    Hi,
    I suggest you read the two articles below; they may help you:
    Automated Checkpoint Tuning (MTTR)
    http://www.akadia.com/
    Recovery Enhancements In Oracle9i
    http://www.oracle-base.com/articles/9i/RecoveryEnhancements9i.php
    Cheers
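    A minimal sketch of setting the parameter asked about above and checking the result (the value 300 comes from the question; V$INSTANCE_RECOVERY reports the target and the estimate the instance is actually achieving):

    ```sql
    ALTER SYSTEM SET fast_start_mttr_target = 300;

    SELECT target_mttr,
           estimated_mttr,
           recovery_estimated_ios
      FROM v$instance_recovery;
    ```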

  • NFe and NFSe

    Hi,
    Can someone please explain more about the checkpoints or configurations (from a Finance point of view) needed for NFe and NFSe?
    Many thanks....

    Hello,
    As Fernando commented, one can say that the modules/processes involved are MM and SD. But as Henrique also pointed out, there are some points that involve FICO, such as the configuration of the Fiscal Domicile and the postal-code (CEP) range, which sit within the FICO configuration structure.
    But of course, it all depends on how your company defines and separates the processes. I know companies where the configuration was entirely specified by MM and SD, but FICO actually did the configuring.
    Regards
    Eduardo

  • Next Extent (Tablespace)

    Hi
    When a datafile has autoextend ON, is a NEXT size equal to the database block size good for performance?
    Suppose my datafile is 100 MB with autoextend on and NEXT set to 16K; if I import a 2 GB table into this datafile using imp, I think the performance is very low. So is increasing NEXT to a higher number like 100M a good idea?

    avramits wrote:
    checkpoint and SCN number for old files and complete information when new file created.
    Well, correct me if I am wrong: are you talking about a "file checkpoint"? I understand that with a new file the info about it would be recorded, but I am still in doubt about the statement the OP made that the control file is updated when an extent is allocated. I checked V$SYSSTAT but couldn't find anything relevant that would match this. There is indeed a file checkpoint, which does happen and is specific to a particular file only. And what about the SCN? Where does the SCN come into the picture here?
    Regards
    Aman....

  • Functionality of CKPT

    Hi experts, I have a query regarding the actions performed by CKPT.
    Every three seconds, the checkpoint process (CKPT) records information in the control file about the checkpoint position in the online redo log.
    The following is posted by one of our expert in some thread.
    CKPT process will do the following things:
    1)Flushes the redo log buffer to redo log files by means of LGWR.
    2)Writes a checkpoint record to the redo log file.
    3)Initiates DBWR to write all dirty blocks back to the datafiles, thus synchronizes the database.
    4)Updates the headers of the data files with information about the last checkpoint performed.
    5)Update the control files about the last checkpoint.
    From the above two statements (which are in bold), can we conclude that even the redo log file will have checkpoint information, and that it will be written by the CKPT process?
    Please explain. Thanks in advance.

    Hi,
    You said,
    Every three seconds, DBWR wakes up to check whether there are any dirty buffers to write to datafiles. If yes, it writes them out; otherwise it does nothing.
    This is known as an incremental checkpoint. CKPT does not record information about the checkpoint position in the online redo log. There is no relation between checkpoint and redo log.
    Though the point about the incremental writing done by DBWR is correct, the rest is wrong. An incremental checkpoint is the process of writing the checkpoint RBA, the information about the oldest changed block, into the control file, so that in the event of an instance crash the control file has the information about where recovery has to start. So your statement that CKPT doesn't record it in the control file is not right. Please read the following two links for the same information.
    http://www.vldb.org/conf/1998/p665.pdf
    http://www.ixora.com.au/notes/rba.htm
    You said,
    Who posted this? Surely he is not an expert.
    CKPT never flushes the redo log buffer by means of LGWR. It never writes a checkpoint record to the redo log files.
    Let's not get into saying who is an expert and who is not; being an expert is a big term. But the statement mentioned is correct, as Pavan Kumar (the one I know) said. It is relevant to Oracle 7, and probably older releases too, but I don't know about those releases. In that release there was a parameter which actually enabled the CKPT process, and the work now done by CKPT was done by LGWR at the time. I believe that in Oracle 8 CKPT was made a mandatory process, and hence the distinction between the jobs done by LGWR and CKPT was made.
    About the last part: yes, the redo log file does have the information about the number at which recovery has to start. It may not be the whole checkpoint number, but there is certainly an SCN (which makes up most of the checkpoint number) in it. Check the V$LOG view:
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/dynviews_1150.htm#REFRN30127
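    The V$LOG view linked above can be queried like this to see the first SCN recorded for each online log group:

    ```sql
    SELECT group#, thread#, sequence#, status, first_change#
      FROM v$log;
    ```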
    HTH
    Aman....

  • Problem about "Checkpoint not complete"

    Dear Friends ,
    I get the following error in my database 2 or 3 times a day. The error is:
    Current log# 1 seq# 4127 mem# 0: /dbfs/oradata/ababil/redo01.log
    Thread 1 cannot allocate new log, sequence 4128
    Checkpoint not complete
    I am using an Oracle 10g database server where each redo log file is 512 MB, and each group contains ONE member.
    Our production server is used in a private bank.
    How can I resolve this problem?

    Hi,
    Girish, I will try to lay out the options, and you tell me which is best.
    Increasing the size of the existing redo log files gives some extra slack of time for performing the checkpoint, which is the case the OP is concerned with.
    Compared with that, adding an extra redo log file adds one more checkpoint to the cycle and increases the total redo space accordingly, but the size of the existing files is unchanged.
    I think it is better to add an extra redo log file instead of increasing the size of the present redo log files.
    Let me know what you think, and please post your concerns and suggestions.
    - Pavan Kumar N
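    Either remedy discussed above can be sketched in SQL; the group numbers, file paths, and sizes below are examples only, not taken from the OP's system:

    ```sql
    -- Option 1: add an extra redo log group
    ALTER DATABASE ADD LOGFILE GROUP 4
      ('/dbfs/oradata/ababil/redo04.log') SIZE 512M;

    -- Option 2: add larger groups, then drop an old group
    -- once V$LOG shows it as INACTIVE
    ALTER DATABASE ADD LOGFILE GROUP 5
      ('/dbfs/oradata/ababil/redo05.log') SIZE 1G;
    ALTER DATABASE DROP LOGFILE GROUP 1;
    ```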

  • Know more about checkpoint

    Checkpoints come in many varieties: full, file, thread, parallel query, object, incremental, and logfile switch.
    Each kind of checkpoint has its own characteristics. For example, an incremental checkpoint requires CKPT to update the controlfile every 3 seconds but does not update the datafile headers, whereas a FULL CHECKPOINT must complete immediately (synchronously) and updates both the controlfile and the datafile headers.
    The characteristics of each checkpoint type are listed below:
    Full Checkpoint
    Writes block images to the database for all dirty buffers from all instances
    Statistics updated:
    DBWR checkpoints
    DBWR checkpoint buffers written
    DBWR thread checkpoint buffers written
    Caused by:
    Alter system checkpoint [global]
    Alter database begin backup
    Alter database close
    Shutdown
    Controlfile and datafile headers are updated
    CHECKPOINT_CHANGE#
    Thread Checkpoint
    Writes block images to the database for all dirty buffers from one instance
    Statistics updated:
    DBWR checkpoints
    DBWR checkpoint buffers written
    DBWR thread checkpoint buffers written
    Caused by:
    Alter system checkpoint local
    Controlfile and datafile headers are updated
    CHECKPOINT_CHANGE#
    File Checkpoint
    Writes block images to the database for all dirty buffers for all files of a tablespace from all instances
    Statistics updated:
    DBWR tablespace checkpoint buffers written
    DBWR checkpoint buffers written
    DBWR checkpoints
    Caused by:
    Alter tablespace XXX offline
    Alter tablespace XXX begin backup
    Alter tablespace XXX read only
    Controlfile and datafile headers are updated
    CHECKPOINT_CHANGE#
    Parallel Query Checkpoint
    Writes block images to the database for all dirty buffers belonging to objects accessed by the query from all instances
    Statistics updated:
    DBWR checkpoint buffers written
    DBWR checkpoints
    Caused by:
    Parallel Query
    Parallel Query component of PDML or PDDL
    Mandatory for consistency
    Object “Checkpoint”
    Writes block images to the database for all dirty buffers belonging to an object from all instances
    Statistics updated:
    DBWR object drop buffers written
    DBWR checkpoints
    Caused by:
    Drop table XXX
    Drop table XXX purge
    Truncate table XXX
    Mandatory for media recovery purposes
    Incremental Checkpoint
    Writes the contents of “some” dirty buffers to the database from CKPT-Q
    Block images written in SCN order
    Checkpoint RBA updated in SGA
    Statistics updated:
    DBWR checkpoint buffers written
    Controlfile is updated every 3 seconds by CKPT
    Checkpoint progress record
    Log Switch Checkpoint (before 8i, a log switch checkpoint was a FULL CHECKPOINT)
    Writes the contents of “some” dirty buffers to the database
    Statistics updated:
    DBWR checkpoints
    DBWR checkpoint buffers written
    background checkpoints started
    background checkpoints completed
    Controlfile and datafile headers are updated
    CHECKPOINT_CHANGE#
    Regardless of the checkpoint type, all local checkpoints are handled in a similar, localized way. Within each instance, all active local checkpoint requests are kept in a queue called the Active Checkpoint Queue. Each entry in this queue represents one local checkpoint request. When a process needs a checkpoint (it may be a foreground process, e.g. one executing "alter tablespace users begin/end backup", or CKPT or another background process), it places a new request record on the Active Checkpoint Queue. A typical checkpoint request consists of the checkpoint request type, a priority, the checkpoint structure associated with the request, the waiter process, and other related attributes such as the tablespace id and file number for a file checkpoint, or the object id for an object checkpoint.
    The DBWR process continuously scans the Active Checkpoint Queue and services the checkpoint requests on it. Once a request has been completed, DBWR marks it as completed. The CKPT process in turn monitors the Active Checkpoint Queue to see whether all requests have been completed. When CKPT finds that a checkpoint request is complete, it removes the request from the Active Checkpoint Queue. Depending on the kind and purpose of the checkpoint, completion of a local checkpoint means that certain on-disk data structures are updated to reflect the physical completion of that checkpoint. This is done either directly by CKPT, or directly by the waiter process that submitted the checkpoint request, or indirectly by CKPT waking up that waiter process; which of these applies depends on the checkpoint kind and purpose.
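    The statistics named in the table above can be watched in V$SYSSTAT, for example before and after forcing a full checkpoint (a sketch; the exact statistic names vary somewhat by version):

    ```sql
    SELECT name, value
      FROM v$sysstat
     WHERE name IN ('DBWR checkpoints',
                    'DBWR checkpoint buffers written',
                    'background checkpoints started',
                    'background checkpoints completed');

    ALTER SYSTEM CHECKPOINT;
    ```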


  • Not able to view Forms Server version in Help: About Oracle Applications after the forms upgrade 10.1.2.3.0

    Hi all,
    DB:11.2.0.3.0
    EBS:12.1.3
    O/S: Sun Solaris SPARC 64 bits
    I am not able to view the Forms Server version in Help: About Oracle Applications after the forms upgrade to 10.1.2.3.0, performed as per note: Upgrading OracleAS 10g Forms and Reports to 10.1.2.3 (437878.1).
    Java/JRE upgraded to 1.7.0.45 and JAR files regenerated (without the force option). Able to open forms without any issues.
    A)
    $ORACLE_HOME/bin/frmcmp help=y
    FRM-91500: Unable to start/complete the build.
    B)
    $ORACLE_HOME/bin/rwrun ?|grep Release
    Report Builder: Release 10.1.2.3.0 - Production on Thu Nov
    28 14:20:45 2013
    Is this an issue? Could anyone who has faced a similar issue please share the fix?
    Thank You for your time
    Regards,

    Hi Hussein,
    You mean reboot the Solaris server and then start the database and application services? We have two databases running on this Solaris server.
    DBWR Trace file shows:
    Read of datafile '+ASMDG002/test1/datafile/system.823.828585081' (fno 1) header failed with ORA-01206
    Rereading datafile 1 header failed with ORA-01206
    V10 STYLE FILE HEADER:
            Compatibility Vsn = 186646528=0xb200000
            Db ID=0=0x0, Db Name='TEST1'
            Activation ID=0=0x0
            Control Seq=31739=0x7bfb, File size=230400=0x38400
            File Number=1, Blksiz=8192, File Type=3 DATA
    Tablespace #0 - SYSTEM  rel_fn:1
    Creation   at   scn: 0x0000.00000004 04/27/2000 23:14:44
    Backup taken at scn: 0x0001.db8e5a1a 04/17/2010 04:16:14 thread:1
    reset logs count:0x316351ab scn: 0x0938.0b32c3b1
    prev reset logs count:0x31279a4c scn: 0x0938.08469022
    recovered at 11/28/2013 19:43:22
    status:0x2004 root dba:0x00c38235 chkpt cnt: 364108 ctl cnt:364107
    begin-hot-backup file size: 230400
    Checkpointed at scn:  0x0938.0cb9fe5a 11/28/2013 15:04:52
    thread:1 rba:(0x132.49a43.10)
    enabled  threads:  01000000 00000000 00000000 00000000 00000000 00000000
    Hot Backup end marker scn: 0x0000.00000000
    aux_file is NOT DEFINED
    Plugged readony: NO
    Plugin scnscn: 0x0000.00000000
    Plugin resetlogs scn/timescn: 0x0000.00000000 01/01/1988
    00:00:00
    Foreign creation scn/timescn: 0x0000.00000000 01/01/1988
    00:00:00
    Foreign checkpoint scn/timescn: 0x0000.00000000 01/01/1988
    00:00:00
    Online move state: 0
    DDE rules only execution for: ORA 1110
    ----- START Event Driven Actions Dump ----
    ---- END Event Driven Actions Dump ----
    ----- START DDE Actions Dump -----
    Executing SYNC actions
    ----- START DDE Action: 'DB_STRUCTURE_INTEGRITY_CHECK' (Async) -----
    Successfully dispatched
    ----- END DDE Action: 'DB_STRUCTURE_INTEGRITY_CHECK'
    (SUCCESS, 0 csec) -----
    Executing ASYNC actions
    ----- END DDE Actions Dump (total 0 csec) -----
    ORA-01186: file 1 failed verification tests
    ORA-01122: database file 1 failed verification check
    ORA-01110: data file 1:
    '+ASMDG002/test1/datafile/system.823.828585081'
    ORA-01206: file is not part of this database - wrong
    database id
    Thanks,

  • Performance / stability issues with BDB 3.3.62 while checkpointing

    Hi,
    first of all: we use BDB 3.2 for more than a year within our application and we never had any performance or stability issues. Now we are about to release a new version of our software with BDB 3.3 and we found some problems during our release tests.
    This is the situation: we have implemented some cleanup routines that make very heavy use of the BDB environment. The common usage pattern is to scan the database (about 300GB with >> 100 million items) and then delete a lot of items from the database. The first versions of our cleanup routine worked without any problem, but during performance tuning of our code we suddenly and reproducibly encountered segmentation faults in the com.sleepycat.je.recovery.Checkpointer thread. We analysed more than 10 VM crashes, and ALL crashes were related to the Checkpointer thread. (Linux 64 bit, JDK 1.5.0_013 / 1.6.0_06)
    After we removed our performance tuning (only application logic – nothing BDB-related) the VM crashes disappeared – so it looks like some kind of problem which only occurs under high load conditions. The next thing we tried was enabling our performance tuning and setting the Checkpointer configuration to its default (write a checkpoint after 20MB); before that we used time-based checkpointing with a 60 sec interval.
    With Default Checkpointer configuration there were no VM-crashes but performance was very slow!
    Finally my question: What Checkpointer configuration would you recommend for an environment that is under very high load with regard to delete operations? Would you recommend using “high priority Checkpointing”? Would you recommend time- or data-based checkpointing?
    It looks like Checkpointing every 60 seconds was not a good idea but 20MB seems to be too slow.
    This leads to another question: What does a longer Checkpointing interval mean for recovery time? Does recovery take substantially longer if checkpointing is done less fequently?
    Any help would be very much apprechiated.
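    For reference, the trade-off described above can be expressed directly in JE environment parameters. The fragment below is only an illustrative sketch (the values are placeholders, not recommendations, and should be validated against the JE 3.3 documentation for your release):

    ```properties
    # Data-based checkpointing: run a checkpoint after this many bytes of
    # log have been written (the default discussed above is 20MB; a larger
    # value means fewer checkpoints but a longer recovery after a crash).
    je.checkpointer.bytesInterval=104857600

    # Alternatively, time-based checkpointing (a wakeup interval, as with
    # the 60 sec setting described above); when this is used, the bytes
    # interval is not the trigger.
    # je.checkpointer.wakeupInterval=60000000

    # "High priority checkpointing": the checkpointer contends less with
    # busy application threads for latches.
    je.checkpointer.highPriority=true
    ```

    The same parameters can also be set programmatically through EnvironmentConfig.setConfigParam before opening the environment.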
    Here are the two hs_err_pid dumps:
    # An unexpected error has been detected by Java Runtime Environment:
    # SIGSEGV (0xb) at pc=0x00002af3cbe47a27, pid=29834, tid=1170757984
    # Java VM: Java HotSpot(TM) 64-Bit Server VM (10.0-b23 mixed mode linux-amd64)
    # Problematic frame:
    # V [libjvm.so+0x5d8a27]
    # If you would like to submit a bug report, please visit:
    # http://java.sun.com/webapps/bugreport/crash.jsp
    --------------- T H R E A D ---------------
    Current thread (0x00002aad7cd06800): JavaThread "Checkpointer" daemon [_thread_in_vm, id=30798, stack(0x0000000045b85000,0x0000000045c86000)]
    siginfo:si_signo=SIGSEGV: si_errno=0, si_code=1 (SEGV_MAPERR), si_addr=0x0000000000000010
    Registers:
    RAX=0x00002aad7c942d80, RBX=0x00002aad7cd06800, RCX=0x00002aad7c943168, RDX=0x00002aad7c942d90
    RSP=0x0000000045c84720, RBP=0x0000000045c84730, RSI=0x00002aabf193e230, RDI=0x0000000000000000
    R8 =0x00002aad7dcf4380, R9 =0x000000000000784e, R10=0x00002af3cbb8e840, R11=0x0000000045c84810
    R12=0x0000000000000000, R13=0x0000000045c84780, R14=0x0000000045c84838, R15=0x00002aad7cd06800
    RIP=0x00002af3cbe47a27, EFL=0x0000000000010206, CSGSFS=0x0000000000000033, ERR=0x0000000000000004
    TRAPNO=0x000000000000000e
    Top of Stack: (sp=0x0000000045c84720)
    0x0000000045c84720: 00002aad7cd06800 0000000000000000
    0x0000000045c84730: 0000000045c84740 00002af3cbce3a9d
    0x0000000045c84740: 0000000045c847b0 00002af3cbb8e896
    0x0000000045c84750: 00002aad7dcf4380 00002aad7c942d80
    0x0000000045c84760: 00002aad7c942d90 00002aad7c943168
    0x0000000045c84770: 00002aad7cd06800 00002aad7cd06800
    0x0000000045c84780: 00002aad7cd06800 0000000000000000
    0x0000000045c84790: 000000002f7e8112 00002aabf193e1f9
    0x0000000045c847a0: 0000000000000000 00002aad5fede569
    0x0000000045c847b0: 0000000045c84810 00002aaaab25a0c7
    0x0000000045c847c0: 00002aaaab253070 00002aaaab25a08b
    0x0000000045c847d0: 0000000045c847d0 00002aad5fede569
    0x0000000045c847e0: 0000000045c84838 00002aad5fedef50
    0x0000000045c847f0: 00002aad601fedc0 00002aad5fede5b0
    0x0000000045c84800: 0000000000000000 0000000045c84840
    0x0000000045c84810: 0000000000010001 00002aaaab854744
    0x0000000045c84820: 0000000000000000 0000000000000000
    0x0000000045c84830: 0000000000000000 0000000000000000
    0x0000000045c84840: 00002aad7cd06800 00002aaacd77a138
    0x0000000045c84850: 00002aaaf27a8c90 00002aac00000001
    0x0000000045c84860: 00002aaa00000000 00002af3cbd73369
    0x0000000045c84870: 0000000145c848b0 000000002f7e8112
    0x0000000045c84880: 00002aad7cd06800 00002aaaf39de2f0
    0x0000000045c84890: 00002aabbee886d0 00002aab35552900
    0x0000000045c848a0: 00002aaacd77a3c8 00002aaaf573a448
    0x0000000045c848b0: 00002aaaee5aaab8 00002aaacd77a160
    0x0000000045c848c0: 00002aab355887f0 00002aaaee643748
    0x0000000045c848d0: 00002aaacd77a160 00002aac6a858558
    0x0000000045c848e0: 0000000000000000 00002aaacd77a380
    0x0000000045c848f0: 00002aad0000005b 0000000000000001
    0x0000000045c84900: 0000000000000001 00002aaacd779328
    0x0000000045c84910: 00002aabad0b62d8 00002aaaab7589c0
    Instructions: (pc=0x00002af3cbe47a27)
    0x00002af3cbe47a17: 89 f0 eb ea 90 66 66 66 90 55 48 89 e5 41 54 53
    0x00002af3cbe47a27: 0f b7 57 10 48 89 fb 44 8d 62 01 49 63 fc e8 06
    Stack: [0x0000000045b85000,0x0000000045c86000], sp=0x0000000045c84720, free space=1021k
    Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
    V [libjvm.so+0x5d8a27]
    V [libjvm.so+0x474a9d]
    V [libjvm.so+0x31f896]
    v ~BufferBlob::Interpreter
    Java frames: (J=compiled Java code, j=interpreted, Vv=VM code)
    v ~BufferBlob::Interpreter
    J com.sleepycat.je.recovery.Checkpointer.flushIN(Lcom/sleepycat/je/dbi/EnvironmentImpl;Lcom/sleepycat/je/dbi/DatabaseImpl;
    Lcom/sleepycat/je/log/LogManager;Lcom/sleepycat/je/recovery/Checkpointer$CheckpointReference;Lcom/sleepycat/je/recovery/DirtyINMap;IIZJZLcom/sleepycat/je/recovery/Checkpointer$FlushStats;
    Lcom/sleepycat/je/cleaner/LocalUtilizationTracker;Z)V
    J com.sleepycat.je.recovery.Checkpointer.flushDirtyNodes(Lcom/sleepycat/je/dbi/EnvironmentImpl;Lcom/sleepycat/je/recovery/DirtyINMap;
    Ljava/util/Map;ZJZLcom/sleepycat/je/recovery/Checkpointer$FlushStats;)V
    v ~BufferBlob::Interpreter
    v ~BufferBlob::Interpreter
    v ~BufferBlob::Interpreter
    v ~BufferBlob::Interpreter
    v ~BufferBlob::StubRoutines (1)
    # An unexpected error has been detected by Java Runtime Environment:
    # SIGSEGV (0xb) at pc=0x00002b00816f2a27, pid=32743, tid=1162336608
    # Java VM: Java HotSpot(TM) 64-Bit Server VM (10.0-b23 mixed mode linux-amd64)
    # Problematic frame:
    # V [libjvm.so+0x5d8a27]
    # If you would like to submit a bug report, please visit:
    # http://java.sun.com/webapps/bugreport/crash.jsp
    --------------- T H R E A D ---------------
    Current thread (0x00002aad7fa1c000): JavaThread "Checkpointer" daemon [_thread_in_vm, id=32646, stack(0x000000004537d000,0x000000004547e000)]
    siginfo:si_signo=SIGSEGV: si_errno=0, si_code=1 (SEGV_MAPERR), si_addr=0x0000000000000010
    Registers:
    RAX=0x00002aad8082f9b0, RBX=0x00002aad7fa1c000, RCX=0x00002aad8082fd98, RDX=0x00002aad8082f9c0
    RSP=0x000000004547c5e0, RBP=0x000000004547c5f0, RSI=0x00002aab73edc788, RDI=0x0000000000000000
    R8 =0x00002aad7fa08d30, R9 =0x0000000000007f86, R10=0x00002b0081439840, R11=0x000000004547c6d0
    R12=0x00002aaca950fcc8, R13=0x000000004547c640, R14=0x000000004547c6f8, R15=0x00002aad7fa1c000
    RIP=0x00002b00816f2a27, EFL=0x0000000000010202, CSGSFS=0x0000000000000033, ERR=0x0000000000000004
    TRAPNO=0x000000000000000e
    Top of Stack: (sp=0x000000004547c5e0)
    0x000000004547c5e0: 00002aad7fa1c000 00002aaca950fcc8
    0x000000004547c5f0: 000000004547c600 00002b008158ea9d
    0x000000004547c600: 000000004547c670 00002b0081439896
    0x000000004547c610: 00002aad7fa08d30 00002aad8082f9b0
    0x000000004547c620: 00002aad8082f9c0 00002aad8082fd98
    0x000000004547c630: 00002aad7fa1c000 00002aad7fa1c000
    0x000000004547c640: 00002aad7fa1c000 00002aaca950fcc8
    0x000000004547c650: 000000004547c540 00002aab73edc751
    0x000000004547c660: 00002aaca950fcc8 00002aad5fef5d49
    0x000000004547c670: 000000004547c6d0 00002aaaab25a0c7
    0x000000004547c680: 00002aaaab253070 00002aaaab25a08b
    0x000000004547c690: 000000004547c690 00002aad5fef5d49
    0x000000004547c6a0: 000000004547c6f8 00002aad5fef6730
    0x000000004547c6b0: 00002aad600eed48 00002aad5fef5d90
    0x000000004547c6c0: 0000000000000000 000000004547c700
    0x000000004547c6d0: 0000000000010001 00002aaaaba46ac0
    0x000000004547c6e0: 0000000000000000 0000000000000000
    0x000000004547c6f0: 0000000000000000 0000000000000000
    0x000000004547c700: 0001000100010005 00002aaa00000001
    0x000000004547c710: 0010244e00432762 00002aab2ef43ff0
    0x000000004547c720: 00002aaaf0635a00 00002aac5694a460
    0x000000004547c730: 0000000100000000 00002aad7fa1c000
    0x000000004547c740: 00002aabe0162870 00002aaaf0761bf0
    0x000000004547c750: 00002aaaf07f20d0 00002aaadd628058
    0x000000004547c760: 00002aaadd628ee0 00002aaa00000001
    0x000000004547c770: 00002aaaeec996b8 0000000000000000
    0x000000004547c780: 00002aab2ecc9890 00002aad00000000
    0x000000004547c790: 00002aaadd627af0 00002aac5bc259c8
    0x000000004547c7a0: 0000000000000000 00002aaadd628010
    0x000000004547c7b0: 0000000000000001 00002aaaee4f8870
    0x000000004547c7c0: 00000030f1218c01 00002aaae5222d38
    0x000000004547c7d0: 0000000000010001 00002aaaab6cc928
    Instructions: (pc=0x00002b00816f2a27)
    0x00002b00816f2a17: 89 f0 eb ea 90 66 66 66 90 55 48 89 e5 41 54 53
    0x00002b00816f2a27: 0f b7 57 10 48 89 fb 44 8d 62 01 49 63 fc e8 06
    Stack: [0x000000004537d000,0x000000004547e000], sp=0x000000004547c5e0, free space=1021k
    Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
    V [libjvm.so+0x5d8a27]
    V [libjvm.so+0x474a9d]
    V [libjvm.so+0x31f896]
    v ~BufferBlob::Interpreter
    Java frames: (J=compiled Java code, j=interpreted, Vv=VM code)
    v ~BufferBlob::Interpreter
    J com.sleepycat.je.recovery.Checkpointer.flushIN(Lcom/sleepycat/je/dbi/EnvironmentImpl;
    Lcom/sleepycat/je/dbi/DatabaseImpl;Lcom/sleepycat/je/log/LogManager;
    Lcom/sleepycat/je/recovery/Checkpointer$CheckpointReference;Lcom/sleepycat/je/recovery/DirtyINMap;
    IIZJZLcom/sleepycat/je/recovery/Checkpointer$FlushStats;Lcom/sleepycat/je/cleaner/LocalUtilizationTracker;Z)V
    J com.sleepycat.je.recovery.Checkpointer.flushDirtyNodes(Lcom/sleepycat/je/dbi/EnvironmentImpl;Lcom/sleepycat/je/recovery/DirtyINMap;Ljava/util/Map;ZJZLcom/sleepycat/je/recovery/Checkpointer$FlushStats;)V
    v ~BufferBlob::Interpreter
    v ~BufferBlob::Interpreter
    v ~BufferBlob::Interpreter
    v ~BufferBlob::Interpreter
    v ~BufferBlob::StubRoutines (1)

    Hello,
    I know your request is a bit old, but I have exactly the same error on a JBoss server, and no hs_err_pid file was created...
    Java VM: Java HotSpot(TM) 64-Bit Server VM (10.0-b23 mixed mode linux-amd64)
    # Problematic frame:
    # V [libjvm.so+0x5d8a27]
    Do you remember how you solved your problem?
    regards
    Arnaud

  • Deleting VM with snapshot/checkpoint is extremely slow in SCVMM 2012 R2

    It appears that deleting a VM with checkpoints from SCVMM 2012 R2 is extremely slow.
    Here is the configuration:
    one SCVMM 2012 R2 running on Windows Server 2012 R2
    one Windows Server 2012 R2 as file share server (SMB), which stores all VM files. The share is managed by SCVMM, uses a local SSD disk, and has two gigabit Ethernet ports connected to a switch.
    two Hyper-V hosts running Windows Server 2012 R2, non-clustered, managed by SCVMM; each host has three gigabit Ethernet ports connected to the switch.
    Everything works, including live migration. Here is the problem:
    When I deleted a VM with checkpoints, SCVMM/Hyper-V appeared to merge something: it spent a lot of time reading each AVHDX and writing to the VHDX, which results in the following problems:
    1. It took a lot of time and bandwidth. I am about to delete the VM (and all its files), so why does SCVMM/Hyper-V want to merge all the files? The deletion took 29 minutes for a 36GB VM. This VM is just a plain standalone VM, with no use of templates.
    The expected result is to just delete all VM files and remove the VM without merging. (BTW: VMware doesn't merge; its deletion is done in a few seconds.)
    2. Although all data files are on the file share, the Hyper-V host still reads each file from the file share, then writes to another file on the same file share. This consumes significant network bandwidth. Can they offload the work to the file server?
    The expected result is that Hyper-V offloads the transfer to the file share server. It should not create a huge amount of network I/O.
    3. The merging operation is incredibly slow: around 24 MByte/sec of data being copied.
    The Hyper-V host has 2x gigabit links, benchmarks show that file share access can easily reach 200 MByte/sec, and the file server side is an SSD drive with 200+ MByte/sec of bandwidth. The expected result is that if Hyper-V ever merges, the speed is at least 100 MByte/sec and hopefully up to 200 MByte/sec. 21 MB/s is just painfully slow.
    Did I miss anything? 

    Thanks for the info. Even if there are use cases where merging is necessary, there are plenty of use cases where merging is overkill -- can Hyper-V add a CLI option that deletes a VM without merging?
    If such an option won't be available soon, can SCVMM do this when deleting a VM:
    1. revert to the latest checkpoint, delete the checkpoint
    2. repeat step 1 until all checkpoints are removed
    3. delete the VM.
    This workaround avoids Hyper-V's merging. It is not as clean as VMware's approach (they just delete the VM files, with no extra merge), but at least it is much faster and consumes fewer resources than merging.
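    The three steps above could be scripted along these lines. This is a rough, untested pseudocode-style sketch against the VMM PowerShell cmdlets; the VM name is a placeholder and the exact cmdlet behavior (in particular whether reverting before deleting really suppresses the merge) is an assumption to verify on your VMM version:

    ```
    # Hypothetical sketch: revert to and delete checkpoints newest-first,
    # then remove the VM, so Hyper-V never merges AVHDX files into the parent VHDX.
    $vm = Get-SCVirtualMachine -Name "MyVM"   # "MyVM" is a placeholder

    # Step 1 and 2: repeat until no checkpoints remain
    while ($cp = Get-SCVMCheckpoint -VM $vm | Select-Object -Last 1) {
        Restore-SCVMCheckpoint -VMCheckpoint $cp   # revert to the latest checkpoint
        Remove-SCVMCheckpoint -VMCheckpoint $cp    # then delete it
        $vm = Get-SCVirtualMachine -Name "MyVM"    # refresh the VM object
    }

    # Step 3: delete the VM itself
    Remove-SCVirtualMachine -VM $vm
    ```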
