ESEUTIL /P recovery time

My Exchange 2010 server lost power before it could perform a clean shutdown, and some of the log files were corrupted as it came back online. I tried every safe-recovery option up my sleeve but finally decided to run eseutil /p and hope for the best.
My Exchange database file is well over 500 GB (about 540 GB, to be more exact). While I expected the process to take more than a few hours to complete, we're going on hour 23 and are at what I believe is the last step: Deleting MSysLocales. I know that after this process I'll have to run a defragmentation and then, finally, an integrity check, but management is concerned about email being down for a full day.
I'd like to sit down with management and give them a better idea of how long this process is going to take, and I'm hoping my math is wrong: the current step has processed about 14 GB every 15 minutes, which would indicate I have another 9-ish hours to go.
And then for the defragmentation, which research shows processes at about 3-5 GB/h, I'm looking at an additional 110 hours?! Please tell me my math is off.
Any reassurance that we're almost out of the woods would be greatly appreciated.
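The arithmetic above can be sanity-checked with a short script (the rates are the ones quoted in the question, not measured guarantees; actual throughput depends on hardware):

```python
# Sanity-check of the estimates above. Rates are taken from the question
# (14 GB per 15 min for the current repair step; 3-5 GB/h for defrag)
# and are assumptions, not guarantees.

DB_SIZE_GB = 540

def hours_remaining(size_gb, rate_gb_per_hour):
    """Hours to process size_gb at a constant throughput."""
    return size_gb / rate_gb_per_hour

repair_rate = 14 / 0.25  # 14 GB per 15 minutes = 56 GB/h

print(f"repair step: ~{hours_remaining(DB_SIZE_GB, repair_rate):.1f} h")
print(f"defrag: {hours_remaining(DB_SIZE_GB, 5):.0f}-"
      f"{hours_remaining(DB_SIZE_GB, 3):.0f} h at 3-5 GB/h")
```

So the 9-ish hours for the current step checks out, and the defragmentation lands in a 108-180 hour range, meaning the 110-hour figure is, unfortunately, at the optimistic end.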

Hi,
Based on my knowledge, eseutil /p runs at approximately 3 to 6 GB per hour, and offline defragmentation (eseutil /d) at approximately 9 GB per hour. These numbers are for reference only; the exact figures depend on your hardware and production environment.
The Isinteg pass after the repair also runs at approximately 3 to 6 GB per hour. (These rates are averages; performance varies with how many passes the repair has to make on your database and with the speed of the hardware.)
Related articles here:
http://support.microsoft.com/kb/259851
http://support.microsoft.com/kb/192185
Best regards,
Belinda Ma
TechNet Community Support

Similar Messages

  • How to estimate recovery time from rman backup

How do we estimate the recovery time from an RMAN backup?
The database is 800 GB in size.
The approximate time to back it up is 8-10 hours.

    "Recovery time" is : RESTORE DATABASE time + RECOVER DATABASE time.
    RESTORE can be done
    a. in parallel with multiple channels if your backup was created as multiple backupsets
    b. using incremental backups
    c. whether the backups are on
    i. disk
    ii. tape
Alternatively, if you have a Recovery Area with an Incrementally Updated backup of the database, you can SWITCH DATABASE almost immediately
    RECOVER depends on
    a. how many archivelogs (in terms of size, not necessarily number) need to be applied
    -- this depends on
    i. transaction volume
    ii. the "freshness" of the backups (how recent the backups are)
    b. whether required archivelogs are available on disk (which can be the case if the database backup itself is a very recent backup and archivelogs since then haven't been deleted {whether they have been backed up or not}) OR whether they need to be restored from
    i. disk backup
    ii. tape backup
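As a rough illustration of how the RESTORE factors above combine (the per-channel throughput is an invented placeholder; measure your own environment):

```python
# Rough restore-time model for the factors listed above. The per-channel
# throughput is an assumed placeholder, not a benchmark.

def restore_hours(size_gb, gb_per_hour_per_channel, channels=1):
    """Parallel channels only help if the backup was written as
    multiple backupsets (point a above)."""
    return size_gb / (gb_per_hour_per_channel * channels)

# The 800 GB database from the question, at an assumed 100 GB/h/channel:
one_channel = restore_hours(800, 100, 1)    # 8.0 h
four_channels = restore_hours(800, 100, 4)  # 2.0 h
```

The RECOVER phase (archivelog apply) then adds on top of this, driven by transaction volume and backup freshness as listed above.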

  • Recovery Time of Database

    Dear All
    I have a database server with the following configuration:
    Total Storage :(1.2 Tera Bytes)
    Total Tablespace Size (250 GB)
    Type of Processor :3.1 GHz
    Number of Processor : 2
    RAM : 8 GB
    I want to setup a backup policy by configuring catalog server and want to take weekly full database backup and daily incremental level 1 backup.(control file auto backup). The database is in archive log mode
    Please let me know the possible recovery time of control file, log file and data file in case of database failure from backup
    Thanks and Regards,

10g has very good new features in RMAN: the flash recovery area helps manage the size of your backup location automatically, plus block change tracking and incremental backups.
Some new features:
    http://www.oracle-base.com/articles/10g/RMANEnhancements10g.php
    check this page, has lots of examples you might want to pick one
    http://download-west.oracle.com/docs/cd/B14117_01/server.101/b10734/rcmbackp.htm#1006452

  • Help!!! How to get the recovery time of transient response of a power supply with Labview basic package without analysis option?

How can I get the recovery time of the transient response of a power supply with the LabVIEW base package, without the analysis option? Does anyone have an idea or similar function subVIs?
Recovery time of the transient response is defined as the time from the beginning of the transient to the point where the waveform has fallen to within 10% of the overshoot. The waveform is something like a pulse with a soft slope.

    I recommend plotting your data on a graph on paper. Take a look at the data, and determine what is unique about the point you are looking for. Look for how you can teach your program to look for this point.
    I have written several algorithms that do similar, one in fact being for a power supply, the other being for RPM. Neither algorithm used any advanced analysis tools. They are just a matter of determining, mathematically, when you achieve what you are looking for. Just sit down with your graph (I recommend multiple copies) and draw horizontal and vertical lines that determine when you get to the point you are looking for. You are probably going to have to reverse the array and start from the end, so think in those terms.
If you have trouble, email me a bitmap of the graph and what you are looking for, and I will try to be of further assistance. Don't do that, however, until you have given this a few tries. Your solution should involve a lot of logic on analog levels.
    Good luck
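A sketch of that scan-from-the-end idea in code (the function name, the sample waveform, and the edge-case handling are illustrative assumptions, not a LabVIEW VI):

```python
# Sketch of the "start from the end" approach suggested above.
# The 10 % threshold follows the definition in the question; the
# sample data below is made up for illustration.

def recovery_time(t, v, v_final, t_start):
    """Time from t_start until v has permanently settled to within
    10 % of the peak deviation (overshoot) from v_final."""
    peak = max(abs(x - v_final) for x in v)  # overshoot magnitude
    band = 0.10 * peak
    # Scan from the end: the last sample still outside the band marks
    # the end of the transient; recovery occurs at the next sample.
    for i in range(len(v) - 1, -1, -1):
        if abs(v[i] - v_final) > band:
            if i == len(v) - 1:
                return None  # never settles within the captured data
            return t[i + 1] - t_start
    return t[0] - t_start  # already settled at the first sample

# Example: a step that overshoots to 6.0 V and settles back to 5.0 V
t = [0, 1, 2, 3, 4, 5]               # ms
v = [5.0, 6.0, 5.5, 5.2, 5.05, 5.0]  # V
```

With this data the last sample outside the 10% band is at t = 3 ms, so the recovery point is the following sample at t = 4 ms.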

  • Database Recovery Time

    Hi
    How to calculate Database recovery time in 10g? On what factors does it depend on?
    Regards
    JIL

    JIL wrote:
    Hi
    How to calculate Database recovery time in 10g? On what factors does it depend on?
    Regards
JIL
It depends on:
(1) Your backup strategy.
(2) How much work you need to do during recovery, e.g. the number of archive log files/incremental backups needed for recovery.
(3) Whether the required archive log files are on disk or on tape.
(4) Whether you store the backup as a file copy or a backupset.
(5) And, most important, your expertise in backup and recovery.

  • Alternative solutions to reduce recovery times ?

Currently, recovery times are excessively long when we need to restore an individual mailbox or messages. If, for example, we wanted to restore a particular folder or review the contents of a backup, we would currently restore the mailbox to a temp database and
replay logs. Some of the mailboxes are very large.
    What is recommended backup/recovery solution when it comes to Exchange 2010 ?  also are there any 3rd party tools which allow you to view the contents of the backup without needing to restore the database ?
    Thanks in advance.

1. Yes, if you had the capacity you could store the backups on disk for rapid granular recovery. It's a balance between convenience, time and resources.
    2. I would say most people deal with it via the method provided via their backup provider or by restoring EDB's on an as needed basis and using the MS Recovery Database method.
3. So if you drill down to the foundation of all this, granular recoveries require access to the native EDB that was backed up OR to the granular data backup set. I.e., the entire data set, explicitly or by way of an automated process, MUST get restored to disk first, and then the needed data gets extracted from that copy. Of course you could use the brick-level backup method if you can find a vendor that still uses it, but I wouldn't recommend it because, well, in short, it just sucks.
3.A: However, as stated in my post above, the Single Item Recovery method might be just the thing to eliminate this issue in a large number of cases, i.e. let's say most of your restores happen within a 90-day window of the data being created. If you turned on SIR, it would take more space, but you would have all the data available for restoration; for the outliers you just use the normal method of restoring the DB to disk first and then recovering.
4. Logically, you are probably wondering why you can't just open the backup and extract what you want. The short answer is that when an EDB is backed up, it is in a dirty state (uncommitted logs), so in order to open that EDB and get access to the data you must first restore the EDB and logs and then commit the logs to the DB. If that happened in place, the EDB would no longer be exactly what you backed up, i.e. you would be modifying the backed-up files; hence the reason they restore the EDB and logs first, so that you are only working with a copy. That said, if a backup provider wanted to allow for this, there are methods to make it happen. For example: A: create a virtual rollup system that gives you access to the source files but, when done, throws away all changes (log commits etc.) so that the source files remain pristine; or B: allow a tool like our DigiScope product to access the files without bringing them into a mountable state, i.e. no log commit when being accessed. We have a function called forensic mount that allows this, which works great if you don't have the logs or don't want to modify the original DB by applying logs. All that aside, the vendor would have to open that up for us, and alas there is no value or reason for them to do so. Also, none of the backup providers have a true EDB reader like DigiScope, Kroll or Quest (the top 3 EDB tools). If they developed or licensed that technology, they could make it happen.

  • Sub minute recovery time?

Has anybody heard of a high-availability solution for SAP on SQL Server that allows recovery times under 1 minute?
I know SAP does not certify partners' high-availability solutions, so I think this has to be a hardware solution from an SAP partner (HP, IBM, etc.).
As far as I know, every HA solution needs some kind of failover for the database engine and the SCS, and database failover will always take on the order of tens of minutes.
I would appreciate any guidance.

    We have SQL 2008 (Release Candidate 0) Mirroring in place (same as SQL 2005, but SQL 2005 is more stable of course, since it is a released product).  Our mirroring mode is synchronous with automatic failover.
    At SAP level, we have setup database mirroring via supported methods.  Just modify the database reference in your profiles, from a single database name to:
    SAPDBHOST = primarydbserver;Failover_Partner=secondarydbserver
In the event of a DB failure, the failover at the DB level takes less than 5 seconds. SAP work processes will error but will reconnect successfully. The average failover takes less than 1 minute in my experience.
If you are looking for sub-1-minute recovery, do not, I repeat, do not use MSCS. The fastest I have ever seen any MSCS cluster recover is around 5 minutes. MSCS is affected by IPSEC, network name replication, etc., because the name is moved from one physical server to another.
With SQL mirroring there are no name changes; SAP simply points to the other server.

  • Collisions: Recovery Time

Right now I have, in my game, a numeric health counter beginning at 100.
I have a collision set up so that on contact, the health is reduced by 5.
However, I need a recovery timer for the player so he doesn't keep getting hit for 5 every millisecond.
How do you set up a timer so that the collision has to wait 2 seconds before the player can be hit again?

I would probably handle the score setting with a function. Something like this (ActionScript 3; in AS2, drop the import since getTimer() is global there):
import flash.utils.getTimer;
var score:Number = 100;
var hitDelay:Number = 2000;  // minimum ms between score changes
var lastHitTime:Number = 0;
var maxScore:Number = 200;
function setScore(n:Number):void {
    // Ignore changes that arrive within the recovery window
    if ((getTimer() - lastHitTime) < hitDelay) {
        return;
    }
    lastHitTime = getTimer();
    score += n;
    if (score <= 0) {
        trace("Dead!!!");
    }
    if (score > maxScore) {
        score = maxScore;
    }
}
Then every time you want to adjust the score you just call the function:
setScore(-5); // some kind of negative hit
setScore(5);  // ate a power-up bar or something...
That way you put all the logic and testing in one place. It makes it easier to maintain and adjust as you work out exactly what you want to do.

  • How to reduced instance recovery time

Hello friends, how are you all? I hope you are all fine. Friends, I want to reduce the instance recovery time, but I don't know which views or parameters could help me. Please tell me how I can reduce the instance recovery time.
    Thanks & Best Wishes

    Hi,
To reduce instance recovery time, first check your DB size and how many users connect to it.
My advice: do not set the instance recovery target too aggressively, as it may decrease run-time performance.
The usual procedure:
-) Checkpoint more frequently (this shortens instance recovery but can decrease run-time performance)
-) Set the FAST_START_MTTR_TARGET parameter
-) Size the online redo log files appropriately
-) Implement manual checkpoints
-) Reduce the log_buffer size

  • Hot Backup - Recovery Time

    I have 2 questions on hot backup concept please:
    Let's say I started hot backup at 9 AM and finished at 11 AM.
    1) I can recover database to the point of 9 AM and on (recover until time) as long as I have all archive logs generated between 9 and 11 AM, right?
    2) I can recover database up to 11:30 as well if I have archive logs between 9 AM and 11:30 Am?
    Please correct me if I am wrong.
    Thanks,

    1) Depends
    2) YES
    Oh, I have an idea....why don't you test this as well. It will give you good practice and experience.
    Regards
    Tim
Well, I guess it all depends on how you look at it. OK, in simple terms, let us say you have two datafiles, A and B. If your backup of datafile A begins at 9:15 and completes at 9:25, and your backup of datafile B starts at 9:35 and ends at 10:00, what does that mean for a restore to a point in time at 9:30? Well, if you are only recovering datafile A, then you are good. However, if you are trying to recover datafile B, then you are out of luck.
    Here is the test case. I backed up my database with level 0 and it ran between '2010_09_13_14:24:36' and '2010_09_13_14:25:17'
    RMAN> run {
    2> set until time '2010_09_13_14:25:00';
    3> restore database;
    4> recover database;
    5> }
    executing command: SET until clause
    using target database control file instead of recovery catalog
    Starting restore at 2010_09_13_14:48:22
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: sid=154 devtype=DISK
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of restore command at 09/13/2010 14:48:23
    RMAN-06026: some targets not found - aborting restore
    RMAN-06023: no backup or copy of datafile 4 found to restore
    RMAN-06023: no backup or copy of datafile 2 found to restore
    RMAN> run {
    2> set until time '2010_09_13_14:25:00';
    3> restore datafile 3;
    4> }
    executing command: SET until clause
    Starting restore at 2010_09_13_14:59:32
    using channel ORA_DISK_1
    channel ORA_DISK_1: restoring datafile 00003
    input datafile copy recid=32 stamp=729613500 filename=/u08/flash_recovery_area/PRIMETEST/datafile/o1_mf_sysaux_68wv3n4w_.dbf
    destination for restore of datafile 00003: /u04/oradata/primetest/primetes/sysaux01.dbf
    channel ORA_DISK_1: copied datafile copy of datafile 00003
    output filename=/u04/oradata/primetest/primetes/sysaux01.dbf recid=37 stamp=729615575
    Finished restore at 2010_09_13_14:59:35
    RMAN> recover database;
    Starting recover at 2010_09_13_14:59:51
    using channel ORA_DISK_1
    starting media recovery
    media recovery complete, elapsed time: 00:00:00
    Finished recover at 2010_09_13_14:59:51
    RMAN> run {
    2> set until time '2010_09_13_14:25:00';
    3> restore datafile 4;
    4> }
    executing command: SET until clause
    Starting restore at 2010_09_13_15:00:11
    using channel ORA_DISK_1
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of restore command at 09/13/2010 15:00:11
    RMAN-06026: some targets not found - aborting restore
    RMAN-06023: no backup or copy of datafile 4 found to restore
    Edited by: Tim Boles on Sep 13, 2010 11:49 AM
    Read, learn, test and share
    Tim Boles
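Tim's datafile A/B example can be restated as a tiny rule of thumb (the times and file names here are the ones from his example, nothing more):

```python
# Restatement of the point demonstrated above: a full-database
# point-in-time restore from a hot backup needs every datafile's
# backup to have completed by the target time, so the earliest
# usable full-restore target is the completion time of the slowest
# datafile backup. Times are from Tim's A/B example.

backup_end = {"A": "09:25", "B": "10:00"}  # per-datafile backup end

def earliest_full_restore(ends):
    # Lexical comparison works for zero-padded HH:MM strings.
    return max(ends.values())

def restorable(target, ends):
    """Can every listed datafile be restored to `target`?"""
    return all(end <= target for end in ends.values())
```

So restoring only datafile A to 9:30 is fine, but a whole-database restore to 9:30 fails exactly as the RMAN-06023 errors above show.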

  • Doubt about database point in time recovery using rman

    Hi Everyone,
I have been practising various RMAN restore and recovery scenarios, and I have a doubt regarding database point-in-time recovery using RMAN. Imagine I have a full database backup, including the controlfile, scheduled to run at 10 PM every day. Today is 20 Dec 2013. Imagine I want to restore the database to a prior point in time (say 18 Dec, up to 8 AM). I would restore all the datafiles from the backup taken on the night of the 17th and apply archives up to 8 AM on 18 Dec. In this scenario, should I also restore the controlfile from the 17 Dec backup (I am assuming yes), or can we use the current controlfile (assuming it is intact)? I found the below in the Oracle docs.
    Performing Point-in-Time Recovery with a Current Control File
    The database must be closed to perform database point-in-time recovery. If you are recovering to a time, then you should set the time format environment variables before invoking RMAN. The following are sample Globalization Support settings:
    NLS_LANG = american_america.us7ascii
    NLS_DATE_FORMAT="Mon DD YYYY HH24:MI:SS"
    To recover the database until a specified time, SCN, or log sequence number:
    After connecting to the target database and, optionally, the recovery catalog database, ensure that the database is mounted. If the database is open, shut it down and then mount it:
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
    Determine the time, SCN, or log sequence that should end recovery. For example, if you discover that a user accidentally dropped a tablespace at 9:02 a.m., then you can recover to 9 a.m.--just before the drop occurred. You will lose all changes to the database made after that time.
    You can also examine the alert.log to find the SCN of an event and recover to a prior SCN. Alternatively, you can determine the log sequence number that contains the recovery termination SCN, and then recover through that log. For example, query V$LOG_HISTORY to view the logs that you have archived. 
    RECID      STAMP      THREAD#    SEQUENCE#  FIRST_CHAN FIRST_TIM NEXT_CHANG
             1  344890611          1          1      20037 24-SEP-02      20043
             2  344890615          1          2      20043 24-SEP-02      20045
             3  344890618          1          3      20045 24-SEP-02      20046
    Perform the following operations within a RUN command:
    Set the end recovery time, SCN, or log sequence. If specifying a time, then use the date format specified in the NLS_LANG and NLS_DATE_FORMAT environment variables.
    If automatic channels are not configured, then manually allocate one or more channels.
    Restore and recover the database.
      The following example performs an incomplete recovery until November 15 at 9 a.m. 
    RUN
      SET UNTIL TIME 'Nov 15 2002 09:00:00';
      # SET UNTIL SCN 1000;       # alternatively, specify SCN
      # SET UNTIL SEQUENCE 9923;  # alternatively, specify log sequence number
      RESTORE DATABASE;
      RECOVER DATABASE;
    If recovery was successful, then open the database and reset the online logs:
ALTER DATABASE OPEN RESETLOGS;
I did not quite understand why the above scenario uses the current controlfile, as the checkpoint SCN in the current controlfile and the checkpoint SCNs in the datafile headers do not match after the restore and recovery. Thanks in advance for your help.
    Thanks
    satya

Thanks for the reply... but what about the checkpoint SCN in the controlfile? My understanding is that unless the checkpoint SCN in the controlfile matches the ones in the datafiles, the database will not open. So assuming the checkpoint SCN in my current controlfile is 1500 and I want to recover my database to SCN 1200, the SCN in the datafiles (1200) does not match the SCN in the controlfile (1500). Will the database open in such cases?
    Thanks
    Satya

  • DB Recovery takes a long time even after checkpointing

We observe that DB recovery takes a long time after application recovery, even though we take checkpoints at frequent (30-second) intervals. These are the parameters we use to open the environment:
    env_flags =
    DB_CREATE | /* Create the environment if it does not exist */
    DB_RECOVER | /* Run normal recovery. */
    DB_INIT_LOCK | /* Initialize the locking subsystem */
    DB_INIT_LOG | /* Initialize the logging subsystem */
    DB_INIT_TXN | /* Initialize the transactional subsystem. This
    * also turns on logging. */
    DB_INIT_MPOOL | /* Initialize the memory pool (in-memory cache) */
    DB_THREAD | /* Cause the environment to be free-threaded */
    DB_REGISTER |
    DB_INIT_REP | /* Cause the environment to be replicated */
    DB_SYSTEM_MEM ;
    Our understanding was that recovery time should be proportional to the checkpoint interval. Can anyone please give some insights into what might be happening and how the recovery process works. It seems like DB is going back far in time while traversing the logs. We do not remove any log files as part of our regular process flows.
    Secondly, we observe that recovery times increase proportionately with the size of our database even with checkpointing. Any insights into this are also welcome.
    We are using DB 4.7.25 C API on HP-UX 11i.

    try to burn recovery dvds and use it.
    also perform disk check.
    http://support.microsoft.com/kb/315265

  • No recovery mode available when recovered from Time Machine

My MBP has been really annoying ever since I bought it last year; every 1-2 months the HDD file structure gets completely messed up, beyond Disk Utility's ability to repair.
So I need to restore the entire system from Time Machine after formatting the internal HDD, but every time I do, there is no way to reboot into recovery mode (Command + R), and I always end up in Internet Recovery.
So far, the only way to make recovery mode available again is to download the entire OS again and reinstall it.
But that is quite time-consuming, and I would like to minimize the complete recovery time (the total, including restoring files from Time Machine and reinstalling Mountain Lion, amounts to 10 hours or longer, and I need to do it roughly every month...).
Does anyone have a good way to avoid the reinstallation step?
    Thank you.

    If your system will not load the local Recovery HD partition then you will need to download and reinstall the OS again to get it working (and likely both partition and format the drive as well).
    If you are simply trying to restore a Time Machine backup, then you can use the Internet Recovery to do so, by choosing the option from the OS X Tools interface and then choosing a backup instance on your TM drive to restore.

  • Slow Recovery and Rebalancing Time on Node Failover

    Hello,
We are currently conducting high-availability and failover testing with our Coherence grid (Coherence 3.5) and are encountering slow rebalancing and recovery times (approx. 60-90 seconds) when a storage member fails. Does anyone have any recommendations on how to minimize the time required for rebalancing? Have you noticed any correlation between the number of indexes and recovery time?
We simulate a node failure by turning a node off and removing it from the grid. During each failure we monitor the "PartitionsEndangered", "PartitionsUnbalanced" and "StatusHA" attributes of each cache in the grid. During these tests we experienced a 60-90 sec recovery time (MACHINE-SAFE -> ENDANGERED -> MACHINE-SAFE) for each
    failure.
    Coherence Version: Coherence 3.5
    JVM: Jrockit 1.6
    OS: RHEL5 64-bit
    Datagram test between coherence hosts: 196MB/s
    Heap size of each node: 1.5GB x 1 node x 10 hosts
    Coherence-node footprint for each node: 614M (including primarily data,
    backup data, indexes)
    System Average Load during HA: 0.3-0.6
    Under normal operation circumstances about half of our node footprint is tied to indexes. We have noticed that the recovery time appears to be greatly improved when we do not create these indexes. The cache being tested in this example has 34 indexes. If we do not create these indexes we are finding the re-balancing/recovery time is closer to 13 seconds vs the 60-90 seconds with index data present. Without these indexes the foot print of each node drops to just over 300M.
    We appreciate any feedback or tips you may be able to offer.
    Ilan

    hannonpi wrote:
Hey all, I just looked through our code base; we use ChainedExtractor or custom subclasses of ChainedExtractor. One problem we have is that we use a ValueHolder pattern to decompose our object graph into multiple caches. I see how, for a simple index, we could chain to PofExtractors; however, I do not see how one could call multiple methods, for example getPerson().getFirstName(), using PofExtractors.
    That seems to be horribly expensive to index your get methods if those get methods in fact end up calling on the cache again (if I understand what you mean decomposing the object graph to multiple caches). Not to mention, that the index data would not be up-to-date as any changes on other cache entries would not be mirrored in the index.
    Is the message that on rebalancing indexes are rebuilt? Is it just for migrated objects? In that case it would seem that adding more cache members would alleviate the problem, however, our data doesn't show a large increase in performance as we increase node members.
    Indexes are updated on a per-entry basis whenever an entry changes. Partition rebalancing means lot of entry changes.
    More nodes would somewhat reduce the problem but the main problem it seems is that you're indexing cache calls.
    Best regards,
    Robert

  • SAP GoLive : File System Response Times and Online Redologs design

    Hello,
A SAP GoingLive Verification session has just been performed on our SAP production environment.
    SAP ECC6
    Oracle 10.2.0.2
    Solaris 10
    As usual, we received database configuration instructions, but I'm a little bit skeptical about two of them :
    1/
    We have been told that our file system read response times "do not meet the standard requirements"
The following datafile has been flagged as having too high an average read time per block:
File name: /oracle/PMA/sapdata5/sr3700_10/sr3700.data10 - Blocks read: 67534 - Avg. read time: 23 ms - Total read time for the datafile: 1553282 ms
I'm surprised that an average read time of 23 ms is considered a high value. What exactly are those "standard requirements"?
    2/
We have been asked to increase the size of the online redo logs, which are already quite large (54 MB).
Actually, we have BW loading that generates "Checkpoint not complete" messages every night.
I've read in SAP note 79341 that:
"The disadvantage of big redo log files is the lower checkpoint frequency and the longer time Oracle needs for an instance recovery."
Frankly, I have problems understanding this sentence.
Frequent checkpoints mean more redo log file switches, which means more archived redo log files generated, right?
But how is it that frequent checkpoints should decrease the time necessary for recovery?
    Thank you.
    Any useful help would be appreciated.

    Hello
>> I'm surprised that an average read time of 23 ms is considered a high value. What exactly are those "standard requirements"?
The recommended ("standard") values are published at the end of SAP note #322896.
23 ms really does seem a little high to me; for example, we see around 4 to 6 ms on our productive system (with SAN storage).
>> Frequent checkpoints mean more redo log file switches, which means more archived redo log files generated, right?
Correct.
>> But how is it that frequent checkpoints should decrease the time necessary for recovery?
A checkpoint occurs on every log switch (of the online redo log files). On a checkpoint event, three things happen in an Oracle database:
Every dirty block in the buffer cache is written down to the datafiles
The latest SCN is written (updated) into the datafile headers
The latest SCN is also written to the controlfiles
If your redo log files are larger, checkpoints happen less often, and the dirty buffers are not written down to the datafiles (unless free space is needed in the buffer cache). So if your instance crashes, you need to apply more redo to the datafiles to bring them to a consistent state (roll forward). If you have smaller redo log files, more log switches occur, so the SCNs in the datafile headers (and the corresponding data) are closer to the newest SCN; ergo, the recovery is faster.
But this concept does not fully match reality, because Oracle implements algorithms to reduce the DBWR workload at checkpoint time.
There are also several parameters (depending on the Oracle version) which ensure that a required recovery time is kept, for example FAST_START_MTTR_TARGET.
    Regards
    Stefan
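A toy model of the trade-off Stefan describes (the apply rate is an invented figure; only the proportionality matters):

```python
# Toy model of the trade-off above: with a checkpoint at every log
# switch, the redo that must be rolled forward after a crash is
# bounded by roughly one online redo log's worth of changes, so
# bigger logs mean a longer worst-case instance recovery. The apply
# rate is an illustrative assumption, not a measured value.

def worst_case_recovery_min(redo_log_mb, apply_rate_mb_per_min):
    """Minutes to roll forward one full redo log at a given rate."""
    return redo_log_mb / apply_rate_mb_per_min

# The 54 MB logs above vs. hypothetical 512 MB logs, at an assumed
# 20 MB/min redo apply rate:
small = worst_case_recovery_min(54, 20)    # 2.7 min
large = worst_case_recovery_min(512, 20)   # 25.6 min
```

This is exactly why FAST_START_MTTR_TARGET exists: it lets Oracle checkpoint incrementally so the recovery bound no longer tracks the log file size directly.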
