Redo log tuning - improving insert rate

Dear experts!
We have an OLTP system which produces a large amount of data. After each record written to our 11.2 database (Standard Edition) a commit is performed (the system architecture can't be changed - for example, to commit only every 10th record).
So how can we speed up the insert process? As the database in front of the system gets "mirrored" to our data warehouse system, it is running in NOARCHIVELOG mode. I've already tried placing the redo log files on SSD disks, which sped up the insert process.
Another idea is putting the table in a separate tablespace with the NOLOGGING option. What do you think about this?
Furthermore, I have heard about tuning the redo latch parameters. Does anyone have information on this approach?
I would be grateful for any information!
Thanks
Markus

We have an OLTP system which produces a large amount of data. After each record written to our 11.2 database (Standard Edition) a commit is performed (the system architecture can't be changed - for example, to commit only every 10th record).
Committing after each insert (or other DML command) doesn't mean that the DBWR (database writer) process actually writes that data to the data files immediately.
DBWR uses an internal algorithm to decide when to write changes to the data files. You can influence the write frequency with the "fast_start_mttr_target" parameter.
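For reference, a quick way to inspect and adjust this (the 600-second target is only an illustrative value, and note that fast-start fault recovery is documented as an Enterprise Edition feature, so verify it applies on your Standard Edition database):
-- current setting and Oracle's estimate of actual recovery time
show parameter fast_start_mttr_target
select target_mttr, estimated_mttr from v$instance_recovery;
-- raise the target (in seconds) to relax incremental checkpointing
alter system set fast_start_mttr_target = 600 scope = both;
Keep in mind this influences DBWR checkpoint activity only; the redo writes LGWR performs at each commit are unaffected.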
So how can we speed up the insert process? As the database in front of the system gets "mirrored" to our data warehouse system, it is running in NOARCHIVELOG mode. I've already tried placing the redo log files on SSD disks, which sped up the insert process.
Placing the redo log files on SSD disks is indeed a good move. You can also check the buffer cache hit ratio and size, and striping of the file systems where the redo files reside should be taken into account.
Another idea is putting the table in a separate tablespace with the NOLOGGING option. What do you think about this?
It's an extremely bad idea. The NOLOGGING option on a tablespace will leave you with an unrecoverable tablespace and, as stated above, will not increase the insert speed.
Furthermore, I have heard about tuning the redo latch parameters. Does anyone have information on this approach?
I don't think you need this.
Better to check the indexes on the tables you insert into. Are they analyzed regularly, and are all of them actually used? Many indexes are created for particular queries and are then left unused, yet every DML statement still has to maintain all of them.
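A sketch of how to find unused indexes (the index name is illustrative; V$OBJECT_USAGE shows only indexes owned by the current user):
-- start tracking usage of a suspect index
alter index emp_status_idx monitoring usage;
-- ...run a representative workload, then check...
select index_name, used, monitoring from v$object_usage;
-- stop tracking
alter index emp_status_idx nomonitoring usage;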

Similar Messages

  • Improving redo log writer performance

    I have a database on RAC (2 nodes)
    Oracle 10g
    Linux 3
    2 servers PowerEdge 2850
    I'm tuning my database with "Spotlight". I already have this alert:
    "The Average Redo Write Time alarm is activated when the time taken to write redo log entries exceeds a threshold. "
    The servers are not in RAID5.
    How can I improve redo log writer performance?
    Unlike most other Oracle write I/Os, Oracle sessions must wait for redo log writes to complete before they can continue processing.
    Therefore, redo log devices should be placed on fast devices.
    Most modern disks should be able to process a redo log write in less than 20 milliseconds, and often much lower.
    To reduce redo write time see Improving redo log writer performance.
    See Also:
    Tuning Contention - Redo Log Files
    Tuning Disk I/O - Archive Writer
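    As a starting point for quantifying redo write time, a sketch against the standard wait interface (times in these columns are in centiseconds):
    -- 'log file parallel write' is LGWR's I/O; 'log file sync' is what sessions wait on at commit
    select event, total_waits, time_waited, average_wait
    from v$system_event
    where event in ('log file parallel write', 'log file sync');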

    Some comments on the section that was pulled from Wikipedia. There is some confusion in the market, as there are different types of solid state disks with different pros and cons. The first major point is that the quote pulled from Wikipedia addresses issues with Flash hard disk drives. Flash disks are one type of solid state disk; they would be a bad solution for redo acceleration (as I will attempt to describe below), though they could be useful for accelerating read-intensive applications. The type of solid state disk used for redo logs uses DDR RAM as the storage media. You may decide to discount my advice because I work with one of these SSD manufacturers, but I think if you do enough research you will see the point. There are many articles and many more customers who have used SSD to accelerate Oracle.
    > Assuming that you are not CPU constrained, moving the online redo to high-speed solid-state disk can make a huge difference.
    Do you honestly think this is practical and usable advice Don? There is a HUGE price difference between SSD and normal hard disks. Never mind the following disadvantages. Quoting (http://en.wikipedia.org/wiki/Solid_state_disk):
    # Price - As of early 2007, flash memory prices are still considerably higher per gigabyte than those of comparable conventional hard drives - around $10 per GB compared to about $0.25 for mechanical drives.
    Comment: Prices for DDR RAM based systems are actually higher than this, with a typical list price around $1000 per GB. Your concern, however, is not price per capacity but price for performance. How many spindles will you have to spread your redo log across to get the performance that you need? How much impact are the redo logs having on your RAID cache effectiveness? Our system is obviously geared to the enterprise, where Oracle is supporting mission-critical databases and a huge return can be made on accelerating Oracle.
    # Capacity - The capacity of SSDs tends to be significantly smaller than the capacity of HDDs.
    Comment: This statement is true. Per individual device, you can typically get higher storage density with a hard disk drive than with a solid state disk system. However, if your goal is redo log acceleration, storage capacity is not your bottleneck; write performance can be. Keep in mind that, just as with any storage media, you can deploy an array of solid state disks that provides terabytes of capacity (with either DDR or flash).
    # Lower recoverability - After mechanical failure the data is completely lost as the cell is destroyed, while if a normal HDD suffers mechanical failure the data is often recoverable using expert help.
    Comment: If you lose a hard drive holding your redo log, the last thing you are likely to do is have a disk restoration company partially restore your data. You ought to be using your mirror or RAID to rebuild the failed disk. Similarly, with solid state disks (flash or DDR) we recommend host-based mirroring to provide enterprise levels of reliability. In our experience, a DDR based solid state disk has a failure rate equal to the odds of losing two hard disk drives in a RAID set.
    # Vulnerability against certain types of effects, including abrupt power loss (especially DRAM based SSDs), magnetic fields and electric/static charges compared to normal HDDs (which store the data inside a Faraday cage).
    Comment: This statement is all FUD. For example, our DDR RAM based systems have redundant power supplies, N+1 redundant batteries, and four RAID-protected "hard disk drives" for data backup. The memory is ECC protected and Chipkill protected.
    # Slower than conventional disks on sequential I/O
    Comment: Most flash drives will be slower on sequential I/O than a hard disk drive (to really understand this you should know there are different kinds of flash memory, which also affect flash performance). DDR RAM based systems, however, offer enormous performance benefits versus hard disk or flash based systems for sequential or random writes. DDR RAM systems can handle over 400,000 random write I/Os per second (the number is slightly higher for sequential access). We would be happy to share some Oracle ORION benchmark data to make the point. For redo logs on a heavily transactional system, the latency of the redo log storage can be the ultimate limit on the database.
    # Limited write cycles. Typical Flash storage will typically wear out after 100,000-300,000 write cycles, while high endurance Flash storage is often marketed with endurance of 1-5 million write cycles (many log files, file allocation tables, and other commonly used parts of the file system exceed this over the lifetime of a computer). Special file systems or firmware designs can mitigate this problem by spreading writes over the entire device, rather than rewriting files in place.
    Comment: This statement is mostly accurate but refers only to flash drives. DDR RAM based systems, such as those Don's books refer to, do not have this limitation.
    > Looking at many of your postings to Oracle Forums thus far Don, it seems to me that you are less interested in providing actual practical help, and more interested in self-promotion - of your company and the Oracle books produced by it.
    > .. and that is not a very nice approach when people post real problems wanting real world practical advice and suggestions.
    Comment: Contact us and we will see if we can prove to you that Don, and any number of other reputable Oracle consultants, recommend using DDR based solid state disk to solve redo log performance issues. In fact, if it looks like your system can see a serious performance increase, we would be happy to put you on our evaluation program to try it out at no cost to you.

  • Tuning of Redo logs in data warehouses (dwh)

    Hi everybody,
    I'm looking for some guidance to configure redo logs in data warehouse environments.
    Of course we are running in NOARCHIVELOG mode and use direct path inserts (nologging) wherever possible.
    Nevertheless, every ETL run (one per day) produces 150 GB of redo. That seems quite a lot compared to the overall data volume (1 TB tables + indexes).
    Actually I'm not sure whether there is a tuning problem, but because of the large amount of redo I'm interested in examining it.
    Here are the facts:
    - Oracle 10g, 32 GB RAM
    - 6 GB SGA, 20 GB PGA
    - 5 log groups each with 1 Gb log file
    - 4 MB Log buffer
    - every day ca. 150 log switches (with peaks: some log switches after 10 seconds)
    - some sysstat metrics after one etl load:
    Select name, to_char(value, '9G999G999G999G999G999G999') from v$sysstat Where name like 'redo %';
    "NAME" "TO_CHAR(VALUE,'9G999G999G999G999G999G999')"
    "redo synch writes" " 300.636"
    "redo synch time" " 61.421"
    "redo blocks read for recovery"" 0"
    "redo entries" " 327.090.445"
    "redo size" " 159.588.263.420"
    "redo buffer allocation retries"" 95.901"
    "redo wastage" " 212.996.316"
    "redo writer latching time" " 1.101"
    "redo writes" " 807.594"
    "redo blocks written" " 321.102.116"
    "redo write time" " 183.010"
    "redo log space requests" " 10.903"
    "redo log space wait time" " 28.501"
    "redo log switch interrupts" " 0"
    "redo ordering marks" " 2.253.328"
    "redo subscn max counts" " 4.685.754"
    So the questions:
    Can anybody see tuning needs? Should the redo logs be made larger, or should more groups be added? What about placing redo logs on solid state disks?
    kind regards,
    Mirko

    user5341252 wrote:
    I'm looking for some guidance to configure redo logs in data warehouse environments.
    Of course we are running in NOARCHIVELOG mode and use direct path inserts (nologging) wherever possible.
    Why "of course"? What's your recovery strategy if you wreck the database?
    Nevertheless, every ETL run (one per day) produces 150 GB of redo. That seems quite a lot compared to the overall data volume (1 TB tables + indexes).
    This may be an indication that you need to do something to reduce index maintenance during data loading.
    Actually I'm not sure whether there is a tuning problem, but because of the large amount of redo I'm interested in examining it.
    For a quick check you might be better off running statspack (or AWR) snapshots across the start and end of the batch to get an idea of what work goes on and where most of the time goes. (A better strategy would be to examine specific jobs in detail, though.)
    "redo synch time" " 61.421"
    "redo log space wait time" " 28.501" Rough guideline - if the redo is slowing you down, then you've lost less than 15 minutes across the board to the log writer. Given the number of processes loading and the elapsed time to load, is this significant ?
    "redo buffer allocation retries"" 95.901" This figure tells us how OFTEN we couldn't get space in the log buffer - but not how much time we lost as a result. We also need to see your 'log buffer space' wait time.
    Can anybody see tuning needs? Should the redo logs be made larger, or should more groups be added? What about placing redo logs on solid state disks?
    Based on the information you've given so far, I don't think anyone should be giving you concrete recommendations on what to do; only suggestions on where to look or what to tell us.
    Regards
    Jonathan Lewis

  • Redo log generation rate is higher in RAC compared to a single instance

    Hi Experts,
    I need views regarding something I am facing in a RAC environment. We have a two-node RAC 10gR2 running on AIX 6.1.
    On the application side we have a process of 4 hours, and during this process we observed an average of 110 GB of logs. But if we run the same process on the application with a single-instance DB at the back end, it generates only 40 GB of logs. Now my question is: why does the same process have different redo log generation values against the RAC DB and the single instance?

    Amiabu,
    Your question calls for crystal balls, as you have provided way too few details.
    Also you didn't use Logminer to find out what is going on. This would have been a sensible approach, even before posting a question which can be paraphrased as 'It doesn't work as expected, why?'.
    Two general observations:
    if your code doesn't scale, using RAC will make things worse.
    Oracle 10g is a heavily instrumented product, assuming you use AWR or ADDM or statspack.
    The Cache Fusion feature of RAC can easily result in extra wait events, which in turn are being tracked (INSERTed that is) in the AWR or statspack repository.
    Only you can answer the question what was going on by using Logminer.
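    For reference, a minimal LogMiner session looks roughly like this (the archived log file name is illustrative):
    begin
      dbms_logmnr.add_logfile(logfilename => '/arch/2_12345_123456789.dbf', options => dbms_logmnr.new);
      dbms_logmnr.start_logmnr(options => dbms_logmnr.dict_from_online_catalog);
    end;
    /
    -- which segments and operations account for the redo
    select seg_owner, seg_name, operation, count(*)
    from v$logmnr_contents
    group by seg_owner, seg_name, operation
    order by count(*) desc;
    exec dbms_logmnr.end_logmnr
    Comparing this output between the RAC and single-instance runs should show where the extra redo comes from.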
    Sybrand Bakker
    Senior Oracle DBA
    Edited by: sybrand_b on 16-jun-2011 14:33

  • Redo log space requests VALUE high

    SELECT name||' = '||value
    FROM v$sysstat
    WHERE name = 'redo log space requests';
    I am noticing 40+ space requests for some of my Oracle 9.2.0.5 databases.
    On another 7.3.4 DB I see this at over 140, but that DB is shut down only on weekends, so I presume this cumulative value keeps increasing.
    I already have 5 groups of 20MB. Should I add another 2 groups, or increase their sizes?
    I did read somewhere that I'd have to increase the log_buffer parameter. So how do we deal with this issue? Any repercussions if I leave this as it is for now?
    Would the cause of this be redo logs that are not big enough, or something else?
    Thanks.

    user4874781 wrote:
    Thanks for your response Charles.
    So if I understand this correctly ... "redo log space requests" corresponds to either an incorrectly sized redo log file, or DBWR / CKPT needing to be tuned.
    Maybe I was interpreting this the wrong way. (Possibly)
    " The statistic 'redo log space requests' reflects the number of times a user process waits for space in the redo log buffer. " If that is the case, if there was longer waits for this event, I was under the assumption that log_buffer needs to be increased.
    http://www.idevelopment.info/data/Oracle/DBA_tips/Tuning/TUNING_6.shtml
    * Yes, the waits have increased to 70 as of now (since 40 yesterday; the DB was started Saturday night and will run till the weekend). There is less activity as of now, since the day has just started, so it would definitely rise by the end of the day.
    I took a look at the above article, and I think I understand why the article is slightly confusing. With due respect to the author, the article was last modified 16-Apr-2001, which I believe is before the Oracle documentation was better clarified regarding these statistics. From:
    http://download.oracle.com/docs/cd/B14117_01/server.101/b10755/stats002.htm
    "redo log space requests: Number of times the active log file is full and Oracle must wait for disk space to be allocated for the redo log entries. Such space is created by performing a log switch. Log files that are small in relation to the size of the SGA or the commit rate of the work load can cause problems. When the log switch occurs, Oracle must ensure that all committed dirty buffers are written to disk before switching to a new log file. If you have a large SGA full of dirty buffers and small redo log files, a log switch must wait for DBWR to write dirty buffers to disk before continuing."
    "redo log space wait time: Total elapsed waiting time for "redo log space requests" in 10s of milliseconds"
    It is quite possible that the "redo log space requests" statistic will increase with every redo log file switch, which should not be too much of a concern. You may want to focus a little more on the "redo log space wait time" statistic, which indicates how much time was actually spent waiting. You might also want to focus on the system-wide wait event interface, examining how the accumulated wait time increases from one sampling of the statistics to the next.
    * I have 1 log switch every 11 minutes; BTW, I have 5 log groups of 20 MB each as of now. So I am assuming 40 MB for 4 or 5 log groups should be fine as per your suggestion?
    If you have the disk space, considering that it is an ancient AIX box, you may want to set the redo log files to an even larger size, possibly 100MB (or larger). You may then want to force periodic switches of the redo log, for instance once an hour, or once every 30 minutes.
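    As a sketch, on the 9.2 database a time-based switch can be enforced with the ARCHIVE_LAG_TARGET parameter (value in seconds, 1800 = 30 minutes; assumes an spfile), or a switch can be issued by hand:
    alter system set archive_lag_target = 1800 scope = both;
    alter system switch logfile;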
    * This is an ancient AIX box with 512 MB RAM. Is the redo log located on a fast device? I'd have to find that out (any hints on that?)
    * Decreasing the log_buffer is possible on weekends, since I'd have to bounce the instance for it to take effect.
    I will increase the log files accordingly and hopefully the space waits will reduce. Thanks again.
    Someone else on the forums, possibly Sybrand, might be familiar with AIX and be able to provide you with an answer. If you are seeing system-wide increasing wait time for redo related waits, then that might be an indication that the redo logs are not located on a fast device.
    Charles Hooper
    IT Manager/Oracle DBA
    K&M Machine-Fabricating, Inc.

  • Lot of Redo log wait

    Dear all,
    In ST04 I see Redo log wait. Is this a problem? Please suggest how to solve it.
    Please find the details.
    Size (kB)                            14,352
    Entries                          42,123,046
    Allocation retries                    9,103
    Alloc fault rate(%)                     0.0
    Redo log wait (s)                       486
    Log files (in use)                8 (   8 )
    DB_INST_ID     Instance ID     1
    DB_INSTANCE     DB instance name     prd
    DB_NODE     Database node     A
    DB_RELEASE     Database release     10.2.0.4.0
    DB_SYS_TIMESTAMP     Day, Time     06.04.2010 13:07:10
    DB_SYSDATE     DB System date     20100406
    DB_SYSTIME     DB System time     130710
    DB_STARTUP_TIMESTAMP     Start up at     22.03.2010 03:51:02
    DB_STARTDATE     DB Startup date     20100322
    DB_STARTTIME     DB Startup time     35102
    DB_ELAPSED     Seconds since start     1329368
    DB_SNAPDIFF     Sec. btw. snapshots     1329368
    DATABUFFERSIZE     Size (kB)     3784704
    DBUFF_QUALITY     Quality (%)     96.3
    DBUFF_LOGREADS     Logical reads     5615573538
    DBUFF_PHYSREADS     Physical reads     207302988
    DBUFF_PHYSWRITES     Physical writes     7613263
    DBUFF_BUSYWAITS     Buffer busy waits     878188
    DBUFF_WAITTIME     Buffer wait time (s)     3583
    SHPL_SIZE     Size (kB)     1261568
    SHPL_CAQUAL     DD-cache Quality (%)     95.1
    SHPL_GETRATIO     SQL area getratio(%)     98.4
    SHPL_PINRATIO     SQL area pinratio(%)     99.9
    SHPL_RELOADSPINS     SQLA.Reloads/pins(%)     0.0042
    LGBF_SIZE     Size (kB)     14352
    LGBF_ENTRIES     Entries     42123046
    LGBF_ALLORETR     Allocation retries     9103
    LGBF_ALLOFRAT     Alloc fault rate(%)     0
    LGBF_REDLGWT     Redo log wait (s)     486
    LGBF_LOGFILES     Log files     8
    LGBF_LOGFUSE     Log files (in use)     8
    CLL_USERCALLS     User calls     171977181
    CLL_USERCOMM     User commits     1113161
    CLL_USERROLLB     User rollbacks     34886
    CLL_RECURSIVE     Recursive calls     36654755
    CLL_PARSECNT     Parse count     10131732
    CLL_USR_PER_RCCLL     User/recursive calls     4.7
    CLL_RDS_PER_UCLL     Log.Reads/User Calls     32.7
    TIMS_BUSYWT     Busy wait time (s)     389991
    TIMS_CPUTIME     CPU time session (s)     134540
    TIMS_TIM_PER_UCLL     Time/User call (ms)     3
    TIMS_SESS_BUSY     Sessions busy (%)     0.94
    TIMS_CPUUSAGE     CPU usage (%)     2.53
    TIMS_CPUCOUNT     Number of CPUs     4
    RDLG_WRITES     Redo writes     1472363
    RDLG_OSBLCKWRT     OS blocks written     54971892
    RDLG_LTCHTIM     Latching time (s)     19
    RDLG_WRTTIM     Redo write time (s)     2376
    RDLG_MBWRITTEN     MB written     25627
    TABSF_SHTABSCAN     Short table scans     12046230
    TABSF_LGTABSCAN     Long table scans     6059
    TABSF_FBYROWID     Table fetch by rowid     1479714431
    TABSF_FBYCONTROW     Fetch by contin. row     2266031
    SORT_MEMORY     Sorts (memory)     3236898
    SORT_DISK     Sorts (disk)     89
    SORT_ROWS     Sorts (rows)     5772889843
    SORT_WAEXOPT     WA exec. optim. mode     1791746
    SORT_WAEXONEP     WA exec. one pass m.     93
    SORT_WAEXMULTP     WA exec. multipass m     0
    IEFF_SOFTPARSE     Soft parse ratio     0.9921
    IEFF_INMEM_SORT     In-memory sort ratio     1
    IEFF_PARSTOEXEC     Parse to exec. ratio     0.9385
    IEFF_PARSCPUTOTOT     Parse CPU to Total     0.9948
    IEFF_PTCPU_PTELPS     PTime CPU / PT elps.     0.1175
    Regards,
    Kumar

    Hi,
    If the redo log buffer is not large enough, sessions wait for space in it to become available. This wait time becomes wait time for the end user, so it can cause a performance problem at the database end and may need to be tuned.
    The size of the redo log buffer is defined in the init.ora file using the LOG_BUFFER parameter. Note that the statistic 'redo log space requests' counts waits for space in the online redo log files (at log switch time), while 'redo buffer allocation retries' counts waits for space in the redo log buffer itself.
    If the waits are caused by an undersized redo log buffer, the recommendation is to increase LOG_BUFFER so that 'redo buffer allocation retries' stays near zero.
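    A sketch of how to check both statistics (LOG_BUFFER is static, so changing it requires an instance restart; the 4 MB value is only illustrative):
    select name, value
    from v$sysstat
    where name in ('redo log space requests', 'redo log space wait time', 'redo buffer allocation retries');
    -- in the init.ora / spfile:
    -- log_buffer = 4194304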
    regards,
    rakesh

  • Best practice - online redo logs and virtualization

    I have a 10.1.0.4 instance (soon to be migrated to 11gr2) running under Windows Server 2003.
    We use a non-standard disk distribution scheme -
    on the c: drive we have oracle_home as well as directories for control files and online redo logs.
    on the d: drive we have datafiles
    on the e: drive we have archive log files and another directory with online redo logs and another copy of control file
    my question is this:
    is it smart practice to have ANY online redo logs or control file on the same spindle with archive logs?
    Our setup works fairly well, but we are in the process of migrating the instance, first to an ESX server and SAN, and then to 11gR2 64-bit under Server 2008 64-bit. When we bring up our instance on the VM for testing, we find that, benchmarking the ESX server (dual Xeon 3.4GHz with 48GB RAM running against a FalconStor NSS SAN with 15k SAS disks over iSCSI) against the production physical server (dual Xeon 2.0GHz with 4GB RAM using direct attached SATA 7200rpm drives), some processes run faster on the ESX box and some run 40-100% slower. Running Statspack seems to identify lots of physical read waits as well as some waits for redo and control files.
    Is it possible that in addition to any overhead introduced by ESX and iSCSI (we are running Jumbo Frames over 1gb) we may have contention because the archive logs are on the same "spindle" (virtual) as the online redo and control files?
    We're looking at multiple avenues to bring the 2 servers in line from a performance standpoint - db configuration, memory allocation, possible move to 10gb network, possible move to SSD storage tray, possible application rewrites. But from the simplest low hanging fruit idea, if these files should not be on the same spindle thats an easy change to make and possibly eke out an improvement.
    Ideas?
    Mike

    Hi,
    "Old" Oracle standard is to use as many spindles as possible.
    It looks to me like you have only 1 disk with several partitions on it?
    In my honest opinion you should anyway start by physically separating the OS from Oracle, so leave the C: drive to the Windows OS.
    Use another physically separate D: drive to install your application.
    Use yet another set of physical drives, preferably in a RAID10 setup, for your database and redo logs.
    And finally yet another disk for the archive logs.
    We have recently configured a Windows 2008 server with an 11g DB, which pretty much follows the above setup.
    All non-RAID10 disks are RAID1 (mirror) and we even have some SSDs for hot tables and redo logs.
    The machine, or must I say the database, operates like a high speed train, very, very fast.
    Of course, keep in mind the number of cores (not only for licensing) and the amount of memory.
    Try to prevent the system from swapping, because that is a performance killer!
    Edit: And even if you put a virtual layer in between, try to separate the virtual disks as much as possible over physical disks
    Success!
    FJFranken
    Edited by: fjfranken on 7-okt-2011 7:19

  • Why do we need standby redo log on Primary database.

    Hi Gurus,
    I was going through the document in OBE,
    http://www.oracle.com/technology/obe/11gr1_db/ha/dataguard/physstby/physstdby.htm
    I have two queries:
    1) I noticed the statement -
    "Configure the primary database to receive redo data, by adding the standby logfiles to the primary. "
    Why do we have to create standby redo log on a primary database?
    2) There is another statement --
    "It is highly recommended that you have one more standby redo log group than you have online redo log groups as the primary database. The files must be the same size or larger than the primary database’s online redo logs. "
    Why do we need one additional standby redo log group than in Primary database.
    Could anyone please explain to me in simple words.
    Thanks
    Cherrish Vaidiyan

    Hi,
    1. Standby redo logs are used only when the database role is standby. It is recommended to add them on the primary as well, so that they can be used after a role reversal; during normal operation the standby redo logs are not used at all on the primary.
    2. With 3 online redo log groups, it is recommended to use 4 standby redo log groups. This covers the case where log switching happens frequently on the primary and all 3 standby redo logs are still not completely archived on the standby; the 4th can be used when there is some delay on the standby due to the network or slowness of the archiver on the standby.
    Use of the standby redo log groups depends on the redo generation rate; you may see only 2 standby redo logs being used, while you have 4 standby redo log groups, when the redo generation rate is low.
    So it is recommended to have one more standby redo log group for when the redo generation rate is high and all of the existing standby redo log groups are in use.
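    For reference, standby redo log groups are added with a command like the following (group number, file name and size are illustrative; size them the same as, or larger than, the online redo logs):
    alter database add standby logfile group 4
      ('/u01/app/oracle/oradata/prim/srl04.log') size 52428800;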
    Regards
    Anudeep

  • Can we use online redo log to recover lost datafile in NOARCHIVE mode?

    I am working on the OCA exam and confused about these 2 sample questions (similar questions with totally different answers).
    Please give me a hint about the difference between these 2 questions.
    ** If the database is in NOARCHIVELOG mode, and one of the datafile for tablespace USERS is lost, what kind of recovery is possible? (answer: B)
    A. All transactions except those in the USERS tablespace are recoverable up to the loss of the datafile.
    B. Recovery is possible only up to the point in time of the last full database backup.
    C. The USERS tablespace is recoverable from the online redo log file as long as none of the redo log files have been reused since the last backup.
    D. Tablespace point in time recovery is available as long as a full backup of the USERS tablespace exists.
    ** The database of your company is running in the NOARCHIVELOG mode. You perform a complete backup of the database every night. On Monday morning, you lose the USER1.dbf file belonging to the USERS tablespace. Your database has four redo log groups, and there have been two log switches since Sunday night's backup.
    Which is true (answer: B)
    A. The database cannot be recovered.
    B. The database can be recovered up to the last commit.
    C. The database can be recovered only up to the last completed backup.
    D. The database can be recovered by performing an incomplete recovery.
    E. The database can be recovered by restoring only the USER1.dbf datafile from the most recent backup.

    I think Gaurav is correct, you can recover to the last commit even in NOARCHIVELOG, as long as all the changes in the redo logs have not been overwritten. So answer should be B for question 2.
    Here is my test:
    SQL> select log_mode from v$database;
    LOG_MODE
    NOARCHIVELOG
    SQL> select tablespace_name, file_name from dba_data_files;
    TABLESPACE_NAME
    FILE_NAME
    USERS
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORA101RC\USERS01.DBF
    SYSAUX
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORA101RC\SYSAUX01.DBF
    UNDOTBS1
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORA101RC\UNDOTBS01.DBF
    SYSTEM
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORA101RC\SYSTEM01.DBF
    DATA
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORA101RC\DATA01.DBF
    SQL> create table names
    2 ( name varchar(16))
    3 tablespace users;
    Table created.
    so this segment 'names' is created in the datafile users01.
    At this point I shut down and mount the DB, then:
    RMAN> backup database;
    channel ORA_DISK_1: backup set complete, elapsed time: 00:00:29
    Finished backup at 06-OCT-07
    SQL>alter database open
    SQL> insert into names values ('pippo');
    1 row created.
    SQL> commit;
    Commit complete.
    SQL>shutdown immediate;
    Database closed.
    Database dismounted.
    ORACLE instance shut down.
    At this point I delete datafile users01 and restart:
    SQL> startup
    ORACLE instance started.
    Total System Global Area 167772160 bytes
    Fixed Size 1247900 bytes
    Variable Size 67110244 bytes
    Database Buffers 96468992 bytes
    Redo Buffers 2945024 bytes
    Database mounted.
    ORA-01157: cannot identify/lock data file 4 - see DBWR trace file
    ORA-01110: data file 4: 'C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORA101RC\USERS01.DBF'
    restoring the backup taken before inserting the value 'pippo' in table names:
    RMAN> restore database;
    Starting restore at 06-OCT-07
    using channel ORA_DISK_1
    channel ORA_DISK_1: starting datafile backupset restore
    channel ORA_DISK_1: specifying datafile(s) to restore from backup set
    restoring datafile 00001 to C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORA101RC\SYSTEM01.DBF
    restoring datafile 00002 to C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORA101RC\UNDOTBS01.DBF
    restoring datafile 00003 to C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORA101RC\SYSAUX01.DBF
    restoring datafile 00004 to C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORA101RC\USERS01.DBF
    restoring datafile 00005 to C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORA101RC\DATA01.DBF
    channel ORA_DISK_1: reading from backup piece C:\ORACLE\PRODUCT\10.2.0\DB_1\DATABASE\0AITR52K_1_1
    channel ORA_DISK_1: restored backup piece 1
    piece handle=C:\ORACLE\PRODUCT\10.2.0\DB_1\DATABASE\0AITR52K_1_1 tag=TAG20071006T181337
    channel ORA_DISK_1: restore complete, elapsed time: 00:02:07
    Finished restore at 06-OCT-07
    RMAN> recover database;
    Starting recover at 06-OCT-07
    using channel ORA_DISK_1
    starting media recovery
    media recovery complete, elapsed time: 00:00:05
    Finished recover at 06-OCT-07
    SQL> alter database open;
    Database altered.
    SQL> select * from names;
    NAME
    pippo
    SQL>
    enrico

  • Redo log files are not applying to standby database

    Hi everyone!!
    I have created a standby database on the same server (Windows XP), using Oracle 11g. I want to synchronize my standby database with the primary database, so I tried to apply redo logs from primary to standby as follows.
    My standby database is open and the primary database is not started (instance not started), because only one database can run in exclusive mode when DB_NAME is the same for both databases. I ran the following command on the standby database.
                SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
    It returns "Database altered" . But when I checked the last archive log on primary database, its sequence is 189 while on standby database it is 177. That mean archived redo logs are not applied on standby database.
    The tnsnames.ora file contains entry for both service primary & standby database and same service has been used to transmit and receive redo logs.
    1. How to resolve this issue ?
    2. Is it compulsory to have the primary database open?
    3. I have created the standby control file by using the command
    SQL> ALTER DATABASE CREATE STANDBY CONTROLFILE AS 'D:\APP\ORACLE\ORADATA\TESTCAT\CONTROLFILE\CONTROL_STAND1.CTL';
    So the database name in the standby control file is the same as the primary database name (PRIM), and hence the init.ora file of the standby database also contains the DB_NAME = 'PRIM' parameter. I can't change it, because it returns a database name mismatch error on startup. Should I have a different database name for each, or is the existing one correct?
    Can anybody help me to come out from this stuck ?
    Thanks & Regards
    Tushar Lapani

    Thank you Girish. It solved my redo apply problem. I set the log_archive_dest parameter again and then checked the archived redo log sequence number; it was the same for both the primary and standby database. But a table on the standby database is still not being refreshed.
    I went through the following scenario.
    1. Inserted 200000 rows into the emp table of the Scott user on the primary database and committed the changes.
    2. Then I synchronized the standby database using the ALTER DATABASE command, and verified that the archived log sequence number is the same for both databases. That means archived logs from the primary database have been applied to the standby database.
    3. But when I count the number of rows in the emp table of the Scott user on the standby database, it returns only 14 rows, even though the redo logs have been applied.
    So my question is: why are changes made to the primary database not reflected on the standby database although the redo logs have been applied?
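    For reference, a shipped archived log is not necessarily an applied log; a sketch of how to verify the apply side on the standby:
    -- which archived logs have actually been applied (run on the standby)
    select sequence#, applied from v$archived_log order by sequence#;
    -- is managed recovery actually running?
    select process, status, sequence# from v$managed_standby;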
    Thanks

  • Urgent: Huge diff in total redo log size and archive log size

    Dear DBAs
    I have a concern regarding size of redo log and archive log generated.
    Is the equation below is correct?
    total size of redo generated by all sessions = total size of archive log files generated
    I am experiencing a situation where, when I look at the total size of redo generated by all the sessions and the size of archive logs generated, there is a huge difference.
    My total all-session redo size is 780MB, whereas my archive log directory has consumed 23GB.
    Before I started measuring, I cleaned out the archive directory and started to monitor from a specific time.
    Environment: Oracle 9i Release 2
    How I tracked the sizing information is below
    logon as SYS user and run the following statements
    DROP TABLE REDOSTAT CASCADE CONSTRAINTS;
    CREATE TABLE REDOSTAT
    (
    AUDSID NUMBER,
    SID NUMBER,
    SERIAL# NUMBER,
    SESSION_ID CHAR(27 BYTE),
    STATUS VARCHAR2(8 BYTE),
    DB_USERNAME VARCHAR2(30 BYTE),
    SCHEMANAME VARCHAR2(30 BYTE),
    OSUSER VARCHAR2(30 BYTE),
    PROCESS VARCHAR2(12 BYTE),
    MACHINE VARCHAR2(64 BYTE),
    TERMINAL VARCHAR2(16 BYTE),
    PROGRAM VARCHAR2(64 BYTE),
    DBCONN_TYPE VARCHAR2(10 BYTE),
    LOGON_TIME DATE,
    LOGOUT_TIME DATE,
    REDO_SIZE NUMBER
    )
    TABLESPACE SYSTEM
    NOLOGGING
    NOCOMPRESS
    NOCACHE
    NOPARALLEL
    MONITORING;
    GRANT SELECT ON REDOSTAT TO PUBLIC;
    CREATE OR REPLACE TRIGGER TR_SESS_LOGOFF
    BEFORE LOGOFF
    ON DATABASE
    DECLARE
    PRAGMA AUTONOMOUS_TRANSACTION;
    BEGIN
    INSERT INTO SYS.REDOSTAT
    (AUDSID, SID, SERIAL#, SESSION_ID, STATUS, DB_USERNAME, SCHEMANAME, OSUSER, PROCESS, MACHINE, TERMINAL, PROGRAM, DBCONN_TYPE, LOGON_TIME, LOGOUT_TIME, REDO_SIZE)
    SELECT A.AUDSID, A.SID, A.SERIAL#, SYS_CONTEXT ('USERENV', 'SESSIONID'), A.STATUS, USERNAME DB_USERNAME, SCHEMANAME, OSUSER, PROCESS, MACHINE, TERMINAL, PROGRAM, TYPE DBCONN_TYPE,
    LOGON_TIME, SYSDATE LOGOUT_TIME, B.VALUE REDO_SIZE
    FROM V$SESSION A, V$MYSTAT B, V$STATNAME C
    WHERE
    A.SID = B.SID
    AND
    B.STATISTIC# = C.STATISTIC#
    AND
    C.NAME = 'redo size'
    AND
    A.AUDSID = sys_context ('USERENV', 'SESSIONID');
    COMMIT;
    END TR_SESS_LOGOFF;
    /
    Now, the total sum of REDO_SIZE (B.VALUE) is far less than the archive log size, and this at a time when no other user is logged in except myself.
    Is there anything wrong with the query for collecting redo information, or are there processes (for example the background processes, which never log off and so never fire the trigger) whose redo is not captured on a session basis?
    I have seen a similar implementation to the one above at many sites.
    Kindly provide a mechanism whereby I can trace which user generated how much redo (or archive log) on a session basis. I want to track which users/processes are causing high redo generation.
    If I don't find a solution I will raise an SR with Oracle.
    Thanks
    [V]

    You can query v$sess_io, column block_changes, to find out which session is generating how much redo.
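    A sketch of that approach (block changes are a rough proxy for redo volume, not an exact byte count):
    select sid, block_changes
    from v$sess_io
    order by block_changes desc;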
    The following query gives you the session redo statistics:
    select a.sid,b.name,sum(a.value) from v$sesstat a,v$statname b
    where a.statistic# = b.statistic#
    and b.name like '%redo%'
    and a.value > 0
    group by a.sid,b.name
    If you want, you can restrict it to just the 'redo size' statistic for the current sessions.
    Jaffar

  • Fundamental questions on redo logs and rollbacks

    Hi all,
    Some basic questions, I really want to understand it very clearly.
    Suppose that we have updated a few records in a table. We know that the blocks to be updated will be fetched into the buffer cache, updated with the new values, and eventually committed. The questions I have are:
    1) What exact information goes to the redo log? Is it a copy of the block before the change and a copy of the block after the change?
    2) What exactly goes to the rollback segment? Is it a copy of the block before the change (for an update), just the rowid for an inserted row, and a copy of the block for a deleted row?
    3) Whatever we do, is it the whole block that goes to redo or rollback? Meaning, if there are 10 rows in the block and we update one of them, does the whole block still go to redo or rollback?
    4) If we roll back, what goes where? Does anything go to redo if we roll back?
    Please explain.
    Thanks.

    Redo stores changes made in the database, and undo/rollback stores the reverse of those changes. Data blocks may be changed prior to a commit, and recorded in both locations.
    So, when a database is recovered, redo is applied to the backup datafiles, rolling every change forward, and then undo is applied to reverse any uncomitted transactions.
    Undo/rollback can also be used simply to roll back a transaction in an active instance. Redo is only used during recovery (instance or media recovery).
    I don't know if this is tracked via the storage of block images, or if it just stores the change itself.
    -cf
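    For what it's worth, redo records change vectors rather than whole block images; a quick way to see the scale is to watch the 'redo size' statistic for your own session around a small change (table and key values are illustrative):
    select b.value as redo_bytes
    from v$mystat b, v$statname c
    where b.statistic# = c.statistic# and c.name = 'redo size';
    update emp set sal = sal + 1 where empno = 7369;
    commit;
    -- re-run the first query: the delta is typically a few hundred bytes,
    -- far less than a full 8K block image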

  • Recover Database is taking more time for first archived redo log file

    Hi,
    Environment Used :
    Hardware : IBM p570 machine with P6 processor Lpar of .5 CPU and 2.5 GB Ram
    OS : AIX 5.3 ML 07
    Cluster: HACMP 5.4.1.2
    Oracle Version: 9.2.0.4 RAC
    SAN : DS8100 from IBM
    I have used the flash copy option to copy the database from production to the test machine. Then I tried to recover the database to a consistent state using the command "recover automatic database until cancel". The system was taking a long time, and from the alert log it was found that, for the first archived log only, it reads all the datafiles, taking 3 seconds for each datafile. Since I have more than 500 datafiles, it takes nearly 25 minutes to apply the first archived redo log file. All other log files are applied immediately without any delay. Any suggestion to improve the speed will be highly appreciated.
    Regards
    Sridhar

    After changing the LPAR settings to 2 CPUs and 5GB RAM, the problem was solved.

  • [DG Physical] ORA-00368: checksum error in redo log block

    Hi all,
    I'm building a DR solution with 1 primary & 2 DR site (Physical).
    All DBs use Oracle 10.2.0.3.0 on Solaris 64bit.
    The first one ran fine for some days (6), then I installed the 2nd. After restoring the DB (DUPLICATE TARGET DATABASE FOR STANDBY) it was ready to apply redo. The DB fetched the missing archived log gaps & I got the following error:
    ==================
    Media Recovery Log /global/u04/recovery/billhcm/archive/2_32544_653998293.dbf
    Errors with log /global/u04/recovery/billhcm/archive/2_32544_653998293.dbf
    MRP0: Detected read corruption! Retry recovery once log is re-fetched...
    Wed Jan 27 21:46:25 2010
    Errors in file /u01/oracle/admin/billhcm/bdump/billhcm1_mrp0_12606.trc:
    ORA-00368: checksum error in redo log block
    ORA-00353: log corruption near block 1175553 change 8236247256146 time 01/27/2010 18:33:51
    ORA-00334: archived log: '/global/u04/recovery/billhcm/archive/1_47258_653998293.dbf'
    Managed Standby Recovery not using Real Time Apply
    Recovery interrupted!
    Recovered data files to a consistent state at change 8236247255373
    ===================
    I saw that maybe RFS got the file incorrectly, so I used ftp to get this file & continued the apply, and it passed. Comparing the RFS file & the ftp copy showed a difference. At that point I thought something was wrong with RFS, because the content of the archived log was not right. (I used BACKUP VALIDATE ARCHIVELOG SEQUENCE BETWEEN N1 AND N2 THREAD X to check all the archived logs RFS fetched; there was corruption in every file.)
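    For reference, that validation looks like this in RMAN (the sequence range and thread are placeholders, as above):
    RMAN> BACKUP VALIDATE ARCHIVELOG SEQUENCE BETWEEN 32540 AND 32550 THREAD 2;
    Any corruption found during validation is reported in the command output and the alert log.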
    I restored the DR DB again & applied an incremental backup from the primary, and now it runs well. I don't know what's happening, as I did the same procedure for all DR DBs.
    Last night I had to stop & restart DR site 1. Today I checked, and it got the same error as the 2nd site, with corrupted redo. I tried deleting the archived logs & letting RFS re-fetch them, but the files were corrupt again.
    If this continues to happen with the 2nd site as well, that'll be a big problem.
    The DR site 1 & primary are linked by a GB switch, site 2 by a 155Mbps connection (enough for my DB load at about a 1.5MB/s average apply rate).
    I searched Oracle Support (Metalink) but no luck; there is a case, but it mentions max_connection > 1 (mine is the default, = 1).
    Can someone show me how to troubleshoot/debug/trace this problem?
    That would be a great help!
    Thank you very much.

    This (Replication) is the wrong forum for your posting.
    Please post to the "Database - General" forum at
    General Database Discussions
    But, first, log an SR with Oracle Support.
    Hemant K Chitale

  • Redo log wait

    Dear All,
    We are using ECC5 and the database is Oracle 9i on Windows 2003.
    I have noticed that the Redo log wait (s) has suddenly increased, to 690.
    Please suggest what the problem is and how to solve it.
    Data buffer
    Size              kb      1,261,568
    Quality            %           96.2
    Reads                 4,234,462,711
    Physical reads          160,350,516
              writes           3,160,751
    Buffer busy waits         1,117,697
    Buffer wait time   s          3,507
    Shared Pool
    Size              kb        507,904
    DD-Cache quality   %           84.3
    SQL Area getratio  %           95.6
             pinratio  %           98.8
          reloads/pins %         0.0297
    Log buffer
    Size              kb          1,176
    Entries                  11,757,027
    Allocation retries              722
    Alloc fault rate   %            0.0
    *Redo log wait      s            690*
    Log files (in use)            8( 8)
    Calls
    User calls               41,615,763
         commits                367,243
         rollbacks                7,890
    Recursive calls         100,067,593
    Parses                    7,822,590
    User/Recursive calls            0.4
    Reads / User calls            101.8
    Time statistics
    Busy wait time     s        697,392
    CPU time           s         42,505
    Time/User call    ms             18
      Sessions busy      %           9.26
      CPU usage          %           4.51
      CPU count                         2
    Redo logging
    Writes                    1,035,582
    OS-Blocks written        14,276,056
    Latching time      s              1
      Write time         s            806
      Mb written                    6,574
    Table scans & fetches
    Short table scans           607,891
    Long table scans             32,468
    Fetch by rowid        1,620,054,083
       by continued row         761,131
    Sorts
    Memory                    3,046,669
    Disk                             32
    Rows sorted             446,593,854
    Regards,
    Shiva

    Hi Stefan,
    As per the doc you suggested, the details are as follows.
    In a day there are only 24 log switches; per the doc, no more than 10 to 15 per hour is acceptable, so ours is well below that.
    The DD-cache quality (84.1%) is on the low side.
    The elapsed time since start:
    Elapsed since start (s)       540,731
      Log buffer
      Size              kb          1,176
      Entries                  13,449,901
      Allocation retries              767
      Alloc fault rate   %            0.0
    *Redo log wait      s            696*
       Log files (in use)            8( 8)
    Check DB Wait times
    TCode ST04->Detail Analysis Menu->Wait Events
    Statistics on total waits for an event
    Elapsed time:             985  s
    since reset at 09:34:06
    Type   Client   Sessions   Busy wait time (ms)   Total wait time (ms)   Busy wait time (%)
    USER   User          40            1,028,710           17,594,230        5.85
    BACK   ARC0           1                2,640            1,264,410        0.21
    BACK   ARC1           1                  540            1,020,400        0.05
    BACK   CKPT           1                  950              987,490        0.10
    BACK   DBW0           1                  130              983,920        0.01
    BACK   LGWR           1                  160              986,430        0.02
    BACK   PMON           1                    0              987,000        0.00
    BACK   RECO           1                   10            1,800,010        0.00
    BACK   SMON           1                3,820            1,179,410        0.32
    Disk based sorts
    Sorts
    Memory                    3,443,693
    Disk                             41
    Rows sorted             921,591,847
    Check DB Shared Pool Quality
    Shared Pool
    Size              kb        507,904
    DD-Cache quality   %           84.1
    SQL Area getratio  %           95.6
      pinratio  %                           98.8
          reloads/pins %         0.0278
      V$LOGHIST
    THREAD#   SEQUENCE#   FIRST_CHANGE#   FIRST_TIME            SWITCH_CHANGE#
    1         31612       381284375       2008/11/13 00:01:29   381293843
    1         31613       381293843       2008/11/13 00:12:12   381305142
    1         31614       381305142       2008/11/13 03:32:39   381338724
    1         31615       381338724       2008/11/13 06:29:21   381362057
    1         31616       381362057       2008/11/13 07:00:39   381371178
    1         31617       381371178       2008/11/13 07:13:01   381457916
    1         31618       381457916       2008/11/13 09:26:17   381469012
    1         31619       381469012       2008/11/13 10:27:19   381478636
    1         31620       381478636       2008/11/13 10:59:54   381488508
    1         31621       381488508       2008/11/13 11:38:33   381498759
    1         31622       381498759       2008/11/13 12:05:14   381506545
    1         31623       381506545       2008/11/13 12:33:48   381513732
    1         31624       381513732       2008/11/13 13:08:10   381521338
    1         31625       381521338       2008/11/13 13:50:15   381531371
    1         31626       381531371       2008/11/13 14:38:36   381540689
    1         31627       381540689       2008/11/13 15:02:19   381549493
    1         31628       381549493       2008/11/13 15:43:39   381556307
    1         31629       381556307       2008/11/13 16:07:47   381564737
    1         31630       381564737       2008/11/13 16:39:45   381571786
    1         31631       381571786       2008/11/13 17:07:07   381579026
    1         31632       381579026       2008/11/13 17:37:26   381588121
    1         31633       381588121       2008/11/13 18:28:58   381595963
    1         31634       381595963       2008/11/13 20:00:41   381602469
    1         31635       381602469       2008/11/13 22:23:20   381612866
    1         31636       381612866       2008/11/14 00:01:28   381622652
    1         31637       381622652       2008/11/14 00:09:52   381634720
    1         31638       381634720       2008/11/14 03:32:00   381688156
    1         31639       381688156       2008/11/14 07:00:30   381703441
    14.11.2008         Log File information from control file                                10:01:32
      Group     Thread    Sequence   Size         Nr of     Archive          First           Time 1st SCN
      Nr        Nr        Nr         (bytes)      Members        Status      Change Nr       in log
      1         1         31638      52428800     2         YES  INACTIVE    381634720       2008/11/14 03:32:00
      2         1         31639      52428800     2         YES  INACTIVE    381688156       2008/11/14 07:00:30
      3         1         31641      52428800     2         NO   CURRENT     381783353       2008/11/14 09:50:09
      4         1         31640      52428800     2         YES  ACTIVE      381703441       2008/11/14 07:15:07
    Regards,
