DB_TXN_NOSYNC, log write to disk...

Hi Everyone...
If I choose to use "non-durable" transactions and set the DB_TXN_NOSYNC flag, what algorithm does Berkeley DB follow to determine when to write the log files to disk?
Are there any parameters that can be set to tweak this behaviour?
Thank you very much for your help with this.
Rajeev

Hi Rajeev,
There are a number of ways you can tweak this behavior. Some of the relevant methods and references describing this can be found here:
http://www.oracle.com/technology/documentation/berkeley-db/db/api_c/env_set_lg_bsize.html
You can also explicitly flush the log by calling log_flush, or force a checkpoint by calling txn_checkpoint. For more information:
http://www.oracle.com/technology/documentation/berkeley-db/db/api_c/log_flush.html
http://www.oracle.com/technology/documentation/berkeley-db/db/api_c/txn_checkpoint.html
http://www.oracle.com/technology/documentation/berkeley-db/db/api_c/memp_sync.html
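As a rough sketch of how these pieces fit together (an illustration only, assuming the C API from the pages linked above; the home path is a placeholder and error handling is abbreviated):

#include <db.h>

int configure_env(const char *home)    /* "home" is a placeholder path */
{
    DB_ENV *env;
    int ret;

    if ((ret = db_env_create(&env, 0)) != 0)
        return ret;

    /* DB_TXN_NOSYNC: commit returns once the record is in the
     * in-memory log buffer; the buffer is written out when it fills. */
    env->set_flags(env, DB_TXN_NOSYNC, 1);

    /* A larger log buffer means fewer, larger writes to the log files.
     * Must be called before the environment is opened. */
    env->set_lg_bsize(env, 1024 * 1024);    /* 1MB; the default is much smaller */

    if ((ret = env->open(env, home,
                         DB_CREATE | DB_INIT_LOCK | DB_INIT_LOG |
                         DB_INIT_MPOOL | DB_INIT_TXN, 0)) != 0) {
        env->close(env, 0);
        return ret;
    }

    /* Force everything in the in-memory log buffer to disk on demand... */
    env->log_flush(env, NULL);

    /* ...or take a checkpoint, which also flushes the log. */
    env->txn_checkpoint(env, 0, 0, 0);

    return 0;
}

In short: with DB_TXN_NOSYNC the buffer size set via set_lg_bsize determines how much log can accumulate before Berkeley DB writes it out, and log_flush/txn_checkpoint let you force the write at points you choose.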
Ron

Similar Messages

  • Log writer taking more time to write

    Hi All,
    We have Oracle 10g (10.1.0) installed on RHEL 5.3 and are facing an issue with the log writer: it is taking a long time to write to the online redo log file, and the trace file shows the warning "log writer time 840ms". I am not sure of the root cause. Please help me out in this regard. Thanks in advance.

    imran khan wrote:
    The archive log mode is disabled for the DB. The online redo log files are not in the FRA; instead they are kept in disk groups (multiplexed in two disk groups). The control files and datafiles are also stored in the same location as the online redo log files. The database release is 10.2.0.4.0 and all the components are 10.2.0.4.0 as well.
    I also found that the ASM instance release is 10.2.0.4.0 (v$version view), but the compatibility attribute of the disk groups is 10.1.0.0. The compatible parameter in the ASM instance shows 10.2.0.3.0.
    Do I need to change the compatibility of the disk groups and the compatible parameter in the ASM instance to 10.2.0.4.0?
    I am not sure whether ASM compatibility has any impact in delaying LGWR writes to the online redo log files.
    Note: The online redo log files are stored in ASM disk groups.
    Please suggest...
    If your redo is in the same physical location as your datafiles, don't you think there might be some contention? (In my OLTP experience, undo is the most-hit data file, too.)
    There could also be something more fundamental wrong, such as misconfigured I/O, and your redo just seems small. How often are the logs switching? Are you using huge pages (as opposed to configuring them and unknowingly not using them)? Do you see any actual swapping to disk going on?
    You likely have an OS I/O problem, but you appear to be looking only at Oracle's view of it. What does the OS say about it?
    Are you sure you want to be running a production database without archive logging?

  • Redo log backup to disk failed

    Hi,
    My archive log backup to disk failed
    BR0002I BRARCHIVE 7.00 (13)                                                                               
    BR0006I Start of offline redo log processing: aeamsbyx.svd 2009-05-04 15.15.07                                                   
    BR0477I Oracle pfile E:\oracle\DV1\102\database\initDV1.ora created from spfile E:\oracle\DV1\102\database\spfileDV1.ora         
    BR0101I Parameters                                                                               
    Name                           Value                                                                               
    oracle_sid                     DV1                                                                               
    oracle_home                    E:\oracle\DV1\102                                                                               
    oracle_profile                 E:\oracle\DV1\102\database\initDV1.ora                                                            
    sapdata_home                   E:\oracle\DV1                                                                               
    sap_profile                    E:\oracle\DV1\102\database\initDV1.sap                                                            
    backup_dev_type                disk                                                                               
    archive_copy_dir               W:\oracle\DV1\sapbackup                                                                           
    compress                       no                                                                               
    disk_copy_cmd                  copy                                                                               
    cpio_disk_flags                -pdcu                                                                               
    archive_dupl_del               only                                                                               
    system_info                    SAPServiceDV1 SAP2DQSRV Windows 5.2 Build 3790 Service Pack 1 Intel                               
    oracle_info                    DV1 10.2.0.2.0 8192 21092 71120290                                                                
    sap_info                       46C SAPR3 DV1 W1372789206 R3_ORA 0020109603                                                       
    make_info                      NTintel OCI_10103_SHARE Apr  5 2006                                                               
    command_line                   brarchive -u / -c force -p initDV1.sap -sd                                                        
    BR0013W No offline redo log files found for processing                                                                           
    BR0007I End of offline redo log processing: aeamsbyx.svd 2009-05-04 15.15.11                                                     
    BR0280I BRARCHIVE time stamp: 2009-05-04 15.15.11                                                                               
    BR0004I BRARCHIVE completed successfully with warnings                                                                               
    I have checked the target directory and nothing was backed up. I have gone through a few SAP notes (10170, 17163, 132551, 490976 and 646681) but nothing helped.
    Another question: in the DB13 calendar (Schedule an action pattern) I can back up at most one month of redo logs, but I have three months of redo log files. How can I back up those files?
    Our environment is SAP R/3 4.6C, Windows 2003 and Oracle 10.2.0.2.0.
    Please can someone help me with this.
    Thanks and Regards
    Satya

    Update your BRTools. They are very old.
    Check that your DB is in archive log mode. If not, enable it.
    Testing the backup:
    - run an online backup
    - run "sqlplus / as sysdba"
    - SQL> alter system switch logfile; ... this switches the current online log... a new log will be written to oraarch.
    - run an archive log backup
    ... now you should have a complete DB backup with at least one archive log.
    Now you can delete old redo logs from oraarch.
    If this doesn't work and your database is in archive log mode:
    - shut down SAP and Oracle
    - MOVE all redo logs from oraarch to another location manually... no files should remain in oraarch
    - run an offline backup
    If the offline backup ran successfully you can delete the previously moved redo logs; the backup is consistent and the redo logs are no longer required.
    - start Oracle and SAP
    Oracle should now write new redo logs to oraarch. Test the online backup!

  • Improving redo log writer performance

    I have a database on RAC (2 nodes)
    Oracle 10g
    Linux 3
    2 servers PowerEdge 2850
    I'm tuning my database with Spotlight. I already have this alert:
    "The Average Redo Write Time alarm is activated when the time taken to write redo log entries exceeds a threshold."
    The servers are not in RAID 5.
    How can I improve redo log writer performance?
    Unlike most other Oracle write I/Os, Oracle sessions must wait for redo log writes to complete before they can continue processing.
    Therefore, redo log devices should be placed on fast devices.
    Most modern disks should be able to process a redo log write in less than 20 milliseconds, and often much lower.
    To reduce redo write time see Improving redo log writer performance.
    See Also:
    Tuning Contention - Redo Log Files
    Tuning Disk I/O - Archive Writer

    Some comments on the section that was pulled from Wikipedia. There is some confusion in the market, as there are different types of solid state disks with different pros and cons. The first major point is that the quote pulled from Wikipedia addresses issues with Flash hard disk drives. Flash disks are one type of solid state disk; they would be a bad solution for redo acceleration (as I will attempt to describe below), though they could be useful for accelerating read-intensive applications. The type of solid state disk used for redo logs uses DDR RAM as the storage media. You may decide to discount my advice because I work with one of these SSD manufacturers, but I think if you do enough research you will see the point. There are many articles, and many more customers, who have used SSD to accelerate Oracle.
    > Assuming that you are not CPU constrained, moving the online redo to high-speed solid-state disk can make a huge difference.
    Do you honestly think this is practical and usable advice, Don? There is a HUGE price difference between SSD and normal hard disks. Never mind the following disadvantages. Quoting (http://en.wikipedia.org/wiki/Solid_state_disk):
    Price - As of early 2007, flash memory prices are still considerably higher per gigabyte than those of comparable conventional hard drives - around $10 per GB compared to about $0.25 for mechanical drives.
    Comment: Prices for DDR RAM based systems are actually higher than this, with a typical list price around $1000 per GB. Your concern, however, is not price per capacity but price for performance. How many spindles will you have to spread your redo log across to get the performance that you need? How much impact are the redo logs having on your RAID cache effectiveness? Our system is obviously geared to the enterprise, where Oracle is supporting mission-critical databases and a huge return can be made on accelerating Oracle.
    Capacity - The capacity of SSDs tends to be significantly smaller than the capacity of HDDs.
    Comment: This statement is true. Per individual device, a hard disk drive typically offers higher storage density than a solid state disk. However, if your goal is redo log acceleration, storage capacity is not your bottleneck; write performance can be. Keep in mind that, just as with any storage media, you can deploy an array of solid state disks that provides terabytes of capacity (with either DDR or flash).
    Lower recoverability - After mechanical failure the data is completely lost as the cell is destroyed, while if a normal HDD suffers mechanical failure the data is often recoverable using expert help.
    Comment: If you lose a hard drive holding your redo log, the last thing you are likely to do is have a disk restoration company partially restore your data. You ought to be getting data from your mirror or RAID to rebuild the failed disk. Similarly, with solid state disks (flash or DDR) we recommend host-based mirroring to provide enterprise levels of reliability. In our experience, a DDR based solid state disk has a failure rate equal to the odds of losing two hard disk drives in a RAID set.
    Vulnerability against certain types of effects, including abrupt power loss (especially DRAM based SSDs), magnetic fields and electric/static charges, compared to normal HDDs (which store the data inside a Faraday cage).
    Comment: This statement is all FUD. For example, our DDR RAM based systems have redundant power supplies, N+1 redundant batteries, and four RAID protected "hard disk drives" for data backup. The memory is ECC protected and Chipkill protected.
    Slower than conventional disks on sequential I/O.
    Comment: Most flash drives will be slower on sequential I/O than a hard disk drive (to really understand this you should know there are different kinds of flash memory, which also affect flash performance). DDR RAM based systems, however, offer enormous performance benefits versus hard disk or flash based systems for sequential or random writes. DDR RAM systems can handle over 400,000 random write I/Os per second (the number is slightly higher for sequential access). We would be happy to share some Oracle ORION benchmark data to make the point. For redo logs on a heavily transactional system, the latency of the redo log storage can be the ultimate limit on the database.
    Limited write cycles. Typical Flash storage will typically wear out after 100,000-300,000 write cycles, while high endurance Flash storage is often marketed with endurance of 1-5 million write cycles (many log files, file allocation tables, and other commonly used parts of the file system exceed this over the lifetime of a computer). Special file systems or firmware designs can mitigate this problem by spreading writes over the entire device, rather than rewriting files in place.
    Comment: This statement is mostly accurate but refers only to flash drives. DDR RAM based systems, such as those Don's books refer to, do not have this limitation.
    > Looking at many of your postings to Oracle Forums thus far Don, it seems to me that you are less interested in providing actual practical help, and more interested in self-promotion - of your company and the Oracle books produced by it.
    > .. and that is not a very nice approach when people post real problems wanting real world practical advice and suggestions.
    Comment: Contact us and we will see if we can prove to you that Don, and any number of other reputable Oracle consultants, recommend using DDR based solid state disk to solve redo log performance issues. In fact, if it looks like your system could see a serious performance increase, we would be happy to put you on our evaluation program so that you can try it out at no cost.

  • Log Buffer to disk?

    Hi,
    When is the information in the Log Buffer saved to disk?
    Thanks,
    Felipe

    Hi Felipe,
    Let's briefly summarize the redo process. When Oracle blocks are changed, including undo blocks, Oracle records the changes in the form of change vectors, which are referred to as redo entries or redo records. The changes are written by the server process to the redo log buffer in the SGA. The redo log buffer is then flushed to the online redo logs in near-real-time fashion by the log writer (LGWR).
    The redo logs are written by LGWR:
    •     when a user issues a commit;
    •     when the Log Buffer is one-third full;
    •     when the amount of redo entries reaches 1MB;
    •     every three seconds;
    •     when a database checkpoint takes place (the redo entries are written before the checkpoint to ensure recoverability).
    Remember that redo logs heavily influence database performance because a commit cannot complete until the transaction information has been written to the logs. You must place your redo log files on your fastest disks served by your fastest controllers. If possible, do not place any other database files on the same disks as your redo log files. Because only one group is written to at a given time, there is no harm in having members from several groups on the same disk.
    To avoid losing information that could be required to recover the database at some point, Oracle has an archiver (ARCn) background process that archives redo log files when they become filled. However, it's important to note that not all Oracle databases will have the archive process enabled. An instance with archiving enabled is said to operate in ARCHIVELOG mode, and an instance with archiving disabled is said to operate in NOARCHIVELOG mode.
    You can determine which mode is in use either by checking the value of the LOG_ARCHIVE_START parameter in your instance startup parameter file (pfile or spfile - this parameter is deprecated in version 10g), by querying v$database ("ARCHIVELOG" indicates archiving is enabled, "NOARCHIVELOG" indicates it is not), or by issuing the SQL ARCHIVE LOG LIST command.
    SQL> Select log_mode from v$database;
    LOG_MODE
    ARCHIVELOG
    SQL> archive log list
    Database log mode                    Archive Mode
    Automatic archival                    Enabled
    Archive destination                   USE_DB_RECOVERY_FILE_DEST
    Oldest online log sequence       8
    Next log sequence to archive   10
    Current log sequence               10
    The purpose of redo generation is to ensure recoverability. This is why Oracle does not give the DBA much control over redo generation. If the instance crashes, all the changes within the SGA will be lost; Oracle will then use the redo entries in the online redo log files to bring the database to a consistent state. The cost of maintaining the redo log records is an expensive operation involving latch management (CPU) and frequent write access to the redo log files (I/O).
    Regards,
    Francisco Munoz Alvarez
    www.oraclenz.com

  • Speed it takes to write to disk

    I am looking for some T-SQL that I can use to test how long it takes to write to disk. I have two disks, G and H, and I need T-SQL that will tell me which one takes longer to write to. That should be pretty simple, right?
    Alan

    I assume you are interested in the time it takes to write data pages. This is simple at first glance, but you need to consider how the database engine performs writes.
    When you insert, update, or delete data, SQL Server does not immediately write the modified pages to disk. Instead, data is modified only in memory, and a record is written to the transaction log upon COMMIT or when the log buffer is full. The actual writes of modified pages occur asynchronously; they are done by the Lazy Writer and Checkpoint background processes. You have little control over when those writes occur, so it is difficult to measure their timing.
    Also consider that writes are actually rewrites of existing data pages. The page to be modified must first be read into memory, where it is modified as described above. These physical reads are synchronous during query execution, and you also have little control over whether the page is already in memory, which will influence query time.
    Be aware that transaction log writes are mostly sequential and are on the critical path for the performance of insert/update/delete queries. Data page writes are mostly random and therefore take much longer on spinning media due to seek time. Data file write performance helps the background processes write dirty pages to disk faster, allowing buffer cache to be reused sooner.
    That being said, T-SQL is not the right tool to measure hardware performance, and you need to consider what types of write I/O you want to measure (sequential versus random). I suggest you use a tool like SQLIO for this task. See
    http://www.microsoft.com/en-us/download/details.aspx?id=20163
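    If you want a quick host-level sanity check before setting up SQLIO, a small C program along the following lines can compare a burst of sequential writes to the two drives. This is an illustration only - the G:\ and H:\ paths, file names, and sizes are assumptions, and because it writes through the OS cache it measures cached sequential writes rather than the unbuffered I/O a tool like SQLIO exercises.

    #include <stdio.h>
    #include <string.h>
    #include <time.h>

    /* Write total_bytes to path in 64KB chunks and return elapsed seconds. */
    static double time_writes(const char *path, long total_bytes)
    {
        enum { CHUNK = 64 * 1024 };
        static char buf[CHUNK];
        long written;
        clock_t start;
        FILE *f = fopen(path, "wb");

        if (f == NULL) {
            perror(path);
            return -1.0;
        }
        memset(buf, 0xAB, sizeof buf);

        start = clock();    /* on Windows, clock() tracks elapsed wall time */
        for (written = 0; written < total_bytes; written += CHUNK)
            fwrite(buf, 1, CHUNK, f);
        fflush(f);          /* push stdio buffers to the OS */
        fclose(f);
        return (double)(clock() - start) / CLOCKS_PER_SEC;
    }

    int main(void)
    {
        const long total = 256L * 1024 * 1024;    /* 256MB per drive */
        const char *paths[] = { "G:\\writetest.tmp", "H:\\writetest.tmp" };
        int i;

        for (i = 0; i < 2; i++) {
            double secs = time_writes(paths[i], total);
            if (secs > 0)
                printf("%s: %.2f s (%.1f MB/s)\n", paths[i], secs,
                       (total / (1024.0 * 1024.0)) / secs);
            remove(paths[i]);    /* clean up the test file */
        }
        return 0;
    }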
    Dan Guzman, SQL Server MVP, http://www.dbdelta.com

  • Warning: log write time 7706, size 2KB

    Below are the contents of the lgwr trace file:
    Warning: log write time 4040ms, size 1KB
    *** 2010-01-06 11:28:03.962
    Warning: log write time 3680ms, size 8KB
    *** 2010-01-06 11:28:42.510
    Warning: log write time 4010ms, size 1KB
    *** 2010-01-06 11:28:50.576
    Warning: log write time 8060ms, size 18KB
    *** 2010-01-06 11:28:58.582
    Warning: log write time 8010ms, size 2KB
    *** 2010-01-06 11:29:06.609
    Warning: log write time 8020ms, size 36KB
    *** 2010-01-06 11:29:10.626
    Warning: log write time 4010ms, size 3KB
    *** 2010-01-06 11:29:18.653
    Warning: log write time 8030ms, size 7KB
    *** 2010-01-06 11:29:26.704
    Warning: log write time 8050ms, size 3KB
    *** 2010-01-06 11:29:30.737
    Warning: log write time 4030ms, size 5KB
    *** 2010-01-06 11:29:38.495
    Warning: log write time 7760ms, size 2KB
    User inserts take forever. Note 601316.1 states that this can be ignored, but the problem still persists.
    KK

    The MOS note also says to check for OS issues such as a disk that is too slow. Did you check this?

  • Apache Sling Logging Writer Configuration

    Hi,
    I'm having an issue where my custom log writer configuration is sometimes not picked up and used by CQ5. I've created a custom error log writer based on the example provided at http://helpx.adobe.com/cq/kb/HowToRotateRequestAndAccessLog.html, which I've installed on an author and replicated to the publishers, and all seemed to be working correctly. The settings are:
    Log File: ../logs/error.log
    Number of Log Files: 5
    Log File Threshold: 200MB
    After installing this, the error logs rotated at 200MB resulting in error.log.0, error.log.1 etc as expected.
    However, after rebuilds on the machines, this configuration was overwritten (as expected). So I've installed and replicated the package again, but now the configuration is not taking effect. I'm using the exact same config package as before, but the logs don't seem to be rotating at all now (not even daily). I've deleted the config from the Felix console on all authors and publishers, reinstalled on the authors, replicated to the publishers, then restarted the CQ5 service on all machines, but it's still not working.
    So I have a couple of questions about this:
    Is there somewhere else in CQ5 that might be overriding these log writer config settings?
    Is ../logs/error.log correct for the standard log file location?  A note in step 3 of Creating Your Own Loggers and Writers on http://dev.day.com/docs/en/cq/current/deploying/configure_logging.html states:
    Log writer paths are relative to the crx-quickstart/launchpad location.
    Therefore, a log file specified as logs/thelog.log writes to crx-quickstart/launchpad/logs/thelog.log.
    To write to the folder crx-quickstart/logs the path must be prefixed with ../ (as ../logs/thelog.log).
    So a log file specified as ../logs/thelog.log writes to crx-quickstart/logs/thelog.log.
         The configs in the log rotation example also use ../logs. However, when looking at the default/standard logging writer config, it uses logs/error.log. Which one is correct?
    Any help on what's going on here would be appreciated!!
    Thanks,
    K

    Hi,
    I am answering your last question and this one here only.
    1. Log configuration can be overridden at the project level by creating a config that overrides the factory "org.apache.sling.commons.log.LogManager.factory.config-<identifier>" and, if required, the writer "org.apache.sling.commons.log.LogManager.factory.writer-<identifier>". So please check whether you have configured logging at the project level. If you still cannot see it, recheck your log configuration in the Felix console at http://localhost:4502/system/console/slinglog, where you can see the entire log configuration for the CQ system.
    2. Both will work, but the location changes:
         i. CQ 5.5 - the log file is located under "crx-quickstart"; ../logs/<filename>.log will be created under it
         ii. CQ 5.4 - logs are created under "crx-quickstart\logs" and also under "crx-quickstart\launchpad\logs", so if you select ../logs/ it will go under "crx-quickstart\logs" only, but you can change where you want to store it. Again, you can see this configuration info at http://localhost:4502/system/console/slinglog
    3. You can also customize your logs by creating them at the project level and assigning the identifier (with all the other configuration parameters). For more information you can refer to http://sling.apache.org/site/logging.html apart from the links shared earlier.
    I hope the above helps you proceed. Please let me know if you need more information.
    Thanks,
    Pawan

  • "Can't Read Or Write to disk" Video Won't Transfer To Ipod

    I have transferred videos to my iPod in the past, but I have two TV shows from the iTunes Store that won't transfer.
    They are in the correct MPEG-4 format, and my iTunes is the most recent version, but for some reason I keep getting the same error:
    "Cannot Read Or Write To Disk".
    HELP??!!
      Windows XP  

    I've got this problem too. My 5G iPod 30GB syncs after about 3 minutes with iTunes; then, after loading songs and videos for 10 minutes, I get the same message you're getting - "Can't Read Or Write to disk". I think there's some kind of USB 2.0 conflict or something. All my other USB devices work fine. All my drivers and software are up to date.

  • Ipod will sync up once then says cannot read or write to disk.

    Is this normal? The iPod shows up in iTunes after the first sync, but when I go and try to update the iPod again it says it cannot read or write to disk. Tech support says this is normal, but it does not make any sense as the iPod still shows up in iTunes.

    The iPod will sync up when I first plug it in. The problem is when it is finished: iTunes says the iPod update is complete and you can disconnect. I leave it connected, and if I try to update the iPod again after I download something into iTunes, it comes back with the cannot-read-from-or-write-to-disk error. I can unplug it and plug it back in and it works fine. There is something with the software, I guess, that is not staying connected with the hardware after the first sync.

  • Music Downloads - unable to read or write to disk.

    Hi, I'm new to iPods, and when I try to download music I get the message "Unable to update iPod, unable to read / write to disk"...
    Originally I had no problems; now I am unable to download more than one song at a time, and only manually... it's taking ages...
    Does anyone have any ideas? I've restored, reset everything and downloaded the latest software...

    Hi,
    Unless you are using OCR (optical character recognition) software when doing the scan, the result of scanning a document is only an image, a picture of the document. Since you only have a picture of the document at this point, the only thing that you can do with it in Word is to insert the image into the document. As an image, you won't be able to edit or search the text that appears in the image. You need to run OCR software to examine the image and convert the images of the characters into editable text. If you are already using OCR software, reply back and let me know which application you are using, and I'll try to help you further.
    Hope this helps,
    Ken

  • "Can't read from or write to disk...." iPod error

    I recently switched from my old PowerBook G4 (running Leopard 10.5.2) to a newer MacBook Pro (running Leopard 10.5.5). I migrated my account and permissions from the PB to the MacBook Pro, and everything is working just fine except for iTunes (version 8.0.2) and my older 160 GB iPod Classic (running 1.1.2).
    NOTE: THE VERSION OF iTunes ON THE PB WAS VERSION 8.0.1.
    Since the migration, I've been getting this "can't read from or write to disk..." error whenever I attempt to sync my iPod to iTunes. Previously, I'd never had any problems. I have tried to restore the iPod using iTunes - didn't work. I have tried different USB-to-dock-connector cables - didn't work. I have even tried completely reformatting the iPod using Disk Utility - didn't work. Every time I begin syncing the iPod, it starts out just fine. At varying degrees of completion it ALWAYS fails with the same error.
    I had an idea. I tried syncing the iPod to a different computer - my G5 4 x 2.5 GHz (running Leopard 10.5.5) with iTunes (version 8.0.2). I still got the same error. I haven't dropped it, submerged it in water or done anything to it that would otherwise compromise its functionality. So, I'm at a loss.
    PLEASE! PLEASE! PLEASE!
    Does anyone have any other suggestions? - OTHER THAN BUYING A NEW IPOD

    UPDATE:
    OK, after the visit to my Genius Bar; I thought of a variable that wasn't previously considered. Because their transfer of media was successful without the slightest hitch, I had to figure out something else.
    Every time I had tried to sync the iPod with my various Macs, I was using the same process. Since all of my music and movies "live" on an external 500 GB drive, I was attempting to use that same drive whenever I would try to sync. It occurred to me that possibly the failure was in the drive and not the iPod.
    I have part of that library on a Power Mac G4 867 DP (running 10.4.9). I updated the iTunes on that machine to 8.0.2, connected the iPod and made a successful transfer of a large amount of media!
    My only problem now is that I can't figure out how to get the 500 GB drive to stop hanging on transfer and update of my Genius information. The drive just completely stalls and I get the spinning wheel from iTunes and the application ceases to respond. When I force quit iTunes, the drive recovers fine. I don't have any problems transferring data to and from it and Disk Utility reports that the drive is healthy.
    I have tried re-installing iTunes on all affected Macs - didn't help. I believe my problem may exist in the iTunes Library file or in the associated .xml document. My only concern is that if I do a completely fresh install of iTunes where those files would be re-created, I would lose all of my play and ratings data as well as the information about when my files were added. On a related note, when playing the music from this drive through iTunes. I don't have any issues at all.
    ANY SUGGESTIONS?

  • Best way to move redo log from one disk group to another in ASM?

    Hi All,
    Our DB is a 10.2.0.3 RAC DB, and the database servers run Windows 2003 Server.
    We need to move more than 50 redo logs (some regular, some standby) which are not redundant from one disk group to another - say from disk group 1 to disk group 2. Here are the options we are considering, but we are not sure which one is best from an ease and safety perspective.
    Thank you very much for your help in advance.
    Shirley
    Option 1:
    1)     shutdown immediate
    2)     copy log files from disk group 1 to disk group 2 using RMAN (need to research this)
    3)     startup mount
    4)     alter database rename file ….
    5)     open the database (alter database open)
    6)     delete the redo files from disk group 1 in ASM (how?)
    Option 2:
    1)     create a set of redo log groups in disk group 2
    2)     drop the redo log groups in disk group 1 when they are inactive and have been archived
    3)     delete the redo files associated with those dropped groups from disk group 1 (how?) (according to the Oracle manual, when you drop a redo log group the operating system files are not deleted and you need to delete them manually)
    Option 3:
    1)     create a set of redo members in disk group 2 for each redo log group in disk group 1
    2)     drop the redo log memebers in disk group 1
    3)     delete the redo files from disk group 1 associated with the dropped members

    Absolutely not, they are not even remotely similar concepts.
    OMF: Oracle Managed Files. It is an RDBMS feature: no matter what your storage technology is, Oracle will take care of file naming and location; you only have to define the size of a file. In the case of a tablespace in an OMF DB configuration you only need to issue a command similar to this:
    CREATE TABLESPACE <TSName>;
    The OMF environment then creates an autoextensible datafile at the predefined location, with 100M by default as its initial size.
    On ASM it should only be required to specify '+DGroupName' as the datafile or redo log file argument so it can be fully managed by ASM.
    EMC: http://www.emc.com - no further comments on it.
    ~ Madrid
    http://hrivera99.blogspot.com

  • On RAID 10 - How to relieve Log Writer Slow Write Time Trace Files

    We have a Dell 8-CPU 5460 3.16GHz Xeon with a Dell OpenManage RAID 10 array
    Oracle 10g 10.2.0.4 on RedHat EL 5 with
    filesystemio_options='' (DEFAULT)
    disk_asynch_io='TRUE' (NOT DEFAULT)
    Running 2 instances of 64-bit 10g 10.2.0.4 with an app that does a lot of row updates and uses BLOBs heavily.
    Our storage (RAID 10) is presented through a single mount point.
    I periodically see messages like the following in an lgwr trace file:
    Warning: log write time 560ms, size 5549KB
    *** 2010-02-25 17:22:24.574
    Warning: log write time 650ms, size 6759KB
    *** 2010-02-25 17:22:25.103
    Warning: log write time 510ms, size 73KB
    *** 2010-02-25 20:33:00.015
    Warning: log write time 540ms, size 318KB
    *** 2010-02-25 20:35:17.956
    Warning: log write time 800ms, size 5KB
    Note that most of these are larger chunks of data.
    Our log wait histogram is as follows:
    EVENT#  EVENT                    WAIT_TIME_MILLI  WAIT_COUNT
    106     log file parallel write  1                465780158
    106     log file parallel write  2                5111874
    106     log file parallel write  4                5957262
    106     log file parallel write  8                2171240
    106     log file parallel write  16               1576186
    106     log file parallel write  32               1129199
    106     log file parallel write  64               852217
    106     log file parallel write  128              2092462
    106     log file parallel write  256              508494
    106     log file parallel write  512              109449
    106     log file parallel write  1024             55441
    106     log file parallel write  2048             11403
    106     log file parallel write  4096             1197
    106     log file parallel write  8192             29
    106     log file parallel write  16384            5
    In discussions with the group that builds and maintains the systems (the DBAs do not), we have asked for more spindles / HBAs / mount points to address this issue. We have been advised that since RAID 10 spreads the I/Os across multiple drives, this will not affect the situation.
    Our thoughts are that multiple HBAs going to separate RAID 10 devices would help relieve the pressure.
    Thank you.

    Is this an internal RAID array? Is it composed of SCSI (SAS) or SATA drives? How many drives are in the array?
    Does the RAID controller have a built in battery backed cache (some of Dell's RAID controllers have 128MB, 256MB, or 512MB of battery backed cache). If the RAID controller has a battery backed cache, consider disabling the caching of all read operations, and set the write policy to write-back (see: http://support.dell.com/support/edocs/software/svradmin/5.1/en/omss_ug/html/cntrls.html ).
    Charles Hooper
    Co-author of "Expert Oracle Practices: Oracle Database Administration from the Oak Table"
    http://hoopercharles.wordpress.com/
    IT Manager/Oracle DBA
    K&M Machine-Fabricating, Inc.

  • How to reduce "Wait for Log Writer"

    Hi,
    In a production system using MaxDB 7.6.03.07 I checked the following log activity:
    Log Pages Written: 32,039
    Wait for Log Writer: 31,530
    The docs explain that "Wait for Log Writer" indicates how often it was necessary to wait for a log entry to be written.
    What steps should I follow to reduce this?
    thanks for any help
    Clóvis

    Hi,
    When the log I/O queue is full, all user tasks that want to insert entries into the log I/O queue have to wait until the queue's log entries have been written to the log volume - they are waiting for the log writer.
    First, check the size of the LOG_IO_QUEUE parameter and think about increasing its value.
    Second, check the time for write I/O to the log -> use DB Analyzer and activate time measurement via the x_cons <DBNAME> time enable command.
    You will then get the time for write I/O on the log in the DB Analyzer log files (expert).
    You will find more information about MaxDB logging and performance analysis on maxdb.sap.com -> [training material|http://maxdb.sap.com/training], chapter Logging and Performance Analysis.
    Regards, Christiane
