Reads on redo logs when on file system

I have been concentrating on redo tuning and am surprised to find that when the redo logs are on a file system (JFS2, AIX 5.3, Oracle 10.1.0.4), iostat reports reads on the device. I have verified that no process other than LGWR touches these drives.
If I move the redo logs to raw devices, these reads disappear - they are zero.
How can this be explained?
There's no archiving or replication going on at all.
Is there something I could do to prevent the reads on the file system?
I did not find anything on the web or in Metalink about this issue.
Thanks a lot, mj

Probably because using raw devices bypasses operating system handling. Oracle accesses files on raw devices directly, so iostat doesn't know about it; in fact, I'd be surprised if iostat knew much of anything about raw devices at all. Try looking for information on using raw devices on AIX - it's more of an operating system issue than an Oracle issue. In a nutshell, if you're using cooked files, the operating system has to get involved for any application to access them.
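To confirm which process is actually touching the log files, one quick check (a minimal sketch - the path is hypothetical, and fuser is a standard AIX/Unix utility) is to list the members and see who holds them open:
SQL> select member from v$logfile;
$ fuser -u /oradata/prod/redo01a.log
fuser prints the PIDs (and owning users) that have the file open. Note that on a cooked file system some reads belong to no process at all: if a redo write doesn't align with the file system block size, the OS may have to read the block before rewriting it, which is one plausible source of the reads iostat shows.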

Similar Messages

  • Multiplexing redo logs and control files to a separate diskgroup

    General question this one...
    I've been using ASM for a few years now and have always installed a new system with 3 diskgroups
    +DATA - for datafiles, control files, redo logs
    +FRA - for archive logs, flash recovery, RMAN backups
    Those I guess are the standards, but I've always created an extra (very small) diskgroup, called +ONLINE where I keep multiplexed copies of the redo logs and control files.
    My reasoning behind this is that if there are any issues with the +DATA diskgroup, the redo logs and control files can still be accessed.
    In the olden days (all of 5 years ago!), on local storage, this was important, but is it still important now? With all the striping and mirroring going on (both at ASM and RAID level), am I just being overly paranoid? Does this additional +ONLINE diskgroup actually hamper performance, given the unnecessary duplicate-write overhead?
    Thoughts?

    Some of the decision will probably depend on your specific environment's data activity, volume, and throughput.
    Something to remember is that redo logs are sequential-write files, which benefit from a lower RAID write penalty (RAID-10 costs 2 physical writes per logical write, versus 4 I/Os for RAID-5). RAID-10 is often not cost-effective for the data portion of a database. If your database is OLTP with a high volume of random reads/writes, you're potentially hurting redo throughput by creating contention on disks shared between data and redo. Again, that depends entirely on what you're seeing in terms of wait events. A low-volume database would probably not experience any noticeable performance degradation.
    In my environment, I have RAID-5 and RAID-10 available, and since the RAID-10 requirement from a capacity perspective for redo is very low, it makes sense to create 2 diskgroups for online redo, separate from DATA, and separate from each other. This way, we don't need to be concerned with DATA transactions impacting REDO performance, and vice versa, and we still maintain redo redundancy.
    In my opinion, you can't be too paranoid. :)
    Good luck!
    K
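    For reference, a minimal sketch of the layout K describes - each log group multiplexed across two dedicated diskgroups (the diskgroup names and size are assumptions):
    SQL> alter database add logfile group 5 ('+REDO1','+REDO2') size 512m;
    SQL> select group#, member from v$logfile order by group#;
    Each group then has one member in each diskgroup, so losing either diskgroup still leaves a usable copy of every log.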

  • Location of Redo log and control files?

    Dear all,
    I am checking the location of the redo log and control files. I found the redo log files (like log02a.dbf ....) in the same directory as the data files; however, I couldn't find any control files in the data file directories.
    What could be the location of control files?
    Amy

    select name
      from v$controlfile;
    or
    show parameter control_files
    Khurram

  • Multiplex Redo Logs and Control File

    I want to set up an existing Oracle Express 10g instance to multiplex the redo log files and the control file.
    The instance is using Oracle-Managed Files and the Flash Recovery Area.
    With these options being used what are the steps required to setup multiplexing?
    I tried setting the DB_CREATE_ONLINE_LOG_DEST_1 and DB_CREATE_ONLINE_LOG_DEST_2 parameters but this doesn't appear to have worked (I even bounced the db instance).
    BTW, the DB_CREATE_FILE_DEST is set to null and the DB_RECOVERY_FILE_DEST is set to the flash recovery area.
    Any help is much appreciated.
    Regards, Sheila

    Thanks for this. My instance originally had two log groups, so I've added a new member to each group in the same flash recovery area directory, but I assigned the name myself. Is this why is_recovery_dest_file is set to NO when I query v$logfile? Is it OK to assign a name and directory, and if not, how do you add a new member and let Oracle-Managed Files name it?
    Also, how can I check that the multiplexing is working (i.e. that the database is writing to both sets of files)?
    Thanks again.
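    A minimal sketch of the OMF behaviour in question (the size is an assumption): ADD LOGFILE MEMBER always requires an explicit filename, which is presumably why a manually named member shows is_recovery_dest_file = NO. To get Oracle-named, multiplexed members, add a whole new group without filenames, so that each DB_CREATE_ONLINE_LOG_DEST_n location gets an OMF-named member:
    SQL> alter database add logfile size 100m;
    SQL> select group#, member, is_recovery_dest_file from v$logfile order by group#;
    To see that both members are being written, check that v$logfile shows no member with STATUS = 'INVALID' and that the alert log reports no write errors at log switch.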

  • GoldenGate Extract Process will not read from redo log without manual help

    Here is my issue.
    I have GoldenGate replication successfully set up one-way from one source to many targets. There is one source extract on the DB and many pumps that push the trail file data to the targets. Replication does work, but only after manual help starting the source extract process.
    If I execute the command:
    GGSCI> alter extract <source extract name> begin now
    GGSCI> view report <source extract name>
    The extract starts and reads the source trail file but will not process data; I continually see "OGG-01515: Positioning to begin time MMM DD, YYYY, HH:MM:SS" in the ggserr.log file. The date and time are irrelevant to this problem.
    When I see this, I connect to the database with SQL*Plus and query v$log for the current log and sequence #.
    I return to GGSCI and issue the following command:
    GGSCI> alter extract <source extract name> thread 1 extseqno <sequence # from v$log query>
    GGSCI> start <source extract name>
    It then works as expected. Why is this so? I thought alter extract <source extract name> begin now would produce the same result.
    We do use ASM but like I said when I issue the:
    GGSCI> alter extract <source extract name> thread 1 extseqno <sequence # from v$log query>
    It works like it should.
    Very weird.
    - Jason

    Yes, supplemental logging is enabled on both the source and the targets, but why would supplemental logging on the targets have any effect on whether the source extract can read from the source redo log?
    This is not a RAC database, rather single-instance with one thread. Also, we are using DBLOGREADER functionality as it is an 11.2.0.3 database.
    My issue is simply this: when the source extract is down (i.e. it isn't running) and I start it, I issue these commands:
    alter <source extract> begin now
    start <source extract>
    view report <source extract>
    OGG-01515 Positioning to begin time <today's date and time>, i.e. Mar 4, 2013, 3:26:39 PM (this is repeated over and over).
    If I perform
    info <source extract> detail
    I see the following:
    Log Read Checkpoint Oracle Redo Logs 2013-03-04 15:26:39 Thread 1, Seqno 0, RBA 0 (why is it showing 0? Because it can't read the redo - WHY NOT?)
    Extract Source    BEGIN            END
    Not Available     <today's date>   <today's date>   (repeated...)
    However, if I retrieve the current redo log sequence number and issue:
    alter spe thread 1 extseqno (redo log sequence #)
    start spe
    then it works fine. I have to manually tell it which redo log to begin reading from. Why?
    - Jason
    Edited by: 924317 on Mar 4, 2013 9:03 AM

  • Photoshop CC locking up saying "Reading Camera Raw Format" when opening file from Lightroom

    When I open a file from Lightroom, Photoshop CC opens and asks about sending data to Adobe, and no matter what answer you give, the program locks up at the "Reading Camera Raw Format" prompt.

    What files from which camera are you using? Where are they stored? What system? Are Lightroom and Photoshop properly updated and using the same RAW settings? Provide more info.
    Mylenium

  • Crawler Error when indexing file system repository

    Hi all:
    I am configuring an index for a file system repository.
    My EP version is 7.0 SP12, installed on HP-UX.
    My TREX is 7.0, installed on Windows 2003.
    Some of the properties of my repository are:
    Services:    properties, rating
    Property Search Manager:    SimplePropertySearchManager
    Security Manager:    AclSecurityManager
    The state of the repository is green.
    When I create an index, it cannot index any of the files. Please help me find the problem. Thank you!
    The exception is:
    class com.sapportals.wcm.repository.InvalidNameException
    Logon failure: account currently disabled.
    com.sapportals.wcm.repository.InvalidNameException: Logon failure: account currently disabled.
            at com.sapportals.wcm.repository.ResourceException.fillInStackTrace(ResourceException.java:399)
            at java.lang.Throwable.(Throwable.java:195)
            at java.lang.Exception.(Exception.java:41)
            at com.sapportals.wcm.WcmException.(WcmException.java:59)
            at com.sapportals.wcm.util.content.ContentException.(ContentException.java:38)
            at com.sapportals.wcm.repository.ResourceException.(ResourceException.java:238)
            at com.sapportals.wcm.repository.InvalidNameException.(InvalidNameException.java:33)
            at com.sapportals.wcm.repository.util.file.FileUtils.checkFilePathTolerating(FileUtils.java:140)
            at com.sapportals.wcm.repository.util.file.FileUtils.checkFilePath(FileUtils.java:103)
            at com.sapportals.wcm.repository.manager.sfs.FSFile.(FSFile.java:89)
            at com.sapportals.wcm.repository.manager.sfs.FSFile.createInstance(FSFile.java:81)
            at com.sapportals.wcm.repository.manager.sfs.FSRepositoryManager.getResource(FSRepositoryManager.java:223)
            at com.sapportals.wcm.repository.RMAdapter.getResource(RMAdapter.java:227)
            at com.sapportals.wcm.repository.runtime.CmAdapter.findResource(CmAdapter.java:1349)
            at com.sapportals.wcm.repository.runtime.CmAdapter.findManagerAndResource(CmAdapter.java:1322)
            at com.sapportals.wcm.repository.runtime.CmAdapter.getResourceImpl(CmAdapter.java:979)
            at com.sapportals.wcm.repository.runtime.CmAdapter.getResource(CmAdapter.java:192)
            at com.sapportals.wcm.control.base.WcmResourceControl.createResource(WcmResourceControl.java:118)
            at com.sapportals.wcm.control.base.WcmResourceControl.getSafeResource(WcmResourceControl.java:68)
            at com.sapportals.wcm.control.navigation.ResourceDetailsHeaderControl.render(ResourceDetailsHeaderControl.java:160)
            at com.sapportals.wdf.layout.HorizontalLayout.renderControls(HorizontalLayout.java:42)
            at com.sapportals.wdf.stack.Pane.render(Pane.java:155)
            at com.sapportals.wdf.stack.PaneStack.render(PaneStack.java:73)
            at com.sapportals.wdf.layout.HorizontalLayout.renderPanes(HorizontalLayout.java:73)
            at com.sapportals.wcm.control.layout.HorizontalGroupLayout.renderPanes(HorizontalGroupLayout.java:49)
            at com.sapportals.wdf.stack.Pane.render(Pane.java:158)
            at com.sapportals.wdf.stack.PaneStack.render(PaneStack.java:73)
            at com.sapportals.wdf.WdfCompositeController.doInitialization(WdfCompositeController.java:282)
            at com.sapportals.wdf.WdfCompositeController.buildComposition(WdfCompositeController.java:660)
            at com.sapportals.htmlb.AbstractCompositeComponent.preRender(AbstractCompositeComponent.java:33)
            at com.sapportals.htmlb.Container.preRender(Container.java:120)
            at com.sapportals.htmlb.Container.preRender(Container.java:120)
            at com.sapportals.htmlb.Container.preRender(Container.java:120)
            at com.sapportals.portal.htmlb.PrtContext.render(PrtContext.java:406)
            at com.sapportals.htmlb.page.DynPage.doOutput(DynPage.java:237)
            at com.sapportals.wcm.portal.component.base.KMControllerDynPage.doOutput(KMControllerDynPage.java:130)
            at com.sapportals.htmlb.page.PageProcessor.handleRequest(PageProcessor.java:129)
            at com.sapportals.portal.htmlb.page.PageProcessorComponent.doContent(PageProcessorComponent.java:134)
            at com.sapportals.wcm.portal.component.base.ControllerComponent.doContent(ControllerComponent.java:77)
            at com.sapportals.portal.prt.component.AbstractPortalComponent.serviceDeprecated(AbstractPortalComponent.java:209)
            at com.sapportals.portal.prt.component.AbstractPortalComponent.service(AbstractPortalComponent.java:114)
            at com.sapportals.portal.prt.core.PortalRequestManager.callPortalComponent(PortalRequestManager.java:328)
            at com.sapportals.portal.prt.core.PortalRequestManager.dispatchRequest(PortalRequestManager.java:136)
            at com.sapportals.portal.prt.core.PortalRequestManager.dispatchRequest(PortalRequestManager.java:189)
            at com.sapportals.portal.prt.component.PortalComponentResponse.include(PortalComponentResponse.java:215)
            at com.sapportals.portal.prt.pom.PortalNode.service(PortalNode.java:645)
            at com.sapportals.portal.prt.core.PortalRequestManager.callPortalComponent(PortalRequestManager.java:328)
            at com.sapportals.portal.prt.core.PortalRequestManager.dispatchRequest(PortalRequestManager.java:136)
            at com.sapportals.portal.prt.core.PortalRequestManager.dispatchRequest(PortalRequestManager.java:189)
            at com.sapportals.portal.prt.core.PortalRequestManager.runRequestCycle(PortalRequestManager.java:753)
            at com.sapportals.portal.prt.connection.ServletConnection.handleRequest(ServletConnection.java:240)
            at com.sapportals.wcm.portal.connection.KmConnection.handleRequest(KmConnection.java:52)
            at com.sapportals.portal.prt.dispatcher.Dispatcher$doService.run(Dispatcher.java:522)
            at java.security.AccessController.doPrivileged(Native Method)
            at com.sapportals.portal.prt.dispatcher.Dispatcher.service(Dispatcher.java:405)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:853)
            at com.sap.engine.services.servlets_jsp.server.HttpHandlerImpl.runServlet(HttpHandlerImpl.java:401)
            at com.sap.engine.services.servlets_jsp.server.HttpHandlerImpl.handleRequest(HttpHandlerImpl.java:266)
            at com.sap.engine.services.httpserver.server.RequestAnalizer.startServlet(RequestAnalizer.java:387)
            at com.sap.engine.services.httpserver.server.RequestAnalizer.startServlet(RequestAnalizer.java:365)
            at com.sap.engine.services.httpserver.server.RequestAnalizer.invokeWebContainer(RequestAnalizer.java:944)
            at com.sap.engine.services.httpserver.server.RequestAnalizer.handle(RequestAnalizer.java:266)
            at com.sap.engine.services.httpserver.server.Client.handle(Client.java:95)
            at com.sap.engine.services.httpserver.server.Processor.request(Processor.java:175)
            at com.sap.engine.core.service630.context.cluster.session.ApplicationSessionMessageListener.process(ApplicationSessionMessageListener.java:33)
            at com.sap.engine.core.cluster.impl6.session.MessageRunner.run(MessageRunner.java:41)
            at com.sap.engine.core.thread.impl3.ActionObject.run(ActionObject.java:37)
            at java.security.AccessController.doPrivileged(Native Method)
            at com.sap.engine.core.thread.impl3.SingleThread.execute(SingleThread.java:100)
            at com.sap.engine.core.thread.impl3.SingleThread.run(SingleThread.java:170)

    Hi Lou,
    Please go through the following thread and check your configuration against it. I think it will help you.
    https://www.sdn.sap.com/irj/sdn/thread?threadID=403393&messageID=3429730#3429730
    regards,
    Chamkaur

  • Physical Standby Online Redo Log Files

    Hi,
    I'm trying to create a physical standby database (10.2.0.3). I'm a little confused about the requirement for online redo logs on the standby.
    in my standby alert log I get the following when I issue:
    SQL> alter database recover managed standby database disconnect from session
    "ORA-00313: open failed for members of log group 1 of thread 1
    ORA-00312: online log 1 thread 1: '/appl/oradata/prod/prod_1_redo_01_02.log'
    ORA-27037: unable to obtain file status
    SVR4 Error: 2: No such file or directory
    Additional information: 3"
    /appl/oradata/prod/prod_1_redo_01_02.log is the path to the location of the online redo logs on the production system. This file does not exist on the standby filesystem so the error is correct.
    I assume that it gets this information from the standby control file I created on the production system and copied over to the standby.
    Do I need to copy the online redo logs from the primary over to the standby site or do I need to create online redo logs on the standby?
    Does the standby need to have redo log files?
    I'm not talking about 'standby redo logs' of the type created using 'alter database add standby logfile' - I've not got that far yet.
    I just need to establish whether a physical standby requires online redo log files.
    Thanks in advance,
    user234564

    I wanted to update this thread since I've been dealing with the exact same errors. The basic question is: "does a physical standby need the online redo logs?"
    Answer: Not really - until one wants to switch over or fail over (and become a primary database). Furthermore, whenever the MRP process is started, Oracle prepares for a possible switchover/failover by "clearing" the online redo logs (MetaLink note 352879.1). It is not a big deal, since Oracle builds the actual redo files when the "alter database open resetlogs" is performed during a role transition.
    In our situation, we have decided to use our standby for nightly exports. We stop MRP, open the database read-only, then restart MRP. We built these standby DBs with RMAN. The RMAN duplicate process will not build the online redo log files until the database is opened for read/write (with resetlogs). However, we haven't had a need for read/write (i.e. a switchover).
    Thus, every morning we were getting the same errors that "user234564" posted above. At first the errors seemed scary; then we realized they were just a nuisance. To clean things up, all I did was "cp" our standby redo logs (SRLs) into our online redo directories, making sure the names matched what was in v$logfile. When I restarted MRP, the alert log clearly showed Oracle clearing these "newly found" online redo logs.
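    An alternative to copying files in by hand (a minimal sketch; group numbers are assumptions) is to let Oracle rebuild the missing online logs itself, using the same clearing mechanism the MRP relies on:
    SQL> alter database clear logfile group 1;
    SQL> alter database clear logfile group 2;
    Each statement recreates that group's members at the paths recorded in the standby controlfile, so the names always match v$logfile.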

  • The file structure: online redo logs, archived redo logs and standby redo logs

    I have read some Oracle documentation about file structure and settings in a Data Guard environment, but I still have some doubts. What is the best file structure and settings in Oracle 10.2.0.4 on UNIX for a Data Guard environment with 4 primary databases and 4 physical standby databases? Based on the Oracle documents, there are three kinds of redo logs: online redo logs, archived redo logs and standby redo logs. The basic settings are:
    1. Online redo logs --- These must exist on the primary database and on a logical standby database, but are not strictly necessary on a physical standby, because a physical standby is not open and does not generate redo. However, if online redo logs are not set up on the physical standby, how can it operate after a failover or switchover makes it the primary? In my standby databases, online redo logs have been set up.
    2. Archived redo logs --- Obviously the primary database and both logical and physical standby databases need these. The primary uses them to archive log files and ship redo to the standby; the standby receives the data into archived logs and applies it to the database.
    3. Standby redo logs --- The documentation says a standby redo log is similar to an online redo log, except that it is used to store redo data received from another database, and that one is required for the maximum protection and maximum availability levels of data protection, for real-time apply, and for cascaded destinations. So it seems standby redo logs should be set up only on the standby database, not on the primary. Is my understanding correct? Reviewing the current redo log settings in my environment, I found that standby redo log directories and files exist on both the primary and standby databases. I would like more information and education from the experts: what is the best setting or structure on primary and standby databases?

    FZheng:
    Thanks for your input. It is clear that we need all three types of redo logs on both databases. You answered my question.
    But I have another one. The Oracle documentation says that if you have configured a standby redo log on one or more standby databases, you should ensure the size of the current standby redo log file on each standby database exactly matches the size of the current online redo log file on the primary database. It also says that at log switch time, if there is no available standby redo log file that matches the size of the new current online redo log file on the primary database, the primary database will shut down.
    My current Data Guard environment is set up as follows: on the primary DB, the online redo log groups are 512M and the standby redo log groups are 500M; on the standby DB, the online redo log groups are 500M and the standby redo log groups are 750M.
    This was set up by someone I don't know. Is this setting OK, or should I change the standby redo logs on the standby DB to 512M to exactly match the online redo log size on the primary?
    Edited by: 853153 on Jun 22, 2011 9:42 AM
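    A minimal sketch of aligning the standby's SRLs with the primary's 512M online logs (group numbers are assumptions; with an OMF destination set, no filenames are needed):
    SQL> -- on the standby: stop redo apply first, and if
    SQL> -- standby_file_management=AUTO, set it to MANUAL temporarily
    SQL> alter database add standby logfile group 11 size 512m;
    SQL> alter database drop standby logfile group 5;
    SQL> select group#, bytes/1024/1024 as mb from v$standby_log;
    Repeat the add/drop for each mismatched group, then restart managed recovery.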

  • Cannot read Redo log

    Hi,
    We have a site which is used for replication.
    At this site we had a capture, a propagate and an apply process.
    Because of some errors we dropped the capture and propagate processes.
    We also dropped the Streams queue.
    Then we created a new capture and propagate process as well as
    a new Streams queue.
    When we start the capture process, it aborts.
    When we look at the dump file we see the following errors,
    which say that the redo log cannot be read:
    *** 2003-02-07 22:01:23.000
    *** SESSION ID:(22.16) 2003-02-07 22:01:23.000
    ORA-00333: redo log read error block 131074 count 8192
    ORA-00334: archived log: 'E:\ORACLE\ORCL92\RDBMS\ARC00029.001'
    ORA-27070: skgfdisp: async read/write failed
    OSD-04016: Error queuing an asynchronous I/O request.
    O/S-Error: (OS 23) Data error (cyclic redundancy check).
    ORA-00333: redo log read error block 131074 count 8192
    ORA-00334: archived log: 'E:\ORACLE\ORCL92\RDBMS\ARC00029.001'
    ORA-27091: skgfqio: unable to queue I/O
    ORA-27070: skgfdisp: async read/write failed
    OSD-04006: ReadFile() failure, unable to read from file
    O/S-Error: (OS 23) Data error (cyclic redundancy check).

    OSD-04006: ReadFile() failure, unable to read from file
    O/S-Error: (OS 23) Data error (cyclic redundancy check).
    These errors indicate that the file is unreadable from the OS point of view.
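    One way to confirm the damage at the Oracle level (a minimal sketch; the path is taken from the error stack above) is to ask Oracle to dump the archived log - a healthy file is dumped to a trace file, a damaged one fails with the same I/O errors:
    SQL> alter system dump logfile 'E:\ORACLE\ORCL92\RDBMS\ARC00029.001';
    Since OS error 23 is a disk-level CRC failure, the remedy lies outside Oracle: restore that archived log from a backup, or reposition the capture process past it.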

  • Larger redo log file members or more log groups

    Oracle 11gR1 RHEL5 64 bit
    Hi,
    I was wondering what is better from a performance tuning perspective. Log switches are occurring every 2 minutes in our production database. I know for certain that our log file members are too small (100MB). The redo log sizing tool in OEM told me to make them 40G based on the fast_start_mttr_target setting, which is set to 600. Now, my question is, which is the better approach?
    1. Increase the size of my current redo log members? Right now there are 4 groups with 2 members in each.
    OR
    2. Create additional redo log groups (4 more) and then re-run the sizing tool or query the v$instance_recovery view?
    Which is better, and what are the tradeoffs?
    Thanks all.

    If you want to reduce the number (frequency) of Log Switches, you should increase the size of the Online Redo Logs -- ie create new Log File Groups of a larger size and drop the older ones.
    If the issue is "checkpoint not complete" waits, then either
    a. Increasing the size of the Log Files
    or
    b. Increasing the number of Log Files
    is doable
    Note that if you increase the number but not the size, you still have a checkpoint every N MBytes -- i.e., possibly too frequently!
    On the other hand, if you increase the size to be very large, then at every switch the Archiver kicks in with a large read plus a large write, reading that redo log of N GBytes and writing it out to the archive log destination, imposing an additional I/O spike on your system. (Writing to a file system goes through the file system buffer cache, so if your database SGA isn't very large and your database performance relies on hitting the file system buffer cache to avoid disk reads, that performance will be impacted while a large portion of the cache is taken over by the Archiver.)
    Hemant K Chitale
    http://hemantoracledba.blogspot.com
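    For reference, a minimal sketch of the swap Hemant describes - add new, larger groups first, then drop the old ones once they are INACTIVE (paths, group numbers and the 1G size are assumptions):
    SQL> alter database add logfile group 5
      2  ('/u01/oradata/prod/redo05a.log', '/u02/oradata/prod/redo05b.log') size 1g;
    SQL> -- repeat for groups 6 to 8, then cycle the old groups out
    SQL> alter system switch logfile;
    SQL> alter system checkpoint;
    SQL> select group#, status from v$log;
    SQL> alter database drop logfile group 1;  -- only once its status is INACTIVE
    The checkpoint matters: a group cannot be dropped while its status is still CURRENT or ACTIVE.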

  • Moving control files, redo logs and database files

    Hello,
    I installed Oracle 10.2 on Unix.
    We created file systems for our data/control files,
    but somehow we missed defining the locations for the dbf, control and redo log files during installation.
    My question is how to move all the control files, redo logs and dbf files from one location to another.
    For example:
    currently they are installed in /opt/oracle/oradata;
    now I want to move them to /u03/oradata.
    Please note:
    prior to that, I'd like to put the database in archive log mode:
    shutdown immediate;
    startup mount;
    alter database archivelog;
    alter database open;
    DN

    For DB and redo files:
    conn / as sysdba
    shutdown immediate;
    startup mount
    host
    $ cp old_name new_name
    $ exit
    alter database rename file 'old_name'
    to 'new_name';
    alter database open;
    host
    $ rm old_name
    $ exit
    For control files:
    1. Shut down the database.
    2. Copy an existing control file to a different location, using operating system commands.
    3. Edit the CONTROL_FILES parameter in the database's initialization parameter file to add the new control file's name, or to change the existing control filename.
    4. Start the database.
    Message was edited by:
    tekicora
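    For an spfile-based instance, step 3 can be done from SQL*Plus before the shutdown (a minimal sketch; the paths are hypothetical):
    SQL> alter system set control_files='/u03/oradata/control01.ctl', '/u03/oradata/control02.ctl' scope=spfile;
    SQL> shutdown immediate
    $ cp /opt/oracle/oradata/control01.ctl /u03/oradata/control01.ctl
    $ cp /opt/oracle/oradata/control02.ctl /u03/oradata/control02.ctl
    SQL> startup
    The spfile change only takes effect at the restart, so the copies must be in place before startup.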

  • Why not use Redo log for consistent read

    Oracle 11.1.0.7:
    This might be a stupid question.
    As I understand it, if a select is issued at 7:00 AM and the data that select is going to read changes at 7:10 AM, Oracle will still return the data as it existed at 7:00 AM, and for this Oracle needs the data in the undo segments.
    My question is: since redo also has past and current information, why can't the redo logs be used to retrieve that information? Why is undo required when redo already has all of it?

    user628400 wrote:
    > Thanks. I get that piece, but isn't it the same problem with UNDO? It's overwritten as it expires, and there is no guarantee unless we specifically ask Oracle to guarantee the undo retention. I guess I am trying to understand whether UNDO was created for efficiency purposes, so that there is less performance overhead compared to reading and writing the redo.
    > If data was changed from 100 to 200, wouldn't both values be in the redo logs? As I understand it:
    > 1. Insert a row with value 100 at 7:00 AM and commit; 100 is written to the redo log.
    > 2. Update the row to 200 at 8:00 AM and commit; 200 is written to the redo log.
    > So in essence both 100 and 200 are in the redo logs, and if a select was issued at 7:00, the data could be read from the redo log too. Please correct me if I am understanding it incorrectly.
    I guess you didn't understand the explanation I gave. It is not the old data itself that redo keeps; redo keeps change vectors (including those of the undo), which are useful to "recover" data when it is gone, but not useful as such for a select statement. In an undo block, by contrast, the actual value is kept. Remember that an undo block is still just a block, which can contain data like any normal block holding a table such as EMP. So redo does not hold 100 and 200 as readable row values but the change vectors of those operations, tagged with SCNs so a transaction can be recovered, and read back in that order as well. Reading old data from undo, on the other hand, is simple for Oracle: the transaction table in the undo segment holds the entry for the transaction and knows where the old data is kept. You may have seen XIDUSN, XIDSLOT and XIDSEQ in the transaction id; these are nothing but the information about where the undo data is kept. For reading old data, unlike redo, undo plays its role well.
    About the expiry of undo: only INACTIVE undo extents are marked as expired. Active extents, which hold the records of an ongoing transaction, are never marked. You can come back after a lifetime and, if the undo is still there, your old data will have been kept safe by Oracle, since it is needed for multiversioning. Undo retention is about keeping the old data after the commit - something you need not manage yourself if you are on 11g and using the Total Recall feature!
    HTH
    Aman....
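    The same undo that serves consistent reads also powers flashback query, which makes the mechanism easy to observe (a minimal sketch; the table name is hypothetical):
    SQL> select balance from accounts
      2  as of timestamp systimestamp - interval '10' minute;
    If the undo for that point in time has already been overwritten, the query typically fails with ORA-01555 (snapshot too old), the same error a long-running consistent read hits for the same reason.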

  • Database, redo log files accessibility

    Hi guys!
    I have a quick question regarding Oracle File Permissions (Operational Security Check).
    How would one check if the Oracle files are not world read/writeable
    (database, redo log and control files are not world accessible)?
    Thanks!

    Hi guys!
    I have a quick question regarding Oracle File Permissions (Operational Security Check).
    What is this referring to?
    How would one check if the Oracle files are not world read/writeable
    (database, redo log and control files are not world accessible)?
    Example from a lab setup on AIX:
    $ find $ORACLE_HOME | wc -l
    18197
    $ find $ORACLE_HOME -perm -0002
    If at least the world-writable bit is set, find will list the files in and below the Oracle Home top directory.
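    To check the database, redo log and control files themselves rather than the Oracle Home, a minimal sketch (the data directory is hypothetical):
    SQL> select name from v$controlfile
      2  union all select member from v$logfile
      3  union all select name from v$datafile;
    $ find /u03/oradata -perm -0004   # world-readable
    $ find /u03/oradata -perm -0002   # world-writable
    Both find commands should return nothing on a properly locked-down system.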

  • Using 2 file systems to place archive logs

    We currently have the LOG_ARCHIVE_DEST parameter set to place archive logs in the /u10 file system. However, it no longer has enough space, and the system admin added a second dedicated file system, /u11, for archive logs. We would like to start using both of them: if the first gets full, we want to be sure the second is used until the backup job deletes the logs. Is this possible in the version we have (8.1.7), and what changes should we make to start utilizing both file systems?
    Thanks.
    Edited by: user594143 on Sep 8, 2008 3:47 PM

    I am not sure whether this was introduced in 8i, but both 9i and 10g have this feature.
    You can use the ALTERNATE attribute of log_archive_dest_n to set an alternate destination to be used if the primary destination fails. That is, if the primary (MANDATORY) archive log destination fails - whether because it is full or for any other reason - the database starts archiving to the ALTERNATE destination. After you clear the primary destination, you have to switch the destinations back to PRIMARY and ALTERNATE manually.
    SQL> alter system set log_archive_dest_1='LOCATION=/u00/app/oracle/admin/archive_1 MANDATORY NOREOPEN ALTERNATE=log_archive_dest_2';
    SQL> alter system set log_archive_dest_2='LOCATION=/u01/app/oracle/admin/archive_2 OPTIONAL';
    SQL> alter system set log_archive_dest_state_1=enable;
    SQL> alter system set log_archive_dest_state_2=alternate;
    Thanks,
    Harris.
