Very high log file sequential read and control file sequential read waits?

I have a 10.2.0.4 database with 5 Streams capture processes running to replicate data to another database. However, I am seeing very high
log file sequential read and control file sequential read waits from the capture processes. This is causing slowness because the database is spending so much time on these wait events. From the AWR report:
Elapsed: 20.12 (mins)
DB Time: 67.04 (mins)
and from the Top 5 wait events:
Event                         Waits    Time(s)  Avg Wait(ms)  % Total Call Time  Wait Class
CPU time                               1,712                  42.6
log file sequential read      99,909   683      7             17.0               System I/O
log file sync                 49,702   426      9             10.6               Commit
control file sequential read  262,625  384      1             9.6                System I/O
db file sequential read       41,528   378      9             9.4                User I/O
Oracle Support hasn't been of much help, other than wasting 10 of my days telling me to try this and try that.
Do you have Streams running in your environment? Are you experiencing these waits? Have you done anything to resolve them?
Thanks

Welcome to the forums.
There is insufficient information in what you have posted to know whether your analysis of the situation is correct, or anything about your Streams environment.
We don't know what you are replicating: not the size, not the volume, not the type of capture, not the rules, etc.
We don't know the distance over which it is being replicated ... 10 ft. or 10 light years.
We don't have any AWR or ASH data to look at.
Etc., etc., etc. If this is what you provided to Oracle Support, it is no wonder they were unable to help you.
To diagnose this problem, if one exists, requires someone on-site or with a very substantial body of data which you have not provided. The first step is to fill in the answers to all of the obvious first level questions. Then we will likely come back with a second level of questioning.
But when you do ... do not post here. Your questions are not "Database General"; they are specific to Streams, and there is a Streams forum specifically for them.
Thank you.
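For anyone chasing the same symptoms, a reasonable first check might be which sessions are actually accumulating these waits; a minimal sketch (adjust for your environment):

-- Cumulative waits per session since session start
SELECT s.sid,
       s.program,
       e.event,
       e.total_waits,
       e.time_waited              -- centiseconds
  FROM v$session s
  JOIN v$session_event e ON e.sid = s.sid
 WHERE e.event IN ('log file sequential read',
                   'control file sequential read')
 ORDER BY e.time_waited DESC;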

Similar Messages

  • Multiplexing redo logs and control files to a separate diskgroup

    General question this one...
    I've been using ASM for a few years now and have always installed a new system with 3 diskgroups
    +DATA - for datafiles, control files, redo logs
    +FRA - for archive logs, flash recovery area, RMAN backups
    Those I guess are the standards, but I've always created an extra (very small) diskgroup, called +ONLINE, where I keep multiplexed copies of the redo logs and control files.
    My reasoning is that if there are any issues with the +DATA diskgroup, the redo logs and control files can still be accessed.
    In the olden days (all of 5 years ago!), on local storage, this was important, but is it still important now? With all the striping and mirroring going on (at both the ASM and RAID levels), am I just being overly paranoid? Does this additional +ONLINE diskgroup actually hamper performance (with unnecessary dual-write overhead)?
    Thoughts?

    Some of the decision will probably depend on your specific environment's data activity, volume, and throughput.
    Something to remember is that redo logs are written sequentially, and they benefit from a lower RAID write penalty (RAID-10: 2 writes per IOP vs. RAID-5: 4 writes per IOP). RAID-10 is often not cost-effective for the data portion of a database. If your database is OLTP with a high volume of random reads/writes, you're potentially hurting redo throughput by creating contention on the disks shared by data and redo. Again, that depends entirely on what you're seeing in terms of wait events; a low-volume database would probably not experience any noticeable degradation.
    In my environment, I have RAID-5 and RAID-10 available, and since the capacity requirement for redo on RAID-10 is very low, it makes sense to create 2 diskgroups for online redo, separate from DATA and separate from each other. This way we don't need to be concerned with DATA transactions impacting REDO performance, and vice versa, and we still maintain redo redundancy (a sketch of such a layout follows below).
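    For illustration, multiplexing each redo group across two dedicated diskgroups might look like this (a sketch only; +REDO1/+REDO2 are hypothetical diskgroup names and the sizes are illustrative):

    ALTER DATABASE ADD LOGFILE GROUP 4 ('+REDO1','+REDO2') SIZE 512M;
    ALTER DATABASE ADD LOGFILE GROUP 5 ('+REDO1','+REDO2') SIZE 512M;
    ALTER DATABASE ADD LOGFILE GROUP 6 ('+REDO1','+REDO2') SIZE 512M;
    -- and multiplexed control files via the init parameter, e.g.:
    -- ALTER SYSTEM SET control_files='+DATA/db/ctl1.f','+REDO1/db/ctl2.f' SCOPE=SPFILE;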
    In my opinion, you can't be too paranoid. :)
    Good luck!
    K

  • Location of Redo log and control files?

    Dear all,
    I am checking the location of the redo log and control files, but found that the redo log files (like log02a.dbf ...) are in the same directory as the data files. However, I couldn't find any control files in the data file directories.
    What could be the location of control files?
    Amy

    select name
      from v$controlfile;
    or
    show parameter control_files
    Khurram

  • How to create parameter and control file like filename + date

    Hello there
    I am trying to create a parameter file and a control file copy with the following commands.
    In SQL*Plus:
    create pfile='/u03/oradata/WEBDB/backup/initWEBDB.ora' from spfile;
    In RMAN:
    copy current controlfile to '/u03/oradata/WEBDB/backup/cf_longterm.cpy';
    how can I put the date at the end of the filename, like
    initWEBDB8jan06.ora and cf_longterm8jan06.cpy
    Thanks in advance
    Lionel
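    One approach for the pfile (a sketch, assuming you run it from SQL*Plus and can use substitution variables):

    column dt new_value dt noprint
    select to_char(sysdate,'DDMonYY') dt from dual;
    create pfile='/u03/oradata/WEBDB/backup/initWEBDB&dt..ora' from spfile;

    For the RMAN copy, a common approach is to build the dated name in the calling shell script and pass it into the RMAN command file.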


  • How to create redlog and control file at ASM in linux RAC

    Hi Experts,
    I am maintaining an Oracle 10g database on ASM in a RAC on Red Hat Linux.
    I am new to this and have some questions.
    Normally speaking, the Oracle recommendations for a 10g database are:
    create 3 copies of the control file
    create at least 2 redo log groups with mirrored members in the system.
    However, I checked and found:
    the redo log files are in the FRA (+FLSdisk1) with no mirrors
    the control file is in the FRA (+FLSDISK1/)
    the database files are in +DATA1/
    There are no mirrors for the redo logs.
    Going into EM, I also could not find a place to enter the file names.
    We use ASM to hold the database to support RAC.
    Do I need to create the redo log files as
    ALTER DATABASE ADD LOGFILE GROUP 1 ('+FLSdisk1/sale/onlinelog/REDO01.LOG','+FLSdisk1/sale/onlinelog/REDO01_mirror.LOG') SIZE 1000M REUSE;
    My boss told me that ASM is reliable.
    Do I need to create more directories to arrange the redo log and control files in ASM for a RAC system?
    Is the FRA a good place to store the control file and redo log files?
    Thanks
    JIM
    Edited by: user589812 on Jul 3, 2009 3:03 PM

    ASM is reliable, but a smart DBA is very careful. If ASM is doing the mirroring, this is like RAID doing the mirroring: what happens if you accidentally delete one copy? The other one disappears instantly. Not a good idea.
    With respect to redo logs, you need a minimum of three groups, two members each, and one thread per instance. So a 2-node cluster should, at a minimum, have 12 physical files.
    Not mirroring the redo logs, assuming multiple members, is not as critical.
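    To add the missing mirrors, one option is a second member per existing group in a different diskgroup (a sketch; the group numbers and the +DATA1 target are illustrative, check V$LOGFILE for your actual layout):

    -- list the current groups and members first
    select group#, member from v$logfile order by group#;
    -- then add a second member to each group in another diskgroup
    alter database add logfile member '+DATA1' to group 1;
    alter database add logfile member '+DATA1' to group 2;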

  • Different log file name in the Control file of SQL Loader

    Dear all,
    Every day I get 3 log files via ftp from a Solaris server onto a Windows 2000 Server machine. On this Windows machine we have an Oracle 9.2 database. The log files are named in the format in<date>.log, e.g. in20070429.log.
    I would like to load this log file's data to an Oracle table every day and I would like to use SQL Loader for this job.
    The problem is that the log file name is different every day.
    How can I give this variable log file name in the control file which SQL*Loader uses?
    file.ctl
    LOAD DATA
    INFILE 'D:\gbal\in<date>.log'
    APPEND INTO TABLE CHAT_SL
    FIELDS TERMINATED BY WHITESPACE
    TRAILING NULLCOLS
    (SL1 DATE "Mon DD, YYYY HH:MI:SS FF3AM",
    SL2 char,
    SL3 DATE "Mon DD, YYYY HH:MI:SS FF3AM",
    SL4 char,
    SL5 char,
    SL6 char,
    SL7 char,
    SL8 char,
    SL9 char,
    SL10 char,
    SL11 char,
    SL12 char,
    SL13 char,
    SL14 char,
    SL15 char)
    Do you have any better idea about this issue?
    I thought of renaming the log file to a fixed name, such as in.log, but how can I distinguish the desired log file from the other two?
    Thank you very much in advance.
    Giorgos Baliotis

    I don't have a direct solution for your problem.
    However, if you invoke SQL*Loader from an Oracle stored procedure, it is possible to set the control/log file dynamically.
    * Grant privileges to the user to execute command-prompt statements:
    BEGIN
      dbms_java.grant_permission('bc4186ol','java.io.FilePermission','C:\windows\system32\cmd.exe','execute');
    END;
    /
    * Procedure to execute operating-system commands from PL/SQL (an Oracle Java stored procedure):
    CREATE OR REPLACE AND COMPILE JAVA SOURCE NAMED "Host" AS
    import java.io.*;
    public class Host {
      public static void executeCommand(String command) {
        try {
          // Build the full command line for the Windows shell.
          String[] finalCommand = new String[4];
          finalCommand[0] = "C:\\windows\\system32\\cmd.exe";
          finalCommand[1] = "/y";
          finalCommand[2] = "/c";
          finalCommand[3] = command;
          final Process pr = Runtime.getRuntime().exec(finalCommand);
          // Drain standard output in a separate thread.
          new Thread(new Runnable() {
            public void run() {
              try {
                BufferedReader br_in = new BufferedReader(new InputStreamReader(pr.getInputStream()));
                String buff = null;
                while ((buff = br_in.readLine()) != null) {
                  System.out.println("Process out :" + buff);
                  try { Thread.sleep(100); } catch (Exception e) {}
                }
              } catch (IOException ioe) {
                System.out.println("Exception caught printing process output.");
                ioe.printStackTrace();
              }
            }
          }).start();
          // Drain standard error in a separate thread.
          new Thread(new Runnable() {
            public void run() {
              try {
                BufferedReader br_err = new BufferedReader(new InputStreamReader(pr.getErrorStream()));
                String buff = null;
                while ((buff = br_err.readLine()) != null) {
                  System.out.println("Process err :" + buff);
                  try { Thread.sleep(100); } catch (Exception e) {}
                }
              } catch (IOException ioe) {
                System.out.println("Exception caught printing process error.");
                ioe.printStackTrace();
              }
            }
          }).start();
        } catch (Exception ex) {
          System.out.println(ex.getLocalizedMessage());
        }
      }
      public static boolean isWindows() {
        return System.getProperty("os.name").toLowerCase().indexOf("windows") != -1;
      }
    }
    /
    * Oracle wrapper to call the above Java method:
    CREATE OR REPLACE PROCEDURE Host_Command (p_command IN VARCHAR2)
    AS LANGUAGE JAVA
    NAME 'Host.executeCommand (java.lang.String)';
    /
    * Now invoke the procedure with an operating-system command (to execute SQL*Loader).
    * The execution of the script ensures the Prod mapping data file is loaded into the PROD_5005_710_MAP table.
    * Change the control/log/discard/bad files as appropriate:
    BEGIN
      Host_Command (p_command => 'sqlldr system/tiburon@orcl control=C:\anupama\emp_join'||1||'.ctl log=C:\anupama\ond_lists.log');
    END;
    /
    Does that help you?
    Regards,
    Bhagat
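    A simpler alternative sketch: SQL*Loader also accepts the data file name on the command line through the DATA parameter, which overrides the INFILE clause in the control file, so whatever script invokes the load can supply the dated name (paths and credentials illustrative):

    sqlldr userid=scott/tiger control=D:\gbal\file.ctl data=D:\gbal\in20070429.log log=D:\gbal\load_in20070429.log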

  • SQL Loader and control file changes for different users

    In the front end of my application I can select a data file and a control file, and load data into the table mentioned in the .ctl file. Every user who logs in uses the same .ctl file and so loads into the same table. Now I want each user to load data into the table in his own schema. I can get the username of the user currently logged in, and I want to insert into that username.table. So can I copy the contents of the .ctl file into a variable, change the table name to username.table in that string, and pass that variable as a parameter to the sqlldr command instead of the .ctl file?
    Or is there a better way to modify the same control file every time, changing tablename to username.tablename, before passing it to sqlldr to load the data into the table in the user's own schema?
    Thanks and Regards

    Thanks for the reply. Users do have their own credentials, but only for the application; all users use a common loader and control file once they log into the application. So irrespective of which user is logged in, he selects the same control file and loads into the same table mentioned in the control file. I instead want the user to load into the table named in the control file, but in his own schema, i.e. username.tablename instead of just the tablename mentioned in the .ctl file.
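    One possible approach (a sketch only; LOAD_DIR is a hypothetical directory object and the column list is illustrative) is to generate a per-user control file on the fly, then invoke sqlldr with it:

    DECLARE
      f  UTL_FILE.FILE_TYPE;
      u  VARCHAR2(30) := USER;   -- schema of the logged-in user
    BEGIN
      f := UTL_FILE.FOPEN('LOAD_DIR', 'load_' || u || '.ctl', 'W');
      UTL_FILE.PUT_LINE(f, 'LOAD DATA');
      UTL_FILE.PUT_LINE(f, 'INFILE ''data.csv''');
      UTL_FILE.PUT_LINE(f, 'APPEND INTO TABLE ' || u || '.target_table');
      UTL_FILE.PUT_LINE(f, 'FIELDS TERMINATED BY '','' TRAILING NULLCOLS');
      UTL_FILE.PUT_LINE(f, '(col1, col2, col3)');
      UTL_FILE.FCLOSE(f);
    END;
    /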

  • Grant an account rights to read and modify files in all Redirected Documents folders?

    We have set up redirected Documents folders with the default recommended permissions, which do not grant Administrators any access to the folders. If an administrator needs access to a folder, he takes ownership of it, grants himself permissions, and does whatever he needs to do with the folder, such as delete it or give access to a new person. This is rarely needed, so it is not a big deal.
    However, we now need to import everyone's Outlook PSTs (stored in their redirected My Documents so they are backed up) into their new Exchange archive mailboxes. We will also need to verify that all PSTs are gone from the file server once the contents are imported into the archive mailboxes, and to run reports showing which users are storing music and video files in their folders, and how much.
    When I tried to run WinDirStat against the share containing the redirected My Documents folders to see which file types were taking up space, I got very incomplete results, even running as the system account via psexec, due to inadequate permissions on the users' folders.
    We also need the PST Capture Tool to have rights to read into the documents folders and move any PSTs found into the users' archive mailboxes.
    Rather than going to each user's redirected documents folder one by one and taking ownership to change permissions, is there a more efficient way to give an account running the PST Capture Tool and WinDirStat access to read and change files in the users' redirected documents folders?
    I thought of just removing the GPO option that says "Grant the user exclusive rights to My Documents," but I don't think that will work on pre-existing folders that already have permissions set.

    Hi,
    Based on my research, if you already have a bunch of existing redirected My Documents folders set up with the "Grant the user exclusive rights to My Documents" check box selected, the only documented way to regain access to the folders is to take ownership of each individual folder and manually edit the permissions to give the Administrators group full control.
    You can use a PowerShell script to help regain access to the folders; for more details, please go through the link below:
    http://mypkb.wordpress.com/2008/12/29/how-to-restore-administrators-access-to-redirected-my-documents-folder/
    Regards,
    Yan Li

  • Database restore without temp, undo and control files.

    Hi All,
    You might find this question silly, but I don't know, so I am asking it here.
    I have a cold backup of the database. Now I want to create a clone of that database, but with some different paths for the DBFs, so I will create a new control file after restoring the database.
    Now, I know that I don't need the control files and tempfiles to be restored. I have 10 undo files in the backup, but on the new clone database I don't need all 10; I want only 5. So can I do the restore without the undo, temp and control files, and add undo and temp later on? And if yes, can I add them at mount level?
    This is my first restore, please guide me; it's rather urgent.

    Nitin Joshi wrote:
    >> If the COLD Backup does not include the Online Redo Logs, an ALTER DATABASE OPEN RESETLOGS is required to create these Online Redo Logs. Unfortunately, an OPEN RESETLOGS can only be done after an Incomplete Recovery or when using a Backup Control file. Therefore, we do a RECOVER with a CANCEL to simulate an Incomplete Recovery.
    Completely agree with you, Hemant. And the links you've provided I've gone through many times; excellent description.
    I just wanted to know: in the above (OP's) scenario, if he has a complete cold backup (one that includes the online redo logs), does he really need an OPEN RESETLOGS or any recovery?
    Regards!
    No; if you have a cold backup with the online redo log files, then I don't think you need to open the database with RESETLOGS. RESETLOGS is only needed after an incomplete recovery, after a recovery using a backup controlfile, or when you don't have the redo logs.
    I completely agree with you that in the given cold-backup scenario the undo tablespace would not be part of the recovery, and you can:
    -offline drop the undo tablespace's datafiles
    -create another undo tablespace and its undo datafile
    -point the spfile to the newly created undo tablespace
    I think Aman was speaking in the context of restoring and recovering an online database, where the undo tablespace plays a vital role in recovery: the undo blocks roll back the effects of uncommitted transactions previously applied by the rolling-forward phase.
    Khurram
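    Once the clone is open, the undo swap described above might look roughly like this (a sketch; the tablespace names, path and size are illustrative):

    CREATE UNDO TABLESPACE undotbs2
      DATAFILE '/u02/clone/undotbs2_01.dbf' SIZE 500M;
    ALTER SYSTEM SET undo_tablespace = UNDOTBS2 SCOPE=BOTH;
    DROP TABLESPACE undotbs1 INCLUDING CONTENTS AND DATAFILES;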

  • XI Sender file adapter - How to process data and control files.

    Hello all,
       I have the following requirement to fulfill: I am using an FTP client (XI Sender file adapter) to retrieve data from an FTP site. To make sure I am not picking up a data file that is currently being written to, 2 files are actually present on the FTP site (for each data file):
    1. abc.ctrl (control file with no data in it. Indicates that the data file has been completely written).
    2. abc.dat (actual data file).
      I want the file/ftp connector in XI to retrieve the data file (abc.dat) only if the control file (abc.ctrl) is present. After the processing of the data file is finished, both files (.dat and .ctrl) should be deleted.
      Is there an elegant and robust way to accomplish this?
    Thanks for your help.

    Hi Yves,
    in my opinion there's no problem with files currently being written to in combination with a polling file adapter, because the final file name should only become available once the file has been transferred completely. I use different file sender adapters very often and have never had any problems. After picking up the files I move them to the corresponding archive folders specified in the adapters, so that a second processing cannot occur.
    Regards
    Ralph

  • Oracle binary and control files

    Hi All,
    I want to know whether the Oracle binaries and the control files are related in any way.
    I have my physical database files on SAN storage and my Oracle binaries on a local disk.
    If I delete my Oracle binaries and restore them from a backup, will I be able to start my database without any issues,
    since all my Oracle datafiles, controlfiles and redo files are located on the SAN storage?

    Oracle binaries and control files are related in a way, because the Oracle version is recorded in the control files:
    oerr ora 201
    00201, 00000, "control file version %s incompatible with ORACLE version %s"
    // *Cause:  The control file was created by incompatible software.
    // *Action: Either restart with a compatible software release or use
    //          CREATE CONTROLFILE to create a new control file that is
    //          compatible with this release.
    When restoring Oracle binaries on UNIX, you should also take care of the setuid bits on the oracle executable, to avoid local connection issues for non-oracle Unix accounts.

  • When will Apple have a searchable text reader and upload file capability?

    When will Apple have a searchable text reader and upload file capability? I have a large text file I need to upload to my iPhone, search, and dial.
    Anything planned?

    I had a Palm that I used for work and travel; it had a very nice application called Documents To Go. It allowed you to upload, view, and edit .doc, .xls, and .ppt files, which was very handy when traveling. Also, I could sync my Palm with my iMac and work PC using Bluetooth, which took care of the file transfers as well. I would like to have the same capability on my iPhone; I bought it to replace all of my old stuff, and I need it to do the same as my old stuff. I hope that Apple reads this and gets on the ball. Until then I still have to carry my Palm or a thumb drive on trips.

  • Where is the location of tablespace file and control file

    Hi, all
    Where are the tablespace files and the control file located? Thanks.

    For DataFiles, query DBA_DATA_FILES or V$DATAFILE
    For TempFiles, query DBA_TEMP_FILES or V$TEMPFILE
    For Online Redo Logs, query V$LOGFILE
    For Archived Redo Logs, query V$ARCHIVED_LOG
    For Controlfiles, query V$CONTROLFILE
    Hemant K Chitale
    http://hemantoracledba.blogspot.com

  • Quickest method for reading and writing files

    Hi
    I need help regarding file operations (reading and writing). Currently I am using BufferedReader and BufferedWriter to read and write files, but the (XML) files are very large (30-50 MB), and this is slowing the application down to a great extent. Is there any other approach to perform the above-mentioned operations on XML files faster?
    Thank You
    Mansoor.

    Hi
    Can you let me know how to use the java.nio package for primitive data types (int, float, ..., boolean)? I have tried it but had no success.
    Thank You
    Mansoor

  • Load data with SQL Loader link field between CSV file and Control File

    Hi all,
    in a SQL*Loader control file, how do you link a field in the CSV file to a column in the table?
    E.g. I want to import the records into table TEST (col1, col2, col3) with the data in the CSV file in different positions. How can I do this?
    FILE CSV (with variable position):
    test1;prova;pippo;Ferrari;
    xx;yy;hello;by;
    In the table TEST I want col1 = 'prova' (xx),
    col2 = 'Ferrari' (yy),
    col3 = default "N";
    the other data in the CSV file is ignored.
    so:
    load data
    infile 'TEST.CSV'
    into table TEST
    fields terminated by ';'
    (col1 ?????,
    col2 ?????,
    col3 CONSTANT "N")
    Thanks,
    Attilio

    With the '?' marks I mean: how can I link COL1 with the right field in the CSV file?
    Attilio
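    One way (a sketch, assuming a SQL*Loader version with FILLER support; the skip1/skip3 field names are illustrative) is to declare the unwanted CSV positions as FILLER fields:

    load data
    infile 'TEST.CSV'
    into table TEST
    fields terminated by ';' trailing nullcols
    (skip1 FILLER,   -- 1st field (test1) ignored
    col1,            -- 2nd field (prova)
    skip3 FILLER,    -- 3rd field (pippo) ignored
    col2,            -- 4th field (Ferrari)
    col3 CONSTANT "N")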
