Quick Redo Log Question:

Hi All,
I am attempting to multiplex redo logs with Enterprise Manager and there is an option to 'reuse file'. I do not see any explanation of what this option does in my manuals and would appreciate it if someone could quickly explain what it does and why I would want to select it.
Thanks in advance,
d.

The REUSE option is discussed in the SQL Reference manual under the description of the ADD [STANDBY] LOGFILE MEMBER clause. You need to use this option if you wish to use existing file(s) for the new multiplexed log file members in the group; without REUSE, the statement fails if a specified file already exists.
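For example (group number and file path are hypothetical), to point a new member at a file that already exists on disk:
ALTER DATABASE ADD LOGFILE MEMBER '/u02/oradata/db/redo01b.log' REUSE TO GROUP 1;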

Similar Messages

  • Redo log question..

My database is in archivelog mode.
I have 2 redo log files.
One of my redo log files got corrupted, so what will happen,
and how do I recover the redo log file?

Hi,
Good that your DB is in archivelog mode.
If your current redo log gets corrupted then you cannot open the DB, and hence archiving it is also not possible. This would be the case irrespective of the status of the DB (up or shut down) when the redo log got corrupted. Be ready to lose an amount of data equal to the redo written to the current redo log file, which now happens to be corrupted.
What you can do is this:
startup mount
ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP <no.>;
alter database open resetlogs;
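Before clearing, you can confirm which group is the unarchived one (a quick check):
SELECT group#, status, archived FROM v$log;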
Please refer to http://download-east.oracle.com/docs/cd/B19306_01/server.102/b14231/onlineredo.htm#i1006568 and search for 'clear logfile'.
Afterwards, please do a SHUTDOWN IMMEDIATE and take a cold, consistent backup.
    -Aditi

  • Hoping for a quick response : EXP and Archived REDO log files

    I apologize in advance if this question has been asked and answered 100 times. I admit I didn't search, I don't have time. I'm leaving on vacation tomorrow, and I need to know if I'm correct about something to do with backup / restore.
    we have 10g R2 running a single instance on a single server. The application vendor has "embedded" oracle with their application. The vendor's backup is a batch file using EXP - thus:
    exp system/xpwdxx@db full=y file=D:\Orant\admin\db\EXP\db_full.dmp log=D:\Orant\admin\db\EXP\db_full.txt direct=y compress=y
    This command is executed nightly at midnight. The files are then backed up by our nightly backup to offsite storage media.
The database is running in autoarchive mode. The problem is, the archived redo files filled the drive they were being stored on, and it is the drive the database is on. I used OS commands to move 136G of archived redo logs onto other storage media to free the drive.
My question: since the EXP runs at midnight, when there is likely NO activity, do I need to run in AutoArchive Mode? From what I have read, you cannot even apply archived redo log files to this type of backup strategy (IMP). Is that true? We are OK losing changes since our last EXP. I have read a lot of stuff about restoring consistent vs. inconsistent, and just need to know: if my disk fails and I have to start with a clean install of Oracle and nothing else, can I IMP this EXP and get back up and running as of the last EXP? Or do I need the autoarchived redo log files back to July 2009 (136G of them)?
    Hoping for a quick response
    Best Regards, and thanks in advance
    Bruce Davis

    Bruce Davis wrote:
    Amardeep Sidhu
    Thank you for your quick reply. I am reading in the other responses that since I am using EXP without consistent=y, I might not even have a backup. The application vendor said that with this dmp file they can restore us to the most recent backup. I don't really care for this strategy as it is untested. I asked them to verify that they could restore us and they said they tested the dmp file and it was OK.
    Thank you for taking the time to reply.
    Best Regards
Bruce

The dump file is probably OK in the sense that it is not corrupted and can be used in an imp operation. That doesn't mean the data in it is transactionally consistent. And to use it at all, you have to have a database up and running. If the database is physically corrupted, you'll have to rebuild a new database from scratch before you can even think about using your dmp file.
    Vendors never understand databases. I once had a vendor tell me that Oracle's performance would be intolerable if there were more than 5 concurrent connections. Well, maybe in HIS product ..... Discussions terminated quickly after he made that statement.

  • Fundamental questions on redo logs and rollbacks

    Hi all,
    Some basic questions, I really want to understand it very clearly.
Suppose that we have updated a few records in a table. We know that the blocks to be updated will be fetched into the buffer cache, updated with the new values and eventually committed. The questions I have are:
1) What exact information goes to the redo log? Is it a copy of the block before the change and a copy of the block after the change?
2) What exactly goes to the rollback segment? Is it a copy of the block before the change (for an update), just the rowid for an inserted row, and a copy of the block for a deleted row?
3) Whatever we do, is it the whole block that goes to redo or rollback? I mean, if there are 10 rows in the block and we update one of them, does the whole block still go to redo or rollback?
4) If we roll back, what goes where? Does anything go to redo if we roll back?
    Please explain.
    Thanks.

Redo stores changes made to the database, and undo/rollback stores the reverse of those changes. Data blocks may be changed prior to a commit, and the changes are recorded in both locations.
So, when a database is recovered, redo is applied to the backup datafiles, rolling every change forward, and then undo is applied to reverse any uncommitted transactions.
Undo/rollback can also be used simply to roll back a transaction in an active instance. Redo is only used during recovery (instance or media recovery), never for rolling back.
I don't know if this is tracked via the storage of block images, or if it just stores the change itself. (As far as I know, both record change vectors describing the change, not whole block images.)
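One way to see both streams for your own session (a rough check; the statistic names below are standard):
SELECT n.name, s.value
FROM v$mystat s JOIN v$statname n ON n.statistic# = s.statistic#
WHERE n.name IN ('redo size', 'undo change vector size');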
    -cf

  • Redo log buffer question

    hi masters,
this seems very basic, but I would like to know the internals of this process.
We all know that LGWR writes redo entries to the online redo log files on disk. On commit an SCN is generated and tagged to the transaction, and LGWR writes this to the online redo log files.
But my question is: how do these redo entries come to the redo log buffer? All the required data is fetched into the buffer cache by the server process; it is modified there, and committed. DBWR writes this to the datafiles, but at what point, and which process, writes this committed transaction (I think the redo entry) into the log buffer?
Does LGWR do this? What internally happens exactly?
If you can please shed some light on the internals, I will be thankful....
    thanks and regards
    VD

    Hi Vikrant,
DBWR writes this to datafiles, but at what point, which process writes this committed transaction (i think redo entry) into log buffer cache????
Remember that before DBWR gets to flush the dirty blocks to the datafiles, the server process makes sure that LGWR has finished writing the corresponding redo from the log buffer to the online redo log files. As per the Oracle architecture, being able to recover data to the point in time of a crash is important, and this is achieved by the online redo log files.
As for how the entries get into the redo log buffer in the first place, Aman has stated the clear steps: the server process that makes the change generates the redo entries and copies them into the log buffer itself; LGWR's job is only to write the buffer out to disk.
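You can see this division of labour in the statistics: 'redo entries' counts copies into the log buffer by server processes, while 'redo writes' counts LGWR's writes to disk. A quick look:
SELECT name, value FROM v$sysstat WHERE name IN ('redo entries', 'redo writes');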
    - Pavan Kumar N

  • Question about how Oracle manages Redo Log Files

    Good morning,
    Assuming a configuration that consists of 2 redo log groups (Group A and B), each group consisting of 2 disks (Disks A1 & A2 for Group A and Disks B1 and B2 for group B). Further, let's assume that each redo log file resides by itself in a disk storage device and that the device is dedicated to it. Therefore in the above scenario, there are 4 disks, one for each redo log file and, each disk contains nothing else other than a redo log file. Furthermore, let's assume that the database is in ARCHIVELOG mode and that the archive files are stored on yet another different set of devices.
    sort of graphically:
        GROUP A             GROUP B
          A1                  B1
      A2                  B2

The question is: When the disks that comprise Group A are filled and Oracle switches to the disks in Group B, can the disks in Group A be taken offline, maybe even physically removed from the system if necessary, without affecting the proper operation of the database? Can the archiver process be temporarily delayed until the removed disks are brought back online, or is the DBA forced to wait until the archiver process has finished copying the redo log file into the archive?
    Thank you for your help,
    John.

    Hello,
    Dropping Log Groups
    To drop an online redo log group, you must have the ALTER DATABASE system privilege. Before dropping an online redo log group, consider the following restrictions and precautions:
    * An instance requires at least two groups of online redo log files, regardless of the number of members in the groups. (A group is one or more members.)
* You can drop an online redo log group only if it is inactive. If you need to drop the current group, first force a log switch to occur (see the sketch after the DROP example below).
    * Make sure an online redo log group is archived (if archiving is enabled) before dropping it. To see whether this has happened, use the V$LOG view.
    SELECT GROUP#, ARCHIVED, STATUS FROM V$LOG;
GROUP# ARC STATUS
------ --- --------
     1 YES ACTIVE
     2 NO  CURRENT
     3 YES INACTIVE
     4 YES INACTIVE
    Drop an online redo log group with the SQL statement ALTER DATABASE with the DROP LOGFILE clause.
    The following statement drops redo log group number 3:
    ALTER DATABASE DROP LOGFILE GROUP 3;
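If the group you want to drop is still CURRENT, you can force a switch first and wait for the group to go INACTIVE (a minimal sketch):
ALTER SYSTEM SWITCH LOGFILE;
-- check V$LOG until the old group shows INACTIVE, then issue the DROP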
    When an online redo log group is dropped from the database, and you are not using the Oracle Managed Files feature, the operating system files are not deleted from disk. Rather, the control files of the associated database are updated to drop the members of the group from the database structure. After dropping an online redo log group, make sure that the drop completed successfully, and then use the appropriate operating system command to delete the dropped online redo log files.
    When using Oracle-managed files, the cleanup of operating systems files is done automatically for you.
Your database won't be affected as long as you keep at least two redo log groups, because the minimum number of redo log groups in a database is two: the LGWR (log writer) process writes to the redo log files in a circular manner. Since you have only 2 groups, the instance would hang if one of them were unavailable. If you want to take one group offline, first add a third group, force a switch so the new group becomes current, and then drop the one you want to remove.
    Please refer to:
    http://download.oracle.com/docs/cd/B10500_01/server.920/a96521/onlineredo.htm#7438
    Kind regards
    Mohamed
    Oracle DBA

  • Question on redo log files at the standby

    Oracle version: 10.2.0.5
    Platform : AIX
    We have 2 node RAC primary with 2 node RAC standby
    Primary Instance1 named as cmapcp1
    Primary Instance2 named as cmapcp2
    Standby Instance1 named as cmapcp3
Standby Instance2 named as cmapcp4

At the standby side:
    SQL> show parameter log_file_name_convert
NAME                  TYPE    VALUE
--------------------- ------- -----------------------------------
log_file_name_convert string  cmapcp1, cmapcp3, cmapcp2, cmapcp4
    Despite the value set for log_file_name_convert, I don't see any change in names of Online and Standby redo logs at the Standby site.
    -- From primary
    SQL> select member,type from v$logfile;
    MEMBER                                             TYPE
    +CMAPCP_DATA01/cmapcp/cmapcp_log01.dbf             ONLINE
    +CMAPCP_DATA01/cmapcp/cmapcp_log02.dbf             ONLINE
    +CMAPCP_DATA01/cmapcp/cmapcp_log03.dbf             ONLINE
    +CMAPCP_DATA01/cmapcp/cmapcp_log04.dbf             ONLINE
    +CMAPCP_DATA01/cmapcp/cmapcp_log05.dbf             ONLINE
    +CMAPCP_DATA01/cmapcp/cmapcp_log06.dbf             ONLINE
    +CMAPCP_DATA01/cmapcp/cmapcp_log11.dbf             STANDBY
    +CMAPCP_DATA01/cmapcp/cmapcp_log12.dbf             STANDBY
    +CMAPCP_DATA01/cmapcp/cmapcp_log13.dbf             STANDBY
    +CMAPCP_DATA01/cmapcp/cmapcp_log14.dbf             STANDBY
    +CMAPCP_DATA01/cmapcp/cmapcp_log15.dbf             STANDBY
    +CMAPCP_DATA01/cmapcp/cmapcp_log16.dbf             STANDBY
    +CMAPCP_DATA01/cmapcp/cmapcp_log17.dbf             STANDBY
    +CMAPCP_DATA01/cmapcp/cmapcp_log18.dbf             STANDBY
    +CMAPCP_DATA01/cmapcp/cmapcp_log19.dbf             STANDBY
    +CMAPCP_DATA01/cmapcp/cmapcp_log20.dbf             STANDBY
16 rows selected.

-- From standby
    SQL> select member,type from v$logfile;
    MEMBER                                             TYPE
    +CMAPCP_DATA01/cmapcp/cmapcp_log01.dbf             ONLINE
    +CMAPCP_DATA01/cmapcp/cmapcp_log02.dbf             ONLINE
    +CMAPCP_DATA01/cmapcp/cmapcp_log03.dbf             ONLINE
    +CMAPCP_DATA01/cmapcp/cmapcp_log04.dbf             ONLINE
    +CMAPCP_DATA01/cmapcp/cmapcp_log05.dbf             ONLINE
    +CMAPCP_DATA01/cmapcp/cmapcp_log06.dbf             ONLINE
    +CMAPCP_DATA01/cmapcp/cmapcp_log11.dbf             STANDBY
    +CMAPCP_DATA01/cmapcp/cmapcp_log12.dbf             STANDBY
    +CMAPCP_DATA01/cmapcp/cmapcp_log13.dbf             STANDBY
    +CMAPCP_DATA01/cmapcp/cmapcp_log14.dbf             STANDBY
    +CMAPCP_DATA01/cmapcp/cmapcp_log15.dbf             STANDBY
    +CMAPCP_DATA01/cmapcp/cmapcp_log16.dbf             STANDBY
    +CMAPCP_DATA01/cmapcp/cmapcp_log17.dbf             STANDBY
    +CMAPCP_DATA01/cmapcp/cmapcp_log18.dbf             STANDBY
    +CMAPCP_DATA01/cmapcp/cmapcp_log19.dbf             STANDBY
    +CMAPCP_DATA01/cmapcp/cmapcp_log20.dbf             STANDBY
16 rows selected.

Another thing I noticed: v$log doesn't list the standby redo logs. This is expected behaviour, I guess.
    Below is the output from Primary and Standby (it is the same)
    set linesize 200
    set pagesize 50
    col member for a50
    break on INST SKIP PAGE on GROUP# SKIP 1
    select l.thread# inst, l.group#,lf.member, lf.type
        from v$log l , v$logfile lf
        where l.group# = lf.group#
        order by 1,2 ;
          INST     GROUP# MEMBER                                             TYPE
             1          1 +CMAPCP_DATA01/cmapcp/cmapcp_log01.dbf             ONLINE
                        2 +CMAPCP_DATA01/cmapcp/cmapcp_log02.dbf             ONLINE
                        3 +CMAPCP_DATA01/cmapcp/cmapcp_log03.dbf             ONLINE
          INST     GROUP# MEMBER                                             TYPE
             2          4 +CMAPCP_DATA01/cmapcp/cmapcp_log04.dbf             ONLINE
                        5 +CMAPCP_DATA01/cmapcp/cmapcp_log05.dbf             ONLINE
                        6 +CMAPCP_DATA01/cmapcp/cmapcp_log06.dbf             ONLINE

    John_75 wrote:
    Thank you ckpt, mseberg.
I think log_file_name_convert is set wrongly, as you've mentioned. But if I don't want any change to the names of the online or standby redo log files on the standby, I don't have to set log_file_name_convert at all, right?

From the same link:
    If you specify an odd number of strings (the last string has no corresponding replacement string), an error is signalled during startup. If the filename being converted matches more than one pattern in the pattern/replace string list, the first matched pattern takes effect. There is no limit on the number of pairs that you can specify in this parameter (other than the hard limit of the maximum length of multivalue parameters).
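For reference, the parameter takes pattern/replacement pairs of file path strings, not instance names. A hypothetical sketch with invented paths; it is a static parameter, so it needs SCOPE=SPFILE and an instance restart:
ALTER SYSTEM SET log_file_name_convert = '/u01/oradata/prim/', '/u01/oradata/stby/' SCOPE=SPFILE;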

  • Question on Redo logs in RAC

    DB version:11.2
    Platform : Solaris 10
    We create RAC DBs manually. Below is a log of the DB creation from Node1 . Instance in Node2 is not yet created (only binary is installed in Node2).
    SQL> conn / as sysdba
    Connected to an idle instance.
    SQL> startup nomount pfile=/u03/oracle/11.2/db_1/dbs/initnehprd1.ora
    ORACLE instance started.
    Total System Global Area 1252643278 bytes                                      
    Fixed Size                  2219208 bytes                                      
    Variable Size             771752760 bytes                                      
    Database Buffers          469762048 bytes                                      
    Redo Buffers                8929280 bytes  
    SQL> CREATE DATABASE nehprd MAXINSTANCES 8 MAXLOGFILES 16 MAXLOGMEMBERS 4 MAXDATAFILES 1024
      2  CHARACTER SET AL32UTF8 NATIONAL CHARACTER SET AL16UTF16
      3  DATAFILE '+DG_DATA01/nehprd/nehprd_system01.dbf' SIZE 1000m EXTENT MANAGEMENT LOCAL
      4  SYSAUX DATAFILE '+DG_DATA01/nehprd/nehprd_sysaux01.dbf' SIZE 600m
      5  DEFAULT TEMPORARY TABLESPACE temp
      6  TEMPFILE '+DG_DATA01/nehprd/nehprd_temp01.dbf' SIZE 2000m EXTENT MANAGEMENT LOCAL UNIFORM SIZE 5m
      7  UNDO TABLESPACE undotbs11 DATAFILE '+DG_DATA01/nehprd/nehprd_undotbs1101.dbf' SIZE 700m
      8  LOGFILE
      9          GROUP 1 ('+DG_DATA01/nehprd/nehprd_log01.dbf') SIZE 150m,
    10          GROUP 2 ('+DG_DATA01/nehprd/nehprd_log02.dbf') SIZE 150m,
    11          GROUP 3 ('+DG_DATA01/nehprd/nehprd_log03.dbf') SIZE 150m
    12  /
    Database created.
    Elapsed: 00:00:18.95
    SQL> CREATE UNDO TABLESPACE undotbs12 DATAFILE '+DG_DATA01/nehprd/nehprd_undotbs1201.dbf' SIZE 700m;
    Tablespace created.
    Elapsed: 00:00:01.30
    SQL> ALTER DATABASE ADD LOGFILE thread 2 GROUP 4 '+DG_DATA01/nehprd/nehprd_log04.dbf' SIZE 150m;
    Database altered.
    Elapsed: 00:00:00.25
    SQL> ALTER DATABASE ADD LOGFILE thread 2 GROUP 5 '+DG_DATA01/nehprd/nehprd_log05.dbf' SIZE 150m;
    Database altered.
    Elapsed: 00:00:00.43
    SQL> ALTER DATABASE ADD LOGFILE thread 2 GROUP 6 '+DG_DATA01/nehprd/nehprd_log06.dbf' SIZE 150m;
Database altered.

But after the above activity, the following log files are created in the DB:
    6 log groups for each Instance and they all are on the same location +DG_DATA01/nehprd  !
    INST_ID     GROUP# STATUS  TYPE             MEMBER                                   IS_
             1          1         ONLINE  +DG_DATA01/nehprd/nehprd_log01.dbf             NO
             1          2         ONLINE  +DG_DATA01/nehprd/nehprd_log02.dbf             NO
             1          3         ONLINE  +DG_DATA01/nehprd/nehprd_log03.dbf             NO
             1          4         ONLINE  +DG_DATA01/nehprd/nehprd_log04.dbf             NO
             1          5         ONLINE  +DG_DATA01/nehprd/nehprd_log05.dbf             NO
             1          6         ONLINE  +DG_DATA01/nehprd/nehprd_log06.dbf             NO
             2          1         ONLINE  +DG_DATA01/nehprd/nehprd_log01.dbf             NO
             2          2         ONLINE  +DG_DATA01/nehprd/nehprd_log02.dbf             NO
             2          3         ONLINE  +DG_DATA01/nehprd/nehprd_log03.dbf             NO
             2          4         ONLINE  +DG_DATA01/nehprd/nehprd_log04.dbf             NO
             2          5         ONLINE  +DG_DATA01/nehprd/nehprd_log05.dbf             NO
         2          6         ONLINE  +DG_DATA01/nehprd/nehprd_log06.dbf             NO

How were redo log groups 4, 5, 6 created for thread 1, and how were redo log groups 1, 2, 3 created for thread 2?

    Hi,
To make things worse, when you query v$logfile, it will show 6 redo logfiles belonging to 6 redo groups for each instance.
The fact that it shows all the redo groups does not mean they all belong to that instance. Try querying v$database or v$datafile: does that mean the database/datafiles belong to only one instance? Of course not.
Isn't this a bit of a bug?
Of course not. It's the concept.
To understand it you need to understand the difference between an instance and a database. A database (i.e. files) can be opened by many instances.
    An Oracle database server consists of a database and at least one database instance (commonly referred to as simply an instance). Because an instance and a database are so closely connected, the term Oracle database is sometimes used to refer to both instance and database. In the strictest sense the terms have the following meanings:
    Database
    A database is a set of files, located on disk, that store data. These files can exist independently of a database instance.
    Database instance
    An instance is a set of memory structures that manage database files. The instance consists of a shared memory area, called the system global area (SGA), and a set of background processes. An instance can exist independently of database files.
    Database: (v$database)
    CONTROLFILE (v$controlfile)
    DATAFILE (v$datafile)
    ONLINELOG (v$logfile,v$log)
    ARCHIVELOG (v$archivelog)
    SPFILE
These views will show the same values in any instance, because if a file (the database) is changed, the change is seen by all instances. That means you do not need to use gv$ for them, because the information is the same in all instances; nor do you need to connect to each instance to query these v$ views, because the information is independent of the instance.
    Instances: (v$instance)
    PARAMETERS (v$parameter)
    MEMORY STRUCTURE (e.g v$session)
The view v$session will show information about sessions from that instance only. In RAC each instance has its own session information, so you will need to query gv$session to get information about sessions from the other instances.
The fact that each instance is assigned its own REDO/UNDO does not mean they are part of the instance; REDO/UNDO are part of the database. They can be written only by the assigned instance, but read by all instances.
It's not a bug: when you query v$datafile, v$logfile or v$controlfile in any instance you will get the same result, because it's the DATABASE (a database, i.e. a set of files, can be opened by many instances).
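To see which thread (and hence which instance) each group is assigned to, a quick check is the THREAD# column of v$log:
SELECT thread#, group#, status FROM v$log ORDER BY thread#, group#;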
    Levi Pereira

  • Multiplexing Redo Log Files question

    If you are running RAC on ASM on a RAID system, is this required?  We are using an HP autoraid which mirrors at the block level and in the documentation about Multiplexing Redo Log Files it says that you do it to protect against media failure.  The autoraid that we are using gives us multiple levels of redundancy against media failure so I was wondering if Multiplexing would be adding more overhead than is needed.  Thanks for your input.

ASM is quite complex and I'm not going to outline all the advantages of or reasons for ASM, but under ASM you can drop and add devices to adjust your capacity online without losing data, which you cannot do using RAID, which requires a re-initialize, for example, regardless of redundancy. Please see the documentation. ASM, like pretty much everything Oracle, adds complexity, and you will have to check your requirements. ASM is, however, pretty much the standard. If you use external RAID, make sure your storage is not using RAID 5 or 0. Regarding logical errors: you could, for example, overwrite or delete a file by mistake, in which case file redundancy does not protect you. If you are looking for reasons or ways not to use ASM, I'm sure you will find them, but what's the point?

  • REDO LOG GROUP QUESTION

    I was reading an article and it said
the distance (in bytes) between the checkpoint position in a redo log group and the end of the current redo log group can never be more than 90% of the size of the smallest redo log group
    Can someone elaborate this

    I'm not sure what you want to elaborate on, but yes, it's true. If your redo logs are 100MB in size, and nothing else causes a checkpoint to take place, you'll have a checkpoint issued when you hit the 90MB mark. The idea is simply that you don't want to sit there doing nothing at all and then bang! the logs switch and you have to go hell-for-leather performing a massive checkpoint, all the while praying more log switches don't mean that you're threatening to catch up with yourself (at which point you'd have the 'thread unable to advance to log...' problem). By implementing the 90% rule, the idea is that your log switch, at worst, will cause a "10%-sized" checkpoint, which should be bearable.
    Of course, the situation is made more complex by the fact that other things DO kick in and cause their own checkpoints, so the interaction between -for example, FAST_START_MTTR_TARGET and the 90% rule can get, er, 'interesting'.
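If you want to watch how the 90% rule and FAST_START_MTTR_TARGET interact on your own system, V$INSTANCE_RECOVERY is the place to look (a rough illustration; exact columns vary a little by version):
SELECT target_mttr, estimated_mttr, log_file_size_redo_blks FROM v$instance_recovery;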

  • Quick Redo Transport Services question

    Hey All,
    I'm just testing Data Guard out for the first time. To make sure I understand this properly, physical archive logs are not copied from the primary to standby (LGWR or ARCH). Rather, the redo is copied into the online standby redo logs, and then the standby archives them when switches occur (which would be when the primary does a switch). Is this correct?
    Thanks!

    Hi
It depends on how you have configured your Data Guard set-up; assuming you have created standby redo logs, then yes, this is roughly what will happen.
    Have you read: http://download.oracle.com/docs/cd/B19306_01/server.102/b14239/concepts.htm#i1033862 ? It does explain all of this.
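On the standby you can watch the standby redo logs being used and archived with something like:
SELECT group#, thread#, sequence#, status FROM v$standby_log;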
    Thanks
    Paul

  • Tuning of Redo logs in data warehouses (dwh)

    Hi everybody,
    I'm looking for some guidance to configure redo logs in data warehouse environments.
Of course we are running in noarchivelog mode and use direct path inserts (nologging) wherever possible.
Nevertheless every ETL process (one process per day) produces 150 GB of redo logs. That seems quite a lot compared to the overall data volume (1 TB tables + indexes).
Actually I'm not sure if there is a tuning problem, but because of the large amount of redo I'm interested in examining it.
    Here are the facts:
    - Oracle 10g, 32 GB RAM
    - 6 GB SGA, 20 GB PGA
    - 5 log groups each with 1 Gb log file
    - 4 MB Log buffer
- every day ca. 150 log switches (with peaks: some log switches only 10 seconds apart)
    - some sysstat metrics after one etl load:
    Select name, to_char(value, '9G999G999G999G999G999G999') from v$sysstat Where name like 'redo %';
    "NAME" "TO_CHAR(VALUE,'9G999G999G999G999G999G999')"
    "redo synch writes" " 300.636"
    "redo synch time" " 61.421"
    "redo blocks read for recovery"" 0"
    "redo entries" " 327.090.445"
    "redo size" " 159.588.263.420"
    "redo buffer allocation retries"" 95.901"
    "redo wastage" " 212.996.316"
    "redo writer latching time" " 1.101"
    "redo writes" " 807.594"
    "redo blocks written" " 321.102.116"
    "redo write time" " 183.010"
    "redo log space requests" " 10.903"
    "redo log space wait time" " 28.501"
    "redo log switch interrupts" " 0"
    "redo ordering marks" " 2.253.328"
    "redo subscn max counts" " 4.685.754"
    So the questions:
Can anybody see tuning needs? Should the redo logs be enlarged, or should more groups be added? What about placing redo logs on solid state disks?
    kind regards,
    Mirko

    user5341252 wrote:
    I'm looking for some guidance to configure redo logs in data warehouse environments.
Of course we are running in noarchivelog mode and use direct path inserts (nologging) wherever possible.
Why "of course"? What's your recovery strategy if you wreck the database?
Nevertheless every ETL process (one process per day) produces 150 GB of redo logs. That seems quite a lot compared to the overall data volume (1 TB tables + indexes).
This may be an indication that you need to do something to reduce index maintenance during data loading.
Actually I'm not sure if there is a tuning problem, but because of the large amount of redo I'm interested in examining it.
For a quick check you might be better off running Statspack (or AWR) snapshots across the start and end of the batch to get an idea of what work goes on and where the most time goes. (A better strategy would be to examine specific jobs in detail, though.)
"redo synch time"          61.421
"redo log space wait time" 28.501
Rough guideline: if the redo is slowing you down, then you've lost less than 15 minutes across the board to the log writer. Given the number of processes loading and the elapsed time to load, is this significant?
"redo buffer allocation retries" 95.901
This figure tells us how OFTEN we couldn't get space in the log buffer, but not how much time we lost as a result. We also need to see your 'log buffer space' wait time.
Can anybody see tuning needs? Should the redo logs be enlarged, or should more groups be added? What about placing redo logs on solid state disks?
Based on the information you've given so far, I don't think anyone should be giving you concrete recommendations on what to do; only suggestions on where to look or what to tell us.
    Regards
    Jonathan Lewis

  • Oracle DataGuard - Standy Redo log on physical Standby db

    Hello Guys,
    A quick question on my 10.2.0.2 db with Windows 2003 x64 OSs.
    I have 2 machines - One for primary & the other for physical standby
    I have successfully setup DG with Real Time Apply and also tested switchover & failover scenarios and they work well and as expected...
    But I have a query which popped up when I was trying to set this up at home.
I created the physical standby db by shutting down the primary db, copying all the datafiles, tempfiles and online redo logs, and then creating a standby control file to be used for starting the physical standby db.
After mounting the standby db, I tried to create standby redo log files starting with Group 4, but I got an error that standby group 4 already exists.
    Upon querying v$logfile view, I noticed that the standby redo logs that I created on primary are also showing up on the standby db which I understand is from the standby control file.
So here is my question: what is the correct method of creating these standby redo logs on the standby database?
    I know that I could drop those 4 standby redo logs from the standby db and recreate them but all the DG docs online and the documents that I have referred say that I should create the standby redo logs on the standby as I did on the primary but how can this duplication be avoided i.e. from the standby control file?
    I know that I could use another method to create a hot backup such as RMAN etc...but I wanted to follow this way of shutting down the primary and copy the relevant database files.
    Any help appreciated...and thanks in advance guys!
    -Bharath

So with the setup that I used, i.e. creating a standby control file which contains info about the standby redo logs at the primary site, should I also copy over the standby redo logs from the primary to the standby, since I only copied the datafiles, tempfile(s) and online redo logs? But then the filenames of the standby redo logs would be exactly the same as on the primary. Will that cause any issue during a switchover/failover?
I don't remember exactly how I did it when I got the whole thing working, but I have a vague recollection that I dropped the standby redo log entries from the standby database and then recreated new ones with different filenames (as compared to the primary).
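A hypothetical sketch of that drop-and-recreate step (group number and path invented; the size should match your online redo logs):
ALTER DATABASE DROP STANDBY LOGFILE GROUP 4;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 ('D:\Orant\oradata\stby\srl04.log') SIZE 150M;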
    Thanks

  • Database, redo log files accessiblity

    Hi guys!
    I have a quick question regarding Oracle File Permissions (Operational Security Check).
    How would one check if the Oracle files are not world read/writeable
    (database, redo log and control files are not world accessible)?
    Thanks!

Hi guys!
I have a quick question regarding Oracle File Permissions (Operational Security Check).
What is this referring to?
How would one check if the Oracle files are not world read/writeable (database, redo log and control files are not world accessible)?
Example from a lab setup on AIX:
    $ find $ORACLE_HOME | wc -l
    18197
    $ find $ORACLE_HOME -perm -0002
    If "at least" the world writable bit is set, find will list files in and below Home top directory.

  • Is There a Way to Run a Redo log for a Single Tablespace?

    I'm still fairly new to Oracle. I've been reading up on the architecture and I am getting the hang of it. Actually, I have 2 questions.
1) My first question is..."Is there a way to run the redo log file...but to specify something so that it only applies to a single tablespace and its related files?"
    So, in a situation where, for some reason, only a single dbf file has become corrupted, I only have to worry about replaying the log for those transactions that affect the tablespace associated with that file.
    2) Also, I would like to know if there is a query I can run from iSQLPlus that would allow me to view the datafiles that are associated with a tablespace.
    Thanks

1) My first question is..."Is there a way to run the redo log file...but to specify something so that it only applies to a single tablespace and its related files?"
No, you can't restrict a redo log file to recording the transaction entries for a particular tablespace; redo is generated for the database as a whole.
In case a file gets corrupted, you need to restore that datafile from the last backup and apply all the archived logs generated since then, plus the online redo logs, to bring the DB back to a consistent state. The recovery can, however, be limited to the corrupted datafile while the rest of the database stays available.
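In SQL*Plus, datafile-level recovery looks roughly like this (path is hypothetical; requires ARCHIVELOG mode, and the database can stay open if it is not a SYSTEM datafile):
ALTER DATABASE DATAFILE '/u01/oradata/db/users01.dbf' OFFLINE;
-- restore the datafile from the last backup at the OS level, then:
RECOVER DATAFILE '/u01/oradata/db/users01.dbf';
ALTER DATABASE DATAFILE '/u01/oradata/db/users01.dbf' ONLINE;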
2) Also, I would like to know if there is a query I can run from iSQLPlus that would allow me to view the datafiles that are associated with a tablespace.
Select file_name, tablespace_name from dba_data_files;
The above will give you the datafiles that each tablespace is made of.
In your case you have created the tablespace with one datafile.
    Message was edited by:
    Maran.E
