CONVERT Redo Logs across platforms - 10g or even 11?

Hi all,
I've seen that it's possible to convert databases across platforms in 10g, but I have not seen a reference to the redo logs. Can these be converted across platforms? I'm interested in AIX to Linux - there are obvious problems of endianness - does anybody have any experience?
TIA.
Paul...

To be able to share archived log files, the two databases must be identical on a block-for-block basis. A logical standby uses SQL Apply, where the physical organization and structure of the database can differ from the primary.
In both cases, Oracle needs to be on the same OS platform.
In 11g, that requirement has not been lifted. See:
What's New in Oracle Data Guard 11g?
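For reference, a quick way to compare the endian formats involved - a sketch against the 10g dictionary view V$TRANSPORTABLE_PLATFORM (AIX is big-endian; Linux on x86 is little-endian):

SELECT platform_name, endian_format
FROM v$transportable_platform
WHERE platform_name LIKE '%AIX%' OR platform_name LIKE '%Linux%';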

Similar Messages

  • FAST_START_MTTR_TARGET & Redo Log Advisor in 10g Standard Edition

    OS: Oracle Enterprise Linux 5 Update 2 (64-bit)
    DB: 10.2.0.4
    I've been trying to set FAST_START_MTTR_TARGET in my 10.2.0.4 Standard Edition database. As well as wanting the value set for recovery reasons, I'd also like to use the Redo Log Advisor to determine the best size for my redo logs.
    [oracle@scgamadb01pl ~]$ sqlplus / as sysdba
    SQL*Plus: Release 10.2.0.4.0 - Production on Thu Oct 16 15:04:32 2008
    Copyright (c) 1982, 2007, Oracle. All Rights Reserved.
    Connected to:
    Oracle Database 10g Release 10.2.0.4.0 - 64bit Production
    SQL> ALTER SYSTEM SET fast_start_mttr_target=300;
    ALTER SYSTEM SET fast_start_mttr_target=300
    ERROR at line 1:
    ORA-02097: parameter cannot be modified because specified value is invalid
    ORA-00439: feature not enabled: Fast-Start Fault Recovery
    Oracle Support tells me that Fast-Start Fault Recovery is not available in 10g Standard Edition, so I'm unable to set FAST_START_MTTR_TARGET. I'm confused because in my 9i Standard Edition instance I have the value set and can adjust it.
    Has anybody else run into this? Is there a workaround other than manually adjusting all the checkpoint-related parameters?

    In 9i, it's also not supposed to work in SE.
    Either Oracle silently ignored the parameter there, or failing to reject the setting was a bug.
    http://download.oracle.com/docs/cd/B10501_01/server.920/a96531/ch5_edit.htm#66165
    You can use LOG_CHECKPOINT_INTERVAL instead
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/initparams109.htm#REFRN10095
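    A minimal sketch of that workaround (the values are illustrative placeholders, not recommendations, and SCOPE=BOTH assumes an spfile):
    -- Drive incremental checkpoints by the number of redo blocks written:
    ALTER SYSTEM SET log_checkpoint_interval = 100000 SCOPE=BOTH;
    -- Or bound the checkpoint lag by time (in seconds):
    ALTER SYSTEM SET log_checkpoint_timeout = 300 SCOPE=BOTH;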

  • Improving redo log writer performance

    I have a database on RAC (2 nodes)
    Oracle 10g
    Linux 3
    2 servers: PowerEdge 2850
    I'm tuning my database with Spotlight. I already have this alert:
    "The Average Redo Write Time alarm is activated when the time taken to write redo log entries exceeds a threshold."
    The servers are not in RAID 5.
    How can I improve redo log writer performance?
    Unlike most other Oracle write I/Os, Oracle sessions must wait for redo log writes to complete before they can continue processing.
    Therefore, redo logs should be placed on fast devices.
    Most modern disks should be able to complete a redo log write in less than 20 milliseconds, and often much faster.
    To reduce redo write time, see Improving redo log writer performance.
    See Also:
    Tuning Contention - Redo Log Files
    Tuning Disk I/O - Archive Writer
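    Before moving anything, it is worth measuring the current redo write latency; a sketch using V$SYSTEM_EVENT (AVERAGE_WAIT is reported in centiseconds, hence the *10 to get milliseconds):
    SELECT event, total_waits, ROUND(average_wait * 10, 1) AS avg_ms
    FROM v$system_event
    WHERE event IN ('log file parallel write', 'log file sync');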

    Some comments on the section that was pulled from Wikipedia. There is some confusion in the market, as there are different types of solid state disks with different pros and cons. The first major point is that the quote pulled from Wikipedia addresses issues with Flash hard disk drives. Flash disks are one type of solid state disk that would be a bad solution for redo acceleration (as I will attempt to describe below), though they could be useful for accelerating read-intensive applications. The type of solid state disk used for redo logs uses DDR RAM as the storage media. You may decide to discount my advice because I work with one of these SSD manufacturers, but I think if you do enough research you will see the point. There are many articles, and many more customers, who have used SSD to accelerate Oracle.
    > Assuming that you are not CPU constrained, moving the online redo to high-speed solid-state disk can make a huge difference.
    Do you honestly think this is practical and usable advice, Don? There is a HUGE price difference between SSD and normal hard disks. Never mind the following disadvantages. Quoting (http://en.wikipedia.org/wiki/Solid_state_disk):
    # Price - As of early 2007, flash memory prices are still considerably higher per gigabyte than those of comparable conventional hard drives - around $10 per GB compared to about $0.25 for mechanical drives.
    Comment: Prices for DDR RAM based systems are actually higher than this, with a typical list price around $1000 per GB. Your concern, however, is not price per capacity but price for performance. How many spindles will you have to spread your redo log across to get the performance that you need? How much impact are the redo logs having on your RAID cache effectiveness? Our system is obviously geared to the enterprise, where Oracle is supporting mission-critical databases and a huge return can be made on accelerating Oracle.
    # Capacity - The capacity of SSDs tends to be significantly smaller than the capacity of HDDs.
    Comment: This statement is true. Per hard disk drive versus per individual solid state disk system, you can typically get higher density of storage with a hard disk drive. However, if your goal is redo log acceleration, storage capacity is not your bottleneck. Write performance, however, can be. Keep in mind, just as with any storage media, you can deploy an array of solid state disks that provide terabytes of capacity (with either DDR or flash).
    # Lower recoverability - After mechanical failure the data is completely lost as the cell is destroyed, while if a normal HDD suffers mechanical failure the data is often recoverable using expert help.
    Comment: If you lose a hard drive holding your redo log, the last thing you are likely to do is have a disk restoration company partially restore your data. You ought to be getting data from your mirror or RAID to rebuild the failed disk. Similarly, with solid state disks (flash or DDR) we recommend host-based mirroring to provide enterprise levels of reliability. In our experience, a DDR based solid state disk has a failure rate equal to the odds of losing two hard disk drives in a RAID set.
    # Vulnerability against certain types of effects, including abrupt power loss (especially DRAM based SSDs), magnetic fields and electric/static charges compared to normal HDDs (which store the data inside a Faraday cage).
    Comment: This statement is all FUD. For example, our DDR RAM based systems have redundant power supplies, N+1 redundant batteries, and four RAID-protected "hard disk drives" for data backup. The memory is ECC protected and Chipkill protected.
    # Slower than conventional disks on sequential I/O.
    Comment: Most flash drives will be slower on sequential I/O than a hard disk drive (to really understand this you should know there are different kinds of flash memory that also impact flash performance). DDR RAM based systems, however, offer enormous performance benefits versus hard disk or flash based systems for sequential or random writes. DDR RAM systems can handle over 400,000 random write I/Os per second (the number is slightly higher for sequential access). We would be happy to share some Oracle ORION benchmark data to make the point. For redo logs on a heavily transactional system, the latency of the redo log storage can be the ultimate limit on the database.
    # Limited write cycles. Typical Flash storage will typically wear out after 100,000-300,000 write cycles, while high endurance Flash storage is often marketed with endurance of 1-5 million write cycles (many log files, file allocation tables, and other commonly used parts of the file system exceed this over the lifetime of a computer). Special file systems or firmware designs can mitigate this problem by spreading writes over the entire device, rather than rewriting files in place.
    Comment: This statement is mostly accurate but refers only to flash drives. DDR RAM based systems, such as those Don's books refer to, do not have this limitation.
    > Looking at many of your postings to Oracle Forums thus far Don, it seems to me that you are less interested in providing actual practical help, and more interested in self-promotion - of your company and the Oracle books produced by it.
    > ... and that is not a very nice approach when people post real problems wanting real-world practical advice and suggestions.
    Comment: Contact us and we will see if we can prove to you that Don, and any number of other reputable Oracle consultants, recommend using DDR based solid state disk to solve redo log performance issues. In fact, if it looks like your system can see a serious performance increase, we would be happy to put you on our evaluation program so that you can try it out at no cost from us.

  • How to remove a redo log file

    Hi Experts,
    I want to remove a wrong redo log file from a 10g R2 database on Windows.
    How do I do that without losing data?
    My steps:
    1. alter system switch logfile;
    2. select * from v$log;
    Based on the output of the above SQL, with which ARC and STATUS values can I drop a redo log file - no archive and ACTIVE status?
    Also, which account should I use for this action? For example, if the SYSTEM account added a redo log file, can I only drop it as SYSTEM? How about SYS?
    Thanks for help with detailed steps.
    Jim
    Edited by: user589812 on Dec 23, 2008 4:35 PM

    Jim,
    Check this link out for how to drop a redo log file
    Make sure a redo log group is archived (if archiving is enabled) before dropping it. To see whether this has happened, use the V$LOG view.
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14231/onlineredo.htm#i1006489
    SELECT GROUP#, ARCHIVED, STATUS FROM V$LOG;
       GROUP# ARC STATUS
            1 YES ACTIVE
            2 NO  CURRENT
            3 YES INACTIVE
            4 YES INACTIVE
    Drop a redo log group with the SQL statement ALTER DATABASE with the DROP LOGFILE clause.
    The following statement drops redo log group number 3:
    ALTER DATABASE DROP LOGFILE GROUP 3;
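    Note that a group that is CURRENT or ACTIVE cannot be dropped; a minimal sketch for aging it out first (which account added the file does not matter - any account with the ALTER DATABASE privilege, such as SYS or SYSTEM, can drop it):
    ALTER SYSTEM SWITCH LOGFILE;   -- make another group CURRENT
    ALTER SYSTEM CHECKPOINT;       -- move the old group from ACTIVE to INACTIVE
    ALTER DATABASE DROP LOGFILE GROUP 3;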

  • Redo Log Groups and Members

    Hi -
    I have a few questions regarding redo log groups and naming conventions I was hoping someone could address or point me to some docs.
    I am multiplexing my control file and redo logs across HDDs for an XE installation.
    The original logs created at install have the naming form of:
    O1_MF_1_462H1GK7_.LOG.
    1. What is behind the naming scheme (specifically the _462H1GK7_ section)?
    2. Is there a generally recognized naming scheme for adding new group members in XE?
    3. I noticed that with any XE install I have done, the redo log groups default to Group 1 and Group 3, with no Group 2 to be found. Is this normal/required? If not, is it best to add group 2 and then remove group 3? I'm not sure if it has much bearing here, but the 10gR2 docs state that skipping group numbers will consume space in the control files.
    Thanks in advance for any assistance,
    Scott

    The odd-looking filename comes from using Oracle Managed Files (OMF). You can override the naming scheme or create your own groups and members. It is very common to include "redo" in the file name along with group and member identifiers. An example would be:
    <path>/redo01a.log
    <path>/redo01b.log
    <path>/redo02a.log
    etc.
    You can see group 01 has two members, a and b. You can also include the SID in the file name, but that can be identified via the path. 462H1GK7 is a unique identifier generated by Oracle; it has no meaning.
    I don't know about XE not creating a group 2. Were there group 2 file(s) left over from a previous install (although OMF probably would have ignored the existing files)? If creating the files manually, you can use "reuse" to use existing files.
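    If you do add a group 2 with explicit names, a minimal sketch (paths and size are illustrative):
    ALTER DATABASE ADD LOGFILE GROUP 2
      ('/u01/oradata/XE/redo02a.log',
       '/u02/oradata/XE/redo02b.log') SIZE 50M;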

  • Frequent redo log switches

    Oracle 9.2.0.1 on a W2K3 server. The redo log is switching every minute, even without any discernible database activity. It's in archive log mode and the redo logs are 100 MB in size, so the archive logs are filling up my hard drive. I'm having a hard time figuring out why the redo logs are switching so often. There are 3 redo log groups. Thanks for any help you can give me.

    If the redo logs are defined as 100M in size and the archived redo logs are 100M in size, then the online redo logs are being filled. As suggested, LogMiner is one way to determine what is happening.
    Are the redo logs switching all the time, or only during periods of peak activity? If the rapid log switches only happen during certain time periods, like 9:30 - 10:30, or the times correspond to the running of certain batch jobs, then you should probably increase the size of your online redo logs.
    If the archived redo logs are small, then obviously something is forcing log switches before the online logs fill. Before running LogMiner, I would check the spfile settings for log_checkpoint_interval, log_checkpoint_timeout, and fast_start_mttr_target to be sure no one made a mistake changing one of those values.
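    A quick way to check those three settings (a sketch):
    SELECT name, value
    FROM v$parameter
    WHERE name IN ('log_checkpoint_interval',
                   'log_checkpoint_timeout',
                   'fast_start_mttr_target');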
    HTH -- Mark D Powell --

  • 10G redo log switches with no activity

    We have been testing 8i to 10g migrations, and two things I have noticed that are different between the 2 databases are:
    1) redo log switching occurring even though the database has no users and is just sitting there. This is not happening in our 8i databases.
    2) trace files of format instance_m001_ and instance_m000_ in ../bdump. Our 8i bdumps normally do not have trc files unless a problem occurs.
    Is it normal for 10g to automatically switch redo logs with no activity, and are these .trc files also normal? In general I get nervous with trc files. -quinn

    Still haven't figured out the redo log switches - they seem to be slowing down - but the excess trace files are apparently caused by a bug and can be fixed with Patch 3432729.

  • How to change redo log size in oracle 10g

    Hi Experts,
    Can anybody confirm how to change redo log size in oracle 10g?
    Amit

    Dear Amit,
    You can enlarge the existing online redo log files by adding new groups with files of the desired size (origlog$/mirrlog$) and then carefully dropping the old groups with their associated inactive files.
    Please refer to SAP Note 309526 - Enlarging redo log files - to perform the activity.
    Steps to perform:
    STEP 1. Analyze the existing situation and prepare an action plan.
    A. You have to ensure that no more than one log switch per minute occurs during peak times.
    It may also be necessary to increase the size of the online redo logs until they are large enough.
    Too many log switches lead to too many checkpoints, which in turn lead to a high write load on the I/O subsystem.
    Use ST04 -> Additional Functions -> Display GV$-Views.
    There you can select:
    GV$LOG_HISTORY -> for determining your existing log switch frequency
    GV$LOG -> lists the status (INACTIVE/CURRENT/ACTIVE), size, and sequence no. of the existing online redo log files
    GV$LOGFILE -> lists the existing online redo log files with their storage paths
    Document the existing online redo log layout before enlarging the redo log files; it will be helpful if something goes wrong while performing the activities.
    B. Based on the above analysis, plan your new redo log groups and their members with the new optimal size, e.g. (paths relative to /oracle/<SID>/):
    Group No.   /origlogA        /mirrlogA        Size
    15          log_g15m1.dbf    log_g15m2.dbf    100 MB
    17          log_g17m1.dbf    log_g17m2.dbf    100 MB
    Group No.   /origlogB        /mirrlogB        Size
    16          log_g16m1.dbf    log_g16m2.dbf    100 MB
    18          log_g18m1.dbf    log_g18m2.dbf    100 MB
    Continue to next.....
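    Putting the steps together, a minimal sketch of the add-then-drop sequence (group numbers, paths, and the 100 MB size are illustrative, following the layout above):
    ALTER DATABASE ADD LOGFILE GROUP 15
      ('/oracle/<SID>/origlogA/log_g15m1.dbf',
       '/oracle/<SID>/mirrlogA/log_g15m2.dbf') SIZE 100M;
    -- ...repeat for groups 16, 17 and 18, then:
    ALTER SYSTEM SWITCH LOGFILE;          -- move the CURRENT position into a new group
    ALTER SYSTEM CHECKPOINT;              -- age the old groups from ACTIVE to INACTIVE
    ALTER DATABASE DROP LOGFILE GROUP 1;  -- drop each old group once it is INACTIVE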

  • Oracle 10g ASM converting noarchive log to archive log

    DATABASE Details
    Oracle Database 10g Release 10.2.0.4.0 - 64bit Production
    PL/SQL Release 10.2.0.4.0 - Production
    CORE 10.2.0.4.0 Production
    TNS for 64-bit Windows: Version 10.2.0.4.0 - Production
    NLSRTL Version 10.2.0.4.0 - Production
    Mode
    Database log mode No Archive Mode
    Automatic archival Disabled
    Archive destination USE_DB_RECOVERY_FILE_DEST
    Oldest online log sequence 26269
    Current log sequence 26274
    SQL>
    TYPE :
    DW Batch Process
    Redo log groups
    6 groups with 2 members each, 2 GB per file. Approx. 400 GB of redo logs are generated per day.
    OS : WINDOWS 2008 SERVER
    Current Size
    SQL> select sum(bytes)/(1024*1024*1024) from v$datafile;
    SUM(BYTES)/(1024*1024*1024)
    1003.86945
    ASM Details
    SELECT GROUP_NUMBER,NAME FROM V$ASM_DISKGROUP;
    GROUP_NUMBER NAME
    1 KK_DATA
    SQL> SELECT dg.name AS diskgroup, SUBSTR(c.instance_name,1,12) AS instance,
    2 SUBSTR(c.db_name,1,12) AS dbname, SUBSTR(c.SOFTWARE_VERSION,1,12) AS software,
    3 SUBSTR(c.COMPATIBLE_VERSION,1,12) AS compatible
    4 FROM V$ASM_DISKGROUP dg, V$ASM_CLIENT c
    5 WHERE dg.group_number = c.group_number;
    DISKGROUP INSTANCE DBNAME SOFTWARE COMPATIBLE
    KK_DATA +asm         KK   10.2.0.4.0   10.2.0.0.0
    Currently the DB is in noarchive mode and needs to be put in archive mode.
    What additional precautions need to be taken, in the case of ASM, for managing archive mode?

    I would recommend creating a new ASM diskgroup and assigning the Flash Recovery Area to it. This is part of ASM best practices: in case your data diskgroup is lost, you can use the RMAN backups/archivelogs for recovery.
    -Amit
    http://askdba.org/weblog/
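    A minimal sketch of that approach (the +FRA diskgroup name and the 500G size are illustrative assumptions):
    ALTER SYSTEM SET db_recovery_file_dest_size = 500G SCOPE=BOTH;
    ALTER SYSTEM SET db_recovery_file_dest = '+FRA' SCOPE=BOTH;
    SHUTDOWN IMMEDIATE
    STARTUP MOUNT
    ALTER DATABASE ARCHIVELOG;
    ALTER DATABASE OPEN;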

  • Select from .. as of - using archived redo logs - 10g

    Hi,
    I was under the impression I could issue a "Select from .. as of" statement back in time if I have the archived redo logs.
    I've been searching for a while and can't find an answer.
    My undo_management=AUTO, database is 10.2.0.1, the retention is the default of 900 seconds as I've never changed it.
    I want to query a table as of 24 hours ago, so I have all the archived redo logs from the last 48 hours in the correct directory.
    When I issue the following query:
    select * from supplier_codes AS OF TIMESTAMP
    TO_TIMESTAMP('2009-08-11 10:01:00', 'YYYY-MM-DD HH24:MI:SS')
    I get a "snapshot too old" ORA-01555 error. I guess that is because my retention is only 900 seconds, but I thought the database would query the archived redo logs - or have I got that totally wrong?!
    My undo tablespace is set to AUTOEXTEND ON and MAXSIZE UNLIMITED so there should be no space issues
    Any help would be greatly appreciated!
    Thanks
    Robert

    Flashback Query is served from undo, not from archived redo logs, so to go back 24 hours you need the undo for those 24 hours of changes still available - which a 900-second retention will not give you.
    See e.g. the App Dev Guide - Fundamentals, chapter on Flashback features: [doc search|http://www.oracle.com/pls/db102/ranked?word=flashback&remark=federated_search].
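    If the goal is a 24-hour flashback window, undo retention has to cover it; a sketch (86400 seconds = 24 hours, and the undo tablespace must be able to hold that much undo):
    ALTER SYSTEM SET undo_retention = 86400 SCOPE=BOTH;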

  • Oracle 10g R2 Database Redo Log Files

    I had 3 redo log files, each 50 MB in size. I added 3 more redo log files, each 250 MB in size.
    The database is running in archive mode, and archive files are being generated with different sizes, like 44 MB and 240 MB. I need to know whether this is harmful to the database or not.
    What should I do to make all archived redo log files the same size?
    Please guide

    Waheed,
    When the redo log switch happens, Oracle asks the archiver to copy the log into an archive file. So if you have any parameter set that makes the switch happen after a certain time, the archive file sizes may vary with the activity of the database. There is no harm in the files having different sizes; what matters is the transaction information contained in them, not their size.
    > to make all archive redo log files generation of equal size what should i do?
    As mentioned by Syed, you can make the switch happen at a defined interval. That will not guarantee equal sizes, but it is a step toward archive files of the same size. That said, you should worry more about making sure the files are available than about their size.
    Aman....
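    The defined-interval switch mentioned above is set with ARCHIVE_LAG_TARGET; a sketch (1800 seconds = 30 minutes is an illustrative value):
    ALTER SYSTEM SET archive_lag_target = 1800 SCOPE=BOTH;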

  • How to reduce excessive redo log generation in Oracle 10G

    Hi All,
    Please let me know if there is any way to reduce excessive redo log generation in Oracle DB 10.2.0.3.
    Previously only about 15 archive log files were generated per day, but now it has increased to 40-45.
    below is the size of redo log file members:
    L.BYTES/1024/1024     MEMBER
    200     /u05/applprod/prdnlog/redolog1a.dbf
    200     /u06/applprod/prdnlog/redolog1b.dbf
    200     /u05/applprod/prdnlog/redolog2a.dbf
    200     /u06/applprod/prdnlog/redolog2b.dbf
    200     /u05/applprod/prdnlog/redolog3a.dbf
    200     /u06/applprod/prdnlog/redolog3b.dbf
    Here is some content from the alert log, for your reference, showing how frequently log switches occur:
    Beginning log switch checkpoint up to RBA [0x441f.2.10], SCN: 4871839752
    Thread 1 advanced to log sequence 17439
    Current log# 3 seq# 17439 mem# 0: /u05/applprod/prdnlog/redolog3a.dbf
    Current log# 3 seq# 17439 mem# 1: /u06/applprod/prdnlog/redolog3b.dbf
    Tue Jul 13 14:46:17 2010
    Completed checkpoint up to RBA [0x441f.2.10], SCN: 4871839752
    Tue Jul 13 14:46:38 2010
    Beginning log switch checkpoint up to RBA [0x4420.2.10], SCN: 4871846489
    Thread 1 advanced to log sequence 17440
    Current log# 1 seq# 17440 mem# 0: /u05/applprod/prdnlog/redolog1a.dbf
    Current log# 1 seq# 17440 mem# 1: /u06/applprod/prdnlog/redolog1b.dbf
    Tue Jul 13 14:46:52 2010
    Completed checkpoint up to RBA [0x4420.2.10], SCN: 4871846489
    Tue Jul 13 14:53:33 2010
    Beginning log switch checkpoint up to RBA [0x4421.2.10], SCN: 4871897354
    Thread 1 advanced to log sequence 17441
    Current log# 2 seq# 17441 mem# 0: /u05/applprod/prdnlog/redolog2a.dbf
    Current log# 2 seq# 17441 mem# 1: /u06/applprod/prdnlog/redolog2b.dbf
    Tue Jul 13 14:53:37 2010
    Completed checkpoint up to RBA [0x4421.2.10], SCN: 4871897354
    Tue Jul 13 14:55:37 2010
    Incremental checkpoint up to RBA [0x4421.4b45c.0], current log tail at RBA [0x4421.4b5c5.0]
    Tue Jul 13 15:15:37 2010
    Incremental checkpoint up to RBA [0x4421.4d0c1.0], current log tail at RBA [0x4421.4d377.0]
    Tue Jul 13 15:35:38 2010
    Incremental checkpoint up to RBA [0x4421.545e2.0], current log tail at RBA [0x4421.54ad9.0]
    Tue Jul 13 15:55:39 2010
    Incremental checkpoint up to RBA [0x4421.55eda.0], current log tail at RBA [0x4421.56aa5.0]
    Tue Jul 13 16:15:41 2010
    Incremental checkpoint up to RBA [0x4421.58bc6.0], current log tail at RBA [0x4421.596de.0]
    Tue Jul 13 16:35:41 2010
    Incremental checkpoint up to RBA [0x4421.5a7ae.0], current log tail at RBA [0x4421.5aae2.0]
    Tue Jul 13 16:42:28 2010
    Beginning log switch checkpoint up to RBA [0x4422.2.10], SCN: 4872672366
    Thread 1 advanced to log sequence 17442
    Current log# 3 seq# 17442 mem# 0: /u05/applprod/prdnlog/redolog3a.dbf
    Current log# 3 seq# 17442 mem# 1: /u06/applprod/prdnlog/redolog3b.dbf
    Thanks in advance

    hi,
    Use the script below to find out in which hours the most archives are generated, then check what runs in those hours - e.g. whether MVs are refreshing, or a job is doing "delete * from table":
    select
      to_char(first_time,'DD-MM-YY') day,
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'00',1,0)),'999') "00",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'01',1,0)),'999') "01",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'02',1,0)),'999') "02",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'03',1,0)),'999') "03",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'04',1,0)),'999') "04",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'05',1,0)),'999') "05",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'06',1,0)),'999') "06",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'07',1,0)),'999') "07",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'08',1,0)),'999') "08",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'09',1,0)),'999') "09",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'10',1,0)),'999') "10",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'11',1,0)),'999') "11",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'12',1,0)),'999') "12",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'13',1,0)),'999') "13",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'14',1,0)),'999') "14",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'15',1,0)),'999') "15",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'16',1,0)),'999') "16",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'17',1,0)),'999') "17",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'18',1,0)),'999') "18",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'19',1,0)),'999') "19",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'20',1,0)),'999') "20",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'21',1,0)),'999') "21",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'22',1,0)),'999') "22",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'23',1,0)),'999') "23",
      count(*) tot
    from v$log_history
    group by to_char(first_time,'DD-MM-YY')
    order by day;
    thanks,
    baskar.l

  • How to recover from corrupt redo log file in non-archived 10g db

    Hello Friends,
    I don't know much about recovering databases. I have a 10.2.0.2 database with a corrupt redo file, and I am getting the following error on startup. (The db is not archived and there is no backup.) Thanks very much for any help.
    Database mounted.
    ORA-00368: checksum error in redo log block
    ORA-00353: log corruption near block 6464 change 9979452011066 time 06/27/2009
    15:46:47
    ORA-00312: online log 1 thread 1: '/dbfiles/data_files/log3.dbf'
    ====
    SQL> select Group#,members,status from v$log;
    GROUP# MEMBERS STATUS
    1 1 CURRENT
    3 1 UNUSED
    2 1 INACTIVE
    ==
    I have tried the following commands so far, but no luck.
    SQL> ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 3;
    Database altered.
    SQL> alter database open resetlogs;
    alter database open resetlogs
    ERROR at line 1:
    ORA-01139: RESETLOGS option only valid after an incomplete database recovery
    SQL> alter database open;
    alter database open
    ERROR at line 1:
    ORA-00368: checksum error in redo log block
    ORA-00353: log corruption near block 6464 change 9979452011066 time 06/27/2009
    15:46:47
    ORA-00312: online log 1 thread 1: '/dbfiles/data_files/log3.dbf'

    user652965 wrote:
    > Thanks very much for your help guys. I appreciate it. Unfortunately none of these commands worked for me. I kept getting an error on clearing logs that the redo log is needed to perform recovery, so it can't be cleared. So I ended up restoring from an earlier snapshot of my db volume. The database is now open.
    > Thanks again for your input.
    And now, as a follow-up, at a minimum you should make sure that all redo log groups have at least 3 members. Then, if you lose a single redo log file, all you have to do is shut down the db and copy one of the good members (of the same group as the lost member) over the lost member.
    And as an additional follow-up: if you value your data, you will run in archivelog mode and take regular backups of the database and archivelogs. If you fail to do this, you are saying that your data is not worth saving.
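    A sketch of that follow-up advice, adding an extra member to an existing group (the path is illustrative):
    ALTER DATABASE ADD LOGFILE MEMBER '/dbfiles/data_files/log1b.dbf' TO GROUP 1;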

  • Discussion about "Automation of resizing redo logs"

    Hi
    I've automated the resizing of redo logs from a PL/SQL procedure. It works very well, but I'd like to minimize the number of log switches required.
    Example: starting from 4 redo log groups, I need to create 4 new redo log groups with the desired size. OK. In the final step I need to delete the old groups... (hum) How do I proceed so that the smallest number of log switches leaves the database with only my 4 new redo groups?
    Actually, as I said at the beginning, the routine works fine, but starting from 4 redo log groups I need 20 switches to complete the algorithm!!!
    My target databases are 8i, 9i or 10g.
    Thanks in advance,
    Regards
    Den

    > I have a primary database and standby database. The archived redo logs will apply to standby database every 1hr.
    What is the DB version?
    Why do you want to shut down the standby database?
    You can either 1) cancel MRP, or 2) set log_archive_dest_state_2='defer' on the primary.
    You do not need to shut down the standby database.
    > Suppose if I want to shut down the standby database, what procedure do I need to follow?
    1) Cancel MRP
    SQL> alter database recover managed standby database cancel;
    2) Shut down the standby
    3) startup mount
    4) Start MRP
    SQL> alter database recover managed standby database disconnect from session;
    > In many sites, I have come across that I need to cancel the managed recovery before shutting down the standby database.
    You do not need to cancel MRP; please read what I wrote above.
    > Ex: If my current apply of archived redo log is from 12:55 PM to 1:05 PM, what happens if I issue SHUTDOWN IMMEDIATE at 1 PM? Also what happens if I issue SHUTDOWN IMMEDIATE after cancelling managed recovery? Consider I am going to start my standby database again at 4 PM. So, what about the redo logs generated at 2 pm, 3 pm and 4 pm - will all these redo logs apply when I start the standby database at 4 PM?
    Recovery is performed based on SCN. So let's suppose:
    Sequence: 100
    FIRST_CHANGE#: 20000
    NEXT_CHANGE#: 21000
    If your MRP was stopped at sequence 100, your SCN would be 21000. Whenever you start MRP, it will look for SCN "21001", which exists in sequence "101", and recovery resumes from there.
    The recovery concept is the same for standby and primary databases.
    > Also, please let me know the need of cancelling managed recovery before shutting down the standby database.
    To unplug safely: when you issue a shutdown, MRP is interrupted, so cancel it properly first and then shut down.
    Hope this clears things up... :)

  • Hoping for a quick response: EXP and archived REDO log files

    I apologize in advance if this question has been asked and answered 100 times. I admit I didn't search; I don't have time. I'm leaving on vacation tomorrow, and I need to know if I'm correct about something to do with backup/restore.
    we have 10g R2 running a single instance on a single server. The application vendor has "embedded" oracle with their application. The vendor's backup is a batch file using EXP - thus:
    exp system/xpwdxx@db full=y file=D:\Orant\admin\db\EXP\db_full.dmp log=D:\Orant\admin\db\EXP\db_full.txt direct=y compress=y
    This command is executed nightly at midnight. The files are then backed up by our nightly backup to offsite storage media.
    The database is running in autoarchive mode. The problem is, the archived redo log files filled the drive they were being stored on, and it is the drive the database is on. I used OS commands to move 136 GB of archived redo logs onto other storage media to free the drive.
    My question: Since the EXP runs at midnight, when there is likely NO activity, do I need to run in autoarchive mode? From what I have read, you cannot even apply archived redo log files to this type of backup strategy (IMP). Is that true? We are OK losing changes since our last EXP. I have read a lot about restoring consistent vs. inconsistent, and just need to know: if my disk fails and I have to start with a clean install of Oracle and nothing else, can I IMP this EXP and get back up and running as of the last EXP? Or do I need the auto-archived redo log files back to July 2009 (136 GB of them)?
    Hoping for a quick response
    Best Regards, and thanks in advance
    Bruce Davis

    Bruce Davis wrote:
    > Amardeep Sidhu
    > Thank you for your quick reply. I am reading in the other responses that since I am using EXP without consistent=y, I might not even have a backup. The application vendor said that with this dmp file they can restore us to the most recent backup. I don't really care for this strategy as it is untested. I asked them to verify that they could restore us, and they said they tested the dmp file and it was OK.
    > Thank you for taking the time to reply.
    > Best Regards
    > Bruce
    The dump file is probably OK in the sense that it is not corrupted and can be used in an imp operation. That doesn't mean the data in it is transactionally consistent. And to use it at all, you have to have a database up and running. If the database is physically corrupted, you'll have to rebuild a new database from scratch before you can even think about using your dmp file.
    Vendors never understand databases. I once had a vendor tell me that Oracle's performance would be intolerable if there were more than 5 concurrent connections. Well, maybe in HIS product... Discussions terminated quickly after he made that statement.
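    For reference, a sketch of the same export taken as a single read-consistent snapshot; CONSISTENT=Y is a standard exp parameter, though it increases undo usage while the export runs:
    exp system/xpwdxx@db full=y consistent=y file=D:\Orant\admin\db\EXP\db_full.dmp log=D:\Orant\admin\db\EXP\db_full.txt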
