Corrupt LOG File MaxDB 7.5 (Netweaver 04)

During the installation the log went full (LOG mode = SINGLE).
I then added a new log volume, but nothing happened. After rebooting the machine,
the database does not come up; krnldiag contains the following:
2004-11-14 22:46:30  7579     11000 vattach  '/sapdb/XI3/sapdata/DISKD0001' devno 1 T58 succeeded
2004-11-14 22:46:30  7520     11597 IO       Open '/sapdb/XI3/saplog/DISKL001' successfull, fd: 16
2004-11-14 22:46:30  7633     12821 TASKING  Thread 7633 starting
2004-11-14 22:46:30  7632     11565 startup  DEVi started
2004-11-14 22:46:30  7520     11597 IO       Open '/sapdb/XI3/saplog/DISKL001' successfull, fd: 17
2004-11-14 22:46:30  7634     12821 TASKING  Thread 7634 starting
2004-11-14 22:46:30  7633     11565 startup  DEVi started
2004-11-14 22:46:30  7579     11000 vattach  '/sapdb/XI3/saplog/DISKL001' devno 2 T58 succeeded
2004-11-14 22:46:30  7520     11597 IO       Open '/data2/sapdb/XI3/DISKL002' successfull, fd: 19
2004-11-14 22:46:30  7635     12821 TASKING  Thread 7635 starting
2004-11-14 22:46:30  7634     11565 startup  DEVi started
2004-11-14 22:46:30  7520     11597 IO       Open '/data2/sapdb/XI3/DISKL002' successfull, fd: 20
2004-11-14 22:46:30  7636     12821 TASKING  Thread 7636 starting
2004-11-14 22:46:30  7635     11565 startup  DEVi started
2004-11-14 22:46:30  7579     11000 vattach  '/data2/sapdb/XI3/DISKL002' devno 3 T58 succeeded
2004-11-14 22:46:30  7579 ERR    30 IOMan    Log volume configuration corrupted: Linkage missmatch between volume 1 and 2
2004-11-14 22:46:30  7579     11000 vdetach  '/sapdb/XI3/sapdata/DISKD0001' devno 1 T58
2004-11-14 22:46:30  7520     12822 TASKING  Thread 7631 joining
2004-11-14 22:46:30  7636     11565 startup  DEVi started
2004-11-14 22:46:30  7631     11566 stop     DEVi stopped
2004-11-14 22:46:30  7520     12822 TASKING  Thread 7632 joining
2004-11-14 22:46:30  7579 ERR    16 Admin    RestartFilesystem failed with 'I/O error'
2004-11-14 22:46:30  7579 ERR     8 Admin    ERROR 'disk_not_accessibl' CAUSED EMERGENCY SHUTDOWN
2004-11-14 22:46:30  7579     12696 DBSTATE  Change DbState to 'SHUTDOWN'(25)
Is there any way to delete the corrupt log volumes in order to add new ones? The data volume is OK.
Thanks
Helmut

The thread was also opened on the mailing list. Here is the answer:
Hi,
in most situations an "add log volume" does not solve
a log full situation, because the added log volume only
becomes accessible at the moment the write position
on the log reaches the end of the old log volume.
If a LOG FULL occurs, save the log!
All "parts" of the log volume are numbered, and each one
stores the identifying number of its predecessor and
its successor in its header page. The message
     "Linkage missmatch between volume"
means that these values no longer match.
Did you somehow change the contents of this file/device?
If so: can you revert the change? Then the restart should
work again. Do a log backup and your database will run again.
If this doesn't work for you, you could patch the
header pages of the log volume (I assume you do not really
want this).
Another way to solve the situation is to do a data backup
in admin mode, reinstall the database, and recover the
data backup, because the removal of a log volume is not
supported.
kind regards, Martin
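
For reference, a rough dbmcli sketch of the last option Martin mentions (data backup in admin mode, reinstall, recover). The instance name XI3 matches the logs above, but the credentials, backup path, and medium name are assumptions, and the exact medium_put/backup_start/recover_start syntax should be checked against the MaxDB 7.5 documentation:

dbmcli -d XI3 -u control,secret <<'EOF'
db_admin                                  # bring the instance into ADMIN state
medium_put DatFull /backup/XI3_full FILE DATA
util_connect control,secret
backup_start DatFull DATA                 # complete data backup in ADMIN mode
EOF
# re-create the instance with fresh log volumes, then:
dbmcli -d XI3 -u control,secret <<'EOF'
db_admin
util_connect control,secret
recover_start DatFull DATA                # recover the data backup
EOF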

Similar Messages

  • Corrupt log file, but how does db keep working?

    We recently had a fairly devastating outage involving a hard drive failure, but are a little mystified about the mechanics of what went on with berkeleydb which I hope someone here can clear up.
    A hard drive running a production instance failed because of a disk error, and we had to do a hard reboot to get the system to come back up and right itself (we are running RedHat Enterprise). We actually had three production environments running on that machine, and two came back just fine, but in one, we would get this during recovery:
    BDBStorage> Running recovery.
    BerkeleyDB> : Log file corrupt at LSN: [4906][8294478]
    BerkeleyDB> : PANIC: Invalid argument
    BerkeleyDB> : /usr/local/BerkeleyDB.4.8/lib/libdb_java-4.8.so(__os_stack+0x20) [0x2c23af2380]
    BerkeleyDB> : /usr/local/BerkeleyDB.4.8/lib/libdb_java-4.8.so(__os_abort+0x15) [0x2c23aee9c9]
    BerkeleyDB> : /usr/local/BerkeleyDB.4.8/lib/libdb_java-4.8.so(__env_panic+0xef) [0x2c23a796f9]
    BerkeleyDB> : /usr/local/BerkeleyDB.4.8/lib/libdb_java-4.8.so(__env_attach_regions+0x788) [0x2c23aae82c]
    BerkeleyDB> : /usr/local/BerkeleyDB.4.8/lib/libdb_java-4.8.so(__env_open+0x130) [0x2c23aad1e7]
    BerkeleyDB> : /usr/local/BerkeleyDB.4.8/lib/libdb_java-4.8.so(__env_open_pp+0x2e7) [0x2c23aad0af]
    BerkeleyDB> : /usr/local/BerkeleyDB.4.8/lib/libdb_java-4.8.so [0x2c23949dc7]
    BerkeleyDB> : /usr/local/BerkeleyDB.4.8/lib/libdb_java-4.8.so(Java_com_sleepycat_db_internal_db_1javaJNI_DbEnv_1open+0xbc) [0x2c239526ea]
    BerkeleyDB> : [0x2a99596e77]
    We thought, well, perhaps this is related to the disk error, it corrupted a log file and then died. Luckily (or so we thought) we diligently do backups twice a day, and keep a week's worth around. These are made using the standard backup procedure described in the developer's guide, and whenever we've had to restore them, they have been just fine (we've been using our basic setup for something like 9 years now). However, as we retrieved backup after backup, going back three or four days, they all had similar errors, always starting with [4096]. Then we noticed an odd log file, numbered with 4096, which sat around in our logs directory ever since it was created. Eventually we found a good backup, but the customer lost several days' worth of work.
    My question here is, how could a log file be corrupted for days and days but not be noticed, say during a checkpoint (which we run every minute or so)? Doesn't a checkpoint itself basically scan the logs, and shouldn't that have hit the corrupt part not long after it was written? The system was running without incident, getting fairly heavy use, so it really mystifies me as to how that issue could be sitting around for days and days like that.
    For now all we can promise the customer is that we will automatically restore every backup as soon as it's made, and if something like this happens, we immediately try a graceful shutdown, and if that doesn't come back up, we automatically go back to the 12-hour-old backup. And perhaps we should be doing that anyway, but still, I would like to understand what happened here. Any ideas?

    Please note, I don't want to make it sound like I'm somehow blaming berkeleydb for the outage-- we realize in hindsight there were better things to do than go back to an old backup, but the customer wanted an immediate answer, even if it was suboptimal. I just feel like I am missing something major about how the system works.
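
    One way to make the "restore every backup as soon as it's made" check cheap is to run catastrophic recovery against a scratch copy of each backup; it replays every log record and fails loudly on a corrupt one. A minimal sketch, assuming the stock db_recover/db_verify utilities and hypothetical paths:
    cp -r /backups/latest /tmp/bdb_check
    db_recover -c -h /tmp/bdb_check        # catastrophic recovery: replays all log files
    for f in /tmp/bdb_check/*.db; do
        db_verify -h /tmp/bdb_check "$(basename "$f")"   # structural check of each database
    done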

  • Steps to empty SAPDB (MaxDB) log file

    Hello All,
    I am on Red Hat Linux with NW 7.1 CE and SAP DB (MaxDB) as the back end. I am trying to log in, but my log area is full. I want to empty the log, but I haven't done any data backup yet. Can anybody guide me on how to proceed with this problem?
    I have some idea of what to do, along the lines of the steps below:
    1. Take a data backup (I would like to skip this step if possible, since this is a QA system and we are not a production company).
    2. Take a log backup using the same method as the data backup, but with type Log (am I right, or is there something else?).
    3. The log will automatically be overwritten after the log backups.
    Or, as an alternative, should I use the following, which I found in Note 869267 - FAQ: SAP MaxDB LOG area?
    Can the log area be overwritten cyclically without having to make a log backup?
    Yes, the log area can be automatically overwritten without log backups. Use the DBM command
    util_execute SET LOG AUTO OVERWRITE ON
    to set this status. The behavior of the database corresponds to the DEMO log mode in older versions. With version 7.4.03 and above, this behavior can be set online.
    Log backups are not possible after switching on automatic overwrite. Backup history is broken down and flagged by the abbreviation HISTLOST in the backup history (dbm.knl file). The backup history is restarted when you switch off automatic overwrite without log backups using the command
    util_execute SET LOG AUTO OVERWRITE OFF
    and by creating a complete data backup in the ADMIN or ONLINE status.
    Automatic overwrite of the log area without log backups is NOT suitable for production operation. Since no backup history exists for the following changes in the database, you cannot track transactions in the case of recovery.
    Any reply will be highly appreciated.
    Thanks
    Mani
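
    For what it's worth, steps 1-3 map onto a short dbmcli sequence. A sketch, assuming an instance named NW1, control/secret credentials, and backup media that still have to be defined (all of these names are assumptions):
    dbmcli -d NW1 -u control,secret <<'EOF'
    medium_put DatFull /backup/NW1_data FILE DATA
    medium_put LogFile /backup/NW1_log FILE LOG
    util_connect control,secret
    backup_start DatFull DATA      # step 1: complete data backup
    backup_start LogFile LOG       # step 2: log backup, which frees the log area (step 3)
    EOF
    # the Note 869267 alternative for non-production systems (breaks the backup history):
    dbmcli -d NW1 -u control,secret util_execute SET LOG AUTO OVERWRITE ON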

    Hello Mani,
    1. Please review the document "Using SAP MaxDB X Server Behind a Firewall" in the MaxDB library:
    http://maxdb.sap.com/doc/7_7/44/bbddac91407006e10000000a155369/content.htm
               "To enable access to X Server (and thus the database) behind a firewall using a client program such as Database Studio, open the necessary ports in your firewall and restrict access to these ports to only those computers that need to access the database."
                 Is the database server behind a firewall? If yes, then the X Server port needs to be open. You could restrict access to this port to the computers of your database administrators, for example.
    Is "nq2host" the name of the database server? Can you ping the server "nq2host" from your machine?
    2. And if the database server and your PC are in the same local area network, you can start the x_server on the database server and connect to the database using DB Studio on your PC, as Lars already told you.
    See the document "Network Communication" at
    http://maxdb.sap.com/doc/7_7/44/d7c3e72e6338d3e10000000a1553f7/content.htm
    Thank you and best regards, Natalia Khlopina

  • MAXDB copy log file from one server to the other server.

    Hi All,
    I want to copy a log file from one server to another. The database is MaxDB, and I don't know how to do it. I want to do it through the command prompt (a set of commands is required); the front-end tool we are using is DBM (Database Manager). Please help.
    Regards
    M.A

    Hi,
    Basically, the process is log shipping: transferring logs from the DC to the DR system.
    For that, you can check HowTo - Standby DB log shipping - MaxDB - SCN Wiki
    This is script based. I have not done it myself, but the idea remains the same.
    Regards,
    Divyanshu
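
    Beyond the wiki script, the underlying loop per shipped log piece is small. A hedged sketch of one iteration, assuming an instance SID, a file log medium named LogFile already defined with medium_put on both sides, and a standby kept in ADMIN state (all names here are assumptions):
    # on the primary (DC) server: write a log backup to a file
    dbmcli -d SID -u control,secret backup_start LogFile LOG
    # ship the file to the standby (DR) server
    scp /backup/SID_log drhost:/backup/
    # on the standby (DR) server: apply the shipped log backup
    dbmcli -d SID -u control,secret recover_start LogFile LOG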

  • Transaction log shipping restore with standby failed: log file corrupted

    The transaction log restore failed and I get the error below, for only one of the four databases on the same SQL Server; the remaining ones are working fine.
    Date: 9/10/2014 6:09:27 AM
    Log: Job History (LSRestore_DATA_TPSSYS)
    Step ID: 1
    Server: DATADR
    Job Name: LSRestore_DATA_TPSSYS
    Step Name: Log shipping restore log job step.
    Duration: 00:00:03
    Sql Severity: 0
    Sql Message ID: 0
    Operator Emailed:
    Operator Net sent:
    Operator Paged:
    Retries Attempted: 0
    Message
    2014-09-10 06:09:30.37  *** Error: Could not apply log backup file '\\10.227.32.27\LsSecondery\TPSSYS\TPSSYS_20140910003724.trn' to secondary database 'TPSSYS'.(Microsoft.SqlServer.Management.LogShipping) ***
    2014-09-10 06:09:30.37  *** Error: An error occurred while processing the log for database 'TPSSYS'. 
    If possible, restore from backup. If a backup is not available, it might be necessary to rebuild the log.
    An error occurred during recovery, preventing the database 'TPSSYS' (13:0) from restarting. Diagnose the recovery errors and fix them, or restore from a known good backup. If errors are not corrected or expected, contact Technical Support.
    RESTORE LOG is terminating abnormally.
    Processed 0 pages for database 'TPSSYS', file 'TPSSYS' on file 1.
    Processed 1 pages for database 'TPSSYS', file 'TPSSYS_log' on file 1.(.Net SqlClient Data Provider) ***
    2014-09-10 06:09:30.37  *** Error: Could not log history/error message.(Microsoft.SqlServer.Management.LogShipping) ***
    2014-09-10 06:09:30.37  *** Error: ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.(System.Data) ***
    2014-09-10 06:09:30.37  Skipping log backup file '\\10.227.32.27\LsSecondery\TPSSYS\TPSSYS_20140910003724.trn' for secondary database 'TPSSYS' because the file could not be verified.
    2014-09-10 06:09:30.37  *** Error: Could not log history/error message.(Microsoft.SqlServer.Management.LogShipping) ***
    2014-09-10 06:09:30.37  *** Error: ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.(System.Data) ***
    2014-09-10 06:09:30.37  *** Error: An error occurred restoring the database access mode.(Microsoft.SqlServer.Management.LogShipping) ***
    2014-09-10 06:09:30.37  *** Error: ExecuteScalar requires an open and available Connection. The connection's current state is closed.(System.Data) ***
    2014-09-10 06:09:30.37  *** Error: Could not log history/error message.(Microsoft.SqlServer.Management.LogShipping) ***
    2014-09-10 06:09:30.37  *** Error: ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.(System.Data) ***
    2014-09-10 06:09:30.37  *** Error: An error occurred restoring the database access mode.(Microsoft.SqlServer.Management.LogShipping) ***
    2014-09-10 06:09:30.37  *** Error: ExecuteScalar requires an open and available Connection. The connection's current state is closed.(System.Data) ***
    2014-09-10 06:09:30.37  *** Error: Could not log history/error message.(Microsoft.SqlServer.Management.LogShipping) ***
    2014-09-10 06:09:30.37  *** Error: ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.(System.Data) ***
    2014-09-10 06:09:30.37  Deleting old log backup files. Primary Database: 'TPSSYS'
    2014-09-10 06:09:30.37  *** Error: Could not log history/error message.(Microsoft.SqlServer.Management.LogShipping) ***
    2014-09-10 06:09:30.37  *** Error: ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.(System.Data) ***
    2014-09-10 06:09:30.37  The restore operation completed with errors. Secondary ID: 'dd25135a-24dd-4642-83d2-424f29e9e04c'
    2014-09-10 06:09:30.37  *** Error: Could not log history/error message.(Microsoft.SqlServer.Management.LogShipping) ***
    2014-09-10 06:09:30.37  *** Error: ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.(System.Data) ***
    2014-09-10 06:09:30.37  *** Error: Could not cleanup history.(Microsoft.SqlServer.Management.LogShipping) ***
    2014-09-10 06:09:30.37  *** Error: ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.(System.Data) ***
    2014-09-10 06:09:30.38  ----- END OF TRANSACTION LOG RESTORE    
    Exit Status: 1 (Error)

    I have restored the database to a new server and set up log shipping again, but it gives the same error. If it were a network issue, I believe it would occur for every database on that server with a log shipping configuration.
    The error:
    Message
    2014-09-12 10:50:03.18    *** Error: Could not apply log backup file 'E:\LsSecondery\EAPDAT\EAPDAT_20140912051511.trn' to secondary database 'EAPDAT'.(Microsoft.SqlServer.Management.LogShipping) ***
    2014-09-12 10:50:03.18    *** Error: An error occurred while processing the log for database 'EAPDAT'.  If possible, restore from backup. If a backup is not available, it might be necessary to rebuild the log.
    An error occurred during recovery, preventing the database 'EAPDAT' (8:0) from restarting. Diagnose the recovery errors and fix them, or restore from a known good backup. If errors are not corrected or expected, contact Technical Support.
    RESTORE LOG is terminating abnormally.
    Can this happen due to database or log file corruption? If so, how can I check to verify the issue?
    It is not necessarily the network; if it were, it would happen every day. IMO it basically happens when the load on the network is high and you transfer a log file which is big in size.
    As per the message, the database engine was not able to restore the log backup and said that you must rebuild the log, because it did not find the log to be consistent. From here it looks like log corruption.
    Is it the same log file you restored? If so, since the log file was corrupt, it would of course give an error on whatever server you restore it to.
    Can you try creating log shipping on the new server by taking a fresh full and log backup and see if you get the issue there as well? I would also suggest raising a case with Microsoft and letting them identify the root cause of this problem.
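
    A quick way to check whether a shipped .trn file is even readable is RESTORE VERIFYONLY, which validates the backup media without applying it. A sketch using sqlcmd on the secondary with the share path from the error above (the server name and Windows authentication are assumptions):
    sqlcmd -S DATADR -Q "RESTORE VERIFYONLY FROM DISK = N'\\10.227.32.27\LsSecondery\TPSSYS\TPSSYS_20140910003724.trn'"
    # if VERIFYONLY fails here as well, the file was damaged at or before the copy step,
    # not by the restore job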

  • How to remove log file on maxdb

    Hi,
    I have installed IDES ECC 5.0 on MaxDB 7.5. I have 4 log files of 1 GB each, and I would like to remove them.
    Is it possible to remove log files without the complete data backup procedure in Database Manager?
    Would it help to switch the database to overwrite log mode?
    We use a test system with no need for recovery procedures.
    Regards,
    Aleksandar

    Hi,
    you should not delete the log volumes.
    For your case it would suffice to switch on the auto-overwrite mode of the log.
    You can do this most conveniently by using the Database Manager GUI:
    1. Put the DB instance into the ADMIN state.
    2. Go to 'Configuration' -> 'Log Settings'.
    3. From the wizard, select 'Overwrite Mode...' and follow the described steps.
    Kind regards,
    Roland
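
    For completeness, a hedged dbmcli equivalent of these GUI steps (the instance name E50 and the credentials are assumptions; the util_execute command itself is quoted from Note 869267 above):
    dbmcli -d E50 -u control,secret db_admin                                 # step 1: ADMIN state
    dbmcli -d E50 -u control,secret util_execute SET LOG AUTO OVERWRITE ON   # steps 2-3
    dbmcli -d E50 -u control,secret db_online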

  • How to add a log file to an MaxDB?

    I see that we have only one log file there. I want to add more. Please help with either the syntax or the GUI menu path. Thanks!

    Hi Linda,
    of course the topic is covered in the MaxDB documentation (e.g. here http://maxdb.sap.com/documentation/)
    [Adding Log Volumes|http://maxdb.sap.com/doc/7_7/0c/b9bc43f7b24eef9afff1968bf4fcdc/content.htm]
    However, you should be really sure that you actually need to add a volume to the log area.
    Usually this is not the case.
    Most often, people believe they should add a log volume when they should run a log backup.
    regards,
    Lars
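
    If a volume really is needed, the DBM command behind the documented GUI path looks roughly like this (the volume path, the size in pages, and the credentials are assumptions; check the "Adding Log Volumes" page linked above for the authoritative syntax):
    dbmcli -d SID -u control,secret db_addvolume LOG /sapdb/SID/saplog/DISKL002 F 64000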

  • How to recover from one corrupted redo log file in NOARCHIVE mode?

    Oracle 10.2.1.
    The redo log file was corrupted and Oracle can't work.
    When I use STARTUP MOUNT, I get no error message.
    SQL> startup mount
    ORACLE instance started.
    Total System Global Area 1652555776 bytes
    Fixed Size 1251680 bytes
    Variable Size 301991584 bytes
    Database Buffers 1342177280 bytes
    Redo Buffers 7135232 bytes
    Database mounted.
    But I have some applications that depend on Oracle, and they can't be started.
    So I tried STARTUP OPEN, but I got an error message.
    SQL> startup open
    ORACLE instance started.
    Total System Global Area 1652555776 bytes
    Fixed Size 1251680 bytes
    Variable Size 301991584 bytes
    Database Buffers 1342177280 bytes
    Redo Buffers 7135232 bytes
    Database mounted.
    ORA-00368: checksum error in redo log block
    ORA-00353: log corruption near block 497019 change 42069302 time 11/07/2007
    23:43:09
    ORA-00312: online log 4 thread 1:
    'G:\ORACLE\PRODUCT\10.2.0\ORADATA\NMDATA\REDO04.LOG'
    So, how can I restore and recover my database?
    If I use RMAN, how do I do that?
    Any help will be appreciated.
    Thanks.

    Hi, Yingkuan,
    Thanks for the help.
    Actually, I have 10 redo log files, and all of them are here.
    I tried your suggestion:
    alter database clear unarchived logfile group 4;
    The error msg I got is the same as before:
    SQL> alter database clear unarchived logfile group 4;
    alter database clear unarchived logfile group 4
    ERROR at line 1:
    ORA-01624: log 4 needed for crash recovery of instance nmdata (thread 1)
    ORA-00312: online log 4 thread 1:
    'G:\ORACLE\PRODUCT\10.2.0\ORADATA\NMDATA\REDO04.LOG'
    Compared to losing all the data, it is OK for me to lose some of it.
    I have more than 1 TB of data stored, and 99.9% of it is raster images.
    The loading of these data was the headache. If I can save them, I can bear the loss.
    I want to grasp at the last straw.
    But I don't know how to set the parameter: allowresetlogs_corruption
    I got the error msg:
    SQL> set allowresetlogs_corruption=true;
    SP2-0735: unknown SET option beginning "_allow_res..."
    I have run the commands:
    Recover database until cancel
    Alter database open resetlogs
    The error msg I got is the following:
    SQL> recover database until cancel
    ORA-00279: change 41902930 generated at 11/05/2007 22:01:48 needed for thread 1
    ORA-00289: suggestion :
    D:\ORACLE\PRODUCT\10.2.0\FLASH_RECOVERY_AREA\NMDATA\ARCHIVELOG\2007_11_09\O1_MF_
    1_1274_%U_.ARC
    ORA-00280: change 41902930 for thread 1 is in sequence #1274
    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
    cancel
    ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
    ORA-01194: file 1 needs more recovery to be consistent
    ORA-01110: data file 1: 'D:\ORACLE\PRODUCT\10.2.0\ORADATA\NMDATA\SYSTEM01.DBF'
    ORA-01112: media recovery not started
    SQL>
    From the log file, I got the following:
    ALTER DATABASE RECOVER database until cancel
    Fri Nov 09 00:12:48 2007
    Media Recovery Start
    parallel recovery started with 2 processes
    ORA-279 signalled during: ALTER DATABASE RECOVER database until cancel ...
    Fri Nov 09 00:13:20 2007
    ALTER DATABASE RECOVER CANCEL
    Fri Nov 09 00:13:21 2007
    ORA-1547 signalled during: ALTER DATABASE RECOVER CANCEL ...
    Fri Nov 09 00:13:21 2007
    ALTER DATABASE RECOVER CANCEL
    ORA-1112 signalled during: ALTER DATABASE RECOVER CANCEL ...
    Thank you very much, and I am looking forward to your follow-up input.
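
    A note on the SP2-0735 error above: _allow_resetlogs_corruption is a hidden init.ora parameter, not a SQL*Plus SET option, which is why SET rejects it. A hedged, last-resort sketch (unsupported; it can leave logical corruption behind and is normally used only under Oracle Support guidance, after taking a cold backup):
    SQL> ALTER SYSTEM SET "_allow_resetlogs_corruption"=TRUE SCOPE=SPFILE;
    SQL> SHUTDOWN IMMEDIATE
    SQL> STARTUP MOUNT
    SQL> RECOVER DATABASE UNTIL CANCEL;
    (enter CANCEL at the prompt)
    SQL> ALTER DATABASE OPEN RESETLOGS;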

  • Log file corrupt and can't open the database.

    I use Replication Manager to manage my Replication; and there is only one site currently in my application.
    I killed the App process with signal -9. (ie. $ kill -9 appID).
    Then I try to restart the App, but the event_callback function got a PANIC event when to open the envirment.
    The open flag is:
    flags = DB_CREATE | DB_INIT_LOCK | DB_INIT_LOG | DB_INIT_MPOOL |
    DB_INIT_REP | DB_INIT_TXN | DB_RECOVER | DB_THREAD;
    ret = dbenv->open(dbenv, path, flags | DB_SYSTEM_MEM, 0);
    What's the reason cause this problem?
    How can recover it ?
    The logs list as below:
    [D 11/19 09:44] dbpf using shm key: 977431433
    [E 11/19 09:44] [src/io/dbpf-mgmt.c, 400]
    [E 11/19 09:44] yuga: DB_LOGC->get: LSN 1/9776: invalid log record header
    [E 11/19 09:44] yuga: Log file corrupt at LSN: [1][9906]
    [E 11/19 09:44] yuga: PANIC: Invalid argument
    [E 11/19 09:44] [src/io/dbpf-mgmt.c] Rep EventGot a panic: Invalid argument (22)
    Edited by: dbnicker on Nov 18, 2010 6:08 PM

    First, what version of Berkeley DB are you running and on what system?
    The error indicates something amiss in the log. The LSN values are quite
    small. Can you run 'db_printlog -N -h <env path>' and post the log
    contents?
    If you are using BDB 5.0 or later, can you also post the contents of
    the __db.rep.diag00 file in the environment home directory? Thanks.
    Sue LoVerso
    Oracle

  • Redo log files lost in Disk Corruption/failure

    DB version: 10gR2
    In one of our test databases, I have lost all members of a redo log group due to disk corruption (it was bad practice to store both members on the same filesystem).
    The database is still up. We don't have a backup of the lost redo log group. Altogether we had 3 redo log groups. This is what I am going to do to fix the issue:
    Step 1. Add a redo log group:
    ALTER DATABASE
      ADD LOGFILE ('/u04/oradata/ora67tst/log4a.rdo',
                   '/u05/oradata/ora67tst/log4b.rdo') SIZE 15M;
    Step 2. Drop the corrupted redo log group:
    ALTER DATABASE DROP LOGFILE GROUP 2;
    Is this the right way to go about fixing this issue?
    The documentation says that you need to make sure a redo log group is archived before you drop it.
    When I use the query:
    SQL>SELECT GROUP#, ARCHIVED, STATUS FROM V$LOG;
        GROUP# ARC STATUS
             1 YES  ACTIVE
             2 NO  CURRENT
             3 NO  INACTIVE
    How can I force the archiving of redo log group 2?
    Edited by: Citizen_2 on May 28, 2009 10:10 AM
    Edited by: Citizen_2 on May 28, 2009 10:11 AM

    Citizen_2 wrote:
    How can I force the archiving of redo log group 2?
    How could you archive a log group when you have lost all members of that group?
    Have you checked out this documentation: "Recovering After the Loss of Online Redo Log Files"?
    More specifically:
    If the group is: Current
    Then: It is the log that the database is currently writing to.
    And you should: Attempt to clear the log; if impossible, then you must restore a backup and perform incomplete recovery up to the most recent available redo log.
    HTH!
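
    A hedged sketch of the "attempt to clear the log" step for the lost group (group 2 here; CLEAR UNARCHIVED is needed because the group was never archived):
    SQL> ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 2;
    -- clearing may be refused for a CURRENT group (e.g. ORA-01624); in that case the
    -- fallback from the table above applies: restore a backup and perform incomplete
    -- recovery. Take a fresh full backup afterwards, since clearing breaks the redo chain.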

  • How to correct the corrupted archive log files?

    Friends,
    Our restore method is the cloning type.
    Today I fired this statement (the one we usually use for the restore): "recover database until cancel using backup controlfile".
    I have 60 files in the archive folder.
    It processes only 50 files; when it comes to the 51st file, it stops and says it could not copy the file.
    Is that particular file corrupted?
    Or is there some restriction on the number of archive log files that can be copied? I mean, only 60 files or so?
    Suppose the archive file is corrupted: how can I correct it?
    thanks
    sathyguy

    Now, this is the error message:
    ORA-00310: archived log contains sequence 17480; sequence 17481 required
    ORA-00334: archived log: '/archive2/OURDB/archive/ar0000017481.arc'
    I googled and found out:
    ORA-00310 archived log contains sequence string; sequence string required
    Cause: The archived log is out of sequence, probably because it is corrupted or the wrong redo log file name was specified during recovery.
    Action: Specify the correct redo log file and then retry the operation.
    So, from the above error messages, I think the particular archive file (17481) is corrupted. Now, can I correct this corrupted archive file?
    According to the action above, it says to specify the correct redo log file. If the file is not corrupted, then where should I specify the redo log file's path?
    thanks
    sathyguy
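
    To answer where to specify the file: recovery prompts for each log, and you can type the path (or CANCEL) at the prompt instead of accepting the suggestion. A sketch reusing the path from the messages above:
    SQL> RECOVER DATABASE UNTIL CANCEL USING BACKUP CONTROLFILE;
    ORA-00279: change ... needed for thread 1
    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
    /archive2/OURDB/archive/ar0000017481.arc
    -- if sequence 17481 really is corrupted, enter CANCEL at the prompt instead and
    -- open with ALTER DATABASE OPEN RESETLOGS; changes after 17480 are then lost.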

  • How to recover from corrupt redo log file in non-archived 10g db

    Hello Friends,
    I don't know much about recovering databases. I have a 10.2.0.2 database with a corrupt redo file, and I am getting the following error on startup (the db is non-archived and there is no backup). Thanks very much for any help.
    Database mounted.
    ORA-00368: checksum error in redo log block
    ORA-00353: log corruption near block 6464 change 9979452011066 time 06/27/2009
    15:46:47
    ORA-00312: online log 1 thread 1: '/dbfiles/data_files/log3.dbf'
    ====
    SQL> select Group#,members,status from v$log;
    GROUP# MEMBERS STATUS
    1 1 CURRENT
    3 1 UNUSED
    2 1 INACTIVE
    ==
    I have tried the following commands so far, but no luck:
    SQL> ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 3;
    Database altered.
    SQL> alter database open resetlogs;
    alter database open resetlogs
    ERROR at line 1:
    ORA-01139: RESETLOGS option only valid after an incomplete database recovery
    SQL> alter database open;
    alter database open
    ERROR at line 1:
    ORA-00368: checksum error in redo log block
    ORA-00353: log corruption near block 6464 change 9979452011066 time 06/27/2009
    15:46:47
    ORA-00312: online log 1 thread 1: '/dbfiles/data_files/log3.dbf'

    user652965 wrote:
    Thanks very much for your help guys. I appreciate it. Unfortunately none of these commands worked for me. I kept getting an error on clearing logs that the redo log is needed to perform recovery, so it can't be cleared. So I ended up restoring from an earlier snapshot of my db volume. The database is now open.
    Thanks again for your input.
    And now, as a follow-up: at a minimum you should make sure that all redo log groups have at least 3 members. Then, if you lose a single redo log file, all you have to do is shut down the db and copy one of the good members (of the same group as the lost member) over the lost member.
    And as an additional follow-up, if you value your data you will run in archivelog mode and take regular backups of the database and archivelogs. If you fail to do this you are saying that your data is not worth saving.
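
    A hedged sketch of this follow-up advice; the member paths are assumptions, and this adds a second member per group (repeat with a third path per group for the recommended three):
    SQL> ALTER DATABASE ADD LOGFILE MEMBER '/dbfiles/data_files/log1b.dbf' TO GROUP 1;
    SQL> ALTER DATABASE ADD LOGFILE MEMBER '/dbfiles/data_files/log2b.dbf' TO GROUP 2;
    SQL> ALTER DATABASE ADD LOGFILE MEMBER '/dbfiles/data_files/log3b.dbf' TO GROUP 3;
    -- and to run in ARCHIVELOG mode from then on, as recommended:
    SQL> SHUTDOWN IMMEDIATE
    SQL> STARTUP MOUNT
    SQL> ALTER DATABASE ARCHIVELOG;
    SQL> ALTER DATABASE OPEN;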

  • Corrupt logs & some other files.

    It's a pity that my first post will be about a problem, but I don't have time for long introductions. So, Hi, I'm new here.
    I've got two problems which might be related somehow, but I will post them in two separate threads. (This is the other one - X wont start.)
    Some files in /var/log are corrupted. This is what it looks like:
    /var/log # file *
    ConsoleKit: directory
    Xorg.0.log: ASCII English text
    Xorg.0.log.old: data
    auth.log: data
    btmp: empty
    crond: data
    daemon.log: data
    dmesg.log: ASCII C++ program text
    errors.log: data
    everything.log: data
    faillog: data
    kernel.log: data
    lastlog: data
    messages.log: data
    mpd: directory
    old: directory
    pacman.log: ASCII English text
    pm-powersave.log: ASCII text
    pm-suspend.log: ASCII text
    syslog.log: data
    user.log: ASCII text
    wtmp: data
    These files should contain text and should not be recognized as 'data'; instead, some are normal and some contain non-textual data.
    I discovered this after X failed to start and I checked the Xorg.0.log file. 'less' warned me that the file might be binary, and indeed it was gibberish. It even contained some log entries which weren't normal Xorg logs but looked like the output I get when the kernel loads. Then I found that most of the files in that folder are not ASCII text, as they are supposed to be, but 'data', and even more corrupted.
    On my home partition I only found that viminfo got corrupted, twice, again containing some random stuff which should be in other files.
    I ran the system rescue cd and fsck'ed both partitions (ext3), but it says there are no problems.
    Now here is the connection to my second problem. X sometimes won't start: I get a black screen and need to power down by holding the power button. After I restart, the Xorg log is corrupted. If X starts correctly, the log is normal. Could it be that this hard shutdown, while the files are open for I/O, corrupts them? This sounds plausible to me, but such a thing has never happened to me before.
    Thanks for your time.
    Last edited by spupy (2009-08-15 20:05:24)

    djszapi wrote:
    1. "On my home partition I only found that viminfo got corrupted twice" - Sometimes it happens to me too; I delete the file then, but it's a rare situation.
    2. "I ran the system rescue cd and fsck'ed both partitions" - Did you get any direct filesystem error message from the system?
    3. What happens if you try to delete the corrupted files and restart? Do they become corrupted again every time?
    4. "X sometimes won't start, I get a black screen and I need to power down by holding the power button." - Which driver/kernel/xorg version do you use? I've experienced such a situation with the nvidia-185.18.31 driver.
    Thank you for your quick response.
    1. Is it safe to delete the corrupted files in /var/log? I already deleted viminfo after it got corrupted the first time, but it happened again.
    2. In the rescue cd, fsck quickly said "it's ok" without checking. Is there a way to force it to check? The man page doesn't say. The longer check I sometimes get when booting ("hasn't been checked in the last X mounts yadda yadda") shows only "fixed write time in the future" from time to time.
    3. The Xorg.0.log in particular gets corrupted after the hard restart that I issue when X fails to start and a black screen locks up the PC.
    4. xf86-video-ati is 6.12.2-2, kernel26 is 2.6.30.4-1, xorg-server is 1.6.3-2. The black screen issue will (I hope) be solved in the other thread I made (link in the first post), but I have the suspicion that both problems are very closely related (one is the cause of the other, or some sort of vicious circle).
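
    On point 2: e2fsck does have a documented force flag, which is what's needed from the rescue CD. A short sketch, assuming the partition is /dev/sda1 and is unmounted during the check:
    e2fsck -f /dev/sda1    # -f forces a full check even if the filesystem is marked clean
    fsck -f /dev/sda1      # the generic wrapper passes -f through to e2fsck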

  • Recovering data when log files corrupt

    I am interested to understand whether you can easily salvage the data files if the log files become corrupt for some reason. So far I cannot see how this can be easily achieved, as db_backup requires a working environment, and since the log files may have vanished, the LSNs in the data files will be greater than that of the last log, and creating the environment is therefore refused.
    Ideally, I guess, I am looking for a tool that can reset the LSNs in place. It would be better to have access to your hundreds of GB of data and accept a small amount of inconsistency or data loss than to have nothing.
    Thanks

    Hi,
    Resetting LSNs can be done using db_load -r lsn or using the lsn_reset() method, and it is done in place.
    If your log files are corrupted, you will need to verify the database files (you can use the db_verify BDB utility or the verify() method). The actions you take next depend on the result of verifying the database files:
    - if they verify correctly, then you just need to reset the LSNs and the file IDs in the databases and start with a fresh environment;
    - if they do not verify correctly, you can restore the data from the most recent backup, or you can perform a salvage dump of the data in the database files using db_dump -r or db_dump -R and then reload the data using db_load; see the Dumping and Reloading Databases section in the Berkeley DB Programmer's Reference Guide.
    Regards,
    Andrei
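
    A compact sketch of that decision tree with the stock BDB utilities (the file names are assumptions):
    db_verify mydb.db                    # structural check of the database file
    # if it verifies: reset LSNs and file IDs in place, then use a fresh environment
    db_load -r lsn mydb.db
    db_load -r fileid mydb.db
    # if it does not verify: salvage what is readable and reload it
    db_dump -r mydb.db > salvage.txt     # -R salvages even more aggressively
    db_load -f salvage.txt mydb_new.db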

  • Block corruption error keep on repeating in alert log file

    Hi,
    Oracle version: 9.2.0.8.0
    OS: Sun Solaris
    error in alert log file:
    Errors in file /u01/app/oracle/admin/qtrain/bdump/qtrain_smon_24925.trc:
    ORA-00604: error occurred at recursive SQL level 1
    ORA-01578: ORACLE data block corrupted (file # 1, block # 19750)
    ORA-01110: data file 1: '/u01/app/oracle/admin/qtrain/dbfiles/system.dbf'
    The system datafile was restored from backup, but the error is still being logged in the alert log file.
    Inputs are appreciated.
    Thanks
    Prakash

    Hi,
    Thanks for the inputs
    OWNER  SEGMENT_NAME      SEGMENT_TYPE  TABLESPACE_NAME  EXTENT_ID  FILE_ID  BLOCK_ID  BYTES  BLOCKS  RELATIVE_FNO
    SYS    SMON_SCN_TO_TIME  CLUSTER       SYSTEM           1          1        19749     16384  1       1
    SYS    SMON_SCN_TO_TIME  CLUSTER       SYSTEM           2          1        19750     32768  2       1
    Thanks
    Prakash
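
    For context, a sketch of the standard lookup that maps the reported file#/block# to a segment, presumably what produced the output above (the numbers are the ones from the ORA-01578 message):
    SELECT owner, segment_name, segment_type, tablespace_name,
           extent_id, file_id, block_id, bytes, blocks, relative_fno
    FROM   dba_extents
    WHERE  file_id = 1
    AND    19750 BETWEEN block_id AND block_id + blocks - 1;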
