Corrupt logical block
I want to check for logical block corruption and repair it with the DBMS_REPAIR package.
To do that, I first need to corrupt a block logically.
How do I logically corrupt an Oracle database table block? Is there a procedure for doing that?
Thanks,
Hello,
Thanks for your responses.
I tried to corrupt a data block by editing the datafile, removing some data and adding some data.
1. Executed the RMAN command below:
RMAN> backup validate check logical database;
The output shows the block is corrupted.
The query below also lists the corrupted blocks:
SQL> select * from v$database_block_corruption;
But when I check for corruption with the DBMS_REPAIR package, it doesn't list any corruption:
SET SERVEROUTPUT ON
DECLARE
  num_corrupt INT;
BEGIN
  num_corrupt := 0;
  DBMS_REPAIR.CHECK_OBJECT (
    SCHEMA_NAME       => 'TEST',
    OBJECT_NAME       => 'TEST1',
    REPAIR_TABLE_NAME => 'REPAIR_TABLE',
    CORRUPT_COUNT     => num_corrupt);
  DBMS_OUTPUT.PUT_LINE('number corrupt: ' || TO_CHAR(num_corrupt));
END;
/
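A note on the snippet above: CHECK_OBJECT needs the repair table to exist before the call, and its name must begin with REPAIR_. A minimal sketch of creating it with DBMS_REPAIR.ADMIN_TABLES (the USERS tablespace is an assumption, not from the post):

```sql
-- Sketch: create the repair table that CHECK_OBJECT populates.
-- Tablespace 'USERS' is assumed; pick whatever suits your database.
BEGIN
  DBMS_REPAIR.ADMIN_TABLES (
    TABLE_NAME => 'REPAIR_TABLE',
    TABLE_TYPE => DBMS_REPAIR.REPAIR_TABLE,
    ACTION     => DBMS_REPAIR.CREATE_ACTION,
    TABLESPACE => 'USERS');
END;
/
```

Also worth considering: editing a datafile at the OS level typically produces a media/checksum corruption, which RMAN and V$DATABASE_BLOCK_CORRUPTION flag, while CHECK_OBJECT is aimed at intra-block (logical) inconsistencies; that may explain the zero count here.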
Please let me know about any other ways to corrupt a block logically.
Thanks,
Similar Messages
-
Dear Experts
Can you please help me understand logical block corruption in detail?
Thanks
Asif Husain Khan
I wrote a small note about it on my blog; you may want to read it:
http://blog.aristadba.com/?p=109
HTH
Aman.... -
Logical Block corruption - not enough RMAN backups
I have to deal with logical block corruption, but these guys do not have enough RMAN backups to go back far enough to recover the blocks.
All the bad blocks are in SYSAUX, and it seems that because of this EM doesn't work as it's supposed to. I dropped and recreated the EM repository hoping it would clean itself up, but... no.
Any ideas?
Oracle Linux 4.7 i386
Oracle 10.2.0.4

OCCUPANT_NAME       OCCUPANT_DESC                       SCHEMA_NAME  MOVE_PROCEDURE                         MOVE_PROCEDURE_DESC                               SPACE_USAGE_KBYTES
EM                  Enterprise Manager Repository       SYSMAN       emd_maintenance.move_em_tblspc         Move Procedure for Enterprise Manager Repository               52800
EM_MONITORING_USER  Enterprise Manager Monitoring User  DBSNMP       *** MOVE PROCEDURE NOT APPLICABLE ***                                                                  1600 -
ORA-27046: file size is not a multiple of logical block size
Hi All,
Getting the error below while creating the control file after a database restore. Permissions and ownership of the CONTROL.SQL file are 777 and ora<sid>:dba.
ERROR -->
SQL> !pwd
/oracle/SID/sapreorg
SQL> @CONTROL.SQL
ORACLE instance started.
Total System Global Area 3539992576 bytes
Fixed Size 2088096 bytes
Variable Size 1778385760 bytes
Database Buffers 1744830464 bytes
Redo Buffers 14688256 bytes
CREATE CONTROLFILE SET DATABASE "SID" RESETLOGS ARCHIVELOG
ERROR at line 1:
ORA-01503: CREATE CONTROLFILE failed
ORA-01565: error in identifying file
'/oracle/SID/sapdata5/p11_19/p11.data19.dbf'
ORA-27046: file size is not a multiple of logical block size
Additional information: 1
Additional information: 1895833576
Additional information: 8192
Checked init<SID>.ora on the target system and found the parameter db_block_size is 8192. Also checked init<SID>.ora on the source system, and db_block_size is 8192 there as well.
/oracle/SID/102_64/dbs$ grep -i block initSID.ora
Kindly look into the issue.
Regards,
Soumya
Please check the following things:
1. SPFILE corruption: start the DB in NOMOUNT using the pfile (i.e. init<sid>.ora), create the spfile from the pfile, then restart the instance in NOMOUNT state.
Then create the control file from the script.
2. Check the ulimit settings on the target server; the filesize limit should be unlimited.
3. Has the db_block_size parameter been changed in the init file by any chance?
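A quick OS-level check for ORA-27046 itself: the size reported above (1895833576, with block size 8192) is 24 bytes short of a multiple of 8192, which usually means the file was truncated or transferred incompletely. A small sketch (assumes GNU stat; the function name is illustrative):

```shell
# Sketch: verify a datafile's size is an even multiple of the DB block size
# (8192 here, per db_block_size). A non-zero remainder is what triggers
# ORA-27046 and points at a truncated or badly copied file.
check_datafile_size() {
  local file="$1" block_size="${2:-8192}"
  local size
  size=$(stat -c %s "$file")
  if [ $(( size % block_size )) -eq 0 ]; then
    echo "$file: OK ($size bytes = $(( size / block_size )) x $block_size)"
  else
    echo "$file: BAD ($size bytes, remainder $(( size % block_size )))"
  fi
}
```

Running it over every datafile listed in the CREATE CONTROLFILE script would show which copies are damaged before retrying the script.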
Regards
Kausik -
Buffer I/O error on device hda1, logical block 81934
I get the following error message when I session into the CUE of the UC520. All advice appreciated. I read the similar post; however, I'm not given any prompts to make a selection.
Buffer I/O error on device hda1, logical block 81934
Processing manifests . Error processing file exceptions.IOError [Errno 5] Input/output error
. . . . . . Error processing file zlib.error Error -3 while decompressing: invalid distances set
. . . . . . . . Error processing file zlib.error Error -3 while decompressing: invalid distances set
. . complete
==> Management interface is eth0
==> Management interface is eth0
malloc: builtins/evalfile.c:138: assertion botched
free: start and end chunk sizes differ
Stopping myself.../etc/rc.d/rc.aesop: line 478: 1514 Aborted /bin/runrecovery.sh
Serial Number:
INIT: Entering runlevel: 2
********** rc.post_install ****************
INIT: Switching to runlevel: 4
INIT: Sending processes the TERM signal
STARTED: cli_server.sh
STARTED: ntp_startup.sh
STARTED: LDAP_startup.sh
STARTED: SQL_startup.sh
STARTED: dwnldr_startup.sh
STARTED: HTTP_startup.sh
STARTED: probe
STARTED: superthread_startup.sh
STARTED: ${ROOT}/usr/bin/products/herbie/herbie_startup.sh
STARTED: /usr/wfavvid/run-wfengine.sh
STARTED: /usr/bin/launch_ums.sh
Waiting 5 ...Buffer I/O error on device hda1, logical block 70429
Waiting 6 ...hda: no DRQ after issuing MULTWRITE
hda: drive not ready for command
Buffer I/O error on device hda1, logical block 2926
Buffer I/O error on device hda1, logical block 2927
Buffer I/O error on device hda1, logical block 2928
Buffer I/O error on device hda1, logical block 2929
Buffer I/O error on device hda1, logical block 2930
Buffer I/O error on device hda1, logical block 2931
Buffer I/O error on device hda1, logical block 2932
Buffer I/O error on device hda1, logical block 2933
Buffer I/O error on device hda1, logical block 2934
REISERFS: abort (device hda1): Journal write error in flush_commit_list
REISERFS: Aborting journal for filesystem on hda1
Jun 17 16:36:11 localhost kernel: REISERFS: abort (device hda1): Journal write error in flush_commit_list
Jun 17 16:36:11 localhost kernel: REISERFS: Aborting journal for filesystem on hda1
Waiting 8 ...MONITOR EXITING...
SAVE TRACE BUFFER
Jun 17 16:36:13 localhost err_handler: CRASH appsServices startup startup.sh System has crashed. The trace buffer information is stored in the file "atrace_save.log". You can upload the file using "copy log" command
/bin/startup.sh: line 262: /usr/bin/atr_buf_save: Input/output error
Waiting 9 ...Buffer I/O error on device hda1, logical block 172794
INIT: Sending processes the TERM signal
INIT: cannot execute "/etc/rc.d/rc.reboot"
INIT: no more processes left in this runlevel
The flash card for CUE might be corrupt. Try reinstalling CUE and restoring from backup to see if that fixes it. If it doesn't, try a different flash card.
Cole -
Block corruption: the difference between physical block corruption and soft block corruption
Someone asked me a question about block corruption,
so I offered a solution, and the block dump looked like the one below.
(Dump 1) The block dump I misdiagnosed (I judged it to be soft corruption,
but it was physical corruption from a failed disk in a RAID 5 array):
*** 2007-03-06 09:54:33.103
Start dump data blocks tsn: 6 file#: 6 minblk 102032 maxblk 102032
buffer tsn: 6 rdba: 0x01818e90 (6/102032)
scn: 0x056c.f689329c seq: 0x01 flg: 0x06 tail: 0xa0cf0000
frmt: 0x02 chkval: 0x7675 type: 0x06=trans data
Hex dump of corrupt header 2 = BROKEN
That was what I saw.
However, I had already experienced physical block corruption before,
and after taking the dump below and reading through several documents,
I reached the following conclusion.
(Dump 2) A dump confirmed as physical corruption
- From the line scn: 0x0000.00000000 seq: 0xff flg: 0x00 tail: 0x000006ff below:
since the SCN is 0x0000 and the seq value is not UB1MAXVAL-1,
I concluded this was physical block corruption.
*** SESSION ID:(25.59041) 2005-12-12 11:41:13.132
Start dump data blocks tsn: 7 file#: 8 minblk 169994 maxblk 169994
buffer tsn: 7 rdba: 0x0202980a (8/169994)
scn: 0x0000.00000000 seq: 0xff flg: 0x00 tail: 0x000006ff
frmt: 0x02 chkval: 0x0000 type: 0x06=trans data
Block header dump: 0x0202980a
Object id on Block? Y
seg/obj: 0xef9 csc: 0x00.8c13d2b itc: 2 flg: O typ: 2 - INDEX
fsl: 2 fnx: 0x2024b0f ver: 0x01
Itl Xid Uba Flag Lck Scn/Fsc
0x01 xid: 0x0001.052.0001888b uba: 0x00c07725.2482.02 C--- 0 scn 0x0000.083a0f80
0x02 xid: 0x0007.038.0001a196 uba: 0x00c07574.1f28.0b ---- 232 fsc 0x0f57.00000000
My question:
I would like to tell from a block dump whether something is physical block corruption or soft block corruption;
it seems that what I previously believed was wrong. Something looked odd, so I logged in directly and investigated.
The alert log error is:
Errors in file /app/oracle/product/10.2.0/admin/TEST_T_ktrp4vpe/bdump/test_t_smon_12714.trc:
ORA-00604: error occurred at recursive SQL level 1
ORA-01578: ORACLE data block corrupted (file # 2, block # 2714)
ORA-01110: data file 2: '/test_tdata/SYSTEM04.dbf'
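For reference, a block dump like the ones quoted earlier can be produced from SQL*Plus, using the file# and block# reported by ORA-01578 above; the trace lands in user_dump_dest:

```sql
-- Dump the suspect block (file# 2, block# 2714 from the ORA-01578 above)
-- to a trace file, where the header, SCN, seq and chkval can be inspected.
ALTER SYSTEM DUMP DATAFILE 2 BLOCK 2714;
```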
Searching Metalink for the above:
Title: FAQ: Physical Corruption
Doc ID: Note 403747.1  Type: FAQ
(a) ORA-01578 - This error explains physical structural damage with a particular block.
(b) ORA-08103 - This error is a logical corruption error for a particular data block.
(c) ORA-00600 [2662] - This error is related to block corruption , and occurs due to a higher SCN than of database SCN.
-- What seems a bit odd is that a document titled "physical corruption"
classifies both physical corruption and logical corruption.
Item (a), "physical structural damage", naturally reads as physical corruption. I cannot tell from the dump contents, but from the alert error code
it looks like physical corruption.
Can the alert error code and the dump contents differ from each other?
Edited by:
darkturtle -
Corrupting the block to continue recovery in physical standby
Hi,
I'd just like to ask how I can mark the block as corrupt so that recovery can continue on the physical standby.
DB Version: 11.1.0.7
Database Type: Data Warehouse
Our setup is a primary database and a standby database. We are not using Data Guard; the standby is another physical copy of production that acts as a standby and is kept in sync by a script, run from time to time, that applies the archived logs coming from production (it is not configured to sync using ARCH or LGWR and the corresponding configuration).
Now the standby database is out of sync due to errors encountered while trying to apply an archived log; the error is below:
Fri Feb 11 05:50:59 2011
ORA-279 signalled during: ALTER DATABASE RECOVER CONTINUE DEFAULT ...
ALTER DATABASE RECOVER CONTINUE DEFAULT
Media Recovery Log /u01/archive/<sid>/1_50741_651679913.arch
Fri Feb 11 05:52:06 2011
Exception [type: SIGSEGV, Address not mapped to object] [ADDR:0x7FFFD2F18FF8] [PC:0x60197E0, kdr9ir2rst0()+326]
Errors in file /u01/app/oracle/diag/rdbms/<sid>/<sid>/trace/<sid>pr0028085.trc (incident=631460):
ORA-07445: exception encountered: core dump [kdr9ir2rst0()+326] [SIGSEGV] [ADDR:0x7FFFD2F18FF8] [PC:0x60197E0] [Address not mapped to object] []
Incident details in: /u01/app/oracle/diag/rdbms/<sid>/<sid>/incident/incdir_631460/<sid>pr0028085_i631460.trc
Fri Feb 11 05:52:10 2011
Trace dumping is performing id=[cdmp_20110211055210]
Fri Feb 11 05:52:14 2011
Sweep Incident[631460]: completed
Fri Feb 11 05:52:17 2011
Slave exiting with ORA-10562 exception
Errors in file /u01/app/oracle/diag/rdbms/<sid>/<sid>/trace/<sid>pr0028085.trc:
ORA-10562: Error occurred while applying redo to data block (file# 36, block# 1576118)
ORA-10564: tablespace <tablespace name>
ORA-01110: data file 36: '/u02/oradata/<sid>/<datafile>.dbf'
ORA-10561: block type 'TRANSACTION MANAGED DATA BLOCK', data object# 14877145
ORA-00607: Internal error occurred while making a change to a data block
ORA-00602: internal programming exception
ORA-07445: exception encountered: core dump [kdr9ir2rst0()+326] [SIGSEGV] [ADDR:0x7FFFD2F18FF8] [PC:0x60197E0] [Address not mapped to object] []
Based on the error log it seems we are hitting some bug from metalink (document id 460169.1 and 882851.1)
My question: the datafile # is given, the block # is known, and the data object is also identified. I have verified that the object is not important. Is there a way to mark that block as corrupted so recovery can continue? I would then drop the table in production, which would also propagate to the standby, and the corrupted block would be gone too. Is this feasible?
If it's not, can you suggest what else I can do so the physical standby can sync with production again, aside from rebuilding the standby?
Please note that I also ran dbv against the file to check whether anything is marked as corrupted, and the result for that datafile is also good:
dbv file=/u02/oradata/<sid>/<datafile>_19.dbf logfile=dbv_file_36.log blocksize=16384
oracle@<server>:[~] $ cat dbv_file_36.log
DBVERIFY: Release 11.1.0.7.0 - Production on Sun Feb 13 04:35:28 2011
Copyright (c) 1982, 2007, Oracle. All rights reserved.
DBVERIFY - Verification starting : FILE = /u02/oradata/<sid>/<datafile>_19.dbf
DBVERIFY - Verification complete
Total Pages Examined : 3840000
Total Pages Processed (Data) : 700644
Total Pages Failing (Data) : 0
Total Pages Processed (Index): 417545
Total Pages Failing (Index): 0
Total Pages Processed (Other): 88910
Total Pages Processed (Seg) : 0
Total Pages Failing (Seg) : 0
Total Pages Empty : 2632901
Total Pages Marked Corrupt : 0
Total Pages Influx : 0
Total Pages Encrypted : 0
Highest block SCN : 3811184883 (1.3811184883)
Any help is really appreciated. I hope to hear feedback from you.
Thanks
damorgan, I understand your opinion.
I'm just new to the organization and inherited a data warehouse database without an RMAN backup. I am still setting up RMAN backups, which is why I can't use RMAN to resolve the issue. The only thing I have is the physical standby, and it isn't a standby that syncs automatically via Data Guard or a standard standby setup. I'm just looking for a solution that applies to the current situation. -
OSD-04001: invalid logical block size (OS 2800189884)
My Windows 2003 machine, which was running Oracle XE, crashed.
I installed Oracle XE on Windows XP on another machine.
I copied my D:\oracle\XE10g\oradata folder from the Win2003 machine to the same location on the WinXP machine.
When I start the database on WinXP using SQL*Plus, I get the following message:
SQL> startup
ORACLE instance started.
Total System Global Area 146800640 bytes
Fixed Size 1286220 bytes
Variable Size 62918580 bytes
Database Buffers 79691776 bytes
Redo Buffers 2904064 bytes
ORA-00205: error in identifying control file, check alert log for more info
In D:\oracle\XE10g\app\oracle\admin\XE\bdump\alert_xe I found the following errors:
starting up 1 dispatcher(s) for network address '(ADDRESS=(PARTIAL=YES)(PROTOCOL=TCP))'...
starting up 4 shared server(s) ...
Oracle Data Guard is not available in this edition of Oracle.
Wed Apr 25 18:38:36 2007
ALTER DATABASE MOUNT
Wed Apr 25 18:38:36 2007
ORA-00202: control file: 'D:\ORACLE\XE10G\ORADATA\XE\CONTROL.DBF'
ORA-27047: unable to read the header block of file
OSD-04001: invalid logical block size (OS 2800189884)
Wed Apr 25 18:38:36 2007
ORA-205 signalled during: ALTER DATABASE MOUNT...
ORA-00202: control file: 'D:\ORACLE\XE10G\ORADATA\XE\CONTROL.DBF'
ORA-27047: unable to read the header block of file
OSD-04001: invalid logical block size (OS 2800189884)
Please help.
Regards,
Zulqarnain
Hi Zulqarnain,
Error OSD-04001 is a Windows NT-specific Oracle message. It means that the logical block size is not a multiple of 512 bytes, or that it is too large.
So what can you do? Well, you should try changing the value of DB_BLOCK_SIZE in the initialization parameter file.
Regards -
Unable to drop materialized view with corrupted data blocks
Hi,
The alert log of our database is giving this message
Wed Jan 31 05:23:13 2007
ORACLE Instance mesh (pid = 9) - Error 1578 encountered while recovering transaction (6, 15) on object 13355.
Wed Jan 31 05:23:13 2007
Errors in file /u01/app/oracle/admin/mesh/bdump/mesh_smon_4369.trc:
ORA-01578: ORACLE data block corrupted (file # 5, block # 388260)
ORA-01110: data file 5: '/u03/oradata/mesh/mview.dbf'
No one is using this mview, yet Oracle is still trying to recover transaction (6, 15).
When I try to drop the mview, it gives me this error:
ERROR at line 1:
ORA-01578: ORACLE data block corrupted (file # 5, block # 388260)
ORA-01110: data file 5: '/u03/oradata/mesh/mview.dbf'
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 2255
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 2461
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 2430
ORA-06512: at line 1
I have tried to fix the corrupted data blocks using the DBMS_REPAIR package, but to no avail.
I have marked this block to be skipped by using dbms_repair.skip_block but still unable to drop it.
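For the record, the documented DBMS_REPAIR call for this is SKIP_CORRUPT_BLOCKS; a minimal sketch (the schema and mview names are placeholders, not from the thread):

```sql
-- Sketch only: 'MVIEW_OWNER' and 'MY_MVIEW' are placeholder names.
-- SKIP_CORRUPT_BLOCKS makes full scans ignore blocks already marked corrupt;
-- it does not help if the drop fails while touching dictionary metadata.
BEGIN
  DBMS_REPAIR.SKIP_CORRUPT_BLOCKS (
    SCHEMA_NAME => 'MVIEW_OWNER',
    OBJECT_NAME => 'MY_MVIEW',
    OBJECT_TYPE => DBMS_REPAIR.TABLE_OBJECT,
    FLAGS       => DBMS_REPAIR.SKIP_FLAG);
END;
/
```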
Please suggest what I should do.
Thanks in advance
Anuj
You are lucky if only your undesirable MV is affected by these corrupted blocks. My advice is to take a complete-super-full-hot-cold-middle backup of your database and check every disk for a possible replacement.
God save us! -
DB Cloning: file size is not a multiple of logical block size
Dear All,
I am trying to create a database on Windows XP from database files that were running on Linux.
When I try to create the control file, I get the following errors:
ORA-01503: CREATE CONTROLFILE failed
ORA-01565: error in identifying file
'D:\oracle\orcl\oradata\orcl\system01.dbf'
ORA-27046: file size is not a multiple of logical block size
OSD-04012: file size mismatch (OS 367009792)
Please tell me the workarounds.
Thanks
Sathis.
Hi,
I created the database service with oradim. Now I'm trying to create the control file after editing it with the locations of the Windows datafiles (copied from Linux).
Thanks,
Sathis. -
SAN Configuration Error: Buffer I/O error on device sdd, logical block 0
hi,
I had successfully configured dm-multipath; now I can see the mapped device as:
# ll /dev/mapper
crw------- 1 root root 10, 61 Nov 19 18:59 control
brw-rw---- 1 root disk 253, 0 Nov 19 20:00 ovs
ovs is an alias for /dev/sdc, which is the active path of the storage LUN, but the passive path /dev/sdd still exists, and it gives errors at boot of the OVM server:
end_request: I/O error, dev sdd, sector 0
Buffer I/O error on device sdd, logical block 0
end_request: I/O error, dev sdd, sector 0
Buffer I/O error on device sdd, logical block 0
end_request: I/O error, dev sdd, sector 0
Buffer I/O error on device sdd, logical block 0
I can't understand why the system is still looking for the passive path /dev/sdd even after multipath was configured.
Did anybody face this kind of problem? Please help me.
Thanks in advance
Raja Gounder wrote:
I had successfully configured dm-multipath; now I can see the mapped device as
You haven't configured it correctly; otherwise you wouldn't have a friendly name configured. Oracle VM requires WWIDs in /dev/mapper to make the device available.
Hello!
I'm a bit puzzled here, to be honest. Granted, I'm not using Linux as much as I used to (not since Windows 7). I have Arch Linux running on my HTPC. I never had any issue this severe before, unless I upgraded and forgot to read the news section. Booted the HTPC today to be greeted by "Buffer I/O error on device sdd1, logical block" with a massive wall of text, and a few seconds later, "welcome to emergency mode."
*This is NOT the hdd the Linux kernel resides on. What logical purpose would it serve for the kernel/userspace to abort everything just because fsck fails or something? If this were indeed my Linux partition, I would fully understand.
Anyway, I used Parted Magic and ran fsck and SMART. Sure enough, fsck warned me about a bad/missing superblock. I restored the superblock using e2fsck. I had over 10,000 "incorrect size" chunks. Ran 2-3 SMART checks after that. fsck says okay; SMART gives a 100% status report with no errors.
Oh yeah, I have turned off fsck completely in my fstab; I'm thinking about at least turning it on for my bigger hdds.
Questions:
*Is SMART reliable? If it says it's all right, does that mean I'm safe? Would physically broken sectors show up in SMART?
*I know SMART warns the user in Windows 7 if hdd failure is imminent. Is this possible within Linux as well? Since I'm NOT using a GUI, is it possible to send this through a terminal/email?
*Sometimes the HTPC has been forcefully shut down (power outage); could this be one of the causes of the I/O error?
As always, thank you for your support.
Last edited by greenfish (2013-10-23 13:23:21)
graysky wrote: Any reallocated sectors in smartmontools? If you run 'e2fsck -fv /dev/sdd1' does it complete w/o errors? Probably best to repeat for all Linux partitions on that disk.
Sorry for the late reply, guys. I've been busy with my other hdd that decided to screw with me. e2fsck first complained about bad sectors and wrong sizes. Now it says all clean. I've decided to remove this HDD from the server and mark it "damaged".
Thank you again for your help
alphaniner wrote:
greenfish wrote: *Is SMART reliable? If it says it's all right, does that mean I'm safe? Would physically broken sectors show up in SMART?
*I know SMART warns the user in Windows 7 if hdd failure is imminent. Is this possible within Linux as well? Since I'm NOT using a GUI, is it possible to send this through a terminal/email?
1) Don't trust the 'SMART overall-health self-assessment test result', run the diagnostics (short, long, conveyance, offline). The short and conveyance tests are quick so start with them. If they both pass run the long test. The offline test is supposed to update SMART attributes, but it generally takes longer than the long test, so save it for last if at all. Usually when I see bad drives the short or long tests pick them up.
2) Look into smartd.service.
greenfish wrote:What logical purpose would it serve for the kernel/userspace to abort everything just because fsck fails or something?
Systemd craps itself if an fs configured to mount during boot can't be mounted, even if the fs isn't necessary for the system to boot. Not sure about how it handles fsck failures. This 'feature' can be disabled by putting nofail in the fstab options. I add it to every non-essential automounting fs.
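The nofail suggestion above can be sketched as a small fstab rewrite; the mount point /data and the smartctl device in the comments are illustrative assumptions, not taken from the thread:

```shell
# SMART self-tests as suggested above (device name assumed):
#   smartctl -t short /dev/sdd && smartctl -t long /dev/sdd
#   smartctl -a /dev/sdd        # read results and attributes afterwards
# Add "nofail" to the options field (4th column) of an fstab entry so a
# failed mount of a non-essential filesystem no longer drops boot into
# emergency mode. Prints the rewritten table to stdout.
add_nofail() {
  local mountpoint="$1" fstab="${2:-/etc/fstab}"
  awk -v mp="$mountpoint" '
    $2 == mp && $4 !~ /(^|,)nofail(,|$)/ { $4 = $4 ",nofail" }
    { print }
  ' "$fstab"
}

# Example: add_nofail /data > /tmp/fstab.new, inspect, then replace /etc/fstab.
```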
Thank you for the useful information. I will save this post for future references.
Will definitely look into smartd.service, especially as I have so much data running 24/7.
Will also update my fstab with "nofail" as you suggested.
Thank You! -
The IO operation at logical block address # for Disk # was retried
Hello everyone,
A warning appears in the system log:
===
Log Name: System
Source: disk
Date: 2/20/2013 1:00:28 PM
Event ID: 153
Task Category: None
Level: Warning
Keywords: Classic
User: N/A
Computer: STRANGE.aqa.com.ru
Description:
The IO operation at logical block address af7ff for Disk 7 was retried.
Event Xml:
<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
<System>
<Provider Name="disk" />
<EventID Qualifiers="32772">153</EventID>
<Level>3</Level>
<Task>0</Task>
<Keywords>0x80000000000000</Keywords>
<TimeCreated SystemTime="2013-02-20T09:00:28.199176700Z" />
<EventRecordID>12669</EventRecordID>
<Channel>System</Channel>
<Computer>STRANGE.aqa.com.ru</Computer>
<Security />
</System>
<EventData>
<Data>\Device\Harddisk7\DR142</Data>
<Data>af7ff</Data>
<Data>7</Data>
<Binary>0F01040003002C00000000009900048000000000000000000000000000000000000000000000000000020828</Binary>
</EventData>
</Event>
===
This warning occurred several seconds after Windows Server Backup started. Our backup job finishes successfully. That server is in provisioning without a heavy workload, and we have not experienced any problem yet. But we do not want to face any problems due to this error in the production environment.
All disks of the server are managed by the LSI MegaRAID controller, which doesn't report any errors in the disk system.
It is Windows Server 2012 with the latest updates.
Wow, I have been having the exact same problems with Server 2012 WSB. I thought I had it resolved, but it started acting up again. I tried 3 different external hard drives, thinking they might be the problem. The RAID array also seems fine; it is not giving me any errors, no amber lights.
If I run a backup of system state + Hyper-V, it fails 9/10 of the time on the host component. I have posted everywhere and cannot find anything. These are the events during any backup I run:
Source: Disk
Event ID: 153
The IO operation at logical block address 10a58027 for Disk 5 was retried.
Source: VOLSNAP
Event ID: 25
The shadow copies of volume C: were deleted because the shadow copy storage could not grow in time. Consider reducing the IO load on the system or choose a shadow copy storage volume that is not being shadow copied.
Source: Filter Manager
Event ID: 3
Filter Manager failed to attach to volume '\Device\HarddiskVolume109'. This volume will be unavailable for filtering until a reboot. The final status was 0xC03A001C.
Source: VOLSNAP
Event ID: 27
The shadow copies of volume \\?\Volume{a21d0bb7-7147-11e2-93ed-842b2b0982fe} were aborted during detection because a critical control file could not be opened.
Source: VHDMP
Event ID: 129
Reset to device, \Device\RaidPort4, was issued.
Source: VOLSNAP
Event ID: 25
The shadow copies of volume G: were deleted because the shadow copy storage could not grow in time. Consider reducing the IO load on the system or choose a shadow copy storage volume that is not being shadow copied.
Windows backup gives me various errors for what did not back up. Mainly this one:
Error in backup of C:\ during enumerate: Error [0x80070003] The system cannot find the path specified.
Application backup
Writer Id: {66841CD4-6DED-4F4B-8F17-FD23F8DDC3DE}
Component: Host Component
Caption : Host Component
Logical Path:
Error : 8078010D
Error Message : Enumeration of the files failed.
Detailed Error : 80070003
Detailed Error Message : (null)
Not just the host component; sometimes the entire C: ...
So no one has any recommendations for fixing this?
Is anyone running Dell AppAssure? I have two servers backing up to this server with Dell AppAssure. Then I am using WSB to back up this machine's OS and one Windows 7 VM. -
Log corruption near block 1737
hi,
I am getting an error while opening the DB:
SQL> alter database open;
alter database open
ERROR at line 1:
ORA-00368: checksum error in redo log block
ORA-00353: log corruption near block 1737 change 16680088 time 12/22/2008
10:40:13
ORA-00312: online log 2 thread 1: 'G:\ORACLE\ORADATA\HOTEST\REDO02.LOG'
While doing an incomplete recovery, it is asking for an archived log file which I do not have, i.e. ARC_801_1.ARC:
SQL> recover database until cancel;
ORA-00279: change 16679127 generated at 12/22/2008 10:37:11 needed for thread 1
ORA-00289: suggestion : G:\ORACLE\ARCH\ARC_801_1.ARC
ORA-00280: change 16679127 for thread 1 is in sequence #801
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
cancel
ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
ORA-01194: file 1 needs more recovery to be consistent
ORA-01110: data file 1: 'G:\ORACLE\ORADATA\HOTEST\SYSTEM01.DBF'
ORA-01112: media recovery not started
Please suggest what to do. I do not have the archived log with sequence no. 801, nor do I have a backup. Is there any other way to bring up the DB?
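A last-resort option sometimes used when an online redo log is corrupt and cannot be archived is to clear the group. This discards its redo, so data loss is possible and any standby breaks, and it can still fail if the group is needed for recovery; treat the sketch below as something to verify with Oracle Support rather than a definitive fix:

```sql
-- Last-resort sketch (assumes the corrupt group is group 2, per the
-- ORA-00312 above): clearing an unarchived log discards its redo.
-- Run only with the database mounted, after weighing the data loss.
ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 2;
```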
SMON: enabling tx recovery
Mon Dec 22 10:37:16 2008
Database Characterset is WE8MSWIN1252
replication_dependency_tracking turned off (no async multimaster replication found)
Completed: alter database open
Corrupt block relative dba: 0x0300cc4b (file 12, block 52299)
Bad check value found during buffer read
Data in bad block -
type: 6 format: 2 rdba: 0x0300cc4b
last change scn: 0x0000.000fa127 seq: 0x1 flg: 0x06
consistency value in tail: 0xa1270601
check value in block header: 0x8f85, computed block checksum: 0xcf00
spare1: 0x0, spare2: 0x0, spare3: 0x0
Reread of rdba: 0x0300cc4b (file 12, block 52299) found valid data
Corrupt block relative dba: 0x0300cdfb (file 12, block 52731)
Bad check value found during buffer read
Data in bad block -
type: 6 format: 2 rdba: 0x0300cdfb
last change scn: 0x0000.000fa128 seq: 0x1 flg: 0x04
consistency value in tail: 0xa1280601
check value in block header: 0x446b, computed block checksum: 0x3200
spare1: 0x0, spare2: 0x0, spare3: 0x0
Reread of rdba: 0x0300cdfb (file 12, block 52731) found valid data
Dump file g:\oracle\admin\hotest\bdump\alert_hotest.log
Mon Dec 22 10:41:29 2008
ORACLE V9.2.0.4.0 - Production vsnsta=0
vsnsql=12 vsnxtr=3
Windows 2000 Version 5.1 Service Pack 2, CPU type 586
Mon Dec 22 10:41:29 2008
Starting ORACLE instance (normal)
LICENSE_MAX_SESSION = 0
LICENSE_SESSIONS_WARNING = 0
SCN scheme 2
Using log_archive_dest parameter default value
LICENSE_MAX_USERS = 0
SYS auditing is disabled
Starting up ORACLE RDBMS Version: 9.2.0.4.0.
System parameters with non-default values:
processes = 150
timed_statistics = TRUE
shared_pool_size = 50331648
large_pool_size = 8388608
java_pool_size = 33554432
control_files = G:\oracle\oradata\hotest\CONTROL01.CTL, G:\oracle\oradata\hotest\CONTROL02.CTL, G:\oracle\oradata\hotest\CONTROL03.CTL
db_block_size = 8192
db_cache_size = 25165824
compatible = 9.2.0.0.0
log_archive_start = TRUE
log_archive_dest_1 = location=G:\oracle\arch
log_archive_dest_2 = SERVICE=stand LGWR ASYNC
log_archive_dest_state_1 = ENABLE
log_archive_dest_state_2 = ENABLE
fal_server = STAND
fal_client = HOTEST
log_archive_format = arc_%s_%t.arc
db_file_multiblock_read_count= 16
fast_start_mttr_target = 300
undo_management = AUTO
undo_tablespace = UNDOTBS1
undo_retention = 10800
remote_login_passwordfile= EXCLUSIVE
db_domain =
instance_name = hotest
dispatchers = (PROTOCOL=TCP) (SERVICE=hotestXDB)
job_queue_processes = 10
hash_join_enabled = TRUE
background_dump_dest = G:\oracle\admin\hotest\bdump
user_dump_dest = G:\oracle\admin\hotest\udump
core_dump_dest = G:\oracle\admin\hotest\cdump
sort_area_size = 524288
db_name = hotest
open_cursors = 300
star_transformation_enabled= FALSE
query_rewrite_enabled = FALSE
pga_aggregate_target = 25165824
aq_tm_processes = 1
PMON started with pid=2
DBW0 started with pid=3
LGWR started with pid=4
CKPT started with pid=5
SMON started with pid=6
RECO started with pid=7
CJQ0 started with pid=8
QMN0 started with pid=9
Mon Dec 22 10:41:36 2008
starting up 1 shared server(s) ...
starting up 1 dispatcher(s) for network address '(ADDRESS=(PARTIAL=YES)(PROTOCOL=TCP))'...
ARCH: STARTING ARCH PROCESSES
ARC0 started with pid=12
ARC0: Archival started
ARC1 started with pid=13
Mon Dec 22 10:41:37 2008
ARCH: STARTING ARCH PROCESSES COMPLETE
Mon Dec 22 10:41:37 2008
ARC0: Thread not mounted
Mon Dec 22 10:41:38 2008
ARC1: Archival started
Mon Dec 22 10:41:38 2008
ARC1: Thread not mounted
Mon Dec 22 10:41:38 2008
alter database mount exclusive
Mon Dec 22 10:41:43 2008
Successful mount of redo thread 1, with mount id 1003521250.
Mon Dec 22 10:41:43 2008
Database mounted in Exclusive Mode.
Completed: alter database mount exclusive
Mon Dec 22 10:41:43 2008
alter database open
Mon Dec 22 10:41:44 2008
Beginning crash recovery of 1 threads
Mon Dec 22 10:41:44 2008
Started first pass scan
ORA-368 signalled during: alter database open...
Mon Dec 22 10:42:35 2008
Restarting dead background process QMN0
QMN0 started with pid=9
Dump file g:\oracle\admin\hotest\bdump\alert_hotest.log
Mon Dec 29 10:14:22 2008
ORACLE V9.2.0.4.0 - Production vsnsta=0
vsnsql=12 vsnxtr=3
Windows 2000 Version 5.1 Service Pack 2, CPU type 586
Mon Dec 29 10:14:22 2008
Starting ORACLE instance (normal)
LICENSE_MAX_SESSION = 0
LICENSE_SESSIONS_WARNING = 0
SCN scheme 2
Using log_archive_dest parameter default value
LICENSE_MAX_USERS = 0
SYS auditing is disabled
Starting up ORACLE RDBMS Version: 9.2.0.4.0.
System parameters with non-default values:
processes = 150
timed_statistics = TRUE
shared_pool_size = 50331648
large_pool_size = 8388608
java_pool_size = 33554432
control_files = G:\oracle\oradata\hotest\CONTROL01.CTL, G:\oracle\oradata\hotest\CONTROL02.CTL, G:\oracle\oradata\hotest\CONTROL03.CTL
db_block_size = 8192
db_cache_size = 25165824
compatible = 9.2.0.0.0
log_archive_start = TRUE
log_archive_dest_1 = location=G:\oracle\arch
log_archive_dest_2 = SERVICE=stand LGWR ASYNC
log_archive_dest_state_1 = ENABLE
log_archive_dest_state_2 = ENABLE
fal_server = STAND
fal_client = HOTEST
log_archive_format = arc_%s_%t.arc
db_file_multiblock_read_count= 16
fast_start_mttr_target = 300
undo_management = AUTO
undo_tablespace = UNDOTBS1
undo_retention = 10800
remote_login_passwordfile= EXCLUSIVE
db_domain =
instance_name = hotest
dispatchers = (PROTOCOL=TCP) (SERVICE=hotestXDB)
job_queue_processes = 10
hash_join_enabled = TRUE
background_dump_dest = G:\oracle\admin\hotest\bdump
user_dump_dest = G:\oracle\admin\hotest\udump
core_dump_dest = G:\oracle\admin\hotest\cdump
sort_area_size = 524288
db_name = hotest
open_cursors = 300
star_transformation_enabled= FALSE
query_rewrite_enabled = FALSE
pga_aggregate_target = 25165824
aq_tm_processes = 1
PMON started with pid=2
DBW0 started with pid=3
LGWR started with pid=4
CKPT started with pid=5
SMON started with pid=6
RECO started with pid=7
CJQ0 started with pid=8
QMN0 started with pid=9
Mon Dec 29 10:14:28 2008
starting up 1 shared server(s) ...
starting up 1 dispatcher(s) for network address '(ADDRESS=(PARTIAL=YES)(PROTOCOL=TCP))'...
ARCH: STARTING ARCH PROCESSES
ARC0 started with pid=12
ARC0: Archival started
ARC1 started with pid=13
ARC1: Archival started
Mon Dec 29 10:14:30 2008
ARCH: STARTING ARCH PROCESSES COMPLETE
Mon Dec 29 10:14:30 2008
ARC1: Thread not mounted
Mon Dec 29 10:14:31 2008
ARC0: Thread not mounted
Mon Dec 29 10:14:32 2008
alter database mount exclusive
Mon Dec 29 10:14:37 2008
Successful mount of redo thread 1, with mount id 1004122632.
Mon Dec 29 10:14:37 2008
Database mounted in Exclusive Mode.
Completed: alter database mount exclusive
Mon Dec 29 10:14:37 2008
alter database open
Mon Dec 29 10:14:37 2008
Beginning crash recovery of 1 threads
Mon Dec 29 10:14:38 2008
Started first pass scan
ORA-368 signalled during: alter database open...
Mon Dec 29 10:15:29 2008
Restarting dead background process QMN0
QMN0 started with pid=9
Mon Dec 29 10:20:44 2008
Restarting dead background process QMN0
QMN0 started with pid=9
Mon Dec 29 10:25:54 2008
Restarting dead background process QMN0
QMN0 started with pid=9
Mon Dec 29 10:31:09 2008
Restarting dead background process QMN0
QMN0 started with pid=9
Mon Dec 29 10:36:24 2008
Restarting dead background process QMN0
QMN0 started with pid=9
Mon Dec 29 10:41:40 2008
Restarting dead background process QMN0
QMN0 started with pid=9
Mon Dec 29 10:46:55 2008
Restarting dead background process QMN0
QMN0 started with pid=9
Mon Dec 29 10:52:10 2008
Restarting dead background process QMN0
QMN0 started with pid=9
Mon Dec 29 10:57:26 2008
Restarting dead background process QMN0
QMN0 started with pid=9
Mon Dec 29 11:02:41 2008
Restarting dead background process QMN0
QMN0 started with pid=9
Mon Dec 29 11:07:56 2008
Restarting dead background process QMN0
QMN0 started with pid=9
Mon Dec 29 11:13:05 2008
Restarting dead background process QMN0
QMN0 started with pid=9
Mon Dec 29 11:18:15 2008
Restarting dead background process QMN0
QMN0 started with pid=9
Mon Dec 29 11:23:30 2008
Restarting dead background process QMN0
QMN0 started with pid=9
Mon Dec 29 11:28:39 2008
Restarting dead background process QMN0
QMN0 started with pid=9
Mon Dec 29 11:33:49 2008
Restarting dead background process QMN0
QMN0 started with pid=9
Mon Dec 29 11:39:04 2008
Restarting dead background process QMN0
QMN0 started with pid=9
Mon Dec 29 11:44:19 2008
Restarting dead background process QMN0
QMN0 started with pid=9
Mon Dec 29 11:49:35 2008
Restarting dead background process QMN0
QMN0 started with pid=9
Mon Dec 29 11:54:44 2008
Restarting dead background process QMN0
QMN0 started with pid=15
Mon Dec 29 11:59:59 2008
Restarting dead background process QMN0
QMN0 started with pid=14
Mon Dec 29 12:03:06 2008
alter database open
Mon Dec 29 12:03:06 2008
Beginning crash recovery of 1 threads
Mon Dec 29 12:03:07 2008
Started first pass scan
ORA-368 signalled during: alter database open...
Mon Dec 29 12:05:09 2008
Restarting dead background process QMN0
QMN0 started with pid=15
Mon Dec 29 12:05:14 2008
ALTER DATABASE RECOVER database until cancel
Mon Dec 29 12:05:14 2008
Media Recovery Start
Starting datafile 1 recovery in thread 1 sequence 801
Datafile 1: 'G:\ORACLE\ORADATA\HOTEST\SYSTEM01.DBF'
Starting datafile 2 recovery in thread 1 sequence 801
Datafile 2: 'G:\ORACLE\ORADATA\HOTEST\UNDOTBS01.DBF'
Starting datafile 3 recovery in thread 1 sequence 801
Datafile 3: 'G:\ORACLE\ORADATA\HOTEST\CWMLITE01.DBF'
Starting datafile 4 recovery in thread 1 sequence 801
Datafile 4: 'G:\ORACLE\ORADATA\HOTEST\DRSYS01.DBF'
Starting datafile 5 recovery in thread 1 sequence 801
Datafile 5: 'G:\ORACLE\ORADATA\HOTEST\EXAMPLE01.DBF'
Starting datafile 6 recovery in thread 1 sequence 801
Datafile 6: 'G:\ORACLE\ORADATA\HOTEST\INDX01.DBF'
Starting datafile 7 recovery in thread 1 sequence 801
Datafile 7: 'G:\ORACLE\ORADATA\HOTEST\ODM01.DBF'
Starting datafile 8 recovery in thread 1 sequence 801
Datafile 8: 'G:\ORACLE\ORADATA\HOTEST\TOOLS01.DBF'
Starting datafile 9 recovery in thread 1 sequence 801
Datafile 9: 'G:\ORACLE\ORADATA\HOTEST\USERS01.DBF'
Starting datafile 10 recovery in thread 1 sequence 801
Datafile 10: 'G:\ORACLE\ORADATA\HOTEST\XDB01.DBF'
Starting datafile 11 recovery in thread 1 sequence 801
Datafile 11: 'G:\ORACLE\ORADATA\HOTEST\CADATA3.ORA'
Starting datafile 12 recovery in thread 1 sequence 801
Datafile 12: 'G:\ORACLE\ORADATA\HOTEST\CADATA.ORA'
Starting datafile 13 recovery in thread 1 sequence 801
Datafile 13: 'G:\ORACLE\ORADATA\HOTEST\CADATAWRO.ORA'
Starting datafile 14 recovery in thread 1 sequence 801
Datafile 14: 'G:\ORACLE\ORADATA\HOTEST\CADATANRO.ORA'
Starting datafile 15 recovery in thread 1 sequence 801
Datafile 15: 'G:\ORACLE\ORADATA\HOTEST\CADATACRO.ORA'
Starting datafile 16 recovery in thread 1 sequence 801
Datafile 16: 'G:\ORACLE\ORADATA\HOTEST\CADATAOTH.ORA'
Starting datafile 17 recovery in thread 1 sequence 801
Datafile 17: 'G:\ORACLE\ORADATA\HOTEST\CADATAERO.ORA'
Starting datafile 18 recovery in thread 1 sequence 801
Datafile 18: 'G:\ORACLE\ORADATA\HOTEST\CADATAHO.ORA'
Starting datafile 19 recovery in thread 1 sequence 801
Datafile 19: 'G:\ORACLE\ORADATA\HOTEST\CADATASRO.ORA'
Starting datafile 20 recovery in thread 1 sequence 801
Datafile 20: 'G:\ORACLE\ORADATA\HOTEST\PAYROLL.ORA'
Starting datafile 21 recovery in thread 1 sequence 801
Datafile 21: 'G:\ORACLE\ORADATA\HOTEST\ORION.ORA'
Starting datafile 22 recovery in thread 1 sequence 801
Datafile 22: 'G:\ORACLE\ORADATA\HOTEST\ATHENA.ORA'
Starting datafile 23 recovery in thread 1 sequence 801
Datafile 23: 'G:\ORACLE\ORADATA\HOTEST\CAL_YEAR_2004.ORA'
Starting datafile 24 recovery in thread 1 sequence 801
Datafile 24: 'G:\ORACLE\ORADATA\HOTEST\CAL_YEAR_2005.ORA'
Starting datafile 25 recovery in thread 1 sequence 801
Datafile 25: 'G:\ORACLE\ORADATA\HOTEST\CAL_YEAR_2006.ORA'
Starting datafile 26 recovery in thread 1 sequence 801
Datafile 26: 'G:\ORACLE\ORADATA\HOTEST\CAL_YEAR_2007.ORA'
Starting datafile 27 recovery in thread 1 sequence 801
Datafile 27: 'G:\ORACLE\ORADATA\HOTEST\CAL_YEAR_2008.ORA'
Starting datafile 28 recovery in thread 1 sequence 801
Datafile 28: 'G:\ORACLE\ORADATA\HOTEST\CAL_YEAR_2009.ORA'
Starting datafile 29 recovery in thread 1 sequence 801
Datafile 29: 'G:\ORACLE\ORADATA\HOTEST\EXAMPLE02.DBF'
Starting datafile 30 recovery in thread 1 sequence 801
Datafile 30: 'G:\ORACLE\ORADATA\HOTEST\EXAMPLE02.RAW'
Media Recovery Log
ORA-279 signalled during: ALTER DATABASE RECOVER database until cancel ...
Mon Dec 29 12:06:24 2008
ALTER DATABASE RECOVER CANCEL
Mon Dec 29 12:06:24 2008
ORA-1547 signalled during: ALTER DATABASE RECOVER CANCEL ...
Mon Dec 29 12:06:24 2008
ALTER DATABASE RECOVER CANCEL
ORA-1112 signalled during: ALTER DATABASE RECOVER CANCEL ...
Mon Dec 29 12:06:40 2008
alter database open resetlogs
Mon Dec 29 12:06:41 2008
ORA-1194 signalled during: alter database open resetlogs...
Mon Dec 29 12:06:47 2008
alter database open noresetlogs
Mon Dec 29 12:06:48 2008
Beginning crash recovery of 1 threads
Mon Dec 29 12:06:48 2008
Started first pass scan
ORA-368 signalled during: alter database open noresetlogs...
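The repeated ORA-368 above is a redo block checksum error hit during crash recovery. A common way to investigate it is sketched below; this is not taken from the thread, and the group number is purely illustrative:

```sql
-- Find which online log group crash recovery needs (status CURRENT
-- or ACTIVE) and where its member files live.
SELECT group#, sequence#, status, first_change# FROM v$log;
SELECT group#, member FROM v$logfile;

-- If the damaged group was never archived, its redo can be discarded.
-- Do this only if losing that redo is acceptable, and take a full
-- backup immediately afterwards. Group 2 is hypothetical here.
ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 2;
```

If the corrupt group is the very one crash recovery requires, clearing it may be refused; incomplete recovery (RECOVER DATABASE UNTIL CANCEL, then OPEN RESETLOGS) is then the usual path, which is exactly what the log above shows being attempted.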
Mon Dec 29 12:10:24 2008
Restarting dead background process QMN0
QMN0 started with pid=15
Mon Dec 29 12:15:39 2008
Restarting dead background process QMN0
QMN0 started with pid=15
Mon Dec 29 12:20:55 2008
Restarting dead background process QMN0
QMN0 started with pid=15
Mon Dec 29 12:26:10 2008
-
RMAN-05501: RMAN-11003 ORA-00353: log corruption near block 2048 change
Hi Gurus,
I posted a few days ago about an issue I got while recreating my Data Guard standby.
The main issue was that while duplicating the target from the active database, I got these errors during the recovery process.
The restore process went fine, RMAN copied the datafiles very well, but it stopped at the moment the recovery process started from the auxiliary db.
Yesterday I took one last try.
I followed the same procedure, the one described in all the Oracle docs, Google and so on ... it's not a secret I guess.
Then I got the same issue, same errors.
I read something about archivelogs and block corruption and so on. I tried so many things (register the log... etc etc), and then I read something about "catalog the logfile",
and that's what I did.
But I was connected only to the target db.
contents of Memory Script:
set until scn 1638816629;
recover
standby
clone database
delete archivelog
executing Memory Script
executing command: SET until clause
Starting recover at 14-MAY-13
starting media recovery
archived log for thread 1 with sequence 32196 is already on disk as file /archives/CMOVP/stby/1_32196_810397891.arc
archived log for thread 1 with sequence 32197 is already on disk as file /archives/CMOVP/stby/1_32197_810397891.arc
archived log for thread 1 with sequence 32198 is already on disk as file /archives/CMOVP/stby/1_32198_810397891.arc
archived log for thread 1 with sequence 32199 is already on disk as file /archives/CMOVP/stby/1_32199_810397891.arc
archived log for thread 1 with sequence 32200 is already on disk as file /archives/CMOVP/stby/1_32200_810397891.arc
archived log for thread 1 with sequence 32201 is already on disk as file /archives/CMOVP/stby/1_32201_810397891.arc
archived log for thread 1 with sequence 32202 is already on disk as file /archives/CMOVP/stby/1_32202_810397891.arc
archived log for thread 1 with sequence 32203 is already on disk as file /archives/CMOVP/stby/1_32203_810397891.arc
archived log for thread 1 with sequence 32204 is already on disk as file /archives/CMOVP/stby/1_32204_810397891.arc
archived log for thread 1 with sequence 32205 is already on disk as file /archives/CMOVP/stby/1_32205_810397891.arc
archived log for thread 1 with sequence 32206 is already on disk as file /archives/CMOVP/stby/1_32206_810397891.arc
archived log for thread 1 with sequence 32207 is already on disk as file /archives/CMOVP/stby/1_32207_810397891.arc
archived log for thread 1 with sequence 32208 is already on disk as file /archives/CMOVP/stby/1_32208_810397891.arc
archived log for thread 1 with sequence 32209 is already on disk as file /archives/CMOVP/stby/1_32209_810397891.arc
archived log for thread 1 with sequence 32210 is already on disk as file /archives/CMOVP/stby/1_32210_810397891.arc
archived log for thread 1 with sequence 32211 is already on disk as file /archives/CMOVP/stby/1_32211_810397891.arc
archived log for thread 1 with sequence 32212 is already on disk as file /archives/CMOVP/stby/1_32212_810397891.arc
archived log for thread 1 with sequence 32213 is already on disk as file /archives/CMOVP/stby/1_32213_810397891.arc
archived log for thread 1 with sequence 32214 is already on disk as file /archives/CMOVP/stby/1_32214_810397891.arc
archived log for thread 1 with sequence 32215 is already on disk as file /archives/CMOVP/stby/1_32215_810397891.arc
archived log for thread 1 with sequence 32216 is already on disk as file /archives/CMOVP/stby/1_32216_810397891.arc
archived log for thread 1 with sequence 32217 is already on disk as file /archives/CMOVP/stby/1_32217_810397891.arc
archived log for thread 1 with sequence 32218 is already on disk as file /archives/CMOVP/stby/1_32218_810397891.arc
archived log for thread 1 with sequence 32219 is already on disk as file /archives/CMOVP/stby/1_32219_810397891.arc
archived log for thread 1 with sequence 32220 is already on disk as file /archives/CMOVP/stby/1_32220_810397891.arc
archived log for thread 1 with sequence 32221 is already on disk as file /archives/CMOVP/stby/1_32221_810397891.arc
archived log for thread 1 with sequence 32222 is already on disk as file /archives/CMOVP/stby/1_32222_810397891.arc
archived log for thread 1 with sequence 32223 is already on disk as file /archives/CMOVP/stby/1_32223_810397891.arc
archived log file name=/archives/CMOVP/stby/1_32196_810397891.arc thread=1 sequence=32196
released channel: prm1
released channel: stby1
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 05/14/2013 01:11:33
RMAN-05501: aborting duplication of target database
RMAN-03015: error occurred in stored script Memory Script
ORA-00283: recovery session canceled due to errors
RMAN-11003: failure during parse/execution of SQL statement: alter database recover logfile '/archives/CMOVP/stby/1_32196_810397891.arc'
ORA-00283: recovery session canceled due to errors
ORA-00354: corrupt redo log block header
ORA-00353: log corruption near block 2048 change 1638686297 time 05/13/2013 22:42:03
ORA-00334: archived log: '/archives/CMOVP/stby/1_32196_810397891.arc'
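Before assuming an archived log is truly corrupt, its contents can be checked independently of RMAN's metadata. A hedged sketch (the file name is taken from the error stack above): this statement dumps the redo records to a trace file and fails with ORA-353/ORA-354 on genuine corruption.

```sql
-- Dump the suspect archived log; output goes to a trace file under
-- user_dump_dest. A clean dump suggests the file on disk is sound,
-- pointing instead at stale RMAN metadata as the cause of the error.
ALTER SYSTEM DUMP LOGFILE '/archives/CMOVP/stby/1_32196_810397891.arc';
```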
################# What I did: ################################
rman target /
RMAN> catalog archivelog '/archives/CMOVP/stby/1_32196_810397891.arc';
Then I connected to the target and auxiliary again: rman target / catalog rman/rman@rman auxiliary
and I ran the last content of the failing memory script:
RMAN> run {
set until scn 1638816629;
recover
standby
clone database
delete archivelog
;
}
And the DB started the recovery process, and my standby completed the recovery very well with the message "Recovery finished/completed".
Then I could configure Data Guard.
And I checked the process, and log apply was on and running fine, no gaps, perfect!!!!!
How?!! Just by cataloging a "supposedly corrupted" archive log !!!!!!
Any ideas to help understand this would be great.
Rgds
Carlos

okKarol wrote: (quotes the original post above in full)

Hi,
Can you change the standby database archive destination from /archives/CMOVP/stby/ to another disk?
I think this problem is on your disk.
Mahir
p.s. I remember your earlier thread, too
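One way to sanity-check the disk theory is to compare checksums of the same archived log on the primary and standby sides: differing digests point at a bad copy or failing disk rather than corruption written by the primary. The snippet below only simulates the comparison with temporary files (all paths are hypothetical, not from the thread):

```shell
# Simulate comparing an archived log and its transferred copy:
# identical files produce identical md5 digests; a damaged copy would not.
printf 'redo-bytes' > /tmp/arc_primary.arc
cp /tmp/arc_primary.arc /tmp/arc_standby.arc
a=$(md5sum /tmp/arc_primary.arc | cut -d' ' -f1)
b=$(md5sum /tmp/arc_standby.arc | cut -d' ' -f1)
if [ "$a" = "$b" ]; then echo "copies match"; else echo "copies differ"; fi
```

In the real scenario the two md5sum commands would run against the log file on each host (e.g. over ssh) instead of against local copies.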