Logical corruption of block
Dear Experts
Can you please help me understand logical block corruption in detail?
Thanks
Asif Husain Khan
I wrote a short note about it on my blog; you may want to read it:
http://blog.aristadba.com/?p=109
HTH
Aman....
Similar Messages
-
How to logically corrupt a block
Sir,
I know different ways to check whether a block is corrupted,
but I can only test them if some block actually is corrupted.
How would you make a block corrupt? I would be very grateful if you could give an
example of block corruption.
Hi Sheila M,
I need to find a way to extract lists of emails from Junk mail folder and add them en masse to blocked senders, without having to add them to my blocked senders one by one.
I have had the same email address for too long to change it, but I receive huge amounts of junk mail. I have so far barred nearly 2000 senders and have 1100 sitting in my junk folder from the last week. I barred them one by one, using the RULES function... I simply cannot keep doing this.
I have MacBook Pro running Mavericks, plus iPad plus iPhone 5.
The problem is aggravated by the fact that Apple has a weakness in Mail: I may bar thousands of senders on my Mac, but the filter does not sync the barred addresses to my iPad or iPhone, so if I use Mail on either, I get many thousands of emails at once and have to mark them one by one to delete. It's ridiculous.
To solve this problem, what I have decided to do is block all the senders in webmail.
However I still have the same problem with extracting multiple addresses from my junk mail folder in order to copy them into blocked senders list.
Genius Bar said it can't be done. I think they're wrong. There simply has to be (a) a location on my Mac where I can find a list of all those I have already barred; (b) a way to extract email addresses from groups of received mail.
I'd be grateful for some advice, please. -
Diff between logical and physical block corruption
What is the difference between physical and logical block corruption?
The DBVERIFY utility and the ANALYZE command are used to check for logical block corruption, not physical corruption; am I correct?
When I get:
ORA-01578: ORACLE data block corrupted (file # 9, block # 13)
ORA-01110: data file 9: '/oracle/dbs/tbs_91.f'
ORA-01578: ORACLE data block corrupted (file # 2, block # 19)
ORA-01110: data file 2: '/oracle/dbs/tbs_21.f'
How can I confirm whether this is logical or physical block corruption?
Please throw some light on this.
kumaresh
The following link may help you:
http://download-east.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmconc1012.htm -
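For reference, one hedged way to tell the two apart (using the file numbers from the errors above): run an RMAN validation, then look at the CORRUPTION_TYPE column of V$DATABASE_BLOCK_CORRUPTION, which distinguishes physical damage (e.g. FRACTURED, CHECKSUM) from logical damage (CORRUPT). A sketch:

```sql
-- RMAN: check both physical and (with CHECK LOGICAL) logical consistency.
-- 11g syntax; on 10g use BACKUP VALIDATE CHECK LOGICAL DATAFILE 2, 9;
RMAN> VALIDATE CHECK LOGICAL DATAFILE 2, 9;

-- SQL*Plus: inspect what the validation recorded.
SQL> SELECT file#, block#, blocks, corruption_type
  2    FROM v$database_block_corruption
  3   WHERE file# IN (2, 9);
```

Note that this view only reflects blocks RMAN has actually examined in a backup or validate run.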
Logically corrupted blocks in standby
Hi
Assume I have a primary database and standby database.
Accidentally, some of the objects (indexes and tables) are in NOLOGGING mode in the primary database.
Force logging is not set.
When I scan the datafiles on the standby, I find that some of them contain logically corrupt blocks because of this.
How can I get rid of these corrupted blocks?
If I rebuild the indexes with the LOGGING option and recreate the tables as LOGGING,
will that solve the problem? Or do you have any other suggestion?
Many thanks
Sivok wrote:
Hi
Assume I have a primary database and standby database.
Accidentally, some of the objects (indexes and tables) are in NOLOGGING mode in the primary database.
Force logging is not set.
When I scan the datafiles on the standby, I find that some of them contain logically corrupt blocks because of this.
How can I get rid of these corrupted blocks?
If I rebuild the indexes with the LOGGING option and recreate the tables as LOGGING,
will that solve the problem? Or do you have any other suggestion?
Many thanks
Your primary should run in FORCE LOGGING mode (ALTER DATABASE FORCE LOGGING); the object-level setting is then ignored for direct-path operations. You can apply an incremental backup to the standby to catch up (or just recreate the standby, which might be as quick, depending on volumes).
Niall Litchfield
http://www.orawin.info/ -
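For reference, the fix Niall describes can be sketched like this (object names are hypothetical):

```sql
-- On the primary: redo is then generated even for direct-path
-- operations, regardless of object-level NOLOGGING attributes.
ALTER DATABASE FORCE LOGGING;

-- Re-create the damaged blocks with logged operations:
ALTER INDEX my_idx REBUILD;   -- logged now that FORCE LOGGING is on
ALTER TABLE my_tab MOVE;      -- rewrites the table's blocks, generating redo
```

The redo from the rebuild/move repairs the corresponding standby blocks once applied; alternatively, an incremental backup taken on the primary can be applied to the standby to catch it up.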
Questions on Logical corruption
Hello all,
My DB versions range from 10g to 11.2.0.3 on various OSes. We are in the process of deploying RMAN on our systems, and I am having a hard time testing and getting a grip on the whole logical-corruption topic. From what I understand (please correct me if I am wrong):
1. I can include the CHECK LOGICAL clause in my backup command (and that will check for both physical and logical corruption). But how much overhead does it add? It seems to be anywhere from 14-20% extra backup time.
2. Leaving MAXCORRUPT at its default (which I believe is 0), if there is physical corruption my backup will fail and I should get an email/alert saying the backup broke.
3. Would this be the same for logical corruption too? Would RMAN report logical corruption right away, as it does physical corruption? Or do I have to query V$DATABASE_BLOCK_CORRUPTION after the backup is done to find out whether I have logical corruption?
4. How would one test for logical corruption (besides a NOLOGGING operation, as our DBs have force logging turned on)?
5. Is it good practice to have CHECK LOGICAL in the daily backup? I have no problem with it for small DBs, but some of ours are close to 50 TB+, and I think CHECK LOGICAL will increase the backup time significantly.
6. If RMAN cannot repair logical corruption, why would I want to use CHECK LOGICAL (besides knowing I have a problem, which the end user then has to fix by reloading the data, assuming it's a table and not an index that is corrupt)?
7. Any best practices when it comes to checking for logical corruption in DBs of 50+ TB?
I have searched here and on Google, but I could not find any way of reproducing logical corruption (maybe there is none); still, I wanted to ask the community about it.
Thank you in advance for your time.
General info:
http://www.oracle.com/technetwork/database/focus-areas/availability/maa-datacorruption-bestpractices-396464.pdf
You might want to google "fractured block" for information about it outside RMAN. You can simulate one by writing a C program to flip some bits, although technically that would be physical corruption. Also see Dealing with Oracle Database Block Corruption in 11g | The Oracle Instructor.
One way to simulate it is to use NOLOGGING operations and then try to recover (this is why force logging is used; google "corruption force logging"). Here's an example: Block corruption after RMAN restore and recovery !!! | Practical Oracl Hey, no simulation there, that one's for real!
Somewhere in the recovery docs it explains... aw, I lost my train of thought; you might get better answers with shorter questions, or one question per thread, on this kind of forum. Oh yes: somewhere in the docs it explains that RMAN doesn't report the error right away, because later in the recovery stream it may decide the block was newly formatted and there wasn't really a problem.
This really depends on how much data is changing and how. If you do many NOLOGGING operations or run a complicated standby setup, you can run into this more often. There's a trade-off between verifying everything and backup windows; site requirements control everything. That said, I've found that only paranoid DBAs check enough, while IT managers often say "that will never happen." Actually, even paranoid DBAs don't check enough; the vagaries of manual labor and flaky equipment can overshadow anything. -
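For question 4 (reproducing soft corruption somewhere safe), a hedged recipe: with force logging off, a direct-path NOLOGGING load followed by a restore and recover leaves the affected blocks marked corrupt, which CHECK LOGICAL then reports. A sketch, test database only, with hypothetical names and the datafile number left as a placeholder:

```sql
ALTER DATABASE NO FORCE LOGGING;

-- 1. In RMAN, back up the target datafile BEFORE the unlogged load:
--      RMAN> BACKUP DATAFILE <n>;
-- 2. Do a direct-path NOLOGGING load into that tablespace:
CREATE TABLE t_nolog TABLESPACE test_ts NOLOGGING
  AS SELECT * FROM dba_objects;
-- 3. Restore and recover the datafile; redo cannot rebuild unlogged blocks:
--      RMAN> RESTORE DATAFILE <n>;
--      RMAN> RECOVER DATAFILE <n>;
--      RMAN> VALIDATE CHECK LOGICAL DATAFILE <n>;
-- 4. The affected blocks now show up as corrupt:
SELECT * FROM v$database_block_corruption;
```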
Logical corruption in datafile
What is logical corruption?
How can it occur in a datafile? Is it caused by the disk?
How can it be avoided?
Is it possible to check for it at regular intervals with some job script? Any idea which command to use, and how? Will DBVERIFY do it?
Any good reading/URL is most welcome.
Thank you very much.
user642237 wrote:
What is logical corruption?
How can it occur in a datafile? Is it caused by the disk?
How can it be avoided?
Is it possible to check for it at regular intervals with some job script? Any idea which command to use, and how? Will DBVERIFY do it?
Any good reading/URL is most welcome.
Thank you very much.
What's the DB version and O/S? Where did you read the term "logical corruption" applied to datafiles? AFAIK, datafiles only get physically corrupted; logical corruption happens within blocks, for example an index entry pointing to a null rowid. I am not sure I have come across any situation or reference where this kind of corruption is described for files as well. To check for it, the best tool is RMAN, which can do the job with some simple commands.
HTH
Aman.... -
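For reference, the kind of scheduled check discussed above can be sketched as follows (the datafile path is hypothetical):

```
# Shell: DBVERIFY on a single datafile; performs intra-block checks
# and reports any pages marked corrupt (path hypothetical):
dbv file=/u01/oradata/ORCL/users01.dbf blocksize=8192 logfile=dbv_users.log

# RMAN: whole-database physical + logical check without writing a backup
# (11g syntax; on 10g, BACKUP VALIDATE CHECK LOGICAL DATABASE):
RMAN> VALIDATE CHECK LOGICAL DATABASE;

# SQL*Plus: anything RMAN found is recorded here:
SQL> SELECT * FROM v$database_block_corruption;
```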
Logical corruption found in the sysaux tablespace
Dear All:
Lately we have been seeing a logical corruption error when running the DBVERIFY command, which shows block corruption. It is always in the SYSAUX tablespace. The database is 11g and the platform is Linux.
We get an error like "error backing up file 2 block xxxx: logical corruption", and it reaches the alert.log from the automated maintenance jobs, such as the SQL Tuning Advisor running during the maintenance window.
Now, as far as I know, we can't drop or rename the SYSAUX tablespace. There is a STARTUP MIGRATE option to drop SYSAUX, but it does not work due to the presence of domain indexes. You may run RMAN block media recovery, but it ends up not fixing the problem, since RMAN backups are physical and do not maintain logical integrity.
Any help, advice, or suggestion will be highly appreciated.
If you leave this corruption there, you are likely to face a big issue that will compromise database availability sooner or later. SYSAUX is a critical tablespace, so you must proceed with caution.
Make sure you have a valid backup, and don't do anything unless you are sure about what you are doing and have a fallback procedure.
If you still have a valid backup, you can use RMAN to perform block-level recovery; this will help you fix the block. Otherwise, try to restore and recover SYSAUX. If you cannot fix the block by refreshing the SYSAUX tablespace, then I suggest you create a new database and use the Transportable Tablespace technique to migrate all tablespaces from the current database to the new one, and get rid of this database.
~ Madrid
http://hrivera99.blogspot.com -
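For reference, the block-level recovery Madrid mentions can be sketched as below (the block number stays the post's xxxx placeholder; RECOVER ... BLOCK is the 11g syntax, BLOCKRECOVER the 10g one, and both need a usable backup of the block):

```sql
-- 11g: repair the single reported block in file 2 (SYSAUX here):
RMAN> RECOVER DATAFILE 2 BLOCK xxxx;

-- Or let RMAN repair every block currently listed in
-- V$DATABASE_BLOCK_CORRUPTION:
RMAN> RECOVER CORRUPTION LIST;
```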
Hello,
I am running a backup and checking for any logical corruption -
RMAN> backup check logical database;
Starting backup at 03-MAR-10
allocated channel: ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: SID=135 device type=SBT_TAPE
channel ORA_SBT_TAPE_1: Data Protection for Oracle: version 5.5.1.0
allocated channel: ORA_SBT_TAPE_2
channel ORA_SBT_TAPE_2: SID=137 device type=SBT_TAPE
channel ORA_SBT_TAPE_2: Data Protection for Oracle: version 5.5.1.0
allocated channel: ORA_SBT_TAPE_3
channel ORA_SBT_TAPE_3: SID=138 device type=SBT_TAPE
channel ORA_SBT_TAPE_3: Data Protection for Oracle: version 5.5.1.0
channel ORA_SBT_TAPE_1: starting full datafile backup set
channel ORA_SBT_TAPE_1: specifying datafile(s) in backup set
input datafile file number=00014 name=/oracle1/data01/TESTDB/TESTDB_compress_test_01.dbf
input datafile file number=00006 name=/oracle/TESTDB/data01/TESTDB_shau_01.dbf
input datafile file number=00015 name=/oracle/product/11.1/dbs/ILM_TOOLKIT_IML_TEST_TAB_A.f
channel ORA_SBT_TAPE_1: starting piece 1 at 03-MAR-10
channel ORA_SBT_TAPE_2: starting full datafile backup set
channel ORA_SBT_TAPE_2: specifying datafile(s) in backup set
input datafile file number=00003 name=/oracle/TESTDB/data02/TESTDB_undo_01.dbf
input datafile file number=00013 name=/oracle/TESTDB/data01/TESTDB_roop_01.dbf
input datafile file number=00012 name=/oracle/TESTDB/data01/TESTDB_example_01.dbf
input datafile file number=00005 name=/oracle/TESTDB/data01/TESTDB_sysaud_tab_1m_01.dbf
channel ORA_SBT_TAPE_2: starting piece 1 at 03-MAR-10
channel ORA_SBT_TAPE_3: starting full datafile backup set
channel ORA_SBT_TAPE_3: specifying datafile(s) in backup set
input datafile file number=00004 name=/oracle/TESTDB/data01/TESTDB_users_01.dbf
input datafile file number=00001 name=/oracle/TESTDB/data01/TESTDB_system_01.dbf
input datafile file number=00002 name=/oracle/TESTDB/data01/TESTDB_sysaux_01.dbf
input datafile file number=00025 name=/oracle/export_files/TESTDB_users_02.dbf
channel ORA_SBT_TAPE_3: starting piece 1 at 03-MAR-10
channel ORA_SBT_TAPE_3: finished piece 1 at 03-MAR-10
piece handle=5ul7ltsd_1_1 tag=TAG20100303T204356 comment=API Version 2.0,MMS Version 5.5.1.0
channel ORA_SBT_TAPE_3: backup set complete, elapsed time: 00:05:15
channel ORA_SBT_TAPE_2: finished piece 1 at 03-MAR-10
piece handle=5tl7ltsd_1_1 tag=TAG20100303T204356 comment=API Version 2.0,MMS Version 5.5.1.0
channel ORA_SBT_TAPE_2: backup set complete, elapsed time: 00:06:56
channel ORA_SBT_TAPE_1: finished piece 1 at 03-MAR-10
piece handle=5sl7ltsd_1_1 tag=TAG20100303T204356 comment=API Version 2.0,MMS Version 5.5.1.0
channel ORA_SBT_TAPE_1: backup set complete, elapsed time: 00:08:16
Finished backup at 03-MAR-10
Starting Control File and SPFILE Autobackup at 03-MAR-10
piece handle=c-2109934325-20100303-0c comment=API Version 2.0,MMS Version 5.5.1.0
Finished Control File and SPFILE Autobackup at 03-MAR-10
Question: By looking at the output, how can I tell that RMAN did a logical check for corruption? The output looks the same as for a simple backup without the logical corruption check. Please advise how to verify this.
Thanks!
hi,
I think you won't see any summary of this; corruption is reported only when it is found.
There is also one related setting that can be incorporated here - see example:
Example 2-25 Specifying Corruption Tolerance for Datafile Backups
This example assumes a database that contains 5 datafiles. It uses the SET MAXCORRUPT command to indicate that no more than one corrupt block should be tolerated in each datafile. Because the CHECK LOGICAL option is specified on the BACKUP command, RMAN checks for both physical and logical corruption.
RUN
{
  SET MAXCORRUPT FOR DATAFILE 1,2,3,4,5 TO 1;
  BACKUP CHECK LOGICAL
    DATABASE;
}
Use these to see clearer output:
-- Check for physical corruption of all database files.
VALIDATE DATABASE;
-- Check for physical and logical corruption of a tablespace.
VALIDATE CHECK LOGICAL TABLESPACE USERS;
e.g.
List of Datafiles
=================
File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- --------
1    FAILED 0              3536         57600           637711
File Name: /disk1/oradata/prod/system01.dbf
Block Type Blocks Failing Blocks Processed
---------- -------------- ----------------
Data       1              41876
Index      0              7721
Other      0              4467 -
Ocrcheck shows "Logical corruption check failed"
Hi, I have a strange issue that I am not sure how to recover from...
During a routine 'ocrcheck' we found the logical corruption mentioned in the title. In CRS_HOME/log/nodename/client/ I found that the previous ocrcheck had been done a month earlier and was successful, so something in the last month caused the logical corruption. The cluster is currently functioning OK.
So, I tried doing an ocrdump on some backups we have and I am receiving the following error -
#ocrdump -backupfile backup00.ocr <<< any backup I try for the past month
PROT-306: Failed to retrieve cluster registry data
This error occurs even on the backup file taken just before the successful ocrcheck a month earlier. The log for this ocrdump shows -
cat ocrdump_6494.log
Oracle Database 11g CRS Release 11.1.0.7.0 - Production Copyright 1996, 2007 Oracle. All rights reserved.
2010-08-18 12:57:17.024: [ OCRDUMP][2813008768]ocrdump starts...
2010-08-18 12:57:17.038: [ OCROSD][2813008768]utread:3: Problem reading buffer 7473000 buflen 4096 retval 0 phy_offset 15982592 retry 0
2010-08-18 12:57:17.038: [ OCROSD][2813008768]utread:4: Problem reading the buffer errno 2 errstring No such file or directory
2010-08-18 12:57:17.038: [ OCRRAW][2813008768]gst: Dev/Page/Block [0/3870/3927] is CORRUPT (header)
2010-08-18 12:57:17.039: [ OCRRAW][2813008768]rbkp:2: could not read the free list
2010-08-18 12:57:17.039: [ OCRRAW][2813008768]gst:could not read fcl page 1
2010-08-18 12:57:17.039: [ OCRRAW][2813008768]rbkp:2: could not read the free list
2010-08-18 12:57:17.039: [ OCRRAW][2813008768]gst:could not read fcl page 2
2010-08-18 12:57:17.039: [ OCRRAW][2813008768]fkce:2: problem reading the tnode 131072
2010-08-18 12:57:17.039: [ OCRRAW][2813008768]propropen: Failed in finding key comp entry [26]
2010-08-18 12:57:17.039: [ OCRDUMP][2813008768]Failed to open key handle for key name [SYSTEM] [PROC-26: Error while accessing the physical storage]
2010-08-18 12:57:17.039: [ OCRDUMP][2813008768]Failure when trying to traverse ROOTKEY [SYSTEM]
2010-08-18 12:57:17.039: [ OCRDUMP][2813008768]Exiting [status=success]...
NOTE: an 'ocrdump' of the active ocr does work and creates the ocrdumpfile
The corruption in the ocr seems to be two keynames pointing to the same block.
Oracle Database 11g CRS Release 11.1.0.7.0 - Production Copyright 1996, 2007 Oracle. All rights reserved.
2010-08-18 13:22:54.095: [OCRCHECK][285084544]ocrcheck starts...
2010-08-18 13:22:55.447: [OCRCHECK][285084544]protchcheck: OCR status : total = [262120], used = [15496], avail = [246624]
2010-08-18 13:22:55.545: [OCRCHECK][285084544]LOGICAL CORRUPTION: current_keyname [SYSTEM.css.diskfile2], and keyname [SYSTEM.css.diskfile1.FILENAME] point to same block_number [3928]
2010-08-18 13:22:55.732: [OCRCHECK][285084544]LOGICAL CORRUPTION: current_keyname [SYSTEM.OCR.MANUALBACKUP.ITEMS.0], and keyname [SYSTEM.css.diskfile1] point to same block_number [3927]
2010-08-18 13:23:03.159: [OCRCHECK][285084544]Exiting [status=success]...
Since one of the keynames refers to the voting disk, the voting disk does not appear correctly in a query -
crsctl query css votedisk
0. 0 /oracrsfiles/voting_disk_01
1. 0
2. 0 backup_20100818_103455.ocr <<<<this value changes if I issue a command that writes something to the ocr, in this case a manual backup.
My DBA is opening an SR, but I am wondering if I can use 'ocrconfig -restore' if the backupfile I want to use cannot be 'ocrdump'd?
Also, is anyone familiar with the 'ocrconfig -repair' as a possible solution?
Although this is a development cluster (two nodes), rebuilding would be a disaster ;)
Any help or thoughts would be much appreciated!
Hi buddy,
"My DBA is opening an SR" - Well... for corruption problems, there's no doubt it's better to work with the support team.
"but I am wondering if I can use 'ocrconfig -restore' if the backupfile I want to use cannot be 'ocrdump'd?" - No, that is not the idea... if your backup is not good, it's not safe to restore it. ;)
"Also, is anyone familiar with the 'ocrconfig -repair' as a possible solution?" - That is for repairing nodes that were down while some configuration change (replacing the OCR, for example) was executed, so I guess it's not your case.
Good Luck!
Cerreia -
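For reference, the usual OCR restore sequence, sketched under the assumption that at least one automatic backup is intact (paths hypothetical; run as root, with clusterware stopped on all nodes):

```
# List the automatically taken OCR backups:
ocrconfig -showbackup

# Stop CRS on every node, then restore from a chosen backup:
crsctl stop crs
ocrconfig -restore /u01/app/crs/cdata/mycluster/backup00.ocr
crsctl start crs

# Verify afterwards:
ocrcheck
```

A backup that cannot even be ocrdump'd may not restore cleanly either, which is why the SR is the right move here.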
The difference between physical block corruption and soft block corruption
Someone asked me a question about block corruption,
so I suggested a solution, and when I looked at the block dump it was as follows.
(Dump 1) The block dump I misdiagnosed (I judged it to be soft corrupt,
but it was actually physical corruption from a failed disk in a RAID 5 array):
*** 2007-03-06 09:54:33.103
Start dump data blocks tsn: 6 file#: 6 minblk 102032 maxblk 102032
buffer tsn: 6 rdba: 0x01818e90 (6/102032)
scn: 0x056c.f689329c seq: 0x01 flg: 0x06 tail: 0xa0cf0000
frmt: 0x02 chkval: 0x7675 type: 0x06=trans data
Hex dump of corrupt header 2 = BROKEN
That was the dump.
However, I had already experienced physical block corruption before;
after taking the dump below and checking it against several documents,
I reached the following conclusion.
(Dump 2) A dump confirmed as physical corruption
- In the line below, scn: 0x0000.00000000 seq: 0xff flg: 0x00 tail: 0x000006ff,
the SCN is 0x0000 and the seq value is not UB1MAXVAL-1,
so I concluded this is physical block corruption.
*** SESSION ID:(25.59041) 2005-12-12 11:41:13.132
Start dump data blocks tsn: 7 file#: 8 minblk 169994 maxblk 169994
buffer tsn: 7 rdba: 0x0202980a (8/169994)
scn: 0x0000.00000000 seq: 0xff flg: 0x00 tail: 0x000006ff
frmt: 0x02 chkval: 0x0000 type: 0x06=trans data
Block header dump: 0x0202980a
Object id on Block? Y
seg/obj: 0xef9 csc: 0x00.8c13d2b itc: 2 flg: O typ: 2 - INDEX
fsl: 2 fnx: 0x2024b0f ver: 0x01
Itl Xid Uba Flag Lck Scn/Fsc
0x01 xid: 0x0001.052.0001888b uba: 0x00c07725.2482.02 C--- 0 scn 0x0000.083a0f80
0x02 xid: 0x0007.038.0001a196 uba: 0x00c07574.1f28.0b ---- 232 fsc 0x0f57.00000000
Question:
I would like to be able to tell from a block dump whether a block has physical block corruption or soft block corruption. It seems that what I previously believed was wrong.
Something looked odd, so I logged in directly and investigated.
The alert log error is:
Errors in file /app/oracle/product/10.2.0/admin/TEST_T_ktrp4vpe/bdump/test_t_smon_12714.trc:
ORA-00604: error occurred at recursive SQL level 1
ORA-01578: ORACLE data block corrupted (file # 2, block # 2714)
ORA-01110: data file 2: '/test_tdata/SYSTEM04.dbf'
Searching Metalink for the above...
Title: FAQ: Physical Corruption
Document ID: Note 403747.1, Type: FAQ
(a) ORA-01578 - This error explains physical structural damage with a particular block.
(b) ORA-08103 - This error is a logical corruption error for a particular data block.
(c) ORA-00600 [2662] - This error is related to block corruption , and occurs due to a higher SCN than of database SCN.
-- What is a bit odd is that a document titled "Physical Corruption"
classifies both physical corruption and logical corruption ㅡ_ㅡ;
Going by item (a), "physical structural damage" naturally reads as physical corruption. I cannot tell from the dump contents, but from the alert error code
it looks like physical corruption.
Can the alert error code and the dump contents differ like this?
Post edited by:
darkturtle -
Corrupting the block to continue recovery in physical standby
Hi,
I'd like to ask how I can mark a block as corrupt so that recovery can continue on the physical standby.
DB Version: 11.1.0.7
Database Type: Data Warehouse
Our setup is a primary database and a standby database. We are not using Data Guard; the standby is another physical copy of production which acts as a standby and is kept in sync by a script run from time to time to apply the archived logs coming from production (it is not configured to sync using ARCH or LGWR and the corresponding configuration).
The standby database is now out of sync because of errors encountered while trying to apply an archived log; the error is below:
Fri Feb 11 05:50:59 2011
ORA-279 signalled during: ALTER DATABASE RECOVER CONTINUE DEFAULT ...
ALTER DATABASE RECOVER CONTINUE DEFAULT
Media Recovery Log /u01/archive/<sid>/1_50741_651679913.arch
Fri Feb 11 05:52:06 2011
Exception [type: SIGSEGV, Address not mapped to object] [ADDR:0x7FFFD2F18FF8] [PC:0x60197E0, kdr9ir2rst0()+326]
Errors in file /u01/app/oracle/diag/rdbms/<sid>/<sid>/trace/<sid>pr0028085.trc (incident=631460):
ORA-07445: exception encountered: core dump [kdr9ir2rst0()+326] [SIGSEGV] [ADDR:0x7FFFD2F18FF8] [PC:0x60197E0] [Address not mapped to object] []
Incident details in: /u01/app/oracle/diag/rdbms/<sid>/<sid>/incident/incdir_631460/<sid>pr0028085_i631460.trc
Fri Feb 11 05:52:10 2011
Trace dumping is performing id=[cdmp_20110211055210]
Fri Feb 11 05:52:14 2011
Sweep Incident[631460]: completed
Fri Feb 11 05:52:17 2011
Slave exiting with ORA-10562 exception
Errors in file /u01/app/oracle/diag/rdbms/<sid>/<sid>/trace/<sid>pr0028085.trc:
ORA-10562: Error occurred while applying redo to data block (file# 36, block# 1576118)
ORA-10564: tablespace <tablespace name>
ORA-01110: data file 36: '/u02/oradata/<sid>/<datafile>.dbf'
ORA-10561: block type 'TRANSACTION MANAGED DATA BLOCK', data object# 14877145
ORA-00607: Internal error occurred while making a change to a data block
ORA-00602: internal programming exception
ORA-07445: exception encountered: core dump [kdr9ir2rst0()+326] [SIGSEGV] [ADDR:0x7FFFD2F18FF8] [PC:0x60197E0] [Address not mapped to object] []
Based on the error log, it seems we are hitting a bug documented on Metalink (document IDs 460169.1 and 882851.1).
My question: the datafile# is given, the block# is known, and the data object is identified. I have verified that the object is not that important. Is there a way to mark the block as corrupt so that recovery can continue? I would then just drop the table in production, so the drop would also happen on the standby and the corrupt block would be gone too. Is this feasible?
If it's not, can you suggest what I can do next so that the physical standby can sync with prod again, aside from rebuilding the standby?
Please note that I also ran dbv on the file to check whether anything is marked corrupt, and the result for that datafile is also good:
dbv file=/u02/oradata/<sid>/<datafile>_19.dbf logfile=dbv_file_36.log blocksize=16384
oracle@<server>:[~] $ cat dbv_file_36.log
DBVERIFY: Release 11.1.0.7.0 - Production on Sun Feb 13 04:35:28 2011
Copyright (c) 1982, 2007, Oracle. All rights reserved.
DBVERIFY - Verification starting : FILE = /u02/oradata/<sid>/<datafile>_19.dbf
DBVERIFY - Verification complete
Total Pages Examined : 3840000
Total Pages Processed (Data) : 700644
Total Pages Failing (Data) : 0
Total Pages Processed (Index): 417545
Total Pages Failing (Index): 0
Total Pages Processed (Other): 88910
Total Pages Processed (Seg) : 0
Total Pages Failing (Seg) : 0
Total Pages Empty : 2632901
Total Pages Marked Corrupt : 0
Total Pages Influx : 0
Total Pages Encrypted : 0
Highest block SCN : 3811184883 (1.3811184883)
Any help is really appreciated. I hope to hear feedback from you.
Thanks
damorgan, I understand that opinion.
I am just new to the organization and inherited a data warehouse database without an RMAN backup. I am still setting up the RMAN backup; that's why I can't use RMAN to resolve the issue. All I have is the physical standby, and it is not a standby that syncs automatically via Data Guard or a standard standby setup. I am just looking for a solution applicable to the current situation -
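For reference, one option for exactly this situation (manual media recovery, expendable object) is the ALLOW ... CORRUPTION clause, which lets recovery mark the unrecoverable block corrupt and continue. Whether it is appropriate depends on the underlying bug, so treat this as a sketch to discuss with Support:

```sql
-- On the standby, in place of the plain RECOVER statement; permits
-- recovery to mark up to 1 bad block corrupt and keep going:
SQL> ALTER DATABASE RECOVER AUTOMATIC STANDBY DATABASE ALLOW 1 CORRUPTION;

-- Afterwards, drop the expendable object on the PRIMARY; the drop
-- reaches the standby through the redo stream, and the corrupt block
-- is eventually reformatted when its space is reused.
```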
Unable to drop materialized view with corrupted data blocks
Hi,
The alert log of our database is giving this message
Wed Jan 31 05:23:13 2007
ORACLE Instance mesh (pid = 9) - Error 1578 encountered while recovering transaction (6, 15) on object 13355.
Wed Jan 31 05:23:13 2007
Errors in file /u01/app/oracle/admin/mesh/bdump/mesh_smon_4369.trc:
ORA-01578: ORACLE data block corrupted (file # 5, block # 388260)
ORA-01110: data file 5: '/u03/oradata/mesh/mview.dbf'
No one is using this mview, yet Oracle is still trying to recover transaction (6, 15).
When I tried to drop the mview, it gave me this error:
ERROR at line 1:
ORA-01578: ORACLE data block corrupted (file # 5, block # 388260)
ORA-01110: data file 5: '/u03/oradata/mesh/mview.dbf'
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 2255
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 2461
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 2430
ORA-06512: at line 1
I have tried to fix the corrupted data blocks using the DBMS_REPAIR package, but to no avail.
I have marked the block to be skipped using DBMS_REPAIR.SKIP_CORRUPT_BLOCKS, but I am still unable to drop the mview.
Please suggest what I should do.
Thanks in advance
Anuj
You are lucky if only your unwanted MV is affected by these corrupted blocks. My advice is to take a complete backup of your database right away and check the disks for a possible replacement.
God save us!
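For reference, the standard DBMS_REPAIR sequence looks like the sketch below (owner/object names hypothetical; the skip procedure is SKIP_CORRUPT_BLOCKS, and note it only affects scans of the object itself, which may be why a drop routed through SYS.DBMS_SNAPSHOT internals can still hit the block):

```sql
DECLARE
  n_corrupt PLS_INTEGER;
  n_fixed   PLS_INTEGER;
BEGIN
  -- One-time setup of the repair table:
  DBMS_REPAIR.ADMIN_TABLES(
    table_name => 'REPAIR_TABLE',
    table_type => DBMS_REPAIR.REPAIR_TABLE,
    action     => DBMS_REPAIR.CREATE_ACTION);

  -- Find and record corrupt blocks in the object:
  DBMS_REPAIR.CHECK_OBJECT(
    schema_name => 'SCOTT', object_name => 'MY_MVIEW',
    corrupt_count => n_corrupt);

  -- Mark those blocks software-corrupt:
  DBMS_REPAIR.FIX_CORRUPT_BLOCKS(
    schema_name => 'SCOTT', object_name => 'MY_MVIEW',
    fix_count => n_fixed);

  -- Tell scans of this object to skip the marked blocks:
  DBMS_REPAIR.SKIP_CORRUPT_BLOCKS(
    schema_name => 'SCOTT', object_name => 'MY_MVIEW');
END;
/
```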
Reduce Logical IO [db block gets/consistent gets]
Hi,
I'm still unsure about logical I/O (db block gets + consistent gets).
I want to reduce the 'consistent gets' for this query:
SQL> set autotrace traceonly
SQL> select * from cm_per_phone_vw;
905 rows selected.
Execution Plan
Plan hash value: 524433310
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 868 | 38192 | 8 (0)| 00:00:01 |
| 1 | SORT GROUP BY NOSORT | | 868 | 38192 | 8 (0)| 00:00:01 |
| 2 | TABLE ACCESS BY INDEX ROWID| CI_PER_PHONE | 1238 | 54472 | 8 (0)| 00:00:01 |
| 3 | INDEX FULL SCAN | CM172C0 | 1238 | | 1 (0)| 00:00:01 |
Statistics
8 recursive calls
0 db block gets
922 consistent gets
4 physical reads
0 redo size
39151 bytes sent via SQL*Net to client
1045 bytes received via SQL*Net from client
62 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
905 rows processed
Following is the view it accesses:
CREATE OR REPLACE VIEW CM_PER_PHONE_VW
AS
SELECT
per_id
, MAX(DECODE(TRIM(phone_type_cd), 'MOB', phone)) AS MOB
, MAX(DECODE(TRIM(phone_type_cd), 'HOME', phone)) AS HOME
, MAX(DECODE(TRIM(phone_type_cd), 'BUSN', TRIM(phone) || ' ' || TRIM(extension))) AS BUSN
, MAX(DECODE(TRIM(phone_type_cd), 'FAX', phone)) AS FAX
, MAX(DECODE(TRIM(phone_type_cd), 'INT', phone)) AS INT
FROM
ci_per_phone
GROUP BY
per_id
I have the following indexes on table ci_per_phone:
INDEX_NAME COLUMN_NAME COLUMN_POSITION
XM172P0 PER_ID 1
XM172P0 SEQ_NUM 2
XM172S1 PHONE 1
CM172C0 PER_ID 1
I tried creating indexes on PER_ID and PHONE_TYPE_CD, but the consistent gets only drop to 920 from 922.
Just out of curiosity, how can I reduce this?
Secondly, is there any explanation of the 'Operation' column of the plan, e.g. TABLE ACCESS BY INDEX ROWID?
Please advise.
Luckys.
Further, I'm having a problem with another query, which is also a view:
CREATE OR REPLACE VIEW CM_PER_CHAR_VW
AS
SELECT
/*+ full (a) */
a.acct_id
, MAX(DECODE(a.char_type_cd, 'ACCTYPE', a.char_val)) acct_type
, MAX(DECODE(a.char_type_cd, 'PRVBLCYC', a.adhoc_char_val)) prev_bill_cyc
FROM
ci_acct_char a
WHERE
a.effdt =
(SELECT
MAX(a1.effdt)
FROM
ci_acct_char a1
WHERE a1.acct_id = a.acct_id
AND a1.char_type_cd = a.char_type_cd)
GROUP BY
a.acct_id
I'm not able to reduce the consistent gets, and a FILTER step even appears in the plan.
I've analyzed the table as well as the index on the table.
cisadm@CCBDEV> select * from cm_acct_char_vw;
2649 rows selected.
Execution Plan
Plan hash value: 132362271
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 27 | 4536 | 14 (8)| 00:00:01 |
| 1 | HASH GROUP BY | | 27 | 4536 | 14 (8)| 00:00:01 |
| 2 | VIEW | | 27 | 4536 | 14 (8)| 00:00:01 |
|* 3 | FILTER | | | | | |
| 4 | HASH GROUP BY | | 27 | 2916 | 14 (8)| 00:00:01 |
| 5 | NESTED LOOPS | | 2686 | 283K| 13 (0)| 00:00:01 |
| 6 | TABLE ACCESS FULL| CI_ACCT_CHAR | 2686 | 157K| 12 (0)| 00:00:01 |
|* 7 | INDEX RANGE SCAN | XM064P0 | 1 | 48 | 1 (0)| 00:00:01 |
Predicate Information (identified by operation id):
3 - filter("A"."EFFDT"=MAX("A1"."EFFDT"))
7 - access("A1"."ACCT_ID"="A"."ACCT_ID" AND
"A1"."CHAR_TYPE_CD"="A"."CHAR_TYPE_CD")
Statistics
0 recursive calls
0 db block gets
2754 consistent gets
0 physical reads
0 redo size
76517 bytes sent via SQL*Net to client
2321 bytes received via SQL*Net from client
178 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
2649 rows processed
Here's the tkprof output:
select *
from
cm_acct_char_vw
call count cpu elapsed disk query current rows
Parse 2 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 178 0.07 0.05 0 2754 0 2649
total 181 0.07 0.05 0 2754 0 2649
Misses in library cache during parse: 1
Optimizer mode: CHOOSE
Parsing user id: 63 (CISADM)
Rows Execution Plan
0 SELECT STATEMENT MODE: CHOOSE
0 HASH (GROUP BY)
0 VIEW
0 FILTER
0 HASH (GROUP BY)
0 NESTED LOOPS
0 TABLE ACCESS MODE: ANALYZED (FULL) OF 'CI_ACCT_CHAR'
(TABLE)
0 INDEX MODE: ANALYZED (RANGE SCAN) OF 'XM064P0'
(INDEX (UNIQUE))
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 179 0.00 0.00
SQL*Net message from client 179 0.00 0.08
********************************************************************************
I have a similar query for another table, where there are 1110 rows, but in its explain plan no FILTER appears in the predicate section:
Predicate Information (identified by operation id):
2 - access("P"."EFFDT"="VW_COL_1" AND "PER_ID"="P"."PER_ID" AND
"CHAR_TYPE_CD"="P"."CHAR_TYPE_CD")Both the queries have somewhat similar views.
I've got 2 questions:
Is there a way I can reduce the consistent gets (I've tried with/without hints)?
Secondly, what is the 'VW_COL_1' that the predicate access shows?
Please advise. -
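For the second view, one hedged way to cut the consistent gets is to replace the correlated MAX(effdt) subquery with a single-pass analytic so ci_acct_char is scanned once. A sketch; verify it returns the same rows as the original, in particular when two rows tie on effdt (ROW_NUMBER picks one arbitrarily, while the MAX(effdt) form keeps all tied rows):

```sql
CREATE OR REPLACE VIEW CM_PER_CHAR_VW
AS
SELECT acct_id
     , MAX(DECODE(char_type_cd, 'ACCTYPE',  char_val))       AS acct_type
     , MAX(DECODE(char_type_cd, 'PRVBLCYC', adhoc_char_val)) AS prev_bill_cyc
FROM (
  SELECT a.acct_id, a.char_type_cd, a.char_val, a.adhoc_char_val,
         -- newest row per (acct_id, char_type_cd) first
         ROW_NUMBER() OVER (PARTITION BY a.acct_id, a.char_type_cd
                            ORDER BY a.effdt DESC) AS rn
  FROM   ci_acct_char a
)
WHERE rn = 1
GROUP BY acct_id;
```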
Log corruption near block 1737
Hi,
I am getting an error while opening the DB:
SQL> alter database open;
alter database open
ERROR at line 1:
ORA-00368: checksum error in redo log block
ORA-00353: log corruption near block 1737 change 16680088 time 12/22/2008
10:40:13
ORA-00312: online log 2 thread 1: 'G:\ORACLE\ORADATA\HOTEST\REDO02.LOG'
While doing an incomplete recovery, it asks for an archived log file that I do not have, i.e. ARC_801_1.ARC:
SQL> recover database until cancel;
ORA-00279: change 16679127 generated at 12/22/2008 10:37:11 needed for thread 1
ORA-00289: suggestion : G:\ORACLE\ARCH\ARC_801_1.ARC
ORA-00280: change 16679127 for thread 1 is in sequence #801
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
cancel
ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
ORA-01194: file 1 needs more recovery to be consistent
ORA-01110: data file 1: 'G:\ORACLE\ORADATA\HOTEST\SYSTEM01.DBF'
ORA-01112: media recovery not started

Please suggest a way forward. I do not have the archived log of sequence no. 801, nor a backup. Is there any other way to bring the DB up?
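When both the needed archive log and any backup are gone, the only remaining route is the unsupported last resort that Oracle Support sometimes walks people through. A sketch is below; the hidden parameter is real, but this can leave the database logically inconsistent, so run it only on a copy or after taking a cold backup, and export/rebuild afterwards:

```sql
-- LAST RESORT, unsupported: force the database open past the lost redo.
-- Expect possible logical inconsistencies; export and rebuild afterwards.
STARTUP MOUNT;
ALTER SYSTEM SET "_allow_resetlogs_corruption" = TRUE SCOPE = SPFILE;
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
RECOVER DATABASE UNTIL CANCEL;   -- type CANCEL at the first prompt
ALTER DATABASE OPEN RESETLOGS;
```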
SMON: enabling tx recovery
Mon Dec 22 10:37:16 2008
Database Characterset is WE8MSWIN1252
replication_dependency_tracking turned off (no async multimaster replication found)
Completed: alter database open
Corrupt block relative dba: 0x0300cc4b (file 12, block 52299)
Bad check value found during buffer read
Data in bad block -
type: 6 format: 2 rdba: 0x0300cc4b
last change scn: 0x0000.000fa127 seq: 0x1 flg: 0x06
consistency value in tail: 0xa1270601
check value in block header: 0x8f85, computed block checksum: 0xcf00
spare1: 0x0, spare2: 0x0, spare3: 0x0
Reread of rdba: 0x0300cc4b (file 12, block 52299) found valid data
Corrupt block relative dba: 0x0300cdfb (file 12, block 52731)
Bad check value found during buffer read
Data in bad block -
type: 6 format: 2 rdba: 0x0300cdfb
last change scn: 0x0000.000fa128 seq: 0x1 flg: 0x04
consistency value in tail: 0xa1280601
check value in block header: 0x446b, computed block checksum: 0x3200
spare1: 0x0, spare2: 0x0, spare3: 0x0
Reread of rdba: 0x0300cdfb (file 12, block 52731) found valid data
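A "Reread ... found valid data" pair like the two above usually means the first read was damaged in transit (controller, cache, or memory) rather than on disk. One way to confirm the datafile copies themselves are clean is to have RMAN read them with logical checking and then query the corruption view; this is a sketch assuming RMAN is available against this 9.2 instance:

```sql
-- In RMAN first run: BACKUP VALIDATE CHECK LOGICAL DATABASE;
-- Then, in SQL*Plus, list any blocks that failed validation:
SELECT file#, block#, blocks, corruption_type
  FROM v$database_block_corruption;
-- No rows returned means every block passed both physical and logical checks
```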
Dump file g:\oracle\admin\hotest\bdump\alert_hotest.log
Mon Dec 22 10:41:29 2008
ORACLE V9.2.0.4.0 - Production vsnsta=0
vsnsql=12 vsnxtr=3
Windows 2000 Version 5.1 Service Pack 2, CPU type 586
Mon Dec 22 10:41:29 2008
Starting ORACLE instance (normal)
LICENSE_MAX_SESSION = 0
LICENSE_SESSIONS_WARNING = 0
SCN scheme 2
Using log_archive_dest parameter default value
LICENSE_MAX_USERS = 0
SYS auditing is disabled
Starting up ORACLE RDBMS Version: 9.2.0.4.0.
System parameters with non-default values:
processes = 150
timed_statistics = TRUE
shared_pool_size = 50331648
large_pool_size = 8388608
java_pool_size = 33554432
control_files = G:\oracle\oradata\hotest\CONTROL01.CTL, G:\oracle\oradata\hotest\CONTROL02.CTL, G:\oracle\oradata\hotest\CONTROL03.CTL
db_block_size = 8192
db_cache_size = 25165824
compatible = 9.2.0.0.0
log_archive_start = TRUE
log_archive_dest_1 = location=G:\oracle\arch
log_archive_dest_2 = SERVICE=stand LGWR ASYNC
log_archive_dest_state_1 = ENABLE
log_archive_dest_state_2 = ENABLE
fal_server = STAND
fal_client = HOTEST
log_archive_format = arc_%s_%t.arc
db_file_multiblock_read_count= 16
fast_start_mttr_target = 300
undo_management = AUTO
undo_tablespace = UNDOTBS1
undo_retention = 10800
remote_login_passwordfile= EXCLUSIVE
db_domain =
instance_name = hotest
dispatchers = (PROTOCOL=TCP) (SERVICE=hotestXDB)
job_queue_processes = 10
hash_join_enabled = TRUE
background_dump_dest = G:\oracle\admin\hotest\bdump
user_dump_dest = G:\oracle\admin\hotest\udump
core_dump_dest = G:\oracle\admin\hotest\cdump
sort_area_size = 524288
db_name = hotest
open_cursors = 300
star_transformation_enabled= FALSE
query_rewrite_enabled = FALSE
pga_aggregate_target = 25165824
aq_tm_processes = 1
PMON started with pid=2
DBW0 started with pid=3
LGWR started with pid=4
CKPT started with pid=5
SMON started with pid=6
RECO started with pid=7
CJQ0 started with pid=8
QMN0 started with pid=9
Mon Dec 22 10:41:36 2008
starting up 1 shared server(s) ...
starting up 1 dispatcher(s) for network address '(ADDRESS=(PARTIAL=YES)(PROTOCOL=TCP))'...
ARCH: STARTING ARCH PROCESSES
ARC0 started with pid=12
ARC0: Archival started
ARC1 started with pid=13
Mon Dec 22 10:41:37 2008
ARCH: STARTING ARCH PROCESSES COMPLETE
Mon Dec 22 10:41:37 2008
ARC0: Thread not mounted
Mon Dec 22 10:41:38 2008
ARC1: Archival started
Mon Dec 22 10:41:38 2008
ARC1: Thread not mounted
Mon Dec 22 10:41:38 2008
alter database mount exclusive
Mon Dec 22 10:41:43 2008
Successful mount of redo thread 1, with mount id 1003521250.
Mon Dec 22 10:41:43 2008
Database mounted in Exclusive Mode.
Completed: alter database mount exclusive
Mon Dec 22 10:41:43 2008
alter database open
Mon Dec 22 10:41:44 2008
Beginning crash recovery of 1 threads
Mon Dec 22 10:41:44 2008
Started first pass scan
ORA-368 signalled during: alter database open...
Mon Dec 22 10:42:35 2008
Restarting dead background process QMN0
QMN0 started with pid=9
Dump file g:\oracle\admin\hotest\bdump\alert_hotest.log
Mon Dec 29 10:14:22 2008
ORACLE V9.2.0.4.0 - Production vsnsta=0
vsnsql=12 vsnxtr=3
Windows 2000 Version 5.1 Service Pack 2, CPU type 586
Mon Dec 29 10:14:22 2008
Starting ORACLE instance (normal)
LICENSE_MAX_SESSION = 0
LICENSE_SESSIONS_WARNING = 0
SCN scheme 2
Using log_archive_dest parameter default value
LICENSE_MAX_USERS = 0
SYS auditing is disabled
Starting up ORACLE RDBMS Version: 9.2.0.4.0.
System parameters with non-default values:
processes = 150
timed_statistics = TRUE
shared_pool_size = 50331648
large_pool_size = 8388608
java_pool_size = 33554432
control_files = G:\oracle\oradata\hotest\CONTROL01.CTL, G:\oracle\oradata\hotest\CONTROL02.CTL, G:\oracle\oradata\hotest\CONTROL03.CTL
db_block_size = 8192
db_cache_size = 25165824
compatible = 9.2.0.0.0
log_archive_start = TRUE
log_archive_dest_1 = location=G:\oracle\arch
log_archive_dest_2 = SERVICE=stand LGWR ASYNC
log_archive_dest_state_1 = ENABLE
log_archive_dest_state_2 = ENABLE
fal_server = STAND
fal_client = HOTEST
log_archive_format = arc_%s_%t.arc
db_file_multiblock_read_count= 16
fast_start_mttr_target = 300
undo_management = AUTO
undo_tablespace = UNDOTBS1
undo_retention = 10800
remote_login_passwordfile= EXCLUSIVE
db_domain =
instance_name = hotest
dispatchers = (PROTOCOL=TCP) (SERVICE=hotestXDB)
job_queue_processes = 10
hash_join_enabled = TRUE
background_dump_dest = G:\oracle\admin\hotest\bdump
user_dump_dest = G:\oracle\admin\hotest\udump
core_dump_dest = G:\oracle\admin\hotest\cdump
sort_area_size = 524288
db_name = hotest
open_cursors = 300
star_transformation_enabled= FALSE
query_rewrite_enabled = FALSE
pga_aggregate_target = 25165824
aq_tm_processes = 1
PMON started with pid=2
DBW0 started with pid=3
LGWR started with pid=4
CKPT started with pid=5
SMON started with pid=6
RECO started with pid=7
CJQ0 started with pid=8
QMN0 started with pid=9
Mon Dec 29 10:14:28 2008
starting up 1 shared server(s) ...
starting up 1 dispatcher(s) for network address '(ADDRESS=(PARTIAL=YES)(PROTOCOL=TCP))'...
ARCH: STARTING ARCH PROCESSES
ARC0 started with pid=12
ARC0: Archival started
ARC1 started with pid=13
ARC1: Archival started
Mon Dec 29 10:14:30 2008
ARCH: STARTING ARCH PROCESSES COMPLETE
Mon Dec 29 10:14:30 2008
ARC1: Thread not mounted
Mon Dec 29 10:14:31 2008
ARC0: Thread not mounted
Mon Dec 29 10:14:32 2008
alter database mount exclusive
Mon Dec 29 10:14:37 2008
Successful mount of redo thread 1, with mount id 1004122632.
Mon Dec 29 10:14:37 2008
Database mounted in Exclusive Mode.
Completed: alter database mount exclusive
Mon Dec 29 10:14:37 2008
alter database open
Mon Dec 29 10:14:37 2008
Beginning crash recovery of 1 threads
Mon Dec 29 10:14:38 2008
Started first pass scan
ORA-368 signalled during: alter database open...
Mon Dec 29 10:15:29 2008
Restarting dead background process QMN0
QMN0 started with pid=9
Mon Dec 29 10:20:44 2008
Restarting dead background process QMN0
QMN0 started with pid=9
Mon Dec 29 10:25:54 2008
Restarting dead background process QMN0
QMN0 started with pid=9
Mon Dec 29 10:31:09 2008
Restarting dead background process QMN0
QMN0 started with pid=9
Mon Dec 29 10:36:24 2008
Restarting dead background process QMN0
QMN0 started with pid=9
Mon Dec 29 10:41:40 2008
Restarting dead background process QMN0
QMN0 started with pid=9
Mon Dec 29 10:46:55 2008
Restarting dead background process QMN0
QMN0 started with pid=9
Mon Dec 29 10:52:10 2008
Restarting dead background process QMN0
QMN0 started with pid=9
Mon Dec 29 10:57:26 2008
Restarting dead background process QMN0
QMN0 started with pid=9
Mon Dec 29 11:02:41 2008
Restarting dead background process QMN0
QMN0 started with pid=9
Mon Dec 29 11:07:56 2008
Restarting dead background process QMN0
QMN0 started with pid=9
Mon Dec 29 11:13:05 2008
Restarting dead background process QMN0
QMN0 started with pid=9
Mon Dec 29 11:18:15 2008
Restarting dead background process QMN0
QMN0 started with pid=9
Mon Dec 29 11:23:30 2008
Restarting dead background process QMN0
QMN0 started with pid=9
Mon Dec 29 11:28:39 2008
Restarting dead background process QMN0
QMN0 started with pid=9
Mon Dec 29 11:33:49 2008
Restarting dead background process QMN0
QMN0 started with pid=9
Mon Dec 29 11:39:04 2008
Restarting dead background process QMN0
QMN0 started with pid=9
Mon Dec 29 11:44:19 2008
Restarting dead background process QMN0
QMN0 started with pid=9
Mon Dec 29 11:49:35 2008
Restarting dead background process QMN0
QMN0 started with pid=9
Mon Dec 29 11:54:44 2008
Restarting dead background process QMN0
QMN0 started with pid=15
Mon Dec 29 11:59:59 2008
Restarting dead background process QMN0
QMN0 started with pid=14
Mon Dec 29 12:03:06 2008
alter database open
Mon Dec 29 12:03:06 2008
Beginning crash recovery of 1 threads
Mon Dec 29 12:03:07 2008
Started first pass scan
ORA-368 signalled during: alter database open...
Mon Dec 29 12:05:09 2008
Restarting dead background process QMN0
QMN0 started with pid=15
Mon Dec 29 12:05:14 2008
ALTER DATABASE RECOVER database until cancel
Mon Dec 29 12:05:14 2008
Media Recovery Start
Starting datafile 1 recovery in thread 1 sequence 801
Datafile 1: 'G:\ORACLE\ORADATA\HOTEST\SYSTEM01.DBF'
Starting datafile 2 recovery in thread 1 sequence 801
Datafile 2: 'G:\ORACLE\ORADATA\HOTEST\UNDOTBS01.DBF'
Starting datafile 3 recovery in thread 1 sequence 801
Datafile 3: 'G:\ORACLE\ORADATA\HOTEST\CWMLITE01.DBF'
Starting datafile 4 recovery in thread 1 sequence 801
Datafile 4: 'G:\ORACLE\ORADATA\HOTEST\DRSYS01.DBF'
Starting datafile 5 recovery in thread 1 sequence 801
Datafile 5: 'G:\ORACLE\ORADATA\HOTEST\EXAMPLE01.DBF'
Starting datafile 6 recovery in thread 1 sequence 801
Datafile 6: 'G:\ORACLE\ORADATA\HOTEST\INDX01.DBF'
Starting datafile 7 recovery in thread 1 sequence 801
Datafile 7: 'G:\ORACLE\ORADATA\HOTEST\ODM01.DBF'
Starting datafile 8 recovery in thread 1 sequence 801
Datafile 8: 'G:\ORACLE\ORADATA\HOTEST\TOOLS01.DBF'
Starting datafile 9 recovery in thread 1 sequence 801
Datafile 9: 'G:\ORACLE\ORADATA\HOTEST\USERS01.DBF'
Starting datafile 10 recovery in thread 1 sequence 801
Datafile 10: 'G:\ORACLE\ORADATA\HOTEST\XDB01.DBF'
Starting datafile 11 recovery in thread 1 sequence 801
Datafile 11: 'G:\ORACLE\ORADATA\HOTEST\CADATA3.ORA'
Starting datafile 12 recovery in thread 1 sequence 801
Datafile 12: 'G:\ORACLE\ORADATA\HOTEST\CADATA.ORA'
Starting datafile 13 recovery in thread 1 sequence 801
Datafile 13: 'G:\ORACLE\ORADATA\HOTEST\CADATAWRO.ORA'
Starting datafile 14 recovery in thread 1 sequence 801
Datafile 14: 'G:\ORACLE\ORADATA\HOTEST\CADATANRO.ORA'
Starting datafile 15 recovery in thread 1 sequence 801
Datafile 15: 'G:\ORACLE\ORADATA\HOTEST\CADATACRO.ORA'
Starting datafile 16 recovery in thread 1 sequence 801
Datafile 16: 'G:\ORACLE\ORADATA\HOTEST\CADATAOTH.ORA'
Starting datafile 17 recovery in thread 1 sequence 801
Datafile 17: 'G:\ORACLE\ORADATA\HOTEST\CADATAERO.ORA'
Starting datafile 18 recovery in thread 1 sequence 801
Datafile 18: 'G:\ORACLE\ORADATA\HOTEST\CADATAHO.ORA'
Starting datafile 19 recovery in thread 1 sequence 801
Datafile 19: 'G:\ORACLE\ORADATA\HOTEST\CADATASRO.ORA'
Starting datafile 20 recovery in thread 1 sequence 801
Datafile 20: 'G:\ORACLE\ORADATA\HOTEST\PAYROLL.ORA'
Starting datafile 21 recovery in thread 1 sequence 801
Datafile 21: 'G:\ORACLE\ORADATA\HOTEST\ORION.ORA'
Starting datafile 22 recovery in thread 1 sequence 801
Datafile 22: 'G:\ORACLE\ORADATA\HOTEST\ATHENA.ORA'
Starting datafile 23 recovery in thread 1 sequence 801
Datafile 23: 'G:\ORACLE\ORADATA\HOTEST\CAL_YEAR_2004.ORA'
Starting datafile 24 recovery in thread 1 sequence 801
Datafile 24: 'G:\ORACLE\ORADATA\HOTEST\CAL_YEAR_2005.ORA'
Starting datafile 25 recovery in thread 1 sequence 801
Datafile 25: 'G:\ORACLE\ORADATA\HOTEST\CAL_YEAR_2006.ORA'
Starting datafile 26 recovery in thread 1 sequence 801
Datafile 26: 'G:\ORACLE\ORADATA\HOTEST\CAL_YEAR_2007.ORA'
Starting datafile 27 recovery in thread 1 sequence 801
Datafile 27: 'G:\ORACLE\ORADATA\HOTEST\CAL_YEAR_2008.ORA'
Starting datafile 28 recovery in thread 1 sequence 801
Datafile 28: 'G:\ORACLE\ORADATA\HOTEST\CAL_YEAR_2009.ORA'
Starting datafile 29 recovery in thread 1 sequence 801
Datafile 29: 'G:\ORACLE\ORADATA\HOTEST\EXAMPLE02.DBF'
Starting datafile 30 recovery in thread 1 sequence 801
Datafile 30: 'G:\ORACLE\ORADATA\HOTEST\EXAMPLE02.RAW'
Media Recovery Log
ORA-279 signalled during: ALTER DATABASE RECOVER database until cancel ...
Mon Dec 29 12:06:24 2008
ALTER DATABASE RECOVER CANCEL
Mon Dec 29 12:06:24 2008
ORA-1547 signalled during: ALTER DATABASE RECOVER CANCEL ...
Mon Dec 29 12:06:24 2008
ALTER DATABASE RECOVER CANCEL
ORA-1112 signalled during: ALTER DATABASE RECOVER CANCEL ...
Mon Dec 29 12:06:40 2008
alter database open resetlogs
Mon Dec 29 12:06:41 2008
ORA-1194 signalled during: alter database open resetlogs...
Mon Dec 29 12:06:47 2008
alter database open noresetlogs
Mon Dec 29 12:06:48 2008
Beginning crash recovery of 1 threads
Mon Dec 29 12:06:48 2008
Started first pass scan
ORA-368 signalled during: alter database open noresetlogs...
Mon Dec 29 12:10:24 2008
Restarting dead background process QMN0
QMN0 started with pid=15
Mon Dec 29 12:15:39 2008
Restarting dead background process QMN0
QMN0 started with pid=15
Mon Dec 29 12:20:55 2008
Restarting dead background process QMN0
QMN0 started with pid=15
Mon Dec 29 12:26:10 2008 -
RMAN-05501: RMAN-11003 ORA-00353: log corruption near block 2048 change
Hi Gurus,
I've posted few days ago an issue I got while recreating my Dataguard.
The main issue: while duplicating the target from the active database, I got these errors during the recovery process.
The restore went fine and RMAN copied the datafiles, but it stopped at the moment the recovery process started on the auxiliary DB.
Yesterday I took one last try.
I followed the same procedure, the one described in all the Oracle docs, Google and so on ... it's not a secret, I guess.
Then I got the same issue, same errors.
I read something about archive logs and block corruption and so on, and tried many things (registering the log, etc.), and then I read something about "cataloging the logfile",
and that's what I did.
But I was connected only to the target DB.
contents of Memory Script:
set until scn 1638816629;
recover
standby
clone database
delete archivelog
executing Memory Script
executing command: SET until clause
Starting recover at 14-MAY-13
starting media recovery
archived log for thread 1 with sequence 32196 is already on disk as file /archives/CMOVP/stby/1_32196_810397891.arc
archived log for thread 1 with sequence 32197 is already on disk as file /archives/CMOVP/stby/1_32197_810397891.arc
archived log for thread 1 with sequence 32198 is already on disk as file /archives/CMOVP/stby/1_32198_810397891.arc
archived log for thread 1 with sequence 32199 is already on disk as file /archives/CMOVP/stby/1_32199_810397891.arc
archived log for thread 1 with sequence 32200 is already on disk as file /archives/CMOVP/stby/1_32200_810397891.arc
archived log for thread 1 with sequence 32201 is already on disk as file /archives/CMOVP/stby/1_32201_810397891.arc
archived log for thread 1 with sequence 32202 is already on disk as file /archives/CMOVP/stby/1_32202_810397891.arc
archived log for thread 1 with sequence 32203 is already on disk as file /archives/CMOVP/stby/1_32203_810397891.arc
archived log for thread 1 with sequence 32204 is already on disk as file /archives/CMOVP/stby/1_32204_810397891.arc
archived log for thread 1 with sequence 32205 is already on disk as file /archives/CMOVP/stby/1_32205_810397891.arc
archived log for thread 1 with sequence 32206 is already on disk as file /archives/CMOVP/stby/1_32206_810397891.arc
archived log for thread 1 with sequence 32207 is already on disk as file /archives/CMOVP/stby/1_32207_810397891.arc
archived log for thread 1 with sequence 32208 is already on disk as file /archives/CMOVP/stby/1_32208_810397891.arc
archived log for thread 1 with sequence 32209 is already on disk as file /archives/CMOVP/stby/1_32209_810397891.arc
archived log for thread 1 with sequence 32210 is already on disk as file /archives/CMOVP/stby/1_32210_810397891.arc
archived log for thread 1 with sequence 32211 is already on disk as file /archives/CMOVP/stby/1_32211_810397891.arc
archived log for thread 1 with sequence 32212 is already on disk as file /archives/CMOVP/stby/1_32212_810397891.arc
archived log for thread 1 with sequence 32213 is already on disk as file /archives/CMOVP/stby/1_32213_810397891.arc
archived log for thread 1 with sequence 32214 is already on disk as file /archives/CMOVP/stby/1_32214_810397891.arc
archived log for thread 1 with sequence 32215 is already on disk as file /archives/CMOVP/stby/1_32215_810397891.arc
archived log for thread 1 with sequence 32216 is already on disk as file /archives/CMOVP/stby/1_32216_810397891.arc
archived log for thread 1 with sequence 32217 is already on disk as file /archives/CMOVP/stby/1_32217_810397891.arc
archived log for thread 1 with sequence 32218 is already on disk as file /archives/CMOVP/stby/1_32218_810397891.arc
archived log for thread 1 with sequence 32219 is already on disk as file /archives/CMOVP/stby/1_32219_810397891.arc
archived log for thread 1 with sequence 32220 is already on disk as file /archives/CMOVP/stby/1_32220_810397891.arc
archived log for thread 1 with sequence 32221 is already on disk as file /archives/CMOVP/stby/1_32221_810397891.arc
archived log for thread 1 with sequence 32222 is already on disk as file /archives/CMOVP/stby/1_32222_810397891.arc
archived log for thread 1 with sequence 32223 is already on disk as file /archives/CMOVP/stby/1_32223_810397891.arc
archived log file name=/archives/CMOVP/stby/1_32196_810397891.arc thread=1 sequence=32196
released channel: prm1
released channel: stby1
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 05/14/2013 01:11:33
RMAN-05501: aborting duplication of target database
RMAN-03015: error occurred in stored script Memory Script
ORA-00283: recovery session canceled due to errors
RMAN-11003: failure during parse/execution of SQL statement: alter database recover logfile '/archives/CMOVP/stby/1_32196_810397891.arc'
ORA-00283: recovery session canceled due to errors
ORA-00354: corrupt redo log block header
ORA-00353: log corruption near block 2048 change 1638686297 time 05/13/2013 22:42:03
ORA-00334: archived log: '/archives/CMOVP/stby/1_32196_810397891.arc'
################# What I did: ################################
rman target /
RMAN> catalog archivelog '/archives/CMOVP/stby/1_32196_810397891.arc';
Then I connected to the target and auxiliary again: rman target / catalog rman/rman@rman auxiliary
and I re-ran the last content of the failing memory script:
RMAN> run
set until scn 1638816629;
recover
standby
clone database
delete archivelog
And the DB started the recovery process, and my standby completed the recovery with the message "recovery finished/completed".
Then I could configure Data Guard.
And I checked the process: log apply was on and running fine, no gaps, perfect!
How?! Just by cataloging a "supposedly corrupted" archive log!
Any ideas would be great, to understand this.
Rgds
Carlos

okKarol wrote: (full post quoted above)

Hi,
Can you change the standby database archive destination from /archives/CMOVP/stby/ to another disk?
I think this problem is with your disk.
Mahir
P.S. I remember your earlier thread, too.
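On Mahir's disk theory: a transient read error would also explain why simply cataloging the file and re-running the script worked, since the second read of the same log succeeded. One way to pre-check the logs before retrying the duplicate (assuming an 11g RMAN, which active-database duplication implies) is to have RMAN read them end to end without producing a backup:

```sql
-- In RMAN: read every archived log completely without writing a
-- backup piece; corrupt blocks are reported in the command output
RMAN> BACKUP VALIDATE ARCHIVELOG ALL;
```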