Alert log table
Hi,
I'm unable to fetch rows from X$DBGALERTEXT; I'm getting ORA-00942: table or view does not exist.
Please guide me on the exact permission that needs to be granted to that particular user. The DB is Oracle 11g R2.
Thanks.
select substr(MESSAGE_TEXT, 1, 300) message_text, count(*) cnt
from X$DBGALERTEXT
where (MESSAGE_TEXT like '%ORA-%' or upper(MESSAGE_TEXT) like '%ERROR%')
-- and cast(ORIGINATING_TIMESTAMP as DATE) > sysdate - #FREQUENCY#/1440
group by substr(MESSAGE_TEXT, 1, 300);
Edited by: Oragg on Mar 1, 2012 4:11 PM
No; there is a way:
sys@ORCL> conn scott/tiger
Connected.
GLOBAL_NAME
scott@ORCL
scott@ORCL> desc X$DBGALERTEXT
ERROR:
ORA-04043: object X$DBGALERTEXT does not exist
scott@ORCL> conn / as sysdba
Connected.
GLOBAL_NAME
sys@ORCL
sys@ORCL> grant select on X$DBGALERTEXT to scott;
grant select on X$DBGALERTEXT to scott
ERROR at line 1:
ORA-02030: can only select from fixed tables/views
sys@ORCL> show user
USER is "SYS"
sys@ORCL> create view test as select * from sys.x$dbgalertext;
View created.
sys@ORCL> grant select on sys.test to scott;
Grant succeeded.
sys@ORCL> conn scott/tiger
Connected.
GLOBAL_NAME
scott@ORCL
scott@ORCL> select substr(MESSAGE_TEXT, 1, 300) message_text, count(*) cnt
2 from sys.test
3 where (MESSAGE_TEXT like '%ORA-%' or upper(MESSAGE_TEXT) like '%ERROR%')
4 -- and cast(ORIGINATING_TIMESTAMP as DATE) > sysdate - #FREQUENCY#/1440
5 group by substr(MESSAGE_TEXT, 1, 300);
MESSAGE_TEXT
CNT
ORA-1507 signalled during: alter database dismount...
1
ORA-279 signalled during: alter database recover logfile 'C:\ORACLE\FLASH_RECOVERY_AREA\ARC000000000
6_0776787855.0001'...
1
Errors in file c:\oracle\diag\rdbms\orcl\orcl\trace\orcl_dbw0_5976.trc:
ORA-01157: cannot identify/lock data file 201 - see DBWR trace file
ORA-01110: data file 201: 'C:\ORACLE\ORADATA\ORCL\TEMP01.DBF'
ORA-27041: unable to open file
OSD-04002: unable to open file
O/S-Error: (OS 2) The system cannot
1
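To summarize the workaround demonstrated above: a direct grant on an X$ fixed table fails with ORA-02030, so SYS wraps the fixed table in an ordinary view and grants SELECT on the view instead. A minimal sketch (the view name here is illustrative; the session above simply used TEST):

```sql
-- Run as SYS. Direct grants on X$ fixed tables raise ORA-02030,
-- so expose the fixed table through a normal view first.
CREATE OR REPLACE VIEW alert_log_vw AS
    SELECT * FROM sys.x$dbgalertext;

GRANT SELECT ON alert_log_vw TO scott;

-- SCOTT then queries SYS.ALERT_LOG_VW instead of X$DBGALERTEXT.
```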
Similar Messages
-
ORA-07445 in the alert log when inserting into table with XMLType column
I'm trying to insert an XML document into a table with a schema-based XMLType column. When I try to insert a row (using PL/SQL Developer), Oracle is busy for a few seconds and then the connection to Oracle is lost.
Below you'll find the following to recreate the problem:
a) contents from the alert log
b) create script for the table
c) the before-insert trigger
d) the xml-schema
e) code for registering the schema
f) the test program
g) platform information
Alert Log:
Fri Aug 17 00:44:11 2007
Errors in file /oracle/app/oracle/product/10.2.0/db_1/admin/dntspilot2/udump/dntspilot2_ora_13807.trc:
ORA-07445: exception encountered: core dump [SIGSEGV] [Address not mapped to object] [475177] [] [] []
Create script for the table:
CREATE TABLE "DNTSB"."SIGNATURETABLE"
( "XML_DOCUMENT" "SYS"."XMLTYPE" ,
"TS" TIMESTAMP (6) WITH TIME ZONE NOT NULL ENABLE
) XMLTYPE COLUMN "XML_DOCUMENT" XMLSCHEMA "http://www.sporfori.fo/schemas/www.w3.org/TR/xmldsig-core/xmldsig-core-schema.xsd" ELEMENT "Object"
ROWDEPENDENCIES ;
Before-insert trigger:
create or replace trigger BIS_SIGNATURETABLE
before insert on signaturetable
for each row
declare
-- local variables here
l_sigtab_rec signaturetable%rowtype;
begin
if (:new.xml_document is not null) then
:new.xml_document.schemavalidate();
end if;
l_sigtab_rec.xml_document := :new.xml_document;
end BIS_SIGNATURETABLE;
XML-Schema (xmldsig-core-schema.xsd):
=====================================================================================
<?xml version="1.0" encoding="utf-8"?>
<!-- Schema for XML Signatures
http://www.w3.org/2000/09/xmldsig#
$Revision: 1.1 $ on $Date: 2002/02/08 20:32:26 $ by $Author: reagle $
Copyright 2001 The Internet Society and W3C (Massachusetts Institute
of Technology, Institut National de Recherche en Informatique et en
Automatique, Keio University). All Rights Reserved.
http://www.w3.org/Consortium/Legal/
This document is governed by the W3C Software License [1] as described
in the FAQ [2].
[1] http://www.w3.org/Consortium/Legal/copyright-software-19980720
[2] http://www.w3.org/Consortium/Legal/IPR-FAQ-20000620.html#DTD
-->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:ds="http://www.w3.org/2000/09/xmldsig#" xmlns:xdb="http://xmlns.oracle.com/xdb"
targetNamespace="http://www.w3.org/2000/09/xmldsig#" version="0.1" elementFormDefault="qualified">
<!-- Basic Types Defined for Signatures -->
<xs:simpleType name="CryptoBinary">
<xs:restriction base="xs:base64Binary">
</xs:restriction>
</xs:simpleType>
<!-- Start Signature -->
<xs:element name="Signature" type="ds:SignatureType"/>
<xs:complexType name="SignatureType">
<xs:sequence>
<xs:element ref="ds:SignedInfo"/>
<xs:element ref="ds:SignatureValue"/>
<xs:element ref="ds:KeyInfo" minOccurs="0"/>
<xs:element ref="ds:Object" minOccurs="0" maxOccurs="unbounded"/>
</xs:sequence>
<xs:attribute name="Id" type="xs:ID" use="optional"/>
</xs:complexType>
<xs:element name="SignatureValue" type="ds:SignatureValueType"/>
<xs:complexType name="SignatureValueType">
<xs:simpleContent>
<xs:extension base="xs:base64Binary">
<xs:attribute name="Id" type="xs:ID" use="optional"/>
</xs:extension>
</xs:simpleContent>
</xs:complexType>
<!-- Start SignedInfo -->
<xs:element name="SignedInfo" type="ds:SignedInfoType"/>
<xs:complexType name="SignedInfoType">
<xs:sequence>
<xs:element ref="ds:CanonicalizationMethod"/>
<xs:element ref="ds:SignatureMethod"/>
<xs:element ref="ds:Reference" maxOccurs="unbounded"/>
</xs:sequence>
<xs:attribute name="Id" type="xs:ID" use="optional"/>
</xs:complexType>
<xs:element name="CanonicalizationMethod" type="ds:CanonicalizationMethodType"/>
<xs:complexType name="CanonicalizationMethodType" mixed="true">
<xs:sequence>
<xs:any namespace="##any" minOccurs="0" maxOccurs="unbounded"/>
<!-- (0,unbounded) elements from (1,1) namespace -->
</xs:sequence>
<xs:attribute name="Algorithm" type="xs:anyURI" use="required"/>
</xs:complexType>
<xs:element name="SignatureMethod" type="ds:SignatureMethodType"/>
<xs:complexType name="SignatureMethodType" mixed="true">
<xs:sequence>
<xs:element name="HMACOutputLength" minOccurs="0" type="ds:HMACOutputLengthType"/>
<xs:any namespace="##other" minOccurs="0" maxOccurs="unbounded"/>
<!-- (0,unbounded) elements from (1,1) external namespace -->
</xs:sequence>
<xs:attribute name="Algorithm" type="xs:anyURI" use="required"/>
</xs:complexType>
<!-- Start Reference -->
<xs:element name="Reference" type="ds:ReferenceType"/>
<xs:complexType name="ReferenceType">
<xs:sequence>
<xs:element ref="ds:Transforms" minOccurs="0"/>
<xs:element ref="ds:DigestMethod"/>
<xs:element ref="ds:DigestValue"/>
</xs:sequence>
<xs:attribute name="Id" type="xs:ID" use="optional"/>
<xs:attribute name="URI" type="xs:anyURI" use="optional"/>
<xs:attribute name="Type" type="xs:anyURI" use="optional"/>
</xs:complexType>
<xs:element name="Transforms" type="ds:TransformsType"/>
<xs:complexType name="TransformsType">
<xs:sequence>
<xs:element ref="ds:Transform" maxOccurs="unbounded"/>
</xs:sequence>
</xs:complexType>
<xs:element name="Transform" type="ds:TransformType"/>
<xs:complexType name="TransformType" mixed="true">
<xs:choice minOccurs="0" maxOccurs="unbounded">
<xs:any namespace="##other" processContents="lax"/>
<!-- (1,1) elements from (0,unbounded) namespaces -->
<xs:element name="XPath" type="xs:string"/>
</xs:choice>
<xs:attribute name="Algorithm" type="xs:anyURI" use="required"/>
</xs:complexType>
<!-- End Reference -->
<xs:element name="DigestMethod" type="ds:DigestMethodType"/>
<xs:complexType name="DigestMethodType" mixed="true">
<xs:sequence>
<xs:any namespace="##other" processContents="lax" minOccurs="0" maxOccurs="unbounded"/>
</xs:sequence>
<xs:attribute name="Algorithm" type="xs:anyURI" use="required"/>
</xs:complexType>
<xs:element name="DigestValue" type="ds:DigestValueType"/>
<xs:simpleType name="DigestValueType">
<xs:restriction base="xs:base64Binary"/>
</xs:simpleType>
<!-- End SignedInfo -->
<!-- Start KeyInfo -->
<xs:element name="KeyInfo" type="ds:KeyInfoType"/>
<xs:complexType name="KeyInfoType" mixed="true">
<xs:choice maxOccurs="unbounded">
<xs:element ref="ds:KeyName"/>
<xs:element ref="ds:KeyValue"/>
<xs:element ref="ds:RetrievalMethod"/>
<xs:element ref="ds:X509Data"/>
<xs:element ref="ds:PGPData"/>
<xs:element ref="ds:SPKIData"/>
<xs:element ref="ds:MgmtData"/>
<xs:any processContents="lax" namespace="##other"/>
<!-- (1,1) elements from (0,unbounded) namespaces -->
</xs:choice>
<xs:attribute name="Id" type="xs:ID" use="optional"/>
</xs:complexType>
<xs:element name="KeyName" type="xs:string"/>
<xs:element name="MgmtData" type="xs:string"/>
<xs:element name="KeyValue" type="ds:KeyValueType"/>
<xs:complexType name="KeyValueType" mixed="true">
<xs:choice>
<xs:element ref="ds:DSAKeyValue"/>
<xs:element ref="ds:RSAKeyValue"/>
<xs:any namespace="##other" processContents="lax"/>
</xs:choice>
</xs:complexType>
<xs:element name="RetrievalMethod" type="ds:RetrievalMethodType"/>
<xs:complexType name="RetrievalMethodType">
<xs:sequence>
<xs:element ref="ds:Transforms" minOccurs="0"/>
</xs:sequence>
<xs:attribute name="URI" type="xs:anyURI"/>
<xs:attribute name="Type" type="xs:anyURI" use="optional"/>
</xs:complexType>
<!-- Start X509Data -->
<xs:element name="X509Data" type="ds:X509DataType"/>
<xs:complexType name="X509DataType">
<xs:sequence maxOccurs="unbounded">
<xs:choice>
<xs:element name="X509IssuerSerial" type="ds:X509IssuerSerialType"/>
<xs:element name="X509SKI" type="xs:base64Binary"/>
<xs:element name="X509SubjectName" type="xs:string"/>
<xs:element name="X509Certificate" type="xs:base64Binary"/>
<xs:element name="X509CRL" type="xs:base64Binary"/>
<xs:any namespace="##other" processContents="lax"/>
</xs:choice>
</xs:sequence>
</xs:complexType>
<xs:complexType name="X509IssuerSerialType">
<xs:sequence>
<xs:element name="X509IssuerName" type="xs:string"/>
<xs:element name="X509SerialNumber" type="xs:integer"/>
</xs:sequence>
</xs:complexType>
<!-- End X509Data -->
<!-- Begin PGPData -->
<xs:element name="PGPData" type="ds:PGPDataType"/>
<xs:complexType name="PGPDataType">
<xs:choice>
<xs:sequence>
<xs:element name="PGPKeyID" type="xs:base64Binary"/>
<xs:element name="PGPKeyPacket" type="xs:base64Binary" minOccurs="0"/>
<xs:any namespace="##other" processContents="lax" minOccurs="0"
maxOccurs="unbounded"/>
</xs:sequence>
<xs:sequence>
<xs:element name="PGPKeyPacket" type="xs:base64Binary"/>
<xs:any namespace="##other" processContents="lax" minOccurs="0"
maxOccurs="unbounded"/>
</xs:sequence>
</xs:choice>
</xs:complexType>
<!-- End PGPData -->
<!-- Begin SPKIData -->
<xs:element name="SPKIData" type="ds:SPKIDataType"/>
<xs:complexType name="SPKIDataType">
<xs:sequence maxOccurs="unbounded">
<xs:element name="SPKISexp" type="xs:base64Binary"/>
<xs:any namespace="##other" processContents="lax" minOccurs="0"/>
</xs:sequence>
</xs:complexType>
<!-- End SPKIData -->
<!-- End KeyInfo -->
<!-- Start Object (Manifest, SignatureProperty) -->
<xs:element name="Object" type="ds:ObjectType"/>
<xs:complexType name="ObjectType" mixed="true">
<xs:sequence minOccurs="0" maxOccurs="unbounded">
<xs:any namespace="##any" processContents="lax"/>
</xs:sequence>
<xs:attribute name="Id" type="xs:ID" use="optional"/>
<xs:attribute name="MimeType" type="xs:string" use="optional"/> <!-- add a grep facet -->
<xs:attribute name="Encoding" type="xs:anyURI" use="optional"/>
</xs:complexType>
<xs:element name="Manifest" type="ds:ManifestType"/>
<xs:complexType name="ManifestType">
<xs:sequence>
<xs:element ref="ds:Reference" maxOccurs="unbounded"/>
</xs:sequence>
<xs:attribute name="Id" type="xs:ID" use="optional"/>
</xs:complexType>
<xs:element name="SignatureProperties" type="ds:SignaturePropertiesType"/>
<xs:complexType name="SignaturePropertiesType">
<xs:sequence>
<xs:element ref="ds:SignatureProperty" maxOccurs="unbounded"/>
</xs:sequence>
<xs:attribute name="Id" type="xs:ID" use="optional"/>
</xs:complexType>
<xs:element name="SignatureProperty" type="ds:SignaturePropertyType"/>
<xs:complexType name="SignaturePropertyType" mixed="true">
<xs:choice maxOccurs="unbounded">
<xs:any namespace="##other" processContents="lax"/>
<!-- (1,1) elements from (1,unbounded) namespaces -->
</xs:choice>
<xs:attribute name="Target" type="xs:anyURI" use="required"/>
<xs:attribute name="Id" type="xs:ID" use="optional"/>
</xs:complexType>
<!-- End Object (Manifest, SignatureProperty) -->
<!-- Start Algorithm Parameters -->
<xs:simpleType name="HMACOutputLengthType">
<xs:restriction base="xs:integer"/>
</xs:simpleType>
<!-- Start KeyValue Element-types -->
<xs:element name="DSAKeyValue" type="ds:DSAKeyValueType"/>
<xs:complexType name="DSAKeyValueType">
<xs:sequence>
<xs:sequence minOccurs="0">
<xs:element name="P" type="ds:CryptoBinary"/>
<xs:element name="Q" type="ds:CryptoBinary"/>
</xs:sequence>
<xs:element name="G" type="ds:CryptoBinary" minOccurs="0"/>
<xs:element name="Y" type="ds:CryptoBinary"/>
<xs:element name="J" type="ds:CryptoBinary" minOccurs="0"/>
<xs:sequence minOccurs="0">
<xs:element name="Seed" type="ds:CryptoBinary"/>
<xs:element name="PgenCounter" type="ds:CryptoBinary"/>
</xs:sequence>
</xs:sequence>
</xs:complexType>
<xs:element name="RSAKeyValue" type="ds:RSAKeyValueType"/>
<xs:complexType name="RSAKeyValueType">
<xs:sequence>
<xs:element name="Modulus" type="ds:CryptoBinary"/>
<xs:element name="Exponent" type="ds:CryptoBinary"/>
</xs:sequence>
</xs:complexType>
<!-- End KeyValue Element-types -->
<!-- End Signature -->
</xs:schema>
===============================================================================
Code for registering the xml-schema
begin
dbms_xmlschema.deleteSchema('http://xmlns.oracle.com/xdb/schemas/DNTSB/www.sporfori.fo/schemas/www.w3.org/TR/xmldsig-core/xmldsig-core-schema.xsd',
dbms_xmlschema.DELETE_CASCADE_FORCE);
end;
begin
DBMS_XMLSCHEMA.REGISTERURI(
schemaurl => 'http://www.sporfori.fo/schemas/www.w3.org/TR/xmldsig-core/xmldsig-core-schema.xsd',
schemadocuri => 'http://www.sporfori.fo/schemas/www.w3.org/TR/xmldsig-core/xmldsig-core-schema.xsd',
local => TRUE,
gentypes => TRUE,
genbean => FALSE,
gentables => TRUE,
force => FALSE,
owner => 'DNTSB',
options => 0);
end;
Test program
-- Created on 17-07-2006 by EEJ
declare
XML_TEXT3 CLOB := '<Object xmlns="http://www.w3.org/2000/09/xmldsig#">
<SignatureProperties>
<SignatureProperty Target="">
<Timestamp xmlns="http://www.sporfori.fo/schemas/dnts/general/2006/11/14">2007-05-10T12:00:00-05:00</Timestamp>
</SignatureProperty>
</SignatureProperties>
</Object>';
xmldoc xmltype;
begin
xmldoc := xmltype(xml_text3);
insert into signaturetable
(xml_document, ts)
values
(xmldoc, current_timestamp);
end;
Platform information
Operating system:
-bash-3.00$ uname -a
SunOS dntsdb 5.10 Generic_125101-09 i86pc i386 i86pc
SQLPlus:
SQL*Plus: Release 10.2.0.3.0 - Production on Fri Aug 17 00:15:13 2007
Copyright (c) 1982, 2006, Oracle. All Rights Reserved.
Enter password:
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Production
With the Partitioning and Data Mining options
Kind Regards,
Eyðun
You should report this in a service request on http://metalink.oracle.com.
It is a shame that you put all the effort here to describe your problem, but on the other hand you can now also copy & paste the question to Oracle Support.
Because you are using 10.2.0.3, I am guessing that you have a valid service contract... -
Hi
Oracle Version 10.2.0.3.0
Last Friday we had a power failure and a server rebooted abruptly. After it came online I restarted the database; it did an instance recovery and came online without any problems. However, when I checked the alert log file I noticed that the date & timestamp had gone back 14 days. This lasted for a while and then it started showing the current date & timestamp. Is that normal? If it's not, could someone help me figure out why this happened?
Fri Feb 27 21:26:29 2009
Starting ORACLE instance (normal)
LICENSE_MAX_SESSION = 0
LICENSE_SESSIONS_WARNING = 0
Picked latch-free SCN scheme 2
Using LOG_ARCHIVE_DEST_1 parameter default value as /opt/oracle/product/10.2/db_1/dbs/arch
Autotune of undo retention is turned on.
IMODE=BR
ILAT =121
LICENSE_MAX_USERS = 0
SYS auditing is disabled
ksdpec: called for event 13740 prior to event group initialization
Starting up ORACLE RDBMS Version: 10.2.0.3.0.
System parameters with non-default values:
processes = 1000
sessions = 1105
__shared_pool_size = 184549376
__large_pool_size = 16777216
__java_pool_size = 16777216
__streams_pool_size = 0
nls_language = ENGLISH
nls_territory = UNITED KINGDOM
filesystemio_options = SETALL
sga_target = 1577058304
control_files = /opt/oracle/oradata/rep/control.001.dbf, /opt/oracle/oradata/rep/control.002.dbf, /opt/oracle/oradata/rep/control.003.dbf
db_block_size = 8192
__db_cache_size = 1342177280
compatible = 10.2.0
Fri Feb 27 21:26:31 2009
ALTER DATABASE MOUNT
Fri Feb 27 21:26:35 2009
Setting recovery target incarnation to 1
Fri Feb 27 21:26:36 2009
Successful mount of redo thread 1, with mount id 740543687
Fri Feb 27 21:26:36 2009
Database mounted in Exclusive Mode
Completed: ALTER DATABASE MOUNT
Fri Feb 27 21:26:36 2009
ALTER DATABASE OPEN
Fri Feb 27 21:26:36 2009
Beginning crash recovery of 1 threads
parallel recovery started with 3 processes
Fri Feb 27 21:26:37 2009
Started redo scan
Fri Feb 27 21:26:41 2009
Completed redo scan
481654 redo blocks read, 13176 data blocks need recovery
Fri Feb 27 21:26:50 2009
Started redo application at
Thread 1: logseq 25176, block 781367
Fri Feb 27 21:26:50 2009
Recovery of Online Redo Log: Thread 1 Group 6 Seq 25176 Reading mem 0
Mem# 0: /opt/oracle/oradata/rep/redo_a/redo06.log
Mem# 1: /opt/oracle/oradata/rep/redo_b/redo06.log
Fri Feb 27 21:26:53 2009
Completed redo application
Fri Feb 27 21:27:00 2009
Completed crash recovery at
Thread 1: logseq 25176, block 1263021, scn 77945260488
13176 data blocks read, 13176 data blocks written, 481654 redo blocks read
Fri Feb 27 21:27:02 2009
Expanded controlfile section 9 from 1168 to 2336 records
Requested to grow by 1168 records; added 4 blocks of records
Thread 1 advanced to log sequence 25177
Thread 1 opened at log sequence 25177
Current log# 7 seq# 25177 mem# 0: /opt/oracle/oradata/rep/redo_a/redo07.log
Current log# 7 seq# 25177 mem# 1: /opt/oracle/oradata/rep/redo_b/redo07.log
Successful open of redo thread 1
Fri Feb 27 21:27:02 2009
MTTR advisory is disabled because FAST_START_MTTR_TARGET is not set
Fri Feb 27 21:27:02 2009
SMON: enabling cache recovery
Fri Feb 27 21:27:04 2009
Successfully onlined Undo Tablespace 1.
Fri Feb 27 21:27:04 2009
SMON: enabling tx recovery
Fri Feb 27 21:27:04 2009
Database Characterset is AL32UTF8
replication_dependency_tracking turned off (no async multimaster replication found)
Starting background process QMNC
QMNC started with pid=17, OS id=4563
Fri Feb 27 21:27:08 2009
Completed: ALTER DATABASE OPEN
Fri Feb 27 22:46:04 2009
Thread 1 advanced to log sequence 25178
Current log# 8 seq# 25178 mem# 0: /opt/oracle/oradata/rep/redo_a/redo08.log
Current log# 8 seq# 25178 mem# 1: /opt/oracle/oradata/rep/redo_b/redo08.log
Fri Feb 27 23:43:49 2009
Thread 1 advanced to log sequence 25179
Current log# 9 seq# 25179 mem# 0: /opt/oracle/oradata/rep/redo_a/redo09.log
Current log# 9 seq# 25179 mem# 1: /opt/oracle/oradata/rep/redo_b/redo09.log
Fri Mar 13 20:09:29 2009
MMNL absent for 1194469 secs; Foregrounds taking over
Fri Mar 13 20:10:16 2009
Thread 1 advanced to log sequence 25180
Current log# 10 seq# 25180 mem# 0: /opt/oracle/oradata/rep/redo_a/redo10.log
Current log# 10 seq# 25180 mem# 1: /opt/oracle/oradata/rep/redo_b/redo10.log
Fri Mar 13 20:21:17 2009
Thread 1 advanced to log sequence 25181
Current log# 1 seq# 25181 mem# 0: /opt/oracle/oradata/rep/redo_a/redo01.log
Current log# 1 seq# 25181 mem# 1: /opt/oracle/oradata/rep/redo_b/redo01.log
Yes, you are right. I just found that the server was shut down for more than 4 hours; it came back online at 8:08pm, and I think within a few minutes those old timestamps appeared in the alert log. We have a table which captures the current timestamp from the DB and a timestamp from the application, and usually both columns are the same. But the following rows were inserted during the time of the issue. I'm not sure why this happened. One more thing: the listener was started and running while the database was starting and performing instance recovery.
DBTimestamp_ ApplicationTimestamp_
27-02-2009 21:27:45 13-03-2009 20:08:42
27-02-2009 21:31:47 13-03-2009 20:08:43
27-02-2009 21:31:54 13-03-2009 20:08:43
27-02-2009 21:33:39 13-03-2009 20:08:42
27-02-2009 21:35:47 13-03-2009 20:08:42
27-02-2009 21:37:45 13-03-2009 20:08:42
27-02-2009 21:38:24 13-03-2009 20:08:42
27-02-2009 21:39:42 13-03-2009 20:08:42
27-02-2009 21:40:01 13-03-2009 20:08:42
27-02-2009 21:41:13 13-03-2009 20:08:42
27-02-2009 21:44:07 13-03-2009 20:08:43
27-02-2009 21:53:54 13-03-2009 20:08:42
27-02-2009 22:03:45 13-03-2009 20:08:42
27-02-2009 22:07:02 13-03-2009 20:08:42 -
ORA-01403: no data found in alert.log
Dear All,
I am getting ORA-01403: no data found in the alert.log. Could you please help me understand what the reasons behind it could be? Due to this I am getting loads of alerts. Please suggest.
Thanks
ORA-01403 No Data Found
Typically, an ORA-01403 error occurs when an apply process tries to update an existing row and the OLD_VALUES in the row LCR do not match the current values at the destination database.
Typically, one of the following conditions causes this error:
Supplemental logging is not specified for columns that require supplemental logging at the source database. In this case, LCRs from the source database might not contain values for key columns. You can use a DML handler to modify the LCR so that it contains the necessary supplemental data. See "Using a DML Handler to Correct Error Transactions". Also, specify the necessary supplemental logging at the source database to prevent future errors.
There is a problem with the primary key in the table for which an LCR is applying a change. In this case, make sure the primary key is enabled by querying the DBA_CONSTRAINTS data dictionary view. If no primary key exists for the table, or if the target table has a different primary key than the source table, then specify substitute key columns using the SET_KEY_COLUMNS procedure in the DBMS_APPLY_ADM package. You also might encounter error ORA-23416 if a table being applied does not have a primary key. After you make these changes, you can reexecute the error transaction.
The transaction being applied depends on another transaction which has not yet executed. For example, if a transaction tries to update an employee with an employee_id of 300, but the row for this employee has not yet been inserted into the employees table, then the update fails. In this case, execute the transaction on which the error transaction depends. Then, reexecute the error transaction.
There is a data mismatch between a row LCR and the table for which the LCR is applying a change. Make sure row data in the table at the destination database matches the row data in the LCR. When you are checking for differences in the data, if there are any DATE columns in the shared table, then make sure your query shows the hours, minutes, and seconds. If there is a mismatch, then you can use a DML handler to modify an LCR so that it matches the table. See "Using a DML Handler to Correct Error Transactions".
Alternatively, you can update the current values in the row so that the row LCR can be applied successfully. If changes to the row are captured by a capture process at the destination database, then you probably do not want to replicate this manual change to destination databases. In this case, complete the following steps:
Set a tag in the session that corrects the row. Make sure you set the tag to a value that prevents the manual change from being replicated. For example, the tag can prevent the change from being captured by a capture process.
EXEC DBMS_STREAMS.SET_TAG(tag => HEXTORAW('17'));
In some environments, you might need to set the tag to a different value.
Update the row in the table so that the data matches the old values in the LCR.
Reexecute the error or reexecute all errors. To reexecute an error, run the EXECUTE_ERROR procedure in the DBMS_APPLY_ADM package, and specify the transaction identifier for the transaction that caused the error. For example:
EXEC DBMS_APPLY_ADM.EXECUTE_ERROR(local_transaction_id => '5.4.312');
Or, execute all errors for the apply process by running the EXECUTE_ALL_ERRORS procedure:
EXEC DBMS_APPLY_ADM.EXECUTE_ALL_ERRORS(apply_name => 'APPLY');
If you are going to make other changes in the current session that you want to replicate to destination databases, then reset the tag for the session to an appropriate value, as in the following example:
EXEC DBMS_STREAMS.SET_TAG(tag => NULL);
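The manual-fix flow described in the steps above can be collected into a single sketch. The transaction ID and tag value are the illustrative ones from this thread, and the UPDATE is a placeholder for whatever change makes the destination row match the LCR's old values:

```sql
BEGIN
  -- 1. Tag this session so the manual fix is not re-captured/replicated.
  DBMS_STREAMS.SET_TAG(tag => HEXTORAW('17'));

  -- 2. Make the destination row match the old values in the row LCR.
  --    (Placeholder; use the real table, columns, and values.)
  -- UPDATE some_schema.some_table SET col = old_value WHERE pk = ...;
  COMMIT;

  -- 3. Reexecute the failed transaction.
  DBMS_APPLY_ADM.EXECUTE_ERROR(local_transaction_id => '5.4.312');

  -- 4. Reset the tag for any further work in this session.
  DBMS_STREAMS.SET_TAG(tag => NULL);
END;
/
```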
In some environments, you might need to set the tag to a value other than NULL. -
Errors appearing in the alert log file should trigger an email
Hi,
I have a requirement to implement; as I am a new DBA, I don't know how to do this.
sendmail is configured on my Unix machine, but I don't know how to email the errors that appear in the alert log file to the administrator.
Daily, it has to check for errors in the alert log file, and if any errors occur (ORA- errors or WARNING errors) it should send an email to the administrator.
Please help me with how to do it.
Hi,
There are many methods for interrogating the alert log and sending e-mail. Here are my notes:
http://www.dba-oracle.com/t_alert_log_monitoring_errors.htm
- PL/SQL or Java with e-mail alert: http://www.dba-village.com/village/dvp_papers.PaperDetails?PaperIdA=2383
- Shell script - Database independent and on same level as alert log file and e-mail.
- OEM - Too inflexible for complex alert log analysis rules
- SQL against the alert log - You can define the alert log file as an external table and detect messages with SQL and then e-mail.
Because the alert log is a server-side flat file and because e-mail is also at the OS level, I like to use a server-side shell script. It's also far more robust than OEM, especially when combining and evaluating multiple alert log events. Jon Emmons has great notes on this:
http://www.lifeaftercoffee.com/2007/12/04/when-to-use-shell-scripts/
If you are on Windows, see here:
http://www.dba-oracle.com/t_windows_alert_log_script.htm
For UNIX, Linux, Jon Emmons has a great alert log e-mail script.
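A minimal shape such a script can take (a hedged sketch, not Jon Emmons' script; the log path, state-file location, and mail command are assumptions to adapt for your site):

```shell
#!/bin/sh
# scan_new_errors LOG STATE prints any ORA-/WARNING lines added to LOG
# since the previous run; STATE stores the line offset between runs.
scan_new_errors() {
    log=$1
    state=$2
    last=$(cat "$state" 2>/dev/null || echo 0)
    total=$(wc -l < "$log" | tr -d ' ')
    if [ "$total" -lt "$last" ]; then
        last=0    # log was rotated or recreated: rescan from the top
    fi
    tail -n +"$((last + 1))" "$log" | grep -E 'ORA-|WARNING' || true
    echo "$total" > "$state"
}

# A daily cron job would mail any output, e.g.:
#   out=$(scan_new_errors /path/to/alert_SID.log /tmp/alert.offset)
#   [ -n "$out" ] && echo "$out" | mailx -s "alert log errors" dba@example.com
```

Keeping an offset file means each run only reports new lines, so the administrator is not re-mailed old errors every day.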
Hope this helps . . . .
Donald K. Burleson
Oracle Press author -
Hi,
For the last 2-3 days I have been seeing the following messages in my alert log:
Sun Mar 29 19:28:09 2009
Errors in file /db01/oracle/TAP/dump/udump/ora_13542.trc:
ORA-00600: internal error code, arguments: [13013], [5001], [1], [134636630], [35], [134636630], [], []
Dump file /db01/oracle/TAP/dump/udump/ora_13542.trc
Oracle7 Server Release 7.3.2.1.0 - Production Release
With the distributed and parallel query options
PL/SQL Release 2.3.2.0.0 - Production
ORACLE_HOME = /usr/oracle/server7.3
System name: AIX
Node name: rs1
Release: 2
Version: 4
Machine: 000001276600
Instance name: TAP
Redo thread mounted by this instance: 1
Oracle process number: 47
Unix process pid: 13542, image:
Sun Mar 29 19:28:08 2009
*** SESSION ID:(54.498) 2009.03.29.19.28.08.000
updexe: Table 0 Code 1 Cannot get stable set - last failed row is: 08066456.23
This happens at a time when I believe there is no user activity. There is a job which runs at midnight, but at the time the errors show up there isn't supposed to be any user activity.
I am not sure if Oracle is going to support this version even if I did log an SR.
In Metalink I came across this note: Doc ID 40673.1; could this be related?
Otherwise, what options do I have to get this resolved?
Thanks.
It is possible that you could also be facing an issue as described in ML note 28185.1.
The current option you have is to apply the 7.3.4.x patch and check whether this resolves the problem; otherwise, consider upgrading to a supported version.
Of course, if this error only occurs at off-peak times and does not affect any users, then you could probably ignore it, but it would be better to find a fix in the first place. -
I wanted to know: when are ORA- errors reported in the alert.log file?
Which types of errors are reported in it?
And if I wanted to log all ORA- errors in the alert log, is there a setting for that?
You can find the types of errors reported in the link above.
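For reference, Oracle can dump a trace file whenever a specific error is raised, via an event; a sketch (the error number and level here are illustrative):

```sql
-- Dump an errorstack trace whenever ORA-00942 is raised anywhere
-- in the instance:
ALTER SYSTEM SET EVENTS '942 trace name errorstack level 3';

-- Disable it again when done:
ALTER SYSTEM SET EVENTS '942 trace name errorstack off';
```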
Also, you may not want to log errors such as ORA-00942: table or view does not exist or ORA-00904: invalid identifier in the alert.log, right? You may want to research setting up events to generate trace files for the specific error numbers you are interested in. That might resolve your problem - hth. -
Recovery is repairing media corrupt block x of file x in standby alert log
Hi,
Oracle version: 8.1.7.0.0
OS version: Solaris 5.9
We have an Oracle 8i primary and standby database. I am getting an error in the alert log file:
Thu Aug 28 22:48:12 2008
Media Recovery Log /oratranslog/arch_1_1827391.arc
Thu Aug 28 22:50:42 2008
Media Recovery Log /oratranslog/arch_1_1827392.arc
bash-2.05$ tail -f alert_pindb.log
Recovery is repairing media corrupt block 991886 of file 179
Recovery is repairing media corrupt block 70257 of file 184
Recovery is repairing media corrupt block 70258 of file 184
Recovery is repairing media corrupt block 70259 of file 184
Recovery is repairing media corrupt block 70260 of file 184
Recovery is repairing media corrupt block 70261 of file 184
Thu Aug 28 22:48:12 2008
Media Recovery Log /oratranslog/arch_1_1827391.arc
Thu Aug 28 22:50:42 2008
Media Recovery Log /oratranslog/arch_1_1827392.arc
Recovery is repairing media corrupt block 500027 of file 181
Recovery is repairing media corrupt block 500028 of file 181
Recovery is repairing media corrupt block 500029 of file 181
Recovery is repairing media corrupt block 500030 of file 181
Recovery is repairing media corrupt block 500031 of file 181
Recovery is repairing media corrupt block 991837 of file 179
Recovery is repairing media corrupt block 991838 of file 179
How can I resolve this?
Thanks
Prakash
Edited by: user612485 on Aug 28, 2008 10:53 AM
Dear Satish Kandi,
Recently we created an index on one table with the NOLOGGING option; I think that is the reason I am getting that error.
If I run the dbv utility on the files shown in the alert log file, I get the following results:
bash-2.05$ dbv file=/oracle15/oradata/pindb/pinx055.dbf blocksize=4096
DBVERIFY: Release 8.1.7.0.0 - Production on Fri Aug 29 12:18:27 2008
(c) Copyright 2000 Oracle Corporation. All rights reserved.
DBVERIFY - Verification starting : FILE = /oracle15/oradata/pindb/pinx053.dbf
Block Checking: DBA = 751593895, Block Type =
Found block already marked corrupted
Block Checking: DBA = 751593896, Block Type =
.DBVERIFY - Verification complete
Total Pages Examined : 1048576
Total Pages Processed (Data) : 0
Total Pages Failing (Data) : 0
Total Pages Processed (Index): 1036952
Total Pages Failing (Index): 0
Total Pages Processed (Other): 7342
Total Pages Empty : 4282
Total Pages Marked Corrupt : 0
Total Pages Influx : 0
bash-2.05$ dbv file=/oracle15/oradata/pindb/pinx053.dbf blocksize=4096
DBVERIFY: Release 8.1.7.0.0 - Production on Fri Aug 29 12:23:12 2008
(c) Copyright 2000 Oracle Corporation. All rights reserved.
DBVERIFY - Verification starting : FILE = /oracle15/oradata/pindb/pinx054.dbf
Block Checking: DBA = 759492966, Block Type =
Found block already marked corrupted
Block Checking: DBA = 759492967, Block Type =
Found block already marked corrupted
Block Checking: DBA = 759492968, Block Type =
.DBVERIFY - Verification complete
Total Pages Examined : 1048576
Total Pages Processed (Data) : 0
Total Pages Failing (Data) : 0
Total Pages Processed (Index): 585068
Total Pages Failing (Index): 0
Total Pages Processed (Other): 8709
Total Pages Empty : 454799
Total Pages Marked Corrupt : 0
Total Pages Influx : 0
bash-2.05$ dbv file=/oracle15/oradata/pindb/pinx054.dbf blocksize=4096
DBVERIFY: Release 8.1.7.0.0 - Production on Fri Aug 29 12:32:28 2008
(c) Copyright 2000 Oracle Corporation. All rights reserved.
DBVERIFY - Verification starting : FILE = /oracle15/oradata/pindb/pinx055.dbf
Block Checking: DBA = 771822208, Block Type =
Found block already marked corrupted
Block Checking: DBA = 771822209, Block Type =
Found block already marked corrupted
Block Checking: DBA = 771822210, Block Type =
.DBVERIFY - Verification complete
Total Pages Examined : 1048576
Total Pages Processed (Data) : 0
Total Pages Failing (Data) : 0
Total Pages Processed (Index): 157125
Total Pages Failing (Index): 0
Total Pages Processed (Other): 4203
Total Pages Empty : 887248
Total Pages Marked Corrupt : 0
Total Pages Influx : 0
My doubts are:
1. If I drop the index and recreate it with the LOGGING option, will this error stop appearing in the alert log?
2. If I activate the standby database in the future, will it open without any errors?
Thanks
Prakash
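On the first doubt, rebuilding with LOGGING should stop new unrecoverable redo from reaching the standby. A minimal sketch, assuming hypothetical owner/index names (take the real ones from your own alert log or DBA_INDEXES):

```sql
-- Hypothetical names: substitute the real owner/index.
-- Rebuilding with LOGGING makes future index changes
-- fully recoverable on the standby.
ALTER INDEX pin.my_nologging_idx REBUILD LOGGING;

-- Confirm nothing else is still in NOLOGGING mode:
SELECT owner, index_name, logging
FROM   dba_indexes
WHERE  logging = 'NO';
```

Note, however, that blocks already written as unrecoverable by the earlier NOLOGGING build are not repaired by this; the affected standby datafiles generally have to be refreshed from a fresh backup of the primary before the standby can be activated cleanly.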
. -
10.2.0.4 Streams Alert log message
Good afternoon everyone,
I wanted to get some input on a message that is occasionally logged to the alert log of the owning queue-table (QT) instance in our RAC environment. In summary, we have two RAC environments and perform bi-directional Streams replication between them. The message in question is usually logged in the busier environment:
krvxenq: Failed to acquire logminer dictionary lock
During that time it seems that the LogMiner process is not mining for changes, which could lead to latency between the two environments.
I have looked at AWR reports for times during the occurrence of these errors, and it is common to see the following procedure running:
BEGIN dbms_capture_adm_internal.enforce_checkpoint_retention(:1, :2, :3); END;
This procedure, which I assume purges Capture checkpoints that exceed the retention duration, takes between 10 and 20 minutes to run. The table that stores the checkpoints is LOGMNR_RESTART_CKPT$, which is 12 GB in our environment; I suspect its size is the cause. A purge job needs to be scheduled to shrink the table.
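As a sketch of that idea (the capture name here is hypothetical; list yours from DBA_CAPTURE), checkpoint retention can be lowered so the internal purge has less history to manage each run:

```sql
-- Hypothetical capture name: list yours with
--   SELECT capture_name FROM dba_capture;
-- checkpoint_retention_time is in days (default 60 in 10.2);
-- lowering it shrinks what enforce_checkpoint_retention must scan.
BEGIN
  DBMS_CAPTURE_ADM.ALTER_CAPTURE(
    capture_name              => 'MY_CAPTURE',
    checkpoint_retention_time => 7);
END;
/
```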
If anyone has seen anything similar in his or her environment, please offer any additional knowledge you may have about the topic.
Thank you.

There are two possibilities: either there is too much load on the table LOGMNR_RESTART_CKPT$ due to heavy system load, or there is a badly performing Streams query.
At this stage I would not like to conclude "we can't cope with the load" without having investigated more common issues first.
Let's assume, optimistically, that one or more Streams internal SQL statements do not run as intended. This is our best bet.
The fact that LOGMNR_RESTART_CKPT$ is big makes it a perfect suspect.
If there is a problem with one of the queries involving this table, we can find it using sys.col_usage$: identify which columns are used, then jump to the SQL_IDs and check their plans:
set linesize 132 head on pagesize 33
col obj format a35 head "Table name"
col col1 format a26 head "Column"
col equijoin_preds format 9999999 head "equijoin|Preds" justify c
col nonequijoin_preds format 9999999 head "non|equijoin|Preds" justify c
col range_preds format 999999 head "Range|Pred" justify c
col equality_preds format 9999999 head "Equality|Preds" justify c
col like_preds format 999999 head "Like|Preds" justify c
col null_preds format 999999 head "Null|Preds" justify c
select r.name ||'.'|| o.name "obj" , c.name "col1",
equality_preds, equijoin_preds, nonequijoin_preds, range_preds,
like_preds, null_preds, to_char(timestamp,'DD-MM-YY HH24:MI:SS') "Date"
from sys.col_usage$ u, sys.obj$ o, sys.col$ c, sys.user$ r
where o.obj# = u.obj# -- $AND_OWNER
and o.name = 'LOGMNR_RESTART_CKPT$'
and c.obj# = u.obj#
and c.col# = u.intcol#
and o.owner# = r.user#
and (u.equijoin_preds > 0 or u.nonequijoin_preds > 0)
order by 4 desc;
For each column predicate, check for full table scans:
define COL_NAME='col to check'
col PLAN_HASH_VALUE for 999999999999 head 'Plan hash |value' justify c
col id for 999 head 'Id'
col child for 99 head 'Ch|ld'
col cost for 999999 head 'Oper|Cost'
col tot_cost for 999999 head 'Plan|cost' justify c
col est_car for 999999999 head 'Estimated| card' justify c
col cur_car for 999999999 head 'Avg seen| card' justify c
col ACC for A3 head 'Acc|ess'
col FIL for A3 head 'Fil|ter'
col OTHER for A3 head 'Oth|er'
col ope for a30 head 'Operation'
col exec for 999999 head 'Execution'
break on PLAN_HASH_VALUE on sql_id on child
select distinct
a.PLAN_HASH_VALUE, a.id , a.sql_id, a.CHILD_NUMBER child , a.cost, c.cost tot_cost,
a.cardinality est_car, b.output_rows/decode(b.EXECUTIONS,0,1,b.EXECUTIONS) cur_car,
b.EXECUTIONS exec,
case when length(a.ACCESS_PREDICATES) > 0 then ' Y' else ' N' end ACC,
case when length(a.FILTER_PREDICATES) > 0 then ' Y' else ' N' end FIL,
case when length(a.projection) > 0 then ' Y' else ' N' end OTHER,
a.operation||' '|| a.options ope
from
v$sql_plan a,
v$sql_plan_statistics_all b ,
v$sql_plan_statistics_all c
where
a.PLAN_HASH_VALUE = b.PLAN_HASH_VALUE
and a.sql_id = b.sql_id
and a.child_number = b.child_number
and a.id = b.id
and a.PLAN_HASH_VALUE= c.PLAN_HASH_VALUE (+)
and a.sql_id = c.sql_id
and a.child_number = c.child_number and c.id=0
and a.OBJECT_NAME = 'LOMNR_RESTART_CKPT$' -- $AND_A_OWNER
and (instr(a.FILTER_PREDICATES,'&&COL_NAME') > 0
or instr(a.ACCESS_PREDICATES,'&&COL_NAME') > 0
or instr(a.PROJECTION,'&&COL_NAME') > 0)
order by sql_id, PLAN_HASH_VALUE, id;
Now, for each query with a FULL table scan, check the predicate and see whether adding an index would help.

Another possibility:
One of the structures associated with Streams, which you don't necessarily see, may have remained too big.
Many of these structures are only accessed by Streams maintenance using full table scans.
They are supposed to remain small tables. But after a big Streams crash they inflate, and once the problem is solved, these supposed-to-be-small tables remain as the crash left them: too big.
As a consequence, the full table scans intended to run on small tables suddenly loop over stretches of empty blocks.
This is very frequent in big Streams environments. You can find these structures, if any exist, using the query below.
Note that a long run time for this query itself implies big structures. You will have to assess whether the number of rows is realistic for the number of blocks.
break on parent_table
col type format a30
col owner format a16
col index_name head 'Related object'
col parent_table format a30
prompt
prompt To shrink a lob associated to a queue type : alter table AQ$_<queue_table>_P modify lob(USER_DATA) ( shrink space ) cascade ;
prompt
select a.owner,a.table_name parent_table,index_name ,
decode(index_type,'LOB','LOB INDEX',index_type) type,
(select blocks from dba_segments where segment_name=index_name and owner=b.owner) blocks
from
dba_indexes a,
( select owner, queue_table table_name from dba_queue_tables
where recipients='SINGLE' and owner NOT IN ('SYSTEM') and (compatible LIKE '8.%' or compatible LIKE '10.%')
union
select owner, queue_table table_name from dba_queue_tables
where recipients='MULTIPLE' and (compatible LIKE '8.1%' or compatible LIKE '10.%')
) b
where a.owner=b.owner
and a.table_name = b.table_name
and a.owner not like 'SYS%' and a.owner not like 'WMSYS%'
union
-- LOB Segment for QT
select a.owner,a.segment_name parent_table,l.segment_name index_name, 'LOB SEG('||l.column_name||')' type,
(select sum(blocks) from dba_segments where segment_name = l.segment_name ) blob_blocks
from dba_segments a,
dba_lobs l,
( select owner, queue_table table_name from dba_queue_tables
where recipients='SINGLE' and owner NOT IN ('SYSTEM') and (compatible LIKE '8.%' or compatible LIKE '10.%')
union
select owner, queue_table table_name from dba_queue_tables
where recipients='MULTIPLE' and (compatible LIKE '8.1%' or compatible LIKE '10.%')
) b
where a.owner=b.owner and
a.SEGMENT_name = b.table_name and
l.table_name = a.segment_name and
a.owner not like 'SYS%' and a.owner not like 'WMSYS%'
union
-- LOB Segment of QT.._P
select a.owner,a.segment_name parent_table,l.segment_name index_name, 'LOB SEG('||l.column_name||')',
(select sum(blocks) from dba_segments where segment_name = l.segment_name ) blob_blocks
from dba_segments a,
dba_lobs l,
( select owner, queue_table table_name from dba_queue_tables
where recipients='SINGLE' and owner NOT IN ('SYSTEM') and (compatible LIKE '8.%' or compatible LIKE '10.%')
union
select owner, queue_table table_name from dba_queue_tables
where recipients='MULTIPLE' and (compatible LIKE '8.1%' or compatible LIKE '10.%')
) b
where a.owner=b.owner and
a.SEGMENT_name = 'AQ$_'||b.table_name||'_P' and
l.table_name = a.segment_name and
a.owner not like 'SYS%' and a.owner not like 'WMSYS%'
union
-- Related QT
select a2.owner, a2.table_name parent_table, '-' index_name , decode(nvl(a2.initial_extent,-1), -1, 'IOT TABLE','NORMAL') type,
case
when decode(nvl(a2.initial_extent,-1), -1, 'IOT TABLE','NORMAL') = 'IOT TABLE'
then ( select sum(leaf_blocks) from dba_indexes where table_name=a2.table_name and owner=a2.owner)
when decode(nvl(a2.initial_extent,-1), -1, 'IOT TABLE','NORMAL') = 'NORMAL'
then (select blocks from dba_segments where segment_name=a2.table_name and owner=a2.owner)
end blocks
from dba_tables a2,
( select owner, queue_table table_name from dba_queue_tables
where recipients='SINGLE' and owner NOT IN ('SYSTEM') and (compatible LIKE '8.%' or compatible LIKE '10.%')
union all
select owner, queue_table table_name from dba_queue_tables
where recipients='MULTIPLE' and (compatible LIKE '8.1%' or compatible LIKE '10.%' )
) b2
where
a2.table_name in ( 'AQ$_'||b2.table_name ||'_T' , 'AQ$_'||b2.table_name ||'_S', 'AQ$_'||b2.table_name ||'_H' , 'AQ$_'||b2.table_name ||'_G' ,
'AQ$_'|| b2.table_name ||'_I' , 'AQ$_'||b2.table_name ||'_C', 'AQ$_'||b2.table_name ||'_D', 'AQ$_'||b2.table_name ||'_P')
and a2.owner not like 'SYS%' and a2.owner not like 'WMSYS%'
union
-- IOT Table normal
select
u.name owner , o.name parent_table, c.table_name index_name, 'RELATED IOT' type,
(select blocks from dba_segments where segment_name=c.table_name and owner=c.owner) blocks
from sys.obj$ o,
user$ u,
(select table_name, to_number(substr(table_name,14)) as object_id , owner
from dba_tables where table_name like 'SYS_IOT_OVER_%' and owner not like '%SYS') c
where
o.obj#=c.object_id
and o.owner#=u.user#
and obj# in (
select to_number(substr(table_name,14)) as object_id from dba_tables where table_name like 'SYS_IOT_OVER_%' and owner not like '%SYS')
order by parent_table , index_name desc;
I hope it is one of the above cases; otherwise things may become more complicated. -
Thread 1 cannot allocate new log -- from alert log
In my alert log file, I have occasionally noticed this message showing up during business hours (8 am ~ 8 pm). Not often, maybe 3 or 4 times, but there are more around midnight when the system runs administration processes, such as this one (KGL object name):
SELECT /*+rule*/ SYS_XMLGEN(VALUE(KU$), XMLFORMAT.createFormat2('VIEW_T', '7')), KU$.OBJ_NUM, KU$.SCHEMA_OBJ.NAME, KU$.SCHEMA_OBJ.NAME, 'VIEW', KU$.SCHEMA_OBJ.OWNER_NAME FROM SYS.KU$_VIEW_VIEW KU$ WHERE KU$.OBJ_NUM IN (SELECT * FROM TABLE(DBMS_METADATA.FETCH_OBJNUMS(200001)))
My question is: do I have to add another redo log file to the system, and how much is proper?
I also noticed messages of "Memory Notification: Library Cache Object loaded into SGA
Heap size 2829K exceeds notification threshold (2048K)" during those hours. I appreciate any advice. My system is 10g R2 on AIX 5.3L.

The "Thread cannot allocate new log" message occurs more often outside regular hours due to system-run processes. It seems to recur regularly, about every 20 minutes. Through OEM, I noticed that some system processes run at 2 am, 8 am, 2 pm, and 8 pm, for example:
1) EMD_MAINTENANCE.EXECUTE_EM_DBMS_JOB_PROCS();
2) MGMT_ADMIN_DATA.EVALUATE_MGMT_METRICS;
3) SELECT /*+ USE_HASH(f a) */ SUM(NVL(F.BYTES - NVL(A.BYTES, 0), 0)/1024/1024), SUM(NVL(F.BYTES, 0)/1024/1024) FROM (SELECT DISTINCT TABLESPACE_NAME FROM USER_TABLES) U, (SELECT TABLESPACE_NAME, SUM(BYTES) BYTES FROM DBA_DATA_FILES GROUP BY TABLESPACE_NAME) F, (SELECT TABLESPACE_NAME, SUM(BYTES) BYTES FROM DBA_FREE_SPACE GROUP BY TABLESPACE_NAME) A WHERE A.TABLESPACE_NAME = U.TABLESPACE_NAME AND F.T...
I wonder whether they are necessary, or whether I have to add one more redo log. -
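Before adding a group, it can help to measure how often switches actually happen; a sketch (the group number and size below are illustrative only):

```sql
-- How many log switches per hour? A sustained rate well above a
-- handful per hour suggests the online redo logs are too small.
SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour,
       COUNT(*)                               AS switches
FROM   v$log_history
GROUP  BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
ORDER  BY 1;

-- If so, add larger groups rather than merely more of them:
ALTER DATABASE ADD LOGFILE GROUP 5 SIZE 200M;
```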
Logical data corrupton in alert log
Thanks for taking my question! I am hoping someone can help me because I am really in a bind. I had some issues restoring my database the other day. I thought I had recovered everything, but now I am seeing more errors in the alert log and I have no idea what to do. Any help would be greatly appreciated!
I listed the alert log errors at the end. I am on 11g, Windows 2008.
My first mistake was stopping archive logging while a large job ran, to save some time.
After the job completed I restarted archive logging, then backed up the database and took an SCN.
I then recovered back to the SCN above and it appeared to go OK, except I then noticed I had two rows in v$database_block_corruption.
I then decided to recover back several days and restore the PRD schema via an export taken before the first restore.
I changed the incarnation back by one, recovered, and imported the schema back.
I checked v$database_block_corruption and it was empty. I was thinking I am good.
I restarted several batch jobs, which finished OK.
I backed up and it was good.
I now have the logical errors below in the alert log. They appeared between the nightly RMAN backup and the export.
Help! What is causing this and how do I fix it?
Kathie
export taken at 9:30
Wed Jul 22 22:00:03 2009
Error backing up file 2, block 47924: logical corruption
Error backing up file 2, block 47925: logical corruption
Error backing up file 2, block 47926: logical corruption
Error backing up file 2, block 47927: logical corruption
Error backing up file 2, block 71194: logical corruption
Error backing up file 2, block 78234: logical corruption
Error backing up file 2, block 78236: logical corruption
Error backing up file 2, block 78237: logical corruption
Error backing up file 2, block 78238: logical corruption
Error backing up file 2, block 78239: logical corruption
Error backing up file 2, block 78353: logical corruption
Error backing up file 2, block 78473: logical corruption
Error backing up file 2, block 79376: logical corruption
Error backing up file 2, block 79377: logical corruption
Error backing up file 2, block 79378: logical corruption
Error backing up file 2, block 81282: logical corruption
Error backing up file 2, block 81297: logical corruption
Error backing up file 2, block 81305: logical corruption
Error backing up file 2, block 81309: logical corruption
Error backing up file 2, block 81313: logical corruption
Error backing up file 2, block 81341: logical corruption
Error backing up file 2, block 81370: logical corruption
Error backing up file 2, block 81396: logical corruption
Error backing up file 2, block 82115: logical corruption
Error backing up file 2, block 82116: logical corruption
Error backing up file 2, block 82117: logical corruption
Error backing up file 2, block 82118: logical corruption
Error backing up file 2, block 82119: logical corruption
Error backing up file 2, block 85892: logical corruption
Error backing up file 2, block 85897: logical corruption
Error backing up file 2, block 85900: logical corruption
Error backing up file 2, block 85901: logical corruption
Error backing up file 2, block 85904: logical corruption
Error backing up file 2, block 85905: logical corruption
Error backing up file 2, block 85906: logical corruption
Error backing up file 2, block 85909: logical corruption
Error backing up file 2, block 85910: logical corruption
Error backing up file 2, block 85913: logical corruption
Error backing up file 2, block 85917: logical corruption
Error backing up file 2, block 85918: logical corruption
Error backing up file 2, block 85925: logical corruption
Error backing up file 2, block 85937: logical corruption
Error backing up file 2, block 85943: logical corruption
Error backing up file 2, block 85944: logical corruption
Error backing up file 2, block 85947: logical corruption
Error backing up file 2, block 85949: logical corruption
Error backing up file 2, block 85951: logical corruption
Error backing up file 2, block 85953: logical corruption
Error backing up file 2, block 85956: logical corruption
Error backing up file 2, block 85958: logical corruption
Error backing up file 2, block 85965: logical corruption
Error backing up file 2, block 85976: logical corruption
Error backing up file 2, block 85977: logical corruption
Error backing up file 2, block 85980: logical corruption
Error backing up file 2, block 85981: logical corruption
Error backing up file 2, block 85988: logical corruption
Error backing up file 2, block 85989: logical corruption
Error backing up file 2, block 85995: logical corruption
Error backing up file 2, block 86001: logical corruption
Error backing up file 2, block 86003: logical corruption
Error backing up file 2, block 86005: logical corruption
Error backing up file 2, block 86012: logical corruption
Error backing up file 2, block 86013: logical corruption
Error backing up file 2, block 86015: logical corruption
Error backing up file 2, block 93961: logical corruption
Error backing up file 2, block 93965: logical corruption
Error backing up file 2, block 93968: logical corruption
Error backing up file 2, block 93971: logical corruption
Error backing up file 2, block 93975: logical corruption
Error backing up file 2, block 93979: logical corruption
Error backing up file 2, block 93983: logical corruption
Error backing up file 2, block 93984: logical corruption
Error backing up file 2, block 93987: logical corruption
Error backing up file 2, block 93988: logical corruption
Error backing up file 2, block 93992: logical corruption
Error backing up file 2, block 93996: logical corruption
Error backing up file 2, block 94008: logical corruption
Error backing up file 2, block 94022: logical corruption
Error backing up file 2, block 94026: logical corruption
Error backing up file 2, block 94027: logical corruption
Error backing up file 2, block 94030: logical corruption
Error backing up file 2, block 94031: logical corruption
Error backing up file 2, block 94034: logical corruption
Error backing up file 2, block 94038: logical corruption
Error backing up file 2, block 94041: logical corruption
Error backing up file 2, block 94042: logical corruption
Error backing up file 2, block 94047: logical corruption
Error backing up file 2, block 94074: logical corruption
Error backing up file 2, block 94077: logical corruption
Error backing up file 2, block 118881: logical corruption
Error backing up file 2, block 118882: logical corruption
Error backing up file 2, block 118883: logical corruption
Error backing up file 2, block 118884: logical corruption
Wed Jul 22 22:00:27 2009
Thread 1 advanced to log sequence 279 (LGWR switch)
Current log# 3 seq# 279 mem# 0: F:\ORACLE\11.1.0\ORADATA\CS90QAP\REDO03.LOG
Current log# 3 seq# 279 mem# 1: E:\ORACLE\11.1.0\ORADATA\CS90QAP\REDO03A.LOG
Wed Jul 22 22:04:29 2009
Thread 1 advanced to log sequence 280 (LGWR switch)
Current log# 4 seq# 280 mem# 0: F:\ORACLE\11.1.0\ORADATA\CS90QAP\REDO04.LOG
Current log# 4 seq# 280 mem# 1: E:\ORACLE\11.1.0\ORADATA\CS90QAP\REDO04A.LOG
Wed Jul 22 22:37:15 2009
ALTER SYSTEM ARCHIVE LOG -- this is the nightly backup.
Edited by: user579885 on Jul 23, 2009 7:36 AM

If the index is part of EM, why wait? How many EM users can you have? It should just be the DBAs.
Because the object is a PK, there could be FKs that reference it. If there are FKs to the PK, I would try the ALTER INDEX REBUILD to see if that 1) works and 2) fixes the issue, before resorting to drop/create, since the rebuild can be done without having to disable and re-enable the FKs.
Also note that under certain conditions, such as when the index status is INVALID, ALTER INDEX REBUILD will read the table rather than just the index for its data.
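A minimal sketch of the rebuild-first approach described above (both the index name and the constraint name are hypothetical):

```sql
-- Rebuild in place; the FKs referencing the PK stay enabled.
ALTER INDEX sysman.my_pk_index REBUILD;

-- List the foreign keys that reference the PK constraint, in case
-- a drop/create becomes necessary after all:
SELECT owner, table_name, constraint_name
FROM   dba_constraints
WHERE  constraint_type = 'R'
AND    r_constraint_name = 'MY_PK_CONSTRAINT';
```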
HTH -- Mark D Powell -- -
When did the database start up and open? Please, not the alert log
Hello all
Is there a script or dynamic view that shows what time the database started up and shut down?
Please don't answer with "open the alert log"; I am looking for a view or script only.
thank you
regards

You wrote that you don't want to use the alert.log, but it could still provide all the information you want to have.
If you don't want to use the alert.log because you need to query the information using SQL, you may consider accessing the alert.log via an external table.

Even the alert log may not meet the requirement of "all startup and shutdown history". In a typical shop, the alert log is periodically archived or wiped.
The answer to the OP's question probably lies in creating a new table to store the v$instance information, and creating a database startup (and possibly a database shutdown) trigger to fill that table.
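A minimal sketch of that trigger approach (table and trigger names are illustrative; the trigger owner needs a direct grant on V_$INSTANCE for the trigger to compile):

```sql
-- Illustrative names; grant needed first:
--   GRANT SELECT ON v_$instance TO <owner>;
CREATE TABLE startup_history (
  instance_name VARCHAR2(16),
  host_name     VARCHAR2(64),
  startup_time  DATE,
  logged_at     DATE DEFAULT SYSDATE
);

CREATE OR REPLACE TRIGGER trg_log_startup
AFTER STARTUP ON DATABASE
BEGIN
  INSERT INTO startup_history (instance_name, host_name, startup_time)
  SELECT instance_name, host_name, startup_time
  FROM   v$instance;
END;
/
```

A shutdown trigger (BEFORE SHUTDOWN ON DATABASE) can fill a similar table, with the caveat that it does not fire on an abort.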
Or start using Grid Control, which has that information tucked away in one of the internal SYSMAN tables. -
Corrupt block relative dba: in alert log
hi,
In our recovery database,I've found following error in alert log.
Thu Mar 27 07:05:57 2008
Hex dump of (file 3, block 30826) in trace file /u01/app/oracle/admin/catdb/bdump/catdb_m000_21795.trc
Corrupt block relative dba: 0x00c0786a (file 3, block 30826)
Bad check value found during buffer read
Data in bad block:
type: 6 format: 2 rdba: 0x00c0786a
last change scn: 0x0000.0013ed4d seq: 0x1 flg: 0x06
spare1: 0x0 spare2: 0x0 spare3: 0x0
consistency value in tail: 0xed4d0601
check value in block header: 0x937c
computed block checksum: 0x8000
Reread of rdba: 0x00c0786a (file 3, block 30826) found same corrupted data
Thu Mar 27 07:05:59 2008
Corrupt Block Found
TSN = 2, TSNAME = SYSAUX
RFN = 3, BLK = 30826, RDBA = 12613738
OBJN = 8964, OBJD = 8964, OBJECT = WRH$_ENQUEUE_STAT, SUBOBJECT =
SEGMENT OWNER = SYS, SEGMENT TYPE = Table Segment
Now, how do I solve this error?
Thanks in advance.
Leo

Refer to Metalink Note 77587.1. You may need to do further sanity block checking by placing the following parameters in the database instance init.ora parameter file:
db_block_checking=true
db_block_checksum=true
_db_block_cache_protect=true
Consult Oracle support before changing hidden parameters. -
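Independent of the parameters, it is worth confirming from the dictionary which segment owns the block (the file and block numbers come from the alert log entry above):

```sql
-- Map file 3, block 30826 back to its owning segment.
SELECT owner, segment_name, segment_type
FROM   dba_extents
WHERE  file_id = 3
AND    30826 BETWEEN block_id AND block_id + blocks - 1;
```

Since the reported object here is AWR history (WRH$_ENQUEUE_STAT), the data is usually expendable, which widens the repair options Oracle Support can offer.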
Hi all,
DB Version: 10.2.0
OS Version: Windows 2003 Server
When I check the alert log file, I get the following error:
ORA-00942: table or view does not exist
I don't know exactly for which table I am getting this error.

Vikas Kohli wrote:
Hi all,
DB Version: 10.2.0
OS Version: Windows 2003 Server
When I check the alert log file, I get the following error:
ORA-00942: table or view does not exist
I don't know exactly for which table I am getting this error.

Not sure why you would be getting this error in your alert log.
You could try creating an "after servererror on database" trigger. The AskTom thread below is for ORA-01555; you would need to change it to ORA-00942 for your situation.
http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:40115659055475
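A sketch of the suggested trigger, adapted for ORA-00942 (the log table name is illustrative; ora_sql_txt returns the failing statement in pieces):

```sql
-- Illustrative log table for the failing statements.
CREATE TABLE err_942_log (
  logged_at DATE,
  db_user   VARCHAR2(30),
  sql_text  VARCHAR2(4000)
);

CREATE OR REPLACE TRIGGER trg_capture_942
AFTER SERVERERROR ON DATABASE
DECLARE
  v_pieces ora_name_list_t;
  v_n      BINARY_INTEGER;
  v_sql    VARCHAR2(4000);
BEGIN
  IF IS_SERVERERROR(942) THEN
    -- Reassemble the statement that raised ORA-00942.
    v_n := ora_sql_txt(v_pieces);
    FOR i IN 1 .. NVL(v_n, 0) LOOP
      v_sql := SUBSTR(v_sql || v_pieces(i), 1, 4000);
    END LOOP;
    INSERT INTO err_942_log VALUES (SYSDATE, ora_login_user, v_sql);
  END IF;
END;
/
```

Querying err_942_log afterwards shows which user and statement are hitting the missing table.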
Hi,
We are using Oracle Enterprise Edition 11.2.0.3, and the DBA has been reporting memory warnings in the alert log on a nightly basis over the last week.
There is no difference in the volume of data being processed or in the programs, so we are not sure why this is the case; apologies, I don't have the details of the error to hand.
We run many processes in parallel. The database is set for auto-parallelism (PARALLEL_DEGREE_POLICY=AUTO), and a third-party reporting tool relies on this for the speed of reports against partitioned tables.
Our data loads complete fine despite these messages.
The DBA has asked whether we can reduce what is being done during these times, or reduce the number of processes.
Are there any database parameters which can be set to manage memory such that the database doesn't exhaust available memory in this way?
We want the database to manage the workload automatically (we would hope this is possible via parameter tuning, in particular of the parameters related to parallel query).
Thanks

user5716448 wrote:
Hi,
Apologies for the delay.
Here is the message from alert log
ORA-04031: unable to allocate 512024 bytes of shared memory ("large pool","unknown object","large pool","PX msg pool")
And this is repeated many times in the log.
On IBM Power Series 7 hardware AIX operating system.
Oracle version 11.2.0.3
Any ideas to allow database to manage memory more effectively when many paralllel queries runing - tuning of parallel-related parameters - which ones?
Thanks

That is the result when the reasoning is as follows:
"Parallel must be faster than no parallel."
"A higher degree of parallelism must be faster than a lower value."
If/when you reduce parallel activities, this error will no longer occur.
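Since the failure is in the large pool's "PX msg pool", a sketch of how to see the current consumers and cap parallel server demand (the value 64 is illustrative, not a recommendation):

```sql
-- Where is the large pool going? 'PX msg pool' is the parallel
-- execution message buffer area named in the ORA-04031.
SELECT pool, name, ROUND(bytes / 1024 / 1024) AS mb
FROM   v$sgastat
WHERE  pool = 'large pool'
ORDER  BY bytes DESC;

-- Capping the PX server count bounds PX msg pool demand:
ALTER SYSTEM SET parallel_max_servers = 64;
```

Alternatively, a larger LARGE_POOL_SIZE (or more headroom under SGA_TARGET) attacks the same error from the supply side.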