ORA-600 in standby alert log
We recently migrated our database from Solaris to Linux (RHEL 5), and since the migration we have been seeing odd errors related to archive log shipping to the remote standby site, with a corresponding ORA-600 at the standby.
What's interesting is that everything resolves itself. It always seems to happen under heavy database load, when log switches are frequent (roughly every 3 minutes). First the archiver fails to ship to the remote site, with the following error in the primary alert log.
Errors in file /app/oracle/admin/UIIP01/bdump/uiip01_arc1_9772.trc:
ORA-00272: error writing archive log
Mon Jul 14 10:57:36 2008
FAL[server, ARC1]: FAL archive failed, see trace file.
Mon Jul 14 10:57:36 2008
Errors in file /app/oracle/admin/UIIP01/bdump/uiip01_arc1_9772.trc:
ORA-16055: FAL request rejected
ARCH: FAL archive failed. Archiver continuing
Mon Jul 14 10:57:36 2008
ORACLE Instance UIIP01 - Archival Error. Archiver continuing.
And then we see an ORA-600 on the standby database related to this, which complains about redo block corruption.
Mon Jul 14 09:57:32 2008
Errors in file /app/oracle/admin/UIIP01/udump/uiip01_rfs_12775.trc:
ORA-00600: internal error code, arguments: [kcrrrfswda.11], [4], [368], [], [], [], [], []
Mon Jul 14 09:57:36 2008
And the trace file has this wonderful block corruption error:
Corrupt redo block 424432 detected: bad checksum
Flag: 0x1 Format: 0x22 Block: 0x000679f0 Seq: 0x000006ef Beg: 0x150 Cks:0xa2e5
----- Dump of Corrupt Redo Buffer -----
*** 2008-07-14 09:57:32.550
ksedmp: internal or fatal error
ORA-00600: internal error code, arguments: [kcrrrfswda.11], [4], [368], [], [], [], [], []
So ARC resends the redo log and succeeds. At the end of the day, all we have is a bunch of these ORA- errors in our alert logs, which trip our monitors and then resolve themselves without any manual intervention. I opened a TAR with Oracle Support; since this is not affecting our primary database, they are in no hurry to prioritize it, and they are reluctant to accept that it's a bug that resolves itself.
Just wanted to get it out here to see if anyone experienced a similar problem, let me know if you need any more details.
As I said earlier, this behaviour happens only during peak loads, especially when full 500M redo logs are being archived.
Thanks in Advance.
Thanks, Madrid!
I scoured through these Metalink notes before posting, looking for possible solutions, and almost all of them were closed citing a customer problem related to the OS, firewall, network, etc., or were closed because the requested data was not provided.
It looks as if none of them were ever closed with a real resolution.
I just want to reassure myself that the redo corruption the standby is reporting will not haunt me later when I perform a recovery, or even a crash recovery, using these redo logs.
I have multiplexed my logs just in case, and have all the block-checking parameters enabled on both the primary and standby databases.
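For anyone comparing setups, the block-checking parameters referred to here are typically enabled like this (a sketch only; the values are illustrative, and the CPU overhead of FULL checking should be verified in your own environment):

```sql
-- On both primary and standby (standard 10g parameter names):
ALTER SYSTEM SET db_block_checksum = TRUE SCOPE=BOTH;  -- checksum blocks on write
ALTER SYSTEM SET db_block_checking = FULL SCOPE=BOTH;  -- logical consistency checks

-- Confirm the current values (SQL*Plus):
SHOW PARAMETER db_block_check
```

Checksums let the standby detect damaged redo/data blocks early, which is exactly what the RFS process is doing in the trace above.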
Thanks,
Ramki
Similar Messages
-
Hi,
For the last 2-3 days I have been seeing the following messages in my alert log:
Sun Mar 29 19:28:09 2009
Errors in file /db01/oracle/TAP/dump/udump/ora_13542.trc:
ORA-00600: internal error code, arguments: [13013], [5001], [1], [134636630], [35], [134636630], [], []
Dump file /db01/oracle/TAP/dump/udump/ora_13542.trc
Oracle7 Server Release 7.3.2.1.0 - Production Release
With the distributed and parallel query options
PL/SQL Release 2.3.2.0.0 - Production
ORACLE_HOME = /usr/oracle/server7.3
System name: AIX
Node name: rs1
Release: 2
Version: 4
Machine: 000001276600
Instance name: TAP
Redo thread mounted by this instance: 1
Oracle process number: 47
Unix process pid: 13542, image:
Sun Mar 29 19:28:08 2009
*** SESSION ID:(54.498) 2009.03.29.19.28.08.000
updexe: Table 0 Code 1 Cannot get stable set - last failed row is: 08066456.23
This happens at a time when I believe there is no user activity. There is a job which runs at midnight, but at the time the errors show up there isn't supposed to be any user activity.
I am not sure if Oracle is going to support this version even if I did log an SR.
In Metalink I came across this note: Doc ID 40673.1; could this be related?
Otherwise, what options do I have for getting this resolved?
Thanks.

It is possible that you are also facing the issue described in ML note 28185.1.
The current option you have is to apply the 7.3.4.x patch set and check whether this resolves the problem; otherwise, consider upgrading to a supported version.
Of course, if this error only occurs at off-peak times and does not affect any users, then you could probably ignore it, but it would be better to find a fix in the first place. -
Error in Standby alert log continuously
After I set up the physical standby and logical standby databases, I found lots of errors in the alert log of the standby database.
LOGMINER: Error 308 encountered, failed to read missing logfile /u01/oradata/arch2/primary/1_1_111111111.dbf
According to the doc, I set the values LOG_ARCHIVE_DEST_3 in case of role transition:
(for primary)
LOCATION=/u01/oradata/arch2/primary/ VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=primary
(for standby)
LOCATION=/u01/oradata/arch2/standby/ VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=standby
When I run this command, the error keeps showing in the alert log continuously! Why does it look for the path "primary", and not "standby"?
ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
Help! Does anyone know the problem? All new data can't be found in the logical standby now.
Thank you!

When I enable LOG_ARCHIVE_DEST_STATE_3, the errors show in the alert log again!
LOGMINER: Error 308 encountered, failed to read missing logfile /u01/oradata/arch2/primary/1_1_111111111.dbf
SELECT TYPE,STATUS FROM V$LOGSTDBY;
COORDINATOR
ORA-01291: missing logfile
here are my parameters:
Primary:
LOG_ARCHIVE_DEST_1='LOCATION=/u01/oradata/arch/ MANDATORY VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=primary'
LOG_ARCHIVE_DEST_2='SERVICE=ORCL_SB LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=standby'
LOG_ARCHIVE_DEST_3='LOCATION=/u01/oradata/arch2/primary/ VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=primary'
Standby:
LOG_ARCHIVE_DEST_1='LOCATION=/u01/oradata/arch/ MANDATORY VALID_FOR=(ONLINE_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=standby'
LOG_ARCHIVE_DEST_2='SERVICE=ORCL_PRI LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=primary'
LOG_ARCHIVE_DEST_3='LOCATION=/u01/oradata/arch2/standby/ VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=standby'
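One way to see how each of the destinations above actually resolved for the current role is to query V$ARCHIVE_DEST on each site (a sketch using the standard 10g view; column names are documented, but adapt the filter to taste):

```sql
-- Run on both primary and standby. VALID_NOW / VALID_ROLE show whether a
-- destination is active for the database's current role; ERROR shows why
-- it is being skipped or rejected.
SELECT dest_id, status, valid_now, valid_type, valid_role, destination, error
FROM   v$archive_dest
WHERE  status <> 'INACTIVE';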
Why does the path from the primary parameter file show up in the standby alert log? -
Dear All,
We have a 10gR2 RAC with Physical Data Guard environment using ASM, and both sites have the same disk group names. Let's say the primary database name is prim and the standby database name is stdby. We are getting the following errors in the standby's alert log file:
Clearing online redo logfile 9 +DG_DATAFILES_AND_FB/prim/onlinelog/group9_2a.rdo
Clearing online log 9 of thread 2 sequence number 0
Errors in file c:\oracle\product\10.2.0\admin\stdby\bdump\stdby1_mrp0_4288.trc:
ORA-00313: Message 313 not found; No message file for product=RDBMS, facility=ORA; arguments: [9] [2]
ORA-00312: Message 312 not found; No message file for product=RDBMS, facility=ORA; arguments: [9] [2] [+DG_DATAFILES_AND_FB/prim/onlinelog/group9_2a.rdo]
ORA-17503: Message 17503 not found; No message file for product=RDBMS, facility=ORA; arguments: [2] [+DG_DATAFILES_AND_FB/prim/onlinelog/group9_2a.rdo]
ORA-15173: entry 'prim' does not exist in directory '/'
Errors in file c:\oracle\product\10.2.0\admin\stdby\bdump\stdby1_mrp0_4288.trc:
ORA-00344: Message 344 not found; No message file for product=RDBMS, facility=ORA; arguments: [+DG_DATAFILES_AND_FB/prim/onlinelog/group9_2a.rdo]
ORA-17502: Message 17502 not found; No message file for product=RDBMS, facility=ORA; arguments: [4] [+DG_DATAFILES_AND_FB/prim/onlinelog/group9_2a.rdo]
ORA-15173: entry 'prim' does not exist in directory '/'
The errors show that the standby is trying to find files in the directory +DG_DATAFILES_AND_FB/prim/onlinelog, which apparently does not exist on the standby. Below is the result of a query for the redo logs on the standby:
SQL> SELECT group#, status, member FROM v$logfile where member like '%prim/%'
GROUP# STATUS MEMBER
9 +DG_DATAFILES_AND_FB/prim/onlinelog/group9_2a.rdo
1 +DG_DATAFILES_AND_FB/prim/standbylogs/sredo1.rdo
10 +DG_DATAFILES_AND_FB/prim/onlinelog/group10_1a.rdo
2 +DG_DATAFILES_AND_FB/prim/standbylogs/sredo2.rdo
3 +DG_DATAFILES_AND_FB/prim/standbylogs/sredo3.rdo
4 +DG_DATAFILES_AND_FB/prim/standbylogs/sredo4.rdo
11 +DG_DATAFILES_AND_FB/prim/onlinelog/group11_1a.rdo
12 +DG_DATAFILES_AND_FB/prim/onlinelog/group12_2a.rdo
8 rows selected.

How can we get rid of this error?
Best regards,

Generally, when we set up a standby, are these directories (I mean '+DG_DATAFILES_AND_FB/prim' and '+DG_DATAFILES_AND_FB/stdby') created automatically on the standby? My understanding is that by default only '+DG_DATAFILES_AND_FB/stdby' is created.
What if I want to put all the logs (those in stdby and prim) in +DG_DATAFILES_AND_FB/stdby?

What is the value of DB_CREATE_FILE_DEST, and have you also set a DB_CREATE_ONLINE_LOG_DEST_<n> value?
Also, I don't know whether it is relevant, but we performed a roll-forward of the standby using Metalink Doc ID 836986.1 (Steps to perform for Rolling forward a standby database using RMAN incremental backup when primary and standby are in ASM filesystem). I am not sure whether the errors started after that.
But in the beginning there were definitely no such errors. Just trying to provide as much information as I can.

Even though you are using the same disk groups, the subdirectory names such as "prim" and "stdby" are different.
After you changed the values of DB_FILE_NAME_CONVERT/LOG_FILE_NAME_CONVERT, did you bounce the database? They are static parameters.
Bounce it and then start MRP; some errors are expected initially, as also happens during an RMAN duplicate.
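A minimal sketch of that sequence (the disk-group paths are illustrative placeholders; substitute your own):

```sql
-- DB_FILE_NAME_CONVERT / LOG_FILE_NAME_CONVERT are static parameters:
-- set them in the spfile, then bounce the standby before restarting MRP.
ALTER SYSTEM SET db_file_name_convert  = '+DG_DATAFILES_AND_FB/prim/', '+DG_DATAFILES_AND_FB/stdby/' SCOPE=SPFILE;
ALTER SYSTEM SET log_file_name_convert = '+DG_DATAFILES_AND_FB/prim/', '+DG_DATAFILES_AND_FB/stdby/' SCOPE=SPFILE;

SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
```

The conversion only takes effect after the restart, which is why the old prim paths keep appearing until the bounce.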
If a logfile member shows in the database but not on the physical disk, they don't match.
If you haven't used RMAN duplicate, then drop and recreate the redo logs; this can be done at any time. -
Hi DBAs,
I saw the error ORA-16009 in the alert log file. The database version is 10.2.0.3, configured with a physical standby on AIX 6.1. The error has been appearing in the alert log for a long time; I believe since a switchover to the standby and the switch back to the primary.
But I can see the logs from primary to standby are being applied, and MRP is working fine on the standby site.
I am posting the error message from alert log and trace file.
-------- Message in alert.log --------------
--Connected User is Valid
RFS[61129]: Assigned to RFS process 1511478
RFS[61129]: Database mount ID mismatch [0x8afdb574:0x8afd830b]
RFS[61129]: Client instance is standby database instead of primary
RFS[61129]: Not using real application clusters
Fri May 8 22:32:57 2009
Errors in file /oracle/admin/DBSID/udump/DBSID_rfs_1511478.trc:
ORA-16009: remote archive log destination must be a STANDBY database
/oracle/admin/DBSID/bdump
HOST [DBSID]-> May 8 22:33:52 hostname daemon:err|error ltid[933986]: Remote scan failed on host HOST, drive IBMULTRIUM-TD25, Host is not the scan host for this shared drive (304)
----------------- Trace File with complete detail of the Error -----------------------------------
/oracle/admin/DBSID/udump/DBSID_rfs_1511478.trc
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
With the Partitioning and Data Mining options
ORACLE_HOME = /oracle/product/10.2.0/db_1
System name: AIX
Node name: hostname
Release: 3
Version: 5
Machine: XXXXXXX
Instance name: DBSID
Redo thread mounted by this instance: 1
Oracle process number: 19
Unix process pid: 1491066, image: oracleDBSID@hostname
*** SERVICE NAME:(DBSID.DOMAIN.COM) 2009-05-08 22:31:57.314
*** SESSION ID:(1094.21412) 2009-05-08 22:31:57.314
Detected dead process 1491030; subsuming V$MANAGED_STANDBY slot
*** 2009-05-08 22:31:57.318 61287 kcrr.c
RFS[61128]: Database mount ID mismatch [0x8afdb574:0x8afd830b]
*** 2009-05-08 22:31:57.318 61287 kcrr.c
RFS[61128]: Client instance is standby database instead of primary
*** 2009-05-08 22:31:57.318 61287 kcrr.c
RFS[61128]: Not using real application clusters
ORA-16009: remote archive log destination must be a STANDBY database
Please advise and suggest,
Thanks
-Samar-

I have the following setting for log_archive_dest_2 on the primary and standby, which seems to be OK, but the above-mentioned note (written for Oracle 9i) suggests disabling remote archiving on the standby database. Also, I am using the ASYNC option for log transport.
At Primary:
SQL> show parameter log_archive_dest_2
NAME TYPE VALUE
log_archive_dest_2 string SERVICE=STDBY LGWR ASYNC REOPEN=60 DB_UNIQUE_NAME=STDBY
At standby:
SQL> show parameter log_archive_dest_2
NAME TYPE VALUE
log_archive_dest_2 string SERVICE=PRI LGWR ASYNC REOPEN=60 DB_UNIQUE_NAME=PRI
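A common way to quiet this after a switchback (a sketch, not the only fix): add a VALID_FOR clause so the standby's remote destination stays dormant except when the database runs in the primary role, or simply defer the destination while it is a standby. The syntax mirrors the parameters shown above:

```sql
-- On the current standby:
ALTER SYSTEM SET log_archive_dest_2 =
  'SERVICE=PRI LGWR ASYNC REOPEN=60 VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PRI'
  SCOPE=BOTH;

-- Or, more bluntly, defer the destination until the next role change:
ALTER SYSTEM SET log_archive_dest_state_2 = DEFER;
```

With VALID_FOR in place the destination re-enables itself automatically after a future switchover, which the DEFER approach does not.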
Thanks
-Samar- -
ORA-07445 in the alert log when inserting into table with XMLType column
I'm trying to insert an XML document into a table with a schema-based XMLType column. When I try to insert a row (using PL/SQL Developer), Oracle is busy for a few seconds and then the connection to Oracle is lost.
Below you'll find the following to recreate the problem:
a) contents from the alert log
b) create script for the table
c) the before-insert trigger
d) the xml-schema
e) code for registering the schema
f) the test program
g) platform information
Alert Log:
Fri Aug 17 00:44:11 2007
Errors in file /oracle/app/oracle/product/10.2.0/db_1/admin/dntspilot2/udump/dntspilot2_ora_13807.trc:
ORA-07445: exception encountered: core dump [SIGSEGV] [Address not mapped to object] [475177] [] [] []
Create script for the table:
CREATE TABLE "DNTSB"."SIGNATURETABLE"
( "XML_DOCUMENT" "SYS"."XMLTYPE" ,
"TS" TIMESTAMP (6) WITH TIME ZONE NOT NULL ENABLE
) XMLTYPE COLUMN "XML_DOCUMENT" XMLSCHEMA "http://www.sporfori.fo/schemas/www.w3.org/TR/xmldsig-core/xmldsig-core-schema.xsd" ELEMENT "Object"
ROWDEPENDENCIES ;
Before-insert trigger:
create or replace trigger BIS_SIGNATURETABLE
before insert on signaturetable
for each row
declare
-- local variables here
l_sigtab_rec signaturetable%rowtype;
begin
if (:new.xml_document is not null) then
:new.xml_document.schemavalidate();
end if;
l_sigtab_rec.xml_document := :new.xml_document;
end BIS_SIGNATURETABLE;
XML-Schema (xmldsig-core-schema.xsd):
=====================================================================================
<?xml version="1.0" encoding="utf-8"?>
<!-- Schema for XML Signatures
http://www.w3.org/2000/09/xmldsig#
$Revision: 1.1 $ on $Date: 2002/02/08 20:32:26 $ by $Author: reagle $
Copyright 2001 The Internet Society and W3C (Massachusetts Institute
of Technology, Institut National de Recherche en Informatique et en
Automatique, Keio University). All Rights Reserved.
http://www.w3.org/Consortium/Legal/
This document is governed by the W3C Software License [1] as described
in the FAQ [2].
[1] http://www.w3.org/Consortium/Legal/copyright-software-19980720
[2] http://www.w3.org/Consortium/Legal/IPR-FAQ-20000620.html#DTD
-->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:ds="http://www.w3.org/2000/09/xmldsig#" xmlns:xdb="http://xmlns.oracle.com/xdb"
targetNamespace="http://www.w3.org/2000/09/xmldsig#" version="0.1" elementFormDefault="qualified">
<!-- Basic Types Defined for Signatures -->
<xs:simpleType name="CryptoBinary">
<xs:restriction base="xs:base64Binary">
</xs:restriction>
</xs:simpleType>
<!-- Start Signature -->
<xs:element name="Signature" type="ds:SignatureType"/>
<xs:complexType name="SignatureType">
<xs:sequence>
<xs:element ref="ds:SignedInfo"/>
<xs:element ref="ds:SignatureValue"/>
<xs:element ref="ds:KeyInfo" minOccurs="0"/>
<xs:element ref="ds:Object" minOccurs="0" maxOccurs="unbounded"/>
</xs:sequence>
<xs:attribute name="Id" type="xs:ID" use="optional"/>
</xs:complexType>
<xs:element name="SignatureValue" type="ds:SignatureValueType"/>
<xs:complexType name="SignatureValueType">
<xs:simpleContent>
<xs:extension base="xs:base64Binary">
<xs:attribute name="Id" type="xs:ID" use="optional"/>
</xs:extension>
</xs:simpleContent>
</xs:complexType>
<!-- Start SignedInfo -->
<xs:element name="SignedInfo" type="ds:SignedInfoType"/>
<xs:complexType name="SignedInfoType">
<xs:sequence>
<xs:element ref="ds:CanonicalizationMethod"/>
<xs:element ref="ds:SignatureMethod"/>
<xs:element ref="ds:Reference" maxOccurs="unbounded"/>
</xs:sequence>
<xs:attribute name="Id" type="xs:ID" use="optional"/>
</xs:complexType>
<xs:element name="CanonicalizationMethod" type="ds:CanonicalizationMethodType"/>
<xs:complexType name="CanonicalizationMethodType" mixed="true">
<xs:sequence>
<xs:any namespace="##any" minOccurs="0" maxOccurs="unbounded"/>
<!-- (0,unbounded) elements from (1,1) namespace -->
</xs:sequence>
<xs:attribute name="Algorithm" type="xs:anyURI" use="required"/>
</xs:complexType>
<xs:element name="SignatureMethod" type="ds:SignatureMethodType"/>
<xs:complexType name="SignatureMethodType" mixed="true">
<xs:sequence>
<xs:element name="HMACOutputLength" minOccurs="0" type="ds:HMACOutputLengthType"/>
<xs:any namespace="##other" minOccurs="0" maxOccurs="unbounded"/>
<!-- (0,unbounded) elements from (1,1) external namespace -->
</xs:sequence>
<xs:attribute name="Algorithm" type="xs:anyURI" use="required"/>
</xs:complexType>
<!-- Start Reference -->
<xs:element name="Reference" type="ds:ReferenceType"/>
<xs:complexType name="ReferenceType">
<xs:sequence>
<xs:element ref="ds:Transforms" minOccurs="0"/>
<xs:element ref="ds:DigestMethod"/>
<xs:element ref="ds:DigestValue"/>
</xs:sequence>
<xs:attribute name="Id" type="xs:ID" use="optional"/>
<xs:attribute name="URI" type="xs:anyURI" use="optional"/>
<xs:attribute name="Type" type="xs:anyURI" use="optional"/>
</xs:complexType>
<xs:element name="Transforms" type="ds:TransformsType"/>
<xs:complexType name="TransformsType">
<xs:sequence>
<xs:element ref="ds:Transform" maxOccurs="unbounded"/>
</xs:sequence>
</xs:complexType>
<xs:element name="Transform" type="ds:TransformType"/>
<xs:complexType name="TransformType" mixed="true">
<xs:choice minOccurs="0" maxOccurs="unbounded">
<xs:any namespace="##other" processContents="lax"/>
<!-- (1,1) elements from (0,unbounded) namespaces -->
<xs:element name="XPath" type="xs:string"/>
</xs:choice>
<xs:attribute name="Algorithm" type="xs:anyURI" use="required"/>
</xs:complexType>
<!-- End Reference -->
<xs:element name="DigestMethod" type="ds:DigestMethodType"/>
<xs:complexType name="DigestMethodType" mixed="true">
<xs:sequence>
<xs:any namespace="##other" processContents="lax" minOccurs="0" maxOccurs="unbounded"/>
</xs:sequence>
<xs:attribute name="Algorithm" type="xs:anyURI" use="required"/>
</xs:complexType>
<xs:element name="DigestValue" type="ds:DigestValueType"/>
<xs:simpleType name="DigestValueType">
<xs:restriction base="xs:base64Binary"/>
</xs:simpleType>
<!-- End SignedInfo -->
<!-- Start KeyInfo -->
<xs:element name="KeyInfo" type="ds:KeyInfoType"/>
<xs:complexType name="KeyInfoType" mixed="true">
<xs:choice maxOccurs="unbounded">
<xs:element ref="ds:KeyName"/>
<xs:element ref="ds:KeyValue"/>
<xs:element ref="ds:RetrievalMethod"/>
<xs:element ref="ds:X509Data"/>
<xs:element ref="ds:PGPData"/>
<xs:element ref="ds:SPKIData"/>
<xs:element ref="ds:MgmtData"/>
<xs:any processContents="lax" namespace="##other"/>
<!-- (1,1) elements from (0,unbounded) namespaces -->
</xs:choice>
<xs:attribute name="Id" type="xs:ID" use="optional"/>
</xs:complexType>
<xs:element name="KeyName" type="xs:string"/>
<xs:element name="MgmtData" type="xs:string"/>
<xs:element name="KeyValue" type="ds:KeyValueType"/>
<xs:complexType name="KeyValueType" mixed="true">
<xs:choice>
<xs:element ref="ds:DSAKeyValue"/>
<xs:element ref="ds:RSAKeyValue"/>
<xs:any namespace="##other" processContents="lax"/>
</xs:choice>
</xs:complexType>
<xs:element name="RetrievalMethod" type="ds:RetrievalMethodType"/>
<xs:complexType name="RetrievalMethodType">
<xs:sequence>
<xs:element ref="ds:Transforms" minOccurs="0"/>
</xs:sequence>
<xs:attribute name="URI" type="xs:anyURI"/>
<xs:attribute name="Type" type="xs:anyURI" use="optional"/>
</xs:complexType>
<!-- Start X509Data -->
<xs:element name="X509Data" type="ds:X509DataType"/>
<xs:complexType name="X509DataType">
<xs:sequence maxOccurs="unbounded">
<xs:choice>
<xs:element name="X509IssuerSerial" type="ds:X509IssuerSerialType"/>
<xs:element name="X509SKI" type="xs:base64Binary"/>
<xs:element name="X509SubjectName" type="xs:string"/>
<xs:element name="X509Certificate" type="xs:base64Binary"/>
<xs:element name="X509CRL" type="xs:base64Binary"/>
<xs:any namespace="##other" processContents="lax"/>
</xs:choice>
</xs:sequence>
</xs:complexType>
<xs:complexType name="X509IssuerSerialType">
<xs:sequence>
<xs:element name="X509IssuerName" type="xs:string"/>
<xs:element name="X509SerialNumber" type="xs:integer"/>
</xs:sequence>
</xs:complexType>
<!-- End X509Data -->
<!-- Begin PGPData -->
<xs:element name="PGPData" type="ds:PGPDataType"/>
<xs:complexType name="PGPDataType">
<xs:choice>
<xs:sequence>
<xs:element name="PGPKeyID" type="xs:base64Binary"/>
<xs:element name="PGPKeyPacket" type="xs:base64Binary" minOccurs="0"/>
<xs:any namespace="##other" processContents="lax" minOccurs="0"
maxOccurs="unbounded"/>
</xs:sequence>
<xs:sequence>
<xs:element name="PGPKeyPacket" type="xs:base64Binary"/>
<xs:any namespace="##other" processContents="lax" minOccurs="0"
maxOccurs="unbounded"/>
</xs:sequence>
</xs:choice>
</xs:complexType>
<!-- End PGPData -->
<!-- Begin SPKIData -->
<xs:element name="SPKIData" type="ds:SPKIDataType"/>
<xs:complexType name="SPKIDataType">
<xs:sequence maxOccurs="unbounded">
<xs:element name="SPKISexp" type="xs:base64Binary"/>
<xs:any namespace="##other" processContents="lax" minOccurs="0"/>
</xs:sequence>
</xs:complexType>
<!-- End SPKIData -->
<!-- End KeyInfo -->
<!-- Start Object (Manifest, SignatureProperty) -->
<xs:element name="Object" type="ds:ObjectType"/>
<xs:complexType name="ObjectType" mixed="true">
<xs:sequence minOccurs="0" maxOccurs="unbounded">
<xs:any namespace="##any" processContents="lax"/>
</xs:sequence>
<xs:attribute name="Id" type="xs:ID" use="optional"/>
<xs:attribute name="MimeType" type="xs:string" use="optional"/> <!-- add a grep facet -->
<xs:attribute name="Encoding" type="xs:anyURI" use="optional"/>
</xs:complexType>
<xs:element name="Manifest" type="ds:ManifestType"/>
<xs:complexType name="ManifestType">
<xs:sequence>
<xs:element ref="ds:Reference" maxOccurs="unbounded"/>
</xs:sequence>
<xs:attribute name="Id" type="xs:ID" use="optional"/>
</xs:complexType>
<xs:element name="SignatureProperties" type="ds:SignaturePropertiesType"/>
<xs:complexType name="SignaturePropertiesType">
<xs:sequence>
<xs:element ref="ds:SignatureProperty" maxOccurs="unbounded"/>
</xs:sequence>
<xs:attribute name="Id" type="xs:ID" use="optional"/>
</xs:complexType>
<xs:element name="SignatureProperty" type="ds:SignaturePropertyType"/>
<xs:complexType name="SignaturePropertyType" mixed="true">
<xs:choice maxOccurs="unbounded">
<xs:any namespace="##other" processContents="lax"/>
<!-- (1,1) elements from (1,unbounded) namespaces -->
</xs:choice>
<xs:attribute name="Target" type="xs:anyURI" use="required"/>
<xs:attribute name="Id" type="xs:ID" use="optional"/>
</xs:complexType>
<!-- End Object (Manifest, SignatureProperty) -->
<!-- Start Algorithm Parameters -->
<xs:simpleType name="HMACOutputLengthType">
<xs:restriction base="xs:integer"/>
</xs:simpleType>
<!-- Start KeyValue Element-types -->
<xs:element name="DSAKeyValue" type="ds:DSAKeyValueType"/>
<xs:complexType name="DSAKeyValueType">
<xs:sequence>
<xs:sequence minOccurs="0">
<xs:element name="P" type="ds:CryptoBinary"/>
<xs:element name="Q" type="ds:CryptoBinary"/>
</xs:sequence>
<xs:element name="G" type="ds:CryptoBinary" minOccurs="0"/>
<xs:element name="Y" type="ds:CryptoBinary"/>
<xs:element name="J" type="ds:CryptoBinary" minOccurs="0"/>
<xs:sequence minOccurs="0">
<xs:element name="Seed" type="ds:CryptoBinary"/>
<xs:element name="PgenCounter" type="ds:CryptoBinary"/>
</xs:sequence>
</xs:sequence>
</xs:complexType>
<xs:element name="RSAKeyValue" type="ds:RSAKeyValueType"/>
<xs:complexType name="RSAKeyValueType">
<xs:sequence>
<xs:element name="Modulus" type="ds:CryptoBinary"/>
<xs:element name="Exponent" type="ds:CryptoBinary"/>
</xs:sequence>
</xs:complexType>
<!-- End KeyValue Element-types -->
<!-- End Signature -->
</xs:schema>
===============================================================================
Code for registering the xml-schema
begin
dbms_xmlschema.deleteSchema('http://xmlns.oracle.com/xdb/schemas/DNTSB/www.sporfori.fo/schemas/www.w3.org/TR/xmldsig-core/xmldsig-core-schema.xsd',
dbms_xmlschema.DELETE_CASCADE_FORCE);
end;
begin
DBMS_XMLSCHEMA.REGISTERURI(
schemaurl => 'http://www.sporfori.fo/schemas/www.w3.org/TR/xmldsig-core/xmldsig-core-schema.xsd',
schemadocuri => 'http://www.sporfori.fo/schemas/www.w3.org/TR/xmldsig-core/xmldsig-core-schema.xsd',
local => TRUE,
gentypes => TRUE,
genbean => FALSE,
gentables => TRUE,
force => FALSE,
owner => 'DNTSB',
options => 0);
end;
Test program
-- Created on 17-07-2006 by EEJ
declare
XML_TEXT3 CLOB := '<Object xmlns="http://www.w3.org/2000/09/xmldsig#">
<SignatureProperties>
<SignatureProperty Target="">
<Timestamp xmlns="http://www.sporfori.fo/schemas/dnts/general/2006/11/14">2007-05-10T12:00:00-05:00</Timestamp>
</SignatureProperty>
</SignatureProperties>
</Object>';
xmldoc xmltype;
begin
xmldoc := xmltype(xml_text3);
insert into signaturetable
(xml_document, ts)
values
(xmldoc, current_timestamp);
end;
Platform information
Operating system:
-bash-3.00$ uname -a
SunOS dntsdb 5.10 Generic_125101-09 i86pc i386 i86pc
SQLPlus:
SQL*Plus: Release 10.2.0.3.0 - Production on Fri Aug 17 00:15:13 2007
Copyright (c) 1982, 2006, Oracle. All Rights Reserved.
Enter password:
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Production
With the Partitioning and Data Mining options
Kind Regards,
Eyðun

You should report this in a service request on http://metalink.oracle.com.
It is a shame that you put all this effort into describing your problem here, but on the other hand you can now also copy & paste the question to Oracle Support.
Because you are using 10.2.0.3, I am guessing that you have a valid service contract...
Create custom rule: Looking for ORA errors in the alert log
I would like to create a rule to notify me when ORA errors are generated to the alert log. When I try to create a rule, I seem to only be able to choose from predefined lists. Has anyone configured a rule for this?
Grrr. This is for 10.2 Grid:
Go to databases --> select your database
Under Diagnostic summary, you'll see:
Alert Log 28-Dec-2006 09:19:48
The date part is a clickable link; click on it. Scroll to the bottom of the next page and click on:
"Generic Alert Log Error Monitoring Configuration"
Now read the page very carefully; it's all self-explanatory. The reason you haven't had some errors reported is because of that filter. Now set it up the way you want it.
Good luck
Bazzza -
Database generating a large number of ORA-1403 errors in Alert Log & trace
Hi,
Many ORA-1403: no data found messages appear in the alert log, but users are not seeing this error on their screens. The errors are only generated in the alert log along with an associated trace file.
We have not set the "set events 1403 trace" for either system or session level.
I confirmed this with the initialization file.
The database is generating huge trace files.
Database version: 10.1.0.4.2
Errors in Trace:
ksedmp: internal or fatal error
ORA-01403: no data found
Regards,
Chandan

Hi,
Check Note 271490.1. Try issuing
alter system set events '1403 trace name errorstack off';
as recommended in that note.
If this does not help, check for other possible causes on Metalink, or take the problem to Oracle Support directly.
Best Regards,
Alex -
Recovery is repairing media corrupt block x of file x in standby alert log
Hi,
Oracle version: 8.1.7.0.0
OS version: Solaris 5.9
We have Oracle 8i primary and standby databases. I am getting this error in the alert log file:
Thu Aug 28 22:48:12 2008
Media Recovery Log /oratranslog/arch_1_1827391.arc
Thu Aug 28 22:50:42 2008
Media Recovery Log /oratranslog/arch_1_1827392.arc
bash-2.05$ tail -f alert_pindb.log
Recovery is repairing media corrupt block 991886 of file 179
Recovery is repairing media corrupt block 70257 of file 184
Recovery is repairing media corrupt block 70258 of file 184
Recovery is repairing media corrupt block 70259 of file 184
Recovery is repairing media corrupt block 70260 of file 184
Recovery is repairing media corrupt block 70261 of file 184
Thu Aug 28 22:48:12 2008
Media Recovery Log /oratranslog/arch_1_1827391.arc
Thu Aug 28 22:50:42 2008
Media Recovery Log /oratranslog/arch_1_1827392.arc
Recovery is repairing media corrupt block 500027 of file 181
Recovery is repairing media corrupt block 500028 of file 181
Recovery is repairing media corrupt block 500029 of file 181
Recovery is repairing media corrupt block 500030 of file 181
Recovery is repairing media corrupt block 500031 of file 181
Recovery is repairing media corrupt block 991837 of file 179
Recovery is repairing media corrupt block 991838 of file 179
How can I resolve this?
Thanks
Prakash
Edited by: user612485 on Aug 28, 2008 10:53 AM

Dear satish kandi,
We recently created an index on one table with the NOLOGGING option; I think that is the reason I am getting this error.
If I run the dbv utility on the files shown in the alert log file, I get the following results.
bash-2.05$ dbv file=/oracle15/oradata/pindb/pinx055.dbf blocksize=4096
DBVERIFY: Release 8.1.7.0.0 - Production on Fri Aug 29 12:18:27 2008
(c) Copyright 2000 Oracle Corporation. All rights reserved.
DBVERIFY - Verification starting : FILE = /oracle15/oradata/pindb/pinx053.dbf
Block Checking: DBA = 751593895, Block Type =
Found block already marked corrupted
Block Checking: DBA = 751593896, Block Type =
.DBVERIFY - Verification complete
Total Pages Examined : 1048576
Total Pages Processed (Data) : 0
Total Pages Failing (Data) : 0
Total Pages Processed (Index): 1036952
Total Pages Failing (Index): 0
Total Pages Processed (Other): 7342
Total Pages Empty : 4282
Total Pages Marked Corrupt : 0
Total Pages Influx : 0
bash-2.05$ dbv file=/oracle15/oradata/pindb/pinx053.dbf blocksize=4096
DBVERIFY: Release 8.1.7.0.0 - Production on Fri Aug 29 12:23:12 2008
(c) Copyright 2000 Oracle Corporation. All rights reserved.
DBVERIFY - Verification starting : FILE = /oracle15/oradata/pindb/pinx054.dbf
Block Checking: DBA = 759492966, Block Type =
Found block already marked corrupted
Block Checking: DBA = 759492967, Block Type =
Found block already marked corrupted
Block Checking: DBA = 759492968, Block Type =
.DBVERIFY - Verification complete
Total Pages Examined : 1048576
Total Pages Processed (Data) : 0
Total Pages Failing (Data) : 0
Total Pages Processed (Index): 585068
Total Pages Failing (Index): 0
Total Pages Processed (Other): 8709
Total Pages Empty : 454799
Total Pages Marked Corrupt : 0
Total Pages Influx : 0
bash-2.05$ dbv file=/oracle15/oradata/pindb/pinx054.dbf blocksize=4096
DBVERIFY: Release 8.1.7.0.0 - Production on Fri Aug 29 12:32:28 2008
(c) Copyright 2000 Oracle Corporation. All rights reserved.
DBVERIFY - Verification starting : FILE = /oracle15/oradata/pindb/pinx055.dbf
Block Checking: DBA = 771822208, Block Type =
Found block already marked corrupted
Block Checking: DBA = 771822209, Block Type =
Found block already marked corrupted
Block Checking: DBA = 771822210, Block Type =
.DBVERIFY - Verification complete
Total Pages Examined : 1048576
Total Pages Processed (Data) : 0
Total Pages Failing (Data) : 0
Total Pages Processed (Index): 157125
Total Pages Failing (Index): 0
Total Pages Processed (Other): 4203
Total Pages Empty : 887248
Total Pages Marked Corrupt : 0
Total Pages Influx : 0
My doubts are:
1. If I drop the index and recreate it with the LOGGING option, will this error stop appearing in the alert log file?
2. In the future, if I activate the standby database, will it open without any errors?
Thanks
Prakash
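Corrupt index blocks on a standby are commonly the result of NOLOGGING operations on the primary, which generate no redo for the standby to apply. A sketch of how to check for this and rebuild with logging (the index name is illustrative; existing corrupt blocks on the standby still need the affected datafiles to be refreshed from the primary — the rebuild only prevents new ones):

```sql
-- Datafiles touched by unrecoverable (NOLOGGING) changes on the primary:
SELECT file#, unrecoverable_change#, unrecoverable_time
  FROM v$datafile
 WHERE unrecoverable_change# > 0;

-- Rebuilding the index with LOGGING generates full redo, so the blocks
-- can be reproduced on the standby from shipped archive logs:
ALTER INDEX pin.pinx_idx REBUILD LOGGING;
```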
-
ORA-3136 error in alert log file(EBS R12)
I have found the ORA-3136 error in the alert log of an EBS R12 instance. Can anyone help with this error? The alert log writes it continuously. Below is the error I found.
Fatal NI connect error 12170.
VERSION INFORMATION:
TNS for Linux: Version 11.1.0.7.0 - Production
Oracle Bequeath NT Protocol Adapter for Linux: Version 11.1.0.7.0 - Production
TCP/IP NT Protocol Adapter for Linux: Version 11.1.0.7.0 - Production
Time: 07-NOV-2013 18:33:49
Tracing not turned on.
Tns error struct:
ns main err code: 12535
TNS-12535: TNS:operation timed out
ns secondary err code: 12606
nt main err code: 0
nt secondary err code: 0
nt OS err code: 0
Client address: (ADDRESS=(PROTOCOL=tcp)(HOST=172.17.100.108)(PORT=60495))
WARNING: inbound connection timed out (ORA-3136)
Does this affect the functionality of the application?
Do you get any errors when opening forms or self-service pages?
Please see:
ORA-01403 and FRM-40735 error occur while opening any forms (Doc ID 1358117.1)
Troubleshooting ORA-3135/ORA-3136 Connection Timeouts Errors - Database Diagnostics (Doc ID 730066.1)
ORA-3136/TNS-12535 or Hang Reported for Connections Across Firewall (Doc ID 974783.1)
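For reference, the usual first mitigation from Doc ID 730066.1 is to raise the server-side inbound connect timeouts. A sketch (the values are illustrative; the listener value is conventionally set a little lower than the sqlnet.ora value):

```
# $ORACLE_HOME/network/admin/sqlnet.ora on the database server
SQLNET.INBOUND_CONNECT_TIMEOUT = 120

# $ORACLE_HOME/network/admin/listener.ora (for a listener named LISTENER)
INBOUND_CONNECT_TIMEOUT_LISTENER = 110
```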
Thanks,
Hussein -
ORA-00600 error in alert log file
Hi all,
I'm using Red Hat Enterprise Linux 5 and Oracle version 10.2.0.1.0.
I checked the alert log file and got this error yesterday:
Errors in file /opt/oracle/admin/ssharp/udump/ssharp_ora_11162.trc:
ORA-00600: internal error code, arguments: [kksfbc-reparse-infinite-loop], [0xB7F8F6B4], [], [], [], [], [], []
Fri Mar 11 19:44:51 2011
Memory Notification: Library Cache Object loaded into SGA
Heap size 3784K exceeds notification threshold (2048K)
How can I solve this issue?
Thanks in advance.
Hello,
ORA-00600: internal error code, arguments: [kksfbc-reparse-infinite-loop], [0xB7F8F6B4], [], [], [], [], [], []
For this kind of error you should open a Service Request with My Oracle Support and send them all the information you can (alert log, trace files, ...).
Moreover, as you are on *10.2.0.1*, you may apply the latest patch set to the database (10.2.0.4 / 10.2.0.5). An ORA-00600 may sometimes be due to a bug, so applying a patch set can help to solve this kind of issue.
Anyway, first of all, contact the Support.
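When filing the SR, it helps to state the exact patch level up front. A minimal sketch for gathering it (assumes ORACLE_HOME points at the 10.2.0.1 installation; the output path is illustrative):

```shell
# Record the one-off patches and component versions for the Service Request:
$ORACLE_HOME/OPatch/opatch lsinventory -detail > /tmp/opatch_inventory.txt
```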
Hope this helps.
Best regards,
Jean-Valentin -
I keep getting messages like this in my Oracle XE alert log:
ORA-17624: Failed to delete directory C:\ORACLEXE\APP\ORACLE\FLASH_RECOVERY_AREA\XE\BACKUPSET\2008_08_04
I get these for the backupset, autobackup, and archivelog directories. These happen while my RMAN backup is running. The RMAN backup is set to delete the archivelog files after they have been backed up and then to keep only 2 backup sets. Has anyone else had this problem?
Hello John,
I get the same errors during the full RMAN backup I do, with two Oracle error messages: ORA-17624 and ORA-27093.
I think this error has to do with the XE version. I couldn't find any detailed information (like a workaround) in Metalink.
Metalink states in a standard note that it has to do with user rights: the user does not have enough rights to delete the directory. RMAN uses the Windows rmdir command to delete all obsolete/old directories.
I did the following steps with no success:
- The user was given full administrator rights for the whole volume
- I tried the rmdir command manually and it worked
- If I let the script do it, it fails again
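For reference, the backup policy John describes (keep 2 backup sets, delete archived logs once backed up) would typically look like the sketch below in RMAN. Note the ORA-17624 only concerns deleting the then-empty flash recovery area subdirectories, so the backup pieces themselves are not at risk:

```
RMAN> CONFIGURE RETENTION POLICY TO REDUNDANCY 2;
RMAN> BACKUP DATABASE PLUS ARCHIVELOG DELETE INPUT;
RMAN> DELETE NOPROMPT OBSOLETE;
```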
Best regards,
Tobias Arnhold -
ORA-00316 in the alert log file
Thu Aug 21 17:23:05 2008
Errors in file /ora_dump/CONCILIA/bdump/concilia_arc1_205172.trc:
ORA-00316: log 3 of thread 1, type 0 in header is not log file
ORA-00312: online log 3 thread 1: '/ora_data04/CONCILIA/redo01.log'
Thu Aug 21 17:23:05 2008
Errors in file /ora_dump/CONCILIA/bdump/concilia_arc1_205172.trc:
ORA-00316: log 3 of thread 1, type 0 in header is not log file
ORA-00312: online log 3 thread 1: '/ora_data04/CONCILIA/redo01.log'
It looks like your redo log might be corrupted.
Check the content of the trace file and work with Oracle Support.
00316, 00000, "log %s of thread %s, type %s in header is not log file"
// *Cause: The online log is corrupted or is an old version.
// *Action: Find and install correct version of log or reset logs. -
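The "reset logs" action above can often be done without a full OPEN RESETLOGS: if the corrupt group is no longer needed for crash recovery, it can be reinitialized in place. A sketch (group 3 comes from the error above; verify its status first):

```sql
-- Group 3 must not be CURRENT or ACTIVE before clearing it:
SELECT group#, sequence#, status, archived FROM v$log;

-- If group 3 is INACTIVE and archived, this discards the corrupt header:
ALTER DATABASE CLEAR LOGFILE GROUP 3;
-- Use CLEAR UNARCHIVED LOGFILE GROUP 3 only if it was never archived,
-- and take a full backup immediately afterwards.
```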
Hi All,
I have Oracle 11gR2 installed on OVM. For the last few days I have been getting the below ORA-600 error in my alert.log file:
Errors in file /u02/app/oracle/diag/rdbms/......./SID_ora_30314.trc (incident=169998):
ORA-00600: internal error code, arguments: [qcsprfro_tree:jrs present], [0x2B7CA0837030], [], [], [], [], [], [], [], [], [], []
from the above referenced trc file : I got below message:
DDE: Problem Key 'ORA 600 [qcsprfro_tree:jrs present]' was flood controlled (0x6) (incident: 169998)
ORA-00600: internal error code, arguments: [qcsprfro_tree:jrs present], [0x2B7CA0837030], [], [], [], [], [], [], [], [], [], []
Does anybody have any idea about this error's first argument, [qcsprfro_tree:jrs present]? I am not finding any reference document on Oracle Support either.
Thanks...
Hello,
As previously posted, you must open an SR with My Oracle Support.
In *11.2* you have the possibility with ADR (Automatic Diagnostic Repository) to create an Incident Package that you can send to the Support. It can help in the resolution process.
Please, find enclosed a link to this new features in 11g:
http://download.oracle.com/docs/cd/E11882_01/server.112/e17120/diag002.htm#CHDJDBCH
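A sketch of creating that incident package with the ADRCI command-line tool, using incident 169998 from the alert log excerpt above (the homepath and package number are illustrative; run "show homes" first to find the right one):

```shell
adrci exec="set homepath diag/rdbms/mydb/SID; ips create package incident 169998; ips generate package 1 in /tmp"
```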
Hope this helps.
Best regards,
Jean-Valentin -
ORA-326/ORA-334 Physical Standby DB - Not applying logs
Hi all, I have a Data Guard configuration: the primary database is a three-node RAC and the standby is a physical standby. Both are Oracle 10.2.0.3 on Solaris 9 SPARC.
Now my standby database is not applying the archived logs, and there are errors in the standby alert log:
Thu Jan 8 15:44:31 2009
Errors in file /oracle/admin/xxxx/bdump/whut_mrp0_12863.trc:
ORA-00326:log begins at change 2984028173, need earlier change 2984013192
ORA-00334:archived log '+DGARCH/whut/3_22648_611942629.dbf'
Managed Standby Recovery not using Real Time Apply
Recovery interrupted!
Thu Jan 8 15:44:34 2009
Errors in file /export/home/oracle/admin/whut/bdump/whut_mrp0_12863.trc:
ORA-00326:log begins at change 2984028173, need earlier change 2984013192
ORA-00334:archived log '+DGARCH/whut/3_22648_611942629.dbf'
Thu Jan 8 15:44:34 2009
MRP0: Background Media Recovery process shutdown (whut)
and the archivelogs on the standby db like this :
NAME APP DEL
+DGARCH/whut/3_22651_611942629.dbf NO NO
+DGARCH/whut/3_22650_611942629.dbf NO NO
+DGARCH/whut/3_22649_611942629.dbf NO NO
+DGARCH/whut/3_22648_611942629.dbf NO NO
+DGARCH/whut/3_22648_611942629.dbf NO NO
+DGARCH/whut/3_22648_611942629.dbf NO YES
+DGARCH/whut/3_22291_611942629.dbf YES YES
+DGARCH/whut/2_62553_611942629.dbf NO NO
+DGARCH/whut/2_62552_611942629.dbf NO NO
+DGARCH/whut/2_62551_611942629.dbf NO NO
+DGARCH/whut/2_62550_611942629.dbf NO NO
query the first_change# on the standby db like this :
SEQUENCE# FIRST_CHANGE# APP ARC
22646 2983721221 NO YES
22647 2983911304 NO YES
22648 2984028173 NO YES
23813 2984013365 NO YES
23814 2984118027 NO YES
62546 2984027708 NO YES
SEQUENCE# FIRST_CHANGE# APP ARC
62547 2984163823 NO YES
23815 2984132128 NO YES
23816 2984312713 NO YES
62548 2984255538 NO YES
62549 2984312692 NO YES
22649 2984028196 NO YES
23817 2984351070 NO YES
Then, how do I fix the Data Guard configuration and apply the archived logs on the standby database?
Why can I find 2984028173 with sequence 22648, but cannot find 2984013192?
And why are there three entries for sequence 22648 with different applied and deleted statuses on the standby?
Thanks all.
Thank you for your reply.
the trc file like this:
/oracle/admin/xxx/bdump/xxx_mrp0_12863.trc
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
ORACLE_HOME = /oracle/product/10.2.0/database
System name: SunOS
Node name: stdb
Release: 5.9
Version: Generic_122300-25
Machine: sun4u
Instance name: xxx
Redo thread mounted by this instance: 1
Oracle process number: 27
Unix process pid: 12863, image: oracle@stdb (MRP0)
*** SERVICE NAME:() 2009-01-08 15:44:24.179
*** SESSION ID:(197.3) 2009-01-08 15:44:24.179
ARCH: Connecting to console port...
*** 2009-01-08 15:44:24.180 61287 kcrr.c
MRP0: Background Managed Standby Recovery process started
*** 2009-01-08 15:44:29.191 1103 krsm.c
Managed Recovery: Initialization posted.
*** 2009-01-08 15:44:29.192 61287 kcrr.c
Managed Standby Recovery starting Real Time Apply
Recovery target incarnation = 5, activation ID = 721050634
Influx buffer limit = 152383 (50% x 304766)
Successfully allocated 3 recovery slaves
Using 367 overflow buffers per recovery slave
Start recovery at thread 3 ckpt scn 2984013192 logseq 22648 block 2
*** 2009-01-08 15:44:31.181
Media Recovery add redo thread 3
*** 2009-01-08 15:44:31.181
Media Recovery add redo thread 1
*** 2009-01-08 15:44:31.182
Media Recovery add redo thread 2
*** 2009-01-08 15:44:31.184 1103 krsm.c
Managed Recovery: Active posted.
*** 2009-01-08 15:44:31.838
Media Recovery Log +DGARCH/whut/3_22648_611942629.dbf
*** 2009-01-08 15:44:31.851 61287 kcrr.c
MRP0: Background Media Recovery terminated with error 326
ORA-00326: log begin at 2984028173, need earlier 2984013192
ORA-00334: archivelog: '+DGARCH/whut/3_22648_611942629.dbf'
*** 2009-01-08 15:44:31.851 61287 kcrr.c
Managed Standby Recovery not using Real Time Apply
*** 2009-01-08 15:44:31.851
Media Recovery drop redo thread 3
*** 2009-01-08 15:44:31.851
Media Recovery drop redo thread 1
*** 2009-01-08 15:44:31.851
Media Recovery drop redo thread 2
*** 2009-01-08 15:44:34.269 1103 krsm.c
Managed Recovery: Not Active posted.
ORA-00326: log begin at 2984028173,need earlier 2984013192
ORA-00334: archivelog: '+DGARCH/whut/3_22648_611942629.dbf'
ARCH: Connecting to console port...
*** 2009-01-08 15:44:34.359 61287 kcrr.c
MRP0: Background Media Recovery process shutdown
*** 2009-01-08 15:44:34.359 1103 krsm.c
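To diagnose the gap shown in the trace above, recovery needs the thread 3 log that actually contains SCN 2984013192, i.e. one whose FIRST_CHANGE# is at or below it and whose NEXT_CHANGE# is above it. A sketch of locating and registering it on the standby (the filename in the REGISTER command is illustrative; the duplicate rows for sequence 22648 suggest a copy the standby controlfile does not fully know about):

```sql
-- Find the archived log covering the needed SCN on thread 3:
SELECT thread#, sequence#, first_change#, next_change#, applied, name
  FROM v$archived_log
 WHERE thread# = 3
   AND first_change# <= 2984013192
   AND next_change#  >  2984013192;

-- If the file exists on disk but is missing from the standby controlfile,
-- register it so MRP can use it:
ALTER DATABASE REGISTER LOGFILE '+DGARCH/whut/3_22647_611942629.dbf';

-- Then restart managed recovery:
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
```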