Alert Log Content

Hello!
I want to see the alert log content for a specific date, but whenever I select a date window (date from... date until) using the calendar utility, nothing is displayed.
Without search criteria the utility displays the last 10,000 lines of the alert log content.
Why does this happen?
Simon

Hi,
By 'a window (date from... date to)' I mean a date period, for example 01/01/2006 until 03/01/2006.
In Enterprise Manager Console 10g there is the capability to select this date period using the built-in calendar (to the right of each text item where I write down the begin/end dates).
Does the answer you gave me mean that I cannot search the log file content using specific criteria, as I want?
Thanks for your help!
Simon
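For reference: if the search in the console keeps coming back empty, one workaround on 10g is to expose the alert log to SQL through an external table and filter it there. This is only a sketch; the directory path, the directory name ALERT_DIR, and the file name alert_SID.log are placeholders for the instance's BACKGROUND_DUMP_DEST and alert log.

-- Placeholder directory; point it at the instance's background_dump_dest.
CREATE OR REPLACE DIRECTORY alert_dir AS 'E:\oracle\product\10.2.0\admin\SID\bdump';

-- One-column external table that reads the alert log line by line.
CREATE TABLE alert_log_ext (
  text_line VARCHAR2(4000)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY alert_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    NOBADFILE NOLOGFILE NODISCARDFILE
    FIELDS TERMINATED BY '~' MISSING FIELD VALUES ARE NULL
    (text_line CHAR(4000))
  )
  LOCATION ('alert_SID.log')
)
REJECT LIMIT UNLIMITED;

-- Example: list only the ORA- errors; date filtering works the same way by
-- matching the timestamp lines (e.g. '%Jan 01%2006').
SELECT text_line
FROM   alert_log_ext
WHERE  text_line LIKE 'ORA-%';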

Similar Messages

  • Need to find the way to get the actual error message in the alert log.

    Hi,
    I have configured OEM 11g, and the monitored target versions range from 9i to 11g. My problem is that I have defined metrics for monitoring the alert log contents, and OEM sends an alert whenever there is an error in the alert log, but it does not show the actual error message; it only shows the following:
    ============================
    Target Name=IDMPRD
    Target type=Database Instance
    Host=oidmprd01.ho.abc.com
    Occurred At=Dec 21, 2011 12:05:21 AM GMT+03:00
    Message=1 distinct types of ORA- errors have been found in the alert log.
    Metric=Generic Alert Log Error Status
    Metric value=1
    Severity=Warning
    Acknowledged=No
    Notification Rule Name=RULE_4_PROD_DATABASES
    Notification Rule Owner=SYSMAN
    ============================
    Is there any way to get the complete error details in the OEM alert itself?
    Regards,
    DBA

    You need to look at the Alert Log error messages, not the "status" messages. See the documentation: http://docs.oracle.com/cd/E11857_01/em.111/e16285/oracle_database.htm#autoId2
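    If you also need the full ORA- message text outside the console, one option is to query the repository as SYSMAN. This is only a sketch: it assumes the repository view MGMT$ALERT_CURRENT with a MESSAGE column and an internal metric name of 'alertLog', both of which may differ between OEM releases.

    -- Sketch: current alert messages for one target's alert log metrics.
    -- The metric_name filter 'alertLog' is an assumption; adjust to your release.
    SELECT target_name,
           metric_name,
           metric_column,
           collection_timestamp,
           message              -- full alert text, including the ORA- error
    FROM   mgmt$alert_current
    WHERE  target_name = 'IDMPRD'
    AND    metric_name LIKE '%alertLog%';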

  • The database alert log has errors. Please help me

    Hi all,
    I have an Oracle 8i database that seems to be functioning well, but when I
    checked the database alert log I realized that something is not fine.
    Part of the alert log content is below. What can the problem be, and how can I
    resolve it? Please, I need your assistance...
    Thu Feb 07 08:50:37 2008
    Thread 1 advanced to log sequence 266
    Thu Feb 07 08:50:37 2008
    Current log# 11 seq# 266 mem# 0: E:\ORACLE\ORADATA\MUKREC\LOGS\REDO11.LOG
    Thu Feb 07 08:50:38 2008
    ARC0: Beginning to archive log# 10 seq# 265
    Thu Feb 07 08:51:07 2008
    ARC0: Completed archiving log# 10 seq# 265
    Thu Feb 07 09:23:22 2008
    Errors in file E:\Oracle\admin\MUKREC\udump\ORA03976.TRC:
    Thu Feb 07 09:23:23 2008
    Errors in file E:\Oracle\admin\MUKREC\udump\ORA04092.TRC:
    ................................................................................................................

    I'll just throw in a shameless copy-paste from Metalink note 164839.1; hope you can use it to get further:
    Subject: FATAL ERROR IN TWO-TASK SERVER: error = 12571 Found in Alert File
    Doc ID: Note:164839.1 Type: PROBLEM
    Last Revision Date: 17-DEC-2007 Status: PUBLISHED
    fact: Oracle Server - Enterprise Edition
    symptom: Errors appear in alert file
    symptom: FATAL ERROR IN TWO-TASK SERVER:
    symptom: ERROR = 12571
    symptom: trace file generated
    symptom: Database operations continue successfully
    cause: The most common cause of the above error is an ungraceful
    disconnection of a session from the Oracle database while the database is running
    a DML statement issued by that session. The error is recorded when Oracle
    attempts to reply to the session with the results of the DML and cannot
    access the session. Overall database operations are usually not affected.
    An ungraceful disconnection can be caused by, but is not limited to, any of the
    following:
    - the client machine crashed
    - the network connection crashed
    - the user exited the session improperly (not issuing the 'exit' command)
    - the user application allows the user to exit the application without properly
    terminating the session.
    If this occurs on a regular basis and is not addressed, it can cause problems with
    corrupted rollback segments. That would require database recovery and
    possibly a database rebuild (not a light matter).
    PMON will usually roll back most transactions in the rollback segments for a
    session if it finds that the session has been ungracefully disconnected, but
    there is always a chance that it cannot, and this will lead to rollback segment
    corruption.
    fix:
    The DML and the user that issued it can be determined from the trace file.
    The current DML is in the trace file header section. The user can be found in
    the process state dump of the trace. The process state shows the machine,
    OS user, and database user for the session.
    The DBA can use this information to determine what the user was doing at the
    time and whether there was an ungraceful exit from the session the user was
    using.
    The DBA should then address the cause of the ungraceful exit to reduce the
    possibility of recurrence.
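    The same attributes the note extracts from the trace file (machine, OS user, database user) can also be cross-checked against the currently connected sessions; a minimal sketch:

    -- User sessions with the identifying attributes mentioned in the note above.
    SELECT sid, serial#, username, osuser, machine, program, status
    FROM   v$session
    WHERE  type = 'USER'
    ORDER  BY machine, osuser;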

  • Separate alert log for every day

    Hi,
    Apart from the regular alert_SID.log file, can I configure the alert log as something like:
    alert_SID_StartDateAndTime_EndDateAndTime.log
    I mean, can I configure a separate alert log for 12.00 AM to 11.59 PM every day? Because:
    1. It will be easy to have a separate alert log, so that if I have to send it, I can.
    2. It will record all messages in a separate file, so that we know on which date which database actions were performed, etc.
    Thanks,

    Satish Kandi wrote:
    > You could achieve this with a simple cron job, as indicated by CKPT, to rename the existing alert.log to the required format.
    If you could show an example, it would help me to understand it (see the sketch after this reply).
    > On a side note,
    > 1. It will be easy to have a separate alert log, so that if I have to send it, I can. Where do you need to send the alert log on a daily basis?
    Sometimes the client asks for it.
    > How will you send the alert log for the last month, if required? Wouldn't that be simpler with just one file?
    Of course, sending one file is easier than sending many, but the separate files carry separate information as well.
    > 2. It will record all messages in a separate file, so that we know on which date which database actions were performed, etc. The alert log already has a timestamp included, so I don't see how this will benefit you.
    Still, if there were a dbms_separate_alert_log (an assumed new feature for Oracle 12g), I think it would be helpful to us.
    > It is a general practice to review the alert log on a daily basis and keep only the "required" (definition changes per individual) information in the file. Personally, I prefer to have a complete backup of the alert log contents in a separate file and a stripped-down version of the alert log as the current one.
    Currently I am using 10.2.0.1 on Windows XP.
    Thanks,
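    Since the poster is on Windows (so no cron), here is a minimal sketch of the same rotation idea done from inside the database with UTL_FILE and DBMS_SCHEDULER. The directory object ALERT_DIR, the job name ROTATE_ALERT_LOG and the file names are placeholders; the rename can fail if a background process happens to hold the file open at that moment, so treat this as an illustration rather than a hardened solution.

    -- Placeholder directory pointing at BACKGROUND_DUMP_DEST, where alert_<sid>.log lives.
    CREATE OR REPLACE DIRECTORY alert_dir AS 'E:\oracle\product\10.2.0\admin\SID\bdump';

    BEGIN
      DBMS_SCHEDULER.create_job(
        job_name        => 'ROTATE_ALERT_LOG',        -- hypothetical job name
        job_type        => 'PLSQL_BLOCK',
        job_action      => q'[BEGIN
                                UTL_FILE.frename('ALERT_DIR', 'alert_SID.log',
                                                 'ALERT_DIR', 'alert_SID_'
                                                   || TO_CHAR(SYSDATE, 'YYYYMMDD') || '.log');
                              END;]',
        start_date      => SYSTIMESTAMP,
        repeat_interval => 'FREQ=DAILY;BYHOUR=0;BYMINUTE=0;BYSECOND=0',  -- every midnight
        enabled         => TRUE);
    END;
    /
    -- The database recreates a fresh alert_SID.log with the next message it writes.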

  • Seeking comments about managing Oracle's Alert log

    Good morning,
    Alert logs can often be very useful.
    I am forming the opinion that it would probably be a good idea to keep a backup of the alert logs for, maybe, a 3 to 6 months sliding window.
    I would like to know the views of the forum members on the subject of alert log backups, including how long the backups (if any) are kept. Actual practice on the subject in production environments is greatly appreciated.
    Thank you for taking the time to comment on the subject,
    John.

    Hello Asif,
    > Yes, maybe you need to open an SR in Metalink and they may ask about some old incident. For this reason you need to check the old alert log content. That's why I suggest keeping the alert log for a production environment for 3/4 years.
    I had not thought about someone asking for an incident that would be over a year old, but I can see that on occasion having the records might be useful.
    > Nowadays storage is cheap, so it's not that critical to store the alert log files.
    I completely agree. I was more concerned about managing that many logs... on the other hand, they are just flat files, not that hard to manage.
    Thank you,
    John.

  • Cloud Control 12c monitoring Oracle 11g Standard Edition alert.log

    Hi guys,
    I just installed Cloud Control 12c Release 3 and added my cluster database. I have read many papers, tech docs, and tech discussions, and now I have huge confusion about packs, licensing, and other fruits. Please help...
    I have four Oracle 11g Standard Edition databases (on different hosts), and I want to monitor alert.log errors and be notified about them. For example, if the alert.log says "ORA-01438, blablabla", I want an email notification.
    I read about packs, and it says the Diagnostics Pack is needed for alert.log monitoring in Cloud Control 12c. But my databases are Standard Edition without packs, so I can't monitor the alert log!
    Question:
      Do I really need the Diagnostics Pack to monitor alert.log with Cloud Control?
      If the Diagnostics Pack is not necessary, how can I monitor alert.log?
    Thanks

    I do not think you require any pack for alert.log content monitoring, but you might want to check with your Oracle rep. I am saying so because if you go to the "Alert Log Contents" page and check which management packs it needs, Grid Control displays a message that the page does not require any pack.
    Go to the alert log content via Oracle Database -> Logs -> Alert Log Contents,
    then
    go to Setup -> Management Packs -> Packs for this Page.
    You will see the message "This page does not require any pack" displayed.
    Can you also provide the doc where it says that this page needs the Diagnostics Pack?

  • Delete content in alert.log file

    Hi,
    Presently I am working on an Oracle 10g database on Windows 2003 Server.
    The alert.log file is around 432 MB, which takes a long time to open.
    So, how can I delete content from the alert.log file to reduce its size?
    Regards,
    Vaibhav

    Hi,
    There is no need to delete the contents of the alert log.
    Just rename the alert log file, or copy it to another location and delete the old one; the database will automatically create a new alert log file with the format alert_<sid>.log.
    Note:
    No database bounce is needed.
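    To find where the current alert log lives before renaming or copying it, you can ask the instance (on 10g the file sits under BACKGROUND_DUMP_DEST):

    -- Directory that contains alert_<sid>.log on 10g.
    SELECT value
    FROM   v$parameter
    WHERE  name = 'background_dump_dest';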

  • ORA-07445 in the alert log when inserting into table with XMLType column

    I'm trying to insert an XML document into a table with a schema-based XMLType column. When I try to insert a row (using PL/SQL Developer), Oracle is busy for a few seconds and then the connection to Oracle is lost.
    Below you'll find the following to recreate the problem:
    a) contents from the alert log
    b) create script for the table
    c) the before-insert trigger
    d) the xml-schema
    e) code for registering the schema
    f) the test program
    g) platform information
    Alert Log:
    Fri Aug 17 00:44:11 2007
    Errors in file /oracle/app/oracle/product/10.2.0/db_1/admin/dntspilot2/udump/dntspilot2_ora_13807.trc:
    ORA-07445: exception encountered: core dump [SIGSEGV] [Address not mapped to object] [475177] [] [] []
    Create script for the table:
    CREATE TABLE "DNTSB"."SIGNATURETABLE"
    (     "XML_DOCUMENT" "SYS"."XMLTYPE" ,
    "TS" TIMESTAMP (6) WITH TIME ZONE NOT NULL ENABLE
    ) XMLTYPE COLUMN "XML_DOCUMENT" XMLSCHEMA "http://www.sporfori.fo/schemas/www.w3.org/TR/xmldsig-core/xmldsig-core-schema.xsd" ELEMENT "Object"
    ROWDEPENDENCIES ;
    Before-insert trigger:
    create or replace trigger BIS_SIGNATURETABLE
    before insert on signaturetable
    for each row
    declare
    -- local variables here
    l_sigtab_rec signaturetable%rowtype;
    begin
    if (:new.xml_document is not null) then
    :new.xml_document.schemavalidate();
    end if;
    l_sigtab_rec.xml_document := :new.xml_document;
    end BIS_SIGNATURETABLE;
    XML-Schema (xmldsig-core-schema.xsd):
    =====================================================================================
    <?xml version="1.0" encoding="utf-8"?>
    <!-- Schema for XML Signatures
    http://www.w3.org/2000/09/xmldsig#
    $Revision: 1.1 $ on $Date: 2002/02/08 20:32:26 $ by $Author: reagle $
    Copyright 2001 The Internet Society and W3C (Massachusetts Institute
    of Technology, Institut National de Recherche en Informatique et en
    Automatique, Keio University). All Rights Reserved.
    http://www.w3.org/Consortium/Legal/
    This document is governed by the W3C Software License [1] as described
    in the FAQ [2].
    [1] http://www.w3.org/Consortium/Legal/copyright-software-19980720
    [2] http://www.w3.org/Consortium/Legal/IPR-FAQ-20000620.html#DTD
    -->
    <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:ds="http://www.w3.org/2000/09/xmldsig#" xmlns:xdb="http://xmlns.oracle.com/xdb"
    targetNamespace="http://www.w3.org/2000/09/xmldsig#" version="0.1" elementFormDefault="qualified">
    <!-- Basic Types Defined for Signatures -->
    <xs:simpleType name="CryptoBinary">
    <xs:restriction base="xs:base64Binary">
    </xs:restriction>
    </xs:simpleType>
    <!-- Start Signature -->
    <xs:element name="Signature" type="ds:SignatureType"/>
    <xs:complexType name="SignatureType">
    <xs:sequence>
    <xs:element ref="ds:SignedInfo"/>
    <xs:element ref="ds:SignatureValue"/>
    <xs:element ref="ds:KeyInfo" minOccurs="0"/>
    <xs:element ref="ds:Object" minOccurs="0" maxOccurs="unbounded"/>
    </xs:sequence>
    <xs:attribute name="Id" type="xs:ID" use="optional"/>
    </xs:complexType>
    <xs:element name="SignatureValue" type="ds:SignatureValueType"/>
    <xs:complexType name="SignatureValueType">
    <xs:simpleContent>
    <xs:extension base="xs:base64Binary">
    <xs:attribute name="Id" type="xs:ID" use="optional"/>
    </xs:extension>
    </xs:simpleContent>
    </xs:complexType>
    <!-- Start SignedInfo -->
    <xs:element name="SignedInfo" type="ds:SignedInfoType"/>
    <xs:complexType name="SignedInfoType">
    <xs:sequence>
    <xs:element ref="ds:CanonicalizationMethod"/>
    <xs:element ref="ds:SignatureMethod"/>
    <xs:element ref="ds:Reference" maxOccurs="unbounded"/>
    </xs:sequence>
    <xs:attribute name="Id" type="xs:ID" use="optional"/>
    </xs:complexType>
    <xs:element name="CanonicalizationMethod" type="ds:CanonicalizationMethodType"/>
    <xs:complexType name="CanonicalizationMethodType" mixed="true">
    <xs:sequence>
    <xs:any namespace="##any" minOccurs="0" maxOccurs="unbounded"/>
    <!-- (0,unbounded) elements from (1,1) namespace -->
    </xs:sequence>
    <xs:attribute name="Algorithm" type="xs:anyURI" use="required"/>
    </xs:complexType>
    <xs:element name="SignatureMethod" type="ds:SignatureMethodType"/>
    <xs:complexType name="SignatureMethodType" mixed="true">
    <xs:sequence>
    <xs:element name="HMACOutputLength" minOccurs="0" type="ds:HMACOutputLengthType"/>
    <xs:any namespace="##other" minOccurs="0" maxOccurs="unbounded"/>
    <!-- (0,unbounded) elements from (1,1) external namespace -->
    </xs:sequence>
    <xs:attribute name="Algorithm" type="xs:anyURI" use="required"/>
    </xs:complexType>
    <!-- Start Reference -->
    <xs:element name="Reference" type="ds:ReferenceType"/>
    <xs:complexType name="ReferenceType">
    <xs:sequence>
    <xs:element ref="ds:Transforms" minOccurs="0"/>
    <xs:element ref="ds:DigestMethod"/>
    <xs:element ref="ds:DigestValue"/>
    </xs:sequence>
    <xs:attribute name="Id" type="xs:ID" use="optional"/>
    <xs:attribute name="URI" type="xs:anyURI" use="optional"/>
    <xs:attribute name="Type" type="xs:anyURI" use="optional"/>
    </xs:complexType>
    <xs:element name="Transforms" type="ds:TransformsType"/>
    <xs:complexType name="TransformsType">
    <xs:sequence>
    <xs:element ref="ds:Transform" maxOccurs="unbounded"/>
    </xs:sequence>
    </xs:complexType>
    <xs:element name="Transform" type="ds:TransformType"/>
    <xs:complexType name="TransformType" mixed="true">
    <xs:choice minOccurs="0" maxOccurs="unbounded">
    <xs:any namespace="##other" processContents="lax"/>
    <!-- (1,1) elements from (0,unbounded) namespaces -->
    <xs:element name="XPath" type="xs:string"/>
    </xs:choice>
    <xs:attribute name="Algorithm" type="xs:anyURI" use="required"/>
    </xs:complexType>
    <!-- End Reference -->
    <xs:element name="DigestMethod" type="ds:DigestMethodType"/>
    <xs:complexType name="DigestMethodType" mixed="true">
    <xs:sequence>
    <xs:any namespace="##other" processContents="lax" minOccurs="0" maxOccurs="unbounded"/>
    </xs:sequence>
    <xs:attribute name="Algorithm" type="xs:anyURI" use="required"/>
    </xs:complexType>
    <xs:element name="DigestValue" type="ds:DigestValueType"/>
    <xs:simpleType name="DigestValueType">
    <xs:restriction base="xs:base64Binary"/>
    </xs:simpleType>
    <!-- End SignedInfo -->
    <!-- Start KeyInfo -->
    <xs:element name="KeyInfo" type="ds:KeyInfoType"/>
    <xs:complexType name="KeyInfoType" mixed="true">
    <xs:choice maxOccurs="unbounded">
    <xs:element ref="ds:KeyName"/>
    <xs:element ref="ds:KeyValue"/>
    <xs:element ref="ds:RetrievalMethod"/>
    <xs:element ref="ds:X509Data"/>
    <xs:element ref="ds:PGPData"/>
    <xs:element ref="ds:SPKIData"/>
    <xs:element ref="ds:MgmtData"/>
    <xs:any processContents="lax" namespace="##other"/>
    <!-- (1,1) elements from (0,unbounded) namespaces -->
    </xs:choice>
    <xs:attribute name="Id" type="xs:ID" use="optional"/>
    </xs:complexType>
    <xs:element name="KeyName" type="xs:string"/>
    <xs:element name="MgmtData" type="xs:string"/>
    <xs:element name="KeyValue" type="ds:KeyValueType"/>
    <xs:complexType name="KeyValueType" mixed="true">
    <xs:choice>
    <xs:element ref="ds:DSAKeyValue"/>
    <xs:element ref="ds:RSAKeyValue"/>
    <xs:any namespace="##other" processContents="lax"/>
    </xs:choice>
    </xs:complexType>
    <xs:element name="RetrievalMethod" type="ds:RetrievalMethodType"/>
    <xs:complexType name="RetrievalMethodType">
    <xs:sequence>
    <xs:element ref="ds:Transforms" minOccurs="0"/>
    </xs:sequence>
    <xs:attribute name="URI" type="xs:anyURI"/>
    <xs:attribute name="Type" type="xs:anyURI" use="optional"/>
    </xs:complexType>
    <!-- Start X509Data -->
    <xs:element name="X509Data" type="ds:X509DataType"/>
    <xs:complexType name="X509DataType">
    <xs:sequence maxOccurs="unbounded">
    <xs:choice>
    <xs:element name="X509IssuerSerial" type="ds:X509IssuerSerialType"/>
    <xs:element name="X509SKI" type="xs:base64Binary"/>
    <xs:element name="X509SubjectName" type="xs:string"/>
    <xs:element name="X509Certificate" type="xs:base64Binary"/>
    <xs:element name="X509CRL" type="xs:base64Binary"/>
    <xs:any namespace="##other" processContents="lax"/>
    </xs:choice>
    </xs:sequence>
    </xs:complexType>
    <xs:complexType name="X509IssuerSerialType">
    <xs:sequence>
    <xs:element name="X509IssuerName" type="xs:string"/>
    <xs:element name="X509SerialNumber" type="xs:integer"/>
    </xs:sequence>
    </xs:complexType>
    <!-- End X509Data -->
    <!-- Begin PGPData -->
    <xs:element name="PGPData" type="ds:PGPDataType"/>
    <xs:complexType name="PGPDataType">
    <xs:choice>
    <xs:sequence>
    <xs:element name="PGPKeyID" type="xs:base64Binary"/>
    <xs:element name="PGPKeyPacket" type="xs:base64Binary" minOccurs="0"/>
    <xs:any namespace="##other" processContents="lax" minOccurs="0"
    maxOccurs="unbounded"/>
    </xs:sequence>
    <xs:sequence>
    <xs:element name="PGPKeyPacket" type="xs:base64Binary"/>
    <xs:any namespace="##other" processContents="lax" minOccurs="0"
    maxOccurs="unbounded"/>
    </xs:sequence>
    </xs:choice>
    </xs:complexType>
    <!-- End PGPData -->
    <!-- Begin SPKIData -->
    <xs:element name="SPKIData" type="ds:SPKIDataType"/>
    <xs:complexType name="SPKIDataType">
    <xs:sequence maxOccurs="unbounded">
    <xs:element name="SPKISexp" type="xs:base64Binary"/>
    <xs:any namespace="##other" processContents="lax" minOccurs="0"/>
    </xs:sequence>
    </xs:complexType>
    <!-- End SPKIData -->
    <!-- End KeyInfo -->
    <!-- Start Object (Manifest, SignatureProperty) -->
    <xs:element name="Object" type="ds:ObjectType"/>
    <xs:complexType name="ObjectType" mixed="true">
    <xs:sequence minOccurs="0" maxOccurs="unbounded">
    <xs:any namespace="##any" processContents="lax"/>
    </xs:sequence>
    <xs:attribute name="Id" type="xs:ID" use="optional"/>
    <xs:attribute name="MimeType" type="xs:string" use="optional"/> <!-- add a grep facet -->
    <xs:attribute name="Encoding" type="xs:anyURI" use="optional"/>
    </xs:complexType>
    <xs:element name="Manifest" type="ds:ManifestType"/>
    <xs:complexType name="ManifestType">
    <xs:sequence>
    <xs:element ref="ds:Reference" maxOccurs="unbounded"/>
    </xs:sequence>
    <xs:attribute name="Id" type="xs:ID" use="optional"/>
    </xs:complexType>
    <xs:element name="SignatureProperties" type="ds:SignaturePropertiesType"/>
    <xs:complexType name="SignaturePropertiesType">
    <xs:sequence>
    <xs:element ref="ds:SignatureProperty" maxOccurs="unbounded"/>
    </xs:sequence>
    <xs:attribute name="Id" type="xs:ID" use="optional"/>
    </xs:complexType>
    <xs:element name="SignatureProperty" type="ds:SignaturePropertyType"/>
    <xs:complexType name="SignaturePropertyType" mixed="true">
    <xs:choice maxOccurs="unbounded">
    <xs:any namespace="##other" processContents="lax"/>
    <!-- (1,1) elements from (1,unbounded) namespaces -->
    </xs:choice>
    <xs:attribute name="Target" type="xs:anyURI" use="required"/>
    <xs:attribute name="Id" type="xs:ID" use="optional"/>
    </xs:complexType>
    <!-- End Object (Manifest, SignatureProperty) -->
    <!-- Start Algorithm Parameters -->
    <xs:simpleType name="HMACOutputLengthType">
    <xs:restriction base="xs:integer"/>
    </xs:simpleType>
    <!-- Start KeyValue Element-types -->
    <xs:element name="DSAKeyValue" type="ds:DSAKeyValueType"/>
    <xs:complexType name="DSAKeyValueType">
    <xs:sequence>
    <xs:sequence minOccurs="0">
    <xs:element name="P" type="ds:CryptoBinary"/>
    <xs:element name="Q" type="ds:CryptoBinary"/>
    </xs:sequence>
    <xs:element name="G" type="ds:CryptoBinary" minOccurs="0"/>
    <xs:element name="Y" type="ds:CryptoBinary"/>
    <xs:element name="J" type="ds:CryptoBinary" minOccurs="0"/>
    <xs:sequence minOccurs="0">
    <xs:element name="Seed" type="ds:CryptoBinary"/>
    <xs:element name="PgenCounter" type="ds:CryptoBinary"/>
    </xs:sequence>
    </xs:sequence>
    </xs:complexType>
    <xs:element name="RSAKeyValue" type="ds:RSAKeyValueType"/>
    <xs:complexType name="RSAKeyValueType">
    <xs:sequence>
    <xs:element name="Modulus" type="ds:CryptoBinary"/>
    <xs:element name="Exponent" type="ds:CryptoBinary"/>
    </xs:sequence>
    </xs:complexType>
    <!-- End KeyValue Element-types -->
    <!-- End Signature -->
    </xs:schema>
    ===============================================================================
    Code for registering the xml-schema
    begin
    dbms_xmlschema.deleteSchema('http://xmlns.oracle.com/xdb/schemas/DNTSB/www.sporfori.fo/schemas/www.w3.org/TR/xmldsig-core/xmldsig-core-schema.xsd',
    dbms_xmlschema.DELETE_CASCADE_FORCE);
    end;
    begin
    DBMS_XMLSCHEMA.REGISTERURI(
    schemaurl => 'http://www.sporfori.fo/schemas/www.w3.org/TR/xmldsig-core/xmldsig-core-schema.xsd',
    schemadocuri => 'http://www.sporfori.fo/schemas/www.w3.org/TR/xmldsig-core/xmldsig-core-schema.xsd',
    local => TRUE,
    gentypes => TRUE,
    genbean => FALSE,
    gentables => TRUE,
    force => FALSE,
    owner => 'DNTSB',
    options => 0);
    end;
    Test program
    -- Created on 17-07-2006 by EEJ
    declare
    XML_TEXT3 CLOB := '<Object xmlns="http://www.w3.org/2000/09/xmldsig#">
                                  <SignatureProperties>
                                       <SignatureProperty Target="">
                                            <Timestamp xmlns="http://www.sporfori.fo/schemas/dnts/general/2006/11/14">2007-05-10T12:00:00-05:00</Timestamp>
                                       </SignatureProperty>
                                  </SignatureProperties>
                             </Object>';
    xmldoc xmltype;
    begin
    xmldoc := xmltype(xml_text3);
    insert into signaturetable
    (xml_document, ts)
    values
    (xmldoc, current_timestamp);
    end;
    Platform information
    Operating system:
    -bash-3.00$ uname -a
    SunOS dntsdb 5.10 Generic_125101-09 i86pc i386 i86pc
    SQLPlus:
    SQL*Plus: Release 10.2.0.3.0 - Production on Fri Aug 17 00:15:13 2007
    Copyright (c) 1982, 2006, Oracle. All Rights Reserved.
    Enter password:
    Connected to:
    Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Production
    With the Partitioning and Data Mining options
    Kind Regards,
    Eyðun

    You should report this in a service request on http://metalink.oracle.com.
    It is a shame that you put all this effort into describing your problem here, but on the other hand you can now also copy & paste the question to Oracle Support.
    Because you are using 10.2.0.3, I am guessing that you have a valid service contract...

  • Oracle errors in alert.log

    Hello guys,
    I have the following Oracle Release 10.2.0.1.0 server configuration:
    - Windows 2003 Server SP2
    - 3.25 GB RAM (I think it is 4 GB, but this is Windows 2003 Standard Edition)
    - 2 Intel Xeon quad-core CPUs
    - 3 disk partitions (C -> 40 GB, 25 GB free / E -> 100 GB, 85 GB free / F -> 300 GB, 270 GB free)
    The PRISM instance configuration is:
    Archivelog: TRUE
    SGA Size: 584 MB
    Actual PGA Size: 40 MB
    Services running (OracleDBConsolePRISM, OracleOraDb10g_home1TNSListener, OracleServicePRISM)
    The problem is that only 5 users use this database, but there are a lot of errors in the alert.log that make the database unavailable. The error messages are like this:
    Mon Feb 16 16:31:24 2009
    Errors in file e:\oracle\product\10.2.0\admin\prism\bdump\prism_j001_11416.trc:
    ORA-12012: error on auto execute of job 42567
    ORA-00376: file 3 cannot be read at this time
    ORA-01110: data file 3: 'E:\ORACLE\PRODUCT\10.2.0\ORADATA\PRISM\SYSAUX01.DBF'
    ORA-06512: at "EXFSYS.DBMS_RLMGR_DR", line 15
    ORA-06512: at line 1
    The content of prism_j001_11416.trc is:
    Dump file e:\oracle\product\10.2.0\admin\prism\bdump\prism_mmon_4620.trc
    Mon Feb 16 11:07:33 2009
    ORACLE V10.2.0.1.0 - Production vsnsta=0
    vsnsql=14 vsnxtr=3
    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning, OLAP and Data Mining options
    Windows Server 2003 Version V5.2 Service Pack 2
    CPU                 : 8 - type 586, 2 Physical Cores
    Process Affinity    : 0x00000000
    Memory (Avail/Total): Ph:1981M/3325M, Ph+PgF:3612M/5221M, VA:1278M/2047M
    Instance name: prism
    Redo thread mounted by this instance: 1
    Oracle process number: 11
    Windows thread id: 4620, image: ORACLE.EXE (MMON)
    *** SERVICE NAME:(SYS$BACKGROUND) 2009-02-16 11:07:33.480
    *** SESSION ID:(161.1) 2009-02-16 11:07:33.480
    KEWRCTLRD: OCIStmtFetch Error. ctl_dbid= 1515633963, sga_dbid= 1515633963
    KEWRCTLRD: Retcode: -1, Error Message: ORA-00376: file 3 cannot be read at this time
    ORA-01110: data file 3: 'E:\ORACLE\PRODUCT\10.2.0\ORADATA\PRISM\SYSAUX01.DBF'
      *** SQLSTR: total-len=328, dump-len=240,
          STR={select snap_interval, retention,most_recent_snap_time, most_recent_snap_id, status_flag, most_recent_purge_time, most_recent_split_id, most_recent_split_time, mrct_snap_time_num, mrct_purge_time_num, snapint_num, retention_num, swrf_version}
    *** kewrwdbi_1: Error=13509 encountered during run_once
    keaInitAdvCache: failed, err=604
    02/16/09 11:07:33 >ERROR: exception at dbms_ha_alerts_prvt.post_instance_up308: SQLCODE -13917,ORA-13917: Posting system
    alert with reason_id 135 failed with code [5] [post_error]
    02/16/09 11:07:33 >ERROR: exception at dbms_ha_alerts_prvt.check_ha_resources637: SQLCODE -13917,ORA-13917: Posting syst
    em alert with reason_id 136 failed with code [5] [post_error]
    02/16/09 11:07:33 >parameter dump for dbms_ha_alerts_prvt.check_ha_resources
    02/16/09 11:07:33 > - local_db_unique_name (PRISM)
    02/16/09 11:07:33 > - local_db_domain (==N/A==)
    02/16/09 11:07:33 > - rows deleted (0)
    02/16/09 11:07:33 >ERROR: exception at dbms_ha_alerts_prvt.check_ha_resources637: SQLCODE -13917,ORA-13917: Posting syst
    em alert with reason_id 136 failed with code [5] [post_error]
    02/16/09 11:07:33 >parameter dump for dbms_ha_alerts_prvt.check_ha_resources
    02/16/09 11:07:33 > - local_db_unique_name (PRISM)
    02/16/09 11:07:33 > - local_db_domain (==N/A==)
    02/16/09 11:07:33 > - rows deleted (0)
    *** 2009-02-16 11:07:41.293
    ****KELR Apply Log Failed, return code 376
    *** 2009-02-16 11:08:35.294
    ****KELR Apply Log Failed, return code 376
    *** 2009-02-16 11:09:38.294
    ****KELR Apply Log Failed, return code 376
    *** 2009-02-16 11:10:41.295
    ****KELR Apply Log Failed, return code 376
    *** 2009-02-16 11:11:44.296
    ****KELR Apply Log Failed, return code 376
    *** 2009-02-16 11:12:29.328
    And this is an extract from listener.log file:
    TNSLSNR for 32-bit Windows: Version 10.2.0.1.0 - Production on 16-FEB-2009 11:05:44
    Copyright (c) 1991, 2005, Oracle.  All rights reserved.
    System parameter file is e:\oracle\product\10.2.0\db_1\network\admin\listener.ora
    Log messages written to e:\oracle\product\10.2.0\db_1\network\log\listener.log
    Trace information written to e:\oracle\product\10.2.0\db_1\network\trace\listener.trc
    Trace level is currently 0
    Started with pid=9212
    Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=akscl-mfs15.am.enterdir.com)(PORT=1521)))
    Listener completed notification to CRS on start
    TIMESTAMP * CONNECT DATA [* PROTOCOL INFO] * EVENT [* SID] * RETURN CODE
    16-FEB-2009 11:07:25 * service_register * prism * 0
    16-FEB-2009 11:07:31 * service_update * prism * 0
    16-FEB-2009 11:07:34 * service_update * prism * 0
    16-FEB-2009 11:07:37 * service_update * prism * 0
    16-FEB-2009 11:07:57 * (CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=PRISM)(CID=(PROGRAM=\\akscl-mfs15\PRISMPM\PRISMPM.EXE)(HOST=xx-W20972)(USER=CSC3157))) * (ADDRESS=(PROTOCOL=tcp)(HOST=144.180.225.7)(PORT=1607)) * establish * PRISM * 0
    16-FEB-2009 11:07:58 * service_update * prism * 0
    16-FEB-2009 11:07:58 * (CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=PRISM)(CID=(PROGRAM=\\akscl-mfs15\PRISMPM\PRISMPM.EXE)(HOST=xx-W20972)(USER=CSC3157))) * (ADDRESS=(PROTOCOL=tcp)(HOST=144.180.225.7)(PORT=1608)) * establish * PRISM * 0
    16-FEB-2009 11:07:58 * (CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=PRISM)(CID=(PROGRAM=\\akscl-mfs15\PRISMPM\PRISMPM.EXE)(HOST=xx-W20972)(USER=CSC3157))) * (ADDRESS=(PROTOCOL=tcp)(HOST=144.180.225.7)(PORT=1612)) * establish * PRISM * 0
    16-FEB-2009 11:07:59 * (CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=PRISM)(CID=(PROGRAM=\\akscl-mfs15\PRISMPM\PRISMPM.EXE)(HOST=xx-W20972)(USER=CSC3157))) * (ADDRESS=(PROTOCOL=tcp)(HOST=144.180.225.7)(PORT=1613)) * establish * PRISM * 0
    16-FEB-2009 11:07:59 * (CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=PRISM)(CID=(PROGRAM=\\akscl-mfs15\PRISMPM\PRISMPM.EXE)(HOST=xx-W20972)(USER=CSC3157))) * (ADDRESS=(PROTOCOL=tcp)(HOST=144.180.225.7)(PORT=1614)) * establish * PRISM * 0
    16-FEB-2009 11:08:01 * service_update * prism * 0
    16-FEB-2009 15:05:26 * (CONNECT_DATA=(SERVICE_NAME=PRISM)(CID=(PROGRAM=C:\Program Files\Microsoft Office\OFFICE11\EXCEL.EXE)(HOST=xx-W21498)(USER=csc2682))) * (ADDRESS=(PROTOCOL=tcp)(HOST=144.180.227.131)(PORT=1999)) * establish * PRISM * 0
    16-FEB-2009 15:05:27 * (CONNECT_DATA=(SERVICE_NAME=PRISM)(CID=(PROGRAM=C:\Program Files\Microsoft Office\OFFICE11\EXCEL.EXE)(HOST=xx-W21498)(USER=csc2682))) * (ADDRESS=(PROTOCOL=tcp)(HOST=144.180.227.131)(PORT=2000)) * establish * PRISM * 0
    16-FEB-2009 17:33:26 * (CONNECT_DATA=(CID=(PROGRAM=)(HOST=__jdbc__)(USER=))(SERVICE_NAME=PRISM)) * (ADDRESS=(PROTOCOL=tcp)(HOST=144.180.227.18)(PORT=2513)) * establish * PRISM * 0
    16-FEB-2009 17:33:26 * (CONNECT_DATA=(CID=(PROGRAM=)(HOST=__jdbc__)(USER=))(SERVICE_NAME=PRISM)) * (ADDRESS=(PROTOCOL=tcp)(HOST=144.180.227.18)(PORT=2514)) * establish * PRISM * 0
    16-FEB-2009 17:33:27 * (CONNECT_DATA=(CID=(PROGRAM=)(HOST=__jdbc__)(USER=))(SERVICE_NAME=PRISM)) * (ADDRESS=(PROTOCOL=tcp)(HOST=144.180.227.18)(PORT=2515)) * establish * PRISM * 0
    16-FEB-2009 17:33:27 * (CONNECT_DATA=(CID=(PROGRAM=)(HOST=__jdbc__)(USER=))(SERVICE_NAME=PRISM)) * (ADDRESS=(PROTOCOL=tcp)(HOST=144.180.227.18)(PORT=2516)) * establish * PRISM * 0
    16-FEB-2009 17:33:27 * (CONNECT_DATA=(CID=(PROGRAM=)(HOST=__jdbc__)(USER=))(SERVICE_NAME=PRISM)) * (ADDRESS=(PROTOCOL=tcp)(HOST=144.180.227.18)(PORT=2517)) * establish * PRISM * 0
    16-FEB-2009 17:33:27 * (CONNECT_DATA=(CID=(PROGRAM=)(HOST=__jdbc__)(USER=))(SERVICE_NAME=PRISM)) * (ADDRESS=(PROTOCOL=tcp)(HOST=144.180.227.18)(PORT=2518)) * establish * PRISM * 0
    16-FEB-2009 17:33:27 * (CONNECT_DATA=(CID=(PROGRAM=)(HOST=__jdbc__)(USER=))(SERVICE_NAME=PRISM)) * (ADDRESS=(PROTOCOL=tcp)(HOST=144.180.227.18)(PORT=2519)) * establish * PRISM * 0
    16-FEB-2009 17:33:27 * (CONNECT_DATA=(CID=(PROGRAM=)(HOST=__jdbc__)(USER=))(SERVICE_NAME=PRISM)) * (ADDRESS=(PROTOCOL=tcp)(HOST=144.180.227.18)(PORT=2520)) * establish * PRISM * 0
    16-FEB-2009 17:33:27 * (CONNECT_DATA=(CID=(PROGRAM=)(HOST=__jdbc__)(USER=))(SERVICE_NAME=PRISM)) * (ADDRESS=(PROTOCOL=tcp)(HOST=144.180.227.18)(PORT=2521)) * establish * PRISM * 0
    16-FEB-2009 17:33:27 * (CONNECT_DATA=(CID=(PROGRAM=)(HOST=__jdbc__)(USER=))(SERVICE_NAME=PRISM)) * (ADDRESS=(PROTOCOL=tcp)(HOST=144.180.227.18)(PORT=2523)) * establish * PRISM * 0
    16-FEB-2009 17:33:27 * (CONNECT_DATA=(CID=(PROGRAM=)(HOST=__jdbc__)(USER=))(SERVICE_NAME=PRISM)) * (ADDRESS=(PROTOCOL=tcp)(HOST=144.180.227.18)(PORT=2524)) * establish * PRISM * 0
    16-FEB-2009 17:33:27 * (CONNECT_DATA=(CID=(PROGRAM=)(HOST=__jdbc__)(USER=))(SERVICE_NAME=PRISM)) * (ADDRESS=(PROTOCOL=tcp)(HOST=144.180.227.18)(PORT=2525)) * establish * PRISM * 0
    16-FEB-2009 17:33:27 * (CONNECT_DATA=(CID=(PROGRAM=)(HOST=__jdbc__)(USER=))(SERVICE_NAME=PRISM)) * (ADDRESS=(PROTOCOL=tcp)(HOST=144.180.227.18)(PORT=2526)) * establish * PRISM * 0
    16-FEB-2009 17:33:27 * (CONNECT_DATA=(CID=(PROGRAM=)(HOST=__jdbc__)(USER=))(SERVICE_NAME=PRISM)) * (ADDRESS=(PROTOCOL=tcp)(HOST=144.180.227.18)(PORT=2527)) * establish * PRISM * 0
    16-FEB-2009 17:33:27 * (CONNECT_DATA=(CID=(PROGRAM=)(HOST=__jdbc__)(USER=))(SERVICE_NAME=PRISM)) * (ADDRESS=(PROTOCOL=tcp)(HOST=144.180.227.18)(PORT=2528)) * establish * PRISM * 0
    16-FEB-2009 17:33:27 * (CONNECT_DATA=(CID=(PROGRAM=)(HOST=__jdbc__)(USER=))(SERVICE_NAME=PRISM)) * (ADDRESS=(PROTOCOL=tcp)(HOST=144.180.227.18)(PORT=2529)) * establish * PRISM * 0
    16-FEB-2009 17:33:27 * (CONNECT_DATA=(CID=(PROGRAM=)(HOST=__jdbc__)(USER=))(SERVICE_NAME=PRISM)) * (ADDRESS=(PROTOCOL=tcp)(HOST=144.180.227.18)(PORT=2530)) * establish * PRISM * 0
    16-FEB-2009 17:33:29 * service_update * prism * 0
    Please, I need help to solve this issue. Best regards.

    Hello guys,
    Thanks for the tips. After checking the tablespaces, the results are:
    SQL> select tablespace_name,status from dba_tablespaces;
    TABLESPACE_NAME                STATUS
    SYSTEM                         ONLINE
    UNDOTBS1                       ONLINE
    SYSAUX                         ONLINE
    TEMP                           ONLINE
    USERS                          ONLINE
    PRISM                          ONLINE
    The datafiles:
    SQL> select file#,name,status,enabled from v$datafile;
    FILE#  NAME                                                   STATUS   ENABLED
        1  E:\ORACLE\PRODUCT\10.2.0\ORADATA\PRISM\SYSTEM01.DBF    SYSTEM   READ WRITE
        2  E:\ORACLE\PRODUCT\10.2.0\ORADATA\PRISM\UNDOTBS01.DBF   ONLINE   READ WRITE
        3  E:\ORACLE\PRODUCT\10.2.0\ORADATA\PRISM\SYSAUX01.DBF    RECOVER  READ WRITE
        4  E:\ORACLE\PRODUCT\10.2.0\ORADATA\PRISM\USERS01.DBF     ONLINE   READ WRITE
        5  F:\ORACLE\PRISM\PRISM.ORA                              ONLINE   READ WRITE
    So, following the Metalink guide:
    SQL> recover datafile 'E:\ORACLE\PRODUCT\10.2.0\ORADATA\PRISM\SYSAUX01.DBF';  --> Here the system asks for the log file; I select AUTO
    SQL> alter database datafile 'E:\ORACLE\PRODUCT\10.2.0\ORADATA\PRISM\SYSAUX01.DBF' online;
    Now the datafiles are all ONLINE. I will wait some time to check the database behavior after this change and come back with the results and the correct answer. Best regards.
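    A quick way to confirm which datafiles still need media recovery, both before and after the RECOVER step, is the v$recover_file view; a small sketch:

    -- Datafiles that currently require recovery, with the reason.
    SELECT file#, online_status, error, change#, time
    FROM   v$recover_file;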

  • RMAN success, but errors in alert.log file

    My RMAN backup script runs well, but it generates errors in the alert.log file.
    Here are the trace file contents:
    /usr/lib/oracle/xe/app/oracle/admin/XE/udump/xe_ora_3990.trc
    Oracle Database 10g Express Edition Release 10.2.0.1.0 - Production
    ORACLE_HOME = /usr/lib/oracle/xe/app/oracle/product/10.2.0/server
    System name: Linux
    Node name: plockton
    Release: 2.6.18-128.2.1.el5
    Version: #1 SMP Wed Jul 8 11:54:54 EDT 2009
    Machine: i686
    Instance name: XE
    Redo thread mounted by this instance: 1
    Oracle process number: 26
    Unix process pid: 3990, image: oracle@plockton (TNS V1-V3)
    *** 2009-07-23 23:05:01.835
    *** ACTION NAME:(0000025 STARTED111) 2009-07-23 23:05:01.823
    *** MODULE NAME:(backup full datafile) 2009-07-23 23:05:01.823
    *** SERVICE NAME:(SYS$USERS) 2009-07-23 23:05:01.823
    *** SESSION ID:(33.154) 2009-07-23 23:05:01.823
    *** 2009-07-23 23:05:18.689
    *** ACTION NAME:(0000045 STARTED111) 2009-07-23 23:05:18.689
    *** MODULE NAME:(backup archivelog) 2009-07-23 23:05:18.689
    Does anyone know why? Thanks.
    Richard

    I'm not sure if this will answer your question or not, but I believe these messages can likely be ignored.
    I'm currently running 10.2.0.1.0 Enterprise Edition in pre-production (yes, I know I should apply the latest patchset, and I plan to do so as soon as I get a development box allocated to me and can test its impact). I see the same types of messages that you've reported with each of my regularly scheduled backups:
    a) The alert_<$SID>.log reports that there are errors in trace files:
    Mon Aug 10 04:33:49 2009
    Starting control autobackup
    Mon Aug 10 04:33:50 2009
    Errors in file /opt/oracle/admin/blah/udump/blah_ora_32520.trc:
    Mon Aug 10 04:33:50 2009
    Errors in file /opt/oracle/admin/blah/udump/blah_ora_32520.trc:
    Mon Aug 10 04:33:50 2009
    Errors in file /opt/oracle/admin/blah/udump/blah_ora_32520.trc:
    Control autobackup written to DISK device
    handle '/backup/physical/BLAH/RMAN/cf_c-2740124895-20090810-00'
    b) The .trc files, when you look at them contain no errors - only these "informational" messages:
    *** 2009-08-10 04:33:50.781
    *** ACTION NAME:(0000105 STARTED111) 2009-08-10 04:33:50.754
    *** MODULE NAME:(backup archivelog) 2009-08-10 04:33:50.754
    *** SERVICE NAME:(SYS$USERS) 2009-08-10 04:33:50.754
    *** SESSION ID:(126.28030) 2009-08-10 04:33:50.754
    c) I've verified that LOG_ARCHIVE_TRACE is set to 0:
    SQL*Plus> show parameter log_archive_trace
    NAME                TYPE     VALUE
    log_archive_trace   integer  0
    As best I can discern from my own experience, these messages should just be ignored, and I trust (read: "hope") they will simply go away once the latest patchset is applied. As for you, running Oracle XE, a patchset is unfortunately not an option.
    V/R
    -Eric

  • Question regarding alert log file and trace files

    What should the alert log file size be? When should it be deleted? And for how many days should user trace files be kept?
    Also, will anyone please tell me the importance of these files?
    Thanks

    This may help: http://download-east.oracle.com/docs/cd/B19306_01/server.102/b14231/manproc.htm#sthref729
    There are a few discussions on it here:
    Re: Alert Log File
    alert log file contents viewing
    Re: how to read alert log file? is there any tool available?

  • EM12CR3 - Alert Log Errors page - empty

    What controls the content shown in Oracle Database -> Logs -> Alert Log Errors?
    The page shows up empty, but if we look in the actual alert log we can clearly see ORA-600 errors there. In fact, if we use the more generic "show alert log" view and search for errors, they show up there just fine.
    Oct 10, 2013 3:04:21 PM UTC
    ERROR
    8
    https://oem03cnc.rim.net:7803/em/console/database/instance/adrAlertLogContent?target=edwp_edwp7&type=oracle_database
    https://oem03cnct:7803/em/console/database/instance/adrAlertLogContent?target=edwp_edwp7&type=oracle_database
    https://oem03cnc:7803/em/console/database/instance/adrAlertLogContent?target=edwp_edwp7&type=oracle_database
    ami_comp
    dbgripsto_sweep_staged_obj:15736:70631439
    Sweep [inc2][672054]: completed
    Oct 10, 2013 3:04:21 PM UTC
    ERROR
    8
    https://oem03cnc.rim.net:7803/em/console/database/instance/adrAlertLogContent?target=edwp_edwp7&type=oracle_database
    https://oem03cnct:7803/em/console/database/instance/adrAlertLogContent?target=edwp_edwp7&type=oracle_database
    https://oem03cnc:7803/em/console/database/instance/adrAlertLogContent?target=edwp_edwp7&type=oracle_database
    ami_comp
    dbgripsto_sweep_staged_obj:15736:70631439
    Sweep [inc][672054]: completed
    Thoughts?

    This alert should show up:
    <msg time='2013-11-02T23:37:18.436+00:00' org_id='oracle' comp_id='rdbms'
    msg_id='684699380' type='INCIDENT_ERROR' group='Generic Internal Error'
    level='1' host_id='edwp07' host_addr='10.65.65.189'
    prob_key='ORA 600 [kghstack_free1]' upstream_comp='' downstream_comp='KGH'
    ecid='' errid='714327' detail_path='/u01/app/oracle/diag/rdbms/edwp/edwp7/trace/edwp7_bmr0_52817.trc'>
    <txt>Errors in file /u01/app/oracle/diag/rdbms/edwp/edwp7/trace/edwp7_bmr0_52817.trc  (incident=714327):
    ORA-00600: internal error code, arguments: [kghstack_free1], [iobuf_krbabrAddBlock], [], [], [], [], [], [], [], [], [], []
    </txt>
    </msg>

  • Oracle 9i (SE) Solaris 9 (Alert.log file)

    Hi guys,
    How do I clean up my alert.log file? Do I have to delete it completely, or only delete the content of the file?
    Thank you.

    Laura Gaigala wrote:
    > Why do you want to clean it up? Is it too large, or are there other reasons?
    > If you have a planned database restart, then during the restart rename the alert log, and upon startup it will be created automatically again.
    > If no restart is planned, then delete its content, but before that make a copy of alert.log, just in case you need to find something in it.
    Actually, I've found that it is quite safe to rename the alert.log right out from under a running database. It will simply start a new one with the next message to be written. However, I agree it is much nicer to do this housekeeping while the database is down, so that the first entry in an alert log is the startup.
    This renaming of the log against a running process does not work for the listener log. I've found that if I rename listener.log to listener.log.1, the listener will continue to log to the original file (now listener.log.1) until such time as it is restarted. THEN it will start a new listener.log.
    This was all on Unix. Windows may behave differently.

  • Urgent Database Down - alert log file

    Hi Gurus,
    This is the alert log file. This morning the production database went down; can anyone please let me know the problem area?
    Thread 1 advanced to log sequence 17563
    Current log# 4 seq# 17563 mem# 0: E:\ORACLE\ORADATA\PNLDB\REDO04.LOG
    Thu Jan 26 08:12:56 2006
    KCF: write/open error block=0x7f0bc online=1
    file=12 F:\ORACLE\ORADATA\PNLDB\USERS02.DBF
    error=27069 txt: 'OSD-04026: Invalid parameter passed. (OS 520380)'
    Thu Jan 26 08:12:57 2006
    Errors in file f:\oracle\admin\pnldb\udump\pnldb_ora_1420.trc:
    ORA-01115: IO error reading block from file 12 (block # 517514)
    ORA-01110: data file 12: 'F:\ORACLE\ORADATA\PNLDB\USERS02.DBF'
    ORA-27091: skgfqio: unable to queue I/O
    ORA-27069: skgfdisp: attempt to do I/O beyond the range of the file
    OSD-04026: Invalid parameter passed. (OS 517513)
    ORA-01115: IO error reading block from file 12 (block # 132287)
    ORA-01110: data file 12: 'F:\ORACLE\ORADATA\PNLDB\USERS02.DBF'
    ORA-27091: skgfqio: unable to queue I/O
    ORA-27069: skgfdisp: attempt to do I/O beyond the range of the file
    OSD-04026: Invalid parameter passed. (OS 132286)
    Thu Jan 26 08:12:57 2006
    Errors in file f:\oracle\admin\pnldb\bdump\pnldb_dbw0_956.trc:
    ORA-01242: data file suffered media failure: database in NOARCHIVELOG mode
    ORA-01114: IO error writing block to file 12 (block # 520380)
    ORA-01110: data file 12: 'F:\ORACLE\ORADATA\PNLDB\USERS02.DBF'
    ORA-27069: skgfdisp: attempt to do I/O beyond the range of the file
    OSD-04026: Invalid parameter passed. (OS 520380)
    DBW0: terminating instance due to error 1242
    Thu Jan 26 08:12:59 2006
    Errors in file f:\oracle\admin\pnldb\udump\pnldb_ora_1420.trc:
    ORA-00603: ORACLE server session terminated by fatal error
    ORA-01115: IO error reading block from file 12 (block # 517514)
    ORA-01110: data file 12: 'F:\ORACLE\ORADATA\PNLDB\USERS02.DBF'
    ORA-27091: skgfqio: unable to queue I/O
    ORA-27069: skgfdisp: attempt to do I/O beyond the range of the file
    OSD-04026: Invalid parameter passed. (OS 517513)
    ORA-01115: IO error reading block from file 12 (block # 132287)
    ORA-01110: data file 12: 'F:\ORACLE\ORADATA\PNLDB\USERS02.DBF'
    ORA-27091: skgfqio: unable to queue I/O
    ORA-27069: skgfdisp: attempt to do I/O beyond the range of the file
    OSD-04026: Invalid parameter passed. (OS 132286)
    Thu Jan 26 08:12:59 2006
    Errors in file f:\oracle\admin\pnldb\bdump\pnldb_pmon_3804.trc:
    ORA-01242: data file suffered media failure: database in NOARCHIVELOG mode
    Instance terminated by DBW0, pid = 956
    Thanks
    Ravi

    You are having problems on drives E: and F: - possibly hardware problems.
    (Hopefully these disks are not network drives. If they are, your problem might be network contention. Network-mounted drives are never a good idea for databases.)
    If this is truly urgent, then I encourage you to use your Oracle Support contract to open a service request for assistance.

  • "checkpoint not complete" in alert log file.

    Hi, all.
    I got a "Checkpoint not complete" message in the alert log file.
    Thread 2 advanced to log sequence 531
    Current log# 7 seq# 531 mem# 0: \\.\REDO231
    Current log# 7 seq# 531 mem# 1: \\.\REDO232
    Thread 2 cannot allocate new log, sequence 532
    Checkpoint not complete
    Current log# 7 seq# 531 mem# 0: \\.\REDO231
    Current log# 7 seq# 531 mem# 1: \\.\REDO232
    I searched "Checkpoint not complete" issue in this forum.
    As solutions,
    1. add more redo log groups
    2. increase the size of redo log
    3. check I/O contention
    4. set LOG_CHECKPOINT_INTERVAL, LOG_CHECKPOINT_TIMEOUT or
    FAST_START_MTTR_TARGET
    I think No.4 is the possible first approach in our environment.
    I think No.1 and No2 are not the ploblems in our environment.
    I ask the above issue oracle support center, but
    I was told that "if you are not getting this message frequently, you do not need to worry about it" from an oracle engineer.
    Is this true?? If I am not getting this message frequently, there is no problem
    in terms of database integrity, consistency, and performance?
    I will be waiting for your advice and experience in real life.
    Thanks and Regards.

    Redo Log Tuning Advisory and Automatic Checkpoint Tuning are new features introduced in Oracle 10g; if you are on 10g you may benefit from them.
    The size of the redo log files can influence performance, because the behavior of the database writer and archiver processes depends on the redo log sizes. Generally, larger redo log files provide better performance, but this must be balanced against the expected recovery time. Undersized log files increase checkpoint activity and increase CPU usage. As a rule of thumb, aim to switch logs at most once every fifteen minutes.
    Checkpoint frequency is affected by several factors, including log file size and the setting of the FAST_START_MTTR_TARGET initialization parameter. If the FAST_START_MTTR_TARGET parameter is set to limit the instance recovery time, Oracle automatically tries to checkpoint as frequently as necessary. Under this condition, the size of the log files should be large enough to avoid additional checkpointing due to undersized log files.
    The redo logfile sizing advice is given by the column optimal_logfile_size of v$instance_recovery. This feature requires setting the parameter FAST_START_MTTR_TARGET for the advisory to take effect and populate the column optimal_logfile_size.
    You can also obtain redo sizing advice on the Redo Log Groups page of Oracle Enterprise Manager Database Control.
    To enable automatic checkpoint tuning, unset FAST_START_MTTR_TARGET or set it to a nonzero value (it is measured in seconds; by default this feature is not enabled, because FAST_START_MTTR_TARGET has a default value of 0). If you set this parameter to zero, the feature is disabled. When you enable fast-start checkpointing, remove or disable (set to 0) the following initialization parameters:
    - LOG_CHECKPOINT_INTERVAL
    - LOG_CHECKPOINT_TIMEOUT
    - FAST_START_IO_TARGET
    Enabling fast-start checkpointing can be done statically in the initialization files or dynamically, for example:
    SQL> alter system set FAST_START_MTTR_TARGET=10;
    Best regards.
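    To see the sizing advice mentioned above once FAST_START_MTTR_TARGET is set, query v$instance_recovery; a small example:

    -- OPTIMAL_LOGFILE_SIZE (in MB) is only populated when FAST_START_MTTR_TARGET is nonzero.
    SELECT target_mttr, estimated_mttr, optimal_logfile_size
    FROM   v$instance_recovery;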
