Date in Alert Log is 00.00.00 00:00:00
Hello,
I have a problem with the date in the Alert Log on Solution Manager.
In Actual Date Time, the date shows as 00.00.00 00:00:00 for some systems.
Do you know how I can correct that?
Thank you
It seems the problem appears when everything is green (screenshot attached).
The name of the parameter is ACTUAL_DATE_TIME_FMT.
I don't know where I can change that.
Similar Messages
-
Which background process writes the date into the alert log file in Oracle
Hi,
AFAIK, all of the background processes are eligible to write information to the alert log file. As the file name indicates, it is there to show alerts, so the background processes have the access rights (in terms of DBMS packages) to write to the alert log.
I might be wrong, though.
- Pavan Kumar N -
Hi
Oracle Version 10.2.0.3.0
Last Friday we had a power failure and a server rebooted abruptly. After it came online I restarted the database; the db did an instance recovery and came online without any problems. However, when I checked the alert log file I noticed that the date & timestamp had gone back 14 days. This lasted for a while and then it started showing the current date & timestamp. Is that normal? If it's not, could someone help me figure out why this happened?
Fri Feb 27 21:26:29 2009
Starting ORACLE instance (normal)
LICENSE_MAX_SESSION = 0
LICENSE_SESSIONS_WARNING = 0
Picked latch-free SCN scheme 2
Using LOG_ARCHIVE_DEST_1 parameter default value as /opt/oracle/product/10.2/db_1/dbs/arch
Autotune of undo retention is turned on.
IMODE=BR
ILAT =121
LICENSE_MAX_USERS = 0
SYS auditing is disabled
ksdpec: called for event 13740 prior to event group initialization
Starting up ORACLE RDBMS Version: 10.2.0.3.0.
System parameters with non-default values:
processes = 1000
sessions = 1105
__shared_pool_size = 184549376
__large_pool_size = 16777216
__java_pool_size = 16777216
__streams_pool_size = 0
nls_language = ENGLISH
nls_territory = UNITED KINGDOM
filesystemio_options = SETALL
sga_target = 1577058304
control_files = /opt/oracle/oradata/rep/control.001.dbf, /opt/oracle/oradata/rep/control.002.dbf, /opt/oracle/oradata/rep/control.003.dbf
db_block_size = 8192
__db_cache_size = 1342177280
compatible = 10.2.0
Fri Feb 27 21:26:31 2009
ALTER DATABASE MOUNT
Fri Feb 27 21:26:35 2009
Setting recovery target incarnation to 1
Fri Feb 27 21:26:36 2009
Successful mount of redo thread 1, with mount id 740543687
Fri Feb 27 21:26:36 2009
Database mounted in Exclusive Mode
Completed: ALTER DATABASE MOUNT
Fri Feb 27 21:26:36 2009
ALTER DATABASE OPEN
Fri Feb 27 21:26:36 2009
Beginning crash recovery of 1 threads
parallel recovery started with 3 processes
Fri Feb 27 21:26:37 2009
Started redo scan
Fri Feb 27 21:26:41 2009
Completed redo scan
481654 redo blocks read, 13176 data blocks need recovery
Fri Feb 27 21:26:50 2009
Started redo application at
Thread 1: logseq 25176, block 781367
Fri Feb 27 21:26:50 2009
Recovery of Online Redo Log: Thread 1 Group 6 Seq 25176 Reading mem 0
Mem# 0: /opt/oracle/oradata/rep/redo_a/redo06.log
Mem# 1: /opt/oracle/oradata/rep/redo_b/redo06.log
Fri Feb 27 21:26:53 2009
Completed redo application
Fri Feb 27 21:27:00 2009
Completed crash recovery at
Thread 1: logseq 25176, block 1263021, scn 77945260488
13176 data blocks read, 13176 data blocks written, 481654 redo blocks read
Fri Feb 27 21:27:02 2009
Expanded controlfile section 9 from 1168 to 2336 records
Requested to grow by 1168 records; added 4 blocks of records
Thread 1 advanced to log sequence 25177
Thread 1 opened at log sequence 25177
Current log# 7 seq# 25177 mem# 0: /opt/oracle/oradata/rep/redo_a/redo07.log
Current log# 7 seq# 25177 mem# 1: /opt/oracle/oradata/rep/redo_b/redo07.log
Successful open of redo thread 1
Fri Feb 27 21:27:02 2009
MTTR advisory is disabled because FAST_START_MTTR_TARGET is not set
Fri Feb 27 21:27:02 2009
SMON: enabling cache recovery
Fri Feb 27 21:27:04 2009
Successfully onlined Undo Tablespace 1.
Fri Feb 27 21:27:04 2009
SMON: enabling tx recovery
Fri Feb 27 21:27:04 2009
Database Characterset is AL32UTF8
replication_dependency_tracking turned off (no async multimaster replication found)
Starting background process QMNC
QMNC started with pid=17, OS id=4563
Fri Feb 27 21:27:08 2009
Completed: ALTER DATABASE OPEN
Fri Feb 27 22:46:04 2009
Thread 1 advanced to log sequence 25178
Current log# 8 seq# 25178 mem# 0: /opt/oracle/oradata/rep/redo_a/redo08.log
Current log# 8 seq# 25178 mem# 1: /opt/oracle/oradata/rep/redo_b/redo08.log
Fri Feb 27 23:43:49 2009
Thread 1 advanced to log sequence 25179
Current log# 9 seq# 25179 mem# 0: /opt/oracle/oradata/rep/redo_a/redo09.log
Current log# 9 seq# 25179 mem# 1: /opt/oracle/oradata/rep/redo_b/redo09.log
Fri Mar 13 20:09:29 2009
MMNL absent for 1194469 secs; Foregrounds taking over
Fri Mar 13 20:10:16 2009
Thread 1 advanced to log sequence 25180
Current log# 10 seq# 25180 mem# 0: /opt/oracle/oradata/rep/redo_a/redo10.log
Current log# 10 seq# 25180 mem# 1: /opt/oracle/oradata/rep/redo_b/redo10.log
Fri Mar 13 20:21:17 2009
Thread 1 advanced to log sequence 25181
Current log# 1 seq# 25181 mem# 0: /opt/oracle/oradata/rep/redo_a/redo01.log
Current log# 1 seq# 25181 mem# 1: /opt/oracle/oradata/rep/redo_b/redo01.log
Yes, you are right. I just found that the server was shut down for more than 4 hours and came back online at 8:08 pm, and I think within a few minutes those old timestamps appeared in the alert log. We have a table which captures the current timestamp from the db and the timestamp from the application, and usually both columns are the same. But the following rows were inserted during the time of the issue. Not sure why this happened. One more thing: the listener was started and running while the database was starting and performing instance recovery.
DBTimestamp_ ApplicationTimestamp_
27-02-2009 21:27:45 13-03-2009 20:08:42
27-02-2009 21:31:47 13-03-2009 20:08:43
27-02-2009 21:31:54 13-03-2009 20:08:43
27-02-2009 21:33:39 13-03-2009 20:08:42
27-02-2009 21:35:47 13-03-2009 20:08:42
27-02-2009 21:37:45 13-03-2009 20:08:42
27-02-2009 21:38:24 13-03-2009 20:08:42
27-02-2009 21:39:42 13-03-2009 20:08:42
27-02-2009 21:40:01 13-03-2009 20:08:42
27-02-2009 21:41:13 13-03-2009 20:08:42
27-02-2009 21:44:07 13-03-2009 20:08:43
27-02-2009 21:53:54 13-03-2009 20:08:42
27-02-2009 22:03:45 13-03-2009 20:08:42
27-02-2009 22:07:02 13-03-2009 20:08:42
-
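A backward time jump like the one discussed above can be spotted mechanically: alert log timestamp lines use a fixed "Day Mon DD HH:MM:SS YYYY" layout, so a short shell sketch can flag any timestamp line that is earlier than the one before it. The function name here is made up for illustration.

```shell
# check_jumps: print alert-log timestamp lines that move backwards in time.
# Assumes the classic "Fri Feb 27 21:26:29 2009" timestamp format.
check_jumps() {
  awk '
    BEGIN {
      split("Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec", m, " ")
      for (i = 1; i <= 12; i++) mon[m[i]] = sprintf("%02d", i)
    }
    /^(Mon|Tue|Wed|Thu|Fri|Sat|Sun) (Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec) / {
      split($4, t, ":")
      # Build a sortable key: YYYYMMDDHHMMSS
      key = sprintf("%04d%s%02d%02d%02d%02d", $5, mon[$2], $3, t[1], t[2], t[3])
      if (prev != "" && key < prev)
        printf "backward jump at line %d: %s\n", NR, $0
      prev = key
    }
  ' "$1"
}
```

Run against the whole alert log (including the entries written before the crash), the stale Feb 27 stamps written during the Mar 13 restart would show up as a backward jump.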
How to see data for a particular date from an alert log file
Hi Experts,
I would like to know how I can see data for a particular date from alert_db.log in a Unix environment. I'm using Oracle 9i on Unix.
Right now I'm using tail -500 alert_db.log > alert.txt and then viewing the whole thing. But is there any easier way to see a particular date or time?
Thanks
Shaan
Hi Jaffar,
Here I have to pass the exact date and time. Is there any way to see records for, let's say, Nov 23 2007? Because when I used this:
tail -500 alert_sid.log | grep " Nov 23 2007" > alert_date.txt
It's not working. Here is the sample log file:
Mon Nov 26 21:42:43 2007
Thread 1 advanced to log sequence 138
Current log# 3 seq# 138 mem# 0: /oracle/NEWDB/oradata/NEWDB/redo3.log
Mon Nov 26 21:42:43 2007
ARCH: Evaluating archive log 1 thread 1 sequence 137
Mon Nov 26 21:42:43 2007
ARC1: Evaluating archive log 1 thread 1 sequence 137
ARC1: Unable to archive log 1 thread 1 sequence 137
Log actively being archived by another process
Mon Nov 26 21:42:43 2007
ARCH: Beginning to archive log 1 thread 1 sequence 137
Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/NEWDB/admin/arch/1_137
.dbf'
ARCH: Completed archiving log 1 thread 1 sequence 137
Mon Nov 26 21:42:44 2007
Thread 1 advanced to log sequence 139
Current log# 2 seq# 139 mem# 0: /oracle/NEWDB/oradata/NEWDB/redo2.log
Mon Nov 26 21:42:44 2007
ARC0: Evaluating archive log 3 thread 1 sequence 138
ARC0: Beginning to archive log 3 thread 1 sequence 138
Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/NEWDB/admin/arch/1_138
.dbf'
Mon Nov 26 21:42:44 2007
ARCH: Evaluating archive log 3 thread 1 sequence 138
ARCH: Unable to archive log 3 thread 1 sequence 138
Log actively being archived by another process
Mon Nov 26 21:42:45 2007
ARC0: Completed archiving log 3 thread 1 sequence 138
Mon Nov 26 21:45:12 2007
Starting control autobackup
Mon Nov 26 21:45:56 2007
Control autobackup written to SBT_TAPE device
comment 'API Version 2.0,MMS Version 5.0.0.0',
media 'WP0033'
handle 'c-2861328927-20071126-01'
Clearing standby activation ID 2873610446 (0xab47d0ce)
The primary database controlfile was created using the
'MAXLOGFILES 5' clause.
The resulting standby controlfile will not have enough
available logfile entries to support an adequate number
of standby redo logfiles. Consider re-creating the
primary controlfile using 'MAXLOGFILES 8' (or larger).
Use the following SQL commands on the standby database to create
standby redo logfiles that match the primary database:
ALTER DATABASE ADD STANDBY LOGFILE 'srl1.f' SIZE 10485760;
ALTER DATABASE ADD STANDBY LOGFILE 'srl2.f' SIZE 10485760;
ALTER DATABASE ADD STANDBY LOGFILE 'srl3.f' SIZE 10485760;
ALTER DATABASE ADD STANDBY LOGFILE 'srl4.f' SIZE 10485760;
Tue Nov 27 21:23:50 2007
Starting control autobackup
Tue Nov 27 21:30:49 2007
Control autobackup written to SBT_TAPE device
comment 'API Version 2.0,MMS Version 5.0.0.0',
media 'WP0280'
handle 'c-2861328927-20071127-00'
Tue Nov 27 21:30:57 2007
ARC1: Evaluating archive log 2 thread 1 sequence 139
ARC1: Beginning to archive log 2 thread 1 sequence 139
Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/NEWDB/admin/arch/1_139
.dbf'
Tue Nov 27 21:30:57 2007
Thread 1 advanced to log sequence 140
Current log# 1 seq# 140 mem# 0: /oracle/NEWDB/oradata/NEWDB/redo1.log
Tue Nov 27 21:30:57 2007
ARCH: Evaluating archive log 2 thread 1 sequence 139
ARCH: Unable to archive log 2 thread 1 sequence 139
Log actively being archived by another process
Tue Nov 27 21:30:58 2007
ARC1: Completed archiving log 2 thread 1 sequence 139
Tue Nov 27 21:30:58 2007
Thread 1 advanced to log sequence 141
Current log# 3 seq# 141 mem# 0: /oracle/NEWDB/oradata/NEWDB/redo3.log
Tue Nov 27 21:30:58 2007
ARCH: Evaluating archive log 1 thread 1 sequence 140
ARCH: Beginning to archive log 1 thread 1 sequence 140
Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/NEWDB/admin/arch/1_140
.dbf'
Tue Nov 27 21:30:58 2007
ARC1: Evaluating archive log 1 thread 1 sequence 140
ARC1: Unable to archive log 1 thread 1 sequence 140
Log actively being archived by another process
Tue Nov 27 21:30:58 2007
ARCH: Completed archiving log 1 thread 1 sequence 140
Tue Nov 27 21:33:16 2007
Starting control autobackup
Tue Nov 27 21:34:29 2007
Control autobackup written to SBT_TAPE device
comment 'API Version 2.0,MMS Version 5.0.0.0',
media 'WP0205'
handle 'c-2861328927-20071127-01'
Clearing standby activation ID 2873610446 (0xab47d0ce)
The primary database controlfile was created using the
'MAXLOGFILES 5' clause.
The resulting standby controlfile will not have enough
available logfile entries to support an adequate number
of standby redo logfiles. Consider re-creating the
primary controlfile using 'MAXLOGFILES 8' (or larger).
Use the following SQL commands on the standby database to create
standby redo logfiles that match the primary database:
ALTER DATABASE ADD STANDBY LOGFILE 'srl1.f' SIZE 10485760;
ALTER DATABASE ADD STANDBY LOGFILE 'srl2.f' SIZE 10485760;
ALTER DATABASE ADD STANDBY LOGFILE 'srl3.f' SIZE 10485760;
ALTER DATABASE ADD STANDBY LOGFILE 'srl4.f' SIZE 10485760;
Wed Nov 28 21:43:31 2007
Starting control autobackup
Wed Nov 28 21:43:59 2007
Control autobackup written to SBT_TAPE device
comment 'API Version 2.0,MMS Version 5.0.0.0',
media 'WP0202'
handle 'c-2861328927-20071128-00'
Wed Nov 28 21:44:08 2007
Thread 1 advanced to log sequence 142
Current log# 2 seq# 142 mem# 0: /oracle/NEWDB/oradata/NEWDB/redo2.log
Wed Nov 28 21:44:08 2007
ARCH: Evaluating archive log 3 thread 1 sequence 141
ARCH: Beginning to archive log 3 thread 1 sequence 141
Wed Nov 28 21:44:08 2007
ARC1: Evaluating archive log 3 thread 1 sequence 141
ARC1: Unable to archive log 3 thread 1 sequence 141
Log actively being archived by another process
Wed Nov 28 21:44:08 2007
Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/NEWDB/admin/arch/1_141
.dbf'
Wed Nov 28 21:44:08 2007
ARC0: Evaluating archive log 3 thread 1 sequence 141
ARC0: Unable to archive log 3 thread 1 sequence 141
Log actively being archived by another process
Wed Nov 28 21:44:08 2007
ARCH: Completed archiving log 3 thread 1 sequence 141
Wed Nov 28 21:44:09 2007
Thread 1 advanced to log sequence 143
Current log# 1 seq# 143 mem# 0: /oracle/NEWDB/oradata/NEWDB/redo1.log
Wed Nov 28 21:44:09 2007
ARCH: Evaluating archive log 2 thread 1 sequence 142
ARCH: Beginning to archive log 2 thread 1 sequence 142
Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/NEWDB/admin/arch/1_142
.dbf'
Wed Nov 28 21:44:09 2007
ARC0: Evaluating archive log 2 thread 1 sequence 142
ARC0: Unable to archive log 2 thread 1 sequence 142
Log actively being archived by another process
Wed Nov 28 21:44:09 2007
ARCH: Completed archiving log 2 thread 1 sequence 142
Wed Nov 28 21:44:36 2007
Starting control autobackup
Wed Nov 28 21:45:00 2007
Control autobackup written to SBT_TAPE device
comment 'API Version 2.0,MMS Version 5.0.0.0',
media 'WP0202'
handle 'c-2861328927-20071128-01'
Clearing standby activation ID 2873610446 (0xab47d0ce)
The primary database controlfile was created using the
'MAXLOGFILES 5' clause.
The resulting standby controlfile will not have enough
available logfile entries to support an adequate number
of standby redo logfiles. Consider re-creating the
primary controlfile using 'MAXLOGFILES 8' (or larger).
Use the following SQL commands on the standby database to create
standby redo logfiles that match the primary database:
ALTER DATABASE ADD STANDBY LOGFILE 'srl1.f' SIZE 10485760;
ALTER DATABASE ADD STANDBY LOGFILE 'srl2.f' SIZE 10485760;
ALTER DATABASE ADD STANDBY LOGFILE 'srl3.f' SIZE 10485760;
ALTER DATABASE ADD STANDBY LOGFILE 'srl4.f' SIZE 10485760;
Thu Nov 29 21:36:44 2007
Starting control autobackup
Thu Nov 29 21:42:53 2007
Control autobackup written to SBT_TAPE device
comment 'API Version 2.0,MMS Version 5.0.0.0',
media 'WP0206'
handle 'c-2861328927-20071129-00'
Thu Nov 29 21:43:01 2007
Thread 1 advanced to log sequence 144
Current log# 3 seq# 144 mem# 0: /oracle/NEWDB/oradata/NEWDB/redo3.log
Thu Nov 29 21:43:01 2007
ARCH: Evaluating archive log 1 thread 1 sequence 143
ARCH: Beginning to archive log 1 thread 1 sequence 143
Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/NEWDB/admin/arch/1_143
.dbf'
Thu Nov 29 21:43:01 2007
ARC1: Evaluating archive log 1 thread 1 sequence 143
ARC1: Unable to archive log 1 thread 1 sequence 143
Log actively being archived by another process
Thu Nov 29 21:43:02 2007
ARCH: Completed archiving log 1 thread 1 sequence 143
Thu Nov 29 21:43:03 2007
Thread 1 advanced to log sequence 145
Current log# 2 seq# 145 mem# 0: /oracle/NEWDB/oradata/NEWDB/redo2.log
Thu Nov 29 21:43:03 2007
ARCH: Evaluating archive log 3 thread 1 sequence 144
ARCH: Beginning to archive log 3 thread 1 sequence 144
Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/NEWDB/admin/arch/1_144
.dbf'
Thu Nov 29 21:43:03 2007
ARC0: Evaluating archive log 3 thread 1 sequence 144
ARC0: Unable to archive log 3 thread 1 sequence 144
Log actively being archived by another process
Thu Nov 29 21:43:03 2007
ARCH: Completed archiving log 3 thread 1 sequence 144
Thu Nov 29 21:49:00 2007
Starting control autobackup
Thu Nov 29 21:50:14 2007
Control autobackup written to SBT_TAPE device
comment 'API Version 2.0,MMS Version 5.0.0.0',
media 'WP0280'
handle 'c-2861328927-20071129-01'
Thanks
Shaan
-
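One likely reason the `grep " Nov 23 2007"` attempt above finds nothing is that in the alert log the time sits between the day number and the year (e.g. `Mon Nov 26 21:42:43 2007`), so the string `Nov 23 2007` never occurs as-is; and even where a pattern does match, grep prints only the timestamp line, not the messages logged under it. A small awk sketch (the `show_day` function name is made up) prints every line from a matching timestamp down to the next timestamp line:

```shell
# show_day: print all alert-log entries logged on a given "Mon DD YYYY" date.
# Usage: show_day "Nov 23 2007" alert_sid.log
show_day() {
  awk -v d="$1" '
    /^(Mon|Tue|Wed|Thu|Fri|Sat|Sun) / {
      # Timestamp line "Mon Nov 26 21:42:43 2007": keep month, day, year
      show = (($2 " " $3 " " $5) == d)
    }
    show
  ' "$2"
}
```

This also avoids `tail -500`, which only helps if the date you want happens to fall within the last 500 lines.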
ORA-01403: no data found in alert.log
Dear All,
I am getting ORA-01403: no data found in alert.log. Could you please help me out with what the reasons behind it could be? Due to this I am getting loads of alerts. Please suggest.
Thanks
ORA-01403 No Data Found
Typically, an ORA-01403 error occurs when an apply process tries to update an existing row and the OLD_VALUES in the row LCR do not match the current values at the destination database.
Typically, one of the following conditions causes this error:
Supplemental logging is not specified for columns that require supplemental logging at the source database. In this case, LCRs from the source database might not contain values for key columns. You can use a DML handler to modify the LCR so that it contains the necessary supplemental data. See "Using a DML Handler to Correct Error Transactions". Also, specify the necessary supplemental logging at the source database to prevent future errors.
There is a problem with the primary key in the table for which an LCR is applying a change. In this case, make sure the primary key is enabled by querying the DBA_CONSTRAINTS data dictionary view. If no primary key exists for the table, or if the target table has a different primary key than the source table, then specify substitute key columns using the SET_KEY_COLUMNS procedure in the DBMS_APPLY_ADM package. You also might encounter error ORA-23416 if a table being applied does not have a primary key. After you make these changes, you can reexecute the error transaction.
The transaction being applied depends on another transaction which has not yet executed. For example, if a transaction tries to update an employee with an employee_id of 300, but the row for this employee has not yet been inserted into the employees table, then the update fails. In this case, execute the transaction on which the error transaction depends. Then, reexecute the error transaction.
There is a data mismatch between a row LCR and the table for which the LCR is applying a change. Make sure row data in the table at the destination database matches the row data in the LCR. When you are checking for differences in the data, if there are any DATE columns in the shared table, then make sure your query shows the hours, minutes, and seconds. If there is a mismatch, then you can use a DML handler to modify an LCR so that it matches the table. See "Using a DML Handler to Correct Error Transactions".
Alternatively, you can update the current values in the row so that the row LCR can be applied successfully. If changes to the row are captured by a capture process at the destination database, then you probably do not want to replicate this manual change to destination databases. In this case, complete the following steps:
Set a tag in the session that corrects the row. Make sure you set the tag to a value that prevents the manual change from being replicated. For example, the tag can prevent the change from being captured by a capture process.
EXEC DBMS_STREAMS.SET_TAG(tag => HEXTORAW('17'));
In some environments, you might need to set the tag to a different value.
Update the row in the table so that the data matches the old values in the LCR.
Reexecute the error or reexecute all errors. To reexecute an error, run the EXECUTE_ERROR procedure in the DBMS_APPLY_ADM package, and specify the transaction identifier for the transaction that caused the error. For example:
EXEC DBMS_APPLY_ADM.EXECUTE_ERROR(local_transaction_id => '5.4.312');
Or, execute all errors for the apply process by running the EXECUTE_ALL_ERRORS procedure:
EXEC DBMS_APPLY_ADM.EXECUTE_ALL_ERRORS(apply_name => 'APPLY');
If you are going to make other changes in the current session that you want to replicate to destination databases, then reset the tag for the session to an appropriate value, as in the following example:
EXEC DBMS_STREAMS.SET_TAG(tag => NULL);
In some environments, you might need to set the tag to a value other than NULL. -
Logical data corruption in alert log
Thanks for taking my question! I am hoping someone can help me because I am really in a bind. I had some issues restoring my database the other day. I thought I had recovered everything, but now I am seeing more errors in the alert log and I have no idea what to do. Any help would be greatly appreciated!
I listed the alert log errors at the end; I am on 11g on Windows 2008.
My first mistake was to stop archive logging while a large job ran, to save some time.
After the job completed I restarted archive logging and then backed up the database and took an SCN.
I then recovered back to the SCN above and it appeared to go OK, except I then noticed I had two rows in v$database_block_corruption.
I then decided to recover back several days and restore the prd schema via an export taken before the first restore.
I changed the incarnation back by 1, recovered, and imported the schema back.
I checked v$database_block_corruption and it was empty, so I am thinking I am good.
I restarted several batch jobs, which finished OK.
I backed up and it was good.
I now have the below logical errors in the alert log. They appeared in between the nightly RMAN backup and the export.
Help! What is causing this and how do I fix it?
Kathie
export taken at 9:30
Wed Jul 22 22:00:03 2009
Error backing up file 2, block 47924: logical corruption
Error backing up file 2, block 47925: logical corruption
Error backing up file 2, block 47926: logical corruption
Error backing up file 2, block 47927: logical corruption
Error backing up file 2, block 71194: logical corruption
Error backing up file 2, block 78234: logical corruption
Error backing up file 2, block 78236: logical corruption
Error backing up file 2, block 78237: logical corruption
Error backing up file 2, block 78238: logical corruption
Error backing up file 2, block 78239: logical corruption
Error backing up file 2, block 78353: logical corruption
Error backing up file 2, block 78473: logical corruption
Error backing up file 2, block 79376: logical corruption
Error backing up file 2, block 79377: logical corruption
Error backing up file 2, block 79378: logical corruption
Error backing up file 2, block 81282: logical corruption
Error backing up file 2, block 81297: logical corruption
Error backing up file 2, block 81305: logical corruption
Error backing up file 2, block 81309: logical corruption
Error backing up file 2, block 81313: logical corruption
Error backing up file 2, block 81341: logical corruption
Error backing up file 2, block 81370: logical corruption
Error backing up file 2, block 81396: logical corruption
Error backing up file 2, block 82115: logical corruption
Error backing up file 2, block 82116: logical corruption
Error backing up file 2, block 82117: logical corruption
Error backing up file 2, block 82118: logical corruption
Error backing up file 2, block 82119: logical corruption
Error backing up file 2, block 85892: logical corruption
Error backing up file 2, block 85897: logical corruption
Error backing up file 2, block 85900: logical corruption
Error backing up file 2, block 85901: logical corruption
Error backing up file 2, block 85904: logical corruption
Error backing up file 2, block 85905: logical corruption
Error backing up file 2, block 85906: logical corruption
Error backing up file 2, block 85909: logical corruption
Error backing up file 2, block 85910: logical corruption
Error backing up file 2, block 85913: logical corruption
Error backing up file 2, block 85917: logical corruption
Error backing up file 2, block 85918: logical corruption
Error backing up file 2, block 85925: logical corruption
Error backing up file 2, block 85937: logical corruption
Error backing up file 2, block 85943: logical corruption
Error backing up file 2, block 85944: logical corruption
Error backing up file 2, block 85947: logical corruption
Error backing up file 2, block 85949: logical corruption
Error backing up file 2, block 85951: logical corruption
Error backing up file 2, block 85953: logical corruption
Error backing up file 2, block 85956: logical corruption
Error backing up file 2, block 85958: logical corruption
Error backing up file 2, block 85965: logical corruption
Error backing up file 2, block 85976: logical corruption
Error backing up file 2, block 85977: logical corruption
Error backing up file 2, block 85980: logical corruption
Error backing up file 2, block 85981: logical corruption
Error backing up file 2, block 85988: logical corruption
Error backing up file 2, block 85989: logical corruption
Error backing up file 2, block 85995: logical corruption
Error backing up file 2, block 86001: logical corruption
Error backing up file 2, block 86003: logical corruption
Error backing up file 2, block 86005: logical corruption
Error backing up file 2, block 86012: logical corruption
Error backing up file 2, block 86013: logical corruption
Error backing up file 2, block 86015: logical corruption
Error backing up file 2, block 93961: logical corruption
Error backing up file 2, block 93965: logical corruption
Error backing up file 2, block 93968: logical corruption
Error backing up file 2, block 93971: logical corruption
Error backing up file 2, block 93975: logical corruption
Error backing up file 2, block 93979: logical corruption
Error backing up file 2, block 93983: logical corruption
Error backing up file 2, block 93984: logical corruption
Error backing up file 2, block 93987: logical corruption
Error backing up file 2, block 93988: logical corruption
Error backing up file 2, block 93992: logical corruption
Error backing up file 2, block 93996: logical corruption
Error backing up file 2, block 94008: logical corruption
Error backing up file 2, block 94022: logical corruption
Error backing up file 2, block 94026: logical corruption
Error backing up file 2, block 94027: logical corruption
Error backing up file 2, block 94030: logical corruption
Error backing up file 2, block 94031: logical corruption
Error backing up file 2, block 94034: logical corruption
Error backing up file 2, block 94038: logical corruption
Error backing up file 2, block 94041: logical corruption
Error backing up file 2, block 94042: logical corruption
Error backing up file 2, block 94047: logical corruption
Error backing up file 2, block 94074: logical corruption
Error backing up file 2, block 94077: logical corruption
Error backing up file 2, block 118881: logical corruption
Error backing up file 2, block 118882: logical corruption
Error backing up file 2, block 118883: logical corruption
Error backing up file 2, block 118884: logical corruption
Wed Jul 22 22:00:27 2009
Thread 1 advanced to log sequence 279 (LGWR switch)
Current log# 3 seq# 279 mem# 0: F:\ORACLE\11.1.0\ORADATA\CS90QAP\REDO03.LOG
Current log# 3 seq# 279 mem# 1: E:\ORACLE\11.1.0\ORADATA\CS90QAP\REDO03A.LOG
Wed Jul 22 22:04:29 2009
Thread 1 advanced to log sequence 280 (LGWR switch)
Current log# 4 seq# 280 mem# 0: F:\ORACLE\11.1.0\ORADATA\CS90QAP\REDO04.LOG
Current log# 4 seq# 280 mem# 1: E:\ORACLE\11.1.0\ORADATA\CS90QAP\REDO04A.LOG
Wed Jul 22 22:37:15 2009
ALTER SYSTEM ARCHIVE LOG -- this is the nightly backup.
Edited by: user579885 on Jul 23, 2009 7:36 AM
If the index is part of EM, why wait? How many EM users can you have? It should just be the DBAs.
Since the object is a PK, there could be FKs that reference it. If there are FKs to the PK, I would try the ALTER INDEX ... REBUILD to see if that 1) works and 2) fixes the issue before resorting to drop/create, since the rebuild can be done without having to disable and re-enable the FKs.
Also note that under certain conditions, such as when the index status is INVALID, ALTER INDEX ... REBUILD will read the table rather than just the index for its data.
HTH -- Mark D Powell -- -
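As a first triage step for corruption floods like the one above, the affected blocks can be summarised per datafile straight from the alert log; the resulting block ranges can then be checked against DBA_EXTENTS (or revalidated with RMAN BACKUP VALIDATE or dbv) to see which segments they fall in. A sketch, with an illustrative function name:

```shell
# corrupt_summary: condense "Error backing up file N, block M: logical corruption"
# alert-log lines into a per-file count and block range.
corrupt_summary() {
  awk '
    /^Error backing up file [0-9]+, block [0-9]+: logical corruption/ {
      file = $5; sub(",", "", file)          # $5 is "2," -> strip the comma
      blk = $7 + 0                           # $7 is "47924:" -> numeric part
      n[file]++
      if (!(file in lo) || blk < lo[file]) lo[file] = blk
      if (blk > hi[file]) hi[file] = blk
    }
    END {
      for (f in n)
        printf "file %s: %d corrupt blocks (block %d .. %d)\n", f, n[f], lo[f], hi[f]
    }
  ' "$1"
}
```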
ORA-01033 Error cannot be traced in the alert log or v$views
Hello There,
I'm hoping you can shed some light on what seems a rather odd occurrence on our Production Oracle Instance.
Before I elaborate on the nature of the problem, I must confess that I am not an Oracle DBA and have been compelled to post this query since I have had no joy from the in-house DBA community on the origins of this error.
We've had an ORA-01033 error issued by our ETL installation (deployed on a Linux machine) a couple of days ago when trying to extract some data from the Oracle instance (during overnight DataWarehouse loads), which consequently aborted the loads, necessitating a cleanup.
This event has already occurred twice before, the first time being a month ago, when the event was captured in v$instance (the Startup time column timestamp corroborates the first time we experienced this issue, as acknowledged by the DBA team) and also in v$dataguard_status.
Since then, this error has been generated twice, although there seems to be no evidence of it in either the Oracle alert log (as confirmed by the DBA team) or in any of the v$ views (as pointed out by Tom in an earlier post) such as DBA_LOGSTDBY_HISTORY, v$logstdby_state, v$logstdby_stats, dba_logstdby_log, dba_logstdby_events, v$dataguard_status, v$dataguard_stats, v$dataguard_config, v$database, v$database_incarnation, v$managed_standby, v$standby_log, and v$instance. I searched these views since I suspected a latency issue during a failover, which could be the reason for the ORA-01033, but found nothing.
The DBA team have pretty much disowned this issue since they claim to not have any actual evidence of this from the logs and this is the crux of the matter here.
The problem I have as the downstream "recipient" of this error is to prove to the DBA team that this is indeed a genuine issue with the Oracle instance, affecting its availability and concomitantly affecting the DW loads.
FYI, the Oracle instance is in failover mode, so it's swiftly back online after bombing out for a few seconds.
Also, I don't have access to the Alert log as it's a Production environment and employs restricted access policy to the server.
Having said that, Is there anything else besides the obvious ORA errors that should be looked for in the Alert log?
Where else can ORA-01033 errors be traced in the dictionary besides the Alert Log?
Thoughts??
Regards
Thank you, John, for that query, but I'm on v10.2 and this view is relevant for 11g and beyond, I believe.
Perhaps there is an equivalent for v10g?
I am also a bit bemused by the earlier comment about no trace being left behind if the DBAs performed a manual restart; surely it doesn't matter how the DB is restarted, the event is captured in the dictionary?
In the meantime, I've got a copy of the Alert log and have found redo log issues (DBWR/LGWR) very close to the time of the shutdown event (a minute after the ORA-01033).
ALTER SYSTEM ARCHIVE LOG
Thread 1 cannot allocate new log, sequence 117732
Checkpoint not complete
I've looked into this a fair bit and this error apparently causes Oracle to suspend all processing on the database until the log switch is made.
Is this the causal link for my issue?
Does " suspend all processing on the database " actually translate into an actual ORA-01033 error (or some form of) when the ETL application is trying to connect to and extract data from the Oracle database at that time?
Edited by: shareeman on 16-Oct-2012 03:50 -
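On the question of what else to look for in the alert log besides the obvious ORA- errors: recurring operational markers such as "Checkpoint not complete", "cannot allocate new log", and instance startup/shutdown lines are worth scanning for, since (as noted above) "Checkpoint not complete" suspends processing until the log switch completes. A minimal grep sketch; the function name and the exact pattern list are only a starting point, not an exhaustive set:

```shell
# scan_alert: pull common trouble markers out of an alert log, with line
# numbers so each hit can be tied to the nearest preceding timestamp.
scan_alert() {
  grep -n -E \
    'ORA-[0-9]+|Checkpoint not complete|cannot allocate new log|Starting ORACLE instance|Shutting down instance|terminating the instance|Instance terminated' \
    "$1"
}
```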
ORA-3113 error in the alert log
Hi ,
We are getting ORA-3113 errors in the alert log; I have pasted the alert log entry below.
VERSION INFORMATION:
TNS for Linux: Version 11.1.0.7.0 - Production
Unix Domain Socket IPC NT Protocol Adaptor for Linux: Version 11.1.0.7.0 - Production
Oracle Bequeath NT Protocol Adapter for Linux: Version 11.1.0.7.0 - Production
TCP/IP NT Protocol Adapter for Linux: Version 11.1.0.7.0 - Production
Time: 27-JAN-2011 16:11:25
Tracing not turned on.
Tns error struct:
ns main err code: 12535
TNS-12535: TNS:operation timed out
ns secondary err code: 12560
nt main err code: 505
TNS-00505: Operation timed out
nt secondary err code: 110
nt OS err code: 0
Client address: (ADDRESS=(PROTOCOL=tcp)(HOST=10.100.72.127)(PORT=2844))
Thu Jan 27 16:13:55 2011
opidcl aborting process unknown ospid (18585_47319544949952) due to error ORA-3113
Thu Jan 27 16:14:03 2011
Thread 2 advanced to log sequence 2022 (LGWR switch)
Current log# 4 seq# 2022 mem# 0: +DATA/systemprod/onlinelog/group_4.269.736019283
Current log# 4 seq# 2022 mem# 1: +FLASH1/systemprod/onlinelog/group_4.262.736019285
Thu Jan 27 16:14:13 2011
opidcl aborting process unknown ospid (14096_47207734746304) due to error ORA-3113
Thu Jan 27 16:16:34 2011
Thread 2 advanced to log sequence 2023 (LGWR switch)
Current log# 8 seq# 2023 mem# 0: +DATA/systemprod/onlinelog/group_8.319.736018999
Current log# 8 seq# 2023 mem# 1: +FLASH1/systemprod/onlinelog/group_8.3138.736018999
Thu Jan 27 16:19:33 2011
Thread 2 advanced to log sequence 2024 (LGWR switch)
Current log# 3 seq# 2024 mem# 0: +DATA/systemprod/onlinelog/group_3.268.736019049
Current log# 3 seq# 2024 mem# 1: +FLASH1/systemprod/onlinelog/group_3.261.736019051
Thu Jan 27 16:22:17 2011
What could be the workaround to resolve this issue?
Regards,
Prem
ORA-00600 / ORA-07445 / ORA-03113 = Oracle bug => search on Metalink and/or call Oracle support.
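When searching Metalink for these, the bug matching is keyed on the error number plus its arguments (for ORA-07445, the failing function such as kgghash()). A hedged sketch that tallies those pairs from alert-log text so you have them ready for the search or the SR (the excerpt is illustrative):

```python
import re
from collections import Counter

SAMPLE = """\
opidcl aborting process unknown ospid (18585_47319544949952) due to error ORA-3113
ORA-07445: exception encountered: core dump [kgghash()+308] [SIGSEGV] [Address not mapped to object]
opidcl aborting process unknown ospid (14096_47207734746304) due to error ORA-3113
"""

def tally_internal_errors(text):
    """Count ORA-600/7445/3113 hits, keeping the first bracketed argument
    (the failing function) when one is present."""
    counts = Counter()
    for m in re.finditer(r"ORA-0*(600|7445|3113)(?::[^\[\n]*\[([^\]]+)\])?", text):
        counts[(m.group(1), m.group(2))] += 1
    return counts

print(tally_internal_errors(SAMPLE))
```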
-
ORA-07445 reported in the alert log file
Hi all,
We are using the following platform:-
OS: Solaris Operating System (SPARC 32-bit)
Oracle Server - Enterprise Edition / Product Version: 9.2.0.5.0
We encountered the following problem:-
There is an ORA-07445 reported in the alert log file:
"ORA-07445: exception encountered: core dump [kgghash()+308] [SIGSEGV] [Address not mapped to object] [0x3222000] [] [] reported in the database."
These errors are signaled in more than 3 independent, unexplained occurrences every day.
When this error occurred while a user was accessing the application system, a case should by rights have been shown on the system, but no case was found.
The alert log entries are as below:
Fri Jul 27 09:12:30 2007
Errors in file /disc3/oracle9205/RFDB/udump/rfdb_ora_27371.trc:
ORA-07445: exception encountered: core dump [kgghash()+340] [SIGSEGV] [Address not mapped to object] [0x3184000] [] []
Fri Jul 27 09:22:10 2007
Thread 1 advanced to log sequence 10730
Current log# 2 seq# 10730 mem# 0: /disc3/oracle9205/RFDB/RDO/logRFDB2a.rdo
Current log# 2 seq# 10730 mem# 1: /disc3/oracle9205/RFDB/RDO/logRFDB2b.rdo
Fri Jul 27 09:29:26 2007
Errors in file /disc3/oracle9205/RFDB/udump/rfdb_ora_27372.trc:
ORA-07445: exception encountered: core dump [kgghash()+296] [SIGSEGV] [Address not mapped to object] [0x3182000] [] []
The applications have encountered ora-3113 after a short period of time followed by ora-3114.
application log:-
RF0120-1 2007-Jul-27 09:46:57] Load m[RF0120-1 2007-Jul-27 09:29:30] SQLCODE: -3113
[RF0120-1 2007-Jul-27 09:29:30] Error Code -4105 returning from get score pan no.
[RF0120-1 2007-Jul-27 09:29:30] Message type :120
[RF0120-1 2007-Jul-27 09:29:30] Primary Account Number(PAN) DE0
02 :5440640155262702
[RF0120-1 2007-Jul-27 09:29:30] Processing code DE003 :003000
[RF0120-1 2007-Jul-27 09:29:30] Transaction amount DE004 :000000000001
[RF0120-1 2007-Jul-27 09:29:30] Settlement amount DE005 :000000000000
[RF0120-1 2007-Jul-27 09:29:30] Transmission Date and time
DE007 :0727092717
[RF0120-1 2007-Jul-27 09:29:30] Settlement conversion rate DE009 :60263158
[RF0120-1 2007-Jul-27 09:29:30] System trace audit no. DE011 :754710
[RF0120-1 2007-Jul-27 09:29:30] Local transaction time DE012 :092717
[RF0120-1 2007-Jul-27 09:29:30] Local transaction date DE013 :0727
[RF0120-1 2007-Jul-27 09:29:30] Expiration date D
E014 :0712
[RF0120-1 2007-Jul-27 09:29:30] Settlement date DE015 :0727
[RF0120-1 2007-Jul-27 09:29:30] Merchant type DE018 :5311
[RF0120-1 2007-Jul-27 09:29:30] Point-of-service(POS) entry code DE022 :051
[RF0120-1 2007-Jul-27 09:29:30] Acquiring inst. ID code DE032 :001912
[RF0120-1 2007-Jul-27 09:29:30] Forwarding Inst. ID code DE033 :001912
[RF0120-1 2007-Jul-27 09:29:30] Retrieval ref. no.
DE037 :754710356390
[RF0120-1 2007-Jul-27 09:29:30] Autholization ID response DE038 :356390
[RF0120-1 2007-Jul-27 09:29:30] Response code DE039 :00
[RF0120-1 2007-Jul-27 09:29:30] Card acceptor terminal ID DE041 :19306002
[RF0120-1 2007-Jul-27 09:29:30] Card acceptor ID code DE042 :000001106
020132
[RF0120-1 2007-Jul-27 09:29:30] Card acceptor Name/Location
What could have caused the above-mentioned errors, i.e. ORA-07445, ORA-3113 / ORA-3114? How can the problem be resolved?
Please help.
Thanks.
I am also facing the same issue sometimes on Oracle 9.2.0.6 on SunOS 9 SPARC 64-bit:
Errors in file /oracle/oracle9i/admin/FINPROD/udump/finprod_ora_6076.trc:
ORA-07445: exception encountered: core dump [0000000100FDE088] [SIGSEGV] [Address not mapped to object] [0x00000013A] [] []
Thu Aug 30 08:52:39 2007
Errors in file /oracle/oracle9i/admin/FINPROD/udump/finprod_ora_6078.trc:
ORA-07445: exception encountered: core dump [0000000100FDE088] [SIGSEGV] [Address not mapped to object] [0x00000013A] [] []
Thu Aug 30 09:41:49 2007 -
ORA-07445 in the alert log when inserting into table with XMLType column
I'm trying to insert an XML document into a table with a schema-based XMLType column. When I try to insert a row (using PL/SQL Developer), Oracle is busy for a few seconds and then the connection to Oracle is lost.
Below you'll find the following to recreate the problem:
a) contents from the alert log
b) create script for the table
c) the before-insert trigger
d) the xml-schema
e) code for registering the schema
f) the test program
g) platform information
Alert Log:
Fri Aug 17 00:44:11 2007
Errors in file /oracle/app/oracle/product/10.2.0/db_1/admin/dntspilot2/udump/dntspilot2_ora_13807.trc:
ORA-07445: exception encountered: core dump [SIGSEGV] [Address not mapped to object] [475177] [] [] []
Create script for the table:
CREATE TABLE "DNTSB"."SIGNATURETABLE"
( "XML_DOCUMENT" "SYS"."XMLTYPE" ,
"TS" TIMESTAMP (6) WITH TIME ZONE NOT NULL ENABLE
) XMLTYPE COLUMN "XML_DOCUMENT" XMLSCHEMA "http://www.sporfori.fo/schemas/www.w3.org/TR/xmldsig-core/xmldsig-core-schema.xsd" ELEMENT "Object"
ROWDEPENDENCIES ;
Before-insert trigger:
create or replace trigger BIS_SIGNATURETABLE
before insert on signaturetable
for each row
declare
-- local variables here
l_sigtab_rec signaturetable%rowtype;
begin
if (:new.xml_document is not null) then
:new.xml_document.schemavalidate();
end if;
l_sigtab_rec.xml_document := :new.xml_document;
end BIS_SIGNATURETABLE;
XML-Schema (xmldsig-core-schema.xsd):
=====================================================================================
<?xml version="1.0" encoding="utf-8"?>
<!-- Schema for XML Signatures
http://www.w3.org/2000/09/xmldsig#
$Revision: 1.1 $ on $Date: 2002/02/08 20:32:26 $ by $Author: reagle $
Copyright 2001 The Internet Society and W3C (Massachusetts Institute
of Technology, Institut National de Recherche en Informatique et en
Automatique, Keio University). All Rights Reserved.
http://www.w3.org/Consortium/Legal/
This document is governed by the W3C Software License [1] as described
in the FAQ [2].
[1] http://www.w3.org/Consortium/Legal/copyright-software-19980720
[2] http://www.w3.org/Consortium/Legal/IPR-FAQ-20000620.html#DTD
-->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:ds="http://www.w3.org/2000/09/xmldsig#" xmlns:xdb="http://xmlns.oracle.com/xdb"
targetNamespace="http://www.w3.org/2000/09/xmldsig#" version="0.1" elementFormDefault="qualified">
<!-- Basic Types Defined for Signatures -->
<xs:simpleType name="CryptoBinary">
<xs:restriction base="xs:base64Binary">
</xs:restriction>
</xs:simpleType>
<!-- Start Signature -->
<xs:element name="Signature" type="ds:SignatureType"/>
<xs:complexType name="SignatureType">
<xs:sequence>
<xs:element ref="ds:SignedInfo"/>
<xs:element ref="ds:SignatureValue"/>
<xs:element ref="ds:KeyInfo" minOccurs="0"/>
<xs:element ref="ds:Object" minOccurs="0" maxOccurs="unbounded"/>
</xs:sequence>
<xs:attribute name="Id" type="xs:ID" use="optional"/>
</xs:complexType>
<xs:element name="SignatureValue" type="ds:SignatureValueType"/>
<xs:complexType name="SignatureValueType">
<xs:simpleContent>
<xs:extension base="xs:base64Binary">
<xs:attribute name="Id" type="xs:ID" use="optional"/>
</xs:extension>
</xs:simpleContent>
</xs:complexType>
<!-- Start SignedInfo -->
<xs:element name="SignedInfo" type="ds:SignedInfoType"/>
<xs:complexType name="SignedInfoType">
<xs:sequence>
<xs:element ref="ds:CanonicalizationMethod"/>
<xs:element ref="ds:SignatureMethod"/>
<xs:element ref="ds:Reference" maxOccurs="unbounded"/>
</xs:sequence>
<xs:attribute name="Id" type="xs:ID" use="optional"/>
</xs:complexType>
<xs:element name="CanonicalizationMethod" type="ds:CanonicalizationMethodType"/>
<xs:complexType name="CanonicalizationMethodType" mixed="true">
<xs:sequence>
<xs:any namespace="##any" minOccurs="0" maxOccurs="unbounded"/>
<!-- (0,unbounded) elements from (1,1) namespace -->
</xs:sequence>
<xs:attribute name="Algorithm" type="xs:anyURI" use="required"/>
</xs:complexType>
<xs:element name="SignatureMethod" type="ds:SignatureMethodType"/>
<xs:complexType name="SignatureMethodType" mixed="true">
<xs:sequence>
<xs:element name="HMACOutputLength" minOccurs="0" type="ds:HMACOutputLengthType"/>
<xs:any namespace="##other" minOccurs="0" maxOccurs="unbounded"/>
<!-- (0,unbounded) elements from (1,1) external namespace -->
</xs:sequence>
<xs:attribute name="Algorithm" type="xs:anyURI" use="required"/>
</xs:complexType>
<!-- Start Reference -->
<xs:element name="Reference" type="ds:ReferenceType"/>
<xs:complexType name="ReferenceType">
<xs:sequence>
<xs:element ref="ds:Transforms" minOccurs="0"/>
<xs:element ref="ds:DigestMethod"/>
<xs:element ref="ds:DigestValue"/>
</xs:sequence>
<xs:attribute name="Id" type="xs:ID" use="optional"/>
<xs:attribute name="URI" type="xs:anyURI" use="optional"/>
<xs:attribute name="Type" type="xs:anyURI" use="optional"/>
</xs:complexType>
<xs:element name="Transforms" type="ds:TransformsType"/>
<xs:complexType name="TransformsType">
<xs:sequence>
<xs:element ref="ds:Transform" maxOccurs="unbounded"/>
</xs:sequence>
</xs:complexType>
<xs:element name="Transform" type="ds:TransformType"/>
<xs:complexType name="TransformType" mixed="true">
<xs:choice minOccurs="0" maxOccurs="unbounded">
<xs:any namespace="##other" processContents="lax"/>
<!-- (1,1) elements from (0,unbounded) namespaces -->
<xs:element name="XPath" type="xs:string"/>
</xs:choice>
<xs:attribute name="Algorithm" type="xs:anyURI" use="required"/>
</xs:complexType>
<!-- End Reference -->
<xs:element name="DigestMethod" type="ds:DigestMethodType"/>
<xs:complexType name="DigestMethodType" mixed="true">
<xs:sequence>
<xs:any namespace="##other" processContents="lax" minOccurs="0" maxOccurs="unbounded"/>
</xs:sequence>
<xs:attribute name="Algorithm" type="xs:anyURI" use="required"/>
</xs:complexType>
<xs:element name="DigestValue" type="ds:DigestValueType"/>
<xs:simpleType name="DigestValueType">
<xs:restriction base="xs:base64Binary"/>
</xs:simpleType>
<!-- End SignedInfo -->
<!-- Start KeyInfo -->
<xs:element name="KeyInfo" type="ds:KeyInfoType"/>
<xs:complexType name="KeyInfoType" mixed="true">
<xs:choice maxOccurs="unbounded">
<xs:element ref="ds:KeyName"/>
<xs:element ref="ds:KeyValue"/>
<xs:element ref="ds:RetrievalMethod"/>
<xs:element ref="ds:X509Data"/>
<xs:element ref="ds:PGPData"/>
<xs:element ref="ds:SPKIData"/>
<xs:element ref="ds:MgmtData"/>
<xs:any processContents="lax" namespace="##other"/>
<!-- (1,1) elements from (0,unbounded) namespaces -->
</xs:choice>
<xs:attribute name="Id" type="xs:ID" use="optional"/>
</xs:complexType>
<xs:element name="KeyName" type="xs:string"/>
<xs:element name="MgmtData" type="xs:string"/>
<xs:element name="KeyValue" type="ds:KeyValueType"/>
<xs:complexType name="KeyValueType" mixed="true">
<xs:choice>
<xs:element ref="ds:DSAKeyValue"/>
<xs:element ref="ds:RSAKeyValue"/>
<xs:any namespace="##other" processContents="lax"/>
</xs:choice>
</xs:complexType>
<xs:element name="RetrievalMethod" type="ds:RetrievalMethodType"/>
<xs:complexType name="RetrievalMethodType">
<xs:sequence>
<xs:element ref="ds:Transforms" minOccurs="0"/>
</xs:sequence>
<xs:attribute name="URI" type="xs:anyURI"/>
<xs:attribute name="Type" type="xs:anyURI" use="optional"/>
</xs:complexType>
<!-- Start X509Data -->
<xs:element name="X509Data" type="ds:X509DataType"/>
<xs:complexType name="X509DataType">
<xs:sequence maxOccurs="unbounded">
<xs:choice>
<xs:element name="X509IssuerSerial" type="ds:X509IssuerSerialType"/>
<xs:element name="X509SKI" type="xs:base64Binary"/>
<xs:element name="X509SubjectName" type="xs:string"/>
<xs:element name="X509Certificate" type="xs:base64Binary"/>
<xs:element name="X509CRL" type="xs:base64Binary"/>
<xs:any namespace="##other" processContents="lax"/>
</xs:choice>
</xs:sequence>
</xs:complexType>
<xs:complexType name="X509IssuerSerialType">
<xs:sequence>
<xs:element name="X509IssuerName" type="xs:string"/>
<xs:element name="X509SerialNumber" type="xs:integer"/>
</xs:sequence>
</xs:complexType>
<!-- End X509Data -->
<!-- Begin PGPData -->
<xs:element name="PGPData" type="ds:PGPDataType"/>
<xs:complexType name="PGPDataType">
<xs:choice>
<xs:sequence>
<xs:element name="PGPKeyID" type="xs:base64Binary"/>
<xs:element name="PGPKeyPacket" type="xs:base64Binary" minOccurs="0"/>
<xs:any namespace="##other" processContents="lax" minOccurs="0"
maxOccurs="unbounded"/>
</xs:sequence>
<xs:sequence>
<xs:element name="PGPKeyPacket" type="xs:base64Binary"/>
<xs:any namespace="##other" processContents="lax" minOccurs="0"
maxOccurs="unbounded"/>
</xs:sequence>
</xs:choice>
</xs:complexType>
<!-- End PGPData -->
<!-- Begin SPKIData -->
<xs:element name="SPKIData" type="ds:SPKIDataType"/>
<xs:complexType name="SPKIDataType">
<xs:sequence maxOccurs="unbounded">
<xs:element name="SPKISexp" type="xs:base64Binary"/>
<xs:any namespace="##other" processContents="lax" minOccurs="0"/>
</xs:sequence>
</xs:complexType>
<!-- End SPKIData -->
<!-- End KeyInfo -->
<!-- Start Object (Manifest, SignatureProperty) -->
<xs:element name="Object" type="ds:ObjectType"/>
<xs:complexType name="ObjectType" mixed="true">
<xs:sequence minOccurs="0" maxOccurs="unbounded">
<xs:any namespace="##any" processContents="lax"/>
</xs:sequence>
<xs:attribute name="Id" type="xs:ID" use="optional"/>
<xs:attribute name="MimeType" type="xs:string" use="optional"/> <!-- add a grep facet -->
<xs:attribute name="Encoding" type="xs:anyURI" use="optional"/>
</xs:complexType>
<xs:element name="Manifest" type="ds:ManifestType"/>
<xs:complexType name="ManifestType">
<xs:sequence>
<xs:element ref="ds:Reference" maxOccurs="unbounded"/>
</xs:sequence>
<xs:attribute name="Id" type="xs:ID" use="optional"/>
</xs:complexType>
<xs:element name="SignatureProperties" type="ds:SignaturePropertiesType"/>
<xs:complexType name="SignaturePropertiesType">
<xs:sequence>
<xs:element ref="ds:SignatureProperty" maxOccurs="unbounded"/>
</xs:sequence>
<xs:attribute name="Id" type="xs:ID" use="optional"/>
</xs:complexType>
<xs:element name="SignatureProperty" type="ds:SignaturePropertyType"/>
<xs:complexType name="SignaturePropertyType" mixed="true">
<xs:choice maxOccurs="unbounded">
<xs:any namespace="##other" processContents="lax"/>
<!-- (1,1) elements from (1,unbounded) namespaces -->
</xs:choice>
<xs:attribute name="Target" type="xs:anyURI" use="required"/>
<xs:attribute name="Id" type="xs:ID" use="optional"/>
</xs:complexType>
<!-- End Object (Manifest, SignatureProperty) -->
<!-- Start Algorithm Parameters -->
<xs:simpleType name="HMACOutputLengthType">
<xs:restriction base="xs:integer"/>
</xs:simpleType>
<!-- Start KeyValue Element-types -->
<xs:element name="DSAKeyValue" type="ds:DSAKeyValueType"/>
<xs:complexType name="DSAKeyValueType">
<xs:sequence>
<xs:sequence minOccurs="0">
<xs:element name="P" type="ds:CryptoBinary"/>
<xs:element name="Q" type="ds:CryptoBinary"/>
</xs:sequence>
<xs:element name="G" type="ds:CryptoBinary" minOccurs="0"/>
<xs:element name="Y" type="ds:CryptoBinary"/>
<xs:element name="J" type="ds:CryptoBinary" minOccurs="0"/>
<xs:sequence minOccurs="0">
<xs:element name="Seed" type="ds:CryptoBinary"/>
<xs:element name="PgenCounter" type="ds:CryptoBinary"/>
</xs:sequence>
</xs:sequence>
</xs:complexType>
<xs:element name="RSAKeyValue" type="ds:RSAKeyValueType"/>
<xs:complexType name="RSAKeyValueType">
<xs:sequence>
<xs:element name="Modulus" type="ds:CryptoBinary"/>
<xs:element name="Exponent" type="ds:CryptoBinary"/>
</xs:sequence>
</xs:complexType>
<!-- End KeyValue Element-types -->
<!-- End Signature -->
</xs:schema>
===============================================================================
Code for registering the xml-schema
begin
dbms_xmlschema.deleteSchema('http://xmlns.oracle.com/xdb/schemas/DNTSB/www.sporfori.fo/schemas/www.w3.org/TR/xmldsig-core/xmldsig-core-schema.xsd',
dbms_xmlschema.DELETE_CASCADE_FORCE);
end;
/
begin
DBMS_XMLSCHEMA.REGISTERURI(
schemaurl => 'http://www.sporfori.fo/schemas/www.w3.org/TR/xmldsig-core/xmldsig-core-schema.xsd',
schemadocuri => 'http://www.sporfori.fo/schemas/www.w3.org/TR/xmldsig-core/xmldsig-core-schema.xsd',
local => TRUE,
gentypes => TRUE,
genbean => FALSE,
gentables => TRUE,
force => FALSE,
owner => 'DNTSB',
options => 0);
end;
/
Test program
-- Created on 17-07-2006 by EEJ
declare
XML_TEXT3 CLOB := '<Object xmlns="http://www.w3.org/2000/09/xmldsig#">
<SignatureProperties>
<SignatureProperty Target="">
<Timestamp xmlns="http://www.sporfori.fo/schemas/dnts/general/2006/11/14">2007-05-10T12:00:00-05:00</Timestamp>
</SignatureProperty>
</SignatureProperties>
</Object>';
xmldoc xmltype;
begin
xmldoc := xmltype(xml_text3);
insert into signaturetable
(xml_document, ts)
values
(xmldoc, current_timestamp);
end;
/
Platform information
Operating system:
-bash-3.00$ uname -a
SunOS dntsdb 5.10 Generic_125101-09 i86pc i386 i86pc
SQLPlus:
SQL*Plus: Release 10.2.0.3.0 - Production on Fri Aug 17 00:15:13 2007
Copyright (c) 1982, 2006, Oracle. All Rights Reserved.
Enter password:
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Production
With the Partitioning and Data Mining options
Kind Regards,
Eyðun
You should report this in a service request on http://metalink.oracle.com.
It is a shame that you put all the effort here to describe your problem, but on the other hand you can now also copy & paste the question to Oracle Support.
Because you are using 10.2.0.3, I am guessing that you have a valid service contract... -
How to be notified for all ORA- Errors recorded in the alert.log file
Based on Note 405396.1, I changed the Matches Warning pattern from the default value ORA-0*(600?|7445|4[0-9][0-9][0-9])[^0-9] to ORA-* in order to receive a warning alert for all ORA- errors,
but I just received an alert like the following:
Metric=Generic Alert Log Error
Time/Line Number=Mon Feb 25 23:52:21 2008/21234
Timestamp=Feb 26, 2008 12:06:03 AM EST
Severity=Warning
Message=ORA-error stack (1654, 1654, 1654) logged in /opt/oracle/admin/PRD/bdump/alert_PRD.log.
Notification Rule Name=Alert Log Error
Notification Rule Owner=SYSMAN
As you can see, the message only indicates ORA-1654, nothing else.
How do I set up 10g Grid Control to get the detailed alert text that is in the alert log, like:
"ORA-1654: unable to extend index ADM.RC_BP_STATUS by 1024 in tablespace PSINDEX"
I can't believe Oracle 10g Grid Control only provides the ORA- number without details.
Go to your database target.
On the home tab, on the left hand side under Diagnostic Summary, you'll see a clickable date link next to where it says 'Alert Log'. Click on that.
Next, click on Generic Alert Log Error Monitoring Configuration (it's at the bottom).
In the alert thresholds put:
ORA-0*(600?|7445|4[0-9][0-9][0-9])[^0-9]
I believe that will pick anything up, but experiment; it's only Perl.
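Since the Matches Warning threshold is an ordinary Perl-style regular expression, it can be experimented with offline before touching Grid Control. A sketch in Python, whose re syntax is close enough to Perl for this pattern (the sample messages are made up):

```python
import re

# The default Generic Alert Log Error warning pattern quoted above.
DEFAULT = r"ORA-0*(600?|7445|4[0-9][0-9][0-9])[^0-9]"

samples = [
    "ORA-00600: internal error code",               # matched
    "ORA-07445: exception encountered: core dump",  # matched
    "ORA-04031: unable to allocate shared memory",  # matched
    "ORA-1654: unable to extend index",             # NOT matched by the default
]

for msg in samples:
    print(bool(re.search(DEFAULT, msg)), msg)
```

One caveat on the replacement pattern: as a regex, ORA-* means the literal 'ORA' followed by zero or more hyphens, so it fires on any line containing 'ORA'; something like ORA-[0-9]+ is a safer catch-all.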
If you want to test this, use the DBMS_SYSTEM.KSDWRT procedure, but I would advise you only do so on a test database. If you've never heard of it, google it. It's a way of writing to your alert log.
Make sure you have your emails sent to long format as well. -
Dear All,
We have a 10gR2 RAC with Physical Data Guard environment using ASM, and both have the same disk group names. Let's say the primary database name is prim and the standby database name is stdby. We are getting the following errors in the alert log file of the standby:
Clearing online redo logfile 9 +DG_DATAFILES_AND_FB/prim/onlinelog/group9_2a.rdo
Clearing online log 9 of thread 2 sequence number 0
Errors in file c:\oracle\product\10.2.0\admin\stdby\bdump\stdby1_mrp0_4288.trc:
ORA-00313: Message 313 not found; No message file for product=RDBMS, facility=ORA; arguments: [9] [2]
ORA-00312: Message 312 not found; No message file for product=RDBMS, facility=ORA; arguments: [9] [2] [+DG_DATAFILES_AND_FB/prim/onlinelog/group9_2a.rdo]
ORA-17503: Message 17503 not found; No message file for product=RDBMS, facility=ORA; arguments: [2] [+DG_DATAFILES_AND_FB/prim/onlinelog/group9_2a.rdo]
ORA-15173: entry 'prim' does not exist in directory '/'
Errors in file c:\oracle\product\10.2.0\admin\stdby\bdump\stdby1_mrp0_4288.trc:
ORA-00344: Message 344 not found; No message file for product=RDBMS, facility=ORA; arguments: [+DG_DATAFILES_AND_FB/prim/onlinelog/group9_2a.rdo]
ORA-17502: Message 17502 not found; No message file for product=RDBMS, facility=ORA; arguments: [4] [+DG_DATAFILES_AND_FB/prim/onlinelog/group9_2a.rdo]
ORA-15173: entry 'prim' does not exist in directory '/'
The errors show that the standby is trying to find files in directory +DG_DATAFILES_AND_FB/prim/onlinelog, which apparently doesn't exist on the standby. Below is the result of a query for redo logs on the standby:
SQL> SELECT group#, status, member FROM v$logfile where member like '%prim/%'
GROUP# STATUS MEMBER
9 +DG_DATAFILES_AND_FB/prim/onlinelog/group9_2a.rdo
1 +DG_DATAFILES_AND_FB/prim/standbylogs/sredo1.rdo
10 +DG_DATAFILES_AND_FB/prim/onlinelog/group10_1a.rdo
2 +DG_DATAFILES_AND_FB/prim/standbylogs/sredo2.rdo
3 +DG_DATAFILES_AND_FB/prim/standbylogs/sredo3.rdo
4 +DG_DATAFILES_AND_FB/prim/standbylogs/sredo4.rdo
11 +DG_DATAFILES_AND_FB/prim/onlinelog/group11_1a.rdo
12 +DG_DATAFILES_AND_FB/prim/onlinelog/group12_2a.rdo
8 rows selected.
How can we get rid of this error?
Best regards,
Generally when we set up a standby, are these directories created automatically on the standby (I mean '+DG_DATAFILES_AND_FB/prim' and '+DG_DATAFILES_AND_FB/stdby')? My understanding is that by default only '+DG_DATAFILES_AND_FB/stdby' is created.
What if I want to put all logs (those that are in stdby and prim) in +DG_DATAFILES_AND_FB/stdby?
What is the value of DB_CREATE_FILE_DEST, and have you also set the DB_CREATE_ONLINE_LOG_DEST_<n> value?
Also, I don't know whether it is relevant or not, but we performed a roll forward for the standby using Metalink doc ID 836986.1 (Steps to perform for Rolling forward a standby database using RMAN incremental backup when primary and standby are in ASM filesystem). But I am not sure whether the error started coming after that or not.
But in the beginning, for sure, there were no such errors. Just trying to put in as much information as I can.
Even though you are using the same disk groups, the sub-directory names such as "prim" and "stdby" are different.
After you changed the values of DB_FILE_NAME_CONVERT/LOG_FILE_NAME_CONVERT, did you bounce the database? They are static parameters.
Bounce it and then start MRP; initially errors are expected, and it even happens during RMAN duplicate.
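For what it's worth, the convert parameters are ordered prefix substitutions, so the mapping can be sanity-checked offline before bouncing the standby. A sketch of that substitution rule only; it mimics rather than calls the database, and the pair below is an assumption based on the names in this thread:

```python
def convert_name(path, convert_pairs):
    """Apply DB/LOG_FILE_NAME_CONVERT-style substitution: pairs are
    (find, replace) prefixes, tried in order; the first match wins."""
    for find, replace in convert_pairs:
        if path.startswith(find):
            return replace + path[len(find):]
    return path  # no pair matched: the name is used unchanged

# Hypothetical convert pair mapping primary paths onto standby paths.
PAIRS = [("+DG_DATAFILES_AND_FB/prim/", "+DG_DATAFILES_AND_FB/stdby/")]
print(convert_name("+DG_DATAFILES_AND_FB/prim/onlinelog/group9_2a.rdo", PAIRS))
# -> +DG_DATAFILES_AND_FB/stdby/onlinelog/group9_2a.rdo
```

If a prim path comes out unchanged, the convert pairs do not cover it, which is consistent with the ORA-15173 symptom in this thread.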
The logfile member shows in the database but not on the physical disk; they do not match.
If you haven't used RMAN duplicate, then drop and recreate the redo logs; this can be done at any time. -
Hi,
we have Oracle 11.2.0 Standard Edition running under Windows XP 32-bit.
Lately the alert log has been full of errors like:
Tue Dec 18 11:14:26 2012
Errors in file c:\oracle\diag\rdbms\ \trace\ratik_j000_4448.trc:
Errors in file c:\oracle\diag\rdbms\ \trace\ratik_j000_4448.trc:
and Db is open and in read/write mode
Each trace file is over 4 GB in size, and new trace files keep appearing; they are all named like <sid>j000<>.trc.
All trace files contain the following messages:
Trace file c:\oracle\diag\rdbms\ \trace\<sid>j0003300.trc
Oracle Database 11g Release 11.2.0.1.0 - Production
Windows XP Version V5.1 Service Pack 3
CPU : 4 - type 586, 4 Physical Cores
Process Affinity : 0x0x00000000
Memory (Avail/Total): Ph:705M/2986M, Ph+PgF:3533M/6914M, VA:978M/2047M
Instance name: ratik
Redo thread mounted by this instance: 1
Oracle process number: 19
Windows thread id: 3300, image: ORACLE.EXE (J000)
*** 2013-03-12 09:07:08.906
*** SESSION ID:(192.5) 2013-03-12 09:07:08.906
*** CLIENT ID:() 2013-03-12 09:07:08.906
*** SERVICE NAME:(SYS$USERS) 2013-03-12 09:07:08.906
*** MODULE NAME:(EM_PING) 2013-03-12 09:07:08.906
*** ACTION NAME:(AGENT_STATUS_MARKER) 2013-03-12 09:07:08.906
--------Dumping Sorted Master Trigger List --------
Trigger Owner : SYSMAN
Trigger Name : MGMT_JOB_EXEC_UPDATE
--------Dumping Trigger Sublists --------
trigger sublist 0 :
trigger sublist 1 :
Trigger Owner : SYSMAN
Trigger Name : MGMT_JOB_EXEC_UPDATE
trigger sublist 2 :
trigger sublist 3 :
trigger sublist 4 :
What can it be, and how can I fix it?
Thanks.
There are no ORA-00600 errors, neither in the alert log nor in the trace files. And it is not a standby; it is a standalone DB server.
The full trace files are over 4 GB.
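Until the root cause is found, it is worth watching which trace files are ballooning before they fill the disk. A small sketch (the example path is taken from the listing in this thread; treat it as an assumption):

```python
import os

def oversized_traces(trace_dir, limit_bytes=1 << 30):
    """Return (size, path) for .trc files larger than limit_bytes, biggest first."""
    hits = []
    for name in os.listdir(trace_dir):
        if name.endswith(".trc"):
            full = os.path.join(trace_dir, name)
            size = os.path.getsize(full)
            if size > limit_bytes:
                hits.append((size, full))
    return sorted(hits, reverse=True)

# Example (hypothetical path):
# oversized_traces(r"C:\oracle\diag\rdbms\ratik\ratik\trace")
```

Setting MAX_DUMP_FILE_SIZE (e.g. to '100M') also caps how large any single trace file can grow, which at least contains the damage while the job-queue errors are diagnosed.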
So, again, the alert log:
Starting ORACLE instance (normal)
LICENSE_MAX_SESSION = 0
LICENSE_SESSIONS_WARNING = 0
Picked latch-free SCN scheme 2
Using LOG_ARCHIVE_DEST_1 parameter default value as USE_DB_RECOVERY_FILE_DEST
Autotune of undo retention is turned on.
IMODE=BR
ILAT =27
LICENSE_MAX_USERS = 0
SYS auditing is disabled
Starting up:
Oracle Database 11g Release 11.2.0.1.0 - Production.
Using parameter settings in server-side spfile C:\ORACLE\PRODUCT\11.2.0\DBHOME_1\DATABASE\SPFILERATIK.ORA
System parameters with non-default values:
processes = 150
nls_language = "AMERICAN"
nls_territory = "AMERICA"
memory_target = 1200M
control_files = "C:\ORACLE\ORADATA\RATIK\CONTROL01.CTL"
control_files = "C:\ORACLE\FLASH_RECOVERY_AREA\ \CONTROL02.CTL"
Wed Mar 13 11:32:52 2013
db_block_size = 8192
compatible = "11.2.0.0.0"
db_recovery_file_dest = "C:\oracle\flash_recovery_area"
db_recovery_file_dest_size= 3852M
undo_tablespace = "UNDOTBS1"
max_enabled_roles = 148
remote_login_passwordfile= "EXCLUSIVE"
db_domain = ""
dispatchers = "(PROTOCOL=TCP) (SERVICE=ratikXDB)"
utl_file_dir = "*"
job_queue_processes = 100
audit_file_dest = "C:\ORACLE\ADMIN\ \ADUMP"
audit_trail = "DB"
db_name = " SID "
open_cursors = 300
optimizer_mode = "ALL_ROWS"
query_rewrite_enabled = "TRUE"
query_rewrite_integrity = "TRUSTED"
aq_tm_processes = 1
diagnostic_dest = "C:\ORACLE"
Deprecated system parameters with specified values:
Wed Mar 13 11:33:02 2013
max_enabled_roles
End of deprecated system parameter listing
Wed Mar 13 11:33:03 2013
PMON started with pid=2, OS id=2484
Wed Mar 13 11:33:03 2013
MMAN started with pid=9, OS id=2176
Wed Mar 13 11:33:04 2013
DBW0 started with pid=10, OS id=2896
Wed Mar 13 11:33:04 2013
SMON started with pid=13, OS id=2380
Wed Mar 13 11:33:03 2013
GEN0 started with pid=4, OS id=1056
Wed Mar 13 11:33:03 2013
DBRM started with pid=6, OS id=2188
Wed Mar 13 11:33:04 2013
RECO started with pid=14, OS id=3452
Wed Mar 13 11:33:04 2013
MMNL started with pid=16, OS id=2588
Wed Mar 13 11:33:03 2013
VKTM started with pid=3, OS id=2492 at elevated priority
starting up 1 dispatcher(s) for network address '(ADDRESS=(PARTIAL=YES)(PROTOCOL=TCP))'...
Wed Mar 13 11:33:03 2013
DIA0 started with pid=8, OS id=2180
starting up 1 shared server(s) ...
Wed Mar 13 11:33:04 2013
MMON started with pid=15, OS id=1640
VKTM running at (10)millisec precision with DBRM quantum (100)ms
Wed Mar 13 11:33:03 2013
PSP0 started with pid=7, OS id=2504
Wed Mar 13 11:33:04 2013
CKPT started with pid=12, OS id=2544
Wed Mar 13 11:33:04 2013
LGWR started with pid=11, OS id=3228
ORACLE_BASE from environment = C:\oracle
Wed Mar 13 11:33:03 2013
DIAG started with pid=5, OS id=2488
Wed Mar 13 11:33:10 2013
alter database mount exclusive
Successful mount of redo thread 1, with mount id 327119926
Database mounted in Exclusive Mode
Lost write protection disabled
Completed: alter database mount exclusive
alter database open
Wed Mar 13 11:33:21 2013
Beginning crash recovery of 1 threads
Started redo scan
Completed redo scan
read 463 KB redo, 146 data blocks need recovery
Started redo application at
Thread 1: logseq 517, block 72654
Recovery of Online Redo Log: Thread 1 Group 1 Seq 517 Reading mem 0
Mem# 0: C:\ORACLE\ORADATA\RATIK\REDO01.LOG
Completed redo application of 0.35MB
Completed crash recovery at
Thread 1: logseq 517, block 73581, scn 11916940
146 data blocks read, 146 data blocks written, 463 redo k-bytes read
Wed Mar 13 11:33:32 2013
Thread 1 advanced to log sequence 518 (thread open)
Thread 1 opened at log sequence 518
Current log# 2 seq# 518 mem# 0: C:\ORACLE\ORADATA\RATIK\REDO02.LOG
Successful open of redo thread 1
Wed Mar 13 11:33:35 2013
SMON: enabling cache recovery
Wed Mar 13 11:33:36 2013
Successfully onlined Undo Tablespace 2.
Verifying file header compatibility for 11g tablespace encryption..
Verifying 11g file header compatibility for tablespace encryption completed
SMON: enabling tx recovery
Database Characterset is CL8MSWIN1251
No Resource Manager plan active
replication_dependency_tracking turned off (no async multimaster replication found)
Starting background process QMNC
Wed Mar 13 11:33:45 2013
QMNC started with pid=20, OS id=4012
Wed Mar 13 11:33:49 2013
Completed: alter database open
Wed Mar 13 11:33:56 2013
Starting background process CJQ0
Wed Mar 13 11:33:56 2013
CJQ0 started with pid=22, OS id=2540
Wed Mar 13 11:33:57 2013
db_recovery_file_dest_size of 3852 MB is 0.00% used. This is a
user-specified limit on the amount of space that will be used by this
database for recovery-related files, and does not reflect the amount of
space available in the underlying filesystem or ASM diskgroup.
Wed Mar 13 11:34:23 2013
Errors in file c:\oracle\diag\rdbms\\trace\_j000_632.trc:
Wed Mar 13 11:34:29 2013
Trace dumping is performing id=[cdmp_20130313113429]
Errors in file c:\oracle\diag\rdbms\\trace\_j000_632.trc:
Wed Mar 13 11:34:39 2013
Errors in file c:\oracle\diag\rdbms\\trace\_j000_632.trc:
Wed Mar 13 11:34:41 2013
Trace dumping is performing id=[cdmp_20130313113441]
Errors in file c:\oracle\diag\rdbms\\trace\_j000_632.trc:
Trace dumping is performing id=[cdmp_20130313113448]
Wed Mar 13 11:34:53 2013
Errors in file c:\oracle\diag\rdbms\k\trace\_j000_632.trc:
Errors in file c:\oracle\diag\rdbms\\trace\_j000_632.trc:
Wed Mar 13 11:35:01 2013
Trace dumping is performing id=[cdmp_20130313113501]
Errors in file c:\oracle\diag\rdbms\\trace\_j000_632.trc:
Wed Mar 13 11:35:07 2013
Errors in file c:\oracle\diag\rdbms\\trace\_j000_632.trc:
Trace dumping is performing id=[cdmp_20130313113509]
Errors in file c:\oracle\diag\rdbms\ratik\ratik\trace\ratik_j000_632.trc:
Wed Mar 13 11:35:20 2013
Errors in file c:\oracle\diag\rdbms\\trace\_j000_632.trc:
Errors in file c:\oracle\diag\rdbms\\trace\_j000_632.trc:
Errors in file c:\oracle\diag\rdbms\\trace\_j000_632.trc:
Wed Mar 13 11:36:20 2013
Errors in file c:\oracle\diag\rdbms\\trace\_j000_632.trc:
Errors in file c:\oracle\diag\rdbms\\trace\_j000_632.trc:
Errors in file c:\oracle\diag\rdbms\\trace\_j000_632.trc:
Errors in file c:\oracle\diag\rdbms\\trace\_j000_632.trc:
This message goes on till the server is restarted; after the server is up, the message appears again.
In the trace file:
Trace file c:\oracle\diag\rdbms\ \trace\<sid>j0003300.trc
Oracle Database 11g Release 11.2.0.1.0 - Production
Windows XP Version V5.1 Service Pack 3
CPU : 4 - type 586, 4 Physical Cores
Process Affinity : 0x0x00000000
Memory (Avail/Total): Ph:705M/2986M, Ph+PgF:3533M/6914M, VA:978M/2047M
Instance name: ratik
Redo thread mounted by this instance: 1
Oracle process number: 19
Windows thread id: 3300, image: ORACLE.EXE (J000)
*** 2013-03-12 09:07:08.906
*** SESSION ID:(192.5) 2013-03-12 09:07:08.906
*** CLIENT ID:() 2013-03-12 09:07:08.906
*** SERVICE NAME:(SYS$USERS) 2013-03-12 09:07:08.906
*** MODULE NAME:(EM_PING) 2013-03-12 09:07:08.906
*** ACTION NAME:(AGENT_STATUS_MARKER) 2013-03-12 09:07:08.906
--------Dumping Sorted Master Trigger List --------
Trigger Owner : SYSMAN
Trigger Name : MGMT_JOB_EXEC_UPDATE
--------Dumping Trigger Sublists --------
trigger sublist 0 :
trigger sublist 1 :
Trigger Owner : SYSMAN
Trigger Name : MGMT_JOB_EXEC_UPDATE
trigger sublist 2 :
trigger sublist 3 :
trigger sublist 4 :
*** 2013-03-12 09:07:14.046
oer 8102.3 - obj# 86174, rdba: 0x00810e2c(afn 2, blk# 69164)
kdk key 8102.3:
ncol: 8, len: 47
key: (47):
01 80 05 41 44 4d 34 36 02 c1 02 0b 78 70 0a 10 08 2c 32 28 f2 c9 c0 02 c1
04 01 80 01 80 10 c0 89 b8 fe e0 85 45 5a 8d 44 5f 2f 1c b6 ea bb
mask: (2048):
05 09 80 54 00 00 00 00 00 4c 00 00 00 88 70 9e 3e 00 00 00 00 00 00 00 00
40 00 00 00 c8 78 34 0d 94 d1 aa 06 a0 03 8d 0b 00 00 00 00 ed d4 aa 06 9c
70 9e 3e 00 00 00 00 38 00 00 00 0d 00 00 00 01 00 00 00 03 00 00 00 01 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 a0 79 9e 3e 00 00 00 00 ed d4 aa
06 e8 70 9e 3e 00 00 00 00 38 00 00 00 00 00 00 00 38 00 00 00 00 00 00 00
4c 00 00 00 00 00 00 00 9c 70 9e 3e 00 00 00 00 00 00 00 00 14 00 00 00 00
40 00 00 00 00 00 00 a0 79 9e 3e 6c 00 00 00 c8 1c e4 3e 68 b6 3d 0d 7c 79
34 0d fc f5 c4 62 a0 03 8d 0b a0 79 9e 3e 38 00 00 00 ff ff ff 7f 00 00 00
00 00 00 00 00 00 40 00 01 00 00 00 00 38 ca f4 62 00 00 00 00 00 00 00 00
00 00 00 00 02 00 00 00 02 00 00 00 02 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 c4 b0 3d 0d 08 76 9e 3e 16 00 00 00 f0 1c e4 3e 00 00 00 00 64 81
34 0d 00 00 00 00 44 79 34 0d 8c 2c 20 07 17 00 00 00 c4 b0 3d 0d 4c 79 34
0d 28 23 ea 06 5c 79 34 0d 8c 2c 20 07 17 00 00 00 c4 b0 3d 0d 64 79 34 0d
28 23 ea 06 b4 79 34 0d a5 d2 ec 62 5c 88 34 0d 80 79 34 0d 8c 2c 20 07 17
00 00 00 6c 00 00 00 88 79 34 0d 28 23 ea 06 ac 79 34 0d 58 14 cd 62 08 76
9e 3e a0 03 8d 0b 00 00 00 00 00 00 00 00 02 00 00 00 0c 7e 34 0d c4 b0 3d
0d 80 ff 8c 0b 6c 00 00 00 d4 79 34 0d 36 a4 c2 62 c8 1c e4 3e 06 00 00 00
0c 7e 34 0d 02 00 00 00 84 00 00 00 c4 b0 3d 0d 10 7e 34 0d ef 5f 3b 02 c8
1c e4 3e 06 00 00 00 0c 7e 34 0d 02 00 00 00 84 00 00 00 c4 b0 3d 0d 06 00
00 00 40 7e 34 0d 02 00 00 00 84 00 00 00 c4 b0 3d 0d 44 7e 34 0d ef 5f 3b
These memory block dumps continue until the end of the log file.
No SR has been raised. -
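The rdba in the trace line above (`rdba: 0x00810e2c(afn 2, blk# 69164)`) can be decoded by hand: in a 32-bit relative data block address, the top 10 bits carry the relative file number and the low 22 bits the block number. A small sketch of that split:

```python
def decode_rdba(rdba):
    """Split a 32-bit relative data block address into (file#, block#).

    Top 10 bits: relative file number; low 22 bits: block number.
    """
    return rdba >> 22, rdba & ((1 << 22) - 1)

afn, blk = decode_rdba(0x00810E2C)
print(afn, blk)  # matches the trace line: afn 2, blk# 69164
```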
Oracle errors in alert.log
Hello guys,
I have the following Oracle Release 10.2.0.1.0 server configuration:
- Windows 2003 Server SP2
- 3.25 GB RAM (I think it is 4 GB, but this is Windows 2003 Standard Edition)
- 2 Intel Xeon Quad core
- 3 disk partitions (C -> 40GB, 25 GB free / E -> 100 GB 85 GB free / F 300 GB -> 270 GB free)
The PRISM instance configuration is:
Archivelog: TRUE
SGA Size: 584 MB
Actual PGA Size: 40 MB
Services running (OracleDBConsolePRISM, OracleOraDb10g_home1TNSListener, OracleServicePRISM)
The problem: although only 5 users use this database, there are a lot of errors in the alert.log that make the database unavailable. The error messages look like this:
Mon Feb 16 16:31:24 2009
Errors in file e:\oracle\product\10.2.0\admin\prism\bdump\prism_j001_11416.trc:
ORA-12012: error on auto execute of job 42567
ORA-00376: file 3 cannot be read at this time
ORA-01110: data file 3: 'E:\ORACLE\PRODUCT\10.2.0\ORADATA\PRISM\SYSAUX01.DBF'
ORA-06512: at "EXFSYS.DBMS_RLMGR_DR", line 15
ORA-06512: at line 1
The content of prism_j001_11416.trc is:
Dump file e:\oracle\product\10.2.0\admin\prism\bdump\prism_mmon_4620.trc
Mon Feb 16 11:07:33 2009
ORACLE V10.2.0.1.0 - Production vsnsta=0
vsnsql=14 vsnxtr=3
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Windows Server 2003 Version V5.2 Service Pack 2
CPU : 8 - type 586, 2 Physical Cores
Process Affinity : 0x00000000
Memory (Avail/Total): Ph:1981M/3325M, Ph+PgF:3612M/5221M, VA:1278M/2047M
Instance name: prism
Redo thread mounted by this instance: 1
Oracle process number: 11
Windows thread id: 4620, image: ORACLE.EXE (MMON)
*** SERVICE NAME:(SYS$BACKGROUND) 2009-02-16 11:07:33.480
*** SESSION ID:(161.1) 2009-02-16 11:07:33.480
KEWRCTLRD: OCIStmtFetch Error. ctl_dbid= 1515633963, sga_dbid= 1515633963
KEWRCTLRD: Retcode: -1, Error Message: ORA-00376: file 3 cannot be read at this time
ORA-01110: data file 3: 'E:\ORACLE\PRODUCT\10.2.0\ORADATA\PRISM\SYSAUX01.DBF'
*** SQLSTR: total-len=328, dump-len=240,
STR={select snap_interval, retention,most_recent_snap_time, most_recent_snap_id, status_flag, most_recent_purge_time, most_recent_split_id, most_recent_split_time, mrct_snap_time_num, mrct_purge_time_num, snapint_num, retention_num, swrf_version}
*** kewrwdbi_1: Error=13509 encountered during run_once
keaInitAdvCache: failed, err=604
02/16/09 11:07:33 >ERROR: exception at dbms_ha_alerts_prvt.post_instance_up308: SQLCODE -13917,ORA-13917: Posting system
alert with reason_id 135 failed with code [5] [post_error]
02/16/09 11:07:33 >ERROR: exception at dbms_ha_alerts_prvt.check_ha_resources637: SQLCODE -13917,ORA-13917: Posting syst
em alert with reason_id 136 failed with code [5] [post_error]
02/16/09 11:07:33 >parameter dump for dbms_ha_alerts_prvt.check_ha_resources
02/16/09 11:07:33 > - local_db_unique_name (PRISM)
02/16/09 11:07:33 > - local_db_domain (==N/A==)
02/16/09 11:07:33 > - rows deleted (0)
02/16/09 11:07:33 >ERROR: exception at dbms_ha_alerts_prvt.check_ha_resources637: SQLCODE -13917,ORA-13917: Posting syst
em alert with reason_id 136 failed with code [5] [post_error]
02/16/09 11:07:33 >parameter dump for dbms_ha_alerts_prvt.check_ha_resources
02/16/09 11:07:33 > - local_db_unique_name (PRISM)
02/16/09 11:07:33 > - local_db_domain (==N/A==)
02/16/09 11:07:33 > - rows deleted (0)
*** 2009-02-16 11:07:41.293
****KELR Apply Log Failed, return code 376
*** 2009-02-16 11:08:35.294
****KELR Apply Log Failed, return code 376
*** 2009-02-16 11:09:38.294
****KELR Apply Log Failed, return code 376
*** 2009-02-16 11:10:41.295
****KELR Apply Log Failed, return code 376
*** 2009-02-16 11:11:44.296
****KELR Apply Log Failed, return code 376
*** 2009-02-16 11:12:29.328
And this is an extract from the listener.log file:
TNSLSNR for 32-bit Windows: Version 10.2.0.1.0 - Production on 16-FEB-2009 11:05:44
Copyright (c) 1991, 2005, Oracle. All rights reserved.
System parameter file is e:\oracle\product\10.2.0\db_1\network\admin\listener.ora
Log messages written to e:\oracle\product\10.2.0\db_1\network\log\listener.log
Trace information written to e:\oracle\product\10.2.0\db_1\network\trace\listener.trc
Trace level is currently 0
Started with pid=9212
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=akscl-mfs15.am.enterdir.com)(PORT=1521)))
Listener completed notification to CRS on start
TIMESTAMP * CONNECT DATA [* PROTOCOL INFO] * EVENT [* SID] * RETURN CODE
16-FEB-2009 11:07:25 * service_register * prism * 0
16-FEB-2009 11:07:31 * service_update * prism * 0
16-FEB-2009 11:07:34 * service_update * prism * 0
16-FEB-2009 11:07:37 * service_update * prism * 0
16-FEB-2009 11:07:57 * (CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=PRISM)(CID=(PROGRAM=\\akscl-mfs15\PRISMPM\PRISMPM.EXE)(HOST=xx-W20972)(USER=CSC3157))) * (ADDRESS=(PROTOCOL=tcp)(HOST=144.180.225.7)(PORT=1607)) * establish * PRISM * 0
16-FEB-2009 11:07:58 * service_update * prism * 0
16-FEB-2009 11:07:58 * (CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=PRISM)(CID=(PROGRAM=\\akscl-mfs15\PRISMPM\PRISMPM.EXE)(HOST=xx-W20972)(USER=CSC3157))) * (ADDRESS=(PROTOCOL=tcp)(HOST=144.180.225.7)(PORT=1608)) * establish * PRISM * 0
16-FEB-2009 11:07:58 * (CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=PRISM)(CID=(PROGRAM=\\akscl-mfs15\PRISMPM\PRISMPM.EXE)(HOST=xx-W20972)(USER=CSC3157))) * (ADDRESS=(PROTOCOL=tcp)(HOST=144.180.225.7)(PORT=1612)) * establish * PRISM * 0
16-FEB-2009 11:07:59 * (CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=PRISM)(CID=(PROGRAM=\\akscl-mfs15\PRISMPM\PRISMPM.EXE)(HOST=xx-W20972)(USER=CSC3157))) * (ADDRESS=(PROTOCOL=tcp)(HOST=144.180.225.7)(PORT=1613)) * establish * PRISM * 0
16-FEB-2009 11:07:59 * (CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=PRISM)(CID=(PROGRAM=\\akscl-mfs15\PRISMPM\PRISMPM.EXE)(HOST=xx-W20972)(USER=CSC3157))) * (ADDRESS=(PROTOCOL=tcp)(HOST=144.180.225.7)(PORT=1614)) * establish * PRISM * 0
16-FEB-2009 11:08:01 * service_update * prism * 0
16-FEB-2009 15:05:26 * (CONNECT_DATA=(SERVICE_NAME=PRISM)(CID=(PROGRAM=C:\Program Files\Microsoft Office\OFFICE11\EXCEL.EXE)(HOST=xx-W21498)(USER=csc2682))) * (ADDRESS=(PROTOCOL=tcp)(HOST=144.180.227.131)(PORT=1999)) * establish * PRISM * 0
16-FEB-2009 15:05:27 * (CONNECT_DATA=(SERVICE_NAME=PRISM)(CID=(PROGRAM=C:\Program Files\Microsoft Office\OFFICE11\EXCEL.EXE)(HOST=xx-W21498)(USER=csc2682))) * (ADDRESS=(PROTOCOL=tcp)(HOST=144.180.227.131)(PORT=2000)) * establish * PRISM * 0
16-FEB-2009 17:33:26 * (CONNECT_DATA=(CID=(PROGRAM=)(HOST=__jdbc__)(USER=))(SERVICE_NAME=PRISM)) * (ADDRESS=(PROTOCOL=tcp)(HOST=144.180.227.18)(PORT=2513)) * establish * PRISM * 0
16-FEB-2009 17:33:26 * (CONNECT_DATA=(CID=(PROGRAM=)(HOST=__jdbc__)(USER=))(SERVICE_NAME=PRISM)) * (ADDRESS=(PROTOCOL=tcp)(HOST=144.180.227.18)(PORT=2514)) * establish * PRISM * 0
16-FEB-2009 17:33:27 * (CONNECT_DATA=(CID=(PROGRAM=)(HOST=__jdbc__)(USER=))(SERVICE_NAME=PRISM)) * (ADDRESS=(PROTOCOL=tcp)(HOST=144.180.227.18)(PORT=2515)) * establish * PRISM * 0
16-FEB-2009 17:33:27 * (CONNECT_DATA=(CID=(PROGRAM=)(HOST=__jdbc__)(USER=))(SERVICE_NAME=PRISM)) * (ADDRESS=(PROTOCOL=tcp)(HOST=144.180.227.18)(PORT=2516)) * establish * PRISM * 0
16-FEB-2009 17:33:27 * (CONNECT_DATA=(CID=(PROGRAM=)(HOST=__jdbc__)(USER=))(SERVICE_NAME=PRISM)) * (ADDRESS=(PROTOCOL=tcp)(HOST=144.180.227.18)(PORT=2517)) * establish * PRISM * 0
16-FEB-2009 17:33:27 * (CONNECT_DATA=(CID=(PROGRAM=)(HOST=__jdbc__)(USER=))(SERVICE_NAME=PRISM)) * (ADDRESS=(PROTOCOL=tcp)(HOST=144.180.227.18)(PORT=2518)) * establish * PRISM * 0
16-FEB-2009 17:33:27 * (CONNECT_DATA=(CID=(PROGRAM=)(HOST=__jdbc__)(USER=))(SERVICE_NAME=PRISM)) * (ADDRESS=(PROTOCOL=tcp)(HOST=144.180.227.18)(PORT=2519)) * establish * PRISM * 0
16-FEB-2009 17:33:27 * (CONNECT_DATA=(CID=(PROGRAM=)(HOST=__jdbc__)(USER=))(SERVICE_NAME=PRISM)) * (ADDRESS=(PROTOCOL=tcp)(HOST=144.180.227.18)(PORT=2520)) * establish * PRISM * 0
16-FEB-2009 17:33:27 * (CONNECT_DATA=(CID=(PROGRAM=)(HOST=__jdbc__)(USER=))(SERVICE_NAME=PRISM)) * (ADDRESS=(PROTOCOL=tcp)(HOST=144.180.227.18)(PORT=2521)) * establish * PRISM * 0
16-FEB-2009 17:33:27 * (CONNECT_DATA=(CID=(PROGRAM=)(HOST=__jdbc__)(USER=))(SERVICE_NAME=PRISM)) * (ADDRESS=(PROTOCOL=tcp)(HOST=144.180.227.18)(PORT=2523)) * establish * PRISM * 0
16-FEB-2009 17:33:27 * (CONNECT_DATA=(CID=(PROGRAM=)(HOST=__jdbc__)(USER=))(SERVICE_NAME=PRISM)) * (ADDRESS=(PROTOCOL=tcp)(HOST=144.180.227.18)(PORT=2524)) * establish * PRISM * 0
16-FEB-2009 17:33:27 * (CONNECT_DATA=(CID=(PROGRAM=)(HOST=__jdbc__)(USER=))(SERVICE_NAME=PRISM)) * (ADDRESS=(PROTOCOL=tcp)(HOST=144.180.227.18)(PORT=2525)) * establish * PRISM * 0
16-FEB-2009 17:33:27 * (CONNECT_DATA=(CID=(PROGRAM=)(HOST=__jdbc__)(USER=))(SERVICE_NAME=PRISM)) * (ADDRESS=(PROTOCOL=tcp)(HOST=144.180.227.18)(PORT=2526)) * establish * PRISM * 0
16-FEB-2009 17:33:27 * (CONNECT_DATA=(CID=(PROGRAM=)(HOST=__jdbc__)(USER=))(SERVICE_NAME=PRISM)) * (ADDRESS=(PROTOCOL=tcp)(HOST=144.180.227.18)(PORT=2527)) * establish * PRISM * 0
16-FEB-2009 17:33:27 * (CONNECT_DATA=(CID=(PROGRAM=)(HOST=__jdbc__)(USER=))(SERVICE_NAME=PRISM)) * (ADDRESS=(PROTOCOL=tcp)(HOST=144.180.227.18)(PORT=2528)) * establish * PRISM * 0
16-FEB-2009 17:33:27 * (CONNECT_DATA=(CID=(PROGRAM=)(HOST=__jdbc__)(USER=))(SERVICE_NAME=PRISM)) * (ADDRESS=(PROTOCOL=tcp)(HOST=144.180.227.18)(PORT=2529)) * establish * PRISM * 0
16-FEB-2009 17:33:27 * (CONNECT_DATA=(CID=(PROGRAM=)(HOST=__jdbc__)(USER=))(SERVICE_NAME=PRISM)) * (ADDRESS=(PROTOCOL=tcp)(HOST=144.180.227.18)(PORT=2530)) * establish * PRISM * 0
16-FEB-2009 17:33:29 * service_update * prism * 0
Please help me solve this issue. Best regards.
Hello guys,
Thanks for the tips, after checking the tablespaces the results are:
SQL> select tablespace_name,status from dba_tablespaces;
TABLESPACE_NAME STATUS
SYSTEM ONLINE
UNDOTBS1 ONLINE
SYSAUX ONLINE
TEMP ONLINE
USERS ONLINE
PRISM ONLINE
The datafiles:
SQL> select file#,name,status,enabled from v$datafile;
FILE# NAME STATUS ENABLED
1 E:\ORACLE\PRODUCT\10.2.0\ORADATA\PRISM\SYSTEM01.DBF SYSTEM READ WRITE
2 E:\ORACLE\PRODUCT\10.2.0\ORADATA\PRISM\UNDOTBS01.DBF ONLINE READ WRITE
3 E:\ORACLE\PRODUCT\10.2.0\ORADATA\PRISM\SYSAUX01.DBF *RECOVER* READ WRITE
4 E:\ORACLE\PRODUCT\10.2.0\ORADATA\PRISM\USERS01.DBF ONLINE READ WRITE
5 F:\ORACLE\PRISM\PRISM.ORA ONLINE READ WRITE
So, following the Metalink guide:
SQL> recover datafile 'E:\ORACLE\PRODUCT\10.2.0\ORADATA\PRISM\SYSAUX01.DBF';  --> Here the system asks for the log file; I selected AUTO
SQL> alter database datafile 'E:\ORACLE\PRODUCT\10.2.0\ORADATA\PRISM\SYSAUX01.DBF' online;
Now the datafiles are all ONLINE. I will wait some time to check the database behavior after this change and come back with the results and the correct answer. Best regards. -
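The listener.log extract in this thread follows the field layout the listener itself writes in its header (TIMESTAMP * CONNECT DATA [* PROTOCOL INFO] * EVENT [* SID] * RETURN CODE). A small parsing sketch under that assumption, useful for spotting entries with a non-zero return code; the sample line is taken from the extract above:

```python
def parse_listener_line(line):
    """Split a listener.log entry on its '*' separators.

    Assumed layout (per the listener.log header):
    TIMESTAMP * CONNECT DATA [* PROTOCOL INFO] * EVENT [* SID] * RETURN CODE
    Counting from the right keeps the positions stable whether or not
    the optional CONNECT DATA / PROTOCOL INFO fields are present.
    """
    fields = [f.strip() for f in line.split("*")]
    if len(fields) < 4:
        return None  # not a data line (e.g. banner or blank line)
    return {
        "timestamp": fields[0],
        "event": fields[-3],
        "sid": fields[-2],
        "return_code": int(fields[-1]),
    }

entry = parse_listener_line("16-FEB-2009 11:07:25 * service_register * prism * 0")
print(entry["event"], entry["return_code"])
```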
ORA-00205: error in identifying control file, check alert log for more inf
Hi All,
I created my database in a directory /mydir. The database was created and working successfully. Then I mounted a shared disk on the same /mydir, and my database datafiles were restored under /mydir. Now when I try to restart my database I get:
ORA-00205: error in identifying control file, check alert log for more inf
I checked that all my database datafiles are there, including my control files.
Any idea why this is occurring and how to fix it?
Best Regards.
user11191992 wrote:
Hi All,
I created my database in a directory /mydir. The database was created and working successfully. Then I mounted a shared disk on the same /mydir, and my database datafiles were restored under /mydir. Now when I try to restart my database I get:
ORA-00205: error in identifying control file, check alert log for more inf
I checked that all my database datafiles are there, including my control files.
Any idea why this is occurring and how to fix it?
Best Regards.
Please post the message from the alert log, as the error suggests.
It may be, for example, that the file permissions were changed while copying, making the files unreadable for the oracle user.
HTH
FJFranken
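FJFranken's suggestion (permissions changed during the copy) can be checked quickly before digging further. A sketch with hypothetical control file paths under /mydir, assuming a POSIX filesystem and that the script runs as the oracle user:

```python
import os

def check_readable(paths):
    """Report which files exist and are readable by the current user."""
    results = {}
    for p in paths:
        if not os.path.exists(p):
            results[p] = "missing"
        elif not os.access(p, os.R_OK):
            results[p] = "not readable"
        else:
            results[p] = "ok"
    return results

# Hypothetical control file locations under /mydir.
control_files = ["/mydir/control01.ctl", "/mydir/control02.ctl"]
for path, status in check_readable(control_files).items():
    print(path, "->", status)
```

Anything reported as "not readable" points at the permission problem described above; "missing" suggests the mount hid the original files.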
Maybe you are looking for
-
Account determination in Goods issue
Hi Experts, Is it possible to determine different accounts during goods issue based on customer accounting group? Regards Federico
-
Photoshop CS6 crashes when using burn tool, crop tool, and dodge tool?
Just recently upgraded from Photoshop CS5.5 64bit to Photoshop CS6 64 bit. Crashes when using burn tool, crop tool, or dodge tool? Thanks for your help!
-
How to rebuild a BPEL process in a remote Integration server?
Please kindly advise how to rebuild a remote wsdl... I have many BPEL processes in my remote Integration server and I need to rebuild one of them after some correction was done in its .xsl file. When I right-click on the process name in the Jdevelope
-
HT201471 What does the (MM) stand for ie wi-fi+cellular(MM)?
What does the (MM) stand for ie:wi-fi+cellular (MM)?
-
How do i restore my reading list on my mini?
I was fidiling around with it and then it sort of went stil and then safari exited by itself. I got some of the reding list back but does anyone know how I could restore the whole thing. Please Thanks