Ossnet messages alert log
Hello Experts,
I am seeing the following error messages in the database alert log. Based on the error message, an OS process was not able to connect to one of the cell servers. Is there any way to find out why it is failing to connect to the cell server?
OSPID: 27669: connect: ossnet: connection failed to server 192.168.10.11, result=5 (login: sosstcpreadtry failed) (difftime=1953)
Mon Nov 05 17:35:16 2012
OSPID: 27665: connect: ossnet: connection failed to server 192.168.10.11, result=5 (login: sosstcpreadtry failed) (difftime=1953)
Mon Nov 05 17:35:17 2012
OSPID: 18681: connect: ossnet: connection failed to server 192.168.10.11, result=5 (login: sosstcpreadtry failed) (difftime=1953)
Mon Nov 05 17:42:22 2012
OSPID: 12134: connect: ossnet: connection failed to server 192.168.10.11, result=5 (login: sosstcpreadtry failed) (difftime=1953)
Mon Nov 05 17:42:24 2012
OSPID: 20097: connect: ossnet: connection failed to server 192.168.10.11, result=5 (login: sosstcpreadtry failed) (difftime=1954)
OSPID: 12134: connect: ossnet: connection failed to server 192.168.10.11, result=5 (login: sosstcpreadtry failed) (difftime=1953)
OSPID: 20097: connect: ossnet: connection failed to server 192.168.10.11, result=5 (login: sosstcpreadtry failed) (difftime=1954)
OSPID: 12134: connect: ossnet: connection failed to server 192.168.10.11, result=5 (login: sosstcpreadtry failed) (difftime=1954)
Please advise.
Hi 892564,
I was wondering, how do you determine this is cell server #7? The private network of your first Exadata has a default range starting from 192.168.10.1 at the lowest compute node (db01).
A half rack contains 11 components on the private network (4 compute nodes and 7 cells).
Hence 192.168.10.11 would be cell 7.
For a full rack this IP would match to cell 3.
Regards,
Tycho
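Tycho's arithmetic can be sketched in code. A rough illustration, assuming the default numbering he describes (compute nodes first, then cells, starting at .1); the node and cell counts per rack size are assumptions to verify against your own configuration:

```python
# Rough sketch of the mapping described above: default Exadata private
# network numbering starts at 192.168.10.1 with the compute (db) nodes,
# followed by the storage cells. Node/cell counts below are assumed from
# standard rack sizes; verify against your own /etc/hosts.

RACK_LAYOUT = {
    "half": {"nodes": 4, "cells": 7},
    "full": {"nodes": 8, "cells": 14},
}

def component_for_ip(ip, rack="half", base="192.168.10."):
    """Return which component the last octet of a private IP maps to."""
    if not ip.startswith(base):
        raise ValueError("IP outside the assumed private range")
    octet = int(ip[len(base):])
    nodes = RACK_LAYOUT[rack]["nodes"]
    if octet <= nodes:
        return f"compute node {octet}"
    return f"cell {octet - nodes}"

print(component_for_ip("192.168.10.11", rack="half"))  # cell 7
print(component_for_ip("192.168.10.11", rack="full"))  # cell 3
```

So for the half rack in this thread, 192.168.10.11 lands on cell 7, matching Tycho's answer.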
Similar Messages
-
Need to find the way to get the actual error message in the alert log.
Hi,
I have configured OEM 11g, and the monitored target versions range from 9i to 11g. Now my problem: I have defined metrics for monitoring the alert log contents, and OEM sends an alert if there is any error in the alert log, but it does not show the actual error message; it only shows the following:
============================
Target Name=IDMPRD
Target type=Database Instance
Host=oidmprd01.ho.abc.com
Occurred At=Dec 21, 2011 12:05:21 AM GMT+03:00
Message=1 distinct types of ORA- errors have been found in the alert log.
Metric=Generic Alert Log Error Status
Metric value=1
Severity=Warning
Acknowledged=No
Notification Rule Name=RULE_4_PROD_DATABASES
Notification Rule Owner=SYSMAN
============================
Is there any way to get the complete error details in the OEM alert itself.
Regards
DBA.
You need to look at the Alert Log error messages, not the "status" messages. See doc http://docs.oracle.com/cd/E11857_01/em.111/e16285/oracle_database.htm#autoId2
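As a workaround while OEM only reports the error count, the alert log itself can be scanned for ORA- lines with surrounding context. A minimal sketch (the sample lines and context size are illustrative; point it at your own alert log under background_dump_dest):

```python
# Pull each ORA- line from alert log text together with a little
# surrounding context, since the OEM notification only carries a count.

import re

def ora_errors(lines, context=2):
    """Return (line_number, snippet) for every ORA- error in the log."""
    hits = []
    for i, line in enumerate(lines):
        if re.search(r"\bORA-\d{5}\b", line):
            lo, hi = max(0, i - context), min(len(lines), i + context + 1)
            hits.append((i + 1, lines[lo:hi]))
    return hits

# Illustrative sample; in practice read the alert log file line by line.
sample = [
    "Mon Nov 05 17:35:16 2012",
    "Errors in file /u01/.../trace.trc:",
    "ORA-00600: internal error code",
    "Mon Nov 05 17:35:17 2012",
]
for lineno, snippet in ora_errors(sample, context=1):
    print(lineno, snippet)
```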
-
Can I reduce the message in the alert log ?
Hi All,
I receive a lot of messages in my alert log. Can I reduce the messages in the alert log? Please help me.
Tue Sep 12 13:53:45 2006
ARC0: received prod
Tue Sep 12 13:56:13 2006
LGWR: prodding the archiver
Thread 1 advanced to log sequence 2105494
Tue Sep 12 13:56:13 2006
Current log# 4 seq# 2105494 mem# 0: E:\ORACLE\MMP\LOG\REDO04.LOG
Current log# 4 seq# 2105494 mem# 1: C:\ORACLE\MMP\LOG\REDO04.LOG
Tue Sep 12 13:56:14 2006
ARC1: received prod
Tue Sep 12 13:56:14 2006
ARC1: Beginning to archive log# 3 seq# 2105493
ARC1: Completed archiving log# 3 seq# 2105493
ARC1: re-scanning for new log files
ARC1: prodding the archiver
Tue Sep 12 13:56:18 2006
ARC0: received prod
Tue Sep 12 13:58:26 2006
LGWR: prodding the archiver
Thread 1 advanced to log sequence 2105495
Tue Sep 12 13:58:26 2006
Current log# 1 seq# 2105495 mem# 0: C:\ORACLE\MMP\LOG\REDO01.LOG
Current log# 1 seq# 2105495 mem# 1: E:\ORACLE\MMP\LOG\REDO01.LOG
Tue Sep 12 13:58:27 2006
ARC1: received prod
Tue Sep 12 13:58:27 2006
ARC1: Beginning to archive log# 4 seq# 2105494
ARC1: Completed archiving log# 4 seq# 2105494
ARC1: re-scanning for new log files
ARC1: prodding the archiver
Tue Sep 12 13:58:31 2006
ARC0: received prod
Hi,
The Oracle database writes an audit trail of the archived redo log files received from the primary database into a trace file. The LOG_ARCHIVE_TRACE parameter specifies the level of trace that should be generated when redo logs are archived; its value indicates the level of trace to be generated.
Level Description
0 Disabled (default)
1 Track archival of redo log file
2 Track status of each archivelog destination
4 Track archival operational phase
8 Track archivelog destination activity
16 Track detailed archivelog destination activity
32 Track archivelog destination parameter changes
64 Track ARCn process state activity
128 Track FAL server related activities
It can be used in a Primary Database or Standby Database
for more details see:
http://docs.nojabrsk.ru/sol10/B12037_01/server.101/b10823/trace.htm
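The levels above are bit flags, so LOG_ARCHIVE_TRACE values can be summed to combine them (e.g. a value of 12 enables levels 4 and 8 together). A small decoder for the quoted table:

```python
# Decode a LOG_ARCHIVE_TRACE value into the individual trace levels,
# using the level table quoted in the reply above.

TRACE_LEVELS = {
    1: "Track archival of redo log file",
    2: "Track status of each archivelog destination",
    4: "Track archival operational phase",
    8: "Track archivelog destination activity",
    16: "Track detailed archivelog destination activity",
    32: "Track archivelog destination parameter changes",
    64: "Track ARCn process state activity",
    128: "Track FAL server related activities",
}

def decode_trace(value):
    """Return the descriptions of the levels enabled by a trace value."""
    return [desc for bit, desc in TRACE_LEVELS.items() if value & bit]

print(decode_trace(12))
```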
Cheers -
SMON: ABOUT TO RECOVER UNDO SEGMENT %s messages in alert log
Problem
======
There are lots of messages appearing in alert log of the following form:
SMON: about to recover undo segment %s
SMON: mark undo segment %s as available
Reason
======
These messages appear when recovery is going on after an abnormal shutdown. These errors do not indicate rollback segment corruption. In Oracle 8i, this may be because of a problem with the "rollback_segments" parameter in the init.ora, whereas in Oracle 9i, when the instance is shut down, instance recovery needs to take place during the next startup.
In AUM we do not have any control over which undo segments will be brought online after instance startup. When SMON finds such offline undo segments with transactions needing recovery, it does what it is intended to do: recovery.
Solution
======
With Oracle 8i, we need to cross-check the "rollback_segments" parameter in the init.ora.
With Oracle 9i,
first note down the segment number from "SMON: mark undo segment %s as available", then:
sqlplus "/ as sysdba"
alter session set "_smu_debug_mode"=4;
alter rollback segment <offline segment name> online;
e.g. alter rollback segment "_SYSSMU11$" online;
Where 11 is the number appearing in the messages in the alert log.
What's the point of duplicating the Metalink doc here,
SMON: ABOUT TO RECOVER UNDO SEGMENT %s messages in alert log
Doc ID: Note:266159.1
besides it's violation of Oracle support service contract. -
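For completeness, the segment number in the quoted SMON message can be pulled out mechanically. A small sketch that builds the ONLINE statement from the 9i procedure above (the message format is assumed from the examples quoted in this thread):

```python
# Extract the undo segment number from an SMON alert-log line and build
# the corresponding ALTER ROLLBACK SEGMENT ... ONLINE statement, per the
# 9i procedure quoted above. Message format assumed from this thread.

import re

def online_statement(alert_line):
    """Return the ONLINE statement for an SMON undo-segment message."""
    m = re.search(r"undo segment (\d+)", alert_line)
    if not m:
        return None
    return f'alter rollback segment "_SYSSMU{m.group(1)}$" online;'

print(online_statement("SMON: mark undo segment 11 as available"))
```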
Receive an error in alert log after successful porpagated aq message
Hi experts,
I have a problem with the propagation feature of Oracle Advanced Queues!
First of all I use a 10gR1 RAC database.
After creating my queue and schedule I get the following error in the alert log, but the message is successfully propagated.
After 16 runs the job is set to broken. OK, this is clear; that is how DBMS_JOB works. But why do I get the message below? Is it possible that grants are missing?
Errors in file /u01/app/oracle/admin/rdstest/bdump/rdsts1_j000_385024.trc:
ORA-12012: error on auto execute of job 5523173
ORA-06521: PL/SQL: Error mapping function
ORA-00604: error occurred at recursive SQL level 2
ORA-06521: PL/SQL: Error mapping function
ORA-06512: at "SYS.DBMS_AQADM_SYSCALLS", line 574
ORA-06512: at "SYS.DBMS_PRVTAQIP", line 2054
ORA-06512: at line 1
ORA-25254: time-out in LISTEN while waiting for a message
ORA-06512: at "SYS.DBMS_AQADM_SYSCALLS", line 574
ORA-06512: at "SYS.DBMS_PRVTAQIP", line 2054
ORA-06512: at line 1
ORA-06512: at "SYS.DBMS_AQADM_SYS", line 6662
ORA-06512: at "SYS.DBMS_AQADM_SYS", line 6913
ORA-06512: at "SYS.DBMS_AQADM", line 915
ORA-06512: at line 1
Thank you for helping and best regards!! -
10.2.0.4 Streams Alert log message
Good afternoon everyone,
Wanted to get some input on a message that occasionally is logged to the alert log of the owning QT instance in our RAC environment. In summary, we have two RAC environments and perform bi-directional replication between the two. The message in question is usually logged in the busier environment.
krvxenq: Failed to acquire logminer dictionary lock
During that time it seems that the LogMiner process is not mining for changes, which could lead to latency between the two environments.
I have looked at AWR reports for times during the occurrence of these errors, and it is common to see the following procedure running:
BEGIN dbms_capture_adm_internal.enforce_checkpoint_retention(:1, :2, :3); END;
This procedure, which I assume purges capture checkpoints that exceed the retention duration, takes between 10 and 20 minutes to run. The table which stores the checkpoints is logmnr_restart_ckpt$. I suspect that the issue could be caused by the size of logmnr_restart_ckpt$, which is 12 GB in our environment. A purge job needs to be scheduled to shrink the table.
If anyone has seen anything similar in her or his environment please offer any additional knowledge you may have about the topic.
Thank you,
There are 2 possibilities: either you have too much load on the table LOGMNR_RESTART_CKPT$
due to the heavy system load, or there is a bad Streams query.
At this stage I would not like to accept the 'we-can't-cope-with-the-load' explanation without having investigated more common issues.
Let's assume optimistically that one or many Streams internal SQL statements do not run as intended. This is our best bet.
The table LOGMNR_RESTART_CKPT$ being big makes it a perfect suspect.
If there is a problem with one of the queries involving this table, we can find it using sys.col_usage$,
identify which columns are used, and from there jump to the SQL_IDs, checking plans:
set linesize 132 head on pagesize 33
col obj format a35 head "Table name"
col col1 format a26 head "Column"
col equijoin_preds format 9999999 head "equijoin|Preds" justify c
col nonequijoin_preds format 9999999 head "non|equijoin|Preds" justify c
col range_preds format 999999 head "Range|Pred" justify c
col equality_preds format 9999999 head "Equality|Preds" justify c
col like_preds format 999999 head "Like|Preds" justify c
col null_preds format 999999 head "Null|Preds" justify c
select r.name ||'.'|| o.name "obj" , c.name "col1",
equality_preds, equijoin_preds, nonequijoin_preds, range_preds,
like_preds, null_preds, to_char(timestamp,'DD-MM-YY HH24:MI:SS') "Date"
from sys.col_usage$ u, sys.obj$ o, sys.col$ c, sys.user$ r
where o.obj# = u.obj# and o.name = 'LOGMNR_RESTART_CKPT$' -- and $AND_OWNER
and c.obj# = u.obj# and
c.col# = u.intcol# and
o.owner# = r.user# and
(u.equijoin_preds > 0 or u.nonequijoin_preds > 0)
order by 4 desc;
For each column predicate, check for full table scans:
define COL_NAME='col to check'
col PLAN_HASH_VALUE for 999999999999 head 'Plan hash |value' justify c
col id for 999 head 'Id'
col child for 99 head 'Ch|ld'
col cost for 999999 head 'Oper|Cost'
col tot_cost for 999999 head 'Plan|cost' justify c
col est_car for 999999999 head 'Estimed| card' justify c
col cur_car for 999999999 head 'Avg seen| card' justify c
col ACC for A3 head 'Acc|ess'
col FIL for A3 head 'Fil|ter'
col OTHER for A3 head 'Oth|er'
col ope for a30 head 'Operation'
col exec for 999999 head 'Execution'
break on PLAN_HASH_VALUE on sql_id on child
select distinct
a.PLAN_HASH_VALUE, a.id , a.sql_id, a.CHILD_NUMBER child , a.cost, c.cost tot_cost,
a.cardinality est_car, b.output_rows/decode(b.EXECUTIONS,0,1,b.EXECUTIONS) cur_car,
b.EXECUTIONS exec,
case when length(a.ACCESS_PREDICATES) > 0 then ' Y' else ' N' end ACC,
case when length(a.FILTER_PREDICATES) > 0 then ' Y' else ' N' end FIL,
case when length(a.projection) > 0 then ' Y' else ' N' end OTHER,
a.operation||' '|| a.options ope
from
v$sql_plan a,
v$sql_plan_statistics_all b ,
v$sql_plan_statistics_all c
where
a.PLAN_HASH_VALUE = b.PLAN_HASH_VALUE
and a.sql_id = b.sql_id
and a.child_number = b.child_number
and a.id = b.id
and a.PLAN_HASH_VALUE= c.PLAN_HASH_VALUE (+)
and a.sql_id = c.sql_id
and a.child_number = c.child_number and c.id=0
and a.OBJECT_NAME = 'LOGMNR_RESTART_CKPT$' -- $AND_A_OWNER
and (instr(a.FILTER_PREDICATES,'&&COL_NAME') > 0
or instr(a.ACCESS_PREDICATES,'&&COL_NAME') > 0
or instr(a.PROJECTION, '&&COL_NAME') > 0)
order by sql_id, PLAN_HASH_VALUE, id;
Now, for each query with a FULL table scan, check the predicate and
see whether adding an index would improve it.
Another possibility:
One of the structures associated with Streams, which you don't necessarily see, may have remained too big.
Many of these structures are only accessed by Streams maintenance using full table scans.
They are supposed to remain small tables. But after a big Streams crash they inflate, and once
the problem is solved, these supposed-to-be-small tables remain as the crash left them: too big.
As a consequence, the FTS intended to run on small tables suddenly loops over stretches of empty blocks.
This is very frequent in big Streams environments. You can find these structures, if any exist, using this query.
Note that a long run time for this query implies big structures. You will have to assess whether the number of rows is realistic relative to the number of blocks.
break on parent_table
col type format a30
col owner format a16
col index_name head 'Related object'
col parent_table format a30
prompt
prompt To shrink a lob associated to a queue type : alter table AQ$_<queue_table>_P modify lob(USER_DATA) ( shrink space ) cascade ;
prompt
select a.owner,a.table_name parent_table,index_name ,
decode(index_type,'LOB','LOB INDEX',index_type) type,
(select blocks from dba_segments where segment_name=index_name and owner=b.owner) blocks
from
dba_indexes a,
( select owner, queue_table table_name from dba_queue_tables
where recipients='SINGLE' and owner NOT IN ('SYSTEM') and (compatible LIKE '8.%' or compatible LIKE '10.%')
union
select owner, queue_table table_name from dba_queue_tables
where recipients='MULTIPLE' and (compatible LIKE '8.1%' or compatible LIKE '10.%')
) b
where a.owner=b.owner
and a.table_name = b.table_name
and a.owner not like 'SYS%' and a.owner not like 'WMSYS%'
union
-- LOB Segment for QT
select a.owner,a.segment_name parent_table,l.segment_name index_name, 'LOB SEG('||l.column_name||')' type,
(select sum(blocks) from dba_segments where segment_name = l.segment_name ) blob_blocks
from dba_segments a,
dba_lobs l,
( select owner, queue_table table_name from dba_queue_tables
where recipients='SINGLE' and owner NOT IN ('SYSTEM') and (compatible LIKE '8.%' or compatible LIKE '10.%')
union
select owner, queue_table table_name from dba_queue_tables
where recipients='MULTIPLE' and (compatible LIKE '8.1%' or compatible LIKE '10.%')
) b
where a.owner=b.owner and
a.SEGMENT_name = b.table_name and
l.table_name = a.segment_name and
a.owner not like 'SYS%' and a.owner not like 'WMSYS%'
union
-- LOB Segment of QT.._P
select a.owner,a.segment_name parent_table,l.segment_name index_name, 'LOB SEG('||l.column_name||')',
(select sum(blocks) from dba_segments where segment_name = l.segment_name ) blob_blocks
from dba_segments a,
dba_lobs l,
( select owner, queue_table table_name from dba_queue_tables
where recipients='SINGLE' and owner NOT IN ('SYSTEM') and (compatible LIKE '8.%' or compatible LIKE '10.%')
union
select owner, queue_table table_name from dba_queue_tables
where recipients='MULTIPLE' and (compatible LIKE '8.1%' or compatible LIKE '10.%')
) b
where a.owner=b.owner and
a.SEGMENT_name = 'AQ$_'||b.table_name||'_P' and
l.table_name = a.segment_name and
a.owner not like 'SYS%' and a.owner not like 'WMSYS%'
union
-- Related QT
select a2.owner, a2.table_name parent_table, '-' index_name , decode(nvl(a2.initial_extent,-1), -1, 'IOT TABLE','NORMAL') type,
case
when decode(nvl(a2.initial_extent,-1), -1, 'IOT TABLE','NORMAL') = 'IOT TABLE'
then ( select sum(leaf_blocks) from dba_indexes where table_name=a2.table_name and owner=a2.owner)
when decode(nvl(a2.initial_extent,-1), -1, 'IOT TABLE','NORMAL') = 'NORMAL'
then (select blocks from dba_segments where segment_name=a2.table_name and owner=a2.owner)
end blocks
from dba_tables a2,
( select owner, queue_table table_name from dba_queue_tables
where recipients='SINGLE' and owner NOT IN ('SYSTEM') and (compatible LIKE '8.%' or compatible LIKE '10.%')
union all
select owner, queue_table table_name from dba_queue_tables
where recipients='MULTIPLE' and (compatible LIKE '8.1%' or compatible LIKE '10.%' )
) b2
where
a2.table_name in ( 'AQ$_'||b2.table_name ||'_T' , 'AQ$_'||b2.table_name ||'_S', 'AQ$_'||b2.table_name ||'_H' , 'AQ$_'||b2.table_name ||'_G' ,
'AQ$_'|| b2.table_name ||'_I' , 'AQ$_'||b2.table_name ||'_C', 'AQ$_'||b2.table_name ||'_D', 'AQ$_'||b2.table_name ||'_P')
and a2.owner not like 'SYS%' and a2.owner not like 'WMSYS%'
union
-- IOT Table normal
select
u.name owner , o.name parent_table, c.table_name index_name, 'RELATED IOT' type,
(select blocks from dba_segments where segment_name=c.table_name and owner=c.owner) blocks
from sys.obj$ o,
user$ u,
(select table_name, to_number(substr(table_name,14)) as object_id , owner
from dba_tables where table_name like 'SYS_IOT_OVER_%' and owner not like '%SYS') c
where
o.obj#=c.object_id
and o.owner#=u.user#
and obj# in (
select to_number(substr(table_name,14)) as object_id from dba_tables where table_name like 'SYS_IOT_OVER_%' and owner not like '%SYS')
order by parent_table , index_name desc;
I hope it is one of the above cases; otherwise things may become more complicated. -
Message in alert log file during the startup of the DB
Hi,
i have two instances, i have the same warning in the alert log file during the startup of the DB:
Oracle instance running on a system with low open file descriptor limit.
Tune your system to increase this limit to avoid severe performance degradation.
The db_files parameter was at 1024 for the two instances. I set it to 300 for the first instance and 550 for the second; the message for the first instance was dropped (OK), but the message was still there in the alert log for the second instance.
The two instance are on the same machine.
The number of the open file limit on the linux machine is equal to 1024.
I have shut down the first instance (with db_files=300); the problem persists for the second instance (with db_files=550).
My question is how to determine the right value of the db_files parameter to avoid this message.
Regards.
Message was edited by:
learn
Increase the max number of open files on Linux or decrease db_files.
If the OS limit is 1024, db_files would have to be >= 472 for the message to be printed. It is printed if an instance is configured to access more datafiles than the OS-specific limit on the number of files that can be open in a single process. In this case the server will recycle the file descriptors.
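The warning threshold itself is internal to Oracle, but the comparison can be approximated: check a planned db_files value against the per-process open-file limit, minus some headroom for non-datafile descriptors. The headroom figure below is an assumption for illustration, not Oracle's actual formula:

```python
# Compare a planned db_files value against the OS per-process open-file
# soft limit. The ~550 descriptors of headroom reserved here for redo
# logs, control files, sockets, etc. is an illustrative guess, not the
# internal Oracle formula that actually triggers the warning.

import resource

HEADROOM = 550  # assumed non-datafile descriptors; tune for your system

def fits_in_fd_limit(db_files, soft_limit=None):
    """Return True if db_files plus headroom stays under the soft limit."""
    if soft_limit is None:
        soft_limit, _hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    return db_files + HEADROOM <= soft_limit

print(fits_in_fd_limit(300, soft_limit=1024))  # True
print(fits_in_fd_limit(550, soft_limit=1024))  # False
```

With these assumed numbers the sketch reproduces the poster's observation: db_files=300 stays quiet while db_files=550 trips the warning on a 1024-descriptor limit.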
Vadim Bobrov
Oracle Database Tools
http://www.fourthelephant.com -
Db_writer_processes alert log message
Oracle 11.2.0.3 running on HP-UX Itanium RX8640 with 16 CPUs and OS B.11.31
Upgraded to 11.2.0.3 last night and now I am receiving the following message in alert.log when starting database:
"NOTE: db_writer_processes has been changed from 4 to 1 due to NUMA requirements"
Any thoughts on what this means?
Is your system NUMA-enabled?
On a NUMA-enabled box, the minimum number of DBWR processes is the number of processor groups; Oracle MUST start this minimum number of DBWRs no matter what parameter value you set.
You seem to have the opposite case, in that Oracle is forcing it to 1. In this case, I'm led to believe that maybe Oracle is mistakenly identifying your system as NUMA, perhaps.
Upload the results for this:
select a.ksppinm "Parameter",
b.ksppstvl "Session Value",
c.ksppstvl "Instance Value"
from x$ksppi a, x$ksppcv b, x$ksppsv c
where a.indx = b.indx and a.indx = c.indx
and a.ksppinm = '_db_block_numa';
You may try to do this:
- Set the following OS variable in your database OS owner user profile: DISABLE_NUMA = true
- Set the DBWR to 4 in the SPFILE and bounce the database.
- Verify if the issue continues. In the OS: ps -ef | grep dbwr to see how many dbwr the instance spawned. -
RMAN ALert Log Message: ALTER SYSTEM ARCHIVE LOG
Created a new Database on Oracle 10.2.0.4 and now seeing "ALTER SYSTEM ARCHIVE LOG" in the Alert Log only when the online RMAN backup runs:
Wed Aug 26 21:52:03 2009
ALTER SYSTEM ARCHIVE LOG
Wed Aug 26 21:52:03 2009
Thread 1 advanced to log sequence 35 (LGWR switch)
Current log# 2 seq# 35 mem# 0: /u01/app/oracle/oradata/aatest/redo02.log
Current log# 2 seq# 35 mem# 1: /u03/oradata/aatest/redo02a.log
Wed Aug 26 21:53:37 2009
ALTER SYSTEM ARCHIVE LOG
Wed Aug 26 21:53:37 2009
Thread 1 advanced to log sequence 36 (LGWR switch)
Current log# 3 seq# 36 mem# 0: /u01/app/oracle/oradata/aatest/redo03.log
Current log# 3 seq# 36 mem# 1: /u03/oradata/aatest/redo03a.log
Wed Aug 26 21:53:40 2009
Starting control autobackup
Control autobackup written to DISK device
handle '/u03/exports/backups/aatest/c-2538018370-20090826-00'
I am not issuing a log switch command. The RMAN commands I am running are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 2;
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/u03/exports/backups/aatest/%F';
CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO COMPRESSED BACKUPSET;
CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/u03/exports/backups/aatest/%d_%U';
BACKUP DATABASE PLUS ARCHIVELOG;
DELETE NOPROMPT OBSOLETE;
DELETE NOPROMPT ARCHIVELOG UNTIL TIME 'SYSDATE-2';
I do not see this message on any other 10.2.0.4 instances. Has anyone seen this and if so why is this showing in the log?
Thank you,
Curt Swartzlander
There's no problem with the log switch. Please refer to the documentation for more information on the "PLUS ARCHIVELOG" syntax:
http://download.oracle.com/docs/cd/B19306_01/backup.102/b14192/bkup003.htm#sthref377
Adding BACKUP ... PLUS ARCHIVELOG causes RMAN to do the following:
*1. Runs the ALTER SYSTEM ARCHIVE LOG CURRENT command.*
*2. Runs BACKUP ARCHIVELOG ALL. Note that if backup optimization is enabled, then RMAN skips logs that it has already backed up to the specified device.*
*3. Backs up the rest of the files specified in BACKUP command.*
*4. Runs the ALTER SYSTEM ARCHIVE LOG CURRENT command.*
*5. Backs up any remaining archived logs generated during the backup.*
This guarantees that datafile backups taken during the command are recoverable to a consistent state. -
Prevent message to go to the alert.log file
Hi
How can I prevent any message (for exm. SHUTDOWN IMMEDIATE) to be written to alert.log file? Is it possible?
Thanks
John
Of course this can be done with some scripting, but it's a lot of work and totally worthless. Why don't you want it to be shown in your alert log?
If "shutdown immediate" is present,
then cache the alert log and rewrite it to a new alert log without the text "shutdown immediate"; but as I said earlier, useless. :)
Hi All,
I'm getting a message in the alert log file as follows -
Mon Jun 22 15:06:29 2009
ARC0: Completed archiving log 7 thread 1 sequence 35246
ARC0: Evaluating archive log 8 thread 1 sequence 35247
ARC0: Unable to archive log 8 thread 1 sequence 35247
Log actively being archived by another process
ARC0: Evaluating archive log 5 thread 1 sequence 35248
ARC0: Beginning to archive log 5 thread 1 sequence 35248
Creating archive destination LOG_ARCHIVE_DEST_1: 'F:\ORACLE\ARC\ARC35248.001'
My database is running Oracle version 9iR2; a total of 4 redo log groups are available, each group has two members, and the OS is Windows 2003.
Need your help.
You're archiving through the network?
SQL> show parameter LOG_ARCHIVE_DEST
check metalink:260040.1 perhaps can help
When archiving locally and remotely using the ARCH process where the remote destination is across a saturated or slow network you can receive the following errors in the alert log:
ARC0: Evaluating archive log 2 thread 1 sequence 100
ARC0: Unable to archive log 2 thread 1 sequence 100
Log actively being archived by another process
If the ARCH process is unable to archive at the rate at which online logs are switched, then it is possible for the primary database to suspend while waiting for archiving to complete. The following discussion describes how this can occur.
Edited by: Surachart Opun (HunterX) on Jun 23, 2009 8:23 PM -
A002: warning messages in alert log
Can anyone tell me if this is really an informational message or if it is indicating a problem?
It is appearing on the apply system (10gr3)
Thu Jul 21 17:20:08 2005
A002: warning -- apply server 1, sid 118 waiting for event (since 5456 seconds):
A002: [rdbms ipc message] timeout=1f4, =0, =0
Thanks
This message is informational only. There was a bug opened for the apply process writing too many messages into the alert log. This issue has been fixed in 10.2, and the fix was included in PSR 10.1.0.4. Please apply this patchset.
-
Hi ,
Following is the message from the alert log file and trace files.
This error was encountered during an RMAN backup, but the backup was fine.
I didn't get any errors in the RMAN log file.
Can anybody tell me what the problem and solution are?
----------- alert log file --------------------
Thu Mar 9 02:14:13 2006
Errors in file /app/oracle/admin/SOFPDWB4/udump/sofpdwb4_ora_950328.trc:
ORA-27037: unable to obtain file status
IBM AIX RISC System/6000 Error: 2: No such file or directory
Additional information: 4
Thu Mar 9 02:14:13 2006
Errors in file /app/oracle/admin/SOFPDWB4/udump/sofpdwb4_ora_1769540.trc:
ORA-27037: unable to obtain file status
IBM AIX RISC System/6000 Error: 2: No such file or directory
Additional information: 4
Thu Mar 9 02:14:13 2006
Errors in file /app/oracle/admin/SOFPDWB4/udump/sofpdwb4_ora_1392642.trc:
ORA-27037: unable to obtain file status
IBM AIX RISC System/6000 Error: 2: No such file or directory
Additional information: 4
----------- trace files -------------------
/app/oracle/admin/SOFPDWB4/udump/sofpdwb4_ora_950328.trc
Oracle9i Enterprise Edition Release 9.2.0.5.0 - 64bit Production
With the Partitioning and Oracle Data Mining options
JServer Release 9.2.0.5.0 - Production
ORACLE_HOME = /app/oracle/product/9.2.0
System name: AIX
Node name: sof016
Release: 1
Version: 5
Machine: 000CEF9C4C00
Instance name: SOFPDWB4
Redo thread mounted by this instance: 1
Oracle process number: 24
Unix process pid: 950328, image: oracle@sof016 (TNS V1-V3)
*** 2006-03-09 02:14:13.053
*** SESSION ID:(77.60408) 2006-03-09 02:14:13.040
ORA-27037: unable to obtain file status
IBM AIX RISC System/6000 Error: 2: No such file or directory
Additional information: 4
~
/app/oracle/admin/SOFPDWB4/udump/sofpdwb4_ora_1769540.trc
Oracle9i Enterprise Edition Release 9.2.0.5.0 - 64bit Production
With the Partitioning and Oracle Data Mining options
JServer Release 9.2.0.5.0 - Production
ORACLE_HOME = /app/oracle/product/9.2.0
System name: AIX
Node name: sof016
Release: 1
Version: 5
Machine: 000CEF9C4C00
Instance name: SOFPDWB4
Redo thread mounted by this instance: 1
Oracle process number: 25
Unix process pid: 1769540, image: oracle@sof016 (TNS V1-V3)
*** 2006-03-09 02:14:13.311
*** SESSION ID:(53.5721) 2006-03-09 02:14:13.310
ORA-27037: unable to obtain file status
IBM AIX RISC System/6000 Error: 2: No such file or directory
Additional information: 4
/app/oracle/admin/SOFPDWB4/udump/sofpdwb4_ora_1392642.trc
Oracle9i Enterprise Edition Release 9.2.0.5.0 - 64bit Production
With the Partitioning and Oracle Data Mining options
JServer Release 9.2.0.5.0 - Production
ORACLE_HOME = /app/oracle/product/9.2.0
System name: AIX
Node name: sof016
Release: 1
Version: 5
Machine: 000CEF9C4C00
Instance name: SOFPDWB4
Redo thread mounted by this instance: 1
Oracle process number: 27
Unix process pid: 1392642, image: oracle@sof016 (TNS V1-V3)
*** 2006-03-09 02:14:13.549
*** SESSION ID:(59.52476) 2006-03-09 02:14:13.548
ORA-27037: unable to obtain file status
IBM AIX RISC System/6000 Error: 2: No such file or directory
Additional information: 4
Hello, it looks like when backing up the database using RMAN it did not find the archivelogs, hence you got the errors. Try doing "crosscheck archivelog all" and see.
-Sri
<< Symptoms >>
Archivelog backup using RMAN failed with error :
RMAN-03002: failure of backup command at 08/18/2005 14:51:16
RMAN-06059: expected archived log not found, lost of archived log compromises recoverability
ORA-19625: error identifying file /arch/arch2/1_266_563489673.dbf
ORA-27037: unable to obtain file status
IBM AIX RISC System/6000 Error: 2: A file or directory in the path name does not exist.
Additional information: 3
<<Cause>>
RMAN gets the information about which archivelog files are required for backup from the v$archived_log view.
RMAN cannot find the archivelog file in the archivelog destination, so it cannot continue taking the backup because this file does not exist.
<<Solution>>
Check if the archivelog exists in the log archive destination. If it was moved to some other location, then restore the file back to its original location and then run the RMAN backup.
else
RMAN> crosscheck archivelog all;
After the crosscheck, take the RMAN backup. -
Alert log contains ARCH and FRA mount/dismount messages
Hi Guys,
We see messages like:
SUCCESS: diskgroup DBNAME_ARCH was mounted
SUCCESS: diskgroup DBNAME_ARCH was dismounted
SUCCESS: diskgroup DBNAME_FRA was mounted
SUCCESS: diskgroup DBNAME_FRA was dismounted
in our alert.log file for a RAC DB. This message coincides with the creation of archive logs.
Do we know why the DB keeps mounting and dismounting these diskgroups?
Why not keep them mounted as long as the database (instance) is up? Is it because these diskgroups don't have any live db file, so they are dismounted as soon as the archive log is created?
Any suggestions/ideas would be welcome.
Regards.
Hi,
There is a Metalink Note 361173.1 which states that this is expected behavior, so you can ignore these messages.
-Amit
http://askoracledba.wordpress.com -
Production database Alert log Message
Hi,
I am using Oracle 10gR1 on Windows. Recently I created a physical standby database (sid=smtm) for my production database (sid=mtm). My production alert log file displays something like the following; please suggest what it shows.
Mon Mar 02 00:18:31 2009
Private_strands 7 at log switch
Thread 1 advanced to log sequence 35722
Current log# 4 seq# 35722 mem# 0: D:\ORACLE\PRODUCT\10.1.0\ORADATA\MTM\REDO04.LOG
Current log# 4 seq# 35722 mem# 1: D:\ORACLE\PRODUCT\10.1.0\ORADATA\MTM\REDO04_A.LOG
Mon Mar 02 00:18:31 2009
ARC1: Evaluating archive log 2 thread 1 sequence 35721
ARC1: Destination LOG_ARCHIVE_DEST_2 archival not expedited
Committing creation of archivelog 'E:\ORACLE\MTM\ARCHIVES\MTM_945F37AC_1_35721_500044525.ARC'
Invoking non-expedited destination LOG_ARCHIVE_DEST_2 thread 1 sequence 35721 host SMTM
*FAL[server, ARC1]: Begin FAL noexpedite archive (branch 500044525 thread 1 sequence 35721 dest SMTM)*
*FAL[server, ARC1]: Complete FAL noexpedite archive (thread 1 sequence 35721 destination SMTM)*
Mon Mar 02 00:29:42 2009
Private_strands 7 at log switch
Thread 1 advanced to log sequence 35723
Current log# 3 seq# 35723 mem# 0: D:\ORACLE\PRODUCT\10.1.0\ORADATA\MTM\REDO03.LOG
Current log# 3 seq# 35723 mem# 1: D:\ORACLE\PRODUCT\10.1.0\ORADATA\MTM\REDO03_A.LOG
Mon Mar 02 00:29:42 2009
ARC1: Evaluating archive log 4 thread 1 sequence 35722
ARC1: Destination LOG_ARCHIVE_DEST_2 archival not expedited
Committing creation of archivelog 'E:\ORACLE\MTM\ARCHIVES\MTM_945F37AC_1_35722_500044525.ARC'
Invoking non-expedited destination LOG_ARCHIVE_DEST_2 thread 1 sequence 35722 host SMTM
*FAL[server, ARC1]: Begin FAL noexpedite archive (branch 500044525 thread 1 sequence 35722 dest SMTM)*
*FAL[server, ARC1]: Complete FAL noexpedite archive (thread 1 sequence 35722 destination SMTM)*
Thanks
Sorry, I have no Metalink account.
On the production database there is no ORA- error, and on the standby database the alert log shows:
Mon Mar 02 03:40:38 2009
RFS[1]: No standby redo logfiles created
RFS[1]: Archived Log: 'E:\ORACLE\PRODUCT\10.1.0\ARCHIVES\MTM_945F37AC_1_35728_500044525.ARC'
Committing creation of archivelog 'E:\ORACLE\PRODUCT\10.1.0\ARCHIVES\MTM_945F37AC_1_35728_500044525.ARC'
RFS[1]: No standby redo logfiles created
RFS[1]: Archived Log: 'E:\ORACLE\PRODUCT\10.1.0\ARCHIVES\MTM_945F37AC_1_35729_500044525.ARC'
Committing creation of archivelog 'E:\ORACLE\PRODUCT\10.1.0\ARCHIVES\MTM_945F37AC_1_35729_500044525.ARC'
RFS[1]: No standby redo logfiles created
RFS[1]: Archived Log: 'E:\ORACLE\PRODUCT\10.1.0\ARCHIVES\MTM_945F37AC_1_35730_500044525.ARC'
Committing creation of archivelog 'E:\ORACLE\PRODUCT\10.1.0\ARCHIVES\MTM_945F37AC_1_35730_500044525.ARC'
Mon Mar 02 04:29:14 2009
RFS[1]: No standby redo logfiles created
RFS[1]: Archived Log: 'E:\ORACLE\PRODUCT\10.1.0\ARCHIVES\MTM_945F37AC_1_35731_500044525.ARC'
Committing creation of archivelog 'E:\ORACLE\PRODUCT\10.1.0\ARCHIVES\MTM_945F37AC_1_35731_500044525.ARC'
Media Recovery Log
ORA-279 signalled during: ALTER DATABASE RECOVER standby database ...
Mon Mar 02 11:01:57 2009
ALTER DATABASE RECOVER CONTINUE DEFAULT
Mon Mar 02 11:01:57 2009
Media Recovery Log E:\ORACLE\PRODUCT\10.1.0\ARCHIVES\MTM_945F37AC_1_35553_500044525.ARC
ORA-279 signalled during: ALTER DATABASE RECOVER CONTINUE DEFAULT ...
Mon Mar 02 11:02:05 2009
ALTER DATABASE RECOVER CONTINUE DEFAULT
Media Recovery Log E:\ORACLE\PRODUCT\10.1.0\ARCHIVES\MTM_945F37AC_1_35554_500044525.ARC
ORA-279 signalled during: ALTER DATABASE RECOVER CONTINUE DEFAULT ...
Mon Mar 02 11:02:14 2009
ALTER DATABASE RECOVER CONTINUE DEFAULT
Media Recovery Log E:\ORACLE\PRODUCT\10.1.0\ARCHIVES\MTM_945F37AC_1_35555_500044525.ARC
ORA-279 signalled during: ALTER DATABASE RECOVER CONTINUE DEFAULT ...
Regards
Thanks for the reply.