Rogue alert log in $ORACLE_HOME/dbs

Hi there, I have looked all over and not yet found an answer to my query. I have a database instance running at 10.2.0.4.0. A rogue alert_{ORACLE_SID}.log is created in the $ORACLE_HOME/dbs directory when the instance is started. Show parameter background_dump_dest points to the correct directory, $ORACLE_ADMIN/$ORACLE_SID/bdump, whether using an spfile or a pfile. To confirm what is happening I shut down the instance, deleted both alert logs from the two locations, and restarted the instance; again two alert logs were created. Can anyone please advise why this is happening? I have found threads suggesting that logs are created in $ORACLE_HOME/dbs by default if parameters are not set correctly, but here an alert log is being created in two locations at startup.
In $ORACLE_ADMIN/$ORACLE_SID/bdump, alert_DBAASS06.log shows:
[oracle@el64dev03 bdump]$ cat alert_DBAASS06.log
Wed Jul 24 09:29:19 2013
Starting ORACLE instance (normal)
LICENSE_MAX_SESSION = 0
LICENSE_SESSIONS_WARNING = 0
Shared memory segment for instance monitoring created
Picked latch-free SCN scheme 3
Autotune of undo retention is turned on.
IMODE=BR
ILAT =30
LICENSE_MAX_USERS = 0
SYS auditing is disabled
ksdpec: called for event 13740 prior to event group initialization
Starting up ORACLE RDBMS Version: 10.2.0.4.0.
System parameters with non-default values:
  processes                = 250
  __shared_pool_size       = 92274688
  __large_pool_size        = 4194304
  __java_pool_size         = 4194304
  __streams_pool_size      = 0
  nls_language             = ENGLISH
  nls_territory            = UNITED KINGDOM
  sga_target               = 327155712
  control_files            = /oradata/10.2.0/DBAASS06/DBAASS06ctrl01.ctl, /oradata/10.2.0/DBAASS06/DBAASS06ctrl02.ctl
  db_block_size            = 8192
  __db_cache_size          = 218103808
  compatible               = 10.2.0.4.0
  log_archive_dest_1       = LOCATION=/oradata/10.2.0/DBAASS06/archive
  log_archive_format       = DBAASS06%t_%s_%r.dbf
  db_file_multiblock_read_count= 16
  undo_management          = AUTO
  undo_tablespace          = UNDO
  remote_login_passwordfile= EXCLUSIVE
  db_domain                = bctn.wheatley-associates.co.uk
  utl_file_dir             = *
  job_queue_processes      = 10
  background_dump_dest     = /orahome/app/oracle/admin/DBAASS06/bdump
  user_dump_dest           = /orahome/app/oracle/admin/DBAASS06/udump
  core_dump_dest           = /orahome/app/oracle/admin/DBAASS06/cdump
  session_max_open_files   = 20
  sort_area_size           = 65536
  db_name                  = DBAASS06
  db_unique_name           = DBAASS06
  open_cursors             = 300
  pga_aggregate_target     = 31457280
PMON started with pid=2, OS id=13977
PSP0 started with pid=3, OS id=13979
MMAN started with pid=4, OS id=13981
DBW0 started with pid=5, OS id=13983
DBW1 started with pid=6, OS id=13985
LGWR started with pid=7, OS id=13987
CKPT started with pid=8, OS id=13989
SMON started with pid=9, OS id=13991
RECO started with pid=10, OS id=13993
CJQ0 started with pid=11, OS id=13995
MMON started with pid=12, OS id=13997
MMNL started with pid=13, OS id=13999
Wed Jul 24 09:29:57 2013
ALTER DATABASE   MOUNT
Wed Jul 24 09:30:01 2013
Setting recovery target incarnation to 2
Wed Jul 24 09:30:01 2013
Successful mount of redo thread 1, with mount id 3614370965
Wed Jul 24 09:30:01 2013
Database mounted in Exclusive Mode
Completed: ALTER DATABASE   MOUNT
Wed Jul 24 09:30:01 2013
ALTER DATABASE OPEN
Wed Jul 24 09:30:01 2013
LGWR: STARTING ARCH PROCESSES
ARC0 started with pid=15, OS id=14163
Wed Jul 24 09:30:01 2013
ARC0: Archival started
ARC1: Archival started
LGWR: STARTING ARCH PROCESSES COMPLETE
ARC1 started with pid=16, OS id=14165
Wed Jul 24 09:30:01 2013
Thread 1 opened at log sequence 14
  Current log# 2 seq# 14 mem# 0: /oradata/10.2.0/DBAASS06/DBAASS06_redo02.log
Successful open of redo thread 1
Wed Jul 24 09:30:01 2013
MTTR advisory is disabled because FAST_START_MTTR_TARGET is not set
Wed Jul 24 09:30:01 2013
ARC1: Becoming the 'no FAL' ARCH
ARC1: Becoming the 'no SRL' ARCH
Wed Jul 24 09:30:01 2013
ARC0: Becoming the heartbeat ARCH
Wed Jul 24 09:30:01 2013
SMON: enabling cache recovery
Wed Jul 24 09:30:01 2013
Successfully onlined Undo Tablespace 1.
Wed Jul 24 09:30:01 2013
SMON: enabling tx recovery
Wed Jul 24 09:30:01 2013
Database Characterset is WE8MSWIN1252
Opening with internal Resource Manager plan
where NUMA PG = 1, CPUs = 12
replication_dependency_tracking turned off (no async multimaster replication found)
Starting background process QMNC
QMNC started with pid=17, OS id=14167
Wed Jul 24 09:30:01 2013
Completed: ALTER DATABASE OPEN
[oracle@el64dev03 bdump]$
A trace file is also created:
[oracle@el64dev03 bdump]$ cat dbaass06_lgwr_13987.trc
/orahome/app/oracle/admin/DBAASS06/bdump/dbaass06_lgwr_13987.trc
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORACLE_HOME = /orahome/app/oracle/product/10.2.0/db_1
System name:    Linux
Node name:      el64dev03.bctn.wheatley-associates.co.uk
Release:        2.6.32-300.39.2.el5uek
Version:        #1 SMP Wed Dec 19 14:56:59 PST 2012
Machine:        x86_64
Instance name: DBAASS06
Redo thread mounted by this instance: 1
Oracle process number: 7
Unix process pid: 13987, image: [email protected]
*** 2013-07-24 09:30:01.622
*** SERVICE NAME:() 2013-07-24 09:30:01.622
*** SESSION ID:(275.1) 2013-07-24 09:30:01.622
Maximum redo generation record size = 156160 bytes
Maximum redo generation change vector size = 150676 bytes
tkcrrsarc: (WARN) Failed to find ARCH for message (message:0x10)
tkcrrpa: (WARN) Failed initial attempt to send ARCH message (message:0x10)
In $ORACLE_HOME/dbs, alert_DBAASS06.log shows:
[oracle@el64dev03 dbs]$ cat alert_DBAASS06.log
Wed Jul 24 09:29:19 2013
Adjusting the default value of parameter parallel_max_servers
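Worth noting: the $ORACLE_HOME/dbs copy above holds only a line emitted very early in startup, before the parameter list is printed. One quick way to see exactly which lines are unique to the rogue copy is to `comm` the two files. A minimal sketch; the two tiny logs fabricated below are stand-ins so the pipeline runs as-is, and in real use you would substitute your own $ORACLE_HOME/dbs and bdump alert log paths:

```shell
# Sketch: list the lines that appear only in the rogue $ORACLE_HOME/dbs alert log.
# The two tiny logs fabricated below are stand-ins; point the comm call at your
# own alert_$ORACLE_SID.log files instead.
workdir=$(mktemp -d)
printf '%s\n' 'Wed Jul 24 09:29:19 2013' \
  'Adjusting the default value of parameter parallel_max_servers' > "$workdir/rogue.log"
printf '%s\n' 'Wed Jul 24 09:29:19 2013' \
  'Starting ORACLE instance (normal)' > "$workdir/real.log"
# comm -23 keeps lines unique to the first (sorted) input:
only_rogue=$(comm -23 <(sort "$workdir/rogue.log") <(sort "$workdir/real.log"))
echo "$only_rogue"
rm -rf "$workdir"
```

Here the only unique line is the early parallel_max_servers adjustment, which would be consistent with messages logged before the dump destinations are applied landing in the default location.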

Thanks for the reply. The source of the cloned database looks like it uses an spfile on startup. On the test box, as I hadn't checked this at the time it was cloned, the spfile was not copied over, so I'm assuming it would have used the default init.ora that was created, which had the correct dump dest defined. As for the new database, it was created from scratch, so I don't see why it should not work correctly. Unfortunately, as I am new to this establishment, I have no history of how Oracle was installed on either machine or how any of the other databases were created, though I do understand that many may have been cloned.
So while I can't speak for the databases that already existed on the dev box, it seemed odd that they all appeared to be doing the same thing. I found one that hadn't been started for a while, for which there was no historic alert log in dbs, and started it to see what happened. It had no spfile, started with a pfile, and did not create a rogue log. I created an spfile, restarted, and still no rogue log. I ran show parameters and compared against a similar database that had a rogue log (DASSEN02 v DASSEN04); the only differences were the instance name, and all the directories (bdump, cdump, etc.) existed correctly. I had a further look at RMAN, as some but not all databases had snapcf files in the dbs directory, but again some that had snap files had rogue alert logs and others didn't. I also double-checked file permissions on the bdump directory; no issues there. Unexplained issues really do bug me, but I think I'll just have to live with not knowing for now, until I have a spare few hours to look at it again.
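For that kind of DASSEN02-vs-DASSEN04 comparison, diffing spooled parameter listings is the quickest way to prove two instances really match. A minimal sketch; the two files fabricated below are stand-ins for real spools (which you would produce in sqlplus with spool / show parameter / spool off):

```shell
# Sketch: diff two spooled "show parameter" listings from different instances.
# The two files below are fabricated stand-ins so the diff step itself runs.
a=$(mktemp)
b=$(mktemp)
printf 'background_dump_dest  /u01/admin/DASSEN02/bdump\ninstance_name  DASSEN02\n' > "$a"
printf 'background_dump_dest  /u01/admin/DASSEN04/bdump\ninstance_name  DASSEN04\n' > "$b"
changes=$(diff "$a" "$b" || true)   # diff exits 1 when the files differ; that's expected
echo "$changes"
rm -f "$a" "$b"
```

With real spools, anything beyond instance-specific names and paths in the diff output would be a lead worth chasing.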

Similar Messages

  • During daily refresh/clone, ALTER DATABASE RENAME FILE generates logs under $ORACLE_HOME/dbs with names like c-1437102747-20130920-16

    Has anyone seen this behavior?
      DB version: 11.1.0.7
      OS: HP-UX Itanium
      EBS: 11.5.10.2
    We are doing a daily refresh/clone of a database instance from production. Recently we have seen the $ORACLE_HOME/dbs directory grow during this refresh. During investigation we found that:
      1) When we rename the database files after database restoration, each of the commands below generates a 24 MB log file under $ORACLE_HOME/dbs.
      Commands:
      alter database rename file '/db02/prod/XDB.dbf' to '/db02/test/XDB.dbf';
      alter database rename file '/db02/prod/a_archive01.dbf' to '/db02/test/a_archive01.dbf';
      Log files under $ORACLE_HOME/dbs on the target:
      -rw-r----- xxxxx 24379392 Sep 20 05:30 ./dbs/c-1437102747-20130920-02
      -rw-r----- xxxxx 24379392 Sep 20 05:30 ./dbs/c-1437102747-20130920-03
      2) After a few minutes, these logs are removed from the directory.
      3) We did not find anything unusual in the alert log.

    These are controlfile autobackups. Every time you make a physical change to the database structure, an autobackup is created. In 11.2 the frequency is reduced: for example, if you make 5 changes in quick succession, only one autobackup is created.
    CONTROLFILE AUTOBACKUP ON would be visible when you do a SHOW ALL in RMAN.
    Hemant K Chitale
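For reference, the default autobackup name follows the pattern c-<dbid>-<yyyymmdd>-<seq>, where the trailing piece is a hex sequence number. A small shell sketch that splits one of the names from the post into its parts:

```shell
# Sketch: split a controlfile-autobackup name of the default c-DBID-YYYYMMDD-QQ
# form into its parts (QQ is a hex sequence number).
name='c-1437102747-20130920-16'
IFS=- read -r _ dbid day seq <<< "$name"
echo "dbid=$dbid date=$day seq=$seq"
```

That dbid should match the DBID reported by the instance, which is a quick way to confirm which database wrote a stray autobackup.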

  • How to access the alert log under $ORACLE_HOME/saptrace/background

    Hi,
    Using the alert log under $ORACLE_HOME/saptrace/background:
    Is it possible to determine whether the database is currently in an intermediate start or stop phase?
    If so, how can I do this from the alert log?
    with regards
    vijay

    Dear Vijay ,
    Q) Is it possible to determine whether the database is currently in an intermediate start or stop phase? - Yes, the log will provide that information.
    Q) If so, how can I do this from the alert log? - Under $ORACLE_HOME/saptrace/background you will find the log file alert_<SID>.log.
    Regards ,
    Santosh Karadkar

  • Database Generating Errors in Alert Log

    Hi,
    my database is generating errors in the alert log:
    Errors in file /export/home/app/oracle/diag/rdbms/ORACLE_SID/ORACLE_SID/trace/ORACLE_SID_j000_15845.trc (incident=44144):
    ORA-00600: internal error code, arguments: [kdsgrp1], [], [], [], [], [], [], [], [], [], [], []
    ORA-00001: unique constraint (SYSMAN.PK_MGMT_JOB_EXECUTION) violated
    DDE: Problem Key 'ORA 600 [13011]' was completely flood controlled (0x4)
    Further messages for this problem key will be suppressed for up to 10 minutes
    Looking forward to your assistance.
    Mike

    Tue May 22 12:55:56 2012
    Adjusting the default value of parameter parallel_max_servers
    from 960 to 285 due to the value of parameter processes (300)
    Starting ORACLE instance (normal)
    Tue May 22 13:00:16 2012
    Adjusting the default value of parameter parallel_max_servers
    from 960 to 285 due to the value of parameter processes (300)
    Starting ORACLE instance (normal)
    LICENSE_MAX_SESSION = 0
    LICENSE_SESSIONS_WARNING = 0
    Shared memory segment for instance monitoring created
    Picked latch-free SCN scheme 3
    Using LOG_ARCHIVE_DEST_1 parameter default value as /export/home/app/oracle/product/11.2.0/dbhome_1/dbs/arch
    Autotune of undo retention is turned on.
    IMODE=BR
    ILAT =52
    LICENSE_MAX_USERS = 0
    SYS auditing is disabled
    Starting up:
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options.
    ORACLE_HOME = /export/home/app/oracle/product/11.2.0/dbhome_1
    System name:     SunOS
    Node name:     server_1
    Release:     5.10
    Version:     Generic_141445-09
    Machine:     i86pc

  • Core file in $ORACLE_HOME/dbs

    Hi All,
    Database vertion: 11.2.0.1
    OS: SunOS with Sun cluster
    Core files of a huge size are being generated in $ORACLE_HOME/dbs, and $ORACLE_HOME reaches 100% full.
    Can anyone tell me why these core files are being generated?
    background_core_dump is Partial
    Regards,
    Prasanna

    See CORE_DUMP_DEST in the docs. You can change it to somewhere with more room. Also see http://www.orafaq.com/faq/what_should_one_do_with_those_core_files
    If you don't get a hint from the file command or the alert log as to what is causing these, then you have to deal with Oracle support.
    You can also limit core size from the OS side, details depend on OS version.
    background_core_dump = partial means the SGA is not dumped with background-process core dumps. Are background processes dumping core?
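On the OS side, the usual lever is the core-file size resource limit. A minimal sketch (run in a subshell here so the demo doesn't change the calling shell's own limit); persistent settings typically live in /etc/security/limits.conf on Linux, but the mechanism varies by OS:

```shell
# Sketch: cap the maximum core-file size for processes started from a shell.
# The subshell keeps the demo from altering the caller's limit.
limit=$( (ulimit -c 0; ulimit -c) )   # 0 suppresses core files; a block count allows small ones
echo "core limit inside subshell: $limit"
```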

  • How to view alert log?

    I tried clicking on the XML alert log, but it opens in IE and tells me "Cannot view XML input using XSL style sheet... Only one top-level element allowed in an XML document". I don't see any adrci, and I can't find any text-file alert log. The trace directory only has files beginning with cdump, and the database and dbs directories don't have it. And nothing about it in the docs?
    I hope I'm missing something obvious. The database is running. XP Pro SP3.

    Udo wrote:
    Hello Joel,
    the good old text alert log is still there, it just moved a bit. The default location would be ORACLE_HOME\diag\rdbms\xe\xe\trace, e.g. D:\oracle\product\database_xe_11_2\app\oracle\diag\rdbms\xe\xe\trace for the instance on my machine.
    Yep, that's the trace directory I was looking in; it only has an XML.
    See this thread for further hints: {thread:id=2281565} (You had an extra ampersand in the thread id.) Yeah, 'Diag Trace' says the same directory.
    Anyone know how to get the css right? I'm clueless about such things.
    -Udo
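If adrci isn't to hand, the XML alert log can also be read with plain text tools, since each record carries its message text inside a <txt> element. A rough sketch; the <msg>/<txt> layout is an assumption based on a typical 11g log.xml record (check your own file), and the sample line is inlined so the pipeline runs as-is:

```shell
# Rough sketch: pull the message text out of an alert-log XML record without adrci.
# The record below is an inlined sample; in real use you'd pipe log.xml through sed.
sample='<msg time="2013-07-24T09:30:01"><txt>Completed: ALTER DATABASE OPEN</txt></msg>'
txt=$(printf '%s\n' "$sample" | sed -n 's:.*<txt>\(.*\)</txt>.*:\1:p')
echo "$txt"
```

This is a crude line-at-a-time extraction, not a real XML parse, so multi-line messages would need extra handling.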

  • ORA-00205: error in identifying controlfile, check alert log for more info

    Hello All,
    I am performing my first Collab Suite install in quite some time, and I have successfully installed Collaboration Suite on Linux Red Hat AS 4. I first performed an infrastructure installation for the Collab Suite database, which completed with no errors; however, it appears that no control files were created in my $ORACLE_HOME/dbs directory during installation. I am also unable to specify their exact location in the initSID.ora file, as I am unable to find them in any other directory on my server. The database starts despite the control file error, as you can see from the second SQL*Plus prompt. However, I am unsure how to create the control files or correct this error. Any assistance would be greatly appreciated!
    SQL*Plus: Release 10.1.0.4.2 - Production on Wed Mar 21 09:51:49 2007
    Copyright (c) 1982, 2005, Oracle. All rights reserved.
    SQL> connect SYS as SYSDBA
    Enter password:
    Connected to an idle instance.
    SQL> startup
    ORACLE instance started.
    Total System Global Area 100663296 bytes
    Fixed Size 778024 bytes
    Variable Size 99623128 bytes
    Database Buffers 0 bytes
    Redo Buffers 262144 bytes
    ORA-00205: error in identifying controlfile, check alert log for more info
    ======================================================================
    SQL*Plus: Release 10.1.0.4.2 - Production on Wed Mar 21 11:18:57 2007
    Copyright (c) 1982, 2005, Oracle. All rights reserved.
    SQL> connect SYS as SYSDBA
    Enter password:
    Connected.
    SQL> startup nomount
    ORA-01081: cannot start already-running ORACLE - shut it down first
    SQL> exit
    Disconnected from Oracle Database 10g Enterprise Edition Release 10.1.0.4.2 - Production
    With the Partitioning, OLAP and Data Mining options

    Control files are database-specific, since they contain information describing the database (location of data files, etc.) You can't copy them from one database to another.
    Do you have a backup?

  • Change of date in alert log

    Hi
    Oracle Version 10.2.0.3.0
    Last Friday we had a power failure and a server rebooted abruptly. After it came online I restarted the database; the db did an instance recovery and came online without any problems. However, when I checked the alert log file I noticed that the date and timestamp had gone back 14 days. This was there for a while, and then it started showing the current date and timestamp. Is that normal? If not, could someone help me figure out why this happened?
    Fri Feb 27 21:26:29 2009
    Starting ORACLE instance (normal)
    LICENSE_MAX_SESSION = 0
    LICENSE_SESSIONS_WARNING = 0
    Picked latch-free SCN scheme 2
    Using LOG_ARCHIVE_DEST_1 parameter default value as /opt/oracle/product/10.2/db_1/dbs/arch
    Autotune of undo retention is turned on.
    IMODE=BR
    ILAT =121
    LICENSE_MAX_USERS = 0
    SYS auditing is disabled
    ksdpec: called for event 13740 prior to event group initialization
    Starting up ORACLE RDBMS Version: 10.2.0.3.0.
    System parameters with non-default values:
    processes = 1000
    sessions = 1105
    __shared_pool_size = 184549376
    __large_pool_size = 16777216
    __java_pool_size = 16777216
    __streams_pool_size = 0
    nls_language = ENGLISH
    nls_territory = UNITED KINGDOM
    filesystemio_options = SETALL
    sga_target = 1577058304
    control_files = /opt/oracle/oradata/rep/control.001.dbf, /opt/oracle/oradata/rep/control.002.dbf, /opt/oracle/oradata/rep/control.003.dbf
    db_block_size = 8192
    __db_cache_size = 1342177280
    compatible = 10.2.0
    Fri Feb 27 21:26:31 2009
    ALTER DATABASE MOUNT
    Fri Feb 27 21:26:35 2009
    Setting recovery target incarnation to 1
    Fri Feb 27 21:26:36 2009
    Successful mount of redo thread 1, with mount id 740543687
    Fri Feb 27 21:26:36 2009
    Database mounted in Exclusive Mode
    Completed: ALTER DATABASE MOUNT
    Fri Feb 27 21:26:36 2009
    ALTER DATABASE OPEN
    Fri Feb 27 21:26:36 2009
    Beginning crash recovery of 1 threads
    parallel recovery started with 3 processes
    Fri Feb 27 21:26:37 2009
    Started redo scan
    Fri Feb 27 21:26:41 2009
    Completed redo scan
    481654 redo blocks read, 13176 data blocks need recovery
    Fri Feb 27 21:26:50 2009
    Started redo application at
    Thread 1: logseq 25176, block 781367
    Fri Feb 27 21:26:50 2009
    Recovery of Online Redo Log: Thread 1 Group 6 Seq 25176 Reading mem 0
    Mem# 0: /opt/oracle/oradata/rep/redo_a/redo06.log
    Mem# 1: /opt/oracle/oradata/rep/redo_b/redo06.log
    Fri Feb 27 21:26:53 2009
    Completed redo application
    Fri Feb 27 21:27:00 2009
    Completed crash recovery at
    Thread 1: logseq 25176, block 1263021, scn 77945260488
    13176 data blocks read, 13176 data blocks written, 481654 redo blocks read
    Fri Feb 27 21:27:02 2009
    Expanded controlfile section 9 from 1168 to 2336 records
    Requested to grow by 1168 records; added 4 blocks of records
    Thread 1 advanced to log sequence 25177
    Thread 1 opened at log sequence 25177
    Current log# 7 seq# 25177 mem# 0: /opt/oracle/oradata/rep/redo_a/redo07.log
    Current log# 7 seq# 25177 mem# 1: /opt/oracle/oradata/rep/redo_b/redo07.log
    Successful open of redo thread 1
    Fri Feb 27 21:27:02 2009
    MTTR advisory is disabled because FAST_START_MTTR_TARGET is not set
    Fri Feb 27 21:27:02 2009
    SMON: enabling cache recovery
    Fri Feb 27 21:27:04 2009
    Successfully onlined Undo Tablespace 1.
    Fri Feb 27 21:27:04 2009
    SMON: enabling tx recovery
    Fri Feb 27 21:27:04 2009
    Database Characterset is AL32UTF8
    replication_dependency_tracking turned off (no async multimaster replication found)
    Starting background process QMNC
    QMNC started with pid=17, OS id=4563
    Fri Feb 27 21:27:08 2009
    Completed: ALTER DATABASE OPEN
    Fri Feb 27 22:46:04 2009
    Thread 1 advanced to log sequence 25178
    Current log# 8 seq# 25178 mem# 0: /opt/oracle/oradata/rep/redo_a/redo08.log
    Current log# 8 seq# 25178 mem# 1: /opt/oracle/oradata/rep/redo_b/redo08.log
    Fri Feb 27 23:43:49 2009
    Thread 1 advanced to log sequence 25179
    Current log# 9 seq# 25179 mem# 0: /opt/oracle/oradata/rep/redo_a/redo09.log
    Current log# 9 seq# 25179 mem# 1: /opt/oracle/oradata/rep/redo_b/redo09.log
    Fri Mar 13 20:09:29 2009
    MMNL absent for 1194469 secs; Foregrounds taking over
    Fri Mar 13 20:10:16 2009
    Thread 1 advanced to log sequence 25180
    Current log# 10 seq# 25180 mem# 0: /opt/oracle/oradata/rep/redo_a/redo10.log
    Current log# 10 seq# 25180 mem# 1: /opt/oracle/oradata/rep/redo_b/redo10.log
    Fri Mar 13 20:21:17 2009
    Thread 1 advanced to log sequence 25181
    Current log# 1 seq# 25181 mem# 0: /opt/oracle/oradata/rep/redo_a/redo01.log
    Current log# 1 seq# 25181 mem# 1: /opt/oracle/oradata/rep/redo_b/redo01.log

    Yes, you are right. I just found that the server was shut down for more than 4 hours and came back online at 8:08 pm, and I think within a few minutes those old timestamps appeared in the alert log. We have a table which captures the current timestamp from the db and a timestamp from the application, and usually both columns are the same. But the following rows were inserted during the time of the issue. Not sure why this happened. One more thing: the listener was started and running while the database was starting and performing instance recovery.
    DBTimestamp          ApplicationTimestamp
    27-02-2009 21:27:45 13-03-2009 20:08:42
    27-02-2009 21:31:47 13-03-2009 20:08:43
    27-02-2009 21:31:54 13-03-2009 20:08:43
    27-02-2009 21:33:39 13-03-2009 20:08:42
    27-02-2009 21:35:47 13-03-2009 20:08:42
    27-02-2009 21:37:45 13-03-2009 20:08:42
    27-02-2009 21:38:24 13-03-2009 20:08:42
    27-02-2009 21:39:42 13-03-2009 20:08:42
    27-02-2009 21:40:01 13-03-2009 20:08:42
    27-02-2009 21:41:13 13-03-2009 20:08:42
    27-02-2009 21:44:07 13-03-2009 20:08:43
    27-02-2009 21:53:54 13-03-2009 20:08:42
    27-02-2009 22:03:45 13-03-2009 20:08:42
    27-02-2009 22:07:02 13-03-2009 20:08:42
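The two columns in those rows sit almost exactly 14 days apart, which fits the theory that the server clock came up 14 days behind and was corrected a little later (e.g. by NTP). A quick sanity check on the first row, GNU date assumed:

```shell
# Quick check (GNU date assumed): the two timestamp columns are almost exactly
# 14 days apart, consistent with the server clock having come up set back.
db=$(TZ=UTC date -d '2009-02-27 21:27:45' +%s)
app=$(TZ=UTC date -d '2009-03-13 20:08:42' +%s)
gap=$(( (app - db + 43200) / 86400 ))   # round to the nearest whole day
echo "gap: $gap days"
```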

  • Oradism not set up correctly error in the alert log in 9.2.0.4

    Hello,
    I have installed the 9.2.0.4 patch on top of a 9.2.0.1 database.
    The O/S is Solaris 5.9
    The database is set to work with:
    workarea_size_policy=AUTO.
    The database is not working properly compared to other machines we have.
    After looking in the alert log, I found the following error appearing after restart:
    WARNING: -------------------------------
    WARNING: oradism not set up correctly.
    Dynamic ISM can not be locked. Please
    setup oradism, or unset sga_max_size.
    [diagnostic 0, 16, 5001]
    I found the following two notes on the subject, contradicting each other and also contradicting the status of the machine:
    Note:151222.1
    http://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=151222.1
    It tells me not to do the procedure below on 9.2.0.4, though I get the same error and am missing the required etc entries specified.
    Note:262886.1
    http://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=262886.1
    It tells me to copy oradism.sav to oradism, but I already have those.
    Can you please supply a full and DETAILED procedure for what I should do; if it is not a procedure, please specify how I should solve the 'oradism not set up correctly' error.
    Tal Olier
    [email protected]

    Hi Tal,
    Just looked at the notes you referred to. The first doesn't apply to 9.2.0.4, as you state. The other note is for 9.2.0.4e (note the "e"; it stands for embedded). This "e" release is not what you have, so that note doesn't apply either.
    The AR that I referred to is for 10g, but the 9iR2 info is the same. You need to make sure that $ORACLE_HOME/bin/oradism is owned by root.
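A quick ownership check along those lines (GNU stat assumed): the real target would be $ORACLE_HOME/bin/oradism, which per the reply should be owned by root (it is normally installed setuid root so it can lock ISM), but the demo below runs against a scratch file so it is self-contained:

```shell
# Sketch: report whether a file has the expected owner, the way you'd check
# $ORACLE_HOME/bin/oradism (expected owner: root). GNU stat assumed.
check_owner() {  # usage: check_owner <file> <expected-owner>
  if [ "$(stat -c '%U' "$1")" = "$2" ]; then
    echo OK
  else
    echo "WRONG owner: $(stat -c '%U %a' "$1")"
  fi
}
# Demo on a scratch file owned by the current user:
f=$(mktemp)
check_owner "$f" "$(id -un)"
rm -f "$f"
```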

  • DG Observer triggering SIGSEGV Address not mapped to object errors in alert log

    Hi,
    I've got a Data Guard configuration using two 11.2.0.3 single instance databases.  The configuration has been configured for automatic failover and I have an observer running on a separate box.
    This fast-start failover configuration has been in place for about a month, and in the last week numerous SIGSEGV (address not mapped to object) errors have been reported in the alert log. This is happening quite frequently (every 4-5 minutes or so).
    The corresponding trace files show the process triggering the error coming from the observer.
    Has anyone experienced this problem?  I'm at my wits end trying to figure out how to fix the configuration to eliminate this error.
    I must also note that even though this error is occurring a lot, it doesn't seem to be affecting any of the database functionality.
    Help?
    Thanks in advance.
    Beth

    Hi. The following are the alert log message, the trace file generated, and the current values of the Data Guard configuration. In addition, as part of my research I attempted to apply patch 12615660, which did not take care of the issue. I also set the inbound_connection_timeout parameter to 0, and that didn't help either. I'm still researching, but any pointer in the right direction is very much appreciated.
    Error in Alert Log
    Thu Apr 09 10:28:59 2015
    Exception [type: SIGSEGV, Address not mapped to object] [ADDR:0x9] [PC:0x85CE503, nstimexp()+71] [flags: 0x0, count: 1]
    Errors in file /u01/app/oracle/diag/rdbms/<db_unq_name>/<SID>/trace/<SID>_ora_29902.trc  (incident=69298):
    ORA-07445: exception encountered: core dump [nstimexp()+71] [SIGSEGV] [ADDR:0x9] [PC:0x85CE503] [Address not mapped to object] []
    Use ADRCI or Support Workbench to package the incident.
    See Note 411.1 at My Oracle Support for error and packaging details.
    Thu Apr 09 10:29:02 2015
    Sweep [inc][69298]: completed
    Trace file:
    Trace file /u01/app/oracle/diag/rdbms/<db_unq_name>/<SID>/trace/<SID>_ora_29902.trc
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    With the Partitioning and Oracle Label Security options
    ORACLE_HOME = /u01/app/oracle/product/11.2.0.3/dbhome_1
    System name:    Linux
    Node name:      <host name>
    Release:        2.6.32-431.17.1.el6.x86_64
    Version:        #1 SMP Wed May 7 14:14:17 CDT 2014
    Machine:        x86_64
    Instance name: <SID>
    Redo thread mounted by this instance: 1
    Oracle process number: 19
    Unix process pid: 29902, image: oracle@<host name>
    *** 2015-04-09 10:28:59.966
    *** SESSION ID:(416.127) 2015-04-09 10:28:59.966
    *** CLIENT ID:() 2015-04-09 10:28:59.966
    *** SERVICE NAME:(<db_unq_name>) 2015-04-09 10:28:59.966
    *** MODULE NAME:(dgmgrl@<observer host> (TNS V1-V3)) 2015-04-09 10:28:59.966
    *** ACTION NAME:() 2015-04-09 10:28:59.966
    Exception [type: SIGSEGV, Address not mapped to object] [ADDR:0x9] [PC:0x85CE503, nstimexp()+71] [flags: 0x0, count: 1]
    DDE: Problem Key 'ORA 7445 [nstimexp()+71]' was flood controlled (0x6) (incident: 69298)
    ORA-07445: exception encountered: core dump [nstimexp()+71] [SIGSEGV] [ADDR:0x9] [PC:0x85CE503] [Address not mapped to object] []
    ssexhd: crashing the process...
    Shadow_Core_Dump = PARTIAL
    ksdbgcra: writing core file to directory '/u01/app/oracle/diag/rdbms/<db_unq_name>/<SID>/cdump'
    Data Guard Configuration
    DGMGRL> show configuration verbose;
    Configuration - dg_config
      Protection Mode: MaxPerformance
      Databases:
        dbprim - Primary database
        dbstby - (*) Physical standby database
      (*) Fast-Start Failover target
      Properties:
        FastStartFailoverThreshold      = '30'
        OperationTimeout                = '30'
        FastStartFailoverLagLimit       = '180'
        CommunicationTimeout            = '180'
        FastStartFailoverAutoReinstate  = 'TRUE'
        FastStartFailoverPmyShutdown    = 'TRUE'
        BystandersFollowRoleChange      = 'ALL'
    Fast-Start Failover: ENABLED
      Threshold:        30 seconds
      Target:           dbstby
      Observer:         observer_host
      Lag Limit:        180 seconds
      Shutdown Primary: TRUE
      Auto-reinstate:   TRUE
    Configuration Status:
    SUCCESS
    DGMGRL> show database verbose dbprim
    Database - dbprim
      Role:            PRIMARY
      Intended State:  TRANSPORT-ON
      Instance(s):
        DG_CONFIG
      Properties:
        DGConnectIdentifier             = 'dbprim'
        ObserverConnectIdentifier       = ''
        LogXptMode                      = 'ASYNC'
        DelayMins                       = '0'
        Binding                         = 'optional'
        MaxFailure                      = '0'
        MaxConnections                  = '1'
        ReopenSecs                      = '300'
        NetTimeout                      = '30'
        RedoCompression                 = 'DISABLE'
        LogShipping                     = 'ON'
        PreferredApplyInstance          = ''
        ApplyInstanceTimeout            = '0'
        ApplyParallel                   = 'AUTO'
        StandbyFileManagement           = 'MANUAL'
        ArchiveLagTarget                = '0'
        LogArchiveMaxProcesses          = '4'
        LogArchiveMinSucceedDest        = '1'
        DbFileNameConvert               = ''
        LogFileNameConvert              = ''
        FastStartFailoverTarget         = 'dbstby'
        InconsistentProperties          = '(monitor)'
        InconsistentLogXptProps         = '(monitor)'
        SendQEntries                    = '(monitor)'
        LogXptStatus                    = '(monitor)'
        RecvQEntries                    = '(monitor)'
        SidName                         = '<sid>'
        StaticConnectIdentifier         = '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=<db host name>)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=<service_name>)(INSTANCE_NAME=<sid>)(SERVER=DEDICATED)))'
        StandbyArchiveLocation          = 'USE_DB_RECOVERY_FILE_DEST'
        AlternateLocation               = ''
        LogArchiveTrace                 = '0'
        LogArchiveFormat                = '%t_%s_%r.dbf'
        TopWaitEvents                   = '(monitor)'
    Database Status:
    SUCCESS
    DGMGRL> show database verbose dbstby
    Database - dbstby
      Role:            PHYSICAL STANDBY
      Intended State:  APPLY-ON
      Transport Lag:   0 seconds
      Apply Lag:       0 seconds
      Real Time Query: ON
      Instance(s):
        DG_CONFIG
      Properties:
        DGConnectIdentifier             = 'dbstby'
        ObserverConnectIdentifier       = ''
        LogXptMode                      = 'ASYNC'
        DelayMins                       = '0'
        Binding                         = 'optional'
        MaxFailure                      = '0'
        MaxConnections                  = '1'
        ReopenSecs                      = '300'
        NetTimeout                      = '30'
        RedoCompression                 = 'DISABLE'
        LogShipping                     = 'ON'
        PreferredApplyInstance          = ''
        ApplyInstanceTimeout            = '0'
        ApplyParallel                   = 'AUTO'
        StandbyFileManagement           = 'AUTO'
        ArchiveLagTarget                = '0'
        LogArchiveMaxProcesses          = '4'
        LogArchiveMinSucceedDest        = '1'
        DbFileNameConvert               = ''
        LogFileNameConvert              = ''
        FastStartFailoverTarget         = 'dbprim'
        InconsistentProperties          = '(monitor)'
        InconsistentLogXptProps         = '(monitor)'
        SendQEntries                    = '(monitor)'
        LogXptStatus                    = '(monitor)'
        RecvQEntries                    = '(monitor)'
        SidName                         = '<sid>'
        StaticConnectIdentifier         = '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=<db host name>)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=<service_name>)(INSTANCE_NAME=<sid>)(SERVER=DEDICATED)))'
        StandbyArchiveLocation          = 'USE_DB_RECOVERY_FILE_DEST'
        AlternateLocation               = ''
        LogArchiveTrace                 = '0'
        LogArchiveFormat                = '%t_%s_%r.dbf'
        TopWaitEvents                   = '(monitor)'
    Database Status:
    SUCCESS

  • After 10.2.0.2 Upgrade Errors in Alert log

    After upgrading the database to 10.2.0.2, I am receiving the following errors in the alert log during database startup and shutdown:
    Errors in file /upgdb19/oracle/apgl19udb/10.2.0/admin/apgl19u_rosexdevpgl2/udump/apgl19u_ora_692308.trc:
    ORA-00604: error occurred at recursive SQL level 1
    ORA-06521: PL/SQL: Error mapping function
    ORA-06512: at "DMSYS.OLAPIHISTORYRETENTION", line 1
    ORA-06512: at line 15
    Wed Mar 7 18:47:58 2007
    Completed: ALTER DATABASE OPEN
    I am running the Enterprise Edition Database and according to the OUI the OLAP option is installed.
    Any ideas?

    DMSYS relates to Data Mining. You may have installed the software components only; check for invalid database components by running
    @$ORACLE_HOME/rdbms/admin/utlu102s.sql TEXT
    You should see something like this:
    Oracle Database 10.2 Upgrade Status Utility 04-20-2005 05:18:40
    Component Status Version HH:MM:SS
    Oracle Database Server VALID 10.2.0.1.0 00:11:37
    JServer JAVA Virtual Machine VALID 10.2.0.1.0 00:02:47
    Oracle XDK VALID 10.2.0.1.0 00:02:15
    Oracle Database Java Packages VALID 10.2.0.1.0 00:00:48
    Oracle Text VALID 10.2.0.1.0 00:00:28
    Oracle XML Database VALID 10.2.0.1.0 00:01:27
    Oracle Workspace Manager VALID 10.2.0.1.0 00:00:35
    Oracle Data Mining VALID 10.2.0.1.0 00:15:56
    Messaging Gateway VALID 10.2.0.1.0 00:00:11
    OLAP Analytic Workspace VALID 10.2.0.1.0 00:00:28
    OLAP Catalog VALID 10.2.0.1.0 00:00:59
    Oracle OLAP API VALID 10.2.0.1.0 00:00:53
    Oracle interMedia VALID 10.2.0.1.0 00:08:03
    Spatial VALID 10.2.0.1.0 00:05:37
    Oracle Ultra Search VALID 10.2.0.1.0 00:00:46
    Oracle Label Security VALID 10.2.0.1.0 00:00:14
    Oracle Expression Filter VALID 10.2.0.1.0 00:00:16
    Oracle Enterprise Manager VALID 10.2.0.1.0 00:00:58
    ========================
    Werner
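To spot a problem component quickly in that status output, you can filter it for any row not marked VALID. A minimal sketch — the spool file path and its sample contents below are fabricated for illustration, not from the original post:

```shell
# Illustrative spool of utlu102s.sql output (fabricated sample data)
cat > /tmp/components.lst <<'EOF'
Oracle Database Server VALID 10.2.0.1.0 00:11:37
Oracle Data Mining INVALID 10.2.0.1.0 00:15:56
OLAP Catalog VALID 10.2.0.1.0 00:00:59
EOF
# Print component rows whose Status column is anything other than VALID
awk '/10\.2\.0/ && $0 !~ / VALID /' /tmp/components.lst
```

Any line this prints names a component to reinstall or recompile before chasing the alert log errors further.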

  • RMAN success, but errors in alert.log file

    My RMAN backup script runs well, but generates errors in alert.log file.
    Here is the trace file contents:
    /usr/lib/oracle/xe/app/oracle/admin/XE/udump/xe_ora_3990.trc
    Oracle Database 10g Express Edition Release 10.2.0.1.0 - Production
    ORACLE_HOME = /usr/lib/oracle/xe/app/oracle/product/10.2.0/server
    System name: Linux
    Node name: plockton
    Release: 2.6.18-128.2.1.el5
    Version: #1 SMP Wed Jul 8 11:54:54 EDT 2009
    Machine: i686
    Instance name: XE
    Redo thread mounted by this instance: 1
    Oracle process number: 26
    Unix process pid: 3990, image: oracle@plockton (TNS V1-V3)
    *** 2009-07-23 23:05:01.835
    *** ACTION NAME:(0000025 STARTED111) 2009-07-23 23:05:01.823
    *** MODULE NAME:(backup full datafile) 2009-07-23 23:05:01.823
    *** SERVICE NAME:(SYS$USERS) 2009-07-23 23:05:01.823
    *** SESSION ID:(33.154) 2009-07-23 23:05:01.823
    *** 2009-07-23 23:05:18.689
    *** ACTION NAME:(0000045 STARTED111) 2009-07-23 23:05:18.689
    *** MODULE NAME:(backup archivelog) 2009-07-23 23:05:18.689
    Does anyone know why? Thanks.
    Richard

    I'm not sure if this will answer your question or not, but I believe these messages can likely be ignored.
    I'm currently running 10.2.0.1.0 Enterprise Edition in pre-production (yes, I know I should apply the latest patchset, and I plan to do so as soon as I get a development box allocated to me and can test its impact). I see the same types of messages that you've reported with each of my regularly scheduled backups:
    a) The alert_<$SID>.log reports that there are errors in trace files:
    Mon Aug 10 04:33:49 2009
    Starting control autobackup
    Mon Aug 10 04:33:50 2009
    Errors in file /opt/oracle/admin/blah/udump/blah_ora_32520.trc:
    Mon Aug 10 04:33:50 2009
    Errors in file /opt/oracle/admin/blah/udump/blah_ora_32520.trc:
    Mon Aug 10 04:33:50 2009
    Errors in file /opt/oracle/admin/blah/udump/blah_ora_32520.trc:
    Control autobackup written to DISK device
    handle '/backup/physical/BLAH/RMAN/cf_c-2740124895-20090810-00'
    b) The .trc files, when you look at them, contain no errors - only these "informational" messages:
    *** 2009-08-10 04:33:50.781
    *** ACTION NAME:(0000105 STARTED111) 2009-08-10 04:33:50.754
    *** MODULE NAME:(backup archivelog) 2009-08-10 04:33:50.754
    *** SERVICE NAME:(SYS$USERS) 2009-08-10 04:33:50.754
    *** SESSION ID:(126.28030) 2009-08-10 04:33:50.754
    c) I've verified that LOG_ARCHIVE_TRACE is set to 0:
    SQL*Plus> show parameter log_archive_trace
    NAME TYPE VALUE
    log_archive_trace integer 0
    As best I can discern from my own experience, these should just be ignored and I trust (read: "hope") they will simply go away once the latest patchset is applied. As for you running Oracle XE, a patchset is not an option, unfortunately.
    V/R
    -Eric
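A quick way to confirm that a trace file named in an "Errors in file" alert entry really is informational-only is to grep it for ORA- codes. A sketch — the trace path and its contents here are fabricated for illustration:

```shell
# Illustrative trace file containing only informational lines (fabricated)
cat > /tmp/blah_ora_32520.trc <<'EOF'
*** 2009-08-10 04:33:50.781
*** ACTION NAME:(0000105 STARTED111) 2009-08-10 04:33:50.754
*** MODULE NAME:(backup archivelog) 2009-08-10 04:33:50.754
EOF
# No ORA- codes means the alert entry can most likely be ignored
if grep -q 'ORA-' /tmp/blah_ora_32520.trc; then
  echo "real errors present"
else
  echo "informational only"
fi
```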

  • ORA 600 Errors - in Alert log

    Hi,
    The last 2-3 days I am seeing the following messages in my alert log :
    Sun Mar 29 19:28:09 2009
    Errors in file /db01/oracle/TAP/dump/udump/ora_13542.trc:
    ORA-00600: internal error code, arguments: [13013], [5001], [1], [134636630], [35], [134636630], [], []
    Dump file /db01/oracle/TAP/dump/udump/ora_13542.trc
    Oracle7 Server Release 7.3.2.1.0 - Production Release
    With the distributed and parallel query options
    PL/SQL Release 2.3.2.0.0 - Production
    ORACLE_HOME = /usr/oracle/server7.3
    System name:    AIX
    Node name: rs1
    Release: 2
    Version:        4
    Machine: 000001276600
    Instance name: TAP
    Redo thread mounted by this instance: 1
    Oracle process number: 47
    Unix process pid: 13542, image:
    Sun Mar 29 19:28:08 2009
    *** SESSION ID:(54.498) 2009.03.29.19.28.08.000
    updexe: Table 0 Code 1 Cannot get stable set - last failed row is: 08066456.23
    This happens at a time when I believe there is no user activity. There is a job which runs at midnight, but at the time the errors show up there isn't supposed to be any user activity.
    I am not sure if Oracle is going to support this version even if I did log an SR.
    In Metalink I came across this note: Doc ID 40673.1; could this be related?
    Otherwise, what options do I have for getting this resolved?
    Thanks.

    It is possible that you are also facing the issue described in ML note 28185.1.
    The current option you have is to apply the 7.3.4.x patch and check whether it resolves the problem; otherwise, consider upgrading to a supported version.
    Of course, if this error only occurs at off-peak times and does not affect any users, then you could probably ignore it, but it would be better to find a fix in the first place.
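When matching a Metalink note against an ORA-600, the first bracketed argument is the signature to search on. It can be pulled out of the alert log like this — the sample log content below is fabricated for illustration:

```shell
# Illustrative alert log excerpt (fabricated)
cat > /tmp/alert_TAP.log <<'EOF'
Errors in file /db01/oracle/TAP/dump/udump/ora_13542.trc:
ORA-00600: internal error code, arguments: [13013], [5001], [1], [134636630], [35], [134636630], [], []
EOF
# Extract the distinct first-argument signatures of each ORA-00600 occurrence
grep -o 'ORA-00600: internal error code, arguments: \[[0-9]*\]' /tmp/alert_TAP.log | sort -u
```

Here that yields the [13013] signature, which is what notes such as Doc ID 40673.1 are indexed under.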

  • Alert log monitor help

    Hello,
    I am very new to shell scripting. Our DB is 10g on AIX, and I wanted to set up something that will monitor my alert log and send me an e-mail. I found the script below online, but I have very little knowledge of cron jobs; I can set one up, but the script doesn't say what goes where. It does say to put check_alert.awk someplace, but is that where cron comes in? I mean, do I schedule check_alert.awk in my cron job? I just want to know which parts go where and how to set this up the right way so I get an e-mail alert for my alert log. A step-by-step process would be good. Thanks
    UNIX shell script to monitor and email errors found in the alert log. It is run as the oracle OS owner. Make sure you change the "emailaddresshere" entries to the email you want, and put check_alert.awk someplace. I have chosen $HOME for this example; in real life I put it on a mounted directory on the NAS.
    if test $# -lt 1
    then
    echo You must pass a SID
    exit
    fi
    # ensure environment variables set
    #set your environment here
    export ORACLE_SID=$1
    export ORACLE_HOME=/home/oracle/orahome
    export MACHINE=`hostname`
    export PATH=$ORACLE_HOME/bin:$PATH
    # check if the database is running, if not exit
    ckdb ${ORACLE_SID} -s
    if [ "$?" -ne 0 ]
    then
    echo " $ORACLE_SID is not running!!!"
    echo "${ORACLE_SID} is not running!" | mailx -m -s "Oracle sid ${ORACLE_SID} is not running!" "emailaddresshere"
    exit 1
    fi;
    #Search the alert log, and email all of the errors
    #move the alert_log to a backup copy
    #cat the existing alert_log onto the backup copy
    #oracle 8 or higher DB's only.
    sqlplus '/ as sysdba' << EOF > /tmp/${ORACLE_SID}_monitor_temp.txt
    column xxxx format a10
    column value format a80
    set lines 132
    SELECT 'xxxx', value FROM v\$parameter WHERE name = 'background_dump_dest';
    exit
    EOF
    cat /tmp/${ORACLE_SID}_monitor_temp.txt | awk '$1 ~ /xxxx/ {print $2}' > /tmp/${ORACLE_SID}_monitor_location.txt
    read ALERT_DIR < /tmp/${ORACLE_SID}_monitor_location.txt
    ORIG_ALERT_LOG=${ALERT_DIR}/alert_${ORACLE_SID}.log
    NEW_ALERT_LOG=${ORIG_ALERT_LOG}.monitored
    TEMP_ALERT_LOG=${ORIG_ALERT_LOG}.temp
    cat ${ORIG_ALERT_LOG} | awk -f $HOME/check_alert.awk > /tmp/${ORACLE_SID}_check_monitor_log.log
    rm /tmp/${ORACLE_SID}_monitor_temp.txt 2>/dev/null
    if [ -s /tmp/${ORACLE_SID}_check_monitor_log.log ]
    then
    echo "Found errors in sid ${ORACLE_SID}, mailed errors"
    echo "The following errors were found in the alert log for ${ORACLE_SID}" > /tmp/${ORACLE_SID}_check_monitor_log.mail
    echo "Alert log was copied into ${NEW_ALERT_LOG}" >> /tmp/${ORACLE_SID}_check_monitor_log.mail
    echo " " >> /tmp/${ORACLE_SID}_check_monitor_log.mail
    date >> /tmp/${ORACLE_SID}_check_monitor_log.mail
    echo "--------------------------------------------------------------">>/tmp/${ORACLE_SID}_check_monitor_log.mail
    echo " " >> /tmp/${ORACLE_SID}_check_monitor_log.mail
    echo " " >> /tmp/${ORACLE_SID}_check_monitor_log.mail
    echo " " >> /tmp/${ORACLE_SID}_check_monitor_log.mail
    cat /tmp/${ORACLE_SID}_check_monitor_log.log >> /tmp/${ORACLE_SID}_check_monitor_log.mail
    cat /tmp/${ORACLE_SID}_check_monitor_log.mail | mailx -m -s "on ${MACHINE}, MONITOR of Alert Log for ${ORACLE_SID} found errors" "emailaddresshere"
    mv ${ORIG_ALERT_LOG} ${TEMP_ALERT_LOG}
    cat ${TEMP_ALERT_LOG} >> ${NEW_ALERT_LOG}
    touch ${ORIG_ALERT_LOG}
    rm /tmp/${ORACLE_SID}_monitor_temp.txt 2> /dev/null
    rm /tmp/${ORACLE_SID}_check_monitor_log.log
    rm /tmp/${ORACLE_SID}_check_monitor_log.mail
    exit
    fi;
    rm /tmp/${ORACLE_SID}_check_monitor_log.log 2> /dev/null
    rm /tmp/${ORACLE_SID}_monitor_location.txt 2> /dev/null
    The referenced awk script (check_alert.awk). You can modify it as needed to add or remove things you wish to look for. The ERROR_AUDIT is a custom entry that a trigger on DB error writes in our environment.
    $0 ~ /Errors in file/ {print $0}
    $0 ~ /PMON: terminating instance due to error 600/ {print $0}
    $0 ~ /Started recovery/{print $0}
    $0 ~ /Archival required/{print $0}
    $0 ~ /Instance terminated/ {print $0}
    $0 ~ /Checkpoint not complete/ {print $0}
    $1 ~ /ORA-/ { print $0; flag=1 }
    $0 !~ /ORA-/ {if (flag==1){print $0; flag=0;print " "} }
    $0 ~ /ERROR_AUDIT/ {print $0}
    I simply put this script into cron to run every 5 minutes passing the SID of the DB I want to monitor.
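To make the cron part concrete (the script name and path below are assumptions, not from the original post): save the monitor script as, say, /home/oracle/alert_monitor.sh, make it executable with chmod +x, leave check_alert.awk in $HOME as the script expects, and add an entry like this via crontab -e as the oracle user:

```shell
# Run the alert log monitor every 5 minutes, passing the SID as the argument
*/5 * * * * /home/oracle/alert_monitor.sh PROD >> /tmp/alert_monitor_PROD.log 2>&1
```

Only the shell script goes into cron; the awk file is just read by it each run.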

    I have a PERL script that I wrote that does exactly what you want and I'll be glad to share that with you along with the CRON entries.
    The script opens the current alert_log and searches for key phrases, sending e-mail if it finds anything. It then sleeps for 60 seconds, wakes up, reads from where it left off to the bottom of the file, searches again, and sleeps again. The only downside is that it keeps a file handle open on the alert_log, so you have to kill the process if you want to rename or delete the alert_log.
    My email in my profile is not hidden.
    Tom
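Tom's read-from-where-you-left-off idea (his Perl isn't posted here) can also be sketched in shell with a byte-offset file, which avoids holding a file handle open between passes; all paths and the sample log line are illustrative:

```shell
ALERT=/tmp/alert_TEST.log
OFFSET=/tmp/alert_TEST.offset
rm -f "$ALERT" "$OFFSET"            # fresh start for this demonstration
echo "ORA-00600: internal error" >> "$ALERT"

# How far did the previous pass get? (0 if this is the first pass)
last=$( [ -f "$OFFSET" ] && cat "$OFFSET" || echo 0 )
size=$(( $(wc -c < "$ALERT") ))

# Scan only the bytes appended since the previous pass
if [ "$size" -gt "$last" ]; then
  tail -c +"$((last + 1))" "$ALERT" | grep 'ORA-' || true
fi

# Remember how far we read, for the next pass
echo "$size" > "$OFFSET"
```

Run from cron, each invocation only alerts on lines added since the last run, and the alert_log stays free to rotate between passes.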

  • Help in regarding alert log error...referring to standby

    Database: Single Instance, 10.2.0.3
    OS: Redhat Linux 5
    I checked on my standby whether logs are applied or not with the following command.
    SQL> SELECT SEQUENCE#,APPLIED FROM V$ARCHIVED_LOG
    ORDER BY SEQUENCE#;
    The result is "YES" for all corresponding logs.
    Below is my alert log error.
    LNS1 started with pid=46, OS id=27318
    Tue Oct 4 03:03:04 2011
    Thread 1 advanced to log sequence 32284
    Current log# 3 seq# 32284 mem# 0: /u01/oradata/PROD/redo_PROD_03a.log
    Current log# 3 seq# 32284 mem# 1: /u02/oradata/PROD/redo_PROD_03b.log
    Tue Oct 4 03:03:05 2011
    LNS: Standby redo logfile selected for thread 1 sequence 32284 for destination LOG_ARCHIVE_DEST_2
    Tue Oct 4 03:03:06 2011
    ARCt: Attempting destination LOG_ARCHIVE_DEST_2 network reconnect (3135)
    ARCt: Destination LOG_ARCHIVE_DEST_2 network reconnect abandoned
    Tue Oct 4 03:03:06 2011
    Errors in file /u01/app/oracle/admin/PROD/bdump/prod_arct_16306.trc:
    ORA-03135: connection lost contact
    FAL[server, ARCt]: Error 3135 creating remote archivelog file 'STNBY'
    FAL[server, ARCt]: FAL archive failed, see trace file.
    Tue Oct 4 03:03:06 2011
    Errors in file /u01/app/oracle/admin/PROD/bdump/prod_arct_16306.trc:
    ORA-16055: FAL request rejected
    ARCH: FAL archive failed. Archiver continuing
    Tue Oct 4 03:03:06 2011
    ORACLE Instance PROD - Archival Error. Archiver continuing.
    The corresponding trace file is below:
    Dump file /u01/app/oracle/admin/PROD/bdump/prod_arct_16306.trc
    Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Production
    With the Partitioning, OLAP and Data Mining options
    ORACLE_HOME = /u01/app/oracle/product/10.2.0
    System name: Linux
    Node name: prod-db.aamu.edu
    Release: 2.6.9-78.0.17.ELsmp
    Version: #1 SMP Thu Mar 5 04:52:17 EST 2009
    Machine: i686
    Instance name: PROD
    Redo thread mounted by this instance: 1
    Oracle process number: 45
    Unix process pid: 16306, image: [email protected] (ARCt)
    *** SERVICE NAME:(SYS$BACKGROUND) 2011-10-02 10:28:10.027
    *** SESSION ID:(2005.1) 2011-10-02 10:28:10.027
    *** 2011-10-02 10:28:10.027 2549 kcrf.c
    tkcrf_clear_srl: Started clearing Standby Redo Logs
    *** 2011-10-02 10:28:10.481 2855 kcrf.c
    tkcrf_clear_srl: Completed clearing Standby Redo Logs
    *** 2011-10-03 03:02:54.982
    Redo shipping client performing standby login
    *** 2011-10-03 03:02:55.218 65190 kcrr.c
    Logged on to standby successfully
    Client logon and security negotiation successful!
    *** 2011-10-04 03:03:06.584
    Error 3135 creating standby archive log file at host 'STNBY'
    *** 2011-10-04 03:03:06.584 61283 kcrr.c
    ARCt: Attempting destination LOG_ARCHIVE_DEST_2 network reconnect (3135)
    *** 2011-10-04 03:03:06.584 61283 kcrr.c
    ARCt: Destination LOG_ARCHIVE_DEST_2 network reconnect abandoned
    ORA-03135: connection lost contact
    *** 2011-10-04 03:03:06.591 59526 kcrr.c
    kcrrfail: dest:2 err:3135 force:0 blast:1
    Error 1041 detaching RFS from standby instance at host 'STNBY'
    kcrrwkx: unknown error:3135
    ORA-16055: FAL request rejected
    ARCH: Connecting to console port...
    ARCH: Connecting to console port...
    Can someone explain what these mean?
    Thank you so much.

    Hello;
    I might look at the SQLNET.ORA file on both servers.
    Try setting these :
    SQLNET.INBOUND_CONNECT_TIMEOUT=120
    SQLNET.SEND_TIMEOUT = 300
    SQLNET.RECV_TIMEOUT = 300
    Make sure you restart the LISTENERS.
    I'd look at this first :
    ORA - 03135 : connection lost contact while shipping from Primary Server to Standby server [ID 739522.1]
    I once worked with an SE whom we called "the human firewall". It was always the firewall until proven otherwise.
    And then this :
    Troubleshooting ORA-3135 Connection Lost Contact [ID 787354.1]
    Have you looked at this ?
    Database Hanging during Archival [ID 1142856.1]
    Best Regards
    mseberg
