ADRCI - purging alert logs

Been playing around with 11g, mainly with ADRCI (trying to identify what it does, etc.), and from what I can see in the documentation, within ADRCI you can purge incidents, problems, reports, and so on.
So I set my SHORTP_POLICY to 168 (7 days), and within ADRCI, if I just type "purge" (or purge -age 60 -type alert), shouldn't it purge the alert logs? (I assume it would handle both the log.xml and the alert_<SID>.log files?)
This doesn't seem to work. Has anyone tried (or worked with) this?
I also assume that, because there are policies (SHORTP_POLICY & LONGP_POLICY), there is some kind of automated purging process? Is this correct?
Thanks
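For reference, the policies and a manual purge can be exercised from adrci like this. This is only a sketch; the diag home path is an example, and note that the policies are set in hours while purge -age is given in minutes:

```shell
# Inspect the current purge policies for one ADR home (path is an example).
adrci exec="set home diag/rdbms/orcl/orcl; show control"

# Set the short retention policy to 168 hours (7 days).
adrci exec="set home diag/rdbms/orcl/orcl; set control (SHORTP_POLICY = 168)"

# Manually purge ALERT data older than 7 days (purge -age is in minutes).
adrci exec="set home diag/rdbms/orcl/orcl; purge -age 10080 -type ALERT"
```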

I've read that and a few other related articles. Those are for the database ADR homes, which I'm relatively comfortable with:
You go to ORACLE_BASE for the database oracle user (i.e. /u01/app/oracle), cycle through all the homes returned by adrci exec="show homes", and do a purge -age XXXXX. Either that, or wait out the 7-day rolling purge cycle in which ADR automatically purges according to the SHORTP policy (30 days) or the LONGP policy (365 days).
I'm talking about the grid user. That means the +ASM instance's logs, CRS data/logs, the listener, and the SCAN listener entries.
My general understanding is that for the grid user, ADR homes typically reside under:
/u01/app/grid (which is the typical ORACLE_BASE)
/u01/app/11.2.0/grid/log (which is the typical ORACLE_HOME/log)
... but there are a ton of other things.
MOS note 1368695.1 spells out the clusterware log locations and some of their archival policies:
<GRID_HOME>/log/$HOST/alert$HOST.log
<GRID_HOME>/log/$HOST/client
<GRID_HOME>/log/$HOST/racg*
<GRID_HOME>/log/$HOST/srvm
<GRID_HOME>/rdbms/audit
<GRID_HOME>/log/diag/*
I've been told by Oracle Support that these directories are not auto-rotated; they have to be handled manually by the DBA. Some of these directories live under $GRID_HOME/log/<server_name>/..., and they are owned by both the root and grid users. So I was wondering how the rest of the community is dealing with this. How large have people typically made their /u01/app filesystems? Have they done purging via a cron script under the grid user or the root user?
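The per-home purge loop described above can be sketched as a small cron script, assuming adrci is on the grid user's PATH and that 7 days of retention (10080 minutes) is what you want:

```shell
#!/bin/sh
# Sketch: purge every ADR home visible to this user (run from cron as grid).
# "show homes" prints a header line first, so skip it.
adrci exec="show homes" | grep -v "ADR Homes" | while read -r home; do
    echo "Purging ADR home: $home"
    adrci exec="set home $home; purge -age 10080"
done
```

Note this only covers ADR homes; the non-ADR directories listed above (client, racg, srvm, rdbms/audit) are outside ADR and would still need a separate cleanup of their own.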

Similar Messages

  • Alert log in 11g

    Hi,
    I have a database that was started on 20th Jun, but my alert_sid.log contains entries starting only from 27th Jun. The database version is 11.2.0.2.
    I know ADRCI purges only log.xml, but I'm wondering why I'm unable to see the old entries in alert_sid.log. Can someone please let me know why this is?
    Thanks,
    praveen
    Edited by: 943486 on Jun 28, 2012 11:48 AM

    What OS name & version?

  • Most Recent Alert Log Entries - Error processing Alert Log

    Hi..
    Can someone help me?
    When I try to check the alert log, the following message appears:
    "This shows the last 100,000 bytes of the alert log. The log is constantly growing, so select the browser's Refresh button to see the most recent log entries.
    Number of Lines Displayed          Error processing Alert Log"
    How can I solve this?

    Number of Lines Displayed Error processing Alert Log"
    Check whether your alert.log file contains any data, because if the alert.log file is empty then the above error is shown. (This happened to me when I purged the alert.log file and then tried to view its contents; the problem was the empty alert.log file.)
    HTH

  • 12c Cloud Control Alert Log Purge Error

    I'm trying to purge old alert log events in EM 12c, but I'm getting an error: "The alert(s) could not be purged. Please ensure you have Edit privileges on this target while purging."
    What am I doing wrong? I can't find anything close to this in the EM documentation.
    Thank you.

    Hi,
    What privileges do you have on the target? You'll need at least Manage Target Events (a subprivilege of Operator) in order to clear events on the target.
    regards,
    Ana

  • Purging the ALERT LOG

    Hi,
    I am using RDBMS : 9.2.0.6.0 on AIX 5.3 platform.
    I want to purge my alert log file, as it has never been purged and contains entries from almost the past year.
    Can you please tell me how to do it?
    Thanks
    Shivank

    If there is no alert log, the database will create a new one, so you can remove/delete the alert log file.
    If you don't want to delete the alert log file, you can rename it while the database is down; a new alert log file will then be created and used.
    You can also empty the file as follows:
    cp -p alert.log alert.log.date (back up the alert log file)
    echo "" > alert.log (empty the alert log file)
    You can also do the following:
    tail -3000 alert_test01.log > alert_test01.log_new (this keeps the last 3000 lines)
    Additionally, specifying your O/S would be helpful.
    Adith
    Thanks
    Message was edited by:
    Adith
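    The copy-then-empty approach above can be rolled into one short sequence; the filename and date suffix are just examples:

    ```shell
    # Back up the current alert log with a dated suffix, then empty it
    # in place. The guard simply skips the copy if the file is absent.
    ALERT=alert_test01.log
    [ -f "$ALERT" ] && cp -p "$ALERT" "$ALERT.$(date +%Y%m%d)"
    : > "$ALERT"
    ```

    Truncating with `: >` (similar to the echo approach above, but writing nothing at all) empties the file without deleting or renaming it, so there is no question about when the database would start a new file.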

  • Trim the alert log?

    I have an alert log in <grid home>/log/<node>/alert<node>.log that is growing in size and is owned by root. Is this just like a DB alert log, i.e. if I rename it, will a new one be created? I'm not sure how to maintain this log file and I don't want to guess; any suggestions or documentation are appreciated.
    thanks.

    You can use ADRCI to manage (purge, etc.) the Oracle logs.
    See these links:
    http://www.dbasupport.com/oracle/ora11g/ADRCI-Extended-Commands.shtml
    http://www.databasejournal.com/features/oracle/article.php/3875896/Purging-Oracle-Databases-Alert-Log-with-ADRCI---Usage-and-Warning.html
    Regards,
    Levi Pereira

  • DG Observer triggering SIGSEGV Address not mapped to object errors in alert log

    Hi,
    I've got a Data Guard configuration using two 11.2.0.3 single instance databases.  The configuration has been configured for automatic failover and I have an observer running on a separate box.
    This fast-start failover configuration has been in place for about a month, and in the last week numerous SIGSEGV (address not mapped to object) errors have been reported in the alert log. This is happening quite frequently (every 4-5 minutes or so).
    The corresponding trace files show the process triggering the error coming from the observer.
    Has anyone experienced this problem? I'm at my wits' end trying to figure out how to fix the configuration to eliminate this error.
    I must also note that even though this error occurs a lot, it doesn't seem to be affecting any database functionality.
    Help?
    Thanks in advance.
    Beth

    Hi. The following is the alert log message, the trace file generated, and the current values of the Data Guard configuration. In addition, as part of my research I attempted to apply patch 12615660, which did not take care of the issue. I also set the inbound_connection_timeout parameter to 0, and that didn't help either. I'm still researching, but any pointer in the right direction is very much appreciated.
    Error in Alert Log
    Thu Apr 09 10:28:59 2015
    Exception [type: SIGSEGV, Address not mapped to object] [ADDR:0x9] [PC:0x85CE503, nstimexp()+71] [flags: 0x0, count: 1]
    Errors in file /u01/app/oracle/diag/rdbms/<db_unq_name>/<SID>/trace/<SID>_ora_29902.trc  (incident=69298):
    ORA-07445: exception encountered: core dump [nstimexp()+71] [SIGSEGV] [ADDR:0x9] [PC:0x85CE503] [Address not mapped to object] []
    Use ADRCI or Support Workbench to package the incident.
    See Note 411.1 at My Oracle Support for error and packaging details.
    Thu Apr 09 10:29:02 2015
    Sweep [inc][69298]: completed
    Trace file:
    Trace file /u01/app/oracle/diag/rdbms/<db_unq_name>/<SID>/trace/<SID>_ora_29902.trc
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    With the Partitioning and Oracle Label Security options
    ORACLE_HOME = /u01/app/oracle/product/11.2.0.3/dbhome_1
    System name:    Linux
    Node name:      <host name>
    Release:        2.6.32-431.17.1.el6.x86_64
    Version:        #1 SMP Wed May 7 14:14:17 CDT 2014
    Machine:        x86_64
    Instance name: <SID>
    Redo thread mounted by this instance: 1
    Oracle process number: 19
    Unix process pid: 29902, image: oracle@<host name>
    *** 2015-04-09 10:28:59.966
    *** SESSION ID:(416.127) 2015-04-09 10:28:59.966
    *** CLIENT ID:() 2015-04-09 10:28:59.966
    *** SERVICE NAME:(<db_unq_name>) 2015-04-09 10:28:59.966
    *** MODULE NAME:(dgmgrl@<observer host> (TNS V1-V3)) 2015-04-09 10:28:59.966
    *** ACTION NAME:() 2015-04-09 10:28:59.966
    Exception [type: SIGSEGV, Address not mapped to object] [ADDR:0x9] [PC:0x85CE503, nstimexp()+71] [flags: 0x0, count: 1]
    DDE: Problem Key 'ORA 7445 [nstimexp()+71]' was flood controlled (0x6) (incident: 69298)
    ORA-07445: exception encountered: core dump [nstimexp()+71] [SIGSEGV] [ADDR:0x9] [PC:0x85CE503] [Address not mapped to object] []
    ssexhd: crashing the process...
    Shadow_Core_Dump = PARTIAL
    ksdbgcra: writing core file to directory '/u01/app/oracle/diag/rdbms/<db_unq_name>/<SID>/cdump'
    Data Guard Configuration
    DGMGRL> show configuration verbose;
    Configuration - dg_config
      Protection Mode: MaxPerformance
      Databases:
        dbprim - Primary database
        dbstby - (*) Physical standby database
      (*) Fast-Start Failover target
      Properties:
        FastStartFailoverThreshold      = '30'
        OperationTimeout                = '30'
        FastStartFailoverLagLimit       = '180'
        CommunicationTimeout            = '180'
        FastStartFailoverAutoReinstate  = 'TRUE'
        FastStartFailoverPmyShutdown    = 'TRUE'
        BystandersFollowRoleChange      = 'ALL'
    Fast-Start Failover: ENABLED
      Threshold:        30 seconds
      Target:           dbstby
      Observer:         observer_host
      Lag Limit:        180 seconds
      Shutdown Primary: TRUE
      Auto-reinstate:   TRUE
    Configuration Status:
    SUCCESS
    DGMGRL> show database verbose dbprim
    Database - dbprim
      Role:            PRIMARY
      Intended State:  TRANSPORT-ON
      Instance(s):
        DG_CONFIG
      Properties:
        DGConnectIdentifier             = 'dbprim'
        ObserverConnectIdentifier       = ''
        LogXptMode                      = 'ASYNC'
        DelayMins                       = '0'
        Binding                         = 'optional'
        MaxFailure                      = '0'
        MaxConnections                  = '1'
        ReopenSecs                      = '300'
        NetTimeout                      = '30'
        RedoCompression                 = 'DISABLE'
        LogShipping                     = 'ON'
        PreferredApplyInstance          = ''
        ApplyInstanceTimeout            = '0'
        ApplyParallel                   = 'AUTO'
        StandbyFileManagement           = 'MANUAL'
        ArchiveLagTarget                = '0'
        LogArchiveMaxProcesses          = '4'
        LogArchiveMinSucceedDest        = '1'
        DbFileNameConvert               = ''
        LogFileNameConvert              = ''
        FastStartFailoverTarget         = 'dbstby'
        InconsistentProperties          = '(monitor)'
        InconsistentLogXptProps         = '(monitor)'
        SendQEntries                    = '(monitor)'
        LogXptStatus                    = '(monitor)'
        RecvQEntries                    = '(monitor)'
        SidName                         = '<sid>'
        StaticConnectIdentifier         = '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=<db host name>)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=<service_name>)(INSTANCE_NAME=<sid>)(SERVER=DEDICATED)))'
        StandbyArchiveLocation          = 'USE_DB_RECOVERY_FILE_DEST'
        AlternateLocation               = ''
        LogArchiveTrace                 = '0'
        LogArchiveFormat                = '%t_%s_%r.dbf'
        TopWaitEvents                   = '(monitor)'
    Database Status:
    SUCCESS
    DGMGRL> show database verbose dbstby
    Database - dbstby
      Role:            PHYSICAL STANDBY
      Intended State:  APPLY-ON
      Transport Lag:   0 seconds
      Apply Lag:       0 seconds
      Real Time Query: ON
      Instance(s):
        DG_CONFIG
      Properties:
        DGConnectIdentifier             = 'dbstby'
        ObserverConnectIdentifier       = ''
        LogXptMode                      = 'ASYNC'
        DelayMins                       = '0'
        Binding                         = 'optional'
        MaxFailure                      = '0'
        MaxConnections                  = '1'
        ReopenSecs                      = '300'
        NetTimeout                      = '30'
        RedoCompression                 = 'DISABLE'
        LogShipping                     = 'ON'
        PreferredApplyInstance          = ''
        ApplyInstanceTimeout            = '0'
        ApplyParallel                   = 'AUTO'
        StandbyFileManagement           = 'AUTO'
        ArchiveLagTarget                = '0'
        LogArchiveMaxProcesses          = '4'
        LogArchiveMinSucceedDest        = '1'
        DbFileNameConvert               = ''
        LogFileNameConvert              = ''
        FastStartFailoverTarget         = 'dbprim'
        InconsistentProperties          = '(monitor)'
        InconsistentLogXptProps         = '(monitor)'
        SendQEntries                    = '(monitor)'
        LogXptStatus                    = '(monitor)'
        RecvQEntries                    = '(monitor)'
        SidName                         = '<sid>'
        StaticConnectIdentifier         = '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=<db host name>)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=<service_name>)(INSTANCE_NAME=<sid>)(SERVER=DEDICATED)))'
        StandbyArchiveLocation          = 'USE_DB_RECOVERY_FILE_DEST'
        AlternateLocation               = ''
        LogArchiveTrace                 = '0'
        LogArchiveFormat                = '%t_%s_%r.dbf'
        TopWaitEvents                   = '(monitor)'
    Database Status:
    SUCCESS

  • Alert Log file

    Hi,
    I'm new at administrating a database (11.2.0.2 on OEL).
    1) How often one need to see the alert log file?
    2) Why are some Oracle errors not logged in the Alert Log?

    Hello,
    I wouldn't look in the alert log for performance problems either. At least not immediately, but I would take a quick look in there to see if there were any major problems. The alert log is a good place to look first to check the overall health of the DB because any major errors will be written there.
    You can use the Enterprise Manager to automatically email you if certain errors are raised in the alert log.
    Also, you can search through the alert log using the ADRCI utility (http://www.ora00600.com/wordpress/scripts/databaseconfig/adrci/), which can be pretty useful.
    Rob

  • 10.2.0.4 Streams Alert log message

    Good afternoon everyone,
    Wanted to get some input on a message that occasionally is logged to the alert log of the owning QT instance in our RAC environment. In summary, we have two RAC environments and perform bi-directional replication between them. The message in question is usually logged in the busier environment.
    krvxenq: Failed to acquire logminer dictionary lock
    During that time it seems that the LogMiner process is not mining for changes, which could lead to latency between the two environments.
    I have looked at AWR reports for times during the occurrence of these errors, and it is common to see the following procedure running.
    BEGIN dbms_capture_adm_internal.enforce_checkpoint_retention(:1, :2, :3); END;
    This procedure, which I assume purges Capture checkpoints that exceed the retention duration, takes between 10 and 20 minutes to run. The table which stores the checkpoints is logmnr_restart_ckpt$. I suspect that the issue could be caused by the size of logmnr_restart_ckpt$, which is 12GB in our environment. A purge job needs to be scheduled to shrink the table.
    If anyone has seen anything similar in her or his environment please offer any additional knowledge you may have about the topic.
    Thank you,

    There are 2 possibilities: either you have too much load on the table LOGMNR_RESTART_CKPT$ due to the heavy system load, or there is a bad Streams query.
    At this stage I would not like to concede 'we-can't-cope-with-the-load' without having investigated more common issues.
    Let's assume optimistically that one or more Streams internal SQL statements do not run as intended. This is our best bet.
    The fact that the table LOGMNR_RESTART_CKPT$ is big makes it a perfect suspect.
    If there is a problem with one of the queries involving this table, we can find it using sys.col_usage$,
    identify which columns are used, and from there jump to the SQL_IDs, checking plans:
    set linesize 132 head on pagesize 33
    col obj format a35 head "Table name"
    col col1 format a26 head "Column"
    col equijoin_preds format 9999999 head "equijoin|Preds" justify c
    col nonequijoin_preds format 9999999 head "non|equijoin|Preds" justify c
    col range_preds format 999999 head "Range|Pred" justify c
    col equality_preds format 9999999 head "Equality|Preds" justify c
    col like_preds format 999999 head "Like|Preds" justify c
    col null_preds format 999999 head "Null|Preds" justify c
    select r.name ||'.'|| o.name "obj" , c.name "col1",
          equality_preds, equijoin_preds, nonequijoin_preds, range_preds,
          like_preds, null_preds, to_char(timestamp,'DD-MM-YY HH24:MI:SS') "Date"
    from sys.col_usage$ u, sys.obj$ o, sys.col$ c, sys.user$ r
      where o.obj# = u.obj#    and
            o.name = 'LOGMNR_RESTART_CKPT$' and    -- and $AND_OWNER
            c.obj# = u.obj#    and
            c.col# = u.intcol# and
            o.owner# = r.user# and
           (u.equijoin_preds > 0 or u.nonequijoin_preds > 0)
       order by 4 desc;
    For each column predicate checks for full table scan :
    define COL_NAME='col to check'
    col PLAN_HASH_VALUE for 999999999999 head 'Plan hash |value' justify c
         col id for 999 head 'Id'
         col child for 99 head 'Ch|ld'
         col cost for 999999 head 'Oper|Cost'
         col tot_cost for 999999 head 'Plan|cost' justify c
         col est_car for 999999999 head 'Estimed| card' justify c
         col cur_car for 999999999 head 'Avg seen| card' justify c
         col ACC for A3 head 'Acc|ess'
         col FIL for A3 head 'Fil|ter'
         col OTHER for A3 head 'Oth|er'
         col ope for a30 head 'Operation'
         col exec for 999999 head 'Execution'
         break on PLAN_HASH_VALUE on sql_id on child
         select distinct
           a.PLAN_HASH_VALUE, a.id , a.sql_id, a.CHILD_NUMBER child , a.cost, c.cost tot_cost,
           a.cardinality est_car,  b.output_rows/decode(b.EXECUTIONS,0,1,b.EXECUTIONS) cur_car,
           b.EXECUTIONS exec,
           case when length(a.ACCESS_PREDICATES) > 0 then ' Y' else ' N' end ACC,
           case when length(a.FILTER_PREDICATES) > 0 then ' Y' else ' N' end FIL,
           case when length(a.projection) > 0 then ' Y' else ' N' end OTHER,
            a.operation||' '|| a.options ope
    from
        v$sql_plan  a,
        v$sql_plan_statistics_all b ,
        v$sql_plan_statistics_all c
    where
            a.PLAN_HASH_VALUE =  b.PLAN_HASH_VALUE
        and a.sql_id = b.sql_id
        and a.child_number = b.child_number
        and a.id = b.id
        and a.PLAN_HASH_VALUE=  c.PLAN_HASH_VALUE (+)
         and a.sql_id = c.sql_id
         and a.child_number = c.child_number and c.id=0
        and  a.OBJECT_NAME = 'LOGMNR_RESTART_CKPT$'       -- $AND_A_OWNER
        and   (instr(a.FILTER_PREDICATES,'&&COL_NAME') > 0
            or instr(a.ACCESS_PREDICATES,'&&COL_NAME') > 0
            or instr(a.PROJECTION, '&&COL_NAME') > 0)
    order by sql_id, PLAN_HASH_VALUE, id;
    Now, for each query with a FULL table scan, check the predicate and see if adding an index would improve things.
    Another possibility:
    One of the structures associated with Streams, which you don't necessarily see, may have remained too big.
    Many of these structures are only accessed by Streams maintenance using full table scans.
    They are supposed to remain small tables, but after a big Streams crash they inflate, and once
    the problem is solved these supposed-to-be-small tables remain as the crash left them: too big.
    In consequence, the full table scans intended to run on small tables suddenly loop over stretches of empty blocks.
    This is very frequent in big Streams environments. You can find these structures, if any exist, using the query below.
    Note that a long run time for this query implies big structures. You will have to assess whether the number of rows is realistic relative to the number of blocks.
    break on parent_table
    col type format a30
    col owner format a16
    col index_name head 'Related object'
    col parent_table format a30
    prompt
    prompt To shrink a lob associated to a queue type : alter table AQ$_<queue_table>_P modify lob(USER_DATA) ( shrink space  ) cascade ;
    prompt
    select a.owner,a.table_name parent_table,index_name ,
           decode(index_type,'LOB','LOB INDEX',index_type) type,
          (select blocks from dba_segments where segment_name=index_name and owner=b.owner) blocks
       from
          dba_indexes  a,
          ( select owner, queue_table table_name from dba_queue_tables
               where recipients='SINGLE' and owner NOT IN ('SYSTEM') and (compatible LIKE '8.%' or compatible LIKE '10.%')
             union
             select owner, queue_table table_name from dba_queue_tables
                    where recipients='MULTIPLE' and (compatible LIKE '8.1%' or compatible LIKE '10.%')
         ) b
       where   a.owner=b.owner
           and a.table_name = b.table_name
           and a.owner not like 'SYS%' and a.owner not like 'WMSYS%'
    union
    -- LOB Segment  for QT
    select a.owner,a.segment_name parent_table,l.segment_name index_name, 'LOB SEG('||l.column_name||')' type,
                      (select sum(blocks) from dba_segments where segment_name = l.segment_name ) blob_blocks
                 from dba_segments  a,
                      dba_lobs l,
                      ( select owner, queue_table table_name from dba_queue_tables
                               where recipients='SINGLE' and owner NOT IN ('SYSTEM') and (compatible LIKE '8.%' or compatible LIKE '10.%')
                        union
                        select owner, queue_table table_name from dba_queue_tables
                                where recipients='MULTIPLE' and (compatible LIKE '8.1%' or compatible LIKE '10.%')
                      ) b
                 where a.owner=b.owner and
                       a.SEGMENT_name = b.table_name  and
                       l.table_name = a.segment_name and
                       a.owner not like 'SYS%' and a.owner not like 'WMSYS%'
    union
    -- LOB Segment of QT.._P
    select a.owner,a.segment_name parent_table,l.segment_name index_name, 'LOB SEG('||l.column_name||')',
           (select sum(blocks) from dba_segments where segment_name = l.segment_name ) blob_blocks
       from dba_segments  a,
              dba_lobs l,
              ( select owner, queue_table table_name from dba_queue_tables
                       where recipients='SINGLE' and owner NOT IN ('SYSTEM') and (compatible LIKE '8.%' or compatible LIKE '10.%')
                union
                select owner, queue_table table_name from dba_queue_tables
                        where recipients='MULTIPLE' and (compatible LIKE '8.1%' or compatible LIKE '10.%')
              ) b
       where a.owner=b.owner and
               a.SEGMENT_name = 'AQ$_'||b.table_name||'_P'  and
               l.table_name = a.segment_name and
               a.owner not like 'SYS%' and a.owner not like 'WMSYS%'
    union
    -- Related QT
    select a2.owner, a2.table_name parent_table,  '-' index_name , decode(nvl(a2.initial_extent,-1), -1, 'IOT TABLE','NORMAL') type,
              case
                   when decode(nvl(a2.initial_extent,-1), -1, 'IOT TABLE','NORMAL') = 'IOT TABLE'
                        then ( select sum(leaf_blocks) from dba_indexes where table_name=a2.table_name and owner=a2.owner)
                   when decode(nvl(a2.initial_extent,-1), -1, 'IOT TABLE','NORMAL') = 'NORMAL'
                        then (select blocks from dba_segments where segment_name=a2.table_name and owner=a2.owner)
               end blocks
       from dba_tables a2,
           ( select owner, queue_table table_name from dba_queue_tables
                     where recipients='SINGLE' and owner NOT IN ('SYSTEM') and (compatible LIKE '8.%' or compatible LIKE '10.%')
              union all
              select owner, queue_table table_name from dba_queue_tables
                     where recipients='MULTIPLE' and (compatible LIKE '8.1%' or compatible LIKE '10.%' )
           ) b2
       where
             a2.table_name in ( 'AQ$_'||b2.table_name ||'_T' , 'AQ$_'||b2.table_name ||'_S', 'AQ$_'||b2.table_name ||'_H' , 'AQ$_'||b2.table_name ||'_G' ,
                                'AQ$_'|| b2.table_name ||'_I'  , 'AQ$_'||b2.table_name ||'_C', 'AQ$_'||b2.table_name ||'_D', 'AQ$_'||b2.table_name ||'_P')
             and a2.owner not like 'SYS%' and a2.owner not like 'WMSYS%'
    union
    -- IOT Table normal
    select
             u.name owner , o.name parent_table, c.table_name index_name, 'RELATED IOT' type,
             (select blocks from dba_segments where segment_name=c.table_name and owner=c.owner) blocks
       from sys.obj$ o,
            user$ u,
            (select table_name, to_number(substr(table_name,14)) as object_id  , owner
                    from dba_tables where table_name like 'SYS_IOT_OVER_%'  and owner not like '%SYS') c
      where
              o.obj#=c.object_id
          and o.owner#=u.user#
          and obj# in (
               select to_number(substr(table_name,14)) as object_id from dba_tables where table_name like 'SYS_IOT_OVER_%'  and owner not like '%SYS')
    order by parent_table , index_name desc;
    I hope it is one of the above cases; otherwise things may become more complicated.

  • How to check particular error in alert log file

    Hi all,
    How can I check for a particular error in the alert log file? For example, suppose I got an error in the database yesterday at 4 pm and today I want to check the alert log file to get a basic idea; it might be a big file, so how do I find that particular error?
    Thanks & regards,
    Eswar..

    What's your Oracle version?
    If you are on 11g, you can use the adrci tool:
    1- set homes diag/rdbms/orawiss/ORAWISS/ : the rdbms home
    2- show alert -P "MESSAGE_TEXT LIKE '%ORA-%'" -term : to find all the ORA-% errors in the alert file
    3- show alert -P "MESSAGE_TEXT LIKE '%ORA-%' and originating_timestamp > systimestamp-51 " -term : to find all the ORA-% errors in the alert file during the last 51 days,
    4- show alert -P "MESSAGE_TEXT LIKE '%ORA-%' and originating_timestamp > systimestamp-1/24 " -term : to find all the ORA-% errors in the alert file during the last hour,
    5- show alert -P "MESSAGE_TEXT LIKE '%ORA-12012%' and originating_timestamp > systimestamp-1/24 " -term : to find the particular ORA-12012 error in the alert file during the last hour,
    Example:
    [oracle@wissem wissem]$ adrci
    ADRCI: Release 11.2.0.1.0 - Production on Wed May 4 10:24:54 2011
    Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
    ADR base = "/home/oracle/app/oracle"
    adrci> set homes diag/rdbms/orawiss/ORAWISS/
    adrci> show alert -P "MESSAGE_TEXT LIKE '%ORA-'" -term
    ADR Home = /home/oracle/app/oracle/diag/rdbms/orawiss/ORAWISS:
    adrci> show alert -P "MESSAGE_TEXT LIKE '%ORA-%'" -term
    ADR Home = /home/oracle/app/oracle/diag/rdbms/orawiss/ORAWISS:
    2010-12-11 19:45:41.289000 +01:00
    ORA-1109 signalled during: ALTER DATABASE CLOSE NORMAL...
    ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
    ORA-01152: file 1 was not restored from a sufficiently old backup
    ORA-01110: data file 1: '/home/oracle/app/oracle/oradata/ORAWISS/system01.dbf'
    ORA-1547 signalled during: ALTER DATABASE RECOVER  database until time '2011-01-21:10:48:00'  ...
    Errors in file /home/oracle/app/oracle/diag/rdbms/orawiss/ORAWISS/trace/ORAWISS_j000_5692.trc:
    ORA-12012: error on auto execute of job 29
    ORA-01435: user does not exist
    2011-03-15 11:39:37.571000 +01:00
    opiodr aborting process unknown ospid (31042) as a result of ORA-609
    2011-03-15 12:04:15.111000 +01:00
    opiodr aborting process unknown ospid (3509) as a result of ORA-609
    adrci>
    adrci> show alert -P "MESSAGE_TEXT LIKE '%ORA-%' and originating_timestamp > systimestamp-51 " -term
    ADR Home = /home/oracle/app/oracle/diag/rdbms/orawiss/ORAWISS:
    2011-03-15 10:19:45.316000 +01:00
    Errors in file /home/oracle/app/oracle/diag/rdbms/orawiss/ORAWISS/trace/ORAWISS_j006_5536.trc:
    ORA-12012: error on auto execute of job 26
    ORA-01435: user does not exist
    Errors in file /home/oracle/app/oracle/diag/rdbms/orawiss/ORAWISS/trace/ORAWISS_j000_5692.trc:
    ORA-12012: error on auto execute of job 29
    ORA-01435: user does not exist
    2011-03-15 11:39:37.571000 +01:00
    opiodr aborting process unknown ospid (31042) as a result of ORA-609
    2011-03-15 12:04:15.111000 +01:00
    opiodr aborting process unknown ospid (3509) as a result of ORA-609
    adrci>

  • If alert log is deleted, how to restore it?

    Hi,
    I want to know: if the alert log is deleted, how can I restore it?

    RMAN does not back up the alert log; only an OS filesystem backup does that. In 11g and higher there are 2 versions of the alert log: the well-known text file and, additionally, a file in XML format. If only the text version is lost, the command-line utility adrci can still access the XML file:
    http://docs.oracle.com/cd/E11882_01/server.112/e22490/adrci.htm#BGBBBBEA
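    As a sketch of that, the surviving XML log can be dumped back out as plain text with adrci (the diag home below is an example; substitute your own):

    ```shell
    # Recreate a readable alert log from log.xml via adrci.
    adrci exec="set home diag/rdbms/orcl/orcl; show alert -term" > alert_recovered.txt
    ```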

  • Alert log file of the database is too big

    Dear Experts,
    Let me update you that we are in the process of doing an R12.1 upgrade. Our current instance is running on 11.5.10 with 9.2.0.6.
    We have the below challenge before going for the database upgrade:
    We have observed that the customer database alert_SID.log (9.2.0.6) size is 2.5GB. How do we purge this file? Please advise.
    Please also note that our instance is running on Oracle Enterprise Linux 4 update 8.
    Regards
    Mohammed.

    Rename the alert log file. Once you rename it, another alert log file will be created and populated over time, and later you can delete the old alert log file. It does not harm your database.
    --neeraj

  • Alert log

    Hi,
    How do we manage the alert log in a production environment? Ours has grown to 1.6 GB, which is difficult to handle.
    Is there any tool?
    Can I get the last one day of info from the alert log file?
    Edited by: user3266490 on Nov 16, 2009 11:55 AM

    You can purge the alert log file according to your need; you can even remove it entirely. This won't hamper your running database.
    $ tail -f alert_<SID>.log    # follow the alert log as it is written
    $ tail -200 alert_<SID>.log  # show the last 200 lines
    Regards
    Asif kabir
    Edited by: asifkabirdba on Nov 16, 2009 12:34 PM
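For the "last one day" question above, a small awk filter over the timestamp lines works on the classic text alert log. A sketch against a fabricated log; the file names are illustrative, and the day format matches the ctime-style timestamps ("Mon Nov 16 11:55:00 2009") the alert log uses:

```shell
#!/bin/sh
# Print everything from the first timestamp line matching a given day onward.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
Sun Nov 15 09:00:00 2009
Completed: ALTER DATABASE OPEN
Mon Nov 16 11:55:00 2009
Thread 1 advanced to log sequence 42
EOF

DAY='Mon Nov 16'    # on a live system: DAY=$(date '+%a %b %e')
awk -v d="$DAY" 'index($0, d) == 1 { found = 1 } found' "$LOG"
```

`date '+%a %b %e'` space-pads single-digit days ("Mon Nov  2"), matching how the alert log writes them.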

  • How to view alert log?

    I tried clicking on the xml alert log, but it goes into IE and tells me "Cannot view xml input using xsl style sheet...Only one top level element allowed in an xml document". I don't see any adrci, and I can't find any text file alert log? trace directory only has files beginning with cdump. database and dbs directories don't have it. And nothing about it in the docs?
    I hope I'm missing something obvious. The database is running. XP Pro SP3.

    Udo wrote:
    > Hello Joel,
    > the good old text alert log is still there, it just moved a bit. The default location would be ORACLE_HOME\diag\rdbms\xe\xe\trace, e.g. D:\oracle\product\database_xe_11_2\app\oracle\diag\rdbms\xe\xe\trace for the instance on my machine.
    Yep, that's the trace directory I was looking in; it only has an xml.
    > See this thread for further hints: {thread:id=2281565}
    (You had an extra ampersand in the thread id.) Yeah, 'Diag Trace' says the same directory.
    Anyone know how to get the css right? I'm clueless about such things.
    > -Udo
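If only the XML log is present, adrci itself can render it as plain text, which sidesteps the browser/XSL error entirely. A sketch of an interactive session, assuming adrci is on the PATH and the XE home path from Udo's reply:

```
adrci> set homepath diag/rdbms/xe/xe
adrci> show alert -tail 50     -- last 50 lines as plain text
adrci> show alert -term        -- dump the whole alert log to the terminal
```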

  • Clusterware 12.1.0.2.0:  Alert Log is empty

    Hello,
    After one week of working with a cluster that started with two nodes and now has three, I see that my alertrac1.log is empty on all nodes.
    Going to the ORACLE_HOME for the GI:
    [grid@rac1 rac1]$ pwd
    /u01/app/12.1.0/grid/log/rac1
    [grid@rac1 rac1]$ ls -ltr
    total 17360
    drwxr-x---. 2 root oinstall     4096 Mar  4 13:43 ctssd
    drwxr-x---. 2 root oinstall     4096 Mar  4 13:43 crsd
    drwxr-x---. 2 grid oinstall     4096 Mar  4 13:43 evmd
    drwxr-x---. 2 grid oinstall     4096 Mar  4 13:43 cssd
    drwxr-x---. 2 grid oinstall     4096 Mar  4 13:43 mdnsd
    drwxr-x---. 2 root oinstall     4096 Mar  4 13:43 gnsd
    drwxr-x---. 2 grid oinstall     4096 Mar  4 13:43 srvm
    drwxr-x---. 2 grid oinstall     4096 Mar  4 13:43 gipcd
    drwxr-x---. 2 grid oinstall     4096 Mar  4 13:43 diskmon
    drwxr-xr-x. 2 grid oinstall     4096 Mar  4 13:43 afd
    drwxrwxr-t. 5 grid oinstall     4096 Mar  4 13:43 racg
    drwxr-x---. 2 grid oinstall     4096 Mar  4 13:43 admin
    drwxr-x---. 2 root oinstall     4096 Mar  4 13:43 crfmond
    drwxr-x---. 2 root oinstall     4096 Mar  4 13:43 crflogd
    drwxrwxr-x. 2 grid oinstall     4096 Mar  4 13:43 xag
    drwxr-xr-x. 6 grid oinstall     4096 Mar  4 13:43 acfs
    -rw-rw-r--. 1 grid oinstall        0 Mar  4 13:43 alertrac1.log
    drwxr-x---. 2 root oinstall     4096 Mar  4 13:47 ohasd
    drwxr-x---. 2 grid oinstall     4096 Mar 15 10:14 gpnpd
    drwxrwxrwt. 2 grid oinstall     4096 Mar 15 10:23 client
    Also, other logs like css and crsd are empty.
    This is a strange issue; I have never seen it before.
    The Linux servers are installed in Spanish:
    Red Hat Enterprise Linux Server release 6.5 (Santiago)
    Even though I change the environment variables to English, the messages still come out in Spanish.
    Then, if I execute any crsctl command as the GRID user:
    [grid@rac1 ~]$ crsctl check cluster -all
    rac1:
    CRS-4537: Cluster Ready Services está en línea
    CRS-4529: Cluster Synchronization Services está en línea
    CRS-4533: El gestor de eventos está en línea
    rac2:
    CRS-4537: Cluster Ready Services está en línea
    CRS-4529: Cluster Synchronization Services está en línea
    CRS-4533: El gestor de eventos está en línea
    rac3:
    CRS-4537: Cluster Ready Services está en línea
    CRS-4529: Cluster Synchronization Services está en línea
    CRS-4533: El gestor de eventos está en línea
    The messages are in Spanish; it seems the environment variables are not being read.
    The .bash_profile for GRID user is:
    export TMP=/tmp
    export TMPDIR=$TMP
    export ORACLE_HOSTNAME=rac1.localdomain
    export ORACLE_BASE=/u01/app/grid
    export ORACLE_HOME=/u01/app/12.1.0/grid
    export ORACLE_SID=+ASM1; export ORACLE_SID
    export ORACLE_TERM=xterm; export ORACLE_TERM
    export BASE_PATH=/usr/sbin:$PATH; export BASE_PATH
    export PATH=$ORACLE_HOME/bin:$PATH; export PATH
    export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH
    export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export CLASSPATH
    ulimit -u 16384 -n 65536
    export NLS_LANG=AMERICAN_AMERICA.AL32UTF8
    export LANG=en_US.UTF-
    Do you know what could be happening?
    Many thanks
    Arturo

    Hello,
    Thanks for your answer.
    Yes, If I use adrci from the grid user:
    adrci> show alert
    Choose the home from which to view the alert log:
    1: diag/clients/user_grid/host_203307297_82
    2: diag/clients/user_oracle/host_203307297_82
    3: diag/rdbms/_mgmtdb/-MGMTDB
    4: diag/asm/+asm/+ASM1
    5: diag/tnslsnr/rac1/listener
    6: diag/tnslsnr/rac1/mgmtlsnr
    7: diag/tnslsnr/rac1/asmnet1lsnr_asm
    8: diag/tnslsnr/rac1/listener_scan2
    9: diag/tnslsnr/rac1/listener_scan3
    10: diag/tnslsnr/rac1/listener_scan1
    11: diag/crs/rac1/crs
    Q: to quit
    Option 11 goes to /u01/app/grid/diag/crs/rac1/crs/trace/alert.log.
    This directory holds all the CRS trace files, including the css and crsd components, etc.
    Well, I think this is a small change in 12c.
    Thanks again.
    Regards
    Arturo
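Since 12c puts the clusterware logs under ADR as shown above, the purge question from the top of this thread can be handled with a cron script under the grid user that loops over every ADR home. A sketch; adrci is stubbed here so the loop logic can be demonstrated without an Oracle install, and on a real host you would remove the stub and let the real $ORACLE_HOME/bin/adrci be found on PATH:

```shell
#!/bin/sh
# Nightly ADR purge sketch for the grid user.
# Stub standing in for the real adrci binary (remove on a real host):
adrci() {
    case "$1" in
        "exec=show homes")
            printf 'ADR Homes:\ndiag/asm/+asm/+ASM1\ndiag/tnslsnr/rac1/listener\n' ;;
        *)
            echo "would run: adrci $1" ;;
    esac
}

# Purge everything older than 7 days; adrci's -age is in minutes (7*24*60).
for home in $(adrci "exec=show homes" | grep '^diag'); do
    adrci "exec=set home $home; purge -age 10080"
done
```

Note that `purge` only trims what lives inside ADR (XML alert logs, incidents, traces); the plain-text alert_&lt;SID&gt;.log and the non-ADR clusterware directories listed in MOS 1368695.1 still need separate logrotate/cron handling.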
