Alert_+asm.log
Hi gurus,
Oracle RAC 11.2.0.3 on Red Hat 5.4.
We had a RAC database on 11.1.0.1 and migrated to 11.2.0.3 a few months ago.
All works fine, but there is one abnormal issue (I think it is abnormal).
We have the ASM logs at:
/u01/app/oracle/diag/asm/+asm/+ASM1/trace/alert_+asm1.log
/u01/app/oracle/diag/asm/+asm/+ASM2/trace/alert_+asm2.log
but all the rest of the logs/traces are under /u02.
SQL> sho parameter diag
NAME TYPE VALUE
diagnostic_dest string /u02/app/oracle
SQL> sho parameter background_dump_dest
NAME TYPE VALUE
background_dump_dest string /u02/app/oracle/diag/rdbms/db_cos /db_1/trace
If I check the ADR base in SQL:
SQL> select value from v$diag_info where name='ADR Base';
VALUE
/u02/app/oracle
but if I check in adrci, it shows the ADR base as /u01/app/oracle. I think this is where my problem is, and I should change it, right?
but if i try:
adrci> set base =/u02/app/oracle
then when I exit and enter again, the base is reset to /u01/app/oracle. How do I make it permanent?
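For what it's worth, adrci has no persistent "set base" of its own; as far as I know it derives its default base from the environment, so one workaround is to export ORACLE_BASE in the oracle user's login profile. A sketch (assumptions: bash, profile at ~/.bash_profile, and that adrci really does default from ORACLE_BASE on your version):

```shell
# Workaround sketch (assumption: adrci defaults its ADR base from ORACLE_BASE):
# append the export to the login profile, once, so every new adrci session
# picks up /u02/app/oracle as its base.
PROFILE="$HOME/.bash_profile"
LINE='export ORACLE_BASE=/u02/app/oracle'
grep -qxF "$LINE" "$PROFILE" 2>/dev/null || echo "$LINE" >> "$PROFILE"
```

The real fix in this thread turned out to be the ASM diagnostic_dest parameter, so treat this only as a session-default workaround.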
Edited by: Me_101 on 29-Jan-2013 4:08
Both were right; I had the ASM parameters set incorrectly:
SQL> sho parameter diag
NAME TYPE VALUE
diagnostic_dest string /u01/app/oracle
SQL> sho parameter background_dump_dest
NAME TYPE VALUE
background_dump_dest string /u01/app/oracle/diag/asm/+asm/
+ASM1/trace
SQL> alter system set diagnostic_dest='/u02/app/oracle' scope=both sid='*';
System altered.
and:
SQL> sho parameter background_dump
NAME TYPE VALUE
background_dump_dest string /u02/app/oracle/diag/asm/+asm/
+ASM2/trace
SQL> sho parameter diag
NAME TYPE VALUE
diagnostic_dest string /u02/app/oracle
JUST A POINT --> CRS_HOME is for the previous Oracle release; for 11g it is ORA_CRS_HOME.
Similar Messages
-
Hi all,
I am getting this alert in the ASM alert log.
There are so many databases and file systems, and the alert log and trace file do not say which DB it is coming from.
==========
ORA-15173: entry 'thread_5_seq_9631.2209.732197917' does not exist in directory '2010_10_12'
ORA-15173: entry 'thread_5_seq_9631.2209.732197917' does not exist in directory '2010_10_12'
ORA-15173: entry 'thread_5_seq_9631.2209.732197917' does not exist in directory '2010_10_12'
ORA-15173: entry 'thread_5_seq_9631.2209.732197917' does not exist in directory '2010_10_12'
*** 2010-10-12 12:37:11.553
ORA-15173: entry 'thread_5_seq_9631.2209.732197917' does not exist in directory '2010_10_12'
ORA-15173: entry 'thread_5_seq_9631.2209.732197917' does not exist in directory '2010_10_12'
ORA-15173: entry 'thread_5_seq_9631.2209.732197917' does not exist in directory '2010_10_12'
ORA-15173: entry 'thread_5_seq_9631.2209.732197917' does not exist in directory '2010_10_12'
*** 2010-10-12 12:37:22.541
ORA-15173: entry 'thread_5_seq_9631.2209.732197917' does not exist in directory '2010_10_12'
ORA-15173: entry 'thread_5_seq_9631.2209.732197917' does not exist in directory '2010_10_12'
ORA-15173: entry 'thread_5_seq_9631.2209.732197917' does not exist in directory '2010_10_12'
ORA-15173: entry 'thread_5_seq_9631.2209.732197917' does not exist in directory '2010_10_12'
*** 2010-10-12 12:37:33.144
ORA-15173: entry 'thread_5_seq_9631.2209.732197917' does not exist in directory '2010_10_12'
ORA-15173: entry 'thread_5_seq_9631.2209.732197917' does not exist in directory '2010_10_12'
ORA-15173: entry 'thread_5_seq_9631.2209.732197917' does not exist in directory '2010_10_12'
ORA-15173: entry 'thread_5_seq_9631.2209.732197917' does not exist in directory '2010_10_12'
==========
Please advise a solution. In this case I am getting the alert in the alert_+ASM.log file, so I am not able to tell where it is coming from.
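Since the ORA-15173 entry names an archived log (thread 5, sequence 9631), one way to narrow down the culprit is to grep every database alert log under the diag tree for that thread/sequence string. A sketch (the helper name is mine, and the demo uses a throwaway tree; point it at your real diagnostic_dest, e.g. .../diag/rdbms):

```shell
# find_logs_mentioning <diag_rdbms_base> <pattern>
# lists every database alert log under the tree that mentions the pattern
find_logs_mentioning() {
  grep -l "$2" "$1"/*/*/trace/alert_*.log 2>/dev/null
}

# throwaway demo tree standing in for the real diag/rdbms directory
demo=$(mktemp -d)
mkdir -p "$demo/mydb/mydb1/trace"
echo "ORA-15173: entry 'thread_5_seq_9631.2209.732197917' does not exist" \
  > "$demo/mydb/mydb1/trace/alert_mydb1.log"
find_logs_mentioning "$demo" thread_5_seq_9631   # prints the matching alert log
```

Whichever database logs that thread/sequence (typically via RMAN archivelog deletion) is the likely source of the ASM alert.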
-
ASM, Disk Hung: how remove??
Hi guys, I have a big issue to solve: I work on Solaris and Oracle 10.0.2 with ASM.
I can't delete a new disk (no tablespaces have grown onto it yet) that was added to the diskgroup DGFC.
This is my situation:
select name, state FROM V$ASM_DISK where name like 'DGFC%';
NAME STATE
DGFC_0000 NORMAL
DGFC_0001 NORMAL
DGFC_0002 NORMAL
DGFC_0003 NORMAL
DGFC_0004 NORMAL
DGFC_0005 NORMAL
DGFC_0006 NORMAL
DGFC_0007 NORMAL
DGFC_0008 NORMAL <---- This is the last disk added (I would remove it)
Maybe this could be important: DGFC_0008 was dropped from another diskgroup, its header_status changed from CANDIDATE to FORMER, and then I added it to the DGFC diskgroup in the classic way.
After sending ALTER DISKGROUP DGFC DROP DISK DGFC_0008; the situation changes to:
NAME STATE
DGFC_0008 HUNG
and log file alert_+ASM.log contains:
Fri Jan 14 21:18:08 2011
SQL> ALTER DISKGROUP DGFC DROP DISK DGFC_0008
Fri Jan 14 21:18:08 2011
NOTE: PST update: grp = 1
NOTE: requesting all-instance PST refresh for group=1
Fri Jan 14 21:18:08 2011
NOTE: PST refresh pending for group 1/0x22e8f203 (DGFC)
SUCCESS: refreshed PST for 1/0x22e8f203 (DGFC)
Fri Jan 14 21:18:13 2011
NOTE: starting rebalance of group 1/0x22e8f203 (DGFC) at power 1
Starting background process ARB0
ARB0 started with pid=12, OS id=5071
Fri Jan 14 21:18:13 2011
NOTE: assigning ARB0 to group 1/0x22e8f203 (DGFC)
Fri Jan 14 21:18:13 2011
WARNING: allocation failure on disk DGFC_0000 for file 3 xnum 30
Fri Jan 14 21:18:13 2011
Errors in file /users/app/oracle/admin/+ASM/bdump/+asm_arb0_5071.trc:
ORA-15041: diskgroup space exhausted
Fri Jan 14 21:18:13 2011
NOTE: stopping process ARB0
Fri Jan 14 21:18:16 2011
WARNING: rebalance not completed for group 1/0x22e8f203 (DGFC)
Fri Jan 14 21:18:16 2011
SUCCESS: rebalance completed for group 1/0x22e8f203 (DGFC)
NOTE: PST update: grp = 1
WARNING: grp 1 disk DGFC_0008 still has contents (45 AUs)
NOTE: PST update: grp = 1
This is /users/app/oracle/admin/+ASM/bdump/+asm_arb0_5071.trc:
Instance name: +ASM
Redo thread mounted by this instance: 0 <none>
Oracle process number: 12
Unix process pid: 5071, image: [email protected] (ARB0)
*** SERVICE NAME:() 2011-01-14 21:18:13.067
*** SESSION ID:(39.20) 2011-01-14 21:18:13.067
ARB0 relocating file +DGFC.3.1 (7 entries)
ORA-15041: diskgroup space exhausted
Can anyone help? Thanks a lot.
Thanks a lot Levi,
The answer was in the log: diskgroup space exhausted. The original diskgroup was quite full and ASM couldn't rebalance the data off the new disk correctly (which data?!).
I solved it by adding a new disk (16 GB) and altering diskgroup DGFC. New space was ready and the disk dropped correctly.
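For anyone hitting the same ORA-15041, the sequence that worked here can be sketched as a script (a hedged sketch: DRY_RUN just prints the SQL instead of running it, the new disk path is a hypothetical placeholder, and on 10g you connect to the ASM instance as sysdba):

```shell
# Sketch of the fix: add space first, wait for the rebalance to finish,
# then drop the hung disk. DRY_RUN=1 only echoes the statements.
DRY_RUN=${DRY_RUN:-1}
run_sql() {
  if [ "$DRY_RUN" = 1 ]; then echo "$1"
  else echo "$1" | sqlplus -s "/ as sysdba"; fi
}
run_sql "ALTER DISKGROUP DGFC ADD DISK '/dev/rdsk/c2t1d0s6';"          # hypothetical device
run_sql "SELECT operation, state, est_minutes FROM v\$asm_operation;"  # poll until no rows
run_sql "ALTER DISKGROUP DGFC DROP DISK DGFC_0008;"
```

Checking v$asm_operation between steps avoids re-issuing the drop while the rebalance from the add is still running.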
Thanks a lot
Bye -
10.1.0.4 on SLES 9 SP1 ora-03113
Hello,
SLES9 SP1 is on a PIII with two 800 MHz processors and 2 GB RAM;
on four 156 GB SATA hard disks two RAID1 arrays are created;
on one RAID is the OS with 1 GB swap and Oracle 10.1.0.4 Enterprise Edition; on the second RAID are two raw partitions of 70 GB each ....
I start DBCA with a custom database and configure one ASM disk group using
the two raw partitions (/dev/sdb1, /dev/sdb2), then I add
4 new tablespaces with datafiles of 2 GB each (autoextend on, 10 MB), change the character set,
and deselect Spatial & Data Mining....
Installation starts but stops between 13 and 14% while creating the data dictionary, with error ORA-03113:
end-of-file on communication channel.
Any suggestion is extra welcome.
Thanks for your time.
HERE ARE MY ALERT LOGS AND TRACE FILES:
/admin/+ASM/bdump>cat alert_+ASM.log
Tue Jun 28 16:22:50 2005
SUCCESS: diskgroup DISKOVI was mounted
Tue Jun 28 16:22:51 2005
NOTE: recovering COD for group 1/0xac774a8b (DISKOVI)
SUCCESS: completed COD recovery for group 1/0xac774a8b (DISKOVI)
Tue Jun 28 16:29:19 2005
Errors in file /opt/oracle/admin/+ASM/bdump/+asm_smon_5880.trc:
ORA-29702: error occurred in Cluster Group Service operation
ORA-29702: error occurred in Cluster Group Service operation
Tue Jun 28 16:29:19 2005
SMON: terminating instance due to error 29702
Instance terminated by SMON, pid = 5880
Tue Jun 28 16:34:18 2005
Starting ORACLE instance (normal)
LICENSE_MAX_SESSION = 0
LICENSE_SESSIONS_WARNING = 0
/admin/+ASM/bdump> cat +asm_smon_5880.trc
/opt/oracle/admin/+ASM/bdump/+asm_smon_5880.trc
Oracle Database 10g Enterprise Edition Release 10.1.0.4.0 - Production
With the Partitioning, OLAP and Data Mining options
ORACLE_HOME = /opt/oracle/product/10g
System name: Linux
Node name: zeus
Release: 2.6.5-7.139-smp
Version: #1 SMP Fri Jan 14 15:41:33 UTC 2005
Machine: i686
Instance name: +ASM
Redo thread mounted by this instance: 0 <none>
Oracle process number: 7
Unix process pid: 5880, image: oracle@zeus (SMON)
*** 2005-06-28 16:29:19.298
*** SERVICE NAME:() 2005-06-28 16:29:19.298
*** SESSION ID:(33.1) 2005-06-28 16:29:19.298
clssgsGroupGetStatus: clsssRecvMsg failed 3 0)
clssgsGroupGetStatus: returning 8
kgxgnpstat: received ABORT event from CLSS
Group services Error [NM abort event ] @ 28019:715
error 29702 detected in background process
ORA-29702: error occurred in Cluster Group Service operation
ORA-29702: error occurred in Cluster Group Service operation
/admin/twrh/bdump>cat alert_twrh.log
ORA-15064: Message 15064 not found; No message file for product=RDBMS, facility=ORA
ORA-01089: Message 1089 not found; No message file for product=RDBMS, facility=ORA
ORA-07445: exception encountered: core dump [kksParseCursor()+9] [SIGSEGV] [Address not mapped to object] [0x0] [] [] -
Disksuite problem / db on errored disk
I recently had a problem with one of the mirrors on my system. The only things on the drive with errors are 2 slices: one with data (/export/home) and one slice for the metadb data. I don't think the drive is actually bad; it just appears it may have needed an fsck to fix things. I was able to fsck the data partition on slice 0 with no problems, but then I noticed that if I fsck slice 7, the metadb slice, I get the following (below). I don't recall using alternate super-blocks for fsck and didn't want to chance making things worse. Is there an easy way to remove the metadb from the disk? I suppose I would also have to unmirror the drive and re-establish the mirror.
Any suggestions?
# fsck /dev/rdsk/c0t3d0s7
** /dev/rdsk/c0t3d0s7
Can't roll the log for /dev/rdsk/c0t3d0s7.
DISCARDING THE LOG MAY DISCARD PENDING TRANSACTIONS.
DISCARD THE LOG AND CONTINUE? yes
BAD SUPER BLOCK: MAGIC NUMBER WRONG
USE AN ALTERNATE SUPER-BLOCK TO SUPPLY NEEDED INFORMATION;
eg. fsck [-F ufs] -o b=# [special ...]
where # is the alternate super block. SEE fsck_ufs(1M).
I will also include info about the mirror. The output basically states to add another drive, but like I said, I don't think the drive is bad.
d4: Mirror
Submirror 0: d14
State: Needs maintenance
Submirror 1: d24
State: Okay
Pass: 1
Read option: roundrobin (default)
Write option: parallel (default)
Size: 17577945 blocks (8.4 GB)
d14: Submirror of d4
State: Needs maintenance
Invoke: metareplace d4 c0t3d0s0 <new device>
Size: 17577945 blocks (8.4 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
c0t3d0s0 0 No Maintenance Yes
d24: Submirror of d4
State: Okay
Size: 17577945 blocks (8.4 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
c0t4d0s0 0 No Okay Yes
Investigating this further I found the following error entry in the alert log file (D:\app\dwh.admin\diag\asm\+asm\+asm\trace\alert_+asm.log):
ORA-15025: could not open disk '\\.\ORCLDISKDATA1'
ORA-27037: unable to obtain file status
OSD-04011: GetFileInformationbyHandle() failure, unable to obtain file info
O/S-Error: (OS 1) Incorrect function
This explains the 'UNKNOWN' under HEADER_STATUS for this disk.
Do you reckon this has anything to do with an incompatibility between the Oracle software and the OS version?
Here are more details:
OS
Windows 2008 Server Standard SP2 (32-bit)
select * from v$version;
Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - Production
PL/SQL Release 11.1.0.7.0 - Production
CORE 11.1.0.7.0 - Production
TNS for 32-bit Windows: Version 11.1.0.7.0 - Production
NLSRTL Version 11.1.0.7.0 - Production -
Install ASM on Solaris RAC 10g
Hello,
I installed CRS and database software on two nodes RAC 10.2.0.4 Solaris x86-64 5.10, latest updates for Solaris. I have no error with crs.
Problems description:
1) When I run DBCA to create ASM it fails to create it on the nodes, with error ORA-03135: connection lost contact.
2) I see ORA-29702 into the logs (error occurred in Cluster Group Service operation
Cause: An unexpected error occurred while performing a CGS operation.
Action: Verify that the LMON process is still active. Also, check the Oracle LMON trace files for errors.)
Question:
Do you think the problem is that interface 10.0.0.21 node1-priv-fail2 is started? (See bdump/alert_+ASM1.log below.)
Is this interface, 10.0.0.21 node1-priv-fail2, the one that freezes the prompt when I try ssh oracle@node1-priv-fail2 from node2?
Possible solution: I saw Metalink 283684.1 but don't know if/what to change in my interfaces.
Details:
I think it is something with the interfaces, but I don't know what.
- One thing I noticed is that it is not possible to ssh from node1 to node2-priv-fail2 (this, I was told, is the private standby loopback interface). The same goes from node2 to node1-priv-fail2: it gives a frozen prompt.
- in /etc/hosts on both nodes I have:
127.0.0.1 localhost
172.17.1.17 node1
172.17.1.18 node1-fail1
172.17.1.19 node1-fail2
172.17.1.20 node1-vip
172.17.1.29 node2 loghost
172.17.1.30 node2-fail1
172.17.1.31 node2-fail2
172.17.1.32 node2-vip
10.0.0.1 node1-priv
10.0.0.11 node1-priv-fail1
10.0.0.21 node1-priv-fail2
10.0.0.2 node2-priv
10.0.0.12 node2-priv-fail1
10.0.0.22 node2-priv-fail2
Starting ORACLE instance (normal)
LICENSE_MAX_SESSION = 0
LICENSE_SESSIONS_WARNING = 0
Interface type 1 e1000g2 10.0.0.0 configured from OCR for use as a cluster interconnect
Interface type 1 e1000g3 10.0.0.0 configured from OCR for use as a cluster interconnect
Interface type 1 e1000g0 172.17.0.0 configured from OCR for use as a public interface
Interface type 1 e1000g1 172.17.0.0 configured from OCR for use as a public interface
Starting up ORACLE RDBMS Version: 10.2.0.4.0.
System parameters with non-default values:
large_pool_size = 12582912
instance_type = asm
cluster_database = TRUE
instance_number = 1
remote_login_passwordfile= EXCLUSIVE
background_dump_dest = /opt/app/oracle/db/admin/+ASM/bdump
user_dump_dest = /opt/app/oracle/db/admin/+ASM/udump
core_dump_dest = /opt/app/oracle/db/admin/+ASM/cdump
Cluster communication is configured to use the following interface(s) for this instance
10.0.0.1
10.0.0.21
node1:oracle$ oifcfg getif
e1000g0 172.17.0.0 global public
e1000g1 172.17.0.0 global public
e1000g2 10.0.0.0 global cluster_interconnect
e1000g3 10.0.0.0 global cluster_interconnect
node1:oracle$ ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
inet 127.0.0.1 netmask ff000000
e1000g0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
inet 172.17.1.17 netmask ffff0000 broadcast 172.17.255.255
groupname orapub
e1000g0:1: flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu 1500 index 2
inet 172.17.1.18 netmask ffff0000 broadcast 172.17.255.255
e1000g0:2: flags=1040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4> mtu 1500 index 2
inet 172.17.1.20 netmask ffff0000 broadcast 172.17.255.255
e1000g1: flags=39040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER,FAILED,STANDBY> mtu 1500 index 3
inet 172.17.1.19 netmask ffff0000 broadcast 172.17.255.255
groupname orapub
e1000g2: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 4
inet 10.0.0.1 netmask ff000000 broadcast 10.255.255.255
groupname oracle_interconnect
e1000g2:1: flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu 1500 index 4
inet 10.0.0.11 netmask ff000000 broadcast 10.255.255.255
e1000g3: flags=39040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER,FAILED,STANDBY> mtu 1500 index 5
inet 10.0.0.21 netmask ff000000 broadcast 10.255.255.255
groupname oracle_interconnect
Hi,
for a 10g RAC you need:
- a host-IP for every node
- a private IP for every node
- a virtual IP for every node
Host IP and private IP must be assigned to both hosts, and connection between the hosts using either the host IP or the private IP must be possible.
Is it possible to build the RAC and ASM only with the private IP, without the public and the virtual IP, and if yes, how?
The terms "private" and "public" do not refer to public IPs. They refer to the fact that "private" is only for communication between the nodes, and "public" is for communication between the client and the database.
For a successful installation you need at least these three IPs on each system.
So for instance your public IPs reside in the network 192.168.1.0/255.255.255.0 and your private interconnect network can be 192.168.2.0/255.255.255.0. Both networks consist of private (i.e. non-routable) IPs. -
I've been playing around with 11g, mainly with ADRCI (trying to identify what it does, etc...), and from what I can see, based on the documentation, within ADRCI you can purge incidents, problems, reports, etc...
So I set my SHORTP_POLICY to 168 (7 days), and within ADRCI, if I just type "purge" (or purge -age 60 -type alert), should it not purge the alert logs? (I assume it would do both the log.xml and alert_<SID>.log files.)
This doesn't seem to work... Has anyone tried (or worked) with this?
And I also assume that because there are policies (SHORTP_POLICY & LONGP_POLICY), that there is some kind of automated purging process? is this correct?
Thanks
I've read that and a few other related articles. Those are for the database ADR homes, which I'm relatively comfortable with:
You go to ORACLE_BASE for the database oracle user (i.e. /u01/app/oracle), cycle through all the results of adrci exec="show homes", and do a purge -age XXXXX. Either that, or wait out the rolling purge period in which ADR automatically purges per the SHORTP policy (30 days) or the LONGP policy (365 days).
I'm talking about the grid user. That means the +ASM instance's logs, CRS data/logs, the listener, and scan_listener entries.
my general understanding is for the grid user, adr homes typically reside under:
/u01/app/grid (which is the typical ORACLE_BASE)
/u01/app/11.2.0/grid/log (which is the typical ORACLE_HOME/log)
.. but there are a ton of other things.
MOS 1368695.1 spells out the clusterware log locations and some of their archival policies:
<GRID_HOME>/log/$HOST/alert$HOST.log
<GRID_HOME>/log/$HOST/client
<GRID_HOME>/log/$HOST/racg*
<GRID_HOME>/log/$HOST/srvm
<GRID_HOME>/rdbms/audit
<GRID_HOME>/log/diag/*
I've been told by Oracle Support that these directories are not auto-rotated; they have to be handled manually by the DBA. Some of these directories under $GRID_HOME/log/<server_name>/ are owned by both the root and grid users. So I was wondering how the rest of the community is dealing with it. How large have people typically made their /u01/app filesystems? Have they done purging via a cron script under the grid user or root user?
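For the ADR-managed portion, one pattern described above can be sketched as a cron-able loop over `adrci exec="show homes"` that purges each home. A hedged sketch (the 30-day age and the dry-run wrapper are my additions; `show homes` output formats vary slightly between versions, hence stripping the header line; the non-ADR clusterware logs under <GRID_HOME>/log still need their own rotation job):

```shell
# purge_homes: read ADR home paths on stdin and purge anything older than
# 30 days (43200 minutes) in each. DRY_RUN=1 prints the adrci commands
# instead of executing them.
DRY_RUN=${DRY_RUN:-1}
purge_homes() {
  while read -r home; do
    [ -z "$home" ] && continue
    cmd="adrci exec=\"set home $home; purge -age 43200\""
    if [ "$DRY_RUN" = 1 ]; then echo "$cmd"; else eval "$cmd"; fi
  done
}
# real use (strip the "ADR Homes:" header line):
#   adrci exec="show homes" | sed '1d' | purge_homes
printf 'diag/asm/+asm/+ASM1\ndiag/tnslsnr/node1/listener\n' | purge_homes
```

Run it from the grid user's crontab so file ownership stays consistent; the root-owned directories under $GRID_HOME/log need a separate root-owned job.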
ORA-12523 while creating a database on ASM using DBCA
Hi all,
I have set up an ASM instance and also configured the listener services for it,
but when I try to create a database on the configured ASM instance using DBCA, I get the following error:
TNS:listener could not find instance appropriate
for the client connection
The Connection descriptor used by the client was:
(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(Host=10.10.199.141)
(Port=6881))(CONNECT_DATA=(SERVICE_NAME=+ASM1)
(INSTANCE_NAME=+ASM1)(UR=A)))
I have checked the listener status and the ASM instance, and both seem to be fine:
$ lsnrctl status ASM
LSNRCTL for HPUX: Version 10.2.0.3.0 - Production on 30-JUL-2010 12:40:17
Copyright (c) 1991, 2006, Oracle. All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=10.10.199.141)(PORT=6881)))
STATUS of the LISTENER
Alias ASM
Version TNSLSNR for HPUX: Version 10.2.0.3.0 - Production
Start Date 30-JUL-2010 11:36:11
Uptime 0 days 1 hr. 4 min. 5 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /uat06/asm_home/network/admin/listener.ora
Listener Log File /uat06/asm_home/network/log/asm.log
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.10.199.141)(PORT=6881)))
Services Summary...
Service "+ASM" has 1 instance(s).
Instance "ASM", status BLOCKED, has 1 handler(s) for this service...
Service "+ASM_XPT" has 1 instance(s).
Instance "ASM", status BLOCKED, has 1 handler(s) for this service...
Service "ASM" has 1 instance(s).
Instance "ASM", status UNKNOWN, has 1 handler(s) for this service...
Service "PLSExtProc" has 1 instance(s).
Instance "PLSExtProc", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully
SQL> sho parameter local
NAME TYPE VALUE
local_listener string ASM
log_archive_local_first boolean TRUE
Please help as I am new to ASM.
OS-HP-UX
Database;10gR2
regards
Hi Chinar,
I have already checked the same and made the changes as per the Metalink note,
but still I am getting the following error:
$ lsnrctl status ASM
LSNRCTL for HPUX: Version 10.2.0.3.0 - Production on 30-JUL-2010 12:40:17
Copyright (c) 1991, 2006, Oracle. All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=10.10.199.141)(PORT=6881)))
STATUS of the LISTENER
Alias ASM
Version TNSLSNR for HPUX: Version 10.2.0.3.0 - Production
Start Date 30-JUL-2010 11:36:11
Uptime 0 days 1 hr. 4 min. 5 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /uat06/asm_home/network/admin/listener.ora
Listener Log File /uat06/asm_home/network/log/asm.log
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.10.199.141)(PORT=6881)))
Services Summary...
Service "+ASM" has 1 instance(s).
Instance "ASM", status BLOCKED, has 1 handler(s) for this service...
Service "+ASM_XPT" has 1 instance(s).
Instance "ASM", status BLOCKED, has 1 handler(s) for this service...
Service "ASM" has 1 instance(s).
Instance "ASM", status UNKNOWN, has 1 handler(s) for this service...
Service "PLSExtProc" has 1 instance(s).
Instance "PLSExtProc", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully
Why is it showing status "BLOCKED" when I check the listener status? -
BackPRD.log File Suddenly Increases to 78GB in SAPBACKUP folder in Linux server
Dear Experts,
There is a problem on our SAP production server. Earlier when I checked the space it was fine, but 25 minutes later, while checking the directory spaces again, the backup folder size had suddenly increased to more than 100 times what it was. The backPRD.log file was 13 MB earlier and is now showing as 78 GB. Is there any way to resolve this? I have checked some forums and there is no thread for the same issue. With this, restoration is also not possible using the source system backup on the target system.
Thanks, Regards,
Harsha.
Hi Sanjay,
Could you confirm for
1. Any recent changes to the DB or at the backup-device end (could be third-party software or hardware)?
2. Any recent SP, kernel, or DB upgrade, or OS upgrade activity performed at your end?
3. Any modifications to the backup schedule, if using DB13, or any changes to third-party scripts?
4. Have you activated a trace for the system?
In addition to all that, if you're able to log in to the system then please share the system logs from SM21 and recent dump details from ST22, if any.
With this the Restoration is also not possible Using the Source system Backup on Target system.
Along with the above logs, I would like to check the alert_<SID>.log file as well.
Regards,
Gaurav -
RMAN success, but errors in alert.log file
My RMAN backup script runs well, but generates errors in alert.log file.
Here is the trace file contents:
/usr/lib/oracle/xe/app/oracle/admin/XE/udump/xe_ora_3990.trc
Oracle Database 10g Express Edition Release 10.2.0.1.0 - Production
ORACLE_HOME = /usr/lib/oracle/xe/app/oracle/product/10.2.0/server
System name: Linux
Node name: plockton
Release: 2.6.18-128.2.1.el5
Version: #1 SMP Wed Jul 8 11:54:54 EDT 2009
Machine: i686
Instance name: XE
Redo thread mounted by this instance: 1
Oracle process number: 26
Unix process pid: 3990, image: oracle@plockton (TNS V1-V3)
*** 2009-07-23 23:05:01.835
*** ACTION NAME:(0000025 STARTED111) 2009-07-23 23:05:01.823
*** MODULE NAME:(backup full datafile) 2009-07-23 23:05:01.823
*** SERVICE NAME:(SYS$USERS) 2009-07-23 23:05:01.823
*** SESSION ID:(33.154) 2009-07-23 23:05:01.823
*** 2009-07-23 23:05:18.689
*** ACTION NAME:(0000045 STARTED111) 2009-07-23 23:05:18.689
*** MODULE NAME:(backup archivelog) 2009-07-23 23:05:18.689
Does anyone know why? Thanks.
Richard
I'm not sure if this will answer your question or not, but I believe these messages can likely be ignored.
I'm currently running 10.2.0.1.0 Enterprise Edition in pre-production (yes, I know I should apply the latest patchset, and I plan to do so as soon as I get a development box allocated to me and can test its impact). I see the same types of messages that you've reported with each of my regularly-scheduled backups:
a) The alert_<$SID>.log reports that there are errors in trace files:
Mon Aug 10 04:33:49 2009
Starting control autobackup
Mon Aug 10 04:33:50 2009
Errors in file /opt/oracle/admin/blah/udump/blah_ora_32520.trc:
Mon Aug 10 04:33:50 2009
Errors in file /opt/oracle/admin/blah/udump/blah_ora_32520.trc:
Mon Aug 10 04:33:50 2009
Errors in file /opt/oracle/admin/blah/udump/blah_ora_32520.trc:
Control autobackup written to DISK device
handle '/backup/physical/BLAH/RMAN/cf_c-2740124895-20090810-00'
b) The .trc files, when you look at them, contain no errors - only these "informational" messages:
*** 2009-08-10 04:33:50.781
*** ACTION NAME:(0000105 STARTED111) 2009-08-10 04:33:50.754
*** MODULE NAME:(backup archivelog) 2009-08-10 04:33:50.754
*** SERVICE NAME:(SYS$USERS) 2009-08-10 04:33:50.754
*** SESSION ID:(126.28030) 2009-08-10 04:33:50.754
c) I've verified that LOG_ARCHIVE_TRACE is set to 0:
SQL*Plus> show parameter log_archive_trace
NAME TYPE VALUE
log_archive_trace integer 0
As best I can discern from my own experience, these can just be ignored, and I trust (read: "hope") they will simply go away once the latest patchset is applied. As for you running Oracle XE, a patchset is not an option, unfortunately.
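One way to gain confidence that the referenced trace files really are benign is to pull every "Errors in file ..." path out of the alert log and grep each named trace file for actual ORA- errors. A sketch (the helper name is mine; the demo uses throwaway temp files in place of the real alert log and trace file):

```shell
# scan_alert <alert_log>: for each "Errors in file X:" line, report whether X
# actually contains an ORA- error or only informational lines.
scan_alert() {
  awk '/Errors in file/ {sub(/:$/, "", $NF); print $NF}' "$1" | sort -u |
  while read -r trc; do
    if grep -q 'ORA-' "$trc" 2>/dev/null; then echo "REAL: $trc"
    else echo "benign: $trc"; fi
  done
}
# throwaway demo: a trace file with only informational lines
tmp=$(mktemp -d)
echo "*** MODULE NAME:(backup archivelog)" > "$tmp/blah_ora_1.trc"
printf 'Mon Aug 10 04:33:50 2009\nErrors in file %s:\n' "$tmp/blah_ora_1.trc" > "$tmp/alert.log"
scan_alert "$tmp/alert.log"   # reports the trace file as benign
```

Anything flagged REAL deserves a look; benign ones match the pattern Eric describes above.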
V/R
-Eric -
Hello,
I am very new to shell scripting. Our DB is 10g on AIX, and I wanted to set up something that will monitor my alert log and send me e-mail. I found the script below online, but I have very little knowledge of cron jobs. I can set one up, but this script doesn't say what goes where. It does say to put check_alert.awk someplace, but is that where cron comes in? I mean, do I schedule check_alert.awk in my cron job? I just want to know what parts go where and how to set this up the right way so I get an e-mail alert for my alert log; a step-by-step process would be good. Thanks.
UNIX shell script to monitor and email errors found in the alert log. It runs as the oracle OS owner. Make sure you change the "emailaddresshere" entries to the email you want, and put check_alert.awk someplace. I have chosen $HOME for this example; in real life I put it on a mounted directory on the NAS.
if test $# -lt 1
then
echo You must pass a SID
exit
fi
# ensure environment variables set
#set your environment here
export ORACLE_SID=$1
export ORACLE_HOME=/home/oracle/orahome
export MACHINE=`hostname`
export PATH=$ORACLE_HOME/bin:$PATH
# check if the database is running, if not exit
ckdb ${ORACLE_SID} -s
if [ "$?" -ne 0 ]
then
echo " $ORACLE_SID is not running!!!"
echo "${ORACLE_SID} is not running!" | mailx -m -s "Oracle sid ${ORACLE_SID} is not running!" "
|emailaddresshere|"
exit 1
fi;
#Search the alert log, and email all of the errors
#move the alert_log to a backup copy
#cat the existing alert_log onto the backup copy
#oracle 8 or higher DB's only.
sqlplus '/ as sysdba' << EOF > /tmp/${ORACLE_SID}_monitor_temp.txt
column xxxx format a10
column value format a80
set lines 132
SELECT 'xxxx', value FROM v\$parameter WHERE name = 'background_dump_dest';
exit
EOF
cat /tmp/${ORACLE_SID}_monitor_temp.txt | awk '$1 ~ /xxxx/ {print $2}' > /tmp/${ORACLE_SID}_monitor_location.txt
read ALERT_DIR < /tmp/${ORACLE_SID}_monitor_location.txt
ORIG_ALERT_LOG=${ALERT_DIR}/alert_${ORACLE_SID}.log
NEW_ALERT_LOG=${ORIG_ALERT_LOG}.monitored
TEMP_ALERT_LOG=${ORIG_ALERT_LOG}.temp
cat ${ORIG_ALERT_LOG} | awk -f $HOME/check_alert.awk > /tmp/${ORACLE_SID}_check_monitor_log.log
rm /tmp/${ORACLE_SID}_monitor_temp.txt 2>/dev/null
if [ -s /tmp/${ORACLE_SID}_check_monitor_log.log ]
then
echo "Found errors in sid ${ORACLE_SID}, mailed errors"
echo "The following errors were found in the alert log for ${ORACLE_SID}" > /tmp/${ORACLE_SID}_check_monitor_log.mail
echo "Alert log was copied into ${NEW_ALERT_LOG}" >> /tmp/${ORACLE_SID}_check_monitor_log.mail
echo " " >> /tmp/${ORACLE_SID}_check_monitor_log.mail
date >> /tmp/${ORACLE_SID}_check_monitor_log.mail
echo "--------------------------------------------------------------">>/tmp/${ORACLE_SID}_check_monitor_log.mail
echo " " >> /tmp/${ORACLE_SID}_check_monitor_log.mail
echo " " >> /tmp/${ORACLE_SID}_check_monitor_log.mail
echo " " >> /tmp/${ORACLE_SID}_check_monitor_log.mail
cat /tmp/${ORACLE_SID}_check_monitor_log.log >> /tmp/${ORACLE_SID}_check_monitor_log.mail
cat /tmp/${ORACLE_SID}_check_monitor_log.mail | mailx -m -s "on ${MACHINE}, MONITOR of Alert Log for ${ORACLE_SID} found errors" "
|emailaddresshere|"
mv ${ORIG_ALERT_LOG} ${TEMP_ALERT_LOG}
cat ${TEMP_ALERT_LOG} >> ${NEW_ALERT_LOG}
touch ${ORIG_ALERT_LOG}
rm /tmp/${ORACLE_SID}_monitor_temp.txt 2> /dev/null
rm /tmp/${ORACLE_SID}_check_monitor_log.log
rm /tmp/${ORACLE_SID}_check_monitor_log.mail
exit
fi;
rm /tmp/${ORACLE_SID}_check_monitor_log.log > /dev/null
rm /tmp/${ORACLE_SID}_monitor_location.txt > /dev/null
The referenced awk script (check_alert.awk). You can modify it as needed to add or remove things you wish to look for. The ERROR_AUDIT is a custom entry that a trigger on DB error writes in our environment.
$0 ~ /Errors in file/ {print $0}
$0 ~ /PMON: terminating instance due to error 600/ {print $0}
$0 ~ /Started recovery/{print $0}
$0 ~ /Archival required/{print $0}
$0 ~ /Instance terminated/ {print $0}
$0 ~ /Checkpoint not complete/ {print $0}
$1 ~ /ORA-/ { print $0; flag=1 }
$0 !~ /ORA-/ {if (flag==1){print $0; flag=0;print " "} }
$0 ~ /ERROR_AUDIT/ {print $0}
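Before wiring the filter into cron, it's easy to sanity-check it against a few canned lines (a throwaway harness using a cut-down copy of the rules above; the files are temp files, not the real alert log):

```shell
# Write a subset of the filter rules to a temp file and feed it sample lines.
awkfile=$(mktemp)
cat > "$awkfile" <<'EOF'
$0 ~ /Errors in file/ {print $0}
$1 ~ /ORA-/ { print $0; flag=1 }
$0 !~ /ORA-/ {if (flag==1){print $0; flag=0; print " "} }
EOF
sample=$(mktemp)
cat > "$sample" <<'EOF'
Completed: ALTER DATABASE OPEN
ORA-00600: internal error code
additional detail line
EOF
awk -f "$awkfile" "$sample"   # prints the ORA-00600 line plus its detail line
```

The "Completed:" noise line is dropped, while the ORA- line and its trailing context survive, which is exactly the behavior the cron job relies on.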
I simply put this script into cron to run every 5 minutes, passing the SID of the DB I want to monitor.
I have a PERL script that I wrote that does exactly what you want, and I'll be glad to share it with you along with the cron entries.
The script opens the current alert_log and searches for key phrases, sending e-mail if it finds anything. It then sleeps for 60 seconds, wakes up, reads from where it left off to the bottom of the file, searches again, and sleeps. The only downside is that it keeps a file handle open on the alert_log, so you have to kill the process if you want to rename or delete the alert_log.
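The open-file-handle downside can be avoided by remembering the byte offset between runs instead of holding the file open. A minimal shell sketch of that idea (my own helper, not the PERL script; restartable from cron, and the alert log can be rotated freely between runs):

```shell
# scan_new <logfile> <statefile>: print ORA- lines appended since the last
# run, tracking the previous end-of-file offset in <statefile>.
scan_new() {
  off=0; [ -f "$2" ] && off=$(cat "$2")
  wc -c < "$1" > "$2"               # remember where we got to
  tail -c +"$((off + 1))" "$1" | grep 'ORA-'
}
# demo on a throwaway file
log=$(mktemp); state="$log.off"
echo 'ORA-00600: first run' >> "$log"
scan_new "$log" "$state"          # prints the ORA-00600 line
echo 'Completed: noise' >> "$log"
echo 'ORA-01555: second run' >> "$log"
scan_new "$log" "$state"          # prints only the ORA-01555 line
```

If the log is rotated, the recorded offset will exceed the new file size and tail simply prints nothing; a production version would also reset the state file when that happens.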
My email in my profile is not hidden.
Tom -
Oracle logs -- give a hand to a MS SQL DBA ;-)
Hello,
I'm a MS SQL DBA, but now have to learn Oracle. There is only one log (the ErrorLog) in MS SQL; it stores security events (such as logon/logout), configuration changes, and error messages.
I was very confused when I searched for the same log(s) in Oracle. As far as I understand there are several logs in Oracle (security, alert, backup, performance...), and they're stored as tables. Is that right?
What kinds of logs does an Oracle DB server manage? Could you recommend documents (URLs desired) for reading?
Thank you for your time,
Vladimir.
Oracle has an alert log (alert_<<SID>>.log) in the directory specified by the initialization parameter BACKGROUND_DUMP_DEST. It tracks database-level errors, as well as database startup and shutdown and the non-default parameters used at startup. That is probably the log you're looking for.
If one of the background processes fails, a trace file is written to BACKGROUND_DUMP_DEST (<<SID>>_<<process name>>_<<number>>.trc). If a user process fails, or if you want to generate some detailed tracing information, you can have different sorts of trace files written to USER_DUMP_DEST.
You can also enable auditing and specify events that you want to track. The command
audit connect
for example, will log every time a user logs in or out. The audit trail can be set to either generate audit records in a database table or write them to a file. The table is generally easier to work with, but the log file may offer more protection against a DBA altering the audit trail.
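As a hedged illustration of the audit setup just described (AUDIT SESSION is the documented statement form for logon/logoff auditing, and DBA_AUDIT_TRAIL is the standard view over the database audit trail; this only produces rows once the AUDIT_TRAIL initialization parameter is enabled):

```sql
-- Requires the AUDIT_TRAIL initialization parameter to be set
-- (DB for table-based auditing, OS for file-based auditing).
AUDIT SESSION;

-- With AUDIT_TRAIL=DB, logons and logoffs then show up here:
SELECT username, action_name, timestamp
  FROM dba_audit_trail
 WHERE action_name IN ('LOGON', 'LOGOFF');
```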
Justin
Distributed Database Consulting, Inc.
http://www.ddbcinc.com/askDDBC -
Hi, I am using Oracle 10.2 on Windows 2003 Server in a test environment.
When I tried to migrate the database from the OS file system to ASM,
it gave me the following error:
Error
Examine and correct the following error(s), then retry the operation.
Remote Operation Error - ERROR: Wrong password for user
though I have provided the correct details.
thanks & regards
Hi!
I tried it,
but still the same problem.
Error
Migrate Database To ASM: ASM Instance
Database Migrate Host asm
Logged In As SYS
An ASM instance should exist on host asm and be managed as an Enterprise Manager target. If the ASM instance already exists, add it as an Enterprise Manager target by providing connection information and clicking the Continue button. Otherwise, please use DBCA to create an ASM instance on this host before adding it as an Enterprise Manager target.
Add ASM Instance As An EM Target
* Target Name :+ASM_asm
* Oracle Home :C:\oracle\product\10.2.0\db_1
* SYS Username :sys
* SYS Password :migrate
* Role :SYSDBA
* Port :1521
* SID :+ASM
Host Credentials
Enter the credentials of the user who owns the database Oracle server installation.
* Username :administrator
* Password :pipl?123
Save as Preferred Credential
The real error is:
Examine and correct the following error(s), then retry the operation.
Remote Operation Error - ERROR: Wrong password for user
thanks & regards -
Getting GIM-00105: Shared memory region is corrupted in imon log
On 10.2.0.3 Oracle Clusterware on Sun Solaris x86-64 (64-bit) with Enterprise Manager Grid Control version 10.2.0.4.0, we are getting the message below in the imon log:
2008-07-21 04:25:19.337: [ RACG][18] [364][18][ora.IMZAMBAS_STDBY.IMZAMBAS1.inst]: GIMH:
GIM-00105: Shared memory region is corrupted.
2008-07-21 04:26:20.436: [ RACG][18] [364][18][ora.IMZAMBAS_STDBY.IMZAMBAS1.inst]: GIMH:
GIM-00105: Shared memory region is corrupted.
There is no performance issue or service loss.
Kindly help clarify why we get this message.
Hello,
I have the same problem: on both nodes I get GIM-00105 in asm/log/node2/racg/imon.log.
Oracle 10.2.0.3 on Sun Solaris x86-64 with two nodes. It was not working, so I reinstalled everything from the operating system up.
node2:oracle$ less /opt/app/oracle/asm/log/node2/racg/imon.log
Oracle Database 10g CRS Release 10.2.0.1.0 Production Copyright 1996, 2005 Oracle. All rights reserved.
2009-11-05 10:34:17.568: [ RACG][1] [3156][1][ora.node2.ASM2.asm]: GIMH: GIM-00105: Shared memory region is corrupted.
2009-11-05 10:41:43.008: [ RACG][1] [17391][1][ora.node2.ASM2.asm]: GIMH: GIM-00105: Shared memory region is corrupted.
2009-11-05 10:51:38.149: [ RACG][1] [6341][1][ora.node2.ASM2.asm]: GIMH: GIM-00105: Shared memory region is corrupted.
2009-11-05 11:01:39.456: [ RACG][1] [25357][1][ora.node2.ASM2.asm]: GIMH: GIM-00105: Shared memory region is corrupted.
2009-11-05 11:11:40.768: [ RACG][1] [14460][1][ora.node2.ASM2.asm]: GIMH: GIM-00105: Shared memory region is corrupted. -
Who has a check alert log script?
Hi,
Can anyone provide me a good Linux script that will read my alert.log
file and report any ORA- errors through email daily?
thank you
I use this script to monitor my instances. I set up a cron job to run it every hour.
5 * * * * /home/oracle/bin/check_alert <SID> 2
It's ksh:
#!/bin/ksh
# PROGRAM check_alert.ksh
# FUNCTION Checks ORACLE Alert logs and pages in case of
# any new errors. SID is Oracle database identifier.
# CALLED BY cron
SID=$1 # Oracle database identifier
PAGEMESSAGES=$2 # Maximum number of new messages that get paged
PARAM=$#
TMP=/tmp # Temporary directory
MAILX=/bin/mailx # UNIX Mail Program
LIBDIR=/home/oracle/bin # Directory where useful information is saved
ALERTDIR=/home/oracle/admin/${SID}/bdump # Directory where
# Oracle alert file resides
FILE=alert_${SID}.log # Oracle alert file name
MAILDBA=<dba email> # DBA email address
PAGEDBA=<pagermail> # Page only DBA staff
PAGEOTHER=NULL # Do not page OTHER staff members
export PAGEMESSAGES
export PARAM
export TMP
export MAILX
export LIBDIR
export ALERTDIR
export FILE
export SID
export MAILDBA
export PAGEDBA
export PAGEOTHER
checkParameters()
{
  if [ $PARAM -ne 2 ]
  then
    echo "**USAGE** : $0 <SID> <Count>"
    exit 1
  fi
  # Seed the counter file on the first run so LASTCOUNT is never empty
  [ -f $LIBDIR/.oraErrCount_${SID} ] || echo 0 > $LIBDIR/.oraErrCount_${SID}
  LASTCOUNT=`cat $LIBDIR/.oraErrCount_${SID}` # Count of ORA- errors
                                              # detected during last program run
  export LASTCOUNT
}
sendAlertMessage()
{
  MESSAGE="**ALARM**:${SID}:`grep "ORA-" $ALERTDIR/$FILE | tail -${count} | head -1`"
  echo $MESSAGE | $MAILX ${PAGEDBA}
  echo $MESSAGE | $MAILX -s"`uname -n`:${SID}:ORACLE Trace file Alert" ${MAILDBA}
  echo "$0:$MESSAGE:`date`"   # log to stdout ($0 = script name)
}
probeAlertLog()
{
  #set -x
  # Count all Oracle errors - search for string "ORA-"
  CheckError=`grep "ORA-" $ALERTDIR/$FILE | wc -l`
  # keep a count of current errors present in the Alert file
  echo $CheckError > $LIBDIR/.oraErrCount_${SID}
  count=1
  # If new errors are detected (same alert log)
  if [ $CheckError -gt $LASTCOUNT ]
  then
    while [ $LASTCOUNT -lt $CheckError ]
    do
      sendAlertMessage;
      if [ $count -eq $PAGEMESSAGES ]
      then
        break;
      fi
      ((count=$count+1))
      ((LASTCOUNT=$LASTCOUNT+1))
    done
  else
    # Looks like alert log file has been switched!
    if [ $CheckError -lt $LASTCOUNT ]
    then
      while [ $count -le $CheckError ]
      do
        sendAlertMessage;
        if [ $count -eq $PAGEMESSAGES ]
        then
          break;
        fi
        ((count=$count+1))
      done
    fi
  fi
}
checkParameters;
probeAlertLog;
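The heart of the script is a count comparison: the current number of ORA- lines in the alert log versus the count saved on the previous run. That idea can be exercised in isolation (demo paths are hypothetical, not the script's real files):

```shell
#!/bin/sh
# Demo of the count-comparison idea: compare the current number of
# ORA- lines in a log to the count saved by the previous run.
LOG=/tmp/demo_alert.log
COUNTFILE=/tmp/demo_count

# Sample log with two ORA- errors.
printf 'ok\nORA-01555: snapshot too old\nok\nORA-00600: internal\n' > "$LOG"
echo 1 > "$COUNTFILE"     # pretend one error was already reported

last=$(cat "$COUNTFILE")
now=$(grep -c "ORA-" "$LOG")
if [ "$now" -gt "$last" ]; then
    new=$((now - last))
    echo "new errors: $new"
    # a real monitor would mail each of the last $new ORA- lines here
fi
echo "$now" > "$COUNTFILE"   # prints "new errors: 1" above
```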