No PST quorum in group 1 (VMware)

Hi, I set up a lab with two VMs (VMware Workstation) running RHEL5 and Oracle 10g to work with RAC. Now I'm having trouble configuring ASM: on one instance I can create the diskgroup, but on the other it does not work, and the alert log shows: ERROR: no PST quorum in group 1: required 1, found 0. I checked the permissions on both nodes and they are correct (oracle:dba). I already tried the reverse as well: I created the diskgroup on node 2 and it mounts there, but node 1 doesn't.
I'm using oracleasm, and when I run /etc/init.d/oracleasm listdisks it shows on both nodes:
ASMDISK1
ASMDISK2
ASMDISK3
Node 1:
SQL> select name, state from v$asm_diskgroup;
NAME       STATE
DG_DADOS   DISMOUNTED
Node 2:
SQL> select name, state from v$asm_diskgroup;
NAME       STATE
DG_DADOS   MOUNTED
Any ideas?
Thanks.

Hi,
Are you able to see the disks in v$asm_disk? What is the value of the asm_diskstring parameter on node 2? It may be that node 2 doesn't have the right spfile value for asm_diskstring, so it doesn't discover the disks and, as a result, the diskgroup cannot be mounted. Also, in a RAC environment the disk attributes must be set so the disks can be shared and accessed from multiple nodes; on AIX we set this using the chdev command. Please see my blog keyurmakwana.crs.blogspot.com, where I covered this permission setting for the voting disk.
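For example, a quick check on node 1 could look like the sketch below (disk and diskgroup names are taken from the post above; adjust paths to your environment):
SQL> show parameter asm_diskstring
SQL> select path, name, header_status, mount_status from v$asm_disk;
-- if the ASMDISK1/2/3 devices do not show up in v$asm_disk, rescan at the OS level as root:
# /etc/init.d/oracleasm scandisks
# /etc/init.d/oracleasm listdisks
-- and then retry the mount:
SQL> alter diskgroup DG_DADOS mount;
The chdev step only applies on AIX (something along the lines of "chdev -l hdiskN -a reserve_policy=no_reserve"); on VMware the equivalent concern is making sure the same virtual disk is really presented to, and writable by, both VMs.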
Thanks,
Keyur

Similar Messages

  • Prevent OL2010 PST creation through group policy; need menu number or policy ID

    Greetings all and thanks in advance,
    In a similar fashion as found here:  http://social.technet.microsoft.com/Forums/pl-PL/outlook/thread/76081525-71d8-459e-a0cf-39d80a4c6cc7,  I need to block the creation of Outlook Data files through this path:  File > Account Settings >
    Data Files (tab) > ADD
    What is the menu number or policy ID of this item so I use it to disable from Group Policy?
    Thanks,
    Willis

    I cannot find the specified command bar ID in the following link:
    http://support.microsoft.com/kb/173604
However, we may prevent users from adding PSTs to Outlook profiles via GPO. The location is "gpedit.msc | User Configuration | Administrative Templates | Microsoft Outlook 2010 | Miscellaneous | PST Settings | Prevent users from adding PSTs to Outlook profiles...".
    Thanks.
    Tony Chen
    TechNet Community Support

  • Installation of Oracle Database 11g R2 Grid is failing

    Hi Guys,
I've been trying to install Oracle Grid Infrastructure 11g R2 (11.2.0.2) for a long time, but failing every time. This is what I've tried so far:
    Things being used:
    OS: Oracle Enterprise Linux 5.2
    VM Software: VMware Workstation 7.1 and VMware Server 2
    Oracle SW: Oracle Grid Infrastructure 11g R2 - 11.2.0.2
    VMs: Three RAC nodes - rac1, rac2, rac3
I've tried to follow the link given below, on both VMware Server and VMware Workstation; all goes well until it asks us to run two scripts as root.
    http://www.oracle-base.com/articles/11g/OracleDB11gR2RACInstallationOnOEL5UsingVMwareServer2.php
The script "/u01/app/grid/11.2.0/root.sh" completes successfully on the first node, rac1; however, it fails on the rest of them with the following error:
    CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node rac1, number 1, and is terminating
    An active cluster was found during exclusive startup, restarting to join the cluster
    Start of resource "ora.crsd" failed
    CRS-2800: Cannot start resource 'ora.asm' as it is already in the INTERMEDIATE state on server 'rac2'
    CRS-4000: Command Start failed, or completed with errors.
    Failed to start Oracle Clusterware stack
    Failed to start Cluster Ready Services at /u01/app/grid/11.2.0/crs/install/crsconfig_lib.pm line 1065.
    /u01/app/grid/11.2.0/perl/bin/perl -I/u01/app/grid/11.2.0/perl/lib -I/u01/app/grid/11.2.0/crs/install /u01/app/grid/11.2.0/crs/install/rootcrs.pl execution failed
Instead of using shared "independent-persistent" disks, I've also used Openfiler 2.99 to allocate storage by following the link given below. With that, the issue was that the iSCSI disks were not persistent, and no solution worked to fix that.
    http://www.jobacle.nl/?p=957
I also tried to use OCFS2 for the OCR and voting disks; the installation completes successfully, however the ASM instance, the registry, and a few other services don't start. When I tried to reboot all the machines, not a single Oracle service came up. The CRS just died because of the OCR and voting disks.
    I've tried so many things but nothing works.
    My last installation was as follows:
    ==============================
    VM: rac1, rac2, rac3
    Config of each VM:
    Memory: 4GB
    Processors: 4
    HDD (SCSI): 25GB
    /dev/sda1 > /boot (100M)
    /dev/sda2 > swap (8G)
/dev/sda3 > / (17G)
HDD2 (SCSI): 60GB (Persistent)
    /dev/sdb1 > ocrvot1 (2G)
    /dev/sdb2 > ocrvot2 (2G)
    /dev/sdb3 > ocrvot3 (2G)
    /dev/sdb4 > data1 (10G)
    /dev/sdb5 > data2 (10G)
    /dev/sdb6 > data3 (10G)
    /dev/sdb7 > fast1 (10G)
/dev/sdb8 > fast2 (10G)
HDD3 (SCSI): 60GB (Persistent)
    /dev/sdc1 > ocrvot4 (2G)
    /dev/sdc2 > ocrvot5 (2G)
    /dev/sdc3 > ocrvot6 (2G)
    /dev/sdc4 > data4 (10G)
    /dev/sdc5 > data5 (10G)
    /dev/sdc6 > fast3 (10G)
    /dev/sdc7 > fast4 (10G)
/dev/sdc8 > fast5 (10G)
NIC 1: Bridge
    NIC 2: Bridge
    Pre-requisites completed.
    ASM Disks labeled.
    runcluvfy succeeds for all nodes
All nodes can see and write to the ASM disks.
During the installation of GI 11.2.0.2, the prerequisites test returned one error:
    PRVF-5449 : Check of Voting Disk location "ORCL:(ORCL:)" failed
However, according to Metalink note ID 1267569.1, it's a bug and it's going to be fixed in 11.2.0.3. The note gives a workaround, which is manual testing of the ASM disks from all the nodes; as long as that succeeds we can ignore the error. Therefore, I just ignored the error and carried on. When prompted, I executed the first script, "/u01/app/oraInventory/orainstRoot.sh", on all the nodes successfully. The second script, "/u01/app/grid/11.2.0/root.sh", completes successfully on rac1 but fails on the rest of the nodes.
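The manual check from that note essentially amounts to verifying that every node can read the same ASM disk headers. A rough sketch, run on each of rac1, rac2 and rac3 (the device paths are illustrative; ASMLib normally exposes the labelled disks under /dev/oracleasm/disks, and $GRID_HOME here stands for the Grid Infrastructure home):
$ for d in /dev/oracleasm/disks/OCRVOT*; do
>   echo "== $d =="
>   $GRID_HOME/bin/kfed read $d | grep -E 'dskname|grpname|hdrsts'
> done
Every node should report the same kfdhdb.dskname and kfdhdb.grpname values and a header status of KFDHDR_MEMBER; a node that cannot read the headers at all is the one worth investigating.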
    On rac2 the alertrac2.log contains the following errors:
CRS-5019:All OCR locations are on ASM disk groups [OCRVOT], and none of these disk groups are mounted. Details are at "(:CLSN00100:)" in "/u01/app/grid/11.2.0/log/rac2/agent/ohasd/oraagent_oracle/oraagent_oracle.log".
And oraagent_oracle.log contains the following errors:
2011-07-15 22:53:47.748: [ora.asm][1090414912] {0:0:215} [check] checkCrsStat 2 CLSCRS_STAT ret: 184
2011-07-15 22:53:47.748: [ora.asm][1090414912] {0:0:215} [check] clsnUtils::error Exception type=2 string=
    2011-07-15 22:53:47.748: [ora.asm][1090414912] {0:0:215} [check] AsmAgent::checkCbk: Exception UserErrorException
    2011-07-15 22:53:47.748: [ora.asm][1090414912] {0:0:215} [check]
    2011-07-15 22:53:47.748: [ora.asm][1090414912] {0:0:215} [check] InstAgent::check checkCounter 0 prev clsagfw_status 1 current clsagfw_status 4
    2011-07-15 22:53:48.599: [    AGFW][1117755712] {0:0:166} Agent received the message: AGENT_HB[Engine] ID 12293:1089
    2011-07-15 22:53:48.755: [ COMMCRS][1128261952]clsc_connect: (0x10c5d5b0) no listener at (ADDRESS=(PROTOCOL=IPC)(KEY=CRSD_UI_SOCKET))
    On rac1:
    ========
    [root@rac1 bin]# ./crsctl status res -t
    NAME TARGET STATE SERVER STATE_DETAILS
    Local Resources
    ora.OCRVOT.dg
    ONLINE ONLINE rac1
    ora.asm
    ONLINE ONLINE rac1 Started
    ora.gsd
    OFFLINE OFFLINE rac1
    ora.net1.network
    ONLINE ONLINE rac1
    ora.ons
    ONLINE ONLINE rac1
    ora.registry.acfs
    ONLINE ONLINE rac1
    Cluster Resources
    ora.LISTENER_SCAN1.lsnr
    1 ONLINE ONLINE rac1
    ora.cvu
    1 ONLINE ONLINE rac1
    ora.oc4j
    1 ONLINE ONLINE rac1
    ora.rac1.vip
    1 ONLINE ONLINE rac1
    ora.scan1.vip
    1 ONLINE ONLINE rac1
    On rac2
    ========
    The alert+ASM2.log has the following errors:
    SQL> ALTER DISKGROUP ALL MOUNT /* asm agent call crs *//* {0:0:215} */
    NOTE: Diskgroup used for Voting files is:
    OCRVOT
    Diskgroup used for OCR is:OCRVOT
    NOTE: cache registered group OCRVOT number=1 incarn=0xee7b8cf5
    NOTE: cache began mount (not first) of group OCRVOT number=1 incarn=0xee7b8cf5
    NOTE: Loaded library: /opt/oracle/extapi/64/asm/orcl/1/libasm.so
    NOTE: Assigning number (1,0) to disk (ORCL:OCRVOT1)
    NOTE: Assigning number (1,1) to disk (ORCL:OCRVOT2)
    NOTE: Assigning number (1,2) to disk (ORCL:OCRVOT3)
    NOTE: Assigning number (1,3) to disk (ORCL:OCRVOT4)
    NOTE: Assigning number (1,4) to disk (ORCL:OCRVOT5)
    NOTE: Assigning number (1,5) to disk (ORCL:OCRVOT6)
    ERROR: no PST quorum in group: required 3, found 0
    NOTE: cache dismounting (clean) group 1/0xEE7B8CF5 (OCRVOT)
    NOTE: dbwr not being msg'd to dismount
    NOTE: lgwr not being msg'd to dismount
    NOTE: cache dismounted group 1/0xEE7B8CF5 (OCRVOT)
    NOTE: cache ending mount (fail) of group OCRVOT number=1 incarn=0xee7b8cf5
    NOTE: cache deleting context for group OCRVOT 1/0xee7b8cf5
    GMON dismounting group 1 at 2 for pid 23, osid 19831
    NOTE: Disk in mode 0x8 marked for de-assignment
    NOTE: Disk in mode 0x8 marked for de-assignment
    NOTE: Disk in mode 0x8 marked for de-assignment
    NOTE: Disk in mode 0x8 marked for de-assignment
    NOTE: Disk in mode 0x8 marked for de-assignment
    NOTE: Disk in mode 0x8 marked for de-assignment
    ERROR: diskgroup OCRVOT was not mounted
    WARNING: Disk Group OCRVOT containing configured OCR is not mounted
    WARNING: Disk Group OCRVOT containing voting files is not mounted
    ORA-15032: not all alterations performed
    ORA-15017: diskgroup "OCRVOT" cannot be mounted
    ORA-15063: ASM discovered an insufficient number of disks for diskgroup "OCRVOT"
    ERROR: ALTER DISKGROUP ALL MOUNT /* asm agent call crs *//* {0:0:215} */
    SQL> ALTER DISKGROUP ALL ENABLE VOLUME ALL /* asm agent *//* {0:0:215} */
    Any help would be appreciated. Thanks.
I still miss those days when Oracle used to release easy-to-use and easy-to-install software, but now that's not the case. Grid Infrastructure, Grid Control, and other similar technologies have become so complicated and difficult for us to install on VMs and play with.
    Kashif.
    Updates:
The Metalink doc, Oracle Grid Infrastructure 11.2.0.2 Installation or Upgrade may fail due to Multicasting Requirement [ID 1212703.1], shows errors similar to the ones I have in my log files. Therefore, I thought multicast could be the culprit. Nonetheless:
    [oracle@rac3 mcasttest]$ ./mcasttest.pl -n rac1,rac2,rac3 -i eth0,eth1
    ########### Setup for node rac1 ##########
    Checking node access 'rac1'
    Checking node login 'rac1'
    Checking/Creating Directory /tmp/mcasttest for binary on node 'rac1'
    Distributing mcast2 binary to node 'rac1'
    ########### Setup for node rac2 ##########
    Checking node access 'rac2'
    Checking node login 'rac2'
    Checking/Creating Directory /tmp/mcasttest for binary on node 'rac2'
    Distributing mcast2 binary to node 'rac2'
    ########### Setup for node rac3 ##########
    Checking node access 'rac3'
    Checking node login 'rac3'
    Checking/Creating Directory /tmp/mcasttest for binary on node 'rac3'
    Distributing mcast2 binary to node 'rac3'
    ########### testing Multicast on all nodes ##########
    Test for Multicast address 230.0.1.0
    Jul 15 23:57:30 | Multicast Succeeded for eth0 using address 230.0.1.0:42000
    Jul 15 23:57:31 | Multicast Succeeded for eth1 using address 230.0.1.0:42001
    Test for Multicast address 224.0.0.251
    Jul 15 23:57:32 | Multicast Succeeded for eth0 using address 224.0.0.251:42002
    Jul 15 23:57:33 | Multicast Succeeded for eth1 using address 224.0.0.251:42003
    Hence, the issue is not multicast related.
    Kashif.

    Kashif Khan wrote:
    THE ISSUE HAS BEEN RESOLVED!
With the same configuration the installation completed successfully on Oracle's VirtualBox. I had a lot of issues while installing Grid Control 11g on VMware, but that got installed successfully as well on VirtualBox.
    Hence proved, VMware is no good for Oracle virtualization any more.
    Hi,
Good that you have it resolved. Now could you please mark the thread as answered to avoid confusion.
    Cheers

  • ASM instances on 2 node Oracle RAC 10g r2  on Red Hat 4 u1

    Hi all
I'm experiencing a problem configuring diskgroups under the +ASM instances on a two-node Oracle RAC.
I followed the official guide and also official documents from the Metalink site, but I'm stuck on the visibility of the ASM disks.
I created fake disks on NFS with NetApp certified storage, binding them to block devices with the usual trick "losetup /dev/loopX /nfs/disk1",
then ran "oracleasm createdisk DISKX /dev/loopX" on one node and
"oracleasm scandisks" on the other one.
    With "oracleasm listdisks" i can see the disks at OS level in both nodes , but , when i try to create and mount diskgroup in the ASM instances , on the instance on which i create the diskgroup all is well, but the other one doesn't see the disks at all, and diskgroup mount fails with :
    ERROR: no PST quorum in group 1: required 2, found 0
    Tue Sep 20 16:22:32 2005
    NOTE: cache dismounting group 1/0x6F88595E (DG1)
    NOTE: dbwr not being msg'd to dismount
    ERROR: diskgroup DG1 was not mounted
    any help would be appreciated
    thanks a lot.
    Antonello

    I'm having this same problem. Did you ever find a solution?

  • ORA-15001: diskgroup "FRA" does not exist or is not mounted

    Dear Experts,
We noticed the error "ORA-15001: diskgroup "FRA" does not exist or is not mounted" on the 2nd node of our four-node RAC database system.
During this weekend we moved our system to a new data center. After the move was done, we were able to bring up the four instances around 3:00 AM. However, four hours later (7:00 AM), OEM Grid Control alerted us with an "archival error" on the 2nd instance. From our investigation, we saw that the FRA was dismounted. Below is the Oracle alert log file (for security reasons, the SID is modified):
    Sat Feb  7 03:46:03 2009
    Completed: ALTER DATABASE OPEN
    Sat Feb  7 03:50:18 2009
    Sat Feb  7 07:01:36 2009
    Thread 2 advanced to log sequence 8534
      Current log# 3 seq# 8534 mem# 0: +REDOGRPA/xyz/onlinelog/redo_03.log
      Current log# 3 seq# 8534 mem# 1: +REDOGRPB/xyz/onlinelog/redo_03b.log
    Sat Feb  7 07:01:39 2009
    ARCH: Archival stopped, error occurred. Will continue retrying
    Sat Feb  7 07:01:39 2009
    ORACLE Instance XYZ2 - Archival Error
    Sat Feb  7 07:01:39 2009
    ORA-16038: log 11 sequence# 8533 cannot be archived
    ORA-00254: error in archive control string ''
    ORA-00312: online log 11 thread 2: '+REDOGRPA/xyz/onlinelog/redo_11.log'
    ORA-00312: online log 11 thread 2: '+REDOGRPB/xyz/onlinelog/redo_11b.log'
    ORA-15001: diskgroup "FRA" does not exist or is not mounted
ORA-15001: diskgroup "FRA" does not exist or is not mounted
It is my understanding that Oracle will automatically mount the diskgroups on the ASM instance when the instances are started. In our case, all the other diskgroups mounted OK but not the FRA diskgroup. Is this a bug with FRA where it cannot be automatically mounted after recycling the instance? Did anyone experience the same situation? Can someone advise?
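One quick thing to check on the second node, since ASM only auto-mounts the diskgroups listed in its ASM_DISKGROUPS parameter at startup, is whether FRA is present (and spelled exactly right) in that list; the +ASM2 alert log further down registers the group as FRA' with a trailing quote, which may just be a transcription artifact but is worth ruling out. A minimal check could be:
SQL> -- on the +ASM2 instance
SQL> show parameter asm_diskgroups
SQL> select name, state from v$asm_diskgroup;
SQL> alter diskgroup FRA mount;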
    Thanks!

    Hello,
We are running Oracle 10.2.0.2.
Which section of the alert log file for +ASM2 do we need here? I will paste the part from when "ALTER DISKGROUP ALL MOUNT" started at 3:00 AM, all the way up to 4:00 AM on Saturday, Feb 7:
    SQL> ALTER DISKGROUP ALL MOUNT
    Sat Feb  7 03:44:15 2009
    NOTE: cache registered group FRA' number=1 incarn=0x1e5bf0d2
    NOTE: cache registered group DATAX number=2 incarn=0xbcabf0d1
    NOTE: cache registered group REDO2 number=4 incarn=0x1e5bf0d3
    NOTE: cache registered group REDOGRPA number=5 incarn=0xbcbbf0d4
    NOTE: cache registered group REDOGRPB number=6 incarn=0xbcbbf0d5
    NOTE: cache registered group TEMPX number=7 incarn=0xbcbbf0d6
    NOTE: cache registered group TEMP number=8 incarn=0x1e6bf0d7
    NOTE: cache registered group TIBDATA number=9 incarn=0xbcbbf0d8
    NOTE: cache registered group TIBREDO number=10 incarn=0xbcbbf0d9
    NOTE: cache registered group TIBTEMP number=11 incarn=0x1ebbf0da
    Sat Feb  7 03:44:15 2009
    ERROR: no PST quorum in group 1: required 2, found 0
    Sat Feb  7 03:44:15 2009
    NOTE: cache dismounting group 1/0x1E5BF0D2 (FRA')
    NOTE: dbwr not being msg'd to dismount
    ERROR: diskgroup FRA' was not mounted
    Sat Feb  7 03:44:15 2009
    NOTE: Hbeat: instance not first (grp 2)
    ERROR: no PST quorum in group 4: required 2, found 0
    Sat Feb  7 03:44:15 2009
    NOTE: cache dismounting group 4/0x1E5BF0D3 (REDO2)
    NOTE: dbwr not being msg'd to dismount
    ERROR: diskgroup REDO2 was not mounted
    Sat Feb  7 03:44:15 2009
    NOTE: Hbeat: instance not first (grp 5)
    Sat Feb  7 03:44:15 2009
    NOTE: Hbeat: instance not first (grp 6)
    Sat Feb  7 03:44:15 2009
    NOTE: Hbeat: instance not first (grp 7)
    ERROR: no PST quorum in group 8: required 2, found 0
    Sat Feb  7 03:44:15 2009
    NOTE: cache dismounting group 8/0x1E6BF0D7 (TEMP)
    NOTE: dbwr not being msg'd to dismount
    ERROR: diskgroup TEMP was not mounted
    Sat Feb  7 03:44:15 2009
    NOTE: Hbeat: instance not first (grp 9)
    Sat Feb  7 03:44:15 2009
    NOTE: Hbeat: instance not first (grp 10)
    ERROR: no PST quorum in group 11: required 2, found 0
    Sat Feb  7 03:44:15 2009
    NOTE: cache dismounting group 11/0x1EBBF0DA (TIBTEMP)
    NOTE: dbwr not being msg'd to dismount
    ERROR: diskgroup TIBTEMP was not mounted
    NOTE: cache opening disk 0 of grp 2: DATAX_0000 path:/dev/raw/raw61
    NOTE: F1X0 found on disk 0 fcn 0.0
    NOTE: cache mounting (not first) group 2/0xBCABF0D1 (DATAX)
    Sat Feb  7 03:44:16 2009
    kjbdomatt send to node 0
    kjbdomatt send to node 2
    kjbdomatt send to node 3
    Sat Feb  7 03:44:16 2009
    NOTE: attached to recovery domain 2
    Sat Feb  7 03:44:16 2009
    NOTE: opening chunk 4 at fcn 0.1158090 ABA
    NOTE: seq=54 blk=4821
    Sat Feb  7 03:44:16 2009
    NOTE: cache mounting group 2/0xBCABF0D1 (DATAX) succeeded
    SUCCESS: diskgroup DATAX was mounted
    NOTE: cache opening disk 0 of grp 5: REDOGRPA_0000 path:/dev/raw/raw20
    NOTE: F1X0 found on disk 0 fcn 0.0
    NOTE: cache mounting (not first) group 5/0xBCBBF0D4 (REDOGRPA)
    Sat Feb  7 03:44:16 2009
    kjbdomatt send to node 0
    kjbdomatt send to node 2
    kjbdomatt send to node 3
    Sat Feb  7 03:44:16 2009
    NOTE: recovering COD for group 2/0xbcabf0d1 (DATAX)
    SUCCESS: completed COD recovery for group 2/0xbcabf0d1 (DATAX)
    Sat Feb  7 03:44:16 2009
    NOTE: attached to recovery domain 5
    Sat Feb  7 03:44:16 2009
    NOTE: opening chunk 4 at fcn 0.103802 ABA
    NOTE: seq=43 blk=2123
    Sat Feb  7 03:44:16 2009
    NOTE: cache mounting group 5/0xBCBBF0D4 (REDOGRPA) succeeded
    SUCCESS: diskgroup REDOGRPA was mounted
    NOTE: cache opening disk 0 of grp 6: REDOGRPB_0000 path:/dev/raw/raw12
    NOTE: F1X0 found on disk 0 fcn 0.0
    NOTE: cache mounting (not first) group 6/0xBCBBF0D5 (REDOGRPB)
    Sat Feb  7 03:44:16 2009
    kjbdomatt send to node 0
    kjbdomatt send to node 2
    kjbdomatt send to node 3
    Sat Feb  7 03:44:17 2009
    NOTE: attached to recovery domain 6
    Sat Feb  7 03:44:17 2009
    NOTE: opening chunk 4 at fcn 0.75866 ABA
    NOTE: seq=20 blk=1704
    Sat Feb  7 03:44:17 2009
    NOTE: cache mounting group 6/0xBCBBF0D5 (REDOGRPB) succeeded
    SUCCESS: diskgroup REDOGRPB was mounted
    NOTE: cache opening disk 0 of grp 7: TEMPX_0000 path:/dev/raw/raw71
    NOTE: F1X0 found on disk 0 fcn 0.25328
    NOTE: cache opening disk 1 of grp 7: TEMPX_0001 path:/dev/raw/raw7
    NOTE: cache mounting (not first) group 7/0xBCBBF0D6 (TEMPX)
    Sat Feb  7 03:44:17 2009
    kjbdomatt send to node 0
    kjbdomatt send to node 2
    kjbdomatt send to node 3
    Sat Feb  7 03:44:17 2009
    NOTE: attached to recovery domain 7
    Sat Feb  7 03:44:17 2009
    NOTE: opening chunk 4 at fcn 0.69473 ABA
    NOTE: seq=42 blk=432
    Sat Feb  7 03:44:17 2009
    NOTE: cache mounting group 7/0xBCBBF0D6 (TEMPX) succeeded
    SUCCESS: diskgroup TEMPX was mounted
    NOTE: cache opening disk 0 of grp 9: TIBDATA path:/dev/raw/raw86
    NOTE: F1X0 found on disk 0 fcn 0.0
    NOTE: cache mounting (not first) group 9/0xBCBBF0D8 (TIBDATA)
    Sat Feb  7 03:44:17 2009
    kjbdomatt send to node 0
    kjbdomatt send to node 2
    kjbdomatt send to node 3
    Sat Feb  7 03:44:17 2009
    NOTE: attached to recovery domain 9
    Sat Feb  7 03:44:17 2009
    NOTE: opening chunk 4 at fcn 0.102912 ABA
    NOTE: seq=12 blk=2050
    Sat Feb  7 03:44:17 2009
    NOTE: cache mounting group 9/0xBCBBF0D8 (TIBDATA) succeeded
    SUCCESS: diskgroup TIBDATA was mounted
    NOTE: cache opening disk 0 of grp 10: TIBREDO path:/dev/raw/raw87
    NOTE: F1X0 found on disk 0 fcn 0.0
    NOTE: cache mounting (not first) group 10/0xBCBBF0D9 (TIBREDO)
    Sat Feb  7 03:44:18 2009
    kjbdomatt send to node 0
    kjbdomatt send to node 2
    kjbdomatt send to node 3
    Sat Feb  7 03:44:18 2009
    NOTE: attached to recovery domain 10
    Sat Feb  7 03:44:18 2009
    NOTE: opening chunk 4 at fcn 0.18970 ABA
    NOTE: seq=12 blk=6577
    Sat Feb  7 03:44:18 2009
    NOTE: cache mounting group 10/0xBCBBF0D9 (TIBREDO) succeeded
    SUCCESS: diskgroup TIBREDO was mounted
    Sat Feb  7 03:44:19 2009
    NOTE: recovering COD for group 5/0xbcbbf0d4 (REDOGRPA)
    SUCCESS: completed COD recovery for group 5/0xbcbbf0d4 (REDOGRPA)
    NOTE: recovering COD for group 6/0xbcbbf0d5 (REDOGRPB)
    SUCCESS: completed COD recovery for group 6/0xbcbbf0d5 (REDOGRPB)
    NOTE: recovering COD for group 7/0xbcbbf0d6 (TEMPX)
    SUCCESS: completed COD recovery for group 7/0xbcbbf0d6 (TEMPX)
    NOTE: recovering COD for group 9/0xbcbbf0d8 (TIBDATA)
    SUCCESS: completed COD recovery for group 9/0xbcbbf0d8 (TIBDATA)
    NOTE: recovering COD for group 10/0xbcbbf0d9 (TIBREDO)
    SUCCESS: completed COD recovery for group 10/0xbcbbf0d9 (TIBREDO)
    Sat Feb  7 03:45:18 2009
    Starting background process ASMB
    ASMB started with pid=17, OS id=16122
    Sat Feb  7 03:49:35 2009
    NOTE: ASMB process exiting due to lack of ASM file activity
    Starting background process ASMB
    ASMB started with pid=18, OS id=25963
    Sat Feb  7 03:54:34 2009
    NOTE: ASMB process exiting due to lack of ASM file activity
    Sat Feb  7 03:54:50 2009
    Starting background process ASMB
    ASMB started with pid=18, OS id=913
    Sat Feb  7 03:57:55 2009
    NOTE: ASMB process exiting due to lack of ASM file activity
    Sat Feb  7 04:07:26 2009
    Starting background process ASMB
    ASMB started with pid=21, OS id=23012
    Sat Feb  7 04:13:59 2009
    NOTE: ASMB process exiting due to lack of ASM file activity
    Starting background process ASMB
    ASMB started with pid=22, OS id=31320
    Sat Feb  7 04:36:43 2009
    NOTE: ASMB process exiting due to lack of ASM file activity

  • ORA-15100: invalid or missing diskgroup name in 11g ASM

We have an 11g R2 database and 11g ASM installed on a Linux server. It was working till this morning; due to some maintenance we rebooted the server, and now when I try to bring up the ASM instance it fails with the error below.
    [oracle@adg dbs]$ sqlplus / as sysasm
    SQL*Plus: Release 11.2.0.1.0 Production on Tue Aug 16 16:15:57 2011
    Copyright (c) 1982, 2009, Oracle. All rights reserved.
    Connected to:
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
    With the Automatic Storage Management option
    SQL> startup force pfile='/u01/app/11.2.0/grid/dbs/init+ASM.ora';
    ASM instance started
    Total System Global Area 284565504 bytes
    Fixed Size 1336036 bytes
    Variable Size 258063644 bytes
    ASM Cache 25165824 bytes
    ORA-15110: no diskgroups mounted
    SQL> show parameter string
    NAME TYPE VALUE
    asm_diskstring string DATA, DATA1
    SQL> shut immediate
    ORA-15100: invalid or missing diskgroup name
    ASM instance shutdown
    SQL> exit
    Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
    With the Automatic Storage Management option
    [oracle@adg dbs]$ vi init+ASM.ora
    [oracle@adg dbs]$ /etc/init.d/oracleasm listdisks
    DISK1
    DISK2
[root@adg ~]# /etc/init.d/oracleasm scandisks
    Scanning the system for Oracle ASMLib disks: [  OK  ]
    ERROR: diskgroup RECOVERY_AREA was not mounted
    NOTE: cache deleting context for group RECOVERY_AREA 2/625504078
    ORA-15032: not all alterations performed
    ORA-15017: diskgroup "RECOVERY_AREA" cannot be mounted
    ORA-15063: ASM discovered an insufficient number of disks for diskgroup "RECOVERY_AREA"
    ERROR: ALTER DISKGROUP RECOVERY_AREA MOUNT /* asm agent */
    Errors in file /u01/app/oracle/diag/asm/+asm/+ASM/trace/+ASM_rbal_6520.trc ; (incident=8105):
    ORA-00600: internal error code, arguments: [kfrcGetNRB05], [1], [340], [], [], [], [], [], [], [], [], []
    Incident details in: /u01/app/oracle/diag/asm/+asm/+ASM/incident/incdir_8105/+ASM_rbal_6520_i8105.trc
    ERROR: ORA-600 in COD recovery for diskgroup 1/0xfe086f4c (DATA)
    ERROR: ORA-600 thrown in RBAL for group number 1
    Errors in file /u01/app/oracle/diag/asm/+asm/+ASM/trace/+ASM_rbal_6520.trc:
    ORA-00600: internal error code, arguments: [kfrcGetNRB05], [1], [340], [], [], [], [], [], [], [], [], []
    Errors in file /u01/app/oracle/diag/asm/+asm/+ASM/trace/+ASM_rbal_6520.trc:
    ORA-00600: internal error code, arguments: [kfrcGetNRB05], [1], [340], [], [], [], [], [], [], [], [], []
    RBAL (ospid: 6520): terminating the instance due to error 488
    Tue Aug 16 15:47:03 2011
    ORA-1092 : opitsk aborting process
    Tue Aug 16 15:47:04 2011
    Instance terminated by RBAL, pid = 6520
    Tue Aug 16 15:51:08 2011
    Starting ORACLE instance (normal)
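One thing that stands out above is asm_diskstring being reported as "DATA, DATA1", which look like disk group names rather than a discovery path, while the pfile used later shows asm_diskstring = "". With ASMLib-labelled disks (DISK1, DISK2) an empty discovery string normally still finds the ORCL: devices, so this is something to rule out rather than a certain fix; a hedged sketch:
SQL> -- connected as sysasm to the ASM instance
SQL> alter system set asm_diskstring = 'ORCL:*' scope=memory;
SQL> select path, header_status, mount_status from v$asm_disk;
SQL> alter diskgroup RECOVERY_AREA mount;
If every expected disk shows up with HEADER_STATUS = MEMBER and the mount still fails with ORA-15063, then a disk that used to belong to RECOVERY_AREA is simply not visible to the server any more, and the ORA-600 during COD recovery is in any case something for Oracle Support.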

I already tried this:
SQL> select disk_number, name, label, path, mount_status, header_status, state from v$asm_disk;
DISK_NUMBER NAME       LABEL      PATH         MOUNT_S HEADER_STATU STATE
          1 DISK2                 ORCL:DISK2   CLOSED  MEMBER       NORMAL
          0 DISK1                 ORCL:DISK1   CLOSED  MEMBER       NORMAL
    SQL> alter diskgroup recovery_area mount;
    alter diskgroup recovery_area mount
    ERROR at line 1:
    ORA-15032: not all alterations performed
    ORA-15017: diskgroup "RECOVERY_AREA" cannot be mounted
    ORA-15063: ASM discovered an insufficient number of disks for diskgroup
    "RECOVERY_AREA"
    SQL>
    here is alert log information
    ==================
    Starting ORACLE instance (normal)
    LICENSE_MAX_SESSION = 0
    LICENSE_SESSIONS_WARNING = 0
    Picked latch-free SCN scheme 2
    Using LOG_ARCHIVE_DEST_1 parameter default value as /u01/app/11.2.0/grid/dbs/arch
    Autotune of undo retention is turned on.
    IMODE=BR
    ILAT =0
    LICENSE_MAX_USERS = 0
    SYS auditing is disabled
    Starting up:
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
    With the Automatic Storage Management option.
    Using parameter settings in client-side pfile /u01/app/11.2.0/grid/dbs/init+ASM.ora on machine adg.xxxx.com
    System parameters with non-default values:
    large_pool_size = 12M
    instance_type = "asm"
    remote_login_passwordfile= "EXCLUSIVE"
    asm_diskstring = ""
    asm_power_limit = 1
    diagnostic_dest = "/u01/app/oracle"
    Tue Aug 16 16:32:26 2011
    PMON started with pid=2, OS id=11558
    Tue Aug 16 16:32:26 2011
    VKTM started with pid=3, OS id=11560 at elevated priority
    VKTM running at (10)millisec precision with DBRM quantum (100)ms
    Tue Aug 16 16:32:26 2011
    GEN0 started with pid=4, OS id=11564
    Tue Aug 16 16:32:26 2011
    DIAG started with pid=5, OS id=11566
    Tue Aug 16 16:32:26 2011
    PSP0 started with pid=6, OS id=11568
    Tue Aug 16 16:32:26 2011
    DIA0 started with pid=7, OS id=11570
    Tue Aug 16 16:32:27 2011
    MMAN started with pid=8, OS id=11572
    Tue Aug 16 16:32:27 2011
    DBW0 started with pid=9, OS id=11574
    Tue Aug 16 16:32:27 2011
    LGWR started with pid=10, OS id=11576
    Tue Aug 16 16:32:27 2011
    CKPT started with pid=11, OS id=11578
    Tue Aug 16 16:32:27 2011
    SMON started with pid=12, OS id=11580
    Tue Aug 16 16:32:27 2011
    RBAL started with pid=13, OS id=11582
    Tue Aug 16 16:32:27 2011
    GMON started with pid=14, OS id=11584
    Tue Aug 16 16:32:27 2011
    MMON started with pid=15, OS id=11586
    Tue Aug 16 16:32:27 2011
    MMNL started with pid=16, OS id=11588
    ORACLE_BASE from environment = /u01/app/oracle
    Tue Aug 16 16:32:27 2011
    SQL> ALTER DISKGROUP ALL MOUNT
    Tue Aug 16 16:34:23 2011
    SQL> alter diskgroup recovery_area mount
    NOTE: cache registered group RECOVERY_AREA number=1 incarn=0x100b432b
    NOTE: cache began mount (first) of group RECOVERY_AREA number=1 incarn=0x100b432b
    Tue Aug 16 16:34:24 2011
    NOTE: Loaded library: /opt/oracle/extapi/32/asm/orcl/1/libasm.so
    Tue Aug 16 16:34:24 2011
    ERROR: no PST quorum in group: required 2, found 0
    NOTE: cache dismounting (clean) group 1/0x100B432B (RECOVERY_AREA)
    NOTE: dbwr not being msg'd to dismount
    NOTE: lgwr not being msg'd to dismount
    NOTE: cache dismounted group 1/0x100B432B (RECOVERY_AREA)
    NOTE: cache ending mount (fail) of group RECOVERY_AREA number=1 incarn=0x100b432b
    kfdp_dismount(): 2
    kfdp_dismountBg(): 2
    ERROR: diskgroup RECOVERY_AREA was not mounted
    NOTE: cache deleting context for group RECOVERY_AREA 1/269173547
    ORA-15032: not all alterations performed
    ORA-15017: diskgroup "RECOVERY_AREA" cannot be mounted
    ORA-15063: ASM discovered an insufficient number of disks for diskgroup "RECOVERY_AREA"
    ERROR: alter diskgroup recovery_area mount
    Tue Aug 16 16:34:53 2011
    SQL> alter diskgroup recovery_area mount
    NOTE: cache registered group RECOVERY_AREA number=1 incarn=0xc86b4331
    NOTE: cache began mount (first) of group RECOVERY_AREA number=1 incarn=0xc86b4331
    Tue Aug 16 16:34:53 2011
    ERROR: no PST quorum in group: required 2, found 0
    NOTE: cache dismounting (clean) group 1/0xC86B4331 (RECOVERY_AREA)
    NOTE: dbwr not being msg'd to dismount
    NOTE: lgwr not being msg'd to dismount
    NOTE: cache dismounted group 1/0xC86B4331 (RECOVERY_AREA)
    NOTE: cache ending mount (fail) of group RECOVERY_AREA number=1 incarn=0xc86b4331
    kfdp_dismount(): 4
    kfdp_dismountBg(): 4
    ERROR: diskgroup RECOVERY_AREA was not mounted
    NOTE: cache deleting context for group RECOVERY_AREA 1/-932494543
    ORA-15032: not all alterations performed
    ORA-15017: diskgroup "RECOVERY_AREA" cannot be mounted
    ORA-15063: ASM discovered an insufficient number of disks for diskgroup "RECOVERY_AREA"
    ERROR: alter diskgroup recovery_area mount

ASM mounting problem

    Hi,
I think my diskgroup headers got corrupted. The ASM diskgroups are bound to raw devices, and I am receiving the following errors in the alert log file.
    Starting ORACLE instance (normal)
    LICENSE_MAX_SESSION = 0
    LICENSE_SESSIONS_WARNING = 0
    Picked latch-free SCN scheme 3
    Using LOG_ARCHIVE_DEST_1 parameter default value as /opt/oracle/product/10.2.0/db_1/dbs/arch
    Autotune of undo retention is turned off.
    LICENSE_MAX_USERS = 0
    SYS auditing is disabled
    ksdpec: called for event 13740 prior to event group initialization
    Starting up ORACLE RDBMS Version: 10.2.0.4.0.
    System parameters with non-default values:
    large_pool_size = 12582912
    instance_type = asm
    remote_login_passwordfile= SHARED
    background_dump_dest = /opt/oracle/admin/+ASM/bdump
    user_dump_dest = /opt/oracle/admin/+ASM/udump
    core_dump_dest = /opt/oracle/admin/+ASM/cdump
    asm_diskstring =
    asm_diskgroups = REDO1, REDO2
    PMON started with pid=2, OS id=24689
    PSP0 started with pid=3, OS id=24691
    MMAN started with pid=4, OS id=24693
    DBW0 started with pid=5, OS id=24695
    LGWR started with pid=6, OS id=24697
    CKPT started with pid=7, OS id=24699
    SMON started with pid=8, OS id=24701
    RBAL started with pid=9, OS id=24703
    GMON started with pid=10, OS id=24705
    Tue Jul 26 09:42:27 2011
    SQL> ALTER DISKGROUP ALL MOUNT
    Tue Jul 26 09:42:27 2011
    NOTE: cache registered group REDO1 number=1 incarn=0x2bfea2ac
    NOTE: cache registered group REDO2 number=2 incarn=0x2bfea2ad
    Tue Jul 26 09:42:27 2011
    Errors in file /opt/oracle/admin/+ASM/bdump/+asm_rbal_24703.trc:
    ORA-00600: internal error code, arguments: [kfklLibFetchNext00], [18446744073709551614], [0], [], [], [], [], []
    Tue Jul 26 09:42:27 2011
    Errors in file /opt/oracle/admin/+ASM/bdump/+asm_rbal_24703.trc:
    ORA-00600: internal error code, arguments: [kfklLibFetchNext00], [18446744073709551614], [0], [], [], [], [], []
    Tue Jul 26 09:42:27 2011
    ERROR: no PST quorum in group 1: required 2, found 0
    Tue Jul 26 09:42:27 2011
    NOTE: cache dismounting group 1/0x2BFEA2AC (REDO1)
    NOTE: dbwr not being msg'd to dismount
    ERROR: diskgroup REDO1 was not mounted
    Tue Jul 26 09:42:27 2011
    ERROR: no PST quorum in group 2: required 2, found 0
    Tue Jul 26 09:42:27 2011
    NOTE: cache dismounting group 2/0x2BFEA2AD (REDO2)
    NOTE: dbwr not being msg'd to dismount
    ERROR: diskgroup REDO2 was not mounted
    Thank you for your help.
    Regards,
    Adnan Hamdus Salam

    adnan wrote:
    Hi,
    Tue Jul 26 09:42:27 2011
    Errors in file /opt/oracle/admin/+ASM/bdump/+asm_rbal_24703.trc:
    ORA-00600: internal error code, arguments: [kfklLibFetchNext00], [18446744073709551614], [0], [], [], [], [], []
    Thank you for your help.
    Regards,
Adnan Hamdus Salam
Hi,
See the trace file /opt/oracle/admin/+ASM/bdump/+asm_rbal_24703.trc to check whether there is anything useful in it; otherwise contact Oracle Support for this ORA-600.
    Cheers

  • Quorum Query

    Environment
    2 Node RAC Cluster with each containing:
    Oracle Linux 6 Update 5 (x86-64)
    Oracle Grid Infrastructure 12R1 (12.1.0.2.0)
    Oracle Database 12R1 (12.1.0.2.0)
I do not understand the role of the quorum in the quorum failure group for voting files with Oracle ASM.
Failure groups (FGs) provide assurance that there is separation of the risk you are trying to mitigate. With user data, the separation is at the extent level. With voting files, the file is separated onto a disk for each FG within the disk group (DG). If voting files are stored in a disk group at normal redundancy, a minimum of two FGs is required, and it is recommended that three FGs be used. This is so that the Partner Status Table (PST) can have at least one other FG where the PST is maintained for comparison, in the event of one FG failure. Do the FGs that store voting files need to be QUORUM failure groups? What is the role of the quorum? When is it needed?

    Hi,
    I'll start with what Quorum means:
A quorum is the minimum number of members (a majority) of a set necessary to prevent a failure (the general IT concept).
There are many quorums, such as the voting disk quorum, OCR quorum, network quorum, PST quorum, etc.
We need to separate which quorum we are concerned with.
The quorum of the PST is different from the quorum of the voting disks, although everything works together.
    Quorum PST:
A PST contains information about all ASM disks in a diskgroup: disk number, disk status, disk partner number, heartbeat info and failgroup info.
A disk group must be able to access a quorum of the Partner Status Tables (PSTs) to mount the diskgroup.
When a diskgroup mount is requested, the instance reads all disks in the disk group to find and verify all available PSTs. Once it verifies that there are enough PSTs for a quorum, it mounts the disk group.
    There is a nice post here: ASM Support Guy: Partnership and Status Table
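A small worked example of the numbers that appear in these alert log messages (my reading, consistent with the post linked above, not an official formula): the "required N" in "no PST quorum in group: required N, found M" is a majority of the PST copies the group keeps, and "found" is how many PST-bearing disks the instance could actually read during discovery.
required quorum = floor(copies / 2) + 1
1 PST copy   -> required 1   (typical of external redundancy)
3 PST copies -> required 2   (typical of normal redundancy)
5 PST copies -> required 3   (typical of high redundancy, or normal redundancy with many failgroups)
"found 0" therefore usually points at discovery not seeing the PST-bearing disks at all (permissions, discovery string, storage not really shared) rather than at damaged PSTs.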
    Quorum Votedisk:
This is the minimum number of votes for the cluster to be operational. There is always a voting disk quorum.
When you set up voting disks in a normal redundancy diskgroup you have three voting disks, one in each failgroup. For the cluster to remain operational you need a quorum of at least two votes online.
    Quorum Failgroup (clause):
The quorum failgroup is an option in the setup of a diskgroup.
This option must not be confused with the voting quorum, because the voting quorum and the quorum failgroup are different things.
For example: in a normal redundancy diskgroup I can lose my quorum failgroup and the cluster will remain online with the two regular failgroups, so the quorum failgroup is just a setup choice.
Oracle uses the name "quorum failgroup" for a failgroup with the specific purpose of storing only voting disks, for certain infrastructure deployments.
It is not mandatory to use a quorum failgroup in a diskgroup that holds voting disks.
    Now back to your question:
If your failure groups only have 1 ASM disk each, then shouldn't the recommendation be to use high redundancy (5 failure groups), so that in the event of an ASM disk failure a quorum of PSTs (3 PSTs) would still be possible?
About the PST quorum: you must be aware that if you have 5 PSTs you will need a quorum of at least 3 PSTs to mount the diskgroup.
If you have 5 failgroups and each failgroup has only one ASM disk, you will have one PST per ASM disk, which means you can lose up to 2 PSTs and still be able to make a quorum with 3 PSTs and keep the diskgroup mounted (or mount it).
The bold italicized Oracle documentation above seems to say that if you allocate 3 disk devices, 2 will be used by failure groups in normal redundancy, and further that a quorum failure group will exist that will use all disk devices. What does this mean?
What the documentation is saying there is so confusing that I have no idea; I'll try to contact someone at Oracle to check it.
But I will try to clarify some things.
Suppose you set up a diskgroup as follows:
    Diskgroup  DATA
    Failgroup data01
    * /dev/hdisk1 and /dev/hdisk2
    Failgroup data02
    * /dev/hdisk3 and /dev/hdisk4
    Failgroup data03
    * /dev/hdisk5 and /dev/hdisk6
    Quorum Failgroup data_quroum
    * /nfs/votedisk
When you add the voting disks to this diskgroup DATA, CSSD will store them as follows:
CSSD will randomly pick one ASM disk per failgroup and store a voting disk on it, but it will always pick one ASM disk from the quorum failgroup (if one exists).
So, after adding the voting disks to the diskgroup above, you can end up with:
    * Failgroup data01 ( /dev/hdisk2)
    * Failgroup data03 (/dev/hdisk5)
    * Failgroup data_quorum (/nfs/votedisk)
To mount diskgroup DATA you need failgroups data01, data03 and data_quorum available; otherwise the diskgroup does not mount.
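For reference, the layout above corresponds roughly to a diskgroup created like this (a sketch only; the device paths are the ones from the example, and compatible.asm has to be high enough for the QUORUM clause):
SQL> CREATE DISKGROUP data NORMAL REDUNDANCY
  2    FAILGROUP data01 DISK '/dev/hdisk1', '/dev/hdisk2'
  3    FAILGROUP data02 DISK '/dev/hdisk3', '/dev/hdisk4'
  4    FAILGROUP data03 DISK '/dev/hdisk5', '/dev/hdisk6'
  5    QUORUM FAILGROUP data_quorum DISK '/nfs/votedisk'
  6    ATTRIBUTE 'compatible.asm' = '12.1';
The placement CSSD actually chose can then be checked with "crsctl query css votedisk" (after moving the voting files there with "crsctl replace votedisk +DATA", if they are not in that diskgroup already).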
The documentation (https://docs.oracle.com/database/121/CWADD/votocr.htm#CWADD91889) is a bit confusing:
    Normal redundancy
    The redundancy level that you choose for the Oracle ASM disk group determines how Oracle ASM mirrors files in the disk group, and determines the number of disks and amount of disk space that you require. If the voting files are in a disk group, then the disk groups that contain Oracle Clusterware files (OCR and voting files) have a higher minimum number of failure groups than other disk groups because the voting files are stored in quorum failure groups.
What it is saying is: in case you use a quorum failgroup, you will have a higher minimum number of failure groups than other disk groups...
But remember that a quorum failgroup is optional, for those who use a single storage array or an odd number of storage arrays (H/W).
    For Oracle Clusterware files, a normal redundancy disk group requires a minimum of three disk devices (two of the three disks are used by failure groups and all three disks are used by the quorum failure group) and provides three voting files and one OCR and mirror of the OCR. When using a normal redundancy disk group, the cluster can survive the loss of one failure group.
Trying to clarify:
- Voting disks in a diskgroup with normal redundancy require three disk devices. (In case you use a quorum failgroup: two of the three disks are used by regular failgroups and one of the three disks is used by the quorum failgroup, but all three disks (regular and quorum failgroups that store voting disks) count when mounting that diskgroup.)
- and one OCR and mirror of the OCR:
It's really confusing, because the mirror of the OCR must be placed in a different diskgroup; the OCR is stored similarly to how Oracle Database files are stored, with the extents spread across all the disks in the diskgroup.
I don't know what it's talking about: whether it means the mirroring of extents due to diskgroup redundancy, or an OCR mirror.
Per the note and the documentation above, it's not possible to store the OCR and the OCR mirror in the same diskgroup:
    RAC FAQ (Doc ID 220970.1)
    How is the Oracle Cluster Registry (OCR) stored when I use ASM?
    And (https://docs.oracle.com/database/121/CWADD/votocr.htm#CWADD90964)
    * At least two OCR locations if OCR is configured on an Oracle ASM disk group. You should configure OCR in two independent disk groups. Typically this is the work area and the recovery area.
    High redundancy:
    For Oracle Clusterware files, a high redundancy disk group requires a minimum of five disk devices (three of the five disks are used by failure groups and all five disks are used by the quorum failure group) and provides five voting files and one OCR and two mirrors of the OCR. With high redundancy, the cluster can survive the loss of two failure groups.
Three of five disks are used??? And two mirrors of the OCR?? In a single diskgroup?
Now things get really confusing.
As far as I can test and see, when using a quorum failgroup for the voting disks, four (not three) of the five disks are used and all five count.

  • Solaris Cluster 3.3 on VMware ESX 4.1

    Hi there,
I am trying to set up Solaris Cluster 3.3 on VMware ESX 4.1.
My first question is: has anyone out there set up Solaris Cluster on VMware across boxes?
    My tools:
    Solaris 10 U9 x64
    Solaris Cluster 3.3
    Vmware ESX 4.1
    HP DL 380 G7
    HP P2000 Fibre Channel Storage
When I try to set up the cluster, just next-next-next, it completes successfully. It reboots the second node first and then itself.
After the second node comes up to the login screen, ping stops after 5 seconds. Same on either node!
I am trying to understand why it does that. I have tried every possibility to complete this job. I set up the quorum as an RDM from VMware, so Solaris now has direct access to the quorum disk.
I am new to Solaris and I am getting the errors below. If someone would like to help me it would be much appreciated!
Please explain it to me in more detail, I am a newbie in Solaris :) Thanks!
I need help especially with the error: /proc fails to mount periodically during reboots.
Here are the error messages. Is there anyone out there who has set up Solaris Cluster on ESX 4.1?
    * cluster check (ver 1.0)
    Report Date: 2011.02.28 at 16.04.46 EET
    2011.02.28 at 14.04.46 GMT
    Command run on host:
    39bc6e2d- sun1
    Checks run on nodes:
    sun1
    Unique Checks: 5
    ===========================================================================
    * Summary of Single Node Check Results for sun1
    ===========================================================================
    Checks Considered: 5
    Results by Status
    Violated : 0
    Insufficient Data : 0
    Execution Error : 0
    Unknown Status : 0
    Information Only : 0
    Not Applicable : 2
    Passed : 3
    Violations by Severity
    Critical : 0
    High : 0
    Moderate : 0
    Low : 0
    * Details for 2 Not Applicable Checks on sun1
    * Check ID: S6708606 ***
    * Severity: Moderate
    * Problem Statement: Multiple network interfaces on a single subnet have the same MAC address.
    * Applicability: Scan output of '/usr/sbin/ifconfig -a' for more than one interface with an 'ether' line. Check does not apply if zero or only one ether line.
    * Check ID: S6708496 ***
    * Severity: Moderate
    * Problem Statement: Cluster node (3.1 or later) OpenBoot Prom (OBP) has local-mac-address? variable set to 'false'.
    * Applicability: Applicable to SPARC architecture only.
    * Details for 3 Passed Checks on sun1
    * Check ID: S6708605 ***
    * Severity: Critical
    * Problem Statement: The /dev/rmt directory is missing.
    * Check ID: S6708638 ***
    * Severity: Moderate
    * Problem Statement: Node has insufficient physical memory.
    * Check ID: S6708642 ***
    * Severity: Critical
    * Problem Statement: /proc fails to mount periodically during reboots.
    ===========================================================================
    * End of Report 2011.02.28 at 16.04.46 EET
    ===========================================================================
Note: Please ignore the memory error; I have installed 5 GB of memory and it says it requires a minimum of 1 GB! I think it is a bug!

    @TimRead
Hi, thanks for the reply,
I have already followed the steps in your links too, but no joy on this.
What I noticed here is that the cluster software seems to be buggy, because I have tried to install Cluster 3.3 on physical hardware and it gave me the exact same error messages! Interesting, isn't it?
Please see the errors below, which I got both on top of VMware and also on a physical-hardware Solaris installation:
    ERROR1:
Comment: I have tried different amounts of memory every time; it keeps reporting that silly error.
    problem_statement : *Node has insufficient physical memory.
    <analysis>5120 MB of memory is installed on this node.The current release of Solaris Cluster requires a minimum of 1024 MB of physical memory in each node. Additional memory required for various Data Services.</analysis>
    <recommendations>Add enough memory to this node to bring its physical memory up to the minimum required level.
    ERROR2
Comment: Despite the rmt directory being there, I got the error below from the cluster check.
    <problem_statement>The /dev/rmt directory is missing.
<analysis>The /dev/rmt directory is missing on this Solaris Cluster node. The current implementation of scdidadm(1M) relies on the existence of /dev/rmt to successfully execute 'scdidadm -r'. The /dev/rmt directory is created by Solaris regardless of the existence of the actual underlying devices. The expectation is that the user will never delete this directory. During a reconfiguration reboot to add new disk devices, if /dev/rmt is missing scdidadm will not create the new devices and will exit with the following error: 'ERR in discover_paths : Cannot walk /dev/rmt' The absence of /dev/rmt might prevent a failover to this node and result in a cluster outage. See BugIDs 4368956 and 4783135 for more details.</analysis>
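One straightforward workaround implied by that analysis is simply to recreate the directory and rerun device discovery; a hedged sketch (as root, ownership and mode may differ on your release):
# mkdir /dev/rmt
# chown root:sys /dev/rmt
# scdidadm -r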
    ERROR3
Comment: All NICs have different MAC addresses though, and I have also done what it suggests. No joy here as well!
    <problem_statement>Cluster node (3.1 or later) OpenBoot Prom (OBP) has local-mac-address? variable set to 'false'.
    <analysis>The local-mac-address? variable must be set to 'true.' Proper operation of the public networks depends on each interface having a different MAC address.</analysis>
    <recommendations>Change the local-mac-address? variable to true: 1) From the OBP (ok> prompt): ok> setenv local-mac-address? true ok> reset 2) Or as root: # /usr/sbin/eeprom local-mac-address?=true # init 0 ok> reset</recommendations>
    ERROR4
Comment: No comment on this; I have done what it says, no joy...
    <problem_statement>/proc fails to mount periodically during reboots.
    <analysis>Something is trying to access /proc before it is normally mounted during the boot process. This can cause /proc not to mount. If /proc isn't mounted, some Solaris Cluster daemons might fail on startup, which can cause the node to panic. The following lines were found:</analysis>
    Thanks!

  • ASM Quorum Failgroup Setup is Mandatory for Normal and High Redundancy?

    Hi all,
Since I have worked with version 11.2 I have had a concept of the quorum failgroup and its purpose; now, reading the 12c documentation, I'm confused about some aspects and want your views on this subject.
    My Concept About Quorum Failgroup:
The quorum failgroup was introduced in 11.2 for setups with extended RAC and/or for diskgroups that have only 2 ASM disks using normal redundancy or 3 ASM disks using high redundancy.
But if we are not using extended RAC, and/or have a normal redundancy diskgroup with 3 or more ASM disks or a high redundancy diskgroup with 5 or more ASM disks, the use of a quorum failgroup is optional and most likely not used.
    ==============================================================================
The documentation isn't clear about WHEN we must use a quorum failgroup.
    https://docs.oracle.com/database/121/CWLIN/storage.htm#CWLIN287
    7.4.1 Configuring Storage for Oracle Automatic Storage Management
      2. Choose the Oracle ASM redundancy level to use for the Oracle ASM disk group.
    Except when using external redundancy, Oracle ASM mirrors all Oracle Clusterware files in separate failure groups within a disk group. A quorum failure group, a special type of failure group, contains mirror copies of voting files when voting files are stored in normal or high redundancy disk groups. If the voting files are in a disk group, then the disk groups that contain Oracle Clusterware files (OCR and voting files) have a higher minimum number of failure groups than other disk groups because the voting files are stored in quorum failure groups.
    A quorum failure group is a special type of failure group that is used to store the Oracle Clusterware voting files. The quorum failure group is used to ensure that a quorum of the specified failure groups are available. When Oracle ASM mounts a disk group that contains Oracle Clusterware files, the quorum failure group is used to determine if the disk group can be mounted in the event of the loss of one or more failure groups. Disks in the quorum failure group do not contain user data, therefore a quorum failure group is not considered when determining redundancy requirements in respect to storing user data.
From the documentation quoted above, I could understand that ANY diskgroup that uses normal or high redundancy MUST have a quorum failgroup (regardless of the setup).
In my view, if a quorum failgroup is used to ENSURE that a quorum of the specified failure groups is available, then we must use it; in other words, it is mandatory.
    What's your view on this matter?
    ==============================================================================
    Another Issue:
    Suppose the following scenario (example using NORMAL Redundancy).
    Example 1
Diskgroup Normal Redundancy with 3 ASM disks.
    DSK_0000  - FG1 (QUORUM FAILGROUP)
    DSK_0001  - FG2 (REGULAR FAILGROUP)
    DSK_0002  - FG3 (REGULAR FAILGROUP)    
ASM will allow creating only one quorum failgroup and two regular failgroups (one failgroup per ASM disk).
Storing the voting disks on this diskgroup, all three ASM disks will be used, one voting disk on each ASM disk.
Storing the OCR on this diskgroup, the two regular failgroups will be used: only one OCR, with primary extents and mirrors of its extents across the two failgroups (the quorum failgroup will not be used for the OCR).
    Example 2
Diskgroup Normal Redundancy with 5 ASM disks.
    DSK_0000  - FG1 (REGULAR FAILGROUP)
    DSK_0001  - FG2 (REGULAR FAILGROUP)
    DSK_0002  - FG3 (QUORUM FAILGROUP) 
    DSK_0003  - FG4 (QUORUM FAILGROUP)
    DSK_0004  - FG5 (QUORUM FAILGROUP)
ASM will allow creating up to three quorum failgroups and two regular failgroups.
Storing the voting disks on this diskgroup, all three QUORUM failgroups will be used; the REGULAR failgroups will not be used.
Storing the OCR on this diskgroup, the two regular failgroups will be used: only one OCR, with primary extents and mirrors of its extents across the two failgroups (no quorum failgroup will be used for the OCR).
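If it helps to verify those two layouts on a running system, recent versions of v$asm_disk expose both the failgroup type and the voting-file flag, so something like the query below (a sketch; column availability depends on the release) shows which disks ended up holding voting files and which failgroups are QUORUM vs REGULAR:
SQL> select group_number, name, failgroup, failgroup_type, voting_file
  2  from v$asm_disk
  3  order by group_number, failgroup;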
This part right below is confusing to me.
    https://docs.oracle.com/database/121/CWLIN/storage.htm#CWLIN287
    7.4.1 Configuring Storage for Oracle Automatic Storage Management
      2. Choose the Oracle ASM redundancy level to use for the Oracle ASM disk group.
    The quorum failure group is used to determine if the disk group can be mounted in the event of the loss of one or more failure groups.
    Normal redundancy: For Oracle Clusterware files, a normal redundancy disk group requires a minimum of three disk devices (two of the three disks are used by failure groups and all three disks are used by the quorum failure group) and provides three voting files and one OCR and mirror of the OCR. When using a normal redundancy disk group, the cluster can survive the loss of one failure group.For most installations, Oracle recommends that you select normal redundancy disk groups.
    High redundancy:  For Oracle Clusterware files, a high redundancy disk group requires a minimum of five disk devices (three of the five disks are used by failure groups and all five disks are used by the quorum failure group) and provides five voting files and one OCR and two mirrors of the OCR. With high redundancy, the cluster can survive the loss of two failure groups.
    Documentation says:
    minimum of three disk devices:  two of the three disks are used by failure groups and all three disks are used by the quorum failure group for normal redundancy.
    minimum of five disk devices: three of the five disks are used by failure groups and all five disks are used by the quorum failure group for high redudancy.
Questions:
What does this USED mean?
How are all the disks USED by the quorum failgroup?
Does this USED mean used to determine whether the disk group can be mounted?
How does the quorum failgroup determine whether a diskgroup can be mounted; what is the math?
Consider the following scenario:
Diskgroup Normal Redundancy with 3 ASM disks (two regular failgroups and one quorum failgroup).
If we lose the quorum failgroup, we can mount that diskgroup using the force option.
If we lose one regular failgroup, we can mount that diskgroup using the force option.
We can't lose two failgroups at the same time.
If I don't use a quorum failgroup (i.e. only regular failgroups), the result of the test is the same.
I see no difference between using a quorum failgroup and using only regular failgroups on this matter.
    ======================================================================================
    When Oracle docs says:
    one OCR and mirror of the OCR for normal redundancy
    one OCR and two mirrors of the OCR for high redundancy
What this means is that we have ONLY ONE OCR file and mirrors of its extents, but Oracle in the documentation says 1 mirror of the OCR (normal redundancy) and 2 mirrors of the OCR (high redundancy).
What does that sound like: a single file, or two or more files?
    Please don't confuse it with ocrconfig mirror location.

    Hi Levi Pereira,
    Sorry for the late answer. And as per 12c1 documentation, yes you are right, only the VD will be placed on the quorum fail groups:
    The redundancy level that you choose for the Oracle ASM disk group determines how Oracle ASM mirrors files in the disk group, and determines the number of disks and amount of disk space that you require. If the voting files are in a disk group, then the disk groups that contain Oracle Clusterware files (OCR and voting files) have a higher minimum number of failure groups than other disk groups because the voting files are stored in quorum failure groups.
    Managing Oracle Cluster Registry and Voting Files
Regarding your question "Is it mandatory to use a quorum failgroup when using normal or high redundancy?": no, it isn't. I have a normal redundancy diskgroup where I store the VD with no quorum failgroup; indeed, a quorum failgroup would prevent you from storing data on the disks within this kind of failgroup, as per the documentation:
    A quorum failure group is a special type of failure group that is used to store the Oracle Clusterware voting files. The quorum failure group is used to ensure that a quorum of the specified failure groups are available. When Oracle ASM mounts a disk group that contains Oracle Clusterware files, the quorum failure group is used to determine if the disk group can be mounted if there is a loss of one or more failure groups. Disks in the quorum failure group do not contain user data, therefore a quorum failure group is not considered when determining redundancy requirements in respect to storing user data.
    Managing Oracle Cluster Registry and Voting Files
    And as per the documentation, my answers follow inline (originally marked in red):
    Could you explain what the documentation meant:
    minimum of three disk devices:  two of the three disks are used by failure groups and all three disks are used by the quorum failure group for normal redundancy.
    How are all three disks used by the quorum failure group? [I don't think this is correct; it sounds a bit strange and contradicts what is stated right before it...]
    Regards.
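    For illustration, a minimal sketch (device paths are hypothetical) of a normal redundancy disk group built with two regular failure groups plus a quorum failure group, and of checking which disks end up carrying voting files:
    create diskgroup DG_CRS normal redundancy
      failgroup fg1 disk '/dev/asmdisk1'
      failgroup fg2 disk '/dev/asmdisk2'
      quorum failgroup fgq disk '/dev/asmdisk3'
      attribute 'compatible.asm' = '11.2';
    select name, failgroup, failgroup_type, voting_file
    from   v$asm_disk
    where  group_number = (select group_number from v$asm_diskgroup where name = 'DG_CRS');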

  • Adding quorum disk causing wasted space?

    Hi,
    Any idea whether this is a bug or an expected behavior?
    Seeing this with ASM 11.2.0.4 and 12.1.0.4
    I have a normal redundancy disk group (FLASHDG22G below) with two disks of equal size. With no data in the disk group, Usable_file_MB is equal to the size of one disk, as expected.
    But if I add a small quorum disk to the disk group, Usable_file_MB decreases to 1/2 of the disk size, so half of the capacity is lost.
    Thoughts?
    [grid@symcrac3 ~]$ asmcmd lsdsk -k
    Total_MB  Free_MB  OS_MB  Name        Failgroup  Failgroup_Type  Library  Label  UDID  Product  Redund   Path
       20980    20878  20980  SYMCRAC3_A  FG1        REGULAR         System                         UNKNOWN  /dev/symcrac3-a-22G
         953      951    953  QUORUMDISK  FGQ        QUORUM          System                         UNKNOWN  /dev/symcrac3-a-quorum
       20980    20878  20980  SYMCRAC4_A  FG2        REGULAR         System                         UNKNOWN  /dev/symcrac4-a-22G
    [grid@symcrac3 ~]$ asmcmd lsdg
    State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
    MOUNTED  NORMAL  N         512   4096  1048576     42913    42707            20980           10388              0             N  FLASHDG22G/

    There are two separate issues:
    1) ASMCMD silently fails to add quorum failure groups. Adds them, but as regular failure groups.
    2) Even if a quorum failure group is added with SQLPlus, the space is still lost – I have just confirmed it. And it doesn’t matter whether I add a quorum disk to an existing group or create a new group with a quorum disk.
    For #2 here is the likely source of the problem. Usable_File_MB = (FREE_MB – REQUIRED_MIRROR_FREE_MB ) / 2.
    REQUIRED_MIRROR_FREE_MB is computed as follows (per ASM 12.1 user guide):
    –Normal redundancy disk group with more than two failure groups
       The value is the total raw space for all of the disks in the largest failure group. The largest failure group is the one with the largest total raw capacity. For example, if each disk is in its own failure group, then the value would be the size of the largest capacity disk.
    Instead, it should be "with more than two regular failure groups".
    With just two failure groups it is not possible to restore full redundancy after one of them fails. So, REQUIRED_MIRROR_FREE_MB = 0 in this case.
    Also REQUIRED_MIRROR_FREE_MB should remain 0 even when there are three failure groups if one of them is a quorum failure group. But the logic seems to be broken here.
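    In case it helps anyone reproducing this, a minimal sketch (disk path taken from the listing above) of adding the quorum disk through SQL*Plus rather than ASMCMD, and of checking the arithmetic described above against v$asm_diskgroup:
    alter diskgroup FLASHDG22G
      add quorum failgroup FGQ disk '/dev/symcrac3-a-quorum';
    select name, type, free_mb, required_mirror_free_mb, usable_file_mb,
           (free_mb - required_mirror_free_mb) / 2 as expected_usable_mb
    from   v$asm_diskgroup
    where  name = 'FLASHDG22G';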

  • Restrict users from archiving PST to local computer

    Hi all,
    I would like to restrict users from archiving emails in Outlook to the local computer. We have a serious problem: users archive emails to the local computer, and then they can copy those emails to external devices, or they can attach the local PST file to their personal Outlook profile and forward the messages to external recipients. We have run into a serious problem now, and I am trying to resolve it by preventing users from archiving emails to their local computers. Is there any way I can do this?
    Only designated users (from the support team) should be able to archive Outlook emails, and they should save them to a central file server.
    Please share your thoughts. Thank you all for taking the time to read this and for your suggestions.

    Hi Friend,
    Use the Group Policy feature and enable the “DisablePST” registry value; it stops users from creating new PST files and even removes the Archive function from their Outlook interface.
    Registry path to disable PST file creation (Group Policy):
    HKEY_CURRENT_USER\Software\Policies\Microsoft\Office\12.0\Outlook
    For a brief explanation of the various restrictions you can place on PST files, see:
    https://www.simple-talk.com/sysadmin/exchange/using-group-policy-to-restrict-the-use-of-pst-files/
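    If you want to test it on a single machine before rolling it out via Group Policy, a minimal sketch of the value itself (the 12.0 key matches the Outlook 2007 path above; for Outlook 2010 the key is normally 14.0, so adjust to your version):
    Windows Registry Editor Version 5.00
    [HKEY_CURRENT_USER\Software\Policies\Microsoft\Office\12.0\Outlook]
    "DisablePST"=dword:00000001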
    Note: Please mark answers that help as helpful, or respond back for further assistance.
    Thanks
    Clark Kent

  • Place Voting files in Quorum failgroup or Regular failgroup ?

    About the concept of quorum failgroup ORACLE says:
    QUORUM disks, or disks in quorum failure groups, cannot contain any database files,
    the Oracle Cluster Registry (OCR), or dynamic volumes. However, QUORUM disks
    can contain the voting file for Cluster Synchronization Services (CSS). Oracle ASM
    uses quorum disks or disks in quorum failure groups for voting files whenever
    possible.
    To sum up, I think the difference between a regular failgroup and a quorum failgroup is that a quorum failgroup can only contain voting files, while a regular one can contain multiple kinds of files.
    So I don't see any advantage in placing voting files in a quorum failgroup rather than in a regular one. Why did Oracle introduce the concept of a quorum failgroup?
    Thanks in advance.

    Why did Oracle introduce the concept of a quorum failgroup?
    You should configure an odd number of voting disks because, as far as voting files are concerned, a node must be able to access more than half of the voting files at any time (a simple majority). To be able to tolerate the failure of n voting files, the cluster must have at least 2n+1 voting files configured.
    If you lose 1/2 or more of all of your voting disks, then nodes get evicted from the cluster, or nodes kick themselves out of the cluster.
    For this reason when using Oracle for the redundancy of your voting disks, Oracle recommends that customers use 3 or more voting disks.
    If you are using only one storage array and it fails, the whole cluster goes down, including all voting disks, no matter how many are configured.
    The problem in a stretched cluster (extended RAC) configuration is that most installations only use two storage systems (one at each site), which means that the site that hosts the majority of the voting files is a potential single point of failure for the entire cluster. If the storage or the site where n+1 voting files are configured fails, the whole cluster will go down, because Oracle Clusterware will lose the majority of voting files.
    To prevent a full cluster outage, Oracle supports a third voting file on an inexpensive, low-end, standard NFS-mounted device somewhere in the network. Oracle recommends putting the NFS voting file on a dedicated server that belongs to a production environment.
    So, you create a "cooked" file on NFS (presented as a disk) and give it to ASM. From that point, ASM does not know that this ASM disk lives on the network (WAN) or that it is a cooked file.
    Then you must mark that ASM disk as QUORUM, because Oracle will then use it only to store the voting disk. This prevents data (such as database files) from being stored on it, which could cause performance problems or data loss.
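    A minimal sketch of that last step, assuming a disk group named DATA holding the Clusterware files and a hypothetical NFS-backed device /voting_nfs/vote_disk_01 already presented to ASM as a disk:
    alter diskgroup DATA
      add quorum failgroup FG_NFS disk '/voting_nfs/vote_disk_01';
    -- then have Clusterware place the voting files in that disk group:
    -- crsctl replace votedisk +DATA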

  • ASM timeout parameters?

    Oracle: 11.2.0.3 non-RAC (Oracle Restart grid home)
    OS: RHEL Server 5.8
    Can timeout parameters be set for an ASM instance or for any downstream database instances dependent on an ASM instance? Our storage and sysadmins ran a test (failing over a controller). The Oracle database instance detected a problem reaching +FLASHREC on the NetApp device (it was evidently trying to access a control file). Approximately one second later the database instance decided to terminate itself. The ASM instance remained up, but subsequent checking with ASMCMD showed no ASM disk group available using an ls command. After bouncing the ASM instance all was well: the disk groups reappeared and we were able to restart our database instance. One second seems a bit unforgiving. Can any timeout-related parameters be set on the ASM or ASM-client instance to provide more wiggle room during a controller failover?
    Some errors we encountered from the database instance alert log. Further below are errors from the ASM instance's alert log
    Wed Mar 14 04:00:55 2012
    Archived Log entry 89 added for thread 1 sequence 142 ID 0xbb0a69f4 dest 1:
    Wed Mar 14 17:51:06 2012
    Errors in file /u01/app/oracle/diag/rdbms/<instance name here>/<instance name here>/trace/<instance name here>ckpt310.trc:
    ORA-27072: File I/O error
    Linux-x86_64 Error: 5: Input/output error
    Additional information: 4
    Additional information: 120864
    Additional information: -1
    WARNING: Read Failed. group:2 disk:0 AU:59 offset:16384 size:16384
    Wed Mar 14 17:51:06 2012
    Errors in file /u01/app/oracle/diag/rdbms/<instance name here>/<instance name here>/trace/<instance name here>arc2399.trc:
    ORA-27072: File I/O error
    Linux-x86_64 Error: 5: Input/output error
    Additional information: 4
    Additional information: 120864
    Additional information: -1
    WARNING: failed to read mirror side 1 of virtual extent 0 logical extent 0 of file 256 in group [2.2142222202] from disk FLASHREC_0000 allocation unit 59 reason error; if possible, will try another mirror side
    WARNING: Read Failed. group:2 disk:0 AU:59 offset:16384 size:16384
    WARNING: failed to read mirror side 1 of virtual extent 0 logical extent 0 of file 256 in group [2.2142222202] from disk FLASHREC_0000 allocation unit 59 reason error; if possible, will try another mirror side
    Errors in file /u01/app/oracle/diag/rdbms/<instance name here>/<instance name here>/trace/<instance name here>ckpt310.trc:
    ORA-00202: control file: '+FLASHREC/<instance name here>/controlfile/current.256.776099703'
    ORA-15081: failed to submit an I/O operation to a disk
    Errors in file /u01/app/oracle/diag/rdbms/<instance name here>/<instance name here>/trace/<instance name here>arc2399.trc:
    ORA-00202: control file: '+FLASHREC/<instance name here>/controlfile/current.256.776099703'
    ORA-15081: failed to submit an I/O operation to a disk
    Errors in file /u01/app/oracle/diag/rdbms/<instance name here>/<instance name here>/trace/<instance name here>ckpt310.trc:
    ORA-27061: waiting for async I/Os failed
    Linux-x86_64 Error: 5: Input/output error
    Additional information: -1
    Additional information: 16384
    WARNING: Write Failed. group:2 disk:0 AU:59 offset:49152 size:16384
    Errors in file /u01/app/oracle/diag/rdbms/<instance name here>/<instance name here>/trace/<instance name here>ckpt310.trc:
    ORA-15080: synchronous I/O operation to a disk failed
    WARNING: failed to write mirror side 1 of virtual extent 0 logical extent 0 of file 256 in group 2 on disk 0 allocation unit 59
    Wed Mar 14 17:51:06 2012
    Errors in file /u01/app/oracle/diag/rdbms/<instance name here>/<instance name here>/trace/<instance name here>m00023737.trc:
    ORA-27072: File I/O error
    Linux-x86_64 Error: 5: Input/output error
    Additional information: 4
    Additional information: 120864
    Additional information: -1
    WARNING: Read Failed. group:2 disk:0 AU:59 offset:16384 size:16384
    WARNING: failed to read mirror side 1 of virtual extent 0 logical extent 0 of file 256 in group [2.2142222202] from disk FLASHREC_0000 allocation unit 59 reason error; if possible, will try another mirror side
    Errors in file /u01/app/oracle/diag/rdbms/<instance name here>/<instance name here>/trace/<instance name here>m00023737.trc:
    ORA-00202: control file: '+FLASHREC/<instance name here>/controlfile/current.256.776099703'
    ORA-15081: failed to submit an I/O operation to a disk
    Errors in file /u01/app/oracle/diag/rdbms/<instance name here>/<instance name here>/trace/<instance name here>ckpt310.trc:
    ORA-00206: error in writing (block 3, # blocks 1) of control file
    ORA-00202: control file: '+FLASHREC/<instance name here>/controlfile/current.256.776099703'
    ORA-15081: failed to submit an I/O operation to a disk
    ORA-15081: failed to submit an I/O operation to a disk
    Errors in file /u01/app/oracle/diag/rdbms/<instance name here>/<instance name here>/trace/<instance name here>ckpt310.trc:
    ORA-00221: error on write to control file
    ORA-00206: error in writing (block 3, # blocks 1) of control file
    ORA-00202: control file: '+FLASHREC/<instance name here>/controlfile/current.256.776099703'
    ORA-15081: failed to submit an I/O operation to a disk
    ORA-15081: failed to submit an I/O operation to a disk
    CKPT (ospid: 310): terminating the instance due to error 221
    Errors in file /u01/app/oracle/diag/rdbms/<instance name here>/<instance name here>/trace/<instance name here>m00023737.trc:
    ORA-00204: error in reading (block 1, # blocks 1) of control file
    ORA-00202: control file: '+FLASHREC/<instance name here>/controlfile/current.256.776099703'
    ORA-15081: failed to submit an I/O operation to a disk
    Wed Mar 14 17:51:07 2012
    License high water mark = 8
    Instance terminated by CKPT, pid = 310
    USER (ospid: 24054): terminating the instance
    Instance terminated by USER, pid = 24054
    Some errors we encountered from the ASM instance alert log
    Mon Mar 12 14:56:18 2012
    NOTE: ASMB process exiting due to lack of ASM file activity for 305 seconds
    Wed Mar 14 17:51:06 2012
    Errors in file /u01/app/oracle/diag/asm/+asm/+ASM/trace/+ASM_gmon_27396.trc:
    ORA-27072: File I/O error
    Linux-x86_64 Error: 5: Input/output error
    Additional information: 4
    Additional information: 4088
    Additional information: -1
    WARNING: Write Failed. group:1 disk:0 AU:1 offset:1044480 size:4096
    WARNING: Hbeat write to PST disk 0.3916384140 (DATAFILE_0000) in group 1 failed.
    Errors in file /u01/app/oracle/diag/asm/+asm/+ASM/trace/+ASM_gmon_27396.trc:
    ORA-27072: File I/O error
    Linux-x86_64 Error: 5: Input/output error
    Additional information: 4
    Additional information: 4088
    Additional information: -1
    WARNING: Write Failed. group:2 disk:0 AU:1 offset:1044480 size:4096
    WARNING: Hbeat write to PST disk 0.3916384141 (FLASHREC_0000) in group 2 failed.
    Errors in file /u01/app/oracle/diag/asm/+asm/+ASM/trace/+ASM_gmon_27396.trc:
    ORA-27072: File I/O error
    Linux-x86_64 Error: 5: Input/output error
    Additional information: 4
    Additional information: 4088
    Additional information: -1
    WARNING: Write Failed. group:3 disk:0 AU:1 offset:1044480 size:4096
    WARNING: Hbeat write to PST disk 0.3916384142 (TEMPFILE_0000) in group 3 failed.
    Wed Mar 14 17:51:06 2012
    NOTE: process b000+asm (23739) initiating offline of disk 0.3916384140 (DATAFILE_0000) with mask 0x7e in group 1
    WARNING: Disk 0 (DATAFILE_0000) in group 1 in mode 0x7f is now being taken offline on ASM inst 1
    NOTE: initiating PST update: grp = 1, dsk = 0/0xe96f478c, mask = 0x6a, op = clear
    Wed Mar 14 17:51:06 2012
    NOTE: process b001+asm (23753) initiating offline of disk 0.3916384141 (FLASHREC_0000) with mask 0x7e in group 2
    WARNING: Disk 0 (FLASHREC_0000) in group 2 in mode 0x7f is now being taken offline on ASM inst 1
    NOTE: initiating PST update: grp = 2, dsk = 0/0xe96f478d, mask = 0x6a, op = clear
    GMON updating disk modes for group 1 at 13 for pid 20, osid 23739
    ERROR: no read quorum in group: required 1, found 0 disks
    Wed Mar 14 17:51:06 2012
    NOTE: process b002+asm (23791) initiating offline of disk 0.3916384142 (TEMPFILE_0000) with mask 0x7e in group 3
    WARNING: Disk 0 (TEMPFILE_0000) in group 3 in mode 0x7f is now being taken offline on ASM inst 1
    NOTE: initiating PST update: grp = 3, dsk = 0/0xe96f478e, mask = 0x6a, op = clear
    GMON updating disk modes for group 2 at 14 for pid 23, osid 23753
    ERROR: no read quorum in group: required 1, found 0 disks
    Wed Mar 14 17:51:06 2012
    NOTE: cache dismounting (not clean) group 1/0x7FAFB779 (DATAFILE)
    NOTE: messaging CKPT to quiesce pins Unix process pid: 23826, image: oracle@dot-oraprd04 (B003)
    Wed Mar 14 17:51:06 2012
    NOTE: halting all I/Os to diskgroup 1 (DATAFILE)
    Wed Mar 14 17:51:06 2012
    NOTE: LGWR doing non-clean dismount of group 1 (DATAFILE)
    NOTE: LGWR sync ABA=6.6196 last written ABA 6.6196
    NOTE: cache dismounted group 1/0x7FAFB779 (DATAFILE)
    SQL> alter diskgroup DATAFILE dismount force /* ASM SERVER */
    Wed Mar 14 17:51:06 2012
    NOTE: cache dismounting (not clean) group 2/0x7FAFB77A (FLASHREC)
    NOTE: messaging CKPT to quiesce pins Unix process pid: 23836, image: oracle@dot-oraprd04 (B004)
    NOTE: halting all I/Os to diskgroup 2 (FLASHREC)
    NOTE: LGWR doing non-clean dismount of group 2 (FLASHREC)
    NOTE: LGWR sync ABA=5.1120 last written ABA 5.1120
    GMON updating disk modes for group 3 at 15 for pid 25, osid 23791
    ERROR: no read quorum in group: required 1, found 0 disks
    NOTE: cache dismounted group 2/0x7FAFB77A (FLASHREC)
    SQL> alter diskgroup FLASHREC dismount force /* ASM SERVER */
    Wed Mar 14 17:51:06 2012
    NOTE: cache dismounting (not clean) group 3/0x7FAFB77B (TEMPFILE)
    NOTE: messaging CKPT to quiesce pins Unix process pid: 23838, image: oracle@dot-oraprd04 (B005)
    NOTE: halting all I/Os to diskgroup 3 (TEMPFILE)
    NOTE: LGWR doing non-clean dismount of group 3 (TEMPFILE)
    NOTE: LGWR sync ABA=6.11 last written ABA 6.11
    NOTE: cache dismounted group 3/0x7FAFB77B (TEMPFILE)
    SQL> alter diskgroup TEMPFILE dismount force /* ASM SERVER */
    WARNING: Offline of disk 0 (TEMPFILE_0000) in group 3 and mode 0x7f failed on ASM inst 1
    WARNING: Offline of disk 0 (DATAFILE_0000) in group 1 and mode 0x7f failed on ASM inst 1
    WARNING: Offline of disk 0 (FLASHREC_0000) in group 2 and mode 0x7f failed on ASM inst 1
    Wed Mar 14 17:51:07 2012
    ERROR: -9(Error 27061, OS Error (Linux-x86_64 Error: 5: Input/output error
    Additional information: -1
    Additional information: 512)
    ERROR: -9(Error 27061, OS Error (Linux-x86_64 Error: 5: Input/output error
    Additional information: -1
    Additional information: 512)
    ERROR: -9(Error 27061, OS Error (Linux-x86_64 Error: 5: Input/output error
    Additional information: -1
    Additional information: 512)
    Wed Mar 14 17:51:07 2012
    ERROR: -9(Error 27061, OS Error (Linux-x86_64 Error: 5: Input/output error
    Additional information: -1
    Additional information: 512)
    ERROR: -9(Error 27061, OS Error (Linux-x86_64 Error: 5: Input/output error
    Additional information: -1
    Additional information: 512)
    ERROR: -9(Error 27061, OS Error (Linux-x86_64 Error: 5: Input/output error
    Additional information: -1
    Additional information: 512)
    Wed Mar 14 17:51:07 2012
    ERROR: -9(Error 27061, OS Error (Linux-x86_64 Error: 5: Input/output error
    Additional information: -1
    Additional information: 512)
    ERROR: -9(Error 27061, OS Error (Linux-x86_64 Error: 5: Input/output error
    Additional information: -1
    Additional information: 512)
    ERROR: -9(Error 27061, OS Error (Linux-x86_64 Error: 5: Input/output error
    Additional information: -1
    Additional information: 512)
    ASM Health Checker found 1 new failures
    ASM Health Checker found 1 new failures
    ASM Health Checker found 1 new failures
    Wed Mar 14 17:51:07 2012
    NOTE: ASM client <instance name here>:<instance name here> disconnected unexpectedly.
    NOTE: check client alert log.
    NOTE: Trace records dumped in trace file /u01/app/oracle/diag/asm/+asm/+ASM/trace/+ASM_ora_322.trc
    Wed Mar 14 17:51:07 2012
    NOTE: cache deleting context for group FLASHREC 2/0x7fafb77a
    NOTE: cache deleting context for group TEMPFILE 3/0x7fafb77b
    NOTE: cache deleting context for group DATAFILE 1/0x7fafb779
    GMON dismounting group 2 at 16 for pid 27, osid 23836
    GMON dismounting group 1 at 17 for pid 26, osid 23826
    NOTE: Disk in mode 0x8 marked for de-assignment
    GMON dismounting group 3 at 18 for pid 28, osid 23838
    NOTE: Disk in mode 0x8 marked for de-assignment
    NOTE: Disk in mode 0x8 marked for de-assignment
    SUCCESS: diskgroup FLASHREC was dismounted
    SUCCESS: alter diskgroup FLASHREC dismount force /* ASM SERVER */
    SUCCESS: diskgroup DATAFILE was dismounted
    SUCCESS: alter diskgroup DATAFILE dismount force /* ASM SERVER */
    SUCCESS: diskgroup TEMPFILE was dismounted
    SUCCESS: alter diskgroup TEMPFILE dismount force /* ASM SERVER */
    ERROR: PST-initiated MANDATORY DISMOUNT of group TEMPFILE
    ERROR: PST-initiated MANDATORY DISMOUNT of group FLASHREC
    ERROR: PST-initiated MANDATORY DISMOUNT of group DATAFILE
    Wed Mar 14 17:51:07 2012
    NOTE: diskgroup resource ora.FLASHREC.dg is offline
    NOTE: diskgroup resource ora.DATAFILE.dg is offline
    NOTE: diskgroup resource ora.TEMPFILE.dg is offline
    Wed Mar 14 17:51:08 2012
    Errors in file /u01/app/oracle/diag/asm/+asm/+ASM/trace/+ASM_ora_24250.trc:
    ORA-17503: ksfdopn:2 Failed to open file +DATAFILE/<instance name here>/spfile<instance name here>.ora
    ORA-15001: diskgroup "DATAFILE" does not exist or is not mounted
    Wed Mar 14 17:51:08 2012
    SQL> ALTER DISKGROUP FLASHREC MOUNT /* asm agent *//* {0:5:72} */
    NOTE: cache registered group FLASHREC number=1 incarn=0xfa7fb7ea
    SQL> ALTER DISKGROUP FLASHREC MOUNT /* asm agent *//* {0:5:72} */
    NOTE: cache registered group FLASHREC number=1 incarn=0xfa7fb7ea
    NOTE: cache began mount (first) of group FLASHREC number=1 incarn=0xfa7fb7ea
    Errors in file /u01/app/oracle/diag/asm/+asm/+ASM/trace/+ASM_ora_27411.trc:
    ORA-27061: waiting for async I/Os failed
    Linux-x86_64 Error: 5: Input/output error
    Additional information: -1
    Additional information: 4096
    WARNING: Read Failed. group:0 disk:1 AU:0 offset:0 size:4096
    Errors in file /u01/app/oracle/diag/asm/+asm/+ASM/trace/+ASM_ora_27411.trc:
    ORA-27061: waiting for async I/Os failed
    Linux-x86_64 Error: 5: Input/output error
    Additional information: -1
    Additional information: 4096
    WARNING: Read Failed. group:0 disk:0 AU:0 offset:0 size:4096
    Errors in file /u01/app/oracle/diag/asm/+asm/+ASM/trace/+ASM_ora_27411.trc:
    ORA-27061: waiting for async I/Os failed
    Linux-x86_64 Error: 5: Input/output error
    Additional information: -1
    Additional information: 4096
    WARNING: Read Failed. group:0 disk:2 AU:0 offset:0 size:4096
    ERROR: no read quorum in group: required 2, found 0 disks
    NOTE: cache dismounting (clean) group 1/0xFA7FB7EA (FLASHREC)
    NOTE: messaging CKPT to quiesce pins Unix process pid: 27411, image: oracle@dot-oraprd04 (TNS V1-V3)
    NOTE: dbwr not being msg'd to dismount
    NOTE: lgwr not being msg'd to dismount
    NOTE: cache dismounted group 1/0xFA7FB7EA (FLASHREC)
    NOTE: cache ending mount (fail) of group FLASHREC number=1 incarn=0xfa7fb7ea
    NOTE: cache deleting context for group FLASHREC 1/0xfa7fb7ea
    Wed Mar 14 17:51:08 2012
    ERROR: -9(Error 27061, OS Error (Linux-x86_64 Error: 5: Input/output error
    Additional information: -1
    Additional information: 512)
    ERROR: -9(Error 27061, OS Error (Linux-x86_64 Error: 5: Input/output error
    Additional information: -1
    Additional information: 512)
    ERROR: -9(Error 27061, OS Error (Linux-x86_64 Error: 5: Input/output error
    Additional information: -1
    Additional information: 512)
    GMON dismounting group 1 at 20 for pid 18, osid 27411
    ERROR: diskgroup FLASHREC was not mounted
    ORA-15032: not all alterations performed
    ORA-15017: diskgroup "FLASHREC" cannot be mounted
    ORA-15063: ASM discovered an insufficient number of disks for diskgroup "FLASHREC"
    ORA-15080: synchronous I/O operation to a disk failed
    ORA-15080: synchronous I/O operation to a disk failed
    ORA-15080: synchronous I/O operation to a disk failed
    ERROR: ALTER DISKGROUP FLASHREC MOUNT /* asm agent *//* {0:5:72} */
    ASM Health Checker found 1 new failures

    Thanks Dan. The server had been built recently but not yet released. Testing was uncoordinated but no permanent harm was done. Great group of people and NetApp is new to them. Glad they are testing things like controller failover. But it sounds like ASM may be a victim and not the culprit. If ASM remains suspect during continued failover testing I'll consider opening an SR like you suggested.
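    For reference, a minimal sketch of the checks and remount that correspond to what was done here via ASMCMD and the instance bounce (disk group names taken from the logs above; this assumes the I/O path has already recovered):
    select name, state from v$asm_diskgroup;
    alter diskgroup DATAFILE mount;
    alter diskgroup FLASHREC mount;
    alter diskgroup TEMPFILE mount;
    As for timeout knobs, the disk_repair_time disk group attribute only governs how long ASM keeps an offlined disk in a normal or high redundancy group before dropping it, so it would not have helped here if these are single-disk, external redundancy groups, as the logs suggest.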

  • SQL UPDATE Statement Explanation (urgent please...)

    Hi,
    Could anyone explain to me what this statement will do:
    update KPIR KR
    set ( CA ) = (
    select sum(sl.RRC_CONN_STP_ATT) CA
    from PV_WCEL_SERVICE_LEVEL sl, UTP_MO um
    where sl.GLOBAL_ID = um.CO_GID
    group by sl.STARTTIME, um.OBJECT_INSTANCE
    I could not understand how the mapping between the two tables works ("um" is not used in the select list; it appears only in the "where" and "group by" clauses). What does this mean?
    The actual problem is that when I executed the above SQL statement, it used to take only around 3 minutes to run. But after we upgraded from Oracle 8 to Oracle 9, this statement takes around 3 hours.
    Does anyone have any idea what the problem is?

    Below is the plan:
    KPIR table:
    RNC_ID (str), PST ('YYYYMMDD_HH24'), CA (Number)
    1, 20061105_13, 0
    1, 20061105_14, 0
    1, 20061105_15, 0
    2, 20061105_13, 0
    2, 20061105_14, 0
    2, 20061105_15, 0
    3, 20061105_13, 0
    3, 20061105_14, 0
    3, 20061105_15, 0
    4, 20061105_13, 0
    4, 20061105_14, 0
    4, 20061105_15, 0
    PV_WCEL_SERVICE_LEVEL table:
    GLOBAL_ID (str), STARTTIME ('YYYYMMDD_HH24'), RRC_CONN_STP_ATT (Number)
    A1, 20061101_00, 9
    A1, 20061101_01, 4
    A1, 20061101_23, 3
    A1, 20061130_23, 4
    A2, 20061101_00, 3
    A2, 20061101_01, 4
    A2, 20061101_23, 1
    A2, 20061130_23, 5
    UTP_MO table;
    RNC_ID (str), GLOBAL_ID (str), OBJECT_INSTANCE (str)
    1, A1, <A1_NAME>
    1, A2, <A2_NAME>
    1, A9, <A9_NAME>
    2, B1, <B1_NAME>
    2, B2, <B2_NAME>
    2, B9, <B9_NAME>
    3, C1, <C1_NAME>
    3, C2, <C2_NAME>
    3, C9, <C9_NAME>
    4, D1, <D1_NAME>
    4, D2, <D2_NAME>
    4, D9, <D9_NAME>
    Now, I want to update the value of CA in table KPIR:
    For instance, for RNC_ID='1' and PST='20061105_13', CA should equal the sum of RRC_CONN_STP_ATT for all GLOBAL_IDs under RNC_ID='1' at STARTTIME '20061105_13'.
    So, will this SQL UPDATE statement do it?
    update KPIR kr
    set ( CA ) = (
    select sum(sl.RRC_CONN_STP_ATT) CA
    from PV_WCEL_SERVICE_LEVEL sl, UTP_MO um
    where sl.GLOBAL_ID = um.GLOBAL_ID and kr.RNC_ID = um.RNC_ID and kr.PST = sl.STARTTIME
    group by sl.STARTTIME, um.OBJECT_INSTANCE
    And if so, why is it taking around 3 hours to run? This issue appeared after upgrading from Oracle 8 to 9.
    Really appreciate your help and thanks in advance.
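    Not an answer to the 3-hour regression, but a sketch of the correlated form that matches the stated intent (one summed value per RNC_ID/PST row); note that the GROUP BY on OBJECT_INSTANCE is dropped, since grouping by it can make the subquery return more than one row per KPIR row:
    update KPIR kr
    set CA = (
        select sum(sl.RRC_CONN_STP_ATT)
        from   PV_WCEL_SERVICE_LEVEL sl, UTP_MO um
        where  sl.GLOBAL_ID = um.GLOBAL_ID
        and    um.RNC_ID    = kr.RNC_ID
        and    sl.STARTTIME = kr.PST
    );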
