3rd voting disk on NFS share

Hey, as described in "Oracle Clusterware 11g Release 2 (11.2) – Using standard NFS to support a third voting file for extended cluster configurations",
I added the NFS file to my OCR diskgroup in ASM as a quorum device.
/crs/bin/crsctl query css votedisk
## STATE File Universal Id File Name Disk group
1. ONLINE 0eb6aaa85e5c4fd6bf9d0481cfd7d517 (ORCL:VOTE1) [OCR]
2. ONLINE 5a66449fbada4f34bf0c3be4574f03bc (ORCL:VOTE2) [OCR]
3. ONLINE 76ad295ebc054f07bfea124fb08da432 (ORCL:VOTE3) [OCR]
Located 3 voting disk(s).
asmca also shows ORCL:VOTE3 and /voting_disk/vote3_nfs.
Is my system now correctly configured, or do I still need to change my voting disk configuration?
crsctl replace..... ?
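For comparison, here is a sketch of how the quorum placement can be double-checked from ASM (the v$asm_disk query is generic 11.2; disk and diskgroup names are taken from the output above). If the NFS disk was added with the QUORUM keyword and crsctl already lists three ONLINE voting files, no crsctl replace should be needed:

```shell
# Voting files as seen by CSS
/crs/bin/crsctl query css votedisk

# Failure-group type of each disk in the OCR diskgroup; the NFS disk
# should report FAILGROUP_TYPE = 'QUORUM' (run as SYSASM)
sqlplus -s / as sysasm <<'EOF'
SELECT d.name, d.failgroup, d.failgroup_type
FROM   v$asm_disk d JOIN v$asm_diskgroup g ON g.group_number = d.group_number
WHERE  g.name = 'OCR';
EOF
```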
Chris

Hey Levi, I set up a test environment for my problem.
I added an ocrmirror to my OCR diskgroup (containing 4 disks).
My ocrmirror is diskgroup DATA (22 disks with 2 failgroups); adding the ocrmirror was successful, and ocrcheck showed both diskgroups as available.
Then I removed 2 disks from storage 1, so the OCR diskgroup had only 2 disks left. Right away the logfiles showed errors: OCR inaccessible, and +DATA can't be used due to the error below.
So what am I doing wrong while adding the ocrmirror? Which command should I use to replace the +OCR diskgroup?
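Not a definitive answer, but a sketch of the usual ocrconfig sequence for retiring a broken OCR location (run as root; "+NEWOCR" is a hypothetical replacement diskgroup, not from the post):

```shell
# With +DATA still registered as a mirror and valid, drop the broken location
ocrcheck                 # confirm which locations are registered and their state
ocrconfig -delete +OCR   # remove the inaccessible OCR location
# Later, once a replacement diskgroup exists (name is hypothetical):
ocrconfig -add +NEWOCR
ocrcheck                 # verify the locations are reported as in sync
```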
[client(27318)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /crs/log/rac1/client/ocrcheck_27318.log.
2012-01-06 22:52:57.393
[client(27318)]CRS-1011:OCR cannot determine that the OCR content contains the latest updates. Details in /crs/log/rac1/client/ocrcheck_27318.log.
2012-01-06 22:53:33.859
2012-01-06 23:00:17.272: [  OCRRAW][3674175232]proprior: Header check from OCR device 0 offset 6574080 failed (26).
2012-01-06 23:00:17.272: [  OCRRAW][3674175232]proprior: Retrying buffer read from another mirror for disk group [+OCR] for block at offset [6574080]
2012-01-06 23:00:17.273: [  OCRASM][3674175232]proprasmres: Total 0 mirrors detected
2012-01-06 23:00:17.273: [  OCRASM][3674175232]proprasmres: Only 1 mirror found in this disk group.
2012-01-06 23:00:17.273: [  OCRASM][3674175232]proprasmres: Need to invoke checkdg. Mirror #0 has an invalid buffer.
2012-01-06 23:00:17.300: [  OCRASM][3674175232]ASM Error Stack : ORA-27091: unable to queue I/O
ORA-15078: ASM diskgroup was forcibly dismounted
ORA-06512: at line 4
OCRCHECK output:
Oracle Database 11g Clusterware Release 11.2.0.3.0 - Production Copyright 1996, 2011 Oracle. All rights reserved.
2012-01-06 22:55:25.572: [OCRCHECK][903825152]ocrcheck starts...
2012-01-06 22:55:25.640: [  OCRASM][903825152]proprasmo: kgfoCheckMount return [6]. Cannot proceed with dirty open.
2012-01-06 22:55:25.640: [  OCRASM][903825152]proprasmo: Error in open/create file in dg [OCR]
[  OCRASM][903825152]SLOS : SLOS: cat=6, opn=kgfo, dep=0, loc=kgfoCkMt03
2012-01-06 22:55:25.640: [  OCRASM][903825152]ASM Error Stack :
2012-01-06 22:55:25.667: [  OCRASM][903825152]proprasmo: kgfoCheckMount returned [6]
2012-01-06 22:55:25.667: [  OCRASM][903825152]proprasmo: The ASM disk group OCR is not found or not mounted
2012-01-06 22:55:25.668: [  OCRRAW][903825152]proprioo: Failed to open [+OCR]. Returned proprasmo() with [26]. Marking location as UNAVAILABLE.
2012-01-06 22:55:25.707: [  OCRRAW][903825152]proprioini: disk 1 (+DATA) does not have enough votes (1,2)
2012-01-06 22:55:25.707: [  OCRRAW][903825152]proprioo: Not enought quorum to open the disks (26)
2012-01-06 22:55:25.707: [  OCRRAW][903825152]proprinit: Could not open raw device
2012-01-06 22:55:25.707: [  OCRASM][903825152]proprasmcl: asmhandle is NULL
2012-01-06 22:55:25.709: [ default][903825152]a_init:7!: Backend init unsuccessful : [26]
2012-01-06 22:55:25.709: [OCRCHECK][903825152]initreboot: Failed to initialize OCR in REBOOT level. Retval:[26] Error:[PROC-26: Error while accessing the physical storage

Similar Messages

  • Oracle RAC 11g. 3rd Voting disk mounting automatically without fstab

    Hi,
    We have a 2-node Oracle 11gR2 extended RAC database on Red Hat Linux. We have a voting disk on each node and a 3rd voting disk at a separate site; the voting disk directories are NFS-mounted. We have noticed that there is no entry in the fstab (on either RAC node) for the 3rd voting disk location, yet the directory is still mounted automatically on each RAC node at startup.
    Can Oracle manage mounting the disks itself without using fstab? Oracle recommends using the fstab for mounting the directories, and I have found nothing about Oracle mounting the directories any way other than via fstab.
    I am completely lost here. We need to do some configuration on the 3rd voting disk location, and I need to find out how the disk is being mounted on the RAC nodes. Any help on this would be greatly appreciated. Thanks.
    Rgs,
    Rob

    Did you check the rc.local file? Perhaps the mount entries are in there.
    HTH
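    If fstab and rc.local both come up empty, here is a sketch of other places the mount could be configured on a Linux RAC node (paths are common defaults, not specific to this system):

```shell
grep -i nfs /etc/fstab                      # classic static mounts
cat /etc/rc.local                           # site-local boot scripts
grep -ri voting /etc/init.d/ /etc/rc*.d/    # custom init scripts
cat /etc/auto.master                        # autofs-managed mounts
mount -t nfs                                # what is mounted right now, and from where
```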

  • Help needed: How to move OCR and Voting Disk

    Hi all experts...
    I am Mahesh, working in DXB as a DBA. We have several clients with RAC installed. One major client wants their OCR and voting disks moved from the existing SAN to a new SAN which will be configured. Only the OCR and voting disks need to be moved.
    Details:
    =====
    Oracle Clusterware,ASM and DB version 10.2.0.3.0
    Windows 2003 Server Enterprise Edition
    2-Node RAC
    OCR and Voting disks are on RAW devices (not on CFS or NFS)
    I have read the Clusterware administration document and also MetaLink Note 428681.1 for the same.
    But it does not contain information on how to move them to a new raw device on a new SAN. (We need to keep them on raw devices.)
    How do I identify the new disk path from the Windows operating system? Only then can we move the OCR and VD, right?
    Help me for the same please...
    Thanks & regards,
    Mahesh.

    Hi Mahesh,
    Could you list the steps you did after you created the link on the new SAN using GUIObjectManager.exe?
    I tried on my VM and also in a customer's test environment, but I was only successful using the following steps:
    1. Shut down the ASM service, database service, listener service, CRS, EVM and CSS.
    2. Keep the Object Manager Service running.
    3. From both nodes, issue the following commands:
    ocrconfig -repair \\.\OCRPRIMARY
    ocrconfig -repair \\.\OCRMIRROR
    4. According to MetaLink Note 428681.1, I should then run:
    ocrconfig -overwrite
    However, I was not successful because that command returns an error (PROT-1: Failed to initialize ocr config). Therefore, I ran the following command instead:
    ocrconfig -restore D:\oracle\1020\CRS\cdata\crs\backup00.ocr
    5. After that command, ocrcheck showed that the OCR primary and mirror had been moved to the new partition.
    6. Start all the services, beginning with CSS, EVM, CRS, ASM, the listener and the database service.
    Although what I experienced seems a bit weird and is not the same as what is written in MetaLink, the RAC has been running fine until now.
    What I would like to know is your experience on this. May be you can share it here.
    Thank you,
    Adhika
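    For reference, the sequence above condensed into one sketch (Windows paths and service names are from the post; MetaLink Note 428681.1 remains the authoritative source):

```shell
# 1. Stop database, listener, ASM, CRS, EVM and CSS services;
#    leave only the Object Manager Service running
# 2. On BOTH nodes, repoint the OCR configuration at the new links:
ocrconfig -repair \\.\OCRPRIMARY
ocrconfig -repair \\.\OCRMIRROR
# 3. The note suggests "ocrconfig -overwrite"; if that fails with PROT-1,
#    restoring a recent automatic backup achieved the same result here:
ocrconfig -restore D:\oracle\1020\CRS\cdata\crs\backup00.ocr
# 4. Verify the new locations, then start CSS, EVM, CRS, ASM,
#    the listener and the database services, in that order
ocrcheck
```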

  • Mounting an NFS share

    Hi,
    I'm attempting to mount an NFS share and having no success.  Regardless of the settings I try, the Finder still denies me access to the NFS share, even though it mounts fine.  I seem to have no read or write access to the share.
    I've tried exporting the share (in /etc/exports on the server machine) in two ways: with
    /home/REDACTED/share     REDACTED/28(rw,sync,all_squash)
    And
    /home/REDACTED/share     REDACTED/28(rw,sync,insecure,all_squash,anonuid=1001,anongid=1001)
    In the second example, the anonuid and anongid are those of the shared folder's owner and group.  I added "insecure" because a how-to on the web claims that OS X won't work with any shares that don't have this specified.
    With either of these settings applied, Disk Utility verifies the existence of the share and mounts it. However, I can neither read files within, nor add files to, the shared folder. The error produced is:
    The folder “share” can’t be opened because you don’t have permission to see its contents.
    I have tried the following Advanced Mount Parameters, each to no effect:
    nodev resvport nolocks locallocks intr soft wsize=32768 rsize=3276
    nodev nosuid resvport nolocks locallocks intr soft wsize=32768 rsize=3276 ro
    nodev,nosuid,resvport,nolocks,locallocks,intr,soft,wsize=32768,rsize=3276
    nodev,nosuid,resvport,nolocks,locallocks,intr,soft,wsize=32768,rsize=3276 ro
    resvport,nolocks,locallocks,intr,soft,wsize=32768,rsize=3276
    -i,-s,-w=32768,-r32768
    -P
    I'd rather not employ SAMBA, and the Apple File Sharing package for my server's OS (Ubuntu 11.10) appears to be buggy at present. Besides, NFS would be a far neater solution.
    Any helpful advice?
    S.

    Scotch_Brawth wrote:
    I've simply come at NFS as being the most appropriate file-sharing implementation for my needs - it supports automatic mounting at boot using tech native to both my Linux OS and OS X. 
    That is part of the problem. NFS is designed for environments where all servers are mounted (by root) at boot time and permissions are managed via NIS or LDAP. That is the default setting. If you are using something else, it requires some hacking.
    I've had SAMBA working in the past, but I guess a certain air of contamination creeps in when using a Windows protocol to allow interaction between two UNIXy systems.
    Plus, you would now have two different 3rd party reverse-engineered reimplementations of a foreign protocol.
    AFP would be great, but despite receiving support on the Ubuntu forums and IRC, I failed to get it to work - it may be bugged; which would not be surprising, as 11.10 (with Kernel 3) has proved problematic in several other ways.
    Perhaps Ubuntu is targeted more towards desktop than server usage. About the time I last played with NFS, I also played with Netatalk, with disastrous results. Supposedly Netatalk is better now; its authors would be more than happy to sell you a support package.
    I did use the default settings; they failed to produce a working NFS share. I then applied the variety of settings recommended by apparently knowledgeable people, still with no success. I have read that UID/GID settings are an important aspect of NFS, but the issue in this case (as far as I understand it) is that all UIDs/GIDs below 1000 are privileged on Ubuntu 11.10, whilst on OS X it is those below 501. So the choice is either to give the shared folder's owner a privileged UID/GID pair, or to change the UID/GID of my Mac users to meet the NFS server's needs - not something I'm happy to do for so small a gain.
    You can create a throwaway account on the Mac and just reset the GID/UID to values equal to an account on the Linux machine. That would establish that it is properly working in the default configuration. Then you could edit /etc/idmapd.conf.
    For that reason, I use the "all_squash" option, because the share in question is not for anything remotely critical and the data to be transferred and stored is both worthless and transitory.
    Since all_squash maps everything to nobody, you would have to hack up the permissions on the server to make everything world writeable. I think it will work with /etc/idmapd.conf and without all_squash.
    I know nothing about NFS other than that its capabilities and integration meet my needs.
    Just what are your needs? If the data is worthless and not critical then Netatalk might be the best option. If you can't get that to work, you could try MacFUSE on the Mac side and mount over sshfs. That is normally what I do. It isn't all that reliable, but you don't seem to require that.
    What information I did find regarding OS X and NFS was that there were peculiarities that required certain settings to be present on the server and the client respectively - for example, OS X apparently requires "insecure" to be set as an option, or it simply won't connect properly.  I don't know why, but I have no choice to trust to the advice of others in this case, until I have sufficient grasp to take care of the whole thing myself.
    This goes back to the expectation that NFS is always connected and mounted by root. Apple sells very few desktop machines anymore, so it assumes a different, user-centered environment. You could use "insecure" on the server side to allow connections from "insecure" ports (> 1024) that regular users can connect with via the Finder. Alternatively, you could use the Terminal with "sudo mount_nfs -o resvport" to tell the Mac to connect as root via a secure port instead.
    If you genuinely think you're able to help, then I'm happy to hear your advice.  What would you recommend?
    I appreciate your meeting me halfway. I think all you really need is /etc/idmapd.conf without all_squash. Then you could set up AutoFS and use NFS in a modern environment without even bothering to mount it manually.
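    A condensed sketch of the two approaches discussed (server name and paths are placeholders, not from the thread):

```shell
# Option 1: mount as root over a reserved port, so the Linux export
# does not need the "insecure" flag
sudo mount_nfs -o resvport server:/home/user/share /Volumes/share

# Option 2: keep the export without all_squash, map identities via
# /etc/idmapd.conf on both ends, then a plain NFS mount should work
sudo mount -t nfs -o resvport server:/home/user/share /Volumes/share
```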

  • Issue with special character in NFS shares

    Hello,
    I run an Ubuntu 10.04 server for serving my files via NFS. I have no problem mounting the NFS shares on my iMac (OS X 10.6.3). I can access all files, even the ones containing special characters in their names. I can copy, create and move them with no problem, both in the Finder and in the Terminal.
    The problem comes when I try to synchronize or back up files using backup tools. Files with accents in their names (á, é, ã, ç, etc.) are simply ignored by the backup/sync tools I am using. I could reproduce the problem in different programs such as "ChronoSync" and "File Synchronization". Accents are a must-have in my network.
    I saw in other posts in this forum that there might be some incompatibilities with Unicode and special characters when using Mac OS X as an NFS client for a Linux NFS server. What strikes me is that Finder and Terminal work just fine.
    Any clue?
    Some details of my NFS configuration:
    /etc/exports on my server:
    /mnt/disco01 10.209.1.0/24(rw,sync,nosubtree_check,anonuid=1000,anongid=1000,allsquash)
    /mnt/disco02 10.209.1.0/24(rw,sync,nosubtree_check,anonuid=1000,anongid=1000,allsquash)
    /mnt/disco03 10.209.1.0/24(rw,sync,nosubtree_check,anonuid=1000,anongid=1000,allsquash)
    /mnt/disco04 10.209.1.0/24(rw,sync,nosubtree_check,anonuid=1000,anongid=1000,allsquash)
    On my iMac I mount them like this using Disk Utility's NFS tool:
    URL: nfs://servidor/mnt/disco01
    Mount point: /Network/disco01
    Options: -P nosuid
    Thanks for any help you can give me.

    Well, I dug a little further and found a solution, although it makes no sense to me.
    So, the scenario is:
    server URL: nfs://server/mnt/disco01
    mount point: /Network/disco01
    I was trying to synchronize a folder from the server called /Network/disco01/Música (meaning "music" in Portuguese) to a local folder /Users/shared/Música. I would use ChronoSync to keep the folders in sync, mirroring the NFS share to the local folder. ChronoSync was ignoring the folder.
    After not being able to copy using ChronoSync, I tried to copy the folder via the Finder. I could browse the NFS share in the Finder, but not copy the files to the local folder.
    While trying to copy, I used Cmd-C, then Cmd-V in the /Users/shared/ folder. I noticed that the Finder first named the folder "Music" and then, some instants later, refused to copy. Very strange.
    I then renamed the "Música" folder on the NFS share to "Músicas", and everything worked all right, even ChronoSync. It turns out that Música is the name of one of the system folders OS X creates in the user's home folder; it is actually a translation of the underlying name Music.
    Why it was interfering with the copy of totally unrelated NFS and local folders I really cannot understand.
    More interestingly, I afterwards renamed the NFS folder back to Música (without the s) and it kept working.
    I am happy now, but I have no clue as to why the problem was there, or why it got solved.
    Hope this helps somebody in a similar situation.

  • Root.sh hangs at formatting voting disk on OEL32 11gR2 RAC with OCFS2

    Hi,
    I am trying to bring up Oracle 11gR2 RAC on Enterprise Linux x86 (32-bit) version 5.6, using OCFS2 1.4 as my cluster file system. Everything went fine until root.sh, which hangs with the message "now formatting voting disk <vdsk path>".
    The logs are mentioned below.
    Checked the alert log:
    {quote}
    cssd(9506)]CRS-1601:CSSD Reconfiguration complete. Active nodes are oel32rac1 .
    2011-08-04 15:58:55.356
    [ctssd(9552)]CRS-2407:The new Cluster Time Synchronization Service reference node is host oel32rac1.
    2011-08-04 15:58:55.917
    [ctssd(9552)]CRS-2401:The Cluster Time Synchronization Service started on host oel32rac1.
    2011-08-04 15:58:56.213
    [client(9567)]CRS-1006:The OCR location /u02/storage/ocr is inaccessible. Details in /u01/app/11.2.0/grid/log/oel32rac1/client/ocrconfig_9567.log.
    2011-08-04 15:58:56.365
    [client(9567)]CRS-1001:The OCR was formatted using version 3.
    2011-08-04 15:58:59.977
    [crsd(9579)]CRS-1012:The OCR service started on node oel32rac1.
    {quote}
    crsctl.log:
    {quote}
    2011-08-04 15:59:00.246: [  CRSCTL][3046184656]crsctl_vformat: obtain cssmode 1
    2011-08-04 15:59:00.247: [  CRSCTL][3046184656]crsctl_vformat: obtain VFListSZ 0
    2011-08-04 15:59:00.258: [  CRSCTL][3046184656]crsctl_vformat: Fails to obtain backuped Lease from CSSD with error code 16
    2011-08-04 15:59:01.857: [  CRSCTL][3046184656]crsctl_vformat: to do clsscfg fmt with lease sz 0
    2011-08-04 15:59:01.910: [   SKGFD][3046184656]NOTE: No asm libraries found in the system
    2011-08-04 15:59:01.910: [    CLSF][3046184656]Allocated CLSF context
    2011-08-04 15:59:01.910: [   SKGFD][3046184656]Discovery with str:/u02/storage/vdsk:
    2011-08-04 15:59:01.910: [   SKGFD][3046184656]UFS discovery with :/u02/storage/vdsk:
    2011-08-04 15:59:01.910: [   SKGFD][3046184656]Fetching UFS disk :/u02/storage/vdsk:
    2011-08-04 15:59:01.911: [   SKGFD][3046184656]OSS discovery with :/u02/storage/vdsk:
    2011-08-04 15:59:01.911: [   SKGFD][3046184656]Handle 0xa6c19f8 from lib :UFS:: for disk :/u02/storage/vdsk:
    2011-08-04 17:10:37.522: [   SKGFD][3046184656]WARNING:io_getevents timed out 618 sec
    2011-08-04 17:10:37.526: [   SKGFD][3046184656]WARNING:io_getevents timed out 618 sec
    {quote}
    ocrconfig log:
    {quote}
    2011-08-04 15:58:56.214: [  OCRRAW][3046991552]ibctx: Failed to read the whole bootblock. Assumes invalid format.
    2011-08-04 15:58:56.214: [  OCRRAW][3046991552]proprinit:problem reading the bootblock or superbloc 22
    2011-08-04 15:58:56.214: [  OCROSD][3046991552]utread:3: Problem reading buffer 8532000 buflen 4096 retval 0 phy_offset 102400 retry 0
    2011-08-04 15:58:56.214: [  OCROSD][3046991552]utread:3: Problem reading buffer 8532000 buflen 4096 retval 0 phy_offset 102400 retry 1
    2011-08-04 15:58:56.214: [  OCROSD][3046991552]utread:3: Problem reading buffer 8532000 buflen 4096 retval 0 phy_offset 102400 retry 2
    2011-08-04 15:58:56.214: [  OCROSD][3046991552]utread:3: Problem reading buffer 8532000 buflen 4096 retval 0 phy_offset 102400 retry 3
    2011-08-04 15:58:56.214: [  OCROSD][3046991552]utread:3: Problem reading buffer 8532000 buflen 4096 retval 0 phy_offset 102400 retry 4
    2011-08-04 15:58:56.214: [  OCROSD][3046991552]utread:3: Problem reading buffer 8532000 buflen 4096 retval 0 phy_offset 102400 retry 5
    2011-08-04 15:58:56.214: [  OCRRAW][3046991552]propriogid:1_1: Failed to read the whole bootblock. Assumes invalid format.
    2011-08-04 15:58:56.365: [  OCRRAW][3046991552]iniconfig:No 92 configuration
    2011-08-04 15:58:56.365: [  OCRAPI][3046991552]a_init:6a: Backend init successful
    2011-08-04 15:58:56.390: [ OCRCONF][3046991552]Initialized DATABASE keys
    2011-08-04 15:58:56.564: [ OCRCONF][3046991552]csetskgfrblock0: output from clsmft: [clsfmt: successfully initialized file /u02/storage/ocr
    2011-08-04 15:58:56.577: [ OCRCONF][3046991552]Successfully set skgfr block 0
    2011-08-04 15:58:56.578: [ OCRCONF][3046991552]Exiting [status=success]...
    {quote}
    ocssd.log:
    {quote}
    2011-08-04 15:59:00.140: [    CSSD][2963602320]clssgmFreeRPCIndex: freeing rpc 23
    2011-08-04 15:59:00.228: [    CSSD][2996054928]clssgmExecuteClientRequest: CONFIG recvd from proc 6 (0xb35f7438)
    2011-08-04 15:59:00.228: [    CSSD][2996054928]clssgmConfig: type(1)
    2011-08-04 15:59:00.234: [    CSSD][2996054928]clssgmExecuteClientRequest: VOTEDISKQUERY recvd from proc 6 (0xb35f7438)
    2011-08-04 15:59:00.247: [    CSSD][2996054928]clssgmExecuteClientRequest: CONFIG recvd from proc 6 (0xb35f7438)
    2011-08-04 15:59:00.247: [    CSSD][2996054928]clssgmConfig: type(1)
    2011-08-04 15:59:03.039: [    CSSD][2942622608]clssnmSendingThread: sending status msg to all nodes
    2011-08-04 15:59:03.039: [    CSSD][2942622608]clssnmSendingThread: sent 4 status msgs to all nodes
    2011-08-04 15:59:07.047: [    CSSD][2942622608]clssnmSendingThread: sending status msg to all nodes
    2011-08-04 15:59:07.047: [    CSSD][2942622608]clssnmSendingThread: sent 4 status msgs to all nodes
    2011-08-04 15:59:11.057: [    CSSD][2942622608]clssnmSendingThread: sending status msg to all nodes
    2011-08-04 15:59:11.057: [    CSSD][2942622608]clssnmSendingThread: sent 4 status msgs to all nodes
    2011-08-04 15:59:16.068: [    CSSD][2942622608]clssnmSendingThread: sending status msg to all nodes
    2011-08-04 15:59:16.068: [    CSSD][2942622608]clssnmSendingThread: sent 5 status msgs to all nodes
    2011-08-04 15:59:21.079: [    CSSD][2942622608]clssnmSendingThread: sending status msg to all nodes
    2011-08-04 15:59:21.079: [    CSSD][2942622608]clssnmSendingThread: sent 5 status msgs to all nodes
    {quote}
    Any help here is appreciated.
    Regards
    Amith R
    Edited by: Mithzz on Aug 4, 2011 4:58 AM

    I did an lsof on the vdisk and it showed:
    >
    COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME
    crsctl.bi 9589 root 26u REG 8,17 21004288 102980 /u02/storage/vdsk
    [root@oel32rac1 ~]# ps -ef |grep crsctl
    root 9589 7583 0 15:58 pts/1 00:00:00 [crsctl.bin] <defunct>
    >
    Could this be a permission issue?
    --Amith
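    Not a definitive answer, but a sketch of checks that would narrow down a permission problem (paths are from the logs above; the ownership and mount-option expectations are general guidance, not confirmed for this install):

```shell
# Who owns the voting file and its directory? root.sh runs crsctl.bin as root,
# but the file is normally created for the Grid Infrastructure owner
ls -l /u02/storage /u02/storage/vdsk

# Is the OCFS2 volume mounted with options suitable for Clusterware files?
# (older OCFS2 guidance recommends datavolume,nointr for OCR/voting volumes;
# check the platform notes for your exact release)
mount | grep ocfs2

# Is I/O to the file healthy at all? The io_getevents timeouts in the log
# point at stalled async I/O rather than a simple permission error
dd if=/u02/storage/vdsk of=/dev/null bs=4k count=1 iflag=direct
```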

  • Shared file system recommended for OCR and voting disk in 10g R2

    Dear Friends,
    For Oracle 10g R2 (10.2.0.5) 64-bit, which shared file system is recommended for the OCR and voting disk (NFS / raw devices / OCFS2)?
    For datafiles and the FRA I plan to use ASM.
    Regards,
    DB

    Hi,
    If you're using Standard Edition then you have no choice but raw devices:
    http://docs.oracle.com/cd/B19306_01/license.102/b14199/options.htm#CJAHAGJE
    For OCFS2 you need to take extra care:
    Heartbeat/Voting/Quorum Related Timeout Configuration for Linux, OCFS2, RAC Stack to Avoid Unnecessary Node Fencing, Panic and Reboot [ID 395878.1]

  • Pcnfsd authentication on an NFS share

    I have a G5 running 10.4.8 serving an NFS share. The one client is a Win95 box running NFS Maestro. It has worked before, but after rebuilding it I'm stuck. The client requires pcnfsd authentication, but inetd.conf states that (rpc.)pcnfsd is not yet implemented in OS X. NFS Manager has nothing as well. Is there a way to fix this?
    G5   Mac OS X (10.4.8)   Not OSX Server

    This is frustrating! I've managed to get that working, but I have a different issue now. Here is what I have done, in detail:
    Server: Mac OS X 10.6.8.
    The server was standalone; I then bound it to AD and promoted it to Open Directory Master without a Kerberos realm, as AD is what holds the accounts. That's how it should be, correct?
    Disk Utility: I mounted the NFS share, and in WGM I enabled file sharing on the NFS share via AFP; it now shows under WGM's Home tab as: afp://xserve.mydomain.com/homes
    - For the clients, I bound them to OD first for MCX and then to AD.
    Directory Utility: settings for both "client & server"
    *Create mobile account at login - false
    *Force local home directory on startup disk - false
    *Use UNC path - True, with AFP protocol.
    Server Admin:
    * AFP is enabled on the NFS share "homes" and its auto-mounted.
    * Open Directory Master:
    - LDAP Server is running
    - Password Server is running.
    - Kerberos is stopped.
    Workgroup Manager:
    * I selected the test user "adtest" and assigned the home folder which is:
    Home URL: afp://xserve.domain.com/homes/adtest
    Full Path: /Network/Servers/xserve.domain.com/homes/adtest
    and when I click Create Home Now, it does create the user home directory under the NFS share, which is auto-mounted.
    Active Directory Server:
    under the adtest user - profile tab, i see: \\xserve.domain.com\homes\adtest
    Problem:
    - When I try to log in with the adtest user from the client, I get the error message:
    "You are unable to log in to the user account "adtest" at this time - logging in to the account failed because an error occurred."
    Troubleshooting:
    1- Logged in with a local admin account and typed "id adtest" in Terminal; it shows all user attributes and groups, which means the machine is bound correctly to both AD & OD.
    2- When I change the user's home to the default "/Users", I can log in just fine with the adtest account. Does that look like a permissions issue?
    Thanks again for your help.

  • Startup of Clusterware with missing voting disk

    Hello,
    in our environment we have a 2 node cluster.
    The 2 nodes and 2 SAN storages are in different rooms.
    Voting files for Clusterware are in ASM.
    Additionally we have a third voting disk on an NFS server (configured as in this description: http://www.oracle.com/technetwork/products/clusterware/overview/grid-infra-thirdvoteonnfs-131158.pdf)
    The Quorum flag is on the disk that is on NFS.
    The diskgroup is with normal redundancy.
    Clusterware keeps running when one of the VDs gets lost (e.g. storage failure).
    So far so good.
    But when I have to restart Clusterware (e.g. reboot of a node) while the VD is still missing, then clusterware does not come up.
    I did not find an indication whether this is planned behaviour of Clusterware, or whether I missed a detail.
    From my point of view it should be possible to start Clusterware as long as the majority of VDs is available.
    Thanks.
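    For reference, "majority" here is strict: with n configured voting files, Clusterware needs floor(n/2) + 1 of them online. A tiny shell helper (illustrative only, not an Oracle tool) shows the arithmetic:

    ```shell
    # Minimum number of voting files that must remain online out of $1 configured.
    majority() {
      echo $(( $1 / 2 + 1 ))
    }

    majority 3   # prints 2: a 3-vote setup survives losing one voting file
    majority 5   # prints 3
    ```

    So two of three voting files are indeed enough for the running cluster; whether the stack can be restarted is a separate question about how the diskgroup gets mounted.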

    Hi,
    actually what you see is expected (especially in a stretched cluster environment, with 2 failgroups and 1 quorum failgroup).
    It has to do with how ASM handles a disk failure and does the mirroring (and the odd requirement that you need a third failgroup for the voting disks).
    Before looking at this special case, let's look at how ASM normally treats a diskgroup:
    A diskgroup can only be mounted in normal mode if all disks of the diskgroup are online. If a disk is missing, ASM will not allow you to mount the diskgroup "normally" until the error situation is resolved. If a disk is lost whose contents can be mirrored to other disks, ASM will be able to restore full redundancy and will allow you to mount the diskgroup. If that is not possible, ASM expects the user to tell it what to do: the administrator can issue an "alter diskgroup ... mount force" to tell ASM to mount with disks missing even though the required redundancy cannot be maintained. This then allows the administrator to correct the error (or replace failed disks/failgroups). Note that while ASM already has the diskgroup mounted, the loss of a failgroup will not result in a dismount of the diskgroup.
    The same holds true for the diskgroup containing the voting disks. So what you see (the cluster keeps running, but cannot restart) is much the same as for a normal diskgroup: if a disk is lost and its contents cannot be relocated (e.g. if the quorum failgroup fails, there is no further failgroup to relocate the third vote to), the cluster continues to run, but ASM will not be able to automatically remount the diskgroup in normal mode on restart.
    To bring the cluster back online, manual intervention is required. Start the cluster in exclusive mode:
    crsctl start crs -excl
    Then connect to ASM and issue:
    alter diskgroup <dgname> mount force
    Then resolve the error (e.g. add another disk in another failgroup, so that the data can be re-mirrored and the failed disk dropped).
    After that a normal startup will be possible again.
    Regards
    Sebastian
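    Put end to end, the recovery procedure Sebastian describes could be sketched as follows. This is a sketch only: the diskgroup name OCRVD, the failgroup name and the NFS disk path are placeholder assumptions, not values from the thread, and the commands must be run on a live cluster node with the appropriate privileges.

    ```shell
    # 1. Start the stack on one node in exclusive mode (does not need a voting-file majority).
    crsctl start crs -excl

    # 2. Force-mount the diskgroup holding the voting files despite the missing failgroup.
    sqlplus / as sysasm <<'EOF'
    alter diskgroup OCRVD mount force;
    EOF

    # 3. Restore redundancy, e.g. add a replacement quorum disk so the third vote can be relocated.
    #    (Failgroup and disk path are placeholders.)
    sqlplus / as sysasm <<'EOF'
    alter diskgroup OCRVD add quorum failgroup nfs_fg disk '/voting_disk/vote3_nfs';
    EOF

    # 4. Restart the stack normally.
    crsctl stop crs
    crsctl start crs
    ```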

  • Use NAS (NFS) share as storage? Can't make shares.

    Hi,
    Trying to create shares on network storage (a NAS) connected via NFS, but I noticed that the option is greyed out. Is it supposed to only use local disks or iSCSI for shares?

  • IOMeter hangs when running to a NFS share from Windows Storage Server 2012

    Hello, 
    I am trying to measure the performance of an NFS share served by Windows Storage Server 2012, using IOMeter running on Windows Server 2012. I can create the share on WSS2012, the Windows 2012 client sees the share, IOMeter sees the share, and I can start a run. But fairly quickly IOMeter gets an error and stops. After that, the NFS share on the client is no longer visible to IOMeter. This happens every time.
    I have used IOMeter against SMB shares a lot with no problems.
    Thanks in advance,
    BJ

    1) Can you use the NFS share with NFS clients normally? That is, is it only IOMeter that has issues, or do other apps have similar problems with, say, a normal copy to/from the NFS share?
    2) What error exactly pops up? Do you happen to have a screenshot?
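    One way to split the problem is to take IOMeter out of the picture and drive the same mount with a plain sequential copy first. On a POSIX-style client that sanity check might look like the sketch below (TARGET is a placeholder that defaults to the current directory, so the commands can also be tried locally before pointing them at the mount):

    ```shell
    # Point TARGET at the NFS mount under test; defaults to the current directory.
    TARGET="${TARGET:-.}"

    # Sequential 64 MB write, flushed to the server before dd exits.
    dd if=/dev/zero of="$TARGET/iotest.bin" bs=1M count=64 conv=fsync

    # Sequential read back.
    dd if="$TARGET/iotest.bin" of=/dev/null bs=1M

    rm -f "$TARGET/iotest.bin"
    ```

    If the plain dd run also stalls or errors, the fault lies in the NFS client/server pair rather than in IOMeter.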

  • Disk Utility & NFS Mounting

    So I'm trying to mount some NFS shares in Lion using Disk Utility, but the tool keeps timing out, unable to verify the URL. However, I'm able to mount the same shares "manually" using the 'mount' command in Terminal. I have to create the dummy directory first, but it works. I'd prefer to mount with Disk Utility, though, so the shares remount automatically.
    Any advice/ideas?

    pretty sure that only windows machines can write to ntfs filesystems

  • Adding NFS Share to Mountain Lion Server

    Alright, here goes.
    The company I work for has been using SL Server for years and wanted to test a possible upgrade to ML Server for NFSv4. I downloaded ML to a test machine, plus ML Server. I mounted the share via the server connector (nfs://blah blah..you get the idea); the volume that I want to share comes up on the desktop, but will not show up under the Server app. I also have access via POSIX permissions, but I need full access to ACLs as well.
    I'm trying to figure out how to access this NFS share so that I can share it from the server, but cannot seem to get it working. To those who know OS X servers I probably sound like a moron, but I'm just an I.T. guy tasked with setting up a test server even though I'm not a "server guy". Any information would be greatly appreciated.
    One more question. Would a migration from SL to ML server also bring along this volume? Thanks again.

    I upgraded today and had the same issue. I took following steps to fix my computer.
    Boot into Recovery Partition (Hold Option Button while booting)
    Open Terminal.
    Type resetpassword
    Select your hard drive
    Select the user account (Administrator)
    Enter a new password for the user
    Reenter password
    Save
    Restart
    Boot normally, log in as Administrator with the new password, and add "Admin" permission to your account.
    Restart
    Everything should be working as expected

  • How can a mount a NFS share exported from OpenBSD?

    Hello Apple Discussions:
    I've been experimenting with NFS in a mixed-OS environment, and have been successful exporting an NFS share with tigerserver and mounting it on both a PowerPC Linux system and a PowerPC OpenBSD system.
    Likewise, I can export an NFS share from the Linux PowerPC box and mount it on the OpenBSD box and on tigerserver, although the latter required using the options (ro,sync,insecure) in my exports file.
    However, when I export a share on the OpenBSD box, I can mount it on the Linux box, but not on tigerserver.
    I would like the OpenBSD box to export an NFS share securely, with read-write permissions, to tigerserver.
    After reading so many tutorials that it would take a page of links just to list them all, I am pulling my hair out. However, I have found one thread suggesting that perhaps what I'm trying to do is impossible:
    http://www.bsdforums.org/forums/showthread.php?t=54308
    There it is suggested that NFS won't work because tigerserver is not using UTF-8?
    I will say that I was somewhat alarmed that the only times I succeeded in mounting an NFS share exported from Linux onto tigerserver were when the "insecure" option was used in the /etc/exports file. There doesn't seem to be an equivalent of the Linux-style exports option "insecure" among the BSD-style options like -maproot=user:group1:group2.
    But I don't like using any options that say "insecure" anyway, so rather than trying to find out how to make OpenBSD "insecure", I would rather find out whether there is a way to get tigerserver using UTF-8, at least when mounting NFS shares, if this is indeed the issue.
    Here are the more technical details. I've created a user on all systems named "fives" with the user id 5555 and the group id 5555. I made the user a local user in the local NetInfo domain, but I've tried it with an LDAP user as well. The folders I wish to export, and the folders into which to mount them, are all owned by user fives and group fives, with permissions set to 0775. The IP addresses are OpenBSD=192.168.222.111, TigerServer=192.168.222.233, LinuxPPC=192.168.222.253. I've included the relevant NFS setup files and running processes below:
    ON THE OPENBSD BOX:
    #/etc/exports
    /fives -alldirs -network=192.168.222.0 -mask=255.255.255.0
    /exports/fives -mapall=fives:fives 192.168.222.233 192.168.222.253
    #/etc/hosts.deny
    ALL: ALL
    #/etc/hosts.allow
    ALL: 192.168.222.233 192.168.222.253
    #/etc/rc.conf.local
    portmap=YES
    lockd=YES
    nfs_server=YES
    #here's proof that the daemons are running on the OpenBSD box;
    rpcinfo -p localhost
    program vers proto port
    100000 2 tcp 111 portmapper
    100000 2 udp 111 portmapper
    100005 1 udp 863 mountd
    100005 3 udp 863 mountd
    100005 1 tcp 613 mountd
    100005 3 tcp 613 mountd
    100003 2 udp 2049 nfs
    100003 3 udp 2049 nfs
    100003 2 tcp 2049 nfs
    100003 3 tcp 2049 nfs
    100021 1 udp 895 nlockmgr
    100021 3 udp 895 nlockmgr
    100021 1 tcp 706 nlockmgr
    100021 3 tcp 706 nlockmgr
    # actually, I don't see statd, but haven't found the equivalent in OpenBSD. There's rpc.rstatd, and maybe it should be listed here, but there doesn't seem to be a way to launch it directly. This competes with the UTF-8 theory as an explanation for why it's not working.
    ON THE TIGER SERVER:
    # here's proof that tiger server sees the mounts:
    showmount -e 192.168.222.111
    Exports list on 192.168.222.111:
    /fives 192.168.222.0
    /exports/fives 192.168.222.233 192.168.222.253
    # here's the result of user fives' attempt at mounting a share:
    sudo mount -t nfs 192.168.222.111:/exports/fives /imports/fives
    mount_nfs: /imports/fives: Permission denied
    # yet user fives has no problem mounting same share on linuxppc box.
    What is different about OSX server? I thought it was supposed to speak NFS?
    ---argh... I'm steppin out for a pint.. Hopefully when I'm back it'll just work.
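    One hedged suggestion, based on a classic NFS interoperability gotcha rather than anything confirmed in this thread: BSD-derived NFS servers by default reject requests that do not originate from a reserved (<1024) source port, which is exactly the check that the Linux export option "insecure" disables. Mac OS X's mount_nfs does not use a reserved source port unless asked to, so the failing mount above may be worth retrying with the resvport option:

    ```shell
    # Ask mount_nfs to originate the request from a reserved port (hence the sudo).
    sudo mount -t nfs -o resvport 192.168.222.111:/exports/fives /imports/fives
    ```

    If that succeeds, it would also explain why the Linux export only worked once "insecure" was added, and it avoids loosening security on the server side.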

    One thing not mentioned is that if you decide on the multiple user approach, you can have your music folder in Shared Documents so you only store the tracks once.
    Each user is free to choose which of those tracks they want in their library.
    There is an Apple help article on multiple users.
    http://docs.info.apple.com/article.html?artnum=300432

  • Custom NFS share point directory showing up on all network machines

    Hi,
    I'm in the process of migrating our 10.4 PowerMac server to a Mac Pro (running 10.5). I've been trying to recreate our 10.4 server setup as much as possible and so far I've only come across one annoying issue.
    We have fink installed on the server and under our 10.4 setup the /sw directory was set up as an NFS automounted share point with a custom mount point of '/sw'. I.e. users logging into client machines saw a /sw directory and could work with that. This made it easier to add fink packages as I only needed to do this on one machine (the server). This setup worked very well under 10.4 and had been working stably for the last couple of years.
    As we now have (for another month or two at least) a mix of intel and Power PC machines, I don't want to share out the (intel) server version of fink to all clients. In Server Admin, I have chosen to set the NFS protocol options to specify the IP address of just one client (an intel machine). I am only using NFS to share this directory. The plan is to add more client IP addresses as we get more intel machines.
    This works for the one intel client machine. Logging in via the GUI or via ssh allows you to run programs located under the /sw directory. The problem is that a phantom /sw directory appears on all client machines, even though their IP addresses are not specified in Server Admin. The /sw directory has root/wheel permissions (for user/group) and attempting to list its contents returns 'Operation not permitted' (even with 'sudo ls /sw').
    If I use Directory Utility to remove the connection to the Directory server on our main server, the /sw directory becomes owned by root/admin and I can remove it (it appears empty). Reconnecting to the Directory server changes the permissions back to root/wheel. It is also worth noting that when I first installed fink on the server (in /sw), the act of making it a share point also changed the permissions on /sw to root/wheel, meaning that I couldn't access the fink programs I had only just installed (this forced me to reinstall fink in /Volumes/Data/fink).
    Has anyone else noticed this behavior? It almost seems as if Server Admin is not honoring the list of client IP addresses specified as targets. I had planned to install fink locally on the PowerPC clients until we upgrade them to intel machines. However, I would then also have to install fink somewhere other than /sw, as I can't write to that directory. I would presume that this behavior would occur with any NFS share point that automounts to a custom mount point on a client. Can anyone else verify this?
    Regards,
    Keith

    As a footnote. I have now removed my shared fink installation. It is no longer listed as an NFS sharepoint in Server Admin and running the 'showmount -e' command does not list it. However, a /sw directory is still being created on the server and on the client machines on our network.
    This is perplexing and frustrating. There is no sharepoint any more. I rebooted the server but it makes no difference. I removed the /sw directory (on the server) by booting the machine in target firewire mode and removing it by using a 2nd machine. But following the restart, it appeared again.
    This suggests that once you make any custom mount point (using NFS sharing), you will forever be stuck with a directory at the root level of all your clients which you cannot remove.
    Keith
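    A hedged hypothesis (an assumption, not something confirmed in the thread): Server Admin records NFS automounts as mount records in the shared Open Directory domain, and every bound client's automounter creates the trigger directory (/sw) from that record, independent of the NFS export's client IP list. A stale record would also explain /sw reappearing after the share point was deleted. A sketch along these lines would list and remove such a record (the directory node path, admin account and record name below are all guesses):

    ```shell
    # List automount records in the shared directory domain (node path is an assumption).
    dscl /LDAPv3/127.0.0.1 -list Mounts

    # Remove the stale record for the old /sw share; "\/" escapes the slash
    # inside the record name, and both diradmin and the record name are guesses.
    sudo dscl -u diradmin -p /LDAPv3/127.0.0.1 -delete Mounts/server.example.com:\/sw
    ```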
