/u01 and nfs

I am building a 9i RAC system with two nodes, and I want both machines to mount /u01 over NFS. I need the NFS service to be highly available. What is the best way to accomplish this? I have looked at SGI's FailSafe, but does Oracle have their own method?
Thanks,
Jared cook

Use OCFS; it is a better way to do this.
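With OCFS both nodes mount the shared disk directly, so there is no NFS server left as a single point of failure. A minimal sketch, assuming the shared disk is /dev/sdb1 and the 9i-era OCFS release is already installed and configured on both nodes:

# on each node, once the shared disk has been formatted with OCFS
mount -t ocfs /dev/sdb1 /u01
# or in /etc/fstab so it mounts at boot
/dev/sdb1  /u01  ocfs  defaults  0  0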
Regards,
Paulo.

Similar Messages

  • iPad mini 1st gen (64 GB, Wi-Fi only) lags in some apps: Call of Duty Strike Team, GTA San Andreas and NFS Most Wanted

    I have an iPad mini 1st gen, 64 GB (Wi-Fi only), and I have a problem with some of my apps: they lag after you play them for a few seconds. The apps that lag are Call of Duty Strike Team, GTA San Andreas and NFS Most Wanted. Please help me.

    I'm going to guess videos buffer for a while also...
    Two possibilities: one is that you should close apps on the iPad; the other is that your internet speed cannot handle the demands of a high-speed device.
    To close apps: double-click the home button, then swipe up to close the apps.
    To solve the internet problem, contact your provider.   Basic internet is not enough for video games and movies.   Your router may also be old and slow.

  • How to set user-level permissions for GFS and NFS

    hi
    How do I set user-level permissions for GFS and NFS?
    regards

    hi
    http://www.redhat.com/docs/manuals/enterprise/
    and
    http://en.tldp.org/
    are probably your best bet.
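    In short: the export list controls which hosts get access, and ordinary UNIX mode bits or POSIX ACLs on the exported filesystem control user-level access. A rough sketch, with placeholder host, path and user names:
    # /etc/exports on the NFS server: per-host access control
    /data  client1.example.com(rw,sync,root_squash)
    # user-level permissions live on the filesystem itself
    chmod 2770 /data/project
    setfacl -m u:alice:rwx /data/project   # POSIX ACL; GFS supports these as well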
    regards

  • 7110 OMG, CIFS and NFS permission woes. I'm tired and I want to go home.

    OK, here's the dealio...
    I have a share exported via CIFS and NFS from our 7110 array running 2010.02.09.2.1,1-1.18
    I have AD configured for CIFS Authentication.
    I have a UNIX desktop, so I am using SMB to authenticate via AD and talk to the CIFS share on the array.
    I have the NFS share mounted using vers 3 on Solaris 10.
    Now, the problem..........
    PERMISSIONS!!!
    Here’s what I want to do:
    Create a file or folder over CIFS and preserve the username on NFS.
    Example: I log in as myself via AD, and bam, I’m on the array.
    Create a file.
    Check the ownership of the file on the NFS mount and it’s suddenly become a series of numbers, which I assume are derived from my Windows SID. As Solaris can’t relate my SID to a UNIX username, I’m left out in the dark.
    So I then tried to set up some rule-based identity mapping so my Windows login would be converted to my UNIX username; no luck, still a series of numbers listed against my files.
    I could work around this if I could chown, but I can’t even do that, as it says chown: filename: Not owner
    What gives? How do I keep my username from CIFS to NFS? HELP!!!!

    Did you have any joy with this?
    I have never been able to determine a consistent configuration for NFS/CIFS sharing on a 7310. I ended up opening access to all on the NFS side (v4), and CIFS just worked out of the box.
    I am using ID Mapping, with IDMU first, then rule-based mapping next. The box picks up the correct UID/GID from AD but doesn't always inherit the user and group on the NFS side.
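    For reference, name-based rules from the Solaris command line look like this (a sketch with hypothetical user and domain names; the appliance BUI has an equivalent Identity Mapping screen):
    # map one Windows user to one UNIX user
    idmap add winuser:jcook@example.com unixuser:jcook
    # or map every user in the domain by matching names
    idmap add 'winuser:*@example.com' 'unixuser:*'
    # list the active name-based rules
    idmap list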
    Chris

  • Deadlocking issue with sshfs and nfs

    Okay, I've used both sshfs and nfs for remotely accessing the home partition on my fileserver, but I have been having a problem where the networking on the server suddenly cuts out.  Any processes that are accessing the folder I mounted with nfs/sshfs become deadlocked.  Any processes that try to access my home directory, where the remote folder sits, are also deadlocked.  I cannot get into the machine with ssh.  I have to manually reboot it in order to get any networking at all.
    I also have to force-kill any known processes that are accessing the remote folder, and if I don't know what they are, I have to forcibly unmount it.  This issue has been occurring with this specific fileserver since I got it.  It is running Arch Linux i686, but has had the same problem with the server editions of both Fedora and Ubuntu.
    I don't know where to begin with fixing this problem, nor do I know how to diagnose it.

    Consider the "soft" mount option for NFS.
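    A sketch of what that looks like in /etc/fstab (server name and paths are placeholders). Note that a soft mount returns I/O errors to applications instead of hanging forever, which can mean data loss on interrupted writes:
    # NFS soft mount: give up after retrans retries of timeo deciseconds each
    fileserver:/export/home  /mnt/home  nfs  soft,timeo=30,retrans=3  0  0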

  • [SOLVED] Netbooting with PXE, TFTP and NFS / Numerous errors

    Greetings all, hope you can help me out.
    I've been given a task by my company of making a network-bootable ICA client (with X and Firefox, with the Citrix ICA client installed), as small as possible to minimize network traffic (as 440 workstations would be downloading the end product simultaneously, so it'd beat ten bells of proverbial out of the core and edge switches for a little while). I discovered two options. One is to integrate everything in a cloop image directly inside the INITRD. I have stacks of working INITRDs with their matched kernels, yet this being my first dabble into extracting an INITRD, my faffing with CPIO has resulted in me nuking my base layout (thank god for snapshotting in VMware Workstation!) 4 times, and either getting "Premature end of file" or a copious number of lines stating "cpio: Malformed Number: <strange characters>", finally ending with "Premature end of file". As a result I went in search of another option, which would be booting off an NFS share. I followed the guide:
    http://wiki.archlinux.org/index.php/Dis … t_NFS_root
    ...in order to set up a network booted install of Arch and hit a few snags along the way, probably a result of using multiple operating systems for the TFTP and NFS server as opposed to using what the guide recommends, but I'm not sure as these seem solvable, although I don't know how right now.
    The set up:
    DHCP is provided by a Microsoft Windows Server 2003 VM (AD Integrated) on 172.16.10.17 on a box called "Rex".
    TFTP is provided by another Windows Server 2003 VM by "TFTPd32" which is a free download. This is located on 172.16.10.158 on a box called "Terra".
    The NFS store is provided by OpenFiler 2.3, a specialized version of rPath Linux designed specifically for turning boxes into dedicated NAS stores. This is located on 172.16.10.6, and is called "frcnet-nas-1".
    The problem:
    DHCP is correctly configured with a Boot Host Name (Which is 172.16.10.158) and a boot file name of "pxelinux.0". This is confirmed as working.
    The client gets the kernel and INITRD from TFTP and boots up fine until it hits "Waiting for devices to settle...", at which point it echoes "Root device /dev/nfs doesn't exist, attempting to create it...", which it seems to do fine. It then passes control over to kinit and echoes "INIT: version 2.86 booting" and the archlinux header, and immediately after that it prints:
    mount: only root can do that
    mount: only root can do that
    mount: only root can do that
    /bin/mknod: '/dev/null': File exists
    /bin/mknod: '/dev/zero': File exists
    /bin/mknod: '/dev/console': File exists
    /bin/mkdir: cannot create directory '/dev/pts': File exists
    /bin/mkdir: cannot create directory '/dev/shm': File exists
    /bin/grep: /proc/cmdline: No such file or directory
    /etc/rc.sysinit: line 72: /proc/sys/kernel/hotplug: No such file or directory
    :: Using static /dev filesystem [DONE]
    :: Mounting Root Read-only [FAIL]
    :: Checking Filesystems [BUSY]
    /bin/grep: /proc/cmdline: No such file or directory
    :: Mounting Local Filesystems
    mount: only root can do that
    mount: only root can do that
    mount: only root can do that
    [DONE]
    :: Activating Swap [DONE]
    :: Configuring System Clock [DONE]
    :: Removing Leftover Files [DONE]
    :: Setting Hostname: myhost [DONE]
    :: Updating Module Dependencies [DONE]
    :: Setting Locale: en_US.utf8 [DONE]
    :: Setting Consoles to UTF-8 mode[BUSY]
    /etc/rc.sysinit: line 362: /dev/vc/0: No such file or directory
    /etc/rc.sysinit: line 363: /dev/vc/0: No such file or directory
    /etc/rc.sysinit: line 362: /dev/vc/1: No such file or directory
    /etc/rc.sysinit: line 363: /dev/vc/1: No such file or directory
    ... all the way down to vc/63 ...
    :: Loading Keyboard Map: us [DONE]
    INIT: Entering runlevel: 3
    :: Starting Syslog-NG [DONE]
    Error opening file for reading; filename='/proc/kmsg', error='No such file or directory (2)'
    Error initializing source driver; source='src'
    :: Starting Network...
    Warning: cannot open /proc/net/dev (No such file or directory). Limited output.
    eth0: dhcpcd 4.0.3 starting
    eth0: broadcasting inform for 172.16.10.154
    eth0: received approval for 172.16.10.154
    eth0: write_lease: Permission denied
    :: Mounting Network Filesystems
    mount: only root can do that
    [FAIL]
    :: Starting Cron Daemon [DONE]
    ...and nothing after that; it just stops. The kernel doesn't panic, and hitting ctrl+alt+delete does what you'd expect: a clean shutdown, minus a few errors about filesystems not being mounted. It seems /proc isn't getting mounted because init apparently doesn't have the appropriate permissions, and /proc not being mounted causes a whole string of other issues. Thing is, /proc gets created at boot time as it contains kernel-specific information about the system and the kernel's capabilities, right? Why can't it create it? How come init doesn't have the same privileges as root as it usually would, and how would I go about fixing it?
    I admit, while I'm fairly competent in Linux, this one has me stumped. Anyone have any ideas?
    Last edited by PinkFloydYoshi (2008-11-22 12:29:01)

    The idea behind the Windows DHCP and TFTP is that we'd be using an existing server and a NetApp box with an NFS license to serve everything off. I would have loved to make a new server which is completely Linux, but neither my boss nor the other technician has ever used Linux, so if I left for any reason they'd be stuck if they ever ran into trouble, which is why I've struggled to get Linux to penetrate our all-Windows infrastructure.
    During my hunting around on Google I found a lot of information on making my own initrd, much of it using all manner of switches. I can make them fine, but I figure that I would need to look at extracting the current working one first, adding X, Firefox and the ICA client to it, then compressing it again. Cloop came about when I was looking at DSL's internals. The smaller the initrd the better, so utilizing this could possibly be a plus too.
    The reason I'm doing this with Archlinux is that I know Arch's internals quite well (and pacman is just wondrous, which is more than I can say for yum), so if I run into a small problem I'm more likely to fix it without consulting Google. Fair enough though, the NFS booting method is giving me issues I never thought were possible. Ahh, sod's law strikes again.
    Addendum: I've noticed something which struck me as odd. Files in the NFS share are somehow owned by 96:scanner instead of root:root. Upon attempting to change this, it tells me "Operation Not Permitted". Further prodding has led me to believe it's an Openfiler thing, where GID/UID 96 on the OpenFiler box is "ofgroup"/"ofguest". Chowning / to root:root puts the NFS boot right ahead and gives me a prompt; however, I cannot log in as root. I've also discovered that chrooting into the base from my Arch workstation and creating a directory makes the directory owned by ofgroup:ofguest again, so it's an Openfiler thing after all this time. Prodding further.
    Addendum two: For anyone using Openfiler out there, when you allow guest access to the NFS share, be sure to set the Anonymous GID and Anonymous UID to 0. By default they are 96, and as a result when trying to boot you get the errors I experienced. (This is insecure, so you should also use some sort of network/host/IP-range restriction.) Because the root filesystem has 96:96 as the owner of everything after you install the base layout using pacman (and of any changes you make afterward), init and root no longer have the appropriate permissions; user 96:96 (which is "scanner" in Archlinux) has them instead, and init would need to be "scanner" in order to boot completely.
    The solution is to set the Anonymous GID and UID to 0, chown the entire diskless root filesystem to root, then use a Linux desktop to mount the diskless root filesystem, mount /proc and /sys, bind-mount /dev, and chroot into the diskless root. At that point, to clear up any problems with bad passwords, use passwd to change your password. Exit the chroot environment, then unmount the diskless proc, sys and dev. Boot up via the network and use your chosen password to log in as root. From there, start cleaning up permissions from the en masse filesystem chown and you should then have a usable diskless root.
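    As a concrete sketch of those steps from a Linux desktop (the export path here is illustrative):
    # mount the diskless root and fix ownership
    mount -t nfs frcnet-nas-1:/export/archroot /mnt/diskless
    chown -R root:root /mnt/diskless
    # make the chroot usable
    mount -t proc proc /mnt/diskless/proc
    mount -t sysfs sys /mnt/diskless/sys
    mount --bind /dev /mnt/diskless/dev
    chroot /mnt/diskless /bin/bash
    passwd          # reset the root password while inside
    exit
    umount /mnt/diskless/dev /mnt/diskless/sys /mnt/diskless/proc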
    I'll experiment further, clear up some of the remaining permission errors that occurred during boot, and report on my progress in fixing it. I didn't like the idea of chowning the entire share as root. :S
    Last edited by PinkFloydYoshi (2008-11-21 19:28:15)

  • Slow ZFS-share performance (both CIFS and NFS)

    Hello,
    After upgrading my OpenSolaris file server (newest version) to Solaris 11 Express, the read (and write)-performance on my CIFS and NFS-shares dropped from 40-60MB/s to a few kB/s. I upgraded the ZFS filesystems to the most recent version as well.
    dmesg and /var/log/syslog don't list anything abnormal as far as I can see. I'm not running any scrubs on the zpools, and they are listed as online. top doesn't reveal any process utilizing the CPU more than 0.07%.
    The problem is probably not at the client side, as the clients are 100% untouched when it comes to configuration.
    Where should I start looking for errors (logs etc.)? Any recommended diagnostic tools?
    Best regards,
    KL

    Hi!
    Check the link speed:
    dladm show-dev
    Check for collisions and bad network packets:
    netstat -ia
    netstat -ia 2 10 (while a file is being transferred)
    Check for lost packets:
    ping -s <IP client> (wait more than 1 min)
    Check for retransmits and response latency:
    snoop -P -td <IP client> (while a file is being transferred)
    Try replacing the network cable.
    Regards.

  • Doubt About FTP And NFS

    Hi Experts,
    1... What is the difference between FTP and NFS (as the Transport Protocol)?
    2... When would we use FTP and when NFS? In which cases?
    Please let me know in detail.
    Regards
    Khanna

    Hi, thanks for your quick reply.
    As you said:
    "the client's system is across your network and the client is not ready to send you the file. At that time you have to use FTP."
    This is OK.
    Q: And for this, do we need to be on the VPN or not?
    "In scenarios where the XI system can store the file on its own server (e.g. cases where the organization has their XI in place and they don't want to add an extra FTP server to their scenario, they can directly place the file on the XI file system), NFS is used."
    In this case you need to put the file on the XI server. From where will you get the file to place on the server (via the internet, by hand, or the like)?
    Please let me know all the details.
    Regards
    Khanna

  • ISCSI, AFP, SMB, and NFS performance with Mac OS X 10.5.5 clients

    Been doing some performance testing with various protocols related to shared storage...
    Client: iMac 24 (Intel), Mac OS X 10.5.5 w/globalSAN iSCSI Initiator version 3.3.0.43
    NAS/Target: Thecus N5200 Pro w/firmware 2.00.14 (Linux-based, 5 x 500 GB SATA II, RAID 6, all volumes XFS except iSCSI which was Mac OS Extended (Journaled))
    Because my NAS/target supports iSCSI, AFP, SMB, and NFS, I was able to run tests that show some interesting performance differences. Because the Thecus N5200 Pro is a closed appliance, no performance tuning could be done on the server side.
    Here are the results of running the following command from the Terminal (where test is the name of the appropriately mounted volume on the NAS) on a gigabit LAN with one subnet (jumbo frames not turned on):
    time dd if=/dev/zero of=/Volumes/test/testfile bs=1048576k count=4
    In seconds:
    iSCSI 134.267530
    AFP 140.285572
    SMB 159.061026
    NFSv3 (w/o tuning) 477.432503
    NFSv3 (w/tuning) 293.994605
    Here's what I put in /etc/nfs.conf to tune the NFS performance:
    nfs.client.allow_async = 1
    nfs.client.mount.options = rsize=32768,wsize=32768,vers=3
    Note: I tried forcing TCP, as well as an rsize and wsize double what I had above. It didn't help.
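    (For the record, that forced-TCP, doubled-buffer attempt is expressed in /etc/nfs.conf roughly like this — a reconstruction of what I mean, not my exact line:)
    nfs.client.mount.options = tcp,rsize=65536,wsize=65536,vers=3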
    I was surprised to see how close AFP performance came to iSCSI. NFS was a huge disappointment, but that could be down to limitations of the server settings, which could not be changed because it is an appliance. I'll be getting a Sun Ultra 24 Workstation in soon and will retry the tests (and add NFSv4).
    If you have any suggestions for performance tuning Mac OS X 10.5.5 clients with any of these protocols (beyond using jumbo frames), please share your results here. I'd be especially interested to know whether anyone has found a situation where Mac clients using NFS has an advantage.

    With fully functional ZFS expected in Snow Leopard Server, I thought I'd do some performance testing using a few different zpool configurations and post the results.
    Client:
    - iMac 24 (Intel), 2 GB of RAM, 2.3 GHz dual core
    - Mac OS X 10.5.6
    - globalSAN iSCSI Initiator 3.3.0.43
    NAS/Target:
    - Sun Ultra 24 Workstation, 8 GB of RAM, 2.2 GHz quad core
    - OpenSolaris 2008.11
    - 4 x 1.5 TB Seagate Barracuda SATA II in ZFS zpools (see below)
    - For iSCSI test, created a 200 GB zvol shared as iSCSI target (formatted as Mac OS Extended Journaled)
    Network:
    - Gigabit with MTU of 1500 (performance should be better with jumbo frames).
    Average of 3 tests of:
    # time dd if=/dev/zero of=/Volumes/test/testfile bs=1048576k count=4
    # zpool create vault raidz2 c4t1d0 c4t2d0 c4t3d0 c4t4d0
    # zfs create -o shareiscsi=on -V 200g vault/iscsi
    iSCSI with RAIDZ2: 148.98 seconds
    # zpool create vault raidz c4t1d0 c4t2d0 c4t3d0 c4t4d0
    # zfs create -o shareiscsi=on -V 200g vault/iscsi
    iSCSI with RAIDZ: 123.68 seconds
    # zpool create vault mirror c4t1d0 c4t2d0 mirror c4t3d0 c4t4d0
    # zfs create -o shareiscsi=on -V 200g vault/iscsi
    iSCSI with two mirrors: 117.57 seconds
    # zpool create vault mirror c4t1d0 c4t2d0 mirror c4t3d0 c4t4d0
    # zfs create -o shareiscsi=on -V 200g vault/iscsi
    # zfs set compression=lzjb vault
    iSCSI with two mirrors and compression: 112.99 seconds
    Compared with my earlier testing against the Thecus N5200 Pro as an iSCSI target, I got roughly 16% better performance using the Sun Ultra 24 (with one less SATA II drive in the array).

  • Systemd and nfs-common ?

    My nfs-client setup takes 62s to mount an nfsv4 share. I've tried to solve this by creating my own systemd nfs-common.service, but it fails: it doesn't see rpcbind as started, which it is. I've googled high and low for a systemd *.service that works, but haven't found any nfs-common.service file at all.
    I have ExecStart=/etc/rc.d/nfs-common start in my .service file and it is enabled, but the status message is:
    nfs-common.service - nfs-common
    Loaded: loaded (/etc/systemd/system/nfs-common.service; enabled)
    Active: failed (Result: exit-code) since Sat, 19 May 2012 12:25:43 +0200; 10min ago
    Process: 1911 ExecStart=/etc/rc.d/nfs-common start (code=exited, status=1/FAILURE)
    CGroup: name=systemd:/system/nfs-common.service
    sudo /etc/rc.d/nfs-common start gives:
    Start rpcbind first. [FAIL]
    The same appears in the journal (systemd-journalctl | grep -i nfs):
    May 19 12:14:09 mediadatorn kernel: RPC: Registered tcp NFSv4.1 backchannel...e.
    May 19 12:14:09 mediadatorn kernel: FS-Cache: Netfs 'nfs' registered for caching
    May 19 12:14:10 mediadatorn[103]: Inserted module 'nfs'
    May 19 12:14:11 mediadatorn nfs-common[227]: Start rpcbind first. [FAIL]
    All this came about since I started trying to optimize systemd. Before today, when I tried to disable all legacy arch units, I had the nfs client running OK. I did this after reading falconindy's tip in the big systemd thread. By the way, that thread is so big it has become nearly useless.
    Boot up times:
    systemd-analyze
    Startup finished in 13269ms (kernel) + 94810ms (userspace) = 108080ms
    systemd-analyze blame
    62034ms mnt-SERVER_NYTT.mount
    6764ms console-kit-daemon.service
    2787ms systemd-modules-load.service
    1838ms home.mount
    867ms syslog-ng.service
    552ms rpc-statd.service

    swanson wrote:
    I followed this
    falconindy wrote:
    arch-daemons.target. Sadly, it's going to be a little longer than I wanted until that unit disappears. That said, the intended usage of that target was never meant to be more than:
    1) Install systemd, initscripts-systemd, systemd-arch-units (arch-daemons.target is enabled)
    2) reboot
    3) copy /run/systemd/generator/arch-daemons.target.wants/*.service to /etc/systemd/system for anything where a native unit file doesn't exist
    4) kill -1 1 (refreshes systemd's unit database)
    5) systemctl enable *.service (for the services you copied over in step 3)
    6) disable arch-daemons.target
    7) ????
    8) profit.
    (finally found it)
    Right now I'm trying to reverse it and start over.
    I tried this, and at first didn't get it to mount.  I restarted rpcbind (supposedly already running) and nfs-common, and I think it still stalled on mounting, but now that I've been away from it, it finally did mount.
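    One thing worth trying is a native unit that hard-orders the script after rpcbind, so systemd itself enforces the dependency (a sketch, untested, reusing the rc script from this thread):
    # /etc/systemd/system/nfs-common.service
    [Unit]
    Description=NFS client services
    Requires=rpcbind.service
    After=rpcbind.service network.target

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=/etc/rc.d/nfs-common start
    ExecStop=/etc/rc.d/nfs-common stop

    [Install]
    WantedBy=multi-user.target
    With Requires= and After=, systemd starts rpcbind first instead of letting the script fail with "Start rpcbind first."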
    Last edited by nomorewindows (2012-06-24 00:09:47)

  • Software to configure CIFS and NFS on AWS?

    Does anybody have experience with software that will configure CIFS and NFS on Amazon Web Services (AWS)?

    I suggest that you post this question in the AWS community forums: https://forums.aws.amazon.com/index.jspa

  • LDAP and NFS mounts/setup OSX Lion iMac with Mac Mini Lion Server

    Hello all,
    I have a local account on my iMac (Lion) and a Mac Mini (Lion Server), and I want to use LDAP and NFS to mount the /Users directory, but am having trouble.
    We have a combination of Linux (Ubuntu), Windows 7 and Mac machines on this network, all using LDAP and NFS except the Windows computers.
    We have created users in Workgroup Manager on the server, and we have it working on a few Macs already, but I wasn't there to see that process. 
    Is there a way to keep my local account separate, and still have NFS access to /Users on the server and LDAP for authentication?
    Thanks,
    -Matt

    It would make a great server. Bonus over Apple TV for example is that you have access via both wired ethernet and wireless. Plus if you load tools from XBMC, Firecore and others you have a significant media server. Cost is right too.
    Many people are doing this - google mac mini media server or other for more info.
    The total downside to any Windows-based system: dealing with constant anti-virus, major security hassles, lack of true media integration, and it's a PITA to update, etc.
    You should be aware that Lion Server is not ready for prime time - it still has significant issues if you are migrating from Snow Leopard 10.6.8. If you buy an Apple-fresh Lion Server Mac mini you should have no problems.
    You'll probably be pleased.
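    Back on the NFS/LDAP part of the question: one way to keep a local account intact is to automount the server homes somewhere other than /Users. A sketch using the macOS automounter (map name and server are hypothetical):
    # /etc/auto_master — add a mount point backed by a custom map
    /Network/ServerUsers   auto_serverusers
    # /etc/auto_serverusers — one wildcard entry covers every user
    *   -fstype=nfs,vers=3   server.example.com:/Users/&
    # reload the automounter
    sudo automount -vc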

  • RAC defined architecture and NFS

    Hi Friends, how are you all?
    Carrying on with my fight with the Grid Infrastructure package installation: after researching on the internet, I found a whitepaper signed by Oracle that deals with the use of Grid 11.2 files on NFS partitions/disks (the link to this doc is below). The environment's architecture I'm working with, which was defined by a consultant, has two VMs, one of them sharing a disk (the entire /dev/sdb) with the other machine through NFS; we then have a global shared disk area as storage for OCR, VF and database files in the RAC. During the installation I'm using Normal Redundancy, where I state the three /ocr0x directories for OCR files and the three /vtf0x for Voting Files, as I explain in more detail below.
    Oracle Clusterware 11g Release 2 (11.2) – Using standard NFS to support a third voting file for extended cluster configurations
    => http://www.oracle.com/technetwork/database/clusterware/overview/grid-infra-thirdvoteonnfs-131158.pdf
    The exported (NFS) disk partitions are:
    /u01 *(rw,sync,no_wdelay,insecure_locks,no_root_squash) # -> $ORACLE_BASE, $ORACLE_HOME
    /u02 *(rw,sync,no_wdelay,insecure_locks,no_root_squash) # -> $GRID_HOME
    # ocr files
    /ocr01 *(rw,sync,no_wdelay,insecure_locks,no_root_squash) # -> ocr files /ocr01/disk01, disk01 is the file OUI will delete
    /ocr02 *(rw,sync,no_wdelay,insecure_locks,no_root_squash) # -> ocr files /ocr02/disk02, disk02 is the file OUI will delete
    /ocr03 *(rw,sync,no_wdelay,insecure_locks,no_root_squash) # -> ocr files /ocr03/disk03, disk03 is the file OUI will delete
    # voting files
    /vtf01 *(rw,sync,no_wdelay,insecure_locks,no_root_squash) # -> vtf files /vtf01/disk01, disk01 is the file OUI will delete
    /vtf02 *(rw,sync,no_wdelay,insecure_locks,no_root_squash) # -> vtf files /vtf02/disk02, disk02 is the file OUI will delete
    /vtf03 *(rw,sync,no_wdelay,insecure_locks,no_root_squash) # -> vtf files /vtf03/disk03, disk03 is the file OUI will delete
    ** I found that whitepaper while investigating whether the manner I am using to share disks with NFS is correct; in it you may find some relevant options you must consider when mounting disks whenever you use Grid + NFS.
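    ** For example, on Linux the paper discusses mount options along these lines for the NFS-hosted voting file (a sketch; verify the exact option list against the paper itself):
    # mount the third voting file over standard NFS
    mount -t nfs -o rw,bg,hard,tcp,vers=3,rsize=32768,wsize=32768,noac,timeo=600 \
        nfs-server:/vtf03 /voting_disk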
    Well, after many efforts and much time spent trying to get this RAC up & running, I kept receiving the same errors when OHASD tries to start up. The error is "...Inappropriate ioctl for device..." - I started to suspect that something was quite strange in some points of the defined architecture. Even though runcluvfy.sh finishes without a single "failed" item, I've had problems with the installation since I started it.
    So, the paper I found, signed by Oracle, says:
    The first Oracle Clusterware version to support a third voting file mounted using the standard NFS protocol is Oracle Clusterware 10.2.0.2. Support has been enhanced to Oracle Clusterware 11.1.0.6 based on successful tests. Using standard NFS to host the third voting file will remain unsupported for versions prior to Oracle Clusterware 10.2.0.2. All other database files are unsupported on standard NFS. In addition, it is assumed that the number of voting files in the cluster is 3 or more. Support for standard NFS is limited to a single voting file amongst these three or more configured voting files only. This paper will focus on Grid Infrastructure 11.2.0.1.0, since starting with Oracle Clusterware 11.2.0.1.0 Oracle Automatic Storage Management (ASM) can be used to host the voting files.
    => When the document says "All other database files are unsupported on standard NFS", what does it mean? Is OCR one of the files unsupported on an NFS disk/partition?
    I'm wondering whether the defined architecture will support what we're doing. Do you guys know of any similar case, where a two-node RAC was implemented over an NFS disk? Do you know of any case where OCR, VF and database files were stored on NFS partitions/disks? Is the architecture I'm using valid? Could you give me some hints?
    I used the forum to discuss some questions around this Grid implementation:
    => Re: G.I. install keeps failing @ root.sh ( specifically when Starting ohasd )
    => Re: Problems in downloading Oracle Linux [4 - 6]
    Thank you in advance for any hint you all can give me this time! I'll appreciate any kind of help.

    Hi,
    I didn't fully understand your environment. You have 2 virtual nodes, right? Let's say node1 and node2. Now, where is your NFS storage, and what do you plan for it to contain?
    Regarding the Oracle documentation, there are two NFS "vendors": one is standard NFS (exported by any Linux/UNIX machine), the other is vendor-supported NFS (such as NetApp, EMC, etc.).
    Oracle RAC files (OCR, voting and database) are supported on the second one only. It is not supported to use standard NFS for database files. However, if you wish, you can put one voting file on standard NFS (this is for cases where you have two sites for the RAC and you want another voting file in a different site).
    In any case, if I understand correctly, you have node1 exporting NFS to node2 for the RAC. If this is true, it is not supported, and it is generally a bad idea because you lose your entire high availability (if node1 fails, you've got nothing).
    HTH
    Liron

  • Word 2008 for Mac and NFS mounted home directories "Save File" issues

    Greetings everyone,
    (Long time lurker, first time poster here)
    I admin a small network (under 20 workstations) with a centralized NFS server; user home directories are mounted via NFS upon login.  Users are authenticated via LDAP.  This is all working fine; there is no problem here.  The problem arises when my users use Microsoft Word 2008 for Mac.  When they attempt to save a file to their Desktop (or Documents, or any folder under their home dir) they are met with the following message:
    (dialog box popup)
    "Word cannot save or create this file.  The disk may be full or write-protected.  Try one or more of the following: * Free more memory. * Make sure the disk you want to save the file on is not full, write-protected or damaged. (document-name.ext)"
    This happens regardless of file format (doc, docx, txt) and regardless of save location under the network-mounted dir.  I've noticed that when saving, Word creates a .tmp file in the target directory, which only further confuses me as to the underlying cause of the issue.
    When users log on to a local machine account and attempt the save, there is no issue.
    I have found many posts in other community forums, including this one, indicating that the issue is a .TemporaryItems folder in the root of the mounted directory.  This folder already exists and is populated with entries such as "folder.2112" (where 2112 is the uid of the LDAP user).  I find other posts indicating that this is an issue with Word 2008 and OS X 10.8, with finger-pointing in either direction, but no real solution.
    I have installed all Office for Mac updates from Microsoft (latest version 12.3.6).
    I have verified permissions of the user's home dir.
    I have also confirmed that this issue affects ONLY Microsoft Office 2008 for Mac apps; LibreOffice and other applications have no issue.
    Does *ANYONE* have a solution or workaround for this issue?  While we're trying to phase Microsoft products out, getting users to ditch Word and Excel is difficult without removing them from systems completely.  So any pointers or help would be greatly appreciated.
    Thanks.
    ~k

    I can't tell you how to fix bugs in an obsolete version of Office, but a possible workaround is to use mobile home directories under OS X Server. The home directories are hosted locally and synced with the server.
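    If you want to rule the .TemporaryItems theory in or out first, a quick server-side check is cheap (a guess at a workaround, not a confirmed fix; the path assumes the share root):
    # the folder should be writable by everyone, /tmp-style
    ls -ld /export/home/.TemporaryItems
    chmod 1777 /export/home/.TemporaryItems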

  • Sender File Adapter and NFS

    I have a sender file adapter that uses NFS as the Transport Protocol. This channel is throwing the error "directory does not exist". What are the possible solutions for this problem?
    When I previously checked this, it was working fine, but now it's throwing this error.

    Hi Neelansha,
    If you select NFS as the transport protocol, specify the directory name from which the data should be picked up,
    and check the communication channel monitor for the file sender; it will show the exact error.
    Regards,
    Sateesh
