Hardware for RAC using NFS mounts

Hi,
At a recent Oracle/NetApp seminar we heard that certain RAC configurations can use NFS mounts to access their shared storage, leaving SCSI, Fibre Channel and FC switches out of the picture.
We're currently looking for a budget cluster configuration that, ideally, is not limited to 2 nodes. The NFS option looks promising; however, our NAS hardware may not be NetApp.
Has anybody used this kind of setup? For example, several cheap x86 blade servers mounting shared storage via NFS in a NAS.
Thanks,
Ivan.

NAS is NFS.
See
The following NFS storage vendors are supported: EMC, Fujitsu, HP, IBM, NetApp, Pillar Data, Sun, Hitachi.
NFS file servers do not require RAC certification. The NFS file server must be supported by the system and storage vendors. 
Currently, only NFS protocol version 3 (NFSv3) is supported.
Hemant K Chitale
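One practical note: with NFS for RAC, the mount options matter as much as the filer brand. A minimal sketch of a Linux /etc/fstab entry for an NFSv3 database-file mount (the server name, export path and mount point are examples; check the Oracle documentation for the exact options required on your platform):
nas01:/vol/oradata  /u02/oradata  nfs  rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0  0 0
The key points are hard mounts over TCP and actimeo=0, so every node sees consistent attributes on the shared files.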

Similar Messages

  • To use NFS mount as shared storage for calendar

    hi all,
    Colocated IM deployment: "To ensure high availability, Oracle Calendar Server is placed on a Cold Failover Cluster. Cold Failover Cluster installation requires shared storage for ORACLE_HOME and oraInventory."
    Q: can an NFS mount be used as the shared storage? Has anyone tried it? Thanks

    Hi Arnaud!
    This is of course a test environment on my laptop. I WOULD NEVER do this in production or even mention this to a customer :-)
    In this environment I do not care about performance, but it is not slow.
    cu
    Andreas

  • Installation Grid Infrastructure for RAC using UDEV fail

    Hi experts,
    I want to install RAC for testing using UDEV, VirtualBox and Linux 5.8 64 bit.
    The installation fails at the Grid Infrastructure (ASM) stage. At Step 5 of 9 - Create ASM Disk Group, the installer doesn't display any disks to create the ASM disk group from.
    I think UDEV has been configured correctly. The following is the UDEV configuration:
    "/etc/scsi_id.config"
    vendor="ATA",options=-p 0x80
    options=-g"/etc/udev/rules.d/99-oracle-asmdevices.rules"
    KERNEL=="sd?1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s /block/$parent", RESULT=="SATA_VBOX_HARDDISK_VBb1141d78-3196a201_", NAME="asm-disk1", OWNER="oracle", GROUP="dba", MODE="0660"
    KERNEL=="sd?1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s /block/$parent", RESULT=="SATA_VBOX_HARDDISK_VB748d0eea-d6385be2_", NAME="asm-disk2", OWNER="oracle", GROUP="dba", MODE="0660"ls -al /dev/asm-disk*
    brw-rw---- 1 oracle dba 8, 17 Mar 30 21:44 /dev/asm-disk1
    brw-rw---- 1 oracle dba 8, 33 Mar 30 21:44 /dev/asm-disk2Any sugestion for me? Please help...
    Thanks and best regards,
    Hai Mai.

    Try this one.
    1. Find the WWID of the disk
    # /sbin/scsi_id -g -u -s /block/sdb
    SATA_VBOX_HARDDISK_VBbb8af1a8-4d4db09b_
    Here we got a WWID of 'SATA_VBOX_HARDDISK_VBbb8af1a8-4d4db09b_'.
    2. Create a custom UDEV rule
    Create a new rule under /etc/udev/rules.d/, for example, /etc/udev/rules.d/99-oracle.rules.
    Ensure the file name sorts after the default 50-xxx.rules file. The file name must end in ".rules" to be recognized.
    KERNEL=="sd*1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s /block/%P", RESULT=="SATA_VBOX_HARDDISK_VBbb8af1a8-4d4db09b_", RUN+="/bin/raw /dev/raw/raw1 %N"
    KERNEL=="sd*2", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s /block/%P", RESULT=="SATA_VBOX_HARDDISK_VBbb8af1a8-4d4db09b_", RUN+="/bin/raw /dev/raw/raw2 %N"
    KERNEL=="sd*3", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s /block/%P", RESULT=="SATA_VBOX_HARDDISK_VBbb8af1a8-4d4db09b_", RUN+="/bin/raw /dev/raw/raw3 %N"
    ACTION=="add", KERNEL=="raw*", OWNER="oracle", GROUP="oinstall", MODE="0664"The RESULT filed was the WWID of your hard disk found in step 1.
    Please also subsititue the value of 'OWNER', 'GROUP', and 'MODE' to your actual one.
    3. Restart the UDEV service
    # /sbin/start_udev
    Then you are free to use /dev/raw/raw1, /dev/raw/raw2, and /dev/raw/raw3 as your RAC OCR and voting devices.
    Thanks,
    Kods
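    A quick way to confirm the rules took effect after restarting udev (a minimal sketch; the raw device names follow the example rules above, so adjust them to yours):
    # list the raw-to-block bindings created by the RUN+= rules
    raw -qa
    # ownership and mode should match the ACTION=="add" rule
    ls -l /dev/raw/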

  • Word 2008 for Mac and NFS mounted home directories "Save File" issues

    Greetings everyone,
    (Long time lurker, first time poster here)
    I admin a small network (under 20 workstations) with a centralized NFS server; user home directories are mounted via NFS upon login.  Users are authenticated via LDAP.  This is all working fine, there is no problem here.  The problem arises when my users use Microsoft Word 2008 for Mac.  When they attempt to save a file to their Desktop (or Documents, or any folder under their home dir) they are met with the following message:
    (dialog box popup)
    "Word cannot save or create this file.  The disk maybe be full or write-protected.  Try one or more of the following: * Free more memory. * Make sure the disk you want to save the file on is not full, write-protected or damaged. (document-name.ext)"
    This happens regardless of file format (Doc, Docx, Txt) and regardless of saved location under the network mounted dir.  I've noticed that when saving Word creates a .tmp file in the target directory, which only further confuses me to the underlying cause of the issue.
    When users logon to a local machine account and attempt the save, there is no issue.
    I have found many posts in other community forums, including this one, indicating that the issue is a .TemporaryItems folder in the root of the mounted directory.  This folder already exists and is populated with entries such as "folder.2112" (where 2112 is the uid of the LDAP user).  I find other posts indicating that this is an issue with Word 2008 and OS X 10.8, with finger-pointing in either direction, but no real solution.
    I have installed all Office for Mac updates from Microsoft (latest version 12.3.6).
    I have verified permissions of the user's home dir.
    I have also confirmed that this issue affects ONLY Microsoft Office 2008 for Mac apps; LibreOffice and other applications have no issue.
    Does *ANYONE* have a solution or workaround for this issue?  While we're trying to phase Microsoft products out, getting users to ditch Word and Excel is difficult without removing them from systems completely.  So any pointers or help would be greatly appreciated.
    Thanks.
    ~k

    I can't tell you how to fix bugs in an obsolete version of Office, but a possible workaround is to use mobile home directories under OS X Server. The home directories are hosted locally and synced with the server.
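    Not a fix for the Word bug itself, but the .TemporaryItems angle mentioned in the question is worth ruling out first; a minimal sketch, run on the NFS server, assuming the export root is /export/home (adjust to your layout):
    # the per-volume temp folder should exist at the export root and be world-writable
    # with the sticky bit set, like /tmp, so each user can create their own entries
    mkdir -p /export/home/.TemporaryItems
    chmod 1777 /export/home/.TemporaryItems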

  • Modifying retired server hardware for home use

    So due to the death of Server 2003 I had to retire quite a few servers in the last year for the company I work for. I decided to adopt all this otherwise discarded hardware and take it home with the goal of creating one sweet media server (and who knows what else down the road). Most of the servers were late Dell PowerEdge 2900/2950 systems, but one was a PowerEdge R710. My goal is to make two servers, essentially. One will be for a different purpose which will be straightforward and based on one of the PowerEdge 2900s. The other will be a media/whatever home server running on the R710. So I have a huge pile of 600GB 15k RPM SAS drives, which are awesome drives. However, the R710 only has six 3.5" bays. I have A LOT more data than that, and a lot more hard drives than that available here. I don't want to go out and spend a ton of money on...
    This topic first appeared in the Spiceworks Community

    Instead of buying those modules, save yourself a ton of money and buy the twinax. It has the SFP's and the cable together as one piece.
    http://www.cisco.com/en/US/prod/collateral/modules/ps5455/data_sheet_c78-455693.html
    You can purchase a 10gb card for your server, just make sure the server supports the host interface of the card (eg PCI-Express x8).
    http://www.intel.com/Assets/PDF/prodbrief/321634.pdf

  • Can I use virtual Servers in private cloud for RAC

    Hello  to all
    We are going to install an Oracle RAC on two servers
    But our hardware administrator says to us: "I will allocate two virtual servers in our private cloud, not two physical (real) servers."
    Do you think it's practical and reasonable to use virtual servers for Oracle RAC in a production environment?
    Which is better for RAC, a physical server or a virtual server?
    Please share your reasons.
    Thanks

    Using virtual machines is officially supported for RAC only in a few cases, which can be found here:
    http://www.oracle.com/technetwork/database/virtualizationmatrix-172995.html
    Make sure that you meet these requirements in your private cloud. Some cases like VMware are still somewhat supported despite not being on the list.
    Besides this, you should make sure that your two virtual machines run on different physical servers in the cloud; otherwise you lose most of the RAC advantage regarding high availability, if both virtual servers happen to be running on the same hardware during a crash.
    Virtual servers are used in production environments, but you will have to take greater care over many aspects of RAC compared to physical hardware; e.g., something like VMware "live migration" can kill a RAC node due to timeouts.
    I would prefer physical hardware for RAC anytime over virtual servers and spare myself the hassle of dealing with all the possible issues arising from virtualization.
    And check Oracle's licensing policy...
    Running an Enterprise Edition RAC on, e.g., a large VMware cluster is insanely expensive: you pay for every CPU core the RAC COULD run on -> the entire cluster!
    If you must use virtual hardware but don't want to and need an argument against it, use the licensing issue.
    Regards
    Thomas
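    To make the licensing point concrete, a rough illustrative calculation (the core counts and the 0.5 x86 core factor are assumptions for the example; check the current Oracle core factor table and price list):
    10 ESX hosts x 16 cores each = 160 cores the RAC could run on
    160 cores x 0.5 core factor  = 80 processor licenses for EE plus the RAC option
    This applies even if the two RAC VMs are only ever given 8 cores between them.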

  • Can you re-export an nfs mount as an nfs share

    If so what is the downside?
    I'm asking because we currently have an iSCSI SAN, and a recent upgrade
    severely degraded iSCSI connectivity; consequently I can't mount my iSCSI
    volumes.
    Thanks,
    db

    Originally Posted by David Brown
    The filer/san NFS functionality is working normally. I can't access
    some of the iscsi luns. Thinking of just using NFS as the backend.
    Which would be a better sub forum?
    Thank you,
    db
    Depending on which Novell OS you are running... this subforum is for NetWare, but I suspect you are using OES Linux.
    I've never tried creating an NCP share on OES for a remote NFS mount on the server. My first guess would be that it is not allowed, and also not good practice. You could, however, in this situation and if you are running an OES 2 or OES 11 Linux server, try configuring an NFS mount on the OES server and then configuring the NCP share on top of it using Remote Manager on the server, as sketched below.
    What I would recommend, however, is to first see whether the iSCSI issue cannot be fixed or worked around.
    Could you describe a bit more of the situation there/what happened and what is not working on that end?
    -Willem
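    To make the suggested approach concrete, a minimal sketch, assuming OES Linux and an example filer export path (the NCP share itself is then created on the mount point through Remote Manager, as described above):
    # mount the filer's NFS export on the OES server (example host and path)
    mount -t nfs filer:/vol/data /mnt/nfsdata
    # then create the NCP share on /mnt/nfsdata via Remote Manager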

  • TM ignores NFS mounted user

    BS"D
    There has been a lot of discussion about using NFS mounts as targets for TM; I have the opposite problem: I want TM to back up a user whose home directory resides on an NFS server. The user's home directory is in /Users, as a link to the mount point of the NFS volume in /Volumes/.....
    TM backs up the system disks, but does not follow the link to the user's directory.
    Any ideas?
    Thanks

    I don't think TM supports network-mounted disk volumes. You should probably submit feedback to Apple and ask that this feature be supported. You might also be better off backing up that NFS volume from the server that hosts it.

  • Using NFS for RAC

    Hi, I am planning to use NFS for RAC but I am not able to find the certified NAS devices. Where can I get the list?
    Thanks

    NAS is NFS.
    See
    The following NFS storage vendors are supported: EMC, Fujitsu, HP, IBM, NetApp, Pillar Data, Sun, Hitachi.
    NFS file servers do not require RAC certification. The NFS file server must be supported by the system and storage vendors. 
    Currently, only NFS protocol version 3 (NFSv3) is supported.
    Hemant K Chitale

  • Any idea about the pricing of the hardwares for Oracle 10g RAC?

    Hi,
    If I go for a 2-node RAC using Dell servers and a disk array, any idea about the prices? (Just for the hardware.)
    Thanks.
    Regards,
    Jason

    It's all about the shared disk(s).
    If you are only ever going to use two (2) nodes, a SCSI JBOD solution is cheap (and practical too, with ASM as the cluster file system / volume manager).
    With more than 2 nodes, you step up to certified NFS, iSCSI or FC/AL-connected storage. More expensive, faster, more flexible.
    The computers are the cheapest portion of the cost, especially with multi-core, multi-CPU, 64-bit Linux-based servers being so inexpensive.

  • Vi error on nfs mount; E212: Can't open file for writing

    Hi all,
    I've set up a umask of 0 for testing on both the NFS client (CentOS 5.2) and the NFS server (OS X 10.5.5 Server).
    I can create files as one user and edit/save them as another user w/o issue when logged directly into the server via ARD.
    However, when I attempt the same from an NFS mount on a client machine, even as root, I get the following error using vi:
    "file" E212: Can't open file for writing
    Looking at the system.log file on the server, I see;
    kernel[0]: add_fsevent: no name hard-link! dropping the event. (event 2 vp == 0xa5db510 (-UNKNOWN-FILE)).
    This baffles me. My umask is 0, meaning files I create and attempt to edit as other users are 777, but I cannot save out edits unless I do a wq! in vi. At that point, the owner of the file changes to whoever did the vi.
    This isn't just a vi issue, as it happens using any editor, but I like to use vi.
    Any help is greatly appreciated. Hey, beer is on me!

    Hi all,
    Thanks for the replies
    I've narrowed it down to a CentOS client issue.
    Everything works fine using other Linux-based OSes as clients.
    Since we have such a huge investment in CentOS, I must figure out a workaround. Apple support wasn't much help as usual, although they were very nice.
    Their usual response is "it's unsupported".
    If Apple really wants to play in the enterprise or business space, they really need to change their philosophy. I mean, telling me that I shouldn't mount home directories via NFS is completely ridiculous.
    What am I supposed to use then, Samba or AFP? No, I don't think so. No offense to Microsoft, but why would I use a Windows-based file sharing protocol to mount network shares in a *nix environment???
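    One thing worth ruling out on a setup like this is root squashing on the export (root on the client being mapped to nobody on the server, which would explain failures even as root); a minimal sketch, with example host, path and network values:
    # from the CentOS client: see what the server exports
    showmount -e osxserver
    # on the OS X server, /etc/exports controls the mapping; an entry like this
    # (example path and network) keeps root as root instead of squashing it:
    # /Volumes/Data/Users -maproot=root -network 192.168.1.0 -mask 255.255.255.0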

  • Install Solaris 10 using HP-UX 11.23 NFS-mounted DVD

    Hi,
    I have a Sun Netra t1 105 w/512MB memory currently running Solaris 8. It does not have a DVD drive, so I mounted the Solaris 10 DVD on an HP-UX 11.23 server. From the Sun, I can read the DVD fine, but I want to do a manual install of Solaris 10 from the command line. When I execute ./installer it brings up the Live Update interface. The Sun box is not using a volume manager. What's the best way to upgrade Solaris 8 to 10 using the NFS-mounted DVD?

    Certify - Certification Matrix: Oracle Database - Enterprise Edition on HP-UX PA-RISC
    Server Certifications
    OS      Product      Certified With      Version      Status      Addtl. Info.      Components      Other      Install Issue
    11i v3 (11.31)      9.2 64-bit      N/A      N/A      Extended Support      Yes      None      None      None
    11i v2 (11.23)      9.2 64-bit      N/A      N/A      Extended Support      Yes      None      None      None
    11i v1 (11.11)      9.2 64-bit      N/A      N/A      Extended Support      Yes      None      None      None
    11.0      9.2 64-bit      N/A      N/A      Desup:OS      Yes      None      N/A      N/A
    * sounds like it is a certified platform (but support is at the Extended Support level) - I hope you can find the 9.2.0.8 patchset for this platform (it is the final patchset for the 9.2 series and should be applied).

  • How to use external table - creating NFS mount -the details involved

    Hi,
    We are using Oracle 10.2.0.3 on Solaris 10. I want to use external tables to load huge CSV data into the database. This concept was tested and found to be working fine. But my doubt is this: since ours is a J2EE application, the CSV files have to come from the front end - from the app server. So in this case, how do we move them to the db server?
    For my testing I just used PuTTY to transfer the file to the db server, then ran the dos2unix command to strip off the control character at the end of the file. But since this is to be done from the app server, PuTTY cannot be used. In this case, how can this be done? Are there any risks or security issues involved in this process?
    Regards

    orausern wrote:
    For my testing I just used PuTTY to transfer the file to the db server, then ran the dos2unix command to strip off the control character at the end of the file. But since this is to be done from the app server, PuTTY cannot be used. In this case, how can this be done? Are there any risks or security issues involved in this process?
    Not sure why "putty" cannot be used. This s/w uses the standard telnet and ssh protocols. Why would it not work?
    As for getting the files from the app server to the db server. There are a number of options.
    You can look at it from an o/s replication level. The command rdist is common on most (if not all) Unix/Linux flavours and is used for remote distribution and syncing of files and directories. It also supports scp as the underlying protocol (instead of the older rcp protocol).
    You can use file sharing - the typical Unix approach would be to use NFS. Samba is also an option if NTLM (Windows) is already used in the organisation and you want to hook this into your existing security infrastructure (e.g. using Microsoft's Active Directory).
    You can use a cluster file system - a file system that resides on shared storage and can be used by both app and db servers as a mounted/cooked file system. Cluster file systems like ACFS, OCFS2 and GFS exist for Linux.
    You can go for a pull method - where the db server, on client instruction (that provides the file details), connects to the app server (using scp/sftp/ftp), copies that file from the app server, and then proceeds to load it. You can even add a compression feature to this - so that the db server copies a zipped file from the app server and then unzips it for loading. A sketch follows below.
    Security issues. Well, if the internals are not exposed then security will not be a problem. For example, defining a trusted connection between the app server and db server - so the client instruction does not have to contain any authentication data. Letting the client instruction only specify the filename, and having the internal code use a standard and fixed directory structure. That way the client cannot instruct something like +/etc/shadow+ to be copied from the app server and loaded into the db server as a data file. Etc.
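    A minimal sketch of the pull method described above, as a shell step on the db server (host names, paths and the key-based trust setup are assumptions; the external table then points at the local directory the file lands in):
    #!/bin/sh
    # pull the CSV from the app server over scp (key-based auth assumed, so no password in the script)
    scp appserver:/u01/app/exports/load_me.csv /u01/oradata/ext_tab_dir/
    # strip DOS line endings before the external table reads the file
    dos2unix /u01/oradata/ext_tab_dir/load_me.csv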

  • Hardware for ORACLE RAC.

    hi
    my company decided to migrate from a single Oracle instance to Oracle RAC. Now we have to choose hardware for it. I want to know which hardware is better for a mid-size Oracle RAC database: the HP P4300 (LeftHand) or the EVA 4400? The first one is cheaper and supports 10G switches, but the second one only supports Fibre Channel.
    By the way, I want to know whether the HP P4300 supports an Oracle RAC database.
    thx in adv.
    Edited by: user9233061 on Mar 7, 2011 12:59 AM

    hi
    basically every machine can run Oracle RAC. It's a matter of tuning how well it performs.
    The operating system also brings its own issues and demands.
    The main HW issue with RAC is shared storage, so if you use a disk array that can work with all RAC nodes, you're OK.
    You can configure it over direct connections (SAS), a network (iSCSI) or Fibre Channel, with or without redundancy.
    Go for it.
    Ask the HP consultant as well for more details and their opinion on how they view the specific HW from a database point of view.

  • Where can I find mounting rail hardware for single CPU G4 Xserve?

    I want to mount my G4 single-CPU 1 GHz in the four-post rack where I now have a new G5 (to keep them all together). However, I cannot find the rest of the hardware for it that I got when I bought it several years ago; all I have are the dog ears that I used to mount it in a telco rack.
    I've looked on eBay and a few other places. Surely this is available somewhere? I hope?
    Thanks...

    There is no such published reference guide. The closest thing would be the global price list but it's generally not distributed. (What Jouni mentioned was probably an extract of that very large document.)
    Due to the broad range of products (and associated bundles and services), Cisco uses online configuration and ordering tools (Cisco Commerce Workspace or CCW) internally and with its partners. The information in it is very dynamic and can change day to day as the tens of thousands of products Cisco offers are introduced, deprecated (i.e., Approaching or at End of Sales), offered in different promotional bundles, etc.
    When a Cisco salesperson or partner solution advisor talks with a customer, they take the customer's input and build equipment, software and services configuration sets supporting the proposal. CCW will validate the required items are ordered and generate a configuration set that has not only the product IDs (PIDs, generally referred to as SKUs or Stock Keeping Units in this context) but also the plain language descriptions of what each PID means.
    They should be conveying that information to you (or any customer they are engaged with) to enable you to make the informed decision you mention.
