Oracle's File system over GPFS on Oracle RAC

Hi all,
I would like to know which of Oracle's file systems should be placed on GPFS in an Oracle RAC implementation.
Regards,
Sebastian.

You didn't provide info about the DB release and OS.
But you are probably talking about AIX (I can't see any reason to use GPFS on Linux for RAC, and I don't remember whether this FS was/is supported for RAC on Linux; check the Certification Matrix for this information). Oracle itself provides only one shared "filesystem" for AIX, and that is ASM.
The options you can use then depend on the DB version.
Before 11g R2 you had to store the OCR and voting disks outside of ASM (so you needed either raw devices or a cluster filesystem, which GPFS is). Since 11g R2 you can store these disks in ASM directly.
On Linux, if you don't want to use ASM to store database files, you can use the OCFS2 cluster filesystem.
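For illustration, a minimal sketch of an ASM disk group that, on 11g R2, could also hold the OCR and voting files (the disk group name and device paths are hypothetical):

-- run on the ASM instance via SQL*Plus; with normal redundancy, three disks
-- give the three failure groups that voting files need
CREATE DISKGROUP crsdg NORMAL REDUNDANCY
  DISK '/dev/rhdisk10', '/dev/rhdisk11', '/dev/rhdisk12';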

Similar Messages

  • Local file system for archive destination in RAC with ASM

    Hi Gurus,
    I need some info.
    I have ASM in a 2-node RAC.
    The client does not want to use a flash recovery area.
    Can we use a local file system rather than ASM as the archive destination,
    like /xyzlog1 for archives coming from node 1 and /xyzlog2 for archive logs coming from node 2?
    The important thing is that these two destinations are not shared between the nodes.
    OS is Solaris SPARC 10.
    Version is 10.2.0.2.

    There is plenty of space in the storage.
    Please tell me how you would generally do this.
    Do we take one disk from the storage, format it with a local file system, and share it between the 2 nodes?
    If so, that mount point will have the same mount point name when seen from the other node, right?
    And in this scenario, if one instance is down, can archives be applied from the shared mount point that lives on the down node?

    Are you currently using a shared ASM location for archives?
    If so, you can add a CANDIDATE disk to your existing (shared) ARCHIVE disk group.
    If not, you can create a new (shared) disk group from the formatted LUNs, sized according to your space requirements. Then point log_archive_dest_1 on both nodes to that single shared location (disk group), as sketched below.
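    A hedged sketch of that last step (the +ARCH disk group name is hypothetical):

    -- run once; SID='*' points every instance at the same shared disk group
    ALTER SYSTEM SET log_archive_dest_1='LOCATION=+ARCH' SCOPE=BOTH SID='*';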

  • SAP/ORACLE File Systems Disappeared in Local Zone

    I created some 20-30 file systems as metadevices for a SAP/Oracle installation on my 6320 SAN for a local zone.
    I did a newfs on all of them and then mounted them in the global zone with their appropriate mount points.
    I then did a zonecfg with the following input.
    zonepath: /export/zones/zsap21
    autoboot: true
    pool:
    inherit-pkg-dir:
    dir: /lib
    inherit-pkg-dir:
    dir: /platform
    inherit-pkg-dir:
    dir: /sbin
    inherit-pkg-dir:
    dir: /opt/sfw
    inherit-pkg-dir:
    dir: /usr
    fs:
    dir: /oracle
    special: /oracle_zsap21
    raw not specified
    type: lofs
    options: []
    fs:
    dir: /oracle/stage/920_64
    special: /oracle/stage/920_64_zsap21
    raw not specified
    type: lofs
    options: []
    fs:
    dir: /temp
    special: /temp_zsap21
    raw not specified
    type: lofs
    options: []
    fs:
    dir: /usr/local
    special: /usr/local_zsap21
    raw not specified
    type: lofs
    options: []
    fs:
    dir: /usr/sap
    special: /usr/sap_zsap21
    raw not specified
    type: lofs
    options: []
    fs:
    dir: /usr/sap/install
    special: /usr/sap/install_zsap21
    raw not specified
    type: lofs
    options: []
    fs:
    dir: /usr/sap/trans
    special: /usr/sap/trans_zsap21
    raw not specified
    type: lofs
    options: []
    fs:
    dir: /export/home/zsap21
    special: /export/home_zsap21
    raw not specified
    type: lofs
    options: []
    fs:
    dir: /oracle/FS1
    special: /oracle/FS1
    raw not specified
    type: lofs
    options: []
    fs:
    dir: /oracle/FS1/920_64
    special: /oracle/FS1/920_64
    raw not specified
    type: lofs
    options: []
    fs:
    dir: /oracle/FS1/mirrlogA
    special: /oracle/FS1/mirrlogA
    raw not specified
    type: lofs
    options: []
    fs:
    dir: /oracle/FS1/oraarch
    special: /oracle/FS1/oraarch
    raw not specified
    type: lofs
    options: []
    fs:
    dir: /oracle/FS1/origlogA
    special: /oracle/FS1/origlogA
    raw not specified
    type: lofs
    options: []
    fs:
    dir: /oracle/FS1/origlogB
    special: /oracle/FS1/origlogB
    raw not specified
    type: lofs
    options: []
    fs:
    dir: /oracle/FS1/saparch
    special: /oracle/FS1/saparch
    raw not specified
    type: lofs
    options: []
    fs:
    dir: /oracle/FS1/sapdata1
    special: /oracle/FS1/sapdata1
    raw not specified
    type: lofs
    options: []
    ***********more available but I truncated it here**********************
    I successfully installed and configured the zone.
    I then logged into the local zone and installed a new version of Oracle and a working SAP instance. I know that it worked because I was able to log into it.
    Upon rebooting the server and the local zone I have lost access to all of my file systems in the Oracle/SAP zone.
    It's almost as if the global zone has not mounted its file systems over the local zone's mount points. In the local zone I can see no files in the directories where I would expect Oracle or SAP files.
    I either mis-configured the zonecfg or missed a step somewhere.
    I suspect that my file system contents are still around somewhere, waiting to be hooked up with a zone's mount point.
    In the local zone a df -k shows all of the file systems (as if the local zone knows which file systems belong to it), but they all have the same size and free space (probably from the root zone).
    Any thoughts appreciated.
    Atis

    Do you have a mount point within the zone path for these oradb mounts?
    You have to add a directory entry for /oradb in the zone's root directory, i.e. <zone_path>/root/oradb, then wire it up in zonecfg as sketched below.
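    A hedged sketch of those steps (the zone name and /oradb paths follow this thread; adjust to your layout):

    # first create the mount point under the zone root:
    #   mkdir -p /export/zones/zsap21/root/oradb
    zonecfg -z zsap21
    add fs
    set dir=/oradb
    set special=/oradb_zsap21
    set type=lofs
    end
    commit
    exit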

  • Oracle RAC and TDE?

    We are running Oracle RAC on 10.2.0.3, Linux Itanium platform. I am setting up TDE for the first time, and I set up my wallet location on an OCFS file system so that each node in the cluster has access to the key. Is that all we need to do, and is this a supported configuration for TDE in a RAC environment?
    Do you have to open the wallet on each instance during instance startup when running RAC?
    Also, we have a physical standby server configured; I set up the same wallet location on the physical standby and copied the wallet file over. Is that all we need to do for the standby server?
    Thanks.

    Peter,
    Good info and your video makes everything look easy.
    In addition to the encrypted wallet file (ewallet.p12), I also have a cwallet.sso file in the local file system (not ACFS) on both RAC nodes of my Primary and both Standby nodes.
    If I start the database and then run: SELECT * FROM V$ENCRYPTION_WALLET; it says the wallet status is open. However, as soon as a user tries to connect through our application (using jdbc), I get the "ORA-28365: wallet is not open" errors in the alert log. So then I have to run: ALTER SYSTEM SET ENCRYPTION WALLET OPEN IDENTIFIED BY "<Wallet Password>"; on each node and then users can connect through the application.
    Any ideas why auto-login doesn't work and why everything is grayed out on the Wallet Tab drop down menu in OWM?
    Thanks.
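    For reference, a minimal sketch of the per-node workaround described above (the wallet password is a placeholder):

    -- on each instance after startup, until auto-login works:
    ALTER SYSTEM SET ENCRYPTION WALLET OPEN IDENTIFIED BY "wallet_password";
    -- verify:
    SELECT status FROM V$ENCRYPTION_WALLET;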

  • Copy oracle server data to remote system over internet

    Hi,
    I am using a Red Hat 4 Linux system as an Oracle 10g server. My database is in archivelog mode. I am not good at Data Guard, so I am not using it.
    But I want to copy my archive log files to another Linux system over the internet, so that in case of a server crash I can recover my database using the backup and archive log files.
    It would be best if I could do it at each archive log creation, like Data Guard, but it is OK if it happens at a time interval.
    Please suggest a tool which can copy data over the internet on a Linux system; it should be free and easy to use.
    Thanks
    umesh

    One tool I can suggest is the OS copy command.
    I used to use this for one of our servers, which was on SCO Unix with Oracle 7. But there are implications with this:
    1. The archives got corrupted frequently during the copy; then we had to identify the bad archives by matching their size on production and on standby and copy them once again.
    2. We had to manually recover the database every 15 minutes.
    I would suggest going for an Oracle-managed setup, i.e. a standby configuration. It's not tough.
    Regards,
    S.K.
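    As a rough illustration of the manual apply step described above (a hedged sketch; it assumes the copied archive logs are visible to a mounted standby database):

    -- on the standby, repeated on a schedule (e.g. every 15 minutes):
    RECOVER STANDBY DATABASE;
    -- supply the names of the copied archive logs when prompted; type CANCEL to stop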

  • Store a large volume of image files: what is better, file system or Oracle database?

    I am working on IM (Image Management) software that needs to store and manage over 8,000,000 images.
    I am not sure whether I should use the file system to store the images or the database (BLOB or CLOB).
    Until now I have only used the file system.
    Could someone who already has experience storing large volumes of images tell me the advantages and disadvantages of using the file system versus the Oracle database?
    My initial database will have 8,000,000 images, and it will grow by 3,000,000 a year.
    Each image will have a size between 200 KB and 8 MB, but the mean is 300 KB.
    I am using Oracle 10g. I read in other forums about PostgreSQL and Firebird that it isn't good to store images in the database because the database always crashes.
    I need to know if it is the same with Oracle, and why. Can I trust Oracle for this large a service? Are there any tips for storing files in the database?
    Thank's for help.
    Best Regards,
    Eduardo
    Brazil.

    1) Assuming I'm doing my math correctly, you're talking about an initial load of 2.4 TB of images with roughly 0.9 TB added per year, right? That sort of data volume certainly isn't going to cause Oracle to crash, but it does put you into the realm of a rather large database, so you have to be rather careful with the architecture.
    2) CLOBs store Character Large OBjects, so you would not use a CLOB to store binary data. You can use a BLOB. And that may be fine if you just want the database to be a bit-bucket for images. Given the volume of images you are going to have, though, I'm going to wager that you'll want the database to be a bit more sophisticated about how the images are handled, so you probably want to use [Oracle interMedia|http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14302/ch_intr.htm#IMURG1000] and store the data in OrdImage columns which provides a number of interfaces to better manage the data.
    3) Storing the data in a database would generally strike me as preferable, if only because of the recoverability implications. If you store data on a file system, you will inevitably have cases where an application writes a file and the transaction to insert the row into the database fails, or the transaction to delete a row from the database succeeds before the file is deleted, which can make things inconsistent (images with nothing in the database and database rows with no corresponding images). If something fails, you also can't restore the file system and the database to the same point in time.
    4) Given the volume of data you're dealing with, you may want to look closely at moving to 11g. There are substantial benefits to storing large objects in 11g with Advanced Compression (allowing you to compress the data in LOBs automatically and to automatically de-dupe data if you have similar images). SecureFile LOBs can also be used to substantially reduce the amount of REDO that gets generated when inserting data into a LOB column.
    Justin
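    As a hedged illustration of point 4 (the table and column names are hypothetical; COMPRESS and DEDUPLICATE require the Advanced Compression option):

    CREATE TABLE images (
      id  NUMBER PRIMARY KEY,
      img BLOB
    )
    LOB (img) STORE AS SECUREFILE (
      COMPRESS MEDIUM
      DEDUPLICATE
    );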

  • Oracle cache and File System cache

    At a checkpoint, the Oracle cache is written to disk. But if an Oracle database keeps its datafiles on a file system, it is likely that the data still sits in the file system cache. I don't know how Oracle can keep the data consistent in that case.

    Thanks for your feedback. I am almost clear about this issue now, except for one point that needs to be confirmed: do you mean that on Linux or Unix, if required, we can set "direct to disk" at the OS level, while on Windows "direct to disk" is the default and we do not need to set it manually?
    And I have a further question. If a database is stored on a SAN disk, say a volume from a disk array, and the disk array can take block-level snapshots of a volume, we need to implement an online backup of the database. The steps are: alter tablespace begin backup; alter system suspend; take a snapshot of the volume which stores all the database files, including datafiles, redo logs, archived redo logs, control files, server parameter file, network parameter files, and password file. Do you think this backup has integrity or not? Please note that we do not flush the FS cache before these steps. Let's assume the SAN cache is flushed automatically. Can I consider it consistent because the redo writes are synchronous?
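    For clarity, the sequence being described, sketched in SQL (the array snapshot itself happens outside the database):

    ALTER DATABASE BEGIN BACKUP;       -- or ALTER TABLESPACE ... BEGIN BACKUP per tablespace
    ALTER SYSTEM SUSPEND;
    -- take the array snapshot of the volume here
    ALTER SYSTEM RESUME;
    ALTER DATABASE END BACKUP;
    ALTER SYSTEM ARCHIVE LOG CURRENT;  -- archive the redo needed to recover the snapshot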

  • SC 3.0 file system failover for Oracle 8i/9i

    I'm an Oracle DBA for our company, and we have been using shared NFS mounts successfully for the archivelog space on our production 8i 2-node OPS Oracle databases. From each node, both archivelog areas are always available. This is the setup recommended by Oracle for OPS and RAC.
    Our SA team is now wanting to change this to a file system failover configuration instead. And I do not find any information from Oracle about it.
    The SA request states:
    "The current global filesystem configuration on (the OPS production databases) provides poor performance, especially when writing files over 100MB. To prevent an impact to performance on the production servers, we would like to change the configuration ... to use failover filesystems as opposed to the globally available filesystems we are currently using. ... The failover filesystems would be available on only one node at a time, arca on the "A" node and arcb on the "B" node. in the event of a node failure, the remaining node would host both filesystems."
    My question is, does anyone have experience with this kind of configuration with 8iOPS or 9iRAC? Are there any issues with the auto-moving of the archivelog space from the failed node over to the remaining node, in particular when the failure occurs during a transaction?
    Thanks for your help ...
    -j

    The problem with your setup of NFS cross mounting a filesystem (which could have been a recommended solution in SC 2.x for instance versus in SC 3.x where you'd want to choose a global filesystem) is the inherent "instability" of using NFS for a portion of your database (whether it's redo or archivelog files).
    Before this goes up in flames, let me speak from real world experience.
    Having run HA-OPS clusters in the SC 2.x days, we used either private archive log space, or HA archive log space. If you use NFS to cross mount it (either hard, soft or auto), you can run into issues if the machine hosting the NFS share goes out to lunch (either from RPC errors or if the machine goes down unexpectedly due to a panic, etc). At that point, we had only two options : bring the original machine hosting the share back up if possible, or force a reboot of the remaining cluster node to clear the stale NFS mounts so it could resume DB activities. In either case any attempt at failover will fail because you're trying to mount an actual physical filesystem on a stale NFS mount on the surviving node.
    We tried to work this out using many different NFS options, we tried to use automount, we tried to use local_mountpoints then automount to the correct home (e.g. /filesystem_local would be the phys, /filesystem would be the NFS mount where the activity occurred) and anytime the node hosting the NFS share went down unexpectedly, you'd have a temporary hang due to the conditions listed above.
    If you're implementing SC 3.x, use hasp and global filesystems to accomplish this if you must use a single common archive log area. Isn't it possible to use local/private storage for archive logs or is there a sequence numbering issue if you run private archive logs on both sides - or is sequencing just an issue with redo logs? In either case, if you're using rman, you'd have to back up the redologs and archive log files on both nodes, if memory serves me correctly...

  • How to insert a JPG file from file system to Oracle 10g?

    I have developed a schema that stores photos as a BLOB, with the text description as a CLOB, plus the original filename and file size.
    I also use ctxsys.context to index TEXT_DESCRIPTION in order to perform Oracle Text searches, and it works.
    I would like to insert a JPG file, say C:\MYPHOTO\Photo1.jpg, as a new record. How can I do this in SQL*Plus and/or SQL*Loader?
    And how can I retrieve PHOTO_IMAGE back to the file system using SQL*Plus and/or the command line?
    See the following script:
    create user myphoto identified by myphoto;
    grant connect, resource, ctxapp to myphoto;
    connect myphoto/myphoto@orcl;
    PROMPT Creating Table PHOTOS
    CREATE TABLE PHOTOS
    (PHOTO_ID VARCHAR2(15) NOT NULL,
    PHOTO_IMAGE BLOB,
    TEXT_DESCRIPTION CLOB,
    FILENAME VARCHAR2(50),
    FILE_SIZE NUMBER NOT NULL,
    CONSTRAINT PK_PHOTOS PRIMARY KEY (PHOTO_ID));
    create index idx_photos_text_desc on
    PHOTOS(TEXT_DESCRIPTION) indextype is ctxsys.context;
    INSERT INTO PHOTOS VALUES
    ('P00000000000001', empty_blob(), empty_clob(),
    'SCGP1.JPG',100);
    INSERT INTO PHOTOS VALUES
    ('P00000000000002', empty_blob(), 'Cold Play with me at the concert in Melbourne 2005',
    'COLDPLAY1.JPG',200);
    INSERT INTO PHOTOS VALUES
    ('P00000000000003', empty_blob(), 'My parents in Melbourne 2001',
    'COLDPLAY1.JPG',200);
    EXEC CTX_DDL.SYNC_INDEX('idx_photos_text_desc');
    SELECT PHOTO_ID ,TEXT_DESCRIPTION
    FROM PHOTOS;
    SELECT score(1),PHOTO_ID ,TEXT_DESCRIPTION
    FROM PHOTOS
    WHERE CONTAINS(TEXT_DESCRIPTION,'parents',1)> 0
    ORDER BY score(1) DESC;
    SELECT score(1),PHOTO_ID ,TEXT_DESCRIPTION
    FROM PHOTOS
    WHERE CONTAINS(TEXT_DESCRIPTION,'cold play',1)> 0
    ORDER BY score(1) DESC;
    SELECT score(1),score(2), PHOTO_ID ,TEXT_DESCRIPTION
    FROM photos
    WHERE CONTAINS(TEXT_DESCRIPTION,'Melbourne',1)> 0
    AND CONTAINS(TEXT_DESCRIPTION,'2005',2)> 0
    ORDER BY score(1) DESC;

    Hi
    You can use the following to insert an image:
    create table imagetab(id number primary key, imagfile blob, fcol varchar2(10));
    create or replace directory imagefiles as 'c:\';
    declare
        v_bfile BFILE;
        v_blob  BLOB;
    begin
        -- create the row with an empty BLOB and grab its locator
        insert into imagetab (id, imagfile, fcol)
        values (3, empty_blob(), 'BINARY')
        returning imagfile into v_blob;
        -- point a BFILE at the OS file and copy its contents into the BLOB
        v_bfile := BFILENAME ('IMAGEFILES', 'MyImage.JPG');
        Dbms_Lob.fileopen (v_bfile, Dbms_Lob.File_Readonly);
        Dbms_Lob.Loadfromfile (v_blob, v_bfile, Dbms_Lob.Getlength(v_bfile));
        Dbms_Lob.Fileclose(v_bfile);
        commit;
    end;
    /
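    The original question also asked how to get the image back out. A minimal sketch using UTL_FILE (assuming the same IMAGEFILES directory object; note UTL_FILE writes on the database server, not the client):

    declare
        v_blob   BLOB;
        v_file   UTL_FILE.FILE_TYPE;
        v_buffer RAW(32767);
        v_amount BINARY_INTEGER := 32767;
        v_pos    INTEGER := 1;
        v_len    INTEGER;
    begin
        select imagfile into v_blob from imagetab where id = 3;
        v_len := Dbms_Lob.Getlength(v_blob);
        -- 'wb' opens the server-side file in binary write mode
        v_file := UTL_FILE.FOPEN('IMAGEFILES', 'MyImageCopy.JPG', 'wb', 32767);
        while v_pos <= v_len loop
            -- read up to 32767 bytes from the BLOB and append them to the file
            Dbms_Lob.Read(v_blob, v_amount, v_pos, v_buffer);
            UTL_FILE.PUT_RAW(v_file, v_buffer, TRUE);
            v_pos := v_pos + v_amount;
        end loop;
        UTL_FILE.FCLOSE(v_file);
    end;
    /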

  • Uploaded Files stored in Oracle 10G database or in Unix File system

    Hey All,
    I am trying to understand best practices for storing uploaded files. Should you store them within the database itself (this is the current method we are using, leveraging BLOB storage) or use a BFILE locator to use file system storage (we have our DBs on UNIX)... or is there another method I should be entertaining? I have read arguments on both sides of this question. I wanted to see what answers forum readers could provide!! I understand there are quite a few factors, but the situation I am in is as follows:
    1) Storing text and pdf documents.
    2) File sizes range from a few Kb to up to 15MB in size
    3) uploaded files can be deleted and updated / replaced quite frequently
    Right now we have an Oracle stored procedure that is uploading the files binary data into a BLOB column on our table. We have no real "performance" problems with this method but are entertaining the idea of using the UNIX file system for storage instead of the database.
    Thanks for the insight!!
    Anthony Roeder

    Anthony,
    First word you must learn here in this forum is RESPECT.
    If you require any further explanation, just say so.
    BLOB compared with BFILE
    Security:
    BFILEs are inherently insecure, as insecure as your operating system (OS).
    Features:
    BFILEs are not writable from typical database APIs whereas BLOBs are.
    One of the most important features is that BLOBs can participate in transactions and are recoverable. Not so for BFILEs.
    Performance:
    Roughly the same.
    Upping the size of your buffer cache can make a BIG improvement in BLOB performance.
    BLOBs can be configured to exist in Oracle's cache which should make repeated/multiple reads faster.
    Piece-wise/non-sequential access of a BLOB is known to be faster than that of a BFILE.
    Manageability:
    Only the BFILE locator is stored in an Oracle BACKUP. One needs to do a separate backup to save the OS file that the BFILE locator points to. The BLOB data is backed up along with the rest of the database data.
    Storage:
    The amount of tablespace required to store file data in a BLOB will be larger than that of the file itself due to the LOB index, which is the reason for the better BLOB performance for piece-wise random access of the BLOB value.
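    As a hedged illustration of the caching point above (the table and column names are hypothetical):

    -- allow repeated reads of the LOB to be served from the buffer cache
    ALTER TABLE docs MODIFY LOB (doc_blob) (CACHE);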

  • Oracle vm 3.1.1 server pool file system corruption

    Hi
    I have a small lab environment composed by 2 Oracle VM Servers and SAN access.
    This morning the OCFS2 Server Pool File System (the 12GB one) somehow became corrupted, and the server pool seems dead. How can I recover from this error? I looked in the documentation and MOS but found nothing.
    Thank you

    I know it would be a lot of work in such a scenario. I don't know what you're using for storage, but you could snapshot or replicate the pool storage to create a backup that can be restored.
    I do wish they would let you select more than one VM guest in the VM Manager. It would be really nice to have, and seems like a rather easy feature to implement.

  • How to create a new user id in OID for Oracle Collab suite File System

    Dear Friends,
    I want to know how to create a new user id in Oracle Internet Directory which I can use for a new subscription to the Oracle Collaboration Suite file system.
    Please do the needful, and thanks in advance.
    With warm regards
    R.Prasad

    Hi!
    The way you suggest should not be used.
    A CS user will be created as a normal OID user and will receive the CS attributes in a different subtree later during the provisioning.
    For creating CS users use oesuser and uniuser. Files provisioning will work in a different manner anyway.
    cu
    Andreas

  • Oracle Internet File System Release 9.0.2 doesn't fully download

    So that we may better diagnose DOWNLOAD problems, please provide the following information.
    - Server name - download-uk.oracle.com
    - Filename - hpunix_ifs9.0.2.0.0.cpio.gz
    - Date/Time - 17th Sep 2002 2pm, 5pm, 18th Sept 9am.
    - Browser + Version -- Internet Explorer 6
    - O/S + Version -- Windows 2000
    - Error Msg
    The download of the Oracle Internet File System software never fully completes. It is a 157MB file, and although it states that the download has completed successfully, I have only retrieved parts of the file (86MB, 17MB, and 87KB this morning). Is there a problem with the server I am downloading from? Any help would be great, as I require this software.
    Cheers
    Darren.

    I have Jdeveloper 9.0.2 installed with Oracle 9i Developer Suite (9.0.2) on a Pentium III 600Mhz PC. The operating system is Win 2000.
    To solve this startup problem of JDeveloper, I uninstalled Oracle 9i Developer Suite (9.0.2) and reinstalled it completely. After reinstallation, JDeveloper starts up again. The next day, with nothing else having been installed on the PC, I have the same problem back.
    If I start JDeveloper, I get the hourglass for a short time (5 sec), then nothing happens. The CPU usage goes from low (1%) to 100%, consumed by system, and stays high forever.
    What can be the problem? Has anyone else had this problem?

    Haven't seen this one. Did you switch the JDK to 1.4? There are issues with 9.0.2 under JDK 1.4.
    Rob

  • Oracle eSSO - File System Synchronization

    I'm having a lot of trouble just setting up a simple file system synchronization for Oracle Enterprise Single Sign-On v10.1.0.2 or v10.1.0.4 (I've tried both). I've tried it many ways, but the agent never seems to be visible to the admin console. Here is the simplest example:
    On a single test machine, install the Admin Console. Use the Admin Console to create a new set of Global Agent Settings. Leave everything to default except under Synchronization, click Manage Synchronizers. Create a new synchronizer with synch type 'File System'. Go to the Required subkey and enter a network share under 'Server'. In this example, I created a share on the same local machine. The share gives 'Full Control' to 'Everyone' and 'Domain Users' groups. Save these settings in the Admin Console. Then Go to Tools > Generate Customized MSI in the Admin Console, exporting the newly created Global Agent Settings to the customized agent installer MSI. Then on the same test machine, install the agent by using the customized MSI. During the agent installation, choose 'Custom' and enable installation of Extensions > Synchronization Manager and File System synchronization type.
    So, that's what I've done. When I then go back to the Admin Console, connect to the repository (the share set up on the local machine), and select 'Configure SSO Support', the result is that it creates a folder called 'People' and two override objects: ENTLIST and ADMINOVERRIDE. That's it. No matter how many agents I install, on this computer or elsewhere on the network, 1) the admin console does not seem to see them, and 2) the ENTLIST of customized applications never gets pushed to the agents. In fact, the ENTLIST shown in the Repository within the Admin Console does not seem to update when the apps list in the admin console is updated (even if I press 'Refresh' in the repository).
    Can anyone help? Is it something silly like the order in which I've done things, or did I miss a step somewhere?

    Hi,
    I have the same problem, but I use an Active Directory in the backend :o(
    I'm able to configure the admin console (connection to AD, create a global agent for my AD server). When I install the ESSO-Agent on the same machine (where the administrative console is already installed), it's quite easy, because the following "option" is available: "Export Apps to Agent"!!! Using this, there is no problem.
    On the other hand, when I install the ESSO-Agent on the user's workstation, it's really another story :o( For the moment, I'm not able to get the credentials (pre-configured applications) from Active Directory. I think that the (Global Agent) settings have been successfully pushed into AD, but I'm not able to get them from the user's workstation.
    What are the steps to realize this operation (get the pre-configured applications from AD)?
    I always get a connection window which requires the following three parameters: username, password and user path.
    Because AD is my backend server, I thought that the following "values" were correct...., but that's not the case, and I'm sure that I'm not connected to AD:
    username: cn=toto toto,cn=users,dc=name,dc=com
    password: mypwd
    user path: dc=name,dc=com
    I hope that you have resolved your problem, and that someone could help me :o)
    Thanks a lot.
    c u

  • Does /sapmnt need to be in a cluster file system? (SAP ECC 6.0 with Oracle RAC)

    We are going to be installing SAP with Oracle 10.2.0.4 RAC on Linux SuSE 10 and OCFS2. The Oracle RAC documentation states:
    You must store the following components in the cluster file system when you use RAC
    in the SAP environment:
    - Oracle Clusterware (CRS) Home
    - Oracle RDBMS Home
    - SAP Home (also /sapmnt)
    - Voting Disks
    - OCR
    - Database
    What I want to ask is whether I really need to put SAP Home (also /sapmnt) on the cluster file system. I will build a two-node Oracle 10g RAC, and I also have another two nodes on which to install the SAP CI and DI. My original thinking is that /sapmnt is an NFS share mounted on all four nodes (the RAC nodes and CI/DI), with all the Oracle components on OCFS2 (only the two RAC nodes use OCFS2). Can anybody tell me whether SAP Home (also /sapmnt) can be an NFS mount instead of OCFS2? Thanks.
    Best regards,
    Peter

    Hi Peter,
    I don't think you need to keep /sapmnt on OCFS2. The reason any file system needs to be clustered in a RAC environment is that data stored in the cache of one Oracle instance must be accessible to any other instance, by transferring it across the private network while preserving data integrity and cache coherency through locking and other synchronization information transmitted across the cluster nodes.
    As this applies to redo files, datafiles, and control files only, you should be fine with an NFS mount of /sapmnt shared across the nodes rather than OCFS2.
    -SV
