Linux Cluster File System partitions for 10g RAC

Hi Friends,
I plan to install a 2-node Oracle 10g RAC on RHEL, and I plan to use the plain Linux file system itself for the OCR, voting disk and datafiles (no OCFS2/raw devices/ASM).
I have SAN storage.
I would like to know how I can create shared/cluster partitions for the OCR, voting disk and datafiles (common storage on the SAN).
Do I need to install a Linux cluster file system to create these shared partitions (the way we have Sun Cluster on Solaris)?
If so, let me know which versions are supported and provide the necessary note/link.
Regards,
DB

Hi,
The link below may be useful to you:
ORACLE-BASE - Oracle 10g RAC On Linux Using NFS
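As a rough sketch of what that approach looks like (the server name "nas1", export path and mount point are placeholders, and the exact mount options should be verified against the article and Oracle's NFS support notes for 10g):

# /etc/exports on the NFS server
/shared_oracle  *(rw,sync,no_root_squash)

# /etc/fstab entry on each RAC node, using the options commonly recommended
# for Oracle datafiles, OCR and voting disk over NFS on Linux
nas1:/shared_oracle  /u01/shared  nfs  rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0  0 0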

Similar Messages

  • Linux Cluster File system

    Has anyone seen a release date for Oracle's Linux cluster file system, which they announced last week? I am looking at deploying three 4-node clusters over the next few months and would rather not use raw devices.

    Here is the link to the news release:
    http://www.linuxjournal.com/article.php?sid=6123&mode=thread&order=0

  • Does /sapmnt need to be in a cluster file system (SAP ECC 6.0 with Oracle RAC)

    We are going to be installing SAP with Oracle 10.2.0.4 RAC on Linux SuSE 10 and OCFS2. The Oracle RAC documentation states:
    You must store the following components in the cluster file system when you use RAC
    in the SAP environment:
    - Oracle Clusterware (CRS) Home
    - Oracle RDBMS Home
    - SAP Home (also /sapmnt)
    - Voting Disks
    - OCR
    - Database
    What I want to ask is whether I really need to put the SAP Home (also /sapmnt) on a cluster file system. I will build a two-node Oracle 10g RAC, and I also have another two nodes on which to install the SAP CI and DI. My original thinking is that /sapmnt is an NFS share mounted on all four nodes (the RAC nodes and CI/DI), and that all the Oracle files are on OCFS2 (only the two RAC nodes use OCFS2). Can anybody tell me whether the SAP Home (also /sapmnt) can be an NFS mount rather than OCFS2? Thanks.
    Best regards,
    Peter

    Hi Peter,
    I don't think you need to keep /sapmnt on OCFS2. The reason any file system needs to be clustered is that, in a RAC environment, data stored in the cache of one Oracle instance must be accessible to any other instance; it is transferred across the private network, and data integrity and cache coherency are preserved by transmitting locking and other synchronization information across the cluster nodes.
    As this applies only to the redo logs, datafiles and control files, you should be fine with an NFS mount of /sapmnt shared across the nodes rather than OCFS2.
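    A minimal sketch of what that could look like (the host name "sapnfs" and node names are placeholders; check the mount options against the SAP NFS notes for your release):

    # /etc/exports on the host that owns /sapmnt
    /sapmnt  racnode1(rw,no_root_squash) racnode2(rw,no_root_squash) cinode(rw,no_root_squash) dinode(rw,no_root_squash)

    # /etc/fstab entry on each of the four nodes
    sapnfs:/sapmnt  /sapmnt  nfs  rw,bg,hard,intr,tcp  0 0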
    -SV

  • RAC 10gR2 using ASM: for RMAN, a cluster file system or a local directory?

    The environment is composed of a two-node RAC using ASM. I have to determine which design is better for backup and recovery with RMAN. The backups are going to be saved to disk only. The database is transactional only and small in size.
    I am not sure how to create a cluster file system or whether it is better to use a local directory. Also, what is the benefit of having a recovery catalog, given that it is optional?
    I very much appreciate your advice and recommendation, Terry

    Arf,
    I am new to RAC. I analyzed Alejandro's script. His main connection is to the first instance; then, through SQL*Plus, he connects to the second instance. He exits the second instance and starts the RMAN backup of the database. Therefore the backup of the database is done from the first instance.
    I do not see where he sets the environment again to change to the second instance and run RMAN to back up the second instance. It looks to me as if the backup is only done from the first instance, not from the second instance. I may be wrong, but I do not see the second instance backup.
    Kindly, I request your assistance on the steps/connection to back up the second instance. Thank you so much! Terry

  • Raw devices versus Cluster File Systems in RAC 10gR2

    Hi,
    Is anyone using cluster file systems in a RAC 10gR2 installation, specifically IBM's GPFS?
    I've visited a company that is running RAC 10gR2 on AIX over raw devices. Why would someone choose to use raw devices, with all the administration problems they bring, when all the modern file systems are so powerful? Are there any issues when using cluster file systems with RAC? Are there considerable performance benefits when using raw devices with RAC?
    I've always used Oracle stand-alone instances over file systems (since version 7), and performance was always very good. I tested raw devices almost 10 years ago, and even at that time (the hardware today is much better - SAN, 15K rpm disks, huge caches - and the file system software today is much better too) the cost of administering them did not compensate for the benefits (only about 5% faster than file systems in Oracle 7).
    So, besides any limitations imposed by RAC, why use raw devices nowadays?
    Regards,
    Antonio Belloni

    Hi,
    Spontaneously, my question would be: how did you eliminate the influence of the Linux file system cache on ext3? OCFS2 is accessed with the O_DIRECT flag - there will be no caching. The same holds true for raw devices. This could have an influence on your test, and I did not see a configuration step to avoid it.
    What I saw, though, is a "counter test": "I have tried comparing simple file copies from an OS level and the speed differences are not apparent - so the issue only manifests itself when the data is read via an oracle db." and I have no good answer to that one.
    Maybe this paper has: http://www.oracle.com/technology/tech/linux/pdf/Linux-FS-Performance-Comparison.pdf - it's a bit older, but explains some of the interdependencies.
    Last question: While you spent a lot of effort on proving that this one query is slower on OCFS2 or RAW than on ext3 for the initial read (that's why you flushed the buffer cache before each run), how realistic is this scenario when the system goes into production? I mean, how many times will this query be read completely from disk as opposed to using blocks already in the buffer cache? If you consider that, what impact does the "IO read time from disk" have on the overall performance of the system? And if you do not isolate the test to just a read, how do writes compare?
    Just some questions. Thanks.
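    For anyone repeating such a comparison, a minimal sketch of how the caching influence can be reduced between timed runs (run as root on the database server; 10g syntax for the buffer cache flush):

    # drop the Linux page cache, dentries and inodes so ext3 reads really hit the disk
    sync
    echo 3 > /proc/sys/vm/drop_caches

    # then flush the Oracle buffer cache before each timed run
    sqlplus / as sysdba <<EOF
    alter system flush buffer_cache;
    EOF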

  • Best Practice for Distributed TREX NFS vs cluster file systems

    Hi,
    We are planning to implement a distributed TREX, using RedHat on x64, but we are wondering what the best practice or approach is for configuring the "file server" used in the distributed TREX environment. The guides mention a file server, which seems to be another server connected to a SAN, exporting or sharing the file systems that have to be mounted on all the TREX systems (master, backup and slaves). But we know that the BI Accelerator uses OCFS2 (a cluster file system) to access the storage; in the case of RedHat we have GFS or even OCFS2.
    Basically we would like to know which is the best practice and how other companies are doing it, for a TREX distributed environment using either network file systems or cluster file systems.
    Thanks in advance,
    Zareh

    I would like to add one more thing: in my previous comment I assumed that it is possible to use a cluster file system with TREX because the BI Accelerator does, but maybe that is not supported; it does not seem to be clear in the TREX guides.
    That should be the initial question:
    Are cluster file system solutions supported on a plain TREX implementation?
    Thanks again,
    Zareh

  • Is OCFS(Oracle Cluster File System) a real file system?

    Hi all
    Is the Oracle Cluster File System something like FAT32 or NTFS on Windows, or ext3 on Linux?
    We install operating systems on a file system, but with OCFS we install the Oracle database on it instead of an operating system?
    The usual stack is:
    Applications
    OS
    File System
    Physical Drive
    But with OCFS it is something like:
    Oracle
    OCFS
    OS
    OS's file system
    Physical Drive
    Is that right ?

    Robert Geier wrote:
    OCFS is a clustered filesystem. You don't use it for OS partitions (e.g /opt) but only for filesystems that will be shared between servers in a RAC cluster to hold database files.
    Are you installing RAC? If so, then ASM is a better option than OCFS. If not, then you really don't need OCFS.
    Thanks.
    I'm studying RAC.
    I hear that I cannot put all kinds of files into ASM, but that with OCFS I can hold them.
    Is that improved in 11gR2?

  • Asmca has grayed out Volumes and ASM Cluster File Systems 11.2.0.3

    I've got a two-node cluster which is up and running with the latest 11.2.0.3 grid install on Oracle Linux 6.3.
    I need a shared storage location I can use for file I/O testing; ASM looks like the solution, with an ASM Cluster File System.
    When I run asmca, I do not have the ability to create these volumes or file systems because they are grayed out.
    I found some instructions on how to get it to work, and they said to use acfsload to start up the required daemons:
    [root@oracleA bin]# ./acfsload start -s
    ACFS-9459: ADVM/ACFS is not supported on this OS version: '2.6.39-300.17.3.el6uek.x86_64'
    I installed patches 13146560 and 14596051, which I thought would fix the problem. I rebooted after successfully applying the patches, but asmca still shows them grayed out,
    and the "not supported on this OS" error persists.
    I see some posts online saying to edit osds_acfslib.pm and update it to allow for the supported ORACLE version
    Right now it shows: ($release =~ /^oraclelinux-release/))) # Oracle Linux
    under /etc it only has oracle-release - could that have something to do with it not passing the check?
    uname -r
    2.6.39-300.17.3.el6uek.x86_64
    From what I can tell this kernel should support ASM.
    Any help in getting these shared-storage ASM disks set up would be very helpful; oracleasm creates them and sees them fine for databases. Thanks.

    It turns out that kernel version 2.6.39 does not have support for the ASM drivers needed for ACFS mounting.
    I'm going to have to use Oracle Linux 6.2 (instead of Oracle Linux 6.3) and rebuild my RAC to get a supported version of the drivers -> kernel version 2.6.32.
    http://docs.oracle.com/cd/E11882_01/install.112/e16763/oraclerestart.htm#BGBGEDGA
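    For reference, a quick way to check whether the ADVM/ACFS drivers are supported, installed and loaded on a given kernel is the acfsdriverstate utility in the Grid home (run as root; the Grid home path below is just an example):

    uname -r
    /u01/app/11.2.0/grid/bin/acfsdriverstate supported
    /u01/app/11.2.0/grid/bin/acfsdriverstate installed
    /u01/app/11.2.0/grid/bin/acfsdriverstate loaded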

  • ARE RAW DEVICES SUPPORTED OVER A CLUSTER FILE SYSTEM

    Can raw partitions be defined for datafiles after having chosen Cluster File System as the storage option for the database while creating a fresh database using
    DBCA?

    > Do update on how the partitions have to be defined in either cases?
    For both ASM and OCFS, a partition must exist on the disk - it can be of any partition type; it does not matter. The software simply references a partition and not an entire disk.
    So, for example, /dev/sdaf and /dev/sdag are two shared devices on the cluster (LUNs on the SAN or whatever).
    You create a partition on each. E.g
    # fdisk -l /dev/sdaf
    Disk /dev/sdaf: 36.5 GB, 36573020160 bytes
    255 heads, 63 sectors/track, 4446 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sdaf1 1 4446 35712463+ 83 Linux

    To use the first device as an OCFS device, you need to build an OCFS file system on it using mkfs.
    And then it can be mounted as a "normal" cooked file system mount. Remember that /etc/fstab needs to be updated for mounting it on startup.
    To use the second device for ASM, you have two choices. If you have the ASMlib kernel module installed, you can use that to configure a volume label and assign it for use by ASM.
    Alternatively, you simply map the device (partition) to a raw device for detection by ASM. E.g.
    # raw /dev/raw/raw1 /dev/sdag1

    Of course, you also need to make this permanent by updating the raw device list config file so that this mapping is performed on reboot. On Linux, this is the /etc/sysconfig/rawdevices file. Also remember that the user and group ownership of the logical raw device created must allow ASM full access to it (e.g. use chown oracle:dba /dev/raw/raw1).
    In a nutshell, this is how raw partitions are used as OCFS and ASM volumes (on RHEL specifically, but I expect no major differences in this approach on other OSes).
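    To make the steps above concrete, a minimal sketch assuming OCFS2 (the label, mount point and node-slot count are just example values; the original OCFS used a different mkfs tool, and the exact fstab options should be checked against your OCFS2 version):

    # build an OCFS2 file system on the first partition and mount it
    mkfs.ocfs2 -b 4K -C 32K -N 2 -L ocfs2data /dev/sdaf1
    mkdir -p /u02/oradata
    mount -t ocfs2 /dev/sdaf1 /u02/oradata

    # /etc/fstab entry so it is mounted on startup (datavolume/nointr for Oracle files)
    /dev/sdaf1  /u02/oradata  ocfs2  _netdev,datavolume,nointr  0 0

    # /etc/sysconfig/rawdevices entry so the raw mapping for ASM survives a reboot
    /dev/raw/raw1 /dev/sdag1

    # give the oracle user access to the raw device
    chown oracle:dba /dev/raw/raw1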

  • Sun QFS cluster file system with Veritas Volume Manager

    Hi,
    Can someone confirm whether it is possible to create a Sun QFS cluster file system (for Oracle RAC datafiles) using a VxVM volume?
    Or must we use Solaris Volume Manager with QFS?
    I am thinking of storing the static part of the Oracle RAC DB on VxVM raw devices, and the dynamic part on a QFS file system, to avoid the overhead of constantly adding new raw devices when I want to create datafiles.
    Thanks,
    Steve

    Steve,
    No, shared QFS is only supported on Solaris Volume Manager. I've not heard of any plans to test it on VxVM.
    Why not keep the static parts of the DB on raw SVM devices? Why keep them on raw devices at all?
    Tim
    ---

  • Cluster file system

    I have 2 cluster nodes (SuSE Linux Enterprise Server 9) with shared storage, and I will install some applications on this storage (Oracle is NOT one of these applications).
    Is the Oracle Cluster File System suitable to use in this case?

    Only OCFS2, and this is its first production version, so it would be better for you to wait
    for the next versions.

  • Warming up File System Cache for BDB Performance

    Hi,
    We are using BDB DPL - JE package for our application.
    With our current machine configuration, we have
    1) 64 GB RAM
    2) 40-50 GB -- Berkley DB Data Size
    To warm up the file system cache, we cat the .jdb files to /dev/null (to minimize disk access), e.g.:
         // Read all .jdb files in the directory via a shell, so that the glob and
         // the output redirection are actually interpreted
         p = Runtime.getRuntime().exec(new String[] {
                 "/bin/sh", "-c", "cat " + dirPath + "*.jdb > /dev/null 2>&1" });
    Our application checks whether new data is available every 15 minutes. If new data is available, it clears all the old references and loads the new data, along with running cat *.jdb > /dev/null again.
    I would like to know whether something like this can be done to improve BDB read performance, and if not, whether there is a better method to warm up the file system cache.
    Thanks,

    We've done a lot of performance testing with how to best utilize memory to maximize BDB performance.
    You'll get the best and most predictable performance by having everything in the DB cache. If the on-disk size of 40-50GB that you mention includes the default 50% utilization, then it should be able to fit. I probably wouldn't use a JVM larger than 56GB and a database cache percentage larger than 80%. But this depends a lot on the size of the keys and values in the database. The larger the keys and values, the closer the DB cache size will be to the on disk size. The preload option that Charles points out can pull everything into the cache to get to peak performance as soon as possible, but depending on your disk subsystem this still might take 30+ minutes.
    If everything does not fit in the DB cache, then your best bet is to devote as much memory as possible to the file system cache. You'll still need a large enough database cache to store the internal nodes of the btree databases. For our application and a dataset of this size, this would mean a JVM of about 5GB and a database cache percentage around 50%.
    I would also experiment with using CacheMode.EVICT_LN or even CacheMode.EVICT_BIN to reduce the pressure on the garbage collector. If you have something in the file system cache, you'll get reasonably fast access to it (maybe 25-50% as fast as if it's in the database cache, whereas pulling it from disk is 1-5% as fast), so unless you have very high locality between requests you might not want to put it into the database cache. What we found was that data was pulled in from disk, put into the DB cache, stayed there long enough to be promoted during GC to the old generation, and then was evicted from the DB cache. This long-lived garbage put a lot of strain on the garbage collector and led to very high stop-the-world GC times. If your application doesn't have latency requirements, then this might not matter as much to you. By setting the cache mode for a database to CacheMode.EVICT_LN, you effectively tell BDB not to put the value (leaf node = LN) into the cache.
    Relying on the file system cache is more unpredictable unless you control everything else that happens on the system since it's easy for parts of the BDB database to get evicted. To keep this from happening, I would recommend reading the files more frequently than every 15 minutes. If the files are in the file system cache, then cat'ing them should be fast. (During one test we ran, "cat *.jdb > /dev/null" took 1 minute when the files were on disk, but only 8 seconds when they were in the file system cache.) And if the files are not all in the file system cache, then you want to get them there sooner rather than later. By the way, if you're using Linux, then you can use "echo 1 > /proc/sys/vm/drop_caches" to clear out the file system cache. This might come in handy during testing. Something else to watch out for with ZFS on Solaris is that sequentially reading a large file might not pull it into the file system cache. To prevent the cache from being polluted, it assumes that sequentially reading through a large file doesn't imply that you're going to do a lot of random reads in that file later, so "cat *.jdb > /dev/null" might not pull the files into the ZFS cache.
    That sums up our experience with using the file system cache for BDB data, but I don't know how much of it will translate to your application.

  • Cluster file system and BDB - does it work ?

    Hi,
    I've read in the reference guide that:
    " No commercial remote filesystem of which we're aware supports
    coherent, distributed shared memory for remote-mounted files"
    What about cluster file systems like the Veritas Cluster File System?
    Can I use such a file system with BDB?
    I'm trying to find a solution for having one database visible from two computers.
    A reader and a writer exist on each machine. Any clue how to do that?
    best regards
    Moris


  • Cannot configure Oracle Cluster File System (OCFS2)

    Dears,,
    While trying to use ocfs2console to configure the Oracle Cluster File System (OCFS2),
    the following error appears:
    "Could not start cluster stack. This must be resolved before any OCFS2 filesystem can be mounted."
    How can I solve this, please?
    thanks & regards,,

    Dear all,,
    My issue is solved now . . .
    Really, it is strange.
    Once I added /sbin to the PATH in .bash_profile under my home directory, as follows . . .
    export PATH=$ORACLE_HOME/bin:$ORA_CRS_HOME/bin:/bin:/usr/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11R6/bin
    and then logged out and back in again,
    I was able to run ocfs2console successfully.
    Hope this helps you.
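    If adding /sbin to the PATH alone does not help, a rough sketch of how the O2CB cluster stack is usually checked and brought online by hand (run as root; "ocfs2" is the default cluster name used by ocfs2console):

    # check whether the O2CB cluster stack is loaded and online
    /etc/init.d/o2cb status

    # configure it to load on boot and bring the default cluster online
    /etc/init.d/o2cb configure
    /etc/init.d/o2cb online ocfs2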
    Thanks for all,,
    Best regards,,

  • Cluster File system local to Global

    I need to convert a local highly available file system to a global file system. The client needs to share data within the cluster, and the solution I offered was this.
    Please let me know if there is a better way to do this. The servers are running two failover NFS resource groups sharing file systems out to clients. Currently the file systems are configured as HAStoragePlus file systems.
    Thanks

    Tim, thanks very much for your reply. I will be doing this as a global file system. Currently, the HA file systems are shared out from only one node, and I intend to keep it that way. The only difference is that I will make the local HA file systems global.
    I was referring the sun cluster concepts guide which mentions
    http://docs.sun.com/app/docs/doc/820-2554/cachcgee?l=en&a=view
    "A cluster file system to be highly available, the underlying disk storage must be connected to more than one node. Therefore, a local file system (a file system that is stored on a node's local disk) that is made into a cluster file system is not highly available"
    I assume I need to remove the file systems from HAStoragePlus and mount them as global? Please let me know if my understanding is correct...
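    For what it's worth, a sketch of what a global (cluster) file system mount typically looks like in /etc/vfstab on every node, assuming a Solaris Volume Manager metadevice (the disk set, metadevice and mount point names are placeholders):

    # /etc/vfstab entry on each cluster node - the "global" option makes it a cluster file system
    /dev/md/datads/dsk/d100  /dev/md/datads/rdsk/d100  /global/data  ufs  2  yes  global,logging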
    Thanks again.
