Raw devices and cluster file system

What is the difference between a raw device and a cluster file system?

See this thread; it may help:
http://asktom.oracle.com/pls/asktom/f?p=100:11:3285616048047775::::P11_QUESTION_ID:7931107631402

Similar Messages

  • Raw devices versus Cluster File Systems in RAC 10gR2

    Hi,
    Is anyone using cluster file systems in a RAC 10gR2 installation, specifically IBM’s GPFS?
    I’ve visited a company that is running RAC 10gR2 on AIX over raw devices. Why would someone choose raw devices, with all the administration problems, when modern file systems are so powerful? Are there any issues when using cluster file systems with RAC? Are there considerable performance benefits when using raw devices with RAC?
    I’ve always used Oracle stand-alone instances on file systems (since version 7), and performance was always very good. I tested raw devices almost 10 years ago, and even at that time (today's hardware is much better: SAN, 15K rpm disks, huge caches; and today's file system software is much better) the cost of administering them did not compensate for the benefits (only about 5% faster than file systems on Oracle 7).
    So, besides any limitations imposed by RAC, why use raw devices nowadays?
    Regards,
    Antonio Belloni

    Hi,
    Spontaneously, my question would be: how did you eliminate the influence of the Linux file system cache on ext3? OCFS2 is accessed with the O_DIRECT flag, so there will be no caching. The same holds true for raw devices. This could have an influence on your test, and I did not see a configuration step to avoid it.
    What I did see, though, is a counter test: "I have tried comparing simple file copies from an OS level and the speed differences are not apparent - so the issue only manifests itself when the data is read via an oracle db." I have no good answer to that one.
    Maybe this paper has: http://www.oracle.com/technology/tech/linux/pdf/Linux-FS-Performance-Comparison.pdf - it's a bit older, but it explains some of the interdependencies.
    Last question: while you spent a lot of effort proving that this one query is slower on OCFS2 or raw devices than on ext3 for the initial read (that's why you flushed the buffer cache before each run), how realistic is this scenario once the system goes into production? I mean, how many times will this query be read completely from disk as opposed to being served from the buffer cache? If you consider that, what impact does the I/O read time from disk have on the overall performance of the system? And if you do not isolate the test to just reads, how do writes compare?
    Just some questions. Thanks.
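    One way to take the ext3 page cache out of such a comparison (a suggestion of mine, not part of the original test; the file name is just an example) would be to repeat the OS-level copy with direct I/O, or to drop the Linux page cache between runs:
    # read once through the page cache, then again bypassing it with O_DIRECT
    dd if=/u02/oradata/testfile.dbf of=/dev/null bs=1M
    dd if=/u02/oradata/testfile.dbf of=/dev/null bs=1M iflag=direct
    # or flush the page cache between runs (kernel 2.6.16 and later)
    sync; echo 3 > /proc/sys/vm/drop_caches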

  • Raw Device Backup to file system(OPS 8i)

    Hi
    Our current setup is:
    Oracle Database 8.1.6 (Oracle Parallel Server), two nodes
    Noarchivelog mode
    Solaris 2.6
    All database files, redo log files and control files are on raw devices.
    Database size: 16 GB
    Oracle block size: 8192
    Currently we are using only export backups.
    Now I want to take a cold backup of the Oracle database to disk:
    cold backup: raw --> disk
    How can we take a cold backup with the dd command and the skip parameter?
    Does anybody have practical experience with the dd command and the skip parameter?
    Thanks and regards
    Kuljeet pal singh

    you can use ufsdump instead of dd
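    If you do want to use dd, here is a hedged sketch of the raw-to-disk copy (device names, file names and the skip offset are examples only; shut the database down cleanly first, and verify whether your raw partitions carry any volume manager/OS header before relying on an offset):
    # copy one raw datafile to a file system file; bs should be a multiple of the 8192-byte Oracle block size
    dd if=/dev/rdsk/c1t1d0s1 of=/backup/system01.dbf bs=8192
    # skip=N makes dd skip N input blocks of size bs before copying, e.g. to jump over a header at the start of the device
    dd if=/dev/rdsk/c1t1d0s1 of=/backup/system01.dbf bs=8192 skip=16
    # to restore, reverse if/of and use seek=N instead of skip=N so the header area is preserved
    dd if=/backup/system01.dbf of=/dev/rdsk/c1t1d0s1 bs=8192 seek=16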

  • ARE RAW DEVICES SUPPORTED OVER A CLUSTER FILE SYSTEM

    Can raw partitions be defined for datafiles after having chosen Cluster File System as the storage option for the database while creating a fresh database using DBCA?

    > Do update on how the partitions have to be defined in either case?
    For both ASM and OCFS, a partition must exist on the disk; it can be of any partition type, that does not matter. The point is simply that the software references a partition and not an entire disk.
    So, for example, /dev/sdaf and /dev/sdag are two shared devices on the cluster (LUNs on the SAN or whatever).
    You create a partition on each. E.g.:
    # fdisk -l /dev/sdaf
    Disk /dev/sdaf: 36.5 GB, 36573020160 bytes
    255 heads, 63 sectors/track, 4446 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
       Device Boot   Start     End      Blocks   Id  System
    /dev/sdaf1           1    4446   35712463+   83  Linux
    To use the first device as an OCFS device, you need to build an OCFS file system on it using mkfs.
    And then it can be mounted as a "normal" cooked file system mount. Remember that /etc/fstab needs to be updated for mounting it on startup.
    To use the second device for ASM, you have two choices. If you have the ASMlib kernel module installed, you can use that to configure a volume label and assign it for use by ASM.
    Alternatively, you simply map the device (partition) to a raw device for detection by ASM. E.g.:
    # raw /dev/raw/raw1 /dev/sdag1
    Of course, you also need to make this permanent by updating the raw device list config file so that this mapping is performed on reboot. On Linux, this is the /etc/sysconfig/rawdevices file. Also remember that the user and group ownership of the logical raw device created must allow ASM full access to it (e.g. use chown oracle:dba /dev/raw/raw1).
    In a nutshell, this is how raw devices are used as OCFS and ASM volumes (on RHEL specifically, but I expect no major differences in this approach on other OSs).
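    For reference, a hedged sketch of the persistent mapping on RHEL (the device and raw-device names are the examples used above; the config file and service name apply to older RHEL releases):
    # /etc/sysconfig/rawdevices -- one "<raw device> <block device>" pair per line
    /dev/raw/raw1 /dev/sdag1
    # apply the mapping and set ownership/permissions for ASM
    service rawdevices restart
    chown oracle:dba /dev/raw/raw1
    chmod 660 /dev/raw/raw1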

  • Asmca has grayed out Volumes and ASM Cluster File Systems 11.2.0.3

    I've got a two-node cluster which is up and running with the latest 11.2.0.3 Grid install on Oracle Linux 6.3.
    I need a shared storage location I can use for file I/O testing; ASM looks like the solution, with an ASM Cluster File System.
    When I run asmca, I do not have the ability to create these volumes or file systems as they are greyed out.
    I found some instructions on how to get it to work, and they said to use acfsload to start up the required daemons:
    [root@oracleA bin]# ./acfsload start -s
    ACFS-9459: ADVM/ACFS is not supported on this OS version: '2.6.39-300.17.3.el6uek.x86_64'
    I installed patches 13146560 and 14596051, which I thought would fix the problem. I rebooted after successfully applying the patches, but asmca still shows them greyed out,
    and the "not supported on this OS" error persists.
    I see some posts online saying to edit osds_acfslib.pm and update it to allow for the supported ORACLE version
    Right now it shows: ($release =~ /^oraclelinux-release/))) # Oracle Linux
    under /etc it only has oracle-release - could that have something to do with it not passing the check?
    uname -r
    2.6.39-300.17.3.el6uek.x86_64
    From what I can tell this kernel should support ASM.
    Any help in getting this shared ASM storage set up would be very helpful; oracleasm creates the disks and sees them fine for databases. Thanks.

    It turns out that kernel version 2.6.39 does not have support for the ASM drivers needed for ACFS mounting.
    I'm going to have to use Oracle Linux 6.2 (instead of Oracle Linux 6.3) and rebuild my RAC to get a supported version of the drivers (kernel version 2.6.32).
    http://docs.oracle.com/cd/E11882_01/install.112/e16763/oraclerestart.htm#BGBGEDGA
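    For anyone hitting the same thing, one way to check driver support before rebuilding is the acfsdriverstate utility that ships with Grid Infrastructure (the Grid home path below is an example):
    # run as root or the grid owner from the Grid Infrastructure home
    /u01/app/11.2.0/grid/bin/acfsdriverstate supported
    /u01/app/11.2.0/grid/bin/acfsdriverstate version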

  • Cluster file system and BDB - does it work ?

    Hi,
    I've read in the reference guide that:
    " No commercial remote filesystem of which we're aware supports
    coherent, distributed shared memory for remote-mounted files"
    What about cluster file systems like Veritas Cluster File System?
    Can I use such a file system with BDB?
    I'm trying to find a solution for having one database visible from two computers.
    A reader and a writer exist on each machine. Any clue how to do that?
    best regards
    Moris


  • Sun QFS cluster file system with Veritas Volume Manager

    Hi,
    Can someone confirm whether it is possible to create a Sun QFS cluster file system (for Oracle RAC datafiles) using a VxVM volume?
    Or must we use Solaris Volume Manager with QFS?
    Thinking of storing the static part of the Oracle RAC DB on VxVM raw devices, and the dynamic part on a QFS file system, to avoid the overhead of constantly adding new raw devices when I want to create datafiles.
    Thanks,
    Steve

    Steve,
    No, shared QFS is only supported on Solaris Volume Manager. I've not heard of any plans to test it on VxVM.
    Why not keep the static parts of the DB on raw SVM devices? Why keep them on raw devices at all?
    Tim

  • Linux Cluster File System partitions for 10g RAC

    Hi Friends,
    I plan to install a 2-node Oracle 10g RAC on RHEL, and I plan to use the Linux file system itself for the OCR, voting disk and datafiles (no OCFS2/raw devices/ASM).
    I have SAN storage.
    I would like to know how to create shared/cluster partitions for the OCR, voting disk and datafiles (common storage on the SAN).
    Do I need to install a Linux cluster file system to create these shared partitions (as we have Sun Cluster on Solaris)?
    If so, let me know which versions are supported and provide the necessary note/link.
    Regards,
    DB

    Hi ,
    The link below may be useful to you:
    ORACLE-BASE - Oracle 10g RAC On Linux Using NFS
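    For reference, a hedged sketch of the /etc/fstab entries that approach uses (the server name, export paths and mount points are examples; the mount options are the ones commonly recommended for Oracle files over NFS):
    nas1:/shared_config  /u01/shared_config  nfs  rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0  0 0
    nas1:/shared_data    /u01/oradata        nfs  rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0  0 0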

  • Linux Cluster File system

    Has anyone seen a release date for Oracle's Linux cluster file system, which they announced last week? I am looking at deploying three 4-node clusters over the next few months and would rather not use raw devices.

    Here is the link to the news release:
    http://www.linuxjournal.com/article.php?sid=6123&mode=thread&order=0

  • Difference between raw device and a block device

    Platform : Unix, Unix-like
    What is the difference between a raw device and a block device? I found the following URL by googling, but the posts seem to be mutually contradictory:
    http://forums11.itrc.hp.com/service/forums/questionanswer.do?admit=109447626+1291798907338+28353475&threadId=583811

    A raw device is a character device file bound to a storage device, such as a hard drive, that allows accessing the device directly, bypassing the operating system's caches and buffers (although hardware caches might still be used). Applications like a database management system (e.g. Oracle) can use raw devices directly, enabling them to manage how data is cached rather than deferring that task to the operating system.
    Block devices correspond to devices through which the system moves data in the form of blocks. These device nodes often represent addressable devices such as hard disks, CD-ROM drives, or memory regions.
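    As a quick illustration on Linux (device names are examples; the raw(8) binding shown applies to older kernels):
    # a disk partition is a block device: 'ls -l' shows a leading 'b' and I/O goes through the buffer cache
    ls -l /dev/sda1
    # binding it to a raw device creates a character device (leading 'c') with unbuffered access to the same partition
    raw /dev/raw/raw1 /dev/sda1
    ls -l /dev/raw/raw1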
    Also Check :
    http://forums11.itrc.hp.com/service/forums/questionanswer.do?admit=109447626+1291806443666+28353475&threadId=987277
    Regards
    Rajesh

  • Best Practice for Distributed TREX NFS vs cluster file systems

    Hi,
    We are planning to implement a distributed TREX, using RedHat on x64, but we are wondering what the best practice or approach is to configure the "file server" used in the TREX distributed environment. The guides mention a file server, which seems to be another server connected to a SAN, exporting or sharing the file systems that must be mounted on all the TREX systems (master, backup and slaves). But we know that the BI Accelerator uses OCFS2 (a cluster file system) to access the storage; in the case of RedHat we have GFS or even OCFS2.
    Basically we would like to know which is the best practice and how other companies are doing it, for a TREX distributed environment using either network file systems or cluster file systems.
    Thanks in advance,
    Zareh

    I would like to add one more thing: in my previous comment I assumed that it is possible to use a cluster file system with TREX because the BI Accelerator does, but maybe that is not supported; it does not seem to be clear in the TREX guides.
    That should be the initial question:
    Are cluster file system solutions supported on a plain TREX implementation?
    Thanks again,
    Zareh

  • Difference between ASM Disk Group, ADVM Volume and ACFS File system

    Q1. What is the difference between an ASM Disk Group and an ADVM Volume ?
    To my mind, an ASM Disk Group is effectively a logical volume for Database files ( including FRA files ).
    11gR2 seems to have introduced the concepts of ADVM volumes and ACFS File Systems.
    An 11gR2 ASM Disk Group can contain :
    ASM Disks
    ADVM volumes
    ACFS file systems
    Q2. ADVM volumes appear to be dynamic volumes.
    However, is this not effectively layering a logical volume (the ADVM volume) beneath an ASM Disk Group (conceptually a logical volume as well)?
    Worse still, if you have left ASM Disk Group redundancy to the hardware RAID/SAN level (as Oracle recommends), you could effectively have 3 layers of logical disk (ASM on top of ADVM on top of RAID/SAN)?
    Q3. If it is 2 layers of logical disk (i.e. ASM on top of ADVM), what makes this better than 2 layers using a 3rd party volume manager (e.g. ASM on top of a 3rd party LVM), something Oracle advises against?
    Q4. ACFS file systems seem to be clustered file systems for non-database files, including ORACLE_HOMEs, application executables etc. (but NOT the GRID_HOME, OS root, OCRs or voting disks).
    Can you create/modify ACFS file systems using ASM?
    The Oracle topology diagram for ASM in the 11gR2 ASM Admin guide shows ACFS as part of ASM. I am not sure from this whether ACFS is part of ASM or ASM sits on top of ACFS.
    Q5. Connected to Q4, there seem to be a number of different ways ACFS file systems can be created. Which of the below are valid methods?
    through ASM ?
    through native OS file system creation ?
    through OEM ?
    through acfsutil ?
    my head is exploding
    Any help and clarification greatly appreciated
    Jim

    Q1 - An ADVM volume is a special type of file created in the ASM disk group.  Once created, it presents a block device on the OS itself that can be used just like any other block device.  http://docs.oracle.com/cd/E16655_01/server.121/e17612/asmfilesystem.htm#OSTMG30000
    Q2 - the asm disk group is a disk group, not really a logical volume.  It combines attributes of both when used for database purposes, as the database and certain other applications know how to talk "ASM" protocol.  However, you won't find any general purpose applications that can do so.  In addition, some customers prefer to deal directly with file systems and volume devices, which ADVM is made to do.  In your way of thinking, you could have 3 layers of logical disk, but each of them provides different attributes and characteristics.  This is not a bad thing though, as each has a slightly different focus - os file system\device, database specific, and storage centric.
    Q3 - ADVM is specifically developed to extend the characteristics of ASM for use by general OS applications.  It understands the database performance characteristics and is tuned to work well in that situation.  Because it is developed in house, it takes advantage of the ASM design model.  Additionally, rather than having to contact multiple vendors for support, your support is limited to calling Oracle, a one-stop shop.
    Q4 - You can create and modify ACFS file systems using command line tools and ASMCA.  Creating and modifying logical volumes happens through SQL (ASM), asmcmd, and ASMCA.  EM can also be used for both items.  ACFS sits on top of ADVM, which is a file in an ASM disk group.  ACFS is aware of the characteristics of ASM/ADVM volumes, and tunes its I/O to make the best use of those characteristics.
    Q5 - several ways:
    1) Connect to ASM with SQL, use 'alter diskgroup add volume' as Mihael points out.  This creates an ADVM volume.  Then, format the volume using 'mkfs' (*nix) or acfsformat (windows).
    2) Use ASMCA - A gui to create a volume and format a file system.  Probably the easiest if your head is exploding.
    3) Use 'asmcmd' to create a volume, and 'mkfs' to format the ACFS file system.
    Here is information on ASMCA, with examples:
    http://docs.oracle.com/cd/E16655_01/server.121/e17612/asmca_acfs.htm#OSTMG94348
    Information on command line tools, with examples:
    Basic Steps to Manage Oracle ACFS Systems
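    To make option 3 concrete, a hedged sketch (the disk group, volume name, mount point and the generated /dev/asm device name are examples; check asmcmd volinfo for the actual device path on your system):
    # as the grid owner: create a 10 GB ADVM volume in disk group DATA
    asmcmd volcreate -G DATA -s 10G testvol
    asmcmd volinfo -G DATA testvol      # shows the volume device, e.g. /dev/asm/testvol-123
    # as root: format it with ACFS and mount it
    mkfs -t acfs /dev/asm/testvol-123
    mkdir -p /mnt/acfs_test
    mount -t acfs /dev/asm/testvol-123 /mnt/acfs_test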

  • Verifying and seting file system cache parameters

    I have a Solaris 10 system with 64 GB of memory that is running a Sybase database on raw devices. Based on the output of "echo ::memstat | mdb -k", it looks like I have about 5 GB of memory being chewed up by file system caching, which is of little use to us. Can anyone point me to the way to change the default file system caching parameters so I can free up some of this memory?
    EDIT: One last thing is that we're using VxVM for this system with all non-system filesystems being VxFS. That's basically just our dump and tempdb filesystems.
    # echo ::memstat | mdb -k
    Page Summary        Pages      MB   %Tot
    Kernel             424258    3314     5%
    Anon              7004059   54719    85%
    Exec and libs       21785     170     0%
    Page cache          57433     448     1%
    Free (cachelist)   664030    5187     8%
    Free (freelist)     48494     378     1%
    Total             8220059   64219
    Physical          8189297   63978

    So, the memory listed under Free (cachelist) is also usable by applications? I thought that memory was dedicated to the file system cache, which is really unnecessary for our system. Almost all I/O on this system goes through raw devices and the rest is on VxFS file systems.
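    If the goal is to cap how much memory VxFS can use for caching, rather than relying on the kernel reclaiming the cachelist, one approach I have seen suggested (treat the tunable names and values as assumptions to verify against your VxFS version; a reboot is required) is to set limits in /etc/system:
    # /etc/system -- cap the VxFS buffer cache high-water mark at ~512 MB (value is in KB)
    set vxfs:vx_bc_bufhwm = 524288
    # optionally limit the number of cached VxFS inodes
    set vxfs:vxfs_ninode = 100000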

  • Does /sapmnt need in cluster file system(SAP ECC 6.0 with oracle RAC)

    We are going to be installing SAP with Oracle 10.2.0.4 RAC on Linux SuSE 10 and OCFS2. The Oracle RAC documentation states:
    You must store the following components in the cluster file system when you use RAC
    in the SAP environment:
    - Oracle Clusterware (CRS) Home
    - Oracle RDBMS Home
    - SAP Home (also /sapmnt)
    - Voting Disks
    - OCR
    - Database
    What I want to ask is whether I really need to put the SAP home (also /sapmnt) on a cluster file system. I will build a two-node Oracle 10g RAC, and I also have another two nodes to install the SAP CI and DI. My original thinking is that /sapmnt is an NFS share mounted on all four nodes (the RAC nodes and CI/DI), and all the Oracle files are on OCFS2 (only the two RAC nodes use OCFS2). Can anybody tell me if the SAP home (also /sapmnt) can be an NFS mount and not OCFS2? Thanks.
    Best regards,
    Peter

    Hi Peter,
    I don't think you need to keep /sapmnt on OCFS2. The reason a file system needs to be clustered in a RAC environment is that data stored in the cache of one Oracle instance must be accessible by any other instance, by transferring it across the private network, while data integrity and cache coherency are preserved by transmitting locking and other synchronization information across the cluster nodes.
    As this applies to redo files, datafiles and control files only, you should be fine with an NFS mount of /sapmnt shared across the nodes rather than OCFS2.
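    For illustration, a hedged sketch of sharing /sapmnt over NFS (host names and options are examples only; check the SAP installation guide for the export and mount options your release requires):
    # on the NFS server: /etc/exports
    /sapmnt  racnode1(rw,no_root_squash,sync) racnode2(rw,no_root_squash,sync) cinode(rw,no_root_squash,sync) dinode(rw,no_root_squash,sync)
    # on each of the four nodes: /etc/fstab
    nfsserver:/sapmnt  /sapmnt  nfs  rw,hard,intr,tcp  0 0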
    -SV

  • RAC 10gr2 using ASM for RMAN a cluster file system or a Local directory

    The environment is composed of a RAC with 2 nodes using ASM. I have to determine which design is better for backup and recovery with RMAN. The backups are going to be saved to disk only. The database is transactional only and small in size.
    I am not sure how to create a cluster file system or whether it is better to use a local directory. What is the benefit of having a recovery catalog, given that it is optional for the database?
    I very much appreciate your advice and recommendation, Terry

    Arf,
    I am new to RAC. I analyzed Alejandro's script. His main connection is to the first instance; then, through SQL*Plus, he connects to the second instance. He exits the second instance and starts the RMAN backup of the database. Therefore the backup of the database is done from the first instance.
    I do not see where he runs setenv again to switch to the second instance and run RMAN against it. It looks to me like the backup is only done from the first instance, not the second. I may be wrong, but I do not see a backup of the second instance.
    Kindly, I request your assistance with the steps/connections to back up from the second instance. Thank you so much! Terry
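    For what it is worth, in RAC there is a single database served by multiple instances, so a full RMAN backup run while connected to any one instance covers the whole database; a hedged sketch (the SID, and the assumption that a shared fast recovery area such as +FRA is configured, are examples):
    # run on one node only; connects to the local instance of the single RAC database
    export ORACLE_SID=orcl1
    rman target / <<EOF
    BACKUP DATABASE PLUS ARCHIVELOG;  # written to the configured fast recovery area, e.g. +FRA
    DELETE NOPROMPT OBSOLETE;
    EOF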
