Firewire storage for RAC

DB Version: 11.2.0.2
OS : Solaris 5.10
We are thinking of setting up a RAC database (development) with a FireWire 800 device as our shared storage. We are considering the 2-port FireWire storage device at the URL below, along with two FireWire PCI cards, one for each of our machines.
http://www.lacie.com/asia/products/product.htm?id=10330
I have read in other OTN posts that RAC with FireWire storage is only good for demo purposes. Does this mean it is not good even for development DBs?

T.Boyd wrote:
DB Version: 11.2.0.2
OS : Solaris 5.10
We are thinking of setting up a RAC database (development) with a FireWire 800 device as our shared storage. We are considering the 2-port FireWire storage device at the URL below, along with two FireWire PCI cards, one for each of our machines.
http://www.lacie.com/asia/products/product.htm?id=10330
I have read in other OTN posts that RAC with FireWire storage is only good for demo purposes. Does this mean it is not good even for development DBs?
Oracle "supports" this in a pure development environment. There is (or was) an Oracle-owned mailing list that dealt specifically with this: using FireWire shared storage for RAC.
Some years ago I tried it (with a LaCie drive), but could not get both RHEL3 servers to open a connection to the drive. The configuration was correct, but the second server always failed to establish a connection (complaining that no more connections to the drive were supported). I put that down as a driver bug of sorts, and was not keen to go bug hunting and build the driver from source code. I left it at that, but according to the Oracle docs I read at the time, this was a "valid" configuration for testing RAC.

Similar Messages

  • Storage for RAC & ASM

    I am planning to install Oracle 10g RAC with ASM for our Oracle 10.2.0.4 database on Solaris 10. We have a Sun StorEdge SAN.
    I would like to get your suggestions for the best storage infrastructure for the RAC/ASM.
    Can someone share their storage design: RAID LUNs and disk layout for the RAC environment? Do you have RAID volumes or individual disks presented to the OS? If RAID volumes, what RAID level are you using? Since ASM stripes and mirrors (normal or high redundancy), how have you laid out your storage for ASM?
    Should I create one RAID LUN and present it to the operating system, so that the ASM diskgroup can be created from that LUN, or should individual disks be presented?
    If anyone can point me to any documentation that sheds some light on the storage design and layout, it would be really helpful.
    Thanks in advance!

    Refer to these links; they may help you.
    http://www.oracle.com/technetwork/database/clustering/tech-generic-unix-new-166583.html
    http://www.orafaq.com/node/66
    http://www.dba-oracle.com/real_application_clusters_rac_grid/cluster_file_system.htm
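    For illustration only, a minimal sketch (the raw device paths are hypothetical Solaris examples) of the common pattern where the array provides the RAID protection and ASM just stripes: present the LUNs as raw devices and create an external-redundancy diskgroup from them.
    # Minimal sketch -- the raw device paths below are hypothetical.
    # With RAID done in the array, EXTERNAL REDUNDANCY lets ASM stripe
    # across the LUNs without mirroring them a second time.
    export ORACLE_SID=+ASM1
    sqlplus / as sysdba <<'EOF'
    CREATE DISKGROUP DATA EXTERNAL REDUNDANCY
      DISK '/dev/rdsk/c2t0d1s6', '/dev/rdsk/c2t0d2s6';
    EOF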
    Regards,

  • Please help me decide on the storage for RAC

    Hello,
    I am looking for help deciding on hardware in order to install 10g R2 RAC. I have two IBM P5-series boxes with IBM D24 (SCSI) storage. I followed Oracle's Clusterware installation guide, which says you have to create physical volumes in "concurrent mode", which my system does not support. Even if I import the physical volume with LVs in concurrent mode, it gives an error because it is not a concurrent PV. I found that HACMP is needed to do this, but at the same time I found that HACMP is not needed. I am very confused.
    Can we move to a SAN and get rid of this kind of problem? Is there any software with SAN for concurrency?
    If you have any suggestions highly appreciated.
    Thanks

    1. I think what Ashok was saying is that you don't need to put these disks under the volume manager at all; just address them as raw disk devices (see the sketch below). I don't recall where the raw disk devices are found on AIX, but you shouldn't have to do any importing or creating of physical volumes in order to use ASM.
    2. I think all the storage arrays I've used have allowed access from more than one host. It's usually an access control list or something similar.
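    A minimal sketch of point 1 on AIX (the hdisk number and ownership are hypothetical, and the attribute name assumes MPIO-managed disks): release the per-host SCSI reservation, then let ASM address the raw character devices directly.
    # Hypothetical sketch for AIX; hdisk2 is an example name.
    # Release the SCSI reservation so every node can open the disk.
    chdev -l hdisk2 -a reserve_policy=no_reserve
    # ASM can then use the raw character device directly:
    chown oracle:dba /dev/rhdisk2
    chmod 660 /dev/rhdisk2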
    Dan

  • Shared Storage for RAC

    Dear All,
    What are the best options for a shared storage system for Oracle RAC 10g R2 on the Windows operating system?
    How do I share a disk in Windows so that it is available to all RAC nodes in dynamic mode?
    I need help from people who have configured RAC on the Windows operating system and have used the shared disk option.
    Regards,
    Imran

    In production, the only realistic options are to turn to certified SAN or NAS vendors.
    The issue is simple: even though many types of shared storage allow you to have multiple machines connected to the same disk, certified shared storage allows these machines to write to the same disk sector and block. Most storage solutions have built-in protection to stop that from happening.
    For a small shop, I certainly recommend NetApp NAS.

  • Upgrading my Intel iMac to an internal SSD and running a 1TB external drive over FireWire 800 for mass storage, while still using a second external drive for backups. My goal: SSD speed while still having 1TB of room for everything I have now. Possible?

    I seem to kill HDs every two years. The last two I've installed were WD Caviar Black 1TB 7200rpm 3.5" drives; the speed gains over the stock drives have been remarkable. I don't blame the drives for the failures. My machines are up and running 16 hours a day, every day, year round; they die from 'mileage', so to speak, I assume. There's no viral activity or questionable downloads to gunk things up, just lots of work.
    I have my third new drive ready to install in my Intel-based iMac, but I've had a thought: I want to install an SSD in my iMac for the speed gain (and recent price drops). For standard storage I want to use this new WD Caviar Black 1TB in an external drive bay, connected via FireWire 800, for everything except the OS and my most commonly used software. Am I crazy? Will the FireWire 800 external drive negate the speed gained with an internal SSD?
    I have four iMacs in my office and one at home. I buy second hand, install new drives, and boost memory. I'm going to do this on my home machine; if it works out well I want to upgrade the other three this way. But first I need to know: am I just dreaming, or will it really make a difference? Is it even possible?
    Thanks!
    2.66 GHz Intel Core 2 Duo
    2009 iMac
    8 GB RAM
    1 gig HD
    OS X 10.8

    The SSD gives great boot-up and application launch speed. I think it also speeds up video rendering a bit; I do all that on the SSD and then move the finished project to the external drives. As for the speed of the external drives, they are quick enough for viewing video, and the file transfer rate is good. I had initially put the SSD into an external cradle (FW800) and the system was still faster than on the internal drive. I only got a 1.5 Gb/s SATA drive; perhaps yours could benefit from 3 Gb/s. I know 6 Gb/s would be too fast for the bus and costs a lot more; even the Mac Pros need special hookups to make use of 6 Gb/s.

  • The best option to create shared storage for Oracle 11gR2 RAC on OEL 5?

    Hello,
    Could you please tell me the best option for creating shared storage for Oracle 11gR2 RAC on Oracle Enterprise Linux 5, in a production environment? And could you help me create the shared storage? There are no additional steps in the Oracle installation guide; there are steps only for ASM disk creation.
    Thank you.

    Here are the names of the partitions and their permissions. The partitions with 146 GB, 438 GB, and 438 GB of capacity are my storage. Two of the three disks, the 438 GB ones, were configured as RAID 5, and the remaining disk was configured as RAID 0. My storage is a Dell MD3000i, connected to the nodes through Ethernet.
    Node 1
    [root@rac1 home]# ll /dev/sd*
    brw-r----- 1 root disk 8, 0 Aug 8 17:39 /dev/sda
    brw-r----- 1 root disk 8, 1 Aug 8 17:40 /dev/sda1
    brw-r----- 1 root disk 8, 16 Aug 8 17:39 /dev/sdb
    brw-r----- 1 root disk 8, 17 Aug 8 17:39 /dev/sdb1
    brw-r----- 1 root disk 8, 32 Aug 8 17:40 /dev/sdc
    brw-r----- 1 root disk 8, 48 Aug 8 17:41 /dev/sdd
    brw-r----- 1 root disk 8, 64 Aug 8 18:26 /dev/sde
    brw-r----- 1 root disk 8, 65 Aug 8 18:43 /dev/sde1
    brw-r----- 1 root disk 8, 80 Aug 8 18:34 /dev/sdf
    brw-r----- 1 root disk 8, 81 Aug 8 18:43 /dev/sdf1
    brw-r----- 1 root disk 8, 96 Aug 8 18:34 /dev/sdg
    brw-r----- 1 root disk 8, 97 Aug 8 18:43 /dev/sdg1
    [root@rac1 home]# fdisk -l

    Disk /dev/sda: 72.7 GB, 72746008576 bytes
    255 heads, 63 sectors/track, 8844 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes

       Device Boot      Start         End      Blocks   Id  System
    /dev/sda1   *            1        8844    71039398+  83  Linux

    Disk /dev/sdb: 72.7 GB, 72746008576 bytes
    255 heads, 63 sectors/track, 8844 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes

       Device Boot      Start         End      Blocks   Id  System
    /dev/sdb1   *            1        4079    32764536   82  Linux swap / Solaris

    Disk /dev/sdd: 20 MB, 20971520 bytes
    1 heads, 40 sectors/track, 1024 cylinders
    Units = cylinders of 40 * 512 = 20480 bytes

       Device Boot      Start         End      Blocks   Id  System

    Disk /dev/sde: 146.2 GB, 146278449152 bytes
    255 heads, 63 sectors/track, 17784 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes

       Device Boot      Start         End      Blocks   Id  System
    /dev/sde1                1       17784   142849948+  83  Linux

    Disk /dev/sdf: 438.8 GB, 438835347456 bytes
    255 heads, 63 sectors/track, 53352 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes

       Device Boot      Start         End      Blocks   Id  System
    /dev/sdf1                1       53352   428549908+  83  Linux

    Disk /dev/sdg: 438.8 GB, 438835347456 bytes
    255 heads, 63 sectors/track, 53352 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes

       Device Boot      Start         End      Blocks   Id  System
    /dev/sdg1                1       53352   428549908+  83  Linux
    Node 2
    [root@rac2 ~]# ll /dev/sd*
    brw-r----- 1 root disk 8, 0 Aug 8 17:50 /dev/sda
    brw-r----- 1 root disk 8, 1 Aug 8 17:51 /dev/sda1
    brw-r----- 1 root disk 8, 2 Aug 8 17:50 /dev/sda2
    brw-r----- 1 root disk 8, 16 Aug 8 17:51 /dev/sdb
    brw-r----- 1 root disk 8, 32 Aug 8 17:52 /dev/sdc
    brw-r----- 1 root disk 8, 33 Aug 8 18:54 /dev/sdc1
    brw-r----- 1 root disk 8, 48 Aug 8 17:52 /dev/sdd
    brw-r----- 1 root disk 8, 64 Aug 8 17:52 /dev/sde
    brw-r----- 1 root disk 8, 65 Aug 8 18:54 /dev/sde1
    brw-r----- 1 root disk 8, 80 Aug 8 17:52 /dev/sdf
    brw-r----- 1 root disk 8, 81 Aug 8 18:54 /dev/sdf1
    [root@rac2 ~]# fdisk -l

    Disk /dev/sda: 145.4 GB, 145492017152 bytes
    255 heads, 63 sectors/track, 17688 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes

       Device Boot      Start         End      Blocks   Id  System
    /dev/sda1   *            1        8796    70653838+  83  Linux
    /dev/sda2             8797       12875    32764567+  82  Linux swap / Solaris

    Disk /dev/sdc: 146.2 GB, 146278449152 bytes
    255 heads, 63 sectors/track, 17784 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes

       Device Boot      Start         End      Blocks   Id  System
    /dev/sdc1                1       17784   142849948+  83  Linux

    Disk /dev/sdd: 20 MB, 20971520 bytes
    1 heads, 40 sectors/track, 1024 cylinders
    Units = cylinders of 40 * 512 = 20480 bytes

       Device Boot      Start         End      Blocks   Id  System

    Disk /dev/sde: 438.8 GB, 438835347456 bytes
    255 heads, 63 sectors/track, 53352 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes

       Device Boot      Start         End      Blocks   Id  System
    /dev/sde1                1       53352   428549908+  83  Linux

    Disk /dev/sdf: 438.8 GB, 438835347456 bytes
    255 heads, 63 sectors/track, 53352 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes

       Device Boot      Start         End      Blocks   Id  System
    /dev/sdf1                1       53352   428549908+  83  Linux

    [root@rac2 ~]#
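    For example, a minimal sketch (assuming the ASMLib packages are installed and configured; the disk labels are hypothetical) of how these shared partitions could be stamped as ASM disks on node 1 and picked up on node 2:
    # On node 1 (as root): stamp the shared partitions as ASM disks.
    /usr/sbin/oracleasm createdisk OCRVOTE /dev/sde1
    /usr/sbin/oracleasm createdisk DATA1 /dev/sdf1
    /usr/sbin/oracleasm createdisk DATA2 /dev/sdg1
    # On node 2: rescan; the labels travel with the disks even though
    # the /dev/sd* names differ between the nodes.
    /usr/sbin/oracleasm scandisks
    /usr/sbin/oracleasm listdisks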
    Thank you.

  • Oracle white paper: Storage Options for RAC on Linux

    Hi,
    This is in reference to the white paper "Storage Options for RAC on Linux" by Umadevi Byrappa.
    It states on page 10: to use NAS for RAC database file storage, select the file system storage option in OUI, or the clustered file system storage option in DBCA.
    I disagree. The file storage options of Oracle 10g RAC only show:
    1. Clustered File system.
    2. ASM
    3. Raw Devices.
    When I try to select CFS and select the shared directory on NFS, it says "<Directory_Name> is not a clustered file system or shared on both <Server_1> and <Server_2>", and at this point I am stuck.
    1. I don't want to use OCFS, as it does not support NAS.
    2. Selecting CFS does not recognise the mounted shared volume as a valid storage device option; it can only store the OCR file and CSS file.
    3. So I have to use ASM with zero-padded files, which I don't want to, but there is no other option. (Part No. B10766-02, page C-6)
    Also, I would like Oracle to provide a backup/recovery option, or a document that tells me how I could recover the database when I use zero-padded files.
    What would be the best option in the above scenario?
    I hope that by applying a patch, or with a workaround, it will somehow show "File system" only. I'll be the happiest man in that case.
    Any suggestions and corrections are most welcome. I wish I were wrong.
    Nadeem ( [email protected] )

    NFS isn't a clustered file system at all; per the RFC it is an exported file system with access controls.
    If your vendor offers an NFS solution over and beyond that, more power to them. However, that isn't "NFS":
    http://www.faqs.org/rfcs/rfc3010.html
    You can use NFS in a clustered server environment (to mount apps, read-only data, and for synchronous access to files), but out of the box it doesn't support the concurrency needed for RDBMS transactions. If a vendor promises to support this, whether to trust that promise is for you to decide.
    However, I stand by my statement that NFS is a file system exported across your network and not a full clustered file system.
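    That said, where a certified NAS filer is used for datafiles, Oracle's platform documentation prescribes specific NFS mount options. A minimal sketch for Linux, with the filer and export names hypothetical:
    # Hypothetical filer/export names; the options follow Oracle's
    # documented recommendations for datafiles over NFS on Linux.
    mount -t nfs -o rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0 \
        filer01:/vol/oradata /u02/oradata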

  • Cheap shared storage for test RAC

    Hi All,
    Is there a cheap shared storage device available for creating a test RAC environment? I used to create RAC with VMware, but that environment is not very stable.
    Regards

    Two options:
    The Oracle VM templates can be used to build clusters of any number of nodes using Oracle Database 11g Release 2, which includes Oracle 11g Rel. 2 Clusterware, Oracle 11g Rel. 2 Database, and Oracle Automatic Storage Management (ASM) 11g Rel. 2, patched to the latest, recommended patches.
    This is supported for Production.
    http://www.oracle.com/technetwork/server-storage/vm/rac-template-11grel2-166623.html
    Learn how to set up and configure an Oracle RAC 11g Release 2 development cluster on Oracle Linux for less than US$2,700.
    The guide below is not validated or supported by Oracle and should be used at your own risk; it is for educational purposes only.
    http://www.oracle.com/technetwork/articles/hunter-rac11gr2-iscsi-088677.html
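    That guide builds its shared storage from an iSCSI target (an Openfiler appliance). As a rough sketch of that approach, with the target address and IQN purely hypothetical, each node discovers and logs in to the target like this:
    # Hypothetical target IP and IQN; run on every cluster node.
    iscsiadm -m discovery -t sendtargets -p 192.168.2.195
    iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.data1 -p 192.168.2.195 -l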
    Regards,
    Levi Pereira

  • Storage Knowledge for RAC

    Please advise me on a few books for developing my knowledge of EMC storage.
    We are planning to buy EMC for a RAC setup.
    There are not many books around that topic.

    Learning Oracle,
    I don't think this or any other Oracle forum is a place for free consultancy, especially if you don't specify any details.
    Richmond Shee and others have published a book on the Oracle Wait Interface, it is pretty indispensible.
    Oracle Press has also a book on ASM out.
    As this is not the place to post free abstracts of books, made by volunteers, I suggest you visit sites lke www.amazon.com or www.bn.com or whatever vendor you have access to.
    Sybrand Bakker
    Senior Oracle DBA

  • Considering shared storage for Oracle RAC 10g

    Hi, guys!
    My Oracle RAC will run on VMware ESXi 5.5, so both nodes and the shared storage are on VMs. Don't blame me for this; I don't have another choice.
    I am choosing shared storage for Oracle RAC, between an NFS server and an iSCSI server; either can be built on Red Hat Linux or FreeNAS.
    Can you guys help me make the choice?
    Red Hat or FreeNAS?
    iSCSI or NFS?
    Any help will be appreciated.

    JohnWatson wrote:
    NFS is really easy. Create your zero-filled files, set the ownership and access modes, and point your asm_diskstring at them. Much simpler than configuring an iSCSI target and initiators, and then messing about with ASMlib or udev.
    I recorded a public lecture that (if I remember correctly) describes it here, Oracle ASM Free Tutorial
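    As a rough illustration of the zero-filled-file approach described above (the paths, sizes, and ownership are hypothetical):
    # Hypothetical paths and sizes; run on the NFS mount used for ASM.
    dd if=/dev/zero of=/u02/oradata/asmdisk1 bs=1M count=10240
    dd if=/dev/zero of=/u02/oradata/asmdisk2 bs=1M count=10240
    chown grid:asmadmin /u02/oradata/asmdisk*
    chmod 660 /u02/oradata/asmdisk*
    # then point the ASM instance at them:
    #   asm_diskstring='/u02/oradata/asmdisk*'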
    I will be using OCFS2 as the cluster FS. Does that make any difference for NFS vs iSCSI?

  • Doubts about shared disk for RAC

    Hi All,
    I am really new to RAC. Even after reading various documents, I still have many doubts regarding the shared storage and file systems needed for RAC.
    1. Clusterware has to be installed on a shared file system like OCFS2. Which type of hard drive is required to install OCFS2 so that it can be accessed from all nodes?
    Does it have to be an external hard drive, or can we use any simple hard disk for shared storage?
    If we use an external hard drive, does it need to be connected to a separate server altogether, or can it be connected to any one of the nodes in the cluster?
    Apart from the shared drives, approximately what size of hard disk is required for each node (for just a testing environment)?
    I would sincerely appreciate a reply!
    Thanks in advance.

    Clusterware has to be installed on shared storage. RAC also requires shared storage for the database.
    Shared storage can be managed via many methods.
    1. Some sites using Linux or UNIX-based OSes choose to use raw disk devices (see the sketch after this list). This method is not frequently used, due to the unpleasant management overhead and long-term manageability of raw devices.
    2. Many sites use cluster filesystems. On Linux and Windows, Oracle offers OCFS2 as one (free) cluster filesystem. Other vendors also offer add-on products for some OSes that provide supported cluster filesystems (like GFS, GPFS, VxFS, and others). Supported cluster filesystems may be used for Clusterware files (OCR and voting disks) as well as database files. Check Metalink for a list of supported cluster filesystems.
    3. ASM can be used to manage shared storage used for database files. Unfortunately, due to architecture decisions made by Oracle, ASM cannot currently be used for Clusterware files (OCR and voting disks). It is relatively common to see ASM used for DB files and either RAW or a cluster filesystem used for Clusterware files. In other words, ASM and cluster filesystems and RAW are not mutually exclusive.
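    A minimal sketch of option 1 on Linux, with hypothetical device names, binding a shared partition to a raw device and fixing its ownership:
    # Sketch only; /dev/sdb1 and the raw binding number are examples.
    raw /dev/raw/raw1 /dev/sdb1
    chown oracle:dba /dev/raw/raw1
    chmod 660 /dev/raw/raw1
    # Bindings like this must be recreated at boot (e.g. via
    # /etc/sysconfig/rawdevices on Red Hat-style systems).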
    As for hardware: I have not seen any hardware capable of easily connecting multiple servers to internal storage, so shared storage is always (in my experience) housed externally. You can find articles on OTN and other sites (search Google for them) that use FireWire drives, or a third computer running Openfiler, to provide the shared storage in test environments. In production environments, SAN devices are commonly employed to provide concurrent access to storage from multiple servers.
    Hope this helps!

  • HT4847 Question re: storage for photos

    I bought storage for photos, and yet I keep getting a notice that there is not enough available storage to take a photo. Yet I have only taken ten photos, and my settings tell me that of 15.0 GB in iCloud, I have 14.9 GB available. What can I do?

    Keep ALL the media on one drive and Time Machine on its own drive. NEVER mix a backup drive with any other storage: if/when that drive crashes, you will have lost both the data and the backup! For the media drive you may need a high-speed drive and bus. The only reason I can think of for a high-speed drive would be if you put photos on it and do lots of editing on them in something like Aperture, Photoshop, etc. If that describes your needs, get a 7200 RPM drive AND use the fastest bus your computer has; this will probably be FireWire or FireWire 800, depending on which iMac you own. If that does not describe your needs, then as long as you keep the media (movies, photos, and music) on one drive and Time Machine on a separate drive, you're good to go.
    BTW if you haven't seen these you will find them helpful:
    iPhoto: How to move the Library to an EHD
    iTunes: How to move the library to an EHD

  • OVM disks for RAC implementation

    Dear All
    Is there any guide available on how to create the disks for RAC ASM in OVM 3.3.1 using Fibre Channel block-level storage?
    Thanks
    George

    You are right: you can't use virtual disks for a RAC configuration. Have a look here (especially page 18):
    http://www.oracle.com/technetwork/products/clustering/oracle-rac-in-oracle-vm-environment-131948.pdf
    Using physical disks means that you create LUNs on your storage array, which is connected to the Oracle VM Servers by Fibre Channel. You map these LUNs to all servers in the pool, or to all standalone servers, where you are going to install the virtual machines that will be Clusterware nodes. Then you rediscover storage in Oracle VM Manager, mark these LUNs as "shared" in OVMM, and add them to your virtual machines as "Physical disks" (by editing the guest properties in OVMM).
    Alternatively, you can map iSCSI or NFS storage directly to your guests. By "directly" I mean you use IP addresses and software in your guests as the iSCSI initiator or NFS client, without engaging Oracle VM in the middle.
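    One rough sanity check after mapping a shared physical disk (the guest device name is hypothetical): verify from inside each guest that every node sees the LUN at the same size.
    # /dev/xvdb is a hypothetical guest device name; run on every node
    # and compare: the byte counts must match across all guests.
    blockdev --getsize64 /dev/xvdb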
    Regards,
    Michal

  • Requirement for RAC - Linux AS mandatory?

    Can the standard Red Hat Linux 8 release be used for RAC R2, or do we need Linux AS for the RAC configuration?
    What are the Linux requirements for the RAC R2 configuration?
    Thanks,
    Ashok

    Thanks for the info.
    Could you tell me what FireWire components you are using? I know that we need the FireWire cards in the PCs/servers, the FireWire hub, and the FireWire drive. However, I am having difficulty identifying which FireWire components to buy, i.e., which ones are compatible with Red Hat Linux (AS); the Red Hat HCL revealed nothing to me. Any info you could give me on the components you are using would be greatly appreciated and would save me a lot of time.
    Thanks again,
    Steve K.

  • Dedicated switches needed for RAC interconnect or not?

    Currently working on an Extended RAC cluster design implementation, I asked the network engineer for dedicated switches for the RAC interconnects.
    Here is a little background:
    There are 28 RAC clusters across 2x13 physical RAC nodes, with a separate ORACLE_HOME for each instance and at least 2 instances on each RAC node, so 13 RAC nodes will be in each site (data center). This is basically an extended RAC solution for SAP databases on RHEL 6, using ASM and Clusterware for Oracle 11gR2. The RAC nodes are blades in a c7000 enclosure (one in each site). The distance between the sites is 55+ km.
    Oracle recommends InfiniBand (20 Gbps) as the network backbone, but here DWDM will be used, with 2x10 Gbps links for the RAC interconnect between the sites. There will be a separate redundant 2x1 Gbps link for the production network and redundant 2x2 Gbps Fibre Channel (FC) links for the SAN/storage network (ASM traffic will go here). There will be switches for the public/production network and for the SAN network.
    Oracle recommends dedicated switches (which will give acceptable latency/bandwidth) with switch redundancy to route the dedicated, non-routable VLANs for the RAC interconnect (private/heartbeat/global cache transfer) network. Since the DWDM inter-site links are 2x10 Gbps, do I still need the dedicated switches?
    If yes, then how many?
    Your inputs would be greatly appreciated and will help me make a decision.
    Many Thanks in advance..
    Abhijit

    Absolutely agree. The chances of interconnect overload in an HA (RAC) solution, and ultimately RAC node eviction (with very high latency), are very high, and for exactly this reason I even suggested inexpensive switches to route the VLANs for the RAC interconnect. The ASM traffic will get routed through the 2x2 Gbps FC links via SAN directors (one in each site).
    I suggested that the network folks use uplinks from the c7000 enclosure and route the RAC VLAN through these inexpensive switches for the interconnect traffic. We have another challenge here: HP has certified the Virtual Connect/Flex-Fabric architecture for blades in the c7000 to allocate VLANs for the RAC interconnect, but only for one site; it does not span production/DR sites separated over a distance.
    By the way, do you have a standard switch model to recommend, and how many switches would you use, for a configuration of 13 extended RAC clusters, each hosting 2+ RAC instances, for a total of 28 SAP instances?
    Many Thanks again!
    Abhijit
