No Shared Storage Available (RAC Installation)

Hello Guys,
I am in the process of installing RAC 10g R2 on the Windows 2000 operating system, going for 2-node clustering. The problem is that we don't have any shared storage system like a SAN. Is it possible to use another computer's HDD for storing the data files? All other files can be stored on different drives of the 2 nodes... OR is it possible to store the datafiles on the nodes themselves?
Please guide me... and what type of storage would this be called? Obviously not ASM, but would it be OCFS?
Please help.
Regards,
Imran

Well, we are doing this for testing purposes... when we go for the production installation, we will obviously keep our data files on shared storage...
I have read the document, but it is not clear to me... can we keep the data files on one of the nodes?
Regards,
Imran

Similar Messages

  • How to implement RAC with no shared storage available...

    Hello Guys,
    I have installed and configured a single-node RAC R2 with 2 database instances on the Windows 2000 platform with the help of VMware software. The installation was successful.
    I just met my boss to explain my achievements and to get permission to go for a production environment, where we will be deploying a 3-node RAC on the Linux platform.
    I told him that we need a shared storage system, which can be achieved with a SAN, NAS, or FireWire disks... but he refuses to buy any additional hardware, as we have already spent a lot on servers. He wants me to use another computer with a large hard disk drive as the shared storage.
    I just want to know: can we configure a 3-node RAC using the HDD of another computer as common shared storage?
    Please guide me. I really want to implement RAC in our production environment.
    Regards,
    Imran

    Yeah, but would OpenFiler work? Has anyone implemented RAC using OpenFiler or software-configured shared storage? Any other, better solution, as I have to implement it in a production environment and need solid backup facilities.

    Are you looking for a production environment, or an evaluation environment?
    For an evaluation environment, OpenFiler works. I occasionally teach RAC classes using it. It works, but it is not as fast or as robust as I'd like in production.
    For a production environment, plan to pay some money. The least expensive commercial shared storage I have found is from NetApp - any NetApp F8xx or FAS2xx filer with an iSCSI or even NFS license will do for RAC.
    Message was edited by:
    Hans Forbrich
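    For an evaluation setup along these lines, the RAC nodes typically attach to the OpenFiler box over iSCSI with the standard open-iscsi initiator. A rough sketch of the flow (the portal address and target IQN below are made-up placeholders):

    ```shell
    # Discover the iSCSI targets exported by the storage box (placeholder address).
    iscsiadm -m discovery -t sendtargets -p 192.168.1.50

    # Log in to a discovered target (placeholder IQN).
    iscsiadm -m node -T iqn.2006-01.com.openfiler:racvol1 -p 192.168.1.50 --login

    # The LUN then shows up as a local SCSI device (e.g. /dev/sdb) on each node.
    fdisk -l
    ```

    These commands require root and a reachable iSCSI target, so they are shown only as an outline, not a tested recipe.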

  • Shared Storage for RAC

    Dear All,
    What are the best options for a shared storage system for Oracle RAC 10g R2 on the Windows operating system?
    How do you share a disk in Windows so that it is available to all RAC nodes in dynamic mode?
    I need help from people who have configured RAC on the Windows operating system and have used the shared disk option.
    Regards,
    Imran

    In production, the only realistic options are to turn to certified SAN or NAS vendors.
    The issue is simple - even though many types of shared storage allow you to connect multiple machines to the same disk, only certified shared storage allows these machines to write to the same disk sectors and blocks. Most storage solutions have built-in protection to stop that from happening.
    For a small shop, I certainly recommend NetApp NAS.

  • Shared storage devices RAC

    hello
    I was doing the Grid Infrastructure installation for RAC on Linux. At the step where the disk group needs to be created, I changed the device discovery path to QRCL* as stated in the guide I was following. But after that, the list of candidate disks went empty.
    The /etc/init.d/oracleasm listdisks command does not return any disks. I can't delete the disks (it says they are not instantiated), and I also can't create them again. Please help, I'm stuck :(

    Where the disk group needs to be created I changed the device discovery path to QRCL*
    Give the full path of the disks in the device discovery path, for example /dev/sd*
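    As background, the usual ASMLib labelling and discovery flow on Linux looks like the sketch below (a hedged outline; the device name and disk label are examples only):

    ```shell
    # Label a partition for ASMLib (run as root on one node; example device/label).
    /etc/init.d/oracleasm createdisk DATA1 /dev/sdb1

    # Rescan for labelled disks and list them (run on every node).
    /etc/init.d/oracleasm scandisks
    /etc/init.d/oracleasm listdisks
    ```

    With ASMLib-labelled disks, the installer's default discovery path already matches them, so changing it to a pattern such as QRCL* that matches nothing empties the candidate list.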

  • Shared Storage RAC

    Hello,
    This is a real-world Oracle RAC 11gR2 installation; I need to configure the shared storage for a 2-node RAC on Red Hat Enterprise Linux 5.
    Could you please send me a step-by-step guide on how to do it? I want to use Device Mapper Multipath for that and ASM for storage.
    Thank you

    899660 wrote:
    Hello,
    This is a real-world Oracle RAC 11gR2 installation; I need to configure the shared storage for a 2-node RAC on Red Hat Enterprise Linux 5.
    Could you please send me a step-by-step guide on how to do it? I want to use Device Mapper Multipath for that and ASM for storage.
    Thank you

    Hi,
    Shared storage is a hardware device; it cannot be created by you. Of course, you can use that shared device to configure ASM.
    Check the below links
    http://martincarstenbach.wordpress.com/2010/11/16/configuration-device-mapper-multipath-on-oel5-update-5/
    http://www.oracle.com/technetwork/database/device-mapper-udev-crs-asm.pdf
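    As a hedged illustration of what those links walk through, here is a minimal /etc/multipath.conf fragment that pins a stable alias to a LUN (the WWID and alias are placeholders; the real WWID comes from `multipath -ll` or `scsi_id`):

    ```
    # /etc/multipath.conf (fragment) - bind a stable alias to a LUN's WWID so
    # both RAC nodes see the same device name under /dev/mapper/.
    multipaths {
        multipath {
            wwid   360a98000686f6959684a453333524174   # placeholder WWID
            alias  asm-data1
        }
    }
    ```

    After reloading multipathd, the LUN appears as /dev/mapper/asm-data1 on every node, and that device can then be handed to ASM.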

  • Is it possible to install Oracle RAC without shared storage

    Dear All,
    I would like to seek for your advice.
    I got two different servers. We call it node 1 and node 2. And two different instances name.
    Node 1 -> instance name as "ORCL1"
    Node 2 -> instance name as "ORCL2"
    For the system we need Oracle RAC in active-active cluster mode. Our objective is to have 2 replicated databases; in other words, we need 2 instances of the same database automatically replicated for 100% uptime to the application server. We have 2 separate database machines and 2 application server machines. We need our application server to connect to either of the databases at any point in time and have consistent data on both database machines. We only need the database to be in cluster mode; we don't need the OS to be in a cluster. There is no shared storage in this case.
    Can this be done? Please advice.

    You should review RAC concepts, and the meaning of "instance" and "database".
    For the system we need Oracle RAC active-active cluster mode.
    RAC = a single database with multiple instances, all accessing the same shared storage; no replication involved.
    Our objective is to have 2 replicated databases, in other words we need 2 instances of the same database automatically replicated for 100% up time to the Application server.
    What you describe here = multiple databases, each with its own instance, replicated between each other.
    We have 2 separate database machines and 2 application server machines. We need our application server to connect to any of the databases at any point of time and be having a consistent data on both database machines. We only need the database to be in a cluster mode, we won't need the OS to be in a cluster. There is no shared storage in this case.
    No shared storage = no RAC.
    You will have two separate databases synchronizing continuously.
    You can use, for example, Streams / Advanced Replication (with a multi-master configuration).
    If you don't insist on an active-active configuration, you can also use Data Guard to build a standby database.

  • Oracle RAC Installation: Unix nodes, Windows ASM

    I have a question about configuring Oracle RAC. I have never done a RAC or ASM installation before, so this might be a stupid question for some of you.
    Is it possible to install Oracle RAC using the following options?
    A 2-node RAC using Sun Solaris
    Shared storage using ASM on a Windows server
    Any additional information that you can provide will be greatly appreciated.
    Thanks in advance

    Hi,
    First of all, do you have shared storage available to both Unix servers, or do you want to use the Windows server as the shared storage?
    From the documentation
    Single Instance and Clustered Environments:
    Each database server that has database files managed by ASM needs to be running an ASM instance. A single ASM instance can service one or more single-instance databases on a stand-alone server. Each ASM disk group can be shared among all the databases on the server. In a clustered environment, each node runs an ASM instance, and the ASM instances communicate with each other on a peer-to-peer basis.
    This means that you need an ASM instance on every server where you have a database instance.
    In case you don't have storage shared between the Unix servers, you have two options: iSCSI or NFS. You can set up the Windows machine as an iSCSI server and both Unix machines as iSCSI clients; then you will have shared storage on both Unix machines. The other option is to configure the Windows machine as an NFS server and mount the NFS share on both Unix machines. Then you can either deploy the data files directly on the NFS share (not supported) or create empty files using dd and use them as device files for ASM.
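    The dd approach mentioned above can be sketched as follows. In practice the files would live on the NFS mount (e.g. a hypothetical /nfs/asm); this sketch uses a temporary directory and a small example size for illustration:

    ```shell
    # Create zero-filled files to act as ASM candidate "disks".
    # In practice ASM_DIR would be the NFS mount point, e.g. /nfs/asm.
    ASM_DIR=$(mktemp -d)
    for i in 1 2 3; do
      dd if=/dev/zero of="$ASM_DIR/asm_disk$i" bs=1M count=256 2>/dev/null
    done
    ls -lh "$ASM_DIR"
    ```

    Each file would then be presented to ASM via the disk discovery string and owned by the Oracle software owner (e.g. oracle:dba).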
    For more information on ASM over NFS you can read Tim Halls article:
    http://www.oracle-base.com/articles/linux/UsingNFSWithASM.php
    Regards,
    Sve

  • Firewire storage for RAC

    DB Version: 11.2.0.2
    OS : Solaris 5.10
    We are thinking of setting up a RAC db(Development) with a Firewire 800 device as our Shared storage. We are thinking of buying a 2 port Firewire Storage device mentioned in the below URL along with 2 firewire PCI cards for both of our machines.
    http://www.lacie.com/asia/products/product.htm?id=10330
    I have read in other OTN posts that RAC with FireWire storage is only good for demo purposes. Does this mean that it is not good even for development DBs?

    T.Boyd wrote:
    DB Version: 11.2.0.2
    OS : Solaris 5.10
    We are thinking of setting up a RAC db(Development) with a Firewire 800 device as our Shared storage. We are thinking of buying a 2 port Firewire Storage device mentioned in the below URL along with 2 firewire PCI cards for both of our machines.
    http://www.lacie.com/asia/products/product.htm?id=10330
    I have read in other OTN posts that RAC with firewire storage is only good for Demo purpose. Does this mean that it is not good for development DBs at least?
    Oracle "supports" this in a pure development environment. There is (or was) an Oracle-owned mailing list that dealt specifically with this - using FireWire shared storage for RAC.
    Some years ago I tried it (with a LaCie drive), but I could not get both RHEL3 servers to open a connection to the drive. The config was correct, but the 2nd server always failed to establish a connection (complaining that no more connections to the drive were supported). I put that down as a driver bug of sorts - and was not keen to go bug hunting and build the driver from source code. I left it at that - but according to the docs I read (from Oracle) at the time, this was a "valid" config for testing RAC.

  • Doubts about shared disk for RAC

    Hi All,
    I am really new to RAC. Even after reading various documents, I still have many doubts regarding the shared storage and file systems needed for RAC.
    1. Clusterware has to be installed on a shared file system like OCFS2. Which type of hard drive is required to install OCFS2 so that it can be accessed from all nodes?
    Does it have to be an external hard drive, or can we use any simple hard disk for the shared storage?
    If we use an external hard drive, does it need to be connected to a separate server altogether, or can it be connected to any one of the nodes in the cluster?
    Apart from these shared drives, approximately what size of hard disk is required for each node (for just a testing environment)?
    I would sincerely appreciate a reply!
    Thanks in advance.

    Clusterware has to be installed on shared storage. RAC also requires shared storage for the database.
    Shared storage can be managed via many methods.
    1. Some sites using Linux or UNIX-based OSes choose to use RAW disk devices. This method is not frequently used due to the unpleasant management overhead and long-term manageability for RAW devices.
    2. Many sites use cluster filesystems. On Linux and Windows, Oracle offers OCFS2 as one (free) cluster filesystem. Other vendors also offer add-on products for some OSes that provide supported cluster filesystems (like GFS, GPFS, VxFS, and others). Supported cluster filesystems may be used for Clusterware files (OCR and voting disks) as well as database files. Check Metalink for a list of supported cluster filesystems.
    3. ASM can be used to manage shared storage used for database files. Unfortunately, due to architecture decisions made by Oracle, ASM cannot currently be used for Clusterware files (OCR and voting disks). It is relatively common to see ASM used for DB files and either RAW or a cluster filesystem used for Clusterware files. In other words, ASM and cluster filesystems and RAW are not mutually exclusive.
    As for hardware--I have not seen any hardware capable of easily connecting multiple servers to internal storage. So, shared storage is always (in my experience) housed externally. You can find some articles on OTN and other sites (search Google for them) that use firewire drives or a third computer running openfiler to provide the shared storage in test environments. In production environments, SAN devices are commonly employed to provide concurrent access to storage from multiple servers.
    Hope this helps!
    Message was edited by:
    Dan_Norris

  • The best option to create  a shared storage for Oracle 11gR2 RAC in OEL 5?

    Hello,
    Could you please tell me the best option for creating shared storage for Oracle 11gR2 RAC on Oracle Enterprise Linux 5 in a production environment? And could you help me create the shared storage? There is no such step in the Oracle installation guide; there are steps only for ASM disk creation.
    Thank you.

    Here are the names of the partitions and their permissions. The partitions with 146 GB, 438 GB, and 438 GB of capacity are my storage. The two 438 GB disks were configured as RAID 5, and the remaining disk was configured as RAID 0. My storage is a Dell MD3000i, connected to the nodes through Ethernet.
    Node 1
    [root@rac1 home]# ll /dev/sd*
    brw-r----- 1 root disk 8, 0 Aug 8 17:39 /dev/sda
    brw-r----- 1 root disk 8, 1 Aug 8 17:40 /dev/sda1
    brw-r----- 1 root disk 8, 16 Aug 8 17:39 /dev/sdb
    brw-r----- 1 root disk 8, 17 Aug 8 17:39 /dev/sdb1
    brw-r----- 1 root disk 8, 32 Aug 8 17:40 /dev/sdc
    brw-r----- 1 root disk 8, 48 Aug 8 17:41 /dev/sdd
    brw-r----- 1 root disk 8, 64 Aug 8 18:26 /dev/sde
    brw-r----- 1 root disk 8, 65 Aug 8 18:43 /dev/sde1
    brw-r----- 1 root disk 8, 80 Aug 8 18:34 /dev/sdf
    brw-r----- 1 root disk 8, 81 Aug 8 18:43 /dev/sdf1
    brw-r----- 1 root disk 8, 96 Aug 8 18:34 /dev/sdg
    brw-r----- 1 root disk 8, 97 Aug 8 18:43 /dev/sdg1
    [root@rac1 home]# fdisk -l
    Disk /dev/sda: 72.7 GB, 72746008576 bytes
    255 heads, 63 sectors/track, 8844 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sda1 * 1 8844 71039398+ 83 Linux
    Disk /dev/sdb: 72.7 GB, 72746008576 bytes
    255 heads, 63 sectors/track, 8844 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sdb1 * 1 4079 32764536 82 Linux swap / Solaris
    Disk /dev/sdd: 20 MB, 20971520 bytes
    1 heads, 40 sectors/track, 1024 cylinders
    Units = cylinders of 40 * 512 = 20480 bytes
    Device Boot Start End Blocks Id System
    Disk /dev/sde: 146.2 GB, 146278449152 bytes
    255 heads, 63 sectors/track, 17784 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sde1 1 17784 142849948+ 83 Linux
    Disk /dev/sdf: 438.8 GB, 438835347456 bytes
    255 heads, 63 sectors/track, 53352 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sdf1 1 53352 428549908+ 83 Linux
    Disk /dev/sdg: 438.8 GB, 438835347456 bytes
    255 heads, 63 sectors/track, 53352 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sdg1 1 53352 428549908+ 83 Linux
    Node 2
    [root@rac2 ~]# ll /dev/sd*
    brw-r----- 1 root disk 8, 0 Aug 8 17:50 /dev/sda
    brw-r----- 1 root disk 8, 1 Aug 8 17:51 /dev/sda1
    brw-r----- 1 root disk 8, 2 Aug 8 17:50 /dev/sda2
    brw-r----- 1 root disk 8, 16 Aug 8 17:51 /dev/sdb
    brw-r----- 1 root disk 8, 32 Aug 8 17:52 /dev/sdc
    brw-r----- 1 root disk 8, 33 Aug 8 18:54 /dev/sdc1
    brw-r----- 1 root disk 8, 48 Aug 8 17:52 /dev/sdd
    brw-r----- 1 root disk 8, 64 Aug 8 17:52 /dev/sde
    brw-r----- 1 root disk 8, 65 Aug 8 18:54 /dev/sde1
    brw-r----- 1 root disk 8, 80 Aug 8 17:52 /dev/sdf
    brw-r----- 1 root disk 8, 81 Aug 8 18:54 /dev/sdf1
    [root@rac2 ~]# fdisk -l
    Disk /dev/sda: 145.4 GB, 145492017152 bytes
    255 heads, 63 sectors/track, 17688 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sda1 * 1 8796 70653838+ 83 Linux
    /dev/sda2 8797 12875 32764567+ 82 Linux swap / Solaris
    Disk /dev/sdc: 146.2 GB, 146278449152 bytes
    255 heads, 63 sectors/track, 17784 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sdc1 1 17784 142849948+ 83 Linux
    Disk /dev/sdd: 20 MB, 20971520 bytes
    1 heads, 40 sectors/track, 1024 cylinders
    Units = cylinders of 40 * 512 = 20480 bytes
    Device Boot Start End Blocks Id System
    Disk /dev/sde: 438.8 GB, 438835347456 bytes
    255 heads, 63 sectors/track, 53352 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sde1 1 53352 428549908+ 83 Linux
    Disk /dev/sdf: 438.8 GB, 438835347456 bytes
    255 heads, 63 sectors/track, 53352 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sdf1 1 53352 428549908+ 83 Linux
    [root@rac2 ~]#
    Thank you.
    Edited by: user12144220 on Aug 10, 2011 1:10 AM
    Edited by: user12144220 on Aug 10, 2011 1:11 AM
    Edited by: user12144220 on Aug 10, 2011 1:13 AM

  • Problem of using OCFS2 as shared storage to install RAC 10g on VMware

    Hi, all
    I am installing a RAC 10g cluster with two Linux nodes on VMware. I created a shared 5 GB disk for the two nodes as the shared storage partition. Using the OCFS2 tools, I formatted this shared storage partition and successfully auto-mounted it on both nodes.
    Before installing, I used the command "runcluvfy.sh stage -pre crsinst -n node1,node2" to check the installation prerequisites. Everything is OK except the error "Could not find a suitable set of interfaces for VIPs." By searching the web, I found this error could be safely ignored.
    OCFS2 works well on both nodes: I formatted the shared partition as an ocfs2 file system and configured o2cb to auto-start the OCFS2 service. I mounted the shared disk on both nodes at the /ocfs directory. By adding an entry to both nodes' /etc/fstab, the partition is auto-mounted at system boot. I can access files on the shared partition from both nodes.
    My problem is that when installing Clusterware, at the "Specify Oracle Cluster Registry" stage, I enter "/ocfs/OCRFILE" for the OCR location and "/ocfs/OCRFILE_Mirror" for the OCR mirror location, but I get the following error:
    ----- Error Message ----
    The location /ocfs/OCRFILE, entered for the Oracle Cluster Registry(OCR) is not shared across all the nodes in the cluster. Specify a shared raw partition or cluster file system that is visible by the same name on all nodes of the cluster.
    ------ Error Message ---
    I don't know why the OUI can't recognize /ocfs as a shared partition. On both nodes, using the command "mounted.ocfs2 -f", I get this result:
    Device FS Nodes
    /dev/sdb1 ocfs2 node1, node2
    What could be wrong? Any help is appreciated!
    Additional information:
    1) uname -r
    2.6.9-42.0.0.0.1.EL
    2) Permission of shared partition
    $ls -ld /ocfs/
    drwxrwxr-x 6 oracle dba 4096 Aug 3 18:22 /ocfs/

    Hello
    I am not sure how relevant the following solution is to your problem (regardless of when it was originally posted - it may help someone reading this thread). Here is what I faced and how I fixed it:
    I was setting up RAC using VMware. I prepared rac1 [installed the OS, configured disks, users, etc.] and then made a copy of it as rac2. So far so good. When, as per the guide I was following for the RAC configuration, I started the OCFS2 configuration, I faced the following error on rac2 when I tried to mount /dev/sdb1:
    ===================================================
    [root@rac2 ~]# mount -t ocfs2 -o datavolume,nointr /dev/sdb1 /ocfs
    ocfs2_hb_ctl: OCFS2 directory corrupted while reading uuid
    mount.ocfs2: Error when attempting to run /sbin/ocfs2_hb_ctl: "Operation not permitted"
    ===================================================
    After a lot of "googling around", I finally bumped into a page where a kind person had posted the solution [in my words below, and in more detail]:
    o shutdown both rac1 and rac2
    o in VMWare, "edit virtual machine settings" for rac1
    o remove the disk [make sure you drop the correct one]
    o recreate it and select "Allocate all disk space now" [with the same name and in the same directory as before]
    o start rac1, log in as "root", and run "fdisk /dev/sdb" [or whichever disk you are installing OCFS2 on]
    Once done, repeat the steps for configuring OCFS2. I was then able to successfully mount the disk on both machines.
    All this was apparently caused by not choosing the "Allocate all disk space now" option when creating the disk to be used for OCFS2.
    If you still have any questions or problems, email me at [email protected] and I'll try to get back to you at my earliest convenience.
    Good luck!
    Muhammad Amer
    [email protected]
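    The "Allocate all disk space now" fix above corresponds to creating a preallocated virtual disk. A sketch using vmware-vdiskmanager (the file name and size are examples); the .vmx settings commonly used for sharing a disk between two VMs are shown as comments and are assumptions to adjust for your VMware version:

    ```shell
    # Create a 5 GB preallocated virtual disk (-t 2 = preallocated, single file).
    vmware-vdiskmanager -c -s 5GB -a lsilogic -t 2 shared_ocfs2.vmdk

    # Typical .vmx entries for sharing the disk between rac1 and rac2:
    #   disk.locking = "FALSE"
    #   scsi1.present = "TRUE"
    #   scsi1.sharedBus = "virtual"
    #   scsi1:0.fileName = "shared_ocfs2.vmdk"
    #   scsi1:0.mode = "independent-persistent"
    ```

    The command requires the VMware tooling to be installed, so it is an outline of the step rather than a tested recipe.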

  • 10g RAC on Veritas Cluster Software and Shared Storage

    1. Install oracle binaries and patches (RAC install)
    2. Configure Cluster control interface (shared storage) for Oracle
    3. Create instances of oracle
    These are the 3 things I am wondering how to handle. I did all of these on Oracle Clusterware, but never on Veritas Cluster Server... are these 3 steps the same or different? Hoping someone can help.

    How can we do this while using Veritas cluster software?
    1. Install oracle binaries and patches (RAC install)
    2. Configure Cluster control interface (shared storage) for Oracle
    3. Create instances of oracle
    If we install RDBMS 10.2.0.1 with the standard installer, will it detect VCS, and will dbca then offer the RAC DB option?
    And what is "Configure Cluster control interface (shared storage) for Oracle"?

  • Cheap shared storage for test RAC

    Hi All,
    Is there a cheap shared storage device available for creating a test RAC environment? I used to create RAC with VMware, but that environment is not very stable.
    Regards

    Two options:
    The Oracle VM templates can be used to build clusters of any number of nodes using Oracle Database 11g Release 2, which includes Oracle 11g Rel. 2 Clusterware, Oracle 11g Rel. 2 Database, and Oracle Automatic Storage Management (ASM) 11g Rel. 2, patched to the latest, recommended patches.
    This is supported for Production.
    http://www.oracle.com/technetwork/server-storage/vm/rac-template-11grel2-166623.html
    Learn how to set up and configure an Oracle RAC 11g Release 2 development cluster on Oracle Linux for less than US$2,700.
    The information in this guide below is not validated by Oracle, is not supported by Oracle, and should only be used at your own risk; it is for educational purposes only.
    http://www.oracle.com/technetwork/articles/hunter-rac11gr2-iscsi-088677.html
    Regards,
    Levi Pereira
    Edited by: Levi Pereira on Dec 10, 2012 10:59 AM

  • 10g RAC on Veritas Cluster Software & Shared Storage

    We are in the process of building 10g RAC without using Oracle Clusterware; we will be using Veritas cluster software and Veritas shared storage. I am looking for some quick notes/articles on setting up/installing this RAC configuration.

    Step-By-Step Installation of 9i RAC on VERITAS STORAGE FOUNDATION (DBE/AC) and Solaris
    Doc ID: Note:254815.1
    These are the notes I was looking for. One question: only the RDBMS version will change, all other setup will be the same as mentioned in the note, and the DBA work will start from creating the DBs, right?

  • Oracle RAC with QFS shared storage going down when one disk fails

    Hello,
    I have an Oracle RAC in my testing environment. The configuration follows:
    nodes: V210
    Shared Storage: A5200
    #clrg status
    Group Name Node Name Suspended Status
    rac-framework-rg host1 No Online
    host2 No Online
    scal-racdg-rg host1 No Online
    host2 No Online
    scal-racfs-rg host1 No Online
    host2 No Online
    qfs-meta-rg host1 No Online
    host2 No Offline
    rac_server_proxy-rg host1 No Online
    host2 No Online
    #metastat -s racdg
    racdg/d200: Concat/Stripe
    Size: 143237376 blocks (68 GB)
    Stripe 0:
    Device Start Block Dbase Reloc
    d3s0 0 No No
    racdg/d100: Concat/Stripe
    Size: 143237376 blocks (68 GB)
    Stripe 0:
    Device Start Block Dbase Reloc
    d2s0 0 No No
    #more /etc/opt/SUNWsamfs/mcf
    racfs 10 ma racfs - shared
    /dev/md/racdg/dsk/d100 11 mm racfs -
    /dev/md/racdg/dsk/d200 12 mr racfs -
    When the disk /dev/did/dsk/d2 failed (I failed it by removing it from the array), the Oracle RAC went offline on both nodes, and then both nodes panicked and rebooted. Now #clrg status shows the output below.
    Group Name Node Name Suspended Status
    rac-framework-rg host1 No Pending online blocked
    host2 No Pending online blocked
    scal-racdg-rg host1 No Online
    host2 No Online
    scal-racfs-rg host1 No Online
    host2 No Pending online blocked
    qfs-meta-rg host1 No Offline
    host2 No Offline
    rac_server_proxy-rg host1 No Pending online blocked
    host2 No Pending online blocked
    CRS is not started on any of the nodes. I would like to know if anybody has faced this kind of problem when using QFS on a disk group. When one disk fails, Oracle is not supposed to go offline, since the other disk is still working; besides, my QFS configuration is supposed to mirror these two disks!
    Many thanks in advance
    Ushas Symon

    I'm not sure why you say QFS is mirroring these disks. Shared QFS has no inherent mirroring capability; it relies on the underlying volume manager (VM) or array to do that for it. If you need to mirror your storage, you do it at the VM level by creating a mirrored metadevice.
    Tim
    ---
