Is shared storage provided by VirtualBox better than, or as good as, OpenFiler?

Grid version : 11.2.0.3
Guest OS     : Solaris 10 (64-bit)
Host OS      : Windows 7 (64-bit)
Hypervisor   : VirtualBox 4.1.18
In the past, I have created a 2-node RAC (11.2.0.2) in a virtual environment in which the shared storage was hosted on OpenFiler.
Now that VirtualBox supports shared LUNs, I want to try it out. If VirtualBox's shared storage is as good as OpenFiler, I would definitely go for VirtualBox, as OpenFiler requires a third VM (Linux) to be created just for hosting the storage.
For pre-RAC testing, I created a VirtualBox VM and created a standalone DB in it. The test below was done on VirtualBox's LOCAL storage (I am yet to learn how to create shared LUNs in VirtualBox).
I know that datafile creation is not a definitive test of I/O throughput, but I did a quick test by creating a 6 GB tablespace.
Is a duration of 2 minutes and 42 seconds acceptable for a 6 GB datafile?
SQL> set timing on
SQL> create tablespace MHDATA datafile '/u01/app/hldat1/oradata/hcmbuat/mhdata01.dbf' SIZE 6G AUTOEXTEND off ;
Tablespace created.
Elapsed: 00:02:42.47
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
$
$ du -sh /u01/app/hldat1/oradata/hcmbuat/mhdata01.dbf
6.0G   /u01/app/hldat1/oradata/hcmbuat/mhdata01.dbf
$ df -h /u01/app/hldat1/oradata/hcmbuat
Filesystem             size   used  avail capacity  Mounted on
/dev/dsk/c0t0d0s6       14G    12G   2.0G    86%    /u01
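For anyone wanting to try the VirtualBox route, shared disks can be created from the host command line. A minimal sketch, assuming VirtualBox 4.1 is installed; the disk size, file path, VM names and controller name below are made-up examples (newer VirtualBox releases rename these commands to createmedium/modifymedium):

```shell
# Create a fixed-size disk image (shareable disks must not be dynamically allocated)
VBoxManage createhd --filename /vbox/shared/asm1.vdi --size 10240 --variant Fixed

# Mark it shareable so more than one VM may attach it at the same time
VBoxManage modifyhd /vbox/shared/asm1.vdi --type shareable

# Attach it to both RAC node VMs (controller name depends on each VM's settings)
VBoxManage storageattach rac1 --storagectl "SATA Controller" --port 1 --device 0 --type hdd --medium /vbox/shared/asm1.vdi
VBoxManage storageattach rac2 --storagectl "SATA Controller" --port 1 --device 0 --type hdd --medium /vbox/shared/asm1.vdi
```

Both VMs then see the same block device, which can be given to ASM just like an iSCSI LUN would be.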

Well, I once experimented with OpenFiler and built a 2-node 11.2 RAC on Oracle Linux 5 using iSCSI storage (3 VirtualBox VMs in total, all on a single desktop PC: Intel i7 2600K, 16 GB memory).
CPU/memory wasn't a problem, but as all 3 VMs were on a single HDD, performance was awful.
I didn't really run any benchmarks, but a compressed full database backup with RMAN for an empty database (<1 GB) took around 15 minutes...
2 VMs + a VirtualBox shared disk on the same single HDD provided much better performance; I still use this kind of setup for my sandbox RAC databases.
edit: 6 GB in 2'42" is about 37 MB/sec
with the above setup using OpenFiler, it was nowhere near this
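The arithmetic behind that figure, for anyone who wants to check it:

```shell
# 6 GiB = 6*1024 MiB written in 2 min 42.47 s = 162.47 s
awk 'BEGIN { printf "%.1f MB/sec\n", 6 * 1024 / (2 * 60 + 42.47) }'
# prints: 37.8 MB/sec
```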
edit2: I made a little test
host: Windows 7
guest: 2 x Oracle Linux 6.3, 11.2.0.3
hypervisor: VirtualBox 4.2
PC is the same as above
2 virtual cores + 4 GB memory for each VM
2 VMs + VirtualBox shared storage (a single file) on a single HDD (Seagate Barracuda 3TB ST3000DM001)
created a 4 GB datafile (not enough space for 6 GB):
{code}SQL> create tablespace test datafile '+DATA' size 4G;
Tablespace created.
Elapsed: 00:00:31.88
{code}
{code}RMAN> backup as compressed backupset database format '+DATA';
Starting backup at 02-OCT-12
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=22 instance=RDB1 device type=DISK
channel ORA_DISK_1: starting compressed full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00001 name=+DATA/rdb/datafile/system.262.790034147
input datafile file number=00002 name=+DATA/rdb/datafile/sysaux.263.790034149
input datafile file number=00003 name=+DATA/rdb/datafile/undotbs1.264.790034151
input datafile file number=00004 name=+DATA/rdb/datafile/undotbs2.266.790034163
input datafile file number=00005 name=+DATA/rdb/datafile/users.267.790034163
channel ORA_DISK_1: starting piece 1 at 02-OCT-12
channel ORA_DISK_1: finished piece 1 at 02-OCT-12
piece handle=+DATA/rdb/backupset/2012_10_02/nnndf0_tag20121002t192133_0.389.795640895 tag=TAG20121002T192133 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:25
channel ORA_DISK_1: starting compressed full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including current control file in backup set
including current SPFILE in backup set
channel ORA_DISK_1: starting piece 1 at 02-OCT-12
channel ORA_DISK_1: finished piece 1 at 02-OCT-12
piece handle=+DATA/rdb/backupset/2012_10_02/ncsnf0_tag20121002t192133_0.388.795640919 tag=TAG20121002T192133 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 02-OCT-12
{code}
Now I don't know much about OpenFiler, and maybe I messed up something, but I think this is quite good, so I wouldn't use a 3rd VM just for the storage.

Similar Messages

  • How to implement RAC with no shared storage available...

    Hello Guys,
I have installed and configured a single-node RAC R2 with 2 database instances on the Windows 2000 platform with the help of VMware software. The installation was successful.
I just met my boss to explain my achievements and to get permission to go for a production environment, where we will be developing a 3-node RAC on the Linux platform.
I told him that we need a shared storage system, which can be achieved with SAN, NAS or FireWire disks... but he refuses to buy any additional hardware, as we have already spent a lot on servers. He wants me to use another computer with a large Hard Disk Drive as shared storage.
I just want to know: can we configure a 3-node RAC using the HDD of another computer as common shared storage?
Please guide me. I really want to implement RAC in our production environment.
Regards,
Imran

Yeah, but would OpenFiler work? Has anyone implemented RAC using OpenFiler or software-configured shared storage?
Any other better solution, as I have to implement it in a production environment and have to seek better backup facilities?

Are you looking for a production environment, or an evaluation environment?
For an evaluation environment, OpenFiler works. I occasionally teach RAC classes using it. It works, but it is not as fast or as robust as I'd like in production.
For a production environment, plan to pay some money. The least expensive commercial shared storage I have found is from NetApp - any NetApp F8xx or FAS2xx filer with an iSCSI or even NFS license will do for RAC.
    Message was edited by:
    Hans Forbrich

  • Does Hyper-v HA need shared storage?

I have a server running Hyper-V Server 2012. On this Hyper-V host I have 1 DC and 1 DHCP server running. I don't worry too much about fault tolerance with these servers: I have two other DCs running on physical servers, and if DHCP goes down, it only takes minutes to install DHCP elsewhere.
My company is now looking into virtualization and would like to start virtualizing more servers, servers that are critical and do need some type of HA and/or FT.
We are an SMB (~20 servers, mostly Server 2008). For our most critical server, the longest it can stay down is 1 hour, so our recovery time should be less than 1 hour.
With all this said, what would you recommend in terms of building a virtual environment? Would using Hyper-V Replica be enough, or would HA have to be implemented? If HA is implemented, and I imagine the Hyper-V hosts would need to be in a cluster, would we need shared storage (SAN) in order to use HA and/or FT?
    Please let me know if you need more information.
    Best regards,
    Alex

I would recommend using a cluster setup for High Availability. Of course, that requires shared storage. Ideally, you would have a SAN for its performance. However, with Hyper-V 2012 and higher you can use SMB 3.0 shares for the shared storage instead.
More details here: https://technet.microsoft.com/en-us/library/jj134187.aspx
Of course, it remains better to have a replica too if this is possible.
    This posting is provided AS IS with no warranties or guarantees , and confers no rights.
    Ahmed MALEK

  • How to Create Shared Storage using VM-Server 2.1 Red Hat Enterprise Linux 5

    Thanks in advance.
Describe in sequence how to create shared storage for a two-node guest cluster running Red Hat Enterprise Linux, using Oracle VM Server 2.1 on Red Hat Enterprise Linux 5, via the command line or an appropriate interface.
How do I create shared storage using Oracle VM Server 2.1?
How do I configure the network for a two-node cluster (Oracle Clusterware)?

    Hi Suresh Kumar,
Oracle Application Server 10g Release 2, Patch Set 3 (10.1.2.3) is required to be fully certified on OEL 5.x or RHEL 5.x.
Oracle Application Server 10g Release 2 versions 10.1.2.0.0 and 10.1.2.0.1 are not supported with Oracle Enterprise Linux (OEL) 5.0 or Red Hat Enterprise Linux (RHEL) 5.0. It is recommended that version 10.1.2.0.2 be obtained and installed.
Which implies Oracle AS 10.1.2.x is somewhat certified on RHEL 5.x.
I think it would be better if you get in touch with Oracle Support regarding this.
Sorry, I am not aware of any document on migration from Sun Solaris to RH Linux 5.2.
    Thanks,
    Sutirtha

  • DFSr supported cluster configurations - replication between shared storage

    I have a very specific configuration for DFSr that appears to be suffering severe performance issues when hosted on a cluster, as part of a DFS replication group.
    My configuration:
    3 Physical machines (blades) within a physical quadrant.
    3 Physical machines (blades) hosted within a separate physical quadrant
    Both quadrants are extremely well connected, local, 10GBit/s fibre.
    There is local storage in each quadrant, no storage replication takes place.
    The 3 machines in the first quadrant are MS clustered with shared storage LUNs on a 3PAR filer.
    The 3 machines in the second quadrant are also clustered with shared storage, but on a separate 3PAR device.
8 shared LUNs are presented to the cluster in the first quadrant, and an identical storage layout is connected in the second quadrant. Each LUN has an associated HAFS application which can fail over onto any machine in the local cluster.
DFS replication groups have been set up for each LUN, and data is replicated from an "Active" cluster node entry point to a "Passive" cluster node that provides no entry point to the data via DFSn and holds a Read-Only copy on its shared cluster storage.
    For the sake of argument, assume that all HAFS application instances in the first quadrant are "Active" in a read/write configuration, and all "Passive" instances of the HAFS applications in the other quadrants are Read-Only.
    This guide: http://blogs.technet.com/b/filecab/archive/2009/06/29/deploying-dfs-replication-on-a-windows-failover-cluster-part-i.aspx defines
how to add a clustered service to a replication group. It clearly shows using "Shared storage" for the cluster, which is common sense; otherwise there is effectively no application fail-over possible, which removes the entire point of using a resilient cluster.
    This article: http://technet.microsoft.com/en-us/library/cc773238(v=ws.10).aspx#BKMK_061 defines the following:
    DFS Replication in Windows Server 2012 and Windows Server 2008 R2 includes the ability to add a failover cluster
    as a member of a replication group. The DFS Replication service on versions of Windows prior to Windows Server 2008 R2
    is not designed to coordinate with a failover cluster, and the service will not fail over to another node.
    It then goes on to state, quite incredibly: DFS Replication does not support replicating files on Cluster Shared Volumes.
Stating quite simply that DFSr does not support Cluster Shared Volumes makes absolutely no sense at all after stating that clusters are supported in replication groups, and a TechNet guide is provided to set up and configure this configuration. What possible use is a clustered HAFS solution that has no shared storage between the clustered nodes - none at all.
    My question:  I need some clarification, is the text meant to read "between" Clustered
    Shared Volumes?
The storage configuration must be shared in order to form a clustered service in the first place. What we are seeing from experience is a serious degradation of performance when attempting to replicate / write data between two clusters running a HAFS configuration, in a DFS replication group.
If, for instance, as a test, local / logical storage is mounted to a physical machine, the performance of a DFS replication group between the unshared, logical storage on the physical nodes approaches 15k small files per minute on initial write, and is even higher for file amendments. When replicating between two nodes in a cluster, with shared clustered storage, the solution manages a weak 2,500 files per minute on initial write and only 260 files per minute when attempting to update data / amend files.
By testing various configurations we have effectively ruled out the SAN, the storage, drivers, firmware, the DFSr configuration and the replication group configuration - the only factor left that makes any difference is replicating from shared clustered storage to another shared clustered storage LUN.
    So in summary:
    Logical Volume ---> Logical Volume = Fast
    Logical Volume ---> Clustered Shared Volume = ??
Clustered Shared Volume ---> Clustered Shared Volume = Pitifully slow
    Can anyone explain why this might be?
    The guidance in the article is in clear conflict with all other evidence provided around DFSr and clustering, however it seems to lean towards why we may be seeing a real issue with replication performance.
    Many thanks for your time and any help/replies that may be received.
    Paul

    Hello Shaon Shan,
I am also seeing the same scenario at one of my customers' sites.
We have two file servers running on Hyper-V 2012 R2 as guest VMs using a Cluster Shared Volume. Even the data partition drive is part of the CSV.
It's really confusing whether DFS replication on CSVs is supported or not, and what the consequences would be of using it.
To my knowledge, we have some customers using Hyper-V 2008 R2 with DFS configured and running fine on CSVs for more than 4 years without any issue.
I would appreciate it if you could elaborate and explain in detail the limitations of using CSVs.
    Thanks in advance,
    Abul

  • Choice of shared storage for Oralce VM clustering feature

    Hi,
I would like to experiment with the Oracle VM clustering feature over multiple OVM servers. One requirement is shared storage, which can be provided by an iSCSI/FC SAN or NFS. These types of external storage are usually very expensive. For testing purposes, what other options for shared storage can be used? Can someone share their experience?

    You don't need to purchase an expensive SAN storage array for this. A regular PC running Linux or Solaris will do just fine to act as an iSCSI target or to provide NFS shares via TCP/IP. Googling for "linux iscsi target howto" reveals a number of hits like this one: "RHEL5 iSCSI Target/Initiator" - http://blog.hamzahkhan.com/?p=55
    For Solaris, this book might be useful: "Configuring Oracle Solaris iSCSI Targets and Initiators (Tasks)" - http://download.oracle.com/docs/cd/E18752_01/html/817-5093/fmvcd.html
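As a sketch of how little is needed on the Linux side: with the scsi-target-utils package (tgtd) installed, exporting a block device as an iSCSI LUN takes roughly the commands below. The target IQN and device path are made-up examples; OpenFiler wraps the same idea in a web GUI:

```shell
# Start the target daemon, then define a target and expose one LUN on it
service tgtd start
tgtadm --lld iscsi --op new --mode target --tid 1 -T iqn.2012-10.example:rac.lun1
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 -b /dev/sdb1
# Allow all initiators to connect (restrict by IP in anything less throwaway)
tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL
```

Each RAC node can then discover and log in to the target with iscsiadm as usual.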

  • Shared Storage to handle multiple seats for UHD-4k,8k and larger res-am I dreaming?

    Is this possible?
    In our shop we work in very large canvases in AE and PPro for Live Event Production. Lots of compositing in 4k up to and sometimes beyond 8k. AE for the heavy compositing with mediocre RAM previews. PPro for PreViz playback with realtime performance a must in HD and most times in 4k.
    We have been working thru setting up appropriate workstations to handle the performance and are making progress. We have historically been on all Macs but branching out to PC for the better performance capabilities. And testing render farms for big renders.
Storage is the next project. We have 10-15 AE artists and 5-8 PPro editors who would benefit from a shared storage solution. Is this possible with current storage technology? Or would it be more efficient to look into local storage like SAS... or?
    Would appreciate any input.
    Oh...and this Adobe Anywhere sounds very interesting in our workflow....any opinions? Again....not talking about hd or lower res....4k and above.
    Thanks,
    -philc

All that I know is from past discussions (see below), which indicate that you may not edit over a network connection (except with Anywhere, which is a different product), so you will be limited to using any kind of shared storage only for "master" files that are copied to local drives for editing and then copied back when the edit is done.
But, since I don't do that kind of editing anyway, all I have are some saved links... and there may be newer products that will work for shared editing, not just storing files.
Any version of Premiere doesn't work properly, if at all, over a network... so if it works at all, expect inconsistent behavior and problems.
    -see messages #1 and #3 in http://forums.adobe.com/thread/771151
    -you MUST give all users administrator accounts to use Premiere
    -and especially Encore dual layer http://forums.adobe.com/thread/969395
    -#5 Server 2008 is UNsupported http://forums.adobe.com/thread/851602
    -a work around, of sorts http://forums.adobe.com/thread/957523
    -and not on a "domain" http://forums.adobe.com/thread/858977
    -http://helpx.adobe.com/premiere-pro/kb/networks-removable-media-dva.html

Runcluvfy on shared storage

Hey, I might just be forgetting something.
I am going to install a RAC cluster on SLES11 SP1.
I am using multipathing, where the shared storage is provided by two SANs.
I see all LUNs and did a raw device mapping via disk by-id (all LUNs have a partition on them).
I am able to write with dd to the raw devices from both RAC nodes at the same time.
I set the permissions on the raw devices.
I installed the cluvfy RPM on both nodes.
When I start cluvfy it passes the first checks successfully:
./runcluvfy.sh stage -post hwos -n raca,racb -verbose
But when it starts checking the shared storage, it fails.
What prerequisite needs to be configured for a successful check of the storage devices?
Chris
cluvfy comes out of the installation package from 10.2.0.1

Christian wrote:
But when it starts checking the shared storage, it fails.

Do you want to do a pre-check or a post-check?
If you would like to do a pre-check, check with:
./runcluvfy.sh stage -pre crsinst -n raca,racb -verbose
HTH,
Refer to the installation guide for reference.
Good luck.
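There is also a dedicated cluvfy component check for shared storage accessibility (ssa), which can pin down the storage problem directly; something along these lines (the device path is an example):

```shell
# Check that the named storage device is visible and shareable from both nodes
./runcluvfy.sh comp ssa -n raca,racb -s /dev/raw/raw1 -verbose
```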

  • Shared storage check failed on nodes

    hi friends,
I am installing RAC 10g on VMware and the OS is OEL4. I completed all the prerequisites, but when I run the command below
./runcluvfy stage -post hwos -n rac1,rac2
I am facing the error below:
    node connectivity check failed.
    Checking shared storage accessibility...
    WARNING:
    Unable to determine the sharedness of /dev/sde on nodes:
    rac2,rac2,rac2,rac2,rac2,rac1,rac1,rac1,rac1,rac1
    Shared storage check failed on nodes "rac2,rac1"
Please help me, anyone; it's urgent.
    Thanks,
    poorna.
    Edited by: 958010 on 3 Oct, 2012 9:47 PM

    Hello,
It seems that your storage is not accessible from both nodes. If you want, you can follow these steps to configure 10g RAC on VMware.
Steps to configure a two-node 10g RAC on RHEL-4
    Remark-1: H/W requirement for RAC
    a) 4 Machines
    1. Node1
    2. Node2
    3. storage
    4. Grid Control
b) 2 switches
    c) 6 straight cables
    Remark-2: S/W requirement for RAC
a) 10g Clusterware
    b) 10g database
    Both must have the same version like (10.2.0.1.0)
    Remark-3: RPMs requirement for RAC
a) all 10g RPMs (better to use RHEL-4 and choose the 'everything' option to install all the RPMs)
    b) 4 new rpms are required for installations
    1. compat-gcc-7.3-2.96.128.i386.rpm
    2. compat-gcc-c++-7.3-2.96.128.i386.rpm
    3. compat-libstdc++-7.3-2.96.128.i386.rpm
    4. compat-libstdc++-devel-7.3-2.96.128.i386.rpm
    ------------ Start Machine Preparation --------------------
    1. Prepare 3 machines
i. node1.oracle.com
eth0 (192.9.201.183) - for public network
eth1 (10.0.0.1) - for private n/w
gateway (192.9.201.1)
subnet (255.255.255.0)
ii. node2.oracle.com
eth0 (192.9.201.187) - for public network
eth1 (10.0.0.2) - for private n/w
gateway (192.9.201.1)
subnet (255.255.255.0)
iii. openfiler.oracle.com
eth0 (192.9.201.182) - for public network
gateway (192.9.201.1)
subnet (255.255.255.0)
    NOTE:-
    -- Here eth0 of all the nodes should be connected by Public N/W using SWITCH-1
    -- eth1 of all the nodes should be connected by Private N/W using SWITCH-2
    2. network Configuration
#vim /etc/hosts
192.9.201.183 node1.oracle.com node1
192.9.201.187 node2.oracle.com node2
192.9.201.182 openfiler.oracle.com openfiler
10.0.0.1 node1-priv.oracle.com node1-priv
10.0.0.2 node2-priv.oracle.com node2-priv
192.9.201.184 node1-vip.oracle.com node1-vip
192.9.201.188 node2-vip.oracle.com node2-vip
    2. Prepare Both the nodes for installation
    a. Set Kernel Parameters (/etc/sysctl.conf)
    kernel.shmall = 2097152
    kernel.shmmax = 2147483648
    kernel.shmmni = 4096
    kernel.sem = 250 32000 100 128
    fs.file-max = 65536
    net.ipv4.ip_local_port_range = 1024 65000
    net.core.rmem_default = 262144
    net.core.rmem_max = 262144
    net.core.wmem_default = 262144
    net.core.wmem_max = 262144
    b. Configure /etc/security/limits.conf file
    oracle soft nproc 2047
    oracle hard nproc 16384
    oracle soft nofile 1024
    oracle hard nofile 65536
    c. Configure /etc/pam.d/login file
    session required /lib/security/pam_limits.so
    d. Create user and groups on both nodes
    # groupadd oinstall
    # groupadd dba
    # groupadd oper
    # useradd -g oinstall -G dba oracle
    # passwd oracle
    e. Create required directories and set the ownership and permission.
# mkdir -p /u01/crs1020
# mkdir -p /u01/app/oracle/product/10.2.0/asm
# mkdir -p /u01/app/oracle/product/10.2.0/db_1
# chown -R oracle:oinstall /u01/
# chmod -R 755 /u01/
    f. Set the environment variables
    $ vi .bash_profile
    ORACLE_BASE=/u01/app/oracle/; export ORACLE_BASE
    ORA_CRS_HOME=/u01/crs1020; export ORA_CRS_HOME
    #LD_ASSUME_KERNEL=2.4.19; export LD_ASSUME_KERNEL
#LANG="en_US"; export LANG
    3. storage configuration
    PART-A Open-filer Set-up
    Install openfiler on a machine (Leave 60GB free space on the hdd)
    a) Login to root user
    b) Start iSCSI target service
    # service iscsi-target start
# chkconfig --level 345 iscsi-target on
    PART –B Configuring Storage on openfiler
a) From any client machine, open the browser and access the openfiler console (port 446):
    https://192.9.201.182:446/
    b) Open system tab and update the local N/W configuration for both nodes with netmask (255.255.255.255).
    c) From the Volume tab click "create a new physical volume group".
d) From "Block Device Management" click on the "(/dev/sda)" option under the 'edit disk' option.
    e) Under "Create a partition in /dev/sda" section create physical Volume with full size and then click on 'CREATE'.
    f) Then go to the "Volume Section" on the right hand side tab and then click on "Volume groups"
    g) Then under the "Create a new Volume Group" specify the name of the volume group (ex- racvgrp) and click on the check box and then click on "Add Volume Group".
h) Then go to the "Volume Section" on the right-hand-side tab, click on "Add Volumes", specify the volume name (ex- racvol1), use all the space, specify the "Filesystem/Volume type" as iSCSI, and then click on CREATE.
    i) Then go to the "Volume Section" on the right hand side tab and then click on "iSCSI Targets" and then click on ADD button to add your Target IQN.
j) Then go to the "LUN Mapping" and click on "MAP".
k) Then go to the "Network ACL", allow both nodes from there, and click on UPDATE.
Note: To create multiple volumes with openfiler we would need to use multipathing, which is quite complex; that's why here we are going for a single volume. Edit the property of each volume and change access to 'allow'.
f) Install the iscsi-initiator RPM on both nodes to access the iSCSI disk:
    #rpm -ivh iscsi-initiator-utils-----------
    g) Make entry in iscsi.conf file about openfiler on both nodes.
    #vim /etc/iscsi.conf (in RHEL-4)
    and in this file you will get a line "#DiscoveryAddress=192.168.1.2" remove comment and specify your storage ip address here.
    OR
    #vim /etc/iscsi/iscsi.conf (in RHEL-5)
    and in this file you will get a line "#ins.address = 192.168.1.2" remove comment and specify your storage ip address here.
    g) #service iscsi restart (on both nodes)
    h) From both Nodes fire this command to access volume of openfiler-
# iscsiadm -m discovery -t sendtargets -p 192.9.201.182
    i) #service iscsi restart (on both nodes)
j) #chkconfig --level 345 iscsi on (on both nodes)
k) Make 3 primary partitions and 1 extended partition, and within the extended partition make 11 logical partitions.
    A. Prepare partitions
    1. #fdisk /dev/sdb
    :e (extended)
    Part No. 1
    First Cylinder:
    Last Cylinder:
    :p
    :n
    :l
    First Cylinder:
    Last Cylinder: +1024M
    2. Note the /dev/sdb* names.
    3. #partprobe
    4. Login as root user on node2 and run partprobe
    B. On node1 login as root user and create following raw devices
    # raw /dev/raw/raw5 /dev/sdb5
# raw /dev/raw/raw6 /dev/sdb6
    # raw /dev/raw/raw12 /dev/sdb12
Run ls -l /dev/sdb* and ls -l /dev/raw/raw* to confirm the above.
    -Repeat the same thing on node2
    C. On node1 as root user
# vi /etc/sysconfig/rawdevices
    /dev/raw/raw5 /dev/sdb5
    /dev/raw/raw6 /dev/sdb6
    /dev/raw/raw7 /dev/sdb7
    /dev/raw/raw8 /dev/sdb8
    /dev/raw/raw9 /dev/sdb9
    /dev/raw/raw10 /dev/sdb10
    /dev/raw/raw11 /dev/sdb11
    /dev/raw/raw12 /dev/sdb12
    /dev/raw/raw13 /dev/sdb13
    /dev/raw/raw14 /dev/sdb14
    /dev/raw/raw15 /dev/sdb15
    D. Restart the raw service (# service rawdevices restart)
    #service rawdevices restart
    Assigning devices:
    /dev/raw/raw5 --> /dev/sdb5
    /dev/raw/raw5: bound to major 8, minor 21
    /dev/raw/raw6 --> /dev/sdb6
    /dev/raw/raw6: bound to major 8, minor 22
    /dev/raw/raw7 --> /dev/sdb7
    /dev/raw/raw7: bound to major 8, minor 23
    /dev/raw/raw8 --> /dev/sdb8
    /dev/raw/raw8: bound to major 8, minor 24
    /dev/raw/raw9 --> /dev/sdb9
    /dev/raw/raw9: bound to major 8, minor 25
    /dev/raw/raw10 --> /dev/sdb10
    /dev/raw/raw10: bound to major 8, minor 26
    /dev/raw/raw11 --> /dev/sdb11
    /dev/raw/raw11: bound to major 8, minor 27
    /dev/raw/raw12 --> /dev/sdb12
    /dev/raw/raw12: bound to major 8, minor 28
    /dev/raw/raw13 --> /dev/sdb13
    /dev/raw/raw13: bound to major 8, minor 29
    /dev/raw/raw14 --> /dev/sdb14
    /dev/raw/raw14: bound to major 8, minor 30
    /dev/raw/raw15 --> /dev/sdb15
    /dev/raw/raw15: bound to major 8, minor 31
    done
    E. Repeat the same thing on node2 also
    F. To make these partitions accessible to oracle user fire these commands from both Nodes.
# chown -R oracle:oinstall /dev/raw/raw*
# chmod -R 755 /dev/raw/raw*
F. To make these partitions accessible after a restart, make these entries on both nodes:
# vi /etc/rc.local
chown -R oracle:oinstall /dev/raw/raw*
chmod -R 755 /dev/raw/raw*
4. SSH configuration (user equivalence)
On node1:- $ ssh-keygen -t rsa
$ ssh-keygen -t dsa
On node2:- $ ssh-keygen -t rsa
$ ssh-keygen -t dsa
On node1:- $ cd .ssh
$ cat *.pub >> node1
On node2:- $ cd .ssh
$ cat *.pub >> node2
On node1:- $ scp node1 node2:/home/oracle/.ssh
On node2:- $ scp node2 node1:/home/oracle/.ssh
On node1:- $ cat node* >> authorized_keys
On node2:- $ cat node* >> authorized_keys
    Now test the ssh configuration from both nodes
    $ vim a.sh
    ssh node1 hostname
    ssh node2 hostname
    ssh node1-priv hostname
    ssh node2-priv hostname
    $ chmod +x a.sh
    $./a.sh
The first time you'll have to give the password; after that it never asks for a password again.
    5. To run cluster verifier
    On node1 :-$cd /…/stage…/cluster…/cluvfy
$./runcluvfy.sh stage -pre crsinst -n node1,node2
The first time, it will ask for four new RPMs; because of the dependencies it is better to install them in this order (rpm-3, rpm-4, rpm-1, rpm-2):
    1. compat-gcc-7.3-2.96.128.i386.rpm
    2. compat-gcc-c++-7.3-2.96.128.i386.rpm
    3. compat-libstdc++-7.3-2.96.128.i386.rpm
    4. compat-libstdc++-devel-7.3-2.96.128.i386.rpm
Then run cluvfy again and check that it gives a clean result, then start the Clusterware installation.

  • Vmware shared storage

    Hello,
    I have downloaded the Oracle/VMWare/SuSE and have exercised the databases, etc. However, I would now like to take this to the next step and create a 'sibling' virtual machine - which I would just make as a copy of the first and have the instances O10G1 and O10G2 on each of the two virtual machines, respectively. Finally, I would then like to create a shared storage device between the two virtual machines. I have had great difficulty doing this and have found disagreement on other forums whether this is even possible with vmware 4.5. The most promising post I found: ( http://www.vmware.com/community/thread.jspa?forumID=19&threadID=11257&messageID=109257#109257 ) references a document from Oracle Japan ( http://otndnld.oracle.co.jp/products/database/oracle10g/pdf/RAC_Config_VMWareLinux.pdf ). Is this document also on the Oracle USA site? It seems to have the answers I am seeking. If not, can you offer any guidance?
    Thanks!
    Brandon Moore

The kit provided is not meant to be used in this way; your request for help is outside the scope of this forum.
It's a bit complicated to do what you want. I would simply suggest you use NFS or iSCSI on external storage, which would eliminate any VMware issues with shared storage.
    good luck.
    Saar.

  • WRT600N - Network Sharing / Storage Issues

    I recently bought a WRT600N v1.0 from Dell.com and configured the router (I'm a CCNP, so I've been using routers, etc.. for years). I had a WRT54GS v1.0 before this and that worked great for years.
    The WiFi (both 5GHz and 2.4GHz) has been rock solid on the WRT600N, but my issue is with the network sharing/storage (NAS) part of the router.
I plugged in a Seagate USB 2.0 750GB HDD (already formatted to NTFS using Windows Vista). I was able to create folders and share them out. However, I cannot map a drive to about half of the folders. All folders are set up the exact same way, and even for those I can map a drive to, the network performance copying files to those shares is horrible (even over Gigabit Ethernet). I'm talking about 1 MB/s, when I was getting 20 MB/s using my old solution.
    When I formatted the disk using the WRT600N, it only allowed me to format it to FAT, not even FAT32 or NTFS. Then it would still only work half the time.
    Any ideas?
    This seems like a FW issue, like many brand new Linksys products. I've already contacted Linksys support and they don't have a beta FW available to resolve this either.

    DTSkyCop wrote:
    Have you fixed your issues with the storage feature of this router...?  I recently bought the WRT600N router and the SAME exact Seagate HD for the storage link feature.  I have spent over 15 hours on numerous phone calls to Linksys techs and My Computer Works techs, and NOTHING doing, it still does not work right.
    I set up my drive, and I too had to FORMAT it using the FAT system of the router per one of the Linksys supervisor techs.  I did this, set up everything related to my drive, and MAPPED all my drives...  I had access to all the drives at first and could move files around and delete from the network.
    Then I wait a few minutes, open a NEW program, and try to SAVE to the network drive location, and "You need permission to perform this action. Try Again or Cancel" are the only options I get, with NO REQUEST for a password.  I am the administrator and now cannot access my drive / network...
    The techs at MCW are puzzled as well, as they initially set the drive and share folders up.  They cannot figure out why there continues to be a problem accessing the folders on the drive.
    I spent 2 1/2 hours tonight on the phone with Linksys and got really no help, only "try this" and "try that".  While experiencing the problems with the tech on the phone, I performed a REFRESH on the Storage / Disk page and ALL the share folders I created disappeared.  I looked physically at my router and discovered the USB was now out... Why?  The tech thought it was a loose connection.  NEW router and NEW HD, both in a secure location with no one around to touch them.  I could not get the router to discover the HD plugged into the USB port, so I finally unplugged it and plugged it back in.
    All settings remained and the router came back up without what seemed to be any hitches.  At this point, I played with the configuration settings while on what seemed like eternal hold and FINALLY got access to the NETWORK drives and was able to save to all the partitions that were created.
    The tech decided that all was OK and we disconnected, and guess what...  I again tried to SAVE to a partition and "You need permission to perform this action. Try Again or Cancel" came up again.
    I am TRULY frustrated with this NEW SETUP and don't know what to do now.  Can anyone help me fix this thing to make it work right?????
    One thing you will learn about Linksys is that they have the best products out there... but if you buy a first-generation product (such as the WRT600N), it will have issues that take firmware updates to resolve. The best thing is to report the issues and wait a few months until several firmware updates come out to resolve the problems.
    I recently installed the latest FW update and my WRT600N has been acting "better", but it still needs a few more FW updates to fix the other stability issues. I'm fully prepared to wait it out, but it is unfortunate that we have to wait anything out at all, as Linksys QA should have caught these bugs prior to product release and fixed them then.
    I've tried products from NetGear, Belkin, etc., and all of them are horrible. At least Linksys products work once the FW updates come out. I just hope you never need to contact Linksys support, as they are horrible.
    Here's my configuration:
    - WRT600N v1.0
    - Latest FW
    - Seagate 750GB USB 2.0 External HDD formatted to NTFS using Windows Vista
    - Shared folders created using WRT600N
    - Media Server Enabled (still does not work with my Playstation 3)
    - File write performance, still horrible.. even over Gigabit Ethernet (around 200KB/s; should be at least 4-5MB/s)
    - File share mapping, still intermittent.. usually works, but I noticed it seems to be case sensitive in mapping the drive names (odd)
    - Should have an option to format the HDD as FAT, FAT32 or NTFS (only FAT is offered currently)

  • Migration/Live Migration fails with shared storage (SMB), but not all the time

    I have 2 Hyper-V hosts using shared storage via SMB. I'm having intermittent issues with migrations failing. It will either go through the motions of doing the whole move and then fail, and/or when I try to start it again I get the following:
    I've done a lot of reading on this and have tried various things that have been recommended. For each Hyper-V host I've set up delegation via Kerberos, i.e. HV_Host1 has CIFS - HV_Host2; CIFS - SMB Server; MS Virtual System Migration Service - HV_Host2, and vice versa.
    On the actual SMB share on the shared storage server, I added permissions on the folder for HV_Host1, HV_Host2, the storage server, Domain Admins, and my user account, all with full access.
    In Group Policy, I've blocked inheritance for the OU containing the Hyper-V hosts, though I have manually added some of the Group Policies that I needed.
    Last thing: I also added Domain Admins to the local GP on each host for "Log on as a service" and "Impersonate a client after authentication", as shown in this thread: http://social.technet.microsoft.com/forums/windowsserver/en-US/7e6b9b1b-e5b1-4e9e-a1f3-2ce72ea1e543/unable-to-migrate-create-vm-hyperv-cluster-2012-logon-failure-have-to-restart-the-hyperv
    I get these failed migrations regardless of whether I start them on the actual host or via the admin console from Win8.
    At this point I'm not sure what else to check or what next step to take.

    Hi Granite,
    Please refer to the following article to build Hyper-V over SMB:
    http://technet.microsoft.com/en-us/library/jj134187.aspx#BKMK_Step3
    Best Regards
    Elton Ji
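    For the Kerberos piece specifically, Windows Server 2012 R2 includes cmdlets that configure SMB constrained delegation in one step instead of editing each computer object by hand. A minimal sketch, assuming the host and file server names used in this thread and that the Active Directory PowerShell module is installed:

    ```powershell
    # Configure resource-based constrained delegation for Hyper-V over SMB
    # (Windows Server 2012 R2; requires the Active Directory PowerShell module).
    # Host and file server names are the placeholders used in this thread.
    Enable-SmbDelegation -VmHost HV_Host1 -SmbServer SMBServer
    Enable-SmbDelegation -VmHost HV_Host2 -SmbServer SMBServer

    # Verify which delegations are in place
    Get-SmbDelegation -SmbServer SMBServer
    ```

    Note that these cmdlets configure resource-based delegation, which requires Windows Server 2012 or later domain controllers; on older domains the per-computer-object CIFS delegation described above remains the way to do it.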

  • SQL Server 2012 AlwaysOn for Multi-subnet geographical HA solution steps -- NON-Shared storage,standalone servers

    1. Can anyone provide the detailed steps for multi-subnet HA with AlwaysOn Availability Groups?
    -- SQL Server 2012 AlwaysOn for a multi-subnet geographical HA solution
    2. Do we need a VLAN or not for SQL Server 2012 on Windows 2012? Please provide details on whether a VLAN is required.
    -- I have read the MS links; for SQL Server 2012 and above a VLAN is not required.
    Env:
    SQL Server 2012
    Windows 2012 R2 (2 servers in different locations)
    Non-shared storage (stand-alone servers)
    AlwaysOn Availability Group
    I have seen white papers, but they did not have detailed step-by-step instructions.
    Thanks

    Hi SQLDBA321,
    As you noted, SQL Server 2012 and higher versions have removed the requirement for a virtual local area network (VLAN). For more details, please review this similar blog:
    What you need for a Multi Subnet Configuration for AlwaysOn FCI in SQL Server 2012.
    And you can perform the steps in the following similar blog to set up an AlwaysOn Availability Group with multiple subnets.
    http://www.patrickkeisler.com/2013/07/setup-availability-group-with-multiple.html
    Thanks,
    Lydia Zhang
    TechNet Community Support
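    In a multi-subnet configuration, the availability group listener registers one IP address per subnet, so clients should enable multi-subnet failover to try all listener IPs in parallel rather than waiting out a TCP timeout on the inactive one. A minimal sketch with the sqlcmd utility (the listener name and database are placeholders):

    ```shell
    # -M enables MultiSubnetFailover in sqlcmd (SQL Server 2012 and later clients);
    # the listener name resolves to one IP per subnet, and the client races
    # connection attempts to all of them instead of trying them serially
    sqlcmd -S aglistener.contoso.com,1433 -M -d MyAvailabilityDB -Q "SELECT @@SERVERNAME"
    ```

    Application connection strings get the same behavior by adding `MultiSubnetFailover=True`.
    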

  • My wife and I share one iTunes account but have separate Apple IDs for our devices - prior to 8.1 we shared storage on iCloud; now we can't, and when I try to buy more storage on her phone, instead of accessing the shared iTunes account it tries to sign

    My wife and I share one iTunes account but have separate Apple IDs for our devices. Prior to 8.1 we shared storage on iCloud; now we can't, and when I try to buy more storage on her phone, instead of accessing the shared iTunes account it tries to sign in to iTunes using her Apple ID. I checked the iTunes ID and password on both devices. Can anyone help?

    Have a look here...
    http://macmost.com/setting-up-multiple-ios-devices-for-messages-and-facetime.html

  • In the old Mobile me storage I had shared storage for my family and all of our devices. How do I breakup the 55GB across my apple accounts now that iCloud is by device by user?

    In the old MobileMe storage I had shared storage for my family and all of our devices. How do I break up the 55GB across my Apple accounts now that iCloud is per device, per user? Anyone else have this issue? I also need to sort out the storage from my work computer, where I do not have Safari and cannot download iCloud to my desktop.

    That storage moves to the master account. Since there are no shared accounts like that in iCloud, the people in your family will each get the complimentary 5GB from Apple, and they will let you know if they need any more. You will not be able to manage storage from a Windows desktop.
