Clustering without shared storage

Is there a way to cluster two Sun Fire 280R servers without shared storage?

I'm not so sure about the argument there... The 'Sun Cluster Overview for Solaris OS' document (available on docs.sun.com) clearly states on page 9...
'A cluster is two or more systems, or nodes, that work together as a single, continuously available system ...'
Thus you can have a two-node cluster. To do so, however, you would NEED shared storage to configure the Quorum device. This device is, essentially, the third vote. In a split-brain situation, where the interconnects have failed and both sides of the cluster think they are the only active node, the Quorum device is used to determine which node stays in the cluster. Historically this was done by all nodes racing to place a SCSI reservation on the nominated Quorum device. The node that lost the race would panic, instigated by the failfast driver, to ensure data integrity. How it is actually done now I'm not quite sure, but there is still a race for quorum by all nodes (p. 22, Sun Cluster Overview). Thus the Quorum device is required for a two-node cluster, and with it the cluster will not fail completely in the event of a single node failure.
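For reference, a minimal sketch of how a quorum device is added and checked on Sun Cluster 3.x; the device name d20 is just an example, use a DID device that both nodes can see:

    # List the DID devices known to the cluster (pick one on shared storage)
    scdidadm -L

    # Nominate that shared DID device as the quorum device
    scconf -a -q globaldev=d20

    # Verify quorum votes and device status
    scstat -q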
Hope this helps
Glennog

Similar Messages

  • How can I use all vCenter features such as HA and DRS without shared storage

    Hi,
    I have 2 ESXi hosts (DL380 G8), but I don't have any shared storage or SAN. I want to use vCenter features like HA and DRS. To use these features, do I have to run VSAN?
    I read that to use VSAN we have to buy AT LEAST ONE SSD disk. Is this true?
    Is an SSD disk necessary?
    At least one SSD disk for each host?
    Can I use all of the vCenter features with VSAN?
    Please, someone help me.

    Hi,
    For VSAN you need at LEAST 3 hosts (4 recommended).
    Each host must have 1 SSD for the read cache/write buffer.
    Do you have any storage at all? How about running a NAS such as "OpenFiler" or "FreeNAS"? That way you can present shared storage from it (I'd use this in a lab only). These NAS OSes also run as virtual machines, so you can turn any disk into shared storage.
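    Once the NAS is up, its NFS export can be mounted as a datastore on each host. A minimal sketch; the host name, export path, and datastore name are made-up examples:

        # Mount an NFS export from the NAS as a shared datastore on each ESXi host
        esxcli storage nfs add -H freenas.lab.local -s /mnt/tank/vmds -v nfs-shared

        # Confirm the datastore is visible
        esxcli storage nfs list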

  • Server Pool WITHOUT shared storage

    The documentation defines a Server Pool as:
          "Logically an autonomous region that contains one or more physical Oracle VM Servers."
    Therefore, should it be possible to add multiple servers (physically separate VM servers) to the same Server Pool even though they are NOT using shared storage? When I tried to add the second VM Server to the Server Pool I received the following error:
    During adding servers ([vmoracle2]) to server pool (VM_Server_Pool), Cluster setup failed:
    (OVM-1011 OVM Manager communication with vmoracle1 for operation HA Setup for Oracle VM
    Agent 2.2.0 failed: <Exception: SR '/dev/sda3' not supported: type 'ocfs2.local' not in
    ['nfs', 'ocfs2.cluster']> )
    Thanks.

    Nothing is as easy as it seems when it comes to Oracle VM.
    When trying to create a new Server Pool to accommodate my second VM Server, I received the following error:
    2010-03-08 18:18:24.575 NOTIFICATION Getting agent version for agent:vmoracle2 ...
    2010-03-08 18:18:24.752 NOTIFICATION The agent version is 2.3-19
    2010-03-08 18:18:24.755 NOTIFICATION Checking agent vmoracle2 is active or not?
    2010-03-08 18:18:24.916 NOTIFICATION [Server Pool Management][Server][vmoracle2]:Check agent (vmoracle2) connectivity.
    2010-03-08 18:18:30.304 NOTIFICATION entering into assign vs action...
    2010-03-08 18:18:30.311 NOTIFICATION Getting agent version for agent:vmoracle2 ...
    2010-03-08 18:18:30.482 NOTIFICATION The agent version is 2.3-19
    2010-03-08 18:18:30.483 NOTIFICATION Checking agent vmoracle2 is active or not?
    2010-03-08 18:18:30.638 NOTIFICATION [Server Pool Management][Server][vmoracle2]:Check agent (vmoracle2) connectivity.
    2010-03-08 18:18:45.236 NOTIFICATION Getting agent version for agent:vmoracle2 ...
    2010-03-08 18:18:45.410 NOTIFICATION The agent version is 2.3-19
    2010-03-08 18:18:45.434 NOTIFICATION master server is:vmoracle2
    2010-03-08 18:18:45.435 NOTIFICATION Start to check cluster for server pool
    2010-03-08 18:18:45.581 WARNING failed:<Exception: Cluster root not found.>
    StackTrace:
    File "/opt/ovs-agent-2.3/OVSSiteCluster.py", line 535, in cluster_precheck
    clusterprecheck(single_node, ha_enable)
    File "/opt/ovs-agent-2.3/OVSSiteCluster.py", line 515, in clusterprecheck
    if not cluster_root_sr_uuid: raise Exception("Cluster root not found.")
    2010-03-08 18:18:45.582 NOTIFICATION Failed check cluster for server pool
    2010-03-08 18:18:45.583 ERROR [Server Pool Management][Server Pool][VMORACLE2_Server_Pool]:Check prerequisites to create server pool (VMORACLE2_Server_Pool) failed: (OVM-1011 OVM Manager communication with vmoracle2 for operation Pre-check cluster root for Server Pool failed:
    <Exception: Cluster root not found.>
    )
    2010-03-08 18:18:45.607 NOTIFICATION Exception Message:OVM-1011 OVM Manager communication with vmoracle2 for operation Pre-check cluster root for Server Pool failed:
    <Exception: Cluster root not found.>
    The "*Test Connection*" succeeded just fine prior to clicking NEXT on the "Create Server Pool" page.
    Any suggestions?
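    In OVM 2.2/2.3, the "Cluster root not found" precheck failure generally means no storage repository has been flagged as the cluster root on the agent. A hedged sketch using the agent's repos.py utility (the option letters here are from memory, so verify with --help on your agent version):

        # List the storage repositories the agent knows about (shows UUIDs)
        /opt/ovs-agent-2.3/utils/repos.py -l

        # Register a device or NFS export as a repository
        /opt/ovs-agent-2.3/utils/repos.py -n /dev/sda3

        # Mark a repository (by UUID from -l) as the cluster root
        /opt/ovs-agent-2.3/utils/repos.py -r <repository-uuid>

    Note that for a multi-server pool the repository must live on shared storage (NFS or clustered OCFS2), which is exactly what the earlier "type 'ocfs2.local' not in ['nfs', 'ocfs2.cluster']" error is complaining about.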

  • Is it possible to install Oracle RAC without shared storage

    Dear All,
    I would like to seek for your advice.
    I got two different servers; we call them node 1 and node 2, with two different instance names.
    Node 1 -> instance name as "ORCL1"
    Node 2 -> instance name as "ORCL2"
    For the system we need Oracle RAC active-active cluster mode. Our objective is to have 2 replicated databases; in other words, we need 2 instances of the same database automatically replicated for 100% uptime to the Application server. We have 2 separate database machines and 2 application server machines. We need our application server to connect to any of the databases at any point in time and have consistent data on both database machines. We only need the database to be in cluster mode; we won't need the OS to be in a cluster. There is no shared storage in this case.
    Can this be done? Please advise.

    You should review RAC concepts, and the meaning of "instance" and "database".
    > For the system we need Oracle RAC active-active cluster mode.
    RAC = a single database with multiple instances, all accessing the same shared storage; no replication is involved.
    > Our objective is to have 2 replicated databases, in other words we need 2 instances of the same database automatically replicated for 100% uptime to the Application server.
    What you describe here is multiple databases with multiple instances, replicated between each other.
    > There is no shared storage in this case.
    No shared storage = no RAC.
    You will have two separate databases synchronizing continuously.
    You can use, for example, Streams / Advanced Replication (with a multi-master configuration).
    If you don't insist on an active-active configuration, you can also use Data Guard to build a standby database.
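    As a rough illustration of the Data Guard suggestion, a minimal sketch of the primary-side redo transport setting; the service name and DB_UNIQUE_NAME (orcl2) are placeholders, not values from this thread:

        -- On the primary: ship redo to the standby database asynchronously
        ALTER SYSTEM SET log_archive_dest_2 =
          'SERVICE=orcl2 ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=orcl2'
          SCOPE=BOTH;

        -- Enable the destination
        ALTER SYSTEM SET log_archive_dest_state_2 = ENABLE SCOPE=BOTH;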

  • File Server Failover Cluster without shared disks

    I have two servers that I wish to cluster as my Hyper-V hosts, and also two file servers, each with 10 x 4 TB SATA disks. Everything I have read about implementing high availability at the storage level involves clustering the file servers (e.g. SOFS), which requires external shared storage that the servers in the cluster can access directly. I do not have external storage and do not have the budget for it.
    Is it possible to implement some form of HA with Windows Server 2012 R2 file servers without shared storage? For example, is it possible to cluster the servers and have data on one server mirrored in real time to the other server, such that if one server goes down, the other server will take over processing storage requests using the mirrored data?
    I intend to use the storage to host VMs for a Hyper-V failover cluster and a SQL Server cluster. They will access the shares on the file server through SMB.
    Each file server also has a 144 GB SSD; how can I use it to improve performance?

    There are two ways for you to go:
    1) Build a cluster w/o shared storage using Microsoft's upcoming version of Windows (yes, they finally have that feature and tons of other cool stuff). We've recently built both a Scale-Out File Server serving a Hyper-V cluster and a standard general-purpose File Server cluster with this version. I'll blog the edited content next week (you can drop me a message to get drafts right now), or you can use Dave's blog; he was the first one I know of who built it and posted about it, see:
    Windows Server Technical Preview (Storage Replica)
    http://clusteringformeremortals.com
    The feature you should be interested in is Storage Replica. The official guide is here:
    Storage Replica Guide
    http://blogs.technet.com/b/filecab/archive/2014/10/07/storage-replica-guide-released-for-windows-server-technical-preview.aspx
    Just be aware: the feature is new and the build is a preview (not even a beta), so failover does not happen transparently (even with the CA feature of SMB 3.0 enabled). However, I think tuning timeouts and improving I/O performance will fix that. SoFS failover is transparent even right away.
    2) If you cannot wait 9-12 months from now (hoping MSFT is not going to delay their release), or you're not happy with the very basic functionality MSFT has put there (active-passive design, no RAM cache, a requirement for separate storage, system/boot and dedicated log disks where an SSD is assumed), you can get advanced stuff with third-party software. It will basically "mirror" some part of your storage (it can even be a directly-accessed file on your only system/boot disk) between hypervisor or plain Windows hosts, creating a fault-tolerant and distributed SAN volume with optimal SMB3/NFS shares.
    For more details see:
    StarWind Virtual SAN
    http://www.starwindsoftware.com/starwind-virtual-san/
    There are other guys who do similar things, but as you want a file server (no virtualization?), most of them, being Linux/FreeBSD/Solaris-based and VM-running, are out, and you need to check for native Windows implementations. Look at SteelEye DataKeeper (that's Dave, who blogged about the Storage Replica file server) and DataCore.
    Good luck :)
    StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts.
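    For reference, a minimal sketch of option 1 using the Storage Replica PowerShell cmdlets from the Technical Preview; the computer, replication-group, and volume names are placeholders:

        # Create a server-to-server replication partnership (run from the source)
        New-SRPartnership -SourceComputerName "fs01" -SourceRGName "rg01" `
            -SourceVolumeName "D:" -SourceLogVolumeName "L:" `
            -DestinationComputerName "fs02" -DestinationRGName "rg02" `
            -DestinationVolumeName "D:" -DestinationLogVolumeName "L:"

        # Check the replication groups and the partnership state
        Get-SRGroup
        Get-SRPartnership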

  • DFSr supported cluster configurations - replication between shared storage

    I have a very specific configuration for DFSr that appears to be suffering severe performance issues when hosted on a cluster, as part of a DFS replication group.
    My configuration:
    3 Physical machines (blades) within a physical quadrant.
    3 Physical machines (blades) hosted within a separate physical quadrant
    Both quadrants are extremely well connected, local, 10GBit/s fibre.
    There is local storage in each quadrant, no storage replication takes place.
    The 3 machines in the first quadrant are MS clustered with shared storage LUNs on a 3PAR filer.
    The 3 machines in the second quadrant are also clustered with shared storage, but on a separate 3PAR device.
    8 shared LUNs are presented to the cluster in the first quadrant, and an identical storage layout is connected in the second quadrant. Each LUN has an associated HAFS application associated with it which can fail-over onto any machine in the local cluster.
    DFS replication groups have been set up for each LUN, and data is replicated from an "Active" cluster node entry point to a "Passive" cluster node that provides no entry point to the data via DFSn and holds a read-only copy on its shared cluster storage.
    For the sake of argument, assume that all HAFS application instances in the first quadrant are "Active" in a read/write configuration, and all "Passive" instances of the HAFS applications in the other quadrants are Read-Only.
    This guide: http://blogs.technet.com/b/filecab/archive/2009/06/29/deploying-dfs-replication-on-a-windows-failover-cluster-part-i.aspx defines
    how to add a clustered service to a replication group. It clearly shows using "Shared storage" for the cluster, which is common sense otherwise there effectively is no application fail-over possible and removes the entire point of using a resilient
    cluster.
    This article: http://technet.microsoft.com/en-us/library/cc773238(v=ws.10).aspx#BKMK_061 defines the following:
    DFS Replication in Windows Server 2012 and Windows Server 2008 R2 includes the ability to add a failover cluster
    as a member of a replication group. The DFS Replication service on versions of Windows prior to Windows Server 2008 R2
    is not designed to coordinate with a failover cluster, and the service will not fail over to another node.
    It then goes on to state, quite incredibly: DFS Replication does not support replicating files on Cluster Shared Volumes.
    Stating quite simply that DFSr does not support Cluster Shared Volumes makes absolutely no sense at all after stating that clusters are supported in replication groups, and a TechNet guide is provided to set up and configure exactly this. What possible use is a clustered HAFS solution that has no shared storage between the clustered nodes? None at all.
    My question:  I need some clarification, is the text meant to read "between" Clustered
    Shared Volumes?
    The storage configuration must be shared in order to form a clustered service in the first place. What we are seeing from experience is a serious degradation of performance when attempting to replicate / write data between two clusters running a HAFS configuration in a DFS replication group.
    If, for instance, as a test, local / logical storage is mounted to a physical machine, the performance of a DFS replication group between the unshared, logical storage on the physical nodes approaches 15k small files per minute on initial write, and is even higher for file amendments. When replicating between two nodes in a cluster, with shared clustered storage, the solution manages a weak 2,500 files per minute on initial write and only 260 files per minute when attempting to update data / amend files.
    By testing various configurations we have effectively ruled out the SAN, the storage, drivers, firmware, DFSr configuration, replication group configuration - the only factor left that makes any difference is replicating from shared clustered storage, to another
    shared clustered storage LUN.
    So in summary:
    Logical Volume ---> Logical Volume = Fast
    Logical Volume ---> Clustered Shared Volume = ??
    Clustered Shared Volume ---> Clustered Shared Volume = Pitifully slow
    Can anyone explain why this might be?
    The guidance in the article is in clear conflict with all other evidence provided around DFSr and clustering; however, it may point towards why we are seeing a real issue with replication performance.
    Many thanks for your time and any help/replies that may be received.
    Paul

    Hello Shaon Shan,
    I am also seeing the same scenario at one of my customer sites.
    We have two file servers running on Hyper-V 2012 R2 as guest VMs using Cluster Shared Volumes; even the data partition drive is part of a CSV.
    It's really confusing whether DFS replication on CSVs is supported or not, and what the consequences of using it would be.
    To my knowledge, we have some customers who have been using Hyper-V 2008 R2 with DFS configured and running fine on CSVs for more than 4 years without any issue.
    I would appreciate it if you could elaborate and explain in detail the limitations of using CSVs.
    Thanks in advance,
    Abul

  • Choice of shared storage for Oracle VM clustering feature

    Hi,
    I would like to experiment with the Oracle VM clustering feature across multiple OVM servers. One requirement is shared storage, which can be provided by an iSCSI/FC SAN or by NFS. These types of external storage are usually very expensive. For testing purposes, what other options for shared storage can be used? Can someone share their experience?

    You don't need to purchase an expensive SAN storage array for this. A regular PC running Linux or Solaris will do just fine to act as an iSCSI target or to provide NFS shares via TCP/IP. Googling for "linux iscsi target howto" reveals a number of hits like this one: "RHEL5 iSCSI Target/Initiator" - http://blog.hamzahkhan.com/?p=55
    For Solaris, this book might be useful: "Configuring Oracle Solaris iSCSI Targets and Initiators (Tasks)" - http://download.oracle.com/docs/cd/E18752_01/html/817-5093/fmvcd.html
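    As a quick illustration of the Linux route, a hedged sketch using the scsi-target-utils (tgt) framework from that era; the IQN, backing file, and size are made-up examples:

        # Create a 10 GB backing file to export as a LUN
        dd if=/dev/zero of=/srv/iscsi-lun0.img bs=1M count=10240

        # Define a target and attach the backing store (tgtd must be running)
        tgtadm --lld iscsi --op new --mode target --tid 1 \
            -T iqn.2010-03.local.lab:storage.lun0
        tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 \
            -b /srv/iscsi-lun0.img

        # Allow any initiator to connect (acceptable for a lab only)
        tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL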

  • Find shared storage in clustered nodes

    Hi,
    How can I check shared storage on clustered nodes?
    OS – Solaris
    Regards,
    M@rk....

    I've just discovered that one of the SCSI cards was faulty which explains why I couldn't see all the disks from one of the nodes.
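    For anyone else looking into this, on a Sun Cluster node the shared (multi-ported) devices can be listed from either node; a minimal sketch, with output details varying by cluster version:

        # List DID device mappings; devices with paths from more than one node
        # are on shared storage
        scdidadm -L

        # Show overall cluster, node, and device-group status
        scstat

        # Plain Solaris view of the disks this node can see
        format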

  • Backup Exec 9.2 SSO (shared storage option) SCSI LTO

    Greetings, all...
    I have a 2-node cluster setup at a particular client. Backup Exec is licensed for SSO, and unfortunately, while I have the HP Ultrium 960 in the middle of a shared SCSI bus between the two servers, because it's not on the SAN, Backup Exec apparently refuses to recognize it as a valid shared storage device.
    I was wondering if anyone has been able to get around this in Backup Exec, as the drive is indeed shared (can be seen) by both servers. When using a clustered setup, but without SSO, it is difficult to keep the media management in sync, as each node is given its own subdirectory instead of sharing the media management db.
    TIA

    Rachelsdad,
    It appears that in the past few days you have not received a response to your
    posting. That concerns us, and has triggered this automated reply.
    Has your problem been resolved? If not, you might try one of the following options:
    - Visit http://support.novell.com and search the knowledgebase and/or check all
    the other self support options and support programs available.
    - You could also try posting your message again. Make sure it is posted in the
    correct newsgroup. (http://forums.novell.com)
    Be sure to read the forum FAQ about what to expect in the way of responses:
    http://forums.novell.com/faq.php
    If this is a reply to a duplicate posting, please ignore and accept our apologies
    and rest assured we will issue a stern reprimand to our posting bot.
    Good luck!
    Your Novell Product Support Forums Team
    http://support.novell.com/forums/

  • Disk replication for shared storage in WebLogic Server

    Hi,
    Why do we need disk replication in WebLogic Server for shared storage systems? What is the advantage of it, and how can disk replication be achieved in WebLogic for the shared storage that contains the common configuration and software used by a pool of client machines? Please clarify.
    Thanks.

    Hi,
    I am not a middleware expert. However, ACFS (Oracle Cloud File System) is a cluster filesystem which also has replication functionality:
    http://www.oracle.com/technetwork/database/index-100339.html
    Maybe you will also find the information you need on the MAA website: www.oracle.com/goto/maa
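    As a rough pointer, ACFS replication is initialized with the acfsutil tool. A heavily hedged sketch in the 11.2 style; the user, host, and mount point names are placeholders, and the exact options differ between releases, so check the documentation for your version:

        # On the standby site: prepare the standby file system to receive changes
        acfsutil repl init standby -p repladmin@primary-host /u01/acfsmounts/repl_data

        # On the primary site: start replicating to the standby
        acfsutil repl init primary -s repladmin@standby-host \
            -m /u01/acfsmounts/repl_data /u01/acfsmounts/repl_data

        # Monitor replication progress
        acfsutil repl info -c /u01/acfsmounts/repl_data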
    Regards
    Sebastian

  • Access iphoto 08 file on shared storage device from multiple machines

    I recently installed iLife 08 on both an iMac and a MacBook. Previously (iPhoto 06), both devices accessed the iPhoto library on a shared storage device without any problems. After the upgrade, my iMac is able to view the library, but my MacBook (the second machine to be upgraded) no longer has access. 'Sharing' is too slow over the wireless network and doesn't represent a reasonable option.
    Is anyone else experiencing this issue? Any suggestions?

    Actually, neither repairing permissions nor changing them with Get Info worked for me. What did work was deleting the empty iPhoto Library in the home folder of the user who couldn't access the shared library, and putting an alias of the shared library in that user's Pictures folder. Everything then worked as it did prior to upgrading. Thanks.

  • Accessing blobs in private container without Shared Access Signature key

    Is there any way to access blobs in a private blob container without a Shared Access Signature (SAS) key? I mean any user/role-based security or domain-level security, i.e. only our domain should be able to access blobs in the private container, etc.
    Actually, I don't want to append a SAS key to each blob URL to access it; I want my container to be private and also to access each blob in that container without a SAS key.
    Is any way currently available or planned in a future release?

    Hi Yazeem,
    > That main page loads successfully but the js, css, xml files which this page accesses are unable to load because the SAS key is not appended to their URLs automatically.
    If the main page is served by an HTTP handler and the js, css, and xml files are linked using relative addresses, these files will be served by the HTTP handler too. For example, if the HTTP handler serves a page at the address
    http://xxx.cloudapp.net/blobproxy/index.html and the page links to a script file using the tag
    <script src="myscript.js"></script>, the browser will actually use the address
    http://xxx.cloudapp.net/blobproxy/myscript.js to access the script file. So the solution is to create an HTTP handler to serve all requests to the address
    http://xxx.cloudapp.net/blobproxy/*.
    For test purpose, I made this sample. Please add a class file BlobProxy.cs to your web role project:
    using System;
    using System.Web;
    using Microsoft.WindowsAzure;
    using Microsoft.WindowsAzure.StorageClient;

    namespace WebApplication2
    {
        public class BlobProxy : IHttpHandler
        {
            // Please replace this with your blob container name.
            const string blobContainerName = "files";

            public bool IsReusable
            {
                get { return false; }
            }

            public void ProcessRequest(HttpContext context)
            {
                // Get the file name.
                string fileName = context.Request.Path.Replace("/blobproxy/", string.Empty);

                // Get the blob from blob storage.
                var storageAccount = CloudStorageAccount.DevelopmentStorageAccount;
                var blobStorage = storageAccount.CreateCloudBlobClient();
                string blobAddress = blobContainerName + "/" + fileName;
                CloudBlob blob = blobStorage.GetBlobReference(blobAddress);

                // Read the blob content into the response.
                context.Response.Clear();
                try
                {
                    blob.FetchAttributes();
                    context.Response.ContentType = blob.Properties.ContentType;
                    blob.DownloadToStream(context.Response.OutputStream);
                }
                catch (Exception ex)
                {
                    context.Response.Write(ex.ToString());
                }
                context.Response.End();
            }
        }
    }
    Then please add this HTTP handler to the web.config file:
    <configuration>
      <system.webServer>
        <handlers>
          <add name="BlobProxy" verb="*" path="/blobproxy/*" type="WebApplication2.BlobProxy"/>
        </handlers>
      </system.webServer>
    </configuration>
    Before running the project, please replace blobContainerName with your own blob container that contains both the HTML page and its related files. Then start debugging the Azure service project, and you can use the following address to access the page:
    http://127.0.0.1:[port number]/blobproxy/[page name]
    If the above sample does not work for you, please let me know.
    Thanks.
    Wengchao Zeng
    Please mark the replies as answers if they help or unmark if not.
    If you have any feedback about my replies, please contact
    [email protected].
    Microsoft One Code Framework

  • RAC and Shared Storage

    Hi,
    I am trying to build a two-node RAC on Windows 2003. I have two nodes, RAC1 and RAC2. I have completed the network configuration and now intend to implement shared storage. On RAC1, I have two hard disks.
    Is it possible to create 5 logical partitions on one disk, without formatting and without any drive letter, and consider that shared storage?
    Please guide in this regard.
    regards,

    Hi, you need shared storage (for the OCR, the voting disk, and the database files), so a DAS disk on one node will not work.
    Regards

  • RAC shared storage

    Hi,
    I am trying to build a two-node RAC on Windows 2003. I have two nodes, RAC1 and RAC2. I have completed the network configuration and now intend to implement shared storage. On RAC1, I have two hard disks.
    Is it possible to create 5 logical partitions on one disk, without formatting and without any drive letter, and consider that shared storage?
    Please guide in this regard.
    regards,

    Is the following shared device configuration fine for 10g RAC on Windows 2003?
    • One SCSI drive.
    • Two PCI network adapters on each node in the cluster.
    • Storage cables to attach the shared storage device to all computers.
    Regards.

  • WRT610N Shared Storage

    I recently purchased a WRT610N and have been having some problems setting up the USB shared storage feature. I have a 1.5 TB Seagate drive that I have created two partitions on (I had read elsewhere in the forums that the WRT610N only handles partitions/drives up to 1 TB). Both partitions are NTFS, the first one being 976,561 MB and the second one being 420,700 MB. Both drives show up in the "Disk" section of the admin console, and I can create/define a share for the larger of the two partitions without any problems.
    The first of my problems comes when I try to create/define a share for the smaller partition. I can create a share, but the admin console does not save the access privileges that I assign to it. Despite setting them up in the admin console, they don't show up when I go back and look; in both the detail and summary views the access rights show as blank. I do not have this issue with the larger partition, where I can add and later view groups in the Access section.
    The second problem comes when I try to attach to the larger share from a network client. I can look at the shares if I use Start - Run and type \\192.168.1.1. If I enter my admin user ID and password, I can see the new share on the WRT610N. When I try to double-click on it, I am then prompted again for a username and password. When I try to re-enter the admin user ID and password, the logon comes right back to me with "WRT610n\admin" populated in the user ID field. From there it won't accept the admin password. There are no error messages.
    Help with either problem would be appreciated.

    When you select your storage partition and open it, and it asks you for a username and password, that username and password are for your storage drive; you may have set a password on it at some point.
    Log in to your router's GUI, click on the Storage tab, and below you will find the sub-tab "Administration"; click on it. If you wish, you can modify the "admin" rights, such as changing the password, or you can create your own user and password. Then whenever you log in to your storage partition and it asks for a username and password, you can enter those credentials and click OK. This way you will be able to access your storage drive.
