Cluster requirements

I just need the minimal and cheapest configuration to learn Veritas Volume Manager and Sun Cluster features. I read in the Sun Cluster Admin Guide that the minimum requirement is two nodes with 2 CPUs each, and I don't know if I can set up a cluster with two Ultra 5/10s (each one has only 3 PCI slots and 1 CPU).
I have two Ultra 5s running Solaris 9 with 512 MB RAM and a PCI QLogic SCSI controller (ISP1040B) with RJ45, connected to a SCSI DAT. The driver supports fast/wide SCSI and Fast-20 SCSI devices. I also don't know whether I need RAID Manager to set up disk storage arrays before using Veritas Volume Manager, or what kind of storage array I need (A1000, D1000, T3); I guess the cheapest is the D1000, but I don't know if it has a RAID controller. I guess I need Veritas (Sun StorEdge) Volume Manager 3.0 or higher, Sun Cluster 3.0 or higher, 2 disk storage arrays, SCSI controllers with integrated Ethernet preferred (since an Ultra 5 only has 3 PCI slots and I need at least 2 NICs to set up a cluster), and a Terminal Concentrator (or maybe I can use an Ultra 5 instead of a Terminal Concentrator).
Could anyone help me to set up a little cluster lab?

First of all, to answer your question w.r.t. the software:
SunCluster is a Sun product; the current version is 3.1. You can download it from:
http://www.sun.com/download
Click on "Systems Administration -> Clustering",
then click on "Availability Suite".
There you'll find what you want.
Veritas Volume Manager (aka VxVM) is a product from Veritas. As such, Sun cannot offer it; you need to go to Veritas to check if there are any "evaluation copies" available. As I remember, it needs real license keys, so it might be difficult...
(There once was, a long time ago, a Sun-rebranded version of VxVM, but that no longer exists!)
In a two-node cluster you need a "quorum", which is shared storage. Shared means it needs to be connected to BOTH nodes simultaneously.
What do you mean by "private disk"?
And SunCluster also "works" if you only have one shared disk (as mentioned above: technically it works, but it is "not supported").
SunCluster places a very high priority on data integrity and security. Therefore, it requires some kind of mirroring or RAID on the shared storage.
That's why I said "not supported" if you only have one physical disk for the shared storage: it is not redundant. But it works!
As I said, we set up clusters in labs with only a single old MultiPack disk as shared storage...
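Just as a rough sketch of what that looks like on the command line (Sun Cluster 3.0/3.1 syntax; "d4" below is a made-up DID device name, use whatever your shared disk actually shows up as):
# list the DID devices and find the disk that is visible from BOTH nodes
scdidadm -L
# register that disk as the quorum device (d4 is only a placeholder)
scconf -a -q globaldev=d4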
I'm not quite sure why you mention the D1000 (which is a plain old SCSI disk subsystem). Info about the D1000 is available here:
http://sunsolve.sun.com/handbook_private/Systems/D1000/D1000.html
(Assuming you have a SunSolve account)
Info on the ISP1040B is available here (as part of a systemboard in the larger servers):
http://sunsolve.sun.com/handbook_private/Devices/I_O/IO_PCI_IO_Board.html
(Assuming, you have a sunsolve account)
No, you do not need a RAID controller, that RAID system was the A1000.
Yes, we have 2 single CPU Ultra 1's with a single SWIFT card each connected to a single Multipack (not even a D1000!).
So, it's three "pieces", 2 U1's, 1 Multipack.
Yes, that works!
HTH,
Matthias

Similar Messages

  • Does Sun Cluster require external storage, or does it also work without it?

    Hi,
    I have to implement Sun clustering on V880 servers, but we have only purchased a single external storage box (StorEdge 3510) and we need to prepare 3 clusters of two nodes each (6 servers in total).
    So my question is whether it is possible to create a cluster without the external storage device.
    Regards
    Khushvinder

    Khush,
    If you refer back to my earlier posting, it's confirmed that you need external storage for a 2-node cluster. The external storage acts as the quorum device.
    Of course you can get away without external storage provided you have support with Sun. Sun has an internal hack to get away without multihost storage; again, it depends whether they are willing to share it. (My guess is they wouldn't... they can sell more storage instead.)
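    For what it's worth, once a quorum device is configured you can check it and the vote counts like this (Sun Cluster 3.x command; just a sketch, newer releases also have the clquorum command):
    # show quorum devices, possible votes and current votes
    scstat -q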

  • Regarding shared file system requirement in endeca server cluster

    Hi,
    Our solution involves running a single data domain in an Endeca Server cluster.
    As per the documentation, an Endeca Server cluster requires a shared file system for keeping the index file for follower nodes to read.
    My questions are:
    Can I run the Endeca cluster without a shared file system by having the index file on each node of the Endeca Server cluster?
    Can the dependency on a shared file system be a single point of failure, and if yes, how can it be avoided?
    I really appreciate your feedback on these questions.
    thanks,
    rp

    Hi rp,
    The requirement for a shared file system in the Endeca Server cluster is a must. As this diagram shows, the shared file system maintains the index and also maintains the state of the Cluster Coordinator, which ensures cluster services (automatic leader election, propagation of the latest index version to all nodes in the data domain). A dependency on a shared file system can be a single point of failure and requires you to run backups; this is a standard IT approach, that is, it is not specific to the Endeca Server cluster in particular.
    See the section on Cluster Behavior for info on how the shared file system is used (the topic "How updates are processed") and on how increased availability is achieved.
    HTH,
    Julia

  • Array of cluster to array of element - or - Cluster of arrays?

    Hi all,
    I have a large cluster (let's call it C_data) containing measured data, e.g. 10 temperatures, pressures, ... (Temp_1, ...).
    All these data are measured once per second. I now collect all data
    measured over a certain time in an array of the aforementioned cluster,
    that is
    an array of C_data. In order to display time series of data in graphs I
    need to extract arrays of elements from this array of C_data.
    In a text based programming language this could look like the following:
    Struct C_data {Temp_1, Temp_2, P_1.....}
    ar_C_data is an array of C_data
    now I want to do something like:
    array_of_Temp_1 = ar_C_data[*].Temp_1
    In some programming languages this works but I cannot unbundle_by_name the array of Temp_1 from ar_data in Labview.
    Since my cluster is large and may change in structure (for this reason I use a typedef) a generic solution would be the best.
    I know that I could: loop over all elements of ar_C_data, unbundle by
    name, index elements into arrays, and use these but this seems very
    inefficient if it is done every second on a large cluster (30 elements) with several thousand array elements....
    Olaf

    You can minimize the overhead of scanning through all elements and extracting if you pre-define the array and use "replace array subset". This avoids having to re-size the array, which is costly.
    Or you can keep a separate array in memory. When one cluster element is added, the corresponding element is added to the array too. This causes some memory overhead, but you're going to have that anyway if you generate them "on the fly".
    I don't see a way to do this other than either search through the array and pick the elements you need -or-
    keep a copy of the data in a form you can use.
    It's a common question of how to structure data to best suit two sometimes conflicting needs: efficiency and usability.
    What might be of interest is to swap the "Array" and "Cluster" order and have a single cluster C_data with an array for each required element. It might be a bit more difficult to use; it depends on your application.
    This way you have all arrays ready at all times, but generating a single cluster requires bundling the individual elements on the fly.
    Shane.
    Using LV 6.1 and 8.2.1 on W2k (SP4) and WXP (SP2)

  • Solaris 10 cluster:failover project or zone can not have same name?

    Oracle on Solaris 10 cluster: two-node Sun Cluster failover. The SA advised using different accounts (oracle01 for node01, oracle02 for node02) for the failover cluster. Why can't I create the same 'oracle' account on both nodes?
    For failover, can different projects or zones not have the same user or group account name?
    thanks.

    Hi Vangelis,
    Building a cluster requires some planning and understanding of the concepts.
    A good start would be reading some of the documents linked from this URL: http://docs.sun.com/app/docs/doc/819-2969/gcbkf?a=view
    Regards,
    Davy

  • Solaris 10 Cluster 3.2 with  2 zones in a failover scenario

    Hi
    Looking for the best way to set things up for the following scenario.
    I have 2 M5000 servers with internal storage and a 6140 array for shared storage.
    I need to create 2 zones on each in a failover scenario (active/standby):
    On Server1, 3 out of 4 CPUs for Oracle Database Server 11g and 1 out of 4 CPUs for Oracle Application Server.
    On Server2, 3 out of 4 CPUs for Oracle Application Server and 1 out of 4 CPUs for Oracle Database Server 11g.
    Database files will be placed on the shared storage. In case of failure of Server1, Oracle Database will fail over to Server2, and in case Server2 is down, Oracle Application Server will fail over to Server1.
    Would a zone cluster using clzonecluster be better? If yes, how can I achieve the difference in CPU power in case of failure?
    Where is it best to keep the zone root path: on the internal storage or on the shared storage?
    What about the swap space for both zones?
    Is it better to use exclusive IPs, or will shared be fine?
    Is it better to do a sparse zone installation for the zones or a whole-root install?
    What is the best way to achieve the CPU assignments needed, and how much should be left for the global zone?
    Thanks in advance
    vangelis

    Hi Vangelis,
    Building a cluster requires some planning and understanding of the concepts.
    A good start would be reading some of the documents linked from this URL: http://docs.sun.com/app/docs/doc/819-2969/gcbkf?a=view
    Regards,
    Davy
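    To give a rough idea only (Solaris Cluster 3.2 clzonecluster; every name, path and CPU count below is a made-up placeholder, not a recommendation), a zone cluster with a CPU cap could be configured along these lines:
    # interactive clzonecluster session on one cluster node
    clzonecluster configure zc-db
    clzc:zc-db> create
    clzc:zc-db> set zonepath=/zones/zc-db
    clzc:zc-db> add node
    clzc:zc-db:node> set physical-host=m5000-1
    clzc:zc-db:node> set hostname=zc-db-n1
    clzc:zc-db:node> end
    ... repeat "add node" for the second M5000 ...
    clzc:zc-db> add dedicated-cpu
    clzc:zc-db:dedicated-cpu> set ncpus=3
    clzc:zc-db:dedicated-cpu> end
    clzc:zc-db> commit
    clzc:zc-db> exit
    # then install and boot the zone cluster
    clzonecluster install zc-db
    clzonecluster boot zc-db
    As far as I know, a dedicated-cpu setting like this applies to the zone cluster as a whole, so an asymmetric split (3 CPUs on one box, 1 on the other) would still need per-node tuning; check the zone cluster chapter in the docs linked above.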

  • Solaris Cluster 3.3 on VMware ESX 4.1

    Hi there,
    I am trying to set up Solaris Cluster 3.3 on VMware ESX 4.1.
    My first question is: has anyone out there set up Solaris Cluster on VMware across boxes?
    My tools:
    Solaris 10 U9 x64
    Solaris Cluster 3.3
    Vmware ESX 4.1
    HP DL 380 G7
    HP P2000 Fibre Channel Storage
    When I try to set up the cluster, just next-next-next, it completes successfully. It reboots the second node first and then itself.
    After the second node comes up at the login screen, ping stops after 5 sec. Same on both nodes!
    I am trying to understand why it does that. I have tried every possibility to complete this job. I set up the quorum as an RDM from VMware, so Solaris has direct access to the quorum disk now.
    I am new to Solaris and I am getting the errors below. If someone would like to help me, it will be much appreciated!
    Please explain in more detail, I am a newbie in Solaris :) Thanks!
    I need help especially with the error: /proc fails to mount periodically during reboots.
    Here are the error messages. Is there anyone out there who has set up Solaris Cluster on ESX 4.1?
    * cluster check (ver 1.0)
    Report Date: 2011.02.28 at 16.04.46 EET
    2011.02.28 at 14.04.46 GMT
    Command run on host:
    39bc6e2d- sun1
    Checks run on nodes:
    sun1
    Unique Checks: 5
    ===========================================================================
    * Summary of Single Node Check Results for sun1
    ===========================================================================
    Checks Considered: 5
    Results by Status
    Violated : 0
    Insufficient Data : 0
    Execution Error : 0
    Unknown Status : 0
    Information Only : 0
    Not Applicable : 2
    Passed : 3
    Violations by Severity
    Critical : 0
    High : 0
    Moderate : 0
    Low : 0
    * Details for 2 Not Applicable Checks on sun1
    * Check ID: S6708606 ***
    * Severity: Moderate
    * Problem Statement: Multiple network interfaces on a single subnet have the same MAC address.
    * Applicability: Scan output of '/usr/sbin/ifconfig -a' for more than one interface with an 'ether' line. Check does not apply if zero or only one ether line.
    * Check ID: S6708496 ***
    * Severity: Moderate
    * Problem Statement: Cluster node (3.1 or later) OpenBoot Prom (OBP) has local-mac-address? variable set to 'false'.
    * Applicability: Applicable to SPARC architecture only.
    * Details for 3 Passed Checks on sun1
    * Check ID: S6708605 ***
    * Severity: Critical
    * Problem Statement: The /dev/rmt directory is missing.
    * Check ID: S6708638 ***
    * Severity: Moderate
    * Problem Statement: Node has insufficient physical memory.
    * Check ID: S6708642 ***
    * Severity: Critical
    * Problem Statement: /proc fails to mount periodically during reboots.
    ===========================================================================
    * End of Report 2011.02.28 at 16.04.46 EET
    ===========================================================================
    Note: Please ignore the memory error. I have 5 GB of memory installed and it says it requires a minimum of 1 GB! I think it is a bug!

    @TimRead
    Hi, thanks for reply,
    I have already followed the steps in your links, but no joy on this.
    What I noticed here is that the cluster software seems to be buggy, because I tried to install Cluster 3.3 on physical hardware and it gave me the exact same error messages! Interesting, isn't it?
    Please see errors below that I got from on top of VMware and also on Solaris Physical hardware installation:
    ERROR1:
    Comment: I have installed different amounts of memory each time. It keeps saying that silly error.
    problem_statement : *Node has insufficient physical memory.
    <analysis>5120 MB of memory is installed on this node.The current release of Solaris Cluster requires a minimum of 1024 MB of physical memory in each node. Additional memory required for various Data Services.</analysis>
    <recommendations>Add enough memory to this node to bring its physical memory up to the minimum required level.
    ERROR2
    Comment: Despite the rmt directory being there, I got the error below from cluster check (a possible workaround is sketched after this post).
    <problem_statement>The /dev/rmt directory is missing.
    <analysis>The /dev/rmt directory is missing on this Solaris Cluster node. The current implementation of scdidadm(1M) relies on the existence of /dev/rmt to successfully execute 'scdidadm -r'. The /dev/rmt directory is created by Solaris regardless of the existence of the actual underlying devices. The expectation is that the user will never delete this directory. During a reconfiguration reboot to add new disk devices, if /dev/rmt is missing scdidadm will not create the new devices and will exit with the following error: 'ERR in discover_paths : Cannot walk /dev/rmt' The absence of /dev/rmt might prevent a failover to this node and result in a cluster outage. See BugIDs 4368956 and 4783135 for more details.</analysis>
    ERROR3
    Comment: All NICs have different MAC addresses though, and I have also done what it suggests. No joy here as well!
    <problem_statement>Cluster node (3.1 or later) OpenBoot Prom (OBP) has local-mac-address? variable set to 'false'.
    <analysis>The local-mac-address? variable must be set to 'true.' Proper operation of the public networks depends on each interface having a different MAC address.</analysis>
    <recommendations>Change the local-mac-address? variable to true: 1) From the OBP (ok> prompt): ok> setenv local-mac-address? true ok> reset 2) Or as root: # /usr/sbin/eeprom local-mac-address?=true # init 0 ok> reset</recommendations>
    ERROR4
    Comment: No comment on this; I have done what it says, no joy...
    <problem_statement>/proc fails to mount periodically during reboots.
    <analysis>Something is trying to access /proc before it is normally mounted during the boot process. This can cause /proc not to mount. If /proc isn't mounted, some Solaris Cluster daemons might fail on startup, which can cause the node to panic. The following lines were found:</analysis>
    Thanks!
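    For the /dev/rmt complaint (ERROR2): if the directory really is absent at the time scdidadm runs, a minimal workaround (assuming nothing else on the node needs special tape-device handling) is simply to recreate the empty directory on each node and re-run the device discovery:
    # recreate the directory scdidadm expects; it can stay empty if there are no tape drives
    mkdir -p /dev/rmt
    chmod 755 /dev/rmt
    # re-run DID device discovery
    scdidadm -r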

  • AlwaysOn on DB Cluster

    Hi All,
    During our internal discussion we made a plan to configure AlwaysOn, but given our existing environment I have a few questions before taking this proposal to the client.
    Existing environment: SQL Server 2012 Active/Passive cluster
    Requirement: Configure AlwaysOn as DR for more than 100 databases in this SQL Server 2012 (default) instance of Active/Passive cluster.
    Question:
    Is it possible to install another (named) instance in this DB cluster (active/passive) and configure AlwaysOn between the default and named instances?
    or
    Is it preferable to configure another Windows server with a standalone SQL instance, add it as a node to the existing active/passive cluster, and configure AlwaysOn?
    Please suggest. 
    Regards,
    Kalyan

    Well, I think you want to cover all the databases located on the server, am I right? I think you would be better off staying with an FCI (cluster solution), which applies at the instance level. Much easier to maintain.
    Best Regards, Uri Dimant, SQL Server MVP

  • Can I modify cluster configuration on the fly

    If we have say 3 iAS instances in a cluster.
    Is it in any way possible to remove one of the iAS instances from the cluster while the cluster is still online? My understanding is that any change to the cluster requires restarting all instances in the cluster. Is that correct?

    The answer to your question depends on how you set up your cluster in the first place. If all your servers are "sync servers" then the answer is no: a restart of the cluster is required if you want to change or remove a node from the cluster. However, if the machine you wanted to remove was a "sync local", then you would be able to remove it from the cluster without a restart.
    (It probably won't be a sync local if you used iASAT to create your cluster.)

  • Sun Cluster and Interconnect IP ranges

    Can someone explain why Sun Cluster requires such large subnets for its interconnects?
    Yes, they use non-routable IPs, but there are cases where even these collide with corporate admin networks. On one cluster I had to use the Microsoft automatic IP network range (169.254.0.0/16) to avoid an IP conflict with the corporate networks.

    You don't have to stick to the default IPs or subnet. You can change to whatever IPs you need, whatever subnet mask you need, and even change the private names.
    You can do all this during install or even after install.
    Read the cluster install doc at docs.sun.com.
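    As a sketch only (Sun Cluster 3.2 or later command set; the address and netmask below are placeholders): after install, the private interconnect network can be changed with something like:
    # run on one node while all nodes are booted in non-cluster mode
    cluster set-netprops -p private_netaddr=172.16.0.0 -p private_netmask=255.255.240.0
    # afterwards, verify the private-network settings
    cluster show-netprops
    In older releases the equivalent settings are chosen through the scinstall dialogs at install time instead.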

  • My sysAdmin tells me he cannot cluster my java application

    Hi
    I have a small Java application which reads files from an (FTP) directory, validates them and writes them to disk again. This application is called from cron scripts on both servers, which run it every minute. Our hosting company says that the application needs to run on both cluster servers at the same time. This causes sync problems when reading the files: two applications try to access the same file at the same time.
    How can he make the second server start up the application when the first goes down?
    Our hosting company says that my application is not cluster-aware and that it's impossible to configure the server to start/stop my application.
    Can somebody please help me with this?
    thanks

    there are two ways to approach this (in sun cluster 3.x)
    that come to mind.
    the first is a quick hack that requires you to use a wrapper
    script for your cron job (ie, run this script with cron, and
    use this script to start your java prog).
    here's the gist of it:
    ----------------------------->8---------------------------------------------------
    #!/bin/sh
    # wrapper script for cron jobs on 2-node cluster
    # >2 node cluster requires more work

    # exit if cluster not active on this node
    thisState=`scha_cluster_get -O NODESTATE_LOCAL`
    if [ "$thisState" != "UP" ] ; then
        exit 0
    fi

    # find out the node ID of this system
    thisNode=`scha_cluster_get -O NODEID_LOCAL`

    # if this is not node 1, check to see if node 1 is active
    if [ "$thisNode" != 1 ] ; then
        node1Name=`scha_cluster_get -O NODENAME_NODEID 1`
        node1State=`scha_cluster_get -O NODESTATE_NODE $node1Name`
        # if node 1 is active, it will run the program, so we can quit
        if [ "$node1State" = "UP" ] ; then
            exit 0
        fi
    fi

    # at this point, we know this node is active, and is either node 1
    # or the remaining active node; in either case, run the program
    exec /path/to/my/prog
    ----------------------------->8---------------------------------------------------
    you could even make this a generic wrapper by doing
    "exec $@" at the end & supplying the normal program
    path & args as arguments to the wrapper script...
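    A tiny sketch of that variant (the quotes around $@ preserve arguments that contain spaces; the wrapper path in the crontab line is hypothetical):
    # last line of the wrapper: run whatever command line was passed in
    exec "$@"
    # example crontab entry calling the generic wrapper every minute
    * * * * * /usr/local/bin/cluster-cron-wrapper.sh /path/to/my/prog arg1 arg2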
    the 2nd solution is to essentially write your own "cron":
    in other words, a long-running process that simply sleeps
    in between trying to run your program (or rewrite your
    program to run forever & check the ftp dir occasionally).
    then you can use the standard tools to make the program
    highly available (scdsbuilder, etc)
    hth
    p

  • O-Cluster on 10.2

    I am trying to run the dmocdemo.sql file and I am having some problems calling the DBMS_DATA_MINING object.
    It seems it does not have the variables algo_ocluster and oclt_max_buffer (as an example). There could be more missing objects.
    I heard that O-Cluster did not work in 10.1 and that it was scheduled to be supported in 10.2.
    I also read that O-Cluster requires the "scoring functions". Does this mean the "scoring engine" needs to be enabled? Presently, it is not installed.
    Through my struggles, I found that the "scoring engine" cannot be installed/enabled on the same machine as the "data mining server". If this is true, is there a workaround?
    All assistance is greatly appreciated.
    Thanks in advance.
    Have a wonderful day/weekend,
    Andy

    "It seems it does not have the variables algo_ocluster and oclt_max_buffer (as an example)."
    If you ran dmocdemo.sql successfully, you should see the setting details on screen or in the log.
    Or, you can run the following query to see settings,
    SELECT setting_name, setting_value
    FROM TABLE(DBMS_DATA_MINING.GET_MODEL_SETTINGS('your_model_name'))
    ORDER BY setting_name;
    The value algo_ocluster should appear in the setting_value column where setting_name=ALGO_NAME, and oclt_max_buffer is one of the setting_names. If they are missing, perhaps you or someone else altered the code; most likely, the settings table was not provided or not populated properly. In order to create an O-Cluster model, you must provide a settings table with at least one setting: setting_name=ALGO_NAME and setting_value=ALGO_O_CLUSTER. If the settings table is missing or ALGO_NAME is not set to ALGO_O_CLUSTER, you will get a k-means clustering model. That might explain why you did not see those names or values.
    "I heard that the O-Cluster did not work for 10.1 and it was scheduled to be supported in 10.2."
    Before 10.2 you can use Oracle JDM to create an O-Cluster model. In 10.2, O-Cluster models are supported in both the Java and PL/SQL interfaces.
    "I also read the O-Cluster requires the "scoring functions". Does this mean the "scoring engine" needs to be enabled?"
    The scoring engine is not just for O-Cluster models. If you intend to use any Oracle Data Mining models for scoring data, you will need a scoring engine. Detailed answers can be found in Oracle Data Mining Concepts and the Data Mining Admin Guide.
    "Through my struggles, the "scoring engine" cannot be installed/enabled on the same machine as the "data mining server". If this is true, is there a work-around?"
    It depends on what you want to do. In general, you may need two separate instances: a mining lab for data preprocessing, model building, testing and analysis, and a scoring engine for applying the model to the real data, either scoring large amounts of data in batch or providing scores in real time for various applications.
    George

  • Cluster SQL 2012 on Windows 2008 R2

    Hi
    Presently, I have clustered SQL 2012 on Windows 2008 R2 (2 nodes), and I need to add a new server to the cluster.
    My question: is it possible to add a Windows Server 2012 node with SQL 2012?
    (2 nodes with Windows 2008 R2, 1 node with Windows 2012, and all nodes with SQL 2012)
    If yes, do you have a link that confirms this?
    Thanks

    No, this is not possible. Microsoft Windows clustering requires all the member nodes to have the same OS at the same version level. You cannot even use Windows Server 2008 on one node with Windows Server 2008 R2 on another node.
    Thanks and Regards Alankar Chakravorty MCITP Database Administrator SQL Server 2008 MCITP Database Administrator SQL Server 2005

  • Cyrus Murder Cluster

    Any advice on how to go about setting up 2 Leopard servers (in different locations, on different networks) as a cyrus murder cluster?
    Rusty

    Typically the cluster requires shared storage so both servers can read/write the same files.
    That's really, really hard to do over a WAN unless you're prepared to spend a lot of money.
    Most people end up using a master/slave setup and failover to the second site rather than trying to run them both live simultaneously.

  • Hyper-V 2012 High Availability using Windows Server 2012 File Server Storage

    Hi Guys,
    Need your expertise regarding Hyper-V high availability. We set up 2 Hyper-V 2012 hosts in our infra for our domain consolidation project. Unfortunately, we don't have the hardware storage that is said to be a requirement for creating a failover cluster for the Hyper-V hosts to implement HA. Here's the setup:
    Host1
    HP Proliant L380 G7
    Windows Server 2012 Std
    Hyper-V role, Failover Cluster Manager and File and Storage Services installed
    Host2
    Dell PowerEdge 2950
    Windows Server 2012 Std
    Hyper-V role, Failover Cluster Manager and File and Storage Services installed
    Storage
    Dell PowerEdge 6800
    Windows Server 2012 Std
    File and Storage Services installed
    I'm able to configure the new Shared Nothing Live Migration feature: I'm able to move VMs back and forth between my hosts without shared storage. But this is a planned and proactive approach. My concern is to have my Hyper-V hosts become highly available in the event of a system failure. If my Host1 dies, the VMs should move to Host2 and vice versa. In setting this up, I believe I need to enable failover clustering between my Hyper-V hosts, which I already did, but upon validation it says "No disks were found on which to perform cluster validation tests." Is it possible to cluster them using just a regular Windows file server? I've read about SMB 3.0 and I've configured it as well; I'm able to save VMs on my file server, but I don't think my Hyper-V hosts are highly available yet.
    Any feedback, suggestions or recommendations are highly appreciated. Thanks in advance!

    Your shared storage is a single point of failure in this scenario, so I would not consider the whole setup a production configuration... The setup is also both slow (all I/O travels down the wire to the storage server; running VMs from DAS is much faster) and expensive (a third server plus an extra Windows license). I would think twice about what you do and either deploy built-in VM replication technologies (Hyper-V Replica) and the applications' built-in clustering features that do not require shared storage (SQL Server Database Mirroring, for example; BTW, what workload do you run?), or use some third-party software to create fault-tolerant shared storage from DAS, or invest in physical shared storage hardware (an HA one, of course).
    Hi VR38DETT,
    Thanks for responding. The hosts will cater for a domain controller (one on each host), web filtering software (Websense), anti-virus (McAfee ePO), WSUS, and an audit server at the moment. Does Hyper-V Replica give "high availability" to the VMs or to the Hyper-V hosts? Also, is a cluster required in order to implement it? Haven't tried that, but it's worth a try.
