Move SoFS cluster storage

Hi,
I'm thinking about a scenario that is as follows: I have a SoFS cluster using iSCSI shared storage and clustered storage spaces. The underlying hardware providing the iSCSI CSVs is being replaced by a new system.
Can I just add new eligible iSCSI disks from the new hardware to extend the existing clustered pool, then remove the old iSCSI disks from the pool?
Ideally this would then trigger movement of the VDs onto the added disks, freeing up the disks to be removed. Then I should be able to remove the old drives from the cluster. Right?
This posting is provided "AS IS" with no warranties or guarantees and confers no rights

Be aware that using iSCSI with Clustered Storage Spaces is not supported; only SAS is currently supported. See:
Clustered Storage Spaces
http://technet.microsoft.com/en-us/library/jj822937.aspx
Disk bus type
The disk bus type must be SAS.
Note
We recommend dual-port SAS drives for redundancy.
Storage Spaces does not support iSCSI and Fibre Channel controllers.
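To check which bus type Storage Spaces actually reports for your disks, a quick sketch (run on one of the SoFS nodes):

    # List the bus type (SAS, iSCSI, ...) and poolability of each physical disk
    Get-PhysicalDisk | Select-Object FriendlyName, BusType, CanPool, Size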
Yes, you can replace disks with Storage Spaces without a problem. See:
Storage Spaces FAQ
http://social.technet.microsoft.com/wiki/contents/articles/11382.storage-spaces-frequently-asked-questions-faq.aspx#How_do_I_replace_a_physical_disk
Physical attachment interface does not matter.
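For reference, the disk swap itself can be scripted with the Storage Spaces cmdlets. A minimal sketch, assuming a clustered pool named "ClusterPool" and old disks named PhysicalDisk3/PhysicalDisk4 (all names here are placeholders, not from the thread):

    # 1. Add the new disks to the existing pool
    $new = Get-PhysicalDisk -CanPool $true
    Add-PhysicalDisk -StoragePoolFriendlyName "ClusterPool" -PhysicalDisks $new

    # 2. Retire the old disks so no new allocations land on them
    Set-PhysicalDisk -FriendlyName "PhysicalDisk3" -Usage Retired
    Set-PhysicalDisk -FriendlyName "PhysicalDisk4" -Usage Retired

    # 3. Rebuild the virtual disks onto the remaining/new disks and watch progress
    Get-StoragePool -FriendlyName "ClusterPool" | Get-VirtualDisk | Repair-VirtualDisk
    Get-StorageJob

    # 4. Once the repair jobs have completed, remove the retired disks from the pool
    Remove-PhysicalDisk -StoragePoolFriendlyName "ClusterPool" -PhysicalDisks (Get-PhysicalDisk -FriendlyName "PhysicalDisk3","PhysicalDisk4")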
StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts.

Similar Messages

  • Can I design a SOFS-cluster using non-clustered storage?

    Hi.
    I've been trying to figure out if I can build a SOFS-cluster with 4 nodes and 4 JBOD cabinets per node for a total of 16 cabinets, like this:
    I haven't seen this design though so I'm not sure if it's even possible and if it is, what features do I lose (enclosure awareness, etc)?
    Thanks.

    Yeah, I was in a hurry when I posted my initial question and didn't explain my thought process clearly enough.
    Are you saying that you can't build a unified CSV namespace on top of multiple SOFS-clusters, despite MSFT proposing this exact design on multiple occasions?
    As for building one-node clusters; it's certainly possible, albeit a bit pointless I suppose unless you want to cheat a bit like I did. :)
    The reason I'm asking about this particular design is that the hardware vendor that the customer wants to use for their Storage Spaces design only supports cascading up to 4 JBOD-cabinets in one SAS-chain.
    As their cabinets support at most 48 TB per cabinet and the customer wants roughly 220 TB of usable space in a mirror config, that gives us 10 cabinets. On top of this we also want to use tiering to SSD, and with all those limitations taken into consideration
    we end up with 16 cabinets.
    This results in 8 server nodes (2 per 4 cabinets) which is quite a lot of servers for 220 TB of usable disk space and hard to motivate when compared to a traditional FC-based storage solution.
    Perhaps not the cost, pizza boxes are quite cheap, but the rack space for 8 1U servers and 16 2U cabinets is quite a lot.
    I'll put together a design based on these numbers and see what the cost is though, perhaps it's cheap enough for the customer to consider. :)
    Thanks for the feedback.
    1) I'm saying we never managed to get a unified namespace from multiple SoFS units with no shared block storage between all of them; we did not find any references from MSFT on how to do this, nor any people who had done it. If you search
    this particular forum you'll see this question asked many times but not answered (we asked as well). If you do manage to do this and can share some information on how, I'd appreciate it, as we're still interested. See:
    SoFS Scaling
    http://social.technet.microsoft.com/Forums/windowsserver/en-US/20e0e320-ee90-4edf-a6df-4f91b1ef8531/scaling-the-cluster-2012-r2
    SoFS Architecture
    http://social.technet.microsoft.com/Forums/windowsserver/en-US/dc0213da-5ba1-4fad-a5e4-091e047f06b9/clustered-storage-spaces-architecture
    Answer from the guy who presented this "picture" to the public at TechEd:
    "In this specific example, I started by building up an at-scale Spaces deployment - comprising "units" of 2-4 servers attached to 4 SAS JBODs, for a total of 240 disks. As scaling beyond those 240 disks with the 2-4 existing servers would become impractical
    due to either port connectivity limitations of the JBOD units themselves, or PCI-E or HBA limitations of the servers, further scale is achieved by adding more units to the cluster.
    These additional units would further comprise servers and JBODs, but the underlying storage connectivity (Shared SAS) exists only between servers and JBODs within individual units. This means that each unit would have its own storage pool,
    and its own collection of provisioned virtual disks. Resiliency of data and creation of virtual disks occurs within each unit.
    As there can be multiple units with no physical SAS connectivity between them, Ethernet connectivity between all the cluster nodes and cluster shared volumes (CSV) presents the means to unify the data access namespace between all the cluster nodes regardless
    of physical connectivity at the underlying storage level - making the logical storage architecture from a client and cluster point of view completely flat, regardless of how it's actually physically organized. Further, as you are using scale-out file server
    atop of CSV (with R2 and SMB 3.0) client connections to the file server cluster will automatically connect to the correct cluster nodes which are attached to the clients’ data.
    Data availability and resiliency occurs at the unit level, and these can be extended across units through a workload replication mechanism such as Hyper-V replica, or data replication mechanisms such as DFSR.
    I hope this helps clear up any confusion on the above, and let me know if you have any further questions!
    Bryan"
    2) Sure, there's no point, as a single-node cluster is not fault tolerant, which rather compromises the whole idea of having a cluster :) Consensus!
    3) The idea is nice, except I don't know how to implement it without third-party software *or* SAS switches, which limit bandwidth and add cost and complexity :(
    StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts.

  • Question about adding an Extra Node to SOFS cluster

    Hi, I have a fully functioning two-node SOFS cluster; it uses SAN FC storage, not SAS JBODs. It's running about 100 VMs in production at the moment.
    Both my nodes currently sit on one blade chassis, but for resiliency, I want to add another node from a blade chassis in our secondary onsite smaller DC.
    I've done plenty of cluster node upgrades before on SQL and Hyper-V, but never with a SOFS cluster.
    I have the third node fully prepared; it can see the disks (the FC LUNs) on the SAN (using PowerPath and Disk Manager) and all the roles are installed.
    So in theory I can just add this node in the cluster manager and it should all be good. My question is: has anyone else done this, is there anything else I should be aware of, and what's the best way to check that the new node will function and be able
    to migrate the file role over without issues? I know I can run a validation when adding the node; I presume this is the best option?
    I cannot find much information on the web about expanding a SOFS cluster.
    Any advice or information would be gratefully received!
    cheers
    Mark

    Hi Mark,
    Sorry for the delay in reply.
    As you said, there is not much information available about adding a node to a SOFS cluster.
    The only ones I could find is related to System Center (VMM):
    How to Add a Node to a Scale-Out File Server in VMM
    http://technet.microsoft.com/en-us/library/dn466530.aspx
    However, adding a node to a SOFS cluster should be as simple as what you have prepared. You can give it a try and see the result.
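    As a rough sketch of that sequence (cluster, node and role names below are placeholders; run the validation with care on a production cluster):

        # Validate the cluster including the new node, then join it
        Test-Cluster -Node SOFS-N1, SOFS-N2, SOFS-N3
        Add-ClusterNode -Cluster SOFSCluster -Name SOFS-N3

        # Confirm the new node can own the Scale-Out File Server role
        Move-ClusterGroup -Name "SOFS" -Node SOFS-N3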
    If you have any feedback on our support, please send to [email protected]

  • DPM 2012 R2 backup Causes Redirected CSV IO on SOFS Cluster.

    Hi, I have a Scale-Out Storage Spaces server with 2 nodes, and a 10-node 2012 R2 Hyper-V cluster using it via SMB 3.0.
    I have also installed a DPM 2012 R2 backup server.
    The DPM agent is installed on all nodes of all servers and I have followed the prerequisites from Microsoft for setting up DPM backup of SMB Hyper-V machines.
    The DPM backups all work fine, but occasionally I get these errors on the SOFS cluster:
    Cluster Shared Volume 'Volume3' ('Cluster Disk 4') has entered a paused state because of '(c0130021)'. All I/O will temporarily be queued until a path to the volume is reestablished.
    I really thought this issue had been resolved in this revision. It doesn't seem to cause any issues with my VMs that I can notice, and all DPM backups are working fine, but it still causes me concern.
    Has anyone else seen this, or does anyone have suggestions for what I can try to resolve it?
    Regards
    Mark Green

    We also encounter this issue. We use Windows Server 2012 R2 and SCVMM 2012 R2 (with RU1). Be careful with this issue, because it can cause serious problems. Btw, note that Windows Server 2012 R2 uses Direct I/O instead of Redirected I/O.
    If you can't find a full fix, which is the situation we are in right now, there are two things that might offer a workaround for you:
    Disable ODX (if your storage system does not support it):
    Deploy Windows Offloaded Data Transfers
    http://technet.microsoft.com/en-us/library/jj200627.aspx
    Serialize virtual machine backups per node
    Migrate to a hardware VSS provider
    http://technet.microsoft.com/en-us/library/hh758027.aspx
    The second option works best, because this issue mostly occurs when you run a backup of many VMs at once. It is not a full fix and it makes your backup windows much longer, but it can spare you other problems. Also keep a close eye on this link:
    Recommended hotfixes and updates for Windows Server 2012 R2-based failover clusters
    http://support.microsoft.com/kb/2920151
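    If you go the ODX route, the "Deploy Windows Offloaded Data Transfers" article above describes toggling it through the registry; a minimal sketch (run on each node; a value of 1 disables ODX):

        # Check the current setting (the value may not exist if it was never changed)
        Get-ItemProperty HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem -Name FilterSupportedFeaturesMode -ErrorAction SilentlyContinue

        # Disable ODX if the storage array does not support it properly
        Set-ItemProperty HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem -Name FilterSupportedFeaturesMode -Value 1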
    Boudewijn Plomp, BPMi Infrastructure & Security

  • Hardware for SoFS cluster for Hyper-V

    Hi,
    I'm considering placing a SoFS cluster in front of my existing SAN to allow a Hyper-V 2012 R2 cluster to use SMB 3.0 shares for storage. 10Gb Ethernet and 8Gb FC are used. I would connect the Hyper-V cluster to the SoFS with 10Gb Ethernet on a separate storage VLAN
    and connect the SoFS nodes to the SAN with 8Gb FC. What type of servers would you recommend for this? I'm thinking something like 2x Dell R620, each with 2x 12-core Xeon CPUs, 128 GB RAM, 4x 10Gb Ethernet NICs and 4x 8Gb FC HBAs.
    This posting is provided "AS IS" with no warranties or guarantees and confers no rights

    CPU and RAM are not an issue. CPU is largely irrelevant, and RAM can be used for the CSV cache, so the more you throw in, the faster you can go with read-intensive workloads. SMB3 is not cached on SoFS shares. To make a long story short: you'll be fine :)
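    If you do lean on the CSV cache, it is configured cluster-wide on the SoFS nodes; a minimal sketch for 2012 R2 (size in MB, taken from RAM on each node; the 4 GB figure is just an example):

        # Reserve 4 GB of RAM per node for the CSV block cache
        (Get-Cluster).BlockCacheSize = 4096
        (Get-Cluster).BlockCacheSize    # read back to confirm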
    StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts.

  • Move Virtual Machine Storage stuck "Loading" CSVs

    As I understand it the "Move Virtual Machine Storage" screen is supposed to enumerate the CSVs available to the cluster in the bottom left portion of the window.  It seems to be stuck on a "Loading..." status and never shows the
    CSVs.  Also, when I try the "Add Share" button and type in a share to use it doesn't do anything.  Thoughts?

    Hi,
    It sounds like your procedure may have missed some steps; please compare against the following articles.
    More information:
    Appropriate steps for adding a CSV for Hyper-V?
    http://social.technet.microsoft.com/Forums/windowsserver/en-US/49732873-8f86-4ea1-a738-32ffedb36eab/appropriate-steps-for-adding-a-csv-for-hyperv
    Deploying Cluster Shared Volumes (CSV) in Windows Server 2008 R2 Failover Clustering
    http://blogs.msdn.com/b/clustering/archive/2009/02/19/9433146.aspx
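    If the volume never appears in that list, it is worth confirming that the disk actually made it into Cluster Shared Volumes. A minimal sketch of the usual sequence (assuming the LUN is online, formatted NTFS, and visible to every node; "Cluster Disk 2" is a placeholder name):

        Get-ClusterAvailableDisk | Add-ClusterDisk        # bring the disk into Available Storage
        Add-ClusterSharedVolume -Name "Cluster Disk 2"    # promote it to a CSV under C:\ClusterStorage
        Get-ClusterSharedVolume                           # verify it is listed and online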
    Hope this helps.

  • Failed to move virtual machine storage

    Hello guys.
    I am trying to move a VM from a hyper-v 2012 R2 Failover cluster to a non clustered host (2012).
    Let's say that the VM I need to move outside the cluster is called VM1. VM1's VHDX is on the cluster's CSV. VM1 resides on node HV05.
    The cluster is called cluster01.
    The host outside the cluster (but in the same domain) is HV25.
    I've created a shared folder on HV25 called SharedFolder (\\HV25\SharedFolder) and shared it with the domain admin user.
    In ADUC I've created constrained delegation using Kerberos for the cifs service on both Cluster01 and HV25.
    I tried to move VM1's storage to the shared folder on HV25.
    The error I get is: Migration did not succeed. Failed to copy file c:\ClusterStorage\Volume2\VMs\VM1\vm1.vhdx to \\hv25\vm1\vm1.vhdx: General access denied error (0x80070005).
    In cluster's machine event viewer i get: Storage migration for virtual machine 'VM1' (D13DAEC6-F811-427D-8A06-8F960B3F3B3F) failed with error 'General access denied error' (0x80070005). EventID 20820.
    Can anyone help me understand what am I missing, please?
    Thank you very much.
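    One detail worth double-checking here (not stated in the thread, just a common cause of 0x80070005): the copy runs under the source host's computer account, so the destination share and its NTFS ACLs must grant that machine account access in addition to the constrained delegation. A hedged sketch, run on HV25, where DOMAIN and the local folder path are placeholders:

        # Grant the source Hyper-V node's computer account rights on the SMB share
        Grant-SmbShareAccess -Name SharedFolder -AccountName 'DOMAIN\HV05$' -AccessRight Full -Force

        # Mirror the permission on the underlying NTFS folder
        icacls C:\SharedFolder /grant 'DOMAIN\HV05$:(OI)(CI)F'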


  • Cluster storage won't mount on restart

    This has been happening for a while (in 10.5 and 10.6), but when I restart my computer my "shared" storage drive does not mount automatically. The Cluster Storage resides on a local drive (which is mounted), so I'm not sure what steps would be necessary to mount this virtual drive.
    Stopping services in the Qmaster System Prefs pane and restarting them mounts the drive and makes my services available again.
    This happens with Quick Clusters and manual Clusters.
    Any suggestions?
    Thanks

    Not clear on your config.
    You are restarting a computer, not waking from sleep.
    That computer is supposed to mount a share from another computer, in order to support your Qmaster realm.
    The share does not get mounted automatically, and you need to do it manually.
    You want it to happen automatically instead.
    Do I have that right?
    I've not dealt in this realm for a while, but in the past I have solved this by adding an item in my user account's "Login Items" pane. I seem to recall you can drag the icon of a currently mounted share into that list.
    Also, since this auto-mounting is really a generic system issue, you might get better help over in one of the general Mac OS X forums.
    Cheers,
    -Rick

  • Cluster Storage

    We have three Hyper-V host servers (Windows 2012) in a cluster environment. We have a SAN which is mounted as cluster storage on all three servers. Hyper-V1 has ownership of the disk.
    Recently we increased the disk space on the SAN volume and it is reflected in the cluster disk but not in the cluster volume. It shows the correct size in Disk Management on all servers, but does not show in the cluster storage.
    Please see the attached screenshot to understand more clearly.
    Can someone help me how can I resolve this issue?
    Thanks

    You need a proper sequence of actions to increase the shared LUN and the CSV layered on top of it. Please see:
    CSV: Extending the Volume
    http://blogs.technet.com/b/chrad/archive/2010/07/15/cluster-shared-volumes-csv-extending-a-volume.aspx
    A similar thread from before, see:
    Extend CSV in Hyper-V
    https://social.technet.microsoft.com/Forums/windowsserver/en-US/034afc19-490c-45e3-8279-28a2cbfeabe9/hyperv-extend-csv
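    The linked posts boil down to rescanning the storage on the owner node and then extending the partition that backs the CSV. A minimal sketch using the 2012 R2 storage cmdlets ("Cluster Disk 1" is a placeholder; on plain 2012 the classic diskpart "extend" on the owner node achieves the same):

        # Run on the cluster node that currently owns the CSV disk
        Update-HostStorageCache                              # rescan so the larger LUN size is visible

        $csv  = Get-ClusterSharedVolume -Name "Cluster Disk 1"
        $path = $csv.SharedVolumeInfo.Partition.Name         # \\?\Volume{...}\ path of the CSV partition
        $part = Get-Partition | Where-Object { $_.AccessPaths -contains $path }

        # Grow the partition to the maximum size the enlarged LUN allows
        $max = ($part | Get-PartitionSupportedSize).SizeMax
        $part | Resize-Partition -Size $max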
    Good luck :)
    StarWind Virtual SAN clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts.

  • How do we move windows cluster to unix?

    Hi,
    I have a WLI 10.3 domain with one cluster and 2 managed servers running on Windows XP. I am planning to move this cluster to Linux. Any advice on the best way to migrate?
    Thanks
    Ksr

    Hi,
    Moving existing Managed Servers to Unix boxes is not possible directly...for that you need to follow the steps below:
    Step1). Install Same version of WebLogic on your Unix Boxes.
    Step2). Now Create "Machines" of type "unix" through AdminConsole. Assign the IPAddress of this Machine to your Unix Server Boxes.
    Step3). Remove your existing ManagedServers from the "Machine (Windows)" list......Now Add these Managed Servers to the newly created Unix Machines Server List.
    Step4). Now run the nmEnroll() function from the UnixBox to enroll your NodeManager to the AdminServer. you can follow the step(8) mentioned in the below link for that: http://jaysensharma.wordpress.com/2010/01/08/struts-2-0-in-weblogic/#comment-136
    Step5). Now Start your NodeManager in Unix Box ...then Login to AdminConsole and check in the " Machine--->Monitoring(Tab) " whether the NodeManager is Reachable or not.
    Step6). If the NodeManager is reachable then ....you can start your Managed Servers.
    Thanks
    Jay SenSharma
    http://jaysensharma.wordpress.com (WebLogic Wonders Are Here)

  • How do I download my movies onto local storage so I can watch them without wifi?

    How do I download my movies onto local storage so I can watch them without wifi?

    What sort of movies?
    Clinton

  • Cluster Storage : All SAN drives are not added up into cluster storage.

    Hi Team,
    Everything seems to be working fine except for one minor issue: one of the disks is not showing in cluster storage, even though validation completed without any issue, error or warning. Please see the report
    below, where all SAN disks validate successfully but are not added into the Windows Server 2012 cluster storage.
    The quorum disk was added successfully into storage, but the data disk was not.
    http://goldteam.co.uk/download/cluster.mht
    Thanks,
    SZafar

    Create Cluster
    Cluster: mail
    Node: MailServer-N2.goldteam.co.uk
    Node: MailServer-N1.goldteam.co.uk
    Quorum: Node and Disk Majority (Cluster Disk 1)
    IP Address: 192.168.0.4
    Started: 12/01/2014 04:34:45
    Completed: 12/01/2014 04:35:08
    Beginning to configure the cluster mail.
    Initializing Cluster mail.
    Validating cluster state on node MailServer-N2.goldteam.co.uk.
    Find a suitable domain controller for node MailServer-N2.goldteam.co.uk.
    Searching the domain for computer object 'mail'.
    Bind to domain controller \\GTMain.goldteam.co.uk.
    Check whether the computer object mail for node MailServer-N2.goldteam.co.uk exists in the domain. Domain controller \\GTMain.goldteam.co.uk.
    Computer object for node MailServer-N2.goldteam.co.uk does not exist in the domain.
    Creating a new computer account (object) for 'mail' in the domain.
    Check whether the computer object MailServer-N2 for node MailServer-N2.goldteam.co.uk exists in the domain. Domain controller \\GTMain.goldteam.co.uk.
    Creating computer object in organizational unit CN=Computers,DC=goldteam,DC=co,DC=uk where node MailServer-N2.goldteam.co.uk exists.
    Create computer object mail on domain controller \\GTMain.goldteam.co.uk in organizational unit CN=Computers,DC=goldteam,DC=co,DC=uk.
    Check whether the computer object mail for node MailServer-N2.goldteam.co.uk exists in the domain. Domain controller \\GTMain.goldteam.co.uk.
    Configuring computer object 'mail in organizational unit CN=Computers,DC=goldteam,DC=co,DC=uk' as cluster name object.
    Get GUID of computer object with FQDN: CN=MAIL,CN=Computers,DC=goldteam,DC=co,DC=uk
    Validating installation of the Network FT Driver on node MailServer-N2.goldteam.co.uk.
    Validating installation of the Cluster Disk Driver on node MailServer-N2.goldteam.co.uk.
    Configuring Cluster Service on node MailServer-N2.goldteam.co.uk.
    Validating installation of the Network FT Driver on node MailServer-N1.goldteam.co.uk.
    Validating installation of the Cluster Disk Driver on node MailServer-N1.goldteam.co.uk.
    Configuring Cluster Service on node MailServer-N1.goldteam.co.uk.
    Waiting for notification that Cluster service on node MailServer-N2.goldteam.co.uk has started.
    Forming cluster 'mail'.
    Adding cluster common properties to mail.
    Creating resource types on cluster mail.
    Creating resource group 'Cluster Group'.
    Creating IP Address resource 'Cluster IP Address'.
    Creating Network Name resource 'mail'.
    Searching the domain for computer object 'mail'.
    Bind to domain controller \\GTMain.goldteam.co.uk.
    Check whether the computer object mail for node exists in the domain. Domain controller \\GTMain.goldteam.co.uk.
    Computer object for node exists in the domain.
    Verifying computer object 'mail' in the domain.
    Checking for account information for the computer object in the 'UserAccountControl' flag for CN=MAIL,CN=Computers,DC=goldteam,DC=co,DC=uk.
    Set password on mail.
    Configuring computer object 'mail in organizational unit CN=Computers,DC=goldteam,DC=co,DC=uk' as cluster name object.
    Get GUID of computer object with FQDN: CN=MAIL,CN=Computers,DC=goldteam,DC=co,DC=uk
    Provide permissions to protect object from accidental deletion.
    Write service principal name list to the computer object CN=MAIL,CN=Computers,DC=goldteam,DC=co,DC=uk.
    Set operating system and version in Active Directory Domain Services.
    Set supported encryption types in Active Directory Domain Services.
    Starting clustered role 'Cluster Group'.
    The initial cluster has been created - proceeding with additional configuration.
    Clustering all shared disks.
    Creating the physical disk resource for 'Cluster Disk 1'.
    Bringing the resource for 'Cluster Disk 1' online.
    Assigning the drive letters for 'Cluster Disk 1'.
    'Cluster Disk 1' has been successfully configured.
    Waiting for available storage to come online...
    All available storage has come online...
    Waiting for the core cluster group to come online.
    Configuring the quorum for the cluster.
    Configuring quorum resource to Cluster Disk 1.
    Configuring Node and Disk Majority quorum with 'Cluster Disk 1'.
    Moving 'Cluster Disk 1' to the core cluster group.
    Choosing the most appropriate storage volume...
    Attempting Node and Disk Majority quorum configuration with 'Cluster Disk 1'.
    Quorum settings have successfully been changed.
    The cluster was successfully created.
    Finishing cluster creation.
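    If a LUN validates cleanly but never gets picked up during cluster creation, it can usually be added afterwards once it is online and visible to every node; a minimal sketch:

        Get-ClusterAvailableDisk                 # the missing SAN data disk should appear here
        Get-ClusterAvailableDisk | Add-ClusterDisk
        Get-ClusterResource | Where-Object { $_.ResourceType -eq "Physical Disk" }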

  • "Shared Cluster Storage" Qmaster

    I have noticed that "Shared Cluster Storage" in System Preferences > Apple Qmaster is not set to "/var/spool/qmaster" as it should be, but to "Applications".
    Please help!

    If you'd like to change it back to the original location, you'll need to use the Go To Folder command since it's in a hidden folder.
    In the Qmaster System Preference, click the "Set" button next to the Shared Storage path.
    note: You will need to Stop Sharing to make this change.
    When the Finder window comes up, press Command-Shift "G"
    In the "Go to the Folder" path, enter: /var/spool
    You should be able to select the qmaster folder and click "Choose" at that point.
    ~D

  • Cluster Storage on Xsan - Constant Read/Write?

    Wondering if anyone else has seen this. I have a Qmaster cluster for Compressor jobs, and if I set the Cluster Storage location to somewhere on the Xsan instead of my local drive, throughout the whole compression job my disks will be reading and writing like mad - 40 MB/sec of read and write, constantly. This obviously hurts the performance of the compression job. If I set the Cluster Storage to the internal disk (/var/spool), I get better performance and virtually no disk activity until it writes out the file at the end. It's especially bad when using QuickTime Export Components.
    Has anyone seen this? I've opened a ticket with Apple, but I'm wondering if it's just me.

    Is your Compressor preference set to Never Copy? That's what it should be set to. I personally haven't seen this behavior, and I have 3 clusters (3 x 5 nodes) connected to the same SAN.
    It's also possible your SAN volume config has something to do with it. If the transfer size (block size and stripe breadth) is too low then I could imagine something like this happening.

  • Qmaster Cluster Storage Location Creating Duplicate Clusters

    I'm in the process of setting up a Qmaster cluster for transcoding jobs and also for Final Cut Server. We have an Xserve serving as the cluster controller, with a RAID attached via fiber that is serving out an NFS share over 10Gb Ethernet to 8 other Xserves that make up the cluster. The 8 other Xserves all automount the NFS share. The problem we are running into is that we need to change the default "Cluster Storage" location (Qmaster preference pane) to the attached RAID rather than the default location on the system drive, primarily because the size of the transcodes we are doing will fill the system drive and the transcodes will fail if it is left in the default location.
    Every time we try to set the "Cluster Storage" location to a directory on the RAID and then create a cluster using QAdministrator, it generates a duplicate cluster spontaneously and prevents you from being able to modify the cluster you originally made. It says that it's currently in use or being edited by someone else.
    Duplicated Cluster.
    Currently be used by someone else.
    If you close QAdmin and then try to modify the cluster, it says it is locked and prompts for a password despite the fact that no password was set up for the cluster. Alternatively, if you do set up a password on the cluster, it does not actually work in this situation.
    If the "Cluster Storage" location is set back to its default location, none of this duplicated-cluster business happens at all. I went and checked and verified that permissions were the same between the directory on the RAID and the default location on the system drive (/var/spool/qmaster). I also cleared out previous entries in /etc/exports and that didn't resolve anything. Also, every time any change has been made, services have been stopped and started again. The only thing I can see that is different between using /var/spool/qmaster and another directory on our RAID is that once a controller has been assigned in QAdmin, the storage path that shows up is different. The default is nfs://engel.local/private/var/spool/qmaster/4D3BF996-903A30FF/shared and the custom is file2nfs://localhost/Volumes/FCServer/clustertemp/869AE873-7B26C2D9/shared. Screenshots are below.
    Default Location
    Custom Location
    Kind of at loss at this point any help would be much appreciated. Thanks.

    Starting from the beginning, did you have a working cluster to begin with or is this a new implementation?
    A few major housekeeping items (assuming this is a new implementation): Qmaster nodes should have the same versions of Qmaster, QuickTime, and Compressor (if they have Compressor loaded).
    The only box that really matters as far as cluster storage location is the controller; it tells the rest of the boxes where to look. On your shared storage, create a folder structure named "CLUSTER_STORAGE" or something to that effect... then in the controller's preference pane set the controller's storage to that location. It will create a new folder with a cryptic name of numbers, letters and dashes and use that as the storage location for any computer in the cluster.
    Now... What I'm seeing in your first screen shot however worries me a little bit.. I have had the same issue and the only way I've found to remedy that is to pull all things Final Cut Studio off that box and do a completely fresh reinstall... then update everything again to the same versions. I'm assuming you're using FCStudio 7.x?
    We should be able to get you on your feet with this. Configuring is always the hardest part, but when it gets done.. man it's worth it
    Cheers
