Cluster Storage

We have three Hyper-V host servers (Windows 2012) in a cluster environment. We have a SAN volume which is mounted as cluster storage on all three servers. Hyper-V1 has ownership of the disk.
Recently we increased the disk space on the SAN volume, and the change is reflected in the cluster disk but not in the cluster volume. It shows the correct size in Disk Management on all servers, but not in the cluster storage.
Please see the attached screenshot to understand more clearly.
Can someone help me resolve this issue?
Thanks

You need a proper set of actions to increase the shared LUN and the CSV layered on top of it. Please see:
CSV: Extending the Volume
http://blogs.technet.com/b/chrad/archive/2010/07/15/cluster-shared-volumes-csv-extending-a-volume.aspx
There is also a similar earlier thread:
Extend CSV in Hyper-V
https://social.technet.microsoft.com/Forums/windowsserver/en-US/034afc19-490c-45e3-8279-28a2cbfeabe9/hyperv-extend-csv
Good luck :)

Similar Messages

  • Cluster storage won't mount on restart

    This has been happening for a while (in 10.5 and 10.6): when I restart my computer, my "shared" storage drive does not mount automatically. The cluster storage resides on a local drive (which is mounted), so I'm not sure what steps would be necessary to mount this virtual drive.
    Stopping the services in the Qmaster System Preferences pane and restarting them mounts the drive and makes my services available again.
    This happens with both QuickClusters and manual clusters.
    Any suggestions?
    Thanks

    Not clear on your config.
    You are restarting a computer, not waking from sleep.
    That computer is supposed to mount a share from another computer, in order to support your Qmaster realm.
    The share does not get mounted automatically, and you need to do it manually.
    You want it to happen automatically instead.
    Do I have that right?
    I've not dealt in this realm for a while, but in the past I have solved this by adding an item to my user account's "Login Items" pane. I seem to recall you can drag the icon of a currently mounted share into that list.
    Also, since this auto-mounting is really a generic system issue, you might get better help over in one of the general Mac OS X forums.
    Cheers,
    -Rick

  • Cluster Storage : All SAN drives are not added up into cluster storage.

    Hi Team,
    Everything seems to be working fine except for one minor issue: one of the disks is not showing up in cluster storage, even though validation completed without any issues, errors, or warnings. Please see the report below, where all SAN disks validate successfully but are not added into Windows Server 2012 storage.
    The quorum disk was added to storage successfully, but the data disk was not.
    http://goldteam.co.uk/download/cluster.mht
    Thanks,
    SZafar

    Create Cluster
    Cluster: mail
    Node: MailServer-N2.goldteam.co.uk
    Node: MailServer-N1.goldteam.co.uk
    Quorum: Node and Disk Majority (Cluster Disk 1)
    IP Address: 192.168.0.4
    Started: 12/01/2014 04:34:45
    Completed: 12/01/2014 04:35:08
    Beginning to configure the cluster mail.
    Initializing Cluster mail.
    Validating cluster state on node MailServer-N2.goldteam.co.uk.
    Find a suitable domain controller for node MailServer-N2.goldteam.co.uk.
    Searching the domain for computer object 'mail'.
    Bind to domain controller \\GTMain.goldteam.co.uk.
    Check whether the computer object mail for node MailServer-N2.goldteam.co.uk exists in the domain. Domain controller \\GTMain.goldteam.co.uk.
    Computer object for node MailServer-N2.goldteam.co.uk does not exist in the domain.
    Creating a new computer account (object) for 'mail' in the domain.
    Check whether the computer object MailServer-N2 for node MailServer-N2.goldteam.co.uk exists in the domain. Domain controller \\GTMain.goldteam.co.uk.
    Creating computer object in organizational unit CN=Computers,DC=goldteam,DC=co,DC=uk where node MailServer-N2.goldteam.co.uk exists.
    Create computer object mail on domain controller \\GTMain.goldteam.co.uk in organizational unit CN=Computers,DC=goldteam,DC=co,DC=uk.
    Check whether the computer object mail for node MailServer-N2.goldteam.co.uk exists in the domain. Domain controller \\GTMain.goldteam.co.uk.
    Configuring computer object 'mail in organizational unit CN=Computers,DC=goldteam,DC=co,DC=uk' as cluster name object.
    Get GUID of computer object with FQDN: CN=MAIL,CN=Computers,DC=goldteam,DC=co,DC=uk
    Validating installation of the Network FT Driver on node MailServer-N2.goldteam.co.uk.
    Validating installation of the Cluster Disk Driver on node MailServer-N2.goldteam.co.uk.
    Configuring Cluster Service on node MailServer-N2.goldteam.co.uk.
    Validating installation of the Network FT Driver on node MailServer-N1.goldteam.co.uk.
    Validating installation of the Cluster Disk Driver on node MailServer-N1.goldteam.co.uk.
    Configuring Cluster Service on node MailServer-N1.goldteam.co.uk.
    Waiting for notification that Cluster service on node MailServer-N2.goldteam.co.uk has started.
    Forming cluster 'mail'.
    Adding cluster common properties to mail.
    Creating resource types on cluster mail.
    Creating resource group 'Cluster Group'.
    Creating IP Address resource 'Cluster IP Address'.
    Creating Network Name resource 'mail'.
    Searching the domain for computer object 'mail'.
    Bind to domain controller \\GTMain.goldteam.co.uk.
    Check whether the computer object mail for node exists in the domain. Domain controller \\GTMain.goldteam.co.uk.
    Computer object for node exists in the domain.
    Verifying computer object 'mail' in the domain.
    Checking for account information for the computer object in the 'UserAccountControl' flag for CN=MAIL,CN=Computers,DC=goldteam,DC=co,DC=uk.
    Set password on mail.
    Configuring computer object 'mail in organizational unit CN=Computers,DC=goldteam,DC=co,DC=uk' as cluster name object.
    Get GUID of computer object with FQDN: CN=MAIL,CN=Computers,DC=goldteam,DC=co,DC=uk
    Provide permissions to protect object from accidental deletion.
    Write service principal name list to the computer object CN=MAIL,CN=Computers,DC=goldteam,DC=co,DC=uk.
    Set operating system and version in Active Directory Domain Services.
    Set supported encryption types in Active Directory Domain Services.
    Starting clustered role 'Cluster Group'.
    The initial cluster has been created - proceeding with additional configuration.
    Clustering all shared disks.
    Creating the physical disk resource for 'Cluster Disk 1'.
    Bringing the resource for 'Cluster Disk 1' online.
    Assigning the drive letters for 'Cluster Disk 1'.
    'Cluster Disk 1' has been successfully configured.
    Waiting for available storage to come online...
    All available storage has come online...
    Waiting for the core cluster group to come online.
    Configuring the quorum for the cluster.
    Configuring quorum resource to Cluster Disk 1.
    Configuring Node and Disk Majority quorum with 'Cluster Disk 1'.
    Moving 'Cluster Disk 1' to the core cluster group.
    Choosing the most appropriate storage volume...
    Attempting Node and Disk Majority quorum configuration with 'Cluster Disk 1'.
    Quorum settings have successfully been changed.
    The cluster was successfully created.
    Finishing cluster creation.

  • "Shared Cluster Storage" Qmaster

    I have noticed that "Shared Cluster Storage" in System Preferences > Apple Qmaster is not set to "/var/spool/qmaster" as it should be, but to "Applications".
    Please help!

    If you'd like to change it back to the original location, you'll need to use the Go To Folder command since it's in a hidden folder.
    In the Qmaster System Preference, click the "Set" button next to the Shared Storage path.
    note: You will need to Stop Sharing to make this change.
    When the Finder window comes up, press Command-Shift-G.
    In the "Go to the folder" field, enter: /var/spool
    You should be able to select the qmaster folder and click "Choose" at that point.
    ~D

  • Cluster Storage on Xsan - Constant Read/Write?

    Wondering if anyone else has seen this. I have a Qmaster cluster for Compressor jobs, and if I set the Cluster Storage location to somewhere on the Xsan instead of my local drive, my disks will be reading and writing like mad throughout the whole compression job: 40 MB/sec of read and write, constantly. This obviously hurts the performance of the compression job. If I set the Cluster Storage to the internal disk (/var/spool), I get better performance and virtually no disk activity until it writes out the file at the end. It's especially bad when using QuickTime Export Components.
    Has anyone seen this? I've opened a ticket with Apple, but I'm wondering if it's just me.

    Are your Compressor preferences set to Never Copy? That's what they should be set to. I personally haven't seen this behavior, and I have 3 clusters (3 x 5 nodes) connected to the same SAN.
    It's also possible your SAN volume config has something to do with it. If the transfer size (block size and stripe breadth) is too low, then I could imagine something like this happening.

  • Qmaster Cluster Storage Location Creating Duplicate Clusters

    I'm in the process of setting up a Qmaster cluster for transcoding jobs and also for Final Cut Server. We have an Xserve serving as the cluster controller, with a RAID attached via fiber that is serving out an NFS share over 10Gb Ethernet to 8 other Xserves that make up the cluster. The 8 other Xserves are all automounting the NFS share. The problem we are running into is that we need to change the default "Cluster Storage" location (Qmaster preference pane) to the attached RAID rather than the default location on the system drive, primarily because the size of the transcodes we are doing will fill the system drive and the transcodes will fail if the location is left at the default.
    Every time we try to set the "Cluster Storage" location to a directory on the RAID and then create a cluster using QAdministrator, it spontaneously generates a duplicate cluster and prevents you from being able to modify the cluster you originally made. It says that it's currently in use or being edited by someone else.
    Duplicated Cluster.
    Currently being used by someone else.
    If you close QAdministrator and then try to modify the cluster, it says it is locked and prompts for a password, despite the fact that no password was set up for the cluster. Alternatively, if you do set up a password on the cluster, it does not actually work in this situation.
    If the "Cluster Storage" location is set back to its default location, none of this duplicated-cluster business happens at all. I checked and verified that permissions were the same between the directory on the RAID and the default location on the system drive (/var/spool/qmaster). I also cleared out previous entries in /etc/exports, and that didn't resolve anything. Also, every time a change has been made, services have been stopped and started again. The only thing I can see that is different between using /var/spool/qmaster and another directory on our RAID is that once a controller has been assigned in QAdministrator, the storage path that shows up is different. The default is nfs://engel.local/private/var/spool/qmaster/4D3BF996-903A30FF/shared and the custom is file2nfs://localhost/Volumes/FCServer/clustertemp/869AE873-7B26C2D9/shared. Screenshots are below.
    Default Location
    Custom Location
    Kind of at a loss at this point; any help would be much appreciated. Thanks.

    Starting from the beginning, did you have a working cluster to begin with, or is this a new implementation?
    A few major housekeeping items (assuming this is a new implementation): the Qmaster nodes should have the same versions of Qmaster, QuickTime, and Compressor (if they have Compressor loaded).
    The only box that really matters as far as cluster storage location is the controller. It tells the rest of the boxes where to look. On your shared storage, create a folder structure that is "CLUSTER_STORAGE" or something to that effect, then on the controller's preference pane set the controller's storage to that location. It will create a new folder with a cryptic name of numbers, letters, and dashes and use that as the storage location for any computer in the cluster.
    Now, what I'm seeing in your first screenshot worries me a little bit. I have had the same issue, and the only way I've found to remedy it is to pull all things Final Cut Studio off that box and do a completely fresh reinstall, then update everything again to the same versions. I'm assuming you're using FCStudio 7.x?
    We should be able to get you on your feet with this. Configuring is always the hardest part, but when it gets done, man, it's worth it.
    Cheers

  • Failover cluster storage pool cannot be added

    Hi.
    Environment: Windows Server 2012 R2 with Update.
    Storage: Dell MD3600F
    I created a LUN with 5 GB of space and mapped it to both nodes of this cluster. It can be seen on both sides in Disk Management. I initialized it as a GPT-based disk without any partition.
    The New Storage Pool wizard can be completed by selecting this disk, without any error message or event log entries.
    But after that, the pool is not visible under Pools and the LUN is gone from Disk Management. The LUN cannot be shown again even after rescanning.
    This can be reproduced many times.
    In the same environment, many LUNs work well under Storage - Disks. It only fails when used as a pool.
    What's wrong here?
    Thanks.

    Hi EternalSnow,
    Please refer to the following article to create a clustered storage pool:
    http://blogs.msdn.com/b/clustering/archive/2012/06/02/10314262.aspx
    If you need any further information, please feel free to let us know.
    Best Regards,
    Elton Ji

  • Windows 2012 R2 File Server Cluster Storage best practice

    Hi Team,
    I am designing a solution for 1,700 VDI users. I will use a Microsoft Windows 2012 R2 file server cluster to host their profile data, using Group Policy for folder redirection.
    I am looking for best practice on defining the storage disk size for the user profile data. I am looking at a single 30 TB disk to host the user profile data, with that single disk spread across two disk enclosures.
    Please let me know if a single 30 TB disk could become a bottleneck for holding active user profile data.
    I have writable SSD disks in the storage array with FC connectivity.
    Thanks
    Ravi

    Check this TechEd session, the Windows Server 2012 VDI deployment guide (pages 8-9), and this article.
    General considerations during volume size planning:
    Consider how long it will take if you ever have to run chkdsk. Chkdsk has undergone significant improvements in 2012 R2, but it will still take a long time to run against a 30 TB volume. That's downtime.
    Consider how volume size will affect your RPO, RTO, DR, and SLA. It will take a long time to back up/restore a 30 TB volume.
    Any operation on a 30 TB volume, such as a snapshot, will pose performance and additional disk space challenges.
    For these reasons many IT pros choose to keep volume size under 2 TB. In your case, you can use 15x 2 TB volumes instead of a single 30 TB volume.
    Sam Boutros, Senior Consultant, Software Logic, KOP, PA - http://superwidgets.wordpress.com

  • Server 2012R2 Cluster Storage Error

    In January 2014 I built 4 servers into a cluster for Hyper-V-based VDI, using a SAN for central storage. I had no issues with the running of the setup until recently, when the 4th server decided to stop one of the VM host services and VMs became inaccessible.
    When I was unable to find a solution, I rebuilt the server and, after finding 50+ updates per server, ran Windows Update on them all. Ever since these 2 simple actions I have been unable to add the server back into the cluster correctly. The Validation wizard shows:
    Failure issuing call to Persistent Reservation REGISTER AND IGNORE EXISTING on Test Disk 0 from node when the disk has no existing registration. It is expected to succeed. The requested resource is in use.
    Test Disk 0 does not provide Persistent Reservations support for the mechanisms used by failover clusters. Some storage devices require specific firmware versions or settings to function properly with failover clusters. Please contact your storage administrator or storage vendor to check the configuration of the storage to allow it to function properly with failover clusters.
    The other 3 nodes are using the storage happily without issues. If I force the node into the cluster, it shows as mounted and accessible, but the moment I try to start a VM on that server, it loses the mount point and reports error 2051: [DCM] failed to set mount point source path target path error 85.
    The only difference between them is the model of the server: the 3 that work are HP ProLiant DL360 G7 and the one that doesn't is an HP ProLiant DL360p G8. They have worked together previously without issues, though.
    I am at a complete loss as to what to do. Any help would be gratefully appreciated.
    Thanks

    Hello the_travisty,
    The first thing to note here is that you are using a different hardware version for this node than for the other nodes. Even if at some point they could work together, this configuration does not guarantee that you will not have future issues.
    Remember that the basis of failover cluster technology is that all the nodes should have the same characteristics, since they are working together as a single computer. If the clustered resources are going to be moved among the nodes, each node has to work and react like any other member of the cluster and provide all the elements (hardware/software/firmware/drivers) needed to keep the resource up and running as if it had never been moved to a computer with a different configuration.
    As for this error:
    Please contact your storage administrator or storage vendor to check the configuration of the storage to allow it to function properly with failover clusters.
    This could point to a firmware/driver incompatibility. There may be updates missing on this server, or an update may have installed a different version of the system files on this node because of the different hardware, and that could be causing the issue. You can also check with your storage vendor whether there is a newer release of drivers/firmware for your storage solution that is compatible with the OS version. The same question goes to your network adapter vendor.
    Hope this info helps you reach your goal. :D
    5ALU2 !

  • Move SoFS cluster storage

    Hi,
    I'm thinking about a scenario that is as follows: I have a SoFS cluster using iSCSI shared storage and clustered storage spaces. The underlying hardware providing the iSCSI CSVs are being replaced by a new system.
    Can I just add new eligible iSCSI disks from the new hardware to extend the existing clustered pool, then remove the old iSCSI disks from the pool?
    Ideally this would then trigger movement of the VDs to the added disks, freeing up the disks to be removed. Then I should be able to remove the old drives from the cluster. Right?

    Hi,
    Using iSCSI with Clustered Storage Spaces is not supported. So you should be aware of that. Only SAS is currently supported. See:
    Clustered Storage Spaces
    http://technet.microsoft.com/en-us/library/jj822937.aspx
    Disk bus type: The disk bus type must be SAS.
    Note: We recommend dual-port SAS drives for redundancy. Storage Spaces does not support iSCSI and Fibre Channel controllers.
    Yes, you can replace disks with Storage Spaces, no problem. See:
    Storage Spaces FAQ
    http://social.technet.microsoft.com/wiki/contents/articles/11382.storage-spaces-frequently-asked-questions-faq.aspx#How_do_I_replace_a_physical_disk
    Physical attachment interface does not matter.

  • Increase Cluster Storage

    Hi,
    Two-node ProLiant cluster with an MSA 500 running NW 6.0 SP5. The MSA 500 is populated with 12 72 GB 15K hard drives in a RAID 1 config. I need more storage. The system was built in 2003 and recently two drives failed. Each was replaced and the rebuild completed. My thought is to replace each 72 GB drive with a 146 GB 15K unit, one at a time, and let the rebuild happen. After all 12 are finished, I should have the current configuration of 3 pools plus approximately 400 GB of free space. Am I correct in my thinking? Has anyone done this?
    Is there a way of getting rid of the MSA 500 online spare?
    Thanks

    On 22.07.2011 21:06, tgilmer wrote:
    >
    > I hope this is the correct forum. I don't see anything for NW6 clusters.
    That's fine, as your question isn't really cluster-specific at all.
    > I've completed the change of all 72GB drives to 146GB. I can find no way
    > to add the additional storage through CPQONLIN or NSSMU. Do I have to
    > reconfigure everything and restore from backup? I was hoping to avoid
    > using the backup.
    Well, your question is really specific to the MSA500 in the first place, i.e. it
    depends on the HW. I don't know the MSA500 well enough to tell you how
    you can expand the storage. On the vast majority of HP/Compaq RAIDs, you
    can simply create a new logical volume from the free space in either
    CPQONLIN or the Array Configuration Utility. But the MSA500 is a
    different beast than a local RAID controller, and I don't know if
    CPQONLIN can handle it. You may have to boot one of the nodes and use
    SmartStart to do it, or whatever is used to configure the MSA.
    Once you've done that, you can add the new free space to a pool, or
    create a new one using NSSMU.
    CU,
    Massimo Rosen
    Novell Knowledge Partner
    No emails please!
    http://www.cfc-it.de

  • Data Caching in a WLS6 Cluster - Storage Location

    I want to cache read-only reference/code table data that will run in a clustered
    WLS6 environment. It's a JSP application and I am storing a complete HTML Select
    Control per reference/code table data in the cache. The question is where to cache
    it? I was going to put it in the ServletContext (JSP "application" implicit object),
    but the ServletContext is not replicated. I considered using JNDI, but there are
    problems with duplicate name errors when another server who doesn't originally
    bind the object tries to lookup, change and rebind the object. I guess JMS Multicasting
    is an option, but I don't want to implement JMS just for an application data cache.
    Any suggestions for a simple reference/code table read-only caching strategy that
    will work in a clustered WLS6 environment? I know how to do everything except
    how to have it available and in sync on all servers in a clustered environment.
    Any suggestions?

    A read-only entity EJB?
    .raja
    P.S. Also read Rob Wollen's post in the JDBC newsgroup, where he explains
    the new features with an example.
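    Since the data is strictly read-only, another low-tech option is to let each server in the cluster build its own local copy lazily; because nothing is ever updated, there is nothing to keep in sync across members. Below is a minimal sketch of that idea (plain Java, not WebLogic-specific); ReferenceDataDao is a hypothetical helper standing in for whatever JDBC code loads the code-table rows:
    import java.util.Hashtable;
    import java.util.Iterator;
    import java.util.List;
    import java.util.Map;

    /**
     * Per-JVM cache of read-only reference/code-table data.
     * Each cluster member populates its own copy on first use, so no
     * replication, JNDI rebinding, or JMS messaging is required.
     */
    public final class ReferenceDataCache {

        // table name -> pre-rendered HTML <select> markup (Hashtable is thread-safe)
        private static final Map CACHE = new Hashtable();

        private ReferenceDataCache() { }

        public static String getSelectHtml(String tableName) {
            String html = (String) CACHE.get(tableName);
            if (html == null) {
                // A race here only means the immutable data is loaded twice; harmless.
                List codes = ReferenceDataDao.loadCodes(tableName); // hypothetical DAO
                html = renderSelect(tableName, codes);
                CACHE.put(tableName, html);
            }
            return html;
        }

        private static String renderSelect(String tableName, List codes) {
            StringBuffer sb = new StringBuffer("<select name=\"" + tableName + "\">");
            for (Iterator it = codes.iterator(); it.hasNext(); ) {
                String[] row = (String[]) it.next(); // row[0] = code value, row[1] = label
                sb.append("<option value=\"").append(row[0]).append("\">")
                  .append(row[1]).append("</option>");
            }
            return sb.append("</select>").toString();
        }
    }
    A JSP on any node can then call ReferenceDataCache.getSelectHtml("countries") (the table name here is just an example); the only cost is that each server hits the database once per table.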

  • Excessive (?) cluster delays during shutdown of storage enabled node.

    We are experiencing significant delays when shutting down a storage enabled node. At the moment, this is happening in a benchmark environment. If these delays were to occur in production, however, they would push us well outside of our acceptable response times, so we are looking for ways to reduce/eliminate the delays.
    Some background:
    - We're running in a 'grid' style arrangement with a dedicated cache tier.
    - We're running our benchmarks with a vanilla distributed cache -- binary storage, no backups, no operations other than put/get.
    - We're allocating a relatively large number of partitions (1973), basing that number on the total potential cluster storage and the '50MB per partition' rule.
    - We're using JSW to manage startup/shutdown, calling DefaultCacheServer.main() to start the cache server, and using the shutdown hook (from the operational config) to shutdown the instance.
    - We're currently running all of the dedicated cache JVMs on a single machine (that won't be the case in production, of course), with a relatively higher ratio of JVMs to cores --> about 2 to 1.
    - We're using a simple benchmarking client that is issuing a combination of puts/gets against the distributed cache. The IDs for these puts/gets are randomized (completely synthetic, I know).
    - We're currently handling all operations on the distributed service thread (i.e. thread count is zero).
    What we see:
    - When adding a new node to a cluster under steady load (~50% CPU idle avg) , there is a very slight degradation, but only very slight. There is no apparent pause, and the maximum operation times against the cluster might barely exceed ~100 ms.
    - When later removing that node from the cluster (kill the JVM, triggering the coherence supplied shutdown hook), there is an obvious, extended pause. During this time, the maximum operation times against the cluster are as high as 5, 10, or even 15 seconds.
    At the beginning of the pause, a client will see this message:
    2010-07-13 22:23:53.227/55.738 Oracle Coherence GE 3.5.3/465 <D5> (thread=Cluster, member=10): Member 8 left service Management with senior member 1
    During the length of the pause, the cache server logging indicates that primary partitions are being shuffled around.
    When the partition shuffle is complete, the clients become immediately responsive, and display these messages:
    2010-07-13 22:23:58.935/61.446 Oracle Coherence GE 3.5.3/465 <D5> (thread=Cluster, member=10): Member 8 left service hibL2-distributed with senior member 1
    2010-07-13 22:23:58.973/61.484 Oracle Coherence GE 3.5.3/465 <D5> (thread=Cluster, member=10): MemberLeft notification for Member 8 received from Member(Id=8, Timestamp=2010-07-13 22:23:21.378, Address=x.x.x.x:8001, MachineId=47282, Location=site:xxx.com,machine:xxx,process:30552,member:xxx-S02, Role=server)
    2010-07-13 22:23:58.973/61.484 Oracle Coherence GE 3.5.3/465 <D5> (thread=Cluster, member=10): Member(Id=8, Timestamp=2010-07-13 22:23:58.973, Address=x.x.x.x:8001, MachineId=47282, Location=site:xxx.com,machine:xxx,process:30552,member:xxx-S02, Role=server) left Cluster with senior member 1
    2010-07-13 22:23:59.135/61.646 Oracle Coherence GE 3.5.3/465 <D5> (thread=Cluster, member=10): TcpRing: disconnected from member 8 due to the peer departure
    Note that there was almost nothing actually in the entire cluster-wide cache at this point -- maybe 10 MB of data at most.
    Any thoughts on how we could eliminate (or nearly eliminate) these pauses on shutdown?

    Increasing the number of threads associated with the distributed service does not seem to have a noticeable effect. I might try it in a larger-scale test, just to make sure, but initial indications are not positive.
    From the client side, the operations seem to hang behind the DistributedCache$BinaryMap.waitForPartitionRedistribution() method. The call stack is listed below.
    "main" prio=10 tid=0x09a75400 nid=0x6f02 in Object.wait() [0xb7452000]
    java.lang.Thread.State: TIMED_WAITING (on object monitor)
    at java.lang.Object.wait(Native Method)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.waitForPartitionRedistribution(DistributedCache.CDB:96)
    - locked <0x9765c938> (a com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap$Contention)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.waitForRedistribution(DistributedCache.CDB:10)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.ensureRequestTarget(DistributedCache.CDB:21)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.get(DistributedCache.CDB:16)
    at com.tangosol.util.ConverterCollections$ConverterMap.get(ConverterCollections.java:1547)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$ViewMap.get(DistributedCache.CDB:1)
    at com.tangosol.coherence.component.util.SafeNamedCache.get(SafeNamedCache.CDB:1)
    at com.ea.nova.coherence.lt.GetRandomTask.main(GetRandomTask.java:90)
    Any help appreciated!
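    If it helps to see exactly when the redistribution window opens and closes relative to those slow calls, one thing to try is registering a MemberListener on the distributed service and logging the membership events next to the operation timings. A rough diagnostic sketch is below; the cache name "hibL2-distributed" is taken from the log above, while the probe loop, threshold, and key range are purely illustrative:
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.MemberEvent;
    import com.tangosol.net.MemberListener;
    import com.tangosol.net.NamedCache;

    /**
     * Logs membership changes on the distributed cache service so that
     * client-side latency spikes can be correlated with node departures.
     */
    public class ServiceMembershipLogger {

        public static void main(String[] args) {
            NamedCache cache = CacheFactory.getCache("hibL2-distributed");

            cache.getCacheService().addMemberListener(new MemberListener() {
                public void memberJoined(MemberEvent evt)  { log("joined", evt); }
                public void memberLeaving(MemberEvent evt) { log("leaving", evt); } // graceful departure announced
                public void memberLeft(MemberEvent evt)    { log("left", evt); }    // redistribution starts around here

                private void log(String what, MemberEvent evt) {
                    System.out.println(System.currentTimeMillis()
                            + " member " + evt.getMember().getId() + " " + what);
                }
            });

            // Simple probe loop: any get() that blocks in
            // waitForPartitionRedistribution() shows up as a long gap here.
            while (true) {
                long start = System.currentTimeMillis();
                cache.get(Integer.valueOf((int) (Math.random() * 100000)));
                long elapsed = System.currentTimeMillis() - start;
                if (elapsed > 100) {
                    System.out.println("get() took " + elapsed + " ms");
                }
            }
        }
    }
    Running a probe like this while killing a storage node should show the gap starting at the memberLeft event for the distributed service and ending when the partition transfer completes, which would at least confirm that the whole pause is spent in redistribution rather than in membership/death detection.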

  • DFSr supported cluster configurations - replication between shared storage

    I have a very specific configuration for DFSr that appears to be suffering severe performance issues when hosted on a cluster, as part of a DFS replication group.
    My configuration:
    3 Physical machines (blades) within a physical quadrant.
    3 Physical machines (blades) hosted within a separate physical quadrant
    Both quadrants are extremely well connected, local, 10GBit/s fibre.
    There is local storage in each quadrant, no storage replication takes place.
    The 3 machines in the first quadrant are MS clustered with shared storage LUNs on a 3PAR filer.
    The 3 machines in the second quadrant are also clustered with shared storage, but on a separate 3PAR device.
    8 shared LUNs are presented to the cluster in the first quadrant, and an identical storage layout is connected in the second quadrant. Each LUN has an HAFS application associated with it which can fail over onto any machine in the local cluster.
    DFS replication groups have been set up for each LUN, and data is replicated from an "Active" cluster node entry point to a "Passive" cluster node that provides no entry point to the data via DFSn and holds a read-only copy on its shared cluster storage.
    For the sake of argument, assume that all HAFS application instances in the first quadrant are "Active" in a read/write configuration, and all "Passive" instances of the HAFS applications in the other quadrants are Read-Only.
    This guide: http://blogs.technet.com/b/filecab/archive/2009/06/29/deploying-dfs-replication-on-a-windows-failover-cluster-part-i.aspx defines how to add a clustered service to a replication group. It clearly shows using "shared storage" for the cluster, which is common sense; otherwise there is effectively no application failover possible, which removes the entire point of using a resilient cluster.
    This article: http://technet.microsoft.com/en-us/library/cc773238(v=ws.10).aspx#BKMK_061 defines the following:
    DFS Replication in Windows Server 2012 and Windows Server 2008 R2 includes the ability to add a failover cluster
    as a member of a replication group. The DFS Replication service on versions of Windows prior to Windows Server 2008 R2
    is not designed to coordinate with a failover cluster, and the service will not fail over to another node.
    It then goes on to state, quite incredibly: DFS Replication does not support replicating files on Cluster Shared Volumes.
    Stating quite simply that DFSr does not support Cluster Shared Volumes makes absolutely no sense at all after stating that clusters are supported in replication groups and providing a TechNet guide to set up and configure this configuration. What possible use is a clustered HAFS solution that has no shared storage between the clustered nodes - none at all.
    My question: I need some clarification. Is the text meant to read "between" Cluster Shared Volumes?
    The storage configuration must be shared in order to form a clustered service in the first place. What we are seeing from experience is a serious degradation of performance when attempting to replicate/write data between two clusters running an HAFS configuration in a DFS replication group.
    If, for instance, as a test, local/logical storage is mounted to a physical machine, the performance of a DFS replication group between the unshared, logical storage on the physical nodes approaches 15k small files per minute on initial write, and is even higher for file amendments. When replicating between two nodes in a cluster with shared clustered storage, the solution manages a weak 2,500 files per minute on initial write and only 260 files per minute when attempting to update data / amend files.
    By testing various configurations we have effectively ruled out the SAN, the storage, drivers, firmware, the DFSr configuration, and the replication group configuration - the only factor left that makes any difference is replicating from shared clustered storage to another shared clustered storage LUN.
    So in summary:
    Logical Volume ---> Logical Volume = Fast
    Logical Volume ---> Clustered Shared Volume = ??
    Clustered Shared Volume ---> Clustered Shared Volume = Pitifully slow
    Can anyone explain why this might be?
    The guidance in the article is in clear conflict with all other evidence provided around DFSr and clustering; however, it seems to lean towards why we may be seeing a real issue with replication performance.
    Many thanks for your time and any help/replies that may be received.
    Paul

    Hello Shaon Shan,
    I am also having the same scenario at one of my customer's sites.
    We have two file servers running on Hyper-V 2012 R2 as guest VMs using Cluster Shared Volumes. Even the data partition drive is part of a CSV.
    It's really confusing whether DFS replication on CSVs is supported or not, and what the consequences would be of using it.
    To my knowledge, we have some customers who are using Hyper-V 2008 R2 with DFS configured and running fine on CSVs for more than 4 years without any issue.
    I would appreciate it if you could please elaborate and explain in detail the limitations of using CSVs.
    Thanks in advance,
    Abul

  • Windows 2003 Standard Edition (Cluster Configuration Storage page)

    I am trying to install RAC R2 on Windows Server 2003 (Standard Edition). I am using a FireWire 800 SIIG adapter to connect to a Maxtor OneTouch III external HDD.
    When installing the cluster services, I do not see the cluster storage devices. When I go to Computer Management, I see all the partitions of the raw device.
    On the "Cluster Configuration Storage" page, the Available Disks list shows no partitions.
    The Oracle installation documentation says "On the Cluster Configuration Storage page, identify the disks that you want to use for the Oracle Clusterware files and, optionally, Oracle Cluster File System (OCFS) storage. Highlight each of these disks one at a time and click Edit to open the Specify Disk Configuration page where you define the details for the selected disk."
    In my case, I do not see any disks. What am I missing?
    Any thoughts? Please advise.
    Thanks
    -Prasad

    You have a more fundamental problem: FireWire disks will not work for RAC on Windows. The storage needs to be shared, and FireWire disks can't be shared on Windows. On Linux, Oracle took the open-source FireWire driver and modified it to allow more than one host to connect. On Windows the driver is closed source, so they can't do that.
    I presume you are wanting to try out RAC on Windows. If so, another solution may be to download one of the many iSCSI servers that are available. Microsoft ships an iSCSI Initiator for Windows; this allows you to share a 'block device', which is what RAC needs. Then you can choose your RAC database storage method of choice: ASM, OCFS, or RAW. I prefer ASM.
