Disks in Cluster
Hi,
I'm new to Sun Cluster and I have some doubts.
How does the cluster work with shared disks?
Can the shared disk be accessed by all nodes at the same time, or is it opened by one node at a time?
Do I need to use a volume manager like Veritas, or can I use a normal filesystem (UFS)?
Thanks
Paulo Vilhena
Hi Paulo,
Well, Sun Cluster takes care of this: you define a storage resource, and the cluster switches this resource group over and makes certain that only one of the nodes has access to these disks at any time.
Our usual configuration is two DCs, each with a node and a SAN. Both nodes have two SAN connections, one to each of the SANs in the two DCs; the disks are then mirrored across the two SANs with SLVM.
I have added the commands we use to set this up below. There are certainly other ways, but it should help as a starting point.
Assumptions:
- you are using SLVM (aka Disk Suite)
- the two disks have the DIDs 5 and 9 (as shown by scdidadm -l)
- the nodes are named mynodep and mynodez
- the resource group mysrvc_rg is defined
- the disks are added to this resource group
- the service name is mysrvc
You have to execute the following commands on one node only, except where stated otherwise.
Create the metadevices (the last metainit gives a warning which you can safely ignore, as we do not yet have any data on the disks):
metaset -s mysrvcds -a -h mynodep mynodez
metaset -s mysrvcds -a -m mynodep mynodez
metaset -s mysrvcds -a /dev/did/rdsk/d5 /dev/did/rdsk/d9
metainit -s mysrvcds d101 1 1 /dev/did/rdsk/d5s0
metainit -s mysrvcds d102 1 1 /dev/did/rdsk/d9s0
metainit -s mysrvcds d100 -m d101 d102
Create a file system:
newfs /dev/md/mysrvcds/rdsk/d100
Edit /etc/vfstab on __BOTH__ nodes and add the following line:
/dev/md/mysrvcds/dsk/d100 /dev/md/mysrvcds/rdsk/d100 /global/mysrvc ufs 2 yes global,logging
Create the mount point on __BOTH__ nodes:
mkdir /global/mysrvc
Mount the file system on the primary node:
mount /global/mysrvc
Register the HAStoragePlus data service:
scrgadm -a -t SUNW.HAStoragePlus
Create the storage resource:
scrgadm -a -j mysrvc_storage -g mysrvc_rg -t SUNW.HAStoragePlus -x FilesystemMountpoints=/global/mysrvc -x AffinityOn=true
Some output to show the resulting configuration:
root@mynodep:~ # metaset
Set name = mysrvcds, Set number = 1
Host Owner
mynodep Yes
mynodez
Mediator Host(s) Aliases
mynodep
mynodez
Drive Dbase
d5 Yes
d9 Yes
root@mynodep:~ # metastat -s mysrvcds
mysrvcds/d100: Mirror
Submirror 0: mysrvcds/d101
State: Okay
Submirror 1: mysrvcds/d102
State: Okay
Pass: 1
Read option: roundrobin (default)
Write option: parallel (default)
Size: 106583040 blocks (50 GB)
mysrvcds/d101: Submirror of mysrvcds/d100
State: Okay
Size: 106583040 blocks (50 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
d5s0 0 No Okay No
mysrvcds/d102: Submirror of mysrvcds/d100
State: Okay
Size: 106583040 blocks (50 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
d9s0 0 No Okay No
Device Relocation Information:
Device Reloc Device ID
d9 No -
d5 No -
root@mynodep:~ # scstat -D
-- Device Group Servers --
Device Group Primary Secondary
Device group servers: mysrvcds mynodep mynodez
-- Device Group Status --
Device Group Status
Device group status: mysrvcds Online
-- Multi-owner Device Groups --
Device Group Online Status
root@mynodep:~ #
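One step we did not show above: once the resource is registered, it is worth testing that the cluster really moves the disk set between the nodes. A sketch using the Sun Cluster 3.x commands (newer releases replace scswitch/scstat with the clrg/clrs commands):

```shell
# Move the resource group (and with it the disk set and the global mount)
# to the second node, check the status, then move it back.
scswitch -z -g mysrvc_rg -h mynodez
scstat -g
scswitch -z -g mysrvc_rg -h mynodep
```

While the group is on mynodez, the metaset output shown earlier should list mynodez as the owner.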
Similar Messages
-
Error when adding a disk to Cluster Shared Volumes
When adding a disk to Cluster Shared Volumes via Failover Cluster Manager, I get a couple of errors.
Event ID 5145 in System Log:
While adding the disk ('Cluster Disk 1') to Cluster Shared Volumes, setting explicit snapshot diff area association for volume ('\\?\Volume{420e2cc4-4fb4-41be-afb1-65f2ee62457a}\') failed with error 'HrError(0x8004230d)'. The only supported software snapshot
diff area association for Cluster Shared Volumes is to self.
Cluster disk resource 'Cluster Disk 1' failed to delete a software snapshot. The diff area on volume '\\?\Volume{420e2cc4-4fb4-41be-afb1-65f2ee62457a}\' could not be dissociated from volume '\\?\Volume{420e2cc4-4fb4-41be-afb1-65f2ee62457a}\'. This
may be caused by active snapshots. Cluster Shared Volumes requires that the software snapshot be located on the same disk.
Any ideas why I'm getting this error? This disk was previously added as a CSV to a different Windows failover cluster, if that matters. Thanks.
Hi,
As the disk was previously used as a CSV, I assume there is still data on it.
Please check whether any VSS snapshots were created on that disk. If so, delete them and re-add the disk as a CSV to see the result. A quick way, if you cannot confirm, is to back up the important files and reformat the disk.
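A minimal sketch of that check, from an elevated command prompt on the node that owns the disk; X: is a placeholder for the volume's drive letter, not a value from this thread:

```shell
rem List any shadow copies still present on the volume
vssadmin list shadows /for=X:
rem Remove them all, then try re-adding the disk as a CSV
vssadmin delete shadows /for=X: /all
```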
-
Getting only name of the physical disk from cluster group
hi experts,
I want to know the name of my cluster disk, for example `Cluster Disk 2`, available in a cluster group. Can I write a command something like the one below?
Get-ClusterGroup -Name FileServer1 | Get-ClusterResource | Where-Object { $_.ResourceType -eq "Physical Disk" }
Thanks
Sid
Sid
Thanks Chaib... it's working.
However I tried this
Get-ClusterGroup ClusterDemoRole | Get-ClusterResource | fl name | findstr "physical Disk"
which is also working.
sid -
Slow Disks After Cluster Creation
Hi,
I have a Windows Server 2008 R2 SP1 cluster, consisting of two VMware virtual servers, which is experiencing very slow disk performance (~10 MB/s when copying to or from any of the cluster disks). If I destroy the cluster and bring the disks online on a standalone server I can get over 200 MB/s. The disks are presented as RDMs, with the two nodes on separate ESX hosts and the disks in Physical Compatibility mode.
Could anyone explain why disk performance could become so poor as soon as the cluster is created? All the tests in the cluster validation report passed.
Not sure if this helps, but I have also noticed that when I turn maintenance on for each of the disks in Failover Cluster Manager, their performance improves dramatically.
Thanks
Hi,
I didn't find a similar issue; you'd better ask VMware for more assistance. Based on my experience, some third-party virtualization platforms often need the disk timeout value adjusted.
The third party article:
Inconsistent Windows virtual machine performance when disks are located on SAN datastores (1014)
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1014
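For what it's worth, the disk timeout adjustment usually meant here is the TimeOutValue registry setting; this is a sketch only, and the 60-second value is an assumption to be checked against VMware's guidance for your storage:

```shell
rem Raise the disk I/O timeout to 60 seconds (takes effect after a reboot)
reg add HKLM\SYSTEM\CurrentControlSet\Services\Disk /v TimeOutValue /t REG_DWORD /d 60 /f
```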
Hope this helps.
-
SQL 2008 R2 Cluster | change Cluster Disks
Hello,
We have a SQL cluster (for SharePoint 2010) consists of two nodes Windows server 2008 R2 & SQL 2008 r2.
The SQL cluster have MSDTC (Clustered) & SQL services, with total four disks:
Quorum Disk
MSDTC Disk
Databases disk
Logs disk.
Now the old SAN will be decommissioned, and new LUNs have been added to replace the disks above. I managed to change Quorum & MSDTC. I used the below robocopy command to copy databases and logs with the same folder structure and permissions:
robocopy t:\ l:\ /E /MIR /SEC /COPYALL /V
I stopped the SQL services, then swapped drive letters; when I started the SQL services they came up without problems (using the new disks).
But the issue is that when I connect to SQL Management Studio, all databases are in suspect mode. I know there is some SQL query to be run against each database, but this is a production environment and I don't want to mess with it.
Is there any other way to change cluster disks of SQL cluster? or use the above method without getting into suspect mode?
Thanks, Shehatovich
Hello, Shehatovich,
If you copied the files while they were still online, that might have caused the 'suspect' state of your databases.
I would suggest you follow these steps:
Add the new disks to the cluster, with validation on every single cluster node (then the cluster will be able to get the signature of the disks).
Stop the SQL Server service.
Copy the MDF and LDF files to the new disks,
using the detach and attach method. (http://msdn.microsoft.com/en-us/library/ms187858.aspx)
Afterwards, check that your databases are online and consistent.
Stop the SQL Server service again and remove the old disks.
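The detach/copy/attach steps above could be sketched with sqlcmd roughly as follows; the server name (SQLCLUSTER), database name (MyDatabase) and file paths are placeholders, not values from this thread:

```shell
rem Detach the database, copy its files to the new disk, then re-attach
sqlcmd -S SQLCLUSTER -Q "EXEC sp_detach_db 'MyDatabase';"
robocopy T:\Data L:\Data MyDatabase.mdf MyDatabase_log.ldf /COPYALL
sqlcmd -S SQLCLUSTER -Q "CREATE DATABASE MyDatabase ON (FILENAME='L:\Data\MyDatabase.mdf'), (FILENAME='L:\Data\MyDatabase_log.ldf') FOR ATTACH;"
```

Running DBCC CHECKDB against the re-attached database afterwards is a reasonable way to confirm consistency.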
Hope it may help you...
Regards,
Edvaldo Castro
Edvaldo Castro http://edvaldocastro.com MCITP Database Administrator on SQL Server® 2008 MCITP Database Administrator on SQL Server® 2005 MCTS: SQL Server® 2008 MCTS: SQL Server® 2005 MCT: Microsoft Certified Trainer MTA: Microsoft Technology Associate
MTAC – Microsoft Technical Audience Contributor CCSQLA – Cambridge Certified SQL Associate TOEIC – Test of English for International Communication -
Cluster Disk keeps going offline. Service or application failed
We currently have 2 servers, both housing 3 VMs (6 nodes total). 2 disks are used to house these VMs. Cluster Disk 1 keeps going offline, which kicks 2 of our servers offline until we migrate the disk to another node (putting it into redirected mode). We then have to restart the server to re-establish the connection to the cluster disk. Digging around in Event Viewer, I saw the following:
Cluster resource 'Cluster Disk 2' in clustered service or application 'b0a21897-e59b-46c9-a495-273dc5ad2aea' failed.
Cluster physical disk resource 'Cluster Disk 2' cannot be brought online because the associated disk could not be found. The expected signature of the disk was '{92eb716c-6878-42ad-b7de-f3879d72e232}'. If the disk was replaced or restored, in
the Failover Cluster Manager snap-in, you can use the Repair function (in the properties sheet for the disk) to repair the new or restored disk. If the disk will not be replaced, delete the associated disk resource.
The Cluster service failed to bring clustered service or application 'b0a21897-e59b-46c9-a495-273dc5ad2aea' completely online or offline. One or more resources may be in a failed state. This may impact the availability of the clustered service
or application.
I ran the cluster validation test and all seemed to check out except for Cluster Disk 1 and the two servers associated with this disk, as shown below:
Validating cluster resource Cluster Disk 2.
This resource is marked with a state of 'Offline'. The functionality that this resource provides is not available while it is in the offline state. The resource may be put in this state by an administrator or program. It may also be a newly created
resource which has not been put in the online state or the resource may be dependent on a resource that is not online. Resources can be brought online by choosing the 'Bring this resource online' action in Failover Cluster Manager.
This resource is marked with a state of 'Failed' instead of 'Online'. This failed state indicates that the resource had a problem either coming online or had a failure while it was online. The event logs and cluster logs may have information
that is helpful in identifying the cause of the failure.
All disks are housed on our SAN. Any idea where I can start troubleshooting further? Changing the signature of the disk, or finding out which application is failing? All help is greatly appreciated.
Please, help is greatly appreciated. I have been trying to find a solution for almost a week. I have a feeling this has to do with the disk. We have 3 CSVs but the only ones in question are CSV 2 & 3. Cluster Disk 2 is the disk that keeps going offline. Cluster Disk 3 has no problems at all. They are both GPT volumes; however, after running "cluster res", I received the following output:
Listing private properties for 'Cluster Disk 2':
T Resource Name Value
D Cluster Disk 2 DiskIdType 1 (0x1)
D Cluster Disk 2 DiskSignature 0 (0x0)
S Cluster Disk 2 DiskIdGuid {92eb716c-6878-42ad-b7de-f3879d72e232}
===============================================================
Listing private properties for 'Cluster Disk 3':
T Resource Name Value
D Cluster Disk 3 DiskIdType 1 (0x1)
D Cluster Disk 3 DiskSignature 994692308 (0x3b49ccd4)
S Cluster Disk 3 DiskIdGuid {4f27bc43-6964-466e-a77d-2321b2b0b718}
=============================================================
As shown, Cluster Disk 3 has both a disk signature and a DiskIdGuid, while Cluster Disk 2 has a DiskIdGuid but NOT a signature. It may be beneficial to add that Cluster Disk 3 used to be an MBR disk. Not sure if that makes any difference given the output above...
Performing DISKPART on the same disks shows the following (don't mind the naming scheme, I know it can be a bit confusing):
Disk 1 (Cluster Disk 2):
HP P2000G3 FC/iSCSI Multi-Path Disk Device
Disk ID: {92EB716C-6878-42AD-B7DE-F3879D72E232}
Disk 3 (Cluster Disk 3):
HP P2000G3 FC/iSCSI Multi-Path Disk Device
Disk ID: {4F27BC43-6964-466E-A77D-2321B2B0B718}
================================================================
Comparing the two, the DiskID matches the one reported by the "cluster res" command.
The error in my initial post states a signature was expected. If I am not mistaken, Windows 2008 uses "DiskSignature". The lack of this signature being populated for Cluster Disk 2 was resulting in the resources failing to come online. So how can I go about fixing this signature issue? Again, all help is appreciated since I am fairly new to all of this. -
SCOM 2012 event ID 10801: cluster disks not discovered.
Hello.
I have errors in Operations Manager log:
Discovery data couldn't be inserted to the database. This could have happened because of one of the following reasons:
- Discovery data is stale. The discovery data is generated by an MP recently deleted.
- Database connectivity problems or database running out of space.
- Discovery data received is not valid.
The following details should help to further diagnose:
DiscoveryId: 5a84ee62-20c2-46a2-10b9-3dedaff65df6
HealthServiceId: 3aeaca7c-48de-c0fc-0441-ffd5ef7aa7c3
Microsoft.EnterpriseManagement.Common.DiscoveryDataInvalidRelationshipTargetException,The relationship target specified in the discovery data item is not valid.
Relationship target ID: 2478193e-1a5f-4087-1b5f-95459123321e
Rule ID: 5a84ee62-20c2-46a2-10b9-3dedaff65df6
Instance:
<?xml version="1.0" encoding="utf-16"?><RelationshipInstance TypeId="{acfe2f40-0a73-6764-21a5-bf59c41b2844}" SourceTypeId="{00000000-0000-0000-0000-000000000000}" TargetTypeId="{00000000-0000-0000-0000-000000000000}"><Settings
/><SourceRole><Settings><Setting><Name>5c324096-d928-76db-e9e7-e629dcc261b1</Name><Value>SQL-01</Value></Setting><Setting><Name>af13c36e-9197-95f7-393c-84aa6638fec9</Name><Value>\\.\PHYSICALDRIVE18</Value></Setting></Settings></SourceRole><TargetRole><Settings><Setting><Name>5c324096-d928-76db-e9e7-e629dcc261b1</Name><Value>PDC-S-SQL-01.sibgenco.local</Value></Setting><Setting><Name>af13c36e-9197-95f7-393c-84aa6638fec9</Name><Value>Disk
#18, Partition #0</Value></Setting></Settings></TargetRole></RelationshipInstance>.
SQL-01 is the server with the cluster disks, and the cluster disks are not discovered.
Hi,
Hope the below articles can be helpful:
Cluster resource groups are not monitored! Is there anything I can do?
http://blogs.msdn.com/b/mariussutara/archive/2008/05/03/cluster-resource-groups-are-not-monitored-is-there-anything-i-can-do.aspx
Event ID 10801 and 33333 in Operations Manager log
http://www.itbl0b.com/2014/02/event-id-10801-33333-operations-manager-log.html#.U-QwunmKBes
Please note: since the site is not hosted by Microsoft, the link may change without notice. Microsoft does not guarantee the accuracy of this information.
Regards, Yan Li -
Creating a cluster: trying to remove disks.
Hi
I am going through the pain of trying to create a whole cloud environment using scvmm2012r2 so that it can then link in to the whole windows azure pack.
I have created what I think is the whole fabric which has been very very painful.
My question when trying to make all that setup work is the following.
When I try to create a Hyper-V cluster using SCVMM I step through the wizard. The first issue I have is:
I have defined a logical switch which has three vNICs: 1. host management, 2. live migration, 3. cluster workload.
When it comes to giving an IP it only allows me to select from the host management pool. I would have thought it should have been the cluster or live migration networks.
The 2nd question is about when it gets to the disks to cluster. The servers are connected to a fibre channel switch, so they can see disks that are attached to other clusters. In the interface I cannot untick those I don't want as part of this cluster. What have I done wrong?
In the image I have highlighted the disks that I want to remove, but I cannot.
If I do this normally through Failover Cluster Manager then it is no issue.
Hi.
First things first: when you create a Hyper-V cluster, you also create a Cluster Name Object in Active Directory. This is the access point of the cluster, and it requires an IP address. That IP address is on the management network, where clients can access the cluster name object (so that VMM, as a client, can access and manage the cluster). This is by design. You would leave the cluster network and live migration network out of this, as they are for internal cluster communication and live migration traffic.
Second, a cluster should only see the disks it will be using, and not disks used by other clusters. You must check your zoning here, as this is not best practice. A CSV can't be shared across several clusters.
I would suggest that you clean that up, and if you don't want VMM to do this for you, you can create the cluster in Failover Cluster Manager, add the required disks, and then update the hosts in VMM so that they reflect the cluster you have created.
-kn
Kristian (Virtualization and some coffee: http://kristiannese.blogspot.com ) -
Disk Remove from 2008R2 Failover Cluster
I have a two node Windows 2008 R2 Failover Cluster with only the File Services Role. There are multiple disks on this server, presented by an EMC storage array via FC. I would like to remove one disk that is no longer in use. The disk is currently in available
storage in the cluster and is in an offline state. The dependency report for the disk shows no required dependencies, no 'AND' relationship, no child resources, and no 'OR' relationship.
When I attempt to remove the disk from available storage I receive the warning "This resource has other resources which depend on it. Deleting it will cause those resource to go offline. Are you sure you want to delete Cluster Disk #?"
How can I determine what resources are dependent upon this disk when the report shows no dependencies? And, since the disk is in an offline state and in available storage, if there are dependencies how are they not in a failed or offline state as well?
Hi,
If the disk resource has been offline for some time and you are sure it's no longer in use, you may remove the disk resource from the cluster. You may try the commands below:
Launch command prompt with elevated privilege.
cluster resource
>> Display the status of a cluster resource
cluster resource “resource name” /listdep
>> list the dependencies for a resource
cluster resource “resource name” /fail
>> initiate resource failure
cluster resource “resource name” /delete
>> delete a resource
Try the above commands and give us feedback for further troubleshooting.
For more information please refer to following MS articles:
Cluster resource
http://technet.microsoft.com/en-us/library/cc785087(v=WS.10).aspx
MS Cluster Server Troubleshooting and Maintenance
http://technet.microsoft.com/en-us/library/cc723248.aspx
Adding Disk to Cluster
http://blogs.msdn.com/b/clustering/archive/2008/03/19/8324538.aspx
Hope this helps!
Lawrence
TechNet Community Support -
No shared disks visible in the Cluster Configuration Storage dialog
When installing the Oracle 10g clusterware the "Cluster Configuration Storage" dialog shows no shared disks.
We are using:
Windows 2003 Server
HP EVA 4400
Hello,
All disks in the cluster are visible from both nodes.
We tested it with unpartitioned and partitioned disks (primary and extended). No way to make them visible to the OUI.
Automount is enabled in Windows, as required by Oracle.
Besides, we are using Standard Edition, therefore we have to work with ASM.
Let me know if any more information is needed.
Thanks in advance. -
Hi!
The setup process fails with this error:
Configuration error code:
0x1C2074D8@1216@1
Configuration error description: There was an error setting private property 'VirtualServerName' to value 'CLUSTER02' for resource 'SQL Server'. Error: Value does not fall within the expected range.
I have found some hints via Google, but nothing really helpful.
Has anyone had a similar problem when installing SQL Server 2008 R2?
All posts I found are about sql server 2008 (no R2!).
The cluster itself is working (storage, network, msdtc, quorum...).
Any hints?
Andreas
Here is the complete log:
Overall summary:
Final result: Failed: see details below
Exit code (Decimal): -2067791871
Exit facility code: 1216
Exit error code: 1
Exit message: Failed: see details below
Start time: 2012-04-06 11:23:57
End time: 2012-04-06 12:01:21
Requested action: InstallFailoverCluster
Log with failure: C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20120406_112205\Detail.txt
Exception help link: http%3a%2f%2fgo.microsoft.com%2ffwlink%3fLinkId%3d20476%26ProdName%3dMicrosoft%2bSQL%2bServer%26EvtSrc%3dsetup.rll%26EvtID%3d50000%26ProdVer%3d10.50.2500.0%26EvtType%3d0x625969A3%400x294A9FD9
Cluster properties:
Machine name: OC-SQLCL02ND01
Product Instance Instance ID
Feature Language
Edition Version Clustered
Machine name: OC-SQLCL02ND02
Product Instance Instance ID
Feature Language
Edition Version Clustered
Machine Properties:
Machine name: OC-SQLCL02ND01
Machine processor count: 32
OS version: Windows Server 2008 R2
OS service pack: Service Pack 1
OS region: United States
OS language: English (United States)
OS architecture: x64
Process architecture: 64 Bit
OS clustered: Yes
Product features discovered:
Product Instance Instance ID
Feature Language
Edition Version Clustered
Package properties:
Description: SQL Server Database Services 2008 R2
ProductName: SQL Server 2008 R2
Type: RTM
Version: 10
Installation location: G:\x64\setup\
Installation edition: STANDARD
Slipstream: True
SP Level 1
User Input Settings:
ACTION: InstallFailoverCluster
AGTDOMAINGROUP: <empty>
AGTSVCACCOUNT: MANAGEMENT\sqladmin
AGTSVCPASSWORD: *****
ASBACKUPDIR: S:\OLAP\Backup
ASCOLLATION: Latin1_General_CI_AS
ASCONFIGDIR: S:\OLAP\Config
ASDATADIR: S:\OLAP\Data
ASDOMAINGROUP: <empty>
ASLOGDIR: S:\OLAP\Log
ASPROVIDERMSOLAP: 1
ASSVCACCOUNT: MANAGEMENT\sqladmin
ASSVCPASSWORD: *****
ASSVCSTARTUPTYPE: Automatic
ASSYSADMINACCOUNTS: MANAGEMENT\administrator
ASTEMPDIR: S:\OLAP\Temp
CONFIGURATIONFILE: C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20120406_112205\ConfigurationFile.ini
CUSOURCE:
ENU: True
ERRORREPORTING: False
FAILOVERCLUSTERDISKS: Cluster Disk 3,Cluster Disk 4,Cluster Disk 5
FAILOVERCLUSTERGROUP: SQL Server (MSSQLSERVER)
FAILOVERCLUSTERIPADDRESSES: IPv4;172.29.2.122;Cluster Network 2;255.255.255.0,IPv4;172.29.3.122;Cluster Network 3;255.255.255.0
FAILOVERCLUSTERNETWORKNAME: CLUSTER02
FARMACCOUNT: <empty>
FARMADMINPORT: 0
FARMPASSWORD: *****
FEATURES: SQLENGINE,REPLICATION,FULLTEXT,AS,RS,BIDS,CONN,IS,BC,SSMS,ADV_SSMS
FILESTREAMLEVEL: 0
FILESTREAMSHARENAME: <empty>
FTSVCACCOUNT: NT AUTHORITY\LOCAL SERVICE
FTSVCPASSWORD: *****
HELP: False
INDICATEPROGRESS: False
INSTALLSHAREDDIR: C:\Program Files\Microsoft SQL Server\
INSTALLSHAREDWOWDIR: C:\Program Files (x86)\Microsoft SQL Server\
INSTALLSQLDATADIR: S:\
INSTANCEDIR: C:\Program Files\Microsoft SQL Server\
INSTANCEID: MSSQLSERVER
INSTANCENAME: MSSQLSERVER
ISSVCACCOUNT: NT AUTHORITY\SYSTEM
ISSVCPASSWORD: *****
ISSVCSTARTUPTYPE: Automatic
PASSPHRASE: *****
PCUSOURCE: d:\install\mssql\sp1
PID: *****
QUIET: False
QUIETSIMPLE: False
RSINSTALLMODE: FilesOnlyMode
RSSVCACCOUNT: MANAGEMENT\sqladmin
RSSVCPASSWORD: *****
RSSVCSTARTUPTYPE: Automatic
SAPWD: *****
SECURITYMODE: SQL
SQLBACKUPDIR: <empty>
SQLCOLLATION: SQL_Latin1_General_CP1_CI_AS
SQLDOMAINGROUP: <empty>
SQLSVCACCOUNT: MANAGEMENT\sqladmin
SQLSVCPASSWORD: *****
SQLSYSADMINACCOUNTS: MANAGEMENT\administrator
SQLTEMPDBDIR: <empty>
SQLTEMPDBLOGDIR: L:\MSSQL10_50.MSSQLSERVER\MSSQL\Data
SQLUSERDBDIR: T:\MSSQL10_50.MSSQLSERVER\MSSQL\Data
SQLUSERDBLOGDIR: L:\MSSQL10_50.MSSQLSERVER\MSSQL\Data
SQMREPORTING: False
UIMODE: Normal
X86: False
Configuration file: C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20120406_112205\ConfigurationFile.ini
Detailed results:
Feature: Database Engine Services
Status: Failed: see logs for details
MSI status: Passed
Configuration status: Failed: see details below
Configuration error code:
0x1C2074D8@1216@1
Configuration error description: There was an error setting private property 'VirtualServerName' to value 'CLUSTER02' for resource 'SQL Server'. Error: Value does not fall within the expected range.
Configuration log: C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20120406_112205\Detail.txt
Feature: SQL Server Replication
Status: Failed: see logs for details
MSI status: Passed
Configuration status: Failed: see details below
Configuration error code:
0x1C2074D8@1216@1
Configuration error description: There was an error setting private property 'VirtualServerName' to value 'CLUSTER02' for resource 'SQL Server'. Error: Value does not fall within the expected range.
Configuration log: C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20120406_112205\Detail.txt
Feature: Full-Text Search
Status: Failed: see logs for details
MSI status: Passed
Configuration status: Failed: see details below
Configuration error code:
0x1C2074D8@1216@1
Configuration error description: There was an error setting private property 'VirtualServerName' to value 'CLUSTER02' for resource 'SQL Server'. Error: Value does not fall within the expected range.
Configuration log: C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20120406_112205\Detail.txt
Feature: Analysis Services
Status: Passed
MSI status: Passed
Configuration status: Passed
Feature: Reporting Services
Status: Passed
MSI status: Passed
Configuration status: Passed
Feature: Integration Services
Status: Passed
MSI status: Passed
Configuration status: Passed
Feature: Client Tools Connectivity
Status: Passed
MSI status: Passed
Configuration status: Passed
Feature: Management Tools - Complete
Status: Passed
MSI status: Passed
Configuration status: Passed
Feature: Management Tools - Basic
Status: Passed
MSI status: Passed
Configuration status: Passed
Feature: Client Tools Backwards Compatibility
Status: Passed
MSI status: Passed
Configuration status: Passed
Feature: Business Intelligence Development Studio
Status: Passed
MSI status: Passed
Configuration status: Passed
Rules with failures:
Global rules:
There are no scenario-specific rules.
Rules report file: C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20120406_112205\SystemConfigurationCheck_Report.htm
Hi Andreas Plachy,
Please make sure that the Virtual Server Name ‘CLUSTER02’ is unique on the network. In addition, are there any resources named ‘SQL Server’ on the Windows cluster? If that is the case, you may need to rename the related resources to avoid conflicting with
SQL Server, and try again.
Stephanie Lv
TechNet Community Support -
Mount Points in Windows 2012 R2 Cluster not displaying correctly
Hi,
Try as I might, I can't get mount points displayed properly in a Windows 2012 R2 cluster.
In Windows 2008 R2, I added them, and they appear under Storage, as for Example, Cluster Disk 7: Mounted Volume (S:\SYSDB). (I may have had to bring them offline/online).
In Windows 2012 R2, they are showing up as, for example, '\\?\Volume{7c636157-e7e9-11e4-80dc0005056873123}'
In the error log it shows up as :
Cluster disk resource 'Cluster Disk 7' contains an invalid mount point. Both the source and target disks associated with the mount point must be clustered disks, and must be members of the same group.
Mount point 'SYSDB\' for volume '\\?\Volume{7c636106-e7e9-11e4-80dc-005056873123}\' references an invalid target disk. Please ensure that the target disk is also a clustered disk and in the same group as the source disk (hosting the mount point).
Now I've checked the error, and in
https://technet.microsoft.com/en-au/library/dd353925(v=ws.10).aspx it says
"The mounted disk and the disk it is mounted onto must be part of the same clustered service or application. They cannot be in two different clustered services or applications, and they cannot be in the general pool of Available Storage in the cluster."
So I have created a 'Other Server' Role. When I go right click on the Role and go to 'Add Storage', Cluster Disk 6 (the root volume) displays S:\, and Cluster Disk 9 (hosting the mountpoint) says Mount Point(s): (S:\SYSDB). I select both, and add, but
alas, the Mount Point still shows up as '\\?\Volume{7c636106-e7e9-11e4-80dc-005056873123}\ (not S:\SYSDB or Cluster Disk 6: Mounted Volume (S:\SYSDB).) as I would expect.
They are both clustered disks (iSCSI). I would expect that when it says in the "same group", both being added to the same role would put them in the same group.
Hi,
Thank you for your response. That's (sort of) good to know, but it seems a step backwards from Windows 2008 R2, where you would actually have the meaningful Mounted Volume: (S:\SYSDB) displayed, rather than the meaningless '\\?\Volume{7c636157-e7e9-11e4-80dc0005056873123}' GUID. Obviously before you do anything, you need to cross-reference the disk number in 'Disk Management'; it would be better if it were displayed correctly in Failover Cluster Manager in the first place.
Secondly, the GUID is somewhat misleading. In Windows 2008 R2, for example, it appears as though the same GUID was displayed on each node (e.g. using Mountvol.exe). In Windows 2012 R2, it appears as though different GUIDs are displayed on each node, e.g.
Node 1.
\\?\Volume{7c6368a4-e7e9-11e4-80dc-005056873123}\
S:\SYSDB\
Node 2.
\\?\Volume{97cc0d34-e7e9-11e4-80db-0050568724c4}\
S:\SYSDB\
But the GUID in Failover cluster manager remains the same (you can't really cross reference with what you see in FCM to Mountvol).
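One way to do that cross-referencing per node, sketched here with mountvol (the S:\SYSDB path is the mount point from this thread):

```shell
rem List all volume GUIDs and where they are mounted on this node
mountvol
rem Show the \\?\Volume{...}\ name backing a specific mount point
mountvol S:\SYSDB\ /L
```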
Strangely enough, when I check the registry under 'MountedDevices' on Node 1, both of the GUIDs are displayed (even though only one is displayed in MountVol.exe), referencing the same Disk ID listed in Diskpart.exe. I can see this mentioned in
https://support.microsoft.com/en-us/kb/959573, where it says:
A volume can have multiple unique volume names (and thus multiple GUIDs) when it is used by multiple running installations of Windows. This could happen in the following scenarios and in similar scenarios where multiple installations of Windows have
accessed the volume:
Using a volume on a shared disk in a cluster where multiple nodes have accessed the volume.
Oh well, that's progress I guess. -
Cluster Shared Volume disappeared after taking the volume offline for Validation Tests.
Hi,
After an unknown issue with one of our Hyper-V 4-node clusters running on Server 2008 R2 SP1 with fibre channel NEC D3-10 SAN storage, all our cluster shared volumes were in redirecting mode and I was unable to get them back online. Only after rebooting all the nodes one by one did the disks come back online. Event log messages indicated that I had to re-run cluster validation. After shutting down all the virtual machines I set all the cluster shared volumes offline and started the complete validation test. The following warnings/errors appeared during the test:
An error occurred while executing the test.
An error occurred retrieving the
disk information for the resource 'VSC2_DATA_H'.
Element not found (Validate Volume Consistency Test)
Cluster disk 4 is a Microsoft MPIO based disk
Cluster disk 4 from node has 4 usable path(s) to storage target
Cluster disk 4 from node has 4 usable path(s) to storage target
Cluster disk 4 is not managed by Microsoft MPIO from node
Cluster disk 4 is not managed by Microsoft MPIO from node (Validate Microsoft MPIO-based disks test)
SCSI page 83h VPD descriptors for cluster disk 4 and 5 match (Validate SCSI device Vital Product Data (VPD) test)
After the test, the cluster shared volume had disappeared (although the resource shows as online).
Cluster events that are logged:
Cluster physical disk resource 'DATA_H' cannot be brought online because the associated disk could not be found. The expected signature of the disk was '{d6e6a1e0-161e-4fe2-9ca0-998dc89a6f25}'. If the disk was replaced or restored, in the Failover Cluster
Manager snap-in, you can use the Repair function (in the properties sheet for the disk) to repair the new or restored disk. If the disk will not be replaced, delete the associated disk resource. (Event 1034)
Cluster disk resource found the disk identifier to be stale. This may be expected if a restore operation was just performed or if this cluster uses replicated storage. The DiskSignature or DiskUniqueIds property for the disk resource has been corrected.
(Event 1568)
In Disk Management the disk shows as unallocated, unknown, reserved. When the resource is on one node and I open Disk Management, I get a warning that I have to initialize the disk. I have not done this yet.
Reading other posts, I think the partition table got corrupted, but I have no idea how to get it back. I found the following information, but it's not enough for me to go ahead with: using a tool like TestDisk to rewrite the partition table, then rewriting the unique ID to the disk, brought everything back. But there is still no explanation as to why our "High Availability" failover cluster was down for nearly 2 days. This has happened to us twice within the past week.
Does anybody have an idea how to solve this? I think my data is still intact.
Thanx for taking the time to read this.
DJITS.
Hi,
The error information you provided indicates a disk connection failure issue; please confirm that shared disk 4 is available.
To review hardware, connections, and configuration of a disk in cluster storage:
On each node in the cluster, open Disk Management (which is in Server Manager under Storage) and see if the disk is visible from one of the nodes (it should be visible from one node but not multiple nodes). If it is visible to
a node, continue to the next step. If it is not visible from any node, still in Disk Management on a node, right-click any volume, click Properties, and then click the Hardware tab. Click the listed disks or LUNs to see if all expected disks or LUNs appear.
If they do not, check cables, multi-path software, and the storage device, and correct any issues that are preventing one or more disks or LUNs from appearing. If this corrects the overall problem, skip all the remaining steps and procedures.
Review the event log for any events that indicate problems with the disk. If an event provides information about the disk signature expected by the cluster, save this information and skip to the last step in this procedure.
To open the failover cluster snap-in, click Start, click Administrative Tools, and then click Failover Cluster Management. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and
then click Continue.
In the Failover Cluster Management snap-in, if the cluster you want to manage is not displayed, in the console tree, right-click Failover Cluster Management, click Manage a Cluster, and then select or specify the cluster that
you want.
If the console tree is collapsed, expand the tree under the cluster you want to manage, and then click Storage.
In the center pane, find the disk resource whose configuration you want to check, and record the exact name of the resource for use in a later step.
Click Start, point to All Programs, click Accessories, right-click Command Prompt, and then click Run as administrator.
Type:
CLUSTER RESOURCE DiskResourceName /PRIV >path\filename.TXT
For DiskResourceName, type the name of the disk resource, and for path\filename, type a path and a new filename of your choosing.
Locate the file you created in the previous step and open it. For a master boot record (MBR) disk, look in the file for DiskSignature. For a GPT disk, look in the file for DiskIdGuid.
Use the software for your storage to determine whether the signature of the disk matches either the DiskSignature or DiskIdGuid for the disk resource. If it does not, use the following procedure to repair the disk configuration.
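Since the /PRIV dump produced above is plain text, the DiskSignature/DiskIdGuid lookup in the last two steps can be scripted. A rough sketch, assuming a dump laid out along the lines below (the property names are from the procedure above; the sample layout and signature value are illustrative, and the GUID is the one quoted in Event 1034 earlier in this thread):

```python
import re

# A fragment of what `CLUSTER RESOURCE <name> /PRIV` prints; the exact
# layout varies by Windows version, so treat this sample as illustrative.
dump = """
Listing private properties for 'DATA_H':
T  Resource             Name                 Value
-- -------------------- -------------------- ------------------
D  DATA_H               DiskSignature        0x12AB34CD
S  DATA_H               DiskIdGuid           {d6e6a1e0-161e-4fe2-9ca0-998dc89a6f25}
"""

def disk_identity(text):
    """Pull DiskSignature (MBR) and DiskIdGuid (GPT) out of a /PRIV dump."""
    ident = {}
    for prop in ("DiskSignature", "DiskIdGuid"):
        m = re.search(prop + r"\s+(\S+)", text)
        if m:
            ident[prop] = m.group(1)
    return ident

print(disk_identity(dump))
# e.g. {'DiskSignature': '0x12AB34CD',
#       'DiskIdGuid': '{d6e6a1e0-161e-4fe2-9ca0-998dc89a6f25}'}
```

The extracted value is what you would then compare against the signature reported by your storage software.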
For more information, please refer to the following MS article:
Event ID 1034 — Cluster Storage Functionality
http://technet.microsoft.com/en-us/library/cc756229(v=WS.10).aspx
Hope this helps!
Lawrence
TechNet Community Support -
Cluster 3.0 fails to boot
This is a two node cluster consisting of:
2x E250 with 512 MB RAM
both connected to
D1000 with 12x 18 GB IBM HDDs
After I got over the initial configuration problem (see "Sun Cluster 3.0 update 1 on Solaris 8 - panics!" in a previous topic), I've run into another (why me?).
Post-configuration of the quorum device and deactivation of installmode panics the node on which I configured the quorum disk with a reservation conflict. The same thing happens on reboot. Halting node 1 and booting the other node doesn't help, as it can't gain control of the quorum disk.
on node 1:
Sep 19 21:23:09 cluster2 cl_runtime: NOTICE: clcomm: Path cluster2:qfe2 - cluster1:qfe2 online
Sep 19 21:23:09 cluster2 cl_runtime: NOTICE: clcomm: Path cluster2:qfe3 - cluster1:qfe3 online
Sep 19 21:23:14 cluster2 cl_runtime: NOTICE: CMM: Node cluster1 (nodeid: 2, incarnation #: 1000957973) has become reachable.
panic[cpu0]/thread=2a100045d40: Reservation Conflict
after node 1 is halted, on node 2:
ASC: 0x29 (<vendor unique code 0x29>), ASCQ: 0x2, FRU: 0x0
NOTICE: CMM: Quorum device 1(gdevname /dev/did/rdsk/d6s2) can not be acquired by the current cluster members. This quorum device is held by node 1.
Is this the famous SCSI 3 reservation bug^H^H^Hfeature that I've been told about? Anyone with a similar experience? Thanks,
-chris.
Use the following procedure in the event that it becomes necessary to remove a SCSI-3 PGR:
1. Log in as root on one of the nodes that currently has access to the disk.
2. Determine the DID name of the disk:
scdidadm -L
3. Verify that there is a SCSI-3 PGR on the disk:
/usr/cluster/lib/sc/scsi -c inkeys -d /dev/did/rdsk/dXs2
4. Scrub the reservation from the disk:
/usr/cluster/lib/sc/scsi -c scrub -d /dev/did/rdsk/dXs2
5. Verify that the reservation has been scrubbed:
/usr/cluster/lib/sc/scsi -c inkeys -d /dev/did/rdsk/dXs2 -
Cluster Web Dispatcher together with CI in MSCS?
I need to provide HA for the Web Dispatcher by clustering it in MSCS. My plan is to share the MSCS of the CI it is connecting to. The CI (ASCS and SCS) is already installed in the cluster with all the proper shared resources (disk, global share, services, IP, netname, etc.). My question is whether I should or need to create a separate cluster group for the WD, or whether it would be easier/better to install it on the same shared disk as the CI.
I have been referring to SAP Note 834184 regarding the manual install of the WD in MSCS and have run into an issue because of the global share of the CI. Currently the WD is installed on its own shared disk (with directory structure), with its own netname and IP. When starting, it tries to access the SAPMNT share of its host, which succeeds (as the hostnames and IP are really just virtual resources of the cluster), but the directory path it tries to access on the share is not correct, because the share is really for the shared disk of the CI and not that of the WD.
Does anyone have a recommended method of clustering the WD in an existing SAP MSCS cluster? It seems that the Note referenced above was not written with this in mind. However, due to the very low resource needs of the WD, I think it would be fairly common to cluster it with an existing installation or the msg server it is serving. My next thought is to install the WD on the same shared disk and cluster group as the CI and allow it to fail over with the CI so it will access the proper sapmnt file share, but I'm not sure if this is the recommended or best practice for this type of scenario.
Thank you for any input on this; I would be happy to provide any further details of my configuration.
thanks,
John
John,
First, let's clarify the confusion: the CI is not where your ASCS and SCS are installed.
"The system consists of two instances: a Java central instance with SDM, and the Central Services instance"
http://help.sap.com/saphelp_nw04s/helpdata/en/43/1af03eae11e16be10000000a114084/frameset.htm
(A)SCS - "(ABAP) SAP Central Services", consists of Message Server and Enqueue Server, resides in a cluster group of MSCS
CI - "Central Instance" - Dispatcher, Server Processes, and SDM. Not part of a cluster group; resides on the C: drive (in your case).
Now to the point:
It is absolutely OK to install your Web Dispatcher on the cluster group of your (A)SCS.
You can use shared drive as a location for the Web Dispatcher and PFL file.
In this case you do not need to do step "6. Register the SAP Events DLL in registry" from note 834184.
During the failover, Web Dispatcher comes up properly on the target node.
Let me know if you need step by step instructions.
I can send you screenshots for the install.
Regards,
Slava