Failover cluster storage pool cannot be added

Hi.
Environment: Windows Server 2012 R2 with Update.
Storage: Dell MD3600F
I created a LUN with 5 GB of space and mapped it to both nodes of this cluster. It can be seen on both sides in Disk Management. I initialized it as a GPT-based disk without any partition.
The New Storage Pool wizard completes when this disk is selected, with no error message and no event log entries.
But after that, the pool is not visible under Pools, and the LUN disappears from Disk Management. The LUN does not reappear even after rescanning.
This can be reproduced consistently.
In the same environment, many other LUNs work well under Storage > Disks. The failure only occurs when the disk is used as a pool.
What's wrong here?
Thanks.

Hi EternalSnow,
Please refer to the following article on creating a clustered storage pool:
http://blogs.msdn.com/b/clustering/archive/2012/06/02/10314262.aspx
If you need any further information, please feel free to let us know.
Best Regards,
Elton Ji
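For reference, the clustered-pool setup that the linked article walks through can also be sketched in PowerShell on 2012/2012 R2. Note that, per that article's requirements, a clustered storage pool needs a minimum of three physical disks and they must be SAS-connected, so a single 5 GB FC LUN from an MD3600F would not qualify. The following is a hedged sketch; the pool name is illustrative:

```powershell
# Disks must be eligible for pooling by the clustered Storage Spaces
# subsystem: at least three, SAS-connected, and not partitioned.
Get-StorageSubSystem -FriendlyName "Clustered*" |
    Get-PhysicalDisk |
    Select-Object FriendlyName, BusType, CanPool, CannotPoolReason

# Create the pool from all poolable disks (pool name is illustrative).
$disks = Get-StorageSubSystem -FriendlyName "Clustered*" |
    Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "ClusterPool1" `
    -StorageSubSystemFriendlyName "Clustered*" `
    -PhysicalDisks $disks
```

If CanPool reports False for the FC LUN, the CannotPoolReason column should say why the wizard's result silently disappears.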

Similar Messages

  • Reporting Services as a generic service in a failover cluster group?

There is some confusion about whether Microsoft supports a Reporting Services deployment on a failover cluster using scale-out, with the Reporting Services service added as a generic service in a cluster group to achieve active-passive high availability.
    A deployment like this is described by Lukasz Pawlowski (Program Manager on the Reporting Services team) in this blog article
    http://blogs.msdn.com/b/lukaszp/archive/2009/10/28/high-availability-frequently-asked-questions-about-failover-clustering-and-reporting-services.aspx. There it is stated that it can be done, and what needs to be considered when doing such a deployment.
    This article (http://technet.microsoft.com/en-us/library/bb630402.aspx) on the other hand states: "Failover clustering is supported only for the report server database; you
    cannot run the Report Server service as part of a failover cluster."
    This is somewhat confusing to me. Can I expect to receive support from Microsoft for a setup like this?
    Best Regards,
    Peter Wretmo

    Hi Peter,
    Thanks for your posting.
    As Lukasz said in the
    blog, failover clustering with SSRS is possible. However, during the failover there is some time during which users will receive errors when accessing SSRS since the network names will resolve to a computer where the SSRS service is in the process of starting.
Besides, there are several considerations and manual steps involved on your part before configuring failover clustering with the SSRS service:
    Impact on other applications that share the SQL Server. One common idea is to put SSRS in the same cluster group as SQL Server.  If SQL Server is hosting multiple application databases, other than just the SSRS databases, a failure in SSRS may cause
    a significant failover impact to the entire environment.
    SSRS fails over independently of SQL Server.
If SSRS is running, it will do work on behalf of the overall deployment, so it is effectively active. The only way to make SSRS passive is to stop the SSRS service on all passive cluster nodes.
    So, SSRS is designed to achieve High Availability through the Scale-Out deployment. Though a failover clustered SSRS deployment is achievable, it is not the best option for achieving High Availability with Reporting Services.
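The "stop SSRS on all passive nodes" step mentioned above can be scripted; a minimal sketch, assuming a default (non-named) instance whose service is called "ReportServer" (named instances use "ReportServer$INSTANCE"):

```powershell
# On each passive node: stop Reporting Services and keep it from
# starting automatically, so only the active node serves reports.
Stop-Service -Name "ReportServer"
Set-Service -Name "ReportServer" -StartupType Manual
```

On failover, the same two commands would be reversed on the node that becomes active.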
    Regards,
    Mike Yin
    TechNet Community Support

  • 2012 New Cluster Adding A Storage Pool fails with Error Code 0x8007139F

Trying to set up a brand-new cluster (first node) on Server 2012. The hardware passes the cluster validation tests and consists of a Dell 2950 with an MD1000 JBOD enclosure holding a mix of 7.2K RPM SAS and 15K SAS drives. There is no RAID card or any other storage fabric, just a SAS adapter and an external enclosure.
    I can create a regular storage pool just fine and access it with no issues on the same box when I don't add it to the cluster. However when I try to add it to the cluster I keep getting these errors on adding a disk:
    Error Code: 0x8007139F if I try to add a disk (The group or resource is not in the correct state to perform the requested operation)
    When adding the Pool I get this error:
    Error Code 0x80070016 The Device Does not recognize the command
    Full Error on adding the pool
    Cluster resource 'Cluster Pool 1' of type 'Storage Pool' in clustered role 'b645f6ed-38e4-11e2-93f4-001517b8960b' failed. The error code was '0x16' ('The device does not recognize the command.').
    Based on the failure policies for the resource and role, the cluster service may try to bring the resource online on this node or move the group to another node of the cluster and then restart it.  Check the resource and group state using Failover Cluster
    Manager or the Get-ClusterResource Windows PowerShell cmdlet.
If I try to just add the raw disks to the storage, without using a pool or anything, almost every one of them fails with "incorrect function", except for one (a 7.2K RPM SAS drive). I cannot see any difference between it and the other disks. Any ideas? The error codes aren't helpful. I imagine there's something in the drive configuration or hardware I am missing here; I just don't know what, considering validation passes and I meet the listed prerequisites.
    If I can provide any more details that would assist please let me know. Kind of at a loss here.
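Before digging into the hardware, it may help to ask Storage Spaces and the cluster directly why each disk is rejected. A hedged sketch using inbox cmdlets (the CannotPoolReason column is populated on 2012 and later; output varies by OS version):

```powershell
# List every physical disk with its pooling eligibility and the
# reason Storage Spaces gives for rejecting it.
Get-PhysicalDisk |
    Select-Object FriendlyName, BusType, CanPool, CannotPoolReason |
    Format-Table -AutoSize

# Disks the cluster itself considers eligible as cluster storage.
Get-ClusterAvailableDisk
```

If the failing disks show a BusType or CannotPoolReason that differs from the one working 7.2K SAS drive, that difference is the place to start.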

    Hi,
You mentioned you use a Dell MD1000 as storage; the Dell MD1000 is Direct Attached Storage (DAS).
Windows Server clusters do support DAS storage; failover clusters include improvements to the way the cluster communicates with storage, improving the performance of a storage area network (SAN) or direct attached storage (DAS).
But the PERC 5/6 RAID controller used with the MD1000 may not support cluster technology. I did not find an official article for it, but I found that its next generation, the MD1200 with the PERC H800 RAID controller, still does not support cluster technology.
You may contact Dell to check that.
For more information, please refer to the following articles:
    Technical Guidebook for PowerVault MD1200 and MD 1220
    http://www.dell.com/downloads/global/products/pvaul/en/storage-powervault-md12x0-technical-guidebook.pdf
    Dell™ PERC 6/i, PERC 6/E and CERC 6/I User’s Guide
    http://support.dell.com/support/edocs/storage/RAID/PERC6/en/PDF/en_ug.pdf
    Hope this helps!
    Lawrence
    TechNet Community Support

  • Cannot add multiple members of a failover cluster to a DFSR replication group

Server 2012 RTM. I have two physical servers, in two separate data centers 35 miles apart, with a GbE link over metro fibre between them. Both have large (10 TB+) local RAID storage arrays, but given the physical separation there is no shared physical storage.
    The hosts need to be in a Windows failover cluster (WSFC), so that I can run high-availability VMs and SQL Availability Groups across these two hosts for HA and DR. VM and SQL app data storage is using a SOFS (scale out file server) network share on separate
    servers.
    I need to be able to use DFSR to replicate multi-TB user data file folders between the two local storage arrays on these two hosts for HA and DR. But when I try to add the second server to a DFSR replication group, I get the error:
    The specified member is part of a failover cluster that is already a member of the replication group. You cannot add multiple members for the same cluster to a replication group.
I'm not clear why this has to be a restriction. I need to be able to replicate files somehow for HA & DR of the 10 TB+ of file storage. I can't use a clustered file server for file storage, as I don't have any shared storage on these two servers. Likewise, I can't run an HA single DFSR target for the same reason (no shared storage), and in any case this doesn't solve the problem of replicating files between the two hosts for HA & DR. DFSR is the solution for replicating file storage across servers with non-shared storage.
    Why would there be a restriction against using DFSR between multiple hosts in a cluster, so long as you are not trying to replicate folders in a shared storage target accessible to both hosts (which would obviously be a problem)? So long as you are not replicating
    folders in c:\ClusterStorage, there should be no conflict. 
    Is there a workaround or alternative solution?

    Yes, I read that series. But it doesn't address the issue. The article is about making a DFSR target highly available. That won't help me here.
    I need to be able to use DFSR to replicate files between two different servers, with those servers being in a WSFC for the purpose of providing other clustered services (Hyper-V, SQL availability groups, etc.). DFSR should not interfere with this, but it
    is being blocked between nodes in the same WSFC for a reason that is not clear to me.
    This is a valid use case and I can't see an alternative solution in the case where you only have two physical servers. Windows needs to be able to provide HA, DR, and replication of everything - VMs, SQL, and file folders. But it seems that this artificial
    barrier is causing us to need to choose either clustered services or DFSR between nodes. But I can't see any rationale to block DFSR between cluster nodes - especially those without shared storage.
    Perhaps this blanket block should be changed to a more selective block at the DFSR folder level, not the node level.

Cannot migrate VM in VMM but can in Failover Cluster Manager (network adapters network optimization warning)

    I have a 4 node Server 2012 R2 Hyper-V Cluster and manage it with VMM 2012 R2.  I just upgraded the cluster from 2012 RTM to 2012 R2 last week which meant pulling 2 nodes out of the existing cluster, creating the new R2 cluster, running the copy
    cluster roles wizard since the VHDs are stored on CSVs, and then added the other 2 nodes after installing R2 on them, back into the cluster.  After upgrading the cluster I am unable to migrate some VMs from one node to another.  When trying to do
    a live migration, I get the following notifications under the Rating Explanation tab:
    Warning: There currently are not network adapters with network optimization available on host Node7. 
    Error: Configuration issues related to the virtual machine VM1 prevent deployment and must be resolved before deployment can continue. 
    I get this error for 3 out of the 4 nodes in the cluster.  I do not get this error for Node10 and I can live migrate to that node in VMM.  It has a green check for Network optimization.  The others do not.  These errors only affect
    VMM. In the Failover Cluster Manager, I can live migrate any VM to any node in the cluster without any issues.  In the old 2012 RTM cluster I used to get the warning but I could still migrate the VMs anywhere I wanted to.  I've checked the network
adapter settings in VMM on VM1 and they are the same as on VM2, which can migrate to any host in VMM.  I then checked the network adapter settings of the VMs from the Failover Cluster Manager; VM1, under Hardware Acceleration, has "Enable virtual machine queue" and "Enable IPsec task offloading" checked.  I unchecked those two boxes, refreshed the VMs, refreshed the cluster, rebooted the VM and refreshed again, but I still could not live migrate VM1.  Why is this an issue now when it wasn't before running on the new cluster?  How do I resolve it?  VMM is useless if I can't migrate all my VMs with it.

    I checked the settings on the physical nics on each node and here is what I found:
    Node7: Virtual machine queue is not listed (Cannot live migrate problem VM's to this node in VMM)
    Node8: Virtual machine queue is not listed (Cannot live migrate problem VM's to this node in VMM)
Node9: Virtual machine queue is listed and enabled (Cannot live migrate problem VM's to this node in VMM)
Node10: Virtual machine queue is listed and enabled (Live Migration works on all VMs in VMM)
From Hyper-V or the Failover Cluster Manager I can see in the network adapter settings of the VMs, under Hardware Acceleration, that these two settings are checked: "Enable virtual machine queue" and "Enable IPsec task offloading".  I unchecked those two boxes, refreshed the VMs, refreshed the cluster, rebooted the VM and refreshed again, but I still cannot live migrate the problem VMs.
It seems to me that adjusting those VM settings from VMM might fix the problem.  Why isn't that an option in VMM?
    Do I have to rebuild the VMM server with a new DB and then before adding the Hyper-V cluster uncheck those two settings on the VM's from Hyper-V manager?  That would be a lot of unnecessary work but I don't know what else to do at this point.
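The two Hardware Acceleration checkboxes discussed above correspond to VM network adapter settings that can be inspected and changed from PowerShell on the Hyper-V hosts, which avoids rebuilding anything. A hedged sketch (the VM name "VM1" is the example from the post):

```powershell
# Show VMQ support on the host's physical adapters; this is what
# feeds VMM's "network optimization" rating for the host.
Get-NetAdapterVmq

# Inspect the VM's virtual adapter settings.
Get-VMNetworkAdapter -VMName "VM1" | Format-List *

# Disable VMQ and IPsec task offload for the VM, equivalent to
# unchecking the two Hardware Acceleration boxes.
Set-VMNetworkAdapter -VMName "VM1" -VmqWeight 0 `
    -IPsecOffloadMaximumSecurityAssociation 0
```

After changing the adapter settings, refreshing the VM in VMM should pick up the new configuration.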

  • Windows 2008 Failover Cluster - Cannot add a generic service

I am trying to add a generic service in a failover cluster.
Selecting the option Services and Applications opens the wizard, which then displays the error "An error was encountered while loading the list of services. QueryServiceConfig failed. The system cannot find the file specified".
The cluster validation wizard completes successfully. Permissions do not appear to be an issue, as this account can seemingly do everything else, so I am at a loss to understand why this API fails when it queries the server for service information.
Having searched the Internet, the only thing I found was someone posting a similar issue on the Greek-language TechNet forum (if I recall correctly), and their comment was that they rebuilt their cluster.
This is a Windows 2008 (SP2) x64 two-node cluster running a non-Microsoft database. We need to add a non-Microsoft enterprise backup solution, and this is their documented method (adding it as a generic service); both pieces of software are from big vendors.
Symantec AV is installed, but I have tried with it disabled, so I don't think it has anything to do with that. Something is stopping the API from reporting back, but I can't find what.
I would really appreciate some help before we have to log a chargeable call with Microsoft support.
    Thank you
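Since the wizard fails inside QueryServiceConfig while enumerating services, one way to find the offending entry is to walk the service list and query each service's configuration individually, watching which one fails. A hedged sketch (assumes PowerShell is installed on the 2008 node; sc.exe qc is the command-line equivalent of QueryServiceConfig):

```powershell
# Query each installed service's configuration. A service whose
# registry entry points at a missing binary can make
# QueryServiceConfig fail with "file not found".
foreach ($svc in Get-Service) {
    $null = sc.exe qc $svc.Name 2>&1
    if ($LASTEXITCODE -ne 0) {
        Write-Host "$($svc.Name): sc qc failed (exit $LASTEXITCODE)"
    }
}
```

Any service flagged by the loop is a candidate for the broken registry entry that trips the wizard.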

    Hi,
Have you tried the suggestion? I want to see if the information provided was helpful. Your feedback is very useful for further research. Please feel free to let me know if you have additional questions.
    Best regards,
    Vincent Hu

  • How to assign SMB storage to CSV in HV failover cluster?

    I have a Hyper-V Cluster that looks like this:
    Clustered-Hyper-V-Diagram
    2012 R2 Failover Cluster
    2 Hyper-V nodes
    iSCSI Disk Witness on isolated "Cluster Only" Network
    "Cluster and Client" Network with nic-team connectivity to 2012 R2 File Server
    Share configured using: server manager > file and storage services > shares > tasks > new share > SMB Share - Applications > my RAID 1 volume.
    My question is this: how do I configure a Clustered Shared Volume?  How do I present the Shared Folder to the cluster?
    I can create/add VMs from Cluster Manager > Roles > Virtual Machines using \\SMB\Share for the location of the vhd...  but how do I use a CSV with this config?  Am I missing something?

    right click one of the disks that you assigned to cluster as available storage
    I don't yet have any disks assigned to the cluster as available storage.
Just for grins, I added an 8 GB iSCSI LUN and added it to a CSV:
PS C:\> Get-ClusterResource
Name                 State    OwnerGroup      ResourceType
----                 -----    ----------      ------------
Cluster IP Address   Online   Cluster Group   IP Address
Cluster Name         Online   Cluster Group   Network Name
witness              Online   Cluster Group   Physical Disk
PS C:\> Get-ClusterSharedVolume
Name      State    Node
----      -----    ----
test8Gb   Online   CLUSTERNODE01
    All well and good, but from what I've read elsewhere...
    SMB 3.0 via a 2012 File server can only be added to a Hyper-V CSV cluster using the VMM component of System Center 2012.  That is the only way to import an SMB 3 share for CSV storage usage.
    http://community.spiceworks.com/topic/439383-hyper-v-2012-and-smb-in-a-csv
    http://technet.microsoft.com/en-us/library/jj614620.aspx
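For completeness, the "right-click an available-storage disk" step from the reply has a PowerShell equivalent. A hedged sketch (the resource name "Cluster Disk 2" is illustrative; use whatever name Get-ClusterResource reports for the disk):

```powershell
# Disks presented to the cluster but not yet used by any role.
Get-ClusterResource |
    Where-Object { $_.ResourceType -eq "Physical Disk" }

# Promote an available-storage disk to a Cluster Shared Volume.
Add-ClusterSharedVolume -Name "Cluster Disk 2"
```

This only works for block storage (iSCSI/SAS/FC) presented as a cluster disk; an SMB share is consumed directly via its UNC path rather than converted to a CSV.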

  • Cannot failover cluster

    Hi all,
In my environment I have two Exchange 2013 servers, EX01 and EX02, in one DAG (DAG01). Recently my EX02 server has had problems; some Exchange services crash... I'm still looking for the reason, maybe a lack of memory... By the way, I have a "Mailbox Database 01":
    Get-MailboxDatabaseCopyStatus "Mailbox Database 01"
    Name Status ContentIndexState
    Mailbox Database 01\EX01 Mounted Healthy
    Mailbox Database 01\EX02 Healthy Healthy
When EX02 has a problem, something happens to the "Mailbox Database 01" copies on both EX01 and EX02: the database cannot be mounted on either server. Their status keeps switching between Mounted, Mounting, Initializing, Disconnected... like this:
    Get-MailboxDatabaseCopyStatus "Mailbox Database 01"
    Name Status ContentIndexState
    Mailbox Database 01\EX01 Mounting Failed
    Mailbox Database 01\EX02 Initializing Failed
    Get-MailboxDatabaseCopyStatus "Mailbox Database 01"
    Name Status ContentIndexState
    Mailbox Database 01\EX01 Initializing Failed
    Mailbox Database 01\EX02 Mounting Failed
    Get-MailboxDatabaseCopyStatus "Mailbox Database 01"
    Name Status ContentIndexState
Mailbox Database 01\EX01 Disconnected Failed
    Mailbox Database 01\EX02 Mounting Failed
Only after I suspend the "Mailbox Database 01" copy on EX02 does "Mailbox Database 01" mount on EX01 successfully; by then the "Mailbox Database 01" EDB file on EX02 is in a "dirty shutdown" state, and I have to reseed "Mailbox Database 01" from EX01 to EX02 manually.
    Get-ClusterGroup -Cluster EX01
    Name OwnerNode State
    ClusterGroup ex01 Online
    Available Storage ex02 Offline
    Get-DatabaseAvailabilityGroup -Status -Identity DAG01 | fl name,primaryActiveManager
    Name : DAG01
    PrimaryActiveManager : EX01
    Please let me know if you need any information.
    Thanks for your help.
    Jack.

    Hi John,
Last time, after the reseed, the database status changed to Healthy on EX02. I should note that my DAG could fail over before: for example, "Mailbox Database 01" could be Mounted on EX01 and Healthy on EX02, or vice versa, and when one server went down (I unplugged its network cable, for example), the mailbox database copy on the other server was mounted automatically; when the down server came back online, its copy was resynchronized from the other automatically. As I said, this has only started happening recently.
Recently, even when "Mailbox Database 01" is Mounted on EX01 and Healthy on EX02:
    Get-MailboxDatabaseCopyStatus "Mailbox Database 01"
    Name Status ContentIndexState
    Mailbox Database 01\EX01 Mounted Healthy
    Mailbox Database 01\EX02 Healthy Healthy
When EX02 has a problem, the "Mailbox Database 01" copy cannot be mounted on either server; its status keeps switching...
During the problem I noticed events that appear only on EX02, relating to crashes of some Exchange programs/services; for example, last time:
    Faulting application name: MSExchangeHMWorker.exe, version: 15.0.712.0, time stamp: 0x5199cd1a
    Faulting module name: RPCRT4.dll, version: 6.1.7601.21855, time stamp: 0x4eb4c921
    Exception code: 0xc0020043
    Fault offset: 0x000000000008aa13
    Faulting process id: 0x5cfc
    Faulting application start time: 0x01d0067aa7da6ac7
    Faulting application path: C:\Program Files\Microsoft\Exchange Server\V15\Bin\MSExchangeHMWorker.exe
    Faulting module path: C:\Windows\system32\RPCRT4.dll
    Report Id: 432d5040-cdfe-11e4-9bb7-3440b58d323f
    Faulting application name: svchost.exe_RpcEptMapper, version: 6.1.7600.16385, time stamp: 0x4a5bc3c1
    Faulting module name: ntdll.dll, version: 6.1.7601.17725, time stamp: 0x4ec4aa8e
    Exception code: 0xc0000374
    Fault offset: 0x00000000000c40f2
    Faulting process id: 0x2c0
    Faulting application start time: 0x01ce90e16804a9e2
    Faulting application path: C:\Windows\system32\svchost.exe
    Faulting module path: C:\Windows\SYSTEM32\ntdll.dll
    [RpcHttp] An internal server error occurred. The unhandled exception was: System.TypeInitializationException: The type initializer for 'Microsoft.Exchange.Data.Directory.Globals' threw an exception. ---> System.Runtime.InteropServices.COMException: Call was canceled by the message filter. (Exception from HRESULT: 0x80010002 (RPC_E_CALL_CANCELED))
    at System.Runtime.InteropServices.Marshal.ThrowExceptionForHRInternal(Int32 errorCode, IntPtr errorInfo)
    at System.Management.ManagementScope.InitializeGuts(Object o)
    at System.Management.ManagementScope.Initialize()
    at System.Management.ManagementObjectSearcher.Initialize()
    at System.Management.ManagementObjectSearcher.Get()
    at Microsoft.Exchange.Data.Directory.Globals.DetectIfMachineIsVirtualMachine()
    at Microsoft.Exchange.Data.Directory.Globals..cctor()
    Watson report about to be sent for process id: 19380, with parameters: E12IIS, c-RTL-AMD64, 15.00.0712.024, w3wp#MSExchangeRpcProxyAppPool, M.E.Data.Directory, M.E.D.D.Globals.DetectIfMachineIsVirtualMachine, System.TypeInitializationException, 55de, 15.00.0712.016.
    ErrorReportingEnabled: False
    [RpcHttp] An internal server error occurred. The unhandled exception was: System.TypeInitializationException: The type initializer for 'Microsoft.Exchange.Data.Directory.Globals' threw an exception. ---> System.Runtime.InteropServices.COMException: Call was canceled by the message filter. (Exception from HRESULT: 0x80010002 (RPC_E_CALL_CANCELED))
    at System.Runtime.InteropServices.Marshal.ThrowExceptionForHRInternal(Int32 errorCode, IntPtr errorInfo)
    at System.Management.ManagementScope.InitializeGuts(Object o)
    at System.Management.ManagementScope.Initialize()
    at System.Management.ManagementObjectSearcher.Initialize()
    at System.Management.ManagementObjectSearcher.Get()
    at Microsoft.Exchange.Data.Directory.Globals.DetectIfMachineIsVirtualMachine()
    at Microsoft.Exchange.Data.Directory.Globals..cctor()
And this time, I found one event that appeared only on EX02 before the problem happened:
    Log Name: Application
    Source: MSExchange Transport Migration
    Date: 3/28/2015 10:34:01 AM
    Event ID: 2005
    Task Category: General
    Level: Error
    Keywords: Classic
    User: N/A
    Computer: EX02.mydomain.com
    Description:
    An unexpected failure has occurred. The problem was ignored but may indicate other problems in the system. Diagnostic information:
    at Microsoft.Exchange.Data.Storage.MapiPropertyBag.SaveChanges(Boolean force) at Microsoft.Exchange.Data.Storage.StoreObjectPropertyBag.SaveChanges(Boolean force) at Microsoft.Exchange.Data.Storage.AcrPropertyBag.SaveChanges(Boolean force) at Microsoft.Exchange.Data.Storage.CoreItem.InternalSave(SaveMode saveMode, CallbackContext callbackContext) at Microsoft.Exchange.Data.Storage.Item.SaveInternal(SaveMode saveMode, Boolean commit) at Microsoft.Exchange.Data.Storage.Item.Save(SaveMode saveMode) at Microsoft.Exchange.Migration.MigrationJob.UpdatePoisonCount(IMigrationDataProvider provider, Int32 count) at Microsoft.Exchange.MailboxReplicationService.CommonUtils.ProcessKnownExceptions(Action actionDelegate, FailureDelegate failureDelegate)|Error clearing posion count for job: local move 3:14a96ad4-4506-4ab8-9be7-c0308a72e555:ExchangeLocalMove:Staged:4:[email protected]:Completed:11/26/2013 7:59:11 PM::|Microsoft.Exchange.Data.Storage.ConnectionFailedTransientException|Cannot save changes made to an item to store.|InnerException:MapiExceptionNetworkError:16.55847:3E000000, 18.59943:BE060000BD07000000000000, 0.62184:00000000, 255.16280:BE0600006E2F610000000000, 255.8600:A81D0000, 255.12696:802C8F080869D001000FC882, 255.10648:02000000, 255.14744:BE060000, 255.9624:F2030000, 255.13720:00000000, 255.11672:01000000, 255.12952:00000000010700C000000000, 3.23260:BE060000, 0.43249:000FC882, 4.39153:15010480, 4.32881:15010480, 0.50035:07000000, 4.64625:15010480, 20.52176:000FC88211001010FE000000, 20.50032:000FC8827E17401076040000, 0.50128:00000000, 0.50288:00000000, 4.23354:15010480, 0.25913:76040000, 255.21817:15010480, 0.17361:76040000, 4.19665:15010480, 0.37632:76040000, 4.37888:15010480|Microsoft.Mapi.MapiExceptionNetworkError: MapiExceptionNetworkError: Unable to save changes. 
(hr=0x80040115, ec=0) Diagnostic context: Lid: 55847 EMSMDBPOOL.EcPoolSessionDoRpc called [length=62] Lid: 59943 EMSMDBPOOL.EcPoolSessionDoRpc exception [rpc_status=0x6BE][latency=1981] Lid: 62184 Lid: 16280 dwParam: 0x0 Msg: EEInfo: ComputerName: n/a Lid: 8600 dwParam: 0x0 Msg: EEInfo: ProcessID: 7592 Lid: 12696 dwParam: 0x0 Msg: EEInfo: Generation Time: 3/28/0415 3:34:01 AM Lid: 10648 dwParam: 0x0 Msg: EEInfo: Generating component: 2 Lid: 14744 dwParam: 0x0 Msg: EEInfo: Status: 1726 Lid: 9624 dwParam: 0x0 Msg: EEInfo: Detection location: 1010 Lid: 13720 dwParam: 0x0 Msg: EEInfo: Flags: 0 Lid: 11672 dwParam: 0x0 Msg: EEInfo: NumberOfParameters: 1 Lid: 12952 dwParam: 0x0 Msg: EEInfo: prm[0]: Long val: 3221227265 Lid: 23260 Win32Error: 0x6BE Lid: 43249 Lid: 39153 StoreEc: 0x80040115 Lid: 32881 StoreEc: 0x80040115 Lid: 50035 Lid: 64625 StoreEc: 0x80040115 Lid: 52176 ClientVersion: 15.0.712.17 Lid: 50032 ServerVersion: 15.0.712.6014 Lid: 50128 Lid: 50288 Lid: 23354 StoreEc: 0x80040115 Lid: 25913 Lid: 21817 ROP Failure: 0x80040115 Lid: 17361 Lid: 19665 StoreEc: 0x80040115 Lid: 37632 Lid: 37888 StoreEc: 0x80040115 at Microsoft.Mapi.MapiExceptionHelper.InternalThrowIfErrorOrWarning(String message, Int32 hresult, Boolean allowWarnings, Int32 ec, DiagnosticContext diagCtx, Exception innerException) at Microsoft.Mapi.MapiExceptionHelper.ThrowIfError(String message, Int32 hresult, IExInterface iUnknown, Exception innerException) at Microsoft.Mapi.MapiProp.SaveChanges(SaveChangesFlags flags) at Microsoft.Exchange.Data.Storage.MapiPropertyBag.SaveChanges(Boolean force)|: ,,
    %2
    Event Xml:
    <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
    <System>
    <Provider Name="MSExchange Transport Migration" />
    <EventID Qualifiers="49152">2005</EventID>
    <Level>2</Level>
    <Task>1</Task>
    <Keywords>0x80000000000000</Keywords>
    <TimeCreated SystemTime="2015-03-28T03:34:01.000000000Z" />
    <EventRecordID>10409597</EventRecordID>
    <Channel>Application</Channel>
    <Computer>IDCEXC002.itl.com</Computer>
    <Security />
    </System>
    <EventData>
    <Data> at Microsoft.Exchange.Data.Storage.MapiPropertyBag.SaveChanges(Boolean force) at Microsoft.Exchange.Data.Storage.StoreObjectPropertyBag.SaveChanges(Boolean force) at Microsoft.Exchange.Data.Storage.AcrPropertyBag.SaveChanges(Boolean force) at Microsoft.Exchange.Data.Storage.CoreItem.InternalSave(SaveMode saveMode, CallbackContext callbackContext) at Microsoft.Exchange.Data.Storage.Item.SaveInternal(SaveMode saveMode, Boolean commit) at Microsoft.Exchange.Data.Storage.Item.Save(SaveMode saveMode) at Microsoft.Exchange.Migration.MigrationJob.UpdatePoisonCount(IMigrationDataProvider provider, Int32 count) at Microsoft.Exchange.MailboxReplicationService.CommonUtils.ProcessKnownExceptions(Action actionDelegate, FailureDelegate failureDelegate)|Error clearing posion count for job: local move 3:14a96ad4-4506-4ab8-9be7-c0308a72e555:ExchangeLocalMove:Staged:4:[email protected]:Completed:11/26/2013 7:59:11 PM::|Microsoft.Exchange.Data.Storage.ConnectionFailedTransientException|Cannot save changes made to an item to store.|InnerException:MapiExceptionNetworkError:16.55847:3E000000, 18.59943:BE060000BD07000000000000, 0.62184:00000000, 255.16280:BE0600006E2F610000000000, 255.8600:A81D0000, 255.12696:802C8F080869D001000FC882, 255.10648:02000000, 255.14744:BE060000, 255.9624:F2030000, 255.13720:00000000, 255.11672:01000000, 255.12952:00000000010700C000000000, 3.23260:BE060000, 0.43249:000FC882, 4.39153:15010480, 4.32881:15010480, 0.50035:07000000, 4.64625:15010480, 20.52176:000FC88211001010FE000000, 20.50032:000FC8827E17401076040000, 0.50128:00000000, 0.50288:00000000, 4.23354:15010480, 0.25913:76040000, 255.21817:15010480, 0.17361:76040000, 4.19665:15010480, 0.37632:76040000, 4.37888:15010480|Microsoft.Mapi.MapiExceptionNetworkError: MapiExceptionNetworkError: Unable to save changes. 
(hr=0x80040115, ec=0) Diagnostic context: Lid: 55847 EMSMDBPOOL.EcPoolSessionDoRpc called [length=62] Lid: 59943 EMSMDBPOOL.EcPoolSessionDoRpc exception [rpc_status=0x6BE][latency=1981] Lid: 62184 Lid: 16280 dwParam: 0x0 Msg: EEInfo: ComputerName: n/a Lid: 8600 dwParam: 0x0 Msg: EEInfo: ProcessID: 7592 Lid: 12696 dwParam: 0x0 Msg: EEInfo: Generation Time: 3/28/0415 3:34:01 AM Lid: 10648 dwParam: 0x0 Msg: EEInfo: Generating component: 2 Lid: 14744 dwParam: 0x0 Msg: EEInfo: Status: 1726 Lid: 9624 dwParam: 0x0 Msg: EEInfo: Detection location: 1010 Lid: 13720 dwParam: 0x0 Msg: EEInfo: Flags: 0 Lid: 11672 dwParam: 0x0 Msg: EEInfo: NumberOfParameters: 1 Lid: 12952 dwParam: 0x0 Msg: EEInfo: prm[0]: Long val: 3221227265 Lid: 23260 Win32Error: 0x6BE Lid: 43249 Lid: 39153 StoreEc: 0x80040115 Lid: 32881 StoreEc: 0x80040115 Lid: 50035 Lid: 64625 StoreEc: 0x80040115 Lid: 52176 ClientVersion: 15.0.712.17 Lid: 50032 ServerVersion: 15.0.712.6014 Lid: 50128 Lid: 50288 Lid: 23354 StoreEc: 0x80040115 Lid: 25913 Lid: 21817 ROP Failure: 0x80040115 Lid: 17361 Lid: 19665 StoreEc: 0x80040115 Lid: 37632 Lid: 37888 StoreEc: 0x80040115 at Microsoft.Mapi.MapiExceptionHelper.InternalThrowIfErrorOrWarning(String message, Int32 hresult, Boolean allowWarnings, Int32 ec, DiagnosticContext diagCtx, Exception innerException) at Microsoft.Mapi.MapiExceptionHelper.ThrowIfError(String message, Int32 hresult, IExInterface iUnknown, Exception innerException) at Microsoft.Mapi.MapiProp.SaveChanges(SaveChangesFlags flags) at Microsoft.Exchange.Data.Storage.MapiPropertyBag.SaveChanges(Boolean force)|: ,,</Data>
    </EventData>
    </Event>
I also noticed that the "Mailbox Database 01" EDB file on EX02 is in a "dirty shutdown" state (yes, last time too).
eseutil /mh "path to Mailbox Database 01 edb file on EX02" gives this result:
    State: Dirty Shutdown
    Log Required: 4656967-4657077 (0x470f47-0x470fb5)
    Log Committed: 0-4657078 (0x0-0x470fb6)
I'm going to reseed the "Mailbox Database 01" copy on EX02 tonight; should I first repair it with eseutil /r to get it into a clean shutdown state?
How can I check whether my DAG configuration is fine? Is the Get-ClusterGroup result above OK?
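For reference, the eseutil /r soft-recovery sequence being asked about looks like this. The paths and the E00 log base name are illustrative, and it should only be run against the suspended EX02 copy; a full reseed makes recovery of the old copy unnecessary:

```cmd
:: Check the header state and the required log range.
eseutil /mh "D:\DB01\Mailbox Database 01.edb"

:: Attempt soft recovery by replaying the required logs
:: (log base name, log folder and database folder are illustrative).
eseutil /r E00 /l "D:\DB01\Logs" /d "D:\DB01"

:: Re-check: the state should now read "Clean Shutdown".
eseutil /mh "D:\DB01\Mailbox Database 01.edb"
```

Soft recovery needs the logs listed under "Log Required" to be present and undamaged; if they are missing, reseeding from the healthy EX01 copy is the safer route.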

  • Failover Cluster Hyper-V Storage Choice

I am trying to deploy a 2-node Hyper-V failover cluster in a closed environment.  My current setup is two servers as hypervisors and one server as AD DC + storage server.  All three are running Windows Server 2012 R2.
Since everything runs over Ethernet, my choice of storage is between iSCSI and SMB 3.0.
I am more inclined to use SMB 3.0, and I did find some instructions online about setting up a Hyper-V cluster connecting to an SMB 3.0 file server cluster.  However, I only have budget for one storage server.  Is it a good idea to choose SMB over iSCSI in this scenario (where there is only one storage server for the Hyper-V cluster)?
What do I need to pay attention to in this setup, aside from some unavoidable single points of failure?
In the SMB 3.0 file server cluster scenario I mentioned above, they had to use SAS drives for the file server cluster (CSV).  I am guessing that in my scenario SATA drives should be fine, right?
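If SMB 3.0 is chosen, one detail worth attention is that the share on the storage server must grant Full Control to the Hyper-V hosts' computer accounts (and the cluster's account), on both the share and the NTFS ACL. A hedged sketch; the share, path, and account names are all illustrative:

```powershell
# Create an SMB share for Hyper-V workloads on the storage server.
# Computer accounts end in $; both the share permissions set here
# and the NTFS ACL on the folder need Full Control for these accounts.
New-SmbShare -Name "VMStore" -Path "E:\VMStore" `
    -FullAccess "DOMAIN\HV1$", "DOMAIN\HV2$", "DOMAIN\HVCluster$"
```

The VMs then reference \\storageserver\VMStore paths directly; no CSV is involved in this layout.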

"I suspect that Starwind solution achieves FT by running shadow copies of VMs on the partner Hypervisor"
No, it does not run shadow VMs on the partner hypervisor.  Starwind is a product in a family known as 'software defined storage'.  There are a number of such solutions on the market.  They all provide a similar service in that they allow the use of local storage, also known as Direct Attached Storage (DAS), instead of external shared storage for clustering.  Each of these products provides some method to mirror or 'RAID' the storage among the nodes of the software-defined storage layer.  So yes, there is some overhead to ensure data redundancy, but none of this class of product will 'shadow' VMs on another node.  Products like Starwind, Datacore, and others are nice entry points to HA without the expense of purchasing an external storage shelf/array, because DAS is used instead.
    1) "Software Defined Storage" is a VERY wide term. Many companies use it for solutions that DO require particular hardware to run on. Say, Nexenta claims to do SDS, yet they need separate physical servers running Solaris and their (Nexenta) storage app. Microsoft,
    whom we all love so much because they give us the infrastructure we use to make our living, has Clustered Storage Spaces, which MSFT also calls "Software Defined Storage", but it needs physical SAS JBODs, SAS controllers and fabric to operate. These are hybrid software-hardware
    solutions. Purer ones don't need any dedicated hardware, but they still share the actual server hardware with the hypervisor (HP VSA, VMware Virtual SAN; BTW, the latter requires flash to operate, so it's again not a pure software thing).
    2) Yes, there are a number of solutions, but the devil is in the details. Technically, the whole virtualization world is moving away from the old model of running the storage virtualization stack inside a VM toward making it part of the hypervisor (VMware Virtual Storage Appliance being replaced
    by VMware Virtual SAN is an excellent example). So, talking about Hyper-V, there are not many companies that have implemented VM-less solutions. Besides the ones you've named there is also SteelEye, and that's probably all (Double-Take cannot replicate running
    VMs effectively, so it cannot be counted). Running the storage virtualization stack as part of Hyper-V has many benefits compared to VM-based designs:
    - Performance. Obviously, kernel-space DMA engines (StarWind) and a polling driver model (DataCore) deliver lower latency and higher IOPS than VM-hosted I/O routed over VMBus and emulated storage and network hardware.
    - Simplicity. With native apps it's click-and-install. With VMs it's a UNIX management burden (BTW, who will update the forked Solaris your VSA runs on top of? Sun? Out of business. Oracle? You did not get your ZFS VSA from Oracle. Who?) plus the eternal "chicken and
    egg" issue: the cluster starts and needs access to shared storage to spawn VMs, but those VMs sit behind a VSA VM that itself needs to be spawned first. So first you start the storage VMs, then let them sync (a few terabytes, maybe a couple of hours to check access bitmaps for the volumes),
    and only after that can you start your other production VMs. Very nice!
    - Scenario limitations. You want to implement a CSV for Scale-Out File Servers? You cannot use HP VSA or StorMagic, because the SoFS and Hyper-V roles cannot mix on the same hardware. To surf the SMB 3.0 tide you need native apps or physical hardware behind it.
    That's why the current virtualization leader, VMware, has clearly pointed out where these types of things need to run: side by side with the hypervisor kernel.
    3) DAS is not only cheaper but also faster than SAN and NAS (obviously). Sure, there is no "one size fits all", but unless somebody needs a) very high LUN density (Oracle or a huge SQL database, or maybe SAP) and b) very strict SLAs (a friendly telecom company
    we provide Tier 2 infrastructure for runs cell phone stats on EMC, $1M for a few terabytes; the reason is that EMC keeps FOUR units like that marked as "spare" and is required to replace a failed one in less than 15 minutes), there is no point in deploying a hardware
    SAN / NAS for shared storage. SAN / NAS is a sustaining innovation and Virtual SAN is a disruptive one; the disruptive comes to replace the sustaining for 80-90% of business cases, leaving the sustaining to live on in niche deployments. Clayton Christensen's "The Innovator's Dilemma".
    Classic. More here:
    Disruptive Innovation
    http://en.wikipedia.org/wiki/Disruptive_innovation
    So I would not treat Software Defined Storage as a poor man's HA, or as something usable for test & development only. The thing has been ready for prime time for a long time. Talk to hardware SAN VARs if you have connections: how many stand-alone units did they sell to SMB
    & ROBO deployments last year?
    StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts.
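    If you do go the SMB 3.0 route with a single file server, the share needs Full Control for the Hyper-V hosts' computer accounts at both the share and NTFS level. A minimal sketch; the server name FS01, host names HV1/HV2, domain CONTOSO, and paths are all hypothetical placeholders:

    ```powershell
    # On the storage server: create the share for VM files, granting the
    # Hyper-V hosts' computer accounts (and the admins) Full Control
    New-SmbShare -Name "VMStore" -Path "D:\VMStore" `
        -FullAccess "CONTOSO\HV1$", "CONTOSO\HV2$", "CONTOSO\Domain Admins"

    # Mirror the share permissions onto the folder's NTFS ACL
    Set-SmbPathAcl -ShareName "VMStore"

    # On a Hyper-V host: point a new VM's storage at the UNC path
    New-VM -Name "TestVM" -MemoryStartupBytes 1GB -Path "\\FS01\VMStore"
    ```

    With only one storage server, the share itself remains a single point of failure regardless of how the Hyper-V nodes are clustered, which is the trade-off the question already anticipates.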

  • Can't get any Local Storage to add in Failover Cluster Manager

    I don't believe Hyper-V allows plain local disks to be added to a failover cluster.  How are you sharing the disks between the nodes?  Have you used Storage Spaces Direct, Starwind, or HP VSA?  You need to add the drives at one of those levels; you can't do it at the raw local level.

    Hey all! I've been stuck on this a while, so I wanted to see if the community could help. I have a lab at the house with 4 servers running Server 2012 R2: 2 "high power" servers and 2 "low power" servers. What I'm trying to do is cluster them together so that Hyper-V VMs can fail over (best case: they keep failing over to the next available server; minimum: the high-power servers fail over to each other and the low-power ones do the same). But I can't get any disks to add! Every time I add them, they come up either Offline or Failed: the Failed one has the local host as its owner node, and the Offline one has a non-local host as its owner node. If I try to bring the disks online I get:
    Failed disk: Incorrect Function, 0x80070001
    Offline disk: Clustered storage is not connected to the node,...
    This topic first appeared in the Spiceworks Community

  • Selecting VHDx as storage for File Server Role (Failover Cluster 2012 R2)

    Is it possible to select an already existing (offline) VHD or VHDX as storage when creating the "File Server" role? The reason I want to do that is that I already have a file server set up as a virtual machine, and it is causing issues, so my company
    decided to move to a File Server role.
    Thank you
    David

    Hi David,
    Do you mean you configured it as a file server failover cluster via the "High Availability Wizard"?
    I think you need to choose a volume shared between the two nodes to achieve high availability.
    Please refer to the following link:
    http://technet.microsoft.com/en-us/library/cc731844(v=WS.10).aspx
    If you do not select a shared volume, I think there is no difference from sharing a mounted VHDX file on a standalone file server.
    I would suggest copying these files to a CSV and sharing them from there.
    Hope it helps
    Best Regards
    Elton Ji
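    The suggestion above, sketched in PowerShell: mount the old VHDX, copy its content onto clustered storage, then create the clustered File Server role and share. The drive letters, paths, disk name, role name, and IP address are assumptions for illustration only:

    ```powershell
    # Mount the existing VHDX read-only (assume it surfaces as E:)
    # and copy its content to a clustered disk (assume F:)
    Mount-VHD -Path "D:\OldFS.vhdx" -ReadOnly -Passthru
    Copy-Item -Path "E:\Shares\*" -Destination "F:\Shares\" -Recurse

    # Create a clustered File Server role on shared storage
    Add-ClusterFileServerRole -Name "FS01" -Storage "Cluster Disk 2" `
        -StaticAddress 192.168.0.10

    # Share the copied folder through the clustered file server
    New-SmbShare -Name "Shares" -Path "F:\Shares" -ScopeName "FS01"
    ```

    The key point stands: the storage backing the role must be a disk the cluster owns, not a VHDX mounted on a single node.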

  • Adding drives to storage pool with same unique id

    I have seen a lot of discussion about using storage pools with RAID controllers that report the same unique ID across multiple drives.
    I have yet to find a solution to my problem: I can't add 2 drives to a storage pool because they share the same unique ID. Is there a way I can get around this?
    Thanks, Brendon

    Thanks for your reply.
    However, Storage Spaces uses the UniqueId that the RAID / SATA controller reports for the drive. In my case this is the output from PowerShell:
    PS C:\Users\tfs> get-physicaldisk | ft FriendlyName, uniqueid
    FriendlyName                                                uniqueid
    PhysicalDisk1                                               2039374232333633
    PhysicalDisk2                                               2039374232333633
    PhysicalDisk10                                              SCSI\Disk&Ven_Hitachi&Prod_HDS722020ALA330\4&37df755d&0&...
    PhysicalDisk8                                               SCSI\Disk&Ven_WDC&Prod_WD10EACS-00D6B0\4&37df755d&0&0300...
    PhysicalDisk6                                               SCSI\Disk&Ven_WDC&Prod_WD10EADS-00M2B0\4&37df755d&0&0100...
    PhysicalDisk7                                               SCSI\Disk&Ven_&Prod_ST2000DL003-9VT1\4&37df755d&0&020000...
    PhysicalDisk0                                               2039374232333633
    PhysicalDisk4                                               SCSI\Disk&Ven_&Prod_ST3000DM001-9YN1\5&10a0425f&0&010000...
    PhysicalDisk3                                               SCSI\Disk&Ven_Hitachi&Prod_HDS723030ALA640\5&10a0425f&0&...
    PhysicalDisk9                                               SCSI\Disk&Ven_&Prod_ST31500341AS\4&37df755d&0&040000:sho...
    PhysicalDisk5                                               SCSI\Disk&Ven_WDC&Prod_WD1001FALS-00J7B\4&37df755d&0&000...
    As you can see, I have 3 drives with the same UniqueId. This I cannot change, and it is what I am looking for a workaround for.
    If you have any thoughts, that would be great.
    Thanks in advance,
    Brendon
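    I am not aware of any documented way to override the UniqueId that Storage Spaces reads from the controller, but you can at least confirm which disks collide before changing hardware. A quick sketch:

    ```powershell
    # List physical disks whose UniqueId is reported by more than one drive -
    # Storage Spaces will refuse to pool more than one disk from each group
    Get-PhysicalDisk |
        Group-Object -Property UniqueId |
        Where-Object { $_.Count -gt 1 } |
        ForEach-Object { $_.Group | Format-Table FriendlyName, UniqueId }
    ```

    The usual remedy reported for this class of problem is to put the controller in IT/HBA (pass-through) mode, or move the drives to a controller that passes each drive's serial number through, so every disk reports a distinct UniqueId.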

  • Adding a server into a failover cluster

    Hi all, I have the following question:
    I have a failover cluster with 3 servers, and now I have to add another server. The cluster is in a production environment and the service can't be cut off during working hours. Will adding the new server cut off the service? And which steps should I follow to complete this task?
    Thanks in advance
    Dj

    Thanks for the reply. I have installed the latest VMWare tools and it hasn't fixed my issue. It's just weird. I've set this up multiple times using Sun Fire Servers or IBM Servers (non virtualized) and it's worked flawlessly. But using ESXi each one has had the same issue. Four instances each with differing hardware. I will try to look at netstat and see what I find. I'm pretty sure if the switch port was shutting down due to errors that just rebooting the server wouldn't clear it. I think you have to manually reset the switch port after something like that occurs. Anyways I will check the switch logs and see if I see something there also.
    In my test environment I use VMWare Server 2.0 and I've never had this issue on that platform. I'm currently doing an install where they are using ESX servers and it's run perfectly so far, albeit with a small load compared to what it will be. Almost wondering if it could be something with ESXi. I didn't think there was much of a difference at the base level between ESXi and ESX. I will check out the link you sent and proceed with Oracle tech support. They want me to run ut_gather, etc. on the server right after it fails. The problem is they are all running live production in a hospital environment, so it's difficult to schedule a time when I can enable the second server and wait for it to fail. Again, thank you very much for your input and please let me know if you think of something else to check.

  • Adding lun to an existing storage pool

    I have problems adding a LUN to an existing storage pool.
    The storage pool is a data-only type.
    The log file /var/run/xsancvupdatefsxsan.log shows:
    Merging bitmap data ( 99%)
    Merging bitmap data (100%)
    Bitmap fragmentation: 1900004 chunks (0%)
    Bitmap fragmentation threshold exceeded. Aborting.
    Invalid argument
    Fatal: Failed to expand stripe group
    Check configuration and try again
    After running the defrag command on some folders, the log shows:
    Merging bitmap data (100%)
    Bitmap fragmentation: 1898528 chunks (0%)
    Bitmap fragmentation threshold exceeded. Aborting.
    Invalid argument
    Fatal: Failed to expand stripe group
    Check configuration and try again
    Do I need to run a complete defrag?
    Apple's documentation indicates we need to delete the data before adding the new LUN because of a known issue (a very big issue!!!):
    http://docs.info.apple.com/article.html?artnum=303571
    Thanks for any help.
    xsan 1.4   Mac OS X (10.4.8)  

    Hi William, thanks for your answer.
    Right now my storage is 68% used.
    Do you think the storage pool expansion will work if I get usage below 60%?
    I prefer adding a LUN to an existing storage pool over creating a new storage pool, because adding a LUN also adds bandwidth.
    Thanks for any advice.
    CCL

  • Cluster Storage : All SAN drives are not added up into cluster storage.

    Hi Team,
    Everything seems to be working fine except one minor issue: one of the disks is not showing up in cluster storage, even though validation completed without any issue, error or warning. Please see the report
    below, where all SAN disks validate successfully but one is not added into Windows Server 2012 storage.
    The quorum disk was added successfully into storage, but the data disk was not.
    http://goldteam.co.uk/download/cluster.mht
    Thanks,
    SZafar

    Create Cluster
    Cluster: mail
    Node: MailServer-N2.goldteam.co.uk
    Node: MailServer-N1.goldteam.co.uk
    Quorum: Node and Disk Majority (Cluster Disk 1)
    IP Address: 192.168.0.4
    Started: 12/01/2014 04:34:45
    Completed: 12/01/2014 04:35:08
    Beginning to configure the cluster mail.
    Initializing Cluster mail.
    Validating cluster state on node MailServer-N2.goldteam.co.uk.
    Find a suitable domain controller for node MailServer-N2.goldteam.co.uk.
    Searching the domain for computer object 'mail'.
    Bind to domain controller \\GTMain.goldteam.co.uk.
    Check whether the computer object mail for node MailServer-N2.goldteam.co.uk exists in the domain. Domain controller \\GTMain.goldteam.co.uk.
    Computer object for node MailServer-N2.goldteam.co.uk does not exist in the domain.
    Creating a new computer account (object) for 'mail' in the domain.
    Check whether the computer object MailServer-N2 for node MailServer-N2.goldteam.co.uk exists in the domain. Domain controller \\GTMain.goldteam.co.uk.
    Creating computer object in organizational unit CN=Computers,DC=goldteam,DC=co,DC=uk where node MailServer-N2.goldteam.co.uk exists.
    Create computer object mail on domain controller \\GTMain.goldteam.co.uk in organizational unit CN=Computers,DC=goldteam,DC=co,DC=uk.
    Check whether the computer object mail for node MailServer-N2.goldteam.co.uk exists in the domain. Domain controller \\GTMain.goldteam.co.uk.
    Configuring computer object 'mail in organizational unit CN=Computers,DC=goldteam,DC=co,DC=uk' as cluster name object.
    Get GUID of computer object with FQDN: CN=MAIL,CN=Computers,DC=goldteam,DC=co,DC=uk
    Validating installation of the Network FT Driver on node MailServer-N2.goldteam.co.uk.
    Validating installation of the Cluster Disk Driver on node MailServer-N2.goldteam.co.uk.
    Configuring Cluster Service on node MailServer-N2.goldteam.co.uk.
    Validating installation of the Network FT Driver on node MailServer-N1.goldteam.co.uk.
    Validating installation of the Cluster Disk Driver on node MailServer-N1.goldteam.co.uk.
    Configuring Cluster Service on node MailServer-N1.goldteam.co.uk.
    Waiting for notification that Cluster service on node MailServer-N2.goldteam.co.uk has started.
    Forming cluster 'mail'.
    Adding cluster common properties to mail.
    Creating resource types on cluster mail.
    Creating resource group 'Cluster Group'.
    Creating IP Address resource 'Cluster IP Address'.
    Creating Network Name resource 'mail'.
    Searching the domain for computer object 'mail'.
    Bind to domain controller \\GTMain.goldteam.co.uk.
    Check whether the computer object mail for node exists in the domain. Domain controller \\GTMain.goldteam.co.uk.
    Computer object for node exists in the domain.
    Verifying computer object 'mail' in the domain.
    Checking for account information for the computer object in the 'UserAccountControl' flag for CN=MAIL,CN=Computers,DC=goldteam,DC=co,DC=uk.
    Set password on mail.
    Configuring computer object 'mail in organizational unit CN=Computers,DC=goldteam,DC=co,DC=uk' as cluster name object.
    Get GUID of computer object with FQDN: CN=MAIL,CN=Computers,DC=goldteam,DC=co,DC=uk
    Provide permissions to protect object from accidental deletion.
    Write service principal name list to the computer object CN=MAIL,CN=Computers,DC=goldteam,DC=co,DC=uk.
    Set operating system and version in Active Directory Domain Services.
    Set supported encryption types in Active Directory Domain Services.
    Starting clustered role 'Cluster Group'.
    The initial cluster has been created - proceeding with additional configuration.
    Clustering all shared disks.
    Creating the physical disk resource for 'Cluster Disk 1'.
    Bringing the resource for 'Cluster Disk 1' online.
    Assigning the drive letters for 'Cluster Disk 1'.
    'Cluster Disk 1' has been successfully configured.
    Waiting for available storage to come online...
    All available storage has come online...
    Waiting for the core cluster group to come online.
    Configuring the quorum for the cluster.
    Configuring quorum resource to Cluster Disk 1.
    Configuring Node and Disk Majority quorum with 'Cluster Disk 1'.
    Moving 'Cluster Disk 1' to the core cluster group.
    Choosing the most appropriate storage volume...
    Attempting Node and Disk Majority quorum configuration with 'Cluster Disk 1'.
    Quorum settings have successfully been changed.
    The cluster was successfully created.
    Finishing cluster creation.
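    When validation passes but a data disk never appears under Storage, it is worth checking whether the cluster still sees the disk as merely "available" and adding it explicitly. A hedged sketch, using the cluster name "mail" from the report above:

    ```powershell
    # Show disks the cluster can see but has not yet added to its storage
    Get-ClusterAvailableDisk -Cluster "mail"

    # Add every such disk to the cluster's Available Storage group
    Get-ClusterAvailableDisk -Cluster "mail" | Add-ClusterDisk
    ```

    If Get-ClusterAvailableDisk returns nothing, the data disk is likely not presented identically to both nodes (or is still online/partitioned on one of them), which is the usual reason the Create Cluster wizard skips it.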
