Hyper-V cluster with core switch downtime... what to do?

Is there a way to essentially "pause" the Hyper-V cluster and keep things running but do NOT attempt to fail over anything for any reason?
We have one ProCurve 5412zl core switch with two c7000 enclosures. In each c7000 enclosure there are two switches that connect all the blade servers within the enclosure. Those two switches are interconnected internally, so they can communicate within the enclosure.
So if the core switch goes down, the Hyper-V servers in the same c7000 enclosure can still communicate with each other, but they will be separated from the servers in the other enclosure.
We have 4 Hyper-V servers in one enclosure and 3 in the other. I need to reboot the core switch, and I'm wondering what will happen when I disconnect it.
How can I avoid having to shut everything down for this and instead just tell the Hyper-V cluster not to do anything when the network is lost?

Hi Quadrantids,
" to essentially "pause" the hyper-v cluster and keep things running but
do NOT attempt to failover anything for any reason"
Based on my understanding, you need to keep the cluster running within a single C7000 enclosure. In other words, before you cut the connection between the C7000 enclosures, you can live migrate the VMs onto nodes in the same enclosure so they keep running (I assume the storage will not be affected by the restart).
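If it helps, here is a rough PowerShell sketch of that approach (the node names are hypothetical, and it assumes the FailoverClusters module on a management host). The heartbeat properties at the end are an optional, commonly used knob for tolerating short network interruptions; the values are examples only, not recommendations:

# Sketch only - node names are hypothetical; adjust to your cluster.
Import-Module FailoverClusters

# Nodes in the enclosure that will keep talking to each other during the reboot
$targetNodes = 'HV01','HV02','HV03','HV04'

# Live migrate every clustered VM role onto one of those nodes
$i = 0
Get-ClusterGroup |
    Where-Object { $_.GroupType -eq 'VirtualMachine' -and $_.OwnerNode.Name -notin $targetNodes } |
    ForEach-Object {
        $dest = $targetNodes[$i % $targetNodes.Count]; $i++
        Move-ClusterVirtualMachineRole -Name $_.Name -Node $dest -MigrationType Live
    }

# Optionally relax the cluster heartbeat thresholds for the maintenance window (example values)
$cluster = Get-Cluster
$cluster.SameSubnetDelay = 2000      # milliseconds between heartbeats
$cluster.SameSubnetThreshold = 20    # missed heartbeats tolerated before a node is marked down

Note that heartbeat settings alone will not keep the smaller partition in quorum once the enclosures are isolated from each other, which is why moving the VMs onto one side first, as suggested above, is the important part.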
Best Regards
Elton Ji

Similar Messages

  • Building a Hyper-V Cluster with Server 2012 R2

    I'm building a 3 node Hyper-V cluster with Server 2012 R2.  I created the cluster, but the report is telling me I should configure a disk witness as a best practice.  Do I need that with a 3 node cluster?  In 2008 R2, 3 nodes give me node majority
    and I'm good to go.
    Thanks
    SMaximus7

    True, but Windows Server 2012 introduced a dynamic quorum capability.  When you have an odd number of nodes in the cluster, you really don't need a witness to ensure availability.  But with a witness, what happens is that if you lose a
    node, the witness comes in, so there are three quorum votes - one for each surviving node and the witness.  Then, if another node fails, the cluster will continue to be operational because you still have the witness and the surviving node to maintain quorum.
    Not absolutely needed, but a real nice-to-have.  And, since it just takes such a small disk for a disk witness, or a file share for a file share witness, there is really no reason not to include it.
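    If you do add one, here is a quick PowerShell sketch of configuring either witness type (the disk name and share path below are placeholders):

    # Sketch only - witness disk name and share path are placeholders.
    Import-Module FailoverClusters

    # Option A: node and disk majority (disk witness)
    Set-ClusterQuorum -NodeAndDiskMajority 'Cluster Disk Witness'

    # Option B: node and file share majority (file share witness)
    Set-ClusterQuorum -NodeAndFileShareMajority '\\FS01\ClusterWitness$'

    # Confirm the resulting quorum configuration
    Get-ClusterQuorum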
    . : | : . : | : . tim

  • Correct binding order in a Cluster with logical switches, NIC teams, and vNICs on the host.

    I have seen many recommendations to set the network binding order on you Hyper-V hosts to something similar to:
    Management NIC
    Cluster NICs
    iSCSI NICS
    However, all of  these recommendations are for scenarios where the NICs are all physical NICs in the host.
    Using Server 2012 R2, I am building converged networks with logical switches, NIC Teams, and vNICs on the host.  So when I go set the network binding order, I now have all these components to deal with as well.  For example, on a 4 adapter blade,
    I might typically have the following items in the binding order drop-down.
    4 - physical NICs (2- teamed for the 1 virtual switch, the other 2 used for iSCSI)
    1 - Team interface (Datacenter_Switch)
    5 - vNICs (Management, Cluster, LiveMigration, iSCSI-1, iSCSI-2)
    So, should you only worry about the order of the vNICs (placed at the top) and let the other components just fall to the bottom of the list?  This seems likely to me, since the binding order applies to service access to the resources, and the other
    components are not being directly accessed by network services.
    Or, should the order start out with the physical resources needed to access the vNICs, followed by any intermediate resources (switches or team interfaces), then the vNICs themselves, to ensure that the resources are available to the subcomponents accessing
    them?
    Any help would be appreciated.
    Thanks.
    -Tim Reid

    If by 'network binding order' you mean the order set in the Advanced Settings of the Network Connections of the Control Panel, then the most important one is to make sure the domain network is at the top of the list.  Whichever network is at the top
    of the list is used first for auth functions.  So auth functions perform best when the proper network is placed first in the binding order.  After that, I don't know that it makes much difference at all.  (If it does, I'm sure my statement will
    start a lively discussion. <grin>)
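    There is no in-box cmdlet for that Advanced Settings binding-order dialog as far as I know, but on 2012 R2 the interface metric gives you a similar way to prioritize the management network from PowerShell; a sketch, with example interface aliases:

    # Sketch only - interface aliases are examples; check Get-NetIPInterface output for the real names.
    Get-NetIPInterface -AddressFamily IPv4 |
        Sort-Object InterfaceMetric |
        Format-Table InterfaceAlias, InterfaceMetric, ConnectionState

    # Give the management (domain-facing) vNIC the lowest, i.e. most preferred, metric
    Set-NetIPInterface -InterfaceAlias 'vEthernet (Management)' -InterfaceMetric 10
    Set-NetIPInterface -InterfaceAlias 'vEthernet (Cluster)' -InterfaceMetric 20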
    . : | : . : | : . tim

  • Are G4 towers standard with Gigabit switches?  What about mac 1.25gh minis?

    I'm looking at buying an old G4 tower to use as a backend media server for HD video. Was wondering if they came with gigabit switches or if they have to be upgraded. if so, any suggestions on switches?

    I'm not completely sure of the answer, but I'll give it an educated guess.
    I believe that if you are connecting the Mini to the Ethernet ports on the AirPort Extreme router, then only that port will operate at 100Mbps and the rest will operate at 1000Mbps. If you have two Macs with gigabit, you could run a test to find out. Plug both into the AirPort Extreme and see how long it takes to copy a large file. Then, plug the Mini into the AirPort Extreme and do the transfer between the two gigabit machines again. If it takes significantly longer, then the Mini is slowing down the entire network.
    Now, if you're using the wireless capabilities of these computers you will get a network slowdown if some machines are using 802.11b and some are using 802.11g. In this case, all machines will slow down to the 802.11b speed.

  • Windows 2008 r2 guests blue screen on Windows 2012 R2 Hyper-V Cluster with e5 2670-v2 processors

    Hello all,
    We have a new hyper-v infrastructure deployed in two brand new Dell R720 Servers with 384GB of Memory and dual Intel e5 2670-v2 processors. This infrastructure is replacing an existing hyper-v 2008 R2 and all the guests are being migrated to this new cluster.
    The issue we are seeing is our 2008 R2 guests blue screening occasionally with 0x0000001a, 0x0000004e or 0x00000050 bugchecks.
    All these guests are configured with dynamic memory and with the integration components up to date. These same guests were running with no problems in the Hyper-V 2008 R2 cluster.
    While searching I found this article from VMware that pretty much describes what we are facing:
    http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2073791
    Are you aware of the same problem with this kind of hardware on Windows 2012 R2 Hyper-V?
    Thanks!
    Nuno Carvalho

    Meanwhile, I found this May 2014 update in the Intel specification update for the CPU we are using:
    CA135 Incorrect Page Translation when EPT is enabled
    Problem:
    If EPT (Extended Page Tables) is enabled, then a complex sequence of internal processor events may result in unexpected page faults or use of incorrect page translations.
    Implication:
    Due to this erratum a guest may crash or experience unpredictable system behavior.
    Workaround:
    It is possible for the BIOS to contain a workaround for this erratum.
    Status:
    For the affected steppings, see the Summary Tables of Changes.
    This affects VMware as of today's update to the article I referenced in the first post. What about Hyper-V?
    http://www.intel.com/content/dam/www/public/us/en/documents/specification-updates/xeon-e5-v2-spec-update.pdf
    Nuno Carvalho

  • Application dying with core dump on what appears to be berkeley

    I have a web app being run on a Solaris server. This web app ran for approximately 20 hours before crashing. I have the core dump file, but it is quite large (11GB). Using mdb I am able to get the stack trace from the core dump. Could this be an issue with a corrupted BDB? Or could it be corrupted environment file(s)? A similar but different error (with a slightly different stack trace) happened on another server a few hours later. We have 3 other servers that continue to run successfully with the same BDB objects.
    We are not writing to any of the BDBs. They are read only. Same with the secondary BDBs. It is a Java webapp interacting with the native Solaris libraries. I forgot to add, this is a 64 bit machine.
    If you have any ideas/suggestions/need more info please let me know.
    Top of the stack trace below.
    libc.so.1`_lwp_kill+8(6, 0, ffffffff7ef45538, ffffffffffffffff, ffffffff7ef3a000, 0)
    libc.so.1`abort+0x118(1, 1d8, ffffffff7e2fc6f8, 1ef13c, 0, 0)
    libjvm.so`__1cCosFabort6Fb_v_+0x58(1, 1, 2dbc8, ffffffff7e69e000, 3abb94, 2d800)
    libjvm.so`__1cHVMErrorOreport_and_die6M_v_+0xcb4(ffffffff7e700480, 0, 1, ffffffff7e70ace0, ffffffff7e6cbb80, ffffffff7e55c873)
    libjvm.so`JVM_handle_solaris_signal+0xa6c(a, fffffffccbefd500, fffffffccbefd220, 1a0c00, 101b8a800, 280000)
    libc.so.1`__sighndlr+0xc(a, fffffffccbefd500, fffffffccbefd220, ffffffff7ddf5df0, 0, 9)
    libc.so.1`call_user_handler+0x3e0(ffffffff7bc15a00, ffffffff7bc15a00, fffffffccbefd220, c, 0, 0)
    libc.so.1`sigacthandler+0x54(0, fffffffccbefd500, fffffffccbefd220, ffffffff7bc15a00, 0, ffffffff7ef3a000)
    libdb_java-4.6.so`__env_alloc_free+0x140(100bab1c0, fffffffc6c0c75f8, fffffffc8bcff5dc, 1a, 66e, fffffffccbefe461)
    libdb_java-4.6.so`__memp_free+0x1c(100bab1c0, fffffffc82123140, fffffffc6c0c75f8, fffffffccbefe710, 0, fffffffc6a00f1d0)
    libdb_java-4.6.so`__memp_bhfree+0x6fc(100fc46e0, 100bab1c0, fffffffc6a00f1c8, fffffffc6c0c75f8, 1, 3)
    libdb_java-4.6.so`__memp_alloc+0x1df8(100fc46e0, 100bab1c0, fffffffc82100698, 0, 0, fffffffccbefdeb0)
    libdb_java-4.6.so`__memp_fget+0x233c(100ba5e50, 10142eb84, 0, 1, 10142eb78, 0)
    libdb_java-4.6.so`__ham_get_cpage+0x33c(101258860, 1, 4, 1, 10142ebc8, 10142ebb0)
    libdb_java-4.6.so`__ham_lookup+0x104(101258860, fffffffccbefe770, 0, 1, fffffffccbefe34c, 4c000)
    libdb_java-4.6.so`__hamc_get+0x278(101258860, fffffffccbefe770, fffffffccbefe710, 1a, fffffffccbefe34c, 0)
    libdb_java-4.6.so`__dbc_get+0x81c(101258860, fffffffccbefe770, fffffffccbefe710, 1a, 66e, fffffffccbefe461)
    libdb_java-4.6.so`__db_get+0x1a4(1012b2570, 0, fffffffccbefe770, fffffffccbefe710, 0, 4c000)
    libdb_java-4.6.so`__db_get_pp+0x3e0(1012b2570, 0, fffffffccbefe770, fffffffccbefe710, 0, fffffffccbefe700)
    libdb_java-4.6.so`Db_get+0x40(1012b2570, 0, fffffffccbefe770, fffffffccbefe710, 0, 0)
    libdb_java-4.6.so`Java_com_sleepycat_db_internal_db_1javaJNI_Db_1get+0x128(101b8a9b8, fffffffccbefe8e8, 1012b2570, 0, fffffffccbefe8c8, fffffffccbefe8d0)
    0xffffffff78391410(1012b2570, 0, ffffffff11fc9a38, ffffffff11fc9a70, 0, fffffffccbefe101)
    0xffffffff78005eac(ffffffff559ee358, b6, fffffffce2f93180, ffffffff78018100, 7124, fffffffccbefe221)
    0xffffffff78005eac(ffffffff559ee338, b6, fffffffce2f92f60, ffffffff78017d20, ae1, fffffffccbefe331)
    0xffffffff78005e60(ffffffff5fad4f80, b7, fffffffce2ff6ac0, ffffffff78017d28, 66e, fffffffccbefe461)
    0xffffffff78005e60(ffffffff5fad4f80, fffffffce150e618, fffffffce2f9dd20, ffffffff78017f60, 5, fffffffccbefe5e1)
    0xffffffff780063b8(ffffffff5fad4f20, b7, 0, ffffffff78018200, 1e400, fffffffccbefe701)
    0xffffffff78005e60(ffffffff5fad4f20, fffffffce14a5828, 0, ffffffff78017f60, ffffffff11ff0cf0, fffffffccbefe811)
    0xffffffff780063b8(ffffffff5f995ce0, fffffffce145f868, 0, ffffffff78017ce0, ffffffff11ff0cf0, fffffffccbefe931)
    0xffffffff780063b8(ffffffff5f995d78, b6, 0, ffffffff78018200, ffffffff11ff0cf0, fffffffccbefea51)
    0xffffffff78005fdc(fffffffef354e068, fffffffce00427e0, 0, ffffffff78017ce0, 0, fffffffccbefeb71)
    0xffffffff78006534(fffffffef354e0f8, b7, 0, ffffffff78018200, 0, fffffffccbefecb1)
    0xffffffff78005fdc(fffffffef354e0f8, fffffffce00427e0, 0, ffffffff78017f60, 912c14, fffffffccbefedc1)
    0xffffffff78006534(fffffffccbefff50, 60800, 0, ffffffff78018200, fffffffef354e178, fffffffccbefeeb1)
    0xffffffff78000240(fffffffccbeff8a0, fffffffccbeffd50, a, fffffffce00441e8, ffffffff7800bda0, fffffffccbeffb48)
    libjvm.so`__1cJJavaCallsLcall_helper6FpnJJavaValue_pnMmethodHandle_pnRJavaCallArguments_pnGThread__v_+0x1f4(1, 101b8a800, fffffffccbeffb38, a,
    ffffffff780001e0, fffffffccbeff870)
    libjvm.so`__1cJJavaCallsMcall_virtual6FpnJJavaValue_nLKlassHandle_nMsymbolHandle_4pnRJavaCallArguments_pnGThread__v_+0x130(fffffffccbeffd48,
    ffffffff7e6ea148, fffffffef354e178, 4c148, fffffffccbeffb38, ffffffffff6f4950)
    libjvm.so`__1cJJavaCallsMcall_virtual6FpnJJavaValue_nGHandle_nLKlassHandle_nMsymbolHandle_5pnGThread__v_+0x50(fffffffccbeffd48, 100c8f880, 100c8f888,
    ffffffff7e71c2b0, ffffffff7e71c798, 101b8a800)
    libjvm.so`__1cMthread_entry6FpnKJavaThread_pnGThread__v_+0xf0(fffffffef354e178, 101b8a800, 7dd50, 85fc64, ffffffff7e71bd50, 7dc00)

    Hello,
    If you can, one thing that might help is rebuilding the Berkeley DB library with --enable-debug and --enable-diagnostic. With --enable-debug we might get line numbers and see exactly where we are in __env_alloc_free and what the input parameters leading up to the abort are. With --enable-diagnostic, run-time checking is in place, which might also help show what is causing this. (See http://download.oracle.com/docs/cd/E17076_02/html/installation/build_unix_conf.html) Another thing to do, if you are not using a private environment, is to look at the db_stat -E output:
    http://download.oracle.com/docs/cd/E17076_02/html/api_reference/C/db_stat.html
    Thank you,
    Sandra

  • Hyper-V cluster validation report "Found duplicate physical address" on nic team interfaces.

    I recently built a Windows 2012 Hyper-V cluster with 5 nodes. The validation report shows “duplicate physical address” error (error text pasted below).
    The hardware: HP BladeSystem – servers are BL460c blades, in a c7000 enclosure, connected to HP Virtual Connect switches.
    Each server has 2 physical NICs, teamed in Windows. In the NIC Teaming console, I created the following team interfaces and assigned each a VLAN ID:
    “Team1” (the default team)
    “Team1 - VLAN 204 – Management”
    “Team1 - VLAN 212 - 2012HB”
    “Team1 - VLAN 211 -Exchange DAG Replication”
    I also created 2 HV Virtual Switches. Neither one allows management interface to share. They are assigned to “Team1” and “Team1 - VLAN 211 -Exchange DAG Replication” respectively.
    Therefore, in Network Connection, I see the 2 physical Ethernet nics, and 4 “virtual” nics. Only 2 of them have IP addresses assigned: Management and HB. These are the two that the validation wizard complains
    about.
    The MAC address is not configurable in the NIC Teaming console, so I don’t see a way to resolve this error, except to use separate physical nics. I don’t want to do that because a) I would lose the benefits of
    the bandwidth aggregation that Virtual Connect provides, and b) When creating an Interface on a Team in Windows, it looks like it ALWAYS gives it the same MAC address, so that should be a supported configuration.
    Everything works just fine, and there are no other errors or IP conflicts or anything else. But I really want to fix it because I don’t know what unknown problems this may be causing.
    From the Cluster Validation report:
    Found duplicate physical address 10-60-4B-A9-4A-30 on node Cluster201.OurDomain.local adapter
    Team1 - VLAN 212 - 2012HB and node Cluster201.OurDomain.local adapter
    Team1 - VLAN 204 - Management.
    Found duplicate physical address F0-92-1C-13-3C-2C on node Cluster202.OurDomain.local adapter
    Team1 - VLAN 212 - 2012HB and node Cluster202.OurDomain.local adapter
    Team1 - VLAN 204 - Management.
    Found duplicate physical address 68-B5-99-C1-7E-9C on node Cluster210.OurDomain.local adapter
    Team1 - VLAN 212 - 2012HB and node Cluster210.OurDomain.local adapter
    Team1 - VLAN 204 - Management.
    Found duplicate physical address 3C-4A-92-DE-1E-74 on node Cluster211.OurDomain.local adapter VC-Team - VLAN 212 - 2012HB and node Cluster211.OurDomain.local adapter
    VC-Team - VLAN 204 - Management.
    Found duplicate physical address 68-B5-99-C0-3D-50 on node Cluster212.OurDomain.local adapter
    Team1 - VLAN 212 - 2012HB and node Cluster212.OurDomain.local adapter
    Team1 - VLAN 204 - Management.
    Thanks!
    Dan

    Hi Dan,
    "It turns out that both hosts had the same default MAC address ranges for their virtual switches. Since the host vNICs were attached to the virtual switch on each host they received the first couple of MAC addresses from the switches.
    For details please refer to following link:
    http://www.jefflafr.com/blog/4/19/2013/conflicting-mac-addresses-when-building-a-hyper-v-cluster-with-converged-networking
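    If the ranges do overlap, here is a rough sketch of checking and adjusting the dynamic MAC pool per host (the replacement range and vNIC name below are examples only):

    # Sketch only - run on each Hyper-V host; the replacement range and vNIC name are examples.
    Get-VMHost | Select-Object ComputerName, MacAddressMinimum, MacAddressMaximum

    # Give this host a unique dynamic MAC pool
    Set-VMHost -MacAddressMinimum 00155D100000 -MacAddressMaximum 00155D1000FF

    # Or pin a static MAC on a specific host vNIC
    Set-VMNetworkAdapter -ManagementOS -Name 'Management' -StaticMacAddress 00155D1000AA

    Existing host vNICs may keep the MAC they already received until they are recreated, so check them again afterwards.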
    Hope this helps
    Best Regards
    Elton Ji

  • Choosing a Core Switch

    Hi,
    What are the criteria for choosing a core switch? For example, in the Cisco product pages, the Catalyst 4500 and 6500 are already distribution/core switches while the Catalyst 3750 are access/edge switches.
    Can I make a stack of Catalyst 3750 my core switch? What makes a core switch a "core" switch - what features does it have, performance, etc.?
    Does Cisco have a guide - for example, you have X number of users - use Cisco Y model as your core switch?
    Thanks,
    Tony

    There are many criteria one can use when choosing a core device, but since such a device, by being at the center of your network, may carry the most traffic, performance is often given additional weight in the choice.
    With regard to making a choice based on some X number of users, the choice of core is usually driven more by the bandwidth usage of the core ports. There's often a large difference between the nominal bandwidth of a port and the sustainable bandwidth to/from a port. (E.g. the difference between a 6500 with a Sup32 and 6148 10/100/1000 Ethernet line cards vs. a 6500 with a Sup720 and 6748 10/100/1000 Ethernet line cards with DFCs. The former is suited as an edge device; the latter is more suited as a core device.)
    A stack of 3750s might be used as a core for a very small and/or light-usage network. Consider that a single 48-port 3750, I believe, is not a wire-rate device on every port, and consider the performance limitation of the stack ring. However, similar performance limitations are also true for certain 4500 or 6500 hardware configurations.
    Although performance is often a major factor, other considerations, such as other features, might be important too. For instance, a dual 48-port 3750G stack might be a viable choice vs. a 6704 with dual Sup32s and two 6148 line cards, but the 6500 will likely offer features not available with the 3750. For instance, I believe 3750s only support 32 HSRP groups and don't support GLBP.

  • Best design for HA Fileshare on existing Hyper-V Cluster?

    Have a three node 2012 R2 Hyper-V cluster. The storage is an HP MSA 2000 G3 SAS block storage array with CSVs.
    We have a fileserver for all users running as VM on the cluster. Fileserver availability is important and it's difficult to take this fileserver down for the monthly patching. So we want to make these file services HA. Nearly all clients are Windows 8.1,
    so SMB 3 can be used. 
    What is the best way to make these file services HA?
    1. The easiest way would probably be to migrate these file server resources to a dedicated LUN on the MSA 2000 and add a "general file server role" to the existing Hyper-V cluster. But is it supported and a good solution to provide Hyper-V VMs
    and HA file services on the same cluster (even when the performance requirements for the file services are not high)? Or does this configuration affect the Hyper-V VM performance too much?
    2. Is it better to create a two node guest cluster with "Shared VHDX" for the file services? I'm not sure if this would even work, because we had "Persistent Reservation" warnings when creating the Hyper-V cluster with the MSA 2000. According to "http://blogs.msdn.com/b/clustering/archive/2013/05/24/10421247.aspx",
    these warnings are normal with block storage and can be ignored as long as we never want to create Windows storage pools or storage spaces. But the Hyper-V MMC shows that "Shared VHDX" works with "persistent reservations".
    3. Are there other possibilities to provide HA file services with this configuration without buying new HW? (Remark: DFSR with two independent file servers is probably not a good solution; we have a lot of data that changes frequently.)
    Thank you in advance for any advice and recommendations!
    Franz

    Hi Franz,
    If you are not going to be using Storage Spaces in the cluster, this is a warning that you can safely ignore.
    It passes the normal SCSI-3 Persistent Reservation tests, so you are good with those. Additionally, once the cluster is running you can configure Cluster-Aware Updating (CAU), which will install cluster updates automatically.
    The related KB:
    Requirements and Best Practices for Cluster-Aware Updating
    https://technet.microsoft.com/en-us/library/jj134234.aspx
    Cluster-Aware Updating: Frequently Asked Questions
    https://technet.microsoft.com/en-us/library/hh831367.aspx
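    For reference, a minimal sketch of enabling CAU from PowerShell (the cluster name is a placeholder):

    # Sketch only - cluster name is a placeholder.
    Import-Module ClusterAwareUpdating

    # Add the CAU clustered role so updates are applied on a schedule
    Add-CauClusterRole -ClusterName HVCLUSTER01 -Force

    # Or trigger a one-off updating run right away
    Invoke-CauRun -ClusterName HVCLUSTER01 -Force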

  • Hyper-V Cluster in VMM

    I am trying to build a Hyper-V cluster with VMM 2012 R2 but require some advice as it is not working how I want it to.
    I have 2 Hyper-V servers, both with their own local storage and 1 iSCSI disk shared between them. I am trying to cluster the servers so that the shared iSCSI disk becomes a shared volume while maintaining the ability to use the local storage as well - some
    VMs will run from local storage while others will run from the CSV.
    The issue I'm having is that when I cluster the 2 servers the iSCSI disk does not show up in VMM as a shared volume. In Windows Explorer the disk has the cluster icon but in VMM there is nothing. In the cluster properties I can add a shared volume... but
    it asks for a logical node which I cannot create because I have no storage pools (server manager says no groups of disks are available to pool).
    I also noticed when I clustered the servers my 2 file shares to their local storage disappeared from VMM which isn't what I want.
    Can someone please advise, or link to, a way to achieve my desired configuration?
    Cheers,
    MrGoodBytes

    Hi MrGoodBytes,
    Unfortunately, the available information is not enough to get a clear view of the behavior that occurred. Could you provide more information about your environment? For example, the server version you are seeing the problem on, the system log entries recorded when the problem occurs, and screenshots would be the most useful information.
    Before you create the cluster we strongly recommend you run the cluster validation. If you suspect the cluster may have an issue, please rerun the validation and then
    post the warning and error sections of the validation report; the report will quickly locate potential cluster issues.
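    For example, a quick sketch of re-running validation and checking the cluster storage from PowerShell (node and disk names are placeholders):

    # Sketch only - node and disk names are placeholders.
    Import-Module FailoverClusters

    # Validate the configuration and review the generated report
    Test-Cluster -Node HV01, HV02

    # List clustered disks and any existing Cluster Shared Volumes
    Get-ClusterResource | Where-Object { $_.ResourceType -like 'Physical Disk' }
    Get-ClusterSharedVolume

    # A disk sitting in Available Storage can be converted to a CSV by name
    Add-ClusterSharedVolume -Name 'Cluster Disk 1'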
    A disk witness is a disk in the cluster storage that is designated to hold a copy of the cluster configuration database. A failover cluster has a disk witness only if this
    is specified as part of the quorum configuration.
    Configure and Manage the Quorum in a Windows Server 2012 Failover Cluster
    http://technet.microsoft.com/zh-cn/library/jj612870.aspx
    I am not familiar with SCVMM, so please refer to the following related articles to confirm that your shared storage configuration steps are correct.
    How to Configure Storage on a Hyper-V Host Cluster in VMM
    http://technet.microsoft.com/en-us/library/gg610692.aspx
    Configuring Storage in VMM
    http://technet.microsoft.com/en-us/library/gg610600.aspx
    More information:
    How to add storage to Clustered Shared Volumes in Windows Server 2012
    http://blogs.msdn.com/b/clustering/archive/2012/04/06/10291490.aspx
    Configure and Manage the Quorum in a Windows Server 2012 Failover Cluster
    http://technet.microsoft.com/zh-cn/library/jj612870.aspx
    Event Logs
    http://technet.microsoft.com/en-us/library/cc722404.aspx

  • Backup fails for a Hyper-V guest with VSS Writer failures using DPM 2012 R2 - Hyper-V guest has Oracle application installed,

    I am trying to back up a VM which, before it had Oracle installed, backed up with no problems.  It is a Windows 2008 R2 server which sits on an 8 node Hyper-V 2012 R2 cluster (with CSVs).  I am using DPM 2012 R2 to run the backups, which have been
    successful for the last few weeks but then Oracle was installed and the backups have failed since then.
    The job fails and a large number of VSS writers go into a 'Failed' state with the 'Last Error' showing as 'Timed out'.  I then get 4 popups appear each referring to a 'temporary' drive which appears briefly in Disk Management with a RAW file system. 
    These popups say "You need to format the disk in drive X: before you can use it.  Do you want to format it?" (where
    X: is replaced by the drive letter assigned to each 'temporary' drive).
    The System event log is populated with a large number of warnings with Event 51, Disk, stating "An error was detected on device \Device\Harddisk<number>\DR<number> during a paging operation".  There are also a
    few warnings for Ntfs (eventid 57) stating that "The system failed to flush data to the transaction log.  Corruption may occur."
    Prior to these warnings there are 5 other warnings for partmgr (eventid 58) stating "The disk signature of disk
    <number x> is equal to the disk signature of disk <number y>" and 4 errors (eventid 1, VDS Basic Provider), "Unexpected failure.  Error code: 490@01010004".
    There is a script which is run to stop the Oracle application on the server and if this is run then the backups will complete successfully.  We have been troubleshooting this by running a certain amount of the script and seeing which part affects the
    backup and it seems that if the Weblogic (wls_reports) service is stopped then the backup will succeed but if it is running then the backup will fail and the above symptoms occur.
    Another point which may help is that there is a pre-production server which resides on a Windows 2008 R2 Hyper-V standalone server, has the same scripts and installation of Oracle but backs up without any issues.
    I have experienced VSS writer failures before with VM backups but I have not seen this before.  It is not intermittent and I can find no work around to alleviate the problem of having no backup (except stopping this service or shutting the server down,
    but as it is a production server this is not practical).
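    For context, the failed/timed-out writer state inside the guest can be re-checked after a failed job with something like this (diagnostic sketch only):

    # Diagnostic sketch - run inside the 2008 R2 guest after a failed backup
    vssadmin list writers      # look for writers left in a 'Failed' / 'Timed out' state
    vssadmin list providers
    Get-WinEvent -LogName Application -MaxEvents 50 |
        Where-Object { $_.ProviderName -like 'VSS*' }   # recent VSS errors and warnings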
    If anyone has experienced this before or have any suggestions / advice it would be much appreciated.
    Thanks
    Chris

    Hi Chris
    I have exactly the same issue.
    2008 R2 Server running Oracle
    6 node 2012 R2 Hyper-V Cluster with CSV
    Exact same errors and popup "format disk" message.
    DPM2012 R2
    I also have some other VMs on the same cluster which do back up through DPM
    The only difference I can see is the server which has the issue has a legacy network adapter.
    Production server so cannot make any changes until later but will add new adapter and report back.
    Interesting about the Weblogic service, may test this also.
    Cheers
    Kev.

  • Oracle Cluster, Oracle Cluster with RAC and Oracle 10g

    Is there a difference between Oracle Cluster and Oracle Cluster with RAC? Please explain. Do existing database codes run unmodified in Cluster or Cluster with RAC environment? What needs to be modified to make existing SQL codes RAC-aware. How to achieve 'all automatic' in case of failure and resubmission of Queries from failed instance to a running instance?
    In 10g environment, do we need to consider licensing of RAC as a separate product? What are additional features one derives in 10g that is not in Cluster +RAC?
    Your comments and pointers to comparison study and pictorial clarification will be very helpful.

    Oracle Cluster, like Failsafe before it, Veritas Cluster, or other vendors' clusters, is meant for HA (high availability) purposes: two or more nodes can see a shared disk, with one active node. Whenever this active node fails, the other machines learn of it through the heartbeat and one of them takes the database over from there.
    Oracle RAC is for both HA and load balancing. In Oracle RAC, two or more nodes access the database at the same time, so the load is spread across all these nodes.
    I believe Oracle 10g RAC still needs a separate license, but you need to call Oracle or check the product documentation to verify that.
    Besides the improvements in RAC, Oracle 10g's main improvement is the built-in management of the database itself. It can monitor and self-tune itself to a much further level than before and give the DBA much more information to determine the cause of a problem. There are also improvements to lots of utilities, like RMAN, Data Pump, etc. I don't want to get into too much detail on this; you can check the 10g new features for a more detailed view.
    Hope this help. :)

  • Write 1d array of cluster with 2 elements to file

    Hi!
    I'm new to LabVIEW, so I hope the question is very simple for you:
    I am using a waveform graph to display several measurements.
    Now I want to save these measurements. The data are stored in a 1D array of clusters with 2 elements.
    What is the easiest way to do this?
    Thanks

    If you are already using waveforms you can simply wire right up to "Write Waveforms to File.vi" in the Waveform palette under Waveform File I/O.
    Daniel L. Press
    PrimeTest Corp.
    www.primetest.com

  • High Network Utilization on HeartBeat and LiveMigration networks - Windows 2012 R2 Hyper-V cluster

    Hi all,
    I have just setup a fresh Windows 2012 R2 Hyper-V cluster with 4 nodes.
    Config:
    Network:
    2 x Networks for iSCSI
    1 x VM network 1
    1 x VM network 2
    1 x Heartbeat
    1 x LiveMigration
    Disks:
    5 GB = Quorum
    2 x 2.99 TB (Deduplication enabled)
    The problem I am getting is that the HeartBeat and LiveMigration networks (both configured for cluster-only traffic) have a lot of traffic on them even though no live migration (LiveMigration is the only card configured for it) is going on.
    All network configuration is the same as it was in the Windows 2008 R2 SP1 cluster these machines were running before (not an upgrade; a fresh install and migration), and it did not have this "problem".
    Has anyone experienced this or have a solution to this?
    Regards,
    Thorir
    thorir

    Hi Thorir,
    According to your description, you can try to monitor and analyze the live migration traffic via Network Monitor for troubleshooting.
    Here are the links for downloading and using Network Monitor:
    http://www.microsoft.com/en-us/download/details.aspx?id=4865
    http://technet.microsoft.com/en-us/library/cc723623.aspx
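    Alongside a capture, it may also be worth confirming which networks the cluster and live migration are actually allowed to use; a quick sketch:

    # Sketch only.
    Import-Module FailoverClusters

    # Role: 0 = not used by the cluster, 1 = cluster only, 3 = cluster and client
    Get-ClusterNetwork | Format-Table Name, Role, Metric, Address

    # Which node interfaces sit on which cluster network
    Get-ClusterNetworkInterface | Format-Table Node, Network, Name

    # Networks Hyper-V will use for live migration (run on each host)
    Get-VMMigrationNetwork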
    If you need any further information, please feel free to let us know.
    Hope this helps
    Best Regards
    Elton Ji

  • Hyper V hosts with Cluster Shared Volume do not provide Shared VHDX ability

    Hi there, i've configured a pair of Hyper V hosts (UCS B230 M2) with SAN-Boot and access to a Shared Fiber Channel LUN.
    No matter what I try, when I add a SCSI adapter and one VHDX virtual disk to a VM, the "Advanced" option never shows up, therefore I cannot share the VHDX file with another VM, which prevents any VM-based clustering from working (i.e. a clustered
    SQL Server).
    Here are the settings I have configured:
    The LUN has been formatted with NTFS
    The Hyper V hosts have been configured as a Windows Cluster with an IP address and the Fiber Channel LUN configured as a Cluster Shared Volume
    I followed the guides below
    Create a CSV
    https://technet.microsoft.com/en-us/library/dn265980.aspx
    Create a shared VHDX
    https://technet.microsoft.com/en-us/library/dn282283.aspx
    Further details:
    Two HyperV server core hosts built
    Both joined to AD
    Both joined to a Windows failover cluster
    One 2TB Fibre Channel LUN presented from an EMC VNX array to both hosts
    LUN formatted as NTFS
    LUN created as a Cluster Shared Volume
    Cluster Shared Volume present on both HyperV hosts
    VM created within the Cluster Shared Volume
    VM Booted up and Windows 2012 has been installed
    VM gracefully shut down
    Through the Hyper-V Manager, going into the VM settings, creating a SCSI adapter, and creating a new VHDX file inside the CSV, the "Advanced" option for the VHDX is not present, therefore the option to make the VHDX file shared is not available
    I have followed all steps in the Microsoft TechNet pages to create a CSV and a VM with a shared VHDX
    I’ve cleared all settings and restarted multiple times, but this option is not present

    Hi Sir,
    >>No matter what I try, when I add a SCSI Adapter and one VHDX virtual disk to a VM, the option for "Advanced" never shows up therefore I can not share the VHDX file between another VM
    >>Yes we are using Hyper-V Server 2012 R2
    As we know, Hyper-V Server 2012 R2 is a free version without a GUI. I'm afraid you are not using Windows 8.1 or another Windows Server 2012 R2 machine to manage that free Hyper-V Server.
    Shared VHDX is a new feature that came with 2012 R2; if you use a previous version of Windows to manage 2012 R2, you will not see the "Advanced" option (the original post included a screenshot comparing the 2012 and 2012 R2 dialogs).
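    If the management tooling is the constraint, the same setting can also be applied from PowerShell on the 2012 R2 host itself; a sketch with a hypothetical VM name and path:

    # Sketch only - VM name and VHDX path are hypothetical; run on the 2012 R2 host.
    # -SupportPersistentReservations is the switch that corresponds to the shared VHDX option in the GUI.
    Add-VMHardDiskDrive -VMName 'SQLNODE1' -ControllerType SCSI -ControllerNumber 0 -Path 'C:\ClusterStorage\Volume1\Shared\data.vhdx' -SupportPersistentReservations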
    Best Regards,
    Elton Ji
