Hyper-V Physical SAN 2-Node Network: Best Practice

Hi, I have been reading a lot of articles and guides on how to set this up correctly, and all of the guides are different.
My system:
2 x servers, each with 4 Gigabit NICs and 2 FC cards.
1 x SAN with 2 FC controllers, each with 2 connection points (2 for each server).
What is the best practice for setting this up? The FC will be connected directly to the servers.
Should I team 2 of the Gigabit NICs for live migration and 2 for external traffic, and run the rest over FC?
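A minimal PowerShell sketch of the NIC split described above (adapter and team names are placeholders, assuming Windows Server 2012 or later): one team for cluster/live migration traffic, one team bound to the external VM switch, with storage staying on the direct-attached FC links.

    # Team two NICs for cluster/Live Migration traffic (adapter names are hypothetical)
    New-NetLbfoTeam -Name "Team-LM" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent
    # Team the other two NICs and bind the external Hyper-V switch to that team
    New-NetLbfoTeam -Name "Team-VM" -TeamMembers "NIC3","NIC4" -TeamingMode SwitchIndependent
    New-VMSwitch -Name "External" -NetAdapterName "Team-VM" -AllowManagementOS $true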

Hi Martinbt,
For the details of configuring FC storage, I would suggest that you contact the storage vendor.
The goal is to create a shared LUN for the cluster nodes; please refer to the following link regarding storage requirements:
http://technet.microsoft.com/en-us/library/jj863389.aspx
Also :
http://technet.microsoft.com/en-us/library/cc732181(v=WS.10).aspx
Best Regards
Elton Ji

Similar Messages

  • Windows 2012 Hyper-V Virtual SAN Switch Best Practice

     I have a four node cluster running W2012 Datacenter edition on each.  All systems have Emulex HBA compatible with Hyper-V and Virtual SAN technology. The hosts are connected to a Brocade SAN Switch with NPIV Enabled. I've
    created a Virtual SAN on each host. The vSAN is connected to 2 physical HBA. Each HBA is connected to a different physical SAN Switch.
    I have some questions about the best practice of the whole system.
    1. Do I have to create a vSAN per physical HBA, or create one vSAN that includes both HBAs?
    2. Should I add 2 virtual HBAs per VM for fault tolerance, or one virtual HBA connected to a vSAN with 2 HBAs?
    3. Do I have to create a virtual port for each VM in the HBA GUI (OCManager)?
    Any best practice or advice?
    Thanks

    Hi,
    First, you will do almost the same as you would on a physical SAN; normally you build 2 fabrics, right?
    If you have that in place, you will also configure 2 vSANs to build up your 2 separate paths.
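    For reference, a minimal PowerShell sketch of that two-fabric layout (the WWNN/WWPN values and VM name are placeholders; substitute the HBA ports zoned to each fabric):

    # One virtual SAN per physical fabric (WWNN/WWPN values below are placeholders)
    New-VMSan -Name "vSAN-FabricA" -WorldWideNodeName C003FF0000FFFF00 -WorldWidePortName C003FF5778E50002
    New-VMSan -Name "vSAN-FabricB" -WorldWideNodeName C003FF0000FFFF00 -WorldWidePortName C003FF5778E50004
    # Two synthetic HBAs per VM, one on each fabric, for multipath fault tolerance
    Add-VMFibreChannelHba -VMName "VM01" -SanName "vSAN-FabricA"
    Add-VMFibreChannelHba -VMName "VM01" -SanName "vSAN-FabricB"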
    Here my top links for that Topic
    Hyper-V Virtual Fibre Channel Design Guide
    http://blogs.technet.com/b/privatecloud/archive/2013/07/23/hyper-v-virtual-fibre-channel-design-guide.aspx
    A very good blog covering all the scenarios you can have, from single fabric to multi-fabric.
    Here the TechNet Info around "Hyper-V Virtual Fibre Channel Overview"
    http://technet.microsoft.com/en-us/library/hh831413.aspx
    and the "Implement Hyper-V Virtual Fibre Channel"
    http://technet.microsoft.com/en-us/library/dn551169.aspx
    It is also good advice to install this hotfix:
    http://support.microsoft.com/kb/2894032/en-us
    It is a sexy feature, but if one point in the chain mishandles your NPIV packets you are lost, so in my personal view you lose what is for me the most important reason for doing Hyper-V: the separation of hardware and software :-)
    One problem I had: after live migration of a VM with a configured vFC, the destination host lost its FC connection, but only when the host was one SAN hop away from the HP storage. It turned out to be an HP driver problem; I used the original QLogic driver and everything was fine. The same goes for some HP networking drivers for Broadcom: do not trust every update, test it before putting it into production.
    Hope that helps ?
    Udo

  • Windows Server 2012 - Hyper-V - iSCSI SAN - All Hyper-V Guests stops responding and extensive disk read/write

    We have a problem with one of our deployments of Windows Server 2012 Hyper-V with a 2 node cluster connected to a iSCSI SAN.
    Our setup:
    Hosts - Both run Windows Server 2012 Standard and are clustered.
    HP ProLiant G7, 24 GB RAM, 2 teamed NICs dedicated to Virtual Machines and Management, 2 teamed NICs dedicated to iSCSI storage. - This is the primary host and normally all VMs run on this host.
    HP ProLiant G5, 20 GB RAM, 1 NIC dedicated to Virtual Machines and Management, 2 teamed NICs dedicated to iSCSI storage. - This is the secondary host and is intended to be used in case of failure of the primary host.
    We have no antivirus on the hosts and the scheduled ShadowCopy (previous version of files) is switched of.
    iSCSI SAN:
    QNAP NAS TS-869 Pro, 8 x INTEL SSDSA2CW160G3 160 GB in a RAID 5 with a hot spare. 2 teamed NICs.
    Switch:
    DLINK DGS-1210-16 - Both the network cards of the Hosts that are dedicated to the Storage and the Storage itself are connected to the same switch and nothing else is connected to this switch.
    Virtual Machines:
    3 Windows Server 2012 Standard - 1 DC, 1 FileServer, 1 Application Server.
    1 Windows Server 2008 Standard Exchange Server.
    All VMs are using dynamic disks (as recommended by Microsoft).
    Updates
    We applied the most recent updates to the hosts, VMs and iSCSI SAN about 3 weeks ago with no change in our problem, and we continually update the setup.
    Normal operation
    Normally this setup works just fine and we see no real difference in speed in startup, file copy and processing speed in LoB applications of this setup compared to a single host with 2 10000 RPM Disks. Normal network speed is 10-200 Mbit, but occasionally
    we see speeds up to 400 Mbit/s of combined read/write for instance during file repair
    Our Problem
    Our problem is that for some reason all of the VMs stop responding, or respond very slowly; you cannot, for instance, send CTRL-ALT-DEL to a VM in the Hyper-V console, or start Task Manager when already logged in.
    Symptoms (i.e. this happens, or does not happen, at the same time)
    If we look at Resource Monitor on the host, we often see extensive reads from a VHDX of one of the VMs (40-60 MByte/s) and combined writes to many files in \HarddiskVolume5\System Volume Information\{<some GUID, no file extension>}.
    The combined network speed to the iSCSI SAN is about 500-600 Mbit/s.
    When this happens it is usually during or after a VSS ShadowCopy backup, but it has also happened during hours when no backup should be running (i.e. during the daytime, when the backup finished hours ago according to the log files). There are, however, no equally extensive writes to the backup file created on an external hard drive, and this does not seem to happen during every backup (we have checked manually a few times, but it is hard to say, since this error does not seem to leave any traces in Event Viewer).
    We cannot find any indication that the VMs themselves detect any problem, and we see no increase in errors (for example storage-related errors) in the event log inside the VMs.
    The QNAP uses about 50% processing Power on all cores.
    We see no dropped packets on the switch.
    Unable to recreate the problem / find definitive trigger
    We have not succeeded in recreating the problem manually by, for instance, running chkdsk or defrag in the VMs and hosts, copying large files to and removing them from the VMs, or running CPU- and disk-intensive operations inside a VM (for instance scanning and repairing a database file).
    Questions
    Why do all the VMs stop responding, and why are there such intensive reads/writes to the iSCSI SAN?
    Could anything in our setup be unable to handle all the read/write requests, for instance the iSCSI SAN, the hosts, etc.?
    What can we do about this? Should we use Multipath I/O instead of NIC teaming to the SAN, limit bandwidth to the SAN, etc.?

    Hi,
    > All VMs are using dynamic disks (as recommended by Microsoft).
    If this is a testing environment, it's okay, but if this is a production environment, it's not recommended. Fixed VHDs are recommended for production instead of dynamically expanding or differencing VHDs.
    Hyper-V: Dynamic virtual hard disks are not recommended for virtual machines that run server workloads in a production environment
    http://technet.microsoft.com/en-us/library/ee941151(v=WS.10).aspx
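    If you do switch to fixed disks, a minimal sketch of converting an existing disk offline (paths are placeholders; the VM must be shut down and sufficient free space is assumed):

    # Convert a dynamically expanding VHDX into a fixed-size copy, then repoint the VM to the new file
    Convert-VHD -Path "D:\VMs\FileServer.vhdx" -DestinationPath "D:\VMs\FileServer-fixed.vhdx" -VHDType Fixed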
    > This is the primary host and normaly all VMs run on this host.
    According to your posting, we know that you have Cluster Shared Volumes in the Hyper-V cluster, but why not distribute your VMs across the two Hyper-V hosts?
    Use Cluster Shared Volumes in a Windows Server 2012 Failover Cluster
    http://technet.microsoft.com/en-us/library/jj612868.aspx
    > 2 teamed NIC dedicated to iSCSI storage.
    Use Microsoft Multipath I/O (MPIO) to manage multiple paths to iSCSI storage. Microsoft does not support teaming on network adapters that are used to connect to iSCSI-based storage devices. (At least it was not supported up to Windows Server 2008 R2; although Windows Server 2012 has a built-in NIC teaming feature, I have not found an article declaring that Windows Server 2012 NIC teaming supports iSCSI connections.)
    Understanding Requirements for Failover Clusters
    http://technet.microsoft.com/en-us/library/cc771404.aspx
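    As a rough sketch of moving from teaming to MPIO for the iSCSI paths (run on each host; a reboot may be required after installing the feature):

    # Install the MPIO feature, claim iSCSI devices and use round-robin across both paths
    Install-WindowsFeature -Name Multipath-IO
    Enable-MSDSMAutomaticClaim -BusType iSCSI
    Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR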
    > I have seen using MPIO suggests using different subnets, is this a requirement for using MPIO
    > or is this just a way to make sure that you do not run out of IP adressess?
    What I found is: if it is possible, isolate the iSCSI and data networks that reside on the same switch infrastructure through the use of VLANs and separate subnets. Redundant network paths from the server to the storage system via MPIO will maximize availability
    and performance. Of course you can set these two NICs in separate subnets, but I don’t think it is necessary.
    > Why should it be better to not have dedicated wireing for iSCSI and Management?
    It is recommended that the iSCSI SAN network be separated (logically or physically) from the data network workloads. This ‘best practice’ network configuration optimizes performance and reliability.
    Check that and modify cluster configuration, monitor it and give us feedback for further troubleshooting.
    For more information please refer to following MS articles:
    Volume Shadow Copy Service
    http://technet.microsoft.com/en-us/library/ee923636(WS.10).aspx
    Support for Multipath I/O (MPIO)
    http://technet.microsoft.com/en-us/library/cc770294.aspx
    Deployments and Tests in an iSCSI SAN
    http://technet.microsoft.com/en-US/library/bb649502(v=SQL.90).aspx
    Hope this helps!
    Lawrence
    TechNet Community Support

  • 10Gb Networking best practices

    I'm looking for good guidance on Hyper-V 2012 R2 network configuration best practices for a converged server. Meaning, dual 10Gb NICs and using SMB 3.0 file shares for storage. The servers also have two 1Gb NICs. I'm very familiar with VMware, but ramping
    up on HV networking best practices.
    Blog: www.derekseaman.com, VMware vExpert 2012/2013

    Derek,
    I tried to draw my preferred setup for this network configuration.
    I would create a team with the two 1 GbE NICs and use it for domain, DNS, backup and any System Center agents.
    I would also team the two 10 GbE NICs and then assign the team to a Hyper-V switch for the VMs. In Windows Server 2012 it is possible to create vNICs for the management OS that use this Hyper-V switch (converged network design). I would create two vNICs, SMB1
    and SMB2, and use them for cluster and live migration traffic with SMB Multichannel. If your storage system supports SMB Multichannel you can also use both as storage NICs (but this depends on which vendor you have).
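    A minimal PowerShell sketch of that converged design (team, switch and vNIC names are placeholders; the bandwidth weights are just examples):

    # Team the two 10 GbE NICs and bind a Hyper-V switch with weight-based QoS to the team
    New-NetLbfoTeam -Name "Team10G" -TeamMembers "NIC3","NIC4" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic
    New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "Team10G" -AllowManagementOS $false -MinimumBandwidthMode Weight
    # Two management-OS vNICs for cluster/Live Migration traffic via SMB Multichannel
    Add-VMNetworkAdapter -ManagementOS -Name "SMB1" -SwitchName "ConvergedSwitch"
    Add-VMNetworkAdapter -ManagementOS -Name "SMB2" -SwitchName "ConvergedSwitch"
    Set-VMNetworkAdapter -ManagementOS -Name "SMB1" -MinimumBandwidthWeight 20
    Set-VMNetworkAdapter -ManagementOS -Name "SMB2" -MinimumBandwidthWeight 20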
    Hope this helps.
    Grüße/Regards Carsten Rachfahl | MVP Virtual Machine | MCT | MCITP | MCSA | CCA | Husband and Papa |
    www.hyper-v-server.de | First German Gold Virtualisation Kompetenz Partner ---- If my answer is helpful please mark it as answer or press the green arrow.

  • Hyper-V Windows 2012 r2 Poor network performance

    Hi,
    Since I installed Windows 2012 R2 and Hyper-V, I have noticed that copy operations from the VMs to the physical machine are very slow (the speed appears to be locked at 10 MB). At first I thought this issue was only between the VMs and the parent OS; now I realize
    that when copying between this Hyper-V physical host and another physical host on the same subnet I also get only 10 MB of speed.
    How do I remove this 10 MB limit??
    The switch and all NICs are Gigabit capable, and before using Windows 2012 R2, using the same hardware boxes, I was doing 400 MB transfers between physical hosts and VMs?!
    Thank you.

    I had the exact same issue with the same AR8151 NIC on my Alienware, only I'm running Windows 8.1. 
    I have a VM with both an external (to the AR8151) and an internal NIC, which serves as both a domain controller and router (RRAS) for my other VMs (which only have internal interfaces). There is no VMQ setting on the adapter, and I messed around with
    the offload/RSS settings, but performance would drop whenever this VM was running (an Internet speed test would go from 95 Mbps downstream to about 24 Mbps). When I paused or shut down the VM, performance would return to normal.
    I found the issue to be an errant entry in my route table, and I have no idea how it happened. A second default route had been created which used the internal IP of the VM with a lower metric. Simply deleting the route corrected the performance issue.
    Network Destination        Netmask          Gateway        Interface   Metric
              0.0.0.0          0.0.0.0      10.10.1.200        10.10.2.1        5
              0.0.0.0          0.0.0.0      192.168.1.1    192.168.1.131       10
            10.10.0.0      255.255.0.0          On-link         10.10.2.1      261
            10.10.0.0      255.255.0.0    192.168.1.200    192.168.1.131       11
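    A small sketch of how the stray route could be found and removed with PowerShell (the gateway address is taken from the table above; cmdlets assume Windows 8/Server 2012 or later):

    # List all default routes, then delete the one pointing at the VM's internal IP
    Get-NetRoute -DestinationPrefix "0.0.0.0/0" | Format-Table ifIndex, NextHop, RouteMetric
    Remove-NetRoute -DestinationPrefix "0.0.0.0/0" -NextHop "10.10.1.200" -Confirm:$false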

  • How to delete a node from a node network

    How do I delete a node from a node network?
    1. Follow the instructions on page 30 of the Calendar Server (CS) 3.x Admin Guide, and page 62 of the CS 4.0 Admin Guide, "Deleting (excluding) a Node from the Network".
    2. Shut down the Directory Server.
    3. Back up the LDAP directory to an LDIF file.
    4. Edit this LDIF file and delete references to the 6 calendar reserved users for the node to be deleted.
    5. Restore the LDAP directory with this changed LDIF file.
    6. Run unidbfix -export -n all
    7. Edit the resulting remotenodes.ini files and delete all references to the node to be deleted.
    8. Run unidbfix -import -n all
    9. Start the Directory Server.
    10. Start the Calendar Server.
    Note: This next step is for Calendar Server 3.x * ONLY *
    11. Run unireqdump -delete to delete all requests to the node to be deleted.

    Divya wrote:
    Hi,
    I'm using Forms 6i.
    I want to delete unselected nodes from a tree. How can it be done?
    Thanks

    If you mean empty branches as unselected nodes, set the Allow Empty Branches property to 'No' in the tree's properties.

  • Restricting the physical size of wifi network

    Hi,
    I'm planning to cover an 80x80 m building with a number of 1131AG APs and a WLC4402 controller. But the client's strong demand is to physically restrict any possibility of Wi-Fi connections from long distances (100 m or more from the building).
    Is there a feature in Cisco Wi-Fi to drop any incoming connection if the client is 100+ m from the AP (assuming the client's power is high enough for this)?
    What are the usual methods of restricting the physical size of Wi-Fi networks?
    Thanks.

    This is exactly the case for knowing and understanding antenna types and patterns (i.e., what those goofy circle charts mean).
    By using an antenna's pattern, coupled with power management, you can easily limit the radiation and external exposure of the signal.
    You don't have to know every single antenna, but just be familiar with the general types, and how to interpret the charts.
    Here's a pretty decent explanation on Cisco's main site:
    http://www.cisco.com/en/US/prod/collateral/wireless/ps7183/ps469/prod_white_paper0900aecd806a1a3e.html
    Good Luck
    Scott

  • Flex Mobile Best Practices

    Hi everyone!
    I'm trying to build an app for mobile with a Flex mobile project.
    The UI is going to be something like in the picture:
    I would like to know which is the best practice for putting all the graphic elements into my views.
    For example, for the bricks (which come from an XML file) I created a component; inside the component I put my FXG file in a group with a gap:
    My View:
    <s:VGroup gap="10">
            <customComponents:BrickComponent/>
            <customComponents:BrickComponent/>
            <customComponents:BrickComponent/>
        </s:VGroup>
    BrickComponent:
    <?xml version="1.0" encoding="utf-8"?>
    <s:Group xmlns:fx="http://ns.adobe.com/mxml/2009"
             xmlns:s="library://ns.adobe.com/flex/spark" xmlns:assets="assets.*">
        <fx:Declarations>
            <!-- Place non-visual elements (e.g., services, value objects) here -->
        </fx:Declarations>
        <assets:Brick/>
    </s:Group>
    The same goes for the T-shirt.
    Is this a good way to do it?
    And what about drawing and importing from Flash Catalyst?
    Thanks a lot

    You might want to look into using .FXG files for your graphics.
    These blog posts should be useful for tips on improving performance in your mobile Flex applications:
    http://flexponential.com/2011/10/05/performance-tuning-mobile-flex-applications/
    http://flexponential.com/2011/04/20/flex-performance-tips-tricks/

  • QMASTER hints for usual trouble (QM NOT running / CLUSTERED nodes / Networks etc.)

    All, I just posted this with some hints and workarounds for very common issues people have on this forum and keep asking about, concerning the use of APPLE QMASTER with FCP, SHAKE, COMPRESSOR and MOTION. I've had many over the last 2 years and see them coming up frequently.
    Perhaps these symptoms are fixed in FCS2 as of MAY 2007 (now). However, if not, here are some rules of thumb that I used for FCP to Compressor via a QMASTER cluster, for example. In NO special order, but they might help someone get around the issues with QMASTER V2.3, FCP V5.1.4 and compressor.app V2.3.
    I saw the latest QMASTER UI and usage at NAB2007 and it looked a little more solid with some "EASY SETUP" stuff. I hope it has been reworked underneath; I guess I will know soon if it has.
    For most FCP/COMPRESSOR, SHAKE, MOTION and COMPRESSOR work:
    • Provide access from ALL nodes to ALL the source and target objects (files) on their VOLUMES. Simply MOUNT those volumes through the APPLE file system (via NFS) using cmd+K or Finder/Go/Connect to Server, OR use an SSAFS such as XSAN™ where the file systems are all shared over FC, not the network. You will notice the CPUs getting very busy for a short while; this is the APPLE FILE SYSTEM task (I guess it's doing "Spotlight stuff"). It goes away after a few minutes.
    • Set the COMPRESSOR preferences for "CLUSTER OPTIONS" to "Never copy source to Cluster". This means that all nodes can access your source and target objects (files) over NFS (as above). Failure to do this means LENGTHY times to COPY material back and forth, in some cases undermining the benefit gained from using clustering in the first place (reduced job times).
    • DON'T mix the PHYSICAL or LOGICAL networks in your local cluster. I don't know why, but I could never get this to work. Physical means stick with either ETHERNET or FIREWIRE or your other option (AirPort etc., which will generally be way too slow and useless); logical means keeping all nodes on the SAME subnet. You can do this simply by setting it up in System Preferences/QMASTER/Advanced tab under "Use Network Interfaces". In my current QUAD I set this to use BUILT-IN ETHERNET 1, and on the MBPs I set it to their BUILT-IN ETHERNET.
    • LOGICAL NETWORKS (subnet): simply HARDCODE an IP address on the ETHERNET interface (for example) for your cluster nodes and the service controller, for example 3.1.1.x, and it will all connect fine.
    • PHYSICAL NETWORKS: as above, (1) DON'T MIX FireWire (IPoFW) and Ethernet (IPoE). (2) If you have more than one extra service node, USE A HUB or SWITCH. I went and bought a 10-port GbE HUB for about HK$400 (€40) and it worked fine. I was NEVER able to get a stable QMASTER system mixing FW and ETHERNET. (3) FWIW, using IP over FW caused me a LOAD of DISK errors and timeouts (I/O errors) on those DISKs that were FW400 (all gone now), which showed it was not stable overall.
    • For the cluster controller node, MAKE SURE the CLUSTER STORAGE (System Preferences/QMASTER/shared cluster storage) for the CLUSTER CONTROLLER NODE IS ON A SHARED volume (see above). This seems essential for SHAKE to work (if not, check the Qmaster errors in Console.app [see below]). If you have an SSAFS like XSAN™ then just put this cluster storage on a shared file path. Note that QMASTER does not permit the cluster storage to be on a NETWORK NODE for some reason. So, in short, just MOUNT the volume where the SHARED CLUSTER file is maintained for the CLUSTER controller.
    • FCP - avoid EXPORT to COMPRESSOR from the TIMELINE - it never seems to work properly (see later). Instead EXPORT FROM SEQUENCE in the BROWSER - consistent results
    • FCP - "media missing " messages on EXPORT to COMPRESSOR.. seems a defect in FCP 5.1 when you EXPORT using a sequence that is NOT in the "root" or primary trry in the FCP PROJECT BROWSER. Simply if you have browser/bin A contains(Bin B (contains Bin C (contains sequence X))) this will FAIL (wont work) for "EXPORT TO COMPRESSOR" if you use EXPORT to COMPRESSOR in a FCP browser PANE that is separately OPEN. To get around this, simply OPEN/EXPOSE the triangles/trees in the BROWSER PANE for the PROJECT and select the SEQUENCE you want and "EXPORT to COMPRESSOR" from there. This has been documented in a few places in this forum I think.
    • FCP -> COMPRESSOR -> .M2V (for DVDSP3): some things here. EXPORTING from an FCP SEQUENCE with CHAPTER MARKERS to an MPEG2 .M2V encoding USING A CLUSTER causes errors in the placement of the chapter makers when it is imported to DVDSP3. In fact CONSISTENTLY, ALL the chapter markers are all PLACED AT THE END of the TRACK in DVD SP# - somewhat useless. This seems to happen ALSO when the source is an FCP reference movie, although inconsistent. A simple work around if you have the machines is TRUN OF SEGMENTING in the COMPRESSOR ENCODER inspector. let each .M2V transcode run on the same service node. FOr the jobs at hand just set up a CLUSTER and controller for each machine and then SELECT the cluster (myclusterA, hisclusterb, herclusterc) for each transcode job.. anyway for me.. the time spent resolving all this I could have TRANSCODED all this on my QUAD and it would all have ben done by sooner! (LOL)
    • CONSOLE logs: IF QMASTER fails, I would suggest your fist port of diagnosis should be /Library/Logs/Qmaster in there you will see (on the controller node) compressor.log, jobcontroller.com.apple.qmaster.cluster.admin.log, and lots of others including service controller.com.apple.qmaster.executorX.log (for each cpu/core and node) andd qmasterca.log. All these are worth a look and for me helped me solve 90% of my qmaster errors and failures.
    • MOTION 3 - fwiw.. EXPORT USING COMPRESSOR to a CLUSTER seems to fail EVERY TIME.. seems MOTION is writing stuff out to a /var/spool/qmaster
    TROUBLESHOOTING QMASTER: IF QMASTER seems buggered up (hosed), then follow these steps PRIOR to restarting your machines.
    Go read the TROUBLESHOOTING sections in the published APPLE docs for COMPRESSOR, SHAKE and "SET UP FOR DISTRIBUTED PROCESSING", and search these forums CAREFULLY; the answer is usually there somewhere.
    ELSE THEN,, try these steps....
    You'll feel that QMASTER is in trouble when you
    • see that the QMASTER ICON at the top of the screen says "NO SERVICES" even though that node is started, and
    • the APPLE QMASTER ADMINISTRATOR is VERY SLOW after an "APPLY" (like minutes with a SPINNING BEACHBALL), or it WON'T LET YOU DELETE a cluster, or you see 'undefined' nodes in your cluster (meaning one was shut down or had a network failure). All of this means it's going to get worse and worse, SO DON'T submit any more work to QMASTER; best count your gains and follow this list next.
    (a) In COMPRESSOR.app, RESET BACKGROUND PROCESSES (it's under the COMPRESSOR name list box); see if things get kick-started, but you will lose all the work that has been done up to that point in COMPRESSOR.app.
    (b) If not OK, then on EACH node in that cluster, STOP QMASTER (System Preferences/QMASTER/setup [set 0 minutes in the prompt and OK]). Then, when STOPPED, RESET the shared services by OPTION+clicking the "START" button to reveal "RESET SERVICES". Then click "START" on each node to start the services. This has the action of REMOVING, or where the CLUSTER CONTROLLER node is "RESET", of terminating the cluster that's under its control. If so, simply go to APPLE QMASTER ADMINISTRATOR and REDEFINE it. Then restart your cluster.
    (c) If step (b) is no help, consult the QMASTER logs in /Library/Logs/Qmaster (using Console.app) for any FILE MISSING, FILE not found or FILE ERROR messages. Look carefully for the NODENAME (the machine_name.local) where the error may have occurred. Sometimes it's very chatty, other times it is not. Also look in the BATCH MONITOR OUTPUT for error messages. Often these are NEVER written (or I can't find them) in /var/logs. Try to resolve any issues you can see (mostly VOLUME or FILE path issues, from my experience).
    (d) If still no joy, then try removing all the 'dead' cluster files from /var/tmp/qmaster, /var/spool/qmaster and also the directory that you specified above for the controller to share the clustering. For SHAKE issues, do the same (note also where the SHAKE shared cluster file path is - it can also be specified in the RENDER FILEOUT nodes prompt).
    (e) If all this WON'T help you, it's time to get the BIG hammer out. Simply STOP all nodes if not stopped (if the status/mode is "STOPPING" then it [QMASTER] is truly buggered). DISMOUNT the network volumes you had mounted, and RESTART ALL YOUR NODES. This has the effect of RESTARTING all the QMASTERD tasks. Yes, sure, you can go in and SUDO restart them, but it is dodgy at best because they never seem to terminate cleanly (kill -9 etc.), so FORCE QUIT is what one ends up doing and then STILL having to restart.
    (f) After restart, perform the steps from (b) again and it will usually (but not always) be right after that.
    Lastly - here are some posts I have made that may help others with QMASTER 2.3, and not the NEW QMASTER as of May 2007:
    Topic "qmasterd not running" - how this happened and what we did to fix it. - http://discussions.apple.com/message.jspa?messageID=4168064#4168064
    Topic: IP over Firewire AND Ethernet connected cluster? http://discussions.apple.com/message.jspa?messageID=4171772#4171772
    Lastly, spend some DEDICATED time using OBJECTIVE keywords to search the FINAL CUT PRO, SHAKE, COMPRESSOR, MOTION and QMASTER forums.
    Hope that helps.
    G5 QUAD 8GB ram w/3.5TB + 2 x 15in MBPCore   Mac OS X (10.4.9)   FCS1, SHAKE 4.1

    Warwick,
    Thanks for joining the forum and for doing all this work and posting your results for our benefit.
    As FCP2 arrives in our shop, we will try once again to make sense of it and to see if we can boost our efficiencies in rendering big projects and getting Compressor to embrace five or six idle Macs.
    Nonetheless, I am still in "Major Disbelief Mode" that Apple has done so little to make this software actually useful.
    bogiesan

  • Re-installing Hyper-V 2012 R2 cluster node

    We have a four HP BL460 Gen8 servers acting as a part of Hyper-V Cluster, running Windows Server 2012 R2 Datacenter.
    Storage is provided by two node 3PAR StoreServ 7400.
    All network and fc connections are managed by HP Virtual Connect.
    One of the four nodes crashed during an HP SPP upgrade, which resulted in a non-booting OS.
    I managed to get the OS alive by running multiple check disks and by manually restoring registry hives from backup via Windows 7 installation media's recovery console.
    After the recovery there were still some issues with the filesystem: corrupted, orphaned and missing files here and there.
    Now I want to re-install the OS from scratch to make sure everything will work correctly and to avoid any future errors.
    What I need to know is: is it best practice to re-install the OS with a new computer name, or should I drop the current OS into a workgroup, re-install it and join the AD domain with the same computer name? I've already evicted the node from the Hyper-V cluster,
    but the server is still running as a member server in AD.
    Any other things I should take into consideration before doing the re-installation?
    Thanks in advance!

    I agree that after a major problem it is much safer to rebuild the system.  It sounds like you have the node rebuilt, so I would evict it from the cluster and then remove it from the domain. Rebuild it and you can use the same name because those two
    actions will clean up its 'footprints'.
    If the machine were not running, you would still evict the node from the cluster, but you would need to go into Active Directory to delete the computer account.  Then rebuild.
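    A minimal PowerShell sketch of those two cleanup paths (cluster, node and credential details are placeholders):

    # Evict the node from the failover cluster (names are hypothetical)
    Remove-ClusterNode -Cluster "HVCLUSTER" -Name "HVNODE4" -Force
    # On the node itself, drop it out of the domain before the rebuild
    Remove-Computer -UnjoinDomainCredential (Get-Credential) -WorkgroupName "WORKGROUP" -Restart
    # If the node is not running, delete its computer account in Active Directory instead
    Remove-ADComputer -Identity "HVNODE4" -Confirm:$false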
    . : | : . : | : . tim

  • 3 node networking

    Hi
    I need a little advice
    I have migrated a 2-node 2008 R2 cluster to 2012 R2, so I'm happy with myself.
    However, I wanted to introduce a 3rd node as we want to increase capacity. So here is the issue: up to now the heartbeat and live migration networks were simply NIC-to-NIC cabling. I have a NIC team on both current servers for these networks and it works
    fine.
    However, to add the 3rd server, can I continue to use this method by adding a team on the 3rd node and direct connectivity between the other 2, or do I need a 2nd or even 3rd live migration network and heartbeat network to triangulate?
    I appreciate that the best option is a switch, but the network guys won't let me use the core switch, so I am stuck trying to use direct cabling.
    thanks
    Olly

    Hi Sir,
    >>Up to now the Heartbeat and live migration networks were simply nic to nic cabling.
    >>I appreciate the best option is a switch, but the network guys wont let me use the core switch,
    I haven't seen this configuration used for cluster communication.
    Based on my understanding, the heartbeat needs a physical connection between the nodes, which means each node would need two connections to the other two nodes, but I didn't find any article that mentions two heartbeat NICs.
    It seems that a switch is needed.
    Best Regards,
    Elton Ji
    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Subscriber Support, contact [email protected] .

  • Hyper V file transfer kills the network

    Dear All,
    We have 4 Hyper-V VM installations on 2 HP DL380 G8 servers, and they are connected to HP NSA 2040 SAN storage through 6 Gbps Fibre Channel. The physical machines are clustered.
    Out of the 4, 2 are clustered Hyper-V VMs running Microsoft Dynamics AX 2012 R3, and we have 2 clustered Hyper-V VMs for the database server running MS SQL 2012 R2.
    The issue here is that the APP servers are slow in communicating with the DB server. The Hyper-V hosts have a dedicated team of 4 x 1 Gbps interfaces, and the packets are getting timed out, but from any other machine in the subnet the communications are fine.
    Whenever I try to copy a file from the DB server to the APP server using an SMB share, it kills my network; the copy will happen, but only after the copy is completed does the network come back up.
    It's a strange scenario: APP server to APP server SMB copy is fine, and DB to DB SMB copy is fine, but I cannot copy between them.
    Can anyone please help?
    Thanks,
    Sriram A Das

    Hello,
    This forum is for discussions and questions regarding profiles and Microsoft's recognition system on the MSDN and TechNet sites. It is not for products/technologies.
    As it's off-topic here, I am moving the question to the
    Hyper-V forum.
    Karl
    When you see answers and helpful posts, please click Vote As Helpful, Propose As Answer, and/or Mark As Answer.
    My Blog: Unlock PowerShell
    My Book:
    Windows PowerShell 2.0 Bible
    My E-mail: -join ('6F6C646B61726C406F75746C6F6F6B2E636F6D'-split'(?<=\G.{2})'|%{if($_){[char][int]"0x$_"}})

  • Cluster node networking

    I have a five-node Windows Server 2008 R2 Hyper-V cluster. I put one node into maintenance mode and all VMs migrated to other hosts. I pulled the LAN cables out of that node for testing (one out, waited a little, put it back, pulled the second, and so on) and then put
    them all back in.
    After that I had a lot of cluster errors and some VMs restarted.
    I have put nodes into maintenance mode and restarted/shut them down many times and never had any cluster problems. Why did I have problems now, when I pulled out the LAN cables?

    Hi antesl,
    The failover behavior occurs because the cluster has detected a cluster resource or node failure, such as network or storage. Please refer to the following related articles to confirm there is no potential single point of failure in your
    cluster configuration.
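    One way to check for such single points of failure (a sketch; the cluster name is a placeholder) is to run the built-in validation report against the network and storage categories:

    # Run cluster validation for the network and storage tests only
    Test-Cluster -Cluster "HVCLUSTER" -Include "Network", "Storage"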
    Failover Cluster
    http://msdn.microsoft.com/en-us/library/ff650328.aspx
    Failover Cluster Step-by-Step Guide: Configuring the Quorum in a Failover Cluster
    http://technet.microsoft.com/zh-cn/library/cc770620(v=ws.10).aspx
    How a Server Cluster Works
    http://technet.microsoft.com/en-us/library/cc738051(v=ws.10).aspx
    HYPER-V 2008 R2 SP1 Best Practices (In Easy Checklist Form)
    http://blogs.technet.com/b/askpfeplat/archive/2012/11/19/hyper-v-2008-r2-sp1-best-practices-in-easy-checklist-form.aspx
    I’m glad to be of help to you!
    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Support, contact [email protected]

  • Networking "best practice" for setting up a farm

    Hi all.
    We would like to set an OracleVM farm, and I have a question about "best practice" for
    configuring the network. Some background:
    - The hardware I have is comprised of machines with 4 gig-eth NICs each.
    - The storage will be coming primarily from a backend NAS appliance (Netapp, FWIW).
    - We have already allocated a separate VLAN for management.
    - We would like to have HA capable VMs using OCFS2 (on top of NFS.)
    I'm trying to decide between 2 possible configurations. The first would keep physical separation
    between the mgt/storage networks and the DomU networks. The second would just trunk
    everything together across all 4 NICs, something like:
    Config 1:
    - eth0 - management/cluster-interconnect
    - eth1 - storage
    - eth2/eth3 => bond0 - 8021q trunked, bonded interfaces for DomUs
    Config 2:
    - eth0/1/2/3 => bond0
    Do people have experience or recommendation about the best configuration?
    I'm attracted to the first option (perhaps naively) because CI/storage would benefit
    from dedicated bandwidth and this configuration might also be more secure.
    Regards,
    Robert.

    user1070509 wrote:
    Option #4 (802.3ad) looks promising, but I don't know if this can be made to work across separate switches.
    It can, if your switches support cross-switch trunking. Essentially, 802.3ad (also known as LACP, or EtherChannel on Cisco devices) requires your switch to be properly configured to allow trunking across the interfaces used for the bond. I know that the high-end Cisco and Juniper switches do support LACP across multiple switches. In the Cisco world, this is called MEC (Multichassis EtherChannel).
    If you're using low-end commodity-grade gear, you'll probably need to use active/passive bonds if you want to span switches. Alternatively, you could use one of the balance algorithms for some bandwidth increase. You'd have to run your own testing to determine which algorithm is best suited for your workload.
    The Linux Foundation's Net:Bonding article has some great information on bonding in general, particularly on the various bonding methods for high availability:
    http://www.linuxfoundation.org/en/Net:Bonding

  • Run commands on remote Hyper-V host in different domain/network with powershell

    Hi experts,
    My Setup: Windows Server 2012 R2 / SCVMM 2012 managing localhost and other Hyper-V hosts
    I need to run a script on a remote Hyper-V host which is in a different domain/workgroup, using PowerShell.
    I have tried the Invoke-SCScriptCommand cmdlet, but I am getting the error below.
    Error (2917)
    Virtual Machine Manager cannot process the request because an error occurred while authenticating MY-PC-15.mydomain.local. Possible causes are:
    1) The specified user name or password are not valid.
    2) The Service Principal Name (SPN) for the remote computer name and port does not exist.
    3) The client and remote computers are in different domains and there is not a two-way full trust between the two domains.
    The network path was not found (0x80070035)
    I tried the 'Run Script Command' option on the Host tab in VMM, but I get the same error.
    I checked that it uses the 'Invoke-SCScriptCommand' PS cmdlet.
    Could someone explain how to run scripts on a remote Hyper-V host in a different domain/perimeter network?
    Regards,
    Saleem

    Hi Saleem,
    Please try to follow the article below regarding using the command "Enter-PSSession" across domains:
    https://social.technet.microsoft.com/Forums/windowsserver/en-US/f60a29ef-925e-4712-9788-1f95e12c8cfc/forum-faq-introduce-windows-powershell-remoting?forum=winserverpowershell
    (I tested it in my lab )
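    For a workgroup or non-trusted host, a minimal sketch along the lines of that article (host name and account are placeholders; run from an elevated PowerShell prompt):

    # Trust the remote host for WinRM client connections, then open an interactive session with explicit credentials
    Set-Item WSMan:\localhost\Client\TrustedHosts -Value "MY-PC-15.mydomain.local" -Concatenate -Force
    $cred = Get-Credential "MY-PC-15\Administrator"
    Enter-PSSession -ComputerName "MY-PC-15.mydomain.local" -Credential $cred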
    Best Regards,
    Elton Ji
    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Subscriber Support, contact [email protected] .
