Hyper-V Replication over Dedicated LAN and "is alive" checks over Corporate LAN

I am testing Hyper-V replication to see if it will be a suitable replacement for the Arcserve RHA product. One thing I am struggling with is configuring replication to use the dedicated LAN while still having the host servers verify each other over the corporate LAN.
I have seen the blogs on how to use a dedicated route and edit the hosts file to get replication to use the dedicated LAN, but that also changes the LAN over which the host servers communicate. It seems to me that if the corporate LAN were to go down on the master server, I wouldn't be able to fail over the virtual machines to the replica server without first having to connect to the master server through the dedicated LAN of the replica server to shut down the virtual machines.
I need to be able to fail over to the replica server if the corporate network connection on the master server drops, without the extra step of connecting to the master server first.
Is it possible to separate the two? Can I tell Hyper-V to replicate to one specific IP destination on the dedicated LAN and have the replica server check whether the master is alive over the corporate LAN?
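For concreteness, the hosts-file approach from those blogs looks roughly like this when run (elevated) on the primary host; REPLICA01 and 10.10.10.2 are hypothetical stand-ins for the replica server's name and its dedicated-LAN IP:

# Resolve the replica server's name to its dedicated-LAN address, so that
# Hyper-V Replica traffic (which connects by server name) uses the dedicated NIC.
# REPLICA01 / 10.10.10.2 are hypothetical; run from an elevated prompt.
Add-Content -Path "$env:SystemRoot\System32\drivers\etc\hosts" -Value "10.10.10.2 REPLICA01"

The catch, as noted above, is that this redirects all by-name traffic between the two hosts, not just replication, which is exactly the coupling I want to break.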

Hi Telrick,
>> It seems to me that if the corporate LAN were to go down on the master server, I wouldn't be able to fail over the virtual machines to the replica server without first having to connect into the master server through the dedicated LAN of the replica server to shut down the virtual machines.
There are "planned failover" and "unplanned failover"; the latter applies when the primary server crashes (you can select "Failover" on the replica server and the VM will start up; after the primary server is online again, you can do a "reverse").
The point is that you cannot use Hyper-V Replica as a backup: any data that had not yet been replicated to the replica server when the unplanned failover happens will be lost.
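For reference, a minimal sketch of an unplanned failover in PowerShell, assuming a VM named "MyVM" (hypothetical); these are the same steps as the Failover action in Hyper-V Manager:

# On the replica server, after the primary has crashed:
Start-VMFailover -VMName "MyVM"      # bring up the replica from a recovery point
Start-VM -VMName "MyVM"
Complete-VMFailover -VMName "MyVM"   # commit the failover, discarding other recovery points
# Once the primary is back online, reverse the replication direction:
Set-VMReplication -VMName "MyVM" -Reverse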
Best Regards
Elton Ji

Similar Messages

  • My iPod's OFF/ON switch is not working, and I have checked over the internet that this problem is very common with the iPod Nano 6th generation; my iPod is now out of its warranty period. What shall I do now? Please suggest!

    You can do an out-of-warranty replacement at an Apple Store for a fee, get a 3rd party site to repair it, sell it for parts, or get a new one.

  • How do I send an array over endpoint 2 and receive an array over endpoint 1?

    Background:
    I'm a device developer. I have an HID device that does interrupt transfers over endpoint 1 and endpoint 2. The device enumerates successfully as an HID on Mac, Windows, and Linux. I've used a couple of different .dll files to communicate with the device from Visual Studio and Python with some success, and now I'd like to get it to work in LabVIEW.
    Status to date:
    1. Successfully installed the device hijacker driver, so NI MAX can see the device as USB::0x####::0x####::SerialNumber::RAW (# inserted to protect the innocent; SerialNumber is an actual number).
    2. I can see the device in MAX. Tried to send a line of numbers and something is sent back, but it isn't useful yet.
    3. Tried interruptusb.vi and it doesn't do anything but time out.
    4. Tried Read USB Descriptor Snippet1.vi: an 18-byte array shows up, and the VID and PID are parsed out correctly if the bRequest is Get Descriptor and the Descriptor Type is Device. None of the endpoint descriptor types return anything. A bRequest won't trigger a device function; it needs an array over the OUT endpoint.
    The problem:
    Intuitively, I'm at a loss for what to do next. The device needs to receive a command as a 16-byte array passed through the OUT endpoint (2). Then it will respond with a 16-byte array through the IN endpoint (1). It seems as though interruptusb.vi should work, but that interrupt transfer is a receive-only demonstration. How should I format a 16-byte array to go through the OUT endpoint? Does it need to be flattened?

    Thanks for the tip.
    The nuggets were great for getting started and helped with installing the LabVIEW hijack driver for the HID device. On closer examination, the code I'm using is very similar to the nugget, with minor changes to the output. The nuggets are definitely useful, but for my device there is more to it.
    It is not USBTMC compliant. It requires an array of bytes to be sent and received. The problem may have to do with timing and ensuring that the byte transfer is correct. When communicating from Visual Studio, a declared array of characters works fine; I tried that with this setup and it doesn't work consistently. Of particular concern is why, with this setup, the device shows up, doesn't work properly until stopped, then works fine, but when the LabVIEW VI is stopped the device disappears and is no longer available in the VISA combo box. Device Manager still shows the device, but LabVIEW must have an open handle to it.
    I'd really like to be able to call the .dll used in Visual Studio, so the user can choose to use the included software or a LabVIEW VI without having to reinstall a driver. After all, HID is great because the driver is under the hood; having to load one for LabVIEW defeats the purpose of developing an HID device. If I wanted to load a driver, I'd program the device to be a USB-serial device and use the LabVIEW VISA serial VIs.
    For now I'll be happy to get a stable version in LabVIEW that communicates consistently, even if it uses the hijacked driver.

  • Blue squares over moving images and videos

    https://www.youtube.com/watch?v=t9NmqnaHuWo&feature=youtu.be
    This issue has been going on for a while, and I have tried the usual means to fix it, like restarting, updating drivers, and uninstalling gfxCardStatus completely...
    As you can see in the video, any time I have an image in motion - be it a video or even when scrolling down a website - a blue rectangle appears over the image or video. When I pause, it returns to normal.
    This is absolutely unacceptable. I work with video editing and this has neutralized my ability to color grade anything, unless of course I want to really hate myself in the process.
    I'd be truly grateful if anyone could help me out.
    My system is a MacBook Pro Retina 15" - Mid 2012
    The graphics card is a GTX 650M (discrete)
    Thanks in advance.

    I have the same issue. I need help getting rid of this. I have an iPad 2.

  • My iTunes has been stuck on "waiting for changes to apply" for over an hour and nothing ever syncs over to my iPad, yet iTunes shows that it did. How do I go about fixing this problem?

    I've tried uninstalling and reinstalling iTunes. I also restored my iPad, and still no change since the new iOS 7 update.

    The warranty entitles you to complimentary phone support for the first 90 days of ownership.
    If you bought the product in the U.S. directly from Apple (not from a reseller), you have 14 days from the date of delivery in which to exchange or return it for a refund. In other countries, the return policy may be different. If you bought from a reseller, its return policy applies.

  • HA between Dedicated T1 and L2L VPN

    I'm looking for ideas on how to have complete HA between a dedicated T1 and an L2L VPN over the Internet.
    We had discussed the OSPF routing protocol, but we would like to avoid the convergence issues that could arise and affect other customers in the same DMZ.
    What would our options be if we do not want to use a routing protocol? How could we fail over to the backup line (the L2L) should the T1 fail? I had mentioned changing the metrics, but this will not identify a problem on the line should the customer's Ethernet link go down.
    Feel free to include any ideas that would use routing protocols.

    I had to revisit this configuration. Since we are not going to use a routing protocol, I decided that a floating route between the T1 router and the VPN is the best solution. Although this should work if the router or the router's Ethernet interface goes down, it will not kick in if the Ethernet interface of the router (which has OSPF running between their network and our LAN) does not fail.
    But it is not failing over.
    I have attached a diagram.

  • DPM 2012 R2 and Hyper-V Replication: huge conflict and failure to back up

    I have recently created new 2012 R2 servers with DPM 2012 R2 in an attempt to upgrade an environment.
    When attempting to create Protection Groups for Hyper-V VMs, DPM will consistently omit any VM that happens to have Hyper-V Replication enabled, whether that server is the primary VM instance or a replica VM instance. ALL other Hyper-V VMs are listed; only those with replication enabled are omitted. This happens on both DPM 2012 R2 servers with exactly the same results: all VMs without Hyper-V Replication enabled are listed, and none of the VMs with replication enabled are visible in the list.
    Obviously we need to be able to back up VMs with Hyper-V Replication, since replication is only a tiny portion of a DR strategy; it doesn't cover ANY recovery scenario other than the loss of the primary VM, and it doesn't allow for restoring missing or damaged files or undoing any other changes to the VM.
    The DPM 2012 R2 servers have the latest update rollup (#4) applied, and the protection agents have also been updated.
    Looking for some hints, since DPM 2012 R2 is supposed to support backing up both primary and replica VMs, especially when the Hyper-V host is Server 2012 R2.
    We might have to use Windows Server Backup or Veeam's free Hyper-V backup, since aside from enabling Hyper-V Replication to keep a couple of snapshots, DPM isn't a viable backup option in combination with Hyper-V Replication.

    As per the following blog:
    http://blogs.technet.com/b/dpm/archive/2014/04/25/backing-up-of-replica-vms-using-dpm.aspx
    you should be able to back up Hyper-V VMs even if they are replica VMs.
    When you configure DPM to protect both the primary and recovery hosts, the VM will appear on either of the servers, as it will have the same GUID.
    So protect the primary and recovery hosts using different DPM servers, or, if you want to protect both primary and recovery with the same DPM server, make sure that you check both servers so as to discover the VM.
    Regards, Trinadh [MSFT]

  • 2 issues: writing over LAN, and Parallels

    First issue
    I have a 1.5 TB HDD formatted as HFS+ by my MacBook Air. When I plug the HDD into the Mac, I can do whatever I need. But when I plug the unit into my Dune media player and connect to it, I can't write to it; it says I have special permissions. The Dune will play all the media just fine, I just can't change anything on the drive. Can someone explain why? And is there a way to fix this?
    2nd issue: I use Parallels because a program I use called My Movies only works in Windows for now. After this latest update I get a network security login pop-up asking for my credentials to log into the MacBook Air. I need to see the MacBook so that when I log into My Movies I can use the URL protocol for the files, like \\dune\movies\movie name. The MacBook Air would pop up and I could select it and share the drive over the network. I just can't get into the MacBook Air anymore. I know this is a Parallels issue, just wondering if anybody else has had this issue?

    Try putting a trace in to see what the code sees... it may not be what you see in the textfield...
    function onClick15(event:MouseEvent):void {
        trace(total);
        if (total >= 72) {
            gotoAndStop(18);
        } else {
            gotoAndStop(19);
        }
    }
    As for the other problem, some funny things can happen if you have objects in adjacent keyframes, and radio buttons were always a problem for me in this regard. If you can alternate with blank keyframes between them, it might clear it up (have two layers where they alternate between the layers). I don't know if that's the problem in any case... radio buttons in Flash were never easy for me to deal with, so I usually ended up making my own.

  • Hyper-V Replication Implementation question

    I have two Hyper-V 2012 servers. I want to set up replication between them, but I wanted to clarify a few things. My plan is to put 3 VMs on each Hyper-V server and then replicate them to the other server, so Server A has 3 VMs and Server B has 3 VMs. In case of a server failure, the VMs on Server A will fail over to Server B and vice versa. This also applies to the VHDs; I want everything to replicate between the two. And I need to make sure that the process is automatic, so that if storage fails the VMs will fail over. I just want to make sure that Hyper-V Replication will work in this way.
    Vincent Sprague

    I need the storage replication aspect. I currently have the two servers in a cluster and the VMs fail over, but storage is my problem; our shared storage solution is junk and I need to find a way to get around that.
    Vincent Sprague
    1) For Hyper-V Replica you don't need shared storage, as Windows will replicate the source VHDX, with some minor delay, to a destination VHDX.
    2) You may also take a look at Storage Replica (part of the upcoming Windows Server 10), as it may do a better job for you because of its synchronous nature. See:
    Storage Replica and Hyper-V
    http://www.starwindsoftware.com/blog/storage-replica-with-microsoft-failover-cluster-and-clustered-hyper-v-vm-role-windows-server-technical-preview/
    Good luck :)
    P.S. Looks like you've already asked a similar question before:
    https://social.technet.microsoft.com/Forums/projectserver/en-US/c19b08aa-b395-49e0-9bf7-52981118b820/server-2012-r2-vm-replication?forum=winserverhyperv
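    Following up on point 1, a minimal sketch of enabling Hyper-V Replica between two hosts with no shared storage; the VM and server names are hypothetical, and this assumes Kerberos authentication over HTTP port 80:

    # On the primary host: point the VM at the replica server and seed the first copy
    Enable-VMReplication -VMName "MyVM" -ReplicaServerName "ServerB" -ReplicaServerPort 80 -AuthenticationType Kerberos
    Start-VMInitialReplication -VMName "MyVM"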

  • Certificate-based 2012 R2 Hyper-V Replication (PowerShell)

    I have certificate-based replication working between two Server 2012 R2 workgroup servers. Through Hyper-V Manager I can resolve any issues with replication, particularly in the scenario where I fail over and wish to reverse replication. Take a DR scenario where the primary goes offline: I initiate failover, start the replica VM, and wish to reverse replication after the primary has come back online following the simulated power loss, but via PowerShell. I'm missing a key step, but I don't know what.
    1. PRIHOST goes down and the VM REPME1 is now offline.
    2. On REPHOST, I initiate Start-VMFailover and Start-VM REPME1.
    3. Power is restored to PRIHOST and it is back up. VM REPME1 is in the inventory but powered off. PRIHOST remains the primary for replication; REPHOST is still the replica. Replication has failed, which is expected.
    4. I try to reverse replication with Set-VMReplication -Reverse, but I get an error: "Could not reverse replication for virtual machine 'REPME1'. (Virtual machine ID ...) The operation cannot be performed while the virtual machine is in its current state." The name of the virtual machine is REPME1 and its ID is ...
    5. When I reverse replication in Hyper-V Manager immediately afterwards, it synchronizes the changes back successfully and replication returns to normal.
    I suspect that I'm missing a step.
    -Michael Kelsey

    Since I'm using workgroup servers and certificates, I tested the commands manually to observe the output, modifying the steps that require CertificateThumbprint as a mandatory parameter. I did not get success on this first attempt, but I will keep trying over the next few days as I have time.
    Thank you for the exemplary script. It does indeed reveal at least one missed step, completing the failover, which was necessary to prevent Set-VMReplication from being blocked by the VM's current state.
    In my manual execution, setting the primary as a replica had to be performed on the primary after it came back online, at which point both the primary and the replica degenerated to a mutual replica state. I didn't spend much time trying to correct this condition in PowerShell, but in Hyper-V Manager I was able to re-establish replication, although it keeps pausing on the new primary (now the replica), and the replica (now the primary) doesn't indicate replication as being paused.
    I will most likely need to remove replication from the VM and start over.
    It appears that native Windows authentication (matching usernames and passwords) and certificates will both be required to successfully reverse replication. Ultimately, since I will use a service account to enact the role reversal, I tested with a local admin that did not exist on both systems. Setting AsReplica in Set-VMReplication appears to use RPC, as the command failed with a permission-denied error.
    I will append my findings. You've given me a great starting point.
    -Michael Kelsey
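    For later readers, a sketch of the full recovery sequence discussed in this thread, including the completing-failover step that turned out to be missing. The VM and host names are the thread's (REPME1, REPHOST, PRIHOST); verify the parameter combinations against your own certificate-based setup:

    # On REPHOST after PRIHOST goes down (unplanned failover):
    Start-VMFailover -VMName "REPME1" -Confirm:$false
    Start-VM -VMName "REPME1"
    # Once you commit to running on REPHOST, complete the failover first --
    # skipping this is what leaves the VM in a state that blocks Set-VMReplication:
    Complete-VMFailover -VMName "REPME1"
    # With PRIHOST back online, reverse the replication direction. In a
    # workgroup/certificate setup you will likely also need
    # -AuthenticationType Certificate -CertificateThumbprint "<thumbprint>" (placeholder):
    Set-VMReplication -VMName "REPME1" -Reverse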

  • Upgrading a 3-node Hyper-V cluster's storage for £10k and getting the most bang for our money

    Hi all, looking for some discussion and advice on a few questions I have regarding storage for our next cluster upgrade cycle.
    Our current system for a bit of background:
    3x clustered Hyper-V servers running Server 2008 R2 (72 GB RAM, dual CPU, etc...)
    1x Dell MD3220i iSCSI with dual 1 Gb connections to each server (24x 146 GB 15k SAS drives in RAID 10) - Tier 1 storage
    1x Dell MD1200 expansion array with 12x 2 TB 7.2k drives in RAID 10 - Tier 2 storage, large VMs, files etc...
    ~25 VMs running all manner of workloads: SQL, Exchange, WSUS, Linux web servers etc....
    1x DPM 2012 SP1 Backup server with its own storage.
    Reasons for upgrading:
    Storage throughput is becoming an issue, as we only get around 125 MB/s over the dual 1 Gb iSCSI connections to each physical server (we've tried everything under the sun to improve bandwidth, but I suspect the MD3220i RAID is the bottleneck here).
    Backup times for VMs (once every night) are now in the 5-6 hour range.
    Storage performance suffers during backups and large file synchronisations (DPM).
    Tier 1 storage is running out of capacity, and we would like to build in more IOPS for future expansion.
    Tier 2 storage is massively underused (6 TB of 12 TB RAID 10 space).
    Migrating to 10 Gb server links.
    Total budget for the upgrade is in the region of £10k, so I have to make sure we get absolutely the most bang for our buck.
    Current Plan:
    Upgrade the cluster to Server 2012 R2
    Install a dual-port 10 Gb NIC team in each server and virtualize cluster, live migration, VM and management traffic (with QoS, of course)
    Purchase a new JBOD SAS array and leverage the new Storage Spaces and SSD caching/tiering capabilities. Use our existing 2 TB drives for capacity and purchase sufficient SSDs to replace the 15k SAS disks.
    On to the questions:
    Is it supported to use Storage Spaces directly connected to a Hyper-V cluster? I have seen that for our setup we are on the verge of requiring a separate SOFS for storage, but the extra costs and complexity are out of our reach (RDMA, extra 10 Gb NICs, etc...).
    When using a storage space in a cluster, I have seen various articles suggesting that each CSV will be active/passive within the cluster, causing redirected IO for all cluster nodes not currently active?
    If CSVs are active/passive, it's suggested that you should have a CSV for each node in your cluster. How, in production, do you balance VMs across 3 CSVs without manually moving them to keep 1/3 of the load on each CSV? Ideally I would like just a single active/active CSV for all VMs to sit on (ease of management, etc...).
    If the CSV is active/active, am I correct in assuming that DPM will back up VMs without causing any redirected IO?
    Will DPM backups of VMs be incremental in terms of data transferred from the cluster to the backup server?
    Thanks in advance to anyone who can be bothered to read through all that and help me out! I'm sure there are more questions I've forgotten, but those will certainly get us started.
    Also, lastly, does anyone else have a better suggestion for how we should proceed?
    Thanks
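    As a rough reference for the Storage Spaces tiering step in the plan above, a sketch of building a tiered, mirrored virtual disk on 2012 R2; the pool and tier names, and the tier sizes, are illustrative only:

    # Pool all eligible disks (the new SSDs plus the existing 2 TB spindles)
    $disks = Get-PhysicalDisk -CanPool $true
    New-StoragePool -FriendlyName "Pool1" -StorageSubSystemFriendlyName "*Spaces*" -PhysicalDisks $disks
    # Define an SSD tier and an HDD tier within the pool
    $ssd = New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "SSDTier" -MediaType SSD
    $hdd = New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "HDDTier" -MediaType HDD
    # Carve a mirrored, tiered virtual disk to become a CSV
    New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "CSV1" -StorageTiers $ssd,$hdd -StorageTierSizes 200GB,4TB -ResiliencySettingName Mirror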

    1) You can use a direct SAS connection with a 3-node cluster, of course (4-node, 5-node, etc.). It would be much faster than running with an additional SoFS layer: with SAS fed directly to your Hyper-V cluster nodes, all reads and writes are local, travelling down the SAS fabric; with a SoFS layer added, you'll have the same amount of I/O targeting SAS, plus Ethernet, with its huge latency compared to SAS, sitting between the requestor and your data residing on the SAS spindles (I/O wrapped into SMB-over-TCP-over-IP-over-Ethernet requests at the hypervisor-SoFS layer). The reason SoFS is recommended is that the final SoFS-based solution is cheaper, as SAS-only is a pain to scale beyond basic 2-node configs. Instead of getting SAS switches, adding redundant SAS controllers to every hypervisor node and/or looking for expensive multi-port SAS JBODs, you'll have a pair (at least) of SoFS boxes doing a file-level proxy in front of a SAS-controlled back end. So you'll compromise performance in favor of cost. See:
    http://davidzi.com/windows-server-2012/hyper-v-and-scale-out-file-cluster-home-lab-design/
    The interconnect diagram used in this design would actually scale beyond 2 hosts. But you'd have to get a SAS switch (actually at least two of them for redundancy, as you don't want any component to become a single point of failure, do you?).
    2) With 2012 R2, all I/O from multiple hypervisor nodes goes through the storage fabric (in your case that's SAS), and only metadata updates go through the coordinator node over Ethernet. Redirected I/O is used in two cases only: a) no SAS connectivity from the hypervisor node (but Ethernet is still present), and b) broken-by-implementation backup software keeping access to the CSV via the snapshot mechanism for too long. In a nutshell: you'll be fine :) See for reference:
    http://www.petri.co.il/redirected-io-windows-server-2012r2-cluster-shared-volumes.htm
    http://www.aidanfinn.com/?p=12844
    3) These are independent things. CSV is not active/passive (see 2), so with the interconnection design you'll be using there's virtually no point in having one CSV per hypervisor. There are cases where you'd still do this. For example, if you had all-flash and combined spindle/flash LUNs and you knew for sure you wanted some VMs to sit on flash and others (not so I/O hungry) to stay on "spinning rust". Another case is a many-node cluster: with one, multiple nodes fight for a single LUN and a lot of time is wasted resolving SCSI reservation conflicts (ODX has no reservation offload like VAAI has, so even where ODX is present it's not going to help). Again, this is a place where SoFS "helps", as having an intermediate proxy level turns block I/O into file I/O, triggering SCSI reservation conflicts for the two SoFS nodes only, instead of every node in the hypervisor cluster. One more good example is when you have a mix of local I/O (SAS) and Ethernet with a Virtual SAN product. A Virtual SAN runs directly as part of the hypervisor and emulates a high-performance SAN using cheap DAS. To increase performance it DOES make sense to create the concept of a "local LUN" (and thus a "local CSV"), as reads targeting this LUN/CSV are passed down the local storage stack instead of hitting the wire (Ethernet) and going to partner hypervisor nodes to fetch the VM data. See:
    http://www.starwindsoftware.com/starwind-native-san-on-two-physical-servers
    http://www.starwindsoftware.com/sw-configuring-ha-shared-storage-on-scale-out-file-servers
    (feeding basically DAS to Hyper-V and SoFS, to avoid expensive SAS JBODs and SAS spindles). This is the same thing VMware is doing with their VSAN on vSphere. But again, that's NOT your case, so it DOES NOT make sense to keep many CSVs with only 3 nodes present or SoFS possibly used.
    4) DPM is going to put your cluster into redirected mode only for a very short period of time, if at all; Microsoft says NEVER. See:
    http://technet.microsoft.com/en-us/library/hh758090.aspx
    Direct and Redirected I/O
    Each Hyper-V host has a direct path (direct I/O) to the CSV storage Logical Unit Number (LUN). However, in Windows Server 2008 R2 there are a couple of limitations:
    For some actions, including DPM backup, the CSV coordinator takes control of the volume and uses redirected instead of direct I/O. With redirection, storage operations are no longer through a host's direct SAN connection, but are instead routed through the CSV coordinator. This has a direct impact on performance.
    CSV backup is serialized, so that only one virtual machine on a CSV is backed up at a time.
    In Windows Server 2012, these limitations were removed:
    Redirection is no longer used.
    CSV backup is now parallel and not serialized.
    5) Yes, VSS and CBT would be used, so the data transferred would be incremental after the first initial "seed" backup. See:
    http://technet.microsoft.com/en-us/library/ff399619.aspx
    http://itsalllegit.wordpress.com/2013/08/05/dpm-2012-sp1-manually-copy-large-volume-to-secondary-dpm-server/
    I'd also look at some other options. There are a few good discussions you may want to read. See:
    http://arstechnica.com/civis/viewtopic.php?f=10&t=1209963
    http://community.spiceworks.com/topic/316868-server-2012-2-node-cluster-without-san
    Good luck :)
    StarWind iSCSI SAN & NAS

  • SQL Server 2008 R2 Replication - not applying snapshot and not updating all replicated columns

    We are using transactional replication on SQL Server 2008 R2 (SP1) with a remote distributor. We are replicating from Baan LN, which is an ERP application, to up to 5 subscribers, all using push publications.
    Tables can range from a couple of million rows to 12 million rows and hundreds of GBs in size, and it's due to the size of the tables that it was designed with a one-publication-per-table architecture.
    Until recently it has been working very smoothly (the last four years), but we have come across two issues I have never encountered.
    While this has happened a half dozen times before, it last occurred a couple of weeks ago when I was adding three new publications, again one table per publication. We use standard replication stored procedure calls to create the publications, which have been successful for years. On this occasion, replication created the three publications, assigned the subscribers, and even generated the new snapshot for all three new publications. However, while it appeared that replication had created all the publications correctly end to end, it actually only applied one of the three snapshots and created the new table on both of the new subscribers (two on each of the publications). It only applied the snapshot to one of the two subscribers for the second publication, and did not apply it to any on the third.
    I let it run for three hours to see if it was a backlog issue. Replication was showing commands coming across when looking at the sync verification at the publisher, and it would even successfully pass a tracer token through each of the three new publications, despite the tables not being on either subscriber for one of the publications and missing on one of the subscribers for another. I ended up attempting to reinitialize roughly a dozen times, spanning a day, and one of the two remaining publications was correctly reinitialized and the snapshot applied, but the second of the two (failed) again had the same mysterious result, and again looked successful based on all the monitoring. So I kept reinitializing the last one, and after multiple attempts spanning a day, it too was finally built correctly.
    Now the story gets a little stranger. We just found out yesterday that on Friday the 17th at 7:45, the approximate time of the aforementioned deployment of the three new publications, we also had three transactions from a stable and vetted publication send over all changes except for a single status column. This publication has 12 million rows and is very active, with thousands of changes daily. The three rows did not replicate a status change from a 5 to a 6. We verified that the status was in fact 6 on the publisher and 5 on both subscribers, yet there were no messages or errors; all the other rows updated successfully. We fixed it by updating the column from 6 back to 5, then back to 6 again, on those specific rows, and it worked.
    The CPU is low and overall latency is minimal on the distributor. From all accounts the replication is stable and smooth, but very busy. The issues above have only recently started. I am not sure where to look for a problem and, to that end, a solution.

    I suspect the problem with the new publications/subscriptions not initializing may have been a result of timeouts, but it is hard to say for sure. The fact that it eventually succeeded after multiple attempts leads me to believe this. If this happens again, enable verbose agent logging for the Distribution Agent to see if you are getting query timeouts. Add the parameters
    -OutputVerboseLevel 2 -Output C:\TEMP\DistributionAgent.log to the Distribution Agent's Run Agent job step, rerun the agent, and collect the log.
    If you are getting query timeouts, try increasing the Distribution Agent's -QueryTimeOut parameter. The default is 1800 seconds; try bumping this up to 3600 seconds.
    Regarding the three transactions not replicating, inspect MSrepl_errors in the distribution database for the time these transactions occurred and see if any errors occurred.
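    A quick way to run that check from PowerShell, sketched with Invoke-Sqlcmd; the distributor instance name is a placeholder:

    # List the most recent Distribution Agent errors recorded on the distributor
    Invoke-Sqlcmd -ServerInstance "DISTRIBUTOR1" -Database "distribution" -Query @"
    SELECT TOP (50) [time], error_code, error_text
    FROM dbo.MSrepl_errors
    ORDER BY [time] DESC;
    "@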
    Brandon Williams (blog |
    linkedin)

  • Will Hyper-V Recovery Manager move a VM without shutting it down?

    Scenario
    Let's say I have two sites (primary and DR), with my production VMs running on the primary site. If I protect them with Hyper-V Replica, a VM is replicated and I can bring it up manually in the DR site, so I know that process works. I am trying to minimize the outage of an app and the time it takes to bring the VM up and running. I have a database server in the primary site, and I want to be able to move the VM hosting the database server to the secondary site without a shutdown, or with minimal downtime.
    Question
    Can I use Hyper-V Recovery Manager to migrate the VM with the database from the primary site to the DR site without downtime for the SQL Server? By "downtime for the SQL Server" I mean that clients can continue to connect to the database and access its data while I am migrating the VM.
    This is needed to maintain availability during DR testing of the production environment.
    Thanks,
    Carlos

    Hi Carlos, 
    For DR drills you can use a test failover. A test failover can spin up a test virtual machine alongside the production virtual machine; this VM runs in an isolated environment and does not affect production, so your SQL Server will have no outage during DR testing.
    A test failover can be executed on an individual virtual machine or on a recovery plan, which orchestrates the drill as close to a real failover as possible, recovering the application in a planned manner.
    Hope this pointer helps.
    More detailed tutorials are available at http://msdn.microsoft.com/en-us/library/windowsazure/dn440569.aspx
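    If you are working at the Hyper-V Replica level rather than through a recovery plan, a minimal test-failover sketch in PowerShell, with a hypothetical VM name:

    # On the replica host: spin up an isolated test copy; replication of the
    # production VM continues undisturbed
    Start-VMFailover -VMName "SQLDB01" -AsTest
    # ... run your DR validation against the test VM ...
    Stop-VMFailover -VMName "SQLDB01"   # tears down the test VM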

  • How to turn on both LAN and wireless on a T430

    I need both my LAN and wireless enabled at the same time. Can I turn them both on using ThinkVantage? I believe our IT dept has uninstalled any ThinkVantage software. Which ThinkVantage software should I download, and where do I turn both the wireless and the LAN on? Thank you

    Windows doesn't make it easy for you to have different processes access different networks simultaneously.
    You can force it to prioritize one over another, as described here: https://answers.microsoft.com/en-us/windows/forum/windows_7-networking/wired-and-wireless-connection...
    But it's all or nothing; you use either one or the other.
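    On Windows 8 / Server 2012 and later you can at least control which network wins by interface metric; a sketch with hypothetical interface aliases:

    # Give the wired NIC a lower (preferred) metric than the wireless one
    Set-NetIPInterface -InterfaceAlias "Ethernet" -InterfaceMetric 10
    Set-NetIPInterface -InterfaceAlias "Wi-Fi" -InterfaceMetric 50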
    W520: i7-2720QM, Q2000M at 1080/688/1376, 21GB RAM, 500GB + 750GB HDD, FHD screen
    X61T: L7500, 3GB RAM, 500GB HDD, XGA screen, Ultrabase
    Y3P: 5Y70, 8GB RAM, 256GB SSD, QHD+ screen

  • WRT150N (New) Gateway IP stops responding to LAN and wireless clients. Hangs, stops, loss of service

    WRT150N firmware version v1.51.3: from LAN- and wireless-connected devices, Internet connectivity is lost. I try to ping the LAN-side gateway IP address from my laptop and desktop: no response. Web management does not work either. A power cycle of the WRT150N fixes the problem. The problem is infrequent; it can happen twice per day or once every 2 days.
    When the problem occurs:
    the DHCP info on my clients looks fine and shows the correct gateway IP address, mask, etc.;
    the desktop and laptop can still ping each other;
    the gateway is unreachable and all outgoing connectivity is lost.
    Does anybody have a solution, or has anybody had the same experience?
    I cannot tie the problem to any particular event or usage pattern; however, I am using the Azureus BitTorrent client all the time.
    I have an incident raised with Linksys Technical Support, but no response so far from them.

    Hi - please go to this thread for more details:
    http://forums.linksys.com/linksys/board/message?board.id=Wireless_Routers&message.id=103033#M103033
    or search for the other thread started by fb2k. Briefly: over the 1-year period since this thread started, my local store replaced my WRT150N 3 times and then gave me a WRT160N, which was also replaced and is still having the problem. I am now running open-source wireless software (DD-WRT) on the WRT160N and it has been up 18 days with no restart. I didn't want to do this, but I got fed up with taking my unit back to the store. Thanks to fb2k (on the other thread) for taking the plunge and reporting success with the DD-WRT software.
