On-site Hyper-V servers to Azure Site Recovery replication

Hi,
  I am trying to set up virtual machine replication between an on-site Hyper-V 2012 R2 core server and Azure, without SCVMM. When I run the Microsoft Azure Site Recovery Provider setup, the Recovery Registration Wizard throws the error below as soon as I click Browse
to upload the vault key file. How do I resolve this error?
Regards
gprajan

Hi,
Please follow the steps below to resolve the issue.
1) Open an elevated (Administrator) command prompt.
2) Browse to the install location of the Azure Site Recovery Provider (usually C:\Program Files\Microsoft Azure Site Recovery Provider\).
3) Run the following command:
DRConfigurator.exe /r /Credentials <PathtoRegistrationKeyFile> /FriendlyName <NameOfHyper-VHost>
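For example, if the vault registration key was saved to C:\Temp and the host is named HYPERV01 (both values below are placeholders; substitute your own), the command would look roughly like this:
cd "C:\Program Files\Microsoft Azure Site Recovery Provider"
.\DRConfigurator.exe /r /Credentials "C:\Temp\MyVault.VaultCredentials" /FriendlyName "HYPERV01"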
Regards,
Anoop KV

Similar Messages

  • Hyper-V VM not showing up in Azure Site Recovery

    Hello,
    I'm trying to set up Azure Site Recovery as "On-Premise Hyper-V site to Azure" - so without VMM.
    So far I've followed the steps from here: https://msdn.microsoft.com/en-us/library/azure/dn879142.aspx up
    to Step 5. That's where I'm stuck.
    When trying to set up protection for my VMs, Azure is not recognizing my on-premises VMs, although there is one running on my hypervisor: http://i.imgur.com/h8TnV46.png
    The hypervisor got recognized within the Hyper-V site, and the Recovery Services Agent and Site Recovery service are running on it.
    The hypervisor's event log (Microsoft -> Azure Site Recovery -> Provider -> Operational) shows that replication is triggered every 15 minutes, as configured in Azure, but no changes are committed to Azure.
    My Hypervisor is a freshly installed Windows 2012 R2. Not domain joined, no proxies. Firewall disabled.

    Hi,
    Check the FAQ on Azure Site Recovery for Hyper-V and other VMs:
    http://social.technet.microsoft.com/wiki/contents/articles/21619.microsoft-azure-site-recovery-common-error-scenarios-and-resolutions.aspx
    Girish Prajwal

  • Microsoft Azure Site Recovery VM Metadata Replication

    Hello,
    We are in the process of investigating Azure Site Recovery for on-premises to Azure replication, but we have seen strange behavior with our VMs after the initial replication.
    We installed a new VM with a single .vhdx disk (containing the operating system only) of 127 GB, as the default suggests, and started replicating the VM to Azure. In the meantime we decided to add a second 200 GB .vhdx disk,
    and we assumed the replication would include the second .vhdx file (second disk), but it did not.
    We discovered this by running a test failover and seeing that the second disk was missing. The only way to replicate the change was to remove the replication and then add it back. It seems that only the initial replication
    captures the VM's metadata; is this by design?
    Ilan Saadi

    Hi Ilan,
    Your observation is correct; only when protection is enabled do we read the disk metadata and replicate it.
    Currently we don't support replicating a newly added disk on a protected virtual machine; this is on the backlog, and we would request that you add your specific requirement details to the feature request
    here.
    Regards
    Anoob

  • Azure site recovery with existing extended replication

    I'm wondering if this is possible. I have two non-clustered hypervisors, which both replicate to the same primary replication server, which in turn performs extended replication to another offsite replication server. I cannot set more than a single
    replication target on the primary hypervisors or the primary replication system and you cannot extend replication past an already extended replication server. In order to use Azure site recovery would I have to stop my existing extended
    replication and just perform replication to Azure? How does a private cloud work with 2 non-clustered primary hypervisors and primary/secondary private clouds?

    Hi
    I am not sure if I got the nuance of the question - but as of today, Azure Site Recovery does not support the notion or workflow of extended replication.
    This is applicable for replicating VMs from pointA->pointB->pointC (replace "point" with "server" or "cloud") & pointA->pointB->Azure (same as before, replace "point" with "server" or "cloud").
    Let's assume you have a replication setup for a VM between two VMM clouds - as of today, we do not have a workflow which allows you to "extend" the replica VM in the secondary cloud to Azure. Hope that helps.
    Praveen

  • Extend AD to Azure Site

    I'm stumped. I am trying to extend my AD domain into an Azure site. I have created a network and established a site-to-site VPN. I then created a Windows 2012 R2 VM and a second VM using the same cloud service. If I look at this service
    it shows the two instances. So far so good. I have my internal DNS servers set as the DNS for my network. I then RDP into the new VMs and join them to my domain. No problem. I can ping the new vm, I can rdp using domain credentials, etc. All good so far.
    Now, I add the AD role to one of the VMs.  Perfect.   I then run the AD DS config wizard.  Select add a domain controller to an existing Domain, select the domain, and as I am logged in as a domain admin with full rights, leave the default
    to use the signed in credentials.   Click next and get an error "Could not log onto the domain with the specified credentials".  What?  
    I have never seen this before, and I have extended our domain into other data centers via site to site vpn. 
    Is there something I need to change with the Network, Cloud Service, our VM? Is something being blocked that is preventing the authentication step necessary to contact our current AD DS servers?
    In the past after getting my Site to Site VPN in place I have been able to promote a member machine without issue.  
    Also, the same credentials were used to join the machine to the domain with no issues. 
    In the new server's event log, it shows that the DFS Replication service successfully contacted the domain controller to access configuration information. In the Application event log I only see an error that the Open Procedure for service "BITS" failed,
    but I also see an error in the System event log showing that an RPC call to the DC was not responsive and the RPC call has been cancelled.
    Any advice on what to look for that might cause the RPC call to fail?  Thank you in advance
    Fred Zilz

    Ok, it looks like RPC is failing when going through the VPN. I have a Juniper SRX100 on the on-premises side. If I run Get-WmiObject win32_computersystem -computer {computername} from one server to another on my local LAN, it returns the computer's
    system info. If I run it from one AD server on my local LAN to an AD server across a site-to-site VPN to a second location (SRX100 to SRX100, route-based), it also returns the computer information.
    If I run this from one VM on my Azure site to another VM on the same site, it returns the computer information. But if I run this from a computer on my local LAN to a VM on Azure, it fails. Running it from a VM on Azure to a computer on my LAN also
    fails. Note, I used the same machines for the local-to-local and local-to-external tests, so it is not a local machine firewall policy.
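    If it helps anyone narrow this down, here is a rough check of whether it is the initial RPC endpoint-mapper port or the dynamic high ports being dropped across the tunnel (the server name below is a placeholder):
    # Check the RPC endpoint mapper first, then attempt a full WMI/DCOM round trip.
    Test-NetConnection -ComputerName AZUREVM01 -Port 135
    Get-WmiObject -Class Win32_ComputerSystem -ComputerName AZUREVM01
    # If port 135 succeeds but the WMI call still fails, the dynamic RPC range
    # (49152-65535 by default) is the likely suspect on the VPN/firewall path.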
    Are there any settings for the VPN from the Azure side?   I am now looking to see if there is anything different on my VPN settings for my site to Azure VPN vs my site to second location VPN.
    Fred Zilz

  • Can I see the list of event notification types for Azure Site Recovery?

    Can I see the list of event notification types for Azure Site Recovery?
    I want to verify event notifications other than "Virtual machine health is OK".
    Example scenarios:
    Scenario 1: Disconnect the Ethernet cable.
      Can I get a notification e-mail when the Ethernet cable is disconnected from the on-premises Hyper-V host?
    Scenario 2: Turn off the virtual machine.
      Can I get a notification e-mail when the protected virtual machine on the on-premises Hyper-V host is turned off?
    Scenario 3: E-mail test.
      Can I get a notification e-mail after turning on event notifications?
      Is the e-mail address correct?
      Can I get the e-mail without it being automatically marked as junk?
    Regards,
    Yoshihiro Kawabata

    Hi Yoshihiro Kawabata,
    Thanks for bringing these requests to our attention. We currently do not have support for these three types of email notifications. I will add them to our backlog and we will enable support for them soon.
    Currently, we support email notifications only for replication issues. One example you can easily test is to Pause the replication of a virtual machine from the Hyper-V Manager UI. This should send an email notification.
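    If you prefer to script that test instead of using the Hyper-V Manager UI, the equivalent Hyper-V PowerShell cmdlets on the host would be roughly as follows (the VM name is a placeholder):
    # Pause replication long enough for ASR to raise a replication-health alert, then resume it.
    Suspend-VMReplication -VMName "TestVM01"
    Start-Sleep -Seconds 300
    Resume-VMReplication -VMName "TestVM01"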
    Let me know if this helps.
    Thanks
    Siva

  • Nutanix site-to-site replication

    When dealing with Nutanix protection domains, what would prevent the execution of the site-to-site replication of said protection domain?

    Hi, can you please check whether the network proxy settings configured in the Microsoft Azure Recovery Services Agent are correct. This agent, running on the Hyper-V host, needs to be able to reach Azure. (You can find more details about this here: http://blogs.technet.com/b/virtualization/archive/2014/07/07/azure-site-recovery-authenticated-proxy-server.aspx)
    If that is correct, please look at event logs at “Applications and Services Logs->MicrosoftAzureRecoveryServices->Replication” and
    traces at a location like C:\Program Files\Microsoft Azure Recovery Services Agent\Temp.
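    A quick way to pull those events from PowerShell on the host is sketched below; the channel name is an assumption based on the path above, so confirm it with the first command before relying on it:
    # List the Azure Recovery Services event channels, then read recent warnings/errors from the replication log.
    Get-WinEvent -ListLog *AzureRecoveryServices* | Select-Object LogName
    Get-WinEvent -LogName "MicrosoftAzureRecoveryServices/Replication" -MaxEvents 50 |
        Where-Object { $_.LevelDisplayName -in 'Error','Warning' }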
    Sudhakar [MSFT]

  • Azure Site Recovery..

    1. In what scenario does Azure come to know whether the server is up or down?
    2. Suppose only the network is down, but the servers are up and running; in that case, will the VMs in Azure be failed over or not?
     

    Praveen is spot on. There is a difference between high availability and disaster recovery. Azure Site Recovery leverages Hyper-V Replica, which is truly a DR solution. In most cases, this is something you would want to perform manually, and it requires
    human interaction.
    -kn
    Kristian (Virtualization and some coffee: http://kristiannese.blogspot.com )

  • Site to Site Replication only works for a few hours in the morning (each morning)

    We have been fighting an odd active directory replication issue for over a month now and I am hoping that someone can provide some insight. We have 5 AD servers in the following orientation...
    Site HQ
    - PRIME running Windows 2008 R2
    - AD2 running Windows 2008 R2
    Site COLO
    - AD3 running Windows 2008 (not R2)
    - AD3NEW running Windows 2008 R2
    Site BRANCH
    - AD4 running Windows 2008 R2
    The domain is at the Windows 2008 Functional Level.
    There are always on site to site VPNs between all 3 sites and IP Intersite Transports Site Links defined for all 3 possible connections with Cost of 100 and interval of 15. Each IP site link is configured with a schedule of available all day long.
    Every day the following sequence of events happens...
    * Somewhere between 6:30 and 7:30am all the servers start to sync with each other perfectly. We can make AD changes and they replicate across all servers without issues. During this time all the repadmin commands work well across all servers.
    * Typically somewhere in the 10:30 to 11:30am time frame we start to get errors replicating data - specifically between the HQ and COLO sites. This manifests itself as Event 1232 (Call Timeout) from the DC RPC Client and Event 1925 from the KCC. Additionally,
    repadmin commands fail when attempting to connect to the BRANCH servers.
    * For the rest of the day the intra-site replication between PRIME and AD2 works fine, and periodically the BRANCH AD server is updated as well. But the COLO site remains unreplicated and continues to get errors for the remainder of the day. While this is down,
    the ability to ping and remote desktop between the servers is perfectly fine; so even if a network hiccup did happen, the network remains stable for hours without the sites recovering.
    * Magically the next morning, between 6:30 and 7:30am, all the servers are able to replicate without issue, we get 3-5 hours of immediate replication, and then it happens again.
    As I stated above - there is always on site-to-site VPN connections between all 3 sites that are actively monitored by PRTG. These connections remain open all day long. The Site topology has the COLO servers attempting to replicate with the HQ servers -
    and both sites have 100MB data connections that remain active during the entire time. Additionally PRTG bandwidth monitoring shows that these links have no spikes in traffic anywhere near the max capacity of those links during the time that the outages begin
    nor during the rest of the day.
    Does anyone have any insight as to why these servers would stop communicating with each other about the same time every day and report errors? Also why it would magically start to work again each day without any changes being made to the network or the AD
    configuration?
    This has been going on for over a month now. When it first started to happen we had 1 Windows 2008 server and 2 Windows 2003 servers in the HQ. We phased out the Windows 2003 servers and upgraded the functional level to Windows 2008 - that did not solve
    the problem. We tried to put a new Windows 2008 R2 server out at the COLO site hoping that if it was limited to the other server then only the one server would be impacted. But now they both appear to be having connectivity issues at the same time.
    It is as if there is one hung connection that is blocking all the other syncs to this site, and then somehow each morning that bottleneck is released.
    Thank you in advance for any direction you can provide.

    As was stated above - ALL Domain Controllers have direct access to each other through Firewall to Firewall site to site VPNs and the Inter-Site Transport Links mirror that setup. So from the OS perspective any of the AD servers can directly connect to any
    other one.
    There are 3 IP Inter-Site Transport Links defined
    HQ < - > COLO   (Contains HQ and COLO sites) Cost 100  Replication Interval 15
    HQ < - > BRANCH  (Contains HQ and Branch sites) Cost 100 Replication Interval 15
    COLO < - > BRANCH  (Contains COLO and Branch sites) Cost 100 Replication Interval 15
    And on IP Inter-Site Transports "Bridge all site links" is enabled (although disabling it doesn't fix this problem as we have already tried that).
    Right now the servers are claiming (via Active Directory) to be unable to replicate with each other. But I am able to do direct pings as well as open stream sockets using "telnet <otherserver> <port>" on ports 3268 (gc), 88 (kerberos),
    389 (ldap), 135 (replication), 636 (ldap ssl), 53 (DNS). So there is nothing that I can see between the servers that is blocking TCP connectivity.
    I cannot seem to make this any clearer. The sites are 100% functional and responsive for several hours per day - and then mysteriously go into a state of complete denial, for lack of a better word, for the rest of the day - only to return to normal
    again reliably each morning.
    It is as if the sites get into a mode where something in the RPC area is simply refusing to talk, despite the servers having full access at the network level.
    Another data point to add to this mystery: while it is in the state where the HQ and COLO servers are refusing to sync with each other, you can launch the AD Users and Computers snap-in, right-click on the domain, change the current directory server,
    and all 5 servers show up as ONLINE. You can pick any of them (including the one it is unable to replicate with) and make a direct change on that server.
    So while the servers are complaining about being unable to talk to each other, the snap-in is connecting between those servers and is able to make modifications without issue.
    Conversely - when the replication is failing the DNS management tool is unable to connect to the remote servers (i.e. COLO can show itself and the other COLO server. HQ can show PRIME, DC2, and DC4 without issue. But no overlap).
    Not sure that helps at all - but shows our frustration when two servers refuse to replicate but you can easily remote connect from one to the other and make the change.
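    For anyone comparing notes on a similar pattern, capturing the replication state from each DC while the failure window is open can help correlate it with whatever changes mid-morning; a rough set of commands (run on each DC) would be:
    # Summarize replication health across all DCs, list this DC's inbound replication partners,
    # and run the built-in replication test.
    repadmin /replsummary
    repadmin /showrepl
    dcdiag /test:replications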

  • Guide to remote manage Hyper-V servers and VM's in workgroups or standalone

    This guide is based on the following 3 products:
    Windows server 2012 (core)
    Windows 8
    Hyper-V server v3 / Hyper-V server 2012
    The following guide will enable you to:
    1: remotely manage your Hyper-V Virtual Machines with Hyper-V manager
    2: remotely manage your Hyper-V servers' firewall with a MMC snap-in.
    3: remotely manage your Hyper-V server (2012) with server manager
    ! This should also work for Core installations of server 2012, but I haven't tried.
    This guide is purely focused on servers in a WORKGROUP, or standalone.
    I CANNOT tell you what you need to do to get it working in a domain.
    * You can run these commands straight from the console (Physically at the machine) or through RDP.
    * You will need to be logged on as an administrator.
    * Commands are listed in somewhat random order; I do however advise to follow the steps as listed.
    * Commands with ? in front of them are only meant to be helpful for troubleshooting,
    * and to identify settings and changes made.
    * Commands and instructions with ! in front of them are mandatory.
    - server: means the server core or hyper-v server (non gui)
    - client: means the machine you want to use for remote administration.
    - Some commands are spread over 2 lines; be sure to copy the full syntax.
    > To enable the Hyper-V manager to connect to your server, you need to perform the following 2 actions: (Assuming you have already installed the feature)
    1:
    ! Client: Locate the C:\Windows\System32\Drivers\etc\hosts file.
    ! right-click --> properties --> security
    ! click --> edit --> add --> YOURUSERNAME or Administrator --> OK
    ! then select this new user, and tick the "modify"-box under the "allow"-section.
    ! apply the change, and close.
    ! doubleclick the file, and open with notepad
    ! add the ip-address and name of your server (no // or other crap needed)
    ! Save the file
    # I recommend putting a shortcut to this file on the desktop.
    # If you change the ip-address of your server (e.g. move the server from staging to a live environment)
    # you might forget to do so in the hosts file.
    # Hyper-V manager, MMC, RSAT, and Server-manager all rely on the hosts-file to resolve the name.
    # some of these might connect to their respective service on an i.p.-level, but some don't.
    # This is the main reason you need to modify this file.
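    # As an example, the added line could look like this (the address and name are
    # placeholders; use your own server's details):
    192.168.1.50    HYPERV01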
    ! USE AN ELEVATED CMD/POWERSHELL PROMPT TO CONTINUE !
    # the next config needs to be done on windows 8.
    # It seems that it's already preconfigured under server 2012
    2:
    ! Client: dcomcnfg
    ! open component services --> computers
    ! right-click -> my computer -> properties
    ! select "COM SECURITY" tab
    ! under "ACCESS PERMISSIONS" select "edit limits"
    ! select "ANONYMOUS LOGON", and tick "remote access" under ALLOW
    # Without this adjustment, you can't connect to your Hyper-V server
    # with the Hyper-V manager if you're not in a domain.
    > And if you haven't done so already... make sure you have enabled remote management (option 4) on the Hyper-V server console.
    > Next, is to get the MMC firewall snap-in working.
       The reason for this, is to have a GUI available to configure it.
       If you're happy without it, you may skip this and use a shell instead to do so.
    ? server: netsh advfirewall show currentprofile
    # shows the current profile (public/domain/private) and its settings
    # depending on your environment, you should set the right profile to fit your needs.
    # You can easily do this when the MMC snap-in is done. (after you've followed these steps)
    ! server: netsh advfirewall set currentprofile settings remotemanagement enable
    # enables remote management of the firewall on an application level 
    # (In other words: allows the firewall to be remotely managed)
    ! server: netsh advfirewall firewall set rule group="Windows Firewall Remote Management" new enable=yes
    # allows remote management of the firewall, through the required firewall ports with TCP protocol.
    # 4 rules will be updated to allow access: public & Domain, dynamic and endpoint-mapper.
    # You can disable/add/change the rule from the MMC snap-in after finishing this guide.
    # e.g. set the firewall through the MMC-GUI to only allow specific ip-addresses etc.
    ? server: netsh advfirewall firewall show rule all
    # Shows a list of available rules, and their current state.
    # when run from cmd, the list exceeds the maximum length for review.
    # (from cmd,type:) start powershell, and run the command from there.
    ! Client: cmdkey /add:YOURSERVERNAME /user:USERNAMEONTHESERVER /pass:THEPASSWORDOFTHATUSER
    # I recommend using a username with enough privileges for management
    # All capital letters need to be replaced with your input
    # CMD answers "credential added successfully" when you're done
    ! Client: locate MMC, and run it as an admin.
    # In windows 8/2012, go to search and type MMC. Right-click the icon, 
    # and choose run as admin on the bar below.
    ! Client: application MMC: select "file" --> Add/remove snap-in 
    ! --> (left pane) scroll down to "windows firewall" --> select and click "add"
    ! select "another computer"
    ! type the name of the server you want to manage (NO workgroup/ or //, just same name as you typed for cmdkey)
    * Part 2 is done.
    # Have a look by doubleclicking the firewall icon in the left pane.
    # It looks and works the same as the GUI version that you are familiar with.
    ! Next is the Server Manager.
    # Follow the steps listed to get your server listed and manageable in the server manager.
    ! Client: Open the created Firewall snap-in for your server.
    ! Find the 3 "Remote Event Log Management" entries in the list of INBOUND rules, and enable them.
    ! Open powershell --> in cmd windows, type: start powershell
    ! run the following line in powershell
    ! Client: in C:\Windows\system32> set-item WSMAN:\localhost\client\trustedhosts -value YOURSERVERNAME -concatenate
    # WinRM Security Configuration.
    # This command modifies the TrustedHosts list for the WinRM client. The computers in the TrustedHosts list might not be
    # authenticated. The client might send credential information to these computers. Are you sure that you want to modify
    # this list?
    # [Y] Yes  [N] No  [S] Suspend  [?] Help (default is "Y"): y
    # I recommend choosing yes, unless you enjoy pulling out more of your hair...
    ! server: winrm qc
    # WinRM service is already running on this machine.
    # WinRM is not set up to allow remote access to this machine for management.
    # The following changes must be made:
    # Configure LocalAccountTokenFilterPolicy to grant administrative rights remotely
    # to local users.
    # Make the changes? y / n
    !  select yes
    ! Client: open the server 2012 server manager
    ! click manage -> add server
    ! select the DNS tab, and type the name of your server
    Done.
    You can now manage your remote server through the familiar computer management GUI.
    ! Right-click your remote server, and select "Computer Management"
    A few side notes:
    ? The Performance tab seems to list the local machine's performance, instead of the remote server's.
    ? If you want Windows Server Backup, you need to right-click the server in the server manager, and select "Add roles and features".
    ? It will then become available under the "Computer Management" of the remote server.
    If you liked this guide you may thank my employer, Mr. Chris W.
    for giving me the time to work it all out.
    Cheers!

    As a little update to the post, I'd like to add that replication, clustering and migration will not work in workgroup environments. Unless someone can provide an additional guide for this, I'd recommend that anyone not even bother to try.
    To manage the standalone hyper-v server in a remote location over the internet, I would recommend the following:
    Install Windows 8 Pro (x86 uses fewer resources!) as a VM on the host, and assign 2 network connections to it.
    1 external (shared with host) (be sure you have a dedicated ip-address for it!)
    1 internal connection.
    What I did was this:
    As soon as you've installed the win8 guest, proceed with the guide as described.
    For the 1st step of the guide (hosts-file) use the ip-address you will later assign to the "internal" network switch of the host!
    In my example, I'm using 10.0.0.1 for the host, and 10.0.0.2 for the guest.
    To be clear: I first used the guide on a LAN-environment, and did all the steps from a "real" client to server on the LAN.
    Then, installed the win8 guest on the host using the "real" clients' hyper-v manager over the LAN.
    Next, assigned the 2 network connections to the VM, and configured them as follows:
    external - as you would to be able to make your guest reach the internet.
    internal - I used the following config:
    ip-address: 10.0.0.2
    subnet: 255.255.255.252
    gateway - blank
    dns - Blank
    Now, when you get to the console of the hyper-v server (host) or RDP to it, go to network settings.
    You'll see that the internal card has been added here as well.
    Configure it as follows:
    ip-address: static - 10.0.0.1
    subnet: 255.255.255.252
    gateway - blank
    dns - blank
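    If you would rather do the host side from PowerShell instead of the console menu, a rough sketch would be the following (the switch name "Internal" is just an example; adjust it to whatever you used when assigning the VM's network connections):
    # Create an internal virtual switch on the host and give its vEthernet adapter the static address.
    New-VMSwitch -Name "Internal" -SwitchType Internal
    New-NetIPAddress -InterfaceAlias "vEthernet (Internal)" -IPAddress 10.0.0.1 -PrefixLength 30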
    You should now be able to ping your guest (win8) on 10.0.0.2 if it's running.
    Don't forget to enable ping response (option 4 on the host) to test connectivity the other way around as well (guest to host)
    When you're done, you'll be able to RDP to the guest OS over the internet, and then connect to the host with server manager, hyper-v manager, and MMC.
    Don't forget to enable each module on the host's firewall to make the snap-ins work!
    Remote volume management requires the INcoming ports on your guest/client firewall to be enabled as well, not just on the host.
    Either update the firewall rules from the MMC gui as described in the guide, or use the following commands on the
    hosts' powershell:
    Enable the firewall rules with the command Enable-NetFirewallRule -DisplayGroup "GROUP_NAME", once for each of the display groups listed below (keep the quotation marks in the command); a spelled-out example follows after the list.
    Remote Service Management
    Remote Volume Management
    Remote Event Log Management
    Remote Scheduled Tasks Management
    Windows Firewall Remote Management
    Windows Remote Management
    You can get the list with Get-NetFirewallRule -DisplayName *management*
    You can get the list with Get-NetFirewallRule -DisplayName *remote*
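    Spelled out, that boils down to something like this in the host's PowerShell (one Enable-NetFirewallRule call per display group):
    $groups = "Remote Service Management",
              "Remote Volume Management",
              "Remote Event Log Management",
              "Remote Scheduled Tasks Management",
              "Windows Firewall Remote Management",
              "Windows Remote Management"
    foreach ($g in $groups) { Enable-NetFirewallRule -DisplayGroup $g }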
    Commands provided with credits to F. verstegen
    Cheers,
    Michael.

  • Azure Site Recovery for HA workloads?

    Can Azure Site Recovery be used for HA workloads?
    To my understanding Azure Site Recovery leverages Hyper-V Replica. What if my on-premises clouds are hosting HA SQL AlwaysOn cluster workloads?
    Thanks!

    Hello Quivver,
    Thanks for your query. Can you clarify whether you are using SQL AlwaysOn Availability Groups or a Failover Cluster Instance (FCI)? If you are already using SQL AlwaysOn Availability Groups, we recommend using them to replicate the SQL databases to the DR site,
    and using Azure Site Recovery to protect the app-tier VMs and create a recovery plan to fail over the entire application.
    If you are using FCI instances, they cannot be replicated using Hyper-V Replica. We recommend enabling SQL log shipping or an availability group and orchestrating that with an ASR recovery plan.
    Thanks,
    Abhishek Agrawal, PM, Azure Site Recovery

  • Azure Site Recovery to Azure - cost for data transfer and storage

    Hello,
    I am sending this message on behalf of a small firm in Greece interested in implementing Azure Site Recovery to Azure.
    We have one VM (Windows 2008 R2 Small Business Server) with 2 VHDs (a 100 GB VHD for the OS and a 550 GB VHD for data) on a Windows Server 2012 Standard Edition host.
    I would like to ask a few questions about the cost of the data transfer and the storage.
    First: About the initial replication of the VHDs to Azure. It will be 650 GB. Is it free as inbound traffic? If not, the Azure pricing calculator shows about €57. But there is also the import/export option, which costs about the same:
    https://azure.microsoft.com/en-us/pricing/details/storage-import-export/
    What would be the best solution for our case? Please advise.
    Second: What kind of storage is required for the VHDs of the VM (650 GB)? My guess is blob storage. For locally redundant storage, the cost will be about €12-13/month. Please verify.
    Third: Is the bandwidth for the replication of our VM to Azure free?
    That's all for now.
    Thank you in advance.
    Kind regards
    Harry Arsenidis 

    Hi Harry,
    1st question response: ASR doesn't support Storage Import/Export for seeding the initial replication storage. ASR pricing can be found
    here; it details that about 100 GB of Azure replication & storage per VM is included with the purchase of the ASR-to-Azure subscription SKU through the Microsoft Enterprise Agreement.
    Data transfer pricing
    here indicates that inbound data transfers are free.
    As of now, the only option will be online replication. What is the current network link type & bandwidth to Azure? Can you vote for the feature & update the requirements here?
    2nd question response: A storage account with geo-redundancy is required. But as mentioned earlier, with a Microsoft Enterprise Agreement you will get 100 GB of Azure replication & storage per VM included with ASR.
    3rd question response: Covered as part of the earlier responses.
    Regards, Anoob

  • RPC error when configuring Exchange 2013 servers in 2nd site

    Hello. I'm running into an error when trying to configure any of my Exchange 2013 servers in my 2nd AD site. To give you a picture of what my server structure looks like, please see below:
    Site 1 servers:
    DC1 - Domain Controller
    DC2 - Domain Controller
    CAS1 - CAS server
    CAS2 - CAS server
    MBX1 - Mailbox server
    MBX2 - Mailbox server
    MATHAFTMG - TMG server
    Site 2 servers:
    CCCDC1 - Domain Controller
    CCCDC2 - Domain Controller
    CCCCAS1 - CAS server
    CCCCAS2 - CAS server
    CCCMBX1 - MBX server
    CCCMBX2 - MBX server
    CCCTMG - TMG server
    Currently I have a site-to-site vpn connection between site 1 and site 2 TMG servers via Internet connection; I can access the servers of the other site perfectly (whether I am in Site 1 or Site 2).
    All user mailboxes are currently in Site 1 MBX servers; when users are in Site 2, they connect to the CAS servers in Site 1 to access their mailboxes.
    Many users will stay permanently in Site 2, so it makes sense to have Exchange servers in Site 2 to provide faster access to mailboxes. I created the Site 2 domain controllers, and made sure AD replication is working; and it is. I then added the MBX servers
    and CAS servers in Site 2 in this order: CCCMBX1, then CCCCAS1, then CCCMBX2, then CCCCAS2.
    All Exchange servers in Site 2 installed beautifully. But then I tried to access the servers via ECP to proceed with the configuration. In ECP, I click on the server link, and all Exchange servers in both sites appear. If I try to configure the virtual directories
    of Site 1 CAS servers, no problem. But when I try to configure virtual directories of Site 2 CAS servers, I get this error message:
    The task wasn't able to connect to IIS on the server 'CCCCAS1.domain.com'. Make sure that the server exists and can be reached from this computer: The RPC server is unavailable.
    The virtual directories issue is just an example. Same thing happens if I try to configure Outlook Anywhere for Site 2 CAS servers.
    Users connect to Site 1 CAS servers via mail.domain.com. I have the A record mail.domain.com pointing to the IP address of CAS1 server, and another A record mail.domain.com pointing to the IP address of CAS2 server. Not the best load balancing going on here,
    but it works great with Exchange 2013.
    From mail.domain.com I can access OWA and ECP internally and externally; no problems there. From ECP I can access and configure any Site 1 Exchange 2013 servers.
    The only problem is when I access ECP to configure the Site 2 Exchange 2013 servers, I get the same error message:
    The task wasn't able to connect to IIS on the server '<server name>.domain.com'. Make sure that the server exists and can be reached from this computer: The RPC server is unavailable.
    Even if I try to access a Site 2 Exchange 2013 server via https://localhost/ecp to configure it, I get the same error message.
    I updated all Exchange 2013 servers in both sites to CU2 v2 and rebooted the servers in the proper order; the problem is still there.
    Any clue what might the problem be?
    Thank you!

  • Azure Site Recovery vs. Manual Failover?

    Hi all-
    I am designing a Windows Server 2012 R2 DR scenario using Hyper-V replica.
    Environment:
    Site A:  Primary Server (HP DL380 using DAS with Gen2 / VHDX VMs).  System Center VMM 2012 R2 management server and console installed on this Hyper-V host.  SQL database for SCVMM placed in one of the VMs on this host.
    Site B:  Replica Server (Identical HP DL380).
    Approximately 10Mbps WAN connection interconnecting the sites.
    I need to provide DR for approximately 5-6 VMs.  These will be running standard MS apps like SQL, SharePoint, etc.
    I am wondering if it's worthwhile to use Azure Site Recovery to orchestrate a small DR scenario like this or whether, due to its small size, I am better off just planning to use manual failover.
    Also, if I DO elect to use Azure site recovery, do I need to install the full SCVMM 2012 R2 Server and Console on the server, or will a management agent from the Primary Server do the trick?  If a full installation, I'm assuming I will need a full instance
    of SQL Server to host a separate database at Site B.  Am I correct?
    Thanks

    Hi,
    Azure Site Recovery is awesome, and I recommend it to anyone who has an environment that supports it. If all prerequisites are met, it's simple to enable and manage :)
    http://msdn.microsoft.com/library/azure/dn469078.aspx (Prerequisites and support)
    http://azure.microsoft.com/en-us/documentation/articles/hyper-v-recovery-manager-configure-vault/
    You can replicate between two clouds on the same on-site VMM server. So if you place your VMM service so it will not fail if your Hyper-V host does, you should be fine :)
    Anyway, if you are far away from meeting the prerequisites, you might be better off implementing the built-in Hyper-V Replica service, and then take the step to Azure Site Recovery when you have time.
    Best of luck in your project!
    /Anders Eide

  • VMWare to Azure Site Recovery pricing

    Hi all,
    We are looking into a disaster recovery solution, and I am having a hard time understanding the pricing and the conditions for using Azure Site Recovery.
    We are going to have a file server running Windows Server 2012 R2 on a VMWare host.
    The files shared currently amount to 300GB of space.
    There is nothing else critical running on this server.
    I understand that there is a cost of 52.44 CHF/month/instance (http://azure.microsoft.com/en-us/pricing/details/site-recovery/).
    How can I calculate the cost in the case I need to use the Azure Site Recovery to run my server? What are the elements that I am going to pay for once the machine is running in Azure?
    Thank you very much.

    You don't need to use the Migration Accelerator to protect to Azure; you can use ASR itself.
    The documentation has now been updated.
    Kindly also look at these blogs:
    http://blogs.technet.com/b/in_the_cloud/archive/2015/03/27/announcing-azure-site-recovery-disaster-recovery-for-vmware-vms-physical-servers-amp-more.aspx
    http://azure.microsoft.com/blog/2015/03/26/announcing-the-preview-of-disaster-recovery-for-vmwarephysical-servers-to-microsoft-azure-with-asr/
    thank you,
    ruturaj
