Azure Recovery Services (Backup Vault)

Hi
I'm trying to test Azure Recovery Services. I have created a Backup Vault and connected a new server to it using the Azure-generated certificate. The set-up seemed very simple and the installation of the agent went like a dream.
Unfortunately it does not work: after I start a backup I get the error "Engine encountered an unexpected error. Please ....." The corresponding event log entry is:
Faulting application name: cbengine.exe, version: 2.0.8694.0, time stamp: 0x54816461
Faulting module name: KERNELBASE.dll, version: 6.3.9600.17415, time stamp: 0x54505737
Exception code: 0xe0434352
Fault offset: 0x0000000000008b9c
Faulting process id: 0x12fc
Faulting application start time: 0x01d03016883f6a46
Faulting application path: C:\Program Files\Microsoft Azure Recovery Services Agent\bin\cbengine.exe
Faulting module path: C:\Windows\system32\KERNELBASE.dll
Report Id: 72d16f21-9c0a-11e4-80e6-00155d800f3f
Faulting package full name:
Faulting package-relative application ID:
The server is a Windows Server 2012 R2 domain controller and is patched with all the latest Windows Updates. The latest Azure agent was installed from the website.
I have tried it on a second server built at the same time, and it fails there with the same error.

Please see the note on the drives supported by Azure Backup:
https://msdn.microsoft.com/en-us/library/azure/jj573031.aspx?f=255&MSPPError=-2147217396#BKMK_faq_4

Similar Messages

  • Error -2145124329 when installing the Microsoft Azure Recovery Services Agent on SBS 2011 Standard

    When installing the Microsoft Azure Recovery Services Agent on Windows SBS 2011 Standard I receive error code -2145124329. Looking at the file OBManagedlog.LOGCurr.errlog (located in C:\Windows\Temp; extract below), it appears to be failing on
    the Windows PowerShell 3.0 prerequisite, which is part of the .NET 3 Framework. I have tried to install this manually, but it does not appear to be compatible with SBS 2011 Standard. Does anyone have any ideas? Microsoft Azure support cannot help until we have a paid subscription,
    and I only have a trial at the moment, as I wanted to test the product first but have been unable to do so because I can't install the agent.

    Hi,
    I searched, and it seems that Windows SBS 2011 is not on the supported list. Only Windows Server 2012 Essentials (and later versions) are supported.
    Windows Server Essentials Integration Module for Windows Azure Backup is Now Available
    http://blogs.technet.com/b/sbs/archive/2013/04/19/windows-server-essentials-integration-module-for-windows-azure-backup-is-now-available.aspx
    Edit on May 2: I've heard that it was supported in the trial version, so I am now contacting the related team to confirm whether it is supported. I will update when I get a response.

  • MS Azure Recovery Services Agent was unable to create a snapshot - 0x186C2

    Hello in the Azure Backup forum,
    We use Microsoft Azure Backup on a VM in Azure. We have scheduled a backup for every day. However, so far it has not been able to back up anything; it fails with:
    "Microsoft Azure Recovery Services Agent was unable to create a snapshot of the selected volume. Please try the operation again. If the issue persists, contact Microsoft Support. (0x186C2)"
    and the Windows event log shows:
    Event ID 20 - volsnap:
    " a volume snapshot of C:\ was interrupted - free space could not be calculated " - this has been translated from Danish.
    Furthermore when I try doing a manual backup via volume shadow copy services I get the following error:
    When troubleshooting and researching I found hotfixes for this type of error for Windows Server 2003, but none for Windows Server 2012 R2 or specifically for Microsoft Azure Backup.
    Any suggestions and help will be highly appreciated :-)
    Looking forward to hearing from you.
    Red Baron

    Hi,
    Please change the Shadow Copy Storage limit for the system drive to NO LIMIT and check again. You may refer to:
    https://social.technet.microsoft.com/Forums/windowsserver/en-US/e8cac987-e3b9-4ac1-9247-a72e1f953036/was-unable-to-create-a-snapshot-of-the-selected-volume-0x186c2?forum=windowsbackup
    Also, the error message states that free space could not be calculated, so please make sure that you have enough free space (at least equivalent to the size of the snapshot).
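    For reference, the shadow storage limit can also be inspected and lifted from an elevated command prompt with the built-in vssadmin tool. This is only a sketch; the drive letter below is an assumption, so adjust it for your system:

```shell
# Show the current shadow copy storage association for the C: drive
vssadmin list shadowstorage /for=C:

# Remove the size cap so VSS can grow the diff area as needed
# (run from an elevated command prompt; "UNBOUNDED" corresponds to NO LIMIT in the UI)
vssadmin resize shadowstorage /for=C: /on=C: /maxsize=UNBOUNDED
```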
    Regards,
    Manu

  • Azure Recovery Service

    Hello,
    We are using the Azure Recovery Service to back up data from our server (Windows Server 2012).
    It has been running for almost a year now.
    Since last October we have experienced problems backing up data; jobs keep failing without any additional details.
    I completely reinstalled the Azure agent on the server yesterday to see if it would change anything, but we still have issues.
    It is quite hard to identify the issue and I need help with that. Moreover, the backups are still billed as if data were being consumed, but our last recovery point is from 09.10.2014!
    Thank you in advance for your help and support.

    Hello and thank you for your answer.
    Looking at the log I could extract the part where the backup fails.
    3EE4 477C 01/07 12:42:28.813 75 OMUtils.cs(59)  246C08FC-FE67-4664-890F-D3C7FBF03D44 NORMAL GetFQDNForServer = sdw-srvbu.
    3EE4 341C 01/07 12:42:29.469 32 metadatastream.cpp(537) [000000001C2AC8B0] 65853539-E59E-4732-AFD0-C5D1F0393B3A NORMAL CMetadataStream::WaitForCompletion Waiting for MetadataStream [4].
    3EE4 341C 01/07 12:42:29.797 32 metadatastream.cpp(562) [000000001C2AC8B0] 65853539-E59E-4732-AFD0-C5D1F0393B3A NORMAL CMetadataStream::WaitForCompletion Wait complete for MetadataStream [4]. Bytes read = [11574181888]
    3EE4 341C 01/07 12:42:30.770 32 metadatastream.cpp(537) [000000001C2AC8B0] 65853539-E59E-4732-AFD0-C5D1F0393B3A NORMAL CMetadataStream::WaitForCompletion Waiting for MetadataStream [4].
    3EE4 341C 01/07 12:42:31.212 32 metadatastream.cpp(562) [000000001C2AC8B0] 65853539-E59E-4732-AFD0-C5D1F0393B3A NORMAL CMetadataStream::WaitForCompletion Wait complete for MetadataStream [4]. Bytes read = [11575230464]
    3EE4 192C 01/07 12:42:31.837 75 OMUtils.cs(59)  FBAF663B-ABA3-48FB-BAFA-E3FA9EB4E55F NORMAL GetFQDNForServer = sdw-srvbu.
    3EE4 341C 01/07 12:42:32.087 32 metadatastream.cpp(537) [000000001C2AC8B0] 65853539-E59E-4732-AFD0-C5D1F0393B3A NORMAL CMetadataStream::WaitForCompletion Waiting for MetadataStream [4].
    3EE4 42B0 01/07 12:42:32.540 32 msfstream.cpp(133) [000000001C2AC8B0] 65853539-E59E-4732-AFD0-C5D1F0393B3A NORMAL CMSFStream Download of Metadata stream [4] complete! Bytes downloaded = [11577056256].
    3EE4 341C 01/07 12:42:32.540 32 metadatastream.cpp(562) [000000001C2AC8B0] 65853539-E59E-4732-AFD0-C5D1F0393B3A NORMAL CMetadataStream::WaitForCompletion Wait complete for MetadataStream [4]. Bytes read = [11576143360]
    3EE4 341C 01/07 12:42:33.353 32 msfparser.cpp(114) [000000001C3DE5A0] 65853539-E59E-4732-AFD0-C5D1F0393B3A NORMAL Copying real boot sector
    3EE4 341C 01/07 12:42:33.399 32 metadatastream.cpp(618) [000000001C2AC8B0] 65853539-E59E-4732-AFD0-C5D1F0393B3A NORMAL CMetadataStream::Close Closing metadata stream [4]
    3EE4 341C 01/07 12:42:33.399 32 metadatastream.cpp(629) [000000001C2AC8B0] 65853539-E59E-4732-AFD0-C5D1F0393B3A WARNING Failed: Hr: = [0x80070003] Failed to delete metadata file []
    3EE4 341C 01/07 12:42:33.399 32 vhdhelper.cpp(15) 65853539-E59E-4732-AFD0-C5D1F0393B3A NORMAL pVhdHelper:[1a2ab3900x, Type:[1]]
    3EE4 3140 01/07 12:42:34.853 75 OMUtils.cs(59) EA4AB5E8-FAD7-4306-A38A-09260439943A NORMAL GetFQDNForServer = sdw-srvbu.
    3EE4 3140 01/07 12:42:37.874 75 OMUtils.cs(59) 1953CED7-9413-47F0-BB81-59BEB03022D4 NORMAL GetFQDNForServer = sdw-srvbu.
    3EE4 341C 01/07 12:42:37.936 71 replica.cpp(126) 65853539-E59E-4732-AFD0-C5D1F0393B3A NORMAL ErrorMapper: Set Dls error [DLS - 32505]
    3EE4 341C 01/07 12:42:37.936 71 dscontext.cpp(152) [000000001A2FCEF0] 65853539-E59E-4732-AFD0-C5D1F0393B3A WARNING Last completed state for Ds Id (1801510221167250988) is 4
    3EE4 341C 01/07 12:42:37.936 71 dscontext.cpp(158) [000000001A2FCEF0] 65853539-E59E-4732-AFD0-C5D1F0393B3A WARNING Ds Id (1801510221167250988) failed. DLS: 32505 HRESULT: 0x80070005
    3EE4 341C 01/07 12:42:37.936 71 storageasyncworker.cpp(227) [000000001A42C078] 65853539-E59E-4732-AFD0-C5D1F0393B3A WARNING Failed: Hr: = [0x80070005] Initialize unsuccessful for replica id ({DED9EF60-259A-49EE-9422-81782CE7427B})
    3EE4 341C 01/07 12:42:37.936 71 backupasync.cpp(1016) [000000001A42C070] 65853539-E59E-4732-AFD0-C5D1F0393B3A NORMAL Backup Progress: Initialize Storage finished.
    3EE4 341C 01/07 12:42:37.936 71 backupasync.cpp(1045) [000000001A42C070] 65853539-E59E-4732-AFD0-C5D1F0393B3A WARNING Failed: Hr: = [0x80070005] Backup Progress: Failed
    Tell me if this helps; otherwise I will attach the whole log file.
    Thank you

  • Azure VM name when restoring from Backup Vault

    I've been experimenting with restoring Azure VMs using Backup Vault, and it works great. This will solve our DR needs VERY easily, so thank you!
    One weird issue (bug?) is that when I try to restore from a recovery point and enter a VM name, I get "Specified name is already in use" for names that don't exist at all within my account. This causes a slight issue since, even though I can
    rename the VM within the RDP session, the name listed in the Azure Portal doesn't seem to change, which could become an administrative nightmare down the road. Is this a bug? Is there a way to rename the name that appears in the Azure Portal?
    Thanks.
    -Ben

    Hi Manu,
    I tried logging out of and back into both portals, but I don't see the updated name (I renamed the VM an hour or two ago). Just so I'm clear, the host name does reflect the renamed VM name. However, the name listed in the various lists within the portal is not
    updated.
    In the screenshot below, the restored VM name was "WEB2-Restore". I then renamed the VM from within an RDP session to "WEB1". The "Host name" is listed correctly on this page, but not the name in the label or heading:
    -Ben

  • Offline data synchronization in Azure Mobile Services on Windows Server 2008

    Hi,
    I have a class library which inserts data into tables in Azure Mobile Services, running on Windows Server 2008, for the Windows Universal C# platform. I am trying to insert data using offline data synchronization.
    I installed the SQLite runtime for Windows 8.1 and Windows Phone 8.1, but I am unable to add a reference to 'SQLite for Windows Runtime (Windows 8.1)'.
    Please let me know whether Windows Server 2008 supports offline data synchronization in Azure Mobile Services.
    Thank you.

    I also have a Windows Server 2012 R2 system using Azure Backup, and I don't have the problem. However, you probably noticed that you use a different Azure Backup installation download for Windows Server 2008 R2 vs. Windows Server 2012 R2. Although both
    show the same Microsoft Azure Recovery Services Agent version 2.0.8692.0 installed, my Windows Server 2012 R2 system also lists Microsoft Azure Backup for Windows Server Essentials version 6.2.9805.9 installed. It could be that my problem with the
    CATALOG FAILURE 0x80131500 errors is something specific to the version of Azure Backup installed on my Windows 2008 R2 servers.
    Trilon, Inc.

  • Need info about data recovery services

    My main hard drive on my Mac G5 crashed on Saturday, and I'm sending it to Drive Savers today. My question concerns security and confidentiality. I have some files on the drive that I would prefer no one looks at. To what extent, if any, do these places open and look at the contents of individual files? I can't imagine they look at everything. A typical drive like this will have hundreds of thousands, if not millions, of files.
    Thanks for any info.

    Hi mrsaxde-
    The sad reality is that once you hand over your data to anybody, you really have no assurance that your files will not be looked at.
    Most likely, I would suppose that data-recovery services would not want to get a bad reputation as "the company that looks through your personal data", and would be sensitive to such things.
    I would suggest a Google search of that recovery company to see what folks might have to say.
    I have all of my stuff backed up in triplicate, generally due to paranoia about things like this happening. This is unfortunately an expensive and scary lesson in support of a proper backup routine.
    Luck-
    -DP

  • Transaction Recovery Service failover

    Can anyone explain what the suggested configuration is for the default persistent store? In particular, this is to ensure the proper failover/migration of the Transaction Recovery Service, which is required to use the default persistent store, which is file based. Based on the following statement from the docs:
    Preparing to Migrate the Transaction Recovery Service
    To migrate the Transaction Recovery Service from a failed server in a cluster to another server (backup server) in the same cluster, the backup server must have access to the transaction log records from the failed server. Therefore, you must store default persistent store data files on persistent storage available to all potential backup servers in the cluster. Oracle recommends that you store transaction log records on a Storage Area Network (SAN) device or a dual-ported disk. Do not use an NFS file system to store transaction log records. Because of the caching scheme in NFS, files on disk may not always be current. Using transaction log records stored on an NFS device for recovery may cause data corruption.
    A SAN storage device is recommended for this, but my understanding is that a SAN device can only be mounted by one machine at a time. Does this imply, then, that our failover process needs to include mounting the SAN before starting the failover server (as part of the whole server migration)? The docs here (http://download.oracle.com/docs/cd/E15523_01/core.1111/e12036/net.htm#CIHBDDAA) indicate that a NAS can be used, and even give examples of configuring it using NFS mount points:
    The following commands show how to share the SOA TX logs location across different nodes:
    SOAHOST1> mount nasfiler:/vol/vol1/u01/app/oracle/stores/soadomain/soa_cluster/tlogs /u01/app/oracle/stores/soadomain/soa_cluster/tlogs -t nfs
    SOAHOST2> mount nasfiler:/vol/vol1/u01/app/oracle/stores/soadomain/soa_cluster/tlogs /u01/app/oracle/stores/soadomain/soa_cluster/tlogs -t nfs
    Can anyone describe a best-practices approach for how to configure the expected persistent storage solution that will work with proper failover of the transaction recovery service?
    Thanks!
    Gary

    Have a look at this article and see if it helps.
    http://el-caro.blogspot.com/2008/11/parallel-rollback.html

  • Data recovery service for dead HD

    My hard drive had to be removed (and replaced) after it could not be recognized. Does anyone know a good data recovery service that I can send my old drive to, to retrieve client files (mainly Quark, Photoshop, and InDesign files)?
    Thanks for any input you can provide.

    Do yourself a favor: get a good external HD, at least twice the size of the data that is (or was) on your internal HD, and start making backups with Time Machine or another app.
    Then the next time this happens (and all hard drives fail, sooner or later), you won't lose everything.
    You might want to review the Time Machine Tutorial,
    and perhaps browse Time Machine - Frequently Asked Questions (or use the link in User Tips at the top of the Time Machine forum).
    Or see Kappy's post on Basic Backup, complete with links to the web sites of each product.

  • HP Recovery Service

    I have a g7-1110 notebook.
    I earlier removed drive C;
    there is a new version of Windows 7 on it.
    I still have my backup drive.
    I think I need the HP Recovery Service program so I can use my backup drive to restore factory settings, because right now I have a
    backup drive and I can't use it.

    As you changed the OEM copy of Windows 7 to a retail copy of Windows 7, the F11 option will not work, and there is no download available to fix this.
    Your only option is to order recovery disks and put the notebook back to factory condition.
    I am an HP employee.

  • Can Server 2012 R2 Essentials / Azure Directory Services integrate with Office365 Home?

    I currently run Windows Server 2012 R2 Essentials at home to provide network features and automated backup. I would like to use Office 365 Home for my family (I currently use Office 365 Small Business without standalone Office apps). Does anyone
    know if Office 365 Home will integrate? Can Azure Directory Services be used? Thanks.

    maybe helpful...
    http://technet.microsoft.com/en-us/library/jj593240.aspx
    http://technet.microsoft.com/en-us/library/dn509538.aspx
    http://www.petri.com/active-directory-integration-office-365-installation.htm
    http://www.petri.com/active-directory-integration-office-365-directory-sync.htm
    http://blogs.technet.com/b/ad/archive/2013/09/10/empower-your-office-365-subscription-identity-management-with-application-access-enhancements-for-windows-azure-ad.aspx
    Best,
    Howtodo

  • SQL Azure sync service: absurdly slow and fails after a few days

    Hello. We have been trying to use Azure Data Sync to replicate an on-premises MSSQL database to a SQL Azure database for read-only access by a customer. This was working for a while, but it stopped syncing after a couple of months (12-hour auto-sync schedule) with
    no errors in the log. I had to re-create the sync group, but now it takes even longer than originally to sync, and it never actually completes, as it gets interrupted by bi-weekly server restarts. It used to take a few hours to sync the new data in our
    database (which is appended to daily), but this time it fails after 4+ days. It was unacceptably slow initially (IMO), but now it's clearly unusable. The original initialization of the data when I first set it up took less than 2 days of syncing.
    It seems there is a throttle on the Azure sync service. Is this true? Would it be best to clear the SQL Azure database now and re-sync? Is there a way to pre-load the SQL Azure database with MSSQL on-premises data via a SQL backup file or something?
    Please advise. Thank you.

    When you re-created the sync group, did the member databases/hub database have pre-existing data?
    When syncing a sync group for the first time, make sure the databases don't contain the same set of data; otherwise, you will run into conflicts which will completely slow down your sync.
    I deleted the initial sync group because it wasn't syncing (automatically or on demand), nor creating a log entry with an error indicating why.
    So I simply deleted the sync group and re-created it with the exact same databases and settings. I did not delete all the data in the SQL Azure database; I was under the assumption that the sync service, with its tracking tables, was smart enough not to get
    confused by pre-existing data, but apparently that's not how this works?
    I obviously can't delete the data in the source database (MSSQL on-premises), but I could delete the tables in the SQL Azure database if that's supposed to fix the problem; then we'll just have to wait multiple days for it to be completely re-initialized,
    hopefully without error. Is there a way to seed the data in some way to prevent this extremely long first sync?
    Thank you for your help.
    Thank you for your help.

  • Integrating a PHP Web App with an Existing Azure Mobile Services and Mobile App

    I've got an existing mobile app that is integrated with Azure Mobile Services. The mobile services are currently connected to Azure Active Directory with MFA enabled. I'd like to build a separate PHP-based web application (on an Azure VM) that uses this existing
    mobile service and authentication.
    I reviewed the Azure PHP SDK, but didn't see any tie-ins to the Mobile Service. Additionally, Azure has some great tutorials, but for mobile services they all seem to focus on iOS, Android, and Windows Phone. Any insight into how to tie a PHP app into this
    backend would be much appreciated!

    Although there isn't a client library for PHP, you can still access Mobile Services using the
    Azure Mobile Services REST API.
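    For example, a table read is just an HTTP request, so it can be issued from any language. A minimal sketch using curl follows; the service name, table name, and application key below are placeholders, not values from this thread:

```shell
# Placeholder values -- substitute your own Mobile Service details.
SERVICE="yourservice"
TABLE="TodoItem"
APPKEY="your-application-key"

# The classic Azure Mobile Services table endpoint.
URL="https://$SERVICE.azure-mobile.net/tables/$TABLE"
echo "$URL"

# Read all rows as JSON; the X-ZUMO-APPLICATION header carries the application key.
# (Requires a live service, so the call itself is commented out here.)
# curl -s -H "X-ZUMO-APPLICATION: $APPKEY" -H "Accept: application/json" "$URL"
```

    The same request can be sent from PHP with its cURL bindings or any HTTP client; only the URL and the X-ZUMO-APPLICATION header are needed for application-key access.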
    Abdulwahab Suleiman

  • Welcome to the Azure App Service API Apps Preview Forum!

    Welcome to the forum! This forum is for support of our customers who are using API Apps. Feel free to post any questions you have related to API Apps.
    The Azure App Service API Apps Team
    Jim Cheshire | Microsoft

    Hey Mikael!
    I actually was struggling through pretty much the same things at the same time that you were.
    The EventTriggered extension is actually in the Microsoft.Azure.AppService.ApiApps.Service namespace, so without a using directive for that it will not be happy.
    I created a library to help with the metadata generation required for Triggers, and did a write-up on exactly what it takes to create both a polling and push trigger (with a few more samples) here: https://github.com/nihaue/TRex#building-a-polling-trigger-api-app
    Hopefully having that, combined with the official docs and Sameer's sample, can get you on the right track.
    Hope that helps!

  • What are the performance implications of moving apps using Cloud Drive to the Azure Files service?

    I run a number of cloud services, with 5 or more nodes each, using Cloud Drive. Cloud Drive is scheduled to be deprecated in 2015, so I am thinking of replacing it with the Azure Files service.
    For each cloud service I am using one storage account to create all the VHDs/cloud drives. When Cloud Drive first appeared, some people told me that to get better performance I should create only one VHD/cloud drive
    under each storage account. For example, if I have five instances under a worker role, then I should create 5 storage accounts and create one VHD/cloud drive under each storage account, to be used by each node. I didn't follow that route because I was satisfied
    with the performance of the apps under cloud services having all VHDs/cloud drives under one storage account.
    My question is: if I replace Cloud Drive with the Azure Files service, will my apps perform well with all shares under one storage account, or should I create one storage account for each share?
    Thanks,
    @nazik_huq

    Thanks Obama for replying.
    Here is the comment from @jaiharidas of MSFT if anyone's interested:
    @Naziq, it is better to have multiple shares under a single storage account, and there are no perf implications. However, please ensure that your ingress/egress and requests/sec stay within
    the limits of a single storage account (see msdn.microsoft.com/.../dn249410.aspx),
    and use multiple storage accounts if you need to scale beyond the limits.
    See the original comment on the Azure Storage Team blog here: http://ow.ly/ChPNf
    @nazik_huq
