What actually gets written to tape by long-term protection?

I've been reviewing our long-term recovery goals and am confused by the Customize Recovery Goal screen. The only documentation I can find gives only brief descriptions and is for DPM 2010; I'm using 2012 R2.
Customizing Recovery Goals for Long-Term Protection
http://technet.microsoft.com/en-us/library/ff399209.aspx
I notice it says 'Back up every:' under the recovery goals, not 'colocate'. Does this mean that it will take a backup and write it directly to tape instead of writing the recovery points on disk to tape?
The Specify Long-Term Goals screen in the Modify Group wizard says 'All long-term tape-based protection uses full backups.' Does this mean the file and application recovery points from the short-term protection aren't written to tape?

Hi,
>
What actually gets written to tape by long-term protection?
>
It really depends on whether you are using short-term protection to disk or not.
1) You are using short-term to disk, long-term to tape (D2D2T)
       In this configuration the last successful disk-based recovery point is written to tape.
       The data is read from the DPM recovery point and copied to tape.
2) You are using short-term to tape and long-term to tape (D2T2T)
       The first long-term recovery goal is a full backup taken from the protected server.
       The other long-term goals are tape copies made from the lesser goals, so you need multiple tape drives.
As noted, all long-term tape backups are full backups; no incremental backups are possible.
Regards, Mike J. [MSFT]

Similar Messages

  • Can anyone tell me what file gets written to when you add or change Go > Connect to Server?

    Can anyone tell me what file gets written to when you add or change "Go > Connect to Server" in the Finder? Also, the location of that file. Thanks!

    ~/Library/Preferences/com.apple.sidebarlists.plist, in favoriteservers.

  • Extending Long-Term Protection (tape)

    Hi all,
    This should hopefully be a simple Q&A.  I'm using local disk for short-term backup, roughly 30 days.  I also have a tape library attached that I use for long-term protection. The protection group is configured to keep the weekly tapes for
    8 weeks, and the monthly tapes are marked to be kept for 3 years.
    If I reconfigure the protection group to keep the monthly tapes for 5 years, will it update the time-stamp for any previously written monthly tape?
    Example:  On May 1, a monthly tape was written w/ a 3 year retention period (as per the protection group).  If I edit the PG to reflect 5 years instead of 3, will that tape on May 1 update with the 5 year retention policy?  Or will it still only
    reflect 3 years, because that was the policy at the time of the write?
    Thanks!

    Hi,
    Yes, by default the monthly tape recovery points should reflect the new recovery retention period.  We introduced a registry setting to prevent that from occurring for customers who do not want the default behavior.
    See DPM 2012 SP1 UR2 (KB2706783), Issue 8:
    Expiry dates for valid datasets that are already written to tape are changed when the retention range is changed during a protection group modification.
    For example, a protection group is configured for long-term tape recovery points with custom long-term recovery goals. Recovery Goal 1 has a smaller retention range than other recovery goals. In this configuration, if a protection group is changed to remove
    Recovery Goal 1 and to keep other recovery goals, the datasets that were created by using Recovery Goal 1 have their retention range changed to the retention range of the other recovery goals.
    To work around this problem, create the IsDatasetExpiryDateChangeInModifyPgAllowed DWORD under the following subkey, and then set its value to 0:
    HKEY_LOCAL_MACHINE\Software\Microsoft\Microsoft Data Protection Manager\Configuration\MediaManager
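    If it helps, here is a minimal C# sketch of creating that value with the .NET registry API (run it elevated on the DPM server; the key path and value name are copied verbatim from the KB text above, everything else is just illustration):

        using Microsoft.Win32;

        class SetDpmRetentionFlag
        {
            static void Main()
            {
                // Key path exactly as given in the KB text (under HKEY_LOCAL_MACHINE).
                const string subKey =
                    @"Software\Microsoft\Microsoft Data Protection Manager\Configuration\MediaManager";

                // CreateSubKey opens the key if it already exists, or creates it otherwise.
                using (RegistryKey key = Registry.LocalMachine.CreateSubKey(subKey))
                {
                    // 0 = do not change expiry dates of datasets already written to tape
                    // when the protection group is modified.
                    key.SetValue("IsDatasetExpiryDateChangeInModifyPgAllowed", 0,
                        RegistryValueKind.DWord);
                }
            }
        }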
    Please remember to click “Mark as Answer” on the post that helps you, and to click “Unmark as Answer” if a marked post does not actually answer your question. This can be beneficial to other community members reading the thread. Regards, Mike J. [MSFT]
    This posting is provided "AS IS" with no warranties, and confers no rights.

  • What are the main causes of slowdown (long term)?

    I've had my MacBook for about 3½ years now, and it's starting to slow down. (Hardware specs are in sig.)
    Yup, I know it's normal. It's happened with every computer I've owned, regardless of platform.
    *_WHAT I CURRENTLY DO ABOUT IT_*
    I routinely run Maintenance (http://www.titanium.free.fr/pgs2/english/maintenance.html), a free app that (yep, you guessed it!) performs maintenance on your Mac. It uses a mix of the maintenance scripts built into OS X (OS X runs them silently, but this lets them be run on demand) and some methods of its own for cleaning a Mac to keep it running smoothly.
    I've found that Maintenance does help, and I definitely recommend it, but the older my MacBook becomes, the less Maintenance seems able to do to mitigate the slowdown that comes with age.
    *_MY CURRENT BELIEF_*
    I don't think it's because the hardware is old by today's standards, because in my experience if you reformat any machine it will operate significantly faster, even after restoring all of the software that was removed by the format.
    I've become increasingly suspicious about the effect of the hard drive's free space on a system's performance. I've always known that it has a noticeable effect (less swap space for virtual memory), but I'm beginning to wonder if the effect of a mostly-full hard drive is much greater than I had previously imagined. Just a feeling.
    *_MY QUESTIONS_*
    *#1: What are the leading causes of slowdown (long term)?*
    *#2: What are the solutions to each of the above problems?*
    • acknowledging that there may not be a solution to every problem, and
    • some solutions may solve a problem “more completely” than others
    Edited for clarity by Tony R.

    • I was under the impression that Disk Utility's “Repair Disk” would fix all non-permissions-related errors (other than physical damage, obviously). If they can be fixed by a backup-reformat-restore, then I take it they are software (filesystem) based problems. If so, why can't software solve the problem without a reformat?
    Disk Utility can and does repair permissions and basic disk errors. However, it does not rebuild directory data.
    • How exactly do directory structure errors happen, and what other consequences do they have (if any)?
    The article "What is directory damage?" on Alsoft's website should help.
    It is worth noting that in the future, when Mac OS X supports the ZFS disk format, directory damage will be a thing of the past.
    However, as you said:
    Yep, I'm at 11.2% free disk space. Thanks for the info!
    I think this is your main issue. Try moving your iTunes library and movies to an external drive, then see how your Mac performs.
    Also, a good way to reclaim some space is to delete all the language localisations you don't use. It is amazing that iWeb alone can install a few hundred megabytes of foreign-language templates that you will never use.
    I used to use Mike Bombich's Delocalizer to do this, but that app is no longer available, so I use Monolingual now.
    Edit: as you have a MacBook, you are lucky in that it is really straightforward to install a bigger internal hard drive.
    You can now get 500 GB 2.5-inch SATA hard drives. You could take out the old drive, hook it up to a USB-to-SATA cable, and install the bigger drive internally, then connect the USB cable to your Mac. Then boot from your Leopard disc and use Disk Utility to clone your old drive onto the new one.
    Message was edited by: Tim Haigh

  • DPM 2012 - stand alone drive - Short/Long term protection to tape

    We are migrating from BE to DPM 2012. I'm trying to configure DPM 2012 for both short term & long term protection - both using tape (stand alone drive). The goal is:
    Daily backups (Mon, Tue, Wed, Thu) - 2 weeks retention
    Weekly backup (Friday) - 4 weeks retention
    Monthly backup (last weekly backup) every month - 15 months retention
    I've configured short-term protection for the daily and weekly backups. The monthly backups are configured under long-term protection. The daily and weekly backups are working as expected; however, when it comes time for the monthly backup, the DataSet copy jobs fail.
    As I understand it, DPM is expecting another tape drive to copy the weekly backup tape to.
    How can I achieve the above goals? Backup to disk is not an option.

    Hi,
    Any time DPM schedules a dataset copy job, you will need more than one tape drive; there is no way around that requirement.
    Regards, Mike J. [MSFT]

  • What actually gets deleted on my iPhone 5 when I sync it to a new computer?

    Hey Guys,
    I bought a new Retina MacBook Pro a few months ago and transferred iTunes onto it. I also had to get a replacement iPhone 5 a while back. I never synced my old phone to this computer except to back it up when getting the replacement phone. When I restored my new phone from the backup, everything transferred except for music, so currently there is no music on my phone, but everything else (text messages, photos, apps/app data) was back to where and how it was previously. When I went to add music, the standard "everything will get erased and filled with stuff from this computer" message was displayed.
    So my question is this: if there is currently no music on my phone, will anything else from my phone get deleted or messed up if I put music on it from this computer? Or will everything else remain untouched except for the addition of my music?
    Thanks in advance for the help!
    -Chase

    No, everything on your iPhone will sync with the new iTunes, and all the songs on your iPhone will appear in your iTunes.

  • Snapshot rollup with Long term backup to Tape

    I need to reconfigure the short-term disk storage, and wanted to know: if I create a long-term backup to tape, does that store or encompass all the "snapshot/short term" backups that I have on disk?
    If so, I will be able to remove all my disk backups, reconfigure them as needed, and re-enable the short-term backups without losing any of my historical backups (for legal reasons).
    Thanks!
    CCIE, CISSP, MCSE: Communication (Lync 2013), MCITP: Lync 2010, Enterprise Admin & Messaging

    Hi,
    No. Long-term-to-tape backups are also "point in time" backups, and if started now they will only contain new recovery points. Making tape backups of previous recovery points is not very easy: you basically need to restore a disk-based recovery point to tape, and unfortunately every recovery point restored to tape requires a separate tape.
    Regards, Mike J. [MSFT]

  • Long Term Storage ideas for Data in an Oracle Database

    We have data that we no longer need access to and want to archive it on tapes for long term storage.
    We want to save the data using Data Pump or RMAN from time to time and store the exports/backups on tapes. The issue we don't want to face in the future is: if Oracle deprecates the methodology we used to save the data in 2012, we no longer have a way to retrieve the data into a database 10 years from now.
    This may sound crazy, but hear me out. Before Data Pump existed, everyone was using vanilla exp to export the data. Now we have Data Pump as exp's upgrade, and we have RMAN too. What if in 2012 we took an export using Data Pump, or a backup using RMAN, and saved it off on a tape? Ten years go by, and in 2022 management wants to look at the 2012 data. Let's say by this time Oracle's methodology has changed and they have a much more efficient/robust export/backup method. One way for us to keep the old backups up to date is to import the data back before the method we used in 2012 is deprecated, and then export/back up the data using the new method to keep the exports/tapes in sync with the then-current technology.
    My question is: what methods of saving large amounts of data do you recommend/use to save data to tape, so that you don't have to worry about changing times?
    An idea I heard is to save the data as insert statements. This would work for most of the data except for blobs.
    Any help would be appreciated.
    Thanks in advance.

    >
    Won't an intelligent compression algorithm take care of a lot of the overhead for that?
    >
    That's certainly a valid point to make, and one that the OP can test. And because the duplicated metadata is identical and in the same physical row location, there may be compression algorithms that can reduce it to just one set.
    There were two points I was trying to make.
    1. Don't just save the data - save the metadata also.
    2. The metadata for each row will be the same so if you use an INSERT format that duplicates it for every row you are potentially wasting a lot of space.
    Your other point was probably more important than these or anything previously mentioned.
    >
    perform restores of the entire database every couple of years or so to make sure that your
    database isn't degrading
    >
    Regardless of the format and media used to back up the data, it is important to verify the integrity of the backup periodically.
    There are a lot of ways to do this. One way is to take a simple MD5 (or other) checksum of the data at some level of granularity, then periodically read the data back and confirm you still get the same checksum.
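    For illustration, a minimal C# sketch of that kind of periodic check (the file names and the one-file-per-checksum granularity are made up for the example; any stable hash would do):

        using System;
        using System.IO;
        using System.Security.Cryptography;

        class BackupChecksum
        {
            // Compute an MD5 checksum over one archived file.
            static string Md5Of(string path)
            {
                using (var md5 = MD5.Create())
                using (var stream = File.OpenRead(path))
                    return BitConverter.ToString(md5.ComputeHash(stream)).Replace("-", "");
            }

            static void Main()
            {
                // Hypothetical archive file plus the checksum recorded when it was written.
                string archive = @"D:\archive\export_2012.dmp";
                string recorded = File.ReadAllText(archive + ".md5").Trim();

                // Periodic verification: read the data back and compare checksums.
                string current = Md5Of(archive);
                Console.WriteLine(current == recorded
                    ? "Checksum OK - backup still reads back unchanged."
                    : "Checksum MISMATCH - the backup may be degrading.");
            }
        }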
    In my opinion, the more generic the format the data is saved in, the better the odds that you can recover the data later, no matter how technology may have changed.

  • DPM 2012 SP1 long term backup fails when protected server reboots

    Hi,
    we're using DPM 2012 SP1 with short-term and long-term protection.
    We use a daily short-term backup to disk and a weekly long-term backup to tape.
    We are noticing that when a long-term tape backup is running, it fails if we reboot one of the protected servers.
    Is anyone else having this issue? I assumed the long-term tape backup would be created from a disk recovery point.

    Hi,
    What is the detailed error code in the properties of the failed initial replica or consistency check job?
    Regards, Mike J. [MSFT]

  • Long Term Storage with VTL (Firestreamer)

    Hi All
    I'm currently in the planning stages of DPM 2012 R2 and am looking at eliminating tapes for long-term/archival purposes by using a VTL solution. I have a trial of Firestreamer and am testing it out. The thought is that we would output to file media and then somehow replicate this offsite to a third party for long-term storage.
    The thinking is that removing physical tapes from daily life would save manual labour, tape transfers and the general chore of physical media, but I'm wary of the virtual side of things becoming a time-consuming monster to manage.
    The "tape" management side of it would need to be very carefully thought out to keep things under control and relatively simple. Things like maximum media size would need to be kept fairly low compared to modern LTO capacities to allow easy WAN transfer, but at the same time I'm wary of ending up with a myriad of files from one large backup, let alone months' or years' worth.
    For those "tapes" that will need to rotate (say, weeklies on a 5-week rotation) we could simply reuse the tape files and then allow them to overwrite at the offsite end to preserve space and keep things simple.
    I'm backing up around 12 TB pre-compression using traditional tape methods currently, and this is set to rise steeply with the onset of new projects, so depending on the compression rate the transfer of full backups over the wire could be large.
    There is the added issue that DPM either compresses or encrypts, and Firestreamer only compresses. Since I have a requirement for encrypted backups, encryption will have to be performed at the DPM level, negating any form of compression from there on.
    Does anyone have experience of this, or of any other VTLs or non-tape practices? If so, I'd be very interested in how you did it.
    Thanks

    Hi,
    I would say it depends. DPM encrypts the data in chunks before writing it to tape. If most of the chunks are the same for each version of the backup, then the disk dedupe would be efficient. If data was inserted into or removed from files in such a way that the chunks are no longer aligned, then the encrypted data would not match previous chunks and would not be deduped, since it didn't match previous backup data. But overall, I think you have a pretty good chance of getting some fairly decent dedupe savings over time.
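    To see why alignment matters, here is a small illustrative C# sketch (not DPM's actual algorithm; the chunk size and data are made up). It hashes fixed-size chunks of two versions of a buffer; inserting a single byte at the front shifts every chunk boundary, so no chunk hashes match and nothing would dedupe:

        using System;
        using System.Linq;
        using System.Security.Cryptography;
        using System.Text;

        class ChunkAlignmentDemo
        {
            // Hash fixed-size chunks of a buffer (4-byte chunks here, purely illustrative).
            static string[] ChunkHashes(byte[] data, int chunkSize)
            {
                using (var sha = SHA256.Create())
                    return Enumerable.Range(0, data.Length / chunkSize)
                        .Select(i => Convert.ToBase64String(
                            sha.ComputeHash(data, i * chunkSize, chunkSize)))
                        .ToArray();
            }

            static void Main()
            {
                byte[] v1 = Encoding.ASCII.GetBytes("AAAABBBBCCCCDDDD");
                // Version 2: one byte inserted at the front shifts all chunk boundaries.
                byte[] v2 = Encoding.ASCII.GetBytes("xAAAABBBBCCCCDDD");

                string[] h1 = ChunkHashes(v1, 4);
                string[] h2 = ChunkHashes(v2, 4);
                int matches = h1.Intersect(h2).Count();
                Console.WriteLine($"Matching chunks: {matches} of {h1.Length}"); // 0 of 4
            }
        }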
    You may want to inquire about that with the OEM to see if they have any test results using DPM backup with software encryption enabled.
    Regards, Mike J. [MSFT]

  • DPM 2012 R2 Long Term not backing up on correct day

    Howdy,
    I have created a protection group with both short- and long-term protection. The short-term backup is to disk, Monday through Thursday, and the long-term backup is on Fridays, to tape. (See images below.) Seems simple enough; however, for some reason the long-term Friday backup shows up on Thursday, at the same time the short-term backups are run. I didn't know that was possible, but more importantly, why is this happening?
    I've done as much research as I can do on the web, I'm hoping someone here can weigh in. Thanks in advance.

    Hi Janaka,
    This is by design: all long-term tape backups are taken from the last successful disk-based backup.  In your case, your last disk-based backup is Thursday's, and no data has changed since then, so DPM will mount Thursday's disk-based recovery point and copy it to tape; the time of the recovery point will be the same as Thursday's, since the version of the data is exactly the same.
    Regards, Mike J. [MSFT]

  • Ok so I have OS X Lion now and this is what I get:  "You can't open the application Lexmark All In One Center because PowerPC applications are no longer supported."

    OK, so I have OS X Lion now and this is what I get:
    "You can't open the application Lexmark All In One Center because PowerPC applications are no longer supported."

    Look for the latest drivers on the maker's website.
    Check for your model in the list there; this is a Lexmark issue, not a Lion issue.
    http://support.lexmark.com/index?page=content&id=OS22&locale=EN&userlocale=EN_US

  • This is what I get when I go to download, and I am stuck: "You are running an operating system that Illustrator no longer supports. Refer to the system requirements for a full list of supported platforms."

    This is what I get when I go to download the program onto my computer:
    "You are running an operating system that Illustrator no longer supports. Refer to the system requirements for a full list of supported platforms."
    I am stuck
    thank you
    Cindi

    I guess you are trying to download from the Adobe website, and that you are using Mac OS X 10.6.8 or Windows Vista/XP.
    Install Adobe Application Manager, update it, and sign in.
    It will auto-detect the version of Illustrator that is compatible with your computer and list it.
    Then you can go ahead and install it.
    You may download Adobe Application Manager using the links below:
    Windows :
    http://download.adobe.com/pub/adobe/creativesuite/cc/win/ApplicationManager9.0_all.exe
    Mac :
    http://download.adobe.com/pub/adobe/creativesuite/cc/mac/ApplicationManager9.0_all.dmg

  • What is the best media for long-term video storage?

    I will be loading several MiniDV tapes into my Mac and want to store the video long-term (in a safe-deposit box). These are going to be home movies with some digital photos or scans as well.
    I am curious what folks believe is the best medium for this. I am considering a Passport-style HD like those from LaCie, Western Digital and others. Has anyone had good luck with these? Or am I on the wrong track?
    Thank you in advance for your opinions,
    jf

    >
    I considered getting one of the DVD recorder products from Sony (they say you can plug right into the camera, hit a button and get a DVD copy of the footage in real time)
    >
    You understand that a DVD recorder will convert and seriously compress the material in the transcode from DV to MPEG-2?
    DVDs like this are not a storage system if you are looking to keep original material (a) as close to original quality as possible and (b) in an editable format.
    Forty 1-hour DV tapes come to just over 500 GB when captured (DV runs roughly 13 GB per hour, so 40 × 13 GB ≈ 520 GB). Get two 1 TB drives and make two copies of the material. Use one for editing on-line and put the other on a shelf (not in your house). Just be sure to get it back and spin it up every month or so.
    The tapes need to be seriously secured. If the safety deposit box is too small, look into a fireproof home vault; 40 tapes do not take up that much space. Just store them in a cool, dry location.

  • Deleted item not actually getting deleted, or deletion delayed, thus getting refetched (EWS Managed API)

    I am polling Exchange Server to fetch all the mails left in the Inbox, then fetching them, processing them, and finally moving them to the Deleted Items folder. My procedure takes the following form of EWS Managed API calls:
    1 itemResults = service.FindItems(WellKnownFolderName.Inbox, itemView);
    2 //some code...
    3 service.LoadPropertiesForItems(itemResults.Items, itItemPropSet);
    4 //some code...
    5 foreach(Item item in itemResults)
    6 {
    7 //process item
    8 item.Delete(DeleteMode.MoveToDeletedItems);
    9 }
    To test this code I flooded the inbox with 5000 mails: 5 threads sending 1000 mails each at 4 ms intervals.
    After the test run, I realized some mails were getting processed multiple times.
    When I read the SOAP traces logged using TraceListener, I realized that when there are more than 100 mails in the inbox, the SOAP packets are dispatched in bursts of 100.
    Logically, the below set of SOAP packets (let us call it "Set 1") should occur only once:
    <m:FindItem>...</m:FindItem> //generated by line 1
    <m:FindItemResponse>...</m:FindItemResponse> //generated by line 1
    <m:GetItem>...</m:GetItem> //generated by line 3
    <m:GetItemResponse>...</m:GetItemResponse> //generated by line 3
    followed by x Delete SOAP packets for the x mails in the Inbox:
    <m:DeleteItem>...</m:DeleteItem> //generated by line 8
    <m:DeleteItemResponse>...</m:DeleteItemResponse> //generated by line 8
    However, if there are say 325 mails in the inbox, there are first 4 occurrences of Set 1, followed by the Delete SOAP packets, as follows:
    //1st occurrence
    <m:FindItem>100 items</m:FindItem>
    <m:FindItemResponse>100 items</m:FindItemResponse>
    <m:GetItem>100 items</m:GetItem>
    <m:GetItemResponse>35 items</m:GetItemResponse>
    //2nd occurrence
    <m:FindItem>100 items</m:FindItem>
    <m:FindItemResponse>100 items</m:FindItemResponse>
    <m:GetItem>100 items</m:GetItem>
    <m:GetItemResponse>35 items</m:GetItemResponse>
    //3rd occurrence
    <m:FindItem>100 items</m:FindItem>
    <m:FindItemResponse>100 items</m:FindItemResponse>
    <m:GetItem>100 items</m:GetItem>
    <m:GetItemResponse>100 items</m:GetItemResponse>
    //4th occurrence
    <m:FindItem>35 items</m:FindItem>
    <m:FindItemResponse>35 items</m:FindItemResponse>
    <m:GetItem>35 items</m:GetItem>
    <m:GetItemResponse>35 items</m:GetItemResponse>
    //****followed by 335 occurrences of the Delete SOAP packets****
    <m:DeleteItem>...</m:DeleteItem>
    <m:DeleteItemResponse>...</m:DeleteItemResponse>
    //334 more of above
    So there are 100 + 100 + 100 + 35 = 335 items actually getting handled, which is 10 more than are actually present in the inbox.
    Also notice that the Delete SOAP packets are queued till the end; they are not dispatched in bursts.
    The first thing I realize is that dispatching SOAP packets in bursts of 100 is EWS Managed API behavior. I guess there is a traffic reason behind it, and there must be some configuration setting to tweak it.
    Q1. How can we change this count of 100?
    I inferred that there is some overlap in the fetching of mails.
    So I copy-pasted the SOAP traces for the first two
    <m:GetItem>100 items</m:GetItem>
    batches into two different files and compared them in a text comparer. I realized that some mails at the end of the first 100 were reappearing at the beginning of the second 100.
    This is the pattern in many test runs.
    Because of this overlap, these mails are getting processed twice. Also, the attempt to delete them a second time throws an exception: The specified object was not found in the store.
    So I guess there is a delay between the delete-mail API call and the mails actually getting deleted, during which they are getting refetched.
    Q2. How can I handle this to avoid redundant processing of mails and unnecessary exceptions?
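    For what it's worth, the 100-item batches correspond to the page size on the ItemView, so that page size is the knob behind Q1. Below is a minimal C# sketch of one possible way to handle Q2 as well (an illustrative approach, not a confirmed fix): page explicitly, always from offset 0, and delete each page before fetching the next, so already-processed mails are never re-fetched; a try/catch absorbs the "not found" error if a delete is still propagating.

        using System;
        using Microsoft.Exchange.WebServices.Data;

        class InboxDrainer
        {
            static void DrainInbox(ExchangeService service, int pageSize = 100)
            {
                FindItemsResults<Item> page;
                do
                {
                    // Offset stays 0 because every processed page is removed from the Inbox.
                    ItemView view = new ItemView(pageSize, 0);
                    page = service.FindItems(WellKnownFolderName.Inbox, view);

                    foreach (Item item in page)
                    {
                        // ... process item ...
                        try
                        {
                            item.Delete(DeleteMode.MoveToDeletedItems);
                        }
                        catch (ServiceResponseException)
                        {
                            // The item was already deleted (e.g. a delayed delete finally
                            // landed); skip it instead of failing the whole run.
                        }
                    }
                } while (page.MoreAvailable);
            }
        }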

