VHD Size Backup?

I have a question about our DPM server. If there is a 600 GB dynamic disk or a 600 GB fixed disk, and it has 300 GB of data on it,
how does DPM back it up? Does it back up the full size of the VHD, or only the 300 GB of data? On our DPM I can see something strange: it backs up the full VHD size, but I wanted to re-check.
Thanks,

Hi,
Let's take these separately, because the behavior will be different.
1) Dynamically expanding VHD
a. When you created the .vhd it was logically 600 GB, but physically it was only about 1 MB in size, as no data had been written to it yet.
b. You use DPM to protect the VM containing the .vhd and DPM brings over the 1MB .vhd file.
c. In one day 300GB is written to the .VHD, the logical size is still 600GB, but the physical size has now grown to 300GB.
d. DPM backup will now be protecting a 300GB .vhd file and will bring over those changes and apply them to the .vhd on the DPM replica.
e. As time goes on you eventually get close to filling the .VHD with more data and the .VHD grows to 599GB.
f. DPM again will be protecting the whole .VHD and will bring over those new blocks (299 GB) and apply them to the .vhd on the replica; the new size is now 599 GB.
g. If you now delete 300 GB of data on the .VHD, there will only be 299 GB worth of actual files, but the .VHD never shrinks on its own. DPM will delete the same 300 GB worth of files on the replica, but the .vhd is still 599 GB.
h. If you want to shrink the .vhd, you need to run a disk defrag inside the VM, then a compact in Hyper-V. Once the .vhd is compacted, its new physical size will be smaller - let's say 400 GB.
i. DPM will see that the .vhd is now smaller and will apply the same change on the DPM replica, so now the .vhd there is 400 GB (see the sketch below).
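To make the arithmetic above easier to follow, here is a minimal sketch (Python, purely illustrative; the DynamicVhd class and the sync() helper are made up for this example and do not call any real DPM or Hyper-V API) that walks through steps a-i and prints what the replica would hold at each point:

# Illustrative model of a dynamically expanding VHD, mirroring steps a-i above.
# Hypothetical helper code only - not DPM's actual replication engine.

class DynamicVhd:
    def __init__(self, logical_gb):
        self.logical_gb = logical_gb    # maximum (logical) size, set at creation
        self.physical_gb = 0            # starts near-empty (~1 MB in reality)
        self.used_gb = 0                # file data actually present in the guest

    def write(self, gb):
        self.used_gb = min(self.used_gb + gb, self.logical_gb)
        # the .vhd grows as new blocks are written, and never shrinks on its own
        self.physical_gb = max(self.physical_gb, self.used_gb)

    def delete(self, gb):
        self.used_gb = max(self.used_gb - gb, 0)
        # physical size unchanged: freed blocks stay allocated inside the .vhd

    def defrag_and_compact(self, new_physical_gb):
        # a guest defrag plus a Hyper-V compact can shrink the physical size,
        # though not always all the way down to the used size (step h says 400 GB)
        self.physical_gb = new_physical_gb

vhd = DynamicVhd(logical_gb=600)
replica_gb = 0

def sync():
    """The DPM replica .vhd ends up the same physical size as the source .vhd."""
    global replica_gb
    replica_gb = vhd.physical_gb
    print(f"used={vhd.used_gb} GB  physical={vhd.physical_gb} GB  replica={replica_gb} GB")

vhd.write(300); sync()                  # steps c-d: replica grows to ~300 GB
vhd.write(299); sync()                  # steps e-f: replica grows to ~599 GB
vhd.delete(300); sync()                 # step g: files gone, .vhd and replica stay 599 GB
vhd.defrag_and_compact(400); sync()     # steps h-i: compact shrinks both to ~400 GB

The point of the sketch is simply that the replica tracks the physical size of the .vhd, which only moves downward after an explicit defrag and compact.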
2) Fixed-size VHD
a. When you created the .vhd it was logically 600GB, and physically 600GB in size.
b. You use DPM to protect the VM containing the .vhd and DPM brings over the entire 600GB .vhd file.
c. In one day 300 GB is written to the .VHD; the logical and physical sizes are still 600 GB.
d. DPM backup will bring over those changes and apply them to the DPM replica which is also 600GB in fixed size.
e. As time goes on you eventually get close to filling the .VHD with more data.
f. DPM again will be protecting the whole .VHD and will bring over those new blocks (299gb) and apply them to the replica and the .vhd is still 600GB fixed size.
g. If you now delete 300 GB of data on the .VHD, there will only be 299 GB worth of actual files, but the fixed-size .VHD never shrinks. DPM will delete the same 300 GB worth of files on the .VHD in the replica, but the .VHD remains at 600 GB fixed size.
DPM tape backups
DPM will always back up the entire physical size of the .VHD files. In the case of fixed-size .VHDs, that will remain constant (600 GB in this case); for dynamically expanding .VHDs, the size may vary as data is written or as a defrag and compact
is performed.
Please remember to click “Mark as Answer” on the post that helps you, and to click “Unmark as Answer” if a marked post does not actually answer your question. This can be beneficial to other community members reading the thread. Regards, Mike J. [MSFT]
This posting is provided "AS IS" with no warranties, and confers no rights.

Similar Messages

  • Backup VHD size is much more than expected and other backup related questions.

    Hello, I have a few Windows 2008 servers and I have scheduled the weekly backup (Windows Backup), which runs on Saturday.
    1. I recently noticed that the actual size of the data drive is only 30 GB, but the weekly backup creates a VHD of 65 GB. This is not happening for all servers, but for most of them. Why is that, and how can I get the correct VHD size? A 60 GB VHD doesn't make sense for 30 GB of data.
    2. If at any moment I have to restore the entire Active Directory on Windows 2008 R2, is the process the same as in Windows 2003 (going into DSRM mode, restoring the backup, authoritative restore), or is there any difference?
    3. I also noticed that if I have a backup VHD of one server (let's say A) and I copy that backup to another server (let's say B), then Windows 2008 only gives me the option to attach the VHD to server B. Is there any way to restore the data from the VHD through the backup utility? Currently I am copying and pasting from the VHD to server B's data drive, but that is not the correct way of doing it. Is it a limitation of Windows 2008?
    Senior System Engineer.

    Hi,
    If a large number of files have been deleted on the data drive, the backup image can contain large holes. You can compact the VHD image to the correct size by using the diskpart command.
    For more detailed information, please refer to the thread below:
    My Exchange backup is bigger than the used space on the Exchange drive.
    https://social.technet.microsoft.com/Forums/windowsserver/en-US/3ccdcb6c-e26a-4577-ad4b-f31f9cef5dd7/my-exchange-backup-is-bigger-than-the-used-space-on-the-exchange-drive?forum=windowsbackup
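    As a rough sketch of the diskpart compaction suggested above (assumptions: Windows, an elevated prompt, the VHD is not attached or in use, and the path below is a made-up placeholder - substitute your actual backup VHD path), the usual command sequence can be driven from a small script:

# Sketch: compact a detached, dynamically expanding VHD using diskpart.
# The VHD path is a hypothetical example; run from an elevated prompt.
import subprocess
import textwrap

VHD_PATH = r"D:\Backups\backup.vhd"  # placeholder - use your own VHD path

diskpart_script = textwrap.dedent(f"""\
    select vdisk file="{VHD_PATH}"
    attach vdisk readonly
    compact vdisk
    detach vdisk
""")

# diskpart accepts its commands on standard input when no /s script is given
result = subprocess.run(
    ["diskpart"],
    input=diskpart_script,
    text=True,
    capture_output=True,
    check=False,
)
print(result.stdout)

    The same four diskpart commands can of course be typed interactively; compacting only reclaims space from a dynamically expanding VHD, not from a fixed-size one.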
    For the second question, the answer is yes. If you want to restore the entire Active Directory on Windows 2008 R2, you need to start the domain controller in Directory Services Restore Mode to perform a nonauthoritative restore from backup.
    Performing Nonauthoritative Restore of Active Directory Domain Services
    http://technet.microsoft.com/en-us/library/cc816627(v=ws.10).aspx
    If you want to restore a backup to server B which is created on server A, you need to copy the WindowsImageBackup folder to server B.
    For more detailed information, please see:
    Restore After Copying the WindowsImageBackup Folder
    http://standalonelabs.wordpress.com/2011/05/30/restore-after-copying-the-windowsimagebackup-folder/
    Best Regards,
    Mandy
    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Subscriber Support, contact [email protected]

  • VHD Size is Maximum Size Allowed Without Using All Space in OS

    Hello - I have a Hyper-V VM whose maximum size allowed is 700GB. Currently, the VHD file size is 700GB yet the OS is only using ~400GB (300GB free). Shouldn't the VHD size be about 400GB instead of 700? Every other VM I have checked matches the size being
    used in the OS.
    As I was typing this I think I answered my own question. This VM has a fixed size and all of the other VMs we have are dynamic. I assume this is why? It is a SQL server so I guess it is fixed for better performance. I didn't set this up
    Can I safely shrink the VHD down a little bit? When trying to get replicas of this VM, it keeps crashing the VM due to space limitations. It is on its own Cluster Disk but it only has a few GB free which is why I believe it is failing. How much free
    disk space is needed where the VHD is located in order for replication to work?
    This is our SQL server so I am a little cautious in what we do with it.
    Thanks for your help,
    Mike

    Hi Mike ,
    Based on my knowledge of VHDs, when you delete a file it is not thoroughly deleted on disk; it is just given a deleted tag (the "old" space is not reused until there is no new space left for writing).
    I don't think you can make it smaller via shrinking.
    Maybe you can try to compact the VHD file ("Edit Disk..." in Hyper-V Manager); before doing this, please back up your VM first.
    BTW, it is recommended to use a fixed disk for SQL Server.
    Best Regards
    Elton Ji

  • VHD size increasing 50 GB a day while guest drive remains the same size

    I have a 2008r2 Server VM dedicated to running Microsoft SQL server 2008r2 with 4 VHD's (each corresponding to a drive on the server).  VM's
    are hosted on a SAN using a cluster shared volume. I have 2 hosts in cluster.
    VHD and corresponding guest OS drive
    1 - c: system disk which is a dynamic disk
    2 - e: Applications on fixed size disk
    3 - f: logs and backup on fixed size disk
    4 - g: databases on fixed size disk
    This was set up about 2/3 years ago and was running well. Recently the server crashed, and when I investigated I saw that the c: drive
    VHD had expanded in size so that it filled all the space on the CSV. I initially increased the size of the CSV to get the server running again.
    I noticed that the VHD containing the c: drive is growing about 50 GB a day while the guest c: drive on the VHD is only getting marginally bigger.
    I have compacted the VHD and this brought its size down to the same size as the guest c: drive. Unfortunately, since the compact it continues to grow at a rate of approximately 50 GB a day.
    I have done some research but cannot see an identical issue.  This behaviour must have only started to occur recently otherwise this issue
    would have become apparent sooner.
    Has anyone any idea why this is happening?
    I can schedule a weekly compact as a workaround.
    I have seen information that a dynamic disk will continue to grow, but this should only happen as the guest drive grows.
    If I convert the dynamic disk to a fixed disk, will that resolve the issue?
    Any advice / information would be appreciated.
    Regards
    Niall

    Hi Niall,
    Generally, a dynamic VHD won't grow any further after reaching its maximum size.
    I assume the 50 GB per day increase is happening on the CSV.
    Please try checking the shadow copy settings:
    Open "Computer Management" --> right-click "Shared Folders" --> All Tasks --> Configure Shadow Copies --> Settings
    Then you can change the maximum size for shadow copies on that CSV disk (you may need to set this on the owner node).
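    If you would rather check and cap this from the command line, a rough sketch using vssadmin is below (assumptions: Windows Server, an elevated prompt, and C: as a placeholder volume - on a CSV you would run this on the owner node and point it at the relevant volume):

# Sketch: inspect and limit shadow copy storage with vssadmin.
# Volume letters are placeholders; run elevated on the node that owns the volume.
import subprocess

# Show current shadow copy storage associations and how much space they use
subprocess.run(["vssadmin", "list", "shadowstorage"], check=False)

# Cap the shadow copy storage for C: (kept on C:) at 50 GB
subprocess.run(
    ["vssadmin", "resize", "shadowstorage",
     "/for=C:", "/on=C:", "/maxsize=50GB"],
    check=False,
)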
    Best Regards,
    Elton Ji

  • What size backup drive for time machine on 500gb MBP?

    I'm sure this is a really novice question, but I couldn't find the answer on the forum.
    I've just got a MBP with a 500 GB HDD. I won't be buying a Time Capsule, but I'd like to set up Time Machine to back up to an external USB HDD. What is the optimum size for this?
    I see that Time Machine keeps deleted files; I'm not sure if it uses compression, and I'm not sure what happens when I run out of space... does it reduce the number of duplicates that it keeps, or will I run into a problem?
    Really, what I'm trying to find out is: I have an old 500 GB HDD that isn't being used for much. Is this sufficient to back up a 500 GB HDD (assuming that, at some point, I'm going to be hovering around maximum capacity on the MBP)?
    Thank you!

    Hi and welcome to Discussions,
    our local Time Machine 'Guru' Pondini has made a very thorough FAQ on it to be found here http://discussions.apple.com/thread.jspa?threadID=1964018
    Quoted from it (section 1):
    "A general "rule of thumb" is, TM needs 2 to 3 times as much space as the data it's backing-up (not necessarily the entire size of your internal HD).
    But this varies greatly, depending on how you use your Mac. If you frequently add/update lots of large files, then even 3 times may not be enough. If you're a light user, 1.5 times might do. Unfortunately, it's rather hard to predict, so if in doubt, get a bigger one!
    Also, there are some OSX features and 3rd-party applications that take up large amounts of backup space, for various reasons. See question #9 for details.
    This is a trade-off between space and how long TM can keep its backups, since TM will, by design, eventually use all the space available. But it won't just quit backing-up when it runs out: it starts deleting the oldest backups so it can keep making new ones. Thus, the more space it has, the longer it can keep your backups."
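    As a quick worked example of that rule of thumb (illustrative arithmetic only, using the 500 GB figure from the question):

# Rough Time Machine drive-size estimate based on the quoted 2-3x guideline.
data_gb = 500                       # amount of data actually being backed up
low, high = 2 * data_gb, 3 * data_gb
print(f"Suggested backup drive size: roughly {low}-{high} GB")  # 1000-1500 GB

    So if the MBP really will hover near 500 GB of data, a spare 500 GB drive is on the small side; Time Machine will still work, but it will only be able to keep a short history before pruning old backups.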
    Regards
    Stefan

  • Recommended External Backup Drive Size

    Hi,
    My friend bought a new iMac with a 500 GB drive. What size backup drive should he buy?
    Thanks.

    That depends on what backup app he's using.
    Time Machine usually needs 2-3 times the space of the data it's backing-up (see #1 in the Frequently Asked Questions *User Tip,* also at the top of the +Time Machine+ forum).
    If he's using CarbonCopyCloner or SuperDuper, a 500 GB drive would be fine, unless he's also having it keep "archive" copies. If he is, then more space will be needed.
    And yes, having two separate backups is highly recommended, as is having something off-site, either with a portable external HD or an app or service that will back up via the internet.

  • Very slow Backup (Many little files in many subdirectories)

    - I have a Netware 6.5 SP8 server with the post SP8 NSS patch installed (NSS version 3.27.01 April 7, 2009)
    - iSCSI Initiator Version 1.06.05 October 30, 2008 connecting a 410 GB NSS Volume (Lefthand Networks iSCSI SAN)
    - the volume contains user data, and contains, from the Root, a USERDATA directory, which then contains 32,749 individual user directories.
    - The majority (90%) of the user directories are empty
    - each user directory has its own trustee assignment and space restrictions
    - compression is enabled
    - 376,269 files using 45,556,460,661 bytes in 221,439 directories, so the average would be around 121 KB per file, and as you can see it's very directory-intensive
    TSATEST indicates approximately 850 MB/min, but when I attempt to back up the data using my backup software (NetVault, which uses NDMP), the performance is hideous. A full backup runs around 135 KB/sec or 8 MB/min (try getting that done in any size backup window). I have also tried other backup solutions, with the same basic result.
    I assume the issue is with indexing, but I'm not sure what to check at this point. I've been trying every suggestion I could find. I've gotten the throughput up to about 1.5 MB/s, but obviously need better. Just wanted to know if anyone here has any suggestions that might help me make this function more efficiently, or any TIDs that have proven helpful.
    Thank You!

    Matt Karwowski wrote:
    > [original post quoted, snipped]
    Known issue....
    http://www.novell.com/support/viewCo...1011&sliceId=1
    I have seen this in Java development shops with millions of 1 KB files and folders; it's the nature of Java.
    - Use tar or zip on the local server, not across the wire, to compress those types of data folders.
    - Exclude those folders from the backup.
    - Then back up only the zip or tar files.
    Or maybe look into rsync, as I think it copies by blocks rather than files.
    You may also need to review your environment if it is a development shop like Java. Maybe switch to GPFS or Reiser for many small files if you can; EXT could handle it for your small amount of data, though.

  • Sharepoint 2010 farm full backup

    I do a full backup of a SharePoint 2010 farm. The log writes:
    Estimated disk space required: 94,251 MB.
    Backup
    Backup Method: Full
    Top Component:
    Configuration only: False
    Progress updated: 5
    Backup threads created: 10
    At the start of the backup the approximate size is 94,251 MB (about 92.05 GB), but at the end all the backup files together weigh 19 GB. The sizes at the start and at the end do not match. Why?

    When you take a backup, the estimate is roughly double the size of the backup that is actually produced, because the compression happens afterwards.
    Also check below:
    http://social.technet.microsoft.com/Forums/sharepoint/en-US/f93e5daf-575d-4565-8d25-9d74a0b7906a/backup-estimated-disk-space-required

  • Backup into file system

    Hi
    Setting backup-storage with the following configuration is not generating backup files under the specified location. We are pumping a huge volume of data, and the data (a few GB) is not getting backed up into the file system. Can you let me know what I am missing here?
    Thanks
    sunder
    <distributed-scheme>
         <scheme-name>distributed-Customer</scheme-name>
         <service-name>DistributedCache</service-name>
         <!-- <thread-count>5</thread-count> -->
         <backup-count>1</backup-count>
         <backup-storage>
         <type>file-mapped</type>
         <directory>/data/xx/backupstorage</directory>
         <initial-size>1KB</initial-size>
         <maximum-size>1KB</maximum-size>
         </backup-storage>
         <backing-map-scheme>
              <read-write-backing-map-scheme>
                   <scheme-name>DBCacheLoaderScheme</scheme-name>
                   <internal-cache-scheme>
                   <local-scheme>
                        <scheme-ref>blaze-binary-backing-map</scheme-ref>
                   </local-scheme>
                   </internal-cache-scheme>
                   <cachestore-scheme>
                        <class-scheme>
                             <class-name>com.xxloader.DataBeanInitialLoadImpl
                             </class-name>
                             <init-params>
                                  <init-param>
                                       <param-type>java.lang.String</param-type>
                                       <param-value>{cache-name}</param-value>
                                  </init-param>
                                  <init-param>
                                       <param-type>java.lang.String</param-type>
                                       <param-value>com.xx.CustomerProduct
                                       </param-value>
                                  </init-param>
                                  <init-param>
                                       <param-type>java.lang.String</param-type>
                                       <param-value>CUSTOMER</param-value>
                                  </init-param>
                             </init-params>
                        </class-scheme>
                   </cachestore-scheme>
                   <read-only>true</read-only>
              </read-write-backing-map-scheme>
         </backing-map-scheme>
         <autostart>true</autostart>
    </distributed-scheme>
    <local-scheme>
    <scheme-name>blaze-binary-backing-map</scheme-name>
    <high-units>{back-size-limit 1}</high-units>
    <unit-calculator>BINARY</unit-calculator>
    <expiry-delay>{back-expiry 0}</expiry-delay>
    <cachestore-scheme></cachestore-scheme>
    </local-scheme>

    Hi
    We did try out with the following configuration
    <near-scheme>
         <scheme-name>blaze-near-HeaderData</scheme-name>
    <front-scheme>
    <local-scheme>
    <eviction-policy>HYBRID</eviction-policy>
    <high-units>{front-size-limit 0}</high-units>
    <unit-calculator>FIXED</unit-calculator>
    <expiry-delay>{back-expiry 1h}</expiry-delay>
    <flush-delay>1m</flush-delay>
    </local-scheme>
    </front-scheme>
    <back-scheme>
    <distributed-scheme>
    <scheme-ref>blaze-distributed-HeaderData</scheme-ref>
    </distributed-scheme>
    </back-scheme>
    <invalidation-strategy>present</invalidation-strategy>
    <autostart>true</autostart>
    </near-scheme>
    <distributed-scheme>
    <scheme-name>blaze-distributed-HeaderData</scheme-name>
    <service-name>DistributedCache</service-name>
    <partition-count>200</partition-count>
    <backing-map-scheme>
    <partitioned>true</partitioned>
    <read-write-backing-map-scheme>
    <internal-cache-scheme>
    <external-scheme>
    <high-units>20</high-units>
    <unit-calculator>BINARY</unit-calculator>
    <unit-factor>1073741824</unit-factor>
    <nio-memory-manager>
    <initial-size>1MB</initial-size>
    <maximum-size>50MB</maximum-size>
    </nio-memory-manager>
    </external-scheme>
    </internal-cache-scheme>
    <cachestore-scheme>
    <class-scheme>
    <class-name>
    com.xx.loader.DataBeanInitialLoadImpl
    </class-name>
    <init-params>
    <init-param>
    <param-type>java.lang.String</param-type>
    <param-value>{cache-name}</param-value>
    </init-param>
    <init-param>
    <param-type>java.lang.String</param-type>
    <param-value>com.xx.bean.HeaderData</param-value>
    </init-param>
    <init-param>
    <param-type>java.lang.String</param-type>
    <param-value>SDR.TABLE_NAME_XYZ</param-value>
    </init-param>
    </init-params>
    </class-scheme>
    </cachestore-scheme>
    </read-write-backing-map-scheme>
    </backing-map-scheme>
    <backup-count>1</backup-count>
    <backup-storage>
    <type>off-heap</type>
    <initial-size>1MB</initial-size>
    <maximum-size>50MB</maximum-size>
    </backup-storage>
    <autostart>true</autostart>
    </distributed-scheme>
    With configuration the amount of residual main memory consumption is like 15 G.
    When we changed this configuration so that the backup-storage for blaze-distributed-HeaderData is file-mapped (the rest of the scheme is unchanged from the one above):
    Note backup storage is file-mapped
    <backup-storage>
    <type>file-mapped</type>
    <initial-size>1MB</initial-size>
    <maximum-size>100MB</maximum-size>
    <directory>/data/xxcache/blazeload/backupstorage</directory>
    <file-name>{cache-name}.store</file-name>
    </backup-storage>
    We still see that the process's residual main memory consumption is 15 GB, and we also see that the /data/xxcache/blazeload/backupstorage folder is empty.
    I wanted to check where backup storage maintains its information; we would like to offload this to a flat file.
    Appreciate any pointers in this regard.
    Thanks
    sunder

  • Why does Time machine not delete old backups

    I don't understand why Time Machine is not deleting old backups. As it worked the first time with a backup drive the same size as the drive it's backing up, I assumed it would just delete the old backups as needed. Do I need a bigger backup drive?

    Roglee wrote:
    I just want it to delete the old backup and replace it so I just have one copy. I guess I could just reformat the drive each time, but that seems a waste of my time when TM should do this for me!
    If this is all you want to do, turn off Time Machine, download SuperDuper! and use it for free. It will clone whatever drive you want, and the free trial will erase your backup drive each time. I can't remember if it will let you choose what to back up or if it wants the entire drive. Also, I'm not sure if the Scheduler works with the free trial. Below is some wording from SuperDuper!; it looks like if you want scheduling you will have to purchase a license.
    Download Now! 
    You can download SuperDuper! v2.7.1 right now and back up and clone your drives for free— forever! 
    Buy Now!
    Buy now to unlock scheduling, Smart Update (which saves a lot of time), Sandboxes, scripting and more!
    Time Machine will keep making back ups and when it determines it needs space, it will delete older back ups. But it doesn't delete older ones each time it backs up.

  • VHD disk compact

    Hi,
    I have installed Hyper-V Server 2008 R2 and have created two VMs. The disk type of both VMs is .VHD.
    My question is: the VHD is unable to compact white space automatically, and it cannot reduce the VHD size after the data is deleted. Is there any solution for this?
    Can a .VHDX type disk help me reduce the white space automatically?
    Is there any way or method to compact the VHD without taking the VM down?
    Please suggest a solution to overcome this monthly disk compact activity on my VMs to reclaim host disk space.
    Thanks

    Hi Sir,
    Please try to defrag inside the VM first, then shut down the VM and use "Edit Disk..." in Hyper-V Manager to compact the VHD file.
    For the benefits of the VHDX format, please refer to the link below:
    http://technet.microsoft.com/en-us/library/hh831446.aspx
    If you have any further questions, please feel free to let us know.
    Best Regards
    Elton Ji

  • Reduce Hyper-V Machine Size

    Hello All,
    I have created a Hyper-V machine in a Windows Server 2012 environment and installed Exchange Server on that Hyper-V machine. The Hyper-V machine's HDD is configured as a dynamic disk. Now its size has increased, and I have deleted files from the Hyper-V machine. Now I need to
    reduce the Hyper-V machine's size. What is the procedure to reduce the Hyper-V VHD file size?
    Please suggest. Thanks

    Hi Parvez,
    >>now the size is increase and I delete the file from hyper-v machine. now I need to reduce the hyper-v machine size.
    First, I would like to say that the VHD's maximum size can be expanded but cannot be reduced.
    I assume that you want to reduce the dynamic VHD's size on disk.
    As a workaround, please try the following steps:
    1. Defragment the disk inside the VM OS.
    2. Shut down the VM, then compact the VHD file of that VM.
    3. Start up the VM.
    4. In that VM open Disk Management, right-click the volume and select "Shrink Volume"
    (after the shrink, the unallocated space equals the space you want to reduce).
    Best Regards,
    Elton Ji

  • Backup disk management questions

    1. Can't I use the backup disk for anything other than Time Machine?
    2. How can I see the status of the disk - how much room is left?
    3. Are there any controls on how the backup is performed, such as how many generations I want to keep, how much space I allow, etc.?

    Welcome to Apple Discussions.
    1. If you have a relatively large disk you can partition it and use one partition for Time Machine. Apple says you can keep other files on the same partition as Time Machine but suggests that it is ideal if Time Machine has its very own disk or partition.
    2. Click once on a disk icon in the Finder. Type Command "i". or choose Get Info from the File menu. You can also see the information if you run Disk Utility which you can launch from the icon on your dock or go to the Utilities folder by choosing Utilities from the Go menu. Once you have launched Disk Utility click on the volume (volumes are the indented ones) in the left pane and it will tell you that volume's capacity and used/available space at the bottom of the Disk Utility window.
    3. Time Machine does an initial backup and then does hourly incremental backups after that. The hourly backups are automatically deleted daily to save disk space. There are no user controls for the timing. You can, however, choose to start or stop Time Machine at any time and you can also turn the program on or off. Once the partition is nearly full Time Machine will start to automatically delete prior backups as it needs disk space starting with the earliest backups. There is an excellent thread on how to calculate what size backup drive might suit your needs here.

  • Backup Assessment

    Good Day,
    I would like to ask for your recommendation regarding our backup procedures in SQL Server.
    Currently we have 7 companies (7 SAP database).
    Drive C - 67.7 GB Max
    Drive D - 68.8 GB Max
    Backup sizes (databases not shrunk):
    0.14 GB - 1
    2.12 GB - 2
    3.20 GB - 3
    1.28 GB - 4
    0.64 GB - 5
    0.39 GB - 6
    0.82 GB - 7
    Total - 8.59 GB
    The Drive D is our storage of the Backup.
    I do a backup daily, but after a week I delete the previous week's backups to free up space.
    At the end of the month, I save the last backup of the month to the external drive and also burn it to a DVD.
    Can you give me your recommendations so that I can improve my backup procedure?
    Thanks

    Hi,
    You did not specify whether you have an existing backup/DR system in your setup.
    You should go for tape backups, since they are more reliable than your external disk.
    External HDD: it can be damaged physically, and you are also putting your corporate data on a piece of hardware that can be moved out unnoticed. This is also a concern for auditing policy, if your systems are audited by external auditors.
    You can build policy based on Prod and non-prod.
    Prod:
    incremental backup once every 15/30/60 mins.
    Full online backup in middle of night everyday
    Full offline backup once a week on weekend.
    Non-Prod:
    Online backup once every 60mins
    Full offline backup on weekend.
    There are a lot of 3rd-party tools for this, e.g. the IBM Tivoli backup tools, or LiteSpeed from Quest for MSSQL databases.
    Regards,
    Sitarama.

  • New HD after disk crash : how restore original partitioning scheme

    Hello, after a mechanical disk crash I have to replace the hard drive of an X220 (American version). Windows 7 Pro was in English. I have two VHD file backups on an external drive, as well as a repair disk. One VHD file is 348 MB and one is 199 GB. I could check that the second one contains Windows 7 with software and user files. I assume that the first VHD file is the EFI partition. Because of the mechanical failure there is of course no "recovery partition". I do have a CD of Windows 7 Pro in French if required, but I assume the VHD files should suffice. My question is: how do I restore the original partitioning scheme? What are the sizes of the partitions in bytes, and in which order are they? Can someone post the output of the partition table (for instance from Linux's "gdisk -l" on a bootable CD or USB key) for an X220? Thanks a lot.

    Pete291 I am showing a Photoshop CS6 full version serial number for Windows under your account at http://www.adobe.com/.  Please make sure you are utilizing the serial number which ends with 94.  You can find more details on how to locate your serial number at Find your serial number quickly - http://helpx.adobe.com/x-productkb/global/find-serial-number.html.
    If you are using the serial number registered under your account then you can please provide additional details.  Why are you reinstalling Photoshop CS6?  Did you restore/migrate/transfer/copy files previously to this installation?
