Dedupe question

Does dedupe work on a virtual file server's data drive that is a vhd file within a cluster shared volume?

Microsoft deduplication does not work for running VMs (unless they are configured for VDI, see link below) but you can enable dedupe INSIDE your guest VM and indeed save some space. 
Microsoft Windows Server 2012 Dedupe
http://technet.microsoft.com/en-us/library/hh831700.aspx
Not good candidates for deduplication:
Hyper-V hosts
What's new with dedupe in R2
http://technet.microsoft.com/en-us/library/dn486808.aspx
Feature/functionality: Data deduplication for remote storage of Virtual Desktop Infrastructure (VDI) workloads
New or updated? New
Description: Optimize active virtual hard disks (VHDs) for Virtual Desktop Infrastructure (VDI) workloads by implementing Data Deduplication on Cluster Shared Volumes (CSVs).
Here's a thread where the OP is running a file server inside a VM with dedupe enabled and using VHDX shrink to save space on the host (CSV). It's a kludge, but it can be made to work with some pressure applied (I would not go for this, but it's worth at least reading). See:
File Server inside a VM + dedupe enabled
http://social.technet.microsoft.com/Forums/windowsserver/en-US/74f30d29-b0f3-4955-9844-46af0c7db683/server-2012-not-compacting-vhdx-files?forum=winserverhyperv#04e7eb1d-1962-487d-8b1e-8d2775e2c77f
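If you go the dedupe-inside-the-guest route, the setup is only a few cmdlets. Here's a minimal sketch, assuming Windows Server 2012 or later inside the guest and a data volume D: (adjust the drive letter for your environment):
    # Run inside the guest VM, not on the Hyper-V host
    Install-WindowsFeature -Name FS-Data-Deduplication
    # Enable dedupe on the guest's data volume
    Enable-DedupVolume -Volume "D:"
    # Kick off an optimization job now instead of waiting for the schedule
    Start-DedupJob -Volume "D:" -Type Optimization
    # Review the savings once the job completes
    Get-DedupStatus -Volume "D:"
Note the savings land inside the guest's VHDX; the host's CSV only shrinks if you later compact the VHDX, which is exactly the kludge discussed in the thread above.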
Hope this helped :)

Similar Messages

  • ZFS Dedup question

    Hello All,
    I have been playing around with ZFS dedup in Open Solaris.
    I would like to know how ZFS stores the dedup table. I know it is usually in memory, but it must keep a copy on disk. How is that table protected? Are there multiple copies, like the uberblock?
    Thanks
    --Pete

  • Deduplication combined with differencing disks for better or not ?

        We have a test Hyper-V host for various VMs. It is naturally easier for us to start from a sysprepped parent disk and move on.
        On the other hand, deduplication (2012 R2) claims to be most effective on VHD library data.
        Is the effectiveness of deduplication different for differencing VHDs as compared to other disks?
        Assuming it only works once the VM is off: what is the algorithm, how long does deduplication take, and what happens when the original files (referenced chunks) are updated?
    Shahid Roofi

      The problem with diff disks is their architecture. Over time the child disk grows to nearly the size of its parent; even a single minor patch can sometimes cause most of the parent OS to be considered updated.
      Others are achieving wonderful results with dedupe on a VM library:
    http://blog.compower.org/2013/10/31/deduplication-windows-8-1-laptop-great-hyper-v-lab-environment/. They claim 80-95% space savings on VMs, the highest figure.
      I would ask VR28DETT to elaborate on the semantics of that dedupe process, which we have been unable to find documented: how it matches similar content, how often it runs, what happens when the referenced data is updated so that the child content is updated in turn, and what the I/O penalties are in achieving that. That is what we are interested to know, rather than just opinions.
      I am sure VR28DETT does have that information to help us out.
      @BrianEH: when we say dedup is for static content, please also elaborate how static. Should it be read-only data? By the way, is there any mention in the TechNet docs that it's only for static content? Please refer to it if there is.
    The question you're asking is largely rhetorical: you cannot use MSFT offline deduplication with live data like running VMs (unless it's a VDI scenario, which is not what you do according to your first post). So for a VM library you CAN use both diff. disks and dedupe (the simplicity of new-VM provisioning comes in as a benefit), or you can use dedupe only. For production you can use only diff. disks, or dedupe enabled inside a Windows-running VM (hell to manage with shrinking VHDX, and no deduplication between VMs, so very limited use and poor space savings). You can read more about dedupe and running VMs here:
    Dedupe and running VMs
    http://social.technet.microsoft.com/Forums/windowsserver/en-US/e275f38c-a440-4790-bd42-1024d0819000/dedupe-question?forum=winserverfiles#f6d2044c-8e3b-4ee5-a3c0-b663b97c729b
    This decent discussion has all the links and answers for questions you've asked.
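    If you want a rough idea of what dedupe would buy you on a VM library volume before committing, the evaluation tool that ships with Windows Server 2012/R2 can scan an existing folder, and the dedup cmdlets report the realized savings afterwards. A quick sketch (E:\VMLibrary is a placeholder path):
        # Estimate potential savings before enabling dedupe
        DDPEval.exe E:\VMLibrary
        # After enabling dedupe on the library volume, check the actual numbers
        Get-DedupVolume -Volume "E:"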
    Hope this helped.

  • DeDupe and logical Questions

    Can someone explain: I ran dedupe and it showed 100% completed, and at that point it had saved 21 GB. Three days later it had saved 1 TB on the same data. How is that possible?

    Hi, 
    I got the reply below from the Data Deduplication discussions team.
    There are three possible explanations for this.
    1. Long running optimization job – the statistics for the job are saved periodically as the job runs, so you can watch these values increase during optimization execution
    2. Open files – Data Deduplication in WS2012 skips open files during optimization.  When the files are later closed and the optimization job runs again, it would optimize those files and the SavingsRate would go up.
    3. Normal file churn – after the first deduplication job ran, some non-deduplicated data on the volume was deleted.
    More information on #3… 
    There are two measures of deduplication rate, SavingsRate and OptimizedFilesSavingsRate.
    - SavingsRate is based on the total amount of data on the volume.  You can see this value decrease/increase as you add/remove data on the volume (e.g. copy a bunch of files to the volume after the optimization job has completed).
    - OptimizedFilesSavingsRate is based on the total amount of deduplicated data on the volume.  You would see this value stay the same as you add/remove data on the volume (until another optimization or GC job runs).
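    If you want to watch the two rates yourself, they are exposed by the dedup cmdlets. A minimal sketch, assuming the WS2012 Deduplication feature is installed and D: is the deduped volume:
        # Volume-wide savings rate
        Get-DedupVolume -Volume "D:" | Format-List Volume, SavingsRate, SavedSpace
        # Savings rate over optimized files only
        Get-DedupStatus -Volume "D:" | Format-List OptimizedFilesSavingsRate, OptimizedFilesCount
        # Run another optimization pass, then compare the numbers
        Start-DedupJob -Volume "D:" -Type Optimization -Wait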
    Regards, 
    Mandy

  • Serious about cleaning up my library, and i have this question on odd duplicate problem

    Hi all, I am serious about cleaning up my library, and I have this question:
    Is there such a thing as a duplicate anymore in iTunes, and what happened to the View / Find Duplicates feature? I could have sworn there was a built-in way... Anyway, in my media subdirectories, I have hundreds of music album folders that contain songtitlethesame 1.mp3, songtitlethesame 2, songtitlethesame 3, etc., where they are the same song (same timecode) but with different catalogue info, i.e. album art, date added/modified, etc. What can I do about this?
    I was surprised to find that songGenieII and Powertunes do not have a "duplicate" remover... and it seems, for the bucks spent, there should be one.
    Cheers!

    Duplicates are a mess, and personally I do not feel any automated process can handle them correctly - there are too many different kinds of "duplicates", and often the only way to tell whether they are the same (or not) is to listen to them.
    I use an old iTunes so I cannot tell you where they have moved the Show Duplicates feature, but I can pretty much guarantee they would not have removed it. Poke around a bit.
    How to find and remove duplicate items in your iTunes library - http://support.apple.com/kb/HT2905
    http://dougscripts.com/itunes/itinfo/dupin.php (commercial)
    Posts by turingtest2 about different types of duplicates and techniques- https://discussions.apple.com/thread/3555601 and https://discussions.apple.com/message/16042406 (Note: The DeDuper script is for Windows)
    http://www.hardcoded.net/dupeguru_me/
    http://www.wideanglesoftware.com/tunesweeper/index.php

  • Manual replica creation of DeDup Volumes

    Hi,
    we are on the way to upgrading a lot of our file servers to Windows Server 2012.
    DeDup will be enabled on the volumes.
    We are going to back up these volumes via DPM 2012 SP1.
    Now my question: is it possible to create a manual replica (for example, with the help of external USB drives) for dedup-enabled volumes, or does the replica creation process have to be triggered from within the DPM console so that all data will be transferred
    over the network?
    Thanks in advance
    regards
    /bkpfast
    My postings are provided "AS IS" with no warranties and confer no rights

    Hi,
    Unfortunately, you cannot perform manual replica creation for a deduped volume in its deduped state. However, you can do the following, though I don't think it would provide you much benefit.
    1) On the protected server that has the dedup volume you want to protect, create a dummy folder in the root of the volume.
    2) On the DPM Server, protect the dedup volume, except uncheck the dummy folder.  Choose to make a manual replica.
    3) Copy the contents of the protected volume to the replica volume using any method you choose (one option is sketched after this list).
    4) Run the mandatory consistency check to make sure the data is equivalent.
    5) On the DPM Server, modify the PG and now include the dummy folder - basically uncheck the drive letter and recheck it so DPM will protect the whole volume now.
    6) On the protected server, delete the dummy folder.
    7) Re-run a new consistency check, and DPM will now protect the volume in a dedup state. This will result in only transferring deduped blocks, and leaving non-dedup blocks intact.
    Again, that may not buy you much and may be a waste of time - but give it a whirl.
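    For step 3, one common way to do the bulk copy from an elevated PowerShell prompt is robocopy. A sketch with placeholder paths (substitute your source drive and the actual DPM replica mount point):
        # /E copies all subfolders (including empty ones), /COPYALL preserves ACLs and attributes,
        # /Z makes the copy restartable if the USB transfer is interrupted
        robocopy D:\ <replica-volume-mount-point> /E /COPYALL /Z /R:2 /W:5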

  • I have three questions about managing my music library

    Hello,
    I have three questions about managing my music library, I hope you can help me with them:
    1) Is there a limit to how many entries - songs, albums, artwork - iTunes can handle? I have a hunch iTunes is like a database program and am curious about its capacity. I have two 2-TB drives and am wondering what is going to happen when I fill these two drives up. Other than disk space, what are iTunes' limitations?
    2) Talking about these two drives: how can I use both as a source for my iTunes library? Can I have two folders selected as the source of my library? I am not sure if I have enough disk space to hold all my music, but I do also have an almost empty 1 TB drive if need be.
    3) OK, now comes the real question. I am sure that I have duplicates in my library and I sure would love to clean it up. Possibly, if I do get to clean it up, I can save some disk space, and that is always a good thing. Any good techniques or software to follow while ripping music to help keep my library organized? Please be as detailed as you can.
    Thanks and I can't wait to hear from you.
    Waseem

    Wassimn wrote:
    Hello,
    I have three questions about managing my music library, I hope you can help me with them:
    1) Is there a limit to how many entries - songs, albums, artwork - iTunes can handle? I have a hunch iTunes is like a database program and am curious about its capacity. I have two 2-TB drives and am wondering what is going to happen when I fill these two drives up. Other than disk space, what are iTunes' limitations?
    As far as I know you're going to run out of disk space before you hit any limits. Each object in iTunes has a 64-bit key to access it. That said, as your library grows it will get less responsive, as bigger indexes take progressively longer to process.
    2) Talking about these two drives: how can I use both as a source for my iTunes library? Can I have two folders selected as the source of my library? I am not sure if I have enough disk space to hold all my music, but I do also have an almost empty 1 TB drive if need be.
    iTunes wants to manage everything inside one big folder. Some idiosyncrasies in the way it manages things if you have to move to a new drive mean it is best if you can stick to that plan. If your library grows larger, then you'll have to take manual control of where some or all of your content is stored. I use a variation of a script called ConsolidateByMoving which you could adapt for your needs.
    3) OK, now comes the real question. I am sure that I have duplicates in my library and I sure would love to clean it up. Possibly, if I do get to clean it up, I can save some disk space, and that is always a good thing. Any good techniques or software to follow while ripping music to help keep my library organized? Please be as detailed as you can.
    When it comes to deduping, I've written another script called DeDuper; see this thread for background.
    And for some general tips on getting organized in iTunes see Grouping tracks into albums.
    tt2

  • Performance with Dedup on HP ProLiant DL380p Gen8

    Hi all,
    It is not that I haven't been warned. I simply do not understand why write performance on the newly created pool is so horrible...
    Hopefully I'll get some more advice here. Some basic figures:
    The machine is a HP ProLiant DL380p Gen8 with two Intel Xeon E5-2665 CPUs and 128GB Ram.
    The storage-pool is made out of 14 900GB SAS 10k disks on two HP H221 SAS HBAs in two HP D2700 storage enclosures.
    The System is Solaris 11.1
    root@server12:~# zpool status -D datenhalde
    pool: datenhalde
    state: ONLINE
    scan: none requested
    config:
    NAME STATE READ WRITE CKSUM
    datenhalde ONLINE 0 0 0
    mirror-0 ONLINE 0 0 0
    c11t5000C5005EE0F5D5d0 ONLINE 0 0 0
    c12t5000C5005EDBBB95d0 ONLINE 0 0 0
    mirror-1 ONLINE 0 0 0
    c11t5000C5005EE20251d0 ONLINE 0 0 0
    c12t5000C5005ED658F1d0 ONLINE 0 0 0
    mirror-2 ONLINE 0 0 0
    c11t5000C5005ED80439d0 ONLINE 0 0 0
    c12t5000C5005EDB23F1d0 ONLINE 0 0 0
    mirror-3 ONLINE 0 0 0
    c11t5000C5005EDA2315d0 ONLINE 0 0 0
    c12t5000C5005ED6E049d0 ONLINE 0 0 0
    mirror-4 ONLINE 0 0 0
    c11t5000C5005EDBB289d0 ONLINE 0 0 0
    c12t5000C5005EDB9479d0 ONLINE 0 0 0
    mirror-5 ONLINE 0 0 0
    c11t5000C5005EDD8385d0 ONLINE 0 0 0
    c12t5000C5005ED72855d0 ONLINE 0 0 0
    mirror-6 ONLINE 0 0 0
    c11t5000C5005ED8759Dd0 ONLINE 0 0 0
    c12t5000C5005EE3AB59d0 ONLINE 0 0 0
    spares
    c11t5000C5005ED6CEADd0 AVAIL
    c12t5000C5005EDA2CD5d0 AVAIL
    errors: No known data errors
    DDT entries 5354008, size 292 on disk, 152 in core
    bucket allocated referenced
    refcnt blocks LSIZE PSIZE DSIZE blocks LSIZE PSIZE DSIZE
    1 3,22M 411G 411G 411G 3,22M 411G 411G 411G
    2 1,28M 163G 163G 163G 2,93M 374G 374G 374G
    4 440K 54,9G 54,9G 54,9G 2,12M 271G 271G 271G
    8 140K 17,5G 17,5G 17,5G 1,39M 177G 177G 177G
    16 36,1K 4,50G 4,50G 4,50G 689K 85,9G 85,9G 85,9G
    32 6,26K 798M 798M 798M 277K 34,4G 34,4G 34,4G
    64 1,92K 244M 244M 244M 136K 16,9G 16,9G 16,9G
    128 56 6,52M 6,52M 6,52M 10,5K 1,23G 1,23G 1,23G
    256 222 27,5M 27,5M 27,5M 71,0K 8,80G 8,80G 8,80G
    512 2 256K 256K 256K 1,38K 177M 177M 177M
    1K 4 384K 384K 384K 6,00K 612M 612M 612M
    4K 1 512 512 512 4,91K 2,45M 2,45M 2,45M
    16K 1 128K 128K 128K 24,9K 3,11G 3,11G 3,11G
    512K 1 128K 128K 128K 599K 74,9G 74,9G 74,9G
    Total 5,11M 652G 652G 652G 11,4M 1,43T 1,43T 1,43T
    root@server12:~# zpool list
    NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT
    datenhalde 5,69T 662G 5,04T 11% 2.22x ONLINE -
    root@server12:~# ./arc_summery.pl
    System Memory:
    Physical RAM: 131021 MB
    Free Memory : 18102 MB
    LotsFree: 2047 MB
    ZFS Tunables (/etc/system):
    ARC Size:
    Current Size: 101886 MB (arcsize)
    Target Size (Adaptive): 103252 MB (c)
    Min Size (Hard Limit): 64 MB (zfs_arc_min)
    Max Size (Hard Limit): 129997 MB (zfs_arc_max)
    ARC Size Breakdown:
    Most Recently Used Cache Size: 100% 103252 MB (p)
    Most Frequently Used Cache Size: 0% 0 MB (c-p)
    ARC Efficency:
    Cache Access Total: 124583164
    Cache Hit Ratio: 70% 87975485 [Defined State for buffer]
    Cache Miss Ratio: 29% 36607679 [Undefined State for Buffer]
    REAL Hit Ratio: 103% 128741192 [MRU/MFU Hits Only]
    Data Demand Efficiency: 91%
    Data Prefetch Efficiency: 29%
    CACHE HITS BY CACHE LIST:
    Anon: --% Counter Rolled.
    Most Recently Used: 74% 65231813 (mru) [ Return Customer ]
    Most Frequently Used: 72% 63509379 (mfu) [ Frequent Customer ]
    Most Recently Used Ghost: 0% 0 (mru_ghost) [ Return Customer Evicted, Now Back ]
    Most Frequently Used Ghost: 0% 0 (mfu_ghost) [ Frequent Customer Evicted, Now Back ]
    CACHE HITS BY DATA TYPE:
    Demand Data: 15% 13467569
    Prefetch Data: 4% 3555720
    Demand Metadata: 80% 70648029
    Prefetch Metadata: 0% 304167
    CACHE MISSES BY DATA TYPE:
    Demand Data: 3% 1281154
    Prefetch Data: 23% 8429373
    Demand Metadata: 73% 26879797
    Prefetch Metadata: 0% 17355
    root@server12:~# echo "::arc" | mdb -k
    hits = 88823429
    misses = 37306983
    demand_data_hits = 13492752
    demand_data_misses = 1281335
    demand_metadata_hits = 71470790
    demand_metadata_misses = 27578897
    prefetch_data_hits = 3555720
    prefetch_data_misses = 8429373
    prefetch_metadata_hits = 304167
    prefetch_metadata_misses = 17378
    mru_hits = 66467881
    mru_ghost_hits = 0
    mfu_hits = 64253247
    mfu_ghost_hits = 0
    deleted = 41770876
    mutex_miss = 172782
    hash_elements = 18446744073676992500
    hash_elements_max = 18446744073709551615
    hash_collisions = 12375174
    hash_chains = 18446744073698514699
    hash_chain_max = 9
    p = 103252 MB
    c = 103252 MB
    c_min = 64 MB
    c_max = 129997 MB
    size = 102059 MB
    buf_size = 481 MB
    data_size = 100652 MB
    other_size = 924 MB
    l2_hits = 0
    l2_misses = 28860232
    l2_feeds = 0
    l2_rw_clash = 0
    l2_read_bytes = 0 MB
    l2_write_bytes = 0 MB
    l2_writes_sent = 0
    l2_writes_done = 0
    l2_writes_error = 0
    l2_writes_hdr_miss = 0
    l2_evict_lock_retry = 0
    l2_evict_reading = 0
    l2_abort_lowmem = 0
    l2_cksum_bad = 0
    l2_io_error = 0
    l2_hdr_size = 0 MB
    memory_throttle_count = 0
    meta_used = 1406 MB
    meta_max = 1406 MB
    meta_limit = 0 MB
    arc_no_grow = 1
    arc_tempreserve = 0 MB
    root@server12:~#
    The write-performance is really really slow:
    read/write within this pool:
    root@server12:/datenhalde/s12test/Bild-DB/Testaktion# /usr/gnu/bin/dd if=Test.tif of=Test2.tif
    1885030+1 records in
    1885030+1 records out
    965135496 bytes (965 MB) copied, 145,923 s, 6,6 MB/s
    read from this pool and write to the root-pool:
    root@server12:/datenhalde/s12test/Bild-DB/Testaktion# /usr/gnu/bin/dd if=Test.tif of=/tmp/Test2.tif
    1885030+1 records in
    1885030+1 records out
    965135496 bytes (965 MB) copied, 9,51183 s, 101 MB/s
    root@server12:/datenhalde/s12test/Bild-DB/Testaktion# /usr/gnu/bin/dd if=FS2013_Fashionation_Beach_06.tif of=FS2013_Test.tif
    I just do not get this. Why is it that slow? Am I missing any tunable parameters? From the above figures the DDT should use 5,354,008 * 152 bytes = roughly 776 MB in RAM. That should fit easily.
    Sorry for the longish post, but I really need some help here, because the real data, with a much higher dedup ratio, is still to be copied to that pool.
    Compression is no real alternative, because most of the data will be compressed images and I don't expect to see great compression ratios.
    TIA and kind regards,
    Tom
    Edited by: vigtom on 16.04.2013 07:51

    Hi Cindy,
    thanks for answering :)
    Isn't the tunable parameter "arc_meta_limit" obsolete in Solaris 11?
    Before Solaris 11 you could tune arc_meta_limit by setting something reasonable in /etc/system with "set zfs:zfs_arc_meta_limit=...." which - at boot - is copied into arc_c_max overriding the default setting.
    On this Solaris 11.1 c_max is already maxed out to "kstat -p zfs:0:arcstats:c_max -> zfs:0:arcstats:c_max 136312127488" without any tuning. This is also reflected by the parameter "meta_limit = 0". Am I missing something here?
    When looking at the output of "echo "::arc" | mdb -k" I see the values of "meta_used", "meta_max" and "meta_limit". I understand these as "memory used for metadata right now", "max memory used for metadata in the past" and "theoretical limit of memory used for metadata", with a value of "0" meaning "unlimited". Right?
    What exactly is "arc_no_grow = 1" saying here?
    Sorry for maybe asking some silly questions. This is all a bit frustrating ;)
    When I disable dedup on the pool, write performance increases almost instantly. I did not test it long enough to get real figures. I'll probably do this (eventually even with Solaris 10) tomorrow.
    Would Oracle be willing to help me out under a support plan when running Solaris 11.1 on a machine which is certified for Solaris 10 only?
    Thanks again and kind regards,
    Tom

  • Several questions around SCDPM...

    Hi Forum,
    After reading the MS SCDPM 2012 supported and un-supported scenarios list.
    ...I have a few questions:
    1) Section:      SharePoint protection issues
       Statement:    AlwaysOn not supported
   Issue:        DPM can’t protect SharePoint farm SQL Server databases that have AlwaysOn enabled.
       Workaround:   None.
       Question:     What is the recommended method to protect SharePoint 2013 that resides upon MS SQL AlwaysOn ?
    2) Section:      SQL Server protection issues
       Statement:    AlwaysOn recovery to original location isn’t supported
   Issue:        When DPM is protecting SQL Server with AlwaysOn enabled, data recovery to the original location isn’t supported.
       Workaround:   None.
       Question:     Can someone outline the recovery process for MS SQL AlwaysOn ?
    3) Section:      Hyper-V and virtual machine protection issues
       Statement:    DPM doesn’t support the backup of Hyper-V clusters in different domains
       Issue:        In order to backup Hyper-V server clusters they must be located in the same domain as the DPM server.
       Workaround:   None
       Question:     Does this refer to only the protection/backup of Hyper-V cluster members that are Hyper-V hosts, or...
                     does this also cover/include protection/backup of guest VMs on Hyper-V clusters from different domains?
    4) Statement:    Secondary DPM protection of a Hyper-V cluster isn’t supported for a scaled-out DPM server deployment
       Issue:        When protecting a Hyper-V cluster using scaled-out DPM protection, you can’t add secondary protection for the protected Hyper-V workloads.
       Workaround:   None
       Question:     What is 'scaled-out' DPM?
    5) Requirement:  I want to store backups of Hyper-V guests, and backups of MS SQL AlwaysOn on DPM de-duplicating storage.
       Question:     Is it possible to replicate this 'de-duplicated storage' from one DPM environment to another?
    6) Requirement:  I want to replicate DPM backups from primary to DR site via WAN.
       Question:     Does DPM implement any form of optimized replication post de-duplication?
    Thanks,
    Dave.

    Hi
    1) Section:      SharePoint protection issues
       Statement:    AlwaysOn not supported
   Issue:        DPM can’t protect SharePoint farm SQL Server databases that have AlwaysOn enabled.
       Workaround:   None.
       Question:     What is the recommended method to protect SharePoint 2013 that resides upon MS SQL AlwaysOn ?
    ANSWER: We are adding support for protecting SharePoint utilizing SQL AlwaysOn in a future DPM 2012 R2 update rollup (UR). In the meantime, you can protect the SharePoint SQL databases using DPM SQL protection - however, item-level
    recovery will not be possible.
    2) Section:      SQL Server protection issues
       Statement:    AlwaysOn recovery to original location isn’t supported
   Issue:        When DPM is protecting SQL Server with AlwaysOn enabled, data recovery to the original location isn’t supported.
       Workaround:   None.
       Question:     Can someone outline the recovery process for MS SQL AlwaysOn ?
    ANSWER: You can recover the SQL databases to any non-AlwaysOn SQL Server, or restore the DB files to a volume on the SQL Server and replace the .mdf and .ldf files manually, or delete the AlwaysOn DB and restore back to the original location.
    3) Section:      Hyper-V and virtual machine protection issues
       Statement:    DPM doesn’t support the backup of Hyper-V clusters in different domains
       Issue:        In order to backup Hyper-V server clusters they must be located in the same domain as the DPM server.
       Workaround:   None
       Question:     Does this refer to only the protection/backup of Hyper-V cluster members that are Hyper-V hosts, or...
                     does this also cover/include protection/backup of guest VMs on Hyper-V clusters from different domains?
    ANSWER: It refers to host-level backup of guest VMs.
    4) Statement:    Secondary DPM protection of a Hyper-V cluster isn’t supported for a scaled-out DPM server deployment
       Issue:        When protecting a Hyper-V cluster using scaled-out DPM protection, you can’t add secondary protection for the protected Hyper-V workloads.
       Workaround:   None
       Question:     What is 'scaled-out' DPM?
    ANSWER:  Scale out refers to using multiple DPM servers to protect a large Hyper-V cluster.
    See this blog:
    http://blogs.technet.com/b/dpm/archive/2013/05/01/sc-2012-sp1-dpm-leveraging-dpm-scaleout-feature-to-protect-vms-deployed-on-a-big-cluster.aspx
    5) Requirement:  I want to store backups of Hyper-V guests, and backups of MS SQL AlwaysOn on DPM de-duplicating storage.
       Question:     Is it possible to replicate this 'de-duplicated storage' from one DPM environment to another?
    ANSWER: Dedupe is done by Windows against the .VHD files hosting the DPM storage pool, so it is not possible to replicate that to another DPM server.
    6) Requirement:  I want to replicate DPM backups from primary to DR site via WAN.
       Question:     Does DPM implement any form of optimized replication post de-duplication?
    ANSWER: All secondary protection is optimized in that only block-level changes are transmitted to the secondary DPM server once the initial replica is created. You can also enable on-the-wire compression to help further reduce data transfer
    size. Here again, it can then utilize Windows Dedup against that DPM server's storage pool .vhd files.
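    As a side note on answers 5 and 6: if you host the DPM storage pool .vhd files on a Windows file server volume, enabling dedup on it is the usual cmdlet pair. A sketch, assuming Windows Server 2012 R2 and V: as the hosting volume (the Backup usage type is the one intended for backup-style workloads):
        # On the server hosting the storage pool VHDs, not on the DPM server itself
        Enable-DedupVolume -Volume "V:" -UsageType Backup
        # Verify the volume is now dedup-enabled
        Get-DedupVolume -Volume "V:"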

  • Questions on Print Quote report

    Hi,
    I'm fairly new to Oracle Quoting and trying to get familiar with it. I have a few questions and would appreciate it if anyone could answer them:
    1) We have a requirement to customize the Print Quote report. I searched these forums and found that this report can be defined either as a XML Publisher report or an Oracle Reports report depending on a profile option. Can you please let me know what the name of the profile option is?
    2) When I select the 'Print Quote' option from the Actions drop down in the quoting page and click Submit I get the report printed and see the following URL in my browser.
    http://<host>:<port>/dev60cgi/rwcgi60?PROJ03_APPS+report=/proj3/app/appltop/aso/11.5.0/reports/US/ASOPQTEL.rdf+DESTYPE=CACHE+P_TCK_ID=23731428+P_EXECUTABLE=N+P_SHOW_CHARGES=N+P_SHOW_CATG_TOT=N+P_SHOW_PRICE_ADJ=Y+P_SESSION_ID=c-RAuP8LOvdnv30grRzKqUQs:S+P_SHOW_HDR_ATTACH=N+P_SHOW_LINE_ATTACH=N+P_SHOW_HDR_SALESUPP=N+P_SHOW_LN_SALESUPP=N+TOLERANCE=0+DESFORMAT=RTF+DESNAME=Quote.rtf
    Does it mean that the profile in our case is set to call the rdf since it has reference to ASOPQTEL.rdf in the above url?
    3) When you click the Submit button, do we have something like this in the JSP code: on click, call ASOPQTEL.rdf? Is the report called using a concurrent program? I want to know how the report is getting invoked.
    4) If we want to customize the JSP pages, can you please let me know the steps involved in making the customizations and testing them?
    Thanks and Appreciate your patience
    -PC

    1) We have a requirement to customize the Print Quote report. I searched these forums and found that this report can be defined either as a XML Publisher report or an Oracle Reports report depending on a profile option. Can you please let me know what the name of the profile option is?
    I think I posted it in one of the threads.
    2) When I select the 'Print Quote' option from the Actions drop down in the quoting page and click Submit I get the report printed and see the following URL in my browser.
    http://<host>:<port>/dev60cgi/rwcgi60?PROJ03_APPS+report=/proj3/app/appltop/aso/11.5.0/reports/US/ASOPQTEL.rdf+DESTYPE=CACHE+P_TCK_ID=23731428+P_EXECUTABLE=N+P_SHOW_CHARGES=N+P_SHOW_CATG_TOT=N+P_SHOW_PRICE_ADJ=Y+P_SESSION_ID=c-RAuP8LOvdnv30grRzKqUQs:S+P_SHOW_HDR_ATTACH=N+P_SHOW_LINE_ATTACH=N+P_SHOW_HDR_SALESUPP=N+P_SHOW_LN_SALESUPP=N+TOLERANCE=0+DESFORMAT=RTF+DESNAME=Quote.rtf
    Does it mean that the profile in our case is set to call the rdf since it has reference to ASOPQTEL.rdf in the above url?
    Yes, your understanding is correct.
    3) When you click the Submit button, do we have something like this in the JSP code: on click, call ASOPQTEL.rdf? Is the report called using a concurrent program? I want to know how the report is getting invoked.
    No, there is no concurrent program getting called; you can directly call a report in a browser window. The Oracle Reports server will execute the report and send the HTTP response to the browser.
    4) If we want to customize the JSP pages, can you please let me know the steps involved in making the customizations and testing them?
    This is detailed in many threads.
    Thanks
    Tapash

  • Satellite P300D-10v - Question about warranty

    HI EVERYBODY
    I have these overheating problems with my laptop, a Satellite P300D-10v.
    I did everything I could to fix it, without any success.
    I got the latest BIOS update from Toshiba. I cleaned my laptop with compressed air first, and then disassembled it completely and cleaned it more thoroughly (it was really clean inside, though...).
    BUT unfortunately the problem still exists...
    So I did some research on the internet and found out that many Toshiba owners have exactly the same problem with their laptops.
    Well, I guess this has been a Toshiba bug for many years now.
    It's a really nice laptop, with cool sound (the best in a laptop ever), BUT......
    So I wanted to ask a question: as I am still under warranty, can I return this laptop and get my money back, or exchange it for a different one?
    If anybody knows, PLS let me know.
    Cheers
    Thanks in advance

    Hi
    I have already found your other threads.
    Regarding the warranty question:
    If there is something wrong with the hardware, then the ASP in your country should be able to help you.
    The warranty should cover every repair or replacement.
    But I read that you have disassembled the laptop on your own... hmmm, if you have disassembled the notebook, then your warranty is not valid anymore :(
    I think this should be clear to you: you can lose the warranty if you disassemble the laptop!
    By the way: you have to speak with the notebook dealer where you purchased this notebook if you want to return it.
    The Toshiba ASP can repair and fix the notebook, but you will not get money from the ASP.
    Greets

  • Question regarding NULL and forms

    Hi all, I have a survey that I'm working on that will be sent via email.
    I'm having an issue though: if I have a multiple-choice question and the user only selects one of the choices, all the unselected choices come back as NULL. Is there a way I can filter out anything that says "NULL" so it only shows the selected options?
    Thanks.
    Here is the page that retrieves all the data:
    <body>
    <p>1) Is this your first visit to xxxxxxx? <b><%=request.getParameter("stepone") %></b>
    </p>
    <p> </p>
    <p>2) How did You Learn About xxxxxxx?</p>
    <p><b><%=request.getParameter("steptwoOne") %></b>
      <br>
        <b><%=request.getParameter("steptwoTwo") %></b>
      <br>
        <b><%=request.getParameter("steptwoThree") %></b>
      <br>
        <b><%=request.getParameter("steptwoFour") %></b>
      <br>
        <b><%=request.getParameter("steptwoOther") %></b>
    </p>
    <p> </p>
    <p>3) What was your main reason for visiting xxxxx?</p>
    <p><b><%=request.getParameter("stepthreeOne") %></b>
        <br>
          <b><%=request.getParameter("stepthreeTwo") %></b>
        <br>
          <b><%=request.getParameter("stepthreeThree") %></b>
        <br>
          <b><%=request.getParameter("stepthreeFour") %></b>
        <br>
          <b><%=request.getParameter("stepthreeOther") %></b>
    </p>
    <p>4) did you find the information you were looking for on this site?</p>
    <p><b><%=request.getParameter("stepfour") %>
    <br>
    <b><%=request.getParameter("stepfourOther") %></b>
    </b></p>
    <p>5) Do you plan on using this website in the future?</p>
    <p><b><%=request.getParameter("stepfive") %></b></p>
    <p>6) What is your gender</p>
    <p><b><%=request.getParameter("stepsix") %></b></p>
    <p>7) What is your age group</p>
    <p><b><%=request.getParameter("stepseven") %></b></p>
    8) Would you like to take a moment and tell us how we can improve your experience on xxxxxxxxxx?
    <p><b><%=request.getParameter("stepeightFeedback") %></b></p>

    I was messing around and came up with this. It doesn't remove the null, but if the value is null it prints "abc" beside it, so I think I might be getting close. I just needed to figure out how to replace the null - reading the parameter into a variable once and substituting an empty string when it is null does the trick:
    <%
        // Read the parameter once; fall back to an empty string when it is missing
        String steptwoFour = request.getParameter("steptwoFour");
        if (steptwoFour == null) { steptwoFour = ""; }
    %>
    <b><%= steptwoFour %></b>

  • Anyone know how to remove Overdrive books from my iphone that have been transferred from my computer? They do not show up on itunes. I see a lot of answers to this question but they all are based on being able to see the books in iTunes.

    How do I remove Overdrive books from the library that were downloaded onto my computer and then transferred to my iPhone? The problem is that they do not show up in iTunes.
    I see this question asked a lot when I google, but the answers always assume you can find the books in iTunes, either under the Books tab, the Audiobooks tab, or in the Music. They do not show up anywhere for me, and they do not remove from the app like the ones I downloaded directly onto my iPhone. The related archived article does not answer it either. I even asked a guy working at an Apple Store and he could not help either. Anybody...?
    Thanks!

    There is an app called DaisyDisk on the Mac App Store which will help you see exactly where storage is being used and by what. Try using that app and see which folders are taking up the most space.

  • Basic question

    Hello, I have a basic question. Say I have defined 2 fields in a cube or a DSO:
    Name, Quantity
    and from the external flat file I get characters for my quantity field. Would my load fail? For a standard DSO and for a write-optimized one?
    NOTE: the quantity field is a key figure defined as numeric,
    and the load coming in has "VIKPATEL" for the Quantity field instead of numbers.
    thanks

    Hi Vik,
    Yes, the load will fail.
    Maybe you could first load this data into BW (into the PSA) and set both fields as character fields. Then you can create the DSO, do a transformation from this PSA to the DSO, and put in your logic for what you want to do with Quantity values that are not numbers (e.g. convert to 0, or 'Not assigned', etc.).
    You can use a transfer rule, or clean-up ABAP code in the start routine.
    Hope this helps.

  • Mid 2010 15" i5 Battery Calibration Questions

    Hi, I have a mid 2010 15" MacBook Pro 2.4GHz i5.
    Question 1: I didn't calibrate my battery when I first got my MacBook Pro (it didn't say in the manual that I had to). I've had it for about a month and am doing a calibration today, is that okay? I hope I haven't damaged my battery? The calibration is only to help the battery meter provide an accurate reading of how much life it has remaining, right?
    Question 2: After reading Apple's calibration guide, I decided to set the MacBook Pro to never go to sleep (in Energy Saver System Preference) and leave it on overnight so it would run out of power and go to sleep, then I'd leave it in that state for at least 5 hours before charging it. When I woke up, the light on the front wasn't illuminated. It usually pulsates when in Sleep. Expectedly, it wouldn't wake when pressing buttons on the keyboard. So, what's happened? Is this Safe Sleep? I didn't see any "Your Mac is on reserve battery and will shut down" dialogues or anything similar, as I was asleep! I've left it in this state while I'm at work and will charge it this afternoon. Was my described method okay for calibration or should I have done something different?
    Question 3: Does it matter how quickly you drain your battery when doing a calibration? i.e is it okay to drain it quickly (by running HD video, Photo Booth with effects etc) or slowly (by leaving it idle or running light apps)?
    Thanks.
    Message was edited by: Fresh J

    Fresh J:
    A1. You're fine calibrating the battery now. You might have gotten more accurate readings during the first month if you'd done it sooner, but no harm has been done.
    A2. Your machine has NOT shut down; it has done exactly what it was supposed to do. When the power became critically low, it first wrote the contents of RAM to the hard drive, then went to sleep. When the battery was completely drained some time later, the MBP went into hibernation and the sleep light stopped pulsing and turned off. In that state the machine was using no power at all, but the contents of your RAM were still saved. Once the AC adapter was connected, a press of the power button would cause those contents to be reloaded, and the machine would pick up again exactly where you left off. It is not necessary to wait for the battery to be fully charged before using the machine on AC power, but do leave the AC adapter connected for at least two hours after the battery is fully charged. Nothing that you say you've done was wrong, and nothing that you say has happened was wrong.
    A3. No, it does not matter.
