Number of Records Loaded for 2LIS_02_HDR much higher than expected

Hi, I just did a setup run for 2LIS_02_HDR. Table EKKO has 1.2 million records, but in BW it looks like it is loading well over 6 million right now. Is this because the HDR extractor collects detail information for header aggregation? Even with that I don't expect 6 million records.
Any ideas? I'm fairly certain I put in all the right parameters in my setup load! Any way to check?
Thanks!

Hi,
please check the following:
1. The setup tables must not be filled repeatedly. If they are filled multiple times, you will get a higher number of records than expected. Kindly check the number of entries in the setup table "MC11VA0HDRSETUP" (a quick count is sketched below).
2. If the record count is higher than expected, please delete the data for application 02 using transaction LBWG and refill it.
Thanks
Dipika
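
A minimal ABAP sketch of that check. One hedge up front: MC11VA0HDRSETUP appears to be the setup table for the sales application (11); for 2LIS_02_HDR (application 02) the setup table is normally the extraction structure name plus SETUP, i.e. MC02M_0HDRSETUP. Verify the exact name in LBWE or SE11 before relying on it.

    * Count the entries in the purchasing header setup table.
    * The table name is an assumption (MC02M_0HDR + SETUP) - confirm it in LBWE/SE11.
    DATA: lv_tab   TYPE tabname VALUE 'MC02M_0HDRSETUP',
          lv_count TYPE i.

    SELECT COUNT( * ) FROM (lv_tab) INTO lv_count.

    WRITE: / 'Entries in', lv_tab, ':', lv_count.

If the count is roughly a whole multiple of the 1.2 million EKKO headers, the setup was most likely run more than once; deleting with LBWG and refilling once, as suggested above, should bring the load back in line.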

Similar Messages

  • Process Chains - number of records loaded in various targets

    Hello
    I have been trying to figure out how to access the metadata in the process chain log to extract the number of records loaded into various targets (Cubes/ODS/InfoObjects).
    I have seen a few tables like RSMONICTAB, RSPCPROCESSLOG and RSREQICODS through various threads posted here; however, I would like to know whether there is any data model (relationship structure between these and other standard tables) so that I could seamlessly traverse them to get the information I need.
    In traditional ETL tools I would approach the problem as follows:
    > Load a particular sequence (in our case = BW process chain) name into the program
    > Extract the start time and end time information.
    > Traverse through all the objects of the sequence (BW process chain)
    > Check if the object is a data target
    > If yes, scan through the logs to extract the number of records loaded.
    Could I have a list of similar tables that I could traverse through ABAP code to extract such information?
    Thanks in advance

    Hi Richa,
    Please check these tables, which may be very useful for you:
      RSSELDONE,
      RSDODSO,
      RSSTATMANPSA,
      RSDS,
      RSIS,
      RSDCUBE
    I have an ABAP program with which you can get all the information for a particular request (a minimal sketch is shown below).
    If you need more information, go to ST13, select BI TOOLS and execute.
    Hope this helps.
    Regards,
    Ravi Kanth
    Edited by: Ravi kanth on May 15, 2009 11:08 AM
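
    A minimal hedged sketch of the "traverse through ABAP code" idea: it only counts the rows of the monitor and log tables named in this thread, using dynamic table names so that nothing here asserts their exact column layout. Check the actual fields and how the tables relate to each other in SE11 (or via the BI tools in ST13) before building the real extraction.

      * Row counts of the monitor/log tables mentioned above (dynamic access only).
      DATA: lt_tabs  TYPE STANDARD TABLE OF tabname,
            lv_tab   TYPE tabname,
            lv_count TYPE i.

      APPEND 'RSSELDONE'      TO lt_tabs.
      APPEND 'RSMONICTAB'     TO lt_tabs.
      APPEND 'RSREQICODS'     TO lt_tabs.
      APPEND 'RSPCPROCESSLOG' TO lt_tabs.

      LOOP AT lt_tabs INTO lv_tab.
        SELECT COUNT( * ) FROM (lv_tab) INTO lv_count.
        WRITE: / lv_tab, lv_count.
      ENDLOOP.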

  • Total number of records loaded into an ODS and an InfoCube

    Hi,
    I loaded some data records from an Oracle source system into an ODS and an InfoCube.
    My source system colleague provided some data records based on his selection on the Oracle source system side.
    How can I see how many data records were loaded into the ODS and the InfoCube?
    I can check in the monitor, but that is not correct (because I loaded a second and third time with 'ignore duplicate records'), so I don't think the monitor gives me the correct number of data records loaded for the ODS and the InfoCube.
    So is there any transaction code or something similar to find the number of records loaded into an ODS or InfoCube?
    Please tell me.
    I'll assign the points.
    Bye
    Rizwan

    Hi,
    I went into the ODS manage screen and looked at the 'transferred' and 'added' data records; both are the same.
    But when I total the added data records, it comes to 147,737.
    However, when I check the active table (/BIC/A<odsname>00), the total number of entries is 137,738.
    Why is there such a difference?
    And in the case of an InfoCube, how can I find the total number of records loaded into the InfoCube (not by browsing the cube itself)?
    Like, is there any table for the fact table and dimension tables (see the sketch below)?
    Please tell me.
    Thanks
    Rizwan
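
    A minimal hedged sketch of counting the rows directly in the underlying tables. The names ZSALES, /BIC/AZSALES00, /BIC/FZSALES and /BIC/EZSALES are hypothetical placeholders; substitute your own ODS and cube names (transaction LISTSCHEMA lists a cube's actual fact and dimension tables).

      * Row counts of the ODS active table and the cube fact tables.
      * All table names below are placeholders - adapt them to your objects.
      DATA: lv_tab   TYPE tabname,
            lv_count TYPE i.

      lv_tab = '/BIC/AZSALES00'.       " active table of ODS ZSALES (hypothetical)
      SELECT COUNT( * ) FROM (lv_tab) INTO lv_count.
      WRITE: / lv_tab, lv_count.

      lv_tab = '/BIC/FZSALES'.         " uncompressed (F) fact table of cube ZSALES
      SELECT COUNT( * ) FROM (lv_tab) INTO lv_count.
      WRITE: / lv_tab, lv_count.

      lv_tab = '/BIC/EZSALES'.         " compressed (E) fact table of cube ZSALES
      SELECT COUNT( * ) FROM (lv_tab) INTO lv_count.
      WRITE: / lv_tab, lv_count.

    One common reason for the 147,737 vs. 137,738 difference above is that the active table keeps only one row per semantic key, so records loaded again with the same key overwrite existing rows instead of adding new ones.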

  • Change the number of records displayed for a single item alone

    Hi,
    I have a single data block with a few items. Is it possible to make one item in the block a non-database item and have that item alone display multiple lines?
    i.e. all other items in the data block show a single record, whereas this particular item should show 10 records.
    Is this achievable?
    Or do I have to put that item in a separate data block and set the block's "Number of Records Displayed" property to 10?
    Thanks,
    Yuvaraaj.

    983448 wrote:
    > All other items in the data block show a single record, whereas this particular item should show 10 records.
    Yes, you can. But I will say: re-check your design.
    Hamid
    Mark correct/helpful to help others to get the right answer(s).

  • Track levels suddenly much higher than normal. ?!?!?

    Need some help friends... I have a problem that I can't seem to fix;
    When I boot up an existing song or project, the levels shown on most of my tracks are peaking, but the sound isn't clipping. The faders were all well below zero normally, but for some reason the levels are now much higher and in the red. Why? The whole song sounds the same as far as I can tell, but the levels shown on the faders are all much, much higher! The only way to reduce them is to remove the compression and other plugins from the track, but I need a way to just restore the original levels...
    Many thanks in advance for any suggestions,
    If you need any info - please ask - i really want to sort this out asap.
    Acoma

    So how come BeeJay gets a "Helpful" and I do not when I am the one who correctly diagnosed the problem? Sheesh!
    That's 'cos I'm much prettier than you!
    Er, no, wait... the other thing...
    (In any case, when I saw the original question, I left it for you to answer as you are the chief PFM-advocate around here... )

  • Database much larger than expected when using secondary databases

    Hi Guys,
    When I load data into my database with secondary indexes turned on, it is much larger than I expect it to be. I am using the Base API.
    I am persisting (using TupleBindings) the following data type:
    A Key of size ~ 80 Bytes
    A Value consisting of 4 longs ~ 4*8 = 32 Bytes
    I am persisting ~ 280k of such records
    I therefore expect ballpark 280k * (80+32) Bytes ~ 31M of data to be persisted. I actually see ~ 40M - which is fine.
    Now, when I add 4 secondary database indexes - on the 4 long values - I would expect to see approximately an additional 4 * 32 * 280k Bytes -> ~ 35M
    This would bring the total amount of data to (40M + 35M) ~ 75M
    (although I would expect less given that many of the secondary keys are duplicates)
    What I am seeing however is 153M
    Given that no other data is persisted that could account for this, is this what you would expect to see?
    Is there any way to account for the extra unexpected 75M of data?
    Thanks,
    Joel
    Edited by: JoelH on 10-Feb-2010 10:59

    Hi Joel,
    > Now, when I add 4 secondary database indexes - on the 4 long values - I would expect to see approximately an additional 4 * 32 * 280k Bytes -> ~ 35M
    A secondary index consists of a regular JE database containing key-value pairs, where the key of the pair is the secondary key and the value of the pair is the primary key. Your primary key is 80 bytes, so this would be 4 * (4 + 80) * 280k Bytes -> ~ 91M.
    The remaining space is taken by per-record overhead for every primary and secondary record, plus the index of keys itself. There are (280k * 5) records total. Duplicates do not take less space.
    I assume this is an insert-only test, and therefore the log should contain little obsolete data. To be sure, please run the DbSpace utility.
    Does this answer your question?
    --mark
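
    Putting the reply's own numbers together as a quick back-of-the-envelope check:

    \[
    \underbrace{40\text{M}}_{\text{primary, observed}} + \underbrace{\sim 91\text{M}}_{4\ \text{secondary indexes}} \approx 131\text{M}, \qquad 153\text{M} - 131\text{M} \approx 22\text{M}
    \]

    so roughly 22M is left to be explained by the per-record overhead and the key index mentioned in the reply; the DbSpace utility will show how much of the log is live versus obsolete.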

  • How to stop a process chain if it is taking much more time than expected

    Sometimes a process chain takes much more time to finish than expected. How can I stop the process chain and execute it again?
    Thanks in Advance.
    Harman

    How can I stop the process chain?
    If the job is running for a long time:
    1) Go to RSMO and SM37 and check the long-running job there.
    2) There you can see the status of the job.
    3) If the job is still running, you can kill that job.
    4) Delete the failed request from the data target.
    For more details go to the link below:
    how to stop a process chain if it is yellow for a long time
    How can I execute it again?
    Go to function module RSPC_API_CHAIN_START, enter your process chain name there, and execute (a hedged call sketch follows below).
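
    A minimal hedged sketch of that call from a small ABAP report. The parameter names I_CHAIN and E_LOGID are assumptions from memory; verify the actual interface of RSPC_API_CHAIN_START in SE37 first, and replace the chain name with your own.

      * Restart a process chain via the API function module named above.
      * I_CHAIN / E_LOGID are assumed parameter names - check them in SE37.
      DATA lv_logid(25) TYPE c.

      CALL FUNCTION 'RSPC_API_CHAIN_START'
        EXPORTING
          i_chain = 'ZPC_MY_CHAIN'     " hypothetical chain name
        IMPORTING
          e_logid = lv_logid.

      WRITE: / 'Chain started, log id:', lv_logid.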

  • Performance issue fetching a huge number of records with "FOR ALL ENTRIES"

    Hello,
    We need to extract a huge amount of data (about 1.000.000 records) from the VBEP table, which has an overall size of about 120 million records.
    We currently use this statement:
    CHECK NOT ( it_massive_vbep[] IS INITIAL ).
    SELECT (list of fields) FROM vbep JOIN vbap
                 ON vbep~vbeln = vbap~vbeln AND
                    vbep~posnr = vbap~posnr
                 INTO CORRESPONDING FIELDS OF TABLE w_sched
                 FOR ALL ENTRIES IN it_massive_vbep
                 WHERE vbep~vbeln = it_massive_vbep-tabkey-vbeln
                   AND vbep~posnr = it_massive_vbep-tabkey-posnr
                   AND vbep~etenr = it_massive_vbep-tabkey-etenr.
    Note that the internal table it_massive_vbep always contains records with a fully specified key.
    Do you think this query could be further optimized?
    Many thanks,
    -Enrico

    There are two options to improve performance:
    + you should work in blocks of 10.000 to 50.000 entries (see the sketch below)
    + you should check archiving options; does this really make sense:
    > VBEP table, which has an overall size of about 120 million records.
    Split it_massive_vbep into smaller portions it_vbep_notsomassive (it_vbep_2 below):
    DATA: start TYPE i, stop TYPE i, t TYPE i.
    CHECK NOT ( it_vbep_2[] IS INITIAL ).
      GET RUN TIME FIELD start.
    SELECT (list of fields)
                  INTO CORRESPONDING FIELDS OF TABLE w_sched
                  FROM vbep JOIN vbap
                  ON vbep~vbeln = vbap~vbeln AND
                       vbep~posnr = vbap~posnr
                  FOR ALL ENTRIES IN it_vbep_2
                  WHERE vbep~vbeln = it_vbep_2-vbeln
                  AND      vbep~posnr = it_vbep_2-posnr
                  AND      vbep~etenr  = it_vbep_2-etenr.
      GET RUN TIME FIELD stop.
    t = stop - start.
    WRITE: / t.
    Be aware that even 10.000 entries will take some time.
    Another question: how did you get the 1.000.000 records into it_massive_vbep? They were not typed in, but selected somehow.
    Change the FAE into a JOIN and it will be much faster.
    Siegfried
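
    A hedged sketch of the block-wise approach suggested above. The flat key structure, the selected fields (EDATU, WMENG, MATNR) and all local names are illustrative assumptions; adapt them to the real definition of it_massive_vbep and to the field list you actually need.

      * Process the schedule-line keys in blocks of 10.000 entries so that each
      * FOR ALL ENTRIES selection and each result package stays small.
      TYPES: BEGIN OF ty_key,
               vbeln TYPE vbep-vbeln,
               posnr TYPE vbep-posnr,
               etenr TYPE vbep-etenr,
             END OF ty_key,
             BEGIN OF ty_sched,
               vbeln TYPE vbep-vbeln,
               posnr TYPE vbep-posnr,
               etenr TYPE vbep-etenr,
               edatu TYPE vbep-edatu,   " schedule line date (assumed field list)
               wmeng TYPE vbep-wmeng,   " ordered quantity
               matnr TYPE vbap-matnr,   " material from the item
             END OF ty_sched.

      CONSTANTS lc_blocksize TYPE i VALUE 10000.

      DATA: lt_keys   TYPE STANDARD TABLE OF ty_key,   " fill this from it_massive_vbep
            lt_block  TYPE STANDARD TABLE OF ty_key,
            lt_result TYPE STANDARD TABLE OF ty_sched,
            lt_sched  TYPE STANDARD TABLE OF ty_sched,
            lv_lines  TYPE i,
            lv_from   TYPE i VALUE 1,
            lv_to     TYPE i.

      DESCRIBE TABLE lt_keys LINES lv_lines.

      WHILE lv_from <= lv_lines.
        lv_to = lv_from + lc_blocksize - 1.
        CLEAR lt_block.
        APPEND LINES OF lt_keys FROM lv_from TO lv_to TO lt_block.

        IF NOT lt_block[] IS INITIAL.
          SELECT vbep~vbeln vbep~posnr vbep~etenr vbep~edatu vbep~wmeng vbap~matnr
            INTO CORRESPONDING FIELDS OF TABLE lt_result
            FROM vbep JOIN vbap
              ON vbep~vbeln = vbap~vbeln
             AND vbep~posnr = vbap~posnr
            FOR ALL ENTRIES IN lt_block
            WHERE vbep~vbeln = lt_block-vbeln
              AND vbep~posnr = lt_block-posnr
              AND vbep~etenr = lt_block-etenr.
          APPEND LINES OF lt_result TO lt_sched.
        ENDIF.

        lv_from = lv_to + 1.
      ENDWHILE.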

  • RMAN backup size for the DR db is very much higher than that of the primary db

    Hi All,
    My production database on Oracle 10.2.0.4 had a physical size of 897 GB and a logical size of around 800 GB.
    Old tables were truncated from the database and its logical size was reduced to 230 GB.
    The backup size is now 55 GB; it used to be 130 GB before the truncation.
    This database has a DR configured. A backup of the DR database is taken daily and used to refresh test environments.
    But the backup size for the DR database has not decreased, and the restoration time while refreshing test environments is also the same as before.
    We had expected that the backup size for the DR database would also decrease, reducing the restoration time.
    We take compressed RMAN backups.
    What is the concept behind this?

    When you duplicate a database it restores all the datafiles from the RMAN backup, so the copy will have the same physical size as your source database; truncating the tables reduced the logical size, but not the size of the datafiles. Remove the fragmented space using object movement, then shrink the tablespaces and take a fresh RMAN backup and restore.
    Regards
    Asif Kabir

  • Intel i7-4800MQ CPU power consumption is much higher than on Windows

    On Windows 8.1, I ran CPU-Z to generate this information:
    Processor 1    ID = 0
         Number of cores    4 (max 8)
         Number of threads    8 (max 16)
         Name    Intel Core i7 4800MQ
         Codename    Haswell
         Specification    Intel(R) Core(TM) i7-4800MQ CPU @ 2.70GHz
         Package (platform ID)    Socket 947 rPGA (0x4)
         Core Stepping    C0
         Technology    22 nm
         TDP Limit    47 Watts
         ***Core Speed    799.0 MHz***
         Multiplier x Bus Speed    8.0 x 99.9 MHz
         Stock frequency    2700 MHz
         Instructions sets    MMX, SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2, EM64T, VT-x, AES, AVX, AVX2, FMA3     
         Turbo Mode    supported, enabled
         Max non-turbo ratio    27x
         Max turbo ratio    37x
         Max efficiency ratio    8x
    You see: this CPU can run at 800 MHz (core voltage 0.6 V) on Windows when there is no CPU-intensive task, and the fan is quiet.
    ===========================================================
    But on Arch Linux, running the latest i7z-git, I got this information:
    Cpu speed from cpuinfo 2693.00Mhz
    cpuinfo might be wrong if cpufreq is enabled. To guess correctly try estimating via tsc
    Linux's inbuilt cpu_khz code emulated now
    True Frequency (without accounting Turbo) 2693 MHz
      CPU Multiplier 27x || Bus clock frequency (BCLK) 99.74 MHz
    Socket [0] - [physical cores=4, logical cores=8, max online cores ever=4]
      TURBO ENABLED on 4 Cores, Hyper Threading ON
      Max Frequency without considering Turbo 2792.74 MHz (99.74 x [28])
      Max TURBO Multiplier (if Enabled) with 1/2/3/4 Cores is  37x/36x/35x/35x
      Real Current Frequency 2653.53 MHz [99.74 x 26.60] (Max of below)
            Core [core-id]  :Actual Freq (Mult.)      C0%   Halt(C1)%  C3 %   C6 %   C7 %  Temp      VCo
            Core 1 [0]:       2625.75 (26.33x)      2.45    16.6       1    2.39    77.6    40      0.89
            Core 2 [1]:       2653.53 (26.60x)         1    1.14    2.44       1    94.4    40      0.89
            Core 3 [2]:       2628.92 (26.36x)      1.41     4.7       1       0    92.9    42      0.89
            Core 4 [3]:       2631.18 (26.38x)      2.29    20.4    1.44       1    74.9    39      0.89
    On Linux the CPU only runs at 2.7 GHz (or higher in turbo mode), and the core voltage is 0.89 V. The fan is very noisy.
    I have tried many CPU power-saving methods, but cannot find a solution. Is there any difference in power management between Linux and Windows?

    Gusar wrote:
    silenceleaf wrote: I compare the power consumption with Windows.
    How exactly? That was my question. Just the voltage report? Or do you actually notice something is not right, like shorter battery life? Different tools could possibly measure things differently. Try turbostat, it's in the AUR.
    Fan management can also be unrelated to actual power consumption. I've wondered before whether pstate, because it works differently, is confusing whatever is responsible for controlling the fans. What brand of laptop do you have? Post the output of lsmod.
    And again, just because a tool says the frequency is at 2700 MHz does not mean the processor is actually at that frequency! It is not, most of the time.
    Thank you for your help.
    I am using a Dell M4800 laptop which has a 97 Wh battery, mainly for browsing web pages and typing documents. On Windows, its battery lasts about 5 hours; on Linux, it only lasts 3 hours.
    I am also trying to find more evidence to compare the power consumption of the two platforms. I understand your point: it is hard to compare these two environments fairly. I am still trying to find more information.
    lsmod output
    Module Size Used by
    rfcomm 50698 12
    btusb 19648 0
    hid_generic 1153 0
    snd_usb_audio 117811 2
    uvcvideo 72804 0
    videobuf2_vmalloc 3304 1 uvcvideo
    snd_usbmidi_lib 19755 1 snd_usb_audio
    videobuf2_memops 2335 1 videobuf2_vmalloc
    snd_rawmidi 18742 1 snd_usbmidi_lib
    videobuf2_core 28243 1 uvcvideo
    snd_seq_device 5180 1 snd_rawmidi
    videodev 111840 2 uvcvideo,videobuf2_core
    media 11719 2 uvcvideo,videodev
    usbhid 40545 0
    hid 90902 2 hid_generic,usbhid
    ctr 3831 2
    ccm 7894 2
    nvidia 10621191 2
    snd_hda_codec_hdmi 36379 1
    joydev 9631 0
    mousedev 10247 0
    dell_wmi 1485 0
    sparse_keymap 3146 1 dell_wmi
    bnep 13053 2
    bluetooth 326343 30 bnep,btusb,rfcomm
    arc4 2000 2
    x86_pkg_temp_thermal 6991 0
    intel_powerclamp 8802 0
    coretemp 6358 0
    kvm_intel 131532 0
    kvm 396221 1 kvm_intel
    crct10dif_pclmul 4682 0
    crct10dif_common 1372 1 crct10dif_pclmul
    crc32_pclmul 2923 0
    crc32c_intel 14185 0
    ghash_clmulni_intel 4405 0
    aesni_intel 45548 4
    aes_x86_64 7399 1 aesni_intel
    lrw 3565 1 aesni_intel
    gf128mul 5858 1 lrw
    glue_helper 4417 1 aesni_intel
    ablk_helper 2004 1 aesni_intel
    cryptd 8409 3 ghash_clmulni_intel,aesni_intel,ablk_helper
    snd_hda_codec_realtek 45083 1
    iTCO_wdt 5407 0
    iTCO_vendor_support 1929 1 iTCO_wdt
    ppdev 7118 0
    fuse 76830 3
    dell_laptop 12389 0
    dcdbas 6463 1 dell_laptop
    iwlmvm 136091 0
    mac80211 474777 1 iwlmvm
    microcode 15216 0
    iwlwifi 140001 1 iwlmvm
    i2c_i801 11269 0
    psmouse 88171 0
    pcspkr 2027 0
    evdev 11045 22
    serio_raw 5009 0
    i915 725519 5
    snd_hda_intel 37352 4
    cfg80211 408167 3 iwlwifi,mac80211,iwlmvm
    snd_hda_codec 150017 3 snd_hda_codec_realtek,snd_hda_codec_hdmi,snd_hda_intel
    snd_hwdep 6332 2 snd_usb_audio,snd_hda_codec
    snd_pcm 77822 4 snd_usb_audio,snd_hda_codec_hdmi,snd_hda_codec,snd_hda_intel
    intel_agp 10872 1 i915
    intel_gtt 12664 2 i915,intel_agp
    snd_page_alloc 7298 2 snd_pcm,snd_hda_intel
    drm_kms_helper 35710 1 i915
    snd_timer 18718 1 snd_pcm
    rfkill 15651 5 cfg80211,bluetooth,dell_laptop
    drm 239102 5 i915,drm_kms_helper,nvidia
    lpc_ich 13368 0
    i2c_algo_bit 5391 1 i915
    i2c_core 24760 7 drm,i915,i2c_i801,drm_kms_helper,i2c_algo_bit,nvidia,videodev
    mei_me 9552 0
    snd 59029 23 snd_hda_codec_realtek,snd_usb_audio,snd_hwdep,snd_timer,snd_hda_codec_hdmi,snd_pcm,snd_rawmidi,snd_usbmidi_lib,snd_hda_codec,snd_hda_intel,snd_seq_device
    mei 62803 1 mei_me
    soundcore 5418 1 snd
    e1000e 222900 0
    ptp 8244 1 e1000e
    pps_core 8833 1 ptp
    shpchp 25425 0
    thermal 8556 0
    wmi 8251 1 dell_wmi
    parport_pc 19479 0
    parport 30517 2 ppdev,parport_pc
    video 11425 1 i915
    ac 3334 0
    battery 7565 0
    processor 24620 0
    button 4605 1 i915
    vboxdrv 264122 0
    msr 2565 0
    ext4 473259 2
    crc16 1359 2 ext4,bluetooth
    mbcache 6074 1 ext4
    jbd2 80912 1 ext4
    sr_mod 14930 0
    sd_mod 31361 5
    cdrom 34880 1 sr_mod
    atkbd 16806 0
    libps2 4187 2 atkbd,psmouse
    ahci 23048 3
    libahci 21698 1 ahci
    libata 172104 2 ahci,libahci
    sdhci_pci 12604 0
    ehci_pci 3928 0
    sdhci 28820 1 sdhci_pci
    xhci_hcd 144681 0
    ehci_hcd 64171 1 ehci_pci
    scsi_mod 132250 3 libata,sd_mod,sr_mod
    mmc_core 95465 2 sdhci,sdhci_pci
    usbcore 180320 8 btusb,snd_usb_audio,uvcvideo,snd_usbmidi_lib,ehci_hcd,ehci_pci,usbhid,xhci_hcd
    usb_common 1648 1 usbcore
    i8042 13366 2 libps2,dell_laptop
    serio 10721 6 serio_raw,atkbd,i8042,psmouse

  • Mileage display is much higher than audible mileage??

    So ever since I updated my Nano w/video to 1.0.2, something bizarre has started happening.
    When I look at the screen on the nano it will tell me something insanely high, let's say 6 miles.
    When I press the center button the lovely voice tells me the accurate mileage, usually about half.
    When I get home and sync it shows my last run in iTunes as the higher mileage, but on the Nike+ website it displays the correct mileage.
    Bizarre....

    I am experiencing the exact same problem. The reading on the screen is ~1.6 times the true mileage covered, but the pace and times are correct, and when you upload the data to nikeplus it uploads the correct data. This all started after I upgraded to 1.0.2; it immediately changed all the stored readings in the history by a factor of 1.6!
    Is there a fix or do we just wait for the next nano 2nd gen update?

  • Performance Requirements much higher than in Express 7 ???

    Hi,
    I am using a G5 dual-core 2.0 GHz PowerMac and always found that I could work very comfortably with Logic Express 7. Since I upgraded to LE 8, I find that it uses up to 50% (!!!) more CPU power running the same projects as LE 7 did before. I even find myself quite often in the situation that I get CPU overloads - this happened extremely rarely before.
    What can I do to minimize the CPU load in LE 8? The way it is now, it looks like I have to go back to LE 7 ..... :-(
    Greetings
    Andreas

    Great!
    I'm just upgrading from GarageBand - never got round to buying LE 7 - went out and bought 8 - loaded it up, tried a demo and - wayhey - system overload! I've just added some more RAM (2.5 GB) - why is it overloading? I haven't even recorded anything yet! This is the kind of stuff I thought I left behind with my eMac 700 MHz!
    Anyone any ideas folks?
    Confused
    mackers

  • Firefox 4 is slow in loading web pages, much slower than the previous version. WHY?

    I am having problems with pages loading slowly, slower than Explorer loads the same page, for example.
    What can be done?

    Have you tried (1) restarting your router, and (2) resetting the wifi on the iPad?  To do #2, go to Settings, General, Reset, Reset Network Settings. 

  • Used Space is much higher than actual space used in the partition

    Hi All,
    I am facing a space problem on Oracle Linux 5.3.
    The partition /oradata shows used space of 379 GB when checked with the df -TH command, but when I go to the partition and check the space with the du -sh command it shows 280 GB.
    How can I reclaim the space?
    [orah@hih oracle]$ df -TH
    Filesystem Type Size Used Avail Use% Mounted on
    /dev/sda3 ext3 16G 1.0G 14G 7% /
    /dev/sda4 ext3 11G 2.9G 6.8G 31% /usr
    /dev/sdb5 ext3 11G 158M 9.5G 2% /tmp
    /dev/sdb3 ext3 51G 189M 48G 1% /orasig
    /dev/sdb2 ext3 11G 277M 9.4G 3% /var
    /dev/sdb1 ext3 71G 12G 56G 17% /oracle
    /dev/sda1 ext3 104M 16M 83M 17% /boot
    tmpfs tmpfs 26G 9.6M 26G 1% /dev/shm
    /dev/sdc ext3 529G 379G 140G 74% /oradata
    /dev/sdd ext3 529G 401G 128G 76% /orabkp
    /dev/sde ext3 212G 197G 14G 94% /oraprod1
    /dev/sdf ext3 119G 73G 45G 63% /stage
    Rgds,
    Alig

    What is the output of df -h?
    -H uses powers of 1000, not 1024. Computer memory uses binary logic based on powers of 2 (2^10 = 1024, 8 * 8 * 8 * 2 = 1024).
    Anyway, df and du measure different things:
    "du" reports the space used by files and folders.
    "df" reports the space used by the file system, which includes the overhead for journals and inode tables.
    If a file is held open by a user and the same file is deleted, then df and du will report different output.
    A reboot of the system should align the df and du output more closely.

  • Backup VHD size is much larger than expected, and other backup-related questions

    Hello, I have a few Windows 2008 servers and I have scheduled a weekly backup (Windows Backup) which runs on Saturday.
    I recently noticed that the actual size of the data drive is only 30 GB, but the weekly backup creates a VHD of 65 GB. This is not happening for all servers, but for most of them. Why is it so, and how can I get the correct VHD size? A 60 GB VHD doesn't make sense for 30 GB of data.
    2. If at any moment I have to restore the entire Active Directory on Windows 2008 R2, is the process the same as on Windows 2003 (going to DSRM mode, restoring the backup, authoritative restore), or is there any difference?
    3. I also noticed that if I have a backup VHD of one server (let's say A) and I copy that backup to another server (let's say B), then Windows 2008 only gives me the option to attach the VHD to server B. Is there any way to restore the data from the VHD through the backup utility? Currently I am copying and pasting from the VHD to server B's data drive, but that is not the correct way of doing it. Is it a limitation of Windows 2008?
    Senior System Engineer.

    Hi,
    If a large number of files are getting deleted on the data drive, the backup image can have large holes. You can compact the VHD image using the diskpart command to get the correct VHD size.
    For more detailed information, please refer to the thread below:
    My Exchange backup is bigger than the used space on the Exchange drive.
    https://social.technet.microsoft.com/Forums/windowsserver/en-US/3ccdcb6c-e26a-4577-ad4b-f31f9cef5dd7/my-exchange-backup-is-bigger-than-the-used-space-on-the-exchange-drive?forum=windowsbackup
    For the second question, the answer is yes. If you want to restore the entire Active Directory on Windows 2008 R2, you need to start the domain controller in Directory Services Restore Mode to perform a nonauthoritative restore from backup.
    Performing Nonauthoritative Restore of Active Directory Domain Services
    http://technet.microsoft.com/en-us/library/cc816627(v=ws.10).aspx
    If you want to restore on server B a backup that was created on server A, you need to copy the WindowsImageBackup folder to server B.
    For more detailed information, please see:
    Restore After Copying the WindowsImageBackup Folder
    http://standalonelabs.wordpress.com/2011/05/30/restore-after-copying-the-windowsimagebackup-folder/
    Best Regards,
    Mandy
    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Subscriber Support, contact [email protected]
