Lite vs. Incremental vs. Full Optimize?

Hi all,
I want to understand some basic things:
1. What is the difference between processing an application and optimizing it? I know that optimization moves data between the Fact, Fac2, and WB tables, but what does processing do then?
2. What is the difference between Lite, Incremental, and Full Optimize? I think Lite moves data from the short-term (WB) table to the Fac2 table, but I am not sure what the other two options do.
Thank you in advance.

Optimizing applications
Optimization cleans up data storage, which improves the responsiveness of the system. You should optimize your applications periodically to enhance system performance.
There is no rule of thumb for how often to run optimizations; the need varies with the characteristics of your hardware environment and your application. However, you can set the system to remind you to optimize an application when its database grows to a certain size.
Data storage types
Optimization options center around three different types of Business Planning and Consolidation data storage:
  • Real-time (FactWB table): This storage area holds the most recent data sent through BPC for Excel and BPC Web. Periodically clearing real-time data greatly improves the performance of the system. See the Lite Optimization option, below.
  • Short-term (Fac2 table): This is data that is not real-time data, but is also not in long-term storage yet. When you load data via Data Manager, it loads the data to short-term storage so that the loaded data does not affect system performance.
  • Long-term (Fact table): This is your main data storage. All data eventually resides in long-term storage. Data that is not accessed very often remains in long-term storage so that the system maintains performance.
Optimization options
The optimization options interact with the three types of data storage in different ways:
  • Lite Optimization: Clears real-time data storage and moves it to short-term data storage. This option does not take the system offline, so you can schedule it to run during normal business activity.
  • Incremental Optimization: Clears both real-time and short-term data storage and moves both to long-term data storage. This option takes the system offline, so it is best run at off-peak periods of activity.
  • Full Process Optimization: Clears both real-time and short-term data storage and also processes the dimensions. This option takes the system offline and takes longer to run than Incremental Optimization, so it is best scheduled during down-time periods.
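As a rough mental model only (the table names come from this thread; real optimization also rebuilds OLAP structures, which is not modeled here), the data movement of the three options can be sketched as:

```python
# Toy model of the three BPC storage tables. Table names (wb, fac2, fact)
# follow the thread; real optimizations also reprocess OLAP structures,
# which this sketch does not attempt to model.
def lite_optimize(wb, fac2, fact):
    fac2.extend(wb)    # real-time -> short-term
    wb.clear()

def incremental_optimize(wb, fac2, fact):
    fact.extend(wb)    # real-time -> long-term
    fact.extend(fac2)  # short-term -> long-term
    wb.clear()
    fac2.clear()

def full_optimize(wb, fac2, fact):
    # Moves data like an incremental optimize, then (in the real product)
    # also processes the dimensions, which is not modeled here.
    incremental_optimize(wb, fac2, fact)
```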
In the rare case that two application sets reside on the same server and applications from different application sets are optimized at the same time, you may receive an error message. To work around this issue, optimize applications for one application set at a time, and schedule optimizations for different application sets on the same server to run at different times.

Similar Messages

  • DAC Commands for Incremental and Full load

    Hi,
    I'm implementing BI Apps 7.9.6.1 for a customer. For the R12 container, I noticed that for 5 DAC tasks the command for Incremental and Full load starts with "@DAC_" and ends with "_CMD". Due to this, the ETL load fails. Is this a bug?
    Thanks,
    Seetharam

    You may want to look at Metalink note ID 973191.1:
    Cause
    The 'Load Into Source Dimension' task has the following definition:
    -- DAC Client > Design > Tasks > Load Into Source Dimension > Command for Incremental Load = '@DAC_SOURCE_DIMENSION_INCREMENTAL'
    and
    -- DAC Client > Design > Tasks > Load Into Source Dimension > Command for Full Load = '@DAC_SOURCE_DIMENSION_FULL'
    instead of the actual Informatica workflow names.
    The DAC parameter is not substituted with appropriate values in Informatica during ETL.
    This is caused by the fact that the COMMANDS FOR FULL and INCREMENTAL fields in a DAC task do not allow database-specific texts, as described in the following bug:
    Bug 8760212 : COMMANDS FOR FULL AND INCREMENTAL SHOULD ALLOW DB SPECIFIC TEXTS
    Solution
    This issue was resolved after applying Patch 8760212.
    The documentation states to apply Patch 8760212 to DAC 10.1.3.4.1, according to the Systems Requirements and Supported Platforms Guide for Oracle Business Intelligence Data Warehouse Administration Console 10.1.3.4.1.
    However, Patch 8760212 has recently been made obsolete on this platform and language. Please see the reason stated on the 'Patches and Updates' tab on My Oracle Support.
    Reason for Obsolescence
    Use cumulative Patch 10052370 instead.
    Note: The most recent replacement for this patch is 10052370. If you are downloading Patch 8760212 because it is a prerequisite for another patch or patchset, verify whether Patch 10052370 is suitable as a substitute prerequisite before downloading it.

  • Incremental and Full backups using WBADMIN and Task Scheduler in Server 2008 R2

    I'd like to create an automated rotating schedule of backups using wbadmin and task scheduler, which would backup Bare Metal Recovery; System State; Drive C: and D: to a Network Share in a pattern like this:
    Monday - Incremental, overwrite last Monday's
    Tuesday - Incremental, overwrite last Tuesday's
    Wednesday - Incremental, overwrite last Wednesday's
    Thursday - Incremental, overwrite last Thursday's
    Friday - Incremental, overwrite last Friday's
    Saturday - Full, overwrite last Saturday's
    I need to use the wbadmin commands within the Task Scheduler and do not know any of the required Syntax to make sure everything goes smoothly, I do not want to do this through the CMD.
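    A minimal sketch of one way to set this up: one scheduled task per day, each running wbadmin against its own target folder so that a day's run only overwrites that day's previous copy. The share path, start time, and option choices here are assumptions for illustration, not tested syntax; note also the VSS limitation on network shares discussed in the replies, which can make every run on a share effectively a full backup.

```python
# Sketch: build the wbadmin and schtasks command lines for each weekday.
# The share path, start time, and option choices are assumptions.
def backup_command(day, share=r"\\server\Backups"):
    # One subfolder per weekday, so each run overwrites only that day's copy.
    target = share + "\\" + day
    return ("wbadmin start backup -backupTarget:" + target +
            " -include:C:,D: -allCritical -systemState -quiet")

def schtasks_command(day, time="22:00"):
    # Registers a weekly task that runs wbadmin non-interactively as SYSTEM.
    return ('schtasks /Create /TN "WB-' + day + '" /SC WEEKLY /D ' + day +
            ' /ST ' + time + ' /RU SYSTEM /TR "' + backup_command(day) + '"')
```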

    I know each backup for the previous corresponding day will be replaced, how do you figure I wont be able to do incremental backups...
    Because incremental backup is based on the Volume Shadow Copy (VSS) feature, and due to a Windows Server 2008 R2 limitation (resolved in Windows 8), only one version of backed-up data can be stored in a shared folder. So the result is that every time you back up some data to a shared folder, you are actually creating a full backup of it.
    is it not supported through task scheduler?
    The Task Scheduler is only a feature that runs the tasks that you have defined for it. It actually runs the wbadmin command, which runs on an operating system with the mentioned limitation.
    I know you can do Incremental backups through Windows Server Backup, but my limitation using that is I cant setup multiple backups.
    Yes, you are right. The Windows Server Backup feature in Windows Server 2008/2008 R2 does not have this functionality (although ntbackup in Windows XP and Windows Server 2003 did). So the only workaround for this limitation is to use the Task Scheduler feature with the wbadmin command. For more information see the following article:
    http://blogs.technet.com/b/filecab/archive/2009/04/13/customizing-windows-server-backup-schedule.aspx
    So are you saying that even though I want each backup to go to a different place on the Shared Folder that it will replace the previous backup anyways?
    No, and because of this I said in my previous post that with some modifications and additions you can implement the scenario. For example, you back up to a shared folder named Shared1 on Mondays. You have also configured the backup feature to back up data to another shared folder, named Shared2, on Wednesdays. When you repeat the backup operation in Shared1, only the backed-up data that resides on it will be affected, and the data on Shared2 remains intact.
    Please feel free to let us know if you have any question or concern.
    Hi R.Alikhani
    Then do you know if wbadmin supports incremental backup in Windows 8? As you said, the VSS issue is fixed in Windows 8. However, wbadmin there has fewer options than in Windows Server; I tried a bit, but it seems it only supports full backup. P.S. I use a network share - will incremental backup work if I define an iSCSI target instead? My remote backup PC is also running Windows 8.

  • DAC Commands for Incremental and Full load with @DAC_* parameters

    I have my DAC server on Linux running 10.1.3.4.1 with informatica 9.0.1 hotfix 2 also running on the same server. I'm doing a full ETL load for HR 11.10.5 and I get two failed tasks pertaining to parameter files:
    @DAC_SDE_ORA_PersistedStage_WorkforceEvent_Performance_Mntn_CMD
    @DAC_SDE_ORA_PayrollFact_Agg_Items_CMD
    I can see these parameter files are in the SrcFiles directory with the proper names.
    I did some googling and came across a couple of threads with the same problem; the solution was to download a patch (8760212), which was marked obsolete and replaced by 10052370, which Oracle Support says is ALSO obsolete. I'm at a loss here. Where can I find this patch, or a patch that replaces it, so I can fix this problem? Any ideas? Thanks.

    HI,
    You can download the patch from this location https://support.oracle.com/CSP/ui/flash.html and it needs login credentials in order to download the patch.
    Thanks,
    Navin Kumar Bolla

  • How do full and incremental Informatica workflow differ?

    Hi:
    I've read that a custom Informatica workflow should have full and incremental versions. I've compared the incremental and full versions of several seeded OBIA workflows, but I cannot find how they differ. For example, when I look at the session objects for each workflow they both have a source filter that limits a date field by LAST_EXTRACT_DATE.
    Can someone shed some light on this?
    Thanks.

    To answer your question, they differ in various ways for various mappings. For most fact tables, which hold high-volume transactional data, there may be a SQL override in the FULL session that uses INITIAL_EXTRACT_DATE and a different override in the INCREMENTAL session (the one without the Full suffix) that uses LAST_EXTRACT_DATE. For dimension tables, it's not always required that you have different logic for FULL versus incremental.
    Also, all FULL sessions (even if they have the same SQL) will have the BULK option turned on in the session properties, which allows a faster load since the table is usually truncated on FULL versus incremental. As a best practice, for facts, it is best to have separate FULL and INCREMENTAL sessions. For dimensions, depending on the volume of transactions or the change-capture date field available, you may or may not have different logic. If you do a FULL load, however, it is better to use BULK to speed up the load.
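    As a hedged illustration of the date-filter difference described above (the column name and parameter names are made up for the example, not taken from a real OBIA mapping):

```python
# FULL sessions filter from the initial extract date; incremental sessions
# filter from the last extract date. All names here are illustrative only.
def source_filter(mode, initial_extract_date, last_extract_date):
    since = initial_extract_date if mode == "FULL" else last_extract_date
    return "LAST_UPDATE_DATE >= TO_DATE('" + since + "', 'YYYY-MM-DD')"
```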

  • HT201272 I downloaded Prep & Pantry Lite (free), then decided I wanted the full version ($4.99), downloaded to iPad but only able to get lite version on computer. Yes, read support page.

    I followed the directions on the support page: at the iTunes Store, I logged into my account, clicked on "Purchased", and clicked on the icon next to "Prep & Pantry", but it only downloaded the "lite" version. No full version is noted anywhere under "Purchased", although my account does show I paid for the full version. The full version IS on my iPad but did not transfer during syncing either. I sync with a laptop that runs Windows XP (an older laptop that will not support Windows 7). Any suggestions?
    Any assistance would be greatly appreciated.

    The PC does support 4Gb of ram, although it's the max it supports, it does support it.  I have read quite a bit about other people having this issue with other systems and I've gone all through my bios looking for SOME sort of option that I can at least play around with, but there's nothing there.  I've also upgraded the bios thinking that "maybe" it could be something with that.  The video card I'm using is a Nvidia Quadro NVS 420 512 Mb.  I've tried disabling IndirectMemoryAccess in my xorg.conf thinking that might be messing with it somehow, but it did nothing.  I just tried removing the video card altogether and running w/ the onboard video and alas! I recovered 512Mb, which I would be pleased with, but when I add the card back, the available ram disappears again.  So now my question is this...  Why would the kernel be reserving 512Mb of system ram for a card that has 512Mb of ram built in (at the same time not visibly acknowledging the fact that it exists)?  Is this maybe a kernel option that I can pass at boot to disable it from reserving the system ram?
    I'm sorry about the lack of the bbcode tags btw, I'm a first timer to the forums (however an arch user for some time now). 
    Thanks again for any input, it's much appreciated!

  • Using SyncDoc to sync Accounts full version with the Accounts Lite Version

    Hello,
    I tried the Accounts Lite application version for a few weeks and I just purchased the full Accounts version. My phone is telling me I can sync the applications using SyncDocs. My interpretation of this is that I can transfer the information I have input into the lite version into the full version. Is this the case? If so, what is SyncDocs and how do I sync the two together?
    Thanks,

    If you want an unlocked version, you need to buy it from Apple and pay full price.  AT&T can only sell you an iPhone locked to their network.
    The nano Sim card of the 5S will work well on the iPhone 6 Plus.

  • Swapping from fasttrak lite to fasttrak full RAID drivers

    I think I am interested in moving from the lite version to the full version of my RAID drivers.
    As it is now, the title of the RAID drivers read:
    "WinXP Promise MBFastTrak 133 Lite (tm) Controller"
    Can this controller be upgraded to the non-Lite version? That is, is this hardware-specific or just a matter of drivers?
    I ask because I have not been able to find full versions of this driver, only "MBFastTrak 100 Lite" - that is, 100 instead of 133.
    Any help is appreciated!
    Regards

    You need to find a full-version modded BIOS first, then the drivers to go with it; the RAID BIOS is integrated into the board's BIOS.
    Knowing what board you run would help.
    Try looking on Bas's server.

  • Full crawl vs incremental crawl

    Hi All,
    I understand what full crawling is; can anyone please tell me what the Search Service will crawl in the case of incremental crawling?
    Regards Amit K

    Here is some more information.
    Full crawl: a full crawl crawls the entire content under a content source (depending on two settings specified at the time of creating the content source: "Content Source type" and "Crawl Settings").
    Incremental crawl: an incremental crawl crawls the content which has been added or modified after the last successful crawl.
    Why do we need incremental crawl?
    Though a full crawl covers every bit of content under a content source, we still need incremental crawl because it picks up only the content added or modified after the last successful crawl.
    Full crawls take more time and resources to complete than incremental crawls. Consider the following facts before choosing a full crawl instead of an incremental crawl:
    1. Compared with incremental crawls, full crawls chew up more memory and CPU cycles on the index server.
    2. Full crawls consume more memory and CPU cycles on the Web Front End servers when crawling content in your farm.
    3. Full crawls use more network bandwidth than incremental crawls.
    Crawling puts an overhead on resources. If some content has already been crawled and indexed, why crawl it again? Incremental crawl is therefore used to take care of any content added or modified after the last successful crawl.
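    The selection rule above can be sketched roughly as follows (the item structure and timestamps are assumptions for illustration, not the actual crawler data model):

```python
# A full crawl visits every item; an incremental crawl visits only items
# changed since the last successful crawl. Item fields are illustrative.
def items_to_crawl(items, last_successful_crawl, full=False):
    if full:
        return list(items)
    return [i for i in items if i["modified"] > last_successful_crawl]
```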
    There are some scenarios where incremental crawl doesn’t work and you need to run full crawl.
    Why do we need Full Crawl?
    1. Software updates or service packs were installed on servers in the farm.
    2. An SSP administrator added a new managed property.
    3. Crawl rules have been added, deleted, or modified.
    4. Full crawl is required to repair corrupted index. In this case, system may attempt a full crawl (depending on severity of corruption)
    5. A full crawl of the site has never been done.
    6. To detect security changes that were made on file shares after the last full crawl of the file share.
    7. Incremental crawls are failing consecutively. In rare cases, if an incremental crawl fails one hundred consecutive times at any level in a repository, the index server removes the affected content from the index.
    8. To reindex ASPX pages on Windows SharePoint Services 3.0 or Office SharePoint Server 2007 sites. The crawler cannot discover when ASPX pages on Windows SharePoint Services 3.0 or MOSS sites have changed. Because of this, incremental crawls do not reindex views or home pages when individual list items are deleted.
    The system does a full crawl even when an incremental crawl is requested under the following circumstances:
    · A shared services administrator stopped the previous crawl.
    · A content database was restored. This applies to MOSS and Windows SharePoint Services 3.0 content databases only.
    source :
    http://sannadisarath.blogspot.com/2011/03/incremental-crawl-vs-full-crawl.html
    My SharePoint Blog

  • 845E max Lite Raid to Full Tx2000

    Hi,
    I just converted the onboard Promise Lite BIOS to a full TX2000 BIOS, which allows you to set the stripe size and also use RAID 0+1, with a little help from Shonky.

    I can't post Shonky's site, but if you do a search I am sure you will find it.
    Also, I will be updating the latest 5.5 845E Max2 BIOS with full RAID and hacked TX2000 drivers for everyone to download later.
    All credit to Shonky.
    With a stripe size of 16k there is definitely an increase in performance.

  • Full vs Incremental Refresh?

    What is the main difference between full and incremental (auto)refreshes?
    The documentation suggests that full refresh is recommended for cache groups when data is changing very frequently, while incremental refresh is advised for tables where data is more or less static.
    1. Is there any way to justify/quantify what is the frequent data change (50% of records in 24 hours, etc..)?
    2. In the documentation I also found that using Incremental Data Refresh TT will hook up triggers on the tables in the Oracle DB. Is that correct?
    3. For Full DataRefresh TT will be using logs from Oracle DB to refresh the data in TT. In some cases logs in the Oracle DB are kept only for a specific number of days (i.e. 5 days). What will happen if those logs are not available?
    4. Using Full refresh TT will be reloading all the data from the Oracle DB. Is there any impact for the data availability in TT by using Full refresh?
    5. I believe that Full data refresh will be taking much more time compared to the Incremental data refresh. In this case if I set autorefresh to every 5 seconds, then basically I will end up with the infinite refresh. What is recommended frequency for Full and Incremental refreshes?
    6. Full vs Incremental refresh. Which one is using less resources in Oracle DB and TT and why?
    Thank you.

    Full refresh works by discarding all the data in the cache group tables and then reloading it all from Oracle on every refresh cycle. It is best when (a) the tables are very small and (b) the refresh interval is not too short.
    Incremental autorefresh works by installing a trigger and a tracking table on each of the base tables in Oracle. As changes occur in Oracle they are tracked, and when the next autorefresh cycle comes around only the changes are propagated to the cache tables. Incremental is recommended when (a) the tables involved are of any substantial size and/or (b) a short refresh interval is required. To try and answer your specific questions:
    1. Is there any way to justify/quantify what is the frequent data change (50% of records in 24 hours, etc..)?
    CJ>> Not really. This comes down to application requirements, how much load you can tolerate on Oracle etc.
    2. In the documentation I also found that using Incremental Data Refresh TT will hook up triggers on the tables in the Oracle DB. Is that correct?
    CJ>> Yes, a single trigger and a single tracking table will be instantiated on each base table that is cached in a cache group that uses incremental autorefresh.
    3. For Full DataRefresh TT will be using logs from Oracle DB to refresh the data in TT. In some cases logs in the Oracle DB are kept only for a specific number of days (i.e. 5 days). What will happen if those logs are not available?
    CJ>> No. Neither incremental nor full refresh uses Oracle logs. Full refresh simply queries the data in the table and incremental uses a combination of the data in the table and the tracking table.
    4. Using Full refresh TT will be reloading all the data from the Oracle DB. Is there any impact for the data availability in TT by using Full refresh?
    CJ>> Yes. Using full refresh TimesTen starts by emptying the cache table and so there is a significant impact on data availability. Incremental refresh does not have this issue.
    5. I believe that Full data refresh will be taking much more time compared to the Incremental data refresh. In this case if I set autorefresh to every 5 seconds, then basically I will end up with the infinite refresh. What is recommended frequency for Full and Incremental refreshes?
    CJ>> Regardless of the refresh method chosen, you should ensure that the refresh interval is set much greater than the time it takes to perform the refresh. Setting it to a shorter value simply results in almost continuous refresh and a much heavier load on Oracle. This is especially problematic with full refresh.
    6. Full vs Incremental refresh. Which one is using less resources in Oracle DB and TT and why?
    CJ>> Again it depends but in general incremental is the preferred choice.
    In my experience I have rarely seen anyone use full autorefresh.
    Chris
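    A toy sketch of the two strategies Chris describes (the tracking "table" here is just a set of changed keys, standing in for what the Oracle trigger records; everything else is an assumption for illustration):

```python
# Full refresh: empty the cache, then reload everything from the source.
def full_refresh(cache, source):
    cache.clear()              # cache is briefly empty: the availability hit
    cache.update(source)

# Incremental refresh: propagate only the keys the trigger recorded.
def incremental_refresh(cache, source, changed_keys):
    for key in changed_keys:
        if key in source:
            cache[key] = source[key]   # insert/update propagated
        else:
            cache.pop(key, None)       # delete propagated
    changed_keys.clear()
```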

  • Server time out by full Optimize application process

    Hello,
    We have a problem with Full Optimize application processing. We get the error message "Server time out" during the Processing OLAP Database step.
    Our test environment is a 64-bit VM running Server 2008.
    Can anybody help me solve this issue?
    P.S. The other optimize processes (Lite and Incremental) work.
    Thanks
    Arai

    Hi,
    Please see Note 1277009.
    This might resolve your issue.
    Karthik AJ

  • Error while taking Full DB Backup

    Hi,
    We had multiplexed the archive log files to the default location (Flash Recovery Area) and log_archive_dest_1=/u02/app/oracle/oradata/orcl/archive.
    As too many archive logs were being generated, we removed log_archive_dest_1.
    We had scheduled a full DB backup every Sunday and an incremental DB backup every day (midnight) through OEM.
    I am backing up all the archive logs, and after each backup (incremental and full DB backup) we delete all archive logs that were backed up.
    The full DB backup failed as it is still trying to back up archive logs from log_archive_dest_1 (/u02/app/oracle/oradata/orcl/archive).
    Please find below the error log file generated after backup failed.
    RMAN> backup device type disk format='/u02/backup/scheduled/Full-ARC_%d_%t_%s_%p' tag 'BACKUP_ORCL_000022_091608120501' archivelog all not backed up;
    Starting backup at 16-SEP-08
    current log archived
    using channel ORA_DISK_1
    using channel ORA_DISK_2
    skipping archive log file /u02/app/oracle/oradata/orcl/archive/1_8459_639421762.dbf; already backed on 01-SEP-08
    skipping archive log file /u02/app/oracle/oradata/orcl/archive/1_8460_639421762.dbf; already backed on 01-SEP-08
    skipping archive log file /u02/app/oracle/oradata/orcl/archive/1_8461_639421762.dbf; already backed on 01-SEP-08
    skipping archive log file /u02/app/oracle/oradata/orcl/archive/1_8462_639421762.dbf; already backed on 01-SEP-08
    archived log /u02/app/oracle/oradata/orcl/archive/1_8463_639421762.dbf not found or out of sync with catalog
    trying alternate file for archivelog thread 1, sequence 8463
    archived log /u02/app/oracle/oradata/orcl/archive/1_8464_639421762.dbf not found or out of sync with catalog
    trying alternate file for archivelog thread 1, sequence 8464
    archived log /u02/app/oracle/oradata/orcl/archive/1_8465_639421762.dbf not found or out of sync with catalog
    trying alternate file for archivelog thread 1, sequence 8465
    archived log /u02/app/oracle/oradata/orcl/archive/1_8466_639421762.dbf not found or out of sync with catalog
    trying alternate file for archivelog thread 1, sequence 8466
    archived log /u02/app/oracle/oradata/orcl/archive/1_8467_639421762.dbf not found or out of sync with catalog
    trying alternate file for archivelog thread 1, sequence 8467
    Please suggest a possible solution.
    Thanks and Regards
    Amith

    Hi,
    The archive logs had occupied the full hard disk space, so I changed the DB to NOARCHIVELOG mode and deleted all the archive files at "/u02/app/oracle/oradata/orcl/archive" using OS commands.
    I have now changed the DB back to ARCHIVELOG mode and am using RMAN to delete all archive files which are backed up:
    backup incremental level 0 cumulative device type disk tag '%TAG' database include current controlfile;
    backup device type disk tag '%TAG' archivelog all not backed up delete all input;
    Thanks
    Amith
    Edited by: amith r on Sep 17, 2008 3:00 PM

  • Incremental Planning Refresh in 9.3.1

    Hi,
    I am trying to do an 'incremental' refresh in Hyperion Planning 9.3.1
    Currently, I have migrated the Oracle schema from 9.2 to 9.3 and converted the Planning application. I was also able to refresh the new outline to Essbase. However, certain members needed to be added to the Time dimension, and we added them in Essbase. Now when I do a refresh, the changes in Essbase disappear.
    I do know that changes should not be made in Essbase and should be made in Planning; however, the Time dimension does not allow the creation of new 'siblings' in the web front end.
    In 9.2 I was able to successfully keep the changes made in Essbase (Planning Desktop allowed the admin to select an 'incremental' or 'full' refresh). However, this option is missing in 9.3.1.
    Please advise.
    Thanks and Regards
    Kunal Tripathi

    Hi,
    The UDA will only retain member formulas.
    If you read all the information on the page, this part relates to your situation:
    To retain changes made directly to the Essbase outline, you must update the outline after every Refresh (for example, using MAXL scripts). Such changes can be automated. This process is not supported, and every effort should be made to work directly in Performance Management Architect or Planning. For more information, consult Hyperion Services or Hyperion Partners.
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • One of two external drives never does incremental after unplugging

    I've been fighting this for weeks and I am totally buffaloed.
    I have a Dell PowerEdge 2900 running Small Business Server 2008. I've been using Windows Server Backup with BackupAssist as a "front-end" for years. I have two nominally identical eSATA external enclosures, each with a Hitachi HDS724040ALE640 4 TB
    disk. I don't recall how long I've had these but it's been a while. I swap them out pretty much every day, and back up to the attached disk every day of the week.
    A few weeks ago I added another 500GB to my PERC-6 RAID-5 array. At the same time I changed my backup strategy. I had the two external disks formatted with two partitions, one for the image backup and one for data from other servers. I decided to have only
    one partition and only do image backups but put the backups from the other servers into a folder on the main server so they wind up in the image.
    Ever since then, if I unplug one of the disks (call it disk A) and plug in the other disk (call it disk B) and let the scheduled backup run, and then remove disk B, plug in disk A, and let the scheduled backup run, disk A winds up with only the most recent
    backup on it. Disk B, OTOH, accumulates backups as expected and as before the switch. And disk A accumulates multiple backups as long as it remains plugged in (e.g. over the weekend).
    Both disks report SMART data in the green. An intensive multi-day surface check with Victoria doesn't show any bad or even problematic sectors. I've reformatted many times, and deleted and re-created the partition a couple of times.
    BackupAssist tech support can't figure out what's happening. I've tried to test using WSB directly, but I haven't come up with any useful results because I don't really understand how to set up a rotating-media backup in WSB. I've done various checks and repairs with wbadmin without any joy. It seems pretty unlikely that the problem is in BackupAssist, because all it does is write a script and then call WSB to execute it, and the script doesn't vary.
    Help??

    Hi,
    There are several reasons which may cause a backup to be full instead of incremental, such as the deletion of the source volume snapshot from which the last backup was taken.
    For more detailed information, please refer to the thread below:
    control incremental vs full backup on W2k8 R2
    http://social.technet.microsoft.com/Forums/windowsserver/en-US/7820edc7-18ef-4f6e-bb50-f87f4a728597/control-incremental-vs-full-backup-on-w2k8-r2?forum=windowsbackup
    There is a similar thread, please go through it to help troubleshoot this issue:
    Understanding Windows Small Business Server 2008 Backup - Full and Incremental with 2 Backup Destinations
    http://social.technet.microsoft.com/Forums/en-US/bdb3c5ae-81dc-4f54-a39e-4b888ee52476/understanding-windows-small-business-server-2008-backup-full-and-incremental-with-2-backup
    Regards,
