Retrospect incremental network backup of formerly local storage?

I'm not running OS X Server, but maybe this is the best forum to post this question?
Is there a way to make Retrospect incrementally back up a network client to a storage set it was previously backed up to as a local drive?
I was running Retrospect 5.0 on an OS 9.2.2 machine, backing up to a local SCSI AIT drive.
The local AIT drive died, but I have another drive on a different OS X 10.3.9 machine in the office.
I dragged the 'storage sets' from the OS 9 machine to an internal drive on the office OS X machine, and proceeded to try to back up the OS 9 machine (which has Retrospect Client) over the network.
The problem is, Retrospect thinks none of the files on the client machine have been backed up yet, even though they were backed up when the AIT drive was local.
Is there any way to tell Retrospect that these (40 GB of) files across the network were already backed up to this storage set when the drive was local?
Has anybody run into this issue?
TIA for any suggestions.
-noel

Hi,
Based on my knowledge, Windows 7 Backup does not provide differential backup functionality, so you cannot point the backup program to a full backup and ask it to do a differential backup (only the changes since the last full).
Windows 7 Backup only provides incremental backup functionality, and each incremental is based only on the most recently taken backup. However, if you swap the backup target after every full backup, the next backup will be a full backup every time.
You can follow the articles below for additional help with backup and restore:
Back up your programs, system settings, and files
http://windows.microsoft.com/en-us/windows/back-up-programs-system-settings-files#1TC=windows-7
Back up and restore: frequently asked questions
http://windows.microsoft.com/en-US/windows7/Back-up-and-restore-frequently-asked-questions
Description of Full, Incremental, and Differential Backups (although it applies to Microsoft Windows 95, Microsoft Windows NT Server 4.0 Standard Edition, and Microsoft Windows NT Workstation 4.0 Developer Edition)
http://support.microsoft.com/kb/136621/en-us
Hope it helps.
Regards,
Blair Deng
TechNet Community Support

Similar Messages

  • Adding iCloud feature to backup device to local storage.

    Adding this feature would personally make my life simpler in a whole bunch of ways. What if iCloud could temporarily store any amount of data from our iOS devices until all of that data is backed up to iTunes on a computer set up to hold the backups in the first place? After the computer finished storing the data, it would send a confirmation to iCloud letting it delete the backup in the cloud. It wouldn't even need to be a computer; it could be a Time Capsule or an AirPort Extreme using an external hard drive. The confirmation would help make sure everything is there, because in the case of a power failure or a similar mishap, iCloud would know the data was not backed up and would notify you that the iOS device was not successfully synced due to the malfunction. That way you know about the problem and can fix it or sync the data to your computer yourself. This brings ease and is more affordable: people could sync their iOS devices away from the computer (on vacation or at work, say) the same way they would with iCloud, but the computer or hard drive would store the data instead of iCloud.

    Even if you are backing up to iCloud, you can perform a manual backup to your computer using iTunes at any time, and delete your iCloud backup thereafter if you wish.  Not everyone has a computer or wishes to use a computer in this way so I doubt you'll ever see a system designed around this type of functionality, especially when it can easily be done manually already.

  • Network backups? Anyone tried Retrospect with Mozy?

    I'll admit it, I'm still working on my overall backup strategy. I've got 2 Macs, a couple of PCs, and now a deprecated FreeBSD server waiting to find a new home. Currently all of my important data is on my Mac mini server. In perusing my options, I ran across Retrospect Backup, which we used at work in the distant past on Macs and it worked fine. I noticed that the PC version of the product can do network backups to Mozy, which is one of those Internet-based backup services. I'm wondering if any of you have used this combination to back up PCs and Macs using the Retrospect tool on either side, and what you thought of it compared to old-school backups to tape or hard drives that are local to you? Thanks!

    I was a big user of Retrospect for many years. Now using CrashPlan Pro to provide local disk and remote off-site backups.

  • Network Backup HD for Mac OS 10.3.9 and 9.2.2

    Hi,
    I'm going to be putting a 320 GB PATA Hitachi HD into a FireWire enclosure
    for backing up my 10.3.9 HD with Data Backup scheduling. I also have a G3 B&W with Mac OS 9.2.2 and Dantz Retrospect Express to use for backup too.
    Is there a way to have this FireWire drive used for network backups of the two Macs, with different scheduled backups, partitions, etc.?
    The FireWire drive enclosure only has two FireWire ports, but I also have the eMac and the G3 hooked up to a router and cable modem. I only have one FireWire
    cable currently. The network way would seem the best option though (fewer cables).
    Thank you for any help!
    Michael

    Hi John,
    Thank you for your reply!
    Can I replace my FireWire drive enclosure with a NAS enclosure? Where and how do you hook these NAS drives up?
    I'm thinking I'll just have to manually attach the FireWire cable and mount the drive for backups (for the G3). It doesn't get used as much anyway. The first partition will be 130 GB for OS 9 and the rest will be for OS X. I'll just have to do it that way, unless I can use Dantz over the network, but that would still be the manual way of doing this, and more work. The G3 isn't on all the time anyway.
    It doesn't seem to be a problem to do it this way. But I'll need to be mindful of my incremental schedule for OS X when I back up OS 9.
    Oh well..
    Thank you!
    Michael

  • Time Machine stalling with network backup to Live Duo

    I'm running Mountain Lion (10.8.2) and using a Western Digital Live Duo (NAS with 6TB set up with RAID1 (mirror)) for backup with Time Machine (TM). The WD Live Duo has a specific feature for supporting Time Machine and shares a partition for TM backup, but something is wacky as TM came up a few days ago saying that it had validated the backup and needed to create a new backup.
    The NAS is available on the network, I can easily access it from multiple computers on our home network as well as from the iMac I'm trying to backup.
    I'm connected to the NAS via a network switch (1000/100/10) that both the NAS and this iMac are connected to using Gigabit ethernet wired connections.
    Time Machine has been running for 2 days now and reports "67.63 GB of 363.44 GB - About 5 days". Yesterday it said 4 days... ;-)
    I've been having issues with TM stalling and eating up all the resources when I was just backing up to a local USB drive.
    Is there any other control for TM that is not visible? Where would I find the TM logs to see if something else is going on?
    Any other ideas?
    Thanks.

    Backing up with Time Machine to a third-party network device is unsupported by Apple and unreliable. Most if not all such devices use the obsolete "netatalk" implementation of AFP, which doesn't meet the technical requirements for a Time Machine server. I strongly suggest you back up to two or more locally attached external hard drives. If you want network backup, use an Apple Time Capsule or another Mac with File Sharing active as the destination.

  • Incrementally updated backup and EMC NMDA

    Hello Everyone,
    I'm kind of a newbie at setting up the NetWorker Module for Oracle to back up a database to tape. We use Oracle's suggested backup strategy of backing up the DB to disk first using incrementally updated backups with recovery set to 3 days (RECOVER COPY OF DATABASE WITH TAG ... UNTIL TIME 'SYSDATE-3'), which lets us recover the DB to any point in time using the backup files on disk instead of going to tape. After the backup to disk, we back up the recovery area to tape nightly. However, we do want to maintain a backup retention policy of 1 month. A couple of questions:
    1. If I set the retention policy to a recovery window of 31 days in RMAN, then backups don't become obsolete at all. This forces me to set a retention policy in NetWorker, and they don't recommend setting the retention policy in both RMAN and NetWorker. How is this generally done to obsolete backups from tape as well as from RMAN (catalog in the control file) with the above strategy? Perhaps in this case I should set the retention policy in NetWorker, set the retention policy in RMAN to NONE, and rely on CROSSCHECK and DELETE EXPIRED commands to keep the RMAN catalog in sync.
    2. I'm wondering whether the nightly backup of the recovery area to tape is going to take only the incremental from the previous day and NOT a full backup. The reason I ask is that I do not want the tape to do a full backup of the FRA every day, because the full backup datafiles only change once every 3 days based on the UNTIL TIME set. Is there an option to set in NetWorker to do incrementals only, or is that the default?
    Thanks in Advance!
    11gR2, 4 Node RAC, Linux, NMDA 1.1, Networker 7.6 sp1

    Loic wrote:
    "You do a full backup of the FRA on tape?"
    No, I do a full backup of the database on tape, using RMAN together with Veritas NetBackup.
    "I mean if you use incrementally updated backup it'll not work on tape... Because the level 0 backup that will be updated with the backup of the day after will be on tape and will not be updated."
    The incrementally updated backup is in the FRA only, on disk (both the image copy and the following backup sets that are used for recovery of the image copy). It never gets written to tape or updated on tape.
    "Why don't you then use a normal incremental backup? That would have no problem with the tape backup even if the level 0 or level 1/2 backups become reclaimable...?"
    I think I do :-)
    "Maybe: to keep that, you can put the redundancy to 2 instead of 1 copy. Like this, even with one copy on disk and one on tape, it'll keep the 2 copies. CONFIGURE RETENTION POLICY TO REDUNDANCY 2;"
    I'll think about that.
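    For reference, a minimal sketch of the incrementally updated backup cycle being discussed, ending with the retention setting quoted just above (the tag name is only illustrative):
    RUN {
        RECOVER COPY OF DATABASE WITH TAG 'incr_upd' UNTIL TIME 'SYSDATE-3';
        BACKUP INCREMENTAL LEVEL 1 FOR RECOVER OF COPY WITH TAG 'incr_upd' DATABASE;
    }
    # keep two copies (the disk image copy plus the tape copy) before backups become obsolete
    CONFIGURE RETENTION POLICY TO REDUNDANCY 2;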

  • Live migration to HA failed leaving VHD on local storage and VM in cluster = Unsupported Cluster Configuration

    Hi all
    Fun one here. I've been moving non-HA VMs to HA and everything has been working perfectly until now. All this is being performed on Hyper-V 2012 R2, Windows Server 2012 R2 and VMM 2012 R2.
    For some reason one of the VMs failed the migration with error 10608, "Cannot create or update a highly available virtual machine because Virtual Machine Manager could not locate or access Drive:\Folder". The odd thing is that the drive\folder is a local storage one, and I selected a CSV in the migration wizard.
    The net result is that the VM is half configured into the cluster but the VHD is still on local storage. Hence the "unsupported cluster configuration" error.
    The question is how do I roll back? I either need to get the VM out of the cluster and back into a non-HA state, or move the VHD onto the CSV. Not sure if the latter is really an option.
    I've foolishly clicked "Ignore" on the repair so now I can't use the "undo" option (brain fade moment on my part).
    Any help gratefully received as I'm a bit stuck with this.
    Thanks
    Rob

    Hi Simar
    Thanks for the advice, I've now got the VM back in a stable state and running HA.
    Just to finish off the thread for future I did the following
    - Shut down the VM
    - Remove the VM from the Failover Cluster Manager (as you say this did leave the VM configuration intact)
    - I was unable to import the VM as per your instructions so I copied the VHD to another folder on the local storage and took a note of the VM configuration.
    - Deleted the VM from VMM so this removed all the configuration details/old VHD.
    - Built a new VM using the details I saved from the point above
    - Copied the VHD into the new VMs folder and attached it to the VM.
    - Started it up and reconfigured networking
    - Used VMM to make the VM HA.
    I believe I have found the reason for the initial error: it appears there was an empty folder in the Snapshot folder, probably from an old checkpoint that hadn't been cleaned up properly when it was deleted.
    The system is up and running now so thanks again for the advice.
    Rob

  • Retrieve variable value from local Storage and display on canvas

    Hi
    I'm working on a project that has multiple HTML files (the project is split into 12 parts, so 12 different Edge projects, and I'm linking them via window.open()). I have a variable that keeps track of correct answers, stored in the HTML localStorage object. I have managed to get the localStorage variable to increment by one each time the answer is correct; my last step is to get the variable and display the result on the canvas. I have tried
    the var outPut = localStorage.getItem(' ') method to retrieve the variable, then used the set and get methods to display the result, but it doesn't work. I'm not sure if I need a for loop to go through localStorage and get the elements.
    Code:
    // insert code to be run when the composition is fully loaded here
    yepnope({nope: ['jquery-ui-1.10.0.custom.min.js', 'jquery.ui.touch-punch.min.js'], complete: init}); // load the jQuery files
    sym.setVariable("myScore", 0);
    var c = localStorage["myCount"] || 0; // loading from localStorage
    function init(){
      sym.getSymbol("barLimit").$('scrubber').draggable({
        start: function(e){
        },
        drag: function(e, ui){ // Find original position of dragged image
          var leftLimitScrubber = sym.getSymbol('barLimit').$('scrubber').position().left; // position of the scrubber
          var rightLimitScrubber = sym.getSymbol('barLimit').$('leftLimit').position().left;
          var LimitTwoLeft = sym.getSymbol('barLimit').$('rightLimit').position().left;
          if(leftLimitScrubber == rightLimitScrubber){
            sym.getSymbol('correctBar1').play('in');
            sym.getSymbol('nextButton').play('in');
            sym.getSymbol('incorrectBar1').play('out');
            sym.getSymbol('thumbsDown1').play('out');
            sym.getSymbol('thumbsUp1').play('in');
            sym.getSymbol('congrats').play('in');
            localStorage["myCount"] = parseInt(c) + 1; // converting string to number, and then saving it
            console.log("numberOfCorrectAnswers", localStorage["myCount"]);
            var finalScore = sym.getVariable("myScore");
            finalScore = c;
            sym.setVariable("myScore", finalScore);
            sym.$("Score").html(finalScore);
          } else if(leftLimitScrubber == LimitTwoLeft){
            sym.getSymbol('incorrectBar1').play('in');
            sym.getSymbol('correctBar1').play('out');
            sym.getSymbol('thumbsUp1').play('out');
            sym.getSymbol('thumbsDown1').play('in');
          }
        },
        axis: "x",
        containment: "parent"
      });
      //for (var i = 0; i < localStorage.length; i++){ // iterate through the local storage
      //  var getItem = localStorage.getItem(localStorage.key(i));
      //  if(getItem == 'numberOfCorrectAnswers' ){
    }
    The above is the code for the 12th project; in this project it needs to read the variable from the localStorage object and display it on the canvas.
    Any help will mean a lot. Thank you in advance.
    P.S. Edge Animate has a lot of bugs and is hard to code in.

    What you need to do is create a text box and set a default value of zero. Once that is done, you need code on the stage which grabs the value from the localStorage object. I used the .text() jQuery method to display the value on the canvas, so the zero will be replaced with whatever the localStorage value is.
    You also need an if statement to check whether the localStorage value is undefined; if it's not, grab the value and display it on the canvas.
    e.g.
    var number = localStorage['finalValue']; // for the sake of completeness I had to put this line of code
    if( number != undefined){ // if not undefined, the value exists, so ...
         sym.$(' (text identifier) ').text(number); // note: text identifier is the name of the text box you create in Edge
    } // Done

  • Moving Photos from iCloud Photo Library to Local Storage

    Scenario - I've a fully migrated library of photos/videos using iCloud Photo Library on iPhone and Mac.  It's near the limit of the iCloud storage plan I purchased and want to retain.  I'd like to move older and less frequently used content from iCloud Photo Library to more permanent archival storage.
    [This is for two reasons.  First, I prefer to use the Full Resolution setting on mobile devices, and that will be impossible as the entire library grows beyond the storage capacity of even the largest mobile devices.  Second, I don't feel the need to pay for super-sized iCloud storage for content rarely needed and only needed on a Mac.]
    The only option I've identified in Photos to do this is to Export (and delete from iCloud), which exports the original photos, but does not preserve useful Photos metadata and organization, such as Albums.
    What one might like to see is a way to designate selected portions of the library for local storage only (including backup within the Photos app library package), so those photos can be manipulated within Photos alongside iCloud content but don't consume iCloud or mobile device space. Or, in the alternative, one would like to see a way to export content and merge it into a separate Photos library package, preserving the metadata/organization. That way, one could maintain one Photos library as current iCloud-synced content, and one or more local-only Photos library packages with archival content (with, importantly, an Export function able to move content between the two while preserving metadata).
    Does anyone know if there's a way to do this?  If not, Apple, would you consider coming up with a way to address this need?

    Nissin101 wrote @ 3:36pm EMT:
    "Well I was able to move photos from the camera roll to the photo library by sending *the pictures via email to my dad's BlackBerry, then saving them to my computer from his phone, then putting them back into the photo library*."
    This is what I said originally.
    Nissin101 wrote @ 4:08pm EMT:
    "Alright, I guess that answers my question then. However, just as I said, I was able to transfer photos from my camera roll to my photo library, so at least that is possible."
    I never said that I did it directly, nor did I mean to imply that I was looking for a direct solution. This, I guess, is where our misunderstanding comes from. I just did not feel like repeating the whole process I went through. Regardless, I would rather this thread not derail into who said what and who misunderstood whom. I now know that it is not possible to get pictures from the photo library to the camera roll in any way, so my question is answered, for now at least.

  • Adobe Flash FAIL:  Adobe Flash Player local storage settings incorrect.  Module 'Resume' feature may not work on this computer.

    Using a Windows 2012 RDS environment, we have users connecting to a CPD website, and as part of the CPD they need to run a systems checker. When they run the systems checker they get the following error message: "Adobe Flash FAIL: Adobe Flash Player local storage settings incorrect. Module 'Resume' feature may not work on this computer." All users are connecting to this environment with Windows CE clients. I have checked the settings in Adobe Flash and they seem correct, but as each user has their own profile in the RDS session, is there something that I should be setting for each user? I have added the website to the trusted sites and it has made no difference. Any ideas?

    It sounds like what's happening is that Flash Player can't write or read from the local shared objects in the user's redirected home directory because we disallow traversing junctions in the broker process.  This behavior was disabled to address a vulnerability identified in some of John Forshaw's research into the IE broker last year.
    You can enable this behavior by adding the following setting to mms.cfg:
    EnableInsecureJunctionBehavior=1
    That said, you can probably gather from the name of the flag that we don't really recommend this approach, and disable this attack surface by default.  There's some risk that a network attacker could craft content that abuses fundamental issues with how Windows handles Junctions to write to arbitrary locations.
    Unfortunately, there's not a simple or easy workaround that I'm aware of (but it's been ages since I've administered a Windows domain) for this kind of NAS/SAN-backed terminal server environment where Flash is not able to access \Users\<user>\AppData\Roaming\Macromedia\Flash Player\ without traversing a junction.

  • Hard drive backup - then network backup

    I created a full backup of my MacBook Pro to a connected FireWire drive (150 GB). I then hooked this same hard drive up to another Leopard computer (an iMac). The MacBook Pro sees the drive and is willing to back up to it... but it wants another 150 GB, as if it's starting over from scratch.
    Is it using two different methodologies for backing up? Do you have to stick with one methodology when backing up? Backing up 150 GB over the network is not really an option - hence the reason I wanted to do it as mentioned above. Thank you!

    It seems that on a locally connected drive, Time Machine backs up into a simple folder structure, whereas if you are connected remotely, it backs up into a .sparsebundle file named after the connected computer. For now, it's probably best to stick with one method or the other until someone figures out how to convert between the two.
    I suspect you could convert the local backup and package it in a .sparsebundle, but my network backup is nowhere near finished, so I can't try it yet.
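    If someone does want to experiment with that, here is a rough sketch of creating such an image from Terminal (the size, volume name, and bundle name are placeholders of mine, and whether Time Machine will accept a hand-made bundle, including the naming and filesystem it expects, is not confirmed here):
    # create a 150 GB sparse bundle containing a journaled HFS+ volume
    hdiutil create -size 150g -type SPARSEBUNDLE -fs HFS+J \
        -volname "Time Machine Backups" MyMacBookPro.sparsebundle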

  • Configuring NAS as local storage

    This is not a specific question related directly to Oracle, but I'm hoping somebody can answer or point me in the proper direction. I'm working on setting up a Disaster Recovery system for my Oracle environment. Right now my DR system is such:
    HP Proliant DL 385 (G5): 64-bit running Oracle Enterprise Linux 5 Update 2 and 10.2.0.4.0
    IoMega StorPro Center NAS: Mounted as NFS, holds all database related files (control, redo and .dbf files)
    I have everything working, but the NAS is hooked up to the network, and thus my environment requires network connectivity, which I obviously can't count on during a disaster. Is there any way to configure the NAS as local storage so that when I do not have network connectivity I can still access my files on it?
    The vendor (IoMega) was of very little help. They tell me that I can plug the NAS directly into one of the NIC cards and "discover" the NAS that way. The problem is that the discovery agent does not run on Linux and they could not tell me how to get around this.
    Anybody have some experience hooking up a NAS unit as local storage instead of NFS? I'm trying to put on my SA/Network/Storage hats as best as possible, but I have very little experience trying things like this.

    I'm thinking out loud, so bear with me.
    An NFS mount point provides an important feature in a clustered environment: file system access serialization. Frequently the underlying NAS file system has been formatted with EXT3 or some other non-cluster-aware file system; NFS performs the important locking and serialization to keep this from being corrupted in a cluster. Please keep this in mind when designing a disaster recovery solution.
    What do you mean by "hooked to the network"? Do you mean you are using the public Internet or a corporate network?
    Are they suggesting that you establish a private, direct connection to the NAS?
    Find out how the NAS gets its network address. If it's using DHCP, you will need to set up a local DHCP server and have it listen only on the NIC/NICs where the NAS is plugged in. Be sure the client NICs have addresses on the same network as the NAS unit.
    Bring up networking on the NAS NIC devices.
    The bit about "discovering" the NAS file systems has me puzzled.
    Once you figure that out, mount the NAS file systems somewhere on your system, but NOT IN THEIR PRODUCTION locations.
    Now, set up your local machine as an NFS server. Publish the mount points as NFS exports, and then have your applications use these NFS mount points.
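    To make the direct-connection steps above concrete, here is a rough sketch (the interface name, addresses, and export path are assumptions for illustration, not details from this thread):
    # give the NIC facing the NAS a static address on a private subnet
    ifconfig eth1 192.168.10.1 netmask 255.255.255.0 up
    # assuming the NAS answers at 192.168.10.2 and exports /nfs/oradata,
    # mount it over the direct link instead of the corporate network
    mount -t nfs -o rw,hard,nointr,rsize=32768,wsize=32768 192.168.10.2:/nfs/oradata /u02/oradata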

  • Local Storage in SQL Server 2012 (Fast Track Appliance)

    Hello,
    Before procuring the Microsoft SQL Server 2012 Fast Track Appliance, we require additional information to satisfy the requirements from our infrastructure staff. What type of data is stored in the local storage? It seems that the local storage may be used for computation functionality only (temporary tables, etc.?); therefore, the only system backups that we may need to perform are for the main server and spare server, which also contains all computation nodes (each SQL Server 2012 instance running on virtual machines?). In fact, we probably don't need any backups, since the spare server provides a fail-over mechanism and allows our engineers to "repair" any issues on the problem server? Also, if all of our data is already on a separate system outside of this data warehouse and can simply be exported back to the PDW system, then perhaps we don't require backups? We simply use this data warehouse for analysis of the data (query and reporting). The data collection and maintenance are permanently stored on a separate system.
    One last question, if required - could we remove the local storage and replace it with our enterprise SANs? Likely a performance hit; however, it may still be better than having to integrate separate hardware components to try to replicate the Fast Track appliance and attempt to tune for performance. Our infrastructure policy requires the use of enterprise SANs and a reduction in local storage (possibly from new data center consolidation policies).
    Thank you for helping us to decide quickly and purchase this Fast Track Appliance expeditiously to meet our rapidly approaching deadline. I have been unable to find answers to these questions from Microsoft pre-sales and tech support staff (without a support contract).
    Thank you,
    Vinh
    703-727-5377

    Hello,
    Thank you for the question.
    If the data in the local storage can be extracted from the PDW again, you don't need to back it up.
    As for the storage replacement: technically speaking, you can replace the local storage with different SANs. However, if you do so, you will put the system into an unsupported scenario.
    Please refer http://msdn.microsoft.com/en-us/library/hh918452.aspx for further information.
    If you cannot determine your answer here or on your own, consider opening a support case with us. Visit this link to see the various support options that are available to better meet your needs:
     http://support.microsoft.com/default.aspx?id=fh;en-us;offerprophone
    Regards,

  • Can't preview files from a network drive to a local CF9 server.

    Hi,
    I have the following set up:
    CF9 Local Dev version
    CF Builder 2.
    Project files are in a network drive N:\project
    I have RDS set up and everything seems to work OK - I can view DSNs, files, etc. on my local server.
    In the URL prefix section I even created a map:
    Local Path: N:\project,
    URL prefix: http://localhost:8500/project/
    There is no FTP set up on the CF9 server.
    The problem is that when I try to preview a file that I am working on from N:\project, it doesn't show on my local server. The previewing part works if I manually move the file to CF9, but it is the automatic uploading (or moving) of the file to the local CF9 server that doesn't seem to work.
    BTW, I have a similar setup in Dreamweaver, where I am editing the same project (or site in this case) off of N:\project, uploading to the same local CF9 server through RDS, and it works fine.
    I know that if in CF Builder you move the project under the web root it will work, but that would not work for us, since we need to keep the project source files on a network drive for sharing/backup purposes.
    Has anyone been able to successfully preview source files from a network drive on a local CF9 server?
    Thanks in advance,

    Hi again,
    After doing some more googling I realize that for files to be previewed they MUST be under wwwroot. This wasn't too clear in the CF documentation.
    So my new question is:
    Is there a way, once I save a file, to automatically copy it from one location (N:\project) to another (C:\ColdFusion9\wwwroot\project)?
    I think there is a way to do it with Ant or some sort of Eclipse feature, but I am not too familiar with either. Could someone point me in the right direction?
    Thanks,
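
    For what it's worth, here is a minimal Ant sketch of that kind of copy (the paths come from this thread; the project and target names are made up, and it would still need to be triggered on save, e.g. via an Eclipse builder):
    <project name="sync-to-wwwroot" default="deploy">
        <target name="deploy">
            <!-- copy everything from the network share into the local CF9 webroot -->
            <copy todir="C:/ColdFusion9/wwwroot/project">
                <fileset dir="N:/project"/>
            </copy>
        </target>
    </project>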

  • Live migration of VM from CSV storage to local storage

    I have a VM I want to migrate from our CSV storage onto local storage. However, I can't find a way to complete this.
    Live migration won't give me the local disks as an acceptable destination.
    During this live migration, even if I say that I don't want it to be highly available, the next part of the wizard doesn't give me any options whatsoever to move it anywhere.
    I've tried moving it via the Hyper-V manager - which then directs me to the Failover Cluster Manager. Even there I can't give it a suitable destination on the local disk to put the machine.
    What am I missing in this?
    Is there a way to do this without manually copying the VHDs to that location and creating a new VM?

    As an additional option, you can use a one-time backup/restore procedure, using whatever free backup tool you want.
    We have done something similar with VeeamZIP, which in many ways resembles a simple zip utility for VMs.
    We backed up the given VM first and then, during the restore procedure, selected the local disks as the datastore the VM should be restored to; it worked like a charm.
    Kind regards, Leonardo.
