Rsync or other backup solution - single source, multiple targets?

Seems from http://www.linuxquestions.org/questions … ly-774225/ that I can't do what I want, but I thought I'd ask as well.
My laptop hard disc is going to be backed up to two external hard discs (regular script). What I do right now is run two separate rsync commands, but of course this means that the data (all 300+ GB of it, currently) needs to be read twice. Seems it'd be much more efficient if the data were read once and then written to both external hard discs.
It doesn't seem from the above link and other Google results that this is possible. Can anyone suggest an alternative, short of scripting a file-by-file cache?
Last edited by ngoonee (2013-10-31 01:33:22)

cfr wrote:
Doesn't this partially undermine the point of doing two backups? I realise one is off site and that they are on separate devices which rules out some sources of corruption/loss but if you run the backup as you wish and something bad happens to the source during the backup, you are likely to end up with three corrupted copies rather than only two.
I don't manage to do this as I only have one complete backup but I thought that best practice involved not touching one backup while the other backup was being created (or restored or...). That way, you always have one "known good" backup whatever happens.
EDIT: not "whatever happens" but "whichever of any of a larger number of possible catastrophes occurs". Obviously, the third copy could be on a device that dies or explodes or gets drowned by a peeved goldfish at just that moment when mice eat your source during a backup, thus corrupting your other two copies. But you can only plan so far...
This isn't my complete backup solution. I have two hard discs in one location (what I'm asking about here) and another one at a different location. The reason I have two is simply because I have a spare which doesn't have any other use, and it may as well be put to use in this way.
Besides, if something happens to the source, using --backup-dir means I'll still have the last known good copy anyway (not taking into account the backup which is at the different location).
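For anyone curious, the "file-by-file" script the original post hopes to avoid is not much code. A minimal Python sketch of the read-once, write-twice idea (hypothetical paths; it handles none of rsync's niceties such as deletions, metadata, or incremental transfer):

```python
import os

def backup_to_two(source, target_a, target_b):
    """Walk `source` once and copy every file to both targets,
    reading each file a single time but writing it twice."""
    for dirpath, _dirs, filenames in os.walk(source):
        rel = os.path.relpath(dirpath, source)
        for dest in (target_a, target_b):
            os.makedirs(os.path.join(dest, rel), exist_ok=True)
        for name in filenames:
            with open(os.path.join(dirpath, name), "rb") as src, \
                 open(os.path.join(target_a, rel, name), "wb") as out_a, \
                 open(os.path.join(target_b, rel, name), "wb") as out_b:
                while chunk := src.read(1 << 20):  # 1 MiB per read
                    out_a.write(chunk)             # one read from the laptop disc...
                    out_b.write(chunk)             # ...two writes, one per external disc

# e.g. backup_to_two("/home/user", "/mnt/backup1", "/mnt/backup2")
```

Note that cfr's caveat above still applies: both external discs are being written at once, so a mishap mid-run touches both copies.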

Similar Messages

  • Single source Multiple Target Messages

    Hi All,
    I have one Source DataType and 3 TargetDataTypes
    I created the Mapping and Interface Mapping successfully, but when I try to activate the objects, the following error appears. Please help me get this running.
    Error is
    Activation of the change list canceled Check result for Interface Mapping Micro_IM | http://MicrosoftDifferentMapping.com:  A multi-mapping with multiple source or target interface instances is only recommended with asynchronous interfaces, since only a mapping of this type can be used in the process editor. Message Interface Micro_OB_MI | http://MicrosoftDifferentMapping.com are not asynchronous abstract interfaces  A multi-mapping with multiple source or target interface instances is only recommended with asynchronous interfaces, since only a mapping of this type can be used in the process editor. Message Interface Micro_IB1_MI | http://MicrosoftDifferentMapping.com, Message Interface Micro_IB2_MI | http://MicrosoftDifferentMapping.com, Message Interface Micro_IB3_MI | http://MicrosoftDi
    Thanks & Regards,
    Ashok.

    Hi,
    What SP are you using? If it is XI 3.0 with an SP lower than 14, then it is not possible; you need to use abstract interfaces.
    And why do you need multi-mapping in the first place? You can do a simple mapping, and in the receiver determination you can add more Business Services/Systems. For each Service/System you will have a separate inbound interface and a separate interface mapping.
    Hope this solves your problem.
    Thanks,
    Prakash

  • Why use SQL Server's native backup facilities instead of another backup solution?

    I've been asked in my company: why use SQL Server's native backup facilities? They currently rely on other backup software, like Backup Exec, BrightStor, or even Microsoft System Center Data Protection Manager. Those solutions let the
    company manage all of its backups, including SQL Server, in a single place, whereas SQL Server's native backup abilities only apply to SQL Server.
    So what more does SQL Server's native backup facility offer that would force us to use it?

    Satish and Pawan, thanks, but a backup solution is already there; they don't need to pay anything. Besides, a backup solution that backs up everything on the server (files, software, etc.) is needed regardless of whether you have SQL Server databases, and if it also does database backups, it is even more complete as a backup solution. So, sorry, your answer is not reason enough to force me to leave that complete backup solution and use the SQL Server backup tools specifically to back up databases.
    Olaf, thanks as well. I was just listing a number of solutions that I think are related to backups; I believe Symantec Backup Exec does back up SQL Server databases, doesn't it? What I understood from the link you gave me is that some backup applications (if not all) use SQL Server's backup facilities to do database backups. Is that correct? If yes, then the question becomes: is there any situation or reason that forces me to use the SQL Server backup tools even though I have those backup solutions (some of which use SQL Server's backup facilities in the background)? Does the SQL Server backup tool give me more capabilities for backing up databases than what I find in those backup solutions?
    The answer is NO; as of now you get all these features in third-party backups as well.
    So, in a nutshell, Microsoft never forces you to use SQL Server's native backup. The only reason you get native backup features is that SQL Server is an enterprise solution, so MS provides all features built into the bundle so that you don't have to purchase any other license (in case you or your company doesn't have one already).
    Sarabpreet Singh Anand
    SQL Server MVP Blog ,
    Personal website

  • Easy Server 2012 backup solutions with tape support?

    We have several smaller to midsized customers that have invested money in tape drives and tapes. They are using Symantec BackupExec on Server 2003 / 2008 and 2008 R2. Unfortunately, Symantec has postponed Server 2012 support in their products many times
    already. This has cost us many business opportunities to upgrade infrastructures to Server 2012.
    DPM 2012 is good for larger customers, but less suited for smaller companies (and the DPM "philosophy change" of the backup procedures, compared to older tape based backup solutions is an obstacle as well.)
    So we are looking for EASY backup solutions that support tape drives and Server 2012. The "perfect" product for us would be just a tape driver and an extension to the integrated Windows Server Backup that would allow backups to be stored on tapes,
    the same way that Windows Backup allows backups to network shared folders.
    Does anyone know of EASY backup software that provides this functionality? I'm also very interested in feedback from people who have successfully used such products on Server 2012 and are satisfied with them.
    Thank you all in advance for any help.
    Franz

    Looks like we are in the same boat here with 2012.
    For all its trouble, BackupExec has been instrumental for us in providing our customers with a backup application that no other software vendor has been able to match. This includes:
    1) Granular backup and restore of Exchange allowing the restore of individual mailboxes and single mail items
    2) The same Granular backup and restore of Hyper-V's VMs
    3) The ability to backup to Tape, Removable USB HDD and NAS and Network shares
    Not even Shadow Protect cuts the mustard anymore – it’s become simply “stupid”.
    Although we love the simplicity and robustness of Windows Backup, there are some very serious deficiencies, including:
    1) The extreme difficulty of adding additional USB HDDs after the backup schedule has been created. I believe this has partially been resolved in R2, but in 2012 it was an utter mess-up. We
    resorted to hooking up two 5-port powered USB hubs so we could make sure that all USB HDDs could be included in the schedule - what a mission! One blown hub after the other - we couldn't find one that could handle the electrical current load of 5 USB HDDs!
    2) No granular restore ability at all... we had to resort to the "old school" ESEUTIL after restoring the entire Exchange database to an alternate location.
    3) Backup to a network share is pathetic - it can't store more than the current backup. What use is that? We had to resort to backing up to a network share and then including that backup in another Windows backup - go figure!
    Like Franz, our customers have invested HEAVILY in Tape Drive hardware (some of which cost the customer in excess of $8000) and that excludes the cost of Symantec BackupExec.
    Although BackupExec 2012 now supports Server 2012, the re-purchase of the software is not a consideration to most of our customers.
    Sorry Franz, I don't have a solution for tape drives yet. This is one reason why we have stayed away from deploying 2012 for customers that already have a recent investment in tape drive technology.
    However, because there is NO other backup solution that compares to Symantec BackupExec (version <2010), we have been successfully implementing a hybrid solution (which, unfortunately, still does not include Tape Drives) including our own business.
    As BackupExec 2010 does not support Windows 2012 as a “Media Server” there IS a workaround in a “virtual environment”.
    We have successfully installed BackupExec 2010 on a Hyper-V Virtual Server as the “media server” and targeted the backups to shared folders on both NAS and removable USB HDD shares (off the Host Server).
    Although it works GREAT, there is still the serious deficiency of not being able to use tape drives (unless some really bright spark out there has an idea on how to make a tape drive accessible to a VM).
    Another REALLY good solution, which we deployed just last week, was to supply the customer with a dirt-cheap PC, install Windows 2008 and the tape drive on it, and schedule all the normal (and yummy) BackupExec 2010 granular backups to it. Remember, the BackupExec
    agent WILL install and run just fine on Server 2012 - no problems there.
    This has become an INSTANT hit and this is OUR solution - a very small price to pay for "real granular" backups to a tape drive.
    I hope you find the mentioned "good solution" a viable and cost-effective workaround.
    Backup Exec DOES NOT support Windows 2012. I just got off the phone with them today, April 24, 2014. I have been using BE since Seagate made version 7. Every product Symantec buys eventually turns to crap! I just left Symantec Endpoint
    Protection after 13 years because it is unreliable and has very poor detection compared to other products. In fact, it is so bad that Symantec won't participate in any third-party evaluations that they don't pay for, such as AV-Comparatives.org.
    Don't let the door hit you on the way out, Symantec!

  • File Server Migration Source and Target Data Validation

    Does anyone know of a PowerShell script, CLI command, or some other way to verify source and target data after a file server migration? I want to make sure that the shares migrated from the source to the target are an exact match. Thank you.

    Hi,
    An example is provided in this article:
    http://blogs.technet.com/b/heyscriptingguy/archive/2011/10/08/easily-compare-two-folders-by-using-powershell.aspx
    # List both trees recursively, then diff the two listings
    $fso = Get-ChildItem -Recurse -Path C:\fso
    $fsoBU = Get-ChildItem -Recurse -Path C:\fso_BackUp
    Compare-Object -ReferenceObject $fso -DifferenceObject $fsoBU
    And actually Robocopy can also do this job with the /L (list only) and /log:file parameters.
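    Since Compare-Object on Get-ChildItem output only diffs the listings by name, a checksum pass catches silently differing contents as well. A hedged Python sketch (hypothetical paths; assumes both trees are locally mountable) that compares two trees by relative path and SHA-256:

```python
import hashlib
from pathlib import Path

def tree_digest(root):
    """Map each file's path relative to root -> SHA-256 of its contents."""
    root = Path(root)
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(root.rglob("*")) if p.is_file()
    }

def compare_trees(source, target):
    """Return (missing_on_target, extra_on_target, content_mismatches)."""
    src, dst = tree_digest(source), tree_digest(target)
    missing = sorted(set(src) - set(dst))
    extra = sorted(set(dst) - set(src))
    changed = sorted(k for k in set(src) & set(dst) if src[k] != dst[k])
    return missing, extra, changed
```

    `missing` and `extra` report files present on only one side; `changed` reports same-named files whose bytes differ.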

  • Mapping is inserting multiple records from a single source to Dimension.

    Hi All,
    I am very new to OWB. Please help me out. I created a dimension with the help of the wizard, and then a mapping which consists of a single source and a single dimension. The mapping is populating nearly 500 times the actual number of records. Here are some details to give you a better understanding of the mapping:
    I created a dimension with four levels and two hierarchies. The levels are L1, L2, L3 and L4, and the hierarchies are H1 -> L1, L2, L4 and H2 -> L3, L4. L4 is the lowest level of each hierarchy; L1 and L3 are the parent levels in their respective hierarchies. I assigned an attribute of each level as the business identifier, which means the business identifier attribute is different in each level. In the mapping I mapped the parent natural key (the key for the parent level in a hierarchy) to the value mapped for the parent level. The result is nearly 500 times the records that exist in the source table. I've even tried a single common business identifier for each level, but then the result is still 5 times the records. Please let me know the solution.
    Thanks in advance.
    Amit

    Hi ,
    You may not actually be getting duplicate records in your dimension.
    To understand the record insertion better, try a snowflake version of the dimension and see how the records are inserted per level in the respective tables.
    Thanks

  • Multiple single source layouts

    It seems RoboHelp cannot remember multiple build expressions if
    you have more than 5 single source layouts. So each time you
    generate a set of HTML files you need to redo the build expression...
    so you can also assume the batch feature does not
    work either - Tech Support had no comment, couldn't figure anything
    out... again.
    Anyone know how to fix this?
    Thanks,
    Steve

    Thanks Rick.
    I have been down that road about 5 times, until I used up all of
    my support tickets. Tech support took all of my files, replicated the setup,
    tried everything over and over, and could not offer a solution.
    I run about 40 single source layouts and about 50 build tags. I was
    told I should just build each on its own every time to be sure... so
    you cannot trust any batch processing, which was one of the main
    drivers for purchasing the system.
    I probably wouldn't mind as much if there had been some form of
    upgrade or fix in the last 2 years... guess RoboHelp gets the back
    seat for development as the company is focused on other things.
    Steve

  • Can a single source transmit to multiple groups in multicast?

    Can a single source transmit to multiple groups in a multicast domain?

    I am not sure how to start explaining this, but I will give it a try to the best of my understanding.
    1)
    Look up the MAC Learning section in this link.
    http://www.ciscopress.com/articles/article.asp?p=101367&rl=1
    2)
    Once you have read through it, your question becomes: if switches deliver traffic based on the MAC addresses of the end hosts, how is this mapping achieved for multicast addresses, which are not assigned to any host?
    In this case the switch would consider the multicast packet unknown and would broadcast it. To avoid that, a simple mechanism of multicast
    IP-to-MAC conversion was introduced, which creates these temporary MACs based on the membership reports of end hosts
    for a certain group. Once this conversion is done, apart from the real host MAC address pointing to the port where the host is connected,
    the multicast MAC also points to the same port. Based on this, the switch is able to forward the traffic intelligently to just the set of hosts
    that want to receive that group.
    3)
    Now see this link, which explains how the multicast IP-to-MAC conversion is done.
    IP to MAC Conversion
    http://www.cisco.com/univercd/cc/td/doc/cisintwk/ito_doc/ipmulti.htm#wp1020628
    To summarize: only the low 23 bits of the IP address are copied into the MAC, so the top 9 bits are dropped. Four of those bits are the fixed 1110 multicast prefix (the 224-239 range), so effectively 5 variable bits are lost, which means 2^5 = 32 IP addresses share each MAC.
    In decimal terms, losing the high bit of the second octet means addresses 128 apart collide: if the received membership report was for 224.1.1.1, then 224.129.1.1, 225.1.1.1, 225.129.1.1 and so on, up through 239, all map to the same MAC.
    Looping through the whole 224-239 range this way gives the 32 mapped addresses.
    Now to conclude,
    "32 IP multicast addresses corresponding to each MAC " In here the MAC referenced is not the host mac but the converted MAC.
    Even though the conversion is done, because of the available bits the MAC address hence created overlaps 32 multicast IP
    addresses. (closest resemblence would be the ACL example with the mask assigned. based on this mask and the starting value
    it will keep catching values based on the mask. )
    You can try this practically. Assume you have a host and it sends a membership report for 225.10.10.10. Now the switch has to convert this to Mcast MAC.
    To do this yourself, write down the binary of this address, and look at the rightmost 23 bits (that is from the left ignore the first 9 binary values.) now if you look at the remaining binary values, these will map and look alike for 226.10.10.10 226.139.10.10 and so on. And hence if you run through from 224-239 you will get 32 adresses.
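    The 23-bit conversion described above can be sketched in a few lines of Python (01:00:5e is the registered multicast MAC prefix):

```python
def multicast_mac(ip):
    """Map a multicast IPv4 address to its Ethernet MAC:
    the fixed 01:00:5e prefix plus the low 23 bits of the IP."""
    o = [int(x) for x in ip.split(".")]
    return "01:00:5e:%02x:%02x:%02x" % (o[1] & 0x7F, o[2], o[3])

# Several different group addresses collapse onto one MAC:
print(multicast_mac("224.1.1.1"))    # 01:00:5e:01:01:01
print(multicast_mac("224.129.1.1"))  # same MAC: high bit of 2nd octet is lost
print(multicast_mac("239.129.1.1"))  # same MAC: top 4 bits are the 1110 prefix
```

    All 32 addresses of the form {224..239}.{1,129}.1.1 land on that single MAC, which is why the receiving host must still check the full destination IP.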
    HTH-Cheers,
    Swaroop

  • Multiple Destination Workbooks linked to a single source workbook - data disappearing

    We have a sales reporting system set up with ~30 Destination Workbooks that pull all of their data directly from
    a single Source Workbook - using vlookup formulas (with full path name references to the Source Workbook).
    When we open two or more of these Destination Workbooks at a time without opening the Source Workbook – or enabling data connections or links – all of the vlookup data disappears from all but the most recently opened Destination Workbooks.
    The only work-around we have found for this is to open the Destination Workbooks in separate instances of Excel. This method is not practical for our analytical needs.
    Is there any different way to have data visible in more than one Destination Workbook in a single instance of Excel that are vlookup-ing from a single Source besides re-structuring the entire setup?

    Hi,
    I tested with a simple source workbook closed on my side, and I was able to perform VLOOKUPs from more than 3 destination workbooks without any issues.
    To better assist you with this, I will need some more information:
     What is your Excel version - 2013, 2010, or other?
     How are your VLOOKUPs used? Do you mind sharing a sample through mail to: [email protected]
     Did this work before? If so, what has changed?
     How do users get access to that source file?
     Are there firewall settings on the share location of the source file?
    Please update me with the above information so I can check further. Thank you.

  • Single source to create multiple target nodes

    Hi Guys,
    I need to create as many target nodes as there are occurrences of the source node. How should I achieve this?
    Source node (1...999999) to Target node (1..1)
    Please suggest.
    Regards
    Swapnil

    Hi Nutan,
    Sorry formatting got messed up so posting again.
    Sorry for the confusion. The target structure is 0..unbounded.
    Source structure -> Target structure
    Message 1 -> Message 1
    ZHRMD_A07 (1..1) -> MT_EMPLOYEE (0..unbounded)
    E1PLOG1 (1..unbounded) -> Field1
    I need to create MT_EMPLOYEE multiple times depending on the occurrences of E1PLOG1.
    Regards
    Swapnil
    Edited by: Swapnil Bhalerao on Mar 3, 2010 12:41 PM
    Edited by: Swapnil Bhalerao on Mar 3, 2010 12:47 PM

  • Definitive Storage and Backup solution

    Hello, I'm looking for a definitive Storage and Backup solution.
    So far I've been looking on to Drobo 5D or N, LaCie 5big Thunderbolt™ 2, or LaCie 2big Thunderbolt™ 2.
    Networking would be a plus but not a must. I'm open to other suggestions, and I also wonder whether these systems can be considered backups, since they can survive single or double disk failures.
    Thanks.

    Methodology to protect your data. Backups vs. Archives. Long-term data protection
    Avoid Lacie, they contain Seagate drives inside.  Bad idea. 
    huge storage, low cost, high quality, very small and portable.
    BEST FOR THE COST, Toshiba "tiny giant" 15mm thick  2TB drive (have several of them, lots of storage in tiny package)    $100
    http://www.amazon.com/Toshiba-Canvio-Connect-Portable-HDTC720XK3C1/dp/B00CGUMS48/ref=sr_1_3?ie=UTF8&qid=1390020791&sr=8-3&keywords=toshiba+2tb
    best options for the price, and high quality HD:
    Quality 1TB drives are $50 per TB on 3.5" or  $65 per TB on 2.5"
    Perfect 1TB for $68
    http://www.amazon.com/Toshiba-Canvio-Portable-Hard-Drive/dp/B005J7YA3W/ref=sr_1_1?ie=UTF8&qid=1379452568&sr=8-1&keywords=1tb+toshiba
    Nice 500gig for $50. ultraslim perfect for use with a notebook
    http://www.amazon.com/Toshiba-Canvio-Portable-External-Drive/dp/B009F1CXI2/ref=sr_1_1?s=electronics&ie=UTF8&qid=1377642728&sr=1-1&keywords=toshiba+slim+500gb
    *This one is the BEST portable  external HD available that money can buy:
    HGST Touro Mobile 1TB USB 3.0 External Hard Drive $88
    http://www.amazon.com/HGST-Mobile-Portable-External-0S03559/dp/B009GE6JI8/ref=sr_1_1?ie=UTF8&qid=1383238934&sr=8-1&keywords=HGST+Touro+Mobile+Pro+1TB+USB+3.0+7200+RPM
    Most storage experts agree on the Hitachi 2.5"
    Hitachi is the winner in hard drive reliability survey:
    Hitachi manufactures the safest and most reliable hard drives, according to the Storelab study. Of the hundreds of Hitachi hard drives received, not a single one had failed due to manufacturing or design errors. Adding the highest average lifespans and the best relationship between failures and market share, Hitachi can be regarded as the winner.
    Data Storage Platforms; their Drawbacks & Advantages
    #1. Time Machine / Time Capsule
    Drawbacks:
    1. Time Machine is not bootable, if your internal drive fails, you cannot access files or boot from TM directly from the dead computer.
    OS X Lion, Mountain Lion, and Mavericks include OS X Recovery. This feature includes all of the tools you need to reinstall OS X, repair your disk, and even restore from a Time Machine backup.
    "you can't boot directly from your Time Machine backups"
    2. Time machine is controlled by complex software, and while you can delve into the TM backup database for specific file(s) extraction, this is not ideal or desirable.
    3. Time machine can and does have the potential for many error codes in which data corruption can occur and your important backup files may not be saved correctly, at all, or even damaged. This extra link of failure in placing software between your data and its recovery is a point of risk and failure. A HD clone is not subject to these errors.
    4. Time Machine mirrors your internal HD, so in cases of data corruption the corruption can immediately spread to the backup, as the two are linked. TM is perpetually (or often) connected to your computer, and since TM (usually) lacks isolation, migrating errors or corruption is either automatic or extremely easy to do unwittingly.
    5. Time Machine does not keep endless copies of changed or deleted data, and you are often not notified when it deletes them; likewise you may accidently delete files off your computer and this accident is mirrored on TM.
    6. Restoring from TM is quite time intensive.
    7. TM is a backup and not a data archive, and therefore by definition a low-level security of vital/important data.
    8. TM's working premise is a "black box" backup of OS, apps, settings, and vital data that nearly 100% of users never verify until an emergency hits or their computer's internal SSD or HD is corrupt or dead, and this is an extremely bad working premise for vital data.
    9. Given that the data we create and store grows exponentially, the fact that TM operates as a "store-it-all" backup nexus makes TM inherently incapable of easily backing up massive amounts of data, nor is doing so a good idea.
    10. TM working premise is a backup of a users system and active working data, and NOT massive amounts of static data, yet most users never take this into consideration, making TM a high-risk locus of data “bloat”.
    11. In the case of Time Capsule, wifi data storage is a less than ideal premise given possible wireless data corruption.
    12. TM like all HD-based data is subject to ferromagnetic and mechanical failure.
    13. *Level-1 security of your vital data.
    Advantages:
    1. TM is very easy to use either in automatic mode or in 1-click backups.
    2. TM is a perfect novice level simplex backup single-layer security save against internal HD failure or corruption.
    3. TM can easily provide seamless, no-gap coverage of active data that HD clones or HD archives often cannot (at least when the user is lax about making data saves).
    #2. HD archives
    Drawbacks:
    1. Like all HD-based data is subject to ferromagnetic and mechanical failure.
    2. Unless the user ritually copies working active data to HD external archives, then there is a time-gap of potential missing data; as such users must be proactive in archiving data that is being worked on or recently saved or created.
    Advantages:
    1. Fills the gap left in a week or 2-week-old HD clone, as an example.
    2. Simplex no-software data storage that is isolated and autonomous from the computer (in most cases).
    3. HD archives are the best idealized storage source for storing huge and multi-terabytes of data.
    4. Best-idealized 1st platform redundancy for data protection.
    5. *Perfect primary tier and level-2 security of your vital data.
    #3. HD clones (see below for full advantages / drawbacks)
    Drawbacks:
    1. HD clones can be incrementally updated to hourly or daily, however this is time consuming and HD clones are, often, a week or more old, in which case data between today and the most fresh HD clone can and would be lost (however this gap is filled by use of HD archives listed above or by a TM backup).
    2. Like all HD-based data is subject to ferromagnetic and mechanical failure.
    Advantages:
    1. HD clones are the best, quickest way to get back to 100% full operation in mere seconds.
    2. Once a HD clone is created, the creation software (Carbon Copy Cloner or SuperDuper) is no longer needed whatsoever, and unlike TM, which requires complex software for its operational transference of data, a HD clone is its own bootable entity.
    3. HD clones are unconnected and isolated from recent corruption.
    4. HD clones allow a “portable copy” of your computer that you can likewise connect to another same Mac and have all your APPS and data at hand, which is extremely useful.
    5. Rather than thinking of a HD clone as a "complementary backup" to TM, as many users do, a HD clone is superior to TM both in the ease of returning to 100% quickly and in its autonomous nature; still, each has its place, and TM can and does fill the gap in, say, a 2-week-old clone. As an analogy, the HD clone itself is the brick wall of protection, whereas TM can be thought of as the mortar that fills any cracks in data on a week-, 2-week-, or 1-month-old HD clone.
    6. Best-idealized 2nd platform redundancy for data protection, and 1st level for system restore of your computer's internal HD (Time Machine being 2nd level for system restore of the computer's internal HD).
    7. *Level-2 security of your vital data.
    HD cloning software options:
    1. SuperDuper HD cloning software APP (free)
    2. Carbon Copy Cloner APP (will copy the recovery partition as well)
    3. Disk utility HD bootable clone.
    #4. Online archives
    Drawbacks:
    1. Subject to server failure; also, due to non-payment of your hosting account, it can be suspended.
    2. Subject, due to lack of security on your part, to being attacked and hacked/erased.
    Advantages:
    1. In case of house fire, etc. your data is safe.
    2. In travels, and propagating files to friends and likewise, a mere link by email is all that is needed and no large media needs to be sent across the net.
    3. Online archives are the perfect and best-idealized 3rd platform redundancy for data protection.
    4. Supremely useful in data isolation from backups and local archives in being online and offsite for long-distance security in isolation.
    5. *Level-1.5 security of your vital data.
    #5. DVD professional archival media
    Drawbacks:
    1. DVD single-layer disks are limited to 4.7Gigabytes of data.
    2. DVD media are, given rough handling, prone to scratches and light-degradation if not stored correctly.
    Advantages:
    1. Archival DVD professional blank media is rated for in excess of 100+ years.
    2. DVD is not subject to mechanical breakdown.
    3. DVD archival media is not subject to ferromagnetic degradation.
    4. DVD archival media correctly sleeved and stored is currently a supreme storage method of archiving vital data.
    5. DVD media is once written and therefore free of data corruption if the write is correct.
    6. DVD media is the perfect ideal for “freezing” and isolating old copies of data for reference in case newer generations of data become corrupted and an older copy is needed to revert to.
    7. Best-idealized 4th platform redundancy for data protection.
    8. *Level-3 (highest) security of your vital data. 
    [*Level-4 data security under development as once-written metallic plates and synthetic sapphire and likewise ultra-long-term data storage]
    #6. Cloud based storage
    Drawbacks:
    1. Cloud storage can only be quasi-possessed.
    2. No genuine true security and privacy of data.
    3. Should never be considered for vital data storage or especially long-term.
    4. *Level-0 security of your vital data. 
    Advantages:
    1. Quick, easy and cheap storage location for simplex files for transfer to keep on hand and yet off the computer.
    2. Easy source for small-file data sharing.
    #7. Network attached storage (NAS) and JBOD storage
    Drawbacks:
    1. Subject to RAID failure and mass data corruption.
    2. Expensive to set up initially.
    3. Can be slower than USB, especially over WiFi.
    4. Mechanically identical to USB HD backup in failure potential, higher failure however due to RAID and proprietary NAS enclosure failure.
    Advantages:
    1. Multiple computer access.
    2. Always on and available.
    3. Often has extensive media and application server functionality.
    4. Massive capacity (also its drawback) with multi-bay NAS, perfect for full system backups on a larger scale.
    5. *Level-2 security of your vital data.
    JBOD (just a bunch of disks / drives) storage
    Identical to NAS in form factor except drives are not networked or in any RAID array, rather best thought of as a single USB feed to multiple independent drives in a single powered large enclosure. Generally meaning a non-RAID architecture.
    Drawbacks:
    1. Subject to HD failure but not RAID failure and mass data corruption.
    Advantages:
    1. Simple multi-drive independent setup for mass data storage.
    2. Very inexpensive dual purpose HD storage / access point.
    3. *Level-2 security of your vital data.
    Bare hard drives and docks. The most reliable and cheapest method of hard drive data storage, archives, and redundancies
    The best method for your data archives and redundancies, which is also the least expensive, the most reliable, and the most compact, is to buy bare (naked) hard drives and at least one USB 3.0 HD dock (roughly $40).
    While your primary Time Machine backup for a MacBook or desktop is best kept on a conventional USB (or FireWire/Thunderbolt) hard drive inside an enclosure, the most important part of your data protection begins after that first Time Machine backup: your secondary (and most important) data storage devices, archives, and their redundancies.
    Bare hard drives and docks also work perfectly as a Time Machine backup for home use, though a docking station is not portable the way a notebook's Time Machine device should be; nor should bare HDs be carried around with a notebook. They should stay at home or in the office.
    Six terabytes of 2.5" HD pictured below in a very compact space.
    Bare hard drives and docks have the lowest cost, the highest reliability, and take up the smallest storage space
    Drawbacks:
    1. Requires care and knowledge in handling naked hard drives (how to avoid shocking a bare HD, and how to hold one properly). Not a genuine drawback.
    Advantages:
    1. By far the least expensive method of mass HD storage on a personal basis; the highest-quality naked HDs can be purchased in bulk very cheaply.
    2. Eliminates the horrible failure point of SATA bridges and interfaces between external drives and the computer.
    3. Per square foot you can store more terabytes of data this way than any other.
    4. The fastest, easiest, most fuss-free method of data storage on hard drives.
    Time Machine is a system backup, not a data backup
    Important data you “don’t dare lose” should not be considered ultimately safe, or ideally stored, on your Time Machine backup (at the very least, not as the sole copy). Hourly and daily fluctuations of your system OS, applications, and software updates are the perfect focus for the simple user's ‘click it and forget it’ backups of the entire system and the files on the Macbook HD.
    Bootable clones are the choice of professionals and others because Time Machine cannot be booted from and requires a working computer to retrieve data from it. Your vital data needs to be ‘frozen’ on some form of media storage: in a clone, on an archived HD containing the important files, or on blank archival DVD media.
    A file backed up only to Time Machine is unsafe: if that file is deleted from the computer by accident or otherwise lost, it will eventually vanish from Time Machine as well, since Time Machine reflects changes on the internal HD/SSD.

  • NON 3rd party offsite server backup solutions....?

    Can anyone recommend a method of an off-site server backup solution that doesn't involve a 3rd party company?
    I'm new to server admin, and the company I work for is sitting on tons of really, really confidential and sensitive data that we ardently protect on-site. But with all the inclement weather we've been getting lately (and horror stories from other local businesses about their server rooms flooding and such), we've been considering options for offsite backups. We don't want an outside party to have any access to our server contents/data/info in any way, shape, or form (as in: our backups sitting on one of their servers; we don't want to go that route).
    Right now we CCC everything every day to an on-site external drive, then CCC the server backups onto another external hard drive, which gets taken off-site to the admin's home and brought back in when a backup is to be made (about once a week for that one).
    Any thoughts?
    We're absolutely not above buying a second server and running it out of somebody's house for this purpose...but the details of how to configure it as a backup server aren't very clear to me.
    Any help is appreciated!
    Thanks!
    Is there a better way to do this?

    ssh/rsync works well for me at home -- local server with a backup copy plus a copy on my brother's server nearly 3000 miles away. Of course, I recommend shipping physical media containing an initial backup to the offsite backup location first to save on bandwidth and time. (Depending on your connection.)
    My script is basically:
    {quote}
    backup() {
        src=$1
        dst=$2
        rsync -abz --delete --delete-excluded "$src" "$dst"
    }
    {
        echo "Backup starts: `/bin/date`"
        echo ""
        backup /Volumes/backup/Backups.backupdb/computername/Latest/ \
            onsite_server:computername
        backup /Volumes/backup/Backups.backupdb/computername/Latest/ \
            offsite_server:computername
        echo ""
        echo "Backup ends: `/bin/date`"
    } > backup.log
    rsync -abqz backup.log offsite_server:backup.log
    ssh offsite_server "~/bin/backup-complete.sh"
    {quote}
    backup-complete.sh just emails me using the contents of backup.log as the email body. I used keys for authentication.
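The poster's backup-complete.sh is not shown; a minimal sketch of the idea (the report format and the use of mail(1) are assumptions, not the actual script) might look like:

```shell
#!/bin/sh
# Hypothetical sketch of backup-complete.sh: turn backup.log into an
# email body. The wording and mail command below are assumptions.

# compose_report LOGFILE — print a mail-ready report with the log as body.
compose_report() {
    logfile=$1
    echo "Backup report from $(hostname)"
    echo ""
    cat "$logfile"
}

# On a host with a working mailer you would pipe it on, e.g.:
#   compose_report "$HOME/backup.log" | mail -s "backup complete" me@example.com
```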

  • How to load multiple target tables simultaneously in single interface?

    I have a requirement to load data into two target tables in a single interface, simultaneously. The reason is to populate the parent-child relationship on the target side as it comes from the source side.
    For example: I have 2 headers and 10 corresponding lines in the source. Now I want to load the 2 headers into T1 and the 10 lines into T2 simultaneously.
    Eg. SOURCE_TABLE
    HeaderId HeaderDesc LineId LineDesc
    1 AAA 10 QQQ
    1 AAA 20 WWW
    2 BBB 30 ZZZ
    2 BBB 10 XXX
    TARGET_TABLES:
    TARGET_HEADER
    HeaderId HeaderDesc
    1 AAA
    2 BBB
    TARGET_LINE
    HeaderId LineId LineDesc
    1 10 QQQ
    1 20 WWW
    2 30 ZZZ
    2 10 XXX
    I would appreciate it if anyone can suggest a solution for this scenario.
    Thanks in advance.
    Giri
    Edited by: user10993896 on Apr 13, 2009 2:56 PM
    Edited by: GiriM on Apr 14, 2009 10:47 AM
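The split being asked for is easy to state outside ODI. As an illustration only (this is not ODI syntax; the file names are made up), here is the same header/line split applied with awk to a flat file shaped like SOURCE_TABLE:

```shell
# Illustration of the requested split (not ODI): given whitespace-separated
# rows of "HeaderId HeaderDesc LineId LineDesc", write distinct headers to
# target_header.txt and all line rows to target_line.txt.
split_source() {
    infile=$1
    # TARGET_HEADER: first occurrence of each HeaderId, keeping input order
    awk '!seen[$1]++ { print $1, $2 }' "$infile" > target_header.txt
    # TARGET_LINE: HeaderId plus the line columns, one row per source row
    awk '{ print $1, $3, $4 }' "$infile" > target_line.txt
}
```

In ODI itself a single source pass cannot write two targets from one interface, hence the function trick in the reply below.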

    Hi Giri,
    Let me try to build an example... If I misunderstood your requirement please, let me know!
    1) Source table Tab_S
    create table Tab_S (cs1 number, cs2 varchar2(10))
    2) Table Parent (P)
    create table Tab_P (cp1 number, cp2 varchar2(10))
    3) Table Child (C)
    create table Tab_C (cc1 number, cc2 varchar2(10), cp1 number)
    4) Function F$_Tab_C (create it in a ODI procedure)
    4.1 - step 1
    Create or Replace
    Function F$_Tab_C (p_cp2 varchar2, p_cc1 number, p_cc2 varchar2, p_cp1 number) return varchar2 as
    begin
    insert into Tab_C (cc1, cc2, cp1)
    values (p_cc1, p_cc2, p_cp1);
    return p_cp2;
    end;
    associate this step with a procedure option like "Create_Function"
    4.2 - step 2
    Drop Function F$_Tab_C
    associate this step with a procedure option like "Drop_Function"
    4.3 - step 3
    Disable the FK constraint between parent and child
    associate this step with a procedure option like "Disable_Constraint"
    4.4 - step 4
    Enable the FK constraint
    associate this step with a procedure option like "Enable_Constraint"
    5) ODI interface:
    Source: Tab_S
    Target: Tab_P
    Mapping:
    cp1 ---> cs1
    cp2 ---> F$_Tab_C(cs2, 123, 'abc', cp1)
    6) ODI Package with all flow:
    6.1 - Drag and drop the procedure and put the options:
    "Create_Function" yes
    "Disable_Constraint" yes
    "Drop_Function" no
    "Enable_Constraint" no
    6.2 - Drag and drop the interface
    6.3 - Drag and drop the procedure (again) and put the options:
    "Create_Function" no
    "Disable_Constraint" no
    "Drop_Function" yes (optional, can let as NO if you wish)
    "Enable_Constraint" yes
    These are the necessary steps.... There may be some syntax errors because I built it all in a text editor and did not compile it in the DB; it is just to show you the general idea.
    Maybe you are a little afraid of disabling the FK, but it is OK because you can guarantee the relationship through the logic (the function).
    The only point is that you must be the only one working on the target tables during the process.
    Does it make sense in your case?

  • Backup Solutions for my MacBook Pro+

    I have a Macbook Pro (2010) w/500gb hard drive. I have one 500gb external hard drive that holds movies and my Aperture library. My current backup solution is a 2nd external hard drive (2TB) that I attach to my laptop while my smaller external hard drive is attached (it is not excluded from the backup).
    I manually attach the laptop and 500gb ehd every week or so to backup to Time Machine
    The problem?  My laptop is approaching its limit and I'm thinking that I need to move my iPhoto library off to an ehd as well.
    I'm wondering if backing up to my Time Machine drive with a laptop and two ehd (each 500 gb) is smart or if I need to backup those hard drives to other ehd separately.
    Also, I plan to buy an iMac for my family in the next couple of months.  Not sure how that should impact my backup configuration.
    I've also heard that in addition to Time Machine I might consider getting an ehd specifically with the purpose of being a bootable drive (via Super Duper)
    With all of these options my head is swimming a bit.
    Would love recommendations for a backup workflow/solution based on my equipment.
    Thoughts?

    Pretty much any drive will work, but for a laptop, you might like to get a portable drive (doesn't require that you connect it to power) with USB3. Thunderbolt drives are also available, but they're more expensive and no faster than USB3 for a single hard drive.
    I personally like the Lacie rugged series: http://store.apple.com/us/product/H9377ZM/A/lacie-1tb-rugged-hard-drive-triple-usb-30-5400-rpm?fnode=5f
    If you shop around you should be able to get it a little cheaper than Apple's prices.
    Matt

  • I want clear instructions on how to use Sync to backup my single Firefox computer

    All I want is for my FF bookmarks etc to be sync'd to your server so when I need to reinstall my system, they are easily reloaded. But how????? Syncing seems obsessed with multiple computers. That's not how most users use it. They want to backup their single computer.
    I have an account, but when I go to Tools/Sync I'm asked to set up Sync or pair a device. I don't want either; I want to see what my backup is like, how up-to-date it is, and check it is all there before I reinstall my system.
    If this is not what sync is for, where is the option to backup my Firefox settings, bookmarks and history please?
    And for God's sake stop going on about pairing a device!!!!!!!!!!!!!!!!!!!!!!!!!
    Oh, and when I click the option to automatically collect info on my setup, a pop-up says Firefox has blocked Mozilla.......

    Alternatively, you can use the [https://addons.mozilla.org/firefox/addon/febe/ FEBE extension] to back up your bookmarks, passwords, and more. Just remember to save that backup somewhere outside of your computer (like on a flash drive or an external hard drive) before you reinstall your system.
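If you'd rather not use Sync or an extension at all, Firefox already keeps dated JSON bookmark backups inside each profile, in a bookmarkbackups folder. A minimal sketch of copying those out before a reinstall (the profile path shown is the Linux default and an assumption; on Windows and macOS the profile root differs):

```shell
# Sketch: copy Firefox's automatic bookmark backups to a safe location
# before a reinstall. The profile root is an assumption (Linux default
# would be "$HOME/.mozilla/firefox"); adjust for your OS.
backup_bookmarks() {
    profile_root=$1
    dest=$2
    mkdir -p "$dest"
    # each profile keeps dated .json/.jsonlz4 files in bookmarkbackups/
    find "$profile_root" -type d -name bookmarkbackups | while read -r dir; do
        cp -a "$dir" "$dest/"
    done
}

# Usage (hypothetical paths):
#   backup_bookmarks "$HOME/.mozilla/firefox" /media/usb/firefox-backup
```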
