Multiple (redundant) ISPs

We are in the process of moving to multiple ISPs. Our public DNS for mail is currently hosted by AT&T; the second ISP will be Verizon.
Our networking team will terminate both ISP links on a device from Radware.
How does one handle a multiple-ISP scenario for the email infrastructure (e.g., MX and A records)?

This all depends on how you're using your ISPs. If you have your own portable IP address space, then multiple ISPs are no different from one. But if you're using non-portable IP addresses assigned by your ISPs, then it gets tricky, and you'd probably be better off getting help from someone who understands your situation.
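With non-portable space, the usual pattern is to publish one mail host per ISP-assigned address and let SMTP's built-in retry logic provide the failover: a sending server that cannot reach the first MX simply tries the next. A minimal sketch of how the records might look, assuming a hypothetical domain example.com and made-up host names (nothing here is from the original post):

    dig +short MX example.com
    # 10 mail1.example.com.    <- its A record holds the AT&T-assigned address
    # 20 mail2.example.com.    <- its A record holds the Verizon-assigned address
    dig +short A mail1.example.com
    dig +short A mail2.example.com

If one ISP link dies, inbound mail still flows over the other MX; outbound mail just needs the Radware device to pick whichever link is up.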

Similar Messages

  • Backup or redundant ISP with FWSM and security contexts...

    Hello guys,
I am in the middle of a design problem. We have two ISPs, and we have an FWSM running multiple contexts. The context that receives all the static translations for my published servers is the one where I want to configure default-gateway tracking (so traffic can fail over to an "outside2" interface if the primary fails) and use the second ISP link for Internet access and static NAT, exactly the way the ASA works.
I am not quite sure this works on the FWSM.
    Thanks a lot!
    emilio

    Hello Emilio,
You cannot configure SLA monitoring on the FWSM at this moment.
Maybe in the future this great feature will be added to this module.
I know the 6500 supports it, so you can try to set it up there; a minimal sketch follows.
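For reference, reliable-static-route tracking on the 6500 looks roughly like this. A sketch only, with illustrative next-hop addresses and interface names; note that the exact keywords (ip sla vs. ip sla monitor) vary by IOS release:

    ip sla 1
     icmp-echo 203.0.113.1 source-interface Vlan100
     frequency 10
    ip sla schedule 1 life forever start-time now
    track 1 ip sla 1 reachability
    ! primary default via ISP1, withdrawn when the probe fails
    ip route 0.0.0.0 0.0.0.0 203.0.113.1 track 1
    ! floating default via ISP2 takes over
    ip route 0.0.0.0 0.0.0.0 198.51.100.1 250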
    Regards,
    Julio

  • Seeking assistance in re-architecting Portal to use multiple redundant dbs

    I want to re-architect my running Oracle Portal 10.1.4 installation to have more than one database providing the data layer. At present, I have a redundant architecture for every tier except for the data tier. I have two WebCache servers for the front-end, and two Oracle Application Server instances for the application tier. My only single point of failure is my Oracle 10g database. I just have one instance of the database.
    I am a developer. In the past I have been told by database administrators that the schema used by Oracle Portal relies on some low-level system values in the database. As I understood it, this reliance made it impossible to have two synchronized database instances that split the load coming from the application tier.
    I have searched OTN and MetaLink for architecture tips concerning a redundant data tier. I have not found anything as of yet. I have also searched the Oracle Application Server enterprise deployment guide. While I see articles about creating multiple Oracle Application Server instances, I do not see articles about creating multiple database installations to provide the data layer to Oracle Application Server.
    I want to perform this re-architecture in order to bring my portal into compliance with my corporate disaster recovery policies.
    Any help would be appreciated.
    thanks,
    Mike

    We have a tentative plan.
I have met with our primary DBA to discuss our options for making our data layer more fault-tolerant. Our DBA is hesitant to use RAC because, based on his knowledge, RAC has multiple nodes but a single storage point. Our goal is for the database not to be our single point of failure, and a single storage point, even with multiple nodes, is still a single point of failure. Instead, our DBA would like to consider an older technology, DataGuard. His idea is to have a single database protected via DataGuard. This is not exactly fault tolerant, as the switchover to the DataGuard standby may take time, but it is closer to fault tolerance than our current architecture. Does this make sense?
As an aside, our DBA also stated that white papers have shown RAC to have lower availability than non-RAC database systems, since any "network hiccup" sets off node swapping. I have not seen these white papers, but he seems to believe that RAC is less performant and therefore more prone to latency and outages.
I personally don't think that either RAC or DataGuard is an optimal solution. I think that designing the Oracle Application Server layer to be able to switch between synchronized databases is the best approach, simply because with that set-up one database server could be destroyed in a disaster and we could still use the other, synchronized database server, which is geographically far from the destroyed one.
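To make the idea concrete, plain Oracle Net configuration can already express client-side failover between two hosts. A minimal tnsnames.ora sketch, assuming hypothetical hosts db1/db2 that are kept synchronized by whatever mechanism the DBAs settle on (the synchronization itself is the hard part, and is not shown here):

    PORTAL_DB =
      (DESCRIPTION =
        (ADDRESS_LIST =
          (LOAD_BALANCE = OFF)
          (FAILOVER = ON)
          (ADDRESS = (PROTOCOL = TCP)(HOST = db1.example.com)(PORT = 1521))
          (ADDRESS = (PROTOCOL = TCP)(HOST = db2.example.com)(PORT = 1521))
        )
        (CONNECT_DATA = (SERVICE_NAME = portal))
      )

With FAILOVER = ON and LOAD_BALANCE = OFF, the application tier connects to db1 and falls back to db2 only when db1 is unreachable.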
    However, as pointed out above, I am a developer, and DBAs have a more realistic idea of our options.
I would appreciate any further assistance from any forum members. I include links below that may be of interest.
    thanks,
    Mike
    Oracle DataGuard:
    http://www.oracle.com/technology/deploy/availability/htdocs/DataGuardOverview.html
    Oracle RAC:
    http://www.oracle.com/technology/products/database/clustering/index.html
    Switching Oracle Portal to a new database:
    http://blogs.techrepublic.com.com/programming-and-development/?p=527
    Changing the metadata repository:
    http://download.oracle.com/docs/cd/B14099_19/core.1012/b13995/chginfra.htm#i1012229

  • Channel Bonding? Oracle RAC with multiple redundant network

    We are planning on setting up on Oracle RAC 10g on Linux.
    We have redundant switches, so I would like to set up the network to have some version of network redundancy.
Under Windows it is called "teaming"; on Linux it is "channel bonding".
I just want to make sure that if I set up the channel-bonded interfaces, they will be supported with RAC. Has anyone done this before?
    -Andy

We are running our RAC on NIC bonding, although we need to do some more testing to be really sure of the bond's functionality.
    Metalink Note:298891.1 would be a good place to start.
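For what it's worth, an active-backup bond on an older RHEL-style system looks roughly like this. A sketch only; the device names, address, and file locations are illustrative and vary by distribution:

    # /etc/modprobe.conf:
    #   alias bond0 bonding
    #   options bond0 mode=1 miimon=100     # mode=1 = active-backup, suits dual switches
    # /etc/sysconfig/network-scripts/ifcfg-bond0:
    #   DEVICE=bond0
    #   IPADDR=10.0.0.10
    #   NETMASK=255.255.255.0
    #   ONBOOT=yes
    #   BOOTPROTO=none
    # /etc/sysconfig/network-scripts/ifcfg-eth0 (and likewise ifcfg-eth1):
    #   DEVICE=eth0
    #   MASTER=bond0
    #   SLAVE=yes
    #   ONBOOT=yes
    #   BOOTPROTO=none
    cat /proc/net/bonding/bond0    # shows which slave is active; check it during a failover test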
    hth,
    -S

  • Windows clients generate multiple (redundant) ACLs

    The Problem
    On our server, we have a single ACL at the top of one folder hierarchy that enables full access to the appropriate groups, and makes these rights inherit down the tree to all children. (The purpose is to override the default behaviour of the standard Unix permissions, which give the document owner full access, but makes it read-only to the group.)
However, on the child documents created down through the folder hierarchy, instead of a single inherited ACL, we are seeing a proliferation of multiple inherited copies of the ACL. I thought this was untidy but paid little attention to it, until we discovered that such long lists of ACLs were causing samba instances to crash when a Windows machine attempted to open the document. This is a bug in samba which we seem to have fixed by upgrading the server to 10.4.9, after we found an article about the samba bug. But if this can crash samba, I suppose it could cause other problems as well.
    The Apparent Culprit
After experimenting, we've found that the proliferating ACL duplicates are created by MS Office applications running on Windows, not by MS Office on the Macs. A Windows machine will assign to a single Word document: two ACLs for the logged-on user, one for reading and one for writing, and two again for the "Everyone" user, both read-only, which is correct but superfluous; it then creates several additional ACLs, up to at least six, for the group name, in various duplicated configurations that add up to the desired result. This is all pointless, since a single ACL for the group name is all that is necessary.
The problem also happens with Excel and PowerPoint. We've not noticed it with Visio or Notepad, so it's probably something peculiar to MS Office. It happens for a document created from Windows only, and for a document that has been edited by both a Windows and a Mac user. It doesn't happen for a Mac-only document.
Has anyone else seen this, and found a way to prevent it? (Apart from not using ACLs, I mean. We do need them for more specific uses elsewhere on the server.)
Our usage set-up
We have a mixture of three G4 Macs running 10.3 and 10.4, and up to nine Windows machines running XP and Vista, all logging into a share on our OS X Server. Everyone works together on projects, so the Samba and AFP servers have the same share, under which hang our project ("Jobs") hierarchy and various other separate hierarchies.
    This is the ACL set at the top of our project folder hierarchy:
sudo chmod +a "ConsPlusAss allow readsecurity,readattr,readextattr,list,search,read,execute,writeextattr,writeattr,delete,delete_child,add_file,add_subdirectory,write,append,file_inherit,directory_inherit" Jobs
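(To see what has actually accumulated on a child item, the ACL list can be dumped from Terminal; the path here is hypothetical:

    ls -le "Jobs/SomeProject/report.doc"

Each numbered line in the output is one ACL entry, so the duplicates show up immediately.)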

I just found one MS Word document on the server that had 50 ACLs attached to it. This has got to start causing problems...
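If a periodic clean-up turns out to be the only workaround, something like the following sketch would strip the accumulated entries and restore the single inherited ACL (test it on a copy of the hierarchy first):

    sudo chmod -R -N Jobs    # -N removes all ACL entries, -R recurses
    # then re-run the chmod +a command shown above to restore the inherited ACL on Jobs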

  • Multiple Transit to ISP design guide

    Hi,
I will be implementing transit to multiple international ISPs soon, and I am now looking for design best practices in terms of BGP routing control.
I hope someone has a URL or good documentation on this matter.
    Thx in advance.

    Hi,
    Please find attached some presentations I have about the topic. Clearly not enough but maybe a good starting point.
    Drop me a mail, I've got some more docs I can share with you.
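To give a flavour of what those documents cover: outbound preference and a crude inbound preference for dual transit usually come down to local-preference and AS-path prepending. A hypothetical IOS sketch; the ASNs, addresses, and policy are illustrative only:

    router bgp 64512
     neighbor 203.0.113.1 remote-as 65001       ! transit ISP1
     neighbor 198.51.100.1 remote-as 65002      ! transit ISP2
     neighbor 203.0.113.1 route-map ISP1-IN in
     neighbor 198.51.100.1 route-map ISP2-OUT out
    route-map ISP1-IN permit 10
     set local-preference 200          ! prefer ISP1 for outbound traffic
    route-map ISP2-OUT permit 10
     set as-path prepend 64512 64512   ! make the ISP2 path look longer inbound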
    Cheers,
    Mihai

  • DHCP with redundant Gateway does not work

    I am having trouble connecting to my office network using Wifi.
    I have done some investigation and found the problem which is...
When the DHCP server serves the request and replies with multiple/redundant gateways, the device does take the IP and communicates within the local network, but it ignores the gateways and cannot connect to the outside network.
When I asked my systems administrator and he added a specific setting for my device's MAC address so that only one gateway was sent in the DHCP reply, the device started to connect outside.
Our office has other Debian-based and CentOS-based systems running, including Ubuntu workstations, and none of them has a problem with multiple/redundant gateways; only the N900 does. That means the problem is not inherited from Linux or Debian; the bug is in Maemo itself.
I have tried to add some technical details, and I do not expect answers like "check the firewall" and so on. This is a genuine problem and is not intermittent. We have experienced it using the MS DHCP server under Windows 2003 and Win 2k as well.
End users: please try to duplicate the problem before suggesting a workaround or anything.
Developers/Admins/Officials: I hope you acknowledge the problem soon.

Yes, it is an uncommon configuration in households, but in offices or places that have access to more than one Internet connection it will be common. Houses normally have one small router with one Internet connection. However, in many offices the router is attached to multiple connections and manages fault tolerance. Having said all that, regardless of how often it is used, when it is used it won't work, and that is a bug.
In the DHCP protocol, option "003 Router" is defined as a list of IPs, not a single IP, so if the DHCP protocol were properly supported this should have worked. What I mean is that this problem is in the DHCP client implementation. Following is the output of the route command...
    ACTUAL OUTCOME:
    ===============
    / $ sudo /sbin/route
    Kernel IP routing table
    Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
    192.168.2.15    *               255.255.255.0   U     0      0        0 wlan0
    / $
    EXPECTED OUTCOME:
    =================
    / $ sudo /sbin/route
    Kernel IP routing table
    Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
    192.168.2.15    *               255.255.255.0   U     0      0        0 wlan0
    default         192.168.2.1     0.0.0.0         UG    0      0        0 wlan0
    default         192.168.2.2     0.0.0.0         UG    10     0        0 wlan0
    / $
I already posted the bug on Bugzilla:
    https://bugs.maemo.org/show_bug.cgi?id=9662
Well, I posted this here as well so that if someone else faces the same problem (not sure there will be anyone), they will at least know the cause instead of wondering why the hell their internet does not work in the office when there is no firewall blocking it. :-)
    I do hope to get it viewed by a "sympathetic developer" :-)
I am not currently up to developing or modifying scripts, but if this is not fixed by others then I believe I won't have a choice.
For now, as I mentioned, I have a workaround: an IP reservation in my DHCP server, which lets me fix it to one gateway for this MAC address only. But if I did not have admin access to my DHCP server, I would have been in trouble, and maybe not even able to troubleshoot and identify the problem.
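(For completeness, there is also a device-side workaround once you are on the shell: manually install one of the offered gateways until the DHCP client is fixed. The gateway address here is the one from the expected output above:

    sudo /sbin/route add default gw 192.168.2.1 dev wlan0

This does not survive reconnects, which is exactly why it is only a workaround.)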

VLAN for voice and VLAN for data with different ISPs: best choice of config?

    Hello everyone,
I'm Oscar.
At our company we have a redundant ISP connection to two separate ISPs.
We are also using VoIP on the network.
Currently one ISP connection is used primarily and the second one is just used as a backup.
I was wondering if it is possible to use the first connection primarily for data traffic and the second connection for VoIP traffic?
We use different VLANs for voice and data.
    Any help would be appreciated.

    Disclaimer
    The Author of this posting offers the information contained within this posting without consideration and with the reader's understanding that there's no implied or expressed suitability or fitness for any purpose. Information provided is for informational purposes only and should not be construed as rendering professional advice of any kind. Usage of this posting's information is solely at reader's own risk.
    Liability Disclaimer
    In no event shall Author be liable for any damages whatsoever (including, without limitation, damages for loss of use, data or profit) arising out of the use or inability to use the posting's information even if Author has been advised of the possibility of such damage.
    Posting
    Yes, for egress.  Ingress is "it depends".
    You could also consider using both links for both kinds of traffic.
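For the egress half, policy-based routing keyed on the voice VLAN is the usual tool. A hypothetical IOS-style sketch; the subnet, next hop, and VLAN number are illustrative, not from your network:

    access-list 101 permit ip 10.10.20.0 0.0.0.255 any   ! voice VLAN subnet
    route-map VOICE-VIA-ISP2 permit 10
     match ip address 101
     set ip next-hop 198.51.100.1                        ! ISP2 gateway
    interface Vlan20
     ip policy route-map VOICE-VIA-ISP2

Unmatched (data) traffic keeps following the default route out ISP1, and you would still want failover so voice falls back to ISP1 if ISP2 dies.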

  • DNS Benchmarks are important.

Below are posted the results of a recent DNS Benchmark test using the DNS servers that I have set in my router's DNS settings section. Not sure if the HH/HH3 has that option, as I use a Belkin N1.
The reason I have posted the info below is that after clicking the PPPoE settings and accepting the DNS servers that our ISP puts in automatically, my performance was unreliable.
So I unticked the "Get DNS from ISP" option and replaced the DNS servers with the ones in the report below, which, judging by the benchmark report, was a good move.
The DNS benchmark tester I use allows you to set up a database of every DNS server that is nearest to you and reliable (no router crashes or modem cutouts), with a stable profile.
    Just google "DNSBench".
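If you want a quick sanity check without running the whole benchmark, dig can time a single query against each configured server (the addresses are the two from my report below; any test domain will do):

    dig @194.74.65.68 example.com | grep "Query time"
    dig @194.72.0.98  example.com | grep "Query time"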
    The results summary, conclusions, and recommendations from your most recent run of this DNS benchmark are provided below. Please carefully consider the implications of making any changes to your system's current configuration before doing so.
✓ System has multiple redundant nameservers configured.
    This system is currently configured to use 2 separate nameservers for DNS name resolution. This is in keeping with recommended best practice (of having at least two different nameservers) so that the temporary failure of any single nameserver will not prevent all DNS name resolution.
✓ All system nameservers are alive & replying to queries.
    All of this system's 2 nameservers are working and replying to queries. This is terrific because if the system's primary nameserver were to become overloaded or unavailable, even briefly, one or more backup nameservers are standing by ready to supply DNS lookup services.
✓ System's nameservers are probably optimally ordered.
    Windows uses DNS servers in the order they are listed under the network adapter's properties, or when obtained automatically from an ISP, in the order provided by the ISP. Windows will fall back to using the second, third, and other nameservers only when the first listed nameserver fails to respond. So if the first nameserver happened to be very slow, but working, everything would be slowed down. Consequently, the order of nameserver listing should match their order of decreasing performance . . . which is probably how this system is currently configured:
Usage Order   Nameserver IP   Speed Rank
     1        194.74.65.68        1
     2        194.72.0.98         2
    Why only "probably" ?
    Only "probably" because there wasn't enough of a statistically significant difference between their timings to be able to make any claims with at least 95% confidence.  Here are the details:
     When this benchmark is allowed to finish, it will have collected approximately one hundred and fifty (150) DNS performance samples from each nameserver being tested.  Although this is sufficient to generate a good average performance estimate, if the collection of sampled values are too widely spread apart (in other words, not a lot of agreement among samples), it is impossible to know with "statistical certainty" (to be 95% sure) how individual nameservers compare to each other.
     Therefore, even if the ranking shown above appears to be out of order, the differences are not statistically significant, and you should not be concerned.  If you were to re-run the benchmark you might get a different outcome.  This benchmark conclusion page will inform you when a problem exists that is statistically significant, and will then advise you that your DNS nameserver settings should be changed.  But that is not the case with the benchmark results that were just obtained.
✗ System nameservers are SLOWER than 14 public alternatives!
    This benchmark found 14 publicly available DNS nameservers that are reliably faster than the slowest nameserver currently being used by this system. If you were to adjust your system's configuration to use the faster of these nameservers instead of what it is currently using, your DNS lookup performance, and all use of the Internet, would be improved.
    Recommended Actions:
    With at least 95% certainty:  Based upon a statistical analysis of the spread in timing value samples received during the benchmark, there is at least a 95% certainty that the performance conclusions stated above are correct. But even so, since changing DNS nameservers requires thought and effort, it's something you want to be sure about. Therefore, since these results represent a single snapshot in time, you may wish to confirm that the faster alternative nameservers are consistently faster than your system's currently configured nameservers, and that those public alternatives don't have any negative characteristics such as being colored orange to signify that they redirect mistaken URLs to an advertising-laden search page rather than returning an error (which will be a concern to some users).
    You may also wish to check the relative performance at different times of day to make sure that the performance improvement over your system's current nameservers is reliable throughout the day.
    And you may wish to make sure that the alternative nameservers are enough faster than what you are currently using for the improvement to be worth changing away from what you're currently using. (This test is only saying that it's 95% sure they are any amount faster.)
✓ This system's nameservers are 100% reliable.
    DNS reliability is extremely important, since lookup requests that are dropped and ignored by nameservers cause significant delays in Internet access while the querying system waits for a reply. The system is then finally forced to reissue the query to the same or to backup nameservers. While your system is patiently waiting for a reply, you are impatiently waiting to get on with your Internet access.
    During this benchmark test, all of the system's nameservers tested returned a reply for every request sent. It doesn't get any better than that. Very nice.
✓ All of this system's nameservers return errors.
    This is a GOOD thing!  Some DNS providers, such as OpenDNS and even the Earthlink, Roadrunner and Comcast ISPs, redirect incorrectly entered URLs to their own advertising-laden marketing-driven interception page instead of simply returning an error to the web browser. But this system's nameservers are returning errors when asked to lookup non-existent domain names.
✓ System nameservers are replying to all query types.
    During the development of this DNS Benchmark we discovered that the routers used by some pre-release testers were not returning results for the benchmark's Uncached and/or Dotcom testing queries. Even though these queries are admittedly unusual, they are completely valid. So the only conclusion was that those few routers were inherently defective. The good news here is that your nameservers are replying to these unusual but valid queries.
The above is for info only, posted in case it might be of some use to anyone losing their connection frequently.
If the mods find this post out of order in any way, please delete it.

    Gibson Research have produced several useful tools over the years including this one.
    A list of their freeware is available here.

  • I just backed up my mac to an external hard drive using Time Machine. What would happen if I turn Time Machine off and then plug the external hard drive back into my computer?

    I just backed up my mac to an external hard drive using Time Machine. What would happen if I turn Time Machine off and then plug the external hard drive back into my computer?
What I am ultimately wanting to do is make more room on my computer by backing up all of my files onto the external hard drive and then deleting them off of my computer. However, I need to be able to retrieve them from the external hard drive later down the road.
From what I have read and am trying to understand, I probably shouldn't have used Time Machine. I need to use the external hard drive like a basic flash drive, where I can put things on and take things off without it automatically updating through Time Machine every time I connect it to my computer.
    Not tech savvy at all and barely understand basics. I need very simple and easy to understand explanations.

    sydababy wrote:
    and then deleting them off of my computer.
BIG BIG MISTAKE..... you're making a linchpin deathtrap for your data by trying to shove everything onto a single fragile HD.
Don't suffer the tragedy other people make; buy one or two more HDs, they're cheap as dust.
The number of people who have experienced terror by having a single external HD backup is enormous. One failure, which WILL happen, and kaput... all gone!
Don't do it; it's all about redundancy, redundancy, redundancy.
    follow here:
    Methodology to protect your data. Backups vs. Archives. Long-term data protection
    Deleting them off your computer is fine....having only ONE copy is extremely BAD.
    The Tragedy that will be, the tragedy that never should be
Always presume, correctly, that your data is priceless, takes a very long time to create, and is often irreplaceable. Always presume, accurately, that hard drives are extremely cheap, so you have no excuse not to have multiple redundant copies of your data on hard drives squirreled away in several places: lockboxes, safes, fireboxes, offsite, and otherwise.
Hard drives aren't merely prone to failure; hard drives are guaranteed to fail (the very same is true of SSDs). Hard drives don't die only when aged; they die at any age, peak in failures when young, and slowly increase in risk again as they get old.
Never, at any time or for any reason, indulge the false premise and unreal sense of security of thinking your data is safe on any single external hard drive. It never is, and this has proven to be the single most common, horrible tragedy of data loss there is.
Many hundreds of millions of hours of work and data are lost each year due to this single common false security. It is an unnatural disaster that can be avoided by making all data redundant, and then redundant again. If you let a $60 additional redundant hard drive and 3 hours of copying stand between you and years of work, then you've made a fundamental mistake that countless thousands of people each year have come to regret.

  • Migration to iPhoto 6 AND Intel Mac Pro at the same time

    I am about to move my life to a new Mac Pro. Included in the move is an iPhoto 5 Library with 19 000 images.
    The iPhoto Library is backed up, of course, and there are multiple redundant copies of all the images on remote hard drives.
    The question is, should I upgrade iPhoto 5 to iPhoto 6 PPC version (and let iPhoto 6 PPC reorganize the Library) on the old Mac FIRST, and then move the iPhoto 6 Library over, or is it OK to move the iPhoto 5 Library to the new Mac intact and let iPhoto 6 (Intel version) do the reorganizing?
    Also, are there preference files from iPhoto 5 that have to go over to preserve the Library structure (rolls, comments, etc)?
    Thanks.

    James
    Simply copy the iPhoto Library Folder from your Pictures folder to the same location on the new machine. Then, to deal with any file permission issues: Download BatchMod from
    http://macchampion.com/arbysoft/
    And apply it to the iPhoto Library Folder using the settings found here:
    http://homepage.mac.com/toad.hall/.Pictures/Forum/BatChmod.png
    (Credit to Old Toad for this one).
    Then launch iPhoto on the new machine and it will do the rest.
    Regards
    TD

My iMac suddenly can't read the backup hard drive I've been using for Time Machine. I did NOT just upgrade the OS or anything. The external HD is an OWC Mercury Elite All Pro. It's worked fine since I got the iMac 4 years ago.

My iMac suddenly can't read the backup hard drive I've been using for Time Machine. I tried unplugging the cord that connects the HD to the iMac and plugging it back in, but I still get "The disk you inserted was not readable by this computer", below which are buttons for Initialize, Ignore and Eject. I was using a cord that went from larger square plug to larger square plug. So then I tried one that goes from a smaller square plug to what I think is USB (a thin rectangular plug), of the sort that connects the keyboard and mouse; it's the type my printers and scanners use to connect to the iMac. I did NOT just upgrade the OS or anything. The external HD is an OWC Mercury Elite All Pro. It's worked fine since I got the iMac 4 years ago. What else can I try before just trying to initialize it?

Thanks, Michael! I do hear it at times spooling up and running. Just after I bumped the thread, I looked online for troubleshooting for this drive and found the manual, which suggested using Disk Utility, which I'd seen before accidentally (when I hit Command-Shift-U instead of Shift-U to type "Unit" on a new folder for a student's homework) but had never really noticed. Disk Utility does see it, and also a sub-something (directory?) which might be the Time Machine archives on the disk, called disk1s2, sort of the way that my iMac's hard drive shows up as 640.14 GB Hitachi HDT7... and has a sub-something titled DB iMac, which is what I named my iMac's hard drive.
Anyway, the owner's manual just shows the image under the formatting section, not the troubleshooting section, but as soon as I saw it in the manual I remembered seeing it accidentally a few times, went to it, and am now verifying the disk. Right now it's telling me that it will take 2 hours to complete the verification, so I guess I have a bit of a wait. :-)
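Incidentally, the same check can be run from Terminal. The device identifier here is illustrative, so confirm yours with the first command:

    diskutil list                    # find the external volume, e.g. disk1s2
    diskutil verifyVolume disk1s2    # the same verification Disk Utility performs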
Does the fact that Disk Utility can see it mean it hasn't failed, or just that it hasn't completely failed?
I can see the virtue in having multiple redundant backups, or at least two backups. What do you suggest? Two external hard drives? I had this one linked by Ethernet, but I also have a cord that could link it by USB (like a printer), so if this one is repairable I could get a second one and link it by USB. If this one is not repairable I could get two and do the same thing. I do have an AirPort, so I suppose it's possible to get some sort of Wi-Fi hard drive (my new printer/scanner uses only the network and not a cable, although it has a cable that I used for the initial installation), but I'd suspect a Wi-Fi hard drive might carry a higher price.
    What hard drives, if any, do you recommend? I seem to recall that when I was looking at external hard drives 4 years ago, Apple's were substantially more expensive, which is why I got the OWC Mercury Elite All Pro.

  • Time Machine: setting-up external hard drive question

I purchased the LaCie 2TB external hard drive from Apple, and in the set-up process it said to free up all 2 TB of storage space if I intend to use it only for my Mac (which I do), so that's what I did. However, while I was doing that, the set-up screen kept giving me warnings that if I did "all Mac" as the screen told me to, it would erase data on the disk. I did "all Mac" anyway, so I'm not sure what got erased, since I thought nothing was supposed to be on there. Don't these come clean, with 2 TB available? I just want to make sure I did this right!

    KrystelleLynne wrote:
I have been doing back-ups with it using Time Machine. I did not, however, do the check in Disk Utility. Since there are back-ups on the hard drive, should I still erase?
Not now, no. All is OK then.
If Time Machine HAS been backing up to it, it's fine; it couldn't use it otherwise.
However, now that you've got A (single) backup, don't make the huge mistake everyone else does: back up your data on yet another HD, keep it updated, and store it somewhere safe.
The second HD doesn't have to use Time Machine; just drag and drop your valuable data.
Hard drives are cheap as dirt; buy another one at minimum if you cherish your priceless data.
    https://discussions.apple.com/docs/DOC-6031
Countless people think they're safe and doing well with a single external backup of the vital data they worked on for months, years, and sometimes decades. Nothing could be further from the truth. Never let yourself be in the situation of having a single external copy of your precious data.

  • How to transfer a copy or duplicate of my iPhoto library to an external hard drive.

    I would like to save a copy of my iPhoto library on an external hard drive (InfoSafe) that has been reformatted to Mac OS Extended file system (journaled).  I want the photos to be moved in the same Events they are in and with all of their metadata. 
Then I want to delete some of those pictures or events from iPhoto on my computer. I am using OS X 10.8.5 and iPhoto '11 (9.4.3). Once I do this and make sure it works and everything is where it is supposed to be, can I just stick the hard drive in a safe place for years?
    Thank you.
    Wendy

    Make sure the drive is formatted Mac OS Extended (Journaled)
    1. Quit iPhoto
2. Copy the iPhoto Library from your Pictures Folder to the External Disk (a Terminal equivalent is sketched after step 3).
    Now you have two full versions of the Library.
    3. On the Internal library, trash the Events you don't want there
    Now you have a full copy of the Library on the External and a smaller subset on the Internal
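For step 2, a Terminal equivalent is sketched here; the destination volume name is taken from your post, and -E (on Apple's bundled rsync) copies the extended attributes and resource forks along with the files:

    rsync -aE ~/Pictures/iPhoto\ Library /Volumes/InfoSafe/

The Finder copy works just as well; the command form is only handy if you want to re-run it later to refresh the copy.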
    Some Notes:
    As a general rule: when deleting photos do them in batches of about 100 at a time. iPhoto can baulk at trashing large numbers at one go.
You can choose which Library to open: hold down the Option (or Alt) key and launch iPhoto. From the resulting menu select 'Choose Library'.
    You can keep the Library on the external updated with new imports using iPhoto Library Manager
    Once I do this and make sure it works and everything is where it is supposed to be, can I just stick the hard drive in a safe place for years?
No. Hard drives are volatile and can be damaged. Over time the connection protocols change (USB 1 to USB 2 to USB 3; FireWire is almost dead, and so on), and HDs fail. Multiple, redundant backups are required. And unless a backup exists on at least two drives, you won't have an actual backup of any of the material you then remove from the laptop.
    For long term archiving I use Flickr. Then backing up the photos becomes their problem but it works in other ways as well - images available from any computer, as future-proof as just about anything is, and you can set the privacy protocols to keep the material private.

All of my iMovie events are stored on an external hard drive. How do I back up this external hard drive? If the external hard drive crashes, how do I retrieve the iMovie events?

I have all of my iMovie events stored on an external hard drive in order to save space on my MacBook Pro's internal drive. I currently back up both my MacBook Pro and the external hard drive using a second external hard drive via Time Machine. I have a few questions:
1. How can I be sure that the external drive is actually being backed up?
2. How do I retrieve the iMovie events if the first external hard drive crashes?
Thanks for your help!

    Let me give you some thoughts. I can tell you my plan, but ultimately you will need your own plan that makes sense for you.
    I do not think that your idea of having two separate Time Machine destinations will work. You can have two external drives as Time Machine volumes, but they will both back up the same data set. You cannot dedicate one to iMovie and one to everything else. They will both have everything, so you may still have a space problem.
One way to overcome this is to back up the iMovie volume using a tool like SuperDuper! or Carbon Copy Cloner. The good news is that your Event files should be pretty stable, and there is no need to back them up every 15 minutes. So something like SuperDuper!, which you can set to back up automatically once a day, for example, is probably fine. I am pretty sure this would work if you kept the Projects on the same external drive. As long as Projects and Events are on the same drive, the cloning strategy should work (but you should always test).
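If you prefer something scriptable to a GUI cloning tool, rsync can serve the same role. This is a hypothetical stand-in for SuperDuper!/CCC, with illustrative volume names; -aE preserves ownership and extended attributes on OS X, and --delete makes the copy an exact mirror (so keep another backup elsewhere):

    rsync -aE --delete /Volumes/iMovieEvents/ /Volumes/EventsBackup/

Scheduled from cron or launchd once a day, it behaves much like the nightly SuperDuper! run I describe below.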
    Because projects change more often than events, and projects can become corrupted, I would definitely back up the Project Library to Time Machine. You can do this even though it is on an external drive. (And even though you may be periodically backing up the entire drive that contains the projects.)
    Here are some thoughts on how I approach backup, copied from an earlier answer...
    ============================================================
Great question. I think the issue of backup is an important one, and one I always worry about, since I have a lot of video footage that is irreplaceable.
    For back-up, it is helpful to define the risk you are trying to protect from.
    For example,
1) risk of corruption in iMovie --> I use Time Machine
2) risk of hard drive failure --> I use SuperDuper! nightly (for a bootable backup) and Time Machine all the time
3) risk of theft or fire --> offsite backup
    Here is what I do...
    1) I back up my boot disk with SuperDuper automatically each night so I always have a bootable system.
    2) I back up my original video files. For my Motion JPEG, VHS-->AIC, DV, and 8MM to DV, the EVENT is the copy I back up. (I keep the tapes, but I never want to have to import the tapes again.)
    For my AVCHD high def video, I back up the camera archive of the AVCHD, but I am not currently backing up the Event files, because I would need from 4 to 8 TB to back up these events depending on whether I made one copy or two. I can always re-create the Event from the camera archive.
    3) I use CrashPlan for my offsite backup. It costs less than $4 per month, and I seeded the initial backup to an external drive to minimize having to transfer video files over the Internet. I currently have over 2TB of data backed up to CrashPlan, which includes my entire boot drive, and the files described in #2.
    4) All of the above is automated, so I don't have to think about it too much. If I have to manually do stuff, I tend to get behind.
You have to weigh the value of total multiple redundancy against the likelihood that two or three unlikely events would all happen at once. For example, CrashPlan could lose my data. My hard disk could fail. I could lose everything in a fire. But it is unlikely that they would all happen at once, except in a nuclear war, in which case, who cares?
