Recovery catalog best practice

Hi there,
I want to ask about your recovery catalog best practices:
should it be on a separate server, how should I back it up, should I have a standby recovery catalog, etc.?
We have a DC and an RDC and about 20 databases; we back up to tape and use a recovery catalog (11.2.0.3, in the DC) on a separate LPAR on AIX 7.1.
I mostly wonder how to protect it.
regards!

For the recovery catalog you also take an RMAN backup and write it to tape or disk (if it is a small database), so that in the event of a server failure you can get all the information back. Also set up a schedule to restore the recovery catalog backup to a different server from time to time, to check its credibility.
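For example, a minimal sketch of what such a catalog-backup job might look like, assuming the catalog database is called RCAT and the catalog schema owner is RCO (both placeholders); swap the disk channel for your tape channel as needed:

```bash
#!/bin/sh
# Hedged sketch: back up the recovery catalog database itself with RMAN,
# using its control file as the repository (the catalog cannot usefully
# record its own backups). RCAT and RCO are placeholder names.
export ORACLE_SID=RCAT

rman target / <<'EOF'
# A disk channel is shown for simplicity; use your SBT/tape channel if needed
BACKUP DATABASE PLUS ARCHIVELOG;
BACKUP CURRENT CONTROLFILE;
EOF

# Optional extra safety net: a logical export of the catalog schema,
# which can be imported into a fresh database if the whole server is lost.
expdp rco/secret schemas=RCO dumpfile=rco_catalog.dmp logfile=rco_catalog.log
```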

Similar Messages

  • Home Movie Cataloging - BEST PRACTICES

    I have about 200 hours of old home movies on VHS which I am in the process of adding to my iMac. I am wondering about 'best practices' on how much video can be stored inside of iMovie '08, when how much video becomes too much inside of the program, etc.
    In a perfect world, I'd like to simply import all of my home videos into iMovie, leave them in the 'library' section, and make 2-5 minute long clips in the 'projects' section for sharing with family members, but never deleting anything from the 'library'. Is this a good way to store original data? Would it be smarter to export all of the original video content to .DV files or something like that for space saving, etc?
    Can I use iMovie to store and catalog all of my old home movies in the same way I use iPhoto to store ALL of my photos, and iTunes to store ALL of my music/hollywood-movies, etc?

    We-ell, since no-one else has replied:
    1 hour of DV (digital video in the file system which iMovie uses) needs 13GB of hard disc space.
    You have 200 hours of video. 200 x 13 = 2,600 gigabytes. Two point six terabytes. If you put all that on one-and-a-bit 2TB hard discs, and a hard disc fails - oops! - where's your backup? ..Ah, on another one-and-a-bit 2TB hard discs ..or, preferably, spread over several hard discs, so that if one fails you haven't lost everything!
    iMovie - the program - can handle video stored on external discs. But are you willing to pay the price for those discs? If so; fine! Digitise all your VHS and store it on computer discs (prices come down month by month).
    Yes, you can "mix'n'match" clips between different projects, making all sorts of "mash-ups" or new videos from all the assorted video clips. But you'll need more hard disc space for the editing, too. You could use your iMac's internal hard disc for that ..or use one of the external discs for doing the editing on. That's how professionals edit: all the video "assets" on external discs, and edit onto another disc. That's what I do with my big floorstander PowerMac, or whatever those big cheesegraters were called..
    So the idea's fine, as long as you have all the external storage you'd need, plus the backup in case one of those discs fails, and all the time and patience to digitise 200 hours of VHS.
    Note that importing from VHS will import material as one long, continuous take - there'll be no automatic scene breaks between different shots - so you'll have to spend many hours chopping up the material into different clips after importing it.
    Best way to index that? Dunno; there have been several programs which supposedly do the job for you (..I can't remember their names; I've tried a few: find them by Googling..) but they've been more trouble - and taken up more disc space - than I've been prepared to bother with. I'd jot down the different clips as you create them, either by jotting in TextEdit (simplest) or in a database or spreadsheet program such as Excel or Numbers or similar ..or even in a notebook.
    Jot down the type of footage (e.g; 16th Birthday party), name of clip (e.g; 016 party), duration (e.g; 06:20 mins and seconds) and anything else you might need to identify each clip.
    Best of luck!

  • Best-practice for Catalog Views ? :|

    Hello community,
    A best practice question:
    The situation: I have several product categories (110), several items in those categories (4000) and 300 end users. I would like to know the best practice for segmenting the catalog. I mean, some users should only see categories 10, 20 & 30; other users only category 80, etc. The problem is: how can I implement this?
    My first idea is:
    1. Create 110 Procurement Catalogs (1 for every prod.category).   Each catalog should contain only its product category.
    2. Assign, in my Org Model at user level, all the "catalogs" that the user should access.
    Do you have any idea in order to improve this ?
    Saludos desde Mexico,
    Diego

    Hi,
    Your way of doing it will work, but you'll run into maintenance issues (too many catalogs, and a catalog link to maintain for each user).
    The other way is to build your views in CCM and assign these views to the users, either on the roles (PFCG) or on the user (SU01). The problem is that with CCM 1.0 this is limited, because you have to assign the items to each view one by one (no dynamic or mass processes); it has been enhanced in CCM 2.0.
    My advice:
    - Challenge your customer about views and try to limit the number of views, for example to strategic and non-strategic.
    - With CCM 1.0, stick to the procurement catalogs, or implement BAdIs to assign items to the views (I have tried it; it works, but it is quite difficult), again with a limited number of views.
    Good luck.
    Vadim

  • Kernel: PANIC! -- best practice for backup and recovery when modifying system?

    I installed NVidia drivers on my OL6.6 system at home and something went bad with one of the libraries. On reboot, the kernel would panic and I couldn't get back into the system to fix anything. I ended up re-installing the OS to recover my system.
    What would be some best practices for backing up the system when making a change and then recovering if this happens again?
    Would LVM snapshots be a good option? Can I recover a snapshot from a rescue boot?
    EX: File system snapshots with LVM | Ars Technica -- scroll down to the section discussing LVM.
    Any pointers to documentation would be welcome as well. I'm just not sure what to do to revert the kernel or the system when something I install goes bad like this.
    Thanks for your attention.

    There is often a common misconception: A snapshot is not a backup. A snapshot and the original it was taken from initially share the same data blocks. LVM snapshot is a general purpose solution which can be used, for example, to quickly create a snapshot prior to a system upgrade, then if you are satisfied with the result, you would delete the snapshot.
    The advantage of a snapshot is that it can be used on a live filesystem or volume while changes are written to the snapshot volume. Hence it's called "copy on write" (COW), or copy on change if you want. This is necessary for system integrity: you get a consistent status of all data at a certain point in time while still allowing changes to happen, for example in order to perform a filesystem backup. A snapshot is no substitute for disaster recovery in case you lose your storage media. A snapshot only takes seconds, and initially does not copy or back up any data, unless data changes. It is therefore important to delete the snapshot when it is no longer required, in order to prevent duplication of data and restore filesystem performance.
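    A minimal sketch of that pre-upgrade workflow, assuming a volume group named vg0 and a logical volume named root (both placeholders); the snapshot only needs enough space to hold the changes made while it exists:

    ```bash
    # Hedged sketch: snapshot an LVM volume before a risky change, then either
    # drop the snapshot (happy path) or merge it back to undo the change.
    # vg0/root and the 5G size are assumptions - adjust to your layout.

    # 1. Create the snapshot (takes seconds, no data is copied up front)
    lvcreate --size 5G --snapshot --name root_pre_upgrade /dev/vg0/root

    # 2. ... install the NVIDIA driver, run the upgrade, etc. ...

    # 3a. Happy with the result? Drop the snapshot so COW overhead stops growing
    lvremove /dev/vg0/root_pre_upgrade

    # 3b. Or roll back instead: merge the snapshot into the origin (the merge
    #     completes the next time the origin volume is activated, e.g. on reboot)
    # lvconvert --merge /dev/vg0/root_pre_upgrade
    ```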
    LVM was never a great thing under Linux and can cause serious I/O performance bottlenecks. If snapshot or COW technology suits your purpose, I suggest you look into Btrfs, which is a modern filesystem built into the latest Oracle UEK kernel. Btrfs employs the idea of subvolumes and is much more efficient than LVM because it can operate on files or directories, whereas LVM operates on the whole logical volume.
    Keep in mind, however, that you cannot use LVM or Btrfs for the boot partition, because the GRUB boot loader, which loads the Linux kernel, cannot deal with LVM or Btrfs before loading the Linux kernel (catch-22).
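    If you do go the Btrfs route, the equivalent snapshot workflow is roughly the following, assuming the root filesystem is a Btrfs subvolume and a /.snapshots directory exists on it (both assumptions):

    ```bash
    # Hedged sketch: read-only Btrfs snapshot of the root subvolume before a change.
    # The paths are assumptions - adapt them to how your subvolumes are laid out.
    btrfs subvolume snapshot -r / /.snapshots/root_pre_upgrade

    # List the subvolumes/snapshots on the filesystem
    btrfs subvolume list /

    # Remove the snapshot once you are happy with the change
    btrfs subvolume delete /.snapshots/root_pre_upgrade
    ```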
    I think the following is an interesting and fun-to-read introduction explaining the basic concepts:
    http://events.linuxfoundation.org/sites/events/files/slides/Btrfs_1.pdf

  • Best Practices for Unlinked Assets in Custom Catalogs

    Hello,
    Is it a best practice to have unlinked categories, products, skus in the product catalog?
    How do folks typically deal with assets that are no longer active?
    thanks!
    J

    It is better to have one dummy category to which you assign all these unlinked assets, and you have to make sure this category is restricted from the search engine and does not display on the B2C site. It is just a temporary category that will hold all the unlinked assets.
    If you want to show a particular product on the B2C site in the near future, you assign that product to its respective category and delete the product's relationship to this dummy category.

  • Best tactic to migrate recovery catalog

    #Recovery Catalog
    Oracle9i 9.2.0.1.0
    Windows 2003 Server - 32 bit
    Our recovery catalog is an old one; we will upgrade it to 11g, but we will need to build a new server for it as well.
    Correct me if I am wrong, but one approach should be to set up a new server, install Oracle and create a database, re-connect all the recovery catalog users to this one, and take new backups of the target databases so that backup information appears in the new catalog.
    However, shouldn't it be possible to export the data from the old recovery catalog to the new one?
    Please guide me to the best tactic.
    Regards,

    migrate catalog from 9i to 10g
    Read the link.
    Regards
    Asif Kabir
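    In outline, the export/import route asked about above usually looks something like the following sketch (the schema name rman, passwords and TNS aliases are placeholders; check the catalog/RMAN compatibility matrix before relying on this with a 9.2 source catalog):

    ```bash
    # Hedged sketch of moving a recovery catalog schema with export/import.
    # The schema name (rman), passwords and TNS aliases are placeholders.

    # 1. Export the catalog schema from the old 9.2 catalog database
    exp rman/secret@oldcat owner=rman file=rman_cat.dmp log=exp_rman_cat.log

    # 2. Create the same user in the new catalog database, then import
    imp rman/secret@newcat fromuser=rman touser=rman file=rman_cat.dmp log=imp_rman_cat.log

    # 3. Bring the imported catalog schema up to the new RMAN version
    #    (RMAN asks for UPGRADE CATALOG twice to confirm)
    rman <<'EOF'
    CONNECT CATALOG rman/secret@newcat
    UPGRADE CATALOG
    UPGRADE CATALOG
    EOF
    ```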

  • Backup policy on Grid, using RMAN and Recovery Catalog.

    Hello Gurus,
    I'm trying to move ahead with the new features of Oracle 10g in my company. So I've already created Grid Control and will deploy Agents/Targets during the next week. I also want to switch the backup process from the archaic exp/imp to RMAN with the Recovery Catalog.
    My question is this: could anybody suggest the best book or text with solutions and descriptions of a backup policy using Grid Control, RMAN and the Catalog? What I eventually want to do is create a powerful, centralized store of backups for every database in the company and run the whole daily routine from there. But since I'm only at the very beginning, I want to gain some knowledge before the bad experience, like a crashed recovery catalog and no backups for any database :-)))
    Definitely I'll keep the existing imp/exp and hot backups for a long time. I'm not going to replace our whole backup strategy with one catalog. I just want to understand what the best practice is from a backup perspective.
    And in general, I wonder if you could please give me a very brief answer to this question: let's say I have a huge monster with 5 disks of 500 GB in RAID 0+1. Is it a bad idea to store all the backups on one physical machine before they go to tape, or could it make sense? To me it sounds a bit scary, but probably if it's all done with some brains it's not a bad idea?
    Please point me to some valuable and good sources (like Tom Kyte or Don Burleson). Unfortunately, I didn't find anything regarding the backups themselves, only fairly general thoughts.
    Thanks a lot in advance.
    M.

    great!
    thanks a lot!
    This is definitely valuable info and I'll read it right away. But as far as I understand, they suggest using Data Guard, standby databases and RAC, which are the best ways to protect your environment from a crash. In reality (I mean in my company) nobody is going to move to RAC at this point, and I'm not sure they will have a chance to in the next year; they just aren't up to it.
    What I'm looking for is probably some tips and hints about backups in a more or less big environment, but not as big as Oracle itself :-) Unfortunately, I'm the only DBA in my surroundings. I mean, I have no DBA friends or mentors who could sit with me in a pub and pass their knowledge on from generation to generation :-)))
    But anyway, I'll devote this weekend to the link.
    thanks a lot!
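    As a concrete starting point, the routine catalog-based backup on each target usually boils down to something like this sketch (connect strings, retention window and channel settings are placeholders to adapt to your tape software):

    ```bash
    # Hedged sketch of a routine catalog-based RMAN backup for one target database.
    # Connect strings, retention window and channel settings are placeholders.
    export ORACLE_SID=PROD1

    rman target / catalog rco/secret@rcat <<'EOF'
    CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 14 DAYS;
    # Swap the default disk channel for your SBT_TAPE channel when going to tape
    BACKUP DATABASE PLUS ARCHIVELOG DELETE INPUT;
    DELETE NOPROMPT OBSOLETE;
    EOF
    ```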

  • Best practices for deploying EMGrid Control

    Can I use one DB for the OEM & RMAN repositories? I'm looking for best practices for deploying EM Grid Control in our environment. I have experience working with EM Grid Control and it was very slow; how can I make it fast? I enjoy the speed of EM DB Control...

    DBA2008 wrote:
    Is it a good idea to put the RMAN recovery catalog & OID schema in the OEM repository DB? I am thinking of just consolidating all these schemas in one DB.
    Unless you are really starved for resources, I would not recommend storing the OID and OEM repositories in the same database. Both of these repositories support different products, and you risk creating unnecessary dependencies when patching or upgrading. As a completely fictitious example, what if your OID installation has a critical issue that requires a repository database upgrade to version 10.2.0.6, and the Grid Control repository database is only certified for version 10.2.0.5?
    Regards,
    John P.
    http://only4left.jpiwowar.com

  • Best practice for migrating data tables- please comment.

    I have 5 new tables seeded with data that need to be promoted from a development to a production environment.
    Instead of the DBAs just using a tool to migrate the data they are insistent that I save and provide scripts for every single commit, in proper order, necessary to both build the table and insert the data from ground zero.
    I am very unaccustomed to this kind of environment and it seems much riskier for me to try and rebuild the objects from scratch when I already have a perfect, tested, ready model.
    They also require extensive documentation where every step is recorded in a document and use that for the deployment.
    I believe their rationale is they don't want to rely on backups but instead want to rely on a document that specifies each step to recreate.
    Please comment on your view of this practice. Thanks!

    >
    Please comment on your view of this practice. Thanks!
    >
    Sounds like the DBAs are using best practices to get the job done. Congratulations to them!
    >
    I have 5 new tables seeded with data that need to be promoted from a development to a production environment.
    Instead of the DBAs just using a tool to migrate the data they are insistent that I save and provide scripts for every single commit, in proper order, necessary to both build the table and insert the data from ground zero.
    >
    The process you describe is what I would expect, and require, in any well-run environment.
    >
    I am very unaccustomed to this kind of environment and it seems much riskier for me to try and rebuild the objects from scratch when I already have a perfect, tested, ready model.
    >
    Nobody cares if it is riskier for you. The production environment is sacred. Any and all risk to it must be reduced to a minimum at all cost. In my opinion a DBA should NEVER move ANYTHING from a development environment directly to a production environment. NEVER.
    Development environments are sandboxes. They are often not backed up. You or anyone else could easily modify tables or data with no controls in place. Anything done in a DEV environment is assumed to be incomplete, unsecure, disposable and unvetted.
    If you are doing development and don't have scripts to rebuild your objects from scratch then you are doing it wrong. You should ALWAYS have your own backup copies of DDL in case anything happens (and it does) to the development environment. By 'have your own' I mean there should be copies in a version control system or central repository where your teammates can get their hands on them if you are not available.
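    To make that concrete, the kind of rebuild script being described is usually nothing more exotic than this sketch (the connect string, table, columns and data are purely illustrative):

    ```bash
    #!/bin/sh
    # Hedged sketch of a self-contained, re-runnable build script of the kind
    # described above. Table, columns and connect string are purely illustrative.
    sqlplus -s deploy_user/secret@prod <<'EOF'
    WHENEVER SQLERROR EXIT SQL.SQLCODE ROLLBACK

    CREATE TABLE ref_status (
      status_code  VARCHAR2(10)  PRIMARY KEY,
      description  VARCHAR2(100) NOT NULL
    );

    INSERT INTO ref_status (status_code, description) VALUES ('NEW',     'Newly created');
    INSERT INTO ref_status (status_code, description) VALUES ('ACTIVE',  'In active use');
    INSERT INTO ref_status (status_code, description) VALUES ('RETIRED', 'No longer used');

    COMMIT;
    EOF
    ```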
    As for data - I agree with what others have said. Further - ALL data in a dev environment is assumed to be dev data and not production data. In all environments I have worked in, ALL production data must be validated and approved by the business. That means every piece of data in lookup tables, fact tables, dimension tables, etc. Only computed data, such as might be in a data warehouse generated by an ETL process, might be exempt; but the process that creates that data is not exempt - that process, and ultimately the data, must be signed off on by the business.
    And the business generally has no access to, or control of, a development environment. That means using a TEST or QA environment for the business users to test and validate.
    >
    They also require extensive documentation where every step is recorded in a document and use that for the deployment.
    I believe their rationale is they don't want to rely on backups but instead want to rely on a document that specifies each step to recreate.
    >
    Absolutely! That's how professional deployments are performed. Deployment documents are prepared and submitted for sign-off by each of the affected groups. Those groups can include security, DBA, business users, IT and even legal. The deployment documents always include recovery steps, so that if something goes wrong or the deployment can't proceed there is a documented procedure for restoring the system to a valid working state.
    The deployments themselves that I participate in have representatives from each of those groups in the room or on a conference call as each step of the deployment is performed. Your 5 tables may be used by stored procedures, views or other code that has to be deployed as part of the same process. Each step of the deployment has to be performed in the correct order. If something goes wrong, the responsible party is responsible for assisting in the retry or recovery of their component.
    It is absolutely vital to have a known, secure, repeatable process for deployments. There are no shortcuts. I agree, for a simple 5 new table and small amount of data scenario it may seem like overkill.
    But, despite what you say, it simply cannot be that easy, for one simple reason: adding 5 tables with data to a production system has no business impact or utility at all unless there is some code, process or application somewhere that accesses those tables and data. Your post didn't mention the part about what changes are being made to actually USE what you are adding.

  • Best Practice : how 2 fetch tables, views, ... names and schema

    hi,
    I am looking for the best practice for getting the catalog of a database.
    I have seen that I can run SELECTs against system tables (or views) such as DBA_TABLES and DBA_VIEWS, or DBA_CATALOG, but is that the best way to grab this information?
    (I ask this question because it seems strange to me to get the table names using a simple SELECT but the column info using a specialized function, OCIDescribeAny(); this does not look like a coherent API...)
    thanks for your advice
    cd

    Along the same lines, why use OCIDescribeAny() instead of doing an appropriate SELECT against DBA_TAB_COLUMNS?
    cd
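    For what it's worth, a dictionary-only approach is perfectly workable; a minimal sketch (the connect string and the SCOTT schema are placeholders):

    ```bash
    #!/bin/sh
    # Hedged sketch: reading the catalog (tables, views, columns) straight from
    # the data dictionary with SQL*Plus. Connect string and the SCOTT schema
    # are placeholders.
    sqlplus -s system/secret@orcl <<'EOF'
    SET PAGESIZE 100 LINESIZE 200

    -- Tables and views owned by one schema
    SELECT object_type, object_name
    FROM   dba_objects
    WHERE  owner = 'SCOTT'
    AND    object_type IN ('TABLE', 'VIEW')
    ORDER  BY object_type, object_name;

    -- Column definitions for the same schema, no OCIDescribeAny() needed
    SELECT table_name, column_name, data_type, data_length, nullable
    FROM   dba_tab_columns
    WHERE  owner = 'SCOTT'
    ORDER  BY table_name, column_id;
    EOF
    ```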

  • What is a best practice for managing a large amount of ever-changing hyperlinks?

    I am moving an 80+ page printed catalog online. We need to add hyperlinks to our Learning Management System courses to each reference of a class - there are 100s of them. I'm having difficulty understanding what the best practice is for consistent results when I need to go back and edit (which we will have to do regularly).
    These seem like my options:
    Link the actual text - sometimes when I go back to edit the link I can't find it in InDesign but can see it's there when I open up the PDF in Acrobat
    Draw an invisible box over the text and link it - this seems to work better but seems like an extra step
    Do all of the linking in Acrobat
    Am I missing anything?
    Here is the document in case anyone wants to see it so far. For the links that are in there, I used a combination of adding the links in InDesign then perfecting them using Acrobat (removing additional links or correcting others that I couldn't see in InDesign). This part of the process gives me anxiety each month we have to make edits. Nothing seems consistent. Maybe I'm missing something obvious?

    What exactly needs to be edited - the hyperlinks, the content, or something else?

  • Best practices for ARM - please help!!!

    Hi all,
    Can you please help with any pointers / links to documents describing best practices for who should be creating the GRC request in the ARM workflow below in GRC 10.0?
    Create GRC request -> role approver -> risk manager -> security team
    The options are: end user / manager / functional super users / security team.
    End user and manager are not possible - we cannot train so many people. The functional team is refusing since it is a lot of work. Please help me with pointers to any best-practice documents.
    Thanks!!!!

    In this case, I recommend proposing that the department managers create GRC Access Requests.  In order for the managers to comprehend the new process, you should create a separate "Role Catalog" that describes what abilities each role enables.  This Role Catalog needs to be taught to the department Managers, and they need to fully understand what tcodes and abilities are inside of each role.  From your workflow design, it looks like Role Owners should be brought into these workshops.
    You might consider a Role Catalog that the manager could filter on and make selections from.  For example, an AP manager could select "Accounts Payable" roles, and then choose from a smaller list of AP-related roles.  You could map business functions or tasks to specific technical roles.  The design flaw here, of course, is the way your technical roles have been designed.
    The point being, GRC AC 10 is not business-user friendly, so using an intuitive "Role Catalog" really helps the managers understand which technical roles they should be selecting in GRC ARs.  They can use this catalog to spit out a list of technical role names that they can then search for within the GRC Access Request.
    At all costs, avoid having end-users create ARs.  They usually select the wrong access, and the process then becomes very long and drawn out because the role owners or security stages need to mix and match the access after the fact.  You should choose a Requestor who has the highest chance of requesting the correct access.  This is usually the user's Manager, but you need to propose this solution in a way that won't scare off the manager - at the end of the day, they do NOT want to take on more work.
    If you are using SAP HR, then you can attempt HR Triggers for New User Access Requests, which automatically fill out and submit the GRC AR upon a specific HR action (New Hire, or Termination).  I do not recommend going down this path, however.  It is very confusing, time consuming, and difficult to integrate properly.
    Good luck!
    -Ken

  • LRCC Face recognition - best practices?

    Ok so we are all new to the wonderful world of face recognition in LR.  I'm trying to work out what would be the best practices for using this.
    A little bit of background - I have a catalog of over 200,000 images. In addition to portrait and wedding clients, a significant part of my work is with models and another significant part is theatre photography. I have been wanting some sort of face recognition to help with both for some time.
    What are your naming conventions for people? Here's mine:
    Ideally I would label people as "surname, firstname" so that I can keep members of a family together in the "named people" display, but commas are not allowed in names. Also, the professional name of many models doesn't fit that pattern, eg "Strawberry Venom" or "Cute as Sin" are two models I have worked with.
    I am trying to come up with a sensible naming convention; at the moment it is "Surname/ Firstname" for clients, theatre folk and friends/family. Models are still a problem; at present I am thinking of "Surname/ Firstname (model name(s))". While I may not be able to remember the real names of models, I do usually know the names from model releases. This naming will still permit me to filter/find them in the Keyword List panel by just entering the model name.
    One final addition I am making to this naming convention is the use of a hashtag suffix to the name: #F for friends and family, #C for clients, #T for theatre/actors and #M for models. This enables me to filter on just models, or just actors, or just friends and family. Where people fall into multiple categories I add multiple hashtags. So photos of me would be keyworded with "Butterfield/ Ian #F #T".
    Unknown / unidentified people.
    What I am not yet certain about is how to handle unknown / unidentified people.  Unidentified people fall into a number of different categories.
    People I don't know and I am never likely to know (Eg random strangers on the street, local tour guides on holiday, random people in the background etc)
    This group is relatively easy to deal with - simply delete the face recognition. End of story.
    People I don't know the names of yet but I am likely to find out (Eg actors in a production for which I don't have a programme)
    For these people I am making up a unique name using the format "date/ Context-Gendernn", Eg an unknown male actor at Stockport Garrick Theatre would be named "20150313/ SGT-M01". Although this may appear a complex solution, it has a number of advantages. If/when I do learn the name of the individual (Eg I photograph them in a different production) it is simply a case of renaming the people keyword. Creating a unique name, rather than simply assigning all unknowns to a bucket name, will help the face recognition algorithms find this person without being confused by different faces assigned to the same name. I am also using the hashtag #U to make it easier to filter the unknown faces when I need to.
    People I don't know the names of and there is only a slim possibility of meeting/photographing again (Eg guests at a client wedding)
    It feels as though I ought to just delete the face recognition and have done with it, and this is what I would do except for one thing: other than manually drawing face regions, I have not yet found a way to get Lightroom to rescan a folder for faces if you have previously deleted the face recognition. This means that deleting face regions from a large number of people is something that cannot be easily reversed. I might just leave these people in the "Unnamed People" category... at least until such time as there is a way to rescan a folder or collection.
    Summary
    My practices are still evolving, but I hope these thoughts and ideas will help others think through the issues and come up with solutions that work for their situation. I am interested in hearing how other people are using the face recognition system, especially if anyone is aware of any 'best practices' that Adobe or anyone else has recommended.

    Glad it helped.
    Yes and no. You can still put the people keywords into hierarchies within the keyword list - you can arrange them just like any other keywords. So you just create a "smith family" keyword and store "john smith" under it. What you can't do is apply BOTH smith family and john smith to the same face.
    My use of the hashtags came about because I initially had a top-level keyword for models, one for clients, one for theatre people and one for family and friends. Then I discovered that some of the theatre folk were also clients (headshots), and what to do when a friend is also a client? So the hashtag system means a person can be a friend, a model and an actor as well as being a client! (#T #C #M #F).

  • Best Practices for new iMac

    I posted a few days ago re failing HDD on mid-2007 iMac. Long story short, took it into Apple store, Genius worked on it for 45 mins before decreeing it in need of new HDD. After considering the expenses of adding memory, new drive, hardware and installation costs, I got a brand new iMac entry level (21.5" screen,
    2.7 GHz Intel Core i5, 8 GB 1600 MHz DDR3 memory, 1TB HDD running Mavericks). Also got a Superdrive. I am not needing to migrate anything from the old iMac.
    I was surprised that a physical disc for the OS was not included. So I am looking for any Best Practices for setting up this iMac, specifically in the area of backup and recovery. Do I need to make a boot DVD? Would that be in addition to making a Time Machine full backup (using external G-drive)? I have searched this community and the Help topics on Apple Support and have not found any "checklist" of recommended actions. I realize the value of everyone's time, so any feedback is very appreciated.

    OS X has not been officially issued on physical media since OS X 10.6 (arguably 10.7 was issued on some USB drives, but this was a non-standard approach for purchasing and installing it).
    To reinstall the OS, your system comes with a recovery partition that can be booted to by holding the Command-R keys immediately after hearing the boot chimes sound. This partition boots to the OS X tools window, where you can select options to restore from backup or reinstall the OS. If you choose the option to reinstall, then the OS installation files will be downloaded from Apple's servers.
    If for some reason your entire hard drive is damaged and even the recovery partition is not accessible, then your system supports the ability to use Internet Recovery, which is the same thing except instead of accessing the recovery boot drive from your hard drive, the system will download it as a disk image (again from Apple's servers) and then boot from that image.
    Both of these options will require you have broadband internet access, as you will ultimately need to download several gigabytes of installation data to proceed with the reinstallation.
    There are some options available for creating your own boot and installation DVD or external hard drive, but for most intents and purposes this is not necessary.
    The only "checklist" option I would recommend for anyone with a new Mac system, is to get a 1TB external drive (or a drive that is at least as big as your internal boot drive) and set it up as a Time Machine backup. This will ensure you have a fully restorable backup of your entire system, which you can access via the recovery partition for restoring if needed, or for migrating data to a fresh OS installation.

  • Best practice for TM on AEBS with multiple macs

    Like many others, I just plugged a WD 1TB drive (mac ready) into the AEBS and started TM.
    But in reading here and elsewhere I'm realizing that there might be a better way.
    I'd like suggestions for best practices on how to setup the external drive.
    The environment is...
    ...G4 Mac mini, 10.4 PPC - this is the system I'm moving from; it has all the iPhoto and iTunes content and is being left untouched until I get all the TM/backup set up and tested. But it will go to 10.5 eventually.
    ...Intel iMac, 10.5 soon to be 10.6
    ...Intel Mac mini, 10.5, soon to be 10.6
    ...AEBS with (mac ready) WD-1TB usb attached drive.
    What I'd like to do...
    ...use the one WD-1TB drive for all three backups, AND keep a copy of system and iLife DVD's to recover from.
    From what I'm reading, I should have a separate partition for each mac's TM to backup to.
    The first question is partitioning... Disk Utility sees my iMac's internal HD & DVD, but doesn't see the WD-1TB on the AEBS (when TM is active it will appear in Disk Utility, but when TM ends, it drops off the Disk Utility list).
    I guess I have to connect it via USB to the iMac for the partitioning, right?
    I've also read the benefits of keeping a copy of the install DVD's on the external drive... but this raises more questions.
    How do I get an image of the install DVD onto the 1TB drive?
    How do I do that? (install?, ISO image?, straight copy?)
    And what about the 2nd disk (for iLife?) - same partition, a different one, ISO image, straight copy?
    Can I actually boot from the external WD 1TB while it is connected to the AEBS, or do I have to temporarily plug it in via USB?
    And if I have to boot the OS from USB, once I load it and it wants to restore from the TM, do I leave it on USB or move it to the AEBS? (I've heard the way the backups are created differs between local and network.)
    I know its a lot of question but here are the two objectives...
    1. Use TM in typical fashion, to recover the occasional deleted file.
    2. The ability to perform a bare-metal point-in-time recovery (not always to the very last backup, but sometimes to a day or two before.)

    dmcnish wrote:
    From what I'm reading, I should have a separate partition for each mac's TM to backup to.
    Hi, and welcome to the forums.
    You can, but you really only need a separate partition for the Mac that's backing-up directly. It won't have a Sparse Bundle, but a Backups.backupdb folder, and if you ever have or want to delete all of them (new Mac, certain hardware repairs, etc.) you can just erase the partition.
    The first question is partitioning... Disk Utility sees my iMac's internal HD & DVD, but doesn't see the WD-1TB on the AEBS (when TM is active it will appear in Disk Utility, but when TM ends, it drops off the Disk Utility list).
    I guess I have to connect it via USB to the iMac for the partitioning, right?
    Right.
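    For reference, once the drive is attached via USB, the partitioning itself can also be done from Terminal instead of the Disk Utility GUI. A hedged sketch, assuming the drive shows up as disk2 and using two partitions as an example (the names, sizes and number of partitions are yours to choose; double-check the identifier first, because this erases the drive):

    ```bash
    # Hedged sketch: repartition the WD drive from Terminal. The identifier
    # (disk2), partition names and sizes are assumptions - confirm the identifier
    # with the first command, because partitionDisk erases the whole drive.
    diskutil list
    diskutil partitionDisk disk2 2 GPT \
      JHFS+ TM_iMac   500g \
      JHFS+ TM_Shared 500g
    ```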
    I've also read the benefits of keeping a copy of the install DVD's on the external drive... but this raises more questions.
    Can I actually boot from the external WD 1TB while it is connected to the AEBS, or do I have to temporarily plug it in via USB?
    I don't think so. I've never tried it, but even if it works, it will be very slow. So connect via F/W or USB (the PPC Mac probably can't boot from USB, but the Intels can).
    And if I have to boot the OS from USB, once I load it and it wants to restore from the TM, do I leave it on USB or move it to the AEBS? (I've heard the way the backups are created differs between local and network.)
    That's actually two different questions. To do a full system restore, you don't load OSX at all, but you do need the Leopard Install disc, because it has the installer. See item #14 of the Frequently Asked Questions *User Tip* at the top of this forum.
    If for some reason you do install OSX, then you can either "transfer" (as part of the installation) or "Migrate" (after restarting, via the Migration Assistant app in your Applications/Utilities folder) from your TM backups. See the *Erase, Install, & Migrate* section of the Glenn Carter - Restoring Your Entire System / Time Machine *User Tip* at the top of this forum.
    In either case, if the backups were done wirelessly, you must transfer/migrate wirelessly (although you can speed it up by connecting via Ethernet).
