Best practice for TM on AEBS with multiple Macs

Like many others, I just plugged a WD 1TB drive (Mac-ready) into the AEBS and started TM.
But in reading here and elsewhere I'm realizing that there might be a better way.
I'd like suggestions for best practices on how to set up the external drive.
The environment is...
...G4 Mac mini, 10.4 PPC - this is the system I'm moving from; it has all my iPhoto and iTunes content, and it is being left untouched until I get the TM/backup setup tested. But it will go to 10.5 eventually.
...Intel iMac, 10.5 soon to be 10.6
...Intel Mac mini, 10.5, soon to be 10.6
...AEBS with (mac ready) WD-1TB usb attached drive.
What I'd like to do...
...use the one WD 1TB drive for all three backups, AND keep a copy of the system and iLife DVDs to recover from.
From what I'm reading, I should have a separate partition for each mac's TM to backup to.
The first question is partitioning... Disk Utility sees my iMac's internal HD & DVD, but doesn't see the WD 1TB on the AEBS. (When TM is active it will appear in Disk Utility, but when TM ends, it drops off the Disk Utility list.)
I guess I have to connect it via USB to the iMac for the partitioning, right?
I've also read the benefits of keeping a copy of the install DVD's on the external drive... but this raises more questions.
How do I get an image of the install DVD onto the 1TB drive - an install, an ISO image, a straight copy?
And what about the 2nd disc (for iLife?) - same partition, a different one, ISO image, straight copy?
Can I actually boot from the external WD 1TB while it is connected to the AEBS, or do I have to temporarily plug it in via USB?
And if I have to boot the OS from USB, once I load it and it wants to restore from the TM, do I leave it on USB or move it to the AEBS? (I've heard the way the backups are created differs local vs. network.)
I know it's a lot of questions, but here are the two objectives...
1. Use TM in typical fashion, to recover the occasional deleted file.
2. The ability to perform a bare-metal point-in-time recovery (not always to the very last backup, but sometimes to a day or two before).

dmcnish wrote:
From what I'm reading, I should have a separate partition for each mac's TM to backup to.
Hi, and welcome to the forums.
You can, but you really only need a separate partition for the Mac that's backing up directly. Its backups won't be in a sparse bundle but in a Backups.backupdb folder, and if you ever have to (or want to) delete all of them (new Mac, certain hardware repairs, etc.) you can just erase the partition.
The first question is partitioning... Disk Utility sees my iMac's internal HD & DVD, but doesn't see the WD 1TB on the AEBS. (When TM is active it will appear in Disk Utility, but when TM ends, it drops off the Disk Utility list.)
I guess I have to connect it via USB to the iMac for the partitioning, right?
Right.
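If you'd rather script the partitioning than click through Disk Utility, it might look like this in Terminal (a sketch only - disk2, the partition names and the sizes are assumptions; check diskutil list first):
$ diskutil list
$ diskutil partitionDisk disk2 3 GPT JHFS+ TM-iMac 350G JHFS+ TM-mini 350G JHFS+ Installers R
GPT (GUID) is what lets the Intel Macs boot from the drive; R just gives the last partition whatever space remains.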
I've also read the benefits of keeping a copy of the install DVD's on the external drive... but this raises more questions.
Can I actually boot from the external WD 1TB while it is connected to the AEBS, or do I have to temporarily plug it in via USB?
I don't think so. I've never tried it, but even if it works, it will be very slow. So connect via F/W or USB (the PPC Mac probably can't boot from USB, but the Intels can).
And if I have to boot the OS from USB, once I load it and it wants to restore from the TM, do I leave it on USB or move it to the AEBS? (I've heard the way the backups are created differs local vs. network.)
That's actually two different questions. To do a full system restore, you don't load OSX at all, but you do need the Leopard Install disc, because it has the installer. See item #14 of the Frequently Asked Questions *User Tip* at the top of this forum.
If for some reason you do install OSX, then you can either "transfer" (as part of the installation) or "Migrate" (after restarting, via the Migration Assistant app in your Applications/Utilities folder) from your TM backups. See the *Erase, Install, & Migrate* section of the Glenn Carter - Restoring Your Entire System / Time Machine *User Tip* at the top of this forum.
In either case, if the backups were done wirelessly, you must transfer/migrate wirelessly (although you can speed it up by connecting via Ethernet).
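On the install-DVD image question: one approach (an untested sketch - the device node and volume name below are assumptions) is Disk Utility's New Image with the disc selected, or the Terminal equivalent:
$ drutil status                                  # identify the optical drive's device (disk1 below is a guess)
$ hdiutil create -srcdevice /dev/disk1 /Volumes/Installers/Leopard-Install.dmg
The iLife disc can be imaged the same way into a second .dmg on the same partition.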

Similar Messages

  • Best practice for setting up iCloud with multiple devices using a single AppleID

    Hi there
    Me and my wife have an iPhone each, and are looking at getting both of us using iCloud. The problem is that we only use one Apple ID for our music library.
    Is getting a separate Apple ID necessary for each device on iCloud, or can multiple devices have separate settings/photos/music, etc.?

    Using separate Apple IDs for iCloud is not necessary, but in most cases it is recommended.
    You can however choose to use separate Apple IDs for iCloud and continue to use the same Apple ID for iTunes, thereby being able to share all your purchases of music, apps and books.

  • Best practices for sharing iPhoto libraries across multiple Macs

    I have a pair of iMacs running 10.6.x and iPhoto '11, connected via Ethernet to an AirPort Extreme. I am looking for a way to share my iPhoto library between the two Macs.
    I found an Apple Support note on the topic (http://support.apple.com/kb/HT1198) which states that a disk image is a supported location for a shared iPhoto library (with limitations acceptable to me) and it strikes me that this would be a way of hosting a shared iPhoto library onto a NAS device.
    Am I missing a simpler solution or, more importantly, am I missing any blindingly obvious caveats? I'd love to hear from anybody who has tried this (successfully or otherwise) or anybody who has a better idea. I haven't bought a NAS device yet, so I'm open to alternative suggestions.
    Specific requirements:
    Members of my family use either one of a pair of iMacs.
    The only user who edits the iPhoto library is me and I only need read/write access from one machine.
    iPhoto library access limited to one user at a time is acceptable and practical in my case.
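    For reference, I imagine creating such a disk image on the NAS would look something like this in Terminal (the size, filesystem and paths are just my guesses):
    $ hdiutil create -size 100g -type SPARSE -fs HFS+J -volname iPhotoLibrary /Volumes/NAS/iPhotoLibrary
    $ hdiutil attach /Volumes/NAS/iPhotoLibrary.sparseimage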

    Easiest:
    If the other users only get to see the Library then just use iPhoto Sharing.
    Enable sharing in the iPhoto Preferences on the host machine, then go to the other machine and, in the same location, enable 'Look for Shared Libraries'.
    Other forms of sharing will give the users read/write access to the Library.
    A strong warning: If you're trying to edit the Library (that is, make albums, move photos around, keyword, make books or slideshows, etc.) or edit individual photos in it over a wireless connection, be very careful. Dropouts are a common fact of wireless networking, and should one occur while the app is writing to the database, your Library will be damaged. Simply put, I would not do this with my Libraries.
    Regards
    TD

  • Best practice for integrating oracle atg with external web service

    Hi All
    What is the best practice for integrating Oracle ATG with an external web service? Is it using the integration repository, or calling the web service directly from a Java class using a WS client?
    With Thanks & Regards
    Abhishek

    Using the Integration Repository might cause performance overhead depending on the operation you are doing; I have never used the Integration Repository for 3rd-party integration, therefore I can't comment on that approach.
    Calling the service directly from a Java client is an easy approach, and you can use the ATG component framework to support that by making the endpoint, security credentials, etc. configurable properties.
    Cheers
    R

  • Best practices for speeding up Mail with large numbers of mail?

    I have over 100,000 mails going back about 7 years in multiple accounts in dozens of folders using up nearly 3GB of disk space.
    Things are starting to drag - particularly when it comes to opening folders.
    I suspect the main problem is having large numbers of mails in those folders that are the slowest - like maybe a few thousand at a time or more.
    What are some best practices for dealing with very large amounts of mails?
    Are smart mailboxes faster to deal with? I would think they would be slower, because the original emails would tend not to get filed as often, leading to even larger mailboxes. And searching takes a long time, doesn't it?
    Are there utilities for auto-filing messages in large mailboxes to, say, divide them up by month to make the mailboxes smaller? Would that speed things up?
    Or what about moving older messages out of mail to a database where they are still searchable but not weighing down on Mail itself?
    Suggestions are welcome!
    Thanks!
    doug

    Smart mailboxes obviously cannot be any faster than real mailboxes, and storing large amounts of mail in a single mailbox is asking for trouble. Rather than organizing mail in mailboxes by month, however, what I like to do is organize it by year, with subfolders by topic for each year. You may also want to take a look at the following article:
    http://www.hawkwings.net/2006/08/21/can-mailapp-cope-with-heavy-loads/
    That said, it could be that you need to re-create the index, which you can do as follows:
    1. Quit Mail if it’s running.
    2. In the Finder, go to ~/Library/Mail/. Make a backup copy of this folder, just in case something goes wrong, e.g. by dragging it to the Desktop while holding the Option (Alt) key down. This is where all your mail is stored.
    3. Locate Envelope Index and move it to the Trash. If you see an Envelope Index-journal file there, delete it as well.
    4. Move any “IMAP-”, “Mac-”, or “Exchange-” account folders to the Trash. Note that you can do this with IMAP-type accounts because they store mail on the server and Mail can easily re-create them. DON’T trash any “POP-” account folders, as that would cause all mail stored there to be lost.
    5. Open Mail. It will tell you that your mail needs to be “imported”. Click Continue and Mail will proceed to re-create Envelope Index -- Mail says it’s “importing”, but it just re-creates the index if the mailboxes are already in Mail 2.x format.
    6. As a side effect of having removed the IMAP account folders, those accounts may be in an “offline” state now. Do Mailbox > Go Online to bring them back online.
    Note: For those not familiarized with the ~/ notation, it refers to the user’s home folder, i.e. ~/Library is the Library folder within the user’s home folder.
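    If you're comfortable in Terminal, steps 2 and 3 above can also be done like this (a sketch - the quotes matter because of the space in the file name):
    $ cp -R ~/Library/Mail ~/Desktop/Mail-backup            # step 2: back up the whole folder first
    $ rm ~/Library/Mail/"Envelope Index"                    # step 3: remove the index
    $ rm -f ~/Library/Mail/"Envelope Index-journal"         # ...and the journal file, if present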

  • Best practice for remote topic subscription with HA

    I'd like to create an orchestrator EJB, in cluster A, that must persist some data to a database of record and then publish a business event to a topic. I have two durable subscribers, MDBs on clusters B & C, that need to receive the events and perform some persistence on their side.
    I'd like HA so that a failure in any managed server would not interrupt the system. I can live with at-least-once delivery, but at the same time I'd like to minimize the amount of redundant message processing.
    The documentation gets a little convoluted when dealing with clustering. What is the best practice for accomplishing this task? Has anyone successfully implemented a similar solution?
    I'm using WebLogic 8.1 SP5, but I wouldn't mind hearing solutions for later versions as well.

    A managed server failure makes that server's JMS servers unavailable, which, in turn, makes the JMS servers' messages unavailable until either (A) the JMS server is migrated or (B) the managed server is restarted.
    For more discussion, see my post today on the topic "distributed destinations failover - can't access messages from other node". Also, you might be interested in the circa-8.1 migration white-paper on dev2dev: http://dev2dev.bea.com/pub/a/2004/05/ClusteredJMS.html
    Tom

  • Best Practice for the Service Distribution on multiple servers

    Hi,
    Could you please suggest best practices for the scenario below.
    Requirements: we will use all features in SharePoint (PowerPivot, Search, Reporting Services, BCS, Excel, Workflow Manager, App Management, etc.)
    Capacity: We have 12 servers, excluding the SQL Server.
    Please do not just refer to a URL; suggest something based on the requirements.
    Thanks 
    srabon

    How about a link to the MS guidance!
    http://go.microsoft.com/fwlink/p/?LinkId=286957

  • Best Practice for re-usable forms in multiple Countries

    Dear experts,
    I'm seeking suggestions for a design that could save development time on the multiple forms I have to produce for a multi-national client.
    In more detail, the client is asking us to create a number of forms in the SD area (order advice, delivery note/bill of lading, payment slips and in particular billing outputs) for different countries in North and South America.
    I'm striving to avoid creating separate forms for separate countries; ideally I'd want to create just one form per type (i.e. one form for billing, one form for delivery notes...) and re-use the same form for all countries, where only specific data would differ from one country to the other.
    We've worked on similar topics in the past, adding logic behind some texts which we've made "parametric", meaning different standard texts are printed in the same box depending on some organizational values (i.e. sales organization or company code), but we're looking for something "smarter", where having different pricing structures in different countries, for instance, might still work if those countries use the same billing form.
    Is there any best practice you'd like to share with me in this regard? Is there any new functionality SAP has delivered recently which helps in this direction?
    Many thanks to all who take the time to reply to my weird question.
    Regards,
    Fabio


  • Best Practice for VPC Domain failover with One M2 per N7K switch and 2 sups

    I Have been testing some failover scenarios with 4 nexus 7000 switches with an M2 and an F2 card in each. Each Nexus has two supervisor modules.
    I have 3 VDCs: Admin, F2 and M2.
    all ports in the M2 are in the M2 VDC and all ports on the F2 are in the F2 VDC.
    All vPC's are connected on the M2 cards, configured in the M2 VDC
    We have 2 Nexus representing each "site"
    In one site we have a vPC domain "100"
    The vPC Peer link is connected on ports E1/3 and E1/4 in Port channel 100
    The peer-keepalive is configured to use the management ports. This is patched from both Sups into our 3750s. (This will eventually be on an out-of-band management switch.)
    Please see the diagram.
    There are 2 vPC's 1&2 connected at each site which represent the virtual port channels that connect back to a pair of 3750X's (the layer 2 switch icons in the diagram.)
    There is also a third vPC that connects the 4 Nexuses together (po172).
    We are stretching vlan 900 across the "sites" and would like to keep spanning tree out of this as much as we can, and minimise outages based on link failures, module failures, switch failures, sup failures, etc.
    ONLY the management vlans (100, 101) are allowed on the port-channel between the 3750s, so vlan 900 spanning tree shouldn't have to make this decision.
    We are only concerned about layer two for this part of the testing.
    As we are connecting the vPC peer link to only one module in each switch (a single M2), we have configured object tracking as follows:
    n7k-1(config)# track 1 interface ethernet 1/1 line-protocol
    n7k-1(config)# track 2 interface ethernet 1/2 line-protocol
    n7k-1(config)# track 5 interface ethernet 1/5 line-protocol
    n7k-1(config)# track 101 list boolean or
    n7k-1(config-track)# object 1
    n7k-1(config-track)# object 2
    n7k-1(config-track)# object 5
    n7k-1(config-track)# end
    n7k-1(config)# vpc domain 101
    n7k-1(config-vpc-domain)# track 101
    The other site is the same, just 100 instead of 101.
    We are not tracking port channel 101, nor the member interfaces of this port channel, as this is the peer link, and apparently tracking upstream interfaces and the peer link is only necessary when you have ONE link and one module per switch.
    As the interfaces we are tracking are member ports of a vPC, is this a chicken-and-egg scenario when seeing if these 3 interfaces are up? Or is line-protocol purely layer 1, so that the vPC isn't downing these member ports at layer 2 when it sees a local vPC domain failure, causing the track to fail?
    I see most people are monitoring upstream layer-3 ports that connect back to a core. What about what we are doing - monitoring upstream (the 3750s) and downstream layer-2 (the other site) interfaces that are part of the very vPC we are trying to protect?
    We wanted all 3 of these to be down, for example if the local M2 card failed, the keepalive would send the message to the remote peer to take over.
    What are the best practices here? Which objects should we be tracking? Should we also track the peer-link port channel 101?
    We saw minimal outages using this design when reloading the M2 modules - usually 1-3 pings lost between the laptops in the different sites across the stretched vlan. Obviously no outages when breaking any link in a vPC.
    Any wisdom would be greatly appreciated.
    Nick

    Nick,
    I was not talking about the mgmt0 interface. The vlan that you are testing will have a link blocked between the two 3750 port-channels if the root is on the Nexus vPC pair.
    Logically your topology is like this:
             Nexus Pair
            /          \
       3750-1----------3750-2
    Since you have this triangle setup, one of the links will be in blocking state for any vlan configured on these devices.
    When you are talking about vPC and L3, are you talking about L3 routing protocols or just intervlan routing?
    Intervlan routing is fine. Running L3 routing protocols over the peer-link and forming an adjacency with a router upstream using L2 links is not recommended. The following link should give you an idea of what I am talking about here:
    http://bradhedlund.com/2010/12/16/routing-over-nexus-7000-vpc-peer-link-yes-and-no/
    HSRP is fine.
    As mentioned, the tracking feature's purpose is to avoid black-holing traffic. It completely depends on your network setup. I don't think you would need to track all the interfaces.
    JayaKrishna

  • Best Practice for Droid Gmail Contacts with Exchange ActiveSync?

    Hi, folks.  After going through an Address Book nightmare this past summer, I am attempting to once again get my contacts straight and clean.  I have just started a new job and want to bring my now-clean Gmail contacts over to Exchange.  The challenge is avoiding duplicate contacts, and then defining a go-forward strategy for creating NEW contacts so that they reside in both Gmail and Exchange without duplication.  Right now, my Droid is master and everything is fine.  However, once I port those contacts from Gmail onto my laptop, all hell breaks loose... Does Verizon finally have a Best Practice documented for this?  This past summer I spoke with no less than 5 different Customer Support reps and got 3 different answers... This is not an uncommon problem...

    In parallel to this post, I called Verizon for technical support assistance.  Seems no progress has been made.  My issues this past summer were likely a result of extremely poor-quality products from Microsoft, which included Microsoft CRM, Microsoft Lync (a new phone system they are touting, which is horrible), and Exchange.  As a go-forward strategy, I have exported all Gmail contacts to CSV for Outlook and have imported them into Exchange.  All looks good.  I am turning off phone visibility of Gmail contacts and will create all new contacts in Exchange.

  • Best Practices for Creating eLearning Content With Adobe

    As agencies are faced with limited resources and travel restrictions, initiatives for eLearning are becoming more popular. Come join us as we discuss best practices and tips for groups new to eLearning content creation, and the best ways to avoid complications as you grow your eLearning library.
    In this webinar, we will take on common challenges that we have seen in eLearning deployments, and provide simple methods to avoid and overcome them. With a little training and some practice, even beginners can create engaging and effective eLearning content using Adobe Captivate and Adobe Presenter. You can even deploy content to your learners with a few clicks using the Adobe Connect Training Platform!
    Sign up today to learn how to:
    -Deliver self-paced training content that won't conflict with operational demands and unpredictable schedules
    -Create engaging and effective training material optimized for knowledge retention
    -Build curriculum featuring rich content such as quizzes, videos, and interactivity
    -Track program certifications required by Federal and State mandates
    Come join us Wednesday May 23rd at 2P ET (11A PT): http://events.carahsoft.com/event-detail/1506/realeyes/
    Jorma_at_RealEyes
    RealEyes Connect


  • Best practice for replacing a package with equivalent, lots of deps

    I was having CPU issues (posted about a while back in another thread), which brought up the fact that they might have been related to the nvidia version at that time (325.15). As a result, I switched to nvidia-beta and nvidia-utils-beta from the AUR.
    nvidia from extra is now up to 331.20, and I was thinking of switching back so that I wouldn't always be surprised after a kernel update that no screens were found (AUR packages don't tend to flag updates just because linux updated). Not a big deal, as I just have to re-build the AUR package and I'm set. Anyway, I was going to switch back to the standard nvidia packages, but am not sure what to do about the dependencies on libgl, provided by nvidia-libgl-beta (a split package of nvidia-utils-beta):
    $ sudo pacman -S nvidia
    resolving dependencies...
    looking for inter-conflicts...
    :: nvidia and nvidia-beta are in conflict. Remove nvidia-beta? [y/N] y
    :: nvidia-utils and nvidia-utils-beta are in conflict. Remove nvidia-utils-beta? [y/N] y
    error: failed to prepare transaction (could not satisfy dependencies)
    :: nvidia-libgl-beta: requires nvidia-utils-beta
    $ sudo pacman -R nvidia-libgl-beta
    checking dependencies...
    error: failed to prepare transaction (could not satisfy dependencies)
    :: cairo: requires libgl
    :: freeglut: requires libgl
    :: glu: requires libgl
    :: libva: requires libgl
    :: qt4: requires libgl
    :: webkitgtk2: requires libgl
    :: xorg-xdriinfo: requires libgl
    $ sudo pacman -Rc nvidia-libgl-beta
    checking dependencies...
    :: avahi optionally requires gtk3: avahi-discover-standalone, bshell, bssh, bvnc
    :: avahi optionally requires gtk2: gtk2 bindings
    :: avahi optionally requires qt4: qt4 bindings
    :: avahi optionally requires pygtk: avahi-bookmarks, avahi-discover
    :: boost-libs optionally requires openmpi: for mpi support
    :: chromium-libpdf optionally requires chromium: default browser to use plugin in (one of the optional dependencies needs to be installed to use the library)
    :: dconf optionally requires gtk3: for dconf-editor
    :: ghostscript optionally requires gtk2: needed for gsx
    :: gvfs optionally requires gtk3: Recent files support
    :: harfbuzz optionally requires cairo: hb-view program
    :: imagemagick optionally requires librsvg: for SVG support
    :: jasper optionally requires freeglut: for jiv support
    :: jasper optionally requires glu: for jiv support
    :: jre7-openjdk optionally requires gtk2: for the Gtk+ look and feel - desktop usage
    :: libtiff optionally requires freeglut: for using tiffgt
    :: libwebp optionally requires freeglut: vwebp viewer
    :: mjpegtools optionally requires gtk2: glav GUI
    :: nvidia-utils-beta optionally requires gtk2: nvidia-settings
    :: pinentry optionally requires gtk2: for gtk2 backend
    :: pinentry optionally requires qt4: for qt4 backend
    :: smpeg optionally requires glu: to use glmovie
    :: v4l-utils optionally requires qt4
    :: wicd optionally requires wicd-gtk: needed if you want the GTK interface
    :: xdg-utils optionally requires exo: for Xfce support in xdg-open
    Packages (102): anycoloryoulike-icon-theme-0.9.4-2 arpack-3.1.2-2 bleachbit-1.0-1 cairo-1.12.16-1 chromium-31.0.1650.63-1 chromium-pepper-flash-stable-2:11.9.900.170-1
    cups-1.7.0-2 cups-filters-1.0.43-1 cups-pdf-2.6.1-2 darktable-1.4-2 dia-0.97.2-5 dropbox-2.6.2-1 emacs-24.3-4 enblend-enfuse-4.1.1-5 evince-gtk-3.10.3-1
    exo-0.10.2-2 farstream-0.1-0.1.2-3 ffmpeg-1:2.1.1-3 finch-2.10.7-4 firefox-26.0-2 flashplugin-11.2.202.332-1 foomatic-db-engine-2:4.0.9_20131201-1
    freeglut-2.8.1-1 geeqie-1.1-2 gegl-0.2.0-10 gimp-2.8.10-1 girara-gtk3-0.1.9-1 glew-1.10.0-2 glu-9.0.0-2 gmtp-1.3.4-1 gnome-icon-theme-3.10.0-1
    gnome-icon-theme-symbolic-3.10.1-1 gnome-themes-standard-3.10.0-1 gstreamer0.10-bad-plugins-0.10.23-7 gtk-engine-murrine-0.98.2-1 gtk-engines-2.21.0-1
    gtk2-2.24.22-1 gtk3-3.10.6-1 gtkspell-2.0.16-3 guvcview-1.7.2-1 hplip-3.13.11-2 hugin-2013.0.0-5 hwloc-1.8-1 impressive-0.10.3-8 jumanji-20110811-1
    libglade-2.6.4-5 libgxps-0.2.2-3 libpurple-2.10.7-4 libreoffice-base-4.1.4-1 libreoffice-calc-4.1.4-1 libreoffice-common-4.1.4-1 libreoffice-draw-4.1.4-1
    libreoffice-gnome-4.1.4-1 libreoffice-impress-4.1.4-1 libreoffice-writer-4.1.4-1 librsvg-1:2.40.1-3 libtiger-0.3.4-3 libunique-1.1.6-5 libva-1.2.1-1
    libva-vdpau-driver-0.7.4-1 libxfce4ui-4.10.0-1 libxfcegui4-4.10.0-1 lxappearance-0.5.5-1 meshlab-1.3.2-4 mpd-0.18.6-1 obconf-2.0.4-1 octave-3.6.4-6
    openbox-3.5.2-6 openmpi-1.6.5-1 pango-1.36.1-1 pangox-compat-0.0.2-1 pdf2svg-0.2.1-7 pidgin-2.10.7-4 poppler-0.24.5-1 poppler-glib-0.24.5-1
    pygtk-2.24.0-3 python2-cairo-1.10.0-1 python2-gconf-2.28.1-8 python2-opengl-3.0.2-5 qt4-4.8.5-7 qtwebkit-2.3.3-1 r-3.0.2-1 rstudio-desktop-bin-0.98.490-1
    screenkey-0.2-5 scribus-1.4.3-2 thunar-1.6.3-1 tint2-svn-652-3 truecrypt-1:7.1a-2 vlc-2.1.2-1 webkitgtk2-1.10.2-8 wicd-gtk-1.7.2.4-9 wxgtk-3.0.0-2
    wxgtk2.8-2.8.12.1-1 xfburn-0.4.3-6 xorg-utils-7.6-8 xorg-xdriinfo-1.0.4-3 xscreensaver-arch-logo-5.26-3 zathura-0.2.6-1 zathura-pdf-mupdf-0.2.5-3
    zukitwo-theme-openbox-20111021-3 zukitwo-themes-20131210-1 nvidia-libgl-beta-331.38-1
    Total Removed Size: 1756.12 MiB
    :: Do you want to remove these packages? [Y/n]
    As you might imagine, I'd prefer not to remove all of those packages just to switch my libgl-providing package and then re-install them.
    In digging around, I found this entry on downgrading packages without respecting dependencies.
    Is that the best method for doing what I describe above as well? Would I do something like `pacman -Rd nvidia-utils-beta` (without X running) and then install the packages from extra?

    It should be similar to switching to the nouveau driver: https://wiki.archlinux.org/index.php/Nouveau
    Just:
    # pacman -Rdds nvidia-beta nvidia-utils-beta
    # pacman -S nvidia nvidia-utils
    (The doubled -dd skips the dependency checks, so the packages that depend on libgl are left alone while the provider is swapped out.)

  • Is there a best practice for multi-location server setups, including Mac mail server?

    Afternoon all,
    Last year I set up a client with Snow Leopard Server, including hosting his mail on the server with Mac mail server and calendaring.  He now has plans to open other sites with the same setup; how can this be done using Mac server?  The implementation of a new server at the new site would be straightforward; my concerns/questions are:
    1. How would the two servers communicate?
        a.) Do they need to communicate?
    2. How will mail across the two sites work?
        a.) How will DNS work for email internally?
        b.) How will DNS work for email externally?
    3. How will calendaring work across the two sites?
    Is Mac server the best platform for moving ahead with this type of business need?
    Any help or direction would be greatly appreciated.
    Anthony

    Camelot,
    many thanks for the speedy reply.  Your comments are very helpful, thank you; if I may, I will give you some more information and ask some other questions.
    The offices will be from 5 miles to 25 miles apart; the new office, and the ones that follow, will be considered branches of the main office.  For example, the company name is Sunflower and it serves area 1; the new office will serve area 2, and so on.  So in theory I could leave the main server domain and mail MX record as sunflower and then add further servers as 2.sunflower.com, 3.sunflower.com as domains and MX records?  This would then provide unique mail records and users within the organisation, such as name@2.sunflower.com and name@3.sunflower.com, which doesn't look too good - I would prefer all users to be name@sunflower.  How can this be achieved?
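    As a sanity check once that's in place, I assume the MX records could be verified from Terminal like so (the domain names from my example above are hypothetical):
    $ dig +short MX sunflower.com      # main office mail record
    $ dig +short MX 2.sunflower.com    # branch office mail record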
    With regard to user activity, on the whole users will be fixed, but there will be floaters, such as managers and the owners, that may at times float from one office to the other and would benefit from logging into any machine and getting their profile.
    I have thought about VPNs, as I have achieved this type of setup with Microsoft server products, but I have found speed issues in some cases; how can this be achieved using OS X, and are there any how-to docs around?  In the Microsoft setup I achieved this using Netgear VPN firewalls, which of course adds an overhead; can this be achieved using OS X's own VPN software?
    So ultimately I would prefer to have the one domain without subs, "sunflower.com", and users able to log in to their profiles from either location.  The main location will remain as head office and the primary location, and the new ones will be satellites.
    I hope that covers all of your other questions; again, many thanks for your comments and time.
    Kind Regards
    Anthony

  • Best Practices for viewing home video on my Mac

    Come on people - THERE HAS TO BE A BEST PRACTICE out there for storing and viewing family video!!
    I'm the typical dad out there taking photos and video of my kids playing soccer, family trips, etc...
    When I get home I hook my camera up to my MacMini (which is hooked up to my Plasma TV - and backed up to an external HD) and within mere seconds, my photos are nicely captured in iPhoto. Friends and family come over, I flip on the MacMini, select iPhoto (or Front Row) and we enjoy great slide shows.
    I want to be able to have the same easy access to my videos -flip on my MacMini, start Front Row, view my various home movies, pick one and enjoy the show.
    What is the best way to do this?
    Current Technology
    Canon HV20
    iMovie 08
    iPhoto 08
    Front Row

    Thanks for the welcome WC! And thanks for the initial help.
    I have never exported video before (sad but true), so I have a couple of "workflow" questions for you or anyone else that knows this stuff:
    1) Should I assume I just hook my FireWire cable up from my Canon to the MacMini and everything just starts to import?
    2) Once the footage is in iMovie, I assume I can edit out some of the garbage (for example, my wife sometimes forgets to turn off the camcorder and she'll video her leg or the ground for minutes at a time - I'd like to trash that).
    3) Once edited, how do I move it into iTunes?
    I will probably need to take an iMovie class ...

  • What is best practice for using a SAN with ASM in an 11gR2 RAC installation

    I'm setting up a RAC environment. Planning on using Oracle 11g release 2 for RAC & ASM, although the db will initially be 10g r2. OS: RedHat. I have a SAN available to me and want to know the best way to utilise that via ASM.
    I've chosen ASM as it allows me to store everything, including the voting and cluster registry files.
    So I think I'll need three disk groups: Data (+ spfile, control #1, redo #1, cluster files #1), Flashback (+ control #2, redo #2, archived redo, backups, cluster files #2) and Cluster (cluster files #3). So that last one is tiny.
    The SAN and ASM are both capable of doing lots of the same work, and it's a waste to get them both to stripe & mirror.
    If I let the SAN do the redundancy work, then I can minimize the data transfer to the SAN. The administrative load of managing the discs is up to the Sys Admin, rather than the DBA, so that's attractive as well.
    If I let ASM do the work, it can be intelligent about the data redundancy it uses.
    It looks like I should have LUNs (Logical Unit Numbers) with RAID 0+1, and then mark the disk groups as external redundancy.
    Does this seem the best option?
    Can I avoid the third disk group that exists just for the voting and cluster registry files?
    Am I OK to have the lower-version Oracle 10gR2 DB on RAC 11gR2 and ASM 11gR2?
    TIA, Duncan

    Hi Duncan,
    if your storage uses SAN RAID 0+1 and you use "External" redundancy, then ASM will not mirror (only stripe).
    Hence theoretically 1 LUN per diskgroup would be enough. (External redundancy will also only create 1 voting disk, hence only one LUN is needed).
    However there are 2 things to note:
    -> Tests have shown that for the OS it is better to have multiple LUNs, since the I/O can be better handled. Therefore it is recommended to have 4 disks in a diskgroup.
    -> LUNs in a diskgroup should be the same size and should have the same I/O characteristics. If you bear in mind that your database may one day need more space (more disks), then you should use a disk size which can easily be added without wasting too much space.
    E.g.:
    If you have a 900GB database, does it make sense to use only 1 LUN of 1TB?
    What happens if the database grows, but only slightly above 1TB? Then you would have to add another 1TB disk... You lose a lot of space.
    Hence it makes more sense to use 4 disks of 250GB each, since the disks needed to grow the disk groups can be added more easily (just add another 250GB disk).
    O.k., there is also the possibility to resize a disk in ASM, but it is a lot easier to simply add an additional LUN.
    PS: If you use a "NORMAL" redundancy diskgroup, then you need at least 3 disks in your diskgroup (in 3 failgroups) to be able to handle the 3 voting disks.
    Hope that helps a little.
    Sebastian
