Question about keynote efficiency

I have a question about the efficiency of a Keynote presentation... let me explain...
I am creating a slide in which I want the entire contents of the slide, 10 objects (a pdf image, 8 shapes, and a text box), to fade out. Following the fade out, I want a text box to appear with a brief instruction.  Following a click, the text box with the brief instruction will fade out and the original 10 objects will fade back in.  I see several options available to accomplish the effect:
Option 1
I could create fade-out actions for the original 10 objects, have the "brief instruction" text box fade in and hold. Then, after a click, the "brief instruction" text box would fade out and the original 10 objects would fade back in. The important note here is that all these objects would be on the same slide.
Option 2
I could create fade-out actions for the original 10 objects and then move to a new slide. On the new slide would be the "brief instruction" text box, which would fade in after a short delay. Then, after a click, the "brief instruction" text box would fade away, followed by another new slide, which would have copies of the original 10 objects from the first slide. These 10 objects would then automatically fade back in after a short delay.
TL;DR: Option 1 has many objects and effects all on one slide, option 2 splits some of the objects and effects onto three different slides.
My question: Is one of these options more efficient than the other? I imagine in a short presentation it probably doesn't matter, but what about a presentation that starts to get pretty large, say 20 slides or more? Splitting everything up onto three different slides definitely helps me from an organization and planning standpoint.

Multiple slides.
Slides are free; all you are doing is duplicating the 10-object slide and inserting the instruction slide in between. Keynote is like any other slideware: keep the complexity to a minimum.
Given the weakness of the nomenclature in the animation timing box in Keynote, using the multiple-slide method means you can effect any changes very simply and not worry about scanning up and down an endless list of animation items.
K.I.S.S.
I hope that helps
Rowan

Similar Messages

  • Question about keynote theme

    Can someone tell me what is the name of this theme in this Keynote 08 review of Macworld (http://www.macworld.com/article/132979/2008/04/office_presentation.html)? I've seen it recently in a presentation and it's spectacular. Is this an older theme that didn't make it in Keynote 08? Is there a way to find it? Thanks!

    hi, the theme you're looking for is in Keynote 08, it's the Industrial theme.

  • Question about keynote

    How can I make presentations in Keynote like a slideshow?


  • Questions about editing with io HD or Kona 3 cards

    My production company is switching from Avid to Final Cut Pro. I have a few editing system questions (not ingesting and outputting, just questions about systems for the actual editors - we will have mac pros with either kona 3 or io HD for ingest and outputs)
    1) Our editors work from home, so they most likely will be using MacBook Pros (Intel Core 2 Duo 2.6GHz, 4GB) with eSATA drives to work on uncompressed HD. Will they be able to work more quickly in FCP if they are using the new Mac Pro 8-core (2 quad-core 2.8GHz Intel Xeons), or will the MacBook Pros be able to hold their own editing hour-long documentaries in uncompressed HD?
    2) Will having an AJA Kona 3 (if we get the editors Mac Pros) or Io HD (for the MacBook Pros) connected be a significant help to the editors and their process? Will it speed up their work? Will it allow them to edit sequences without having to render clips of different formats? Or would they be just as well off without the Io HD?
    I'm just trying to get a better understanding of the necessity of the AJA hardware in terms of working with the editors to help them do what they have to do with projects that have been shot on many formats- DVCPro tapes, Aiptek cameras that create QTs and P2 footage.
    Thanks

    1. With the Io HD, laptops become OK working with ProRes and simple eSATA setups. Without the Io, they can't view externally on a video monitor (a must in my book). It will not speed up rendering a ton, nor will it save renders of mixed formats. The idea is to get all source footage to ProRes with the Io; the Io also relieves the CPU of converting ProRes to something you can monitor externally on a video monitor, and it records back to any tape format you want... all in real time.
    2. Kona 3s on towers would run circles around render times on a laptop, no matter what the codec, but the Kona does not really speed renders up. That's a function of the CPU and just how fast it is (more CPUs at faster speeds will speed up render times).
    I'd recommend you capture to ProRes with the Io or the Kona 3 and don't work in uncompressed HD. You gain nothing quality-wise at all, and you only use up a ton of disk space (6 times the size, in fact) capturing and working in uncompressed HD, which, from your post, you're not shooting anyway. The lovely thing about ProRes is that it's visually lossless, efficient, and speeds up the editing process. Mixing formats can be done, but it's better to go to ProRes for all source footage and edit that way.
    With either the Kona or the Io, you then can output to uncompressed HD tape... that's what they do for you no matter what codec you've edited in. ProRes is designed to be the codec of choice for all HD projects when you're shooting different formats especially... Get them all singing the same tune in your editing stations, and you'll be a much happier camper. Only reason to buy laptops is portability... otherwise you're much better off with towers and the Kona 3 speed wise.
    Jerry
    Message was edited by: Jerry Hofmann

  • Question about size of ints in Xcode

    Hello. I have a few quick questions about declaring and defining variables and about their size and so forth. The book I'm reading at the moment, "Learn C on the Mac", says the following in reference to the difference between declaring and defining a variable:
    A variable declaration is any statement that specifies a variable's name and type. The line "int myInt;" certainly does that. A variable definition is a declaration that causes memory to be allocated for the variable. Since the previous statement does cause memory to be allocated for myInt, it does qualify as a definition.
    I always thought a definition of a variable was a statement that assigned a value to a variable. If a basic declaration like "int myInt;" does allocate memory for the variable and therefore is a definition, can anyone give me an example of a declaration that does not allocate memory for the variable and therefore is not a definition?
    The book goes on, a page or so later, to say this:
    Since myInt was declared to be of type int, and since Xcode is currently set to use 4-byte ints, 4 bytes of memory were reserved for myInt. Since we haven't placed a value in those 4 bytes yet, they could contain any value at all. Some compilers place a value of 0 in a newly allocated variable, but others do not. The key is not to depend on a variable being preset to some specific value. If you want a variable to contain a specific value, assign the value to the variable yourself.
    First, I know that an int can be different sizes (either 4 bytes or 8 bytes, I think), but what does this depend on? I thought it depended on the compiler, but the above quote makes it sound like it depends on the IDE, Xcode. Which is it?
    Second, it said that Xcode is currently set to use 4-byte ints. Does this mean that there is a setting that the user can change to make ints a different size (like 8 bytes), or does it mean that the creators of Xcode currently have it set to use 4-byte ints?
    Third, for the part about some compilers giving a newly allocated variable a value of 0, does this apply to Xcode or any of its compilers? I assume not, but I wanted to check.
    Thanks for all the help, and have a great weekend!

    Tron55555 wrote:
    I always thought a definition of a variable was a statement that assigned a value to a variable. If a basic declaration like "int myInt;" does allocate memory for the variable and therefore is a definition, can anyone give me an example of a declaration that does not allocate memory for the variable and therefore is not a definition?
    I always like to think of a "declaration" to be something that makes no changes to the actual code, but just provides visibility so that compilation and/or linking will succeed. The "definition" allocates space.
    You can declare a function to establish it in the namespace for the compiler to find but the linker needs an actual definition somewhere to link against. With a variable, you could also declare a variable as "extern int myvar;". The actual definition "int myvar;" would be somewhere else.
    According to that book, both "extern int myvar;" and "int myvar;" are declarations, but only the latter is a definition. That is a valid way to look at it. Both statements declare something to the compiler, but only the second one defines some actual data.
    First, I know that an int can be different sizes (either 4 bytes or 8 bytes, I think), but what does this depend on? I thought it depended on the compiler, but the above quote makes it sound like it depends on the IDE, Xcode. Which is it?
    An "int" is supposed to be a processor's "native" size and the most efficient data type to use. A compiler may or may not be able to change that, depending on the target and the compiler. If a compiler supports that option and Xcode supports that compiler and that option, then Xcode can control it, via the compiler.
    Second, it said that Xcode is currently set to use 4-byte ints. Does this mean that there is a setting that the user can change to make ints a different size (like 8 bytes), or does it mean that the creators of Xcode currently have it set to use 4-byte ints?
    I think that "setting" is just not specifying any option to explicitly set the size. You can use "-m32" or "-m64" to control this, but I wouldn't recommend it. Let Xcode handle those low-level details.
    Third, for the part about some compilers giving a newly allocated variable a value of 0, does this apply to Xcode or any of its compilers? I assume not, but I wanted to check.
    I don't know for sure. Why would you ask? Are you thinking of including 45 lines of macro declarations 3 levels deep to initialize values based on whether or not a particular compiler/target supports automatic initialization? Xcode currently supports GCC 3.3, GCC 4.0, GCC 4.2, LLVM GCC, Clang, and Intel's compiler for building PPC, i386, and x86_64 code in both debug and release, with a large number of optimization options. It doesn't matter what compiler you use or what its behavior is - initialize your variables in C.

  • A Question about LV Database Connectivi​ty Toolkit

    Hello everyone!
    I have a question about using LabVIEW Database Connectivity Toolkit 1.0.2 that eagerly needs your help. I don't know how to programmatically create a new Microsoft Access (.mdb) file (not a new table in an existing database) using LabVIEW Database Connectivity Toolkit 1.0.2. As you know, usually we can set up the connection by creating a Universal Data Link (.udl) file and inputting the path to the DB Tools Open Connec VI in the LabVIEW Database Connectivity Toolkit. However, searching for a table within an existing database containing a great many tables is a toilsome job. If I want to use a new database file, with the date and time string as its name, to log my acquisition data in each measurement process, how can I do that? I am sure some of you can resolve my question, and thanks very much for your help.

    I don't know what your real design considerations are here but, from what I understand from your post, this is a really bad way to go about the process of logging data -- IF you want to be able to do significant ad hoc or stored-procedure analyses after it has been collected. Using separate MDB files for data that ONLY differs by one field (namely the date) is not the most efficient way to organize it. What would be much more efficient would be a joined table including the date and a reference ID of some sort for the various measurements that were done. That way your stored procedures for looking at ALL measurements of type X would be very simple, going across ALL dates. Making such a comparison across multiple MDB files is a much more challenging process, AND doing the original data collection that way doesn't really gain you anything.
    Generally, if something is difficult to do in the DCT (Database Connectivity Toolkit) it's because it's a "not good thing" to do within MDBs. I know that others probably disagree with that, but I've worked with Access since its initial release, and with other RDBMSs prior to that, both through compiled tools, Unix scripts, etc. You may, of course, still choose to proceed in the way you've described, and that may work excellently for you.

  • Questions about CFIMAGE

    I'm planning to use CFIMAGE on a website and had a few questions about what would be the most efficient way to do this.  I have a lot of images on some pages, so it's important to load them the most efficient & fastest way possible.
    I noticed that by default, the writeToBrowser function displays images as a .png image.  Is there a reason for this?  Does that mean it would be best for my original image to be a .png as well, or would it load just as quickly if I start off with a .jpg file? 
    Is there a parameter in the writeToBrowser function to add an alt tag?
    Thanks

    Hi,
    I noticed that by default, the writeToBrowser function displays images as a .png image. Is there a reason for this? Does that mean it would be best for my original image to be a .png as well, or would it load just as quickly if I start off with a .jpg file?
    If you don't specify a value for the "format" attribute (in the <cfimage action="writeToBrowser"> tag), it outputs the image in PNG format by default. If you want to output in JPG format, set the format attribute as format="jpg".
    Note: You can't output the image in GIF format if you use the "writeToBrowser" action; when you try to do so, you will still get it in PNG format.
    Is there a parameter in the writeToBrowser function to add an alt tag?
    Yeah, but you may need to install the ColdFusion 8 Updater 1, which supports all the image attributes.
    http://www.petefreitag.com/item/670.cfm
    You may also try Ben Nadel's "ImageWriteToBrowserCustom" UDF, which can be downloaded here:
    http://www.bennadel.com/blog/846-Styling-The-ColdFusion-8-WriteToBrowser-CFImage-Output.htm
    HTH

  • Question about "synchronized" and "wait"

    Hello, everyone!
    I have seen a piece of code like this,
    synchronized (lock) {
        // do operation 1
        lock.wait();
        // do operation 2
    }
    I think in the above piece of code, when a thread finishes operation 1, it will release the lock and let other threads waiting for the lock have a chance to run the same block of code. Am I correct?
    If I am correct, I have a further question: a "synchronized" block means a set of operations that cannot be interrupted, but when the "wait" method is invoked, the thread running in the "synchronized" block is suspended and releases the lock, and other threads then have a chance to enter the "synchronized" block. In short, the execution inside the "synchronized" block is interrupted. So I wonder whether the above piece of code is correct and conforms to the principle of "synchronized". And what is its real use?
    Thanks in advance,
    George

    George wrote:
    Thanks, pkwooster buddy! I just wondered whether "wait inside a synchronized block" is thread safe. Please look at the following example:
    public class Foo {
        int mVal = 0;
        public final Object mLock = ...;
        public void doIt() {
            synchronized (mLock) {
                mVal = ...;
                mLock.wait();
                if (mVal == ...) {
                    // do something
                } else {
                    // do something else
                }
            }
        }
    }
    If we have two threads, T1 and T2, enter the block in their respective order, and T1 is woken up first, T2 may have tampered with T1's execution path because T2 changed mVal while T1 was asleep. So T2 manipulating the instance field mVal is a side effect.
    You're welcome. wait() and synchronized are thread safe. When you do the wait() you give up the lock and the other threads get a chance to run. When you exit the wait() you get the new value of the mVal variable, which may have been changed. This is exactly what you want. To make it not thread safe you could do:
    int temp = mVal;
    wait();
    if (temp == ...)
    In this case the variable temp contains the old value of mVal.
    George wrote:
    I think the safest way is to never wait inside a synchronized block, but it is less efficient. What do you think about the trick of waiting inside a synchronized block? I think you are very experienced in the thread field, from your kind reply.
    Thanks in advance,
    George
    wait(), notify() and notifyAll() are very useful. You need them when you want threads to cooperate in a predictable manner. I recommend that you review the section on consumer and producer classes and wait and notify in the Threads Tutorial. It gives good examples. For a practical example of this you could also take a look at my Multithreaded Server Example. It implements a simple chat server that switches String messages. Look especially at the send(), doSend() and popMessage() methods in the StreamConnection class. These allow the receive thread of one user to send messages out using the send thread of another user. If you have questions about that example, please post the questions on that topic.
    Hope this helps.

  • Question about PHP

    I am completely new to programming and have enjoyed Linux so much, that I would like to go into PHP next. Currently I am a University student with a dead end major (Russian Studies). I am looking to get a certificate in PHP over the course of this next year and to find a little better job with it than I could get with my current degree. I would do a Google search for this, however, I have come to like Arch and the community that surrounds it and wanted to know if any you fine web programmers out there, had any suggestions for free online courses in PHP or book that could be purchased. I was also wondering what a good PHP certification exam to take would be. I have seen the Zend exam and was considering doing a test prep for this after I learn a little more about PHP and SQL. Thank you for your help and suggestions!

    Berticus wrote:
    As  I said, to most people, taxonomy doesn't matter.  It does matter to other people, and I don't mean people like me who have these little pet peeves.
    It matters to people who know a lot of languages and need to know when to use what language. There isn't a single programming or scripting language that can be everything, so it's important for people, mostly software engineers, to know the taxonomy so they can pick the language most optimal for what they need to get done. Most of the time you're dealing with very complex systems: instead of using one language for the whole system, you'll find you'll be using CORBA to handle hardware, C++ to handle the interface, and C to tie everything together, or something like that.
    I mean, when you're differentiating between an interpreted language (script) and a program, the issue is efficiency and speed. No matter what you do, an interpreted language is inherently slower and less efficient than a compiled programming language. Even Java, which is compiled, is compiled to Java bytecode or something like that, and requires the Java Virtual Machine to interpret it (that's why Java gets its own branch). I believe it's the fastest interpreted language, but how does it compare to a natively compiled program? It's still slower.
    Even when you know you're going to use a programming language, you still have choices, because programming languages can be split into high-, middle-, and low-level languages. Reasons for using different levels come down to how quickly you need to write the program, how portable it needs to be, how easily other people should be able to read it, and whether it needs low-level abilities such as handling memory directly. Then there's also the question of how your program is going to flow. Is it functional or object oriented?
    It's not so much an opinionated matter when you think about it. It's more along the lines of: do you need this knowledge or not? For most people, and I'm willing to bet that's everybody who posts in this thread, that information is not important; they don't need to know it, because they handle very, very simple applications compared to the complex systems that do indeed require the programmer or scripter to know the difference.
    Actually, the difference between interpreted and compiled (and faux-compiled) languages is separate from the difference between programming and scripting languages: interpreted languages can also be programming languages, and compiled languages can be used for scripting (although this happens very little and is really pointless and tedious to do).
    'Scripting' usually refers to code meant to extend a framework or program separate from the script itself, whereas 'programming' is creating applications that stand on their own, regardless of whether they're run through an interpreter at runtime. The difference is a bit vague, and you are right that interpreted languages are often used for scripting, but it is not necessary. Look at something like Python: it can be used both for scripting, automating tasks quickly by using its immense library, and for creating standalone applications, which would be programming. (PHP is virtually always scripting, though.)
    Either way I prefer the term 'coding'

  • Question about Local Variables (Multiple answers welcomed!)

    A couple of questions about Local Variables
    1. Programmers always say: “Do not abuse Local Variables.” I’d like to know: when and where are local variables most efficiently used?
    2. If I have to create a couple of local variables, is there any way to “clone” them without going through the repetitive “create/local variables” mouse clicks each time? (When I try to copy and paste, it creates a brand-new variable instead of another instance of the one I am trying to reproduce.)
    3. Which is faster in execution: Updating a variable through a) writing to property node/value or b) through local variable
    Everyone’s input is welcomed, so if this question is already answered, please
    feel free to add additional comments/answers!

    1. Use local variables in user interface code and nowhere else. The only exception is using a local variable of a cluster output to define the datatype for a bundle-by-name node.
    2. You can drag copy them then right click to get to a menu of all the currently defined controls and indicators on the VI.
    3. B. The problem with (a) is that it forces a thread switch to the user interface thread, which can take time if you aren't already in it, and it's a very convoluted process under the hood. NI's advice: never update indicator values through a property node unless you absolutely, positively can't figure out some other way of doing it.
    Mike...
    Certified Professional Instructor
    Certified LabVIEW Architect
    LabVIEW Champion
    "... after all, He's not a tame lion..."
    Be thinking ahead and mark your dance card for NI Week 2015 now: TS 6139 - Object Oriented First Steps

  • Questions about Leopard

    First of all, I've watched the WWDC keynote at least 4 times now, and I can't wait for Leopard to come out. However, I've got a couple of concerns. I don't want Boot Camp on my Mac. Does anyone know if you can uninstall it (I would think that it should be possible)? I don't want anything at all to do with Windows. I've also heard some rumors about Parallels being on Leopard. First, I don't know too much about Parallels. But what I do know (and think to be correct) is that Parallels allows you to run Windows apps on a Mac without having to use Boot Camp and the Windows OS. My question about Parallels is really: what are the pros and cons of having it on your system? Thanks in advance for your help.

    Unless Boot Camp drastically changes from now until release, it merely enables those with Windows installation CDs to install Windows on their Macs. It does not enable Windows applications to run on their own without Windows being installed. For that you need WINE or CrossOver to be installed presently. Parallels is like Boot Camp but without the need to have a separate boot to install Windows applications (only an enclosed boot that runs side by side with Mac OS X), and without the need to partition your hard disk to install Windows which Boot Camp presently does.
    Ignore rumors. No one really knows what will be on Leopard except developers who have signed a non-disclosure agreement. They would be breaking their agreement posting here. The Terms of Use on the right specifically forbid us from discussing unreleased products or rumors. Please read them more carefully, and don't ask questions about Leopard which are not answered on http://www.apple.com/macosx/

  • Some question about georaster?

    Hello,
    I have some questions about how to create a high-efficiency GeoRaster model. I have 3 TB of raster data. Its band count ranges from 8 to 12 bands, and the resolution is 0.5 meter. In order to preserve the original information of the raster data, I am considering not using SDO_GEOR.mosaic; this means that each import file is a GeoRaster object. I keep the original data uncompressed, and I use JPEG-B to compress the pyramid data.
    Can anyone tell me whether my GeoRaster model is OK, or suggest some areas for improvement?
    Thanks to anyone who has looked at my page.

    I don't have specific performance numbers on GeoRaster using bigfile tablespaces. "A bigfile tablespace with 32K blocks can contain a 128 terabyte datafile," so it should be doable with GeoRaster considering your data size is just 8 TB. It should have some advantages over smallfile tablespaces; for example, it would be easier to manage because it has only one datafile. But note, "Bigfile tablespaces are intended to be used with Automatic Storage Management (ASM) or other logical volume managers that support striping or RAID, and dynamically extensible logical volumes," and "Using bigfile tablespaces on platforms that do not support large file sizes is not recommended and can limit tablespace capacity." Please refer to the Oracle Database Administrator's Guide.
    thanks

  • Questions about iWork06

    Am I holding my mouth wrong? Is it some kind of internet or discussion-group bad breath? I've posted a pretty basic question (twice) in the last week regarding iWork06 and compatibility with iWork05, plus a question about iWork06's compatibility with EndNote. If nobody has an answer, I understand.
    But I'm wondering: if I can't get answers here, where can I? I guess I could call the Apple online store. But Mac users are usually such a helpful group.
    I notice folks freely get on this discussion both to complain about iWork and Apple's upgrade practices and also to praise and promote them, not exactly what you would expect from a "support" discussion group.

    Sometimes people just don't want to respond, because the questions are really redundant (no offence)
    I'm thinking of purchasing iWork06. I frequently share documents and presentations between my iMac, iBook and Windows PC at work. I only want to buy the single-user iWork and put it on my Mac mini.
    Great. good idea.
    If I upgrade to iWork06 on my mini, will I have a lot of compatability problems when transfering Pages docs and Keynote presentations to my iMac or iBook where I have iWork05?
    Why not upgrade all machines? You get a single-user license, not a single-machine license.
    Also, did I understand correctly from an earlier post, that Pages 2 supports EndNote?
    I haven't noticed, but try doing a search in the Pages forum.
    I notice folks freely get on this discussion both to complain about iWork and Apple's upgrade practices and also to praise and promote them, not exactly what you would expect from a "support" discussion group.
    You get people who want a soapbox to vent their frustrations, you get people who are asking honest questions, and you get people who are doing their best to help others.
    You have to remember those three groups and not lump everyone together.

  • Question about the FRONT ROW REMOTE

    OK, I have 2 questions about this remote... I ordered a MacBook Pro and am waiting for it.
    Well, the 1st question is: has anyone used PowerPoint or Keynote with this remote? That would be great for keeping me mobile while giving my speeches.
    2nd: Is the IR "eye" in the computer sensitive to other products? Maybe a cell phone that has the OBEX all locked up (thanks VERIZON)...
    any info would be great
    steve

    Sorry to disappoint you, but I don't think it is that kind of a remote. It seems to be designed strictly for audio/video software. I have tried it with both Keynote and PowerPoint, and all it did was affect iTunes, which was running in the background.
    As far as other IR devices; I aimed all the IR remotes I have and tried different buttons. Nothing happened. But things like TV remotes usually have to be coded to the specific devices you wish to have them operate. If you purchase a Universal Remote for your TV, DVD, VCR, etc., you have to tell it what the brand and model are, usually from a list of codes that come with the remote.

  • General question about Mac hardware

    Hello. I have always used Windows. I am considering picking up a used Macbook. I have some questions about hardware. Chiefly, is there a specific reason that Apple is still dedicated to the Core2Duo processors, even in the highest-end Pros? With the advent of the Intel Core i3, i5 and i7 processors, many PC laptop manufacturers have embraced them, especially the better mainstream manufacturers, like Sony and Toshiba. I know a lot about Windows, but I don't know much about OSX.
    Is there something in the software that allows for a Core2Duo on OSX to run as efficiently or more so than an i3 or i5 on Windows 7? As it stands, comparing strict hardware profiles (processors in particular) between what appear to be equivalent current Macbooks and Windows 7 PCs, the Sonys and Toshibas seem to have a distinct advantage.
    Sorry if this is a tired question or whatever. Believe me, I'm not asking "WHAT'S BETTER? A MAC OR A PC?" Really.
    Many thanks to anyone who can point me to an answer.
    Chris

    Well, as to whether there is anything that makes OS X run better on the C2D rather than an i3 or i5: I have never used an i3-based computer, but the i5 Mac I used was very fast (I tried the i5 27" iMac). I don't know why they are currently still using the C2D over that. I can say that the i5 didn't feel any faster; even the quad-core Mac Pro didn't feel faster. The main advantage of more cores is in doing more at once; the biggest difference is seen when running things like video editors. There is also a lot more to making a computer fast than just the CPU; the chipset, system bus, video card, and hardware controllers all make a huge difference. Apple has always used very good quality hardware; there have even been times when MacBook Pros have been rated as the best laptops to run Windows. But OS X will run faster on the same hardware because it is a more streamlined OS that uses the hardware more efficiently. Apple works hard to make the OS work as well as it looks, and Snow Leopard runs very well on the current hardware. I have no doubt that Apple will in the near future come out with something newer and better, but my guess is as good as anyone's as to what chip they will use.
