Presentation with many large QT files

hey folks! I just started using Keynote earlier this week and am about to go overseas to give a presentation with it in a couple of days.
I have a quick question about the recommended usage of movie files. I will have around 50 or so movies ranging from 10 seconds to 3 minutes. What is the best codec to use? I'm currently hoping that H.264 will work, as it keeps file sizes small, but I'm interested in hearing what you folks have to say.
Also, is it possible to link to the movies instead of embedding them? What solution is best?
much thanks!
-Jason
P.S. This is with iWork '08.

Heya Steeko!
Thanks for the suggestion! I was able to keep the QuickTimes small by re-exporting them with QuickTime Pro as H.264, so file size isn't really an issue. It's more a question of the recommended way to deal with a large number of QuickTimes in a Keynote presentation.
What's going to give me the best performance? Storing the movies inside the .key file? Outside, as references? The H.264 codec, or something else?
Cheers!
-jason

Similar Messages

  • InDesign very slow when opening a document with many large photographs

    When opening a book file linked with many large photographs (up to 1.3 GB), InDesign CS4 and CS3 (CS2 did not have this behaviour) take up to 10 minutes, or even more, to open one of the 16-page chapters of the book. As it's opening the file, InDesign is completely unresponsive. It helps to view the whole document at 5% so that all the images 'load'. The preview settings are on the default Typical setting. When the link to the folder with the images is broken, InDesign opens the document very fast. It seems that on opening the document, InDesign CS4 (and CS3) has to rebuild the preview files as if the settings were on High rather than Typical in Preferences. Is there any way around this?

    InDesign generates the preview upon import; in fact, when the original image is missing, InDesign still previews the image, so it does not need the original file. Fast Display is not an option, since all images are greyed out. To me it seems that InDesign CS4 and CS3 re-link to the original files every time a document is opened. After churning away to re-link itself to the original high-res files, it becomes responsive again. Yet this can take more than 5-10 minutes when the linked photographs are very big. CS2 did not behave this way; it did so only when High Quality Display was selected, which is understandable.

  • Attempted to mail an email with a large attachment file.  One of the addresses was bad.  When my Outlook is running, the Mac tries to send it and shows the progress.  However, when I look in my Outbox the files are not there.  It does show up in the Outb

    I attempted to send an email with a large attachment. One of the addresses was bad. When my Outlook is running, the Mac tries to send it and shows the progress. However, when I look in my Outbox the files are not there. It does show up in the Outbox progress section, but I cannot delete it while it is there.
    Where do these files reside?
    Is there a hidden Outbox?
    MacBook Pro, Mac OS X (10.7.1)

    If you think getting your web pages to appear OK in all the major browsers is tricky, then dealing with email clients is way worse. There are so many of them.
    If you want to bulk email yourself, there are apps for it and their templates will work in most cases...
    http://www.iwebformusicians.com/Website-Email-Marketing/EBlast.html
    This one will create the form, database and send out the emails...
    http://www.iwebformusicians.com/Website-Email-Marketing/MailShoot.html
    The alternative is to use a marketing service if your business can justify the cost. Their templates are tested in all the common email clients...
    http://www.iwebformusicians.com/Website-Email-Marketing/Email-Marketing-Service.html
    "I may receive some form of compensation, financial or otherwise, from my recommendation or link."

  • I did a presentation with many images and animations; now I need to change only the images without modifying the related animations. How can I do it?

    I did a presentation with many images and animations; now I need to change only the images without modifying the related animations. How can I do it?
    I use Keynote '09.

    Select the image you want to change and go to Format menu > Advanced > Define as Media Placeholder (or Command-Option-Control-I).

  • Problem with parsing large XML files chunked over HTTP

    I'm trying to isolate a bug that was introduced when upgrading the JRE in use from Java 7u51 to 7u71 without changing any code. The problem appears to be very similar to: Bug ID: JDK-8027359 XML parser returns incorrect parsing results.
    Further investigation showed that it was also introduced in the same version (7u71) where that fix was applied. Unlike that bug, though, my XML is marked as version 1.0. It also appears to happen only with large XML files, on the order of 10 MB or so.
    The closest I've been able to narrow it down: the code uses JAXB to unmarshal a stream that the debugger tells me is an org.apache.http.conn.EofSensorInputStream / org.apache.http.impl.io.ChunkedInputStream. The exception I get is not consistent, but it typically appears to come from chunks being overwritten or shuffled, resulting in letters appearing in attributes that are actually numbers, or, as in the following, an attribute "testAttribute" being partially overwritten by the end of a timestamp from a different section of the XML.
    javax.xml.bind.UnmarshalException
    - with linked exception:
    [javax.xml.stream.XMLStreamException: ParseError at [row,col]:[1,98748]
    Message: Attribute name "testAttribu00Z" associated with an element type "testElement" must be followed by the ' = ' character.]
      at com.sun.xml.internal.bind.v2.runtime.unmarshaller.UnmarshallerImpl.handleStreamException(UnmarshallerImpl.java:421)
      at com.sun.xml.internal.bind.v2.runtime.unmarshaller.UnmarshallerImpl.unmarshal0(UnmarshallerImpl.java:357)
      at com.sun.xml.internal.bind.v2.runtime.unmarshaller.UnmarshallerImpl.unmarshal(UnmarshallerImpl.java:334)
    Caused by: javax.xml.stream.XMLStreamException: ParseError at [row,col]:[1,98748]
    Message: Attribute name "testAttribu00Z" associated with an element type "testElement" must be followed by the ' = ' character.
      at com.sun.org.apache.xerces.internal.impl.XMLStreamReaderImpl.next(XMLStreamReaderImpl.java:598)
      at com.sun.xml.internal.bind.v2.runtime.unmarshaller.StAXStreamConnector.bridge(StAXStreamConnector.java:181)
      at com.sun.xml.internal.bind.v2.runtime.unmarshaller.UnmarshallerImpl.unmarshal0(UnmarshallerImpl.java:355)
      ... 6 more
    Here's some code that seems to reproduce it if you can connect to an XML server that returns a large chunked XML file:
      // 'factory' and 'unmarshaller' are created once elsewhere, e.g.:
      //   XMLInputFactory factory = XMLInputFactory.newInstance();
      //   Unmarshaller unmarshaller =
      //       JAXBContext.newInstance(JaxBObjectOfResponse.class).createUnmarshaller();
      SchemeRegistry registry = new SchemeRegistry();
      registry.register(
                    new Scheme("http", 80, PlainSocketFactory.getSocketFactory()));
      HttpClient client = new DefaultHttpClient(new BasicClientConnectionManager(registry));
      String url = "http://someUrlReturningAlargeChunkedXML";
      HttpGet method = new HttpGet(url);
      HttpResponse response = client.execute(method);
      InputStream inputStream = response.getEntity().getContent();
      XMLStreamReader responseReader = factory.createXMLStreamReader(inputStream);
      JAXBElement<JaxBObjectOfResponse> wot = unmarshaller.unmarshal(responseReader, JaxBObjectOfResponse.class);
    If you connect to the same service using URL.openStream(), there is no error. If I read bytes directly and write them to a file, there is no error. The error only happens when I try to unmarshal the stream, the file is large, and I'm running Java 7u71 (or later). It can be consistently repeated with the JSP webapp that I'm using, but it didn't show up when I used the same code with a Wikipedia dump XML file.
    How can I unmarshal in a different way to avoid this problem? Or, how can I better isolate the bug so it can be posted to the appropriate bug system?
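    One workaround consistent with the observations above (reading the bytes directly shows no corruption) is to buffer the whole entity before unmarshalling, so the StAX reader never touches the chunked stream directly; the cost is holding the ~10 MB response in memory. A minimal sketch, reusing the same 'factory' and 'unmarshaller' as the snippet above:
      // Drain the chunked response fully, then parse from a plain in-memory stream.
      byte[] body = EntityUtils.toByteArray(response.getEntity()); // org.apache.http.util.EntityUtils
      XMLStreamReader bufferedReader =
              factory.createXMLStreamReader(new java.io.ByteArrayInputStream(body));
      JAXBElement<JaxBObjectOfResponse> result =
              unmarshaller.unmarshal(bufferedReader, JaxBObjectOfResponse.class);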

    Apparently, adding the Woodstox XML libraries avoids the bug. Is there anyone who can reproduce this on another system? Were there any changes to the StAX implementation between u67 and u71 that may have introduced a bug like this?
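    If Woodstox is on the classpath, one way to be sure it is actually used (instead of the JDK's built-in StAX parser) is to instantiate its factory explicitly. A minimal sketch, assuming woodstox-core and stax2-api are available:
      // Force the Woodstox StAX implementation rather than relying on the
      // service-loader lookup, which may still pick the JDK parser.
      XMLInputFactory factory = new com.ctc.wstx.stax.WstxInputFactory();
      XMLStreamReader reader = factory.createXMLStreamReader(inputStream);
      JAXBElement<JaxBObjectOfResponse> result =
              unmarshaller.unmarshal(reader, JaxBObjectOfResponse.class);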
    Edit: When setting the logging level to DEBUG, I once saw the overwritten buffer being logged as if that was what was received (as in the testAttribu00Z example above). I can't repeat that anymore, though, and very rarely it parses with no exception (though the result may still have been corrupted). Now the error seems to land consistently on one of the buffer boundaries, as in:
    17:08:09,705 DEBUG wire:63 - << "2000[\r][\n]"
    17:08:09,705 DEBUG wire:77 - << "trend>....OTHER XML...<trend hours=""
    17:08:09,705 DEBUG wire:77 - << "634.0972777777778" datetime="2013-05-21T00:43:48.350Z" t"
    17:08:09,705 DEBUG wire:63 - << "[\r][\n]"
    17:08:09,705 DEBUG wire:63 - << "2000[\r][\n]"
    17:08:09,705 DEBUG wire:77 - << "rend-mode="0">
    Exception in thread "main" java.lang.NumberFormatException: t34.0972777777778
      at com.sun.xml.internal.bind.DatatypeConverterImpl._parseDouble(DatatypeConverterImpl.java:213)
      at mypackage.Trend_JaxbXducedAccessor_hours.parse(TransducedAccessor_field_Double.java:48)
      at com.sun.xml.internal.bind.v2.runtime.unmarshaller.StructureLoader.startElement(StructureLoader.java:194)
      at com.sun.xml.internal.bind.v2.runtime.unmarshaller.UnmarshallingContext._startElement(UnmarshallingContext.java:486)
      at com.sun.xml.internal.bind.v2.runtime.unmarshaller.UnmarshallingContext.startElement(UnmarshallingContext.java:465)
      at com.sun.xml.internal.bind.v2.runtime.unmarshaller.InterningXmlVisitor.startElement(InterningXmlVisitor.java:60)
      at com.sun.xml.internal.bind.v2.runtime.unmarshaller.StAXStreamConnector.handleStartElement(StAXStreamConnector.java:231)
      at com.sun.xml.internal.bind.v2.runtime.unmarshaller.StAXStreamConnector.bridge(StAXStreamConnector.java:165)
      at com.sun.xml.internal.bind.v2.runtime.unmarshaller.UnmarshallerImpl.unmarshal0(UnmarshallerImpl.java:355)
      at com.sun.xml.internal.bind.v2.runtime.unmarshaller.UnmarshallerImpl.unmarshal(UnmarshallerImpl.java:334)
    Or:
    17:19:12,563 DEBUG wire:63 - << "2000[\r][\n]"
    17:19:12,563 DEBUG wire:77 - << ...OTHER XML...<trend index="5"
    17:19:12,563 DEBUG wire:77 - << "" label="N"
    17:19:12,563 DEBUG wire:63 - << "[\r][\n]"
    Exception in thread "main" java.lang.NumberFormatException: Not a number: N
      at com.sun.xml.internal.bind.DatatypeConverterImpl._parseInt(DatatypeConverterImpl.java:106)
      at com.sun.xml.internal.bind.DatatypeConverterImpl._parseShort(DatatypeConverterImpl.java:118)

  • Problem with sending large HTML files as attachment using JavaMail 1.2

    Hi dear fellows, I am currently working on posting emails with attachments using JavaMail 1.2. I have succeeded in sending many MIME types of files as attachments, except for .htm and .html files. When large HTML files (say, >100 kB) serve as attachments, the mail is posted to the mail server, but not properly: only the first small part of the file is written to the mail server, and the latter part of the attachment is missing.
    Is that a bug in JavaMail? Have any fellows encountered a similar problem? Any suggestions on how to proceed? Hopefully I made myself clear...
    Many thanks in advance,
    Fatty

    I've sort of found the cause: when the stream is written to the mail server, unfortunately there is a "." alone on a line, so the server refuses to take any more input.
    So do I have to remove all the "." in the file manually to avoid this disaster? Any suggestions?
    Fatty
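    For what it's worth, a compliant SMTP transport should escape leading dots automatically (SMTP transparency), so truncation like this may also point at a server-side issue. Either way, letting JavaMail transfer-encode the attachment keeps raw "." lines out of the DATA stream entirely. A minimal sketch; the addresses, file name, and session setup are made up:
      import javax.activation.DataHandler;
      import javax.activation.FileDataSource;
      import javax.mail.*;
      import javax.mail.internet.*;
      import java.util.Properties;

      public class SendHtmlAttachment {
          public static void main(String[] args) throws Exception {
              Session session = Session.getInstance(new Properties()); // configure mail.smtp.host in real use
              Message msg = new MimeMessage(session);
              msg.setFrom(new InternetAddress("me@example.com"));
              msg.setRecipient(Message.RecipientType.TO, new InternetAddress("you@example.com"));
              msg.setSubject("Report attached");

              MimeBodyPart text = new MimeBodyPart();
              text.setText("See the attached report.");

              MimeBodyPart attachment = new MimeBodyPart();
              attachment.setDataHandler(new DataHandler(new FileDataSource("report.html")));
              attachment.setFileName("report.html");
              // Ask JavaMail to base64-encode the part so no bare "." line
              // can reach the server unescaped.
              attachment.setHeader("Content-Transfer-Encoding", "base64");

              Multipart multipart = new MimeMultipart();
              multipart.addBodyPart(text);
              multipart.addBodyPart(attachment);
              msg.setContent(multipart);

              Transport.send(msg);
          }
      }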

  • Best data structure for dealing with very large CSV files

    Hi, I'm writing an object that stores data from a very large CSV file. The idea is that you initialize the object with the CSV file, and it then has lots of methods to make manipulating and working with the CSV file simpler: operations like copying a column, eliminating rows, performing some equation on all values in a certain column, etc. Also a method for printing back to a file.
    However, the CSV files will probably be in the 10 MB range, maybe larger, so simply loading them into an array isn't possible, as it produces an OutOfMemoryError.
    Does anyone have a data structure they could recommend that can store the large amount of data required and is easily writable? I've currently been using a RandomAccessFile, but it is awkward to write to, as well as needing an external file which would have to be cleaned up after the object is removed (something very hard to guarantee occurs).
    Any suggestions would be greatly appreciated.

    How much internal storage (RAM) is in the computer where your program should run? I think I have 640 MB in mine, and I can't believe loading 10 MB of data would be prohibitive, not even if the size doubles when the data comes into Java variables.
    If the data size turns out to be prohibitive to load into memory, how about a relational database?
    Another thing you may want to consider is more object-oriented (in the sense of domain-oriented) analysis and design. If the data is concerned with real-life things (persons, projects, monsters, whatever), row and column operations may be fine for now, but future requirements could easily make you prefer something else (for example, a requirement to sort projects by budget or monsters by proximity to the hero).
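    If loading everything really does prove prohibitive, a streaming pass over the file is a common middle ground: read one row, transform it, write it out, and never hold the whole file in memory. A minimal sketch; the file names and the sample column operation are made up, and the naive split ignores quoted fields:
      import java.io.BufferedReader;
      import java.io.BufferedWriter;
      import java.io.FileReader;
      import java.io.FileWriter;

      public class CsvStreamEdit {
          public static void main(String[] args) throws Exception {
              try (BufferedReader in = new BufferedReader(new FileReader("data.csv"));
                   BufferedWriter out = new BufferedWriter(new FileWriter("out.csv"))) {
                  String line;
                  while ((line = in.readLine()) != null) {
                      String[] cols = line.split(",", -1); // keep trailing empty fields
                      // Example operation: double every value in column 2.
                      cols[2] = String.valueOf(Double.parseDouble(cols[2]) * 2);
                      out.write(String.join(",", cols));
                      out.newLine();
                  }
              }
          }
      }
    Operations that need global state, such as sorting by a column, don't fit a single pass; that is where the relational-database suggestion above earns its keep.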

  • Problem with parsing large XML files

    Hello All,
    I am parsing a large XML file of 20 MB using DocumentBuilder.parse(File). This method works for XML files smaller than 20 MB, but the application hangs, without throwing any error message, when parsing the 20 MB file. Please let me know what I should do at this point.
    Thanks & Regards,
    Kumar.

    Well... I can't agree.
    If you have a nested structure like this:
    <task>
      <task/>
      <task>
         <task>
            <task/>
         </task>
         <task/>
      </task>
    </task>
    ...you may always keep a stack of tasks (push at startElement, pop at endElement), so at every leaf of the tree you will have all the ancestors of that leaf (see the sketch after this reply).
    For a flat structure like this:
    <task id="1" parent="0"/>
    <task id="2" parent="1"/>
    <task id="3" parent="1"/>
    <task id="4" parent="2"/>
    <task id="5" parent="3"/>
    ...it will be much faster to go through the document with SAX several times to build the tree of tasks than to load the whole document into memory...
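    A minimal sketch of the stack idea for the nested form, using SAX; the element name "task" follows the examples above, and an id attribute is assumed, as in the flat form:
      import org.xml.sax.Attributes;
      import org.xml.sax.helpers.DefaultHandler;
      import java.util.ArrayDeque;
      import java.util.Deque;

      public class TaskHandler extends DefaultHandler {
          private final Deque<String> stack = new ArrayDeque<>();

          @Override
          public void startElement(String uri, String localName, String qName, Attributes attrs) {
              if ("task".equals(qName)) {
                  stack.push(attrs.getValue("id")); // entering a task: push it
                  // The stack now holds this task plus all of its ancestors.
                  System.out.println("task " + attrs.getValue("id") + ", ancestors: " + stack);
              }
          }

          @Override
          public void endElement(String uri, String localName, String qName) {
              if ("task".equals(qName)) {
                  stack.pop(); // leaving the task: pop it
              }
          }
      }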

  • Setting up my Mac from Time Machine left me with a large backup file on my MacBook

    I just bought a new MacBook Air and decided to set things up from my latest Time Machine backup. I'm glad to have my settings back, but it also left me with 30 GB worth of backups. This wouldn't be so bad, except that I only have 128 GB of hard drive space, and I supposedly have only 30 GB left. Is there any way I can safely remove these large files without losing everything again?

    In Lion and Mountain Lion, Time Machine creates local snapshots on the internal hard disk > http://pondini.org/TM/30.html
    You don't have to worry about snapshots, as they are deleted when your Mac's disk is full. Anyway, you can disable or delete them by opening System Preferences > Time Machine and turning off Time Machine, but you don't need to do this.

  • Issue with generating large pdf file using cfdocument tags in CF9

    We are in the process of upgrading our code to use CF9 and the cfdocument tag (from the old cfx_pdf tags). We have successfully gotten one piece of our code to work, but the file size of the PDF that we are generating now is huge in comparison to what it was using the cfx_pdf tags (i.e. with the new code the file is 885 KB, while the old code generated only an 11 KB file). We are not embedding fonts, so fontembed="no" didn't help us. We do have all of our images as JPGs, but unfortunately, due to the volume of images that we have, we cannot switch all these files to another format. Is there a way to shrink or optimize the PDF file size that we are generating?
    Thanks so much for your help.
    Claudia


  • Software available for working with large video files?

    Hello,
    I'm working in PP CS6. I was wondering if there are any workarounds or 3rd party plugins/software that
    make working with really large video files easier and faster?
    Thanks.
    Mark

    Hi Jeff,
    Thanks for helping. This is the first time I've shot video with my Nikon D5200. It was only a 3-minute test clip
    set at the highest resolution, 1920x1080 60i. I saw the red line above the clip in PP CS6 and hit the Enter
    key to render the clip.
    It took almost 18 minutes to render the clip. This is probably normal, but I was wondering if there is
    a way to reduce the file size so it doesn't take quite as long to render. I just remember, a few years back
    when the RED camera was out, guys were working with really huge files, and there was a program
    from Cine something that they used to reduce the file size and make it more manageable when editing.
    I could be mistaken. I've been out of the editing loop for a few years and am just getting back into it.
    Thanks.
    Mark
    Here's my PC's components list you asked for:
    VisionDAW 4U 8-Core Xeon Workstation
      2 Intel QUAD-Core Xeon 5365-3.0GHz, 8MB, 1333MHz Processors
      16GB 667MHz Fully Buffered Server Memory Modules (2x2GB)
      Microsoft® Windows® 7 Ultimate (x64)
      WDC 250GB, Ultra ATA100, 7200 RPM, 8MB Buffer Main OS HD
      2 WDC 750GB, SATA II, 7200 RPM, 16MB Buffer HD (RAID 0)
      2 WDC 750GB, SATA II, 7200 RPM, 16MB Buffer HD (Samples)
      2 WDC 1TB Enterprise Class, SATA II, 7200 RPM, 32MB Buffer Hard Drive
      MOTU 24 I/O (main) / MOTU 2408mk3 (slave)
      Plextor PX-800A 18X Double Layer DVD+/-RW Optical Drive
      Buffalo Blu-ray Drive (External) BR-816SU2
      Front Panel USB Access
      Integrated FireWire (1394a) interface
      Thermaltake Toughpower 850W Power Supply
      3xUAD1 Universal Audio Cards
      NVIDIA QUADRO FX 1800 / Memory 768 MB GDDR3
      CUDA Parallel Processor Cores / 64
      Maximum Display Resolution Digital @60Hz = 2560x1600
      Memory Interface 192Bit
      Memory Bandwidth (GB/sec) / 38.4 GB/sec
      PCI-Express, DUAL-Link DVI 1
      Digital Outputs 3 (2 out of 3 active at a time)
      Dual 25.5" Samsung 2693HM LCD HD Monitors

  • Opening large photo files produces black screen

    When opening a small (less than a megabyte) photo, it works fine, and I'm able to move to the next file. If I open a large file (8-10 MB), it sometimes works fine; however, if I use the arrow to move to the next file in the folder, I get a black screen. Also, if I have a folder of many large photo files, I can see all the thumbnails at a small size, but if I try to increase the view size to large, some will be black. If I click on the file many times, sometimes it opens fine, sometimes not. It was working fine and just recently started to act up. Using Adobe Photoshop I can open each file, so I know they are there and are not corrupt.

    Hello @JohnBruin,
    Welcome to the HP Forums, I hope you enjoy your experience! To help you get the most out of the HP Forums I would like to direct your attention to the HP Forums Guide First Time Here? Learn How to Post and More.
    I have read your post on how opening large photo files produces a black screen on your desktop computer. I would be happy to assist you in this matter!
    If you boot your system into Safe Mode, are you able to open large photographs? Is this a recent or a recurring issue?
    In the meantime, I recommend following the steps in this document on Computer Locks Up or Freezes (Windows 8). This should help prevent your system from defaulting to a black screen.
    I would also encourage you to post the product number for your computer. Below is an HP Support document that will demonstrate how to find your computer's product number.
    How Do I Find My Model Number or Product Number?
    Please re-post with the results of your troubleshooting, as well as the requested information above. I look forward to your reply!
    Regards
    MechPilot
    I work on behalf of HP

  • Query in a large XML file

    Hello,
    I'm trying to work with very large XML files which are created from CSV files. These files may be very large, up to 1 GB! Until now I have managed to do several validations on these big XML files, and the only thing that works for me is the SAX parser; DOM is out of the question because it fills up memory.
    My next task is to run queries on these files, something like:
    select field1, field2 from file.xml
    where field3 = 'A'
    and (field4 > 'B' or field1 = 'C')
    order by field2
    I searched the net to find out how to run queries on XML files (since I have never done queries on XML before), but I couldn't find which query language is best for large files. If I use XPath (XSLT), will that not cause memory problems, since XSLT represents the file as an in-memory object?
    My idea is to parse the file with SAX, check every row against the where condition, and immediately write matching rows to a result XML file (see the sketch after the sample below). But evaluating the where condition can be very complicated without using some tool. The order by clause is another problematic issue.
    Does anyone have more intelligent ideas about how I can do this? Please help! :(
    The xml file looks like this:
    <doc>
      <row id="1">
        <column id="1" name="column1">value</column>
        <column id="N" name="columnN">value</column>
      </row>
      <row id="M">
        <column id="1" name="column1">value</column>
        <column id="N" name="columnN">value</column>
      </row>
    </doc>
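    A sketch of the SAX filtering idea described above, matched to this sample structure; the where condition shown (column3 = 'A') is only an illustration:
      import org.xml.sax.Attributes;
      import org.xml.sax.helpers.DefaultHandler;
      import java.util.HashMap;
      import java.util.Map;

      public class RowFilterHandler extends DefaultHandler {
          private final Map<String, String> row = new HashMap<>(); // column name -> value
          private String currentColumn;
          private final StringBuilder text = new StringBuilder();

          @Override
          public void startElement(String uri, String local, String qName, Attributes attrs) {
              if ("column".equals(qName)) {
                  currentColumn = attrs.getValue("name");
                  text.setLength(0);
              } else if ("row".equals(qName)) {
                  row.clear();
              }
          }

          @Override
          public void characters(char[] ch, int start, int length) {
              text.append(ch, start, length);
          }

          @Override
          public void endElement(String uri, String local, String qName) {
              if ("column".equals(qName)) {
                  row.put(currentColumn, text.toString());
              } else if ("row".equals(qName) && "A".equals(row.get("column3"))) {
                  System.out.println(row); // write the matching row to the result file instead
              }
          }
      }
    Only one row is held in memory at a time. The order by part would still need a second pass or an external sort, since the matching rows may not all fit in memory.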

    Hi all,
    Thank you very much for your replies.
    First, Saxon didn't work, because it uses an in-memory parser, and that is what I was trying to avoid.
    A different database is also out of the question, because the customer insists on XML, and also there are some files that can never be converted to a database table, because eventually, with some transformations, they are changed and no longer completely match the standard CSV format.
    I think that maybe http://exist.sourceforge.net is the right solution for me, but I will probably try it in the next version of my project.
    For now I have managed to build the project with only SAXParser and a lot of back-end programming, and it works OK, although it was very hard to make and will be harder to maintain, so I will look at the eXist project.
    Thanks everyone for the help.

  • How to extract a column out of a large ASCII file?

    Hi all.
    After searching the board and trying several of the suggested approaches, my problem still remains. Maybe you can help me.
    The data source I have to deal with is large ASCII files (~540 MB) with 14 columns (delimiter: TAB). Each column represents one channel. The number of characters in each field is variable. I have to read user-defined columns (= channels) out of each data set. Needless to say, reading the whole file runs into memory problems.
    If anyone has an idea i would be happy
    Thanks in advance.
    Greets
    Kane

    I hate to defocus you, but there is a more efficient way to do this. My apologies that I do not have the time to write code, but here is the pseudocode.
    Create an array for your output greater than or equal to what you think you will need.
    Read a 65,000 character chunk from the file (or the rest of the file, whichever is smaller).
    Use the string search functions to find successive line ends and the appropriate tab delimiters for your column.
    Convert and replace the element in your output array.
    When done, trim your output array to the right size.
    If you drop an LVM read, convert it to a regular VI, and dive in, you will see an example of this type of process.  The idea is to keep disk reads, which are very inefficient, to a minimum.  It also minimizes your memory allocations, because you do not need to resize your input buffer for every line.  Problems you will need to deal with (which are handled by the LVM read) are such things as:
    Your line crosses a chunk boundary.
    The end-of-file creates a smaller chunk than 65,000 characters (the optimum chunk size for Win32 systems).
    The end-of-line character is not well defined (in your case, this is probably not an issue)
    Searching for a character can produce memory allocations
    You may want to try reading the data as a U8 array instead of a string and doing your searches on that instead of the string.
    I have always wanted to write the piece of code, but never had the time or reason to do so.  Good luck.  I will try to help if I can.
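    In text form, the same idea looks roughly like this, sketched in Java for concreteness since LabVIEW code is graphical; the column index, buffer size, and file name are assumptions:
      import java.io.BufferedReader;
      import java.io.FileReader;
      import java.util.ArrayList;
      import java.util.List;

      public class ColumnExtractor {
          public static void main(String[] args) throws Exception {
              int wantedColumn = 3; // zero-based target channel
              List<String> values = new ArrayList<>();
              // BufferedReader's internal buffer plays the role of the 65,000-character
              // chunk, and it already handles lines that cross a chunk boundary.
              try (BufferedReader in = new BufferedReader(new FileReader("data.txt"), 65536)) {
                  String line;
                  while ((line = in.readLine()) != null) {
                      String[] fields = line.split("\t");
                      if (wantedColumn < fields.length) {
                          values.add(fields[wantedColumn]);
                      }
                  }
              }
              System.out.println("read " + values.size() + " values");
          }
      }
    Only one line plus the extracted column live in memory, so a 540 MB file never has to be loaded whole.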
    This account is no longer active. Contact ShadesOfGray for current posts and information.

  • Large iTunes Files

    Greetings, curious how those of you out there deal with really large iTunes libraries. I use a MacBook Pro (MBP) with a 500 GB hard drive and have a lot of music, podcasts, TV shows, and movies. I currently have two iTunes libraries: my primary is on my MBP with all music and podcasts, and my secondary is on an external drive with TV shows and movies. My iPhone, iPad, and iPods are synced to the primary. When I want to put TV shows or movies on the iPad, or any other device, I have to add them to the primary library. It works, but it's kind of a pain. Is there any way to sync my devices across iTunes libraries, regardless of where they are located?
    Is there a better option other than doing what I'm doing?
    Thanks,
    Scott

    Hi--
    So what I've noticed is that every 7 seconds or so, while iTunes is running, a new file is created in the directory, and it's the same size as the E-N-T-I-R-E library. If you try to delete all the files while iTunes is running, it won't let you, since it's using one or more of them.
    This isn't going to work much longer for me. I have a 40 GB HD on my Windows laptop, and in the course of one day my 96 GB iTunes library is creating enough files to completely fill my hard drive, which is only about half full.
    Let's not even talk about how slow this is making the machine.
    Any solutions? I don't have my iTunes directory in the default location... is this contributing somehow to the problem?
    Any help appreciated.
