Getting a large file from 2 small ones

Hi,
I'm playing with Final Cut Pro 5...
I created a very basic project with 2 movies (QuickTime, 320x240, 8.9 MB and 9.9 MB) shot with my Coolpix 3100 camera.
I added a transition at the beginning and one between the 2 movies (5 s each).
I selected Render All, then exported to a QuickTime movie with "Current Settings", "Include Audio and Video" and "Make Movie Self-Contained".
I get a final file which is 1'10" long, 720x576 and 253.5 MB! (Codecs are DV-PAL and Integer.)
Is it normal to get such a big file?

Did you create a sequence with settings that matched your source movies? It looks like you edited your 320x240 QuickTimes into a standard PAL DV sequence, and DV runs at about 5 minutes per gigabyte.
Create a new sequence preset that matches your source movies' pixel dimensions. For the codec, you might want to match your source files or choose an alternative.
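For a rough sanity check (assuming standard DV data rates, which the 720x576 DV-PAL sequence implies): DV works out to roughly 3.6 MB per second including audio, so a 1'10" (70 second) self-contained export is about

    70 s x 3.6 MB/s ≈ 252 MB

which matches the 253.5 MB file. The export size is determined by the sequence codec, not by the small 320x240 source movies.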

Similar Messages

  • Simultaneous hash joins of the same large table with many small ones?

    Hello
    I've got a typical data warehousing scenario where a HUGE_FACT table is to be joined with numerous very small lookup/dimension tables for data enrichment. Joins with these small lookup tables are mutually independent, which means that the result of any of these joins is not needed to perform another join.
    So this is a typical scenario for a hash join: the lookup table is converted into a hash map in RAM, fits there without drama because it's small, and a single pass over the HUGE_FACT suffices to get the results.
    The problem is that, as far as I can see in the query plan, these hash joins are not executed simultaneously but one after another, which forces Oracle to do a full scan of the HUGE_FACT (or some intermediate enriched form of it) as many times as there are joins.
    Questions:
    - is my interpretation correct that the mentioned joins are sequential, not simultaneous?
    - if this is the case, is there any possibility to force Oracle to perform these joins simultaneously (building more than one hashed map in memory and doing the single pass over the HUGE_FACT while looking up in all of these hashed maps for matches)? If so, how to do it?
    Please note that parallel execution of a single join at a time is not what this question is about.
    Database version is 10.2.
    Thank you very much in advance for any response.

    user13176880 wrote:
    Questions:
    - is my interpretation correct that the mentioned joins are sequential, not simultaneous?
    Correct. But why do you think this is an issue? Because of this:
    which renders Oracle to do the full scan of the HUGE_FACT (or any intermediary enriched form of it) as many times as there are joins.
    That is not (or at least should not be) true. Oracle does one pass over the big table and then sequentially probes each of the hash maps (one per smaller table).
    If you show us the execution plan, we can be sure of this.
    - if this is the case, is there any possibility to force Oracle to perform these joins simultaneously (building more than one hashed map in memory and doing the single pass over the HUGE_FACT while looking up in all of these hashed maps for matches)? If so, how to do it?
    Yes, there is. But again, you should not need to resort to such a solution. What you can do is use subquery factoring (the WITH clause) in conjunction with the MATERIALIZE hint to first construct the cartesian join of all of the smaller (dimension) tables, and then join the big table to that.
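    A minimal sketch of that WITH + MATERIALIZE idea, run through plain JDBC. The table names, column names and connection details below are made up for illustration, and whether the cartesian product of the dimension tables stays small enough to materialize depends on your data:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class MaterializedDimsSketch {
            public static void main(String[] args) throws Exception {
                // Placeholder connection details.
                Connection con = DriverManager.getConnection(
                        "jdbc:oracle:thin:@//dbhost:1521/orcl", "scott", "tiger");
                Statement stmt = con.createStatement();

                // Build the cartesian product of the small dimension tables once,
                // materialize it, then let Oracle join the fact table to it in a
                // single pass.
                String sql =
                      "WITH dims AS ( "
                    + "  SELECT /*+ MATERIALIZE */ "
                    + "         d1.id AS d1_id, d1.descr AS d1_descr, "
                    + "         d2.id AS d2_id, d2.descr AS d2_descr "
                    + "  FROM   dim_one d1 CROSS JOIN dim_two d2 "
                    + ") "
                    + "SELECT f.measure, dims.d1_descr, dims.d2_descr "
                    + "FROM   huge_fact f "
                    + "JOIN   dims ON dims.d1_id = f.d1_id AND dims.d2_id = f.d2_id";

                ResultSet rs = stmt.executeQuery(sql);
                while (rs.next()) {
                    System.out.println(rs.getString("d1_descr") + " / "
                            + rs.getString("d2_descr") + " : " + rs.getDouble("measure"));
                }
                rs.close();
                stmt.close();
                con.close();
            }
        }

    The point is that the full scan of HUGE_FACT happens once, with each row probed against the single materialized dimension map.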

  • Publishing large files into several small ones?

    Hello everyone.
    I have a project, almost 16 MB large. It's quite large and the users have to wait a bit to get the tutorial.
    I tried this: I split the project into different projects, like 1. menu, 2. description, 3. practice, and then I link the project menu to description, practice and so on. It's working fine! I think this is a good solution to stop the students' boredom. Now there is a little problem with this... THE TABLE OF CONTENTS. For every project there is a new table of contents.
    So, how can I have the same table of contents in all the projects? Or better yet, how can I publish the whole project so that it streams, i.e. instead of loading the whole 16 MB it loads depending on the slide the student is on. Is that possible?
    Thanks a million!
    Ben

    Hi there
    There are many variables at stake here, such as Captivate version as well as how you are presenting the TOC.
    If you are using Captivate 3, perhaps explore the MenuBuilder application.
    If you are using Captivate 4 or 5, you might investigate using the Aggregator application.
    Cheers... Rick
    Helpful and Handy Links
    Captivate Wish Form/Bug Reporting Form
    Adobe Certified Captivate Training
    SorcererStone Blog
    Captivate eBooks

  • Collapsing a large file into a small one

    I've been looking in the manual and on the boards, but I don't know exactly what to look for, so here I am.
    I want to take a one-hour audio recording and speed it up so that it's only a couple of minutes long.
    I don't mind distortion.
    It was a field recording in MP3 format, from an Edirol recorder.
    Can I do this in STP or should I use another application?
    Thank you,
    Nate

    Try [Time Stretch|http://documentation.apple.com/en/soundtrackpro/usermanual/index.html#chapter=9%26section=10%26tasks=true]
    If the change is more than about 10% of the length of the file, the output will be bad…
    A

  • Hi, I installed a new hard drive in my Mac mini (OS X Lion) and when I turn it on I get a flashing file with a question mark. I tried holding the Command and R keys when turning it on, but the recovery fails to work. Does anyone know how I can get it to recover?

    Hi, I installed a new hard drive in my Mac mini (OS X Lion) and when I turn it on I get a flashing file with a question mark. I tried holding the Command and R keys when turning it on, but the recovery fails to work. I can hold the Option key at startup and choose my network, then Internet Recovery shows up with an arrow pointing up. When I click on the arrow, Internet Recovery fails and all I get is a globe with a triangle and an exclamation mark on it, and under that it says
    apple.com/support
          -6002F
    Does anyone know how I can fix this without a recovery disc? Thanks

    I just want to add to this, in case someone else searches for this error on Apple Support (Google doesn't cover Apple Support... how clever is that?).
    I had the same error, on a computer that had worked, with an SSD drive and a 16 GB upgrade done by the owner himself.
    I tried swapping in a mechanical hard drive: no luck.
    I kept the mechanical drive in and tried some other RAM, and it worked.
    So, for me and after reading the other responses, this error boils down to a hard drive problem or a RAM issue.
    It was RAM for me.

  • How do I divide a large catalogue into two smaller ones on the same computer?

    How can I divide a large catalogue into two smaller ones on the same computer?  Can I just create a new catalogue and move files and folders from the old one to the new one?  I am using PSE 12 in Windows 7.

    A quick update....
    I copied the folder in ~/Library/Mail/V2/Mailboxes that contains all of my local mailboxes over to the same location in the new account. When I go into Mail, the entire file structure is there, however it does not let me view (or search) any of the messages. The messages can still be accessed through the Finder, though.
    I tried to "Rebuild" the local mailboxes, but it didn't seem to do anything. Any advice would be appreciated.
    JEG

  • Hi, I have a G5 Mac, dual core 2.3 GHz. I bought it with no hard drive. I have got a hard drive, formatted for Mac. I am trying to load OS X. I get a grey screen with a small box in the centre with 2 character faces, and then a grey Apple with the loading icon spinning. Help?

    Hi, I have a G5 Mac, dual core 2.3 GHz. I bought it with no hard drive. I have got a hard drive, formatted for Mac. I am trying to load OS X. I get a grey screen with a small box in the centre with 2 character faces, and then a grey Apple with the loading icon spinning. Nothing is loading, though.

    I see 10.6.3 in your profile. Is that what you are trying to load? If so, it won't work. No PowerPC Mac like your G5 can run a Mac OS version higher than 10.5.8.

  • Large file with fstream with WS6U2

    I can't read large files (>2 GB) with the STL fstream. Can anyone do this, or is it a limitation of the WS6U2 fstream classes? I can read large files with low-level C functions.

    I thought that WS6U2 meant Forte 6 Update 2. As for more information: the OS is SunOS 5.8, and the file system is NFS-mounted from an HP-UX 11.00 box and is largefile enabled. My belief is that fstream does not implement access to large files, but I can't be sure.
    Anyway, I'm not sure what you mean by the compiler's access to the OS support for large files, but as I mentioned before, I can read more than 2 GB with open() and read(). My problem is with fstream. My belief is that fstream itself must be largefile enabled. Any ideas?

  • Getting empty log files with log4j and WebLogic 10.0

    Hi!
    I get empty log files with log4j 1.2.13 and WebLogic 10.0. If I don't run the application in the application server, then the logging works fine.
    The properties file is located in a jar in the LIB folder of the deployed project. If I change the log file name in the properties file, it just creates a new empty file.
    What could be wrong?
    Thanks!

    I assume that when you change the name of the expected log file in the properties file, the new empty file gets that name, correct?
    That means you're at least getting that properties file loaded by log4j, which is a good sign.
    As the file ends up empty, it appears that no logging statements are being executed at a level that clears the configured threshold. Can you throw in a logger.error() call at a point you're certain is executed?
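    For example, a minimal smoke test along those lines (the class name is made up; this assumes log4j 1.2 is on the classpath and that your log4j.properties is the one actually loaded inside WebLogic):

        import org.apache.log4j.Logger;

        public class LoggingSmokeTest {
            private static final Logger log = Logger.getLogger(LoggingSmokeTest.class);

            public static void doWork() {
                // ERROR is almost always enabled, so if this line never reaches the
                // file, the problem is the appender/threshold configuration (or which
                // properties file log4j actually picked up), not the calling code.
                log.error("log4j smoke test - if you can read this, the appender works");
                // This one only appears if the logger's level is DEBUG or lower.
                log.debug("debug-level smoke test");
            }
        }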

  • Can't transfer large files with remote connection

    Hi all,
    I'm trying to figure out why we can't transfer large files (> 20 MB) over a remote connection.
    The remote machine is a G5 running 10.4.11, the files reside on a Mac OS X SATA RAID. We are logged into the remote machine via afp with an administrator's account.
    We can transfer files smaller than 20 MB with no problem. When we attempt to transfer files larger than 20 MB, we get the error "The operation can't be completed because you don't have sufficient permissions for some of the items."
    We can transfer large files from the remote machine to this one (a Mac Pro running 10.4.11) with no problem.
    The console log reports the following error:
    NAT Port Mapping (LLQ event port.): timeout
    I'm over my head on this one - does anyone have any ideas?
    Thanks,
    David

    I tried both these things with no luck.
    The mDNSResponder starts up right after the force quit - is this right?
    The following is the console log, which differs from the previous logs where we had the same problem:
    DNSServiceProcessResult() returned an error! -65537 - Killing service ref 0x14cb4f70 and its source is FOUND
    DNSServiceProcessResult() returned an error! -65537 - Killing service ref 0x14cb3990 and its source is FOUND
    DNSServiceProcessResult() returned an error! -65537 - Killing service ref 0x14cb2b80 and its source is FOUND
    DNSServiceProcessResult() returned an error! -65537 - Killing service ref 0x14cb1270 and its source is FOUND
    Feb 4 06:13:57 Russia mDNSResponder-108.6 (Jul 19 2007 11: 41:28)[951]: starting
    Feb 4 06:13:58 Russia mDNSResponder: Adding browse domain local.
    In HrOamApplicationGetCommandBars
    In HrOamApplicationGetCommandBars
    In HrOamApplicationGetCommandBars
    Feb 4 06:23:12 Russia mDNSResponder-108.6 (Jul 19 2007 11: 41:28)[970]: starting
    Feb 4 06:23:13 Russia mDNSResponder: Adding browse domain local.
    Feb 4 06:26:00 Russia configd[35]: rtmsg: error writing to routing socket
    Feb 4 06:26:00 Russia configd[35]: rtmsg: error writing to routing socket
    Feb 4 06:26:00 Russia configd[35]: rtmsg: error writing to routing socket
    Feb 4 06:26:00 Russia configd[35]: rtmsg: error writing to routing socket
    Feb 4 06:26:00 Russia configd[35]: rtmsg: error writing to routing socket
    Feb 4 06:26:00 Russia configd[35]: rtmsg: error writing to routing socket
    Feb 4 06:26:00 Russia configd[35]: rtmsg: error writing to routing socket
    Feb 4 06:26:00 Russia configd[35]: rtmsg: error writing to routing socket
    [442] http://discussions.apple.com/resources/merge line 4046: ReferenceError: Can't find variable: tinyMCE
    Thanks,
    David

  • One large message or multiple small ones?

    Hi:
    I'm working in an application using weblogic 8.1.
    We receive in a JMS queue a large message containing thousands of transactions to be processed.
    Which would be more efficient: processing all the transactions in the message (with one MDB), or splitting the large message into smaller ones, processing them "in parallel" with multiple MDBs, and somehow joining all the results to return as the overall result (hiding this split/join from the client)?
    Keeping all the execution in the same transaction would be desirable. We could instead create multiple independent transactions to process each small message, but the result must be consistent about which transactions were processed successfully and which weren't.
    If splitting is the way to go, are there any tips on doing it?
    thanks in advance.
    Guillermo.

    Hi Guillermo,
    I would recommend reading the book
    Enterprise Integration Patterns by Gregor Hohpe
    http://www.eaipatterns.com/
    This book describes the split and join process very well.
    If the messaging overhead is small compared to the total job time, it is a good idea to split
    (provided the tasks also run well in parallel and get in and out of the DB quickly if you run against one DB).
    Are you doing transactions against a DB?
    Are you using several computers to distribute the work or a multiprocessor computer?
    You can also test performance by trying splits of 1, 5 or 10 transactions per message.
    When you split, keep track of the total number of transactions per job (= N);
    when (number of successful transactions) + (number of failed transactions) = N, you are finished.
    You could send the result per msg to a success or failure channel.
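    As a very rough sketch of that counting idea (class and method names are made up, and in a clustered deployment you would keep these counts in the database rather than in memory):

        import java.util.HashMap;
        import java.util.Map;

        // Tracks completion of one split job: the splitter registers N (the total
        // number of transactions in the original large message) and each worker
        // reports every transaction as success or failure. The job is finished
        // when successes + failures == N.
        public class SplitJobTracker {

            private static final class Counts {
                final int total;
                int success;
                int failure;
                Counts(int total) { this.total = total; }
            }

            private final Map jobs = new HashMap(); // jobId -> Counts

            public synchronized void registerJob(String jobId, int totalTransactions) {
                jobs.put(jobId, new Counts(totalTransactions));
            }

            // Called once per processed transaction; returns true when the whole
            // job has been accounted for.
            public synchronized boolean report(String jobId, boolean succeeded) {
                Counts c = (Counts) jobs.get(jobId);
                if (succeeded) { c.success++; } else { c.failure++; }
                boolean finished = (c.success + c.failure) >= c.total;
                if (finished) {
                    // A real implementation would now publish the aggregated result,
                    // for example to a success or failure reply queue, and clean up.
                    jobs.remove(jobId);
                }
                return finished;
            }
        }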
    Regards,
    Magnus Strand

  • Problem unzipping larger files with Java

    When I extract small zip files with Java it works fine, but when I extract large zip files I get errors. Can anyone help me out please?
    import java.io.*;
    import java.util.*;
    import java.net.*;
    import java.util.zip.*;

    public class updategrabtest {

        public static String filename = "";
        //public static String filesave = "";
        public static boolean DLtest = false, DBtest = false;

        // update
        public static void main(String[] args) {
            System.out.println("Downloading small zip");
            download("small.zip"); // a few k
            System.out.println("Extracting small zip");
            extract("small.zip");
            System.out.println("Downloading large zip");
            download("large.zip"); // 1 meg
            System.out.println("Extracting large zip");
            extract("large.zip");
            System.out.println("Finished.");
            // update database
            boolean maindb = false; //database wasn't updated
        }

        // download
        public static void download(String filesave) {
            try {
                java.io.BufferedInputStream in = new java.io.BufferedInputStream(
                        new java.net.URL("http://saveourmacs.com/update/" + filesave).openStream());
                java.io.FileOutputStream fos = new java.io.FileOutputStream(filesave);
                java.io.BufferedOutputStream bout = new BufferedOutputStream(fos, 1024);
                byte data[] = new byte[1024];
                int count;
                // Write only the bytes actually read. Writing the whole 1024-byte
                // buffer on every pass corrupts the file on any short read, which is
                // the likely reason only the larger zip fails to extract.
                while ((count = in.read(data, 0, 1024)) != -1) {
                    bout.write(data, 0, count);
                }
                bout.close();
                in.close();
            } catch (Exception e) {
                System.out.println("Error writing to file");
                //System.exit(-1);
            }
        }

        // extract
        public static void extract(String filez) {
            filename = filez;
            try {
                getZipFiles();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }

        // extract (part 2)
        public static void getZipFiles() {
            try {
                //String destinationname = ".\\temp\\";
                String destinationname = ".\\";
                byte[] buf = new byte[1024]; //1k
                ZipInputStream zipinputstream = new ZipInputStream(new FileInputStream(filename));
                ZipEntry zipentry = zipinputstream.getNextEntry();
                while (zipentry != null) {
                    //for each entry to be extracted
                    String entryName = zipentry.getName();
                    System.out.println("entryname " + entryName);
                    if (zipentry.isDirectory()) {
                        // directory entry: nothing to write, just make sure it exists
                        new File(destinationname + entryName).mkdirs();
                        zipentry = zipinputstream.getNextEntry();
                        continue;
                    }
                    File newFile = new File(destinationname + entryName);
                    String directory = newFile.getParent();
                    if (directory != null) {
                        // create any parent directories the entry needs
                        new File(directory).mkdirs();
                    }
                    int n;
                    FileOutputStream fileoutputstream = new FileOutputStream(newFile);
                    while ((n = zipinputstream.read(buf, 0, 1024)) > -1) {
                        fileoutputstream.write(buf, 0, n);
                    }
                    fileoutputstream.close();
                    zipinputstream.closeEntry();
                    zipentry = zipinputstream.getNextEntry();
                } //while
                zipinputstream.close();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }

    In addition to the other advice, also change every instance of..
    kingryanj wrote:
              catch (Exception e)
                   System.out.println ("Error writing to file");
                   //System.exit(-1);
    ..to..
    catch (Exception e) {
        e.printStackTrace();
    }
    I am a big fan of the stacktrace.

  • Can't print large files with 10.5.7

    Hi,
    We have finally updated our G5s (2x2 GHz, 4 GB RAM, 500 GB system disk). Since then we are unable to print documents from CS3 and CS4 that result in CUPS print files > 1 GB. It doesn't work with GMG RIPs, HP LaserWriters or the Adobe PDF printer. The jobs always show up as "stopped" in the queue. The CUPS error log says:
    I [11/Jun/2009:16:42:09 +0200] [Job 55] Adding start banner page "none".
    I [11/Jun/2009:16:42:09 +0200] [Job 55] Adding end banner page "none".
    I [11/Jun/2009:16:42:09 +0200] [Job 55] File of type application/pictwps queued by "dididi".
    I [11/Jun/2009:16:42:09 +0200] [Job 55] Queued on "EpsonISOcoated_39L02" by "dididi".
    I [11/Jun/2009:16:42:09 +0200] [Job 55] Started filter /usr/libexec/cups/filter/pictwpstops (PID 1363)
    I [11/Jun/2009:16:42:09 +0200] [Job 55] Started backend /usr/libexec/cups/backend/pap (PID 1364)
    E [11/Jun/2009:16:42:10 +0200] [Job 55] can't open PSStream for stdout
    E [11/Jun/2009:16:42:10 +0200] [Job 55] pictwpstops - got an error closing the PSStream = -50
    E [11/Jun/2009:16:42:10 +0200] PID 1363 (/usr/libexec/cups/filter/pictwpstops) stopped with status 1!
    I [11/Jun/2009:16:42:10 +0200] Hint: Try setting the LogLevel to "debug" to find out more.
    E [11/Jun/2009:16:42:12 +0200] [Job 55] Job stopped due to filter errors.
    We have tested a catalog file with 150 pages which didn't print. If we broke it up into 6 pieces (print pages 1-25, 26-50, ...) it did work. Unfortunately we have jobs with single pages exceeding this limit. Altogether we have tested around 10 different files.
    We have tried resetting the printer queue, reinstalling the drivers, and deleting and recreating all printers, but nothing helps. It looks like CUPS or one of its helpers can't handle files larger than 1 GB?
    Any help is welcome.

    Hi there,
    Does your printer/RIP support protocols other than AppleTalk? If it does, I would change it to something like HP Jetdirect-Socket.
    PaHu

  • Compressing a large file into a small file.

    So I have a pretty large file that I am trying to make very small with good quality. Before exporting, the file is about 1 GB, and I need to get it down to 100 MB. Right now I've tried compressing it with the H.264 codec, and I'm having to go as low as 300 kbits. I use AAC 48 kHz for the audio. It is just way too pixelated to submit something like this. I guess I could make the actual video a smaller size, something like 720x480, and just letterbox it to keep it widescreen? Any hints on a good way to get this 21-minute video to around 100 MB?

    There are three ways to decrease the file size of a video.
    1. Reduce the image size. For example, changing a 720x480 DV image to 320x240 will decrease the size by a factor of 4.
    2. Reduce the frame rate. For example, changing from 30 fps to 15 fps will decrease the size by a factor of 2.
    3. Increase the compression / change the codec. This is the black magic part of online material. Only you can decide what's good enough.
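    As a rough budget check for the original question (assuming roughly 128 kbit/s AAC audio; the numbers are only a sketch): 100 MB is about 800,000 kbits, and 21 minutes is 1260 seconds, so the total data rate can be at most about 800,000 / 1260 ≈ 635 kbit/s. After audio, that leaves roughly 500 kbit/s for the H.264 video, which is workable at a reduced frame size such as 320x240 but will look soft or blocky at 720x480.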
    x

  • Compressing a large file into several small files

    What can I use to compress a 5 GB file into several smaller files that can easily be rejoined at a later date?
    thanks

    Hi, Simon.
    Actually, what it sounds like you want to do is take a large file and break it up into several compressed files that can later be rejoined.
    Two ideas for you:
    1. Put a copy of the file in a folder of its own, then create a disk image of that folder. You can then create a segmented disk image using the segment verb of the hdiutil command in Terminal. Disk Utility provides a graphical user interface (GUI) to some of the functions in hdiutil, but unfortunately not the segment verb, so you have to use hdiutil in Terminal to segment a disk image.
    2. If you have StuffIt Deluxe, you can create a segmented archive. This takes one large StuffIt archive and breaks it into smaller segments of a size you define.
    2.1. You first make a StuffIt archive of the large file, then use StuffIt's Segment function to break this into segments.
    2.2. Copying all the segments back to your hard drive and unstuffing the first segment (which is readily identifiable) will unpack all the segments and recreate the original, large file.
    I'm not sure if StuffIt Standard Edition supports creating segmented archives, but I know StuffIt Deluxe does, as I have that product.
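    If you would rather script it yourself, a plain byte-level split also works and the pieces can simply be concatenated back together later. This is not what either idea above does; it is just a small hypothetical Java sketch of that approach (the file name and the 650 MB segment size are arbitrary, and you would typically compress the file first):

        import java.io.*;

        // Splits a big file into numbered pieces (name.part0, name.part1, ...).
        // Rejoining is just concatenating the pieces back together in order.
        public class FileSplitter {

            public static void split(File source, long segmentBytes) throws IOException {
                byte[] buf = new byte[64 * 1024];
                InputStream in = new BufferedInputStream(new FileInputStream(source));
                try {
                    int part = 0;
                    int n = in.read(buf);
                    while (n != -1) {
                        OutputStream out = new BufferedOutputStream(
                                new FileOutputStream(source.getPath() + ".part" + part++));
                        long written = 0;
                        // Each piece may overshoot segmentBytes by up to one buffer.
                        while (n != -1 && written < segmentBytes) {
                            out.write(buf, 0, n);
                            written += n;
                            n = in.read(buf);
                        }
                        out.close();
                    }
                } finally {
                    in.close();
                }
            }

            public static void main(String[] args) throws IOException {
                // Example: cut a 5 GB archive into roughly CD-sized pieces.
                split(new File("backup.zip"), 650L * 1024 * 1024);
            }
        }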
    Good luck!
    Dr. Smoke
    Author: Troubleshooting Mac® OS X
