Render times differing for same composition

I'm trying to render a composition, and the first time I tried it said 7 hours. So I shut the Mac down, went home and set it going overnight. But it only took 40 mins to create a Lossless render. Great, I thought.
So today I'm trying to export the same comp and it's saying 4-5 hours! I've cleaned the cache, enabled multiprocessing, etc., and it's still saying 4-5 hours.
Any idea why that would be?
All the settings are the same. Output codec, output location etc.
Stumped.

Welcome to the wonderful weird wacky world of After Effects renders. Do you have multiprocessing turned on? How much RAM do you have? How many cores? What's your source footage? What resolution & codec are you rendering to?
I've run into the same situation. Even though I have gobs of RAM (32GB), sometimes rendering with fewer cores and less assigned memory results in faster rendering. I'm sure Todd will be along to provide the link for tuning your RAM/Core settings (I don't have it offhand). I find usually my Mac is fastest after a fresh reboot and not running anything else.
So you are saying you got 40 minutes with multiprocessing turned *off*?
40 minutes vs 7 hours seems a little extreme. You sure it completed rendering? You did it at full rez at full frame rate?

Similar Messages

  • Render times different for FLV?

    I've set up a virtual cluster for Compressor and witnessed the astounding speed increase using Activity Monitor. I'm wondering, though - the render jobs were using an .mov being rendered to .mp4 (and other formats) and I was able to see all 8 cores being utilized. When I try to render to FLV (using QuickTime Export Components), performance drops to ONE core and takes much longer. Is FLV just processed differently in Compressor, or does Qmaster not process all jobs the same?

    Only certain codecs can be segmented; H.264 can, FLV cannot.

  • Post-Render time waste for H264 & ProRes Export on Windows?

    There is one thing that bothers me all the time.
    If I render to a 1 Gbit/s Windows network location, everything is fine until multiplexing starts, and it goes like this:
    1. Premiere CC creates an .m4v file for video and an .aac file for audio - that's OK.
    2. Then multiplexing starts and some temp file like "s3r4." is created - no problem.
    3. After that Premiere creates the desired ".mp4" file with the suffix ".mp4._00_" and copies "s3r4." into that file.
    4. Then it renames ".mp4._00_" to ".mp4" and the render is finished.
    The thing I don't get is why Premiere has to copy the already multiplexed "s3r4." file into the "_00_" file
    and then rename it to .mp4. This procedure takes a lot of extra time if you have a huge file (4 GB? 8 GB?).
    The other thing: I have also installed Drastic MediaReactor for direct ProRes export from Premiere CC on Windows.
    Files are huge and there is no multiplexing in this case, but after the render Premiere still starts
    its unneeded copy of the ".mov" file to ".mov._00_" and then back to ".mov".
    I've tried to "end task" and just use the uncopied ".mov", which is available immediately
    after the render, and it works fine with no problems at all. I have also contacted
    Drastic tech support about this and they said they have nothing to do with this copying.
    That's why I don't understand why Premiere has to copy post-render files to the same location after the render.
    In the case of ProRes they can reach ~20 GB, and you can imagine how long it takes to copy
    a file like that, especially to a network location.
    Why can't Premiere just use a rename instead of a copy? That would be blazing fast!
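    For what it's worth, here is a minimal Python sketch of the difference being described - the file names are invented for illustration and this is not Premiere's actual code path:

        # Illustrative sketch only - invented file names, not Premiere's real internals.
        import os
        import shutil

        tmp_name = "render_output.mp4._00_"   # hypothetical temporary render name
        final_name = "render_output.mp4"

        # Stand-in for the finished, multiplexed render.
        with open(tmp_name, "wb") as f:
            f.write(b"\0" * 1024 * 1024)

        # Copy-then-delete: every byte is read and written again, so a multi-GB
        # file on a 1 Gbit/s share crosses the network twice and takes minutes.
        shutil.copyfile(tmp_name, final_name + ".copy")
        os.remove(final_name + ".copy")

        # Rename on the same volume: only directory metadata changes, so it
        # completes almost instantly regardless of file size.
        os.replace(tmp_name, final_name)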

    Try exporting to a local hard drive instead of one on a network.

  • Oracle SQL Developer O/P and Toad O/P is Different for Same QUERY.

    Hi,
    Can anyone clarify why I'm getting different results when I run the same query in Oracle SQL Developer and TOAD?
    When I ran a query in TOAD I could see both nulls and data in a column called Customer_Category, but when I ran the same query in Oracle SQL Developer I got all nulls and couldn't see any data in that column.
    I have not performed any DDL or DML statements; I just took the query and ran it in TOAD and Oracle SQL Developer.
    I found that the output is different for a particular column.
    Thanks in advance.

    > I found that the output is different for a particular column.
    The RENDERING of data from Oracle (or any other server such as a POP3 server, web server, etc) IS DONE BY THE CLIENT.
    So to repeat - RENDERING IS DONE BY THE CLIENT.
    If one client selects to display the output received from the server differently than another client, it is a CLIENT ISSUE.
    It is not a server issue. It is not a SQL issue. It is not a PL/SQL issue.
    In other words, wrong forum for this question. You have a pure client side rendering problem which has absolutely nothing to do with SQL and/or PL/SQL.

  • Render time.   Is 11 hours render time reasonable for a 90 minute file?

    I started with a 120 minute AVI file, trimmed the last 30 minutes off and am rendering it to an identical format AVI file. Is 11 hours render time reasonable?  Have I set something improperly?

    More information needed for someone to help... please click below and provide the requested information
    -Premiere Pro Video Editing Information FAQ http://forums.adobe.com/message/4200840
    AVI is a wrapper, what is inside YOUR wrapper?
    Exactly what is INSIDE the video you are editing?
    Codec & Format information, with 2 links inside for you to read http://forums.adobe.com/thread/1270588
    Report back with the codec details of your file, use the programs below... A screen shot works well to SHOW people what you are doing
    http://forums.adobe.com/thread/592070?tstart=30 for screen shot instructions
    Free programs to get file information for PC/Mac http://mediaarea.net/en/MediaInfo/Download
    What effects have you applied to the video?

  • Explain plan is different for same query

    Hi all,
    I have a query, which basically selects some columns from a remote database view. The query is as follows:
    select * from tab1@remotedb, tab2@remotedb
    where tab1.cash_id = tab2.id
    and tab1.date = '01-JAN-2003'
    and tab2.country_code = 'GB';
    Now, I am working in two environments: one is production and the other is development. The production environment has the following specification:
    1. Remotedb = Oracle9i, Linux OS
    2. Database on which query is running = Oracle10g, Linux OS
    The development environment has the following specification:
    1. Remotedb = Oracle10g, Windows OS
    2. Database on which query is running = Oracle10g, Linux OS
    Both databases in development and production environments are on different machines.
    When I execute the above query on production, I see full table scans on both tables in the execution plan (TOAD), but when I execute the query in development, I see that both remote database tables use indexes.
    Why am I getting different execution plans on the two databases? Is there a difference in user rights/privileges, or a difference in statistics between the databases? I have checked the statistics for both tables on the production and development databases, and they are up to date.
    This issue is creating a performance disaster in our production system. Any kind of help or knowledge sharing is appreciated.
    Thank you and Best Regards.

    select * from tab1@remotedb, tab2@remotedb
    where tab1.cash_id = tab2.id
    and tab1.date = '01-JAN-2003'
    and tab2.country_code = 'GB';
    I assume that tab1.date is a DATE column. You are doing an implicit type conversion here. I think the way those conversions are done changed from 9i to 10g, so that in 10g index usage is possible while in 9i it is not (not very sure about this).
    Change your query to this:
    select * from tab1@remotedb, tab2@remotedb
    where tab1.cash_id = tab2.id
    and tab1.date = to_date('01-JAN-2003','DD-MON-YYYY')
    and tab2.country_code = 'GB';
    But compare and consider the results, especially if the column tab1.date holds time values too.
    and tab1.date = to_date('01-JAN-2003','DD-MON-YYYY')
    is not the same as
    and to_char(tab1.date) = '01-JAN-2003'
    Maybe you must change it to
    and tab1.date >= to_date('01-JAN-2003','DD-MON-YYYY')
    and tab1.date < to_date('01-JAN-2003','DD-MON-YYYY') + 1
    Depends on your data.

  • Explain plan different for same query

    Hi all,
    I have a query, which basically selects some columns from a remote database view. The query is as follows:
    select * from tab1@remotedb, tab2@remotedb
    where tab1.cash_id = tab2.id
    and tab1.date = '01-JAN-2003'
    and tab2.country_code = 'GB';
    Now, I am working in two environments: one is production and the other is development. The production environment has the following specification:
    1. Remotedb = Oracle9i, Linux OS
    2. Database on which query is running = Oracle10g, Linux OS
    The development environment has the following specification:
    1. Remotedb = Oracle10g, Windows OS
    2. Database on which query is running = Oracle10g, Linux OS
    Both databases in development and production environments are on different machines.
    When I execute the above query on production, I see full table scans on both tables in the execution plan (TOAD), but when I execute the query in development, I see that both remote database tables use indexes.
    Why am I getting different execution plans on the two databases? Is there a difference in user rights/privileges, or a difference in statistics between the databases? I have checked the statistics for both tables on the production and development databases, and they are up to date.
    This issue is creating a performance disaster in our production system. Any kind of help or knowledge sharing is appreciated.
    Thank you and Best Regards.

    We ran into a similar situation yesterday morning, though our implementation was simpler than yours: different plans in development and production even though both systems were 10gR2 at the time. Production was doing a Merge Join Cartesian (!) instead of nested loop joins. Our DBA figured out that the production stats had been locked for some tables, preventing stat refresh; she unlocked them and re-analyzed, which fixed our problem.
    Of some interest was discovering that I got different execution plans from the same UPDATE via EXPLAIN PLAN and SQL*PLUS AUTOTRACE in development. Issue appears to have been bind peeking. Converting bind variables to constants yielded the AUTOTRACE plan, as did turning bind peeking off while using the bind variables. CURSOR_SHARING was set to EXACT too.
    Message was edited by:
    riedelme

  • Value different for same quantity/material

    I am using MB5L report.  There I see two entries for a particular material.
    ValA  Material    Total Stock  BUn   Total Value  Crcy  Material Description            S  Document  Item
    2900  14007969    3.750        M       3,075.90   SAR   Material 18928L for flushing
    2900  14007969    3.750        M      14,620.28   SAR   Material 18928L for flushing       1277809   101
    Why does the same material with the same quantity show different values in the MB5L report?

    Dear:
                  This happens because of a different moving average price (MAP). Every time there is a goods receipt (MIGO) for a new PO, the system adjusts the previous price according to the price entered at PO creation, so the moving average price is updated in SAP. Hence it is normal to see the same material showing different values in MB5L. I hope this is clear.
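    As a rough illustration (the figures below are invented, not taken from the MB5L rows above), a moving average price update at goods receipt works along these lines:

        # Minimal sketch of a moving-average-price (MAP) update; invented figures.
        def updated_map(stock_qty, stock_value, receipt_qty, receipt_value):
            # New MAP = (old stock value + receipt value) / (old qty + receipt qty)
            return (stock_value + receipt_value) / (stock_qty + receipt_qty)

        # 10 units on stock valued at 1000, then a receipt of 10 units valued at
        # 1500, lifts the MAP from 100 to 125 per unit.
        print(updated_map(10, 1000, 10, 1500))   # 125.0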
    Regards

  • How to find the moving average for a particular time step in the same time series

    Hello everyone,
    I am new to LabVIEW and I have one issue. I have a huge text file with pressure values for different ports.
    I made a VI in which I first read the text file and identify the particular port column. In that column there are 32768 pressure values.
    Time    Num      Port 101  Port 102  Port 103 ... Port 532
    0       1
    0.001   2
    50      32768
    All of these readings are taken over 50 seconds, and I have to split them into 4.5-second blocks; in every 4.5 seconds there are 2969 values.
    Now please go through the attached VI... I am able to find the mean value with a time step of 12 over the first 2969 values.
    Now what I want to do is find the mean values of the next 2969 values (2970 to 5938) with a time step of 12, and so on up to 32768, so this happens 11 times in series.
    So can anyone modify my VI?
    Thank you,
    Attachments:
    Moving average.vi (18 KB)

    Please go through the attached VI.
    I made some corrections in it. Now I am able to find the mean value for the first 2969 rows with an overlapping time step of 12.
    That means I am able to find the mean values for samples 1-12, 2-13, 3-14 ... 2958-2969.
    But now the problem starts: I want to calculate the mean values of the next 2969 rows (2970 to 5938) with the same time step of 12.
    So please modify my VI; I'm stuck on a serious problem.
    I have 32768 rows partitioned into blocks of 2969 rows, so the loop has to repeat 11 times.
    I hope you understand the problem.
    Thank you
    Attachments:
    Moving average.vi (19 KB)
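    In case it helps to see the logic outside LabVIEW, here is a small Python/NumPy sketch of the same windowed-mean idea (this is not the attached VI; it assumes the pressure values for one port are already in a 1-D array of 32768 samples):

        # Sketch only: overlapping means with a window of 12, computed separately
        # for each block of 2969 rows.
        import numpy as np

        BLOCK = 2969    # rows per 4.5 s partition (from the post)
        WINDOW = 12     # overlapping time step

        def blockwise_running_means(pressure):
            kernel = np.ones(WINDOW) / WINDOW
            results = []
            n_blocks = len(pressure) // BLOCK      # 11 full blocks for 32768 rows
            for b in range(n_blocks):
                chunk = pressure[b * BLOCK:(b + 1) * BLOCK]
                # means of samples 1-12, 2-13, ..., 2958-2969 within this block
                results.append(np.convolve(chunk, kernel, mode="valid"))
            return results

        # Example with random data standing in for the real pressure column:
        means_per_block = blockwise_running_means(np.random.rand(32768))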

  • Disk space being reported differently for same size drives?

    I have a 1TB drive inside my computer. I have a 1TB external drive for Time Machine backups. I have a 2TB external drive for videos. I know Apple changed the way Snow Leopard calculates disk/file size to base 10, from base 2.
    My internal 1TB drive is reported as 999.86GB. My external 1TB drive is reported as 999.87GB. My external 2TB drive is reported as 2TB. Why are the two 1TB drives being reported as less than 1TB, and at different sizes, while my external 2TB drive is being reported as 2TB in base 10 [properly, according to Snow Leopard]? If Snow Leopard is calculating sizes using base 10, shouldn't both 1TB drives show as 1TB each?

    "what drive manufacturer advertises a 1TB drive not as 1TB?"
    I don't know, that's not what I said anyway. I said drives may vary slightly in size due to different
    cylinder/head layouts which may affect overall gross size, which would also affect formatted size.
    Bear in mind that Apple's GUID partition scheme eats up more space than APM or MSDos partition
    schemes. In fact given the size you quoted, (999.86), if you had used an Apple Partition Map (APM)
    partition, the Volume size would have been more than 1000GB, but drive manufacturers don't
    guarantee formatted drive size in any case. They only go by raw unformatted drive size, which in
    your case would be about 1,000,204,886,016 Bytes before formatting using the GUID partition
    table (GPT) scheme. That being the case (and it is), this case is indeed closed is it not?
    Kj ♘
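    To make the arithmetic concrete, here is a quick Python sketch using the raw byte count quoted above (the exact overhead depends on your partition scheme and filesystem):

        # The same raw byte count in base-10 GB (how Snow Leopard and drive makers
        # count) and in base-2 GiB (the old-style figure).
        raw_bytes = 1_000_204_886_016            # unformatted size quoted above

        gb_base10 = raw_bytes / 1_000_000_000    # ~1000.20 GB
        gib_base2 = raw_bytes / 2**30            # ~931.51 GiB

        print(f"{gb_base10:.2f} GB (base 10) vs {gib_base2:.2f} GiB (base 2)")
        # The gap between ~1000.2 GB raw and the 999.86 GB the Finder shows is
        # partition-table and formatting overhead, not a change of units.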

  • PR Item Text Different for same Material in the same Plant.

    I have a Query for SAP MM.
    Below is the Scenario:
    User raises a PR through SAP for a particular Material (Spare) which belongs to an Equipment.
    Let's say in Plant A, we have an Equipment 10001 & it has a sub-Equipment as 1000112 & further it has a spare which belongs to the sub-equipment as 1000001.
    Now when the user raises a PR for 1000001, they want the following information to be populated automatically in the Item Detail Text tab:
    Equipment No.: 10001
    Sub-Equipment No.: 1000112
    The above text enables the vendor to track & source the correct material.
    The problem here is that the same material is also used in the same plant for another equipment, hence we can't just update the PO text in the material master record.
    How can this be mapped in the standard SAP system?
    Require your help / expert comments.
    Regards,
    Yogesh.

    Hello friend,
    I feel that you want to send the material to the supplier of that equipment for some work.
    In that case it would be better to maintain the supplier model number or manufacturer part number, which you can give in the detail text.
    The manufacturer can track the part more easily if you give the manufacturer part number rather than your own SAP number.
    Regarding the sub-equipment and its part number, there is no standard functionality.
    Regards,
    yk
    Edited by: 1234_abcd on Apr 1, 2010 5:27 PM

  • Result List for Premise different for same BP

    Hi SAP experts,
    We are facing the following issue in our system,
    When we confirm a BP in our system, we get two connection objects against the confirmed BP.
    When we search for the premise from the search option for the same BP:
    We get 3 connection objects in the result:
    Any sort of help will be highly appreciated.
    Thanks in advance.

    Hi Anirban,
    Ideally, for one BP there is always one connection object, and that only if the BP has a Sold-to-party role.
    Connection objects can only exist for Sold-to parties.
    But if this really is the issue, check the business partner fields to see whether any enhancement is affecting them, and also whether the search criteria have been modified.
    Regards,
    Dinesh
    (Rate the Answer if helpful.)

  • Why two different explain plan for same objects?

    Believe it or not, there are two different databases, one for processing and one for reporting, and the plan comes out differently for the same query. Table structures and indexes are the same. It's 11g.
    Thanks
    The good explain plan... works fine:
    Plan
    SELECT STATEMENT  ALL_ROWSCost: 12,775  Bytes: 184  Cardinality: 1                                                                        
         27 SORT UNIQUE  Cost: 12,775  Bytes: 184  Cardinality: 1                                                                   
              26 NESTED LOOPS                                                              
                   24 NESTED LOOPS  Cost: 12,774  Bytes: 184  Cardinality: 1                                                         
                        22 HASH JOIN  Cost: 12,772  Bytes: 178  Cardinality: 1                                                    
                             20 NESTED LOOPS SEMI  Cost: 30  Bytes: 166  Cardinality: 1                                               
                                  17 NESTED LOOPS  Cost: 19  Bytes: 140  Cardinality: 1                                          
                                       14 NESTED LOOPS OUTER  Cost: 16  Bytes: 84  Cardinality: 1                                     
                                            11 VIEW DSSADM. Cost: 14  Bytes: 37  Cardinality: 1                                
                                                 10 NESTED LOOPS                           
                                                      8 NESTED LOOPS  Cost: 14  Bytes: 103  Cardinality: 1                      
                                                           6 NESTED LOOPS  Cost: 13  Bytes: 87  Cardinality: 1                 
                                                                3 INLIST ITERATOR            
                                                                     2 TABLE ACCESS BY INDEX ROWID TABLE DSSODS.DRV_PS_JOB_FAMILY_TBL Cost: 10  Bytes: 51  Cardinality: 1       
                                                                          1 INDEX RANGE SCAN INDEX DSSODS.DRV_PS_JOB_FAMILY_TBL_CL_SETID Cost: 9  Cardinality: 1 
                                                                5 TABLE ACCESS BY INDEX ROWID TABLE DSSADM.DIM_JOBCODE Cost: 3  Bytes: 36  Cardinality: 1            
                                                                     4 INDEX RANGE SCAN INDEX DSSADM.STAN_JB_FN_IDX Cost: 2  Cardinality: 1       
                                                           7 INDEX UNIQUE SCAN INDEX (UNIQUE) DSSODS.DRV_PS_JOBCODE_TBL_SEQ_KEY_RPT Cost: 0  Cardinality: 1                 
                                                      9 TABLE ACCESS BY INDEX ROWID TABLE DSSODS.DRV_PS_JOBCODE_TBL_RPT Cost: 1  Bytes: 16  Cardinality: 1                      
                                            13 TABLE ACCESS BY INDEX ROWID TABLE DSSODS.DRV_PSXLATITEM_RPT Cost: 2  Bytes: 47  Cardinality: 1                                
                                                 12 INDEX RANGE SCAN INDEX DSSODS.PK_DRV_RIXLATITEM_RPT Cost: 1  Cardinality: 1                           
                                       16 TABLE ACCESS BY INDEX ROWID TABLE DSSADM.DIM_JOBCODE Cost: 3  Bytes: 56  Cardinality: 1                                     
                                            15 INDEX RANGE SCAN INDEX DSSADM.DIM_JOBCODE_EXPDT1 Cost: 2  Cardinality: 1                                
                                  19 TABLE ACCESS BY INDEX ROWID TABLE DSSODS.DRV_PS_JOB_RPT Cost: 11  Bytes: 438,906  Cardinality: 16,881                                          
                                       18 INDEX RANGE SCAN INDEX DSSODS.DRV_PS_JOB_JOBCODE_RPT Cost: 2  Cardinality: 8                                     
                             21 INDEX FAST FULL SCAN INDEX (UNIQUE) DSSADM.Z_PK_JOBCODE_PROMPT_TBL Cost: 12,699  Bytes: 66,790,236  Cardinality: 5,565,853                                               
                        23 INDEX RANGE SCAN INDEX DSSADM.DIM_PERSON_EMPL_RCD_SEQ_KEY Cost: 1  Cardinality: 1                                                    
                    25 TABLE ACCESS BY INDEX ROWID TABLE DSSADM.DIM_PERSON_EMPL_RCD Cost: 2  Bytes: 6  Cardinality: 1
    This bad plan shows a merge join cartesian and a full table scan:
    Plan
    SELECT STATEMENT  ALL_ROWSCost: 3,585  Bytes: 237  Cardinality: 1                                                              
         26 SORT UNIQUE  Cost: 3,585  Bytes: 237  Cardinality: 1                                                         
              25 NESTED LOOPS SEMI  Cost: 3,584  Bytes: 237  Cardinality: 1                                                    
                   22 NESTED LOOPS  Cost: 3,573  Bytes: 211  Cardinality: 1                                               
                        20 MERGE JOIN CARTESIAN  Cost: 2,864  Bytes: 70,446  Cardinality: 354                                          
                             17 NESTED LOOPS                                     
                                  15 NESTED LOOPS  Cost: 51  Bytes: 191  Cardinality: 1                                
                                       13 NESTED LOOPS OUTER  Cost: 50  Bytes: 180  Cardinality: 1                           
                                            10 HASH JOIN  Cost: 48  Bytes: 133  Cardinality: 1                      
                                                 6 NESTED LOOPS                 
                                                      4 NESTED LOOPS  Cost: 38  Bytes: 656  Cardinality: 8            
                                                           2 TABLE ACCESS BY INDEX ROWID TABLE REPORT2.DIM_JOBCODE Cost: 14  Bytes: 448  Cardinality: 8       
                                                                1 INDEX RANGE SCAN INDEX REPORT2.STAN_PROM_JB_IDX Cost: 6  Cardinality: 95 
                                                           3 INDEX RANGE SCAN INDEX REPORT2.SETID_JC_IDX Cost: 2  Cardinality: 1       
                                                      5 TABLE ACCESS BY INDEX ROWID TABLE REPORT2.DIM_JOBCODE Cost: 3  Bytes: 26  Cardinality: 1            
                                                 9 INLIST ITERATOR                 
                                                      8 TABLE ACCESS BY INDEX ROWID TABLE REPORT2.DRV_PS_JOB_FAMILY_TBL Cost: 10  Bytes: 51  Cardinality: 1            
                                                           7 INDEX RANGE SCAN INDEX REPORT2.DRV_PS_JOB_FAMILY_TBL_CL_SETID Cost: 9  Cardinality: 1       
                                            12 TABLE ACCESS BY INDEX ROWID TABLE REPORT2.DRV_PSXLATITEM_RPT Cost: 2  Bytes: 47  Cardinality: 1                      
                                                 11 INDEX RANGE SCAN INDEX REPORT2.PK_DRV_RIXLATITEM_RPT Cost: 1  Cardinality: 1                 
                                       14 INDEX UNIQUE SCAN INDEX (UNIQUE) REPORT2.DRV_PS_JOBCODE_TBL_SEQ_KEY_RPT Cost: 0  Cardinality: 1                           
                                  16 TABLE ACCESS BY INDEX ROWID TABLE REPORT2.DRV_PS_JOBCODE_TBL_RPT Cost: 1  Bytes: 11  Cardinality: 1                                
                             19 BUFFER SORT  Cost: 2,863  Bytes: 4,295,552  Cardinality: 536,944                                     
                                  18 TABLE ACCESS FULL TABLE REPORT2.DIM_PERSON_EMPL_RCD Cost: 2,813  Bytes: 4,295,552  Cardinality: 536,944                                
                        21 INDEX RANGE SCAN INDEX (UNIQUE) REPORT2.Z_PK_JOBCODE_PROMPT_TBL Cost: 2  Bytes: 12  Cardinality: 1                                          
                   24 TABLE ACCESS BY INDEX ROWID TABLE REPORT2.DRV_PS_JOB_RPT Cost: 11  Bytes: 1,349,920  Cardinality: 51,920                                               
                        23 INDEX RANGE SCAN INDEX REPORT2.DRV_PS_JOB_JOBCODE_RPT Cost: 2  Cardinality: 8                                          

    user550024 wrote:
    I am really surprised that the stats for the good SQL are a little old. I just computed the stats for the bad SQL, so they are up to date...
    There is something terribly wrong.

    Not necessarily. Just using the default stats collection, I've seen a few cases of things suddenly going wrong. As the data grows, it gets closer to an edge case where the inadequacy of the statistics convinces the optimizer to choose a wrong plan. To fix it, I could just go into dbconsole, set the stats back to a time when they worked, and lock them. In most cases it's definitely better to figure out what is really going on, though, to give the optimizer better information to work with. Aside from the value of learning how to do it, some cases are not so simple. Also, many think the default settings of database statistics collection may be wrong in general (in 10.2.x, at least). So much depends on your application and data that you can't make too many generalizations. You have to look at the evidence and figure it out. There is still a steep learning curve for the tools used to look at the evidence. People are here to help with that.
    Most of the time it works better than a dumb rule-based optimizer, but at the cost of a few situations where people are smarter than computers. It's taken a lot of years to get to this point.

  • EX cam footage render time?

    I have EX1 footage captured as XDCAM EX.
    I placed it in a ProRes timeline, as recommended for faster renders.
    The footage was from an event - low light. The gain was high.
    I applied a BCC Denoise filter.
    The filter cleaned it up nicely, but
    the timeline is about 90 minutes and the render out is taking 11 hours (?).
    Is this render time normal for this type of filter on a timeline of this length?

    Hi -
    Couple of things:
    1) When dealing with XDCam and Ex footage, I routinely get very long estimates for renders that rapidly drop as the render progresses.
    2) I am not sure what advantage you got by editing the Sequence with the Codec set differently from the source. I routinely edit Ex footage on sequences that have the same codec (XDCAM EX 1080p30) and the renders, while not zippy, are not outrageously long. If you think about it, now FCP has to convert all the footage used in the sequence to Pro Res as well as rendering any effects or transitions. This is probably quite CPU intensive.
    Don't know if you want to consider this, but you might want to create a new sequence that has the same codec, frame size and frame rate as your source material, copy all the clips from your ProRes sequence, paste them into the new sequence and try a render there.
    Hope this helps.

  • Same Computers/Settings - Different Render Times

    Hello, great and powerful Adobe Community.  My coworker and I have identical machines (Dell Precision T7600 - Xeon 2GB, 32 GB RAM, NVIDIA Quadro K5000, Windows 64 bit OS), both running Adobe CS6.  Our render set-up for After Effects is to pull source footage from the main HD (2 TB), render to a second HD (150GB), with a third HD (150GB) set up for cache, media cache, and database folders.  Both machines are set up to render multiple frames simultaneously.  However, for some reason his machine is taking ridiculously long to render.  For the same file, my machine finishes rendering in about 2hrs, while his is showing an estimated time of 17 hours.  When left to run overnight, his machine still hasn't finished the next morning and shows 60+ hrs remaining.  This is a recent problem and his computer has had reasonable render times up until recently.  Software is up to date across the board, no new software has been installed on either machine lately, and any updates required by our system admin would have been applied to both.
    Other than checking that the above listed conditions are identical between our two towers, I'm not sure where to look for the problem.  Can anyone suggest a reason for his machine's poor performance?  Any insight would be appreciated.  My thanks in advance for your help.

    I wouldn't call it a bug. 
    Besides, although the effects you use in all these comps might all be precisely the same, you didn't breathe a word about whether the codec of the footage used in these comps is precisely the same... and codecs can make a big difference. 
    Sure, you may have rendered to the same codec.  Did you START with source footage that's all the same codec?  If not, you're not comparing apples to apples.
