AE render AVI size issues?

Ok, so I finished my project and rendered it to an .avi. When the rendering was complete (it took 4 hours) I found that the .avi file was 24 GB in size. Is that normal? I am thinking about exporting it into Premiere to make the short movie (~5 minutes). Assistance on how to get this file down to a reasonable size for YouTube would be great. Perhaps another movie format? Thoughts? Thanks again!

If you're encoding for YouTube, check the guidelines on the YouTube site. That's a specific case of a general rule: when you're trying to decide how to encode something, ask the person/site/company that will be receiving and playing it what they want.
The last time that I encoded something for YouTube (the After Effects 1.1 demo reel), I used an F4V container file, which uses the H.264 video codec.
See "Render and export a composition as an FLV or F4V file" for more information.

Similar Messages

  • WMV and Disk Size issues

    So I am a pretty avid Encore user and I have come into some issues lately and could use some help.
    Background-
    I filmed a 14 hour conference on SD 16:9 mini dv
    I captured 14 hours with Premiere as .AVI - I edited the segments and exported as .AVI
    I used Media Encoder to convert the files to NTSC Progressive Widescreen High Quality (.m2v)   - Reduced the file size drastically
    I then used Media Encoder to convert the .m2v files to .wmv files - Reducing the conference size to 5.65 GB in total.
    I then imported the .wmv into Encore - my issues begin
    At first, Encore CS4 imported the .wmv files without a problem; however, the disc size (of 5.65 GB) registered in Encore as around 13 GB. Why is that? The .wmv files only consume 5.65 GB on my hard drive. Where is this file size discrepancy coming from?
    So then Encore CS4 gets upset that I have exceeded the 8.5 GB DL disc size and crashes...
    I reopen the program and try to import my .wmv files again (forgot to save, like an idiot). 3 of 8 .wmv files import and then Encore starts giving me decoder errors saying I cannot import the rest of the .wmv files...
    Can anyone help me with this issue? I'm quite confused by all of this. Things seemed to work fine (sorta) at first and now Encore is pissed.
    I want to get this 14-hour conference on 1 DL DVD, and I thought it would be as simple as getting the files reduced to a size suitable for an 8.5 GB disc. Why is my way of thinking incorrect?
    Thanks for any help,
    Sam

    ssavery wrote:
    Thanks everyone for your help.
    I'm still not giving up... It's become an obsession at this point. My uncle does this with kids' movies for his children. He'll download and compress entire seasons of children's shows and put them all on one DVD, which he then plays in a DVD player for them to watch. I'm currently trying to get ahold of him...
    Thanks for the help
    Sam
    I've done this as well for shows from the '80s that will never again see the light of day... I use VSO Software's "ConvertXtoDVD v4". I ONLY use this for archival purposes, for Xvid or WMV or stuff that Encore would throw fits over. The menus are all mainly default stock stuff, but for these projects I'm not concerned about menus or specific navigation; I just need to get the job done. I can squeeze around 15 hrs of 720x480 onto one DL (it compresses the ever-living daylights out of the video... but for most of the source footage it really doesn't matter at that point; it's mostly all VHS archives I use with this program anyway). If you just absolutely HAVE to have a one-disker, you could check that app out, burn it, and see how it looks.
    Edited to add: to really squeeze stuff in, you can also use a DVDFab program (any version should do... older ones are cheaper). Make a disc image with ConvertX; if you have a lot of footage it may push it beyond the normal boundary of a DVD-DL and fail the burn. Then you can just import the disc image into DVDFab and choose to burn it to a DVD-DL, and it may compress it by about 3-7% more to fit. I would NEVER use this method EVER for a client... but if you are just hell-bent on doing one disc, try these two apps out. It may work out if you can live with the compression.
    If you do try this, I recommend this workflow: open Premiere with your first captured AVI, set up your chapters how you want them, then save each chapter or lecture or segment as its own AVI. Then import all of those separately into ConvertX and set it up to play one after the other when each segment ends. [I can't confirm this 100%, because I usually drop in already-compressed files... but if for some reason it doesn't want to work out, then I would suggest dropping in the MTS files instead.] (If, say, you want a new "movie" for each lecture instead, with chapters per movie, that can be done too... but it's more work; I can expound later if need be.) To save time on encoding, set up the menu to be the "minimalist" menu; it's strictly text. Then just create an ISO. If you do the full thing, I can almost guarantee you'll have to use DVDFab to burn to disc, because it'll probably be about 5-8% overburn.
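    To put rough numbers on the 14-hours-on-one-disc goal above (this back-of-the-envelope calculation is not from the thread, and it treats 8.5 GB as 8.5 x 10^9 bytes): squeezing 14 hours onto a single dual-layer disc leaves only about 1.35 Mbit/s for video and audio combined, which is why any tool that manages it has to compress very heavily.

        # total bits on the disc / total seconds of program, expressed in Mbit/s
        echo "scale=3; 8.5 * 10^9 * 8 / (14 * 3600) / 10^6" | bc
        # => ~1.349, i.e. roughly 1.35 Mbit/s average for video + audio combined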

  • Paper Size issues with CreatePDF Desktop Printer

    Are there any known paper size issues with PDFs created using Acrobat.com's CreatePDF Desktop Printer?
    I've performed limited testing with a trial subscription, in preparation for a rollout to several clients.
    Standard paper size in this country is A4, not Letter.  The desktop printer was created manually on a Windows XP system following the instructions in document cpsid_86984.  MS Word was then used to print a Word document to the virtual printer.  Paper Size in Word's Page Setup was correctly set to A4.  However the resultant PDF file was Letter size, causing the top of each page to be truncated.
    I then looked at the Properties of the printer, and found that it was using an "HP Color LaserJet PS" driver (self-chosen by the printer install procedure).  Its Paper Size was also set to A4.  Word does override some printer driver settings, but in this case both the application and the printer were set to A4, so there should have been no issue.
    On a hunch, I then changed the CreatePDF printer driver to a Xerox Phaser, as suggested in the above Adobe document for other versions of Windows.  (Couldn't find the recommended "Xerox Phaser 6120 PS", so chose the 1235 PS model instead.)  After confirming that it too was set for A4, I repeated the test using the same Word document.  This time the result was fine.
    While I seem to have solved the issue on this occasion, I have not been able to do sufficient testing with a 5-PDF trial, and wish to avoid similar problems with the future live users, all of whom use Word and A4 paper.  Any information or recommendations would be appreciated.  Also, is there any information available on the service's sensitivity to different printer drivers used with the CreatePDF printer definition?  And can we assume that the alternative "Upload and Convert" procedure correctly selects output paper size from the settings of an uploaded document?
    PS - The newly-revised doc cpsid_86984 still seems to need further revising.  Vista and Windows 7 instructions have now been split.  I tried the new Vista instructions on a Vista SP2 PC and found that step 6 appears to be out of place - there was no provision to enter Adobe ID and password at this stage.  It appears that, as with XP and Win7, one must configure the printer after it is installed (and not just if changing the ID or password, as stated in the document).

    Thank you, Rebecca.
    The plot thickens a little, given that it was the same unaltered Word document that first created a letter-size PDF, but correctly created an A4-size PDF after the driver was changed from the HP Color Laser PS to a Xerox Phaser.  I thought that the answer may lie in your comment that "it'll get complicated if there is a particular driver selected in the process of manually installing the PDF desktop printer".  But that HP driver was not (consciously) selected - it became part of the printer definition when the manual install instructions were followed.
    However I haven't yet had a chance to try a different XP system, and given that you haven't been able to reproduce the issue (thank you for trying), I will assume for the time being that it might have been a spurious problem that won't recur.  I'll take your point about using the installer, though when the opportunity arises I might try to satisfy my cursed curiosity by experimenting further with the manual install.  If I come up with anything of interest, I'll post again.

  • Smartcardio ResponseAPDU buffer size issue?

    Greetings All,
    I've been using the javax.smartcardio API to interface with smart cards for around a year now, but I've recently come across an issue that may be beyond me. My issue is that whenever I'm trying to extract a large data object from a smart card, I get a "javax.smartcardio.CardException: Could not obtain response" error.
    The data object I'm trying to extract from the card is around 12 KB. I have noticed that if I send a GET RESPONSE APDU after this error occurs, I get the last 5 KB of the object, but the first 7 KB are gone. I do know that the GET RESPONSE dialogue is supposed to be handled by Java in the background, where the responses are concatenated before being returned as a ResponseAPDU.
    At the same time, I am able to extract this data object from the card whenever I use other APDU tools or APIs, where I have oversight of the GET RESPONSE APDU interactions.
    Is it possible that the ResponseAPDU runs into buffer size issues? Is there a known workaround for this? Or am I doing something wrong?
    Any help would be greatly appreciated! Here is some code that will demonstrate this behavior:
    /* test program */
    import java.io.*;
    import java.util.*;
    import javax.smartcardio.*;

    public class GetDataTest{
        public static void main(String[] args){
            Card card = null;
            try{
                byte[] aid = {(byte)0xA0, 0x00, 0x00, 0x03, 0x08, 0x00, 0x00};
                byte[] biometricDataID1 = {(byte)0x5C, (byte)0x03, (byte)0x5F, (byte)0xC1, (byte)0x08};
                byte[] biometricDataID2 = {(byte)0x5C, (byte)0x03, (byte)0x5F, (byte)0xC1, (byte)0x03};
                //get the first terminal
                TerminalFactory factory = TerminalFactory.getDefault();
                List<CardTerminal> terminals = factory.terminals().list();
                CardTerminal terminal = terminals.get(0);
                //establish a connection with the card
                card = terminal.connect("*");
                CardChannel channel = card.getBasicChannel();
                //select the card app (helper method, omitted in the original post)
                select(channel, aid);
                //verify pin (helper method, omitted in the original post)
                verify(channel);
                /*
                 * trouble occurs here:
                 * the error occurs only when extracting a large data object (~12 KB) from the card.
                 * It works fine on other data objects, e.g. with biometricDataID2
                 * (data object ~1 KB) but not with biometricDataID1 (data object ~12 KB in size).
                 */
                //send a "GET DATA" command
                System.out.println("GETDATA Command");
                ResponseAPDU response = channel.transmit(new CommandAPDU(0x00, 0xCB, 0x3F, 0xFF, biometricDataID1));
                System.out.println(response);
            }catch(Exception e){
                System.out.println(e);
            }finally{
                if(card != null){
                    try{
                        card.disconnect(false);
                    }catch(CardException e){
                        System.out.println(e);
                    }
                }
            }
        }
    }

    Hello Tapatio,
    I was looking for a solution to my problem and I found your post; first of all, I hope you can answer.
    I am a beginner in card development. I'm now using javax.smartcardio. I can select the file I want to use,
    but the problem is I can't read from it, and I don't know exactly how to use the hex codes.
    I'm working with a CCID Smart Card Reader as the card reader and PayFlex as the smart card.
              try {
                   TerminalFactory factory = TerminalFactory.getDefault();
                   List<CardTerminal> terminals = factory.terminals().list();
                   System.out.println("Terminals: " + terminals);
                   CardTerminal terminal = terminals.get(0);
                   if(terminal.isCardPresent())
                        System.out.println("card present");
                   else
                        System.out.println("card absent");
                   Card card = terminal.connect("*");
                   CardChannel channel = card.getBasicChannel();
                   ResponseAPDU resp;
                   // this part selects the DF
                   // (getHexString() is a helper defined elsewhere in my code)
                   byte[] b = new byte[]{(byte)0x11, (byte)0x00};
                   CommandAPDU com = new CommandAPDU((byte)0x00, (byte)0xA4, (byte)0x00, (byte)0x00, b);
                   resp = channel.transmit(com);
                   System.out.println("Result: " + getHexString(resp.getBytes()));
                   // this part selects the data file
                   b = new byte[]{(byte)0x11, (byte)0x05};
                   com = new CommandAPDU((byte)0x00, (byte)0xA4, (byte)0x00, (byte)0x00, b);
                   System.out.println("CommandAPDU: " + getHexString(com.getBytes()));
                   resp = channel.transmit(com);
                   System.out.println("Result: " + getHexString(resp.getBytes()));
                   byte[] b1 = new byte[]{(byte)0x11, (byte)0x05};
                   com = new CommandAPDU((byte)0x00, (byte)0xB2, (byte)0x00, (byte)0x04, b1, (byte)0x0E);
                   // the problem is that I don't know how to build a CommandAPDU to read from the file (READ RECORD)
                   System.out.println("CommandAPDU: " + getHexString(com.getBytes()));
                   resp = channel.transmit(com);
                   System.out.println("Result: " + getHexString(resp.getBytes()));
                   card.disconnect(false);
              } catch (Exception e) {
                   System.out.println("error " + e.getMessage());
              }
    If you know how to do this, I'm waiting for your answer.

  • Recording File Size issue CS 5.5

    I am using CS 5.5, a Blackmagic UltraStudio Pro connected through USB 3.0, being fed by a Roland HD video switcher. Everything is set for 720p 60fps (59.94) and the Blackmagic is using Motion JPEG compression. I am trying to record our sermons live onto a Windows 7 machine with an Nvidia GeForce GTX 570, 16 GB of RAM and a 3 TB internal RAID array (3 drives). It usually works great, but more often now when I push the stop button in the capture window, the video is not processed and becomes unusable. Is it a file size issue, or something else? I get nervous when my recording goes longer than 50 minutes. Help!

    Jim, thank you for the response. I have been away and busy but am getting caught up now.
    I do have all drives formatted as NTFS. My problem is so sporadic that I cannot get a pattern down. This last Sunday recorded fine, so we will see how long it lasts. Thanks again.

  • Swap size issues-Unable to install DB!!

    Unable to install the DB. At the end of the installation it fails due to a swap size issue. FYI:
    [root@usr~]# df -h
    Filesystem            Size  Used Avail Use% Mounted on
    /dev/hda2             5.9G  5.9G     0 100% /
    /dev/hda3             3.0G  848M  2.0G  31% /tmp
    /dev/hda5              34G   12G   21G  37% /refresh
    /dev/hda1              99M   12M   83M  12% /boot
    tmpfs                 3.9G     0  3.9G   0% /dev/shm
    [root@usr~]#
    Please help me...Thanks

    You can increase your swap space; I have also faced the same issue.
    Just try: http://www.thegeekstuff.com/2010/08/how-to-add-swap-space/ (a quick sketch of the steps is below)
    ~J
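    For reference, a minimal sketch of one common way to add swap with a file (run as root; the 4 GB size and the /swapfile path are example values only, so adjust them to whatever the DB installer requires):

        # create and enable a 4 GB swap file
        dd if=/dev/zero of=/swapfile bs=1M count=4096
        chmod 600 /swapfile
        mkswap /swapfile
        swapon /swapfile
        # make it persistent across reboots
        echo "/swapfile swap swap defaults 0 0" >> /etc/fstab
        # verify
        swapon -s
        free -m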

  • Unable to install DB(swap size issues)

    Unable to install the DB. At the end of the installation it fails due to a swap size issue. FYI:
    [root@usr~]# df -h
    Filesystem            Size  Used Avail Use% Mounted on
    /dev/hda2             5.9G  5.9G     0 100% /
    /dev/hda3             3.0G  848M  2.0G  31% /tmp
    /dev/hda5              34G   12G   21G  37% /refresh
    /dev/hda1              99M   12M   83M  12% /boot
    tmpfs                 3.9G     0  3.9G   0% /dev/shm
    [root@usr~]#
    Please help me...Thanks

    I tried dd if=/dev/hda3 of=/dev/hda5 count=1024 bs=3097152
    Now the output is below...
    [root@user/]# df -h
    Filesystem            Size  Used Avail Use% Mounted on
    /dev/hda2             5.9G  5.9G     0 100% /
    /dev/hda3              35G   33G  345M  99% /tmp
    /dev/hda5              34G   12G   21G  37% /refresh
    /dev/hda1              99M   12M   83M  12% /boot
    tmpfs                 3.9G     0  3.9G   0% /dev/shm
    [root@user/]# cd /tmp/

  • Windows Update Helps with File Size Issues?

    I'm just wondering if anybody has recently noticed an
    improvement related to the file size issue variously reported
    throughout the forums?
    I ask because our IT folks distributed a Windows update on 2
    days last week and since the application of those updates I have
    not experienced the freakishly large file sizes and the related
    performance issues in Captivate. Unfortunately I don't have any of
    the details of what patch(es) were installed, as it was part of our
    boot script one morning and I didn't even realize it was updating
    until I received the Reboot Now or Later alert.
    Anyway, I was curious because I have experienced significant
    performance improvement since then.
    Rory

    If you are using a remote workflow ... designers are sending off-site editors InCopy Assignment packages (ICAPs) .... then they need to create assignments in order to package them for the remote InCopy user. So there's no need to split up a layout into smaller files or anything.  An assignment is a subset of the INDD file; multiple assignments -- each encompassing different pages or sections -- are created from the same INDD file.
    When the designer creates the assignment, have them turn off "Include original images in packages"; that should keep the file size down.
    Or -- like Bob said -- you can avoid the whole remote workflow/assignment package rigamarole altogether by just keeping the file in a project folder in the Dropbox folder on the designer's local hard drive, and have them share the project folder with the editors. In that workflow, editors open the INDD file on their local computer and check out stories, just as though they were opening them from a networked file server.
    I cover how the InCopy Dropbox workflow works in a tutorial video (within the Remote Workflows chapter) on Lynda.com here:
    http://www.lynda.com/tutorial/62220
    AM

  • What am I doing wrong - size issue

    Need some advice... as usual!
    I designed a site on my 17" monitor and set the margins to 0 and the table to 100%. However, when I look at it on different screens it looks rubbish... a big gap at the bottom of the page with the design cramped at the top. I wonder if someone would mind looking at my code and seeing what's wrong with it. The site was designed using various techniques including CSS for nav bars, tables, and Fireworks elements. The site can be viewed at:
    www.shelleyhadler.co.uk/nerja.html
    thanks for your help. Shell

    >Re: What am I doing wrong - size issue.
    Several things...
    First, your 17" monitor has nothing to do with web page layout. What resolution is your monitor set to? It could be 800 pixels wide or 1280 pixels... wouldn't that make a difference? That aside, screen resolution and size are irrelevant anyway. What counts is the size that your viewers have their web browser window set at.
    I have a pretty large monitor, set to a very high resolution, so I open several windows at once and size them so I can see the content in all of them. Sometimes my browser is full screen and sometimes it's shrunk down to less than 600 x 800. Your web site needs to accommodate that. Every web viewer out there is different and likes the way they have their screen set up. So you need to be flexible and your site needs to be flexible.
    Next, don't design in Fireworks and import to Dreamweaver. Fireworks is a superb web-ready graphics and image-processing program. The authors (mistakenly) threw in some web authoring stuff that works very poorly. Design your pages using Dreamweaver. Learn HTML markup and CSS styling to arrange it, then use Fireworks to create graphics to support your content.
    Along the way, be aware of the different browsers in use. Internet Explorer is the most popular (or at least most in use) simply by virtue of the Microsoft market share, but it is also the least web compliant (by virtue of the Microsoft arrogance), so some things that work there (like your green bands) won't on other browsers, and vice versa.
    That said... graphically, your site looks great. You have a good eye for composition and simple clean design. You just need to learn to use HTML to your best advantage to create some really nice-looking and nicely working sites.
    "shelleyfish" <[email protected]> wrote in
    message
    news:[email protected]...
    > Need some advice...as usual!
    >
    > I designed a site on my 17" monitor and set the margins
    to 0 and table to
    > 100%. However when I look at it on different screens it
    looks
    > rubbish...big
    > gap at the bottom of the page with the design cramp at
    the top. I wonder
    > if
    > someone would mind looking at my code and see what's
    wrong with it. The
    > site
    > was designed using various techniques including css for
    nav bars, tables,
    > and
    > fireworks elements. the site can be viewed at:
    >
    > www.shelleyhadler.co.uk/nerja.html
    >
    > thanks for your help. Shell
    >

  • Render Previews Size and Aspect Ratio

    We are currently editing several projects that have an odd editing dimension but a standard output dimension. These videos are for a live event with rows of four projection screens set somewhat tightly together. From a production standpoint, we wanted content to flow freely across all four screens, and we prefer not to have to visualize that in our heads, so we set up a "Desktop" mode, square-pixel sequence within a project with a dimension of 2560 x 540. The 2560 dimension is because most of the footage is from the Panasonic P2 camera (HVX200P). This camera shoots video at a rectangular pixel dimension of 1280 x 1080 with a pixel aspect ratio of 1.5, so 1.5 times 1280 = 1920, the final output square-pixel width. In this case the final output would be 1920 x 2, or 3840, wide by 540 high; 3840 divided by 4 = 960 x 540, so we're outputting standard-def widescreen to each screen.
    Premiere automatically creates the final output through nesting the 2560 x 540 sequence twice in a DVCPROHD 1080i sequence of standard size. The two 2560 x 540 sequences, one on video layer 1 and the other on video layer 2, are positioned so that the video that corresponds to screen one is at the top far right and the second instance of the sequence is at the bottom far left. The final output file is an AVI in the Canopus HQ codec at 1920 x 1080; it is actually four standard-def videos in one 1920 x 1080 video. The Grass Valley Turbo 2 outputs this hi-def file to a Vista Spyder, which isolates and projects the four videos-in-one to the appropriate four screens.
    This type of project would not have been possible without CS5 and Mercury.  Premiere is handling this oddity extremely well.
    One minor issue for the support gurus:
    When we render the Desktop mode, square-pixel 2560 x 540 sequence timeline, we get what appear to be random, mixed render results. Sometimes this odd shape renders perfectly; other times it renders out to a small dimension that is less than half size. Other times it renders out with the elements small but positioned with spaces between them. Playing around with sequence render settings and playback resolution sometimes SEEMS to cause the render to correct itself. Sometimes a computer restart SEEMS to correct the issue. Any thoughts?

    Colin Brougham wrote:
    Barring previewing, are you able to export to your intermediate/final movie without problems?
    The final exported video is 1920 x 1080. I have test exported in a number of formats without any problems at all.
    The 2560 x 540 version, (BEFORE it is "processed" through 2 instances of itself in a standard DVCPROHD sequence) is for editing purposes only, so we can actually see how the video will play across four projection screens.
    The remaining issue I'm trying to work out is how best to get the finished video into the Grass Valley Turbo 2. The Turbo 2 natively takes an AVI file in the Canopus HQ codec without having to transcode. The only way to get an AVI from Premiere Pro CS5 into the Canopus HQ codec is to first export the finished video as an uncompressed AVI from Premiere, import it into either "Edius" or "Procoder 3", and export in the HQ codec. The export from Premiere CS5 + Mercury takes 15 minutes. The export (HQ-encoded file) from the trial version of Edius takes around 10 minutes. IMO Grass Valley should create a plug-in for Premiere CS5 and shouldn't allow the quickest export to be Canopus-exclusive. I seriously doubt many Premiere CS5 users will be switching to Edius any time soon.
    According to Grass Valley, the Turbo 2 will accept an MPEG-2 Pgm Stream, whatever the heck a Pgm Stream is. I did a Google search without success. Of course Premiere will export an MPEG-2 file, but how the Pgm Stream relates to this, I have no idea. There is a mind-boggling array of settings, and the preset seems to export a rather compressed-looking video.
    My current thinking is to try to convince the boss to buy Procoder 3.
    BTW:
    the 3840 x 540 version of the sequence renders fine part of the time and then, for whatever reason, starts creating a preview that is about 35% of the correct dimension... so back to the original sequence settings of 2560 x 540.
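    On the "Pgm Stream" question above: an MPEG-2 program stream (usually a plain .mpg file) is simply the container that multiplexes MPEG-2 video and audio into one file, as opposed to an MPEG-2 transport stream used for broadcast. As a rough, hypothetical sketch of producing one outside Premiere with ffmpeg (the file name, bitrates and buffer size are example values only; check the Turbo 2 documentation for the limits it actually accepts):

        ffmpeg -i master_1920x1080.avi -c:v mpeg2video -b:v 25M -maxrate 25M -bufsize 8M \
               -c:a mp2 -b:a 384k program_stream.mpg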

  • Video size issues/slow performance after update to CC 2014

    Hello everyone,
    I'm not an editor by trade, so please bear with me here. I've spent the last few days searching around for an answer and haven't been able to come up with much.
    I'm on a late 2011 Macbook Pro, 2.2 GHz Intel Core i7, 4 GB 1333 MHz DDR3, AMD Radeon HD 6750M 512 MB, Mac OS X Lion 10.7.5 (11G63b).
    I spent a month cutting my first little experimental feature which the assistant editor set up for me (and who is currently away).
    My computer isn't rocket fast, but I was able to complete the first few passes on Premiere CC (7.2.2?) with few major issues.
    About a week ago, I upgraded to Premiere CC 2014 8.0.
    I immediately noticed that my footage went from this: [screenshot in the original post] to this: [screenshot in the original post]
    (The specifics of the footage:
    As you might be able to tell, we shot very old school VHS to help sell the 90's setting. It was a nightmare to find the best way to export it all. I'm still not sure if we did this entirely properly -- and hopefully it's not the cause of these issues. Also, I know VHS is only SD -- we chose to import at the utmost quality, i.e. 1920x1440 and ProRes 422, because more of the image detail was present in low-light.)
    I found that I had to select all the clips in each timeline, right click, and select the 'Set to Frame Size' to have the video full frame once again. Then I had to render everything.
    As I tried to resume my edit, the performance was much slower. Playback is a nightmare. I get the beach ball of death constantly. It would take several minutes to save the project. And it started to crash about once an hour on top of everything.
    But I continued to work for the last eight days, hoping that I'd be able to rectify this somehow.
    I cleared the cache with no luck.
    I read that I could uncheck 'Composite in Linear Color' to make it faster, but that wasn't an issue in the last build.
    My playback and paused resolution is 1/4.
    I read that I should update my OS to OS X Mavericks, but that keeps crashing too! Haha.
    I understand that my system isn't the best, but I just want my project to perform as well as it did on Premiere CC before the update.
    I just spoke to an Adobe tech who basically told me that there wasn't a solution here besides reverting back to Premiere CC from CC 2014 -- or buying a new Macbook.
    But there has to be a way that I can get the same performance that I did before the upgrade, right? I have a feeling something happened to how CC 2014 is using the existing footage.
    (Also, I noticed that clips where I increased the zoom, i.e. 100% to 110%, were back at their original size. And instead of saying 100%, it was now saying 300%. I'm assuming this is because of the 'Set to Frame Size' setting, but this wasn't the case on Premiere CC.)
    Does anyone have any ideas what I can do here?
    Here are my sequence settings on CC 2014: [screenshot in the original post]
    And on CC: [screenshot in the original post]
    For some reason, portions of this window are grayed out in CC but not in CC 2014. Not sure if this has anything to do with the issue.
    Thanks a ton in advance everyone! I really can't afford to lose 8 days worth of work, so any help would be greatly greatly appreciated!
    I'll be more than happy to provide additional info.
    I'm really looking to find out, definitively, if my only two options are: a) go back to Premiere CC and lose my recent work, or b) buy a new laptop, which I can't afford.
    (As a side question, is there any way I can transfer my recent work from CC 2014 to CC? I understand it's not backwards-compatible, but there has to be at least a partial solution.)
    Best,
    Jon

    Thanks for the quick response!
    So just this, right (to DV)? [screenshot in the original post]
    I scrubbed through really quick, seems like it just may have worked.

  • Smart Objects - File size issues

    Hey All,
    The Question: Not sure if this question has been answered elsewhere, but when using a nested smart object (meaning a smart object within a smart object), Photoshop CS5 doesn't display the correct file size (at bottom left) or seem to account for the nested smart object's file size. Is there a "setting" I'm missing to accurately display what the true file size is?
    The Problem: Using multiple nested smart objects, I have reduced the size of my image to 260x200 for web export. Photoshop CS5 won't let me save a file that appears to be only 3 MB, claiming it's over 2 GB. See image below.
    I'm really not sure what to do about this. The company I work for makes lots of changes, so using smart objects is necessary for my workflow. But it also seems to be slowing me down trying to figure out issues like this, and it is problematic when it comes to saving all the work I have been doing.
    Thanks for the help

    FentonDesigns wrote:
    when using a nested smart object (meaning a smart object within a smart object) Photoshop CS5 doesn't display the correct file size (at bottom left) or seem to account for the nested smart object file size.  Is there a "setting" I’m missing to accurately display what the true file size is?
    One thing you might have missed is that Photoshop is not a file editor, it's a document editor. The sizes Photoshop displays are related to how much RAM it is using for the document's data, how efficiently that RAM is being used, etc. File sizes vary all over the place; they depend on the number of pixels in an image, whether the format supports layers or not, compression, and transparency. There is no way Photoshop could even guess at file sizes.
    Another point is that not all smart object layers are created the same, and their sizes may be far different than you think.
    Smart object layers have a basic format. There is an embedded object, there is a composite pixel rendering of the embedded object that is used for the layer's pixels, and there is a transform associated with the layer's rendered pixels.
    Anything Photoshop supports can be an embedded object. These objects are copies of the original object: for example, a copy of a RAW file where the ACR settings are stored in the file copy's metadata. An embedded object might be a copy of a PSD file that has thousands of layers. In any case, Photoshop renders pixels for the embedded object's composite view and uses these rendered pixels as the smart object layer's pixels. These pixels cannot be changed within the document.
    However, the embedded object can be opened, worked on and changed. If the change is committed, Photoshop will update the embedded object, render the updated object's composite view, and replace the layer's pixels.
    Smart object layer pixels can only be acted on in the document, not changed with paint etc. For example, the transform associated with the smart object layer sizes and positions the layer's rendering over the canvas. The layer's actual size may be larger or smaller than the canvas size and have a different aspect ratio than the canvas. For example, if you place an image that is larger than the document's canvas size, one of Photoshop's preferences (on by default) resizes large placed images to fit within the current document's canvas size; the transform associated with that placed layer causes the rendering of the layer's pixels to fit within the canvas.
    Though an embedded object may contain thousands of layers, the actual object may be much smaller than you think, because PSD files are compressed and the embedded object may be compressed. Also, while the embedded object may contain vector layers, when a smart object layer is transformed, the layer is transformed using interpolation like a raster layer, because all that is being transformed is the pixels Photoshop rendered for the embedded smart object. The only way to work on the embedded smart object's layers is to open the smart object and work on the object itself.

  • After Effects export/render time inconsistency issue

    Hi there, just a quick run down on my system (as outlined in the information to include in my post)
    Motherboard:Gigabyte GA-P67A-UD7-B3 Motherboard
    CPU: Intel Core I7 2600k (Weak volt clock @ 3.9ghz)
    RAM: 16GB (4x4) RIPJAWS X
      Primary HDD: OCZ Revo Drive X2 240GB
    Secondary HDD: OCZ Vertex 2 240GB
    GPU 1: Nvidia Gainward GTX580 Phantom (3GB DDR5)
    GPU 2: Nvidia Gainward GTX580 Phantom (3GB DDR5)
    I think that's about the necessary info relevant to this.
    Now my issue is: I have videos which are ~40 seconds to 1 minute in length, and rendering/exporting the entire video at LOW quality (i.e. 0) takes up to 4 times longer than rendering the video at 10 (max quality).
    I know people can argue 'be happy it's faster', but the problem is that the low quality is the slower one; I need a fast, poor-quality export just to check up on changes I make.
    A single 5-second scene takes ~3 minutes at maximum quality, yet a single scene at lowest quality takes ~10 minutes.
    I have adjusted settings that have been suggested on YouTube and around the web, including one which would accelerate the first 10% of the export to a few seconds' worth, then bottleneck and take tenfold longer to render the other 90%, so that was a largely useless adjustment too.
    Just a heads up, I'm using CS5, not 5.5.

    Umm, sorry, but again, that's as productive as saying that your undies don't fit when we don't know your size... We need the exact settings. And not to point out the obvious: exporting to SWF from AE? RLY? That's just godawful. All AE does is create a JPEG sequence embedded in an SWF, with a few things possibly remaining vectors. How do you even dream of judging performance when most of this depends on your Flash Player performance anyway? Again, all of this makes zero sense (to me, anyway). If you need to verify video playback, use a commonly accepted format like H.264 or WMV that plays to spec in most media players. Everything else is guaranteed to mislead and confuse you, and you'll forever be pulling out your hair trying to fix the problems you created for yourself.
    Mylenium

  • Aperture 2.x and Nikon D3 Pixel Size issues

    I've noticed that Aperture 2.x does not use the correct pixel size for Nikon D3 images: they are read by Aperture as 4272x2828 instead of 4256x2832 as they should be. Exported images are similarly incorrect. (This also creates an awkward aspect ratio of 151:100.)
    After the problems with Aperture displaying D2x files shot in High-Speed Crop mode I find this really irritating. Is Aperture slightly blowing up the image in the longer dimension to gain these extra pixels? It makes me wonder if Aperture/Apple can't get the basics like the size of the picture right what else might be getting misread/mishandled behind the scenes.

    ts, I know this issue has been discussed many times on this forum; however, it still is a problem. It might not be an issue if all you use is the file created by Aperture, but if you are retouching the same image (or a series of images) converted by different applications, it becomes a serious issue.
    Also, in several previous threads you find statements such as "there is more to a raw file than NX or Photoshop displays". This is true and false. If Aperture rendered more sensor pixels than Capture or Photoshop, the "cropped" image out of Photoshop or Capture would exactly (pin accurately) match at least parts of the "oversized" export from Aperture. This is not the case.
    For me, this issue (even though often discussed) is still not solved and seems to apply to raw files from various cameras, Nikon and Canon. In my case, it applies to the Nikon D200, D300 and D3.
    Every time I have to create layered illustrations, do retouching work, or layer different exposures in Photoshop (anything that requires several instances of the same image rendered by different applications), I can't use Aperture-adjusted images. In some cases, creating two versions in Aperture (then exporting and overlaying in Photoshop) can do the trick; in other cases you would like to use the advantages of different applications (NX for CA, Aperture for global adjustments, PSD for lens correction, etc.).
    I understand this might be a minor issue (if at all) for most; however, if your core business is image retouching and you deal with thousands of images from various photographers, pixel accuracy can become a serious issue (feel free to call it a BFD if you like).

  • HP Office jet pro scanning size issue

    I have a HP Officejet Pro 8500
    When scanning to my computer to a .pdf, it always results in an 8.5x14 document, not an 8.5x11 document
    (and this happens regardless of whether I scan a book page with the scanning lid open or an 8.5x11 single-page original with the scanning lid closed).
    Using the HP Solution Center Scan Settings, I cannot find any place to change the default settings to force it to scan to an 8.5x11 size.
    I have looked at the Adobe help, which tells me to find the place to change my scanning default settings (under File -> Create), but I do not have a Create option under my File menu in Adobe (I believe I just have the basic Adobe Reader).
    And I cannot see how to change the properties of the .pdf in Adobe to change the size from 8.5x14 to 8.5x11.
    I have no idea where else to look to figure out how to fix this
    Anyone have any other ideas?
    (and I am running an HP computer with Windows 8.1)
    Thanks.

    Hello there! Welcome to the forums @SKMgherkin ,
    I read that you can only scan in legal size, not letter size, from your Officejet 8500 model. I will certainly do my best to help you out!
    I would like to ask if you could post me back a screen shot of the scan size settings you are able to see in the Solution Center software.
    Also try running the Print and Scan Doctor tool to see if the tool will correct the settings in the software.
    Hope to hear from you!
    Rainbow7000 - I work on behalf of HP
