...better compression?

...I've just rendered a 12-minute video to DVD and I'm not satisfied with the quality. From FCP I exported with these settings: deinterlaced > 720x480 SD anamorphic 16:9 > uncompressed 8-bit YUV > quality: highest / motion: best, at a 13 GB file size. After DVDSP got through with it, the file size was barely 1 GB with two versions of the movie on the disc (one in French). My question is: what settings can I use to get better quality, other than the defaults? Can anyone suggest something? I'm in a pinch and can't really begin to delve into mountains of confusing information about compression settings! Thanks to all who respond.

I usually just let Compressor handle the resizing. I use the "Best setting for DVD 90 minute" preset. I tweak the bit rate quality settings and the audio settings, but leave everything else alone. My DVDs have come out fine: they are full screen on a 16:9 LCD TV, and on an old 4:3 CRT TV they have black bars on the top and bottom.
I shouldn't need to manually enter any resizing settings, should I? Compressor should resize it for DVD correctly; at least it has been doing so for me. Nothing looks distorted as it is.
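If you do end up setting the bit rate by hand rather than trusting the 90-minute preset, the arithmetic is just a bit budget. Below is a rough sketch of that calculation (it is not a Compressor feature, just back-of-the-envelope math); the 4.37 GB usable single-layer capacity, the roughly 9.8 Mbps DVD-Video ceiling, the single 192 kbps audio stream and the 5% safety margin are all assumptions to adjust for your own project.

    # Rough DVD bit-budget sketch: pick an average video bitrate that fills,
    # but does not overflow, a single-layer DVD for a given total runtime.
    # Assumed figures: ~4.37 GiB usable capacity, ~9.8 Mbps DVD-Video ceiling,
    # one audio stream at 192 kbps, 5% safety margin. Adjust as needed.
    def dvd_video_bitrate_mbps(runtime_minutes, audio_kbps=192, streams=1,
                               capacity_gib=4.37, safety=0.95):
        total_bits = capacity_gib * 1024**3 * 8 * safety    # usable bits on the disc
        seconds = runtime_minutes * 60
        audio_bits = audio_kbps * 1000 * seconds * streams  # bits reserved for audio
        video_mbps = (total_bits - audio_bits) / seconds / 1e6
        return min(video_mbps, 9.8)                         # cap at the DVD-Video limit

    # Two 12-minute versions (English + French) on one disc:
    print(round(dvd_video_bitrate_mbps(24), 2))   # capped at 9.8 -> just use the maximum
    # A 90-minute programme, for comparison:
    print(round(dvd_video_bitrate_mbps(90), 2))   # roughly 6.4 Mbps

The point for the original poster is that 24 minutes of total material is nowhere near filling a disc, so the disc capacity is not the limit; the bit rate chosen in the encoder is.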

Similar Messages

  • How did I get better compression when using GPU acceleration?

    Hello,
    I was just making a short, 1:12-long video encoded as H.264 in an MP4 wrapper. I had edited the video in Premiere Pro CC and was exporting through Adobe Media Encoder at a frame rate of 29.97 fps and a resolution of 1920x1080. I wanted to run a test to see how much GPU acceleration benefits me, so I ran two exports, one with it and one without. The one with GPU acceleration finished in 2:09. The one without it took 2:40, definitely enough to justify using GPU acceleration (my GPU is an NVidia GTX 660 Ti which I have overclocked). I am sure the benefit would be even greater in most videos I make, since this one had relatively few effects.
    However, I checked the file sizes out of curiosity, and found that the one made with GPU acceleration was 175 MB in size, and the other was 176 MB. While the difference is pretty small, I am wondering why it exists at all. I checked the bitrates, and the one made with GPU acceleration is 19,824 KB/s, whereas the other is 19,966 KB/s. I used the exact same presets for both. I even re-encoded the videos again just to test it and verify the results, and the second test confirms what the first showed.
    So, what I am wondering is: What is it that made this possible, and could this be used to my advantage in future productions?
    In case it is in any way relevant, these are my computer's specifications:
    -Hardware-
    Case- MSI Interceptor Series Barricade
    Mainboard- MSI Z77A-G45
    Power Supply Unit- Corsair HX-750
    Processor- Third-Generation Quad-Core Intel Core i7-3770K
    Graphics Card- ASUS NVidia GeForce GTX 660 Ti Overclocked Edition (with two gigabytes of dedicated GDDR5 video memory and the core overclocked to 1115 MHz)
    Random Access Memory- Eight gigabytes of dual-channel 1600-MHz DDR3 Corsair Dominator Platinum Memory
    Audio Card- ASUS Xonar DSX
    System Data Drive- 128-gigabyte Samsung 840 Pro Solid-State Drive
    Storage Data Drive- One-terabyte, 7200 RPM Western Digital Caviar Black Hard Disk Drive
    Cooling System- Two front-mounted 120mm case fans (intake), one top-mounted 120mm case fan (exhaust), and a Corsair Hydro H60 High-Performance Liquid CPU Cooler (Corsair SP120 rear exhaust fan)
    -Peripherals-
    Mouse- Cyborg R.A.T. 7
    Keyboard- Logitech G110
    Headset- Razer Tiamat Elite 7.1
    Joystick- Saitek Cyborg Evo
    -Software-
    Operating System- Windows 8.1 64-bit
    DirectX Variant- DirectX 11.1
    Graphics Software- ASUS GPU Tweak
    CPU Temperature/Usage Monitoring Software- Core Temp
    Maintenance Software- Piriform CCleaner and IObit Smart Defragmenter
    Compression Software- JZip
    Encryption Software- AxCrypt
    Game Clients- Steam, Origin, Games for Windows LIVE, and Gamestop
    Recording Software- Fraps and Hypercam 2
    Video Editing Software- Adobe Premiere Pro CC
    Video Compositing/Visual Effects Software- Adobe After Effects CC
    Video Encoding Software- Adobe Media Encoder
    Audio Editing/Mixing Software- Adobe Audition CC
    Image Editing/Compositing Software- Adobe Photoshop CC
    Graphic Production/Drawing Software- Adobe Illustrator CC

    As I said in my original post, I did test it again. I have also run other tests, with other sequences, and they all confirm that the final file is smaller when GPU acceleration is used.
    Yes, the difference is small but I'm wondering how there can even be a variable at all. If you put a video in and have it spit it out, then do that again, the videos should be the same size. It's a computer program, so the outputs are all based on variables. If the variables don't change, the output shouldn't either. For this reason, I cannot believe you when you say "I can, for example, run exactly the same export on the same sequence more than once, are rarely are they identical in export times or file size."

  • Better compression at lower bit rates

    I've not had much luck getting good H.264 compression out of AME at lower bit rates. Are there some secrets someone can share? I used to be quite adept back in the day, so it's frustrating having to upload giant files to Vimeo. Any good guidance?
    I should be more specific. It seems like close to 5 Mbps is needed for full HD, but I remember Flash expert Fabio getting great results at less than 1 Mbps:
    http://sonnati.wordpress.com/
    It would be great if AME had some additional controls.

    SSR wrote:
    Wayyyyyy back when I first started encoding MP3s I was using 28 with an inferior encoder. At the time I couldn't spot the difference, but now the difference is stark. Particular types of music (and more specifically instruments and combinations) do emphasise it though.
    Actually, you cannot hear the difference when the background noise is loud, for example on public transport. I suggest using a noise-cancelling headset. $$$

  • Using BlackMagic Intensity Pro Video Compression card with FCP in a MacPro

    BlackMagic is offering the Intensity Pro card, which they claim enhances the quality of video compression for better results than software-driven compression, even with standard DV in addition to HDV, and at a surprisingly low cost (less than $300).
    http://www.h-digital.com.au/hardware/hardwareview.asp?id=152
    Has anyone used such a hardware-driven compression card in a MacPro?
    Does it really help make better-quality DVDs?
    If not, are there better hardware-driven compression cards?

    7-8 years ago, we said that analog video was at the low end and DV was at the high end.
    Uh, DV was NEVER on the high end. Digibeta and Beta were analog, and were HIGHER end than DV. Just because it was digital doesn't mean it was better. DV is compressed 5:1 and has 4:1:1 color sampling. Beta was 4:2:2 and better than DV...and Digibeta was and still is the high end SD format. DVCPRO 50 a close second. No...DV came in on the low end...like HDV comes in on the low end of the HD spectrum.
    If this Intensity card does not help get better compression than software-driven compression, is there another card that can?
    Well, I answered this, but you must have missed my answer, so I will bold it for you: *THERE IS NO CARD OUT THERE FOR THE MAC THAT WILL AID IN DVD COMPRESSION.* I don't know about the PC side, so I make the statement to include MAC only. There are several hardware solutions for encoding DVDs that distributors use when making the theatrical DVDs like OCEANS 13...but those are separate hardware encoders. And no, again, those encoders are not available on a Mac.
    What I mean is that I want to find ways to get higher-quality DVDs than when using software such as Compressor. Do you follow me?
    I follow you....you aren't following me. Since I am an editor and not a distributor, I don't know what those solutions are. They exist outside of the editing world, so you have to GOOGLE this or look elsewhere. Needless to say, the Intensity does not aid in this. No capture card does. That is not what they are designed for. They are designed to capture non-firewire formats and to output to non-firewire formats...not to aid in DVD compression.
    Shane

  • DVD Compression/Burning All In One

    Does anyone know of a program that will work like DVD Shrink does on the PC, in that it will rip, compress and then prompt to burn the file for you when it has finished all of the above? I used MacRipper, but I don't know what to do with the file after that, as I am not very proficient with burning the file to a DVD after it has been ripped. An all-in-one program would be easier for me, as DVD Shrink was a great program for avoiding many steps. I want to make backup copies of some of the DVDs that I have purchased.

    DVD2oneX has better compression and will burn as well. DVD2oneX does better on the newer releases. I have and use both Toast and DVD2oneX.
    DVD2oneX is try-before-you-buy, so give it a shot.

  • Append to a compressed file

    I have some test results that I want to save into a test result file. Since I will be accumulating many test results, I want to keep overhead down, so I would like to simply append the result to the end of the file. Here is what I have done...
    1) Create an empty array, flatten to XML and write to a file.
    2) When I want to add a record, I take the record control, flatten to XML, and then overwrite the </Array> at the end of the file with the XML record. I then append another </Array> onto the end of that to properly close it out.
    3) I can then read this XML file back into an array of test record structures at a later time.
    I chose to do it this way, because when I am generating the test records, I do not want the overhead of "file read->XML Parse entire file->append code->XML export appended structure->File write". I figured that by simply appending the XML record to the end of the file, I could cut down on a lot of this IO overhead.
    The problem is that XML is a bit "chatty". I'd like to be able to read, write and append to compressed files (since XML compresses very well). The lvzlib routines only have a read and a write, but no append. My understanding of compression schemes is that they build a table of the common byte patterns and replace them with shorter tokens. If I compress each record separately, I will have a table of tokens for each record that is basically the same as in all the other records. I can get better compression if I use one table for all records.
    Has anybody implemented this, or do I need to dig into the guts of zlib?
    Alternatively, can the normal File I/O routines be configured to work on a file in a compressed archive (like a zip file) and let the OS handle all the compression?
    Thanks,
    Brian Rose
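    The "one table for all records" idea maps naturally onto a streaming compressor: keep a single compression stream open, flush it after every record so the bytes already on disk are always decodable, and append only the flushed chunk to the file. The lvzlib VIs are LabVIEW-specific, so purely as an illustration of the concept here is a minimal Python sketch of the same approach (the file name and the record strings are invented for the example):

        import zlib

        LOG = "results.xml.z"          # hypothetical output file for this example

        # One long-lived compressor: every appended record shares the same
        # compression history instead of starting a fresh "table" per record.
        compressor = zlib.compressobj(level=9)

        def append_record(xml_record):
            """Compress one record and append the flushed bytes to the log file."""
            chunk = compressor.compress(xml_record.encode("utf-8"))
            # Z_SYNC_FLUSH emits everything buffered so far without closing the
            # stream, so the file on disk stays decodable after every append.
            chunk += compressor.flush(zlib.Z_SYNC_FLUSH)
            with open(LOG, "ab") as f:
                f.write(chunk)

        def read_all_records():
            """Decompress the whole file as one continuous zlib stream."""
            with open(LOG, "rb") as f:
                return zlib.decompressobj().decompress(f.read()).decode("utf-8")

        append_record("<Record><Name>test1</Name><Val>42</Val></Record>\n")
        append_record("<Record><Name>test2</Name><Val>17</Val></Record>\n")
        print(read_all_records())

    Whether the LabVIEW zlib wrappers expose a sync-flush like this is worth checking; if they only offer whole-buffer deflate, compressing batches of records rather than single records is the next best way to share the dictionary.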

    I'm not sure what you mean by appending. If you mean appending new data to an existing file inside a ZIP archive, that can't be done. The ZIP format is not structured in a way that would allow appending data to an internal file stream inside the archive, and I'm not even sure there is any compression scheme that would allow it. With all the standard compression schemes, the only way is to extract the file, append the new data, and then replace the existing file in the archive with the new one.
    If you mean appending a new file to an existing archive, then the newest released version 2.3 of lvzlib can do it. The ZLIB Open Zip Archive.vi function has an append mode input. Previously this was a boolean that indicated whether a new file was to be created or whether the archive should be tacked onto the end of an existing file. In the new version this is an enum with an additional entry (adding to an existing archive).
    Rolf Kalbermatter
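    (Purely as a cross-language illustration of that second case, adding a new member to an existing archive is the same idea as Python's standard zipfile module opened in append mode; this is not the lvzlib API, and the file names below are invented.)

        import zipfile

        # Open (or create) the archive in append mode: existing members are
        # kept, and each call adds a new one. This does NOT append bytes to an
        # existing member; that still requires rewriting the entry.
        with zipfile.ZipFile("results.zip", mode="a",
                             compression=zipfile.ZIP_DEFLATED) as zf:
            zf.writestr("run_042.xml", "<Array><Record>...</Record></Array>")

        with zipfile.ZipFile("results.zip") as zf:
            print(zf.namelist())   # every member accumulated so far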

  • Compression does not save space

    Hello All,
    OS: AIX 5.2.
    We are creating seven backup files every day.
    We transfer files to our backup server via ftp.
    Now we have a request to compress the files.
    So I created a backup folder and put all seven files into one pax archive as below:
    pax -wvf backup.pax file1 file2 file3 file4 file5 file6 file7
    ls
    backup.pax
    Now I am trying to compress the file as below, but I am getting an error:
    oracle:>compress backup.pax
    This file is not changed; compression does not save space.
    DN

    it's pretty universally accepted. The keyword is "universally". Bzip2 has better compression but it's very slow. This is the reason why bzip2 can't be universally accepted (I mean online content compression via HTTP, etc.).
    But it is accepted by most of the *nix clones, and if I really need to save space then I use bzip2 rather than gzip.
    Just little proof:
    standard_bz2.log and standard_gz.log files are the same.
    du -sh *
    1.1G    standard_bz2.log
    1.1G    standard_gz.log
    Now I'll execute the following commands:
    echo "GZIP"; date; gzip standard_gz.log; date
    echo "BZIP2"; date; bzip2 standard_bz2.log dateand here is output:
    GZIP
    Mon Jun 12 12:26:38 CEST 2006
    Mon Jun 12 12:27:38 CEST 2006
    BZIP2
    Mon Jun 12 12:27:45 CEST 2006
    Mon Jun 12 13:12:29 CEST 2006
    du -sh *
    54M     standard_bz2.log.bz2
    108M    standard_gz.log.gz
    Gzip's compression was fast (it took just 60 seconds) and the resulting archive is 108M.
    Bzip2's compression was much better and the resulting archive is just 54M, but the compression took much longer.
    So if you need quick compression and saved space matters less, gzip is the right option.
    If you need to save space or you need better compression, then bzip2 is the right option.

  • Video compression in After Effects

    Hi,
    I have a little problem with video compression in After Effects. When I look at, for example, this video: http://vimeo.com/18672227 the quality in HD is really great. When we were trying to compress our video in After Effects, the best result we were able to get was 80 MB for 18 seconds, and the quality was still much worse than the video I posted above. 80 MB for only 18 seconds of video at that quality seems like too much to me; I really doubt it is normal. By that measure the video I posted above should be 2000 MB, and I really don't believe that.
    Is there some way to get HD-quality video for internet streaming at an affordable size out of After Effects rendering/compression? Which settings, formats or codecs should I use? Or do I need some other Adobe program that is built for better compression?

    You are looking for an easy answer where there is none. Yes, there are better tools than AE for final compression like Adobe Media Encoder, but your format will still be H.264. The rest is very much a matter of experience and a combination of settings for data rates, encoding passes and so on plus tricks on how to deal with Gamma shifts and color skewing due to the chroma undersampling. Without seeing the actual footage, nobody will be able to give you any exact pointers, but generally the presets you find in AE and AME will provide a good basis. The rest is just tippy-toeing through the settings and learning by doing. Also do some general research on the matter. Because it is so complex and difficult, this topic is discussed to death on pretty much every forum that deals with video editing, DVD/ BluRay authoring and ripping movies for fun...
    Mylenium

  • File Compression

    I have a web service created in a .net environment that examines existing pdf files in a staging directory prior to sending them over the wire using FTP.
    Part of that process requires that I rename individual files in order to associate them with a particular batch.
    I also have a requirement to reduce the size of individual files as much as possible in order to reduce the traffic going over the line.
    So far I have managed about a 30% compression rate by using an open source library (iTextSharp).
    I  am hoping that I can get a better compression rate using the Acrobat SDK, but I need someone to show me how, hopefully with an example that I can follow.
    The following code snippet is a model I wrote that accomplishes the rename and file compression...
                // Requires: using System.IO; and using iTextSharp.text.pdf;
                const string filePrefix = "19512-";
                string[] fileArray = Directory.GetFiles(@"c:\temp");
                foreach (var pdffile in fileArray) {
                    string[] filePath = pdffile.Split('\\');
                    var newFile = filePath[0] + '\\' + filePath[1] + '\\' + filePrefix + filePath[2];
                    var reader = new PdfReader(pdffile);
                    var stamper = new PdfStamper(reader, new FileStream(newFile, FileMode.Create), PdfWriter.VERSION_1_5);
                    // Recompress the content stream of every page, not just page 1.
                    int pageCount = reader.NumberOfPages;
                    for (int i = 1; i <= pageCount; i++) {
                        reader.SetPageContent(i, reader.GetPageContent(i), PdfStream.BEST_COMPRESSION);
                    }
                    stamper.Writer.CompressionLevel = PdfStream.BEST_COMPRESSION;
                    stamper.FormFlattening = true;
                    stamper.SetFullCompression();
                    stamper.Close();
                    reader.Close();
                }
    Any assistance is appreciated.
    regards,
    Greymajek

    Greymajek wrote:
    ...using the Acrobat SDK...
    Then you better ask in the Acrobat SDK forum.

  • How do I compress PDF files using Acrobat SDK?

    have a web service created in a .net environment that examines existing pdf files in a staging directory prior to sending them over the wire using FTP.
    Part of that process requires that I rename individual files in order to associate them with a particular batch.
    I also have a requirement to reduce the size of individual files as much as possible in order to reduce the traffic going over the line.
    So far I have managed about a 30% compression rate by using an open source library (iTextSharp).
    I  am hoping that I can get a better compression rate using the Acrobat SDK, but I need someone to show me how, hopefully with an example that I can follow.
    The following code snippet is a model I wrote that accomplishes the rename and file compression...
                // Requires: using System.IO; and using iTextSharp.text.pdf;
                const string filePrefix = "19512-";
                string[] fileArray = Directory.GetFiles(@"c:\temp");
                foreach (var pdffile in fileArray) {
                    string[] filePath = pdffile.Split('\\');
                    var newFile = filePath[0] + '\\' + filePath[1] + '\\' + filePrefix + filePath[2];
                    var reader = new PdfReader(pdffile);
                    var stamper = new PdfStamper(reader, new FileStream(newFile, FileMode.Create), PdfWriter.VERSION_1_5);
                    // Recompress the content stream of every page, not just page 1.
                    int pageCount = reader.NumberOfPages;
                    for (int i = 1; i <= pageCount; i++) {
                        reader.SetPageContent(i, reader.GetPageContent(i), PdfStream.BEST_COMPRESSION);
                    }
                    stamper.Writer.CompressionLevel = PdfStream.BEST_COMPRESSION;
                    stamper.FormFlattening = true;
                    stamper.SetFullCompression();
                    stamper.Close();
                    reader.Close();
                }
    Any assistance is appreciated.

    You can't use Adobe Acrobat on a server.

  • Layer Comp To Files, JPG Compression?

    Hi
    So I have set up an action to use Layer Comps to Files to save out multiple JPEGs, and I have noticed the files are a lot larger compared to the Save for Web option. Is there any way I can use Layer Comps to Files in an action, but then use the Save for Web option to get better compression for the final file export?

    Save As JPEG and Save for Web are different animals. Save As assumes print, saves a preview and uses much less compression, resulting in larger files... but you know that. Did you try making an action? It should work just fine, unless I'm missing something. Record your action while you go through the process. Is it the final step that is the issue? I'm not in front of my computer... Like normf said, you could then batch them at the end for smaller JPEGs.
    What version are you using? Do you know that if you name your layers appropriately they will save as web assets? I'm assuming you are using layer comps because your images are made up of multiple layers...

  • Compressing data written to Truecrypt container by rsync

    I want to rsync data into a TrueCrypt container. However, since the data consists of highly compressible files (text), I'd like to compress everything on the fly too, while the files are being written to the TrueCrypt volume. The filesystems are ext4.
    How would I do the "transparent compression" thingy?

    You have two options: compress each file individually, or create a single archive from all the files and compress that. The first variant allows you to access single files but takes more space. The second variant results in better compression, but you need to uncompress everything to access a single file. Using rsync is only possible with the first variant, as rsync deals with individual files.
    http://serverfault.com/questions/154254 … be-fastest
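    As a rough sketch of the first variant (the paths are invented, and the rsync invocation is only illustrative), you could pre-compress each text file into a staging tree and then rsync just the compressed copies into the mounted TrueCrypt volume:

        import gzip
        import shutil
        import subprocess
        from pathlib import Path

        SRC = Path("/home/user/textdata")        # hypothetical source tree
        STAGE = Path("/home/user/textdata_gz")   # staging tree of per-file .gz copies
        DEST = "/mnt/truecrypt/backup/"          # mounted TrueCrypt volume (assumed)

        # Variant 1 from above: compress each file individually so rsync can
        # still work file by file; the trade-off is a worse ratio than one big
        # archive. (Naive sketch: it recompresses the whole tree on every run.)
        for src_file in SRC.rglob("*"):
            if src_file.is_file():
                out = STAGE / (str(src_file.relative_to(SRC)) + ".gz")
                out.parent.mkdir(parents=True, exist_ok=True)
                with open(src_file, "rb") as fin, gzip.open(out, "wb") as fout:
                    shutil.copyfileobj(fin, fout)

        # Sync only the compressed copies into the container.
        subprocess.run(["rsync", "-a", "--delete", str(STAGE) + "/", DEST], check=True)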

  • Essbase cube/pag file size reduction-bitmap compression

    We are seeing some huge .pag files in our Essbase cube, and Oracle suggested changing to bitmap compression to see if that will help (currently we have 'no compression').
    That said, what is the difference between using RLE versus bitmap encoding for BSO cubes?
    I would like to hear others' experiences.

    (currently, we have 'no compression')
    ^^^ You are going to be very happy. Very, very happy.
    You can read the Database Administrator's Guide -- just search for "compression".
    Here it is: http://download.oracle.com/docs/cd/E17236_01/epm.1112/esb_dbag/dstalloc.html
    There are a bunch of caveats and rules regarding the "better" compression method. Having said that, I almost always use RLE, as it seems the most efficient with regard to disk space. This makes sense, as this method examines each block and applies the most efficient encoding: RLE, bitmap, or index-value pair. I can't see a calculation-speed difference between bitmap and RLE.
    How on earth did you get into the position of no compression in the first place?
    Regards,
    Cameron Lackpour
    Whoops, a couple of things:
    1) After you apply the change in compression in EAS or via MaxL, you must stop and then restart the database for the setting to take effect.
    2) You must export the database, clear it out, and reload it using the new compression method -- the change in #1 only affects new blocks, so you could end up with a mix of compressed and uncompressed blocks in your db, which won't do anything for storage space.
    3) The only way to really and truly know which method is more efficient is to benchmark both the calculation (think writing to disk, which could be a bit slower than it is now) and the compression. Try it as bitmap and write down sample calc times and IND and PAG file sizes, then try it as RLE and do the same.

  • What is the best compression for youtube videos ?

    I know I can go to Export Movie and then choose Medium for YouTube, but is there better compression, because when I upload it to YouTube the quality is much lower? Should I export with QuickTime, or something else?
    Thanks for answering!!

    What is your source material? If it is DV, then sending to YouTube in the Medium Preset is fine.
    If your source is AVCHD, you could import at 1920x1080. Then after editing, share using QuickTime at 1280x720 in h.264. Then upload to YouTube through the website.

  • Compression or depression - how to reduce further

    I am completing a project which ultimately needs to be in AVI/Windows format and presented on CD media (I know, the peasants!...but what can you do!).
    Using iMovie I have created a movie which is about 20 MB in QuickTime format. Using RAD Video on a PC I then converted it to AVI format, which blows the size out to 790 MB, which will not fit onto a CD.
    Hence from compression to depression.
    Can anyone give me an easy solution to my problem? Hopefully I have missed something very simple in iMovie which will do the trick for me.
    The end product needs to be <700 MB.
    Justin
    emac G4   Mac OS X (10.4.2)  

    Justin!
    When you export, a pop-up window appears asking what name and location you want for the file, right?
    Choose expert settings at the bottom.
    At the bottom right there should be a SETTINGS button. Click it and you'll see another pop-up window with all the necessary setting fields.
    I don't have a Mac in front of me right now, so I'm just trying to picture the QT app in my head; I'm working on a PC at the moment.
    • Codec: I prefer to use h.264
    o It's the newest of the new; each codec has its strengths and weaknesses.
    o Sorenson 3 is also great, but the version included in QT is lame. If you get a full version of Sorenson Squeeze, you can get a much better outcome.
    • Frame rate: anything over 25 fps is overkill (in my selfish opinion)
    o 15 or less should be good for slow-moving pictures.
    o Faster for action or graphics animations
    • Size:
    o That’s a must!!!
    o 720x480 from raw DV material is overwhelming for your Mac and will be huge as well as jerky
    o 640x480 is a standard for any projector or a TV.
    o Speaking of which: DEINTERLACE. That means removing either the even or the odd lines from each frame. It drastically reduces the movie size and therefore improves quality, should you wish to keep the same bitrate. There are some commercial plug-ins for Final Cut for deinterlacing, but all you have to do is export it from iMovie as full DV, open it in QT Pro and export with whatever codec, remembering to reduce the pixel size to exactly half (360x240); QT will automatically cut out every other line, so you deinterlace for free. If this resolution is enough for you, keep it and choose a better compression ratio.
    • Compression:
    o Many people make the mistake here of choosing a big frame size and low compression quality instead of a smaller frame size with the best possible compression quality. Believe me, the results will show...
    o You can also define a maximum bitrate for your movie. That choice was on the right side of the window, as far as I can remember.
    • Audio:
    o Take a minute of the movie as a sample and export it with different bitrates. Codec? I don't know; I use AAC for its quality and size, but again, some players may have a hard time recognizing it. MP3 might be your choice. Variable bitrate is good if you care about quality; constant bitrate is good if you care about the MB size of the project.
    • Keyframes: every 5 seconds is usually good for me, unless you want to do a still frame in slo-mo.
    Choose 1 minute of your movie (some typical sample), try exporting only that piece under different settings, and compare the results. Then just multiply that one-minute file size by the movie length in minutes and you get a ballpark size for the final project. Remember: the size of a vividly colored action movie is way higher than scrolling subtitles over black.
    As you will realize while playing with this, the biggest influence on the final video quality and MB size will come from these factors:
    • Frame rate
    • Chosen Codec
    • Compression ratio (quality poor…..best)
    Any Q? Drop a line. [email protected]
