Compressing file size from within a portfolio?

I've got 16 files within a PDF portfolio ranging from 80MB to 227MB, and the portfolio itself is 1.41GB. I'm trying to compress them, but when I view the file sizes in Bridge as I compress each file, I don't see any change in the individual file sizes or in the portfolio size. Are the original files actually being compressed, or are they stored separately within the portfolio file? And any idea why neither the portfolio size nor the individual file sizes change after PDF optimization?

Thanks, Leo.
To my mind, this looks about right: http://www.adobe.com/products/livecycle/pdfgenerator/
Do you know whether it would be suitable for integration into .NET code?
Can it cover our needs (concatenation, compression, bookmark editing)?

Similar Messages

  • I need to save some pictures from iPhoto to a SDHC photo card for a digital frame and was wondering if there is a way to shrink or compress file size for each picture as I have already cropped the pictures? Thank you

    Yes, you do this when you export the images from the Library.
    File -> Export
    In the Size section, you don't need images larger than the frame's resolution, so you can specify that, and in the JPEG Quality setting you can select the amount of compression used.
    This User Tip
    https://discussions.apple.com/docs/DOC-4921
    has details of the options in the Export dialogue.

  • How do I access BSD file tree from within Linux?

    I just installed PC-BSD 8.0 RC on an external USB HDD.  I went with the PC-BSD default partition layout, i.e. one primary partition for the entire PC-BSD slice, and 4 "partitions" within that slice, for /, swap, /usr and /var.
    In order to access the PC-BSD file tree from within Linux, I mount it as follows:
    mount -t ufs -o ufstype=ufs2,ro /dev/sdb1 /mnt/bsd
    But after executing this command I can only access the PC-BSD /, not /usr and /var, presumably because they reside on two BSD "partitions" (sub-slices) different from /. Is there any way I can mount these (read-only) in Linux too, or would I have to reinstall PC-BSD and opt for laying out the slice differently, i.e. specifying only / and swap in sdb1?

    Look at /dev/sdb*; there should be more than just sdb1. You would need to mount each one to see which is which. For UFS write support, the default kernel needs to be recompiled with the experimental option enabled.
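    A minimal sketch of what that might look like (the sdb5/sdb6 device names below are assumptions; check which partition nodes the kernel actually created for the BSD disklabel on your system):
    # list the partition nodes Linux created for the BSD disklabel
    ls -l /dev/sdb*
    # mount each sub-partition read-only and inspect it to see which is /usr and which is /var
    mkdir -p /mnt/bsd-usr /mnt/bsd-var
    mount -t ufs -o ufstype=ufs2,ro /dev/sdb5 /mnt/bsd-usr   # assumed device name
    mount -t ufs -o ufstype=ufs2,ro /dev/sdb6 /mnt/bsd-var   # assumed device name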

  • MP4 & compression & file size, etc.

    I know very little about video exporting and compression other than the most basic steps. My project is creating a long list of 4-10 minute tutorial videos for internet streaming. I have produced one video so far, and at 5 minutes, 1280x800, H264 mp4, it's 65mb. That's way too big.
    After many hours of research and numerous trial exports, I'm not gaining any ground. I just don't have time to spend days/weeks learning this in depth.
    I used a screen capture video program and recorded at the full screen resolution, 1280x800. The initial streaming presentation would be a smaller resolution, but should the viewer choose to go full screen, I would like it to be crisp and clear, so my goal is to maintain 1280x800, or close to it.
    Lowering the quality seems to defeat the purpose as the quality is substantially degraded. My question is, what are my reasonable expectations in terms of file size for a 5 minute mp4 video at 1280x800? Is 65mb about right? I have no idea but if I have 30 videos, and 5 minutes is one of the shorter ones, that could be many many gigabytes. Is there something I am missing how to get smaller file sizes?
    H264 compression was recommended, and tried out several resolutions, having to calculate different multiples of 1280x800 - it all seems very manual and tedious. I just don't know how to proceed since the first video was just so big... Any guidance is greatly appreciated! Thank you.

    I know very little about video exporting and compression other than the most basic steps. My project is creating a long list of 4-10 minute tutorial videos for internet streaming. I have produced one video so far, and at 5 minutes, 1280x800, H264 mp4, it's 65mb. That's way too big.
    Actually, 65 MB is quite a reasonable file size for a 5-minute, 1280x800, H.264/AAC MP4 file. (File Size = Total Duration x Total Data Rate, so your data rate is only on the order of 1.7 Mbps, which was originally the limit for 640x480 5th-generation iPod files.) In order to make the files smaller you will have to settle for reduced video quality and/or a smaller display size. (You could, for instance, create a 1024x800 anamorphic encode that displays as a 1280x800 file, but this would only reduce the file size by a small amount, whereas a 640x400 non-anamorphic file could cut the file size significantly while retaining similar quality at the smaller display size.) In short, you need to re-evaluate your streaming/fast-start requirements. (I typically use 2 to 4 times your data rate for what I consider "good quality" 720p24 encodes for viewing on HD-capable devices.)
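    To make that arithmetic concrete, here is a rough back-of-the-envelope sketch in Python (the figures are taken from the posts above; treating "65 MB" as decimal megabytes is an assumption):
    # file size vs. data rate for the 5-minute clip described above
    duration_s = 5 * 60                      # 5 minutes
    file_size_bytes = 65 * 1000 * 1000       # ~65 MB, decimal (assumed)
    data_rate_mbps = file_size_bytes * 8 / duration_s / 1e6
    print("average data rate: %.2f Mbit/s" % data_rate_mbps)   # ~1.7 Mbit/s
    # going the other way: a target data rate implies a file size
    target_mbps = 1.7
    est_size_mb = target_mbps * 1e6 / 8 * duration_s / 1e6
    print("estimated size at %.1f Mbit/s: %.0f MB" % (target_mbps, est_size_mb))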
    After many hours of research and numerous trial exports, I'm not gaining any ground. I just don't have time to spend days/weeks learning this in depth.
    It is unlikely you will be able to further reduce your file size without a loss in video quality and/or a reduction in the file's display dimensions.
    I used a screen capture video program and recorded at the full screen resolution, 1280x800. The initial streaming presentation would be a smaller resolution, but should the viewer choose to go full screen, I would like it to be crisp and clear, so my goal is to maintain 1280x800, or close to it.
    The only way to reduce file size is to either "throw away" data or use a more efficient, higher compression codec. H.264 is about the most scalable, highly efficient, highest compression capable codec you can use which is why it is used for everything from FaceTime to BD/AVCHD encodings.
    Lowering the quality seems to defeat the purpose as the quality is substantially degraded. My question is, what are my reasonable expectations in terms of file size for a 5 minute mp4 video at 1280x800? Is 65mb about right? I have no idea but if I have 30 videos, and 5 minutes is one of the shorter ones, that could be many many gigabytes. Is there something I am missing how to get smaller file sizes?
    "Reasonable epectations" are realative—and may be quite different for each person. Basically, the expectation is reasonable if the file delivers the quality you want at the display dimensions you want in a file size with which you can live. If not, then you have to re-evaluate your "expectations." Further, encoding is driven by the content itself with every file being different and should be treated as such depending on the graphic complexity of the content, the number/type of vector motions involved, ratio of light:dark scenes, display dimensions, etc. There is no "one shoe fits all" here and what is "reasonable" for one file may not be "reasonable" for another. What you are actually missing here is an overall goal strategy.
    H264 compression was recommended, and tried out several resolutions, having to calculate different multiples of 1280x800 - it all seems very manual and tedious. I just don't know how to proceed since the first video was just so big... Any guidance is greatly appreciated!
    You say you are creating tutorial files for internet streaming. If this is your goal, how do you plan to deliver these files in terms of internet connection speeds? The target speed for internet delivery will determine the data-rate limits within which you must work. Once you determine the playback data-rate limits for your target users, you will know the data rate at which to encode your files. In turn, this data rate will determine the combination of display dimensions and level of video quality you will have to accept. At this point the size of individual files is of lesser importance, because once the file begins to stream or play in "fast start" mode, it can continue to play as the data continues to stream or download to the end user's local platform/media player. In fact, at this point you can decide whether you want to create multiple file versions for users with different internet connection speeds, or just a single file. Frankly, until you are able to answer such questions, there is very little advice anyone can give you.

  • Significant reduction in file size from Camera Raw to DNG

    Hi,
    I am currently testing the conversion of Leaf camera raw files into DNGs for a photographer's archive. I am hoping to convert all of the .mos files to DNGs because Leaf Capture and the Leaf Raw Converter are no longer being updated and because the photographer wants an Adobe-centered workflow. In my testing I discovered that converting .mos files to DNGs through ACR 8.4 and Lightroom 5.4 resulted in a reduction of file size by nearly 50%: a 44.5MB .mos file became a 23.6MB DNG. From what I've read, only about 15-20% of the camera raw file should be lost, and all of the data lost should be proprietary.
    Herein lies my question: is there any way I can track or determine exactly what sort of compression is being done to the .mos file and what information is or is not traveling in the conversion to DNG?
    These are the settings I have used for converting raw files to DNGs:
    ACR:
    JPEG Preview: Medium Size
    Embed fast load data
    Don't use lossy compression
    Preserve pixel counts
    Don't embed original
    LIGHTROOM 5.4:
    Only Convert Raw files
    Delete originals after successful conversion
    File Extension DNG
    Compatibility Camera Raw 7.1 and later
    Jpeg Preview Medium Size
    Embed Fast Load Data
    Thanks!

    50%? - I thought we were talking about 15-20%?
    In my first post I questioned why I was seeing a reduction in file size of 50% when, according to the forums and articles I've read, I should only be seeing a 15-20% reduction. I then wondered what data I might be losing, which you addressed.
    Same as what? - what were the results.
    I was referring to testing I performed on camera raw files produced in different years (all .mos). I converted all of the files with the same ACR and LR settings and found that the DNGs always reflected a 50% reduction in file size. This test suggests that any conversion issue is not necessarily related to how the camera raw files might have been built differently across years.
    Adobe's raw data compression is touted by DNG zealots, but I haven't scrutinized it enough to corroborate or refute that; my experience is that the reduction is relatively marginal. All of this assumes the original is also compressed; if the original source is uncompressed, the savings would be large.
    The files I am dealing with are definitely uncompressed, which could account for the large reduction in file size. I didn't realize until I posted to this thread that converting to a DNG compresses the original image data. I understand that this compression is supposed to be lossless, like a lossless compression to a TIFF, and thus result in no decrease in image quality or harm to the original image. I am baffled that any compression of a file (especially by 50%) could avoid losing important data, but I will accept that truly lossless compression is possible and that the size reduction I am seeing could be the result of all of the different processes a file undergoes that you have outlined.
    I looked into the effects that backwards compatibility has on the conversion process which might interest you http://dpbestflow.org/DNG#backwards-compatibility
    I also posted to luminous landscape's forums http://www.luminous-landscape.com/forum/index.php?topic=89101.new;topicseen#new
    It wouldn't surprise me if the DNG conversion process tossed the XMP-like metadata and kept the original data, but it would surprise me if it tossed the original data. As I said before, though, I haven't scrutinized it for completeness, so I don't know.
    I've done testing in which I converted .mos camera raw files with and without their sidecar XMPs. My tests revealed that the DNG definitely carries over XMP metadata, although it is not clear to me exactly how it is carried and whether anything is lost.
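    One practical way to check what metadata survives the conversion is to dump the tags from the original and the DNG and compare them. A sketch, assuming ExifTool is installed (the file names are placeholders):
    exiftool -a -G1 original.mos > original_tags.txt
    exiftool -a -G1 converted.dng > converted_tags.txt
    diff original_tags.txt converted_tags.txt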

  • Create a text file output from within a procedure

    I can output to a file from within SQL*Plus using the spool command, but how do I do this from within a procedure?
    I have a table called ABC and want to output columns A and B to a new text file based on variables passed in when the procedure is run; the name of the text file should be generated from a sequence.
    Any info appreciated.
    Cheers
    Cliff

    Hi,
    You can use the UTL_FILE package, but the only constraint is that it will write the file only on the server machine, not on the client machine.
    Regards
    Gaurav
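    A minimal sketch of what that could look like (table ABC and columns A and B come from the question; the sequence name, the OUT_DIR directory object and the filter parameter are assumptions, and the directory must point to a path on the database server):
    CREATE OR REPLACE PROCEDURE dump_abc (p_filter IN abc.a%TYPE) IS
      l_file  UTL_FILE.FILE_TYPE;
      l_name  VARCHAR2(100);
    BEGIN
      -- build the file name from a sequence (file_name_seq is an assumed sequence)
      SELECT 'abc_' || file_name_seq.NEXTVAL || '.txt' INTO l_name FROM dual;
      -- OUT_DIR is an assumed Oracle DIRECTORY object on the server
      l_file := UTL_FILE.FOPEN('OUT_DIR', l_name, 'w');
      FOR r IN (SELECT a, b FROM abc WHERE a = p_filter) LOOP
        UTL_FILE.PUT_LINE(l_file, r.a || ',' || r.b);
      END LOOP;
      UTL_FILE.FCLOSE(l_file);
    END dump_abc;
    /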

  • Shrink file (log) from within a procedure

    I'd like to incorporate the DBCC SHRINKFILE command into my maintenance procedure. This procedure gets called after I've finished my weekly import process. I only need to shrink the log files, as almost all the modifications are either record updates or inserts (there are very few deletions). I need to do this across several databases and, for software maintainability, would prefer to have only the one procedure.
    My issue is that there does not seem to be a way to point to the various databases from within a procedure to perform this operation. Also, the maintenance plan modules have a shrink-database operation, but I don't see a shrink-file operation, so that doesn't appear to be an option.
    Have I overlooked something, or is it not possible to perform a shrink-file operation on the transaction log files for multiple databases?
    Developer Frog Haven Enterprises

    Thank you for your response. While I did not use your answer verbatim, it did lead me to my solution, as I only need to perform the shrink operation on 4 of the 7 databases in my SQL instance.
    FYI my final solution was...
    -- shrink the log files
    DECLARE @sql nvarchar(500);

    SET @sql = 'USE [vp]; DBCC SHRINKFILE (2, 100);';
    EXEC (@sql);

    SET @sql = 'USE [vp_arrow]; DBCC SHRINKFILE (2, 100);';
    EXEC (@sql);
    Developer Frog Haven Enterprises
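    If the list of databases grows, the same idea can be driven from a loop instead of repeated SET/EXEC pairs. A sketch (the database names, file id 2 and target size 100 are carried over from the snippet above; 'vp_other' is an assumed name):
    DECLARE @db sysname, @sql nvarchar(500);
    DECLARE db_cursor CURSOR FOR
        SELECT name FROM sys.databases WHERE name IN ('vp', 'vp_arrow', 'vp_other');
    OPEN db_cursor;
    FETCH NEXT FROM db_cursor INTO @db;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        SET @sql = 'USE ' + QUOTENAME(@db) + '; DBCC SHRINKFILE (2, 100);';
        EXEC (@sql);
        FETCH NEXT FROM db_cursor INTO @db;
    END
    CLOSE db_cursor;
    DEALLOCATE db_cursor;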

  • Compressing File Size

    I am building a video and have it completed, but it is now 4.93GB and I am using a 4.7GB disc. Is there a way to compress it down to fit on the disc, or to change some settings so the total size goes down enough to fit?

    John,
    If your Exported MPEG-2 (DVD-compliant) was > 4.7GB, you could have altered the bit-rate of the Export to end up with a smaller file.
    Doing the Export and letting Encore do the Transcode on Automatic will accomplish the same thing: it will take your Duration and the disc capacity and drop the bit-rate to match. Quality will suffer a bit, but as close as you are, and depending on the bit-rate you chose for your initial Export to MPEG-2, you will likely never notice it.
    Good luck,
    Hunt
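    For a rough feel for the bit-rate math involved, here is a small sketch (the duration and the audio allowance are assumed values, not taken from the post):
    # target video bit rate so that video + audio fit a single-layer DVD
    disc_capacity_gb = 4.7      # single-layer DVD, decimal gigabytes
    duration_min = 120          # assumed programme length
    audio_mbps = 0.256          # assumed audio bit rate (e.g. Dolby Digital)
    capacity_bits = disc_capacity_gb * 1e9 * 8
    total_mbps = capacity_bits / (duration_min * 60) / 1e6
    video_mbps = total_mbps - audio_mbps
    print("max average video bit rate: %.2f Mbit/s" % video_mbps)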

  • Log Configurator - Increase log file size from 10 mb to 20 mb

    Hi All,
    We have implemented custom logging in our implementation using custom Log Destinations and Locations.
    The log destination (inside the Log Configurator service) we were using earlier had a size of 10 MB per file and a file count of 5.
    Now, as the log files were getting archived very quickly, we changed the log file size to 20 MB, keeping the file count at 5.
    After restarting the server, the log viewer in NWA and within Visual Admin does not show the updated logs. We have monitored this for some time now, so new logs have been written to the log files, but the situation is still the same.
    Strangely, the logs are getting updated at the OS level in the log files, but the entries are not shown in the Log Viewer in NWA or Visual Admin.
    Are there any restrictions on the log file size, or does any other parameter need to be changed to make this work?
    Looking forward to your inputs and suggestions.
    Regards,
    Prasanna

  • File sizes from MPEG to QuickTime to iDVD ?

    I have about 8GB of MPEG files downloaded from a video camera. I needed to install ClipStream and buy the MPEG-2 add-on from the Apple Store to be able to open these files. I picked a long file, a 1.8GB file (39 minutes long; I suspect the camera may be HD, but it's not mine), and when I converted it to QuickTime the file grew to 6.55GB. Obviously, that won't fit on a single-layer DVD. When I use iDVD, or iMovie and then iDVD, to create the final file, does it convert this to another, much smaller file type? What file type does iDVD save as? I have access to Final Cut Express at work. Would that be a better program for this project?

    Earlier versions of iMovie and iDVD were built around DV Stream (.dv) as their native video format, and so anything needed to be converted to that before editing or encoding. At roughly 13 gigs per hour of .dv footage, your 39 min mpeg file was uncompressed to about the right size.
    Different editors can handle different video formats either natively or with conversion to another video codec. You could check and see if FCE can handle mpeg files without uncompressing the whole file, but unless you have DVD Studio Pro or Roxio Toast, you'll need to uncompress the video file so that iDVD can handle it.
    All standard DVDs have their video encoded in a particular format, and their encoded files arranged in a particular way so that a DVD player doesn't have to be too smart for your DVD to play properly. The DVD standard is video formatted as an mpeg-2 file.
    If you don't need to do any editing, you may want to consider Roxio Toast, or other software that can handle an mpeg-2 file without uncompressing and recompressing.
    John

  • AI Compression & File Size

    Hi
    I desperately need to reduce the file size of my AI files. They generally have multiple layers, avatars, editable regions, etc., and exist as 2-4MB files on disk. What steps can I take to reduce the file size, as I need to transmit them across the internet?
    I've tried FXG, but I lose brush strokes, which are important in my AIs. Are there any suggestions about redundant data or particular file types that would reduce the file size enough for web transmission?
    Any suggestions at all will assist.
    Thanks

    Is the WinZip/StuffIt final step meant to protect the .ai file against interception and eavesdropping, or simply for further compression?
    Zip or StuffIt compression may not yield much file size reduction with .ai files. It is, nevertheless, a recommended practice.
    Unless you encrypt with a password, compression archives do not necessarily offer any protection against "eavesdropping." However, they do offer very good protection against possible data corruption during transmission. Certain file types, including .ai, seem more prone to corruption than others -- especially when transmitted as e-mail attachments -- and compression archives are an excellent way to prevent those kinds of problems.
    The only possible drawback is that some corporate mail servers routinely filter out .zip file attachments because they've been used to spread malware. However, I'm seeing less and less of that lately (the filtering, not the malware, unfortunately).

  • Finding hard and soft open file limits from within jvm in linux

    Hi All,
    I have a problem where I need to find out the hard and soft open-file limits for the process in Linux from within a Java program. When I execute ulimit from the terminal it gives separate values for the hard and soft open-file limits.
    From shell if I run the command then the output is given below:
    $ ulimit -n
    1024
    $ ulimit -Hn
    4096
    The java program is given below:
    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStream;
    import java.io.InputStreamReader;
    import java.io.Reader;
    import java.io.StringWriter;
    import java.io.Writer;
    public class LinuxInteractor {
        public static int executeCommand(String command, boolean waitForResponse, OutputHandler handler) {
            int shellExitStatus = -1;
            ProcessBuilder pb = new ProcessBuilder("bash", "-c", command);
            pb.redirectErrorStream(true);
            try {
                Process shell = pb.start();
                if (waitForResponse) {
                    // To capture output from the shell
                    InputStream shellIn = shell.getInputStream();
                    // Wait for the shell to finish and get the return code
                    shellExitStatus = shell.waitFor();
                    convertStreamToStr(shellIn, handler);
                    shellIn.close();
                }
            } catch (IOException e) {
                System.out.println("Error occurred while executing Linux command. Error Description: "
                        + e.getMessage());
            } catch (InterruptedException e) {
                System.out.println("Error occurred while executing Linux command. Error Description: "
                        + e.getMessage());
            }
            return shellExitStatus;
        }

        public static String convertStreamToStr(InputStream is, OutputHandler handler) throws IOException {
            if (is != null) {
                Writer writer = new StringWriter();
                char[] buffer = new char[1024];
                try {
                    Reader reader = new BufferedReader(new InputStreamReader(is, "UTF-8"));
                    int n;
                    while ((n = reader.read(buffer)) != -1) {
                        String output = new String(buffer, 0, n);
                        writer.write(buffer, 0, n);
                        if (handler != null)
                            handler.execute(output);
                    }
                } finally {
                    is.close();
                }
                return writer.toString();
            } else {
                return "";
            }
        }

        public abstract static class OutputHandler {
            public abstract void execute(String str);
        }

        public static void main(String[] args) {
            OutputHandler handler = new OutputHandler() {
                @Override
                public void execute(String str) {
                    System.out.println(str);
                }
            };
            System.out.print("ulimit -n : ");
            LinuxInteractor.executeCommand("ulimit -n", true, handler);
            System.out.print("ulimit -Hn : ");
            LinuxInteractor.executeCommand("ulimit -Hn", true, handler);
        }
    }
    If I run this program the output is given below:
    $ java LinuxInteractor
    ulimit -n : 4096
    ulimit -Hn : 4096
    I have used Ubuntu 12.04, Groovy 1.8.4, and JVM 1.6.0_29 for this execution.
    Please help me understand this behavior and how to get the correct result from within the Java program.

    Moderator Action:
    As mentioned in one of the earlier responses:
    @OP this is not a Java question. I suggest you take it elsewhere, i.e. to a Unix or Linux forum. You posted this to a Java programming forum.
    It is not a Java programming inquiry.
    This off-topic thread is locked.
    Additionally, you have answered your own question.
    Don't bother posting it to one of the OS forums. It will get deleted as a duplicate cross-post.

  • Batch Process increases file size from 230kb - 5044kb

    I'm currently using Acrobat 7 Professional.
    I want to use the "Batch" process to re-convert pdf documents that were watermarked using the batch process. We're talking 1000+ documents.
    The reason we need to re-convert them is because the watermarked documents are taking more than 4min/page to print, if it prints at all.
    I've manually re-converted a few of the documents using "print to pdf", kept the file size at 230kb and with no printing issues.
    However, since we have 1000s of these documents to do, most with multiple pages, this would be a very time consuming project.
    I've "Batched" "print to pdf" and it also takes care of our printing issues, however, the file size has jumped to 5044kb. Having literally thousands of files to do - this is an issue on our file server.
    I've done multiple tests with adjusting the pdf settings in the batch process... some work (and again the file size is large) - some don't allow for printing or slow it way down again.
    I understand that it's a "layer" or "transparency" thing that's slowing the printing process and when I mess with it in the batch process, it takes care of the printing issues but not the file size.
    Can someone help me please??? I've been working on this for a couple days now and I'm at my limits.
    How can I set up a batch process that will flatten the layers but still give me a manageable file size?
    Your help would be GREATLY appreciated - Michelle

    I don't know if it can help here, but don't I recall that wayyyy back in Photoshop 7.0 the last File - Save As you have executed sets the JPEG quality for batch File - Save As commands?  I am not sure when the "Override Action Save-As Commands" became functional, but try this:
    1.  Uncheck "Override Action Save-As Commands".
    2.  Remove the "File - Save As" step you have recorded in your action.  The action should NOT save the file.
    3.  Save any old JPEG file at the quality level you prefer, just to set the "memory" of the quality level.
    4.  Run the File - Automate - Batch, specifying input and output folders there.
    I'll bet it will work.
    -Noel

  • How to compress pdf file size from 16mb to 4mb

    Can you please tell me how to compress a 16MB PDF for email down to 4MB?

    http://pdf-compressor.en.softonic.com/
    However, PDF compression in general, depending on the format (PDF/A, PDF/X, etc.), may not work, or may not work well.

  • File size from save to spreadsheet

    This may be more of a fundamental programming/computer-science question (I'm a mechanical engineer), but I want to know the general correlation between the number of data points written by the Save to Spreadsheet subVI and the size of the resulting text file, assuming each data point has about 4-5 digits. The reason is that I am creating a program that saves data as it is collected (via producer/consumer), but if someone accidentally leaves it running and walks away, bad things can happen. I want to be able to disable the save functionality after a certain number of points have been saved, to prevent a massive text file from being created.
    Thanks
    Tim

    If each number is 4-5 digits, then figure about 4-5 bytes per number, plus 1 for a delimiter and perhaps 1 for a decimal point. To be conservative, round up to 10 bytes per value.
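    As a rough worked example of that estimate (a sketch; the channel count, sample rate and size cap are assumed values, not taken from the posts above):
    # estimate text-file growth at ~10 bytes per logged value
    bytes_per_value = 10        # conservative figure from the reply above
    channels = 4                # assumed number of columns per sample
    sample_rate_hz = 100        # assumed acquisition rate
    max_file_mb = 50            # assumed cap before saving is disabled
    bytes_per_second = bytes_per_value * channels * sample_rate_hz
    seconds_to_cap = max_file_mb * 1e6 / bytes_per_second
    print("~%d bytes/s, cap reached after ~%.0f minutes" % (bytes_per_second, seconds_to_cap / 60))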
