Photoshop Metadata File Sizes

I am using the metadata "Description" panel to add metadata to web images.
The basics:
title
description
keywords
copyright
That's about it for my application so far, and like I say, it is just for web images at this point.
At any rate, to my surprise, I did not realize how much the metadata added to file size.
I had assumed it was minimal, but in fact that is not the case. Metadata ends up adding a lot.
Question:
So, is there a way in Photoshop to see what the final file size will be with the metadata included? All I have found so far is the image's own file size when I view the
"Save for Web & Devices" screen.
I also tried the "Info" panel, and neither indicates the metadata additions to an image's file size.
One of the reasons, of course, is that I would like to get as many keywords in there as possible. As an example, let's say this was the logo or the site's header and they have a lot of different products.
The more keywords, the better the SEO results will be. I am sure this statement in itself is disputable, but as of now (5/2013) it appears that way as far as search engines are concerned.
I tested this extensively and the results were remarkable, but that is another story.
Any help or links to other information will be
greatly appreciated.
Thank you,
Jonny

Mylenium wrote:
The more keywords the better the SEO results will be.
I knew that was a big mistake, saying it that way.
I was focused on the file size issue.
I'm trying to use more efficient, effective keywords for an image that represents several products.
As a simple example let's say the site focused on pencils, pens, and paper clips.
So just using one of those keywords or even just two of them would be misleading to the site visitor.
So in my opinion using all three would be most effective.
I don't intend to use red pens, black pens, purple pens, metal paper clips, plastic paper clips, colored paper clips, etc.
But that is not my point anyway. What a pain in the *** it will be to try this, save it, try that, save it, and so on.
Thank you very much for taking the time to respond, and know that I greatly appreciate your advice.
You confirmed my suspicions.
I did find a PS CS5 plug-in that would supposedly make this task easier, but I am very suspicious of it and won't waste my time.
I took a copy of the default "Raw Data" from the "File Info" menu and pasted it into BBEdit (XML or whatever), and the file size increased by about 3.5 KB with just the default "Raw Data". Whoa.
So with experimentation, as you suggest, I may get an idea of the size, more or less.
I was hoping I was making it harder than it was, but Adobe has not responded either.

Similar Messages

  • Impact of data file size on DB performance

    Hi,
    I have a general query regarding size of data files.
    Considering DB performance, which of the below 2 options is better?
    1. Bigger data file size but less number of files (ex. 2 files with size 8G each)
    2. Smaller data file size but more number of files (ex. 8 files with size 2G each)
    I am working on a DB where I have noticed very high I/O.
    I understand there might be many reasons for this.
    However, I am checking for the possibility of improving DB performance through optimizing data file sizes (including TEMP/UNDO tablespaces).
    Kindly share your experiences with determining optimal file size.
    Please let me know in case you need any DB statistics.
    Few details are as follows:
    OS: Solaris 10
    Oracle: 10gR2
    DB Size: 80G (Approx)
    Data Files: UserData - 6 (15G each), UNDO - 2 (8G each), TEMP - 2 (4G each)
    Thanks,
    Ullhas

    Ullhas wrote:
    I have a general query regarding size of data files.
    Considering DB performance, which of the below 2 options is better?
    Size or number really does not matter, assuming other variables are constant. More files result in more open file handles, but for a DB of your size, it matters not.
    I am working on a DB where I have noticed very high I/O.
    I understand there might be many reasons for this.
    However, I am checking for the possibility of improving DB performance through optimizing data file sizes (including TEMP/UNDO tablespaces).
    Remember this when tuning I/O: the fastest I/O is the one that never takes place! High I/O may very well be a symptom of unnecessary FTS or poor execution plans. Validate this first before tuning I/O, and you will be much better off.
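    For example, something along these lines (just a sketch against v$sql; adjust columns and filters to taste) will show whether a handful of statements are driving most of the physical reads:
    -- Top 10 statements by physical reads: the usual place to look before
    -- touching datafile layout or sizes.
    SELECT *
      FROM (SELECT sql_id,
                   executions,
                   disk_reads,
                   ROUND(disk_reads / NULLIF(executions, 0)) AS reads_per_exec,
                   SUBSTR(sql_text, 1, 60)                   AS sql_text
              FROM v$sql
             ORDER BY disk_reads DESC)
     WHERE ROWNUM <= 10;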
    Regards,
    Greg Rahn
    http://structureddata.org

  • Negative data file size

    RDBMS: oracle 10g R2
    When I execute a statement to determine the size of the data files, the data file DATA8B.ORA shows a negative size. Why?
    Before this, I dropped a table with 114,000,000 rows in this tablespace.
    FILE_NAME                                            FILE_SIZE       USED   PCT_USED       FREE
    G:\DIL\DATA5D.ORA                                         4096    3840.06      93.75     255.94
    Total tablespace DATA5--------------------------->      16384   14728.24       89.9    1655.76
    I:\DIL\DATA6A.ORA                                         4096    3520.06      85.94     575.94
    I:\DIL\DATA6B.ORA                                         4096    3456.06      84.38     639.94
    I:\DIL\DATA6C.ORA                                         4096    3520.06      85.94     575.94
    I:\DIL\DATA6D.ORA                                         4096    3520.06      85.94     575.94
    Total tablespace DATA6--------------------------->      16384   14016.24      85.53    2367.76
    G:\DIL\DATA7A.ORA                                         4096    3664.06      89.45     431.94
    G:\DIL\DATA7B.ORA                                         4096    3720.06      90.82     375.94
    G:\DIL\DATA7C.ORA                                         4096    3656.06      89.26     439.94
    G:\DIL\DATA7D.ORA                                         4096    3728.06      91.02     367.94
    G:\DIL\DATA7E.ORA                                         4096    3728.06      91.02     367.94
    Total tablespace DATA7--------------------------->      20480    18496.3       90.3     1983.7
    G:\DIL\DATA8A.ORA                                         3500    2880.06      82.29     619.94
    G:\DIL\DATA8B.ORA                                         4000   -2879.69     -71.99    6879.69
    Total tablespace DATA8--------------------------->       7500       0.37       5.14    7499.63

    the query is:
    select substr(decode(grouping(b.file_name),
                         1, decode(grouping(b.tablespace_name),
                                   1, rpad('TOTAL:', 48, '=') || '>>',
                                   rpad('Total tablespace ' || b.tablespace_name, 49, '-') || '>'),
                         b.file_name),
                  1, 50) file_name,
           sum(round(kbytes_alloc / 1024, 2)) file_size,
           sum(round((kbytes_alloc - nvl(kbytes_free, 0)) / 1024, 2)) used,
           decode(grouping(b.file_name),
                  1, decode(grouping(b.tablespace_name),
                            1, sum(round(((kbytes_alloc - nvl(kbytes_free, 0)) / kbytes_alloc) * 100 / b.nbtbs, 2)),
                            sum(round(((kbytes_alloc - nvl(kbytes_free, 0)) / kbytes_alloc) * 100 / b.nbtbsfile, 2))),
                  sum(round(((kbytes_alloc - nvl(kbytes_free, 0)) / kbytes_alloc) * 100, 2))) pct_used,
           sum(round(nvl(kbytes_free, 0) / 1024, 2)) free
      from (select sum(bytes) / 1024 kbytes_free,
                   max(bytes) / 1024 largest,
                   tablespace_name,
                   file_id
              from sys.dba_free_space
             group by tablespace_name, file_id) a,
           (select sum(bytes) / 1024 kbytes_alloc,
                   tablespace_name,
                   file_id,
                   file_name,
                   count(*) over (partition by tablespace_name) nbtbsfile,
                   count(distinct tablespace_name) over () nbtbs
              from sys.dba_data_files
             group by tablespace_name, file_id, file_name) b
     where a.tablespace_name(+) = b.tablespace_name
       and a.file_id(+) = b.file_id
     group by rollup(b.tablespace_name, file_name);
    The same negative data file size shows up in Database Control...
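
    Note that USED in this report is simply kbytes_alloc - kbytes_free, so a negative value means dba_free_space is reporting more free space for DATA8B.ORA than dba_data_files reports as allocated. A quick sketch to check the two raw inputs for that one file:
    -- Allocated vs. free space for the suspect file, straight from the dictionary.
    SELECT d.file_name,
           ROUND(d.bytes / 1024 / 1024, 2)              AS alloc_mb,
           ROUND(NVL(f.free_bytes, 0) / 1024 / 1024, 2) AS free_mb
      FROM dba_data_files d
      LEFT JOIN (SELECT file_id, SUM(bytes) AS free_bytes
                   FROM dba_free_space
                  GROUP BY file_id) f
        ON f.file_id = d.file_id
     WHERE d.file_name LIKE '%DATA8B.ORA';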

  • S1000 Data file size limit is reached in statement

    I am new to Java and was given the task of troubleshooting a Java application that was written a few years ago and is no longer supported. The Java application creates database files in the user's directory: diwdb.properties, diwdb.data, diwdb.lproperties, diwdb.script. The purpose of the application is to open a zip file and insert the files into a table in the database.
    The values that are populated in the diwdb.properties file are as follows:
    #HSQL Database Engine
    #Wed Jan 30 08:55:05 GMT 2013
    hsqldb.script_format=0
    runtime.gc_interval=0
    sql.enforce_strict_size=false
    hsqldb.cache_size_scale=8
    readonly=false
    hsqldb.nio_data_file=true
    hsqldb.cache_scale=14
    version=1.8.0
    hsqldb.default_table_type=memory
    hsqldb.cache_file_scale=1
    hsqldb.log_size=200
    modified=yes
    hsqldb.cache_version=1.7.0
    hsqldb.original_version=1.8.0
    hsqldb.compatible_version=1.8.0
    Once the database file gets to 2 GB, it brings up the error message 'S1000 Data file size limit is reached in statement (Insert into <tablename>......
    From searching on the internet, it appeared that the parameter hsqldb.cache_file_scale needed to be increased, and 8 was a suggested value.
    I have the distribution files (.jar & .jnlp) that are used to run the application, and I have a source directory that was found that contains Java files. But I do not see any properties files in which to set any parameters. I was able to load both directories into NetBeans, but I really don't know if the files can be rebuilt for distribution, as I'm not clear on what I'm doing and NetBeans shows errors in some of the directories.
    I have also tried to add parameters to the startup url: http://uknt117.uk.infores.com/DIW/DIW.jnlp?hsqldb.large_data=true?hsqldb.cache_file_scale=8 but that does not affect the application.
    I have been struggling with this for quite some time. Would greatly appreciate any assistance to help resolve this.
    Thanks!
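
    (For reference, the SQL statement discussed below is presumably HSQLDB's SET PROPERTY command. A sketch, assuming HSQLDB 1.8's syntax; the 1.8 docs describe restrictions on when the scale can actually be changed, so treat this as illustrative:)
    -- Raise the .data file limit by increasing the cache file scale
    -- (8 raises the ceiling well past the default 2 GB).
    SET PROPERTY "hsqldb.cache_file_scale" 8;
    SHUTDOWN COMPACT;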

    Thanks! But where would I run the SQL statement? When anyone launches the application, it creates the database files in their user directory. How would I connect to the database after that to execute the statement?
    I see the CREATE TABLE statements in the files I have pulled into NetBeans, in both the source folder and the distribution folder. Could I add the statement there, before the table is created, in the JAR file in the distribution folder and then recompile it for distribution? Or would I need to add it to the file in the source directory and recompile those to create a new distribution?
    Thanks!

  • Maximum Data file size in 10g,11g

    DB Versions:10g, 11g
    OS & versions: Aix 6.1, Sun OS 5.9, Solaris 10
    This is what Oracle 11g Documentation
    http://download.oracle.com/docs/cd/B28359_01/server.111/b28320/limits002.htm
    says about the maximum data file size:
    "Operating system dependent. Limited by maximum operating system file size; typically 2^22 or 4 MB blocks."
    I don't understand what this 2^22 thing is.
    On our AIX machine, the ulimit command shows:
    $ ulimit -a
    time(seconds)        unlimited
    file(blocks)         unlimited  <-------------------------------------------
    data(kbytes)         unlimited
    stack(kbytes)        4194304
    memory(kbytes)       unlimited
    coredump(blocks)     unlimited
    nofiles(descriptors) unlimited
    threads(per process) unlimited
    processes(per user)  unlimited
    So, this means that on AIX both the OS and Oracle can create a data file of any size. Right?
    What about 10g, 11g DBs running on Sun OS 5.9 and Solaris 10 ? Is there any Limit on the data file size?

    How do I determine the maximum number of blocks for an OS?
    df -g would give you the block size. The OS block size is 512 bytes on AIX.
    Let's say the db_block_size is 8K. What would the maximum file size for a data file in a smallfile tablespace and a bigfile tablespace be?
    Smallfile (traditional) tablespaces - a smallfile tablespace is a traditional Oracle tablespace, which can contain 1022 datafiles or tempfiles, each of which can contain up to approximately 4 million (2^22) blocks - 32 GB with an 8K block size.
    A bigfile tablespace contains only one datafile or tempfile, which can contain up to approximately 4 billion (2^32) blocks. The maximum size of the single datafile or tempfile is 128 terabytes (TB) for a tablespace with 32K blocks and 32 TB for a tablespace with 8K blocks.
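    To put numbers on it, the ceiling is just the maximum blocks per file multiplied by the block size; a quick sketch you can run against your own instance:
    -- Approximate datafile ceilings derived from the block size:
    -- ~2^22 blocks per smallfile datafile, ~2^32 per bigfile datafile.
    SELECT value                                            AS block_size_bytes,
           POWER(2, 22) * TO_NUMBER(value) / POWER(1024, 3) AS smallfile_max_gb,
           POWER(2, 32) * TO_NUMBER(value) / POWER(1024, 4) AS bigfile_max_tb
      FROM v$parameter
     WHERE name = 'db_block_size';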
    HTH
    -Anantha

  • SAP Data File size considerably reduced after Unicode Conversion

    Hello Experts
    I have just performed a CUUC (upgrade along with Unicode conversion) from R/3 4.7 to ECC 6.0 EHP5. The data size that I had earlier was close to 463 GB (MSSQL MDF/LDF files). After the data export for Unicode conversion, the export size was 45 GB (10% of the actual data size, which I feel is normal for a heterogeneous system copy), but after the import the DATA file size is only 247 GB. Is this normal, or have I lost some data? For example, I checked tables like MSEG and the number of entries has dropped from 15,678,790 to 15,290,545.
    Could you kindly let me know if there is a way to check, from a Basis perspective, whether I have lost any data? I have followed all the procedures as per SAP standards.
    Waiting for your quick reply.
    Best Regards
    Pritish

    Hi Nicholas
    That data is compressed during the new R3load procedure is understood, but why the number of table entries has decreased (and in some cases increased) is still a question to me. For example:
    Table        Source       Target
    STPO        415,725      412,150
    STKO        126,710      126,141
    PLAF         74,671       78,336
    MDKP        193,487      192,747
    MDPB         55,329       63,557
    Any suggestions or ideas ?
    Best Regards
    Pritish

  • Sql loader maximum data file size..?

    Hi - I wrote a SQL*Loader script, run through a shell script, which imports data into a table from a CSV file. The CSV file size is around 700 MB. I am using Oracle 10g in a Sun Solaris 5 environment.
    My question is: is there any maximum data file size? The following code is from my shell script.
    SQLLDR=
    DB_USER=
    DB_PASS=
    DB_SID=
    controlFile=
    dataFile=
    logFileName=
    badFile=
    ${SQLLDR} userid=$DB_USER"/"$DB_PASS"@"$DB_SID \
              control=$controlFile \
              data=$dataFile \
              log=$logFileName \
              bad=$badFile \
              direct=true \
              silent=all \
              errors=5000
    Here is my control file code:
    LOAD DATA
    APPEND
    INTO TABLE KEY_HISTORY_TBL
    WHEN OLD_KEY <> ''
    AND NEW_KEY <> ''
    FIELDS TERMINATED BY ','
    TRAILING NULLCOLS
    (
            OLD_KEY "LTRIM(RTRIM(:OLD_KEY))",
            NEW_KEY "LTRIM(RTRIM(:NEW_KEY))",
            SYS_DATE "SYSTIMESTAMP",
            STATUS CONSTANT 'C'
    )
    Thanks,
    -Soma
    Edited by: user4587490 on Jun 15, 2011 10:17 AM
    Edited by: user4587490 on Jun 15, 2011 11:16 AM

    Hello Soma.
    How many records exist in your 700 MB CSV file? How many do you expect to process in 10 minutes? You may want to consider performing a set of simple unit tests with 1) 1 record, 2) 1,000 records, 3) 100 MB filesize, etc. to #1 validate that your shell script and control file syntax function as expected (including the writing of log files, etc.), and #2 gauge how long the processing will take for the full file.
    Hope this helps,
    Luke
    Please mark the answer as helpful or answered if it is so. If not, provide additional details.
    Always try to provide actual or sample statements and the full text of errors along with error code to help the forum members help you better.

  • Can anybody send me the cross component metadata files of any cross component project

    HI,
    I want the metadata files of a cross component project. Could you please pass along these metadata files as soon as possible?
    Thanks,
    Shabeer Ahmed.

    Hi,
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/a137c339-0b01-0010-a688-a87b88706845
    Regards,
    Sundar

  • Essbase cube data file size

    Hi,
    Why does the data file size shown in EAS > database > edit > properties > storage differ from what a full database export shows?
    Thanks,
    KK

    Tim, in all seriousness, I am not a stalker. Honestly. You just post about things that interest me/we share some of the same skills. Alternatively, Glenn stalks me, I stalk you, it's time for you to become a sociopath too and stalk someone else.
    Okay, with that insanity out of the way, the other thing that could have a big impact on export size is whether the OP did a level-zero or input-level export as opposed to a full export. In BSO databases in particular, that can lead to a very, very large set of .PAG (and to some extent .IND) files and a rather small export file, because the calculated values aren't getting written out.
    If the export is done through EAS I think the default is level zero.
    Regards,
    Cameron Lackpour
    Edited by: CL on Sep 23, 2011 2:38 PM
    Bugger, the OP wrote "full database export". Okay, never mind, I am a terrible stalker. I agree with everything Tim wrote re compression.
    In an effort to add something useful: if you use the new-ish database archive feature in MaxL, you will essentially get the .PAG and .IND files combined into a single binary file. I used to be a big fan of full restructures, clears, and reloads to do defrags, but now I go with the restructure command. Despite the fact that it's single threaded, in my limited testing (I've only done it at one client) it was faster than the export-all, clear, and reload approach. If you combine that with the archive mode, you might have a better approach.
    Regards,
    Cameron Lackpour

  • Limitation on data file size for Oracle 8i on window 2000

    What is the size limitation for each Oracle data file?
    Oracle 8i
    Window 2000 server (32-bit)

    Hi,
    You can get the details from the documentation itself.
    Refer: http://www.taom.ru/docs/oradoc.817/server.817/a76961/ch43.htm#11789 (Oracle8i Reference Release 2 (8.1.6))
    Check 10g also: http://download-uk.oracle.com/docs/cd/B19306_01/server.102/b14237/limits002.htm (10g Release 2 (10.2))
    - Pavan Kumar N

  • Data File Size

    I wrote a Java program that copies data from a SQL Server DB to Oracle 9i.
    As the program ran, the Oracle 9i server ran out of disk space. I ran the 'clean' portion of my application that deletes all data from the tables. After I did this, there was still the same amount of free space on the hard drive.
    I am not an Oracle DBA or expert by any means, I am a programmer, but here is what I found; I'm not sure if it's relevant.
    On the Oracle server I went to the following folder: oracle\ora92\database\DB40
    I think this is the file where my data is stored. The file 'DB40DF' is 6 GB.
    I have 2 other databases on the server that are exactly the same as my DB40 database, and their 'DF' file sizes are 130 MB. These have no data in them yet. So from this I would assume that if I emptied all of my tables then DB40DF should be around 130 MB. But this is not the case; after I ran my delete code the file is still 6 GB.
    Did I do my delete wrong? I assumed that my statement would commit, and it seems to. When I do a select * from the tables there is no data. If there is no data in my tables, why is the file 6 GB? I have included my delete Java code in case I did something wrong on that side. If not, is there something I am supposed to do to get the DF file back to normal size? I have Toad 9.5 to administer the database.
    Here is a portion of my delete java code:
    // stmt is a java.sql.Statement; tables holds the names of the tables to clear
    for (String table : tables) {
        sql = "DELETE FROM " + oracleSchema + table;
        stmt.addBatch(sql);
    }
    int[] result = stmt.executeBatch();
    if (stmt != null) {
        stmt.close();
        stmt = null;
    }
    When I print out the int array it shows how many rows were deleted from each table, and the record count seems OK (i.e. table x clear: 36007 rows deleted).
    Thank you.

    Oracle won't resize a datafile unless this is requested by the SQL statement ALTER DATABASE DATAFILE ... RESIZE. Oracle won't release extents allocated to a table unless this is requested by certain ALTER TABLE statements. Extents will normally be reused by subsequent INSERT statements. So the behavior you note is normal.
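    A sketch of both options (the datafile path, size, and table name are made up here; take the real path from dba_data_files):
    -- Shrink the datafile itself; Oracle refuses if extents are still
    -- allocated beyond the requested size.
    ALTER DATABASE DATAFILE 'C:\oracle\ora92\database\DB40\DB40DF' RESIZE 500M;
    -- For future clean-outs, TRUNCATE deallocates a table's extents immediately,
    -- unlike DELETE, which leaves them allocated for reuse.
    TRUNCATE TABLE my_schema.my_table;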

  • Suggested data file size for Oracle 11

    Hi all,
    Creating a new system (SolMan 7.1) on AIX 6.1 running Oracle 11. 
    I have 4 logical volumes for data, sized at 100 GB each. During the installation I'm being asked to input the size for the data files. The default is "2000mb/2gb"; is this acceptable for a system sized like mine, or should I double them to 4 GB each? I know the max is 32 GB per data file, but that seems a bit large to me. I just wanted to know if there is a standard best practice for this, or a formula to use based on system sizing.
    I was not able to find any quick suggestions in the Best Practices guide on this unfortunately...
    Any help would be greatly appreciated.
    Thanks!

    Ben Daniels wrote:
    Hi all,
    >
    > Creating a new system (SolMan 7.1) on AIX 6.1 running Oracle 11. 
    >
    > I have 4 logical volumes for data, sized at 100 GB each. During the installation I'm being asked to input the size for the data files. The default is "2000mb/2gb"; is this acceptable for a system sized like mine, or should I double them to 4 GB each? I know the max is 32 GB per data file, but that seems a bit large to me. I just wanted to know if there is a standard best practice for this, or a formula to use based on system sizing.
    >
    > I was not able to find any quick suggestions in the Best Practices guide on this unfortunately...
    >
    > Any help would be greatly appreciated.
    >
    > Thanks!
    Hi Ben,
    Check the note 129439 - Maximum file sizes with Oracle
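    If it helps, a common pattern is to keep the 2 GB initial size and let the files autoextend toward the smallfile ceiling instead of pre-allocating everything up front; a sketch (the tablespace name and path are made up):
    -- Start moderate and grow in steps, capped just under the 32 GB
    -- smallfile limit for an 8K block size.
    ALTER TABLESPACE psapsr3
      ADD DATAFILE '/oracle/SM1/sapdata1/sr3_1/sr3.data1'
      SIZE 2000M
      AUTOEXTEND ON NEXT 200M MAXSIZE 32767M;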
    Best regards,
    Orkun Gedik

  • No image preview in Bridge and unable to write meta data files

    I recently converted my CR2 files to DNG files using the DNG Converter. For the most part it worked. However, for some files, something went wrong. When I looked at a folder of newly converted DNG files using Bridge, I noticed that some files didn't have a preview. These same files won't allow metadata to be written either. Are these files corrupt? What is wrong and how do I fix it? I tried to open these files in Camera Raw and they do open. The weird thing is that if I open the file in Camera Raw, then save the file as DNG, it will then show a preview in Bridge.

    Yes. The original raw file (CR2) allows me to write metadata. In fact, if I write the metadata to the CR2 file first, then convert to DNG, all the metadata is there - even for the files that at first do not have a preview in Bridge.
    But the reason I'm converting to DNG is that I want my original archive files to be in a non-proprietary format.
    Even though I can work around the current issues with previews and writing metadata, my main concern is whether or not there is a problem with the conversion itself. I don't want to find out later that the file is corrupt. I'm worried that the issues with previewing and writing metadata are symptoms of some bigger problem.
    Any ideas?

  • Lightroom, Photoshop, Image and File Size | Adobe Evangelists - Julieanne Kost | Adobe TV

    In this episode of The Complete Picture, Julieanne explains how Lightroom determines the file size and resolution of a file when using the Edit in Photoshop command.
    http://adobe.ly/YasSCQ

    Have you tried emptying your browser cache? Also, try a different browser, i.e. Chrome, Safari, or Firefox. If your problem persists, please send me an email with a detailed system configuration spec to [email protected] and we can investigate off-list further.
    Thanks!
    Mike Burton
    Adobe TV Administrator

  • Best file for importing images from Illustrator and Photoshop for small file sizes

    Hello Adobe consultants!
    I'm in the process of preparing an InDesign file for a school project -- I've already had a few harrowing experiences sending the printer files that are too large to process (and a very grumpy computer, etc.). The end result will be a poster around 36 inches by 4 or 5 feet.
    I'm wondering if there are any best practices for making sure that the files imported into InDesign are a manageable size to begin with. Should I, for instance, be saving each file as a JPEG before placing it in InDesign?
    Thanks!
    -Katherine

    No, you should not save every file as JPEG before placing it in InDesign. JPEG is only useful for raster images (like photos) without any transparency, saved at high quality.
    When you place images, use:
    For raster images from Photoshop: PSD (RGB with a color profile).
    For raster images from Photoshop with shape layers, text, or any vector elements: PDF (or PDP) with layers.
    For vector graphics from Illustrator: AI files or PDF.
    For layouts from other InDesign projects: either the INDD itself or an exported PDF/X-4.
    But deliver to the printer a PDF that follows their standards. E.g., when they need CMYK files, export as PDF/X-1a with the required output color space and the resolution they want. Produce the PDF via Export (Print).
    Don't deliver an open INDD. For printing projects, file size should not be an issue.
