MD5 key for huge file

Hi,
I want to generate an MD5 key for a set of files which contain a lot of data.
A big file implies a long computation time... so, how can I improve this?
For now, I have a really simple algorithm:
public byte[] createMD5(File file) throws NoSuchAlgorithmException, IOException {
    InputStream fis = new FileInputStream(file);
    byte[] buffer = new byte[1024];
    // Note: getInstance("SHA") requests SHA-1, not MD5; use "MD5" to match the method name.
    MessageDigest complete = MessageDigest.getInstance("MD5");
    int numRead;
    do {
        numRead = fis.read(buffer);
        if (numRead > 0) {
            complete.update(buffer, 0, numRead);
        }
    } while (numRead != -1);
    fis.close();
    return complete.digest();
}
I have not found any benchmarks for the SHA, MD5, ... algorithms.
Thank you,
Edited by: phpvik on May 20, 2009 1:19 PM
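For reference, a minimal sketch of a faster variant (my own suggestion, not from the thread): the same digest loop, but reading through a BufferedInputStream with a 1 MB buffer instead of 1 KB, which cuts the number of native read() calls on large files.

```java
import java.io.*;
import java.security.*;

public class FastMD5 {
    // Same digest as the snippet above, but with a much larger buffer and a
    // BufferedInputStream; fewer read() calls means less per-call overhead on big files.
    public static byte[] md5(File file) throws NoSuchAlgorithmException, IOException {
        MessageDigest md = MessageDigest.getInstance("MD5");
        try (InputStream in = new BufferedInputStream(new FileInputStream(file))) {
            byte[] buffer = new byte[1 << 20]; // 1 MB
            int n;
            while ((n = in.read(buffer)) != -1) {
                md.update(buffer, 0, n);
            }
        }
        return md.digest();
    }
}
```

For file sets this large, I/O throughput usually dominates; the hash algorithm itself is rarely the bottleneck.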

sabre150: What speed are you looking for?
phpvik: I expect 4 GB (10,000 files) in 10 minutes.
sabre150: I can do 5,600 files totalling 4.3 GBytes in 48 seconds.
phpvik: OK, that's good... Do you have any piece of code?
sabre150: Yep. Plenty.
phpvik: (I'm working with a cluster.)
sabre150: I'm not sure I understand the relevance. If you mean that you have N machines working on the problem, then you should be able to do it in 1/Nth of the time.
phpvik: I just wanted to say that I can distribute the MD5 hashing threads over a computing farm. My algorithm has to be safe, because my Java application will be loaded in 64-bit JVMs (Linux, Windows, Mac...).
sabre150: I definitely do not understand this, since one jar will work on 32- or 64-bit Linux, Windows, Mac, and Uncle Tom Cobley and all.
phpvik: My bytecode has to be compliant. I'm not sure that the MD5 algorithm doesn't depend on OS architecture. But I agree with you: compile once, run everywhere...
sabre150: I don't understand. Compliant with what? What in the MD5 specification makes it OS-architecture dependent?
phpvik: Did the MD5 encryption work fine?
sabre150: Of course. What makes you think the Java MD5 is wrong or in any way inferior to any other MD5?
phpvik: No, but I remember someone saying that an MD5 implementation can be wrong... But I trust Java!
sabre150: Then ask that 'someone' for information about the faulty implementation.
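The computing-farm idea above can also be sketched within a single machine. A hedged sketch (my own code, not sabre150's): hash many files in parallel with a fixed thread pool, creating one MessageDigest per task since MessageDigest is not thread-safe.

```java
import java.io.*;
import java.nio.file.*;
import java.security.*;
import java.util.*;
import java.util.concurrent.*;

public class ParallelMD5 {
    // Hash all files concurrently; returns a map from path to lowercase hex MD5.
    public static Map<Path, String> hashAll(List<Path> files, int threads) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        Map<Path, Future<String>> futures = new LinkedHashMap<>();
        for (Path p : files) {
            futures.put(p, pool.submit(() -> hex(md5(p))));
        }
        Map<Path, String> result = new LinkedHashMap<>();
        for (Map.Entry<Path, Future<String>> e : futures.entrySet()) {
            result.put(e.getKey(), e.getValue().get());
        }
        pool.shutdown();
        return result;
    }

    static byte[] md5(Path p) throws Exception {
        MessageDigest md = MessageDigest.getInstance("MD5"); // one digest per task
        try (InputStream in = Files.newInputStream(p)) {
            byte[] buf = new byte[1 << 16];
            int n;
            while ((n = in.read(buf)) != -1) md.update(buf, 0, n);
        }
        return md.digest();
    }

    static String hex(byte[] d) {
        StringBuilder sb = new StringBuilder();
        for (byte b : d) sb.append(String.format("%02x", b));
        return sb.toString();
    }
}
```

On spinning disks the gain from extra threads is limited by I/O; on a farm where each node hashes its local files, the 1/N speedup mentioned above is realistic.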

Similar Messages

  • How to saved "Activation key" for DRM files ?

    hi everyone,
I need help on how to back up the "activation key" for DRM files like MP3s, music videos, movie clips, and wallpapers. I'm really scared to update my N95-1 firmware; my current FW is still on V12.0.014... please teach me! Also, how do I back up installed games & applications? Please help, many thanks.

    Follow these simple steps and you should be fine:
    1) Uninstall programs, themes, and games that you installed yourself.
    2) Connect the device using PC Suite Mode and use the Content Copier application of PC suite (the one that looks like a safe)
    3) Make a back-up of your device (no need to back-up memory card) including any User files. Use Content copier for this
    4) Use NSU
5) After NSU is done (if successful), boot your device, then connect to PC Suite again and use Content Copier to restore the settings.
6) Reboot; your settings and other data should be restored, including DRM settings.
    7) Once you confirmed it's working fine, install the Themes, games, and other programs.
    8) Enjoy!
    640K Should be enough for everybody
    El_Loco Nokia Video Blog

  • MD5 identical for font files

    I put together a script that uses openssl sha1 <file> to check for duplicate files in a folder hierarchy.
It gives the same hash result for all the members of a font family. Does that make sense? I would have thought there would be some differences between the Suitcase and PostScript files.

    MarkDouma® wrote:
    Are these font files resource fork-based, or data fork-based? Traditional Mac font suitcases and PostScript Type 1 outline fonts are resource fork based files. Most of the command line tools only recognize the data fork of files, and will regard a resource fork-based file like a font suitcase as empty. The hash you're getting back will most likely be the hash you'd get for an empty file (for example, if you created a test file using "touch ~/Desktop/testFile").
That appears to be exactly the case. I tested md5 and openssl sha1 on a couple of fonts I've had for aeons. I get the same hash as if I create an empty file with touch.
This "..namedfork" method worked in OS X 10.4.x and prior, but as far as I remember, support for that way of accessing files was removed in OS X 10.5.
    You can get to the resource fork on Leopard by appending /rsrc:
    $ ls -l Aquil
    -rwxrwxrwx@ 1 username staff 0 Sep 9 1998 Aquil
    $ ls -l Aquil/rsrc
    -rwxrwxrwx 1 username staff 156440 Sep 9 1998 Aquil/rsrc
    $ touch test.txt
    $ md5 -q test.txt
    d41d8cd98f00b204e9800998ecf8427e
    $ md5 -q Aquil
    d41d8cd98f00b204e9800998ecf8427e
    $ md5 -q Aquil/rsrc
    b872478171cb1fd9838c7ecef926c89b
    So a script would probably need some kind of logic where it looks for zero-length files, then checks to see if there's a resource fork. If there is, then get the hash of that instead...
    charlie
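Charlie's suggested script logic can be sketched in Java as well (a hypothetical sketch: the "/rsrc" suffix is the macOS-only pseudo-path from the shell transcript above; on other systems, or for data-fork files, this simply hashes the file itself):

```java
import java.io.*;
import java.security.*;

public class ForkAwareMD5 {
    // If the data fork is zero-length, fall back to the resource fork exposed
    // (on Leopard) by appending "/rsrc" to the path; otherwise hash the file as-is.
    public static byte[] hash(File f) throws NoSuchAlgorithmException, IOException {
        File target = f;
        if (f.length() == 0) {
            File rsrc = new File(f.getPath() + "/rsrc"); // macOS-only pseudo-path
            if (rsrc.exists() && rsrc.length() > 0) {
                target = rsrc;
            }
        }
        MessageDigest md = MessageDigest.getInstance("MD5");
        try (InputStream in = new FileInputStream(target)) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) md.update(buf, 0, n);
        }
        return md.digest();
    }
}
```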

  • Can I use MD5 value for indexing files ?

I would like to know if the MD5 value for each unique file is also unique. I am wondering if you can use the MD5 value of a file for indexing. Any suggestions?

I would like to know if the MD5 value for each unique file is also unique.
No, since the number of MD5 hashes is less than the number of possible files. Of course, if you don't have many files, the probability of clashes is pretty low. There's some theory about this, which you'll find in algorithms textbooks where they talk about hash tables.
I am wondering if you can use the MD5 value of a file for indexing. Any suggestions?
Why? Don't you want your index to tell you something about the contents of the file?
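The clash-probability remark above can be made concrete with the usual birthday bound: for n files and a b-bit hash, the probability of at least one accidental collision is roughly 1 - exp(-n(n-1)/2^(b+1)). A small sketch (the numbers are illustrative, and note that MD5 collisions can also be constructed deliberately, which this bound does not cover):

```java
public class BirthdayBound {
    // p ≈ 1 - exp(-n(n-1) / 2^(b+1)); Math.expm1 keeps precision when p is tiny.
    public static double collisionProbability(double n, int bits) {
        double exponent = -n * (n - 1) / Math.pow(2.0, bits + 1);
        return -Math.expm1(exponent);
    }

    public static void main(String[] args) {
        // For a million files and the 128-bit MD5, an accidental clash is
        // astronomically unlikely (on the order of 1e-27).
        System.out.println(collisionProbability(1e6, 128));
    }
}
```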

  • RegEx Error for Huge File .. Please help , Its Urgent

    Hi all,
    I am getting following exception,
    does anyone know about it
    java.lang.IndexOutOfBoundsException: No group 1
    at java.util.regex.Matcher.group(Matcher.java:355)
    at java.util.regex.Matcher.appendReplacement(Matcher.java:585)
    at java.util.regex.Matcher.replaceFirst(Matcher.java:701)
    at XSLT_OnlyJava.<init>(XSLT_OnlyJava.java:84)
    at XSLT_OnlyJava.main(XSLT_OnlyJava.java:93)
    Exception in thread "main"
    I am parsing huge file with regex and replacing some part
    Thanks,
    Vinayak

Sorry for the late reply.
This is my code:
text = contents.toString();
String regex = "<tu.*?/tu>";
Matcher matcher = Pattern.compile(regex, Pattern.CASE_INSENSITIVE | Pattern.DOTALL).matcher(text);
while (matcher.find()) {
    tuvSegment = matcher.group();
    String segRegex = "<seg>.*?</seg>";
    Matcher segMatcher = Pattern.compile(segRegex, Pattern.CASE_INSENSITIVE | Pattern.DOTALL).matcher(tuvSegment);
    if (segMatcher.find()) {
        SegVal = segMatcher.group();
        String ReplsegRegex = "<seg />";
        Matcher ReplsegMatcher = Pattern.compile(ReplsegRegex, Pattern.CASE_INSENSITIVE | Pattern.DOTALL).matcher(tuvSegment);
        text = ReplsegMatcher.replaceFirst(SegVal);
    }
}
The text string contains the .tmx file.
    Thanks,
    Vinayak
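The "No group 1" exception from appendReplacement typically means the replacement string itself contains a literal $ or \, which replaceFirst interprets as a group reference. Since SegVal here is matched text from the file, any "$1" inside a segment would trigger exactly this error. A minimal standalone sketch of the usual fix (illustrative data, not Vinayak's actual file): wrap the replacement in Matcher.quoteReplacement.

```java
import java.util.regex.*;

public class QuoteReplacementDemo {
    public static void main(String[] args) {
        String text = "<seg />";
        String segVal = "<seg>price is $1.99</seg>"; // contains a literal "$1"
        // Without quoting, replaceFirst(segVal) throws
        // IndexOutOfBoundsException: No group 1, because "$1" looks like a group reference.
        String safe = Pattern.compile("<seg />")
                             .matcher(text)
                             .replaceFirst(Matcher.quoteReplacement(segVal));
        System.out.println(safe);
    }
}
```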

  • Improve XML readability in Oracle 11g for binary XMLType storage for huge files

I have a requirement in which I have to process huge XML files: around 1000 XML files, with a total size of about 2 GB.
I need to store all the data in these files in my Oracle DB. For this I have used SQL*Loader for bulk uploading of all my XML files into my DB, where they are stored as binary XMLType. Now I need to query these files and store the data in relational tables. For this I have used XMLTable XPath queries. Everything is fine when I query a single XML file within my DB. But querying all those files takes too much time, which is not acceptable.
    Here's my one sample xml content:
    <ABCD>
      <EMPLOYEE id="11" date="25-Apr-1983">
        <NameDetails>
          <Name NameType="a">
            <NameValue>
              <FirstName>ABCD</FirstName>
              <Surname>PQR</Surname>
              <OriginalName>TEST1</OriginalName>
              <OriginalName>TEST2</OriginalName>
            </NameValue>
          </Name>
          <Name NameType="b">
            <NameValue>
              <FirstName>TEST3</FirstName>
              <Surname>TEST3</Surname>
            </NameValue>
            <NameValue>
              <FirstName>TEST5</FirstName>
              <MiddleName>TEST6</MiddleName>
              <Surname>TEST7</Surname>
              <OriginalName>JAB1</OriginalName>
            </NameValue>
            <NameValue>
              <FirstName>HER</FirstName>
              <MiddleName>HIS</MiddleName>
              <Surname>LOO</Surname>
            </NameValue>
          </Name>
          <Name NameType="c">
            <NameValue>
              <FirstName>CDS</FirstName>
              <MiddleName>DRE</MiddleName>
              <Surname>QWE</Surname>
            </NameValue>
            <NameValue>
              <FirstName>CCD</FirstName>
              <MiddleName>YTD</MiddleName>
              <Surname>QQA</Surname>
            </NameValue>
            <NameValue>
              <FirstName>DS</FirstName>
              <Surname>AzDFz</Surname>
            </NameValue>
          </Name>
        </NameDetails>
      </EMPLOYEE >
    </ABCD>
Please note that this is just one small record inside one big XML file. Each XML file contains around 5,000 similar records. Similarly, there are more than 400 files, each about 4 MB in size.
    My xmltable query :
    SELECT t.personid,n.nametypeid,t.titlehonorofic,t.firstname,
            t.middlename,
            t.surname,
            replace(replace(t.maidenname, '<MaidenName>'),'</MaidenName>', '#@#') maidenname,
            replace(replace(t.suffix, '<Suffix>'),'</Suffix>', '#@#') suffix,
            replace(replace(t.singleStringName, '<SingleStringName>'),'</SingleStringName>', '#@#') singleStringName,
            replace(replace(t.entityname, '<EntityName>'),'</EntityName>', '#@#') entityname,
            replace(replace(t.originalName, '<OriginalName>'),'</OriginalName>', '#@#') originalName
    FROM xmlperson p,master_nametypes n,
             XMLTABLE (
              --'ABCD/EMPLOYEE/NameDetails/Name/NameValue'
              'for $i in ABCD/EMPLOYEE/NameDetails/Name/NameValue        
               return <row>
                        {$i/../../../@id}
                         {$i/../@NameType}
                         {$i/TitleHonorific}{$i/Suffix}{$i/SingleStringName}
                        {$i/FirstName}{$i/MiddleName}{$i/OriginalName}
                        {$i/Surname}{$i/MaidenName}{$i/EntityName}
                    </row>'
            PASSING p.filecontent
            COLUMNS
                    personid     NUMBER         PATH '@id',
                    nametypeid   VARCHAR2(255)  PATH '@NameType',
                    titlehonorofic VARCHAR2(4000) PATH 'TitleHonorific',
                     firstname    VARCHAR2(4000) PATH 'FirstName',
                     middlename  VARCHAR2(4000) PATH 'MiddleName',
                    surname     VARCHAR2(4000) PATH 'Surname',
                     maidenname   XMLTYPE PATH 'MaidenName',
                     suffix XMLTYPE PATH 'Suffix',
                     singleStringName XMLTYPE PATH 'SingleStringName',
                     entityname XMLTYPE PATH 'EntityName',
                    originalName XMLTYPE        PATH 'OriginalName'
                    ) t where t.nametypeid = n.nametype and n.recordtype = 'Person'
    But this is taking too much time to query all those huge data. The resultset of this query would return about millions of rows. I tried to index the table using this query :
    CREATE INDEX myindex_xmlperson on xml_files(filecontent) indextype is xdb.xmlindex parameters ('paths(include(ABCD/EMPLOYEE//*))');
    My Database version :
    Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
    PL/SQL Release 11.2.0.2.0 - Production
    "CORE 11.2.0.2.0 Production"
    TNS for Linux: Version 11.2.0.2.0 - Production
    NLSRTL Version 11.2.0.2.0 - Production
The index is created, but still no improvement in performance. It takes more than 20 minutes to query even a set of 10 similar XML files. Now you can imagine how long it will take to query all 1000 XML files.
Could someone please suggest how to improve the performance of my database? Since I am new to this, I am not sure whether I am doing it the proper way. If there is a better solution, please suggest it. Your help will be greatly appreciated.

Hi Odie,
I tried to run your code over all the XML files, but it is taking too much time; it had not finished even after 3 hours.
When I tried a single INSERT ... SELECT statement for one XML file it worked, but it still takes around 10 seconds.
Please find my execution plan for one single XML file with your code:
PLAN_TABLE_OUTPUT
Plan hash value: 2771779566
| Id  | Operation                 | Name                  | Rows  | Bytes | Cost (%CPU)| Time      |
|   0 | INSERT STATEMENT          |                       |  499G |  121T |  434M  (2) | 999:59:59 |
|   1 |  LOAD TABLE CONVENTIONAL  | WATCHLIST_NAMEDETAILS |       |       |            |           |
|   2 |   SORT AGGREGATE          |                       |     1 |     2 |            |           |
|   3 |    XPATH EVALUATION       |                       |       |       |            |           |
|   4 |   SORT AGGREGATE          |                       |     1 |     2 |            |           |
|   5 |    XPATH EVALUATION       |                       |       |       |            |           |
|   6 |   SORT AGGREGATE          |                       |     1 |     2 |            |           |
|   7 |    XPATH EVALUATION       |                       |       |       |            |           |
|   8 |   SORT AGGREGATE          |                       |     1 |     2 |            |           |
|   9 |    XPATH EVALUATION       |                       |       |       |            |           |
|  10 |   NESTED LOOPS            |                       |  499G |  121T |  434M  (2) | 999:59:59 |
|  11 |    NESTED LOOPS           |                       |   61M |   14G | 1222K  (1) | 04:04:28  |
|  12 |     NESTED LOOPS          |                       | 44924 |   10M |    61  (2) | 00:00:01  |
|  13 |      MERGE JOIN CARTESIAN |                       |     5 |  1235 |     6  (0) | 00:00:01  |
|* 14 |       TABLE ACCESS FULL   | XMLPERSON             |     1 |   221 |     2  (0) | 00:00:01  |
|  15 |       BUFFER SORT         |                       |     6 |   156 |     4  (0) | 00:00:01  |
|* 16 |        TABLE ACCESS FULL  | MASTER_NAMETYPES      |     6 |   156 |     3  (0) | 00:00:01  |
|  17 |      XPATH EVALUATION     |                       |       |       |            |           |
|* 18 |     XPATH EVALUATION      |                       |       |       |            |           |
|  19 |    XPATH EVALUATION       |                       |       |       |            |           |
Predicate Information (identified by operation id):
  14 - filter("P"."FILENAME"='PFA2_95001_100000_F.xml')
  16 - filter("N"."RECORDTYPE"='Person')
  18 - filter("N"."NAMETYPE"=CAST("P1"."C_01$" AS VARCHAR2(255)))
Note
   - Unoptimized XML construct detected (enable XMLOptimizationCheck for more information)
    Please note that this is for a single xml file. I have like more than 400 similar files in the same table.
    And for your's as well as Jason's Question:
    What are you trying to accomplish with
    replace(replace(t.originalName, '<OriginalName>'),'</OriginalName>', '#@#') originalName 
    originalName XMLTYPE PATH 'OriginalName'
    Like Jason, I also wonder what's the purpose of all those XMLType projections and strange replaces in the SELECT clause
What I was trying to achieve was to create a table containing separate rows for all the multi-item child nodes of this particular XML.
But since there was an error because of multiple child nodes like 'OriginalName' under the 'NameValue' node, I tried this script to insert those values by providing a delimiter and replacing the tag names.
Please see this link for more details: http://stackoverflow.com/questions/16835323/construct-xmltype-query-to-store-data-in-oracle11g
    This was the execution plan for one single xml file with my code :
    Plan hash value: 2851325155
| Id  | Operation                               | Name                   | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
|   0 | SELECT STATEMENT                        |                        |  7487 | 1820K |    37   (3)| 00:00:01 |        |      |            |
|*  1 |  HASH JOIN                              |                        |  7487 | 1820K |    37   (3)| 00:00:01 |        |      |            |
|*  2 |   TABLE ACCESS FULL                     | MASTER_NAMETYPES       |     6 |   156 |     3   (0)| 00:00:01 |        |      |            |
|   3 |   NESTED LOOPS                          |                        |  8168 | 1778K |    33   (0)| 00:00:01 |        |      |            |
|   4 |    PX COORDINATOR                       |                        |       |       |            |          |        |      |            |
|   5 |     PX SEND QC (RANDOM)                 | :TQ10000               |     1 |   221 |     2   (0)| 00:00:01 |  Q1,00 | P->S | QC (RAND)  |
|   6 |      PX BLOCK ITERATOR                  |                        |     1 |   221 |     2   (0)| 00:00:01 |  Q1,00 | PCWC |            |
|*  7 |       TABLE ACCESS FULL                 | XMLPERSON              |     1 |   221 |     2   (0)| 00:00:01 |  Q1,00 | PCWP |            |
|   8 |    COLLECTION ITERATOR PICKLER FETCH    | XQSEQUENCEFROMXMLTYPE  |  8168 | 16336 |    29   (0)| 00:00:01 |        |      |            |
    Predicate Information (identified by operation id):
       1 - access("N"."NAMETYPE"=CAST(SYS_XQ_UPKXML2SQL(SYS_XQEXVAL(SYS_XQEXTRACT(VALUE(KOKBF$),'/*/@NameType'),0,0,20971520,0),50,1,2
                  ) AS VARCHAR2(255)  ))
       2 - filter("N"."RECORDTYPE"='Person')
       7 - filter("P"."FILENAME"='PFA2_95001_100000_F.xml')
    Note
       - Unoptimized XML construct detected (enable XMLOptimizationCheck for more information)
    Please let me know whether this has helped.
My intention is to save the details in the XML to different relational tables so that I can easily query them from my application. I have many similar queries which insert the XML values into different tables, like the one I have mentioned here. I was thinking of creating a stored procedure to insert all these values into the relational tables once I receive the XML files. But even a single query is taking too much time to complete. Could you please help me in this regard? Waiting for your valuable feedback.
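As a hedged alternative sketch (my own suggestion, not from this thread): since the end goal is flat relational rows, each file could also be stream-parsed outside the database with StAX and batch-inserted, avoiding per-row XPath evaluation inside Oracle. The class below only pulls FirstName/Surname pairs from the sample structure shown earlier; the class and method names are assumptions for illustration.

```java
import javax.xml.stream.*;
import java.io.*;
import java.util.*;

public class NameValueExtractor {
    // Streams the document and emits one "FirstName|Surname" string per <NameValue>.
    public static List<String> extract(InputStream xml) throws XMLStreamException {
        XMLStreamReader r = XMLInputFactory.newInstance().createXMLStreamReader(xml);
        List<String> rows = new ArrayList<>();
        String first = null, sur = null, current = null;
        while (r.hasNext()) {
            int ev = r.next();
            if (ev == XMLStreamConstants.START_ELEMENT) {
                current = r.getLocalName();
            } else if (ev == XMLStreamConstants.CHARACTERS && !r.isWhiteSpace()) {
                if ("FirstName".equals(current)) first = r.getText();
                else if ("Surname".equals(current)) sur = r.getText();
            } else if (ev == XMLStreamConstants.END_ELEMENT) {
                current = null;
                if ("NameValue".equals(r.getLocalName())) {
                    rows.add(first + "|" + sur); // one flat row per NameValue
                    first = sur = null;
                }
            }
        }
        return rows;
    }
}
```

The extracted rows could then be bulk-inserted with JDBC batching, which keeps the XPath work out of the query path entirely.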

  • Unable to process huge files in SFTP BASED PROXY SERVICE

    Hi,
I created an OSB project which has an SFTP-based proxy and an SFTP-based business service. After completing the development I tested the file transfer through WinSCP. My SFTP-based proxy picks up the files, but for huge files I am unable to see them in the output directory; for small files the output arrives, but only after quite a long time. Please assist me.

If you enable content streaming then you can't access the file/message content within the proxy service and cannot perform actions like Replace or transformations on the content. Use streaming for pass-through scenarios only. If you want to read a large file and also perform transformations on the content, I would recommend using the JCA Adapter for FTP and reading records from the file in batches, a few records at a time. Just so we know the requirement better: what is the file size, and what is the format of the content?

  • Where do I find the public key for PGP verification

Where is the PGP public key for Firefox file verification?

Confirm: what is the signing name of the public key?
Confirm: which of the signature files, shasum1.asc or shasum256.asc, should be used?
The shasum1 key value is:
    -----BEGIN PGP SIGNATURE-----
    Version: GnuPG v2.0.14 (GNU/Linux)
    iQIcBAABAgAGBQJUq8UhAAoJEAV8w+sVoKS8aTIP/Al30xRFgL7NKbnANzavK/Cm
    YviQUkJHdux2PeBSyWgwGjb3skiQRPpKEOzZ0+Jv2zmI/9BpupCYPIkLFq0D9d+6
    kEnC+nFezVS7IUDh46MLL6cSv8OraRDqKRJXapZotpEd62zch+nZJsb5vOsW12Wd
    9YcE7f/0h75KcuzxZ2VK0a78JObm5xcMIIa/R+iDsV1LAEYTDj10o2LrKhJkYD3d
    BIc8EPHaqXeHRhwTt1K7YO0TuXKJEYuhG32jVKWwU6QSAmIuAGSnM60U46fVIde6
    /z41rWZmL5kIRwzZWORHdG9HJvP0CIU/TA9kDKyo7bS+PrHMeLQ8omxAEjLBbM6P
    aUfRd3Qp5rmIp45/dXCmEb5uYZ3HJwmt4EZ7mtxi9rTiCE5wqlRwQySz62YAI8wU
    iZQ/sAw2NkFLZLjP+FsvLwGFAu1AekX8TX8OMredzSW/VzmxJUgXG6OjmpHoPtTk
    /awliL6Tr+SfLaA9zlnWuFk1YSCVj4vIK5Pd3X+NmeVa/hXjw9tiq+LA/p9+L996
    FXj5f+CMCOmivQHmKxloA1Cozb5q6wUf9mZtU2SeSckoH4jfWRRtICUW9r6D9k6a
    eBYk5JhBU73MYyFt7b9+mL5A2gWZ8KGJGuHIw51d0cR+MHfmv+CLK45c2xOLe0Pd
    qaj/+4LO8rZq0z3HIlbU
    =dXvG
    -----END PGP SIGNATURE-----
    Shasum512.asc value is
    -----BEGIN PGP SIGNATURE-----
    Version: GnuPG v2.0.14 (GNU/Linux)
    iQIcBAABAgAGBQJUq8UkAAoJEAV8w+sVoKS8ZmEP/RDrO15NufQabW0dtdDmjL1l
    zMDNZMVhdTNQx5TuQxAzDaAoT3NH9PfobgEgtU2kt/l9fFT8XIHHCBdg6Jci5PvR
    tLG9EyPqcjNxehvyykMlxoO0ajOu3Asm0tZbBCxJ8d5kpzN5eZjHOIKuH4mv9VEs
    cSjy022ZwhWqiH81tAdItJlo0kfxJpJXbkVfmtNQQkNL7yYcrJI/FsGlIq39xyd8
    1LvkvwswSeOYkL8fgMcXvxO4RxZlR8nbd+GIbAzHp3ztl5XYRuMeekLA6igyq3JX
    rdHQXGt1xe4n0lGWNzPwY8YKG7D/ku9RTfH2b78IQLmm5+G4BZoaEAFXjtsBacoe
    kZOem1M1PtVq4A0e7mQNnEGGXz4zLFm8t+g1TXV8FM+dN3K6OXGAXXgA+Yt9LY//
    DR4ESNHS1/sC3pdAtynfg3MtV5yXmDOZKwx2ew1EYEsc7OD9QmYQW+sIkOi8nFgH
    TxU1udep/ZjerMABu1lZoy8WAX29DmheYkTn95oCTHLBh//03FWc76/ENT2cqZVV
    GYS1SRgLtnENXmi2CazVB80o6zOFGLfnL272fHhhr/zMqtWxtPd0WekIVxhA0dhg
    GBvPu6YyjyUXQQqxc/gz86IJus/tHkfd5RhCMbFlU3RNBZLo8SXmJXYlk8xcZtH2
    b3m3ifyXU6sSIkkxorp1
    =WMMi
    -----END PGP SIGNATURE-----
I am not able to verify Firefox 31.4 ESR for the Mac.

  • I have had to buy an GRAID 4TB to back up my images as I am a photographer and have huge files.  Can I use my old timecapsule 2TB as storage for say 4 years of photography.  My imac is getting full.

I am a photographer and have huge files, so I have had to buy a GRAID 4TB for backup, and now have an old 2 TB Time Capsule not doing anything. Could I use the Time Capsule as storage and transfer, say, a few years of images over to it to free up space on my iMac? I do have the images copied onto other hard drives, but feel I need more copies in case of external hard drive failure.

    Could I use the timecapsule as storage and transfer say a few years of images over to that and free up space on my imac.
Can't give you a definite answer as I don't use a Time Capsule, but if it is not being used for anything, why not try it out? My guess is you could use it for miscellaneous storage if you wanted to.

  • HUGE file size for Pages (iWork '06) documents

    I'm in the process of evaluating Pages in the hopes of dumping the last of my Microsoft programs (ie, Word.) I'm impressed with the ease of use; especially when it comes to being a little creative.
    My great concern is the file size. I created a Pages document of 6 pages. It had text, images and tables. This relatively small document is a whopping 17.7MB. When I export this document to a Word doc and open it, it looks identical. The file size of the Word doc is only 1.4 MB. A HUGE difference. Is this a known issue or is there some sort of compression setting I'm missing?
    I typically work with large docs and I don't want Pages files hogging all of my drive space.

The package shows image1.tif, which I added on the first page. There's also an image1_filtered-9.tiff which is HUGE. It's about 3MB larger than the original file. It looks like Pages possibly makes a copy of an image when you apply an effect like a drop shadow?
So if you don't want huge files, I guess you have to stay away from the image effects... which is actually what made Pages interesting.
It is the adjustment tool which is causing the ballooning of your files.
If you use the levels adjustment, you will end up with an image labelled ~levelled.tiff. If you use the various other parts of the adjustment screen you will get files labelled ~-filtered.tiff, with or without version numbers.
There is no optimization of the various files as they are saved, and no user control over their generation. It would be much nicer if the adjustments were done on the fly, or at least if a size-specific thumbnail were made for the on-screen representation.
For now, however, if you are concerned with the size of your files in Pages (and Keynote, since it also happens there), do the adjustments elsewhere. I suggest iPhoto, if that is where you have stored your items. You can always make your placed photo an image placeholder and replace it with alternative adjustments later.
(I tested this with a 4k file made for LiveJournal, saved as a JPEG. The resultant filtered and levelled TIFFs were 40k. A considerable increase.)

  • Error: cannot find bundle file or key for resource type

    Hello,
    Please help in finding out the cause & resolving the below errors.
    Logs from the default trace:
    Error Log Message:  cannot find bundle file or key for resource type http://sap.com/xmlns/ciphotocomp, use fallback
    Error Location: com.sapportals.wcm.service.resourceTypeRegistry.ResourceType.getDescription( Locale )
    Thanks & Regards
    Maha,

    << Do not post the same question across a number of forums >>

  • MD5 for flat files

    Hello experts,
I am developing a program whereby flat files have to be generated along with their respective MD5 checksums. Can someone please tell me how I should proceed to generate the MD5 checksum once a specific file has been generated?
    Thanks in advance,
    Shabir

You can use the function modules:
MD5_CALCULATE_HASH_FOR_CHAR for text files
MD5_CALCULATE_HASH_FOR_RAW for binary data

  • How To Fix: "The iTunes Library.itl file is locked, on a locked disk, or you do not have write permission for this file."

    I can't reopen my iTunes after I have to force quit when it does not respond. I get this message:
    "The iTunes Library.itl file is locked, on a locked disk, or you do not have write permission for this file."
I've read about the suggested solutions. I right-clicked on both the iTunes and Music folders and opened Get Info for each. On both, the lock at the bottom of the Get Info window was locked. All of my users, including admin, can read and write all files within those folders; they are even shared folders. I clicked on the lock to unlock it, typed in my admin password, and used the gear menu to apply the permissions to all enclosed files. After doing that, I keep getting the same message.
I tried another method I found online: move the iTunes Library file to the desktop and then open iTunes. In theory, iTunes is supposed to ask where the iTunes library is; once prompted, you quit iTunes, drag the iTunes Library file back from the desktop to its folder, and reopen iTunes, which apparently solves the problem. However, when I move the iTunes Library file to the desktop and try to reopen iTunes, the icon just keeps bouncing, and when I right-click, the application does not respond.
This problem is only resolved if I restart/shut down my MacBook Pro, which is a 15-inch Core 2 Duo with Intel. It will be 5 years old in December. I currently have Lion and the latest iTunes, 10.5. This problem started in late December.

    If you hold down the option key when starting iTunes, it will allow you to select a library or create a new one.
    You can create a new one, and then add all of your iTunes music back in by simply dragging the old iTunes music file onto iTunes.
    There are more detailed instructions at http://support.apple.com/kb/HT1451#
    It will show you how to re-build your iTunes database file.

  • I am trying to open iTunes on my Windows 7 PC and am getting the following message...the itunes library can not be found or created.  The default location for this file is in the "iTunes" folder in the "Music" folder.  Any ideas?

    I am trying to open iTunes on my Windows 7 PC and am getting the following message...the itunes library can not be found or created.  The default location for this file is in the "iTunes" folder in the "Music" folder.  Any ideas?

    Let's try this first. Hold down the Shift key while you try to launch iTunes. You should eventually see the following dialog:
    Click "Choose library". Browse to inside the following location (depending on what operating system you're running):
Operating System: default location of the iTunes library
Microsoft Windows XP: \Documents and Settings\[your username]\My Documents\My Music\iTunes\
Microsoft Windows Vista: \Users\[your username]\Music\iTunes\
Microsoft Windows 7: \Users\[your username]\My Music\iTunes\
    ... and open the iTunes library you find in there.

  • Can you set a second "Open with:" application for a file?

    Hi all,
    I was wondering if something like this exists for OS X Lion or greater: you can set a default application to open a type of file, say Safari for all .html files, but can you set a second default application for a file?
    I usually open files using command + o or command + down arrow from the Finder and I've often thought how good it would be to hit an alternate keyboard shortcut to open the selected file in a secondary default application.
    For example, if I'm making a web page and have the .html selected in the finder I can hit command + o (or command + down arrow) to open it in Safari and if the file isn't open in my editor and is selected in the Finder I can hit, say command + option + o (or command + option + down arrow) to open the .html file in my editor.
    Does anyone know if this can be achieved in OS X Lion or greater?
    Any advice is greatly appreciated.
    Thanks.

    Hi all,
    If anyone's interested in a partial solution, I've rigged up the following.
    I ended up using an Automator action coupled with Spark to capture a keyboard key combination. Spark: http://www.shadowlab.org/Software/spark
    I've used Spark before with good results, and although it's from 2008 it still works on Lion.
    I created an Automator action that does two things:
    gets the selected Finder items
    opens finder items in the text editor I'm using.
    I then saved the workflow as an application so that Spark could run it.
    I've set up a keyboard shortcut that Spark picks up and calls my Automator workflow. It all seems to work nicely.
    So, I've created a specific "open selected file with <text editor>" keyboard shortcut that should do for the time being.
