Trial CS4; also cache management

I am considering the trial of CS4, but I'm concerned because, from what I've read on forums, installing the retail version after a trial sometimes causes problems. Are there preventive measures I can take, or would it be best to try it on a cloned drive?
Also is the retail version updated and better than the trial? I noticed that last November Ramón found Bridge cache management still weak in CS4. Has that been fixed? Is it best to choose to save the cache files to the folders, or keep them in one place?
Thanks for advice. I'm not rolling in money (few of us are, of course) and really don't want to upgrade unless the new version is a LOT more stable. I don't need the new features even though they sound nice.
(Ramón, I did apply the update to Bridge CS3 as you advised. It opened OK, but with a message that I needed to update Version Cue. Since I never use that feature, I just disabled it in my preferences. Hope that won't cause problems.)

Seems like I'm always 3 days late checking my emails :-(
I used to save in PSD, but became concerned at one point because TIFFs seem more standard. I hadn't noticed any delay in opening them, but it's been a while.
I'm with you on the copies, only more so. Paranoid is the word. Three copies on three internal hard drives and two more on external drives. I need to update my DVDs but want to wait until I'm through with another marathon scanning task. So I just took one of my hard drives with the updated photos (well boxed), and a huge bag of DVDs with all the other files, to my daughter's house.
I went through a box of DVDs from the past three years and cataloged them. A lot of repetition, of course. One hates to discard them, though, because what if the newest ones turn up with errors? It would be a nightmare putting those old ones back on the drive because I went to a different naming system, using dates, and renamed most of my photos, not to mention all the work this year keywording. I haven't found a good duplicate finder for the Mac. File Buddy misses a lot of dups with different names, and the keywords would change the file contents anyhow.
Meantime I'm trying to decide whether to invest in a new scanner. Mine is still good (the Epson 4870 Photo), but every one that I've had seems to eventually collect dust, and it affects the scans.
So I have to decide if I want to keep using this one and do more work cleaning up the dust in Photoshop, or buy an inexpensive one (maybe the V500) to hopefully finally finish the backlog of years of photos. The specs on the V500 sound as good as my 4870, but I don't know about the optical stuff.
Wow, I'm sorry this got a bit wordy! I don't know anyone around here who works with Photoshop, so I can get carried away with electronic chatter.

Similar Messages

  • Bridge cache management still weak in CS4 (trial version)

    I've been running the trial version of Photoshop 11 (CS4) and Bridge 3.0.0.464 for a little over 24 hours.
    Though Bridge 3.x (CS4) runs much faster than 2.x (CS3) on my Mac, cache management is already a big disappointment (again). I find myself having to purge the cache for practically each and every folder, as at least one of the four info lines under each thumbnail fails to populate, usually the first line, "Date Created". On a few ".pct" images (Apple PICT), the Dimensions (in pixels) line refuses to populate even after repeated purging of the cache for the folder.
    What remains to be seen is whether ACR 5.2 is enough incentive to upgrade or not. Bridge 3.x falls a little short, and the flashy effects in Photoshop are not enough by themselves.

    Ann,
    >I have absolutely NO desire to inflict unnecessary pain on myself intentionally.
    That's most commendable. :) Trust me, pain and grief is exactly what you would have.
    Here's another way I can confirm the bug:
    First I manually delete the cache folders, then do a global purge on startup (by holding the option key and checking the box in the dialog box). The "Date Created" info line is blank (data missing).
    Then I Build and Export Cache through the Tools menu. That fixes the missing data in the info line.
    Presumably, I now have a healthy cache.
    If I again purge the cache globally, through preferences or at startup, the "Date Created" info line goes blank instantly under ALL my thumbnails on all volumes connected at the time.
    Then I again Build and Export Cache through the Tools menu. That fixes the missing data in the info line under all thumbnails.
    To me, that is unmistakably a bug.

  • Feature Request (again); Adobe Bridge Cache Manager

    In Adobe Bridge, the Cache is still a problem and a huge memory hog. I asked for a good Cache Manager over 3 years ago when it was needed in CS4.
    Now in CS6, I am disappointed that the cache system in Bridge has not changed. Why do you have a section in the Adobe Forums that asks for 'Feature Request' and then ignore them?
    http://forums.adobe.com/message/2652688#2652688
    "I would personally like a tool within Bridge that allows me to manage and control the Cache in a more personal and effective way. I would like a toggle that would keep '100% previews' in one folder for a selected period of time (1 day to 1 month) before the folder's cache is automatically purged. And another toggle that would give me the option to keep a selected folder's cache permanently intact (such as an Important Portfolio Folder or an Edited Images Folder).
    As a photographer, I take and edit a lot of photos. When I import a photoshoot with 1,500 photos (not uncommon at all) I enjoy having the Bridge Previews at 100%, but only for a limited time. Unfortunately, I forget to purge the individual cache of that folder, and the cache builds to a point that makes Adobe Bridge CS4 very slow. I then have to Purge the whole Bridge Cache in order to make Bridge run smoothly again."

    gumbogood wrote:
    A Cache Manager tool that lets me either schedule a cache cleaning on a folder, or keep it permanent, would be a huge help. And I don't think that it is unreasonable for me to ask this of Adobe.
    If you submit feedback where Yammer pointed you, and a couple of hundred (thousand?) users agree, you might get Adobe's attention.
    I agree with Yammer that Bridge is a low-priority project with Adobe.  I think their target audience is the casual user who adds a few hundred photos each year.  Your use pattern is more specialized, and you either have to adapt with scripts or move to another product, IMO.
    The size of the cache takes a lot of people by surprise.  It can grow humongous if you use HQ thumbs and save 100% previews.  One can dump the cache in preferences, but many are reluctant to do this as they think it will delete all their keywords and edits.  In addition, to do a search you have to re-index all your files, which can take a long time.
    If you have "export cache to folders" checked, dumping the cache in preferences only dumps the central cache.  To dump the folder cache you have to visit each folder and click on Tools/cache/purge cache for xxx folder.  Again, most people do not recognize this.  This technique purges both the central and folder cache for this folder only.
    If the 100% previews are the problem for you, it would seem like a simple task for the script people to write one that deletes them after xx days, as the 100% previews are held in a specific folder (see the sketch at the end of this post).
    Bridge is not a good digital asset manager.  You are probably in that arena dealing with several hundred thousand images.  Omke uses a DAM as it does some functions better than Bridge, but Bridge is still his main program.  Can't remember the name, but if you read these posts you might have seen it.
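    This isn't an Adobe feature, just a rough sketch of the kind of clean-up script suggested above. It assumes the 100% previews live in a dedicated subfolder of the Bridge cache (the default path below is a guess; pass your own cache folder as the first argument), and it deletes cached preview files older than a given number of days:

    import java.io.File;
    import java.util.concurrent.TimeUnit;

    public class PurgeOldPreviews {
        public static void main(String[] args) {
            // First argument: the folder holding the 100% previews (assumed location);
            // second argument: age threshold in days (defaults to 30).
            File previewDir = new File(args.length > 0 ? args[0]
                    : System.getProperty("user.home")
                      + "/Library/Caches/Adobe/Bridge CS4/Cache/full");
            int days = args.length > 1 ? Integer.parseInt(args[1]) : 30;
            long cutoff = System.currentTimeMillis() - TimeUnit.DAYS.toMillis(days);
            deleteOlderThan(previewDir, cutoff);
        }

        // Recursively delete files whose last-modified time is older than the cutoff.
        private static void deleteOlderThan(File dir, long cutoff) {
            File[] entries = dir.listFiles();
            if (entries == null) {
                return;
            }
            for (File f : entries) {
                if (f.isDirectory()) {
                    deleteOlderThan(f, cutoff);
                } else if (f.lastModified() < cutoff && !f.delete()) {
                    System.err.println("Could not delete " + f);
                }
            }
        }
    }

    Run something like this from cron or a scheduled task so the purge happens automatically, instead of relying on remembering to purge each folder by hand.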

  • How to use Cache Management Library (CML) for custom applications?

    Hello,
    We are planning the migration of multiple applications (J2EE, Portal, Web-Dynpro for Java) from 7.01 to 7.3 and we would like to replace some custom cache implementations with a central cache management provided by the SAP Web-AS Java.
    Several SAP standard services (e.g. UME, Configuration Manager, Scheduler) seem to use the "Cache Management Library" (CML):
    [http://help.sap.com/saphelp_nw73/helpdata/en/4a/f833eb306628d2e10000000a42189b/frameset.htm]
    Such caches can be monitored using SAP Management Console (AS Java Caches).
    Portal Runtime (cache_type=CML) and Web Page Composer can also be configured to use CML:
    [http://help.sap.com/saphelp_nw73/helpdata/en/49/d822a779cf0e80e10000000a42189b/frameset.htm]
    [http://help.sap.com/saphelp_nw70ehp2ru/helpdata/en/13/76db395a3140fcb17b8d24f5966766/frameset.htm]
    So our questions:
    How to use CML for custom applications?
    Is there any example or documentation available?
    Kind Regards,
    Dirk

    Thanks Vidyut! You've answered my question.
    I placed the jar file in the $CATALINA_HOME/shared/lib directory. But where should I place the taglib TLD file? And how should I reference it in web.xml?
    Currently, my web.xml is as follows and it doesn't work.
    <?xml version="1.0" encoding="ISO-8859-1"?>
    <!DOCTYPE web-app
        PUBLIC "-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN"
        "http://java.sun.com/dtd/web-app_2_3.dtd">
    <web-app>
      <taglib>
        <taglib-uri>http://abc.com</taglib-uri>
        <taglib-location>c:\Tomcat\shared\lib\mytags-taglib.tld</taglib-location>
      </taglib>
    </web-app>
    Thanks again!
    Joe

  • Cache Management

    Hi,
    I have a few questions regarding Cache Management.
    I have "Table A" whose Cacheable property is set to true and the Cache Never Expires button is checked. When I query this table through Answers, the result set I get contains up-to-the-minute records.
    I do not want up-to-the-minute records. I want a result set that is at least 6 hours old.
    How can I achieve this?
    Thanks
    rkingmdu

    Check the Cache section of the following file:
    $OBIHome/server/Config/NQSConfig.INI
    Make sure it says:
    ENABLE     =     YES;
    Also make sure that in your Answers request, the "Bypass Oracle BI Presentation Services Cache" checkbox is unchecked (Advanced Tab)
    Edited by: KevinC_VA on Oct 2, 2008 4:35 PM

  • ERM 5.3: Administration - Cache Management

    Hi experts,
    In what situations is cache management used? Also, what would be the recommended update period?
    Thanks & regards,
    Debbie

    Debbie,
    Basically, all the AC applications, including ERM, store data in a cache to reduce DB operations. As soon as ERM is up, it brings all the necessary data from the DB and keeps it in the cache so that it doesn't waste time fetching data from the DB over and over. This improves performance. Sometimes you make an update and see that the data has not refreshed; in that case you should reload the cache. When you reload the cache, ERM goes to the DB and brings in the latest data from the tables.
    I hope this helps.
    Regards,
    Alpesh

  • Cache manager gives me a wrong date

    This is my problem:
    I moved the RPD file and Catalog from the DEV instance to the PROD instance to have the same dashboards and reports in both instances, but when I access the Interactive Dashboard and check Cache Manager in the Administration Tool, it keeps a wrong date: instead of giving me the year 2010, it gives me the year 3910, and because of this I cannot run some reports in my dashboard.
    I want to note that I did not only move the RPD and Catalog; I also changed information in the NQSConfig file, like the repository entry pointing to my RPD, and I also changed information in the instanceconfig.xml file, like the CatalogPath, to point to the Catalog I moved.
    So, if you can help me with this issue, I really appreciate it.
    Regards,
    Arnulfo

    Hi Nico, thanks for your quick answer, but this error is not related to the browser because we have the following situation:
    1.- First we deleted all rows from the table afiandwh.s_nq_acct that have "3910" as the year
    2.- Then we ran this query: select count(*) from afiandwh.s_nq_acct where to_char(start_ts,'YYYY')='3910'
    And the result was 0 rows
    3.- So after that we went into the OBIEE Administration Tool, and in the physical layer we opened OBI Usage Tracking; on table S_NQ_ACCT we ran an "Update Row Count", and when it finished we ran the query from step 2 again and had 1 new row in the database with a "3910" date.
    So the BI Server is generating these wrong dates. Any idea why this could be happening?

  • Override Auto Cache Management

    Not really a problem as such, but something that is a little annoying.
    The Override Auto Cache Management option doesn't seem to work properly, unless what is happening SHOULD be happening?
    Basically, by default it is set to 75Mb. I'll set it to around 350-400Mb. However, at random intervals, Firefox will default the setting back to 75Mb, out of the blue. No warning or anything.
    Why does it keep defaulting back to 75Mb? I have also changed the option using About:Config with the 'browser.cache.disk.capacity' option, but it still defaults back to 75Mb.
    Is there anything I could be doing wrong? Another setting that needs to be changed? Any help will be appreciated. Thanks.

    The cache size usually ends up around 350 MB, unless you have very little free disk space. I suggest you start over:
    # Type ''about:config'' into the address bar and press Enter.
    # Press the big button to bypass the warning.
    # In the search box, type ''browser.cache.''
    # In the search results, right-click each entry with the status "user set" and choose '''Reset'''.
    # Type ''about:cache'' into the address bar and press Enter. Under the "Disk cache device" category, note the value of the '''Cache Directory'''. Open that folder in Windows Explorer.
    # Exit Firefox (click the Firefox button in the top left corner, then choose Exit).
    # Delete the cache folder you opened earlier.
    # Restart Firefox.
    At that point, your cache size should be set to something more sensible. Note that under [[Advanced settings for accessibility, browsing, system defaults, network, updates, and encryption|Options - Advanced - Network]], the number box shows the maximum cache size, while the text above it shows how much is currently stored.
    If you still need to override the automatic size, then normally all you need to do is check "Override automatic cache management", specify a custom size below, then click the OK button.
    Doing so would save the first preference as ''browser.cache.disk.smart_size.enabled'' with a value of '''false''', and the second as ''browser.cache.disk.capacity'' with a value in [http://en.wikipedia.org/wiki/Kibibyte kibibytes]. To avoid unintended effects, it's best to avoid modifying about:config preferences (at least in this case).
    If your preferences are lost after restarting Firefox, see the following article:
    * [[How to fix preferences that won't save]]

  • Problem with cache manager

    Hello all,
    I'm a French developer and I've just hit a problem with the cache manager in the WL Console when I want to flush the cache (as described in the WLP developers' documentation). I get this exception:
    javax.management.InstanceNotFoundException: Found 16 (expected 1) Cache=null with
    parent SACSO:Application=portalApp,ApplicationConfiguration=portalApp,Name=CacheManager,Type=CacheManager.
         at com.bea.p13n.management.MBeanAccess.getMBean(MBeanAccess.java:238)
         at com.bea.p13n.management.ApplicationHelper.getChildServiceConfigurationMBean(ApplicationHelper.java:606)
         at jsp_servlet.__cacheconsole._jspService(statusMsg.inc:26)
         at weblogic.servlet.jsp.JspBase.service(JspBase.java:27)
         at weblogic.servlet.internal.ServletStubImpl$ServletInvocationAction.run(ServletStubImpl.java:1058)
         at weblogic.servlet.internal.ServletStubImpl.invokeServlet(ServletStubImpl.java:401)
    I really do not understand....
    Could anyone help me? thanks

    For almost any administrative task, except for shutting down and starting up, it's best to use the system account. The sys user owns the database catalog, and when connecting as sys it has to be with the sysdba privilege.
    To fix an expired sys (or system) account, connect as sysdba and set the database user password(s); you may also have to unlock the account. On the database host, set the $PATH and $ORACLE_SID (unix) or %PATH% and %ORACLE_SID% (windows) environment variables, connect as sysdba, and alter the database user. If that doesn't fix the access, the account may still be locked ...
    $ sqlplus /nolog
    SQL> connect /as sysdba;
    SQL> alter user sys identified by <password>;
    SQL> alter user sys account unlock;
    For a connect / to work you may need to su - oracle (unix) to become the user that owns the oracle software, or have the dba group added to your unix login. Connecting from a remote host as sysdba requires some extra setup, not sure how XE is configured "out of the box" for remote sysdba but that (would be) a different issue-

  • Global-Cache-Manager for Multi-Environment Applications

    Hi,
    Within our server implementation we provide a "multi-project" environment. Each project is fully isolated from the rest of the server, e.g. in terms of file-system usage, backup and other resources. As one might expect, the way to go is using a single VM with multiple BDB environments.
    Obviously each JE-Environment uses its own cache. Within our environment, with a dynamic number of active projects, this causes a problem because the optimal cache configuration within a given memory frame depends on the JE-Environments in use, BUT there is no way to define a global JE cache for ALL JE-Environments.
    Our "plan of attack" is to implement a Global-Cache-Manager to dynamically configure the cache sizes of all active BDB environments depending on the given global cache size.
    As Federico proposed, the starting point for determining the optimal cache setting at load time will be a modification to the DbCacheSize utility so that the return value can be picked up easily, rather than printed to stdout. After that, EnvironmentMutableConfig.setCacheSize will be used to set the cache size. If there is enough cache RAM available we could even set a larger cache, but I do not know if that really makes sense.
    If Cache-Memory is getting tight loading another BDB environment means decreasing cache sizes for the already loaded environments. This is also done via EnvironmentMutableConfig.setCacheSize. Are there any timing conditions one should obey before assuming the memory is really available? To determine if there are any BDB environments that do not use their cache one could query each cache utilization using EnvironmentStats.getCacheDataBytes() and getCacheTotalBytes().
    Are there any comments to this plan? Is there perhaps a better solution or even an implementation?
    Do you think a global cache manager is something worth back-donating?
    Related Postings: Multiple envs in one process?
    Stefan Walgenbach
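    To make the idea concrete, here is a minimal sketch of such a Global-Cache-Manager, using only the standard JE calls mentioned above (EnvironmentStats, EnvironmentMutableConfig); the class name and the proportional-share policy are just an illustration, not an existing implementation:

    import java.util.List;

    import com.sleepycat.je.DatabaseException;
    import com.sleepycat.je.Environment;
    import com.sleepycat.je.EnvironmentMutableConfig;
    import com.sleepycat.je.EnvironmentStats;

    public class GlobalCacheManager {

        private final long globalCacheBytes;

        public GlobalCacheManager(long globalCacheBytes) {
            this.globalCacheBytes = globalCacheBytes;
        }

        // Re-distribute the global budget across all active environments,
        // proportional to how much cache data each one currently holds.
        public void rebalance(List<Environment> envs) throws DatabaseException {
            long totalUsed = 0;
            long[] used = new long[envs.size()];
            for (int i = 0; i < envs.size(); i++) {
                EnvironmentStats stats = envs.get(i).getStats(null);
                used[i] = Math.max(stats.getCacheDataBytes(), 1);
                totalUsed += used[i];
            }
            for (int i = 0; i < envs.size(); i++) {
                // Proportional share of the global budget, with a 1 MB floor.
                long share = Math.max(globalCacheBytes * used[i] / totalUsed,
                                      1024 * 1024);
                EnvironmentMutableConfig mc = envs.get(i).getMutableConfig();
                mc.setCacheSize(share);
                envs.get(i).setMutableConfig(mc);
            }
        }
    }

    Such a rebalance could be run whenever a project environment is opened or closed; whether the freed memory is really available immediately after setMutableConfig is exactly the timing question raised above.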

    Here is the updated DbCacheSize.java to allow calling it with an API.
    Charles Lamb
    /*
     * See the file LICENSE for redistribution information.
     * Copyright (c) 2005-2006
     *      Oracle Corporation.  All rights reserved.
     * $Id: DbCacheSize.java,v 1.8 2006/09/12 19:16:59 cwl Exp $
     */
    package com.sleepycat.je.util;
    import java.io.File;
    import java.io.PrintStream;
    import java.math.BigInteger;
    import java.text.NumberFormat;
    import java.util.Random;
    import com.sleepycat.je.Database;
    import com.sleepycat.je.DatabaseConfig;
    import com.sleepycat.je.DatabaseEntry;
    import com.sleepycat.je.DatabaseException;
    import com.sleepycat.je.Environment;
    import com.sleepycat.je.EnvironmentConfig;
    import com.sleepycat.je.EnvironmentStats;
    import com.sleepycat.je.OperationStatus;
    import com.sleepycat.je.dbi.MemoryBudget;
    import com.sleepycat.je.utilint.CmdUtil;
    /**
     * Estimating JE in-memory sizes as a function of key and data size is not
     * straightforward for two reasons. There is some fixed overhead for each btree
     * internal node, so tree fanout and degree of node sparseness impacts memory
     * consumption. In addition, JE compresses some of the internal nodes where
     * possible, but compression depends on on-disk layouts.
     * DbCacheSize is an aid for estimating cache sizes. To get an estimate of the
     * in-memory footprint for a given database, specify the number of records and
     * record characteristics and DbCacheSize will return a minimum and maximum
     * estimate of the cache size required for holding the database in memory.
     * If the user specifies the record's data size, the utility will return both
     * values for holding just the internal nodes of the btree, and for holding the
     * entire database in cache.
     * Note that "cache size" is a percentage more than "btree size", to cover
     * general environment resources like log buffers. Each invocation of the
     * utility returns an estimate for a single database in an environment.  For an
     * environment with multiple databases, run the utility for each database, add
     * up the btree sizes, and then add 10 percent.
     * Note that the utility does not yet cover duplicate records and the API is
     * subject to change release to release.
     * The only required parameters are the number of records and key size.
     * Data size, non-tree cache overhead, btree fanout, and other parameters
     * can also be provided. For example:
     * $ java DbCacheSize -records 554719 -key 16 -data 100
     * Inputs: records=554719 keySize=16 dataSize=100 nodeMax=128 density=80%
     * overhead=10%
     *    Cache Size      Btree Size  Description
     *    30,547,440      27,492,696  Minimum, internal nodes only
     *    41,460,720      37,314,648  Maximum, internal nodes only
     *   114,371,644     102,934,480  Minimum, internal nodes and leaf nodes
     *   125,284,924     112,756,432  Maximum, internal nodes and leaf nodes
     * Btree levels: 3
     * This says that the minimum cache size to hold only the internal nodes of the
     * btree in cache is approximately 30MB. The maximum size to hold the entire
     * database in cache, both internal nodes and datarecords, is 125Mb.
     */
    public class DbCacheSize {
        private static final NumberFormat INT_FORMAT =
            NumberFormat.getIntegerInstance();
        private static final String HEADER =
            "    Cache Size      Btree Size  Description\n" +
        //   12345678901234  12345678901234
        //                 12
        private static final int COLUMN_WIDTH = 14;
        private static final int COLUMN_SEPARATOR = 2;
        private long records;
        private int keySize;
        private int dataSize;
        private int nodeMax;
        private int density;
        private long overhead;
        private long minInBtreeSize;
        private long maxInBtreeSize;
        private long minInCacheSize;
        private long maxInCacheSize;
        private long maxInBtreeSizeWithData;
        private long maxInCacheSizeWithData;
        private long minInBtreeSizeWithData;
        private long minInCacheSizeWithData;
        private int nLevels = 1;
        public DbCacheSize (long records,
                   int keySize,
                   int dataSize,
                   int nodeMax,
                   int density,
                   long overhead) {
         this.records = records;
         this.keySize = keySize;
         this.dataSize = dataSize;
         this.nodeMax = nodeMax;
         this.density = density;
         this.overhead = overhead;
        public long getMinCacheSizeInternalNodesOnly() {
         return minInCacheSize;
        public long getMaxCacheSizeInternalNodesOnly() {
         return maxInCacheSize;
        public long getMinBtreeSizeInternalNodesOnly() {
         return minInBtreeSize;
        public long getMaxBtreeSizeInternalNodesOnly() {
         return maxInBtreeSize;
        public long getMinCacheSizeWithData() {
         return minInCacheSizeWithData;
        public long getMaxCacheSizeWithData() {
         return maxInCacheSizeWithData;
        public long getMinBtreeSizeWithData() {
         return minInBtreeSizeWithData;
        public long getMaxBtreeSizeWithData() {
         return maxInBtreeSizeWithData;
        public int getNLevels() {
         return nLevels;
        public static void main(String[] args) {
            try {
                long records = 0;
                int keySize = 0;
                int dataSize = 0;
                int nodeMax = 128;
                int density = 80;
                long overhead = 0;
                File measureDir = null;
                boolean measureRandom = false;
                for (int i = 0; i < args.length; i += 1) {
                    String name = args[i];
    String val = null;
    if (i < args.length - 1 && !args[i + 1].startsWith("-")) {
    i += 1;
    val = args[i];
    if (name.equals("-records")) {
    if (val == null) {
    usage("No value after -records");
    try {
    records = Long.parseLong(val);
    } catch (NumberFormatException e) {
    usage(val + " is not a number");
    if (records <= 0) {
    usage(val + " is not a positive integer");
    } else if (name.equals("-key")) {
    if (val == null) {
    usage("No value after -key");
    try {
    keySize = Integer.parseInt(val);
    } catch (NumberFormatException e) {
    usage(val + " is not a number");
    if (keySize <= 0) {
    usage(val + " is not a positive integer");
    } else if (name.equals("-data")) {
    if (val == null) {
    usage("No value after -data");
    try {
    dataSize = Integer.parseInt(val);
    } catch (NumberFormatException e) {
    usage(val + " is not a number");
    if (dataSize <= 0) {
    usage(val + " is not a positive integer");
    } else if (name.equals("-nodemax")) {
    if (val == null) {
    usage("No value after -nodemax");
    try {
    nodeMax = Integer.parseInt(val);
    } catch (NumberFormatException e) {
    usage(val + " is not a number");
    if (nodeMax <= 0) {
    usage(val + " is not a positive integer");
    } else if (name.equals("-density")) {
    if (val == null) {
    usage("No value after -density");
    try {
    density = Integer.parseInt(val);
    } catch (NumberFormatException e) {
    usage(val + " is not a number");
    if (density < 1 || density > 100) {
    usage(val + " is not betwen 1 and 100");
    } else if (name.equals("-overhead")) {
    if (val == null) {
    usage("No value after -overhead");
    try {
    overhead = Long.parseLong(val);
    } catch (NumberFormatException e) {
    usage(val + " is not a number");
    if (overhead < 0) {
    usage(val + " is not a non-negative integer");
    } else if (name.equals("-measure")) {
    if (val == null) {
    usage("No value after -measure");
    measureDir = new File(val);
    } else if (name.equals("-measurerandom")) {
    measureRandom = true;
    } else {
    usage("Unknown arg: " + name);
    if (records == 0) {
    usage("-records not specified");
    if (keySize == 0) {
    usage("-key not specified");
         DbCacheSize dbCacheSize = new DbCacheSize
              (records, keySize, dataSize, nodeMax, density, overhead);
         dbCacheSize.caclulateCacheSizes();
         dbCacheSize.printCacheSizes(System.out);
    if (measureDir != null) {
    measure(System.out, measureDir, records, keySize, dataSize,
    nodeMax, measureRandom);
    } catch (Throwable e) {
    e.printStackTrace(System.out);
    private static void usage(String msg) {
    if (msg != null) {
    System.out.println(msg);
    System.out.println
    ("usage:" +
    "\njava " + CmdUtil.getJavaCommand(DbCacheSize.class) +
    "\n -records <count>" +
    "\n # Total records (key/data pairs); required" +
    "\n -key <bytes> " +
    "\n # Average key bytes per record; required" +
    "\n [-data <bytes>]" +
    "\n # Average data bytes per record; if omitted no leaf" +
    "\n # node sizes are included in the output" +
    "\n [-nodemax <entries>]" +
    "\n # Number of entries per Btree node; default: 128" +
    "\n [-density <percentage>]" +
    "\n # Percentage of node entries occupied; default: 80" +
    "\n [-overhead <bytes>]" +
    "\n # Overhead of non-Btree objects (log buffers, locks," +
    "\n # etc); default: 10% of total cache size" +
    "\n [-measure <environmentHomeDirectory>]" +
    "\n # An empty directory used to write a database to find" +
    "\n # the actual cache size; default: do not measure" +
    "\n [-measurerandom" +
    "\n # With -measure insert randomly generated keys;" +
    "\n # default: insert sequential keys");
    System.exit(2);
    private void caclulateCacheSizes() {
    int nodeAvg = (nodeMax * density) / 100;
    long nBinEntries = (records * nodeMax) / nodeAvg;
    long nBinNodes = (nBinEntries + nodeMax - 1) / nodeMax;
    long nInNodes = 0;
         long lnSize = 0;
    for (long n = nBinNodes; n > 0; n /= nodeMax) {
    nInNodes += n;
    nLevels += 1;
    minInBtreeSize = nInNodes *
         calcInSize(nodeMax, nodeAvg, keySize, true);
    maxInBtreeSize = nInNodes *
         calcInSize(nodeMax, nodeAvg, keySize, false);
         minInCacheSize = calculateOverhead(minInBtreeSize, overhead);
         maxInCacheSize = calculateOverhead(maxInBtreeSize, overhead);
    if (dataSize > 0) {
    lnSize = records * calcLnSize(dataSize);
         maxInBtreeSizeWithData = maxInBtreeSize + lnSize;
         maxInCacheSizeWithData = calculateOverhead(maxInBtreeSizeWithData,
                                  overhead);
         minInBtreeSizeWithData = minInBtreeSize + lnSize;
         minInCacheSizeWithData = calculateOverhead(minInBtreeSizeWithData,
                                  overhead);
    private void printCacheSizes(PrintStream out) {
    out.println("Inputs:" +
    " records=" + records +
    " keySize=" + keySize +
    " dataSize=" + dataSize +
    " nodeMax=" + nodeMax +
    " density=" + density + '%' +
    " overhead=" + ((overhead > 0) ? overhead : 10) + "%");
    out.println();
    out.println(HEADER);
    out.println(line(minInBtreeSize, minInCacheSize,
                   "Minimum, internal nodes only"));
    out.println(line(maxInBtreeSize, maxInCacheSize,
                   "Maximum, internal nodes only"));
    if (dataSize > 0) {
    out.println(line(minInBtreeSizeWithData,
                   minInCacheSizeWithData,
                   "Minimum, internal nodes and leaf nodes"));
    out.println(line(maxInBtreeSizeWithData,
                   maxInCacheSizeWithData,
    "Maximum, internal nodes and leaf nodes"));
    } else {
    out.println("\nTo get leaf node sizing specify -data");
    out.println("\nBtree levels: " + nLevels);
    private int calcInSize(int nodeMax,
                   int nodeAvg,
                   int keySize,
                   boolean lsnCompression) {
    /* Fixed overhead */
    int size = MemoryBudget.IN_FIXED_OVERHEAD;
    /* Byte state array plus keys and nodes arrays */
    size += MemoryBudget.byteArraySize(nodeMax) +
    (nodeMax * (2 * MemoryBudget.ARRAY_ITEM_OVERHEAD));
    /* LSN array */
         if (lsnCompression) {
         size += MemoryBudget.byteArraySize(nodeMax * 2);
         } else {
         size += MemoryBudget.BYTE_ARRAY_OVERHEAD +
    (nodeMax * MemoryBudget.LONG_OVERHEAD);
    /* Keys for populated entries plus the identifier key */
    size += (nodeAvg + 1) * MemoryBudget.byteArraySize(keySize);
    return size;
    private int calcLnSize(int dataSize) {
    return MemoryBudget.LN_OVERHEAD +
    MemoryBudget.byteArraySize(dataSize);
    private long calculateOverhead(long btreeSize, long overhead) {
    long cacheSize;
    if (overhead == 0) {
    cacheSize = (100 * btreeSize) / 90;
    } else {
    cacheSize = btreeSize + overhead;
         return cacheSize;
    private String line(long btreeSize,
                   long cacheSize,
                   String comment) {
    StringBuffer buf = new StringBuffer(100);
    column(buf, INT_FORMAT.format(cacheSize));
    column(buf, INT_FORMAT.format(btreeSize));
    column(buf, comment);
    return buf.toString();
    private void column(StringBuffer buf, String str) {
    int start = buf.length();
    while (buf.length() - start + str.length() < COLUMN_WIDTH) {
    buf.append(' ');
    buf.append(str);
    for (int i = 0; i < COLUMN_SEPARATOR; i += 1) {
    buf.append(' ');
    private static void measure(PrintStream out,
    File dir,
    long records,
    int keySize,
    int dataSize,
    int nodeMax,
    boolean randomKeys)
    throws DatabaseException {
    String[] fileNames = dir.list();
    if (fileNames != null && fileNames.length > 0) {
    usage("Directory is not empty: " + dir);
    Environment env = openEnvironment(dir, true);
    Database db = openDatabase(env, nodeMax, true);
    try {
    out.println("\nMeasuring with cache size: " +
    INT_FORMAT.format(env.getConfig().getCacheSize()));
    insertRecords(out, env, db, records, keySize, dataSize, randomKeys);
    printStats(out, env,
    "Stats for internal and leaf nodes (after insert)");
    db.close();
    env.close();
    env = openEnvironment(dir, false);
    db = openDatabase(env, nodeMax, false);
    out.println("\nPreloading with cache size: " +
    INT_FORMAT.format(env.getConfig().getCacheSize()));
    preloadRecords(out, db);
    printStats(out, env,
    "Stats for internal nodes only (after preload)");
    } finally {
    try {
    db.close();
    env.close();
    } catch (Exception e) {
    out.println("During close: " + e);
    private static Environment openEnvironment(File dir, boolean allowCreate)
    throws DatabaseException {
    EnvironmentConfig envConfig = new EnvironmentConfig();
    envConfig.setAllowCreate(allowCreate);
    envConfig.setCachePercent(90);
    return new Environment(dir, envConfig);
    private static Database openDatabase(Environment env, int nodeMax,
    boolean allowCreate)
    throws DatabaseException {
    DatabaseConfig dbConfig = new DatabaseConfig();
    dbConfig.setAllowCreate(allowCreate);
    dbConfig.setNodeMaxEntries(nodeMax);
    return env.openDatabase(null, "foo", dbConfig);
    private static void insertRecords(PrintStream out,
    Environment env,
    Database db,
    long records,
    int keySize,
    int dataSize,
    boolean randomKeys)
    throws DatabaseException {
    DatabaseEntry key = new DatabaseEntry();
    DatabaseEntry data = new DatabaseEntry(new byte[dataSize]);
    BigInteger bigInt = BigInteger.ZERO;
    Random rnd = new Random(123);
    for (int i = 0; i < records; i += 1) {
    if (randomKeys) {
    byte[] a = new byte[keySize];
    rnd.nextBytes(a);
    key.setData(a);
    } else {
    bigInt = bigInt.add(BigInteger.ONE);
    byte[] a = bigInt.toByteArray();
    if (a.length < keySize) {
    byte[] a2 = new byte[keySize];
    System.arraycopy(a, 0, a2, a2.length - a.length, a.length);
    a = a2;
    } else if (a.length > keySize) {
    out.println("*** Key doesn't fit value=" + bigInt +
    " byte length=" + a.length);
    return;
    key.setData(a);
    OperationStatus status = db.putNoOverwrite(null, key, data);
    if (status == OperationStatus.KEYEXIST && randomKeys) {
    i -= 1;
    out.println("Random key already exists -- retrying");
    continue;
    if (status != OperationStatus.SUCCESS) {
    out.println("*** " + status);
    return;
    if (i % 10000 == 0) {
    EnvironmentStats stats = env.getStats(null);
    if (stats.getNNodesScanned() > 0) {
    out.println("*** Ran out of cache memory at record " + i +
    " -- try increasing the Java heap size ***");
    return;
    out.print(".");
    out.flush();
    private static void preloadRecords(final PrintStream out,
    final Database db)
    throws DatabaseException {
    Thread thread = new Thread() {
    public void run() {
    while (true) {
    try {
    out.print(".");
    out.flush();
    Thread.sleep(5 * 1000);
    } catch (InterruptedException e) {
    break;
    thread.start();
    db.preload(0);
    thread.interrupt();
    try {
    thread.join();
    } catch (InterruptedException e) {
    e.printStackTrace(out);
    private static void printStats(PrintStream out,
    Environment env,
    String msg)
    throws DatabaseException {
    out.println();
    out.println(msg + ':');
    EnvironmentStats stats = env.getStats(null);
    out.println("CacheSize=" +
    INT_FORMAT.format(stats.getCacheTotalBytes()) +
    " BtreeSize=" +
    INT_FORMAT.format(stats.getCacheDataBytes()));
    if (stats.getNNodesScanned() > 0) {
    out.println("*** All records did not fit in the cache ***");

  • Vista 64 bit and CS4 and color management

    This is a question about Vista 64 bit and CS4 and color management. I scan 4x5 film and sometimes end up with up to or even bigger than 1 GB files. Obviously that needs as much memory as possible. Windows XP is limited in this regard and I am in the market for a new speedy computer which won't force me to stay at a snail's pace. In this month's Shutterbug, David Brooks in his Q&A column says to avoid Vista for color management reasons, but offers no explanation or support for his opinion. He implies one should wait for Windows 7 for some unstated reason. With a calibrated monitor and printer and Photoshop controlling color files sent to the printer, why would Vista be any different or worse than XP? Is he on to something or just pontificating? Does anyone know any reliable info about Windows 7 that would make it worth waiting for?
    Thanks.

    Zeno Bokor wrote:
    Photoshop has direct access to max 3.2gb
    On Mac OS X, PS CS4 can use up to 8 GB of RAM, but only directly accesses up to 3.5 GB. (Figures quoted from kb404440.) In using PS CS4 on Mac OS, though, direct Memory Usage maxes out at 3 GB even. If you set usage to 100% (3 GB), then plug-ins (including Camera Raw and filters), as well as actions and scripts, can access RAM above that 3 GB to between about 512 MB and about 768 MB total (seems to vary depending on which filters et al that you are using), leaving the rest up to 4 GB for the Mac OS. If you have more than 4 GB, then the amount of RAM above 4 GB is used by PS as a scratch disk. This increases performance significantly for most things because writing to and reading from the hard drive is much slower than doing so with RAM.
    I haven't done the testing for actual RAM usage and such for PS CS4 on Vista 64, and Adobe's documentation is very much lacking in detail, but, based on the statement "If you use files large enough to need more than 4 GB of RAM, and you have enough RAM, all the processing you perform on your large images can be done in RAM, instead of swapping out to the hard disk." from kb404439, it seems that PS would be using RAM in very much the same way as I described above for Mac OS, except that the scratch disk usage in RAM wouldn't be limited to 8 GB (instead to how much you have installed). Has anyone done any performance/load testing to know for sure? I didn't see any such studies published, but I am curious if one has been done.
    I will agree that there is a definite performance advantage when using PS CS4 (64-bit) on Vista 64, which I've experienced, especially when working with very large compositions.
    My initial recommendation to the OP to use Mac was based upon reading those articles about bad color management. As I stated before, I have never experienced that problem, and clearly the views of all that have posted here so far indicate that the problem may not be a real issue. (Perhaps this David Brooks fellow and Steve Upton both like to mess with their computers and broke something in Windows?)

  • Bridge cache management & export to folders?

    What is the function and advantage of "auto exporting cache to folders" in Bridge?
    Round 2 testing, I changed the CS6 Bridge cache location to be the same location I was using for CS5, on a secondary drive.
    Small mistake, because CS6 proceeded to delete the entire existing cache and start building a new one. However, Bridge CS5 seems to coexist OK sharing the same cache with CS6. They are both re-building the cache as I visit individual folders. (30,000 files in 800 folders gonna take a while).
    Long ago under CS3 I adopted the strategy of setting Bridge to "Auto export cache to folders." That worked well, because if you purged the master cache, Bridge would very quickly rebuild it from the folder cache files each time you re-visited a folder. The thumbs would come up very quickly if a folder cache existed, much slower if no folder cache existed.
    Under CS5, and now CS6, that is no longer true. If you visit a folder after the cache has been purged, a slow process to generate thumbnails occurs. It appears that the individual folder cache is not being used for anything. The time to re-build the cache is about the same, regardless of whether folder cache files exist or not.
    Purging or deleting the Bridge cache was often necessary under CS3 because the cache often became corrupted. CS5 was more stable, but still not immune from cache corruption. Maybe CS6 will be even more stable, but I'm betting cache purges will still be sometimes necessary. Is there any way to make the rebuild faster? Or more generally, what's the best strategy for Bridge cache management?
    (And no, I don't have "keep 100% previews in cache" selected.)

    What is the function and advantage of the Bridge Forum?
    http://forums.adobe.com/community/bridge/general

  • One day I went to open Photoshop Elements 13 and it said I had to pay for it all over again or do a 30-day free trial. I also forgot my serial number, so is there a way I can find out what my serial number is by signing in to my Adobe ID?

    One day I went to open Photoshop Elements 13 and it said I had to pay for it all over again or do a 30-day free trial. I also forgot my serial number, so is there a way I can find out what my serial number is by signing in to my Adobe ID?

    Make an appointment at the genius bar.
    If you went without an appointment, then it would make sense that you could be turned away.

  • Since I joined creative cloud, I have not been able to sync. Why the difference? Worked good during initial trial. Also I had installed the Mosaic app before Lightroom mobile came out and I cannot seem to get rid of it. How can I get mosaic out? I have ca

    Since I joined Creative Cloud, I have not been able to sync. Why the difference? It worked well during the initial trial.
    Also, I had installed the Mosaic app for iPad before Lightroom mobile came out and I cannot seem to get rid of it. How can I get Mosaic out? I have cancelled, I have deleted, I have looked through the files for visible remnants.

    The sync count is 60 and static. The total number synced so far is 500+. None have been synced from the iPad except for corrections made there on those already synced.
    I have deleted the Mosaic app from all machines, checked the library folder, and contacted Mosaic tech support, and they have discontinued my subscription. I still get Mosaic messages when starting Lightroom and sometimes at closing.
    I have not tried to sync fewer yet, but some catalogs that I have tried are smaller numbers. Initially, it did well on good connections. Nothing works on my home connections over satellite.

  • In preferences what does "Override automatic cache management" mean and do?

    I was wondering what "Override automatic cache management" means and does.

    Firefox normally manages the size of its cache, and can cache more or less information depending on various factors.
    Checking this option allows you to specify the maximum size of the Firefox cache, in MB.
    There isn't usually any need to enable this option.
