Cache Problem in iPlanet (Solaris)

We are using iPlanet on Solaris and we are having a cache problem. Every time we make any change to our JSP files we have to stop the server, clear the cache from the ClassCache folder, and then restart the server. Please let me know how I can avoid restarting the server. Thanks

Sanjay,
You do not say which version you are using, but I believe some people had problems with 4.1 that went away when they upgraded to 4.1.3.
The JSP source file should be compared with the version in the cache at run time, so a newer version should be picked up automatically. However, a JSP that you only 'include' in an outer JSP will not be updated, because only the including JSP is checked for update. If this is the problem you have, you should be able to force a run-time check for the inner JSP by using "jsp:include page=<name>.jsp" in the including (outer) JSP.
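For example, if the inner fragment lived in a file called inner.jsp (the file name here is only illustrative), the outer page would pull it in with something like:
<jsp:include page="inner.jsp" flush="true" />
Unlike the static <%@ include file="inner.jsp" %> directive, which is resolved when the outer page is compiled, the jsp:include action is evaluated on each request, so the inner page can be rechecked for changes.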
I hope this helps.
Alan Beecraft, Sun Developer Technical Support

Similar Messages

  • Solaris 10 NFS caching problems with custom NFS server

    I'm facing a very strange problem with a pure Java standalone application providing NFS server v2 service. The same application, targeted at JVM 1.4.2, runs in different environments (see below) without any problem.
    On Solaris 10 we have tried all kinds of mount parameters and system service up/down configurations, but cannot solve the problem.
    We're in big trouble because the app is a mandatory component for a product due to go into production shortly.
    Details follow.
    System description
    Sunsparc U4 with SunOS 5.10, patch level: Generic_118833-33, 64bit
    List of active NFS services
    disabled   svc:/network/nfs/cbd:default
    disabled   svc:/network/nfs/mapid:default
    disabled   svc:/network/nfs/client:default
    disabled   svc:/network/nfs/server:default
    disabled   svc:/network/nfs/rquota:default
    online       svc:/network/nfs/status:default
    online       svc:/network/nfs/nlockmgr:default
    NFS mount params (from /etc/vfstab)
    localhost:/VDD_Server  - /users/vdd/mnt nfs - vers=2,proto=tcp,timeo=600,wsize=8192,rsize=8192,port=1579,noxattr,soft,intr,noac
    Anomaly description
    The server side of NFS is provided by a Java standalone application enabled only for NFS v2 and tested on different environments: MS Windows 2000, 2003, XP, Linux RedHat 10 32-bit, Linux Debian 2.6.x 64-bit, SunOS 5.9. The Java application is distributed with a test program (also a Java standalone application) to validate the main installation and configuration.
    The test program simply reads a file from the NFS file-system exported by our main application (called VDD) and writes the same file back with a different name on the VDD-exported file-system. At the end of the test, the written file has different contents from the one read. In-depth investigation shows the following behaviour:
    _ The read phase behaves correctly on both server (VDD) and client (test app) sides, transporting the file with correct contents.
    _ The write phase produces a file on the VDD file-system that is zero-filled for the first 90% but ends correctly with the same sequence of bytes as the original file.
    _ Detailed write phase behaviour:
    1_ Test app writes the first 512 bytes => VDD receives an NFS WRITE with offset 0, count 512 and correct contents;
    2_ Test app writes the next 512 bytes => VDD receives an NFS WRITE with offset 0, count 1024 and WRONG contents: the first 512 bytes (the previous write) are zero-filled and the last 512 bytes (the current write) are correct.
    3_ Test app writes the next 512 bytes => VDD receives an NFS WRITE with offset 0, count 1536 and WRONG contents: the first 1024 bytes (the previous writes) are zero-filled and the last 512 bytes (the current write) are correct.
    4_ and so on...
    Further tests
    We tested our VDD application on the same Solaris 10 system but with our test application on another (Linux) machine, contacting VDD via the Linux NFS client, and we don't see the wrong behaviour: our test program performed OK and the written file has the same contents as the one read.
    Has anyone faced a similar problem?
    We are Sun ISV partner: do you think we have enough info to open a bug request to SDN?
    Any suggestions?
    Many thanks in advance,
    Maurizio.

    I finally got it working. I think my problem was that I was copying and pasting the /etc/pam.conf from Gary's guide into the pam.conf file.
    There were unseen carriage returns mucking things up. So following a combination of the two docs worked. Starting with:
    http://web.singnet.com.sg/~garyttt/Configuring%20Solaris%20Native%20LDAP%20Client%20for%20Fedora%20Directory%20Server.htm
    Then following the steps at "Authentication Option #1: LDAP PAM configuration " from this doc:
    http://docs.lucidinteractive.ca/index.php/Solaris_LDAP_client_with_OpenLDAP_server
    for the pam.conf, got things working.
    Note: ensure that your user has the shadowAccount value set in its objectClass attribute.
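    For illustration only (the DN and attribute values below are placeholders, not taken from either guide), a working user entry would then carry shadowAccount among its object classes, along these lines:
    dn: uid=jdoe,ou=People,dc=example,dc=com
    objectClass: account
    objectClass: posixAccount
    objectClass: shadowAccount
    uid: jdoe
    cn: John Doe
    uidNumber: 1001
    gidNumber: 100
    homeDirectory: /home/jdoe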

  • I got a big problem that iPlanet 6.0 SP2

    Hi,
    I have a big problem: iPlanet 6.0 SP2 is stuck in a restart cycle.
    The web server's error log shows the following:
    [16/Jul/2002:00:30:05] info ( 8545): successful server startup
    [16/Jul/2002:00:30:05] info ( 8545): iPlanet-WebServer-Enterprise/6.0SP2 B11/13/2001 00:49
    [16/Jul/2002:00:30:09] info ( 8599): Installing a new configuration
    [16/Jul/2002:00:30:09] info ( 8599): [LS ls1] http://xxx.xxx.xxx.xxx, port 80 ready to accept requests
    [16/Jul/2002:00:30:09] failure ( 8599): Error accepting connection -5928, oserr=130 (Connect aborted)
    [16/Jul/2002:00:30:09] info ( 8599): A new configuration was successfully installed
    [16/Jul/2002:00:38:14] failure ( 8545): Child process admin thread is shutting down
    [16/Jul/2002:00:38:16] info ( 9053): Installing a new configuration
    [16/Jul/2002:00:38:16] info ( 9053): [LS ls1] http://xxx.xxx.xxx.xxx, port 80 ready to accept requests
    [16/Jul/2002:00:38:16] info ( 9053): A new configuration was successfully installed
    [16/Jul/2002:00:38:16] failure ( 9053): Error accepting connection -5928, oserr=130 (Connect aborted)
    [16/Jul/2002:00:46:04] failure ( 8545): Child process admin thread is shutting down
    [16/Jul/2002:00:46:05] info ( 9496): Installing a new configuration
    [16/Jul/2002:00:46:05] info ( 9496): [LS ls1] http://xxx.xxx.xxx.xxx, port 80 ready to accept requests
    [16/Jul/2002:00:46:05] failure ( 9496): Error accepting connection -5928, oserr=130 (Connect aborted)
    [16/Jul/2002:00:46:05] info ( 9496): A new configuration was successfully installed
    [16/Jul/2002:02:43:31] failure (15596): Child process admin thread is shutting down
    [16/Jul/2002:02:43:32] info (16212): Installing a new configuration
    [16/Jul/2002:02:43:32] info (16212): [LS ls1] http://xxx.xxx.xxx.xxx, port 80 ready to accept requests
    [16/Jul/2002:02:43:32] info (16212): A new configuration was successfully installed
    [16/Jul/2002:02:43:32] failure (16212): Error accepting connection -5928, oserr=130 (Connect aborted)
    You can see the child process being shut down and restarted.
    I don't know what the problem is.
    The other problem is the following:
    [16/Jul/2002:16:10:25] catastrophe (23966): Server crash detected (signal SIGBUS)
    [16/Jul/2002:16:10:25] info (23966): Crash occurred in NSAPI SAF NSServletService
    [16/Jul/2002:16:10:25] info (23966): Crash occurred in function direct_identityHashCode from module /usr/java1.2/jre/lib/sparc/libjvm.so
    [16/Jul/2002:16:10:25] failure (23454): Child process admin thread is shutting down
    [16/Jul/2002:16:10:26] info (24006): Installing a new configuration
    [16/Jul/2002:16:10:26] info (24006): [LS ls1] http://xxx.xxx.xxx.xxx, port 80 ready to accept requests
    [16/Jul/2002:16:10:26] failure (24006): Error accepting connection -5928, oserr=130 (Connect aborted)
    FYI, here is our environment:
    # /usr/java1.2/bin/java -version
    java version "1.2.2" Solaris VM (build Solaris_JDK_1.2.2_07, native threads, sunwjit)
    # uname -a
    SunOS wagency2 5.6 Generic_105181-31 sun4u sparc SUNW,Ultra-4
    Sun Enterprise 450 (2 x UltraSPARC-II 400 MHz, 2 GB memory)
    If you have any solution and experience, please let me know.
    Thanks,
    Barney Kim
    [email protected]

    Barney,
    from the error log the problem seems to be related to the use of an external JDK. Try reverting to the original JRE and see if that helps, i.e., whether the server starts up fine. Use the administration GUI and select Global Settings > Configure JRE/JDK Paths. On the screen that appears, you can configure the path to the JRE. The default would be <server-root>/bin/https/jre.
    If that works fine, you might download a more recent version of the JDK. iWS 6 SP2 supports JDK 1.3.1.
    Hope this helps.
    Best regards
    Rodrigo

  • Caching problem w/ primary-foreign key mapping

    I have seen this a couple of times now. It is not consistent enough to
    create a simple reproducible test case, so I will have to describe it to you
    with an example and hope you can track it down. It only occurs when caching
    is enabled.
    Here are the classes:
    class C1 { int id; C2 c2; }
    class C2 { int id; C1 c1; }
    Each class uses application identity using static nested Id classes: C1.Id
    and C2.Id. What is unusual is that the same value is used for both
    instances:
    int id = nextId();
    C1 c1 = new C1(id);
    C2 c2 = new C2(id);
    c1.c2 = c2;
    c2.c1 = c1;
    This all works fine using optimistic transactions with caching disabled.
    Although the integer values are the same, the oids are unique because each
    class defines its own unique oid class.
    Here is the schema and mapping (this works with caching disabled but fails
    with caching enabled):
    table t1: column id integer, column revision integer, primary key (id)
    table t2: column id integer, column revision integer, primary key (id)
    <jdo>
    <package name="test">
    <class name="C1" objectid-class="C1$Id">
    <extension vendor-name="kodo" key="jdbc-class-map" value="base">
    <extension vendor-name="kodo" key="table" value="t1"/>
    </extension>
    <extension vendor-name="kodo" key="jdbc-version-ind"
    value="version-number">
    <extension vendor-name="kodo" key="column" value="revision"/>
    </extension>
    <field name="id" primary-key="true">
    <extension vendor-name="kodo" key="jdbc-field-map" value="value">
    <extension vendor-name="kodo" key="column" value="id"/>
    </extension>
    </field>
    <field name="c2">
    <extension vendor-name="kodo" key="jdbc-field-map" value="one-one">
    <extension vendor-name="kodo" key="column.id" value="id"/>
    </extension>
    </field>
    </class>
    <class name="C2" objectid-class="C2$Id">
    <extension vendor-name="kodo" key="jdbc-class-map" value="base">
    <extension vendor-name="kodo" key="table" value="t2"/>
    </extension>
    <extension vendor-name="kodo" key="jdbc-version-ind"
    value="version-number">
    <extension vendor-name="kodo" key="column" value="revision"/>
    </extension>
    <field name="id" primary-key="true">
    <extension vendor-name="kodo" key="jdbc-field-map" value="value">
    <extension vendor-name="kodo" key="column" value="id"/>
    </extension>
    </field>
    <field name="c1">
    <extension vendor-name="kodo" key="dependent" value="true"/>
    <extension vendor-name="kodo" key="inverse-owner" value="c2"/>
    <extension vendor-name="kodo" key="jdbc-field-map" value="one-one">
    <extension vendor-name="kodo" key="table" value="t1"/>
    <extension vendor-name="kodo" key="ref-column.id" value="id"/>
    <extension vendor-name="kodo" key="column.id" value="id"/>
    </extension>
    </field>
    </class>
    </package>
    </jdo>
    Because the ids are known to be the same, the primary key values are also
    used as foreign key values. Accessing C2.c1 is always non-null when caching
    is disabled. With caching enabled, C2.c1 is usually non-null but sometimes
    null. When it is null we get warnings about dangling references to deleted
    instances with id values of 0 and other similar warnings.
    The workaround is to add a redundant column with the same value. For some
    reason this works around the caching problem (this is unnecessary with
    caching disabled):
    table t1: column id integer, column id2 integer, column revision integer,
    primary key (id), unique index (id2)
    table t2: column id integer, column revision integer, primary key (id)
    <jdo>
    <package name="test">
    <class name="C1" objectid-class="C1$Id">
    <extension vendor-name="kodo" key="jdbc-class-map" value="base">
    <extension vendor-name="kodo" key="table" value="t1"/>
    </extension>
    <extension vendor-name="kodo" key="jdbc-version-ind"
    value="version-number">
    <extension vendor-name="kodo" key="column" value="revision"/>
    </extension>
    <field name="id" primary-key="true">
    <extension vendor-name="kodo" key="jdbc-field-map" value="value">
    <extension vendor-name="kodo" key="column" value="id"/>
    </extension>
    </field>
    <field name="c2">
    <extension vendor-name="kodo" key="jdbc-field-map" value="one-one">
    <extension vendor-name="kodo" key="column.id" value="id2"/>
    </extension>
    </field>
    </class>
    <class name="C2" objectid-class="C2$Id">
    <extension vendor-name="kodo" key="jdbc-class-map" value="base">
    <extension vendor-name="kodo" key="table" value="t2"/>
    </extension>
    <extension vendor-name="kodo" key="jdbc-version-ind"
    value="version-number">
    <extension vendor-name="kodo" key="column" value="revision"/>
    </extension>
    <field name="id" primary-key="true">
    <extension vendor-name="kodo" key="jdbc-field-map" value="value">
    <extension vendor-name="kodo" key="column" value="id"/>
    </extension>
    </field>
    <field name="c1">
    <extension vendor-name="kodo" key="dependent" value="true"/>
    <extension vendor-name="kodo" key="inverse-owner" value="c2"/>
    <extension vendor-name="kodo" key="jdbc-field-map" value="one-one">
    <extension vendor-name="kodo" key="table" value="t1"/>
    <extension vendor-name="kodo" key="ref-column.id" value="id2"/>
    <extension vendor-name="kodo" key="column.id" value="id"/>
    </extension>
    </field>
    </class>
    </package>
    </jdo>
    Needless to say, the extra column adds a lot of overhead, including the
    addition of a second unique index, for no value other than working around
    the caching defect.

    Tom-
    The first thing that I think of whenever I see a problem like this is
    that the equals() and hashCode() methods of your application identity
    classes are not correct. Can you check them to ensure that they are
    written in accordance with the guidelines at:
    http://docs.solarmetric.com/manual.html#jdo_overview_pc_identity_application
    If that doesn't help address the problem, can you post the code for your
    application identity classes so we can double-check, and we will try to
    determine what might be causing the problem.
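    For reference, here is a rough sketch of the shape such an identity class usually takes (the field name follows the 'id' field in your example; it is an illustration, not your actual code):
    public static class Id implements java.io.Serializable {
        public int id;

        public Id() {
        }

        public Id(String str) {
            id = Integer.parseInt(str);
        }

        // equality must be based solely on the primary key field(s)
        public boolean equals(Object other) {
            if (other == this)
                return true;
            if (!(other instanceof Id))
                return false;
            return id == ((Id) other).id;
        }

        // hashCode must be consistent with equals
        public int hashCode() {
            return id;
        }

        public String toString() {
            return String.valueOf(id);
        }
    }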
    Marc Prud'hommeaux [email protected]
    SolarMetric Inc. http://www.solarmetric.com

  • I am facing a caching problem in the Web-Application that I've developed us

    Dear Friends,
    I am facing a caching problem in the Web-Application that I've developed using Java/JSP/Servlet.
    Problem Description: In this application, when a hyperlink is clicked the request is supposed to go to the handling servlet, which fetches the data (using the DAO layer) and stores it in the session. After this the servlet forwards the request to the view JSP to present the data. The JSP accesses the object stored in the session and displays the data.
    However, when the link is clicked a second time the request is not received by our servlet and the cached (previous) page is shown. If we refresh the page, the request reaches the servlet and we get the correct data. But, as you will agree, we don't want users to have to refresh the page again and again to get updated data.
    We've also included these lines in the JSPs, but it does no good:
    <%
    response.setHeader("Expires", "0");
    response.setHeader("Cache-Control" ,"no-cache, must-revalidate");
    response.setHeader("Pragma", "no-cache");
    response.setHeader("Cache-Control","no-store");
    %>
    Request you to please give a solution for the same.
    Thanks & Regards,
    Mohan

    "However, when the link is clicked a second time the request is not received by our servlet." Impossible, mate. Can you show your code? Are you sure there are no JavaScript errors?
    Why don't you just remove your object from the session after displaying the data from it, and see if your page "automatically" hits the servlet when the link is clicked.
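    For example, something like this at the bottom of the view JSP would clear it out (the attribute name "resultData" is just a placeholder for whatever key your servlet actually uses):
    <%
        session.removeAttribute("resultData");
    %>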
    cheers..
    S

  • Problem facing installing Solaris 10 on Dell inspiron 6400.

    Hi,
    I am facing a problem installing Solaris 10 on my Dell Inspiron 6400.
    It is giving a "cannot find driver for screen device /isa/motherboard@1,61" error.
    Please help...
    Thanks In Advance
    Saurabh

    Hi all,
    I also have the same problem. I tried the 11/06 release, but after GRUB it hangs. Here below is the HCL.
    Driver support | Type | Vendor | Device | Driver
    No Solaris Driver | Multimedia | Intel Corporation | 82801G (ICH7 Family) High Definition Audio Controller | -
    No Solaris Driver | Network | Intel Corporation | PRO/Wireless 3945ABG Network Connection | -
    Solaris 10 11/06 | Firewire | Ricoh Co Ltd | Unknown device | hci1394
    Note-1 | Network | Broadcom Corporation | BCM4401-B0 100Base-TX | bfe
    Solaris 10 11/06 | Video | Intel Corporation | Mobile 945GM/GMS/940GML Express Integrated Graphics Controller | vgatext
    Solaris 10 11/06 | Storage | Intel Corporation | 82801GBM/GHM (ICH7 Family) Serial ATA Storage Controller IDE | pci-ide
    Solaris 10 11/06 | USB | Intel Corporation | 82801G (ICH7 Family) USB UHCI #1 | uhci
    Solaris 10 11/06 | USB | Intel Corporation | 82801G (ICH7 Family) USB UHCI #2 | uhci
    Solaris 10 11/06 | USB | Intel Corporation | 82801G (ICH7 Family) USB UHCI #3 | uhci
    Solaris 10 11/06 | USB | Intel Corporation | 82801G (ICH7 Family) USB UHCI #4 | uhci
    Solaris 10 11/06 | USB | Intel Corporation | 82801G (ICH7 Family) USB2 EHCI Controller | ehci

  • Problem while installing solaris 10 on HP proliant DL 380 G5.

    Hi,
    I am facing a problem while installing Solaris 10 05/09 on an HP ProLiant DL380 G5. The array controller is an E200.
    I have installed the RAID controller driver CPQary3-2.1.0-solaris10-i386; the installation goes ahead but does not detect the whole hard disk space. The disks are 146 GB SAS and there are two of them, but it shows only 978 MB and only one hard disk. So please help me out. My mail ID is [email protected]

    I have a similar issue trying to load Solaris 10u4 on an IBM x236.
    I had to put in a recognized SCSI card, attach an external array to the card, and install Solaris 10u4 on the array. Then I had to install the correct driver from OpenSolaris (an Adaptec one for IBM ServeRAID), and do a live upgrade onto the now-visible internal disks. Then I removed the array and cleaned up the traces from the live upgrade.
    You may have to do something similar.
    And last month the boot drive got corrupted when power was interrupted. "Oops. Wrong plugs..." This time the external array will stay connected.

  • Bridge update does not fix caching problems.

    Dear Adobe,
    The 5.0.1.23 update for Bridge CS6 does NOT fix the problem of constantly re-caching layered TIF files.
    I originally reported the problem here on May 16, 2012.
    http://forums.adobe.com/thread/1007560
    At that time I also submitted a bug report via photoshop.com, and received an e-mail response from Adobe support confirming the problem had been replicated in their lab and promising a fix in the next update.
    I've since tracked several other reports of this bug and related cache problems.
    I assume that, at best, we will have to wait another 6 months or more for the next update. How can I ensure this bug will be addressed?

    redcrown on guard wrote:
    The 5.0.1.23 update for Bridge CS6 does NOT fix the problem of constantly re-caching layered TIF files.
    At that time I also submitted a bug report via photoshop.com, and received an e-mail response from Adobe support confirming the problem had been replicated in their lab and promising a fix in the next update.
    Thank you for this bit of information. Maybe it means I can stop the deactivate/uninstall/reinstall/reactivate cycle to try yet another solution. And hopefully this will stop the re-caching problem with files other than TIFs.
    regards
    *S*

  • Caching problem in Chrome and Firefox

    Hey folks,
    I ran into a weird problem.  I created a video player based on the Strobe Media Playback.  I added a couple of plugins.  This player is used to watch progressive download FLV files.
    I ran into the following issue.  I watch part of a video.  I select another one.  Then I select the previous one again.  Only the cached portion of the first video is shown.  The entire video will not be downloaded again from the server, but only the portion already cached on the client.
    This problem is really bad in Chrome.  When I restart FF, I can watch the entire video.  Not in Chrome.  The only way to solve this in Chrome is to clear the cache.
    Any ideas?
    The website is live, so you can test this yourself.  http://www.submergeproductions.com/videos.aspx
    All help very welcome, because this is a major issue.
    Follow up.  I made a quick fix.  I added a random number to the FLV URL to force a redownload from the server, but this is quite a dirty fix. I would rather have a restart/continuation of the download if the file was only partially downloaded.
    Thanks,
    Peter

    Hi Silviu,
    the reason it works now is that I uploaded a modified version.  I append "?<random number>" to the URL.  That prevents caching problems because the browser hasn't got that version cached.  But I will still report it as a bug.
    Peter

  • Caching problem of servlet

    Hi guys
    We are facing a caching problem within our project. The project generates HTML code that picks up some rich-media ad details at random and displays them in the HTML file where the generated code is expected to be pasted. We developed two servlets: one extracts the ads from the database randomly and then, depending on the ad type, calls the other servlet as the src of an iframe, which in turn emits all the code for displaying the rich-media ads. The script which we are generating for the user to paste onto their pages is:
    <script LANGUAGE="JAVASCRIPT" src="http://192.168.1.6:8080/advert_java/servlet/GetAdServlet?region=1&zone=1&type=nossi&cachevar=yes">
    </script>
    The first servlet (GetAdServlet) returns the JavaScript statements and thus is called using this generated code. The contents of the iframe are supplied by the second servlet, i.e. RichMediaServlet. This servlet is called like:
    iframeURL = fullHttpDir+"/servlet/RichMediaServlet?";
    iframeURL += "bannerCode="+ RNBanner (BannerCode to be called);
    out.println("document.write(\"<iframe  src='"  + iframeURL +  "' height=" + hheight +" width="+ wwidth + " SCROLLING=no FRAMEBORDER=0 MARGINWIDTH=2 MARGINHEIGHT=2 onfocus='window.focus(); return iframeFocus()'>\");");
    out.println("document.write(\"</iframe>\");");This richmediaServlet returns HTML into <iframe>. when richmediaservlet is called, a parameter 'bannerCode' is passed. then richmediaServlet fatches the banner from the database and displays the banner into the <iframe>.
    Now the problem comes when we run the HTML file containing the script tag mentioned above and refresh the page: ideally it should pick the ads randomly and pass them on to RichMediaServlet.
    I also tried debugging both servlets. I called GetAdServlet from the JavaScript mentioned above and put debugging info in both servlets. For every refresh we do on the HTML side we get a different random bannerCode in GetAdServlet, but when RichMediaServlet prints the bannerCode received in its query string it shows an older value that was displayed some time back, and it keeps doing this for quite a long time, making it look like a caching problem with RichMediaServlet.
    When we instead tried putting the same HTML <script> code into another servlet's doGet, everything seems to work fine.
    I have also used the following code to prevent caching in both servlets:
    long currentTime = System.currentTimeMillis();
    response.setHeader("Cache-Control", "no-cache, must-revalidate");
    response.setHeader("Pragma", "no-cache");
    response.setDateHeader("Last-modified", currentTime);
    response.setHeader("Expires", "Sat, 6 May 1995 12:00:00 GMT");     and following in the iframe's head tag before the iframe tag in the getAdServlet.
    out.println("document.write('<head>');");
    out.println("document.write('<meta http-equiv=\"Cache-Control\" content=\"no-cache,must-revalidate\">');");
    out.println("document.write('<meta http-equiv=\"Pragma\" content=\"no-cache\">');");
    out.println("document.write('<meta http-equiv=\"Last-modified\" content=\""+ currentTime + "\">');");
    out.println("document.write('<meta http-equiv=\"expires\" content=\"Sat, 6 May 1995 12:00:00 GMT\">');");
    out.println("document.write('</head>');");I request you all geeks to try and help me to your best. The project is at its final stages and in high urgency now.

    I think the caching is happening in the browser, with the iframe.
    You should try passing a random param to the servlet in the iframe URL, something like:
    long rand = (long) (Math.random() * 10000000); // for example
    out.println("document.write(\"<iframe  src='" + iframeURL + "&rand=" + rand + "' height=" + hheight + " width=" + wwidth + " SCROLLING=no FRAMEBORDER=0 MARGINWIDTH=2 MARGINHEIGHT=2 onfocus='window.focus(); return iframeFocus()'>\");");
    out.println("document.write(\"</iframe>\");");
    It should force the browser to ask for the servlet again.
    hope this helps...

  • Caching problem with Internet Explorer

    Hi,
    users of an ApEx application I'm working on report that when they delete an uploaded file from one of the pages in the application (using Internet Explorer), the link to the file remains. This is, however, not an issue in Firefox, and after some research I found out that it is a caching problem in IE. It can be avoided by making IE check for newer versions of stored pages every time a page is visited, but it is clearly not an option to ask all our users to do this. I also learned that it can be fixed by randomizing the file URL every time the page is loaded, but I don't know how to randomize a URL, nor how to make it still point to the uploaded file. Any help would be appreciated!
    Thanks,
    -Kjetil

    Kjetil,
    This problem is also there if you use Flash Charts with a drilldown. See this posting:
    http://www.deneskubicek.blogspot.com/
    It will also link you to a corresponding thread and to an example in my demo application.
    The idea of a random number changing your link is the same one I used in extending my XML chart package.
    Denes Kubicek
    http://deneskubicek.blogspot.com/
    http://www.opal-consulting.de/training
    http://htmldb.oracle.com/pls/otn/f?p=31517:1
    -------------------------------------------------------------------

  • ADF cache problem

    Hello,
    I'm developing a web application with an ADF tree accessing a Content DB repository. When I deploy the application and navigate to the tree, everything looks fine. The problem occurs when the content of the repository is changed outside of the web application (e.g. a file is deleted with Oracle Drive): the tree doesn't display the changes. The only way to get the right state of the tree is to clear the browser cache and reload the page. In my opinion it looks like a caching problem. Putting the following meta information in the HTML header also failed:
    <meta http-equiv="pragma" content="no-cache"/>
    <meta http-equiv="expires" content="0"/>
    <meta http-equiv="cache-control" content="no-cache"/>
    Is there a possibility to disable the caching of such ADF components?
    Hope you can help me!
    Thanks,
    Alex

    This will probably help you: http://www.oracle.com/technology/products/ias/web_cache/afc/index.html
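    If the meta tags keep getting ignored, another generic approach (only a sketch, not an ADF-specific API; the class name is made up) is to set the no-cache headers on the HTTP response itself, for instance with a servlet filter mapped to your pages in web.xml:
    import java.io.IOException;
    import javax.servlet.*;
    import javax.servlet.http.HttpServletResponse;

    // Marks every response passing through it as non-cacheable so the
    // browser re-fetches the page instead of serving a stale copy.
    public class NoCacheFilter implements Filter {
        public void init(FilterConfig config) {
        }

        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            HttpServletResponse response = (HttpServletResponse) res;
            response.setHeader("Cache-Control", "no-cache, no-store, must-revalidate");
            response.setHeader("Pragma", "no-cache");
            response.setDateHeader("Expires", 0);
            chain.doFilter(req, res);
        }

        public void destroy() {
        }
    }
    Since ADF pages are served through the servlet container, a filter like this applies to them as well.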
    Regards,
    Koen Verhulst

  • Cache problem for this

    Hi
    In the saw.sessionsinfo.xml file I changed the home link property from true to false, but it is not removed from the presentation side.
    Below is my file. Please check it and tell me if there are any errors.
    <?xml version="1.0" encoding="UTF-8"?>
    <resourceBundle xmlns="oracle.bi.ps.resourceBundle/v1">
    <gdexpression id="noLogoffUI" expr="session.hideLogoffLink" />
    <gdexpression id="syndicate" expr="session.syndicate" />
    <gdexpression id="canAccessDashboards" expr="privileges.Access['Global Portal']" />
    <gdexpression id="hdrLinkCatalog" expr="true" />
    <gdexpression id="hdrLinkOpen" expr="true" />
    <gdexpression id="hdrLinkAdvanced" expr="true" />
    <gdexpression id="hdrLinkHelp" expr="true" />
    <gdexpression id="hdrLinkHome" expr="false" />
    <gdexpression id="hdrLinkGSearch" expr="true" />
    <gdexpression id="hdrLinkNew" expr="true" />
    <gdexpression id="hdrLinkDashboards" expr="true" />
    <gdexpression id="hdrLinkSettings" expr="true" />
    <gdexpression id="bipKeepAlive" expr="session.bipKeepAlive" />
    <gdexpression id="bipWebUrl" expr="system.config['AdvancedReporting/WebURL']" />
    <gdexpression id="bipExternalRepository" expr="system.config['AdvancedReporting/ExternalRepository']" />
    <gdexpression id="biComposerContext" expr="system.config['BIComposer/ContextPath']" />
    </resourceBundle>
    Please, can anybody give a solution for this?
    Edited by: ARYABRAHMA on Feb 5, 2013 3:38 AM

    Is this a cache problem?

  • Qaaws not refreshing query triggered from Xcelsius, maybe a cache problem

    Hi,
    I'm having a problem with QAAWS and Xcelsius
    I'm using a List Builder component to select multiple values, in this case STATES from the eFashion universe.
    I use the selected states as values to feed a prompt in a QAAWS query; the QAAWS query has SALES REVENUE as the result set and in the conditions it has a multi-value prompt for STATES.
    When I preview my dashboard, I select the states, then UPDATE the values and then refresh the query with a CONNECTION REFRESH button. The first time I do this it works fine and returns the sales revenue.
    If I add a new state to my selection, then run update and run the query again with the refresh button, it doesn't work any more and it again shows the value retrieved from the first query.
    First I thought that the query wasn't triggered by Xcelsius, but by doing some more tests I found that the query actually runs but returns the value from the first query.
    I think this is a cache problem, so is there a way to tell QAAWS to always run the query and not use the cache?
    thanks,
    Alejandro

    Hello Alejandro,
    QaaWS indeed uses a cache mechanism to speed up some Xcelsius interactions (from XI 3.0 onwards), but your issue should not be caused by this, as cache sessions are discriminated according to session user id & prompt values, so if you are correctly passing prompt values, QaaWS should not serve you the previous values by mistake.
    Could you specify how you are passing several prompt values to QaaWS? There might be an issue there, so make sure that:
    1. The QaaWS query prompt is set using the In List operator, otherwise only the first value will actually be taken into account,
    2. In the Xcelsius Designer Data Manager, the web service input parameters are duplicated to accept several input values (you cannot submit your list of prompt values as a list to a single input parameter).
    If this still does not work, I'd suggest you debug your dashboard at runtime using an HTTP sniffer like Fiddler (available from http://www.fiddler2.com/), which enables you to inspect the HTTP messages sent to and received from the server; there you should verify which prompt values are sent to the QaaWS servlet.
    FYI, you can set the QaaWS cache lifetime for each query by going to the first screen of the QaaWS edit wizard, clicking the Advanced... button and changing the value of the timeout parameter (the default is 60 seconds).
    Hope that helps,
    David.

  • ASO cache problem with Windows 7

    I have a large base of AS2 classes that support my application.  I've been working on it for a few years and I'm very familiar with the ASO cache problem that causes edits not to be compiled.  Well, I hit it today and I can't get rid of the old files.  I'm running CS5.5 and I used the menu first.  Then I manually deleted the ASO files.  Then I rebooted the computer, searched the entire C: drive for ASO files and deleted all of them.  Nothing worked.  I finally renamed one of my class files (and made the necessary edits that flowed from that), and that one file was recompiled. Obviously there is a hidden cached file somewhere.  I have Win7 Pro 64-bit.  Has anyone had a similar problem?  The last thing I want to have to do is rename all of my class files or tear down and rebuild my development box.  TIA

    3.1 is not "fine" and the drivers leave much to be desired.
    You will probably need to do a Safe Mode and uninstall, rollback, system restore. Use your DVD for Windows 7.
    Not sure of the support issues and details on MacBook Pro 13" as to whether you have graphic driver issue, AppleHFS, or other (and to say 'total hangup' doesn't really lead to what and why).
