JNDI caching problem: want to look up a newly bound object.

I want to use JNDI as a dynamic object repository.
When I first bind an object into JNDI, lookups work properly.
But when I bind a second object from the same object hierarchy under the same JNDI name (a rebind), the lookup still returns the old result from the first binding.
How can I turn off JNDI's caching behavior?
I'm working on WebLogic Server 8.1 with Service Pack 5.
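For reference, a minimal sketch of the rebind/lookup pattern described above, assuming a WebLogic provider URL of t3://localhost:7001 and the illustrative JNDI name exampleConfig (both are placeholders, not taken from the post); each lookup goes through a freshly created InitialContext so that no stale locally cached binding can be returned:

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;

public class JndiRebindSketch {

    // Create a new context each time; the connection details are placeholders.
    private static Context newContext() throws NamingException {
        Hashtable env = new Hashtable();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
        env.put(Context.PROVIDER_URL, "t3://localhost:7001");
        return new InitialContext(env);
    }

    public static void main(String[] args) throws NamingException {
        Context ctx = newContext();
        ctx.rebind("exampleConfig", "first value");   // initial bind
        ctx.rebind("exampleConfig", "second value");  // rebind under the same name

        // Look up through a fresh context to avoid any locally cached binding.
        Context fresh = newContext();
        Object current = fresh.lookup("exampleConfig");
        System.out.println(current); // expected: second value
        fresh.close();
        ctx.close();
    }
}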


Similar Messages

  • JNDI failure. Unable to lookup Data Source

    Dear all
    I'm running my ADF application from the JDeveloper integrated WebLogic server and everything is OK.
    My project contains BC (business components) and a bounded task flow that contains two views.
    The problem happened when I changed the connection type of the application module from "JDBC Datasource" to "JDBC URL".
    The name of the JDBC URL is "jdbc/pmsDS".
    I made this change because I want to deploy my application to a WebLogic server, and I created a data source with the same name "jdbc/pmsDS" in that WebLogic server.
    When I try to test and run the application in my local JDeveloper, it fails and this error occurs:
    oracle.jbo.DMLException: JBO-27200: JNDI failure. Unable to lookup Data Source at context jdbc/pmsDS
    Can anyone please tell me what the problem is? Why can my application not resolve the JNDI name?
    Thanks

    What does this message mean?
    "No credential mapper entry found for password indirection user=hr"
    Issue: http://madnanhashmi.blogspot.com/2010/05/weblogiccommonresourceexception.html
    Edited by: Erp on Oct 20, 2011 4:50 AM
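    For the lookup failure above, a minimal verification sketch (an assumption, not code from the thread) that can be dropped into server-side code to confirm that the JNDI name jdbc/pmsDS actually resolves on the server the application runs on:

    import java.sql.Connection;
    import javax.naming.InitialContext;
    import javax.sql.DataSource;

    public class DataSourceLookupCheck {
        public static void check() throws Exception {
            InitialContext ctx = new InitialContext();
            // Throws NameNotFoundException if the data source is not bound,
            // e.g. because it is not targeted to this server.
            DataSource ds = (DataSource) ctx.lookup("jdbc/pmsDS");
            Connection con = ds.getConnection();
            try {
                System.out.println("Lookup and connection OK: " + con.getMetaData().getURL());
            } finally {
                con.close();
            }
        }
    }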

  • JNDI replication problems in WebLogic cluster.

    I need to implement a replicable property in the cluster: each server should be able to update it and the new value should be available to the whole cluster. I tried to bind this property to JNDI and ran into several problems:
    1) On each rebinding I got error messages:
    <Nov 12, 2001 8:30:08 PM PST> <Error> <Cluster> <Conflict start: You tried
    to bind an object under the name example.TestName in the jndi tree. The
    object you have bound java.util.Date from 10.1.8.114 is non clusterable and
    you have tried to bind more than once from two or more servers. Such objects
    can only deployed from one server.>
    <Nov 12, 2001 8:30:18 PM PST> <Error> <Cluster> <Conflict Resolved:
    example.TestName for the object java.util.Date from 10.1.9.250 under the
    bind name example.TestName in the jndi tree.>
    As I understand it, this is the designed behavior for non-RMI objects. Am I correct?
    2) Replication still happens, but I get inconsistent results: I bind the object on server 1 and look it up from server 2, and the two are not always the same, even with a delay of several seconds between the operations (tested with 0-10 sec.). A lookup may still return the old version after 10 seconds, while a second attempt without any delay can return the correct result.
    Any ideas how to ensure correct replication? I need the lookup to return the object I bound on a different server.
    3) Even when the lookup returns the correct result, the Admin Console under Server->Monitoring->JNDI Tree shows an error for the bound object:
    Exception
    javax.naming.NameNotFoundException: Unable to resolve example. Resolved: ''
    Unresolved:'example' ; remaining name ''
    My configuration: admin server + 3 managed servers in a cluster.
    JNDI bind and lookup are done from a stateless session bean. The bean is clusterable and deployed to all servers in the cluster. The client invokes session methods over the t3 protocol directly on the servers.
    Thank you for any help.

    It is not a good idea to use JNDI to replicate application data. Did you consider
    using JMS for this? Or JavaGroups (http://sourceforge.net/projects/javagroups/) -
    there is an example of a distributed hashtable in the examples.
    --
    Dimitri
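    A rough sketch of the JMS alternative suggested above: each server publishes the new value on a topic, and every server subscribes to that topic and updates its local copy. The JNDI names jms/ClusterCF and jms/PropertyUpdates are assumptions; the publishing side might look like this:

    import javax.jms.ObjectMessage;
    import javax.jms.Session;
    import javax.jms.Topic;
    import javax.jms.TopicConnection;
    import javax.jms.TopicConnectionFactory;
    import javax.jms.TopicPublisher;
    import javax.jms.TopicSession;
    import javax.naming.InitialContext;

    public class PropertyPublisher {
        public static void publish(java.io.Serializable newValue) throws Exception {
            InitialContext jndi = new InitialContext();
            TopicConnectionFactory cf = (TopicConnectionFactory) jndi.lookup("jms/ClusterCF"); // assumed name
            Topic topic = (Topic) jndi.lookup("jms/PropertyUpdates");                          // assumed name
            TopicConnection con = cf.createTopicConnection();
            try {
                TopicSession session = con.createTopicSession(false, Session.AUTO_ACKNOWLEDGE);
                TopicPublisher publisher = session.createPublisher(topic);
                ObjectMessage msg = session.createObjectMessage(newValue);
                publisher.publish(msg); // each server's subscriber updates its local copy
            } finally {
                con.close();
            }
        }
    }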

  • I am facing a caching problem in the Web-Application that I've developed using Java/JSP/Servlet

    Dear Friends,
    I am facing a caching problem in the web application that I've developed using Java/JSP/Servlet.
    Problem description: In this application, when a hyperlink is clicked it is supposed to go to the handling servlet; the servlet then fetches the data (using the DAO layer) and stores it in the session. After this the servlet forwards the request to the view JSP to present the data. The JSP accesses the object stored in the session and displays the data.
    However, when the link is clicked a second time, the request is not received by our servlet and the cached page (with the previous data) is shown. If we refresh the page, the request reaches the servlet and we get the correct data. But we don't want users to have to refresh the page again and again to get updated data.
    We've also included these lines in the JSPs, but it does no good:
    <%
    response.setHeader("Expires", "0");
    response.setHeader("Cache-Control" ,"no-cache, must-revalidate");
    response.setHeader("Pragma", "no-cache");
    response.setHeader("Cache-Control","no-store");
    %>
    Request you to please give a solution for the same.
    Thanks & Regards,
    Mohan

    "However, when the link is clicked a second time, the request is not received by our servlet" - impossible, mate. Can you show your code? Are you sure there are no JavaScript errors?
    Why don't you just remove your object from the session after displaying the data from it and see whether your page "automatically" hits the servlet when the link is clicked?
    cheers..
    S
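    A minimal sketch along the lines of the reply (the attribute name resultData and the view name view.jsp are illustrative): fetch fresh data on every request and set the no-cache headers in the controller servlet before forwarding, so the view never relies on a stale session copy or a cached page:

    import java.io.IOException;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class DataServlet extends HttpServlet {
        protected void doGet(HttpServletRequest request, HttpServletResponse response)
                throws ServletException, IOException {
            // Fetch fresh data on every request instead of reusing a session copy.
            Object data = fetchFromDao();                        // stand-in for the real DAO call
            request.getSession().setAttribute("resultData", data);

            // Ask the browser not to cache the rendered page.
            response.setHeader("Cache-Control", "no-cache, no-store, must-revalidate");
            response.setHeader("Pragma", "no-cache");
            response.setDateHeader("Expires", 0);

            request.getRequestDispatcher("/view.jsp").forward(request, response);
        }

        private Object fetchFromDao() {
            return new Object(); // placeholder for the DAO layer
        }
    }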

  • ASO cache problem with Windows 7

    I have a large base of AS2 classes that support my application.  I've been working on it for a few years and I'm very familiar with the ASO cache problem that causes edits to not be compiled.  Well, I hit it today and I can't get rid of the old files.  I'm running CS5.5 and I used the menu first.  Then I manually deleted the ASO files.  Then I rebooted the computer and searched the entire C: drive for ASO files and deleted all of them.  Nothing worked.  I finally renamed one of my class files (and made the necessary edits that flowed therefrom) and that one file was recompiled. Obviously there is a hidden cached file somewhere.  I have Win7 Pro 64-bit.  Has anyone had a similar problem?  The last thing I want to have to do is rename all of my class files or tear down and rebuild my development box.  TIA

    3.1 is not "fine" and the drivers leave much to be desired.
    You will probably need to do a Safe Mode and uninstall, rollback, system restore. Use your DVD for Windows 7.
    Not sure of the support issues and details on MacBook Pro 13" as to whether you have graphic driver issue, AppleHFS, or other (and to say 'total hangup' doesn't really lead to what and why).

  • Firefox cache problem

    Hi,
    There is an old nasty problem that I finally want to solve:
    With Firefox my Apex applications get a cache problem once in a while: the user clicks on a button or tab or a text field etc. and suddenly gets a page with the design mingled up and the functionality broken. With Ctrl + F5 (refresh) everything is fine again, but it's rather annoying for the users. The problem is, some users get that often, others not at all; sometimes it's more, sometimes it's less. For all (internal) users I set the cache in prefs.js to user_pref("browser.cache.check_doc_frequency", 1); for maximum refresh behaviour. So the configs should be the same for all.
    With other websites it doesn't seem to happen. Is this an Apex specific problem? Are others experiencing the same? Any hint or solution? I would be glad for some input.
    Thank you,
    Roger

    I never get that problem, Roger.
    The cache problem I have with Firefox is trying to get it to load the latest versions of JavaScript libraries after they have been changed.
    Andy.

  • JNDI failure. Unable to lookup Data Source at context jdbc/AppsDatasource

    I'm not able to run my application locally due to the issue "JNDI failure. Unable to lookup Data Source at context jdbc/APPSnonXA". I did the following while debugging:
    1. Checking for a SPACE in the data source, I verified:
    1a. The data source name for spaces - could not find any. Also, the "Test Connection" was successful.
    1b. The PATH and CLASSPATH variables for spaces. Removed other tool paths, but no use.
    1c. The JDeveloper installation location path - none of the folder names have spaces.
    1d. The application location path - doesn't have spaces.
    I'm attaching the log file that has the PATH and CLASSPATH variable info.
    Please help me out.
    I did all the above with the help of the Oracle forums. (https://forums.oracle.com/forums/search.jspa?threadID=&q=Re%3Aoracle.jbo.DMLException%3A+JBO-27200%3A+JNDI+failure.+Unable+to+lookup+Data+Source+at+context+&objID=f83&dateRange=all&userID=&numResults=30&rankBy=10001)

    After doing the following two things, the issue was resolved.
    #1:
    Configure the default server as a target for the data source (WebLogic console --> Data Sources --> jdbc/AppsDataSource --> Targets --> select the default server if it is not already selected --> Save --> restart the server).
    Note:
    If we don't select a target when creating the JNDI data source, the following applies: "You can select one or more targets to deploy your new JDBC data source. If you don't select a target, the data source will be created but not deployed. You will need to deploy the data source at a later time."
    #2: If JDeveloper is newly installed, create a dummy UI application and run a test .jspx page.

  • Caching problem w/ primary-foreign key mapping

    I have seen this a couple of times now. It is not consistent enough to
    create a simple reproducible test case, so I will have to describe it to you
    with an example and hope you can track it down. It only occurs when caching
    is enabled.
    Here are the classes:
    class C1 { int id; C2 c2; }
    class C2 { int id; C1 c1; }
    Each class uses application identity using static nested Id classes: C1.Id
    and C2.Id. What is unusual is that the same value is used for both
    instances:
    int id = nextId();
    C1 c1 = new C1(id);
    C2 c2 = new C2(id);
    c1.c2 = c2;
    c2.c1 = c1;
    This all works fine using optimistic transactions with caching disabled.
    Although the integer values are the same, the oids are unique because each
    class defines its own unique oid class.
    Here is the schema and mapping (this works with caching disabled but fails
    with caching enabled):
    table t1: column id integer, column revision integer, primary key (id)
    table t2: column id integer, column revision integer, primary key (id)
    <jdo>
    <package name="test">
    <class name="C1" objectid-class="C1$Id">
    <extension vendor-name="kodo" key="jdbc-class-map" value="base">
    <extension vendor-name="kodo" key="table" value="t1"/>
    </extension>
    <extension vendor-name="kodo" key="jdbc-version-ind"
    value="version-number">
    <extension vendor-name="kodo" key="column" value="revision"/>
    </extension>
    <field name="id" primary-key="true">
    <extension vendor-name="kodo" key="jdbc-field-map" value="value">
    <extension vendor-name="kodo" key="column" value="id"/>
    </extension>
    </field>
    <field name="c2">
    <extension vendor-name="kodo" key="jdbc-field-map" value="one-one">
    <extension vendor-name="kodo" key="column.id" value="id"/>
    </extension>
    </field>
    </class>
    <class name="C2" objectid-class="C2$Id">
    <extension vendor-name="kodo" key="jdbc-class-map" value="base">
    <extension vendor-name="kodo" key="table" value="t2"/>
    </extension>
    <extension vendor-name="kodo" key="jdbc-version-ind"
    value="version-number">
    <extension vendor-name="kodo" key="column" value="revision"/>
    </extension>
    <field name="id" primary-key="true">
    <extension vendor-name="kodo" key="jdbc-field-map" value="value">
    <extension vendor-name="kodo" key="column" value="id"/>
    </extension>
    </field>
    <field name="c1">
    <extension vendor-name="kodo" key="dependent" value="true"/>
    <extension vendor-name="kodo" key="inverse-owner" value="c2"/>
    <extension vendor-name="kodo" key="jdbc-field-map" value="one-one">
    <extension vendor-name="kodo" key="table" value="t1"/>
    <extension vendor-name="kodo" key="ref-column.id" value="id"/>
    <extension vendor-name="kodo" key="column.id" value="id"/>
    </extension>
    </field>
    </class>
    </package>
    </jdo>
    Because the ids are known to be the same, the primary key values are also
    used as foreign key values. Accessing C2.c1 is always non-null when caching
    is disabled. With caching enabled, C2.c1 is usually non-null but sometimes
    null. When it is null, we get warnings about dangling references to deleted
    instances with id values of 0 and other similar warnings.
    The workaround is to add a redundant column with the same value. For some
    reason this works around the caching problem (this is unnecessary with
    caching disabled):
    table t1: column id integer, column id2 integer, column revision integer,
    primary key (id), unique index (id2)
    table t2: column id integer, column revision integer, primary key (id)
    <jdo>
    <package name="test">
    <class name="C1" objectid-class="C1$Id">
    <extension vendor-name="kodo" key="jdbc-class-map" value="base">
    <extension vendor-name="kodo" key="table" value="t1"/>
    </extension>
    <extension vendor-name="kodo" key="jdbc-version-ind"
    value="version-number">
    <extension vendor-name="kodo" key="column" value="revision"/>
    </extension>
    <field name="id" primary-key="true">
    <extension vendor-name="kodo" key="jdbc-field-map" value="value">
    <extension vendor-name="kodo" key="column" value="id"/>
    </extension>
    </field>
    <field name="c2">
    <extension vendor-name="kodo" key="jdbc-field-map" value="one-one">
    <extension vendor-name="kodo" key="column.id" value="id2"/>
    </extension>
    </field>
    </class>
    <class name="C2" objectid-class="C2$Id">
    <extension vendor-name="kodo" key="jdbc-class-map" value="base">
    <extension vendor-name="kodo" key="table" value="t2"/>
    </extension>
    <extension vendor-name="kodo" key="jdbc-version-ind"
    value="version-number">
    <extension vendor-name="kodo" key="column" value="revision"/>
    </extension>
    <field name="id" primary-key="true">
    <extension vendor-name="kodo" key="jdbc-field-map" value="value">
    <extension vendor-name="kodo" key="column" value="id"/>
    </extension>
    </field>
    <field name="c1">
    <extension vendor-name="kodo" key="dependent" value="true"/>
    <extension vendor-name="kodo" key="inverse-owner" value="c2"/>
    <extension vendor-name="kodo" key="jdbc-field-map" value="one-one">
    <extension vendor-name="kodo" key="table" value="t1"/>
    <extension vendor-name="kodo" key="ref-column.id" value="id2"/>
    <extension vendor-name="kodo" key="column.id" value="id"/>
    </extension>
    </field>
    </class>
    </package>
    </jdo>
    Needless to say, the extra column adds a lot of overhead, including the
    addition of a second unique index, for no value other than working around
    the caching defect.

    Tom-
    The first thing that I think of whenever I see a problem like this is
    that the equals() and hashCode() methods of your application identity
    classes are not correct. Can you check them to ensure that they are
    written in accordance with the guidelines at:
    http://docs.solarmetric.com/manual.html#jdo_overview_pc_identity_application
    If that doesn't help address the problem, can you post the code for your
    application identity classes so we can double-check, and we will try to
    determine what might be causing the problem.
    Marc Prud'hommeaux [email protected]
    SolarMetric Inc. http://www.solarmetric.com
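    As a reference point for the equals()/hashCode() check mentioned above, a minimal sketch of what a well-behaved application-identity class for C1 might look like (only the id field is taken from the post; everything else is illustrative):

    public class C1 {
        private int id;
        // ... persistent fields ...

        public static class Id implements java.io.Serializable {
            public int id;

            public Id() {
            }

            public Id(String str) {
                this.id = Integer.parseInt(str);
            }

            public boolean equals(Object other) {
                if (this == other) {
                    return true;
                }
                if (!(other instanceof Id)) {  // a C2.Id never compares equal to a C1.Id
                    return false;
                }
                return this.id == ((Id) other).id;
            }

            public int hashCode() {
                return id;
            }

            public String toString() {
                return String.valueOf(id);
            }
        }
    }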

  • Bridge update does not fix caching problems.

    Dear Adobe,
    The 5.0.1.23 update for Bridge CS6 does NOT fix the problem of constantly re-caching layered TIF files.
    I originally reported the problem here on May 16, 2012.
    http://forums.adobe.com/thread/1007560
    At that time I also submitted a bug report via photoshop.com, and received an e-mail response from Adobe support confirming the problem had been replicated in their lab and promising a fix in the next update.
    I've since tracked several other reports of this bug and related cache problems.
    I assume that, at best, we will have to wait another 6 months or more for the next update. How can I make sure this bug will be addressed?

    redcrown on guard wrote:
    The 5.0.1.23 update for Bridge CS6 does NOT fix the problem of constantly re-caching layered TIF files.
    At that time I also submitted a bug report via photoshop.com, and received an e-mail response from Adobe support confirming the problem had been replicated in their lab and promising a fix in the next update.
    Thank you for this bit of information. Maybe it means I can stop the deactivations/uninstall/reinstall/reactivate cycle to try yet another solution. And hopefully, this will stop the re-caching problem with other than tif files.
    regards
    *S*

  • Caching problem in Chrome and Firefox

    Hey folks,
    I ran into a weird problem.  I created a video player based on the Strobe Media Playback.  I added a couple of plugins.  This player is used to watch progressive download FLV files.
    I ran into the following issue.  I watch part of a video.  I select another one.  Then I select the previous one again.  Only the cached portion of the first video is shown.  The entire video will not be downloaded again from the server, but only the portion already cached on the client.
    This problem is really bad in Chrome.  When I restart FF, I can watch the entire video.  Not in Chrome.  The only way to solve this in Chrome is to clear the cache.
    Any ideas?
    The website is live, so you can test this yourself.  http://www.submergeproductions.com/videos.aspx
    All help is very welcome, because this is a major issue.
    Follow-up: I made a quick fix. I added a random number to the FLV URL to force a re-download from the server, but this is quite a dirty fix. I would rather have a restart/continuation of the download if the file was only partially downloaded.
    Thanks,
    Peter

    Hi Silviu,
    the reason why it works now is because I uploaded a modified version.  I append "?<random number>" to the URL.  That prevents caching problems because the browser hasn't got that version cached.  But I will still report it as a bug.
    Peter

  • Caching problem of servlet

    Hi guys
    We are facing this problem of caching within our project. The project aims to generate HTML code that picks up some rich media ad details at random and displays them on the HTML page where the generated code is pasted. We developed two servlets: one extracts the ads from the database randomly and then, depending on the ad type, calls the other servlet as the src of an iframe, which in turn emits all the code for displaying the rich media ads. The script we generate for the user to paste onto their pages is:
    <script LANGUAGE="JAVASCRIPT" src="http://192.168.1.6:8080/advert_java/servlet/GetAdServlet?region=1&zone=1&type=nossi&cachevar=yes">
    </script>
    The first servlet (GetAdServlet) returns the JavaScript statements and is thus called using this generated code. The contents of the iframe are supplied by the second servlet, i.e. RichMediaServlet. This servlet is called like:
    iframeURL = fullHttpDir+"/servlet/RichMediaServlet?";
    iframeURL += "bannerCode=" + RNBanner; // banner code to be called
    out.println("document.write(\"<iframe  src='"  + iframeURL +  "' height=" + hheight +" width="+ wwidth + " SCROLLING=no FRAMEBORDER=0 MARGINWIDTH=2 MARGINHEIGHT=2 onfocus='window.focus(); return iframeFocus()'>\");");
    out.println("document.write(\"</iframe>\");");This richmediaServlet returns HTML into <iframe>. when richmediaservlet is called, a parameter 'bannerCode' is passed. then richmediaServlet fatches the banner from the database and displays the banner into the <iframe>.
    Now the problem comes when we run the HTML file containing the script tag mentioned above and refresh the page: ideally it should pick the ads randomly and pass them on to RichMediaServlet.
    I also tried debugging both servlets. I called GetAdServlet from the JavaScript mentioned above and put debugging info in both servlets. For every refresh on the HTML side we get a different random bannerCode in GetAdServlet, but when we print the bannerCode received in the query string in RichMediaServlet, it shows an older value that was displayed some time back, and it keeps doing this for quite a long time, which makes it look like a caching problem with RichMediaServlet.
    However, when we tried putting the same HTML <script> code into another servlet's doGet, everything seems to work fine.
    I have also used the following code to prevent caching in both servlets:
    long currentTime = System.currentTimeMillis();
    response.setHeader("Cache-Control", "no-cache, must-revalidate");
    response.setHeader("Pragma", "no-cache");
    response.setDateHeader("Last-modified", currentTime);
    response.setHeader("Expires", "Sat, 6 May 1995 12:00:00 GMT");     and following in the iframe's head tag before the iframe tag in the getAdServlet.
    out.println("document.write('<head>');");
    out.println("document.write('<meta http-equiv=\"Cache-Control\" content=\"no-cache,must-revalidate\">');");
    out.println("document.write('<meta http-equiv=\"Pragma\" content=\"no-cache\">');");
    out.println("document.write('<meta http-equiv=\"Last-modified\" content=\""+ currentTime + "\">');");
    out.println("document.write('<meta http-equiv=\"expires\" content=\"Sat, 6 May 1995 12:00:00 GMT\">');");
    out.println("document.write('</head>');");I request you all geeks to try and help me to your best. The project is at its final stages and in high urgency now.

    I think the caching is happening in the browser, with the iframe.
    You should try passing a random param to the servlet in the iframe URL, something like:
    var a = Math.random() * 10000000; //for example
    out.println("document.write(\"<iframe  src='"  + iframeURL +"&rand="+a+"' height=" + hheight +" width="+ wwidth + " SCROLLING=no FRAMEBORDER=0 MARGINWIDTH=2 MARGINHEIGHT=2 onfocus='window.focus(); return iframeFocus()'>\");");
    out.println("document.write(\"</iframe>\");");
    ...It should force the browser to ask for the servlet again
    hope this helps...
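    If the no-cache headers need to be applied to both servlets consistently, another option (a sketch assuming a Servlet 2.3+ container, not something from the thread) is a filter mapped to /servlet/* in web.xml, so neither servlet has to repeat the header code:

    import java.io.IOException;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.servlet.http.HttpServletResponse;

    public class NoCacheFilter implements Filter {
        public void init(FilterConfig config) {
        }

        public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
                throws IOException, ServletException {
            HttpServletResponse httpResponse = (HttpServletResponse) response;
            // Set the headers before the servlet writes the body.
            httpResponse.setHeader("Cache-Control", "no-cache, no-store, must-revalidate");
            httpResponse.setHeader("Pragma", "no-cache");
            httpResponse.setDateHeader("Expires", 0);
            chain.doFilter(request, response);
        }

        public void destroy() {
        }
    }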

  • Caching problem with Internet Explorer

    Hi,
    users of an ApEx application I'm working on are reporting that when they delete an uploaded file from one of the pages in the application (using Internet Explorer), the link to the file remains. This is, however, not an issue in Firefox, and after some research I found out that this is a caching problem in IE. It can be avoided by making IE check for newer versions of stored pages every time a page is visited, but it is clearly not an option to ask all our users to do this. I also learned that it can be fixed by randomizing the file URL every time the page is loaded, but I don't know how to randomize a URL, nor how to make it still point to the uploaded file. Any help would be appreciated!
    Thanks,
    -Kjetil

    Kjetil,
    This problem is also there if you use Flash Charts with a drilldown. See this posting:
    http://www.deneskubicek.blogspot.com/
    It will also link you to a corresponding thread and to an example in my demo application.
    The idea of a random number changing your link is the same one I used in extending my XML chart package.
    Denes Kubicek
    http://deneskubicek.blogspot.com/
    http://www.opal-consulting.de/training
    http://htmldb.oracle.com/pls/otn/f?p=31517:1
    -------------------------------------------------------------------
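    The random-number idea mentioned above boils down to appending a throwaway query parameter so the browser treats every request for the file as new. A small illustrative sketch of that idea in Java (the link format is an assumption, not APEX-specific code):

    public class CacheBustingUrl {
        /** Appends a throwaway parameter so the browser never reuses a cached copy. */
        public static String bust(String url) {
            String separator = (url.indexOf('?') >= 0) ? "&" : "?";
            return url + separator + "nocache=" + System.currentTimeMillis();
        }

        public static void main(String[] args) {
            // e.g. a download link for an uploaded file (the URL is illustrative)
            System.out.println(bust("/apex/download_file?p_file_id=123"));
        }
    }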

  • ADF cache problem

    Hello,
    I'm developing a web application with an ADF tree accessing a Content DB repository. When I deploy the application and navigate to the tree, everything looks fine. The problem occurs when the content of the repository is changed outside of the web application (e.g. a file is deleted with Oracle Drive): the tree doesn't display the changes. The only way to get the correct state of the tree is to clear the browser cache and reload the page. In my opinion it looks like a caching problem. Putting the following meta information in the HTML header also failed:
    <meta http-equiv="pragma" content="no-cache"/>
    <meta http-equiv="expires" content="0"/>
    <meta http-equiv="cache-control" content="no-cache"/>
    Is there a possibility to disable the caching of such ADF components?
    Hope you can help me!
    Thanks,
    Alex

    This will probably help you: http://www.oracle.com/technology/products/ias/web_cache/afc/index.html
    Regards,
    Koen Verhulst

  • Cache problem for this

    Hi,
    In the saw.sessionsinfo.xml file I changed the home link property from true to false,
    but it is not removed from the presentation side.
    Below is my script file.
    Please check it and tell me if there are any errors.
    <?xml version="1.0" encoding="UTF-8"?>
    <resourceBundle xmlns="oracle.bi.ps.resourceBundle/v1">
    <gdexpression id="noLogoffUI" expr="session.hideLogoffLink" />
    <gdexpression id="syndicate" expr="session.syndicate" />
    <gdexpression id="canAccessDashboards" expr="privileges.Access['Global Portal']" />
    <gdexpression id="hdrLinkCatalog" expr="true" />
    <gdexpression id="hdrLinkOpen" expr="true" />
    <gdexpression id="hdrLinkAdvanced" expr="true" />
    <gdexpression id="hdrLinkHelp" expr="true" />
    <gdexpression id="hdrLinkHome" expr="false" />
    <gdexpression id="hdrLinkGSearch" expr="true" />
    <gdexpression id="hdrLinkNew" expr="true" />
    <gdexpression id="hdrLinkDashboards" expr="true" />
    <gdexpression id="hdrLinkSettings" expr="true" />
    <gdexpression id="bipKeepAlive" expr="session.bipKeepAlive" />
    <gdexpression id="bipWebUrl" expr="system.config['AdvancedReporting/WebURL']" />
    <gdexpression id="bipExternalRepository" expr="system.config['AdvancedReporting/ExternalRepository']" />
    <gdexpression id="biComposerContext" expr="system.config['BIComposer/ContextPath']" />
    </resourceBundle>
    Please, can anybody give a solution for this?
    Edited by: ARYABRAHMA on Feb 5, 2013 3:38 AM

    Is it a cache problem for this?

  • Qaaws not refreshing query triggered from Xcelsius, maybe a cache problem

    Hi,
    I'm having a problem with QAAWS and Xcelsius
    I'm using a List Builder component to select multiple values in this case STATES from the efashion universe
    I use the selected states as values to feed a prompt in a QAAWS query; the QAAWS query has SALES REVENUE as the result set and, in its conditions, a multi-value prompt for STATES.
    When I preview my dashboard, I select the States, then UPDATE the values and then refresh the query with a CONNECTION REFRESH button, The first time I do this it works fine and returns the Sales revenue.
    If I add a new state to my selection, update the values again and re-run the query with the refresh button, it doesn't work any more and it shows the value retrieved from the first query again.
    First I thought that the query wasn't triggered by Xcelsius, but by doing some more tests I found that actually the query runs but it returns the value from the first query
    I think this is a cache problem, so is there a way to tell QAAWS to always run the query and not use the cache?
    thanks,
    Alejandro

    Hello Alejandro,
    QaaWS indeed uses a cache mechanism to speed up some Xcelsius interactions (from XI 3.0 onwards), but your issue should not be caused by this, as cache sessions are keyed on the session user id & prompt values, so if you are correctly passing the prompt values, QaaWS should not serve you the previous values by mistake.
    Could you specify how you are passing several prompt values to QaaWS? There might be an issue there, so make sure that:
    1. The QaaWS query prompt is set using the 'In List' operator; otherwise only the first value will actually be taken into account.
    2. In the Xcelsius Designer Data Manager, the web service input parameters are duplicated to accept several input values (you cannot submit your list of prompt values as a list to a single input parameter).
    If this still does not work, I'd suggest you debug your dashboard at runtime using an HTTP sniffer like Fiddler (available from http://www.fiddler2.com/), which enables you to inspect the HTTP messages sent to and received from the server, where you should verify which prompt values are sent to the QaaWS servlet.
    FYI, you can set the QaaWS cache lifetime for each query by going into the first QaaWS edit wizard screen, clicking the Advanced... button and changing the value of the timeout parameter (the default is 60 seconds).
    Hope that helps,
    David.
