Problem with PRAGMA NO-CACHE

Hi,
          I am having the following problem. Any help is greatly appreciated.
          I have a servlet whose doPost method sets:
          response.setHeader ("Pragma", "no-cache");
          The browser receives this page, and the user then navigates from it to another page.
          When the user then uses the "Back" button of the browser, the browser does not show "Warning: This page has expired". Instead it displays the original page that the servlet had sent.
          My servlet works fine with WebSphere but not with WebLogic 4.5.1 or 5.1.
          Has anybody encountered this problem before?
          Am I doing something wrong? Do I need to do something extra for WebLogic so that the page does not get cached?
          Thanks,
          Ullas.
          ([email protected])
          

Try deleting the cache and retrying. If this does not solve your problem, then follow this URL:
Inconsistent Cache (SXI_CACHE)
Cache Refresh
---Mohan
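For HTTP/1.1 browsers, a Pragma header alone is often not enough; back-button caching usually needs Cache-Control and Expires as well. A minimal sketch of the usual header set follows (the class and method names are illustrative, not from the original post; each pair would be applied via response.setHeader in doPost):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class NoCacheHeaders {
    // Headers that together disable caching for both HTTP/1.0 and
    // HTTP/1.1 clients; apply each via response.setHeader(name, value).
    public static Map<String, String> headers() {
        Map<String, String> h = new LinkedHashMap<>();
        h.put("Pragma", "no-cache");                                   // HTTP/1.0
        h.put("Cache-Control", "no-cache, no-store, must-revalidate"); // HTTP/1.1
        h.put("Expires", "0");                                         // proxies
        return h;
    }

    public static void main(String[] args) {
        headers().forEach((k, v) -> System.out.println(k + ": " + v));
    }
}
```

Whether the back button honors these still varies by browser and container, as the thread above shows, but sending all three is the common baseline.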

Similar Messages

  • Problem with optional attribute caching on a custom tag

    Hello,
    I've created a tag by extending TagSupport. I have one attribute that is optional. I'm having a problem with this attribute since the tag is cached: if the value is not specified in the tag, it always uses the previous value from the past request.
    I understand why this is happening, but I wonder if there is any way to reset this value besides doing it at the end of doStartTag or in the doEndTag method? I simply want it to be an empty string if it is not in the request.
    Thanks,
    Tim

    That's a bit overkill in my opinion.
    Probably yes, but it's a cleaner option. In case your doEndTag handles custom exceptions, you would anyhow need to put this code in a finally block, right?
    public int doEndTag() throws JspException {
        try {
            // call some methods that throw other checked exceptions
        } catch (Exception1 e) {
            throw new JspException(e);
        } catch (Exception2 e) {
            // log and ignore
        } finally {
            // clean up here
        }
        return EVAL_PAGE;
    }
    Having said that, different containers implement the spec a bit differently. For example, in our project we use WebLogic, and for WL we put our clean-up code in the release() method, which according to the spec needs to be called only before GC. The WebLogic implementation is a bit different: release() is called after doEndTag() for every use of the tag.
    This is from the JSP spec regarding the tag life cycle, especially with reuse:
    Some setters may be called again before a tag handler is reused. For instance, setParent() is called if it's reused within the same page but at a different level, setPageContext() is called if it's used in another page, and attribute setters are called if the values differ or are expressed as request-time attribute values.
    Check the TryCatchFinally interface for additional details related to exception handling and resource management.
    cheers,
    ram.
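    The reset-in-finally pattern discussed above can be sketched without the JSP API; OptionalAttrTag here is a made-up stand-in for a TagSupport subclass, and render() stands in for doEndTag():

```java
public class OptionalAttrTag {
    private String value = "";  // optional attribute; the container may reuse this instance

    public void setValue(String v) { value = v; }

    // Mimics doEndTag(): use the attribute, then always reset it so a
    // reused (pooled) handler does not leak the previous request's value.
    public String render() {
        try {
            return "value=" + value;
        } finally {
            value = "";  // reset for the next use of the pooled handler
        }
    }

    public static void main(String[] args) {
        OptionalAttrTag tag = new OptionalAttrTag();
        tag.setValue("hello");
        System.out.println(tag.render()); // value=hello
        System.out.println(tag.render()); // value= (attribute was reset)
    }
}
```

    Because the reset sits in finally, it runs even when the body throws, which is the same guarantee the TryCatchFinally interface's doFinally() provides in a real tag handler.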

  • Problems with invalidating web cache

    Currently we display the current date (portal smart link) in the top region of each portal page. We have been having problems with the date not refreshing; rather, old pages continue to be served up. I set up a database job using wvuxtil to send an XML document to invalidate all documents in the web cache under root at midnight. (The caching of every portal page is set to "content and definition at system level" for 1440 min.) This morning, the job ran, but when checking the pages from a client machine, the date still displayed as yesterday's for SOME of the pages (not all) - clearing the client's IE cache made no difference. Logging on to the server, I accessed the relevant pages displaying yesterday's date on the client - and they showed the correct date on the server. Going back to the client machine, the dates are now correct? Help anyone? What should I be checking/looking at next?
    Cheers
    Brent Harlow

    Hi Brent,
    Which version of Web Cache and Portal are you using? A similar problem with dates and invalidation has been reported before, so you may want to check with Oracle Support for the appropriate patch.

  • Weblogic 8.1 SP2 + Sybase: Problem with Insufficient Procedure Cache Memory

    Hi all,
    Our Weblogic server(8.1, SP2) encountered a problem this week.
    It connects to Sybase.
    For a particular database query (which involves temporary tables), the DB seems to have run out of "Procedure Cache memory". And as a result, has thrown an exception.
    What is a bit weird is that WebLogic slowly seems to have exhausted all its DB resources, and all the subsequent database queries (even the simple ones) have failed due to some error or the other.
    WebLogic required a restart for it to acquire back its DB resources.
    Has somebody faced a similar problem before, please?
    On reading the Release Notes for WLS: 8.1, I see that the Service Packs SP4, SP5 seem to have a few bug fixes related to memory leaks (especially, in case a Prepared statement failed).
    [Related CRs are CR233948, CR179600, CR183190].
    Does this mean that an upgrade to 8.1 SP5, or maybe even SP6, would help?
    Would welcome any kind of advice/suggestions.
    Thank you!!!!
    Rhishi.

    Rhishikesh Anandamoorthy wrote:
    Hi Joe,
    Thanks a lot for your reply.
    I should have mentioned in my previous post that the DBA had indeed recommended
    an increase to the Sybase's Procedure Cache Memory size. (We have a separate
    DBA team out here, and I do not have DBA access to my database).
    But I am a bit apprehensive on two counts:
    1. From the DB logs, I see that the query which seems to have failed is something
    which is fired day in and day out (though this involves the usage of temporary tables,
    which might have filled up the Cache). I should also add that the "DB statistics" was
    also being run (automatically) about the same time when the query failed. But again,
    the "DB statistics" is run every day. So, I can't see much of a problem here.
    Nevertheless, it is an internal DBMS issue.
    2. The CR179600 and CR183190 of the Weblogic Release Notes suggests that: "Under
    certain statement failure conditions, cached statements are leaked without being
    closed, which can lead to DBMS resource problems."
    So, I am just wondering if this is indeed the actual cause. If so, it might mean
    that there is a chance, though slight, that the problem might re-occur.
    There seems to be a fix for this in SP5.
    I certainly advocate upgrading to 81sp6, but that issue had to do with Oracle,
    which retains DBMS 'cursors' for each open prepared statement. This will have
    no effect on a Sybase DBMS.
    I can certainly upgrade to SP6. But, the application seems to have been pretty
    stable with SP2 for the past 4 years.
    I understand that an upgrade to SP6 may not be such a big change. But, it would
    still be a change.
    And the webapp which the server supports is very critical to our customer
    [which webapp is not? :-) ], and a server-restart again for this issue would
    certainly not be acceptable.
    So, would want to be doubly sure, if an upgrade is indeed the right way out.
    Thank you!!!
    Rhishi.
    In that case, I would stay comfortable with SP2 if you like. In my professional
    opinion, at this time, it is a purely DBMS-side issue, based on the current
    evidence. Note that the same WLS was fine for these same previous years. The
    problem may have to do with a gradual or recent change in the load or size of
    the DBMS.
    Joe Weinstein at BEA Systems ( nee [email protected] 1988-1996 )

  • Problems with increasing/decreasing cache size when live

    Hello,
    I have configured multiple environments which I'm compacting sequentially and to achieve this I allocate a bigger cache to the env currently being compacted as follows:
    Initialization:
    DB_ENV->set_cachesize(gbytes, bytes, 1); // Initial cache size.
    DB_ENV->set_cache_max(gbytes, bytes); // Maximum size.
    While live, application decreases cache of current env when finished and then increases cache of next env using:
    DB_ENV->set_cachesize(gbytes, obytes, 0); // Decrease cache size of current env to initial size
    DB_ENV->set_cachesize(gbytes, obytes, 0); // Increase cache size of next env to max size.
    When I print statistics about the memory pool using DB_ENV->memp_stat I can see that everything is going normally:
    memp_stat: env1 ncache= 8 cache_size=20973592 // env1 is current env
    memp_stat: env2 ncache= 1 cache_size=20973592
    and then after changing current env:
    memp_stat: env1 ncache= 1 cache_size=20973592
    memp_stat: env2 ncache= 8 cache_size=20973592 // env2 is now current env
    But the problem is that over time memory is leaked (as if the extra memory of each env was not freed) and I'm totally sure that the problem comes from this code.
    I'm running Berkeley DB 4.7.25 on FreeBSD.
    Maybe some leak was fixed in newer versions and you could suggest a patch to me? Or am I not using the API correctly?
    Thanks!
    Edited by: 894962 on Jan 23, 2012 6:40 AM

    Hi,
    Thanks for providing the information.
    Unfortunately, I don't remember the exact test case I was doing, so I did a new one with 32 envs.
    I set the following for each env:
    - Initial cache=512MB/32
    - Max=1GB
    Before open, I do:
    DBEnvironment->set_cachesize((u_int32_t)0, (u_int32_t)512*1024*1024/32, 1);
    DBEnvironment->set_cache_max(1*1024*1024*1024, 0);
    DBEnvironment->get_cache_max(&gbytes, &obytes); // gives gbytes=1 and obytes=0
    After open, I have the following:
    DBEnvironment->get_cache_max(&gbytes, &obytes); // gives gbytes=0 and obytes=9502720
    memp_stat: cache_size=18644992 cache_ncache=1
    So here, the values returned by memp_stat are normal but get_cache_max is strange. Then after increasing the cache to the strange value returned by get_cache_max (gbytes=0, obytes=9502720), I have the following:
    DBEnvironment->get_cache_max(&gbytes, &obytes); // gives gbytes=0 and obytes=9502720
    memp_stat: outlinks cache_size=27328512 cache_ncache=54
    with cache_size being: ((ui64)sp->st_gbytes * GIGA + sp->st_bytes);.
    So cache is actually increased...
    I try to reproduce this case by opening 1 env as follows.
    //Before open
    DbEnv->set_cachesize(); 512MB, 1 cache
    DbEnv->set_cache_max; 1GB
    //After open
    DbEnv->get_cachesize; 512MB, 1cache
    DbEnv->get_cache_max; 1GB
    memp_stat: cache:512MB, ncache:1, cache_max:1GB
    //Decrease the cache size
    DbEnv->set_cachesize(); 9MB(9502720B), 1 cache
    DbEnv->get_cachesize; 512MB, 1cache
    DbEnv->get_cache_max; 1GB
    memp_stat: cache:512MB, ncache:1, cache_max:1GB
    All the results are expected, since when resizing the cache after the DbEnv is open, the size is rounded to the nearest multiple of the region size. "Region size" means the size of each region specified initially. Please refer to the BDB doc: http://docs.oracle.com/cd/E17076_02/html/api_reference/C/envset_cachesize.html. Here the region size is 512MB/1 cache = 512MB. And I don't think you can resize the cache smaller than 1 region.
    Since you are opening 32 envs at the same time with a 512MB cache and 1GB maximum for each, whether an env can allocate as much as specified for the cache when it is opened depends on the system. I am guessing the number 9502720 returned by get_cache_max after opening the env is based on the system and the app request: the cache size you could actually get when opening the env.
    And for the case listed in the beginning of the post
    While live, application decreases cache of current env when finished and then increases cache of next env using:
    DB_ENV->set_cachesize(gbytes, obytes, 0); // Decrease cache size of current env to initial size
    DB_ENV->set_cachesize(gbytes, obytes, 0); // Increase cache size of next env to max size.
    When I print statistics about the memory pool using DB_ENV->memp_stat I can see that everyting is going normally:
    memp_stat: env1 ncache= 8 cache_size=20973592 // env1 is current env
    memp_stat: env2 ncache= 1 cache_size=20973592
    and then after changing current env:
    memp_stat: env1 ncache= 1 cache_size=20973592
    memp_stat: env2 ncache= 8 cache_size=20973592 // env2 is now current env
    When env1 is finishing soon, what numbers do you set in set_cachesize to decrease the cache, including the number of caches and cache size?
    When decreasing the cache, I do:
    env->GetDbEnv()->set_cachesize((u_int32_t)0, (u_int32_t)20973592, 0);
    I mean, in all cases I simply set the cache size to its original value (obtained after open through get_cachesize) when decreasing, and set the cache size to its max value (obtained through get_cache_max; plus I do something like cacheMaxSize * 0.75 if < 500MB) when increasing.
    I can reproduce this case, and I think the result is expected. When using DbEnv->set_cachesize() to resize the cache after the env is opened, the ncache parameter is ignored. Please refer to the BDB doc here: http://docs.oracle.com/cd/E17076_02/html/api_reference/C/envset_cachesize.html. Hence I don't think you can decrease the cache size by setting the number of caches to 0.
    Hope it helps.
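    The region rounding described above can be illustrated with a small calculation using the numbers from this thread; effectiveSize is a hypothetical helper for illustration, not a BDB API:

```java
public class CacheRounding {
    // Effective cache size after a resize: BDB rounds the request up to a
    // whole number of regions, and the cache cannot shrink below one region.
    public static long effectiveSize(long requestedBytes, long regionBytes) {
        long regions = Math.max(1, (requestedBytes + regionBytes - 1) / regionBytes);
        return regions * regionBytes;
    }

    public static void main(String[] args) {
        long region = 512L * 1024 * 1024; // one 512MB region, as in the thread
        // Asking for ~9MB still yields the full 512MB region:
        System.out.println(effectiveSize(9502720L, region)); // prints 536870912
    }
}
```

    This is why setting the cache back to a small "initial" value has no effect when the whole cache was created as a single large region.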

  • Global temporary table data problem with PRAGMA autonomous_transaction

    hi gurus,
    I have two global temporary tables, both with ON COMMIT PRESERVE ROWS:
    g_emp
    g_hop_brk
    In a procedure, I populate g_emp and then do some manipulation on that table. Then, in a loop, I call a function declared with PRAGMA AUTONOMOUS_TRANSACTION. This function refers to g_emp, inserts into g_hop_brk, and commits. After the loop ends, I fetch data from g_hop_brk, and the procedure ends.
    With this logic I do not get any values the first time the loop runs, but from the second time on I do.
    Please tell me why this is happening, and how to make it work.
    thanks

    On the second iteration of the loop, the parent transaction has committed after the first iteration, so some data will be visible.
    If you are merely trying to log the value the function is going to return, the function itself should not be an autonomous transaction. A separate procedure should be created to log the return value, that procedure should be an autonomous transaction, and the procedure should be called just before you return, i.e.
    CREATE OR REPLACE PROCEDURE log_something( p_return_val IN NUMBER )
    IS
      PRAGMA autonomous_transaction;
    BEGIN
      INSERT INTO log_table ...
      COMMIT;
    END;
    CREATE OR REPLACE FUNCTION your_function( some_parameters )
      RETURN something
    IS
      l_ret_val number;
    BEGIN
      log_something( l_ret_val );
      RETURN l_ret_val;
    END;
    The procedure never has to query any tables, so the fact that it can't see the parent's uncommitted changes is irrelevant.
    Justin

  • Problems with JCO Interface CACHE

    Hi All,
    I'm reading data from a BAPI that is still under development; we are changing this BAPI frequently. I would like to know how I can clear the cache that xMII makes of the BAPI input and output parameters.
    I'm using xMII 12.0.1.
    Thanks in advance

    Try this, I don't know if it will work in 12.0.
    Go into the action block for the BAPI and configure it.  When you click "OK" to close the window, it should ask if you want to "Generate Request/Response Documents". 
    This should communicate with the BAPI and refresh the information in BLS.

  • Problem with video playback caching to disk

    I have a piece of code that opens a netconnection to a FLV
    file that is essentially identical to the netstream example in the
    online docs (see below).
    The video in question is being generated on the fly by a 3rd-party app, but while it is presented in a SWF running in a browser it is also automatically being cached to disk. Since I am generating the video live and do not wish to keep it, I would much prefer to avoid writing anything to disk (since I will eventually run out of space if the SWF runs too long). Is there any way to clear the data in a NetConnection or NetStream to avoid it acting in this fashion?

    That's what I thought as well, but the browser (Firefox,
    Linux) happily caches until it runs out of disk space. I have set
    the cache size in the browser to a small value but it shows the
    same behaviour.
    Is it making an assumption about progressive download, rather
    than streaming?

  • Problems with iPhoto thumbnail cache?

    I am running iPhoto 7.1.1 under MacOS 10.4.11 and have started experiencing the following problem: I can’t seem to view the thumbnails of the Photos in my iPhoto library. I am only able to view full-size images. I can view thumbnails of Events, but not of Photos.
    I have two other Macs running the same version of iPhoto, and neither exhibits these symptoms.

    Look down at the bottom right-hand corner of the screen - is the slider there all the way to the right? If so, slide it to the left to get the thumbnail size you want.
    Sorry I missed that as a possibility the first go-around - I did not realize where you were missing thumbnails.
    LN

  • Problem with JCS Caching

    Hi,
    I am using JCS for Disk Based Caching. I don't want to use memory cache. So I set the configuration like this:
    ##### Default Region Configuration
    jcs.default=DC
    jcs.default.cacheattributes=org.apache.jcs.engine.CompositeCacheAttributes
    jcs.default.cacheattributes.MaxObjects=0
    jcs.default.cacheattributes.MemoryCacheName=org.apache.jcs.engine.memory.lru.LRUMemoryCache
    ##### CACHE REGIONS
    jcs.region.myRegion1=DC
    jcs.region.myRegion1.cacheattributes=org.apache.jcs.engine.CompositeCacheAttributes
    jcs.region.myRegion1.cacheattributes.MaxObjects=0
    jcs.region.myRegion1.cacheattributes.MemoryCacheName=org.apache.jcs.engine.memory.lru.LRUMemoryCache
    jcs.region.myRegion1.cacheattributes.MaxSpoolPerRun=100
    ##### AUXILIARY CACHES
    # Indexed Disk Cache
    jcs.auxiliary.DC=org.apache.jcs.auxiliary.disk.indexed.IndexedDiskCacheFactory
    jcs.auxiliary.DC.attributes=org.apache.jcs.auxiliary.disk.indexed.IndexedDiskCacheAttributes
    jcs.auxiliary.DC.attributes.DiskPath=target/test-sandbox/indexed-disk-cache
    jcs.auxiliary.DC.attributes.MaxPurgatorySize=1000
    jcs.auxiliary.DC.attributes.MaxKeySize=1000
    jcs.auxiliary.DC.attributes.MaxRecycleBinSize=5000
    jcs.auxiliary.DC.attributes.OptimizeAtRemoveCount=300000
    jcs.auxiliary.DC.attributes.OptimizeOnShutdown=true
    jcs.auxiliary.DC.attributes.EventQueueType=POOLED
    jcs.auxiliary.DC.attributes.EventQueuePoolName=disk_cache_event_queue
    ################## OPTIONAL THREAD POOL CONFIGURATION ########
    # Disk Cache Event Queue Pool
    thread_pool.disk_cache_event_queue.useBoundary=false
    thread_pool.remote_cache_client.maximumPoolSize=15
    thread_pool.disk_cache_event_queue.minimumPoolSize=1
    thread_pool.disk_cache_event_queue.keepAliveTime=3500
    thread_pool.disk_cache_event_queue.startUpSize=1
    I set MaxObjects to 0 so that all items are cached on disk.
    jcs.default.cacheattributes.MaxObjects=0
    As I set MaxKeySize=1000, after writing 1000 items I call jcs.freeMemoryElements(1000) so that all the items are successfully written to disk. But this method throws an exception: Update: Last item null.
    jcs.auxiliary.DC.attributes.MaxKeySize=1000
    I wrote 2000 items to the cache, but I am able to get only 1000 items back.
    What am I doing wrong? I want to use this disk caching for a large number of items [around 50 lakh, i.e. 5 million].

    Hi,
    I am having a similar problem with the JCS indexed cache mechanism for writing data to disk, but I have not been successful with the cache.ccf configuration below.
    Also, I am not able to find the complete details from the above post: http://www.nabble.com/Unable-to-get-the-stored-Cache-td23331442.html
    Here is my configuration. I do not want to save the objects in memory; I want everything to be on disk. Please help me.
    I appreciate your quick response.
    # sets the default aux value for any non configured caches
    jcs.default=DC
    jcs.default.cacheattributes=org.apache.jcs.engine.CompositeCacheAttributes
    jcs.default.cacheattributes.MaxObjects=1000
    jcs.default.cacheattributes.MemoryCacheName=org.apache.jcs.engine.memory.lru.LRUMemoryCache
    jcs.default.elementattributes.IsEternal=false
    jcs.default.elementattributes.MaxLifeSeconds=3600
    jcs.default.elementattributes.IdleTime=1800
    jcs.default.elementattributes.IsSpool=true
    jcs.default.elementattributes.IsRemote=true
    jcs.default.elementattributes.IsLateral=true
    # CACHE REGIONS AVAILABLE
    # Regions preconfigured for caching
    jcs.region.monitoringCache=DC
    jcs.region.monitoringCache.cacheattributes=org.apache.jcs.engine.CompositeCacheAttributes
    jcs.region.monitoringCache.cacheattributes.MaxObjects=0
    jcs.region.monitoringCache.cacheattributes.MemoryCacheName=org.apache.jcs.engine.memory.lru.LRUMemoryCache
    # AUXILIARY CACHES AVAILABLE
    # Primary Disk Cache -- faster than the rest because of memory key storage
    jcs.auxiliary.DC=org.apache.jcs.auxiliary.disk.indexed.IndexedDiskCacheFactory
    jcs.auxiliary.DC.attributes=org.apache.jcs.auxiliary.disk.indexed.IndexedDiskCacheAttributes
    jcs.auxiliary.DC.attributes.DiskPath=C:/TestCache
    jcs.auxiliary.DC.attributes.MaxPurgatorySize=10000
    jcs.auxiliary.DC.attributes.MaxKeySize=10000
    jcs.auxiliary.DC.attributes.OptimizeAtRemoveCount=300000
    jcs.auxiliary.DC.attributes.MaxRecycleBinSize=7500
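    Beyond MaxObjects=0, JCS regions also have a DiskUsagePatternName attribute (if your JCS version supports it) whose UPDATE value writes every element straight to the disk auxiliary rather than only spooling on memory eviction; a sketch for the monitoringCache region from the post above:

```properties
# Disk-only region: no memory objects, write-through to the disk auxiliary
jcs.region.monitoringCache=DC
jcs.region.monitoringCache.cacheattributes=org.apache.jcs.engine.CompositeCacheAttributes
jcs.region.monitoringCache.cacheattributes.MaxObjects=0
jcs.region.monitoringCache.cacheattributes.DiskUsagePatternName=UPDATE
```

    Note that MaxKeySize on the disk auxiliary still caps how many keys the indexed disk cache keeps, which matches the "2000 written, 1000 retrievable" symptom reported above.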

  • Problem with Preview Cache and LR5 keeps shutting down

    I keep trying to open LR5 and it gives me an error message that says it encountered a problem with the preview cache and needs to shut down, and that it will attempt to fix the problem next time I open LR5. It keeps happening and won't allow me to open LR5 at all. What should I do?

    Delete .lrdata folder (the whole thing) - it's in with your catalog.
    Do NOT delete your catalog (.lrcat file) or anything else.
    If you have 2 .lrdata folders, delete the one that does NOT have "Smart Previews" in the name.
    Rob

  • Problem with RFC INTEGRATION_DIRECTORY_HMI

    Hello !
    I have a problem with the RFC connection INTEGRATION_DIRECTORY_HMI.
    When I test it, SAP asks me for a user/password in a pop-up window.
    I'm sure that SAP shouldn't ask me for another user/password, since I filled it in (XIISUSER) in the definition of the RFC destination.
    Could someone help me?
    Thanks in advance

    Hi,
    this problem is strange. I also have this problem with my
    test installation. Even when I try the trick from Lui it
    doesn't work in my case. The popup screen still appears.
    I furthermore have a problem with the ABAP cache refresh in
    full mode, which I describe in another topic. I would also be
    interested in how to get rid of this problem.
    For Martial:
    did you install a new XI system with NW04 SR1 on Windows,
    e.g. Windows 2000 Server? Did you also have the cache problem on the ABAP side?
    regards,
    Ly-Na Phu
    Message was edited by: Ly-Na Phu

  • Problem with a slash (/) at the end of contextPath

    Hi,
              Weblogic server version 8.1.3.
              We just moved an existing application from tomcat to weblogic. The application context path is "/cm".
              In tomcat the request.getContextPath() returns "/cm"
              in weblogic the request.getContextPath() returns "/cm/" and it causes problems with our external caching product because URL results are double cached "http://server.com/cm/uri" and "http://server.com/cm//uri".
              How can I specify to weblogic web server not to add the "/" at the end ?
              thanks

    This is a bug in 81sp3. Please contact [email protected] in reference to
              bug id CR239600

  • Viewing Excel Files using Tomcat - Problem with caching

    Hi all,
    A small part of an application I'm writing has links to Excel files for users to view/download. I'm currently using Tomcat v5 as the web/app server and have some very simple code (an example is shown below) which calls the excel file.
    <%@ page contentType = "application/vnd.ms-excel" %>
    <%
    response.setHeader("Pragma", "no-cache");
    response.setHeader("Cache-Control", "no-cache");
    response.setDateHeader("Expires", 0);
    response.sendRedirect("file1.xls");
    %>
    This all works, except that I'm having one big problem.
    The xls file (file1.xls) is updated via a share on the server, so each month the xls file is overwritten with the same name but with different contents. I'm finding that when an update is made to the xls file and the user then attempts to view the new file in the browser, they receive only the old xls file. It's caching the xls file and I don't want it to. How can I fix this so that it automatically gives the user the new, updated file?
    The only way I've managed to get Tomcat to do this is to delete the work directory, delete the file from my IE temp folder, and then restart Tomcat - this is a bit much!
    Any help would be greatly appreciated.
    Thanks.

    I had a problem with caching a few years back, for a servlet request which returned an SVG file.
    As a workaround, I ended up appending "#" and a timestamp / random number after the URL. The browser assumed each request was new, and didn't use the cache.
    Eg.
    http://myserver/returnSVG.do#1234567
    where 1234567 is a timestamp / random.
    Not sure whether you can do this on a file based URL... but maybe worth a shot...
    regards,
    Owen
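    The cache-busting idea can be sketched in plain Java; note that a "#fragment" is never sent to the server, so a query parameter is the more reliable variant when a proxy or server-side cache is involved (the class and method names here are illustrative):

```java
public class CacheBuster {
    // Append a timestamp query parameter so each request URL is unique
    // and caches treat it as a new resource. (A "#fragment" only affects
    // the browser; a "?param" changes the cache key for proxies too.)
    public static String bust(String url, long timestamp) {
        String sep = url.contains("?") ? "&" : "?";
        return url + sep + "t=" + timestamp;
    }

    public static void main(String[] args) {
        System.out.println(bust("http://myserver/file1.xls", System.currentTimeMillis()));
    }
}
```

    For the Excel case above, the redirect target would become something like file1.xls?t=1234567, which defeats the stale copy without touching Tomcat's work directory.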

  • Have a problem with Lightroom 5.4. Since the program crashed yesterday it won't launch, it comes up with the message "Lightroom encountered an error when reading its preview cache and needs to quit". "Lightroom will attempt to fix this problem next time it launches"

    Have a  problem with Lightroom 5.4.  Since the program crashed yesterday it won't launch, it comes up with the message "Lightroom encountered an error when reading its preview cache and needs to quit".  "  Lightroom will attempt to fix this problem next time it launches".  Except that it doesn't, I keep getting the same message and the program closes.  Does anyone know what I  can do to repair it?  Can't back up, can't do anything.

    There are dozens of threads in this forum that describe the fix
