Cache Expiration

Hi,
Can anyone let me know how to set cache expiration based on access time instead of insert time?
Or is there any way to identify the cache expiry value of an existing entry?
I am using Coherence 3.4, and I need to reset the expiry time of entries that were inserted with an expiry value each time I access them.
It would be great if someone could help me figure this out.
Thanks!

Hi user2030814,
If you want to set the expiration time based on last access instead of insert/modify time, you can do something like this:
<?xml version="1.0"?>
<!DOCTYPE cache-config SYSTEM "cache-config.dtd">
<cache-config>
    <caching-scheme-mapping>
        <cache-mapping>
            <cache-name>test</cache-name>
            <scheme-name>test</scheme-name>
        </cache-mapping>
    </caching-scheme-mapping>
    <caching-schemes>
        <distributed-scheme>
            <scheme-name>test</scheme-name>
            <backing-map-scheme>
                <local-scheme>
                    <class-name>MyLocalScheme</class-name>
                    <expiry-delay>2s</expiry-delay>
                    <flush-delay>1s</flush-delay>                   
                </local-scheme>
            </backing-map-scheme>
        </distributed-scheme>
    </caching-schemes>
</cache-config>
import com.tangosol.net.cache.CacheLoader;
import com.tangosol.net.cache.LocalCache;
import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;
import com.tangosol.util.SafeHashMap;
import com.tangosol.util.Base;

/**
 * LocalCache subclass that slides an entry's expiry forward on every access.
 *
 * @author dimitri
 */
public class MyLocalScheme extends LocalCache {
    public MyLocalScheme() {
        super();
    }

    public MyLocalScheme(int cUnits) {
        super(cUnits);
    }

    public MyLocalScheme(int cUnits, int cExpiryMillis) {
        super(cUnits, cExpiryMillis);
    }

    public MyLocalScheme(int cUnits, int cExpiryMillis, CacheLoader loader) {
        super(cUnits, cExpiryMillis, loader);
    }

    protected SafeHashMap.Entry instantiateEntry() {
        return new MyEntry();
    }

    public class MyEntry extends LocalCache.Entry {
        public void touch() {
            super.touch();
            // Push the expiry out to "now + the cache's configured expiry delay" on each access
            // (the entry's own getExpiryMillis() would return the absolute expiry time).
            setExpiryMillis(getSafeTimeMillis() + MyLocalScheme.this.getExpiryDelay());
            scheduleExpiry();
        }
    }

    // test harness
    public static void main(String[] asArg) {
        try {
            NamedCache cache = CacheFactory.getCache("test");
            cache.put("foo", "bar");

            // keep reading the entry, which resets its expiration time
            for (int i = 0; i < 10; i++) {
                Thread.sleep(1500L);
                CacheFactory.log("Entry in the cache: " + cache.get("foo"), CacheFactory.LOG_DEBUG);
            }

            // now let it expire
            Thread.sleep(2500L);
            CacheFactory.log("Entry in the cache: " + cache.get("foo"), CacheFactory.LOG_DEBUG);
        } catch (Throwable oops) {
            Base.err(oops);
        } finally {
            CacheFactory.shutdown();
        }
    }
}
Regards,
Dimitri

Similar Messages

  • Configuring Cache Expiration in Toplink using Jdeveloper

    Hi,
    I am using TopLink 10g and JDeveloper 10g.
    I have a batch job running over more than 25K records.
    It initially performs a findAll query, then updates the data and calls commit on the UnitOfWork (which merges the entity and all related entities). Commit is called after every 100 entities are updated.
    After commit, a new UnitOfWork is acquired from the Session.
    I am monitoring memory consumption; it keeps increasing until it reaches the maximum, and even after the batch job ends it never goes back down.
    I assume all those merged entities are now in the cache.
    I tried different types of IdentityMaps (i.e. different types of cache), but none helped.
    I also tried to set expiry at the query level, but found no difference in memory consumption.
    I assume that setting cache expiration at the project level might help,
    but I couldn't find any way to set cache expiration at the project level in JDeveloper.
    I opened the map (configured in sessions.xml) and found the setting for which kind of cache to use, but couldn't find a way to set the expiry.
    Please also suggest any other way to reduce memory consumption in the batch job while using TopLink.
    Thanks

    Cache expiration has nothing to do with your issue.
    Assuming you used a weak cache, the cache is probably not what is holding these objects in memory. A weak cache will allow any object that is not referenced by your application to be garbage collected.
    Does your code or application hold onto these objects or the UnitOfWork? They will not be free to be garbage collected until you remove any references to them.
    Are you holding onto the original findAll query results? If so, then this will keep all of these objects in memory. The UnitOfWork should still be free to garbage collect, so you will need to find out what is holding onto the UnitOfWork or the objects within it. You may wish to use a memory profiler such as JProfiler.
    What exact version of TopLink are you using? I believe there were some memory related issues in 10g, so make sure you have the latest patch release.
    Depending on your TopLink version you could also try an isolated cache.
    James : http://www.eclipselink.org
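    For what it's worth, here is a minimal sketch of the batching pattern James describes, assuming the TopLink 10g oracle.toplink.sessions Session/UnitOfWork API; MyEntity, markProcessed() and fetchNextBatch() are hypothetical placeholders for your own domain class and paging query:
    import java.util.Collections;
    import java.util.List;
    import oracle.toplink.sessions.Session;
    import oracle.toplink.sessions.UnitOfWork;

    public class BatchUpdateSketch {
        /** Hypothetical domain class standing in for whatever the batch job updates. */
        public static class MyEntity {
            private boolean processed;
            public void markProcessed() { this.processed = true; }
        }

        public void run(Session session) {
            List<MyEntity> batch;
            while (!(batch = fetchNextBatch(session, 100)).isEmpty()) {
                // One short-lived UnitOfWork per batch instead of one huge one.
                UnitOfWork uow = session.acquireUnitOfWork();
                for (MyEntity e : batch) {
                    MyEntity working = (MyEntity) uow.registerObject(e);
                    working.markProcessed();
                }
                uow.commit();   // after commit, keep no reference to the UnitOfWork or its clones
                batch.clear();  // drop strong references so a weak cache can let the objects go
            }
        }

        /** Placeholder: page through the table with a query instead of one huge findAll(). */
        private List<MyEntity> fetchNextBatch(Session session, int pageSize) {
            return Collections.emptyList();
        }
    }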

  • ADF deployment cache expiration

    Hi,
    I have an ADF application, and to improve performance I am testing better use of the local web browser cache.
    I have deployed the application with the following additional settings in the deployment plan:
    <expiration-setting expires="never" url-pattern="*.gif"> </expiration-setting>
    <expiration-setting expires="never" url-pattern="*.png"> </expiration-setting>
    After deployment, using the System MBean browser in the OC4J console, I can see this has been applied.
    Testing the application with Firefox and looking at the cache (using CacheViewer) and the HTTP headers (using Live HTTP Headers), I can see the expiration date is always set to 01/01/1970 (see below).
    Is there anything wrong with the expiration-setting? Or is there another way to enable web browser caching for ADF applications?
    thank you for any help and info,
    Yves
    GET /appname/faces/images/graphLegend/LEG6.png HTTP/1.1
    Host: domain:port
    User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; nl; rv:1.8.1.4) Gecko/20070515 Firefox/2.0.0.4
    Accept: image/png,*/*;q=0.5
    Accept-Language: nl,en-us;q=0.7,en;q=0.3
    Accept-Encoding: gzip,deflate
    Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
    Keep-Alive: 300
    Connection: keep-alive
    Referer: http://domain:port/appname/faces/app/fc/dpr_work_screen.jspx
    Cookie: JSESSIONID=...; oracle.uix=0^^GMT+2:00; OHS-domain-port=...
    HTTP/1.x 200 OK
    Date: Thu, 27 Sep 2007 11:08:46 GMT
    Server: Oracle-Application-Server-10g/10.1.3.1.0 Oracle-HTTP-Server
    Pragma: no-cache
    Cache-Control: no-store
    Surrogate-Control: no-store
    Expires: Thu, 01 Jan 1970 12:00:00 GMT
    Content-Length: 181
    Set-Cookie: JSESSIONID=...; path=/appname
    Content-Location: /images/graphLegend/LEG6.png
    Last-Modified: Thu, 27 Sep 2007 10:57:58 GMT
    Keep-Alive: timeout=15, max=94
    Connection: Keep-Alive
    Content-Type: image/png

    Hi,
    I want to try expiration-settings in our application and see how they improve its performance. To which file did you add the settings below?
    <expiration-setting expires="never" url-pattern="*.gif"> </expiration-setting>
    <expiration-setting expires="never" url-pattern="*.png"> </expiration-setting>
    Thanks
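    Not an answer to which descriptor the settings belong in, but if the deployment-plan route keeps being overridden, one common alternative is a small servlet filter that stamps the cache headers on static resources itself. A sketch using only the standard javax.servlet API (the one-year lifetime and the filter name are illustrative, not anything ADF-specific):
    import java.io.IOException;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.servlet.http.HttpServletResponse;

    /** Adds long-lived cache headers; map it to *.gif and *.png in web.xml. */
    public class StaticResourceCacheFilter implements Filter {
        private static final long ONE_YEAR_MILLIS = 365L * 24 * 60 * 60 * 1000;

        public void init(FilterConfig filterConfig) throws ServletException { }

        public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
                throws IOException, ServletException {
            HttpServletResponse resp = (HttpServletResponse) response;
            // Stamp the headers before handing off to whatever serves the image.
            resp.setHeader("Cache-Control", "public, max-age=31536000");
            resp.setDateHeader("Expires", System.currentTimeMillis() + ONE_YEAR_MILLIS);
            chain.doFilter(request, response);
        }

        public void destroy() { }
    }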

  • OSB result caching : expiration

    Hi,
    Does anyone know if it is possible to override the TTL (set at design time) of result caching at deployment time (using the customization file, for example, or WLST)?
    regards,
    mathieu

    Thank you, both of you.
    While an XQuery expression based on an input parameter is definitely an option, it is not applicable in our case (the cache feature should be completely invisible to consumers).
    I think I'll dig into the Ant script to modify the business service file contained in the archive.
    regards,
    mathieu

  • How to save web service request key and response value in cache to reduce calling the service for same type of requests

    Hi
    I have a web service which returns a response based on the request key.
    I need to cache the key and the response value for around 30 minutes to reduce web service calls and improve performance.
    I would appreciate it if anyone could share sample code.

    using System;
    using System.Collections.Generic;
    using System.Runtime.Caching;

    public List<string> cachingwebserviceresponse()
    {
        // Create a cache key
        string strParameters = "1234";
        // Create an object to hold the cached value
        List<string> results = new List<string>();
        // Get the default memory cache and use the parameters as the key
        ObjectCache cache = MemoryCache.Default;
        string cacheKey = strParameters;
        // If the key exists, take the value from the cache instead of calling the web service;
        // otherwise call the web service and add the response to the cache.
        if (cache.Contains(cacheKey))
        {
            results = (List<string>)cache.Get(cacheKey);
        }
        else
        {
            // Call the web service client
            using (service.webservice fd = new service.webserviceContractClient())
            {
                // Call the web service function to get the results
                results = fd.DataSearch(strParameters);
            }
            // Create the cache expiration policy; here the entry expires after 30 minutes.
            CacheItemPolicy cacheItemPolicy = new CacheItemPolicy();
            cacheItemPolicy.AbsoluteExpiration = DateTimeOffset.Now.AddMinutes(30);
            // Add the response to the cache under the key
            cache.Add(cacheKey, results, cacheItemPolicy);
        }
        return results;
    }

  • Parse Expires HTTP Header

    Hello,
    I'm writing an HTTP proxy server and I want to compare the cached Expires header with the current date, so I need to convert the Header object that I get from Jakarta HttpClient into a Date object. I tried the following:
    /** Expires Header format. */
    private final static DateFormat expiresFormat = new SimpleDateFormat("EEE, dd MMM yyyy HH:mm:ss 'GMT'");
    /** Expires header as a date. */
    private Date expires;
    Header expires = method.getResponseHeader("Expires");
    // Parse the expires header.
    try
      this.expires = expiresFormat.parse(expires.toString());
    catch (ParseException e)
      logger.fatal("Bad date format in header: "+expires);
      throw new IllegalArgumentException(
        "Bad date format in header: "+expires);
    I get the exception "Exception in thread "Thread-1" java.lang.IllegalArgumentException: Bad date format in header: Expires: Wed, 01 Jan 2020 01:01:01 GMT" and I really don't know why :-/
    I've tried so many "formats", but hmm, nothing worked so far :-/
    greetings,
    Johannes

    Yes, but oh well, it doesn't work. Now I even tried
    /** Expires Header format. */
    private final static DateFormat expiresFormat = new
    SimpleDateFormat(
    "'Wed, 01 Jan 2020 01:01:01 GMT'");
    That is illogical.
    SimpleDateFormat takes a pattern. That isn't a pattern.
    Presumably you intend to escape the entire thing? But then what exactly is it supposed to be parsing?
    And there is at least one bug with escaping so there could be others.
    My suggestion was to stop escaping everything by removing the elements that you were already escaping. That would leave you just with a regular pattern.
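    For what it's worth, a minimal sketch of the parsing that usually works here: pass the header value (Header.getValue() in Jakarta Commons HttpClient) rather than the whole Header object, whose toString() prepends "Expires: ", and pin the format to Locale.US so the English day and month names parse regardless of the default locale:
    import java.text.ParseException;
    import java.text.SimpleDateFormat;
    import java.util.Date;
    import java.util.Locale;
    import java.util.TimeZone;

    public class ExpiresParser {
        // RFC 1123 date format used by HTTP headers; Locale.US keeps "Wed"/"Jan" parseable.
        private static final SimpleDateFormat RFC1123 =
                new SimpleDateFormat("EEE, dd MMM yyyy HH:mm:ss zzz", Locale.US);
        static {
            RFC1123.setTimeZone(TimeZone.getTimeZone("GMT"));
        }

        public static Date parseExpires(String headerValue) throws ParseException {
            // headerValue is just the value, e.g. "Wed, 01 Jan 2020 01:01:01 GMT",
            // i.e. header.getValue(), not header.toString() which includes the header name.
            return RFC1123.parse(headerValue);
        }

        public static void main(String[] args) throws ParseException {
            System.out.println(parseExpires("Wed, 01 Jan 2020 01:01:01 GMT"));
        }
    }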

  • Proxy server v4.0.9 not caching as expected

    I have two web proxy servers configured in reverse mode and in the same proxy array.
    I'm just doing basic testing and noticed that a document is not cached by the proxy array, although I think it should be.
    The proxy access log is as follows (nothing in the error log):
    192.168.101.245 - - [20/Apr/2009:15:20:23 +0100] "GET /at06_REDESIGN.css HTTP/1.1" p2c_hl:489 p2c_cl:27147 p2c_rc:200 r2p_hl:415 r2p_cl:27147 r2p_rc:200 c2p_hl:596 c2p_cl:- p2r_hl:671 p2r_cl:- DNS_time:5 p2r_cnx_time:0 p2r_init_wait:8 p2r_full_wait:18 Total_time(us):24000 c_fin_status:FIN r_fin_status:FIN cache_status:ABORTED
    I can't figure out why the final cache status is always ABORTED, while the proxy does cache other documents of the same kind (.js files, for example).
    Here's the full HTTP request and response, from the browser's point of view, when I try to get the document from a non-master member of the array:
    http://livecache002.front.dc2.mydomain.com:8080/at06_REDESIGN.css
    GET /at06_REDESIGN.css HTTP/1.1
    Host: livecache002.front.dc2.mydomain.com:8080
    User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-GB; rv:1.9.0.8) Gecko/2009032609 Firefox/3.0.8
    Accept: text/css,*/*;q=0.1
    Accept-Language: en-gb
    Accept-Encoding: gzip,deflate
    Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
    Keep-Alive: 300
    Connection: keep-alive
    Referer: http://livecache002.front.dc2.mydomain.com:8080/QA/siteundermaintenance.html
    Cookie: s_lastvisit=1239889603182; CP=null*; JSESSIONID=864758F37F32BED06B37DD228F795F34; s_cc=true; SC_LINK=%5B%5BB%5D%5D; s_sq=%5B%5BB%5D%5D
    HTTP/1.x 200 OK
    Content-Length: 27147
    Content-Type: text/css
    Date: Mon, 20 Apr 2009 13:42:15 GMT
    Server: Apache/1.3.27 (Unix) PHP/4.3.0 mod_gzip/1.3.19.1a mod_jk/1.2.2
    Cache-Control: max-age=3600
    Expires: Mon, 20 Apr 2009 14:42:15 GMT
    Last-Modified: Fri, 03 Apr 2009 15:33:19 GMT
    Etag: "3b57f-6a0b-49d62c3f"
    Accept-Ranges: bytes
    Via: 1.1 proxy-cache2
    Proxy-agent: Sun-Java-System-Web-Proxy-Server/4.0
    Here's what I see if I access through the master proxy of the array, called livecache001:
    http://livecache001.front.dc2.mydomain.com:8080/at06_REDESIGN.css
    GET /at06_REDESIGN.css HTTP/1.1
    Host: livecache001.front.dc2.mydomain.com:8080
    User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-GB; rv:1.9.0.8) Gecko/2009032609 Firefox/3.0.8
    Accept: text/css,*/*;q=0.1
    Accept-Language: en-gb
    Accept-Encoding: gzip,deflate
    Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
    Keep-Alive: 300
    Connection: keep-alive
    Referer: http://livecache001.front.dc2.mydomain.com:8080/QA/siteundermaintenance.html
    Cookie: CP=null*; s_lastvisit=1239889603182; JSESSIONID=3E4EC776B5BA3ADF54FE05F87623034B; s_cc=true; SC_LINK=%5B%5BB%5D%5D; s_sq=%5B%5BB%5D%5D
    HTTP/1.x 200 OK
    Content-Length: 27147
    Content-Type: text/css
    Date: Mon, 20 Apr 2009 14:20:23 GMT
    Server: Apache/1.3.27 (Unix) PHP/4.3.0 mod_gzip/1.3.19.1a mod_jk/1.2.2
    Cache-Control: max-age=3600
    Expires: Mon, 20 Apr 2009 15:20:23 GMT
    Last-Modified: Fri, 03 Apr 2009 15:33:19 GMT
    Etag: "3b57f-6a0b-49d62c3f"
    Accept-Ranges: bytes
    Via: 1.1 proxy-cache2, 1.1 proxy-master
    Proxy-agent: Sun-Java-System-Web-Proxy-Server/4.0, Sun-Java-System-Web-Proxy-Server/4.0
    The master determines the document should be retrieved from the other proxy, but since that proxy doesn't
    cache the document, the master can't help in such a case.
    Here's what I see for a successfully cached document (retrieved from the master proxy in the array by the non-master proxy):
    http://livecache002.front.dc2.mydomain.com:8080/common/browsing_func.js
    GET /common/browsing_func.js HTTP/1.1
    Host: livecache002.front.dc2.mydomain.com:8080
    User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-GB; rv:1.9.0.8) Gecko/2009032609 Firefox/3.0.8
    Accept: */*
    Accept-Language: en-gb
    Accept-Encoding: gzip,deflate
    Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
    Keep-Alive: 300
    Connection: keep-alive
    Referer: http://livecache002.front.dc2.mydomain.com:8080/QA/siteundermaintenance.html
    Cookie: s_lastvisit=1239889603182; CP=null*; JSESSIONID=864758F37F32BED06B37DD228F795F34; s_cc=true; SC_LINK=%5B%5BB%5D%5D; s_sq=%5B%5BB%5D%5D
    HTTP/1.x 200 OK
    Content-Length: 8875
    Content-Type: application/x-javascript
    Date: Mon, 20 Apr 2009 13:42:15 GMT
    Server: Apache/1.3.27 (Unix) PHP/4.3.0 mod_gzip/1.3.19.1a mod_jk/1.2.2
    Cache-Control: max-age=3600
    Expires: Mon, 20 Apr 2009 14:42:15 GMT
    Last-Modified: Wed, 15 Oct 2008 11:00:17 GMT
    Etag: "31ff2-22ab-48f5cd41"
    Accept-Ranges: bytes
    Via: 1.1 proxy-master, 1.1 proxy-cache2
    Proxy-agent: Sun-Java-System-Web-Proxy-Server/4.0, Sun-Java-System-Web-Proxy-Server/4.0
    Basically, both proxies have exactly the same caching configuration: everything that can be cached should be cached, with no lower size limit.
    Any idea what could be wrong?

    Now that I have the error log level set to fine, I think I see the problem, which looks like a bug in the proxy:
    My (reverse) proxy server is in the "GMT+1" timezone. When it gets a document from the backend server,
    the header shows the document is valid for one hour, but the proxy erroneously considers, according to its
    error log, that the document is expired, as shown in the following error log extract:
    [21/Apr/2009:14:36:33] fine (25681): CORE7205: document http://himalia.nlw.mydomain/structure_images/footerNavDivider.gif will not be cached, expired on Tue Apr 21 14:36:33 2009
    14h36 is my proxy server local time, so it's 13h36 GMT.
    Here are the headers in the request and response:
    http://livecache001.front.dc2.mydomain:8080/structure_images/footerNavDivider.gif
    GET /structure_images/footerNavDivider.gif HTTP/1.1
    Host: livecache001.front.dc2.mydomain:8080
    User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-GB; rv:1.9.0.8) Gecko/2009032609 Firefox/3.0.8
    Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
    Accept-Language: en-gb
    Accept-Encoding: gzip,deflate
    Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
    Keep-Alive: 300
    Connection: keep-alive
    Cookie: CP=null*; s_lastvisit=1240319966219; rsi_ct=2009_4_20:1; rsi_segs=; JSESSIONID=13D24F4986FA8E67CC62E10F145103A5; s_sq=%5B%5BB%5D%5D; SC_LINK=%5B%5BB%5D%5D; s_cc=true
    If-Modified-Since: Wed, 18 Mar 2009 12:06:20 GMT
    If-None-Match: "2e551-2c-49c0e3bc"
    Cache-Control: max-age=0
    HTTP/1.x 304 Not Modified
    Date: Tue, 21 Apr 2009 13:36:33 GMT
    Server: Apache/1.3.27 (Unix) PHP/4.3.0 mod_gzip/1.3.19.1a mod_jk/1.2.2
    Etag: "2e551-2c-49c0e3bc"
    Expires: Tue, 21 Apr 2009 14:36:33 GMT
    Cache-Control: max-age=3600
    Via: 1.1 proxy-master
    Proxy-agent: Sun-Java-System-Web-Proxy-Server/4.0
    The response header states the document will expire at 14h36 GMT (i.e. 15h36 local time), but the error log states the document
    won't be cached because it has already expired, while it is only 14h36 local time.
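    Just to illustrate the arithmetic that the log suggests (this is not the proxy's actual code): comparing the GMT expiry against epoch milliseconds keeps the lifetime intact, whereas interpreting the same digits in the server's GMT+1 local zone shifts the expiry an hour earlier and makes a still-fresh document look expired.
    import java.text.SimpleDateFormat;
    import java.util.Date;
    import java.util.Locale;
    import java.util.TimeZone;

    public class ExpiryComparisonSketch {
        public static void main(String[] args) throws Exception {
            // An Expires value 30 minutes in the future, formatted in GMT as the backend does.
            Date expires = new Date(System.currentTimeMillis() + 30 * 60 * 1000L);
            SimpleDateFormat gmt = new SimpleDateFormat("EEE, dd MMM yyyy HH:mm:ss 'GMT'", Locale.US);
            gmt.setTimeZone(TimeZone.getTimeZone("GMT"));
            String header = gmt.format(expires);

            // Correct: parse the value back as GMT and compare epoch millis; the zone drops out.
            long parsedRight = gmt.parse(header).getTime();
            System.out.println("fresh when parsed as GMT:      " + (parsedRight > System.currentTimeMillis()));

            // Broken: treat the same digits as local time in a GMT+1 zone (simulated here).
            // The expiry lands an hour earlier, so a document valid for another 30 minutes
            // already looks expired, which is the same symptom as the CORE7205 log line.
            SimpleDateFormat local = new SimpleDateFormat("EEE, dd MMM yyyy HH:mm:ss 'GMT'", Locale.US);
            local.setTimeZone(TimeZone.getTimeZone("GMT+1"));
            long parsedWrong = local.parse(header).getTime();
            System.out.println("fresh when zone is mishandled: " + (parsedWrong > System.currentTimeMillis()));
        }
    }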

  • Clearing resource bundle cache - can it be done? how?

    How can I refresh/clear the resource bundle cache?
    I'm using JDK 1.6 (under Tomcat) and have been trying to use the ResourceBundle.clearCache() method as indicated in the [International Enhancements in Java SE 6|http://java.sun.com/developer/technicalArticles/javase/i18n_enhance/#cache-expire] article, with no effect.
    Code example:
    ResourceBundle.clearCache();
    ResourceBundle bundle = ResourceBundle.getBundle("conf.L10N.portal", ServletActionContext.getContext().getLocale());
    Changing the resource bundle file had no effect and the same values kept coming from the cache.
    Does the clearCache method work? Am I doing something wrong?
    Thanks,
    Elad

    This works for me:
    import java.io.*;
    import java.util.*;

    public class ResourceBundleTest {
        public static void main(String[] args) throws IOException {
            ResourceBundle res = ResourceBundle.getBundle("res");
            System.out.println(res.getString("foo"));
            System.out.print("edit properties then type any char:");
            int ch = System.in.read();
            ResourceBundle.clearCache();
            res = ResourceBundle.getBundle("res");
            System.out.println(res.getString("foo"));
        }
    }
    Could it be that you need the version of clearCache that takes a class loader? Have you tried using the time-to-live?
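    To Elad's situation specifically: in a web app the bundle may have been loaded through the webapp's class loader, so the no-argument clearCache() can miss it. A sketch (Java 6 API; the bundle name is the one from the post, the key is made up) that combines the class-loader variant of clearCache with a Control whose time-to-live disables caching:
    import java.util.Locale;
    import java.util.ResourceBundle;

    public class NoCacheBundles {
        // A Control whose time-to-live disables caching, so edited .properties files are re-read.
        private static final ResourceBundle.Control NO_CACHE = new ResourceBundle.Control() {
            @Override
            public long getTimeToLive(String baseName, Locale locale) {
                return ResourceBundle.Control.TTL_DONT_CACHE;
            }
        };

        public static ResourceBundle load(String baseName, Locale locale) {
            ClassLoader webappLoader = Thread.currentThread().getContextClassLoader();
            // Clear whatever that loader already cached, then load with caching disabled.
            ResourceBundle.clearCache(webappLoader);
            return ResourceBundle.getBundle(baseName, locale, webappLoader, NO_CACHE);
        }

        public static void main(String[] args) {
            // "someKey" is a made-up key for illustration.
            System.out.println(load("conf.L10N.portal", Locale.getDefault()).getString("someKey"));
        }
    }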

  • Prevent JNLP from being cached in the Java Web Start cache

    Hi,
    when I launch a JNLP it always gets cached in the Java Web Start cache. Is there a way to prevent it from being cached? Some attribute/property inside it?
    Thanks!

    Set the HTTP headers in the response from the server when the client downloads the JNLP file, e.g.:
    Cache-Control: no-cache
    Expires: (use the same date as the "Date" header)
    Note that over SSL on Internet Explorer 8 this will not work. All other configurations should be OK.
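    If the JNLP is produced by your own servlet rather than served as a static file, a sketch of that header combination with the standard javax.servlet API (the servlet and the buildJnlp() helper are illustrative, not part of Web Start):
    import java.io.IOException;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    /** Serves the JNLP descriptor with headers that keep Web Start from caching it. */
    public class JnlpServlet extends HttpServlet {
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            long now = System.currentTimeMillis();
            resp.setContentType("application/x-java-jnlp-file");
            resp.setHeader("Cache-Control", "no-cache");
            resp.setDateHeader("Date", now);
            resp.setDateHeader("Expires", now);  // same value as the Date header, as suggested above
            resp.getWriter().write(buildJnlp(req));
        }

        /** Placeholder that would assemble the real JNLP XML for this request. */
        private String buildJnlp(HttpServletRequest req) {
            return "<jnlp spec=\"1.0+\" codebase=\"" + req.getRequestURL() + "\"/>";
        }
    }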

  • Strange caching problem

    Dear Community,
    I have a problem with deployment of WebDynpro applications. These contain a number of static files (stored under src/mimes/Components/...) which are displayed in my application.
    Whenever I change those static files (not generated by WD) and redeploy the app, the changes are not visible. I tried disabling the IE cache with the Microsoft Developer Toolbar, but the content sent is still the old file.
    Does anyone have a clue on how to make those changes visible and where to flush any caches that hold these files?
    I found out about cache expiration of WD content, but this does not affect files such as HTML.
    Thanks for your help!
    Kind regards,
    Christian

    Hi folks,
    thanks for the answers, but unfortunately this isn't the solution. I'm using IE with the Developer Toolbar, so I can erase my browser's cache with one click and view all sources - and updating my WD app doesn't influence the caching behaviour.
    Has anyone had experience with setting parameters in the Visual Administrator? There is a bunch of properties to make WD content expire after x seconds - maybe there is one for MIME content?
    Thanks,
    Christian
    See SAP Note Number: 589444
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.highlightedcontent?documenturi=%2foss_notes%2fsdn_oss_bc_bsp%2f%7eform%2fhandler%7b5f4150503d3030323030363832353030303030303031393732265f4556454e543d444953504c4159265f4e4e554d3d353839343434%7d
    MIME objects are buffered in the server cache of the Web ApplicationServer. When you import MIME objects, these are not removed from the cache. Consequently, the old versions are still displayed.

  • cm:search and content caching and

    Hi All,
    This is my environment :
    Weblogic Portal 8.1 SP3.
    Documentum : We are using DCS for connecting to SCS target system and is used as content repository.
    If we use <cm:search> as below :
    <%
    String queryFor25Content = "i_chronicle_id in ('090000058000556e','090000058000487e','09000005800037c3','09000005800195bd','0900000580003116','0900000580011a61','0900000580004365','0900000580003a98','0900000580004299','09000005800030a9','0900000580004793','09000005800043da','090000058000310e','090000058001a622','0900000580021504','0900000580003536','090000058001bc11','090000058001a7bb','09000005800195b5','0900000580003e57','090000058002403c','0900000580023fed','0900000580024065','0900000580023f92','090000058001a971')";
    %>
    <cm:search id="contents" query="<%=queryFor25Content%>" useCache="true" cacheScope="application" cacheTimeout="300000"/>
    <%
    int size = contents.length;
    for(int i=0; i< size; i++){ 
    Node node = contents[i];
    %>
    -----------------Content Start-----------------
    <es:notNull item="<%=node%>">
    <cm:getProperty node="<%=node%>" name="_content" default="No Content Found"/>
    </es:notNull>
    -----------------Content End-----------------
    <% } %>
    In such case, this is my observation:
    The content query will not be fired again until the cache expires.
    What I would like to know is:
    Does this also cache the content returned by the query?
    As far as I can observe, the content objects themselves are not cached (meaning the binary property of the content); rather, all the other attributes of the content are cached.
    Can somebody comment from your experience? If I have to cache the whole content itself, should I create my own cache using the P13N cache APIs and put the com.bea.content.Node in the cache?
    TIA,
    Prashanth Bhat.

    Hi,
    Have you checked the following article?
    Content Index status of all or most of the mailbox databases in the environment shows "Failed"
    http://support.microsoft.com/kb/2807668
    After you create a new Active Directory group named "ContentSubmitters", please grant Administrators and NetworkService full access to the group. Then restart the search services to check the result.
    Here is a similar thread for your reference.
    https://social.technet.microsoft.com/Forums/exchange/en-US/fccf9dca-b865-4356-905b-33ac25dcc44d/exchange-2013-content-catalog-index-failed-all-databases?forum=exchangesvravailabilityandisasterrecovery
    Hope this is helpful to you.
    Best regards,
    Belinda Ma
    TechNet Community Support

  • Regarding Portlet caching on Portal 8.1

    Hi,
    I'm using WLP 8.1, where I have tried setting 'Render Cacheable' and 'Cache Expires'. But when I run the portal, my JPF shows the latest data, whereas it should show cached data. I also tried it in the Portal Administrator.
    Please suggest...
    Regards,
    Senthil

    In order to diagnose your problem you will need to download and install the below
    Install the WPT (Windows Performance Toolkit) 
    http://www.microsoft.com/en-us/download/details.aspx?id=30652
    Help with installation (if needed) is here
    When you have, open an elevated command prompt, type WPRUI.exe (which launches the Windows Performance Recorder) and check the boxes for the following:
    First level triage (if available), CPU usage, Disk IO.
    If your problem is not CPU or HD, then check the relevant box(es) as well (for example networking or registry). This will reboot your system, so make sure you save your work.
    Click Start
    Let it run for 60 secs or more and save the file (it will show you where it is being saved and what the file is called)
    Zip the file and upload to us on Onedrive (or any file sharing service) and give us a link to it in your next post.
    Wanikiya and Dyami--Team Zigzag

  • Clear TREX query cache

    Hi all,
    I'm stuck on this problem: I'm adding a document to a TREX search index using the KM API (Web Dynpro for Java app). The document is processed and added to the index. So far so good.
    The problem is that when I do a search on that index (I search for ''), the document won't show up, because the query results of the previous search for '' on my index seem to be cached somewhere. When I wait for 10 minutes (cache expires?) or restart my Web Dynpro app (so the user needs to log in again), the document is found.
    How can I clear/disable the trex query cache?
    Thanks a lot,
    Jeroen
    PS: clicking on the 'clear cache' button in Search and Classification Cache Administration (TREX monitor) is not working. The cache that is pestering me seems to be a session-specific query cache.

    TREX service @ visual admin: queries.usecache == false

  • Java DNS Caching

    I have my Java DNS cache expiring every five minutes by setting it up in java.security. My question is: what happens when the DNS server is unreachable, i.e. down? Does Java retain what it already had if it cannot reach a DNS server to update from, or does the domain being accessed become unavailable?
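    For reference, the two TTL knobs involved (normally edited in lib/security/java.security) can also be set programmatically, as long as it happens before the first lookup; a minimal sketch matching the five-minute setup described above (the negative TTL value here is just an example):
    import java.security.Security;

    public class DnsCacheTtlSetup {
        public static void main(String[] args) {
            // Must run before the first InetAddress lookup, since the policy is read once.
            Security.setProperty("networkaddress.cache.ttl", "300");          // successful lookups: 5 minutes
            Security.setProperty("networkaddress.cache.negative.ttl", "10");  // failed lookups: 10 seconds
        }
    }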

    Thank you for your response. At least now I know it isn't something that I misconfigured. It is interesting that if I add this line to obj.conf, the statistics function appears to work, even though according to the admin interface DNS caching is disabled: (I got this from the help system.)
    Init cache-size="1024" expire="1200" fn="ip-dns-cache-init"
    The original entry put there by the admin interface is:
    Init cache-size="1024" expire="1200" negative-dns-cache="yes" fn="host-dns-cache-init"
    I do not have a Sun support contract. Is there a procedure to report this bug, or have you already done that?

  • Csm arp cache timeout issues

    Hello all.
    The ARP cache timeout of the CSM is normally 4 hours.
    Now if we want to replace one of our servers and keep the old IP address, we would need to wait 4 hours before the new server's MAC address is learned.
    I know we can manually flush one entry from the ARP cache, but is there a way for the CSM to find out sooner that the MAC address has changed?
    I also know we can shorten the time before the cache expires, but what would be the consequences of setting the timer to, let's say, 1000 seconds?
    Would we then be flooding our network with ARP requests all the time?
    Finally, I would expect that if an ICMP request fails because of the changed MAC address, the CSM would make an ARP request to find out who has the IP I am trying to ping.
    What is the procedure if the ICMP request fails?
    Thank you.
    Daniel Levi

    I would not suggest using the manual method, since it is time consuming, and there is also a good chance that the new ARP request may load the CSM. I would suggest that you wait for the ARP cache timeout.
