Turn off Cache in OBIEE

Hi,
I am seeing some issues with the cache.
Is it possible to turn off the cache for all reports in OBIEE?
Or
Is it possible to purge the cache automatically from the RPD and from Answers?
Thanks,
Poojak

Hi ,
You can disable the cache in the NQSConfig.INI file.
You can purge the cache from Answers:
http://bintelligencegroup.wordpress.com/2011/01/06/how-to-purge-the-cache-from-dashboard/
and also from the RPD via Manage --> Cache, where you can purge entries,
or by using an event polling table:
http://bintelligencegroup.wordpress.com/2011/07/27/event-polling-table-to-purge-the-cache-in-obiee/
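For reference, the relevant section of NQSConfig.INI looks roughly like this (the parameter names come from the shipped file; the path and sizes below are illustrative only, so keep your existing values):
[ CACHE ]
ENABLE = NO;                                          # set back to YES to re-enable query caching
DATA_STORAGE_PATHS = "C:\OracleBIData\cache" 500 MB;  # illustrative path and size only
MAX_CACHE_ENTRIES = 1000;
After saving the change, restart the Oracle BI Server service for it to take effect.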
Hope this will help you.
Thanks,
Ananth

Similar Messages

  • CE - Turning off caching

    I have a CE510 and I also run SmartFilter. To a degree, the caching has been more problems than it is worth. I am constantly plagued by users having problems with forms, search sites, redirects, etc.
    I would like to turn off caching! I will be content with the CE handling NTLM authentication and SmartFilter content filtering.
    How do I turn off caching so that all requests are treated as unique requests, just as if we did not have the cache engine?
    Thanks!

    I do not believe there is a way to turn off the caching. You could remove the CFS partition on the CE, thereby preventing the CE from caching, as it needs the CFS partition to perform this task.
    You may also be able to do an "http reval-each-request all", thereby making the CE go get the content every time.
    Regards
    Pete..

  • Document Level JavaScript to turn off caching for a PDF?

    I have been trying to find a way to ensure that a fillable PDF leaves no trace of itself on a computer that was used to open it. For example, if a user accesses a fillable PDF via browser or Reader, once they are done and have closed the browser or Reader, I want no trace of the information to remain in any cache on that machine. We don't want to require our users to try to control the cache via settings in Reader, so I have been trying to find other ways to do this.
    I came across the following comment from GKaiseril in another thread - is this a way for a PDF to control its own caching? Any details or alternative approaches would be welcome!
    GKaiseril
    Re: Metadata - Can't remove
     You can also use document level JavaScripts to turn off the auto complete and caching for a given PDF.

    Versions 9 and 10 of Reader do not cache form data in a temporary FDF as previous versions may have. The nocache document property is not even documented in the latest Acrobat JavaScript reference. If you're concerned about previous versions, you can set the nocache document property at run-time (typically this.nocache = true in a document-level script), but users can disable JavaScript, so that approach is no guarantee.
    For information on controlling autocomplete, see: http://livedocs.adobe.com/acrobat_sdk/9.1/Acrobat9_1_HTMLHelp/JS_API_AcroJS.88.407.html

  • How to refresh cache in TopLink, turn off cache

    How does one refresh TopLink's cache as a result of a change event? As an example, if a stored procedure is triggered by a process independent of TopLink, how do I tell TopLink to refresh its cache?
    Also, how does one turn off the TopLink cache altogether? Thanks.

    I do not typically recommend turning the TopLink cache off altogether. The cache is also key for managing object identity. If you have data that changes outside of your Java application's knowledge, then there are several strategies for handling the stale-data situation in the cache.
    1. Make sure you set up a locking strategy so that you can prevent, or at least identify, when values have already changed on an object you are modifying. Typically this is done using optimistic locking. There are several options (numeric version field, timestamp version field, some or all fields). Full details are available in the docs.
    2. Configure the cache appropriately per class. If the data is being modified by other applications, then use a weaker style of cache for the class. SoftCacheWeakIdentityMap or WeakIdentityMap will minimize how long de-referenced objects are held in the cache.
    3. Force a refresh if needed. On any query a flag can be set to force the query's execution to refresh the cache. (http://otn.oracle.com/docs/products/ias/doc_library/90200doc_otn/toplink.903/tl_flapi/oracle/toplink/queryframework/ObjectLevelReadQuery.html#refreshIdentityMapResult())
    4. If the application is primarily read-based and the changes are all being performed by the same Java application operating in a cluster, you may want to look at TopLink's cache-sync feature. Although this won't prevent stale data, it should greatly minimize it.
    5. One less frequently used option is to build infrastructure that can notify TopLink's cache when something changes on the database. This involves building your own call-out mechanism from a trigger into the Java application hosting the TopLink session and cache. In the Java space you can make use of the cache API to force a refresh, update the object yourself, or mark it as dirty. I would discourage removing it from the cache, as the object may still be in use.
    I hope this explains stale-data management at a high level. More details can be found throughout the TopLink documentation.
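    To illustrate option 3, here is a minimal sketch against the native TopLink query framework (the Order class, the orderId variable, and the session are placeholders for your own mapped class, its key value, and an active oracle.toplink.sessions.Session):
    // classes used: oracle.toplink.queryframework.ReadObjectQuery, oracle.toplink.expressions.ExpressionBuilder
    ReadObjectQuery query = new ReadObjectQuery(Order.class);
    query.setSelectionCriteria(new ExpressionBuilder().get("id").equal(orderId));
    query.refreshIdentityMapResult();   // force the result row to overwrite the cached copy
    Order order = (Order) session.executeQuery(query);
    // Or, to refresh an instance you already hold:
    session.refreshObject(order);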
    Doug Clarke
    Product Manager
    Oracle9iAS TopLink

  • Turning off caching for loading a text file

    I'm new to AS3 and don't understand the properties setting
    for URLRequest. I'm trying to turn caching off so when I reload the
    text file, it always gets a fresh copy. I keep getting compiler
    errors however with the useCache line. I'm not accessing the
    properties correctly or something. When I take out that line it
    works fine but always fetches the cached file.
    My code is:
    loadDoctorVariables();

    function loadDoctorVariables() {
        var requestJo:URLRequest = new URLRequest("http://www.grxsolutions.com/gtslideshow3/" + doctorURL + ".txt");
        //requestJo.useCache(false);
        requestJo.useCache = false;
        var variables:URLLoader = new URLLoader();
        variables.dataFormat = URLLoaderDataFormat.VARIABLES;
        variables.addEventListener(Event.COMPLETE, completeHandler);
        try {
            variables.load(requestJo);
        } catch (error:Error) {
            trace("Unable to load URL: " + error);
        }
    }

    function completeHandler(event:Event):void {
        var loader:URLLoader = URLLoader(event.target);
        trace(loader.data.dayNames);
        trace(loader.data.monthNames);
        trace(loader.data);
        trace(loader.data.saturationColor);
        //saturationColor=10&hue=100&tint=255&bright=12&bgimage=ironman&slidesused=1,2,3,4,7,8,9,10,11&voice=male1&alternateslides=7,doctorbob1,11,doctorbob2
    }

    Append a random string to the URL so each request looks unique to the cache, i.e.
    var requestJo:URLRequest = new URLRequest("http://www.grxsolutions.com/gtslideshow3/" + doctorURL + ".txt" + "?c=" + Math.random());
    (URLRequest.useCache is an AIR-only property, which is why the compiler rejects it in a browser Flash Player project; the cache-busting query string works everywhere.)

  • Windows.Web.Http.HttpClient - turn off caching.

    Windows.Web.Http.HttpClient by default sends a few headers like
    If-None-Match
    If-Modified-Since
    Proxy-Connection
    etc.
    I tried httpClient.DefaultRequestHeaders.Clear(), but it doesn't work.
    Any help is appreciated.
    Basically, I manage my own caching and don't want the client to send extra headers that the server hates.

    Try setting this:
    RootFilter = new HttpBaseProtocolFilter();
    // Always fetch the most recent content instead of reading from the local HTTP cache
    RootFilter.CacheControl.ReadBehavior = Windows.Web.Http.Filters.HttpCacheReadBehavior.MostRecent;
    // Do not write responses to the local HTTP cache
    RootFilter.CacheControl.WriteBehavior = Windows.Web.Http.Filters.HttpCacheWriteBehavior.NoCache;
    HttpClient = new HttpClient(RootFilter);
    Microsoft Certified Solutions Developer - Windows Store Apps Using C#

  • Turn off caching function in JRE

    I am working with an applet that uses JRE 1.0.5 as a plugin.
    My applet downloads some images when it runs, and these images and the applet itself are written to local files on the client machine.
    Does somebody know how I can prevent this caching (the applet and images being cached on the local machine)? Thanks.

    Hi, I mean that the applet developer should control the caching, not the client.
    I mean that I don't want my images and code exposed on the client. Thanks.
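    A note on what applet code can and cannot do here: caching of the applet jar itself is controlled on the deployment side (the applet's HTML parameters and the user's Java control panel), not from inside the applet. For the images the applet downloads itself, you can at least ask the connection layer not to cache them; a rough sketch, assuming the images are fetched relative to the codebase and the enclosing method declares IOException:
    // classes used: java.net.URL, java.net.URLConnection, java.io.InputStream
    URL imageUrl = new URL(getCodeBase(), "images/logo.gif");   // hypothetical image path
    URLConnection conn = imageUrl.openConnection();
    conn.setUseCaches(false);                 // ask that this resource not be served from or written to a local cache
    InputStream in = conn.getInputStream();   // read the image bytes from the live connection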

  • How to turn off unified buffer caching

    It's clear from many other forums that the unified buffer cache is supposed to 'just work' and in general it does. However, I have a situation where I really want the option of turning off caching either at the device or file system level.
    Is this possible?
    I have seen mention of using F_NOCACHE when opening files but that either does not work or, in any case, is not an option in this case.
    Thanks.

    It actually indexes the "cache" files, so although not creating them might improve indexing speed, you wouldn't get anything indexed which is probably not very useful.

  • Turn off library caching for a Mail client on the Mac?

    Hi, is it possible to turn off library caching for a Mail client on the Mac? At work I have Outlook 2007. I set up the Mail client to sync Mail, Contacts and Calendar. The cache is stored in hidden folders in the Library. Is it possible to permanently turn off caching?

    I am getting more and more bewildered. I just looked at Google Mail's settings to see if they do something different from the ISP I use, and there they say the following: if the sent messages aren't stored on the server, then where are they stored? What happens to them if I set my preferences this way? I want to wail and tear my hair out. I feel so very stupid.
    http://mail.google.com/support/bin/answer.py?answer=78892#
    Recommended imap settings....
    Apple Mail
    From the Mail menu, click Preferences > Accounts > Mailbox Behaviors
    Drafts:
    Store draft messages on the server > do NOT check
    Sent:
    Store sent messages on the server > do NOT check
    Junk:
    Store junk messages on the server > checked
    Delete junk messages when > Never
    Trash:
    Move deleted messages to the Trash mailbox > do NOT check
    Store deleted messages on the server > do NOT check

  • Caching in OBIEE/OBISE1

    Here is a list of my research on the caching issue in OBISE1 (it is similar to OBIEE, at least in this respect). This list pretty much covers the caching that can occur at different levels. I would suggest working through it one item at a time and seeing how caching behaves in the Dashboard, and please do contact me if you need further help.
    A. Cache Management
    For this release of OBIEE, if you run an initial or incremental load without first clearing the query cache, it is possible that reports you run after the load process will reuse the cache that existed prior to the load process. This can result in inconsistencies between reports. There are several alternatives to mitigate this situation, such as:
    • Configure the query cache to expire daily.
    • Clear the cache tables manually as needed; for example, after you complete a load process.
    • Schedule the system to clear the cache tables at the same frequency as the
    incremental load process.
    To clear cached queries:
    1. Open the Oracle BI Administration Tool in online mode.
    2. Click Manage, Cache to access the Cache Manager page and select all cache entries.
    3. Click Action, Refresh.
    To disable the cache:
    1. Locate this configuration file: <root
    directory>\OracleBI\server\Config\NQSConfig.INI.
    2. In the Query Result Cache Section, change the [ CACHE ] setting from ENABLE = YES; to ENABLE = NO;.
    3. Save the NQSConfig.INI configuration file and restart the Oracle BI Server service.
    See the Oracle Business Intelligence Server Administration Guide, "Query Caching in the Oracle BI Server" chapter, for more information on query caching in OBIEE.
    B. After running an Answers report, check the SQL in NQQuery.log (on Windows, usually located in :\OracleBI\Server\log\).
    TIP:
    Turn off caching in NQSConfig.INI (and restart the BI Server service) or at the physical table level in the .rpd, to ensure you get un-cached results and can therefore see the SQL being generated on each run or drill-down in Answers.
    C. Found this in the documentation (if using OBISE1; for OBIEE, refer to the OBIEE documentation on caching):
    http://download-west.oracle.com/docs/cd/B40078_02/doc/bi.1013/b31770.pdf
    all about caching in OBISE1
    The important thing to note is that caching occurs at different levels:
    Server (NQSConfig file; look for the ENABLE parameter)
    Administration Tool (under the Tools menu you will find some options related to objects in the physical layer)
    OBIEE Dashboard query level (in Answers, when building an ad-hoc query, there is a check box which can be checked or unchecked depending on what is desired in terms of caching).
    This is to the best of my knowledge, but please feel free to add more details if you feel I missed something.
    Four logs to check are as follows
    NQServer.log
    NQQuery.log
    NQSAdminTool.log
    NQScheduler.log
    I hope this covers some ground as to Caching in OBISE1/OBIEE
    Thank you,
    Mohammad Alam

    Hi Mohammed,
    Thank you for the VERY thorough and useful digest of OBIEE Caching!
    We've found the caching to be less than satisfactory. We've enabled caching at the NQSConfig level and set the max timeout to 5 minutes, but for some reason our users are not seeing refreshed data.
    For example, we have a report that shows Sales Order shipments. The order status can go from "Booked" to "Picked" to "Shipped" all within 1 day. However, the report will show "Booked" when the transaction actually has been "Picked".
    I've tried unchecking Bypass Server Cache in the report's Advanced tab, but to no avail.
    Any suggestions or comments? Our current workaround is NO caching at all.
    irene

  • Is there a way to turn off using ram as intermediary when transfering files between 2 internal hdds? (write cache is off on all drives)

    right forum?  first time here. none of the options seem perfect. so i guess this applies to 'setup.' i tried to describe what's happening verbosely from the start of a file transfer to completion of writing file to 2nd hard drive.
    win 7 64bit home - i2600k 3.4ghz - 8gb of ram - 2hdd 1 ssd
    I have this problem when transferring files between 2 internal hard drives. one is unhealthy (slow write speeds but not looking for advice to replace it because it serves its unimportant purpose). so, the unhealthy drive drops into PIO mode - that's acceptable.
    however, a little over 1gb of any file transfer is cached in RAM during the file transfer. after the file transfer window closes, indicating the transfer is "complete" (HA!), it still has 1gb to write from RAM which takes about 30minutes. This would
    not be a problem if it did not also earmark an additional 5gb of ram (never in use), leaving 1gb or less 'free' for programs. this needlessly causes my pc to be sluggish - moreso than a typical file transfer file between 2 healthy drives. i have windows write
    caching turned off on all drives. so this is a different setting i can't figure out nor find after 2 hours of google searches.
    info from taskmanager and resource monitor.
    idle estimates: total 8175, cached 532, available 7218, free 6771 and the graph in taskman shows about 1gb memory in use.
    at the start of a file transfer:  8175,  2661,  6070,  4511free and ~2gb of ram used in graph. No problems, yet.
    however, as the transfer goes on, the free ram value drops to less than 1gb (1gb normal +1gb temporary transfer cache +5gb of unused earmarked space = ~7gb),  cached value increases, but the amount of used ram remains relatively unchanged (2gb
    during transfer and slowly drops to idle values as remaining bits are written to the 2nd hard drive).  the free value is even slower to return to idle norms after the transfer completes writing data to 2nd hard drive from RAM. so, it's earmarking an addition
    5gb of ram that are completely unused during this process.  *****This is my problem*****
    Is there any way to turn this function off or limit its maximum size. in addition to sluggishness, it poses risk to data integrity from system errors/power loss, and it's difficult to discern the actual time of transfer completion, which makes it difficult
    to know when it's safe to shutdown my pc (any data left in ram is lost, and the file transfer is ruined - as of now i have to use resmon and look through what's writing to disk2 -> sometimes easy to forget about it when the transfer window closed 20-30minutes
    ago and the file is still in the process of writing to the 2nd disk).
    Any solution would be nice, and a little extra info like whether it can be applied to only 1 hard drive would be excellent.

    Thanks for the reply.
    (Although i have an undergrad degree in computers, it's been 15years and my vocab is terrible. so, i will try my best. keep an open mind. it's not my occupation, so i rarely have to communicate ideas regarding PCs)
    It operates the same way regardless of the write-cache option being enabled. It's not using the 5gb for read/write buffer - it's merely bloating standby memory during the transfer process at a rate similar to the write speed of destination (for my situation).
    at this point i don't expect a solution. i've tried to look through lists of known memory leaks but i dont have the vocabulary to be 100% certain this problem is not documented. as of now it can't affect many people - NAS's with low bandwidth networks, usb
    attached storage etc. do bugs get forwarded from these forums? below i can outline the consistent and repeatable nature not only on my pc but on others' pcs as well (2012 forum post i found).
    I've been testing and paying a little more attention and i can describe it better:
    Just the Facts
    Resmon Memory Info:
    "In Use" stays consistent at ~1gb (the idle amount, and roughly the same when nothing else is running during the file transfer).
    "Modified" contains the file transfer data (metadata?), which remains consistent at a little over 1gb (minor fluctuations due to working as a buffer). After the file transfer window closes, "Modified" slowly diminishes as it finishes lazy writing (I believe that's the term). (I forget the idle pc amount, but after the transfer this is only 58mb.)
    "Standby": as the transfer starts it immediately rises to ~2gb. I'm sure this initial jump is normal. However, it will bloat to well over 5gb over time with a large enough transfer, increasing at a consistent rate during the entire transfer process. This is the crux of the matter.
    "Free" will drop as far as 35-50 megabytes over time.
    as the transfer starts, the "Standby" increases by an immediate chunk then at a slow rate throughout entire transfer process(~1mb/s). once writing metadata to RAM no longer occurs, the "Modified" ram slowly (@500kb/s matching resmon disk
    activity for that file write) diminishes as it finishes lazy writing, After file is 100% written to destination drive, "Standby" remains a bloated figure long after.
    a 1.4gb transfer filled 3677MB of "Standby" by the time writing finished and modified ram cleared. after 20minutes, it's still bloated at 3696MB. after 30-40mins it's down to 2300mb - this is about what it jumps immediately to when transfer starts
    - it now remains at this level until i reboot.
    I notice the "standby" is available to programs. but they do not operate well. e.g. a 480p trailer on IMDB.com will stop-and-go every 2-3seconds (stream buffers fine/fast) - this would be during the worst case scenario of 35-50mb "Free"
    ram. my pc isn't and never was the latest and greatest, but choppy video never happens even with 1 or 2 resource hogs running (video processing/encoding, games, etc).
    Conjecture Below
    i think it's a problem when one device is significantly slower at writing than the source device - this is the factor that i share with others having this problem. when data is written to modified ram then sent to destination, standby memory is expanded
    until it completely or nearly fills all available RAM - If the transfer size is large enough relative to how slow the write speed of destination device is. otherwise it fills it up as much as the file size/write speed issue allows. the term "memory leak"
    is used below but may not technically be one, but it's an apt description in layman's terms.
    i saw a similar post in these forums (link at end). My problem is repeatable and consistent with others' reports. I wasn't sure if i should revive it with a reply. some of these online message boards (maybe not this one) are extremely picky
    and sensitive, lol.the world will end if an old thread revives - even if for a good reason.
    i can answer some of the ancillary issues. one person (Friday, September 21, 2012 8:33 PM) mentions not being able to shutdown, i assume he means stuck on the shutdown screen - this is because lazy writing has not completed - his nas write speed is significantly
    slower than reading from source - the last bits of data left in ram still need to be written to the destination. shutdown will stall for as long as needed until the data finishes writing to destination to prevent data loss.
    another person (Monday, September 24, 2012 6:31 PM) mentions the rate of the leak, but the rate is more likely a function of read speed from source relative to write speed of destination. which explains why my standby expands closer to a 1:1 ratio compared
    to his 1:100 (he said 10mb per 1000mb)
    we all have the same exact results/behaviour, but slightly different rates of bloating/leaking. as the file is written from from the ram to the destination, standby increases during this time - not a problem if read and write speeds are roughly equal (unless
    your transfering a terabytes then i bet the problem could rear its head). when writing lags, it gives the opportunity for standby ram to bloat with no maximum except the amount of installed ram. slower the write speed, the worse the problem.
    The reply on Wednesday, September 26, 2012 3:04 AM has before and after pictures of exactly what i described in
    "Just the Facts". Specifically the resmon image showing the Memory Tab.
    The kb2647452 hotfix seems to do some weird things relative to this problem. in the posts that mention they've applied it: after file completes it looks like the "standby" bloat is now "in use" bloat. as per info from Tuesday, October
    09, 2012 10:36 PM - bobmtl in an earlier post applies the patch. compare images from earlier posts to his post on this date. seems like a worse problem. Also, his process list indicates it's very unlikely they add up to ~4gb as listed in the color coded bar.
    wheres the extra gb's coming from? likely the same culprit of what filled up "standby" memory for me and others. it looks like this patch relative to this problem merely recategorizes the bloat - just changes the title it falls under.
    Link:
    https://social.technet.microsoft.com/Forums/windows/en-US/955b600f-976d-4c09-86dc-2ff19e726baf/memory-leak-in-windows-7-64bit-when-writing-to-a-network-shared-disk?forum=w7itpronetworking

  • JPA -- How can I turn off the caching for an entity?

    Hi,
    I have a problem that I will illustrate with a simplified example. I have created an entity:
    @Entity(name="Customer")
    @Table(name="CUSTOMERS")
    public class Customer implements Serializable {
    }
    I have also set the following properties in persistence.xml:
    <property name="toplink.cache.type.default" value="NONE"/>
    <property name="toplink.cache.size.default" value="0"/>
    <property name="toplink.cache.type.Customer" value="NONE"/>
    <property name="toplink.cache.size.Customer" value="0"/>
    <property name="toplink.cache.shared.Customer" value="false"/>And then I run the following code:
    Customer cust = em.find(Customer.class, 1L);
    System.out.println(cust);
    cust = em.find(Customer.class, 1L);
    System.out.println(cust);
    The problem: the second call to em.find does NOT generate a query to the database. Here's a fragment from the console log:
    [TopLink Fine]: 2007.05.11 02:55:05.656--ServerSession(2030438)--Connection(5858953)--Thread(Thread[Main Thread,5,main])--SELECT ID, SEX, NAME, MANAGER FROM CUSTOMERS WHERE (ID = ?)
         bind => [1]
    Customer: id=1, name=Customer #1, sex=MALE
    Customer: id=1, name=Customer #1, sex=MALE
    Can anyone help me? Why isn't the caching turned off? I tried various combinations of properties. Nothing worked. I was expecting to see two queries to the database. I can see only one.
    I tried with TopLink Essentials Version 2 Build 39 and Version 2 Build 41.
    Best regards,
    Bisser

    The cache is likely turned off, but you can't tell because you are using the same transactional EntityManager instance for the two queries. The EntityManager requires its own cache for object identity and transactional purposes: once you read an object in through the EM, the spec requires that all subsequent reads return the same instance. Only an EntityManager refresh will cause a refresh, or setting your queries to use the toplink.refresh and toplink.cache-usage query hints.
    I would strongly recommend you use a query cache for performance, but there are of course reasons why one might not be the best option.
    http://weblogs.java.net/blog/guruwons/archive/2006/09/understanding_t.html
    is a good blog on understanding the caching used in TopLink Essentials.
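    To illustrate both options, a minimal sketch (entity and field names follow your example above; the hint values shown are assumptions, so check the TopLink Essentials documentation for your build):
    // 1. Standard JPA: refresh the managed instance from the database
    Customer cust = em.find(Customer.class, 1L);
    em.refresh(cust);   // issues a SELECT and overwrites the in-memory state
    // 2. TopLink Essentials query hints on an individual query (javax.persistence.Query)
    Query q = em.createQuery("SELECT c FROM Customer c WHERE c.id = :id");
    q.setParameter("id", 1L);
    q.setHint("toplink.refresh", "true");                 // refresh the shared cache from the database
    q.setHint("toplink.cache-usage", "DoNotCheckCache");  // do not serve the result from the cache
    Customer fresh = (Customer) q.getSingleResult();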
    Best Regards,
    Chris

  • After clearing cache,cookies,history some sites still know my location ? I also turned off location

    After clearing the cache, cookies, and history and turning off the location service, some web sites still know the city I'm in. I like to start fresh on some sites, but my old searches still appear.

    tbwaukesha wrote:
    After clearing the cache, cookies, and history and turning off the location service, some web sites still know the city I'm in. I like to start fresh on some sites, but my old searches still appear.
    The site is likely geo-targeting your location from your ISP assigned IP Address. See the explanation here http://www.computerhope.com/issues/ch001043.htm or do a Google search for geo-targeting.

  • Http.keepAlive does not turn off SSL session cache?

    Hi there,
    I have a web service client that uses JSSE for making web service calls via https. In an effort to debug problems, I set http.keepAlive to false. I can see from the SSL debug output that the KeepAlive timer messages no longer show up, but I still see text such as "Cached client session" and "try to reuse cached session", etc.
    Should not turning off keepAlive disable the use of persistent sessions?
    Thanks.
    Yan

    They are unrelated features.
    HTTP Keep Alive allows the browser to maintain a Socket to the server and issue multiple HTTP requests over that same socket.
    SSL Session caching is when an SSL Session is assigned an ID, and additional SSL connects may be established with the same ID. These additional sockets then do not need to perform the full SSL handshake, since much of the data has already been negotiated previously.
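    If you actually want to limit client-side SSL session reuse (independently of keep-alive), JSSE exposes the session cache through javax.net.ssl.SSLSessionContext; a rough sketch, assuming Java 6+ for SSLContext.getDefault() (otherwise use the SSLContext instance your client was built with):
    SSLContext ctx = SSLContext.getDefault();
    SSLSessionContext clientSessions = ctx.getClientSessionContext();
    clientSessions.setSessionCacheSize(1);   // note: a size of 0 means "unlimited", not "disabled"
    clientSessions.setSessionTimeout(1);     // in seconds; cached sessions expire almost immediately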

  • Turning off jar cache causes classnotfound with signed jar files

    Hi,
    I have a problem with applet signed jars when the java cache is turned off.
    With the cache turned off, I get a class not found for the first class it attempts to use from the signed jar file from an applet.
    If I turn the jar caching on, all works perfectly with no other changes.
    Anyone have any ideas? This is java 6u16.
    Thanks

    jkc532 wrote:
    .. Is the fact that the CachedJarFile class doesn't attempt to reload the resource when it can't retrieve it from MemoryCache a bug? From your comprehensive investigation and report, it seems so to me.
    .. I've dug as deep as I can on this and I'm at wits' end, does anybody have any ideas? Just reading the summary made me tired, so I have some understanding of the effort you have already invested in this (the 'wits' you have already spent). I think you should raise a bug report and seek Oracle's response.
