Query Cache

Hi,
I have gone through some posts saying:
When a user executes a query, the system first looks in the OLAP cache to see whether suitable data is already there. If it is not, the system goes to the aggregates, and then to the cube. The main purpose of the OLAP cache is to speed up query runtime, so that the system does not need to fetch the data from the database.
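The lookup order described here can be sketched as a tiered read path. This is a conceptual Python sketch, not actual SAP code; the function and parameter names are all made up for illustration:

```python
# Conceptual sketch of the BW read path: OLAP cache first, then a matching
# aggregate, then the cube itself; the result is cached for the next run.
def read_query(selection, olap_cache, read_aggregate, read_cube):
    key = frozenset(selection.items())
    if key in olap_cache:                 # 1. OLAP cache hit: no DB access
        return olap_cache[key], "cache"
    result = read_aggregate(selection)    # 2. try an aggregate (None = no match)
    source = "aggregate"
    if result is None:
        result = read_cube(selection)     # 3. fall back to the cube
        source = "cube"
    olap_cache[key] = result              # fill the cache for the next run
    return result, source
```

Note that this naive sketch caches only exact selections, so a second run for a subset of previously cached company codes would miss; the real OLAP cache is more sophisticated and, depending on the cache mode and read mode, may answer a subset selection from a cached superset.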
1. To understand this better: if I run a query, say Q1, for 20 company codes, and the next time I run it for 10 company codes that are among the previous 20, will it bring the data from the cache?
2. Suppose I execute a query, and some time later I execute the same query with slightly different selection criteria, where the only difference is in the drilldowns (for example, the same query is used 10 times in a workbook with different drilldowns, once with a hierarchy and once without). Will it bring the data from the cache?
Please let me know.
Thanks,

see this:
http://help.sap.com/saphelp_nw70/helpdata/en/b2/e50138fede083de10000009b38f8cf/frameset.htm

Similar Messages

  • SAP BW 3.5 Query Cache - no data for queries

    Dear experts,
    we have a problem with the SAP BW query cache (BW 3.5). Sometimes,
    after the queries have been pre-calculated using web templates, no figures are available when running a web report, or when doing a drilldown or filter navigation. One way to solve the issue is to delete the cache for that query, but this is not a reasonable or practicable solution. The problem occurs non-reproducibly for different queries.
    Is this a "normal" error of SAP BW that we have to live with, or are there solutions for it? Any hints are greatly appreciated.
    Thanks in advance & kind regards,
    daniel

    Hi Daniel,
    Try running without the cache for those queries.
    In any case, you should check how the cache option is configured for those queries.
    You can see that in transaction RSRT.
    Hope this helps.

  • BW3.5 - Query Cache - InfoObject decimal formatting

    Hello all,
    I built a BW query which displays key figures. Each key figure uses the decimal place formatting from the key figure InfoObject (in Query Designer, the Decimal Places property of each key figure is set to "[From Key Figure 0.00]").
    I decided to change the key figure InfoObject to 0 decimal places (on the BEx formatting tab for the key figure). Now, when I open Query Designer and look at the properties of the key figure, it is still set to "[From Key Figure 0.00]" (it should be "[From Key Figure 0]" to reflect the InfoObject change).
    I tried generating the report using RSRT and deleting the query cache, but it still shows up with two decimal places. Has anyone encountered this problem before? I am trying to avoid removing the key figure InfoObject from the query and re-adding it to reflect the change.
    Thanks!

    Hello Brendon
    You have changed the key figure InfoObject to show no decimals; that is fine, but in the query the key figure property is set to two decimals, so the data is displayed with two decimals. The query setting is local and has priority over the key figure InfoObject setting.
    If you look at the key figure properties in the query, you will see an option along the lines of "from the field", which means the formatting defined on the key figure InfoObject is used. Select that, and from then on you will get exactly as many decimals as you defined on the InfoObject.
    Thanks
    Tripple k

  • Issues with Query Caching in MII

    Hi All,
    I am facing a strange problem with query caching in an MII query. I have created an Xacute query and set its cache duration to 30 seconds. The BLS associated with the query retrieves data from an SAP system, and in the web page the value is populated by executing an iCommand. These are the steps I followed:
    The query is executed for the first time and correctly retrieves the data from SAP; say the value is val1.
    At the 10th second, the value in SAP changes from val1 to val2.
    The query is executed at the 15th second and returns val1, which is expected, since it comes from the cache.
    The query is executed at the 35th second and returns val2, retrieved from the SAP system. This is correct.
    The query is executed at the 40th second and returns val1 from the cache, which is not expected.
    I have tried clearing the Java cache in the browser and the JCo cache on the server.
    I have seen the same problem with a tag query as well.
    MII Version: 12.0.6 Build(12)
    Any thoughts on this?
    Thanks,
    Soumen

    Soumen Mondal,
    If you are facing this problem from the same client PC and in the same session, it is very strange; but if there are two sessions running on the same PC, this kind of issue can occur.
    Something about caching:
    To decide whether or not to cache a query, consider how many times you intend to run it per minute. For data that changes in seconds but is queried in minutes, I would recommend not caching the query at all.
    A typical example of query caching is a query on the material master cached for 24 hours, where we know that after creating a material master record we will not create a PO for it on the same day.
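The behaviour described in the question can be illustrated with a minimal time-to-live cache. This is a conceptual sketch, not MII's actual implementation; the class and parameter names are made up:

```python
import time

class TTLCache:
    """Serve a cached value until it is older than ttl_seconds."""
    def __init__(self, ttl_seconds, fetch):
        self.ttl = ttl_seconds
        self.fetch = fetch        # reads the backend (e.g. the SAP system)
        self.value = None
        self.loaded_at = None

    def get(self, now=None):
        # `now` is injectable for testing; defaults to a monotonic clock.
        now = time.monotonic() if now is None else now
        if self.loaded_at is None or now - self.loaded_at >= self.ttl:
            self.value = self.fetch()   # expired or never loaded: re-read
            self.loaded_at = now
        return self.value
```

Replaying the question's timeline against this sketch: a read at second 15 returns the cached val1, and a read at second 35 refreshes to val2. A correctly behaving cache would return val2 at second 40 as well; seeing val1 again points to a second session or a second cache instance, as the reply suggests.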
    BR,
    SB

  • Named query cache not hit

    Hi,
    I'm using Toplink ORM 10.1.3.
    I have a table called STORE_CONFIG which has a primary key called KEYWORD (a VARCHAR2). The POJO mapped to this table is called StoreConfig.
    In the JDeveloper (10.1.3.1.0) mapping workbench I've defined a named query to query by the PK called "getConfigPropertyByKeyword". The type of the named query is ReadObjectQuery.
    SELECT keyword, key_value
    FROM STORE_CONFIG
    WHERE (keyword = #key)
    Under the options tab I have the following settings:
    Cache Statement: true
    Bind Parameters: true
    Cache Usage: Check Cache by Primary Key
    The application logs show that the same database queries are executed multiple times for the same PK (keyword)! Why is that? Shouldn't it be checking the Object Cache rather than going to the DB?
    I've tried it with "Cache Statement: false" and "Bind Parameters: false" with the same problem.
    If I click the Advanced tab and check "Cache Query Results" then the database is not hit twice for the same record. However it was my understanding that since I am querying by PK that I wouldn't need to set "Cache Query Results".
    Doesn't "Cache Query Results" apply to the Query Cache and not the Object Cache?

    Your issue seems to be that you are using custom SQL for the query, not a TopLink expression. When you use an Expression query, TopLink knows the query is by primary key and can get a cache hit.
    When you use custom SQL, TopLink does not know that the SQL queries by primary key, so it does not get a cache hit.
    You could either use an Expression for the query,
    or when using custom SQL you should be able to name your query argument the same as your database field defined as the primary key in your descriptor (case sensitive).
    i.e.
    SELECT keyword, key_value
    FROM STORE_CONFIG
    WHERE (keyword = #KEYWORD)
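The cache-hit rule in this reply can be sketched as follows. This is conceptual Python, not TopLink's API; the class and field names are illustrative. A hit is possible only when the query's predicate field is recognised as the descriptor's primary-key field:

```python
class ObjectCache:
    """Objects keyed by primary key; queries hit only when they filter on the PK."""
    def __init__(self, pk_field):
        self.pk_field = pk_field   # e.g. "KEYWORD", case-sensitive
        self.objects = {}
        self.db_reads = 0

    def query(self, field, value, db_fetch):
        # A cache hit requires the predicate field to match the PK field exactly.
        if field == self.pk_field and value in self.objects:
            return self.objects[value]
        self.db_reads += 1                      # unrecognised: go to the DB
        obj = db_fetch(value)
        self.objects[obj[self.pk_field]] = obj  # still populate the cache
        return obj
```

With the argument named #key the predicate is not recognised, so every execution reads the database; renaming it to #KEYWORD (matching the PK field, case-sensitively) lets repeat executions hit the cache.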

  • Problem in continuous query cache with PofExtractor

    Hi,
    I am creating a CQC with a filter that uses a PofExtractor. When I try to insert any record into the cache, I get an exception on the server side.
    When I do not use the PofExtractor, it works fine.
    If anyone knows about this problem, please help.
    I am using C# for my client application.
    Regards
    Nitin Jain

    Hi JK,
    I have made some changes on my server; I am no longer using any POF definition on the server side. I think that is why it is giving me the error.
    2010-04-13 15:39:07.481/20.593 Oracle Coherence GE 3.5.1/461 <Error> (thread=DistributedCache, member=1): Exception occured during filter evaluation: MapEventFilter(mask=INSERTED|UPDATED_ENTERED|UPDATED_WITHIN, filter=GreaterEqualsFilter(PofExtractor(target=VALUE, navigator=SimplePofPath(indices=0)), Nitin)); removing the filter...
    2010-04-13 15:39:07.481/20.593 Oracle Coherence GE 3.5.1/461 <Error> (thread=DistributedCache, member=1):
    (Wrapped) java.io.StreamCorruptedException: unknown user type: 1001
    at com.tangosol.util.ExternalizableHelper.fromBinary(ExternalizableHelper.java:261)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$ConverterFromBinary.convert(DistributedCache.CDB:4)
    at com.tangosol.util.ConverterCollections$ConverterMapEvent.getNewValue(ConverterCollections.java:3594)
    at com.tangosol.util.filter.MapEventFilter.evaluate(MapEventFilter.java:172)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$Storage.prepareDispatch(DistributedCache.CDB:82)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$Storage.postInvoke(DistributedCache.CDB:10)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$Storage.put(DistributedCache.CDB:156)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache.onPutRequest(DistributedCache.CDB:37)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$PutRequest.run(DistributedCache.CDB:1)
    at com.tangosol.coherence.component.net.message.requestMessage.DistributedCacheKeyRequest.onReceived(DistributedCacheKeyRequest.CDB:12)
    at com.tangosol.coherence.component.net.message.requestMessage.distributedCacheKeyRequest.ExtendedKeyRequest.onReceived(ExtendedKeyRequest.CDB:4)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onMessage(Grid.CDB:9)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:136)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache.onNotify(DistributedCache.CDB:3)
    at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
    at java.lang.Thread.run(Thread.java:595)
    Caused by: java.io.StreamCorruptedException: unknown user type: 1001
    at com.tangosol.io.pof.PofBufferReader.readAsObject(PofBufferReader.java:3289)
    at com.tangosol.io.pof.PofBufferReader.readObject(PofBufferReader.java:2600)
    at com.tangosol.io.pof.ConfigurablePofContext.deserialize(ConfigurablePofContext.java:348)
    at com.tangosol.util.ExternalizableHelper.deserializeInternal(ExternalizableHelper.java:2673)
    at com.tangosol.util.ExternalizableHelper.fromBinary(ExternalizableHelper.java:257)
    ... 15 more
    Is there any way we can use a continuous query cache without providing the object definition on the server side?
    Regards
    Nitin Jain

  • Continuous Query Caching - Expensive?

    Hello,
    I have had a look at the documentation, but I still cannot find a reasonable answer to the following question: how expensive are continuous query caches?
    Is it appropriate to have many of them?
    Is the following example an acceptable usage of continuous query caching (does it scale)?
    In the context of a web application:
    User logs onto a website
    User performs a "Search" for financial instruments
    A continuous query cache is created with a filter for those instruments returned (say, 50) to listen to price updates.
    If the user pages, or does another search, the query cache is released and a new one, with an updated filter, is created.
    Does it make a difference if we are using the extend client?

    Hi,
    So 100 CQCs is probably not excessive, depending on the configuration of the process instantiating the CQCs, the cluster size, and so on.
    Each CQC holds its own set of deserialized keys and values, so yes, they are distinct objects, although a CQC of 50 entries would not be very big.
    One query I have: you mention that this is a web application, but you also mention an Extend client. Is your web app an Extend client of the main cluster, and is there a reason you did this? Most people would make a web app a storage-disabled cluster member, as it performs a bit better. Provided the web app sits on a server that is very close in network terms to the cluster (i.e. on the same switch), I would make it part of the cluster. Or is the web app the thing that is in the "regional environment"?
    If you are running CQCs over Extend, there used to be some issues when the Extend connection was lost. AFAIK this is supposed to be fixed in later patches, so I would get 3.7.1.8 and make sure you test that the web app continues to work and fails over properly if you kill its Extend connection. When the CQC fails over it will reinitialize all its data, so you will need to cope with that if you are pushing changes based on the CQC.
    JK

  • Missed and duplicate events with Continuous Query Cache

    We have seen missed events and duplicate events when we register to receive events (using a Continuous Query Cache) on a cache entry while the entry is being updated.
    Use case:
    Start a node
    Start a proxy
    Start an Extend client
    Implementation of the Extend client:
    Create a cache
    Add an entry to the cache
    Define Thread 1 {
        for each (1 to 30)
            run an update entry processor on the cache entry; the entry processor increments the entry's value by 1
    }
    Define Thread 2 {
        wait until the cache entry has been updated 10 times
        create a MapListener {
            on entry insert event {
                print the event
                initial value = new value
            }
            on entry update event {
                print the event
                update count += 1
            }
        }
        create the Continuous Query Cache (cache, AlwaysFilter, MapListener)
    }
    Start Thread 1
    Start Thread 2
    Wait until Thread 1 and Thread 2 have terminated
    Expected result = the value of the entry read from the cache
    Actual result = initial value + update count
    Results we have seen in two tests:
    Test 1: Expected Result > Actual Result: missed events
    Entry Inserted Event: MapEvent{ContinuousQueryCache inserted: key=1, value=UpdateObject [intNumber=1, longNumber=13]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=13], new value=UpdateObject [intNumber=1, longNumber=15]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=15], new value=UpdateObject [intNumber=1, longNumber=16]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=16], new value=UpdateObject [intNumber=1, longNumber=17]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=17], new value=UpdateObject [intNumber=1, longNumber=18]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=18], new value=UpdateObject [intNumber=1, longNumber=19]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=19], new value=UpdateObject [intNumber=1, longNumber=20]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=20], new value=UpdateObject [intNumber=1, longNumber=21]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=21], new value=UpdateObject [intNumber=1, longNumber=22]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=22], new value=UpdateObject [intNumber=1, longNumber=23]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=23], new value=UpdateObject [intNumber=1, longNumber=24]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=24], new value=UpdateObject [intNumber=1, longNumber=25]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=25], new value=UpdateObject [intNumber=1, longNumber=26]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=26], new value=UpdateObject [intNumber=1, longNumber=27]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=27], new value=UpdateObject [intNumber=1, longNumber=28]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=28], new value=UpdateObject [intNumber=1, longNumber=29]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=29], new value=UpdateObject [intNumber=1, longNumber=30]}
    Expected Result: 30
    Actual Result: 29
    Issue: the event for the 14th update was not sent.
    Test 2: Expected Result < Actual Result: duplicate events
    Entry Inserted Event: MapEvent{ContinuousQueryCache inserted: key=1, value=UpdateObject [intNumber=1, longNumber=13]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=13], new value=UpdateObject [intNumber=1, longNumber=13]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=13], new value=UpdateObject [intNumber=1, longNumber=14]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=14], new value=UpdateObject [intNumber=1, longNumber=15]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=15], new value=UpdateObject [intNumber=1, longNumber=16]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=16], new value=UpdateObject [intNumber=1, longNumber=17]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=17], new value=UpdateObject [intNumber=1, longNumber=18]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=18], new value=UpdateObject [intNumber=1, longNumber=19]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=19], new value=UpdateObject [intNumber=1, longNumber=20]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=20], new value=UpdateObject [intNumber=1, longNumber=21]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=21], new value=UpdateObject [intNumber=1, longNumber=22]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=22], new value=UpdateObject [intNumber=1, longNumber=23]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=23], new value=UpdateObject [intNumber=1, longNumber=24]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=24], new value=UpdateObject [intNumber=1, longNumber=25]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=25], new value=UpdateObject [intNumber=1, longNumber=26]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=26], new value=UpdateObject [intNumber=1, longNumber=27]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=27], new value=UpdateObject [intNumber=1, longNumber=28]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=28], new value=UpdateObject [intNumber=1, longNumber=29]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=29], new value=UpdateObject [intNumber=1, longNumber=30]}
    Expected Result: 30
    Actual Result: 31
    Issue: the event for the 13th update was delivered both as the insert event and as an update event.
    reg
    Dasun.

    Hi Paul,
    I tested with 3.7.1.4 and 3.7.1.5. In both versions I can see the issue.
    reg
    Dasun.
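The invariant behind these tests can be sketched in Python (a conceptual sketch, not Coherence code; all names are made up): the insert event seeds the initial value, every update event adds one, and with no missed or duplicated events the reconstructed value equals the entry's final value.

```python
class CountingListener:
    """Reconstruct an entry's value from its insert and update events."""
    def __init__(self):
        self.initial = None
        self.updates = 0

    def on_insert(self, new_value):
        self.initial = new_value      # insert event seeds the initial value

    def on_update(self, old_value, new_value):
        self.updates += 1             # each update event counts once

    def reconstructed(self):
        return self.initial + self.updates
```

Replaying the event streams above: a complete stream from insert at 13 through updates to 30 reconstructs 30; dropping one update (as in Test 1) yields 29, and delivering the insert additionally as an update (as in Test 2) yields 31.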

  • JPA toplink Query cache

    Any idea what happens when performing queries with a query hint but no L2 cache enabled? Will the object be cached / lazy-loaded / enriched on access?
    By the way, what is the default L2 cache setting, enabled or disabled? If enabled, where is it stored, in memory?
    Thanks
    Newbie

    A shared (L2) cache is enabled by default in TopLink/EclipseLink.
    The cache is in memory.
    Using a query cache without the shared cache makes little sense; you should use both, or neither.
    If you do, the query cache will be isolated to the session, the same as the object cache, so you will get query cache hits only for the duration of the persistence context/transaction, i.e. it will be an L1 query cache.
    To configure the cache, see:
    http://wiki.eclipse.org/EclipseLink/Examples/JPA/Caching
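The distinction in this reply can be sketched like this (conceptual Python, not the EclipseLink API): with a shared L2 cache the query cache outlives any one session, whereas without it each session gets its own isolated query cache, so hits only occur within that session.

```python
class Session:
    """A persistence session whose query cache is either shared or private."""
    def __init__(self, shared_query_cache=None):
        # Use the shared (L2-like) cache if given, else a private per-session one.
        self.query_cache = (shared_query_cache
                            if shared_query_cache is not None else {})
        self.db_reads = 0

    def run_query(self, sql, db):
        if sql not in self.query_cache:
            self.db_reads += 1             # miss: go to the database
            self.query_cache[sql] = db[sql]
        return self.query_cache[sql]
```

With the shared cache, the second session's identical query is served from cache; with private caches, every new session pays one database read per query.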

  • When is query cache deleted?

    Hello All,
    I searched this forum and SAP for notes but couldn't find exactly when query cache is deleted.
    For example, we know the cache is invalidated when new data is loaded into an InfoCube, but what about when new master data is loaded? If a query still has the existing transaction data loaded, but new master data is loaded (and the query output reads an attribute), should it NOT read the cache?
    What are the hard and fast rules?
    Thank you so much!

    Queries check whether new data has been loaded to the cube/DSO since the data was cached. If new data has been loaded, the old cached data for that query is deleted and the next execution of the query reads the data from the database. A master data load does not affect the last-load date on a cube/DSO, so it does not cause the cache to be deleted for a query on the cube/DSO. If you had set the master data InfoObject to be an InfoProvider and had a pure master data query (no cube/DSO), the cached master data query results should get deleted.
    As a matter of practice, master data should be loaded before the loads to the cubes/DSOs are done.
    As mentioned, you can delete the OLAP cache from transaction RSRCACHE. There is also a batch program you can run to delete entries from the OLAP cache; you can specify particular query results to be deleted, and you can also use it to delete all cached results older than a specified number of days.
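The invalidation rule described above can be sketched as follows (conceptual Python, not SAP's implementation; all names are made up): each cached result remembers the provider's transaction-data load ID at caching time; a newer transaction load invalidates it, while a master data load leaves that ID untouched.

```python
class QueryResultCache:
    """Cached results are valid only for the load state they were built from."""
    def __init__(self):
        self.entries = {}   # query -> (result, load_id when cached)

    def put(self, query, result, load_id):
        self.entries[query] = (result, load_id)

    def get(self, query, current_load_id):
        hit = self.entries.get(query)
        if hit is not None and hit[1] == current_load_id:
            return hit[0]   # no new transaction data since caching: valid
        return None         # stale or missing: re-read the database
```

A master data load would not change the provider's load ID, so the cached result stays valid; a new transaction-data load bumps it and forces a database read.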

  • Query cache in IP

    Hi
    Could anybody tell me how can we enable the Query cache for the reports built on Aggregation level in Integrated planning?.
    Thanks in advance.
    Raj

    Hi Raj,
    the system will not use the cache for the query defined on the aggregation level but for the technical query used in the plan buffer. The name of the query used in the plan buffer is as follows:
    - aggregation level A on a MultiProvider M: Query name is M/!!1M
    - aggregation level A on a real-time cube C: Query name is C/!!1C
    With these names you should find the following settings in RSRT:
    Read Mode: H
    Req. Status: 0
    Cache Mode: 1
    Update Cache in Delta Process: X
    SP Grouping: 1 (only for MultiProvider)
    Use these settings, don't try to experiment with different settings.
    Best regards,
    Gregor
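The naming rule Gregor gives can be expressed directly (a trivial sketch; the function name is made up):

```python
def plan_buffer_query_name(provider):
    """Technical name of the plan buffer query for an aggregation level
    on the given MultiProvider or real-time cube."""
    return f"{provider}/!!1{provider}"
```

For a MultiProvider M this yields "M/!!1M", and for a real-time cube C it yields "C/!!1C", the names to look up in RSRT.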

  • Continuous Query Cache Local caching meaning

    Hi,
    I encountered the following problem when working with a continuous query cache with local caching set to TRUE.
    I was able to insert data into the Coherence cache and read it as well.
    Then I stopped the process and tried to read the data in the cache for the keys I had inserted earlier, but I received NULL as the result.
    NamedCache::Handle hCache = DerivedCQC::create(CacheFactory::getCache("dist-AccountGCE"), AlwaysFilter::getInstance(), true);
    // File:   DerivedCQC.hpp
    // Author: srathna1
    // Created on 15 July 2011, 02:47
    #ifndef DERIVEDCQC_HPP
    #define DERIVEDCQC_HPP
    #include "coherence/lang.ns"
    #include "coherence/net/cache/ContinuousQueryCache.hpp"
    #include "coherence/net/NamedCache.hpp"
    #include "coherence/util/Filter.hpp"
    #include "coherence/util/MapListener.hpp"
    using namespace coherence::lang;
    using coherence::net::cache::ContinuousQueryCache;
    using coherence::net::NamedCache;
    using coherence::util::Filter;
    using coherence::util::MapListener;
    class DerivedCQC
        : public class_spec<DerivedCQC,
            extends<ContinuousQueryCache> >
        {
        friend class factory<DerivedCQC>;
    protected:
        DerivedCQC(NamedCache::Handle hCache,
                Filter::View vFilter, bool fCacheValues = false,
                MapListener::Handle hListener = NULL)
            : super(hCache, vFilter, fCacheValues, hListener) {}
    public:
        virtual bool containsKey(Object::View vKey) const
            {
            return m_hMapLocal->containsKey(vKey);
            }
        };
    #endif // DERIVEDCQC_HPP
    When I switch the local storage flag to FALSE, I am able to read the data:
    NamedCache::Handle hCache = DerivedCQC::create(CacheFactory::getCache("dist-AccountGCE"), AlwaysFilter::getInstance(), false);
    Ideally, in the TRUE scenario, I expect that while I am connected to Coherence all keys and values are synced up and stored locally, and that each update is synced as well.
    In the FALSE scenario, it hooks into the Coherence cache and reads each key from there, caching values from that moment onwards. Please share how this is implemented underneath.
    Thanks and regards,
    Sura

    Hi Wei,
    I found the issue: when you declare your cache as a global variable, you get no data in the TRUE scenario, whereas if you declare the cache inside a method, you do retrieve data.
    Try this:
    #include <iostream>
    #include <coherence/net/CacheFactory.hpp>
    #include "coherence/lang.ns"
    #include <coherence/net/NamedCache.hpp>
    #include <stdio.h>
    #include <stdlib.h>
    #include <pthread.h>
    #include <coherence/net/cache/ContinuousQueryCache.hpp>
    #include <coherence/util/filter/AlwaysFilter.hpp>
    #include <coherence/util/filter/EntryFilter.hpp>
    #include "DerivedCQC.hpp"
    #include <fstream>
    #include <string>
    #include <sstream>
    #include <coherence/util/Set.hpp>
    #include <coherence/util/Iterator.hpp>
    #include <sys/types.h>
    #include <unistd.h>
    #include <coherence/stl/boxing_map.hpp>
    #include "EventPrinter.hpp"
    using namespace coherence::lang;
    using coherence::net::CacheFactory;
    using coherence::net::NamedCache;
    using coherence::net::ConcurrentMap;
    using coherence::net::cache::ContinuousQueryCache;
    using coherence::util::filter::AlwaysFilter;
    using coherence::util::filter::EntryFilter;
    using coherence::util::Set;
    using coherence::util::Iterator;
    using coherence::stl::boxing_map;
    NamedCache::Handle hCache = DerivedCQC::create(CacheFactory::getCache("gce-Account"), AlwaysFilter::getInstance(), true);
    int main(int argc, char** argv)
        {
        std::cout << "size: " << hCache->size() << std::endl;
        return 0;
        }
    In the above example you will see that the size is 0 in the TRUE case, and equal to the amount of data in the cache in the FALSE scenario.
    But if you declare the cache as below, you will get the results the documentation describes:
    int main(int argc, char** argv)
        {
        NamedCache::Handle hCache = DerivedCQC::create(CacheFactory::getCache("gce-Account"), AlwaysFilter::getInstance(), true);
        std::cout << "size: " << hCache->size() << std::endl;
        return 0;
        }
    Is this a bug, or is this the expected behaviour? According to my understanding, this is a bug.
    Thanks and regards,
    Sura

  • Empty query cache with ABAP code

    Hi Experts,
    Is there any way to empty the query cache using ABAP code?
    Thank you!
    Regards,
    Sam

    Sam,
    You can clear the cache using transaction RSRCACHE. You can also use a BDC ABAP program to do it.
    -Saket

  • Why should we turn off query cache when alternative UOM solution is used?

    Hi all, why should we turn off the query cache when the alternative UOM solution is used? I found this in the "Checklist for Query Performance", but I don't know why.
    Please tell me if you know.
    PS: I also don't know how to turn off the cache. I need your help, thanks!

    Hi,
    I also have some confusion regarding the cache parameters. What is the importance of the cache? Should we delete the cached memory from time to time for each query? I have checked it in RSRT but have never used the cache monitor function.

  • How do u do Query Caching/Aggregates/Optimise ETL

    Hello
    How do you do the following? A document or step-by-step approach would be really handy.
    1. How do you do query caching? What are the pros and cons, and how do you optimize it?
    2. How do you create aggregates? Is there a step-by-step method?
    3. How do you optimize ETL, and what are its benefits? Again, a document would be handy.
    Thanks

    Search SDN and ASUG for many good presentations.
    Here are a couple to get you started:
    http://www.asug.com/client_files/Calendar/Upload/ACF3DBF.ppt
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/library/biw/p-r/performance in sap bw.pdf
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/library/events/asug-biti-03/sap bw query performance tuning with aggregates

  • Query Cache in WLP 10.3

    Hi,
    Has anyone used a query cache in WLP applications without using any ORM tool?
    What options do I have for getting my frequently used queries cached?
    Thanks,
    CA

    If I understood you correctly, you want to do some sort of data caching in the portal controller rather than in the DAO layer. You can use WLP caches in the controller via CacheFactory: http://e-docs.bea.com/wlp/docs81/javadoc/com/bea/p13n/cache/CacheFactory.html . You can also cache content from JSPs using the cache tags: http://e-docs.bea.com/wls/docs81/jsp/customtags.html#56944
