Right time to do aggregates/query caching

Hi,
When is the right time to do aggregates/query caching? What is the business scenario?
Thanks

Hi Jack,
Refer to the document link below for details.
https://websmp106.sap-ag.de/~sapidb/011000358700004339892004
Hope it helps.
Cheers
Anurag

Similar Messages

  • How do you do Query Caching/Aggregates/Optimise ETL

    Hello,
    How do you do the following? A document or step-by-step approach would be really handy.
    1. How do you do query caching? What are the pros and cons? How do you optimize it?
    2. How do you create aggregates? Is there a step-by-step method?
    3. How do you optimize ETL? What are the benefits? Again, a document would be handy.
    Thanks

    Search SDN and ASUG for many good presentations.
    Here are a couple to get you started:
    http://www.asug.com/client_files/Calendar/Upload/ACF3DBF.ppt
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/library/biw/p-r/performance in sap bw.pdf
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/library/events/asug-biti-03/sap bw query performance tuning with aggregates

  • Query Cache to Derive Summary Data - Advice?

    Good Evening, I am faced with a problem and I was hoping to get a nudge in the right direction.
         I have a cache that stores the status of orders and the respective times they took to process in each status as they went through our system. I need to retrieve some average times from that. Specifically, I need to calculate the average time it took for an order to go from status "A" (endtime) to status "B" (starttime). So essentially I need to retrieve the difference between values of two different statuses for every order and then average those differences. I know how to query the cache to get my result set and then iterate across it and calculate the average; however, I was wondering/hoping there was a more elegant solution.
         I have been reading through the user guide (http://wiki.tangosol.com/display/COH32UG/Provide+a+Data+Grid) and reading up on Aggregators and Agents, but there are no complete examples for me to view, and I am unsure whether these approaches would help me reach my goal.
         Knowing my task, is there any approach I should focus on, and if so, is there documentation anywhere to support it?
         Thanks in Advance!
         -- Grant
         Our data structure looks something like this:
         class ClientReport {
              public long lasttimeaccessed;
              public long clientid;
              // ...some other stuff...
              Map Orders;    // map of Order objects, keyed on orderid
         }
         class Order {
              public long orderID;
              // ...some other stuff...
              Map statuses;  // map of status changes, keyed on status
         }
         class Status {
              public String status;
              public long starttime;
              public long endtime;
         }

    Hi Grant,
         First of all, you might want to create an index on the status changes so that you can pre-filter by status. For that you would need a custom ValueExtractor implementation, because the status is nested two maps deep, and you would use that extractor to create the index.
         You can add the index with that ValueExtractor via the addIndex(ValueExtractor, boolean, Comparator) method of the cache.
         The ValueExtractor would need to collect the status strings from each status change in each order and return them as a collection.
         You can then use this index to filter for those report objects which have at least one status change ending up in the expected state.
         You could also create a similar ValueExtractor (let's name it ReportStatusObjectExtractor) which returns all Status objects from all orders in a report, and create an index with this ValueExtractor as well.
         Please be aware that the ValueExtractors need to implement the equals() and hashCode() methods properly. In this case, because the extractors have no parameters, you can implement equals() so that it returns true for all objects of the same class, and hashCode() can simply return 1.
         This second index can then be used to speed up evaluation of the aggregator, because you would not need to deserialize the entire report instance.
         You can now create your own InvocableMap.EntryAggregator implementation, which should also implement InvocableMap.ParallelAwareAggregator.
         The getParallelAggregator() method should return an instance of a Serializable class again implementing EntryAggregator; in its aggregate method it should call the extract method on each entry with an instance of the second ValueExtractor (kept as a constant) to extract the Status objects belonging to the report instance from the index:
              public class ReportStatusStringExtractor implements ValueExtractor, Serializable {
                public static final ReportStatusStringExtractor INSTANCE = new ReportStatusStringExtractor();
                public Object extract(Object oTarget) {
                  ClientReport report = (ClientReport) oTarget;
                  // extract the status string of each status change in each order and return all of them in a collection
                }
                public boolean equals(Object other) {
                  return other != null && other.getClass().equals(this.getClass());
                }
                public int hashCode() {
                  return 1;
                }
              }

              public class ReportStatusObjectExtractor implements ValueExtractor, Serializable {
                public static final ReportStatusObjectExtractor INSTANCE = new ReportStatusObjectExtractor();
                public Object extract(Object oTarget) {
                  ClientReport report = (ClientReport) oTarget;
                  // extract the Status objects of each order and return all of them in a collection
                }
                public boolean equals(Object other) {
                  return other != null && other.getClass().equals(this.getClass());
                }
                public int hashCode() {
                  return 1;
                }
              }

              public class OrderAverageAggregator implements InvocableMap.ParallelAwareAggregator {
                public static final OrderAverageAggregator INSTANCE = new OrderAverageAggregator();
                public InvocableMap.EntryAggregator getParallelAggregator() {
                  return OrderAverageParallelAggregator.INSTANCE;
                }
                public Object aggregateResults(Collection collResults) {
                  // join the partial results from all servers;
                  // the object returned from this method is what the cache.aggregate call returns
                }
                public Object aggregate(Set setEntries) {
                  // the object returned from this method is returned
                  // from the cache.aggregate call if invoked on a replicated cache
                  return aggregateResults(
                    Collections.singletonList(
                      OrderAverageParallelAggregator.INSTANCE.aggregate(setEntries)));
                }
              }

              public class OrderAverageParallelAggregator implements InvocableMap.EntryAggregator, Serializable {
                public static final OrderAverageParallelAggregator INSTANCE = new OrderAverageParallelAggregator();
                // aggregate method of the parallel aggregator
                public Object aggregate(Set setEntries) {
                  Iterator iter = setEntries.iterator();
                  while (iter.hasNext()) {
                    InvocableMap.Entry entry = (InvocableMap.Entry) iter.next();
                    Collection statusObjects = (Collection)
                      entry.extract(ReportStatusObjectExtractor.INSTANCE);
                    // ... do the averaging ...
                  }
                  // the object ('result') returned from this method on each storage-enabled node is passed
                  // in a collection to the aggregateResults method of the OrderAverageAggregator class
                  return (Serializable) result;
                }
              }

              // code doing the querying
              NamedCache cache = CacheFactory.getCache(...);
              Object result = cache.aggregate(
                new ContainsFilter(ReportStatusStringExtractor.INSTANCE, "expectedStatus"),
                OrderAverageAggregator.INSTANCE);

              // code adding the indexes; this needs to run only once after the cluster is started
              NamedCache cache = CacheFactory.getCache(...);
              cache.addIndex(ReportStatusStringExtractor.INSTANCE, false, null);
              cache.addIndex(ReportStatusObjectExtractor.INSTANCE, false, null);
         The class files for the two extractor classes and the OrderAverageParallelAggregator class must reside in the classpath of the storage-enabled cache JVMs.
         I hope this helps,
         Robert

  • Query cache,query monitor

    Hi,
    What is the purpose of the query monitor and the query cache? Can you please explain?
    Points assured.
    Regards
    Rekha

    As the verbiage indicates, the query monitor is used to monitor the runtime performance of BW queries. It is one of the tools in BW for monitoring query performance; the transaction to run it is RSRT.
    In RSRT, you can execute queries in various modes and, to some extent, force a query to be executed along a certain path. For example, you can simulate the execution of a query without using an aggregate, without using the cache, etc.
    In the monitor you can also view how the query is executed and diagnose the possible causes of a query running slowly.
    Caching stores the query results in the memory of the BW system's application server. If you cache a query, the runtime performance will improve considerably, because the result set is stored in memory and the OLAP engine does not have to read the database to fetch the records every time the query is run.
    Query caching has some limitations: if the query result changes, the cache will not help, because the new result set has to be read from the database again and presented.
    You can get more on this in help.sap.com
    Ravi Thothadri

  • Is there a better way to do this projection/aggregate query?

    Hi,
    Summary:
    Can anyone offer advice on how best to use JDO to perform
    projection/aggregate queries? Is there a better way of doing what is
    described below?
    Details:
    The web application I'm developing includes a GUI for ad-hoc reports on
    JDO's. Unlike 3rd party tools that go straight to the database we can
    implement business rules that restrict access to objects (by adding extra
    predicates) and provide extra calculated fields (by adding extra get methods
    to our JDO's - no expression language yet). We're pleased with the results
    so far.
    Now I want to make it produce reports with aggregates and projections
    without instantiating JDO instances. Here is an example of the sort of thing
    I want it to be capable of doing:
    Each asset has one associated t.description and zero or one associated
    d.description.
    For every distinct combination of t.description and d.description (skip
    those for which there are no assets)
    calculate some aggregates over all the assets with these values.
    and here it is in SQL:
    select t.description type, d.description description, count(*) count,
    sum(a.purch_price) sumPurchPrice
    from assets a
    left outer join asset_descriptions d
    on a.adesc_no = d.adesc_no,
    asset_types t
    where a.atype_no = t.atype_no
    group by t.description, d.description
    order by t.description, d.description
    It takes <100ms to produce 5300 rows from 83000 assets.
    The nearest I have managed with JDO is (pseudo code):
    perform projection query to get t.description, d.description for every asset
    loop on results
    if this is first time we've had this combination of t.description,
    d.description
    perform aggregate query to get aggregates for this combination
    The Java code is below. It takes about 16000ms (with debug/trace logging
    off, c.f. 100ms for SQL).
    If the inner query is commented out it takes about 1600ms (so the inner
    query is responsible for 9/10ths of the elapsed time).
    Timings exclude startup overheads like PersistenceManagerFactory creation
    and checking the meta data against the database (by looping 5 times and
    averaging only the last 4) but include PersistenceManager creation (which
    happens inside the loop).
    It would be too big a job for us to directly generate SQL from our generic
    ad-hoc report GUI, so that is not really an option.
    KodoQuery q1 = (KodoQuery) pm.newQuery(Asset.class);
    q1.setResult(
        "assetType.description, assetDescription.description");
    q1.setOrdering(
        "assetType.description ascending, "
            + "assetDescription.description ascending");
    KodoQuery q2 = (KodoQuery) pm.newQuery(Asset.class);
    q2.setResult("count(purchPrice), sum(purchPrice)");
    q2.declareParameters(
        "String myAssetType, String myAssetDescription");
    q2.setFilter(
        "assetType.description == myAssetType && "
            + "assetDescription.description == myAssetDescription");
    q2.compile();
    Collection results = (Collection) q1.execute();
    Set distinct = new HashSet();
    for (Iterator i = results.iterator(); i.hasNext();) {
        Object[] cols = (Object[]) i.next();
        String assetType = (String) cols[0];
        String assetDescription = (String) cols[1];
        String type_description =
            assetDescription != null
                ? assetType + "~" + assetDescription
                : assetType;
        if (distinct.add(type_description)) {
            Object[] cols2 =
                (Object[]) q2.execute(assetType, assetDescription);
            // System.out.println(
            //     "type " + assetType
            //     + ", description " + assetDescription
            //     + ", count " + cols2[0]
            //     + ", sum " + cols2[1]);
        }
    }
    q2.closeAll();
    q1.closeAll();

    Neil,
    It sounds like the problem that you're running into is that Kodo doesn't
    yet support the JDO2 grouping constructs, so you're doing your own
    grouping in the Java code. Is that accurate?
    We do plan on adding direct grouping support to our aggregate/projection
    capabilities in the near future, but as you've noticed, those
    capabilities are not there yet.
    -Patrick
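    For reference, a sketch of what a single grouped query could look like once grouping support is available. This assumes the JDO 2 Query.setGrouping API and reuses the Asset mapping and field names from the post above, so treat it as illustrative rather than working Kodo code:
    // Hypothetical single-query version using JDO 2 grouping (not supported by this Kodo release).
    KodoQuery q = (KodoQuery) pm.newQuery(Asset.class);
    q.setResult("assetType.description, assetDescription.description, "
        + "count(this), sum(purchPrice)");
    q.setGrouping("assetType.description, assetDescription.description");
    q.setOrdering("assetType.description ascending, "
        + "assetDescription.description ascending");
    Collection rows = (Collection) q.execute();   // one Object[] per distinct description pair
    q.closeAll();
    This would push the grouping and aggregation down to the database, much like the hand-written SQL, instead of issuing one aggregate query per combination.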

  • Time out of the Query

    SDN,
    I am getting a timeout for one query while executing it in the PRD system. That query's filter field setting is "Only values in InfoProvider". But in DEV I am able to execute the same query; in that system the field setting is "Only posted values for navigation".
    Thanks in advance

    Hi Kirun,
    I'm sure it works in DEV because the data is not as big as in production.
    You need to tune your InfoProvider (InfoCube/ODS).
    I once had the same problem: the query ran in DEV but not in PRD.
    Since I was using an ODS, I tuned it by creating an index, and after that it ran well.
    If it's an InfoCube, you can create an aggregate or compress it.
    Hope it helps you a lot.
    Br,
    Daniel N.

  • Issues with Query Caching in MII

    Hi All,
    I am facing a strange problem with query caching in an MII query. I have created one Xacute query and set the cache duration to 30 sec. The BLS associated with the query retrieves data from the SAP system, and in the web page this value is populated by executing an iCommand. These are the steps I followed:
    Query executed for the first time: it retrieves data from SAP correctly. Let's say the value is val1.
    At the 10th second the value in SAP changed from val1 to val2.
    Query executed at the 15th second: it returns val1, which is expected, since it comes from the cache.
    Query executed at the 35th second: it returns val2, retrieved from the SAP system. This is correct.
    Query executed at the 40th second: it returns val1 from the cache. This is not expected.
    I have tried clearing the Java cache in the browser and the JCo cache on the server.
    I have seen the same problem with a tag query as well.
    MII Version - 12.0.6 Build(12)
    Any thoughts on this.
    Thanks,
    Soumen

    Soumen Mondal,
    If you are facing this problem from the same client PC and in the same session, it's very strange, but if there are two sessions running on the same PC this kind of issue may come up.
    Something about caching:
    To decide whether to cache a query or not, consider the number of times you intend to run the query in a minute. For data that changes in seconds and is queried in minutes, I would recommend not caching the query at all.
    A typical example for query caching is a query on material master cached for 24 hours, where we know that after creating a material master we are not creating a PO on the same day.
    BR,
    SB

  • Named query cache not hit

    Hi,
    I'm using Toplink ORM 10.1.3.
    I have a table called STORE_CONFIG which has a primary key called KEYWORD (a VARCHAR2). The POJO mapped to this table is called StoreConfig.
    In the JDeveloper (10.1.3.1.0) mapping workbench I've defined a named query to query by the PK called "getConfigPropertyByKeyword". The type of the named query is ReadObjectQuery.
    SELECT keyword, key_value
    FROM STORE_CONFIG
    WHERE (keyword = #key)
    Under the options tab I have the following settings:
    Cache Statement: true
    Bind Parameters: true
    Cache Usage: Check Cache by Primary Key
    The application logs show that the same database queries are executed multiple times for the same PK (keyword)! Why is that? Shouldn't it be checking the Object Cache rather than going to the DB?
    I've tried it with "Cache Statement: false" and "Bind Parameters: false" with the same problem.
    If I click the Advanced tab and check "Cache Query Results" then the database is not hit twice for the same record. However it was my understanding that since I am querying by PK that I wouldn't need to set "Cache Query Results".
    Doesn't "Cache Query Results" apply to the Query Cache and not the Object Cache?

    Your issue seems to be that you are using custom SQL for the query, not a TopLink expression. When you use an Expression query, TopLink knows whether the query is by primary key and can get a cache hit.
    When you use custom SQL, TopLink does not know that the SQL is by primary key, so it does not get a cache hit.
    You could either use an Expression for the query,
    or, when using custom SQL, you should be able to name your query argument the same as the database field defined as the primary key in your descriptor (case sensitive).
    i.e.
    SELECT keyword, key_value
    FROM STORE_CONFIG
    WHERE (keyword = #KEYWORD)
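    For the Expression-based alternative, a rough sketch using the TopLink 10.1.3 ReadObjectQuery API; the attribute name "keyword" and the registration step are assumptions based on the mapping described above:
    // Sketch: build the named query with an Expression so TopLink can detect
    // the primary-key comparison and check the object cache before hitting the DB.
    ReadObjectQuery query = new ReadObjectQuery(StoreConfig.class);
    ExpressionBuilder builder = query.getExpressionBuilder();
    query.setSelectionCriteria(
        builder.get("keyword").equal(builder.getParameter("key")));
    query.addArgument("key");
    // register it in place of the custom-SQL named query, e.g.:
    // session.addQuery("getConfigPropertyByKeyword", query);
    With an Expression query like this, the "Check Cache by Primary Key" option can take effect without enabling "Cache Query Results".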

  • Missed and duplicate events with Continuous Query Cache

    We have seen missed events and duplicate events when we register to receive events (using a Continuous Query Cache) on an entry in the cache while the entry is being updated.
    Use case:
    Start a Node
    Start a Proxy
    Start Extend Client
    Implementation of the Extend Client
    Create Cache
    Add Entry to Cache
     Initiate Thread 1 {
          for each (1 to 30)
               run the Update Entry Processor on the cache entry; the Entry Processor increments the cache entry value by 1
     }
     Initiate Thread 2 {
          wait until the cache entry has been updated 10 times
          create MAP Listener {
               for Entry Insert Event {
                    print event
                    set Initial value = new value
               }
               for Entry Update Event {
                    print event
                    set Update value += 1
               }
          }
          initiate Continuous Query Cache (cache, Always Filter, MAP Listener)
     }
     Start Thread 1
     Start Thread 2
     Wait until Thread 1 and Thread 2 are terminated
    Expected Result = read the value of the entry from cache
    Actual result = Initial value + Update value
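     For reference, a minimal sketch of the registration step described in the use case above, assuming the standard Coherence ContinuousQueryCache API; the cache name and counting logic are illustrative only:
     // uses com.tangosol.net.*, com.tangosol.net.cache.ContinuousQueryCache,
     // com.tangosol.util.*, com.tangosol.util.filter.AlwaysFilter,
     // java.util.concurrent.atomic.AtomicInteger
     NamedCache cache = CacheFactory.getCache("test-cache");   // cache name assumed
     final AtomicInteger updates = new AtomicInteger();        // counts update events
     MapListener listener = new AbstractMapListener() {
         public void entryInserted(MapEvent evt) {
             System.out.println(evt);                          // the initial value arrives here
         }
         public void entryUpdated(MapEvent evt) {
             System.out.println(evt);
             updates.incrementAndGet();                        // one increment per update event
         }
     };
     // AlwaysFilter selects every entry, as in the test description above
     ContinuousQueryCache cqc =
         new ContinuousQueryCache(cache, AlwaysFilter.INSTANCE, listener);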
     Results we have seen in two tests:
    Test1: Expected Result > Actual results: Missing events
    Entry Inserted Event: MapEvent{ContinuousQueryCache inserted: key=1, value=UpdateObject [intNumber=1, longNumber=13]}
     Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=13], new value=UpdateObject [intNumber=1, longNumber=15]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=15], new value=UpdateObject [intNumber=1, longNumber=16]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=16], new value=UpdateObject [intNumber=1, longNumber=17]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=17], new value=UpdateObject [intNumber=1, longNumber=18]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=18], new value=UpdateObject [intNumber=1, longNumber=19]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=19], new value=UpdateObject [intNumber=1, longNumber=20]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=20], new value=UpdateObject [intNumber=1, longNumber=21]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=21], new value=UpdateObject [intNumber=1, longNumber=22]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=22], new value=UpdateObject [intNumber=1, longNumber=23]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=23], new value=UpdateObject [intNumber=1, longNumber=24]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=24], new value=UpdateObject [intNumber=1, longNumber=25]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=25], new value=UpdateObject [intNumber=1, longNumber=26]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=26], new value=UpdateObject [intNumber=1, longNumber=27]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=27], new value=UpdateObject [intNumber=1, longNumber=28]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=28], new value=UpdateObject [intNumber=1, longNumber=29]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=29], new value=UpdateObject [intNumber=1, longNumber=30]}
    Expected Result: 30
    Actual Result: 29
     Issue: Event on the 14th update was not sent
    Test 2: Expected Result < Actual Result: Duplicate events
    Entry Inserted Event: MapEvent{ContinuousQueryCache inserted: key=1, value=UpdateObject [intNumber=1, longNumber=13]}
     Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=13], new value=UpdateObject [intNumber=1, longNumber=13]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=13], new value=UpdateObject [intNumber=1, longNumber=14]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=14], new value=UpdateObject [intNumber=1, longNumber=15]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=15], new value=UpdateObject [intNumber=1, longNumber=16]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=16], new value=UpdateObject [intNumber=1, longNumber=17]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=17], new value=UpdateObject [intNumber=1, longNumber=18]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=18], new value=UpdateObject [intNumber=1, longNumber=19]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=19], new value=UpdateObject [intNumber=1, longNumber=20]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=20], new value=UpdateObject [intNumber=1, longNumber=21]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=21], new value=UpdateObject [intNumber=1, longNumber=22]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=22], new value=UpdateObject [intNumber=1, longNumber=23]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=23], new value=UpdateObject [intNumber=1, longNumber=24]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=24], new value=UpdateObject [intNumber=1, longNumber=25]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=25], new value=UpdateObject [intNumber=1, longNumber=26]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=26], new value=UpdateObject [intNumber=1, longNumber=27]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=27], new value=UpdateObject [intNumber=1, longNumber=28]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=28], new value=UpdateObject [intNumber=1, longNumber=29]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=29], new value=UpdateObject [intNumber=1, longNumber=30]}
    Expected Result: 30
    Actual Result: 31
     Issue: The 13th update was reported in both the Insert event and an Update event
    reg
    Dasun.

    Hi Paul,
    I tested with 3.7.1.4 and 3.7.1.5. In both versions I can see the issue.
    reg
    Dasun.

  • Change Aggregate Storage Cache

    Does anyone know how to change the aggregate storage cache setting in MaxL? I can no longer see it in EAS and I don't think I can change it in MaxL. Any clue?
    Thanks for your help.

    Try something like
    alter application ASOSamp set cache_size 64MB;
    I thought you could also right-click the ASO app in EAS and edit Properties > Pending cache size limit.
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • Query cache in IP

    Hi
    Could anybody tell me how we can enable the query cache for reports built on an aggregation level in Integrated Planning?
    Thanks in advance.
    Raj

    Hi Raj,
    the system will not use the cache for the query defined on the aggregation level but for the technical query used in the plan buffer. The name of the query used in the plan buffer is as follows:
    - aggregation level A on a MultiProvider M: Query name is M/!!1M
    - aggregation level A on a real-time cube C: Query name is C/!!1C
    With these names you should find the following settings in RSRT:
     Read Mode: H
     Req. Status: 0
     Cache Mode: 1
     Update Cache in Delta Process: X
     SP Grouping: 1   (only for MultiProvider)
    Use these settings, don't try to experiment with different settings.
    Best regards,
    Gregor

  • Why should we turn off query cache when alternative UOM solution is used?

    Hi all, why should we turn off the query cache when the alternative UOM solution is used? I found this in the "Checklist for Query Performance", but I don't know why.
    Please tell me if you know.
    PS: I also don't know how to turn off the cache. I need your help, thanks!

    Hi,
    I also have some confusion regarding the cache parameters. What is the importance of the cache? Should we delete the cache memory from time to time for each query? I have checked it in RSRT but have never used the cache monitor function.

  • Aggregate storage cache warning during buffer commit

    h5. Summary
    Having followed the documentation to set the ASO storage cache size I still get a warning during buffer load commit that says it should be increased.
    h5. Storage Cache Setting
    The documentation says:
    A 32 MB cache setting supports a database with approximately 2 GB of input-level data. If the input-level data size is greater than 2 GB by some factor, the aggregate storage cache can be increased by the square root of the factor. For example, if the input-level data size is 3 GB (2 GB * 1.5), multiply the aggregate storage cache size of 32 MB by the square root of 1.5, and set the aggregate cache size to the result: 39.04 MB.
    My database has 127,643,648 KB of base data, which is 60.8x bigger than 2 GB. The square root of this is 7.8, so my optimal cache size should be 7.8 * 32 MB = 250 MB. My cache size is in fact 256 MB, because I have to set it before the data load, based on estimates.
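    As a sanity check, a small sketch of that calculation; the 32 MB per 2 GB rule and the square-root scaling come from the documentation quoted above, and the input size is the figure from this post:
    // Worked example of the documented ASO cache sizing rule:
    // 32 MB of cache per 2 GB of input-level data, scaled by sqrt of the size factor.
    double inputKb = 127643648d;                  // input-level data from this post, in KB
    double inputGb = inputKb / (1024 * 1024);     // ~121.7 GB
    double factor  = inputGb / 2.0;               // ~60.8 times the 2 GB baseline
    double cacheMb = 32 * Math.sqrt(factor);      // ~250 MB recommended cache
    System.out.printf("recommended ASO cache: %.0f MB%n", cacheMb);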
    h5. Data Load
    The initial data load is done in 3 maxl sessions into 3 buffers. The final import output then looks like this:
    MAXL> import database "4572_a"."agg" data from load_buffer with buffer_id 1, 2, 3;
    OK/INFO - 1270041 - For better performance, increase the size of aggregate storage cache.
    OK/INFO - 1270041 - For better performance, increase the size of aggregate storage cache.
    OK/INFO - 1270041 - For better performance, increase the size of aggregate storage cache.
    OK/INFO - 1003058 - Data load buffer commit elapsed time : [5131.49] seconds.
    OK/INFO - 1241113 - Database import completed ['4572_a'.'agg'].
    MAXL>
    h5. The Question
    Can anybody tell me why the final import is recommending increasing the storage cache when it is already slightly larger than the value specified in the documentation?
    h5. Versions
    Essbase Release 11.1.2 (ESB11.1.2.1.102B147)
    Linux version 2.6.32.12-0.7-default (geeko@buildhost) (gcc version 4.3.4 [gcc-4_3-branch revision 152973] (SUSE Linux) ) #1 SMP 2010-05-20 11:14:20 +0200 64 bit

    My understanding is that the storage cache setting calculation you quoted is based on the cache requirements for retrieval. This recommendation has remained unchanged since ASO was first introduced in v7 (?) and was certainly written before the advent of parallel loading.
    I think that the ASO cache is used during the combination of the buffers. As a result, depending on how ASO works internally, you would get this warning unless:
    1. your buffer was equal to the final load size of the database,
    2. OR, if the cache is only used when data exists for the same "sparse" combination of dimensions in more than one buffer, the required size would be a function of the number of cross-buffer combinations required,
    3. OR the cache is needed only when compression dimension member groups cross buffers.
    By "sparse" dimensions I mean the non-compressed dimensions.
    Therefore you might try some experiments. To test the cases above:
    1. Forget it - you will get this message unless you have a cache large enough for the final data set size on disk.
    2. Sort your data so that no dimensional combination exists in more than one buffer - i.e. sort by all non-compression dimensions and then by the compression dimension.
    3. Often your compression dimension is time based (even though this is very sub-optimal). If so, you could sort the data by the compression dimension only and split the files so that the first 16 compression members (as seen in the outline) are in buffer 1, the next 16 in buffer 2, and the next in buffer 3.
    Also, if your machine is IO bound (as most are during a load of this size) and your CPU is not, try using OS-level compression on your input files - it could speed things up greatly.
    Finally regarding my comments on time based compression dimension - you should consider building a stored dimension for this along the lines of what I have proposed in some posts on network54 (search for DanP on network54.com/forum/58296 - I would give you a link but it is down now).
    OR better yet in the forthcoming book (of which Robb is a co-author) Developing Essbase Applications: Advanced Techniques for Finance and IT Professionals http://www.amazon.com/Developing-Essbase-Applications-Techniques-Professionals/dp/1466553308/ref=sr_1_1?ie=UTF8&qid=1335973291&sr=8-1
    I really hope you will try the suggestions above and post your results.

  • SAP BW 3.5 Query Cache - no data for queries

    Dear experts,
    we have a problem with the SAP BW query cache (BW 3.5). Sometimes, after the queries have been pre-calculated using web templates, no figures are available when running a web report or when doing a drilldown or filter navigation. A workaround is to delete the cache for that query, but this solution is neither reasonable nor practicable. The problem occurs non-reproducibly for different queries.
    Is this a "normal" error of SAP BW that we have to live with, or are there solutions for it? Any hints are greatly appreciated.
    Thanks in advance & kind regards,
    daniel

    Hi Daniel,
    Try to work without the cache for those queries.
    In any case, you should check how the cache option is configured for those queries.
    You can see that in the RSRV transaction.
    Hope this helps

  • BW3.5 - Query Cache - InfoObject decimal formatting

    Hello all,
    I built a BW query which displays key figures. Each key figure uses the decimal place formatting from the key figure InfoObject (in Query Designer, the Decimal Places property for each key figure is set to "[From Key Figure 0.00]").
    I decided to change the key figure InfoObject to have 0 decimal places (on the BEx formatting tab for the key figure). Now, when I open Query Designer and look at the properties for the key figure, it is still set to "[From Key Figure 0.00]" (it should be "[From Key Figure 0]" to reflect the InfoObject change).
    I tried generating the report using RSRT and deleting the query cache, but it still shows two decimal places. Has anyone encountered this problem before? I am trying to avoid removing the key figure InfoObject from the query and re-adding it to reflect the change.
    Thanks!

    Hello Brendon,
    You have changed the KF InfoObject to show 0 decimals (no decimals). That is okay, but in the query the KF property is set to 2 decimals, so the data is displayed with 2 decimals; the query setting is local and has priority over the KF InfoObject setting.
    If you look at the KF property in the query, there is an option "From Key Figure", which means "use whatever is defined by the KF InfoObject". Select that, and from then on you will get only as many decimals as are defined in the KF InfoObject.
    Thanks
    Tripple k
