Query cache, query monitor

Hi,
What is the purpose of the query monitor and the query cache? Can you please explain?
Points assured.
Regards,
Rekha

As the name indicates, the query monitor is used to monitor the runtime performance of BW queries. It is one of the tools in BW for monitoring query performance, and the transaction to run it is RSRT.
In RSRT you can execute queries in various modes and, to some extent, force a query to be executed along a certain path. For example, you can simulate the execution of a query without using an aggregate, without using the cache, and so on.
In the monitor you can also view how the query is being executed and diagnose the possible causes of a query running slowly.
Caching stores the query results in the memory of the BW system's application server. If you cache a query, its runtime performance will improve considerably, because the result set is held in memory and the OLAP engine does not have to read the database to fetch the records every time the query is run.
Query caching has some limitations: if the query result changes, the cache will not help, because the new result set has to be read from the database again and presented.
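To make the caching idea concrete, here is a minimal, purely illustrative sketch in Java of a result cache keyed by the query name and the last data load; it is not SAP's OLAP cache implementation, and every name in it is made up:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Illustrative only: cached results are keyed by the query plus the last load timestamp,
// so a new data load automatically produces a cache miss and a fresh database read.
class QueryResultCache {
    private final Map<String, Object> results = new ConcurrentHashMap<>();

    Object execute(String queryName, long lastLoadTimestamp, Supplier<Object> readFromDatabase) {
        String key = queryName + "@" + lastLoadTimestamp;   // a new load produces a new key
        return results.computeIfAbsent(key, k -> readFromDatabase.get());
    }
}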
You can find more on this at help.sap.com.
Ravi Thothadri

Similar Messages

  • Why should we turn off query cache when alternative UOM solution is used?

Hi all, why should we turn off the query cache when the alternative UOM solution is used? I found this in the "Checklist for Query Performance", but I don't know why.
Please tell me if you know.
PS: I also don't know how to turn off the cache. I need your help, thanks!

Hi,
I also have some confusion regarding the cache parameters. What is the importance of the cache? Should we delete the cache memory from time to time for each query? I have checked it in RSRT but have never used the cache monitor function.

  • Clear TREX query cache

    Hi all,
I'm stuck on this problem: I'm adding a document to a TREX search index using the KM API (a Web Dynpro for Java app). The document is processed and added to the index. So far so good.
The problem is that when I then do a search on that index (I search for ''), the document won't show up, because the query results of the previous search for '' on my index seem to be cached somewhere. When I wait for 10 minutes (cache expires?) or restart my Web Dynpro app (so the user needs to log in again), the document is found.
    How can I clear/disable the trex query cache?
    Thanks a lot,
    Jeroen
    PS: clicking on the 'clear cache' button in Search and Classification Cache Administration (TREX monitor) is not working. The cache that is pestering me seems to be a session-specific query cache.

In the TREX service in Visual Administrator, set queries.usecache = false.

  • SAP BW 3.5 Query Cache - no data for queries

    Dear experts,
we have a problem with the SAP BW query cache (BW 3.5). Sometimes, after queries have been pre-calculated using web templates, no figures are available when running a web report or when doing a drilldown or filter navigation. Deleting the cache for the affected query solves the issue, but that is not a reasonable or workable solution. The problem occurs non-reproducibly for different queries.
    Is this a "normal" error of the SAP BW we have to live with, or are there any solutions for it? Any hints are greatly appreciated.
    Thanks in advance & kind regards,
    daniel

Hi Daniel,
Try to work without the cache for those queries.
In any case, you should check how the cache option is configured for those queries.
You can see that in transaction RSRT.
Hope this helps.

  • BW3.5 - Query Cache - InfoObject decimal formatting

    Hello all,
I built a BW query that displays key figures. Each key figure uses the decimal-place formatting from the key figure InfoObject (in Query Designer, the Decimal Places property for each key figure is set to "[From Key Figure 0.00]").
I decided to change the key figure InfoObject to have 0 decimal places (on the BEx formatting tab for the key figure). Now, when I open Query Designer and look at the properties for the key figure, it is still set to "[From Key Figure 0.00]" (it should be "[From Key Figure 0]" to reflect the change to the key figure InfoObject).
I tried generating the report using RSRT and deleting the query cache, but it still shows two decimal places. Has anyone encountered this problem before? I am trying to avoid removing the key figure InfoObject from the query and re-adding it to reflect the change.
    Thanks!

Hello Brendon,
You have changed the key figure InfoObject to show 0 decimals (no decimals); that is fine, but in the query the key figure property is still set to 2 decimals, so the data is displayed with 2 decimals. The query setting is local and has priority over the key figure InfoObject setting.
In the key figure properties in the query you will see an option along the lines of "from the key figure", which means "use whatever is defined by the key figure InfoObject". Select that, and from then on you will get only as many decimals as you have defined in the key figure InfoObject.
Thanks
Tripple k

  • Issues with Query Caching in MII

    Hi All,
I am facing a strange problem with query caching in an MII query. I created one Xacute query and set the cache duration to 30 seconds. The BLS associated with the query retrieves data from the SAP system. In the web page this value is populated by executing an iCommand. These are the steps I followed:
The query is executed for the first time and retrieves data from SAP correctly. Let's say the value is val1.
At the 10th second, the value in SAP changes from val1 to val2.
The query is executed at the 15th second and returns val1. This is expected, as it comes from the cache.
The query is executed at the 35th second and returns val2, retrieved from the SAP system. This is correct.
The query is executed at the 40th second and returns val1 from the cache. This is not expected.
I have tried clearing the Java cache in the browser and the JCo cache on the server.
I have seen the same problem with a tag query as well.
    MII Version - 12.0.6 Build(12)
    Any thoughts on this.
    Thanks,
    Soumen

Soumen Mondal,
If you are facing this problem from the same client PC and in the same session, it is very strange; but if there are two sessions running on the same PC, this kind of issue can occur.
Something about caching:
To decide whether or not to cache a query, consider the number of times you intend to run the query per minute. For data that changes in seconds but is queried in minutes, I would recommend not caching the query at all.
A typical example of query caching is a query on the material master cached for 24 hours, where we know that after creating a material master record we will not create a PO for it on the same day.
    BR,
    SB
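As an aside, the 30-second cache duration described in the question is essentially a time-to-live cache. A minimal, hypothetical Java sketch of that idea (not MII's implementation; all names are made up):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Hypothetical time-to-live cache: an entry is reused until it is older than ttlMillis.
class TtlCache<V> {
    private static final class Entry<T> {
        final T value;
        final long expiresAtMillis;
        Entry(T value, long expiresAtMillis) { this.value = value; this.expiresAtMillis = expiresAtMillis; }
    }

    private final Map<String, Entry<V>> entries = new ConcurrentHashMap<>();
    private final long ttlMillis;

    TtlCache(long ttlMillis) { this.ttlMillis = ttlMillis; }

    V get(String key, Supplier<V> loader) {
        Entry<V> e = entries.get(key);
        if (e == null || System.currentTimeMillis() > e.expiresAtMillis) {
            // Expired or missing: re-read from the source system and cache the fresh value.
            e = new Entry<>(loader.get(), System.currentTimeMillis() + ttlMillis);
            entries.put(key, e);
        }
        return e.value;
    }
}

With a single shared instance, a value refreshed at second 35 should be served until second 65; seeing the old value again at second 40 points to a second cache instance or session, as suggested above.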

  • Caching Query Issues?

I have a test site (on the server) and a development site (localhost), and likewise two Oracle database schemas. Both Oracle schemas reside on the same server. I use the same data source name for both schemas in the CF Administrators (local and server). For some reason, the data seems to be pulled only from the test environment database, even though I run the code on development (localhost). I have cleared the cache in the CF Administrator. Any advice would be great. Thanks.


  • Named query cache not hit

    Hi,
    I'm using Toplink ORM 10.1.3.
    I have a table called STORE_CONFIG which has a primary key called KEYWORD (a VARCHAR2). The POJO mapped to this table is called StoreConfig.
    In the JDeveloper (10.1.3.1.0) mapping workbench I've defined a named query to query by the PK called "getConfigPropertyByKeyword". The type of the named query is ReadObjectQuery.
    SELECT keyword, key_value
    FROM STORE_CONFIG
    WHERE (keyword = #key)
    Under the options tab I have the following settings:
    Cache Statement: true
    Bind Parameters: true
    Cache Usage: Check Cache by Primary Key
    The application logs show that the same database queries are executed multiple times for the same PK (keyword)! Why is that? Shouldn't it be checking the Object Cache rather than going to the DB?
    I've tried it with "Cache Statement: false" and "Bind Parameters: false" with the same problem.
    If I click the Advanced tab and check "Cache Query Results" then the database is not hit twice for the same record. However it was my understanding that since I am querying by PK that I wouldn't need to set "Cache Query Results".
    Doesn't "Cache Query Results" apply to the Query Cache and not the Object Cache?

Your issue seems to be that you are using custom SQL for the query, not a TopLink Expression. When you use an Expression query, TopLink knows whether the query is by primary key and can get a cache hit.
When you use custom SQL, TopLink does not know that the SQL selects by primary key, so it does not get a cache hit.
You could either use an Expression for the query,
or, when using custom SQL, name your query argument the same as the database field defined as the primary key in your descriptor (case-sensitive),
    i.e.
    SELECT keyword, key_value
    FROM STORE_CONFIG
    WHERE (keyword = #KEYWORD)
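For reference, here is a sketch of the Expression-based alternative using the TopLink 10.1.3 native API; the session handling, the helper class name and the attribute name "keyword" are assumptions based on the mapping described above, not code from the thread:

import oracle.toplink.expressions.ExpressionBuilder;
import oracle.toplink.queryframework.ReadObjectQuery;
import oracle.toplink.sessions.Session;

public class StoreConfigQueries {
    // Registers a named query defined with an Expression, so TopLink can recognise it
    // as a query by primary key and check the object cache before hitting the database.
    public static void registerQuery(Session session) {
        ReadObjectQuery query = new ReadObjectQuery(StoreConfig.class);
        ExpressionBuilder builder = query.getExpressionBuilder();
        query.setSelectionCriteria(builder.get("keyword").equal(builder.getParameter("key")));
        query.addArgument("key");
        session.addQuery("getConfigPropertyByKeyword", query);
    }

    public static StoreConfig findByKeyword(Session session, String keyword) {
        return (StoreConfig) session.executeQuery("getConfigPropertyByKeyword", keyword);
    }
}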

  • Problem in continuous query cache with PofExtractor

    Hi,
I am creating a CQC with a filter that uses a PofExtractor. When I try to insert any record into the cache, I get an exception on the server side.
When I do not use the PofExtractor, it works fine.
If anyone knows about this problem, please help.
    I am using C# as my client application
    Regards
    Nitin Jain

Hi JK,
I have made some changes on my server. I am no longer using any POF definition on the server side; I think that is why it is giving me the error.
    2010-04-13 15:39:07.481/20.593 Oracle Coherence GE 3.5.1/461 <Error> (thread=DistributedCache, member=1): Exception occured during filter evaluation: MapEventFilter(mask=INSERTED|UPDATED_ENTERED|UPDATED_WITHIN, filter=GreaterEqualsFilter(PofExtractor(target=VALUE, navigator=SimplePofPath(indices=0)), Nitin)); removingthe filter...
    2010-04-13 15:39:07.481/20.593 Oracle Coherence GE 3.5.1/461 <Error> (thread=DistributedCache, member=1):
    (Wrapped) java.io.StreamCorruptedException: unknown user type: 1001
    at com.tangosol.util.ExternalizableHelper.fromBinary(ExternalizableHelper.java:261)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$ConverterFromBinary.convert(DistributedCache.CDB:4)
    at com.tangosol.util.ConverterCollections$ConverterMapEvent.getNewValue(ConverterCollections.java:3594)
    at com.tangosol.util.filter.MapEventFilter.evaluate(MapEventFilter.java:172)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$Storage.prepareDispatch(DistributedCache.CDB:82)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$Storage.postInvoke(DistributedCache.CDB:10)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$Storage.put(DistributedCache.CDB:156)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache.onPutRequest(DistributedCache.CDB:37)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$PutRequest.run(DistributedCache.CDB:1)
    at com.tangosol.coherence.component.net.message.requestMessage.DistributedCacheKeyRequest.onReceived(DistributedCacheKeyRequest.CDB:12)
    at com.tangosol.coherence.component.net.message.requestMessage.distributedCacheKeyRequest.ExtendedKeyRequest.onReceived(ExtendedKeyRequest.CDB:4)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onMessage(Grid.CDB:9)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:136)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache.onNotify(DistributedCache.CDB:3)
    at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
    at java.lang.Thread.run(Thread.java:595)
    Caused by: java.io.StreamCorruptedException: unknown user type: 1001
    at com.tangosol.io.pof.PofBufferReader.readAsObject(PofBufferReader.java:3289)
    at com.tangosol.io.pof.PofBufferReader.readObject(PofBufferReader.java:2600)
    at com.tangosol.io.pof.ConfigurablePofContext.deserialize(ConfigurablePofContext.java:348)
    at com.tangosol.util.ExternalizableHelper.deserializeInternal(ExternalizableHelper.java:2673)
    at com.tangosol.util.ExternalizableHelper.fromBinary(ExternalizableHelper.java:257)
    ... 15 more
Is there any way to use a continuous query cache without providing the object definition on the server side?
    Regards
    Nitin Jain

  • Sql query can be monitored?

Is there any way to monitor a SQL-based query through the performance monitor in SCOM, in both the 2007 and 2012 versions?

Hi,
If I am not wrong, you are looking to monitor the performance of a SQL query.
You can use the OLE DB template to collect the performance data.
Refer to the links below for more information:
http://technet.microsoft.com/en-us/library/hh457575.aspx
http://blogs.technet.com/b/authormps/archive/2011/02/24/oledb-based-monitoring.aspx
Regards
sridhar v

  • Continuous Query Caching - Expensive?

    Hello,
I have had a look at the documentation but I still cannot find a reasonable answer to the following question: how expensive are continuous query caches?
    Is it appropriate to have many of them?
    Is the following example an acceptable usage of Continuous query caching (does it scale?)
    In the context of a web application:
    User logs onto a website
    User performs a "Search" for financial instruments
    A continuous query cache is created with a filter for those instruments returned (say, 50) to listen to price updates.
    If the user pages, or does another search, the query cache is released and a new one, with an updated filter, is created.
    Does it make a difference if we are using the extend client?

    Hi,
    So 100 CQCs is probably not too excessive depending on the configuration of the process instantiating the CQCs and the cluster size etc.
    Each CQC will hold its own set of deserialized keys and values, so yes they are distinct objects, although a CQC of 50 entries would not be very big.
One query I have: you mention that this is a Web Application, but you also mention an Extend client. Is your Web App an Extend client of the main cluster? Is there a reason why you did this? Most people would make a Web App a storage-disabled cluster member so it performs a bit better. Provided the Web App sits on a server that is very close in network terms to the cluster (i.e. on the same switch), I would make it part of the cluster - or is the Web App the thing that is in the "regional environment"?
    If you are running CQCs over Extend then there used to be some issues with this if the Extend connection was lost. AFAIK this is supposed to be fixed in later patches so I would get 3.7.1.8 and make sure you test that the Web App continues to work and properly fails over if you kill its Extend connection. When the CQC fails over it will reinitialize all its data so you will need to cope with that if you are pushing changes based on the CQC.
    JK
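A minimal sketch of the per-search pattern described in the question, using the standard Coherence API; the cache name, extractor method and the shape of the instrument IDs are assumptions:

import java.util.Set;

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;
import com.tangosol.net.cache.ContinuousQueryCache;
import com.tangosol.util.extractor.ReflectionExtractor;
import com.tangosol.util.filter.InFilter;

public class PriceViewSketch {
    private ContinuousQueryCache view;

    // Called for each search: release the previous view and build a new one
    // filtered to the instruments returned by the search.
    public void showSearchResults(Set instrumentIds) {
        if (view != null) {
            view.release();                       // drop the old CQC and its listeners
        }
        NamedCache prices = CacheFactory.getCache("instrument-prices");
        view = new ContinuousQueryCache(prices,
                new InFilter(new ReflectionExtractor("getInstrumentId"), instrumentIds));
        // 'view' now stays in sync with price updates for just these ~50 instruments.
    }
}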

  • Missed and duplicate events with Continuous Query Cache

    We have seen missed events and duplicate events when we register to receive events (using a Continuous Query Cache) on a cache entry while the entry is being updated.
    Use case:
    Start a Node
    Start a Proxy
    Start Extend Client
    Implementation of the Extend Client
    Create Cache
    Add Entry to Cache
    Initiate Thread 1 {
        For each (1 to 30)
            Run Update EntryProcessor on the cache entry; the EntryProcessor increments the cache entry value by 1
    }
    Initiate Thread 2 {
        Wait until the cache entry has been updated 10 times
        Create MapListener {
            On Entry Insert Event {
                Print event
                Set initial value = new value
            }
            On Entry Update Event {
                Print event
                Set update value += 1
            }
        }
        Initiate ContinuousQueryCache(cache, AlwaysFilter, MapListener)
    }
    Start Thread 1
    Start Thread 2
    Wait until Thread 1 and Thread 2 have terminated
    Expected result = read the value of the entry from the cache
    Actual result = initial value + update value
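    In Java, the listener wiring sketched above would look roughly like this (the cache name is an assumption; the entry-processor updates from Thread 1 and the counting logic are omitted):

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import com.tangosol.net.cache.ContinuousQueryCache;
    import com.tangosol.util.AbstractMapListener;
    import com.tangosol.util.MapEvent;
    import com.tangosol.util.filter.AlwaysFilter;

    public class CqcEventCounter {
        public static void main(String[] args) {
            NamedCache cache = CacheFactory.getCache("test-cache");
            // The CQC registers the listener and then delivers an insert event for the existing
            // entry, followed by update events as Thread 1 keeps incrementing it.
            ContinuousQueryCache cqc = new ContinuousQueryCache(cache, new AlwaysFilter(),
                    new AbstractMapListener() {
                        public void entryInserted(MapEvent evt) {
                            System.out.println("Entry Inserted Event: " + evt);
                        }
                        public void entryUpdated(MapEvent evt) {
                            System.out.println("Entry Updated Event: " + evt);
                        }
                    });
        }
    }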
    Results we have seen in two tests:
    Test 1: Expected Result > Actual Result: missing events
    Entry Inserted Event: MapEvent{ContinuousQueryCache inserted: key=1, value=UpdateObject [intNumber=1, longNumber=13]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=13], new value=UpdateObject [intNumber=1, longNumber=15]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=15], new value=UpdateObject [intNumber=1, longNumber=16]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=16], new value=UpdateObject [intNumber=1, longNumber=17]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=17], new value=UpdateObject [intNumber=1, longNumber=18]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=18], new value=UpdateObject [intNumber=1, longNumber=19]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=19], new value=UpdateObject [intNumber=1, longNumber=20]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=20], new value=UpdateObject [intNumber=1, longNumber=21]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=21], new value=UpdateObject [intNumber=1, longNumber=22]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=22], new value=UpdateObject [intNumber=1, longNumber=23]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=23], new value=UpdateObject [intNumber=1, longNumber=24]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=24], new value=UpdateObject [intNumber=1, longNumber=25]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=25], new value=UpdateObject [intNumber=1, longNumber=26]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=26], new value=UpdateObject [intNumber=1, longNumber=27]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=27], new value=UpdateObject [intNumber=1, longNumber=28]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=28], new value=UpdateObject [intNumber=1, longNumber=29]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=29], new value=UpdateObject [intNumber=1, longNumber=30]}
    Expected Result: 30
    Actual Result: 29
    Issue: the event for the 14th update was not sent
    Test 2: Expected Result < Actual Result: Duplicate events
    Entry Inserted Event: MapEvent{ContinuousQueryCache inserted: key=1, value=UpdateObject [intNumber=1, longNumber=13]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=13], new value=UpdateObject [intNumber=1, longNumber=13]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=13], new value=UpdateObject [intNumber=1, longNumber=14]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=14], new value=UpdateObject [intNumber=1, longNumber=15]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=15], new value=UpdateObject [intNumber=1, longNumber=16]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=16], new value=UpdateObject [intNumber=1, longNumber=17]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=17], new value=UpdateObject [intNumber=1, longNumber=18]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=18], new value=UpdateObject [intNumber=1, longNumber=19]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=19], new value=UpdateObject [intNumber=1, longNumber=20]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=20], new value=UpdateObject [intNumber=1, longNumber=21]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=21], new value=UpdateObject [intNumber=1, longNumber=22]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=22], new value=UpdateObject [intNumber=1, longNumber=23]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=23], new value=UpdateObject [intNumber=1, longNumber=24]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=24], new value=UpdateObject [intNumber=1, longNumber=25]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=25], new value=UpdateObject [intNumber=1, longNumber=26]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=26], new value=UpdateObject [intNumber=1, longNumber=27]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=27], new value=UpdateObject [intNumber=1, longNumber=28]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=28], new value=UpdateObject [intNumber=1, longNumber=29]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=29], new value=UpdateObject [intNumber=1, longNumber=30]}
    Expected Result: 30
    Actual Result: 31
    Issue: the value from the 13th update was delivered in both an Insert event and an Update event
    reg
    Dasun.

    Hi Paul,
    I tested with 3.7.1.4 and 3.7.1.5. In both versions I can see the issue.
    reg
    Dasun.

  • JPA toplink Query cache

    What happens when performing queries with a query hint but no L2 cache enabled? Will the objects be cached / lazy-loaded / enriched on access?
    By the way, what is the default L2 cache setting: is it enabled or disabled? If enabled, where is it stored? In memory?
    Thanks
    Newbie

    A shared (L2) cache is enabled by default in TopLink/EclipseLink.
    The cache is in memory.
    Using a query cache without the shared cache makes little sense; you should use both, or neither.
    If you do, the query cache will be isolated to the session, the same as the object cache, so you will only get query cache hits for the duration of the persistence context/transaction, i.e. it will be an L1 query cache.
    To configure the cache see,
    http://wiki.eclipse.org/EclipseLink/Examples/JPA/Caching
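    As an illustration, a query results cache is typically enabled per query with the EclipseLink "eclipselink.query-results-cache" hint; in this sketch the entity and field names are made up:

    import javax.persistence.EntityManager;
    import javax.persistence.TypedQuery;

    public class CountryLookup {
        // With the shared (L2) object cache left enabled (the default), this also caches the
        // query result set itself, so repeated executions with the same parameter skip the database.
        public Country findByCode(EntityManager em, String code) {
            TypedQuery<Country> q = em.createQuery(
                    "SELECT c FROM Country c WHERE c.code = :code", Country.class);
            q.setHint("eclipselink.query-results-cache", "true");
            q.setParameter("code", code);
            return q.getSingleResult();
        }
    }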

  • When is query cache deleted?

    Hello All,
    I searched this forum and SAP Notes but could not find exactly when the query cache is deleted.
    For example, we know it happens when new data is loaded into an InfoCube, but what about when new master data is loaded? If a query still has the existing transaction data loaded, but new master data is loaded (and the query output reads an attribute), should it NOT read the cache?
    What are the hard and fast rules?
    Thank you so much!

    Queries check whether new data has been loaded to the cube/DSO since the data was cached. If new data has been loaded, the old cached data for that query is deleted and the next execution of the query reads the data from the database. A master data load does not affect the last load date on a cube/DSO, so it does not cause the cache to be deleted for a query on the cube/DSO. If you had set the master data InfoObject to be an InfoProvider and had a pure master data query (no cube/DSO), the cached master data query results should get deleted.
    As a matter of practice, master data should be loaded before the loads to the cubes/DSOs are done.
    As mentioned, you can delete the OLAP cache from transaction RSRCACHE. There is also a batch program you can run to delete entries from the OLAP cache. You can specify particular query results to be deleted, and you can also use it to delete all cached results that are older than a specified number of days.

  • Query cache in IP

    Hi
    Could anybody tell me how we can enable the query cache for reports built on an aggregation level in Integrated Planning?
    Thanks in advance.
    Raj

    Hi Raj,
    the system will not use the cache for the query defined on the aggregation level but for the technical query used in the plan buffer. The name of the query used in the plan buffer is as follows:
    - aggregation level A on a MultiProvider M: Query name is M/!!1M
    - aggregation level A on a real-time cube C: Query name is C/!!1C
    With these names you should find the following settings in RSRT:
    Read Mode: H
    Req. Status: 0
    Cache Mode: 1
    Update Cache in Delta Process: X
    SP Grouping: 1 (only for MultiProvider)
    Use these settings, don't try to experiment with different settings.
    Best regards,
    Gregor
