When is query cache deleted?

Hello All,
I searched this forum and SAP for notes but couldn't find exactly when query cache is deleted.
For example, we know the cache is invalidated when new data is loaded to an InfoCube, but what about when new master data is loaded? If a query still has the existing transaction data cached, but new master data is loaded (and the query output reads an attribute), should it NOT read the cache?
What are the hard and fast rules?
Thank you so much!

Queries check whether new data has been loaded to the cube/DSO since the data was cached. If new data has been loaded, the old cached data for that query is deleted and the next execution of the query reads the data from the database. A new master data load does not affect the last-load date on a cube/DSO, so it would not cause the cache to be deleted for a query on the cube/DSO. If you had set the master data InfoObject to be an InfoProvider and had a pure master data query (no cube/DSO), the cached master data query results should get deleted.
As a matter of practice, Master Data should be loaded before loads to the cubes/ODS are done.
As mentioned, you can delete the OLAP cache from transaction RSRCACHE. There is also a batch program you can run to delete entries from the OLAP cache. You can specify particular query results to be deleted, and you can also use it to delete all cached results older than a specified number of days.
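Conceptually, the rule above is just a timestamp comparison. Below is a toy sketch of that logic; all names are invented for illustration and bear no relation to the actual BW implementation.

import java.time.Instant;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Toy model of the invalidation rule described above: a cached result is served
// only while the provider's last transactional load predates the cached entry.
// Master data loads do not advance that timestamp, so they do not invalidate it.
class OlapCacheSketch {
    static final class Entry {
        final Object rows;
        final Instant cachedAt;
        Entry(Object rows, Instant cachedAt) { this.rows = rows; this.cachedAt = cachedAt; }
    }

    private final Map<String, Entry> cache = new ConcurrentHashMap<>();

    Object run(String queryId, Instant providerLastLoad, Supplier<Object> readDatabase) {
        Entry hit = cache.get(queryId);
        if (hit != null && !providerLastLoad.isAfter(hit.cachedAt)) {
            return hit.rows;                               // no newer load: serve from cache
        }
        Object rows = readDatabase.get();                  // newer load: re-read from the database
        cache.put(queryId, new Entry(rows, Instant.now()));
        return rows;
    }
}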

Similar Messages

  • Why should we turn off query cache when alternative UOM solution is used?

    Hi all, why should we turn off the query cache when the alternative UOM solution is used? I found this in the "Checklist for Query Performance", but I don't know why.
    Please tell me if you know.
    PS: I also don't know how to turn off the cache. I need your help, thanks!

    Hi,
    I also have some confusion regarding the cache parameters. What is the importance of the cache? Should we delete the cached results from time to time for each query? I have checked it in RSRT but have never used the cache monitor function.

  • SAP BW 3.5 Query Cache - no data for queries

    Dear experts,
    we have a problem with the SAP BW query cache (BW 3.5). Sometimes, after queries have been pre-calculated using web templates, no figures are available when running a web report or when doing a drilldown or filter navigation. One way to work around the issue is to delete the cache for that query, but that is not a reasonable or sustainable solution. The problem occurs non-reproducibly for different queries.
    Is this a "normal" error of the SAP BW we have to live with, or are there any solutions for it? Any hints are greatly appreciated.
    Thanks in advance & kind regards,
    daniel

    Hi Daniel,
    Try working without the cache for those queries.
    In any case, you should check how the cache option is configured for those queries. You can see that in transaction RSRT.
    Hope this helps.

  • BW3.5 - Query Cache - InfoObject decimal formatting

    Hello all,
    I built a BW query which displays key figures. Each key figure uses the decimal-place formatting from the key figure InfoObject (in Query Designer, the Decimal Places property for each key figure is set to "[From Key Figure 0.00]").
    I decided to change the InfoObject key figure to have 0 decimal places (in the BEx formatting tab for the key figure). Now, when I open Query Designer and look at the properties for the key figure, it is still set to "[From Key Figure 0.00]" (it should be "[From Key Figure 0]" to reflect the key figure InfoObject change).
    I tried generating the report using RSRT and deleting the query cache, but it still shows up with two decimal places. Has anyone encountered this problem before? I am trying to avoid removing the key figure InfoObject from the query and re-adding it just to reflect the change.
    Thanks!

    Hello Brendon
    You have changed the KF InfoObject to show 0 decimals (no decimals). That is okay, but in the query the KF property is set to 2 decimals, so the data is displayed with 2 decimals. The query setting is local and has priority over the KF InfoObject settings.
    If you look at the KF properties in the query, you will find an option along the lines of "from the key figure", which means "use whatever is defined by the KF InfoObject". Select that, and from then on you will get only as many decimals as you defined in the KF InfoObject.
    Thanks
    Tripple k

  • When does proactive caching make sense?

    Hi all!
    A standard pattern for a multi-dimensional cube is to have one cube doing the heavy, time-consuming processing and then synchronize it to query cubes.
    In this setup, does proactive caching make sense?
    Best regards
    Bjørn
    B. D. Jensen

    Hello Jensen,
    Proactive caching is useful for low-volume cubes where data updates frequently, such as inventory or forecasting. But I will tell you from my own experience: proactive caching in SSAS is not worth it. It sometimes behaves unexpectedly; when data is updated/inserted/deleted in the source table, the cube doesn't start its processing. If you want to process the cube at a specified interval, you are better off creating a SQL Agent job to do so.
    Hope this will help you !!!
    Sanjeewan

  • Query Cache Behavior

    Hi Experts,
    I'm running 12.0.5, and have enjoyed 6 months or more of uptime for our production server, until Thursday when an electrical storm coupled with an odd occurrence with our UPS system brought it to an abrupt halt.
    Dev and Test environments came up OK, however the icon related to the J2EE server on production remained yellow.  We were unable to access MII, or NWA for that matter.
    I handed the problem over to our Basis team, who couldn't figure it out and then passed a note to SAP. While waiting for a response, I decided to do some more poking around, as it was now 8:00pm Friday night and any hope of doing anything over the weekend (other than work) was quickly fading.
    Poking around in the NW database, I found that the XMII_QUERYCACHE table had >2.5M rows (that seemed odd). Initial attempts to truncate the table were unsuccessful (something about resources and NOWAIT; I should have captured that error), but I was able to truncate by shutting down the server, then executing just as the database was starting, and VOILA!, NW and MII came online.
    My assumption is that either the table is read on startup, or perhaps an attempt is made to 'delete' all rows on startup (a delete would attempt to build a rollback segment in order to execute); is that the case? Also, are 'expired' cache entries supposed to be hanging around?
    I found that I have a single frequently executed query that was the culprit; query caching was enabled with a duration of 2 hours (many jobs use this query). I'll correct that situation, but I wanted to try and correct the root cause.
    Thanks,
    Rod

    I entered a ticket and just received a resolution from SAP, as follows:
    We have added this feature in both MII 12.0 and MII 12.1.
    The changes will be available with next SP release on SAP Service
    Marketplace.
    MII 12.0 SP12 will be available on 15-Dec-2010.
    My assumption is that there will be a system job that clears out expired query cache rows at some frequency, nightly maybe.
    Wanted to update the thread with the resolution in case anyone was having the same problem.
    Rod

  • Named query cache not hit

    Hi,
    I'm using Toplink ORM 10.1.3.
    I have a table called STORE_CONFIG which has a primary key called KEYWORD (a VARCHAR2). The POJO mapped to this table is called StoreConfig.
    In the JDeveloper (10.1.3.1.0) mapping workbench I've defined a named query to query by the PK called "getConfigPropertyByKeyword". The type of the named query is ReadObjectQuery.
    SELECT keyword, key_value
    FROM STORE_CONFIG
    WHERE (keyword = #key)
    Under the options tab I have the following settings:
    Cache Statement: true
    Bind Parameters: true
    Cache Usage: Check Cache by Primary Key
    The application logs show that the same database queries are executed multiple times for the same PK (keyword)! Why is that? Shouldn't it be checking the Object Cache rather than going to the DB?
    I've tried it with "Cache Statement: false" and "Bind Parameters: false" with the same problem.
    If I click the Advanced tab and check "Cache Query Results" then the database is not hit twice for the same record. However it was my understanding that since I am querying by PK that I wouldn't need to set "Cache Query Results".
    Doesn't "Cache Query Results" apply to the Query Cache and not the Object Cache?

    Your issue seems to be that you are using custom SQL for the query, not a TopLink expression. When you use an Expression query, TopLink knows the query is by primary key and can get a cache hit.
    When you use custom SQL, TopLink does not know that the SQL is by primary key, so it does not get a cache hit.
    You could either use an Expression for the query,
    or when using custom SQL you should be able to name your query argument the same as your database field defined as the primary key in your descriptor (case sensitive).
    i.e.
    SELECT keyword, key_value
    FROM STORE_CONFIG
    WHERE (keyword = #KEYWORD)
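    For reference, a rough sketch of the Expression-based alternative might look like this. EclipseLink-style packages are shown; TopLink 10.1.3 uses the oracle.toplink.* equivalents, and StoreConfig is the POJO from the post above.
    import org.eclipse.persistence.expressions.ExpressionBuilder;
    import org.eclipse.persistence.queries.ReadObjectQuery;

    class KeywordQuerySketch {
        static ReadObjectQuery buildKeywordQuery() {
            ReadObjectQuery query = new ReadObjectQuery(StoreConfig.class);
            ExpressionBuilder row = query.getExpressionBuilder();
            // An Expression on the mapped PK attribute lets TopLink recognize a
            // by-primary-key read and check the object cache before touching the DB.
            query.setSelectionCriteria(row.get("keyword").equal(row.getParameter("KEYWORD")));
            query.addArgument("KEYWORD");
            return query;
        }
    }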

  • Problem in continuous query cache with PofExtractor

    Hi,
    I am creating a CQC with a filter that uses a PofExtractor. When I try to insert any record into the cache, it gives me an exception on the server side.
    When I do not use the PofExtractor it works fine.
    If anyone knows about this problem, please help.
    I am using C# as my client application
    Regards
    Nitin Jain

    Hi JK,
    I have made some changes on my server. Now I am not using any POF definition on the server side. I think that's why it is giving me the error.
    2010-04-13 15:39:07.481/20.593 Oracle Coherence GE 3.5.1/461 <Error> (thread=DistributedCache, member=1): Exception occured during filter evaluation: MapEventFilter(mask=INSERTED|UPDATED_ENTERED|UPDATED_WITHIN, filter=GreaterEqualsFilter(PofExtractor(target=VALUE, navigator=SimplePofPath(indices=0)), Nitin)); removing the filter...
    2010-04-13 15:39:07.481/20.593 Oracle Coherence GE 3.5.1/461 <Error> (thread=DistributedCache, member=1):
    (Wrapped) java.io.StreamCorruptedException: unknown user type: 1001
    at com.tangosol.util.ExternalizableHelper.fromBinary(ExternalizableHelper.java:261)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$ConverterFromBinary.convert(DistributedCache.CDB:4)
    at com.tangosol.util.ConverterCollections$ConverterMapEvent.getNewValue(ConverterCollections.java:3594)
    at com.tangosol.util.filter.MapEventFilter.evaluate(MapEventFilter.java:172)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$Storage.prepareDispatch(DistributedCache.CDB:82)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$Storage.postInvoke(DistributedCache.CDB:10)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$Storage.put(DistributedCache.CDB:156)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache.onPutRequest(DistributedCache.CDB:37)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$PutRequest.run(DistributedCache.CDB:1)
    at com.tangosol.coherence.component.net.message.requestMessage.DistributedCacheKeyRequest.onReceived(DistributedCacheKeyRequest.CDB:12)
    at com.tangosol.coherence.component.net.message.requestMessage.distributedCacheKeyRequest.ExtendedKeyRequest.onReceived(ExtendedKeyRequest.CDB:4)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onMessage(Grid.CDB:9)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:136)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache.onNotify(DistributedCache.CDB:3)
    at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
    at java.lang.Thread.run(Thread.java:595)
    Caused by: java.io.StreamCorruptedException: unknown user type: 1001
    at com.tangosol.io.pof.PofBufferReader.readAsObject(PofBufferReader.java:3289)
    at com.tangosol.io.pof.PofBufferReader.readObject(PofBufferReader.java:2600)
    at com.tangosol.io.pof.ConfigurablePofContext.deserialize(ConfigurablePofContext.java:348)
    at com.tangosol.util.ExternalizableHelper.deserializeInternal(ExternalizableHelper.java:2673)
    at com.tangosol.util.ExternalizableHelper.fromBinary(ExternalizableHelper.java:257)
    ... 15 more
    Is there any way we can apply a continuous query cache without providing the object definition on the server side?
    Regards
    Nitin Jain
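    For context, the client-side construction being discussed looks roughly like the sketch below (the cache name and POF index are illustrative, and constructor forms vary by Coherence version). Judging by the stack trace, the failure occurs when the server deserializes the event's new value for the MapEventFilter (ConverterMapEvent.getNewValue), so the server's POF configuration still needs the user type (1001 here) registered even though PofExtractor itself can evaluate against the binary form.
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import com.tangosol.net.cache.ContinuousQueryCache;
    import com.tangosol.util.Filter;
    import com.tangosol.util.ValueExtractor;
    import com.tangosol.util.extractor.PofExtractor;
    import com.tangosol.util.filter.GreaterEqualsFilter;

    class PofCqcSketch {
        public static void main(String[] args) {
            NamedCache cache = CacheFactory.getCache("example"); // illustrative cache name
            // Extract POF property index 0 of the cached value as a String.
            ValueExtractor firstField = new PofExtractor(String.class, 0);
            Filter filter = new GreaterEqualsFilter(firstField, "Nitin");
            ContinuousQueryCache cqc = new ContinuousQueryCache(cache, filter);
        }
    }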

  • Can we have triggers that get fired when we Query a table?

    Hi All,
    What would be the precise answers to the following questions?
    1. Can we have triggers that get fired when we Query a table?
    2. What the relation of triggers and delete, truncate statement in one line?
    3. What is data modelling? Why is it necessary?
    4. Which are not mandatory but essential files in Oracle?
    Regards,
    AAK

    1. Can we have triggers that get fired when we query a table?
    For INSERT, UPDATE and DELETE statements, yes; for SELECT statements, no.
    Question 2 is not clear to me.
    Question 3 is very, very general...
    4. Which are not mandatory but essential files in Oracle?
    All database files (initialization file, control files, datafiles, redo log files) are mandatory. What is not mandatory, but considered (very) bad practice, is to have only 1 control file and only 1 redo log group with 1 redo log file.

  • Query for deleting the minimum updated record.

    Hello Everybody,
    I have a table USER_RECENT_PROJECTS with six columns: USER_NAME, PROJECT_ID, CREATED_BY, CREATED_ON, UPDATED_BY and UPDATED_ON. The purpose of this table is to hold the 5 most recent projects each user has worked on.
    I have a trigger called RECENT_PRJ_TRIGG which is fired when data is inserted or updated on the PROJECT table. The trigger calls the procedure PROC_USER_RECENT_PRJ, which puts the data into this table.
    It inserts up to 5 records; when the sixth record comes, it deletes the record with the smallest UPDATED_ON from USER_RECENT_PROJECTS. The problem is that it deletes records belonging to other users, which I don't want. I want to delete the record with the smallest UPDATED_ON for the particular user only.
    Please help me with this issue.
    Here is the trigger:
    CREATE TRIGGER RECENT_PRJ_TRIGG
    AFTER INSERT OR UPDATE ON PROJECT
    FOR EACH ROW
    DECLARE
      NUMBER_OF_PROJECTS  NUMBER := 0;
      EXISTING_PROJECT_ID NUMBER := 0;
    BEGIN
      SELECT COUNT(*) INTO NUMBER_OF_PROJECTS
        FROM USER_RECENT_PROJECTS
       WHERE USER_NAME = :NEW.UPDATED_BY;
      SELECT PROJECT_ID INTO EXISTING_PROJECT_ID
        FROM USER_RECENT_PROJECTS
       WHERE PROJECT_ID = :NEW.PROJECT_ID
         AND USER_NAME = :NEW.UPDATED_BY;
      NVLX.PROC_USER_RECENT_PRJ(NUMBER_OF_PROJECTS, :NEW.PROJECT_ID, EXISTING_PROJECT_ID,
                                :NEW.UPDATED_BY, :NEW.CREATED_BY, :NEW.CREATED_ON);
    EXCEPTION
      WHEN NO_DATA_FOUND THEN
        NVLX.PROC_USER_RECENT_PRJ(NUMBER_OF_PROJECTS, :NEW.PROJECT_ID, 0,
                                  :NEW.UPDATED_BY, :NEW.CREATED_BY, :NEW.CREATED_ON);
    END;
    And this is the procedure that inserts the data:
    CREATE OR REPLACE PROCEDURE PROC_USER_RECENT_PRJ (
      NUMBER_OF_PROJECTS  IN NUMBER,
      NEW_PROJECT_ID      IN PROJECT.PROJECT_ID%TYPE,
      EXISTING_PROJECT_ID IN USER_RECENT_PROJECTS.PROJECT_ID%TYPE,
      USER_NAME           IN CONTENT_USER.USER_NAME%TYPE,
      CREATED_BY          IN PROJECT.CREATED_BY%TYPE,
      CREATED_ON          IN PROJECT.CREATED_ON%TYPE)
    IS
      MAX_RECENT_PROJECTS NUMBER := 5;
    BEGIN
      IF NUMBER_OF_PROJECTS <= MAX_RECENT_PROJECTS AND EXISTING_PROJECT_ID = NEW_PROJECT_ID THEN
        UPDATE USER_RECENT_PROJECTS
           SET UPDATED_ON = SYSDATE,
               UPDATED_BY = USER_NAME
         WHERE PROJECT_ID = EXISTING_PROJECT_ID
           AND USER_RECENT_PROJECTS.USER_NAME = USER_NAME;
      ELSIF NUMBER_OF_PROJECTS < MAX_RECENT_PROJECTS AND EXISTING_PROJECT_ID != NEW_PROJECT_ID THEN
        INSERT INTO USER_RECENT_PROJECTS
        VALUES (USER_NAME, NEW_PROJECT_ID, CREATED_BY, CREATED_ON, USER_NAME, SYSDATE);
      ELSIF NUMBER_OF_PROJECTS = MAX_RECENT_PROJECTS AND EXISTING_PROJECT_ID != NEW_PROJECT_ID THEN
        DELETE FROM USER_RECENT_PROJECTS
         WHERE USER_RECENT_PROJECTS.PROJECT_ID IN (
                 SELECT PROJECT_ID
                   FROM USER_RECENT_PROJECTS
                  WHERE UPDATED_ON = (SELECT MIN(UPDATED_ON)
                                        FROM USER_RECENT_PROJECTS
                                       WHERE USER_RECENT_PROJECTS.USER_NAME = USER_NAME));
        INSERT INTO USER_RECENT_PROJECTS
        VALUES (USER_NAME, NEW_PROJECT_ID, CREATED_BY, CREATED_ON, USER_NAME, SYSDATE);
      END IF;
    EXCEPTION
      WHEN NO_DATA_FOUND THEN
        NVLX.PROC_USER_RECENT_PRJ(NUMBER_OF_PROJECTS, NEW_PROJECT_ID, 0, USER_NAME, CREATED_BY, CREATED_ON);
    END PROC_USER_RECENT_PRJ;
    Please help me on this issue.
    Thanks in advance.....

    Thanks for your suggestion....
    I am using the trigger to populate the data in USER_RECENT_PROJECTS. The trigger is fired when data is inserted or updated in the PROJECT table, and it calls the procedure PROC_USER_RECENT_PRJ, which puts the data into the USER_RECENT_PROJECTS table. The problem is in the procedure: up to five records it inserts correctly, but when I insert the sixth record it should delete the row with the smallest UPDATED_ON for that user and insert the new record. Instead, it deletes a record belonging to another user, which I don't want. I want to delete the record of the particular user. I am using this query for deleting the record:
    DELETE FROM USER_RECENT_PROJECTS
     WHERE USER_RECENT_PROJECTS.PROJECT_ID = (
             SELECT PROJECT_ID
               FROM USER_RECENT_PROJECTS
              WHERE UPDATED_ON = (SELECT MIN(UPDATED_ON)
                                    FROM USER_RECENT_PROJECTS
                                   WHERE USER_RECENT_PROJECTS.USER_NAME = USER_NAME)
                AND USER_RECENT_PROJECTS.USER_NAME = USER_NAME)
       AND USER_RECENT_PROJECTS.USER_NAME = USER_NAME;
    When I fire this query individually it deletes the proper record, but when I use it inside the procedure it creates the problem: it deletes a record belonging to another user.
    Please suggest another query for the deletion.
    Thanks in advance.......
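    A likely cause, for what it's worth: inside the procedure, the unqualified USER_NAME in the embedded SQL resolves to the table column rather than to the parameter, because column names take precedence over PL/SQL identifiers of the same name. A predicate like USER_RECENT_PROJECTS.USER_NAME = USER_NAME therefore compares the column with itself and is true for every row, which is why the statement behaves correctly when you run it standalone (where USER_NAME can only be a value you supply) but deletes other users' rows inside the procedure. Renaming the parameter (for example to P_USER_NAME) or qualifying it as PROC_USER_RECENT_PRJ.USER_NAME in the WHERE clauses should restrict the delete to the intended user.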

  • Crash when updating site cache?  MM_Username1?

    Hi,
    All of a sudden, tonight, DW8.0.2 will no longer open or load one of my sites, crashing when "Updating Site Cache". The site is ASP linked to an SQL database.
    Checking the file that the cache gets to, I notice that the problem file is within a sub-directory of the site, called "/admin". Thing is though, it's not always the same file, but it IS always a file within the /admin sub-directory.
    Within this sub-directory I altered, by hand, the Login User code and Restrict Access to Page code to create and use the session variable MM_Username1 as opposed to the default MM_Username.
    Could this be the reason for the sudden trouble with the site cache? I am puzzled because the site was working fine for a few weeks now, with this sub-directory included, and I didn't create the MM_Username1 issue today either; it has also been working fine for a good few weeks now. I've also not added anything new to the site, that I am aware of.
    When I remove the sub-directory from the site, the Updating Site Cache works fine, so I am 100% sure there is a problem with the files within this sub-directory. How will I know which one though?
    I've tried this, as recommended by Adobe, but I still get a freeze-up when re-creating the cache:
    1. Try renaming the Dreamweaver user configuration folder, so that Dreamweaver will automatically generate a new user configuration folder the next time Dreamweaver launches. The Configuration folder is located here: C:\Documents and Settings\<username>\Application Data\Macromedia\Dreamweaver 8\Configuration
    2. Recreate your Dreamweaver user settings in the registry as follows: launch the Registry Editor by clicking the Start button, choose Run, then type "regedit". In the Registry Editor, navigate to this folder: HKEY_CURRENT_USER\Software\Macromedia\Dreamweaver 8. Rename the "Dreamweaver 8" key to "DreamweaverOLD", so that Dreamweaver will automatically generate a new user settings key the next time you launch it.
    3. Recreate your Dreamweaver site definitions in the registry as follows. The steps below will delete your site definitions in Dreamweaver.
    4. Launch the Registry Editor by clicking the Start button, choose Run, then type "regedit". In the Registry Editor, navigate to this folder: HKEY_CURRENT_USER\Software\Macromedia\Common\8\Sites. Rename the "Sites" key to "SitesOLD", so that Dreamweaver will automatically generate a new Sites key the next time you launch it.
    Any further advice?
    Regards
    nath.

    Found it! Jeesh.
    I had a querystring value in my UPDATE redirect URL which was coded incorrectly!!!
    I am frustrated that, rather than just highlighting the error or, at the very least, producing an error when opening the specific file, this type of thing crashed the entire DW programme.
    The time I spent in Notepad today changing MM_Username1 back to MM_Username!! Turns out that was nothing to do with it. <groan> Yeah, that's right, laugh it up! :o)
    Anyway, solved now. A misplaced ", %, ), (, <, >, & etc can cause you no end of grief! :o(
    Nath.
    "tradmusic.com" <[email protected]> wrote in
    message
    news:[email protected]...
    > Hi,
    > All of a sudden, tonight, DW8.0.2 will no longer open or
    load one of my
    > sites, crashing when "Updating Site Cache". The site is
    ASP linked to an
    > SQL database.
    > Checking the file that the cache gets to, I notice that
    the problem file
    > is within a sub-directory of the site, called "/admin".
    Thing is though,
    > it's not always the same file, but it IS always a file
    within the /admin
    > sub-directory.
    >
    > Within this sub-directory I altered, by hand, the Login
    User code and
    > Restrict Access to Page code to create and use the
    session variable
    > MM_Username1 as opposed to the default MM_Username.
    > Could this be the reason for the sudden trouble with the
    site cache? I
    > am puzzled because the site was working fine for a few
    weeks now, with
    > this sub-directory included, and I didn't create the
    MM_Username1 issue
    > today either, it has also been working fine for a good
    few weeks now.
    > I've also not added anything new to the site, that I am
    aware of.
    >
    > When I remove the sub-directory from the site, the
    Updating Site Cache
    > works fine, so I am 100% sure there is a problem with
    the files within
    > this sub-directory. How will I know which one though?
    >
    > I've tried this, as recommended by Adobe, but I still
    get a freeze-up when
    > re-creating the cache:
    >
    > 1. Try renaming the Dreamweaver user configuration
    folder, so that
    > Dreamweaver will automatically generate a new user
    configuration folder
    > the next time Dreamweaver launches. The Configuration
    folder is located
    > here: C:\Documents and
    Settings\<username>\Application
    > Data\Macromedia\Dreamweaver 8\Configuration
    >
    > 2. Recreate your Dreamweaver user settings in the
    registry as follows:
    >
    > Launch the Registry Editor by clicking the Start button,
    choose Run, then
    > type "regedit". In the Registry Editor, navigate to this
    folder:
    > HKEY_CURRENT_USER\Software\Macromedia\Dreamweaver 8
    > Rename the "Dreamweaver 8" key to "DreamweaverOLD", so
    that Dreamweaver
    > will automatically generate a new user settings key the
    next time you
    > launch it.
    >
    > 3. Recreate your Dreamweaver site definitions in the
    registry as follows.
    > The steps below will delete your site definitions in
    Dreamweaver.
    >
    > 4. Launch the Registry Editor by clicking the Start
    button, choose Run,
    > then type "regedit". In the Registry Editor, navigate to
    this folder:
    >
    > HKEY_CURRENT_USER\Software\Macromedia\Common\8\Sites
    >
    > Rename the "Sites" key to "SitesOLD", so that
    Dreamweaver will
    > automatically generate a new Sites key the next time you
    launch it.
    >
    > Any further advice?
    > Regards
    > nath.
    >

  • Continuous Query Caching - Expensive?

    Hello,
    I have had a look at the documentation but I still cannot find a reasonable answer to the following question: how expensive are continuous query caches?
    Is it appropriate to have many of them?
    Is the following example an acceptable usage of continuous query caching (does it scale)?
    In the context of a web application:
    User logs onto a website
    User performs a "Search" for financial instruments
    A continuous query cache is created with a filter for those instruments returned (say, 50) to listen to price updates.
    If the user pages, or does another search, the query cache is released and a new one, with an updated filter, is created.
    Does it make a difference if we are using the extend client?

    Hi,
    So 100 CQCs is probably not too excessive depending on the configuration of the process instantiating the CQCs and the cluster size etc.
    Each CQC will hold its own set of deserialized keys and values, so yes they are distinct objects, although a CQC of 50 entries would not be very big.
    One query I have - you mention that this is a Web Application but you also mention an Extend Client. Is your Web App an Extend client of the main cluster? Is there a reason why you did this? Most people would make a Web App a storage-disabled cluster member so it would perform a bit better. Provided the Web App sits on a server that is very close in network terms to the cluster (i.e. same switch), I would make it part of the cluster - or is the Web App the thing that is in the "regional environment"?
    If you are running CQCs over Extend then there used to be some issues with this if the Extend connection was lost. AFAIK this is supposed to be fixed in later patches, so I would get 3.7.1.8 and make sure you test that the Web App continues to work and properly fails over if you kill its Extend connection. When the CQC fails over it will reinitialize all its data, so you will need to cope with that if you are pushing changes based on the CQC.
    JK
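    To make the lifecycle described in the question concrete, a per-search CQC might be created and released roughly like this (the cache name, extractor and key set are illustrative):
    import java.util.Set;
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import com.tangosol.net.cache.ContinuousQueryCache;
    import com.tangosol.util.Filter;
    import com.tangosol.util.extractor.ReflectionExtractor;
    import com.tangosol.util.filter.InFilter;

    class SearchCqcSketch {
        // One CQC per active search; release it when the user pages or searches again.
        static ContinuousQueryCache openFor(Set instrumentIds) {
            NamedCache prices = CacheFactory.getCache("prices"); // illustrative cache name
            Filter visible = new InFilter(new ReflectionExtractor("getInstrumentId"), instrumentIds);
            return new ContinuousQueryCache(prices, visible);
        }

        static void close(ContinuousQueryCache cqc) {
            cqc.release(); // drops the local snapshot and deregisters the underlying listeners
        }
    }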

  • Parallel query and DELETE statements

    Hi Gurus, need your help in understanding parallel execution
    We are noticing that DELETE statements execute very slowly when parallel query is forced. Is this expected?
    All the literature that I read says that parallel query has no impact on DML statements. Yet the query plan for the DELETE statement shows that it will be executed in parallel mode. This could mean that the scan portion is happening in parallel but the delete operation itself is happening serially, correct?
    I tested on various servers and multiple tables before posting my question here. They all show consistent results: DELETE statements are twice as slow when parallel query is forced. The same happens when a parallel degree is set in the table definition.
    For your information, we are running 10g on a Windows server with 15 million rows in the table, 5 million rows being deleted by the statement. There is one index on the table and it doesn't match the columns in the query. The query plan shows a full table scan. The table is not partitioned.
    Thanks for your help in advance

    Parallel DML is supported by Oracle. Obviously, when enabled, it can impact a DML statement. (What literature have you read that said otherwise?)
    The delete operation itself is done in parallel using rowid ranges (e.g. each PQ slave process does a distinct physical "piece" of the table).
    Parallel DML should typically speed up the process. Why? Because I/O itself has latency. The process needs to wait (idle CPU time) for the I/O operation to complete before continuing.
    So let's assume a process can only do 100 deletes per second, while the actual I/O channel is capable of 1000 I/Os per second. Due to that inherent latency, the "max delete speed limit" for a single process is 100 I/Os per second. The full capacity of the I/O channel is thus not used (and cannot be used by a single process).
    Parallel query enables more processes to do I/O in order to utilise this "max speed limit".
    Why would you see a degradation in performance? It could be due to overutilising the I/O channels (attempting to go faster than the speed limit, so to say).
    It could be due to some other contention in Oracle or even the O/S. You will need to investigate the wait states and events of the PQ processes to try and determine the probable cause.
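    One detail worth checking, since it matches the symptom described: parallel DML is disabled by default at the session level, so unless it is explicitly enabled, only the scan runs in parallel and the delete itself stays serial. A rough JDBC sketch (connection details, table and predicate are made up):
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class ParallelDeleteSketch {
        public static void main(String[] args) throws Exception {
            // Hypothetical connection details.
            try (Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/orcl", "scott", "tiger");
                 Statement st = con.createStatement()) {
                con.setAutoCommit(false);
                // Without this, the PARALLEL hint parallelizes only the query (scan) part.
                st.execute("ALTER SESSION ENABLE PARALLEL DML");
                st.executeUpdate("DELETE /*+ PARALLEL(t) */ FROM big_table t"
                        + " WHERE t.created_on < DATE '2005-01-01'");
                con.commit();
            }
        }
    }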

  • Missed and duplicate events with Continuous Query Cache

    We have seen missed events and duplicate events when we register to receive events (using a Continuous Query Cache) on an entry in the cache while the entry is being updated.
    Use case:
    Start a Node
    Start a Proxy
    Start Extend Client
    Implementation of the Extend client (pseudocode):
    Create cache
    Add entry to cache
    Define Thread 1 {
        for each (1 to 30)
            run an update EntryProcessor on the cache entry; the EntryProcessor increments the entry value by 1
    }
    Define Thread 2 {
        wait until the cache entry has been updated 10 times
        create MapListener {
            on entry insert event { print event; initial value = new value }
            on entry update event { print event; update value += 1 }
        }
        initiate ContinuousQueryCache(cache, AlwaysFilter, MapListener)
    }
    Start Thread 1
    Start Thread 2
    Wait until Thread 1 and Thread 2 have terminated
    Expected result = the value of the entry read from the cache
    Actual result = initial value + update value
    Results we have seen in two tests:
    Test 1 (expected result > actual result: missing events):
    Entry Inserted Event: MapEvent{ContinuousQueryCache inserted: key=1, value=UpdateObject [intNumber=1, longNumber=13]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=13], new value=UpdateObject [intNumber=1, longNumber=15]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=15], new value=UpdateObject [intNumber=1, longNumber=16]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=16], new value=UpdateObject [intNumber=1, longNumber=17]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=17], new value=UpdateObject [intNumber=1, longNumber=18]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=18], new value=UpdateObject [intNumber=1, longNumber=19]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=19], new value=UpdateObject [intNumber=1, longNumber=20]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=20], new value=UpdateObject [intNumber=1, longNumber=21]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=21], new value=UpdateObject [intNumber=1, longNumber=22]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=22], new value=UpdateObject [intNumber=1, longNumber=23]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=23], new value=UpdateObject [intNumber=1, longNumber=24]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=24], new value=UpdateObject [intNumber=1, longNumber=25]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=25], new value=UpdateObject [intNumber=1, longNumber=26]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=26], new value=UpdateObject [intNumber=1, longNumber=27]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=27], new value=UpdateObject [intNumber=1, longNumber=28]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=28], new value=UpdateObject [intNumber=1, longNumber=29]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=29], new value=UpdateObject [intNumber=1, longNumber=30]}
    Expected Result: 30
    Actual Result: 29
    Issue: the event for the 14th update was not sent.
    Test 2 (expected result < actual result: duplicate events):
    Entry Inserted Event: MapEvent{ContinuousQueryCache inserted: key=1, value=UpdateObject [intNumber=1, longNumber=13]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=13], new value=UpdateObject [intNumber=1, longNumber=13]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=13], new value=UpdateObject [intNumber=1, longNumber=14]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=14], new value=UpdateObject [intNumber=1, longNumber=15]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=15], new value=UpdateObject [intNumber=1, longNumber=16]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=16], new value=UpdateObject [intNumber=1, longNumber=17]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=17], new value=UpdateObject [intNumber=1, longNumber=18]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=18], new value=UpdateObject [intNumber=1, longNumber=19]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=19], new value=UpdateObject [intNumber=1, longNumber=20]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=20], new value=UpdateObject [intNumber=1, longNumber=21]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=21], new value=UpdateObject [intNumber=1, longNumber=22]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=22], new value=UpdateObject [intNumber=1, longNumber=23]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=23], new value=UpdateObject [intNumber=1, longNumber=24]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=24], new value=UpdateObject [intNumber=1, longNumber=25]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=25], new value=UpdateObject [intNumber=1, longNumber=26]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=26], new value=UpdateObject [intNumber=1, longNumber=27]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=27], new value=UpdateObject [intNumber=1, longNumber=28]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=28], new value=UpdateObject [intNumber=1, longNumber=29]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=29], new value=UpdateObject [intNumber=1, longNumber=30]}
    Expected Result: 30
    Actual Result: 31
    Issue: the event for the 13th update was sent as both an insert and an update event.
    reg
    Dasun.

    Hi Paul,
    I tested with 3.7.1.4 and 3.7.1.5. In both versions I can see the issue.
    reg
    Dasun.

  • JPA TopLink query cache

    Any idea what happens when performing queries with a query hint but no L2 cache enabled? Will the objects be cached / lazy-loaded / enriched on access?
    By the way, what is the default L2 cache setting: is it enabled or disabled? If enabled, where is it stored, in memory?
    Thanks
    Newbie

    A shared (L2) cache is enabled by default in TopLink/EclipseLink.
    The cache is in memory.
    Using a query cache without the shared cache makes little sense; you should use both, or neither.
    If you do, the query cache will be isolated to the session, the same as the object cache, so you will only get query cache hits for the duration of the persistence context/transaction, i.e. it will be an L1 query cache.
    To configure the cache, see:
    http://wiki.eclipse.org/EclipseLink/Examples/JPA/Caching
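    As an illustration of what the linked page describes, EclipseLink's query results cache is typically enabled per query via hints, roughly like this (the entity and JPQL are placeholders; the hint constants are EclipseLink's):
    import javax.persistence.EntityManager;
    import javax.persistence.TypedQuery;
    import org.eclipse.persistence.config.HintValues;
    import org.eclipse.persistence.config.QueryHints;

    class QueryCacheHintSketch {
        // MyEntity and the JPQL are placeholders for a mapped entity of your own.
        static MyEntity findByName(EntityManager em, String name) {
            TypedQuery<MyEntity> q = em.createQuery(
                    "SELECT e FROM MyEntity e WHERE e.name = :name", MyEntity.class);
            q.setParameter("name", name);
            // Caches this query's result set; it sits on top of the shared (L2) object cache.
            q.setHint(QueryHints.QUERY_RESULTS_CACHE, HintValues.TRUE);
            return q.getSingleResult();
        }
    }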
