UUP: um:getProperty caching results?

I've set up a UUP to access some data in an external database,
and I'm getting some interesting effects when the external data
changes. Am I missing something obvious here?
When the UUP is first accessed, all of the information is
returned correctly by the
<um:getProperty propertyName="<myProp>"/> tag. If that data
then changes in the external data store, the tag still returns
the original data.
I assumed this was due to the profile not reloading the
information, but it does in fact reload, since I can retrieve the
current value using either the explicit getter method or the
getProperty method on the profile.
Just to make it a bit clearer, if I do:
<um:getProfile profileType="myType" profileKey="<user>" profileId="myProfile"
scope="request"/>
<% myProfileType mp = (myProfileType) myProfile; %>
<um:getProperty propertyName="<property>"/>
<%= mp.get<Property>() %>
<%= mp.getProperty(null, "<property>", null, null) %>
Then I get cached data displayed on the first line, and current
data on the other two, until I restart the server.
I'd assumed the <um:getProperty> tag was just calling the
getProperty method on the bean rather than caching anything
itself.
Any suggestions?
David

Hi David,
<um:getProperty> gets the property value from the cache, not directly from a
ConfigurableEntity EJB. If you have first called <um:getProfile> with session
scope, then it uses the CachedProfileBean that the <um:getProfile> tag put into
the session. If you don't call <um:getProfile> with session scope, then it uses
the AnonymousProfileBean. Note that you are calling the <um:getProfile> tag with
request scope, not session scope.
You could experiment with calling <um:getProfile> with session scope and inspect
the session to see the cached profile show up.
Also, I see that you are testing your application using
ConfigurableEntity.getProperty() directly (with your mp.getProperty() call). In
a real application you will want to use CachedProfileBean for speed. Search this
newsgroup and check out the javadoc for hints about using the CachedProfileBean.
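To make the symptom concrete, here is a toy sketch (with invented names; this is not the WLS CachedProfileBean API) of a profile cache that loads each property once, alongside a getter that always reads through to the backing store:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Toy illustration of the behavior in this thread: a cache-backed read
// keeps serving the first value it loaded, while a direct getter always
// hits the backing store (e.g. the external database).
public class ProfileCacheSketch {
    private final Map<String, String> cache = new HashMap<>();
    private final Function<String, String> backingStore;

    public ProfileCacheSketch(Function<String, String> backingStore) {
        this.backingStore = backingStore;
    }

    // What a caching tag effectively does: first access populates the
    // cache; later accesses return the cached value even if the store
    // has changed in the meantime.
    public String getCached(String name) {
        return cache.computeIfAbsent(name, backingStore);
    }

    // What the explicit getter does: always reads through to the store.
    public String getDirect(String name) {
        return backingStore.apply(name);
    }
}
```

After the store changes, the cached path keeps returning the old value while the direct path sees the new one, which matches the behaviour David describes.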
Here are some links that may be useful:
<um:getProfile> docs for WLCS 3.5 at
http://edocs.bea.com/wlcs/docs35/p13ndev/jsptags.htm#1058056
<um:getProperty> docs at
http://edocs.bea.com/wlcs/docs35/p13ndev/jsptags.htm#1058056
Anonymous profile documentation at
http://edocs.bea.com/wlcs/docs35/p13ndev/users.htm#1092004
CachedProfileBean javadoc
http://edocs.bea.com/wlcs/docs35/javadoc/wlps/com/beasys/commerce/user/jsp/beans/CachedProfileBean.html
David Marston wrote:
[snip]
--
Ture Hoefner
BEA Systems, Inc.
2590 Pearl St.
Suite 110
Boulder, CO 80302
www.bea.com

Similar Messages

  • Ibots are not retrieving cached results.

    Hello,
    I have scheduled iBots on the reports available on a dashboard page and added a list of users in the recipients tab. I want the results to be cached for this list of users.
    I get an email with the results of the dashboard report whenever the iBots finish running. But when I log in as one of the users from the recipients list, the report seems to run again without retrieving the cached results.
    Can anyone help me understand why the results are not being cached?
    Thanks,
    sK.

    hi SK,
    check this
    http://www.artofbi.com/index.php/2010/03/obiee-ibots-obi-caching-strategy-with-seeding-cache/
    thanks,
    Saichand.v

  • WLS9.2 caching results of CallableStatement?

    Hello All,
    I've been flummoxed by a problem in which the results of a CallableStatement seem to be cached by our WebLogic 9.2 server. We have a connection pooled DataSource talking to Oracle8i (8.1.7.4.0), configured with 10 statements in a cache and the LRU algorithm, also 1 connection initially and a maximum of 15. We're using Oracle's ojdbc14.jar implementation and orai18n.jar.
    We're mostly executing CallableStatements for packaged procedures/functions that in turn call other procedures or run selects. What I'm getting at is, we're retrieving data rather than updating through our CallableStatement. The retrieved data we extract from the returned Oracle ARRAY type using a homegrown map to the "shape" of each Struct underlying the ARRAY collection. In this problem, two CallableStatements are run and return data to our Java app, and at the start all is well. Then the underlying data is changed via procedures in an Oracle Forms app on the database, not our Java app. Our Java app should display the changed data. The first CallableStatement runs again and an Eclipse remote debug shows the updated data; however, the second CallableStatement continues to return the older data. We have Oracle test harnesses that call the same procedures as our CallableStatements; both return the new data. If we leave it for some time less than 45 minutes, the new data is then returned by both statements. Similarly, if we set the statement cache size to 0, the new data is returned both times. We're closing the DB resources, and in the right order: ResultSet, Statement and Connection.
    Has anyone come across this issue before? (From my searches it appears not). Secondly, is there a way of debugging into the WLS DataSource mechanism? I can go down as far as getting the ARRAY and no further, but if there's a switch/command line arg that I could use in the startWeblogic script that'd be great. I remember reading about a WebLogic attributeSpy, (or maybe spyAttribute?) and if I had to try the WLS driver for that I'd give it a go, but if Oracle have something similar that'd be fantastic.
    FWIW, changing the data via the oracle forms app changes the sysdate on the oracle database too.
    Variables from the debug that might be relevant:
    DataSource retrieved from jndi context of type WLEventContextImpl
    driversettings
    weblogic.jdbc.rmi.internal.RmiDriverSettings{verbose=false chunkSize=256 rowCacheSize=0}
    driverProps entrySet [EmulateTwoPhaseCommit=false, connectionPoolID=xxxxx, jdbcTxDataSource=true, LoggingLastResource=false, dataSourceName=xxxxx]
    Connection is PoolConnection_oracle_jdbc_driver_T4CConnection
    weblogic.jdbc.wrapper.PoolConnection_oracle_jdbc_driver_T4CConnection@5f1
    statement     is CallableStatement_oracle_jdbc_driver_T4CCallableStatement (id=nnnnn)     
    weblogic.jdbc.wrapper.CallableStatement_oracle_jdbc_driver_T4CCallableStatement@5f2
    There is a StatementHolder too:
    weblogic.jdbc.wrapper.CallableStatement_oracle_jdbc_driver_T4CCallableStatement@5f2=weblogic.jdbc.common.internal.StatementHolder@21cae26
    In the "stmts" table in "conn" in "stmt", there are the following two variables:
    jstmt     T4CCallableStatement (id=nnnnn) oracle.jdbc.driver.T4CCallableStatement@21cad44
    key     StatementCacheKey (id=nnnn)     { call my_pack_name.my_proc_name(?,?)}:true:-1:-1
    Thanks for any help that anyone might be able to shed on this!
    Best Regards,
    ConorD

    Hi Joe,
    Thanks for your reply; I'm delighted to see that you're still here helping WLS users under the Oracle banner :-)
    To answer your question, yes, we do get good behaviour when we set the statement cache size to 0. I ran a test where I set the statement cache size to 0 and the initial number of connections to 0, then:
    1) Logged in through our Java/WLS app to a few test scenarios and saw the data
    2) Ran the Oracle app on that DB outside of WLS, which moved forward the state of the app, including sysdate
    3) Logged in again through our Java/WLS app to the test scenarios and saw the new data being returned from both CallableStatements
    I don't think that the initial number of connections had any effect on this, since WLS was running all the time and there was no retargeting of the DataSource, so IMHO it must have been the statement cache size.
    Where we hadn't set the statement cache size to 0, for step 3) above one of the two CallableStatements continued to return the old data (as if it had been told by the DB that the result hadn't changed and might as well return a cached result - if WLS does cache ResultSets?).
    Lastly, there is one case where we do see good behaviour even with a statement cache size of 10:
    Between steps 2) and 3) above, if I untarget & activate, then retarget & activate the DataSource (to the same server/db that had just been untargeted), we see new data back from both statements on running step 3), even with the statement cache size set to 10. My guess is that the untarget frees up any object in the statement cache for collection and removes the remote DB session stubs that may be caching on the DB side.
    Thanks again and Best Regards,
    ConorD
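    The reuse mechanics discussed in this thread can be sketched with a toy LRU cache keyed on SQL text, the way the StatementCacheKey in the debug output suggests. This is an illustration of the general technique, not WebLogic's implementation: a cached handle is handed back until it is evicted, so any state the driver or database session has tied to that handle survives across calls, and a cache size of 0 forces a re-prepare every time.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Function;

// Toy LRU statement cache keyed on SQL text. Size 0 disables caching,
// so every get() re-prepares, mirroring the test described above.
public class StatementCacheSketch<V> {
    private final int maxSize;
    private final Map<String, V> lru;
    private int prepares = 0;

    public StatementCacheSketch(int maxSize) {
        this.maxSize = maxSize;
        this.lru = new LinkedHashMap<String, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, V> eldest) {
                // Evict least-recently-used entries beyond the cap.
                return size() > StatementCacheSketch.this.maxSize;
            }
        };
    }

    public V get(String sql, Function<String, V> prepare) {
        if (maxSize == 0) {           // caching disabled: always re-prepare
            prepares++;
            return prepare.apply(sql);
        }
        return lru.computeIfAbsent(sql, s -> {
            prepares++;               // only pay the prepare cost on a miss
            return prepare.apply(s);
        });
    }

    public int prepareCount() { return prepares; }
}
```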

  • Method Iterator Cache Results Problem

    Hi all,
    I am executing a method and fetching some results via a method iterator.
    The method iterator's Cache Results property is true, but I want to clear or remove all data from the cache after completing my job.
    I tried setting the methodIteretor.result property to null in the bindings, but this bindings property is not settable.
    Thanks to all.
    gokmeni

    Are you attempting to set the property with the Flat editor or within the XML file itself?
    --Ric                                                                                                                                                                                                       

  • Why are my cached results not highlighted anymore?

    Why are my cached results not highlighted anymore?

    When I do a word search, the words I typed and searched on Google are normally highlighted, each in a different color, at the top of the cached page.
    The same color is used for each word wherever it appears throughout the page when I open it as cached.
    Now the words don't get highlighted any more, and there's no color either.
    Here below info from About.com about what I am asking about:
    http://google.about.com/od/searchingtheweb/qt/cache_syntax.htm
    ''''''Highlight Keywords With Google Cache Search''''''
    Find Specific Information Faster With Google's Cache
    By Marziah Karch, About.com Guide
    Is it hard to find a specific piece of information on a large Web page? You can simplify this by using Google's cached page to highlight your search term.
    As Google indexes Web pages, it retains a snapshot of the page contents, known as a cached page. When a cached page is available, you'll see a Cached link at the bottom of the search result.
    Clicking on the Cached link will show you the page as it was last indexed on Google, but with your search keywords highlighted. This is extremely useful if you want to find a specific piece of information without having to scan the entire page.
    Keep in mind that this shows the last time the page was indexed, so sometimes images will not show up and the information will be out of date. For most quick searches, that doesn't matter. You can always go back to the current version of the page and double check to see if the information has changed.

  • RFC Look up Caching Results

    Hello Experts,
    I am facing an issue in PI 7.1 while using the RFC lookup functionality. I use this in a high-usage interface, and now I see that it actually caches the results somewhere and does not actually make the RFC call to the backend system.
    I referred to some threads but could not get answers. Has anyone else faced this issue? If yes, could you let me know the solution to get around it?
    Thanks,
    Karthik

    Thanks Arpil. We are calling the RFC lookup once per mapping. We get the results into a global variable and process them thereafter. But the interface that has this mapping gets called very frequently, around 30 calls per second.
    We are using the RFC lookup wizard in PI 7.1 and hence are not building any new Java code, but the results are cached. If the data changes in the backend system, the RFC still fetches old data, and only a forced cache refresh causes it to pick up the new results.

  • Selecting fewer columns from cached results

    If I'm reading the docs right, I should be able to create a request with X number of columns to populate the cache, so that a subsequent request that has a direct subset of those columns would hit the cache. That's not what I'm seeing.
    I have four fact columns in the RPD. The first three, Sales, Cost, and Units, are coming straight from the Physical Layer. The fourth, Profit, is a subtraction of Sales minus Cost.
    Here's what I'm seeing:
    1a. Log In to Answers
    1b. Create Query: Year, Sales, Cost, Profit - this populates the cache.
    1c. Log Out of Answers
    1d. Log back in to Answers
    1e. Create Query: Year, Sales, Cost - this query does NOT hit that cache entry.
    2a. Close all cursors (under Administration...Manage Sessions)
    2b. Log Out
    2c. Purge Cache
    3a. Log In to Answers
    3b. Create Query: Year, Sales, Cost, Units - this populates the cache.
    3c. Log Out of Answers
    3d. Log back in to Answers
    3e. Create Query: Year, Sales, Cost - this query DOES hit the cache entry.
    The only difference between the two sequences is that one references a calculated column (Profit) and the other does not.
    Incidentally, I've tested this with Profit defined first as a subtraction of the logical columns, and then again as a subtraction of the physical columns, with the same behavior observed both ways.
    Can someone please confirm this behavior and suggest a reason why it should occur?

    Turribeach, here are the Session Logs for each of the four requests. I've interspersed the steps between double-rows of = signs.
    ===================================================
    ===================================================
    STEP 1A: Log In
    STEP 1B: Create request - Year, Sales, Cost, Profit
    ===================================================
    ===================================================
    +++Analyst:320000:320001:----2009/01/15 09:13:51
    -------------------- Logical Request (before navigation):
    RqList
    Times.Year as c1 GB,
    Sales:[DAggr(~Base Facts.Sales by [ Times.Year, Times.Year End Date] )] as c2 GB,
    Cost:[DAggr(~Base Facts.Cost by [ Times.Year, Times.Year End Date] )] as c3 GB,
    Sales:[DAggr(~Base Facts.Sales by [ Times.Year, Times.Year End Date] )] - Cost:[DAggr(~Base Facts.Cost by [ Times.Year, Times.Year End Date] )] as c4 GB,
    Times.Year End Date as c5 GB
    OrderBy: c5 asc
    +++Analyst:320000:320001:----2009/01/15 09:13:51
    -------------------- Sending query to database named BIEE_TRAIN (id: <<1690>>):
    WITH
    SAWITH0 AS (select sum(round(T68.COST , 2)) as c1,
    sum(round(T68.SALES , 2)) as c2,
    T61.YEAR as c3,
    T61.YEAR_END_DATE as c4
    from
    GLOBAL_ADMIN.BI_D_TIME T61,
    GLOBAL_ADMIN.BI_F_SALES T68
    where ( T61.MONTH = T68.MONTH )
    group by T61.YEAR, T61.YEAR_END_DATE)
    select distinct SAWITH0.c3 as c1,
    SAWITH0.c2 as c2,
    SAWITH0.c1 as c3,
    SAWITH0.c2 - SAWITH0.c1 as c4,
    SAWITH0.c4 as c5
    from
    SAWITH0
    order by c5
    +++Analyst:320000:320001:----2009/01/15 09:13:52
    -------------------- Query Result Cache: [59124] The query for user 'Analyst' was inserted into the query result cache. The filename is 'C:\OracleBIData\cache\NQS_TRAINING_733424_33231_00000000.TBL'.
    ===================================================
    ===================================================
    STEP 1C: LOG OUT
    STEP 1D: LOG IN
    STEP 1E: Create request - Year, Sales, Cost
    ===================================================
    ===================================================
    +++Analyst:330000:330001:----2009/01/15 09:17:41
    -------------------- Logical Request (before navigation):
    RqList
    Times.Year as c1 GB,
    Sales:[DAggr(~Base Facts.Sales by [ Times.Year, Times.Year End Date] )] as c2 GB,
    Cost:[DAggr(~Base Facts.Cost by [ Times.Year, Times.Year End Date] )] as c3 GB,
    Times.Year End Date as c4 GB
    OrderBy: c4 asc
    +++Analyst:330000:330001:----2009/01/15 09:17:41
    -------------------- Sending query to database named BIEE_TRAIN (id: <<2062>>):
    select T61.YEAR as c1,
    sum(round(T68.SALES , 2)) as c2,
    sum(round(T68.COST , 2)) as c3,
    T61.YEAR_END_DATE as c4
    from
    GLOBAL_ADMIN.BI_D_TIME T61,
    GLOBAL_ADMIN.BI_F_SALES T68
    where ( T61.MONTH = T68.MONTH )
    group by T61.YEAR, T61.YEAR_END_DATE
    order by c4
    +++Analyst:330000:330001:----2009/01/15 09:17:41
    -------------------- Query Result Cache: [59124] The query for user 'Analyst' was inserted into the query result cache. The filename is 'C:\OracleBIData\cache\NQS_TRAINING_733424_33461_00000001.TBL'.
    *** Note: At this point, there are indeed TWO entries in the cache. ***
    ===================================================
    ===================================================
    STEP 2A: Close all cursors
    STEP 2B: Log Out
    STEP 2C: Purge the cache
    STEP 3B: Create request - Year, Sales, Cost, Units
    ===================================================
    ===================================================
    +++Analyst:350000:350001:----2009/01/15 09:23:18
    -------------------- Logical Request (before navigation):
    RqList
    Times.Year as c1 GB,
    Sales:[DAggr(~Base Facts.Sales by [ Times.Year, Times.Year End Date] )] as c2 GB,
    Cost:[DAggr(~Base Facts.Cost by [ Times.Year, Times.Year End Date] )] as c3 GB,
    Units:[DAggr(~Base Facts.Units by [ Times.Year, Times.Year End Date] )] as c4 GB,
    Times.Year End Date as c5 GB
    OrderBy: c5 asc
    +++Analyst:350000:350001:----2009/01/15 09:23:18
    -------------------- Sending query to database named BIEE_TRAIN (id: <<2399>>):
    select T61.YEAR as c1,
    sum(round(T68.SALES , 2)) as c2,
    sum(round(T68.COST , 2)) as c3,
    sum(round(T68.UNITS , 0)) as c4,
    T61.YEAR_END_DATE as c5
    from
    GLOBAL_ADMIN.BI_D_TIME T61,
    GLOBAL_ADMIN.BI_F_SALES T68
    where ( T61.MONTH = T68.MONTH )
    group by T61.YEAR, T61.YEAR_END_DATE
    order by c5
    +++Analyst:350000:350001:----2009/01/15 09:23:19
    -------------------- Query Result Cache: [59124] The query for user 'Analyst' was inserted into the query result cache. The filename is 'C:\OracleBIData\cache\NQS_TRAINING_733424_33798_00000002.TBL'.
    ===================================================
    ===================================================
    STEP 3C: LOG OUT
    STEP 3D: LOG IN
    STEP 3E: Create request - Year, Sales, Cost
    ===================================================
    ===================================================
    +++Analyst:360000:360001:----2009/01/15 09:24:36
    -------------------- Logical Request (before navigation):
    RqList
    Times.Year as c1 GB,
    Sales:[DAggr(~Base Facts.Sales by [ Times.Year, Times.Year End Date] )] as c2 GB,
    Cost:[DAggr(~Base Facts.Cost by [ Times.Year, Times.Year End Date] )] as c3 GB,
    Times.Year End Date as c4 GB
    OrderBy: c4 asc
    +++Analyst:360000:360001:----2009/01/15 09:24:36
    -------------------- Cache Hit on query:
    Matching Query:     SET VARIABLE QUERY_SRC_CD='Report';SELECT "Time Dimension"."Year" saw_0, "Base Measures".Sales saw_1, "Base Measures".Cost saw_2, "Base Measures".Units saw_3 FROM Global ORDER BY saw_0
    Created by:     Analyst
    +++Analyst:360000:360001:----2009/01/15 09:24:36
    -------------------- Query Status: Successful Completion
    +++Analyst:360000:360001:----2009/01/15 09:24:36
    *** Note: At this point, there is ONE entry in the cache. ***
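    The subset rule being tested above can be sketched as a simple containment check. This is a deliberate simplification (the BI Server actually matches on the logical request, which is why the derived Profit column defeats the hit even though its name is a "column" of the cached request), and the class is invented for the example:

```java
import java.util.List;
import java.util.Set;

// Sketch of a subset cache-hit rule: a new request can be answered from a
// cached entry when every requested column is among the cached columns.
// A derived column is cached as an expression, so matching by plain
// physical column names (as here) can diverge from the real engine.
public class SubsetCacheMatch {
    public static boolean canHitCache(Set<String> cachedCols,
                                      List<String> requestedCols) {
        return cachedCols.containsAll(requestedCols);
    }
}
```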

  • Caching Result Set ?

    folks
    I have a search page which returns a result set and the results are put in the session to be able to access when user clicks on the page numbers(pagination) in the results pane.
    Is there any way we can store or cache this and access instead of fetching it off the session.

    You can store the data as a multi-dimensional array in JavaScript on the rendered JSP page. It exists on the client side (browser), and you can use an onClick event on the page's button to call up various parts of the array and display them to the user. That way, your user doesn't have to submit the page back to the servlet to paginate to the next page; the data shows up immediately instead. You'll have to read up on JavaScript to learn how to do this. (Also, I assume you are storing the resultant data in some type of array and not the raw ResultSet.)
    However, if so much data is returned that the user needs pagination, I suggest you add filter text fields to let him limit what data is returned so pagination is not needed. Pagination implies there is too much data for the user to effectively use at once (no one likes scrolling down a list of 200 items). For instance, instead of displaying all names in a list, add a filter so the user can search for all last names that begin with A, or B, etc. through Z. Then, when displayed to the user, show the list sorted by your filter criteria (lastName). If there is still too much data in the list, I suggest putting up a vertical scrollbar rather than pagination.
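    The server-side alternative the question asks about can be sketched as fetching once and slicing per page (a minimal sketch; PageCache and its methods are invented for the example, and where the instance lives - session or an application-level cache - is a separate design choice):

```java
import java.util.List;

// Minimal pagination over an already-fetched result list: run the query
// once, keep the rows, and serve each page as a slice instead of
// re-running the query on every page click.
public class PageCache {
    private final List<String> rows;
    private final int pageSize;

    public PageCache(List<String> rows, int pageSize) {
        this.rows = rows;
        this.pageSize = pageSize;
    }

    // 0-based page number; out-of-range pages yield an empty list.
    public List<String> page(int pageNumber) {
        int from = pageNumber * pageSize;
        if (from >= rows.size()) return List.of();
        int to = Math.min(from + pageSize, rows.size());
        return rows.subList(from, to);
    }

    public int pageCount() {
        return (rows.size() + pageSize - 1) / pageSize;
    }
}
```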

  • Cached results returned from LOV even though query criteria has changed

    Hi,
    JDeveloper 10.1.3.4
    JHeadstart 10.1.3.3.81
    We have a screen that is entered in create-row mode. Just prior to entering the screen, the user has to select a study for which the record is to be created. On the screen we have an LOV and a few read-only fields that are populated from the LOV selection. The LOV includes a "study id" as one of its query bind parameters. The data returned by the LOV is very study-specific, in that a code used in one study will be the same as in another but have a very different meaning. We are finding that the first time a code is used, the LOV dialog is shown; however, it doesn't get shown again (it is just automatically populated) for any further records created, even though the user has changed study. It appears as though the code entered by the user is checked against the cache for a match, ignoring the fact that the query bind parameter value has changed and should therefore either re-display the LOV dialog or retrieve the results from the correct study.
    The requery condition on the LOV seems irrelevant here (as I changed it to Always and nothing different happened). And of course there isn't a requery condition on the main view object as it is in create row mode.
    Can you point me in the direction of how to clear this cache when a new study is selected? I am unsure how to get at this cache.
    Thanks.

    Barry,
    Hard to help you as it is very specific to your app.
    Can you run in debug mode and set a breakpoint in LovItemBean.validateWithLOV?
    This should help you find out why the LOV is not shown the second time.
    Steven Davelaar,
    JHeadstart team.

  • POWL: Refresh the cache results of POWL during page load.

    Hi Friends,
    I am using a feeder class for my POWL. Whenever the user opens the POWL, he sees cached data/results and is expected to do a manual refresh to see current/new data. I want to eliminate this by refreshing the cache during the page load itself and showing the user current data. I already know about the exporting parameter in the handle_action method which is used to do a refresh, but it does not work in this case.
    Pls help.

    Hello Saud,
    Which release are you on? In the new release you can do a refresh via personalization, where the options are available.
    There are other options to refresh as well:
    1) Passing the URL parameter refreshA=X will refresh all queries when loading
    2) Passing refreshq=X will refresh the current query
    best regards,
    Rohit

  • Ramifications of caching results of InitialContext(().lookup?

    One of the things we discovered during our early efforts to port a 5.1 app to
    7.0 was that in 7.0 the JNDI lookups were simply taking FOREVER. It was
    really horrible.
    So, the question is, what are the ramifications of caching the results of
    this:
    Context ctx = new InitialContext();
    SessionBeanHome = (SessionBeanHome) ctx.lookup("SessionBean")
    We're guessing that this will fail horribly in a clustered environment, but
    what about a standalone environment?
    Thanx!
    Will Hartung
    ([email protected])

    Can you provide some statistics on how much time it used to take and how much
    it is taking now, etc.?
    In 7.0, we know that the first InitialContext() call will take some time, as
    it needs to initialize the kernel and generate the hot-codegened initial context
    stub. But once you have made this call, the next InitialContext call should be
    pretty fast.
    If you want to avoid the hot-codegen cost of the stub, use this workaround.
    From the browser, try
    http://server:port/bea_wls_internal/classes/weblogic/jndi/internal/RootNamingNode_WLStub.class
    Save this class in your client package. This may give some performance
    benefit.
    This requires that your classpath servlet be turned on. See the docs for
    more info on this.
    But I don't recommend this. It may become an issue later and may cause
    version incompatibilities if you upgrade the server and forget to re-pack the
    client, etc. I am not sure, though.
    Hope this helps.
    Hope this helps.
    Cheers,
    ..maruthi
    "Will Hartung" <[email protected]> wrote in message
    news:3d6a8d58$[email protected]..
    One of the thing we discovered during our early efforts to port a 5.1 appto
    7.0 was that in 7.0 the JNDI lookups were simply taking FOREVER. It was
    really horrible.
    So, the question is, what are the ramifications of caching the results of
    this:
    Context ctx = new InitialContext();
    SessionBeanHome = (SessionBeanHome) ctx.lookup("SessionBean")
    We're guessing that this will fail horrible in a clustered environment,but
    what about a stand alone environment?
    Thanx!
    Will Hartung
    ([email protected])
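    The lookup-caching idea in this thread can be sketched generically (LookupCache is invented for the example; the Function stands in for ctx.lookup). As the thread notes, in a cluster a cached stub can go stale if the server it points at goes away, so real code should be prepared to evict and re-look-up on failure:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Memoize expensive lookups so the cost is paid once per name.
public class LookupCache<T> {
    private final Map<String, T> cache = new ConcurrentHashMap<>();
    private final Function<String, T> lookup;

    public LookupCache(Function<String, T> lookup) {
        this.lookup = lookup;
    }

    // First call for a name performs the lookup; later calls hit the cache.
    public T get(String name) {
        return cache.computeIfAbsent(name, lookup);
    }

    // Call when a cached reference turns out to be dead (e.g. a failed
    // remote call), so the next get() re-resolves it.
    public void evict(String name) {
        cache.remove(name);
    }
}
```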

  • How does BerkeleyDB know when cached results from stat function are valid ?

    Something curious: I have a berkeley database about 2G in size and a program that (1) opens the database, not DB_RDONLY; (2) calls stat(), without DB_FAST_STAT; (3) calls sync(); (4) calls close().
    Running this program the first time takes a good amount of time in the stat() function - 20 minutes at least - and thrashes the disk all that time. So it's trawling the database to get record counts etc.
    Running the program again takes only a few seconds so clearly the database is caching those stats and knows they're up to date - makes sense.
    What's odd though is why the stats weren't known to be up to date on the first run. The database was actually copied from another box where the last thing to happen was a run of the same program. So it should have had up-to-date stats cached in it.
    It's as if the cached values are somehow invalidated by moving the database to another machine. Why would that happen? Where are the cached stats held, and how does BerkeleyDB decide when they're up to date?

    I still cannot solve this problem, but I have some more observations:
    1) yes it is a JApplet, and yes, the data streaming is performed in a separate thread.
    2) Wireshark sniffing shows that data sent out by the PHP datasource on the server is sent immediately and is not buffered (I am using ob_start(), ob_flush() and flush() calls in the PHP script).
    3) On Windows Vista, using Internet Explorer or Firefox, there is a constant 30 second delay before the Applet returns from this line: InputStream is = url.openStream();
    4) After these 30 seconds, data appears in the Applet, but it can also be seen from Java console debug prints that the data seems to be buffered. The newest data shown in the Applet is not the newest data sent to the client by the PHP datasource script.
    5) On a SUSE Linux client, the Applet works as it should, there is no delay time in showing the data.
    It appears as if there is buffering of data on Windows which I do not want and which does not occur on Linux. I need to find out how to get the URL openStream() call to return immediately, allowing the initial data to be read and shown in the Applet. And I need to remove the buffering of data so that the data can be shown in the Applet when it arrives.
    Can anyone help? Why does this work on Linux but not on Windows, and what can I do, at best within the Java code, to get the Applet to work on Windows as it does on Linux?
    Thanks!
    Steve, Denmark

  • Oracle 11g result cache and TimesTen

    Oracle 11g has introduced the concept of a result cache, whereby the result sets of frequently executed queries are stored in cache and used later when other users request the same query. This is different from caching the data blocks and executing the query over and over again.
    Tom Kyte calls this just-in-time materialized view whereby the results are dynamically evaluated without DBA intervention
    http://www.oracle.com/technology/oramag/oracle/07-sep/o57asktom.html
    My point is that in view of utilities like result_cache and the possible use of Solid State Disks in Oracle to speed up physical I/O etc., is there any need for a product like TimesTen? It sounds to me that it may just add another layer of complexity.

    The Oracle result cache is a useful tool, but it is distinctly different from TimesTen. My understanding of Oracle's result cache is that it caches result sets for seldom-changing data, like lookup tables (currency IDs/codes), reference data that does not change often (a list of counterparties), etc. It would be pointless to cache a result set where the underlying data changes frequently.
    There is also another argument for the SQL result cache: if your CPU usage is high and you have enough memory, then you can cache some of the result sets, thus saving CPU cycles.
    Considering the arguments about hard-wired RDBMSs and Solid State Disks (SSDs), we can talk about it all day, but having SSDs does not eliminate the optimizer's consideration of physical I/O. A table scan is a table scan whether the data resides on a SCSI or an SSD disk. SSDs will be faster, but we are still performing physical I/Os.
    With regard to TimesTen, the product positioning is different. TimesTen is closer to the middle tier than Oracle. It is designed to work close to the application layer, whereas Oracle has a much wider purpose. For real-time response and moderate volumes, there is no way one can substitute any hard-wired RDBMS for TimesTen. The request for a result cache has been around for some time. In areas like program trading and market data, where the underlying data changes rapidly, TimesTen will come in very handy, as the data is real-time/transient and the calculations have to be done almost in real time, with the least complication from the execution engine. I fail to see how one could deploy the result cache in this scenario: because the underlying data keeps changing, Oracle will be forced to recalculate the queries almost every time and the result cache will just be wasted.
    Hope this helps,
    Mich

  • Oracle 11g/R2 Query Result Cache - Incremental Update

    Hi,
    In Oracle 11g/R2, I created a replica of the HR.Employees table and executed the following statements (although using SUM() this way is not particularly meaningful, it serves to test the result cache):
    STEP - 1
    SELECT /*+ RESULT_CACHE */ employee_id, first_name, last_name, SUM(salary)
    FROM   HR.Employees_copy
    WHERE  department_id = 20
    GROUP BY employee_id, first_name, last_name;

    EMPLOYEE_ID  FIRST_NAME  LAST_NAME   SUM(SALARY)
            202  Pat         Fay                6000
            201  Michael     Hartstein         13000

    Elapsed: 00:00:00.01

    Execution Plan
    Plan hash value: 3837552314
    ----------------------------------------------------------------------------------------------
    | Id | Operation         | Name                       | Rows | Bytes | Cost (%CPU)| Time     |
    ----------------------------------------------------------------------------------------------
    |  0 | SELECT STATEMENT  |                            |    2 |   130 |     4 (25) | 00:00:01 |
    |  1 | RESULT CACHE      | 3acbj133x8qkq8f8m7zm0br3mu |      |       |            |          |
    |  2 | HASH GROUP BY     |                            |    2 |   130 |     4 (25) | 00:00:01 |
    |* 3 | TABLE ACCESS FULL | EMPLOYEES_COPY             |    2 |   130 |     3  (0) | 00:00:01 |
    ----------------------------------------------------------------------------------------------

    Statistics
      0 recursive calls
      0 db block gets
      0 consistent gets
      0 physical reads
      0 redo size
    690 bytes sent via SQL*Net to client
    416 bytes received via SQL*Net from client
      2 SQL*Net roundtrips to/from client
      0 sorts (memory)
      0 sorts (disk)
      2 rows processed
    STEP - 2
    INSERT INTO HR.employees_copy
    VALUES(200, 'Dummy', 'User','[email protected]',NULL, sysdate, 'MANAGER',5000, NULL,NULL,20);
    STEP - 3
    SELECT      /*+ RESULT_CACHE */ employee_id, first_name, last_name, SUM(salary)
    FROM           HR.Employees_copy
    WHERE      department_id = 20
    GROUP BY      employee_id, first_name, last_name;
    EMPLOYEE_ID  FIRST_NAME  LAST_NAME   SUM(SALARY)
            202  Pat         Fay                6000
            201  Michael     Hartstein         13000
            200  Dummy       User               5000

    Elapsed: 00:00:00.03

    Execution Plan
    Plan hash value: 3837552314
    ----------------------------------------------------------------------------------------------
    | Id | Operation         | Name                       | Rows | Bytes | Cost (%CPU)| Time     |
    ----------------------------------------------------------------------------------------------
    |  0 | SELECT STATEMENT  |                            |    3 |   195 |     4 (25) | 00:00:01 |
    |  1 | RESULT CACHE      | 3acbj133x8qkq8f8m7zm0br3mu |      |       |            |          |
    |  2 | HASH GROUP BY     |                            |    3 |   195 |     4 (25) | 00:00:01 |
    |* 3 | TABLE ACCESS FULL | EMPLOYEES_COPY             |    3 |   195 |     3  (0) | 00:00:01 |
    ----------------------------------------------------------------------------------------------

    Statistics
      0 recursive calls
      0 db block gets
      4 consistent gets
      0 physical reads
      0 redo size
    714 bytes sent via SQL*Net to client
    416 bytes received via SQL*Net from client
      2 SQL*Net roundtrips to/from client
      0 sorts (memory)
      0 sorts (disk)
      3 rows processed
    In the execution plan of STEP-3, the RESULT CACHE operation appears against Id 1, which suggests the result was retrieved directly from the result cache. Does this mean that the Oracle server incrementally updated the cached result set?
    Before STEP-2 the cache contained only 2 records; then 1 record was inserted, and after STEP-3 a total of 3 records was returned from the cache. Does this mean that the newly inserted row was read from the database and merged into the cached result of STEP-1?
    If the Oracle server did incrementally retrieve and merge the newly inserted record, what mechanism does it use to do so?
    Regards,
    Wasif
    Edited by: 965300 on Oct 15, 2012 12:25 AM

    965300 wrote:
    [quoted text of the original post snipped]
    No, the RESULT CACHE operation doesn't necessarily mean that the results were retrieved from the cache. They could be being written to it.
    Look at the number of consistent gets: it is zero in the first step (I assume you had already run the query once before), so I would conclude that the data is being read from the result cache.
    In the third step there are 4 consistent gets, so I would conclude that the data is being written to the result cache. A fourth step repeating the SQL should show zero consistent gets, and that would be the results being read from the cache.
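    One way to confirm this, rather than inferring it from consistent gets alone, is to look at the result cache views before and after each step. A sketch, assuming SELECT privilege on the V$ views; my understanding is that DML on the underlying table invalidates the dependent cache entry, so STEP-3 rebuilds the result rather than merging into it:

    ```sql
    -- After STEP-2, the entry built in STEP-1 should show status 'Invalid';
    -- after STEP-3, a fresh 'Published' entry should appear.
    SELECT id, type, status, name
    FROM   v$result_cache_objects
    WHERE  name LIKE '%EMPLOYEES_COPY%';

    -- 'Create Count Success' increments when a result is built and stored;
    -- 'Find Count' increments when a cached result is reused.
    SELECT name, value
    FROM   v$result_cache_statistics
    WHERE  name IN ('Create Count Success', 'Find Count');
    ```

    So there is no incremental merge mechanism: invalidation plus a full rebuild of the result is what produces the 3-row output in STEP-3.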

  • SQL Result Cache  vs In-Memory Database Cache

    Hi,
    can anyone help me understand the relationship and differences between the 11g new features SQL Result Cache and In-Memory Database Cache?
    Thanks

    I highly recommend you read the 11g New Features Guide. Here is a sample from it:
    1.11.2.9 Query Result Cache
    A separate shared memory pool is now used for storing and retrieving
    cached results. Query retrieval from the query result cache is faster
    than rerunning the query. Frequently executed queries will see
    performance improvements when using the query result cache.
    The new query result cache enables explicit caching of results in
    database memory. Subsequent queries using the cached results will
    experience significant performance improvements.
    See Also:
    Oracle Database Performance Tuning Guide, "Results Cache Concepts": http://download.oracle.com/docs/cd/B28359_01/server.111/b28274/memory.htm#PFGRF10121
    HTH!
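    Beyond hinted queries, 11g also lets PL/SQL functions cache their return values in the same shared pool via the RESULT_CACHE clause. A minimal sketch with a hypothetical function name (in 11gR2 table dependencies are tracked automatically, so RELIES_ON is optional):

    ```sql
    -- Cached calls with the same p_dept_id skip re-executing the body;
    -- DML on hr.employees invalidates the cached results.
    CREATE OR REPLACE FUNCTION get_dept_salary (p_dept_id NUMBER)
      RETURN NUMBER
      RESULT_CACHE
    IS
      l_total NUMBER;
    BEGIN
      SELECT SUM(salary)
      INTO   l_total
      FROM   hr.employees
      WHERE  department_id = p_dept_id;
      RETURN l_total;
    END;
    /
    ```

    This is the SQL result cache's PL/SQL counterpart; the In-Memory Database Cache (TimesTen) is a separate product that caches whole tables in a mid-tier in-memory database rather than caching query results inside the Oracle instance.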
