RC3 Query Caching problem

I have a situation with RC3 where cached queries appear not to be
getting invalidated when they should.
The data model is this:
- UserSecurity has a Collection of UserAssignments. A UserAssignment
contains a long which is the ID of another persistent instance.
Sequence of events, each in a separate transaction:
1. query UserAssignments where referencedItemId == 1234, get one result.
2. add a new UserAssignment, with its referencedItemId set to 1234, to
the assignments Collection of a persistent UserSecurity instance.
3. repeat the first query, still only one result, the original one.
If I disable caching by commenting out the
com.solarmetric.kodo.DataCacheClass line in kodo.properties, it all
works. It also worked fine with 2.4.
My kodo.properties looks like:
javax.jdo.option.RetainValues: true
javax.jdo.option.RestoreValues: true
javax.jdo.option.Optimistic: true
javax.jdo.option.NontransactionalWrite: false
javax.jdo.option.NontransactionalRead: true
javax.jdo.option.Multithreaded: true
javax.jdo.option.MsWait: 5000
javax.jdo.option.MinPool: 1
javax.jdo.option.MaxPool: 10
javax.jdo.option.IgnoreCache: false
javax.jdo.option.ConnectionUserName: @USER_NAME@
javax.jdo.option.ConnectionURL: jdbc:oracle:thin:@@DB_HOST_NAME@:1521:db1
javax.jdo.option.ConnectionPassword: password
javax.jdo.option.ConnectionDriverName: oracle.jdbc.driver.OracleDriver
javax.jdo.PersistenceManagerFactoryClass:
com.solarmetric.kodo.impl.jdbc.ee.EEPersistenceManagerFactory
com.solarmetric.kodo.impl.jdbc.WarnOnPersistentTypeFailure: true
com.solarmetric.kodo.impl.jdbc.SequenceFactoryClass:
com.solarmetric.kodo.impl.jdbc.schema.DBSequenceFactory
com.solarmetric.kodo.impl.jdbc.FlatInheritanceMapping: true
com.solarmetric.kodo.impl.jdbc.AutoReturnTimeout: 10
com.solarmetric.kodo.LicenseKey: XXXXXXXXXXXXXXX
com.solarmetric.kodo.EnableQueryExtensions: true
com.solarmetric.kodo.DefaultFetchThreshold: -1
com.solarmetric.kodo.impl.jdbc.DictionaryProperties:NameTruncationVersion=1
com.solarmetric.kodo.ee.ManagedRuntimeProperties=TransactionManagerName=java:/TransactionManager
#com.solarmetric.kodo.DataCacheClass=com.solarmetric.kodo.runtime.datacache.plugins.CacheImpl
com.solarmetric.kodo.RemoteCommitProviderClass=com.solarmetric.kodo.runtime.event.impl.SingleJVMRemoteCommitProvider
com.solarmetric.kodo.DataCacheProperties= CacheSize=5000
com.solarmetric.kodo.impl.jdbc.UseBatchedStatements=true
com.solarmetric.kodo.impl.jdbc.UsePreparedStatements=false
com.solarmetric.kodo.impl.jdbc.StatementCacheMaxSize=100
Thanks for any help you can give.
Tom

Here's some more info:
Metadata for UserSecurity:
<class name="UserSecurityImpl">
<field name="_assignments">
<collection element-type="UserAssignmentImpl"/>
</field>
</class>
Metadata for the ManagedAssignment:
<class name="ManagedAssignment"
persistence-capable-superclass="au.com.oakton.jcore.application.security.UserAssignmentImpl">
<field name="_managingAssignmentHistoryItems">
<collection element-type="ManagingAssignmentHistoryItem"/>
<extension vendor-name="kodo" key="inverse"
value="_managedAssignment"/>
</field>
</class>
<class name="UserAssignmentImpl"
persistence-capable-superclass="AssignmentImpl">
<class name="AssignmentImpl" persistence-capable-superclass="RoleSetImpl"/>
<class name="RoleSetImpl">
<field name="_roles">
<collection element-type="RoleHistoryItem"/>
</field>
</class>
Code for adding ManagedAssignment to UserSecurityImpl:
public synchronized void addAssignment(ManagedAssignment a, User user) {
    _assignments.add(a);
}
Code for querying ManagedAssignments:
public synchronized Collection getAssignmentsForItem(long itemId) {
    if (itemId == -1)
        return Collections.EMPTY_LIST;
    Long id = new Long(itemId);
    PersistenceManager pm = null;
    try {
        pm = Persistence.getManager();
        Query q = pm.newQuery(AssignmentImpl.class);
        q.setFilter("_itemId == itemId");
        q.declareParameters("Long itemId");
        return new ArrayList((Collection) q.execute(id));
    } finally {
        Persistence.close(pm);
    }
}
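To make the suspected failure mode concrete, here is a toy illustration in plain Java of a query-result cache that is not evicted on commit (nothing Kodo-specific; all names are made up for the sketch):

```java
import java.util.*;

// Toy model of a query-result cache that is NOT evicted on commit.
public class StaleQueryCacheDemo {
    static List<Long> database = new ArrayList<>();           // rows: referencedItemId values
    static Map<String, List<Long>> queryCache = new HashMap<>();

    static List<Long> query(long itemId) {
        String key = "referencedItemId == " + itemId;
        // Return cached results if present; otherwise run the "SQL" and cache it.
        return queryCache.computeIfAbsent(key, k -> {
            List<Long> rows = new ArrayList<>();
            for (long row : database) if (row == itemId) rows.add(row);
            return rows;
        });
    }

    public static void main(String[] args) {
        database.add(1234L);                       // tx 0: one existing assignment
        System.out.println(query(1234).size());    // tx 1: prints 1, result now cached
        database.add(1234L);                       // tx 2: add a second matching assignment
        // tx 3: without commit-time eviction we still see the stale result:
        System.out.println(query(1234).size());    // prints 1, should be 2
        queryCache.clear();                        // what commit-time eviction should do
        System.out.println(query(1234).size());    // prints 2
    }
}
```

The symptom above matches step 3 of the sequence: the second transaction's commit never invalidates the cached result for that filter.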

Similar Messages

  • Qaaws not refreshing query triggered from Xcelsius, maybe a cache problem

    Hi,
    I'm having a problem with QAAWS and Xcelsius
    I'm using a List Builder component to select multiple values in this case STATES from the efashion universe
I use the selected states as values to feed a prompt in a QAAWS query. The QAAWS query has SALES REVENUE as the result set, and in the conditions it has a multi-prompt for STATES.
When I preview my dashboard, I select the States, UPDATE the values, and then refresh the query with a CONNECTION REFRESH button. The first time I do this it works fine and returns the Sales Revenue.
If I add a new State to my selection, run update, and then run the query again with the refresh button, it doesn't work any more: it shows the value retrieved from the first query again.
At first I thought the query wasn't being triggered by Xcelsius, but after some more tests I found that the query actually runs but returns the value from the first query.
    I think this is a cache problem , so is there a way to tell QAAWS to always run the query and not use the cache?
    thanks,
    Alejandro

    Hello Alejandro,
QaaWS does indeed use a cache mechanism to speed up some Xcelsius interactions (from XI 3.0 onwards), but your issue should not be caused by it: cache sessions are discriminated by session user id and prompt values, so if you are passing the prompt values correctly, QaaWS should not serve you the previous values by mistake.
Could you specify how you are passing the multiple prompt values to QaaWS? There might be an issue there, so make sure that:
1. the QaaWS query prompt is set using the In List operator, otherwise only the first value will actually be taken into account,
2. in the Xcelsius Designer Data Manager, the web service input parameters are duplicated to accept several input values (you cannot submit your list of prompt values as a list to a single input parameter).
If this still does not work, I'd suggest you debug your dashboard at runtime using an HTTP sniffer such as Fiddler (available from http://www.fiddler2.com/), which lets you inspect the HTTP messages sent to and received from the server; there you can verify which prompt values are actually sent to the QaaWS servlet.
FYI, you can set the QaaWS cache lifetime for each query: in the first screen of the QaaWS edit wizard, click the Advanced... button and change the value of the timeout parameter (the default is 60 seconds).
    Hope that helps,
    David.

  • Query contains double value - seems cache problem

    Hi experts,
in a Query (Integrated Planning) based on an aggregation level on a MultiCube.
The MultiCube contains a real-time InfoProvider to store planning data and a normal InfoProvider to show actual values.
The load of actual values into the InfoProvider can be executed several times a month via a process chain;
it is a full load, so the data loaded earlier is deleted during the month to make room for the new monthly data load.
In the Query, double values are displayed for actual data after several loads during the month,
but the actual InfoProvider only stores a single value!
It seems to be a buffer problem, since deleting all system buffers solves it, but that is not an acceptable way to handle the problem in a productive system.
    How can the problem be solved?
Setting the plan buffer query of the actual-data InfoProvider to not use the cache doesn't help.
    Thank you!
    Angie

    Hi,
there is no planning data stored in the actual cube, and the query column is restricted to the actual cube.
The actual cube only contains the value once, but the query displays the double value.
If all buffers are reset, the query shows the correct value, but this is not the way to handle the problem.
What changes need to be made?
Is there a setting in the query cache, the planning buffer query cache, or somewhere else that can fix the problem?
    Best regards,
    Angie

  • Problem in continuous query cache with PofExtractor

    Hi,
I am creating a CQC with a filter that has a PofExtractor. When I try to insert any record into the cache, it gives me an exception on the server side.
When I do not use the PofExtractor it works fine.
If anyone knows about this problem, please help.
    I am using C# as my client application
    Regards
    Nitin Jain

    Hi JK,
I have made some changes on my server. I am no longer using any POF definition on the server side. I think that's why it is giving me the error.
    2010-04-13 15:39:07.481/20.593 Oracle Coherence GE 3.5.1/461 <Error> (thread=DistributedCache, member=1): Exception occured during filter evaluation: MapEventFilter(mask=INSERTED|UPDATED_ENTERED|UPDATED_WITHIN, filter=GreaterEqualsFilter(PofExtractor(target=VALUE, navigator=SimplePofPath(indices=0)), Nitin)); removingthe filter...
    2010-04-13 15:39:07.481/20.593 Oracle Coherence GE 3.5.1/461 <Error> (thread=DistributedCache, member=1):
    (Wrapped) java.io.StreamCorruptedException: unknown user type: 1001
    at com.tangosol.util.ExternalizableHelper.fromBinary(ExternalizableHelper.java:261)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$ConverterFromBinary.convert(DistributedCache.CDB:4)
    at com.tangosol.util.ConverterCollections$ConverterMapEvent.getNewValue(ConverterCollections.java:3594)
    at com.tangosol.util.filter.MapEventFilter.evaluate(MapEventFilter.java:172)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$Storage.prepareDispatch(DistributedCache.CDB:82)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$Storage.postInvoke(DistributedCache.CDB:10)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$Storage.put(DistributedCache.CDB:156)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache.onPutRequest(DistributedCache.CDB:37)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$PutRequest.run(DistributedCache.CDB:1)
    at com.tangosol.coherence.component.net.message.requestMessage.DistributedCacheKeyRequest.onReceived(DistributedCacheKeyRequest.CDB:12)
    at com.tangosol.coherence.component.net.message.requestMessage.distributedCacheKeyRequest.ExtendedKeyRequest.onReceived(ExtendedKeyRequest.CDB:4)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onMessage(Grid.CDB:9)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:136)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache.onNotify(DistributedCache.CDB:3)
    at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
    at java.lang.Thread.run(Thread.java:595)
    Caused by: java.io.StreamCorruptedException: unknown user type: 1001
    at com.tangosol.io.pof.PofBufferReader.readAsObject(PofBufferReader.java:3289)
    at com.tangosol.io.pof.PofBufferReader.readObject(PofBufferReader.java:2600)
    at com.tangosol.io.pof.ConfigurablePofContext.deserialize(ConfigurablePofContext.java:348)
    at com.tangosol.util.ExternalizableHelper.deserializeInternal(ExternalizableHelper.java:2673)
    at com.tangosol.util.ExternalizableHelper.fromBinary(ExternalizableHelper.java:257)
    ... 15 more
Is there any way to apply a continuous query cache without providing an object definition on the server side?
    Regards
    Nitin Jain

  • SAP BW 3.5 Query Cache - no data for queries

    Dear experts,
we have a problem with the SAP BW query cache (BW 3.5). Sometimes, after the queries have been pre-calculated using web templates, no figures are available when running a web report or when doing a drilldown or filter navigation. Deleting the cache for the affected query solves the issue, but that is not a workable solution. The problem occurs non-reproducibly for different queries.
Is this a "normal" error of SAP BW that we have to live with, or are there any solutions for it? Any hints are greatly appreciated.
    Thanks in advance & kind regards,
    daniel

Hi Daniel,
try working without the cache for those queries.
In any case, you should check how the cache option is configured for those queries.
You can see that in transaction RSRV.
Hope this helps

  • BW3.5 - Query Cache - InfoObject decimal formatting

    Hello all,
I built a BW query which displays key figures. Each key figure uses the decimal place formatting from the key figure InfoObject (in Query Designer, the Decimal places property for each key figure is set to "[From Key Figure 0.00]").
I decided to change the key figure InfoObject to have 0 decimal places (on the BEx formatting tab for the key figure). Now, when I open Query Designer and look at the properties for the key figure, it is still set to "[From Key Figure 0.00]" (it should be "[From Key Figure 0]" to reflect the InfoObject change).
I tried generating the report using RSRT and deleting the query cache, but it still shows two decimal places. Has anyone encountered this problem before? I am trying to avoid removing the key figure InfoObject from the query and re-adding it to reflect the change.
    Thanks!

Hello Brendon,
you have changed the key figure InfoObject to show 0 decimals (no decimals); that is fine, but in the query the key figure property is set to 2 decimals, so the data is displayed with 2 decimals. The query setting is local and has priority over the key figure InfoObject setting.
In the key figure properties in the query there is an option ("from the key figure") that means "use whatever is defined by the key figure InfoObject". Select that, and from then on you will get exactly the number of decimals defined in the InfoObject.
Thanks

  • Issues with Query Caching in MII

    Hi All,
I am facing a strange problem with query caching in an MII query. I have created an Xacute query and set the cache duration to 30 sec. The BLS associated with the query retrieves data from an SAP system. In the web page this value is populated by executing an iCommand. These are the steps I followed:
The query is executed for the first time; it retrieves data from SAP correctly. Let's say the value is val1.
At the 10th sec, the value in SAP changes from val1 to val2.
The query is executed at the 15th sec; it returns val1, which is expected since it comes from the cache.
The query is executed at the 35th sec; it returns val2, retrieved from the SAP system. Which is correct.
The query is executed at the 40th sec; it returns val1 from the cache. Which is not expected.
I have tried clearing the Java cache in the browser and the JCo cache on the server.
I have seen the same problem with a tag query as well.
    MII Version - 12.0.6 Build(12)
    Any thoughts on this.
    Thanks,
    Soumen

Soumen Mondal,
if you are facing this problem from the same client PC and in the same session, it's very strange; but if there are two sessions running on the same PC, this kind of issue can occur.
Something about caching:
to decide whether to cache a query or not, consider the number of times you intend to run the query in a minute. For data that changes in seconds and is queried in minutes, I would recommend not caching the query at all.
A typical example for query caching is a query on the material master cached for 24 hours, where we know that after creating a material master we are not creating a PO on the same day.
    BR,
    SB
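The duration-based behaviour Soumen expects can be sketched as a tiny time-to-live cache (plain Java; a toy model with made-up names, not MII internals). The clock is passed in so the timeline from the post can be replayed:

```java
// Toy time-to-live cache for a single query result, mimicking a 30-sec cache duration.
public class TtlCacheDemo {
    static final long TTL_MS = 30_000;
    static String cachedValue;
    static long cachedAtMs = -TTL_MS;   // so the very first fetch is always a miss

    static String fetch(long nowMs, String liveValue) {
        if (nowMs - cachedAtMs < TTL_MS) return cachedValue;  // still fresh: serve cache
        cachedValue = liveValue;                              // expired: refresh from source
        cachedAtMs = nowMs;
        return cachedValue;
    }

    public static void main(String[] args) {
        System.out.println(fetch(0, "val1"));       // t=0s:  miss, caches val1
        System.out.println(fetch(15_000, "val2"));  // t=15s: hit, still val1 (expected)
        System.out.println(fetch(35_000, "val2"));  // t=35s: expired, refreshes to val2
        System.out.println(fetch(40_000, "val2"));  // t=40s: hit, val2 -- a correct cache
                                                    // can never fall back to val1 here
    }
}
```

The t=40s step is the point: once the cache has been refreshed to val2, a single shared cache cannot return val1 again, which is why a second session with its own cache is a plausible explanation.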

  • Named query cache not hit

    Hi,
    I'm using Toplink ORM 10.1.3.
    I have a table called STORE_CONFIG which has a primary key called KEYWORD (a VARCHAR2). The POJO mapped to this table is called StoreConfig.
    In the JDeveloper (10.1.3.1.0) mapping workbench I've defined a named query to query by the PK called "getConfigPropertyByKeyword". The type of the named query is ReadObjectQuery.
    SELECT keyword, key_value
    FROM STORE_CONFIG
    WHERE (keyword = #key)
    Under the options tab I have the following settings:
    Cache Statement: true
    Bind Parameters: true
    Cache Usage: Check Cache by Primary Key
    The application logs show that the same database queries are executed multiple times for the same PK (keyword)! Why is that? Shouldn't it be checking the Object Cache rather than going to the DB?
    I've tried it with "Cache Statement: false" and "Bind Parameters: false" with the same problem.
    If I click the Advanced tab and check "Cache Query Results" then the database is not hit twice for the same record. However it was my understanding that since I am querying by PK that I wouldn't need to set "Cache Query Results".
    Doesn't "Cache Query Results" apply to the Query Cache and not the Object Cache?

Your issue seems to be that you are using custom SQL for the query rather than a TopLink expression. When you use an Expression query, TopLink knows the query is by primary key and can get a cache hit.
When you use custom SQL, TopLink does not know that the SQL selects by primary key, so it does not get a cache hit.
You could either use an Expression for the query,
or, when using custom SQL, name your query argument the same as the database field defined as the primary key in your descriptor (case sensitive), i.e.:
    SELECT keyword, key_value
    FROM STORE_CONFIG
    WHERE (keyword = #KEYWORD)
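The distinction can be modelled with a toy identity map (plain Java; illustrative only, not TopLink internals): a query the framework can recognise as a primary-key lookup is answered from the cache, while opaque custom SQL forces a database round trip every time:

```java
import java.util.*;

// Toy session: an identity map keyed by primary key, plus a fake database.
public class IdentityMapDemo {
    static Map<String, String> identityMap = new HashMap<>(); // PK -> cached object
    static int dbHits = 0;

    static String readFromDb(String pk) { dbHits++; return "row:" + pk; }

    // Expression-style query: the framework sees it's a PK lookup, so it checks the cache.
    static String readByPk(String pk) {
        return identityMap.computeIfAbsent(pk, IdentityMapDemo::readFromDb);
    }

    // Custom-SQL query: opaque to the framework, so it always goes to the database.
    static String readByCustomSql(String sql, String pk) {
        String row = readFromDb(pk);
        identityMap.put(pk, row);
        return row;
    }

    public static void main(String[] args) {
        readByPk("store.timeout");      // db hit #1
        readByPk("store.timeout");      // cache hit, no db access
        readByCustomSql("SELECT ... WHERE keyword = #key", "store.timeout"); // db hit #2
        System.out.println(dbHits);     // prints 2
    }
}
```

Naming the custom-SQL argument after the PK column (as in the reply's corrected SQL) is what lets the real framework treat the second kind of query like the first.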

  • Continuous Query Cache Local caching meaning

    Hi,
I encountered the following problem while working with a continuous query cache with local caching set to TRUE.
I was able to insert data into the Coherence cache and read it back.
Then I stopped the process and tried to read the data in the cache for the keys I had inserted earlier,
but I received NULL as the result.
    NamedCache::Handle hCache = DerivedCQC::create(CacheFactory::getCache("dist-AccountGCE"), AlwaysFilter::getInstance(), true);
    DerivedCQC.hpp
/*
 * File: DerivedCQC.hpp
 * Author: srathna1
 * Created on 15 July 2011, 02:47
 */
    #ifndef DERIVEDCQC_HPP
    #define     DERIVEDCQC_HPP
    #include "coherence/lang.ns"
    #include "coherence/net/cache/ContinuousQueryCache.hpp"
    #include "coherence/net/NamedCache.hpp"
    #include "coherence/util/Filter.hpp"
    #include "coherence/util/MapListener.hpp"
    using namespace coherence::lang;
    using coherence::net::cache::ContinuousQueryCache;
    using coherence::net::NamedCache;
    using coherence::util::Filter;
    using coherence::util::MapListener;
class DerivedCQC
    : public class_spec<DerivedCQC,
          extends<ContinuousQueryCache> >
{
    friend class factory<DerivedCQC>;

protected:
    DerivedCQC(NamedCache::Handle hCache,
            Filter::View vFilter, bool fCacheValues = false,
            MapListener::Handle hListener = NULL)
        : super(hCache, vFilter, fCacheValues, hListener) {}

public:
    virtual bool containsKey(Object::View vKey) const
    {
        return m_hMapLocal->containsKey(vKey);
    }
};

#endif     /* DERIVEDCQC_HPP */
    When I switch off the local storage flag to FALSE.
    I was able to read the data.
    NamedCache::Handle hCache = DerivedCQC::create(CacheFactory::getCache("dist-AccountGCE"), AlwaysFilter::getInstance(), false);
Ideally, in the TRUE scenario, I expect that while I'm connected to Coherence all keys and values are synced and stored locally, and that each update is synced as well.
In the FALSE scenario it hooks into the Coherence cache and reads each key from there, caching values from that moment onwards. Please share how it is implemented underneath.
    Thanks and regards,
    Sura

    Hi Wei,
I found the issue: if you declare your cache as a global variable, you won't get data in the TRUE scenario, whereas if you declare the cache inside a method you will retrieve data.
Try this:
    #include <iostream>
    #include <coherence/net/CacheFactory.hpp>
    #include "coherence/lang.ns"
    #include <coherence/net/NamedCache.hpp>
    #include <stdio.h>
    #include <stdlib.h>
    #include <pthread.h>
    #include <coherence/net/cache/ContinuousQueryCache.hpp>
    #include <coherence/util/filter/AlwaysFilter.hpp>
    #include <coherence/util/filter/EntryFilter.hpp>
    #include "DerivedCQC.hpp"
    #include <fstream>
    #include <string>
    #include <sstream>
    #include <coherence/util/Set.hpp>
    #include <coherence/util/Iterator.hpp>
    #include <sys/types.h>
    #include <unistd.h>
    #include <coherence/stl/boxing_map.hpp>
    #include "EventPrinter.hpp"
    using namespace coherence::lang;
    using coherence::net::CacheFactory;
    using coherence::net::NamedCache;
    using coherence::net::ConcurrentMap;
    using coherence::net::cache::ContinuousQueryCache;
    using coherence::util::filter::AlwaysFilter;
    using coherence::util::filter::EntryFilter;
    using coherence::util::Set;
    using coherence::util::Iterator;
    using coherence::stl::boxing_map;
    NamedCache::Handle hCache = DerivedCQC::create(CacheFactory::getCache("gce-Account"), AlwaysFilter::getInstance(), true);
int main(int argc, char** argv) {
    std::cout << "size: " << hCache->size() << std::endl;
}
In the above example you will see that size is 0 in the true case, and equal to the data size in the cache in the false scenario.
But if you declare the cache as below, you will get the results the documentation leads you to expect:
int main(int argc, char** argv) {
    NamedCache::Handle hCache = DerivedCQC::create(CacheFactory::getCache("gce-Account"), AlwaysFilter::getInstance(), true);
    std::cout << "size: " << hCache->size() << std::endl;
}
Is this a bug, or is this the expected behaviour? According to my understanding it is a bug.
    Thanks and regards,
    Sura

  • Cache Problem while loading PDF in IE

    Hi,
    I am getting a problem when I load a PDF from my servlet.
    I am using the following code.
    pdfInputStream = httpURLConnection.getInputStream();
    output = aResponse.getOutputStream();
    aResponse.setContentType("application/pdf");
    byte[] buffer = new byte[1024];
    int bytesRead = 0;
while((bytesRead = pdfInputStream.read(buffer, 0, buffer.length)) > 0) {
    output.write(buffer, 0, bytesRead);
}
It works fine in Netscape. In IE it gives a problem when the PDF is very small, 5KB or less: it does not show the report. When I click reload, it then shows it.
    When I make the browser cache setting to "Every visit to page". It works fine.
    Then I tried to disable the cache for IE using the following code but this does not work in IE.
    pdfInputStream = httpURLConnection.getInputStream();
    output = aResponse.getOutputStream();
    aResponse.setContentType("application/pdf");
    aResponse.setHeader("Cache-Control", "no-cache"); //or
    aResponse.setHeader("Pragma", "no-cache"); //or
    aResponse.setDateHeader("Expires", 0);
    byte[] buffer = new byte[1024];
    int bytesRead = 0;
while((bytesRead = pdfInputStream.read(buffer, 0, buffer.length)) > 0) {
    output.write(buffer, 0, bytesRead);
}
None of the above methods worked; IE no longer recognised it as a PDF and showed the save-file box instead.
    Anybody have any solutions for this. There was another query of this kind but no answers for that.
    Regards,
    Ritesh

This is not a cache problem.
With IE you must specify the length of the PDF content you send to the browser.
So:
you must add aResponse.setContentLength(bytesTotal); where bytesTotal is the length of the data sent to the browser.
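Since the length has to be known before the body is streamed, one approach (a sketch; the servlet calls are shown as comments and the helper name is made up) is to buffer the PDF fully first:

```java
import java.io.*;

public class ContentLengthDemo {
    // Read an entire stream into memory so its exact length is known up front.
    static byte[] readFully(InputStream in) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        byte[] buffer = new byte[1024];
        int bytesRead;
        while ((bytesRead = in.read(buffer, 0, buffer.length)) > 0) {
            bos.write(buffer, 0, bytesRead);
        }
        return bos.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] fakePdf = new byte[5 * 1024];               // stand-in for the servlet's stream
        byte[] pdfBytes = readFully(new ByteArrayInputStream(fakePdf));
        // In the servlet, this is where the known length fixes the IE issue:
        // aResponse.setContentType("application/pdf");
        // aResponse.setContentLength(pdfBytes.length);
        // aResponse.getOutputStream().write(pdfBytes);
        System.out.println(pdfBytes.length);               // prints 5120
    }
}
```

Buffering trades memory for correctness, which is acceptable here since the failing PDFs are only a few KB.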

  • Query Cache Behavior

    Hi Experts,
I'm running 12.0.5 and have enjoyed 6 months or more of uptime on our production server, until Thursday, when an electrical storm coupled with an odd occurrence in our UPS system brought it to an abrupt halt.
    Dev and Test environments came up OK, however the icon related to the J2EE server on production remained yellow.  We were unable to access MII, or NWA for that matter.
    I handed the problem over to our Basis team, who couldn't figure it out then passed a note to SAP.  While waiting for a response, I decided to do some more poking around, as it was now 8:00pm Friday night and any hope of doing anything over the weekend (other than work) was quickly fading.
Poking around in the NW database, I found that the XMII_QUERYCACHE table had >2.5M rows (that seemed odd). Initial attempts to truncate the table were unsuccessful (something about resources and NOWAIT; I should have captured that error), but I was able to truncate it by shutting down the server and then executing the truncate just as the database was starting, and voila!, NW and MII came online.
My assumption is that either the table is read on startup, or perhaps an attempt is made to 'delete' all rows on startup (a delete would build a rollback segment in order to execute). Is that the case? Also, are 'expired' cache entries supposed to be hanging around?
I found that a single frequently executed query was the culprit: query caching was enabled with a duration of 2 hours (many jobs use this query). I'll correct that situation, but I wanted to try to correct the root cause.
    Thanks,
    Rod

    I entered a ticket and just received a resolution from SAP, as follows..
    We have added this feature in both MII 12.0 and MII 12.1.
    The changes will be available with next SP release on SAP Service
    Marketplace.
    MII 12.0 SP12 will be available on 15-Dec-2010.
    My assumption is that there will be a system job that clears out expired query cache rows at some frequency, nightly maybe.
    Wanted to update the thread with the resolution in case anyone was having the same problem.
    Rod

  • Clear TREX query cache

    Hi all,
I'm stuck on this problem: I'm adding a document to a TREX search index using the KM API (a Web Dynpro for Java app). The document is processed and added to the index. So far so good.
The problem is that when I then do a search on that index (I search for ''), the document won't show up, because the query results of the previous search for '' on my index seem to be cached somewhere. When I wait for 10 minutes (cache expires?) or restart my Web Dynpro app (so the user needs to log in again), the document is found.
    How can I clear/disable the trex query cache?
    Thanks a lot,
    Jeroen
PS: clicking the 'clear cache' button in Search and Classification Cache Administration (TREX monitor) does not work. The cache that is pestering me seems to be a session-specific query cache.

    TREX service @ visual admin: queries.usecache == false

  • OD Master and MCXD Cache Problem

    Hi,
does someone have an idea how to solve an mcxd cache problem?
Every time I set up the OD Master after a fresh installation of Mac OS X 10.4.11 Server, I get the mcxd problem after the reboot. After the reboot I always see this message in our system.log:
    SYSTEM.LOG
    Mar 5 21:45:23 mainserver /System/Library/CoreServices/mcxd.app/Contents/MacOS/mcxd: DSGetLocallyHostedNodeNames(): dsFindDirNode() == -14008
    Mar 5 21:45:23 mainserver /System/Library/CoreServices/mcxd.app/Contents/MacOS/mcxd: DSGetSearchPath(): DSGetLocallyHostedNodeNames() == -14956
    Mar 5 21:45:23 mainserver /System/Library/CoreServices/mcxd.app/Contents/MacOS/mcxd: DSGetCurrentConfigInfo(): DSGetSearchPath() == -14956
    Mar 5 21:45:23 mainserver /System/Library/CoreServices/mcxd.app/Contents/MacOS/mcxd: DSGetCacheInfo(): DSGetCurrentConfigInfo() == -14956
    Mar 5 21:45:23 mainserver /System/Library/CoreServices/mcxd.app/Contents/MacOS/mcxd: * MCXD.getComputerInfo: Couldn't get cache info -14956
How can I solve this problem?
It has been making me crazy for 3 weeks.
And is this problem known under Mac OS X Server 10.5?
    To our server system:
    One XServer G5 is the DNS Server and
    A second XServer G5 is the OD Master.
    Thanks for your help!

    Hi
    +"Is the replica truly read only?"+
It should be. However, I've come across a similar situation. At a site I support, the local admin was creating users and editing passwords on the Replica rather than the Master. He kept getting the usual "dsDirectory" etc. errors but persisted and eventually got the settings to 'take'. Querying the databases on both Master and Replica produced similar results to yours. It was difficult then to go back to the Master and properly 'redo' what he'd done, as he hadn't kept track of the changes he made. To 'fix' it I simply demoted and re-promoted the Replica, which worked for me. Although time will tell whether it turns out to be a permanent fix?
    Tony

  • Caching problem w/ primary-foreign key mapping

    I have seen this a couple of times now. It is not consistent enough to
    create a simple reproducible test case, so I will have to describe it to you
    with an example and hope you can track it down. It only occurs when caching
    is enabled.
    Here are the classes:
    class C1 { int id; C2 c2; }
    class C2 { int id; C1 c1; }
    Each class uses application identity using static nested Id classes: C1.Id
    and C2.Id. What is unusual is that the same value is used for both
    instances:
    int id = nextId();
    C1 c1 = new C1(id);
    C2 c2 = new C2(id);
    c1.c2 = c2;
    c2.c1 = c1;
    This all works fine using optimistic transactions with caching disabled.
    Although the integer values are the same, the oids are unique because each
    class defines its own unique oid class.
    Here is the schema and mapping (this works with caching disabled but fails
    with caching enabled):
    table t1: column id integer, column revision integer, primary key (id)
    table t2: column id integer, column revision integer, primary key (id)
    <jdo>
    <package name="test">
    <class name="C1" objectid-class="C1$Id">
    <extension vendor-name="kodo" key="jdbc-class-map" value="base">
    <extension vendor-name="kodo" key="table" value="t1"/>
    </extension>
    <extension vendor-name="kodo" key="jdbc-version-ind"
    value="version-number">
    <extension vendor-name="kodo" key="column" value="revision"/>
    </extension>
    <field name="id" primary-key="true">
    <extension vendor-name="kodo" key="jdbc-field-map" value="value">
    <extension vendor-name="kodo" key="column" value="id"/>
    </extension>
    </field>
    <field name="c2">
    <extension vendor-name="kodo" key="jdbc-field-map" value="one-one">
    <extension vendor-name="kodo" key="column.id" value="id"/>
    </extension>
    </field>
    </class>
    <class name="C2" objectid-class="C2$Id">
    <extension vendor-name="kodo" key="jdbc-class-map" value="base">
    <extension vendor-name="kodo" key="table" value="t2"/>
    </extension>
    <extension vendor-name="kodo" key="jdbc-version-ind"
    value="version-number">
    <extension vendor-name="kodo" key="column" value="revision"/>
    </extension>
    <field name="id" primary-key="true">
    <extension vendor-name="kodo" key="jdbc-field-map" value="value">
    <extension vendor-name="kodo" key="column" value="id"/>
    </extension>
    </field>
    <field name="c1">
    <extension vendor-name="kodo" key="dependent" value="true"/>
    <extension vendor-name="kodo" key="inverse-owner" value="c2"/>
    <extension vendor-name="kodo" key="jdbc-field-map" value="one-one">
    <extension vendor-name="kodo" key="table" value="t1"/>
    <extension vendor-name="kodo" key="ref-column.id" value="id"/>
    <extension vendor-name="kodo" key="column.id" value="id"/>
    </extension>
    </field>
    </class>
    </package>
    </jdo>
    Because the ids are known to be the same, the primary key values are also
    used as foreign key values. Accessing C2.c1 is always non-null when caching
is disabled. With caching enabled, C2.c1 is usually non-null but sometimes
    null. When it is null we get warnings about dangling references to deleted
    instances with id values of 0 and other similar warnings.
    The workaround is to add a redundant column with the same value. For some
    reason this works around the caching problem (this is unnecessary with
    caching disabled):
    table t1: column id integer, column id2 integer, column revision integer,
    primary key (id), unique index (id2)
    table t2: column id integer, column revision integer, primary key (id)
    <jdo>
    <package name="test">
    <class name="C1" objectid-class="C1$Id">
    <extension vendor-name="kodo" key="jdbc-class-map" value="base">
    <extension vendor-name="kodo" key="table" value="t1"/>
    </extension>
    <extension vendor-name="kodo" key="jdbc-version-ind"
    value="version-number">
    <extension vendor-name="kodo" key="column" value="revision"/>
    </extension>
    <field name="id" primary-key="true">
    <extension vendor-name="kodo" key="jdbc-field-map" value="value">
    <extension vendor-name="kodo" key="column" value="id"/>
    </extension>
    </field>
    <field name="c2">
    <extension vendor-name="kodo" key="jdbc-field-map" value="one-one">
    <extension vendor-name="kodo" key="column.id" value="id2"/>
    </extension>
    </field>
    </class>
    <class name="C2" objectid-class="C2$Id">
    <extension vendor-name="kodo" key="jdbc-class-map" value="base">
    <extension vendor-name="kodo" key="table" value="t2"/>
    </extension>
    <extension vendor-name="kodo" key="jdbc-version-ind"
    value="version-number">
    <extension vendor-name="kodo" key="column" value="revision"/>
    </extension>
    <field name="id" primary-key="true">
    <extension vendor-name="kodo" key="jdbc-field-map" value="value">
    <extension vendor-name="kodo" key="column" value="id"/>
    </extension>
    </field>
    <field name="c1">
    <extension vendor-name="kodo" key="dependent" value="true"/>
    <extension vendor-name="kodo" key="inverse-owner" value="c2"/>
    <extension vendor-name="kodo" key="jdbc-field-map" value="one-one">
    <extension vendor-name="kodo" key="table" value="t1"/>
    <extension vendor-name="kodo" key="ref-column.id" value="id2"/>
    <extension vendor-name="kodo" key="column.id" value="id"/>
    </extension>
    </field>
    </class>
    </package>
    </jdo>
    Needless to say, the extra column adds a lot of overhead, including the
    addition of a second unique index, for no value other than working around
    the caching defect.

    Tom-
    The first thing that I think of whenever I see a problem like this is
    that the equals() and hashCode() methods of your application identity
    classes are not correct. Can you check them to ensure that they are
    written in accordance with the guidelines at:
    http://docs.solarmetric.com/manual.html#jdo_overview_pc_identity_application
    If that doesn't address the problem, can you post the code for your
    application identity classes so we can double-check them and try to
    determine what might be causing the problem?
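    For reference, here is a minimal sketch (not the poster's actual code) of what
    identity classes for the C1/C2 model in the quoted post might look like if they
    follow those guidelines. The detail that matters for this model is that equals()
    must compare the concrete class as well as the pk value, so that a C1.Id and a
    C2.Id wrapping the same int remain distinct oids:

    ```java
    import java.io.Serializable;

    // Sketch of application identity classes for C1 and C2, each with a
    // single int primary-key field.  Each Id has a no-arg constructor, a
    // String constructor, toString(), and equals()/hashCode() based on the
    // pk field.  equals() compares getClass() so that ids of different
    // persistent classes never compare equal, even with the same int value.
    class C1 {
        int id;
        C2 c2;

        public static class Id implements Serializable {
            public int id;

            public Id() {
            }

            public Id(String str) {
                id = Integer.parseInt(str);
            }

            public boolean equals(Object other) {
                // getClass() comparison (rather than instanceof) keeps
                // C1.Id and C2.Id with the same int value distinct.
                if (other == null || other.getClass() != getClass())
                    return false;
                return id == ((Id) other).id;
            }

            public int hashCode() {
                return id;
            }

            public String toString() {
                return String.valueOf(id);
            }
        }
    }

    class C2 {
        int id;
        C1 c1;

        public static class Id implements Serializable {
            public int id;

            public Id() {
            }

            public Id(String str) {
                id = Integer.parseInt(str);
            }

            public boolean equals(Object other) {
                if (other == null || other.getClass() != getClass())
                    return false;
                return id == ((Id) other).id;
            }

            public int hashCode() {
                return id;
            }

            public String toString() {
                return String.valueOf(id);
            }
        }
    }
    ```

    If either equals() used only the int value (for example, via instanceof against
    a shared base class), a C1.Id and C2.Id built from the same sequence value would
    collide, which could produce exactly the kind of cross-instance cache confusion
    described below.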
    In article <[email protected]>, Tom Landon wrote:
    I have seen this a couple of times now. It is not consistent enough to
    create a simple reproducible test case, so I will have to describe it to you
    with an example and hope you can track it down. It only occurs when caching
    is enabled.
    Here are the classes:
    class C1 { int id; C2 c2; }
    class C2 { int id; C1 c1; }
    Each class uses application identity with static nested Id classes: C1.Id
    and C2.Id. What is unusual is that the same id value is used for both
    instances:
    int id = nextId();
    C1 c1 = new C1(id);
    C2 c2 = new C2(id);
    c1.c2 = c2;
    c2.c1 = c1;
    This all works fine using optimistic transactions with caching disabled.
    Although the integer values are the same, the oids are unique because each
    class defines its own unique oid class.
    Here is the schema and mapping (this works with caching disabled but fails
    with caching enabled):
    table t1: column id integer, column revision integer, primary key (id)
    table t2: column id integer, column revision integer, primary key (id)
    <jdo>
    <package name="test">
    <class name="C1" objectid-class="C1$Id">
    <extension vendor-name="kodo" key="jdbc-class-map" value="base">
    <extension vendor-name="kodo" key="table" value="t1"/>
    </extension>
    <extension vendor-name="kodo" key="jdbc-version-ind"
    value="version-number">
    <extension vendor-name="kodo" key="column" value="revision"/>
    </extension>
    <field name="id" primary-key="true">
    <extension vendor-name="kodo" key="jdbc-field-map" value="value">
    <extension vendor-name="kodo" key="column" value="id"/>
    </extension>
    </field>
    <field name="c2">
    <extension vendor-name="kodo" key="jdbc-field-map" value="one-one">
    <extension vendor-name="kodo" key="column.id" value="id"/>
    </extension>
    </field>
    </class>
    <class name="C2" objectid-class="C2$Id">
    <extension vendor-name="kodo" key="jdbc-class-map" value="base">
    <extension vendor-name="kodo" key="table" value="t2"/>
    </extension>
    <extension vendor-name="kodo" key="jdbc-version-ind"
    value="version-number">
    <extension vendor-name="kodo" key="column" value="revision"/>
    </extension>
    <field name="id" primary-key="true">
    <extension vendor-name="kodo" key="jdbc-field-map" value="value">
    <extension vendor-name="kodo" key="column" value="id"/>
    </extension>
    </field>
    <field name="c1">
    <extension vendor-name="kodo" key="dependent" value="true"/>
    <extension vendor-name="kodo" key="inverse-owner" value="c2"/>
    <extension vendor-name="kodo" key="jdbc-field-map" value="one-one">
    <extension vendor-name="kodo" key="table" value="t1"/>
    <extension vendor-name="kodo" key="ref-column.id" value="id"/>
    <extension vendor-name="kodo" key="column.id" value="id"/>
    </extension>
    </field>
    </class>
    </package>
    </jdo>
    Because the ids are known to be the same, the primary key values are also
    used as foreign key values. Accessing C2.c1 is always non-null when caching
    is disabled. With caching enabled, C2.c1 is usually non-null but sometimes
    null. When it is null, we get warnings about dangling references to deleted
    instances with id values of 0, and other similar warnings.
    The workaround is to add a redundant column with the same value. For some
    reason this works around the caching problem (this is unnecessary with
    caching disabled):
    table t1: column id integer, column id2 integer, column revision integer,
    primary key (id), unique index (id2)
    table t2: column id integer, column revision integer, primary key (id)
    <jdo>
    <package name="test">
    <class name="C1" objectid-class="C1$Id">
    <extension vendor-name="kodo" key="jdbc-class-map" value="base">
    <extension vendor-name="kodo" key="table" value="t1"/>
    </extension>
    <extension vendor-name="kodo" key="jdbc-version-ind"
    value="version-number">
    <extension vendor-name="kodo" key="column" value="revision"/>
    </extension>
    <field name="id" primary-key="true">
    <extension vendor-name="kodo" key="jdbc-field-map" value="value">
    <extension vendor-name="kodo" key="column" value="id"/>
    </extension>
    </field>
    <field name="c2">
    <extension vendor-name="kodo" key="jdbc-field-map" value="one-one">
    <extension vendor-name="kodo" key="column.id" value="id2"/>
    </extension>
    </field>
    </class>
    <class name="C2" objectid-class="C2$Id">
    <extension vendor-name="kodo" key="jdbc-class-map" value="base">
    <extension vendor-name="kodo" key="table" value="t2"/>
    </extension>
    <extension vendor-name="kodo" key="jdbc-version-ind"
    value="version-number">
    <extension vendor-name="kodo" key="column" value="revision"/>
    </extension>
    <field name="id" primary-key="true">
    <extension vendor-name="kodo" key="jdbc-field-map" value="value">
    <extension vendor-name="kodo" key="column" value="id"/>
    </extension>
    </field>
    <field name="c1">
    <extension vendor-name="kodo" key="dependent" value="true"/>
    <extension vendor-name="kodo" key="inverse-owner" value="c2"/>
    <extension vendor-name="kodo" key="jdbc-field-map" value="one-one">
    <extension vendor-name="kodo" key="table" value="t1"/>
    <extension vendor-name="kodo" key="ref-column.id" value="id2"/>
    <extension vendor-name="kodo" key="column.id" value="id"/>
    </extension>
    </field>
    </class>
    </package>
    </jdo>
    Needless to say, the extra column adds a lot of overhead, including the
    addition of a second unique index, for no value other than working around
    the caching defect.
    Marc Prud'hommeaux [email protected]
    SolarMetric Inc. http://www.solarmetric.com

  • I am facing a caching problem in the Web-Application that I've developed using Java/JSP/Servlet

    Dear Friends,
    I am facing a caching problem in the web application that I've developed using Java/JSP/Servlet.
    Problem description: in this application, when a hyperlink is clicked, the request is supposed to go to the handling servlet, which fetches the data (using the DAO layer) and stores it in the session. The servlet then forwards the request to the view JSP, which reads the object from the session and displays the data.
    However, when the link is clicked a second time, the request is not received by our servlet and the cached (previous) page is shown. If we refresh the page, the request does reach the servlet and we get the correct data. But, as you will agree, we don't want users to have to refresh the page again and again to get updated data.
    We've included these lines in the JSPs as well, but it does no good:
    <%
    // setHeader() replaces any earlier value for the same header name,
    // so all Cache-Control directives must go in a single call.
    response.setHeader("Expires", "0");
    response.setHeader("Cache-Control", "no-cache, no-store, must-revalidate");
    response.setHeader("Pragma", "no-cache");
    %>
    Request you to please give a solution for the same.
    Thanks & Regards,
    Mohan

    "However, when the link is clicked a second time, the request is not received by our servlet" Impossible, mate.. can you show your code? Are you sure there are no JavaScript errors?
    Why don't you just remove your object from the session after displaying the data from it, and see if your page "automatically" hits the servlet when the link is clicked.
    cheers..
    S
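    It may also help to set the no-cache headers in the handling servlet (or in a
    filter) before forwarding, rather than only in the JSPs, so that every response
    for the page carries them. A hypothetical helper (the class and method names are
    mine, not from the post) collecting one consistent set of headers:

    ```java
    import java.util.LinkedHashMap;
    import java.util.Map;

    // Hypothetical helper: one consistent set of response headers telling
    // browsers and proxies not to cache the page.  All the Cache-Control
    // directives are combined into a single header value.
    public class NoCacheHeaders {

        public static Map<String, String> headers() {
            Map<String, String> h = new LinkedHashMap<String, String>();
            // HTTP/1.1 caches
            h.put("Cache-Control", "no-cache, no-store, must-revalidate");
            // HTTP/1.0 caches
            h.put("Pragma", "no-cache");
            // "0" is an invalid date, which caches treat as already expired
            h.put("Expires", "0");
            return h;
        }

        // In the handling servlet, before forwarding to the view JSP:
        //
        //   for (Map.Entry<String, String> e : NoCacheHeaders.headers().entrySet())
        //       response.setHeader(e.getKey(), e.getValue());
        //   request.getRequestDispatcher("/view.jsp").forward(request, response);
    }
    ```

    Applying the headers in the servlet covers the case where the browser cached
    the very first response, before any JSP-level headers took effect.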
