Problems with offline caching

In a recent AIR 2.6 application I use the offline caching feature of LCDS 3.1.
The application itself uses a Java 6 / Spring 3 / JPA 1 / Hibernate stack (custom code, not LC model-driven development). For offline support, I started with the sample code from the LCDS 3.1 documentation, which works OK in most cases - at least, if I fetch collections while online, then go to offline mode and fetch the same collections from the cache. This works fine and the SQLite database gets created correctly.
However, as soon as I call fill() on a DataService in offline mode for a collection that has *not* been fetched before, or try to load a lazy association that has not been fetched before, the DataService "hangs", in the sense that the AsyncToken returned by that fill() call never fires a result or fault event.
The last thing that happens on the client is
  [DEBUG] mx.data.SQLDBCache SQLDBCache - before select: SELECT id,data FROM [ORGANIZATION_FAVSOrganization]
and that's it. After this I get no further feedback: the DataService dispatches neither a result nor a fault event, and it does not even time out. I'm not sure whether this is by design, but since I listen for result/fault events to show and hide progress indicators, it gives me a lot of headaches controlling them in offline mode.
Any ideas?
Thanks,
Dirk

Harry, thank you for your immediate response.
I think I might have posted my problem under the wrong thread. I am facing this issue with an older version of LCDS which I downloaded a couple of years back; it says ES 2.6, and I am still a bit confused about the version numbers.
That being said, my problem is exactly the same. Everything works fine when the LCDS server is reachable. However, when offline, fill() just does not call any of the handlers.
In case you can still help:
Here is the DS definition:
     public class UsersDataService extends EventDispatcher implements IFITDataService {
          private var mDS:DataServiceWrapper;
          private var mAllUsers:ArrayCollection;

          public function UsersDataService() {
               mDS = new DataServiceWrapper("UserInfo");
               mDS.cacheID = "allusers";
               mAllUsers = new ArrayCollection();
               mDS.addEventListener(DataServiceFaultEvent.FAULT, OnFault);
               mDS.autoConnect = true;
          }

          public function getAllUsers(locationId:String):void {
               mDS.addEventListener(ResultEvent.RESULT, OnResult);
               if (mDS.isCollectionManaged(mAllUsers))
                    mDS.refreshCollection(mAllUsers);
               else
                    mDS.fill(mAllUsers, locationId);
          }

          public function getUser(userName:String):void {
               mDS.addEventListener(ResultEvent.RESULT, OnResult);
               mDS.getItem({ID: userName});
          }

          public function OnResult(event:ResultEvent):void {
               mDS.removeEventListener(ResultEvent.RESULT, OnResult);
               trace(event);
               var dsEvent:DataServiceEvent = new DataServiceEvent(DataServiceEvent.Result);
               dsEvent.user = event.result;
               dispatchEvent(dsEvent);
          }

          public function OnFault(event:FaultEvent):void {
               trace(event);
          }

          public function getData(locationId:String):void {
               getAllUsers(locationId);
          }

          public function refresh():void {
               if (mDS.isCollectionManaged(mAllUsers)) {
                    mDS.addEventListener(ResultEvent.RESULT, OnResult);
                    mDS.refreshCollection(mAllUsers);
               } else {
                    getAllUsers(FITSession.regLocationID);
               }
          }

          public function get users():ArrayCollection {
               return mAllUsers;
          }

          public function get connected():Boolean {
               return mDS.connected;
          }
     }
I am also attaching the 'allusers' db
Thanks much!

Similar Messages

  • Problem with the cache hit ratio

    Hello,
    I am having a problem with the cache hit ratio I am getting. I am sure, 100% sure, that something has got to be wrong with the cache hit ratio I am fetching!
    1) I will post the code that I am using to retrieve the cache hit ratio. I've seen about a thousand different equations, all equivalent in the end.
    In Oracle the cache hit ratio seems to be:
    cache hits / cache lookups,
    where cache hits = logical I/O - physical reads
    and cache lookups = logical I/O.
    Now some people use the 'session logical reads' statistic from the view v$sysstat; others use 'db block gets' + 'consistent gets'; whatever. At the end of the day it's all the same, and this is what I use:
    SELECT (P1.value + P2.value - P3.value) AS CACHE_HITS, (P1.value + P2.value) AS CACHE_LOOKUPS, P4.value AS MAX_BUFFS_SIZEB
    FROM v$sysstat P1, v$sysstat P2, v$sysstat P3, V$PARAMETER P4
    WHERE
    P1.name = 'db block gets' AND
    P2.name = 'consistent gets' AND
    P3.name = 'physical reads' AND
    P4.name = 'sga_max_size'
    2) The problem:
    The cache hit ratio I am retrieving cannot be correct. In this case I was benchmarking a HUGELY inefficient query, consisting of the UNION of 5 projections over the same source table, and Oracle is configured with a relatively small SGA of 300 MB. The query plan is awful: the database will read the source table 5 times.
    And I can see in the physical I/O statistics of the source tablespace that the total bytes read is approximately 5 times the size of the text file that I used to bulk load data into the database.
    Some of the relevant stats and wait events:
    db file scattered read: 1129.93 seconds
    Elapsed time: 1311.9 seconds
    CPU time: 179.84
    SGA max size: 314572800 bytes
    Total bytes read: 77771964416 B (approximately 72 gigabytes)
    The source text file loaded into the database was approximately 16 GB, so the number of bytes read was about 4.5 times the size of the source datafile.
    I would say this: given the difference between CPU time and elapsed time, it is clear that the query spent almost all of its time doing db file scattered reads. How is it possible that I get the following cache hit ratio?
    Cache hit ratio: 0.92
    Cache hits: 109680186
    Cache lookups: 119173819
    I mean, only 8% of that logical I/O corresponded to physical I/O? It is just not possible.
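    For reference, these figures can be cross-checked arithmetically. Assuming the default 8 KB block size (an assumption; the block size is not stated in the post), the physical I/O volume and the miss count line up almost exactly:
    \[ \text{cache misses} = \text{lookups} - \text{hits} = 119{,}173{,}819 - 109{,}680{,}186 = 9{,}493{,}633 \]
    \[ \frac{77{,}771{,}964{,}416\ \text{B}}{8{,}192\ \text{B/block}} = 9{,}493{,}648\ \text{blocks} \]
    Under that assumption the counters agree to within 15 blocks, and the 0.92 ratio coexists with heavy physical I/O only because the query also performed roughly 119 million logical gets.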
    3) Procedure for taking stats:
    Now, to retrieve these stats I snapshot the system twice: once before the query, once after the query.
    But this is not done in a single session. In total 3 sessions are created: one session to retrieve the stats before the query, one session to run the query, and a last session to snapshot after the query.
    Could the problem, assuming there is one, be related to this:
    "The V$SESSTAT view contains statistics on a per-session basis and is only valid for the session currently connected. When a session disconnects all statistics for the session are updated in V$SYSSTAT. The values for the statistics are cleared until the next session uses them."
    What does this paragraph mean? Does it mean that v$sysstat only shows you the stats of the last session that closed? Or does it mean that v$sysstat is incremented with the statistics of each session's v$sesstat once the session terminates? If so, then my procedure for gathering those stats should be correct.
    Can anyone help me sort out the origin of such a high cache hit ratio, with so much I/O being done?

    sono99 wrote:
    Hi, first off, let me start by saying that there were many things in your post that I could not understand. 1. Because I am not an Oracle expert; I use whatever RDBMS whenever I need to. 2. Because another problem has come up and, right now, I cannot inform myself enough to be able to comprehend it all.
    Well, could it be that you need to understand the database you are working on in order to comprehend it? That is why we strongly advise you to read the Concepts manual first: you need to understand the architecture that Oracle uses, as well as the basic concepts of how Oracle does locking and maintains read consistency. It does these differently from other database engines, and some things become nonsense if looked at from the viewpoint of a single user.
    >
    quote:
    It would be useful to see the execution plan just in case you have simplified the problem so much that a critical detail is missing.
    First, the query code:
    [2009-10-20 15:11:59,141 INFO]: OUTPUT_GOBLER:>SQL> CREATE TABLE FAVFRIEND
    [2009-10-20 15:11:59,141 INFO]: OUTPUT_GOBLER:> 2 NOLOGGING TABLESPACE TARGET
    [2009-10-20 15:11:59,141 INFO]: OUTPUT_GOBLER:> 3 AS
    [2009-10-20 15:11:59,141 INFO]: OUTPUT_GOBLER:> 4 SELECT ID as USRID, FAVF1 as FAVF FROM PROFILE
    [2009-10-20 15:11:59,141 INFO]: OUTPUT_GOBLER:> 5 UNION ALL
    [2009-10-20 15:11:59,141 INFO]: OUTPUT_GOBLER:> 6 SELECT ID as USRID, FAVF2 AS FAVF FROM PROFILE
    [2009-10-20 15:11:59,141 INFO]: OUTPUT_GOBLER:> 7 UNION ALL
    [2009-10-20 15:11:59,141 INFO]: OUTPUT_GOBLER:> 8 SELECT ID as USRID, FAVF3 AS FAVF FROM PROFILE
    [2009-10-20 15:11:59,141 INFO]: OUTPUT_GOBLER:> 9 UNION ALL
    [2009-10-20 15:11:59,141 INFO]: OUTPUT_GOBLER:> 10 SELECT ID as USRID, FAVF4 AS FAVF FROM PROFILE
    [2009-10-20 15:11:59,141 INFO]: OUTPUT_GOBLER:> 11 UNION ALL
    [2009-10-20 15:11:59,141 INFO]: OUTPUT_GOBLER:> 12 SELECT ID as USRID, FAVF5 AS FAVF FROM PROFILE
    [2009-10-20 15:11:59,141 INFO]: OUTPUT_GOBLER:> 13 ;
    Now, although it is clear from the query that the statement is executed with NOLOGGING, I have disabled logging entirely for the tablespace.
    There are certain rules about nologging that may not be obvious. Again, this derives from the basic Oracle architecture, and if you use the wrong definitions of things like logging, you will be led down the primrose path to confusion.
    >
    Furthermore, yes, the RDBMS is a test RDBMS... I have dropped the database a few times... And I am constantly deleting and re-inserting data into the source database table named PROFILE.
    I also make sure to check all the datafile statistics, and for this query the amount of redo log, undo "log", and temp space used is negligible, practically zero.
    Create table is DDL, which has implied commits before and afterwards. There is a lot going on, some of it dependent on the volume of data returned. The Oracle database writer writes things out when it feels like it; there are situations where it might just leave it in memory for a while. With nologging, Oracle may not care that you can't perform recovery if it is interrupted. So you might want to look into Statspack or EM to tell you what is going on; the datafile statistics may not be all that informative for this case.
    >
    Most of the I/O is reading; a little of it is writing.
    My idea is not to optimize this query, it is to understand how it performs.
    Well, have you read the Concepts manual?
    I have other implementations to test; namely, I am having trouble with one of them.
    Furthermore, I doubt the query plan Oracle is using actually involves table scans (as I'd like it to do), because in the wait events most of the wait time for this query is spent doing "db file scattered read", and I think this is different from a tablescan.
    Please look up the definition of [db file scattered read|http://download.oracle.com/docs/cd/B19306_01/server.102/b14211/instance_tune.htm#sthref703].
    >
    Do you really have to use sessions external to the query session? Can you query v$mystat joined to v$statname from the session itself?
    No, I don't want to do that!
    I avoid as much as possible having the code I execute be implemented in Java.
    Why do you think Java has anything to do with this? In your session, desc v$mystat and v$statname; these are views you can look at.
    When I can avoid it I don't query the database directly through JDBC; I use the RDBMS command line client, which is supposed to be very robust.
    Er, is that sqlplus?
    So yes, I only connect to the database with JDBC... in the very last session.
    Of course, I could have put both the gather-stats-before-query and gather-stats-after-query steps in a single script: the script that would also be running the query.
    But that would cause me a number of problems; namely, some of the SQL I build has to be implemented dynamically, and I don't want to be replicating the snapshotting code into every query script I make. This way I have one SQL script with the snapshotting code, and multiple scripts for running each query. I avoid code replication in this manner.
    Instrumentation is a large subject; dynamic SQL generation is something to be avoided if possible. Remember, Oracle is written with the idea that many people are going to be sharing code and the database, so it is optimized in that way. For SQL parsing in particular, if every SQL is different, you get a performance problem called "hard parsing." You can (and generally should, and sometimes can't avoid) use bind variables so that Oracle doesn't need to hard parse every SQL. In fact, this is one of those things that applies to other engines besides Oracle. I would recommend you read Tom Kyte's books; he explains what is going on in detail, including in some places the non-Oracle viewpoint.
    >
    Furthermore, since the database is not a production database, it is there so I can do my tests. I don't have to be concerned with what other sessions may be doing to my system. There are only the sessions I control.
    No, there are sessions Oracle controls. If you are on Unix, you can easily see this, but there are ways to see it on Windows, too. In some cases, your own sessions can affect themselves.
    >
    Then what is the array fetch size? If the array fetch size is large enough, the number of block visits would be similar to the number of physical block reads.
    I don't know what the arraysize you mention is. I have not touched that parameter, so whatever it is, it's the default.
    You should find out! You can go to http://tahiti.oracle.com and type array fetch size into the search box. You can also go to http://asktom.oracle.com and do the same thing, with some more interesting detail.
    >
    By the way, I don't get the query results into my client; the query results are dumped into a target output table.
    So, if the arraysize has something to do with the number of rows that Oracle returns to the client in each step... I think it doesn't matter.
    You may hear this phrase a lot:
    "It depends."
    >
    As for the query plan, if I am not mistaken you can't get query plans for queries that are CREATE TABLE AS SELECT.
    What?
    JG@TTST> explain plan for create table jjj as select * from product_master;
    Explained.
    JG@TTST> select count(*) from plan_table;
      COUNT(*)
             3
    I can however omit the CREATE TABLE part and just call for the evaluation of the SELECT part of the query; I believe it should be the same.
    "Optimizer"     "Cost"     "Cardinality"     "Bytes"     "Partition Start"     "Partition Stop"     "Partition Id"     "ACCESS PREDICATES"     "FILTER PREDICATES"
    "SELECT STATEMENT"     "ALL_ROWS"     "2563"     "586110"     "15238860"     ""     ""     ""     ""     ""
    "UNION-ALL"     ""     ""     ""     ""     ""     ""     ""     ""     ""
    "TABLE ACCESS(FULL) SONO99.PROFILE"     ""     "512"     "117222"     "3047772"     ""     ""     ""     ""     ""
    "TABLE ACCESS(FULL) SONO99.PROFILE"     ""     "513"     "117222"     "3047772"     ""     ""     ""     ""     ""
    "TABLE ACCESS(FULL) SONO99.PROFILE"     ""     "513"     "117222"     "3047772"     ""     ""     ""     ""     ""
    "TABLE ACCESS(FULL) SONO99.PROFILE"     ""     "513"     "117222"     "3047772"     ""     ""     ""     ""     ""
    "TABLE ACCESS(FULL) SONO99.PROFILE"     ""     "513"     "117222"     "3047772"     ""     ""     ""     ""     ""
    This query plan was taken from SQL Developer and exported to txt; the PROFILE table here has only 100k tuples.
    Right now I am more concerned with testing the MODEL query, which Oracle doesn't seem to be able to run any more... but that is a matter for another thread.
    Regarding this plan: the UNION ALL seems to be more than just a binary operator... it seems to be n-ary.
    The UNION ALL on that execution plan seems to be taking as leaf tables 5 SONO99.PROFILE tables and making a table scan over them all. So I'd say that the RDBMS should only scan each database block once and not 5 times.
    But it doesn't seem to be so. It seems like what Oracle is doing is scanning each table completely and then moving on to the next SELECT statement in the UNION ALL, because the amount of source table that was read was 5 times the size of the source table; Oracle didn't reuse read blocks.
    But this is just my feeling.
    Your feeling is uninteresting. Telling us what you really hope to accomplish might be more interesting.
    Anyway, in terms of consistent gets, how many consistent gets should the RDBMS be doing? 5? One for each table block?
    It depends.
    >
    My best regards,
    Nuno (99sono xp).

  • Problem with Offline Files with Cisco NSS322 (firmware 1.3)

    Hi
    We've recently bought a new NSS322 NAS box to replace an ageing Windows SBS 2003 file server. We've upgraded the firmware to 1.3 and begun moving files across from the Windows server to the new NAS box. However, we're having a problem with Offline Files on the NAS: when attempting to make directories available offline on Windows clients (we've only tried Vista Business 32-bit so far), we get loads of errors stating that these files cannot be made available offline "because they are in use" (or something like that).
    Any suggestions as to why this might be? It doesn't appear to affect all files, but most of them.
    I've read somewhere that the "allow oplocks" setting can affect this if disabled, but it is already enabled on the device, so this can't be the cause of the problem.
    Any help you could provide would be much appreciated, as at the moment this issue is rendering our NAS unusable as we rely heavily on the offline files functionality.
    Thanks in advance.
    I have some more information - this appears to be related to the ownership permissions of the files on the NAS. New files created on the NAS can be made available offline, and have the ownership of the person who created the file, however, the files we moved across from the server are all listed as being owned by the user "guest" (I think this may be related to the fact that when we moved the files over, the user was logged in as "admin" rather than a user on the AD Domain). I should be able to resolve this through changing the owner of the files on the NAS, but I can't seem to do this, as I get an "access denied" error each time I try to change ownership.
    Is there any chance someone's going to be able to help me with this? I've currently spent £500 on a box that I can't use. If it can't handle Offline Files satisfactorily, it's no use to us.
    I've been looking at this a bit more - it appears that each user can make files they have created available offline, but files created by other people cannot be made available; the error message that appears is: "The process cannot access the file because it is being used by another process".
    I'm getting desperate here...can anyone help?

    Sorry, but I don't speak English very well.
    I tried this quick test: as a quick test, you can do a forward-all and make sure you can get into the FTP server.
    But the result is the same.
    I think the problem is in the firmware, because I know other people who own this router with the same firmware version (2.0.1.3), and they have the same problem as I do.
    I have DISABLED the FIREWALL on my router and the problem persists.
    If I replace the Cisco Linksys WRVS4400N router with an Ovislink router or any other, the issue goes away.
    The configuration in the Cisco Linksys WRVS4400N and the configuration in any other router are the same, but the FTP connection via the Cisco Linksys always crashes.
    If I connect to any other FTP server, it sometimes asks me for the FTP login and password. That is OK, but after I enter the login and password and connect to the FTP server, I can see the folders and files on the server, and then the connection drops for no reason.
    A few seconds later, if I connect to the same FTP server, sometimes it doesn't even ask for the login and password, and the connection drops again.

  • Problem with JCS Caching

    Hi,
    I am using JCS for disk-based caching. I don't want to use the memory cache, so I set the configuration like this:
    ##### Default Region Configuration
    jcs.default=DC
    jcs.default.cacheattributes=org.apache.jcs.engine.CompositeCacheAttributes
    jcs.default.cacheattributes.MaxObjects=0
    jcs.default.cacheattributes.MemoryCacheName=org.apache.jcs.engine.memory.lru.LRUMemoryCache
    ##### CACHE REGIONS
    jcs.region.myRegion1=DC
    jcs.region.myRegion1.cacheattributes=org.apache.jcs.engine.CompositeCacheAttributes
    jcs.region.myRegion1.cacheattributes.MaxObjects=0
    jcs.region.myRegion1.cacheattributes.MemoryCacheName=org.apache.jcs.engine.memory.lru.LRUMemoryCache
    jcs.region.myRegion1.cacheattributes.MaxSpoolPerRun=100
    ##### AUXILIARY CACHES
    # Indexed Disk Cache
    jcs.auxiliary.DC=org.apache.jcs.auxiliary.disk.indexed.IndexedDiskCacheFactory
    jcs.auxiliary.DC.attributes=org.apache.jcs.auxiliary.disk.indexed.IndexedDiskCacheAttributes
    jcs.auxiliary.DC.attributes.DiskPath=target/test-sandbox/indexed-disk-cache
    jcs.auxiliary.DC.attributes.MaxPurgatorySize=1000
    jcs.auxiliary.DC.attributes.MaxKeySize=1000
    jcs.auxiliary.DC.attributes.MaxRecycleBinSize=5000
    jcs.auxiliary.DC.attributes.OptimizeAtRemoveCount=300000
    jcs.auxiliary.DC.attributes.OptimizeOnShutdown=true
    jcs.auxiliary.DC.attributes.EventQueueType=POOLED
    jcs.auxiliary.DC.attributes.EventQueuePoolName=disk_cache_event_queue
    ################## OPTIONAL THREAD POOL CONFIGURATION ########
    # Disk Cache Event Queue Pool
    thread_pool.disk_cache_event_queue.useBoundary=false
    thread_pool.remote_cache_client.maximumPoolSize=15
    thread_pool.disk_cache_event_queue.minimumPoolSize=1
    thread_pool.disk_cache_event_queue.keepAliveTime=3500
    thread_pool.disk_cache_event_queue.startUpSize=1
    I set MaxObjects to 0 so that all items are cached on disk:
    jcs.default.cacheattributes.MaxObjects=0
    As I set MaxKeySize=1000, after writing 1000 items I call jcs.freeMemoryElements(1000) so that all the items are successfully written to disk. But this method throws an exception: "Update: Last item null".
    jcs.auxiliary.DC.attributes.MaxKeySize=1000
    I wrote 2000 items to the cache, but I am able to get only 1000 items back.
    What am I doing wrong? I want to use this disk cache for around 50 lakh (5 million) items.
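    For reference, below is a minimal Java sketch of how a disk-only region configured as above is typically driven; the region name "myRegion1" comes from the configuration, while the key/value types and counts are illustrative:
    import org.apache.jcs.JCS;
    import org.apache.jcs.access.exception.CacheException;
    public class DiskCacheExample {
        public static void main(String[] args) throws CacheException {
            // Loads cache.ccf from the classpath; "myRegion1" is the region
            // configured above with MaxObjects=0 and the indexed disk auxiliary.
            JCS cache = JCS.getInstance("myRegion1");
            // With MaxObjects=0, puts are spooled through purgatory to disk;
            // only keys (bounded by MaxKeySize) are kept in memory.
            for (int i = 0; i < 2000; i++) {
                cache.put("key" + i, "value" + i);
            }
            // A null return means the key was never stored or its key slot was
            // lost, e.g. when more items are written than MaxKeySize allows.
            String v = (String) cache.get("key1999");
            System.out.println(v);
        }
    }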

    Hi,
    I have a similar problem with the JCS indexed cache mechanism for writing data to disk, but I am not successful with the cache.ccf configuration below.
    Also, I am not able to find the complete details from the above post: http://www.nabble.com/Unable-to-get-the-stored-Cache-td23331442.html
    Here is my configuration. I do not want to save the objects in memory; I want everything to be on disk. Please help me.
    I'd appreciate your quick response.
    # sets the default aux value for any non configured caches
    jcs.default=DC
    jcs.default.cacheattributes=org.apache.jcs.engine.CompositeCacheAttributes
    jcs.default.cacheattributes.MaxObjects=1000
    jcs.default.cacheattributes.MemoryCacheName=org.apache.jcs.engine.memory.lru.LRUMemoryCache
    jcs.default.elementattributes.IsEternal=false
    jcs.default.elementattributes.MaxLifeSeconds=3600
    jcs.default.elementattributes.IdleTime=1800
    jcs.default.elementattributes.IsSpool=true
    jcs.default.elementattributes.IsRemote=true
    jcs.default.elementattributes.IsLateral=true
    # CACHE REGIONS AVAILABLE
    # Regions preconfigured for caching
    jcs.region.monitoringCache=DC
    jcs.region.monitoringCache.cacheattributes=org.apache.jcs.engine.CompositeCacheAttributes
    jcs.region.monitoringCache.cacheattributes.MaxObjects=0
    jcs.region.monitoringCache.cacheattributes.MemoryCacheName=org.apache.jcs.engine.memory.lru.LRUMemoryCache
    # AUXILIARY CACHES AVAILABLE
    # Primary Disk Cache -- faster than the rest because of memory key storage
    jcs.auxiliary.DC=org.apache.jcs.auxiliary.disk.indexed.IndexedDiskCacheFactory
    jcs.auxiliary.DC.attributes=org.apache.jcs.auxiliary.disk.indexed.IndexedDiskCacheAttributes
    jcs.auxiliary.DC.attributes.DiskPath=C:/TestCache
    jcs.auxiliary.DC.attributes.MaxPurgatorySize=10000
    jcs.auxiliary.DC.attributes.MaxKeySize=10000
    jcs.auxiliary.DC.attributes.OptimizeAtRemoveCount=300000
    jcs.auxiliary.DC.attributes.MaxRecycleBinSize=7500

  • Problems with data cache plugin - The ResultList has been closed

    I'm testing out the data cache to see if it helps some of my performance
    problems, but I now get lots of exceptions that I didn't get before I
    enabled the cache. Here's how I enabled it:
    # CACHE
    com.solarmetric.kodo.DataCacheClass=com.solarmetric.kodo.runtime.datacache.plugins.CacheImpl
    com.solarmetric.kodo.RemoteCommitProviderClass=com.solarmetric.kodo.runtime.event.impl.SingleJVMRemoteCommitProvider
    The exception I'm getting follows. I'm curious whether anyone has any insight into what's going on. I'm sure there is a problem with my code (I'm forgetting to close something or other), but since it works fine without the cache I'm really stuck as to what it is.
    Thanks
    Michael
    22:17:32,792 ERROR ObjectFinder - Exception =
    com.solarmetric.kodo.runtime.FatalUserException: The ResultList has been closed.
    NestedThrowables:
    com.solarmetric.kodo.runtime.ClosurePoint: Closure point of object held in embedded stack trace.
    com.solarmetric.kodo.runtime.FatalUserException: The ResultList has been closed.
    NestedThrowables:
    com.solarmetric.kodo.runtime.ClosurePoint: Closure point of object held in embedded stack trace.
    at com.solarmetric.kodo.runtime.objectprovider.EagerResultList.checkClosed(EagerResultList.java:66)
    at com.solarmetric.kodo.runtime.objectprovider.EagerResultList.get(EagerResultList.java:84)
    at com.solarmetric.kodo.runtime.datacache.query.CachingRandomAccessList.get(CachingRandomAccessList.java:124)
    at java.util.AbstractList$Itr.next(AbstractList.java:416)
    at java.util.AbstractList.equals(AbstractList.java:604)
    at serp.util.WeakCollection$WeakValue.equals(WeakCollection.java:123)
    at java.util.HashMap.eq(HashMap.java:270)
    at java.util.HashMap.removeEntryForKey(HashMap.java:525)
    at java.util.HashMap.remove(HashMap.java:507)
    at java.util.HashSet.remove(HashSet.java:198)
    at serp.util.RefValueCollection.removeFilter(RefValueCollection.java:272)
    at serp.util.RefValueCollection.remove(RefValueCollection.java:235)
    at com.solarmetric.kodo.runtime.datacache.query.QueryCacheImpl.unregisterClassChangeListener(QueryCacheImpl.java:160)
    at com.solarmetric.kodo.runtime.datacache.query.CachingRandomAccessList.abortCaching(CachingRandomAccessList.java:103)
    at com.solarmetric.kodo.runtime.datacache.query.CachingRandomAccessList.get(CachingRandomAccessList.java:149)
    at java.util.AbstractList$Itr.next(AbstractList.java:416)
    at java.util.AbstractList.hashCode(AbstractList.java:624)
    at serp.util.WeakCollection$WeakValue.<init>(WeakCollection.java:93)
    at serp.util.WeakCollection.createRefValue(WeakCollection.java:64)
    at serp.util.RefValueCollection.addFilter(RefValueCollection.java:193)
    at serp.util.RefValueCollection.add(RefValueCollection.java:166)
    at serp.util.RefValueCollection.add(RefValueCollection.java:157)
    at com.solarmetric.kodo.runtime.datacache.query.QueryCacheImpl.registerClassChangeListener(QueryCacheImpl.java:151)
    at com.solarmetric.kodo.runtime.datacache.query.CachingRandomAccessList.<init>(CachingRandomAccessList.java:76)
    at com.solarmetric.kodo.runtime.datacache.query.CacheAwareQuery.wrapList(CacheAwareQuery.java:146)
    at com.solarmetric.kodo.runtime.datacache.query.CacheAwareQuery.execute(CacheAwareQuery.java:270)
    at com.verideon.veriguard.persistence.ObjectFinder.getObjects(ObjectFinder.java:62)
    at com.verideon.veriguard.persistence.ObjectFinder.getObject(ObjectFinder.java:44)
    at com.verideon.veriguard.persistence.ObjectFinder.getRole(ObjectFinder.java:91)
    at com.verideon.veriguard.services.VeriguardService.createCustomerAccount(VeriguardService.java:210)
    at com.verideon.veriguard.services.TestVeriguardServiceMonitors.createTestUser(TestVeriguardServiceMonitors.java:133)
    at com.verideon.veriguard.services.TestVeriguardServiceMonitors.setUp(TestVeriguardServiceMonitors.java:80)
    at junit.framework.TestCase.runBare(TestCase.java:125)
    at junit.framework.TestResult$1.protect(TestResult.java:106)
    at junit.framework.TestResult.runProtected(TestResult.java:124)
    at junit.framework.TestResult.run(TestResult.java:109)
    at junit.framework.TestCase.run(TestCase.java:118)
    at junit.framework.TestSuite.runTest(TestSuite.java:208)
    at junit.framework.TestSuite.run(TestSuite.java:203)
    at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:392)
    at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:276)
    at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:167)
    NestedThrowablesStackTrace:
    com.solarmetric.kodo.runtime.ClosurePoint: Closure point of object held in embedded stack trace.
    at com.solarmetric.kodo.runtime.objectprovider.EagerResultList.close(EagerResultList.java:78)
    at com.solarmetric.kodo.impl.jdbc.query.JDBCQuery.close(JDBCQuery.java:127)
    at com.solarmetric.kodo.query.QueryImpl.closeAll(QueryImpl.java:637)
    at com.solarmetric.kodo.runtime.datacache.query.CacheAwareQuery.closeAll(CacheAwareQuery.java:343)
    at com.solarmetric.kodo.runtime.PersistenceManagerImpl.closeQueries(PersistenceManagerImpl.java:934)
    at com.solarmetric.kodo.runtime.PersistenceManagerImpl.close(PersistenceManagerImpl.java:914)
    at com.solarmetric.kodo.runtime.PersistenceManagerImpl.close(PersistenceManagerImpl.java:884)
    at com.verideon.veriguard.services.PMService.close(PMService.java:120)
    at com.verideon.veriguard.services.PMService.close(PMService.java:111)
    at com.verideon.veriguard.services.TestVeriguardServiceMonitors.deleteTestUser(TestVeriguardServiceMonitors.java:127)
    at com.verideon.veriguard.services.TestVeriguardServiceMonitors.tearDown(TestVeriguardServiceMonitors.java:103)
    at junit.framework.TestCase.runBare(TestCase.java:130)
    at junit.framework.TestResult$1.protect(TestResult.java:106)
    at junit.framework.TestResult.runProtected(TestResult.java:124)
    at junit.framework.TestResult.run(TestResult.java:109)
    at junit.framework.TestCase.run(TestCase.java:118)
    at junit.framework.TestSuite.runTest(TestSuite.java:208)
    at junit.framework.TestSuite.run(TestSuite.java:203)
    at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:392)
    at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:276)
    at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:167)

    Michael,
    Could you send your test case to [email protected] so that we
    can take a look at what's going on to cause this exception?
    Thanks,
    -Patrick
    --
    Patrick Linskey
    SolarMetric Inc.

  • Problem with Preview Cache and LR5 keeps shutting down

    I keep trying to open LR5, and it gives me an error message saying it encountered a problem with the preview cache and needs to shut down, and that it will attempt to fix the problem the next time I open LR5. It keeps happening and won't allow me to open LR5 at all. What should I do?

    Delete .lrdata folder (the whole thing) - it's in with your catalog.
    Do NOT delete your catalog (.lrcat file) or anything else.
    If you have 2 .lrdata folders, delete the one that does NOT have "Smart Previews" in the name.
    Rob

  • 2 questions - I've tried the cookies/cache thing, but when I try to register on a site it still says I have a problem with my cache and cookies settings; also, the icons for Tools, etc. have disappeared from my toolbar. How do I get them back?

    I did the self-help steps to set private settings, delete cookies, block tracking of my visited websites, and delete history... but when I try to register on a site to send them emails (comments), I keep getting sent back to an error page saying my cache/cookie settings are not right. And the icons for my Tools menu, etc., that were on the upper right of my toolbar have disappeared, and I want them back. Where did they go? I can't find anything on the help menus to solve any of these issues. It's aggravating.

    -> Tap ALT key or press F10 to show the Menu Bar
    -> go to Help Menu -> select "Restart with Add-ons Disabled"
    Firefox will close then it will open up with just basic Firefox. Now do this:
    -> Update ALL your Firefox Plug-ins https://www.mozilla.com/en-US/plugincheck/
    -> go to View Menu -> Zoom -> click "Reset" -> Page Style -> select "Basic Page Style"
    -> go to View Menu -> Toolbars -> select '''Menu Bar''' and '''Navigation Toolbar''' -> unselect All Unwanted toolbars
    -> go to Tools Menu -> Clear Recent History -> '''Time range to clear: select EVERYTHING''' -> click Details (small arrow) button -> place Checkmarks on '''Cookies, Cache''' -> click '''Clear Now'''
    -> go to Tools Menu -> Options -> Content -> place Checkmarks on:
    1) Block Pop-up windows 2) Load images automatically 3) Enable JavaScript
    -> go to Tools Menu -> Options -> Privacy -> History section -> '''Firefox will: select "Use Custom Settings for History"''' -> REMOVE Checkmark from '''"Permanent Private Browsing mode"''' -> place CHECKMARKS on:
    1) Remember my Browsing History 2) Remember Download History 3) Remember Search History 4) Accept Cookies from sites -> select "Exceptions..." button -> Click "Remove All Sites" at the bottom of "Exception - Cookies" window
    4a) Accept Third-party Cookies -> under "Keep Until" select "They Expire"
    -> REMOVE CHECKMARK from CLEAR HISTORY WHEN FIREFOX CLOSES
    -> When using the Location Bar, suggest: select "History and Bookmarks"
    -> go to Tools Menu -> Options -> Security -> place Checkmarks on:
    1) Warn me when sites try to install add-ons 2) Block reported attack sites 3) Block reported web forgeries 4) Remember Passwords for sites
    -> go to Tools Menu -> Options -> Advanced -> Network -> Offline Storage (Cache): click '''Clear Now''' button
    -> Click OK on Options window
    -> click the Favicon on SearchBar -> click "Manage Search Engines" -> select all Unwanted Search Engines and click '''Remove''' button -> click OK
    -> go to Tools Menu -> Add-ons -> Extensions section -> REMOVE All Unwanted/Suspicious Extensions (Add-ons) -> Restart Firefox
    You can enable your known & trustworthy add-ons later. Check and tell whether it's working.

  • Problems with statement cache using OCI

    Hello!
    We recently changed our program to use the statement cache, but we found a problem and do not yet have a solution.
    We have problems in this situation:
    OCIEnvCreate();
    OCIHandleAlloc();
    OCILogon2(..... OCI_LOGON2_STMTCACHE);
    OCIStmtPrepare2("CREATE TABLE db_testeSP (cod_usuario INTEGER, usuario CHAR(20), dat_inclusao DATE)")
    OCIStmtExecute();
    OCIStmtRelease(... OCI_DEFAULT);
    OCIStmtPrepare2("INSERT INTO db_testeSP (1,\'user\',CURRENT_DATE");
    OCIStmtExecute();
    OCIStmtRelease(... OCI_DEFAULT);
    OCIStmtPrepare2("SELECT * FROM db_testeSP");
    OCIStmtExecute();
    OCIStmtRelease(... OCI_DEFAULT);
    OCIStmtPrepare2("DROP TABLE db_testeSP");
    OCIStmtExecute();
    OCIStmtRelease(... OCI_DEFAULT);
    OCIStmtPrepare2("CREATE TABLE db_testeSP (cod_usuario INTEGER, usuario CHAR(20), idade INTEGER, dat_inclusao DATE)");
    OCIStmtExecute();
    OCIStmtRelease(... OCI_DEFAULT);
    OCIStmtPrepare2("INSERT INTO db_testeSP (1,\'user\',20,CURRENT_DATE");
    OCIStmtExecute();
    OCIStmtRelease(... OCI_DEFAULT);
    OCIStmtPrepare2("SELECT * FROM db_testeSP");
    OCIStmtExecute();
    OCIStmtRelease(... OCI_DEFAULT);
    On the second SELECT (the last statement above), OCIStmtExecute returns -1, and if I retrieve the error with OCIErrorGet I get: ORA-00932 - inconsistent datatypes.
    Researching, I discovered that this is a statement cache problem. Is there a way to clear the cached statements for one table? I ask because I could clear them whenever there is a DROP TABLE or ALTER TABLE instruction (but I don't know which statements would need to be cleared from the cache). I can't clear the whole cache because I may have other statements from other tables in the cache.
    The situation above is just an example, but I think this will cause other problems too.
    Our program is a gateway between the main program and the database, so I don't know the SQL instructions before executing them. How can we resolve this problem?
    I have tested this issue with Oracle 10g (10.2.0.4.0) and 11g (11.2.0.1.0) both 64 bits and the result is the same (the OCI is version 11.2.0).
    We appreciate any help.
    Thanks in advance,
    Daniel

    After a long time searching for answers: apparently this is expected to happen, and the program should not use statement caching in this situation.
    I found this in an Oracle document (Tuning Data Source Connection Pools - 11g Release 1 (10.3.6)), and we will need to review our use of statement caching.
    Leaving this here as a tip for others who might be in the same situation.

  • Problem with iBot cache seeding

    Hello,
    I am trying to seed the cache with iBots. It works just fine for many of my reports; however, there are 2 reports that are not working properly.
    First of all, running the two reports by themselves (no iBot) results in the cache being populated.
    Second, running the iBot for each of these two reports also seeds the cache.
    The problem is that after running the iBot to seed the cache for each report, when I go to the report in the dashboards and then view the session log, it is not hitting the query that was cached by the iBot; it is generating a new entry in the cache. I am positive that the iBot is created on the correct report and that the other options are configured exactly like my working iBots.
    Any ideas?
    Thanks

    Hi,
    The way the OBIEE server creates the SQL (both logical and physical) for a request can be a bit funky sometimes; the cache seeding only really works in fairly simple cases. Does your request have a pivot table in it by any chance? These are notorious for not caching properly. If you look at the log for the request you can see why, as the server adds a strange "aggregate by" to the request (why this can't be done at the presentation level, since the only change you are asking for is in the presentation of the content, is beyond me). Those "aggregate by"s tend to stop a request from being a cache hit unless it is identical to the one that seeded the cache; any change in parameters, columns etc. (even if a logical subset) will not get a cache hit.
    Regards,
    Matt

  • With VPD I see other users results - problems with AM caching??

    I am using JDeveloper 11.1.1.0.2.
    In my application I have 3 Application Modules (Admin, Store and Sale) and I use VPD and set_context in every AM.
    @Override
    protected void prepareSession(Session session) {
        super.prepareSession(session);
        setVPDcontext();
    }

    private void setVPDcontext() {
        String userName = getUserContext().getUserId();
        String ind = "J";
        String sql = "begin xxx_context.set_context('" + userName + "', '" + ind + "'); end;";
        java.sql.CallableStatement stmt = null;
        try {
            stmt = getDBTransaction().createCallableStatement(sql, 0);
            stmt.execute();
        } catch (SQLException se) {
            throw new JboException(se.getMessage());
        } finally {
            if (stmt != null) {
                try {
                    stmt.close();
                } catch (SQLException e) {
                    throw new JboException(e);
                }
            }
        }
    }
    The problem is that I see the results of previous VPD queries.
    3 test cases:
    1. User1 has access to 3 AMs (Admin, Store, Sale); if I log in I can see the correct results in the application (in Admin, Store and Sale).
    User2 has access to only 1 AM (Store); if I log in as User2 I see the results of User1 in Store (wrong).
    2. After restarting the WebLogic server and logging in as User2, I see the correct results for User2!
    If I log in as User1 I see the results of User2 in Store (wrong!!) and the correct results in Admin and Sale!
    3. User1 has access to 3 AMs (Admin, Store, Sale); if I log in as User1 I use only Admin and Sale (so there are no cached results for Store).
    User3 has access to Sale and Store. If I log in as User3 I see the results of User1 in Sale and the correct results in Store.
    Conclusion: I see the cached query results of the first user who logs in to an Application Module. Only restarting the WebLogic server empties the cache. The problem only occurs with queries that use VPD.
    How can I resolve this problem?

    User,
    I'm on a bind variable crusade today (not the answer to your question, unfortunately). Please, please, please use bind variables instead of gluing literals together like that (use PreparedStatement instead of CallableStatement). You can also parse the PreparedStatement once and avoid the overhead of parsing on each call to prepareSession.
    John
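    For illustration, here is a minimal sketch of John's suggestion applied to the setVPDcontext() method from the post; it assumes the ADF BC createPreparedStatement API and reuses the xxx_context.set_context procedure and the "J" indicator from the original code:
    private void setVPDcontext() {
        String sql = "begin xxx_context.set_context(?, ?); end;";
        java.sql.PreparedStatement stmt = null;
        try {
            // One statement text for every user: the values travel as binds,
            // so Oracle can reuse the parsed statement instead of hard parsing.
            stmt = getDBTransaction().createPreparedStatement(sql, 0);
            stmt.setString(1, getUserContext().getUserId());
            stmt.setString(2, "J");
            stmt.execute();
        } catch (java.sql.SQLException se) {
            throw new JboException(se.getMessage());
        } finally {
            if (stmt != null) {
                try {
                    stmt.close();
                } catch (java.sql.SQLException e) {
                    throw new JboException(e);
                }
            }
        }
    }
    Caching the PreparedStatement in an instance field, as John suggests, would additionally avoid re-parsing on every prepareSession call.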

  • Problem with Java cache which makes the computer very slow.

    Hello!
    I think that Java has installed some code that creates problems for my processor when I am working with the computer.
    I have got the following error code: 0x805 8023, and the container file & file below:
    1) Container file: C:\Users\Oskar\AppData\LocalLow\Sun\Java\Deployment\cache\6.0\11\37a4040b-62415be5
    2) File: C:\Users\Oskar\AppData\LocalLow\Sun\Java\Deployment\cache\6.0\11\37a4040b-62415be5->cuumfrsmeyghklbvetb/bljwthwbhkkbsywmtwjeqk.class
    Could you please help me to get rid of the problem?
    You can do it directly or give me the instructions by e-mail: [email protected]
    Thank you in advance
    Best regards

    Check for a malware infection. The names of these files really indicate the system is infected.
    Which version of the Java client is installed? Older versions especially are very vulnerable to website- or email-based attacks.
    Since malware infections are very different, detailed support can hardly be given via forums.
    Best greetings from Germany
    Olaf

  • Problem with XI Cache

    Hi
    I have a QA server which points to the SLD of the XI DEV system.
    I can't send messages in QA, and the J2EE engine also restarts from time to time.
    The error I get in the logs and in SXMB_MONI is
    "unable to refresh cache"
    In SXI_CACHE I see the error
    "if_http_client receive http_communication_failure"
    I tried recreating the INTEGRATION_DIRECTORY_HMI RFC destination
    and it seems OK.
    I think it's connected to some definition in the J2EE engine that I missed.
    Does anyone have ideas how to solve the problem?
    Thx, Shai

    Dear Shai Rosenzweig,
    The J2EE engine restarts from time to time -
    Kindly check the post-installation steps.
    "if_http_client receive http_communication_failure" -
    Sometimes we get this error if XIRWBUSER is locked.
    Try unlocking the XIRWBUSER user from SU01 (in SU01 you will have a lock button which will show you the status, i.e. whether the user is locked or not).
    Kindly check whether you possess the roles required to access the RWB:
    1. SAP_XI_RWB_SERV_USER
    2. SAP_XI_RWB_SERV_USER_MAIN
    Cross-check the below:
    1. Ensure that the required JDK is being used in the client system.
    2. Try clearing the Java Web Start cache and try downloading again.
    3. Start > Programs > Java Web Start > File > Preferences > Advanced > Clear Folder
    4. If the jars are already downloaded on some other client system, copy them to the following path on the client system and try opening the IB: Drive\Documents and Settings\Client-User\Application Data\Sun\Java\Deployment\javaws\cache
    Also go through Shabarish's blog on 'Trouble logging in to Integration Builder (IR / ID)':
    /people/shabarish.vijayakumar/blog/2006/02/13/unable-to-open-iresrid-xipipi-71-updated-for-pi-71-support
    Regards
    Agasthuri Doss

  • Problem with offline Folder

    Hi all
    After moving PSE7 to a new notebook, all the files are offline (I have them on an external USB disk named Q). See the attached output from the johnrellis tool.
    Any hint on how to get the files on [FOTO] back to Q:...?
    Thanks for any help
    Peter

    It appears that on your old computer, your photos were originally on a drive named FOTOS with volume serial number 6D84-D93C.  Then at some point you copied them to the current external hard drive 2A0B-20FE, assigned the drive letter Q on your new computer.  PSE is now quite confused, due to some bugs/misfeatures in the way it handles multiple drives.   There are two straightforward though tedious options:
    1. Go back to the original computer, verify that you can still open your catalog and access all your photos, and use the backup/restore method for moving the photos:
    http://www.johnrellis.com/psedbtool/photoshop-elements-faq.htm#_Move_your_photos_1
    2. On the current computer with the current contents of the Q drive, reconnect your catalog to the photos on the Q drive.  You'll have to do this folder by folder, due to limitations in PSE's Reconnect command.  See method 4 of this FAQ for how to do this:
    http://www.johnrellis.com/psedbtool/photoshop-elements-faq.htm#_Quickly_reconnecting_large

  • Problem with clearing cache by identity class

    Our application has a requirement for admin methods to clear/refresh objects of a certain type from the TopLink cache. We tried to use the initializeIdentityMap(class) method, but the objects seem to stick around in the cache. The initializeAllIdentityMaps() method worked, but unfortunately we need the granularity.
    We are currently using TopLink 9.0.4 and have configured the cache to use FullIdentityMap, as the data in the cache are quite static.
    Any help is greatly appreciated.
    Thanks,
    zq
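    For reference, a minimal sketch of the per-class invalidation being attempted, assuming TopLink 9.0.4 where initializeIdentityMap(Class) is available on the session itself (in later releases the call moved to session.getIdentityMapAccessor()); the session and entity class here are placeholders:
    void clearCacheFor(oracle.toplink.sessions.Session session, Class entityType) {
        // Replaces the FullIdentityMap for entityType, dropping its entries.
        // Caveat: instances still referenced elsewhere, or returned by queries
        // that don't refresh from the database, can reappear in the cache,
        // which can make the clear look like it failed.
        session.initializeIdentityMap(entityType);
    }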

    Doug,
    Thank you very much for your reply. I understand that the initialize features are risky, and we tried to use a reader-writer lock for each entity type. We are using a refresh policy to refresh the cache for most data. But for some types of data which are cached based on user requests, we don't have queries to refresh all objects of that type in the cache. I thought about using a query which would select from the cache all the records of a given type, but that would be fairly time-consuming if we needed to loop through all the records and issue refreshes. What we really want is to clear all the records in the cache and build the cache again based on user requests.
    What's your recommendation?
    The initializeIdentityMap(class) method is perfect for us if it works. Our application is very much a data provider which gets data from either the database or the cache and sends it over the wire using RMI. The entity we try to clear is very much on its own: no other records/applications reference it. But the initialize method seems to fail to remove the objects, as we still see them through getAllFromIdentityMap() and subsequent requests don't hit the database. What could be wrong?
    When will 10.1.3 be ready for production? I looked briefly at the cache-related enhancements. A lot of nice new features, but I don't see anything that helps much with the problem we face now: ad hoc clearing of all the records of an identity type from the cache (without references to them). I could be wrong, as I only browsed through the API doc. Can you shed some light on this?
    Really appreciate your insight.
    zq

  • Problem with page cache clearing

    I have a button that exits HTML DB and returns to our portal main page. That is done through a URL, not a page number. There is no way to reset page cache with a URL branch, so I made a page that has a branch that calls the URL, and then I changed the button to branch to this new page and do the proper page cache resets before the page that branches to the URL is called. However, the page cache is not being reset when I do this.
    Here is the page flow:
    page 10 - Select a record for update
    page 20 - Display data using an ID that is on page 10
    If Return to Menu is clicked, branch to page 1 and reset pages 10 and 20
    page 1 - Branches to the URL
    When I go back to page 10 from the application server page, pages 10 & 20 are still populated.
    If I change page 20's Return to Menu button to branch to page 10, then the reset does occur. What am I doing wrong? How do I reset the pages when leaving HTML DB?

    Dwayne,
    Validations, like other components, can have conditions that observe the current request value. This value is set to the button name when a button submits the page. So you could have a condition on a validation like 'When request != Expression 1' and in the Expression 1 field type the button name (check the button attributes to confirm the case-sensitive name and submit the page in debug mode to see the value of request when you click the button).
    About clearing page cache when branching to URL -- not sure what you're after: Cache can be cleared if an HTML DB page is invoked via URL. You specify clear-cache options in the branch definition which you can observe in the URL that invokes the target page. If you mean that you want the cache of one or more HTML DB pages to be cleared "when" a branch to a non-HTML DB page is taken, that doesn't really make sense because nothing much happens during a branch except to call the HTML DB engine's accept procedure or to do a URL redirect. So all you can do is specify some action to happen "before" a branch is taken, e.g., clear cache on pages x,y,z using a page process, or specify that the clear cache action is to occur "after" the branch is taken as manifested in the resulting URL to an HTML DB page. For something to happen "when" a branch is being done presumably would be after page processing but before a redirect, which we already support in the form of page processes.
    Scott
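    For reference, the clear-cache options Scott mentions occupy the sixth colon-delimited position of the f?p URL; in this sketch the application, page, and cleared-page numbers are hypothetical:
    f?p=App:Page:Session:Request:Debug:ClearCache:ItemNames:ItemValues
    f?p=100:1:&SESSION.::NO:10,20
    The second line navigates to page 1 of application 100 in the current session and clears the cache of pages 10 and 20 as part of the redirect.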
