Notes 'cache' issue

I have produced a set of video quizzes for learners of French; for more information visit:
http://www.ashcombe.surrey.sch.uk/ipod/ipod.html
or
http://ashcombeweb.blogspot.com/
However, the use of in excess of sixty hyperlinked 'notes' has thrown up an issue to which I do not have a solution - yet!
There seems to be some sort of cache within the iPod. During testing, after I have completed one of my quizzes (and in so doing, clicked on one of several available hyperlinks on each page), if I then choose to do the same test again, the iPod 'remembers' which links I clicked and has them already selected, which doesn't make for much of a challenge! Even if I complete a different quiz, when I return to the previous ones, my answers are still selected!
Is there any way to overcome this issue?
iMac 24" Core 2 Duo and MacBook Core Duo   Mac OS X (10.4.8)   80Gb video iPod and 4Gb Nano

Hi,
Go through [http://orainternals.wordpress.com/2009/06/02/library-cache-lock-and-library-cache-pin-waits/]
Anand

Similar Messages

  • Directory Caching issue with Cisco Jabber client for Windows

    Hi ,
    I am facing a cache issue with the Cisco Jabber client for Windows. If I make any change related to modification or deletion of contacts in Active Directory / CallManager, it does not reflect in Jabber, because Jabber takes the contacts from the locally stored cache file on the Windows system.
    Every time, I have to remove the cache file to overcome this issue; practically it's not possible to do the same for all the Windows users. For example, if an employee leaves the company, I can still see his contact appear in the Cisco Jabber client. I have not seen this issue with Android/Apple iOS.
    Is there any automated way to remove the cache file? 
    Here is the detail of CUCM,Presence and Jabber.
    CUCM version: 9.1.x
    Presence          : 9.1.X
    Jabber              : 10.5 and 10.6

    Hello
    On our environment we had to install a dedicated Microsoft Certificate Authority "just for Cisco Jabber usage" to house the Network Device Enrollment Service.
    Our certificates for the CUPS were generated on this Certification Authority too.
    I discussed this certificate matter with my colleagues this afternoon and nobody seems to remember how these certificates were deployed into the Enterprise Trust store for the users.
    But I think they asked all 400 users to accept the 3 certificates by answering "yes" to the popup instead of using a script deployed by GPO...
    I wish you success with that deployment and really hope you have a technical partner that *Knows* this subject.
    Our partner left us alone with that unfortunately.
    Florent
    EDIT: If the "Certutil script method" works, please let me know. This could be useful in our own deployment.

  • SQL query cache issue

    I am trying to see the log file under Manage Sessions for the SQL query in Answers. I see that if we run the same report multiple times, the SQL query shows up only the first time; if I run it a second time, it does not show up. If I build a brand-new report with different columns picked, it does give me the SQL. Where do I set the option to show the SQL query every time I run a report, even if it is the same report run multiple times? Is this a caching issue?

    It shouldn't... Have you unchecked the "Cache" option on the physical layer for this table? If you go to the Advanced tab, is the option "Bypass the Oracle BI cache" checked?

  • Web-I cache issue

    We have BO XI 3.1 FP 1.7 and SAP BW 7.0 SP 18.
    We are facing a serious caching issue with Web-I XI 3.1. What we observed is that it caches previously fetched data and does not recognize that data has been updated in the BW InfoProvider (in our case we have a BEx query on a direct-update DSO).
    I have tried the following:
    1.     I made caching inactive for that BEx query on the BW side.
    2.     I checked the caching settings in BO; they are not helping me much to deactivate the cache in BO.
    On executing the BEx query in BW, I checked the RSRT cache monitor; the query result is not getting cached, and I get up-to-date data. The query works perfectly fine.
    I also observed events in the RSDDSTAT_OLAP table, and could see Event 9000 very well. This shows me that the BEx query hits the database and brings data back.
    When I execute the BO report on top of the same BEx query, I do not get up-to-date data; it shows me previously fetched data and does not recognize that data has been updated in BW.
    Now the question is where this data is getting cached, and how to deactivate that cache to make sure I get up-to-date data in Web-I.
    Many thanks for your kind response.
    Regards
    Ashutosh D

    HI Ashutosh,
    This would be stored in the C:\Program Files\Business Objects\BusinessObjects Enterprise 12.0\Data\servername_6400\storage folder.  Try renaming this folder and then testing again.  If this allows for updated records, then that proves that the Webi Processing Server is getting the cached data from this directory.
    There are also a few settings you can try to disable in the Webi Processing Server settings within the CMC:
    Disable cache Sharing (Checked)
    Enable Real-Time Caching (Unchecked)
    Enable Document Cache (Unchecked)
    Cache Timeout (1 min) - sets the cache to expire after 1 min.
    Try playing with these settings and see if that resolves the issue. 
    Thanks
    Jb

  • Hotmail is not working after Firefox upgrade

    I am using Firefox 12.0. I did not have an issue with Hotmail before, but I am facing a problem this week after the Firefox upgrade. After entering my login, the page freezes; I am not able to continue.
    I reviewed the thread. I am not using Foxit.
    I reviewed the Hotmail forum. They suggest the Firefox upgrade causes this issue.
    I tried after clearing my browser's cache and cookies; the situation remains the same. Any ideas?
    What is the workaround?

    Firefox 3.6 needs the Java Second Generation Plugin which comes with newer versions than 1.6.0_03 - update Java, the latest version is 1.6.0_22

  • SharedObject and Caching issues

    Background:
         I am working on a project where I am using sharedObject to store data for a favorites list. The project contains 2 swfs, one in AS 3.0 (using Flex 4.5) and the other an AS 2.0 flash 8 project.  Basically the AS 2.0 project read and writes data to the shared object and the AS 3.0 project just reads the data.
    Setup:
         OS: Windows 7
         Flash Player: 10.3     Browsers: IE 9, IE 8, Chrome 14, and some misc Firefox testing -- Chrome seems to be working the best.  
    Problems:
     First off, it has been extremely hard to get consistent results across different computers, so I would welcome any suggestion as to why that might be (other than different browsers, different Flash Player versions, different player settings/global settings and different OS versions, because I have been making sure all of that is consistent across devices for testing purposes).
     1. It appears that some computers/browsers won't store the shared object at all, even though everything to allow it is turned on.
         2. Some computers/browsers will store it, but won't update until I refresh the page
    General Questions:
         1. Is the storage of sharedObject object data in a .sol file configured the same in both AS 2.0 and AS 3.0 and if so, would this be a means of communicating between two swfs running off the same domain?
         2. Is it possible to have a browser cache sharedObject data(the .sol file)?
     3. I have noticed that as new Flash Player releases come out (especially recently, i.e. 10.3), I have had to make a lot of repairs to the ActionScript 2 stuff. Is ActionScript 2 support slowly being phased out? Is anyone else experiencing the same problems?
    Possible Solutions/Findings:
         I am starting to believe that it has something to do with browser caching since Chrome seems to perform the best and Chrome has performed best when it comes to browser caching issues.
    ** code is available if needed but since it was working before I don't think it is relevant.

    Hello,
    #1
    Are you making an explicit call to the flush method immediately after writing data through the SharedObject reference?
    http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/net/SharedObject.html#flush()
    If not, you could have different data read by different browser sessions running at the same time (e.g. IE and Firefox both hosting your test Flash movies), as data is flushed only when the given Flash runtime session is about to terminate (e.g. the movie is unloaded), and in the other cases outlined in the documentation.
    #2
    A SharedObject is a reference to a binary file stored on the local machine by the Flash runtime (and the AIR runtime). It is nothing like web-fetched content, so there is nothing for the browser to cache:
    http://en.wikipedia.org/wiki/Local_Shared_Object
    regards,
    Peter

  • SecurityDomain and Caching Issues

    I am running into some caching issues when setting the securityDomain of an imported SWF to match the calling SWF file and I was curious if anyone had any ideas on how to get around this issue.
    Here is the scenario:
    A.swf is launched on Domain A, it sends a Loader request for B.swf on Domain B.
    B.swf will be updated frequently so caching is disabled, A.swf however can be cached.
    B.swf has some ExternalInterface.calls so it requires being in the same securityDomain as A.swf otherwise it will receive a Security Error #2006.
    The code I am using for this is fairly straightforward:
    ActionScript Code:
    var request:URLRequest = new URLRequest("http://domainB/B.swf");
    var loader:Loader = new Loader();
    var context:LoaderContext = new LoaderContext();
    context.securityDomain = SecurityDomain.currentDomain;
    loader.load(request, context);
    I believe B.swf is inheriting the caching setting of A.swf because it is residing in the same securityDomain. If I make a small update to B.swf and refresh A.swf in a browser it will not load the B.swf updates until I clear cache.
    If I get rid of the securityDomain context on the load it will always update B.swf with the most current version, but I run into security issues with ExternalInterface.
    ActionScript Code:
    var request:URLRequest = new URLRequest("http://domainB/B.swf");
    var loader:Loader = new Loader();
    loader.load(request);
    I have tried appending random strings to the end of the URLRequest while using the securityDomain context, but it will always use a cached version of B.swf. I have also tried loading B.swf into a ByteArray and then using loadBytes on a Loader, but that didn't work either.
    Any ideas are appreciated!

  • Caching issue in jsp or servlet

    I am not sure where I should post this issue. I have a J2EE web app (Struts) running on Apache Web Server + GlassFish + MySQL. My server is multi-core.
    The problem is that whenever I insert new data into a table or delete data from it, I don't see the change right away. Sometimes I see the change, sometimes I don't; it's very inconsistent.
    If the old data were cached in the JSP, I shouldn't see the change in the log file, but I do see it even in the log file.
    For example, I have a page managing a user's folders. When I delete a certain folder name from the JSP page, the folder is deleted from a DB table, but when I refresh the page, I still see the folder name that is not supposed to show up; or, when I go to a different page and come back, I don't see it. The behavior is very inconsistent.
    If it's a browser caching issue, I don't think I should see it in the log file, but I still see the folder name (which is supposed to be deleted) in the log file.
    I am including these lines in all included jsp pages.
    response.setHeader("Cache-Control","no-cache");
    response.setHeader("Pragma","no-cache");
    response.setDateHeader ("Expires", -1);
    does anybody have any opinion about this?
    It's hard to debug and to describe the behavior, but it would be very helpful if someone who has had the same experience could explain it and tell me how to fix it.
    Thanks.

    caesarkim1 wrote:
    I am including these lines in all included jsp pages.
    response.setHeader("Cache-Control","no-cache");
    response.setHeader("Pragma","no-cache");
    response.setDateHeader("Expires", -1);
    Instead of including these lines in all JSPs, make a filter and add these lines to it.
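    The suggestion above is to centralize those three headers in a servlet filter instead of repeating them in every JSP. A minimal plain-Java sketch of the header set involved (the filter wiring itself is omitted so the snippet stays self-contained: a real filter would apply each pair via HttpServletResponse.setHeader() inside doFilter() and be mapped to /* in web.xml; the class and method names here are illustrative):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class NoCacheHeaders {
    // The three anti-caching response headers quoted in the post above,
    // collected in one place so a filter can apply them uniformly.
    public static Map<String, String> headers() {
        Map<String, String> h = new LinkedHashMap<>();
        h.put("Cache-Control", "no-cache"); // HTTP/1.1 clients and proxies
        h.put("Pragma", "no-cache");        // HTTP/1.0 clients
        h.put("Expires", "-1");             // treat the page as already expired
        return h;
    }

    public static void main(String[] args) {
        headers().forEach((k, v) -> System.out.println(k + ": " + v));
    }
}
```

    Centralizing the headers this way gives a single change point if the header set ever needs to grow.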

  • Acrobat Connect Pro LMS 7.5 server cache issue - displaying old content

    Our Adobe Acrobat Connect Pro server is showing old Captivate-created content from about 4-6 weeks ago.
    I loaded 35+ sets of Captivate 3 (SCORM 1.2, HTML, zipped) content onto our Acrobat Connect Pro LMS about 6 weeks ago.
    I converted all training content from Captivate 3.0.1 to 4.0.1, 4 weeks ago by opening each file in Captivate 4 and following prompts to "Save As" new files with different file names.
    I reloaded all content 4 weeks ago, and again 2 weeks ago.
    I reloaded about half of all content again last week.
    End User Acceptance testing performed this week showed that most of the courses are showing old content, ranging from 2 to 6 weeks old.
    Attempted fixes and workarounds:
    Deleting content entirely and reloading from scratch - this will not work long term, as we lose usage data each time we reload completely new files.
    Contacted Adobe, provided times to track incidents of the issue.  We reached Tier 2 - who told us it was our problem and that everything appeared to be working fine from their side.
    New workaround - load new content and reattach course to new content.  This presents the same long term issue as the first workaround, but enables us to retain older versions of content in the system, should we need to revert or report on it.
    Gaining server side access is a bit challenging due to the hosting situation we have, so I am looking (ideally) for a solution that can be performed from the Administrator/Author Frontend.  However, I want to learn the real cause of the problem, wherever it might reside, so that it can be properly corrected and avoided in the future.  I am calling this a server cache issue, as it seems the server has somewhere retained unwanted old versions of content, preventing current content from being displayed to end users.  Viewing content as an end user = see old content.  Viewing content from the Content area (Author view) shows the current files, so I know they are on the server and are loading correctly, up to a point.
    I am preparing all content for another round of loading/reloading due to other issues and updates, so republishing and reloading all 35+ files into the LMS is unavoidable at this point.
    This issue is keeping our LMS from launching to several thousand users across the country, so any suggestions or helpful tips are much appreciated.

    I think I have isolated the source of this problem. It's the Pitstop Professional 9 plug in. I un-installed this, and everything opens quicker than greased lightning. I re-installed it and it's back to slowsville.
    Unfortunately Pitstop is essential to my workflow.
    Until recently I did my pre-press on a Mac G5 with Acrobat Pro 7 and Pitstop 6.5. I never had this problem with slow file opening, but it seems that the delays would occur when I used the plug-in with large complex files. So it would open files as fast as you'd expect from an elderly machine, but starting to use Pitstop would result in a prolonged period of staring at a spinning beachball.
    I wonder, is there any way to stop the Pitstop plug-in from initializing until it is used, so the plug-in stays inert until you select the tool from the menus?

  • IE Caching Issue while closing and opening a UI thrice or more

    Hi Everybody..
    Greetings of the day..
    I am working on a web application (I cannot disclose the name) where, at a certain level, a generated Word document contains a customized toolbar.
    It contains one print option which, when clicked, opens a UI that shows the default printer available (it's different from our normal printing).
    When I repeat opening and then closing this UI three or more times, the entire application hangs and I have to terminate it by killing the associated process from Task Manager.
    When this issue was analysed by our onsite team, they arrived at the conclusion that it is an IE caching issue.
    But to my knowledge, for any application that is SSL-secured, IE by default does not do any caching.
    What could be the solution, whether or not the application is SSL-secured?
    Kindly help asap.
    Thanks.

    Hi all
    Did anyone ever manage to fix this? I'm having this issue across multiple iPads. Even if I completely reset the iPad (erasing all content and settings), it does it again when restored.
    I've just got an iPad Air and that does it too. I even created a new Apple ID to see if that would work (trying to avoid any kind of caching across iCloud, etc.) and it does it straight away in those books. It's massively frustrating.
    I have noticed it only seems to do it for iBooks-formatted books, not ePubs.
    Does anyone have a resolution? It's quite embarrassing when you're doing a presentation about book features and the audience assumes you've got the wrong book each time. More annoying that, when closing the book, the wrong cover appears again, and it also takes you to that book's collection page rather than the one you were in.

  • Why would Safari 6 not cache certain cacheable resources?

    In our web application, which uses GWT, we have many Javascript resources. In the case of the GWT module's Javascript, the resource is 790kB (226kB gzipped). All the other Javascript and stylesheet resources are cached, but Safari 6 doesn't cache this one for some reason. Some points that may be important:
    - Our server sends this javascript gzipped, and the response does include the header
    Cache-Control: public,max-age=31536000
    and I can see this response header, along with a confirmation that this resource is not cached, using the Safari developer extensions.
    - The URL has a .html extension, as is the pattern with GWT applications (it has to do with bootstrapping the module). I don't know if this is relevant, but it is the only difference I could identify between this resource versus others that are cached.
    - This is happening with Safari 6 on Mac OS 10.8.*. Chrome and Firefox on Mac, as well as the IE versions on Windows that we've tested do cache this javascript resource.
    - In my particular Safari environment, there are no extensions installed or enabled, and caching is definitely not disabled (and as I mentioned, all the other static resources that have the same Cache-Control header are cached as expected).
    Has anyone else experienced an issue like this with a GWT web application or otherwise? Is the .html extension the cause of the issue? If so, why would Safari care about the content type of the resource when deciding whether or not to cache it, as long as the appropriate HTTP response headers are set? And why would it behave differently from other browsers?
    I spent some time searching for a similar question or discussion but I could not find anything here or anywhere else.
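    As a side note on the header quoted above: max-age is a duration in seconds, and 31536000 is exactly one 365-day year, so the server is asking the browser to keep the resource for a year. A quick arithmetic check (class name is illustrative):

```java
public class MaxAgeCheck {
    public static void main(String[] args) {
        long maxAge = 31_536_000L;                 // value from the Cache-Control header above
        long oneYearSeconds = 365L * 24 * 60 * 60; // seconds in a 365-day year
        System.out.println(maxAge == oneYearSeconds); // prints true
    }
}
```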

    Same problems here on that site with Safari 6.0.5.
    They have a Facebook page where you can post and ask for help >  https://www.facebook.com/DocFuAndHealthTeam

  • Possible session caching issue in SSRS2014

    Using custom FormsAuth, User A can sign into our own main ASP.NET MVC app (WIF cookie), then into SSRS (FormsAuth cookie), and all is well. Here is where things go bad: User A signs out of our main application (WIF cookie deleted), then signs back into our main application as User B, then back into SSRS. An SSRS report that displays User!UserID shows User A instead of the current User B. It's like there is either a session or a cookie caching issue going on, but I am not sure.
    1. What is the proper way to sign out of SSRS and prevent session caching?
    2. Do I need to worry about making my SSRS logon page non-cacheable?  If so, what is the recommended way of doing this?
    thanks
    scott

    Hi scott_m,
    According to your description, you used custom Forms Authentication in Reporting Services; after user A signs out of the application and signs in as user B, the SSRS built-in user shows user A instead of user B.
    Based on my research, once we configure SSRS to use custom (Forms) authentication by deploying a custom security extension, we can log on to Report Manager using the credentials of our custom security framework via a logon web page. But there is no way to log out or to expire the authentication cookie, so we need to close the browser manually. As a workaround, we can add a logout button to the Report Manager which is using Forms Authentication, then use code to clear the cookie and redirect to the home page.
    In addition, if you extend Reporting Services to use Forms Authentication, it's better to use Secure Sockets Layer (SSL) for all communications with the report server, to prevent malicious users from gaining access to another user's cookie. SSL enables clients and a report server to authenticate each other and ensures that no other computers can read the contents of communications between the two. All data sent from a client through an SSL connection is encrypted, so malicious users cannot intercept passwords or data sent to a report server.
    Here is a relevant thread you can reference:
    https://social.msdn.microsoft.com/Forums/sqlserver/en-US/5e33949d-7757-45d1-9c43-6dc3911c3ced/how-do-you-logout-of-report-manager
    For more information about Forms Authentication, please refer to the following document:
    https://technet.microsoft.com/en-us/library/aa902691%28v=sql.80%29.aspx?f=255&MSPPError=-2147217396
    If you have any more questions, please feel free to ask.
    Thanks,
    Wendy Fu

  • Parallel Caching Issue

    We have found issues with the parallel loading of the OLAP cache using reporting agent jobs, where entries are not populated correctly. We rely on the OLAP cache for delivering very high levels of concurrency during peak times.
    Once the main batch data loading has completed, we run some background reporting agent jobs to pre-populate the OLAP cache. Each job contains a web application template that holds one or more queries and will process 1500+ stores per run.
    We have different reporting agent jobs for the different web application templates, but we have discovered that if we run these jobs in parallel, we do not get the full benefit of the OLAP cache compared to running them sequentially.
    If we run the jobs in parallel, when we look in RSRCACHE for these queries it would appear to have populated correctly, but when we check RSDDSTATS for the query performance the following day, we can see that a large number of the stores still hit the database and did not benefit from the cache entries. Sometimes this can be as much as a 60% failure to hit the cache.
    If we run the same jobs sequentially and then check RSDDSTATS the following day, we can see a 100% success rate, with every store hitting the OLAP cache.
    Is anyone able to advise how we can resolve this parallel caching issue?

    Hi Ganesh,
    I am currently having similar trouble with TBB1, where I receive the error message below:
    TRL initialization for MM, FX, Derivatives, co. code WTRD, valn area 003 is not yet complete
    Message no. TPM_TRL052
    Are you familiar with this issue? Any help would be greatly appreciated! I hope that you can help.

  • Zfs list on solaris express 11 always reads from disk (metadata not cached)

    Hello All,
    I am migrating from OpenSolaris 2009.11 to SolarisExpress 11.
    I noticed that "zfs list" takes longer than usual, and is not instant. I then discovered via a combination of arcstat.pl and iostat -xnc 2 that every time a list command is issued, there are disk reads. This leads me to believe that some metadata is not getting cached.
    This is not the case in OpenSolaris where repeated "zfs list" do not cause disk reads.
    Has anyone observed this, and do you know of any solution?
    This is on an idle system with 48 GB of RAM - with plenty of free memory.

    Hi Steve,
    Great info again. I am still new to dtrace, particularly navigating probes and etc. I've seen that navigation tree before.
    I would like to start by answering your questions:
    Q) Have you implemented any ARC tuning to limit the ARC?
    -> No out of the box config
    Q) Are you running short on memory? (the memstat above should tell you)
    -> Definitely not. I have 48 GB of RAM; the ARC grows to about 38 GB and then stops growing. I can reproduce the problem at boot, at will, with only 8 GB used. Nothing is getting aged out of the ARC at that time. Only those metadata reads never get stored.
    Q) Are any of your fileystems over 70% full?
    -> No. I am curious, what changes when this happens? Particularly in regards to ARC - perhaps another discussion, I don't want to distract this subject.
    Q) Have you altered what data is/is not cached? ($ zfs get primarycache)
    -> No - everything should be cached. I also recently added an l2cache device (80 GB). The metadata is not cached there either.
    I am not yet familiar with dtrace's processing capabilities, so I had to parse the output via Perl. Notice how each execution has the exact same number of misses. This is because these particular data blocks (metadata blocks) are not cached in the ARC at all:
    :~/dtrace# ./zfs_list.d -c 'zfs list' |perl -MData::Dumper -e 'while (<>) {if (/.+(:arc-hit|:arc-miss).*/) { $h{$1}+=1}} print Dumper \%h'
    dtrace: script './zfs_list.d' matched 4828 probes
    dtrace: pid 11021 has exited
    $VAR1 = {
              ':arc-hit' => 2,
              ':arc-miss' => 192
            };
    :~/dtrace# ./zfs_list.d -c 'zfs list' |perl -MData::Dumper -e 'while (<>) {if (/.+(:arc-hit|:arc-miss).*/) { $h{$1}+=1}} print Dumper \%h'
    dtrace: script './zfs_list.d' matched 4828 probes
    dtrace: pid 11026 has exited
    $VAR1 = {
              ':arc-hit' => 1,
              ':arc-miss' => 192
            };
    :~/dtrace# ./zfs_list.d -c 'zfs list' |perl -MData::Dumper -e 'while (<>) {if (/.+(:arc-hit|:arc-miss).*/) { $h{$1}+=1}} print Dumper \%h'
    dtrace: script './zfs_list.d' matched 4828 probes
    dtrace: pid 11031 has exited
    $VAR1 = {
              ':arc-hit' => 12,
              ':arc-miss' => 192
            };
    :~/dtrace# ./zfs_list.d -c 'zfs list' |perl -MData::Dumper -e 'while (<>) {if (/.+(:arc-hit|:arc-miss).*/) { $h{$1}+=1}} print Dumper \%h'
    dtrace: script './zfs_list.d' matched 4828 probes
    dtrace: pid 11036 has exited
    $VAR1 = {
              ':arc-hit' => 4,
              ':arc-miss' => 192
            };
    :~/dtrace# ./zfs_list.d -c 'zfs list' |perl -MData::Dumper -e 'while (<>) {if (/.+(:arc-hit|:arc-miss).*/) { $h{$1}+=1}} print Dumper \%h'
    dtrace: script './zfs_list.d' matched 4828 probes
    dtrace: pid 11041 has exited
    $VAR1 = {
              ':arc-hit' => 27,
              ':arc-miss' => 192
            };
    I presume the next step would be to perform stack analysis on which blocks are not being cached. I don't know how to do this... I am guessing this is a mid-function probe? "| arc_read_nolock:arc-miss" - I don't know how to access its parameters.
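    For what it's worth, the Perl one-liner above just tallies the lines on which each probe name (:arc-hit / :arc-miss) appears in the dtrace output. The same counting can be sketched in a self-contained way (shown here in Java; the sample lines are made up to stand in for real dtrace output):

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ArcTally {
    // Counts lines mentioning :arc-hit or :arc-miss, mirroring the
    // Perl one-liner in the post above (one count per matching line).
    public static Map<String, Integer> tally(Iterable<String> lines) {
        Pattern p = Pattern.compile("(:arc-hit|:arc-miss)");
        Map<String, Integer> counts = new LinkedHashMap<>();
        for (String line : lines) {
            Matcher m = p.matcher(line);
            if (m.find()) {
                counts.merge(m.group(1), 1, Integer::sum);
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        // Hypothetical sample lines standing in for real dtrace output.
        List<String> sample = List.of(
            "  0     | arc_read_nolock:arc-miss",
            "  0     | arc_read_nolock:arc-miss",
            "  0  -> arc_read:arc-hit");
        System.out.println(tally(sample)); // prints {:arc-miss=2, :arc-hit=1}
    }
}
```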
    FYI, here's an example of a cache miss in my zfs list:
      0  -> arc_read                             
      0    -> arc_read_nolock                    
      0      -> spa_guid                         
      0      <- spa_guid                         
      0      -> buf_hash_find                    
      0        -> buf_hash                       
      0        <- buf_hash                       
      0      <- buf_hash_find                    
      0      -> add_reference                    
      0      <- add_reference                    
      0      -> buf_cons                         
      0        -> arc_space_consume              
      0        <- arc_space_consume              
      0      <- buf_cons                         
      0      -> arc_get_data_buf                 
      0        -> arc_adapt                      
      0          -> arc_reclaim_needed           
      0          <- arc_reclaim_needed           
      0        <- arc_adapt                      
      0        -> arc_evict_needed               
      0          -> arc_reclaim_needed           
      0          <- arc_reclaim_needed           
      0        <- arc_evict_needed               
      0        -> zio_buf_alloc                  
      0        <- zio_buf_alloc                  
      0        -> arc_space_consume              
      0        <- arc_space_consume              
      0      <- arc_get_data_buf                 
      0      -> arc_access                       
      0       | arc_access:new_state-mfu         
      0        -> arc_change_state               
      0        <- arc_change_state               
      0      <- arc_access                       
      0     | arc_read_nolock:arc-miss           
      0     | arc_read_nolock:l2arc-miss 
      0      -> zio_read                         
      0        -> zio_create                     
      0          -> zio_add_child                
      0          <- zio_add_child                
      0        <- zio_create                     
      0      <- zio_read                         
      0      -> zio_nowait                       
      0        -> zio_unique_parent              
      0          -> zio_walk_parents             
      0          <- zio_walk_parents             
      0          -> zio_walk_parents             
      0          <- zio_walk_parents             
      0        <- zio_unique_parent              
      0        -> zio_execute                    
      0          -> zio_read_bp_init             
      0            -> zio_buf_alloc              
      0            <- zio_buf_alloc              
      0            -> zio_push_transform         
      0            <- zio_push_transform         
      0          <- zio_read_bp_init             
      0          -> zio_ready                    
      0            -> zio_wait_for_children      
      0            <- zio_wait_for_children      
      0            -> zio_wait_for_children      
      0            <- zio_wait_for_children      
      0            -> zio_walk_parents           
      0            <- zio_walk_parents           
      0            -> zio_walk_parents           
      0            <- zio_walk_parents           
      0            -> zio_notify_parent          
      0            <- zio_notify_parent          
      0          <- zio_ready                    
      0          -> zio_taskq_member             
      0          <- zio_taskq_member             
      0          -> zio_vdev_io_start            
      0            -> spa_config_enter           
      0            <- spa_config_enter           
      0            -> vdev_mirror_io_start       
      0              -> vdev_mirror_map_alloc    
      0                -> spa_get_random         
      0                <- spa_get_random         
      0                -> vdev_lookup_top        
      0                <- vdev_lookup_top        
      0              <- vdev_mirror_map_alloc    
      0              -> vdev_mirror_child_select 
      0                -> vdev_readable          
      0                  -> vdev_is_dead         
      0                  <- vdev_is_dead         
      0                <- vdev_readable          
      0                -> vdev_dtl_contains      
      0                <- vdev_dtl_contains      
      0              <- vdev_mirror_child_select 
      0              -> zio_vdev_child_io        
      0                -> zio_create             
      0                  -> zio_add_child        
      0                  <- zio_add_child        
      0                <- zio_create             
      0              <- zio_vdev_child_io        
      0              -> zio_nowait               
      0                -> zio_execute            
      0                  -> zio_vdev_io_start    
      0                    -> spa_syncing_txg    
      0                    <- spa_syncing_txg    
      0                    -> zio_buf_alloc      
      0                    <- zio_buf_alloc      
      0                    -> zio_push_transform 
      0                    <- zio_push_transform 
      0                    -> vdev_mirror_io_start
      0                      -> vdev_mirror_map_alloc
      0                      <- vdev_mirror_map_alloc
      0                      -> vdev_mirror_child_select
      0                        -> vdev_readable  
      0                          -> vdev_is_dead 
      0                          <- vdev_is_dead 
      0                        <- vdev_readable  
      0                        -> vdev_dtl_contains
      0                        <- vdev_dtl_contains
      0                      <- vdev_mirror_child_select
      0                      -> zio_vdev_child_io
      0                        -> zio_create     
      0                          -> zio_add_child
      0                          <- zio_add_child
      0                        <- zio_create     
      0                      <- zio_vdev_child_io
      0                      -> zio_nowait       
      0                        -> zio_execute    
      0                          -> zio_vdev_io_start
      0                            -> vdev_cache_read
      0                              -> vdev_cache_allocate
      0                              <- vdev_cache_allocate
      0                            <- vdev_cache_read
      0                            -> vdev_queue_io
      0                              -> vdev_queue_io_add
      0                              <- vdev_queue_io_add
      0                              -> vdev_queue_io_to_issue
      0                                -> vdev_queue_io_remove
      0                                <- vdev_queue_io_remove
      0                              <- vdev_queue_io_to_issue
      0                            <- vdev_queue_io
      0                            -> vdev_accessible
      0                              -> vdev_is_dead
      0                              <- vdev_is_dead
      0                            <- vdev_accessible
      0                            -> vdev_disk_io_start
      0                            <- vdev_disk_io_start
      0                          <- zio_vdev_io_start
      0                        <- zio_execute    
      0                      <- zio_nowait       
      0                    <- vdev_mirror_io_start
      0                  <- zio_vdev_io_start    
      0                  -> zio_vdev_io_done     
      0                    -> zio_wait_for_children
      0                    <- zio_wait_for_children
      0                  <- zio_vdev_io_done     
      0                <- zio_execute            
      0              <- zio_nowait               
      0            <- vdev_mirror_io_start       
      0          <- zio_vdev_io_start            
      0          -> zio_vdev_io_done             
      0            -> zio_wait_for_children      
      0            <- zio_wait_for_children      
      0          <- zio_vdev_io_done             
      0        <- zio_execute                    
      0      <- zio_nowait                       
      0    <- arc_read_nolock                    
       0  <- arc_read

I've compared the output of a single non-cached metadata read with a single read from the filesystem (running dd to read a file that is not in the cache). The only difference in the stacks is that the non-cached reads are missing:
       0                                -> vdev_queue_offset_compare
       0                                <- vdev_queue_offset_compare

These are called from vdev_queue_io_to_issue, but I don't think that is relevant; it is probably just a difference between metadata and file-data reads.
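For context, a flow trace like the one above can be captured with DTrace's flowindent option. The following is only a sketch of the kind of script involved; the execname filter and probe selection are assumptions, not the exact script used here:

```d
#!/usr/sbin/dtrace -s
#pragma D option flowindent

/* Start tracing when arc_read is entered by the process of interest. */
fbt::arc_read:entry
/execname == "dd"/
{
    self->trace = 1;
}

/* With flowindent, empty clauses on entry/return probes print the
   indented call flow for every function in the zfs module. */
fbt:zfs::entry
/self->trace/
{
}

fbt:zfs::return
/self->trace/
{
}

/* Stop tracing when arc_read returns. */
fbt::arc_read:return
/self->trace/
{
    self->trace = 0;
}
```

Run as root (e.g. dtrace -s arcread.d) while issuing the read from another shell.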
    What do you think the next step should be?

  • EJB CMP remove create cache issue? DuplicateKeyException

    Hi, I have an EJB 2.1 application using CMP. Most things work fine, but if, within a single transaction, I try to remove a bean and then create a new one with the same primary key, I get a DuplicateKeyException:
    2007-11-16 09:25:31,963 ERROR [RMICallHandler-6] AdminGroupData_ConcreteSubClass147 - Error adding AccessRules:
    javax.ejb.DuplicateKeyException: Exception [EJB - 10007]: Exception creating bean of type [AccessRulesData]. Bean already exists.
    at oracle.toplink.internal.ejb.cmp.EJBExceptionFactory.duplicateKeyException(EJBExceptionFactory.java:195)
    I suspect that the remove call only removes the bean from the cache (until commit), while the create call checks against the database, or something along those lines?
    My code is simple like the following:
    AdminPreferencesDataLocal apdata1 = adminpreferenceshome.findByPrimaryKey(certificatefingerprint);
    adminpreferenceshome.remove(certificatefingerprint);
    adminpreferenceshome.create(certificatefingerprint,newadminpreference);
    Is there some configuration I can set in toplink-ejb-jar.xml to fix this?
    I use OC4j 10.1.3.3
    Cheers,
    Tomas

    The bean.remove() call was executed, but the resulting SQL DELETE was not sent to the database immediately: OC4J manages all SQL statements and optimizes them to be committed to the database in a single batch when the transaction commits. The OC4J EJB container is therefore preventing you from creating the same entity cached in the transaction, because the SQL DELETE had not actually been committed to the database when create was called.
    My guess is that the reason this works on IBM WAS is that WAS issues the SQL for each remove/create call right away instead of batching it as OC4J does, or that on WAS your remove and create were executed in separate transactions.
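The deferred-delete behavior described above can be illustrated with a small plain-Java sketch. TxStore and its methods are illustrative stand-ins for the container's transactional cache, not OC4J or TopLink APIs:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Minimal model of a container that defers SQL until commit: remove() only
// records a pending delete, so a create() with the same key still sees the
// committed row and fails, mirroring the DuplicateKeyException above.
class TxStore {
    private final Map<String, String> committed = new HashMap<>();
    private final Map<String, String> pendingCreates = new HashMap<>();
    private final Set<String> pendingRemoves = new HashSet<>();

    void create(String key, String value) {
        // The DELETE has not been flushed yet, so the entity still "exists".
        if (committed.containsKey(key) || pendingCreates.containsKey(key)) {
            throw new IllegalStateException("DuplicateKeyException: " + key);
        }
        pendingCreates.put(key, value);
    }

    void remove(String key) {
        pendingCreates.remove(key);
        pendingRemoves.add(key); // deferred; no SQL issued yet
    }

    void commit() {
        for (String key : pendingRemoves) {
            committed.remove(key);
        }
        committed.putAll(pendingCreates);
        pendingRemoves.clear();
        pendingCreates.clear();
    }

    String get(String key) {
        return committed.get(key);
    }
}
```

A container that flushes each statement immediately (as the answer speculates WAS does) would delete the row before create() runs, and the remove-then-create sequence would succeed.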
