Blob cache issue

I have installed SharePoint 2013 Enterprise. We have three SharePoint servers: one app server and two web servers, with the web servers behind a load balancer. The database is SQL Server 2012 R2.
SharePoint is used purely for a heavy-traffic Internet website. We have enabled the BLOB cache on it, and since enabling it I have been getting this error in the ULS log:
An error occured in the blob cache.  The exception message was 'The system cannot find the file specified. (Exception from HRESULT: 0x80070002)'.
When I looked at the details, I found the entries below. I understand that somewhere in SharePoint a page, master page, or web part references a wrong path to a CSS or JS file, and that is why I am getting this error.
That would be easy to fix, but I have more than 100 pages on my website. Is there any way to find out which page has the bad reference, so I can concentrate directly on removing that error?
I hope I have made my point clear.
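One way to find the offending pages, offered here only as a rough sketch (the thread itself never answers this directly), is to harvest the missing URLs straight from the ULS logs; one of the similar threads below quotes entries such as "Unable to cache URL /FAVICON.ICO.  File was not found". The log path, message wording, and file encoding below are assumptions to adjust for your farm. Once you have the distinct URLs, you can search your pages, master pages, and web parts for those references.
Java Code:
import java.io.IOException;
import java.nio.file.*;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
import java.util.stream.Stream;

// Rough sketch: scan ULS .log files and print the distinct URLs the blob
// cache could not find. Assumes the default SharePoint 2013 log location
// and the "Unable to cache URL ..." message wording quoted in this thread.
public class BlobCacheMissScanner {
    private static final Pattern MISS =
            Pattern.compile("Unable to cache URL (\\S+)");

    public static void main(String[] args) throws IOException {
        Path ulsDir = Paths.get("C:\\Program Files\\Common Files"
                + "\\microsoft shared\\Web Server Extensions\\15\\LOGS");
        try (Stream<Path> logs = Files.list(ulsDir)) {
            logs.filter(p -> p.toString().endsWith(".log"))
                .flatMap(BlobCacheMissScanner::lines)
                .map(MISS::matcher)
                .filter(Matcher::find)
                .map(m -> m.group(1))   // the URL that failed to cache
                .distinct()
                .forEach(System.out::println);
        }
    }

    private static Stream<String> lines(Path p) {
        try {
            return Files.lines(p);      // adjust the charset if needed
        } catch (IOException e) {
            return Stream.empty();      // skip locked or unreadable files
        }
    }
}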

wp-includes is a WordPress directory. Is WordPress installed on this server?
https://wordpress.org/plugins/tinymce-advanced/
Trevor Seward
This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.

Similar Messages

  • An error occured in the blob cache

    An error occured in the blob cache.  The exception message was 'The system cannot find the file specified. (Exception from HRESULT: 0x80070002)'.
    I understand that this is a known issue on SharePoint 2010. Is there any fix for it?
    http://blogs.msdn.com/b/spses/archive/2013/10/23/sharepoint-2010-using-blob-caching-throws-many-errors-in-the-uls-and-event-logs-the-system-cannot-find-the-file-specified.aspx
    K.Mohamed Faizal, Solution Architect, Singapore, @kmdfaizal
    http://faizal-comeacross.blogspot.com/ |AzureUG.SG

    Install the December 2013 CU. http://support.microsoft.com/kb/2912738
    Trevor Seward
    This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.

  • Directory Caching issue with Cisco Jabber client for Windows

    Hi,
    I am facing a cache issue with the Cisco Jabber client for Windows. If I make any change involving modification or deletion of contacts in Active Directory / CallManager, it is not reflected in Jabber, because Jabber reads contacts from a locally stored cache file on the Windows system.
    Every time, I have to remove the cache file to get around this, which is not practical for all Windows users. For example, if an employee leaves the company, his contact still appears in the Cisco Jabber client. I have not seen this issue on Android or Apple iOS.
    Is there any automated way to remove the cache file?
    Here are the details of CUCM, Presence, and Jabber:
    CUCM version: 9.1.x
    Presence: 9.1.x
    Jabber: 10.5 and 10.6

    Hello,
    In our environment we had to install a dedicated Microsoft Certificate Authority, just for Cisco Jabber usage, to house the Network Device Enrollment Service.
    Our certificates for the CUPS were generated on this Certification Authority too.
    I discussed this certificate matter with my colleagues this afternoon, and nobody seems to remember how these certificates were deployed into the Enterprise Trust store for the users.
    But I think they asked all 400 users to accept the 3 certificates by answering "yes" to the popup, instead of using a script deployed by GPO...
    I wish you success with that deployment and really hope you have a technical partner that *knows* this subject. Our partner left us alone with that, unfortunately.
    Florent
    EDIT: If the "certutil script method" works, please let me know. It could be useful in our own deployment.

  • SQL query cache issue

    I am trying to see the SQL query for a report in Answers via the log file in Manage Sessions. If we run the same report multiple times, the SQL query shows up only the first time; the second time I run it, it does not show up. If I build a brand-new report with different columns, it gives me the SQL again. Where do I set the option to show the SQL query every time a report is run, even if it is the same report run multiple times? Is this a caching issue?

    It shouldn't... Have you unchecked "Cache" on the physical layer for this table? If you go to the Advanced tab, is the option "Bypass the Oracle BI cache" checked?

  • WebI cache issue

    We have BO XI 3.1 FP 1.7 and SAP BW 7.0 SP 18.
    We are facing a severe caching issue with WebI XI 3.1. What we observe is that it caches previously fetched data and does not recognize that data has been updated in the BW InfoProvider (in our case, a BEx query on a direct-update DSO).
    I have tried the following:
    1. I made caching inactive for that BEx query on the BW side.
    2. I checked the caching settings in BO; they have not helped me deactivate the cache in BO.
    On executing the BEx query in BW, I checked the RSRT cache monitor; the query result is not cached, and I get up-to-date data. The query works perfectly fine.
    I also observed events in the RSDDSTAT_OLAP table and could clearly see Event 9000, which shows that the BEx query hits the database and brings data back.
    When I execute the BO report on top of the same BEx query, I do not get up-to-date data; it shows me previously fetched data and does not recognize that the data has been updated in BW.
    Now the question is where this data is being cached, and how to deactivate that cache so that I get up-to-date data in WebI.
    Many thanks for your kind response.
    Regards,
    Ashutosh D

    Hi Ashutosh,
    This would be stored in the C:\Program Files\Business Objects\BusinessObjects Enterprise 12.0\Data\servername_6400\storage folder. Try renaming this folder and then testing again. If that gives you updated records, it proves that the Webi Processing Server is getting cached data from this directory.
    There are also a few settings you can try to disable in the Webi Processing Server settings within the CMC:
    Disable cache Sharing (Checked)
    Enable Real-Time Caching (Unchecked)
    Enable Document Cache (Unchecked)
    Cache Timeout (1 min) - sets the cache to expire after 1 min.
    Try playing with these settings and see if that resolves the issue. 
    Thanks
    Jb

  • SharedObject and Caching issues

    Background:
         I am working on a project where I am using SharedObject to store data for a favorites list. The project contains 2 SWFs, one in AS 3.0 (using Flex 4.5) and the other an AS 2.0 Flash 8 project. Basically, the AS 2.0 project reads and writes data to the shared object and the AS 3.0 project just reads the data.
    Setup:
         OS: Windows 7
         Flash Player: 10.3
         Browsers: IE 9, IE 8, Chrome 14, and some misc Firefox testing -- Chrome seems to be working the best.
    Problems:
         First off, it has been extremely hard to get consistent results across different computers, so if anyone has a suggestion as to why that would be (other than different browsers, Flash Player versions, player settings/global settings, and OS versions), I would appreciate it, because I have been keeping all of that consistent across devices for testing purposes.
         1. It appears that some computers/browsers won't store the shared object at all, even though everything needed to allow it is turned on.
         2. Some computers/browsers will store it, but won't update it until I refresh the page.
    General Questions:
         1. Is the storage of SharedObject data in a .sol file configured the same in AS 2.0 and AS 3.0, and if so, would this be a means of communicating between two SWFs running off the same domain?
         2. Is it possible for a browser to cache SharedObject data (the .sol file)?
         3. I have noticed that as new Flash Player releases come out (especially recently, i.e. 10.3), I have had to make a lot of repairs to the ActionScript 2 stuff. Is ActionScript 2 support slowly being phased out? Is anyone else experiencing the same problems?
    Possible Solutions/Findings:
         I am starting to believe that it has something to do with browser caching, since Chrome performs the best here and has historically handled browser caching issues the best.
    ** code is available if needed but since it was working before I don't think it is relevant.

    Hello,
    #1
    Are you making an explicit call to the flush method immediately after writing data to the SharedObject reference?
    http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/net/SharedObject.html#flush()
    If not, you could have different data read by different browser sessions running at the same time (e.g. IE and Firefox both hosting your test Flash movies), as data is flushed only when the given Flash runtime session is about to terminate (e.g. the movie is about to be unloaded), and in the other cases outlined in the documentation.
    #2
    The SharedObject is a reference to a binary file stored on the local machine by the Flash runtime (AIR runtime too). It is nothing like web-fetched content that could be cached:
    http://en.wikipedia.org/wiki/Local_Shared_Object
    regards,
    Peter

  • SecurityDomain and Caching Issues

    I am running into some caching issues when setting the securityDomain of an imported SWF to match the calling SWF file and I was curious if anyone had any ideas on how to get around this issue.
    Here is the scenario:
    A.swf is launched on Domain A, it sends a Loader request for B.swf on Domain B.
    B.swf will be updated frequently so caching is disabled, A.swf however can be cached.
    B.swf makes some ExternalInterface calls, so it must be in the same securityDomain as A.swf; otherwise it receives Security Error #2006.
    The code I am using for this is fairly straightforward:
    ActionScript Code:
    var request:URLRequest = new URLRequest("http://domainB/B.swf");
    var loader:Loader = new Loader();
    var context:LoaderContext = new LoaderContext();
    context.securityDomain = SecurityDomain.currentDomain;
    loader.load(request, context);
    I believe B.swf is inheriting the caching setting of A.swf because it resides in the same securityDomain. If I make a small update to B.swf and refresh A.swf in a browser, it will not load the B.swf updates until I clear the cache.
    If I get rid of the securityDomain context on the load it will always update B.swf with the most current version, but I run into security issues with ExternalInterface.
    ActionScript Code:
    var request:URLRequest = new URLRequest("http://domainB/B.swf");
    var loader:Loader = new Loader();
    loader.load(request);
    I have tried appending random strings to the end of the URLRequest while using the securityDomain context, but it always uses a cached version of B.swf. I have also tried loading B.swf into a ByteArray and then using loadBytes on a Loader, but that didn't work either.
    Any ideas are appreciated!


  • Caching issue in JSP or servlet

    I am not sure where I should post this issue. I have a J2EE web app (Struts) running on Apache Web Server + GlassFish + MySQL. My server has multiple cores.
    The problem is that whenever I insert new data into a table or delete data from it, I don't see the change right away. Sometimes I see the change, sometimes I don't; it's very inconsistent.
    If the old data were cached in the JSP, I shouldn't see the change in the log file, but I do see it even in the log file.
    For example, I have a page managing a user's folders. When I delete a certain folder from the JSP page, the folder is deleted from a DB table, but when I refresh the page, I still see the folder name that is not supposed to show up; yet when I go to a different page and come back, I don't see it. The behavior is very inconsistent.
    If it were a browser caching issue, I shouldn't see the stale data in the log file, but I still see the folder name (which is supposed to be deleted) there.
    I am including these lines in all included JSP pages:
    response.setHeader("Cache-Control","no-cache"); // HTTP 1.1 clients and proxies
    response.setHeader("Pragma","no-cache");        // legacy HTTP 1.0 clients
    response.setDateHeader ("Expires", -1);         // already expired for the browser
    Does anybody have any opinion about this? It's hard to debug and to describe the behavior, but it would be very helpful if someone who has had the same experience could explain it and tell me how to fix it.
    Thanks.

    caesarkim1 wrote:
    I am including these lines in all included jsp pages.
    response.setHeader("Cache-Control","no-cache");
    response.setHeader("Pragma","no-cache");
    response.setDateHeader ("Expires", -1);
    Instead of including these lines in all JSPs, make a filter and add these lines to it.
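    A minimal sketch of that suggestion (the class name is illustrative, not from the thread): a servlet filter that stamps the no-cache headers on every response, so the individual JSPs no longer need to set them.
    Java Code:
    import java.io.IOException;
    import javax.servlet.*;
    import javax.servlet.http.HttpServletResponse;

    public class NoCacheFilter implements Filter {
        public void init(FilterConfig config) { }

        public void doFilter(ServletRequest req, ServletResponse res,
                             FilterChain chain) throws IOException, ServletException {
            HttpServletResponse response = (HttpServletResponse) res;
            // The same three headers the JSPs were setting, applied centrally.
            response.setHeader("Cache-Control", "no-cache");
            response.setHeader("Pragma", "no-cache");
            response.setDateHeader("Expires", -1);
            chain.doFilter(req, res); // continue down the filter chain
        }

        public void destroy() { }
    }
    Map the filter to /* in web.xml (a <filter> plus <filter-mapping> entry) so every JSP and servlet response gets the headers.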

  • Acrobat Connect Pro LMS 7.5 server cache issue - displaying old content

    Our Adobe Acrobat Connect Pro server is showing old Captivate-created content from about 4-6 weeks ago.
    I loaded 35+ sets of Captivate 3 (SCORM 1.2, HTML, zipped) content onto our Acrobat Connect Pro LMS about 6 weeks ago.
    I converted all training content from Captivate 3.0.1 to 4.0.1, 4 weeks ago by opening each file in Captivate 4 and following prompts to "Save As" new files with different file names.
    I reloaded all content 4 weeks ago, and again 2 weeks ago.
    I reloaded about half of all content again last week.
    End User Acceptance testing performed this week showed that most of the courses are showing old content, ranging from 2-6 weeks old.
    Attempted fixes and workarounds:
    Deleting the content entirely and reloading from scratch - this will not work long term, as we lose usage data each time we load completely new files.
    Contacted Adobe, provided times to track incidents of the issue.  We reached Tier 2 - who told us it was our problem and that everything appeared to be working fine from their side.
    New workaround - load new content and reattach course to new content.  This presents the same long term issue as the first workaround, but enables us to retain older versions of content in the system, should we need to revert or report on it.
    Gaining server side access is a bit challenging due to the hosting situation we have, so I am looking (ideally) for a solution that can be performed from the Administrator/Author Frontend.  However, I want to learn the real cause of the problem, wherever it might reside, so that it can be properly corrected and avoided in the future.  I am calling this a server cache issue, as it seems the server has somewhere retained unwanted old versions of content, preventing current content from being displayed to end users.  Viewing content as an end user = see old content.  Viewing content from the Content area (Author view) shows the current files, so I know they are on the server and are loading correctly, up to a point.
    I am preparing all content for another round of loading/reloading due to other issues and updates, so republishing and reloading all 35+ files into the LMS is unavoidable at this point.
    This issue is keeping our LMS from launching to several thousand users across the country, so any suggestions or helpful tips are much appreciated.

    I think I have isolated the source of this problem: it's the PitStop Professional 9 plug-in. I un-installed it, and everything opens quicker than greased lightning. I re-installed it, and it's back to slowsville.
    Unfortunately, PitStop is essential to my workflow.
    Until recently I did my pre-press on a Mac G5 with Acrobat Pro 7 and PitStop 6.5. I never had this problem with slow file opening, but it seems that the delays would occur when I used the plug-in with large, complex files. So it would open files as fast as you'd expect from an elderly machine, but starting to use PitStop would result in a prolonged period of staring at a spinning beachball.
    I wonder, is there any way to stop the PitStop plug-in from initializing until it is used, so the plug-in stays inert until you select the tool from the menus?

  • IE Caching Issue while closing and opening a UI thrice or more

    Hi Everybody..
    Greetings of the day..
    I am working on a web application (cannot disclose the name) where, at a certain level, a generated Word document contains a customized toolbar.
    The toolbar contains a print option which, when clicked, opens a UI showing the default printer available (this is different from our normal printing).
    When I repeat opening and then closing this UI three or more times, the entire application hangs and I have to terminate it by killing the associated process from Task Manager.
    When our onsite team analysed this issue, they concluded that it is an IE caching issue.
    But to my knowledge, IE by default does not cache content for any application secured with SSL.
    What could be the solution, whether or not the application is SSL-secured?
    Kindly help asap.
    Thanks.

    Hi all
    Did anyone ever manage to fix this? I'm having this issue across multiple iPads. Even if I completely reset the iPad (erasing all content and settings), it does it again when restored.
    I've just got an iPad Air and that does it too. I even created a new Apple ID to see if that would help (trying to avoid any kind of caching across iCloud, etc.) and it does it straight away in those books. It's massively frustrating.
    I have noticed it only seems to do it for iBooks-formatted books, not ePubs.
    Does anyone have a resolution? It's quite embarrassing when you're doing a presentation about book features and the audience assumes you've opened the wrong book each time. More annoying is that when closing the book, the wrong cover appears again, and it also takes you to that book's collection page rather than the one you were in.

  • Possible session caching issue in SSRS2014

    Using custom Forms Authentication, User A can sign into our main ASP.NET MVC app (WIF cookie), then into SSRS (FormsAuth cookie), and all is well. Here is where things go bad: User A signs out of our main application (WIF cookie deleted), then signs back into our main application as User B, then back into SSRS. An SSRS report that displays User!UserID shows User A instead of the current User B. It's like there is either a session or a cookie caching issue going on, but I'm not sure.
    1. What is the proper way to sign out of SSRS and prevent session caching?
    2. Do I need to worry about making my SSRS logon page non-cacheable?  If so, what is the recommended way of doing this?
    thanks
    scott

    Hi scott_m,
    According to your description, you used custom Forms Authentication in Reporting Services; after user A signs out of the application and signs in as user B, the SSRS built-in UserID still shows user A instead of user B.
    Based on my research, once we configure SSRS to use custom (Forms) authentication by deploying a custom security extension, we can log on to Report Manager using the credentials of our custom security framework via a logon web page. But there is no way to log out or to expire the authentication cookie, so we need to close the browser manually. As a workaround, we can add a logout button to Report Manager, and use code to clear the cookie and redirect to the home page.
    In addition, if you extend Reporting Services to use Forms Authentication, it is better to use Secure Sockets Layer (SSL) for all communications with the report server, to prevent malicious users from gaining access to another user's cookie. SSL enables clients and a report server to authenticate each other and ensures that no other computers can read the contents of communications between the two. All data sent from a client through an SSL connection is encrypted, so malicious users cannot intercept passwords or data sent to a report server.
    Here is a relevant thread you can reference:
    https://social.msdn.microsoft.com/Forums/sqlserver/en-US/5e33949d-7757-45d1-9c43-6dc3911c3ced/how-do-you-logout-of-report-manager
    For more information about Forms Authentication, please refer to the following document:
    https://technet.microsoft.com/en-us/library/aa902691%28v=sql.80%29.aspx?f=255&MSPPError=-2147217396
    If you have any more questions, please feel free to ask.
    Thanks,
    Wendy Fu
    TechNet Community Support

  • Parallel Caching Issue

    We have found issues with the parallel loading of the OLAP cache using reporting agent jobs, where entries are not populated correctly. We rely on the OLAP cache to deliver very high levels of concurrency during peak times.
    Once the main batch data loading has completed, we run background reporting agent jobs to pre-populate the OLAP cache. Each job contains a web application template that holds one or more queries and processes 1500+ stores per run.
    We have different reporting agent jobs for the different web application templates, but we have discovered that if we run these jobs in parallel, we do not get the full benefit of the OLAP cache compared to running them sequentially.
    If we run the jobs in parallel, when we look in RSRCACHE for these queries they appear to have populated correctly, but when we check RSDDSTATS for query performance the following day, we can see that a large number of stores still hit the database and did not benefit from the cache entries; sometimes this can be as much as a 60% failure rate.
    If we run the same jobs sequentially and then check RSDDSTATS the following day, we see a 100% success rate, with every store hitting the OLAP cache.
    Is anyone able to advise how we can resolve this parallel caching issue?

    Hi Ganesh,
    I am currently having similar trouble with TBB1 where I receive error message below:
    TRL intialization for MM, FX, Derivatives, co. code WTRD, valn area 003 is not yet complete
    Message no. TPM_TRL052
    Are you familiar with this issue? Any help would be greatly appreciated! I hope that you can help.

  • Since applying Feb 2013 SharePoint 2010 CUs - Critical event log entries for Blob cache and missing images

    Hi,
    Since applying the February 2013 SharePoint 2010 updates, we are getting lots of entries in our event logs along the following lines:
    Content Management / Publishing Cache
    Event ID 5538, Critical
    An error occurred in the blob cache.  The exception message was 'The system cannot find the file specified. (Exception from HRESULT: 0x80070002)'.
    In pretty much all of these cases, the image or file reported as missing in the ULS logs is not actually in the collaboration site, master page, HTML, etc., so the fix has to go back to the site owner to make the correction and avoid the 404 (if they make it!). I believe this only started happening since the Feb 2013 SP2010 cumulative updates.
    I didn't see this mentioned as a change in the fix list of the February updates, i.e. that it flags a critical error in our event logs. With a lot of sites and a lot of missing images, your event log can fill up quickly.
    Obviously you can suppress these entries via Monitoring -> Web Content Management -> Publishing Cache = None & None, which is not ideal.
    So my question is: are others seeing this, and did Microsoft make a change so that a 404 on a missing image or file is flagged as a critical error in the event log when the BLOB cache is enabled?
    If I log this with MS they will just say "you need to fix up the missing files in the site", but it would be nice to know whether this behavior changed! I also deleted and recreated the BLOB cache, and it made no difference.
    thanks
    Brad

    I'm facing the same error on our SharePoint 2013 farm. We are on the Aug 2013 CU, and if the Dec CU (which is supposed to be the latest) doesn't solve it, then what else can be done?
    Some users started getting the message "Server is busy now try again later" with a correlation ID. I looked up the ULS logs with that correlation ID and found these two errors, in addition to hundreds of "Micro Trace Tags (none)" and "forced due to logging gap" entries:
    "GetFileFromUrl: FileNotFoundException when attempting get file Url /favicon.ico The system cannot find the file specified. (Exception from HRESULT: 0x80070002)"
    "Error in blob cache. System.IO.FileNotFoundException: The system cannot find the file specified. (Exception from HRESULT: 0x80070002)"
    "Unable to cache URL /FAVICON.ICO.  File was not found" 
    Looks like this is a bug, and MS hasn't fixed it in the Dec CU.
    "The opinions expressed here represent my own and not those of anybody else"

  • EJB CMP remove create cache issue? DuplicateKeyException

    Hi, I have an EJB 2.1 application using CMP. Most things work fine, but if, within a transaction, I try to remove a bean and then create a new one with the same primary key, I get a DuplicateKeyException:
    2007-11-16 09:25:31,963 ERROR [RMICallHandler-6] AdminGroupData_ConcreteSubClass147 - Error adding AccessRules:
    javax.ejb.DuplicateKeyException: Exception [EJB - 10007]: Exception creating bean of type [AccessRulesData]. Bean already exists.
    at oracle.toplink.internal.ejb.cmp.EJBExceptionFactory.duplicateKeyException(EJBExceptionFactory.java:195)
    I suspect that the remove call only removes the bean from the cache (until commit), but that the create call checks the database, or something like that?
    My code is simple like the following:
    AdminPreferencesDataLocal apdata1 = adminpreferenceshome.findByPrimaryKey(certificatefingerprint);
    adminpreferenceshome.remove(certificatefingerprint);
    adminpreferenceshome.create(certificatefingerprint,newadminpreference);
    Is there some configuration I can set in toplink-ejb-jar.xml to fix this?
    I use OC4j 10.1.3.3
    Cheers,
    Tomas

    The bean.remove() was executed, but the SQL DELETE was not immediately executed against the database; OC4J manages all SQL statements and optimizes them to be committed to the database in one batch when the transaction commits. The OC4J EJB container is smart, and it prevents you from creating the same entity cached in the transaction because the SQL DELETE has not really been committed to the database when create is called.
    My guess is that the reason it works with IBM WAS is that WAS issues SQL for each remove/create call right away instead of batching them as OC4J does, or that your WAS remove and create were called in separate transactions.
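    Given that explanation, one workaround, sketched here under the assumption that a value-object setter exists on the local interface (the setter name below is hypothetical, not from the thread), is to update the existing bean in place rather than issuing remove() and create() for the same primary key inside one transaction:
    Java Code:
    // Update in place instead of remove()+create(), so the container never
    // has to order a DELETE before an INSERT for the same row in one commit.
    AdminPreferencesDataLocal apdata =
        adminpreferenceshome.findByPrimaryKey(certificatefingerprint);
    apdata.setAdminPreference(newadminpreference); // hypothetical setter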

  • Is there a caching issue with DPS

    Facing a unique issue: when I create a folio, the build references redundant files that are not used in the folio, and then the build does not upload to Folio Producer. Could there be a caching issue, and how can I clear the DPS cache (not the InDesign cache)? After some time it shows the dreaded network error message.

    Hopefully you are using Windows:
    Close InDesign and follow the steps below:
    1. Go to the C: drive.
    2. Click the Organize button at the top left.
    3. Click Folder and search options.
    4. Click the View tab and select the option to show all hidden items.
    5. Apply and OK.
    6. Then go to c:\users\username\AppData\Roaming\StageManager.BD092818F67280F4B42B04877600987F0111B594.1\Local Store.
    7. Delete the dmp and shared object folders.
    8. Empty the Recycle Bin.
    Then re-launch InDesign
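    The same cleanup can be scripted; here is a rough sketch only (the StageManager folder name is machine-specific, so it is resolved by listing the Roaming folder rather than hard-coded, and the folder names to delete are taken from the steps above):
    Java Code:
    import java.io.IOException;
    import java.nio.file.*;
    import java.util.Comparator;
    import java.util.stream.Stream;

    public class ClearDpsCache {
        public static void main(String[] args) throws IOException {
            // %APPDATA% = c:\users\<username>\AppData\Roaming
            Path roaming = Paths.get(System.getenv("APPDATA"));
            try (Stream<Path> dirs = Files.list(roaming)) {
                dirs.filter(p -> p.getFileName().toString().startsWith("StageManager."))
                    .map(p -> p.resolve("Local Store"))
                    .filter(Files::isDirectory)
                    .forEach(ClearDpsCache::deleteCacheFolders);
            }
        }

        // Delete only the "dmp" and "shared object" folders named in the steps.
        private static void deleteCacheFolders(Path localStore) {
            for (String name : new String[] {"dmp", "shared object"}) {
                Path dir = localStore.resolve(name);
                if (!Files.isDirectory(dir)) continue;
                try (Stream<Path> walk = Files.walk(dir)) {
                    walk.sorted(Comparator.reverseOrder()) // children before parents
                        .forEach(p -> p.toFile().delete());
                } catch (IOException e) {
                    System.err.println("Could not clear " + dir + ": " + e);
                }
            }
        }
    }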
