Measuring "performance" improvement when enabling blob caching

Hi.
Is there a way to measure the improvement (or degradation) in the user experience when you enable BLOB caching in a SharePoint 2013 farm? Will it show up when measuring page download time?
Have a good day!
Simon

I would use Fiddler2, which will provide you with an overall time. One of the largest improvements from BLOB caching is that it tells the client to cache the data, so blobs aren't re-requested.
Trevor Seward
Follow or contact me at...
This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.
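
If you want to quantify it beyond Fiddler, here is a minimal sketch in Python that times a page download and inspects the Cache-Control header the BLOB cache adds for cached file types. The URL is hypothetical, and it assumes anonymous access; a real farm would typically need NTLM authentication (e.g. via requests_ntlm):

    import time
    import requests

    # Hypothetical page URL -- point it at a page that references
    # cacheable file types (images, CSS, JS).
    URL = "http://sharepoint.example.com/SitePages/Home.aspx"

    start = time.perf_counter()
    response = requests.get(URL)
    elapsed = time.perf_counter() - start

    print(f"{response.status_code}: {len(response.content)} bytes in {elapsed:.3f}s")
    # With BLOB caching enabled, static assets should carry a Cache-Control
    # max-age (taken from the max-age attribute of the BlobCache element in
    # web.config), so the browser won't re-request them on later page loads.
    print("Cache-Control:", response.headers.get("Cache-Control"))

Run it once with BLOB caching off and once with it on; the second page load in a browser is where the difference really shows, since the client stops re-requesting the blobs.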

Similar Messages

  • Blob cache issue

    I have installed SharePoint 2013 Enterprise. We have three SharePoint servers: one app server and two web servers. The web servers are load balanced, and the database is SQL Server 2012 R2.
    SharePoint is used purely for a heavy-traffic Internet website. We have enabled the BLOB cache on it, and since enabling it I have been getting this error in the ULS log:
    An error occured in the blob cache.  The exception message was 'The system cannot find the file specified. (Exception from HRESULT: 0x80070002)'.
    When I went into the details, I found the text below. I understand that somewhere in SharePoint a page, master page, or web part references a wrong CSS or JS path, and that is why I am getting the error.
    That would be easy enough to fix, but I have more than 100 pages on my website. Is there any way to find out which pages have this issue, so I can concentrate on fixing just those?
    I hope I have made my point clear.

    wp-includes is a WordPress directory. Is WordPress installed on this server?
    https://wordpress.org/plugins/tinymce-advanced/
    Trevor Seward
    Follow or contact me at...
    This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.
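
    If WordPress isn't the culprit, one low-effort way to narrow this down is to scan the ULS logs for the blob cache entries and collect the URLs they name, instead of checking 100+ pages by hand. A rough sketch in Python (the log path is the SharePoint 2013 default, and the "Unable to cache URL" message text is an assumption based on how this error is usually reported, so adjust both to what your logs actually contain):

        import re
        from pathlib import Path

        # Default ULS log location for SharePoint 2013; adjust for your farm.
        LOG_DIR = Path(r"C:\Program Files\Common Files\microsoft shared"
                       r"\Web Server Extensions\15\LOGS")

        # Blob cache entries typically name the file they failed to cache.
        pattern = re.compile(r"Unable to cache URL (\S+)", re.IGNORECASE)

        missing = set()
        for log_file in LOG_DIR.glob("*.log"):
            with open(log_file, errors="ignore") as fh:
                for line in fh:
                    match = pattern.search(line)
                    if match:
                        missing.add(match.group(1))

        for url in sorted(missing):
            print(url)

    Each URL that comes back is a broken reference you can then search for in your master pages, page layouts, and web parts.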

  • Performance improvement from OAS 4.0.7 to 9iAS

    Is there any performance improvement from OAS 4.0.7 to 9iAS? What parameters need to be changed to upgrade from 4.0.7 to 9iAS?
    Regards,

    Customers generally see performance improvements when upgrading to 9iAS. How the upgrade process works is explained in the documentation at http://otn.oracle.com/docs/products/ias/index.html

  • OEL5 iSCSI performance improvement

    Hi
    I've just installed the latest OEL5 updates on one of our servers which is used for disk-to-disk backup. It was previously running kernel version 2.6.18-92.1.6.0.2.el5 from June 2008 but it is now running 2.6.18-92.1.13.0.1.el5. It uses iSCSI to mount drives on a Dell MD3000i storage array.
    I've always had performance problems on this server - I could only get around 20 Mbytes/sec throughput to the storage array. However, the latest updates have improved this enormously - I now get over 90 Mbytes/sec and the backups go like the wind.
    I've looked through the changes but couldn't find anything obvious - does anyone know why I got this big performance win?
    Dave


  • Webi performance improvement

    Hi all, can anyone share your experience with improving performance when Webi documents are opened on the CRM portal? We are trying to display data for customers, and we have integrated three Webi documents on the CRM portal. The back end is HANA. For a few big customers it takes a lot of time, and the portal times out. Can any steps be taken on the Webi side to improve performance?

    Optimize the query in Webi.
    When you run the Webi document in the launch pad, how much time does it take?
    You can also limit the number of rows returned in the universe.

  • Metadata Caching-Performance Improvement

    Hi all,
    Could you please give me suggestions to improve the speed of metadata caching?
    Right now I'm retrieving the metadata from the database and writing it out as an XML file. Connection pooling has helped performance, but further improvement is required.
    Besides, is there some means to determine optimum values for MaxIdle, MinIdle and EvictionInterval?
    I'd be grateful for any help. Thanks

    For performance, first try to identify your bottlenecks.
    Saving the cache data as XML can itself become a bottleneck, because accessing the cache then involves I/O operations and XML parsing.
    Normally the best thing to do is keep the data you expect to be accessed frequently in memory. Also make your algorithm keep only a small amount of data in the cache by regularly cleaning it up and removing old entries; a small cache means better search performance within the cache.
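
    As a concrete illustration of the "small cache with regular eviction" idea, here is a minimal in-memory LRU cache sketch in Python (the class and parameter names are illustrative, not from any specific library):

        from collections import OrderedDict

        class MetadataCache:
            """Bounded in-memory cache that evicts the least recently used entry."""

            def __init__(self, max_entries=256):
                self.max_entries = max_entries
                self._data = OrderedDict()

            def get(self, key):
                if key not in self._data:
                    return None
                self._data.move_to_end(key)   # mark as recently used
                return self._data[key]

            def put(self, key, value):
                self._data[key] = value
                self._data.move_to_end(key)
                if len(self._data) > self.max_entries:
                    self._data.popitem(last=False)  # evict the oldest entry

    Keeping the structure in memory avoids the I/O and XML parsing on every lookup, and the max_entries bound plays the same role as your EvictionInterval-driven cleanup.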

  • What's supposed to happen to SharePoint when the IIS blob cache is full?

    Can someone possibly point me at a description of what's supposed to happen to SharePoint if the IIS blob cache directory fills up?
    I've seen postings from MS saying that you need to keep the directory 20% bigger than the volume of files you might end up storing in there, but that's clearly not the whole answer.
    I ask because we've had a problem today on SP2010 with files disappearing from people's sites as soon as they are uploaded, or being replaced with files of 0 bytes. It seems to be caused by the blob cache directory being full, and SharePoint not making any attempt to free up space in it.
    Rob Schifreen SharePoint 2010 Admin University of Brighton, UK
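
    In the meantime, here is a quick sketch in Python for keeping an eye on headroom (the path and size are assumptions -- take the real values from the location and maxSize attributes of the BlobCache element in web.config):

        from pathlib import Path

        # Assumed values; read them from the BlobCache element in web.config.
        CACHE_DIR = Path(r"C:\BlobCache\14")
        MAX_SIZE_GB = 10  # the maxSize attribute, in GB

        used = sum(f.stat().st_size for f in CACHE_DIR.rglob("*") if f.is_file())
        used_gb = used / 1024**3
        print(f"Blob cache: {used_gb:.2f} GB of {MAX_SIZE_GB} GB "
              f"({used_gb / MAX_SIZE_GB:.0%} full)")
        # Following the 20% headroom advice mentioned above:
        if used_gb > MAX_SIZE_GB * 0.8:
            print("Warning: less than 20% headroom left in the blob cache.")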


  • Performance improvement in OBIEE 11.1.1.5

    Hi all,
    In OBIEE 11.1.1.5, reports take a long time to load. Kindly provide me with some performance improvement guides.
    Thanks,
    Haree.

    Hi Haree,
    Steps to improve the performance.
    1. implement caching mechanism
    2. use aggregates
    3. use aggregate navigation
    4. limit the number of initialisation blocks
    5. turn off logging
    6. carry out calculations in database
    7. use materialized views if possible
    8. use database hints
    9. alter the NQSConfig.INI parameters
    Note: calculate all the aggregates in the repository itself, and create a fast refresh for materialized views (MVs).
    You can also schedule an iBot to run the report every hour or so, so that the report data is cached and the BI Server can serve the user's request from the cache.
    This is the latest version for OBIEE11g.
    http://blogs.oracle.com/pa/resource/Oracle_OBIEE_Tuning_Guide.pdf
    Report level:
    1. Enable caching -- in NQSConfig.INI, change the cache ENABLE parameter from NO to YES.
    2. Go to the Physical layer --> right-click the table --> Properties --> check Cacheable.
    3. Try to implement an aggregate mechanism.
    4. Create indexes/partitions at the database level.
    There are multiple other ways to fine tune reports from OBIEE side itself:
    1) You can check the granularity of the measures in your reports and have level-based measures created in the RPD using the OBIEE Aggregate Persistence utility.
    http://www.rittmanmead.com/2007/10/using-the-obiee-aggregate-persistence-wizard/
    This will pick your aggregate tables instead of the detailed tables.
    2) You can use cache seeding options, either with an iBot or with the nqcmd command-line utility:
    http://www.artofbi.com/index.php/2010/03/obiee-ibots-obi-caching-strategy-with-seeding-cache/
    http://satyaobieesolutions.blogspot.in/2012/07/different-to-manage-cache-in-obiee-one.html
    OR
    http://hiteshbiblog.blogspot.com/2010/08/obiee-schedule-purge-and-re-build-of.html
    Using one of the above two methods, you can fine-tune your reports and reduce the query time.
    Also, to be on the safe side, take the physical SQL from the log and run it directly on the database to see the time taken, and check the explain plan with the help of a DBA.
    Hope this helps.
    Thanks,
    Satya
    Edited by: Satya Ranki Reddy on Aug 12, 2012 7:39 PM
    Edited by: Satya Ranki Reddy on Aug 12, 2012 8:12 PM
    Edited by: Satya Ranki Reddy on Aug 12, 2012 8:20 PM
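
    To follow the last suggestion above, here is a minimal sketch in Python for timing the physical SQL directly against the database (it assumes an Oracle warehouse reachable via cx_Oracle; the connection details are hypothetical):

        import time
        import cx_Oracle  # assumes an Oracle back end; adapt for your database

        conn = cx_Oracle.connect("user", "password", "dbhost:1521/orcl")
        cursor = conn.cursor()

        # Replace with the physical SQL captured from the BI Server query
        # log (nqquery.log).
        physical_sql = "SELECT 1 FROM DUAL"

        start = time.perf_counter()
        cursor.execute(physical_sql)
        rows = cursor.fetchall()
        elapsed = time.perf_counter() - start
        print(f"{len(rows)} rows in {elapsed:.2f}s")

    If the raw SQL is already slow, the tuning has to happen in the database (aggregates, indexes, MVs); if it is fast there but slow in OBIEE, look at the BI Server side (cache, RPD design).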

  • Can't Enable write cache on disk with event ID 34

    One of our clients has a performance issue similar to this one:
    Performance issue on TS as VM - Resolution with screenshots
    www.chicagotech.net/remoteissues/tsperformance1.htm
    After monitoring it, we found the problem is the write cache ("Enable advanced performance" is unchecked on all the other VMs I checked).
    However, when we follow the resolution to enable the write cache, it doesn't take, and we get this event:
    Event Type: Warning
    Event Source: Disk
    Event Category: None
    Event ID: 34
    Description:
    The driver disabled the write cache on device \Device\Harddisk0\DR0.
    How to fix it?
    Bob Lin, MCSE & CNE. Networking, Internet, Routing, VPN Troubleshooting on http://www.ChicagoTech.net How to Install and Configure Windows, VMware, Virtualization and Cisco on http://www.HowToNetworking.com

    Hi,
    By default, write caching is disabled on a disk that contains the Active Directory database (Ntds.dit), and on a disk that contains the Active Directory log files. This enhances the reliability of the Active Directory files.
    Thus, if this server is a domain controller, try to move the Active Directory database and log files off the disk on which you need to enable write caching.
    If you have any feedback on our support, please send to [email protected]

  • Performance improve using TEZ/HIVE

    Hi,
    I'm a newbie to HDInsight, so sorry for asking simple questions. I have queries around improving the performance of my Hive query on file data of 90 GB (15 GB * 6).
    We have set the execution engine to Tez. I heard that the Avro format improves execution speed. Does the Avro SerDe work with Tez queries, or do I need to upload *.jar files to WASB? I'm using the latest version. Any sample query?
    With Tez, can the ORC column format and Avro compression work together when we set the ORC compression level in Hive to Snappy or LZO? Is there any limit on the number of columns for ORC tables?
    Is there a best compression technique for uploading data files to Blob storage, i.e. compress and upload? I used *.gz, which compressed the file to about 1/4th of its size, but the problem is that *.gz is not splittable, so it always uses fewer (single) mappers. Or should I use Avro with Snappy compression? Does the Microsoft Avro Library perform Snappy compression, or is there a compression format that is both splittable and compact?
    If the data structure of the file changes over time, will older data need to be reloaded? Can the existing query keep working without code changes?
    It has been said that Tez has real-time reporting capability, but when I query the 90 GB file (including GROUP BY and ORDER BY clauses) it takes almost 8 minutes on 20 nodes. Are there any pointers to improve performance further and get the query result in seconds?
    Mahender

    -- Tez is an execution engine; I don't think you need any additional jar files to get the Avro SerDe working in Hive when Tez is used. You can use AvroSerDe, AvroContainerInputFormat & AvroContainerOutputFormat to get Avro working when Tez is used.
    -- I tried creating a table with about 220 columns; although the table was empty, I was able to query it. How many columns does your table hold?
    CREATE EXTERNAL TABLE LargColumnTable02 (t1 string, .... t220 string)
    PARTITIONED BY (EventDate string)
    ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
    STORED AS ORC
    LOCATION '/data'
    TBLPROPERTIES ("orc.compress" = "SNAPPY");
    -- You can refer to the "Getting Avro data into Azure Blob Storage" section of
    http://dennyglee.com/2013/03/12/using-avro-with-hdinsight-on-azure-at-343-industries/
    -- It depends on what data has changed, and on whether you are using Hadoop, HBase, etc.
    -- You will have to monitor your application and check the node manager logs if there is a pause in execution again. It depends on what you are doing; I would suggest opening a support case to investigate further.

  • Advice: Potential Performance Improvements

    All
    My client has requested that their queries return in under 15 seconds, but the baseline run time was 1 min 20 seconds.
    In order to improve performance, we have made several improvements to our system. So far, we have:
    Built aggregates on the cubes
    Turned on OLAP Caching
    Enabled the Setting 'Query to Read when you Navigate or Expand' in the Query properties
    Ensured database statistics are refreshed every day
    Compressed Cubes with Zero Elimination
    I am looking for advice on any other steps that would further improve report performance. The above actions have reduced query execution time from 80 to 40 seconds, but we are still short of the 15 seconds required by our client.
    When the OLAP cache is filled, reports average about 8 seconds. However, one of our cubes is quite transactional, which means the OLAP cache is regularly invalidated. Does anyone know a way to separate the caching of the cubes so that only the transactional cube's cache is invalidated?
    Can you think of any other settings that we can change that will further reduce the initial query run time?
    Our objective is to make the initial execution of the query <15 seconds, so if there are settings that can make this quicker then please advise!
    Your help is, as always, much appreciated, and your answers will be rewarded with forum points as appropriate.
    Many thanks!
    Dave

    Check this link:
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/aec09790-0201-0010-8eb9-e82df5763455
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cbd2d390-0201-0010-8eab-a8a9269a23c2
    Hope this helps..

  • Since applying Feb 2013 Sharepoint 2010 CUs - Critical event log entries for Blob cache and missing images

    Hi,
    Since applying the February 2013 SharePoint 2010 updates, we are getting lots of entries in our event logs along the following:
    Content Management / Publishing Cache
    Event ID 5538, Critical
    An error occurred in the blob cache.  The exception message was 'The system cannot find the file specified. (Exception from HRESULT: 0x80070002)’
    In pretty much all of these cases, the image/file reported as missing in the ULS logs is not actually in the collaboration site, master page, HTML, etc., so the fix has to go back to the site owner to make the correction and avoid the 404 (if they make it!). This has only started happening, I believe, since the Feb 2013 SP2010 cumulative updates.
    I didn't see this mentioned as a change in the fix list of the February updates, i.e. that it flags up a critical error in our event logs. With a lot of sites and a lot of missing images, your event log can quickly fill up.
    Obviously you can suppress them in Monitoring -> Web Content Management -> Publishing Cache = None & None, which is not ideal.
    So my question is: are others seeing this, and did Microsoft make a change so that a 404 for a missing image/file is flagged as a critical error in the event log when the blob cache is enabled?
    If I log this with MS they will just say you need to fix up the missing files in the site, but it would be nice to have known this had changed beforehand! I also deleted and recreated the blob cache, and it made no difference.
    thanks
    Brad

    I'm facing the same error on our SharePoint 2013 farm. We are on the Aug 2013 CU, and if the Dec CU (which is supposed to be the latest) doesn't solve it, then what else can be done?
    Some users started getting the message "Server is busy now try again later" with a correlation ID. I looked up the ULS with that correlation ID and found these errors, in addition to hundreds of "Micro Trace Tags (none)" and "forced due to logging gap" entries:
    "GetFileFromUrl: FileNotFoundException when attempting get file Url /favicon.ico The system cannot find the file specified. (Exception from HRESULT: 0x80070002)"
    "Error in blob cache. System.IO.FileNotFoundException: The system cannot find the file specified. (Exception from HRESULT: 0x80070002)"
    "Unable to cache URL /FAVICON.ICO.  File was not found" 
    Looks like this is a bug and MS hasn't fixed it in the Dec CU.
    "The opinions expressed here represent my own and not those of anybody else"

  • Will there performance improvement over separate tables vs single table with multiple partitions?

    Will there be a performance improvement with separate tables vs. a single table with multiple partitions? Is it advisable to have separate tables rather than a single big table with partitions? Can we expect the same performance from a single big table with partitions? What is the recommended approach in HANA?

    Suren,
    first off a friendly reminder: SCN is a public forum and for you as an SAP employee there are multiple internal forums/communities/JAM groups available. You may want to consider this.
    Concerning your question:
    You didn't tell us what you want to do with your table or your set of tables.
    As tables are not only storage units but usually bear semantics - read: if data is stored in one table, it means something different than the same data in a different table - partitioned tables cannot simply be substituted by multiple tables.
    Looked at on a storage-technology level, table partitions are practically the same as tables. Each partition has got its own delta store and can be loaded and displaced to/from memory independently of the others.
    Generally speaking there shouldn't be too many performance differences between a partitioned table and multiple tables.
    However, when dealing with partitioned tables, the additional step of determining the partition to work on is always required. If computing the result of the partitioning function takes a major share in your total runtime (which is unlikely) then partitioned tables could have a negative performance impact.
    Having said this: as with all performance related questions, to get a conclusive answer you need to measure the times required for both alternatives.
    - Lars
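
    For the measurement Lars recommends, here is a minimal sketch in Python using the SAP HANA client (hdbcli); the connection details and table names are hypothetical:

        import time
        from hdbcli import dbapi  # SAP HANA Python client

        conn = dbapi.connect(address="hanahost", port=30015,
                             user="USER", password="secret")
        cursor = conn.cursor()

        def timed(sql):
            """Run a query and return its wall-clock time in seconds."""
            start = time.perf_counter()
            cursor.execute(sql)
            cursor.fetchall()
            return time.perf_counter() - start

        # Hypothetical alternatives: one partitioned table vs. a table per range.
        t_part = timed("SELECT COUNT(*) FROM SALES_PARTITIONED WHERE YEAR = 2014")
        t_sep = timed("SELECT COUNT(*) FROM SALES_2014")
        print(f"partitioned: {t_part:.3f}s, separate table: {t_sep:.3f}s")

    Run each query several times, and watch out for caching effects, before drawing conclusions.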

  • Enabling BIOS caching.

    Sorry for the noob question, but I was curious about enabling the BIOS cache feature. Is there a noticeable performance gain? Is it okay to do from a security standpoint?
    I have my BIOS backed up on a floppy as I learned the hard way a long time ago, but maybe that's what makes me nervous about enabling the cache feature. So, who uses this and have you had any problems with it?? I would like to try it, but want to check the knowledge pool here before I do something I would regret.

    I would kindly like to disagree,
    The only thing about that is, sure, if you can cache your system BIOS to memory it should be faster. But XP/2000 don't rely on the system BIOS like legacy OSes (Win 3.1, 9x, OS/2) do, and in fact rarely ever make a call to it. So while you may see a tiny (AND I MEAN TINY) shred of a performance gain, overall you are slowing your machine down because you're wasting memory on something that's hardly ever used.
    The situation is similar with "Video BIOS Cacheable". While this sounds like a great idea for improving total system performance, video memory is fast enough that it usually outperforms system memory. In addition, as far as I'm aware (please correct me if I'm wrong), the XP/2K GARTs handle video on their own, independent of the actual video BIOS, so the memory bandwidth would be better spent elsewhere than on caching the video BIOS.

  • Unable to create system performance counter SharePoint Disk-Based Cache - SharePoint 2010 SP2 Farm

    Hi,
    I'm getting this in my Trace Logs:
    Performance Counter OS (pdh) call failed with error code PDH_CSTATUS_BAD_COUNTERNAME
    PDH failure on counter <serverName>\Sharepoint Disk-Based Cache\Blob Cache hit ratio with Unknown error (0x0000bc0)
    Unable to create system performance counter <ServerName>\SharePoint Disk-Based Cache\\Blob Cache fill ratio
    Unable to create system performance counter <ServerName>\SharePoint Disk-Based Cache\\Total Blob Size
    There are a bunch of entries like this from the SharePoint Disk-Based Cache counter set, which it is trying to create but which cannot be used.
    I've logged on to the server as the farm admin account, which has full farm permissions, is a local admin, and runs the OWSTIMER.EXE service, but I was not able to add the counters myself. I've read a few posts here on TechNet, but no one answered how to enable or recreate these performance counters. I would like to resolve this if possible.

    I could not find the reason why those SharePoint performance counters were missing on the WFE servers, so I just disabled the "Diagnostic Data Provider: Performance Counters - Web Front Ends" job in Central Admin > Monitoring > Job Definitions.
