Blob cache - dedicated drive - allocation unit size

Hi
I'm setting up dedicated drives on the WFEs for the BLOB cache and just formatting the new simple volumes, and I was wondering whether Default is the best option for the allocation unit size.
Is there any best practice or guidance here, or should I just stick with the default?
Thanks
J

Default is ideal, since you'll mostly be dealing with small files.
That said, the real advantage of the BLOB cache comes from the max-age value, which lets clients skip asking the SharePoint server for the file altogether and use the local browser cache instead.
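For anyone wondering where max-age lives: the BLOB cache is configured per web application in web.config. A typical entry looks like the sketch below; the location, path pattern, and values are illustrative, not recommendations:

```xml
<!-- In the web application's web.config; max-age is in seconds,
     maxSize is in GB. -->
<BlobCache location="D:\BlobCache\14"
           path="\.(gif|jpg|jpeg|png|css|js)$"
           maxSize="10"
           max-age="86400"
           enabled="true" />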
Trevor Seward
Follow or contact me at...
This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.

Similar Messages

  • Hard Drive Setup - Allocation Unit Size

I'm adding two 1 TB HDDs.
When formatting, I'm wondering what the performance gain and size loss would be of using 8192 bytes instead of the default (4096) allocation unit size.
I've got three 1 TB HDDs and 4 TB of external storage now, but I use a lot of space editing 4-10 hours of multi-camera footage at a time, and I'm terrible at cleaning up after myself, hah, so I'm not sure if I have enough overall space to dedicate each drive to media, page file, etc. At least for now they will all be general-purpose.
Any thoughts on the effects of using 8192 over 4096?
Thanks

By your own admission, cleaning up is not one of your fortes. This probably means there are a lot of preview files left on your disk as well, and these tend to be rather small. Increasing the allocation unit will create more slack space and result in wasted space, eating up your disk space very quickly. I do not think you will gain anything; on the contrary, you will probably lose both space and performance. If you look at your preview files, you will see a lot of XMP and MPGINDEX files that are all 4 KB at most, and by changing the allocation unit to 8 K most of these files will have between 50 and 75% slack space (reserved but not used for data), so you will run out of disk space much more quickly. Personally, I would not change the default NTFS allocation unit size.
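The wasted-space argument above is easy to quantify: every file occupies a whole number of clusters, so the slack is whatever is left over in the last cluster. A quick sketch:

```python
import math

def on_disk_size(file_size: int, cluster: int) -> int:
    """Space actually allocated: whole clusters, rounded up."""
    return max(1, math.ceil(file_size / cluster)) * cluster

def slack_pct(file_size: int, cluster: int) -> float:
    """Percentage of the allocation that is slack (reserved but unused)."""
    alloc = on_disk_size(file_size, cluster)
    return 100.0 * (alloc - file_size) / alloc

# A 2 KB preview file: 50% slack on 4K clusters, 75% on 8K clusters.
print(slack_pct(2048, 4096))  # 50.0
print(slack_pct(2048, 8192))  # 75.0
# A full 4 KB file still wastes half of an 8K cluster.
print(slack_pct(4096, 8192))  # 50.0
```

This matches the 50-75% slack range claimed above for files of 4 KB or less on an 8 K allocation unit.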

  • "Best" Allocation Unit Size (AU_SIZE) for ASM diskgroups when using NetApp

    We're building a new non-RAC 11.2.0.3 system on x86-64 RHEL 5.7 with ASM diskgroups stored on a NetApp device (don't know the model # since we are not storage admins but can get it if that would be helpful). The system is not a data warehouse--more of a hybrid than pure OLTP or OLAP.
In the Oracle® Database Storage Administrator's Guide 11g Release 2 (11.2) E10500-02, Oracle recommends setting the allocation unit (AU) size for a disk group to 4 MB (vs. the default of 1 MB) to enhance performance. However, to take advantage of the au_size benefits, it also says the operating system (OS) I/O size should be set "to the largest possible size."
    http://docs.oracle.com/cd/E16338_01/server.112/e10500/asmdiskgrps.htm
    Since we're using NetApp as the underlying storage, what should we ask our storage and sysadmins (we don't manage the physical storage or the OS) to do:
    * What do they need to confirm and/or set regarding I/O on the Linux side
    * What do they need to confirm and/or set regarding I/O on the NetApp side?
    On some other 11.2.0.2 systems that use ASM diskgroups, I checked v$asm_diskgroup and see we're currently using a 1MB Allocation Unit Size. The diskgroups are on an HP EVA SAN. I don't recall, when creating the diskgroups via asmca, if we were even given an option to change the AU size. We're inclined to go with Oracle's recommendation of 4MB. But we're concerned there may be a mismatch on the OS side (either Redhat or the NetApp device's OS). Would rather "first do no harm" and stick with the default of 1MB before going with 4MB and not knowing the consequences. Also, when we create diskgroups we set Redundancy to External--because we'd like the NetApp device to handle this. Don't know if that matters regarding AU Size.
    Hope this makes sense. Please let me know if there is any other info I can provide.

Thanks Dan. I suspected as much, given the absence of info out there on this particular topic. I hear you on the comparison with deviating from the tried-and-true standard 8K Oracle block size. Probably not worth the hassle. I don't know of any particular justification in this system for bumping up the AU size, especially if this is an esoteric and little-used technique; the only justification is the official Oracle documentation suggesting the value change. Since it seems you can't change an ASM diskgroup's AU size once you create it, and since we won't have time to benchmark different AU sizes, I would prefer to err on the side of caution, i.e. first do no harm.
    Does anyone out there use something larger than a 1MB AU size? If so, why? And did you benchmark between the standard size and the size you chose? What performance results did you observe?

  • SQL Server NTFS allocation unit size for SSD disk

    Hi,
I have read that the recommended NTFS allocation unit size for SQL Server is generally 64 KB, since data pages are 8 KB and SQL Server usually reads pages in extents of 8 pages = 64 KB.
I wanted to check whether this is also true for SSDs, or whether it only applies to spinning disks.
Also, would it make more sense to use an 8 KB size when optimizing for writes rather than reads?
Please provide some additional info or a reference instead of just a yes or no :)
    Thanks!
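The 64 KB figure in the question is just SQL Server's page and extent sizes multiplied out; a trivial check:

```python
PAGE_SIZE_KB = 8        # SQL Server data page is 8 KB
PAGES_PER_EXTENT = 8    # an extent is 8 contiguous pages
EXTENT_SIZE_KB = PAGE_SIZE_KB * PAGES_PER_EXTENT

# Matching the NTFS cluster size to the extent size means one extent
# read maps onto whole clusters rather than straddling them.
print(EXTENT_SIZE_KB)  # 64
```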

OK, thanks for clarifying that.
I did a test using SQLIO comparing a 4 KB allocation unit with a 64 KB one, using 8 KB and 64 KB reads/writes.
In my scenario it seems it doesn't matter whether you use 4 KB or 64 KB.
Here are my results, expressed as how much higher the values were for 64 KB vs. 4 KB.
    Access type              IOps     MB/sec   Min latency (ms)   Avg latency (ms)   Max latency (ms)
    8 KB random write        -2,61%   -2,46%   0,00%              0,00%              60,00%
    64 KB random write       -2,52%   -2,49%   0,00%              0,00%              -2,94%
    8 KB random read         0,30%    0,67%    0,00%              0,00%              -57,14%
    64 KB random read        0,06%    0,23%    0,00%              0,00%              44,00%
    8 KB sequential write    -0,15%   -0,36%   0,00%              0,00%              15,38%
    64 KB sequential write   0,41%    0,57%    0,00%              0,00%              6,25%
    8 KB sequential read     0,17%    0,33%    0,00%              0,00%              0,00%
    64 KB sequential read    0,26%    0,23%    0,00%              0,00%              -15,79%
    For anyone interested, this test was done on an Intel S3700 200 GB on a PERC H310 controller, and each test ran for 6 minutes.

  • Windows Allocation Unit Size and db_block_size

    We are using Oracle 9.2 on Windows 2000 Advanced Server. What is the largest db_block_size we can use? I am going to reformat the drive to the largest cluster size (or allocation unit) Oracle can use.
    Thanks.

    First, you shouldn't use db_block_size in 9.2 but db_cache_size :-)
    With the 32-bit version, your buffer cache will be limited to about 2.5 GB (if Windows 2000 can manage that much). On 64-bit, I don't know, but probably enough for all of us :-)
    I think you should check the Oracle 9.2 for Windows 2000 release notes on technet.oracle.com
    Fred

  • What is the best allocation block size for HDDs in Hyper-v?

    Hello experts,
    A server will be running Hyper-V soon. All virtual machines and their virtual HDDs will be stored on the D: drive of the physical machine.
    What is the best allocation unit size for formatting the physical D: drive? Should it be the biggest available option (64K) for better performance?

    Hi,
    You can check the following post.
    Recommendations for base disk cluster size for hosting Hyper-V Virtual Machines?
    http://social.technet.microsoft.com/Forums/en/winserverhyperv/thread/45919c42-bc39-47f4-9214-3f1cf00f2ea9
    Best Regards,
    Vincent Hu

  • Allocation units based on file size

    Hi Experts,
    I have 11gR2 RAC environment which i installed recently with ASM.
    1. When I create a datafile of size 12 GB, it takes 2 MB extra. What information is this 2 MB used for? As far as I understand, 1 MB is for maintaining block/extent mappings (inode information); what about the remaining 1 MB?
    2. Does the number of allocation units vary if I create, say, a 32 GB file, e.g. will Oracle ASM take an additional 4 MB? Is there a specific formula?
    Your help will be much appreciated
    Thanks,
    Shine

    Resizing depends very much on image compression algorithms (or whatever they are called)… My guess is that this is beyond the 'average' scripter. If you want to deal with the open file's data size, that can be done; if you want to deal with saved files' sizes on the file system, you may well be out of luck…

  • What Disk Allocation Units should I use, RAID0 or JBOD

    Hi, what disk allocation units should I use for my drives? The default in NTFS is 4K; is that OK, or should I make them bigger? If I use different drives to store different things, should the different drives have different allocation units?
    I have a USB3 enclosure with 2 TB drives in it. Would it be better to use JBOD (i.e. pass-through), so I effectively have 2 separate drives I can put different stuff on, or RAID0, which gives better performance in a single volume?

    Thanks, I was not sure whether it should be in the hardware forum, as it was concerning configuration of storage, not choosing hardware. Sorry about that.

  • Error occurred while attempting to drop allocation unit ID

    Anyone have any experience with this? I'm on 64-bit SQL Server 2005 Standard SP3.
    These errors are showing up in my SQL Server logs and roughly correspond with periods of poor db performance.
    03/05/2011 16:02:18,spid431,Unknown,Error [36, 17, 145] occurred while attempting to drop allocation unit ID 438356380090368 belonging to worktable with partition ID 438356380090368.
    03/07/2011 14:42:18,spid333,Unknown,Error [36, 17, 145] occurred while attempting to drop allocation unit ID 438859152752640 belonging to worktable with partition ID 438859152752640.

    We had this exact problem on early SQL Server 2005 SP2 builds when there was high tempdb activity plus a mistimed job that updated statistics on a relatively large table in the middle of the day.
    After talking to Bob Ward at MSFT, it was identified that the resource monitor was trying to free a cached plan that had a cursor, where it was trying to deallocate a worktable but got stuck on a latch for an IAM page in tempdb.
    The solution that worked for us was to change the schedule of the update-stats job and reduce the load on tempdb.
    I would suggest you look at the virtual file stats DMV (sys.dm_io_virtual_file_stats) and see how much IO you are doing against tempdb compared to other databases. At the time of this issue, our tempdb IO activity was 90% of the total activity on the box. After a few optimizations, we got it down to 50-60%.
    I am not sure if this really helps you, but I wanted to share some notes.
    http://SankarReddy.com/
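The "share of total IO" check described above is simple arithmetic over per-database read/write byte counts. The numbers below are made up for illustration; on a real server you would aggregate them per database from the virtual file stats DMV:

```python
# Hypothetical per-database IO byte counts (reads + writes combined).
io_bytes = {
    "tempdb":  900_000_000,
    "salesdb":  70_000_000,
    "msdb":     30_000_000,
}

total = sum(io_bytes.values())
tempdb_share = 100.0 * io_bytes["tempdb"] / total
print(f"tempdb share of IO: {tempdb_share:.0f}%")  # 90%
```

A tempdb share this dominant is the kind of signal that prompted the tempdb optimizations mentioned above.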

  • An error occurred in the blob cache

    An error occurred in the blob cache.  The exception message was 'The system cannot find the file specified. (Exception from HRESULT: 0x80070002)'.
    I understand this is a known issue on SharePoint 2010. Is there any fix for it?
    http://blogs.msdn.com/b/spses/archive/2013/10/23/sharepoint-2010-using-blob-caching-throws-many-errors-in-the-uls-and-event-logs-the-system-cannot-find-the-file-specified.aspx
    K.Mohamed Faizal, Solution Architect, Singapore, @kmdfaizal
    http://faizal-comeacross.blogspot.com/ |AzureUG.SG

    Install the December 2013 CU. http://support.microsoft.com/kb/2912738
    Trevor Seward

  • How can I know the allocated Heap size at runtime?

    I need to know (at runtime) the allocated Heap memory of my application.
    How can I do that ??????

    The Runtime methods freeMemory() and totalMemory() will give you the currently allocated heap size. The JVM can allocate more memory from the system as it needs it and these numbers will change.
    It doesn't look like you can get the actual maximum heap size the system can allocate. See
    http://developer.java.sun.com/developer/bugParade/bugs/4175139.html

  • Blob cache issue

    I have installed SharePoint 2013 Enterprise. We have 3 SharePoint servers, of which 1 is an app server and the other 2 are web servers; the web servers are load balanced. The database is SQL Server 2012 R2.
    SharePoint is used purely for a heavy-traffic Internet website. We have enabled the BLOB cache on it, and since enabling it I am getting this error in the ULS log:
    An error occurred in the blob cache.  The exception message was 'The system cannot find the file specified. (Exception from HRESULT: 0x80070002)'.
    Looking at the details, I understand that somewhere in my SharePoint site (a page, the master page, or a web part) a wrong path to a CSS or JS file is given, which causes this error.
    That would be nice to resolve, but I have more than 100 pages on my website. Is there any way to find out which page has the issue, so I can concentrate directly on removing that error?
    I hope I have made my point clear.

    wp-includes is a WordPress directory. Is WordPress installed on this server?
    https://wordpress.org/plugins/tinymce-advanced/
    Trevor Seward

  • Since applying Feb 2013 Sharepoint 2010 CUs - Critical event log entries for Blob cache and missing images

    Hi,
    Since applying the February 2013 SharePoint 2010 updates, we are getting lots of entries in our event logs along the following lines:
    Content Management     Publishing Cache
    5538     Critical
    An error occurred in the blob cache.  The exception message was 'The system cannot find the file specified. (Exception from HRESULT: 0x80070002)'
    In pretty much all of these cases, the image/file that the ULS logs report as missing is not actually in the collaboration site, master page, HTML, etc., so the fix has to go back to the site owner to make the correction and avoid the 404 (if they make it!). I believe this only started happening since the February 2013 SP2010 cumulative updates.
    I didn't see this mentioned as a change in the fix list of the February updates, i.e. that it flags a critical error in our event logs. With a lot of sites and a lot of missing images, your event log can fill up quickly.
    Obviously you can suppress them (Monitoring -> Web Content Management -> Publishing Cache = None & None), which is not ideal.
    So my question is: are others seeing this, and did Microsoft make a change to flag a 404 missing image/file as a critical error in the event log when the BLOB cache is enabled?
    If I log this with MS they will just say "you need to fix up the missing files in the site", but it would be nice to know this had changed beforehand! I also deleted and recreated the BLOB cache, and this made no difference.
    thanks
    Brad

    I'm facing the same error on our SharePoint 2013 farm. We are on the Aug 2013 CU, and if the Dec CU (which is supposed to be the latest) doesn't solve it, then what else can be done?
    Some users started getting the message "Server is busy now try again later" with a correlation id. I looked up the ULS with that correlation id and found these errors, in addition to hundreds of "Micro Trace Tags (none)" and "forced due to logging gap":
    "GetFileFromUrl: FileNotFoundException when attempting get file Url /favicon.ico The system cannot find the file specified. (Exception from HRESULT: 0x80070002)"
    "Error in blob cache. System.IO.FileNotFoundException: The system cannot find the file specified. (Exception from HRESULT: 0x80070002)"
    "Unable to cache URL /FAVICON.ICO.  File was not found"
    It looks like this is a bug and MS hasn't fixed it in the Dec CU.
    "The opinions expressed here represent my own and not those of anybody else"

  • Error Code - client cache is smaller than the size of the requested content

    Even though we have increased the size of the ccmcache via Control Panel > Configuration Manager, we still get Error Code 0x87D01202 (-2016407038): "the content download cannot be performed because the total size of the client cache is smaller than the size of the requested content". The CCMEXEC service and the computer have both been restarted after increasing the ccmcache size. Which local log file under C:\Windows\CCM\Logs should we check for more information?
    Thanks

    So when you are redeploying the client, go into your settings and set the variable below:
    smscachesize=10240
    note:
    SMSCACHESIZE
    Specifies the size of the client cache folder in megabytes (MB), or as a percentage when used with the PERCENTDISKSPACE or PERCENTFREEDISKSPACE property. If this property is not set, the folder defaults to a maximum size of 5120 MB. The lowest value that you can specify is 1 MB.
    Note
    If a new package that must be downloaded would cause the folder to exceed the maximum size, and if the folder cannot be purged to make sufficient space available, the package download fails, and the program or application will not run.
    This setting is ignored when you upgrade an existing client and when the client downloads software updates.
    Example: CCMSetup.exe SMSCACHESIZE=100
    Note
    If you reinstall a client, you cannot use the SMSCACHESIZE or SMSCACHEFLAGS installation properties to set the cache size smaller than it was previously. If you try to do this, your value is ignored and the cache size is automatically set to the last size it had.
    For example, if you install the client with the default cache size of 5120 MB, and then reinstall the client with a cache size of 100 MB, the cache folder size on the reinstalled client is set to 5120 MB.
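To make the MB-vs-percentage behaviour above concrete, here is a small sketch (effective_cache_mb is a hypothetical helper for illustration, not part of CCMSetup):

```python
from typing import Optional

def effective_cache_mb(smscachesize: Optional[int],
                       disk_mb: int = 0,
                       percent_mode: bool = False) -> int:
    """Resolve SMSCACHESIZE: megabytes by default, a percentage of the
    disk when combined with PERCENTDISKSPACE, and 5120 MB if unset."""
    if smscachesize is None:
        return 5120                           # documented default
    if percent_mode:
        return disk_mb * smscachesize // 100  # percentage of disk space
    return smscachesize

print(effective_cache_mb(10240))                                   # 10240
print(effective_cache_mb(10, disk_mb=500_000, percent_mode=True))  # 50000
print(effective_cache_mb(None))                                    # 5120
```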
    Twitter: @dguilloryjr LinkedIn: http://www.linkedin.com/in/dannyjr Facebook: http://www.facebook.com/#!/dguilloryjr

  • Logic to capture deletion of a CS08 allocation unit

    Hi gurus,
    I have a problem with the CS08 transaction. I need to delete the allocation unit dynamically, and for that I need to capture the AUSKZ field. Can anyone suggest how to capture this field?

    Hi Kim,
    Please follow the logic below:
    DATA: it1_hrp1000 TYPE TABLE OF hrp1000,
          wa_hrp1000  TYPE hrp1000,
          wa_hrp1001  TYPE hrp1001.
    * Select all organizational units.
    SELECT * FROM hrp1000 INTO TABLE it1_hrp1000 WHERE otype = 'O'.
    LOOP AT it1_hrp1000 INTO wa_hrp1000.
    * Check whether this org unit is a sub-unit of another one (B/002).
      SELECT SINGLE * FROM hrp1001 INTO wa_hrp1001
        WHERE rsign = 'B' AND relat = '002' AND objid = wa_hrp1000-objid.
      IF sy-subrc = 0.
    *   Check whether that org unit has a chief (relationship 012).
        SELECT SINGLE * FROM hrp1001 INTO wa_hrp1001
          WHERE relat = '012' AND objid = wa_hrp1000-objid.
        IF sy-subrc <> 0.
    *     This OBJID is the one to capture.
        ENDIF.
      ENDIF.
    ENDLOOP.
    Regards,
    Dilek
