"Best" Allocation Unit Size (AU_SIZE) for ASM diskgroups when using NetApp

We're building a new non-RAC 11.2.0.3 system on x86-64 RHEL 5.7 with ASM diskgroups stored on a NetApp device (don't know the model # since we are not storage admins but can get it if that would be helpful). The system is not a data warehouse--more of a hybrid than pure OLTP or OLAP.
In the Oracle® Database Storage Administrator's Guide 11g Release 2 (11.2) E10500-02, Oracle recommends setting a disk group's allocation unit (AU) size to 4 MB (vs. the default of 1 MB) to enhance performance. However, to take advantage of the AU size benefits, it also says the operating system (OS) I/O size should be set "to the largest possible size."
http://docs.oracle.com/cd/E16338_01/server.112/e10500/asmdiskgrps.htm
Since we're using NetApp as the underlying storage, what should we ask our storage and sysadmins (we don't manage the physical storage or the OS) to do:
* What do they need to confirm and/or set regarding I/O on the Linux side
* What do they need to confirm and/or set regarding I/O on the NetApp side?
On some other 11.2.0.2 systems that use ASM diskgroups, I checked v$asm_diskgroup and see we're currently using a 1MB allocation unit size. The diskgroups are on an HP EVA SAN. I don't recall, when creating the diskgroups via asmca, if we were even given an option to change the AU size. We're inclined to go with Oracle's recommendation of 4MB. But we're concerned there may be a mismatch on the OS side (either Red Hat or the NetApp device's OS). We'd rather "first do no harm" and stick with the default of 1MB than go with 4MB without knowing the consequences. Also, when we create diskgroups we set Redundancy to External--because we'd like the NetApp device to handle this. Don't know if that matters regarding AU size.
Hope this makes sense. Please let me know if there is any other info I can provide.

Thanks Dan. I suspected as much due to the absence of info out there on this particular topic. I hear you on the comparison with deviating from the tried-and-true standard 8K Oracle block size. Probably not worth the hassle. I don't know of any particular justification with this system to bump up the AU size--especially if this is an esoteric and little-used technique. The only justification is official Oracle documentation suggesting the value change. Since it seems you can't change an ASM diskgroup's AU size once you create it, and since we won't have time to benchmark using different AU sizes, I would prefer to err on the side of caution--i.e., first do no harm.
Does anyone out there use something larger than a 1MB AU size? If so, why? And did you benchmark between the standard size and the size you chose? What performance results did you observe?
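For anyone trying this, the AU size can only be set when the diskgroup is created, through the 'au_size' attribute; a minimal sketch (the diskgroup name and disk path are placeholders, not from this system):

```sql
-- AU size is fixed at creation time; valid values are powers of two from 1M to 64M.
-- External redundancy, as in the original setup; the LUN path is a placeholder.
CREATE DISKGROUP data EXTERNAL REDUNDANCY
  DISK '/dev/mapper/netapp_lun01'
  ATTRIBUTE 'au_size' = '4M',
            'compatible.asm' = '11.2';

-- Verify the AU size (in bytes) of existing diskgroups:
SELECT name, allocation_unit_size FROM v$asm_diskgroup;
```

Since au_size cannot be altered afterward, the only way to change it later is to create a new diskgroup and move the data.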

Similar Messages

  • What is the best allocation block size for HDDs in Hyper-v?

    Hello experts,
    A server will be running Hyper-V soon. All virtual machines and their virtual HDDs will be stored on the D drive of the physical machine.
    What is the best allocation unit size for formatting the physical D drive? Shall it be the biggest available option (64k) for getting better performance?

    Hi,
    You can check the following post.
    Recommendations for base disk cluster size for hosting Hyper-V Virtual Machines?
    http://social.technet.microsoft.com/Forums/en/winserverhyperv/thread/45919c42-bc39-47f4-9214-3f1cf00f2ea9
    Best Regards,
    Vincent Hu

  • SQL Server NTFS allocation unit size for SSD disk

    Hi,
    I have read that the recommended NTFS allocation unit size for SQL Server generally is 64 kb since data pages are 8 kb and SQL Server usually reads pages in extents which are 8 pages = 64 kb.
    Wanted to check if this is true also for SSD disks or if it only applies to spinning disks?
    Also would it make more sense to use an 8 kb size if wanting to optimize the writes rather than reads?
    Please provide some additional info or reference instead of just a yes or no :)
    Thanks!

    Ok thanks for clarifying that.
    I did a test using SQLIO comparing 4kb and 64kb allocation unit sizes, using 8kb and 64kb writes/reads.
    In my scenario it seems it doesn't matter whether I use 4kb or 64kb.
    Here are my results expressed as how much higher the values were for 64kb vs 4kb.
    Access type             IOps      MB/sec    Min latency (ms)   Avg latency (ms)   Max latency (ms)
    8kb random write       -2.61%    -2.46%     0.00%              0.00%              60.00%
    64kb random write      -2.52%    -2.49%     0.00%              0.00%              -2.94%
    8kb random read         0.30%     0.67%     0.00%              0.00%             -57.14%
    64kb random read        0.06%     0.23%     0.00%              0.00%              44.00%
    8kb sequential write   -0.15%    -0.36%     0.00%              0.00%              15.38%
    64kb sequential write   0.41%     0.57%     0.00%              0.00%               6.25%
    8kb sequential read     0.17%     0.33%     0.00%              0.00%               0.00%
    64kb sequential read    0.26%     0.23%     0.00%              0.00%             -15.79%
    For anyone interested this test was done on Intel S3700 200gb on PERC H310 controller and each test was run for 6 minutes.

  • Recommended Number LUNs for ASM Diskgroup

    We are installing Oracle Clusterware 11g, Oracle ASM 11g and Oracle Database 11g R1 (11.1.0.6) Enterprise Edition with the RAC option. We have an EMC Clariion CX-3 SAN for shared storage (all Oracle software will reside locally). We are trying to determine the recommended or best-practice number of LUNs and LUN size for ASM diskgroups. I have found only the following specific to ASM 11g:
    ASM Deployment Best Practice
    Use diskgroups with four or more disks, and make sure these disks span several backend disk adapters.
    1) Recommended number of LUNs?
    2) Recommended size of LUNs?
    3) In the ASM Deployment Best Practice above, "four or more disks" for a diskgroup, is this referring to LUNs (4 LUNs) or one LUN with 4 physical spindles?
    4) Should the number of physical spindles in LUN be even numbered? Does it matter?

    user10437903 wrote:
    Use diskgroups with four or more disks, and make sure these disks span several backend disk adapters.
    This means that the LUNs (disks) should be created over multiple SCSI adapters in the storage box. EMCs have multiple SCSI channels to which disks are attached. Best practice says that the disks/LUNs you assign to a diskgroup should be spread over as many channels in the storage box as possible. This increases the bandwidth and therefore performance.
    1) Recommended number of LUNs?
    Like the best practice says, if possible, at least 4.
    2) Recommended size of LUNs?
    That depends on your situation. If you are planning a database of 100GB, then a LUN size of 50GB is a bit overkill.
    3) In the ASM Deployment Best Practice above, "four or more disks" for a diskgroup, is this referring to LUNs (4 LUNs) or one LUN with 4 physical spindles?
    LUNs; spindles only if you have direct access to physical spindles.
    4) Should the number of physical spindles in a LUN be even numbered? Does it matter?
    If you are using RAID5, I'd advise keeping a 4+1 spindle allocation, but it might not be possible to realize that. It all depends on the storage solution and how far you can go in configuring it.
    Arnoud Roth
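    If helpful, a quick sketch for checking how many disks (LUNs) each mounted diskgroup currently has, using the standard v$ views:

    ```sql
    -- Disks per diskgroup, with total size, as reported by the ASM instance.
    SELECT g.name,
           COUNT(*)        AS disk_count,
           SUM(d.total_mb) AS total_mb
    FROM   v$asm_disk d
           JOIN v$asm_diskgroup g ON g.group_number = d.group_number
    GROUP  BY g.name;
    ```

    Run against the ASM instance as SYSASM/SYSDBA; a disk_count below four would flag a diskgroup that falls short of the best practice quoted above.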

  • Blob cache - dedicated drive - allocation unit size

    Hi
    I'm setting up dedicated drives on the WFEs for Blobcache and just formatting the new simple volumes and I was wondering if Default is the best option for the Allocation unit size
    Is there any best practice or guidance here or should I just stick with default?
    Thanks
    J

    Default is ideal since you'll be mostly dealing with small files.
    That said, the advantage for BC comes from the max-age value, which allows clients to not even ask the SharePoint server for the file and instead use local browser cache.
    Trevor Seward
    Follow or contact me at...
    This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.

  • Hard Drive Setup - Allocation Unit Size

    I'm adding on 2 one tb HDD's.
    when formatting i'm wondering what the performance gain and size loss would be of using 8192 bytes instead of the default(4096) allocation unit size.
    i've got 3 one tb HDD's and 4 tb of external storage now but i use a lot of space editing 4-10 hours of multi camera footage at a time and im terrible at cleaning up after myself, hah, so I'm not sure if i have enough overall space to dedicate each one for media, page file, etc... at least for now they will all be general purpose use
    any thoughts on the effects of using 8192 over 4096?
    thanks

    By your own admission, cleaning up is not one of your fortes. This probably means that there are a lot of preview files left on your disk as well, and these tend to be rather small. Increasing the allocation unit size will create more slack space and result in wasted space, sucking up your disk space very quickly. I do not think you will gain anything; on the contrary, you will probably lose space and performance. If you look at your preview files, you will see a lot of XMP and MPGINDEX files that are all 4 KB max, and by changing the allocation unit to 8 K most of these files will have between 50 and 75% slack space (reserved but not used for data), so you will run out of disk space much quicker. Personally I would not change the default NTFS allocation.

  • [svn:fx-trunk] 10545: Make DataGrid smarter about when and how to calculate the modulefactory for its renderers when using embedded fonts

    Revision: 10545
    Author:   [email protected]
    Date:     2009-09-23 13:33:21 -0700 (Wed, 23 Sep 2009)
    Log Message:
    Make DataGrid smarter about when and how to calculate the modulefactory for its renderers when using embedded fonts
    QE Notes: 2 Mustella tests fail:
    components/DataGrid/DataGrid_HaloSkin/Properties/datagrid_properties_columns_halo datagrid_properties_columns_increase0to1_halo
    components/DataGrid/DataGrid_SparkSkin/Properties/datagrid_properties_columns datagrid_properties_columns_increase0to1
    These fixes get us to measure the embedded fonts correctly when going from 0 columns to a set of columns so rowHeight will be different (and better) in those scenarios
    Doc Notes: None
    Bugs: SDK-15241
    Reviewer: Darrell
    API Change: No
    Is noteworthy for integration: No
    tests: checkintests mustella/browser/DataGrid
    Ticket Links:
        http://bugs.adobe.com/jira/browse/SDK-15241
    Modified Paths:
        flex/sdk/trunk/frameworks/projects/framework/src/mx/controls/DataGrid.as
        flex/sdk/trunk/frameworks/projects/framework/src/mx/controls/dataGridClasses/DataGridBase.as
        flex/sdk/trunk/frameworks/projects/framework/src/mx/controls/dataGridClasses/DataGridColumn.as

    Hi Matthias,
    Sorry, if this reply seems like a products plug (which it is), but this is really how we solve this software engineering challenge at JKI...
    At JKI, we create VI Packages (which are basically installers for LabVIEW instrument drivers and toolkits) of our reusable code (using the package building capabilities of VIPM Professional).  We keep a VI Package Configuration file (that includes a copy of the actual packages) in each of our project folders (and check it into source code control just as we do for all our project files).  We also use VIPM Enterprise to distribute new VI Packages over the network.
    Also, as others have mentioned, we use the JKI TortoiseSVN Tool to make it easy to use TortoiseSVN directly from LabVIEW.
    Please feel free to contact JKI if you have any specific questions about these products.
    Thanks,
    -Jim 

  • Choose accent color for hovered icons when using dark interface in Bridge?

    How about add the ability to choose the accent color for hovered icons when using a dark interface in Bridge?  Currently, the icon color remains orange and unchangeable.  Personally, I would prefer a blue accent color to match my system.  Any chance of making this possible in CS6?

    Oh, I should add that I'm looking for solutions that are somewhat up-to-date and/or related to or designed for 10.6 Snow Leopard. I did a search for answers before asking the question, and the (mostly unsatisfactory) answers I found were all from several years ago and applied to earlier versions of OSX (10.3, 10.4, etc.), and therefore were no longer relevant or usable.

  • Any solution for 4280 error when using Itunes to burn CDs?

    Any solution for 4280 error when using Itunes to burn CDs?

    http://support.apple.com/kb/TA38101?viewlocale=en_US

  • What is a valid location for autorecovery files when using Word for MAC?

    What is a valid location for autorecovery files when using Word for MAC?

    Microsoft Word for Mac support forums is probably a better place to ask.

  • Windows Allocation Unit Size and db_block_size

    We are using Oracle 9.2 on Windows 2000 Advanced Server. What is the largest db_block_size we can use? I am going to reformat the drive to the largest cluster size (or allocation unit) Oracle can use.
    Thanks.

    First, note that in 9.2 the buffer cache is sized with db_cache_size rather than the old db_block_buffers :-)
    With the 32-bit version, your buffer cache will be limited to about 2.5 GB (if Windows 2000 can manage that quantity). On 64-bit, I don't know, but probably enough for all of us :-)
    I think you should check the Oracle 9.2 for Windows 2000 release notes on technet.oracle.com
    Fred
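    For completeness, both settings can be checked from SQL*Plus; a small sketch (the values reported will vary per database):

    ```sql
    -- db_block_size is fixed when the database is created (per-tablespace block
    -- sizes are possible from 9i onward); db_cache_size sizes the default buffer cache.
    SHOW PARAMETER db_block_size
    SHOW PARAMETER db_cache_size
    ```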

  • ASM Hang when using ASMLIB

    SR 6591645.994
    ASM is started as part of CRS startup, which starts ASM and then the database. It hangs during ASM startup.
    Please note that ASM startup hangs when the ASM init.ora is set to
    asm_diskstring='ORCL:*'
    When not using ASMLIB, i.e. asm_diskstring='/dev/oracleasm/disks/ORA*', ASM starts ok and then the database starts ok.
    Any idea why this would happen?
    Thanks
    Loren

    Here are the answers to the above questions:
    Can you provide more details... When you first created your ASM instances, how was this done? Using DBCA? Did it hang then?
    ANS: Yes, using DBCA but not using ASMLIB. It ran fine for a long time.
    When you have this hang situation, do you have visibility to all the storage devices?
    ANS: Only when we tried to use ASMLIB, i.e. changed asm_diskstring to 'ORCL:*'
    Did the remainder of the CRS stack start ok? For example, the VIP, LISTENER etc?
    ANS: Yes, all ok except the ASM hang. Please note that when asm_diskstring is changed back to not use 'ORCL:*', everything works fine!
    I would also try to start ASM manually to help determine where the problem could be.
    Make sure CRS is running and then start ASM via SQL*Plus.
    ANS: Yes, it started up ok when using sqlplus
    [oracle@rhel34dev2 pfile]$ sqlplus
    SQL*Plus: Release 10.2.0.2.0 - Production on Wed Nov 28 15:09:41 2007
    Copyright (c) 1982, 2005, Oracle. All Rights Reserved.
    Enter user-name: / as sysdba
    Connected to an idle instance.
    SQL> startup
    ASM instance started
    Total System Global Area 130023424 bytes
    Fixed Size 2042960 bytes
    Variable Size 102814640 bytes
    ASM Cache 25165824 bytes
    ASM diskgroups mounted
    SQL> exit
    Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - 64bit Production
    With the Partitioning, Real Application Clusters, OLAP and Data Mining options
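    Once the instance is up as in the session above, one way to compare what each discovery string actually finds is to query v$asm_disk; a hedged sketch:

    ```sql
    -- Lists every device ASM discovery sees with the current asm_diskstring,
    -- plus its header and mount state; useful for comparing the results of
    -- 'ORCL:*' against the /dev/oracleasm/disks/ORA* path.
    SELECT path, header_status, mount_status
    FROM   v$asm_disk
    ORDER  BY path;
    ```

    A device that appears under one discovery string but not the other would point at the ASMLIB configuration rather than ASM itself.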

  • "No enought room on startup disk for Application Memory" when using the Accelerate Framework

    Dear colleagues,
    I am running what I know is a large problem for a scientific application (tochnog), a finite element solver that runs from the Terminal. The application tries to solve 1,320,000 simultaneous linear equations. The problem starts when I use the Accelerate framework, as the Virtual Memory size jumps from 142 G to about 576 G after the library (LAPACK) is called to solve the system. It does not do this if I use a solver that does not call LAPACK inside Accelerate.
    The machine is a mac pro desktop with 8 GB of ram, the 2.66 GHz Quad-core Intel and the standard 640 GB hard drive. The system tells me that I have 487 GB available on hard drive.
    The top instruction in Terminal reads VM 129G vsize when starting. When I run the finite element application once the LAPACK library in the Accelerate framework gets called, the Virtual Memory (VM) jumps to 563 G vsize.
    After a short while, I get the "No enought room on startup disk for Application Memory error"
    This is a screen capture of the application attempting to solve the problem using the LAPACK library inside the Accelerate framework: Here are the numbers as reported by the activity Monitor.
    Tochnog Real Memory 6.68 GB
    System Memory  Free: 33.8 MB, Wired 378.8 MB, Active 5.06 GB, Inactive 2.53 GB, Used 7.96 GB.
    VM size 567.52 GB, Page ins 270.8 MB, Page outs 108.2 MB, Swap used 505 MB
    This is a screen copy of the same application solving the same problem without using the Accelerate framework.
    Tochnog Real Memory 1.96 GB,
    System Memory  Free: 4.52 MB, Wired 382.1 MB, Active 2.69 GB, Inactive 416.2 GB, Used 3.47 GB.
    VM size 148.60 GB, Page ins 288.8 MB, Page outs 108.2 MB, Swap used 2.5 MB
    I cannot understand the disparity in behavior for the same case. As I said before, the only difference is the use of Accelerate in the first case. Also, as you can see, I thought that 8 GB of RAM was a lot.
    Your help will be greatly appreciated
    Best regards,
    F Lorenzo

    The OP had posted this question in the iMac Intel forum.
    I replied along similar lines, but suggested he repost this in the SL forum where I know there are usually several people who have a far better grasp of these issues than I.
    I would be interested in getting their take on this.
    Although, I think you are coming to the correct conclusion that there are not enough resources available for this process, I'm not certain that what you are saying on the way to that conclusion is correct. My understanding of VM is that it is the total theoretical demand on memory a process might make. It is not necessarily the actual or real world demand being made.
    As such, this process is not actually demanding 568GB (rounded.) As evidence of that, you can see there is still memory available, albeit quite small, in the form of free memory of 33.8MB and inactive of 2.53GB (the GB for that figure, above, seems like it might be a typo, since for the process when not using Accelerate the reported figure for inactive was 416.2 GB -- surely impossible) and 7.96GB used. The process, itself, is using 6.68GB real memory.
    In addition, I question whether the OP has misstated the 487GB free drive space. I think that might be the total drive capacity, not the free space.
    My guess is that it is the combination of low available memory and low free drive space prompting this error.
    From Dr. Smoke on VM:
    it is possible that swap files could grow to the point where all free space on your disk is consumed by them. This can happen if you are very low on both RAM and free disk space.
    https://discussions.apple.com/message/2232469?messageID=2232469&#2232469
    This gets more to the actual intent of your question...
    EDIT: Looks like some kind of glitch right now getting to the Dr. Smoke post.
    Message was edited by: WZZZ
    <Hyperlink Edited by Host>

  • Performance Impact for the Application when using ADFLogger

    Hi All,
    I am very new to ADFLogger and I am going to implement it in my application. I went through Duncan Mills' articles and got the basic understanding.
    I have some questions to clear up:
    Is there any performance impact when using ADFLogger that would slow the application?
    Are there any best practices to follow with ADFLogger to minimize the negative impact (if any)?
    Thanks
    Dk

    Well, a call to a logger is a method call. So if you add a log message for every line of code, you'll see an impact.
    You can implement it in a way that you only write the log messages if the log level is set to a level which your logger writes (or lower). In this case the impact is like having an if statement, plus a method call if the if statement returns true.
    After this theory, here is my personal finding, as I use ADFLogger quite a lot. In production systems you turn the log level to WARNING or higher, so you will not see many log messages in the log. Only when a problem is reported do you set the log level to a lower value to get more output.
    I normally use the 'check log level before logging a message' and the 'just print the message' approaches combined. When I know that a message is printed very often, I first check the level. If I assume or know that a message is logged only seldom, I just log it.
    I personally have not seen a negative impact this way.
    Timo
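    The guard idiom described above can be sketched with plain java.util.logging, which ADFLogger wraps; the class and method names here are illustrative, not ADF API:

    ```java
    import java.util.logging.Level;
    import java.util.logging.Logger;

    public class GuardedLogging {

        static final Logger LOG = Logger.getLogger("demo");

        // Counts how often the expensive message is actually built.
        static int buildCount = 0;

        static String expensiveDump() {
            buildCount++;
            // Stands in for a costly message build (string concatenation, object dumps).
            return "state dump at " + System.nanoTime();
        }

        static void hotPathLog() {
            // Guard: the costly string is only built when FINE is enabled.
            if (LOG.isLoggable(Level.FINE)) {
                LOG.fine(expensiveDump());
            }
        }

        public static void main(String[] args) {
            LOG.setLevel(Level.WARNING);   // production-style level: guard skips the build
            hotPathLog();
            System.out.println("builds at WARNING: " + buildCount);

            LOG.setLevel(Level.FINE);      // diagnostic level: guard lets the build run
            hotPathLog();
            System.out.println("builds at FINE: " + buildCount);
        }
    }
    ```

    At WARNING the message is never constructed, which is where the savings come from on hot paths.
    
    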

  • HT1947 Why can't I see all songs for an artist when using iPad remote for home sharing music through apple tv.

    Why can't I see all my songs for an artist when I search by artist using iPad remote? I'm listening to music via home sharing through apple tv using iPad 2 as my remote. All my songs seem to be listed fine under "songs", but when I select "artist" many of the songs I have for that artist are not there, any idea why that is or how I can fix this.....I suspect it's because many songs may not be associated with an album and it seems only songs in albums are displayed when I search by artist.

    Thanks Ferretbite, but that's not the issue in my case, the songs are not part of a compilation. I just checked several of them and the "Part of a compilation" box is unchecked.....it seems as if the song does not have an album associated with it, it will not display under the artist when I search by artist on the iPad remote which REALLY stinks.
    Hope someone else might have a solution.
