2 LUNs in the metadata storage pool

Hi,
I am currently using Xsan (for redundancy) with a volume that requires high I/O. However, I am seeing poor performance at times, mainly in read latency. I think this is caused by the metadata storage pool (RAID 1 on its own controller), as I can see that RAID controller running at maximum most of the time.
Would having RAID 0+1 with 4 drives, or two LUNs in the metadata storage pool with each on its own controller, improve performance, or would it be detrimental to the SAN?
Any advice would be appreciated
Thanks
Gary

Hi,
I am aware that one dedicated metadata LUN with RAID 1 is listed as the optimal configuration and is more than adequate for most. However, I am using the volume for very heavy I/O, with millions of file requests each day, which Xsan wasn't originally designed for. I was aware of this at the time, but at the originally intended I/O levels it wasn't really a problem. Also, as far as I can tell, all hardware is behaving normally with an optimal configuration. I have experimented with some different settings with no major improvement.
I have read the mentioned guide before and it is a very useful document.
I'm not convinced that it is a configuration problem anywhere, but rather a limitation of the topology. I've got another 2 drives in now for a test to see what happens.
Thanks for your advice.
Gary

Similar Messages

  • Adding a LUN to an existing storage pool

    I am having problems adding a LUN to an existing storage pool.
    The storage pool is of the data-only type.
    The log file /var/run/xsancvupdatefsxsan.log indicates:
    Merging bitmap data (99%)
    Merging bitmap data (100%)
    Bitmap fragmentation: 1900004 chunks (0%)
    Bitmap fragmentation threshold exceeded. Aborting.
    Invalid argument
    Fatal: Failed to expand stripe group
    Check configuration and try again
    After running the defrag command on some folders, the log indicates:
    Merging bitmap data (100%)
    Bitmap fragmentation: 1898528 chunks (0%)
    Bitmap fragmentation threshold exceeded. Aborting.
    Invalid argument
    Fatal: Failed to expand stripe group
    Check configuration and try again
    Do I need to run a complete defrag?
    Apple documentation indicates we need to delete the data before adding the new LUN because there's an issue (a very big issue!):
    http://docs.info.apple.com/article.html?artnum=303571
    Thanks for any help.
    Xsan 1.4, Mac OS X (10.4.8)

    Hi William, thanks for your answer.
    Right now my storage is 68% used.
    Do you think that if I had < 60% used, the storage pool expansion would work?
    I would prefer to add a LUN to an existing storage pool instead of creating a new one, because adding a LUN adds bandwidth too.
    Thanks for any advice.
    CCL

  • Two native iSCSI LUNs in a 2012 storage pool

    I have a Windows Server 2012 storage pool connected to a single SAN via iSCSI. I'd like to add a second SAN and present its virtual disks as part of the same pool to extend my total storage. Is this possible?

    Hi,
    I do not have an environment to test whether adding LUNs from a different SAN will work, but personally I believe it will, since a SAN LUN is supported for addition to a storage pool.
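    If it helps, here's a minimal PowerShell sketch of what that might look like, assuming the second SAN's LUN is already connected through the iSCSI initiator and the existing pool is named "Pool1" (the pool name is a placeholder):
    # List disks that are eligible for pooling; the new LUN should show up here
    Get-PhysicalDisk -CanPool $true
    # Add every pool-eligible disk (i.e. the new LUN) to the existing pool
    Add-PhysicalDisk -StoragePoolFriendlyName "Pool1" -PhysicalDisks (Get-PhysicalDisk -CanPool $true)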

  • Don't put any Storage Pools on your Metadata & Journaling controller!

    I found out the hard way - many wasted hours of trying to figure out our performance problems - that putting any Storage Pools on the controller in an Xserve RAID that also houses the Metadata & Journal pool is a Big Mistake.
    We do full-blown 10-bit HD work here, and while capturing/writing wasn't a problem, playback was - in either Final Cut Pro or creating self-contained QuickTime movies and playing them back in QuickTime Player.
    Playing them back would result in missed frames, or even downright stop/start stuttering - completely unacceptable. Xsan Tuner was no help - it kept claiming I was getting 150-155 MB/sec, which I clearly wasn't.
    My co-workers were getting mad at me that I couldn't get it to work - every tuning thing I tried failed. (I'd originally set it up under Xsan 1.1, and had split the LUNs due to them being over 2 TB, so I had to completely rip it apart and do it all over from scratch under Xsan 1.3 to un-do the LUN slicing.)
    I finally stumbled upon the answer while trying lots of different tests - permutations/combinations, changing block size/stripe breadth, you name it. Nothing worked. Finally I tried building a simple Volume without the Storage Pool I'd created with the remaining 4+1 disks on the Metadata & Journaling controller ...
    Voila! All the performance problems went away magically, and now the videos play properly. Ever since then, with further testing, the only way I can "break" it again is to create a Volume with a 512 K (max) block size and a stripe breadth of 2 - then it seizes up. (Which makes sense, if you think hard enough about it.) With block sizes between 16 and 64, no problems. (I eventually found out that 16 was the best size, according to my Bonnie benchmarking tests.)
    Anyway - maybe this is common knowledge, but here's some real-world data to back it up. Hopefully this posting will save someone in the future from making the same mistakes I did along the way ...

    In my discussions with Apple's consulting group, they always recommend this. Sorry you had to find it out the hard way :-/
    Xsan is easy "for a SAN," but it's a complicated product area, and if you want it to perform at its best, it's good to take training which gives an overview of what to do and more importantly what not to do. Also, the investment in consulting from people with experience is worth its weight in gold.

  • Failover cluster storage pool cannot be added

    Hi.
    Environment: Windows Server 2012 R2 with Update.
    Storage: Dell MD3600F
    I created a LUN with 5 GB of space and mapped it to both nodes of this cluster. It can be seen on both sides in Disk Management. I initialized it as a GPT-based disk without any partition.
    The New Storage Pool wizard completes when this disk is selected, with no error messages or event log entries.
    But after that, the pool is not visible under Pools, and the LUN is gone from Disk Management. The LUN does not reappear even after rescanning.
    This can be reproduced many times.
    In the same environment, many LUNs work well under Storage - Disks. They only fail when acting as a pool.
    What's wrong here?
    Thanks.

    Hi EternalSnow,
    Please refer to the following article on creating a clustered storage pool:
    http://blogs.msdn.com/b/clustering/archive/2012/06/02/10314262.aspx
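    As a rough sketch (untested, names are placeholders), creating the pool from PowerShell against the clustered storage subsystem looks something like this; note that clustered storage pools require a minimum of three physical disks, which may itself be why a single 5 GB LUN fails:
    # Clustered pools need at least three pool-eligible physical disks
    $disks = Get-PhysicalDisk -CanPool $true
    New-StoragePool -FriendlyName "ClusterPool" -PhysicalDisks $disks -StorageSubSystemFriendlyName (Get-StorageSubSystem -FriendlyName "Clustered*").FriendlyName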
    If you need any further information, please feel free to let us know.
    Best Regards,
    Elton Ji

  • Creation of new storage pool on iomega ix12-300r failed

    I have a LenovoEMC ix12-300r (iomega version).
    IX12-300r serial number: 2JAA21000A
    There is at present one storage pool (SP0) consisting of 8 drives (RAID 5).
    HDD 1-8 (existing SP0): ST31000520AS CC38
    I have acquired 4 new Seagate ST3000DM001 drives as per the recommendation on this forum:
    https://lenovo-na-en.custhelp.com/app/answers/detail/a_id/33028/kw/ix12%20recommended%20hdd 
    I want to make a new storage pool with these 4 drives:
    HDD9-12 (new HDD and SP1): ST3000DM001-1CH166 CC29
    I have used diskpart to clean all 4 drives and the IX12-300r can see the drives just fine.
    When I try to make a new storage pool, naming it SP1 (as the only existing storage pool is named SP0), I get an error: "Storage Pool Creation Failed".
    Please advise as to how I can get these drives up and running.
    Regards
    Kristen Thiesen
    adena IT
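    Not specific to the iomega box, but for anyone cleaning drives on a Windows machine first, a PowerShell equivalent of that diskpart clean step would be roughly the following (disk numbers are hypothetical; check Get-Disk first):
    # Wipe partition tables and data structures from the four new drives (numbers assumed)
    9..12 | ForEach-Object { Clear-Disk -Number $_ -RemoveData -Confirm:$false }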

    I have pulled the 8 HDDs from storage pool 0.
    Then I rebooted with the 4 new HDDs in slots 9-12.
    Result: http://device IP/manage/restart.html, with the message: "Confirmation required. You must authorize overwriting existing data to start using this device. Are you sure you want to overwrite existing data? [yes] / [no]"
    I then answered yes four times, anticipating that each new drive needed an acceptance, but the dialog just keeps popping up...
    Then I shut down the device and repositioned the 4 new drives to slots 1-4 - but the same thing happened...
    Any suggestions?

  • How to move a virtual disk's physical allocation within a Storage Pool

    I have a pool of 3x500 GB where one of the physical drives is having intermittent issues. Currently, there is only one parity virtual disk of 300 GB, fixed provisioning, across 3 columns. I want to replace the bad drive with a good one. The old way (pre-2012) was: replace the disk, repair the RAID 5, resync, and done. These basic steps are not working.
    So far I have added a 4th 500 GB drive to the pool. After searching and failing to find a way to move the data non-destructively, I decided to just pull the data cable on the disk I wanted to replace. After a refresh/rescan, the disconnected drive shows "lost communication" and the virtual disk (after trying to repair) shows "unknown" (but the volume on that disk is accessible in Explorer). When I try to remove the physical disk in Server Manager, I get "The selected physical disk cannot be removed". Reading the error message, I see that the replacement disk cannot contain any part of a virtual disk. The replacement disk that I just added appears to have some space allocated (possibly because I have tried this same procedure a couple of times already?). When I look at the parity disk's properties/health, it shows all four physical disks under "physical disks in use".
    I have deleted and recreated a lot of storage pools lately while trying to understand how they work, but I would like to avoid that this time. The data on the virtual disk in question is highly deduplicated, and it took quite a while to get it that way. Since I can't find a way to copy/mirror the disk while keeping it fully deduplicated, I would need 3x the space to copy it all off, or a lot of time to load up and deduplicate a new virtual disk.
    I have several questions:
    1. How can a 3 column parity disk use parts of four physical disks? And can that be fixed without recreating the virtual disk?
    2. When creating a virtual disk (for example a 3 column disk in a pool that has four or more physical drives), is there a way to specify which physical disks to use?
    3. I understand that after a physical disk failure, the recovery process will move a virtual disk's allocation to a replacement disk, but can a virtual disk's allocation be moved manually among physical disks within the same storage pool using a PS script?
    4. Can a deduplicated virtual disk be moved/mirrored/backed up without expanding the data?
    Any help is appreciated.

    I'm still fighting with storage pools myself and need to do more testing, and I have a lot of questions of my own, but here is what I've understood so far.
    You may define the physical disks used by a virtual disk via PowerShell.
    For a list of all the commands, see:
    http://technet.microsoft.com/en-us/library/hh848705(v=wps.620).aspx
    The specific command for assigning physical disks to an already existing virtual disk:
    Example 4: Manually assigning physical disks to a virtual disk
    This example gets two physical disks that have already been added to the storage pool and designated as ManualSelect disks, PhysicalDisk3 and PhysicalDisk4, and assigns them to the virtual disk UserData.
    PS C:\> Add-PhysicalDisk -VirtualDiskFriendlyName UserData -PhysicalDisks (Get-PhysicalDisk -FriendlyName PhysicalDisk3, PhysicalDisk4)
    http://technet.microsoft.com/en-us/library/hh848702(v=wps.620).aspx
    If you haven't seen it yet, you may want to check out http://blogs.technet.com/b/yungchou/archive/2011/12/06/free-ebooks.aspx
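    On question 3: beyond Add-PhysicalDisk above, the usual replace-a-disk flow in Storage Spaces is to retire the failing disk and then repair the virtual disk, something like this sketch (the friendly names are placeholders, not from your setup):
    # Mark the failing disk Retired so Storage Spaces stops allocating to it
    Set-PhysicalDisk -FriendlyName "PhysicalDisk2" -Usage Retired
    # Rebuild the virtual disk's data onto the remaining disks in the pool
    Repair-VirtualDisk -FriendlyName "UserData"
    # Once the repair completes, the retired disk can be removed from the pool
    Remove-PhysicalDisk -StoragePoolFriendlyName "Pool1" -PhysicalDisks (Get-PhysicalDisk -FriendlyName "PhysicalDisk2")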

  • File CRC error on Read/Copy on NTFS on Parity Storage Pool

    I cannot figure out a way to resolve a CRC error when trying to read/copy a file that resides on NTFS on a parity storage pool spanning 10 disks.
    I would have figured that the Parity Pool would have caught the disk CRC error and healed the data with the Parity data. It doesn't.
    XCopy does not work.
    Robocopy does not work.
    Cacls does not work.
    Chkdsk /f does not find any errors.
    Chkdsk /r /x does not find any errors.
    fsutil repair file does not find any errors.
    What am I missing? I thought Parity was supposed to protect me from a drive failure. In this case it isn't a complete drive failure, but the drive is reporting a failure.
    Thanks,
    Scott

    I suspect it could be a drive problem. Can you run your hardware diagnostic tool and see if it reports any errors? There could be a predictive failure on one of the disks. Do you see any errors in the event logs?
    Thanks,
    Umesh.S.K
    Storage Spaces hides drives from any S.M.A.R.T. tools I have. The health status reported for the disk in the storage pool is Healthy. I would have to take down my server to do any disk-specific diagnostics. Isn't this what Storage Spaces, pools, and parity were supposed to prevent?
    No disk errors in my event logs.
    Do you have any suggestions for hardware diagnostics that will not require me to take down the server, and that also bypass the Windows storage pool drive-hiding "feature"?
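    One thing that may help without taking the server down, assuming Windows 8 / Server 2012 or later: Storage Spaces exposes per-disk reliability counters through PowerShell, even for pooled disks that are hidden from S.M.A.R.T. tools, e.g.:
    # Read/write error counters for every physical disk, including pooled ones
    Get-PhysicalDisk | Get-StorageReliabilityCounter | Select-Object DeviceId, ReadErrorsTotal, ReadErrorsUncorrected, Temperature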

  • DPM 2010 reinstall secondary, format storage pool

    I have 2 DPM servers, both 2010, a primary and a secondary. The secondary is at a separate site and backs up the primary. The OS disk on the secondary died. Single disk. I did not have a backup of the system state or the DPM db/configuration. The storage pool data is still intact. I have managed to reinstall DPM, configure a protection group for the primary DPM server, and attempt backups.
    The issue is that all of the DPM data in the storage pool from before the crash is still there. The storage pool is 7 TB and was about 40% used. Since I've run new replication from the primary with the new jobs, it's about 80% used. This is my secondary server, so I think I can afford to wipe the entire storage pool on the secondary DPM and start over.
    But how do I completely format the storage pool? Is there a better way to accomplish what I'm doing? I do not need to keep the old recovery points/storage pool data; it's months old.

    I have another question for this configuration. I'm trying to understand whether the secondary DPM server "touches" or directly backs up the "protected servers" in the PGs on the primary. What I mean is: does the secondary only pull and back up data from the storage pool on the primary, or does it actually pull and back up data directly from the protected server, i.e. Exchange or a file server?
    I'm trying to best set up recovery points and sync on the secondary. I have a full PG set up on the primary with proper syncs and recovery points for my servers, but what is best for the secondary? It's simply replication of the primary, so a one-time copy of the data, probably nightly, would suffice for my business needs. Do I need to schedule any synchronization at all for the replication?
    On the secondary PG, do I need to choose each "protected server" and its volumes and DBs and applications, etc.?
    I know this question is all over the place. Sorry.

  • Migrating Protection group to new storage pool

    Hello all, running DPM 2012. The environment was set up prior to my arrival with ten 1.5 TB storage pools. I heard the thinking was that this was the recommended size for storage pools. The problem is that we have now run out of disk space on those storage pools, and jobs are failing with "DPM does not have sufficient storage space available on the recovery point volume to create new recovery points (ID 214)".
    I have created 4 new storage pools of 10 TB each, and I want to know how I can migrate the protection group data to the new storage pools and have the protection group use them moving forward. I want to empty the smaller storage pools so I can kill them off and create larger ones.
    Unfortunately the GUI does not allow me to point to specific storage pools (it should), and I'm not familiar with DPM Shell commands.
    Any suggestions would be appreciated.

    Hi,
    http://technet.microsoft.com/en-us/library/dd282970.aspx
    The following commands use the NT disk numbers as seen in the Get-DPMDisk output, the Windows Disk Management GUI, or the DPM Management - Disks GUI. In this example, disk 0 is the system/boot disk and is not used by the DPM storage pool.
    Example: these commands migrate from NTDisks 1-10 to NTDisks 11-14:
    $source = get-dpmdisk -dpmserver DPM-SVR-NAME | where {1,2,3,4,5,6,7,8,9,10 -contains $_.ntdiskid}
    $source
    $destination = get-dpmdisk -dpmserver DPM-SVR-NAME | where {11,12,13,14  -contains $_.ntdiskid}
    $destination
    Once you are satisfied the outputs are what you want, then run the following to do the migration:
    MigrateDatasourceDataFromDPM.ps1 -DPMServername DPM-SVR-NAME -source $source -destination $destination

  • DPM 2010; migrated DPM protection group to new storage pool; data on source disk is not expiring

    I am running DPM 2010 with 4 storage pool disks. 2 are iSCSI; 1 is SAS; the fourth, also iSCSI, was recently deployed to replace the SAS storage pool disk.
    (1) iSCSI
    (2) iSCSI
    (3) SAS - Source
    (4) iSCSI - Destination
    Initially we tried to migrate the entire data source (3) to the new iSCSI storage pool disk destination (4). This failed with:
    Set-ProtectionGroup: The allocation of disk space for storage pool volume failed because there is not enough unallocated disk space in the storage pool (ID: 358). (Sorry, I was not allowed to attach an image.)
    We believe the source disk previously created dynamic links to the long-term retention device (tape), which resulted in exceeding the available space on the new destination iSCSI storage pool disk. The source storage pool disk was totaling ~2.5 TB, but needed ~9 TB for the destination.
    Instead, I've tried to migrate the DPM protection group. The migration was successful. I followed the procedure outlined in
    "Microsoft DPM 2012 SP1 - How To Migrate Data Source using MigrateDatasourceDataFromDPM by ICTtechie" (sorry, I was not allowed to include a link).
    My understanding/expectation was that the data on the source disk would expire within 5 days (as this is our retention time set for disk replication); instead, data started to expire from the other two iSCSI storage pool disks, (1) and (2).
    What am I missing here?
    Thanks

    OK - you may have bumped into this.
    WMF 3.0 is incompatible with some Microsoft products, including DPM 2010.
    Windows Management Framework 3.0 (WMF 3.0), which includes PowerShell 3.0, was made available Dec. 11 on Windows Update as an optional update but has since been pulled.
    More information is available at http://blogs.msdn.com/b/powershell/archive/2012/12/20/windows-management-framework-3-0-compatibility-update.aspx
    Resolution:
    To resolve this issue, uninstall KB2506146 or KB2506143.

  • DPM Storage Pool on Windows Server 2012R2 iSCSI target Storage Spaces - supported?

    Hi,
    For migration purposes we need more space for one of our DPM 2012 R2 servers,
    and we cannot buy more space for our NetApps.
    So our idea was to use a Windows Server 2012 R2 machine as an iSCSI target and use Storage Spaces.
    I remember something about using an iSCSI target on Windows 2008 not being supported.
    Is this configuration supported when using DPM 2012 R2 on Win 2012 R2, with the target being Win 2012 R2 as well?
    Thanks in advance
    regards
    /bkpfast

    Hi,
    This talks about virtual DPM servers, but the same is true for physical DPM servers when it comes to using .VHD(X) files for the DPM storage pool.
    Virtual DPM installations do not support the following:
    Windows 2012 Storage Spaces.
    Virtual hard drives built on top of storage spaces.
    Local or remote hosting of VHDX files on Windows 2012 storage spaces.
    Enabling Disk Dedupe on volumes hosting virtual hard drives.
    Using synthetic FC to connect to tape drives.
    Windows 2012 iSCSI targets (which use virtual hard drives) as a DPM storage pool.
    NTFS compression for volumes hosting VHD files used in the DPM storage pool.
    Bitlocker on volumes hosting VHD files used for the storage pool.
    A native 4K sector size of physical disks for VHDX files in the DPM storage pool.
    Virtual hard drives hosted on Windows 2008 servers.

  • ZFS storage pools with EMC Snapview clones

    We're currently using Veritas VM and Snapview clones to present multiple copies of our database data to the same host. I've found that Veritas doesn't like multiple copies of the same disk group being imported on a host and found a way around this. My question is: does zfs have this problem? If I switch to zfs for our new system and create a storage pool for each group of data files, (we put the data files with our tables on one filesystem, indexes on another, redo logs another, etc.) can I mount a clone of the indexes storage pool (for example) on the same host as the original?

    > We're currently using Veritas VM and Snapview clones to present multiple copies of our database data to the same host. I've found that Veritas doesn't like multiple copies of the same disk group being imported on a host and found a way around this.
    VxVM 5.0 has some tools to redo the ID on the copy. That might make it easier to deal with. Of course, that brings up other issues if you ever want to use that copy and roll it back as primary. (If you don't have 5.0, you can still do it, but it's a lot more fiddly.)
    > My question is: does zfs have this problem? If I switch to zfs for our new system and create a storage pool for each group of data files, (we put the data files with our tables on one filesystem, indexes on another, redo logs another, etc.) can I mount a clone of the indexes storage pool (for example) on the same host as the original?
    No. Same issue. The pool/volume group has a (hopefully) unique identifier used to find the pieces of the storage. When an identified piece starts showing up in multiple locations, it knows things are wrong. You'd have to have some method of modifying that data on the copied disks. Today, I don't think there's any support in ZFS for doing that.
    Darren

  • Use Storsimple LUN as storage pool in DPM

    Can we use a StorSimple LUN as the storage pool and assign it to DPM?
    Is this supported? 
    Lai (My blog:- http://www.ms4u.info)

    Hi,
    DPM tracks changes to protected files at the block level and applies block-level changes to files sitting on the DPM replica. DPM does not operate at the file level, so if a StorSimple device were used in the DPM storage pool and truncated files as part of its offload-to-cloud operation, and files changed on the protected server, then when DPM tried to apply the block-level changes to those files, that would cause corruption. Also, consistency checks perform block-level comparisons of NTFS structures and file data, and again, DPM would basically see the work done by StorSimple as mismatched data and end up bringing over a full copy of the file and re-writing it to the replica volume.
    Simply said, the technologies are not compatible.
    With that said, using StorSimple as a repository for VTL media (like Firestreamer) would be supported by DPM, but I am not sure that has actually been tested by anybody at Microsoft.

  • Can ZFS storage pools share a physical drive w/ the root (UFS) file system?

    I wonder if I'm missing something here, because I was under the impression ZFS offered ultimate flexibility until I encountered the following fine print 50 pages into the ZFS Administration Guide:
    "Before creating a storage pool, you must determine which devices will store your data. These devices must be disks of at least 128 Mbytes in size, and _they must not be in use by other parts of the operating system_. The devices can be individual slices on a preformatted disk, or they can be entire disks that ZFS formats as a single large slice."
    I thought it was frustrating that ZFS couldn't be used as a boot disk, but the fact that I can't even use the rest of the space on the boot drive for ZFS is aggravating. Or am I missing something? The following text appears elsewhere in the guide and suggests that I can use the 7th slice:
    "A storage device can be a whole disk (c0t0d0) or _an individual slice_ (c0t0d0s7). The recommended mode of operation is to use an entire disk, in which case the disk does not need to be specially formatted."
    Currently, I've just installed Solaris 10 (6/11) on an Ultra 10. I removed the slice for /export/users (c0t0d0s7) from the default layout during the installation. So there's approx 6 GB in UFS space, and 1/2 GB in swap space. I want to make the 70GB of unused HDD space a ZFS pool.
    Suggestions? I read somewhere that the other slices must be unmounted before creating a pool. How do I unmount the root partition, then use the ZFS tools that reside in that unmounted space to create a pool?
    Edited by: MindFuq on Oct 20, 2007 8:12 PM

    It's not convenient for me to post that right now, because my ultra 10 is offline (for some reason the DNS never got set up properly, and creating an /etc/resolv.conf file isn't enough to get it going).
    Anyway, you're correct, I can see that there is overlap with the cylinders.
    During installation, I removed slice 7 from the table. However, under the covers the installer created a 'backup' partition (slice 2), which used the rest of the space (~74.5GB), so the installer didn't leave the space unused as I had expected. Strangely, the backup partition overlapped; it started at zero as the swap partition did, and it ended ~3000 cylinders beyond the root partition. I trusted the installer to be correct about things, and simply figured it was acceptable for multiple partitions to share a cylinder. So I deleted slice 2, and created slice 7 using the same boundaries as slice 2.
    So next I'll have to remove the zfs pool, and shrink slice 7 so it goes from cylinder 258 to ~35425.
    [UPDATE] It worked. Thanks Alex! When I ran zpool create tank c0t0d0s7, there was no error.
    Edited by: MindFuq on Oct 22, 2007 8:15 PM
