ZFS storage pools with EMC SnapView clones

We're currently using Veritas Volume Manager and SnapView clones to present multiple copies of our database data to the same host. I've found that Veritas doesn't like multiple copies of the same disk group being imported on a host, and I found a way around this. My question is: does ZFS have this problem? If I switch to ZFS for our new system and create a storage pool for each group of data files (we put the data files with our tables on one filesystem, indexes on another, redo logs on another, etc.), can I mount a clone of the indexes storage pool, for example, on the same host as the original?

VxVM 5.0 has some tools to rewrite the ID on the copy. That might make it easier to deal with. Of course, that brings up other issues if you ever want to use that copy and roll it back as primary. (If you don't have 5.0, you can still do it, but it's a lot more fiddly.)
No, same issue. The pool has a (hopefully) unique identifier that it uses to find the pieces of its storage. When an identified piece starts showing up in multiple locations, ZFS knows something is wrong. You'd have to have some method of modifying that data on the copied disks, and today I don't think there's any support in ZFS for doing that.
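For illustration, here is roughly where that identifier check bites on the host (a sketch; the pool ID shown is hypothetical, and renaming on import is not a supported workaround for cloned disks):

    # A block-level clone shows up with the SAME pool name and pool ID as
    # the original, because the on-disk label was copied along with the data.
    zpool import

    # 'zpool import <pool_id> <newname>' can normally import a pool under a
    # new name, but with two device sets carrying identical labels ZFS
    # cannot tell the clone from the original.
    zpool import 6930242472136238240 indexes_clone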
Darren

Similar Messages

  • Adding drives to storage pool with same unique id

    I have seen a lot of discussion about using storage pools with RAID controllers that report the same unique ID across multiple drives.
    I have yet to find a solution. My problem is that I can't add 2 drives to a storage pool because they share the same unique ID. Is there a way I can get around this?
    Thanks, Brendon

    Thanks for your reply.
    However, Storage Spaces uses the UniqueId that the RAID/SATA controller reports for the drive. In my case this is the output from PowerShell:
    PS C:\Users\tfs> get-physicaldisk | ft FriendlyName, uniqueid
    FriendlyName                                                uniqueid
    PhysicalDisk1                                               2039374232333633
    PhysicalDisk2                                               2039374232333633
    PhysicalDisk10                                              SCSI\Disk&Ven_Hitachi&Prod_HDS722020ALA330\4&37df755d&0&...
    PhysicalDisk8                                               SCSI\Disk&Ven_WDC&Prod_WD10EACS-00D6B0\4&37df755d&0&0300...
    PhysicalDisk6                                               SCSI\Disk&Ven_WDC&Prod_WD10EADS-00M2B0\4&37df755d&0&0100...
    PhysicalDisk7                                               SCSI\Disk&Ven_&Prod_ST2000DL003-9VT1\4&37df755d&0&020000...
    PhysicalDisk0                                               2039374232333633
    PhysicalDisk4                                               SCSI\Disk&Ven_&Prod_ST3000DM001-9YN1\5&10a0425f&0&010000...
    PhysicalDisk3                                               SCSI\Disk&Ven_Hitachi&Prod_HDS723030ALA640\5&10a0425f&0&...
    PhysicalDisk9                                               SCSI\Disk&Ven_&Prod_ST31500341AS\4&37df755d&0&040000:sho...
    PhysicalDisk5                                               SCSI\Disk&Ven_WDC&Prod_WD1001FALS-00J7B\4&37df755d&0&000...
    As you can see, I have three drives with the same UniqueId. This I cannot change, and it is what I am looking for a workaround for.
    If you have any thoughts, that would be great.
    Thanks in advance,
    Brendon
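    A quick way to see exactly which drives collide (a sketch using the standard Storage cmdlets; it only diagnoses the clash, it does not work around it):

    # Group physical disks by UniqueId and list any groups that share one.
    PS C:\> Get-PhysicalDisk | Group-Object UniqueId | Where-Object Count -gt 1 |
                ForEach-Object { $_.Group } | Format-Table FriendlyName, UniqueId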

  • Can ZFS storage pools share a physical drive w/ the root (UFS) file system?

    I wonder if I'm missing something here, because I was under the impression ZFS offered ultimate flexibility until I encountered the following fine print 50 pages into the ZFS Administration Guide:
    "Before creating a storage pool, you must determine which devices will store your data. These devices must be disks of at least 128 Mbytes in size, and _they must not be in use by other parts of the operating system_. The devices can be individual slices on a preformatted disk, or they can be entire disks that ZFS formats as a single large slice."
    I thought it was frustrating that ZFS couldn't be used as a boot disk, but the fact that I can't even use the rest of the space on the boot drive for ZFS is aggravating. Or am I missing something? The following text appears elsewhere in the guide, and suggests that I can use the 7th slice:
    "A storage device can be a whole disk (c0t0d0) or _an individual slice_ (c0t0d0s7). The recommended mode of operation is to use an entire disk, in which case the disk does not need to be specially formatted."
    Currently, I've just installed Solaris 10 (6/11) on an Ultra 10. I removed the slice for /export/users (c0t0d0s7) from the default layout during the installation. So there's approx 6 GB in UFS space, and 1/2 GB in swap space. I want to make the 70GB of unused HDD space a ZFS pool.
    Suggestions? I read somewhere that the other slices must be unmounted before creating a pool. How do I unmount the root partition, then use the ZFS tools that reside in that unmounted space to create a pool?

    It's not convenient for me to post that right now, because my ultra 10 is offline (for some reason the DNS never got set up properly, and creating an /etc/resolv.conf file isn't enough to get it going).
    Anyway, you're correct, I can see that there is overlap with the cylinders.
    During installation, I removed slice 7 from the table. However, under the covers the installer created a 'backup' partition (slice 2), which used the rest of the space (~74.5GB), so the installer didn't leave the space unused as I had expected. Strangely, the backup partition overlapped: it started at cylinder zero as the swap partition did, and it ended ~3000 cylinders beyond the root partition. I trusted the installer to be correct about things, and simply figured it was acceptable for multiple partitions to share a cylinder. So I deleted slice 2, and created slice 7 using the same boundaries as slice 2.
    So next I'll have to remove the zfs pool, and shrink slice 7 so it goes from cylinder 258 to ~35425.
    [UPDATE] It worked. Thanks Alex! When I ran zpool create tank c0t0d0s7, there was no error.
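    For anyone following along, the overlap check and the final command look roughly like this (a sketch; the device names follow the thread's layout):

    # Print the VTOC and make sure slice 7's sector range does not
    # overlap slice 0 (root) or slice 1 (swap).
    prtvtoc /dev/rdsk/c0t0d0s2

    # Once slice 7 stands alone, build the pool on that slice.
    zpool create tank c0t0d0s7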

  • Creation of new storage pool on iomega ix12-300r failed

    I have a LenovoEMC ix12-300r (iomega version).
    IX12-300r serial number: 2JAA21000A
    There is at present one storage pool (SP0) consisting of 8 drives (RAID 5).
    HDD 1-8 (existing SP0): ST31000520AS CC38
    I have acquired 4 new Seagate ST3000DM001 drives as per the recommendation on this forum:
    https://lenovo-na-en.custhelp.com/app/answers/detail/a_id/33028/kw/ix12%20recommended%20hdd 
    I want to make a new storage pool with these 4 drives:
    HDD9-12 (new HDD and SP1): ST3000DM001-1CH166 CC29
    I have used diskpart to clean all 4 drives and the IX12-300r can see the drives just fine.
    When I try to make a new storage pool, naming it SP1 as the only storage pool is named SP0, I get an error: "Storage Pool Creation Failed"
    Please advise as to how I can get these drives up and running.
    Regards
    Kristen Thiesen
    adena IT

    I have pulled the 8 HDDs from storage pool 0.
    Then I rebooted with the 4 new HDDs in slots 9-12.
    Result: http://<device IP>/manage/restart.html, with the message: Confirmation required. You must authorize overwriting existing data to start using this device. Are you sure you want to overwrite existing data? [yes] / [no]
    I then answered yes four times, anticipating that each new drive needs an acceptance, but the dialog just keeps popping up...
    Then I shut down the device and repositioned the 4 new drives to slot 1-4 - but the same thing happened...
    Any suggestions?

  • Install Windows Server 2012 R2 VM on Storage Spaces with Storage Tiers

    Hey guys
    In my small/medium-sized company we will soon update to Windows Server 2012 R2. I would like to implement virtual servers using Hyper-V. I didn't find a lot of information about Hyper-V in combination with Storage Spaces and automated storage tiers.
    This is very confusing to me, as it seems that this would be the best practice: it is the most cost-efficient and most elegant solution.
    My ideal scenario:
    With Hyper-V I virtualize two Windows Server 2012 R2 instances. So two separate virtual machines.
    I use the following disk setup:
    1x cheap HDD  40GB for hyper-v server 2012 r2 core.
    2x SSD 200GB (enterprise-grade)
    2x HDD 4TB (7.2k, enterprise-grade)
    Step 1:
    I will install Hyper-V Server 2012 R2 Core on the 40GB HDD. Via command line, I will create a storage pool with automated tiered storage using the SSDs and the HDDs in mirrored mode the following way:
    With Tiered Storage, I create a storage pool containing the SSDs and the HDDs. Then I create storage space A (1TB) and B (3.2TB) with the SSDs in a mirrored setup and the HDDs in a mirrored setup. The SSDs for the „hot files“ and the HDDs for the „cold files“.
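    Concretely, Step 1 would look roughly like this in PowerShell (a sketch; the pool, tier, and space names are hypothetical, and tiered spaces must use fixed provisioning):

    # Pool from the four data disks (2x SSD + 2x HDD).
    $disks = Get-PhysicalDisk -CanPool $true
    New-StoragePool -FriendlyName "Pool1" -StorageSubSystemFriendlyName "Storage Spaces*" -PhysicalDisks $disks

    # Define the two tiers by media type.
    $ssd = New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "SSDTier" -MediaType SSD
    $hdd = New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "HDDTier" -MediaType HDD

    # Mirrored, fixed-provisioned storage space A; space B follows the same pattern.
    New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "SpaceA" `
        -StorageTiers $ssd,$hdd -StorageTierSizes 100GB,900GB `
        -ResiliencySettingName Mirror -ProvisioningType Fixed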
    Step2:
    Ontop of the storage space A I want to install the first Windows Server 2012 R2 instance with Active directory. On storage space B I want to install the second Windows Server 2012 R2 instance for a business application to run on it.
    Conclusion:
    The SSDs are mirrored and therefore one SSD can fail.
    The 4TB HDDs are mirrored and therefore one HDD can fail.
    I have a fast and easy scalable environment.
    But on the Internet I found a lot of information saying that it's not possible to install an operating system onto a storage tier.
    Question 1:
    Is this setup possible?
    Question 2:
    If this setup is possible, why isn't everyone doing it?
    Question 3:
    Is it possible to do Step 1 over a GUI from a remote machine?
    Question 4:
    If the creation of storage tiers in Hyper-V Server 2012 R2 is not possible, would it work to use Windows Server 2012 R2 as a parent system on the 40GB HDD to do Step 1?
    I would gladly get some feedback from people who know Storage Tiers well.
    Thanks a lot!

    I would absolutely prefer a GUI. But a Windows Server 2012 R2 Standard licence allows you to run two VM machines.
    It also grants you a physical installation ("POSE" in the licensing documents). You can buy one copy of WS2012R2 Standard, install it on the hardware, enable Hyper-V, and then operate two virtual machines with WS2012R2 Standard ("VOSE" in the licensing documents). The only restriction is that the management operating system (POSE) can only run services and applications meant to manage the virtual machines and/or the management operating system. The Hyper-V Server license is the same way, so it's not really any different.
    In short, given the benefits of the GUI at your stage of learning, you have no solid reason not to install the full system and take advantage of it. You can disable the GUI later once you get your footing. Or not. Whatever suits you. However, in response to your Question 3, you can do this all remotely. Once you get WS2012R2 installed in a guest, you can use it to manage the management operating system if you want. There are many options.
    But then I would also need to have redundancy on the 40GB HDD, as if this HDD breaks, all others break as well?
    Yes, you're going to want some redundancy for the management operating system. But you've listed 5 drives in your original layout. You don't really have a 5-bay system, do you? Is there an empty sixth bay? Could you not get two 40GB drives instead of one and use hardware RAID-1?
    Eric Siron
    Altaro Hyper-V Blog
    I am an independent blog contributor, not an Altaro employee. I am solely responsible for the content of my posts.

  • Help setting a repository with EMC Fiber storage

    Hello
    We have a couple of servers running OVM 3.2.2 to set up a test environment. The only common space is 6 LUNs on an EMC CX3-40. I can see the LUNs with multipath from the servers, available under /dev/mpath:
    [root@dvmicovm1 bin]# multipath -ll |grep RAID
    EMC_ASM05 (3600601601f911900a4250377b5b8e211) dm-5 DGC,RAID 10
    EMC_ASM04 (3600601601f9119000ec7e110b9b8e211) dm-3 DGC,RAID 10
    EMC_ASM03 (3600601601f91190044870613b5b8e211) dm-4 DGC,RAID 10
    EMC_ASM02 (3600601601f91190080abeac9b4b8e211) dm-6 DGC,RAID 10
    EMC_ASM01 (3600601601f911900ba6e016bb4b8e211) dm-2 DGC,RAID 10
    EMC_OVMstorage (3600601601f9119004e22ab7ab3b8e211) dm-1 DGC,RAID 5
    I can't create the clustered pool because I cannot see the storage at the management level. I tried to create an OCFS2 filesystem, but I can't find a good link explaining the steps to do it with this version at the command line. Help is welcome.
    [root@dvmicovm1 mpath]# mkfs.ocfs2 -Tdatafiles -L EMC_repo -b 4K -C 4K -J size=64M -N16 /dev/mpath/EMC_OVMstoragep1
    mkfs.ocfs2 1.8.2
    Cluster stack: classic o2cb
    Overwriting existing ocfs2 partition.
    mkfs.ocfs2: Unable to access cluster service while initializing the cluster
    thanks
    Ileana

    Hi Ileana,
    what do you mean by "at management level"? The LUNs need to be presented to the VM servers only, but to each one simultaneously. They need to show up under /dev/mapper, which seems to be the case. However, there seem to be more LUNs visible than there should be… I reckon the ASM0x LUNs are not supposed to be used by OVM, but are LUNs for some Oracle DB(s), no? If that is the case, I'd suggest hiding them from the OVM servers in the first place.
    Then you'd need at least two LUNs for OVM: one small one (depending on the cluster size, you should be able to get away with 32G max.) and another one that actually gets used as your shared storage repository. This one can be nearly as big as you like.
    So, to be able to create a clustered pool, you need access to two distinct LUNs from each OVM server. You can check whether each OVM server has access to the LUNs by choosing the server in OVMM and changing the perspective on the right pane to "Physical Disks".
    I'd also refrain from creating any OCFS2 filesystem on my own while setting up the clustered storage pool, since that will likely lead to more trouble: OVM won't overwrite an existing OCFS2 filesystem if it finds one on any disk you throw at it, and to get rid of that filesystem you will likely have to wipe the LUNs manually. That means using dd, a program that always has to be used with extreme caution, especially when LUNs are visible that under no circumstances may be deleted - see your ASM LUNs…

  • 2012 New Cluster Adding A Storage Pool fails with Error Code 0x8007139F

    Trying to set up a brand new cluster (first node) on Server 2012. The hardware passes the cluster validation tests and consists of a Dell 2950 with an MD1000 JBOD enclosure configured with a bunch of 7.2K RPM SAS and 15K SAS drives. There is no RAID card or any other storage fabric, just a SAS adapter and an external enclosure.
    I can create a regular storage pool just fine and access it with no issues on the same box when I don't add it to the cluster. However when I try to add it to the cluster I keep getting these errors on adding a disk:
    Error Code: 0x8007139F if I try to add a disk (The group or resource is not in the correct state to perform the requested operation)
    When adding the Pool I get this error:
    Error Code 0x80070016 The Device Does not recognize the command
    Full Error on adding the pool
    Cluster resource 'Cluster Pool 1' of type 'Storage Pool' in clustered role 'b645f6ed-38e4-11e2-93f4-001517b8960b' failed. The error code was '0x16' ('The device does not recognize the command.').
    Based on the failure policies for the resource and role, the cluster service may try to bring the resource online on this node or move the group to another node of the cluster and then restart it.  Check the resource and group state using Failover Cluster
    Manager or the Get-ClusterResource Windows PowerShell cmdlet.
    If I try to just add the raw disks to the storage - without using a pool or anything - all of them fail with "Incorrect function" except for one (a 7.2K RPM SAS drive). I cannot see any difference between it and the other disks. Any ideas? The error codes aren't anything helpful. I would imagine there's something in the drive configuration or hardware I am missing here, I just don't know what, considering the validation is passing and I am meeting the listed prerequisites.
    If I can provide any more details that would assist, please let me know. Kind of at a loss here.

    Hi,
    You mentioned you use a Dell MD1000 as storage. The Dell MD1000 is direct-attached storage (DAS), and Windows Server clusters do support DAS; failover clusters include improvements to the way the cluster communicates with storage, improving the performance of a storage area network (SAN) or direct-attached storage (DAS).
    But the PERC 5/6 RAID controller used with the MD1000 may not support cluster technology. I could not find an official article for it, but I found that its next generation, the MD1200 with the PERC H800 RAID controller, still does not support cluster technology.
    You may contact Dell to check that.
    For more information please refer to the following articles:
    Technical Guidebook for PowerVault MD1200 and MD1220
    http://www.dell.com/downloads/global/products/pvaul/en/storage-powervault-md12x0-technical-guidebook.pdf
    Dell™ PERC 6/i, PERC 6/E and CERC 6/I User's Guide
    http://support.dell.com/support/edocs/storage/RAID/PERC6/en/PDF/en_ug.pdf
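    As a quick sanity check, clustered storage pools also require every disk to be SAS-attached and poolable; you can verify what the Storage subsystem sees (a sketch):

    # Clustered pools need BusType SAS and CanPool = True on every disk.
    PS C:\> Get-PhysicalDisk | Format-Table FriendlyName, BusType, CanPool, OperationalStatus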
    Hope this helps!
    TechNet Subscriber Support
    If you are a TechNet Subscription user and have any feedback on our support quality, please send your feedback here.
    Lawrence
    TechNet Community Support

  • Slow performance Storage pool.

    We also encounter performance problems with storage pools.
    The RC is somewhat faster than the CP version.
    Hardware: Intel S1200BT (test) motherboard with LSI 9200-8e SAS 6Gb/s HBA connected to 12 ST91000640SS disks. Heavy problems with “Bursts”.
    Using the ARC 1320IX-16 HBA card is somewhat faster and looks more stable (less bursts).
    Inserting an ARC 1882X RAID card increases speed with a factor 5 – 10.
    Hence hardware RAID on the same hardware is 5 – 10 times faster!
    We noticed that the Resource Monitor becomes unstable (unresponsive) while testing.
    There are no heavy processor loads while testing.
    JanV.

    Based on some testing, I have several new pieces of information on this issue.
    1. Performance limited by controller configuration.
    First, I tracked down the underlying root cause of the performance problems I've been having. Two of my controller cards are RAIDCore PCI-X controllers, which I am using for 16x SATA connections. These have fantastic performance for physical disks
    that are initialized with RAIDCore structures (so they can be used in arrays, or even as JBOD). They also support non-initialized disks in "Legacy" mode, which is what I've been using to pass-through the entire physical disk to SS. But for some reason, occasionally
    (but not always) the performance on Server 2012 in Legacy mode is terrible - 8MB/sec read and write per disk. So this was not directly a SS issue.
    So given my SS pools were built on top of disks, some of which were on the RAIDCore controllers in Legacy mode, on the prior configuration the performance of virtual disks was limited by some of the underlying disks having this poor performance. This may also have caused the unresponsiveness of the entire machine, if the Legacy mode operation had interrupt problems. So the first lesson is: check the entire physical disk stack, under the configuration you are using for SS, first.
    My solution is to use all RAIDCore-attached disks with the RAIDCore structures in place, so the performance is more like 100MB/sec read and write per disk. The problems with this are (a) a limit of 8 arrays/JBOD groups that can be presented to the OS (for my 16 disks across two controllers), and (b) the loss of a little capacity to RAIDCore structures.
    However, the other advantage is the ability to group disks as JBOD or RAID0 before presenting them to SS, which provides better performance and efficiency due to limitations in SS.
    Unfortunately, this goes against advice at http://social.technet.microsoft.com/wiki/contents/articles/11382.storage-spaces-frequently-asked-questions-faq.aspx,
    which says "RAID adapters, if used, must be in non-RAID mode with all RAID functionality disabled.". But it seems necessary for performance, at least on RAIDCore controllers.
    2. SS/Virtual disk performance guidelines. Based on testing different configurations, I have the following suggestions for parity virtual disks:
    (a) Use disks in SS pools in multiples of 8 disks. SS has a maximum of 8 columns for parity virtual disks. But it will use all disks in the pool to create the virtual disk. So if you have 14 disks in the pool, it will use all 14
    disks with a rotating parity, but still with 8 columns (1 parity slab per 7 data slabs). Then, and unexpectedly, the write performance of this is a little worse than if you were just to use 8 disks. Also, the efficiency of being able to fully use different
    sized disks is much higher with multiples of 8 disks in the pool.
    I have 32 underlying disks but a maximum of 28 disks available to the OS (due to the 8 array limit for RAIDCore). But my best configuration for performance and efficiency is when using 24 disks in the pool.
    (b) Use disks as similar sized as possible in the SS pool.
    This is about the efficiency of being able to use all the space available. SS can use different sized disks with reasonable efficiency, but it can't fully use the last hundred GB of the pool with 8 columns - if there are different sized disks and there
    are not a multiple of 8 disks in the pool. You can create a second virtual disk with fewer columns to soak up this remaining space. However, my solution to this has been to put my smaller disks on the RAIDCore controller, and group them as RAID0 (for equal
    sized) or JBOD (for different sized) before presenting them to SS. 
    It would be better if SS could do this itself rather than needing a RAID controller to do this. e.g. you have 6x 2TB and 4x 1TB disks in the pool. Right now, SS will stripe 8 columns across all 10 disks (for the first 10TB /8*7), then 8 columns across 6
    disks (for the remaining 6TB /8*7). But it would be higher performance and a more efficient use of space to stripe 8 columns across 8 disk groups, configured as 6x 2TB and 2x (1TB + 1TB JBOD).
    (c) For maximum performance, use Windows to stripe different virtual disks across different pools of 8 disks each.
    On my hardware, each SS parity virtual disk appears to be limited to 490MB/sec reads (70MB/sec/disk, up to 7 disks with 8 columns) and usually only 55MB/sec writes (regardless of the number of disks). If I use more disks - e.g. 16 disks, this limit is
    still in place. But you can create two separate pools of 8 disks, create a virtual disk in each pool, and stripe them together in Disk Management. This then doubles the read and write performance to 980MB/sec read and 110MB/sec write.
    It is a shame that SS does not parallelize the virtual disk access across multiple 8-column groups that are on different physical disks, and that you need work around this by striping virtual disks together. Effectively you are creating a RAID50 - a Windows
    RAID0 of SS RAID5 disks. It would be better if SS could natively create and use a RAID50 for performance. There doesn't seem like any advantage not to do this, as with the 8 column limit SS is using 2/16 of the available disk space for parity anyhow.
    You may pay a space efficiency penalty if you have unequal sized disks by going the striping route. SS's layout algorithm seems optimized for space efficiency, not performance. Though it would be even more efficient to have dynamic striping / variable column
    width (like ZFS) on a single virtual disk, to fully be able to use the space at the end of the disks.
    (d) Journal does not seem to add much performance. I tried a 14-disk configuration, both with and without dedicated journal disks. Read speed was unaffected (as expected), but write speed only increased slightly (from 48MB/sec to
    54MB/sec). This was the same as what I got with a balanced 8-disk configuration. It may be that dedicated journal disks have more advantages under random writes. I am primarily interested in sequential read and write performance.
    Also, the journal only seems to be used if it in on the pool before the virtual disk is created. It doesn't seem that journal disks are used for existing virtual disks if added to the pool after the virtual disk is created.
    Final configuration
    For my configuration, I have now configured my 32 underlying disks over 5 controllers (15 over 2x PCI-X RAIDCore BC4852, 13 over 2x PCIe Supermicro AOC-SASLP-MV8, and 4 over motherboard SATA), as 24 disks presented to Windows. Some are grouped on my RAIDCore
    card to get as close as possible to 1TB disks, given various limitations. I am optimizing for space efficiency and sequential write speed, which are the effective limits for use as a network file share.
    So I have: 5x 1TB, 5x (500GB+500GB RAID0), (640GB+250GB JBOD), (3x250GB RAID0), and 12x 500GB. This gets me 366MB/sec reads (note: for some reason, this is worse than the 490MB/sec when just using 8 disks in a virtual disk) and 76MB/sec writes (better than the 55MB/sec on an 8-disk group). On space efficiency, I'm able to use all but 29GB of the pool in a single 14,266GB parity virtual disk.
    I hope these results are interesting and helpful to others!
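    For reference, the 8-column parity layout from suggestion (a) can be pinned explicitly when the virtual disk is created (a sketch; the pool and disk names are hypothetical):

    # Create a parity space with an explicit 8-column layout
    # (requires at least 8 physical disks in the pool).
    PS C:\> New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "ParityVD" `
                -ResiliencySettingName Parity -NumberOfColumns 8 -UseMaximumSize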

  • DNFS and ZFS Storage Appliance: What are the benefits and drawbacks

    Hello All:
    I have a client who has a 4TB database and wants to easily make clones for their DEV environment. The conventional methods (RMAN duplicate) are taking too long because of the size of the db. I'm looking into dNFS as a standalone alternative, and into the ZFS Storage Appliance as well. What are the benefits of configuring just dNFS alone? I'm sure you can clone easily based on the copy-on-write capabilities, but I'm weighing the dNFS option alone (no additional cost) against the ZFS Storage Appliance (which serves dNFS clients as regular NFS) which costs money. Your thoughts?

    Dear Kirkladb,
    as far as I understand your question, you would like to know the roadblocks for using dNFS in combination with a ZFS Storage Appliance.
    First, I'd like to mention that dNFS is not a feature on the appliance: dNFS traffic is perceived as regular NFS traffic, so there is currently no feature that needs extra licenses on the ZFS Storage Appliance if you run dNFS on your client; dNFS is client-driven and requires software on the client. Second, using dNFS does not exclude having snapshots or clones on the appliance, although cloning requires a license to be bought.
    As mentioned by Nitin, the appliance offers many features; some are based on ZFS, some come from the underlying OS, and some come from additional software. You seem to be interested in NFS, which I guess mainly means NFSv3 and NFSv4. The appliance will see dNFS from the clients as regular incoming NFS requests, meaning the client side makes the difference; it is therefore important to have dNFS and maybe InfiniBand drivers at a current level.
    To get a copy of your production database you could go different ways: the appliance offers creating a snapshot (free of charge), which is read-only, and creating a clone (additional cost) on top of the snapshot. You have mentioned RMAN; as additional methods, the Snap Management Utility (Clone License) will also help here, and maybe Data Guard. I am sure there are additional ways not listed.
    The point I wanted to make is that cloning on the ZFS-SA and NFS are different topics.
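    As a side note on the client piece, enabling dNFS is a small change (a sketch for 11gR2; on 11gR1 the ODM library is swapped manually instead, and the oranfstab values below are hypothetical):

    # Link the dNFS ODM library into the Oracle home (11.2 onward).
    cd $ORACLE_HOME/rdbms/lib
    make -f ins_rdbms.mk dnfs_on

    # $ORACLE_HOME/dbs/oranfstab - one entry per NFS server/export, e.g.:
    # server: zfssa1
    # path: 192.168.10.20
    # export: /export/oradata mount: /u02/oradata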
    Best Regards
    Peter

  • Creating multiple storage pools on a 7110

    Just received our first 7110 and so far it seems great. We are planning on using this device for database storage (MS SQL Server) and as such wanted to configure 2 storage pools; however, through the config UI I cannot figure out how to make that happen. Is it possible on a 7110 to have multiple storage pools carved out of the base device? I've read docs where it appears that if we add an additional JBOD we can make that happen, but I can't find anything related to the base device. If someone could point me in the right direction that would be great.

    The 7110 only supports one storage pool, and you are not able to add additional storage to the 7110 platform at the current release. You are able to create two pools with a 7410. Most of the time you are thinking in terms of older storage where you have to define the RAID sets - it's a little different with this, as ZFS provides the magic. I think if you set it up and start to play you will see the performance - with the 7210 you can also start to add SSDs if needed.

  • PX6-300D storage pool

    Hello,
    I have a PX6-300D NAS with 6 x 3TB drives, configured as RAID 5. A few days back it showed the message "The amount of free space on your 'Shares' volume is below 5% of capacity" and asked to overwrite drive 6. So I ejected that drive and, after a shutdown, put it back and restarted; it then started rebuilding. At 43% it got stuck and showed a red indication, and it also reported that the storage pool had failed. The web interface showed the storage-pool failure along with the free-space message. I tried restarting and everything else, but nothing worked. I then contacted customer care; they said that a few of my drives (3 or 4) had failed and asked for a dump file. Because I was on old firmware the dump file was not being generated, so I upgraded the firmware as instructed. They concluded that a few hard drives had failed and that I should go to a data-recovery provider. I really don't understand how my storage pool could be corrupted without any notice or message. If it's a NAS with RAID protection, my data should be protected. I really need my data back.
    To this day I am receiving messages like:
    The amount of free space on your 'Shares' volume is below 5% of capacity.
    Data protection is being reconstructed on Storage Pool gstv. Reconstruction is 8% complete.
    Reconstruction restarts at 45% and starts over from 0.
    I wonder, if reconstruction is happening, how can my drives have failed?
    Please help me ......

    Thanks for your reply, Westly.
    I wondered about the data capacity of the Lenovo PX6-300D. I observed it showing "The amount of free space on your 'Shares' volume is below 5% of capacity". After that it would hang a few times, and finally the RAID got corrupted or HDDs failed. I have never faced a problem like this with RAID before. I want to ask: is there a way to keep the file system mountable from another system, so that I can recover if it fails again? I mean, I used to work with a Tyrone server with Fibre Channel; it was mounted on a Windows system and shared through SAN licensing software. Is something like that possible here? Could I mount the LenovoEMC PX6-300D volumes on a system and share them through a Windows PC, so that my file and folder structure would be safer? Then I would not have to go for a costly data-recovery service, and could even try free recovery software.
    Please reply,
    Thanks,

  • VirtualDisk on Windows Server 2012 R2 Storage Pool stuck in "Warning: In Service" state and all file transfers to and from is awfully slow

    Greetings,
    I'm having some trouble with my Windows Storage Pool and my VirtualDisk running on a Windows Server 2012 R2 installation. It consists of 8x Western Digital RE-4 2TB drives + 2x Western Digital Black Edition 2TB drives, configured in a single-disk parity setup; the virtual disk uses fixed provisioning (max size) and is formatted with ReFS.
    It's been running solid for months besides some awful write speeds at times; it seems the write performance running ReFS compared to NTFS is not that good.
    I was recommended to add SSDs for journalling in order to boost write performance. Sadly I seem to have screwed up this part: you need to do this through PowerShell, and it needs to be done before creating the virtual disk. I managed to add my SSD to the Storage Pool and then remove it.
    This seems to have caused some awkward issues. I'm not quite sure why, as the virtual disk is "fixed", so adding the SSD to the Storage Pool shouldn't really do anything, right? But after I did this my virtual disk has been stuck in "Warning: In Service" and it seems to be stuck. It's been 4-5 days and it's still the same, and the performance is currently horrible. Moving 40GB of data off the virtual disk took me about 20 hours or so. Launching files under 1MB off the virtual disk takes several minutes etc. It's pretty much useless.
    The GUI is not providing any useful information about what's going on. What does "Warning: In Service" actually imply? How am I supposed to know how long this is supposed to take? Running Get-VirtualDisk in PowerShell does not provide any useful information either. I did try to do a repair through the Server Manager GUI, but it goes to about 21% within 2-3 hours and then drops back down to 10%. I have had the repair running for days but it won't go past 21% without dropping back down again.
    Running the repair through PowerShell yields the same results, but if I detach the virtual disk and then try to repair through PowerShell (the GUI won't let me do repairs on detached virtual disks) it will just run for a split second and then close.
    After doing some "Googling" I've seen people mention that the repair is not able to finish unless I have at least the same amount of free space in the Storage Pool as the largest drive in my Storage Pool is housing, so I added a 4TB drive, as due to me running fixed provisioning I had used all the space in the pool; but the repair is still not able to go past 21%.
    As I am running "fixed provisioning" I guess adding an extra drive to the pool doesn't make much difference, as it's not available to the virtual disk? So I went ahead and deleted 3TB of data on the virtual disk; now I've got about 4TB of free space on the virtual disk, so there should be plenty of room for Windows Server 2012 R2 to rebuild the parity or whatever it's trying to do. But it's still the same: the repair won't move past 21%, the virtual disk is still stuck in "Warning: In Service" mode, and the performance keeps being horrible, so taking a backup will take forever at these speeds...
    Am I missing something here? All the drives in the pool are working fine; I have verified this using various bootable tools. So why is this happening, and what can I do to get the virtual disk running at full state again? Why doesn't the GUI prompt you with any kind of usable information?
    Best regards, Thomas Andre

    Hi,
    Please run the chkdsk /f /r command on the virtual disk to have a try. In the meantime, run the following commands in PowerShell and share the output:
    get-virtualdisk -friendlyname <name> | get-physicaldisk | fl
    get-virtualdisk -friendlyname <name> |fl
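    Also, the repair/regeneration activity behind "Warning: In Service" surfaces as a storage job, which is the easiest way to see the progress the GUI hides (a sketch; the friendly name is hypothetical):

    # List background storage jobs (repair/regeneration) and their progress.
    PS C:\> Get-StorageJob

    # Kick off the repair explicitly and then watch the job list.
    PS C:\> Repair-VirtualDisk -FriendlyName "VDisk01" -AsJob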
    Best Regards,
    Mandy
    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Subscriber Support, contact [email protected]

  • After upgrading to 8.1 Pro from 8.0 Pro, my Storage Spaces and Storage Pool are all gone.

    Under 8.0 I had three 4-terabyte drives set up as a Storage Pool in Storage Spaces. I had five storage-space drives using this pool, and I had been using them for months with no problems. After upgrading to 8.1 (which gave no errors) the drives no longer exist. Going into "Storage Spaces" in the control panel, I do not see my existing storage pool or storage drives. I'm prompted to create a new Pool and Storage Spaces. If I click "Create Pool", it does not list the three drives I used previously as available to add.
    Device Manager shows all three drives as present and OK.  
    Disk Management shows Disks 0,1,2,6.  The gap in between 2 and 6 is where the 3,4,5 storage spaces drives were.  
    Nothing helpful in the event log or the services.
    I've downloaded the ReclaiMe Storage Spaces recovery tool and it sees all of my logical drives with a "good" prognosis for recovery.  I've not gone down that road yet though because it requires separate physical drives to copy everything to
    and they want $299 for the privilege.
    Does anyone have any ideas?  I'm thinking of doing a fresh 8.1 install to another drive to see if it can see it or reinstalling 8.1 to the existing drive in the hope that it will just suddenly work.  Or possibly going back to 8.0.
    Thanks for your help!
    Steve

    Hi,
    “For parity spaces you must backup your data and delete the parity spaces. At this point, you may upgrade or perform a clean install of Windows 8. After the upgrade or clean installation is complete, you may recreate parity spaces and
    restore your data.”
    I’d like to share the following article with you for reference:
    Storage Spaces Frequently Asked Questions
    (Pay attention to the part: "How do I prepare Storage Spaces for upgrading from the Windows 8 Consumer Preview to Windows 8 Release Preview?")
    http://social.technet.microsoft.com/wiki/contents/articles/11382.storage-spaces-frequently-asked-questions-faq.aspx#How_do_I_prepare_Storage_Spaces_for_upgrading_from_the_Windows_8_Consumer_Preview_to_Windows_8_Release_Preview
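    Before going the recovery-tool route, it may also be worth checking whether the pool is still present but unhealthy at the API level (a sketch):

    # The control panel hides pools it cannot mount; PowerShell may still list them.
    PS C:\> Get-StoragePool | Format-Table FriendlyName, HealthStatus, OperationalStatus, IsReadOnly
    PS C:\> Get-VirtualDisk | Format-Table FriendlyName, HealthStatus, OperationalStatus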
    Regards,
    Yolanda
    We are trying to better understand customer views on the social support experience, so your participation in this interview project would be greatly appreciated if you have time.
    Thanks for helping make community forums a great place.

  • Server 2012 R2 Storage Pool Disk Identification Method

    Hi all,
    I'm currently using Server 2012 R2 Essentials with a Storage Space consisting of 7 3TB disks. The disks are connected to an LSI MegaRAID controller which does not support JBOD so each disk is configured as a single disk RAID0. The disks are connected to
    the controller using SAS Breakout Cables (SATA to SFF-8087).
    I am considering moving my server into a new chassis. The new chassis has a SAS Backplane for drive attachment which means I would be re-cabling to use SFF-8087 to SFF-8087 cables instead and in doing so, the channel and port assignment on the LSI MegaRAID
    will change.
    I know that the LSI card will have no problem identifying the disk as the same disk when it's connected to a different port or channel on the controller, but is the same true for the Storage Space?
    How does Storage Spaces track the identity of the individual disks?
    Just to be clear, the hardware configuration otherwise will not be changing. Motherboard, CPU, RAID controller etc will all be the same, it will just be moving everything into a new chassis.

    Hi,
    If the disks are still recognized as the same, the storage space should be recognized as well.
    You could test by doing the replacement and seeing whether the storage pool is recognized. If not, you can still change back to the original cabling and the storage pool will come back to work; then we may need to find a way to migrate your data. Personally, I think it will work directly.
    Note: backup important files is always recommended. 
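    One way to verify after recabling (a sketch): Storage Spaces keeps its pool metadata on the disks themselves, so identity should follow the drives rather than the controller port, and the UniqueIds should be unchanged:

    # Compare this listing before and after moving to the new chassis.
    PS C:\> Get-PhysicalDisk | Sort-Object UniqueId | Format-Table FriendlyName, UniqueId, OperationalStatus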
    If you have any feedback on our support, please send to [email protected]

  • Win 8.1 Storage Pool not allowing "add drive" nor allow expand capacity

    I have one Storage Space within one Storage Pool (parity mode) containing 4 identical hard drives.
    Used for data storage, it appears to be functioning normally and has filled 88% of capacity (i.e. 88% x 2/3 of physical capacity, parity mode).
    The only other storage on this new PC is an SSD used for the OS (Win 8.1 Pro) and application software.
    "Manage Storage Spaces" displays this warning message about adding drives:
    <   Warning                               >
    <   Low capacity; add 3 drives   >
    After clicking "add drives", it displays:
    "No drives that work with Storage Spaces are available. Make sure that the drives that you want to use are connected.".
    However, I have connected another two identical hard drives via SATA cables, and "Disk Management" displays these two drives as available.
    In summary:
    "Manage Storage Spaces" does not find these drives as available, although they show correctly in Disk Management.
    btw - I removed the pre-existing partitioning on the 'new' drives so now they show only as "unallocated" in "Disk Management". (I did
    likewise before Storage Pool found the 4 original drives)
    Perhaps the problem is that the total nominal capacity of the Storage Space must be increased before I can add more drives?
    Microsoft says that the capacity of Storage Pools can be increased but cannot be decreased, yet the computer displays no "Change" button by which this can be done. There is supposed to be a "Change" button, but it is not displaying for me. So "Manage Storage Spaces" offers me no option to manage the "size" of the pool.
    Only five options are displayed:
    Create a storage space     (ie. from the small amount remaining unused in the Pool)
    Add drives     (.... as explained already)
    Rename pool    (only renames the storage space)
    Format        (ie. re-format and so lose all current data)
    Delete         (ie. delete the storage space and so lose all current data)
    Using Google, I find nothing bearing on this problem except the most basic instructions for setting up a storage space!
    Can you help?
    The problem is that the Storage Pool is not displaying a "button" to increase capacity, and when I click "add drives" it finds no hard drives available.

    Hi,
    I would suggest you launch Device Manager, then expand Disk drives. Right-click the disk listed as "Disk drive" and select Uninstall. Then, on the Action menu, click Scan for hardware changes to reinstall the disk.
    Please also take a look at this link, in particular the part "How do I increase pool capacity?":
    http://social.technet.microsoft.com/wiki/contents/articles/11382.storage-spaces-frequently-asked-questions-faq.aspx#How_do_I_increase_pool_capacity 
    According to the link, to extend a parity space, the pool would need the appropriate number of columns available to accommodate the layout of the disk.
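    If the control panel still refuses to list the new drives, the PowerShell path can tell you more about why (a sketch; the pool name is hypothetical):

    # Drives must report CanPool = True to be offered to a pool.
    PS C:\> Get-PhysicalDisk | Format-Table FriendlyName, CanPool, CannotPoolReason

    # Add any poolable drives to the existing pool directly.
    PS C:\> Add-PhysicalDisk -StoragePoolFriendlyName "Storage pool" -PhysicalDisks (Get-PhysicalDisk -CanPool $true)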
    Yolanda Zhu
    TechNet Community Support
