ZFS Dedup question

Hello All,
I have been playing around with ZFS dedup in Open Solaris.
I would like to know how ZFS stores the dedup table. I know it is usually in memory, but it must keep a copy on disk. How is this table protected? Are there multiple copies, as with the uberblock?
Thanks
--Pete
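
For what it's worth, the on-disk table can be examined with zdb (read-only; "tank" below is a placeholder pool name). As far as I understand it, the DDT is stored as ordinary pool metadata, so it gets the same copy-on-write, checksum and ditto-copy protection as other metadata rather than an uberblock-style ring of copies:
# zdb -DD tank     # dump DDT statistics and a histogram of unique vs. duplicated blocks
# zdb -S tank      # simulate dedup on existing data to estimate the achievable ratio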

Similar Messages

  • More Major Issues with ZFS + Dedup

    I'm having more problems - this time, very, very serious ones, with ZFS and deduplication. Deduplication is basically making my iSCSI targets completely inaccessible to the clients trying to access them over COMSTAR. I have two commands right now that are completely hung:
    1) zfs destroy pool/volume
    2) zfs set dedup=off pool
    The first command I started four hours ago, and it has barely removed 10G of the 50G that were allocated to that volume. It also seems to periodically cause the ZFS system to stop responding to any other I/O requests, which in turn causes major issues on my iSCSI clients. I cannot kill or pause the destroy command, and I've tried renicing it, to no avail. If anyone has any hints or suggestions on what I can do to overcome this issue, I'd very much appreciate that. I'm open to suggestions that will kill the destroy command, or will at least reprioritize it such that other I/O requests have precedence over this destroy command.
    Thanks,
    Nick

    To add some more detail, I've been reviewing iostat and zpool iostat output for a couple of hours, and am seeing some very, very strange behavior. There seem to be three distinct patterns going on here.
    The first is extremely heavy writes. Using zpool iostat, I see write bandwidth in the 15MB/s range sustained for a few minutes. I'm guessing this is when ZFS is allowing normal access to volumes and when it is actually removing some of the data for the volume I tried to destroy. This only lasts for two to three minutes at a time before progressing to the next pattern.
    The second pattern is characterized by heavy, heavy read access - several thousand read operations per second, and several MB/s of bandwidth. This also lasts for five or ten minutes before proceeding to the third pattern. During this time there is very little, if any, write activity.
    The third and final pattern is characterized by absolutely no write activity (0s in both the write ops/sec and write bandwidth columns) and very small read activity. By small read activity, I mean 100-200 read ops per second and 100-200K of read bandwidth per second. This lasts for 30 to 40 minutes, and then the cycle returns to the first pattern.
    I have no idea what to make of this, and I'm out of my league in terms of ZFS tools to figure out what's going on. This is extremely frustrating because all of my iSCSI clients are essentially dead right now - this destroy command has completely taken over my ZFS storage, and it seems like all I can do is sit and wait for it to finish, which, at this rate, will be another 12 hours.
    Also, during this time, if I look at the plain iostat command, I see that the read ops for the physical disk and the actv are within normal ranges, as are asvc_t and %w. %b, however, is pegged at 99-100%.
    Edited by: Nick on Jan 4, 2011 10:57 AM
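
    If anyone hits this again, a rough way to gauge whether the dedup table even fits in RAM (the pool name is a placeholder, and the ~320 bytes per in-core DDT entry is the commonly quoted estimate, not an exact figure):
    # zdb -DD tank            # total DDT entries; multiply by ~320 bytes for an in-core size estimate
    # echo ::arc | mdb -k     # current ARC size and how much of it is metadata
    Once the DDT no longer fits in the ARC, every block freed by the destroy turns into random reads to pull DDT entries back in, which matches the long read-heavy, write-free stretches described above.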

  • Zfs newbie question

    Hello
    I have a fresh Solaris 10/6 install. My disk has 20GB, but I used only
    10GB for the base Solaris installation and home directory.
    I have 10GB free without any partition.
    I would like to use my free space to store my zones using a ZFS filesystem.
    For example:
    /zones
    /zones/zone1
    /zones/zone2
    How can I create the "/zones" filesystem?
    zpool create zones ?????
    ?????: What must I write?
    This is my filesystem setup now (Solaris default):
    df -h
    Filesystem             size   used  avail capacity  Mounted on
    /dev/dsk/c0d0s0 4,7G 3,1G 1,5G 67% /
    /devices 0K 0K 0K 0% /devices
    ctfs 0K 0K 0K 0% /system/contract
    proc 0K 0K 0K 0% /proc
    mnttab 0K 0K 0K 0% /etc/mnttab
    swap 815M 716K 815M 1% /etc/svc/volatile
    objfs 0K 0K 0K 0% /system/object
    /usr/lib/libc/libc_hwcap1.so.1
    4,7G 3,1G 1,5G 67% /lib/libc.so.1
    fd 0K 0K 0K 0% /dev/fd
    swap 815M 8K 815M 1% /tmp
    swap 815M 28K 815M 1% /var/run
    /dev/dsk/c0d0s7 3,5G 3,8M 3,5G 1% /export/home
    Thanks in advance.
    roberto pereyra

    At least make a token attempt to read the documentation
    http://docs.sun.com/app/docs/doc/819-5461
    Then if you have a specific question someone will more likely to answer you.
    Otherwise it sounds like you're saying, "I'm too lazy to read any documentation. Can someone tell me all about it?"
    BTW, I will point out that at the moment, if you put your zones on a ZFS filesystem, you will be unable to install new Solaris upgrades. You can still patch, but not upgrade to a new release. They may add that functionality at some stage, but for the moment, it's not there.
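
    For completeness, a minimal sketch of the commands the original poster was after, assuming the free 10GB has been given to a slice such as c0d0s3 (the slice name is only an example):
    # zpool create zones c0d0s3
    # zfs create zones/zone1
    # zfs create zones/zone2
    # zfs list -r zones
    By default the pool mounts at /zones and each child filesystem under it, which matches the layout sketched in the question.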

  • Zfs snapshot question

    Hi guys,
    I will really appreciate it if someone can answer this for me.
    I do understand that you can use snapshots to back up file systems. But they also use up pool space when their file systems grow.
    So, is it necessary to create zfs snapshots even when you already have a full system backup in place?
    Thank you very much for your kind explanation.
    Arrey

    985798 wrote:
    So, is it necessary to create zfs snapshots even when you already have a full system backup in place?
    Nobody will force you to create or keep snapshots, and if you are happy with taking "classic" backups then there may be no need for additional snapshots. And since snapshots will also take up space in your pool, it is usually a good idea to keep them only for a short period and delete them periodically. I like to use snapshots for two purposes:
    - create a snapshot, then write that snapshot to tape and destroy it afterwards; that way you can guarantee that the tape backup is consistent
    - create snapshots at regular intervals and keep them around for a few days, so that if I need to restore a file from just a day ago I don't have to go back to tapes but can fetch it from the snapshot. So that would be in addition to regular backups
    cheers
    bjoern
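
    A rough sketch of the two uses bjoern describes (dataset, snapshot and tape device names are placeholders):
    # zfs snapshot tank/home@nightly
    # zfs send tank/home@nightly > /dev/rmt/0          # or pipe the stream into your backup tool
    # zfs destroy tank/home@nightly                    # drop it once the tape copy is verified
    # cp /tank/home/.zfs/snapshot/nightly/somefile .   # restoring a single file straight out of a kept snapshot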

  • ZFS Configuration question

    Hello,
    I have 2 x 140GB (hw Raid-0 + spare). About 20GB is allocated to Solaris on UFS (standard installation and partitions/slices).
    I would like to allocate the 120GB space left to ZFS. I found a lot of documentation on how to create a ZFS pool on an empty disk drive. The documentation is less clear on how to do it on a drive already in use by another FS.
    One of the documents I found talks about using format / fdisk and defining that partition as DOS (or anything else). I tried different things along those lines, but a piece of the puzzle is missing.
    Any help will be appreciated.
    Michel

    Michel wrote: "I have no room for the ZFS partition on a slice. I have a lot of free space unallocated to any partition, and this is where I would like the ZFS partition."
    You would need to allocate it to one yourself.
    Michel wrote: "I saw a lot of confusion between slices and partitions."
    Yes. In many cases they are used interchangeably. However, I often try to use the term "slice" for the Solaris label subdivision, leaving "partition" to refer to x86 architecture-specific divisions.
    Solaris on x86 can only (easily) use a single x86 partition of type "Solaris". You can't use another partition (whether with ZFS or not).
    Michel wrote: "In my case, I have a 20GB partition where I have 10 slices created when I installed Solaris on UFS. The 120GB left is not allocated to any slice."
    If it's visible to the Solaris label, then allocate it to a slice.
    Darren
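
    A sketch of what Darren is suggesting, assuming the free cylinders end up in, say, slice 4 of the existing Solaris partition (slice number and pool name are placeholders):
    # format                          # select the disk, use the partition menu to assign the free cylinders to s4, then label
    # zpool create datapool c0t0d0s4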

  • ZFS Configuration Question - Also posted in Solaris x86

    Hello,
    I have 2 x 140GB (hw Raid-0 + spare). About 20GB is allocated to Solaris on UFS (standard installation and partitions/slices).
    I would like to allocate the 120GB left on the disk drive to ZFS. That space is not allocated to any file system for now.
    I found a lot of documentation on how to create a ZFS pool on an empty disk drive. The documentation is less clear on how to do it on a drive already in use by another FS.
    Any help will be appreciated.
    Michel

    Darren,
    I just set up another Solaris x86 server tonight to play with. That future production server is in its own VLAN with no Internet access for now.
    The test server has the same configuration, but less disk space.
    Here is the output of prtvtoc
    # prtvtoc /dev/dsk/c0t0d0s0
    * /dev/dsk/c0t0d0s0 partition map
    * Dimensions:
    * 512 bytes/sector
    * 63 sectors/track
    * 255 tracks/cylinder
    * 16065 sectors/cylinder
    * 2350 cylinders
    * 2348 accessible cylinders
    * Flags:
    * 1: unmountable
    * 10: read-only
    *                            First      Sector      Last
    * Partition  Tag  Flags      Sector       Count     Sector   Mount Directory
            0     2    00      10522575    27198045   37720619   /
            1     3    01         16065     8401995    8418059
            2     5    00             0    37720620   37720619
            7     8    00       8418060     2104515   10522574   /export/home
            8     1    01             0       16065      16064
    Output of format
    selecting c0t0d0
    [disk formatted]
    Warning: Current Disk has mounted partitions.
    /dev/dsk/c0t0d0s0 is currently mounted on /. Please see umount(1M).
    Total disk size is 8920 cylinders
    Cylinder size is 16065 (512 byte) blocks
                                          Cylinders
    Partition   Status    Type          Start   End   Length    %
    =========   ======    ============  =====   ===   ======   ===
        1       Active    Solaris2          1  2350     2350    26
    As you can see, the 1st partition is used at 26%. With fdisk, I can create a 2nd partition. My problem is to get that 2nd partition available to Solaris and create a zpool in that unused space.
    Michel

  • Dedupe question

    Does dedupe work on a virtual file server's data drive that is a vhd file within a cluster shared volume?

    Does dedupe work on a virtual file server's data drive that is a vhd file within a cluster shared volume?
    Microsoft deduplication does not work for running VMs (unless they are configured for VDI, see link below) but you can enable dedupe INSIDE your guest VM and indeed save some space. 
    Microsoft Windows Server 2012 Dedupe
    http://technet.microsoft.com/en-us/library/hh831700.aspx
    Not good candidates for deduplication:
    Hyper-V hosts
    What's new with dedupe in R2
    http://technet.microsoft.com/en-us/library/dn486808.aspx
    Feature/functionality: Data deduplication for remote storage of Virtual Desktop Infrastructure (VDI) workloads
    New or updated?: New
    Description: Optimize active virtual hard disks (VHDs) for Virtual Desktop Infrastructure (VDI) workloads by implementing Data Deduplication on Cluster Shared Volumes (CSVs).
    Here's a thread where the OP is running a file server inside a VM with dedupe enabled and using VHDX shrink to save space on the host (CSV). It's a kludge, but it would work with some pressure applied (I would not go for this, but it's worth at least reading). See:
    File Server inside a VM + dedupe enabled
    http://social.technet.microsoft.com/Forums/windowsserver/en-US/74f30d29-b0f3-4955-9844-46af0c7db683/server-2012-not-compacting-vhdx-files?forum=winserverhyperv#04e7eb1d-1962-487d-8b1e-8d2775e2c77f
    Hope this helped :)
    StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts.

  • ZFS mirrors question

    Let's say I have one disk, c0t0d0. I have three zpools on this disk, including the root pool (rpool). If I add a mirror to the root pool (say disk c0t1d0), will only the root pool get mirrored? The other two pools are on the same disk - do they have to be specified as being mirrored separately, or does the entire disk get mirrored by extension, regardless of the pool? If the three pools are all on the same disk, but on different slices, do they have to be mirrored separately then as well?

    No, you'll have to mirror the other pools separately.
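
    For example, if the other two pools live on slices s3 and s4 of the same disks (the slice names here are only illustrative), each pool needs its own attach:
    # zpool attach rpool c0t0d0s0 c0t1d0s0
    # zpool attach pool2 c0t0d0s3 c0t1d0s3
    # zpool attach pool3 c0t0d0s4 c0t1d0s4
    For the root pool, remember to also install the boot block on the newly attached half (installboot on SPARC, installgrub on x86) so the machine can boot from either disk.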

  • GNOME System Monitory says no mem consumed by ZFS??

    Hi, I'm guessing there is probably a simple answer to this but Google and I haven't found it.
    With OpenSolaris snv_111b, the GNOME System Monitor would show almost all memory consumed. (By the ZFS cache...a good thing.)
    With Solaris Express snv_151a, System Monitor shows <1GB (out of 16GB) consumed.
    Would I be correct in assuming that the ZFS cache is no longer counted in the System Monitor, but ZFS is still using almost all available memory (hopefully)?
    The reason for concern is that with snv_111b, I was getting a reliable 90 MB/s transfer over SMB (with Solaris as the Samba server), without even jumbo frames. But now, on the same server and network config but with a clean install of snv_151a and minimal tweaking, I'm getting terrible performance at about 3 to 10 MB/s.
    (I've posted the performance problem specifically as a separate post, so as not to double-post I don't want to dwell on that here - this is just about the System Monitor / ZFS cache question.)
    Thanks!!
    -Jim
    Specs, if it matters for this question:
    Machine:
    - 2*4-core Xeons
    - 16gb ECC RAM
    - Onboard e1000g NIC
    - 20 hot-swap bays
    - cooling out the wazoo
    rpool:
    - mirrored 30gb SSDs
    Pool "zp3" config:
    - LSI SAS1068E-R c9 HBAs
    - 6 * 3-way mirrors, 1tb, 7200 RPM enterprise-class SATA
    - 2 hot spares
    - 2 * 30gb SSD for L2ARC
    Pool "zp3" settings:
    zp3 version 5 -
    zp3 sync disabled local
    zp3 com.sun:auto-snapshot true local
    zp3 recordsize 128K default
    zp3 utf8only on -
    zp3 normalization formD -
    zp3 casesensitivity insensitive -
    zp3 encryption off -
    zp3 compression on local
    zp3 compressratio 1.07x -
    zp3 dedup off local
    zp3 logbias throughput local
    zp3 used 4.21T -
    zp3 usedbysnapshots 311G -
    zp3 usedbydataset 3.90T -
    zp3 usedbychildren 8.78G -
    zp3 usedbyrefreservation 0 -
    zp3 available 1.04T -
    Type of data and access:
    - Mostly large media files (DNG photos, large photoshop project files, high bitrate 1080p video, 32-bit/96khz audio, etc.)
    - Plenty of small "office" data files too, but of secondary importance.
    - Low usage. Typically accessed by one CIFS/SMB client - and one application - at a time. Needs throughput, not IOPS.
    Edited by: user13689618 on Feb 8, 2011 12:48 PM (Oops, used ">" as list bullets.)
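
    One way to confirm the ARC is still doing its job even though System Monitor no longer counts it (these are the standard arcstats kstat names):
    # kstat -p zfs:0:arcstats:size     # current ARC size in bytes
    # kstat -p zfs:0:arcstats:c_max    # maximum size the ARC is allowed to grow to
    # echo ::arc | mdb -k              # a fuller breakdown of the same counters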

  • Slow Performance with De-duplication turned on

    Hello, I have been reading through ZFS tuning guides, but I can't find anything that seems to explain why I am getting such poor transfer speeds. The setup is as follows:
    Solaris Express 11
    4 - 500GB SATA drives in a RAIDZ
    8 GB of RAM
    Zpool size = 1.81T, alloc=1.03T, FREE=804G, DEDUP 1.31x
    Compression is off
    Looking at the DDT histogram, there are 685739 duplicate entries, and 5869199 unique entries. Total allocated is 6.25M, total referenced is 8.23M.
    When copying a new file with deduplication on, I'm getting approximately 2-3 MB/second, CPU is hovering around 20-40%. I get the same speeds when copying data off the server (using CIFS). The troubling part is that there is 4500M Free memory while the files are transferring. 2015M total swap, 2015M free swap. Why isn't ZFS using all of it? I have considered adding a cache drive, but it doesn't seem like that would help if all of the RAM isn't being utilized. Any thoughts or suggestions?
    EDIT: Also, after I turn deduplication off on the share, I eventually start getting transfer speeds in the 25-35 MiB/s range, and then free memory goes down to 958M as you would expect.
    Edited by: user12005271 on May 4, 2011 2:02 PM

    It is my understanding that ZFS deduplication requires a lot of memory to hold the deduplication tables, which can lead to poor performance. I seem to recall seeing a blog which calculates how much memory is needed for a given disk size, but I'm not finding it now. I'm no expert, so I can't give any direct advice, but here is a reference to an article which in turn refers to a couple of blog entries about ZFS dedup performance, which might be helpful in getting started researching this topic. Of course, you could also do a web search on solaris express deduplication performance and see what that turns up.
    http://constantin.glez.de/blog/2010/03/opensolaris-zfs-deduplication-everything-you-need-know
    Hope this helps.
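
    As a rough worked example using the DDT figures quoted above (the ~320 bytes per in-core DDT entry is the commonly cited estimate, so treat this as a ballpark):
    5,869,199 unique + 685,739 duplicate entries ≈ 6.55 million DDT entries
    6.55 million entries x ~320 bytes ≈ 2 GB of RAM just to keep the dedup table cached
    On an 8 GB box that table has to compete with everything else the ARC is caching, which fits the slowdown described; a cache (L2ARC) device can hold the overflow of DDT entries, so it may help more than it first appears.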

  • Deduplication combined with differencing disks for better or not ?

    We have a test Hyper-V host for various VMs. It is naturally easier for us to start from a parent disk which is sysprepped and move on.
    On the other hand, deduplication (2012 R2) claims to be most effective on VHD library data.
    Is the effectiveness of deduplication different when it comes to differencing VHDs as compared to other disks?
    Assuming it only works when the VM is off: what is the algorithm, and how long does it take to de-dupe? What happens when the original old files (referenced chunks) are updated?
    Shahid Roofi

    The problem with diff disks is their architecture. With time, the derived disk grows to nearly the size of the parent; even a single minor patch, once applied, can cause most of the parent OS to be considered updated.
    Others are achieving wonderful results with dedupe on a VM library:
    http://blog.compower.org/2013/10/31/deduplication-windows-8-1-laptop-great-hyper-v-lab-environment/. They claim 80-95% space savings on VMs.
    I would request VR28DETT to elaborate on the semantics of that dedupe process, which we have been unable to find so far: how it matches similar content, how often it runs, what happens when the referenced data is updated so that the child content is cross-updated, and what the I/O penalties are in achieving that. That is what we are interested to know, rather than just opinions.
    I am sure VR28DETT does have that information to help us out.
    @BrianEH: When we say dedup is for static content, please also elaborate how static. Should it be read-only data? BTW, is there a mention in the TechNet docs that it's only for static content? Please refer to it if there is.
    The question you're asking is entirely rhetorical: you cannot use MSFT off-line deduplication with live data like running VMs (unless that's a VDI scenario, and it's not what you do according to your first post). So for a VM library you CAN use both diff. disks and dedupe (the simplicity of new VM provisioning comes here as a benefit), or you can use dedupe only. For production you can use only diff. disks, or dedupe enabled inside a Windows-running VM (hell to manage with shrinking VHDX and no deduplication between VMs, so very limited use and poor space savings). You can read more about dedupe and running VMs here:
    Dedupe and running VMs
    http://social.technet.microsoft.com/Forums/windowsserver/en-US/e275f38c-a440-4790-bd42-1024d0819000/dedupe-question?forum=winserverfiles#f6d2044c-8e3b-4ee5-a3c0-b663b97c729b
    This decent discussion has all the links and answers for questions you've asked.
    Hope this helped.
    StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts.

  • ZFS Usable Capacity Question

    The question is about usable space in a ZFS 7320.  System has two shelves - total (44) 900GB and (4) write-accelerators, so around 40TB raw.  The summary per ZFS software is as follows:
    (1) Pool
    Striped Log
    Double Parity
    Usable Data - 28.3TB
    Parity - 5.62TB
    Reserved - 480GB
    Spare - 1.64TB
    Data & Parity - 42 Disks
    Spare - 2 Disks
    Log - 4 Disks
    Cache - 4 Disks
    Snapshot can be seen here - http://systemsmarketing.com/ZFS_Snapshot.png
    After spares, parity and reserved there is 28.3TB available.  User wants to add another shelf but wants to be sure it will give him what he needs. 
    Adding a shelf with (24) 900GB is 21.6TB raw.  The goal with that shelf is to be well over 40TB usable, hopefully at least 45TB.  What we are trying to determine is, after adding the shelf, wiping everything and restarting with one pool, how much usable space will be added?  We assume there will still be two spares, so the 1.64TB won't change.  The big question is the parity and whether that allocation is linear.  We're assuming not, as there is probably some initial overhead in that figure, but we don't know.
    Our experience with Sun formatting on earlier drives is that firmware can account for as much as 15%, but we don't know if that applies to these ZFS HDs.  If so, that would take off 6TB of the initial 40, which could explain most of the differential (we had expected around 35TB).  If that 15% came off the 21.6TB shelf, we would be starting at 18TB, and any significant parity loss could be problematic.
    Input on this question would be appreciated.  Thank you.
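
    A back-of-the-envelope check, assuming the appliance reports capacity in binary terabytes (TiB) and that parity overhead stays roughly proportional when the pool is rebuilt:
    42 data+parity disks x 900 GB = 37.8 TB decimal ≈ 34.4 TiB
    34.4 TiB ≈ 28.3 usable + 5.62 parity + 0.48 reserved, which matches the summary above
    usable fraction ≈ 28.3 / 34.4 ≈ 82%
    24 more 900 GB disks ≈ 19.6 TiB raw; at ~82% that would add roughly 16 TiB usable, for around 44 TiB in total
    Under those assumptions the "missing" space is mostly decimal-versus-binary units rather than firmware formatting, and whether the expanded pool clears 45TB depends on how many of the new drives end up as data rather than spares.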

    All of this is prefaced with the that these are general recommendations and may not apply to your actual environment. Also, if connecting a ZFS storage appliance to Exadata, Infiniband connectivity is the only way you should do it.
    I would recommend a ZFS storage appliance in place of using DBFS in most cases, just because DBFS seems to be cumbersome and it performs slowly depending on what you're using it for. Even if you removed everything from the DBFS_DG diskgroup, Oracle does not recommend putting those griddisks into the DATA or RECO diskgroups, because the diskgroups would then become unbalanced. You could use DBFS_DG for things other than a DBFS filesystem, but remember that it's on the slowest area of the disks.
    Would you place all of your RECO area on the ZFS, or just use it as a place for backups? I'd most likely suggest having a small FRA on the Exadata and backing up to the ZFS appliance, but it depends on how tight you are on space and what your workload looks like.
    As for moving the development databases off of the Exadata storage and on to the ZFS, I don't really see the point in having them on the compute nodes at all, except for licensing reasons. It's going to be taking CPU and memory cycles away from databases that are open to utilizing Exadata storage. Keep in mind that the ZFS storage appliance will not present disks to be used within ASM, and aren't going to act like a database running on Exadata (with the exception of HCC). As long as you know that going in, you can certainly take databases running on the Exadata and restore them to the ZFS storage appliance.
    As Marc said above, look into your storage needs and see what's using so much space. If it's historical data that doesn't get updated, it's a candidate for HCC.

  • Sun ZFS Virtual Box Appliance - Performance question

    Hope this is the right place to ask this question - apologies if it is not.
    We are testing out the Sun ZFS storage using the handy Virtual Box Virtual Appliance.
    Now I know this is a VM appliance and as such, we're not going to see anything like real life performance on this thing - but I was wondering if there were any tips someone might share with regards to getting the best performance possible?
    We are using the VB appliance to test a lab based OVM setup - and up to now, performance is too dire to even install a simple VM Guest or upload an ISO to a repository.
    The host specification is 24GB RAM and 8 cores - all of which we have presented to the ZFS appliance - however, it doesn't seem to make any difference.
    Is performance of the VB appliance throttled?
    We have got OK performance out of the HP P4000 VSA (again not great - but OK for lab testing).
    Cheers,
    Jeff

    Jeff,
    You are correct - the simulator is for testing a subset of functionality only. I am unaware if the VM will use more than one processor, but the RAM will help. One item I can also comment on based on experience is that iSCSI offers the best performance for demos. I use iSCSI as opposed to cifs / nfs / ftp etc. Looking at DTrace, I get much better performance - but I am running a system with only 2 disks, so even that might top out on your server. Perhaps a remote demo with the Oracle Solution Center?
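
    If it helps, the resources given to the simulator VM can be adjusted from the host (the VM name below is a placeholder, and the VM must be powered off first); whether the appliance software actually uses the extra vCPUs is the separate question raised above:
    # VBoxManage modifyvm "ZFS-Simulator" --cpus 4 --memory 8192
    # VBoxManage showvminfo "ZFS-Simulator" | grep -i -e cpu -e memory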

  • I have question about ZFS feature anyone have knowledge in ZFS please ...

    1. I am trying to understand RAID-Z in ZFS, but I can't. Can anybody tell me about the architecture, or how this RAID technology compares with RAID-5 or RAID-6? I don't know why it is said to do a "full stripe write".
    2. In ZFS, if I don't specify a RAID type when I create a pool, what type of RAID do I get: stripe or concat?
    Thanks in advance
    Ko.

    raidz is comparable to raid 5, but with the benefits of ZFS and without the raid 5 "write hole." raidz2 is comparable to raid 6. If you don't specify a type of raid when creating a pool I believe (though I'm not sure) it will default to striping across the drives. If that doesn't answer your questions, maybe you could be a bit more clear? Hope that helps.
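
    For reference, the pool layout is chosen by the keyword (or lack of one) on the zpool create line; the disk names here are placeholders:
    # zpool create tank c1t0d0 c1t1d0                        # no keyword: dynamic stripe across the disks
    # zpool create tank raidz c1t0d0 c1t1d0 c1t2d0           # single parity, comparable to RAID-5
    # zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0   # double parity, comparable to RAID-6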

  • Question about using ZFS root pool for whole disk?

    I played around with the newest version of Solaris 10/08 over my vacation by loading it onto a V210 with dual 72GB drives. I used the ZFS root partition configuration and all seemed to go well.
    After I was done, I wondered whether using the whole disk for the zpool was a good idea or not. I did some looking around, but I didn't see anything that suggested it was good or bad.
    I already know about some of the flash archive issues and will be playing with those shortly, but I am curious how others are setting up their root ZFS pools.
    Would it be smarter to set up, say, a 9GB slice on both drives so that the root zpool is created on that and mirrored to the other drive, and then create another ZFS pool from the remaining space?

    route1 wrote:
    Just a word of caution when using ZFS as your boot disk. There are tons of bugs in ZFS boot that can make the system un-bootable and un-recoverable.
    Can you expand upon that statement with supporting evidence (BugIDs and such)? I have a number of local zones (sparse and full) on three Sol10u6 SPARC machines and they've been booting fine. I am having problems LiveUpgrading (lucreate) that I'm scratching my head to resolve. But I haven't had any ZFS boot/root corruption.
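
    One practical note on the original question: on Solaris 10 a ZFS root pool has to live on a slice with an SMI label rather than an EFI whole-disk label, so even a "whole disk" root install really sits on an s0 slice covering the disk; a smaller root slice plus a separate data pool, as suggested above, is also a common layout. A sketch of mirroring the root pool on SPARC (device names are placeholders):
    # zpool attach rpool c1t0d0s0 c1t1d0s0
    # installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t1d0s0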
