ASM Volumes on thin-provisioned iSCSI dirtying whole volumes

Hi, we've got some EBS test instances going and are testing auto-extend (in reasonable chunks) on a thin-provisioned iSCSI volume for the 11g database tier. Something doesn't seem right, though: datafiles typically account for about 55GB in use, yet about 95% of the thin-provisioned volume is marked dirty.
Obviously, this sort of negates the point of using thin provisioning at all, but I can't help but think there's something else at work here. Does anyone have experience with this situation, and if so, what parameters can be set to make what we're trying to do actually work right?

Billy, thank you for the helpful reply. In this case, none of that is news to me - we wear a lot of hats where I work, and in this case I not only put together the systems the ASM instance is running on, but also the ASM instance itself (the DB is at 11.1.0.7 with the 6851110 ASM patch recently applied). The system is OEL 5 x86_64; the iSCSI volume is thin-provisioned, has never had a filesystem written to it, is raw disk other than a partition table, holds DATA and FRA, and is configured with autoextend on.
What I'm trying to get at here is why a 500GB volume holding only about 55GB of data would have almost all of its blocks marked dirty - as far as the filer is concerned, something has touched nearly every block on the volume at one point or another. If I'd written a 500GB datafile to it - which, as you say, is pointless where ASM is concerned - I'd understand how the blocks got dirtied, but that never happened. Something in the way ASM behaves (as far as I can see now, anyway) has caused writes to land all over the physical blocks on the drive, even if only momentarily.
So, getting back to my original question: is there a way in ASM to ensure that writes are done in a contiguous fashion, so that the apparent disk usage (from the SAN's point of view) more closely matches the actual amount of data stored? I'm not seeking a 1:1 relationship here, but we're close to 1:10, and I think that's only because that's how big I sized the volume in the first place. As far as it being iSCSI, the only relevance (and the whole point of my question) is that it's how I happened to attach the volume, which was intended to be thin-provisioned and to allocate space only when needed. If I'd made the volume 250GB (and I can test this theory if need be), it would likely dirty 250GB of blocks and still store the same 55GB of data. My hope is that there's a way to get ASM to play nicer on thin-provisioned volumes than what I'm currently seeing.
Thanks again.
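
For anyone comparing the two views of space on a similar setup, a minimal sketch of how to pull the ASM-side numbers and set them against what the filer reports for the LUN. This assumes a disk group named DATA, sqlplus on the path, and the environment pointed at the ASM instance - none of which come from the thread itself:

# run as the ASM/Grid owner with ORACLE_SID set to the ASM instance (e.g. +ASM)
sqlplus -s / as sysasm <<'EOF'
set pagesize 100 linesize 120
-- how much of each disk group ASM has actually allocated vs. its total size
select name, total_mb, free_mb, total_mb - free_mb as used_mb
  from v$asm_diskgroup;
EOF
# compare used_mb for DATA/FRA against the space the SAN reports as consumed for the thin LUN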

Similar Messages

  • ASM Volumes on thin-provisioned SAN dirtying all blocks

    Hi there, sorry for the x-post from database-general, but it was suggested that I do so. Anyhow, we've got 11g (11.1.0.7 with the 6851110 ASM patch recently applied) running on OEL 5 x86_64, with ASM connected to a raw, thin-provisioned iSCSI volume partitioned for DATA and FRA. In every case where we set things up this way, the SAN device reports within a few weeks that the whole volume has been allocated, even though the DB (configured with autoextend on) is holding only about one tenth of the available space on the device. What this means in storage terms is that ASM is somehow writing to nearly every block on the drive, if only momentarily.
    In the original thread, there was speculation that indexing of AUs had led to the dirtying of the whole volume, but that would make more sense if the whole disk had been allocated immediately rather than over the course of a few weeks. My question is: what else could account for this behavior, and what steps can I take to help ensure that ASM behaves correctly on a thin-provisioned volume? (By "correctly" I mean that it writes contiguous blocks of data and doesn't dirty the whole thing.)
    Thanks!

    Hi,
    Recently I had some time and did some tests with thin provisioning and ASM.
    I used storage based on OpenSolaris with ZFS thin provisioning against an 11g R2 database with 11g R2 ASM running on Windows. I created two LUNs and exported them via iSCSI. On the ASM side I formed a single disk group with external redundancy from the two LUNs presented and created one bigfile tablespace of approx. 15 GB total size.
    The storage system shows the LUNs as follows:
    NAME                       PROPERTY       VALUE    SOURCE
    pool1/iscsi-racwin-temp05  volsize        15G      local
    pool1/iscsi-racwin-temp05  usedbydataset  7.45G    -
    pool1/iscsi-racwin-temp06  volsize        15G      local
    pool1/iscsi-racwin-temp06  usedbydataset  7.45G    -
    You can see: 15 GB total size reported while 7.45 GB is allocated. That's pretty normal, due to the datafile created in the disk group.
    During the night I ran a script which imported a schema and dropped it afterwards, repeating these steps indefinitely.
    After more than 24 hours the thin-provisioned disks looked like this:
    NAME                       PROPERTY       VALUE    SOURCE
    pool1/iscsi-racwin-temp05  volsize        15G      local
    pool1/iscsi-racwin-temp05  usedbydataset  7.47G    -
    pool1/iscsi-racwin-temp06  volsize        15G      local
    pool1/iscsi-racwin-temp06  usedbydataset  7.47G    -
    As you can see, there is an extremely small growth in size (from 7.45 GB to 7.47 GB). I observed this growth shortly after starting the very first import. Subsequent imports did not increase the actual allocated volume size.
    So if we exclude the storage as a source of problems, it may be that 11g R1 ASM behaves differently than 11g R2 ASM. I have not yet tested this...
    Ronny Egner
    My Blog: http://blog.ronnyegner-consulting.de
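    For reference, the storage-side numbers quoted above can be re-checked at any time on the ZFS host with something along these lines (volume names as in the listing; the exact output fields may vary by release):
    # report configured size vs. space actually consumed by each thin zvol
    zfs get -o name,property,value,source volsize,usedbydataset \
      pool1/iscsi-racwin-temp05 pool1/iscsi-racwin-temp06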

  • iSCSI and ZFS Thin Provisioning Sparse Volumes - constraints?

    Hello,
    I am running an iSCSI target using COMSTAR.
    I activated Time Slider (Snapshot feature) for all pools.
    Now I want to set up an iSCSI target using thin provisioning, storing the data in a file system rather than a file.
    Is there any official documentation about thin provisioning?
    All I found was
    http://www.cuddletech.com/blog/pivot/entry.php?id=729
    http://www.c0t0d0s0.org/archives/4222-Less-known-Solaris-Features-iSCSI-Part-4-Alternative-backing-stores.html
    Are there any problems to be expected with the snapshots?
    How would I set up a 100 GByte iSCSI target with the mentioned thin provisioning?
    Thanks
    n00b

    To create a thin provisioned volume:
    zfs create -V <SIZE> -s path/to/volume
    Where <SIZE> is the capacity of the volume and path/to/volume is the ZFS path and volume name.
    To create a COMSTAR target:
    stmfadm create-lu /dev/zvol/rdsk/path/to/volume
    You'll get a LU ID, which you can then use to create a view, optionally with target and host groups to limit access.
    -Nick
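    Putting the two commands together for the 100 GByte case asked about above - a rough sketch only, with a made-up pool path (tank/iscsi/lun0) standing in for whatever dataset layout you actually use:
    # create a sparse (thin-provisioned) 100 GB volume; -s means no space is reserved up front
    zfs create -s -V 100G tank/iscsi/lun0
    # register the zvol with COMSTAR as a logical unit (prints the LU GUID)
    stmfadm create-lu /dev/zvol/rdsk/tank/iscsi/lun0
    # expose the LU to initiators, optionally restricted by target/host groups
    stmfadm add-view <LU-GUID>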

  • Do I need to extend the OS disk if my Datastore is thin provisioned?

    I noticed on my Windows server (VM) that one of the disk drives was getting low; it was showing 22GB free from the OS (Windows 2008 R2). It functions as the file storage and print server. So I extended the SAN LUN by 100GB and extended the vCenter datastore by the same amount, and vCenter now shows an extra 100GB of space. On the OS, it's still showing 22GB free. I know that if I go into the VM, edit the properties of the disk drive, and add up to 100GB of space, then in Windows I can go into Disk Management, extend the volume, and see the additional space. My question: do I need to do this if my datastore is thin provisioned? Will it not grow if needed? Or am I confusing how the OS and VMware each handle this?

    You will need to increase the size of the virtual disk in the properties of the virtual machine to the desired size and, after this, if Windows, go to Disk Management (or diskpart) and extend the volume.
    As for the thin provisioning concept, you have it confused, and I recommend you read this article: Using thin provisioned disks with virtual machines (1005418)

  • Will Hyper-V 2012 support thin provisioning?

    Hi All,
    I am installing Windows Server 2012 in Hyper-V as a VM using ISO images.... While creating a new virtual machine in Hyper-V I am not able to see any option like "Thin Provisioning"... How does thin provisioning work in Hyper-V, and how do I enable it?
    I installed one Server 2012 VM in Hyper-V --> it displays the hard disk type as "xxxx.vhdx"
    Note: in VMware the thin provisioning option is available while creating virtual machines.
    Thanks,
    Rajarajan.D

    Hello,
    I hope you have a great day.
    Creating the VHDX file as dynamically expanding is the Hyper-V equivalent of VMware's thin provisioning.
    Example: create a dynamically expanding VHDX with a size of 100 GB and install the Server 2012 OS. The guest OS will see the drive capacity as 100GB, but the actual VHDX file will only consume about 15 - 30GB of physical hard disk space, depending on the features/roles you install in the OS.
    Creating a 100GB Fixed VHDX would consume 100GB of physical hard disk space.

  • Changing devices for ASM Volumes

    Hi,
    I have a two node RAC Cluster Setup 10gR2.
    I created the ASM volumes for the database, during the configuration, with commands such as
    # /etc/init.d/oracleasm createdisk VOL1 /dev/sdb2
    I have three volumes made from /dev/sdb2, /dev/sdb3 and /dev/sdb4.
    Is there a way I can change the devices to /dev/sdd2, /dev/sdd3 and /dev/sdd4 for the ASM volumes?

    Hey all,
    Thanks to you all for your support.
    It worked. The ownership and permissions are no longer getting reverted.
    But seriously, the rc.local file is not in /etc/init.d/
    It is in /etc
    In fact, I created an rc.local file in /etc/init.d/, made the entries, and restarted the system, but to no avail.
    Then I found the rc.local file in /etc, made the changes there, tested by restarting the system, and it worked.
    Thank you all, great job.
    Thanks again ...
    Cheers,
    ORA_SRI
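    For completeness, the rc.local entries being referred to are just ownership/permission fixes for the raw devices, applied at boot. A minimal sketch, assuming the devices from the question (/dev/sdd2-4) and an oracle:dba owner - substitute your own user, group and device names:
    # /etc/rc.local - restore ownership and permissions on the ASM candidate devices after each boot
    chown oracle:dba /dev/sdd2 /dev/sdd3 /dev/sdd4
    chmod 660 /dev/sdd2 /dev/sdd3 /dev/sdd4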

  • Thin provisioning?

    Hi All,
    I am working on an Exchange 2013 deployment project. We have planned to deploy Exchange 2013 on the VMware platform. Could someone share their experience with Exchange 2013 on VMware? I have already gone through the VMware recommendations, but I am looking for someone with practical experience; that would be really helpful for me in accomplishing this task successfully.
    First question: can we use thin provisioning for the database and logs?
    Second question: what type of disk do we need to use for the DB & logs (RDM or VMFS)? I am a beginner with VMware, so sorry if I have asked any wrong question here :-)
    Third question: how do I size the processor for Exchange 2013 with the calculator? I have followed the article below for the calculator but still have some doubts on the processor calculation. :-)
    http://blogs.technet.com/b/exchange/archive/2009/11/09/3408737.aspx 
    Here are the server details; could someone help me with the processor part? I appreciate your response.
    Physical: 4 sockets, 32 cores in total, 256 GB of RAM
    SPECint®_rate2006 = 1130
    Regards, Balgates

    Hi Balgates,
    In addition to the above suggestions, I would like to clarify that the megacycle guidance in Exchange 2013 uses a new server baseline; therefore, you can't directly compare the output of the Exchange 2010 calculator with that of the Exchange 2013 calculator.
    For more information about it, here is a blog for your reference.
    Sizing Exchange 2013 Deployments
    http://blogs.technet.com/b/exchange/archive/2013/05/06/ask-the-perf-guy-sizing-exchange-2013-deployments.aspx
    Hope this can be helpful to you.
    Best regards,
    Amy Wang
    TechNet Community Support

  • Recover ASM volume (ASMLIB)

    Hi,
    My storage array had a power failure today. After that, /etc/init.d/oracleasm scandisks shows NO ASM volumes.
    I am using RH4 x86_64. Does anyone know where the log files are when using ASMLib?
    I have rebooted my RAC servers many times, but no luck.
    Thanks in advance,
    B-

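    A few places worth checking first - a rough sketch, assuming the stock ASMLib init script is in use (which normally logs to /var/log/oracleasm):
    /etc/init.d/oracleasm status        # is the driver loaded and /dev/oracleasm mounted?
    /etc/init.d/oracleasm listdisks     # which labels, if any, are still visible
    cat /var/log/oracleasm              # init script log from configure/scandisks runs
    dmesg | grep -i oracleasm           # kernel-side messages from the scan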

  • 11GR2 2nodes CRSD ASM - Failed to open file in dirty mode

    Hi...
    We are facing a problem with a two-node 11gR2 cluster.
    It does not matter whether node one or node two is started first: the node that starts first comes up normally.
    The node started second fails with the error messages below ......
    vi .../emcrsp.log
    2011-04-17 10:19:14.406: [  OCRASM][4090540208]ASM Error Stack : ORA-15077: could not locate ASM instance serving a required diskgroup
    2011-04-17 10:19:14.408: [  OCRASM][4090540208]proprasmo: kgfoCheckMount returned [7]
    2011-04-17 10:19:14.408: [  OCRASM][4090540208]proprasmo: The ASM instance is down
    2011-04-17 10:19:14.416: [  OCRRAW][4090540208]proprioo: Failed to open [+DGCONF]. Returned proprasmo() with [26]. Marking location as UNAVAILABLE.
    2011-04-17 10:19:14.416: [  OCRRAW][4090540208]proprioo: No OCR/OLR devices are usable
    2011-04-17 10:19:14.416: [  OCRASM][4090540208]proprasmcl: asmhandle is NULL
    2011-04-17 10:19:14.416: [  OCRRAW][4090540208]proprinit: Could not open raw device
    2011-04-17 10:19:14.416: [  OCRASM][4090540208]proprasmcl: asmhandle is NULL
    2011-04-17 10:19:14.416: [ default][4090540208]a_init:7!: Backend init unsuccessful : [26]
    [   CLWAL][738463920]clsw_Initialize: OLR initlevel [30000]
    2011-04-17 10:19:15.272: [  OCRASM][3128352944]proprasmo: Failed to open file in dirty mode
    2011-04-17 10:19:15.272: [  OCRASM][3128352944]proprasmo: Error in open/create file in dg [DGCONF]
    [  OCRASM][3128352944]SLOS : SLOS: cat=8, opn=kgfolclcpi1, dep=402, loc=kgfokge
    The interconnect is up and running.
    We tried to recreate the OCR and voting disks from the daily backup, without any result.
    Does anyone have an idea?
    Thanks *T
    Edited by: tbrinkmann on Apr 20, 2011 5:15 AM

    Hi Paul,
    Yes, the ASM instance is down.
    That was confusing me. If I shut down the other node, the +ASM instance can start and the clusterware comes up normally.
    It looks like only one node can use the voting disks or OCR....
    The behavior looks as if the interconnect were down, but it is not ;:-(
    One node (whichever comes up first) starts normally and takes all the cluster resources ... the SCAN ... the VIPs ...
    And the second node shows the error messages above.
    Thanks
    *T
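    While it is in this state, a few read-only checks from the surviving node can help narrow down whether this really is an OCR/voting access problem rather than an interconnect one - a sketch using the standard 11gR2 tools (run as root):
    crsctl stat res -t -init          # state of the lower stack resources (ora.asm, ora.crsd, ...)
    crsctl query css votedisk         # which voting files CSS can actually see
    ocrcheck                          # configured OCR location(s) and integrity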

  • How to tell what physical disk is an ASM volume?

    Hello all,
    I'm trying to install some clusterware. I have attached a SAN unit that was previously used for ASM.
    After I installed the ASM libs/drivers, I ran oracleasm scandisks and it found all of them.
    I have ASM1, ASM2 .... ASMn
    I thought, "great" - that saves me the time of going through, finding each disk device, and configuring them. I don't mind losing the data on them.
    I'm in the middle of the OUI cluster install (11gR2 on RHEL5), and on the ASM portion (getting ready to create a disk group for the cluster voting disk, etc.), under candidate disks it shows nothing.
    If I click the 'all disks' option it shows all my Oracle volumes, but their status is 'member', and it will not allow me to add them to my new disk group.
    I'm not sure what to do.
    One problem is that I can't seem to find a way to determine which physical disk device is associated with each ASM disk.
    For instance:
    oracleasm querydisk ASM33
    Disk "ASM33" is a valid ASM disk
    That just shows it's a valid disk, but if I wanted to release it, reconfigure it, etc., I don't know which physical disk it is, and I'd like to use the same layout that was used before.
    Is there a way to find out what disk (example: /dev/sda12) is associated with ASM33?
    I don't see any options listed for this in the man page for oracleasm, and I can't seem to find much documentation for this either...
    Thanks in advance,
    cayenne

    Well, for this you can use oracleasm querydisk. Using it you can identify which devices are marked for ASM and which are not. For example, see below:
    [oracle@localhost init.d]$ sqlplus "/as sysdba"
    SQL*Plus: Release 10.2.0.4.0 - Production on Thu Jun 3 11:52:12 2010
    Copyright (c) 1982, 2007, Oracle.  All Rights Reserved.
    Connected to:
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    SQL> select path from v$asm_disk;
    PATH
    /dev/oracleasm/disks/VOL2
    /dev/oracleasm/disks/VOL1
    SQL> exit;
    [oracle@localhost init.d]$ su
    Password:
    [root@localhost init.d]# /sbin/fdisk -l
    Disk /dev/sda: 80.0 GB, 80000000000 bytes
    255 heads, 63 sectors/track, 9726 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
       Device Boot      Start         End      Blocks   Id  System
    /dev/sda1   *           1        1305    10482381   83  Linux
    /dev/sda2            1306        9401    65031120   83  Linux
    /dev/sda3            9402        9662     2096482+  82  Linux swap / Solaris
    /dev/sda4            9663        9726      514080    5  Extended
    /dev/sda5            9663        9726      514048+  83  Linux
    Disk /dev/sdb: 80.0 GB, 80026361856 bytes
    255 heads, 63 sectors/track, 9729 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
       Device Boot      Start         End      Blocks   Id  System
    /dev/sdb1               1        4859    39029886   83  Linux
    /dev/sdb2            4860        9729    39118275   83  Linux
    [root@localhost init.d]# ./oracleasm querydisk /dev/sdb1
    Device "/dev/sdb1" is marked an ASM disk with the label "VOL1"
    [root@localhost init.d]# ./oracleasm querydisk /dev/sdb2
    Device "/dev/sdb2" is marked an ASM disk with the label "VOL2"
    [root@localhost init.d]# ./oracleasm querydisk /dev/sda1
    Device "/dev/sda1" is not marked as an ASM disk
    [root@localhost init.d]#
    Also, in Windows:
    C:\Documents and Settings\comp>asmtool -list
    NTFS                             \Device\Harddisk0\Partition1           140655M
    ORCLDISKDATA1                    \Device\Harddisk0\Partition2             4102M
    ORCLDISKDATA2                    \Device\Harddisk0\Partition3             4102M
    NTFS                             \Device\Harddisk0\Partition4           152617M
    C:\Documents and Settings\comp>
    (Answered by chinar in the thread: how to identify which raw device disk is named as VOL1 in ASM from the OS level.)
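    If there are many candidate partitions, the same check can be scripted rather than run one device at a time - a small sketch along these lines (adjust the glob to your device naming):
    # query every sd* partition and print only the ones ASMLib recognises
    for dev in /dev/sd*[0-9]; do
      /etc/init.d/oracleasm querydisk "$dev" | grep -v "is not marked"
    done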

  • LVM Volumes not available after update

    Hi All!
    I haven't updated my system for about two months, and today I updated it. Now I have the problem that I cannot boot properly. I have my root partition on an LVM volume, and on boot I get the messages
    ERROR: device 'UUID=xxx' not found. Skipping fs
    ERROR: Unable to find root device 'UUID=xxx'
    After that I land in the recovery shell. After some research I found that "lvm lvdisplay" showed my volumes were not available and that I had to re-enable them with "lvm vgchange -a y".
    Issuing any lvm command also produced the following warning:
    WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!
    Anyway, after issuing the commands and exiting the recovery shell, the system booted again. However, I would prefer to be able to boot without manual intervention.
    Thanks in advance!
    Further information:
    vgdisplay
    --- Volume group ---
    VG Name ArchLVM
    System ID
    Format lvm2
    Metadata Areas 1
    Metadata Sequence No 3
    VG Access read/write
    VG Status resizable
    MAX LV 0
    Cur LV 2
    Open LV 1
    Max PV 0
    Cur PV 1
    Act PV 1
    VG Size 232.69 GiB
    PE Size 4.00 MiB
    Total PE 59568
    Alloc PE / Size 59568 / 232.69 GiB
    Free PE / Size 0 / 0
    VG UUID SoB3M1-v1fD-1abI-PNJ3-6IOn-FfdI-0RoLK5
    lvdisplay (LV Status was 'not available' right after booting)
    --- Logical volume ---
    LV Path /dev/ArchLVM/Swap
    LV Name Swap
    VG Name ArchLVM
    LV UUID XRYBrz-LojR-k6SD-XIxV-wHnY-f3VG-giKL6V
    LV Write Access read/write
    LV Creation host, time archiso, 2014-05-16 14:43:06 +0200
    LV Status available
    # open 0
    LV Size 8.00 GiB
    Current LE 2048
    Segments 1
    Allocation inherit
    Read ahead sectors auto
    - currently set to 256
    Block device 254:0
    --- Logical volume ---
    LV Path /dev/ArchLVM/Root
    LV Name Root
    VG Name ArchLVM
    LV UUID lpjDl4-Jqzu-ZWkq-Uphc-IaOo-6Rzd-cIh5yv
    LV Write Access read/write
    LV Creation host, time archiso, 2014-05-16 14:43:27 +0200
    LV Status available
    # open 1
    LV Size 224.69 GiB
    Current LE 57520
    Segments 1
    Allocation inherit
    Read ahead sectors auto
    - currently set to 256
    Block device 254:1
    /etc/fstab
    # /etc/fstab: static file system information
    # <file system> <dir> <type> <options> <dump> <pass>
    # /dev/mapper/ArchLVM-Root
    UUID=2db82d1a-47a4-4e30-a819-143e8fb75199 / ext4 rw,relatime,data=ordered 0 1
    #/dev/mapper/ArchLVM-Root / ext4 rw,relatime,data=ordered 0 1
    # /dev/sda1
    UUID=72691888-a781-4cdd-a98e-2613d87925d0 /boot ext2 rw,relatime 0 2
    /etc/mkinitcpio.conf
    # vim:set ft=sh
    # MODULES
    # The following modules are loaded before any boot hooks are
    # run. Advanced users may wish to specify all system modules
    # in this array. For instance:
    # MODULES="piix ide_disk reiserfs"
    MODULES=""
    # BINARIES
    # This setting includes any additional binaries a given user may
    # wish into the CPIO image. This is run last, so it may be used to
    # override the actual binaries included by a given hook
    # BINARIES are dependency parsed, so you may safely ignore libraries
    BINARIES=""
    # FILES
    # This setting is similar to BINARIES above, however, files are added
    # as-is and are not parsed in any way. This is useful for config files.
    FILES=""
    # HOOKS
    # This is the most important setting in this file. The HOOKS control the
    # modules and scripts added to the image, and what happens at boot time.
    # Order is important, and it is recommended that you do not change the
    # order in which HOOKS are added. Run 'mkinitcpio -H <hook name>' for
    # help on a given hook.
    # 'base' is _required_ unless you know precisely what you are doing.
    # 'udev' is _required_ in order to automatically load modules
    # 'filesystems' is _required_ unless you specify your fs modules in MODULES
    # Examples:
    ## This setup specifies all modules in the MODULES setting above.
    ## No raid, lvm2, or encrypted root is needed.
    # HOOKS="base"
    ## This setup will autodetect all modules for your system and should
    ## work as a sane default
    # HOOKS="base udev autodetect block filesystems"
    ## This setup will generate a 'full' image which supports most systems.
    ## No autodetection is done.
    # HOOKS="base udev block filesystems"
    ## This setup assembles a pata mdadm array with an encrypted root FS.
    ## Note: See 'mkinitcpio -H mdadm' for more information on raid devices.
    # HOOKS="base udev block mdadm encrypt filesystems"
    ## This setup loads an lvm2 volume group on a usb device.
    # HOOKS="base udev block lvm2 filesystems"
    ## NOTE: If you have /usr on a separate partition, you MUST include the
    # usr, fsck and shutdown hooks.
    HOOKS="base udev autodetect modconf block lvm2 filesystems keyboard fsck"
    # COMPRESSION
    # Use this to compress the initramfs image. By default, gzip compression
    # is used. Use 'cat' to create an uncompressed image.
    #COMPRESSION="gzip"
    #COMPRESSION="bzip2"
    #COMPRESSION="lzma"
    #COMPRESSION="xz"
    #COMPRESSION="lzop"
    #COMPRESSION="lz4"
    # COMPRESSION_OPTIONS
    # Additional options for the compressor
    #COMPRESSION_OPTIONS=""
    /boot/grub/grub.cfg
    # DO NOT EDIT THIS FILE
    # It is automatically generated by grub-mkconfig using templates
    # from /etc/grub.d and settings from /etc/default/grub
    ### BEGIN /etc/grub.d/00_header ###
    insmod part_gpt
    insmod part_msdos
    if [ -s $prefix/grubenv ]; then
    load_env
    fi
    if [ "${next_entry}" ] ; then
    set default="${next_entry}"
    set next_entry=
    save_env next_entry
    set boot_once=true
    else
    set default="0"
    fi
    if [ x"${feature_menuentry_id}" = xy ]; then
    menuentry_id_option="--id"
    else
    menuentry_id_option=""
    fi
    export menuentry_id_option
    if [ "${prev_saved_entry}" ]; then
    set saved_entry="${prev_saved_entry}"
    save_env saved_entry
    set prev_saved_entry=
    save_env prev_saved_entry
    set boot_once=true
    fi
    function savedefault {
    if [ -z "${boot_once}" ]; then
    saved_entry="${chosen}"
    save_env saved_entry
    fi
    function load_video {
    if [ x$feature_all_video_module = xy ]; then
    insmod all_video
    else
    insmod efi_gop
    insmod efi_uga
    insmod ieee1275_fb
    insmod vbe
    insmod vga
    insmod video_bochs
    insmod video_cirrus
    fi
    if [ x$feature_default_font_path = xy ] ; then
    font=unicode
    else
    insmod part_msdos
    insmod lvm
    insmod ext2
    set root='lvmid/SoB3M1-v1fD-1abI-PNJ3-6IOn-FfdI-0RoLK5/lpjDl4-Jqzu-ZWkq-Uphc-IaOo-6Rzd-cIh5yv'
    if [ x$feature_platform_search_hint = xy ]; then
    search --no-floppy --fs-uuid --set=root --hint='lvmid/SoB3M1-v1fD-1abI-PNJ3-6IOn-FfdI-0RoLK5/lpjDl4-Jqzu-ZWkq-Uphc-IaOo-6Rzd-cIh5yv' 2db82d1a-47a4-4e30-a819-143e8fb75199
    else
    search --no-floppy --fs-uuid --set=root 2db82d1a-47a4-4e30-a819-143e8fb75199
    fi
    font="/usr/share/grub/unicode.pf2"
    fi
    if loadfont $font ; then
    set gfxmode=auto
    load_video
    insmod gfxterm
    fi
    terminal_input console
    terminal_output gfxterm
    if [ x$feature_timeout_style = xy ] ; then
    set timeout_style=menu
    set timeout=5
    # Fallback normal timeout code in case the timeout_style feature is
    # unavailable.
    else
    set timeout=5
    fi
    ### END /etc/grub.d/00_header ###
    ### BEGIN /etc/grub.d/10_linux ###
    menuentry 'Arch Linux' --class arch --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-2db82d1a-47a4-4e30-a819-143e8fb75199' {
    load_video
    set gfxpayload=keep
    insmod gzio
    insmod part_msdos
    insmod ext2
    set root='hd0,msdos1'
    if [ x$feature_platform_search_hint = xy ]; then
    search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos1 --hint-efi=hd0,msdos1 --hint-baremetal=ahci0,msdos1 72691888-a781-4cdd-a98e-2613d87925d0
    else
    search --no-floppy --fs-uuid --set=root 72691888-a781-4cdd-a98e-2613d87925d0
    fi
    echo 'Loading Linux linux ...'
    linux /vmlinuz-linux root=UUID=2db82d1a-47a4-4e30-a819-143e8fb75199 rw quiet
    echo 'Loading initial ramdisk ...'
    initrd /initramfs-linux.img
    submenu 'Advanced options for Arch Linux' $menuentry_id_option 'gnulinux-advanced-2db82d1a-47a4-4e30-a819-143e8fb75199' {
    menuentry 'Arch Linux, with Linux linux' --class arch --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-linux-advanced-2db82d1a-47a4-4e30-a819-143e8fb75199' {
    load_video
    set gfxpayload=keep
    insmod gzio
    insmod part_msdos
    insmod ext2
    set root='hd0,msdos1'
    if [ x$feature_platform_search_hint = xy ]; then
    search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos1 --hint-efi=hd0,msdos1 --hint-baremetal=ahci0,msdos1 72691888-a781-4cdd-a98e-2613d87925d0
    else
    search --no-floppy --fs-uuid --set=root 72691888-a781-4cdd-a98e-2613d87925d0
    fi
    echo 'Loading Linux linux ...'
    linux /vmlinuz-linux root=UUID=2db82d1a-47a4-4e30-a819-143e8fb75199 rw quiet
    echo 'Loading initial ramdisk ...'
    initrd /initramfs-linux.img
    menuentry 'Arch Linux, with Linux linux (fallback initramfs)' --class arch --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-linux-fallback-2db82d1a-47a4-4e30-a819-143e8fb75199' {
    load_video
    set gfxpayload=keep
    insmod gzio
    insmod part_msdos
    insmod ext2
    set root='hd0,msdos1'
    if [ x$feature_platform_search_hint = xy ]; then
    search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos1 --hint-efi=hd0,msdos1 --hint-baremetal=ahci0,msdos1 72691888-a781-4cdd-a98e-2613d87925d0
    else
    search --no-floppy --fs-uuid --set=root 72691888-a781-4cdd-a98e-2613d87925d0
    fi
    echo 'Loading Linux linux ...'
    linux /vmlinuz-linux root=UUID=2db82d1a-47a4-4e30-a819-143e8fb75199 rw quiet
    echo 'Loading initial ramdisk ...'
    initrd /initramfs-linux-fallback.img
    ### END /etc/grub.d/10_linux ###
    ### BEGIN /etc/grub.d/20_linux_xen ###
    ### END /etc/grub.d/20_linux_xen ###
    ### BEGIN /etc/grub.d/30_os-prober ###
    ### END /etc/grub.d/30_os-prober ###
    ### BEGIN /etc/grub.d/40_custom ###
    # This file provides an easy way to add custom menu entries. Simply type the
    # menu entries you want to add after this comment. Be careful not to change
    # the 'exec tail' line above.
    ### END /etc/grub.d/40_custom ###
    ### BEGIN /etc/grub.d/41_custom ###
    if [ -f ${config_directory}/custom.cfg ]; then
    source ${config_directory}/custom.cfg
    elif [ -z "${config_directory}" -a -f $prefix/custom.cfg ]; then
    source $prefix/custom.cfg;
    fi
    ### END /etc/grub.d/41_custom ###
    ### BEGIN /etc/grub.d/60_memtest86+ ###
    ### END /etc/grub.d/60_memtest86+ ###
    Last edited by Kirodema (2014-07-16 07:31:34)

    use_lvmetad = 0
    lvm2-lvmetad is not enabled or running on my system. Shall I activate it?
    # This is an example configuration file for the LVM2 system.
    # It contains the default settings that would be used if there was no
    # /etc/lvm/lvm.conf file.
    # Refer to 'man lvm.conf' for further information including the file layout.
    # To put this file in a different directory and override /etc/lvm set
    # the environment variable LVM_SYSTEM_DIR before running the tools.
    # N.B. Take care that each setting only appears once if uncommenting
    # example settings in this file.
    # This section allows you to set the way the configuration settings are handled.
    config {
    # If enabled, any LVM2 configuration mismatch is reported.
    # This implies checking that the configuration key is understood
    # by LVM2 and that the value of the key is of a proper type.
    # If disabled, any configuration mismatch is ignored and default
    # value is used instead without any warning (a message about the
    # configuration key not being found is issued in verbose mode only).
    checks = 1
    # If enabled, any configuration mismatch aborts the LVM2 process.
    abort_on_errors = 0
    # Directory where LVM looks for configuration profiles.
    profile_dir = "/etc/lvm/profile"
    # This section allows you to configure which block devices should
    # be used by the LVM system.
    devices {
    # Where do you want your volume groups to appear ?
    dir = "/dev"
    # An array of directories that contain the device nodes you wish
    # to use with LVM2.
    scan = [ "/dev" ]
    # If set, the cache of block device nodes with all associated symlinks
    # will be constructed out of the existing udev database content.
    # This avoids using and opening any inapplicable non-block devices or
    # subdirectories found in the device directory. This setting is applied
    # to udev-managed device directory only, other directories will be scanned
    # fully. LVM2 needs to be compiled with udev support for this setting to
    # take effect. N.B. Any device node or symlink not managed by udev in
    # udev directory will be ignored with this setting on.
    obtain_device_list_from_udev = 1
    # If several entries in the scanned directories correspond to the
    # same block device and the tools need to display a name for device,
    # all the pathnames are matched against each item in the following
    # list of regular expressions in turn and the first match is used.
    preferred_names = [ ]
    # Try to avoid using undescriptive /dev/dm-N names, if present.
    # preferred_names = [ "^/dev/mpath/", "^/dev/mapper/mpath", "^/dev/[hs]d" ]
    # A filter that tells LVM2 to only use a restricted set of devices.
    # The filter consists of an array of regular expressions. These
    # expressions can be delimited by a character of your choice, and
    # prefixed with either an 'a' (for accept) or 'r' (for reject).
    # The first expression found to match a device name determines if
    # the device will be accepted or rejected (ignored). Devices that
    # don't match any patterns are accepted.
    # Be careful if there there are symbolic links or multiple filesystem
    # entries for the same device as each name is checked separately against
    # the list of patterns. The effect is that if the first pattern in the
    # list to match a name is an 'a' pattern for any of the names, the device
    # is accepted; otherwise if the first pattern in the list to match a name
    # is an 'r' pattern for any of the names it is rejected; otherwise it is
    # accepted.
    # Don't have more than one filter line active at once: only one gets used.
    # Run vgscan after you change this parameter to ensure that
    # the cache file gets regenerated (see below).
    # If it doesn't do what you expect, check the output of 'vgscan -vvvv'.
    # If lvmetad is used, then see "A note about device filtering while
    # lvmetad is used" comment that is attached to global/use_lvmetad setting.
    # By default we accept every block device:
    filter = [ "a/.*/" ]
    # Exclude the cdrom drive
    # filter = [ "r|/dev/cdrom|" ]
    # When testing I like to work with just loopback devices:
    # filter = [ "a/loop/", "r/.*/" ]
    # Or maybe all loops and ide drives except hdc:
    # filter =[ "a|loop|", "r|/dev/hdc|", "a|/dev/ide|", "r|.*|" ]
    # Use anchors if you want to be really specific
    # filter = [ "a|^/dev/hda8$|", "r/.*/" ]
    # Since "filter" is often overridden from command line, it is not suitable
    # for system-wide device filtering (udev rules, lvmetad). To hide devices
    # from LVM-specific udev processing and/or from lvmetad, you need to set
    # global_filter. The syntax is the same as for normal "filter"
    # above. Devices that fail the global_filter are not even opened by LVM.
    # global_filter = []
    # The results of the filtering are cached on disk to avoid
    # rescanning dud devices (which can take a very long time).
    # By default this cache is stored in the /etc/lvm/cache directory
    # in a file called '.cache'.
    # It is safe to delete the contents: the tools regenerate it.
    # (The old setting 'cache' is still respected if neither of
    # these new ones is present.)
    # N.B. If obtain_device_list_from_udev is set to 1 the list of
    # devices is instead obtained from udev and any existing .cache
    # file is removed.
    cache_dir = "/etc/lvm/cache"
    cache_file_prefix = ""
    # You can turn off writing this cache file by setting this to 0.
    write_cache_state = 1
    # Advanced settings.
    # List of pairs of additional acceptable block device types found
    # in /proc/devices with maximum (non-zero) number of partitions.
    # types = [ "fd", 16 ]
    # If sysfs is mounted (2.6 kernels) restrict device scanning to
    # the block devices it believes are valid.
    # 1 enables; 0 disables.
    sysfs_scan = 1
    # By default, LVM2 will ignore devices used as component paths
    # of device-mapper multipath devices.
    # 1 enables; 0 disables.
    multipath_component_detection = 1
    # By default, LVM2 will ignore devices used as components of
    # software RAID (md) devices by looking for md superblocks.
    # 1 enables; 0 disables.
    md_component_detection = 1
    # By default, if a PV is placed directly upon an md device, LVM2
    # will align its data blocks with the md device's stripe-width.
    # 1 enables; 0 disables.
    md_chunk_alignment = 1
    # Default alignment of the start of a data area in MB. If set to 0,
    # a value of 64KB will be used. Set to 1 for 1MiB, 2 for 2MiB, etc.
    # default_data_alignment = 1
    # By default, the start of a PV's data area will be a multiple of
    # the 'minimum_io_size' or 'optimal_io_size' exposed in sysfs.
    # - minimum_io_size - the smallest request the device can perform
    # w/o incurring a read-modify-write penalty (e.g. MD's chunk size)
    # - optimal_io_size - the device's preferred unit of receiving I/O
    # (e.g. MD's stripe width)
    # minimum_io_size is used if optimal_io_size is undefined (0).
    # If md_chunk_alignment is enabled, that detects the optimal_io_size.
    # This setting takes precedence over md_chunk_alignment.
    # 1 enables; 0 disables.
    data_alignment_detection = 1
    # Alignment (in KB) of start of data area when creating a new PV.
    # md_chunk_alignment and data_alignment_detection are disabled if set.
    # Set to 0 for the default alignment (see: data_alignment_default)
    # or page size, if larger.
    data_alignment = 0
    # By default, the start of the PV's aligned data area will be shifted by
    # the 'alignment_offset' exposed in sysfs. This offset is often 0 but
    # may be non-zero; e.g.: certain 4KB sector drives that compensate for
    # windows partitioning will have an alignment_offset of 3584 bytes
    # (sector 7 is the lowest aligned logical block, the 4KB sectors start
    # at LBA -1, and consequently sector 63 is aligned on a 4KB boundary).
    # But note that pvcreate --dataalignmentoffset will skip this detection.
    # 1 enables; 0 disables.
    data_alignment_offset_detection = 1
    # If, while scanning the system for PVs, LVM2 encounters a device-mapper
    # device that has its I/O suspended, it waits for it to become accessible.
    # Set this to 1 to skip such devices. This should only be needed
    # in recovery situations.
    ignore_suspended_devices = 0
    # ignore_lvm_mirrors: Introduced in version 2.02.104
    # This setting determines whether logical volumes of "mirror" segment
    # type are scanned for LVM labels. This affects the ability of
    # mirrors to be used as physical volumes. If 'ignore_lvm_mirrors'
    # is set to '1', it becomes impossible to create volume groups on top
    # of mirror logical volumes - i.e. to stack volume groups on mirrors.
    # Allowing mirror logical volumes to be scanned (setting the value to '0')
    # can potentially cause LVM processes and I/O to the mirror to become
    # blocked. This is due to the way that the "mirror" segment type handles
    # failures. In order for the hang to manifest itself, an LVM command must
    # be run just after a failure and before the automatic LVM repair process
    # takes place OR there must be failures in multiple mirrors in the same
    # volume group at the same time with write failures occurring moments
    # before a scan of the mirror's labels.
    # Note that these scanning limitations do not apply to the LVM RAID
    # types, like "raid1". The RAID segment types handle failures in a
    # different way and are not subject to possible process or I/O blocking.
    # It is encouraged that users set 'ignore_lvm_mirrors' to 1 if they
    # are using the "mirror" segment type. Users that require volume group
    # stacking on mirrored logical volumes should consider using the "raid1"
    # segment type. The "raid1" segment type is not available for
    # active/active clustered volume groups.
    # Set to 1 to disallow stacking and thereby avoid a possible deadlock.
    ignore_lvm_mirrors = 1
    # During each LVM operation errors received from each device are counted.
    # If the counter of a particular device exceeds the limit set here, no
    # further I/O is sent to that device for the remainder of the respective
    # operation. Setting the parameter to 0 disables the counters altogether.
    disable_after_error_count = 0
    # Allow use of pvcreate --uuid without requiring --restorefile.
    require_restorefile_with_uuid = 1
    # Minimum size (in KB) of block devices which can be used as PVs.
    # In a clustered environment all nodes must use the same value.
    # Any value smaller than 512KB is ignored.
    # Ignore devices smaller than 2MB such as floppy drives.
    pv_min_size = 2048
    # The original built-in setting was 512 up to and including version 2.02.84.
    # pv_min_size = 512
    # Issue discards to a logical volumes's underlying physical volume(s) when
    # the logical volume is no longer using the physical volumes' space (e.g.
    # lvremove, lvreduce, etc). Discards inform the storage that a region is
    # no longer in use. Storage that supports discards advertise the protocol
    # specific way discards should be issued by the kernel (TRIM, UNMAP, or
    # WRITE SAME with UNMAP bit set). Not all storage will support or benefit
    # from discards but SSDs and thinly provisioned LUNs generally do. If set
    # to 1, discards will only be issued if both the storage and kernel provide
    # support.
    # 1 enables; 0 disables.
    issue_discards = 0
    # This section allows you to configure the way in which LVM selects
    # free space for its Logical Volumes.
    allocation {
    # When searching for free space to extend an LV, the "cling"
    # allocation policy will choose space on the same PVs as the last
    # segment of the existing LV. If there is insufficient space and a
    # list of tags is defined here, it will check whether any of them are
    # attached to the PVs concerned and then seek to match those PV tags
    # between existing extents and new extents.
    # Use the special tag "@*" as a wildcard to match any PV tag.
    # Example: LVs are mirrored between two sites within a single VG.
    # PVs are tagged with either @site1 or @site2 to indicate where
    # they are situated.
    # cling_tag_list = [ "@site1", "@site2" ]
    # cling_tag_list = [ "@*" ]
    # Changes made in version 2.02.85 extended the reach of the 'cling'
    # policies to detect more situations where data can be grouped
    # onto the same disks. Set this to 0 to revert to the previous
    # algorithm.
    maximise_cling = 1
    # Whether to use blkid library instead of native LVM2 code to detect
    # any existing signatures while creating new Physical Volumes and
    # Logical Volumes. LVM2 needs to be compiled with blkid wiping support
    # for this setting to take effect.
    # LVM2 native detection code is currently able to recognize these signatures:
    # - MD device signature
    # - swap signature
    # - LUKS signature
    # To see the list of signatures recognized by blkid, check the output
    # of 'blkid -k' command. The blkid can recognize more signatures than
    # LVM2 native detection code, but due to this higher number of signatures
    # to be recognized, it can take more time to complete the signature scan.
    use_blkid_wiping = 1
    # Set to 1 to wipe any signatures found on newly-created Logical Volumes
    # automatically in addition to zeroing of the first KB on the LV
    # (controlled by the -Z/--zero y option).
    # The command line option -W/--wipesignatures takes precedence over this
    # setting.
    # The default is to wipe signatures when zeroing.
    wipe_signatures_when_zeroing_new_lvs = 1
    # Set to 1 to guarantee that mirror logs will always be placed on
    # different PVs from the mirror images. This was the default
    # until version 2.02.85.
    mirror_logs_require_separate_pvs = 0
    # Set to 1 to guarantee that cache_pool metadata will always be
    # placed on different PVs from the cache_pool data.
    cache_pool_metadata_require_separate_pvs = 0
    # Specify the minimal chunk size (in kiB) for cache pool volumes.
    # Using a chunk_size that is too large can result in wasteful use of
    # the cache, where small reads and writes can cause large sections of
    # an LV to be mapped into the cache. However, choosing a chunk_size
    # that is too small can result in more overhead trying to manage the
    # numerous chunks that become mapped into the cache. The former is
    # more of a problem than the latter in most cases, so we default to
    # a value that is on the smaller end of the spectrum. Supported values
    # range from 32(kiB) to 1048576 in multiples of 32.
    # cache_pool_chunk_size = 64
    # Set to 1 to guarantee that thin pool metadata will always
    # be placed on different PVs from the pool data.
    thin_pool_metadata_require_separate_pvs = 0
    # Specify chunk size calculation policy for thin pool volumes.
    # Possible options are:
    # "generic" - if thin_pool_chunk_size is defined, use it.
    # Otherwise, calculate the chunk size based on
    # estimation and device hints exposed in sysfs:
    # the minimum_io_size. The chunk size is always
    # at least 64KiB.
    # "performance" - if thin_pool_chunk_size is defined, use it.
    # Otherwise, calculate the chunk size for
    # performance based on device hints exposed in
    # sysfs: the optimal_io_size. The chunk size is
    # always at least 512KiB.
    # thin_pool_chunk_size_policy = "generic"
    # Specify the minimal chunk size (in KB) for thin pool volumes.
    # Use of the larger chunk size may improve performance for plain
    # thin volumes, however using them for snapshot volumes is less efficient,
    # as it consumes more space and takes extra time for copying.
    # When unset, lvm tries to estimate chunk size starting from 64KB
    # Supported values are in range from 64 to 1048576.
    # thin_pool_chunk_size = 64
    # Specify discards behaviour of the thin pool volume.
    # Select one of "ignore", "nopassdown", "passdown"
    # thin_pool_discards = "passdown"
    # Set to 0, to disable zeroing of thin pool data chunks before their
    # first use.
    # N.B. zeroing larger thin pool chunk size degrades performance.
    # thin_pool_zero = 1
    # This section that allows you to configure the nature of the
    # information that LVM2 reports.
    log {
    # Controls the messages sent to stdout or stderr.
    # There are three levels of verbosity, 3 being the most verbose.
    verbose = 0
    # Set to 1 to suppress all non-essential messages from stdout.
    # This has the same effect as -qq.
    # When this is set, the following commands still produce output:
    # dumpconfig, lvdisplay, lvmdiskscan, lvs, pvck, pvdisplay,
    # pvs, version, vgcfgrestore -l, vgdisplay, vgs.
    # Non-essential messages are shifted from log level 4 to log level 5
    # for syslog and lvm2_log_fn purposes.
    # Any 'yes' or 'no' questions not overridden by other arguments
    # are suppressed and default to 'no'.
    silent = 0
    # Should we send log messages through syslog?
    # 1 is yes; 0 is no.
    syslog = 1
    # Should we log error and debug messages to a file?
    # By default there is no log file.
    #file = "/var/log/lvm2.log"
    # Should we overwrite the log file each time the program is run?
    # By default we append.
    overwrite = 0
    # What level of log messages should we send to the log file and/or syslog?
    # There are 6 syslog-like log levels currently in use - 2 to 7 inclusive.
    # 7 is the most verbose (LOG_DEBUG).
    level = 0
    # Format of output messages
    # Whether or not (1 or 0) to indent messages according to their severity
    indent = 1
    # Whether or not (1 or 0) to display the command name on each line output
    command_names = 0
    # A prefix to use before the message text (but after the command name,
    # if selected). Default is two spaces, so you can see/grep the severity
    # of each message.
    prefix = " "
    # To make the messages look similar to the original LVM tools use:
    # indent = 0
    # command_names = 1
    # prefix = " -- "
    # Set this if you want log messages during activation.
    # Don't use this in low memory situations (can deadlock).
    # activation = 0
    # Some debugging messages are assigned to a class and only appear
    # in debug output if the class is listed here.
    # Classes currently available:
    # memory, devices, activation, allocation, lvmetad, metadata, cache,
    # locking
    # Use "all" to see everything.
    debug_classes = [ "memory", "devices", "activation", "allocation",
    "lvmetad", "metadata", "cache", "locking" ]
    # Configuration of metadata backups and archiving. In LVM2 when we
    # talk about a 'backup' we mean making a copy of the metadata for the
    # *current* system. The 'archive' contains old metadata configurations.
    # Backups are stored in a human readable text format.
    backup {
    # Should we maintain a backup of the current metadata configuration ?
    # Use 1 for Yes; 0 for No.
    # Think very hard before turning this off!
    backup = 1
    # Where shall we keep it ?
    # Remember to back up this directory regularly!
    backup_dir = "/etc/lvm/backup"
    # Should we maintain an archive of old metadata configurations.
    # Use 1 for Yes; 0 for No.
    # On by default. Think very hard before turning this off.
    archive = 1
    # Where should archived files go ?
    # Remember to back up this directory regularly!
    archive_dir = "/etc/lvm/archive"
    # What is the minimum number of archive files you wish to keep ?
    retain_min = 10
    # What is the minimum time you wish to keep an archive file for ?
    retain_days = 30
    # Settings for the running LVM2 in shell (readline) mode.
    shell {
    # Number of lines of history to store in ~/.lvm_history
    history_size = 100
    # Miscellaneous global LVM2 settings
    global {
    # The file creation mask for any files and directories created.
    # Interpreted as octal if the first digit is zero.
    umask = 077
    # Allow other users to read the files
    #umask = 022
    # Enabling test mode means that no changes to the on disk metadata
    # will be made. Equivalent to having the -t option on every
    # command. Defaults to off.
    test = 0
    # Default value for --units argument
    units = "h"
    # Since version 2.02.54, the tools distinguish between powers of
    # 1024 bytes (e.g. KiB, MiB, GiB) and powers of 1000 bytes (e.g.
    # KB, MB, GB).
    # If you have scripts that depend on the old behaviour, set this to 0
    # temporarily until you update them.
    si_unit_consistency = 1
    # Whether or not to display unit suffix for sizes. This setting has
    # no effect if the units are in human-readable form (global/units="h")
    # in which case the suffix is always displayed.
    suffix = 1
    # Whether or not to communicate with the kernel device-mapper.
    # Set to 0 if you want to use the tools to manipulate LVM metadata
    # without activating any logical volumes.
    # If the device-mapper kernel driver is not present in your kernel
    # setting this to 0 should suppress the error messages.
    activation = 1
    # If we can't communicate with device-mapper, should we try running
    # the LVM1 tools?
    # This option only applies to 2.4 kernels and is provided to help you
    # switch between device-mapper kernels and LVM1 kernels.
    # The LVM1 tools need to be installed with .lvm1 suffices
    # e.g. vgscan.lvm1 and they will stop working after you start using
    # the new lvm2 on-disk metadata format.
    # The default value is set when the tools are built.
    # fallback_to_lvm1 = 0
    # The default metadata format that commands should use - "lvm1" or "lvm2".
    # The command line override is -M1 or -M2.
    # Defaults to "lvm2".
    # format = "lvm2"
    # Location of proc filesystem
    proc = "/proc"
    # Type of locking to use. Defaults to local file-based locking (1).
    # Turn locking off by setting to 0 (dangerous: risks metadata corruption
    # if LVM2 commands get run concurrently).
    # Type 2 uses the external shared library locking_library.
    # Type 3 uses built-in clustered locking.
    # Type 4 uses read-only locking which forbids any operations that might
    # change metadata.
    # N.B. Don't use lvmetad with locking type 3 as lvmetad is not yet
    # supported in clustered environment. If use_lvmetad=1 and locking_type=3
    # is set at the same time, LVM always issues a warning message about this
    # and then it automatically disables lvmetad use.
    locking_type = 1
    # Set to 0 to fail when a lock request cannot be satisfied immediately.
    wait_for_locks = 1
    # If using external locking (type 2) and initialisation fails,
    # with this set to 1 an attempt will be made to use the built-in
    # clustered locking.
    # If you are using a customised locking_library you should set this to 0.
    fallback_to_clustered_locking = 1
    # If an attempt to initialise type 2 or type 3 locking failed, perhaps
    # because cluster components such as clvmd are not running, with this set
    # to 1 an attempt will be made to use local file-based locking (type 1).
    # If this succeeds, only commands against local volume groups will proceed.
    # Volume Groups marked as clustered will be ignored.
    fallback_to_local_locking = 1
    # Local non-LV directory that holds file-based locks while commands are
    # in progress. A directory like /tmp that may get wiped on reboot is OK.
    locking_dir = "/run/lock/lvm"
    # Whenever there are competing read-only and read-write access requests for
    # a volume group's metadata, instead of always granting the read-only
    # requests immediately, delay them to allow the read-write requests to be
    # serviced. Without this setting, write access may be stalled by a high
    # volume of read-only requests.
    # NB. This option only affects locking_type = 1 viz. local file-based
    # locking.
    prioritise_write_locks = 1
    # Other entries can go here to allow you to load shared libraries
    # e.g. if support for LVM1 metadata was compiled as a shared library use
    # format_libraries = "liblvm2format1.so"
    # Full pathnames can be given.
    # Search this directory first for shared libraries.
    # library_dir = "/lib"
    # The external locking library to load if locking_type is set to 2.
    # locking_library = "liblvm2clusterlock.so"
    # Treat any internal errors as fatal errors, aborting the process that
    # encountered the internal error. Please only enable for debugging.
    abort_on_internal_errors = 0
    # Check whether CRC is matching when parsed VG is used multiple times.
    # This is useful to catch unexpected internal cached volume group
    # structure modification. Please only enable for debugging.
    detect_internal_vg_cache_corruption = 0
    # If set to 1, no operations that change on-disk metadata will be permitted.
    # Additionally, read-only commands that encounter metadata in need of repair
    # will still be allowed to proceed exactly as if the repair had been
    # performed (except for the unchanged vg_seqno).
    # Inappropriate use could mess up your system, so seek advice first!
    metadata_read_only = 0
    # 'mirror_segtype_default' defines which segtype will be used when the
    # shorthand '-m' option is used for mirroring. The possible options are:
    # "mirror" - The original RAID1 implementation provided by LVM2/DM. It is
    # characterized by a flexible log solution (core, disk, mirrored)
    # and by the necessity to block I/O while reconfiguring in the
    # event of a failure.
    # There is an inherent race in the dmeventd failure handling
    # logic with snapshots of devices using this type of RAID1 that
    # in the worst case could cause a deadlock.
    # Ref: https://bugzilla.redhat.com/show_bug.cgi?id=817130#c10
    # "raid1" - This implementation leverages MD's RAID1 personality through
    # device-mapper. It is characterized by a lack of log options.
    # (A log is always allocated for every device and they are placed
    # on the same device as the image - no separate devices are
    # required.) This mirror implementation does not require I/O
    # to be blocked in the kernel in the event of a failure.
    # This mirror implementation is not cluster-aware and cannot be
    # used in a shared (active/active) fashion in a cluster.
    # Specify the '--type <mirror|raid1>' option to override this default
    # setting.
    mirror_segtype_default = "raid1"
    # 'raid10_segtype_default' determines the segment types used by default
    # when the '--stripes/-i' and '--mirrors/-m' arguments are both specified
    # during the creation of a logical volume.
    # Possible settings include:
    # "raid10" - This implementation leverages MD's RAID10 personality through
    # device-mapper.
    # "mirror" - LVM will layer the 'mirror' and 'stripe' segment types. It
    # will do this by creating a mirror on top of striped sub-LVs;
    # effectively creating a RAID 0+1 array. This is suboptimal
    # in terms of providing redundancy and performance. Changing to
    # this setting is not advised.
    # Specify the '--type <raid10|mirror>' option to override this default
    # setting.
    raid10_segtype_default = "raid10"
    # The default format for displaying LV names in lvdisplay was changed
    # in version 2.02.89 to show the LV name and path separately.
    # Previously this was always shown as /dev/vgname/lvname even when that
    # was never a valid path in the /dev filesystem.
    # Set to 1 to reinstate the previous format.
    # lvdisplay_shows_full_device_path = 0
    # Whether to use (trust) a running instance of lvmetad. If this is set to
    # 0, all commands fall back to the usual scanning mechanisms. When set to 1
    # *and* when lvmetad is running (automatically instantiated by making use of
    # systemd's socket-based service activation or run as an initscripts service
    # or run manually), the volume group metadata and PV state flags are obtained
    # from the lvmetad instance and no scanning is done by the individual
    # commands. In a setup with lvmetad, lvmetad udev rules *must* be set up for
    # LVM to work correctly. Without proper udev rules, all changes in block
    # device configuration will be *ignored* until a manual 'pvscan --cache'
    # is performed. These rules are installed by default.
    # If lvmetad has been running while use_lvmetad was 0, it MUST be stopped
    # before changing use_lvmetad to 1 and started again afterwards.
    # If using lvmetad, the volume activation is also switched to automatic
    # event-based mode. In this mode, the volumes are activated based on
    # incoming udev events that automatically inform lvmetad about new PVs
    # that appear in the system. Once the VG is complete (all the PVs are
    # present), it is auto-activated. The activation/auto_activation_volume_list
    # setting controls which volumes are auto-activated (all by default).
    # A note about device filtering while lvmetad is used:
    # When lvmetad is updated (either automatically based on udev events
    # or directly by pvscan --cache <device> call), the devices/filter
    # is ignored and all devices are scanned by default. The lvmetad always
    # keeps unfiltered information which is then provided to LVM commands
    # and then each LVM command does the filtering based on devices/filter
    # setting itself.
    # To prevent scanning devices completely, even when using lvmetad,
    # the devices/global_filter must be used.
    # N.B. Don't use lvmetad with locking type 3 as lvmetad is not yet
    # supported in clustered environment. If use_lvmetad=1 and locking_type=3
    # is set at the same time, LVM always issues a warning message about this
    # and then it automatically disables lvmetad use.
    use_lvmetad = 0
    # Full path of the utility called to check that a thin metadata device
    # is in a state that allows it to be used.
    # Each time a thin pool needs to be activated or after it is deactivated
    # this utility is executed. The activation will only proceed if the utility
    # has an exit status of 0.
    # Set to "" to skip this check. (Not recommended.)
    # The thin tools are available as part of the device-mapper-persistent-data
    # package from https://github.com/jthornber/thin-provisioning-tools.
    # thin_check_executable = "/usr/bin/thin_check"
    # Array of string options passed with thin_check command. By default,
    # option "-q" is for quiet output.
    # With thin_check version 2.1 or newer you can add "--ignore-non-fatal-errors"
    # to let it pass through ignorable errors and fix them later.
    # thin_check_options = [ "-q" ]
    # Full path of the utility called to repair a thin metadata device.
    # Each time a thin pool needs repair this utility is executed.
    # See thin_check_executable how to obtain binaries.
    # thin_repair_executable = "/usr/bin/thin_repair"
    # Array of extra string options passed with thin_repair command.
    # thin_repair_options = [ "" ]
    # Full path of the utility called to dump thin metadata content.
    # See thin_check_executable how to obtain binaries.
    # thin_dump_executable = "/usr/bin/thin_dump"
    # If set, given features are not used by thin driver.
    # This can be helpful not just for testing, but also, for example, to avoid
    # using a problematic implementation of some thin feature.
    # Features:
    # block_size
    # discards
    # discards_non_power_2
    # external_origin
    # metadata_resize
    # external_origin_extend
    # thin_disabled_features = [ "discards", "block_size" ]
    activation {
    # Set to 1 to perform internal checks on the operations issued to
    # libdevmapper. Useful for debugging problems with activation.
    # Some of the checks may be expensive, so it's best to use this
    # only when there seems to be a problem.
    checks = 0
    # Set to 0 to disable udev synchronisation (if compiled into the binaries).
    # Processes will not wait for notification from udev.
    # They will continue irrespective of any possible udev processing
    # in the background. You should only use this if udev is not running
    # or has rules that ignore the devices LVM2 creates.
    # The command line argument --nodevsync takes precedence over this setting.
    # If set to 1 when udev is not running, and there are LVM2 processes
    # waiting for udev, run 'dmsetup udevcomplete_all' manually to wake them up.
    udev_sync = 1
    # Set to 0 to disable the udev rules installed by LVM2 (if built with
    # --enable-udev_rules). LVM2 will then manage the /dev nodes and symlinks
    # for active logical volumes directly itself.
    # N.B. Manual intervention may be required if this setting is changed
    # while any logical volumes are active.
    udev_rules = 1
    # Set to 1 for LVM2 to verify operations performed by udev. This turns on
    # additional checks (and if necessary, repairs) on entries in the device
    # directory after udev has completed processing its events.
    # Useful for diagnosing problems with LVM2/udev interactions.
    verify_udev_operations = 0
    # If set to 1 and if deactivation of an LV fails, perhaps because
    # a process run from a quick udev rule temporarily opened the device,
    # retry the operation for a few seconds before failing.
    retry_deactivation = 1
    # How to fill in missing stripes if activating an incomplete volume.
    # Using "error" will make inaccessible parts of the device return
    # I/O errors on access. You can instead use a device path, in which
    # case, that device will be used in place of missing stripes.
    # But note that using anything other than "error" with mirrored
    # or snapshotted volumes is likely to result in data corruption.
    missing_stripe_filler = "error"
    # The linear target is an optimised version of the striped target
    # that only handles a single stripe. Set this to 0 to disable this
    # optimisation and always use the striped target.
    use_linear_target = 1
    # How much stack (in KB) to reserve for use while devices suspended
    # Prior to version 2.02.89 this used to be set to 256KB
    reserved_stack = 64
    # How much memory (in KB) to reserve for use while devices suspended
    reserved_memory = 8192
    # Nice value used while devices suspended
    process_priority = -18
    # If volume_list is defined, each LV is only activated if there is a
    # match against the list.
    # "vgname" and "vgname/lvname" are matched exactly.
    # "@tag" matches any tag set in the LV or VG.
    # "@*" matches if any tag defined on the host is also set in the LV or VG
    # If any host tags exist but volume_list is not defined, a default
    # single-entry list containing "@*" is assumed.
    # volume_list = [ "vg1", "vg2/lvol1", "@tag1", "@*" ]
    # If auto_activation_volume_list is defined, each LV that is to be
    # activated with the autoactivation option (--activate ay/-a ay) is
    # first checked against the list. There are two scenarios in which
    # the autoactivation option is used:
    # - automatic activation of volumes based on incoming PVs. If all the
    # PVs making up a VG are present in the system, the autoactivation
    # is triggered. This requires lvmetad (global/use_lvmetad=1) and udev
    # to be running. In this case, "pvscan --cache -aay" is called
    # automatically without any user intervention while processing
    # udev events. Please, make sure you define auto_activation_volume_list
    # properly so only the volumes you want and expect are autoactivated.
    # - direct activation on command line with the autoactivation option.
    # In this case, the user calls "vgchange --activate ay/-a ay" or
    # "lvchange --activate ay/-a ay" directly.
    # By default, the auto_activation_volume_list is not defined and all
    # volumes will be activated either automatically or by using --activate ay/-a ay.
    # N.B. The "activation/volume_list" is still honoured in all cases so even
    # if the VG/LV passes the auto_activation_volume_list, it still needs to
    # pass the volume_list for it to be activated in the end.
    # If auto_activation_volume_list is defined but empty, no volumes will be
    # activated automatically and --activate ay/-a ay will do nothing.
    # auto_activation_volume_list = []
    # If auto_activation_volume_list is defined and it's not empty, only matching
    # volumes will be activated either automatically or by using --activate ay/-a ay.
    # "vgname" and "vgname/lvname" are matched exactly.
    # "@tag" matches any tag set in the LV or VG.
    # "@*" matches if any tag defined on the host is also set in the LV or VG
    # auto_activation_volume_list = [ "vg1", "vg2/lvol1", "@tag1", "@*" ]
    # If read_only_volume_list is defined, each LV that is to be activated
    # is checked against the list, and if it matches, it is activated
    # in read-only mode. (This overrides '--permission rw' stored in the
    # metadata.)
    # "vgname" and "vgname/lvname" are matched exactly.
    # "@tag" matches any tag set in the LV or VG.
    # "@*" matches if any tag defined on the host is also set in the LV or VG
    # read_only_volume_list = [ "vg1", "vg2/lvol1", "@tag1", "@*" ]
    # Each LV can have an 'activation skip' flag stored persistently against it.
    # During activation, this flag is used to decide whether such an LV is skipped.
    # The 'activation skip' flag can be set during LV creation and by default it
    # is automatically set for thin snapshot LVs. The 'auto_set_activation_skip'
    # enables or disables this automatic setting of the flag while LVs are created.
    # auto_set_activation_skip = 1
    # For RAID or 'mirror' segment types, 'raid_region_size' is the
    # size (in KiB) of each:
    # - synchronization operation when initializing
    # - each copy operation when performing a 'pvmove' (using 'mirror' segtype)
    # This setting has replaced 'mirror_region_size' since version 2.02.99
    raid_region_size = 512
    # Setting to use when there is no readahead value stored in the metadata.
    # "none" - Disable readahead.
    # "auto" - Use default value chosen by kernel.
    readahead = "auto"
    # 'raid_fault_policy' defines how a device failure in a RAID logical
    # volume is handled. This includes logical volumes that have the following
    # segment types: raid1, raid4, raid5*, and raid6*.
    # In the event of a failure, the following policies will determine what
    # actions are performed during the automated response to failures (when
    # dmeventd is monitoring the RAID logical volume) and when 'lvconvert' is
    # called manually with the options '--repair' and '--use-policies'.
    # "warn" - Use the system log to warn the user that a device in the RAID
    # logical volume has failed. It is left to the user to run
    # 'lvconvert --repair' manually to remove or replace the failed
    # device. As long as the number of failed devices does not
    # exceed the redundancy of the logical volume (1 device for
    # raid4/5, 2 for raid6, etc) the logical volume will remain
    # usable.
    # "allocate" - Attempt to use any extra physical volumes in the volume
    # group as spares and replace faulty devices.
    raid_fault_policy = "warn"
    # 'mirror_image_fault_policy' and 'mirror_log_fault_policy' define
    # how a device failure affecting a mirror (of "mirror" segment type) is
    # handled. A mirror is composed of mirror images (copies) and a log.
    # A disk log ensures that a mirror does not need to be re-synced
    # (all copies made the same) every time a machine reboots or crashes.
    # In the event of a failure, the specified policy will be used to determine
    # what happens. This applies to automatic repairs (when the mirror is being
    # monitored by dmeventd) and to manual lvconvert --repair when
    # --use-policies is given.
    # "remove" - Simply remove the faulty device and run without it. If
    # the log device fails, the mirror would convert to using
    # an in-memory log. This means the mirror will not
    # remember its sync status across crashes/reboots and
    # the entire mirror will be re-synced. If a
    # mirror image fails, the mirror will convert to a
    # non-mirrored device if there is only one remaining good
    # copy.
    # "allocate" - Remove the faulty device and try to allocate space on
    # a new device to be a replacement for the failed device.
    # Using this policy for the log is fast and maintains the
    # ability to remember sync state through crashes/reboots.
    # Using this policy for a mirror device is slow, as it
    # requires the mirror to resynchronize the devices, but it
    # will preserve the mirror characteristic of the device.
    # This policy acts like "remove" if no suitable device and
    # space can be allocated for the replacement.
    # "allocate_anywhere" - Not yet implemented. Useful to place the log device
    # temporarily on same physical volume as one of the mirror
    # images. This policy is not recommended for mirror devices
    # since it would break the redundant nature of the mirror. This
    # policy acts like "remove" if no suitable device and space can
    # be allocated for the replacement.
    mirror_log_fault_policy = "allocate"
    mirror_image_fault_policy = "remove"
    # 'snapshot_autoextend_threshold' and 'snapshot_autoextend_percent' define
    # how to handle automatic snapshot extension. The former defines when the
    # snapshot should be extended: when its space usage exceeds this many
    # percent. The latter defines how much extra space should be allocated for
    # the snapshot, in percent of its current size.
    # For example, if you set snapshot_autoextend_threshold to 70 and
    # snapshot_autoextend_percent to 20, whenever a snapshot exceeds 70% usage,
    # it will be extended by another 20%. For a 1G snapshot, using up 700M will
    # trigger a resize to 1.2G. When the usage exceeds 840M, the snapshot will
    # be extended to 1.44G, and so on.
    # Setting snapshot_autoextend_threshold to 100 disables automatic
    # extensions. The minimum value is 50 (A setting below 50 will be treated
    # as 50).
    snapshot_autoextend_threshold = 100
    snapshot_autoextend_percent = 20
    # 'thin_pool_autoextend_threshold' and 'thin_pool_autoextend_percent' define
    # how to handle automatic pool extension. The former defines when the
    # pool should be extended: when its space usage exceeds this many
    # percent. The latter defines how much extra space should be allocated for
    # the pool, in percent of its current size.
    # For example, if you set thin_pool_autoextend_threshold to 70 and
    # thin_pool_autoextend_percent to 20, whenever a pool exceeds 70% usage,
    # it will be extended by another 20%. For a 1G pool, using up 700M will
    # trigger a resize to 1.2G. When the usage exceeds 840M, the pool will
    # be extended to 1.44G, and so on.
    # Setting thin_pool_autoextend_threshold to 100 disables automatic
    # extensions. The minimum value is 50 (A setting below 50 will be treated
    # as 50).
    thin_pool_autoextend_threshold = 100
    thin_pool_autoextend_percent = 20
    # While activating devices, I/O to devices being (re)configured is
    # suspended, and as a precaution against deadlocks, LVM2 needs to pin
    # any memory it is using so it is not paged out. Groups of pages that
    # are known not to be accessed during activation need not be pinned
    # into memory. Each string listed in this setting is compared against
    # each line in /proc/self/maps, and the pages corresponding to any
    # lines that match are not pinned. On some systems locale-archive was
    # found to make up over 80% of the memory used by the process.
    # mlock_filter = [ "locale/locale-archive", "gconv/gconv-modules.cache" ]
    # Set to 1 to revert to the default behaviour prior to version 2.02.62
    # which used mlockall() to pin the whole process's memory while activating
    # devices.
    use_mlockall = 0
    # Monitoring is enabled by default when activating logical volumes.
    # Set to 0 to disable monitoring or use the --ignoremonitoring option.
    monitoring = 1
    # When pvmove or lvconvert must wait for the kernel to finish
    # synchronising or merging data, they check and report progress
    # at intervals of this number of seconds. The default is 15 seconds.
    # If this is set to 0 and there is only one thing to wait for, there
    # are no progress reports, but the process is awoken immediately the
    # operation is complete.
    polling_interval = 15
    # Report settings.
    # report {
    # Align columns on report output.
    # aligned=1
    # When buffered reporting is used, the report's content is appended
    # incrementally to include each object being reported until the report
    # is flushed to output which normally happens at the end of command
    # execution. Otherwise, if buffering is not used, each object is
    # reported as soon as its processing is finished.
    # buffered=1
    # Show headings for columns on report.
    # headings=1
    # A separator to use on report after each field.
    # separator=" "
    # Use a field name prefix for each field reported.
    # prefixes=0
    # Quote field values when using field name prefixes.
    # quoted=1
    # Output each column as a row. If set, this also implies report/prefixes=1.
    # columns_as_rows=0
    # Comma separated list of columns to sort by when reporting 'lvm devtypes' command.
    # See 'lvm devtypes -o help' for the list of possible fields.
    # devtypes_sort="devtype_name"
    # Comma separated list of columns to report for 'lvm devtypes' command.
    # See 'lvm devtypes -o help' for the list of possible fields.
    # devtypes_cols="devtype_name,devtype_max_partitions,devtype_description"
    # Comma separated list of columns to report for 'lvm devtypes' command in verbose mode.
    # See 'lvm devtypes -o help' for the list of possible fields.
    # devtypes_cols_verbose="devtype_name,devtype_max_partitions,devtype_description"
    # Comma separated list of columns to sort by when reporting 'lvs' command.
    # See 'lvs -o help' for the list of possible fields.
    # lvs_sort="vg_name,lv_name"
    # Comma separated list of columns to report for 'lvs' command.
    # See 'lvs -o help' for the list of possible fields.
    # lvs_cols="lv_name,vg_name,lv_attr,lv_size,pool_lv,origin,data_percent,move_pv,mirror_log,copy_percent,convert_lv"
    # Comma separated list of columns to report for 'lvs' command in verbose mode.
    # See 'lvs -o help' for the list of possible fields.
    # lvs_cols_verbose="lv_name,vg_name,seg_count,lv_attr,lv_size,lv_major,lv_minor,lv_kernel_major,lv_kernel_minor,pool_lv,origin,data_percent,metadata_percent,move_pv,copy_percent,mirror_log,convert
    # Comma separated list of columns to sort by when reporting 'vgs' command.
    # See 'vgs -o help' for the list of possible fields.
    # vgs_sort="vg_name"
    # Comma separated list of columns to report for 'vgs' command.
    # See 'vgs -o help' for the list of possible fields.
    # vgs_cols="vg_name,pv_count,lv_count,snap_count,vg_attr,vg_size,vg_free"
    # Comma separated list of columns to report for 'vgs' command in verbose mode.
    # See 'vgs -o help' for the list of possible fields.
    # vgs_cols_verbose="vg_name,vg_attr,vg_extent_size,pv_count,lv_count,snap_count,vg_size,vg_free,vg_uuid,vg_profile"
    # Comma separated list of columns to sort by when reporting 'pvs' command.
    # See 'pvs -o help' for the list of possible fields.
    # pvs_sort="pv_name"
    # Comma separated list of columns to report for 'pvs' command.
    # See 'pvs -o help' for the list of possible fields.
    # pvs_cols="pv_name,vg_name,pv_fmt,pv_attr,pv_size,pv_free"
    # Comma separated list of columns to report for 'pvs' command in verbose mode.
    # See 'pvs -o help' for the list of possible fields.
    # pvs_cols_verbose="pv_name,vg_name,pv_fmt,pv_attr,pv_size,pv_free,dev_size,pv_uuid"
    # Comma separated list of columns to sort by when reporting 'lvs --segments' command.
    # See 'lvs --segments -o help' for the list of possible fields.
    # segs_sort="vg_name,lv_name,seg_start"
    # Comma separated list of columns to report for 'lvs --segments' command.
    # See 'lvs --segments -o help' for the list of possible fields.
    # segs_cols="lv_name,vg_name,lv_attr,stripes,segtype,seg_size"
    # Comma separated list of columns to report for 'lvs --segments' command in verbose mode.
    # See 'lvs --segments -o help' for the list of possible fields.
    # segs_cols_verbose="lv_name,vg_name,lv_attr,seg_start,seg_size,stripes,segtype,stripesize,chunksize"
    # Comma separated list of columns to sort by when reporting 'pvs --segments' command.
    # See 'pvs --segments -o help' for the list of possible fields.
    # pvsegs_sort="pv_name,pvseg_start"
    # Comma separated list of columns to sort by when reporting 'pvs --segments' command.
    # See 'pvs --segments -o help' for the list of possible fields.
    # pvsegs_cols="pv_name,vg_name,pv_fmt,pv_attr,pv_size,pv_free,pvseg_start,pvseg_size"
    # Comma separated list of columns to sort by when reporting 'pvs --segments' command in verbose mode.
    # See 'pvs --segments -o help' for the list of possible fields.
    # pvsegs_cols_verbose="pv_name,vg_name,pv_fmt,pv_attr,pv_size,pv_free,pvseg_start,pvseg_size,lv_name,seg_start_pe,segtype,seg_pe_ranges"
    # Advanced section #
    # Metadata settings
    # metadata {
    # Default number of copies of metadata to hold on each PV. 0, 1 or 2.
    # You might want to override it from the command line with 0
    # when running pvcreate on new PVs which are to be added to large VGs.
    # pvmetadatacopies = 1
    # Default number of copies of metadata to maintain for each VG.
    # If set to a non-zero value, LVM automatically chooses which of
    # the available metadata areas to use to achieve the requested
    # number of copies of the VG metadata. If you set a value larger
    # than the total number of metadata areas available then
    # metadata is stored in them all.
    # The default value of 0 ("unmanaged") disables this automatic
    # management and allows you to control which metadata areas
    # are used at the individual PV level using 'pvchange
    # --metadataignore y/n'.
    # vgmetadatacopies = 0
    # Approximate default size of on-disk metadata areas in sectors.
    # You should increase this if you have large volume groups or
    # you want to retain a large on-disk history of your metadata changes.
    # pvmetadatasize = 255
    # List of directories holding live copies of text format metadata.
    # These directories must not be on logical volumes!
    # It's possible to use LVM2 with a couple of directories here,
    # preferably on different (non-LV) filesystems, and with no other
    # on-disk metadata (pvmetadatacopies = 0). Or this can be in
    # addition to on-disk metadata areas.
    # The feature was originally added to simplify testing and is not
    # supported under low memory situations - the machine could lock up.
    # Never edit any files in these directories by hand unless you
    # are absolutely sure you know what you are doing! Use
    # the supplied toolset to make changes (e.g. vgcfgrestore).
    # dirs = [ "/etc/lvm/metadata", "/mnt/disk2/lvm/metadata2" ]
    # Event daemon
    dmeventd {
    # mirror_library is the library used when monitoring a mirror device.
    # "libdevmapper-event-lvm2mirror.so" attempts to recover from
    # failures. It removes failed devices from a volume group and
    # reconfigures a mirror as necessary. If no mirror library is
    # provided, mirrors are not monitored through dmeventd.
    mirror_library = "libdevmapper-event-lvm2mirror.so"
    # snapshot_library is the library used when monitoring a snapshot device.
    # "libdevmapper-event-lvm2snapshot.so" monitors the filling of
    # snapshots and emits a warning through syslog when the use of
    # the snapshot exceeds 80%. The warning is repeated when 85%, 90% and
    # 95% of the snapshot is filled.
    snapshot_library = "libdevmapper-event-lvm2snapshot.so"
    # thin_library is the library used when monitoring a thin device.
    # "libdevmapper-event-lvm2thin.so" monitors the filling of
    # pool and emits a warning through syslog when the use of
    # the pool exceeds 80%. The warning is repeated when 85%, 90% and
    # 95% of the pool is filled.
    thin_library = "libdevmapper-event-lvm2thin.so"
    # Full path of the dmeventd binary.
    # executable = "/usr/sbin/dmeventd"

  • There is no more space for virtual disk ServerName_2.vmdk. You might be able to continue this session by freeing disk space on the relevant volume, and clicking Retry. Click Cancel to terminate this session.   Time: 30/05/2014 1:16:20 AM

    Recently, our mail server crashed at about 7pm one night, with the error 'There is no more space for virtual disk ServerName_2.vmdk. You might be able to continue this session by freeing disk space on the relevant volume, and clicking Retry. Click Cancel to terminate this session.'
    When we click Retry, the server starts up OK.
    There are no snapshots listed in Snapshot manager for any of the virtual machines on the host.
    There is also free disk space available on the host and for the VM with the disk errors.
    This happened at least three more times, often at bad times. Each time, we were able to click 'Retry' and the disk/system would allow the VM to start-up successfully.
    I checked the forums, the VMware support articles and the internet as I had not seen this problem before. I have completed the vSphere 5.1 - Fast Track course and this issue was NOT covered in the training.
    Most of the advice on-line and even that on the VMware web-site was pointing to snapshots being the cause of this issue. There are no snapshots enabled and I cannot see evidence of snapshots ever being used.
    - We are running VMware vSphere (5.1.0) and there are (were) 4 virtual machines running on the ESXi host. We are using the free version of VMware ESXi.
    - The Hard disk types we are using for this Virtual Machine are 'Thin Provisioned'.
    - There are 4 [Thin Provisioned] Hard Disks for this virtual machine.
    - There are 6 CPUs
    - There is 20GB of RAM (memory)
    - The VM is running Windows Server 2008 R2 as the guest/VM operating system. It is an Exchange 2010 SP1 mail server. There is plenty of available disk space on all the drives. The [Exchange] log files are cleaned out regularly (automated).
    I decided to move one of our non-critical servers off this host and on to another host to see if this helped the problem. This took quite some time, as we are not using HA or vMotion, nor do we have vCenter Server...nonetheless, I finally managed to get the non-critical server on to another host (n.b. this was a much smaller machine with fewer virtual resources assigned to it).
    After moving the non-critical server off this host, we decided to monitor the Host and see if the issue resolved itself.
    I checked the host about 6-10 times a day, from first thing in the morning till last thing at night - monitoring the performance of not only the Virtual Machine, but the ESXi host also.
    There were no adverse performance issues. The only thing I did note, on the Summary page of the ESX host under Storage, was that if I right-clicked on the datastore and clicked refresh, the free disk space would drop (i.e. from 140GB to 125GB).
    After monitoring the host and VM for about 2 weeks, we did NOT have another instance of the above error.
    Sorry for the long winded post, but I wanted to give as much detail given this error has been raised before and snapshots are usually blamed as the cause.
    My question is this:
    If the ESX host had plenty of available disk capacity and there were no snapshots enabled on the VM (or any other VMs on the same host), then why did our virtual machine crash with the error that 'there is no more space for virtual disk ServerName_2.vmdk'?
    How do we prevent this issue from happening if we don't know the underlying cause?
    I would greatly appreciate any advice or suggestions.
    If I have not provided enough info on the specs or environment, please let me know and I will provide more information.
    Thanks all,
    Kurt

    The type of storage is really based on your requirements, and your ability to withstand downtime.
    iSCSI, as you are using, works with a NAS such as Synology or QNAP.  NAS Selector - Support - Synology - Network Attached Storage (NAS)
    I wouldn't use iSCSI for Exchange or any database.  It's a bit slow.
    Do you have a single physical host?  Then I'd probably go with external direct-attached storage.  This would be a card inserted in your host server that gives you multilane SAS/SATA connectivity (www.techcable.com/SAS-SATA/SAS-SATA.pps) and an external disk enclosure/array.
    For multiple hosts to a single array, I recommend a fibre channel connection to a FC capable switch, and on to a FC connected array.
    We used to use an HP P2000 (an old G1), but it has since been retired.  Worked pretty well once the firmware was upgraded.  http://www8.hp.com/us/en/products/disk-storage/product-detail.html?oid=4118559#!tab=features.  They can be connected via iSCSI, Fibre Channel or 6Gb SAS so they are flexible and reasonably priced.
    Recommendations:
         Use RAID 6 with your large disk arrays.  With large disks there is a measurable failure rate when rebuilding a failed RAID5 array based on MTBF.
         Use smaller 15K disks in RAID 0+1 for speed on databases/Exchange.
         Use slower 7.2K disks in RAID6 for file storage.
    We are a small hospital and we have 3 VMware servers with dual CNA (FC and Ethernet in a single twinax cable) connections to 2 redundant Cisco Nexus 5K switches and then 4 Fibre Channel connections to an EMC VNX 5300.  It's extremely fast with about 50 virtual servers, but was quite an investment.  One thing we don't have to worry about is down time.  If there ever is an equipment failure, we have redundant everything, including power split between two UPSs.
    Our VNX has 3 tiers of performance.  Three 100GB SSDs as "Fast Cache" in RAID 1 with a hot spare keep the most used data ready; it's not really a tier, although one could be built using the same disks.  The second is a performance tier with an 8 x 600GB RAID 0+1 and a hot spare.  The third is a bunch of 7.2K 3TB disks in RAID6.  The VNX auto-tiers, placing data on disks depending on where it's needed.  The volumes are sliced and diced automatically in the background to make this happen and we never have to touch it.  I used a demo of SolarWinds Storage Manager to monitor performance for a while and the utilization was always low, meaning all data access was fast throughout the day.
    D

  • Need for multiple ASM disk groups on a SAN with RAID5??

    Hello all,
    I've successfully installed clusterware, and ASM on a 5 node system. I'm trying to use asmca (11Gr2 on RHEL5)....to configure the disk groups.
    I have a SAN, which actually was previously used for a 10G ASM RAC setup...so, reusing the candidate volumes that ASM has found.
    I had noticed on the previous incarnation....that several disk groups had been created, for example:
    ASMCMD> ls
    DATADG/
    INDEXDG/
    LOGDG1/
    LOGDG2/
    LOGDG3/
    LOGDG4/
    RECOVERYDG/
    Now....this is all on a SAN....which basically has two pools of drives set up each in a RAID5 configuration. Pool 1 contains ASM volumes named ASM1 - ASM32. Each of these logical volumes is about 65 GB.
    Pool #2...has ASM33 - ASM48 volumes....each of which is about 16GB in size.
    I used ASM33 from pool#2...by itself to contain my cluster voting disk and OCR.
    My question is....with this type of setup...would doing so many disk groups as listed above really do any good for performance? I was thinking that with all of this on a SAN, with logical volumes on top of a couple of sets of RAID5 disks...would divisions at the disk group level with external redundancy really do anything?
    I was thinking of starting with about half of the ASM1-ASM31 'disks'...to create one large DATADG disk group, which would house all of the database instances data, indexes....etc. I'd keep the remaining large candidate disks as needed for later growth.
    I was going to start with the pool of the smaller disks (except the 1 already dedicated to cluster needs) to basically serve as a decently sized RECOVERYDG...to house logs, flashback area...etc. It appears this pool is separate from pool #1...so, possibly some speed benefits there.
    But really...is there any need to separate the diskgroups, based on a SAN with two pools of RAID5 logical volumes?
    If so, can someone give me some ideas why...links on this info...etc.
    Thank you in advance,
    cayenne

    The best practice is to use 2 disk groups, one for data and the other for the flash/fast recovery area. There really is no need to have a disk group for each type of file; in fact, the more disks in a disk group (to a point, I've seen) the better for performance and space management. However, there are times when multiple disk groups are appropriate (not saying this is one of them, only FYI), such as backup/recovery and life cycle management. Typically you will still get a benefit from double striping, i.e. having a SAN with RAID groups presenting multiple LUNs to ASM, and then having ASM use those LUNs in disk groups. I saw this in my own testing. Start off with a minimum of 4 LUNs per disk group, and add in pairs, as this will provide optimal performance (at least it did in my testing). You should also have a set of standard LUN sizes to present to ASM so things are consistent across your enterprise; the sizing is typically done based on your database size. For example:
    300GB LUN: database > 10TB
    150GB LUN: database 1TB to 10 TB
    50GB LUN: database < 1TB
    As databases grow beyond a threshold, the larger LUNs are swapped in and the previous ones are swapped out. With thin provisioning it is a little different since you only need to resize the ASM LUNs. I'd also recommend having at least 2 of each standard-sized LUN ready to go in case you need space in an emergency. Even with capacity management you never know when something will consume space too quickly.
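    To make this concrete, here is a minimal sketch (not from the original reply; the disk group names, LUN paths and the ASM disk name DATA_0000 are hypothetical) of creating the two recommended disk groups from SAN LUNs with external redundancy, and of the LUN swap and resize operations described above, run as the Grid/ASM owner:
    $ sqlplus / as sysasm
    SQL> CREATE DISKGROUP DATA EXTERNAL REDUNDANCY DISK '/dev/mapper/asm1','/dev/mapper/asm2','/dev/mapper/asm3','/dev/mapper/asm4';
    SQL> CREATE DISKGROUP FRA EXTERNAL REDUNDANCY DISK '/dev/mapper/asm34','/dev/mapper/asm35','/dev/mapper/asm36','/dev/mapper/asm37';
    SQL> -- swap a larger standard LUN in and drop an old one in a single rebalance
    SQL> ALTER DISKGROUP DATA ADD DISK '/dev/mapper/asm5' DROP DISK DATA_0000 REBALANCE POWER 4;
    SQL> -- after the SAN-side (thin) LUNs have been grown, pick up the new size
    SQL> ALTER DISKGROUP DATA RESIZE ALL;
    SQL> exit
    External redundancy is used here because the RAID5 pools on the SAN already provide the protection, which matches the setup described in the question.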
    ASM is all about space savings, performance, and management :-).
    Hope this helps.
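    A related sanity check, again just a sketch (assuming the Grid home is in the PATH, ORACLE_SID points at the ASM instance, and the hypothetical DATA disk group from the example above exists): asmcmd can confirm that space and I/O stay reasonably balanced with fewer, larger disk groups.
    $ asmcmd lsdg                 # state, total MB and free MB for each disk group
    $ asmcmd lsdsk -k -G DATA     # per-disk size, name and path within DATA
    $ asmcmd iostat -G DATA 10    # reads/writes per ASM disk in DATA, sampled every 10 seconds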

  • "Optimize" a CSV volume

    I have a Hyper-V 2012 cluster with 3 nodes.  I use a CSV volume to store the VMs.  I have all the latest patches installed. I have an Equallogic SAN (PS4000) with the latest firmware on it providing the LUN for the CSV.  Everything in my environment is supposed to support re-thinning (or unmapping, or whatever the right term is) the LUN.
    I have about 500 GB of unused space on the 2TB volume, and the volume was thin provisioned.  A restore of some very large VHD files from backup caused the thin-provisioned volume to grow and use almost the entire volume at one point, but the corrupt VHDs have since been deleted.  Now I have "dirty" blocks in the LUN that I want to reclaim into free space on the SAN.
    This all happens, apparently, when Server 2012 performs an "Optimize" on the disks.  In my environment this is scheduled to happen once a week.  It did apparently do something this last week, because my volume utilization on the SAN went from 96% to 91%.  Not even close to reclaiming all dirty blocks, but it's a start I guess.  So now I went into the "Defragment and Optimize Drives" utility and told it to commence a manual optimization.  Nothing happens and Event Viewer gives me this error:
    The volume VMStorage1 (C:\ClusterStorage\Volume1) was not optimized because an error was encountered: CSVFS failed operation as volume is not in redirected mode. (0x8007174F)
    So my questions are these:
    Shouldn't it put the CSV in redirect mode if it needs to do this in order to optimize the drive automatically?
    If it can't do this automatically, how did it return 5% of the CSV SAN volume to free space last week?
    Can I put the volume in redirected mode manually and do the optimize manually? Redirected mode is not supposed to be necessary in 2012 CSV anymore - at least not for backup.  Why here?
    Will my environment re-thin, unmap, whatever?  It appears it MIGHT.  Does it take several iterations (i.e. weeks)?
    Can anyone explain this incredibly vague and cloaked process from a Windows Server 2012 perspective?
    Thank you for any help!
    DML
    DLovitt

    Hi,
    In CSVv2.0, every effort was made to expand the number of scenarios that would use Direct I/O over Redirected I/O. 
    Direct I/O delivers faster performance with lower network overhead. Emphasis is on using Direct I/O for all types of file open actions. 
    Direct I/O uses buffered reads and writes, which means it can take advantage of the Windows Cache Manager. As an example, Direct I/O results in better virtual machine creation times and improved copy performance. In CSVv1.0, to get the highest performance during a copy operation, the destination node had to be the Coordinator node for the destination CSV volume.
    CSVv2.0 uses a new algorithm for determining what types of I/O are redirected.
    Oplocks are used as a distributed locking mechanism to determine if I/O can go via a direct path.
    All of the optimizations and performance improvements are for naught if the file system cannot remain available for the applications to use. 
    The new file system check and repair capability goes a long way towards ensuring file system availability. 
    The new file system health-checking model coupled with new functionality in chkdsk helps in this area. 
    In addition, CSV volumes can take advantage of these new capabilities. 
    Thanks.
    Kevin Ni

  • Iscsi target rewriting sparse backing store

    Hi all,
    I have this particular problem when trying to use a sparse file residing on ZFS as the backing store for an iSCSI target. For the sake of this post, let's say I have to use the sparse file instead of a whole ZFS filesystem as the iSCSI backing store.
    However, as soon as the sparse file is used as the iSCSI target backing store, Solaris (the iscsitgt process) decides to rewrite the entire sparse file and make it non-sparse. Note this all happens without any iSCSI initiator (client) ever having accessed this iSCSI target.
    My question is: why is the sparse file being rewritten at that time?
    I can expect writes at iSCSI initiator connect time, but why at iSCSI target create time?
    Here are the steps:
    1. Create the sparse file, note the actual size,
    # dd if=/dev/zero of=sparse_file.dat bs=1024k count=1 seek=4096
    1+0 records in
    1+0 records out
    # du -sk .
    2
    # ll sparse_file.dat
    -rw-r--r--   1 root     root     4296015872 Feb  7 10:12 sparse_file.dat
    2. Create the iscsi target using that file as backing store:
    # iscsitadm create target --backing-store=$PWD/sparse_file.dat sparse
    3. Above command returns immediately, everything seems ok at this time
    4. But after couple of seconds, disk activity increases, and zpool iostat shows
    # zpool iostat 3
                   capacity     operations    bandwidth
    pool         used  avail   read  write   read  write
    mypool  5.04G   144G      0    298      0  35.5M
    mypool  5.20G   144G      0    347      0  38.0M
    and so on, until the write over previously sparse 4G is over:
    5. Note the real size now:
    # du -sk .
    4193252 .
    Note all of the above was happening with no iSCSI initiators connected to that node or target. Solaris OS did it by itself, and I can see no reason why.
    I would like to have those files sparse, at least until I use them as iscsi targets, and I would prefer those files to grow as my initiators (clients) are filling them.
    If anyone can share some thoughts on this, I'd appreciate it
    Thanks,
    Robert

    Problem solved.
    The Solaris iSCSI target daemon configuration file has to be updated with:
    <thin-provisioning>true</thin-provisioning>
    so that iscsitgtd does not initialize the iSCSI target backing-store files. This applies only to iSCSI targets that use files as their backing store.
    After creating iSCSI targets with a file (sparse or not) as the backing store, there is no I/O activity whatsoever, and that's what I wanted.
    FWIW, This is how the config file looks now.
    # more /etc/iscsi/target_config.xml
    <config version='1.0'>
    <thin-provisioning>true</thin-provisioning>
    </config>
    #
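    For anyone verifying the fix, a quick before/after check (just a sketch that repeats the steps from the question; the file and target names are the ones used above):
    # dd if=/dev/zero of=sparse_file.dat bs=1024k count=1 seek=4096
    # du -sk sparse_file.dat            # a few KB - the file starts out sparse
    # iscsitadm create target --backing-store=$PWD/sparse_file.dat sparse
    # sleep 60; du -sk sparse_file.dat  # with thin-provisioning=true this should stay small
    # ls -l sparse_file.dat             # apparent size remains 4296015872 bytes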
