Strange units in conky (KiB, MiB, GiB)

Conky is working fine here (using KDE), but the units are strange: KiB, MiB, GiB. I want kB, MB and GB instead, and also a fixed number of digits. How can I format that? I only found a variable for padding percentage values.
Here's my conkyrc:
background yes
cpu_avg_samples 5
net_avg_samples 5
out_to_console no
font 7x13
own_window_transparent no
own_window_colour hotpink
xftalpha 1.0
on_bottom yes
mail_spool $MAIL
update_interval 2
own_window no
double_buffer yes
minimum_size 5 5
draw_shades yes
draw_outline no
draw_borders yes
stippled_borders 0
border_margin 10
border_width 2
default_color white
default_shade_color black
default_outline_color white
alignment top_left
gap_x 20
gap_y 20
use_spacer yes
no_buffers yes
uppercase no
# stuff after 'TEXT' will be formatted on screen
TEXT
${color #CCCCCC}${time %A, %d %B %Y} ${alignr}${time %T}
${color #ffccaa}SYSTEM
${color white}${hr 1}
${color #888888}Kernel: ${color #CCCCCC}$kernel ${color #888888}${alignr}Uptime: ${color #CCCCCC}$uptime
${color #888888}CPU : ${color #CCCCCC}$freq_g GHz $cpu% ${alignr}${i2c temp 2} C
${color #888888} ${cpugraph 15,210 ff0000 ff00ff}
${color #888888}RAM : ${color #CCCCCC}$memmax $memperc% ${membar 6}
${color #888888}Swap : ${color #CCCCCC}$swapmax $swapperc% ${swapbar 6}
${color #ffccaa}DISK
${color white}${hr 1}
${color #888888}/ : ${color #CCCCCC}${fs_size /} ${fs_free_perc /} % ${fs_bar 6 /}
${color #888888}/boot : ${color #CCCCCC}${fs_size /boot} ${fs_free_perc /boot} % ${fs_bar 6 /boot}
${color #888888}/tmp : ${color #CCCCCC}${fs_size /tmp} ${fs_free_perc /tmp} % ${fs_bar 6 /tmp}
${color #888888}/home : ${color #CCCCCC}${fs_size /home} ${fs_free_perc /home} % ${fs_bar 6 /home}
${color #888888}/data : ${color #CCCCCC}${fs_size /mnt/data} ${fs_free_perc /mnt/data} % ${fs_bar 6 /mnt/data}
${color #ffccaa}PROCESSES
${color white}${hr 1}
${color #888888}Processes: ${color #CCCCCC}$processes ${color #888888}Running: ${color #CCCCCC}$running_processes
${color #888888}Name PID CPU% MEM%
${color #CCCCCC}${top name 1} ${top pid 1} ${top cpu 1} ${top mem 1}
${color #CCCCCC}${top name 2} ${top pid 2} ${top cpu 2} ${top mem 2}
${color #CCCCCC}${top name 3} ${top pid 3} ${top cpu 3} ${top mem 3}
${color #CCCCCC}${top_mem name 1} ${top_mem pid 1} ${top_mem cpu 1} ${top_mem mem 1}
${color #CCCCCC}${top_mem name 2} ${top_mem pid 2} ${top_mem cpu 2} ${top_mem mem 2}
${color #CCCCCC}${top_mem name 3} ${top_mem pid 3} ${top_mem cpu 3} ${top_mem mem 3}
${color #ffccaa}NETWORK
${color white}${hr 1}
${color #888888}Down : ${color #CCCCCC}${downspeed eth0} kb/s ${color #888888}Up : ${color #CCCCCC}${upspeed eth0} kb/s
${color #888888}Total: ${color #CCCCCC}${totaldown eth0} ${color #888888}Total: ${color #CCCCCC}${totalup eth0}
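
Edit: the closest workaround I can come up with (just a sketch, untested): since I could not find a config option that switches conky to decimal units, the raw values can be read from /proc or /sys with ${execi} and formatted with awk, which also gives a fixed number of digits. The helper script name below is made up:

#!/bin/sh
# ~/bin/mem_mb.sh -- print total RAM in decimal MB with one digit after the point
# (/proc/meminfo reports MemTotal in units of 1024 bytes)
awk '/^MemTotal/ {printf "%.1f", $2 * 1024 / 1000000}' /proc/meminfo

and then in the TEXT section something like:

${color #888888}RAM : ${color #CCCCCC}${execi 60 ~/bin/mem_mb.sh} MB $memperc% ${membar 6}

The same trick works for the network totals by reading /sys/class/net/eth0/statistics/rx_bytes and tx_bytes instead of using ${totaldown eth0} and ${totalup eth0}.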

Well ... I've read the wiki post and my conclusion is: it is the stupidest thing I have ever seen.
Adding those stupid letters does not help end users at all. It is not the name that confuses people, it is that they do not know what the different powers of ten mean.
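
For the record, the difference those letters encode is just decimal vs. binary prefixes:

1 kB = 1000 B                        1 KiB = 1024 B
1 MB = 1000^2 B = 1,000,000 B        1 MiB = 1024^2 B = 1,048,576 B
1 GB = 1000^3 B = 1,000,000,000 B    1 GiB = 1024^3 B = 1,073,741,824 B

so by the time you reach GB/GiB the two readings already differ by about 7%.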

Similar Messages

  • Strange MPD problems (conky/dzen?)

    I'm having a strange problem with MPD ever since I started messing around with xmonad/dzen2/conky-cli. I seem to lose control of mpd, but my music continues to play... Sometimes it will randomly come back, but most of the time I'm left unable to control it, the only way to stop it being a kill. Here are the errors I'm getting (from a conky-cli config):
    ^fg(white) | ^fg(white)Wednesday 26 Aug 2009 07:51:39 PM
    Conky: MPD error: problems getting a response from "localhost" on port 6600 : Operation now in progress
    Conky: MPD error: problems getting a response from "localhost" on port 6600 : Operation now in progress
    ^fg(white) | ^fg(white)Wednesday 26 Aug 2009 07:51:40 PM
    Conky: MPD error: problems getting a response from "localhost" on port 6600 : Operation now in progress
    ^fg(white) | ^fg(white)Wednesday 26 Aug 2009 07:51:41 PM
    Conky: MPD error: problems getting a response from "localhost" on port 6600 : Operation now in progress
    ^fg(white) | ^fg(white)Wednesday 26 Aug 2009 07:51:42 PM
    Conky: MPD error: problems getting a response from "localhost" on port 6600 : Operation now in progress
    ^fg(white) | ^fg(white)Wednesday 26 Aug 2009 07:51:43 PM
    Conky: MPD error: problems getting a response from "localhost" on port 6600 : Operation now in progress
    ^fg(white) | ^fg(white)Wednesday 26 Aug 2009 07:51:44 PM
    Conky: MPD error: problems getting a response from "localhost" on port 6600 : Operation now in progress
    ^fg(white) | ^fg(white)Wednesday 26 Aug 2009 07:51:45 PM
    Conky: MPD error: problems getting a response from "localhost" on port 6600 : Operation now in progress
    I never had this problem until I started messing with all this new stuff... I'll check mpd in a different environment and see what happens. In the meantime, here is my relevant mpd.conf, although nothing has changed since it was working (a quick sanity check on mpd itself is sketched right after it):
    music_directory "/mnt/storage/Music"
    playlist_directory "/home/heleos/.mpd/playlists/"
    db_file "/home/heleos/.mpd/mpd.db"
    log_file "/home/heleos/.mpd/mpd.log"
    error_file "/home/heleos/.mpd/mpd.error"
    pid_file "/home/heleos/.mpd/mpd.pid"
    state_file "/home/heleos/.mpd/mpdstate"
    user "heleos"
    port "6600"
    audio_output {
    type "oss"
    name "My OSS Device"
    # device "/dev/dsp" # optional
    # format "44100:16:2" # optional
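    A quick way to rule conky out while testing is to talk to mpd directly (a rough sketch, assuming the default localhost:6600):
    mpc status                         # does mpd itself still answer?
    nc -z localhost 6600 && echo open  # does anything accept connections on the port? (nc flags vary between netcat flavours)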

    CoolWhip, you'd better open a thread of your own and refer back to this one. You will have more control over it, and it will be more current.
    (You can even mark your thread as [SOLVED] if you get that far.)
    Closing this one.

  • LVM Volumes not available after update

    Hi All!
    I hadn't updated my system for about two months, and today I finally did. Now I have the problem that I cannot boot properly. I have my root partition in an LVM volume and on boot I get the message
    ERROR: device 'UUID=xxx' not found. Skipping fs
    ERROR: Unable to find root device 'UUID=xxx'
    After that I land in the recovery shell. After some research I found that "lvm lvdisplay" showed my volumes were not available, and I had to re-enable them with "lvm vgchange -a y".
    Issuing any lvm command also produced the following warning:
    WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!
    Anyway, after issuing the commands and exiting the recovery shell, the system booted again. However, I would prefer to be able to boot without manual intervention.
    Thanks in advance!
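    For reference, this is what gets the system up from the rescue shell each time (my VG is called ArchLVM, see the vgdisplay output below):
    lvm lvdisplay                # LV Status shows 'not available'
    lvm vgchange -a y ArchLVM    # re-activate the logical volumes
    exit                         # leave the rescue shell and continue booting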
    Further information:
    vgdisplay
    --- Volume group ---
    VG Name ArchLVM
    System ID
    Format lvm2
    Metadata Areas 1
    Metadata Sequence No 3
    VG Access read/write
    VG Status resizable
    MAX LV 0
    Cur LV 2
    Open LV 1
    Max PV 0
    Cur PV 1
    Act PV 1
    VG Size 232.69 GiB
    PE Size 4.00 MiB
    Total PE 59568
    Alloc PE / Size 59568 / 232.69 GiB
    Free PE / Size 0 / 0
    VG UUID SoB3M1-v1fD-1abI-PNJ3-6IOn-FfdI-0RoLK5
    lvdisplay (LV Status was 'not available' right after booting)
    --- Logical volume ---
    LV Path /dev/ArchLVM/Swap
    LV Name Swap
    VG Name ArchLVM
    LV UUID XRYBrz-LojR-k6SD-XIxV-wHnY-f3VG-giKL6V
    LV Write Access read/write
    LV Creation host, time archiso, 2014-05-16 14:43:06 +0200
    LV Status available
    # open 0
    LV Size 8.00 GiB
    Current LE 2048
    Segments 1
    Allocation inherit
    Read ahead sectors auto
    - currently set to 256
    Block device 254:0
    --- Logical volume ---
    LV Path /dev/ArchLVM/Root
    LV Name Root
    VG Name ArchLVM
    LV UUID lpjDl4-Jqzu-ZWkq-Uphc-IaOo-6Rzd-cIh5yv
    LV Write Access read/write
    LV Creation host, time archiso, 2014-05-16 14:43:27 +0200
    LV Status available
    # open 1
    LV Size 224.69 GiB
    Current LE 57520
    Segments 1
    Allocation inherit
    Read ahead sectors auto
    - currently set to 256
    Block device 254:1
    /etc/fstab
    # /etc/fstab: static file system information
    # <file system> <dir> <type> <options> <dump> <pass>
    # /dev/mapper/ArchLVM-Root
    UUID=2db82d1a-47a4-4e30-a819-143e8fb75199 / ext4 rw,relatime,data=ordered 0 1
    #/dev/mapper/ArchLVM-Root / ext4 rw,relatime,data=ordered 0 1
    # /dev/sda1
    UUID=72691888-a781-4cdd-a98e-2613d87925d0 /boot ext2 rw,relatime 0 2
    /etc/mkinitcpio.conf
    # vim:set ft=sh
    # MODULES
    # The following modules are loaded before any boot hooks are
    # run. Advanced users may wish to specify all system modules
    # in this array. For instance:
    # MODULES="piix ide_disk reiserfs"
    MODULES=""
    # BINARIES
    # This setting includes any additional binaries a given user may
    # wish into the CPIO image. This is run last, so it may be used to
    # override the actual binaries included by a given hook
    # BINARIES are dependency parsed, so you may safely ignore libraries
    BINARIES=""
    # FILES
    # This setting is similar to BINARIES above, however, files are added
    # as-is and are not parsed in any way. This is useful for config files.
    FILES=""
    # HOOKS
    # This is the most important setting in this file. The HOOKS control the
    # modules and scripts added to the image, and what happens at boot time.
    # Order is important, and it is recommended that you do not change the
    # order in which HOOKS are added. Run 'mkinitcpio -H <hook name>' for
    # help on a given hook.
    # 'base' is _required_ unless you know precisely what you are doing.
    # 'udev' is _required_ in order to automatically load modules
    # 'filesystems' is _required_ unless you specify your fs modules in MODULES
    # Examples:
    ## This setup specifies all modules in the MODULES setting above.
    ## No raid, lvm2, or encrypted root is needed.
    # HOOKS="base"
    ## This setup will autodetect all modules for your system and should
    ## work as a sane default
    # HOOKS="base udev autodetect block filesystems"
    ## This setup will generate a 'full' image which supports most systems.
    ## No autodetection is done.
    # HOOKS="base udev block filesystems"
    ## This setup assembles a pata mdadm array with an encrypted root FS.
    ## Note: See 'mkinitcpio -H mdadm' for more information on raid devices.
    # HOOKS="base udev block mdadm encrypt filesystems"
    ## This setup loads an lvm2 volume group on a usb device.
    # HOOKS="base udev block lvm2 filesystems"
    ## NOTE: If you have /usr on a separate partition, you MUST include the
    # usr, fsck and shutdown hooks.
    HOOKS="base udev autodetect modconf block lvm2 filesystems keyboard fsck"
    # COMPRESSION
    # Use this to compress the initramfs image. By default, gzip compression
    # is used. Use 'cat' to create an uncompressed image.
    #COMPRESSION="gzip"
    #COMPRESSION="bzip2"
    #COMPRESSION="lzma"
    #COMPRESSION="xz"
    #COMPRESSION="lzop"
    #COMPRESSION="lz4"
    # COMPRESSION_OPTIONS
    # Additional options for the compressor
    #COMPRESSION_OPTIONS=""
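    (The lvm2 hook is already present in HOOKS above; one thing still worth trying after an update like this is regenerating the initramfs so the updated hooks actually end up in the image -- a sketch, assuming the stock 'linux' preset:)
    mkinitcpio -p linux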
    /boot/grub/grub.cfg
    # DO NOT EDIT THIS FILE
    # It is automatically generated by grub-mkconfig using templates
    # from /etc/grub.d and settings from /etc/default/grub
    ### BEGIN /etc/grub.d/00_header ###
    insmod part_gpt
    insmod part_msdos
    if [ -s $prefix/grubenv ]; then
    load_env
    fi
    if [ "${next_entry}" ] ; then
    set default="${next_entry}"
    set next_entry=
    save_env next_entry
    set boot_once=true
    else
    set default="0"
    fi
    if [ x"${feature_menuentry_id}" = xy ]; then
    menuentry_id_option="--id"
    else
    menuentry_id_option=""
    fi
    export menuentry_id_option
    if [ "${prev_saved_entry}" ]; then
    set saved_entry="${prev_saved_entry}"
    save_env saved_entry
    set prev_saved_entry=
    save_env prev_saved_entry
    set boot_once=true
    fi
    function savedefault {
    if [ -z "${boot_once}" ]; then
    saved_entry="${chosen}"
    save_env saved_entry
    fi
    function load_video {
    if [ x$feature_all_video_module = xy ]; then
    insmod all_video
    else
    insmod efi_gop
    insmod efi_uga
    insmod ieee1275_fb
    insmod vbe
    insmod vga
    insmod video_bochs
    insmod video_cirrus
    fi
    if [ x$feature_default_font_path = xy ] ; then
    font=unicode
    else
    insmod part_msdos
    insmod lvm
    insmod ext2
    set root='lvmid/SoB3M1-v1fD-1abI-PNJ3-6IOn-FfdI-0RoLK5/lpjDl4-Jqzu-ZWkq-Uphc-IaOo-6Rzd-cIh5yv'
    if [ x$feature_platform_search_hint = xy ]; then
    search --no-floppy --fs-uuid --set=root --hint='lvmid/SoB3M1-v1fD-1abI-PNJ3-6IOn-FfdI-0RoLK5/lpjDl4-Jqzu-ZWkq-Uphc-IaOo-6Rzd-cIh5yv' 2db82d1a-47a4-4e30-a819-143e8fb75199
    else
    search --no-floppy --fs-uuid --set=root 2db82d1a-47a4-4e30-a819-143e8fb75199
    fi
    font="/usr/share/grub/unicode.pf2"
    fi
    if loadfont $font ; then
    set gfxmode=auto
    load_video
    insmod gfxterm
    fi
    terminal_input console
    terminal_output gfxterm
    if [ x$feature_timeout_style = xy ] ; then
    set timeout_style=menu
    set timeout=5
    # Fallback normal timeout code in case the timeout_style feature is
    # unavailable.
    else
    set timeout=5
    fi
    ### END /etc/grub.d/00_header ###
    ### BEGIN /etc/grub.d/10_linux ###
    menuentry 'Arch Linux' --class arch --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-2db82d1a-47a4-4e30-a819-143e8fb75199' {
    load_video
    set gfxpayload=keep
    insmod gzio
    insmod part_msdos
    insmod ext2
    set root='hd0,msdos1'
    if [ x$feature_platform_search_hint = xy ]; then
    search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos1 --hint-efi=hd0,msdos1 --hint-baremetal=ahci0,msdos1 72691888-a781-4cdd-a98e-2613d87925d0
    else
    search --no-floppy --fs-uuid --set=root 72691888-a781-4cdd-a98e-2613d87925d0
    fi
    echo 'Loading Linux linux ...'
    linux /vmlinuz-linux root=UUID=2db82d1a-47a4-4e30-a819-143e8fb75199 rw quiet
    echo 'Loading initial ramdisk ...'
    initrd /initramfs-linux.img
    submenu 'Advanced options for Arch Linux' $menuentry_id_option 'gnulinux-advanced-2db82d1a-47a4-4e30-a819-143e8fb75199' {
    menuentry 'Arch Linux, with Linux linux' --class arch --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-linux-advanced-2db82d1a-47a4-4e30-a819-143e8fb75199' {
    load_video
    set gfxpayload=keep
    insmod gzio
    insmod part_msdos
    insmod ext2
    set root='hd0,msdos1'
    if [ x$feature_platform_search_hint = xy ]; then
    search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos1 --hint-efi=hd0,msdos1 --hint-baremetal=ahci0,msdos1 72691888-a781-4cdd-a98e-2613d87925d0
    else
    search --no-floppy --fs-uuid --set=root 72691888-a781-4cdd-a98e-2613d87925d0
    fi
    echo 'Loading Linux linux ...'
    linux /vmlinuz-linux root=UUID=2db82d1a-47a4-4e30-a819-143e8fb75199 rw quiet
    echo 'Loading initial ramdisk ...'
    initrd /initramfs-linux.img
    menuentry 'Arch Linux, with Linux linux (fallback initramfs)' --class arch --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-linux-fallback-2db82d1a-47a4-4e30-a819-143e8fb75199' {
    load_video
    set gfxpayload=keep
    insmod gzio
    insmod part_msdos
    insmod ext2
    set root='hd0,msdos1'
    if [ x$feature_platform_search_hint = xy ]; then
    search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos1 --hint-efi=hd0,msdos1 --hint-baremetal=ahci0,msdos1 72691888-a781-4cdd-a98e-2613d87925d0
    else
    search --no-floppy --fs-uuid --set=root 72691888-a781-4cdd-a98e-2613d87925d0
    fi
    echo 'Loading Linux linux ...'
    linux /vmlinuz-linux root=UUID=2db82d1a-47a4-4e30-a819-143e8fb75199 rw quiet
    echo 'Loading initial ramdisk ...'
    initrd /initramfs-linux-fallback.img
    ### END /etc/grub.d/10_linux ###
    ### BEGIN /etc/grub.d/20_linux_xen ###
    ### END /etc/grub.d/20_linux_xen ###
    ### BEGIN /etc/grub.d/30_os-prober ###
    ### END /etc/grub.d/30_os-prober ###
    ### BEGIN /etc/grub.d/40_custom ###
    # This file provides an easy way to add custom menu entries. Simply type the
    # menu entries you want to add after this comment. Be careful not to change
    # the 'exec tail' line above.
    ### END /etc/grub.d/40_custom ###
    ### BEGIN /etc/grub.d/41_custom ###
    if [ -f ${config_directory}/custom.cfg ]; then
    source ${config_directory}/custom.cfg
    elif [ -z "${config_directory}" -a -f $prefix/custom.cfg ]; then
    source $prefix/custom.cfg;
    fi
    ### END /etc/grub.d/41_custom ###
    ### BEGIN /etc/grub.d/60_memtest86+ ###
    ### END /etc/grub.d/60_memtest86+ ###
    Last edited by Kirodema (2014-07-16 07:31:34)

    use_lvmetad = 0
    lvm2-lvmetad is not enabled or running on my system. Shall I activate it?
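    (If it turns out it should be enabled: on a systemd setup lvmetad is normally socket-activated, and use_lvmetad = 1 would also have to be set in lvm.conf for the tools to actually use it. Just a sketch, not a recommendation either way:)
    systemctl enable lvm2-lvmetad.socket
    systemctl start lvm2-lvmetad.socket
    For reference, the full lvm.conf is below: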
    # This is an example configuration file for the LVM2 system.
    # It contains the default settings that would be used if there was no
    # /etc/lvm/lvm.conf file.
    # Refer to 'man lvm.conf' for further information including the file layout.
    # To put this file in a different directory and override /etc/lvm set
    # the environment variable LVM_SYSTEM_DIR before running the tools.
    # N.B. Take care that each setting only appears once if uncommenting
    # example settings in this file.
    # This section allows you to set the way the configuration settings are handled.
    config {
    # If enabled, any LVM2 configuration mismatch is reported.
    # This implies checking that the configuration key is understood
    # by LVM2 and that the value of the key is of a proper type.
    # If disabled, any configuration mismatch is ignored and default
    # value is used instead without any warning (a message about the
    # configuration key not being found is issued in verbose mode only).
    checks = 1
    # If enabled, any configuration mismatch aborts the LVM2 process.
    abort_on_errors = 0
    # Directory where LVM looks for configuration profiles.
    profile_dir = "/etc/lvm/profile"
    # This section allows you to configure which block devices should
    # be used by the LVM system.
    devices {
    # Where do you want your volume groups to appear ?
    dir = "/dev"
    # An array of directories that contain the device nodes you wish
    # to use with LVM2.
    scan = [ "/dev" ]
    # If set, the cache of block device nodes with all associated symlinks
    # will be constructed out of the existing udev database content.
    # This avoids using and opening any inapplicable non-block devices or
    # subdirectories found in the device directory. This setting is applied
    # to udev-managed device directory only, other directories will be scanned
    # fully. LVM2 needs to be compiled with udev support for this setting to
    # take effect. N.B. Any device node or symlink not managed by udev in
    # udev directory will be ignored with this setting on.
    obtain_device_list_from_udev = 1
    # If several entries in the scanned directories correspond to the
    # same block device and the tools need to display a name for device,
    # all the pathnames are matched against each item in the following
    # list of regular expressions in turn and the first match is used.
    preferred_names = [ ]
    # Try to avoid using undescriptive /dev/dm-N names, if present.
    # preferred_names = [ "^/dev/mpath/", "^/dev/mapper/mpath", "^/dev/[hs]d" ]
    # A filter that tells LVM2 to only use a restricted set of devices.
    # The filter consists of an array of regular expressions. These
    # expressions can be delimited by a character of your choice, and
    # prefixed with either an 'a' (for accept) or 'r' (for reject).
    # The first expression found to match a device name determines if
    # the device will be accepted or rejected (ignored). Devices that
    # don't match any patterns are accepted.
    # Be careful if there are symbolic links or multiple filesystem
    # entries for the same device as each name is checked separately against
    # the list of patterns. The effect is that if the first pattern in the
    # list to match a name is an 'a' pattern for any of the names, the device
    # is accepted; otherwise if the first pattern in the list to match a name
    # is an 'r' pattern for any of the names it is rejected; otherwise it is
    # accepted.
    # Don't have more than one filter line active at once: only one gets used.
    # Run vgscan after you change this parameter to ensure that
    # the cache file gets regenerated (see below).
    # If it doesn't do what you expect, check the output of 'vgscan -vvvv'.
    # If lvmetad is used, then see "A note about device filtering while
    # lvmetad is used" comment that is attached to global/use_lvmetad setting.
    # By default we accept every block device:
    filter = [ "a/.*/" ]
    # Exclude the cdrom drive
    # filter = [ "r|/dev/cdrom|" ]
    # When testing I like to work with just loopback devices:
    # filter = [ "a/loop/", "r/.*/" ]
    # Or maybe all loops and ide drives except hdc:
    # filter =[ "a|loop|", "r|/dev/hdc|", "a|/dev/ide|", "r|.*|" ]
    # Use anchors if you want to be really specific
    # filter = [ "a|^/dev/hda8$|", "r/.*/" ]
    # Since "filter" is often overridden from command line, it is not suitable
    # for system-wide device filtering (udev rules, lvmetad). To hide devices
    # from LVM-specific udev processing and/or from lvmetad, you need to set
    # global_filter. The syntax is the same as for normal "filter"
    # above. Devices that fail the global_filter are not even opened by LVM.
    # global_filter = []
    # The results of the filtering are cached on disk to avoid
    # rescanning dud devices (which can take a very long time).
    # By default this cache is stored in the /etc/lvm/cache directory
    # in a file called '.cache'.
    # It is safe to delete the contents: the tools regenerate it.
    # (The old setting 'cache' is still respected if neither of
    # these new ones is present.)
    # N.B. If obtain_device_list_from_udev is set to 1 the list of
    # devices is instead obtained from udev and any existing .cache
    # file is removed.
    cache_dir = "/etc/lvm/cache"
    cache_file_prefix = ""
    # You can turn off writing this cache file by setting this to 0.
    write_cache_state = 1
    # Advanced settings.
    # List of pairs of additional acceptable block device types found
    # in /proc/devices with maximum (non-zero) number of partitions.
    # types = [ "fd", 16 ]
    # If sysfs is mounted (2.6 kernels) restrict device scanning to
    # the block devices it believes are valid.
    # 1 enables; 0 disables.
    sysfs_scan = 1
    # By default, LVM2 will ignore devices used as component paths
    # of device-mapper multipath devices.
    # 1 enables; 0 disables.
    multipath_component_detection = 1
    # By default, LVM2 will ignore devices used as components of
    # software RAID (md) devices by looking for md superblocks.
    # 1 enables; 0 disables.
    md_component_detection = 1
    # By default, if a PV is placed directly upon an md device, LVM2
    # will align its data blocks with the md device's stripe-width.
    # 1 enables; 0 disables.
    md_chunk_alignment = 1
    # Default alignment of the start of a data area in MB. If set to 0,
    # a value of 64KB will be used. Set to 1 for 1MiB, 2 for 2MiB, etc.
    # default_data_alignment = 1
    # By default, the start of a PV's data area will be a multiple of
    # the 'minimum_io_size' or 'optimal_io_size' exposed in sysfs.
    # - minimum_io_size - the smallest request the device can perform
    # w/o incurring a read-modify-write penalty (e.g. MD's chunk size)
    # - optimal_io_size - the device's preferred unit of receiving I/O
    # (e.g. MD's stripe width)
    # minimum_io_size is used if optimal_io_size is undefined (0).
    # If md_chunk_alignment is enabled, that detects the optimal_io_size.
    # This setting takes precedence over md_chunk_alignment.
    # 1 enables; 0 disables.
    data_alignment_detection = 1
    # Alignment (in KB) of start of data area when creating a new PV.
    # md_chunk_alignment and data_alignment_detection are disabled if set.
    # Set to 0 for the default alignment (see: data_alignment_default)
    # or page size, if larger.
    data_alignment = 0
    # By default, the start of the PV's aligned data area will be shifted by
    # the 'alignment_offset' exposed in sysfs. This offset is often 0 but
    # may be non-zero; e.g.: certain 4KB sector drives that compensate for
    # windows partitioning will have an alignment_offset of 3584 bytes
    # (sector 7 is the lowest aligned logical block, the 4KB sectors start
    # at LBA -1, and consequently sector 63 is aligned on a 4KB boundary).
    # But note that pvcreate --dataalignmentoffset will skip this detection.
    # 1 enables; 0 disables.
    data_alignment_offset_detection = 1
    # If, while scanning the system for PVs, LVM2 encounters a device-mapper
    # device that has its I/O suspended, it waits for it to become accessible.
    # Set this to 1 to skip such devices. This should only be needed
    # in recovery situations.
    ignore_suspended_devices = 0
    # ignore_lvm_mirrors: Introduced in version 2.02.104
    # This setting determines whether logical volumes of "mirror" segment
    # type are scanned for LVM labels. This affects the ability of
    # mirrors to be used as physical volumes. If 'ignore_lvm_mirrors'
    # is set to '1', it becomes impossible to create volume groups on top
    # of mirror logical volumes - i.e. to stack volume groups on mirrors.
    # Allowing mirror logical volumes to be scanned (setting the value to '0')
    # can potentially cause LVM processes and I/O to the mirror to become
    # blocked. This is due to the way that the "mirror" segment type handles
    # failures. In order for the hang to manifest itself, an LVM command must
    # be run just after a failure and before the automatic LVM repair process
    # takes place OR there must be failures in multiple mirrors in the same
    # volume group at the same time with write failures occurring moments
    # before a scan of the mirror's labels.
    # Note that these scanning limitations do not apply to the LVM RAID
    # types, like "raid1". The RAID segment types handle failures in a
    # different way and are not subject to possible process or I/O blocking.
    # It is encouraged that users set 'ignore_lvm_mirrors' to 1 if they
    # are using the "mirror" segment type. Users that require volume group
    # stacking on mirrored logical volumes should consider using the "raid1"
    # segment type. The "raid1" segment type is not available for
    # active/active clustered volume groups.
    # Set to 1 to disallow stacking and thereby avoid a possible deadlock.
    ignore_lvm_mirrors = 1
    # During each LVM operation errors received from each device are counted.
    # If the counter of a particular device exceeds the limit set here, no
    # further I/O is sent to that device for the remainder of the respective
    # operation. Setting the parameter to 0 disables the counters altogether.
    disable_after_error_count = 0
    # Allow use of pvcreate --uuid without requiring --restorefile.
    require_restorefile_with_uuid = 1
    # Minimum size (in KB) of block devices which can be used as PVs.
    # In a clustered environment all nodes must use the same value.
    # Any value smaller than 512KB is ignored.
    # Ignore devices smaller than 2MB such as floppy drives.
    pv_min_size = 2048
    # The original built-in setting was 512 up to and including version 2.02.84.
    # pv_min_size = 512
    # Issue discards to a logical volumes's underlying physical volume(s) when
    # the logical volume is no longer using the physical volumes' space (e.g.
    # lvremove, lvreduce, etc). Discards inform the storage that a region is
    # no longer in use. Storage that supports discards advertise the protocol
    # specific way discards should be issued by the kernel (TRIM, UNMAP, or
    # WRITE SAME with UNMAP bit set). Not all storage will support or benefit
    # from discards but SSDs and thinly provisioned LUNs generally do. If set
    # to 1, discards will only be issued if both the storage and kernel provide
    # support.
    # 1 enables; 0 disables.
    issue_discards = 0
    # This section allows you to configure the way in which LVM selects
    # free space for its Logical Volumes.
    allocation {
    # When searching for free space to extend an LV, the "cling"
    # allocation policy will choose space on the same PVs as the last
    # segment of the existing LV. If there is insufficient space and a
    # list of tags is defined here, it will check whether any of them are
    # attached to the PVs concerned and then seek to match those PV tags
    # between existing extents and new extents.
    # Use the special tag "@*" as a wildcard to match any PV tag.
    # Example: LVs are mirrored between two sites within a single VG.
    # PVs are tagged with either @site1 or @site2 to indicate where
    # they are situated.
    # cling_tag_list = [ "@site1", "@site2" ]
    # cling_tag_list = [ "@*" ]
    # Changes made in version 2.02.85 extended the reach of the 'cling'
    # policies to detect more situations where data can be grouped
    # onto the same disks. Set this to 0 to revert to the previous
    # algorithm.
    maximise_cling = 1
    # Whether to use blkid library instead of native LVM2 code to detect
    # any existing signatures while creating new Physical Volumes and
    # Logical Volumes. LVM2 needs to be compiled with blkid wiping support
    # for this setting to take effect.
    # LVM2 native detection code is currently able to recognize these signatures:
    # - MD device signature
    # - swap signature
    # - LUKS signature
    # To see the list of signatures recognized by blkid, check the output
    # of 'blkid -k' command. The blkid can recognize more signatures than
    # LVM2 native detection code, but due to this higher number of signatures
    # to be recognized, it can take more time to complete the signature scan.
    use_blkid_wiping = 1
    # Set to 1 to wipe any signatures found on newly-created Logical Volumes
    # automatically in addition to zeroing of the first KB on the LV
    # (controlled by the -Z/--zero y option).
    # The command line option -W/--wipesignatures takes precedence over this
    # setting.
    # The default is to wipe signatures when zeroing.
    wipe_signatures_when_zeroing_new_lvs = 1
    # Set to 1 to guarantee that mirror logs will always be placed on
    # different PVs from the mirror images. This was the default
    # until version 2.02.85.
    mirror_logs_require_separate_pvs = 0
    # Set to 1 to guarantee that cache_pool metadata will always be
    # placed on different PVs from the cache_pool data.
    cache_pool_metadata_require_separate_pvs = 0
    # Specify the minimal chunk size (in kiB) for cache pool volumes.
    # Using a chunk_size that is too large can result in wasteful use of
    # the cache, where small reads and writes can cause large sections of
    # an LV to be mapped into the cache. However, choosing a chunk_size
    # that is too small can result in more overhead trying to manage the
    # numerous chunks that become mapped into the cache. The former is
    # more of a problem than the latter in most cases, so we default to
    # a value that is on the smaller end of the spectrum. Supported values
    # range from 32(kiB) to 1048576 in multiples of 32.
    # cache_pool_chunk_size = 64
    # Set to 1 to guarantee that thin pool metadata will always
    # be placed on different PVs from the pool data.
    thin_pool_metadata_require_separate_pvs = 0
    # Specify chunk size calculation policy for thin pool volumes.
    # Possible options are:
    # "generic" - if thin_pool_chunk_size is defined, use it.
    # Otherwise, calculate the chunk size based on
    # estimation and device hints exposed in sysfs:
    # the minimum_io_size. The chunk size is always
    # at least 64KiB.
    # "performance" - if thin_pool_chunk_size is defined, use it.
    # Otherwise, calculate the chunk size for
    # performance based on device hints exposed in
    # sysfs: the optimal_io_size. The chunk size is
    # always at least 512KiB.
    # thin_pool_chunk_size_policy = "generic"
    # Specify the minimal chunk size (in KB) for thin pool volumes.
    # Use of the larger chunk size may improve performance for plain
    # thin volumes, however using them for snapshot volumes is less efficient,
    # as it consumes more space and takes extra time for copying.
    # When unset, lvm tries to estimate chunk size starting from 64KB
    # Supported values are in range from 64 to 1048576.
    # thin_pool_chunk_size = 64
    # Specify discards behaviour of the thin pool volume.
    # Select one of "ignore", "nopassdown", "passdown"
    # thin_pool_discards = "passdown"
    # Set to 0, to disable zeroing of thin pool data chunks before their
    # first use.
    # N.B. zeroing larger thin pool chunk size degrades performance.
    # thin_pool_zero = 1
    # This section that allows you to configure the nature of the
    # information that LVM2 reports.
    log {
    # Controls the messages sent to stdout or stderr.
    # There are three levels of verbosity, 3 being the most verbose.
    verbose = 0
    # Set to 1 to suppress all non-essential messages from stdout.
    # This has the same effect as -qq.
    # When this is set, the following commands still produce output:
    # dumpconfig, lvdisplay, lvmdiskscan, lvs, pvck, pvdisplay,
    # pvs, version, vgcfgrestore -l, vgdisplay, vgs.
    # Non-essential messages are shifted from log level 4 to log level 5
    # for syslog and lvm2_log_fn purposes.
    # Any 'yes' or 'no' questions not overridden by other arguments
    # are suppressed and default to 'no'.
    silent = 0
    # Should we send log messages through syslog?
    # 1 is yes; 0 is no.
    syslog = 1
    # Should we log error and debug messages to a file?
    # By default there is no log file.
    #file = "/var/log/lvm2.log"
    # Should we overwrite the log file each time the program is run?
    # By default we append.
    overwrite = 0
    # What level of log messages should we send to the log file and/or syslog?
    # There are 6 syslog-like log levels currently in use - 2 to 7 inclusive.
    # 7 is the most verbose (LOG_DEBUG).
    level = 0
    # Format of output messages
    # Whether or not (1 or 0) to indent messages according to their severity
    indent = 1
    # Whether or not (1 or 0) to display the command name on each line output
    command_names = 0
    # A prefix to use before the message text (but after the command name,
    # if selected). Default is two spaces, so you can see/grep the severity
    # of each message.
    prefix = " "
    # To make the messages look similar to the original LVM tools use:
    # indent = 0
    # command_names = 1
    # prefix = " -- "
    # Set this if you want log messages during activation.
    # Don't use this in low memory situations (can deadlock).
    # activation = 0
    # Some debugging messages are assigned to a class and only appear
    # in debug output if the class is listed here.
    # Classes currently available:
    # memory, devices, activation, allocation, lvmetad, metadata, cache,
    # locking
    # Use "all" to see everything.
    debug_classes = [ "memory", "devices", "activation", "allocation",
    "lvmetad", "metadata", "cache", "locking" ]
    # Configuration of metadata backups and archiving. In LVM2 when we
    # talk about a 'backup' we mean making a copy of the metadata for the
    # *current* system. The 'archive' contains old metadata configurations.
    # Backups are stored in a human readable text format.
    backup {
    # Should we maintain a backup of the current metadata configuration ?
    # Use 1 for Yes; 0 for No.
    # Think very hard before turning this off!
    backup = 1
    # Where shall we keep it ?
    # Remember to back up this directory regularly!
    backup_dir = "/etc/lvm/backup"
    # Should we maintain an archive of old metadata configurations.
    # Use 1 for Yes; 0 for No.
    # On by default. Think very hard before turning this off.
    archive = 1
    # Where should archived files go ?
    # Remember to back up this directory regularly!
    archive_dir = "/etc/lvm/archive"
    # What is the minimum number of archive files you wish to keep ?
    retain_min = 10
    # What is the minimum time you wish to keep an archive file for ?
    retain_days = 30
    # Settings for the running LVM2 in shell (readline) mode.
    shell {
    # Number of lines of history to store in ~/.lvm_history
    history_size = 100
    # Miscellaneous global LVM2 settings
    global {
    # The file creation mask for any files and directories created.
    # Interpreted as octal if the first digit is zero.
    umask = 077
    # Allow other users to read the files
    #umask = 022
    # Enabling test mode means that no changes to the on disk metadata
    # will be made. Equivalent to having the -t option on every
    # command. Defaults to off.
    test = 0
    # Default value for --units argument
    units = "h"
    # Since version 2.02.54, the tools distinguish between powers of
    # 1024 bytes (e.g. KiB, MiB, GiB) and powers of 1000 bytes (e.g.
    # KB, MB, GB).
    # If you have scripts that depend on the old behaviour, set this to 0
    # temporarily until you update them.
    si_unit_consistency = 1
    # Whether or not to display unit suffix for sizes. This setting has
    # no effect if the units are in human-readable form (global/units="h")
    # in which case the suffix is always displayed.
    suffix = 1
    # Whether or not to communicate with the kernel device-mapper.
    # Set to 0 if you want to use the tools to manipulate LVM metadata
    # without activating any logical volumes.
    # If the device-mapper kernel driver is not present in your kernel
    # setting this to 0 should suppress the error messages.
    activation = 1
    # If we can't communicate with device-mapper, should we try running
    # the LVM1 tools?
    # This option only applies to 2.4 kernels and is provided to help you
    # switch between device-mapper kernels and LVM1 kernels.
    # The LVM1 tools need to be installed with .lvm1 suffices
    # e.g. vgscan.lvm1 and they will stop working after you start using
    # the new lvm2 on-disk metadata format.
    # The default value is set when the tools are built.
    # fallback_to_lvm1 = 0
    # The default metadata format that commands should use - "lvm1" or "lvm2".
    # The command line override is -M1 or -M2.
    # Defaults to "lvm2".
    # format = "lvm2"
    # Location of proc filesystem
    proc = "/proc"
    # Type of locking to use. Defaults to local file-based locking (1).
    # Turn locking off by setting to 0 (dangerous: risks metadata corruption
    # if LVM2 commands get run concurrently).
    # Type 2 uses the external shared library locking_library.
    # Type 3 uses built-in clustered locking.
    # Type 4 uses read-only locking which forbids any operations that might
    # change metadata.
    # N.B. Don't use lvmetad with locking type 3 as lvmetad is not yet
    # supported in clustered environment. If use_lvmetad=1 and locking_type=3
    # is set at the same time, LVM always issues a warning message about this
    # and then it automatically disables lvmetad use.
    locking_type = 1
    # Set to 0 to fail when a lock request cannot be satisfied immediately.
    wait_for_locks = 1
    # If using external locking (type 2) and initialisation fails,
    # with this set to 1 an attempt will be made to use the built-in
    # clustered locking.
    # If you are using a customised locking_library you should set this to 0.
    fallback_to_clustered_locking = 1
    # If an attempt to initialise type 2 or type 3 locking failed, perhaps
    # because cluster components such as clvmd are not running, with this set
    # to 1 an attempt will be made to use local file-based locking (type 1).
    # If this succeeds, only commands against local volume groups will proceed.
    # Volume Groups marked as clustered will be ignored.
    fallback_to_local_locking = 1
    # Local non-LV directory that holds file-based locks while commands are
    # in progress. A directory like /tmp that may get wiped on reboot is OK.
    locking_dir = "/run/lock/lvm"
    # Whenever there are competing read-only and read-write access requests for
    # a volume group's metadata, instead of always granting the read-only
    # requests immediately, delay them to allow the read-write requests to be
    # serviced. Without this setting, write access may be stalled by a high
    # volume of read-only requests.
    # NB. This option only affects locking_type = 1 viz. local file-based
    # locking.
    prioritise_write_locks = 1
    # Other entries can go here to allow you to load shared libraries
    # e.g. if support for LVM1 metadata was compiled as a shared library use
    # format_libraries = "liblvm2format1.so"
    # Full pathnames can be given.
    # Search this directory first for shared libraries.
    # library_dir = "/lib"
    # The external locking library to load if locking_type is set to 2.
    # locking_library = "liblvm2clusterlock.so"
    # Treat any internal errors as fatal errors, aborting the process that
    # encountered the internal error. Please only enable for debugging.
    abort_on_internal_errors = 0
    # Check whether CRC is matching when parsed VG is used multiple times.
    # This is useful to catch unexpected internal cached volume group
    # structure modification. Please only enable for debugging.
    detect_internal_vg_cache_corruption = 0
    # If set to 1, no operations that change on-disk metadata will be permitted.
    # Additionally, read-only commands that encounter metadata in need of repair
    # will still be allowed to proceed exactly as if the repair had been
    # performed (except for the unchanged vg_seqno).
    # Inappropriate use could mess up your system, so seek advice first!
    metadata_read_only = 0
    # 'mirror_segtype_default' defines which segtype will be used when the
    # shorthand '-m' option is used for mirroring. The possible options are:
    # "mirror" - The original RAID1 implementation provided by LVM2/DM. It is
    # characterized by a flexible log solution (core, disk, mirrored)
    # and by the necessity to block I/O while reconfiguring in the
    # event of a failure.
    # There is an inherent race in the dmeventd failure handling
    # logic with snapshots of devices using this type of RAID1 that
    # in the worst case could cause a deadlock.
    # Ref: https://bugzilla.redhat.com/show_bug.cgi?id=817130#c10
    # "raid1" - This implementation leverages MD's RAID1 personality through
    # device-mapper. It is characterized by a lack of log options.
    # (A log is always allocated for every device and they are placed
    # on the same device as the image - no separate devices are
    # required.) This mirror implementation does not require I/O
    # to be blocked in the kernel in the event of a failure.
    # This mirror implementation is not cluster-aware and cannot be
    # used in a shared (active/active) fashion in a cluster.
    # Specify the '--type <mirror|raid1>' option to override this default
    # setting.
    mirror_segtype_default = "raid1"
    # 'raid10_segtype_default' determines the segment types used by default
    # when the '--stripes/-i' and '--mirrors/-m' arguments are both specified
    # during the creation of a logical volume.
    # Possible settings include:
    # "raid10" - This implementation leverages MD's RAID10 personality through
    # device-mapper.
    # "mirror" - LVM will layer the 'mirror' and 'stripe' segment types. It
    # will do this by creating a mirror on top of striped sub-LVs;
    # effectively creating a RAID 0+1 array. This is suboptimal
    # in terms of providing redundancy and performance. Changing to
    # this setting is not advised.
    # Specify the '--type <raid10|mirror>' option to override this default
    # setting.
    raid10_segtype_default = "raid10"
    # The default format for displaying LV names in lvdisplay was changed
    # in version 2.02.89 to show the LV name and path separately.
    # Previously this was always shown as /dev/vgname/lvname even when that
    # was never a valid path in the /dev filesystem.
    # Set to 1 to reinstate the previous format.
    # lvdisplay_shows_full_device_path = 0
    # Whether to use (trust) a running instance of lvmetad. If this is set to
    # 0, all commands fall back to the usual scanning mechanisms. When set to 1
    # *and* when lvmetad is running (automatically instantiated by making use of
    # systemd's socket-based service activation or run as an initscripts service
    # or run manually), the volume group metadata and PV state flags are obtained
    # from the lvmetad instance and no scanning is done by the individual
    # commands. In a setup with lvmetad, lvmetad udev rules *must* be set up for
    # LVM to work correctly. Without proper udev rules, all changes in block
    # device configuration will be *ignored* until a manual 'pvscan --cache'
    # is performed. These rules are installed by default.
    # If lvmetad has been running while use_lvmetad was 0, it MUST be stopped
    # before changing use_lvmetad to 1 and started again afterwards.
    # If using lvmetad, the volume activation is also switched to automatic
    # event-based mode. In this mode, the volumes are activated based on
    # incoming udev events that automatically inform lvmetad about new PVs
    # that appear in the system. Once the VG is complete (all the PVs are
    # present), it is auto-activated. The activation/auto_activation_volume_list
    # setting controls which volumes are auto-activated (all by default).
    # A note about device filtering while lvmetad is used:
    # When lvmetad is updated (either automatically based on udev events
    # or directly by pvscan --cache <device> call), the devices/filter
    # is ignored and all devices are scanned by default. The lvmetad always
    # keeps unfiltered information which is then provided to LVM commands
    # and then each LVM command does the filtering based on devices/filter
    # setting itself.
    # To prevent scanning devices completely, even when using lvmetad,
    # the devices/global_filter must be used.
    # N.B. Don't use lvmetad with locking type 3 as lvmetad is not yet
    # supported in clustered environment. If use_lvmetad=1 and locking_type=3
    # is set at the same time, LVM always issues a warning message about this
    # and then it automatically disables lvmetad use.
    use_lvmetad = 0
    # Full path of the utility called to check that a thin metadata device
    # is in a state that allows it to be used.
    # Each time a thin pool needs to be activated or after it is deactivated
    # this utility is executed. The activation will only proceed if the utility
    # has an exit status of 0.
    # Set to "" to skip this check. (Not recommended.)
    # The thin tools are available as part of the device-mapper-persistent-data
    # package from https://github.com/jthornber/thin-provisioning-tools.
    # thin_check_executable = "/usr/bin/thin_check"
    # Array of string options passed with thin_check command. By default,
    # option "-q" is for quiet output.
    # With thin_check version 2.1 or newer you can add "--ignore-non-fatal-errors"
    # to let it pass through ignorable errors and fix them later.
    # thin_check_options = [ "-q" ]
    # Full path of the utility called to repair a thin metadata device
    # is in a state that allows it to be used.
    # Each time a thin pool needs repair this utility is executed.
    # See thin_check_executable how to obtain binaries.
    # thin_repair_executable = "/usr/bin/thin_repair"
    # Array of extra string options passed with thin_repair command.
    # thin_repair_options = [ "" ]
    # Full path of the utility called to dump thin metadata content.
    # See thin_check_executable how to obtain binaries.
    # thin_dump_executable = "/usr/bin/thin_dump"
    # If set, given features are not used by thin driver.
    # This can be helpful not just for testing, but i.e. allows to avoid
    # using problematic implementation of some thin feature.
    # Features:
    # block_size
    # discards
    # discards_non_power_2
    # external_origin
    # metadata_resize
    # external_origin_extend
    # thin_disabled_features = [ "discards", "block_size" ]
    activation {
    # Set to 1 to perform internal checks on the operations issued to
    # libdevmapper. Useful for debugging problems with activation.
    # Some of the checks may be expensive, so it's best to use this
    # only when there seems to be a problem.
    checks = 0
    # Set to 0 to disable udev synchronisation (if compiled into the binaries).
    # Processes will not wait for notification from udev.
    # They will continue irrespective of any possible udev processing
    # in the background. You should only use this if udev is not running
    # or has rules that ignore the devices LVM2 creates.
    # The command line argument --nodevsync takes precedence over this setting.
    # If set to 1 when udev is not running, and there are LVM2 processes
    # waiting for udev, run 'dmsetup udevcomplete_all' manually to wake them up.
    udev_sync = 1
    # Set to 0 to disable the udev rules installed by LVM2 (if built with
    # --enable-udev_rules). LVM2 will then manage the /dev nodes and symlinks
    # for active logical volumes directly itself.
    # N.B. Manual intervention may be required if this setting is changed
    # while any logical volumes are active.
    udev_rules = 1
    # Set to 1 for LVM2 to verify operations performed by udev. This turns on
    # additional checks (and if necessary, repairs) on entries in the device
    # directory after udev has completed processing its events.
    # Useful for diagnosing problems with LVM2/udev interactions.
    verify_udev_operations = 0
    # If set to 1 and if deactivation of an LV fails, perhaps because
    # a process run from a quick udev rule temporarily opened the device,
    # retry the operation for a few seconds before failing.
    retry_deactivation = 1
    # How to fill in missing stripes if activating an incomplete volume.
    # Using "error" will make inaccessible parts of the device return
    # I/O errors on access. You can instead use a device path, in which
    # case, that device will be used in place of missing stripes.
    # But note that using anything other than "error" with mirrored
    # or snapshotted volumes is likely to result in data corruption.
    missing_stripe_filler = "error"
    # The linear target is an optimised version of the striped target
    # that only handles a single stripe. Set this to 0 to disable this
    # optimisation and always use the striped target.
    use_linear_target = 1
    # How much stack (in KB) to reserve for use while devices suspended
    # Prior to version 2.02.89 this used to be set to 256KB
    reserved_stack = 64
    # How much memory (in KB) to reserve for use while devices suspended
    reserved_memory = 8192
    # Nice value used while devices suspended
    process_priority = -18
    # If volume_list is defined, each LV is only activated if there is a
    # match against the list.
    # "vgname" and "vgname/lvname" are matched exactly.
    # "@tag" matches any tag set in the LV or VG.
    # "@*" matches if any tag defined on the host is also set in the LV or VG
    # If any host tags exist but volume_list is not defined, a default
    # single-entry list containing "@*" is assumed.
    # volume_list = [ "vg1", "vg2/lvol1", "@tag1", "@*" ]
    # If auto_activation_volume_list is defined, each LV that is to be
    # activated with the autoactivation option (--activate ay/-a ay) is
    # first checked against the list. There are two scenarios in which
    # the autoactivation option is used:
    # - automatic activation of volumes based on incoming PVs. If all the
    # PVs making up a VG are present in the system, the autoactivation
    # is triggered. This requires lvmetad (global/use_lvmetad=1) and udev
    # to be running. In this case, "pvscan --cache -aay" is called
    # automatically without any user intervention while processing
    # udev events. Please, make sure you define auto_activation_volume_list
    # properly so only the volumes you want and expect are autoactivated.
    # - direct activation on command line with the autoactivation option.
    # In this case, the user calls "vgchange --activate ay/-a ay" or
    # "lvchange --activate ay/-a ay" directly.
    # By default, the auto_activation_volume_list is not defined and all
    # volumes will be activated either automatically or by using --activate ay/-a ay.
    # N.B. The "activation/volume_list" is still honoured in all cases so even
    # if the VG/LV passes the auto_activation_volume_list, it still needs to
    # pass the volume_list for it to be activated in the end.
    # If auto_activation_volume_list is defined but empty, no volumes will be
    # activated automatically and --activate ay/-a ay will do nothing.
    # auto_activation_volume_list = []
    # If auto_activation_volume_list is defined and it's not empty, only matching
    # volumes will be activated either automatically or by using --activate ay/-a ay.
    # "vgname" and "vgname/lvname" are matched exactly.
    # "@tag" matches any tag set in the LV or VG.
    # "@*" matches if any tag defined on the host is also set in the LV or VG
    # auto_activation_volume_list = [ "vg1", "vg2/lvol1", "@tag1", "@*" ]
    # If read_only_volume_list is defined, each LV that is to be activated
    # is checked against the list, and if it matches, it is activated
    # in read-only mode. (This overrides '--permission rw' stored in the
    # metadata.)
    # "vgname" and "vgname/lvname" are matched exactly.
    # "@tag" matches any tag set in the LV or VG.
    # "@*" matches if any tag defined on the host is also set in the LV or VG
    # read_only_volume_list = [ "vg1", "vg2/lvol1", "@tag1", "@*" ]
    # Each LV can have an 'activation skip' flag stored persistently against it.
    # During activation, this flag is used to decide whether such an LV is skipped.
    # The 'activation skip' flag can be set during LV creation and by default it
    # is automatically set for thin snapshot LVs. The 'auto_set_activation_skip'
    # enables or disables this automatic setting of the flag while LVs are created.
    # auto_set_activation_skip = 1
    # For RAID or 'mirror' segment types, 'raid_region_size' is the
    # size (in KiB) of each:
    # - synchronization operation when initializing
    # - each copy operation when performing a 'pvmove' (using 'mirror' segtype)
    # This setting has replaced 'mirror_region_size' since version 2.02.99
    raid_region_size = 512
    # Setting to use when there is no readahead value stored in the metadata.
    # "none" - Disable readahead.
    # "auto" - Use default value chosen by kernel.
    readahead = "auto"
    # 'raid_fault_policy' defines how a device failure in a RAID logical
    # volume is handled. This includes logical volumes that have the following
    # segment types: raid1, raid4, raid5*, and raid6*.
    # In the event of a failure, the following policies will determine what
    # actions are performed during the automated response to failures (when
    # dmeventd is monitoring the RAID logical volume) and when 'lvconvert' is
    # called manually with the options '--repair' and '--use-policies'.
    # "warn" - Use the system log to warn the user that a device in the RAID
    # logical volume has failed. It is left to the user to run
    # 'lvconvert --repair' manually to remove or replace the failed
    # device. As long as the number of failed devices does not
    # exceed the redundancy of the logical volume (1 device for
    # raid4/5, 2 for raid6, etc) the logical volume will remain
    # usable.
    # "allocate" - Attempt to use any extra physical volumes in the volume
    # group as spares and replace faulty devices.
    raid_fault_policy = "warn"
    # 'mirror_image_fault_policy' and 'mirror_log_fault_policy' define
    # how a device failure affecting a mirror (of "mirror" segment type) is
    # handled. A mirror is composed of mirror images (copies) and a log.
    # A disk log ensures that a mirror does not need to be re-synced
    # (all copies made the same) every time a machine reboots or crashes.
    # In the event of a failure, the specified policy will be used to determine
    # what happens. This applies to automatic repairs (when the mirror is being
    # monitored by dmeventd) and to manual lvconvert --repair when
    # --use-policies is given.
    # "remove" - Simply remove the faulty device and run without it. If
    # the log device fails, the mirror would convert to using
    # an in-memory log. This means the mirror will not
    # remember its sync status across crashes/reboots and
    # the entire mirror will be re-synced. If a
    # mirror image fails, the mirror will convert to a
    # non-mirrored device if there is only one remaining good
    # copy.
    # "allocate" - Remove the faulty device and try to allocate space on
    # a new device to be a replacement for the failed device.
    # Using this policy for the log is fast and maintains the
    # ability to remember sync state through crashes/reboots.
    # Using this policy for a mirror device is slow, as it
    # requires the mirror to resynchronize the devices, but it
    # will preserve the mirror characteristic of the device.
    # This policy acts like "remove" if no suitable device and
    # space can be allocated for the replacement.
    # "allocate_anywhere" - Not yet implemented. Useful to place the log device
    # temporarily on same physical volume as one of the mirror
    # images. This policy is not recommended for mirror devices
    # since it would break the redundant nature of the mirror. This
    # policy acts like "remove" if no suitable device and space can
    # be allocated for the replacement.
    mirror_log_fault_policy = "allocate"
    mirror_image_fault_policy = "remove"
    # 'snapshot_autoextend_threshold' and 'snapshot_autoextend_percent' define
    # how to handle automatic snapshot extension. The former defines when the
    # snapshot should be extended: when its space usage exceeds this many
    # percent. The latter defines how much extra space should be allocated for
    # the snapshot, in percent of its current size.
    # For example, if you set snapshot_autoextend_threshold to 70 and
    # snapshot_autoextend_percent to 20, whenever a snapshot exceeds 70% usage,
    # it will be extended by another 20%. For a 1G snapshot, using up 700M will
    # trigger a resize to 1.2G. When the usage exceeds 840M, the snapshot will
    # be extended to 1.44G, and so on.
    # Setting snapshot_autoextend_threshold to 100 disables automatic
    # extensions. The minimum value is 50 (A setting below 50 will be treated
    # as 50).
    snapshot_autoextend_threshold = 100
    snapshot_autoextend_percent = 20
    # 'thin_pool_autoextend_threshold' and 'thin_pool_autoextend_percent' define
    # how to handle automatic pool extension. The former defines when the
    # pool should be extended: when its space usage exceeds this many
    # percent. The latter defines how much extra space should be allocated for
    # the pool, in percent of its current size.
    # For example, if you set thin_pool_autoextend_threshold to 70 and
    # thin_pool_autoextend_percent to 20, whenever a pool exceeds 70% usage,
    # it will be extended by another 20%. For a 1G pool, using up 700M will
    # trigger a resize to 1.2G. When the usage exceeds 840M, the pool will
    # be extended to 1.44G, and so on.
    # Setting thin_pool_autoextend_threshold to 100 disables automatic
    # extensions. The minimum value is 50 (A setting below 50 will be treated
    # as 50).
    thin_pool_autoextend_threshold = 100
    thin_pool_autoextend_percent = 20
    # While activating devices, I/O to devices being (re)configured is
    # suspended, and as a precaution against deadlocks, LVM2 needs to pin
    # any memory it is using so it is not paged out. Groups of pages that
    # are known not to be accessed during activation need not be pinned
    # into memory. Each string listed in this setting is compared against
    # each line in /proc/self/maps, and the pages corresponding to any
    # lines that match are not pinned. On some systems locale-archive was
    # found to make up over 80% of the memory used by the process.
    # mlock_filter = [ "locale/locale-archive", "gconv/gconv-modules.cache" ]
    # Set to 1 to revert to the default behaviour prior to version 2.02.62
    # which used mlockall() to pin the whole process's memory while activating
    # devices.
    use_mlockall = 0
    # Monitoring is enabled by default when activating logical volumes.
    # Set to 0 to disable monitoring or use the --ignoremonitoring option.
    monitoring = 1
    # When pvmove or lvconvert must wait for the kernel to finish
    # synchronising or merging data, they check and report progress
    # at intervals of this number of seconds. The default is 15 seconds.
    # If this is set to 0 and there is only one thing to wait for, there
    # are no progress reports, but the process is awoken immediately the
    # operation is complete.
    polling_interval = 15
    # Report settings.
    # report {
    # Align columns on report output.
    # aligned=1
    # When buffered reporting is used, the report's content is appended
    # incrementally to include each object being reported until the report
    # is flushed to output which normally happens at the end of command
    # execution. Otherwise, if buffering is not used, each object is
    # reported as soon as its processing is finished.
    # buffered=1
    # Show headings for columns on report.
    # headings=1
    # A separator to use on report after each field.
    # separator=" "
    # Use a field name prefix for each field reported.
    # prefixes=0
    # Quote field values when using field name prefixes.
    # quoted=1
    # Output each column as a row. If set, this also implies report/prefixes=1.
    # columns_as_rows=0
    # Comma separated list of columns to sort by when reporting 'lvm devtypes' command.
    # See 'lvm devtypes -o help' for the list of possible fields.
    # devtypes_sort="devtype_name"
    # Comma separated list of columns to report for 'lvm devtypes' command.
    # See 'lvm devtypes -o help' for the list of possible fields.
    # devtypes_cols="devtype_name,devtype_max_partitions,devtype_description"
    # Comma separated list of columns to report for 'lvm devtypes' command in verbose mode.
    # See 'lvm devtypes -o help' for the list of possible fields.
    # devtypes_cols_verbose="devtype_name,devtype_max_partitions,devtype_description"
    # Comma separated list of columns to sort by when reporting 'lvs' command.
    # See 'lvs -o help' for the list of possible fields.
    # lvs_sort="vg_name,lv_name"
    # Comma separated list of columns to report for 'lvs' command.
    # See 'lvs -o help' for the list of possible fields.
    # lvs_cols="lv_name,vg_name,lv_attr,lv_size,pool_lv,origin,data_percent,move_pv,mirror_log,copy_percent,convert_lv"
    # Comma separated list of columns to report for 'lvs' command in verbose mode.
    # See 'lvs -o help' for the list of possible fields.
    # lvs_cols_verbose="lv_name,vg_name,seg_count,lv_attr,lv_size,lv_major,lv_minor,lv_kernel_major,lv_kernel_minor,pool_lv,origin,data_percent,metadata_percent,move_pv,copy_percent,mirror_log,convert
    # Comma separated list of columns to sort by when reporting 'vgs' command.
    # See 'vgs -o help' for the list of possible fields.
    # vgs_sort="vg_name"
    # Comma separated list of columns to report for 'vgs' command.
    # See 'vgs -o help' for the list of possible fields.
    # vgs_cols="vg_name,pv_count,lv_count,snap_count,vg_attr,vg_size,vg_free"
    # Comma separated list of columns to report for 'vgs' command in verbose mode.
    # See 'vgs -o help' for the list of possible fields.
    # vgs_cols_verbose="vg_name,vg_attr,vg_extent_size,pv_count,lv_count,snap_count,vg_size,vg_free,vg_uuid,vg_profile"
    # Comma separated list of columns to sort by when reporting 'pvs' command.
    # See 'pvs -o help' for the list of possible fields.
    # pvs_sort="pv_name"
    # Comma separated list of columns to report for 'pvs' command.
    # See 'pvs -o help' for the list of possible fields.
    # pvs_cols="pv_name,vg_name,pv_fmt,pv_attr,pv_size,pv_free"
    # Comma separated list of columns to report for 'pvs' command in verbose mode.
    # See 'pvs -o help' for the list of possible fields.
    # pvs_cols_verbose="pv_name,vg_name,pv_fmt,pv_attr,pv_size,pv_free,dev_size,pv_uuid"
    # Comma separated list of columns to sort by when reporting 'lvs --segments' command.
    # See 'lvs --segments -o help' for the list of possible fields.
    # segs_sort="vg_name,lv_name,seg_start"
    # Comma separated list of columns to report for 'lvs --segments' command.
    # See 'lvs --segments -o help' for the list of possible fields.
    # segs_cols="lv_name,vg_name,lv_attr,stripes,segtype,seg_size"
    # Comma separated list of columns to report for 'lvs --segments' command in verbose mode.
    # See 'lvs --segments -o help' for the list of possible fields.
    # segs_cols_verbose="lv_name,vg_name,lv_attr,seg_start,seg_size,stripes,segtype,stripesize,chunksize"
    # Comma separated list of columns to sort by when reporting 'pvs --segments' command.
    # See 'pvs --segments -o help' for the list of possible fields.
    # pvsegs_sort="pv_name,pvseg_start"
    # Comma separated list of columns to report for 'pvs --segments' command.
    # See 'pvs --segments -o help' for the list of possible fields.
    # pvsegs_cols="pv_name,vg_name,pv_fmt,pv_attr,pv_size,pv_free,pvseg_start,pvseg_size"
    # Comma separated list of columns to report for 'pvs --segments' command in verbose mode.
    # See 'pvs --segments -o help' for the list of possible fields.
    # pvsegs_cols_verbose="pv_name,vg_name,pv_fmt,pv_attr,pv_size,pv_free,pvseg_start,pvseg_size,lv_name,seg_start_pe,segtype,seg_pe_ranges"
    # Advanced section #
    # Metadata settings
    # metadata {
    # Default number of copies of metadata to hold on each PV. 0, 1 or 2.
    # You might want to override it from the command line with 0
    # when running pvcreate on new PVs which are to be added to large VGs.
    # pvmetadatacopies = 1
    # Default number of copies of metadata to maintain for each VG.
    # If set to a non-zero value, LVM automatically chooses which of
    # the available metadata areas to use to achieve the requested
    # number of copies of the VG metadata. If you set a value larger
    # than the total number of metadata areas available then
    # metadata is stored in them all.
    # The default value of 0 ("unmanaged") disables this automatic
    # management and allows you to control which metadata areas
    # are used at the individual PV level using 'pvchange
    # --metadataignore y/n'.
    # vgmetadatacopies = 0
    # Approximate default size of on-disk metadata areas in sectors.
    # You should increase this if you have large volume groups or
    # you want to retain a large on-disk history of your metadata changes.
    # pvmetadatasize = 255
    # List of directories holding live copies of text format metadata.
    # These directories must not be on logical volumes!
    # It's possible to use LVM2 with a couple of directories here,
    # preferably on different (non-LV) filesystems, and with no other
    # on-disk metadata (pvmetadatacopies = 0). Or this can be in
    # addition to on-disk metadata areas.
    # The feature was originally added to simplify testing and is not
    # supported under low memory situations - the machine could lock up.
    # Never edit any files in these directories by hand unless you are
    # absolutely sure you know what you are doing! Use
    # the supplied toolset to make changes (e.g. vgcfgrestore).
    # dirs = [ "/etc/lvm/metadata", "/mnt/disk2/lvm/metadata2" ]
    # Event daemon
    dmeventd {
    # mirror_library is the library used when monitoring a mirror device.
    # "libdevmapper-event-lvm2mirror.so" attempts to recover from
    # failures. It removes failed devices from a volume group and
    # reconfigures a mirror as necessary. If no mirror library is
    # provided, mirrors are not monitored through dmeventd.
    mirror_library = "libdevmapper-event-lvm2mirror.so"
    # snapshot_library is the library used when monitoring a snapshot device.
    # "libdevmapper-event-lvm2snapshot.so" monitors the filling of
    # snapshots and emits a warning through syslog when the use of
    # the snapshot exceeds 80%. The warning is repeated when 85%, 90% and
    # 95% of the snapshot is filled.
    snapshot_library = "libdevmapper-event-lvm2snapshot.so"
    # thin_library is the library used when monitoring a thin device.
    # "libdevmapper-event-lvm2thin.so" monitors the filling of
    # pool and emits a warning through syslog when the use of
    # the pool exceeds 80%. The warning is repeated when 85%, 90% and
    # 95% of the pool is filled.
    thin_library = "libdevmapper-event-lvm2thin.so"
    # Full path of the dmeventd binary.
    # executable = "/usr/sbin/dmeventd"

  • How can I Convert a numeric string to a formatted string?

    I have a string value returned from a background tool that will range from 0 to possibly terabytes as a full number.  I want to format that number to use commas and to reduce the character count using an appropriate size modifier (KiB, MiB, GiB, etc).  I've tried converting the string number to a Double value using Double.parseDouble() and then performing the math based on the size of the value with this code:
    Double dblConversionSize;
    String stConvertedSize;
    dblConversionSize = Double.parseDouble(theValue);
    if (dblConversionSize > (1024 * 1024 * 1024))
         stConvertedSize = String.format("%,.000d", dblConversionSize / 1024 / 1024 / 1024) + " GiB";
    I've also tried using
         String.valueOf(dblConversionSize / 1024 / 1024 / 1024) + " GiB";
    However, the formatting is failing and I'm either getting a format exception or the result is displayed as a number with no decimal component.
    Anyone have a recipe for this?
    Thanks,
    Tim

    TOLIS Tim wrote:
    Thanks, but the reason for using Double is that the division math can leave me with values like 2.341 GiB.  I don't want to drop the values (or round) on the right side of the decimal point.
    Then you may look at BigDecimal, which avoids the rounding errors of binary floating point.
    NumberFormat will handle all subclasses of Number. You might need a more specialized one; just look at the API to see which one is suitable.
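    Just as a rough sketch of the String.format route (the class and method names below are made up and the sample value is arbitrary): divide through the binary prefixes first, then format with "%,.3f" instead of a "d" conversion, since a "d" conversion cannot take a Double at all:
    public class SizeFormatSketch {
        // hypothetical helper: format a raw byte count with grouping commas,
        // three decimals and a binary unit suffix
        static String humanReadable(String theValue) {
            double size = Double.parseDouble(theValue);
            String[] units = { "B", "KiB", "MiB", "GiB", "TiB" };
            int i = 0;
            while (size >= 1024 && i < units.length - 1) {
                size /= 1024;
                i++;
            }
            // "%,.3f" = grouping separators plus three decimals;
            // "%,.000d" is not a valid format pattern and throws an exception
            return String.format(java.util.Locale.US, "%,.3f %s", size, units[i]);
        }
        public static void main(String[] args) {
            System.out.println(humanReadable("2513312456")); // prints "2.341 GiB"
        }
    }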
    bye
    TPD

  • [SOLVED]Systemd autostart conky

    Hi guys, as a new Arch user I tried to play a bit with systemd to have it automatically start Conky after logon. When I start conky manually from the terminal it all works perfectly, but when systemd needs to do it as a unit (service) it fails because Conky doesn't accept the rc file it's been given. It seems that the variable $HOME or %h is unknown to systemd. Below are some details.
    Starting my 2 Conky files from the Gnome terminal is no problem:
    conky -d -c $HOME/.Conky/ConkyToprc
    conky -d -c $HOME/.Conky/ConkyLogrc
    cat /etc/systemd/system/conky.service:
    [Unit]
    Description=Conky system monitor
    Documentation=man:conky(1)
    [Service]
    Type=forking
    ExecStart=/usr/bin/conky -d -c /home/viper/.Conky/ConkyToprc
    [Install]
    WantedBy=xinitrc.target
    [root@Arch viper]# systemctl status conky.service
    ● conky.service - Conky system monitor
    Loaded: loaded (/etc/systemd/system/conky.service; disabled)
    Active: failed (Result: core-dump) since Fri 2014-05-16 18:39:51 CEST; 10min ago
    Docs: man:conky(1)
    Process: 4833 ExecStart=/usr/bin/conky -d -c /home/viper/.Conky/ConkyToprc (code=dumped, signal=ABRT)
    May 16 18:39:51 Arch conky[4833]: Conky: $HOME environment variable doesn't exist
    May 16 18:39:51 Arch conky[4833]: conky: malloc.c:2369: sysmalloc: Assertion `(old_top == (((mbinptr) (((char *) &((av)->bins[((1) - 1) * 2])) - __builtin_offsetof (struct malloc_chunk, fd)))) && old_size == ...
    May 16 18:39:51 Arch systemd-coredump[4834]: Process 4833 (conky) dumped core.
    May 16 18:39:51 Arch systemd[1]: conky.service: control process exited, code=dumped status=6
    May 16 18:39:51 Arch systemd[1]: Failed to start Conky system monitor.
    May 16 18:39:51 Arch systemd[1]: Unit conky.service entered failed state.
    Hint: Some lines were ellipsized, use -l to show in full.
    So even after changing the systemd unit file to use the full path /home/viper/.Conky/ConkyToprc it's still not working. I think after changing $HOME or %h to the full path Conky did accept the config file, but it doesn't recognize the $HOME variable inside the config file anymore. Or maybe I'm missing something here... And I'm not even trying to load the second conky file yet. Does anyone have an idea where I need to look? Any help is highly appreciated. Thank you in advance.
    It seems to be a bit related to this: http://lists.freedesktop.org/archives/s … 06217.html
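    I also wondered whether hard-coding the environment in the unit would help; roughly something like this (untested sketch, the Environment values and the WantedBy target are guesses for my setup):
    [Unit]
    Description=Conky system monitor (ConkyToprc)
    Documentation=man:conky(1)
    [Service]
    Type=forking
    Environment=HOME=/home/viper
    Environment=DISPLAY=:0
    ExecStart=/usr/bin/conky -d -c /home/viper/.Conky/ConkyToprc
    [Install]
    WantedBy=graphical.target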
    Last edited by DarkLite1 (2014-05-17 16:48:04)

    Thank you everyone for your feedback, I really appreciate it. Some things I would still like to clarify, although I fully agree with your reasoning of not using systemd for my 'conky' idea here.
    @ANOKNUSA: Yes, you are right, systemd starts everything simultaneously at boot time. The way I did my test was to start the unit manually from the Gnome terminal after already being logged on to Gnome. So from my point of view the $HOME variable did already exist at that point and there was no need to wait for the X server... So in theory, it should've been able to get the job done.
    @twelveeighty: So yes, I do believe you are right. systemd doesn't know $HOME at all.
    Then my final question to get this solved for me: what is the best way to have 2 instances of Conky running, each with its own config file? I tried the following already, but it was unsuccessful:
    cat /etc/profile.d/autostart.sh
    Exec=/usr/sbin/conky -d -c /home/viper/.Conky/ConkyToprc
    Exec=/usr/sbin/conky -d -c /home/viper/.Conky/ConkyLogrc
    cat /usr/share/gnome/autostart/conky.desktop
    [Desktop Entry]
    Type=Application
    Name=Conky
    Comment=Start conky script
    Exec=/usr/sbin/conky -d -c /home/viper/.Conky/ConkyToprc
    Exec=/usr/sbin/conky -d -c /home/viper/.Conky/ConkyLogrc
    OnlyShowIn=GNOME;
    X-GNOME-Autostart-Phase=Application
    When trying to follow the proposed solution found here https://wiki.archlinux.org/index.php/xinitrc there is no example file available for .xinitrc in /etc/skel as suggested ('Copy the sample /etc/skel/.xinitrc file to your home directory'). And if there was, what would the syntax be for 2 instances of Conky? Because when I read the Note, it's not possible to add 2 exec lines:
    Note: Make sure to uncomment only one exec line, since that will be the last command run from the script; all the following lines will just be ignored. Do NOT attempt to background your WM by appending a `&` to the line.
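    From what I understand, only the last line uses exec; everything before it simply runs, so two Conky lines would look roughly like this (a sketch only; the session command at the end is a placeholder for whatever you normally start):
    #!/bin/sh
    # both conky instances are started first; -d makes conky fork to the background
    conky -d -c "$HOME/.Conky/ConkyToprc"
    conky -d -c "$HOME/.Conky/ConkyLogrc"
    # only the final session/WM line uses exec
    exec gnome-session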
    /again: thanks for still helping me out and reading my jibber/jabber. I sometimes really feel like a noob here between all you pro's. But the only way of getting there is by falling and learning how to get up? Right Bruce?
    Last edited by DarkLite1 (2014-05-17 16:18:43)

  • BEX Formula with IF

    Hello BI experts,
    I try to calculate a formula using boolean operators (IF, AND).
    When I execute the query, I get a message like this, but I only use amounts in EUR.
    So I've added NODIM to my formula, but then the result is displayed without EUR.
    If I don't use NODIM, the result is displayed with a strange unit: EUR^2.
    Do you have any idea to display the result with EUR symbol?
    Thank you in advance
    Regards,
    Nicolas 

    Hi Nicolas,
    The display of your unit seems to be EUR raised to the power of 2, i.e. EUR squared.
    I have not seen this type of unit display before. However, when you apply NODIM, try to apply it only to the Net Sales + Portfolio YTD key figure and check whether the results are displayed correctly.
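    As a rough sketch (the key figure names below are placeholders, not taken from your query): keep exactly one EUR-bearing key figure outside NODIM, so only that operand contributes the unit to the result:
    'Net Sales' * NODIM('Portfolio YTD')           -> result unit EUR
    NODIM('Net Sales') * NODIM('Portfolio YTD')    -> result has no unit at all
    'Net Sales' * 'Portfolio YTD'                  -> result unit EUR^2 (the warning case)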
    Hope this helps.
    -Swati.

  • I recently migrated my MacBook from 10.5.8 to 10.6.8, using the Snow Leopard CD I purchased from Apple. Since then I have noticed a strange ozone smell coming from the MacBook in the area where the hinges are to open and close the unit. I was told by Appl

    I recently migrated my MacBook from 10.5.8 to 10.6.8, using the Snow Leopard CD I purchased from Apple. Since then I have noticed a strange ozone smell coming from the MacBook in the area where the hinges are to open and close the unit. I was told by Apple that it probably is not serious and is coming from the fans in the MacBook. It is true that the smell/odor is more intense when the fans are working. Apple said I should take the MacBook to an Apple Store to have the hardware checked out. Everything seems to be working correctly. I am backing up automatically wirelessly to a Time Capsule, which I had not done until recently, although I had purchased the Time Capsule in 2009. I have not noticed this odor before. Does anyone suspect something more serious that may be going on?

    You may be smelling it because with the new OS the processor has to work harder, causing it to get hotter than it would with the other OS. I can't say there is nothing wrong, but what could be happening is just that: it's getting hotter, so the fan is spinning faster and moving more air, so you are getting more of the smell than you would before. I would still continue to back it up, and take it to the Genius Bar at an Apple Store and have them look at it to make sure. Was the computer ever in a smoky environment?

  • Strange noise from (I guess) PSU unit

    Lately I can hear a very low crackling sound coming from, I guess, the PSU unit. It is most noticeable at the back of the PSU. It's random: sometimes it can be quiet for a long time and sometimes it can be heard very often. If there are ambient noises in the room during the day I can't hear it at all, it's that low, but in the evening when everything is quiet it's there.
    Sample of the noise can be heard here:
    http://a-n-t-i-s-t-a-t-i-c.rootylicious.com/macProSound.mp3
    I haven't experienced any problems with the computer's performance.
    Should I take the machine for the checkup?
    Thanks
    Mac Pro, 2.66, 1GB, 250 HDD   Mac OS X (10.4.8)  

    Thanks for the answer, I was thinking the same. But I have noticed something very strange. I have a triple-boot machine: OS X Tiger on one drive, Win XP and Vista on the other. The strange thing is that the noise does not appear while working in any Windows OS. I will try to unplug the second hard drive and work only with the OS X drive.
    Mac Pro, 2.66, 1GB, 250 HDD   Mac OS X (10.4.8)  

  • How Do I install kiba dock and conky/configure.

    Hey guys, I'm new to Arch Linux, and I would like to know how to install Kiba-dock. I have no clue what to do. Also, I installed conky via pacman -S conky, but how do I configure it and open it? Any help is greatly appreciated, guys.
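    A quick sketch of the usual first steps for conky (conky falls back to built-in defaults when it finds no config, and ~/.conkyrc is the classic per-user config path; the sample-config path and editor below are only suggestions and may differ on your install):
    conky &                                          # run with the built-in defaults to check it starts
    cp /usr/share/doc/conky/conky.conf ~/.conkyrc    # hypothetical sample-config location; adjust to your package
    nano ~/.conkyrc                                  # edit the config to taste
    conky -c ~/.conkyrc &                            # conky also picks up ~/.conkyrc automatically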

    Hmm.... Here is the error I'm getting after I try to makepkg in akamaru-svn. This is after I ran "pacman -S gnome-desktop librsvg".
    [gr1m@myhost ~]$ su
    Password:
    [root@myhost gr1m]# cd Desktop
    [root@myhost Desktop]# cd akamaru-svn
    [root@myhost akamaru-svn]# makepkg --asroot
    ==> Determining latest svn revision...
    Error validating server certificate for 'https://kibadock.svn.sourceforge.net:443':
    - The certificate is not issued by a trusted authority. Use the
       fingerprint to validate the certificate manually!
    Certificate information:
    - Hostname: *.svn.sourceforge.net
    - Valid: from Tue, 09 Oct 2007 14:15:07 GMT until Mon, 08 Dec 2008 15:15:07 GMT
    - Issuer: Equifax Secure Certificate Authority, Equifax, US
    - Fingerprint: fb:75:6c:40:58:ae:21:8c:63:dd:1b:7b:6a:7d:bb:8c:74:36:e7:8a
    (R)eject, accept (t)emporarily or accept (p)ermanently? p
      -> Version found: 3
    ==> Making package: akamaru-svn 3-2  (Mon Jul 28 01:41:38 CDT 2008)
    ==> WARNING: Running makepkg as root...
    ==> Checking Runtime Dependencies...
    ==> Checking Buildtime Dependencies...
    ==> Retrieving Sources...
    ==> Validating source files with md5sums...
    ==> Extracting Sources...
    ==> Removing existing pkg/ directory...
    ==> Starting build()...
    ==> Connecting to SVN server...
    ==> Checking out akamaru
    Error validating server certificate for 'https://kibadock.svn.sourceforge.net:443':
    - The certificate is not issued by a trusted authority. Use the
       fingerprint to validate the certificate manually!
    Certificate information:
    - Hostname: *.svn.sourceforge.net
    - Valid: from Tue, 09 Oct 2007 14:15:07 GMT until Mon, 08 Dec 2008 15:15:07 GMT
    - Issuer: Equifax Secure Certificate Authority, Equifax, US
    - Fingerprint: fb:75:6c:40:58:ae:21:8c:63:dd:1b:7b:6a:7d:bb:8c:74:36:e7:8a
    (R)eject, accept (t)emporarily or accept (p)ermanently? t
    A    akamaru/include
    A    akamaru/include/akamaru.h
    A    akamaru/include/Makefile.am
    A    akamaru/AUTHORS
    A    akamaru/INSTALL
    A    akamaru/configure.in
    A    akamaru/ChangeLog
    A    akamaru/akamaru.pc.in
    A    akamaru/src
    A    akamaru/src/akamaru.c
    A    akamaru/src/Makefile.am
    A    akamaru/COPYING
    A    akamaru/Makefile.am
    A    akamaru/autogen.sh
    A    akamaru/NEWS
    A    akamaru/README
    Checked out revision 3.
    ==> SVN checkout done or server timeout
    ==> Starting build...
    checking for intltool...
    found intltool
    checking for libtoolize...
    found libtoolize
    checking for automake...
    found automake
    checking for autoconf...
    found autoconf
    Running 'autoreconf -v --install'...
    autoreconf: Entering directory `.'
    autoreconf: configure.in: not using Gettext
    autoreconf: running: aclocal
    configure.in:51: error: AC_SUBST: `"$AKAMARU_REQUIRES"' is not a valid shell variable name
    configure.in:51: the top level
    autom4te: /usr/bin/m4 failed with exit status: 1
    aclocal: autom4te failed with exit status: 1
    autoreconf: aclocal failed with exit status: 1
    make: *** No targets specified and no makefile found.  Stop.
    ==> ERROR: Build Failed.
        Aborting...
    Last edited by paulie1984 (2008-07-27 23:43:22)
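    The aclocal failure above usually means configure.in passes a quoted expansion to AC_SUBST instead of a bare variable name. A sketch of the kind of change needed around configure.in:51 (the value assigned below is made up, not taken from the akamaru sources):
    dnl before (rejected by newer autoconf):
    AC_SUBST("$AKAMARU_REQUIRES")
    dnl after: assign the shell variable, then substitute the bare name
    AKAMARU_REQUIRES="glib-2.0"
    AC_SUBST(AKAMARU_REQUIRES)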

  • Conky: how to cut "MiB" out of $mem?

    Hi there!
    I'm not that used to conky, so I'm asking how it is possible to get the output of $mem without "MiB" (I'd like to color this part differently). Found nothing in here.
    Ah yes, the same with $swap.
    No other way than using $memperc/$swapperc ?
    Last edited by nexus7 (2011-10-14 15:35:29)
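    One workaround sketch, if a plain variable won't do: pull the raw number yourself with ${exec} and color the unit separately. This assumes the usual `free -m` output, where the third field of the Mem:/Swap: lines is the amount used in MiB, so check your free(1) format first:
    ${color yellow}${exec free -m | awk '/^Mem:/ {print $3}'}${color grey} MiB
    ${color yellow}${exec free -m | awk '/^Swap:/ {print $3}'}${color grey} MiB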

    Great news!
    I think that Jon gave you a link to a tutorial on making Cuts. Here is a link to many more PrPro learning assets: http://forums.adobe.com/thread/878529?tstart=0 and here is a listing of many Adobe TV tutorials, that will be useful: http://tv.adobe.com/show/learn-premiere-pro-cs6/
    If your question has been answered, you might also want to look over this article: http://forums.adobe.com/thread/1058744?tstart=0
    Good luck,
    Hunt

  • Fixed Duration & effort driven task (strange outcome with Units

    Hi, 
    Sorry, but I misformulated my problem.
    This makes it a bit more clear I hope:
    In a chapter about effort-driven scheduling, Microsoft says the following:
    If the assigned task type is Fixed Duration, assigning additional resources decreases the individual unit values for resources.
    Why aren't the units set to 50% after I assigned Peter as 2nd resource?
    TIA
    L

    Hi Guillaume,
    Thank you very much for your reply!!!!
    I have been chasing my tail about it.
    I would have to agree with one of the comments regarding that article: (some parts of it below)
    It might have been nice then if Microsoft had carried the Peak field into the Task Details form, as well as the other views, since those views only display Assignment Units; because Peak is not available in the Task Details view, it just looks like the product is broken.
     A fixed-duration 5-day task with 20 hours on each resource (2 in total) shows 100% Assignment Units, which just doesn't seem very intuitive.
    At least now I can let it go (sort of)
    The Peak field is my saviour, I guess, but:
    I would still like to understand the logic when you add 4 resources one by one to a 10-day task and the Units go like this:
    1st =100%
    2nd = 100%
    3rd = 50%
    4th = 33%
    Thanks again!
    L

  • Lumia 820 + Alpine CDE-183BT Head unit strange blu...

    Hello all,
    I had a Sony Xplod head unit with Bluetooth and everything was working fine: streaming audio / phone, and even the text messages and the HERE Drive+ instructions came over the speakers.
    Now I bought a "better" head unit and most of it is working.
    I have:
    bluetooth for phone including sync for phonebook
    streaming for audio
    I DON'T have:
    Voice for navigation ??
    Text messages spoken over the head unit ??
    I tried to reconnect / restart the phone / reinstall Drive+ incl. voice and maps, still nothing.
    HELP, I need the navigation voice!
    I tried also both settings for the bluetooth like the advanced switch.
    Nokia Lumia 820 - WP8.1 Developer Edition
    Alpine CDE-183BT

    Hi, Tankyspanky and Hopseflops.
    Welcome to the Nokia Support Discussions! 
    Have you downloaded a voice for the said app? Note that you will also need to install a voice in your Windows Phone settings. Check this: http://www.nokia.com/ph-en/support/product/here-drive-plus/faq/?action=singleTopic&topic=FA141002&ca.... 
    (Note: The above link is intended for customers in the Philippines. For your region, go here: http://www.nokia.com/global/support/locations/.)
    By the way, Tankyspanky, you can also check for the same issue in the Microsoft Developer Network forum since your phone has the Windows Phone 8.1 Preview for Developers SW: http://social.msdn.microsoft.com/Forums/en-US/home?category=wpprevdev.
    If you have other questions, just let us know. Have a good one! 

  • [SOLVED] Conky/ifconfig no up/down-speed

    Hi,
    Conky shows no downspeed or upspeed.
    ifconfig also shows no downspeed or upspeed.
    iftop does show the speed of up and down traffic.
    Eth: 03:00.0 Ethernet controller: Qualcomm Atheros Killer E2200 Gigabit Ethernet Controller (rev 13)
    Box: Linux archbeast 3.13.7-1-ARCH #1 SMP PREEMPT Mon Mar 24 20:06:08 CET 2014 x86_64 GNU/Linux
    At the moment of the test: downloading 1MB/s and uploading 1MB/s
    [thijs@archbeast ~]$ ifconfig
    enp3s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
    inet 76.83.1.135 netmask 255.255.255.0 broadcast 76.83.1.255
    inet6 xx:xx:xx:xx:xx prefixlen 64 scopeid 0x20<link>
    ether xx:xx:xx:xx:xx txqueuelen 1000 (Ethernet)
    RX packets 0 bytes 0 (0.0 B)
    RX errors 0 dropped 0 overruns 0 frame 0
    TX packets 0 bytes 0 (0.0 B)
    TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
    device interrupt 17
    lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
    inet 127.0.0.1 netmask 255.0.0.0
    inet6 ::1 prefixlen 128 scopeid 0x10<host>
    loop txqueuelen 0 (Local Loopback)
    RX packets 78483 bytes 16327096 (15.5 MiB)
    RX errors 0 dropped 0 overruns 0 frame 0
    TX packets 78483 bytes 16327096 (15.5 MiB)
    TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
    I think Conky will work once the stats are back in ifconfig. When I run Debian 7 or Mint 16 on this machine, all is well.
    Cheerio
    Last edited by Thijxx (2014-11-17 08:06:31)

    I did nothing unusual when following the Beginners' Guide when I installed Arch. BTW, you may want to edit your post and xx out your MAC address for security reasons. It's not wise to share it.
    $ ifconfig
    enp5s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
    inet 192.168.1.14 netmask 255.255.255.0 broadcast 192.168.1.255
    inet6 fe80::76d4:35ff:fe56:ed09 prefixlen 64 scopeid 0x20<link>
    ether xx:xx:xx:xx:xx:xx txqueuelen 1000 (Ethernet)
    RX packets 13807694 bytes 16561364557 (15.4 GiB)
    RX errors 0 dropped 10 overruns 0 frame 0
    TX packets 12601230 bytes 11800214551 (10.9 GiB)
    TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
    lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
    inet 127.0.0.1 netmask 255.0.0.0
    inet6 ::1 prefixlen 128 scopeid 0x10<host>
    loop txqueuelen 0 (Local Loopback)
    RX packets 643 bytes 42216 (41.2 KiB)
    RX errors 0 dropped 0 overruns 0 frame 0
    TX packets 643 bytes 42216 (41.2 KiB)
    TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
    My network portion of Conky:
    ######### NETWORK ###############
    ${color CC9900}${font Arial:style=Bold}NETWORK ${hr 2}$color$font
    ${color2}IP on enp5s0: ${color}${alignr}${execi 3600 curl -s http://192.168.1.1|grep 'wan_ipaddr'|awk '{print $1,$2}'|sed 's/<span id="wan_ipaddr">//'|sed 's!</span>&nbsp;!!'}
    ${color2}Down ${color}$alignr ${downspeedf enp5s0} kb/s
    ${color2}Up ${color}$alignr ${upspeedf enp5s0} kb/s
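    Conky and ifconfig both end up reading the same per-interface counters the kernel exports, so a quick sanity check on the first poster's machine (interface name taken from the ifconfig output above; adjust as needed) is to look at sysfs directly. If these stay at 0 while traffic is flowing, the driver isn't updating its statistics and nothing built on top of them can show a speed:
    $ cat /sys/class/net/enp3s0/statistics/rx_bytes
    $ cat /sys/class/net/enp3s0/statistics/tx_bytes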

  • MacBook Pro (Early 2011) behaves strangely after GPU replacement

    I have an early 2011 MBP and had the GPU issue. After Apple finally announced the replacement program I immediately went to a store and they fixed it. But my system has acted very strangely ever since. Here is a list of all the strange things:
    - The overall performance is not as good as it was before
    - Yosemite always asks for my iCloud password on startup. If I enter it, the message just reappears; this happens until I click on "cancel" twice! (In the account settings everything is running and all the options in the iCloud menu are checked)
    - The Spotlight search does not work anymore. If I enter something and press Enter, nothing happens except that Spotlight freezes on the spot for some time until it disappears.
    - Safari, iTunes and Mail have no, or very little, internet connection. But the strange thing is: Chrome does!!!
    - Finder sometimes does not work correctly. I have to restart it quite often (I never had to do this before) and sometimes it does not react when clicking on the dock icon.
    For me all this makes no sense. What is wrong with my system?
    I should mention: I installed 8GB of RAM (over two years ago) and I put an SSD in and removed the optical drive (over a year ago). I use the Trim Enabler app for the SSD.
    Thanks for any help!

    1. This procedure is a diagnostic test. It changes nothing, for better or worse, and therefore will not, in itself, solve the problem. But with the aid of the test results, the solution may take a few minutes, instead of hours or days.
    The test works on OS X 10.7 ("Lion") and later. I don't recommend running it on older versions of OS X. It will do no harm, but it won't do much good either.
    Don't be put off by the complexity of these instructions. The process is much less complicated than the description. You do harder tasks with the computer all the time.
    2. If you don't already have a current backup, back up all data before doing anything else. The backup is necessary on general principle, not because of anything in the test procedure. Backup is always a must, and when you're having any kind of trouble with the computer, you may be at higher than usual risk of losing data, whether you follow these instructions or not.
    There are ways to back up a computer that isn't fully functional. Ask if you need guidance.
    3. Below are instructions to run a UNIX shell script, a type of program. As I wrote above, it changes nothing. It doesn't send or receive any data on the network. All it does is to generate a human-readable report on the state of the computer. That report goes nowhere unless you choose to share it. If you prefer, you can act on it yourself without disclosing the contents to me or anyone else.
    You should be wondering whether you can believe me, and whether it's safe to run a program at the behest of a stranger. In general, no, it's not safe and I don't encourage it.
    In this case, however, there are a couple of ways for you to decide whether the program is safe without having to trust me. First, you can read it. Unlike an application that you download and click to run, it's transparent, so anyone with the necessary skill can verify what it does.
    You may not be able to understand the script yourself. But variations of it have been posted on this website thousands of times over a period of years. The site is hosted by Apple, which does not allow it to be used to distribute harmful software. Any one of the millions of registered users could have read the script and raised the alarm if it was harmful. Then I would not be here now and you would not be reading this message. See, for example, this discussion.
    Nevertheless, if you can't satisfy yourself that these instructions are safe, don't follow them. Ask for other options.
    4. Here's a summary of what you need to do, if you choose to proceed:
    ☞ Copy a line of text in this window to the Clipboard.
    ☞ Paste into the window of another application.
    ☞ Wait for the test to run. It usually takes a few minutes.
    ☞ Paste the results, which will have been copied automatically, back into a reply on this page.
    The sequence is: copy, paste, wait, paste again. You don't need to copy a second time. Details follow.
    5. Try to test under conditions that reproduce the problem, as far as possible. For example, if the computer is sometimes, but not always, slow, run the test during a slowdown.
    You may have started up in "safe" mode. If the system is now in safe mode and works well enough in normal mode to run the test, restart as usual. If you can only test in safe mode, do that.
    6. If you have more than one user, and the one affected by the problem is not an administrator, then please run the test twice: once while logged in as the affected user, and once as an administrator. The results may be different. The user that is created automatically on a new computer when you start it for the first time is an administrator. If you can't log in as an administrator, test as the affected user. Most personal Macs have only one user, and in that case this section doesn’t apply. Don't log in as root.
    7. The script is a single long line, all of which must be selected. You can accomplish this easily by triple-clicking anywhere in the line. The whole line will highlight, though you may not see all of it in the browser window, and you can then copy it. If you try to select the line by dragging across the part you can see, you won't get all of it.
    Triple-click anywhere in the line of text below on this page to select it:
    PATH=/usr/bin:/bin:/usr/sbin:/sbin:/usr/libexec;clear;cd;p=(1281 ' 0.5 0.25 50 1000 15 5120 1000 25000 5 5 5 1 0 100 ' 51 25600 4 10 25 5120 102400 1000 25 1 300 40 500 300 85 25 20480 262144 20 2000 524288 604800 5 1024 );k=({Soft,Hard}ware Memory Diagnostics Power FireWire Thunderbolt USB Bluetooth SerialATA Extensions Applications Frameworks PrefPane Fonts Displays PCI UniversalAccess InstallHistory ConfigurationProfile AirPort 'com\.apple\.' -\\t N\\/A 'AES|atr|udit|msa|dnse|ax|ensh|fami|FileS|fing|ft[pw]|gedC|kdu|etS|is\.|alk|ODSA|otp|htt|pace|pcas|ps-lp|rexe|rlo|rsh|smb|snm|teln|upd-[aw]|uuc|vix|webf' OSBundle{Require,AllowUserLoa}d 'Mb/s:Mb/s:ms/s:KiB/s:%:total:MB:total:lifetime:sampled:per sec' 'Net in:Net out:I/O wait time:I/O requests:CPU usage:Open files:Memory:Mach ports:Energy:Energy:File opens:Forks:Failed forks:System errors' 'tsA|[ST]M[HL]' PlistBuddy{,' 2>&1'}' -c Print' 'Info\.plist' CFBundleIdentifier );f=('\n%s'{': ','\n\n'}'%s\n' '\nRAM details\n%s\n' %s{' ','\n'{"${k[22]}",}}'%s\n' '%.1f GiB: %s\n' '\n    ...and %s more line(s)\n' '\nContents of %s\n    '"${k[22]}"'mod date: %s\n    '"${k[22]}"'checksum: %s\n%s\n' );c=(879294308 4071182229 461455494 216630318 3627668074 1083382502 1274181950 1855907737 2758863019 1848501757 464843899 2636415542 3694147963 1233118628 2456546649 2806998573 2778718105 842973933 1383871077 1591517921 676087606 1445213025 2051385900 3301885676 891055588 998894468 695903914 1443423563 4136085286 3374894509 1051159591 892310726 1707497389 523110921 2883943871 3873345487 );s=(' s/[0-9A-Za-z._]+@[0-9A-Za-z.]+\.[0-9A-Za-z]{2,4}/EMAIL/g;/faceb/s/(at\.)[^.]+/\1NAME/g;/\/Shared/!s/(\/Users\/)[^ /]+/\1USER/g;s/[-0-9A-Fa-f]{22,}/UUID/g;' ' s/^ +//;/de: S|[nst]:/p;' ' {sub(/^ +/,"")};/er:/;/y:/&&$2<'${p[4]} ' s/:$//;3,6d;/[my].+:/d;s/^ {4}//;H;${ g;s/\n$//;/s: (E[^m]|[^EO])|x([^08]|02[^F]|8[^0])/p;} ' ' 5h;6{ H;g;/P/!p;} ' ' ($1~/^Cy/&&$3>'${p[9]}')||($1~/^Cond/&&$2!~/^N/) ' ' /:$/{ N;/:.+:/d;s/ *://;b0'$'\n'' };/^ *(V.+ [0N]|Man).+ /{ s/ 0x.... 
//;s/[()]//g;s/(.+: )(.+)/ (\2)/;H;};$b0'$'\n'' d;:0'$'\n'' x;s/\n\n//;/Apple[ ,]|Genesy|Intel|SMSC/d;s/\n.*//;/\)$/p;' ' s/^.*C/C/;H;${ g;/No th|pms/!p;} ' '/= [^GO]/p' '{$1=""};1' ' /Of|yc/!{ s/^.+is |\.//g;p;q;} ' ' BEGIN { FS="\f";if(system("A1 42 83 114")) d="^'"${k[21]}"'launch(d\.peruser\.[0-9]+|ctl\.(Aqua|Background|System))$";} { if($2~/[1-9]/) { $2="status: "$2;printf("'"${f[4]}"'",$1,$2);} else if(!d||$1!~d) print $1;} ' ' $1>1{$NF=$NF" x"$1} /\*/{if(!f)f="\n\t* Code injection"} {$1=""} 1;END{print f} ' ' NR==2&&$4<='${p[7]}'{print $4} ' ' BEGIN{FS=":"} ($1~"wir"&&$2>'${p[22]}') {printf("wired %.1f\n",$2/2^18)} ($1~/P.+ts/&&$2>'${p[19]}') {printf("paged %.1f\n",$2/2^18)} ' '/YLD/s/=/ /p' ' { q=$1;$1="";u=$NF;$NF="";gsub(/ +$/,"");print q"\f"$0"\f"u;} ' ' /^ {6}[^ ]/d;s/:$//;/([^ey]|[^n]e):/d;/e: Y/d;s/: Y.+//g;H;${ g;s/ \n (\n)/\1/g;s/\n +(M[^ ]+)[ -~]+/ (\1)/;s/\n$//;/( {8}[^ ].*){2,}/p;} ' 's:^:/:p;' ' !/, .+:/ { print;n++;} END{if(n<'{${p[12]},${p[13]}}')printf("^'"${k[21]}"'.+")} ' '|uniq' ' 1;END { print "/L.+/Scr.+/Templ.+\.app$";print "/L.+/Pri.+\.plugin$";if(NR<'{${p[14]},${p[21]}}') print "^/[Sp].+|'${k[21]}'";} ' ' /\.(framew|lproj)|\):/d;/plist:|:.+(Mach|scrip)/s/:.+//p;' '&&echo On' '/\.(bundle|component|framework|kext|mdimporter|plugin|qlgenerator|saver|wdgt)$/p' '/\.dylib$/p' ' /Temp|emac/{next};/(etc|Preferences|Launch[AD].+)\// { sub(".","");print $0"$";} END { split("'"${c[*]}"'",c);for(i in c) print "\t"c[i]"$";} ' ' /^\/(Ap|Dev|Inc|Prev)/d;/((iTu|ok).+dle|\.(component|mailbundle|mdimporter|plugin|qlgenerator|saver|wdgt))$/p;' ' BEGIN{ FS="= "} $2 { gsub(/[()"]/,"",$2);print $2;} !/:/&&!$2{print "'${k[23]}'"} ' ' /^\//!d;s/^.{5}//;s/ [^/]+\//: \//p;' '>&-||echo No' '{print $3"\t"$1}' 's/\'$'\t''.+//p' 's/1/On/p' '/Prox.+: [^0]/p' '$2>'${p[2]}'{$2=$2-1;print}' ' BEGIN { M1='${p[16]}';M2='${p[18]}';M3='${p[8]}';M4='${p[3]}';} !/^A/{next};/%/ { getline;if($5<M1) o["CPU"]="CPU: user "$2"%, system "$4"%";next;} $2~/^disk/&&$4>M2 { o[$2]=$2": "$3" ops/s, "$4" blocks/s";next;} $2~/^(en[0-9]|bridg)/ { if(o[$2]) { e=$3+$4+$5+$6;if(e) o[$2]=o[$2]"; errors "e"/s";next;};if($4>M3||$6>M4) o[$2]=$2": in "int($4/1024)", out "int($6/1024)" (KiB/s)";} END { for(i in o) print o[i];} ' ' /r\[0\] /&&$NF!~/^1(0|72\.(1[6-9]|2[0-9]|3[0-1])|92\.168)\./ { print $NF;exit;} ' ' !/^T/ { printf "(static)";exit;} ' '/apsd|BKAg|OpenD/!s/:.+//p' ' (/k:/&&$3!~/(255\.){3}0/)||(/v6:/&&$2!~/A/) ' ' BEGIN{FS=": "} /^ {10}O/ {exit} /^ {0,12}[^ ]/ {next} $1~"Ne"&&$2!~/^In/{print} $1~"Si" { split($2,a," ");if(a[1]-a[4]<'${p[5]}') print;};$1~"T"&&$2<'${p[20]}'{print};$1~"Se"&&$2!~"2"{print};' ' BEGIN { FS="\f";} { n=split($3,a,".");sub(/_2[01].+/,"",$3);print $2" "$3" "a[n]$1;} ' ' BEGIN { split("'"${p[1]}"'",m);FS="\f";} $2<=m[$1]{next} $1<11 { o[$1]=o[$1]"\n    "$3" (UID "$4"): "$2;} $1==11&&$5!~"^/dev" { o[$1]=o[$1]"\n    "$3" (UID "$4") => "$5" (status "$6"): "$2;} $1==12&&$5 { p="ps -c -ocomm -p"$5"|sed 1d";p|getline n;close(p);if(n) $5=n;o[$1]=o[$1]"\n    "$5" => "$3" (UID "$4"): "$2;} $1~/1[34]/ { o[$1]=o[$1]"\n    "$3" (UID "$4", error "$5"): "$2;} END { n=split("'"${k[27]}"'",u,":");for(i=n+1;i<n+4;i++)u[i]=u[n];split("'"${k[28]}"'",l,":");for(i=1;i<15;i++) if(o[i])print "\n"l[i]" ("u[i]")\n"o[i];} ' ' /^ {8}[^ ]/{print} ' ' BEGIN { L='${p[17]}';} !/^[[:space:]]*(#.*)?$/ { l++;if(l<=L) f=f"\n    "$0;} END { F=FILENAME;if(!F) exit;if(!f) f="\n    [N/A]";"cksum "F|getline C;split(C, A);C=A[1];"stat -f%Sm "F|getline D;"file -b "F|getline T;if(T~/^Apple b/) { f="";l=0;while("'"${k[30]}"' 
"F|getline g) { l++;if(l<=L) f=f"\n    "g;};};if(T!~/^(AS.+ (En.+ )?text(, with v.+)?$|(Bo|PO).+ sh.+ text ex|XM)/) F=F"\n    '"${k[22]}"'"T;printf("'"${f[8]}"'",F,D,C,f);if(l>L) printf("'"${f[7]}"'",l-L);} ' ' s/^ ?n...://p;s/^ ?p...:/-'$'\t''/p;' 's/0/Off/p' 's/^.{52}(.+) <.+/\1/p' ' /id: N|te: Y/{i++} END{print i} ' ' /kext:/ { split($0,a,":");p=a[1];k[S]='${k[25]}';k[U]='${k[26]}';v[S]="Safe";v[U]="true";for(i in k) { s=system("'"${k[30]}"'\\ :"k[i]" \""p"\"/*/I*|grep -qw "v[i]);if(!s) a[1]=a[1]" "i;};if(!a[2]) a[2]="'"${k[23]}"'";printf("'"${f[4]}"'",a[1],a[2]);next;} !/^ *$/ { p="'"${k[31]}"'\\ :'"${k[33]}"' \""$0"\"/*/'${k[32]}'";p|getline b;close(p);if(b~/, .+:/||b=="") b="'"${k[23]}"'";printf("'"${f[4]}"'",$0,b);} ' '/ en/!s/\.//p' ' NR>=13 { gsub(/[^0-9]/,"",$1);print;} ' ' $10~/\(L/&&$9!~"localhost" { sub(/.+:/,"",$9);print $1": "$9|"sort|uniq";} ' '/^ +r/s/.+"(.+)".+/\1/p' 's/(.+\.wdgt)\/(Contents\/)?'${k[32]}'$/\1/p' 's/^.+\/(.+)\.wdgt$/\1/p' ' /l: /{ /DVD/d;s/.+: //;b0'$'\n'' };/s: /{ / [VY]/d;s/^ */- /;H;};$b0'$'\n'' d;:0'$'\n'' x;/APPLE [^:]+$/d;p;' '/^find: /!p;' ' /^p/{ s/.//g;x;s/\nu/'$'\f''/;s/(\n)c/\1'$'\f''/;s/\n\n//;p;};H;' ' BEGIN{FS="= "} /Path/{print $2} ' ' /^ *$/d;s/^ */    /;p;' ' s/^.+ |\(.+\)$//g;p;' '1;END{if(NR<'${p[15]}')printf("^/(S|usr/(X|li))")}' ' /2/{print "WARN"};/4/{print "CRITICAL"};' ' /EVHF|MACR|^s/d;s/^.+: //p;' ' $3~/^[1-9][0-9]{0,2}(\.[1-9][0-9]{0,2}){2}$/ { i++;n=n"\n"$1"\t"$3;} END{ if(i>1)print n} ' s/{'\.|jnl: ','P.+:'}'//;s/ +([0-9]+)(.+)/\2'$'\t\t''\1/p' ' /^ +iP.+:$/{ s/://;b0'$'\n'' };/es: ./{ /iOS/d;s/^.+://;b0'$'\n'' };/^ +C.+ted: +[NY]/H;/:$/b0'$'\n'' d;:0'$'\n'' x;/: +N/d;s/\n.+//p;' ' 1d;/:$/b0'$'\n'' $b0'$'\n'' /(D|^ *Loc.+): /{ s/^.+: //;H;};/(B2|[my]): /H;d;:0'$'\n'' x;/[my]: [AM]|m: I.+p$|^\/Vo/d;s/(^|\n) [ -~]+//g;s/(.+)\n(.+)/\2:\1/;s/\n//g;/[ -~]/p;' 's/$/'$'\f''(0|-(4[34])?)$/p' '|sort'{'|uniq'{,\ -c},\ -nr} ' s/^/'{5,6,7,8,9,10}$'\f''/;s/ *'$'\f'' */'$'\f''/g;p;' 's/:.+$//p' '|wc -l' /{\\.{kext,xpc,'(appex|pluginkit)'}'\/(Contents\/)?'Info,'Launch[AD].+'}'\.plist$/p' 's/([-+.?])/\\\1/g;p' 's/, /\'$'\n/g;p' ' BEGIN{FS="\f"} { printf("'"${f[6]}"'",$1/2^30,$2);} ' ' /= D/&&$1!~/'{${k[24]},${k[29]}}'/ { getline d;if(d~"t") print $1;} ' ' BEGIN{FS="\t"} NR>1&&$NF!~/0x|\.([0-9]{3,}|[-0-9A-F]{36})$/ { print $NF"\f"a[split($(NF-1),a," ")];} ' '|tail -n'{${p[6]},${p[10]}} ' s/.+bus /Bus: /;s/,.+[(]/ /;s/,.+//p;' ' { $NF=$NF" Errors: "$1;$1="";} 1 ' ' 1s/^/\'$'\n''/;/^ +(([MNPRSV]|De|Li|Tu).+|Bus): .|d: Y/d;s/:$//;$d;p;' ' BEGIN { RS=",";FS=":";} $1~"name" { gsub("\"","",$2);print $2;} ' '|grep -q e:/' '/[^ .]/p' '{ print $1}' ' /^ +N.+: [1-9]/ { i++;} END { if(i) print "system: "i;} ' ' NF { print "'{admin,user}' "$NF;exit;} ' ' /se.+ =/,/[\}]/!d;/[=\}]/!p ' ' 3,4d;/^ +D|Of|Fu| [0B]/d;s/^  |:$//g;$!H;${ x;/:/p;} ' ' BEGIN { FS=": ";} NR==1 { sub(":","");h="\n"$1"\n";} /:$/ { l=$1;next;} $1~"S"&&$2!~3 { getline;next;} /^ {6}I/ { i++;L[i]=l" "$2;if(i=='${p[24]}') nextfile;} END { if(i) print h;for(j=0;j<i;j++) print L[i-j];} ' ' /./H;${ x;s/\n//;s/\n/, /g;/,/p;} ' ' {if(int($6)>'${p[25]}')printf("swap used %.1f\n",$6/1024)} ' ' BEGIN{FS="\""} $3~/ t/&&$2!~/'{${k[24]},${k[29]}}'/{print $2} ' ' int($1)>13 ' p ' BEGIN{FS="DB="} { sub(/\.db.*/,".db",$2);print $2;} ' {,1d\;}'/r%/,/^$/p' ' NR==1{next} NR>11||!$0{exit} {print $NF"\f"substr($0,1,32)"\f"$(NF-7)} ' '/e:/{print $2}' ' /^[(]/{ s/....//;s/$/:/;N;/: [)]$/d;s/\n.+ ([^ ]+).$/\1/;H;};${ g;p;} ' );c1=(system_profiler pmset\ -g nvram fdesetup find syslog df vm_stat sar ps crontab 
kextfind top pkgutil "${k[30]}\\" echo cksum kextstat launchctl smcDiagnose sysctl\ -n defaults\ read stat lsbom 'mdfind -onlyin' env pluginkit scutil 'dtrace -q -x aggsortrev -n' security sed\ -En awk 'dscl . -read' networksetup mdutil lsof test osascript\ -e netstat mdls route cat uname powermetrics );c2=(${k[21]}loginwindow\ LoginHook ' /L*/P*/loginw*' "'tell app \"System Events\" to get properties of login items'" 'L*/Ca*/'${k[21]}'Saf*/E* -d 2 -name '${k[32]} '~ $TMPDIR.. \( -flags +sappnd,schg,uappnd,uchg -o ! -user $UID -o ! -perm -600 \)' -i '-nl -print' '-F \$Sender -k Level Nle 3 -k Facility Req "'${k[21]}'('{'bird|.*i?clou','lsu|sha'}')"' "-f'%N: %l' Desktop {/,}L*/Keyc*" therm sysload boot-args status " -F '\$Time \$Message' -k Sender kernel -k Message CRne '0xdc008012|(allow|call)ing|Goog|(mplet|nabl)ed|ry HD|safe b|xpm' -k Message CReq 'bad |Can.t l|corru|dead|fail|GPU |hfs: Ru|inval|Limiti|v_c|NVDA[(]|pagin|Purg(ed|in)|error|Refus|TCON|tim(ed? ?|ing )o|trig|WARN' " '-du -n DEV -n EDEV 1 10' 'acrx -o%cpu,comm,ruid' "' syscall::recvfrom:return {@a[execname,uid]=sum(arg0)} syscall::sendto:return {@b[execname,uid]=sum(arg0)} syscall::open*:entry {@c[execname,uid,copyinstr(arg0),errno]=count()} syscall::execve:return, syscall::posix_spawn:return {@d[execname,uid,ppid]=count()} syscall::fork:return, syscall::vfork:return, syscall::posix_spawn:return /arg0<0/ {@e[execname,uid,arg0]=count()} syscall:::return /errno!=0/ {@f[execname,uid,errno]=count()} io:::wait-start {self->t=timestamp} io:::wait-done /self->t/ { this->T=timestamp - self->t;@g[execname,uid]=sum(this->T);self->t=0;} io:::start {@h[execname,uid]=sum(args[0]->b_bcount)} tick-10sec { normalize(@a,2560000);normalize(@b,2560000);normalize(@c,10);normalize(@d,10);normalize(@e,10);normalize(@f,10);normalize(@g,10000000);normalize(@h,10240);printa(\"1\f%@d\f%s\f%d\n\",@a);printa(\"2\f%@d\f%s\f%d\n\",@b);printa(\"11\f%@d\f%s\f%d\f%s\f%d\n\",@c);printa(\"12\f%@d\f%s\f%d\f%d\n\",@d);printa(\"13\f%@d\f%s\f%d\f%d\n\",@e);printa(\"14\f%@d\f%s\f%d\f%d\n\",@f);printa(\"3\f%@d\f%s\f%d\n\",@g);printa(\"4\f%@d\f%s\f%d\n\",@h);exit(0);} '" '-f -pfc /var/db/r*/'${k[21]}'*.{BS,Bas,Es,J,OSXU,Rem,up}*.bom' '{/,}L*/Lo*/Diag* -type f -regex .\*[cght] ! -name .?\* ! -name \*ag \( -exec grep -lq "^Thread c" {} \; -exec printf \* \; -o -true \) -execdir stat -f'$'\f''%Sc'$'\f''%N -t%F {} \;' '/S*/*/Ca*/*xpc*' '-L /{S*/,}L*/StartupItems -type f -exec file {} +' /\ kMDItemContentTypeTree=${k[21]}{bundle,mach-o-dylib} :Label "/p*/e*/{auto*,{cron,fs}tab,hosts,{[lp],sy}*.conf,mach_i*/*,pam.d/*,ssh{,d}_config,*.local} {/p*,/usr/local}/e*/periodic/*/* /L*/P*{,/*}/com.a*.{Bo,sec*.ap}*t {/S*/,/,}L*/Lau*/*t .launchd.conf" list '-F "" -k Sender hidd -k Level Nle 3' /Library/Preferences/${k[21]}alf\ globalstate --proxy '-n get default' vm.swapusage --dns -get{dnsservers,info} dump-trust-settings\ {-s,-d,} -n1 '-R -ce -l1 -n5 -o'{'prt -stats prt','mem -stats mem'}',command,uid' -kl -l -s\ / '--regexp --files '${k[21]}'pkg.*' '+c0 -i4TCP:0-1023' ${k[21]}dashboard\ layer-gadgets '-d /L*/Mana*/$USER' '-app Safari WebKitDNSPrefetchingEnabled' '-Fcu +c0 -l' -m 'L*/{Con*/*/Data/L*/,}Pref* -type f -size 0c -name *.plist.???????' 
kern.memorystatus_vm_pressure_level '3>&1 >&- 2>&3' '-F \$Message -k Sender kernel -k Message CReq "'{'n Cause: -','(a und|I/O |jnl_io.+)err','USBF:.+bus'}'"' -name\ kMDItem${k[33]} -T\ hfs '-n get default' -listnetworkserviceorder :${k[33]} :CFBundleDisplayName $EUID {'$TMPDIR../C ','/{S*/,}'}'L*/{,Co*/*/*/L*/}{Cache,Log}s -type f -size +'${p[11]}'G -exec stat -f%z'$'\f''%N {} \;' \ /v*/d*/*/*l*d{,.*.$UID}/* '-app Safari UserStyleSheetEnabled' 'L*/A*/Fi*/P*/*/a*.json' users/$USER\ HomeDirectory '{/,}L*/{Con,Pref}* -type f ! -size 0 -name *.plist -exec plutil -s {} \;' ' -F "\$Time \$(Sender): \$Message" -k Sender Rne "launchd|nsurls" -k Level Nle 3 -k Facility R'{'ne "user|','eq "'}'console" -k Message CRne "[{}<>]|asser|commit - no t|deprec|done |fmfd|Goog|ksho|ndum|obso|realp|rned f|/root|sandbox ex|sudo:" ' getenv '/ "kMDItemDateAdded>=\$time.now(-'${p[23]}')&&kMDItem'${k[33]}'=*"' -m\ / '' ' -F "\$Time \$(RefProc): \$Message" -k Sender Req launchd -k Level Nle 3 -k Message Rne "asse|bug|File ex|hij|Ig|Jet|key is|lid t|Plea|ship" ' print{,-disabled}\ {system,{gui,user}/$UID} '-n1 --show-initial-usage --show-process-energy' -r ' -F "\$Message" -k Sender nsurlstoraged -k Time ge -1h -k Level Nle 4 -k Message Req "^(ER|IN)" ' );N1=${#c2[@]};for j in {0..20};do c2[N1+j]=SP${k[j]}DataType;done;l=({Restricted\ ,Lock,Pro}files POST Battery {Safari,App,{Bad,Loaded}\ kernel,Firefox}\ extensions System\ load boot\ args FileVault\ {2,1} {Kernel,System,Console,launchd}\ log SMC Login\ hook 'I/O per process' 'High file counts' UID {Daemons,{Login,All,User}\ agents}\ {load,disabl}ed {Admin,Root}\ access Font\ issues Firewall Proxies DNS TCP/IP Wi-Fi 'Elapsed time (sec)' {Root,User}\ crontab {Global,User}' login items' Spotlight Memory\ pressure Listeners Widgets Parental\ Controls Prefetching Nets Volumes {Continuity,I/O,iCloud,HID,HCI}\ errors {User,System}\ caches/logs XPC\ cache Startup\ items Shutdown\ codes Heat Diagnostic\ reports Bad\ {plist,cache}s 'VM (GiB)' Bundles{,' (new)'} Trust\ settings Activity Free\ space Stylesheet Library\ paths{,' ('{shell,launchd}\)} );N3=${#l[@]};for i in {0..8};do l[N3+i]=${k[5+i]};done;F() { local x="${s[$1]}";[[ "$x" =~ ^([\&\|\<\>]|$) ]]&&{ printf "$x";return;};:|${c1[30]} "$x" 2>&-;printf "%s \'%s\'" "|${c1[30+$?]}" "$x";};A0() { Q=6;v[2]=1;id -G|grep -qw 80;v[1]=$?;((v[1]))||{ Q=7;sudo -v;v[2]=$?;((v[2]))||Q=8;};v[3]=`date +%s`;date '+Start time: %T %D%n';printf '\n[Process started]\n\n'>&4;printf 'Revision: %s\n\n' ${p[0]};};A1() { local c="${c1[$1]} ${c2[$2]}";shift 2;c="$c ` while [[ "$1" ]];do F $1;shift;done`";((P2))&&{ c="sudo $c";P2=;};v=`eval "$c"`;[[ "$v" ]];};A2() { local c="${c1[$1]}";[[ "$c" =~ ^(awk|sed ) ]]&&c="$c '${s[$2]}'"||c="$c ${c2[$2]}";shift 2;local d=` while [[ "$1" ]];do F $1;shift;done`;((P2))&&{ c="sudo $c";P2=;};local a;v=` while read a;do eval "$c '$a' $d";done<<<"$v";`;[[ "$v" ]];};A3(){ v=$((`date +%s`-v[3]));};export -f A1 A2 F;B1() { v=No;! 
((v[1]))&&{ v=;P1=1;};};eval "`type -a B1|sed '1d;s/1/2/'`";B3(){ v[$1]="$v";};B4() { local i=$1;local j=$2;shift 2;local c="cat` while [[ "$1" ]];do F $1;shift;done`";v[j]=`eval "{ $c;}"<<<"${v[i]}"`;};B5(){ v="${v[$1]}"$'\n'"${v[$2]}";};B6() { v=` paste -d$'\e' <(printf "${v[$1]}") <(printf "${v[$2]}")|awk -F$'\e' ' {printf("'"${f[$3]}"'",$1,$2)} ' `;};B7(){ v=`egrep -v "${v[$1]}"<<<"$v"|sort`;};eval "`type -a B7|sed '1d;s/7/8/;s/-v //'`";C0() { [[ "$v" ]]&&sed -E "$s"<<<"$v";};C1() { [[ "$v" ]]&&printf "${f[$1]}" "${l[$2]}" "$v"|sed -E "$s";};C2() { v=`echo $v`;[[ "$v" != 0 ]]&&C1 0 $1;};C3() { B4 0 0 63&&C1 1 $1;};C4() { echo $'\t'"Part $((++P)) of $Q done at $((`date +%s`-v[3])) sec">&4;};C5() { sudo -k;pbcopy<<<"$o";printf '\n\tThe test results are on the Clipboard.\n\n\tPlease close this window.\n';exit 2>&-;};for i in 1 2;do eval D$((i-1))'() { A'$i' $@;C0;};';for j in 2 3;do eval D$((i+2*j-3))'() { local x=$1;shift;A'$i' $@;C'$j' $x;};';done;done;trap C5 2;o=$({ A0;D0 0 N1+1 2;D0 0 $N1 1;B1;C2 31;B1&&! B2&&C2 32;D2 22 15 63;D0 0 N1+2 3;D0 0 N1+15 17;D4 3 0 N1+3 4;D4 4 0 N1+4 5;D4 N3+4 0 N1+9 59;D0 0 N1+16 99;for i in 0 1 2;do D4 N3+i 0 N1+5+i 6;done;D4 N3+3 0 N1+8 71;D4 62 1 10 7;D4 10 1 11 8;B2&&D4 18 19 53 67;D2 11 2 12 9;D2 12 3 13 10;D2 13 32 70 101 25;D2 71 6 76 13;D2 45 20 52 66;A1 7 77 14;B3 28;A1 20 31 111;B6 0 28 5;B4 0 0 110;C2 66;D4 70 8 15 38;D0 9 16 16 77 45;C4;B2&&D0 35 49 61 75 76 78 45;B2&&{ D0 28 17 45;C4;};B2&&{ A1 43 85 117;B3 29;B4 0 0 119 76 81 45;C0;B4 29 0 118 119 76 82 45;C0;    };D0 12 40 54 16 79 45;D0 12 39 54 16 80 45;D4 74 25 77 15&&{ B4 0 8 103;B4 8 0;A2 18 74;B6 8 0 3;C3 75;};B2&&D4 19 21 0;B2&&D4 40 10 42;D2 2 0 N1+19 46 84;D2 44 34 43 53;D2 59 22 20 32;D2 33 0 N1+14 51;for i in {0..2};do A1 29 35+i 104+i;B3 25+i;done;B6 25 27 5;B6 0 26 5;B4 0 0 110;C2 69;D2 34 21 28 35;D4 35 27 29 36;A1 40 59 120;B3 18;A1 33 60 121;B8 18;B4 0 19 83;A1 27 32 39&&{ B3 20;B4 19 0;A2 33 33 40;B3 21;B6 20 21 3;};C2 36;D4 50 38 5 68;B4 19 0;D5 37 33 34 42;B2&&D4 46 35 45 55;D4 38 0 N1+20 43;B2&&D4 58 4 65 76 91;D4 63 4 19 44 75 95 12;B1&&{ D4 53 5 55 75 69&&D4 51 6 58 31;D4 56 5 56 97 75 98&&D0 0 N1+7 99;D2 55 5 27 84;D4 61 5 54 75 70;D4 14 5 14 96;D4 15 5 72 96;D4 17 5 78 96;C4;};D4 16 5 73 96;A1 13 44 74 18;C4;B3 4;B4 4 0 85;A2 14 61 89;B4 0 5 19 102;A1 17 41 50;B7 5;C3 8;B4 4 0 88;A2 14 24 89;C4;B4 0 6 19 102;B4 4 0 86;A2 14 61 89;B4 0 7 19 102;B5 6 7;B4 0 11 73 102;A1 42 86 114;j=$?;for i in 0 1 2;do ((! j))||((i))||B2&&A1 18 $((79+i-(i+53)*j)) 107+8*j 94 74||continue;((PIPESTATUS))&&break;B7 11;B4 0 0 11;C3 $((23+i*(1+i+2*j)));D4 $((24+i*(1+i+2*j))) 18-4*j 82+i-16*j $((112+((3-i)*i-40*j)/2));done;D4 60 4 21 24;D4 42 14 1 62;D4 43 37 2 90 48;D4 41 10 42;D2 48 36 47 25;A1 4 3 60&&{ B3 9;A2 14 61;B4 0 10 21;B4 9 0;A2 14 62;B4 0 0 21;B6 0 10 4;C3 5;};D4 9 41 69 100;D2 72 21 68 35;D2 49 21 48 49;B4 4 22 57 102;A1 21 46 56 74;B7 22;B4 0 0 58;C3 47;D4 54 5 7 75 76 69;D4 52 5 8 75 76 69;D4 57 4 64 76 91;D2 0 4 4 84;D2 1 4 51 84;D4 21 22 9 37;D0 0 N1+17 108;A1 23 18 28 89;B4 0 16 22 102;A1 16 25 33;B7 16;B4 0 0 34;D1 31 47;D4 64 4 71 41;D4 65 5 87 116 74;C4;B4 4 12 26 89 23 102;for i in {0..3};do A1 0 N1+10+i 72 74;B7 12;B4 0 0 52;C3 N3+5+i;((i))||C4;done;A1 24 22 29;B7 12;B3 14;A2 39 57 30;B3 15;B6 14 15 4;C3 67;A1 24 75 74;B3 23;A2 39 57 30;B3 24;B6 23 24 4;C3 68;B4 4 13 27 89 65;A1 24 23;B7 13;C3 73;B4 4 0 87;A2 14 61 89 20;B4 0 17;A1 26 50 64;B7 17;C3 6;D0 0 N1+18 109;D4 7 11 6;A3;C2 39;C4;} 4>&2 2>/dev/null;);C5
    Copy the selected text to the Clipboard by pressing the key combination command-C.
    8. Launch the built-in Terminal application in any of the following ways:
    ☞ Enter the first few letters of its name into a Spotlight search. Select it in the results (it should be at the top.)
    ☞ In the Finder, select Go ▹ Utilities from the menu bar, or press the key combination shift-command-U. The application is in the folder that opens.
    ☞ Open LaunchPad and start typing the name.
    Click anywhere in the Terminal window and paste by pressing command-V. The text you pasted should vanish immediately. If it doesn't, press the return key.
    9. If you see an error message in the Terminal window such as "Syntax error" or "Event not found," enter
    exec bash
    and press return. Then paste the script again.
    10. If you're logged in as an administrator, you'll be prompted for your login password. Nothing will be displayed when you type it. You will not see the usual dots in place of typed characters. Make sure caps lock is off. Type carefully and then press return. You may get a one-time warning to be careful. If you make three failed attempts to enter the password, the test will run anyway, but it will produce less information. If you don't know the password, or if you prefer not to enter it, just press return three times at the password prompt. Again, the script will still run.
    If you're not logged in as an administrator, you won't be prompted for a password. The test will still run. It just won't do anything that requires administrator privileges.
    11. The test may take a few minutes to run, depending on how many files you have and the speed of the computer. A computer that's abnormally slow may take longer to run the test. While it's running, a series of lines will appear in the Terminal window like this:
    [Process started]
            Part 1 of 8 done at … sec
            Part 8 of 8 done at … sec
            The test results are on the Clipboard.
            Please close this window.
    [Process completed]
    The intervals between parts won't be exactly equal, but they give a rough indication of progress. The total number of parts may be different from what's shown here.
    Wait for the final message "Process completed" to appear. If you don't see it within about ten minutes, the test probably won't complete in a reasonable time. In that case, press the key combination control-C or command-period to stop it and go to the next step. You'll have incomplete results, but still something.
    12. When the test is complete, or if you stopped it because it was taking too long, quit Terminal. The results will have been copied to the Clipboard automatically. They are not shown in the Terminal window. Please don't copy anything from there. All you have to do is start a reply to this comment and then paste by pressing command-V again.
    At the top of the results, there will be a line that begins with the words "Start time." If you don't see that, but instead see a mass of gibberish, you didn't wait for the "Process completed" message to appear in the Terminal window. Please wait for it and try again.
    If any private information, such as your name or email address, appears in the results, anonymize it before posting. Usually that won't be necessary.
    13. When you post the results, you might see an error message on the web page: "You have included content in your post that is not permitted," or "You are not authorized to post." That's a bug in the forum software. Please post the test results on Pastebin, then post a link here to the page you created.
    14. This is a public forum, and others may give you advice based on the results of the test. They speak for themselves, not for me. The test itself is harmless, but whatever else you're told to do may not be. If you choose to run the test on your own, I don't recommend posting the results on this website unless I've asked you to.
    Copyright © 2014, 2015 by Linc Davis. As the sole author of this work, I reserve all rights to it except as provided in the Use Agreement for the Apple Support Communities website ("ASC"). Readers of ASC may copy it for their own personal use. Neither the whole nor any part may be redistributed.
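    For context, the "results are on the Clipboard" behaviour described above comes from piping the collected output through the macOS pbcopy utility instead of printing it to the Terminal window. A minimal sketch of the same idea, with sw_vers and uptime standing in as placeholder commands for whatever data is actually gathered:
    #!/bin/bash
    # Gather some output, then place it on the Clipboard rather than printing it.
    report="$(sw_vers; uptime)"       # placeholders for the real data collection
    printf '%s\n' "$report" | pbcopy  # pbcopy reads stdin onto the general pasteboard
    echo "The results are on the Clipboard. Paste with command-V."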

  • [SOLVED]After a logout/in , can't start conky as user.

    I logged out of my xfce session. Logged back in and conky did not start.
    I now get this error:
    xxx@xxx~]$ conky -c ~/AccuW_2/accuw.Toronto.conky
    [1] 1442
    bash: -c: command not found
    [xxx@xxx ~]$ conky: no process found
    No changes at all were made to the computer. No software was uninstalled/reinstalled.
    Here is my conky config that I've been using for months:
    ## killall conky && conky -c ~/AccuW_2/accuw.Toronto.conky &
    # 1e_Accuweather_USA_Images script by TeoBigusGeekus
    # http://crunchbanglinux.org/forums/post/212605/
    ### Begin Window Settings ##################################################
    # Create own window instead of using desktop (required in nautilus)
    own_window yes
    # Use the Xdbe extension? (eliminates flicker)
    # It is highly recommended to use own window with this one
    # so double buffer won't be so big.
    double_buffer yes
    own_window_type normal #override
    own_window_transparent yes
    #own_window_hints undecorated,below,skip_taskbar,skip_pager
    own_window_hints undecorated,below,sticky,skip_taskbar,skip_pager
    own_window_colour black
    own_window_class Conky
    own_window_title tedbell
    ### ARGB can be used for real transparency
    ### NOTE that a composite manager is required for real transparency.
    ### This option will not work as desired (in most cases) in conjunction with
    ### own_window_type override
    # own_window_argb_visual yes
    ### When ARGB visuals are enabled, use this to modify the alpha value
    ### Use: own_window_type normal
    ### Use: own_window_transparent no
    ### Valid range is 0-255, where 0 is 0% opacity, and 255 is 100% opacity.
    #own_window_argb_value 150
    #minimum_size 310 0 ## width, height
    maximum_width 310 ## width, usually a good idea to equal minimum width
    gap_x 10 ### left & right
    gap_y 10 ### up & down
    alignment top_right
    #################################################### End Window Settings ###
    ### Font Settings ##########################################################
    # Use Xft (anti-aliased font and stuff)
    use_xft yes
    xftfont MonoCondensed:bold:size=10
    # Alpha of Xft font. Must be a value between 0 and 1, inclusive ###
    xftalpha 1
    # Force UTF8? requires XFT ###
    override_utf8_locale yes
    draw_shades yes #no #### <<<--- To see it easier on light screens.
    default_shade_color black
    draw_outline yes #no #### <<<--- Amplifies text if yes
    default_outline_color black
    uppercase no
    ###################################################### End Font Settings ###
    ### Color Settings #########################################################
    default_shade_color gray
    default_outline_color black
    default_color DCDCDC #Gainsboro
    color0 ffe595 #Teo Gold
    color1 778899 #LightSlateGrey
    color2 FF8C00 #Darkorange
    color3 7FFF00 #Chartreuse
    color4 FFA07A #LightSalmon
    color5 000000 #Black
    color6 00BFFF #DeepSkyBlue
    color7 00FFFF #Cyan #48D1CC #MediumTurquoise
    color8 FFFF00 #Yellow
    color9 FF0000 #Red #A52A2A #DarkRed
    ##################################################### End Color Settings ###
    ### Borders Section ########################################################
    draw_borders no
    # Stippled borders?
    stippled_borders 0
    # border margins
    border_inner_margin 5
    border_outer_margin 0
    # border width
    border_width 0
    # graph borders
    draw_graph_borders no
    #################################################### End Borders Section ###
    ### Miscellaneous Section ##################################################
    # Boolean value, if true, Conky will be forked to background when started.
    background no
    # Adds spaces around certain objects to stop them from moving other things
    # around, this only helps if you are using a mono font
    # Options: right, left or none
    use_spacer right
    # Default and Minimum size is 256 - needs more for single commands that
    # "call" a lot of text IE: bash scripts
    text_buffer_size 256
    # Subtract (file system) buffers from used memory?
    no_buffers yes
    # change GiB to G and MiB to M
    short_units yes
    # Like it says, it pads the decimals on % values
    # doesn't seem to work since v1.7.1
    pad_percents 2
    # Maximum size of user text buffer, i.e. layout below TEXT line in config file
    # (default is 16384 bytes)
    # max_user_text 16384
    ############################################## End Miscellaneous Section ###
    ### LUA Settings ###########################################################
    ## Above and after TEXT - requires a composite manager, otherwise it blinks.
    # lua_load ~/Conky/LUA/draw-bg.lua
    #TEXT
    #${lua conky_draw_bg 10 0 0 0 0 0x000000 0.6}
    ## ${lua conky_draw_bg corner_radius x_position y_position width height color alpha}
    ## OR Both above TEXT (No composite manager required - no blinking!)
    lua_load ~/Conky/LUA/draw-bg.lua
    lua_draw_hook_pre draw_bg 5 0 0 0 0 0x000000 0.0
    ####################################################### End LUA Settings ###
    # The all important - How often conky refreshes.
    # If you have a "Cray" try: 0.2 - smokin' - but watch the CPU usage go UP!
    update_interval 1
    ## ${image ~/Conky/images/red_1.png -p 0,15 -s 67x40}
    ## ${image ~/Conky/images/red_1.png -p 165,52 -s 125x75}
    ## $HOME/AccuW_2/Toronto/acc_int_images
    default_bar_size 100 12
    # stuff after 'TEXT' will be formatted on screen
    TEXT
    ${color0}${time %A %d %b %Y} ${color1}--${color0} Scarborough, ON ${color1}--${color0} ${time %T}${color}\
    ${texeci 500 bash $HOME/AccuW_2/Toronto/acc_int_images}\
    ${image $HOME/AccuW_2/Toronto/cc.png -p 220,25 -s 67x40}
    ${execpi 600 sed -n '1p' $HOME/AccuW_2/Toronto/messages}
    ${font Instruction:size=20} Now ${execpi 600 sed -n '29p' $HOME/AccuW_2/Toronto/curr_cond}${goto 120} FL ${execpi 600 sed -n '30p' $HOME/AccuW_2/Toronto/curr_cond}${font}
    ${color0}Wind: ${color}${execpi 600 sed -n '31p' $HOME/AccuW_2/Toronto/curr_cond} ${execpi 600 sed -n '32p' $HOME/AccuW_2/Toronto/curr_cond}\
    ${color0}Today Tonight${color}
    ${color0}Hum: ${color}${execpi 600 sed -n '33p' $HOME/AccuW_2/Toronto/curr_cond}\
    ${font Toronto Subway:size=14}${goto 170}${color6}${execpi 600 sed -n '26p' $HOME/AccuW_2/Toronto/first_days}${color}\
    ${goto 210}${color9}${execpi 600 sed -n '27p' $HOME/AccuW_2/Toronto/first_days}${color}\
    ${goto 245}${color6}${execpi 600 sed -n '32p' $HOME/AccuW_2/Toronto/first_days}${color}\
    ${goto 290}${color9}${execpi 600 sed -n '31p' $HOME/AccuW_2/Toronto/first_days}${color} ${font}
    ${color0}Bar: ${color}${execpi 600 sed -n '34p' $HOME/AccuW_2/Toronto/curr_cond}\
    ${image $HOME/AccuW_2/Toronto/tod.png -p 160,100 -s 67x40}\
    ${image $HOME/AccuW_2/Toronto/ton.png -p 235,100 -s 67x40}
    ${color0}Cloud: ${color}${execpi 600 sed -n '35p' $HOME/AccuW_2/Toronto/curr_cond}
    ${color0}UVI: ${color}${execpi 600 sed -n '36p' $HOME/AccuW_2/Toronto/curr_cond}
    ${color0}Vis: ${color}${execpi 600 sed -n '38p' $HOME/AccuW_2/Toronto/curr_cond}
    ${color0}DP: ${color}${execpi 600 sed -n '37p' $HOME/AccuW_2/Toronto/curr_cond}°
    ${color0}Sunrise: ${color}${execpi 600 sed -n '39p' $HOME/AccuW_2/Toronto/curr_cond}\
    ${goto 160}${color0} Sunset: ${color}${execpi 600 sed -n '40p' $HOME/AccuW_2/Toronto/curr_cond}
    ${color0}Moonrise: ${color}${execpi 600 sed -n '41p' $HOME/AccuW_2/Toronto/curr_cond}\
    ${goto 160}${color0}Moonset: ${color}${execpi 600 sed -n '42p' $HOME/AccuW_2/Toronto/curr_cond}
    ${color0}9 DAY FORECAST ${color1}${hr}${color}
    ${color1}${execpi 600 sed -n '5p' $HOME/AccuW_2/Toronto/first_days}\
    ${execpi 600 sed -n '10p' $HOME/AccuW_2/Toronto/first_days}\
    ${execpi 600 sed -n '15p' $HOME/AccuW_2/Toronto/first_days}${color}\
    ${image $HOME/AccuW_2/Toronto/6.png -p 5,232 -s 67x40}\
    ${image $HOME/AccuW_2/Toronto/11.png -p 120,232 -s 67x40}\
    ${image $HOME/AccuW_2/Toronto/16.png -p 220,232 -s 67x40}
    ${font Toronto Subway:size=11}
    ${color6}${execpi 600 sed -n '8p' $HOME/AccuW_2/Toronto/first_days}°\
    ${color6}${goto 180}${execpi 600 sed -n '13p' $HOME/AccuW_2/Toronto/first_days}°\
    ${color6}${goto 283}${execpi 600 sed -n '18p' $HOME/AccuW_2/Toronto/first_days}°
    ${color9}${execpi 600 sed -n '9p' $HOME/AccuW_2/Toronto/first_days}°\
    ${color9}${goto 180}${execpi 600 sed -n '14p' $HOME/AccuW_2/Toronto/first_days}°\
    ${color9}${goto 283}${execpi 600 sed -n '19p' $HOME/AccuW_2/Toronto/first_days}° ${font}${color}
    ${color1} ${execpi 600 sed -n '20p' $HOME/AccuW_2/Toronto/first_days}\
    ${execpi 600 sed -n '1p' $HOME/AccuW_2/Toronto/last_days}\
    ${execpi 600 sed -n '6p' $HOME/AccuW_2/Toronto/last_days}${color}\
    ${image $HOME/AccuW_2/Toronto/21.png -p 5,310 -s 67x40}\
    ${image $HOME/AccuW_2/Toronto/last_2.png -p 120,310 -s 67x40}\
    ${image $HOME/AccuW_2/Toronto/last_7.png -p 223,310 -s 67x40}
    ${font Toronto Subway:size=11}
    ${color6}${execpi 600 sed -n '23p' $HOME/AccuW_2/Toronto/first_days}°\
    ${color6}${goto 180}${execpi 600 sed -n '4p' $HOME/AccuW_2/Toronto/last_days}°\
    ${color6}${goto 283}${execpi 600 sed -n '9p' $HOME/AccuW_2/Toronto/last_days}°
    ${color9}${execpi 600 sed -n '24p' $HOME/AccuW_2/Toronto/first_days}°\
    ${color9}${goto 180}${execpi 600 sed -n '5p' $HOME/AccuW_2/Toronto/last_days}°\
    ${color9}${goto 283}${execpi 600 sed -n '10p' $HOME/AccuW_2/Toronto/last_days}° ${font}
    ${color1} ${execpi 600 sed -n '11p' $HOME/AccuW_2/Toronto/last_days}\
    ${execpi 600 sed -n '16p' $HOME/AccuW_2/Toronto/last_days}\
    ${execpi 600 sed -n '21p' $HOME/AccuW_2/Toronto/last_days}${color}\
    ${image $HOME/AccuW_2/Toronto/last_12.png -p 5,385 -s 67x40}\
    ${image $HOME/AccuW_2/Toronto/last_17.png -p 120,385 -s 67x40}\
    ${image $HOME/AccuW_2/Toronto/last_22.png -p 225,385 -s 67x40}
    ${font Toronto Subway:size=11}
    ${color6}${execpi 600 sed -n '14p' $HOME/AccuW_2/Toronto/last_days}°\
    ${color6}${goto 180}${execpi 600 sed -n '19p' $HOME/AccuW_2/Toronto/last_days}°\
    ${color6}${goto 283}${execpi 600 sed -n '24p' $HOME/AccuW_2/Toronto/last_days}°
    ${color9}${execpi 600 sed -n '15p' $HOME/AccuW_2/Toronto/last_days}°\
    ${color9}${goto 180}${execpi 600 sed -n '20p' $HOME/AccuW_2/Toronto/last_days}°\
    ${color9}${goto 283}${execpi 600 sed -n '25p' $HOME/AccuW_2/Toronto/last_days}° ${font}
    ${color0}COMPUTER ${color1}${hr}${color}
    ${font Instruction:size=7.5}
    ${color1}CPU0:${goto 70}${color7}${cpubar cpu0}${goto 70}${color1}${cpubar cpu4}${goto 80}${color5}${cpu cpu0}%${color}${goto 190}${color7}${execi 8 sensors | grep 'Core 0:' | cut -c16-17}°
    ${color1}CPU1:${goto 70}${color6}${cpubar cpu1}${goto 70}${color1}${cpubar cpu4}${goto 80}${color5}${cpu cpu1}%${color}${goto 190}${color6}${execi 8 sensors | grep 'Core 1:' | cut -c16-17}°
    ${color1}RAM:${goto 70}${color4}${membar}${goto 70}${color1}${cpubar cpu4}${goto 80}${color5}${memperc}%${color4}${goto 190}$mem/$memmax${font}
    ${color0}DISK ${color1}${hr}${color}
    ${color1}sda: ${color9}Write: ${diskio_write /dev/sda} ${color3}Read: ${diskio_read /dev/sda} ${color8}Temp: ${color8}${execpi 15 hddtemp -n /dev/sda} °
    ${font Instruction:size=7.5}
    ${color1}/:${goto 70}${color2}${fs_bar /}${goto 70}${color1}${cpubar cpu4}${goto 80}${color5}${fs_used_perc /} %${color2}\
    ${goto 190}${fs_used /}/${fs_size /}
    ${color1}/home:${goto 70}${color8}${fs_bar /home}${goto 70}${color1}${cpubar cpu4}${goto 80}${color5}${fs_used_perc /home} %${color8}\
    ${goto 190}${fs_used /home}/${fs_size /home}
    ${color1}../sda3:${goto 70}${color0}${fs_bar /media/sda3}${goto 70}${color1}${cpubar cpu4}${goto 80}${color5}${fs_used_perc /media/sda3} %${color0}\
    ${goto 190}${fs_used /media/sda3}/${fs_size /media/sda3}${font}
    ${color0}NETWORK ${color1}${hr}${color}
    ${color1}wlan0: ${color9}Down: ${downspeedf wlan0}${color3}Up: ${upspeedf wlan0}
    ${color8}RX${goto 110}${color8}TX${goto 200}${color8}Total
    ${color0}Today:${color}
    ${color} ${execi 300 vnstat -i wlan0 | grep "today" | awk '{print $2 $3}'}\
    ${color}${goto 110}${execi 300 vnstat -i wlan0 | grep "today" | awk '{print $5 $6}'}\
    ${color}${goto 200}${execi 300 vnstat -i wlan0 | grep "today" | awk '{print $8 $9}'}${color}
    ${color0}Yesterday:${color}
    ${color} ${execi 300 vnstat -i wlan0 | grep "yesterday" | awk '{print $2 $3}'}\
    ${color}${goto 110}${execi 300 vnstat -i wlan0 | grep "yesterday" | awk '{print $5 $6}'}\
    ${color}${goto 200}${execi 300 vnstat -i wlan0 | grep "yesterday" | awk '{print $8 $9}'}${color}
    ${color0}Last Week:${color}
    ${color} ${execi 300 vnstat -w -i wlan0 | grep "current week" | awk '{print $3 $4}'}\
    ${color}${goto 110}${execi 300 vnstat -w -i wlan0 | grep "current week" | awk '{print $6 $7}'}\
    ${color}${goto 200}${execi 300 vnstat -w -i wlan0 | grep "current week" | awk '{print $9 $10}'}${color}
    ${color0}${time %B}:${color}
    ${color} ${execi 300 vnstat -m -i wlan0 | grep "`date +"%b '%y"`" | awk '{print $3 $4}'}\
    ${color}${goto 110}${execi 300 vnstat -m -i wlan0 | grep "`date +"%b '%y"`" | awk '{print $6 $7}'}\
    ${color}${goto 200}${execi 300 vnstat -m -i wlan0 | grep "`date +"%b '%y"`" | awk '{print $9 $10}'} ${color}
    Can anyone tell me how to get it to run as my user again? I've traced the problem down to that.
    Thanks,
    Last edited by tedbell (2012-08-14 11:22:09)

    DSpider wrote:
    tedbell wrote:
    P.S. I never needed the '&' to run this. If it matters, I edited my .bashrc to alias the command to just 'conky':
    alias conky='killall conky && conky -c ~/AccuW_2/accuw.Toronto.conky &'
    Ok, there's your problem. When you're running "$ conky -c ~/AccuW_2/accuw.Toronto.conky", you're effectively running:
    $ <insert 'conky' alias here> -c ~/AccuW_2/accuw.Toronto.conky
    Or in other words:
    $ killall conky && conky -c ~/AccuW_2/accuw.Toronto.conky & -c ~/AccuW_2/accuw.Toronto.conky
    Heh. Told you this belongs in the newbie section.
    There's an "-c ~/AccuW_2/accuw.Toronto.conky" appendage at the end that Bash doesn't know how to interpret.
    I guess I never noticed the problem because conky always autostarted. I deleted the line and restarted and everything works now. Thanks!
    Mods, move this to newbie or better yet delete it, it's embarrassing.
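    For anyone who hits the same thing: a shell function takes its arguments explicitly, so the trailing "-c ..." can't be left dangling the way DSpider describes above. A minimal sketch for ~/.bashrc, assuming the same config path as in this thread (the name restart_conky is just an example):
    # Restart conky with a given config, defaulting to the Toronto one.
    restart_conky() {
        local cfg="${1:-$HOME/AccuW_2/accuw.Toronto.conky}"
        killall conky 2>/dev/null   # ignore the error if conky isn't running
        conky -c "$cfg" &           # relaunch in the background with that config
    }
    # Usage: restart_conky                       (default config)
    #        restart_conky ~/some/other.conkyrc  (any other config)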
