ZFS mirroring

Hello,
I just built a Solaris 10 server on an x86 box. I forgot to mirror the two disks when I installed the OS. Can I get some help with this?
I have this:
# zpool list
rpool 278G 5.77G 272G 2% ONLINE -
# zpool status
pool: rpool
state: ONLINE
scan: none requested
config:
rpool ONLINE
  c0t0d0s0 ONLINE
Anyway, I want to add a second disk and mirror it. The other disk is c0t1d0s0.
The rpool is a ZFS root.
When I try to add the disk it gives me the error 'cannot open '/dev/dsk/c0t1d0s0': I/O error'.
Can someone tell me what I'm doing wrong? I'm not too knowledgeable on ZFS.
Thanks

"I just build a Solaris 10 server on an x86 box. I forgot to mirror the two disks when I install the OS. Can I get some help with this?"
man installboot:
The installboot utility is a SPARC only program. It is not
supported on the x86 architecture. x86 users should use
installgrub(1M) instead.
--ron
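In case it helps with the attach itself: the 'I/O error' on open usually means the second disk does not yet have a valid Solaris label and slice. A minimal sketch of the usual x86 sequence, assuming both disks are the same size and you want c0t1d0 partitioned like c0t0d0 (you may also need to create a Solaris fdisk partition on the new disk first):
- Copy the partition table from the first disk to the second:
# prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t1d0s2
- Attach the new slice as a mirror of the existing root slice:
# zpool attach -f rpool c0t0d0s0 c0t1d0s0
- Once the resilver finishes, install the GRUB boot blocks on the new disk (x86 uses installgrub, not installboot):
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0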

Similar Messages

  • ZFS mirror on root, one of the two devices appears offline

    Hello everybody,
    I'm a new Archlinux user. I've managed to install it on a pool (called RAID1) composed of two LUKS encrypted devices (/dev/mapper/HD3 and /dev/mapper/HD10).
    This is the pool:
    # zpool status
    pool: RAID1
    state: ONLINE
    scan: resilvered 11.4M in 0h0m with 0 errors on Mon Jan 13 07:41:13 2014
    config:
    NAME          STATE     READ WRITE CKSUM
    RAID1         ONLINE       0     0     0
      mirror-0    ONLINE       0     0     0
        HD10      ONLINE       0     0     0
        HD3       ONLINE       0     0     0
    errors: No known data errors
    I wrote a custom hook, run before the zfs hook, which opens a small (5MB) LUKS image (/etc/keys.img) asking for its key, and mounts it under /keys. Then, for each device in /etc/earlycrypttab, it opens the device taking the respective keyfile from /keys.
    Here is the code, if it could help debug the problem. Feel free to use it, if it might be useful to you. Sorry if it's not too good.
    http://pastebin.com/PAbaTUcq   /etc/earlycrypttab
    http://pastebin.com/JULEdHx9   /usr/lib/initcpio/hooks/customcrypt
    http://pastebin.com/Ew44anvY   /usr/lib/initcpio/install/customcrypt
    Now, the problem is this: for some reason, after booting, HD3 is shown as offline and the pool is shown as DEGRADED. /dev/mapper/HD3 is open and working, though. Running a simple
    # zpool online RAID1 HD3
    fixes the pool.
    I know for sure that after the customcrypt hook is run, that LUKS device is open. I've managed to get into ash just after the customcrypt hook, and the LUKS device is open, but
    # zpool list
    keeps saying that HD3 is offline.
    Anyway, if (still inside ash, after the customcrypt hook and before the zfs hook) I run
    # zpool export RAID1
    # zpool import
    zpool finds both devices as online.
    I first thought it could be a problem with the /etc/zfs/zpool.cache file. I tried recreating it and recreating the initramfs (with mkinitcpio -p linux), but still no luck.
    Do you have any idea about what could be the problem?
    Thank you very much for your time!

    @stefa : Did you get any resolution on this?  I ran into the exact same problem two days ago and I can't seem to find the "right" fix.  I'm using a zfs mirror on top of luks and figured it was due to the luks device not being online but after reading your post... I'm not so sure.
    Post boot I'm able to offline the missing device and online it without issue.  My only concern is why this isn't coming up with both disks at the time of import, and whether it will boot if the offlined drive is the only one of the two active at boot time.  I set up the mirror so I could have a drive fail and still boot...
    @esko: I've got zfs set to auto-mount and the mount points are all working properly.
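    One thing that might be worth trying (a sketch only, not a confirmed fix): with every LUKS device already open, regenerate the cachefile and rebuild the initramfs so the zpool.cache embedded in it matches the pool as it looks when the mapper devices exist:
    # zpool set cachefile=/etc/zfs/zpool.cache RAID1
    # mkinitcpio -p linux
    If the cachefile still records the pool as it looked before the LUKS devices were mapped, the other direction to explore is forcing a fresh import from the hook (zpool import -d /dev/mapper RAID1), which is essentially what the export/import test above does by hand.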

  • Jumpstart/ZFS - mirror any any uses two slices on the same disk

    Hi all,
    I am setting up a set of custom Jumpstart profiles for a Solaris 10 10/09 SPARC install server. I am attempting to set up ZFS mirrored root filesystems using part of the two root disks (reserving the remainder for a second ZFS pool for application data). To achieve this, I have been using "mirror any any" as the device list for the pool directive in the Jumpstart profile.
    On my test system with 73GB disks I've had no problems, but testing against a new server with 146GB disks the installer seems to be putting both halves of the mirror onto the same disk - I presume because the root pool size is less than half the size of the disk, and so it can.
    Is there any way to specify that the root pool should be mirrored across any two distinct disks, or any other advice on how to avoid this happening? I know I can specify exact slice names (i.e. "mirror c1t0d0s0 c1t1d0s0"), but some of the servers will have FC cards in, and so I can't guarantee what controller number the internal disks will be on.
    As an example, here is a line from the rules file, and the associated profile (the start/finish scripts are currently only displaying information).
    rules
    ====
    arch sparc && memsize 4000-4200 begin_settings.sh sparc_4G_zfs lmu_setup.sh
    sparc_4G_zfs profile
    ===============
    install_type initial_install
    system_type standalone
    cluster SUNWCXall
    pool rpool 30g 4097 1g mirror any any
    Thanks,
    Dave Taylor
    Senior Infrastructure Consultant
    Leeds Metropolitan University

    Hello Dave,
    I would expect that the profile you supplied will result in two slices from the same 146G disk being selected for both sides of the mirror. If the numbers you had used for poolsize/swapsize/dumpsize had been smaller, you would have had the same behavior on the 73G disk as well.
    In short, when you say 'any', you are basically saying "I don't care where it goes, give me the first slice/device that works". With 'any' in the first position, kernel probe order is used, so the first disk is selected and the zpool is created there. With 'any' in the second position, the first disk with available space is found again, and the mirror slice is created on it.
    If you want the two parts of the mirror to be on different devices, either alter the numbers for poolsize/swapsize/dumpsize in your profile so you cannot get both on the same disk, or don't use 'any' for the mirror vdev.
    Again, if you use 'any', you are saying "I don't care". If you do care, don't use 'any'.
    Channing
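    To make that concrete, a profile that pins the mirror to two specific disks (a sketch; the device names are only examples and will vary with controller numbering) could read:
    install_type initial_install
    system_type standalone
    cluster SUNWCXall
    pool rpool 30g 4097 1g mirror c1t0d0s0 c1t1d0s0
    The trade-off, as noted above, is that explicit names break when FC cards shift the controller numbers, so the other option is sizing the pool so that both halves cannot fit on one disk.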

  • ZFS mirrors question

    Let's say I have one disk, c0t0d0. I have three zpools on this disk, including the root pool (rpool). If I add a mirror to the root pool (say disk c0t1d0), will only the root pool get mirrored? The other two pools are on the same disk - do they have to be specified as being mirrored separately, or does the entire disk get mirrored by extension, regardless of the pool? If the three pools are all on the same disk, but on different slices, do they have to be mirrored separately then as well?

    No, you'll have to mirror the other pools separately.
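    A minimal sketch of what that looks like, assuming the other two pools are named datapool and scratchpool (hypothetical names) and the second disk carries matching slices:
    # zpool attach rpool c0t0d0s0 c0t1d0s0
    # zpool attach datapool c0t0d0s3 c0t1d0s3
    # zpool attach scratchpool c0t0d0s4 c0t1d0s4
    Each attach mirrors only that one pool; nothing on the disk gets mirrored "by extension".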

  • Solaris 10 (sparc) + ZFS boot + ZFS zonepath + liveupgrade

    I would like to set up a system like this:
    1. Boot device on 2 internal disks in ZFS mirrored pool (rpool)
    2. Non-global zones on external storage array in individual ZFS pools e.g.
    zone alpha has zonepath=/zones/alpha where /zones/alpha is mountpoint for ZFS dataset alpha-pool/root
    zone bravo has zonepath=/zones/bravo where /zones/bravo is mountpoint for ZFS dataset bravo-pool/root
    3. Ability to use liveupgrade
    I need the zones to be separated on external storage because the intent is to use them in failover data services within Sun Cluster (er, Solaris Cluster).
    With Solaris 10 10/08, it looks like I can do 1 & 2 but not 3 or I can do 1 & 3 but not 2 (using UFS instead of ZFS).
    Am I missing something that would allow me to do 1, 2, and 3? If not is such a configuration planned to be supported? Any guess at when?
    --Frank

    Nope, that is still work in progress. Quite frankly I wonder if you would even want such a feature, considering the way the filesystem works. It is possible to recover if your OS doesn't boot anymore by forcing your rescue environment to import the zfs pool, but it's less elegant than merely mounting a specific slice.
    I think zfs is ideal for data and data-like places (/opt, /export/home, /opt/local), but I somewhat question the advantages of moving slices like / or /var into it. It's too early to draw conclusions since the product isn't ready yet, but at this moment I can only think of disadvantages.

  • How to back up a ZFS boot disk ?

    Hello all,
    I have just installed Solaris 10 update 6 (10/08) on a Sparc machine (an Ultra 45 workstation) using ZFS for the boot disk.
    Now I want to port a custom UFS boot disk backup script to ZFS.
    Basically, this script copies the boot disk to a secondary disk and makes the secondary disk bootable.
    With UFS, I had to play with the vfstab a bit to allow booting off the secondary disk, but this is not necessary with ZFS.
    How can I perform such a backup of my ZFS boot disk?
    I tried the following (source disk: c1t0d0, target disk: c1t1d0):
    # zfs list
    NAME USED AVAIL REFER MOUNTPOINT
    rpool 110G 118G 94K /rpool
    rpool/ROOT 4.58G 118G 18K legacy
    rpool/ROOT/root 4.58G 25.4G 4.50G /
    rpool/ROOT/root/var 79.2M 4.92G 79.2M /var
    rpool/dump 16.0G 118G 16.0G -
    rpool/export 73.3G 63.7G 73.3G /export
    rpool/homelocal 21.9M 20.0G 21.9M /homelocal
    rpool/swap 16G 134G 16K -
    # zfs snapshot -r rpool@today
    # zpool create -f -R /mnt rbackup c1t1d0
    # zfs send -R rpool@today | zfs receive -F -d rbackup               <- This one fails (see below)
    # installboot /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t1d0s0
    The send/receive command fails after transferring the "/" filesystem (4.5 GB) with the following error message:
    cannot mount '/mnt': directory is not empty
    There may be some kind of unwanted recursion here (trying to back up the backup or something) but I cannot figure it out.
    I tried a workaround: creating the mount point outside the snapshot:
    zfs snapshot -r rpool@today
    mkdir /var/tmp/mnt
    zpool create -f -R /var/tmp/mnt rbackup c1t1d0
    zfs send -R rpool@today | zfs receive -F -d rbackup
    But it still fails, this time with mounting "/var/tmp/mnt".
    So how does one back up the ZFS boot disk to a secondary disk in a live environment?

    OK, this post requires some clarification.
    First, thanks to robert.cohen and rogerfujii for giving some elements.
    The objective is to make a backup of the boot disk on another disk of the same machine. The backup must be bootable just like the original disk.
    The reason for doing this instead of (or, even better, in addition to) mirroring the boot disk is to be able to quickly recover a stable operating system in case anything gets corrupted on the boot disk. Corruption includes hardware failures, but also any software corruption which could be caused by a virus, an attacker or an operator mistake (rm -rf ...).
    After doing lots of experiments, I found two potential solutions to this need.
    Solution 1 looks like what rogerfujii suggested, albeit with a few practical additions.
    It consists of using ZFS mirroring and breaking up the mirror after resilvering:
         - Configure the backup disk as a mirror of the boot disk:
         zpool attach -f rpool <boot disk>s0 <backup disk>s0
         - Copy the boot block to the backup disk:
         installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/<backup disk>s0
         - Monitor the mirror resilvering:
         zpool status rpool
         - Wait until the "action" field disappears (this can be scripted; see the sketch at the end of these steps).
         - Prevent any further resilvering:
         zpool offline rpool <backup disk>s0
         Note: this step is mandatory because detaching the disk without offlining it first results in a non-bootable backup disk.
         - Detach the backup disk from the mirror:
         zpool detach rpool <backup disk>s0
         POST-OPERATIONS:
         After booting on the backup disk, assuming the main boot disk is unreachable:
         - Log in as super-user.
         - Detach the main boot disk from the mirror
         zpool detach rpool <boot disk>s0
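         SCRIPTING THE RESILVER WAIT:
         A rough way to script the wait mentioned above, assuming the status text contains "resilver in progress" while the resilver runs (the exact wording varies between releases):
         while zpool status rpool | grep -q "resilver in progress"; do sleep 60; done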
    This solution has many advantages, including simplicity and using no dirty tricks. However, it has two major drawbacks:
    - When booting on the backup disk, if the main boot disk is online, it will be resilvered with the old data.
    - There is no easy way to access the backup disk data without rebooting.
    So if you accidentally lose one file on the boot disk, you cannot easily recover it from the backup.
    This is because the pool name is the same on both disks, therefore effectively preventing any pool import.
    Here is now solution 2, which I favor.
    It is more complex and dependent on the disk layout and ZFS implementation changes, but overall offers more flexibility.
    It may need some additions if there are other disks than the boot disk with ZFS pools (I have not tested that case yet).
    ***** HOW TO BACKUP A ZFS BOOT DISK TO ANOTHER DISK *****
    1. Backup disk partitioning
    - Clean up ZFS information from the backup disk:
    The first and last megabyte of the backup disk, which hold ZFS information (plus other stuff) are erased:
    dd if=/dev/zero seek=<backup disk #blocks minus 2048> count=2048 of=/dev/rdsk/<backup disk>s2
    dd if=/dev/zero count=2048 of=/dev/rdsk/<backup disk>s2
    - Label and partition the backup disk in SMI:
    format -e <backup disk>
         label
         0          -> SMI label
         y
         (If more questions asked: press Enter 3 times.)
         partition
         (Create a single partition, number 0, filling the whole disk)
         label
         0
         y
         quit
         quit
    2. Data copy
    - Create the target ZFS pool:
    zpool create -f -o failmode=continue -R /mnt -m legacy rbackup <backup disk>s0
    Note: the chosen pool name is here "rbackup".
    - Create a snapshot of the source pool:
    zfs snapshot -r rpool@today
    - Copy the data:
    zfs send -R rpool@today | zfs receive -F -d rbackup
    - Remove the snapshot, plus its copy on the backup disk:
    zfs destroy -r rbackup@today
    zfs destroy -r rpool@today
    3. Backup pool reconfiguration
    - Edit the following files:
    /mnt/etc/vfstab
    /mnt/etc/power.conf
    /mnt/etc/dumpadm.conf
    In these files, replace the source pool name "rpool" with the backup pool name "rbackup".
    - Remove the ZFS mount list:
    rm /mnt/etc/zfs/zpool.cache
    4. Making the backup disk bootable
    - Note the name of the current boot filesystem:
    df -k /
    E.g.:
    # df -k /
    Filesystem kbytes used avail capacity Mounted on
    rpool/ROOT/root 31457280 4726390 26646966 16% /
    - Configure the boot filesystem on the backup pool:
    zpool set bootfs=rbackup/ROOT/root rbackup
    Note: "rbackup/ROOT/root" is derived from the main boot filesystem name "rpool/ROOT/root".
    - Copy the ZFS boot block to the backup disk:
    installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/<backup disk>s0
    5. Cleaning up
    - Detach the target pool:
    zpool export rbackup
    I hope this howto will be useful to those like me who need to change all their habits while migrating to ZFS.
    Regards.
    HL
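    As a usage note on solution 2: because the backup pool has its own name, it can be imported later under an alternate root to pull back individual files without rebooting (a sketch; mount behaviour depends on the received mountpoint properties, so some datasets may need mounting by hand):
    # zpool import -R /a rbackup
    # cp /a/<path to lost file> <destination>
    # zpool export rbackup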

  • Using ZFS for Oracle RAC 11gR2 binaries

    Hi,
    We have following scenario,
    Two Node Cluster: Oracle RAC 11Gr2 with Clusterware on Solaris 10
    We want to keep the Oracle & Clusterware binaries on a mirrored ZFS file system locally on each node, and the data files, FRA, voting disks & OCR on shared SAN using ASM.
    My question: is the above scenario certified by Oracle, i.e. can we keep the Oracle binaries on ZFS?
    Will appreciate your input.
    Thanks

    Well my confusion started after reading this doc on oracle support:
    Certification of Zeta File System (Zfs) On Solaris 10 for Oracle RDBMS [ID 403202.1]
    "Oracle database 10gR2 (10.2.0.3 and higher patches), 11gR1 (11.1.0.6 and higher patches) and 11gR2 (11.2.0.1 and higher patches) are certified with Solaris 10 ZFS on Sparc 64-bit and Solaris x84-64. See Solaris ZFS_Best_Practices_Guide. This is for single instance ONLY. ZFS is currently not applicable to RAC and not Certified to use it as a shared filesystem."

  • ZFS with Hitachi SAN Questions

    We are moving all our oracle and zone storage to ZFS using an external Hitachi VSP (with Dynamic Provisioning).
    Is there a best practice for LUN sizes? Since I can create a LUN of any size with HDP, what is the best approach:
    a) If I need a 1TB zpool, do I allocate a 1TB sized LUN or 10 x 100GB LUNs, for example? I don't need to do any mirroring or anything like that. Underneath the hood these LUNs are sitting on the same RAID group. (I guess I could configure the LUNs so they were on different raid groups but this is a level of performance far beyond what I will need for this cluster)
    If we are deploying oracle in zone clusters, does it make sense to create different zpools for the zonepath itself, and the /u01, /u02, etc... mounts the database will be using? Or just create one large zpool that will be the zonepath and then the DBAs can create their /u01, /u02, etc... directories at the root level of the zone?
    Is using ZFS mirroring to migrate from old storage onto new storage feasible?
    i.e.: I have some Solaris 10 u5 stand-alone systems running Oracle using LUNs from a USP-V. The USP-V is being replaced with a VSP and the M5000's are being replaced with new ones (which will be a Solaris cluster running Solaris 10 update 9). Can I present new LUNs from the VSP to the old systems, configure a mirror for the existing zpools, and then break that mirror, ultimately ending up on the new storage (and eventually importing these onto the new nodes)? The issues I see are: a) not sure if the zpool/zfs version of update 5 will pose any problems once imported on the update 9 systems; b) can a mirror be added to a pre-existing zpool that wasn't originally configured that way? (I'm sure it can, but haven't actually done this.)
    Thanks for any info

    There is what works.
    There is what is supported.
    It probably works to present the NetApp LUNs to the same HBAs that are accessing the SAN. However, if NetApp says it isn't supported .... then you need to ask them why.
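    On question b): yes, a mirror can be attached to a vdev that was created as a single device, and that is the usual way to migrate onto new LUNs. A sketch, with the pool and device names purely hypothetical (test the procedure before relying on it):
    # zpool attach oradata c4t<old USP-V LUN>d0 c5t<new VSP LUN>d0
    (wait for zpool status oradata to show the resilver is complete)
    # zpool detach oradata c4t<old USP-V LUN>d0
    On question a), a newer release can import a pool created at an older zpool/zfs version; you can run zpool upgrade afterwards if you want the newer features, at the cost of the pool no longer being importable on the old release.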

  • CLUSTERING WITH ZFS BOOT DISK

    hi guys,
    I'm looking to create a new cluster on two standalone servers.
    The two servers boot from a ZFS rpool, and I don't know if the boot disk was laid out with a dedicated slice for the global devices during the installation procedure.
    Is it possible to install Sun Cluster with a ZFS rpool boot disk?
    What do i have to do?
    Alessio

    Hi!
    I have a 10-node Sun Cluster.
    All nodes have a mirrored zfs rpool.
    Is it better to create the mirrored ZFS boot disk after installation of Sun Cluster or not? I created the ZFS mirror when installing the Solaris 10 OS.
    But I don't see any problem doing this after installation of Sun Cluster or Solaris 10.
    P.S. And you may use UFS global with ZFS root.
    Anatoly S. Zimin

  • Host based zfs config with Oracle's Unified Storage 7000 series

    Hi all,
    It is my understanding that the 7000 storage presents an FC or iSCSI LUN to the host. I understand this LUN is a ZFS LUN inside the 7000 storage, however the host still sees it as only one LUN. If I configure a host-based ZFS storage device on top of this LUN, I have no host-based ZFS redundancy. So do we still need to create a host-based ZFS mirror or a host-based ZFS raidz device when using a 7000 series storage array?
    Thanks,
    Shawn

    Many thanks - telling ESX to connect to the 7310's IP address on one of the other subnets DOES appear to work!
    My brain must still be addled from some other recent issues we've been having...absolutely no idea why I hadn't tried it already...
    I stand by the fact that the BUI is ambiguous, however - it still mentions that it's exported on only one of the networks...
    Thanks again...

  • Global device filesystem mirror between nodes.

    Maybe this is an obvious question, but I have to ask.
    In a two node cluster (sc3.2) I want to have an extra disk in each node.
    Then I would like to mirror them somehow, via the global filesystem.
    So when one node goes down, the contents of this mirror is still available on
    the other node.
    Is something like this possible? And where should I look for info on how to set this up?
    Kind regards,

    The global file system is expected to be created on shared disk if you
    need high availability and accessibility from all cluster nodes.
    If you have just locally attached disks, creating a global file system
    doesn't provide any high availability.
    If your intention is to run an application (with high availability) on a
    cluster with local disks, you can take the iSCSI approach:
    - Create a ZFS mirror with iSCSI initiators for both targets
    - Use the ZFS pool for your application.
    NOTE: This is not recommended for production, as ZFS has some
    issues(?) regarding synchronizing the mirror when there are multiple
    node failures.
    I think this configuration has been explained in open HA cluster
    environments.
    Thanks
    -Venku
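    Very roughly, that iSCSI approach might look like this from the node that builds the pool, assuming each node already exports its spare disk as an iSCSI target at the addresses shown (the addresses and device names are made up, and the target-side setup is omitted):
    # iscsiadm modify discovery --sendtargets enable
    # iscsiadm add discovery-address 192.168.10.11:3260
    # iscsiadm add discovery-address 192.168.10.12:3260
    # devfsadm -i iscsi
    # zpool create apppool mirror c2t<target 1 LUN>d0 c3t<target 2 LUN>d0
    The pool then spans both nodes' disks; how well it copes with multiple node failures is exactly the caveat raised above.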

  • Any suggestions on T2000 disk detection?

    We bought 2 used T2000 with 2 72GB disks in each.  Initially the one we tested seemed to run OK.
    Then we had a requirement to use the system with 4 disks.
    Initially the system was running Solaris 10 fine with ZFS mirrored disks in slots 0 and 1.
    Adding 2 disks from the second server, they were inserted at slots 2 and 3.
    During the install the only disks shown for ZFS mirror file system were slots 0 and 2.
    After the install the disks in slots 1 and 3 were still not showing from the OS,
    nor from the OK prompt.
    However, showenvironment at the sc level did show all 4 disks.
    We have played around extensively with 5 disks, running probe-scsi
    at the OK prompt.  There is no pattern leading to the conclusion that certain
    disks are bad, or certain slots are bad.  It is random what we get, but at most
    2 disks can be seen by the system, while at the SC level, it is seeing all disks.
    We get a green light on the disk, showenvironment says it is inserted,
    but OBP's probe-scsi and the OS's format command do not see the additional
    disks.
    The same problem is seen on the other used system, however we don't know how
    robust that server is as we've not run an OS on it yet.
    We did flash the firmware on the known used T2000 in case it was a problem in OBP,
    but nothing has improved.  showfaults comes back clean.  Post comes up with no errors.
    Has this kind of thing been seen before by anyone?  We are puzzled by it and forced to
    think we have 2 bad T2000 servers, although there was no hint of this before in the one
    unit we had run with mirrored disks in slots 0 and 1 for a few months.

    The solution was found.  The disks from the previous owner were part of a RAID set with a controller which was removed from the box.
    It was discussed on serverfault.com.  Deleting the legacy RAID set info with raidctl allowed the disks to be recognized again.
    I am really surprised this factored into which disks were visible from the OpenBoot prompt.  I could see Solaris reporting falsely about this, but the ok> prompt?  It is surprising that the ok> prompt level is swayed by raidctl info stored somewhere on the disk.
    http://serverfault.com/questions/517192/disk-not-shown-in-solaris-11-1-format-tool
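    For anyone who hits the same thing, the clean-up is roughly as follows (a sketch; the volume name below is just an example, check the raidctl -l output on your system first):
    # raidctl -l
    # raidctl -d c0t2d0
    Deleting the stale volume is what made the member disks show up again in probe-scsi and format.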

  • IPhoto & NAS

    Back in 2012 my external USB drive suffered a catastrophic failure and I lost about 9k photos which were mostly from when I was deployed to Afghanistan. So I set out to ensure that I would never lose my photos again and settled on an in-house NAS solution using FreeNAS with mirrored storage drives. I don't know how I got it to work but I was able to mount the drive and use it as my primary iPhoto Library drive. Recently I noticed that I was having performance issues with iPhoto and found where people were having problems with it due to the fact that they had over 10k photos. I figured that this was my problem too and set out to divide my iPhoto library up into different years. Everything went great, and iPhoto Library Manager was a big time saver. However when I went to put the libraries back onto the ZFS formatted NAS drive (using AFP) the libraries would get about 150 MB copied and then crawl to a speed of about 2 MB per minute or less. This was very aggravating as I have a Cat 6 gigabit wired network in my house with an HP ProCurve gigabit switch for the backbone. Performing an FTP transfer did the trick though, at the speed that I would expect. Unfortunately I found that I could no longer open the libraries in iPhoto with the NAS drive mounted (again using AFP). I figured it was some sort of permissions problem and started playing with the file permissions, going as far as chmod 777 and chown username:username to see if that would fix the problem.
    With no luck I turned to the forums and found that iPhoto will only read iPhoto Libraries from an HFS+ journaled drive. Now I know that FreeNAS does support HFS+ format, but getting it to work was beyond what I needed from the solution. Thinking that I don't need these libraries that much anyway as they are just a backup, I thought of compressing the files to zip format and then storing them on the ZFS mirrored drives. However I realized that I would be decompressing and recompressing the folder, using valuable drive space, with each change that needed to be made.
    My solution was a sparse disk image for each library, then copied to the ZFS mirrored drives. This way if I need to add more photos to those libraries, all I need to do is mount the NAS share, mount the needed year's iPhoto Library and then open iPhoto, selecting the needed iPhoto library. Since the sparse image is expandable, I am not limited to a set size, and due to iPhoto's requirement of an HFS+ drive format, the disk image solves that problem as well.
    I am sure there are many other solutions that more technically minded users than I have come up with, but I wanted to share this for those who might have similar problems but not wanting to devote too much of their life to the solution. Hope this helps someone.
    Kurt

    Unfortunately there is no solution involving a NAS that is reliable. Don't take my word for it, take Apple's:
    See this article
    http://support.apple.com/kb/TS5168
    and note the comment:
    “Additionally, storing the iPhoto library on a network rather than locally on your computer can also lead to poor performance or data loss.”

  • [SOLVED] What sets default I/O scheduler for disks?

    Hi,
    having read the Wiki https://wiki.archlinux.org/index.php/So … _Scheduler , my attention got caught by the current I/O schedulers for my disks:
    SSD (two partitions: / and /boot, both ext4):
    # cat /sys/block/sda/queue/scheduler
    noop deadline [cfq]
    2 HDDs (part of ZFS mirror pool, partitioned by ZFS itself, mounted automatically on import by ZFS):
    # cat /sys/block/sdb/queue/scheduler
    [noop] deadline cfq
    I am about to create my own udev rules to set noop and cfq for the SSD and HDDs, respectively. I just wonder why they are not all [cfq], which should be the default.
    P.S. I grepped all files in /etc for "noop" and "cfq" to make sure it is not some of my setting forgotten in time. And found nothing.
    Last edited by MilanKnizek (2015-04-22 08:32:59)

    The kernel config does I believe.
    % zgrep -i iosched /proc/config.gz
    CONFIG_IOSCHED_NOOP=y
    CONFIG_IOSCHED_DEADLINE=y
    CONFIG_IOSCHED_CFQ=y
    CONFIG_CFQ_GROUP_IOSCHED=y
    CONFIG_DEFAULT_IOSCHED="cfq"
    Perhaps zpools default to noop somehow.
    Last edited by graysky (2015-04-22 07:40:35)
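    For what it's worth, the udev rules the question mentions could look something like this, in a file such as /etc/udev/rules.d/60-iosched.rules (the file name and device matches are assumptions; note too that ZFS on Linux is generally reported to switch whole disks it manages to noop on its own, which would explain the [noop] seen above):
    ACTION=="add|change", KERNEL=="sda", ATTR{queue/scheduler}="noop"
    ACTION=="add|change", KERNEL=="sd[b-c]", ATTR{queue/scheduler}="cfq"
    Reload with udevadm control --reload-rules and udevadm trigger, or simply reboot.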

  • Booting from a mirrored disk on a zfs root system

    Hi all,
    I am a newbee here.
    I have a zfs root system with mirrored disks c0t0d0s0 and c1t0d0s0; grub has been installed on c0t0d0s0 and OS booting is just fine.
    Now the question is: if I want to boot the OS from the mirrored disk c1t0d0s0, how can I achieve that?
    The OS is Solaris 10 update 7.
    I installed grub on c1t0d0s0 and assume menu.lst needs to be changed (but I don't know how); so far no luck.
    # zpool status zfsroot
    pool: zfsroot
    state: ONLINE
    scrub: none requested
    config:
    NAME          STATE     READ WRITE CKSUM
    zfsroot       ONLINE       0     0     0
      mirror      ONLINE       0     0     0
        c1t0d0s0  ONLINE       0     0     0
        c0t0d0s0  ONLINE       0     0     0
    # bootadm list-menu
    The location for the active GRUB menu is: /zfsroot/boot/grub/menu.lst
    default 0
    timeout 10
    0 s10u6-zfs
    1 s10u6-zfs failsafe
    # tail /zfsroot/boot/grub/menu.lst
    title s10u6-zfs
    findroot (BE_s10u6-zfs,0,a)
    bootfs zfsroot/ROOT/s10u6-zfs
    kernel$ /platform/i86pc/multiboot -B $ZFS-BOOTFS
    module /platform/i86pc/boot_archive
    title s10u6-zfs failsafe
    findroot (BE_s10u6-zfs,0,a)
    bootfs zfsroot/ROOT/s10u6-zfs
    kernel /boot/multiboot kernel/unix -s -B console=ttya
    module /boot/x86.miniroot-safe
    I'd appreciate it if anyone can provide some tips.
    Thanks.
    Mizuki

    This is what I have in my notes... not sure if I wrote them or not. This is a SPARC example as well; I believe on my x86 box I still have to tell the BIOS to boot from the mirror.
    After attaching the mirror (if the mirror was not present during the initial install) you need to fix the boot block:
    # installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t1d0s0
    If the primary then fails you need to point the OBP at the mirror:
    ok> boot disk1
    for example
    Apparently there is a way to set the OBP to search for a bootable disk automatically.
    Good notes on all kinds of zfs and boot issues here:
    http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#ZFS_Boot_Issues
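    Since the original question is about x86, the matching step there is installgrub rather than installboot (a sketch; it assumes the standard stage files shipped with Solaris 10):
    # installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t0d0s0
    Because the GRUB menu lives inside the zfsroot pool, which both mirror halves carry, menu.lst itself normally does not need changing; the remaining step is telling the BIOS (or its boot device order) to boot from the second disk.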
