ZFS root problem after iscsi target experiment

Hello all.
I need help with this situation... I've installed Solaris 10u6, patched it, and created a branded full zone. Everything went well until I started to experiment with an iSCSI target according to this document: http://docs.sun.com/app/docs/doc/817-5093/fmvcd?l=en&a=view&q=iscsi
After setting up the iSCSI discovery address of my iSCSI target, Solaris hung and the only way out was to send a break from the service console. Then I got these messages during boot:
SunOS Release 5.10 Version Generic_138888-01 64-bit
/dev/rdsk/c5t216000C0FF8999D1d0s0 is clean
Reading ZFS config: done.
Mounting ZFS filesystems: (1/6)cannot mount 'root': mountpoint or dataset is busy
(6/6)
svc:/system/filesystem/local:default: WARNING: /usr/sbin/zfs mount -a failed: exit status 1
Jan 23 14:25:42 svc.startd[7]: svc:/system/filesystem/local:default: Method "/lib/svc/method/fs-local" failed with exit status 95.
Jan 23 14:25:42 svc.startd[7]: system/filesystem/local:default failed fatally: transitioned to maintenance (see 'svcs -xv' for details)
---- Many services are affected by this error; unfortunately one of them is system-log, so I cannot find any relevant information about why this happens.
bash-3.00# svcs -xv
svc:/system/filesystem/local:default (local file system mounts)
State: maintenance since Fri Jan 23 14:25:42 2009
Reason: Start method exited with $SMF_EXIT_ERR_FATAL.
See: http://sun.com/msg/SMF-8000-KS
See: /var/svc/log/system-filesystem-local:default.log
Impact: 32 dependent services are not running:
svc:/application/psncollector:default
svc:/system/webconsole:console
svc:/system/filesystem/autofs:default
svc:/system/system-log:default
svc:/milestone/multi-user:default
svc:/milestone/multi-user-server:default
svc:/system/basicreg:default
svc:/system/zones:default
svc:/application/graphical-login/cde-login:default
svc:/system/iscsitgt:default
svc:/application/cde-printinfo:default
svc:/network/smtp:sendmail
svc:/network/ssh:default
svc:/system/dumpadm:default
svc:/system/fmd:default
svc:/system/sysidtool:net
svc:/network/rpc/bind:default
svc:/network/nfs/nlockmgr:default
svc:/network/nfs/status:default
svc:/network/nfs/mapid:default
svc:/application/sthwreg:default
svc:/application/stosreg:default
svc:/network/inetd:default
svc:/system/sysidtool:system
svc:/system/postrun:default
svc:/system/filesystem/volfs:default
svc:/system/cron:default
svc:/application/font/fc-cache:default
svc:/system/boot-archive-update:default
svc:/network/shares/group:default
svc:/network/shares/group:zfs
svc:/system/sac:default
[ Jan 23 14:25:40 Executing start method ("/lib/svc/method/fs-local") ]
WARNING: /usr/sbin/zfs mount -a failed: exit status 1
[ Jan 23 14:25:42 Method "start" exited with status 95 ]
Finally, here is the output of the zpool list command, where everything about the ZFS pools looks OK:
NAME         SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
root          68G  18.5G  49.5G  27%  ONLINE  -
storedgeD2   404G  45.2G   359G  11%  ONLINE  -
I would appreciate any help.
thanks in advance,
Berrosch

OK, I've tried installing s10u6 to the default rpool and moving the root user's home to the /rpool directory (which is nonsense of course, it was just for testing purposes), and everything went OK.
Another experiment was with the root pool named 'root' and the root user's home in /root; everything went OK as well.
The next try was with the root pool named 'root', the root user's home in /root, and the iSCSI initiator enabled:
# svcs -a |grep iscsi
disabled 16:31:07 svc:/network/iscsi_initiator:default
# svcadm enable iscsi_initiator
# svcs -a |grep iscsi
online 16:34:11 svc:/network/iscsi_initiator:default
and voila! the problem is here...
Mounting ZFS filesystems: (1/5)cannot mount 'root': mountpoint or dataset is busy
(5/5)
svc:/system/filesystem/local:default: WARNING: /usr/sbin/zfs mount -a failed: exit status 1
Feb 9 16:37:35 svc.startd[7]: svc:/system/filesystem/local:default: Method "/lib/svc/method/fs-local" failed with exit status 95.
Feb 9 16:37:35 svc.startd[7]: system/filesystem/local:default failed fatally: transitioned to maintenance (see 'svcs -xv' for details)
It seems to be a bug in the iSCSI implementation, perhaps some hard-coded handling of the name 'root' in the source code or something like that...
Martin
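One way to narrow this down (a hedged sketch; the commands are standard Solaris 10 tools, but the diagnosis is only a guess from the symptoms above): a pool named 'root' gets /root as the default mountpoint of its top-level dataset, which collides with the root user's home directory, so anything occupying /root while fs-local runs 'zfs mount -a' would produce exactly this "mountpoint or dataset is busy" failure.
zfs get mountpoint root            # the top-level dataset of pool 'root' defaults to /root
svcs -l network/iscsi_initiator    # confirm the initiator is online when fs-local runs
fuser /root                        # see which processes are using the directory at that moment
zfs set mountpoint=/rootpool root  # hypothetical workaround: move the colliding mountpoint aside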

Similar Messages

  • ISCSI-Target service start issue

    I am trying to configure 11g RAC on RHEL 4.7. I have just done the following steps on my storage:
    [root@XRBSstorage ~]# rpmbuild --rebuild iscsitarget-0.4.126.src.rpm
    [root@XRBSstorage ~]# rpm -ivh /usr/src/redhat/RPMS/i386/iscsitarget*
    Preparing... ########################################### [100%]
    1:iscsitarget-kernel ########################################### [ 33%]
    2:iscsitarget ########################################### [ 67%]
    3:iscsitarget-debuginfo ########################################### [100%]
    [root@XRBSstorage ~]# vi /etc/ietd.conf
    Target iqn.2001-04.:XRBSstorage.sda6
    Lun 0 Path=/dev/sda6,Type=fileio
    Alias sda6
    [root@XRBSstorage ~]# chkconfig --level 35 iscsitarget on
    [root@XRBSstorage ~]# service iscsi-target start
    Starting iSCSI target service: FATAL: Module iscsi_trgt not found.
    netlink fd
    : Connection refused
    [FAILED]
    [root@XRBSstorage ~]#
    Kindly guide me how to get rid of the error.
    Regards,
    Umair

    This is the wrong forum for that kind of question. Please move to the generic Linux forum.
    Anyway, check whether all kernel modules are loaded:
    [root@srv01 ~]# lsmod |grep -i iscsi
    be2iscsi               50665  0
    iscsi_tcp               9524  0
    libiscsi_tcp           15026  3 iscsi_tcp,cxgb3i,libcxgbi
    libiscsi               39373  7 be2iscsi,ib_iser,iscsi_tcp,bnx2i,cxgb3i,libcxgbi,libiscsi_tcp
    scsi_transport_iscsi    35096  8 be2iscsi,ib_iser,iscsi_tcp,bnx2i,libcxgbi,libiscsi
    Regards,
    - wiZ
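    A hedged follow-up sketch: "FATAL: Module iscsi_trgt not found" usually means the iscsitarget kernel module was not built, or was built against a different kernel than the one currently running, so before looking at ietd.conf it is worth checking the module itself:
    uname -r                  # the running kernel release
    modinfo iscsi_trgt        # is a matching module installed for this kernel at all?
    modprobe iscsi_trgt       # load it manually if it exists
    service iscsi-target start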

  • Unable to expand/extend partition after growing SAN-based iSCSI target

    Hello, all. I have an odd situation regarding how to expand iSCSI-based partitions.
    Here is my setup:
    I use the GlobalSAN iSCSI initiator on 10.6.x server (Snow Leopard).
    The iSCSI LUN is formatted with the GPT partition table.
    The filesystem is Journaled HFS+
    My iSCSI SAN has the ability to non-destructively grow a LUN (iSCSI target).
    With this in mind, I wanted to experiment with growing a LUN/target on the SAN and then expanding the Apple partition within it using disk utility. I have been unable to do so.
    Here is my procedure:
    1) Eject the disk (iSCSI targets show up as external hard drives)
    2) Disconnect the iSCSI target using the control panel applet (provided by GlobalSAN)
    3) Grow the LUN/target on the SAN.
    4) Reconnect the iSCSI initiator
    5) Expand/extend the partition using Disk Utility to consume the (newly created) free space.
    It works until the last step. When I reconnect to the iSCSI target after expanding it on the SAN, it shows up in Disk Utility as being larger than it was (so far, as expected). When I go to expand the partition, however, it errors out saying that there is not enough space.
    Investigating further, I went to the command line and performed a
    "diskutil resizeVolume <identifier> limits"
    to determine what the limit was to the partition. The limits did NOT reflect the newly-created space.
    My suspicion is that the original partition map, since it was created as 100% of the volume, does not allow room for growth despite the fact that the disk suddenly (and, to the system, unexpectedly) became larger.
    Is this assumption correct? Is there any way around this? I would like to be able to expand my LUNs/targets (since the SAN can grow with the business), but this has no value if I cannot also extend the partition table to use the new space.
    If anyone has any insight, I would greatly appreciate it. Thank you!

    I have exactly the same problem that you describe above. My iSCSI LUN was near capacity, and therefore I extended the iSCSI LUN from 100 GB to 150 GB. No problem so far.
    Disk Utility shows the iSCSI device as 150 GB, but I cannot extend the volume to the new size. It gives me the same error (in Dutch).
    Please someone help us out !
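    A speculative sketch only (none of this is confirmed in the thread): when a disk backing a GPT grows, the backup GPT header is left at the old end of the disk, and resize tools may keep reporting the old limits until the backup structures are relocated. Assuming a third-party gdisk build is installed (it is not part of 10.6) and disk2 stands in for the iSCSI disk identifier:
    sudo gdisk /dev/disk2
    # in gdisk: x (expert menu), e (relocate backup data structures to the end of the disk), w (write)
    diskutil resizeVolume disk2s2 limits   # the limits should now include the new space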

  • HT5262 I updated to iOS 6.1.3 on my iPhone 4S without saving the SHSH and APTickets; now it shows the service provider but no signal anywhere. Is it possible to repair my device?

    I updated to iOS 6.1.3 on my iPhone 4S and didn't save the SHSH and APTickets. After that I experienced problems with my device: it shows the service provider but no signal anywhere. I want to know if it's possible to repair my device.

    anduran wrote:
    I updated to iOS 6.1.3 on my iPhone 4S and didn't save the SHSH and APTickets. After that I experienced problems with my device: it shows the service provider but no signal anywhere. I want to know if it's possible to repair my device.
    Basic troubleshooting:
    Reset
    Restore from backup and, if required, set up as new

  • [SOLVED] iscsi target lio (targetcli_fb) problem

    I configured targetcli (not using authentication):
    tpg1> set parameter AuthMethod=None
    set attribute authentication=0
    /etc/rc.d/target runs fine.
    But when the initiator connects to the target, the target reports this error:
    iSCSI Initiator Node: iqn.xxxx-xxxx-xxxx is not authorized to access iSCSI target portal group: 1
    uname -a
    Linux  3.2.13-1-ARCH
    What other settings are needed? Thanks.
    This is the saveconfig.json:
    {
      "fabric_modules": [],
      "storage_objects": [
        {
          "attributes": {
            "block_size": 512,
            "emulate_dpo": 0,
            "emulate_fua_read": 0,
            "emulate_fua_write": 1,
            "emulate_rest_reord": 0,
            "emulate_tas": 1,
            "emulate_tpu": 0,
            "emulate_tpws": 0,
            "emulate_ua_intlck_ctrl": 0,
            "emulate_write_cache": 0,
            "enforce_pr_isids": 1,
            "is_nonrot": 0,
            "max_sectors": 1024,
            "max_unmap_block_desc_count": 0,
            "max_unmap_lba_count": 0,
            "optimal_sectors": 1024,
            "queue_depth": 32,
            "unmap_granularity": 0,
            "unmap_granularity_alignment": 0
          },
          "buffered_mode": true,
          "dev": "/home/faicker/test.img",
          "name": "disk1",
          "plugin": "fileio",
          "size": 10737418240,
          "wwn": "b83cf4a1-5e30-4df4-a5a2-ac7163d78a4d"
        }
      ],
      "targets": [
        {
          "fabric": "iscsi",
          "tpgs": [
            {
              "attributes": {
                "authentication": 0,
                "cache_dynamic_acls": 0,
                "default_cmdsn_depth": 16,
                "demo_mode_write_protect": 1,
                "generate_node_acls": 0,
                "login_timeout": 15,
                "netif_timeout": 2,
                "prod_mode_write_protect": 0
              },
              "enable": 1,
              "luns": [
                {
                  "index": 0,
                  "storage_object": "/backstores/fileio/disk1"
                }
              ],
              "node_acls": [],
              "parameters": {
                "AuthMethod": "None",
                "DataDigest": "CRC32C,None",
                "DataPDUInOrder": "Yes",
                "DataSequenceInOrder": "Yes",
                "DefaultTime2Retain": "20",
                "DefaultTime2Wait": "2",
                "ErrorRecoveryLevel": "0",
                "FirstBurstLength": "65536",
                "HeaderDigest": "CRC32C,None",
                "IFMarkInt": "2048~65535",
                "IFMarker": "No",
                "ImmediateData": "Yes",
                "InitialR2T": "Yes",
                "MaxBurstLength": "262144",
                "MaxConnections": "1",
                "MaxOutstandingR2T": "1",
                "MaxRecvDataSegmentLength": "8192",
                "OFMarkInt": "2048~65535",
                "OFMarker": "No",
                "TargetAlias": "LIO Target"
              },
              "portals": [
                {
                  "ip_address": "0.0.0.0",
                  "port": 3260
                }
              ],
              "tag": 1
            }
          ],
          "wwn": "iqn.2003-01.org.linux-iscsi.u205.x8664:sn.5f9e1e0139f2"
        }
      ]
    }
    Last edited by faicker (2012-04-16 15:27:21)

    /iscsi/iqn.20...1e0139f2/tpg1> ls /
    o- / ..................................................................... [...]
      o- backstores .......................................................... [...]
      | o- block ................................................ [0 Storage Object]
      | o- fileio ............................................... [1 Storage Object]
      | | o- disk1 ........................ [/home/mocan/test.img (10.0G) activated]
      | o- pscsi ................................................ [0 Storage Object]
      | o- ramdisk .............................................. [0 Storage Object]
      o- ib_srpt ....................................................... [Not found]
      o- iscsi .......................................................... [1 Target]
      | o- iqn.2003-01.org.linux-iscsi.u205.x8664:sn.5f9e1e0139f2 .......... [1 TPG]
      |   o- tpg1 ........................................................ [enabled]
      |     o- acls ........................................................ [1 ACL]
      |     | o- iqn.2003-01.org.linux-iscsi.u205.x8664:sn.5f9e1e0139f2  [1 Mapped LUN]
      |     |   o- mapped_lun0 ............................ [lun0 fileio/disk1 (rw)]
      |     o- luns ........................................................ [1 LUN]
      |     | o- lun0 ........................ [fileio/disk1 (/home/mocan/test.img)]
      |     o- portals .................................................. [1 Portal]
      |       o- 0.0.0.0:3260 ................................................. [OK]
      o- loopback ....................................................... [0 Target]
      o- qla2xxx ....................................................... [Not found]
      o- tcm_fc ........................................................ [Not found]
    This is targetcli ls / result.
    The open-iscsi initiator reports this error:
    iscsiadm: initiator reported error (19 - encountered non-retryable iSCSI login failure)
    It still wasn't solved.
    Last edited by faicker (2012-04-14 15:36:07)
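    A hedged observation and sketch (attribute names are the ones from the saveconfig.json above): with "generate_node_acls": 0 and an empty node_acls list, LIO only admits initiators that have an explicit ACL, regardless of AuthMethod, and the single ACL in the ls output above appears to use the target's own IQN rather than the initiator's. Two usual ways out in targetcli:
    # 1) create an ACL matching the initiator's IQN (the name the initiator itself reports; <initiator-iqn> is a placeholder)
    /iscsi/iqn.2003-01.org.linux-iscsi.u205.x8664:sn.5f9e1e0139f2/tpg1/acls> create <initiator-iqn>
    # 2) or open the TPG up in demo mode so any initiator may log in
    /iscsi/iqn.2003-01.org.linux-iscsi.u205.x8664:sn.5f9e1e0139f2/tpg1> set attribute generate_node_acls=1 cache_dynamic_acls=1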

  • Iscsi target rewriting sparse backing store

    Hi all,
    I have a particular problem when trying to use a sparse file residing on ZFS as the backing store for an iSCSI target. For the sake of this post, let's say I have to use a sparse file instead of a whole ZFS filesystem as the iSCSI backing store.
    However, as soon as the sparse file is used as the iSCSI target backing store, the Solaris OS (the iscsitgt process) decides to rewrite the entire sparse file and make it non-sparse. Note that all of this happens without any iSCSI initiator (client) having accessed the target.
    My question is why the sparse file is being rewritten at that point.
    I could expect a write at iSCSI initiator connect time, but why at iSCSI target creation time?
    Here are the steps:
    1. Create the sparse file, note the actual size,
    # dd if=/dev/zero of=sparse_file.dat bs=1024k count=1 seek=4096
    1+0 records in
    1+0 records out
    # du -sk .
    2
    # ll sparse_file.dat
    -rw-r--r--   1 root     root     4296015872 Feb  7 10:12 sparse_file.dat
    2. Create the iscsi target using that file as backing store:
    # iscsitadm create target --backing-store=$PWD/sparse_file.dat sparse
    3. The above command returns immediately; everything seems OK at this time.
    4. But after a couple of seconds, disk activity increases, and zpool iostat shows:
    # zpool iostat 3
                   capacity     operations    bandwidth
    pool         used  avail   read  write   read  write
    mypool  5.04G   144G      0    298      0  35.5M
    mypool  5.20G   144G      0    347      0  38.0M
    and so on, until the rewrite of the previously sparse 4 GB is finished.
    5. Note the real size now:
    # du -sk .
    4193252 .
    Note that all of the above happened with no iSCSI initiators connected to that node or target. The Solaris OS did it by itself, and I can see no reason why.
    I would like to keep those files sparse, at least until I use them as iSCSI targets, and I would prefer those files to grow as my initiators (clients) fill them.
    If anyone can share some thoughts on this, I'd appreciate it.
    Thanks,
    Robert

    Problem solved.
    The Solaris iSCSI target daemon configuration file has to be updated with:
    <thin-provisioning>true</thin-provisioning>
    so that iscsitgtd does not initialize the iSCSI target backing-store files. This is only valid for iSCSI targets that have files as their backing store.
    After creating iSCSI targets with a file (sparse or not) as the backing store, there is no I/O activity whatsoever, and that's what I wanted.
    FWIW, This is how the config file looks now.
    # more /etc/iscsi/target_config.xml
    <config version='1.0'>
    <thin-provisioning>true</thin-provisioning>
    </config>
    #
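    A small verification sketch (the iscsitgt service name appears earlier on this page; the second target name is made up for the test): after editing target_config.xml, restart the daemon and confirm that creating a target no longer inflates the file:
    svcadm restart svc:/system/iscsitgt:default
    iscsitadm create target --backing-store=$PWD/sparse_file.dat sparse2
    du -sk .     # should stay at a few kilobytes instead of growing to ~4 GB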

  • ISCSI targets for the Mac

    I setup a VMware vSphere 4 Server with RAID 10 direct-attached storage and 3 virtual machines:
    - OpenSolaris 2009.06 dev version (snv_111b) running 64-bit
    - CentOS 5.3 x64 (ran yum update)
    - Ubuntu Server 9.04 x64 (ran apt-get upgrade)
    I gave each virtual machine 2 GB of RAM and a 32 GB virtual drive, and set up a 16 GB iSCSI target on each (the two Linux VMs used iSCSI Enterprise Target 0.4.16 with blockio). VMware Tools was installed on each. No tuning was done on any of the operating systems.
    I ran two tests for write performance - one on the server itself and one from my MacBook Pro (10.5.7) connected via Gigabit (mtu of 1500) iSCSI connection using globalSAN 3.3.0.43.
    Here’s what I used on the servers:
    time dd if=/dev/zero of=/root/testfile bs=1048576k count=4
    and the Mac OS with the iSCSI connected drive (formatted with GPT / Mac OS Extended journaled):
    time dd if=/dev/zero of=/Volumes/test/testfile bs=1048576k count=4
    The results were very interesting (all calculations using 1 MB = 1,048,576 bytes).
    For OpenSolaris, the local write performance averaged 86 MB/s. I turned on lzjb compression for rpool (zfs set compression=lzjb rpool) and it went up to 414 MB/s (since I’m writing zeros). The average performance via iSCSI was an abysmal 16 MB/s (even with compression turned on - with it off, 13 MB/s).
    For CentOS (ext3), local write performance averaged 141 MB/s. iSCSI performance was 78 MB/s (almost as fast as local ZFS performance on the OpenSolaris server when compression was turned off).
    Ubuntu Server (ext4) had 150 MB/s for the local write. iSCSI performance averaged 80 MB/s.
    One of the main differences between the three virtual machines was that the iSCSI target on the Linux machines used partitions with no file system. On OpenSolaris, the iSCSI target created sits on top of ZFS. That creates a lot of overhead (although you do get some great features).
    Since all the virtual machines were connected to the same switch (with the same MTU), had the same amount of RAM, used default configurations for the operating systems, and sat on the same RAID 10 storage, I’d say it was a pretty level playing field.
    At this point, I think I'll be using Ubuntu 9.04 Server (64-bit) as my iSCSI target for Macs.
    Has anyone else done similar (or more extensive) testing?

    I had a lot of trouble with SimCity 4 on my iMac. It became such a headache that I returned it to CompUSA. It ran very choppily and crashed repeatedly when the city began to develop. My system FAR exceeds the system requirements for the game, and after some online research I discovered that I am not the only person to have this trouble with SimCity running on 10.4. I have also read about problems concerning The Sims 2. Some of what I have read indicates that 10.3 runs the games fine, but 10.4 causes them to crash. I don't know if this is the case, but I do know that I am now very wary of dropping $50 on a game that may not perform on my computer as it claims to on the box. Some people trying to run games are talking about waiting for Mac OS updates that will allow them to run smoother.
    I would check out what gamers are saying before buying anything
    http://www.macosx.com/forums/showthread.php?t=226286

  • Question about using ZFS root pool for whole disk?

    I played around with the newest version of Solaris 10/08 over my vacation by loading it onto a V210 with dual 72GB drives. I used the ZFS root partition configuration and all seem to go well.
    After I was done, I wondered if using the whole disk for the zpool was a good idea or not. I did some looking around but I didn't see anything that suggested that was good or bad.
    I already know about some of the flash archive issues and will be playing with those shortly, but I am curious how others are setting up their root ZFS pools.
    Would it be smarter to set up, say, a 9 GB slice on both drives, create the root zpool on that and mirror it to the other drive, and then create another ZFS pool from the remaining disk space?

    route1 wrote:
    Just a word of caution when using ZFS as your boot disk. There are tons of bugs in ZFS boot that can make the system un-bootable and un-recoverable.
    Can you expand upon that statement with supporting evidence (BugIDs and such)? I have a number of local zones (sparse and full) on three Sol10u6 SPARC machines and they've been booting fine. I am having problems LiveUpgrading (lucreate) that I'm scratching my head to resolve. But I haven't had any ZFS boot/root corruption.
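    For what it's worth, a sketch of the slice-based layout the question describes (device names are placeholders; on Solaris 10 a ZFS root pool has to live on a slice with an SMI label, so the installer's "whole disk" root pool is really just an s0 slice covering the disk):
    zpool attach rpool c0t0d0s0 c0t1d0s0       # mirror the root slice onto the second disk
    installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t1d0s0   # SPARC boot block on the new mirror side
    zpool create datapool mirror c0t0d0s7 c0t1d0s7   # remaining space as a separate mirrored pool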

  • Unable to use device as an iSCSI target

    My intended purpose is to have iSCSI targets for a VirtualBox setup at home, where block device files back the virtual systems and existing data on a large RAID partition is exported as well. I'm able to successfully export the block files I created with dd, having added them as backing-stores in the targets.conf file:
    include /etc/tgt/temp/*.conf
    default-driver iscsi
    <target iqn.2012-09.net.domain:vm.fsrv>
    backing-store /srv/vm/disks/iscsi-disk-fsrv
    </target>
    <target iqn.2012-09.net.domain:vm.wsrv>
    backing-store /srv/vm/disks/iscsi-disk-wsrv
    </target>
    <target iqn.2012-09.net.domain:lan.storage>
    backing-store /dev/md0
    </target>
    but the last one with /dev/md0 only creates the controller and not the disk.
    The RAID device is mounted; I don't know whether or not that matters, and unfortunately I can't try it unmounted yet because it is in use. I've tried all permutations of backing-store and direct-store with md0 as well as another device (sda), with and without the partition number; all had the same result.
    If anyone has successfully exported a device (specifically an md multi-disk device), I'd be really interested in knowing how. Also, if anyone knows how, or whether it's even possible, to use a directory as the backing/direct store, I'd like to know that as well; my attempts there have been unsuccessful too.
    I will preempt anyone asking why I'm not using some other technology, e.g. NFS, CIFS, ZFS, etc., by saying that this is largely academic. I want to compare the performance of a virtualized file server whose content is served over both NFS and iSCSI, and the NFS part is easy.
    Thanks.
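    One hedged way to get more information than the config-file route gives (these are the standard tgtadm commands shipped with tgt; the tid and LUN numbers are arbitrary) is to create the target and LUN by hand and watch for the error message:
    tgtadm --lld iscsi --op new --mode target --tid 3 -T iqn.2012-09.net.domain:lan.storage
    tgtadm --lld iscsi --op new --mode logicalunit --tid 3 --lun 1 -b /dev/md0
    tgtadm --lld iscsi --op show --mode target    # the LUN should appear under the target if it worked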

    Mass storage only looks at the memory expansion.
    Did you have a micro SD card in it?
    What OS on the PC are you running?

  • [SOLVED] Installing on ZFS root: "ZFS: cannot find bootfs" on boot.

    I have been experimenting with ZFS filesystems on external HDDs for some time now to get more comfortable with using ZFS in the hopes of one day reinstalling my system on a ZFS root.
    Today, I tried installing a system on a USB external HDD, as my first attempt to install on ZFS (I wanted to try in a safe, disposable environment before I try this on my main system).
    My partition configuration (from gdisk):
    Command (? for help): p
    Disk /dev/sdb: 3907024896 sectors, 1.8 TiB
    Logical sector size: 512 bytes
    Disk identifier (GUID): 2FAE5B61-CCEF-4E1E-A81F-97C8406A07BB
    Partition table holds up to 128 entries
    First usable sector is 34, last usable sector is 3907024862
    Partitions will be aligned on 8-sector boundaries
    Total free space is 0 sectors (0 bytes)
    Number  Start (sector)    End (sector)  Size        Code  Name
       1                34            2047  1007.0 KiB  EF02  BIOS boot partition
       2              2048          264191  128.0 MiB   8300  Linux filesystem
       3            264192      3902828543  1.8 TiB     BF00  Solaris root
       4        3902828544      3907024862  2.0 GiB     8300  Linux filesystem
    Partition #1 is for grub, obviously. Partition #2 is an ext2 partition that I mount on /boot in the new system. Partition #3 is where I make my ZFS pool.
    Partition #4 is an ext4 filesystem containing another minimal Arch system for recovery and setup purposes. GRUB is installed on the other system on partition #4, not in the new ZFS system.
    I let grub-mkconfig generate a config file from the system on partition #4 to boot that. Then, I manually edited the generated grub.cfg file to add this menu entry for my ZFS system:
    menuentry 'ZFS BOOT' --class arch --class gnu-linux --class gnu --class os {
    load_video
    set gfxpayload=keep
    insmod gzio
    insmod part_gpt
    insmod ext2
    set root='hd0,gpt2'
    echo 'Loading Linux core repo kernel ...'
    linux /vmlinuz-linux zfs=bootfs zfs_force=1 rw quiet
    echo 'Loading initial ramdisk ...'
    initrd /initramfs-linux.img
    }
    My ZFS configuration:
    # zpool list
    NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT
    External2TB 1.81T 6.06G 1.81T 0% 1.00x ONLINE -
    # zpool status :(
    pool: External2TB
    state: ONLINE
    scan: none requested
    config:
    NAME STATE READ WRITE CKSUM
    External2TB ONLINE 0 0 0
    usb-WD_Elements_1048_575836314135334C32383131-0:0-part3 ONLINE 0 0 0
    errors: No known data errors
    # zpool get bootfs
    NAME PROPERTY VALUE SOURCE
    External2TB bootfs External2TB/ArchSystemMain local
    # zfs list
    NAME USED AVAIL REFER MOUNTPOINT
    External2TB 14.6G 1.77T 30K none
    External2TB/ArchSystemMain 293M 1.77T 293M /
    External2TB/PacmanCache 5.77G 1.77T 5.77G /var/cache/pacman/pkg
    External2TB/Swap 8.50G 1.78T 20K -
    The reason for the above configuration is that after I get this system to work, I want to install a second system in the same zpool on a different dataset, and have them share a pacman cache.
    GRUB "boots" successfully, in that it loads the kernel and the initramfs as expected from the 2nd GPT partition. The problem is that the kernel does not load the ZFS:
    ERROR: device '' not found. Skipping fsck.
    ZFS: Cannot find bootfs.
    ERROR: Failed to mount the real root device.
    Bailing out, you are on your own. Good luck.
    and I am left in busybox in the initramfs.
    What am I doing wrong?
    Also, here is my /etc/fstab in the new system:
    # External2TB/ArchSystemMain
    #External2TB/ArchSystemMain / zfs rw,relatime,xattr 0 0
    # External2TB/PacmanCache
    #External2TB/PacmanCache /var/cache/pacman/pkg zfs rw,relatime,xattr 0 0
    UUID=8b7639e2-c858-4ff6-b1d4-7db9a393578f /boot ext4 rw,relatime 0 2
    UUID=7a37363e-9adf-4b4c-adfc-621402456c55 none swap defaults 0 0
    I also tried to boot using "zfs=External2TB/ArchSystemMain" in the kernel options, since that was the more logical way to approach my intention of having multiple systems on different datasets. It would allow me to simply create separate grub menu entries for each, with different boot datasets in the kernel parameters. I also tried setting the mount points to "legacy" and uncommenting the zfs entries in my fstab above. That didn't work either and produced the same results, and that was why I decided to try to use "bootfs" (and maybe have a script for switching between the systems by changing the ZFS bootfs and mountpoints before reboot, reusing the same grub menuentry).
    Thanks in advance for any help.
    Last edited by tajjada (2013-12-30 20:03:09)

    Sounds like a zpool.cache issue. I'm guessing your zpool.cache inside your arch-chroot is not up to date. So on boot the ZFS hook cannot find the bootfs. At least, that's what I assume the issue is, because of this line:
    ERROR: device '' not found. Skipping fsck.
    If your zpool.cache was populated, it would spit out something other than an empty string.
    Some assumptions:
    - You're using the ZFS packages provided by demizer (repository or AUR).
    - You're using the Arch Live ISO or some version of it.
    On cursory glance your configuration looks good. But verify anyway. Here are the steps you should follow to make sure your zpool.cache is correct and up to date:
    Outside arch-chroot:
    - Import pools (not using '-R') and verify the mountpoints.
    - Make a copy of the /etc/zfs/zpool.cache before you export any pools. Again, make a copy of the /etc/zfs/zpool.cache before you export any pools. The reason for this is once you export a pool the /etc/zfs/zpool.cache gets updated and removes any reference to the exported pool. This is likely the cause of your issue, as you would have an empty zpool.cache.
    - Import the pool containing your root filesystem using the '-R' flag, and mount /boot within.
    - Make sure to copy your updated zpool.cache to your arch-chroot environment.
    Inside arch-chroot:
    - Make sure your bootloader is configured properly (i.e. read 'mkinitcpio -H zfs').
    - Use the 'udev' hook and not the 'systemd' one in your mkinitcpio.conf. The zfs-utils package does not have a ported hook (as of 0.6.2_3.12.6-1).
    - Update your initramfs.
    Outside arch-chroot:
    - Unmount filesystems.
    - Export pools.
    - Reboot.
    Inside new system:
    - Make sure to update the hostid then rebuild your initramfs. Then you can drop the 'zfs_force=1'.
    Good luck. I enjoy root on ZFS myself. However, I wouldn't recommend swap on ZFS. Despite what the ZoL tracker says, I still ran into deadlocks on occasion (as of a month ago). I cannot say definitively what caused the issue, but it resolved when I moved swap off ZFS to a dedicated partition.
    Last edited by NVS (2013-12-29 14:56:44)
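    A condensed sketch of the cache handling described above (the pool name, /boot device and mount path are taken from this thread; the dataset with mountpoint / may refuse to mount on the live system during the first import, which does not matter for refreshing the cache):
    zpool import External2TB                      # no -R: this run refreshes /etc/zfs/zpool.cache
    cp /etc/zfs/zpool.cache /tmp/zpool.cache      # save it before exporting (export drops the entry)
    zpool export External2TB
    zpool import -R /mnt External2TB              # re-import for the chroot work
    mount /dev/sdb2 /mnt/boot
    cp /tmp/zpool.cache /mnt/etc/zfs/zpool.cache  # the zfs initramfs hook reads this at boot
    arch-chroot /mnt mkinitcpio -p linux          # rebuild the initramfs (zfs + udev hooks)
    umount /mnt/boot
    zpool export External2TB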

  • Failed Creation ISCSI Target on NSS324

    Hello,
    I configured an NSS324 with 4 drives of 2 TB in RAID 5. I had no problem with that.
    But when I tried to create an iSCSI target (one LUN of my total capacity: 5.36 TB), it took more than 15 hours, and after 15 hours I got this error: [iSCSI] Failed to create iSCSI target. (error code=5).
    Can you help me? Can I create a LUN of my whole capacity?
    Thanks a lot
    Sev

    Please use code tags for your config and other terminal output for better readability. Still, it looks ok to me.
    I have targetcli-fb on one of my Arch boxes, but it's an old version (2.1.fb35-1) built back in June (official packages are up to date). Discovery and login from another Arch box works. I don't have time to troubleshoot further, but if you haven't found a solution by Monday I can update and maybe be of more use.

  • ZFS dataset issue after zone migration

    Hi,
    I thought I'd document this as I could not find any references to people having run into this problem during zone migration.
    Last night I moved a full-root zone from a Solaris 10u4 host to a Solaris 10u7 host. It has a delegated zfs pool.
    The migration was smooth, with a zoneadm halt, followed by a zoneadm detach on the other node.
    An unmount of the ufs SAN LUN (which contained the zone root) on host A and a mount on host B (which is sharing the storage between the two nodes).
    The zoneadm attach worked after complaining about missing patches and packages (since the zone was Solaris 10 u4 as well).
    A zoneadm attach -F started the zone on host B, but did not detect the ZFS pool.
    After searching for possible fixes, trying to identify the issue, I halted the zone again on host B and did a zoneadm attach -u (which upgraded the zone to u7).
    At which point, a zoneadm attach and zoneadm boot resulted in the ZFS dataset being visible again...
    All in all a smooth process, but I got a couple of gray hairs trying to figure out why the dataset was not visible after force-attaching the zone...
    Any insights from Sun Gurus are welcome.

    I am looking at a similar migration scenario, so my question is did you get the webserver back up as well?
    Cheers,
    Davy
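    For reference, a sketch of the sequence described above (zone name, mount point and LUN device are placeholders):
    # on host A
    zoneadm -z myzone halt
    zoneadm -z myzone detach
    umount /zones/myzone                  # the UFS SAN LUN holding the zone root
    # on host B
    mount /dev/dsk/c2t0d0s0 /zones/myzone
    zoneadm -z myzone attach -u           # -u updates the zone's packages/patches to match host B
    zoneadm -z myzone boot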

  • Share and save problem after latest update

    hello,
    I have a share and save problem after the latest update. I installed PS Touch on a Nexus 5 about 2 months ago and until yesterday everything was good, but after the latest update I can't save and share images in PS Touch (I reinstalled the app, but the problem is still there and the app doesn't work properly). How can I downgrade to the old version? Could you fix this problem quickly?

    I used an app called "aLogcat"; here is the log:
    --------- beginning of /dev/log/main
    I/dalvikvm(24476): Enabling JNI app bug workarounds for target SDK version 11...
    D/dalvikvm(24476): GC_CONCURRENT freed 174K, 2% free 17054K/17264K, paused 2ms+2ms, total 15ms
    W/InputEventReceiver(24476): Attempted to finish an input event but the input event receiver has already been disposed.
    W/InputEventReceiver(24476): Attempted to finish an input event but the input event receiver has already been disposed.
    W/InputEventReceiver(24476): Attempted to finish an input event but the input event receiver has already been disposed.
    --------- beginning of /dev/log/system
    W/ViewRootImpl(24476): Dropping event due to root view being removed: MotionEvent { action=ACTION_MOVE, id[0]=0, x[0]=38.0, y[0]=-115.0, toolType[0]=TOOL_TYPE_FINGER, buttonState=0, metaState=0, flags=0x0, edgeFlags=0x0, pointerCount=1, historySize=2, eventTime=27743080, downTime=27743066, deviceId=4, source=0x1002 }
    W/ViewRootImpl(24476): Dropping event due to root view being removed: MotionEvent { action=ACTION_UP, id[0]=0, x[0]=38.0, y[0]=-115.0, toolType[0]=TOOL_TYPE_FINGER, buttonState=0, metaState=0, flags=0x0, edgeFlags=0x0, pointerCount=1, historySize=0, eventTime=27743086, downTime=27743066, deviceId=4, source=0x1002 }
    D/dalvikvm(24476): GC_FOR_ALLOC freed 122K, 1% free 17271K/17428K, paused 8ms, total 8ms
    D/dalvikvm(24476): GC_FOR_ALLOC freed 10K, 1% free 17434K/17604K, paused 7ms, total 7ms
    D/dalvikvm(24476): GC_FOR_ALLOC freed 65K, 2% free 17690K/17928K, paused 8ms, total 8ms
    I/dalvikvm-heap(24476): Grow heap (frag case) to 17.615MB for 328336-byte allocation
    D/dalvikvm(24476): GC_FOR_ALLOC freed 0K, 2% free 18010K/18252K, paused 8ms, total 8ms
    D/dalvikvm(24476): GC_FOR_ALLOC freed <1K, 2% free 18011K/18252K, paused 9ms, total 9ms
    I/dalvikvm-heap(24476): Grow heap (frag case) to 18.073MB for 479536-byte allocation
    D/dalvikvm(24476): GC_FOR_ALLOC freed 0K, 2% free 18479K/18724K, paused 8ms, total 8ms
    D/dalvikvm(24476): GC_CONCURRENT freed <1K, 2% free 18481K/18724K, paused 2ms+1ms, total 10ms
    D/dalvikvm(24476): WAIT_FOR_CONCURRENT_GC blocked 6ms
    I/dalvikvm-heap(24476): Grow heap (frag case) to 18.532MB for 479536-byte allocation
    D/dalvikvm(24476): GC_FOR_ALLOC freed <1K, 2% free 18949K/19196K, paused 9ms, total 9ms
    D/dalvikvm(24476): GC_CONCURRENT freed <1K, 2% free 19418K/19668K, paused 1ms+1ms, total 10ms
    D/dalvikvm(24476): WAIT_FOR_CONCURRENT_GC blocked 7ms
    D/dalvikvm(24476): GC_FOR_ALLOC freed <1K, 2% free 19592K/19844K, paused 8ms, total 8ms
    I/dalvikvm-heap(24476): Grow heap (frag case) to 20.049MB for 933136-byte allocation
    D/dalvikvm(24476): GC_FOR_ALLOC freed 0K, 2% free 20503K/20756K, paused 8ms, total 8ms
    D/dalvikvm(24476): GC_CONCURRENT freed <1K, 2% free 21416K/21668K, paused 1ms+1ms, total 12ms
    D/dalvikvm(24476): GC_FOR_ALLOC freed 50K, 2% free 22351K/22580K, paused 7ms, total 8ms
    D/dalvikvm(24476): GC_FOR_ALLOC freed 5K, 1% free 23900K/24140K, paused 8ms, total 8ms
    D/dalvikvm(24476): GC_FOR_ALLOC freed <1K, 1% free 25824K/26084K, paused 8ms, total 8ms
    D/dalvikvm(24476): GC_FOR_ALLOC freed 3411K, 14% free 24528K/28204K, paused 12ms, total 13ms
    D/dalvikvm(24476): GC_CONCURRENT freed 2814K, 14% free 24256K/28204K, paused 2ms+1ms, total 11ms
    D/dalvikvm(24476): WAIT_FOR_CONCURRENT_GC blocked 4ms
    D/dalvikvm(24476): GC_FOR_ALLOC freed 9266K, 1% free 17270K/17424K, paused 10ms, total 10ms
    D/dalvikvm(24476): GC_FOR_ALLOC freed 15K, 1% free 17575K/17748K, paused 8ms, total 8ms
    D/dalvikvm(24476): GC_FOR_ALLOC freed 22K, 2% free 17875K/18072K, paused 8ms, total 8ms
    D/dalvikvm(24476): GC_FOR_ALLOC freed 1K, 2% free 18195K/18396K, paused 8ms, total 8ms
    D/dalvikvm(24476): GC_FOR_ALLOC freed <1K, 2% free 18517K/18720K, paused 8ms, total 8ms
    D/dalvikvm(24476): GC_FOR_ALLOC freed <1K, 2% free 18838K/19044K, paused 11ms, total 11ms
    D/dalvikvm(24476): GC_FOR_ALLOC freed <1K, 2% free 19159K/19368K, paused 8ms, total 8ms
    D/dalvikvm(24476): GC_FOR_ALLOC freed <1K, 2% free 19480K/19692K, paused 8ms, total 8ms
    D/dalvikvm(24476): GC_FOR_ALLOC freed <1K, 2% free 19974K/20192K, paused 8ms, total 8ms
    D/dalvikvm(24476): GC_FOR_ALLOC freed <1K, 2% free 20616K/20840K, paused 8ms, total 8ms
    D/dalvikvm(24476): GC_FOR_ALLOC freed <1K, 2% free 21431K/21664K, paused 8ms, total 8ms
    D/dalvikvm(24476): GC_FOR_ALLOC freed <1K, 2% free 22714K/22960K, paused 8ms, total 8ms
    D/dalvikvm(24476): GC_FOR_ALLOC freed 182K, 2% free 24004K/24432K, paused 8ms, total 8ms
    D/dalvikvm(24476): GC_FOR_ALLOC freed 8478K, 33% free 17593K/26228K, paused 14ms, total 14ms
    D/dalvikvm(24476): GC_FOR_ALLOC freed 18K, 32% free 17895K/26228K, paused 12ms, total 12ms
    D/dalvikvm(24476): GC_FOR_ALLOC freed 18K, 31% free 18198K/26228K, paused 11ms, total 11ms
    D/dalvikvm(24476): GC_FOR_ALLOC freed 1K, 30% free 18519K/26228K, paused 8ms, total 8ms
    D/dalvikvm(24476): GC_FOR_ALLOC freed <1K, 29% free 18840K/26228K, paused 8ms, total 8ms
    D/dalvikvm(24476): GC_CONCURRENT freed <1K, 26% free 19482K/26228K, paused 2ms+1ms, total 14ms
    D/dalvikvm(24476): WAIT_FOR_CONCURRENT_GC blocked 10ms
    D/dalvikvm(24476): GC_CONCURRENT freed <1K, 23% free 20297K/26228K, paused 2ms+1ms, total 15ms
    D/dalvikvm(24476): WAIT_FOR_CONCURRENT_GC blocked 11ms
    D/dalvikvm(24476): GC_FOR_ALLOC freed <1K, 19% free 21260K/26228K, paused 10ms, total 10ms
    D/dalvikvm(24476): GC_FOR_ALLOC freed <1K, 15% free 22543K/26228K, paused 8ms, total 8ms
    D/dalvikvm(24476): GC_FOR_ALLOC freed 182K, 9% free 23979K/26228K, paused 21ms, total 22ms
    D/dalvikvm(24476): GC_FOR_ALLOC freed 8485K, 33% free 17598K/26132K, paused 13ms, total 14ms
    D/dalvikvm(24476): GC_FOR_ALLOC freed 25K, 32% free 17894K/26132K, paused 12ms, total 12ms
    D/dalvikvm(24476): GC_FOR_ALLOC freed 12K, 31% free 18204K/26132K, paused 12ms, total 12ms
    D/dalvikvm(24476): GC_FOR_ALLOC freed 1K, 30% free 18524K/26132K, paused 12ms, total 12ms
    D/dalvikvm(24476): GC_FOR_ALLOC freed <1K, 28% free 18845K/26132K, paused 8ms, total 9ms
    D/dalvikvm(24476): GC_CONCURRENT freed <1K, 26% free 19487K/26132K, paused 1ms+0ms, total 9ms
    D/dalvikvm(24476): WAIT_FOR_CONCURRENT_GC blocked 7ms
    D/dalvikvm(24476): GC_FOR_ALLOC freed <1K, 23% free 20129K/26132K, paused 9ms, total 9ms
    D/dalvikvm(24476): GC_FOR_ALLOC freed <1K, 20% free 21091K/26132K, paused 8ms, total 8ms
    D/dalvikvm(24476): GC_FOR_ALLOC freed <1K, 15% free 22375K/26132K, paused 8ms, total 8ms
    D/dalvikvm(24476): GC_FOR_ALLOC freed 9K, 9% free 24011K/26132K, paused 14ms, total 14ms
    D/dalvikvm(24476): GC_FOR_ALLOC freed 2147K, 10% free 23855K/26456K, paused 8ms, total 8ms
    D/dalvikvm(24476): GC_CONCURRENT freed 8776K, 35% free 17337K/26304K, paused 3ms+2ms, total 20ms
    W/InputEventReceiver(24476): Attempted to finish an input event but the input event receiver has already been disposed.
    W/InputEventReceiver(24476): Attempted to finish an input event but the input event receiver has already been disposed.
    W/ViewRootImpl(24476): Dropping event due to root view being removed: MotionEvent { action=ACTION_UP, id[0]=0, x[0]=564.0, y[0]=-50.0, toolType[0]=TOOL_TYPE_FINGER, buttonState=0, metaState=0, flags=0x0, edgeFlags=0x0, pointerCount=1, historySize=0, eventTime=28190972, downTime=28190972, deviceId=4, source=0x1002 }
    D/dalvikvm(24476): GC_CONCURRENT freed 186K, 31% free 17626K/25476K, paused 2ms+1ms, total 12ms
    D/dalvikvm(24476): WAIT_FOR_CONCURRENT_GC blocked 4ms
    D/dalvikvm(24476): GC_FOR_ALLOC freed 63K, 30% free 17884K/25476K, paused 8ms, total 8ms
    D/dalvikvm(24476): GC_FOR_ALLOC freed 3K, 29% free 18201K/25476K, paused 8ms, total 8ms
    D/dalvikvm(24476): GC_CONCURRENT freed <1K, 27% free 18696K/25476K, paused 2ms+1ms, total 12ms
    D/dalvikvm(24476): WAIT_FOR_CONCURRENT_GC blocked 9ms
    D/dalvikvm(24476): GC_FOR_ALLOC freed <1K, 25% free 19190K/25476K, paused 8ms, total 8ms
    D/dalvikvm(24476): GC_FOR_ALLOC freed <1K, 23% free 19831K/25476K, paused 9ms, total 9ms
    D/dalvikvm(24476): GC_CONCURRENT freed <1K, 19% free 20794K/25476K, paused 2ms+0ms, total 12ms
    D/dalvikvm(24476): WAIT_FOR_CONCURRENT_GC blocked 8ms
    D/dalvikvm(24476): GC_CONCURRENT freed <1K, 14% free 22078K/25476K, paused 1ms+2ms, total 14ms
    D/dalvikvm(24476): WAIT_FOR_CONCURRENT_GC blocked 12ms
    D/dalvikvm(24476): GC_FOR_ALLOC freed <1K, 7% free 23855K/25476K, paused 10ms, total 10ms
    D/dalvikvm(24476): GC_CONCURRENT freed <1K, 6% free 24176K/25476K, paused 2ms+1ms, total 14ms
    D/dalvikvm(24476): GC_FOR_ALLOC freed 6440K, 26% free 19562K/26216K, paused 9ms, total 10ms
    D/dalvikvm(24476): GC_FOR_ALLOC freed 93K, 23% free 20207K/26216K, paused 10ms, total 10ms
    D/dalvikvm(24476): GC_FOR_ALLOC freed 11K, 21% free 20935K/26216K, paused 8ms, total 8ms
    D/dalvikvm(24476): GC_FOR_ALLOC freed <1K, 18% free 21673K/26216K, paused 9ms, total 9ms
    D/dalvikvm(24476): GC_FOR_ALLOC freed <1K, 15% free 22499K/26216K, paused 8ms, total 8ms
    D/dalvikvm(24476): GC_FOR_ALLOC freed <1K, 10% free 23695K/26216K, paused 9ms, total 9ms

  • Patching zfs root in single user mode

    Hi
    I have to apply the Oracle recommended patchset to a server running on ZFS root, with the zone roots also on ZFS. The usual method of patching with Live Upgrade is giving some problems after the activation of the ABE. Can the patchset be applied to the server in the traditional single-user mode?
    Any input would be appreciated.

    Yes, you can apply the patchset in single-user mode. Just keep in mind that your zones need to be bootable when patching, as the patch installation will "partly" boot each zone when applying the patches. So if your zones rely on any NFS mounts etc., you will need to mount these first.
    An alternative is to detach your zones during patching and then attach them using "zoneadm -z zonename attach -u" (or -U; see the zoneadm(1M) man page for details on the differences).
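    A sketch of the detach-before-patching variant mentioned above (the zone name is a placeholder; the installer script name is the one shipped with recent Oracle recommended patchsets, so check the README of your bundle):
    zoneadm -z zone1 detach
    # boot to single-user mode and apply the patchset from its unpacked directory, e.g.
    ./installpatchset --s10patchset
    zoneadm -z zone1 attach -u            # or -U, per the zoneadm(1M) man page
    zoneadm -z zone1 boot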

  • ISCSI target setup fails: command "iscsitadm" not available?

    Hello,
    I want to set up an iSCSI target; however, it seems I don't have iscsitadm available on my system, only iscsiadm.
    What should I do?
    Is this
    http://alessiodini.wordpress.com/2010/10/24/iscsi-nice-to-meet-you/
    still valid in terms of the setup procedure?
    Thanks

    Ok,
    here you go using COMSTAR:
    pkg install storage-server
    pkg install -v SUNWiscsit
    http://thegreyblog.blogspot.com/2010/02/setting-up-solaris-comstar-and.html
    svcs \*stmf\*                                              # check the STMF framework service
    svcadm enable svc:/system/stmf:default                    # enable it if it is disabled
    zfs create -V 250G tank-esata/macbook0-tm                 # create a zvol to export
    sbdadm create-lu /dev/zvol/rdsk/tank-esata/macbook0-tm    # register the zvol as a logical unit
    sbdadm list-lu
    stmfadm list-lu -v
    stmfadm add-view 600144f00800271b51c04b7a6dc70001         # the GUID comes from the list-lu output
    svcs \*scsi\*
    itadm create-target                                       # create the iSCSI target (COMSTAR)
    devfsadm -i iscsi
    reboot
    Solaris 11 Express iSCSI manual:
    http://dlc.sun.com/pdf/821-1459/821-1459.pdf
    and that for reference
    http://nwsmith.blogspot.com/2009/07/opensolaris-2009-06-and-comstar-iscsi.html
    Windows iSCSI initiator
    http://www.microsoft.com/downloads/en/details.aspx?familyid=12cb3c1a-15d6-4585-b385-befd1319f825&displaylang=en
    It works after manually adding the server's IP (no auto-detection).
