iSCSI in Solaris 5.10.1 Express 3/05

Since Solaris 10 Express (5.10.1) 3/05 (starting from 2/05), iscsiadm is provided as the iSCSI initiator.
I have a Cisco SN5428 iSCSI router (the storage behind it is a Sun StorEdge 9980), and I already have some Solaris 9 iSCSI clients set up in our environment using the Cisco iSCSI driver.
I want to try the Solaris 10 iSCSI driver, so I loaded the latest Solaris 10 Express 3/05 (build 09) on a Sun blade server; unfortunately, I can't get iSCSI to work.
Since I have set up several Solaris 9 iSCSI clients before, and I double-checked my iSCSI target settings on the Cisco router, the iSCSI disks should be available there (at least they would be if the box were Solaris 9 with the Cisco driver), and I know the target is configured without CHAP, RADIUS, or CRC digests.
Please see the commands I tried on the Solaris 10 test box below. Could anyone give me some hints on how to get this working?
Also, by the way, it seems the iscsiadm(1M) man page is not included in this distro (I installed SUNWXall).
# uname -a
SunOS fb2-sb0 5.10.1 snv_09 sun4u sparc SUNW,Serverblade1
# grep iscsiadm /var/sadm/install/contents
/usr/sbin/iscsiadm f none 0555 root sys 100516 59590 1109092145 SUNWiscsiu
# cat /etc/hosts
# Internet host table
127.0.0.1 localhost
xxx.xxx.xxx.xxx fb2-sb0 fb2-sb0.xxx.com loghost
10.10.107.101 iscsirouter01
10.10.107.102 iscsirouter02
# ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
inet 127.0.0.1 netmask ff000000
ce0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
inet xxx.xxx.xxx.xxx netmask ffffff00 broadcast xxx.xxx.xxx.255
ether 0:3:ba:xx:xx:xx
# ping iscsirouter01
iscsirouter01 is alive
# ping iscsirouter02
iscsirouter02 is alive
# ping 10.10.107.101
10.10.107.101 is alive
# ping 10.10.107.102
10.10.107.102 is alive
# iscsiadm remove discovery_address 10.10.107.101
# iscsiadm remove discovery_address 10.10.107.102
# iscsiadm list discovery_address
# iscsiadm add discovery_address 10.10.107.101
# iscsiadm add discovery_address 10.10.107.102
# iscsiadm list discovery_address
Discovery Address: 10.10.107.101:3260
Discovery Address: 10.10.107.102:3260
# iscsiadm list discovery
Discovery:
Static: enabled
Send Targets: enabled
# iscsiadm list initiator_node
iscsiadm: Unable to complete operation
# devfsadm -i iscsi
# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c0t0d0 <TOS MK3019GAXB SUN30G cyl 58138 alt 2 hd 16 sec 63>
/pci@1f,0/ide@d/dad@0,0
Specify disk (enter its number): ^D
# devfsadm
# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c0t0d0 <TOS MK3019GAXB SUN30G cyl 58138 alt 2 hd 16 sec 63>
/pci@1f,0/ide@d/dad@0,0
Specify disk (enter its number): ^D
# iscsiadm list target
# iscsiadm list initiator_node
iscsiadm: Unable to complete operation
# svcs -a |grep iscsi
online 2:23:57 svc:/network/iscsi_initiator:default
# iscsiadm modify initiator_node -?
iscsiadm modify initiator_node <OPTIONS>
OPTIONS:
-N, --node-name  <initiator node name>
-A, --node-alias  <initiator node alias>
-h, --headerdigest  <none|CRC32>
-d, --datadigest  <none|CRC32>
-C, --CHAP-Secret   (exclusive)
-a, --authentication  <chap|none>
-R, --radius-access  <enable|disable>
-r, --radius-server  <<IP address>:port>
-S, --radius-shared-secret   (exclusive)
For more information, please see iscsiadm(1M)
# iscsiadm modify initiator_node -N fb2-sb0
iscsiadm: unknown iSCSI name type.
iscsiadm: Unable to complete operation
# cd /var/svc/log
# ls scsi
network-iscsi_initiator:default.log
# cat network-iscsi_initiator:default.log
[ Apr  9 00:14:13 Disabled. ]
[ Apr  9 00:14:13 Rereading configuration. ]
[ Apr  9 01:42:21 Enabled. ]
[ Apr  9 01:42:21 Executing start method ("/lib/svc/method/iscsid") ]
[ Apr  9 01:42:21 Method "start" exited with status 0 ]
[ Apr  9 01:44:37 Stopping because service restarting. ]
[ Apr  9 01:44:38 Executing stop method (:kill) ]
[ Apr  9 01:44:38 Executing start method ("/lib/svc/method/iscsid") ]
[ Apr  9 01:44:38 Method "start" exited with status 0 ]
[ Apr  9 02:22:08 Stopping because service disabled. ]
[ Apr  9 02:22:08 Executing stop method (:kill) ]
[ Apr  9 02:35:37 Stopping because service restarting. ]
[ Apr  9 02:35:37 Executing stop method (:kill) ]
[ Apr  9 02:35:37 Executing start method ("/lib/svc/method/iscsid") ]
[ Apr  9 02:35:37 Method "start" exited with status 0 ]
#
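(For reference, the advice being followed in the next post boils down to: disable both discovery methods, set the initiator node name to a valid iqn-format name, then re-enable SendTargets discovery and rescan - roughly:
# iscsiadm modify discovery -t disable
# iscsiadm modify discovery -s disable
# iscsiadm modify initiator_node -N iqn.2005-04.fb2-sb0
# iscsiadm modify discovery -t enable
# devfsadm -i iscsi
The iqn name above is just the one used later in this thread; any syntactically valid iqn/eui name should be accepted.)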

I followed your instructions and it made some progress, but when I run devfsadm -i iscsi, the server panics.
Please see the command output and the console log below:
$ su -
Password:
Sun Microsystems Inc. SunOS 5.10.1 snv_09 October 2007
# iscsiadm modify discovery -t disable
# iscsiadm modify discovery -s disable
# iscsiadm modify initiator_node -N iqn.2005-04.fb2-sb0
# iscsiadm list initiator_node
Initiator node name: iqn.2005-04.fb2-sb0
Initiator node alias: -
Login Parameters (Default/Configured):
Header Digest: NONE/-
Data Digest: NONE/-
Authentication Type: NONE
RADIUS Server: NONE
RADIUS access: unknown
# iscsiadm list discovery_address
Discovery Address: 10.10.107.101:3260
Discovery Address: 10.10.107.102:3260
# iscsiadm list discovery_address -v 10.10.107.101:3260
Discovery Address: 10.10.107.101:3260
Target name: iqn.1987-05.com.cisco:00.f5a1b374bf74.s10test
Target address: 10.10.107.101:3260, 1
# iscsiadm list discovery_address -v 10.10.107.102:3260
Discovery Address: 10.10.107.102:3260
Target name: iqn.1987-05.com.cisco:00.932abc4bbcbd.s10test2
Target address: 10.10.107.102:3260, 1
# iscsiadm modify discovery -t enable
# iscsiadm list target
Target: iqn.1987-05.com.cisco:00.932abc4bbcbd.s10test2
Target Portal Group Tag: 1
Connections: 0
Target: iqn.1987-05.com.cisco:00.f5a1b374bf74.s10test
Target Portal Group Tag: 1
Connections: 0
# iscsiadm list target -S
Target: iqn.1987-05.com.cisco:00.932abc4bbcbd.s10test2
Target Portal Group Tag: 1
Connections: 0
Target: iqn.1987-05.com.cisco:00.f5a1b374bf74.s10test
Target Portal Group Tag: 1
Connections: 0
# iscsiadm list target -v
Target: iqn.1987-05.com.cisco:00.932abc4bbcbd.s10test2
Target Portal Group Tag: 1
Connections: 0
Discovery Method: SendTargets
Target: iqn.1987-05.com.cisco:00.f5a1b374bf74.s10test
Target Portal Group Tag: 1
Connections: 0
Discovery Method: SendTargets
# iscsiadm list discovery
Discovery:
Static: disabled
Send Targets: enabled
# devfsadm -i iscsi
Connection closed by foreign host.
On Console:
panic[cpu0]/thread=2a1008c9cc0: Apr 11 15:22:35 fb2-sb0 iscsi: NOTICE: iscsi session(8) iqn.1987-05.com.cisco:00BAD TRAP: type=31 rp=2a1008c8800 addr=38 mmu_fsr=0 occurred in module "scsi_vhci" due to a NULL pointer dereference
sched: trap type = 0x31
addr=0x38
pid=0, pc=0x12271b8, sp=0x2a1008c80a1, tstate=0x4480001603, context=0x0
g1-g7: f0, 2, 0, c6e1, c6e0, 0, 2a1008c9cc0
000002a1008c8520 unix:die+78 (31, 2a1008c8800, 38, 0, 2a1008c85e0, 106f400)
%l0-3: 0000000000001fff 0000000000000031 0000000001000000 0000000000002000
%l4-7: 0000000001819378 0000000001819000 0000000000010000 00000000d25a2038
000002a1008c8600 unix:trap+8f0 (2a1008c8800, 38, 0, 1, 0, 1)
%l0-3: 00000000018338c0 0000000000000031 0000000000000005 0000000000000005
%l4-7: 00000000d25a2038 0000000000001fff 0000000000000000 000000000180c180
000002a1008c8750 unix:ktl0+48 (0, 0, 3000aeb3580, 6, 20, 18b7000)
%l0-3: 0000000000000004 0000000000001400 0000004480001603 0000000001013ea0
%l4-7: 000000007bf38ab0 0000000000000000 0000000000000000 000002a1008c8800
000002a1008c88a0 scsi_vhci:vhci_pathinfo_online+33c (3000aeb3580, 3000b57df30, 300000a1a40, 3000b55
e510, 3000b57fbc0, 30002a0ecf0)
%l0-3: 0000030002a0ec80 0000000000000000 00000000018b60b8 0000030002a0ecb0
%l4-7: 000000000122b06c 0000000000000001 00000000018b61d0 00000000018b60a8
000002a1008c8980 scsi_vhci:vhci_pathinfo_state_change+260 (30000390e98, 3000b57df30, 1, 0, 0, 18b5c
00)
%l0-3: 0000030000391238 000003000b57df30 000003000127b998 0000030002a0ec80
%l4-7: 0000030002a0eb40 000003000b57fbc0 0000000000000000 0000000010000000
000002a1008c8a30 genunix:i_mdi_pi_state_change+388 (3000b57df30, 10000, 0, 300015ab418, 3000b571260
, 1)
%l0-3: 0000000000000000 0000000000000001 0000000000000000 0000000000000000
%l4-7: 0000000000000000 00000300000b5048 000000000000000b 000003000b5712c0
000002a1008c8ae0 genunix:mdi_pi_online+10 (3000b57df30, 0, 3000b486080, 3000b571260, 0, 0)
%l0-3: 000003000b57df30 0000000000000004 0000000000000000 000002a1008c8c68
%l4-7: 0000000000000000 000002a1008c8c68 00000300000b5048 00000300015ab418
000002a1008c8b90 iscsi:iscsi_lun_virt_create+17c (3000ac82000, 0, 3000b57de50, 0, 18a7400, 18a77a0)
%l0-3: 000002a1008c8c68 0000000000000005 0000000000000000 0000000000000000
%l4-7: 000000007007bba8 0000000000000000 0000000000000000 0000000000000000
000002a1008c8c70 iscsi:iscsi_lun_create+158 (3000ac82000, 0, 30002329680, 3000b582900, 3000ac82040,
7007bc00)
%l0-3: 000000007007bc00 000003000b57fc00 0000000000000000 0000000000000000
%l4-7: 000000000000000e 000000000000000f 000000007007bc00 000003000b57de50
000002a1008c8d20 iscsi:iscsi_sess_inquiry+208 (3000ac82000, 0, 3000b582900, ff, 3000aeb3880, 300023
29680)
%l0-3: 0000000000000000 0000000000000083 0000000000000001 000000000000007c
%l4-7: 000002a1008c8df8 000003000ac8410c 0000000000000001 0000000000000000
000002a1008c8e40 iscsi:iscsi_sess_reportluns+2e0 (3000ac82000, 10, 3000ac82040, 3000b582930, 8, 0)
%l0-3: 0000000000000030 0000000000000010 0000000000000008 0000000000000000
%l4-7: 0000000000000000 0000000000000008 0000000000000030 0000000000000000
000002a1008c8f40 iscsi:iscsi_sess_enumeration+1c (3000af04268, 0, 4, 1, 3000af04268, 3000005efc8)
%l0-3: 00000000000000bb 0000000000000002 00000000018522e4 0000000001852000
%l4-7: 000000000000000a 00000000018522bc 0000000001852000 000003000ac82000
000002a1008c8ff0 iscsi:iscsi_sess_state_free+a4 (3000ac82018, 0, 7bf58400, 1, 2120, 2000)
%l0-3: 000003000ac862bc 00000000000042bc 0000000000004000 0000000000000001
%l4-7: 000003000af04268 000000007bf58400 0000000000000000 0000000000000000
000002a1008c90a0 iscsi:iscsi_conn_state_in_login+50 (30002234900, 1, 3000ac82000, 3000ac82018, 2, 1
%l0-3: 000003000001ee40 0000030014770000 0000030014760000 0000000000000020
%l4-7: 0000030014772000 000003000ac862b0 00000000000042b0 0000000000004000
000002a1008c9150 iscsi:iscsi_login_start+148 (3000af04278, 0, 2a1008c920e, 0, 30002234900, 30002234
928)
%l0-3: 000003000ac82000 0000000000010000 0000000000009b1d 0000000000000000
%l4-7: 0000000000000000 0000000000000064 000002a1008c920f 0000000001813c20
000002a1008c9210 iscsi:iscsi_conn_state_free+74 (30002234928, 0, 3000ac82000, 30002234900, 3000af04
278, 1)
%l0-3: 0000030002234900 0000000000000000 0000000000000000 0000000000000000
%l4-7: 0000000000000000 000003000ac862bc 00000000000042bc 0000000000004000
000002a1008c92c0 iscsi:iscsi_ioctl+278 (22f4, 0, 300000a3560, 300000a3540, 7bf392e4, 3000ac82000)
%l0-3: 0000030002234900 0000000000002000 0000000000000004 0000030002a0efe0
%l4-7: ffffffff80000000 0000000000000000 0000000000000000 0000000000000000
000002a1008c9740 iscsi:discovery_queue_login_tgt+8c (0, 30002a0ef00, 7007bc00, 0, 0, 0)
%l0-3: 000000007007be38 0000000000000000 000000007007a400 000000007007a400
%l4-7: 0000000000000000 0000000000000000 0000000000000000 000000007007bc00
000002a1008c97f0 iscsi:___const_seg_900006901+2658 (300000a3540, 7007be20, 7007bc00, 7007bc00, 0, 0
%l0-3: 00000300003912a0 0000000000000000 000000000000000e 0000000000000000
%l4-7: 000000000183a320 000000000183a000 0000000000003d5b 0000000000000000
000002a1008c98a0 iscsi:iscsi_tran_bus_config+118 (30000391238, 300000a3540, 2, ffffffff, 0, 0)
%l0-3: 000000000000003c 0000000000000000 0000000004004048 0000000000000000
%l4-7: 0000000000000000 000000007007bc00 0000000000000000 0000000000000000
000002a1008c9960 genunix:devi_config_common+a4 (30000391238, 2, ffffffff, 18a8000, 0, 4004048)
%l0-3: 00000000018a1800 000002a10051fcc0 0000000000000006 0000000000000010
%l4-7: 00000000018b9120 000000007007a198 0000000000000008 0000000001231c14
000002a1008c9a10 genunix:mt_config_thread+60 (3000b1040e0, 0, 18338c0, 18338c0, 30000391238, 3000b5
7ff40)
%l0-3: 0000000000000000 0000000000000000 000003000b5857d8 0000000000000008
%l4-7: 00000000018a5400 000002a1001a7cc0 000002a10088bcc0 0000000000000000
syncing file systems... 1 1 done
dumping to /dev/dsk/c0t0d0s1, offset 429326336, content: kernel
100% done: 14660 pages dumped, compression ratio 3.71, dump succeeded
rebooting...
Resetting ...
BSC status is 0000.0000.0000.000c
Speed Jumper is set to 0000.0000.0000.000e
Software Power ON
@(#)OBP 4.11.5 2003/11/12 10:40 Sun Serverblade1
CPU SPEED 0x0000.0000.26be.3680
Initializing Memory Controller
MCR0 0000.0000.57b2.cf06
MCR1 0000.0000.8000.8000
MCR2 0000.0000.c333.00ff
MCR3 0000.0000.9060.00cf
Clearing E$ Tags Done
Clearing I/D TLBs Done
Probing Memory Done
Clearing Memory Done
MEM BASE = 0000.0000.4000.0000
MEM SIZE = 0000.0000.4000.0000
MMUs ON
Find dropin, Copying Done, Size 0000.0000.0000.4c70
PC = 0000.01ff.f000.3924
PC = 0000.0000.0000.3968
Find dropin, (copied), Decompressing Done, Size 0000.0000.0005.efc0 Reset Control: BXIR:0 BPOR:1 SXIR:0 SPOR:0 POR:0
Probing upa at 1f,0 pci
Probing upa at 0,0 SUNW,UltraSPARC-IIe (512 KB)
Loading Support Packages: obp-tftp SUNW,i2c-ram-device SUNW,fru-device
Loading onboard drivers:
Probing /pci@1f,0 Device 7 isa serial bscbus bscv i2c motherboard-fru
rtc power flashprom
Probing /pci@1f,0 Device 3 pmu i2c dimm-spd dimm-spd nvram idprom
Probing Memory Bank #0 1024 Megabytes
Probing Memory Bank #1 1024 Megabytes
Probing Memory Bank #2 0 Megabytes
Probing Memory Bank #3 0 Megabytes
No clientid supplied by the BSC
Probing /pci@1f,0 Device a network
Probing /pci@1f,0 Device b network
Probing /pci@1f,0 Device d ide disk cdrom
Sun Serverblade1 (UltraSPARC-IIe 650MHz), No Keyboard
Copyright 1998-2003 Sun Microsystems, Inc. All rights reserved.
OpenBoot 4.11.5, 2048 MB memory installed, Serial #52765079.
Ethernet address 0:3:ba:68:9a:1c, Host ID: 83252197.
Rebooting with command: boot
Boot device: disk File and args:
Loading ufs-file-system package 1.4 04 Aug 1995 13:02:54.
FCode UFS Reader 1.12 00/07/17 15:48:16.
Loading: /platform/SUNW,Serverblade1/ufsboot
Loading: /platform/sun4u/ufsboot
SunOS Release 5.10.1 Version snv_09 64-bit
Copyright 1983-2005 Sun Microsystems, Inc. All rights reserved.
Use is subject to license terms.
WARNING: Last shutdown is later than time on time-of-day chip; check date.
NOTICE: Failed to set param 4 for OID 1
Hostname: fb2-sb0
checking ufs filesystems
/dev/rdsk/c0t0d0s7: is logging.
fb2-sb0 console login: Apr 11 15:24:53 fb2-sb0 svc.startd[7]: network/ssh:default failed
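The panic stack above is inside scsi_vhci (the MPxIO multipathing driver), called from the iSCSI LUN enumeration path. As a stopgap sketch only (the property below is the standard per-driver MPxIO switch; whether this early Express build honours it in /kernel/drv/iscsi.conf is an assumption), one could keep MPxIO away from the iSCSI initiator and rescan:
# echo 'mpxio-disable="yes";' >> /kernel/drv/iscsi.conf
# reboot
# devfsadm -i iscsi
On a build as early as snv_09, though, the real fix may simply be moving to a newer build where this NULL-pointer dereference is gone.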

Similar Messages

  • How to enable iSCSI in Solaris 10

    Dear All,
    Kindly help me: how do I enable iSCSI in Solaris 10? That box was already running Solaris 9, and we have now installed Solaris 10. There is a separate iSCSI card in the box; how do I enable that card to access the NAS storage?
    Kindly send any related PDF document to "[email protected]".
    Thanks in advance.
    Regards,
    Venkatachalam.M

    Kindly answer the above question please.
    Regards,
    Venkatachalam.M
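    For what it's worth, a minimal sketch of the usual steps with the bundled Solaris 10 software initiator (assuming the iSCSI card is just being used as a regular NIC and the NAS target needs no CHAP; the address is a placeholder, and the exact iscsiadm option spelling varies between early Express builds and the released Solaris 10, so check iscsiadm(1M)):
    # svcadm enable network/iscsi_initiator
    # iscsiadm add discovery-address <NAS-IP>:3260
    # iscsiadm modify discovery --sendtargets enable
    # devfsadm -i iscsi
    # format
    If the card is a dedicated iSCSI HBA rather than a plain NIC, it needs its vendor's driver instead.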

  • iSCSI for Solaris 10 - light is green

    Yo,
    I have it running on x86 with the nv_10 packages of Software Express 04/05. The target is a SANRAD V-Switch 3000. By the way, it also works fine from a Solaris 10 virtual machine on VMware Workstation. The Solaris system disk is connected to the workstation by iSCSI and the virtual machine is booting off it... gotta love the brave new world of IP storage!
    P.S Thanks to Torrey McMahon - great blog, dude!

    You're welcome. Glad I, and the folks that give me the info, can help.

  • Where is iSCSI? Solaris 10 HW1

    I've searched and seen hints, tips and the existence in Solaris Express but I was wondering if iSCSI initiator support is in the HW1 release and I'm missing it somewhere.
    As a sidebar, is there a collection of "What's Been Added, Changed or Fixed" for each Solaris release? I'm willing to read!
    Thanks,
    Doug

    The initiator, as you already noted, is in Solaris Express. We were unable to make the HW1 release; it will be in HW2. While applying the Solaris Express packages to Solaris 10 may work, be aware that Solaris Express is different from S10. We are already working on adding new S10U2 features into Solaris Express. Solaris Express can be considered the latest and greatest, while the S10 HW and Update releases are stable checkpoints/milestones.

  • iSCSI for Solaris 9

    Hello.
    If anyone has or knows where I can find the Cisco iSCSI driver 3.3.5/6 for Solaris 9, I would be grateful to know how to get it. I have been unsuccessful in finding one (Cisco no longer supports it).
    Please let me know.
    I want to empower my production Solaris 9 v210 systems with iSCSI and I don't yet want to upgrade to Solaris 10.
    Regards,
    R

    Hi,
    were you able to get the iSCSI initiator from somewhere? I am also in the same predicament as you were, with a Solaris 9 box. Please let me know. Thanks - S.R.

  • iSCSI and Solaris device names ..... target binding settings

    Hi,
    I configured an iSCSI environment with the SPARC Solaris 10 U4 software initiator and an EMC CLARiiON CX3-10c.
    So far it is working, but I'm wondering about the Solaris device names (/dev/rdsk/c5t3d0). I disabled MPxIO
    in /kernel/drv/iscsi.conf. I have 32 LUNs configured in the CX3-10c target. The host LUNs are 0..31, but Solaris
    created device nodes starting with target 3, e.g. c5t3d0s2 (see the output from iscsiadm list target).
    # iscsiadm list target -S iqn.1992-04.com.emc:cx.hk193001030.a1
    Target: iqn.1992-04.com.emc:cx.hk193001030.a1
    Alias: 1030.a1
    TPGT: 2
    ISID: 4000002a0000
    Connections: 1
    LUN: 31
    Vendor: DGC
    Product: RAID 10
    OS Device Name: /dev/rdsk/c5t34d0s2
    LUN: 30
    Vendor: DGC
    Product: RAID 10
    OS Device Name: /dev/rdsk/c5t33d0s2
    LUN: 1
    Vendor: DGC
    Product: RAID 5
    OS Device Name: /dev/rdsk/c5t4d0s2
    LUN: 0
    Vendor: DGC
    Product: RAID 5
    OS Device Name: /dev/rdsk/c5t3d0s2
    Why do the device node names start with target 3 instead of target 0?
    Thanks for your help!
    Edited by: test111 on Mar 25, 2008 8:36 AM


  • iSCSI in Solaris 9

    Hi all,
    Has anyone been able to reliably use the Cisco iSCSI driver version 3.3.6 with Solaris 9?
    I can install the package, and it works initially, but after a reboot I get nothing but errors such as:
    May 16 11:03:51 oahu genunix: [ID 197085 kern.warning] WARNING: mod_installdrv: no major number for iscsi
    May 16 11:03:51 oahu iscsi: [ID 757068 kern.warning] WARNING: iSCSI: mod_install failed
    May 16 11:03:51 oahu iscsid[1118]: [ID 358429 daemon.error] iSCSI failed to push module iscsimod, Invalid argument
    May 16 11:03:51 oahu iscsid[1118]: [ID 801593 daemon.error] short PDU header read from socket 5: Interrupted system call
    May 16 11:03:51 oahu iscsid[1118]: [ID 702911 daemon.error] login I/O error, failed to receive a PDU
    May 16 11:03:51 oahu iscsid[1118]: [ID 702911 daemon.error] login failed - 1
    Thanks in advance -
    Edward
    [email protected]
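    The "mod_installdrv: no major number for iscsi" line usually means the iscsi driver has no entry in /etc/name_to_major, so the module cannot attach and everything after it (the iscsimod push, the login) fails as a consequence. A sketch of what I would check first (the package name is whatever the Cisco driver registers as; pkginfo will show it):
    # grep iscsi /etc/name_to_major
    # modinfo | grep -i iscsi
    If the entry is missing, removing and re-adding the Cisco package (pkgrm/pkgadd) so its install scripts re-run add_drv is the least surprising way to get it back.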

    Hi there,
    Has anyone seen these errors below?
    tks,
    Jul 4 15:00:17 pgw01 lom: [ID 702911 kern.warning] +13d+11h45m6s Alarm 2 ON
    Jul 4 15:00:18 pgw01 lom: [ID 702911 kern.warning] +13d+11h45m7s Alarm 1 ON
    Jul 4 15:00:19 pgw01 lom: [ID 702911 kern.warning] +13d+11h45m8s Alarm 3 ON
    Jul 4 15:00:27 pgw01 lom: [ID 702911 kern.notice] +13d+11h45m15s Alarm 3 OFF
    Jul 4 15:00:27 pgw01 lom: [ID 702911 kern.warning] +13d+11h45m16s Alarm 2 ON
    Jul 4 15:00:28 pgw01 lom: [ID 702911 kern.notice] +13d+11h45m17s Alarm 2 OFF
    Jul 4 15:00:46 pgw01 lom: [ID 702911 kern.warning] +13d+11h45m35s Alarm 1 ON
    Jul 4 15:00:47 pgw01 lom: [ID 702911 kern.notice] +13d+11h45m36s Alarm 1 OFF
    Jul 4 15:00:49 pgw01 lom: [ID 702911 kern.warning] +13d+11h45m38s Alarm 2 ON
    Jul 4 15:00:49 pgw01 lom: [ID 702911 kern.notice] +13d+11h45m38s Alarm 2 OFF
    Jul 4 15:00:50 pgw01 lom: [ID 702911 kern.warning] +13d+11h45m39s Alarm 2 ON
    Jul 4 15:00:51 pgw01 lom: [ID 702911 kern.notice] +13d+11h45m40s Alarm 2 OFF
    Jul 4 15:06:58 pgw01 xntpd[216]: [ID 774427 daemon.notice] time reset (step) 335

  • Deal-breakers for real use of Solaris 11 Express

    I run Solaris 10 U9 for my home 12TB NAS box - based on Supermicro H8SSL-i2 motherboard (ServerWorks HT1000 Chipset and Dual-port Broadcom BCM5704C) and their 8-port SATA2 PCI-X card (AOC-SAT2-MV8). It's a great (but aging) platform and a rock solid OS with the unbeatable ZFS volume manager/filesystem.
    However, despite my willingness to run Solaris 11 Express in this role, I can't because of these deal-breakers:
    1) Lack of a full-featured installer that allows me to lay out or preserve existing partitions the way I want. Making /var a separate file system is a must. Ideally, I'd be able to run multiple versions of Solaris on the same box by customizing grub, and use my ZPOOLs on either Solaris 10 or 11 Express while I learn the new OS.
    2) Lack of support for the Broadcom BCM5704C dual-port gigabit NIC (and others), which work wonderfully under Solaris 10, but are badly broken under Solaris 11 Express. I know I could disable the on-board Broadcom NICs and go buy an Intel card - but why the need for this? Won't there be a fix for Broadcom NICs?
    3) Lack of support for modern, generic, server-class motherboards and PCI-e multi-port SATA/SAS cards. I wonder about the future for Solaris without support for modern, affordable x64 server hardware.
    Maybe I'm missing the point and Solaris 11 Express is only intended to be run as a virtual machine under VBox or VMware. But it would sure be nice to be able to run it on my real hardware - even if it is just a small hobbyist rig. Any suggestions?
    Regards,
    Mike

    In Solaris 11, you get a separate /var by default. If you update from Solaris 11 Express to Solaris 11, this transition doesn't happen automatically. If you decide to tackle it on your own, you need to be sure that it is done in a way that beadm, pkg, and other consumers of libbe will handle properly. I would recommend something along the lines of the following. This is untested and may break your system - prove it out somewhere unimportant first.
    Do the work in a new boot environment so you reduce the likelihood that you will break things in an unrecoverable way.
    # beadm create sepvar
    # beadm mount sepvar /mnt
    Figure out the name of the root dataset of the new boot environment, then create a var dataset as a child of that.
    # rootds=$(zfs list -H -o name /mnt)
    # zfs create -o mountpoint=/var -o canmount=noauto $rootds/var
    Mount this new /var and migrate data:
    # mkdir /tmp/newvar
    # zfs mount -o mountpoint=/tmp/newvar $rootds/var
    # cd /mnt/var
    # mv $(ls -A) /tmp/newvar
    Unmount, remount:
    # umount /tmp/newvar
    # beadm unmount sepvar
    # beadm mount sepvar /mnt
    At this point /mnt/var should be a separate dataset from /mnt. The contents of /mnt/var should look just like the contents of /var, aside from transient data that has changed while you were doing this. Assuming that is the case, you should be ready to activate and boot the new boot environment.
    # beadm activate sepvar
    # beadm unmount sepvar
    # init 6
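    After the reboot, a quick sanity check (dataset names as in the example above) is that /var now shows up as its own dataset:
    # zfs list | grep var
    # df -h /var
    Both should report a ZFS dataset ending in /var rather than the root dataset.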

  • iSCSI support on Solaris

    A recent article on Search Storage asserts that Solaris 10 will support iSCSI. Is this true or just another journalistic fantasy?

    I have seen several threads suggesting that a version of Solaris 10 will support iSCSI. Can you tell me your best estimate of when that "version" of Solaris 10 will become available?
    Also, I have seen several threads suggesting that iSCSI support on Solaris 10 is up to the manufacturers of the adapters to create device drivers that are compliant with Solaris 10 (i.e. Cisco). Can you please comment on what is meant by Solaris 10 will "support" iSCSI, and what part of that "support" is left up to the manufacturers of the devices interfacing with Solaris 10 (i.e. device drivers)?

  • Upgrade from Solaris 11 Express to Solaris 11 Release (11/11); Guide MIA?

    Would there be an easy way to upgrade from Solaris 11 Express to Solaris 11 Release (11/11)? The FAQ for Solaris 11 Release has the following hopeful text:
    Can I upgrade to Oracle Solaris 11 from Oracle Solaris 11 Express or Oracle Solaris 11 Early Adopter?
    Yes. Customers can upgrade to Oracle Solaris 11 by using the package management tools. Refer to the Oracle Solaris 11
    Transition Guide (http://www.oracle.com/pls/topic/lookup?ctx=E23824&id=MF JAI) for more information
    But that link comes up with a 404 error page, "topic not found".
    The one transition guide I can find is:
    http://download.oracle.com/docs/cd/E23824_01/html/E21799/index.html
    But that covers Solaris 10 jumpstart, not Solaris 11 Express or Early Adopter.
    The Solaris 11 documentation at:
    http://www.oracle.com/technetwork/server-storage/solaris11/documentation/index.html
    Also seems to mention only Solaris 10, not 11 Express or Early Adopter.
    I have already downloaded the full repo for Solaris 11 Release, and the live installer, but I'm hoping it's something as simple as pointing the ips manager to a new repository, or just loading the full repo I already have as a local source. I would also like to keep the dtrace and sun studio set up I have now in the Express version, without reinstalling them on a new release version on bare metal.
    If it keeps the features of the express version while ironing out a few kinks (gdm authentication bug, etc.), that would rock.
    Many Thanks,
    Gordon

    Alan, thank you, that worked with no problem at all. 11 Express was already using the correct repository, and only required an update of the pkg command itself, then the other updates (a rough sketch of the sequence is at the end of this post).
    So here's what I'm guessing:
    1. The Oracle Solaris maintained repository for Express and Early Adopter is now the same as Release, but it does require an updated pkg executable and a pkg update command.
    2. The second form of the command mentioned in the update guide:
    # pkg set-publisher -g http://pkg.oracle.com/solaris/release
    -G http://internal.co.com/solaris solaris
    is for using a local copy of the repository within our LAN or VPN WAN.
    3. Updates should be fairly regular now that Solaris 11 is in release version.
    I think the new hash (SHA 256) fried some of our old gdm settings for our users, so we're creating new user accounts, and migrating the older accounts. We don't have too many, so it's not a problem for us. But that brings up another point:
    4. The graphical user manager in Express is now gone, and user and group management are now done from the command line.
    That's fine with me as well. I always use ssh to connect to the machine and then run the screen command for administration. The graphical user manager was always kludgy in Express, and I couldn't get it to work correctly anyway.
    This is great stuff so far. Well done.
    Gordon
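    The rough sequence referred to above, assuming the standard pkg workflow (package and option names are taken from pkg(1) as I know it - verify against the official transition notes):
    # pkg publisher
    (confirm the solaris publisher points at the release repository)
    # pkg update --accept package/pkg
    (update the packaging system itself first)
    # pkg update --accept
    (then update everything into a new boot environment)
    # init 6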

  • iSCSI target setup fails: command "iscsitadm" not available?

    Hello,
    I want to set up an iSCSI target; however, it seems I don't have iscsitadm available on my system, only iscsiadm. What should I do?
    Is this still valid in terms of the setup procedure?
    http://alessiodini.wordpress.com/2010/10/24/iscsi-nice-to-meet-you/
    Thanks

    Ok,
    here you go using COMSTAR:
    pkg install storage-server
    pkg install -v SUNWiscsit
    http://thegreyblog.blogspot.com/2010/02/setting-up-solaris-comstar-and.html
    svcs \*stmf\*
    svcadm enable svc:/system/stmf:default
    zfs create -V 250G tank-esata/macbook0-tm
    sbdadm create-lu /dev/zvol/rdsk/tank-esata/macbook0-tm
    sbdadm list-lu
    stmfadm list-lu -v
    stmfadm add-view 600144f00800271b51c04b7a6dc70001
    svcs \*scsi\*
    itadm create-target
    devfsadm -i iscsi
    reboot
    Solaris 11 Express iSCSI manual:
    http://dlc.sun.com/pdf/821-1459/821-1459.pdf
    and this one for reference:
    http://nwsmith.blogspot.com/2009/07/opensolaris-2009-06-and-comstar-iscsi.html
    Windows iSCSI initiator
    http://www.microsoft.com/downloads/en/details.aspx?familyid=12cb3c1a-15d6-4585-b385-befd1319f825&displaylang=en
    works after manually adding the Server's IP (no auto detect)
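    And on the Solaris initiator side, once the target and view are in place, the usual consumption sketch looks like this (service and option names vary slightly between releases - check iscsiadm(1M) and svcs '*iscsi*' on your build; <target-host> is a placeholder):
    # iscsiadm add discovery-address <target-host>:3260
    # iscsiadm modify discovery --sendtargets enable
    # devfsadm -i iscsi
    # format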

  • ZFS root problem after iscsi target experiment

    Hello all.
    I need help with this situation... I've installed Solaris 10u6, patched it, and created a full branded zone. Everything went well until I started to experiment with an iSCSI target according to this document: http://docs.sun.com/app/docs/doc/817-5093/fmvcd?l=en&a=view&q=iscsi
    After setting up the iSCSI discovery address of my iSCSI target, Solaris hung and the only way out was to send a break from the service console. Then I got these messages during boot:
    SunOS Release 5.10 Version Generic_138888-01 64-bit
    /dev/rdsk/c5t216000C0FF8999D1d0s0 is clean
    Reading ZFS config: done.
    Mounting ZFS filesystems: (1/6)cannot mount 'root': mountpoint or dataset is busy
    (6/6)
    svc:/system/filesystem/local:default: WARNING: /usr/sbin/zfs mount -a failed: exit status 1
    Jan 23 14:25:42 svc.startd[7]: svc:/system/filesystem/local:default: Method "/lib/svc/method/fs-local" failed with exit status 95.
    Jan 23 14:25:42 svc.startd[7]: system/filesystem/local:default failed fatally: transitioned to maintenance (see 'svcs -xv' for details)
    There are many services affected by this error; unfortunately one of them is system-log, so I cannot find any relevant information about why this happens.
    bash-3.00# svcs -xv
    svc:/system/filesystem/local:default (local file system mounts)
    State: maintenance since Fri Jan 23 14:25:42 2009
    Reason: Start method exited with $SMF_EXIT_ERR_FATAL.
    See: http://sun.com/msg/SMF-8000-KS
    See: /var/svc/log/system-filesystem-local:default.log
    Impact: 32 dependent services are not running:
    svc:/application/psncollector:default
    svc:/system/webconsole:console
    svc:/system/filesystem/autofs:default
    svc:/system/system-log:default
    svc:/milestone/multi-user:default
    svc:/milestone/multi-user-server:default
    svc:/system/basicreg:default
    svc:/system/zones:default
    svc:/application/graphical-login/cde-login:default
    svc:/system/iscsitgt:default
    svc:/application/cde-printinfo:default
    svc:/network/smtp:sendmail
    svc:/network/ssh:default
    svc:/system/dumpadm:default
    svc:/system/fmd:default
    svc:/system/sysidtool:net
    svc:/network/rpc/bind:default
    svc:/network/nfs/nlockmgr:default
    svc:/network/nfs/status:default
    svc:/network/nfs/mapid:default
    svc:/application/sthwreg:default
    svc:/application/stosreg:default
    svc:/network/inetd:default
    svc:/system/sysidtool:system
    svc:/system/postrun:default
    svc:/system/filesystem/volfs:default
    svc:/system/cron:default
    svc:/application/font/fc-cache:default
    svc:/system/boot-archive-update:default
    svc:/network/shares/group:default
    svc:/network/shares/group:zfs
    svc:/system/sac:default
    [ Jan 23 14:25:40 Executing start method ("/lib/svc/method/fs-local") ]
    WARNING: /usr/sbin/zfs mount -a failed: exit status 1
    [ Jan 23 14:25:42 Method "start" exited with status 95 ]
    Finally, here is the output of the zpool list command, where everything about the ZFS pools looks OK:
    NAME SIZE USED AVAIL CAP HEALTH ALTROOT
    root 68G 18.5G 49.5G 27% ONLINE -
    storedgeD2 404G 45.2G 359G 11% ONLINE -
    I would appreciate any help.
    thanks in advance,
    Berrosch

    OK, I've tried to install s10u6 to the default rpool and move the root user's home to the /rpool directory (which is nonsense of course, it was just for testing purposes) and everything went OK.
    Another experiment was with a root pool named 'root' and the root user's home in /root; everything went OK as well.
    The next try was with the root pool 'root', the root user's home in /root, and enabling the iSCSI initiator:
    # svcs -a |grep iscsi
    disabled 16:31:07 svc:/network/iscsi_initiator:default
    # svcadm enable iscsi_initiator
    # svcs -a |grep iscsi
    online 16:34:11 svc:/network/iscsi_initiator:default
    and voila! the problem is here...
    Mounting ZFS filesystems: (1/5)cannot mount 'root': mountpoint or dataset is busy
    (5/5)
    svc:/system/filesystem/local:default: WARNING: /usr/sbin/zfs mount -a failed: exit status 1
    Feb 9 16:37:35 svc.startd[7]: svc:/system/filesystem/local:default: Method "/lib/svc/method/fs-local" failed with exit status 95.
    Feb 9 16:37:35 svc.startd[7]: system/filesystem/local:default failed fatally: transitioned to maintenance (see 'svcs -xv' for details)
    Seems to be a bug in the iSCSI implementation, some hard-coded reference to 'root' in the source code or something like that...
    Martin

  • Solaris 11 - can't join AD domain

    I've upgraded to Solaris 11 from 11 Express and am trying to join the system to an Active Directory domain. I first joined a workgroup, then tried to rejoin the domain, at which point I got the following (names changed to protect the anonymous):
    myuser@ganesh:~# smbadm join -u "DomainAdmin" lothlorien.domain.com
    After joining lothlorien.domain.com the smb service will be restarted automatically.
    Would you like to continue? [no]: yes
    Enter domain password:
    Locating DC in lothlorien.domain.com ... this may take a minute ...
    Joining lothlorien.domain.com ... this may take a minute ...
    failed to join lothlorien.domain.com: UNSUCCESSFUL
    Please refer to the system log for more information.
    /var/adm/messages shows this:
    Nov 11 00:46:17 ganesh smbd[641]: [ID 270243 daemon.error] smb_ads_update_dsattr: ldap_sasl_interactive_bind_s Local error
    Nov 11 00:46:35 ganesh smbd[641]: [ID 702911 daemon.error] smbns_kpasswd: KPASSWD protocol exchange failed (Cannot contact any KDC for requested realm)
    Nov 11 00:46:35 ganesh smbd[641]: [ID 702911 daemon.notice] Machine password update failed
    Nov 11 00:46:35 ganesh smbd[641]: [ID 702911 daemon.error] unable to join lothlorien.domain.com (UNSUCCESSFUL)
    I know for sure the system is locating the DC and trying to register itself - I can see the events in the Windows event log. Having deleted the previous computer account, if I watch the Computers node of the AD Users & Computers MMC snap-in, I can see the Solaris system appear briefly as disabled, then disappear a few seconds later (with corresponding events in the DC's Security event log).
    I can't find any documentation specific to S11 (as opposed to SE11) that addresses what might be different (if anything) in the smb join protocols. I know by now that S11 can autogenerate your /etc/krb5/krb5.conf so the fact that I can delete/rename that file and it will reappear with valid information validates the fact that it does locate and connect to the (K)DC and get relevant config info, not to mention that I can type garbage for my domain password and the behavior is different so it can do kerberos authentication.
    I think the key error here is the "ldap_sasl_interactive_bind_s Local error" but it's not enough information for me to determine causality. I've already gone through Google searches and implemented changes related to the NTLM levels and so forth, but unlike with SE11 which I did have working, these did not solve the issue.
    I'm still trying to go through the S11 documentation including the End of Feature Notices for what's changed but I didn't see anything revelatory in the Interop guide. I know this could also be something that's in my AD/GP configuration on the Windows side (e.g. I've implemented a PKI and strengthened system authentication among certain domain members). Has anyone run into anything similar? Do you have S11 (as opposed to SE11) joined to your domain?

    I finally got this figured out. It's a problem with client_lmauth_level on the smb service. The script snippet below configures Solaris 11 to join an AD domain on Windows 2008 R2:
    echo *** Installing SMB system
    pkg install system/file-system/smb
    echo *** Installing SMB service
    pkg install service/file-system/smb
    echo server $TIMESERVER > /etc/inet/ntp.conf
    svcadm enable ntp
    echo *** Joining domain: $DOMAIN
    svccfg -s smb setprop smb/client_lmauth_level=2
    svcadm enable -r smb/server
    smbadm join -u $DOMAIN/$DOMAINADMIN
    Obviously, you should set the various variables for your local environment, and it is probably a good idea to sync the clock explicitly instead of assuming ntpd will do it for you.
    In addition, I had to set the auth level on the Windows 2008 domain:
    Start -> Admin Tools -> Local Security Policy: Security Settings -> Local Policies -> Security Options:
    Network Security: LAN Manager authentication level = Send LM & NTLM - use NTLMv2 session security if negotiated
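    A quick way to confirm the join took (assuming smbadm list behaves as on stock Solaris 11):
    # svcs smb/server
    # smbadm list
    The service should be online and the list output should name the AD domain rather than a workgroup.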

  • iSCSI support

    Hi,
    does anyone have experience connecting Promise VTrak RAID arrays to Solaris using iSCSI?
    Solaris sees the targets but does not make a connection, no matter how hard we try.
    The MS initiator, on the other hand, works instantly, as does the Linux one.
    How can we diagnose the problem and find a compatibility solution?
    In the messages file no problem is mentioned; there are just no disks (LUNs) attached.

    Try cross-posting to another mailing list:
    http://www.opensolaris.org/jive/forum.jspa?forumID=94
    http://www.opensolaris.org/os/community/storage/
    M.C>

  • Sharing an internet connection with Airport express and Time Capsule

    Hi, I have been happily using a cable modem and a Time Capsule for my internet connection and wireless network. We recently added solar panels to our roof and the system needs to be connected to the internet so it can be monitored remotely. Calling Comcast is an exercise in frustration, and the solar people don't seem to know exactly how to do it, so I'm trying this.
    We bought an Airport Express, thinking if it is connected with an ethernet cable to the solar system, it could somehow share the internet connection with the Time Capsule, or they could talk to each other. When I plug in the Airport Express, the only set-up option seems to be to set up a new network, which I don't think I want to do. Can anyone tell me how to set this up? Thank you!

    Hi, Under Water.
    You should configure the Express to "Join a wireless network" or to "Extend a wireless network". (Either one will work.)
    Connect your computer to your Express via Ethernet and open AirPort Utility. Press "Manual Setup" and authenticate if necessary.
    Go to the "Base Station" tab. Give the Express a name and password.
    Go to the "Wireless" tab. Set the Wireless mode to "Join a wireless network" or "Extend a wireless network." Enter the name of the wireless network created by your Time Capsule (or select it from the list). Check the box that says "Allow Ethernet clients" (if you're joining) or "Allow Wireless Clients" (if you're extending).
    Provide the appropriate security mode and password for the Time Capsule's network.
    Press "Update" and allow your Express to restart.
    Then you can plug the solar system into the Express's Ethernet port.
