ISCSI in Solaris 9
Hi all,
Has anyone been able to reliably use the Cisco iSCSI driver version 3.3.6 with Solaris 9?
I can install the package, and it works initially, but after a reboot I get nothing but errors such as:
May 16 11:03:51 oahu genunix: [ID 197085 kern.warning] WARNING: mod_installdrv: no major number for iscsi
May 16 11:03:51 oahu iscsi: [ID 757068 kern.warning] WARNING: iSCSI: mod_install failed
May 16 11:03:51 oahu iscsid[1118]: [ID 358429 daemon.error] iSCSI failed to push module iscsimod, Invalid argument
May 16 11:03:51 oahu iscsid[1118]: [ID 801593 daemon.error] short PDU header read from socket 5: Interrupted system call
May 16 11:03:51 oahu iscsid[1118]: [ID 702911 daemon.error] login I/O error, failed to receive a PDU
May 16 11:03:51 oahu iscsid[1118]: [ID 702911 daemon.error] login failed - 1
Thanks in advance -
Edward
[email protected]
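One first thing worth checking after the reboot (a sketch only; the driver name "iscsi" is taken from the messages above) is whether the Cisco driver is still registered with a major number:
# grep iscsi /etc/name_to_major
# modinfo | grep -i iscsi
If the name_to_major entry is gone, re-registering the driver (rem_drv/add_drv, or re-running the Cisco package's postinstall script) before the iscsid service starts may get past the mod_installdrv error.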
Hi there,
Has anyone seen these errors below?
Thanks,
Jul 4 15:00:17 pgw01 lom: [ID 702911 kern.warning] +13d+11h45m6s Alarm 2 ON
Jul 4 15:00:18 pgw01 lom: [ID 702911 kern.warning] +13d+11h45m7s Alarm 1 ON
Jul 4 15:00:19 pgw01 lom: [ID 702911 kern.warning] +13d+11h45m8s Alarm 3 ON
Jul 4 15:00:27 pgw01 lom: [ID 702911 kern.notice] +13d+11h45m15s Alarm 3 OFF
Jul 4 15:00:27 pgw01 lom: [ID 702911 kern.warning] +13d+11h45m16s Alarm 2 ON
Jul 4 15:00:28 pgw01 lom: [ID 702911 kern.notice] +13d+11h45m17s Alarm 2 OFF
Jul 4 15:00:46 pgw01 lom: [ID 702911 kern.warning] +13d+11h45m35s Alarm 1 ON
Jul 4 15:00:47 pgw01 lom: [ID 702911 kern.notice] +13d+11h45m36s Alarm 1 OFF
Jul 4 15:00:49 pgw01 lom: [ID 702911 kern.warning] +13d+11h45m38s Alarm 2 ON
Jul 4 15:00:49 pgw01 lom: [ID 702911 kern.notice] +13d+11h45m38s Alarm 2 OFF
Jul 4 15:00:50 pgw01 lom: [ID 702911 kern.warning] +13d+11h45m39s Alarm 2 ON
Jul 4 15:00:51 pgw01 lom: [ID 702911 kern.notice] +13d+11h45m40s Alarm 2 OFF
Jul 4 15:06:58 pgw01 xntpd[216]: [ID 774427 daemon.notice] time reset (step) 335
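For what it's worth, on LOM-equipped Netra hardware those alarms can be inspected and driven from the host side with lom(1M); a rough sketch (option syntax can vary slightly between LOM firmware versions):
# lom -a             (dump the full LOM state, including the software alarms and fault LED)
# lom -A off,2       (example: turn alarm 2 off; alarms 1-3 are set and cleared by software, typically monitoring scripts)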
Similar Messages
-
How to enable the iscsi in Solaris 10
Dear All,
Kindly help me: how do I enable iSCSI in Solaris 10? That box was previously running Solaris 9; now we have installed Solaris 10. There is a separate iSCSI card in that Solaris box. How do I enable the iSCSI card so it can access the NAS storage?
Kindly send any related PDF document to "[email protected]".
Thanks in advance.
Regards,
Venkatachalam.M
Kindly answer the above question, please.
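For reference, the basic Solaris 10 software-initiator sequence (the same commands that appear in later messages in this thread) looks roughly like this; the discovery address below is a placeholder for the NAS:
# svcadm enable iscsi_initiator          (on newer updates the service is network/iscsi/initiator:default)
# iscsiadm add discovery-address 192.168.0.10:3260
# iscsiadm modify discovery -t enable
# devfsadm -i iscsi
# format
If the box has a dedicated iSCSI HBA rather than a plain NIC, the card vendor's Solaris driver has to be installed as well; the commands above cover the software initiator that ships with Solaris 10.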
Regards,
Venkatachalam.M -
Iscsi in Solaris 5.10.1 express 3/05
Since Solaris 10 Express (5.10.1) 3/05 (starting with the 2/05 build), iscsiadm is provided as the iSCSI initiator administration command.
I have a Cisco SN5428 iSCSI router (the storage is a Sun StorEdge 9980), and I have some Solaris 9 iSCSI clients set up in our environment using the Cisco iSCSI driver.
I want to try the Solaris 10 iSCSI driver, so I loaded the latest Solaris 10 Express 3/05 (build 09) on a Sun blade server; but unfortunately, I can't get iSCSI to work.
Since I have set up several Solaris 9 iSCSI clients before, and I have double-checked my iSCSI target settings on the Cisco iSCSI router, I think the iSCSI disks should be available (they would be if the box were Solaris 9 with the Cisco driver), and I know the iSCSI target is configured without CHAP, RADIUS or CRC.
Please see the following commands I tried on the Solaris 10 test box. Could anyone give me some hints on how to get this working?
Also, by the way, it seems that the iscsiadm(1M) man page is not included in this distro (I installed SUNWXall).
# uname -a
SunOS fb2-sb0 5.10.1 snv_09 sun4u sparc SUNW,Serverblade1
# grep iscsiadm /var/sadm/install/contents
/usr/sbin/iscsiadm f none 0555 root sys 100516 59590 1109092145 SUNWiscsiu
# cat /etc/hosts
# Internet host table
127.0.0.1 localhost
xxx.xxx.xxx.xxx fb2-sb0 fb2-sb0.xxx.com loghost
10.10.107.101 iscsirouter01
10.10.107.102 iscsirouter02
# ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
inet 127.0.0.1 netmask ff000000
ce0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
inet xxx.xxx.xxx.xxx netmask ffffff00 broadcast xxx.xxx.xxx.255
ether 0:3:ba:xx:xx:xx
# ping iscsirouter01
iscsirouter01 is alive
# ping iscsirouter02
iscsirouter02 is alive
# ping 10.10.107.101
10.10.107.101 is alive
# ping 10.10.107.102
10.10.107.102 is alive
# iscsiadm remove discovery_address 10.10.107.101
# iscsiadm remove discovery_address 10.10.107.102
# iscsiadm list discovery_address
# iscsiadm add discovery_address 10.10.107.101
# iscsiadm add discovery_address 10.10.107.102
# iscsiadm list discovery_address
Discovery Address: 10.10.107.101:3260
Discovery Address: 10.10.107.102:3260
# iscsiadm list discovery
Discovery:
Static: enabled
Send Targets: enabled
# iscsiadm list initiator_node
iscsiadm: Unable to complete operation
# devfsadm -i iscsi
# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c0t0d0 <TOS MK3019GAXB SUN30G cyl 58138 alt 2 hd 16 sec 63>
/pci@1f,0/ide@d/dad@0,0
Specify disk (enter its number): ^D
# devfsadm
# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c0t0d0 <TOS MK3019GAXB SUN30G cyl 58138 alt 2 hd 16 sec 63>
/pci@1f,0/ide@d/dad@0,0
Specify disk (enter its number): ^D
# iscsiadm list target
# iscsiadm list initiator_node
iscsiadm: Unable to complete operation
# svcs -a |grep iscsi
online 2:23:57 svc:/network/iscsi_initiator:default
# iscsiadm modify initiator_node -?
iscsiadm modify initiator_node <OPTIONS>
OPTIONS:
-N, --node-name <initiator node name>
-A, --node-alias <initiator node alias>
-h, --headerdigest <none|CRC32>
-d, --datadigest <none|CRC32>
-C, --CHAP-Secret (exclusive)
-a, --authentication <chap|none>
-R, --radius-access <enable|disable>
-r, --radius-server <<IP address>:port>
-S, --radius-shared-secret (exclusive)
For more information, please see iscsiadm(1M)
# iscsiadm modify initiator_node -N fb2-sb0
iscsiadm: unknown iSCSI name type.
iscsiadm: Unable to complete operation
# cd /var/svc/log
# ls *scsi*
network-iscsi_initiator:default.log
# cat network-iscsi_initiator:default.log
[ Apr 9 00:14:13 Disabled. ]
[ Apr 9 00:14:13 Rereading configuration. ]
[ Apr 9 01:42:21 Enabled. ]
[ Apr 9 01:42:21 Executing start method ("/lib/svc/method/iscsid") ]
[ Apr 9 01:42:21 Method "start" exited with status 0 ]
[ Apr 9 01:44:37 Stopping because service restarting. ]
[ Apr 9 01:44:38 Executing stop method (:kill) ]
[ Apr 9 01:44:38 Executing start method ("/lib/svc/method/iscsid") ]
[ Apr 9 01:44:38 Method "start" exited with status 0 ]
[ Apr 9 02:22:08 Stopping because service disabled. ]
[ Apr 9 02:22:08 Executing stop method (:kill) ]
[ Apr 9 02:35:37 Stopping because service restarting. ]
[ Apr 9 02:35:37 Executing stop method (:kill) ]
[ Apr 9 02:35:37 Executing start method ("/lib/svc/method/iscsid") ]
[ Apr 9 02:35:37 Method "start" exited with status 0 ]
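(The reply those instructions came from is not preserved in this archive; judging from the commands that follow, it evidently amounted to giving the initiator a syntactically valid iqn name and re-running SendTargets discovery, roughly:)
# iscsiadm modify discovery -t disable
# iscsiadm modify initiator_node -N iqn.2005-04.fb2-sb0
# iscsiadm modify discovery -t enable
# devfsadm -i iscsi
A bare hostname is rejected with "unknown iSCSI name type"; the name has to start with iqn. or eui.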
I followed your instructions, and it made some progress, but when I do the devfsadm -i iscsi, the server panicked.
Please see the following command output and the console log:
$ su -
Password:
Sun Microsystems Inc. SunOS 5.10.1 snv_09 October 2007
# iscsiadm modify discovery -t disable
# iscsiadm modify discovery -s disable
# iscsiadm modify initiator_node -N iqn.2005-04.fb2-sb0
# iscsiadm list initiator_node
Initiator node name: iqn.2005-04.fb2-sb0
Initiator node alias: -
Login Parameters (Default/Configured):
Header Digest: NONE/-
Data Digest: NONE/-
Authentication Type: NONE
RADIUS Server: NONE
RADIUS access: unknown
# iscsiadm list discovery_address
Discovery Address: 10.10.107.101:3260
Discovery Address: 10.10.107.102:3260
# iscsiadm list discovery_address -v 10.10.107.101:3260
Discovery Address: 10.10.107.101:3260
Target name: iqn.1987-05.com.cisco:00.f5a1b374bf74.s10test
Target address: 10.10.107.101:3260, 1
# iscsiadm list discovery_address -v 10.10.107.102:3260
Discovery Address: 10.10.107.102:3260
Target name: iqn.1987-05.com.cisco:00.932abc4bbcbd.s10test2
Target address: 10.10.107.102:3260, 1
# iscsiadm modify discovery -t enable
# iscsiadm list target
Target: iqn.1987-05.com.cisco:00.932abc4bbcbd.s10test2
Target Portal Group Tag: 1
Connections: 0
Target: iqn.1987-05.com.cisco:00.f5a1b374bf74.s10test
Target Portal Group Tag: 1
Connections: 0
# iscsiadm list target -S
Target: iqn.1987-05.com.cisco:00.932abc4bbcbd.s10test2
Target Portal Group Tag: 1
Connections: 0
Target: iqn.1987-05.com.cisco:00.f5a1b374bf74.s10test
Target Portal Group Tag: 1
Connections: 0
# iscsiadm list target -v
Target: iqn.1987-05.com.cisco:00.932abc4bbcbd.s10test2
Target Portal Group Tag: 1
Connections: 0
Discovery Method: SendTargets
Target: iqn.1987-05.com.cisco:00.f5a1b374bf74.s10test
Target Portal Group Tag: 1
Connections: 0
Discovery Method: SendTargets
# iscsiadm list discovery
Discovery:
Static: disabled
Send Targets: enabled
# devfsadm -i iscsi
Connection closed by foreign host.
On Console:
panic[cpu0]/thread=2a1008c9cc0: Apr 11 15:22:35 fb2-sb0 iscsi: NOTICE: iscsi session(8) iqn.1987-05.com.c
isco:00BAD TRAP: type=31 rp=2a1008c8800 addr=38 mmu_fsr=0 occurred in module "scsi_vhci" due to a N
ULL pointer dereference
sched: trap type = 0x31
addr=0x38
pid=0, pc=0x12271b8, sp=0x2a1008c80a1, tstate=0x4480001603, context=0x0
g1-g7: f0, 2, 0, c6e1, c6e0, 0, 2a1008c9cc0
000002a1008c8520 unix:die+78 (31, 2a1008c8800, 38, 0, 2a1008c85e0, 106f400)
%l0-3: 0000000000001fff 0000000000000031 0000000001000000 0000000000002000
%l4-7: 0000000001819378 0000000001819000 0000000000010000 00000000d25a2038
000002a1008c8600 unix:trap+8f0 (2a1008c8800, 38, 0, 1, 0, 1)
%l0-3: 00000000018338c0 0000000000000031 0000000000000005 0000000000000005
%l4-7: 00000000d25a2038 0000000000001fff 0000000000000000 000000000180c180
000002a1008c8750 unix:ktl0+48 (0, 0, 3000aeb3580, 6, 20, 18b7000)
%l0-3: 0000000000000004 0000000000001400 0000004480001603 0000000001013ea0
%l4-7: 000000007bf38ab0 0000000000000000 0000000000000000 000002a1008c8800
000002a1008c88a0 scsi_vhci:vhci_pathinfo_online+33c (3000aeb3580, 3000b57df30, 300000a1a40, 3000b55
e510, 3000b57fbc0, 30002a0ecf0)
%l0-3: 0000030002a0ec80 0000000000000000 00000000018b60b8 0000030002a0ecb0
%l4-7: 000000000122b06c 0000000000000001 00000000018b61d0 00000000018b60a8
000002a1008c8980 scsi_vhci:vhci_pathinfo_state_change+260 (30000390e98, 3000b57df30, 1, 0, 0, 18b5c
00)
%l0-3: 0000030000391238 000003000b57df30 000003000127b998 0000030002a0ec80
%l4-7: 0000030002a0eb40 000003000b57fbc0 0000000000000000 0000000010000000
000002a1008c8a30 genunix:i_mdi_pi_state_change+388 (3000b57df30, 10000, 0, 300015ab418, 3000b571260
, 1)
%l0-3: 0000000000000000 0000000000000001 0000000000000000 0000000000000000
%l4-7: 0000000000000000 00000300000b5048 000000000000000b 000003000b5712c0
000002a1008c8ae0 genunix:mdi_pi_online+10 (3000b57df30, 0, 3000b486080, 3000b571260, 0, 0)
%l0-3: 000003000b57df30 0000000000000004 0000000000000000 000002a1008c8c68
%l4-7: 0000000000000000 000002a1008c8c68 00000300000b5048 00000300015ab418
000002a1008c8b90 iscsi:iscsi_lun_virt_create+17c (3000ac82000, 0, 3000b57de50, 0, 18a7400, 18a77a0)
%l0-3: 000002a1008c8c68 0000000000000005 0000000000000000 0000000000000000
%l4-7: 000000007007bba8 0000000000000000 0000000000000000 0000000000000000
000002a1008c8c70 iscsi:iscsi_lun_create+158 (3000ac82000, 0, 30002329680, 3000b582900, 3000ac82040,
7007bc00)
%l0-3: 000000007007bc00 000003000b57fc00 0000000000000000 0000000000000000
%l4-7: 000000000000000e 000000000000000f 000000007007bc00 000003000b57de50
000002a1008c8d20 iscsi:iscsi_sess_inquiry+208 (3000ac82000, 0, 3000b582900, ff, 3000aeb3880, 300023
29680)
%l0-3: 0000000000000000 0000000000000083 0000000000000001 000000000000007c
%l4-7: 000002a1008c8df8 000003000ac8410c 0000000000000001 0000000000000000
000002a1008c8e40 iscsi:iscsi_sess_reportluns+2e0 (3000ac82000, 10, 3000ac82040, 3000b582930, 8, 0)
%l0-3: 0000000000000030 0000000000000010 0000000000000008 0000000000000000
%l4-7: 0000000000000000 0000000000000008 0000000000000030 0000000000000000
000002a1008c8f40 iscsi:iscsi_sess_enumeration+1c (3000af04268, 0, 4, 1, 3000af04268, 3000005efc8)
%l0-3: 00000000000000bb 0000000000000002 00000000018522e4 0000000001852000
%l4-7: 000000000000000a 00000000018522bc 0000000001852000 000003000ac82000
000002a1008c8ff0 iscsi:iscsi_sess_state_free+a4 (3000ac82018, 0, 7bf58400, 1, 2120, 2000)
%l0-3: 000003000ac862bc 00000000000042bc 0000000000004000 0000000000000001
%l4-7: 000003000af04268 000000007bf58400 0000000000000000 0000000000000000
000002a1008c90a0 iscsi:iscsi_conn_state_in_login+50 (30002234900, 1, 3000ac82000, 3000ac82018, 2, 1
%l0-3: 000003000001ee40 0000030014770000 0000030014760000 0000000000000020
%l4-7: 0000030014772000 000003000ac862b0 00000000000042b0 0000000000004000
000002a1008c9150 iscsi:iscsi_login_start+148 (3000af04278, 0, 2a1008c920e, 0, 30002234900, 30002234
928)
%l0-3: 000003000ac82000 0000000000010000 0000000000009b1d 0000000000000000
%l4-7: 0000000000000000 0000000000000064 000002a1008c920f 0000000001813c20
000002a1008c9210 iscsi:iscsi_conn_state_free+74 (30002234928, 0, 3000ac82000, 30002234900, 3000af04
278, 1)
%l0-3: 0000030002234900 0000000000000000 0000000000000000 0000000000000000
%l4-7: 0000000000000000 000003000ac862bc 00000000000042bc 0000000000004000
000002a1008c92c0 iscsi:iscsi_ioctl+278 (22f4, 0, 300000a3560, 300000a3540, 7bf392e4, 3000ac82000)
%l0-3: 0000030002234900 0000000000002000 0000000000000004 0000030002a0efe0
%l4-7: ffffffff80000000 0000000000000000 0000000000000000 0000000000000000
000002a1008c9740 iscsi:discovery_queue_login_tgt+8c (0, 30002a0ef00, 7007bc00, 0, 0, 0)
%l0-3: 000000007007be38 0000000000000000 000000007007a400 000000007007a400
%l4-7: 0000000000000000 0000000000000000 0000000000000000 000000007007bc00
000002a1008c97f0 iscsi:___const_seg_900006901+2658 (300000a3540, 7007be20, 7007bc00, 7007bc00, 0, 0
%l0-3: 00000300003912a0 0000000000000000 000000000000000e 0000000000000000
%l4-7: 000000000183a320 000000000183a000 0000000000003d5b 0000000000000000
000002a1008c98a0 iscsi:iscsi_tran_bus_config+118 (30000391238, 300000a3540, 2, ffffffff, 0, 0)
%l0-3: 000000000000003c 0000000000000000 0000000004004048 0000000000000000
%l4-7: 0000000000000000 000000007007bc00 0000000000000000 0000000000000000
000002a1008c9960 genunix:devi_config_common+a4 (30000391238, 2, ffffffff, 18a8000, 0, 4004048)
%l0-3: 00000000018a1800 000002a10051fcc0 0000000000000006 0000000000000010
%l4-7: 00000000018b9120 000000007007a198 0000000000000008 0000000001231c14
000002a1008c9a10 genunix:mt_config_thread+60 (3000b1040e0, 0, 18338c0, 18338c0, 30000391238, 3000b5
7ff40)
%l0-3: 0000000000000000 0000000000000000 000003000b5857d8 0000000000000008
%l4-7: 00000000018a5400 000002a1001a7cc0 000002a10088bcc0 0000000000000000
syncing file systems... 1 1 done
dumping to /dev/dsk/c0t0d0s1, offset 429326336, content: kernel
100% done: 14660 pages dumped, compression ratio 3.71, dump succeeded
rebooting...
Resetting ...
BSC status is 0000.0000.0000.000c
Speed Jumper is set to 0000.0000.0000.000e
Software Power ON
@(#)OBP 4.11.5 2003/11/12 10:40 Sun Serverblade1
CPU SPEED 0x0000.0000.26be.3680
Initializing Memory Controller
MCR0 0000.0000.57b2.cf06
MCR1 0000.0000.8000.8000
MCR2 0000.0000.c333.00ff
MCR3 0000.0000.9060.00cf
Clearing E$ Tags Done
Clearing I/D TLBs Done
Probing Memory Done
Clearing Memory Done
MEM BASE = 0000.0000.4000.0000
MEM SIZE = 0000.0000.4000.0000
MMUs ON
Find dropin, Copying Done, Size 0000.0000.0000.4c70
PC = 0000.01ff.f000.3924
PC = 0000.0000.0000.3968
Find dropin, (copied), Decompressing Done, Size 0000.0000.0005.efc0 Reset Control: BXIR:0 BPOR:1 SXIR:0 SPOR:0 P
OR:0
Probing upa at 1f,0 pci
Probing upa at 0,0 SUNW,UltraSPARC-IIe (512 KB)
Loading Support Packages: obp-tftp SUNW,i2c-ram-device SUNW,fru-device
Loading onboard drivers:
Probing /pci@1f,0 Device 7 isa serial bscbus bscv i2c motherboard-fru
rtc power flashprom
Probing /pci@1f,0 Device 3 pmu i2c dimm-spd dimm-spd nvram idprom
Probing Memory Bank #0 1024 Megabytes
Probing Memory Bank #1 1024 Megabytes
Probing Memory Bank #2 0 Megabytes
Probing Memory Bank #3 0 Megabytes
No clientid supplied by the BSC
Probing /pci@1f,0 Device a network
Probing /pci@1f,0 Device b network
Probing /pci@1f,0 Device d ide disk cdrom
Sun Serverblade1 (UltraSPARC-IIe 650MHz), No Keyboard
Copyright 1998-2003 Sun Microsystems, Inc. All rights reserved.
OpenBoot 4.11.5, 2048 MB memory installed, Serial #52765079.
Ethernet address 0:3:ba:68:9a:1c, Host ID: 83252197.
Rebooting with command: boot
Boot device: disk File and args:
Loading ufs-file-system package 1.4 04 Aug 1995 13:02:54.
FCode UFS Reader 1.12 00/07/17 15:48:16.
Loading: /platform/SUNW,Serverblade1/ufsboot
Loading: /platform/sun4u/ufsboot
SunOS Release 5.10.1 Version snv_09 64-bit
Copyright 1983-2005 Sun Microsystems, Inc. All rights reserved.
Use is subject to license terms.
WARNING: Last shutdown is later than time on time-of-day chip; check date.
NOTICE: Failed to set param 4 for OID 1
Hostname: fb2-sb0
checking ufs filesystems
/dev/rdsk/c0t0d0s7: is logging.
fb2-sb0 console login: Apr 11 15:24:53 fb2-sb0 svc.startd[7]: network/ssh:default failed
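The panic stack shows the crash occurring under scsi_vhci (MPxIO) while the iSCSI LUNs were being enumerated. One thing worth trying on a build of this vintage (a workaround sketch only, not a fix) is disabling MPxIO for the iSCSI initiator and re-running device discovery:
# grep mpxio /kernel/drv/iscsi.conf
mpxio-disable="yes";
# reboot -- -r
# devfsadm -i iscsi
A later post in this thread mentions toggling this same mpxio-disable setting in /kernel/drv/iscsi.conf.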
-
Hello.
If anyone has or knows where I can find the Cisco iSCSI driver 3.3.5/6 for Solaris 9, I would be grateful to know how to get it. I have been unsuccessful in finding one (Cisco no longer supports it).
Please let me know.
I want to empower my production Solaris 9 v210 systems with iSCSI and I don't yet want to upgrade to Solaris 10.
Regards,
R
Hi,
were you able to get the iSCSI initiator from somewhere? I am also in the same predicament as you were, with a Solaris 9 box. Please let me know. Thanks - S.R.
-
Iscsi for Solaris 10 - light is green
Yo,
Have it running on x86 nv_10 package of Software Express 04/05. Target is SANRAD V-Switch 3000. By the way it also works fine from Solaris 10 virtual machine on VMWare Workstation. Solaris system disk is connected to the workstation by iSCSI and the virtual machine is booting off it... gotta love brave new world of IP storage!
P.S. Thanks to Torrey McMahon - great blog, dude!
You're welcome. Glad I, and the folks that give me the info, can help.
-
Where is iSCSI ? Solaris 10 HW1
I've searched and seen hints, tips, and its existence in Solaris Express, but I was wondering whether iSCSI initiator support is in the HW1 release and I'm just missing it somewhere.
As a sidebar, is there a collection of "What's Been Added, Changed or Fixed" for each Solaris release? I'm willing to read!
Thanks,
Doug
The initiator, as you already noted, is in Solaris Express. We were unable to make the HW1 release. It will also be in HW2. While applying the Solaris Express packages to Solaris 10 may work, be aware that Solaris Express is different from S10. We are already working on adding new S10U2 features into Solaris Express. Solaris Express can be considered the latest and greatest, while the S10 HW and Update releases are stable checkpoints/milestones.
-
ISCSI and Solaris device names ..... target binding settings
Hi,
I configured an iSCSI environment with the SPARC Solaris 10 U4 software initiator and an EMC CLARiiON CX3-10c.
So far it is working, but I'm wondering about the Solaris device names, e.g. /dev/rdsk/c5t3d0. I disabled MPxIO in /kernel/drv/iscsi.conf. I have 32 LUNs configured on the CX3-10c target. The host LUNs are 0...31, but Solaris created device nodes starting with target 3, e.g. c5t3d0s2 (see the output from iscsiadm list target).
# iscsiadm list target -S iqn.1992-04.com.emc:cx.hk193001030.a1
Target: iqn.1992-04.com.emc:cx.hk193001030.a1
Alias: 1030.a1
TPGT: 2
ISID: 4000002a0000
Connections: 1
LUN: 31
Vendor: DGC
Product: RAID 10
OS Device Name: */dev/rdsk/c5t34d0s2*
LUN: 30
Vendor: DGC
Product: RAID 10
OS Device Name: */dev/rdsk/c5t33d0s2*
LUN: 1
Vendor: DGC
Product: RAID 5
OS Device Name: */dev/rdsk/c5t4d0s2*
LUN: 0
Vendor: DGC
Product: RAID 5
OS Device Name: */dev/rdsk/c5t3d0s2*
Why do the device node names start with target 3 instead of target 0?
Thanks for your help!
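A note on the numbering itself: the tN portion of cXtNdZ is just an instance number the initiator hands out as it enumerates targets and LUNs on that virtual HBA; it has no fixed relationship to the array's host LUN numbers and can differ between hosts or across rediscovery. If stable, array-derived names are wanted, one option (a sketch, not verified against this CX3-10c) is to leave MPxIO enabled, in which case the LUNs appear under GUID-based names that are the same everywhere:
mpxio-disable="no";      (in /kernel/drv/iscsi.conf, followed by a reconfigure reboot)
# format                 (LUNs then show up as /dev/rdsk/c<N>t<LUN GUID>d0s2)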
Edited by: test111 on Mar 25, 2008 8:36 AM -
A recent article on Search Storage asserts that Solaris 10 will support iSCSI. Is this true, or just another journalistic fantasy?
I have seen several threads suggesting that a version of Solaris 10 will support iSCSI. Can you tell me your best estimate of when that "version" of Solaris 10 will become available?
Also, I have seen several threads suggesting that support for iSCSI on Solaris 10 is up to the manufacturers of the adapters to create device drivers that are compliant with Solaris 10 (i.e. Cisco). Can you please comment on what is meant by Solaris 10 will "support" iSCSI, and what part of that "support" is left up to the manufacturers of the devices interfacing with Solaris 10 (i.e. device drivers)? -
ZFS root problem after iscsi target experiment
Hello all.
I need help with this situation... I've installed Solaris 10u6, patched it, and created a branded full zone. Everything went well until I started to experiment with an iSCSI target according to this document: http://docs.sun.com/app/docs/doc/817-5093/fmvcd?l=en&a=view&q=iscsi
After setting up the iSCSI discovery address of my iSCSI target, Solaris hung, and the only way out was to send a break from the service console. Then I got these messages during boot:
SunOS Release 5.10 Version Generic_138888-01 64-bit
/dev/rdsk/c5t216000C0FF8999D1d0s0 is clean
Reading ZFS config: done.
Mounting ZFS filesystems: (1/6)cannot mount 'root': mountpoint or dataset is busy
(6/6)
svc:/system/filesystem/local:default: WARNING: /usr/sbin/zfs mount -a failed: exit status 1
Jan 23 14:25:42 svc.startd[7]: svc:/system/filesystem/local:default: Method "/lib/svc/method/fs-local" failed with exit status 95.
Jan 23 14:25:42 svc.startd[7]: system/filesystem/local:default failed fatally: transitioned to maintenance (see 'svcs -xv' for details)
---- Many services are affected by this error; unfortunately one of them is system-log, so I cannot find any relevant information about why this happens.
bash-3.00# svcs -xv
svc:/system/filesystem/local:default (local file system mounts)
State: maintenance since Fri Jan 23 14:25:42 2009
Reason: Start method exited with $SMF_EXIT_ERR_FATAL.
See: http://sun.com/msg/SMF-8000-KS
See: /var/svc/log/system-filesystem-local:default.log
Impact: 32 dependent services are not running:
svc:/application/psncollector:default
svc:/system/webconsole:console
svc:/system/filesystem/autofs:default
svc:/system/system-log:default
svc:/milestone/multi-user:default
svc:/milestone/multi-user-server:default
svc:/system/basicreg:default
svc:/system/zones:default
svc:/application/graphical-login/cde-login:default
svc:/system/iscsitgt:default
svc:/application/cde-printinfo:default
svc:/network/smtp:sendmail
svc:/network/ssh:default
svc:/system/dumpadm:default
svc:/system/fmd:default
svc:/system/sysidtool:net
svc:/network/rpc/bind:default
svc:/network/nfs/nlockmgr:default
svc:/network/nfs/status:default
svc:/network/nfs/mapid:default
svc:/application/sthwreg:default
svc:/application/stosreg:default
svc:/network/inetd:default
svc:/system/sysidtool:system
svc:/system/postrun:default
svc:/system/filesystem/volfs:default
svc:/system/cron:default
svc:/application/font/fc-cache:default
svc:/system/boot-archive-update:default
svc:/network/shares/group:default
svc:/network/shares/group:zfs
svc:/system/sac:default
[ Jan 23 14:25:40 Executing start method ("/lib/svc/method/fs-local") ]
WARNING: /usr/sbin/zfs mount -a failed: exit status 1
[ Jan 23 14:25:42 Method "start" exited with status 95 ]
Finally, here is the output of the zpool list command, where everything about the ZFS pools looks OK:
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
root 68G 18.5G 49.5G 27% ONLINE -
storedgeD2 404G 45.2G 359G 11% ONLINE -
I would appreciate any help.
Thanks in advance,
Berrosch
OK, I've tried to install s10u6 with the default rpool and move the root user's home directory to /rpool (which is nonsense of course; it was just for testing purposes), and everything went OK.
Another experiment was with the root pool named 'root' and the root user's home in /root; everything went OK as well.
Next try was with the root pool named 'root', the root user's home in /root, and the iSCSI initiator enabled:
# svcs -a |grep iscsi
disabled 16:31:07 svc:/network/iscsi_initiator:default
# svcadm enable iscsi_initiator
# svcs -a |grep iscsi
online 16:34:11 svc:/network/iscsi_initiator:default
and voila! the problem is here...
Mounting ZFS filesystems: (1/5)cannot mount 'root': mountpoint or dataset is busy
(5/5)
svc:/system/filesystem/local:default: WARNING: /usr/sbin/zfs mount -a failed: exit status 1
Feb 9 16:37:35 svc.startd[7]: svc:/system/filesystem/local:default: Method "/lib/svc/method/fs-local" failed with exit status 95.
Feb 9 16:37:35 svc.startd[7]: system/filesystem/local:default failed fatally: transitioned to maintenance (see 'svcs -xv' for details)
Seems to be a bug in the iSCSI implementation, some hard-coded reference to 'root' in the source code or something like that...
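If anyone hits this again and wants to narrow it down, a quick way to see what is holding the 'root' mountpoint busy (just a diagnostic sketch) is:
# zfs get mountpoint,mounted root
# fuser -cu /root        (lists the processes holding files open under that mountpoint)
# zfs mount root         (retry the mount by hand once the culprit is identified)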
Martin -
Hi,
has anyone got experience connecting Promise VTrak RAID arrays to Solaris using iSCSI?
Solaris sees the targets but does not make a connection, no matter how hard we try.
The MS initiator, on the other hand, works instantly, as does the Linux one.
How can we diagnose the problem and find a compatibility solution?
In the messages there is no problem mentioned; there are just no disks (LUNs) attached.
Try cross-posting to another mailing list:
http://www.opensolaris.org/jive/forum.jspa?forumID=94
http://www.opensolaris.org/os/community/storage/
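Before cross-posting, it is usually worth capturing the login parameters and the actual login exchange (a generic sketch; the interface name is an example):
# iscsiadm list initiator_node            (check header/data digest and authentication settings against the array)
# iscsiadm list target -v                 (shows per-target connection counts and negotiated parameters)
# snoop -d ce0 port 3260                  (watch the iSCSI login conversation on the wire)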
M.C> -
Hi,
I am setting up a test environment for evaluating iSCSI, and we would like to know whether Sun Trunking 1.3 with iSCSI under Solaris 10 is supported; we would also like to implement jumbo frames.
Has anyone tried this kind of configuration?
Thank you in advance for any information!
Chris
I think you should be using dladm rather than Sun Trunking with Solaris 10.
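For what it's worth, a minimal Solaris 10 link aggregation with dladm looks something like this (interface names and address are examples; dladm aggregation needs GLDv3 drivers such as bge or e1000g, which is one reason Sun Trunking is still used with older drivers like ce):
# dladm create-aggr -d bge0 -d bge1 1
# ifconfig aggr1 plumb 192.168.10.5 netmask 255.255.255.0 up
# dladm show-aggr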
Also, you don't say, but if you're planning to use Solaris iSCSI targets, they don't exist in Solaris 10 yet; you need OpenSolaris for target support at the moment.
Thanks for your reply.
The iSCSI target was a NetApp appliance. We actually
did find that Sun Trunking 1.3 works fine with
Solaris 10 as well as jumbo frames.
We used Sun's quad (ce) gigabit Ethernet adapter and
trunked two ports on the host end.
Thanks again.
I'll have to look into dladm.
Update 2 of Solaris 10 has a native iSCSI initiator.
Works great on 3PAR's storage array.
M -
ISCSI target setup fails: command "iscsitadm" not available?
Hello,
I want to set up an iSCSI target;
however, it seems I don't have
iscsitadm
available on my system, only
iscsiadm
What to do?
Is this
http://alessiodini.wordpress.com/2010/10/24/iscsi-nice-to-meet-you/
still valid in terms of the setup procedure?
Thanks
OK,
here you go using COMSTAR:
pkg install storage-server
pkg install -v SUNWiscsit
http://thegreyblog.blogspot.com/2010/02/setting-up-solaris-comstar-and.html
svcs \*stmf\*
svcadm enable svc:/system/stmf:default
zfs create -V 250G tank-esata/macbook0-tm
sbdadm create-lu /dev/zvol/rdsk/tank-esata/macbook0-tm
sbdadm list-lu
stmfadm list-lu -v
stmfadm add-view 600144f00800271b51c04b7a6dc70001
svcs \*scsi\*
itadm create-target
devfsadm -i iscsi
reboot
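On the initiator side, picking up the new target is then the usual sequence seen elsewhere in this thread (the IP address is a placeholder):
# iscsiadm add discovery-address 192.168.1.20
# iscsiadm modify discovery -t enable
# devfsadm -i iscsi
# format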
Solaris 11 Express iSCSI manual:
http://dlc.sun.com/pdf/821-1459/821-1459.pdf
and that for reference
http://nwsmith.blogspot.com/2009/07/opensolaris-2009-06-and-comstar-iscsi.html
Windows iSCSI initiator
http://www.microsoft.com/downloads/en/details.aspx?familyid=12cb3c1a-15d6-4585-b385-befd1319f825&displaylang=en
works after manually adding the server's IP (no auto-detect) -
ISCSI device mapping on Solaris 10
Hi,
A brief overview of my situation:
I have 3 Oracle Solaris X86-64 virtual machines that I'm using for testing. I have configured a ZFS storage pool on one of them (named solastorage), that will serve as my iSCSI target. The remaining 2 servers (solarac1 and solarac2) are meant to be my RAC nodes for a test Oracle 11g R2 RAC installation.
/etc/hosts listing:
# ZFS iSCSI target
192.168.247.150 solastorage solastorage.domain.com
# RAC Public IPs
192.168.247.131 solarac1 solarac1.domain.com loghost
192.168.247.132 solarac2 solarac2.domain.com
A brief overview of the steps carried out at solastorage (after enabling the iSCSI target service):
zpool create rac_volume mirror c0d1 c1d1
zfs create -V 0.5g rac_volume/ocr
zfs create -V 0.5g rac_volume/voting
zfs set shareiscsi=on rac_volume/ocr
zfs set shareiscsi=on rac_volume/voting
On both the RAC servers (of course I've enabled the iSCSI initiator service on both):
iscsiadm modify discovery -t enable
iscsiadm add discovery-address 192.168.247.150
devfsadm -i iscsi
After that, when I run format on both sides, I can see the following:
solarac1# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c1t0d0 <DEFAULT cyl 2607 alt 2 hd 255 sec 63>
/pci@0,0/pci15ad,1976@10/sd@0,0
1. c3t15d0 <DEFAULT cyl 509 alt 2 hd 64 sec 32>
/iscsi/[email protected]%3A02%3A54d1d1d2-7154-ee78-94e6-c3d053ca7ab50001,0
2. c3t16d0 <DEFAULT cyl 2045 alt 2 hd 128 sec 32>
/iscsi/[email protected]%3A02%3Af109f049-9f76-6a16-c36a-d42c4d6818fe0001,0
solarac2# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c1t0d0 <DEFAULT cyl 2607 alt 2 hd 255 sec 63>
/pci@0,0/pci15ad,1976@10/sd@0,0
1. c2t2d0 <DEFAULT cyl 509 alt 2 hd 64 sec 32>
/iscsi/[email protected]%3A02%3A54d1d1d2-7154-ee78-94e6-c3d053ca7ab50001,0
2. c2t3d0 <DEFAULT cyl 2045 alt 2 hd 128 sec 32>
/iscsi/[email protected]%3A02%3Af109f049-9f76-6a16-c36a-d42c4d6818fe0001,0
solastorage# iscsitadm list target -v | more
Target: rac_volume/ocr
iSCSI Name: iqn.1986-03.com.sun:02:54d1d1d2-7154-ee78-94e6-c3d053ca7ab5
Alias: rac_volume/ocr
Connections: 2
Initiator:
iSCSI Name: iqn.1986-03.com.sun:01:2a95e0f4ffff.4d586ac7
Alias: solarac1
Initiator:
iSCSI Name: iqn.1986-03.com.sun:01:2a95e0f4ffff.4d5b7cef
Alias: solarac2
ACL list:
TPGT list:
LUN information:
LUN: 0
GUID: 600144f04d5b8fca00000c29655dc000
VID: SUN
PID: SOLARIS
Type: disk
Size: 512M
Backing store: /dev/zvol/rdsk/rac_volume/ocr
Status: online
So, here I can see the same devices on both servers, only they are recognised under different device names. Without using any 3rd-party software (for example Oracle Cluster), how can I manually map the device names on both these servers so that they are the same?
Previously, for Oracle 10g Release 2, I was able to use the metainit commands to create a pseudo-name that is the same on both servers. However, as of Oracle 11g R2, devices with the naming format /dev/md/rdsk/.... are no longer valid.
Does anyone know of a way I can manually re-map these devices to the same device names on the OS level, without needing Oracle Cluster or something similar?
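One approach that avoids third-party clusterware (offered only as a sketch; not verified on this exact setup) is to leave MPxIO enabled for the iSCSI initiator on both nodes. The LUN GUID reported by iscsitadm above (600144f04d5b8fca00000c29655dc000) then becomes part of the device name, so both nodes see identical /dev/rdsk entries:
mpxio-disable="no";       (the default in /kernel/drv/iscsi.conf; make sure it has not been set to "yes", then do a reconfigure reboot)
# format                  (the LUN appears as c<N>t600144F04D5B8FCA00000C29655DC000d0 on every node)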
Thanks in advance,
NS Selvam
Edited by: NS Selvam on Feb 16, 2011 1:32 AM
Edited by: NS Selvam on Feb 16, 2011 1:33 AM
Thank you for your response.
Setting the "ddi-forceattach" property in Pseudo driver .conf file will not
help. Solaris does not "attach" Pseudo drivers which do not have ".conf"
children (even though the Pseudo driver conf file has "ddi-forceattach=1"
property set). Opening the Pseudo device file will attach the Pseudo driver.I'm confused... We have a .conf file, as mentioned, but what makes
it a "Pseudo driver .conf" rather than just a "driver .conf"?
From what I understand of your requirement, the following should be sufficient:
1. Set the property "ddi-forceattach=1" for all physical devices that are required by the Pseudo driver.
2. The application opens the Pseudo device node.
Let me know if you have any queries / issues.
I do have further questions.
Included below is a version of our .conf file modified to protect the
names of the guilty.
As you can see, there is part of it which defines a pseudo device,
and then a set of properties that apply to all devices. Or that's the
intention.
In #1, you said to set the ddi-forceattach property for all "physical
devices", but how do I do this, if it's not what I'm already doing? And what
do you mean "required by Pseudo driver"?
name="foobar" parent="pseudo" instance=1000 FOOBAR_PSEUDO=1;
ddi-forceattach=1
FOOBAR_SYM1=1
FOOBAR_SYM2=2
FOOBAR_SYM3=3;
On a Solaris 9 system of mine, recently I believe I have seen multiple cases
where I've booted, and a physical device has not gotten attached, but if I
reboot, it will be attached the next time.
Thanks,
Nathan -
Hi all,
I'm a Solaris newbie and new user, so let's go.
I am having difficulty configuring/installing an iSCSI target on Solaris 10.
My uname -a is
SUNOS 5.10 generic_118822-02 sun4u sparc
If I launch pkginfo | grep iscsi, it returns:
system SUNWiscsir sun Iscsi device driver (root)
system SUNWiscsiu sun Iscsi management utilities (usr)
but I cannot find iscsiadm in any system path, nor any other file concerning iSCSI.
I installed pkg-get too, but I'm unable to find this package.
Could you help me?
Thank you all for reading and for your support.
Darktux
Hi all,
First of all, thank you for the support...
Now I need some explanation about installing 119090-25 (iSCSI driver + target).
This patch requires the kernel patch (118833-36-1), which requires 3 other patches: 118918-13, 119042-09 and 119578-30.
Now I have some questions about installing the kernel patch... my server is running Oracle, and I'm asking whether this upgrade could cause problems for the running services.
Could you guide me through this adventure? I repeat, I'm a total newbie, and I'm reading as much as possible to understand Solaris 10 as best I can.
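The patching itself is mechanical; a rough sketch of the order implied above (read each patch's README first, shut Oracle down cleanly beforehand, and expect kernel patches to want single-user mode and a reboot):
# init S
# patchadd 118918-13
# patchadd 119042-09
# patchadd 119578-30
# patchadd 118833-36
# patchadd 119090-25
# init 6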
Thank you.
Sorry for my English. -
Missing dependency in Solaris 10 Update8 iscsi initiator / zones not booted
In previous Solaris 10 releases, the iscsi_initiator SMF service had the following dependency:
# svcs -D iscsi_initiator
STATE STIME FMRI
online Jan_11 svc:/system/metainit:default
Because this dependency is missing in Update 8, the service is started even after the zones service.
online 14:45:50 svc:/system/zones:default
online 14:46:01 svc:/network/iscsi/initiator:default
Result:
- No filesystems on iSCSI disks mounted, no Zone on iSCSI running ... after system boot.
Any ideas?
Quick thought...
What happens if you mark all your zones not to autoboot by default, so at system boot you won't have this race condition? Then have a high-numbered rc3 script that probes the services and makes sure all drivers and filesystems are available. Once they are available, boot the zones, and reset the autoboot option if need be. But make sure the rc3 script disables the autoboot function again on the way down. This should hold the problem together until Sun/Oracle fixes this issue.
Nasty hack, but that's software for you.
Thoughts?
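Another option, instead of the rc script, is to put the dependency back yourself with svccfg so that the zones service waits for the initiator (a sketch only; the property-group name is arbitrary, and the FMRI is the Update 8 one shown above):
# svccfg -s svc:/system/zones:default 'addpg iscsi-wait dependency'
# svccfg -s svc:/system/zones:default 'setprop iscsi-wait/grouping = astring: require_all'
# svccfg -s svc:/system/zones:default 'setprop iscsi-wait/restart_on = astring: none'
# svccfg -s svc:/system/zones:default 'setprop iscsi-wait/type = astring: service'
# svccfg -s svc:/system/zones:default 'setprop iscsi-wait/entities = fmri: svc:/network/iscsi/initiator:default'
# svcadm refresh svc:/system/zones:default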