7110 iSCSI Performance is ABYSMAL

I have Gold support, so I opened a case on this in July, and Sun doesn't seem to think that their machine not performing as advertised is any big deal; they appear to be sitting on their thumbs. They were able to replicate the issue back in August, but no fix has been forthcoming. I updated the firmware to 2009.09.01.0.0,1-1.2 because it was supposed to improve iSCSI performance. It actually made things slightly worse.
So here's the problem:
Let's say you have an iSCSI LUN set up on a RAIDz2 pool. Until you fill up the read cache, you can copy files from the 7110 to your Windows server at wire speed. As soon as the cache fills up (and you wish to copy a file that has not been cached), the transfer rate drops to less than 10 MB/s. (!?!?!?!?!?!)
It has nothing to do with the switch. I bypassed it with a direct connection. It's not Nagle's Algorithm. I've disabled that. It's not Windows or the particular server. This behavior persists across multiple machines and operating systems. And like I said, Sun has replicated the problem.
When the file(s) are in cache and you copy them from the 7110, the Networking tab in Task Manager shows 50%-plus utilization of the NIC. Once the cache has been filled and I try to copy a non-cached file, NIC utilization drops to 7%.
I have graphs from the analytics that show that when a file is cached and you try to copy it using Windows Explorer, network utilization (on the 7110) spikes and hard disk activity drops. There is a direct correlation. If the read cache fills up part of the way into the file copy, network activity drops and hard drive activity picks up immediately.
Here's the kicker: Per instructions from the Sun support engineer, I ran iostat -xtcn from a shell before, during, and after attempts to copy a bunch of non-cached files (via Windows Explorer). Here are the results:
Sun1# iostat -xtnc
   tty         cpu
tin tout  us sy wt id
   0    0   0  7  0 92
                    extended device statistics             
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.3    6.2   19.0  277.0  0.0  0.1    2.8   18.7   0   1 c0t5000C50007F9EE7Bd0
    0.3    6.2   20.2  277.0  0.0  0.1    2.6   17.3   0   1 c0t5000C5000AD6B6C7d0
  192.7   16.2 1249.8   20.0  0.3  0.9    1.3    4.4   2  15 c0t5000C5000F83226Fd0
  192.3   16.2 1258.3   20.0  0.3  0.9    1.3    4.4   2  15 c0t5000C5000F816EF7d0
  192.5   16.2 1249.1   20.0  0.3  0.9    1.3    4.4   2  15 c0t5000C5000F8188BBd0
  192.4   16.2 1253.9   20.0  0.3  0.9    1.3    4.4   2  15 c0t5000C5000F83D80Bd0
  192.6   16.2 1247.3   20.0  0.3  0.9    1.3    4.4   2  15 c0t5000C5000F85377Fd0
  192.6   16.2 1247.2   20.0  0.3  0.9    1.3    4.4   2  15 c0t5000C5000F81889Fd0
  192.2   16.1 1258.7   20.0  0.3  0.9    1.4    4.5   3  15 c0t5000C5000F831B5Bd0
  192.5   16.2 1250.1   20.0  0.3  0.9    1.3    4.3   2  15 c0t5000C5000F8187F7d0
  192.2   16.1 1264.1   20.0  0.3  0.9    1.4    4.5   3  15 c0t5000C5000F814A23d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c0t5000C5000F814747d0
  192.2   16.2 1258.9   20.0  0.3  0.9    1.3    4.4   2  15 c0t5000C5000F838E47d0
  192.5   16.2 1246.3   20.0  0.3  0.9    1.3    4.3   2  15 c0t5000C5000F83B94Fd0
  192.3   16.2 1260.3   20.0  0.3  0.9    1.3    4.4   2  15 c0t5000C5000F814F37d0
  192.7   16.2 1247.5   20.0  0.3  0.9    1.3    4.4   2  15 c0t5000C5000F8544F7d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c2t0d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c2t0d1
Sun1# iostat -xtnc
   tty         cpu
tin tout  us sy wt id
   0    0   0  7  0 92
                    extended device statistics             
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.3    6.2   19.0  277.0  0.0  0.1    2.8   18.7   0   1 c0t5000C50007F9EE7Bd0
    0.3    6.2   20.2  277.0  0.0  0.1    2.6   17.3   0   1 c0t5000C5000AD6B6C7d0
  192.7   16.2 1249.8   20.0  0.3  0.9    1.3    4.4   2  15 c0t5000C5000F83226Fd0
  192.3   16.2 1258.3   20.0  0.3  0.9    1.3    4.4   2  15 c0t5000C5000F816EF7d0
  192.5   16.2 1249.0   20.0  0.3  0.9    1.3    4.4   2  15 c0t5000C5000F8188BBd0
  192.4   16.2 1253.9   20.0  0.3  0.9    1.3    4.4   2  15 c0t5000C5000F83D80Bd0
  192.6   16.2 1247.3   20.0  0.3  0.9    1.3    4.4   2  15 c0t5000C5000F85377Fd0
  192.6   16.2 1247.2   20.0  0.3  0.9    1.3    4.4   2  15 c0t5000C5000F81889Fd0
  192.2   16.1 1258.7   20.0  0.3  0.9    1.4    4.5   3  15 c0t5000C5000F831B5Bd0
  192.5   16.2 1250.1   20.0  0.3  0.9    1.3    4.3   2  15 c0t5000C5000F8187F7d0
  192.2   16.1 1264.0   20.0  0.3  0.9    1.4    4.5   3  15 c0t5000C5000F814A23d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c0t5000C5000F814747d0
  192.2   16.2 1258.9   20.0  0.3  0.9    1.3    4.4   2  15 c0t5000C5000F838E47d0
  192.5   16.2 1246.3   20.0  0.3  0.9    1.3    4.3   2  15 c0t5000C5000F83B94Fd0
  192.3   16.2 1260.3   20.0  0.3  0.9    1.3    4.4   2  15 c0t5000C5000F814F37d0
  192.7   16.2 1247.4   20.0  0.3  0.9    1.3    4.4   2  15 c0t5000C5000F8544F7d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c2t0d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c2t0d1
Sun1# iostat -xtnc
   tty         cpu
tin tout  us sy wt id
   0    0   0  7  0 92
                    extended device statistics             
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.3    6.2   19.0  277.0  0.0  0.1    2.8   18.7   0   1 c0t5000C50007F9EE7Bd0
    0.3    6.2   20.2  277.0  0.0  0.1    2.6   17.3   0   1 c0t5000C5000AD6B6C7d0
  192.7   16.2 1249.8   20.0  0.3  0.9    1.3    4.4   2  15 c0t5000C5000F83226Fd0
  192.3   16.2 1258.3   20.0  0.3  0.9    1.3    4.4   2  15 c0t5000C5000F816EF7d0
  192.5   16.2 1249.0   20.0  0.3  0.9    1.3    4.4   2  15 c0t5000C5000F8188BBd0
  192.4   16.2 1253.9   20.0  0.3  0.9    1.3    4.4   2  15 c0t5000C5000F83D80Bd0
  192.6   16.2 1247.3   20.0  0.3  0.9    1.3    4.4   2  15 c0t5000C5000F85377Fd0
  192.6   16.2 1247.2   20.0  0.3  0.9    1.3    4.4   2  15 c0t5000C5000F81889Fd0
  192.2   16.1 1258.7   20.0  0.3  0.9    1.4    4.5   3  15 c0t5000C5000F831B5Bd0
  192.5   16.2 1250.1   20.0  0.3  0.9    1.3    4.3   2  15 c0t5000C5000F8187F7d0
  192.2   16.1 1264.0   20.0  0.3  0.9    1.4    4.5   3  15 c0t5000C5000F814A23d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c0t5000C5000F814747d0
  192.2   16.2 1258.9   20.0  0.3  0.9    1.3    4.4   2  15 c0t5000C5000F838E47d0
  192.5   16.2 1246.3   20.0  0.3  0.9    1.3    4.3   2  15 c0t5000C5000F83B94Fd0
  192.3   16.2 1260.3   20.0  0.3  0.9    1.3    4.4   2  15 c0t5000C5000F814F37d0
  192.7   16.2 1247.4   20.0  0.3  0.9    1.3    4.4   2  15 c0t5000C5000F8544F7d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c2t0d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c2t0d1

I think it is clear from this output, as well as from the lack of rapid blinking of the LEDs on the 7110, that it isn't even trying to pull the information off the hard drives.
And get this--the support engineer was able to verify that the 7110 performs iSCSI write operations faster than reads.
So my question is this: Does anyone know if it's as simple as changing a setting, or will I have to wait for Sun to get their act together?

Thanks for that information. I'm not familiar with Solaris commands, so I just did what I was told to do. Adding the interval parameter did the trick; without it, iostat was only reporting averages accumulated since boot rather than current activity. Thank you!
So here's some output obtained during a file transfer:
Sun1# iostat -xtnc 5
   tty         cpu
tin tout  us sy wt id
   0    0   0  7  0 92
                    extended device statistics             
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.3    6.2   19.1  277.3  0.0  0.1    2.8   18.7   0   1 c0t5000C50007F9EE7Bd0
    0.3    6.2   20.2  277.3  0.0  0.1    2.7   17.3   0   1 c0t5000C5000AD6B6C7d0
  194.2   16.2 1260.5   20.0  0.3  0.9    1.3    4.4   2  15 c0t5000C5000F83226Fd0
  193.8   16.2 1269.0   20.0  0.3  0.9    1.3    4.4   2  15 c0t5000C5000F816EF7d0
  194.1   16.2 1259.7   20.0  0.3  0.9    1.3    4.4   2  15 c0t5000C5000F8188BBd0
  193.9   16.2 1264.6   20.0  0.3  0.9    1.3    4.5   2  15 c0t5000C5000F83D80Bd0
  194.1   16.2 1257.9   20.0  0.3  0.9    1.3    4.4   2  15 c0t5000C5000F85377Fd0
  194.2   16.2 1257.8   20.0  0.3  0.9    1.3    4.4   2  15 c0t5000C5000F81889Fd0
  193.7   16.2 1269.4   20.0  0.3  0.9    1.4    4.5   3  15 c0t5000C5000F831B5Bd0
  194.0   16.2 1260.8   20.0  0.3  0.9    1.3    4.3   2  15 c0t5000C5000F8187F7d0
  193.7   16.2 1274.9   20.0  0.3  0.9    1.4    4.5   3  15 c0t5000C5000F814A23d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c0t5000C5000F814747d0
  193.7   16.2 1269.7   20.0  0.3  0.9    1.3    4.4   2  15 c0t5000C5000F838E47d0
  194.0   16.2 1257.0   20.0  0.3  0.9    1.3    4.3   2  15 c0t5000C5000F83B94Fd0
  193.8   16.2 1271.1   20.0  0.3  0.9    1.3    4.4   2  15 c0t5000C5000F814F37d0
  194.3   16.2 1258.1   20.0  0.3  0.9    1.3    4.4   2  15 c0t5000C5000F8544F7d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c2t0d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c2t0d1
   tty         cpu
tin tout  us sy wt id
   0  336   0 12  0 88
                    extended device statistics             
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0   29.5    0.0 1863.2  0.1  0.4    2.2   14.8   1   5 c0t5000C50007F9EE7Bd0
    0.0   29.1    0.0 1863.2  0.0  0.4    0.4   13.8   0   5 c0t5000C5000AD6B6C7d0
  477.7    1.0  631.7    9.5  1.1  0.6    2.4    1.3  48  60 c0t5000C5000F83226Fd0
  483.1    1.0  669.9    9.5  0.0  0.9    0.0    1.8   0  36 c0t5000C5000F816EF7d0
  478.3    1.0  704.2    9.4  0.0  0.9    0.0    1.8   0  35 c0t5000C5000F8188BBd0
  484.5    1.0  678.8    9.6  0.0  0.8    0.0    1.7   0  35 c0t5000C5000F83D80Bd0
  485.3    1.0  683.3    9.6  0.0  0.8    0.0    1.7   0  35 c0t5000C5000F85377Fd0
  483.9    1.0  623.8    9.5  0.0  0.8    0.0    1.7   0  35 c0t5000C5000F81889Fd0
  480.1    1.0  650.3    9.5  0.0  0.9    0.0    1.9   0  37 c0t5000C5000F831B5Bd0
  484.3    1.0  709.0    9.5  0.0  0.9    0.0    1.8   0  35 c0t5000C5000F8187F7d0
  480.9    1.0  665.2    9.5  0.0  0.9    0.0    1.9   0  37 c0t5000C5000F814A23d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c0t5000C5000F814747d0
  484.9    1.0  651.6    9.5  0.0  0.9    0.0    1.9   0  37 c0t5000C5000F838E47d0
  483.5    1.0  718.7    9.6  0.0  0.9    0.0    1.9   0  38 c0t5000C5000F83B94Fd0
  485.5    1.0  668.4    9.5  0.0  0.8    0.0    1.7   0  35 c0t5000C5000F814F37d0
  482.9    1.0  675.4    9.4  0.0  0.8    0.0    1.7   0  35 c0t5000C5000F8544F7d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c2t0d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c2t0d1
   tty         cpu
tin tout  us sy wt id
   0  336   0  8  0 91
                    extended device statistics             
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.2    0.0    0.2    0.0  0.0  0.0    0.0    0.1   0   0 c0t5000C50007F9EE7Bd0
    0.2    0.0    0.1    0.0  0.0  0.0    0.0    0.3   0   0 c0t5000C5000AD6B6C7d0
  643.7    1.2  967.9    3.5  1.2  0.7    1.8    1.0  50  67 c0t5000C5000F83226Fd0
  662.1    1.2  946.4    3.3  0.0  1.1    0.0    1.6   0  43 c0t5000C5000F816EF7d0
  667.1    1.2  943.2    3.6  0.0  1.1    0.0    1.6   0  43 c0t5000C5000F8188BBd0
  670.5    1.2  952.7    3.6  0.0  1.0    0.0    1.5   0  41 c0t5000C5000F83D80Bd0
  673.3    1.2  938.0    3.6  0.0  1.0    0.0    1.5   0  40 c0t5000C5000F85377Fd0
  666.5    1.2  945.6    3.5  0.0  0.9    0.0    1.4   0  39 c0t5000C5000F81889Fd0
  667.1    1.2  948.3    3.3  0.0  1.0    0.0    1.5   0  41 c0t5000C5000F831B5Bd0
  671.5    1.2  945.1    3.5  0.0  1.0    0.0    1.5   0  40 c0t5000C5000F8187F7d0
  671.1    1.2  964.0    3.4  0.0  1.1    0.0    1.6   0  42 c0t5000C5000F814A23d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c0t5000C5000F814747d0
  669.7    1.2  973.7    3.2  0.0  1.0    0.0    1.6   0  42 c0t5000C5000F838E47d0
  668.9    1.2  926.0    3.6  0.0  1.0    0.0    1.5   0  40 c0t5000C5000F83B94Fd0
  671.3    1.2  928.2    3.3  0.0  1.1    0.0    1.6   0  42 c0t5000C5000F814F37d0
  673.3    1.2  932.2    3.6  0.0  1.0    0.0    1.5   0  40 c0t5000C5000F8544F7d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c2t0d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c2t0d1
   tty         cpu
tin tout  us sy wt id
   0  336   0 10  0 90
                    extended device statistics             
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c0t5000C50007F9EE7Bd0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c0t5000C5000AD6B6C7d0
  532.9    0.8  752.0    7.1  1.1  0.6    2.1    1.1  48  61 c0t5000C5000F83226Fd0
  550.7    0.8  762.7    7.2  0.0  0.9    0.0    1.7   0  35 c0t5000C5000F816EF7d0
  548.3    0.8  776.3    7.2  0.0  0.9    0.0    1.7   0  37 c0t5000C5000F8188BBd0
  551.1    0.8  812.9    7.2  0.0  0.9    0.0    1.7   0  37 c0t5000C5000F83D80Bd0
  551.1    0.8  752.3    7.2  0.0  0.9    0.0    1.6   0  36 c0t5000C5000F85377Fd0
  548.1    0.8  755.3    7.1  0.0  0.9    0.0    1.6   0  34 c0t5000C5000F81889Fd0
  544.5    0.8  800.4    7.2  0.0  1.0    0.0    1.8   0  38 c0t5000C5000F831B5Bd0
  553.9    0.8  736.8    7.2  0.0  0.9    0.0    1.6   0  36 c0t5000C5000F8187F7d0
  552.3    0.8  797.5    7.2  0.0  0.9    0.0    1.6   0  36 c0t5000C5000F814A23d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c0t5000C5000F814747d0
  549.1    0.8  788.2    7.2  0.0  0.9    0.0    1.7   0  37 c0t5000C5000F838E47d0
  548.5    0.8  787.3    7.1  0.0  0.9    0.0    1.7   0  36 c0t5000C5000F83B94Fd0
  552.3    0.8  756.7    7.2  0.0  0.9    0.0    1.7   0  36 c0t5000C5000F814F37d0
  545.3    0.8  795.7    7.1  0.0  0.9    0.0    1.7   0  37 c0t5000C5000F8544F7d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c2t0d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c2t0d1

I don't know how to interpret this, so any help would be greatly appreciated.
Thanks!

Similar Messages

  • Poor iSCSI Performance over 10GbE connection.

    Hi Solaris Folks,
    I have a setup of two SunFire X4540s connected over a 10GbE link (with a Cisco Catalyst 4507 in the middle) using Sun's Dual 10 GbE NIC (X1027A-Z) in both machines.
    The plan is to provide the 2nd machine a ZFS volume via iSCSI from the 1st machine. The 1st machine has a zpool with 22x 2-way mirror vdevs, and a local
    FileBench test shows good results.
    I've now configured the 1st machine to share a ZFS volume via iSCSI, and on the 2nd machine I used that target as a raw device for a new zpool. If I now run
    FileBench against the "iSCSI zpool", I get results which are far from the local benchmarks and far from really using the 10GbE link (most of the time the
    bandwidth is around 1.5 Gbit/s).
    I tried a couple of performance tunings which I found in the Sun blogs and wikis, but without any luck in increasing the performance significantly.
    Anybody out there who has tried the same (iSCSI over 10GbE)? Any experience with iSCSI performance? Suggestions?
    Here is an overview of the changes I made; a sketch of how they might be applied follows the list:
    iSCSI Initiator only:
    tcp-nodelay=1; in /kernel/drv/iscsi.conf
    iSCSI Target only:
    Configured a TPGT so that only the 10GbE connection can be used.
    On both machines:
    tcp_recv_hiwat 400000, tcp_xmit_hiwat 400000, tcp_max_buf 2097152, tcp_cwnd_max 2097152 on /dev/tcp via ndd
    soft-lso-enable = 1; and accept_jumbo=1; within nxge.conf
    MTU on both NICs is set to 9194. Jumbo Frames are also configured on the Cisco switch ports.
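    For reference, a minimal sketch of how the ndd and MTU settings above might be applied on Solaris 10; the interface name nxge0 is an assumption, and the values are simply the ones listed in this post:

    # Hypothetical sketch: apply the TCP tunings listed above (ndd settings are not persistent across reboots)
    ndd -set /dev/tcp tcp_recv_hiwat 400000
    ndd -set /dev/tcp tcp_xmit_hiwat 400000
    ndd -set /dev/tcp tcp_max_buf 2097152
    ndd -set /dev/tcp tcp_cwnd_max 2097152
    # Jumbo frames on the 10GbE interface (nxge0 is an assumed name); the nxge.conf changes
    # (soft-lso-enable, accept_jumbo) still require a driver reload or reboot to take effect.
    ifconfig nxge0 mtu 9194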
    Regards,
    Lemmy

    The answer could be .. don't use ZFS volumes
    I have an SSD and a simple Seagate disk. Both on SATA. Same capacity of 74.53GB
    If I use those disks to create a pool, make volumes on the pool, and then
    use the shareiscsi property to create a target (and there is plenty of documentation
    describing that as the easy way to do it), the performance
    is dreadful.
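    For context, a minimal sketch of that zvol + shareiscsi path (pool and volume names here are assumptions, not from this post):

    # Hypothetical sketch of the "easy way": a ZFS volume exported via the shareiscsi property
    zpool create testpool c4t0d0
    zfs create -V 64g testpool/iscsivol
    zfs set shareiscsi=on testpool/iscsivol   # the iSCSI target is created automatically
    iscsitadm list target                     # verify the auto-created target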
    If, on the other hand, I use iscsitadm to make a target out of the two disk devices, using commands like these, for example:
    iscsitadm create target --type raw -b /dev/dsk/c4t0d0
    iscsitadm create target --type raw -b /dev/dsk/c4t1d0
    and then create the pool on the initiator, then the performance is entirely different!
    A bit more detail:
    For example, I built a target out of a zfs volume, then on the initiator I wrote about 3.2GB to the pool using zfs send/recv. It took 26 minutes 24 seconds: pathetic.
    I almost gave up waiting!
    So I exported these useless zfs pools, removed the static configs, cleared the zfs shareiscsi property, and started again.
    If the target is instead built "manually" using iscsitadm, e.g. "iscsitadm create target --type raw -b /dev/dsk/c4t0d0" (dsk or rdsk, it doesn't seem to make much difference),
    the same write test takes just 3 minutes 12 seconds.
    So in this example, on Solaris 10 10/09 (x86), the performance degrades by a factor of 8 if you use zfs volumes. Nice idea, easy to use, but a terrible overhead.
    At least for SATA disks.
    Incidentally, for a SATA SSD the factor is not as bad (5 min vs 3 min). However, an ordinary disk like the ST3808110AS is 8 times slower with zfs volumes.
    No idea why .. but easily reproducible.
    So, what do you see if you don't use zfs volumes but instead use the disk device name itself (the whole disk, remember, not a slice)?
    For both target and initiator I was using Solaris 10 10/09 (x86) on a v20z with a 3rd-party eSATA card. The SunOS kernel on the target was recently patched to 5.10 Generic_142901-10.

  • FC vs. iSCSI performance with Mac OS X Server

    Hi All,
    I've relied on direct-attached Fiber Channel storage for our network server volumes since the early days of the Xserve RAID units. We use our network volumes for graphic design needs (with large files) over a Gigabit network, with some Windows clients, too. We're currently running Mac OS X 10.5.8 Server on an Xserve quad core (2008 model) with a 16 TB Enhance Technology UltraStor RS16FS single controller RAID unit with 4Gbit FC. It's worked extremely well for us, but it's going on 3 years old now, and I'm looking to replace it with a dual controller RAID unit for added peace of mind, either from Enhance Technology, Promise, or another vendor. I'll also be increasing the capacity to 32 TB.
    The big question is whether I should make the leap to iSCSI with a new RAID unit? I understand the advantages of iSCSI, but I'm concerned about its real-world performance and how taxing it would be on the server itself. Any decrease in performance from our existing 4Gbit FC would be unacceptable. I'm also concerned about the learning curve involved in setting up an iSCSI SAN, since my networking skills are fairly basic -- I really don't get into the advanced functions of our managed switches, and the one time I tried to set up link aggregation, things went horribly wrong! Direct-attached FC sounds a lot simpler.
    I'd get a PCIe 4-port Gigabit Ethernet card from Small Tree and use our existing SMC TigerStack II managed Gigabit switch to aggregate the ports, unless anyone thinks a new dedicated switch would be better. I'd likely buy the ATTO iSCSI initiator to make everything happen. I suppose if I wanted to put the old FC RAID unit on the SAN, I could use the
    Any opinions, suggestions, or links to benchmarks would be appreciated.
    Thanks

    Thanks for the reply, MrHoffman -- I appreciate your insights. Just to be clear, I'm currently using a 4Gbit FC RAID system directly connected to an Xserve with an Apple FC card, so there's no SAN.
    Just to be clear, there's a SAN here. You just don't have a SAN switch, based on your description.
    I'm sharing out the volumes on that RAID unit via AFP and SMB over the GbE network, so that's really where the bottleneck exists
    That's typical, and why I pointed to the GbE as the bottleneck. (If you listen very carefully to the server, you can hear the little skidding sounds as each of the SAN packets decelerates onto the GbE.)
    Ultimately, I would like to create a small SAN, if only to allow my backup server direct access to the main network volumes for D2D backups, and potentially to allow certain client computers direct access to the volumes. It would be far easier to do with iSCSI than with FC, since as you said, I could use my existing network infrastructure.
    That's a PCI-X or PCIe-class Mac, the Xsan software, a switch, and another connection into your existing (yes, you have one) SAN. As for the array, the prices on those range from Not Too Exorbitant to Oh My Aching Wallet. Used gear (where you can find it) can be a decent investment when you're on a budget.
    Xsan has a fixed price, and the SAN switches tend to show up on the used market; they're common in the enterprise space, and a 4 Gb SAN switch is not even remotely new gear.
    If you need Big Cheap Storage, then a Direct Attached Storage (DAS) approach will be your cheapest option. (You almost have a DAS configuration now.) A PCI-X or PCIe controller or a RAID controller connected out to a Big Dumb Disk Array, err, a JBOD, or into a Big Not-So-Dumb RAID Array. If you have a controller or an open slot in your Xserve box.
    I'm leaning towards maybe just adding another 16-drive JBOD unit to the existing FC system to meet our short-term storage needs. That doesn't address my desire to get a dual controller unit. But really, how often do RAID controllers fail?
    Um, how often? Usually only when you have no current backups and a deadline, in my experience.

  • UCS-B: iSCSI performance with M51KR-B BCM57711 vs Palo M81KR

    Cisco pushes M81KR card saying that there's no reason to use anything else. What about systems with iSCSI storage (ESXi boots from local drive, VMs are on iSCSI storage, no iSCSI boot)? Is there any performance advantages to using BCM57711 card? It has iSCSI TOE, while M81KR does not, isn't this enough of a reason to use BCM57711 for iSCSI environments?

    Roman,
    You are correct that the current generation of M81KR VIC does not have the TOE for iSCSI as the broadcom card does.  However our adaptors are designed with adequate resources to provide similar performance. 
    Let me see if I can dig up some performance metrics so you can compare.
    Regards,
    Robert

  • Iomega StorCenter ix4-200d iSCSI performance

    Hi there, just looking for some help here. We have an Iomega StorCenter ix4-200d device that was not in use, and I tried to turn it into a drive for storing media files and moving them off the file server. I created an iSCSI disk and connected it to a Windows 2008 R2 server using the iSCSI Initiator; the Iomega box comes with two network interfaces that were previously bonded, and the connection was quick and easy to set up. The problem I am having is performance. When transferring small files the transfer is quick and effective, but if the files exceed 1GB, Windows Explorer crashes and the OS freezes. I have been monitoring disk usage and it is horribly high when transferring those files, taking more than 10 minutes for a 1.5GB transfer; the LAN is 1Gb. I read about creating dedicated VLANs for iSCSI traffic, but we did not have that before and the drive was previously used as an iSCSI backup solution, before another solution now in place replaced it. Any help or thoughts are more than welcome. Juan

    GUS_Lenovo, thanks, your comment was really helpful. Can someone else let us know if they are having this problem? The setup is very simple: a VMware ESXi 5.5 virtual machine running 2008 R2, connected to an external Iomega StorCenter ix4-200d drive using the native Windows iSCSI initiator (passthrough). I disconnected the target from the troubled server and connected the iSCSI disk to a test server with the same setup (VMware, 2008 R2, connected to the physical iSCSI device using the Windows native initiator, no roles on it). I double-checked that only the test server was connected to it; same symptom. Transfers of files up to 500MB are quick and effective, but if the files to transfer are 1GB or more the transfer halts and the Explorer process on the VM crashes. Not very effective for now. Bandwidth?? Any help very much appreciated.

  • OEL5 iSCSI performance improvement

    Hi
    I've just installed the latest OEL5 updates on one of our servers which is used for disk-to-disk backup. It was previously running kernel version 2.6.18-92.1.6.0.2.el5 from June 2008 but it is now running 2.6.18-92.1.13.0.1.el5. It uses iSCSI to mount drives on a Dell MD3000i storage array.
    I've always had performance problems on this server - I could only get around 20Mbyte/sec throughput to the storage array. However the latest updates have improved this enormously - I now get over 90Mbytes/sec and the backups go like the wind.
    I've looked through the changes but couldn't find anything obvious - does anyone know why I got this big performance win?
    Dave

    Customers generally see performance improvements when upgrading to 9iAS. How the upgrade process works is explained in the documentation at http://otn.oracle.com/docs/products/ias/index.html

  • Appalling iSCSI performance

    Hi all,
    I have a severe performance issue with iSCSI targets hosted on Solaris 10, and I don't know how to diagnose or improve it.
    The setup is a solaris server with a zpool on external storage.
    To test, I created an 8GB block device on the ZFS pool and exported it as an iSCSI target.
    From a Windows desktop machine I get a write performance of 70kB/s (yes, kilobytes, i.e. 560kbit/s).
    Creating a ZFS share on the same pool and transferring the same file via CIFS results in a write performance of 10MB/s (megabytes, i.e. 80Mbit/s), which, given that the desktop's NIC is 100Mb, is pretty close to ideal.
    This was a quick test. I also have Xen VMs whose disks are stored on the Solaris server, and their speed is just terrible.
    (I don't think it's always that bad, but after the server has been up for a while it gets bad.)
    The Xen VMs are attached via a gigabit switch directly to the Solaris box.
    I have no idea how to track this issue down; any pointers gratefully received.

    If you have support with Sun you can try:
    1. Use snoop/wireshark/etc. and capture your iSCSI session while you are transferring data.
    2. Open a ticket and send that in!
    If you are familiar with iSCSI protocol dissection, you can take a look and see if you are seeing lots of iSCSI PDU retransmits or sequencing problems. There may be other forums where you can post parts of the trace and see if anyone can spot more issues.
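    As an illustration, a capture along those lines might look like this on the Solaris target (the interface name e1000g0 and the output path are assumptions):

    # Hypothetical sketch: capture iSCSI traffic (TCP port 3260) to a file for later analysis
    snoop -d e1000g0 -o /var/tmp/iscsi.snoop port 3260
    # open /var/tmp/iscsi.snoop in Wireshark to inspect iSCSI PDUs, retransmits, sequence problems, etc.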

  • Airdisk performance is abysmal

    This morning I attempted to copy 2GB of files (my ~/Documents folder) to a 300GB backup disk, attached via USB 2.0, to my new Airport Extreme (802.11n) base station. When the copy began to take longer than a few minutes, I presumed something must be wrong and began doing some tests.
    After spending a number of hours testing, I feel I have conclusively proven that the AirDisk feature is implemented so poorly that in its current form, its usefulness is severely limited. The test results are here: http://www.tnpi.net/wiki/AiportExtreme802.11n
    I have since read the 802.11n review and comments on Macintouch and found that others have noticed similar performance issues with Airdisk as well. Write performance is simply awful. Since Airdisk is the feature I bought the Airport(n) for (I have two other Airports already) I expect to be returning it unless a resolution for this arrives very, very soon.
    (2) x dual G5, PowerBook G4 (x5), iMac Intel 20", iMac 24"   Mac OS X (10.4.8)  

    Not great? They are not even "not good." Folks with home LANs (and who doesn't have one these days?) are already thinking "Leopard Time Machine" drive for these things.
    There will be much disappointment if these little machines can't be upgraded to achieve "decent" levels of performance. If you achieved 40mbit/s, then it was almost certainly a fairly small movie file that was cached.
    I wish I had seen the Macintouch report before I bought, was disappointed by, and then started testing my Airport n.
    http://www.macintouch.com/reviews/airportn/

  • Disk IO performance fine from LiveCD, abysmal after install

    Hi everyone,
    I installed Solaris 11.1 from the live CD, and had the zpools mounted while running the live CD. Doing a scrub on both zpools the IO performance was fine, especially on the pool with a single SSD.
    But once installed and booted from the hard disks, the IO performance was abysmal: < 5MB/s.
    The system had been running the previous Oracle Solaris version, 11 11/11.
    There are 2x 1000GB off-the-shelf disks in there (rpool) and one 256GB SSD (ssdpool, hosting virtual disk images), with 8GB RAM. The system was running fine before the upgrade.
    Any idea where I should be looking for the cause of the performance drop?
    Günther

    Hi Gunther,
    I saw your reply on the other thread, but I think we should rule out the following known problem, which is
    related to poor SSD performance and may explain your overall poor system performance:
    Bug 15826358 - SUNBT7185015 Massive write slowdown on random write workloads due to SCSI unmap
    The workaround is this:
    In /etc/system add the following line:
    set zfs:zfs_unmap_ignore_size=0
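    For illustration, applying and checking the workaround might look like this (the mdb check is an assumption about the tunable's symbol name):

    # Hypothetical sketch: append the tunable to /etc/system, then reboot for it to take effect
    echo 'set zfs:zfs_unmap_ignore_size=0' >> /etc/system
    # after the reboot, the current value can be inspected with the kernel debugger
    echo 'zfs_unmap_ignore_size/D' | mdb -k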
    Please apply the workaround and let us know if this helps.
    Thanks, Cindy

  • ISCSI targets for the Mac

    I setup a VMware vSphere 4 Server with RAID 10 direct-attached storage and 3 virtual machines:
    - OpenSolaris 2009.06 dev version (snv_111b) running 64-bit
    - CentOS 5.3 x64 (ran yum update)
    - Ubuntu Server 9.04 x64 (ran apt-get upgrade)
    I gave each virtual machine 2 GB of RAM and a 32 GB virtual drive, and set up a 16 GB iSCSI target on each (the two Linux VMs used iSCSI Enterprise Target 0.4.16 with blockio). VMware Tools was installed on each. No tuning was done on any of the operating systems.
    I ran two tests for write performance - one on the server itself and one from my MacBook Pro (10.5.7) connected via Gigabit (mtu of 1500) iSCSI connection using globalSAN 3.3.0.43.
    Here’s what I used on the servers:
    time dd if=/dev/zero of=/root/testfile bs=1048576k count=4
    and the Mac OS with the iSCSI connected drive (formatted with GPT / Mac OS Extended journaled):
    time dd if=/dev/zero of=/Volumes/test/testfile bs=1048576k count=4
    The results were very interesting (all calculations using 1 MB = 1,048,576 bytes).
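    (For reference, the arithmetic behind these figures: each dd run writes bs × count = 1 GiB × 4 = 4096 MB, so a run that time(1) reports as taking, say, 47.6 seconds works out to 4096 MB / 47.6 s, or roughly 86 MB/s. The elapsed time here is illustrative, not taken from the tests.)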
    For OpenSolaris, the local write performance averaged 86 MB/s. I turned on lzjb compression for rpool (zfs set compression=lzjb rpool) and it went up to 414 MB/s (since I'm writing zeros). The average performance via iSCSI was an abysmal 16 MB/s, even with compression turned on (with it off, 13 MB/s).
    For CentOS (ext3), local write performance averaged 141 MB/s. iSCSI performance was 78 MB/s (almost as fast as local ZFS performance on the OpenSolaris server when compression was turned off).
    Ubuntu Server (ext4) had 150 MB/s for the local write. iSCSI performance averaged 80 MB/s.
    One of the main differences between the three virtual machines was that the iSCSI target on the Linux machines used partitions with no file system. On OpenSolaris, the iSCSI target created sits on top of ZFS. That creates a lot of overhead (although you do get some great features).
    Since all the virtual machines were connected to the same switch (with the same MTU), had the same amount of RAM, used default configurations for the operating systems, and sat on the same RAID 10 storage, I’d say it was a pretty level playing field.
    At this point, I think I'll be using Ubuntu 9.04 Server (64-bit) as my iSCSI target for Macs.
    Has anyone else done similar (or more extensive) testing?

    I had a lot of trouble with SimCity4 on my iMac. It became such a headache that I returned it to CompUSA. It ran very choppily and crashed repeatedly when the city began to develop. My system FAR exceeds the system requirements for the game, and after some online research I discovered that I am not the only person to have this trouble with SimCity running on 10.4. I have also read about problems concerning The Sims 2. Some of what I have read indicates that 10.3 runs the games fine, but 10.4 causes them to crash. I don't know if this is the case, but I do know that I am now very wary of dropping $50 on a game that may not perform on my computer as it claims to on the box. Some people trying to run games are talking about waiting for Mac OS updates that will allow them to run more smoothly.
    I would check out what gamers are saying before buying anything
    http://www.macosx.com/forums/showthread.php?t=226286

  • ORACLE PROCESS의 DISK I/O PERFORMANCE CHECK

    Product: ORACLE SERVER
    Date written: 2003-06-27
    What to do when an Oracle process uses I/O excessively
    =======================================================
    When heavy I/O causes database performance to drop,
    you can track down the cause as follows.
    First, check whether async I/O, which speeds up I/O, is enabled.
    Async I/O is provided at the hardware level and allows more than one
    I/O operation to be in flight at a time.
    SVRMGRL or SQLDBA> show parameter asyn
    NAME TYPE VALUE
    async_read boolean TRUE
    async_write boolean TRUE
    If the values above are false, confirm that the hardware provides async I/O,
    then set them to TRUE in $ORACLE_HOME/dbs/initSID.ora
    and restart the instance.
    (If async I/O is not available, you can arrange for one DBWR process per OS channel;
    increasing db_writers is another option to consider.)
    The second method is to check the I/O of each datafile, find the datafiles with
    frequent I/O, and move them to another disk or move their tables to a different datafile.
    When, in output like the following, each datafile's access counts are similar to the others',
    the data is well distributed and there is no I/O bottleneck.
    In the example below, reads are concentrated on datafiles 6 and 7.
    If you want to improve I/O speed, it is a good idea to find the frequently read tables
    and move them to a datafile on a different disk.
    SQL> select file#, phyrds, phywrts from v$filestat;
     FILE#     PHYRDS    PHYWRTS
         1      61667      26946
         2       2194      58882
         3       1972        189
         4        804          2
         5       7306      13575
         6     431859      21137
         7     431245       3965
         8        307         19
    Finally, you can identify the sessions doing heavy I/O and check what work they are doing.
    Since this gives you the session ID, look at that session's SQL statements and
    tune them into SQL that does less I/O.
    (You can use tkprof to check the execution plan and elapsed time.)
    SQL> select sid, physical_reads, block_changes from v$sess_io
       SID  PHYSICAL_READS  BLOCK_CHANGES
         1               0              0
         2               0              0
         3               0              0
         4           15468            379
         5              67              0
         6               0              6
         7               1            105
         8            2487           2366
         9              61             14
        11             311             47

    I have seen slow iSCSI performance, but in all cases it was already slow at the OS level. Your measurements indicate, however, that this is not the case and that the performance is slow only from within the guests when iSCSI disks are used.
    Two thoughts:
    - Try disabling Jumbo frames. They are not standardized. While incompatible Jumbo frames typically result in a total loss of communication, there might be an issue with the block sizes. Your dd tests could have been fast because of the 4K block sizes you use, but the iSCSI initiator of VB may use a different block size which does not work well with Jumbo frames.
    - Test the iSCSI with dd a little more. Use a file created from /dev/random (you can't use /dev/random directly as it is dead slow) instead of /dev/zero to avoid any interference from possible optimizations along the way. Test with different block sizes, with and without Jumbo frames; a sketch of such a test loop follows the table below. What I typically get (w/ Jumbo frames) is:
    bs (bytes)   OSOL    AR
    512          14:43   9:13
    4096          1:57   1:44
    8192          1:18   1:09
    16384         1:14   1:06
    32768         1:08   1:04   <--- sweet spot
    65536         1:08   1:08
    131072        1:14   1:11
    1048576       1:38   1:32
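    As mentioned above, a sketch of such a test loop might look like this (the paths, the 1 GiB file size, and the use of /dev/urandom to pre-generate the data are assumptions):

    # Hypothetical sketch: pre-generate random data once, then write it to the
    # iSCSI-backed volume with various block sizes, timing each run
    dd if=/dev/urandom of=/var/tmp/random.dat bs=1048576 count=1024
    for bs in 512 4096 8192 16384 32768 65536 131072 1048576; do
        echo "bs=$bs"
        time dd if=/var/tmp/random.dat of=/path/to/iscsi-volume/testfile bs=$bs
    done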
    Good luck,
    ~Thomas

  • Basic starting point for performance monitoring of Hyper-V infrastructure

    I am new to this topic and cannot find the info, please help.
    In the past, there were a series of article which describe how to monitor Hyper-V through performance counters. E.g.
    http://blogs.msdn.com/b/tvoellm/archive/2009/04/23/monitoring-hyper-v-performance.aspx
    http://blogs.msdn.com/b/tvoellm/archive/2008/05/09/hyper-v-performance-counters-part-two-of-many-hyper-v-hypervisor-counter-set.aspx
    http://blogs.msdn.com/b/tvoellm/archive/2008/05/09/hyper-v-performance-counters-part-three-of-many-hyper-v-logical-processors-counter-set.aspx
    http://blogs.msdn.com/b/tvoellm/archive/2008/05/12/hyper-v-performance-counters-part-four-of-many-hyper-v-hypervisor-virtual-processor-and-hyper-v-hypervisor-root-virtual-processor-counter-set.aspx
    http://blogs.msdn.com/b/tvoellm/archive/2008/09/29/hyper-v-performance-counters-part-five-of-many-hyper-vm-vm-vid-numa-node.aspx
    http://blogs.msdn.com/b/tvoellm/archive/2009/12/18/hyper-v-performance-faq-r2.aspx
    http://blogs.msdn.com/b/tvoellm/archive/2010/01/08/hyper-v-iscsi-performance-numbers.aspx
    However, they are for Windows Server 2008. Is there a more recent article on the same topic for Windows Server 2012 and 2012 R2 Hyper-V? Apart from this, is there any MS article which describes the suggested alert thresholds for these performance
    counters? Thanks in advance.

    All of the guidance from TonyV that you found still applies.  And is still very relevant.
    And MSFT has never given guidance in regards to any performance counters that I can ever recall.
    Brian Ehlert
    http://ITProctology.blogspot.com
    Learn. Apply. Repeat.
    Disclaimer: Attempting change is of your own free will.

  • Filter hud performance profiling results

    I was really curious why it took so long for the filter hud to open and I think I may have at least a partial answer. I used the awesome Sampler and fs_usage tools that come with the developer kit. I started profiling and pushed the filter button. Here's what I found:
    1) A bunch of database access happens in the main thread. When you push the filter hud button you lose control of the application (SBOD) until the query is complete. I'm guessing it's asking the database which keywords exist in the currently selected project so it can make filter buttons for them. Why does a query of 600 images take 10 seconds...
    2) Whenever you push the filter hud button thousands of little seeks and reads happen on the hard disk. I'm assuming this is the bottleneck.
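    For reference, the command-line equivalents of that profiling session might look like this (the use of the sample(1) CLI rather than the Sampler GUI, the 10-second duration, and the output path are assumptions):

    # Hypothetical sketch: log Aperture's file I/O while pressing the filter button,
    # and take a CPU sample of the stalled application
    sudo fs_usage -w -f filesys Aperture
    sample Aperture 10 -file /tmp/aperture-sample.txt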
    Is this interesting to anyone? It's wandering off the standard subject matter a bit. However, performance is abysmal and knowing is half the battle!
    Dual 1.8 G5   Mac OS X (10.4.3)   1GB RAM, Sony Artisan Monitor, Sony HC-1 HD Camera

    I've dissected the SQLite database that Aperture uses. My current best guess is that either Aperture is doing something wrong with SQLite or it's doing extra queries that I don't understand. I recreated the database query that gives you a list of keywords for all images in the working set. It only took between 1/4 and 1/2 second to execute. Opening the filter hud in Aperture with the same working set of images takes over 3 seconds.
    I've posted more detailed information on my site. It includes an overview of the database structure.
    http://www.mungosmash.com/archives/2005/12/theapertureda.php
    This is good news to me. It means the Aperture guys only have themselves to blame and should be able to speed it up considerably since the db is fast.
    Dual 1.8 G5   Mac OS X (10.4.3)   1GB RAM, Sony Artisan Monitor, Sony HC-1 HD Camera

  • Mirror between FC and iscsi

    Is that type of mirror possible? We are currently running a 2 node cluster with NW 6.5 SP8 NSS volumes connected to an AX100 fiber san, and we're planning to migrate to OES 2 Linux VMs connected to an EQ PS5000 iscsi san. If I can mirror the volumes first, then our migration downtime should be reduced quite a bit.
    If this scenario is possible, my next question is, will users see a performance impact by having a mirror with iscsi between the time we mirror and the time we migrate? I know iscsi performance isn't too good on NW. I imagine the initial mirror setup would take awhile if iscsi is slow, too. Thanks for any info.
    Tim

    Update: I set up a test cluster volume on the AX100 fiber san and created a mirror on the EQ iscsi san and it worked like a champ. The documentation was a bit confusing, so there was a hiccup when I tried to create a partition on the EQ first, but I blew away the partition and then was able to find the free space on the device.
    I tested upload speeds from my desktop to both the new mirrored volume and a non-mirrored san volume on the same server and got almost exactly the same speed on a 547 MB upload.

  • Slow iSCSI performance on 7310

    We just got a Sun 7310 cluster, the 10TB configuration with 2x write SSDs and 1x read SSD. We configured the 7310 as a single stripe (for testing only; we will later change to mirror) and ran several NFS and iSCSI tests to find peak performance. All tests were done on Solaris 10 clients. While the NFS tests were great, peaking at around 115MB/s (GigE speed), we were unable to get iSCSI performance greater than 88MB/s peak. We tried playing with the iSCSI settings on the 7310, like WCE, etc., but were unable to get better results.
    I know we could get better performance, as seen with the NFS tests. We were going to buy 10Gig interfaces, but if we can't push iSCSI past 88MB/s per client it won't make sense to buy them. I would really appreciate it if someone could point us in the right direction as to what could be changed to get better iSCSI performance.
    Eli

    The iSCSI LUNs are set up in a mixed mode, some 2k/4k and 8k. The reason for such a small block size (and correct me if I am wrong) is that all the ZFS tuning guides mention trying to match the DB block size, and these LUNs are going to be used by an Informix database which has some 2k/4k/8k DB spaces, so I was trying to match the DB block size. (But for restores this might slow things down?)
    After testing all kinds of OS/Solaris 10 tunings, the only thing that improved performance was changing the number of sessions to 4 by running "iscsiadm modify initiator-node -c 4".
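    For reference, making and verifying that change might look like this (the verification step is an assumption, not from the post):

    # Command from the post, plus a hypothetical verification step
    iscsiadm modify initiator-node -c 4    # allow up to 4 sessions per target
    iscsiadm list initiator-node           # should report "Configured Sessions: 4"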
    We are using the 4 built-in NICs: 1 and 2 are set up in an LACP group and we then use VLAN tags (jumbo frames are disabled), while 3 and 4 are used for management on each cluster node. We were wondering: if we add a dual 10Gig card, will the iSCSI performance be better/faster? What is the best performance we can expect on a single client with 10Gig? Why a single client? Because we need to speed up the DB restore (we are using NetBackup), which only runs on a single client at a time.
    With the number of sessions now changed to 4 we get around 120-130MB/s; since it's only a 1Gig link we are not expecting any better speeds.
    Thanks for your help.
