Slow iSCSI performance on 7310
We just got a Sun 7310 cluster, the 10 TB configuration with 2x write SSD and 1x read SSD. We configured the 7310 as a single stripe (for testing only; we will later change to mirror) and ran several NFS and iSCSI tests to measure peak performance. All tests were done on Solaris 10 clients. While the NFS tests were great, peaking at around 115 MB/s (GigE line speed), we were unable to get iSCSI performance greater than 88 MB/s peak. We tried playing with the iSCSI settings on the 7310, such as WCE, but were unable to get better results.
I know we could get better performance, as seen with the NFS tests. We were going to buy 10Gig interfaces, but if we can't push iSCSI past 88 MB/s per client it won't make sense to buy them. I would really appreciate it if someone could point us in the right direction: what could be changed to get better iSCSI performance?
Eli
The iSCSI LUNs are set up in a mixed mode, some 2k/4k and some 8k. The reason for such a small block size (and correct me if I am wrong) is that all the ZFS tuning guides suggest matching the database block size, and these LUNs are going to be used by an Informix database that has some 2k/4k/8k dbspaces, so I was trying to match the DB block size. (But for restores this might slow things down?)
After testing all kinds of OS/Solaris 10 tunings, the only thing that improved performance was changing the number of sessions to 4 by running "iscsiadm modify initiator-node -c 4".
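For reference, a minimal sketch of checking and changing that session count on a Solaris 10 initiator (the -c flag is the one mentioned above; exact output labels may vary by update release):

```shell
# Sketch (Solaris 10, run as root): raise the number of iSCSI sessions the
# initiator opens per target from the default of 1 to 4, then verify.
iscsiadm list initiator-node                      # look for "Configured Sessions"
iscsiadm modify initiator-node -c 4               # open 4 sessions per target
iscsiadm list initiator-node | grep -i session    # confirm the new value
```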
We are using the 4 built-in NICs: ports 1 and 2 are set up in an LACP group, and we then use VLAN tags; jumbo frames are disabled. Ports 3 and 4 are used for management on each cluster node. We were wondering: if we get/add a dual-port 10Gig card, will iSCSI performance be better/faster? What is the best performance we can expect on a single client with 10Gig? Why a single client? Because we need to speed up the DB restore (we are using NetBackup), which only runs on a single client at a time.
With the sessions now changed to 4 we get around 120-130 MB/s; since it's only a 1Gig link we are not expecting any better speeds.
Thanks for your help.
Similar Messages
-
Slow/laggy performance on 13'' late 2011 MacBook Pro
I'm experiencing slow-ish and laggy performance from my MacBook Pro, but when I plug in my MagSafe it runs smooth and fast. What should I do to fix this kind of problem? Please reply ASAP! I'm using OS X 10.9.3.
How old is your computer? The battery may need to be replaced.
Battery Cycle Count
Battery Status Menu – Mavericks - About
Battery Testing
I asked the hosts to move your post to the Mavericks community since you are running 10.9.3 -
I've installed Oracle VM Server + Oracle VM Manager 2.2 for test purposes.
On the domU I installed fully virtualized OEL 5.2 x64.
Additionally I attached a hard disk; in vm.cfg I added 'phy:/dev/cciss/c0d1p1,sda,w'.
Copy speed on Dom0 to this disk is 15 MB/s.
Copy speed on DomU (guest OS) to this disk is 30-35 MB/s.
What should I do?
in DOM0:
[root@xen ~]# hdparm -tT /dev/cciss/c0d1
/dev/cciss/c0d1:
Timing cached reads: 21296 MB in 1.99 seconds = 10682.28 MB/sec
Timing buffered disk reads: 344 MB in 3.01 seconds = 114.33 MB/sec
in DOMU:
hdparm -tT /dev/sda
/dev/sda:
Timing cached reads: 23576 MB in 1.99 seconds = 11818.12 MB/sec
Timing buffered disk reads: 60 MB in 3.09 seconds = 19.44 MB/sec
The man page for hdparm suggests that you should repeat the operation several times to get an accurate output.
I tried same command on a virtualized system and got increasing results each time:
# hdparm -tT /dev/hdb
/dev/hdb:
Timing cached reads: 24008 MB in 2.00 seconds = 12030.97 MB/sec
Timing buffered disk reads: 250 MB in 3.12 seconds = 80.21 MB/sec
# hdparm -tT /dev/hdb
/dev/hdb:
Timing cached reads: 24788 MB in 2.00 seconds = 12422.37 MB/sec
Timing buffered disk reads: 360 MB in 3.04 seconds = 118.27 MB/sec
# hdparm -tT /dev/hdb
/dev/hdb:
Timing cached reads: 24488 MB in 2.00 seconds = 12272.63 MB/sec
Timing buffered disk reads: 472 MB in 3.03 seconds = 155.72 MB/sec
# hdparm -tT /dev/hdb
/dev/hdb:
Timing cached reads: 22924 MB in 2.00 seconds = 11487.01 MB/sec
Timing buffered disk reads: 514 MB in 3.27 seconds = 157.40 MB/sec
# hdparm -tT /dev/hdb
/dev/hdb:
Timing cached reads: 24116 MB in 2.00 seconds = 12084.59 MB/sec
Timing buffered disk reads: 550 MB in 3.04 seconds = 181.13 MB/sec
# hdparm -tT /dev/hdb
/dev/hdb:
Timing cached reads: 24260 MB in 2.00 seconds = 12155.17 MB/sec
Timing buffered disk reads: 582 MB in 3.01 seconds = 193.07 MB/sec
# hdparm -tT /dev/hdb
/dev/hdb:
Timing cached reads: 22980 MB in 2.00 seconds = 11512.45 MB/sec
Timing buffered disk reads: 612 MB in 3.12 seconds = 196.05 MB/sec -
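Following the man page's advice, the repeated runs can be averaged mechanically. A small sketch (the sample lines are copied from the runs above; on a real system you would pipe `for i in 1 2 3; do hdparm -t /dev/hdb; done` into the awk instead):

```shell
# Average the "buffered disk reads" figures from several hdparm runs.
runs='Timing buffered disk reads: 250 MB in 3.12 seconds = 80.21 MB/sec
Timing buffered disk reads: 360 MB in 3.04 seconds = 118.27 MB/sec
Timing buffered disk reads: 472 MB in 3.03 seconds = 155.72 MB/sec'
echo "$runs" | awk '/buffered disk reads/ { sum += $(NF-1); n++ }
    END { printf "average: %.2f MB/sec over %d runs\n", sum/n, n }'
```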
Oracle Database Slow Performance
Hi Dear,
We are using Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
Operating System: Microsoft Windows 2000 5.00.2195 with Service Pack 4, with a maximum of 70 connected users at all times. We are facing slow database performance. Kindly help resolve my problem; thanks in advance.
System configuration as below
System : Dell PowerEdge 1800
Processor : 3.00 GHZ
Ram : 2.00 GB
Cordial Regards,
Raheem
We are using the init file below, named "init.ora.1122005225338":
# Copyright (c) 1991, 2001, 2002 by Oracle Corporation
# Cache and I/O
db_block_size=16384
db_cache_size=50331648
db_file_multiblock_read_count=32
# Cursors and Library Cache
open_cursors=600
# Database Identification
db_domain=uma
db_name=orauma
# Diagnostics and Statistics
background_dump_dest=D:\oracle\admin\orauma\bdump
core_dump_dest=D:\oracle\admin\orauma\cdump
timed_statistics=TRUE
user_dump_dest=D:\oracle\admin\orauma\udump
# File Configuration
control_files=("D:\oracle\oradata\orauma\CONTROL01.CTL", "D:\oracle\oradata\orauma\CONTROL02.CTL", "D:\oracle\oradata\orauma\CONTROL03.CTL")
# Instance Identification
instance_name=orauma
# Job Queues
job_queue_processes=10
# MTS
dispatchers="(PROTOCOL=TCP) (SERVICE=oraumaXDB)"
# Miscellaneous
aq_tm_processes=1
compatible=9.2.0.0.0
# Optimizer
hash_join_enabled=TRUE
query_rewrite_enabled=FALSE
star_transformation_enabled=FALSE
# Pools
java_pool_size=67108864
large_pool_size=8388608
shared_pool_size=50331648
# Processes and Sessions
processes=200
# Redo Log and Recovery
fast_start_mttr_target=300
# Security and Auditing
remote_login_passwordfile=EXCLUSIVE
# Sort, Hash Joins, Bitmap Indexes
pga_aggregate_target=25165824
sort_area_size=524288
# System Managed Undo and Rollback Segments
undo_management=AUTO
undo_retention=10800
undo_tablespace=UNDOTBS1 -
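One quick back-of-the-envelope observation on the parameter file above (the values are inlined from the file; point awk at the real init.ora instead):

```shell
# Sum the main SGA pools from the init.ora above: on a 2 GB server this
# instance is only allowed ~168 MB for its pools, so raising db_cache_size
# is a common first tuning step on 9i.
printf '%s\n' \
  'db_cache_size=50331648' \
  'shared_pool_size=50331648' \
  'java_pool_size=67108864' \
  'large_pool_size=8388608' |
awk -F= '{ sum += $2 } END { printf "SGA pools: %d MB\n", sum/1024/1024 }'
```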
U310 Clean Windows 7 Installation
I bought a U310 which suffers from very slow WiFi performance. The problem seems to be known in this forum.
Apart from that, I'd prefer a clean Windows 7 installation without the "Lenovo overhead", using my Win7 Professional license.
I prepared a USB pen drive to be bootable and tried running setup from the stick. Somehow the Windows 7 setup does not recognize any drives or partitions to install on. I guess I have to include the Intel storage driver on my setup USB stick? But on the other hand, shouldn't the normal HDD (500 GB) be recognizable by the setup?
In the Windows storage manager, I see what I suppose is the 32 GB SSD, but it's only shown as 32 GB.
Any hints on how to fully use the 32 GB SSD and install Windows on it?
Thanks
Hi,
the method described below is theoretical; I haven't tried it yet:
If you want to use the SSD cache for installing Windows on it, you have to break up the RAID array on it. After that you should see unallocated space.
If you don't want to use RapidStart, you can force-delete the hibernation partition to gain an extra 8 GB of space. (Note: you have to disable it in the BIOS too.)
Then you should download the Rapid Storage F6 drivers (x86 or x64, based on your future installation architecture) and copy them to the USB drive.
During the Windows install, load the drivers from the disk, and install the system onto the SSD cache drive.
Intra-doc links (can you have too many??)
I have an ebook that will be approximately 500 pages. If I create intra-document links from the Table of Contents to each chapter, and then back-links from each page in every chapter to the TOC, will this slow the performance of the PDF, or does the number of intra-document links have no impact on performance?
For example, if I had 12 chapters, each chapter listed in the TOC would link to the first page of each chapter.
All 500 pages would have a link on the bottom of each page that the user could click and be taken back to the TOC.
Thanks for your help.
I have some tables with over 100 million rows that still perform well, and I'm sure much larger tables are possible. The exact number of rows would vary significantly depending on a number of factors including:
Power of the underlying hardware
Use of the table – frequency of queries and updates
Number of columns and data types of the columns
Number of indexes
Ultimately the answer probably comes down to performance – if queries, updates, inserts, index rebuilds, backups, etc. all perform well, then you do not yet have too many rows.
The best way to rearrange the data would be horizontal partitioning. It distributes the rows into multiple files, which provides a number of advantages, including the potential to perform well with a larger number of rows.
http://msdn.microsoft.com/en-us/library/ms190787.aspx -
ORACLE PROCESS DISK I/O PERFORMANCE CHECK
Product: ORACLE SERVER
Date written: 2003-06-27
What to do when an Oracle process uses I/O excessively
=======================================================
When heavy I/O is degrading database performance, the cause can be
identified as follows.
First, check whether async I/O, which speeds up I/O, is enabled.
Async I/O is provided at the hardware level and allows more than one
I/O operation to be in flight at a time.
SVRMGRL 또는 SQLDBA> show parameter asyn
NAME TYPE VALUE
async_read boolean TRUE
async_write boolean TRUE
If the values above are FALSE, confirm that the hardware supports async I/O,
then set them to TRUE in $ORACLE_HOME/dbs/initSID.ora and restart the instance.
(If async I/O is not available, you can have one DBWR process started per
OS channel; increasing db_writers is also worth considering.)
The second method is to check the I/O on each data file, find the data files
with frequent I/O, and either move them to another disk or move tables into
other data files.
If, in the output below, each data file's access counts are similar to the
other data files', the data is well distributed and there is no I/O bottleneck.
In the following output, reads are concentrated on data files 6 and 7.
If you want to improve I/O speed, it is usually worth finding the frequently
read tables and moving them to data files on another disk.
SQL> select file#, phyrds, phywrts from v$filestat;
FILE# PHYRDS PHYWRTS
1 61667 26946
2 2194 58882
3 1972 189
4 804 2
5 7306 13575
6 431859 21137
7 431245 3965
8 307 19
Finally, find the sessions doing heavy I/O and check what work they are performing.
Since the query below gives you the session ID, examine that session's SQL
statements and rewrite them to reduce I/O.
(You can use tkprof to check the execution plan and elapsed time.)
SQL> select sid, physical_reads, block_changes from v$sess_io
SID PHYSICAL_READS BLOCK_CHANGES
1 0 0
2 0 0
3 0 0
4 15468 379
5 67 0
6 0 6
7 1 105
8 2487 2366
9 61 14
11 311 47
I have seen slow iSCSI performance, but in all cases it was already slow at the OS level. Your measurements indicate, however, that this is not the case here: performance is slow only from within the guests when iSCSI disks are used.
Two thoughts:
- Try disabling Jumbo frames. They are not standardized. While incompatible Jumbo frames typically result in a total loss of communication, there might be an issue with the block sizes: your dd tests could have been fast because of the 4K block size you used, but the iSCSI initiator of VirtualBox may use a different block size that does not work well with Jumbo frames.
- Test the iSCSI disk with dd a little more. Use a file created from /dev/random (you can't use /dev/random directly, as it is dead slow) instead of /dev/zero, to avoid interference from possible optimizations along the way. Test with different block sizes, with and without Jumbo frames. What I typically get (with Jumbo frames) is:
bs OSOL AR
512 14:43 9:13
4096 1:57 1:44
8192 1:18 1:09
16384 1:14 1:06
32768 1:08 1:04 <--- sweet spot
65536 1:08 1:08
131072 1:14 1:11
1048576 1:38 1:32
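The dd test Thomas describes can be sketched roughly like this (all paths are placeholders; the random file is built once, and /dev/urandom stands in for /dev/random, which is too slow to read directly):

```shell
# Create a 1 GiB file of random data once, then time writes to the iSCSI
# disk at several block sizes; repeat with Jumbo frames on and off.
dd if=/dev/urandom of=/var/tmp/rand.bin bs=1048576 count=1024
for bs in 512 4096 8192 16384 32768 65536 131072; do
    echo "bs=$bs"
    time dd if=/var/tmp/rand.bin of=/mnt/iscsi/testfile bs="$bs"
done
```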
Good luck,
~Thomas -
Multithread causing cache line migration between the CPUs
Hi, I have a problem with Sun JVM 1.5.0_15 and the Linux scheduler in an SMP 2.6 kernel: individual threads actually move from processor to processor even though they are in a continuously running state and never sleep, causing cache-line migration between the CPUs. This slows down the performance of multithreaded Java applications, to the point that a single CPU is faster than a dual CPU. Any ideas?
KR
Clóvis
Sun's JVM (HotSpot) doesn't do any thread scheduling; it leaves that up to the operating system. So the OS is choosing to migrate threads. The obvious suspects for slowing down a multithreaded app are lock contention, or possibly garbage collection (multiple CPUs allow more work to get done, which means more heap allocation and more frequent GCs if using the same heap size as a single-CPU box).
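One OS-level workaround worth trying (my suggestion, not something HotSpot 1.5 offers itself) is to pin the JVM to a fixed CPU set with Linux's taskset, so the scheduler cannot migrate its threads across CPUs. The class name and PID below are placeholders:

```shell
# Launch the JVM bound to CPUs 0 and 1 only.
taskset -c 0,1 java -server MyApp
# Or re-bind an already-running JVM by process ID.
taskset -cp 0,1 12345
```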
-
VI_ERROR_IO while viMOVE operation on VXI-MXI-2
Hello. We have trouble with our old PCI-MXI-2 & VXI-MXI-2 cards. OS: WinXP SP3, NI-VXI 3.6, VISA 4.6.2.
Any viMoveOut command issued in the A24 address space returns VI_ERROR_IO status. I read old posts on this forum and tried turning off DMA mode for MXI-2. That removes the error status, but all operations become very slow, unacceptably slow. I also tried switching off SYNC MXI (in regedit, setting UseSyncMXI=0), but it doesn't solve the error-status problem.
Could you tell me how to solve this problem without slowing down performance?
Thank you for your attention.
ABV-
I found one other documented issue related to yours, and the solution was posted as a potential solution, but is not guaranteed to work. It involves adjusting registry keys (which it looks like you are familiar with). Caution should be taken when adjusting registry keys. Again, this is just a step to try that helped one other customer out, so I thought I would offer it to you as a potential solution.
Some MITE-based systems have experienced problems with the MITE chip reaching the maximum number of retries on the PCI bus. To help prevent this problem from occurring, a delay was added before the MITE chip performs a retry on the PCI bus. The amount of time that the MITE waits is determined by the DmaCpuRequestDelay value entered in the Registry. The default value for this setting is 2.
In this instance, it was this value that was causing the slowdown in the transfer rate. By changing the DmaCpuRequestDelay to 0, the transfer was increased to an acceptable level. However, caution should be taken when changing this value. If this value is changed, there is a slight possibility that bus errors might be received during large data transfers, due to the maximum retry limit being reached on the MITE chip.
If this happens, there are two options:
You can raise the DmaCpuRequestDelay up to 1 or back to 2
You can disable DMA on the MITE chip by changing the DisableMiteDma value to 1.
To change the DmaCpuRequestDelay, use the Registry Editor via Start » Run and type regedit. Navigate to the following section of the registry for WIN32: HKEY_LOCAL_MACHINE/Software/National Instruments/NI-VXI
I hope this helps. Have great day!
Gary P.
Applications Engineer
National Instruments
Visit ni.com/gettingstarted for step-by-step help in setting up your system. -
Very very slow file access iSCSI NSS on SLES11/XEN/OES11
Hi,
Like many Novell customers while carrying out a hardware refresh we are moving off traditional Netware 6.5 to OES11 and at the same time virtualising our environment.
We have new Dell PowerEdge 620 servers attached by 10Gig iSCSI to an Equallogic SAN.
We installed SLES with all patches and updates, plus Xen, and then created OES11 SP2 virtual machines; these connect to the NSS volume by iSCSI.
We migrated files from the traditional NetWare server to the new hardware, started testing, and ran into very, very slow file access times.
A 3.5 MB PDF file takes close to 10 minutes to open from a local PC with the Novell Client installed; the same with no client, opening via CIFS. Opening the same file off the traditional NW6.5 server takes 3-4 seconds.
We have had a case open with Novell for almost 2 months but they are unable to resolve.
To test other options we installed VMware ESXi on the internal USB flash drive and booted off that, created the same OES11 VM, connected to the NSS on the SAN, and the same PDF opened in seconds.
The current stack of SLES11/Xen/OES11 cannot be put into production.
Any ideas where the bottleneck might be? We think it is in Xen.
Thanks
Originally Posted by idgandrewg
Waiting for support to tell me what the implications are of this finding and best way to fix
Hi,
As also mentioned in the SUSE forums, there is the option of using the Equallogic Hit Kit. One of the tools, next to the great autoconfigure options it has, is the eqltune tool.
Some of the stuff that I've found important:
- GRO is a known read-performance killer; switch it off on the iSCSI interfaces.
- If possible (meaning you have decent hardware), disable flow control, as it generally offers stability at the cost of performance. If your hardware is decent, this form of traffic control should not be needed.
- To have multipath work correctly over iSCSI, starting with SLES 11 SP1, make sure kernel routing and ARP handling are set correctly (not directly relevant if you only have one 10 Gb link):
net.ipv4.conf.[iSCSI interfaceX name].arp_ignore = 1
net.ipv4.conf.[iSCSI interfaceX name].arp_announce = 2
net.ipv4.conf.[iSCSI interfaceX name].rp_filter = 2
Test if traffic is actively routed over both iSCSI interfaces:
ping -I [iSCSI interfaceX name] [iSCSI Group IP EQL]
-Make sure network buffers etc are adequately set as recommended by Dell (set in /etc/sysctl.conf):
#NetEyes Increase network buffer sizes for iSCSI
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 8192 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.core.wmem_default = 262144
net.core.rmem_default = 262144
-Settings for the /etc/iscsi/iscsid.conf I'm using:
node.startup = automatic # <--- review and set according to environment
node.session.timeo.replacement_timeout = 60
node.conn[0].timeo.login_timeout = 15
node.conn[0].timeo.logout_timeout = 15
node.conn[0].timeo.noop_out_interval = 5
node.conn[0].timeo.noop_out_timeout = 5
node.session.err_timeo.abort_timeout = 15
node.session.err_timeo.lu_reset_timeout = 20 #Default is 30
node.session.err_timeo.tgt_reset_timeout=20 #Default is 30
node.session.initial_login_retry_max = 12 # Default is 4
node.session.cmds_max = 1024 #< --- Default is 128
node.session.queue_depth = 128 #< --- Default is 32
node.session.iscsi.InitialR2T = No
node.session.iscsi.ImmediateData = Yes
node.session.iscsi.FirstBurstLength = 262144
node.session.iscsi.MaxBurstLength = 16776192
node.conn[0].iscsi.MaxRecvDataSegmentLength = 131072 #A lower value improves latency at the cost of higher IO throughput
discovery.sendtargets.iscsi.MaxRecvDataSegmentLength = 32768
node.session.iscsi.FastAbort = No # < --- default is Yes
-Have Jumbo frames configured on the iSCSI interfaces & iSCSI switch.
If you are using multipathd instead of the dm-switch provided with the Equallogic Hit kit, make sure the /etc/multipath.conf holds the optimal settings for the Equallogic arrays.
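Two of the quicker checks above can be sketched as commands (a sketch only; eth1 is a placeholder for your iSCSI-facing interface, and both tools need root):

```shell
# Disable GRO on the iSCSI interface, and apply/verify the buffer settings.
ethtool -k eth1 | grep generic-receive-offload   # check current GRO state
ethtool -K eth1 gro off                          # GRO is a read-perf killer
sysctl -p /etc/sysctl.conf                       # load the buffer settings above
sysctl net.core.rmem_max                         # should now report 16777216
```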
Ever since Xen on SLES 11 SP1 we have been seeing strongly performing virtual servers. We still use 1 Gb connections (two 1 Gb connections for each server, serving up to 180-190 Mb/s).
There could be a difference with the 10 Gb setup, where multipath is not really needed or used (depending on the scale of your setup). One important thing is that the iSCSI switches must be doing their job correctly. But seeing that you've already found better results tuning network parameters on the Xen host seems to indicate that's OK.
Cheers,
Willem -
I've tested the cfimap tag again in the CF 10 Beta and it still has the same poor performance as in CF9.
My mini test:
<cfimap action="open"
server="imap.gmx.net"
username="[email protected]"
password="xxx"
secure="true"
connection="myCon" />
<cfimap action="getheaderonly"
connection="myCon"
folder="ADR"
maxrows="20"
name="myQry" />
<cfimap action="close"
connection="myCon" />
In this case, for 20 items and header-only, it takes approx. 30 seconds!
A tag with that performance cannot be used in any production environment.
Other developers have had the same problem at least since CF9:
http://forums.adobe.com/message/2728550#2728550
Dear Adobe/CF developer team, why can't you speed up this tag?
Kind Regards
Roger
Hi,
Ops Center comes with a utility called OCDoctor (http://wikis.sun.com/display/EM11gOC1/OC+Doctor) that lets you check your satellite/proxy performance, makes sure you meet the prerequisites, and can suggest tuning changes. I'd give it a try first and see if it reports any problems.
OC 11g should be fairly speedy on the right hardware: usually when I've seen it slow down it was on systems without enough RAM (and they were swapping like crazy).
Regards,
[email protected] -
Windows 2012 Nodes - Slow CSV Performance - Need help to resolve my iSCSI issue configuration
I spent weeks going over the forums and the net for any publications and advice on how to optimize iSCSI connections, and I'm about to give up. I really need some help determining whether it's something I'm not configuring right, or maybe an equipment issue.
Hardware:
2x Windows 2012 Hosts with 10 Nics (same NIC configuration) in a Failover Cluster sharing a CSV LUN.
3x NICs Teamed for Host/Live Migration (192.168.0.x)
2x NICS teamed for Hyper-V Switch 1 (192.168.0.x)
1x NIC teamed for Hyper-V Switch 2 (192.168.10.x)
4x NICs for iSCSI traffic (192.168.0.x, 192.168.10.x, 192.168.20.x 192.168.30.x)
Jumbo frames and flow control turned on all the NICs on the host. IpV6 disabled. Client for Microsoft Network, File/Printing Sharing Disabled on iSCSI NICs.
MPIO Least Queue selected. Round Robin gives me an error message saying "The parameter is incorrect. The round robin policy attempts to evenly distribute incoming requests to all processing paths. "
Netgear ReadyNas 3200
4x NICs for iSCSI traffic ((192.168.0.x, 192.168.10.x, 192.168.20.x 192.168.30.x)
Network Hardware:
Cisco 2960S managed switch - Flow control on, Spanning Tree on, Jumbo Frames at 9k - this is for the .0 subnet
Netgear unmanaged switch - Flow control on, Jumbo Frames at 9k - this is for .10 subnet
Netgear unmanaged switch - Flow control on, Jumbo Frames at 9k - this is for .20 subnet
Netgear unmanaged switch - Flow control on, Jumbo Frames at 9k - this is for .30 subnet
Host Configuration (things I tried turning on and off):
Autotuning
RSS
Chimney Offload
I have 8 VMs stored in the CSV. When I try to load all 8 up at the same time, they bog down. Each VM loads very slowly, and when they eventually come up, most of the important services have not started. I have to load them up 1 or 2 at a time. Even then, the performance is nothing like when they load from the host itself (VHD stored on the host's HDD). This is what prompted me to add more iSCSI connections, to see if I could improve the VMs' performance. Even with 4 iSCSI connections, nothing has changed: the VMs still start slowly and services do not load right. If I distribute the load with 4 VMs on Host 1 and 4 VMs on Host 2, the load-up times do not change.
As a manual test of file-copy speed, I moved the cluster resources to Host 1 and copied a VM from the CSV onto the host. The speed starts out around 250 MB/s and then eventually drops to about 50-60 MB/s. If I turn off all iSCSI connections except one, I get the same speed. I can verify from the Performance tab under Task Manager that all the NICs are distributing traffic evenly, but something is limiting the flow. As stated above, I played around with autotuning, RSS and chimney offload, and none of it makes a difference.
The VMs have been converted to VHDx and to fixed size. That did not help.
Is there something I'm not doing right? I am working with Netgear support and they are puzzled as well. The ReadyNas device should easily be able to handle it.
Please help! I've pulled my hair out over this for the past two months and I'm about to give up, ditch clustering altogether, and just run the VMs off the hosts themselves.
George
A few things...
For starters, I recommend opening a case with Microsoft support. They will be able to dig in and help you...
Turn on the CSV Cache, it will boost your performance
http://blogs.msdn.com/b/clustering/archive/2012/03/22/10286676.aspx
A file copy bears no resemblance to the unbuffered I/O a VM does... so don't use that as a comparison; you are comparing apples to oranges.
Do you see any I/O performance difference between the coordinator node and the non-coordinator nodes? Basically, see which node owns the cluster Physical Disk resource... measure the performance. Then move the Physical Disk resource for the
CSV volume to another node, and repeat the same measure of performance... then compare them.
Your IP addressing seems odd... you show multiple networks on 192.168.0.x and also on 192.168.10.x. Remember that clustering only recognizes and uses 1 logical interface per IP subnet. I would triple check all your IP schemes...
to ensure they are all different logical networks.
Check you binding order
Make sure you NIC drivers and NIC firmware are updated
Make sure you don't have IPsec enabled, that will significantly impact your network performance
For the iSCSI Software Initiator, when you did your connection... make sure you didn't do a 'Quick Connect'... that will do a wildcard and connect over any network. You want to specify your dedicated iSCSI network
No idea what the performance capabilities of the ReadyNas is... this could all likely be associated with the shared storage.
What speed NICs are you using? I hope at least 10 Gb...
Hope that helps...
Elden
Hi Elden,
2. The CSV cache is turned on; I have 4 GB dedicated to it from each host. With IOmeter running within the VMs, I do see the read speed jump 4-5x, but the write speed stays the same (which, according to the doc, it should). But even with the read speed that high, the VMs are not starting up quickly.
4. I do not see any difference with IO with coordinator and non coordinator nodes.
5. I'm not 100% sure what you're saying about my IPs. Maybe if I list them out, you can help explain further.
Host 1 - 192.168.0.241 (Host/LM IP), Undefined IP on the 192.168.0.x network (Hyper-V Port 1), Undefined IP on the 192.168.10.x network (Hyper- V port 2), 192.168.0.220 (iSCSI 1), 192.168.10.10 (iSCSI2), 192.168.20.10(iSCSI 3), 192.168.30.10 (iSCSI 4)
The Hyper-V ports are undefined because the VMs themselves have static ips.
0.220 host NIC connects with the .231 NIC of the NAS
10.10 host NIC connects with the 10.100 NIC of the NAS
20.10 host NIC connects with the 20.100 NIC of the NAS
30.10 host NIC connects with the 30.100 NIC of the NAS
Host 2 - 192.168.0.245 (Host/LM IP), Undefined IP on the 192.168.0.x network (Hyper-V Port 1), Undefined IP on the 192.168.10.x network (Hyper- V port 2), 192.168.0.221 (iSCSI 1), 192.168.10.20 (iSCSI2), 192.168.20.20(iSCSI 3), 192.168.30.20 (iSCSI 4)
The Hyper-V ports are undefined because the VMs themselves have static ips.
0.221 host NIC connects with the .231 NIC of the NAS
10.20 host NIC connects with the 10.100 NIC of the NAS
20.20 host NIC connects with the 20.100 NIC of the NAS
30.20 host NIC connects with the 30.100 NIC of the NAS
6. Binding orders are all correct.
7. Nic drivers are all updated. Didn't check the firmware.
8. I do not know about IPSec...let me look into it.
9. I did not do quick connect, each iscsi connection is defined using a specific source ip and specific target ip.
These are all 1-gigabit NICs, which is the reason I have so many; otherwise there would be no reason to have 4 iSCSI connections.
Hi, Everybody
My Mac is getting slower and suddenly applications close. What can I do?
If an iOS device (iPhone, iPad or iPod touch) is badly misbehaving after an update, try setting up the device as new. WARNING!! If you do not have a good backup, you will lose all of your saved data, such as photos. So make sure you have either an iCloud backup or an iTunes backup. Preferably both.
Good, now that you have a backup, go into Settings --> General --> Reset --> Erase All Content and Settings. Do it. It will warn you a few times that you are erasing everything. When the phone comes back after a few minutes, make sure you set it up as a new iPhone. See how the phone works as new. Everything should be snappy and nice. If it's not, you may have a hardware problem and a trip to the Genius Bar is required.
If everything is good, reset the phone again and this time restore from your backup (you have it right? Right?) If you have many apps, it can take a good several hours for them to be restored from the cloud if you are doing this wirelessly. If the phone is misbehaving again it means that your backup is causing the issue. The solution is to set the phone up as new after saving off the data you care about and starting fresh. A pain, but it works.
Hope this helps, and good luck! -
My 5s is a bit slow after upgrading to iOS 8, especially when downloading apps, loading FB pages and web browsing. Help please.
Just a comment:
It would be useful if you had more detail in your question and added appropriate tags.
That would help searches & finding similar content. -
Perfmon AD counters for slow performance
Hello
I have a domain consisting of 6 domain controllers defined across two sites. The subnets that are in use have all been correctly defined within sites and services.
During peak logons we are seeing DNS rejects and authentication failures on one DC. Specifically this dc is called DC2.
There are several events posted to the event log regarding failure of group policy processing. Primarily these are Event 1030 (group policy processing failed) and Event 7011 (DNS timeout).
During this window the DNS snap-in console refuses to respond, leaving DNS unmanageable. Additionally, a DCDIAG from the command prompt fails to run. I'm also unable to force a secure-channel change to a different DC.
There are no events in the DNS logs that point to any errors. I've run perfmon traces on the disk queue: no requests are queueing. Disk latency is not an issue in perfmon either. CPU utilisation is a maximum of 2-3%. There is 2 GB of RAM unused. Disk space is not an issue. I've run counters on DNS and nothing seems to be getting excessively hammered.
Any ideas for further perfmon counters to analyse, or any other suggestions of log files to examine?
C:\Windows\system32>ipconfig /all
Windows IP Configuration
Host Name . . . . . . . . . . . . : ****DC-2*****
Primary Dns Suffix . . . . . . . : ****.****.****.uk
Node Type . . . . . . . . . . . . : Hybrid
IP Routing Enabled. . . . . . . . : No
WINS Proxy Enabled. . . . . . . . : No
DNS Suffix Search List. . . . . . : ****.****.****.uk
Ethernet adapter Prod:
Connection-specific DNS Suffix . :
Description . . . . . . . . . . . : vmxnet3 Ethernet Adapter
Physical Address. . . . . . . . . : 00-50-56-81-78-26
DHCP Enabled. . . . . . . . . . . : No
Autoconfiguration Enabled . . . . : Yes
IPv4 Address. . . . . . . . . . . : 164.134.113.146(Preferred)
Subnet Mask . . . . . . . . . . . : 255.255.255.192
Default Gateway . . . . . . . . . : 164.134.113.129
DNS Servers . . . . . . . . . . . : 164.134.113.146
164.134.113.145
Primary WINS Server . . . . . . . : 172.17.5.250
Secondary WINS Server . . . . . . : 172.31.204.50
NetBIOS over Tcpip. . . . . . . . : Enabled
Ethernet adapter UKCSN:
Connection-specific DNS Suffix . :
Description . . . . . . . . . . . : vmxnet3 Ethernet Adapter #2
Physical Address. . . . . . . . . : 00-50-56-81-50-BA
DHCP Enabled. . . . . . . . . . . : No
Autoconfiguration Enabled . . . . : Yes
IPv4 Address. . . . . . . . . . . : 172.21.64.241(Preferred)
Subnet Mask . . . . . . . . . . . : 255.255.252.0
Default Gateway . . . . . . . . . :
NetBIOS over Tcpip. . . . . . . . : Enabled