Nexenta FC Multipath Performance Issue
I am hoping that someone can assist me here. I have a Dell 2970 connected via four fibre channel connections to a Xyratex unit. The Xyratex is presenting each of its 48 drives individually as a single disk to the head unit. I was able to get Nexenta to see the drives multipathed across all four connections to the Xyratex. As you can see below, Nexenta shows the disk as having four paths:
root@nexenta-corpclsan:/volumes# luxadm display /dev/rdsk/c7t60050CC000F00C2E000000000000005Bd0s2
DEVICE PROPERTIES for disk: /dev/rdsk/c7t60050CC000F00C2E000000000000005Bd0s2
  Vendor:               XYRATEX
  Product ID:           F5404E
  Revision:
  Serial Num:           00300B20
  Unformatted capacity: 707868.000 MBytes
  Write Cache:          Enabled
  Read Cache:           Enabled
  Minimum prefetch:     0x0
  Maximum prefetch:     0xffff
  Device Type:          Disk device
  Path(s):

  /dev/rdsk/c7t60050CC000F00C2E000000000000005Bd0s2
  /devices/scsi_vhci/disk@g60050cc000f00c2e000000000000005b:c,raw
   Controller           /dev/cfg/c2
    Device Address      24000050ccf00b20,0
    Host controller port WWN  21000024ff2fecf0
    Class               primary
    State               ONLINE
   Controller           /dev/cfg/c3
    Device Address      22000050ccf00b20,0
    Host controller port WWN  21000024ff2fecf1
    Class               primary
    State               ONLINE
   Controller           /dev/cfg/c4
    Device Address      23000050ccf00b20,0
    Host controller port WWN  2100001b3209b6de
    Class               primary
    State               ONLINE
   Controller           /dev/cfg/c5
    Device Address      21000050ccf00b20,0
    Host controller port WWN  2101001b3229b6de
    Class               primary
    State               ONLINE
The problem I am seeing is slow performance to the Xyratex, and Nexenta is only using one path rather than load balancing across all four. I installed bonnie++ and, while it was running, switched between the controllers on the Xyratex; the only controller showing any significant traffic was controller 0, path 1.
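In case it is useful, I believe the per-path access states and the active load-balance policy can also be checked with mpathadm against the same LUN, roughly like this:

# should report the current load-balance policy and the access state of each path
# for the same LUN shown in the luxadm output above
mpathadm show lu /dev/rdsk/c7t60050CC000F00C2E000000000000005Bd0s2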
The reason I started investigating this is that we have a small cluster of 4 VMware servers connected via NFS to the Nexenta, using 4 different volumes, and we are seeing horrible performance when building virtual machines. For example, I can start a Windows install on a thin-provisioned 100GB disk (which with thin provisioning amounts to maybe a 4GB file at most on the NFS mount), come back 40 minutes later, and it might still be copying files to the new disk. On the flip side, I connected the same cluster to a plain Ubuntu NFS server and the same Windows VM configuration was fully installed and rebooted in around 10 minutes. Both tests went over a single 1G link; the only major difference is that the Ubuntu server is about 5 switches away from the cluster while the Nexenta is on the same switch.
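For what it is worth, one thing I plan to watch while a VM build is running is per-vdev activity on the pool, something like the following (the pool name here is just a placeholder for my actual data pool):

# print pool-wide and per-vdev read/write throughput every 5 seconds during a VM build
# ("tank" is a placeholder pool name)
zpool iostat -v tank 5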
Any suggestions? I would have tested performance to a local disk on the Nexenta itself, to make certain it wasn't Nexenta that is the problem (which I do not believe, since we have another Nexenta connected to an HP array that lets us build VMs easily, although according to VMTurbo we see high latency to the storage when we stress test on the VMs themselves), but Nexenta won't let me create a volume on the syspool disk, at least not in the NMC interface.
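What I can do instead is write directly to a folder on the FC-backed data pool from the Nexenta shell, which takes NFS out of the picture entirely. A rough sketch (the folder path is a placeholder, and /dev/zero will compress to nothing if compression is enabled on that folder, so the number would only be meaningful with compression off):

# write 4 GB locally to the FC-backed pool, bypassing NFS
# ("/volumes/vol01/test" is a placeholder for an actual folder on the data pool)
dd if=/dev/zero of=/volumes/vol01/test/ddtest.bin bs=1024k count=4096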
From my testing with bonnie++, dd, and iozone, write throughput across all four ports on the controllers is around 3 to 5MB/s and reads are around 50MB/s, which on 4Gb links is, in my opinion, horrible. These are QLogic cards using the native drivers in Nexenta, so I would expect the system to be very fast, especially writing to a raidz2 of 16 drives. The only change I made was to /kernel/drv/scsi_vhci.conf, so that it would recognize the drive array and set up multipathing across the ports.
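For reference, the entry I added to /kernel/drv/scsi_vhci.conf was along these lines (reconstructed from memory, so the exact strings may not be verbatim; the vendor ID is padded to 8 characters, followed by the product ID):

# excerpt from /kernel/drv/scsi_vhci.conf (reconstructed, may not match my file exactly)
load-balance="round-robin";
auto-failback="enable";

# tell scsi_vhci to treat the Xyratex LUNs as a symmetric (active/active) array
device-type-scsi-options-list =
    "XYRATEX F5404E", "symmetric-option";
symmetric-option = 0x1000000;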