We are just testing Nexenta as iSCSI storage for VMware.
Nexenta box: Nexenta 3.1.2 - Xeon 5620 - 96GB RAM - 4x SAS 15k RPM - Intel X520 - SC826E1 (single expander)
ESXi box: ESXi 5 - Xeon 5645 - 12GB RAM - Intel X520
The 10Gbit networking is using jumbo frames (MTU 9000).
For read operations we can see 900-1120MB/s without any problem - the RAM cache effect is huge. Read IOPS are around 20k-60k depending on test type.
What is questionable is the write performance.
During various tests we get the following write performance:
Striped mirrors - write ca. 130MB/s
RaidZ (RAID5 equivalent) - write ca. 220MB/s
No redundancy - write ca. 168MB/s
Nexenta is in default config - fresh install. Only MTU has been changed.
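For reference, the jumbo frame change was done roughly like this - the link name (ixgbe0) and the ESXi vSwitch/portgroup names below are just placeholders, adjust them to your own config:

  # Nexenta / illumos side: set and verify MTU on the 10GbE link
  dladm set-linkprop -p mtu=9000 ixgbe0
  dladm show-linkprop -p mtu ixgbe0

  # ESXi 5 side: the vSwitch and the iSCSI vmkernel portgroup (example names)
  esxcfg-vswitch -m 9000 vSwitch1
  esxcfg-vmknic -m 9000 iSCSI-vmk

  # verify end-to-end from the ESXi shell (8972 = 9000 minus IP/ICMP headers)
  vmkping -d -s 8972 <nexenta-ip>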
We don't really understand those write results. 1) How should we test pure write performance - disable sync? Disable it on the zpool or the zvol? 2) What do you suggest to get write performance of 600MB/s or more?
RE: 10Ge performance - Added by Jeff Gibson about 1 year ago
Hi Peter, we have a similar config except we're using a 2vdev 5disk RaidZ SSD pool. You don't say what size blocks you're testing with, but our pool maxes out at 500-700MB/s using 1MB Sequential blocks (keeping writeback cache off and sync standard) but gets about 240-280MB/s on my standard 8k blocks w/ 60% rand & 65% read qd=32.
To test purely the most that your system could write to those drives, you could set sync=disabled on both the pool and the zvol, but if you've left the default writeback cache on the zvol I think this achieves the same effect. You might try turning off compression and using dd to copy from /dev/zero to the disks to test local performance without iSCSI overhead. Various use of iostat will also let you know whether your disks are the bottleneck or it's somewhere else (look at the %b column).
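Something like this is what I mean (the pool/zvol names 'tank' and 'tank/vol1' are just placeholders - and remember to put sync and compression back afterwards):

  # for the duration of the test only - disable sync and compression
  zfs set sync=disabled tank
  zfs set sync=disabled tank/vol1      # the zvol behind your iSCSI LU
  zfs set compression=off tank

  # raw local sequential write, no iSCSI involved (1MB blocks, ~16GB written)
  dd if=/dev/zero of=/tank/testfile bs=1024k count=16384

  # in another shell, watch per-disk busy - %b near 100 means the disks are the limit
  iostat -xn 5

  # restore the defaults when done
  zfs inherit sync tank
  zfs inherit sync tank/vol1
  zfs inherit compression tank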
RE: 10Ge performance - Added by Peter Braun about 1 year ago
We added some SAS disks, so here is the actual config:
Xeon 5645
48GB RAM
LSI 9211-4i HBA, IT firmware (latest)
SC826E (single expander, SAS 3Gbit)
Intel X520-DA2
10x Seagate Cheetah 600GB 15k RPM SAS
Fresh install of Nexenta 3.1.2 - apart from enabling jumbo frames (MTU 9000), no tuning, keeping default options.
We configured various datasets and did some testing. We are seeing only a small effect from the added spindles in the pool.
Local write speed to single disk - 165MB/s
Local write speed (10 disk config):
striped mirrors - 500MB/s
raidz2 - 746MB/s
striped raidz1 - 749MB/s
Testing over iSCSI (both Windows and ESXi 5):
striped mirrors - 205MB/s
raidz2 - 284MB/s
striped raidz1 - 291MB/s
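In case it matters for comparison, the three layouts were created roughly like this (c1t0d0 ... c1t9d0 are placeholders for the 10 Cheetahs, real device names will differ, especially under mpxio):

  # striped mirrors (5 x 2-disk mirror)
  zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0 mirror c1t4d0 c1t5d0 \
    mirror c1t6d0 c1t7d0 mirror c1t8d0 c1t9d0

  # single raidz2 vdev over all 10 disks
  zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 c1t8d0 c1t9d0

  # striped raidz1 (2 x 5-disk raidz1 vdevs)
  zpool create tank raidz1 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 \
    raidz1 c1t5d0 c1t6d0 c1t7d0 c1t8d0 c1t9d0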
Testing over NFS v3: on all configs - 135MB/s
We did some research and we are not satisfied with either the local or the iSCSI writes.
Based on this article: http://ketil.froyn.name/blosxom/blosxom.cgi - the author is getting 1000MB/s local write on 10 drives in striped raidz1 with 7.2k RPM drives!
4 disks - raidz2 write performance 200MB/s
10 disks - raidz2 write performance 290MB/s
Is that correct?!
We are looking for help with this case, and ideally for somebody with a similar config to compare against.
We are suspicious about the expander - some drives are connected via mpxio, some via mpt_sas. But unfortunately we don't have the SAS cables to test a direct connection of the drives yet.
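For the record, this is how we check which path each disk is using (standard illumos/Solaris commands, the names in the output will of course differ per system):

  # multipathed (mpxio) logical units and their paths
  mpathadm list lu

  # mapping between plain device names and scsi_vhci (mpxio) names
  stmsboot -L

  # quick disk listing - mpxio devices show up under /scsi_vhci, direct ones under mpt_sas
  format < /dev/null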
Do you think that a SAS2 6Gbit expander would make a difference?
The next step would be a test install of OpenIndiana and CentOS.