What to expect...
Basically this is a form of cross-post (http://nexentastor.org/boards/7/topics/7191), sorry for that, but I hope to get more info here than in the newbie forum.
Key facts - specs for the VM:
- 1/4 core Opteron 6128 vCPU
- 16 GB RAM
- IBM M1015 flashed as LSI in IT mode, passed through to the VM
- Dual E1000/VMXNet3 virtual NICs
- Drives:
  - 1x 120 GB SSD for the ESX host, with a single datastore; 20 GB virtual disk for the Nexenta installation
  - 2x 10 GB virtual disks for the Nexenta log (originally 1x Intel 510 120 GB as log device, before I noticed SATA errors; now another 10 GB virtual disk)
  - 5x Seagate 2 TB 7200 RPM SAS 6G as 2x2 mirrors + 1 hot spare

I don't need maximum storage space, I need enough performance to handle two ESX hosts with only 3 or 4 VMs in parallel.
Performance: With write cache enabled I currently get ~1.3 GB/s local write (dd, 1M blocks, 10/100 GB file) and ~60 MB/s over the network (CIFS, separate VM switch on VMXNet3 NICs, VM-to-VM, Win2k8 -> Nexenta). Not really making me happy.

I tried a 128K block size for dd; it made no difference. Turning compression off on the volume and the share had a massive impact, but in the wrong direction: down to 19 MB/s with extremely slow response (Ctrl-C took about a minute). My two mirrored vdevs can't be that slow :( I really don't understand these numbers any more.

I also tried accessing the VM from another ESX W2k8 VM via CIFS: the first round (800 MB, single file) was fast, ~100 MB/s, but the second round (a couple of GB) ended up at 7 MB/s, not to mention small files going at around 4 MB/s. Interestingly, the GUI was very slow/unresponsive at that time while prstat showed the box mostly idling.
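One caveat worth checking before trusting the dd numbers: writing from /dev/zero onto a compressed ZFS dataset mostly benchmarks the compressor, not the disks, which could explain both the inflated 1.3 GB/s figure and why everything looks different with compression off. A sketch of a more honest test (the /volumes/data path is hypothetical, substitute your pool's mountpoint):

```shell
# Zeros compress to almost nothing under lz4/gzip, so this measures
# CPU/compression throughput rather than disk throughput:
dd if=/dev/zero of=/volumes/data/zeros.bin bs=1024k count=10240; sync

# For a real sequential-write figure, write incompressible data.
# Generate a random file once, then copy it onto the pool:
dd if=/dev/urandom of=/tmp/random.bin bs=1024k count=1024
dd if=/tmp/random.bin of=/volumes/data/random-copy.bin bs=1024k; sync
```

Comparing the zero-fill number against the random-data number on the same dataset should show how much of the 1.3 GB/s is compression rather than spindles.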
OK, so can anyone help with what performance I should expect? According to http://www.tomshardware.com/charts/enterprise-hard-drive-charts-2010/Throughput-Write-Average,2159.html my drives should manage ~100 MB/s sequential, so two mirrors should be enough to saturate Gigabit Ethernet; even the minimum throughput of ~60 MB/s x2 should be enough, not to mention the SSD in front of those drives. Currently I am testing with CIFS, but ultimately it will be CIFS plus NFS/iSCSI (whichever is better for ESX).
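For reference, a quick back-of-envelope check of the numbers above (nothing NexentaStor-specific, just arithmetic):

```shell
# Gigabit Ethernet is 1000 Mbit/s, i.e. 1000/8 = 125 MB/s theoretical;
# CIFS/protocol overhead usually leaves roughly 100-115 MB/s in practice.
echo $((1000 / 8))   # -> 125

# Two mirror vdevs writing ~100 MB/s sequential each:
echo $((2 * 100))    # -> 200, comfortably above the GbE ceiling
```

So the raw disks really shouldn't be the bottleneck for sequential GbE traffic; sustained 7-60 MB/s points at something else in the path (network, sync-write behaviour, or CIFS itself).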
Is there a way to set up multi-stage (tiered) storage here: SSD (tiny) -> fast drives (small, 15k RPM SAS) -> slow drives (large)?