Best Config for my setup. -Storage project
This project is for virtualizing all our servers. All my servers are connected to a 10 Gbps network.
My setup is 2 x Dell R710 connected to a Dell MD1200 JBOD (12 nearline SAS disks of 2TB each).
Both R710s have one Talos MLC SSD for cache and a DDRdrive for the ZIL.
I was thinking of using one in mirror mode (SQL Server, Exchange Server, etc.) and the other in RAIDZ2 (for less IO-demanding machines).
For the second one in RAIDZ2, should I create one RAIDZ2 of 12 disks or two RAIDZ2 vdevs? Is it bad practice to have one RAIDZ2 of 12 drives?
Right now I am using IOMeter in a Windows 7 box (in ESXi with an NFS share) to benchmark my configuration. Is this a good method of testing IOPS? I ask because when I add the SSD and the DDRdrive to my pool, my IOPS are worse than without the SSD + DDRdrive.
Thanks for your time
RE: Best Config for my setup. -Storage project - Added by Tommy Scherer 10 months ago
I agree that SQL and Exchange should be on mirror + stripe. That would provide roughly 90 * 6 = 540 write IOPS (when the ZIL, or in your case the SLOG, is flushed) and about 840 random read IOPS. We typically see a 90+ % cache hit rate in most workloads. This also depends on how much main memory is deployed; I would recommend a minimum of 24GB of RAM.
For the second box I have a hard time recommending RAIDZ2 for a virtual workload with that few spindles, because each vdev performs like a single spindle. A single 12-drive RAIDZ2 will give 90 write IOPS or 90 cache-miss read IOPS. A pair of 6-drive RAIDZ2 vdevs will give 180 write IOPS when your SLOG is flushed and 270 IOPS on cache misses.
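The vdev arithmetic above can be sketched as a quick calculation. The ~90 IOPS per-spindle figure is the reply's rule of thumb for 7.2k nearline SAS drives, not a measured number:

```shell
#!/bin/sh
# Rough ZFS pool IOPS under the rule of thumb that each vdev performs
# like a single spindle (~90 IOPS assumed for 7.2k nearline SAS).
pool_iops() {
  # $1 = number of vdevs, $2 = per-spindle IOPS
  echo $(( $1 * $2 ))
}

pool_iops 1 90   # one 12-drive RAIDZ2 vdev  -> 90
pool_iops 2 90   # two 6-drive RAIDZ2 vdevs  -> 180
pool_iops 6 90   # six mirror pairs striped  -> 540
```

This is why mirror + stripe wins for virtual workloads: the same 12 drives yield six vdevs instead of one or two.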
I would configure both systems as mirror + stripe. We have over a hundred units deployed and only around 5 RAIDZ arrays among them. The only time we deploy Z2 arrays is for CIFS archive or backup. The vast majority of our systems are deployed as tier-one HA clusters for virtual workloads.
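A mirror + stripe layout for the 12 MD1200 drives could look like the sketch below. The pool name and all device names are placeholders; substitute your actual disk, DDRdrive, and SSD device IDs:

```shell
# Hedged sketch: 12 drives as six striped mirror pairs, plus the
# DDRdrive as SLOG ("log") and the Talos SSD as L2ARC ("cache").
# "tank", disk1..disk12, ddrdrive0, and talos0 are placeholders.
zpool create tank \
  mirror disk1 disk2 \
  mirror disk3 disk4 \
  mirror disk5 disk6 \
  mirror disk7 disk8 \
  mirror disk9 disk10 \
  mirror disk11 disk12 \
  log ddrdrive0 \
  cache talos0

# Verify the vdev layout:
zpool status tank
```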
We have also deployed a few OCZ Talos C series drives with mixed results.
They have a ton of soft errors; these are all correctable errors where the drive retries the block. The other thing we have noticed is that as the L2ARC fills (which it is designed to do), the drives can become slower than spinning disk. If you want to keep the OCZ in production, you should use only a 100GB slice for L2ARC.
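One way to cap the L2ARC at 100GB is to partition the SSD and hand ZFS only that slice, leaving the rest unallocated as overprovisioning headroom. The partitioning tool and device names below are assumptions (Linux-style parted and /dev/sdX shown):

```shell
# Hedged sketch: carve a 100GB partition on the Talos SSD and add only
# that slice as cache. /dev/sdx and pool name "tank" are placeholders.
parted -s /dev/sdx mklabel gpt
parted -s /dev/sdx mkpart l2arc 1MiB 100GiB
zpool add tank cache /dev/sdx1
```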
The OCZ could slow your benchmark if the test was run long enough to fill the device. The other factor is block size: there is a relationship between block size and the amount of main memory required to drive the SSD. For virtual workloads, 128k blocks should be used to reduce metadata and utilize more memory for cache rather than housekeeping.
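Setting the 128k record size is a per-dataset property; a minimal sketch, with the dataset name assumed:

```shell
# Hedged sketch: set and confirm a 128k recordsize on the dataset
# exported over NFS to ESXi. "tank/vmstore" is a placeholder name.
zfs set recordsize=128k tank/vmstore
zfs get recordsize tank/vmstore
```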