So, Best Practices says "use (2^N)+2 disks for your raidz2".
I wanted to use 7-disk stripes, not 6, to try to balance my risk
level vs. available space.
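For what it's worth, here's a trivial sketch (my numbers, not from any ZFS tool) of the raw space trade-off between the two widths, assuming 2TB drives and ignoring the allocation padding discussed below:

```python
# Hypothetical sketch: compare raw space efficiency of 6- vs 7-disk
# raidz2 vdevs (2 parity disks each), assuming 2 TB drives.
# Real raidz allocation adds padding, so these are upper bounds.

def raidz2_efficiency(disks, drive_tb=2.0):
    data = disks - 2                      # raidz2 reserves 2 disks' worth of parity
    return data * drive_tb, data / disks  # usable TB, space efficiency

for n in (6, 7):
    usable, eff = raidz2_efficiency(n)
    print(f"{n}-disk raidz2: {usable:.0f} TB usable, {eff:.1%} efficient")
```

So the 7-disk layout buys roughly 5 more points of space efficiency per vdev, which is the trade I'm weighing against the "power of two data disks" advice.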
Doing some testing on my hardware, it's hard to say there's a ton of
difference one way or the other - seek/create/delete is a bit faster on
a 6-disk stripe, bandwidth is a bit higher on a 7-disk stripe, maybe a 5%
difference.
However, all of the volumes are going to be compression=gzip, then
filled with big but compressible files - a few million of them.
Compression ratio is about 2.5:1.
Recordsize might be 128k, but if it's being compressed, the actual
blocksize written to disk is going to end up fairly random anyway,
right? Agreed that the final record size written to disk will more
likely be divisible by 4 than by 5, but most of the time it's not going
to divide cleanly either way, and ZFS will have to pad out the record,
thereby making the actual vdev stripe width a little moot?
Thanks,
-bacon
(hdw in question is a supermicro build: dual 6x2.8GHz CPUs, 96G RAM, 6
LSI2008 6Gb/s controllers dual-attached to 6 dual-expander backplanes
driving ~100 2TB Constellation SAS drives (so approx 20 drives per
backplane), split into 6 raidz2 zpools. A bit of block padding probably
gets lost in the noise, and we're quickly determining that our
bottleneck is how fast 12 cores can gzip as much as anything. :) )