Hello list. I just wanted to have an extra pair (or a dozen) of eyes look this configuration over before I commit to it. I tested it in VMware just in case, and it works, so I am considering doing this on real hardware soon. I drew a nice diagram: http://www.pastebin.ca/1460089

Since it doesn't show on the diagram, let me clarify that the geom mirror consumers, as well as the vdevs for the ZFS RAIDZ, are going to be partitions (raw disk => full disk slice => swap partition | mirror provider partition | ZFS vdev partition | unused).

Is there any actual downside to having a 5-way mirror vs a 2-way or a 3-way one?

- Sincerely,
Dan Naumov
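For reference, the per-disk layout described above could be set up with something like the following. This is only a sketch: the device names (ad0 etc.), partition letters, and sizes are hypothetical and would need to match the diagram; it is not a tested recipe.

```sh
# Create one full-disk slice on the drive (repeat per disk; names hypothetical)
fdisk -BI ad0

# Write a standard label into the slice, then edit it to add:
#   b = swap, d = gmirror consumer, e = ZFS raidz vdev, rest unused
bsdlabel -w ad0s1
bsdlabel -e ad0s1
```

The same `bsdlabel` layout would then be applied to each of the five disks so the gmirror and raidz members are all the same size.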
Alson van der Meulen
2009-Jun-14 22:07 UTC
Does this disk/filesystem layout look sane to you?
* Dan Naumov <dan.naumov@gmail.com> [2009-06-14 18:17]:
> I just wanted to have an extra pair (or a dozen) of eyes look this
> configuration over before I commit to it (tested it in VMware just in
> case, it works, so I am considering doing this on real hardware soon).
> I drew a nice diagram: http://www.pastebin.ca/1460089

Looks fine to me. Note that your swap doesn't have any redundancy, so if you lose a disk, the kernel will likely panic as soon as it hits any swap (the swap space is striped across the disks); this is something you can easily test in a VM. Also note that the kernel will only use four swap devices by default.

I would put the swap on gmirror. Swap performance is rarely critical (if you're hitting swap often, you should buy more RAM), and if you have 2TB disks, a few gigabytes less is not an issue (I usually make swap slightly larger than RAM for crash dumps, sometimes twice that if I plan to add RAM later).

> Is there any actual downside to having a 5-way mirror vs a 2-way or a
> 3-way one?

Write performance is slightly slower than a single disk (you have to wait for all five disks to finish), but these partitions are rarely performance-critical. Depending on your workload, it may be an issue for /var (databases, logs, mail), but you could always move that data to a ZFS filesystem. It should be fine for a file server. Any other solution would just add more complexity.

Alson
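Putting swap on gmirror, as suggested above, could look roughly like this. The device names (adXs1b) and the balance algorithm are hypothetical; adjust to the actual layout before using:

```sh
# 5-way mirror named "swap" built from the swap partitions of each disk
gmirror label -b load swap ad0s1b ad1s1b ad2s1b ad3s1b ad4s1b

# enable it now; make it permanent with this /etc/fstab line:
#   /dev/mirror/swap  none  swap  sw  0  0
swapon /dev/mirror/swap
```

You would also need `geom_mirror_load="YES"` in /boot/loader.conf so the mirror device exists at boot before swap is enabled.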
On Sun, Jun 14, 2009 at 9:17 AM, Dan Naumov <dan.naumov@gmail.com> wrote:
> I just wanted to have an extra pair (or a dozen) of eyes look this
> configuration over before I commit to it (tested it in VMware just in
> case, it works, so I am considering doing this on real hardware soon).
> I drew a nice diagram: http://www.pastebin.ca/1460089 Since it doesn't
> show on the diagram, let me clarify that the geom mirror consumers, as
> well as the vdevs for the ZFS RAIDZ, are going to be partitions (raw disk
> => full disk slice => swap partition | mirror provider partition | zfs
> vdev partition | unused).

I don't know for sure if it's the same on FreeBSD, but on Solaris, ZFS will disable the onboard disk cache if the vdevs are not whole disks. IOW, if you use slices, partitions, or files, the onboard disk cache is disabled. This can lead to poor write performance.

Unless you can use one of the ZFS-on-root facilities, I'd look into getting a couple of CompactFlash or USB sticks to use for the gmirror for / and /usr (put the rest on ZFS). Then you can dedicate the entirety of all 5 drives to ZFS.

-- 
Freddie Cash
fjwcash@gmail.com
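If you did move / and /usr onto a small CF/USB gmirror and give ZFS the whole drives, the pool creation would be a single whole-disk raidz, something like the sketch below (pool name "tank" and device names are hypothetical):

```sh
# dedicate all five whole disks to one raidz vdev
zpool create tank raidz ad0 ad1 ad2 ad3 ad4

# carve out datasets instead of separate partitions
zfs create tank/var
zfs create tank/home
```

With whole disks, ZFS (at least on Solaris, per the caveat above) can safely leave the drives' write caches enabled, which is the performance argument for this layout.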