Hi all,
I'm planning an installation where 4 gluster nodes are virtualized by KVM, one gluster VM per physical server (4 in total). The same physical node hosting the gluster image will also host some machines that should boot from gluster. Obviously, gluster has to be started first.

Recap: 4 physical nodes, each node will host at least 10 VMs plus 1 gluster VM. Each VM should boot from the gluster VM.

Do you have any advice on this configuration?

On 02/24/2013 12:52 PM, Gandalf Corvotempesta wrote:
> Hi all,
> I'm planning an installation where 4 gluster nodes are virtualized by KVM, one gluster VM per physical server (4 in total). The same physical node hosting the gluster image will also host some machines that should boot from gluster. Obviously, gluster has to be started first.
>
> Recap: 4 physical nodes, each node will host at least 10 VMs plus 1 gluster VM. Each VM should boot from the gluster VM.
>
> Do you have any advice on this configuration?

Why does it make sense to keep gluster on VMs?

tamas

On 02/24/2013 06:52 AM, Gandalf Corvotempesta wrote:
> Hi all,
> I'm planning an installation where 4 gluster nodes are virtualized by KVM, one gluster VM per physical server (4 in total). The same physical node hosting the gluster image will also host some machines that should boot from gluster. Obviously, gluster has to be started first.
>
> Recap: 4 physical nodes, each node will host at least 10 VMs plus 1 gluster VM. Each VM should boot from the gluster VM.
>
> Do you have any advice on this configuration?

If I understand you, the boot path is:

0. boot 4 physical servers
1. boot 4 gluster VMs, one per server, w/ mapped physical disks
2. mount gluster volumes on each server
3. boot generic VMs, 10 per server

If you didn't do gluster VMs:

a. boot 4 physical servers
b. mount gluster volumes on each server
c. boot generic VMs, 10 per server

Doing non-virt gluster saves storing the gluster images (small, 15-20GB), the mapping of physical disks into the gluster VMs, and any potential overhead introduced by gluster working against virtualized disks. Unless you have some reason to put gluster in a VM, my advice is to keep it physical.

You didn't mention it, but the 40 VMs might have just their boot partitions mounted from gluster, or both their boot and data partitions. If they have both, you'll want to pay special attention to the read/write path through gluster (skip virt, possibly multiple volumes w/ bricks spread over spindles).

Best,
matt

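P.S. A minimal sketch of what that per-host ordering (steps 1-3) means in practice, assuming the libvirt Python bindings; the gluster VM name, guest names, volume, and mount point are all made up, adjust to taste:

#!/usr/bin/env python
# Sketch of the per-host boot ordering implied by steps 1-3 above.
# Assumes libvirt-python; "gluster-vm1", "gv0", and the guest names
# are placeholders for your own.

import os
import subprocess
import time

import libvirt

GLUSTER_VM = "gluster-vm1"                  # the local gluster node VM
GUEST_VMS = ["guest%02d" % i for i in range(10)]
MOUNT_POINT = "/mnt/gv0"                    # where the guest images live
VOLFILE = "gluster-vm1:/gv0"                # volume served by the gluster VM

conn = libvirt.open("qemu:///system")

# 1. boot the gluster VM first
dom = conn.lookupByName(GLUSTER_VM)
if not dom.isActive():
    dom.create()

# 2. mount the gluster volume, retrying until the gluster VM is up
while not os.path.ismount(MOUNT_POINT):
    subprocess.call(["mount", "-t", "glusterfs", VOLFILE, MOUNT_POINT])
    time.sleep(5)

# 3. only now boot the generic VMs, whose disk images sit on the mount
for name in GUEST_VMS:
    guest = conn.lookupByName(name)
    if not guest.isActive():
        guest.create()

The point being: every guest on a host now depends on that host's gluster VM coming up and serving the volume, which is one more moving part the non-virt variant avoids.
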
On Sun, Feb 24, 2013 at 12:52:53PM +0100, Gandalf Corvotempesta wrote:
> Recap: 4 physical nodes, each node will host at least 10 VMs plus 1 gluster VM.
> Each VM should boot from the gluster VM.

By "boot from" I guess you mean that the VM's root device, e.g. hda/vda, will be a disk image file stored on the gluster filesystem?

> Do you have any advice on this configuration?

Yes: test it carefully to ensure it does what you want.

* In my experience, write performance of KVM -> FUSE mount -> gluster is very poor (I was getting about 6MB/s). In your case you have another layer of KVM in this too.

* Test carefully all the various failure scenarios, e.g. halting and restarting the gluster VMs, rebooting the whole server, pulling the power out of the whole server and restarting it. Better to learn the failure scenarios here than in production, because Gluster has precious little documentation on how to cope with them.

If the *only* thing you need to do is provide backing storage for VMs, then there are other solutions which might suit you better - you could look at Ganeti and Sheepdog.

Ganeti uses LVM to create storage volumes for each VM and then creates DRBD instances on top of them to synchronise replicas between hosts. It provides a full VM cluster manager too, so the command line lets you manage all your VMs from one point.

Sheepdog provides a virtual block-storage layer for KVM, where chunks of each volume are distributed and replicated between hosts. However, neither provides a general-purpose shared filesystem as Gluster does.

Regards,
Brian.
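
P.S. A crude stand-in for the kind of write test I mean, in Python rather than dd; the mount point and file name are placeholders:

#!/usr/bin/env python
# Crude write-throughput check for a FUSE-mounted gluster volume,
# roughly equivalent to: dd if=/dev/zero of=... bs=1M count=512
# "/mnt/gv0" is a placeholder for wherever you mount the volume.

import os
import time

PATH = "/mnt/gv0/throughput-test.bin"
BLOCK = b"\0" * (1024 * 1024)   # 1 MiB per write
COUNT = 512                     # write 512 MiB in total

fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
start = time.time()
for _ in range(COUNT):
    os.write(fd, BLOCK)
os.fsync(fd)                    # make sure the data actually hit gluster
os.close(fd)
elapsed = time.time() - start

os.unlink(PATH)
print("wrote %d MiB in %.1fs: %.1f MB/s" % (COUNT, elapsed, COUNT / elapsed))

Run it once against the FUSE mount and once against a local disk on the same host; the gap is what the gluster path costs you, before the extra KVM layer is even in the picture.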