In our development lab, I am installing 4 new servers that I want to use for hosting KVM. Each server will have its own direct-attached RAID. I'd love to be able to 'pool' this storage, but over GigE, I probably shouldn't even try.

Most of the VMs will be running CentOS 5 and 6. Some of the VMs will be PostgreSQL database dev/test servers, others will be running Java messaging workloads and various test jobs.

To date, my experience with KVM is bringing up one C6 VM on a C6 host, manually w/ virt-install and virsh...

Stupid questions...

What's the best storage setup for KVM when using direct-attached RAID? Surely using disk image files on a parent ext4/xfs file system isn't the best for performance? Should I use host LVM logical volumes as guest vdisks? We're going to be running various database servers in dev/test and will want at least one or another of them at a time to really be able to get some serious IOPS.

Is virt-manager worth using, or is it too simplistic/incomplete?

Will virt-manager or some other tool 'unify' management of these 4 VM hosts, or will it pretty much be me-the-admin keeping track of which VM is on which host, running the right virt-manager and managing it all fairly manually?

"That may be the easy way, but
it's not the Cowboy Way"

-- 
john r pierce                          37N 122W
somewhere on the middle of the left coast
On Sat, 19 Oct 2013 23:22:12 -0700, John R Pierce <pierce at hogranch.com> wrote:

> In our development lab, I am installing 4 new servers that I want to
> use for hosting KVM. Each server will have its own direct-attached
> RAID. I'd love to be able to 'pool' this storage, but over GigE, I
> probably shouldn't even try.

I'm not sure if somebody has rebuilt RHEV on CentOS (couldn't find it with a quick Google search). RHEV = http://www.redhat.com/products/cloud-computing/virtualization/

On top of that you'd need Red Hat Storage Server. It's a stabilized build of GlusterFS with enterprise support, and an enterprise price...

You could also try OpenStack, but I'm not sure it's worth the hassle, or whether four nodes is actually enough for a usable setup. Red Hat Storage Server recommends 10G Ethernet, BTW.

For your setup, I'd invest more in the hardware itself: redundant PSUs, more redundancy in the disks, a more powerful RAID controller with battery-backed cache, and the more of the hardware that is hot-pluggable, the better.

Oh, and I'd love to hear success stories from people who actually use RHEV + RHSS, or any kind of distributed storage, really.
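(For reference, the plain-GlusterFS version of "pooling" the four hosts' direct-attached storage looks roughly like the sketch below. The hostnames kvm1-kvm4, the volume name and the /data/brick1 paths are made up for illustration; whether a replicated Gluster volume over GigE is fast enough for VM images is exactly the open question in this thread.)

    # Minimal sketch, assuming glusterfs-server is installed and running
    # on all four hosts. Run the peer probes and volume creation once,
    # from any one node:
    gluster peer probe kvm2
    gluster peer probe kvm3
    gluster peer probe kvm4

    # Distributed-replicated volume: "replica 2" keeps two copies of each
    # file, spread across the four bricks:
    gluster volume create vmstore replica 2 \
        kvm1:/data/brick1 kvm2:/data/brick1 \
        kvm3:/data/brick1 kvm4:/data/brick1
    gluster volume start vmstore

    # On each KVM host, mount the volume and use it as an ordinary
    # directory-backed libvirt storage pool:
    mount -t glusterfs localhost:/vmstore /var/lib/libvirt/images/vmstore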
On 20/10/13 02:22, John R Pierce wrote:
> In our development lab, I am installing 4 new servers that I want to
> use for hosting KVM. Each server will have its own direct-attached
> RAID. I'd love to be able to 'pool' this storage, but over GigE, I
> probably shouldn't even try.

I've built DRBD-backed shared storage using a 1 Gbit network for replication for years, and the network has not been an issue. So long as your apps can work with ~110 MB/sec max throughput, you're fine. Latency is not affected, because the average seek time of a platter, even on 15k RPM SAS drives, is higher than the network latency (assuming decent equipment).

> Most of the VMs will be running CentOS 5 and 6. Some of the VMs will
> be PostgreSQL database dev/test servers, others will be running Java
> messaging workloads and various test jobs.
>
> To date, my experience with KVM is bringing up one C6 VM on a C6 host,
> manually w/ virt-install and virsh...
>
> Stupid questions...
>
> What's the best storage setup for KVM when using direct-attached RAID?
> Surely using disk image files on a parent ext4/xfs file system isn't
> the best for performance? Should I use host LVM logical volumes as
> guest vdisks? We're going to be running various database servers in
> dev/test and will want at least one or another of them at a time to
> really be able to get some serious IOPS.

What makes the most difference is not the RAID configuration but having battery-backed (or flash-backed) write caching. With multiple VMs doing heavy disk I/O, the workload will get random in a hurry; the caching keeps the systems responsive even under these highly random writes.

As for the storage type: I use clustered LVM (with DRBD devices as the PVs) and give each VM a dedicated LV, as you mentioned above. This takes the filesystem overhead out of the equation.

> Is virt-manager worth using, or is it too simplistic/incomplete?

I use it from my laptop, via an ssh tunnel, to the hosts all the time. I treat it as a "remote KVM switch", as it gives me access to the VMs regardless of their network state. I don't use it for anything else.

> Will virt-manager or some other tool 'unify' management of these 4 VM
> hosts, or will it pretty much be me-the-admin keeping track of which
> VM is on which host, running the right virt-manager and managing it
> all fairly manually?

Depends on what you mean by "manage". You can use virt-manager on your main computer to connect to the four hosts (and even set them to auto-connect on start). From there, it's trivial to boot, connect to, or shut down the guests.

If you're looking for high availability of your VMs (setting up your servers in pairs), this might be of interest:

https://alteeve.ca/w/2-Node_Red_Hat_KVM_Cluster_Tutorial

-- 
Digimer
Papers and Projects: https://alteeve.ca/w/
What if the cure for cancer is trapped in the mind of a person without access to education?
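(A minimal single-host sketch of the LV-per-guest layout described above, without the DRBD/clustered-LVM layer. The volume group name vg_guests, the guest name pgtest01, the ISO path and the hostname kvm1 are assumptions for illustration, not anything from the thread.)

    # Carve a dedicated LV out of the RAID-backed volume group and hand it
    # to the guest as a raw virtio disk (no image file or host filesystem
    # in between); cache=none avoids double-caching in the host:
    lvcreate -L 40G -n pgtest01 vg_guests

    virt-install --connect qemu:///system -n pgtest01 -r 4096 --vcpus=2 \
        --disk path=/dev/vg_guests/pgtest01,device=disk,bus=virtio,cache=none \
        --network=bridge:br0,model=virtio \
        --cdrom=/vols/CentOS-6.4-x86_64-minimal.iso \
        --os-type linux --hvm --vnc --noautoconsole

    # virt-manager (or virsh) reaches a remote host over ssh like this:
    virt-manager -c qemu+ssh://root@kvm1/system
    virsh -c qemu+ssh://root@kvm1/system list --all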
> What's the best storage setup for KVM when using direct-attached RAID?
> Surely using disk image files on a parent ext4/xfs file system isn't
> the best for performance? Should I use host LVM logical volumes as
> guest vdisks? We're going to be running various database servers in
> dev/test and will want at least one or another of them at a time to
> really be able to get some serious IOPS.

Stay away from LVM if you want performance; there is something single-threaded in it which stops you from hitting really great numbers. qcow2 will work pretty okay, but you can't get away from the fact that you are running a filesystem on top of a filesystem, which is never going to be awesome.

In CentOS, use ELRepo and install "kernel-lt". This will give you a much later kernel, which is great for KVM, as much improvement has been made since the stock 2.6.32 kernel.

If you need really good I/O, then use 10G networking, turn one of the boxes into a ZFS/NFS server and put your virtual images on that. My experiments with FreeNAS were extremely potent and quite stable. Remember that a single SATA hard drive has roughly the throughput of 1G Ethernet: 100 MB/s or so.

> Is virt-manager worth using, or is it too simplistic/incomplete?

virt-manager is an OK GUI tool, but I would recommend using the command-line tools virt-install and virsh. They require a little more learning but ultimately give you a better understanding of the stack. They are very powerful once you learn to script with them, especially if you don't have the ability to write Python (which I don't).

> Will virt-manager or some other tool 'unify' management of these 4 VM
> hosts, or will it pretty much be me-the-admin keeping track of which
> VM is on which host, running the right virt-manager and managing it
> all fairly manually?

Yes, however that covers just the creation and destruction of VMs. You can't do live migration because you don't have a shared storage device (unless you use my ZFS NAS idea).

I would stay away from OpenStack / RHEV (whose upstream is the oVirt project). They add a layer of complexity and inflexibility which is not so useful in a lab environment.

Ta,
Andrew

P.S. Here is a virt-install command that I found in my bash history :)

virt-install --connect qemu:///system -n nfs-server -r 2048 --vcpus=2 \
    --disk path=/vols/nfs-server.img,size=20,device=disk,bus=virtio \
    --vnc --vncport=9922 --vnclisten=192.168.0.11 --noautoconsole \
    --os-type linux --accelerate \
    --network=bridge:br0,mac=00:00:00:00:02:00,model=virtio \
    --hvm --cdrom=/vols/CentOS-6.4-x86_64-minimal.iso
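(To make the shared-storage/live-migration point concrete: if the images live on an NFS export from the ZFS box, each host can define that export as a libvirt storage pool, and migration becomes a single virsh call. The host names "nas" and "kvm2", the export path /tank/vols and the reuse of the "nfs-server" guest name from the example above are purely illustrative.)

    # On every KVM host, define and start the same NFS-backed storage pool:
    virsh pool-define-as vmpool netfs --source-host nas \
        --source-path /tank/vols --target /var/lib/libvirt/images/vmpool
    virsh pool-autostart vmpool
    virsh pool-start vmpool

    # With the guest's disk image on that shared pool, live migration
    # from the current host to another one:
    virsh migrate --live nfs-server qemu+ssh://root@kvm2/system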