Hi,

I have two KVM hosts (CentOS 7) and would like them to operate as High Availability servers, automatically migrating guests when one of the hosts goes down.

My question is: Is this even possible? All the documentation for HA that I've found appears to not do this. Am I missing something?

My configuration so far includes:

* SAN storage volumes for raw device mappings for guest VMs (single volume per guest)
* multipathing of iSCSI and InfiniBand paths to raw devices
* live migration of guests works
* a cluster configuration (pcs, corosync, pacemaker)

Currently when I migrate a guest, I can all too easily start it up on both hosts! There must be some way to fence these off but I'm just not sure how to do this.

Any help is appreciated.

Kind regards,
Tom

-- 

Tom Robinson
IT Manager/System Administrator

MoTeC Pty Ltd

121 Merrindale Drive
Croydon South
3136 Victoria
Australia

T: +61 3 9761 5050
F: +61 3 9761 5051
E: tom.robinson at motec.com.au
On 22/06/16 01:01 AM, Tom Robinson wrote:
> I have two KVM hosts (CentOS 7) and would like them to operate as High Availability servers, automatically migrating guests when one of the hosts goes down.
>
> My question is: Is this even possible? All the documentation for HA that I've found appears to not do this. Am I missing something?

Very possible. It's all I've done for years now.

https://alteeve.ca/w/AN!Cluster_Tutorial_2

That's for EL 6, but the basic concepts port perfectly. In EL7, just change out cman + rgmanager for pacemaker. The commands change, but the concepts don't. Also, we use DRBD, but you can conceptually swap that for "SAN" and the logic is the same (though I would argue that a SAN is less reliable).

There is an active mailing list for HA clustering, too:

http://clusterlabs.org/mailman/listinfo/users

> My configuration so far includes:
>
> * SAN storage volumes for raw device mappings for guest VMs (single volume per guest)
> * multipathing of iSCSI and InfiniBand paths to raw devices
> * live migration of guests works
> * a cluster configuration (pcs, corosync, pacemaker)
>
> Currently when I migrate a guest, I can all too easily start it up on both hosts! There must be some way to fence these off but I'm just not sure how to do this.

Fencing, exactly.

What we do is create a small /shared/definitions (on gfs2) to host the VM XML definitions and then undefine the VMs from the nodes. This makes the servers disappear from non-cluster-aware tools, like virsh/virt-manager. Pacemaker can still start the servers just fine, and pacemaker, with fencing, makes sure that the server is only ever running on one node at a time.

> Any help is appreciated.
>
> Kind regards,
> Tom

We also have an active freenode IRC channel; #clusterlabs. Stop on by and say hello. :)

-- 
Digimer
Papers and Projects: https://alteeve.ca/w/
What if the cure for cancer is trapped in the mind of a person without access to education?
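[Editor's sketch] The recipe above (XML definitions on shared gfs2, guests undefined from libvirt, pacemaker managing them with fencing) might look roughly like this with pcs on EL7. All names are hypothetical — the guest (vm1), node names, fence device addresses, and credentials — and fence-agent parameter names can vary between fence-agents versions:

```shell
# Move the guest's XML definition onto shared gfs2 storage and undefine
# it locally, so only pacemaker (not virsh/virt-manager) will start it:
virsh dumpxml vm1 > /shared/definitions/vm1.xml
virsh undefine vm1

# Configure fencing first -- pacemaker will not recover resources without
# working stonith. IPMI-style power fencing shown (covers iLO/DRAC/etc.):
pcs stonith create fence-node1 fence_ipmilan ipaddr=10.0.0.11 \
    login=admin passwd=secret lanplus=1 pcmk_host_list=node1
pcs stonith create fence-node2 fence_ipmilan ipaddr=10.0.0.12 \
    login=admin passwd=secret lanplus=1 pcmk_host_list=node2

# Define the guest as a cluster resource via the VirtualDomain agent,
# allowing live migration between the nodes:
pcs resource create vm1 ocf:heartbeat:VirtualDomain \
    hypervisor="qemu:///system" \
    config=/shared/definitions/vm1.xml \
    migration_transport=ssh \
    meta allow-migrate=true
```

With something like this in place, moving the resource live-migrates the guest, and a failed node is power-fenced before its guests are restarted on the survivor.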
> My question is: Is this even possible? All the documentation for HA that I've found appears to not do this. Am I missing something?

You can use oVirt for that (www.ovirt.org).

For that small number of hosts, you would probably want to use the "hosted engine" architecture to co-locate the management engine on the same hypervisor hosts.

It is included by the CentOS Virtualization SIG, so on CentOS it is just a couple of 'yum install's away...

HTH,
-- 
Barak Korren
bkorren at redhat.com
RHEV-CI Team
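[Editor's sketch] The "couple of yum installs" for the hosted-engine path would look roughly like this on CentOS 7; the release package name is an assumption and changes with the oVirt version:

```shell
# Enable the CentOS Virtualization SIG oVirt repository (package name
# varies by oVirt release -- centos-release-ovirt40 is one example):
yum install -y centos-release-ovirt40

# Install the hosted-engine setup tool, then run the interactive deploy,
# which co-locates the management engine as a VM on this hypervisor:
yum install -y ovirt-hosted-engine-setup
hosted-engine --deploy
```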
On Wed, Jun 22, 2016 at 11:08 AM, Barak Korren <bkorren at redhat.com> wrote:
>> My question is: Is this even possible? All the documentation for HA that I've found appears to not do this. Am I missing something?
>
> You can use oVirt for that (www.ovirt.org).

When an unclean shutdown happens on node1, or its eth0 is taken down with ifdown, can oVirt migrate the VMs from node1 to node2? In that case, is power management such as iLO needed?

-- 
cat /etc/motd

Thank you
Indunil Jayasooriya
http://www.theravadanet.net/
http://www.siyabas.lk/sinhala_how_to_install.html - Download Sinhala Fonts
Hi Digimer,

Thanks for your reply.

On 22/06/16 15:20, Digimer wrote:
> Very possible. It's all I've done for years now.
>
> https://alteeve.ca/w/AN!Cluster_Tutorial_2
>
> That's for EL 6, but the basic concepts port perfectly. In EL7, just change out cman + rgmanager for pacemaker. The commands change, but the concepts don't. Also, we use DRBD, but you can conceptually swap that for "SAN" and the logic is the same (though I would argue that a SAN is less reliable).

In what way is the SAN method less reliable? Am I going to get into a world of trouble going that way?

> There is an active mailing list for HA clustering, too:
>
> http://clusterlabs.org/mailman/listinfo/users

I've had a brief look at the web-site. Lots of good info there. Thanks!

> What we do is create a small /shared/definitions (on gfs2) to host the VM XML definitions and then undefine the VMs from the nodes. This makes the servers disappear from non-cluster-aware tools, like virsh/virt-manager. Pacemaker can still start the servers just fine, and pacemaker, with fencing, makes sure that the server is only ever running on one node at a time.

That sounds simple enough :-P. Although, I wanted to be able to easily open VM consoles, which I do currently through virt-manager. I also use virsh for all kinds of ad-hoc management. Is there an easy way to still have my cake and eat it?

We also have a number of Windows VMs. Remote Desktop is great but sometimes you just have to have a console.

> We also have an active freenode IRC channel; #clusterlabs. Stop on by and say hello. :)

Will do. I have a bit of reading to catch up on now, but I'm sure I'll have a few more questions before long.

Kind regards,
Tom
On 6/21/2016 10:01 PM, Tom Robinson wrote:
> Currently when I migrate a guest, I can all too easily start it up on both hosts! There must be some way to fence these off but I'm just not sure how to do this.

in addition to power fencing as described by others, you can also fence at the ethernet switch layer, where you disable the switch port(s) that the dead host is on. this of course requires managed switches that your cluster management software can talk to. if you're using dedicated networking for iSCSI (often done for high performance), you can just disable that port.

-- 
john r pierce, recycling bits in santa cruz
On 22/06/16 01:38 PM, John R Pierce wrote:
> in addition to power fencing as described by others, you can also fence at the ethernet switch layer, where you disable the switch port(s) that the dead host is on. this of course requires managed switches that your cluster management software can talk to. if you're using dedicated networking for iSCSI (often done for high performance), you can just disable that port.

This is called "fabric fencing" and was originally the only supported option in the very early days of HA. It has fallen out of favour for several reasons, but it does still work fine. The main issue is that it leaves the node in an unclean state. If an admin (out of ignorance or panic) reconnects the node, all hell can break loose. So generally power cycling is much safer.

-- 
Digimer
Papers and Projects: https://alteeve.ca/w/
What if the cure for cancer is trapped in the mind of a person without access to education?
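[Editor's sketch] SNMP-driven fabric fencing of the kind described above is typically done with an agent such as fence_ifmib, which shuts down the failed node's switch port via the IF-MIB. The switch address, community string, and interface name below are hypothetical, and, per the caveat above, power fencing is generally preferred:

```shell
# Fence node1 by disabling its switch port over SNMP (IF-MIB).
# Switch IP, SNMP community, and interface name are placeholders:
pcs stonith create fence-sw-node1 fence_ifmib \
    ipaddr=10.0.0.254 community=private port=Gi0/1 \
    pcmk_host_list=node1
```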
How about trying commercial RHEV?

Eero

22.6.2016 8.02 AM, "Tom Robinson" <tom.robinson at motec.com.au> wrote:
> I have two KVM hosts (CentOS 7) and would like them to operate as High Availability servers, automatically migrating guests when one of the hosts goes down.
>
> My question is: Is this even possible? All the documentation for HA that I've found appears to not do this. Am I missing something?

_______________________________________________
CentOS mailing list
CentOS at centos.org
https://lists.centos.org/mailman/listinfo/centos
RHEV is a cloud solution with some HA features. It is not an actual HA solution.

digimer

On 23/06/16 12:08 AM, Eero Volotinen wrote:
> How about trying commercial RHEV?
>
> Eero

-- 
Digimer
Papers and Projects: https://alteeve.ca/w/
What if the cure for cancer is trapped in the mind of a person without access to education?