search for: virtlock

Displaying 16 results from an estimated 16 matches for "virtlock".

2013 May 03
1
sanlockd, virtlock and GFS2
Hi, I'm trying to put in place a KVM cluster (using clvm and gfs2), but I'm running into some issues with either sanlock or virtlockd. All virtual machines are handled via the cluster (in /etc/cluster/cluster.conf) but I want some kind of locking to be in place as an extra security measure. Sanlock ======= At first I tried sanlock, but it seems that if one node goes down unexpectedly, sanlock sometimes blocks everything on the ot...
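
For reference, a minimal sanlock configuration on each hypervisor looks roughly like the following sketch, based on libvirt's sanlock lock driver; the lease directory and host_id values here are illustrative, and the lease directory must sit on storage shared by all hosts (e.g. the GFS2 filesystem):

-- Content of /etc/libvirt/qemu.conf --
lock_manager = "sanlock"

-- Content of /etc/libvirt/qemu-sanlock.conf --
auto_disk_leases = 1                          # create a lease for every disk automatically
disk_lease_dir = "/var/lib/libvirt/sanlock"   # illustrative path; must be on shared storage
host_id = 1                                   # must be unique on every host (1-2000)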
2017 Nov 16
0
Re: virtlock - a VM goes read-only
...o help or offer some guidance. > > I have a non-prod POC environment with 2 CentOS7 fully updated hypervisors > and an NFS filer that serves as VM image storage. The overall environment > works exceptionally well. However, starting a few weeks ago I have been > trying to implement virtlock in order to prevent a VM from running on 2 > hypervisors at the same time. [snip] > h2 # virsh start test09 > error: Failed to start domain test09 > error: resource busy: Lockspace resource > '/storage_nfs/images_001/test09.qcow2' is locked [snip] > Now, I am pretty sure th...
2017 Nov 15
2
virtlock - a VM goes read-only
...Please help or offer some guidance if you can. I have a non-prod POC environment with 2 CentOS7 fully updated hypervisors and an NFS filer that serves as VM image storage. The overall environment works exceptionally well. However, starting a few weeks ago I have been trying to implement virtlock in order to prevent a VM from running on 2 hypervisors at the same time. Here is a description of what the environment looks like in terms of virtlock configuration on both hypervisors: -- Content of /etc/libvirt/qemu.conf -- lock_manager = "lockd" Only the above line is uncommented for direc...
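
For reference, an indirect lockd setup looks roughly like the following sketch, based on libvirt's lockd driver; the lockspace path is illustrative and must live on storage visible to both hypervisors (e.g. the NFS filer):

-- Content of /etc/libvirt/qemu.conf --
lock_manager = "lockd"

-- Content of /etc/libvirt/qemu-lockd.conf --
file_lockspace_dir = "/var/lib/libvirt/lockd/files"   # illustrative shared path for lease files

With only lock_manager = "lockd" uncommented (direct lockspaces, as described above), virtlockd instead acquires fcntl() locks on the disk image paths themselves, which relies on the NFS server propagating locks correctly.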
2017 Aug 23
4
GlusterFS as virtual machine storage
...crash? > I once even managed to break this setup (with arbiter) due to network > partitioning - one data node never healed and I had to restore from > backups (it was easier and kind of non-production). Be extremely > careful and plan for failure. I would use VM locking via sanlock or virtlock, so a split brain should not cause simultaneous changes on both replicas. I am more concerned about volume heal time: what will happen if the standby node crashes/reboots? Will *all* data be re-synced from the master, or will only the changed bits be re-synced? As stated above, I would like to avoid...
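
For reference, per-volume heal state can be inspected with a command along these lines (the volume name is illustrative); it lists the files still pending heal on each brick:

gluster volume heal vmstore info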
2013 Nov 07
4
Re: RBD images locking
Eric, Well, in the case where several servers may start the same virtual machines after a reboot, for example. http://lists.ceph.com/pipermail/ceph-users-ceph.com/2013-August/003887.html I've seen this hook here: http://www.wogri.at/en/linux/ceph-libvirt-locking/ But it's a hook... Yes, I may try to write a patch. My coding skills are surely not as good as yours but I'd be glad to make
2013 Nov 08
1
Re: RBD images locking
...> [please don't top-post on technical lists] > > > > > Well, in the case where several servers may start the same virtual machines after a reboot, for example. > > http://lists.ceph.com/pipermail/ceph-users-ceph.com/2013-August/003887.html > > Isn't the existing virtlockd support already sufficient for this? If > not, what is preventing the virtlock framework from interacting with rbd > disks? Nothing really. The current impl only deals with disks with type=file or type=block, ignoring type=network. We could extend it to cope with the latter though - either...
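
For context, a network-backed disk of the kind the lock manager ignored at the time is declared in the domain XML roughly like this (pool, image and monitor names are illustrative):

<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <source protocol='rbd' name='libvirt-pool/vm01-disk'>   <!-- illustrative pool/image -->
    <host name='ceph-mon1.example.com' port='6789'/>      <!-- illustrative monitor -->
  </source>
  <target dev='vda' bus='virtio'/>
</disk>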
2017 Aug 23
3
GlusterFS as virtual machine storage
...is setup (with arbiter) due to network >> > partitioning - one data node never healed and I had to restore from >> > backups (it was easier and kind of non-production). Be extremely >> > careful and plan for failure. >> >> I would use VM locking via sanlock or virtlock, so a split brain should >> not cause simultaneous changes on both replicas. I am more concerned >> about volume heal time: what will happen if the standby node >> crashes/reboots? Will *all* data be re-synced from the master, or will only >> the changed bits be re-synced? As sta...
2017 Aug 23
0
GlusterFS as virtual machine storage
...managed to break this setup (with arbiter) due to network > > partitioning - one data node never healed and I had to restore from > > backups (it was easier and kind of non-production). Be extremely > > careful and plan for failure. > > I would use VM locking via sanlock or virtlock, so a split brain should > not cause simultaneous changes on both replicas. I am more concerned > about volume heal time: what will happen if the standby node > crashes/reboots? Will *all* data be re-synced from the master, or will only > the changed bits be re-synced? As stated above, I...
2014 Jan 15
2
Ceph RBD locking for libvirt-managed LXC (someday) live migrations
...'d like to build my infrastructure as if it were. That is, I would like to be sure proper locking is in place for live migrations to someday take place. In other words, I'm building things as if I were using KVM and live migration via libvirt. I've been looking at corosync, pacemaker, virtlock, sanlock, gfs2, ocfs2, glusterfs, cephfs, ceph RBD and other solutions. I admit that I'm quite confused. If oVirt, with its embedded GlusterFS and its planned self-hosted engine option, supported LXC, I'd use that. However, the stars have not yet aligned for that. It seems that the most...
2017 Aug 23
0
GlusterFS as virtual machine storage
...ue to network > >> > partitioning - one data node never healed and I had to restore from > >> > backups (it was easier and kind of non-production). Be extremely > >> > careful and plan for failure. > >> > >> I would use VM locking via sanlock or virtlock, so a split brain should > >> not cause simultaneous changes on both replicas. I am more concerned > >> about volume heal time: what will happen if the standby node > >> crashes/reboots? Will *all* data be re-synced from the master, or will only > >> the changed bits be...
2013 Nov 07
0
Re: RBD images locking
..., NEVEU Stephane wrote: > Eric, [please don't top-post on technical lists] > > Well, in the case where several servers may start the same virtual machines after a reboot, for example. > http://lists.ceph.com/pipermail/ceph-users-ceph.com/2013-August/003887.html Isn't the existing virtlockd support already sufficient for this? If not, what is preventing the virtlock framework from interacting with rbd disks? http://libvirt.org/locking.html -- Eric Blake eblake redhat com +1-919-301-3266 Libvirt virtualization library http://libvirt.org
2017 Aug 23
0
GlusterFS as virtual machine storage
Hi, after many VM crashes during upgrades of Gluster, loss of network connectivity on one node, etc., I would advise running replica 2 with an arbiter. I once even managed to break this setup (with arbiter) due to network partitioning - one data node never healed and I had to restore from backups (it was easier and kind of non-production). Be extremely careful and plan for failure. -ps On Mon, Aug
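
For reference, a replica 2 volume with an arbiter as advised above can be created roughly as follows (hostnames and brick paths are illustrative); the arbiter brick holds only metadata, which lets it break ties in a split brain without a third full copy of the data:

gluster volume create vmstore replica 3 arbiter 1 \
    node1:/bricks/vmstore node2:/bricks/vmstore arbiter1:/bricks/vmstore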
2014 Jan 16
0
Re: Ceph RBD locking for libvirt-managed LXC (someday) live migrations
...nfrastructure as if it were. > That is, I would like to be sure proper locking is in place for live > migrations to someday take place. In other words, I'm building things as > if I were using KVM and live migration via libvirt. > > I've been looking at corosync, pacemaker, virtlock, sanlock, gfs2, ocfs2, > glusterfs, cephfs, ceph RBD and other solutions. I admit that I'm quite > confused. If oVirt, with its embedded GlusterFS and its planned > self-hosted engine option, supported LXC, I'd use that. However, the stars > have not yet aligned for that. >...
2017 Aug 25
2
GlusterFS as virtual machine storage
...en managed to break this setup (with arbiter) due to network >> partitioning - one data node never healed and I had to restore from >> backups (it was easier and kind of non-production). Be extremely >> careful and plan for failure. > > I would use VM locking via sanlock or virtlock, so a split brain > should not cause simultaneous changes on both replicas. I am more > concerned about volume heal time: what will happen if the standby node > crashes/reboots? Will *all* data be re-synced from the master, or will only > the changed bits be re-synced? As stated above, I wou...
2017 Aug 24
2
GlusterFS as virtual machine storage
...er) due to network >>>>> partitioning - one data node never healed and I had to restore from >>>>> backups (it was easier and kind of non-production). Be extremely >>>>> careful and plan for failure. >>>> I would use VM locking via sanlock or virtlock, so a split brain should >>>> not cause simultaneous changes on both replicas. I am more concerned >>>> about volume heal time: what will happen if the standby node >>>> crashes/reboots? Will *all* data be re-synced from the master, or only >>>> chang...
2017 Aug 21
4
GlusterFS as virtual machine storage
Hi all, I would like to ask if, and with how much success, you are using GlusterFS for virtual machine storage. My plan: I want to set up a 2-node cluster, where VMs run on the nodes themselves and can be live-migrated on demand. I have some questions: - do you use GlusterFS for a similar setup? - if so, how do you feel about it? - if a node crashes/reboots, how does the system re-sync? Will the VM