Displaying 20 results from an estimated 30000 matches similar to: "VM locking"
2015 Sep 01
0
Re: VM locking
On Mon, Aug 31, 2015 at 08:01:58PM +0000, Prof. Dr. Michael Schefczyk wrote:
> Dear All,
>
> I am trying to use VM (disk) locking on a two-node CentOS 7 KVM cluster. Unfortunately, I am not successful.
>
> Using virtlockd (https://libvirt.org/locking-lockd.html), I get each
> host to write the zero-length file with a hashed filename to the shared
> folder specified.
2016 Jul 26
2
Live Disk Backup
Dear All,
Using CentOS 7.2.1511 and libvirt from the oVirt repositories (currently 1.2.17-13.el7_2.5, but without otherwise using oVirt), I am regularly backing up my VMs, which are on qcow2 files. In general, I am trying to follow http://wiki.libvirt.org/page/Live-disk-backup-with-active-blockcommit
A typical backup script would be
#!/bin/bash
dt=`date +%y%m%d`
if virsh dominfo dockers10a | grep
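For anyone following the same wiki page, here is a minimal sketch of the full flow such a script typically implements (the domain name is taken from the snippet; the image and backup paths are assumptions, not the poster's actual layout):

#!/bin/bash
# sketch: live backup via external snapshot + active blockcommit
dt=`date +%y%m%d`
dom=dockers10a
img=/var/lib/libvirt/images/$dom.qcow2    # assumed path
# 1. disk-only external snapshot: new writes go to a temporary overlay
virsh snapshot-create-as --domain $dom backup-$dt \
  --diskspec vda,file=/var/lib/libvirt/images/$dom-overlay-$dt.qcow2 \
  --disk-only --atomic --no-metadata
# 2. the base image is now stable and can be copied away
cp $img /backup/$dom-$dt.qcow2
# 3. merge the overlay back into the base and pivot the guest onto it
virsh blockcommit $dom vda --active --pivot
# 4. the overlay is no longer referenced and can be deleted
rm -f /var/lib/libvirt/images/$dom-overlay-$dt.qcow2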
2013 Oct 11
2
upstart script for virtlockd
Hi all,
Trying to test libvirt 1.1.3 with virtlockd locking my qcow2 images on a NFS storage between two kvm hosts.
./configure ... --with-init-script=upstart
The libvirtd upstart script is generated correctly, but I can't see anything about virtlockd... or am I blind? :)
Nevertheless, running virtlockd -d && service libvirtd restart works fine.
Am I wrong in thinking that editing
2019 Dec 28
3
Locking without virtlockd (nor sanlock)?
Hi list,
I would like to ask a clarification about how locking works. My test
system is CentOS 7.7 with libvirt-4.5.0-23.el7_7.1.x86_64
I was under the impression that, by default, libvirt does not use any locks.
From here [1]: "The out of the box configuration, however, currently
uses the nop lock manager plugin". As "lock_manager" is commented out in my
qemu.conf file, I was
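For reference, the setting in question: when lock_manager is commented out, libvirt falls back to the nop plugin, and switching to virtlockd is a one-line change (a sketch per the libvirt locking documentation; libvirtd must be restarted and the virtlockd service running afterwards):

# /etc/libvirt/qemu.conf
# commented out (the default) means the "nop" plugin, i.e. no
# libvirt-level locking at all
lock_manager = "lockd"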
2017 Nov 15
2
virtlock - a VM goes read-only
Dear colleagues,
I am facing a problem that has been troubling me for the last week and a half.
Please help if you are able to, or offer some guidance.
I have a non-prod POC environment with 2 CentOS7 fully updated hypervisors
and an NFS filer that serves as a VM image storage. The overall environment
works exceptionally well. However, starting a few weeks ago I have been
trying to implement virtlock
2014 Jan 15
2
Ceph RBD locking for libvirt-managed LXC (someday) live migrations
Hi,
I'm trying to build an active/active virtualization cluster using a Ceph
RBD as backing for each libvirt-managed LXC. I know live migration for LXC
isn't yet possible, but I'd like to build my infrastructure as if it were.
That is, I would like to be sure proper locking is in place for live
migrations to someday take place. In other words, I'm building things as
if I were
2020 Jan 03
2
Re: Locking without virtlockd (nor sanlock)?
On Sat, Dec 28, 2019 at 02:36:27PM +0100, Gionatan Danti wrote:
> On 28-12-2019 01:39 Gionatan Danti wrote:
> > Hi list,
> > I would like to ask a clarification about how locking works. My test
> > system is CentOS 7.7 with libvirt-4.5.0-23.el7_7.1.x86_64
> >
> > I was under the impression that, by default, libvirt does not use any locks.
> > From here [1]:
2018 Jul 03
1
Breaking a virtlockd lock?
I have several Qemu/kvm servers running VMs hosted on an NFS share, and am
using virtlockd. (lock_manager = "lockd" in qemu.conf) After a power
failure, one of the VMs will not start, claiming that it is locked. How do
I get out of this?
thanks,
Steve Gaarder
System Administrator, Dept of Mathematics
Cornell University, Ithaca, NY, USA
gaarder@math.cornell.edu
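No reply is included in this excerpt, but here is a sketch of how one can at least locate the offending lock, assuming the indirect lockspace layout described in the lockd documentation (the image path below is only an example):

# lockd names its look-aside files after the md5 hash of the
# fully-qualified image path
echo -n /var/lib/libvirt/images/guest1.qcow2 | md5sum
ls -l /var/lib/libvirt/lockd/files/
# the lock itself is a plain fcntl() lock and dies with its holder, so
# one that survives a power failure is usually a stale entry in the NFS
# server's lock manager; locally, this shows what virtlockd still holds:
lslocks -p $(pidof virtlockd)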
2013 Oct 21
1
Re: upstart script for virtlockd
Hi Eric,
Here is my attempt at a quick upstart script for virtlockd.
It should be saved as /etc/init/virtlockd.conf, then: ln -s /lib/init/upstart-job /etc/init.d/virtlockd
It seems to work for me:
# virtlockd - Locking daemon for libvirt
description "virtlockd"
start on filesystem and runlevel [2345]
stop on starting rc RUNLEVEL=[016]
pre-start script
test -x
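The script is cut off above; a hypothetical completion of the stanza, in the same style (not the poster's verbatim file), might read:

pre-start script
    # bail out quietly if the daemon binary is missing
    test -x /usr/sbin/virtlockd || { stop; exit 0; }
end script
# run in the foreground and let upstart supervise the process
exec /usr/sbin/virtlockd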
2020 Jan 03
2
Re: Locking without virtlockd (nor sanlock)?
On Fri, Jan 03, 2020 at 02:56:50PM +0100, Gionatan Danti wrote:
> On 03-01-2020 11:26 Daniel P. Berrangé wrote:
> > virtlockd also uses fcntl(), however, it doesn't have to acquire locks
> > on the file/block device directly. It can use a look-aside file for
> > locking. For example a path under /var/lib/libvirt/lock. This means
> > that locks on
>
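For readers searching for the knob Daniel describes: the look-aside directory is configured in the lockd plugin's own config file (a sketch per the locking-lockd documentation):

# /etc/libvirt/qemu-lockd.conf
# when set, lockd takes fcntl() locks on hash-named files in this
# directory instead of on the disk images themselves
file_lockspace_dir = "/var/lib/libvirt/lockd/files"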
2013 May 06
1
virtlockd, init script and kill signals
Hi,
I've read in the documentation that virtlockd uses SIGUSR1 to dump its
state and then re-execs itself.
Now I tried it and this seems to fail because virtlockd is being
launched without a full path (when using the init script), thus re-exec
fails with the error:
error : virLockDaemonPreExecRestart:1092 : Unable to restart self: No such file or directory
Changing in the init script
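The post is truncated, but the fix it points toward is launching the daemon by absolute path; a hypothetical sketch of the change in a Red Hat-style init script (not the poster's actual diff):

# before: re-exec after SIGUSR1 fails because argv[0] is a bare name
#   daemon virtlockd -d
# after: an absolute path lets virtlockd re-exec itself
daemon /usr/sbin/virtlockd -d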
2020 Jan 06
2
Re: Locking without virtlockd (nor sanlock)?
On 06-01-2020 10:06 Peter Krempa wrote:
> On Fri, Jan 03, 2020 at 14:08:03 +0000, Daniel Berrange wrote:
>> As above, QEMU's locking is good enough to rely on for file based
>> images.
Hi Daniel, thank you for the direct confirmation.
>> The flaws I mention with libvirt might actually finally be something
>> we have fixed in 5.10.0 with QEMU 4.2.0,
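The QEMU-side locking referred to here has been present since QEMU 2.10 and is easy to observe directly; an illustrative test, assuming a local qemu install:

qemu-img create -f qcow2 /tmp/locktest.qcow2 1G
qemu-system-x86_64 -display none -drive file=/tmp/locktest.qcow2,format=qcow2 &
sleep 2
# the second writer is refused with: Failed to get "write" lock
qemu-system-x86_64 -display none -drive file=/tmp/locktest.qcow2,format=qcow2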
2020 Jun 08
2
Disable virtlockd
Hello!
Is it possible to disable the virtlockd daemon or VM file locking? I
start qemu with a -snapshot option, which prevents any changes to the
disk image anyway.
Using <readonly /> is not supported for IDE disks.
Another option would be to not require locking on the NFS share, but I
have no idea how.
Can someone help me with that?
Regards
Felix Queißner
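Two separate mechanisms can hold a disk image: libvirt's lock manager and QEMU's own image locking (QEMU 2.10 and later). The former is disabled by selecting the nop plugin, sketched below; the latter is per disk and, when starting qemu by hand as above, can be relaxed with the file.locking=off drive option.

# /etc/libvirt/qemu.conf -- "nop" (also the effective default when the
# setting is commented out) disables libvirt's lock manager entirely
lock_manager = "nop"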
2013 Nov 07
4
Re: RBD images locking
Eric,
Well, in the case where several servers may start the same virtual machines after a reboot, for example.
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2013-August/003887.html
I've seen this hook here : http://www.wogri.at/en/linux/ceph-libvirt-locking/
But it's a hook...
Yes, I may try to write a patch. My coding skills are surely not as good as yours but I'd be glad to make
2012 Jun 23
4
Can't run KVM Virtual Machines on a Gluster volume
I just built a 2-node (4-brick) Distributed-Replicated volume and everything mounts fine.
Each node mounts using the GlusterFS client on its hostname (mount -t glusterfs hostname:VOLUME /virtual-machines)
When creating a new virtual machine using virt-manager, it creates the file on the storage, but when trying to power it on, it doesn't work and gives back an error message. (See below. Yes, the folder has
2014 Oct 15
3
Re: Virt-v2v conversion issue
On Wed, Oct 15, 2014 at 03:23:39PM +0000, VONDRA Alain wrote:
> I see only qemu-img consuming some CPU and MEM:
>
> 25897 qemu 20 0 5825976 2,429g 4368 S 5,6 32,2 603:09.34 qemu-kvm
That's qemu, not qemu-img.
> I have indeed, some nfs errors :
>
> [475747.296041] nfs: server 192.203.100.247 not responding, still trying
> [475747.772022] nfs: server
2016 Nov 16
2
Re: [ovirt-users] OVA import of FC21 VM hangs during virt-v2v conversion?
Hi,
On Wed, November 16, 2016 5:15 pm, Richard W.M. Jones wrote:
> On Wed, Nov 16, 2016 at 05:09:56PM -0500, Derek Atkins wrote:
>
> I'll try to reproduce the issue here, but you can also do
> the following command directly on the guest disk image if you
> want to test something:
>
> time LIBGUESTFS_BACKEND=direct guestfish --ro -a fc21-64.qcow2 -i selinux-relabel
2013 Nov 08
1
Re: RBD images locking
On Thu, Nov 07, 2013 at 09:08:58AM -0700, Eric Blake wrote:
> On 11/07/2013 09:04 AM, NEVEU Stephane wrote:
> > Eric,
>
> [please don't top-post on technical lists]
>
> >
> > Well, in the case where several servers may start the same virtual machines after a reboot, for example.
> > http://lists.ceph.com/pipermail/ceph-users-ceph.com/2013-August/003887.html
2017 Aug 24
0
GlusterFS as virtual machine storage
Hi,
On Thu, Aug 24, 2017 at 2:13 AM, WK <wkmail at bneit.com> wrote:
> The default timeout for most OS versions is 30 seconds and the Gluster
> timeout is 42, so yes you can trigger an RO event.
I get a read-only mount within approximately 2 seconds after a failed IO.
> Though it is easy enough to raise as Pavel mentioned
>
> # echo 90 > /sys/block/sda/device/timeout
AFAIK
2018 Mar 05
2
virt-v2v 1.38 fails to convert .vmx VM: setfiles ... Multiple same specifications for /.*.