similar to: VM crash and lock manager

Displaying 20 results from an estimated 700 matches similar to: "VM crash and lock manager"

2016 Mar 30
0
Re: VM crash and lock manager
[moderator note: .pngs were stripped to avoid overwhelming the mail server and list recipients with 1M of data] -------- Forwarded Message -------- Date: Wed, 30 Mar 2016 11:25:09 +0200 Message-ID: <20160330112509.Horde.0Oad-ZfkzdZnouXf-VkEmw1@webmailperso.univ-brest.fr> From: villeneu@kassis.univ-brest.fr To: Franky Van Liedekerke <liedekef@telenet.be> Cc: libvirt-users@redhat.com
2016 Mar 31
2
How is the lock calculated with lockd_manager
2012 May 08
1
release open_disk error
Hello, I wonder what the "open error -1" / "release open_disk error" messages in sanlock.log actually mean. I saw these messages in the log on a KVM host that rebooted, and after running "/usr/sbin/virt-sanlock-cleanup" on that host. The resources were disks from 2 guests running on another KVM host. So in fact the disks are still in use, but got cleaned up by
2019 Dec 28
3
Locking without virtlockd (nor sanlock)?
Hi list, I would like to ask for a clarification about how locking works. My test system is CentOS 7.7 with libvirt-4.5.0-23.el7_7.1.x86_64 I was under the impression that, by default, libvirt does not use any locks. From here [1]: "The out of the box configuration, however, currently uses the nop lock manager plugin". As "lock_manager" is commented out in my qemu.conf file, I was
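For context, a minimal sketch of switching from the default nop plugin to virtlockd, assuming stock file locations from the distribution packages (verify the paths against your own install):

    # /etc/libvirt/qemu.conf
    # When this line is commented out, libvirt falls back to the "nop" plugin (no locking).
    lock_manager = "lockd"

    # /etc/libvirt/qemu-lockd.conf
    # Acquire a lease automatically for every disk a guest uses.
    auto_disk_leases = 1
    # Optional shared directory (e.g. on NFS) visible to all hosts;
    # without it, locks are taken directly on the disk image files.
    file_lockspace_dir = "/var/lib/libvirt/lockd/files"

Both libvirtd and virtlockd need a restart for the change to take effect.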
2017 Nov 08
2
Does libvirt-sanlock support network disk?
Hello, As we know, libvirt's sanlock plugin supports file-type storage. I wonder if it supports network storage. I tried iSCSI, but found it didn't generate any resource file. Version: qemu-2.10 libvirt-3.9 sanlock-3.5 1. Set configuration: qemu.conf: lock_manager = "sanlock" qemu-sanlock.conf: auto_disk_leases = 1, disk_lease_dir = "/var/lib/libvirt/sanlock", host_id =
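The configuration described above, written out as a sketch; the host_id value shown here is hypothetical and must be unique for each host sharing the lease directory:

    # /etc/libvirt/qemu.conf
    lock_manager = "sanlock"

    # /etc/libvirt/qemu-sanlock.conf
    auto_disk_leases = 1
    disk_lease_dir = "/var/lib/libvirt/sanlock"
    host_id = 1    # example value; pick a distinct ID per host

With auto_disk_leases enabled, sanlock creates a lease file per disk under disk_lease_dir when the guest starts; the question in this thread is why no such file appears for an iSCSI-backed disk.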
2012 Mar 13
2
libvirt with sanlock
Hello, I configured libvirtd with the sanlock lock manager plugin: # rpm -qa | egrep "libvirt-0|sanlock-[01]" libvirt-lock-sanlock-0.9.4-23.el6_2.4.x86_64 sanlock-1.8-2.el6.x86_64 libvirt-0.9.4-23.el6_2.4.x86_64 # egrep -v "^#|^$" /etc/libvirt/qemu-sanlock.conf auto_disk_leases = 1 disk_lease_dir = "/var/lib/libvirt/sanlock" host_id = 4 # mount | grep sanlock
2013 Oct 11
2
upstart script for virtlockd
Hi all, Trying to test libvirt 1.1.3 with virtlockd locking my qcow2 images on NFS storage between two KVM hosts. ./configure ... --with-init-script=upstart The libvirtd upstart script is generated correctly, but I can't see anything about virtlockd... or am I blind? :) Nevertheless, running virtlockd -d && service libvirtd restart works fine. Am I wrong in thinking that editing
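Since the upstart init-script option apparently covers only libvirtd, a hand-written job is one way to get virtlockd supervised as well; a sketch, assuming virtlockd lives at /usr/sbin/virtlockd:

    # /etc/init/virtlockd.conf  (hand-written, not generated by ./configure)
    description "libvirt lock daemon"
    start on starting libvirtd
    stop on stopped libvirtd
    respawn
    exec /usr/sbin/virtlockd

This is only a sketch of the idea in the thread, not a script shipped by libvirt 1.1.3.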
2019 Nov 22
2
Re: [PATCH nbdkit v2 02/10] python: Add various constants to the API.
On Fri, Nov 22, 2019 at 9:54 PM Richard W.M. Jones <rjones@redhat.com> wrote: > > These are accessible from the plugin by: > > import nbdkit > > if flags & nbdkit.FLAG_MAY_TRIM: > &c. Nice way to expose the flags! > Many (all?) of these are not yet useful for plugins, some will never > be useful, but they only consume a tiny bit of memory and
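A minimal sketch of how a plugin callback might use one of these constants once they are exported; the API_VERSION 2 signature is assumed, and punch_hole/write_zeroes are hypothetical helpers, not part of nbdkit:

    import nbdkit

    API_VERSION = 2

    def zero(h, count, offset, flags):
        if flags & nbdkit.FLAG_MAY_TRIM:
            # The client allows trimming instead of writing literal zeroes.
            punch_hole(h, count, offset)
        else:
            write_zeroes(h, count, offset)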
2020 Jun 08
2
Disable virtlockd
Hello! Is it possible to disable the virtlockd daemon or VM file locking? I start qemu with the -snapshot option, which prevents any changes to the disk image anyway. Using <readonly /> is not supported for IDE disks. Another option would be to not require locking on the NFS share, but I have no idea how. Can someone help me with that? Regards Felix Queißner
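If disabling locking globally is acceptable, one approach (a sketch, affecting every guest on the host, not just the -snapshot one) is to select the no-op lock manager plugin:

    # /etc/libvirt/qemu.conf
    # Either comment out the lockd/sanlock line entirely ...
    #lock_manager = "lockd"
    # ... or pick the no-op plugin explicitly:
    lock_manager = "nop"

followed by a restart of libvirtd.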
2013 Jan 31
1
Sanlock gives up lock when VM is paused
Hello, I'm using libvirt and sanlock on qemu-kvm guests. Each guest has its own Logical Volume for its root filesystem. Sanlock is configured and working and prevents me from starting the same VM twice on multiple nodes and corrupting its root filesystem. Each VM's domain XML resides on 2 servers that share the LVM volume group over Fibre Channel. In testing, I noticed
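As an aside, leases do not have to come from auto_disk_leases; a guest can declare an explicit lease in its domain XML, which some setups prefer when the same XML lives on several hosts. A sketch with hypothetical lockspace, key and path values:

    <devices>
      ...
      <lease>
        <lockspace>vm-lockspace</lockspace>
        <key>guest01-root</key>
        <target path='/var/lib/libvirt/sanlock/vm-lockspace' offset='1048576'/>
      </lease>
    </devices>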
2012 Sep 06
1
How to properly test watchdog?
CentOS 6 Hi all, I am working on setting up sanlock + watchdog on a 2 node KVM pair. Sanlock is working beautifully and is preventing access to the VM disks by more than one process as it should across both boxes. I am attempting to test failure scenarios involving watchdog, but I am having a hard time getting it to actually reset the server. I am running wdmd with -D so I can see the
2012 Feb 22
1
[libvirt] a question about sanlock
Hi Daniel, I have a question about the lock manager: if I enable 'sanlock' in qemu.conf and uncomment 'auto_disk_leases = 1' in qemu-sanlock.conf and then restart the libvirtd service, libvirtd dies. I know I should also uncomment 'host_id = 1' in qemu-sanlock.conf, because I enabled 'auto_disk_leases'. The question is whether libvirtd must die due to an error users
2012 Feb 23
2
lockmanager for use with clvm
Hi, I am setting up a cluster of KVM hypervisors managed with libvirt. The storage pool is on iSCSI with clvm. To prevent a VM from being started on more than one hypervisor, I want to use a lock manager with libvirt. I could only find sanlock as a lock manager, but AFAIK sanlock will not work in my setup as I don't have a shared filesystem. I have dlm running for clvm. Are there lock manager
2019 Nov 22
1
Re: [PATCH nbdkit v2 02/10] python: Add various constants to the API.
On Fri, Nov 22, 2019 at 10:52 PM Richard W.M. Jones <rjones@redhat.com> wrote: > > On Fri, Nov 22, 2019 at 10:05:15PM +0200, Nir Soffer wrote: > > On Fri, Nov 22, 2019 at 9:54 PM Richard W.M. Jones <rjones@redhat.com> wrote: > > > > > > These are accessible from the plugin by: > > > > > > import nbdkit > > > > > > if
2019 Dec 28
0
Re: Locking without virtlockd (nor sanlock)?
On 28-12-2019 01:39 Gionatan Danti wrote: > Hi list, > I would like to ask for a clarification about how locking works. My test > system is CentOS 7.7 with libvirt-4.5.0-23.el7_7.1.x86_64 > > I was under the impression that, by default, libvirt does not use any locks. > From here [1]: "The out of the box configuration, however, currently > uses the nop lock manager
2020 Aug 10
1
Re: [PATCH nbdkit] file: Implement cache=none and fadvise=normal|random|sequential.
On Sat, Aug 08, 2020 at 02:14:22AM +0300, Nir Soffer wrote: > On Fri, Aug 7, 2020 at 5:36 PM Richard W.M. Jones <rjones@redhat.com> wrote: > > > > On Fri, Aug 07, 2020 at 05:29:24PM +0300, Nir Soffer wrote: > > > On Fri, Aug 7, 2020 at 5:07 PM Richard W.M. Jones <rjones@redhat.com> wrote: > > > > These ones? > > > >
2018 Jul 03
1
Breaking a virtlockd lock?
I have several Qemu/kvm servers running VMs hosted on an NFS share, and am using virtlockd. (lock_manager = "lockd" in qemu.conf) After a power failure, one of the VMs will not start, claiming that it is locked. How do I get out of this? thanks, Steve Gaarder System Administrator, Dept of Mathematics Cornell University, Ithaca, NY, USA gaarder@math.cornell.edu
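With lock_manager = "lockd" and no file_lockspace_dir configured, the locks are by default fcntl() locks that virtlockd takes directly on the image files over NFS, so a first step is usually to find out which host, if any, still holds one. A sketch of that inspection, run on each candidate host since POSIX locks are per NFS client:

    # List advisory locks held by virtlockd (util-linux lslocks)
    lslocks -p $(pidof virtlockd)
    # Or inspect everything the kernel knows about
    cat /proc/locks

If no host still holds the lock, the stale state typically sits on the NFS server side and is a matter of NFS lock recovery rather than libvirt.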
2014 Jan 15
2
Ceph RBD locking for libvirt-managed LXC (someday) live migrations
Hi, I'm trying to build an active/active virtualization cluster using a Ceph RBD as backing for each libvirt-managed LXC. I know live migration for LXC isn't yet possible, but I'd like to build my infrastructure as if it were. That is, I would like to be sure proper locking is in place for live migrations to someday take place. In other words, I'm building things as if I were
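Independently of libvirt's lock manager plugins, RBD itself offers advisory image locks through the rbd CLI, which some people script around migrations. A sketch with hypothetical pool, image and lock names:

    rbd lock add libvirt-pool/guest01-root migration-lock
    rbd lock list libvirt-pool/guest01-root
    # releasing requires the locker id reported by "lock list", e.g. client.4123
    rbd lock remove libvirt-pool/guest01-root migration-lock client.4123

These are advisory only: nothing stops a second host from opening the image unless the tooling checks the lock first.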
2014 Feb 08
4
force group does not work
Hi, I set up a Samba 4.1.4 server on the latest FreeBSD 10 RELEASE. Unfortunately it doesn't seem to honor the force group option. After hours of research I couldn't figure out what I'm still missing. unix extensions is set to no. Setting the debug level up to 10 also didn't help ;( Is this a bug or is there simply a mistake in my setup? When valid users = @Groupname is
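For comparison, a minimal share definition where force group normally takes effect; the share name, path and group are placeholders, not taken from the poster's smb.conf:

    [projects]
        path = /usr/local/share/projects
        valid users = @staff
        force group = staff
        writable = yes
        create mask = 0660
        directory mask = 0770

force group only changes the primary group used for operations inside the share; it does not grant access by itself, which is what valid users controls.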
2017 Aug 23
4
GlusterFS as virtual machine storage
On 23-08-2017 18:14 Pavel Szalbot wrote: > Hi, after many VM crashes during upgrades of Gluster, losing network > connectivity on one node etc. I would advise running replica 2 with > arbiter. Hi Pavel, this is bad news :( So, in your case at least, Gluster was not stable? Something as simple as an update could make it crash? > I once even managed to break this setup (with
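For reference, the layout Pavel recommends uses three bricks, two full replicas plus an arbiter that stores only metadata; a sketch with hypothetical host and brick paths:

    gluster volume create vmstore replica 3 arbiter 1 \
        gl1:/bricks/vmstore/brick gl2:/bricks/vmstore/brick gl3:/bricks/arbiter/brick
    # apply the tuned options commonly used for VM images
    gluster volume set vmstore group virt

The arbiter avoids the split-brain-prone plain replica 2 layout while costing far less space than a third full copy.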