Displaying 20 results from an estimated 3000 matches similar to: "How to properly test watchdog?"
2012 Mar 13
2
libvirt with sanlock
Hello,
I configured libvirtd with the sanlock lock manager plugin:
# rpm -qa | egrep "libvirt-0|sanlock-[01]"
libvirt-lock-sanlock-0.9.4-23.el6_2.4.x86_64
sanlock-1.8-2.el6.x86_64
libvirt-0.9.4-23.el6_2.4.x86_64
# egrep -v "^#|^$" /etc/libvirt/qemu-sanlock.conf
auto_disk_leases = 1
disk_lease_dir = "/var/lib/libvirt/sanlock"
host_id = 4
# mount | grep sanlock
2012 May 08
1
release open_disk error
Hello,
I wonder what the "open error -1" / "release open_disk error" messages in sanlock.log actually mean.
I saw these messages in the log on a KVM host that rebooted, and after running "/usr/sbin/virt-sanlock-cleanup" on that host.
The resources were disks from 2 guests running on another KVM host.
So in fact the disks are still in use, but got cleaned up by
2014 May 20
1
abrt dump qt selinux
Hi all,
Note: selinux was in permissive prior to error
Got this with a yum update:
abrt_version: 2.0.8
cgroup:
cmdline: semodule -n -r oracle-port -b base.pp.bz2 -i
accountsd.pp.bz2 ada.pp.bz2 cachefilesd.pp.bz2 cpufreqselector.pp.bz2
chrome.pp.bz2 awstats.pp.bz2 abrt.pp.bz2 aiccu.pp.bz2 amanda.pp.bz2
afs.pp.bz2 apache.pp.bz2 arpwatch.pp.bz2 audioentropy.pp.bz2
asterisk.pp.bz2
2013 Jan 31
1
Sanlock gives up lock when VM is paused
Hello,
I'm using libvirt and sanlock on qemu-kvm guests. Each guest has its
own Logical Volume for its root filesystem. Sanlock is configured and
working and prevents me from starting the same VM twice on multiple
nodes and corrupting its root filesystem. Each VM's domain XML resides
on 2 servers that share the LVM volume group over fiber channel.
In testing, I noticed
2019 Nov 22
2
Re: [PATCH nbdkit v2 02/10] python: Add various constants to the API.
On Fri, Nov 22, 2019 at 9:54 PM Richard W.M. Jones <rjones@redhat.com> wrote:
>
> These are accessible from the plugin by:
>
> import nbdkit
>
> if flags & nbdkit.FLAG_MAY_TRIM:
> &c.
Nice way to expose the flags!
> Many (all?) of these are not yet useful for plugins, some will never
> be useful, but they only consume a tiny bit of memory and
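The flag constants discussed in the patch are plain bitmasks a plugin tests with bitwise AND. A minimal sketch of that pattern, using placeholder values (a real plugin would use the constants exported by the `nbdkit` module itself, e.g. `nbdkit.FLAG_MAY_TRIM`, not hard-coded numbers):

```python
# Sketch of flag-bit checking as an nbdkit Python plugin might do it.
# The numeric values below are placeholders for illustration only.
FLAG_MAY_TRIM = 1 << 0   # placeholder value
FLAG_FUA = 1 << 1        # placeholder value

def describe_flags(flags):
    """Return the names of the flag bits set in `flags`."""
    names = []
    if flags & FLAG_MAY_TRIM:
        names.append("FLAG_MAY_TRIM")
    if flags & FLAG_FUA:
        names.append("FLAG_FUA")
    return names

print(describe_flags(FLAG_MAY_TRIM | FLAG_FUA))
```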
2017 Nov 08
2
Does libvirt-sanlock support network disk?
Hello,
As we know, libvirt sanlock supports file-type storage. I wonder if it
supports network storage.
I tried iSCSI, but found it didn't generate any resource file:
Version: qemu-2.10 libvirt-3.9 sanlock-3.5
1. Set configuration:
qemu.conf:
lock_manager = "sanlock"
qemu-sanlock.conf:
auto_disk_leases = 1
disk_lease_dir = "/var/lib/libvirt/sanlock"
host_id =
2019 Nov 22
1
Re: [PATCH nbdkit v2 02/10] python: Add various constants to the API.
On Fri, Nov 22, 2019 at 10:52 PM Richard W.M. Jones <rjones@redhat.com> wrote:
>
> On Fri, Nov 22, 2019 at 10:05:15PM +0200, Nir Soffer wrote:
> > On Fri, Nov 22, 2019 at 9:54 PM Richard W.M. Jones <rjones@redhat.com> wrote:
> > >
> > > These are accessible from the plugin by:
> > >
> > > import nbdkit
> > >
> > > if
2012 Feb 22
1
[libvirt] a question about sanlock
Hi Daniel,
I have a question about the lock manager: if I enable 'sanlock' in qemu.conf and
uncomment 'auto_disk_leases = 1' in qemu-sanlock.conf, then restart the libvirtd
service, libvirtd dies. I know I should also uncomment 'host_id = 1'
in qemu-sanlock.conf, because I enabled 'auto_disk_leases'. The question is
whether libvirtd must die due to an error users
2012 Feb 23
2
lockmanager for use with clvm
Hi,
i am setting up a cluster of kvm hypervisors managed with libvirt.
The storage pool is on iscsi with clvm. To prevent that a vm is
started on more than one hypervisor, I want to use a lockmanager
with libvirt.
I could only find sanlock as a lock manager, but AFAIK sanlock will not
work in my setup as I don't have a shared filesystem. I have dlm running
for clvm. Are there lockmanager
2013 May 03
1
sanlockd, virtlock and GFS2
Hi,
I'm trying to put in place a KVM cluster (using clvm and gfs2), but I'm
running into some issues with either sanlock or virtlockd. All virtual
machines are handled via the cluster (in /etc/cluster/cluster.conf) but I
want some kind of locking to be in place as an extra security measure.
Sanlock
=======
At first I tried sanlock, but it seems that if one node goes down
unexpectedly,
2020 Aug 10
1
Re: [PATCH nbdkit] file: Implement cache=none and fadvise=normal|random|sequential.
On Sat, Aug 08, 2020 at 02:14:22AM +0300, Nir Soffer wrote:
> On Fri, Aug 7, 2020 at 5:36 PM Richard W.M. Jones <rjones@redhat.com> wrote:
> >
> > On Fri, Aug 07, 2020 at 05:29:24PM +0300, Nir Soffer wrote:
> > > On Fri, Aug 7, 2020 at 5:07 PM Richard W.M. Jones <rjones@redhat.com> wrote:
> > > > These ones?
> > > >
2012 Jul 11
1
A couple of 32-bit packages got no update in 6.3/x86_64
Namely:
* hivex
* hivex-devel
* librdmacm
* librdmacm-devel
* sanlock-libs
* sanlock-devel
and maybe others.
Is this on purpose (I don't know if upstream has removed or updated the
32-bit rpms, but the old ones are still in C6.3/x86_64), or is it just
the usual sloppiness (I've been told here on previous occasions that the
biarch is a pain in the ass to maintain, nobody cares anyway,
2014 Jan 15
2
Ceph RBD locking for libvirt-managed LXC (someday) live migrations
Hi,
I'm trying to build an active/active virtualization cluster using a Ceph
RBD as backing for each libvirt-managed LXC. I know live migration for LXC
isn't yet possible, but I'd like to build my infrastructure as if it were.
That is, I would like to be sure proper locking is in place for live
migrations to someday take place. In other words, I'm building things as
if I were
2013 Sep 18
1
How to use watchdog daemon with hardware watchdog driver interface?
Good morning!
On a CentOS 6.4 / 64 bit server I have
installed the watchdog 5.5 package.
The rpm -qi watchdog states:
The watchdog program can be used as a powerful software watchdog
daemon or may be alternately used with a hardware watchdog device such
as the IPMI hardware watchdog driver interface to a resident Baseboard
Management Controller (BMC).
...
This configuration file is also used to
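The daemon described above works by periodically "petting" the kernel watchdog device: writing any byte to it resets the hardware timer, and writing the magic character 'V' before closing disarms it (when the driver allows a magic close). A minimal sketch of that protocol, with the device object injected so it runs against an in-memory file instead of the real /dev/watchdog:

```python
# Sketch of the Linux watchdog-device keepalive protocol: write any
# byte at an interval shorter than the hardware timeout, and write
# the magic character 'V' before closing to disarm the timer.
import io

def pet_watchdog(dev, beats):
    """Write one keepalive byte per beat, then do a magic close."""
    for _ in range(beats):
        dev.write(b"\0")   # any write counts as a keepalive
        dev.flush()
    dev.write(b"V")        # magic close: disarm before exiting
    dev.flush()

fake_dev = io.BytesIO()    # stand-in for open("/dev/watchdog", "wb")
pet_watchdog(fake_dev, 3)
print(fake_dev.getvalue())  # b'\x00\x00\x00V'
```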
2001 Apr 09
3
[PATCH]: Heartbeat/Watchdog Patch
Dear Developers,
I've released a patch against openssh-2.5.2p2.
The patch adds heartbeat (keepalive) function to ssh(1),
and watchdog timeout function to sshd(8). The watchdog
timeout is intended to terminate user's processes
as soon as possible after the link has been lost.
http://www.ecip.tohoku.ac.jp/~hgot/sources/openssh-watchdog.html
The combination of the heartbeat and the
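The heartbeat/watchdog combination the patch describes boils down to one side sending periodic beats while the other declares the link dead once no beat has arrived within the timeout. A small sketch of that logic (names are illustrative, not taken from the actual patch; time is passed in as a plain number so the behavior is testable):

```python
# Sketch of heartbeat-timeout detection: the watchdog records the
# time of each received heartbeat and reports the link as lost once
# more than `timeout` seconds have elapsed without one.
class HeartbeatWatchdog:
    def __init__(self, timeout):
        self.timeout = timeout
        self.last_beat = None

    def beat(self, now):
        """Record a heartbeat received at time `now`."""
        self.last_beat = now

    def link_lost(self, now):
        """True once `timeout` has elapsed since the last heartbeat."""
        if self.last_beat is None:
            return False  # not armed until the first heartbeat
        return now - self.last_beat > self.timeout

wd = HeartbeatWatchdog(timeout=30)
wd.beat(now=0)
print(wd.link_lost(now=20))   # False: still within the timeout
print(wd.link_lost(now=45))   # True: 45s since last beat > 30s
```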
2007 Dec 11
1
Tripplite OMNI1000LCD Watchdog
Hello Nut Devs,
I'm working with usbhid-ups and a Tripplite OMNI1000LCD. I used a USB packet
sniffer to discover something cool about the watchdog feature in this unit.
(not sure if the other OMNI-X-LCD models work the same or not.) Basically
there's a single HID variable at Report ID 0x52
(UPS.OutletSystem.Outlet.ffff0092) that's one byte (0-255 int) and it
contains the Watchdog
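The post describes a single one-byte (0-255) countdown variable at HID report 0x52. A plausible model of such a register, under the assumption that the host arms it with a nonzero value, the UPS decrements it once per second, and the outlet power-cycles if it hits zero before the host refreshes it (this models the behavior only; it does not speak USB HID, and the semantics are an assumption, not taken from Tripp Lite documentation):

```python
# Illustrative model of a one-byte countdown watchdog register:
# writing 0 disables it, writing 1-255 arms it, and each tick
# (one second in the UPS) decrements it; hitting zero trips it.
class CountdownWatchdog:
    def __init__(self):
        self.value = 0        # 0 means disabled
        self.tripped = False

    def write(self, value):
        """Host refresh: set the countdown (0 disables, 1-255 arms)."""
        if not 0 <= value <= 255:
            raise ValueError("one-byte register: 0-255")
        self.value = value

    def tick(self):
        """One second elapses inside the UPS."""
        if self.value > 0:
            self.value -= 1
            if self.value == 0:
                self.tripped = True   # power-cycle the outlet

wd = CountdownWatchdog()
wd.write(3)
wd.tick(); wd.tick()
wd.write(3)               # host refreshed in time, no trip
wd.tick(); wd.tick(); wd.tick()
print(wd.tripped)         # True: counter hit zero without a refresh
```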
2005 Feb 08
3
hardware-watchdog driver problems in linux 2.6.10-xen0
Hi!
I'm trying to run the w83627hf_wdt.ko watchdog driver in domain 0 (xenlinux
2.6.10-xen0), but the driver doesn't seem to work (the machine reboots all
the time after the watchdog timeout set in BIOS).
Is there something that could prevent the driver from accessing the
watchdog-hardware (io-ports/registers) ?
The watchdog-driver is very simple, and you can find it in
2012 Mar 07
7
NMI: Enable watchdog by default
This patch is based on one which has been in XenServer for a very long time.
To keep the trend of documentation going, it also corrects the new
command line document.
--
Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer
T: +44 (0)1223 225 900, http://www.citrix.com
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
2011 Jan 27
7
Xen watchdog patch disposition?
Jeremy,
while originally I had hoped this patch, sitting in xen/next, would get
pushed for .37, that didn't happen and now the .38 merge window
was missed too. Trying to get this to Linux on my own seems
inappropriate, so can I hope that you will include this with whatever
other changes you intend to push for .39?
Thanks, Jan
2017 Aug 23
4
GlusterFS as virtual machine storage
On 23-08-2017 18:14, Pavel Szalbot wrote:
> Hi, after many VM crashes during upgrades of Gluster, losing network
> connectivity on one node etc. I would advise running replica 2 with
> arbiter.
Hi Pavel, this is bad news :(
So, in your case at least, Gluster was not stable? Something as simple
as an update would let it crash?
> I once even managed to break this setup (with