Displaying 4 results from an estimated 4 matches for "lrmd".
2011 Nov 23
1
Corosync init-script broken on CentOS6
...install (with or without updates) is not starting corosync
dependencies.
I've even tried using corosync/pacemaker from the EPEL 6 repo, and
still the init-script will not start corosync dependencies.
Expected:
corosync
/usr/lib64/heartbeat/stonithd
/usr/lib64/heartbeat/cib
/usr/lib64/heartbeat/lrmd
/usr/lib64/heartbeat/attrd
/usr/lib64/heartbeat/pengine
Observed:
corosync
My install options are:
%packages
@base
@core
@ha
@nfs-file-server
@network-file-system-client
@resilient-storage
@server-platform
@server-policy
@storage-client-multipath
@system-admin-tools
pax
oddjob
sgpio
pacemaker
dlm...
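A quick way to verify the symptom described above is to check for each daemon the init script should have spawned. This is a hypothetical post-boot check, not part of the original report; the daemon names and paths are taken from the "Expected" list.

```shell
# Hypothetical check: report which of the expected pacemaker child daemons
# (paths from the "Expected" list above) are actually running.
missing=""
for d in stonithd cib lrmd attrd pengine; do
    if pgrep -f "/usr/lib64/heartbeat/$d" > /dev/null 2>&1; then
        echo "$d: running"
    else
        echo "$d: NOT running"
        missing="$missing $d"
    fi
done
[ -z "$missing" ] && echo "all expected daemons up" || echo "missing:$missing"
```

On the affected install, the "Observed" output above suggests all five would be reported as NOT running, since only corosync itself starts.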
2019 Sep 18
1
Live-Migration not possible: error: operation failed: guest CPU doesn't match specification
...corosync, libvirt and KVM.
Recently I configured a new VirtualDomain which runs fine, but live migration does not succeed.
This is the error:
VirtualDomain(vm_snipanalysis)[14322]: 2019/09/18_16:56:54 ERROR: snipanalysis: live migration to ha-idg-2 failed: 1
Sep 18 16:56:54 [6970] ha-idg-1 lrmd: notice: operation_finished: vm_snipanalysis_migrate_to_0:14322:stderr [ error: operation failed: guest CPU doesn't match specification: missing features: fma,movbe,xsave,avx,f16c,rdrand,fsgsbase,bmi1,avx2,smep,bmi2,erms,invpcid,md-clear,xsaveopt,abm ]
The two servers are from HP, they...
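For context, this error is essentially a set-difference check: libvirt compares the CPU features the guest's definition requires against what the destination host offers, and aborts the migration if any are missing. A minimal sketch of that comparison (the feature lists here are illustrative, not taken from these HP servers):

```shell
# Illustrative feature sets -- NOT the real hosts' CPU capabilities.
guest="fma movbe xsave avx f16c rdrand"   # features the guest CPU definition requires
dest="xsave avx"                          # features the destination host offers

missing=""
for f in $guest; do
    case " $dest " in
        *" $f "*) ;;                  # destination offers this feature
        *) missing="$missing,$f" ;;   # feature absent on the destination
    esac
done
echo "missing features: ${missing#,}"
```

In practice this kind of mismatch is usually addressed by giving the guest a CPU model both hosts can satisfy, e.g. computing a common baseline with `virsh cpu-baseline` or using a less host-specific CPU mode in the domain XML.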
2016 Nov 25
1
Pacemaker bugs?
...e/enabled
pacemaker: active/enabled
pcsd: active/enabled
The first bug is rather serious, though a workaround exists!
The cluster works fine, but as soon as I add a cluster resource of
class "service", the cluster manager software wreaks havoc on node
failover. In that situation, the lrmd process hangs in an infinite
loop (neither strace nor ltrace shows any output, so it seems to be
an internal loop without any system or library call) and almost any
call to the cluster manager software (crmsh or pcs) runs into a timeout.
It's quite hard to recover the whole cluster from this situ...
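When cluster queries hang like this, wrapping them in a hard time limit keeps scripts from blocking indefinitely. A hedged sketch (the 10-second limit is an assumption; `crm_mon -1` is pacemaker's one-shot status query):

```shell
# Hypothetical probe: bound a cluster status query so a spinning lrmd
# cannot block the caller forever.
if command -v crm_mon >/dev/null 2>&1; then
    status=$(timeout 10 crm_mon -1 2>&1) || status="cluster query timed out or failed"
else
    status="crm_mon not installed"
fi
echo "$status"
```

This only limits the damage on the querying side; it does not recover the hung lrmd itself.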
2009 Jul 24
1
Long delay between xen/HVM domU shutdown and releasing vbd (drbd)
Hi,
I have a running LVM-DRBD-Xen-HVM/Win2k3 cluster stack under Debian
lenny, managed by heartbeat/crm.
At takeover/failover, heartbeat first stops my HVM domU, apparently
with success:
-------------- corresponding heartbeat log line -----------------
greatmama-n2 lrmd: [3382]: info: RA output:
(xendom_infra_win:stop:stdout) Domain infra-win2003sbs terminated All
domains terminated
---------------------------------
But it seems the corresponding VBDs (drbd) are not released at this time.
xend.log and investigation of a manual domU shutdown show a delay of about
(exactly...