Displaying 20 results from an estimated 10000 matches similar to: "migration job: unexpectedly failed"
2018 May 23
0
Re: virsh migration job unexpectedly failed
On 05/23/2018 01:26 PM, cokabug wrote:
> Hi,
>
> When I ran the virsh migrate command, the migration failed
> at [61 %].
> command:
> virsh migrate --live --undefinesource --p2p --copy-storage-inc
> --verbose --persistent instance-00001959 qemu+tcp://host292/system
>
> result:
> Migration: [ 61 %]error: operation failed: migration
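A minimal triage sketch for a job stuck like this, run from the source host (the domain name is taken from the post; domjobinfo, domjobabort, and the per-domain QEMU log are standard libvirt facilities):

virsh domjobinfo instance-00001959                       # progress and statistics of the active migration job
virsh domjobabort instance-00001959                      # abort the stuck job; the guest keeps running on the source
tail -n 50 /var/log/libvirt/qemu/instance-00001959.log   # QEMU-side migration errors usually land here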
2012 Jun 08
1
virsh: migration job: unexpectedly failed
Hi, I am using virsh to test the migration command,
on server.example.com:
# rpm -qa|grep libvirt
libvirt-0.9.10-21.el6.x86_64
libvirt-python-0.9.10-21.el6.x86_64
libvirt-client-0.9.10-21.el6.x86_64
# virsh version
Compiled against library: libvir 0.9.10
Using library: libvir 0.9.10
Using API: QEMU 0.9.10
Running hypervisor: QEMU 0.14.1
# cat /etc/libvirt/libvirt.conf
uri_aliases = [
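The excerpt cuts off inside the alias list; for reference, a complete uri_aliases block in /etc/libvirt/libvirt.conf looks roughly like this (the hosts below are placeholders modelled on the libvirt documentation, not the poster's setup):

uri_aliases = [
  "hail=qemu+ssh://root@hail.cloud.example.com/system",    # 'virsh -c hail' expands to the full URI
  "sleet=qemu+ssh://root@sleet.cloud.example.com/system",
]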
2018 Jan 18
3
Re: Could not destroy domain, current job is remoteDispatchConnectGetAllDomainStats
On 01/18/2018 08:25 AM, Ján Tomko wrote:
> On Wed, Jan 17, 2018 at 04:45:38PM +0200, Serhii Kharchenko wrote:
>> Hello libvirt-users list,
>>
>> We've been hitting the same bug since version 3.4.0 (3.3.0 works OK).
>> So, we have a process that is permanently connected to libvirtd via a
>> socket; it collects stats, listens for events, and controls the VPSes.
2018 May 23
2
virsh migration job unexpectedly failed
Hi,
When I ran the virsh migrate command, the migration failed
at [61 %].
command:
virsh migrate --live --undefinesource --p2p --copy-storage-inc
--verbose --persistent instance-00001959 qemu+tcp://host292/system
result:
Migration: [ 61 %]error: operation failed: migration job: unexpectedly
failed
OS/kernel version: CentOS 6.7 / kernel 2.6.32
qemu-kvm
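Since --copy-storage-inc copies increments into a disk image that must already exist on the destination, a pre-flight check along these lines can rule out one common cause (a sketch; the path and size are illustrative only):

qemu-img info /var/lib/libvirt/images/instance-00001959.img                  # on the source: note format and virtual size
qemu-img create -f qcow2 /var/lib/libvirt/images/instance-00001959.img 20G   # on the destination: pre-create with matching size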
2018 Jan 17
4
Could not destroy domain, current job is remoteDispatchConnectGetAllDomainStats
Hello libvirt-users list,
We've been hitting the same bug since version 3.4.0 (3.3.0 works OK).
So, we have a process that is permanently connected to libvirtd via a
socket; it collects stats, listens for events, and controls the VPSes.
When we try to 'shutdown' a number of VPSes, we often hit the bug: one
of the VPSes sticks in the 'in shutdown' state, no related 'qemu'
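Two read-only queries that at least show what libvirt thinks the wedged domain is doing (a sketch; 'vps1' is a placeholder name):

virsh domstate --reason vps1   # current state plus the reason libvirt recorded for it
virsh domjobinfo vps1          # whether a job is still active against the domain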
2018 Dec 04
3
concurrent migration of several domains rarely fails
Hi,
I have a two-node cluster with several domains as resources. During testing I tried several times to migrate some domains concurrently.
Usually it succeeded, but occasionally it failed. I found one clue in the log:
Dec 03 16:03:02 ha-idg-1 libvirtd[3252]: 2018-12-03 15:03:02.758+0000: 3252: error : virKeepAliveTimerInternal:143 : internal error: connection closed due to keepalive timeout
The
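The limits behind that keepalive timeout are tunable in /etc/libvirt/libvirtd.conf on both hosts; keepalive_interval and keepalive_count are existing parameters, and the values below are only a sketch of loosening them for concurrent migrations:

keepalive_interval = 5    # seconds between keepalive probes
keepalive_count = 10      # unanswered probes tolerated before the connection is dropped

libvirtd has to be restarted for the change to take effect.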
2018 Dec 07
3
Re: concurrent migration of several domains rarely fails
On 12/6/18 10:12 AM, Lentes, Bernd wrote:
>
>> Hi,
>>
>> I have a two-node cluster with several domains as resources. During testing I
>> tried several times to migrate some domains concurrently.
>> Usually it succeeded, but occasionally it failed. I found one clue in the log:
>>
>> Dec 03 16:03:02 ha-idg-1 libvirtd[3252]: 2018-12-03 15:03:02.758+0000: 3252:
2018 Aug 28
2
live migration via unix socket
Hey,
Over in KubeVirt we're investigating a use case where we'd like to perform
a live migration within a network namespace that does not provide libvirtd
with network access. In this scenario we would like to perform a live
migration by proxying the migration through a unix socket to a process in
another network namespace that does have network access. That external
process would live
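The proxy idea can be prototyped with socat before building anything permanent (a hedged sketch; the socket path, host, and port are placeholders):

# run in the namespace that has network access:
socat UNIX-LISTEN:/var/run/kubevirt/migrate.sock,fork TCP:destination-node:49152

Whatever the source QEMU writes into the unix socket is then carried to the destination by the external process.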
2018 Sep 10
2
Re: live migration via unix socket
On Wed, Aug 29, 2018 at 4:55 AM, Daniel P. Berrangé <berrange@redhat.com>
wrote:
> On Tue, Aug 28, 2018 at 05:07:18PM -0400, David Vossel wrote:
> > Hey,
> >
> > Over in KubeVirt we're investigating a use case where we'd like to
> perform
> > a live migration within a network namespace that does not provide
> libvirtd
> > with network access.
2012 Oct 31
3
error : virPidFileAcquirePath:345 : Failed to acquire pid file '/home/corey/.libvirt/libvirtd.pid': Resource temporarily unavailable
Hi all,
When I try to start libvirtd using "libvirtd -d", the error notification shown below appears:
error : virPidFileAcquirePath:345 : Failed to acquire pid file '/$HOME/.libvirt/libvirtd.pid': Resource temporarily unavailable
Using "libvirtd -v" shows: "libvirtd: error: Unable to obtain pidfile. Check /var/log/messages or run without --daemon for more
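'Resource temporarily unavailable' on that path usually means another libvirtd still holds the lock, so it is worth checking for a live process before removing anything (a sketch; the path follows the message above):

pgrep -a libvirtd                      # is another instance still running?
fuser -v $HOME/.libvirt/libvirtd.pid   # which process, if any, holds the pid file open
rm $HOME/.libvirt/libvirtd.pid         # only once nothing holds it: drop the stale file and retry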
2018 Sep 14
2
Re: live migration via unix socket
On Wed, Sep 12, 2018 at 6:59 AM, Martin Kletzander <mkletzan@redhat.com>
wrote:
> On Mon, Sep 10, 2018 at 02:38:48PM -0400, David Vossel wrote:
>
>> On Wed, Aug 29, 2018 at 4:55 AM, Daniel P. Berrangé <berrange@redhat.com>
>> wrote:
>>
>> On Tue, Aug 28, 2018 at 05:07:18PM -0400, David Vossel wrote:
>>> > Hey,
>>> >
>>>
2018 Sep 13
2
Re: live migration and config
On 13.09.2018 17:47, Jiri Denemark wrote:
> On Thu, Sep 13, 2018 at 10:35:09 +0400, Dmitry Melekhov wrote:
>> After some mistakes yesterday, we (me and my colleague) think it would
>> be wise for libvirt to check for the config file's existence on the remote side
> Which config file?
>
The VM config file, namely the qemu one.
We forgot to mount the shared storage (namely a gluster volume), on which we
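A pre-migration check on the destination would have caught this (a sketch; the host, mount point, and guest name are placeholders):

ssh destination-host mountpoint /var/lib/libvirt/images   # is the gluster volume actually mounted?
virsh dumpxml vm1 | grep 'source file'                    # the disk paths the guest expects to find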
2019 Jul 01
1
live migration issues after libvirtd restart
Hi, All
There is the following issue in the latest libvirt-4.5.0-10.el7_6.12 package,
which can prevent live VM migrations with web sockets enabled when libvirtd
was restarted prior to the migration.
Environment:
# uname -a
Linux inv-cp1-hv3-centos7 3.10.0-957.12.2.el7.x86_64 #1 SMP Tue May 14
21:24:32 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
# cat /etc/redhat-release
CentOS Linux release 7.6.1810
2014 May 08
1
Is tunnelled "managed direct" live migration actually possible?
I have three servers set up as in this diagram:
http://libvirt.org/migration.html#flowmanageddirect
My control application has separate libvirt connections to both the
source and destination nodes. I want to migrate entirely over those
connections, not involving any third (peer-to-peer) connection between
nodes (a third connection isn't possible because of firewalls and SSH
keys).
Is
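For contrast, the managed (non-p2p) form is driven entirely by the client, which opens both connections itself (a sketch with placeholder host and guest names):

virsh -c qemu+ssh://node-a/system migrate --live guest1 qemu+ssh://node-b/system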
2014 Sep 26
5
increased number of libvirt threads by starting transient guest domain - is it a bug?
hello,
If I start a transient guest domain via "virsh create abcd.xml", I see an additional libvirt thread and also some open files:
pstree -h `pgrep libvirtd`
libvirtd───11*[{libvirtd}]
libvirtd 3016 root 21w REG 253,0 6044 1052094 /var/log/libvirt/libxl/abcd.log
libvirtd 3016 root 22r FIFO 0,8 0t0 126124 pipe
libvirtd 3016 root
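The growth is easy to measure directly (a sketch; nlwp is the standard ps column for a process's thread count):

ps -o nlwp= -p $(pgrep -x libvirtd)   # libvirtd thread count before
virsh create abcd.xml
ps -o nlwp= -p $(pgrep -x libvirtd)   # and after starting the transient guest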
2015 Mar 19
3
Building libvirt 1.2.13 from source
Hello
I am trying to build libvirt 1.2.13 (latest) from source on an Ubuntu 14.04
64-bit box. After installing all the dependencies (libyajl, libdevmapper,
libpciaccess, libnl), I could finish the build and install. However,
invoking libvirtd throws this:
root@ubuntu:/home/hvishwanath/Downloads/libvirt-1.2.13# libvirtd
libvirtd: /usr/lib/libvirt-qemu.so.0: version `LIBVIRT_QEMU_1.2.3' not
2014 Sep 26
3
Re: [libvirt] increased number of libvirt threads by starting transient guest domain - is it a bug?
Hi Michal,
Thank you for your answer.
So if I understand that correctly: no matter whether I shut down or destroy the domain, and no matter whether it is a transient or a persistent VM, the thread should disappear, right?
In my case they still exist, even an hour after I destroyed the domain (and didn't start any new one).
I use libvirt-1.1.35 on Fedora 20, for information.
all the best
max
2015 Mar 24
5
libvirtd can't start
Hi experts,
libvirtd can't start on my server after a power-supply interruption;
the status is below:
[root@openstack3 libvirt]# service libvirtd status
Redirecting to /bin/systemctl status libvirtd.service
libvirtd.service - Virtualization daemon
Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled)
Active: failed (Result: signal) since Tue 2015-03-24
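With a 'failed (Result: signal)' status, the journal normally records the crash and any complaints about files corrupted by the power loss (standard systemd commands):

journalctl -u libvirtd.service -b    # log of the failed start attempts on this boot
systemctl restart libvirtd.service   # retry once the underlying problem is fixed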
2015 Apr 03
2
P2P live migration with non-shared storage: fails to connect to remote libvirt URI qemu+ssh
Migration without --p2p works just fine, i.e., the below works:
$ virsh migrate --verbose --copy-storage-all \
--live cvm1 qemu+ssh://kashyapc@devstack3/system
Migration: [100 %]
Result:
- On the source host, the guest is shut off
- On the destination host, the guest is live migrated successfully
Migration with "--p2p" fails, a simple test below:
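The excerpt cuts off before the test itself; in any case, because --p2p makes the source libvirtd open the connection to the destination, the ssh path worth testing is from the source host, as the user libvirtd runs as, typically root (a sketch reusing the URI from the post):

ssh kashyapc@devstack3 true                          # does non-interactive ssh from the source host work?
virsh -c qemu+ssh://kashyapc@devstack3/system list   # can a libvirt connection be opened the same way?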
2013 Oct 09
3
Re: failing connections w/ virt-manager
On 08.10.2013 14:46, Stefan G. Weichinger wrote:
>> Try enabling the flag, re-emerging the package, setting up the logs and
>> then reproducing it again. Check the logs and you should see why it's
>> disconnecting.
The docs say that libvirtd has to listen on the TCP port ... checked that:
# netstat -alnp | grep libv
tcp 0 0 0.0.0.0:16509 0.0.0.0:*
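For reference, the pair of settings that produce that TCP listener (listen_tcp and listen_tls are real libvirtd.conf options; the /etc/conf.d/libvirtd location for the daemon flag is an assumption based on the Gentoo context of this thread):

# /etc/libvirt/libvirtd.conf
listen_tcp = 1
listen_tls = 0

# /etc/conf.d/libvirtd
LIBVIRTD_OPTS="--listen"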