Displaying 20 results from an estimated 60000 matches similar to: "Protocols used for migration"
2018 Jan 05
4
VM migration upon shutdown in CentOS 7
Hi,
I have a two-node CentOS 7 setup which allows live VM migration between
the nodes. Live migration triggered from virsh works fine. I am using
GlusterFS to replicate the VM disk files.
Now I want the live migration to happen automatically when the host node is
rebooted/shut down/halted, and for this I have written a systemd service
unit [vPreShutdownHook.service] and placed the live
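For reference, a minimal sketch of such a pre-shutdown hook unit; the unit
name comes from the post, but the migration script path and the ordering
are assumptions:

  [Unit]
  Description=Live-migrate guests away before host shutdown
  Requires=libvirtd.service
  After=libvirtd.service

  [Service]
  Type=oneshot
  RemainAfterExit=yes
  ExecStart=/bin/true
  # hypothetical script that loops over running domains and calls
  # virsh migrate --live for each one
  ExecStop=/usr/local/bin/migrate-guests.sh
  TimeoutStopSec=600

  [Install]
  WantedBy=multi-user.target

Because the unit is ordered After=libvirtd.service, its ExecStop runs
before libvirtd is stopped during shutdown, which is the window in which
the migration must complete.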
2014 Jul 04
1
libvirt behind stunnel4
Hi,
I'm trying a setup where stunnel4 (listening for clients on port
16514) connects to an unencrypted libvirt backend (on port 16509). When I
point the virsh client at stunnel4, it hangs.
Looking via tshark:
1. virsh completes ssl handshake with stunnel4
2. stunnel4 completes tcp handshake with libvirt.
and that's all.
When connecting virsh client directly to libvirt (this time
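A minimal stunnel4 server-side config matching that description might look
like this (the service name and certificate path are assumptions; only the
two ports come from the post):

  cert = /etc/stunnel/stunnel.pem

  [libvirt]
  accept = 16514
  connect = 127.0.0.1:16509

Note that virsh's tls:// transport performs its own TLS handshake and
certificate checks, so terminating TLS in stunnel rather than in libvirtd
itself is not the standard deployment.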
2014 May 15
3
Invoking virsh console from Java
Hello,
I have a Java application from which I am invoking the "virsh console"
command to access the console of a VM. I invoke the virsh command using
ProcessBuilder.start(). However, I am unable to communicate with the stdin
of the VM's console through the OutputStream of the Process object.
When I invoke "virsh console" from within Java, I see the following
messages on
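For reference, a minimal sketch of driving the child process's stdin from
Java; the domain name and the input are placeholders, and note that "virsh
console" expects a controlling terminal, so input sent over a plain pipe
may be ignored:

  import java.io.OutputStream;
  import java.nio.charset.StandardCharsets;

  public class VirshConsole {
      public static void main(String[] args) throws Exception {
          // "mydomain" is a placeholder domain name
          ProcessBuilder pb = new ProcessBuilder("virsh", "console", "mydomain");
          pb.redirectErrorStream(true); // merge stderr into stdout
          Process p = pb.start();
          OutputStream stdin = p.getOutputStream();
          stdin.write("\n".getBytes(StandardCharsets.UTF_8)); // wake up the console
          stdin.flush();
          // p.getInputStream() should be drained on a separate thread
      }
  }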
2020 Aug 15
2
unable to migrate non shared storage in tunneled mode
Hey all,
With libvirt 6.5.0 and qemu 5.1.0 migration of non shared disks in
tunneled mode does not work for me:
virsh # migrate alpinelinux3.8 qemu+tls://ratchet.lan/system --live
--persistent --undefinesource --copy-storage-all --tunneled --p2p
error: internal error: qemu unexpectedly closed the monitor: Receiving
block device images
Error unknown block device
2020-08-15T21:21:48.995016Z
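Tunnelled migration predates QEMU's NBD-based storage migration and the
two do not combine well, so as a point of comparison the same command
without the tunnel (an assumption, not a confirmed fix for this report)
would be:

  virsh # migrate alpinelinux3.8 qemu+tls://ratchet.lan/system --live \
      --persistent --undefinesource --copy-storage-all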
2019 Feb 10
2
virsh migrate --copy-storage-inc
Hello,
I use libvirt on machines without shared storage. My VMs each have one
qcow2 disk, with the same name as the VM.
When I want to migrate a VM, I check if there is a qcow2 image on the
other host with that name. When that's not the case, I copy the image
using rsync first. If the image exists, I don't do that, and I think
that "--copy-storage-inc" will handle it.
But I
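A sketch of that workflow (hostnames and image path are assumptions):

  DEST=otherhost
  VM=myvm
  IMG=/var/lib/libvirt/images/$VM.qcow2

  # seed the destination with a full copy only if the disk is missing there
  ssh "$DEST" test -f "$IMG" || rsync -a "$IMG" "$DEST:$IMG"

  # --copy-storage-inc performs an incremental storage copy
  # on top of the pre-seeded image
  virsh migrate --live --copy-storage-inc "$VM" "qemu+ssh://$DEST/system"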
2013 Jan 13
1
A issue about KVM block migration
Hello everyone,
I have an issue with KVM block migration. Please give me some help.
1) I use the "virsh create" command to start a KVM VM on the source machine.
2) Then I use the "virsh migrate" command to start a block migration:
# virsh migrate --live --copy-storage-all --verbose win7 qemu+ssh://186.100.8.136/system
root@186.100.8.136's password:
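In that era of QEMU block migration, --copy-storage-all required the
destination to already have a disk image of the same size at the same
path; a sketch of pre-creating it (image path, format, and size are
assumptions):

  # illustrative path and size only
  ssh root@186.100.8.136 \
      qemu-img create -f qcow2 /var/lib/libvirt/images/win7.qcow2 20G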
2020 Aug 18
1
Re: unable to migrate non shared storage in tunneled mode
On Mon, Aug 17, 2020 at 3:24 AM Peter Krempa <pkrempa@redhat.com> wrote:
>
> On Sat, Aug 15, 2020 at 15:38:19 -0700, Vjaceslavs Klimovs wrote:
> > Hey all,
> > With libvirt 6.5.0 and qemu 5.1.0 migration of non shared disks in
> > tunneled mode does not work for me:
> >
> > virsh # migrate alpinelinux3.8 qemu+tls://ratchet.lan/system --live
> >
2014 Feb 26
1
Re: VM Creation Timestamp
On 02/26/2014 08:21 AM, Eric Blake wrote:
> On 02/26/2014 04:42 AM, Tony Atkinson wrote:
>> Hello,
>> Is there any way to query libvirt, ideally through the virsh CLI utility or
>> similar, to get a timestamp of when a VM was created?
>> Or to put it another way, a timestamp of when a domain's UUID was allocated.
>
> Sorry, there is no such information currently
2014 Sep 26
5
increase number of libvirt threads by starting transient guest domain - is it a bug?
hello,
if I start a transient guest domain via "virsh create abcd.xml" I see an additional libvirt thread and also some open files:
pstree -h `pgrep libvirtd`
libvirtd───11*[{libvirtd}]
libvirtd 3016 root 21w REG 253,0 6044 1052094 /var/log/libvirt/libxl/abcd.log
libvirtd 3016 root 22r FIFO 0,8 0t0 126124 pipe
libvirtd 3016 root
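The open-file listing above looks like lsof output; it can be reproduced
with (an assumption about how the original listing was made):

  lsof -p $(pgrep libvirtd)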
2011 Jul 18
1
cannot perform tunnelled migration without using peer2peer flag
Dear All
I tried to migrate a KVM guest OS to another host and it failed
server: ubuntu 11.04 server
virsh:migrate --live --tunnelled vm1 qemu+ssh://192.168.10.3/system
error: Requested operation is not valid: cannot perform
tunnelled migration without using peer2peer flag
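As the error says, --tunnelled requires --p2p (peer-to-peer control of the
migration); the corrected invocation would be:

  virsh migrate --live --p2p --tunnelled vm1 qemu+ssh://192.168.10.3/system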
2017 Jan 07
2
Regarding Migration Code
Greetings,
I was trying to understand the flow of the migration code in libvirt and
have a few doubts:
1) libvirt talks to QEMU/KVM guests via the QEMU monitor API. So overall, in
order to manage QEMU/KVM guests I can either use libvirt (or tools
based on libvirt, like virsh) or the QEMU monitor directly. Is that so?
2) Since libvirt is hypervisor-neutral, the actual migration
algorithm (precopy or postcopy) is present in the
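On point 1: yes, and virsh can even pass monitor commands straight through
to QEMU; for example (the domain name is a placeholder):

  virsh qemu-monitor-command --hmp mydomain 'info migrate'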
2023 Mar 21
1
virsh domifaddr --domain domname --source {lease, arp} not showing results with ipv6
On 3/19/23 20:21, Natxo Asenjo wrote:
> hi,
>
> I have configured a routed network on my laptop with an IPv6 subnet and
> dnsmasq is handing out IPv6 addresses to my VMs and it works really well,
> but finding out which IPs have been used is not as easy as with IPv4.
>
> [root@lenovo ~]# virsh domifaddr --domain wec --source lease
>  Name       MAC address
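If the guest runs the qemu-guest-agent, querying the agent is often the
most reliable way to list IPv6 addresses (a general suggestion, not
necessarily the resolution of this thread):

  virsh domifaddr --domain wec --source agent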
2013 Nov 13
1
Migration function is not supported by the connection driver: virDomainMigrate2
Greetings;
I'm running Fedora 19, Xen 4.2.3, libvirt 1.0.5.6. I have two identical
servers x1 & x2. I've read http://libvirt.org/migration.html & I've
created certificates for TLS according to
http://wiki.libvirt.org/page/TLSSetup
I can do this on both servers, and from a third admin server:
# virsh -c xen+tls://x1.localdomain hostname
x1.localdomain
and...
# virsh -c
2016 Jun 01
2
Migration problem - takes 5 minutes to start moving the memory
Hi,
I'm facing a strange issue while migrating from one hypervisor to another: the migration takes forever to start moving the memory.
The VM had no workload whatsoever, just a basic Ubuntu image. The versions on the hypervisors are: libvirt 1.2.21, qemu 1.2.3
Command to launch the migration:
virsh migrate --verbose --live --abort-on-error --tunnelled --p2p --auto-converge
2023 Mar 21
1
virsh domifaddr --domain domname --source {lease, arp} not showing results with ipv6
hi,
On Tue, Mar 21, 2023 at 3:40 PM Michal Privoznik <mprivozn@redhat.com>
wrote:
> On 3/19/23 20:21, Natxo Asenjo wrote:
> > hi,
> >
> > I have configured a routed network on my laptop with an IPv6 subnet and
> > dnsmasq is handing out IPv6 addresses to my VMs and it works really well,
> > but finding out which IPs have been used is not as easy as with
2023 Mar 19
1
virsh domifaddr --domain domname --source {lease, arp} not showing results with ipv6
hi,
I have configured a routed network on my laptop with an IPv6 subnet and
dnsmasq is handing out IPv6 addresses to my VMs and it works really well,
but finding out which IPs have been used is not as easy as with IPv4.
[root@lenovo ~]# virsh domifaddr --domain wec --source lease
 Name       MAC address          Protocol     Address
2012 Aug 08
1
migration with non-root user
Hi,
I had a VM running on c3rh2 under 'vmc' user:
[vmc@c3rh2 .ssh]$ virsh list --all
Id Name State
----------------------------------------------------
1 vs2relocate_nonRoot running
After the virsh migration command, "virsh migrate --live --unsafe
vs2relocate_nonRoot qemu+ssh://vmc@c3rh1.kirkland.ibm.com/session", this VM
2012 Jun 08
1
virsh: migration job: unexpectedly failed
Hi, I am using virsh to test the migration command,
on server.example.com:
# rpm -qa|grep libvirt
libvirt-0.9.10-21.el6.x86_64
libvirt-python-0.9.10-21.el6.x86_64
libvirt-client-0.9.10-21.el6.x86_64
# virsh version
Compiled against library: libvir 0.9.10
Using library: libvir 0.9.10
Using API: QEMU 0.9.10
Running hypervisor: QEMU 0.14.1
# cat /etc/libvirt/libvirt.conf
uri_aliases = [
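The snippet cuts off inside uri_aliases; for reference, the alias syntax
in /etc/libvirt/libvirt.conf looks like this (the alias and URI below are
illustrative, not the poster's values):

  uri_aliases = [
    # illustrative alias only
    "hail=qemu+ssh://root@hail.cloud.example.com/system",
  ]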
2014 Jan 08
2
Canceling a live migration via virsh? (QEMU/KVM)
I am using QEMU/KVM, using Live Migrations like this:
virsh migrate --live ${name} qemu+ssh://${DESTINATION}/system
My question: running this command makes it block in the foreground. Is
there a way for it to return immediately, so I can just poll for the
migration status? Also, is there a way to _cancel_ a migration? I see
the --timeout option; however, if a given timeout is reached I would
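virsh has job-control commands that fit this use case; a sketch (the
domain name is a placeholder):

  # poll progress of the migration job from another shell
  virsh domjobinfo mydomain

  # cancel the in-flight migration
  virsh domjobabort mydomain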
2018 Dec 07
3
Re: concurrent migration of several domains rarely fails
On 12/6/18 10:12 AM, Lentes, Bernd wrote:
>
>> Hi,
>>
>> I have a two-node cluster with several domains as resources. During testing I
>> tried several times to migrate some domains concurrently.
>> Usually it succeeded, but rarely it failed. I found one clue in the log:
>>
>> Dec 03 16:03:02 ha-idg-1 libvirtd[3252]: 2018-12-03 15:03:02.758+0000: 3252: