Displaying 20 results from an estimated 10000 matches similar to: "vm live storage migration failure."

2015 Feb 11
2
Re: [libvirt] vm live storage migration with snapshots
Hi Eric, Please see below: On Wed, Feb 11, 2015 at 3:12 PM, Eric Blake <eblake@redhat.com> wrote: > On 02/11/2015 02:07 PM, Edward Young wrote: > >>> What if this vm has a number of disk-only external snapshots? In the > >>> current version, how can I live-migrate this vm? > >> > >> Are the snapshots based on shared storage, or local-only
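
For context on the command under discussion: in the simple case with no snapshots, libvirt can live-migrate a guest together with its local storage. A minimal sketch, assuming a hypothetical guest vm1 and destination host dest; guests with external snapshot chains need extra handling, which is what this thread explores.

  # Live-migrate vm1 and copy its full disk contents to the destination
  # (--copy-storage-inc instead copies incrementally into a pre-created image)
  virsh migrate --live --copy-storage-all vm1 qemu+ssh://dest/system
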
2015 Feb 11
2
Re: [libvirt] vm live storage migration with snapshots
Hi Eric, Thanks for your reply! I have follow-up questions below. On Wed, Feb 11, 2015 at 11:52 AM, Eric Blake <eblake@redhat.com> wrote: > On 02/11/2015 10:08 AM, Edward Young wrote: > > Hi all, > > [probably didn't need to cross-post to quite that wide of an audience, > oh well] > > > > > I'm investigating the ways to improve the live
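
As background on the mechanism being investigated here: libvirt exposes live disk copying through virsh blockcopy. A minimal sketch, assuming a guest vm1 with disk vda and a hypothetical target path; note that older libvirt releases only allowed blockcopy on transient domains.

  # Mirror vda to a new image, wait until source and copy are in sync,
  # then pivot the guest onto the copy
  virsh blockcopy vm1 vda /var/lib/libvirt/images/vm1-new.qcow2 \
      --wait --verbose --pivot
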
2020 May 14
0
Re: Storage cleaning
Thank you, that's it!

virsh vol-list storage
  VM1   /dev/storage/VM1.img
  VM2   /dev/storage/VM2.img
  VM3   /dev/storage/VM3.img [dead]
  VM4   /dev/storage/VM4.img [dead]

One last question (I don't want to make a big mistake ...): are

  virsh vol-delete VM3
  virsh vol-delete VM4

the right commands to get rid of the offending ones? On 14.05.2020 at 19:10, Alvin Starr wrote: > >
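
For reference, vol-delete is the matching command; a minimal sketch of a confirm-then-delete sequence, naming the pool explicitly (pool and volume names as shown above; vol-delete permanently destroys the volume's data).

  # Double-check which volumes the pool still holds
  virsh vol-list storage
  # Remove the stale volumes from that pool
  virsh vol-delete VM3 --pool storage
  virsh vol-delete VM4 --pool storage
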
2020 May 14
0
Re: Storage cleaning
virsh list --all
  15    VM1    running
  16    VM2    running

ps ax | grep virt
  14281 ?  Sl  1170:30  /usr/libexec/qemu-kvm -name VM1 [...]
  14384 ?  Sl   376:45  /usr/libexec/qemu-kvm -name VM2 [...]

On 14.05.2020 at 17:45, Alvin Starr wrote: > List your storage pool to ensure that they have been deleted from the > pool. > If they are not there anymore then check to
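
A short sketch of the pool check suggested here, assuming the pool name storage from earlier in the thread.

  # Re-scan the pool so libvirt notices on-disk changes, then list it
  virsh pool-refresh storage
  virsh vol-list storage
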
2015 Feb 11
0
Re: [libvirt] vm live storage migration with snapshots
[dropping multiple lists; let's just use libvirt-users] On 02/11/2015 02:45 PM, Edward Young wrote: > I performed a simple test, but it failed. > > In the source, I create: base <- mid <- active (2 snapshots; the active > one is the current one) > In order to migrate this vm to the destination, I manually copied both base > and mid to the destination, and put them in the
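
When qcow2 files are copied by hand like this, the backing-file paths recorded in their headers must still resolve on the destination. A minimal sketch of how the chain could be checked and repointed with qemu-img; the paths are hypothetical.

  # Show the full backing chain of the active image
  qemu-img info --backing-chain /var/lib/libvirt/images/active.qcow2
  # Repoint 'active' at the copied 'mid' without rewriting data
  # (-u updates only the header and assumes the contents already match)
  qemu-img rebase -u -b /var/lib/libvirt/images/mid.qcow2 -F qcow2 \
      /var/lib/libvirt/images/active.qcow2
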
2013 Dec 03
0
cputune shares with multiple cpu and pinning
Hi, I have found that CPU-time partitioning based on cpu_shares weights is not very intuitive. On RHEL64, I deployed two qemu/kvm VMs: VM1 with 1 vcpu and 512 cpu_shares, VM2 with 2 vcpus and 1024 cpu_shares. I pinned their vcpus to specific host pcpus: VM1 vcpu 0 to host pcpu1; VM2 vcpu 0 to host pcpu1, VM2 vcpu 1 to host pcpu2. I executed inside the VMs a simple process that consumes all
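
A sketch of the virsh commands that would produce the setup described in this post (VM names as given; --live assumes the domains are running, and 1024 is the conventional default weight).

  # Pin VM1's vcpu and VM2's two vcpus to the host pcpus described
  virsh vcpupin VM1 0 1
  virsh vcpupin VM2 0 1
  virsh vcpupin VM2 1 2
  # Set the relative cpu_shares weights
  virsh schedinfo VM1 --set cpu_shares=512 --live
  virsh schedinfo VM2 --set cpu_shares=1024 --live
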
2015 Apr 22
0
Zerocopy VM-to-VM networking using virtio-net
On Wed, 22 Apr 2015 18:01:38 +0100 Stefan Hajnoczi <stefanha at redhat.com> wrote: > [It may be necessary to remove virtio-dev at lists.oasis-open.org from CC > if you are a non-TC member.] > > Hi, > Some modern networking applications bypass the kernel network stack so > that rx/tx rings and DMA buffers can be directly mapped. This is > typical in DPDK applications
2012 Mar 22
1
Does libvirt check MCS labels during hot-add disk image ?
Libvirt doesn't check security labels when hot-adding disk images. It even accepts the addition of disk images belonging to other guests running on the host. Steps followed to create this scenario: Started two VMs with the following security configurations: vm1: <seclabel type='dynamic' model='selinux' relabel='yes'>
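
A sketch of how the svirt/MCS labels in question can be inspected from the host; the image paths are hypothetical.

  # Show the SELinux context (including the per-guest MCS category pair)
  # assigned to each image file
  ls -Z /var/lib/libvirt/images/vm1.img /var/lib/libvirt/images/vm2.img
  # Show the labels of the running qemu processes for comparison
  ps -eZ | grep qemu
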
2014 Feb 25
0
[PATCH net] vhost: net: switch to use data copy if pending DMAs exceed the limit
On Tue, Feb 25, 2014 at 02:53:58PM +0800, Jason Wang wrote: > We used to stop the handling of tx when the number of pending DMAs > exceeds VHOST_MAX_PEND. This is used to reduce the memory occupation > of both host and guest. But it was too aggressive in some cases, since > any delay or blocking of a single packet may delay or block the guest > transmission. Consider the following
2014 Mar 17
0
[PATCH net V2] vhost: net: switch to use data copy if pending DMAs exceed the limit
On 03/13/2014 09:28 AM, Jason Wang wrote: > On 03/10/2014 04:03 PM, Michael S. Tsirkin wrote: >> On Fri, Mar 07, 2014 at 01:28:27PM +0800, Jason Wang wrote: >>>> We used to stop the handling of tx when the number of pending DMAs >>>> exceeds VHOST_MAX_PEND. This is used to reduce the memory occupation >>>> of both host and guest. But it was too
2014 Feb 26
0
[PATCH net] vhost: net: switch to use data copy if pending DMAs exceed the limit
On Wed, Feb 26, 2014 at 03:11:21PM +0800, Jason Wang wrote: > On 02/26/2014 02:32 PM, Qin Chuanyu wrote: > >On 2014/2/26 13:53, Jason Wang wrote: > >>On 02/25/2014 09:57 PM, Michael S. Tsirkin wrote: > >>>On Tue, Feb 25, 2014 at 02:53:58PM +0800, Jason Wang wrote: > >>>>We used to stop the handling of tx when the number of pending DMAs >
2015 Sep 01
0
rfc: vhost user enhancements for vm2vm communication
On Tue, Sep 01, 2015 at 09:35:21AM +0200, Jan Kiszka wrote: > On 2015-08-31 16:11, Michael S. Tsirkin wrote: > > Hello! > > During the KVM forum, we discussed supporting virtio on top > > of ivshmem. > > No, not on top of ivshmem. On top of shared memory. Our model is > different from the simplistic ivshmem. > > > I have considered it, and came up with an
2014 Mar 10
0
[PATCH net V2] vhost: net: switch to use data copy if pending DMAs exceed the limit
On Fri, Mar 07, 2014 at 01:28:27PM +0800, Jason Wang wrote: > We used to stop the handling of tx when the number of pending DMAs > exceeds VHOST_MAX_PEND. This is used to reduce the memory occupation > of both host and guest. But it was too aggressive in some cases, since > any delay or blocking of a single packet may delay or block the guest > transmission. Consider the following
2014 Feb 26
0
[PATCH net] vhost: net: switch to use data copy if pending DMAs exceed the limit
On 2014/2/26 13:53, Jason Wang wrote: > On 02/25/2014 09:57 PM, Michael S. Tsirkin wrote: >> On Tue, Feb 25, 2014 at 02:53:58PM +0800, Jason Wang wrote: >>> We used to stop the handling of tx when the number of pending DMAs >>> exceeds VHOST_MAX_PEND. This is used to reduce the memory occupation >>> of both host and guest. But it was too aggressive in some
2015 Apr 22
1
Zerocopy VM-to-VM networking using virtio-net
On Wed, Apr 22, 2015 at 6:46 PM, Cornelia Huck <cornelia.huck at de.ibm.com> wrote: > On Wed, 22 Apr 2015 18:01:38 +0100 > Stefan Hajnoczi <stefanha at redhat.com> wrote: > >> [It may be necessary to remove virtio-dev at lists.oasis-open.org from CC >> if you are a non-TC member.] >> >> Hi, >> Some modern networking applications bypass the kernel
2015 Apr 27
0
[virtio-dev] Zerocopy VM-to-VM networking using virtio-net
On 2015-04-27 at 15:01, Stefan Hajnoczi wrote: > On Mon, Apr 27, 2015 at 1:55 PM, Jan Kiszka <jan.kiszka at siemens.com> wrote: >> On 2015-04-27 at 14:35, Jan Kiszka wrote: >>> On 2015-04-27 at 12:17, Stefan Hajnoczi wrote: >>>> On Sun, Apr 26, 2015 at 2:24 PM, Luke Gorrie <luke at snabb.co> wrote: >>>>> On 24 April 2015 at 15:22, Stefan
2015 Apr 27
0
[virtio-dev] Zerocopy VM-to-VM networking using virtio-net
On 2015-04-27 at 14:35, Jan Kiszka wrote: > On 2015-04-27 at 12:17, Stefan Hajnoczi wrote: >> On Sun, Apr 26, 2015 at 2:24 PM, Luke Gorrie <luke at snabb.co> wrote: >>> On 24 April 2015 at 15:22, Stefan Hajnoczi <stefanha at gmail.com> wrote: >>>> >>>> The motivation for making VM-to-VM fast is that while software >>>> switches on
2011 Mar 14
0
cgroups limitations on Virtual machines
I have 2 VMs launched by 'virsh create <xml file>'. Both VMs get 2 vcpus (out of the host's 2 cores total). I then try to bias their CPU-cycle quota by manipulating cpu_shares (virsh schedinfo --set cpu_shares=<value> vm1/2) so that VM1 will get 3 times the CPU cycles VM2 gets (e.g. VM1 cpu_shares = 150, VM2 cpu_shares = 50). There are no other VMs defined or
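
A sketch of how the effective weights could be verified from the host's cgroup v1 filesystem; the mount point and per-VM directory layout vary by distro and libvirt version, hence the find.

  # Locate and print the cpu.shares values libvirt wrote for each VM
  find /sys/fs/cgroup/cpu -name cpu.shares -path '*vm1*' -exec cat {} \;
  find /sys/fs/cgroup/cpu -name cpu.shares -path '*vm2*' -exec cat {} \;
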
2015 Apr 27
0
[virtio-dev] Zerocopy VM-to-VM networking using virtio-net
On Mon, Apr 27, 2015 at 1:35 PM, Jan Kiszka <jan.kiszka at siemens.com> wrote: > Am 2015-04-27 um 12:17 schrieb Stefan Hajnoczi: >> On Sun, Apr 26, 2015 at 2:24 PM, Luke Gorrie <luke at snabb.co> wrote: >>> On 24 April 2015 at 15:22, Stefan Hajnoczi <stefanha at gmail.com> wrote: >>>> >>>> The motivation for making VM-to-VM fast is that