similar to: [RFC for qemu v4 2/2] virtio_balloon: Add dcvq to deflate continuous pages

Displaying 12 results from an estimated 12 matches similar to: "[RFC for qemu v4 2/2] virtio_balloon: Add dcvq to deflate continuous pages"

2020 Mar 12
0
[RFC for QEMU] virtio-balloon: Add option thp-order to set VIRTIO_BALLOON_F_THP_ORDER
On Thu, Mar 12, 2020 at 03:49:55PM +0800, Hui Zhu wrote: > If the guest kernel has many fragmented pages, using virtio_balloon > will split THPs in QEMU when it calls MADV_DONTNEED madvise to release > the balloon pages. > Setting the option thp-order to on will enable the flag VIRTIO_BALLOON_F_THP_ORDER. > It will set the balloon page size to the THP size to handle the THP split issue. > >
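To make the split issue concrete, here is a minimal user-space sketch (not from the patch; the 2 MiB THP size and the alignment handling are assumptions) showing why releasing balloon pages 4 KiB at a time splits a THP while a THP-aligned release does not:

#include <stdint.h>
#include <sys/mman.h>

#define THP_SIZE (2UL * 1024 * 1024)   /* assumed huge page size */

int main(void)
{
    /* Over-allocate so we can pick a 2 MiB-aligned address inside. */
    char *raw = mmap(NULL, 2 * THP_SIZE, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (raw == MAP_FAILED)
        return 1;
    char *buf = (char *)(((uintptr_t)raw + THP_SIZE - 1) & ~(THP_SIZE - 1));

    madvise(buf, THP_SIZE, MADV_HUGEPAGE);   /* ask for THP backing */
    buf[0] = 1;                              /* fault in; may become one 2 MiB THP */

    /* 4 KiB balloon pages: releasing a single 4 KiB page inside the huge
     * page forces the kernel to split the THP before zapping it. */
    madvise(buf, 4096, MADV_DONTNEED);

    /* THP-order balloon pages: releasing the full, aligned 2 MiB extent in
     * one call needs no split, which is what the option is after. */
    madvise(buf, THP_SIZE, MADV_DONTNEED);
    return 0;
}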
2020 May 13
0
[RFC v3 for QEMU] virtio-balloon: Add option cont-pages to set VIRTIO_BALLOON_VQ_INFLATE_CONT
On 12.05.20 11:41, Hui Zhu wrote: This description needs an overhaul; it's hard to parse. > If the guest kernel has many fragmented pages, using virtio_balloon > will split THPs in QEMU when it calls MADV_DONTNEED madvise to release > the balloon pages. This is very unclear and confusing. You will *always* split THPs when inflating 4k pages and there are THPs around. This is
2016 Mar 03
0
[RFC qemu 2/4] virtio-balloon: Add a new feature to balloon device
On Thu, Mar 03, 2016 at 06:44:26PM +0800, Liang Li wrote: > Extend the virtio balloon device to support a new feature. This > new feature can help get the guest's free-page information, which > can be used for live migration optimization. > > Signed-off-by: Liang Li <liang.z.li at intel.com> I don't understand why we need a new interface. Balloon already sends free
2016 Mar 03
2
[RFC qemu 2/4] virtio-balloon: Add a new feature to balloon device
Extend the virtio balloon device to support a new feature. This new feature can help get the guest's free-page information, which can be used for live migration optimization. Signed-off-by: Liang Li <liang.z.li at intel.com> --- balloon.c | 30 ++++++++- hw/virtio/virtio-balloon.c | 81 ++++++++++++++++++++++++-
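As a rough illustration of the data structure such a feature implies (a sketch under assumed names, not the patch's actual interface): one bit per guest page frame, set when the guest reports the frame free:

#include <stdint.h>
#include <limits.h>

#define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)

/* Hypothetical free-page bitmap: bit n covers guest page frame number n. */
typedef struct {
    unsigned long *bits;
    uint64_t nbits;
} FreePageBitmap;

static inline void free_page_mark(FreePageBitmap *map, uint64_t pfn)
{
    map->bits[pfn / BITS_PER_LONG] |= 1UL << (pfn % BITS_PER_LONG);
}

static inline int free_page_test(const FreePageBitmap *map, uint64_t pfn)
{
    return !!(map->bits[pfn / BITS_PER_LONG] & (1UL << (pfn % BITS_PER_LONG)));
}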
2016 Mar 04
0
[Qemu-devel] [RFC qemu 0/4] A PV solution for live migration optimization
On Fri, Mar 04, 2016 at 02:26:49PM +0000, Li, Liang Z wrote: > > Subject: Re: [Qemu-devel] [RFC qemu 0/4] A PV solution for live migration > > optimization > > > > On Fri, Mar 04, 2016 at 09:08:44AM +0000, Li, Liang Z wrote: > > > > On Fri, Mar 04, 2016 at 01:52:53AM +0000, Li, Liang Z wrote: > > > > > > I wonder if it would be possible to
2016 Mar 05
0
[Qemu-devel] [RFC qemu 0/4] A PV solution for live migration optimization
On Fri, Mar 04, 2016 at 03:49:37PM +0000, Li, Liang Z wrote: > > > > > > > Only detecting the unmapped/zero-mapped pages is not enough. > > > > Consider > > > > > > the > > > > > > > situation like case 2, it can't achieve the same result. > > > > > > > > > > > > Your case 2 doesn't
2016 Mar 04
2
[Qemu-devel] [RFC qemu 0/4] A PV solution for live migration optimization
> > > > > > Only detecting the unmapped/zero-mapped pages is not enough. > > > Consider > > > > > the > > > > > > situation like case 2, it can't achieve the same result. > > > > > > > > > > Your case 2 doesn't exist in the real world. If people could > > > > > stop their main memory
2016 Mar 04
5
[Qemu-devel] [RFC qemu 0/4] A PV solution for live migration optimization
> Subject: Re: [Qemu-devel] [RFC qemu 0/4] A PV solution for live migration > optimization > > On Fri, Mar 04, 2016 at 09:08:44AM +0000, Li, Liang Z wrote: > > > On Fri, Mar 04, 2016 at 01:52:53AM +0000, Li, Liang Z wrote: > > > > > I wonder if it would be possible to avoid the kernel changes > > > > > by parsing /proc/self/pagemap - if that
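For readers following the /proc/self/pagemap suggestion: the file exposes one little-endian u64 per virtual page (bit 63 = present, bit 62 = swapped, per Documentation/admin-guide/mm/pagemap.rst), so a user-space probe for a never-populated page looks roughly like this sketch:

#include <stdint.h>
#include <unistd.h>
#include <fcntl.h>

/* Returns 1 if the page containing vaddr is backed by RAM or swap,
 * 0 if it was never populated, -1 on error.
 * pagemap_fd is an open fd for "/proc/self/pagemap" (O_RDONLY). */
static int page_is_populated(int pagemap_fd, uintptr_t vaddr)
{
    uint64_t entry;
    off_t off = (off_t)(vaddr / sysconf(_SC_PAGESIZE)) * sizeof(entry);

    if (pread(pagemap_fd, &entry, sizeof(entry), off) != sizeof(entry))
        return -1;
    return !!(entry & (3ULL << 62));   /* bit 63: present, bit 62: swapped */
}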
2015 Apr 12
2
[PATCH 1/2] virtio_balloon: header update for virtio 1
Add the modern header. This patch is for the virtio 1.0 branch; it doesn't apply to master. Signed-off-by: Michael S. Tsirkin <mst at redhat.com> --- include/hw/virtio/virtio-balloon.h | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/include/hw/virtio/virtio-balloon.h b/include/hw/virtio/virtio-balloon.h index f863bfe..79eca67 100644 --- a/include/hw/virtio/virtio-balloon.h +++
2016 Mar 03
16
[RFC qemu 0/4] A PV solution for live migration optimization
The current QEMU live migration implementation marks all of the guest's RAM pages as dirty in the RAM bulk stage; all these pages will be processed, and that takes quite a lot of CPU cycles. From the guest's point of view, it doesn't care about the content of free pages. We can make use of this fact and skip processing the free pages in the RAM bulk stage, which can save a lot of CPU cycles
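The proposed optimization boils down to masking the migration dirty bitmap with a guest-provided free-page bitmap before the bulk stage starts; a minimal sketch (the two bitmaps and their word count are assumptions, not the series' actual code):

#include <stdint.h>

/* Clear the dirty bit of every page the guest reports as free, so the
 * bulk stage neither reads nor transmits those pages. */
static void skip_free_pages(unsigned long *dirty_bitmap,
                            const unsigned long *free_bitmap,
                            uint64_t nwords)
{
    for (uint64_t i = 0; i < nwords; i++) {
        dirty_bitmap[i] &= ~free_bitmap[i];
    }
}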
2023 Sep 09
0
[PATCH RFC v2 2/4] vdpa/mlx5: implement .reset_map driver op
Today, mlx5_vdpa starts by preallocating a 1:1 DMA mapping at device creation time; this 1:1 mapping is implicitly destroyed when the first .set_map call is invoked. Every time the .reset callback is invoked, any mapping left behind is dropped and then reset back to the initial 1:1 DMA mapping. In order to reduce excessive memory mapping cost during live migration, it is
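In rough standalone form, the lifecycle described reads like the sketch below (the device struct and helpers are hypothetical stand-ins, not the actual mlx5_vdpa code):

/* Toy device state standing in for the driver's mapping bookkeeping. */
struct toy_vdpa_dev { int have_1to1; };

static void drop_mappings(struct toy_vdpa_dev *d, unsigned int asid)
{ (void)asid; d->have_1to1 = 0; }                 /* hypothetical helper */

static int map_1to1(struct toy_vdpa_dev *d, unsigned int asid)
{ (void)asid; d->have_1to1 = 1; return 0; }       /* hypothetical helper */

/* .reset_map semantics as described: on reset, drop whatever mappings the
 * guest left behind and re-establish the initial 1:1 DMA mapping. */
static int toy_reset_map(struct toy_vdpa_dev *d, unsigned int asid)
{
    drop_mappings(d, asid);
    return map_1to1(d, asid);
}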
2009 Jul 23
1
[PATCH server] changes required for fedora rawhide inclusion.
Signed-off-by: Scott Seago <sseago at redhat.com> --- AUTHORS | 17 ++++++ README | 10 +++ conf/ovirt-agent | 12 ++++ conf/ovirt-db-omatic | 12 ++++ conf/ovirt-host-browser | 12 ++++