search for: dvq

Displaying 20 results from an estimated 20 matches for "dvq".

2020 Jul 16
0
[RFC for qemu v4 2/2] virtio_balloon: Add dcvq to deflate continuous pages
...n_handle_output(VirtIODevice *vdev, VirtQueue *vq)
>             balloon_inflate_page(s, section.mr,
>                                  section.offset_within_region,
>                                  psize, &pbp);
> -        } else if (vq == s->dvq) {
> -            balloon_deflate_page(s, section.mr, section.offset_within_region);
> +        } else if (vq == s->dvq || vq == s->dcvq) {
> +            balloon_deflate_page(s, section.mr, section.offset_within_region,
> +...
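For orientation, the dispatch this diff touches reduces to the branch below. The extra argument the RFC passes to balloon_deflate_page() is cut off in the snippet, so the final parameter here is an assumption, not the patch's actual code:

/* Sketch of the queue dispatch inside virtio_balloon_handle_output()
 * after this RFC: dcvq reuses the deflate path, flagging the request
 * as a continuous (multi-page) range.  The last argument is assumed. */
if (vq == s->ivq) {
    balloon_inflate_page(s, section.mr,
                         section.offset_within_region, psize, &pbp);
} else if (vq == s->dvq || vq == s->dcvq) {
    balloon_deflate_page(s, section.mr, section.offset_within_region,
                         vq == s->dcvq /* continuous deflate? assumption */);
} else {
    g_assert_not_reached();
}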
2016 Mar 03
0
[RFC qemu 2/4] virtio-balloon: Add a new feature to balloon device
...if (ret < 0) {
>         error_setg(errp, "Only one balloon device is supported");
> @@ -440,6 +518,7 @@ static void virtio_balloon_device_realize(DeviceState *dev, Error **errp)
>      s->ivq = virtio_add_queue(vdev, 128, virtio_balloon_handle_output);
>      s->dvq = virtio_add_queue(vdev, 128, virtio_balloon_handle_output);
>      s->svq = virtio_add_queue(vdev, 128, virtio_balloon_receive_stats);
> +    s->fvq = virtio_add_queue(vdev, 128, virtio_balloon_get_free_pages);
>
>      reset_stats(s);
>
> diff --git a/include/hw/virtio...
2020 Mar 12
0
[RFC for QEMU] virtio-balloon: Add option thp-order to set VIRTIO_BALLOON_F_THP_ORDER
...pa);
> -        if (!qemu_balloon_is_inhibited()) {
> -            if (vq == s->ivq) {
> -                balloon_inflate_page(s, section.mr,
> -                                     section.offset_within_region, &pbp);
> -            } else if (vq == s->dvq) {
> -                balloon_deflate_page(s, section.mr, section.offset_within_region);
> -            } else {
> -                g_assert_not_reached();
> +        trace_virtio_balloon_handle_output(memory_region_name(section.mr),
> +...
2016 Mar 03
2
[RFC qemu 2/4] virtio-balloon: Add a new feature to balloon device
..._free_pages, s);
    if (ret < 0) {
        error_setg(errp, "Only one balloon device is supported");
@@ -440,6 +518,7 @@ static void virtio_balloon_device_realize(DeviceState *dev, Error **errp)
     s->ivq = virtio_add_queue(vdev, 128, virtio_balloon_handle_output);
     s->dvq = virtio_add_queue(vdev, 128, virtio_balloon_handle_output);
     s->svq = virtio_add_queue(vdev, 128, virtio_balloon_receive_stats);
+    s->fvq = virtio_add_queue(vdev, 128, virtio_balloon_get_free_pages);

     reset_stats(s);

diff --git a/include/hw/virtio/virtio-balloon.h b/include/hw...
2016 Mar 03
16
[RFC qemu 0/4] A PV solution for live migration optimization
The current QEMU live migration implementation marks all of the guest's RAM pages as dirty in the ram bulk stage; all of these pages will be processed, and that takes quite a lot of CPU cycles. From the guest's point of view, it doesn't care about the content of free pages. We can make use of this fact and skip processing the free pages in the ram bulk stage, which can save a lot of CPU cycles
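The optimization described in this cover letter amounts to masking guest-reported free pages out of the migration dirty bitmap before the bulk stage. A minimal, self-contained sketch of that idea; the names are illustrative, not the QEMU API:

#include <stddef.h>

#define BITS_PER_LONG (8 * sizeof(unsigned long))

static void skip_free_pages(unsigned long *dirty_bitmap,
                            const unsigned long *free_page_bitmap,
                            size_t nr_pages)
{
    size_t nr_longs = (nr_pages + BITS_PER_LONG - 1) / BITS_PER_LONG;

    for (size_t i = 0; i < nr_longs; i++) {
        /* A page that is both dirty and free need not be migrated:
         * the guest does not care about its content. */
        dirty_bitmap[i] &= ~free_page_bitmap[i];
    }
}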
2016 Mar 04
2
[Qemu-devel] [RFC qemu 0/4] A PV solution for live migration optimization
..._region_get_ram_ptr is bending the rules a bit, but
>        should be OK because we only want a single page. */
>     addr = section.offset_within_region;
>     balloon_page(memory_region_get_ram_ptr(section.mr) + addr,
>                  !!(vq == s->dvq));
>     memory_region_unref(section.mr);
> }
>
> so all that happens when we get a page is balloon_page.
> and
>
> static void balloon_page(void *addr, int deflate)
> {
> #if defined(__linux__)
>     if (!qemu_balloon_is_inhibited() && (!kvm_enabled()...
2016 Mar 04
0
[Qemu-devel] [RFC qemu 0/4] A PV solution for live migration optimization
.../* Using memory_region_get_ram_ptr is bending the rules a bit, but
       should be OK because we only want a single page. */
    addr = section.offset_within_region;
    balloon_page(memory_region_get_ram_ptr(section.mr) + addr,
                 !!(vq == s->dvq));
    memory_region_unref(section.mr);
}

so all that happens when we get a page is balloon_page.
and

static void balloon_page(void *addr, int deflate)
{
#if defined(__linux__)
    if (!qemu_balloon_is_inhibited() && (!kvm_enabled() ||...
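The balloon_page() helper quoted above is cut off by the snippet. Reconstructed from the QEMU source of that era (take the exact guard conditions as an approximation), it boils down to a single madvise call:

static void balloon_page(void *addr, int deflate)
{
#if defined(__linux__)
    if (!qemu_balloon_is_inhibited() && (!kvm_enabled() ||
                                         kvm_has_sync_mmu())) {
        /* DONTNEED discards the page on inflate; WILLNEED hints it
         * back in on deflate. */
        qemu_madvise(addr, BALLOON_PAGE_SIZE,
                     deflate ? QEMU_MADV_WILLNEED : QEMU_MADV_DONTNEED);
    }
#endif
}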
2016 Mar 04
5
[Qemu-devel] [RFC qemu 0/4] A PV solution for live migration optimization
> Subject: Re: [Qemu-devel] [RFC qemu 0/4] A PV solution for live migration
> optimization
>
> On Fri, Mar 04, 2016 at 09:08:44AM +0000, Li, Liang Z wrote:
> > > On Fri, Mar 04, 2016 at 01:52:53AM +0000, Li, Liang Z wrote:
> > > > > I wonder if it would be possible to avoid the kernel changes
> > > > > by parsing /proc/self/pagemap - if that
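The /proc/self/pagemap idea floated in this thread can be tried entirely from userspace: the file holds one 64-bit entry per virtual page, and bit 63 is set when the page is present in RAM. A minimal sketch:

#include <fcntl.h>
#include <stdint.h>
#include <unistd.h>

/* Return 1 if the page backing vaddr is present, 0 if not, -1 on error. */
static int page_is_present(uintptr_t vaddr)
{
    long pagesize = sysconf(_SC_PAGESIZE);
    off_t off = (off_t)(vaddr / (uintptr_t)pagesize) * sizeof(uint64_t);
    uint64_t entry;
    int fd = open("/proc/self/pagemap", O_RDONLY);

    if (fd < 0) {
        return -1;
    }
    if (pread(fd, &entry, sizeof(entry), off) != sizeof(entry)) {
        close(fd);
        return -1;
    }
    close(fd);
    return (int)((entry >> 63) & 1);   /* bit 63: page present */
}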
2016 Mar 05
0
[Qemu-devel] [RFC qemu 0/4] A PV solution for live migration optimization
...is bending the rules a bit, but
> >        should be OK because we only want a single page. */
> >     addr = section.offset_within_region;
> >     balloon_page(memory_region_get_ram_ptr(section.mr) + addr,
> >                  !!(vq == s->dvq));
> >     memory_region_unref(section.mr);
> > }
> >
> > so all that happens when we get a page is balloon_page.
> > and
> >
> > static void balloon_page(void *addr, int deflate)
> > {
> > #if defined(__linux__)
> >     if (!qemu_balloon_is...
2015 Apr 12
2
[PATCH 1/2] virtio_balloon: header update for virtio 1
...alloonStat {
     uint64_t val;
 } QEMU_PACKED VirtIOBalloonStat;

+typedef struct virtio_balloon_stat_modern {
+    uint16_t tag;
+    uint8_t reserved[6];
+    uint64_t val;
+} VirtIOBalloonStatModern;
+
 typedef struct VirtIOBalloon {
     VirtIODevice parent_obj;
     VirtQueue *ivq, *dvq, *svq;
--
MST
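The point of the six reserved bytes in the modern layout: they pad the 16-bit tag so the 64-bit val is naturally aligned, making each entry a fixed 16 bytes with no packed access. A self-contained illustration (the GCC packed attribute stands in for QEMU_PACKED; struct names are shortened for the example):

#include <assert.h>
#include <stdint.h>

typedef struct __attribute__((packed)) {
    uint16_t tag;
    uint64_t val;
} LegacyStat;                       /* 10 bytes, val unaligned */

typedef struct {
    uint16_t tag;
    uint8_t  reserved[6];
    uint64_t val;
} ModernStat;                       /* 16 bytes, val naturally aligned */

static_assert(sizeof(LegacyStat) == 10, "legacy entry is packed");
static_assert(sizeof(ModernStat) == 16, "modern entry is padded");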
2023 Sep 09
0
[PATCH RFC v2 2/4] vdpa/mlx5: implement .reset_map driver op
...ft intact across virtio device reset. Leverage the .reset_map callback to
reset memory mapping, then the device .reset routine can run free from having
to clean up memory mappings.

Signed-off-by: Si-Wei Liu <si-wei.liu at oracle.com>
---
RFC v1 -> v2:
- fix error path when both CVQ and DVQ fall in same asid
---
 drivers/vdpa/mlx5/core/mlx5_vdpa.h |  1 +
 drivers/vdpa/mlx5/core/mr.c        | 70 +++++++++++++++++++++++---------------
 drivers/vdpa/mlx5/net/mlx5_vnet.c  | 18 +++++++---
 3 files changed, 56 insertions(+), 33 deletions(-)

diff --git a/drivers/vdpa/mlx5/core/mlx5_vdpa.h b...
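The shape of the op this series adds, as a hedged sketch: the parent driver gets a per-ASID callback that re-establishes the default 1:1 DMA mapping, so .reset no longer has to tear mappings down. The helper names below are illustrative, not the mlx5 code:

/* Proposed vdpa_config_ops member (signature per this series): */
int (*reset_map)(struct vdpa_device *vdev, unsigned int asid);

/* Hypothetical parent-driver implementation: */
static int my_vdpa_reset_map(struct vdpa_device *vdev, unsigned int asid)
{
    struct my_vdpa_dev *mdev = to_my_vdpa(vdev);

    /* Drop whatever user mappings the ASID holds... */
    my_destroy_user_mr(mdev, asid);
    /* ...and restore the default 1:1 DMA memory region. */
    return my_create_dma_mr(mdev, asid);
}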
2023 Aug 02
3
[PATCH 0/2] vdpa/mlx5: Fixes for ASID handling
This patch series is based on Eugenio's fix for handling CVQs in a different ASID [0]. The first patch is the actual fix. The next two patches fix a possible issue that I found while implementing patch 1. The patches are ordered like this for clarity.

[0] https://lore.kernel.org/lkml/20230112142218.725622-1-eperezma at redhat.com/

Dragos Tatulea (1):
  vdpa/mlx5: Fix
2023 Sep 09
4
[PATCH RFC v2 0/4] vdpa: decouple reset of iotlb mapping from device reset
In order to reduce needlessly high setup and teardown cost of iotlb mapping during live migration, it's crucial to decouple the vhost-vdpa iotlb abstraction from the virtio device life cycle, i.e. iotlb mappings should be left intact across virtio device reset [1]. For it to work, the on-chip IOMMU parent device should implement a separate .reset_map() operation callback to restore 1:1 DMA
2023 Sep 09
4
[PATCH RFC v3 0/4] vdpa: decouple reset of iotlb mapping from device reset
In order to reduce needlessly high setup and teardown cost of iotlb mapping during live migration, it's crucial to decouple the vhost-vdpa iotlb abstraction from the virtio device life cycle, i.e. iotlb mappings should be left intact across virtio device reset [1]. For it to work, the on-chip IOMMU parent device should implement a separate .reset_map() operation callback to restore 1:1 DMA
2009 Jul 23
1
[PATCH server] changes required for fedora rawhide inclusion.