2013 Oct 16
4
[PATCH 1/7] xen: vNUMA support for PV guests
Defines a XENMEM subop hypercall for PV vNUMA-enabled
guests, and the data structures that provide vNUMA
topology information from the per-domain vNUMA topology
build info.
Signed-off-by: Elena Ufimtseva <ufimtseva@gmail.com>
---
Changes since RFC v2:
- fixed code style;
- the memory copying in hypercall happens in one go for arrays;
- fixed er...
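The description above is terse, so here is a rough sketch of the kind of information such a topology interface has to hand to the guest: how many virtual nodes exist, which vnode each vcpu belongs to, which memory ranges back each vnode, and the node distance matrix. The structure and field names below are illustrative only; they are not the ones defined by this patch series.

/* Illustrative sketch only -- NOT the structures from this patch. */
#include <stdint.h>

struct vnuma_memrange {
        uint64_t start;                 /* guest-physical start of the range  */
        uint64_t end;                   /* guest-physical end of the range    */
        unsigned int vnode;             /* virtual node this range belongs to */
};

struct vnuma_topology {
        unsigned int nr_vnodes;         /* number of virtual NUMA nodes       */
        unsigned int nr_vcpus;          /* number of virtual cpus             */
        unsigned int *vcpu_to_vnode;    /* vcpu_to_vnode[vcpu] = vnode        */
        unsigned int *vdistance;        /* vdistance[i * nr_vnodes + j]       */
        struct vnuma_memrange *memranges; /* one or more ranges per vnode     */
};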
2020 Jun 29
2
[RFC 0/3] virtio: NUMA-aware memory allocation
...t. I'm publishing them so they are archived in case
> > someone picks up this work again in the future.
> >
> > The goal of these patches is to allocate virtqueues and driver state from the
> > device's NUMA node for optimal memory access latency. Only guests with a vNUMA
> > topology and virtio devices spread across vNUMA nodes benefit from this. In
> > other cases the memory placement is fine and we don't need to take NUMA into
> > account inside the guest.
> >
> > These patches could be extended to virtio_net.ko and other devic...
2013 Nov 18
9
[PATCH RESEND v2 2/2] xen: enable vnuma for PV guest
Enables NUMA if the vNUMA topology hypercall is supported and the domain is a domU.
Signed-off-by: Elena Ufimtseva <ufimtseva@gmail.com>
---
arch/x86/xen/setup.c | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
index 68c054f..0aab799 100644
--- a/arch/x86/...
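The diff itself is truncated here, but the commit message describes the gating logic: enable NUMA only when the guest is an unprivileged domain (domU) and the vNUMA topology hypercall is available. A rough sketch of that shape follows; xen_vnuma_supported() is a placeholder name standing in for "the vNUMA topology hypercall succeeded", not the helper the patch actually adds.

/* Sketch of the gating described in the commit message -- not the actual patch. */
#ifdef CONFIG_NUMA
static void __init xen_setup_numa(void)
{
        if (!xen_initial_domain() && xen_vnuma_supported())
                numa_off = 0;   /* let the generic x86 NUMA init run      */
        else
                numa_off = 1;   /* keep the flat, single-node layout      */
}
#endif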
2020 Jun 25
5
[RFC 0/3] virtio: NUMA-aware memory allocation
...able to measure a
performance improvement. I'm publishing them so they are archived in case
someone picks up this work again in the future.
The goal of these patches is to allocate virtqueues and driver state from the
device's NUMA node for optimal memory access latency. Only guests with a vNUMA
topology and virtio devices spread across vNUMA nodes benefit from this. In
other cases the memory placement is fine and we don't need to take NUMA into
account inside the guest.
These patches could be extended to virtio_net.ko and other devices in the
future. I only tested virtio_blk.ko.
Th...
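The mechanical part of the idea is small: instead of a plain kzalloc(), per-device state is allocated on the NUMA node the device sits on. A minimal sketch using the generic kernel helpers dev_to_node() and kzalloc_node(); the structure and function names here are made up for illustration and are not the ones from the series.

/* Illustrative only -- not code from this RFC. */
#include <linux/device.h>
#include <linux/slab.h>

struct my_vq_state {
        void *ring;                     /* ...driver-private bookkeeping... */
};

static struct my_vq_state *alloc_vq_state(struct device *dev)
{
        /* dev_to_node() returns NUMA_NO_NODE when the node is unknown;
         * kzalloc_node() then falls back to an ordinary allocation. */
        int node = dev_to_node(dev);

        return kzalloc_node(sizeof(struct my_vq_state), GFP_KERNEL, node);
}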
2020 Jun 28
0
[RFC 0/3] virtio: NUMA-aware memory allocation
...erformance improvement. I'm publishing them so they are archived in case
> someone picks up this work again in the future.
>
> The goal of these patches is to allocate virtqueues and driver state from the
> device's NUMA node for optimal memory access latency. Only guests with a vNUMA
> topology and virtio devices spread across vNUMA nodes benefit from this. In
> other cases the memory placement is fine and we don't need to take NUMA into
> account inside the guest.
>
> These patches could be extended to virtio_net.ko and other devices in the
> future. I o...
2020 Jun 29
0
[RFC 0/3] virtio: NUMA-aware memory allocation
...g them so they are archived in case
> > > someone picks up this work again in the future.
> > >
> > > The goal of these patches is to allocate virtqueues and driver state from the
> > > device's NUMA node for optimal memory access latency. Only guests with a vNUMA
> > > topology and virtio devices spread across vNUMA nodes benefit from this. In
> > > other cases the memory placement is fine and we don't need to take NUMA into
> > > account inside the guest.
> > >
> > > These patches could be extended to virt...
2010 Jul 02
2
[XEN][vNUMA][PATCH 9/9] Disable Migration with numa_strategy
We don’t preserve the NUMA properties of the VM across migration,
so just disable migration for now. Also, setting disable_migrate
doesn’t seem to actually stop one from migrating a VM, so this patch
is only for documentation.
-dulloor
Signed-off-by: Dulloor <dulloor@gmail.com>
2016 Dec 09
2
[Qemu-devel] [PATCH kernel v5 0/5] Extend virtio-balloon for fast (de)inflating & fast live migration
> On 12/08/2016 08:45 PM, Li, Liang Z wrote:
> > What's the conclusion of your discussion? It seems you want some
> > statistics before deciding whether to rip the bitmap from the ABI,
> > am I right?
>
> I think Andrea and David feel pretty strongly that we should remove the
> bitmap, unless we have some data to support keeping it. I don't feel as
>
2016 Dec 09
0
[Qemu-devel] [PATCH kernel v5 0/5] Extend virtio-balloon for fast (de)inflating & fast live migration
...order 0, the bitmap payoff
will regress close to linearly as guest RAM increases.
So it'd be good to check the stats or the benchmark on large guests,
at least one hundred gigabytes or so.
Changing topic but still about the ABI features needed, so it may be
relevant for this discussion:
1) vNUMA locality: i.e. allowing the host to specify which vNODEs to take
memory from, using alloc_pages_node in the guest. So you can ask to
take X pages from vnode A and Y pages from vnode B in one vmenter
(see the sketch below).
2) allowing qemu to tell the guest to stop inflating the balloon and
report a fragmentation limit b...
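To make point 1 concrete, the sketch below pulls a requested number of order-0 pages from one specific virtual node with alloc_pages_node(), which is the mechanism the paragraph refers to. It is a hypothetical illustration, not code from the balloon driver.

/* Hypothetical sketch of point 1 above -- not balloon-driver code. */
#include <linux/gfp.h>
#include <linux/mm.h>

static unsigned long take_pages_from_vnode(int vnode, unsigned long nr_pages)
{
        unsigned long taken = 0;

        while (taken < nr_pages) {
                /* Order-0 allocation pinned to the requested node;
                 * __GFP_THISNODE forbids falling back to other nodes. */
                struct page *page = alloc_pages_node(vnode,
                                                     GFP_KERNEL | __GFP_THISNODE, 0);
                if (!page)
                        break;          /* the node is out of free pages */

                /* ...report the page back to the host here... */
                taken++;
        }

        return taken;
}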