Displaying 20 results from an estimated 120 matches similar to: "[PATCH RESEND v2 2/2] xen: enable vnuma for PV guest"
2013 Oct 16
4
[PATCH 1/7] xen: vNUMA support for PV guests
Defines a XENMEM subop hypercall for PV vNUMA-enabled
guests, along with the data structures that provide vNUMA
topology information from the per-domain vnuma topology
build info.
Signed-off-by: Elena Ufimtseva <ufimtseva@gmail.com>
---
Changes since RFC v2:
- fixed coding style;
- memory copying in the hypercall now happens in one go for the arrays;
- fixed error-code handling logic;
---
xen/common/domain.c | 10
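For orientation, here is a minimal sketch of the kind of per-domain vNUMA
description such a XENMEM subop could hand back to a PV guest. The struct
and field names below are illustrative assumptions, not the layout defined
by the posted patch:

#include <stdint.h>

/* Illustrative only: not the patch's public interface. */
struct vnuma_topology_example {
	uint32_t nr_vnodes;       /* number of virtual NUMA nodes        */
	uint32_t nr_vcpus;        /* number of virtual CPUs              */
	uint64_t *vmemrange;      /* per-node memory range boundaries    */
	uint32_t *vdistance;      /* nr_vnodes x nr_vnodes distance grid */
	uint32_t *vcpu_to_vnode;  /* vCPU -> virtual node mapping        */
};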
2020 Jun 29
2
[RFC 0/3] virtio: NUMA-aware memory allocation
On Sun, Jun 28, 2020 at 02:34:37PM +0800, Jason Wang wrote:
>
> On 2020/6/25 9:57, Stefan Hajnoczi wrote:
> > These patches are not ready to be merged because I was unable to measure a
> > performance improvement. I'm publishing them so they are archived in case
> > someone picks up this work again in the future.
> >
> > The goal of these patches is to
2020 Jun 25
5
[RFC 0/3] virtio: NUMA-aware memory allocation
These patches are not ready to be merged because I was unable to measure a
performance improvement. I'm publishing them so they are archived in case
someone picks up this work again in the future.
The goal of these patches is to allocate virtqueues and driver state from the
device's NUMA node for optimal memory access latency. Only guests with a vNUMA
topology and virtio devices spread
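A minimal sketch of the general pattern being described, i.e. allocating
driver state on the device's NUMA node. The helper name is made up for
illustration; dev_to_node() and kzalloc_node() are the standard kernel
primitives for this kind of node-local allocation:

#include <linux/device.h>
#include <linux/slab.h>
#include <linux/virtio.h>

/* Illustrative helper, not code from the series. */
static void *alloc_on_device_node(struct virtio_device *vdev, size_t size)
{
	/* NUMA node of the parent transport device (e.g. the PCI device);
	 * may be NUMA_NO_NODE, in which case the allocator falls back to
	 * any node. */
	int node = dev_to_node(vdev->dev.parent);

	return kzalloc_node(size, GFP_KERNEL, node);
}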
2010 Jul 02
2
[XEN][vNUMA][PATCH 9/9] Disable Migration with numa_strategy
We don’t preserve the NUMA properties of the VM across migration,
so just disable migration for now. Also, setting disable_migrate
doesn’t seem to stop one from actually migrating a VM, so this patch
is only for documentation.
-dulloor
Signed-off-by: Dulloor <dulloor@gmail.com>
2013 Feb 10
0
[PATCH 16/16] xen idle: make xen-specific macro xen-specific
From: Len Brown <len.brown@intel.com>
This macro is only invoked by Xen,
so make its definition specific to Xen.
> set_pm_idle_to_default()
< xen_set_default_idle()
Signed-off-by: Len Brown <len.brown@intel.com>
Cc: xen-devel@lists.xensource.com
---
arch/x86/include/asm/processor.h | 6 +++++-
arch/x86/kernel/process.c | 4 +++-
arch/x86/xen/setup.c | 2 +-
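A rough sketch of the shape of such a change (not the actual upstream
diff): the generic hook becomes a Xen-only declaration, with a no-op
stub when Xen support is compiled out.

/* Sketch only, not the upstream change itself. */
#ifdef CONFIG_XEN
void xen_set_default_idle(void);
#else
#define xen_set_default_idle() do { } while (0)
#endif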
2014 Mar 19
1
[PATCH v7 07/11] pvqspinlock, x86: Allow unfair queue spinlock in a XEN guest
On Wed, Mar 19, 2014 at 04:14:05PM -0400, Waiman Long wrote:
> This patch adds a XEN init function to activate the unfair queue
> spinlock in a XEN guest when the PARAVIRT_UNFAIR_LOCKS kernel config
> option is selected.
>
> Signed-off-by: Waiman Long <Waiman.Long at hp.com>
> ---
> arch/x86/xen/setup.c | 19 +++++++++++++++++++
> 1 files changed, 19
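Roughly, such an init hook would look like the sketch below. The
PARAVIRT_UNFAIR_LOCKS symbol comes from the quoted description; the
static key name is an assumption for illustration, standing in for
whatever flag the lock code actually checks:

#include <linux/init.h>
#include <linux/jump_label.h>
#include <xen/xen.h>

#ifdef CONFIG_PARAVIRT_UNFAIR_LOCKS
/* Assumed to be defined by the queue spinlock code. */
extern struct static_key paravirt_unfairlocks_enabled;

/* Sketch: flip the "unfair locks" static key when booting as a Xen guest. */
static __init int xen_unfair_locks_init_jump(void)
{
	if (!xen_domain())
		return 0;

	static_key_slow_inc(&paravirt_unfairlocks_enabled);
	return 0;
}
early_initcall(xen_unfair_locks_init_jump);
#endif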
2016 Dec 09
2
[Qemu-devel] [PATCH kernel v5 0/5] Extend virtio-balloon for fast (de)inflating & fast live migration
> On 12/08/2016 08:45 PM, Li, Liang Z wrote:
> > What's the conclusion of your discussion? It seems you want some
> > statistics before deciding whether to rip the bitmap from the ABI,
> > am I right?
>
> I think Andrea and David feel pretty strongly that we should remove the
> bitmap, unless we have some data to support keeping it. I don't feel as
>
2020 Jun 28
0
[RFC 0/3] virtio: NUMA-aware memory allocation
On 2020/6/25 9:57, Stefan Hajnoczi wrote:
> These patches are not ready to be merged because I was unable to measure a
> performance improvement. I'm publishing them so they are archived in case
> someone picks up this work again in the future.
>
> The goal of these patches is to allocate virtqueues and driver state from the
> device's NUMA node for optimal memory
2020 Jun 29
0
[RFC 0/3] virtio: NUMA-aware memory allocation
On Mon, Jun 29, 2020 at 10:26:46AM +0100, Stefan Hajnoczi wrote:
> On Sun, Jun 28, 2020 at 02:34:37PM +0800, Jason Wang wrote:
> >
> > On 2020/6/25 9:57, Stefan Hajnoczi wrote:
> > > These patches are not ready to be merged because I was unable to measure a
> > > performance improvement. I'm publishing them so they are archived in case
> > > someone
2011 Dec 02
3
[PATCH 1/3] build: Add more suppressions for valgrind tests
---
extratests/suppressions | 20 ++++++++++++++++----
1 files changed, 16 insertions(+), 4 deletions(-)
diff --git a/extratests/suppressions b/extratests/suppressions
index 97d4b78..78ca4ab 100644
--- a/extratests/suppressions
+++ b/extratests/suppressions
@@ -3,19 +3,19 @@
Memcheck:Cond
fun:*
fun:numa_node_size64
- fun:numa_init
+ obj:/usr/lib64/libnuma.so.1
}
{
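For context, a complete Memcheck suppression block in that file has the
general shape shown below; the name line and the exact frames here are
illustrative, not taken from the patch:

{
   numa_node_size64_from_libnuma
   Memcheck:Cond
   fun:*
   fun:numa_node_size64
   obj:/usr/lib64/libnuma.so.1
}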
2014 Mar 19
0
[PATCH v7 07/11] pvqspinlock, x86: Allow unfair queue spinlock in a XEN guest
This patch adds a XEN init function to activate the unfair queue
spinlock in a XEN guest when the PARAVIRT_UNFAIR_LOCKS kernel config
option is selected.
Signed-off-by: Waiman Long <Waiman.Long at hp.com>
---
arch/x86/xen/setup.c | 19 +++++++++++++++++++
1 files changed, 19 insertions(+), 0 deletions(-)
diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
index 0982233..66bb6f5
2014 Mar 21
0
[PATCH v7 07/11] pvqspinlock, x86: Allow unfair queue spinlock in a XEN guest
On Mar 20, 2014 11:40 PM, Waiman Long <waiman.long at hp.com> wrote:
>
> On 03/19/2014 04:28 PM, Konrad Rzeszutek Wilk wrote:
> > On Wed, Mar 19, 2014 at 04:14:05PM -0400, Waiman Long wrote:
> >> This patch adds a XEN init function to activate the unfair queue
> >> spinlock in a XEN guest when the PARAVIRT_UNFAIR_LOCKS kernel config
> >> option is
2016 Dec 09
0
[Qemu-devel] [PATCH kernel v5 0/5] Extend virtio-balloon for fast (de)inflating & fast live migration
Hello,
On Fri, Dec 09, 2016 at 05:35:45AM +0000, Li, Liang Z wrote:
> > On 12/08/2016 08:45 PM, Li, Liang Z wrote:
> > > What's the conclusion of your discussion? It seems you want some
> > > statistics before deciding whether to rip the bitmap from the ABI,
> > > am I right?
> >
> > I think Andrea and David feel pretty strongly that we should
2014 Mar 19
15
[PATCH v7 00/11] qspinlock: a 4-byte queue spinlock with PV support
v6->v7:
- Remove an atomic operation from the 2-task contending code
- Shorten the names of some macros
- Make the queue waiter attempt to steal the lock when unfair lock is
enabled.
- Remove lock holder kick from the PV code and fix a race condition
- Run the unfair lock & PV code on overcommitted KVM guests to collect
performance data.
v5->v6:
- Change the optimized