Displaying 20 results from an estimated 10000 matches similar to: "How to download / snapshot Dulloor's NUMA patches"
2010 Oct 27
2
Why is cpu-to-node mapping different between Xen 4.0.2-rc1-pre and Xen 4.1-unstable?
My system is a dual Xeon E5540 (Nehalem) HP Proliant DL380G6. When
switching between Xen 4.0.2-rc1-pre and Xen 4.1-unstable I noticed
that the NUMA info as shown by the Xen 'u' debug-key is different.
More specifically, the CPU to node mapping is alternating for 4.0.2
and grouped sequentially for 4.1. This difference affects the
allocation (wrt node/socket) of pinned VCPUs to the
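The two enumeration schemes being contrasted can be sketched as follows. This is an illustrative model only, assuming a dual-socket Nehalem box exposing 2 NUMA nodes and 16 logical CPUs; the actual ordering depends on firmware/APIC enumeration, not on these formulas.

```python
# Illustrative sketch of the two cpu-to-node mappings described above,
# assuming 2 NUMA nodes and 16 logical CPUs (hypothetical values).

NODES = 2
CPUS = 16

def alternating_map(cpu):
    """Xen 4.0.2-rc1-pre style: CPUs round-robin across nodes (0,1,0,1,...)."""
    return cpu % NODES

def grouped_map(cpu):
    """Xen 4.1-unstable style: CPUs grouped sequentially (0..7 -> node 0)."""
    return cpu // (CPUS // NODES)

if __name__ == "__main__":
    print([alternating_map(c) for c in range(CPUS)])
    print([grouped_map(c) for c in range(CPUS)])
```

Either mapping is internally consistent; the problem described above is that pinning decisions made under one scheme land on the wrong socket under the other.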
2009 Nov 06
18
xenoprof: operation 9 failed for dom0 (status: -1)
Renato,
When I tried running "opcontrol --start" (after previously running
"opcontrol --start-daemon") in dom0, I get this error message:
/usr/local/bin/opcontrol: line 1639: echo: write error: Operation not
permitted
and this message in the Xen console:
(XEN) xenoprof: operation 9 failed for dom 0 (status : -1)
It looks like opcontrol is trying to do this: echo 1 >
2010 Jul 02
2
[XEN][vNUMA][PATCH 9/9] Disable Migration with numa_strategy
We don’t preserve the NUMA properties of the VM across the migration.
Just disable the migration for now. Also, setting disable_migrate
doesn’t seem to stop one from actually migrating a VM. So, this patch
is only for documentation.
-dulloor
Signed-off-by: Dulloor <dulloor@gmail.com>
2013 Mar 06
1
Re: [PATCH 00 of 10 [RFC]] Automatically place gueston host's NUMA nodes with xl
hello,
I applied the patch to Xen 4.1 and enabled its NUMA placement feature, but I still see the problem described earlier. Can you help me work out the reason?
Also, where is the newest version of the patch? Please provide the latest development branch's address.
Thanks,
Regards,
Butine huang
Zhejiang University
2013-03-06
>On mer, 2013-03-06 at 10:49 +0000, butian huang wrote:
>>
2010 Nov 12
0
Announce: Auto/Lazy-migration Patches RFC on linux-numa list
At last weeks' LPC, there was some interest in my patches for Auto/Lazy
Migration to improve locality and possibly performance of unpinned guest
VMs on a NUMA platform. As a result of these conversations I have reposted
the patches [4 series, ~40 patches] as RFCs to the linux-numa list. Links
to threads given below.
I have rebased the patches atop 3Nov10 mmotm series [2.6.36 + 3nov mmotm].
2006 Sep 29
0
[PATCH 0/6] add NUMA support to Xen
The following patchset adds NUMA support to the hypervisor. This
includes:
- A full SRAT table parser for 32-bit and 64-bit, based on the ACPI
NUMA parser from linux 2.6.16.29 with data structures to represent
NUMA cpu and memory topology, and NUMA emulation (fake=).
- Changes to the Xen page allocator adding a per-node bucket for each
zone. Xen will continue to prioritize using the
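The per-node bucket idea from this patchset can be illustrated with a toy allocator. This is a sketch, not Xen's allocator: each node keeps its own free count, and an allocation prefers the requesting node before falling back to remote ones.

```python
# Toy model (not Xen code) of a per-node bucketed page allocator:
# each NUMA node has its own free pool, and allocations prefer the
# local node, falling back to remote nodes only when it is exhausted.

class NodeAllocator:
    def __init__(self, pages_per_node):
        # pages_per_node: {node_id: number of free pages}
        self.free = dict(pages_per_node)

    def alloc(self, node):
        """Allocate one page, preferring `node`; returns the node the
        page actually came from, or None when every pool is empty."""
        order = [node] + [n for n in self.free if n != node]
        for n in order:
            if self.free[n] > 0:
                self.free[n] -= 1
                return n
        return None

pool = NodeAllocator({0: 1, 1: 2})
print(pool.alloc(0))  # local hit -> 0
print(pool.alloc(0))  # node 0 exhausted, falls back -> 1
```

The fallback step is what keeps the allocator functionally identical to a non-NUMA one when local memory runs out; only placement preference changes.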
2020 Jun 28
0
[RFC 0/3] virtio: NUMA-aware memory allocation
On 2020/6/25 9:57, Stefan Hajnoczi wrote:
> These patches are not ready to be merged because I was unable to measure a
> performance improvement. I'm publishing them so they are archived in case
> someone picks up this work again in the future.
>
> The goal of these patches is to allocate virtqueues and driver state from the
> device's NUMA node for optimal memory
2016 Nov 21
0
Re: NUMA VM and assigning interfaces
On 11/21/2016 12:34 PM, Amir Shehata wrote:
> Hello,
>
> Hope all is well.
>
> I've been looking at how I can create a virtual machine which is NUMA
> capable. I was able to do that by:
>
> 140 <qemu:commandline>
> 143 <qemu:arg value='-numa'/>
> 144 <qemu:arg value='node'/>
> 145 <qemu:arg
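The excerpt above is cut off mid-element; a minimal sketch of the kind of `<qemu:commandline>` passthrough block being described would look like the following. The surrounding elements and values here are assumptions for illustration, not the poster's actual configuration; only the `xmlns:qemu` namespace declaration on `<domain>` is a hard requirement for libvirt to accept `qemu:` elements.

```xml
<!-- Hypothetical sketch; values are illustrative, not the poster's config -->
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <!-- ... rest of the domain definition ... -->
  <qemu:commandline>
    <qemu:arg value='-numa'/>
    <qemu:arg value='node'/>
  </qemu:commandline>
</domain>
```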
2020 Jun 29
0
[RFC 0/3] virtio: NUMA-aware memory allocation
On Mon, Jun 29, 2020 at 10:26:46AM +0100, Stefan Hajnoczi wrote:
> On Sun, Jun 28, 2020 at 02:34:37PM +0800, Jason Wang wrote:
> >
> > On 2020/6/25 9:57, Stefan Hajnoczi wrote:
> > > These patches are not ready to be merged because I was unable to measure a
> > > performance improvement. I'm publishing them so they are archived in case
> > > someone
2010 Jan 17
2
docs/reference for NUMA usage?
i'm reading up on numa usage. so far, i've enabled numa on my Xen
box. @ Dom0 i see,
xm dmesg | grep -i numa
(XEN) Command line: ... numa=on ...
(XEN) No NUMA configuration found
i guess i need to 'configure' numa. i've no clue how, and haven't
found the docs for it yet, despite looking.
i found this old thread,
2018 Aug 21
0
Get Logical processor count correctly whether NUMA is enabled or disabled
Dear Arun,
thank you for the report. I agree with the analysis, detectCores() will
only report logical processors in the NUMA group in which R is running.
I don't have a system to test on, could you please check these
workarounds for me on your systems?
# number of logical processors - what detectCores() should return
out <- system("wmic cpu get numberoflogicalprocessors",
2019 May 02
0
NUMA revisited
Moin libvirters,
I'm looking into the current numa settings for a large-ish libvirt/qemu
based setup and I ended up having a couple of questions:
1) Has kernel.numa_balancing completely replaced numad or is there still
a time and place for numad when we have a modern kernel?
2) Should I pin vCPUs to numa nodes and/or use numatune at all, when
using kernel.numa_balancing?
3) The libvirt
2020 Jul 01
0
Re: no/empty NUMA cells on domain XML
On Wed, Jul 01, 2020 at 12:08:35PM +0300, Polina Agranat wrote:
> Hi ,
>
> I'm looking for a possibility to simulate a VM (to be used as a host)
> reporting no or empty NUMA <cells> in 'virsh capabilities'. 'virsh
> capabilities' usually reports single NUMA in the VMs,
> like
> <numa>
> <cell id='0' cpus='0-63'
2014 Sep 12
1
Inconsistent behavior between x86_64 and ppc64 when creating guests with NUMA node placement
Hello all,
I was recently trying out NUMA placement for my guests on both x86_64
and ppc64 machines. When booting a guest on the x86_64 machine, the
following specs were valid (obviously, just notable excepts from the xml):
<memory unit='KiB'>8388608</memory>
<currentMemory unit='KiB'>8388608</currentMemory>
<vcpu
2012 Nov 01
0
numa topology within domain XML
Hello all,
I'm trying to setup a NUMA topology identical as the machine which hosts
the qemu-kvm VirtualMachine.
numactl -H on the host:
available: 8 nodes (0-7)
node 0 cpus: 0 1 2 3 4 5
node 0 size: 8189 MB
node 0 free: 7581 MB
node 1 cpus: 6 7 8 9 10 11
node 1 size: 8192 MB
node 1 free: 7061 MB
node 2 cpus: 12 13 14 15 16 17
node 2 size: 8192 MB
node 2 free: 6644 MB
node 3 cpus: 18 19 20
2017 Oct 02
2
NUMA split mode?
John R Pierce <pierce at hogranch.com> writes:
> On 10/1/2017 8:38 AM, hw wrote:
>> HP says that what they call "NUMA split mode" should be disabled in the
>> BIOS of the Z800 workstation when running Linux. They are reasoning
>> that Linux kernels do not support this feature and even might not boot
if it's enabled.
>
> hmm, that workstation is
2017 Jun 02
1
should NUMA be enabled?
Hi,
should NUMA be enabled in the BIOS of a server that has
two sockets but only a single CPU in one of the sockets?
From what I've been reading, it is unclear to me if NUMA
should be enabled only on systems with multiple CPUs in
multiple sockets or if multiple cores of a single CPU in
a single socket benefit from NUMA being enabled, and if
memory access in general benefits from NUMA being
2019 Sep 23
2
[PATCH RFC v3 1/9] ACPI: NUMA: export pxm_to_node
On 19.09.19 16:22, David Hildenbrand wrote:
> Will be needed by virtio-mem to identify the node from a pxm.
>
> Cc: "Rafael J. Wysocki" <rjw at rjwysocki.net>
> Cc: Len Brown <lenb at kernel.org>
> Cc: linux-acpi at vger.kernel.org
> Signed-off-by: David Hildenbrand <david at redhat.com>
> ---
> drivers/acpi/numa.c | 1 +
> 1 file changed, 1