similar to: [PATCH] 0/7 xen: Add basic NUMA support

Displaying 20 results from an estimated 8000 matches similar to: "[PATCH] 0/7 xen: Add basic NUMA support"

2006 Oct 04
2
NUMA support on Xen ?
Hi, I am a Masters student at Carnegie Mellon University. I am looking for research topics for an Advanced OS & DS course we have. I wanted to know what the current support for NUMA in Xen is. Does it support the IBM x440 and AMD64 Opteron? Also, does the Xen scheduler do NUMA-aware scheduling so that it does not degrade VM performance? My group is currently looking into Scheduling
2006 Sep 29
0
[PATCH 0/6] add NUMA support to Xen
The following patchset adds NUMA support to the hypervisor. This includes:
- A full SRAT table parser for 32-bit and 64-bit, based on the ACPI NUMA parser from Linux 2.6.16.29, with data structures to represent NUMA CPU and memory topology, and NUMA emulation (fake=).
- Changes to the Xen page allocator adding a per-node bucket for each zone. Xen will continue to prioritize using the
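As a purely illustrative aside on the per-node bucket idea described in the excerpt above, a minimal C sketch might look like the following; the names (free_area, heap, MAX_NUMNODES, NR_ZONES, MAX_ORDER) are invented for the example and are not the actual Xen allocator structures.

    /* Hypothetical sketch of per-node free-list buckets inside each zone of a
     * buddy-style allocator. All names and sizes are illustrative, not the
     * real Xen data structures. */
    #define MAX_NUMNODES  64          /* maximum NUMA nodes supported */
    #define NR_ZONES       3          /* e.g. DMA, normal, high memory */
    #define MAX_ORDER     20          /* largest 2^order allocation */

    struct page_info;                 /* opaque page descriptor */

    struct free_area {
        struct page_info *free_list;  /* head of the free pages of size 2^order */
        unsigned long     nr_free;    /* how many chunks are on the list */
    };

    /* One bucket of free areas per (zone, node) pair: an allocation request
     * first searches the bucket of the requesting node, then falls back to
     * the other nodes' buckets within the same zone. */
    static struct free_area heap[NR_ZONES][MAX_NUMNODES][MAX_ORDER + 1];

The point of such a layout is that node locality becomes the first search key within a zone, while the existing zone ordering can stay as it is.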
2006 Sep 29
4
[PATCH 4/6] xen: export NUMA topology in physinfo hcall
This patch modifies the physinfo hcall to export NUMA CPU and Memory topology information. The new physinfo hcall is integrated into libxc and xend (xm info specifically). Included in this patch is a minor tweak to xm-test's xm info testcase. The new fields in xm info are: nr_nodes : 4 mem_chunks : node0:0x0000000000000000-0x0000000190000000
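To make the quoted xm info fields concrete, here is a small self-contained C sketch that prints topology data in the same shape; struct node_memchunk and its fields are invented for illustration and are not the real physinfo ABI.

    /* Prints NUMA topology in the same shape as the xm info fields quoted
     * above. The structure layout is invented for illustration and is not
     * the actual physinfo hypercall ABI. */
    #include <stdio.h>

    struct node_memchunk {
        unsigned long long start;   /* first byte of the node's memory range */
        unsigned long long end;     /* one past the last byte of the range */
        unsigned int       node;    /* NUMA node id owning the range */
    };

    int main(void)
    {
        unsigned int nr_nodes = 4;  /* as in the example output above */
        struct node_memchunk chunks[] = {
            { 0x0000000000000000ULL, 0x0000000190000000ULL, 0 },
        };

        printf("nr_nodes   : %u\n", nr_nodes);
        for (unsigned int i = 0; i < sizeof(chunks) / sizeof(chunks[0]); i++)
            printf("mem_chunks : node%u:0x%016llx-0x%016llx\n",
                   chunks[i].node, chunks[i].start, chunks[i].end);
        return 0;
    }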
2013 Jul 04
2
Re: [libvirt] [PATCH 1/4] libxl: implement NUMA capabilities reporting
[Moving the conversation to xen-devel and adding Jan, as that seems more appropriate] [Jan, this came up as I'm implementing some NUMA bits in libvirt but, as you can see, the core of Jim's question is purely about Xen] On Mon, 2013-07-01 at 16:47 -0600, Jim Fehlig wrote: > On my non-NUMA test machine I have the cell memory reported as > > <memory
2012 Jul 27
4
3.5.0 dom0 crash on boot
Hi, I've not tried pv_ops for a long time, but I just got a new system (Supermicro X9DRL-iF) so decided to try 3.5.0 with the latest Xen 4.2-unstable. Unfortunately, the system crashes immediately after loading dom0: traps.c:486:d0 Unhandled invalid opcode fault/trap [#6] on VCPU 0 [ec=0000] I've tried loading both bzImage and vmlinuz (gzip compressed vmlinuz) with the same
2009 Aug 28
2
[PATCH] x86/numa: fix c/s 20120 (Fix SRAT check for discontig memory)
That change converted the (wrong) assumption of contiguous nodes' memory to a similarly wrong one of assuming discontiguous memory (i.e. each node having separate E820 table entries). The code ought to be able to deal with both, though, and I hope this change makes it so. Signed-off-by: Jan Beulich <jbeulich@novell.com> --- 2009-08-24.orig/xen/arch/x86/srat.c 2009-08-28
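As an aside, the point about coping with both layouts can be illustrated with a hypothetical helper (not the actual srat.c code): a node's memory range passes the check as long as E820 RAM overlaps it, whether that RAM is one contiguous entry or several discontiguous ones.

    /* Hypothetical illustration of the point above, not the real srat.c:
     * accept a node's SRAT range if RAM overlaps it, whether that RAM is a
     * single contiguous E820 entry or several discontiguous ones. */
    #include <stdbool.h>
    #include <stdint.h>

    struct ram_entry {
        uint64_t addr;   /* start of an E820 RAM entry */
        uint64_t size;   /* length of the entry in bytes */
    };

    static bool node_range_has_ram(uint64_t start, uint64_t end,
                                   const struct ram_entry *ram, unsigned int nr)
    {
        for (unsigned int i = 0; i < nr; i++) {
            uint64_t s = ram[i].addr;
            uint64_t e = ram[i].addr + ram[i].size;
            if (s < end && e > start)   /* any overlap with the node's range */
                return true;
        }
        return false;
    }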
2019 Dec 13
0
[PATCH RFC v4 01/13] ACPI: NUMA: export pxm_to_node
On 12.12.19 22:43, Rafael J. Wysocki wrote: > On Thursday, December 12, 2019 6:11:25 PM CET David Hildenbrand wrote: >> Will be needed by virtio-mem to identify the node from a pxm. >> >> Cc: "Rafael J. Wysocki" <rjw at rjwysocki.net> >> Cc: Len Brown <lenb at kernel.org> >> Cc: linux-acpi at vger.kernel.org >> Signed-off-by: David
2020 Mar 02
0
[PATCH v1 01/11] ACPI: NUMA: export pxm_to_node
Will be needed by virtio-mem to identify the node from a pxm. Acked-by: "Rafael J. Wysocki" <rafael at kernel.org> Cc: Len Brown <lenb at kernel.org> Cc: linux-acpi at vger.kernel.org Signed-off-by: David Hildenbrand <david at redhat.com> --- drivers/acpi/numa/srat.c | 1 + 1 file changed, 1 insertion(+) diff --git a/drivers/acpi/numa/srat.c
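For readers unfamiliar with one-line export patches, the sketch below shows the general shape of such a change; the function body is a placeholder and the exact export macro used by the real patch is not visible in the excerpt above.

    /* Generic sketch of a one-line symbol export, so a loadable module such
     * as virtio-mem can call a previously built-in-only function. The body
     * below is a placeholder, not the real pxm_to_node() implementation. */
    #include <linux/export.h>

    #define NUMA_NO_NODE (-1)

    int pxm_to_node(int pxm)
    {
        /* The real function maps an ACPI proximity domain id to a NUMA node. */
        return pxm >= 0 ? pxm : NUMA_NO_NODE;
    }
    EXPORT_SYMBOL(pxm_to_node);   /* the single inserted line the diffstat shows */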
2020 Mar 02
1
[PATCH v1 01/11] ACPI: NUMA: export pxm_to_node
On Mon 02-03-20 14:49:31, David Hildenbrand wrote: > Will be needed by virtio-mem to identify the node from a pxm. No objection to export the symbol. But it is almost always better to add the export in the patch that actually uses it. The intention is much more clear that way. > Acked-by: "Rafael J. Wysocki" <rafael at kernel.org> > Cc: Len Brown <lenb at
2013 Sep 17
1
[PATCH] xen: numa-sched: leave node-affinity alone if not in "auto" mode
If the domain's NUMA node-affinity is being specified by the user/toolstack (instead of being automatically computed by Xen), we really should stick to that. This means domain_update_node_affinity() is wrong when it filters out some stuff from there even in "!auto" mode. This commit fixes that. Of course, this does not mean node-affinity is always honoured (e.g., a vcpu
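As a rough illustration of the policy being described (not the actual Xen code), the update could be sketched like this; the structure and names below are invented for the example.

    /* Hypothetical sketch of the policy described above, not the actual Xen
     * code: recompute node-affinity from the vcpus only in "auto" mode, and
     * leave an affinity that the user/toolstack set explicitly untouched. */
    #include <stdbool.h>

    typedef unsigned long nodemask_t;   /* illustrative: one bit per NUMA node */

    struct domain {
        bool       auto_node_affinity;  /* true when Xen computes the affinity */
        nodemask_t node_affinity;       /* nodes the scheduler should prefer */
    };

    static void domain_update_node_affinity(struct domain *d,
                                            nodemask_t nodes_with_vcpus)
    {
        if (d->auto_node_affinity)
            d->node_affinity = nodes_with_vcpus;  /* derive it from the vcpus */
        /* else: the explicitly-set affinity is honoured as-is, never filtered */
    }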
2011 Feb 11
4
Xen hypervisor failed to startup when booting CPUs
Hi Folks: I ran into a problem when enabling Xen on next-generation server platforms with Xen c/s 21380. Xen reported "CPU Not responding" when booting up 32 CPUs (2 sockets with 8 cores/16 threads total). The log files showed something wrong with the APIC. So I added x2apic=0 to the Xen grub line, but the symptom remained. However, native RHEL 5.5 can
2015 Dec 08
1
new install of Xen 4.6 hangs on Loading initial ramdisk
Not sure if this actually made it to the list the first time. Here is the SERIAL output (bottom of message after your questions). Googling the error indicates it's something people ran into a few years back but was supposedly fixed. Any ideas? I can verify that if I REMOVE the second CPU, it boots into the Xen kernel no problem. The CPU itself doesn't matter, as I can swap either
2020 Jun 29
2
[RFC 0/3] virtio: NUMA-aware memory allocation
On Sun, Jun 28, 2020 at 02:34:37PM +0800, Jason Wang wrote: > > On 2020/6/25 9:57 PM, Stefan Hajnoczi wrote: > > These patches are not ready to be merged because I was unable to measure a > > performance improvement. I'm publishing them so they are archived in case > > someone picks up this work again in the future. > > > > The goal of these patches is to
2014 Sep 12
1
Inconsistent behavior between x86_64 and ppc64 when creating guests with NUMA node placement
Hello all, I was recently trying out NUMA placement for my guests on both x86_64 and ppc64 machines. When booting a guest on the x86_64 machine, the following specs were valid (obviously, just notable excerpts from the XML): <memory unit='KiB'>8388608</memory> <currentMemory unit='KiB'>8388608</currentMemory> <vcpu
2020 Jun 25
5
[RFC 0/3] virtio: NUMA-aware memory allocation
These patches are not ready to be merged because I was unable to measure a performance improvement. I'm publishing them so they are archived in case someone picks up this work again in the future. The goal of these patches is to allocate virtqueues and driver state from the device's NUMA node for optimal memory access latency. Only guests with a vNUMA topology and virtio devices spread
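As a hypothetical illustration of the allocation strategy described in this cover letter (not the actual RFC patches), a virtio driver's probe function could place its per-device state on the device's NUMA node roughly as follows; my_vq_state and my_probe are invented names.

    /* Hypothetical sketch of NUMA-aware allocation in a virtio driver probe;
     * my_vq_state and my_probe are invented names, not the RFC patches. */
    #include <linux/device.h>
    #include <linux/errno.h>
    #include <linux/slab.h>
    #include <linux/virtio.h>

    struct my_vq_state {
        int   node;       /* NUMA node the state was allocated on */
        void *ring_mem;   /* virtqueue memory, to be allocated on the same node */
    };

    static int my_probe(struct virtio_device *vdev)
    {
        /* NUMA node of the underlying device (e.g. its PCI parent). */
        int node = dev_to_node(vdev->dev.parent);
        struct my_vq_state *st;

        st = kzalloc_node(sizeof(*st), GFP_KERNEL, node);
        if (!st)
            return -ENOMEM;

        st->node = node;
        vdev->priv = st;
        /* ...virtqueues and remaining per-queue state would be allocated with
         * the same node hint... */
        return 0;
    }

If dev_to_node() returns NUMA_NO_NODE because the node is unknown, kzalloc_node() behaves like a plain kzalloc(), so the fallback path needs no special casing.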
2018 Sep 14
1
Re: NUMA issues on virtualized hosts
Hello again, when the iozone writes are slow, this is how slabtop looks:
62476752 62476728 0% 0.10K 1601968 39 6407872K buffer_head
1000678  999168   0% 0.56K 142954  7  571816K  radix_tree_node
132184   125911   0% 0.03K 1066    124 4264K   kmalloc-32
118496   118224   0% 0.12K 3703    32  14812K  kmalloc-node
73206    56467    0% 0.19K 3486    21
2012 Jul 26
3
[PATCH v8] Some automatic NUMA placement documentation
About rationale, usage and (some small bits of) API. Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com> Acked-by: Ian Campbell <ian.campbell@citrix.com> ---
Changes from v7:
* avoid referring to the 4.2 release as "upcoming".
* libxl placement disabling key explicitly mentioned.
* limit of max 16 NUMA nodes explicitly mentioned.
Changes from v6:
* text updated to
2018 Sep 18
1
Re: NUMA issues on virtualized hosts
On 09/17/2018 04:59 PM, Lukas Hejtmanek wrote: > Hello, > > so the current domain configuration: > <cpu mode='host-passthrough'><topology sockets='8' cores='4' threads='1'/><numa><cell cpus='0-3' memory='62000000' /><cell cpus='4-7' memory='62000000' /><cell cpus='8-11'