similar to: [PATCH 0/6] add NUMA support to Xen

Displaying 20 results from an estimated 300 matches similar to: "[PATCH 0/6] add NUMA support to Xen"

2005 Dec 16
3
[PATCH] 0/7 xen: Add basic NUMA support
The patch set adds basic NUMA support to Xen (hypervisor only). We borrowed from Linux its support for NUMA SRAT table parsing, discontiguous memory tracking (mem chunks), and CPU support (node_to_cpumask etc.). The hypervisor parses the SRAT tables and constructs mappings for each node, such as node-to-cpu mappings and memory-range-to-node mappings. Using this information, we also modified the
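The two mappings the excerpt describes (node-to-cpu and memory-range-to-node) can be sketched as follows. This is an illustrative Python sketch, not Xen's actual data structures; the names `node_to_cpumask` and the chunk layout follow the Linux conventions the patch says it borrows.

```python
# Hypothetical sketch of the SRAT-derived mappings the patch set describes.
# Not real hypervisor code; the helpers and their signatures are invented here.

def build_node_to_cpumask(cpu_to_node):
    """Invert a per-cpu node assignment into a node -> set-of-cpus map."""
    node_to_cpumask = {}
    for cpu, node in enumerate(cpu_to_node):
        node_to_cpumask.setdefault(node, set()).add(cpu)
    return node_to_cpumask

def node_of_address(mem_chunks, addr):
    """Find which node owns a physical address, given (start, end, node) chunks."""
    for start, end, node in mem_chunks:
        if start <= addr < end:
            return node
    return None
```

For example, `build_node_to_cpumask([0, 0, 1, 1])` yields `{0: {0, 1}, 1: {2, 3}}`, and a lookup against a single-chunk map returns that chunk's node for any address inside its range.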
2006 Oct 04
2
NUMA support on Xen ?
Hi, I am a Masters student from Carnegie Mellon University. I am looking for a research topic for an Advanced OS & DS course we have. I wanted to know what the current support for NUMA on Xen is. Does it support the IBM x440 and AMD64 Opteron? Also, does the Xen scheduler do NUMA-aware scheduling so it does not degrade VM performance? My group is currently looking into Scheduling
2006 Sep 29
4
[PATCH 4/6] xen: export NUMA topology in physinfo hcall
This patch modifies the physinfo hcall to export NUMA CPU and memory topology information. The new physinfo hcall is integrated into libxc and xend (xm info specifically). Included in this patch is a minor tweak to xm-test's xm info testcase. The new fields in xm info are:
nr_nodes   : 4
mem_chunks : node0:0x0000000000000000-0x0000000190000000
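The `mem_chunks` field format shown in the excerpt (`node0:0x...-0x...`) is simple to parse. The sketch below is hypothetical tooling, not part of the patch; only the string format comes from the excerpt.

```python
# Hedged sketch: parse one mem_chunks entry of the form
# "node0:0x0000000000000000-0x0000000190000000" into (node, start, end).
# The helper itself is invented for illustration.

def parse_mem_chunk(chunk):
    node_part, range_part = chunk.split(":")
    start_s, end_s = range_part.split("-")
    return (int(node_part[len("node"):]), int(start_s, 16), int(end_s, 16))
```

For the example value above this yields node 0 with a range ending at 0x190000000 (6.25 GiB).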
2006 Jul 31
2
Problem with all possible combinations
Dear R Users, Suppose I have a dataset like this:
a      b
39700  485.00
39300  485.00
39100  480.00
38800  487.00
38800  492.00
39300  507.00
39500  493.00
39400  494.00
39500  494.00
39100  494.00
39200  490.00
Now I want to get a-b for all possible combinations of a and b. Using two 'for' loops it is easy to calculate. But a problem arises when the row length of the data set is
2008 Jul 04
0
[PATCH 2/4] hvm: NUMA guest: extend populate_physmap to use a node
To make use of the new node-aware memop hypercall, the xc_domain_memory_populate_physmap function is extended by a node parameter. Passing XENMEM_DEFAULT_NODE mimics the current behavior. Signed-off-by: Andre Przywara <andre.przywara@amd.com>
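The fallback behavior the patch describes (a node parameter where a sentinel keeps the old node-agnostic allocation) can be sketched like this. This is not libxc's real code: the allocator, its signature, and the sentinel's numeric value are all assumptions made for illustration; only the names `populate_physmap` and `XENMEM_DEFAULT_NODE` come from the excerpt.

```python
# Illustrative sketch of the described node-parameter semantics.
# XENMEM_DEFAULT_NODE's actual value in Xen may differ; 0xFF is assumed here.
XENMEM_DEFAULT_NODE = 0xFF

def populate_physmap(free_pages_per_node, nr_pages, node=XENMEM_DEFAULT_NODE):
    """Pick a node to allocate nr_pages from; DEFAULT means 'any node with room'."""
    if node == XENMEM_DEFAULT_NODE:
        # Old behavior: no node preference, take the first node with enough room.
        for n, free in free_pages_per_node.items():
            if free >= nr_pages:
                return n
        return None
    # New behavior: honor the caller's node if it can satisfy the request.
    return node if free_pages_per_node.get(node, 0) >= nr_pages else None
```

Callers that pass the sentinel see exactly the pre-patch placement, while NUMA-aware callers can steer allocations to a specific node.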
2010 Sep 10
0
How to download / snapshot Dulloor's NUMA patches
I have been following Dulloor's patch series for NUMA. The last patch set was submitted in Aug 2010 (titled NUMA v2). I'm interested in testing / experimenting with his NUMA changes but cannot find them in http://xenbits.xensource.com/xen-unstable.hg. Is there some way I can download or clone the Xen branch where Dulloor's patches have been applied? Thanks. Dante
2008 Jun 24
1
Xen / NUMA problems
Hi folks, we are using a Tyan TK8W 2885 mainboard (latest BIOS) w/ 2 Dual Core Opteron 280EE and 8GB of RAM (4GB per socket). Furthermore we run CentOS 5.1 w/ Xen 3.2.1 (built from SRPM). We also tried 3.2.0. I tried both the CentOS 5.1 Xen kernel and the latest RHEL 5.2 kernel, but we do not get two NUMA domains as (in my opinion) we are supposed to. Do we need to recompile anything?
2009 Nov 12
0
Is NUMA working correctly?
Hi Everyone, I'm trying to bind CPU and memory usage to particular cores using some quad-CPU 16-core Opterons. They have 64GB RAM, 16GB per node. It seems that xm info shows it is not working as expected, though; below are details for the first node:
Name      ID  Mem   VCPUs  State  Time(s)
Domain-0  0   4096  2
2008 Sep 05
0
3.2.1+ HVM + HAP + NUMA - Poor Memory Performance
Hi Everyone, I am running 3.2.1 on CentOS 5.2 with HAP enabled, NUMA enabled, ACPI enabled and the dom0 allocated 512MB. I have set up a single-core 1GB VM for performance testing under Windows 2008 Server. Most CPU results are within a few percent of the theoretical max, but memory performance is about half what I expected. I get 3.22GB/s Sandra 2009 memory performance for a single Opteron 8350
2010 Apr 30
0
Do we need ACPI/APIC/NUMA/Schedule in dom0 ?
Hello all, my machine is a dual Xeon X5520 server with NUMA support. I have some questions about the Xen dom0 kernel. Since the Xen hypervisor already provides vCPU scheduling, do we need to compile the following features into the dom0 kernel? - Tickless System (Dynamic Ticks) - High Resolution Timer Support - AMD IOMMU support - SMT (Hyperthreading) scheduler support - Multi-core scheduler support
2008 Mar 14
1
[PATCH] Allow explicit NUMA placements of guests
Hi, this patch introduces a new config file option (numanodes=[x]) to specify a list of valid NUMA nodes for a guest. This extends (but does not replace) the recently introduced automatic placement. If several nodes are given, the current algorithm will choose one of them. If none of the given nodes has enough memory, this falls back to the automatic placement. Signed-off-by: Andre
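A guest config using the option might look like the fragment below. Only the `numanodes=[x]` option name comes from the patch description; the other settings and their values are a generic illustrative xm-style config (which is Python syntax), not taken from the patch.

```python
# Hypothetical xm guest config fragment using the numanodes option
# introduced by the patch; all values are illustrative.
name = "guest1"
memory = 2048
vcpus = 2
numanodes = [0, 1]  # restrict placement to NUMA nodes 0 and 1
```

With several nodes listed, the placement algorithm picks one of them; if none has enough free memory, placement falls back to the automatic mechanism.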
2005 Oct 09
0
RE: x86-64 Net Performance [was: Opteron server and NUMA]
> On 10/9/05, Alexander Charbonnet <alexander@charbonnet.com> wrote: > > compiled into Debian''s glibc package. I can recompile and run that > > test if you think it''s worth doing. > > Very interesting number. Have you tried doing the same tests > between two different domUs or dom0 and a domU of the same type? > > If you do recompile you
2010 Nov 12
0
Announce: Auto/Lazy-migration Patches RFC on linux-numa list
At last week's LPC, there was some interest in my patches for auto/lazy migration to improve locality and possibly performance of unpinned guest VMs on a NUMA platform. As a result of these conversations I have reposted the patches [4 series, ~40 patches] as RFCs to the linux-numa list. Links to the threads are given below. I have rebased the patches atop the 3 Nov 2010 mmotm series [2.6.36 + 3nov mmotm].
2012 Sep 21
0
picking a NUMA cell for pinning using virsh freecell
Hi, I want to pin the vCPU of a guest to a pCPU. The docs clearly say https://access.redhat.com/knowledge/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Virtualization_Administration_Guide/ch09s04.html "Locking a guest to a particular NUMA node offers no benefit if that node does not have sufficient free memory for that guest. libvirt stores information on the free memory available on
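The selection rule the quoted docs imply (only pin to a cell that can actually hold the guest) can be sketched as below. The input shape mimics per-cell free-memory figures of the kind `virsh freecell` reports; the helper itself and the pick-the-fullest policy are assumptions for illustration, not libvirt behavior.

```python
# Hedged sketch: choose a NUMA cell for pinning, given free KiB per cell
# (as one might collect from `virsh freecell`) and the guest's requirement.
# Policy (most free memory wins) is an invented illustrative heuristic.

def pick_cell(free_kib_per_cell, required_kib):
    """Return the cell with the most free memory that fits the guest, or None."""
    candidates = {c: f for c, f in free_kib_per_cell.items() if f >= required_kib}
    if not candidates:
        return None
    return max(candidates, key=candidates.get)
```

If no cell has enough free memory, the function returns `None`, matching the docs' warning that locking to a starved node offers no benefit.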
2014 Oct 28
0
Generate (vCPU pinning) from host NUMA configuration doesn't act accordingly
Hi all, it seems that no matter how many vCPUs I allocate to the VM, the auto-generated vCPU pinning configuration won't cover more than 8 CPUs. Is that normal? If not, what should I do? Would vCPU pinning via virsh be more effective? Regards, Allen 2014-10-28 Allen Qiu
2015 Jan 26
0
Re: questions around using numatune/numa/schedinfo
On 23.01.2015 19:46, Chris Friesen wrote: > Hi, > > I'm running into some problems with libvirt and hoping someone can point > me at some instructions or maybe even help me out. > > > First, are there any requirements on qemu version in order to use the > "numatune" and/or "cpu/numa/cell" elements? Or do they use cgroups and > not the native
2019 Sep 19
0
[PATCH RFC v3 1/9] ACPI: NUMA: export pxm_to_node
Will be needed by virtio-mem to identify the node from a pxm. Cc: "Rafael J. Wysocki" <rjw at rjwysocki.net> Cc: Len Brown <lenb at kernel.org> Cc: linux-acpi at vger.kernel.org Signed-off-by: David Hildenbrand <david at redhat.com> --- drivers/acpi/numa.c | 1 + 1 file changed, 1 insertion(+) diff --git a/drivers/acpi/numa.c b/drivers/acpi/numa.c index
2019 Sep 23
0
[PATCH RFC v3 1/9] ACPI: NUMA: export pxm_to_node
On Mon 23-09-19 12:13:11, David Hildenbrand wrote: > On 19.09.19 16:22, David Hildenbrand wrote: > > Will be needed by virtio-mem to identify the node from a pxm. > > > > Cc: "Rafael J. Wysocki" <rjw at rjwysocki.net> > > Cc: Len Brown <lenb at kernel.org> > > Cc: linux-acpi at vger.kernel.org > > Signed-off-by: David Hildenbrand
2019 Dec 13
0
[PATCH RFC v4 01/13] ACPI: NUMA: export pxm_to_node
On 12.12.19 22:43, Rafael J. Wysocki wrote: > On Thursday, December 12, 2019 6:11:25 PM CET David Hildenbrand wrote: >> Will be needed by virtio-mem to identify the node from a pxm. >> >> Cc: "Rafael J. Wysocki" <rjw at rjwysocki.net> >> Cc: Len Brown <lenb at kernel.org> >> Cc: linux-acpi at vger.kernel.org >> Signed-off-by: David