similar to: NUMA revisited

Displaying 20 results from an estimated 4000 matches similar to: "NUMA revisited"

2018 Sep 14
0
Re: NUMA issues on virtualized hosts
Hello, ok, I found that cpu pinning was wrong, so I corrected it to be 1:1. The issue with iozone remains the same. The spec is running, however, it runs slower than the 1-NUMA case. The corrected XML looks as follows: <cpu mode='host-passthrough'><topology sockets='8' cores='4' threads='1'/><numa><cell cpus='0-3'
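A minimal sketch of what a 1:1 vCPU pinning plus an 8-cell guest NUMA layout can look like in libvirt domain XML; the vCPU count, host CPU numbers and cell sizes below are placeholders for illustration, not the poster's actual values:

  <vcpu placement='static'>32</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='0'/>
    <vcpupin vcpu='1' cpuset='1'/>
    <!-- ...one vcpupin per vCPU, up to vcpu='31' cpuset='31'... -->
  </cputune>
  <cpu mode='host-passthrough'>
    <topology sockets='8' cores='4' threads='1'/>
    <numa>
      <cell id='0' cpus='0-3' memory='62000000' unit='KiB'/>
      <cell id='1' cpus='4-7' memory='62000000' unit='KiB'/>
      <!-- ...cells 2-7 follow the same pattern... -->
    </numa>
  </cpu>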
2018 Sep 14
1
Re: NUMA issues on virtualized hosts
Hello again, this is from when iozone writes slowly. This is what slabtop looks like:
62476752 62476728 0% 0.10K 1601968 39 6407872K buffer_head
1000678 999168 0% 0.56K 142954 7 571816K radix_tree_node
132184 125911 0% 0.03K 1066 124 4264K kmalloc-32
118496 118224 0% 0.12K 3703 32 14812K kmalloc-node
73206 56467 0% 0.19K 3486 21
2015 May 22
2
libvirt with gcc5 Test failing
Hello! I'm trying to compile libvirt using GCC 5.1 but one of the tests is failing and I have no idea why :( Hopefully someone of you could help me; here is part of my log:
==========================================
libvirt 1.2.14: tests/test-suite.log
==========================================
# TOTAL: 109
# PASS: 107
# SKIP: 1
# XFAIL: 0
# FAIL: 1
# XPASS: 0
# ERROR: 0
.. contents::
2015 Feb 09
0
Re: HugePages - can't start guest that requires them
First I'll quickly summarize my understanding of how to configure numa... In "//memoryBacking/hugepages/page[@nodeset]" I am telling libvirt to use hugepages for the guest, and to get those hugepages from a particular host NUMA node. In "//numatune/memory[@nodeset]" I am telling libvirt to pin the memory allocation to the guest from a particular host numa node. In
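A sketch of the two elements being contrasted here, with made-up page sizes and node numbers; note the correction later in the thread that the hugepages <page/> @nodeset refers to guest NUMA nodes, while the numatune/memory @nodeset refers to host NUMA nodes:

  <memoryBacking>
    <hugepages>
      <!-- nodeset here selects GUEST NUMA node(s) to back with hugepages -->
      <page size='2048' unit='KiB' nodeset='0'/>
    </hugepages>
  </memoryBacking>
  <numatune>
    <!-- nodeset here selects HOST NUMA node(s) to allocate guest memory from -->
    <memory mode='strict' nodeset='0-1'/>
  </numatune>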
2018 Sep 14
3
NUMA issues on virtualized hosts
Hello, I have a cluster with AMD EPYC 7351 CPUs, two CPUs per node, in a performance 8-NUMA configuration. This is from the hypervisor:
[root@hde10 ~]# lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 2
NUMA
2018 Sep 17
0
Re: NUMA issues on virtualized hosts
Hello, so the current domain configuration: <cpu mode='host-passthrough'><topology sockets='8' cores='4' threads='1'/><numa><cell cpus='0-3' memory='62000000' /><cell cpus='4-7' memory='62000000' /><cell cpus='8-11' memory='62000000' /><cell cpus='12-15'
2015 Feb 20
0
Re: HugePages - can't start guest that requires them
On Tue, Feb 10, 2015 at 1:14 AM, Michal Privoznik <mprivozn@redhat.com> wrote: > On 09.02.2015 18:19, G. Richard Bellamy wrote: >> First I'll quickly summarize my understanding of how to configure numa... >> >> In "//memoryBacking/hugepages/page[@nodeset]" I am telling libvirt to >> use hugepages for the guest, and to get those hugepages from a
2015 Feb 10
2
Re: HugePages - can't start guest that requires them
On 09.02.2015 18:19, G. Richard Bellamy wrote: > First I'll quickly summarize my understanding of how to configure numa... > > In "//memoryBacking/hugepages/page[@nodeset]" I am telling libvirt to > use hugepages for the guest, and to get those hugepages from a > particular host NUMA node. No, @nodeset refers to guest NUMA nodes. > > In
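To illustrate the correction, a hypothetical pairing with invented node numbers and sizes: the <page/> @nodeset names a guest cell defined under <cpu>/<numa>, not a host node:

  <cpu>
    <numa>
      <cell id='0' cpus='0-3' memory='4194304' unit='KiB'/>
      <cell id='1' cpus='4-7' memory='4194304' unit='KiB'/>
    </numa>
  </cpu>
  <memoryBacking>
    <hugepages>
      <!-- back only guest NUMA node 1 with 2 MiB hugepages -->
      <page size='2048' unit='KiB' nodeset='1'/>
    </hugepages>
  </memoryBacking>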
2015 Jan 26
0
Re: questions around using numatune/numa/schedinfo
On 23.01.2015 19:46, Chris Friesen wrote: > Hi, > > I'm running into some problems with libvirt and hoping someone can point > me at some instructions or maybe even help me out. > > > First, are there any requirements on qemu version in order to use the > "numatune" and/or "cpu/numa/cell" elements? Or do they use cgroups and > not the native
2014 Jun 02
0
numa support question on centos 6.5
Hi, all. The VM can't start when using NUMA on CentOS 6.5 (kernel: kernel-2.6.32-431.17.1.el6.x86_64, qemu-kvm: qemu-kvm-0.12.1.2-2.415.el6_5.8.x86_64). My numa setting in the VM XML is the following: -------------------- <numatune> <memory mode='strict' nodeset='1'/> </numatune> -------------------- When 'nodeset' is set to '0', the
2015 Jan 23
3
questions around using numatune/numa/schedinfo
Hi, I'm running into some problems with libvirt and hoping someone can point me at some instructions or maybe even help me out. First, are there any requirements on qemu version in order to use the "numatune" and/or "cpu/numa/cell" elements? Or do they use cgroups and not the native qemu numa support? Second, are there any instructions on how to set up cgroups? I
2017 Oct 02
2
NUMA split mode?
John R Pierce <pierce at hogranch.com> writes: > On 10/1/2017 8:38 AM, hw wrote: >> HP says that what they call "NUMA split mode" should be disabled in the >> BIOS of the Z800 workstation when running Linux. They are reasoning >> that Linux kernels do not support this feature and even might not boot >> if it's enabled. > > hmm, that workstation is
2018 Sep 17
2
Re: NUMA issues on virtualized hosts
On 09/14/2018 03:36 PM, Lukas Hejtmanek wrote: > Hello, > > ok, I found that cpu pinning was wrong, so I corrected it to be 1:1. The issue > with iozone remains the same. > > The spec is running, however, it runs slower than the 1-NUMA case. > > The corrected XML looks as follows: [Reformatted XML for better reading] <cpu mode="host-passthrough">
2015 Feb 04
2
Re: HugePages - can't start guest that requires them
*facepalm* Now that I'm re-reading the documentation it's obvious that <page/> and @nodeset are for the guest, "This tells the hypervisor that the guest should have its memory allocated using hugepages instead of the normal native page size." Pretty clear there. Thank you SO much for the guidance, I'll return to my tweaking. I'll report back here with my results.
2018 Sep 18
1
Re: NUMA issues on virtualized hosts
On 09/17/2018 04:59 PM, Lukas Hejtmanek wrote: > Hello, > > so the current domain configuration: > <cpu mode='host-passthrough'><topology sockets='8' cores='4' threads='1'/><numa><cell cpus='0-3' memory='62000000' /><cell cpus='4-7' memory='62000000' /><cell cpus='8-11'
2011 Jan 27
7
[PATCH]: xl: fix broken cpupool-numa-split
Hi, the implementation of xl cpupool-numa-split is broken. It basically deals with only one poolid, but there are two to consider: the one from the original root CPUpool, the other from the newly created one. On my machine the current output looks like: root@dosorca:/data/images# xl cpupool-numa-split libxl: error: libxl.c:2803:libxl_create_cpupool Could not create cpupool error on creating
2019 Sep 15
3
virsh -c lxc:/// setvcpus and <vcpu> configuration fails
Hi folks! I created a server with this XML file: <domain type='lxc'> <name>lxctest1</name> <uuid>227bd347-dd1d-4bfd-81e1-01052e91ffe2</uuid> <metadata> <libosinfo:libosinfo xmlns:libosinfo="http://libosinfo.org/xmlns/libvirt/domain/1.0"> <libosinfo:os id="http://centos.org/centos/6.9"/>
2008 Jun 24
1
Xen / NUMA problems
Hi folks, we are using a Tyan TK8W 2885 Mainboard (latest BIOS) w/ 2 Dual Core Opteron 280EE and 8GB of RAM (4GB per Socket). Furthermore we run CentOS 5.1 w/ Xen 3.2.1 (built from SRPM). We also tried 3.2.0. I tried both the CentOS 5.1 Xen kernel and the latest RHEL 5.2 kernel, but we do not get two NUMA domains as we (in my opinion) are supposed to. Do we need to recompile anything?
2010 Oct 27
2
Why is cpu-to-node mapping different between Xen 4.0.2-rc1-pre and Xen 4.1-unstable?
My system is a dual Xeon E5540 (Nehalem) HP Proliant DL380G6. When switching between Xen 4.0.2-rc1-pre and Xen 4.1-unstable I noticed that the NUMA info as shown by the Xen 'u' debug-key is different. More specifically, the CPU to node mapping is alternating for 4.0.2 and grouped sequentially for 4.1. This difference affects the allocation (wrt node/socket) of pinned VCPUs to the
2015 Feb 04
0
Re: HugePages - can't start guest that requires them
On 04.02.2015 01:59, G. Richard Bellamy wrote: > As I mentioned, I got the instances to launch... but they're only > taking HugePages from "Node 0", when I believe my setup should pull > from both nodes. > > [atlas] http://sprunge.us/FSEf > [prometheus] http://sprunge.us/PJcR [pasting interesting nits from both XMLs] <domain type='kvm'