search for: numatune

Displaying 20 results from an estimated 30 matches for "numatune".

2015 Jan 23
3
questions around using numatune/numa/schedinfo
Hi, I'm running into some problems with libvirt and hoping someone can point me at some instructions or maybe even help me out. First, are there any requirements on qemu version in order to use the "numatune" and/or "cpu/numa/cell" elements? Or do they use cgroups and not the native qemu numa support? Second, are there any instructions on how to set up cgroups? I initially hadn't had cgroups mounted and running "virsh schedinfo <domain>" gave an error. So I m...
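For readers landing on this thread: the two elements the question refers to look roughly like this in libvirt domain XML (a minimal sketch; node IDs, CPU ranges, and memory sizes are illustrative, not taken from the thread):

```xml
<!-- Sketch of the two elements discussed: numatune and cpu/numa/cell.
     All values below are illustrative placeholders. -->
<domain type='kvm'>
  <!-- Pin guest memory allocation to host NUMA node 0 -->
  <numatune>
    <memory mode='strict' nodeset='0'/>
  </numatune>
  <cpu>
    <!-- Define one guest NUMA cell with 4 vCPUs and 4 GiB of memory -->
    <numa>
      <cell id='0' cpus='0-3' memory='4194304' unit='KiB'/>
    </numa>
  </cpu>
</domain>
```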
2015 Jan 26
0
Re: questions around using numatune/numa/schedinfo
On 23.01.2015 19:46, Chris Friesen wrote: > Hi, > > I'm running into some problems with libvirt and hoping someone can point > me at some instructions or maybe even help me out. > > > First, are there any requirements on qemu version in order to use the > "numatune" and/or "cpu/numa/cell" elements? Or do they use cgroups and > not the native qemu numa support? For numatune you need a qemu with 'memory-backend-{ram,file}' objects. Those were introduced in qemu-2.1.0. As of libvirt, you'll need libvirt-1.2.7 at least. Although th...
2018 Sep 17
2
Re: NUMA issues on virtualized hosts
...7" cpuset="27"/> <vcpupin vcpu="28" cpuset="28"/> <vcpupin vcpu="29" cpuset="29"/> <vcpupin vcpu="30" cpuset="30"/> <vcpupin vcpu="31" cpuset="31"/> </cputune> <numatune> <memory mode="strict" nodeset="0-7"/> </numatune> However, this is not enough. This XML pins only vCPUs and not guest memory. So while, say, vCPU #0 is pinned onto physical CPU #0, the memory for guest NUMA #0 might be allocated at host NUMA #7 (for instance)....
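As this reply notes, vcpupin alone leaves memory placement to the host kernel. A per-cell pinning sketch (cell and node numbers are illustrative) combines vcpupin with memnode so each guest NUMA cell's memory follows its vCPUs:

```xml
<!-- Sketch: pin both vCPUs and each guest NUMA cell's memory.
     Cell and node numbers are illustrative placeholders. -->
<cputune>
  <vcpupin vcpu='0' cpuset='0'/>
  <vcpupin vcpu='1' cpuset='1'/>
</cputune>
<numatune>
  <!-- Guest cell 0 memory from host node 0, cell 1 from host node 1 -->
  <memnode cellid='0' mode='strict' nodeset='0'/>
  <memnode cellid='1' mode='strict' nodeset='1'/>
</numatune>
```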
2015 Feb 04
2
Re: HugePages - can't start guest that requires them
...ibvirt fail explicitly in this case. > > Moreover, you haven't pinned your guests onto any host numa nodes. This > means it's up to the host kernel and its scheduler where guest will take > memory from. And subsequently hugepages as well. I think you want to add: > > <numatune> > <memory mode='strict' nodeset='0'/> > </numatune> > > to guest XMLs, where @nodeset refers to host numa nodes and tells where > the guest should be placed. There are other modes too so please see > documentation to tune the XML to match your...
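Put together with hugepage backing, the configuration suggested in this reply could look like the following sketch (page size and node ID are assumptions, not from the thread):

```xml
<!-- Sketch: back guest memory with 2 MiB hugepages and pin the
     allocation to host NUMA node 0. Values are illustrative. -->
<memoryBacking>
  <hugepages>
    <page size='2048' unit='KiB'/>
  </hugepages>
</memoryBacking>
<numatune>
  <memory mode='strict' nodeset='0'/>
</numatune>
```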
2018 Sep 17
0
Re: NUMA issues on virtualized hosts
...><vcpupin vcpu='27' cpuset='27' /><vcpupin vcpu='28' cpuset='28' /><vcpupin vcpu='29' cpuset='29' /><vcpupin vcpu='30' cpuset='30' /><vcpupin vcpu='31' cpuset='31' /></cputune> <numatune> <memnode cellid="0" mode="strict" nodeset="0"/> <memnode cellid="1" mode="strict" nodeset="1"/> <memnode cellid="2" mode="strict" nodeset="2"/> <memnode cellid="3" mode="...
2014 Feb 12
2
Re: Help? Running into problems with migrateToURI2() and virDomainDefCheckABIStability()
...n 02/10/2014 06:46 PM, Chris Friesen wrote: >> Hi, >> >> We've run into a problem with libvirt 1.1.2 and are looking for some comments >> on whether this is a bug or design intent. >> >> We're trying to use migrateToURI() but we're using a few things (numatune, >> vcpu mask, etc.) that may need adjustment during the migration. We found that >> migrateToURI2() mostly works if we use XML created by copying the domain XML >> from the running instance and modifying the appropriate sections. >> >> The problem that we're seei...
2015 May 22
2
libvirt with gcc5 Test failing
...... OK 325) QEMU XML-2-ARGV cputune-zero-shares ... OK 326) QEMU XML-2-ARGV cputune-iothreadsched-toomuch ... OK 327) QEMU XML-2-ARGV cputune-vcpusched-overlap ... OK 328) QEMU XML-2-ARGV cputune-numatune ... OK 329) QEMU XML-2-ARGV numatune-memory ... OK 330) QEMU XML-2-ARGV numatune-memory-invalid-nodeset ... OK 331) QEMU XML-2-ARGV numatune-memnode ... libvirt: error : unsupported configuration: NUMA node 1 is u...
2015 Feb 09
0
Re: HugePages - can't start guest that requires them
First I'll quickly summarize my understanding of how to configure numa... In "//memoryBacking/hugepages/page[@nodeset]" I am telling libvirt to use hugepages for the guest, and to get those hugepages from a particular host NUMA node. In "//numatune/memory[@nodeset]" I am telling libvirt to pin the memory allocation to the guest from a particular host numa node. In "//numatune/memnode[@nodeset]" I am telling libvirt which guest NUMA node (cellid) should come from which host NUMA node (nodeset). In "//cpu/numa/cell[@id]"...
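The elements summarized in this post might fit together as in the following sketch (IDs, sizes, and node numbers are placeholders; a later reply in this thread corrects what some of the @nodeset attributes actually refer to):

```xml
<!-- Sketch combining the elements discussed in this thread.
     IDs, sizes, and node numbers are illustrative. -->
<memoryBacking>
  <hugepages>
    <page size='2048' unit='KiB' nodeset='0'/>
  </hugepages>
</memoryBacking>
<numatune>
  <!-- default placement for guest nodes not pinned via memnode -->
  <memory mode='strict' nodeset='0-1'/>
  <!-- map guest cell 0 to host node 0 -->
  <memnode cellid='0' mode='strict' nodeset='0'/>
</numatune>
<cpu>
  <numa>
    <cell id='0' cpus='0-3' memory='4194304' unit='KiB'/>
  </numa>
</cpu>
```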
2018 Sep 14
3
NUMA issues on virtualized hosts
...512K NUMA node0 CPU(s): 0-3 NUMA node1 CPU(s): 4-7 NUMA node2 CPU(s): 8-11 NUMA node3 CPU(s): 12-15 NUMA node4 CPU(s): 16-19 NUMA node5 CPU(s): 20-23 NUMA node6 CPU(s): 24-27 NUMA node7 CPU(s): 28-31 This is the virtual node configuration: (I tried different numatune settings but it was still the same) <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'> <name>one-55782</name> <vcpu><![CDATA[32]]></vcpu> <cputune> <shares>32768</sha...
2019 Sep 15
3
virsh -c lxc:/// setvcpus and <vcpu> configuration fails
..."> <libosinfo:os id="http://centos.org/centos/6.9"/> </libosinfo:libosinfo> </metadata> <memory unit='KiB'>1024000</memory> <currentMemory unit='KiB'>1024000</currentMemory> <vcpu>2</vcpu> <numatune> <memory mode='strict' placement='auto'/> </numatune> <resource> <partition>/machine</partition> </resource> <os> <type arch='x86_64'>exe</type> <init>/sbin/init</init> </os>...
2014 Feb 10
2
Help? Running into problems with migrateToURI2() and virDomainDefCheckABIStability()
Hi, We've run into a problem with libvirt 1.1.2 and are looking for some comments on whether this is a bug or design intent. We're trying to use migrateToURI() but we're using a few things (numatune, vcpu mask, etc.) that may need adjustment during the migration. We found that migrateToURI2() mostly works if we use XML created by copying the domain XML from the running instance and modifying the appropriate sections. The problem that we're seeing is that the serial console checking in...
2019 May 02
0
NUMA revisited
...g into the current numa settings for a large-ish libvirt/qemu based setup and I ended up having a couple of questions: 1) Has kernel.numa_balancing completely replaced numad or is there still a time and place for numad when we have a modern kernel? 2) Should I pin vCPUs to numa nodes and/or use numatune at all, when using kernel.numa_balancing? 3) The libvirt domain xml elements for vcpu and numatune.memory have placement options. According to the docs, setting them to auto will query numad for a good placement. Should I keep numad running just for this? 4) Should I still expose the numa topo...
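For question 3, the automatic placement the poster mentions is expressed like this (a sketch; vCPU count is illustrative, and with placement='auto' libvirt consults numad when the domain starts):

```xml
<!-- Sketch: let numad choose placement for vCPUs and memory.
     The vCPU count is an illustrative placeholder. -->
<vcpu placement='auto'>4</vcpu>
<numatune>
  <memory mode='strict' placement='auto'/>
</numatune>
```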
2014 Jun 02
0
numa support question on centos 6.5
Hi, All The vm can't start when using numa on centos 6.5 (kernel: kernel-2.6.32-431.17.1.el6.x86_64, qemu-kvm: qemu-kvm-0.12.1.2-2.415.el6_5.8.x86_64). My numa setting in the vm xml is the following: -------------------- <numatune> <memory mode='strict' nodeset='1'/> </numatune> -------------------- When 'nodeset' is set to '0', the vm can start; however, when setting it to 1, it can't start and reports the following error: ---------------------------- error: Failed to star...
2018 Sep 14
1
Re: NUMA issues on virtualized hosts
...<vcpupin vcpu='27' cpuset='27' /><vcpupin vcpu='28' cpuset='28' /><vcpupin vcpu='29' cpuset='29' /><vcpupin vcpu='30' cpuset='30' /><vcpupin vcpu='31' cpuset='31' /></cputune> > <numatune><memory mode='strict' nodeset='0-7'/></numatune> > > In this case, the first part took more than 1700 seconds. 1-NUMA config > finishes in 1646 seconds. > > Hypervisor with 1-NUMA config finishes in 1470 seconds, the hypervisor with > 8-NUMA confi...
2015 Feb 10
2
Re: HugePages - can't start guest that requires them
...y understanding of how to configure numa... > > In "//memoryBacking/hugepages/page[@nodeset]" I am telling libvirt to > use hugepages for the guest, and to get those hugepages from a > particular host NUMA node. No, @nodeset refers to guest NUMA nodes. > > In "//numatune/memory[@nodeset]" I am telling libvirt to pin the > memory allocation to the guest from a particular host numa node. The <memory/> element tells libvirt what to do with guest NUMA nodes that are not explicitly pinned. > In "//numatune/memnode[@nodeset]" I am telling libvirt which guest...
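A sketch reflecting the corrections in this reply (values are illustrative): hugepages @nodeset names guest NUMA nodes, memnode maps guest cells to host nodes, and memory covers any guest cells left unpinned:

```xml
<!-- Sketch of the corrected semantics. Values are illustrative. -->
<memoryBacking>
  <hugepages>
    <!-- guest NUMA node 1 is backed by hugepages; @nodeset here
         names GUEST nodes, per the correction in this reply -->
    <page size='2048' unit='KiB' nodeset='1'/>
  </hugepages>
</memoryBacking>
<numatune>
  <!-- fallback for guest nodes without an explicit memnode -->
  <memory mode='strict' nodeset='0'/>
  <!-- guest cell 1 is allocated from host node 1 -->
  <memnode cellid='1' mode='strict' nodeset='1'/>
</numatune>
```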
2018 Sep 14
0
Re: NUMA issues on virtualized hosts
...><vcpupin vcpu='27' cpuset='27' /><vcpupin vcpu='28' cpuset='28' /><vcpupin vcpu='29' cpuset='29' /><vcpupin vcpu='30' cpuset='30' /><vcpupin vcpu='31' cpuset='31' /></cputune> <numatune><memory mode='strict' nodeset='0-7'/></numatune> In this case, the first part took more than 1700 seconds. 1-NUMA config finishes in 1646 seconds. Hypervisor with 1-NUMA config finishes in 1470 seconds, the hypervisor with 8-NUMA config finishes in 900 seconds. On...
2015 Feb 04
2
Re: HugePages - can't start guest that requires them
As I mentioned, I got the instances to launch... but they're only taking HugePages from "Node 0", when I believe my setup should pull from both nodes. [atlas] http://sprunge.us/FSEf [prometheus] http://sprunge.us/PJcR 2015-02-03 16:51:48 root@eanna i ~ # virsh start atlas Domain atlas started 2015-02-03 16:51:58 root@eanna i ~ # virsh start prometheus Domain prometheus started
2015 Apr 11
0
issue on fedora21 with libvirt-1.2.13 and 1.2.14 - containers won't start at all
...'> <name>testmaster</name> <uuid>18989592-f964-4d51-90af-7ecf7719b758</uuid> <memory unit='KiB'>1048576</memory> <currentMemory unit='KiB'>1048576</currentMemory> <vcpu placement='auto'>1</vcpu> <numatune> <memory mode='strict' placement='auto'/> </numatune> <resource> <partition>/machine</partition> </resource> <os> <type arch='x86_64'>exe</type> <init>/sbin/init</init> </os>...
2013 Aug 07
2
Is there any virsh command to setup cpusettune for lxc?
Hi Gao feng, I noticed one of your patches, which adds cpuset cgroup support for lxc, has been merged in libvirt 1.0.4. But I can't find any virsh command to set cpusettune for an lxc container. Is there one? And how can I configure cpusettune for a running lxc container? Thanks ------------------ Best regards! GuanQiang
2015 Feb 04
0
Re: HugePages - can't start guest that requires them
...I wonder if we should make libvirt fail explicitly in this case. Moreover, you haven't pinned your guests onto any host numa nodes. This means it's up to the host kernel and its scheduler where the guest will take memory from. And subsequently hugepages as well. I think you want to add: <numatune> <memory mode='strict' nodeset='0'/> </numatune> to guest XMLs, where @nodeset refers to host numa nodes and tells where the guest should be placed. There are other modes too so please see the documentation to tune the XML to match your use case perfectly. Michal