Displaying 20 results from an estimated 20000 matches similar to: "Does libvirt lxc driver support "cpuset" attribute?"
2018 Sep 17
2
Re: NUMA issues on virtualized hosts
On 09/14/2018 03:36 PM, Lukas Hejtmanek wrote:
> Hello,
>
> ok, I found that the CPU pinning was wrong, so I corrected it to be 1:1. The issue
> with iozone remains the same.
>
> The spec is running; however, it runs slower than in the 1-NUMA case.
>
> The corrected XML looks like follows:
[Reformatted XML for better reading]
<cpu mode="host-passthrough">
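A quick way to double-check from the hypervisor that the 1:1 pinning took effect; a minimal sketch, assuming the domain is named 'vm1':
virsh vcpupin vm1            # print the current vCPU:pCPU affinity map
virsh vcpuinfo vm1           # per-vCPU placement and state as libvirt sees it
numastat -p $(pgrep -f 'qemu.*vm1' | head -n1)   # per-NUMA-node memory of the qemu process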
2018 Sep 14
3
NUMA issues on virtualized hosts
Hello,
I have a cluster with AMD EPYC 7351 CPUs, two CPUs per node, in an
8-NUMA performance configuration:
This is from the hypervisor:
[root@hde10 ~]# lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 2
NUMA
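For reference, the NUMA layout itself can be inspected like this (numactl assumed to be installed):
lscpu | grep -i numa         # NUMA node count and per-node CPU lists
numactl --hardware           # node sizes, free memory and the distance matrix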
2019 Sep 15
3
virsh -c lxc:/// setvcpus and <vcpu> configuration fails
Hi folks!
i created a server with this XML file:
<domain type='lxc'>
<name>lxctest1</name>
<uuid>227bd347-dd1d-4bfd-81e1-01052e91ffe2</uuid>
<metadata>
<libosinfo:libosinfo
xmlns:libosinfo="http://libosinfo.org/xmlns/libvirt/domain/1.0">
<libosinfo:os id="http://centos.org/centos/6.9"/>
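For context, the vCPU count of such a domain would normally be changed with something like the following (domain name taken from the XML above); whether the LXC driver honors it is exactly what this thread is about:
virsh -c lxc:/// setvcpus lxctest1 2 --config   # update the persistent <vcpu> value
virsh -c lxc:/// dominfo lxctest1               # confirm the new vCPU count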
2018 Sep 14
1
Re: NUMA issues on virtualized hosts
Hello again,
when iozone writes slowly, this is what slabtop looks like:
    OBJS   ACTIVE  USE OBJ SIZE   SLABS OBJ/SLAB CACHE SIZE NAME
62476752 62476728   0%    0.10K 1601968       39   6407872K buffer_head
 1000678   999168   0%    0.56K  142954        7    571816K radix_tree_node
  132184   125911   0%    0.03K    1066      124      4264K kmalloc-32
  118496   118224   0%    0.12K    3703       32     14812K kmalloc-node
   73206    56467   0%    0.19K    3486       21
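These numbers can be re-checked at any time; a minimal sketch, run as root:
slabtop --once | head -n 15          # one-shot snapshot of the largest slab caches
grep buffer_head /proc/slabinfo      # raw counters for the dominant cache above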
2014 Jan 15
0
Re: Does libvirt lxc driver support "cpuset" attribute?
On Wed, Jan 15, 2014 at 05:49:23AM +0000, WANG Cheng D wrote:
> Dear all
>
> I allocate only one vcpu for the container by the following
> statement, that is, I want to pin the vcpu to physical core "2".
> <vcpu placement='static' cpuset="2" >1</vcpu>
> My host has 4 physical cores. Before the test, all 4 cores are
> idle. After I run 4
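One way to see where the container's tasks actually land; a sketch assuming the domain is named 'mycontainer' (the libvirt_lxc process name pattern varies by version):
pid=$(pgrep -f 'libvirt_lxc.*mycontainer' | head -n1)
taskset -cp "$pid"        # prints the effective CPU affinity of the container process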
2018 Sep 18
1
Re: NUMA issues on virtualized hosts
On 09/17/2018 04:59 PM, Lukas Hejtmanek wrote:
> Hello,
>
> so the current domain configuration:
> <cpu mode='host-passthrough'>
>   <topology sockets='8' cores='4' threads='1'/>
>   <numa>
>     <cell cpus='0-3' memory='62000000'/>
>     <cell cpus='4-7' memory='62000000'/>
>     <cell cpus='8-11'
2013 Jun 14
0
can virsh set the cpuset attribute of <vcpu ..> (CPU Allocation)?
Is it possible to use virsh to set the cpuset attribute of the CPU
Allocation element in a domain?
<domain>
...
<vcpu placement='static' cpuset="1-4,^3,6" current="1">2</vcpu>
...
</domain>
I have seen that virsh vcpupin and virsh emulatorpin can be used to
query and set the cpusets of the <vcpupin> and <emulatorpin>
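Both of those commands take the target cpuset directly; a sketch with a hypothetical domain 'vm1':
virsh vcpupin vm1 0 1-4,^3,6 --config   # pins vCPU 0; writes a <vcpupin> element
virsh emulatorpin vm1 6 --config        # writes the <emulatorpin> element
virsh vcpupin vm1                       # with no arguments, queries current pinning
Note that these manage children of <cputune> rather than the cpuset attribute of <vcpu> itself.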
2018 Sep 05
2
Domain vCPU threads affinity
Hello,
According to the docs, vcpupin will use either cgroups or sched_setaffinity
to pin vcpu threads to cpus. How is this decision made?
I observe differences even on different hosts featuring the same version of
libvirtd (1.3.1): on one host vcpupin affects cpuset.cpus (cgroup), and on
the other it affects vcpu thread affinity (observed through taskset).
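A way to observe which mechanism is in effect on a given host; a sketch assuming a running domain 'vm1' (cgroup paths vary across distributions and libvirt versions):
virsh vcpupin vm1                                      # what libvirt reports
cat /sys/fs/cgroup/cpuset/machine*/*vm1*/cpuset.cpus   # cgroup view, path assumed
taskset -cp $(pgrep -f 'qemu.*vm1' | head -n1)         # sched_setaffinity view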
Thanks,
Nikos
2014 Dec 22
2
why CPU pinning doesn't take effect when using lxc-enter-namespace to run an application
Dear all,
I want my container to run on the third CPU core and I define this with the following XML:
<vcpu placement="static" cpuset="3">1</vcpu>
When I run my application in a container terminal, I can see the application runs on the third core as expected.
When I run my application using lxc-enter-namespace, the CPU pinning doesn't take effect, i.e.,
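As a workaround sketch, the affinity can be forced explicitly when entering the namespace, assuming a domain named 'mycontainer' and that taskset exists inside the container ('/usr/bin/myapp' is a placeholder):
virsh -c lxc:/// lxc-enter-namespace mycontainer --noseclabel -- /usr/bin/taskset -c 3 /usr/bin/myapp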
2014 Mar 17
2
a question on vCPU setting for lxc
Dear all,
I am not clear about the 'vcpu' element for CPU allocation. I allocated 1 vCPU to my container; after I started the container, I ran 4 computation-intensive tasks in it and found that all 4 physical cores were 100% used (my host has 4 physical cores and no other application was running on the host except the container). That is, all available cores were used by the container.
2017 Apr 26
3
Tunnelled migrate Windows7 VMs halted
[moderator note: I'm forwarding a stripped down version of the original
mail which was rejected in the moderator queue. I stripped the 3.3
megabyte .tar.bz2 of the log file attachment, which is inappropriate for
a technical list. Either trim the log to the relevant portion, or host
the log externally and have your list email merely give a URL of the
externally-hosted file]
2014 Feb 12
2
Re: Help? Running into problems with migrateToURI2() and virDomainDefCheckABIStability()
On 02/11/2014 04:45 PM, Cole Robinson wrote:
> On 02/10/2014 06:46 PM, Chris Friesen wrote:
>> Hi,
>>
>> We've run into a problem with libvirt 1.1.2 and are looking for some comments
>> on whether this is a bug or design intent.
>>
>> We're trying to use migrateToURI() but we're using a few things (numatune,
>> vcpu mask, etc.) that may need
2012 Oct 16
1
cpuset not affecting real pid placement
Hi,
At least on 0.10.2, setting a cpuset doesn't match the real process
placement: the VM still consumes all available cores.
VM config:
.snip.
<vcpu placement='static' cpuset='0-5,12-17'>12</vcpu>
.snip.
for cpuset in $(find /cgroup/cpuset/libvirt/qemu/vmid/ -name cpuset.cpus) ; do grep 0-5 $cpuset ; done
got: empty response, i.e. 0-23 in my setup
expected: at least
2018 Sep 17
0
Re: NUMA issues on virtualized hosts
Hello,
so the current domain configuration:
<cpu mode='host-passthrough'>
  <topology sockets='8' cores='4' threads='1'/>
  <numa>
    <cell cpus='0-3' memory='62000000'/>
    <cell cpus='4-7' memory='62000000'/>
    <cell cpus='8-11' memory='62000000'/>
    <cell cpus='12-15'
2013 Jul 19
2
How to insert vcpupin in guest xml file
Hi all,
I am trying to add vcpupin to the guest XML file. I am working with OpenStack and the code I have is Python-bound. I looked through the code and found that the elements in the XML file are set in the get_guest_config function. The thing is, I am not able to set the vcpupin element; I tried guest.cputune_vcpupin but it's not working.
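For what it's worth, outside of the nova code path the element can be set directly with virsh; a sketch with a hypothetical instance name:
virsh vcpupin instance-00000001 0 2 --config    # adds <vcpupin vcpu='0' cpuset='2'/> under <cputune>
virsh dumpxml instance-00000001 | grep -A3 cputune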
Help me out, please!
Thanks.
~Peeyush Gupta
2023 May 12
1
Question regarding correct usage of CPU shares
Hi there,
I have a question regarding the shares option of the cputune section. I
want to illustrate my question with the following example. Let's assume
I have two virtual machines like the following on four dedicated cores
with two threads each:
VM1:
<cputune>
<shares>512</shares>
<vcpupin vcpu="0" cpuset="0"/>
<vcpupin
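Shares are relative weights, and they can also be set without editing the XML; a sketch, domain names and values assumed:
virsh schedinfo VM1 --set cpu_shares=512 --config
virsh schedinfo VM2 --set cpu_shares=1024 --config   # under contention, VM2 gets roughly 2x VM1's CPU time
When the host is otherwise idle, both machines can still use their pinned CPUs fully; shares only matter when the runnable load exceeds the available CPUs.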
2013 Jul 12
2
libvirtd-1.1.0 crashes when attempting to start some (but not all) LXC containers
Hello all,
I have two issues:
1) I am unable to start a seemingly correct LXC domain (I cloned it from a
working domain).
2) I am able to crash "libvirtd" by attempting to start the cloned domain,
but starting the original works just fine.
I humbly submit that item #2 is a bug - the "libvirtd" daemon should
never crash due to anything the "libvirt" client
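To turn a crash like this into an actionable report, libvirtd's debug logging can be enabled before reproducing; a sketch (log path assumed, 'cloned-domain' is a placeholder):
LIBVIRT_DEBUG=1 LIBVIRT_LOG_OUTPUTS="1:file:/tmp/libvirtd-debug.log" libvirtd
virsh -c lxc:/// start cloned-domain    # reproduce from another shell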
2018 Sep 14
0
Re: NUMA issues on virtualized hosts
Hello,
ok, I found that the CPU pinning was wrong, so I corrected it to be 1:1. The issue
with iozone remains the same.
The spec is running; however, it runs slower than in the 1-NUMA case.
The corrected XML looks like follows:
<cpu mode='host-passthrough'>
  <topology sockets='8' cores='4' threads='1'/>
  <numa>
    <cell cpus='0-3'
2015 Aug 12
1
Libvirt LXC vcpu doesn't seem to work
Hi,
I seem to have a problem when creating an LXC container through virsh.
While virsh -c lxc:/// dominfo <container> shows (for example) 2 vCPUs
as defined, if I run a CPU-intensive task (such as stress --cpu 10) it
will max out 10 CPU cores on the host.
If I echo "0" > /cgroup/cpuset/libvirt/lxc/<domain>/cpuset.cpus
then the container is properly confined to just
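The echo above confines the container only until the next restart; the persistent equivalent is the cpuset attribute on the vcpu element, e.g. via the following (domain name is a placeholder):
virsh -c lxc:/// edit mycontainer   # then set: <vcpu placement='static' cpuset='0-1'>2</vcpu>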
2019 Aug 29
0
[libvirtd] qemu_process: reset CPU affinity to all enabled CPUs, when runs in custom cpuset
Hello All,
Since version 4.5.0-23.el7 (Red Hat 7.7), when I launch a pinned VM,
libvirtd resets the CPU affinity to all enabled host CPUs if it runs in
a custom cpuset.
I can't reproduce this behavior with 4.5.0-10.el7_6.12 with the same
kernel version (Red Hat 7.7).
Libvirt runs in a custom cpuset 'libvirt', where the available CPUs are
restricted to 0,2,4,6,8.
And this