info@layer7.net
2019-Sep-15 10:21 UTC
[libvirt-users] virsh -c lxc:/// setvcpus and <vcpu> configuration fails
Hi folks!

I created a server with this XML file:

<domain type='lxc'>
  <name>lxctest1</name>
  <uuid>227bd347-dd1d-4bfd-81e1-01052e91ffe2</uuid>
  <metadata>
    <libosinfo:libosinfo xmlns:libosinfo="http://libosinfo.org/xmlns/libvirt/domain/1.0">
      <libosinfo:os id="http://centos.org/centos/6.9"/>
    </libosinfo:libosinfo>
  </metadata>
  <memory unit='KiB'>1024000</memory>
  <currentMemory unit='KiB'>1024000</currentMemory>
  <vcpu>2</vcpu>
  <numatune>
    <memory mode='strict' placement='auto'/>
  </numatune>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='x86_64'>exe</type>
    <init>/sbin/init</init>
  </os>
  <idmap>
    <uid start='0' target='200000' count='65535'/>
    <gid start='0' target='200000' count='65535'/>
  </idmap>
  <features>
    <privnet/>
  </features>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/libexec/libvirt_lxc</emulator>
    <filesystem type='mount' accessmode='mapped'>
      <source dir='/mnt'/>
      <target dir='/'/>
    </filesystem>
    <interface type='network'>
      <mac address='00:16:3e:3e:3e:bb'/>
      <source network='Public Network'/>
    </interface>
    <console type='pty'>
      <target type='lxc' port='0'/>
    </console>
  </devices>
</domain>

I would expect it to have 2 CPU cores and 1 GB of RAM.

The RAM configuration works.
The CPU configuration does not:

[root@lxctest1 ~]# lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                24
On-line CPU(s) list:   0-23
Thread(s) per core:    2
Core(s) per socket:    6
Socket(s):             2
NUMA node(s):          2
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 62
Model name:            Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz
Stepping:              4
CPU MHz:               2399.950
BogoMIPS:              4205.88
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              15360K
NUMA node0 CPU(s):     0,2,4,6,8,10,12,14,16,18,20,22
NUMA node1 CPU(s):     1,3,5,7,9,11,13,15,17,19,21,23

It gives me all CPUs from the host. I also tried it with

<cpu>
  <topology sockets='1' cores='2' threads='1'/>
</cpu>

That did not help either.

I also tried to modify the vcpus through virsh:

# virsh -c lxc:/// setvcpus lxctest1 2
error: this function is not supported by the connection driver: virDomainSetVcpus

That did not work either.

This happens on:

CentOS 7, kernel 5.1.15-1.el7.elrepo.x86_64
# virsh -V
Virsh command line tool of libvirt 4.5.0
See web site at https://libvirt.org/
Compiled with support for:
 Hypervisors: QEMU/KVM LXC ESX Test
 Networking: Remote Network Bridging Interface netcf Nwfilter VirtualPort
 Storage: Dir Disk Filesystem SCSI Multipath iSCSI LVM RBD Gluster ZFS
 Miscellaneous: Daemon Nodedev SELinux Secrets Debug DTrace Readline

and also on Fedora 30, kernel 5.2.9-200.fc30.x86_64
# virsh -V
Virsh command line tool of libvirt 5.1.0
See web site at https://libvirt.org/
Compiled with support for:
 Hypervisors: QEMU/KVM LXC LibXL OpenVZ VMware PHYP VirtualBox ESX Hyper-V Test
 Networking: Remote Network Bridging Interface netcf Nwfilter VirtualPort
 Storage: Dir Disk Filesystem SCSI Multipath iSCSI LVM RBD Sheepdog Gluster ZFS
 Miscellaneous: Daemon Nodedev SELinux Secrets Debug DTrace Readline

Can anyone please tell me what I am doing wrong here?

Thank you!

Greetings
Oliver
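For reference, one way to see what libvirt actually did with the <vcpu> setting is to compare the live domain XML against the cgroups created for the domain on the host. This is only a sketch: it assumes a cgroup v1 layout, and the directory names containing "lxctest1" vary between libvirt versions.

# what the live configuration contains for CPU settings
virsh -c lxc:/// dumpxml lxctest1 | grep -Ei 'vcpu|cputune|cpuset'

# which cgroup directories libvirt created for the domain, per controller
find /sys/fs/cgroup -type d -name '*lxctest1*' 2>/dev/null

# if a cpuset directory exists, this is the CPU list actually enforced
find /sys/fs/cgroup/cpuset -path '*lxctest1*' -name cpuset.cpus -exec cat {} \; 2>/dev/null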
Martin Kletzander
2019-Sep-16 08:58 UTC
Re: [libvirt-users] virsh -c lxc:/// setvcpus and <vcpu> configuration fails
On Sun, Sep 15, 2019 at 12:21:08PM +0200, info@layer7.net wrote:
> Hi folks!
>
> I created a server with this XML file:
>
> [...]
>
>   <memory unit='KiB'>1024000</memory>
>   <currentMemory unit='KiB'>1024000</currentMemory>
>   <vcpu>2</vcpu>
>
> [...]
>
> I would expect it to have 2 CPU cores and 1 GB of RAM.
>
> The RAM configuration works.
> The CPU configuration does not:

You probably checked /proc/meminfo. That is provided by libvirt using a FUSE
filesystem, but at least it is guaranteed thanks to cgroups. We do not (and I
don't think we even can, at least reliably) do that with cpuinfo.

[...]

> It gives me all CPUs from the host.

Although if you ran some perf benchmark it should just cap at 2 CPUs.

> I also tried it with
>
> <cpu>
>   <topology sockets='1' cores='2' threads='1'/>
> </cpu>

We should not allow this, IMO. The reason is that we cannot guarantee or even
emulate this (or even the number of CPUs, for that matter). That's not how
containers work. We can provide /proc/cpuinfo through a FUSE filesystem, but
if the code actually asks the CPU directly there is no layer in which to
emulate the returned information.

> That did not help either.
>
> I also tried to modify the vcpus through virsh:
>
> # virsh -c lxc:/// setvcpus lxctest1 2
> error: this function is not supported by the connection driver: virDomainSetVcpus

It is expected that this does not work for LXC, but it does not make sense,
because if you look at the XML we do allow `<vcpu>2</vcpu>`.

> That did not work either.
>
> This happens on:

Unfortunately anywhere, for the reasons stated above. Ideally it should not be
possible to specify <vcpu/> at all, but rather just <cputune/>, but I don't
think we can change those semantics now that we have supported the former for
quite some time.
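A minimal way to observe the asymmetry described above is to compare the two /proc files inside the container with the memory cgroup limit on the host. This is a sketch: the cgroup path assumes a v1 layout, and the FUSE mount entry over /proc/meminfo may look different depending on the libvirt version.

# inside the container: meminfo reflects <memory> because libvirt overlays it
# with a FUSE mount, while cpuinfo is the host's procfs with no overlay
grep MemTotal /proc/meminfo
grep -c ^processor /proc/cpuinfo
grep meminfo /proc/mounts            # the FUSE mount over /proc/meminfo, if present

# on the host: the memory cap itself is enforced by the memory cgroup,
# independent of what the container reads from /proc
find /sys/fs/cgroup/memory -path '*lxctest1*' -name memory.limit_in_bytes -exec cat {} \;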
Oliver Dzombic
2019-Sep-16 12:06 UTC
Re: [libvirt-users] virsh -c lxc:/// setvcpus and <vcpu> configuration fails
Hi Martin,

thank you very much for your response! Answers inline. :)

> Although if you ran some perf benchmark it should just cap at 2 CPUs.

With libvirtd, cpuset.cpus from /sys/fs/cgroup/... shows: 0-23

So with 0-23, all 24 cores are assigned AND usable. I installed stress and ran
it with --cpu 8, 16, or whatever; according to the cgroup, the container gets
all 24 cores without any limitation.

/proc/cpuinfo, lscpu: it does not matter what you ask, all CPUs are shown and
nothing is limited. You also see the CPU load of the whole system, so the CPU
accounting of the host is exposed to the container as well (the container sees
that there is load on the CPUs).

Compared to this, cpuset.cpus with LXC version 3 shows: 12,17,21,28 (Proxmox).
So 4 cores are displayed, and as long as you don't put load on them, you see
~100% idle on all 4 cores. The same goes for the LXD implementation (tested
3.x).

And yes, sorry, I only saw later that virDomainSetVcpus simply does not support
LXC. My fault, I hadn't read the documentation fully at that point.

------

So the general question is what the expected result is when using libvirt (and
I would really love to use libvirt, as we already use it with KVM, and would
also like to use it with Docker).

The goal would be to have something like what LXD/LXC provides: a container
that has N virtual CPUs, where only those N virtual CPUs are seen by the
container through /proc/cpuinfo, with its own CPU accounting.

So the question is: can libvirt set cpuset and cpuacct inside cgroups
(correctly / at all)?

The point here is the following: if software inspects the container and sees
on one hand that there are 24 CPUs available, but on the other hand only some
of them are actually usable, things will go down a very dark road for us, and
I guess not only for us. So it is mandatory that the cgroups are set correctly,
at least in terms of cpuset. Can libvirt do this? Or, as it seems now, can
libvirt only manage cgroup settings correctly for KVM?

We are simply looking for a better implementation for this workload, and
containers would be much better because of the lower overhead.

Again, thank you for your time!

--
Best regards
Oliver
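As a way to check the "can libvirt set cpuset and cpuacct" question directly, the cgroup v1 files below can be read on the host for the domain's cgroup. This is only a sketch: the directory name pattern is a guess and should be adjusted to whatever the find actually reports, and -L is used because some controller paths are symlinks (e.g. cpuacct -> cpu,cpuacct).

# locate the domain's cgroup directory for each controller of interest
cs=$(find -L /sys/fs/cgroup/cpuset  -type d -name '*lxctest1*' 2>/dev/null | head -n1)
ca=$(find -L /sys/fs/cgroup/cpuacct -type d -name '*lxctest1*' 2>/dev/null | head -n1)

cat "$cs/cpuset.cpus"     # CPUs the container's tasks are allowed to run on
cat "$ca/cpuacct.usage"   # total CPU time (in ns) consumed by the container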
info@layer7.net
2019-Sep-23 21:53 UTC
Re: [libvirt-users] virsh -c lxc:/// setvcpus and <vcpu> configuration fails
Hi,

OK, I tried:

<vcpu placement='static'>2</vcpu>
<iothreads>2</iothreads>
<iothreadids>
  <iothread id='1'/>
  <iothread id='2'/>
</iothreadids>
<cputune>
  <vcpupin vcpu='0' cpuset='0'/>
  <vcpupin vcpu='1' cpuset='1'/>
  <emulatorpin cpuset='0-1'/>
  <iothreadpin iothread='1' cpuset='0'/>
  <iothreadpin iothread='2' cpuset='1'/>
</cputune>

as well as

<vcpu placement='static' cpuset='0-1' current='2'>4</vcpu>

and the cpuset cgroup now shows:

# cat cpuset.cpus
0-1
# cat cpuset.effective_cpus
0-1

And yes, the CPU power is reduced to two cores. But /proc/cpuinfo still shows
_all_ CPU cores of the host machine. Also, the total CPU usage/load, if
queried, is the CPU usage/load of the host machine. That confuses applications
that monitor the CPU load/count and try to balance things out.

Is there really no way to tell libvirt to create the cgroups in a way that
shows only X CPUs to the system? When I run plain lxc or lxd, it is not a
problem. But I would like to handle things through libvirt, because the
qemu-kvm stuff is handled through libvirt as well.

Any ideas? I'll also take dark hacking stuff. :)

Greetings
Oliver
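As a sanity check that the pinning from <cputune> really is in effect inside the container even though /proc shows the host view, the commands below can be run in the container. A sketch only: the expected numbers assume the cpuset 0-1 shown above, and taskset is only usable if util-linux is installed in the container rootfs.

nproc                                      # uses sched_getaffinity(), should print 2
taskset -cp $$                             # affinity list of the current shell, should be 0-1
grep Cpus_allowed_list /proc/self/status   # also reflects the cpuset, should be 0-1
grep -c ^processor /proc/cpuinfo           # still 24: /proc is not virtualized here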