similar to: libvirt with gcc5 Test failing

Displaying 20 results from an estimated 60000 matches similar to: "libvirt with gcc5 Test failing"

2015 Feb 04
2
Re: HugePages - can't start guest that requires them
*facepalm* Now that I'm re-reading the documentation it's obvious that <page/> and @nodeset are for the guest, "This tells the hypervisor that the guest should have its memory allocated using hugepages instead of the normal native page size." Pretty clear there. Thank you SO much for the guidance, I'll return to my tweaking. I'll report back here with my results.
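For reference, a minimal sketch of the element being discussed; the page size and guest node number are illustrative, not the poster's values:

  <memoryBacking>
    <hugepages>
      <!-- back guest NUMA node 0 with 2 MiB huge pages -->
      <page size='2048' unit='KiB' nodeset='0'/>
    </hugepages>
  </memoryBacking>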
2015 Feb 10
2
Re: HugePages - can't start guest that requires them
On 09.02.2015 18:19, G. Richard Bellamy wrote: > First I'll quickly summarize my understanding of how to configure numa... > > In "//memoryBacking/hugepages/page[@nodeset]" I am telling libvirt to > use hugepages for the guest, and to get those hugepages from a > particular host NUMA node. No, @nodeset refers to guest NUMA nodes. > > In
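To illustrate the distinction: the nodeset on <page/> names guest cells defined under <cpu><numa>, while host placement is handled separately by <numatune>. A sketch with made-up sizes:

  <cpu>
    <numa>
      <cell id='0' cpus='0-3' memory='4' unit='GiB'/>  <!-- guest node 0 -->
      <cell id='1' cpus='4-7' memory='4' unit='GiB'/>  <!-- guest node 1 -->
    </numa>
  </cpu>
  <memoryBacking>
    <hugepages>
      <!-- nodeset here refers to the guest cells above, not to host nodes -->
      <page size='2048' unit='KiB' nodeset='0-1'/>
    </hugepages>
  </memoryBacking>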
2018 Sep 17
2
Re: NUMA issues on virtualized hosts
On 09/14/2018 03:36 PM, Lukas Hejtmanek wrote: > Hello, > > OK, I found that the CPU pinning was wrong, so I corrected it to be 1:1. The issue > with iozone remains the same. > > The spec is running, however, it runs slower than in the 1-NUMA case. > > The corrected XML looks as follows: [Reformatted XML for better reading] <cpu mode="host-passthrough">
2018 Sep 14
3
NUMA issues on virtualized hosts
Hello, I have a cluster with AMD EPYC 7351 CPUs, two CPUs per node. I have a performance 8-NUMA configuration. This is from the hypervisor: [root@hde10 ~]# lscpu Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian CPU(s): 64 On-line CPU(s) list: 0-63 Thread(s) per core: 2 Core(s) per socket: 16 Socket(s): 2 NUMA
2018 Sep 14
1
Re: NUMA issues on virtualized hosts
Hello again. This is what slabtop looks like when the iozone writes are slow: 62476752 62476728 0% 0.10K 1601968 39 6407872K buffer_head 1000678 999168 0% 0.56K 142954 7 571816K radix_tree_node 132184 125911 0% 0.03K 1066 124 4264K kmalloc-32 118496 118224 0% 0.12K 3703 32 14812K kmalloc-node 73206 56467 0% 0.19K 3486 21
2014 Feb 12
2
Re: Help? Running into problems with migrateToURI2() and virDomainDefCheckABIStability()
On 02/11/2014 04:45 PM, Cole Robinson wrote: > On 02/10/2014 06:46 PM, Chris Friesen wrote: >> Hi, >> >> We've run into a problem with libvirt 1.1.2 and are looking for some comments >> on whether this is a bug or design intent. >> >> We're trying to use migrateToURI() but we're using a few things (numatune, >> vcpu mask, etc.) that may need
2015 Feb 09
0
Re: HugePages - can't start guest that requires them
First I'll quickly summarize my understanding of how to configure numa... In "//memoryBacking/hugepages/page[@nodeset]" I am telling libvirt to use hugepages for the guest, and to get those hugepages from a particular host NUMA node. In "//numatune/memory[@nodeset]" I am telling libvirt to pin the memory allocation to the guest from a particular host numa node. In
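For contrast, the host-side pinning referred to here is expressed with <numatune>; a minimal sketch with an illustrative host node:

  <numatune>
    <!-- allocate the guest's memory strictly from host NUMA node 1 -->
    <memory mode='strict' nodeset='1'/>
  </numatune>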
2015 Feb 20
0
Re: HugePages - can't start guest that requires them
On Tue, Feb 10, 2015 at 1:14 AM, Michal Privoznik <mprivozn@redhat.com> wrote: > On 09.02.2015 18:19, G. Richard Bellamy wrote: >> First I'll quickly summarize my understanding of how to configure numa... >> >> In "//memoryBacking/hugepages/page[@nodeset]" I am telling libvirt to >> use hugepages for the guest, and to get those hugepages from a
2018 Sep 17
0
Re: NUMA issues on virtualized hosts
Hello, so the current domain configuration: <cpu mode='host-passthrough'><topology sockets='8' cores='4' threads='1'/><numa><cell cpus='0-3' memory='62000000' /><cell cpus='4-7' memory='62000000' /><cell cpus='8-11' memory='62000000' /><cell cpus='12-15'
2014 Feb 10
2
Help? Running into problems with migrateToURI2() and virDomainDefCheckABIStability()
Hi, We've run into a problem with libvirt 1.1.2 and are looking for some comments on whether this is a bug or design intent. We're trying to use migrateToURI() but we're using a few things (numatune, vcpu mask, etc.) that may need adjustment during the migration. We found that migrateToURI2() mostly works if we use XML created by copying the domain XML from the running instance
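The dxml argument of migrateToURI2() is what carries the updated definition; from the command line the rough equivalent is virsh migrate with --xml. A sketch, with made-up domain and host names:

  # dump the live definition, adjust numatune / vcpu mask as needed, then migrate with it
  virsh dumpxml guest1 > /tmp/guest1-dest.xml
  virsh migrate --live --persistent --xml /tmp/guest1-dest.xml guest1 qemu+ssh://desthost/system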
2018 Sep 18
1
Re: NUMA issues on virtualized hosts
On 09/17/2018 04:59 PM, Lukas Hejtmanek wrote: > Hello, > > so the current domain configuration: > <cpu mode='host-passthrough'><topology sockets='8' cores='4' threads='1'/><numa><cell cpus='0-3' memory='62000000' /><cell cpus='4-7' memory='62000000' /><cell cpus='8-11'
2019 Oct 15
1
Re: [libvirt] Add support for vhost-user-scsi-pci/vhost-user-blk-pci
Cole Robinson <crobinso@redhat.com> wrote on Tue, Oct 15, 2019 at 1:48 AM: > > On 10/14/19 3:12 AM, Li Feng wrote: > > Hi Cole & Michal, > > > > I'm sorry for my late response, I just ended my trip today. > > Thank you for your response; your suggestions are very helpful to me. > > > > I have added Michal to this mail; Michal helped me review my initial patchset.
2019 May 02
0
NUMA revisited
Hi libvirters, I'm looking into the current NUMA settings for a large-ish libvirt/qemu-based setup and I ended up with a couple of questions: 1) Has kernel.numa_balancing completely replaced numad, or is there still a time and place for numad when we have a modern kernel? 2) Should I pin vCPUs to NUMA nodes and/or use numatune at all when using kernel.numa_balancing? 3) The libvirt
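For question 2), the manual placement being weighed against kernel.numa_balancing would look roughly like this in the domain XML (CPU and node numbers are illustrative):

  <vcpu placement='static'>4</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='0'/>
    <vcpupin vcpu='1' cpuset='1'/>
    <vcpupin vcpu='2' cpuset='2'/>
    <vcpupin vcpu='3' cpuset='3'/>
  </cputune>
  <numatune>
    <memory mode='strict' nodeset='0'/>
  </numatune>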
2015 Jan 23
3
questions around using numatune/numa/schedinfo
Hi, I'm running into some problems with libvirt and hoping someone can point me at some instructions or maybe even help me out. First, are there any requirements on qemu version in order to use the "numatune" and/or "cpu/numa/cell" elements? Or do they use cgroups and not the native qemu numa support? Second, are there any instructions on how to set up cgroups? I
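Both settings can be inspected at runtime with virsh, which also shows whether the relevant cgroup controllers are active; an example with a made-up domain name:

  virsh numatune guest1    # numa_mode / numa_nodeset (cpuset and memory cgroups)
  virsh schedinfo guest1   # cpu_shares, vcpu_period, vcpu_quota (cpu cgroup)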
2015 Feb 04
2
Re: HugePages - can't start guest that requires them
As I mentioned, I got the instances to launch... but they're only taking HugePages from "Node 0", when I believe my setup should pull from both nodes. [atlas] http://sprunge.us/FSEf [prometheus] http://sprunge.us/PJcR 2015-02-03 16:51:48 root@eanna i ~ # virsh start atlas Domain atlas started 2015-02-03 16:51:58 root@eanna i ~ # virsh start prometheus Domain prometheus started
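One way to confirm which host node the pages actually come from is the per-node sysfs counters; a quick check, assuming 2 MiB pages:

  # total vs. free huge pages per host NUMA node
  grep . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages
  grep . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages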
2018 Sep 14
0
Re: NUMA issues on virtualized hosts
Hello, OK, I found that the CPU pinning was wrong, so I corrected it to be 1:1. The issue with iozone remains the same. The spec is running, however, it runs slower than in the 1-NUMA case. The corrected XML looks as follows: <cpu mode='host-passthrough'><topology sockets='8' cores='4' threads='1'/><numa><cell cpus='0-3'
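Untruncated, the topology described would look roughly like the sketch below (eight cells of four vCPUs each, cell ids added for readability); this is a reconstruction for illustration, not the poster's literal file:

  <cpu mode='host-passthrough'>
    <topology sockets='8' cores='4' threads='1'/>
    <numa>
      <cell id='0' cpus='0-3' memory='62000000'/>
      <cell id='1' cpus='4-7' memory='62000000'/>
      <!-- ... continuing in the same pattern up to ... -->
      <cell id='7' cpus='28-31' memory='62000000'/>
    </numa>
  </cpu>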
2019 Oct 14
2
Re: [libvirt] Add support for vhost-user-scsi-pci/vhost-user-blk-pci
Hi Cole & Michal, I'm sorry for my late response, I just ended my trip today. Thank you for your response; your suggestions are very helpful to me. I have added Michal to this mail; Michal helped me review my initial patchset. (https://www.spinics.net/linux/fedora/libvir/msg191339.html) The main concern about this feature is the XML design. My original XML design exposes too many QEMU details.
2015 Feb 04
0
Re: HugePages - can't start guest that requires them
On 04.02.2015 01:59, G. Richard Bellamy wrote: > As I mentioned, I got the instances to launch... but they're only > taking HugePages from "Node 0", when I believe my setup should pull > from both nodes. > > [atlas] http://sprunge.us/FSEf > [prometheus] http://sprunge.us/PJcR [pasting interesting nits from both XMLs] <domain type='kvm'
2019 Sep 15
3
virsh -c lxc:/// setvcpus and <vcpu> configuration fails
Hi folks! I created a server with this XML file: <domain type='lxc'> <name>lxctest1</name> <uuid>227bd347-dd1d-4bfd-81e1-01052e91ffe2</uuid> <metadata> <libosinfo:libosinfo xmlns:libosinfo="http://libosinfo.org/xmlns/libvirt/domain/1.0"> <libosinfo:os id="http://centos.org/centos/6.9"/>
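For context, the <vcpu> element pairs a maximum count with an optional current count, and setvcpus adjusts the current value at runtime; a sketch using the poster's domain name (counts are illustrative, and whether the LXC driver honors this is exactly what the thread is about):

  <vcpu placement='static' current='2'>4</vcpu>

  virsh -c lxc:/// setvcpus lxctest1 4 --live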
2013 Jul 08
4
Re: Permission problem with /dev/net/tun
Hi Daniel, On 07/08/2013 11:41 AM, Daniel P. Berrange wrote: >> the symptom my libvirt LXC container suffers from is: >> root@depot:/dev/net# ls -la total 0 drwxr-xr-x 2 root root 40 >> Jun 29 16:26 . drwxr-xr-x 5 root root 480 Jun 29 16:26 .. >> root@depot:/dev/net# mknod tun c 10 200 mknod: `tun': Operation >>
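Rather than running mknod inside the container, libvirt's LXC driver can pass a host character device through with a capabilities-mode hostdev; a sketch using the path from the post:

  <hostdev mode='capabilities' type='misc'>
    <source>
      <char>/dev/net/tun</char>
    </source>
  </hostdev>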