similar to: Generate (vCPU pinning) from host NUMA configuration doesn't act accordingly

Displaying 20 results from an estimated 20000 matches similar to: "Generate (vCPU pinning) from host NUMA configuration doesn't act accordingly"

2014 Oct 27
0
What can the admin do to grant me permission to run virsh as an unprivileged user?
Thank you! What can the admin do to grant me permission to run virsh as an unprivileged user? Perhaps by configuring unix_sock_group in libvirtd.conf? 2014-10-28 Allen Qiu From: libvirt-users-request@redhat.com Sent: 2014-10-28 00:00 Subject: libvirt-users Digest, Vol 58, Issue 33 To: "libvirt-users"<libvirt-users@redhat.com> Cc: Send libvirt-users mailing list submissions to
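For reference, the usual way to allow this is via the UNIX-socket settings in /etc/libvirt/libvirtd.conf plus group membership; a minimal sketch, assuming the group is called "libvirt" and the user is "allen" (both placeholders, the distro default group may differ):

    # /etc/libvirt/libvirtd.conf -- grant read/write access to the system socket to a group
    unix_sock_group = "libvirt"
    unix_sock_rw_perms = "0770"
    auth_unix_rw = "none"          # or "polkit" for polkit-based access rules

    # add the user to that group, then restart the daemon
    usermod -aG libvirt allen
    service libvirtd restart       # "systemctl restart libvirtd" on systemd hosts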
2012 Sep 21
0
picking a NUMA cell for pinning using virsh freecell
Hi, I'd like to pin the vCPU of a guest to a pCPU. The docs clearly say https://access.redhat.com/knowledge/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Virtualization_Administration_Guide/ch09s04.html "Locking a guest to a particular NUMA node offers no benefit if that node does not have sufficient free memory for that guest. libvirt stores information on the free memory available on
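As a rough sketch of that workflow (domain name and cell/CPU numbers are made up; the commands are plain virsh and numactl):

    virsh freecell --all        # free memory per NUMA cell, pick one with enough headroom
    numactl --hardware          # see which pCPUs belong to that cell
    virsh vcpupin mydomain 0 4  # pin vCPU 0 of guest "mydomain" to pCPU 4
    virsh vcpupin mydomain 1 5  # pin vCPU 1 to pCPU 5, and so on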
2014 Oct 27
0
Re: (no subject)
On 12/31/1969 05:00 PM, wrote: > Hi all, [your 800k .bmp file prevented your message from getting through the mailing list filter, which caps mail around 100k for a reason. Please, just describe your problem in plain text, or if you MUST point people to a screenshot, have your email simply contain a URL to a dropbox containing the screenshot, rather than sending it directly to the list.
2014 Nov 05
2
Any recommendation about benchmarking tools on Linux virtual machine?
Hi all, Do you have any recommendations for benchmarking tools on a Linux virtual machine to measure the potential performance hit from virtualization, especially on multi-threaded computation? I doubt whether the CentOS VM can fully make use of the 32 vCPUs (out of 64 physical cores) that I allocated to it. Regards, Allen 2014-11-05 Allen Qiu
2014 Oct 27
2
What is the difference between running "virt-manager" and "sudo virt-manager"?
Hi all, What is the difference between starting virt-manager with "virt-manager" and with "sudo virt-manager"? It seems that there are two copies of virt-manager running in the background. When I run "virt-manager", I get an error: "Unable to open a connection to the libvirt management daemon. Libvirt URI is: qemu:///system Verify that: - The 'libvirtd'
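In case it helps anyone landing on this thread: the two invocations generally end up on different libvirt URIs, which is also why two instances appear; a quick way to look at both sides with plain virsh (no guest-specific assumptions):

    virsh -c qemu:///session list --all   # per-user libvirt instance (an unprivileged run may fall back to this)
    virsh -c qemu:///system list --all    # system-wide libvirtd, which "sudo virt-manager" reaches directly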
2016 Apr 06
0
[PATCH v5 2/6] virt, sched: add generic vcpu pinning support
Add generic virtualization support for pinning the current vcpu to a specified physical cpu. As this operation isn't performance-critical (only a very limited set of operations like BIOS calls and SMIs is expected to need it), just add a hypervisor-specific indirection. Signed-off-by: Juergen Gross <jgross at suse.com> --- V4: move this patch some places up in the series WARN_ONCE in
2018 Sep 17
0
Re: NUMA issues on virtualized hosts
Hello, so the current domain configuration: <cpu mode='host-passthrough'><topology sockets='8' cores='4' threads='1'/><numa><cell cpus='0-3' memory='62000000' /><cell cpus='4-7' memory='62000000' /><cell cpus='8-11' memory='62000000' /><cell cpus='12-15'
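Untangled, that snippet corresponds roughly to the XML below (the archive cuts the listing off after the fourth cell; presumably the remaining cells continue the same pattern):

    <cpu mode='host-passthrough'>
      <topology sockets='8' cores='4' threads='1'/>
      <numa>
        <cell cpus='0-3'   memory='62000000'/>
        <cell cpus='4-7'   memory='62000000'/>
        <cell cpus='8-11'  memory='62000000'/>
        <cell cpus='12-15' memory='62000000'/>
        <!-- ...presumably four more cells follow, up to cpus='28-31' -->
      </numa>
    </cpu>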
2013 Sep 06
21
[PATCH v2 0/5] xl: allow for node-wise specification of vcpu pinning
Hi all, This is the second take of a patch that I submitted some time ago for allowing vcpu pinning to be specified in terms of NUMA nodes. IOW, something like this: * "nodes:0-3": all pCPUs of nodes 0,1,2,3;  * "nodes:0-3,^node:2": all pCPUs of nodes 0,1,3;  * "1,nodes:1-2,^6": pCPU 1 plus all pCPUs of nodes 1,2    but not pCPU 6; v1 was a single patch, this is
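For context, with that syntax a pinning could be expressed roughly like this in a guest config or on the xl command line (domain name and values are illustrative, and the node-wise form is of course only available with this series applied):

    # guest config: all vCPUs restricted to the pCPUs of nodes 0-3, except node 2
    vcpus = 8
    cpus  = "nodes:0-3,^node:2"

    # at runtime: pin vCPU 1 to every pCPU of node 1
    xl vcpu-pin mydomain 1 nodes:1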
2018 Sep 17
2
Re: NUMA issues on virtualized hosts
On 09/14/2018 03:36 PM, Lukas Hejtmanek wrote: > Hello, > > ok, I found that cpu pinning was wrong, so I corrected it to be 1:1. The issue > with iozone remains the same. > > The spec is running, however, it runs slower than 1-NUMA case. > > The corrected XML looks like follows: [Reformated XML for better reading] <cpu mode="host-passthrough">
2018 Sep 14
0
Re: NUMA issues on virtualized hosts
Hello, ok, I found that cpu pinning was wrong, so I corrected it to be 1:1. The issue with iozone remains the same. The spec is running, however, it runs slower than 1-NUMA case. The corrected XML looks like follows: <cpu mode='host-passthrough'><topology sockets='8' cores='4' threads='1'/><numa><cell cpus='0-3'
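For readers following along, a 1:1 pinning like the one described is usually written as a <cputune> block next to the <cpu> element; a short sketch assuming the 32-vCPU topology from this thread (only the first entries shown):

    <vcpu placement='static'>32</vcpu>
    <cputune>
      <vcpupin vcpu='0' cpuset='0'/>
      <vcpupin vcpu='1' cpuset='1'/>
      <vcpupin vcpu='2' cpuset='2'/>
      <!-- ...one vcpupin entry per vCPU, up to vcpu='31' -->
    </cputune>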
2014 Nov 06
0
Re: Any recommendation about benchmarking tools on Linux virtual machine?
On Wednesday, November 5, 2014 5:53 AM, Allen Qiu <my1stbox@163.com> wrote: > Do you have any recommendations for benchmarking tools on a Linux > virtual machine to measure the potential performance hit from > virtualization, especially on multi-threaded computation You could try the phoronix-test-suite, which is available in the [EPEL][1] repository. Cheers, Cristian Ciupitu [1]:
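If you go that route, installation and a first run look roughly like this on CentOS (the test profile name is just an example of a multi-threaded CPU test):

    yum install epel-release
    yum install phoronix-test-suite
    phoronix-test-suite benchmark pts/compress-7zip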
2014 Oct 28
0
Will over-allocation of vCPUs be counterproductive to VM Win7 Ultimate?
Hi all, Will over-allocation of vCPUs be counterproductive for a Win7 Ultimate VM? I allocated 32 (out of 64 physical cores in 8 NUMA cells, and 128G out of 2T physical RAM) to the VM, but the boot process was extremely slow; the desktop screen never actually showed up and then the VM just rebooted automatically. So is there anything I can do to solve this problem, or is the truth that Win7 by design just cannot
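One thing worth checking, though this is an assumption on my part rather than something from the thread: client editions of Windows 7 only use up to two CPU sockets, so if the 32 vCPUs are exposed as 32 separate sockets the guest may ignore most of them; presenting them as two sockets with more cores is the usual workaround, e.g.:

    <vcpu placement='static'>32</vcpu>
    <cpu>
      <topology sockets='2' cores='16' threads='1'/>
    </cpu>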
2008 Sep 05
0
3.2.1+ HVM + HAP + NUMA - Poor Memory Performance
Hi Everyone, I am running 3.2.1 on Centos 5.2 with HAP enabled, NUMA enabled, ACPI enabled and the dom0 allocated 512Mb. I have setup a single core 1Gb VM for performance testing under Windows 2008 Server. Most CPU results are within a few percent of theoretical max but Memory performance is about half what I expected. I get 3.22Gb/Sec Sandra 2009 Memory performance for a single Opteron 8350
2018 Sep 14
1
Re: NUMA issues on virtualized hosts
Hello again, when the iozone writes slow. This is how slabtop looks like:
      OBJS    ACTIVE  USE  OBJ SIZE    SLABS  OBJ/SLAB  CACHE SIZE  NAME
  62476752  62476728   0%  0.10K     1601968        39    6407872K  buffer_head
   1000678    999168   0%  0.56K      142954         7     571816K  radix_tree_node
    132184    125911   0%  0.03K        1066       124       4264K  kmalloc-32
    118496    118224   0%  0.12K        3703        32      14812K  kmalloc-node
     73206     56467   0%  0.19K        3486        21
2013 Jan 23
1
VMs fail to start with NUMA configuration
I am using libvirt 0.10.2.2 and qemu-kvm 1.2.2 (qemu-kvm 1.2.0 with qemu 1.2.2 applied on top, plus a number of stability patches). I'm having an issue where my VMs fail to start with the following message: kvm_init_vcpu failed: Cannot allocate memory Following the instructions at http://libvirt.org/formatdomain.html#elementsNUMATuning I've added the following to my VCPU configuration: <vcpu
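For reference, the elements that page describes combine roughly as follows (the nodeset, cpuset and counts here are placeholders, not the poster's actual values):

    <vcpu placement='static' cpuset='0-7'>8</vcpu>
    <numatune>
      <!-- with mode='strict', the guest fails to start if the nodeset cannot satisfy the allocation -->
      <memory mode='strict' nodeset='0-1'/>
    </numatune>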
2014 Oct 27
0
Error starting Virtual Machine Manager: Failed to contact configuration server...
Hi all! Do you have any idea about the following error message from virt-manager? I was trying to start it with "sudo virt-manager". Do you think I can fix this by simply restarting libvirtd? Regards, Allen Error starting Virtual Machine Manager: Failed to contact configuration server; some possible causes are that you need to enable TCP/IP networking for ORBit, or you have stale NFS
2020 Jun 28
0
[RFC 0/3] virtio: NUMA-aware memory allocation
On 2020/6/25 at 9:57 PM, Stefan Hajnoczi wrote: > These patches are not ready to be merged because I was unable to measure a > performance improvement. I'm publishing them so they are archived in case > someone picks up this work again in the future. > > The goal of these patches is to allocate virtqueues and driver state from the > device's NUMA node for optimal memory
2020 Jun 29
0
[RFC 0/3] virtio: NUMA-aware memory allocation
On Mon, Jun 29, 2020 at 10:26:46AM +0100, Stefan Hajnoczi wrote: > On Sun, Jun 28, 2020 at 02:34:37PM +0800, Jason Wang wrote: > > > > On 2020/6/25 at 9:57 PM, Stefan Hajnoczi wrote: > > > These patches are not ready to be merged because I was unable to measure a > > > performance improvement. I'm publishing them so they are archived in case > > > someone
2014 Oct 29
1
Can't specify filename.xml when using virsh dumpxml
Hi all, Do you have any idea about this error? virsh # dumpxml csg01win7test > domain.xml error: unexpected data '>' Regards, Allen 2014-10-28 Allen Qiu
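The '>' redirection is interpreted by your login shell, not by virsh, so it only works when dumpxml is run from the host shell rather than inside the interactive "virsh #" prompt:

    # from the host shell -- the shell handles the redirection
    virsh dumpxml csg01win7test > domain.xml

    # inside the interactive virsh shell, just print it and copy/paste instead
    dumpxml csg01win7test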
2013 Sep 17
1
[PATCH] xen: numa-sched: leave node-affinity alone if not in "auto" mode
If the domain's NUMA node-affinity is being specified by the user/toolstack (instead of being automatically computed by Xen), we really should stick to that. This means domain_update_node_affinity() is wrong when it filters out some stuff from there even in "!auto" mode. This commit fixes that. Of course, this does not mean node-affinity is always honoured (e.g., a vcpu