similar to: numa topology within domain XML

Displaying 20 results from an estimated 3000 matches similar to: "numa topology within domain XML"

2012 Jul 25
0
CPU Capabilities for Nested Virtualization
Hi, I have some questions about the CPU capabilities for virtualization, especially for nested virtualization as in the Turtles Project. I am trying to create a nested environment by following the instructions at: http://kashyapc.wordpress.com/2012/01/18/nested-virtualization-with-kvm-and-amd/ The bare-metal system is a Dell server with the following CPU flags (virsh capabilities): <cpu>
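For nested setups like this, the usual way to hand the host's svm flag through to the L1 guest is host-passthrough CPU mode; a minimal domain-XML sketch, not taken from this thread:

    <cpu mode='host-passthrough'/>

The guest then sees the host CPU's feature flags directly, which is what the nested KVM module needs.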
2012 Aug 03
1
Opteron_G4 CPU under libvirt 0.9.12
Hi, I'm using libvirt version 0.9.12 under Debian Squeeze with an AMD Opteron 4280. Executing the virsh capabilities command only shows me the following flags: <cpu> <arch>x86_64</arch> <model>Opteron_G4</model> <vendor>AMD</vendor> <topology sockets='1' cores='8' threads='2'/> <feature name='nodeid_msr'/>
2012 Aug 03
1
CPU Flags libvirt 0.9.12
Hi, I'm using libvirt version 0.9.12 under Debian Squeeze with an AMD Opteron 4280. Executing the virsh capabilities command only shows me the following flags: <cpu> <arch>x86_64</arch> <model>Opteron_G4</model> <vendor>AMD</vendor> <topology sockets='1' cores='8' threads='2'/> <feature name='nodeid_msr'/>
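When a guest needs a flag the chosen model does not advertise, the domain XML can require it explicitly; a hedged sketch of that syntax (the feature name is illustrative):

    <cpu match='exact'>
      <model>Opteron_G4</model>
      <feature policy='require' name='svm'/>
    </cpu>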
2013 Jan 23
1
VMs fail to start with NUMA configuration
I am using libvirt 0.10.2.2 and qemu-kvm 1.2.2 (qemu-kvm 1.2.0 with qemu 1.2.2 applied on top, plus a number of stability patches). I am having an issue where my VMs fail to start with the following message: kvm_init_vcpu failed: Cannot allocate memory Following the instructions at http://libvirt.org/formatdomain.html#elementsNUMATuning I've added the following to my vCPU configuration: <vcpu
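For context, the numatune element described on the cited page binds guest memory to host NUMA nodes; in strict mode the allocation fails outright (often surfacing as "Cannot allocate memory") when the nodeset cannot satisfy it. A minimal sketch, nodeset illustrative:

    <vcpu placement='static'>4</vcpu>
    <numatune>
      <memory mode='strict' nodeset='0'/>
    </numatune>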
2008 Mar 14
1
[PATCH] Allow explicit NUMA placements of guests
Hi, this patch introduces a new config file option (numanodes=[x]) to specify a list of valid NUMA nodes for guests. This will extend (but not replace) the recently introduced automatic placement. If several nodes are given, the current algorithm will choose one of them. If none of the given nodes has enough memory, this will fall back to the automatic placement. Signed-off-by: Andre
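As described, the new option would read as a list in the guest config file; a sketch of the proposed syntax (node IDs illustrative):

    numanodes = [0, 1]    # try nodes 0 and 1; fall back to automatic placement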
2013 Dec 03
2
error: Failed to start domain
On a Gentoo server with libvirt-1.1.3 I get problems starting VMs. When I do: # virsh start vm180 error: Failed to start domain vm180 error: Input/output error This happens with 1.1.3 and 1.1.4 (I rebuilt the packages and restarted libvirtd.service). # journalctl -f -u libvirtd.service shows: Dec 03 17:32:56 jupiter libvirtd[24020]: Input/output error Dec 03 17:33:47 jupiter
2019 May 08
2
failed to build llvm since 25de7691a0e27c29c8d783a22373cc265571f5e9 on AMD platform
Hi, we observed that the errors below occur on the AMD platform since commit 25de7691a0e27c29c8d783a22373cc265571f5e9: root@lkp-opteron1 /opt/rootfs/llvm_project/src/build# cmake -DCMAKE_BUILD_TYPE=release -DLLVM_ENABLE_PROJECTS=clang -G "Unix Makefiles" ../llvm -DCMAKE_INSTALL_PREFIX=/opt/cross/ -- clang project is enabled -- clang-tools-extra project is disabled -- compiler-rt project is disabled
2018 Mar 23
2
Issue with libguestfs-test-tool on a guest hosted on VMWare ESXi
I am using a Debian 9 guest, hosted on an ESXi platform with nested virtualisation enabled. On this Debian 9 guest when I run libguestfs-test-tool, it fails with an error: "qemu-system-x86_64: /build/qemu-DqynNa/qemu-2.8+dfsg/target-i386/kvm.c:1805: kvm_put_msrs: Assertion `ret == cpu->kvm_msr_buf->nmsrs' failed." Instead when I use a wrapper script and hook it with the env
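The wrapper-script approach alluded to here is typically hooked in through the LIBGUESTFS_HV environment variable; a hedged sketch (paths illustrative):

    #!/bin/sh
    # qemu-wrapper.sh: drop or rewrite problematic qemu arguments, then exec the real binary
    exec /usr/bin/qemu-system-x86_64 "$@"

    $ export LIBGUESTFS_HV=/usr/local/bin/qemu-wrapper.sh
    $ libguestfs-test-tool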
2006 Sep 29
4
[PATCH 4/6] xen: export NUMA topology in physinfo hcall
This patch modifies the physinfo hcall to export NUMA CPU and Memory topology information. The new physinfo hcall is integrated into libxc and xend (xm info specifically). Included in this patch is a minor tweak to xm-test's xm info testcase. The new fields in xm info are: nr_nodes : 4 mem_chunks : node0:0x0000000000000000-0x0000000190000000
2006 Sep 29
0
[PATCH 0/6] add NUMA support to Xen
The following patchset adds NUMA support to the hypervisor. This includes: - A full SRAT table parser for 32-bit and 64-bit, based on the ACPI NUMA parser from Linux 2.6.16.29, with data structures to represent NUMA CPU and memory topology, and NUMA emulation (fake=). - Changes to the Xen page allocator adding a per-node bucket for each zone. Xen will continue to prioritize using the
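The NUMA emulation mentioned above takes after Linux's numa=fake boot parameter; presumably it is enabled the same way on the hypervisor command line (node count illustrative, exact spelling an assumption):

    numa=fake=4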
2020 Jun 28
0
[RFC 0/3] virtio: NUMA-aware memory allocation
On 2020/6/25 9:57 PM, Stefan Hajnoczi wrote: > These patches are not ready to be merged because I was unable to measure a > performance improvement. I'm publishing them so they are archived in case > someone picks up this work again in the future. > > The goal of these patches is to allocate virtqueues and driver state from the > device's NUMA node for optimal memory
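The idea, roughly, is node-aware allocation; a hedged sketch using the kernel's kzalloc_node, not taken from the actual patches (vq and vdev are illustrative names):

    /* allocate driver state on the device's NUMA node
     * instead of the current CPU's node */
    struct vring_virtqueue *vq;
    vq = kzalloc_node(sizeof(*vq), GFP_KERNEL, dev_to_node(&vdev->dev));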
2016 Nov 21
0
Re: NUMA VM and assigning interfaces
On 11/21/2016 12:34 PM, Amir Shehata wrote: > Hello, > > Hope all is well. > > I've been looking at how I can create a virtual machine which is NUMA > capable. I was able to do that by: > > <qemu:commandline> > <qemu:arg value='-numa'/> > <qemu:arg value='node'/> > <qemu:arg
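For current libvirt, the same guest topology can usually be declared natively instead of via qemu:commandline; a minimal domain-XML sketch (cell sizes illustrative):

    <cpu>
      <numa>
        <cell id='0' cpus='0-1' memory='1048576' unit='KiB'/>
        <cell id='1' cpus='2-3' memory='1048576' unit='KiB'/>
      </numa>
    </cpu>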
2020 Jun 29
0
[RFC 0/3] virtio: NUMA-aware memory allocation
On Mon, Jun 29, 2020 at 10:26:46AM +0100, Stefan Hajnoczi wrote: > On Sun, Jun 28, 2020 at 02:34:37PM +0800, Jason Wang wrote: > > > > On 2020/6/25 9:57 PM, Stefan Hajnoczi wrote: > > > These patches are not ready to be merged because I was unable to measure a > > > performance improvement. I'm publishing them so they are archived in case > > > someone
2019 May 09
3
failed to build llvm since 25de7691a0e27c29c8d783a22373cc265571f5e9 on AMD platform
The LKP framework guarantees that the software environment is the same on the AMD and Intel platforms. The Intel platform always works well, and after reverting this patch AMD works well too. We tried the commits below on AMD: 1) 25de7691a0e27c29c8d783a22373cc265571f5e9: bad 2) a82235843b102202766115e10003c9465a8b83ae: good The error logs (build/CMakeFiles/CMakeError.log) show no difference between 1) and 2) on the AMD platform
2018 Aug 21
0
Get Logical processor count correctly whether NUMA is enabled or disabled
Dear Arun, thank you for the report. I agree with the analysis: detectCores() will only report logical processors in the NUMA group in which R is running. I don't have a system to test on; could you please check these workarounds for me on your systems? # number of logical processors - what detectCores() should return out <- system("wmic cpu get numberoflogicalprocessors",
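A hedged completion of the truncated workaround above, assuming wmic prints one count per CPU package (untested sketch):

    out <- system("wmic cpu get numberoflogicalprocessors", intern = TRUE)
    # strip the header line and CR artifacts, then sum the per-package counts
    vals <- suppressWarnings(as.integer(gsub("\\D", "", out)))
    sum(vals, na.rm = TRUE)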
2019 May 02
0
NUMA revisited
Hi libvirters, I'm looking into the current NUMA settings for a large-ish libvirt/qemu based setup, and I ended up having a couple of questions: 1) Has kernel.numa_balancing completely replaced numad, or is there still a time and place for numad when we have a modern kernel? 2) Should I pin vCPUs to NUMA nodes and/or use numatune at all when using kernel.numa_balancing? 3) The libvirt
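The pinning in question 2 is spelled out in the domain XML via cputune/vcpupin, optionally combined with numatune (host CPU and node numbers illustrative); whether it still pays off next to kernel.numa_balancing is exactly what the thread asks:

    <cputune>
      <vcpupin vcpu='0' cpuset='0'/>
      <vcpupin vcpu='1' cpuset='1'/>
    </cputune>
    <numatune>
      <memory mode='preferred' nodeset='0'/>
    </numatune>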
2020 Jul 01
0
Re: no/empty NUMA cells on domain XML
On Wed, Jul 01, 2020 at 12:08:35PM +0300, Polina Agranat wrote: > Hi, > > I'm looking for a possibility to simulate a VM (to be used as a host) > reporting no or empty NUMA <cells> in 'virsh capabilities'. 'virsh > capabilities' usually reports a single NUMA cell in the VMs, > like > <numa> > <cell id='0' cpus='0-63'
2010 Jan 17
2
docs/reference for NUMA usage?
I'm reading up on NUMA usage. So far, I've enabled NUMA on my Xen box. @ Dom0 I see: xm dmesg | grep -i numa (XEN) Command line: ... numa=on ... (XEN) No NUMA configuration found I guess I need to 'configure' NUMA. I've no clue how, and haven't found the docs for it yet, despite looking. I found this old thread,
2014 Sep 12
1
Inconsistent behavior between x86_64 and ppc64 when creating guests with NUMA node placement
Hello all, I was recently trying out NUMA placement for my guests on both x86_64 and ppc64 machines. When booting a guest on the x86_64 machine, the following specs were valid (obviously, just notable excerpts from the XML): <memory unit='KiB'>8388608</memory> <currentMemory unit='KiB'>8388608</currentMemory> <vcpu
2010 Nov 12
0
Announce: Auto/Lazy-migration Patches RFC on linux-numa list
At last week's LPC, there was some interest in my patches for Auto/Lazy Migration to improve locality, and possibly performance, of unpinned guest VMs on a NUMA platform. As a result of those conversations I have reposted the patches [4 series, ~40 patches] as RFCs to the linux-numa list. Links to the threads are given below. I have rebased the patches atop the 3Nov10 mmotm series [2.6.36 + 3nov mmotm].