similar to: [PATCH v8] Some automatic NUMA placement documentation

Displaying 20 results from an estimated 8000 matches similar to: "[PATCH v8] Some automatic NUMA placement documentation"

2012 Jul 04
53
[PATCH 00 of 10 v3] Automatic NUMA placement for xl
Hello, this is the third version of the NUMA placement series for Xen 4.2. All the comments received during v2's review have been addressed (more details in the individual changelogs). The most notable changes are the following: - the libxl_cpumap --> libxl_bitmap renaming has been rebased on top of the recent patches that allow us to allocate bitmaps of different sizes; - the heuristics for deciding
2013 Sep 06
21
[PATCH v2 0/5] xl: allow for node-wise specification of vcpu pinning
Hi all, This is the second take of a patch that I submitted some time ago, allowing vcpu pinning to be specified in terms of NUMA nodes. IOW, something like this: * "nodes:0-3": all pCPUs of nodes 0,1,2,3;  * "nodes:0-3,^node:2": all pCPUs of nodes 0,1,3;  * "1,nodes:1-2,^6": pCPU 1 plus all pCPUs of nodes 1,2    but not pCPU 6; v1 was a single patch, this is
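For illustration, a minimal sketch of how the proposed syntax might appear in an xl domain config (assuming the series is applied; the guest name and vcpu count are placeholders, and the cpus string follows the examples quoted above):

    # domU.cfg (hypothetical excerpt)
    name = "numa-guest"
    vcpus = 8
    # pin the vcpus to all pCPUs of NUMA nodes 0-3, excluding node 2
    cpus = "nodes:0-3,^node:2"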
2013 Dec 01
70
[PATCH 00/13] Coverity fixes for libxl
Matthew Daley (13): libxl: fix unsigned less-than-0 comparison in e820_sanitize libxl: check for xc_domain_setmaxmem failure in libxl__build_pre libxl: correct file open success check in libxl__device_pci_reset libxl: don't leak p in libxl__wait_for_backend libxl: remove unsigned less-than-0 comparison libxl: actually abort if initializing a ctx's lock fails libxl:
2013 Jul 04
2
Re: [libvirt] [PATCH 1/4] libxl: implement NUMA capabilities reporting
[Moving the conversation to xen-devel and adding Jan, as that seems more appropriate] [Jan, this came up as I'm implementing some NUMA bits in libvirt but, as you see, the core of Jim's question is purely about Xen] On Mon, 2013-07-01 at 16:47 -0600, Jim Fehlig wrote: > On my non-NUMA test machine I have the cell memory reported as > > <memory
2012 Nov 20
0
[PATCH 15 of 15] libxl: ocaml: add bindings for libxl_domain_create_new
# HG changeset patch # User Ian Campbell <ijc@hellion.org.uk> # Date 1353432141 0 # Node ID 72376896ba08bb7035ad4b7ed5a91c2c1b45b905 # Parent 41f0137955f4a1a5a76ad34a5a6440e32d0090ef libxl: ocaml: add bindings for libxl_domain_create_new ** NOT TO BE APPLIED ** Add a simple stub thing which should build a domain. Except it is incomplete and doesn't actually build. Hence RFC.
2013 Mar 06
1
Re: [PATCH 00 of 10 [RFC]] Automatically place gueston host's NUMA nodes with xl
Hello, I applied the patch to Xen 4.1 and enabled the NUMA placement feature, but I am hitting the problem described earlier. Can you help me understand what the reason is? Also, where is the newest version of the patch? Please provide the address of the latest development branch. Thanks, Regards, Butine huang Zhejiang University 2013-03-06 >On Wed, 2013-03-06 at 10:49 +0000, butian huang wrote: >>
2013 Sep 17
1
[PATCH] xen: numa-sched: leave node-affinity alone if not in "auto" mode
If the domain's NUMA node-affinity is being specified by the user/toolstack (instead of being automatically computed by Xen), we really should stick to that. This means domain_update_node_affinity() is wrong when it filters out some stuff from there even in "!auto" mode. This commit fixes that. Of course, this does not mean node-affinity is always honoured (e.g., a vcpu
2017 Feb 28
2
NUMA placement failed, performance might be affected
I just did a yum update on a CentOS 7 / Xen 4.6 server which took me from kernel-3.18.34-20.el7.x86_64 -> kernel-3.18.44-20.el7.x86_64 After rebooting, the following notice is printed immediately upon xl create'ing a domain: libxl: notice: libxl_numa.c:499:libxl__get_numa_candidate: NUMA placement failed, performance might be affected Indeed performance is significantly degraded. This
2012 Apr 20
26
xl doesn't honour the parameter cpu_weight from my config file while xm does honour it
Hi, I've installed xen-unstable 4.2 from the current git (last commit was 4dc7dbef5400f0608321d579aebb57f933e8f707). When I start a domU with xm, everything is fine, including the cpu_weight I configured in my domU config. When I start the domU with xl, all my domUs get the default cpu_weight of 256 instead of the configured one. Was the name of cpu_weight changed for the xl command? My domU
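For what it's worth, independently of the config-file key, the credit scheduler weight can be inspected and changed at runtime with xl sched-credit (the domain name below is just a placeholder):

    # show the current weight/cap of a domain
    xl sched-credit -d mydomU
    # set the weight to 512 (the default is 256)
    xl sched-credit -d mydomU -w 512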
2014 Sep 12
1
Inconsistent behavior between x86_64 and ppc64 when creating guests with NUMA node placement
Hello all, I was recently trying out NUMA placement for my guests on both x86_64 and ppc64 machines. When booting a guest on the x86_64 machine, the following specs were valid (obviously, just notable excerpts from the xml): <memory unit='KiB'>8388608</memory> <currentMemory unit='KiB'>8388608</currentMemory> <vcpu
2017 Feb 28
0
NUMA placement failed, performance might be affected
Solved. For the archives: Noticed in xl info output that only 1 core was recognised, even though nr_cpus should show 16 on this box. Tried rebooting and selecting the old .34 kernel, console printed "smpboot: do_boot_cpu failed(-1) to wakeup CPU#1" and CentOS failed to boot. Powered down the box fully and started back up with the latest .44 kernel and everything is working fine
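For anyone hitting the same notice, a quick sanity check along the lines described above is to confirm how many CPUs and NUMA nodes the hypervisor actually sees:

    # total host CPUs and NUMA nodes as seen by Xen
    xl info | grep -E 'nr_cpus|nr_nodes'
    # per-node CPU and memory layout
    xl info -n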
2011 Jan 27
7
[PATCH]: xl: fix broken cpupool-numa-split
Hi, the implementation of xl cpupool-numa-split is broken. It basically deals with only one poolid, but there are two to consider: the one from the original root CPUpool, the other from the newly created one. On my machine the current output looks like: root@dosorca:/data/images# xl cpupool-numa-split libxl: error: libxl.c:2803:libxl_create_cpupool Could not create cpupool error on creating
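For context, the intended workflow of the command being fixed here is roughly the following (a sketch of normal usage, not of the patch itself):

    # split the default cpupool into one pool per NUMA node
    xl cpupool-numa-split
    # verify the resulting pools and their CPUs
    xl cpupool-list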
2011 Feb 14
7
[PATCH] xl cpupool-numa-split: reduce number of Dom0 vcpus
When reducing the number of physical cpus available for Domain-0 by xl cpupool-numa-split, reduce the number of vcpus accordingly. Signed-off-by: juergen.gross@ts.fujitsu.com 1 file changed, 20 insertions(+), 2 deletions(-) tools/libxl/xl_cmdimpl.c | 22 ++++++++++++++++++++--
2013 Sep 17
1
[PATCH v2] xen: sched_credit: filter node-affinity mask against online cpus
in _csched_cpu_pick(), as not doing so may result in the domain's node-affinity mask (as retrieved by csched_balance_cpumask() ) and online mask (as retrieved by cpupool_scheduler_cpumask() ) having an empty intersection. Therefore, when attempting a node-affinity load balancing step and running this: ... /* Pick an online CPU from the proper affinity mask */
2012 Feb 20
18
[PATCH] libxl: fix compile error of libvirt
a, libxl_event.h is included in libxl.h, so the former also needs to be installed. b, define __XEN_TOOLS__ in libxl.h: the header file "xen/sysctl.h" checks for this macro. This is the same approach used by the Xen libxc public headers (tools/libxc/xenctrl.h and tools/libxc/xenctrlosdep.h). Signed-off-by: Bamvor Jian Zhang <bjzhang@suse.com> diff -r 87218bd367be
2012 Nov 09
1
OVMF Bios Option
Hello Xen Users, I've been experimenting with upstream-qemu and wanted to try out the OVMF bios option, but I seem to be missing something. Are there additional steps to installing OVMF beyond compiling Xen? I saw notes on patching back in February, but I thought the package was included with Xen 4.2 on release. When I attempt to set it as my bios option, the machine boots then immediately closes.
2013 Sep 18
1
[PATCH] Allow 4 MB of video RAM for Cirrus graphics on traditional QEMU
Signed-off-by: Rob Hoes <rob.hoes@citrix.com> --- docs/man/xl.cfg.pod.5 | 18 +++++++------- tools/libxl/libxl_create.c | 57 ++++++++++++++++++++++++++++++++++---------- 2 files changed, 55 insertions(+), 20 deletions(-) diff --git a/docs/man/xl.cfg.pod.5 b/docs/man/xl.cfg.pod.5 index 769767b..c18604d 100644 --- a/docs/man/xl.cfg.pod.5 +++ b/docs/man/xl.cfg.pod.5 @@ -1009,14
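As an illustration of the setting this patch touches, an HVM guest config requesting Cirrus graphics with 4 MB of video RAM on the traditional device model might look like this (a sketch; values are examples, option names as documented in xl.cfg):

    builder = "hvm"
    device_model_version = "qemu-xen-traditional"
    stdvga = 0        # Cirrus is the default emulated VGA when stdvga is disabled
    videoram = 4      # video RAM in MB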
2015 Feb 01
4
Bug#776742: xen-utils-common: no support for VGA Passthrough
Package: xen-utils-common Version: 4.4.1-6 Severity: normal Dear Maintainer, (There appear to be several reports on the BTS with concerns relating to this report. Some unarchiving/merging may be necessary. Reassignment may be needed as well since I'm not sure which package this problem would fall under. Ultimately decided to file with xen-utils-common with 'xl' being the frontend
2013 Sep 09
1
[PATCH V3] xl: HVM domain S3 bugfix
From 18344216b432648605726b137b348f28ef64a4ef Mon Sep 17 00:00:00 2001 From: Liu Jinsong <jinsong.liu@intel.com> Date: Fri, 23 Aug 2013 23:30:23 +0800 Subject: [PATCH V3] xl: HVM domain S3 bugfix Currently Xen hvm s3 has a bug coming from the difference between qemu-traditional and qemu-xen. For qemu-traditional, the way to resume from hvm s3 is via the 'xl trigger' command.
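For reference, the qemu-traditional resume path mentioned here is driven from dom0 with the xl trigger command; a minimal example (the domain name is a placeholder):

    # after the guest has entered S3, resume it from dom0 with:
    xl trigger mydomU s3resume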