Displaying 20 results from an estimated 1000 matches similar to: "[PATCH] xl cpupool-numa-split: reduce number of Dom0 vcpus"
2011 Jan 27
7
[PATCH]: xl: fix broken cpupool-numa-split
Hi,
the implementation of xl cpupool-numa-split is broken. It basically
deals with only one poolid, but there are two to consider: the one of
the original root CPUpool and the one of the newly created pool.
On my machine the current output looks like:
root@dosorca:/data/images# xl cpupool-numa-split
libxl: error: libxl.c:2803:libxl_create_cpupool Could not create cpupool
error on creating
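As a rough illustration of the idea behind the fix, here is a minimal C sketch with made-up helper names (get_root_poolid, create_pool_for_node, move_node_cpus are placeholders, not the real xl/libxl calls): the root pool's id and the id of each newly created pool have to be tracked separately.

/* Illustrative sketch only -- the names below do not match the real xl sources. */
#include <stdio.h>

static int next_poolid = 1;

/* Stubs standing in for the libxl calls xl would make. */
static int get_root_poolid(void) { return 0; }              /* id of "Pool-0" */
static int create_pool_for_node(int node) { (void)node; return next_poolid++; }
static int move_node_cpus(int from, int to, int node)
{
    printf("moving CPUs of node %d from pool %d to pool %d\n", node, from, to);
    return 0;
}

static int split_by_numa_node(int nr_nodes)
{
    int root_poolid = get_root_poolid();

    /* Node 0's CPUs stay in the root pool in this toy version. */
    for (int node = 1; node < nr_nodes; node++) {
        /* Each new pool gets its own id; handling only a single poolid for
         * both the root pool and the new pool is the reported breakage. */
        int new_poolid = create_pool_for_node(node);

        if (new_poolid < 0 ||
            move_node_cpus(root_poolid, new_poolid, node) < 0)
            return -1;
    }
    return 0;
}

int main(void) { return split_by_numa_node(4) ? 1 : 0; }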
2011 Nov 17
12
[PATCH] Avoid panic when adjusting sedf parameters
When using the sedf scheduler in a cpupool, the system might panic when
setting sedf scheduling parameters for a domain.
Signed-off-by: juergen.gross@ts.fujitsu.com
1 file changed, 4 insertions(+)
xen/common/sched_sedf.c | 4 ++++
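A hedged sketch of the kind of guard such a fix typically needs (this is not the actual sched_sedf.c patch): before touching sedf-specific data for a domain, make sure the domain's cpupool really runs sedf, instead of dereferencing data that belongs to another scheduler.

/* Hedged illustration only -- not the actual sched_sedf.c change. */
#include <stdio.h>

struct scheduler { const char *name; };
struct domain    { const struct scheduler *sched; };

static const struct scheduler sedf_sched   = { "sedf"   };
static const struct scheduler credit_sched = { "credit" };

/* Adjusting sedf parameters must not touch sedf-private data of a domain
 * whose cpupool runs a different scheduler; doing so is the kind of
 * mistake that can panic the hypervisor. */
static int sedf_adjust_params(struct domain *d)
{
    if (d->sched != &sedf_sched)
        return -1;                       /* bail out instead of crashing */

    printf("adjusting sedf parameters for a sedf domain\n");
    return 0;
}

int main(void)
{
    struct domain in_sedf_pool   = { &sedf_sched };
    struct domain in_credit_pool = { &credit_sched };

    return (sedf_adjust_params(&in_sedf_pool) == 0 &&
            sedf_adjust_params(&in_credit_pool) != 0) ? 0 : 1;
}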
2010 Jul 28
22
ACPI-Tables corrupted?
Hi,
on a Nehalem system with VT-d enabled we are seeing strange ACPI-Table
contents, especially a corrupted DMAR entry.
The hypervisor shows the following data on boot:
(XEN) ACPI: RSDP 000F80E0, 0024 (r2 PTLTD )
(XEN) ACPI: XSDT BF7C469E, 00D4 (r1 PTLTD XSDT 60000 LTP 0)
(XEN) ACPI: FACP BF7C9CC9, 00F4 (r3 FSC TYLERBRG 60000 PTL F4240)
(XEN) ACPI: DSDT BF7C4772, 54D3 (r1
2013 Mar 15
2
strange phenomenon on CPU affinity
Hello,
My testing machine has 2 quad-core CPUs (it supports hyperthreading,
but I disabled it in the BIOS). I use Xen 4.0.1 as the hypervisor. When I use 8
VMs to conduct a test, the CPU affinity of the VMs is very strange. Like this:
vm_name    vcpu_num  cpu_affinity
Domain-0   8         any
VM1        4         1,3,5,7
VM2        4         1,3,5,7
VM3        4         1,3,5,7
VM4        4
2012 Jul 13
11
Backport requests of cs 23420..23423 for 4.0 and 4.1
Hi,
we are experiencing significant performance degradation after live migration of
HVM domains in Xen 4.0 (SLES11 SP1): after live migration, performance drops
to less than 90%. I did a backport of cs 23420-23423 and the performance is
okay now.
I would like to request that these changesets be included in 4.0 and 4.1. The
backport is quite trivial; I can send patches if you are willing to
2011 Nov 15
2
xen-unstable/staging: qemu git file corrupt
Hi,
when I try to build xen-unstable/staging (cs 24143) tools via
make tools
I get:
...
got 1b6bfb99c2b55ff2e35ab61caf307dad3aebc82a
got efd594c960330cc3eee44e65f5fee258c798e610
got ccc9677505c0dd2c6c5054e73a42cef2d25687b4
got 86a2a2a59a8b76117b221c712ba0a156d21441c9
error: File efd594c960330cc3eee44e65f5fee258c798e610
2011 Mar 16
2
[PATCH] Remove no longer used cpu_possible definitions
cpu_possible_mask and related macros are no longer used in Xen. Remove them
and adjust comments accordingly.
Signed-off-by: juergen.gross@ts.fujitsu.com
1 file changed, 11 insertions(+), 40 deletions(-)
xen/include/xen/cpumask.h | 51 +++++++++------------------------------------
2010 Jul 28
23
HVM hypercalls
Hi
I need to use hypercalls from an HVM domain (e.g. HYPERVISOR_add_to_physmap). However, it does not work when I try to invoke it from an HVM Linux guest. Basically, I don't see that anything happens on the hypervisor's side. I also grepped the guest code for 'vmmcall'/'vmcall' and did not find anything. Is it possible to do it at all?
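For context, HVM guests normally do not contain literal vmcall/vmmcall instructions: the guest kernel registers a "hypercall page" with Xen (via an MSR advertised through the Xen CPUID leaves), Xen fills that page with the proper stubs for the CPU at runtime, and hypercall N is reached through the stub at offset N * 32 in the page. The C sketch below only mimics that addressing; the page registration and the actual call of course happen inside a real guest kernel, and HYPERVISOR_add_to_physmap is reached as a sub-op of the memory_op hypercall.

/* Sketch of the hypercall-page calling convention, not working guest code. */
#include <stdint.h>
#include <stdio.h>

#define HYPERCALL_STUB_SIZE 32        /* each stub is 32 bytes into the page */

typedef long (*hypercall_fn_t)(unsigned long a1, unsigned long a2);

/* In a real HVM guest kernel this page is registered with Xen via an MSR
 * write; Xen then writes vmcall (Intel) or vmmcall (AMD) stubs into it. */
static uint8_t hypercall_page[4096];

static long do_hypercall(unsigned int nr, unsigned long a1, unsigned long a2)
{
    hypercall_fn_t fn =
        (hypercall_fn_t)(void *)(hypercall_page + nr * HYPERCALL_STUB_SIZE);

    /* A real guest would simply do "return fn(a1, a2);" here. */
    printf("would enter hypercall %u via the stub at %p\n", nr, (void *)fn);
    (void)a1; (void)a2;
    return 0;
}

int main(void)
{
    /* Example: a memory_op hypercall carrying an add_to_physmap sub-op. */
    return (int)do_hypercall(12 /* __HYPERVISOR_memory_op */, 0, 0);
}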
2012 Aug 15
5
[PATCH] xl: Suppress spurious warning message for cpupool-list
# HG changeset patch
# User George Dunlap <george.dunlap@eu.citrix.com>
# Date 1345022863 -3600
# Node ID 0982bad392e4f96fb39a025d6528c33be32c6c04
# Parent dc56a9defa30312a46cfb6ddb578e64cfbc6bc8b
xl: Suppress spurious warning message for cpupool-list
libxl_cpupool_list() enumerates the cpupools by "probing": calling
cpupool_info, starting at 0 and stopping when it gets an error.
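A C sketch of the probing pattern described above, with illustrative names rather than the real libxl ones: the loop only learns it is done by hitting an error, so that final, expected failure is exactly the event that should not be logged as a scary warning.

/* Illustrative sketch of the probing loop -- not the real libxl code. */
#include <stdio.h>

#define NR_POOLS 3                    /* pretend the host has three cpupools */

/* Stand-in for the cpupool_info() query: fails past the last pool. */
static int cpupool_info(int poolid) { return poolid < NR_POOLS ? 0 : -1; }

/* Probe poolid 0, 1, 2, ... until the first failure; the failure that
 * terminates the loop is expected and should be reported quietly. */
static int list_cpupools(int ids[], int max)
{
    int nr = 0;

    for (int poolid = 0; nr < max && cpupool_info(poolid) == 0; poolid++)
        ids[nr++] = poolid;
    return nr;
}

int main(void)
{
    int ids[16];
    int nr = list_cpupools(ids, 16);

    for (int i = 0; i < nr; i++)
        printf("cpupool %d\n", ids[i]);
    return 0;
}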
2011 Jan 27
1
[PATCH] xl: remove unimplemented -l stub for cpupool-list
Hi,
although advertised via the usage output, xl cpupool-list -l just
returns ERROR_NI, which does not show up on the console. Instead the
output is empty, which is not exactly what --long hints at.
To avoid confusion, remove the line from the help output and just
ignore the -l option properly until it finally gets implemented.
Signed-off-by: Andre Przywara <andre.przywara@amd.com>
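A minimal, generic sketch of the "accept -l but deliberately ignore it" option handling described above; this is not the actual xl code.

/* Generic sketch only -- not the actual xl option parsing. */
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    int opt, long_requested = 0;

    while ((opt = getopt(argc, argv, "l")) != -1) {
        if (opt == 'l')
            long_requested = 1;         /* parsed, but not acted upon yet */
    }

    if (long_requested)
        fprintf(stderr, "note: -l/--long is not implemented yet, ignoring\n");

    printf("listing cpupools in the short format\n");
    return 0;
}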
2012 Aug 14
12
[TESTDAY] xl cpupool-create segfaults if given invalid configuration
# xl cpupool-create 'name="pool2" sched="credit2"'
command line:2: config parsing error near `sched': syntax error,
unexpected IDENT, expecting NEWLINE or ';'
Failed to parse config file: Invalid argument
*** glibc detected *** xl: free(): invalid pointer: 0x0000000001a79a10 ***
Segmentation fault (core dumped)
Looking at the code
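"free(): invalid pointer" followed by a crash is the classic symptom of an error path freeing a pointer that was never initialised. A generic C illustration of that failure class and of the usual fix (initialise to NULL so the error-path free() is harmless); this is not the actual xl code.

/* Generic illustration of the failure class and its usual fix -- not the
 * actual xl code. */
#include <stdlib.h>
#include <string.h>

static int parse_config(const char *cfg, char **name_out)
{
    if (strchr(cfg, '=') == NULL)
        return -1;                    /* parse error: *name_out never written */

    *name_out = strdup(cfg);
    return *name_out ? 0 : -1;
}

int main(void)
{
    char *name = NULL;                /* the fix: start from NULL ...         */

    if (parse_config("bad config", &name) < 0) {
        free(name);                   /* ... so the error-path free is safe   */
        return 1;
    }

    free(name);
    return 0;
}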
2012 Jan 10
0
Live Migration of BS2000 DomU
Hi,
we (Fujitsu) are proud to announce the successful live migration of a BS2000
domU (pvHVM). The domU had 32 GB of memory, 8 active vcpus, an active LAN
connection and about 2500 FC-disks online. The domU had an active test load
running on 8 disks and several CPU-intensive test jobs.
All BS2000 peripherals are connected via a special pv-driver handling all
devices with just one
2010 Sep 08
0
[Patch] xl: correct vcpu-pin and vcpu-list parameter checking
Hi,
the attached patch corrects the parameter checking of the vcpu-pin and
vcpu-list sub-commands.
Juergen
2012 Jul 04
53
[PATCH 00 of 10 v3] Automatic NUMA placement for xl
Hello,
Third version of the NUMA placement series for Xen 4.2.
All the comments received during v2's review have been addressed (more details
in the individual changelogs).
The most notable changes are the following:
- the libxl_cpumap --> libxl_bitmap renaming has been rebased on top of the
recent patches that allow us to allocate bitmaps of different sizes (a generic
sketch of the idea follows below);
- the heuristics for deciding
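A self-contained sketch of what "bitmaps of different sizes" means in practice, in generic C; this is deliberately not the libxl_bitmap API, just the underlying idea of a map that carries its own size so one bit per CPU, per node, etc. can be allocated as needed.

/* Generic variable-size bitmap, in the spirit of the libxl_cpumap ->
 * libxl_bitmap change described above; not the libxl API itself. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

struct bitmap {
    uint8_t *map;
    int      size;                    /* in bytes */
};

static int bitmap_alloc(struct bitmap *b, int n_bits)
{
    b->size = (n_bits + 7) / 8;
    b->map  = calloc(b->size, 1);
    return b->map ? 0 : -1;
}

static void bitmap_set(struct bitmap *b, int bit)
{
    b->map[bit / 8] |= (uint8_t)(1u << (bit % 8));
}

static int bitmap_test(const struct bitmap *b, int bit)
{
    return !!(b->map[bit / 8] & (1u << (bit % 8)));
}

static void bitmap_dispose(struct bitmap *b)
{
    free(b->map);
    b->map = NULL;
    b->size = 0;
}

int main(void)
{
    struct bitmap cpumap, nodemap;

    /* Different sizes for different purposes: one bit per CPU vs per node. */
    if (bitmap_alloc(&cpumap, 64) || bitmap_alloc(&nodemap, 4))
        return 1;
    bitmap_set(&cpumap, 12);
    bitmap_set(&nodemap, 1);
    printf("cpu 12 set: %d, node 1 set: %d\n",
           bitmap_test(&cpumap, 12), bitmap_test(&nodemap, 1));
    bitmap_dispose(&cpumap);
    bitmap_dispose(&nodemap);
    return 0;
}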
2008 Dec 17
36
[Patch 2 of 2]: PV-domain SMP performance Linux-part
2010 Apr 06
29
Xen 4.1 Feature Request List
Xen Community:
As many of you are aware, the Xen 4.0 hypervisor is due to ship tomorrow (shhhh, don't tell anyone) and I wanted to get submissions underway for Xen 4.1 features. I have updated the Roadmap Wiki page (http://wiki.xensource.com/xenwiki/XenRoadMap) with a new section for Xen 4.1 features to be added. Feel free to add your ideas or send me your features and I will update the
2012 Apr 03
3
[PATCH] xl: Don't require a config file for cpupools
Since the key information can be fairly simply put on the command-line,
there's no need to require an actual config file.
Also improve the help to cross-reference the xlcpupool.cfg manpage.
Signed-off-by: George Dunlap <george.dunlap@eu.citrix.com>
diff -r 30cc13e25e01 -r 0fb728d56bae docs/man/xl.pod.1
--- a/docs/man/xl.pod.1 Tue Apr 03 19:02:19 2012 +0100
+++ b/docs/man/xl.pod.1
2008 Dec 17
4
[Patch 0 of 2]: PV-domain SMP performance
Hi,
I've played a little bit with the xen scheduler to enhance the performance of
paravirtualized SMP domains including Dom0.
Under heavy system load a vcpu might be descheduled in a critical section.
This in turn leads to even higher system load if other vcpus of the same
domain are waiting for the descheduled vcpu to leave the critical section.
I've created a patch for xen
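A generic C illustration of the effect described above (lock-holder preemption), not the actual patch: while the vcpu holding a guest lock is descheduled, the domain's other vcpus burn physical CPU time spinning, which adds to the very load that keeps the holder descheduled.

/* Generic illustration of lock-holder preemption -- not the Xen/Linux patch. */
#include <stdatomic.h>
#include <stdio.h>

static atomic_flag guest_lock = ATOMIC_FLAG_INIT;

/* What a guest vcpu does while the lock holder is descheduled by the
 * hypervisor: it spins without making progress, wasting its timeslice. */
static long try_enter_critical_section(long spin_budget)
{
    long spins = 0;

    while (atomic_flag_test_and_set(&guest_lock)) {
        if (++spins >= spin_budget)
            return spins;             /* give up, just for the demo */
        /* A PV-aware guest would instead ask the hypervisor to yield or
         * block here instead of burning the physical CPU. */
    }
    atomic_flag_clear(&guest_lock);
    return spins;
}

int main(void)
{
    /* Simulate the descheduled holder: take the lock and never release it. */
    atomic_flag_test_and_set(&guest_lock);
    printf("wasted %ld spins while the holder was descheduled\n",
           try_enter_critical_section(1000000));
    return 0;
}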
2013 Sep 17
1
[PATCH] xen: numa-sched: leave node-affinity alone if not in "auto" mode
If the domain's NUMA node-affinity is being specified by the
user/toolstack (instead of being automatically computed by Xen),
we really should stick to that. This means domain_update_node_affinity()
is wrong when it filters out some stuff from there even in "!auto"
mode.
This commit fixes that. Of course, this does not mean node-affinity
is always honoured (e.g., a vcpu
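A hedged sketch of the intended behaviour with illustrative names (this is not the actual domain_update_node_affinity() code): the affinity is only recomputed in "auto" mode; when the user or toolstack has set it explicitly, it is left untouched.

/* Hedged sketch with illustrative names -- not the actual Xen change. */
#include <stdbool.h>
#include <stdio.h>

struct domain {
    bool     auto_node_affinity;      /* true: Xen computes it automatically */
    unsigned node_affinity;           /* bitmask of NUMA nodes (toy version) */
};

static unsigned nodes_with_vcpus(const struct domain *d)
{
    (void)d;
    return 0x1;                       /* pretend all vcpus sit on node 0 */
}

static void update_node_affinity(struct domain *d)
{
    if (!d->auto_node_affinity)
        return;                       /* user/toolstack set it: leave it alone */

    /* Only in "auto" mode is it legitimate to derive the affinity from
     * where the domain's vcpus can actually run. */
    d->node_affinity = nodes_with_vcpus(d);
}

int main(void)
{
    struct domain d = { .auto_node_affinity = false, .node_affinity = 0x3 };

    update_node_affinity(&d);
    printf("node affinity still 0x%x (untouched in !auto mode)\n",
           d.node_affinity);
    return 0;
}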