2011 May 19
5
vcpu-pin cause dom0 kernel panic
I use Xen 4.0 (dom0 is SUSE 11 SP1, 2.6.32 x86_64) on a
Dell R710 with PERC H700 RAID adapter.
--Intel(R) Xeon(R) CPU E5620 @ 2.40GHz
--8 CPU cores.
--Memory 64G
--RAID5 4.5T
When I dedicate (pin) a single CPU core for dom0's use
(I specify the "dom0_max_vcpus=1 dom0_vcpus_pin" options for Xen),
I get a dom0 kernel panic (see pin-1-5.30.bmp).
When I pin 2 cores to dom0, the dom0 system can boot up,
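For reference, the failing configuration corresponds to a GRUB entry along these lines (paths and file names are illustrative, not taken from the report):

    title Xen 4.0 (dom0 pinned to one core)
    kernel /boot/xen.gz dom0_max_vcpus=1 dom0_vcpus_pin
    module /boot/vmlinuz-2.6.32-xen root=/dev/sda1
    module /boot/initrd-2.6.32-xen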
2013 Nov 11
1
[PATCH] x86/idle: reduce contention on ACPI register accesses
Other than when they're located in I/O port space, accessing them when
in MMIO space (currently) implies usage of some sort of global lock: in
-unstable this would be due to the use of vmap(); in older trees the
necessary locking was introduced by 2ee9cbf9 ("ACPI: fix
acpi_os_map_memory()"). This contention was observed to result in Dom0
kernel soft lockups during the loading of
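A rough C sketch of the access pattern being described, with hypothetical names (struct acpi_addr and map_register() are illustrative stand-ins, not Xen's real code); the point is that every MMIO-space access funnels through one global lock while port-I/O accesses do not:

    /* Sketch only -- not the actual Xen implementation. */
    struct acpi_addr { int is_port_io; unsigned long address; };

    static DEFINE_SPINLOCK(map_lock);      /* the global lock in question */

    static uint32_t acpi_reg_read(const struct acpi_addr *a)
    {
        uint32_t v;

        if ( a->is_port_io )
            return inl(a->address);        /* lock-free fast path */

        spin_lock(&map_lock);              /* every CPU contends here */
        v = readl(map_register(a));        /* map and read the MMIO reg */
        spin_unlock(&map_lock);

        return v;
    }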
2013 Feb 21
2
[PATCH v3] x86/nhvm: properly clean up after failure to set up all vCPU-s
Otherwise we may leak memory when setting up nHVM fails halfway.
This implies that the individual destroy functions will have to remain
capable (in the VMX case they first need to be made so, following
26486:7648ef657fe7 and 26489:83a3fa9c8434) of being called for a vCPU
that the corresponding init function was never run on.
While at it, also remove a redundant check from the corresponding
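The cleanup pattern being described looks roughly like this in C (vcpu_init()/vcpu_destroy() are placeholder names for the per-vCPU nHVM setup/teardown functions, not the real ones):

    /* Sketch: on partial failure, tear down *all* vCPUs, which is why
     * each destroy function must tolerate a vCPU whose init never ran
     * (e.g. by checking for unallocated state before freeing it). */
    static int nhvm_setup_all(struct domain *d, unsigned int nr_vcpus)
    {
        unsigned int i;
        int rc = 0;

        for ( i = 0; i < nr_vcpus; i++ )
            if ( (rc = vcpu_init(d, i)) != 0 )
                break;

        if ( rc )
            for ( i = 0; i < nr_vcpus; i++ )   /* includes never-inited ones */
                vcpu_destroy(d, i);

        return rc;
    }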
2012 Mar 12
3
x86/dom0: limit dom0_max_vcpus value
This caused particularly poor performance when booting a server in
uniprocessor mode for debugging reasons, with 4 dom0 vCPUs competing
for 1 pCPU's worth of time.
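The fix presumably clamps the requested value to the number of online pCPUs, along these lines (a sketch, not the actual patch):

    /* Sketch: clamp the dom0_max_vcpus= request to the online pCPU count. */
    static unsigned int __init dom0_nr_vcpus(unsigned int requested)
    {
        unsigned int online = num_online_cpus();

        return requested > online ? online : requested;
    }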
--
Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer
T: +44 (0)1223 225 900, http://www.citrix.com
2012 Apr 21
6
[PATCH] xen: Add GS base to HVM VCPU context
Add GS base to the HVM VCPU context returned by xc_vcpu_getcontext()
Signed-off-by: Aravindh Puthiyaparambil <aravindh@virtuata.com>
diff -r e62ab14d44af -r babbb3e0f4d3 xen/arch/x86/domctl.c
--- a/xen/arch/x86/domctl.c Fri Apr 20 11:36:02 2012 -0700
+++ b/xen/arch/x86/domctl.c Fri Apr 20 17:55:49 2012 -0700
@@ -1592,6 +1592,12 @@ void arch_get_info_guest(struct vcpu *v,
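A hedged usage sketch showing how a caller would then read the GS base via libxc (error handling elided; the gs_base_kernel/gs_base_user field names come from the public x86_64 context headers):

    #include <stdio.h>
    #include <xenctrl.h>

    static void show_gs_base(xc_interface *xch, uint32_t domid, uint32_t vcpu)
    {
        vcpu_guest_context_any_t ctxt;

        if ( xc_vcpu_getcontext(xch, domid, vcpu, &ctxt) == 0 )
            printf("gs_base_kernel=%#lx gs_base_user=%#lx\n",
                   (unsigned long)ctxt.x64.gs_base_kernel,
                   (unsigned long)ctxt.x64.gs_base_user);
    }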
2013 Feb 11
25
Xen 4.2.1 boot failure with IOMMU enabled
Hi all
I already posted about this problem on xen-users some time ago
(http://markmail.org/message/sbgtyjqh6bzmqx4s) but I couldn't
resolve my problem with the help from people on xen-users, so I'm posting here.
I have a problem with enabling the IOMMU on Xen 4.2.1. When I enable it in the BIOS
and in grub.conf using the iommu=1 option, my machine cannot boot.
I get the following error
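For context, iommu= is parsed by the hypervisor, so it belongs on the xen.gz line rather than the dom0 kernel line; the boot entry presumably resembles (paths illustrative):

    kernel /boot/xen-4.2.1.gz iommu=1
    module /boot/vmlinuz root=/dev/sda1
    module /boot/initrd.img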
2010 May 21
10
What's the difference between "dom0_max_vcpus=4 dom0_vcpus_pin" and "dom0_max_vcpus=4"?
Hi experts,
Q1: What's the difference between "dom0_max_vcpus=4 dom0_vcpus_pin" and
"dom0_max_vcpus=4"?
Which will give better performance?
Q2: Does dom0_max_vcpus=4 mean "cores 0-3 will be used only by dom0", or "4
cores (not dedicated cores) will be used by dom0", e.g. cores 2-5 or cores 3-6?
Q3: What does "nosmp" mean? Will xen, dom0, and domU just use one
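For illustration, per the Xen command-line documentation (the grub line is an example):

    kernel /boot/xen.gz dom0_max_vcpus=4 dom0_vcpus_pin
    # dom0_vcpus_pin pins dom0 vCPU 0 to pCPU 0, ..., vCPU 3 to pCPU 3.
    # Without it, dom0's 4 vCPUs float across all pCPUs under the
    # scheduler, and their affinity can still be changed later with
    # "xl vcpu-pin" (or "xm vcpu-pin" on older toolstacks).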
2013 Sep 23
1
[PATCH] xen/x86: add a comment regarding how to get the VCPU ID on HVM
Add a note to the public headers regarding how to get the VCPU ID for
HVM guests (on x86).
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Keir Fraser <keir@xen.org>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Matt Wilson <msw@amazon.com>
---
This is what Linux PVHVM does AFAIK, and also what I've been
2015 Sep 01
3
poor performance with dom0 on centos7
Hi all,
Is it possible to tune dom0/domU for better IO/network performance?
Since I changed to a CentOS 7 dom0, I have really poor IO
performance inside a PV VM.
I have already done what is described at
http://wiki.xenproject.org/wiki/Tuning_Xen_for_Performance
It is better now but still significantly worse than with a CentOS 6 dom0.
My settings:
xen parameters: dom0_mem=1024M cpufreq=xen
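For comparison, the wiki page cited above commonly suggests dom0 boot parameters along these lines (values are examples, not a recommendation for this machine):

    dom0_mem=1024M,max:1024M dom0_max_vcpus=4 dom0_vcpus_pin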
2013 Sep 23
1
[PATCH v2] xen/x86: add a comment regarding how to get the VCPU ID on HVM
Add a note to the public headers regarding how to get the VCPU ID for
HVM guests (on x86).
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Acked-by: Matt Wilson <msw@amazon.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Keir Fraser <keir@xen.org>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Matt Wilson <msw@amazon.com>
---
This is what Linux PVHVM
2013 May 02
5
[PATCH] x86: allow Dom0 read-only access to IO-APICs
There are BIOSes that want to map the IO-APIC MMIO region from some
ACPI method(s), and there is at least one BIOS flavor that wants to
use this mapping to clear an RTE's mask bit. While we can't allow the
latter, we can permit reads and simply drop write attempts, leveraging
the already existing infrastructure introduced for dealing with AMD
IOMMUs' representation as
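In pseudo-C, the access policy described amounts to the following (a sketch with hypothetical helper names, not the actual patch):

    /* Sketch: Dom0 reads of the IO-APIC MMIO page are forwarded,
     * writes are silently dropped. */
    static int ioapic_ro_access(int is_write, unsigned long addr,
                                unsigned long *val)
    {
        if ( is_write )
            return 1;                 /* handled: drop the write */

        *val = ioapic_read(addr);     /* hypothetical read helper */
        return 1;                     /* handled */
    }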
2008 Nov 24
10
[PATCH] Dom0 Kernel - Fixes for saving/restoring MSI/MSI-X across Dom0 S3
Hi, Keir,
This patch is a bugfix for saving and restoring MSI/MSI-X across S3. Currently, Dom0's PCI layer unmaps MSIs on S3 and maps them back when resuming. However, this triggers unexpected behavior. For example, if a driver still holds the irq at the point the MSI is unmapped, Xen will forcibly unbind that pirq, but after resume we have no mechanism to rebind it. The device
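The missing resume step corresponds roughly to re-issuing the map hypercall; a hedged Linux-side sketch using the real PHYSDEVOP_map_pirq interface (whether Xen honours a requested pirq this way is an assumption):

    /* Sketch: after S3 resume, try to re-establish the MSI pirq
     * binding that was torn down at suspend time. */
    static int rebind_msi_pirq(struct pci_dev *dev, int saved_pirq)
    {
        struct physdev_map_pirq map = {
            .domid = DOMID_SELF,
            .type  = MAP_PIRQ_TYPE_MSI,
            .index = -1,              /* let Xen pick the vector */
            .pirq  = saved_pirq,      /* ask for the old pirq back */
            .bus   = dev->bus->number,
            .devfn = dev->devfn,
        };

        return HYPERVISOR_physdev_op(PHYSDEVOP_map_pirq, &map);
    }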
2016 Nov 25
7
[PATCH net-next] virtio-net: enable multiqueue by default
We use a single queue even if multiqueue is enabled, and let the admin
enable it through ethtool later. This is meant to avoid a possible
regression (small-packet TCP stream transmission), but it looks like
overkill since:
- single-queue users can disable multiqueue when launching qemu
- it brings extra trouble for management, since an extra admin
tool is needed in the guest to enable multiqueue
- multiqueue
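For reference, multiqueue is already expressible on both sides today; queue counts here are illustrative:

    # host: launch qemu with 4 queue pairs (vectors = 2*queues + 2)
    qemu-system-x86_64 ... \
        -netdev tap,id=n0,queues=4,vhost=on \
        -device virtio-net-pci,netdev=n0,mq=on,vectors=10
    # guest: enable the extra queues via ethtool
    ethtool -L eth0 combined 4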
2009 Jan 26
24
page ref/type count overflows
With pretty trivial user-mode programs being able to crash the kernel, due to
the ref counter widths in Xen being narrower than in Linux, I started an
attempt to put together a kernel-side fix. While addressing the plain
hypercalls is pretty straightforward, dealing with multicalls (both when using
them for lazy mmu-mode batching and when explicitly using them in e.g.
netback - the backends are
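The kernel-side guard being contemplated amounts to refusing to hand a page to Xen once its count approaches Xen's narrower field width; illustratively (the limit below is a placeholder, not Xen's actual field layout):

    /* Sketch: Linux's page counts are full-width atomics, Xen's count
     * fields are narrower, so check before issuing the hypercall. */
    #define XEN_REF_LIMIT  ((1u << 27) - 1)   /* hypothetical width */

    static int xen_ref_ok(struct page *page)
    {
        return page_count(page) < XEN_REF_LIMIT;
    }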
2005 Oct 10
13
[PATCH] 0/2 VCPU creation and allocation
I've put together two patches. The first introduces a new dom0_op,
set_max_vcpus, which, with an associated variable and a check in the
VCPUOP handler, fixes bug 288 [1]. Also included is a new VCPUOP,
VCPUOP_create, which handles all of the vcpu creation tasks and leaves
initialization and unpausing to VCPUOP_initialize. The separation
allows for build-time allocation of vcpus, which
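Under that proposal the domain-builder flow would look roughly like this (VCPUOP_create is the proposed op and never landed upstream in this form; VCPUOP_initialise and VCPUOP_up are the real ones):

    /* Sketch: allocate every vCPU at build time, then initialize and
     * unpause each one individually later. */
    for ( v = 0; v < max_vcpus; v++ )
        HYPERVISOR_vcpu_op(VCPUOP_create, v, NULL);       /* proposed */

    /* ... later, per vCPU ... */
    HYPERVISOR_vcpu_op(VCPUOP_initialise, v, &ctxt);      /* existing */
    HYPERVISOR_vcpu_op(VCPUOP_up, v, NULL);               /* existing */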
2006 Jul 31
1
[PATCH 5/6] xen, tools: calculate nr_cpus via num_online_cpus
Once Xen calculates nr_nodes properly, all nr_cpu calculations based on
nr_nodes * sockets_per_node * cores_per_socket * threads_per_core are
broken. The easy fix is to replace those calculations with a new field,
nr_cpus, in physinfo, which is calculated by num_online_cpus(). This
patch does so and attempts to change all users over to the nr_cpus field in
physinfo. This patch touches
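In libxc terms the change is roughly the following (field names as in the patched physinfo; error handling elided):

    xc_physinfo_t info;

    if ( xc_physinfo(xc_handle, &info) == 0 )
    {
        /* old, broken once nr_nodes is real:
         * nr_cpus = info.nr_nodes * info.sockets_per_node *
         *           info.cores_per_socket * info.threads_per_core; */
        nr_cpus = info.nr_cpus;   /* new: num_online_cpus() in Xen */
    }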
2012 Feb 14
1
[PATCH] x86: don't allow Dom0 to map MSI-X table writably
With the traditional qemu tree fixed to not use PROT_WRITE anymore in
the mmap() call for this region, and with the upstream qemu tree not
yet being capable of handling passthrough, there's no need to treat
Dom0 specially here anymore.
This continues to leave unaddressed the case where PV guests map the
MSI-X table page(s) before setting up the first MSI-X interrupt (see
the original c/s
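The qemu-side fix referenced above amounts to mapping the MSI-X table region without PROT_WRITE, schematically:

    /* Sketch (needs <sys/mman.h>): map the MSI-X table read-only so
     * writes from the tool side can never reach the real table. */
    void *tbl = mmap(NULL, len, PROT_READ /* no PROT_WRITE */,
                     MAP_SHARED, fd, table_offset);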
2011 Sep 01
3
DOM0 Hang on a large box....
Hi,
I'm looking at a system hang on a large box: 160 CPUs, 2TB RAM. Dom0 is
booted with 160 vCPUs (don't ask me why :)), and an HVM guest is started
with over 1.5T of RAM and 128 vCPUs. The system hangs without much activity
after a couple of hours. Xen 4.0.2 and a 2.6.32-based 64-bit dom0.
During the hang I discovered:
Most of dom0's vCPUs are in double_lock_balance, spinning on one of the locks:
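For context, double_lock_balance() takes a pair of runqueue locks in address order to avoid ABBA deadlock; abridged from kernel/sched.c of the 2.6.32 era, this is where the vCPUs were seen spinning:

    static int _double_lock_balance(struct rq *this_rq, struct rq *busiest)
    {
        int ret = 0;

        if (unlikely(!spin_trylock(&busiest->lock))) {
            if (busiest < this_rq) {
                /* wrong order: drop our lock, retake both in order */
                spin_unlock(&this_rq->lock);
                spin_lock(&busiest->lock);
                spin_lock_nested(&this_rq->lock, SINGLE_DEPTH_NESTING);
                ret = 1;
            } else
                spin_lock_nested(&busiest->lock, SINGLE_DEPTH_NESTING);
        }
        return ret;
    }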