similar to: [Patch 2 of 2]: PV-domain SMP performance Linux-part

Displaying 17 results from an estimated 20000 matches similar to: "[Patch 2 of 2]: PV-domain SMP performance Linux-part"

2008 Dec 17
4
[Patch 0 of 2]: PV-domain SMP performance
Hi, I've played a little bit with the xen scheduler to enhance the performance of paravirtualized SMP domains including Dom0. Under heavy system load a vcpu might be descheduled in a critical section. This in turn leads to even higher system load if other vcpus of the same domain are waiting for the descheduled vcpu to leave the critical section. I've created a patch for xen
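A minimal sketch of the general idea only, not the actual patch; all names here are hypothetical. The guest raises a counter in a page shared with the hypervisor while it holds a lock, and the scheduler grants a short grace period before descheduling such a vcpu:

    #include <stdbool.h>
    #include <stdint.h>

    struct shared_sched_info {
        volatile uint32_t in_critical;  /* nesting count of critical sections */
    };

    /* Guest side: bracket a critical section. */
    static inline void crit_enter(struct shared_sched_info *si) { si->in_critical++; }
    static inline void crit_exit(struct shared_sched_info *si)  { si->in_critical--; }

    /* Hypervisor side, inside the scheduler's preemption decision:
     * defer descheduling until the section ends or the grace period runs out. */
    static bool may_deschedule(const struct shared_sched_info *si, bool grace_expired)
    {
        return si->in_critical == 0 || grace_expired;
    }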
2008 Oct 28
2
late lapic timer interrupts for hvm guest
Hi, when using the lapic as timer source the hypervisor delivers timer interrupts late. In xen/arch/x86/hvm/vpt.c the function create_periodic_time creates a timer element with a "bonus" of 50% of the desired time until the interrupt:

    pt->scheduled = NOW() + period;
    /*
     * Offset LAPIC ticks from other timer ticks. Otherwise guests which use
     * LAPIC ticks for
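A minimal sketch of the reported behaviour, assuming the extra offset is applied only to the LAPIC source; the names are illustrative stand-ins, not the actual vpt.c code:

    #include <stdint.h>

    typedef uint64_t s_time_t;                 /* ns, as returned by Xen's NOW() */

    struct periodic_time_sketch {
        s_time_t scheduled;                    /* absolute time of next tick */
        uint64_t period;                       /* tick interval in ns */
    };

    static void program_first_tick(struct periodic_time_sketch *pt,
                                   s_time_t now, uint64_t period, int is_lapic)
    {
        pt->period = period;
        pt->scheduled = now + period;
        if (is_lapic)
            pt->scheduled += period / 2;       /* the reported 50% "bonus":
                                                  first tick fires at 1.5 * period */
    }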
2011 Feb 14
7
[PATCH] xl cpupool-numa-split: reduce number of Dom0 vcpus
When reducing the number of physical cpus available for Domain-0 by xl cpupool-numa-split, reduce the number of vcpus accordingly.

Signed-off-by: juergen.gross@ts.fujitsu.com

 tools/libxl/xl_cmdimpl.c | 22 ++++++++++++++++++++--
 1 file changed, 20 insertions(+), 2 deletions(-)
2010 Jul 28
22
ACPI-Tables corrupted?
Hi, on a Nehalem system with VT-d enabled we are seeing strange ACPI-Table contents, especially a corrupted DMAR entry. The hypervisor shows the following data on boot:

    (XEN) ACPI: RSDP 000F80E0, 0024 (r2 PTLTD )
    (XEN) ACPI: XSDT BF7C469E, 00D4 (r1 PTLTD XSDT 60000 LTP 0)
    (XEN) ACPI: FACP BF7C9CC9, 00F4 (r3 FSC TYLERBRG 60000 PTL F4240)
    (XEN) ACPI: DSDT BF7C4772, 54D3 (r1
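For reference, the fields printed in each line above (signature, address, length, revision, OEM data) follow the standard ACPI system description table header, so a corrupted DMAR shows up as nonsense in exactly these fields. A sketch of that header as defined by the ACPI specification:

    #include <stdint.h>

    struct acpi_table_header {
        char     signature[4];        /* e.g. "DMAR", "FACP", "DSDT" */
        uint32_t length;              /* total table size in bytes */
        uint8_t  revision;            /* the "(rN ...)" field above */
        uint8_t  checksum;            /* whole table must sum to zero */
        char     oem_id[6];           /* e.g. "PTLTD ", "FSC" */
        char     oem_table_id[8];     /* e.g. "TYLERBRG" */
        uint32_t oem_revision;
        char     asl_compiler_id[4];
        uint32_t asl_compiler_revision;
    } __attribute__((packed));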
2011 Nov 17
12
[PATCH] Avoid panic when adjusting sedf parameters
When using the sedf scheduler in a cpupool the system might panic when setting sedf scheduling parameters for a domain.

Signed-off-by: juergen.gross@ts.fujitsu.com

 xen/common/sched_sedf.c | 4 ++++
 1 file changed, 4 insertions(+)
2011 Jan 27
7
[PATCH]: xl: fix broken cpupool-numa-split
Hi, the implementation of xl cpupool-numa-split is broken. It basically deals with only one poolid, but there are two to consider: the one of the original root CPUpool and the one of the newly created pool. On my machine the current output looks like:

    root@dosorca:/data/images# xl cpupool-numa-split
    libxl: error: libxl.c:2803:libxl_create_cpupool Could not create cpupool
    error on creating
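A minimal sketch of the fix's idea, with hypothetical helper names standing in for the real libxl calls (this is not the actual xl_cmdimpl.c code): every cpu move must name both pools explicitly instead of reusing a single poolid.

    #include <stdint.h>

    /* Stubs standing in for the real libxl operations. */
    static int remove_cpu_from_pool(uint32_t poolid, int cpu) { (void)poolid; (void)cpu; return 0; }
    static int add_cpu_to_pool(uint32_t poolid, int cpu)      { (void)poolid; (void)cpu; return 0; }

    /* The split must track both ids; collapsing them into one
     * reproduces the reported failure. */
    struct split_ctx {
        uint32_t root_poolid;   /* pool the cpus are taken out of */
        uint32_t new_poolid;    /* pool just created for the current NUMA node */
    };

    static int move_cpu(const struct split_ctx *ctx, int cpu)
    {
        if (remove_cpu_from_pool(ctx->root_poolid, cpu))
            return -1;
        return add_cpu_to_pool(ctx->new_poolid, cpu);
    }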
2012 Jul 13
11
Backport requests of cs 23420..23423 for 4.0 and 4.1
Hi, we are experiencing significant performance degradation after live migration of hvm domains in Xen 4.0 (SLES11 SP1): performance drops to less than 90% of the pre-migration level. I did a backport of cs 23420-23423 and the performance is okay now. I would like to request that these changesets be included in 4.0 and 4.1. The backport is quite trivial; I can send patches if you are willing to
2013 Mar 15
2
strange phenomenon on CPU affinity
Hello, my testing machine has 2 quad-core CPUs (hyperthreading is supported, but I disabled it in the BIOS). I use Xen 4.0.1 as the hypervisor. When I use 8 VMs to conduct a test, the CPU affinity of the VMs is very strange. Like this:

    vm_name   vcpu_num  cpu_affinity
    Domain-0  8         any
    VM1       4         1,3,5,7
    VM2       4         1,3,5,7
    VM3       4         1,3,5,7
    VM4       4
2011 Nov 15
2
xen-unstable/staging: qemu git file corrupt
Hi, when I try to build the xen-unstable/staging (cs 24143) tools via make tools I get:

    ...
    got 1b6bfb99c2b55ff2e35ab61caf307dad3aebc82a
    got efd594c960330cc3eee44e65f5fee258c798e610
    got ccc9677505c0dd2c6c5054e73a42cef2d25687b4
    got 86a2a2a59a8b76117b221c712ba0a156d21441c9
    error: File efd594c960330cc3eee44e65f5fee258c798e610
2011 Mar 16
2
[PATCH] Remove no longer used cpu_possible definitions
cpu_possible_mask and related macros are no longer used in Xen. Remove them and adjust comments accordingly.

Signed-off-by: juergen.gross@ts.fujitsu.com

 xen/include/xen/cpumask.h | 51 +++++++++------------------------------------
 1 file changed, 11 insertions(+), 40 deletions(-)
2008 Sep 26
6
Mapping hvm guest pages in Dom0
Hello, I would like to map (read/write) pages owned by an HVM guest from my Dom0 Linux kernel module. I have access to the "machine frame numbers" of these pages.

1. What is the right interface to do this? kmap needs 'struct page' ptrs which I doubt exist for pages owned by an HVM guest. Is there a hypercall to do this then?
2. Do I need to modify the HVM behavior in
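The question's premise is right: kmap cannot work, because Dom0 has no struct page for frames belonging to an HVM guest. For the user-space variant of this task, libxc has long wrapped the privcmd mmap ioctl; a hedged sketch using the xc_map_foreign_range call of that era (exact signatures have changed across Xen versions, so treat the details as an assumption):

    #include <stdint.h>
    #include <sys/mman.h>
    #include <xenctrl.h>

    /* Map one 4K frame of an HVM guest read/write into Dom0 user space. */
    void *map_guest_mfn(int xc_handle, uint32_t domid, unsigned long mfn)
    {
        return xc_map_foreign_range(xc_handle, domid, 4096,
                                    PROT_READ | PROT_WRITE, mfn);
    }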
2008 Jan 09
0
Strange interrupt data for page fault
Hi, I've got a strange problem with XEN: I'm testing a 64-bit HVM-domain (a little OS written by me) with XEN 3.0.4 changeset 13138 on x86_64 (Dom0 and XEN taken from SLES 10 SP1). Always at the same point I get a page fault (which is okay) in PL=3 with strange interrupt data on the stack: the EIP is okay, but the CS saved is the TSS instead of my 64-bit user CS selector. I
2010 Jul 28
23
HVM hypercalls
Hi, I need to use hypercalls from an HVM domain (e.g. HYPERVISOR_add_to_physmap). However, it does not work when I am trying to invoke it from an HVM Linux guest. Basically, I don't see that anything happens on the hypervisor's side. I also grep'ed the guest code for 'vmmcall'/'vmcall' and did not find anything. Is it possible to do it at all?
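It is possible; grepping for vmcall/vmmcall finds nothing because the guest never contains those instructions itself. The hypervisor writes them into a "hypercall page" after the guest programs an MSR discovered via the Xen CPUID leaves. A sketch of that standard setup, with illustrative helper names and a deliberate simplification: a real guest must pass the page's guest-physical address, which is only equal to the pointer below under identity mapping.

    #include <stdint.h>

    static char hypercall_page[4096] __attribute__((aligned(4096)));

    static inline void cpuid(uint32_t leaf, uint32_t *a, uint32_t *b,
                             uint32_t *c, uint32_t *d)
    {
        __asm__ __volatile__("cpuid"
                             : "=a"(*a), "=b"(*b), "=c"(*c), "=d"(*d)
                             : "a"(leaf));
    }

    static inline void wrmsr(uint32_t msr, uint64_t val)
    {
        __asm__ __volatile__("wrmsr" :: "c"(msr),
                             "a"((uint32_t)val), "d"((uint32_t)(val >> 32)));
    }

    void xen_setup_hypercall_page(void)
    {
        uint32_t eax, ebx, ecx, edx, msr;

        cpuid(0x40000000, &eax, &ebx, &ecx, &edx);  /* "XenVMMXenVMM" signature */
        cpuid(0x40000002, &eax, &msr, &ecx, &edx);  /* ebx = MSR to program */
        /* Simplification: assumes identity mapping; must be guest-physical. */
        wrmsr(msr, (uint64_t)(uintptr_t)hypercall_page);
        /* Hypercall N is now reached via: call hypercall_page + N * 32 */
    }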
2017 Sep 06
5
[PATCH v3 0/2] guard virt_spin_lock() with a static key
With virt_spin_lock() being guarded by a static key the bare metal case can be optimized by patching the call away completely. In case a kernel is running as a guest it can decide whether to use paravirtualized spinlocks, the current fallback to the unfair test-and-set scheme, or to mimic the bare metal behavior. V3: - remove test for hypervisor environment from virt_spin_lock() as suggested by
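A userspace sketch of the mechanism being described: the real kernel code uses a static key (static_branch_likely) that run-time patching turns into a plain fall-through on bare metal, modeled here by an ordinary flag.

    #include <stdatomic.h>
    #include <stdbool.h>

    static bool virt_spin_lock_key = true;    /* kernel: a patched static branch */

    struct qspinlock_sketch { atomic_int val; };

    static bool virt_spin_lock(struct qspinlock_sketch *lock)
    {
        if (!virt_spin_lock_key)
            return false;                     /* bare metal: fair qspinlock path */

        /* Unfair test-and-set fallback: fair queued locks suffer badly
         * from lock-holder preemption under a hypervisor. */
        int expected;
        do {
            while (atomic_load(&lock->val) != 0)
                ;                             /* kernel would cpu_relax() here */
            expected = 0;
        } while (!atomic_compare_exchange_strong(&lock->val, &expected, 1));

        return true;
    }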
2015 Apr 30
12
[PATCH 0/6] x86: reduce paravirtualized spinlock overhead
Paravirtualized spinlocks produce some overhead even if the kernel is running on bare metal. The main reason is the more complex locking and unlocking functions. Especially unlocking is no longer just one instruction, but so complex that it is no longer inlined. This patch series addresses this issue by adding two more pvops functions to reduce the size of the inlined spinlock functions. When
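A sketch of the mechanism at stake, with hypothetical names throughout (not the series' actual code): when unlock goes through a pvops hook, bare metal can keep the inlined path down to a single store while guests add their vcpu-kick logic.

    #include <stdint.h>

    struct pv_lock_ops_sketch {
        void (*unlock)(volatile uint8_t *lock_byte);  /* chosen per platform */
    };

    /* Bare metal: effectively one instruction, movb $0, (lock). */
    static void native_unlock(volatile uint8_t *lock_byte)
    {
        *lock_byte = 0;
    }

    /* Virtualized: same release, plus kicking any vcpu blocked on the
     * lock (a hypercall in the real implementations). */
    static void virt_unlock(volatile uint8_t *lock_byte)
    {
        *lock_byte = 0;
        /* kick_waiting_vcpu(lock_byte); */
    }

    static struct pv_lock_ops_sketch pv_lock_ops_sketch = {
        .unlock = native_unlock,  /* swapped to virt_unlock under a hypervisor */
    };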
2017 Sep 06
4
[PATCH v2 0/2] guard virt_spin_lock() with a static key
With virt_spin_lock() being guarded by a static key the bare metal case can be optimized by patching the call away completely. In case a kernel is running as a guest it can decide whether to use paravirtualized spinlocks, the current fallback to the unfair test-and-set scheme, or to mimic the bare metal behavior. V2: - use static key instead of making virt_spin_lock() a pvops function Juergen Gross
2015 May 06
2
[PATCH 0/6] x86: reduce paravirtualized spinlock overhead
On 05/05/2015 07:21 PM, Jeremy Fitzhardinge wrote:
> On 05/03/2015 10:55 PM, Juergen Gross wrote:
>> I did a small measurement of the pure locking functions on bare metal
>> without and with my patches.
>>
>> spin_lock() for the first time (lock and code not in cache) dropped from
>> about 600 to 500 cycles.
>>
>> spin_unlock() for first time dropped