similar to: [PATCH v7 00/11] implement vcpu preempted check


2016 Oct 28
16
[PATCH v6 00/11] implement vcpu preempted check
change from v5: split x86/kvm patch into guest/host part. introduce kvm_write_guest_offset_cached. fix some typos. rebase patch onto 4.9.2. change from v4: split x86 kvm vcpu preempted check into two patches. add documentation patch. add x86 vcpu preempted check patch under xen. add s390 vcpu preempted check patch. change from v3: add x86 vcpu preempted check patch. change from v2: no code
2016 Oct 20
15
[PATCH v5 0/9] implement vcpu preempted check
change from v4: split x86 kvm vcpu preempted check into two patches. add documentation patch. add x86 vcpu preempted check patch under xen. add s390 vcpu preempted check patch. change from v3: add x86 vcpu preempted check patch. change from v2: no code change, fix typos, update some comments. change from v1: a simpler definition of default vcpu_is_preempted; skip machine type check on ppc,
2016 Oct 19
3
[PATCH v4 5/5] x86, kvm: support vcpu preempted check
2016-10-19 06:20-0400, Pan Xinhui: > This is to fix some lock holder preemption issues. Some other lock implementations do a spin loop before acquiring the lock itself. Currently the kernel has an interface of bool vcpu_is_preempted(int cpu). It takes the cpu as a parameter and returns true if that cpu is preempted. The kernel can then break the spin loops upon the retval of
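
The interface that excerpt describes lends itself to a short illustration of the spin-loop break it enables. The following is a minimal, self-contained sketch, not kernel code: vcpu_is_preempted() here is a stub standing in for the real per-architecture implementation, and spin_on_owner()/lock_owner are illustrative names only.

  #include <stdatomic.h>
  #include <stdbool.h>

  /*
   * Stand-in for the interface described above: return true when the virtual
   * CPU 'cpu' is currently scheduled out by the hypervisor.  The kernel routes
   * this through a per-architecture/paravirt implementation.
   */
  static bool vcpu_is_preempted(int cpu)
  {
          (void)cpu;
          return false;   /* bare-metal style default: never preempted */
  }

  /*
   * Illustrative spin-wait: keep spinning only while the owner's vCPU is
   * actually running.  Busy-waiting on a descheduled vCPU burns host CPU time
   * without making progress, so the caller should stop and block instead.
   */
  static bool spin_on_owner(atomic_int *lock_owner, int self_cpu)
  {
          int owner;

          while ((owner = atomic_load(lock_owner)) >= 0) {
                  if (owner != self_cpu && vcpu_is_preempted(owner))
                          return false;   /* give up spinning, sleep/yield */
          }
          return true;    /* owner released the lock, try to take it */
  }
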
2016 Oct 19
10
[PATCH v4 0/5] implement vcpu preempted check
change from v3: add x86 vcpu preempted check patch. change from v2: no code change, fix typos, update some comments. change from v1: a simpler definition of default vcpu_is_preempted; skip machine type check on ppc, and add config. remove dedicated macro. add one patch to drop overload of rwsem_spin_on_owner and mutex_spin_on_owner. add more comments, thanks to Boqun's and Peter's suggestions.
2016 Dec 19
2
[PATCH v7 08/11] x86, kvm/x86.c: support vcpu preempted check
Hello. On Wed, Nov 02, 2016 at 05:08:35AM -0400, Pan Xinhui wrote: > Support the vcpu_is_preempted() functionality under KVM. This will enhance lock performance on overcommitted hosts (more runnable vcpus than physical cpus in the system), as doing busy waits for preempted vcpus will hurt system performance far worse than early yielding. Use one field of struct
2016 Oct 19
2
[PATCH v2 1/1] s390/spinlock: Provide vcpu_is_preempted
On 09/29/2016 05:51 PM, Christian Borntraeger wrote: > this implements the s390 backend for commit "kernel/sched: introduce vcpu preempted check interface" by reworking the existing smp_vcpu_scheduled into arch_vcpu_is_preempted. We can then also get rid of the local cpu_is_preempted function by moving the CIF_ENABLED_WAIT test into arch_vcpu_is_preempted.
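
Read literally, that rework folds the two existing tests into one predicate. The sketch below is only an approximation of the description, using stub helpers named after the identifiers mentioned in the excerpt; the actual s390 implementation lives in the architecture code and may differ in detail.

  #include <stdbool.h>

  /* Stub helpers named after the identifiers in the excerpt; the real
   * implementations are s390-specific. */
  static bool cif_enabled_wait_set(int cpu) { (void)cpu; return false; } /* CIF_ENABLED_WAIT flag */
  static bool smp_vcpu_scheduled(int cpu)   { (void)cpu; return true;  } /* hypervisor runs this vCPU */

  /*
   * Sketch of the reworked predicate the excerpt describes: a vCPU parked in
   * enabled wait is idle by choice, not preempted; otherwise it counts as
   * preempted exactly when the hypervisor is not currently running it.
   */
  static bool arch_vcpu_is_preempted(int cpu)
  {
          if (cif_enabled_wait_set(cpu))
                  return false;
          return !smp_vcpu_scheduled(cpu);
  }
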
2016 Oct 20
0
[PATCH v5 6/9] x86, kvm: support vcpu preempted check
Support the vcpu_is_preempted() functionality under KVM. This will enhance lock performance on overcommitted hosts (more runnable vcpus than physical cpus in the system), as doing busy waits for preempted vcpus will hurt system performance far worse than early yielding. Use one field of struct kvm_steal_time to indicate whether a vcpu is running or not. unix benchmark result: host: kernel
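
The mechanism sketched in that description is a flag in the steal-time record the host already shares with each guest vCPU: the host marks a vCPU while it is scheduled out, and the guest only has to read the flag. The structure layout and field name below are assumptions for illustration, not a copy of the patch.

  #include <stdbool.h>
  #include <stdint.h>

  #define NR_CPUS 64

  /*
   * Sketch of the shared steal-time record the excerpt refers to.  The real
   * structure is struct kvm_steal_time in the KVM paravirt ABI; the layout
   * and the 'preempted' byte here are assumed for illustration.
   */
  struct steal_time_sketch {
          uint64_t steal;         /* ns the vCPU spent involuntarily waiting */
          uint32_t version;
          uint32_t flags;
          uint8_t  preempted;     /* host sets this while the vCPU is scheduled out */
          uint8_t  pad[31];
  };

  /* One record per vCPU, standing in for the kernel's per-cpu steal-time area
   * that the host updates when it schedules the vCPU in or out. */
  static struct steal_time_sketch steal_time[NR_CPUS];

  /* Guest-side check: since the host keeps the flag up to date, a plain read
   * of the other vCPU's record is all the guest needs. */
  static bool kvm_vcpu_is_preempted_sketch(int cpu)
  {
          return steal_time[cpu].preempted != 0;
  }
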
2016 Oct 28
0
[Xen-devel] [PATCH v6 00/11] implement vcpu preempted check
On Fri, Oct 28, 2016 at 04:11:16AM -0400, Pan Xinhui wrote: > change from v5: split x86/kvm patch into guest/host part. introduce kvm_write_guest_offset_cached. fix some typos. rebase patch onto 4.9.2. change from v4: split x86 kvm vcpu preempted check into two patches. add documentation patch. add x86 vcpu preempted check patch under xen. add
2016 Nov 15
2
[PATCH v7 06/11] x86, paravirt: Add interface to support kvm/xen vcpu preempted check
On Wed, Nov 02, 2016 at 05:08:33AM -0400, Pan Xinhui wrote:
> diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
> index 0f400c0..38c3bb7 100644
> --- a/arch/x86/include/asm/paravirt_types.h
> +++ b/arch/x86/include/asm/paravirt_types.h
> @@ -310,6 +310,8 @@ struct pv_lock_ops {
>
>  void (*wait)(u8 *ptr, u8 val);
>  void
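
The quoted diff is cut off just before the new member, but the shape of the change is an extra preemption-check callback in the pv_lock_ops table next to the existing wait/kick hooks, with vcpu_is_preempted() dispatching through it. The sketch below is a simplified stand-in: the real structure carries more members, and the merged kernel wraps the callback in paravirt patching machinery rather than a bare function pointer.

  #include <stdbool.h>
  #include <stdint.h>

  /* Simplified stand-in for the x86 pv_lock_ops table touched by the diff. */
  struct pv_lock_ops_sketch {
          void (*wait)(uint8_t *ptr, uint8_t val);
          void (*kick)(int cpu);
          bool (*vcpu_is_preempted)(int cpu);     /* hook added by this series */
  };

  /* Native/bare-metal default: a physical CPU is never preempted by a hypervisor. */
  static bool native_vcpu_is_preempted(int cpu)
  {
          (void)cpu;
          return false;
  }

  static struct pv_lock_ops_sketch pv_lock_ops = {
          .vcpu_is_preempted = native_vcpu_is_preempted,
  };

  /* Lock slow paths and owner-spinning code call through the table, so a KVM
   * or Xen guest can install its own check at boot without touching callers. */
  static bool vcpu_is_preempted(int cpu)
  {
          return pv_lock_ops.vcpu_is_preempted(cpu);
  }
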
2016 Dec 05
9
[PATCH v8 0/6] Implement qspinlock/pv-qspinlock on ppc
Hi All, this is the fairlock patchset. You can apply them and build successfully. Patches are based on linux-next. qspinlock can avoid the waiter-starvation issue. It has about the same speed in the single-thread case and can be much faster in high-contention situations, especially when the spinlock is embedded within the data structure to be protected. v7 -> v8: add one patch to drop a function call
2016 Dec 06
6
[PATCH v9 0/6] Implement qspinlock/pv-qspinlock on ppc
Hi All, this is the fairlock patchset. You can apply them and build successfully. Patches are based on linux-next. qspinlock can avoid the waiter-starvation issue. It has about the same speed in the single-thread case and can be much faster in high-contention situations, especially when the spinlock is embedded within the data structure to be protected. v8 -> v9: move qspinlock config entry to