search for: deschedules

Displaying 20 results from an estimated 82 matches for "deschedules".

2008 Dec 17
4
[Patch 0 of 2]: PV-domain SMP performance
Hi, I've played a little bit with the xen scheduler to enhance the performance of paravirtualized SMP domains including Dom0. Under heavy system load a vcpu might be descheduled in a critical section. This in turn leads to even higher system load if other vcpus of the same domain are waiting for the descheduled vcpu to leave the critical section. I've created a patch for xen
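
A minimal sketch of the kind of mitigation discussed in this thread, assuming a yield-style hypercall such as Xen's SCHEDOP_yield; spin_lock_pv() and hypervisor_yield() are illustrative names, not code from the posted patch:

/*
 * Sketch only: a guest spin loop that yields the physical CPU back to the
 * hypervisor once it suspects the lock holder was descheduled.
 * hypervisor_yield() stands in for a yield hypercall such as Xen's
 * SCHEDOP_yield; nothing here is taken from the posted patch.
 */
#include <stdatomic.h>

#define SPIN_THRESHOLD 1024   /* spins before assuming the holder is preempted */

extern void hypervisor_yield(void);   /* placeholder for the yield hypercall */

static void spin_lock_pv(atomic_int *lock)
{
    for (;;) {
        for (int i = 0; i < SPIN_THRESHOLD; i++) {
            int expected = 0;
            if (atomic_load_explicit(lock, memory_order_relaxed) == 0 &&
                atomic_compare_exchange_weak_explicit(lock, &expected, 1,
                        memory_order_acquire, memory_order_relaxed))
                return;                       /* got the lock */
        }
        /* Holder is probably descheduled: stop burning the physical CPU. */
        hypervisor_yield();
    }
}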
2014 Feb 27
3
[PATCH RFC v5 7/8] pvqspinlock, x86: Add qspinlock para-virtualization support
On 27/02/14 13:11, Paolo Bonzini wrote: > Il 27/02/2014 13:11, David Vrabel ha scritto: >>> > This patch adds para-virtualization support to the queue spinlock code >>> > by enabling the queue head to kick the lock holder CPU, if known, >>> > in when the lock isn't released for a certain amount of time. It >>> > also enables the mutual
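
To make the described mechanism concrete, here is a rough sketch, not the actual patch: the VCPU at the head of the queue spins on the lock word and, if the lock is not released within a bounded number of iterations, kicks the (possibly descheduled) holder VCPU. hypervisor_kick_vcpu() and the pv_qspinlock layout are placeholders.

/*
 * Sketch: queue head kicks the lock holder after spinning too long.
 * hypervisor_kick_vcpu() is a placeholder for the real kick hypercall.
 */
#include <stdatomic.h>

#define HEAD_SPIN_LIMIT (1 << 14)

extern void hypervisor_kick_vcpu(int vcpu);   /* placeholder */

struct pv_qspinlock {
    atomic_int locked;        /* 0 = free, 1 = held */
    atomic_int holder_vcpu;   /* VCPU id of the holder, -1 if unknown */
};

static void queue_head_wait(struct pv_qspinlock *l)
{
    for (;;) {
        for (int i = 0; i < HEAD_SPIN_LIMIT; i++) {
            if (atomic_load_explicit(&l->locked, memory_order_acquire) == 0)
                return;       /* lock released, head can take it */
        }
        int holder = atomic_load_explicit(&l->holder_vcpu, memory_order_relaxed);
        if (holder >= 0)
            hypervisor_kick_vcpu(holder);   /* holder may have been preempted */
    }
}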
2014 Feb 27
3
[PATCH RFC v5 7/8] pvqspinlock, x86: Add qspinlock para-virtualization support
On 02/27/2014 08:15 PM, Paolo Bonzini wrote: [...] >> But neither of the VCPUs being kicked here are halted -- they're either >> running or runnable (descheduled by the hypervisor). > > /me actually looks at Waiman's code... > > Right, this is really different from pvticketlocks, where the *unlock* > primitive wakes up a sleeping VCPU. It is more similar to PLE
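
A hedged illustration of the pvticketlock contrast mentioned here: waiters halt their VCPU after a spin budget, and the unlock path kicks whichever VCPU owns the next ticket. vcpu_halt(), vcpu_kick() and vcpu_waiting_for() stand in for the paravirt wait/kick hooks and are not APIs quoted from the thread.

/*
 * Sketch of a pvticketlock-style lock/unlock pair, for contrast only.
 */
#include <stdatomic.h>
#include <stdint.h>

extern void vcpu_halt(void);                    /* placeholder: block this VCPU */
extern void vcpu_kick(int vcpu);                /* placeholder: wake a blocked VCPU */
extern int  vcpu_waiting_for(uint16_t ticket);  /* placeholder lookup, -1 if none */

struct pvticketlock {
    atomic_ushort head;   /* ticket currently being served */
    atomic_ushort tail;   /* next ticket to hand out */
};

#define SPIN_BEFORE_HALT 4096

static void pvticket_lock(struct pvticketlock *l)
{
    uint16_t me = atomic_fetch_add_explicit(&l->tail, 1, memory_order_relaxed);

    while (atomic_load_explicit(&l->head, memory_order_acquire) != me) {
        for (int i = 0; i < SPIN_BEFORE_HALT; i++)
            if (atomic_load_explicit(&l->head, memory_order_acquire) == me)
                return;
        vcpu_halt();      /* sleep until the unlocker kicks us */
    }
}

static void pvticket_unlock(struct pvticketlock *l)
{
    uint16_t next = atomic_load_explicit(&l->head, memory_order_relaxed) + 1;
    atomic_store_explicit(&l->head, next, memory_order_release);

    int waiter = vcpu_waiting_for(next);
    if (waiter >= 0)
        vcpu_kick(waiter);            /* wake the VCPU holding the next ticket */
}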
2007 Apr 24
2
SMP lockup in virtualized environment
In a previous mail, Jeremy Fitzhardinge wrote: > The softlockup watchdog is currently a nuisance in a virtual machine, > since the whole system could have the CPU stolen from it for a long > period of time. While it would be unlikely for a guest domain to be > denied timer interrupts for over 10s, it could happen and any > softlockup message would be completely spurious. I wonder
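
A small sketch of the idea behind this concern, assuming a per-CPU steal-time source: only time the guest actually ran counts toward the soft lockup threshold. now_ns() and steal_clock_ns() are invented placeholders, not the kernel's watchdog code.

/*
 * Sketch: discount hypervisor steal time before declaring a soft lockup,
 * so a VM that was simply descheduled does not trigger a spurious report.
 */
#include <stdint.h>
#include <stdio.h>

#define SOFTLOCKUP_THRESH_NS (10ULL * 1000 * 1000 * 1000)   /* 10 seconds */

extern uint64_t now_ns(void);          /* placeholder: monotonic clock */
extern uint64_t steal_clock_ns(void);  /* placeholder: time stolen from this CPU */

struct watchdog_state {
    uint64_t last_touch_ns;    /* last time the watched CPU made progress */
    uint64_t last_steal_ns;    /* steal clock reading at that moment */
};

static void watchdog_check(struct watchdog_state *w)
{
    uint64_t elapsed = now_ns() - w->last_touch_ns;
    uint64_t stolen  = steal_clock_ns() - w->last_steal_ns;

    /* Time the guest actually ran; descheduled periods do not count. */
    uint64_t ran = elapsed > stolen ? elapsed - stolen : 0;

    if (ran > SOFTLOCKUP_THRESH_NS)
        printf("BUG: soft lockup - CPU stuck for %llu s\n",
               (unsigned long long)(ran / 1000000000ULL));
}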
2008 Dec 17
36
[Patch 2 of 2]: PV-domain SMP performance Linux-part
-- Juergen Gross Principal Developer IP SW OS6 Telephone: +49 (0) 89 636 47950 Fujitsu Siemens Computers e-mail: juergen.gross@fujitsu-siemens.com Otto-Hahn-Ring 6 Internet: www.fujitsu-siemens.com D-81739 Muenchen Company details: www.fujitsu-siemens.com/imprint.html
2014 May 12
3
[PATCH v10 03/19] qspinlock: Add pending bit
2014-05-07 11:01-0400, Waiman Long: > From: Peter Zijlstra <peterz at infradead.org> > > Because the qspinlock needs to touch a second cacheline; add a pending > bit and allow a single in-word spinner before we punt to the second > cacheline. I think there is an unwanted scenario on virtual machines: 1) VCPU sets the pending bit and starts spinning. 2) Pending VCPU gets
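
A simplified sketch of the pending-bit fast path under review, not the kernel's qspinlock implementation; the lock word layout and the helper name are assumptions for illustration. The scenario raised in the thread is the pending VCPU being descheduled while it spins.

/*
 * Sketch: one waiter may announce itself via a pending bit and spin on the
 * same cacheline; later arrivals must fall back to the external queue.
 */
#include <stdatomic.h>
#include <stdbool.h>

#define LOCKED  0x01u
#define PENDING 0x100u

static bool pending_spinner_acquire(atomic_uint *lock)
{
    unsigned int val = atomic_load_explicit(lock, memory_order_relaxed);

    /* Become the single in-word spinner only if nobody is pending or queued. */
    if (val & ~LOCKED)
        return false;                    /* caller must fall back to the queue */

    if (!atomic_compare_exchange_strong_explicit(lock, &val, val | PENDING,
                memory_order_acquire, memory_order_relaxed))
        return false;

    /* Spin on the same cacheline until the holder releases the locked byte.
     * A VCPU descheduled right here is the problematic case in the thread. */
    while (atomic_load_explicit(lock, memory_order_acquire) & LOCKED)
        ;

    /* Take the lock: set LOCKED, clear our PENDING bit. */
    atomic_store_explicit(lock, LOCKED, memory_order_relaxed);
    return true;
}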
2017 Apr 13
3
[PATCH v2 00/11] x86: xen cpuid() cleanup
Reduce special casing of xen_cpuid() by using cpu capabilities instead of faked cpuid nodes. This cleanup enables us to remove the hypervisor specific set_cpu_features callback as the same effect can be reached via setup_[clear|force]_cpu_cap(). Removing the rest of the faked nodes from xen_cpuid() requires some more work as the remaining cases (mwait leafs and extended topology info) have to be handled
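
A rough model of the cleanup being described, with invented names; the kernel's real equivalents are setup_force_cpu_cap()/setup_clear_cpu_cap() and capability tests, applied once at platform setup instead of faking cpuid leaves per query.

/*
 * Sketch: force or clear capability bits once during platform init and let
 * everything else test the capability bitmap. Names are illustrative only.
 */
#include <stdbool.h>
#include <stdint.h>

enum cap { CAP_MTRR, CAP_ACPI, CAP_HYPERVISOR, CAP_MAX };

static uint64_t detected_caps;  /* bits found by normal CPUID probing        */
static uint64_t forced_caps;    /* bits the platform code always reports set */
static uint64_t cleared_caps;   /* bits the platform code always masks off   */

static void platform_force_cap(enum cap c) { forced_caps  |= 1ULL << c; }
static void platform_clear_cap(enum cap c) { cleared_caps |= 1ULL << c; }

static bool cpu_has_cap(enum cap c)
{
    return (((detected_caps | forced_caps) & ~cleared_caps) >> c) & 1;
}

/* Hypervisor-specific setup: run once, instead of faking cpuid leaves. */
static void xen_platform_setup(bool pv_guest)
{
    platform_force_cap(CAP_HYPERVISOR);
    if (pv_guest) {
        platform_clear_cap(CAP_MTRR);   /* illustrative: features a PV guest */
        platform_clear_cap(CAP_ACPI);   /* must not use directly             */
    }
}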
2014 Mar 13
3
[PATCH v6 05/11] pvqspinlock, x86: Allow unfair spinlock in a PV guest
...ock is > scheduled out. Unfair lock in a native environment is generally not a > good idea as there is a possibility of lock starvation for a heavily > contended lock. I do not think this is a good idea -- the problems with unfair locks are worse in a virtualized guest. If a waiting VCPU deschedules and has to be kicked to grab a lock then it is very likely to lose a race with another running VCPU trying to take a lock (since it takes time for the VCPU to be rescheduled). > With the unfair locking activated on bare metal 4-socket Westmere-EX > box, the execution times (in ms) of a spinl...
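
To make the objection concrete, here is a sketch of the unfair behaviour in question, not code from the patch series: any running VCPU may steal the lock with a cmpxchg before queueing, so a previously descheduled waiter that has just been kicked usually loses the race.

/*
 * Sketch: an unfair lock where running VCPUs steal ahead of queued waiters.
 */
#include <stdatomic.h>
#include <stdbool.h>

static bool try_steal(atomic_int *lock)
{
    int expected = 0;
    /* A running VCPU grabs the lock directly, jumping ahead of the queue. */
    return atomic_compare_exchange_strong_explicit(lock, &expected, 1,
                memory_order_acquire, memory_order_relaxed);
}

static void unfair_lock(atomic_int *lock)
{
    if (try_steal(lock))
        return;                 /* stole it without queueing */

    for (;;) {
        /*
         * Kicked waiters race here against fresh try_steal() callers; a VCPU
         * that was descheduled pays the rescheduling latency and loses most
         * of these races, which is the starvation concern in the thread.
         */
        if (try_steal(lock))
            return;
    }
}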
2017 Apr 13
0
[PATCH v2 10/11] vmware: set cpu capabilities during platform initialization
There is no need to set the same capabilities for each cpu individually. This can be done for all cpus in platform initialization. Cc: Alok Kataria <akataria at vmware.com> Cc: Thomas Gleixner <tglx at linutronix.de> Cc: Ingo Molnar <mingo at redhat.com> Cc: "H. Peter Anvin" <hpa at zytor.com> Cc: x86 at kernel.org Cc: virtualization at lists.linux-foundation.org
2014 Mar 03
4
[PATCH RFC v5 4/8] pvqspinlock, x86: Allow unfair spinlock in a real PV environment
Il 28/02/2014 18:06, Waiman Long ha scritto: > On 02/26/2014 12:07 PM, Konrad Rzeszutek Wilk wrote: >> On Wed, Feb 26, 2014 at 10:14:24AM -0500, Waiman Long wrote: >>> Locking is always an issue in a virtualized environment as the virtual >>> CPU that is waiting on a lock may get scheduled out and hence block >>> any progress in lock acquisition even when the
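
For contrast with the unfair approach, a sketch of the strict FIFO (MCS-style) handoff that motivates it, using invented names rather than the series' code: if the waiter being handed the lock is descheduled, everything queued behind it stalls even though the lock is free.

/*
 * Sketch: MCS-style queued lock; the handoff goes to the next node in line
 * regardless of whether its VCPU is currently running.
 */
#include <stdatomic.h>
#include <stddef.h>

struct qnode {
    _Atomic(struct qnode *) next;
    atomic_int locked;            /* 1 = still waiting, 0 = lock handed over */
};

static _Atomic(struct qnode *) tail;

static void fifo_lock(struct qnode *me)
{
    atomic_store_explicit(&me->next, NULL, memory_order_relaxed);
    atomic_store_explicit(&me->locked, 1, memory_order_relaxed);

    struct qnode *prev = atomic_exchange_explicit(&tail, me, memory_order_acq_rel);
    if (prev) {
        atomic_store_explicit(&prev->next, me, memory_order_release);
        /* If this VCPU is descheduled here, all later waiters stall too. */
        while (atomic_load_explicit(&me->locked, memory_order_acquire))
            ;
    }
}

static void fifo_unlock(struct qnode *me)
{
    struct qnode *next = atomic_load_explicit(&me->next, memory_order_acquire);
    if (!next) {
        struct qnode *expected = me;
        if (atomic_compare_exchange_strong_explicit(&tail, &expected, NULL,
                    memory_order_release, memory_order_relaxed))
            return;               /* no waiters: queue is now empty */
        while (!(next = atomic_load_explicit(&me->next, memory_order_acquire)))
            ;                     /* successor is still linking itself in */
    }
    /* Hand the lock to the next waiter, whether its VCPU is running or not. */
    atomic_store_explicit(&next->locked, 0, memory_order_release);
}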
2017 Apr 18
1
[PATCH v3 00/11] x86: xen cpuid() cleanup
Reduce special casing of xen_cpuid() by using cpu capabilities instead of faked cpuid nodes. This cleanup enables us to remove the hypervisor specific set_cpu_features callback as the same effect can be reached via setup_[clear|force]_cpu_cap(). Removing the rest of the faked nodes from xen_cpuid() requires some more work as the remaining cases (mwait leafs and extended topology info) have to be handled
2013 Dec 05
7
POD: soft lockups in dom0 kernel
Hi, when creating a bigger (> 50 GB) HVM guest with maxmem > memory we get softlockups from time to time. kernel: [ 802.084335] BUG: soft lockup - CPU#1 stuck for 22s! [xend:31351] I tracked this down to the call of xc_domain_set_pod_target() and further p2m_pod_set_mem_target(). Unfortunately I can do this check only with xen-4.2.2 as I don't have a machine with enough memory for