Displaying 20 results from an estimated 82 matches for "descheduling".
2008 Dec 17
4
[Patch 0 of 2]: PV-domain SMP performance
...873.76
Second configuration: 4 vcpus, all pinned to cpu 0:
---------------------------------------------------
1 run: real: 274.06 user: 0.74 sys: 58.88
2 runs: real: 999.77 user: 1.27 sys: 98.61
4 runs: real: 1251.00 user: 16.58 sys: 291.66
This result was achieved by avoiding descheduling of a vcpu while it has
interrupts disabled. Even better results might be possible with some fine
tuning (e.g. instrumenting bh_enable/bh_disable).
I think system time has dropped remarkably!
Patch 1 is hypervisor support
Patch 2 is my Linux support in irq_enable and irq_disable
Juergen
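To make the mechanism concrete, here is a minimal C sketch of the idea
(every name in it, such as vcpu_shared, this_vcpu and hypervisor_yield(),
is a hypothetical stand-in, not the patch's actual interface): the guest
flags its vcpu as non-preemptible while interrupts are off, and pays back
any preemption the hypervisor deferred once they are re-enabled.

    /* Hypothetical per-vcpu region shared with the hypervisor. */
    struct vcpu_shared {
        volatile int no_desched;      /* guest asks not to be descheduled */
        volatile int desched_pending; /* hypervisor deferred a preemption */
    };

    extern struct vcpu_shared *this_vcpu; /* mapped at boot (assumed) */
    extern void native_irq_disable(void);
    extern void native_irq_enable(void);
    extern void hypervisor_yield(void);   /* e.g. a yield hypercall */

    static inline void pv_irq_disable(void)
    {
        this_vcpu->no_desched = 1;   /* publish the hint before irqs go off */
        native_irq_disable();
    }

    static inline void pv_irq_enable(void)
    {
        native_irq_enable();
        this_vcpu->no_desched = 0;
        if (this_vcpu->desched_pending)
            hypervisor_yield();      /* pay back the deferred preemption */
    }

The hypervisor side (Patch 1) would honour no_desched for a bounded time
and set desched_pending instead of preempting immediately.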
2014 Feb 27
3
[PATCH RFC v5 7/8] pvqspinlock, x86: Add qspinlock para-virtualization support
On 27/02/14 13:11, Paolo Bonzini wrote:
> On 27/02/2014 13:11, David Vrabel wrote:
>>> This patch adds para-virtualization support to the queue spinlock code
>>> by enabling the queue head to kick the lock holder CPU, if known, when
>>> the lock isn't released for a certain amount of time. It also enables
>>> the mutual
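A rough sketch of the kick-after-timeout scheme described above (struct
qspinlock is only forward-declared here, and lock_is_free(), kick_cpu()
and SPIN_THRESHOLD are illustrative placeholders, not the series' real
helpers):

    #define SPIN_THRESHOLD (1 << 15)

    struct qspinlock;

    extern int  lock_is_free(struct qspinlock *lock);
    extern void kick_cpu(int cpu);  /* hypercall: make that vcpu runnable */
    extern void cpu_relax(void);

    static void pv_wait_head(struct qspinlock *lock, int holder_cpu)
    {
        int loops = SPIN_THRESHOLD;

        while (!lock_is_free(lock)) {
            if (--loops == 0) {
                if (holder_cpu >= 0)
                    kick_cpu(holder_cpu); /* holder may be preempted */
                loops = SPIN_THRESHOLD;
            }
            cpu_relax();
        }
    }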
2014 Feb 27
3
[PATCH RFC v5 7/8] pvqspinlock, x86: Add qspinlock para-virtualization support
On 02/27/2014 08:15 PM, Paolo Bonzini wrote:
[...]
>> But neither of the VCPUs being kicked here are halted -- they're either
>> running or runnable (descheduled by the hypervisor).
>
> /me actually looks at Waiman's code...
>
> Right, this is really different from pvticketlocks, where the *unlock*
> primitive wakes up a sleeping VCPU. It is more similar to PLE
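For contrast, a sketch of the pvticketlock behaviour the reply refers to,
where the waiter really halts and the unlock path wakes it (all names here
are illustrative, not the kernel's):

    struct ticketlock {
        volatile unsigned int now_serving;
        unsigned int next_ticket;
    };

    extern int  spun_too_long(void);
    extern void halt_until_kicked(void);  /* vcpu blocks in the hypervisor */
    extern void kick_waiter(struct ticketlock *l, unsigned int ticket);

    static void ticket_wait(struct ticketlock *l, unsigned int my_ticket)
    {
        while (l->now_serving != my_ticket)
            if (spun_too_long())
                halt_until_kicked();  /* really sleep; unlock wakes us */
    }

    static void ticket_unlock(struct ticketlock *l)
    {
        unsigned int next = ++l->now_serving;
        kick_waiter(l, next);         /* wake the sleeper holding `next` */
    }

In the qspinlock case above, by contrast, the kicked vcpu was never halted,
which is why the scheme is closer to PLE than to this wake-on-unlock
pattern.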
2007 Apr 24
2
SMP lockup in virtualized environment
In a previous mail, Jeremy Fitzhardinge wrote:
> The softlockup watchdog is currently a nuisance in a virtual machine,
> since the whole system could have the CPU stolen from it for a long
> period of time. While it would be unlikely for a guest domain to be
> denied timer interrupts for over 10s, it could happen and any
> softlockup message would be completely spurious.
I wonder
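One possible mitigation, sketched with made-up helpers (running_time()
stands for a clock that excludes time stolen by the hypervisor): only
report a lockup when the vcpu itself really ran for the whole threshold.

    #define LOCKUP_THRESHOLD_NS (10ULL * 1000000000ULL)  /* 10 s */

    extern unsigned long long running_time(int cpu); /* excludes stolen time */
    extern unsigned long long last_touch(int cpu);   /* watchdog last petted */
    extern void report_softlockup(int cpu);

    static void softlockup_tick(int cpu)
    {
        /* Measuring in guest running time keeps periods where the
         * hypervisor stole the cpu from triggering a spurious report. */
        if (running_time(cpu) - last_touch(cpu) > LOCKUP_THRESHOLD_NS)
            report_softlockup(cpu);
    }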
2008 Dec 17
36
[Patch 2 of 2]: PV-domain SMP performance Linux-part
2014 May 12
3
[PATCH v10 03/19] qspinlock: Add pending bit
2014-05-07 11:01-0400, Waiman Long:
> From: Peter Zijlstra <peterz at infradead.org>
>
> Because the qspinlock needs to touch a second cacheline; add a pending
> bit and allow a single in-word spinner before we punt to the second
> cacheline.
I think there is an unwanted scenario on virtual machines:
1) VCPU sets the pending bit and starts spinning.
2) Pending VCPU gets
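A simplified sketch of the pending-bit fast path under discussion, with
the problem spot marked (constants and helpers are loosely modelled on the
qspinlock code, not copied from it):

    #define _Q_LOCKED_VAL  1U
    #define _Q_PENDING_VAL (1U << 8)

    struct qspinlock { volatile unsigned int val; };

    extern unsigned int fetch_or(volatile unsigned int *p, unsigned int bits);
    extern void clear_pending_set_locked(struct qspinlock *lock);
    extern void cpu_relax(void);

    static int pending_fastpath(struct qspinlock *lock)
    {
        /* Try to become the single in-word spinner. */
        if (fetch_or(&lock->val, _Q_PENDING_VAL) & _Q_PENDING_VAL)
            return 0;                    /* pending taken: go queue up */

        /* We own the pending bit and spin on the lock byte.  If this
         * vcpu is descheduled right here, later lockers pile up behind
         * a pending bit that is making no progress. */
        while (lock->val & _Q_LOCKED_VAL)
            cpu_relax();

        clear_pending_set_locked(lock);  /* pending -> locked handover */
        return 1;
    }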
2017 Apr 13
3
[PATCH v2 00/11] x86: xen cpuid() cleanup
Reduce special casing of xen_cpuid() by using cpu capabilities instead
of faked cpuid nodes.
This cleanup enables us to remove the hypervisor specific set_cpu_features
callback as the same effect can be reached via
setup_[clear|force]_cpu_cap().
Removing the rest of the faked nodes from xen_cpuid() requires some more
work as the remaining cases (mwait leafs and extended topology info) have
to be handled
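The mechanism is roughly the following (a hedged sketch; the function name
and the feature bits chosen are illustrative, not necessarily the ones the
series touches): adjust the global capability masks once during early
setup, instead of faking the answers in xen_cpuid() or in a per-cpu
callback.

    #include <linux/init.h>
    #include <asm/cpufeature.h>

    /* Called once from early platform setup (illustrative). */
    static void __init xen_adjust_cpu_caps(void)
    {
        /* Force a capability every cpu should report under Xen ... */
        setup_force_cpu_cap(X86_FEATURE_XENPV);

        /* ... and clear ones a PV guest must never use. */
        setup_clear_cpu_cap(X86_FEATURE_DCA);
        setup_clear_cpu_cap(X86_FEATURE_MTRR);
    }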
2014 Mar 13
3
[PATCH v6 05/11] pvqspinlock, x86: Allow unfair spinlock in a PV guest
On 12/03/14 18:54, Waiman Long wrote:
> Locking is always an issue in a virtualized environment as the virtual
> CPU that is waiting on a lock may get scheduled out and hence block
> any progress in lock acquisition even when the lock has been freed.
>
> One solution to this problem is to allow unfair locks in a
> para-virtualized environment. In this case, a new lock acquirer
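The unfair variant boils down to something like the following test-and-set
sketch (deliberately simplified; the actual proposal layers lock stealing
onto the queue code): any running vcpu may grab a free lock immediately,
so a preempted waiter no longer blocks everyone behind it.

    extern void cpu_relax(void);

    static void unfair_spin_lock(volatile int *lock)
    {
        /* Steal the lock whenever it is free, ignoring arrival order. */
        while (__sync_lock_test_and_set(lock, 1))
            while (*lock)
                cpu_relax();  /* spin on a read until it looks free */
    }

    static void unfair_spin_unlock(volatile int *lock)
    {
        __sync_lock_release(lock);
    }

The obvious price is possible starvation of the waiters that queued up
fairly.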
2017 Apr 13
0
[PATCH v2 10/11] vmware: set cpu capabilities during platform initialization
There is no need to set the same capabilities for each cpu
individually. This can be done for all cpus in platform initialization.
Cc: Alok Kataria <akataria at vmware.com>
Cc: Thomas Gleixner <tglx at linutronix.de>
Cc: Ingo Molnar <mingo at redhat.com>
Cc: "H. Peter Anvin" <hpa at zytor.com>
Cc: x86 at kernel.org
Cc: virtualization at lists.linux-foundation.org
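A hedged sketch of the pattern the patch describes (the feature bits are
picked for illustration): set the capabilities once at platform-setup
time, so no per-cpu callback is needed.

    #include <linux/init.h>
    #include <asm/cpufeature.h>

    static void __init vmware_setup_cpu_caps(void)
    {
        /* Done once here; previously repeated for every cpu. */
        setup_force_cpu_cap(X86_FEATURE_CONSTANT_TSC);
        setup_force_cpu_cap(X86_FEATURE_TSC_RELIABLE);
    }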
2014 Mar 03
4
[PATCH RFC v5 4/8] pvqspinlock, x86: Allow unfair spinlock in a real PV environment
On 28/02/2014 18:06, Waiman Long wrote:
> On 02/26/2014 12:07 PM, Konrad Rzeszutek Wilk wrote:
>> On Wed, Feb 26, 2014 at 10:14:24AM -0500, Waiman Long wrote:
>>> Locking is always an issue in a virtualized environment as the virtual
>>> CPU that is waiting on a lock may get scheduled out and hence block
>>> any progress in lock acquisition even when the
2017 Apr 18
1
[PATCH v3 00/11] x86: xen cpuid() cleanup
Reduce special casing of xen_cpuid() by using cpu capabilities instead
of faked cpuid nodes.
This cleanup enables us to remove the hypervisor specific set_cpu_features
callback as the same effect can be reached via
setup_[clear|force]_cpu_cap().
Removing the rest of the faked nodes from xen_cpuid() requires some more
work as the remaining cases (mwait leafs and extended topology info) have
to be handled
2013 Dec 05
7
POD: soft lockups in dom0 kernel
Hi,
when creating a big (> 50 GB) HVM guest with maxmem > memory we get
softlockups from time to time.
kernel: [ 802.084335] BUG: soft lockup - CPU#1 stuck for 22s! [xend:31351]
I tracked this down to the call of xc_domain_set_pod_target() and further
p2m_pod_set_mem_target().
Unfortunately I can check this only with xen-4.2.2 as I don't have a machine
with enough memory for
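The usual cure for this class of soft lockup, sketched here under Xen-ish
assumptions (struct domain is only forward-declared, and pod_pages() and
adjust_pod_target_chunk() are made-up helpers): split the long-running
target adjustment into bounded chunks and bail out at a preemption point
so the hypercall can be restarted.

    struct domain;

    extern int  hypercall_preempt_check(void); /* Xen: work pending here? */
    extern unsigned long pod_pages(struct domain *d);
    extern void adjust_pod_target_chunk(struct domain *d, unsigned long target);

    static long set_pod_target(struct domain *d, unsigned long target)
    {
        while (pod_pages(d) != target) {
            adjust_pod_target_chunk(d, target); /* bounded amount of work */

            /* Give the scheduler a chance instead of keeping the cpu
             * busy for the whole multi-GB adjustment in one go. */
            if (hypercall_preempt_check())
                return -1; /* e.g. Xen's -ERESTART: hypercall is retried */
        }
        return 0;
    }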