Displaying 20 results from an estimated 82 matches for "descheduled".
2008 Dec 17
4
[Patch 0 of 2]: PV-domain SMP performance
Hi,
I've played a little bit with the xen scheduler to enhance the performance of
paravirtualized SMP domains including Dom0.
Under heavy system load a vcpu might be descheduled in a critical section.
This in turn leads to even higher system load if other vcpus of the same
domain are waiting for the descheduled vcpu to leave the critical section.
I've created a patch for xen and for the linux kernel to show that cooperative
scheduling can help to avoid this probl...
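As a rough illustration of the cooperative-scheduling idea described above (all names here -- vcpu_hint, no_desched_hint, desched_pending, hypervisor_yield -- are hypothetical placeholders, not the interface added by the patch): the guest marks its critical sections and voluntarily yields afterwards if the hypervisor had to defer a descheduling request.

/* Hypothetical guest-side sketch, not the actual Xen/Linux patch. */
struct vcpu_hint {
	volatile int no_desched_hint;   /* guest: "please don't deschedule me now" */
	volatile int desched_pending;   /* hypervisor: "I wanted to, you owe a yield" */
};

static struct vcpu_hint this_vcpu_hint;

static void hypervisor_yield(void)
{
	/* placeholder for a yield hypercall */
}

static void critsec_enter(void)
{
	this_vcpu_hint.no_desched_hint = 1;
	/* ... take the spinlock and do the critical work ... */
}

static void critsec_exit(void)
{
	this_vcpu_hint.no_desched_hint = 0;
	if (this_vcpu_hint.desched_pending)
		hypervisor_yield();
}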
2014 Feb 27
3
[PATCH RFC v5 7/8] pvqspinlock, x86: Add qspinlock para-virtualization support
...I
>> doubt it's possible under KVM or any other hypervisor.
>
> KVM allows any VCPU to wake up a currently halted VCPU of its choice,
> see Documentation/virtual/kvm/hypercalls.txt.
But neither of the VCPUs being kicked here are halted -- they're either
running or runnable (descheduled by the hypervisor).
David
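For reference, the wake-up mechanism mentioned above is the KVM_HC_KICK_CPU hypercall from Documentation/virtual/kvm/hypercalls.txt; a rough guest-side sketch (the helper name is illustrative, not the exact in-tree code) looks like this:

#include <linux/kvm_para.h>
#include <asm/smp.h>

/* Illustrative helper: kick the vCPU running "cpu". KVM only wakes the
 * target if it is halted in the hypervisor; a running or merely
 * descheduled vCPU (the case discussed above) gets no benefit. */
static void example_kick_cpu(int cpu)
{
	unsigned long flags = 0;
	u32 apicid = per_cpu(x86_cpu_to_apicid, cpu);

	kvm_hypercall2(KVM_HC_KICK_CPU, flags, apicid);
}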
2014 Feb 27
3
[PATCH RFC v5 7/8] pvqspinlock, x86: Add qspinlock para-virtualization support
On 02/27/2014 08:15 PM, Paolo Bonzini wrote:
[...]
>> But neither of the VCPUs being kicked here are halted -- they're either
>> running or runnable (descheduled by the hypervisor).
>
> /me actually looks at Waiman's code...
>
> Right, this is really different from pvticketlocks, where the *unlock*
> primitive wakes up a sleeping VCPU. It is more similar to PLE
> (pause-loop exiting).
Adding to the discussion, I see there are two pos...
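For contrast, a compressed sketch of the pvticketlock behaviour Paolo refers to, where the unlock side explicitly wakes the sleeping waiter; the types and the pv_wait()/pv_kick() helpers are illustrative stubs, not the in-tree x86 implementation.

struct ticketlock { unsigned int head, tail; };

static void pv_wait(struct ticketlock *lock, unsigned int ticket)
{
	(void)lock; (void)ticket;       /* stub: halt in the hypervisor until kicked */
}

static void pv_kick(struct ticketlock *lock, unsigned int ticket)
{
	(void)lock; (void)ticket;       /* stub: wake the vCPU waiting on this ticket */
}

#define SPIN_THRESHOLD (1 << 15)

static void ticket_lock_wait(struct ticketlock *lock, unsigned int my_ticket)
{
	unsigned int spins = SPIN_THRESHOLD;

	while (lock->head != my_ticket) {
		if (--spins == 0) {     /* spun long enough: sleep in the hypervisor */
			pv_wait(lock, my_ticket);
			spins = SPIN_THRESHOLD;
		}
	}
}

static void ticket_unlock(struct ticketlock *lock)
{
	lock->head++;
	pv_kick(lock, lock->head);      /* the unlock primitive wakes the sleeper */
}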
2007 Apr 24
2
SMP lockup in virtualized environment
...h a
long time? The only reason I see is that the guest domain is not
scheduled at all (host domain or another higher priority guest running).
Now in SMP host and guest, what happens if a guest CPU is not scheduled
for a while?
An example: in kernel/pid.c:alloc_pid(), if one of the guest CPUs is
descheduled when holding the pidmap_lock, what happens to the other
guest CPUs who want to alloc/free pids? Are they blocked too?
--
Cyprien Laplace
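To make the scenario concrete, here is a stand-in for the code path in question (not the actual kernel/pid.c source): while the descheduled vCPU holds the lock, every other vCPU entering this path spins in guest context, burning its timeslice without making progress, until the holder is scheduled again and releases the lock.

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(example_pidmap_lock);    /* stand-in for pidmap_lock */

static void example_alloc_pid_path(void)
{
	spin_lock(&example_pidmap_lock);
	/* ... pid bitmap manipulation; if this vCPU is descheduled here,
	 * the other vCPUs calling this function keep spinning ... */
	spin_unlock(&example_pidmap_lock);
}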
2008 Dec 17
36
[Patch 2 of 2]: PV-domain SMP performance Linux-part
--
Juergen Gross Principal Developer
IP SW OS6 Telephone: +49 (0) 89 636 47950
Fujitsu Siemens Computers e-mail: juergen.gross@fujitsu-siemens.com
Otto-Hahn-Ring 6 Internet: www.fujitsu-siemens.com
D-81739 Muenchen Company details: www.fujitsu-siemens.com/imprint.html
2014 May 12
3
[PATCH v10 03/19] qspinlock: Add pending bit
...>
> Because the qspinlock needs to touch a second cacheline; add a pending
> bit and allow a single in-word spinner before we punt to the second
> cacheline.
I think there is an unwanted scenario on virtual machines:
1) VCPU sets the pending bit and starts spinning.
2) Pending VCPU gets descheduled.
- we have PLE and lock holder isn't running [1]
- the hypervisor randomly preempts us
3) Lock holder unlocks while pending VCPU is waiting in queue.
4) Subsequent lockers will see a free lock with the pending bit set and will
loop in trylock's 'for (;;)'
- the worst-case i...
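A minimal stand-in for the loop step 4 worries about (field layout and mask names are approximations, not the code from the patch): once the lock word is "unlocked but pending" and the pending vCPU is asleep, every new locker keeps iterating here instead of queueing.

#include <linux/atomic.h>

#define EX_LOCKED_MASK   0x00ff
#define EX_PENDING_MASK  0xff00

static int example_trylock_pending(atomic_t *lockval)
{
	for (;;) {
		int val = atomic_read(lockval);

		/* locked or pending: retry -- this is where callers can
		 * live-loop while the pending vCPU is descheduled */
		if (val & (EX_LOCKED_MASK | EX_PENDING_MASK))
			continue;

		/* lock looks free and no pending bit: try to take it */
		if (atomic_cmpxchg(lockval, val, val | 1) == val)
			return 1;
	}
}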
2017 Apr 13
3
[PATCH v2 00/11] x86: xen cpuid() cleanup
Reduce special casing of xen_cpuid() by using cpu capabilities instead
of faked cpuid nodes.
This cleanup enables us to remove the hypervisor-specific set_cpu_features
callback as the same effect can be reached via
setup_[clear|force]_cpu_cap().
Removing the rest of the faked nodes from xen_cpuid() requires some more work
as the remaining cases (mwait leafs and extended topology info) have
to be handled
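A short sketch of the direction described here, with illustrative feature bits (the actual set touched by the series may differ): instead of faking cpuid leaf contents in xen_cpuid(), the platform init code flips the capability bits directly.

#include <asm/cpufeature.h>

static void __init example_xen_init_capabilities(void)
{
	/* feature the PV guest must not use */
	setup_clear_cpu_cap(X86_FEATURE_DCA);

	/* feature known to be present regardless of what cpuid reports */
	setup_force_cpu_cap(X86_FEATURE_HYPERVISOR);
}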
2014 Mar 13
3
[PATCH v6 05/11] pvqspinlock, x86: Allow unfair spinlock in a PV guest
On 12/03/14 18:54, Waiman Long wrote:
> Locking is always an issue in a virtualized environment as the virtual
> CPU that is waiting on a lock may get scheduled out and hence block
> any progress in lock acquisition even when the lock has been freed.
>
> One solution to this problem is to allow unfair lock in a
> para-virtualized environment. In this case, a new lock acquirer
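A userspace C11 sketch of the unfair-lock idea in the quoted description (not Waiman's actual patch): an incoming locker may steal the lock with a plain compare-and-swap whenever the lock word is free, so a queued waiter whose vCPU has been scheduled out cannot indefinitely hold up everyone behind it.

#include <stdatomic.h>
#include <stdbool.h>

static bool unfair_trylock(atomic_int *lock)
{
	int expected = 0;
	/* steal attempt: succeeds whenever the lock word is free,
	 * regardless of how long others have been queued */
	return atomic_compare_exchange_strong(lock, &expected, 1);
}

static void unfair_lock(atomic_int *lock)
{
	while (!unfair_trylock(lock))
		;       /* spin; a real implementation would fall back to queueing */
}

static void unfair_unlock(atomic_int *lock)
{
	atomic_store(lock, 0);
}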
2017 Apr 13
0
[PATCH v2 10/11] vmware: set cpu capabilities during platform initialization
...check at
+ * bootup can fail due to a marginal offset between vcpus' TSCs (though the
+ * TSCs do not drift from each other). Also, the ACPI PM timer clocksource
+ * is not suitable as a watchdog when running on a hypervisor because the
+ * kernel may miss a wrap of the counter if the vcpu is descheduled for a
+ * long time. To skip these checks at runtime we set these capability bits,
+ * so that the kernel could just trust the hypervisor with providing a
+ * reliable virtual TSC that is suitable for timekeeping.
+ */
+static void __init vmware_set_capabilities(void)
+{
+ setup_force_cpu_cap(X86_F...
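The excerpt is cut off by the search display; based on the comment above, the forced bits are presumably the TSC-related capabilities. A plausible, unverified reconstruction of the complete function (assumed, not copied from the patch):

#include <asm/cpufeature.h>

static void __init vmware_set_capabilities(void)
{
	setup_force_cpu_cap(X86_FEATURE_CONSTANT_TSC);
	setup_force_cpu_cap(X86_FEATURE_TSC_RELIABLE);
}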
2014 Mar 03
4
[PATCH RFC v5 4/8] pvqspinlock, x86: Allow unfair spinlock in a real PV environment
On 28/02/2014 18:06, Waiman Long wrote:
> On 02/26/2014 12:07 PM, Konrad Rzeszutek Wilk wrote:
>> On Wed, Feb 26, 2014 at 10:14:24AM -0500, Waiman Long wrote:
>>> Locking is always an issue in a virtualized environment as the virtual
>>> CPU that is waiting on a lock may get scheduled out and hence block
>>> any progress in lock acquisition even when the
2017 Apr 18
1
[PATCH v3 00/11] x86: xen cpuid() cleanup
Reduce special casing of xen_cpuid() by using cpu capabilities instead
of faked cpuid nodes.
This cleanup enables us to remove the hypervisor-specific set_cpu_features
callback as the same effect can be reached via
setup_[clear|force]_cpu_cap().
Removing the rest of the faked nodes from xen_cpuid() requires some more work
as the remaining cases (mwait leafs and extended topology info) have
to be handled
2013 Dec 05
7
POD: soft lockups in dom0 kernel
Hi,
when creating a bigger (> 50 GB) HVM guest with maxmem > memory we get
softlockups from time to time.
kernel: [ 802.084335] BUG: soft lockup - CPU#1 stuck for 22s! [xend:31351]
I tracked this down to the call of xc_domain_set_pod_target() and further to
p2m_pod_set_mem_target().
Unfortunately I can do this check only with xen-4.2.2 as I don't have a machine
with enough memory for