Displaying 11 results from an estimated 11 matches for "arch_spin_lock_wait_flag".
2016 Oct 19  2  [PATCH v2 1/1] s390/spinlock: Provide vcpu_is_preempted
...ditionally. For LPAR rely on the
> * sense running status.
> */
> - if (!MACHINE_IS_LPAR || cpu_is_preempted(~owner)) {
> + if (!MACHINE_IS_LPAR || arch_vcpu_is_preempted(~owner)) {
> smp_yield_cpu(~owner);
> first_diag = 0;
> }
> @@ -108,7 +99,7 @@ void arch_spin_lock_wait_flags(arch_spinlock_t *lp, unsigned long flags)
> continue;
> }
> /* Check if the lock owner is running. */
> - if (first_diag && cpu_is_preempted(~owner)) {
> + if (first_diag && arch_vcpu_is_preempted(~owner)) {
> smp_yield_cpu(~owner);
> first_...
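The hunks above rename s390's file-local cpu_is_preempted() to arch_vcpu_is_preempted() so it can back the generic vcpu_is_preempted() interface. The pattern they implement: while spinning on a held lock, give up the CPU only when the holder's vCPU is not actually running, since yielding to a running holder just wastes a reschedule. Below is a minimal userspace analogue of that pattern, assuming a CAS-based lock; holder_preempted() is a hypothetical stand-in for the patch's hypervisor query, and sched_yield() stands in for smp_yield_cpu():

#include <stdatomic.h>
#include <stdbool.h>
#include <sched.h>

typedef struct {
	atomic_int owner;	/* 0 = unlocked, otherwise holder id + 1 */
} demo_spinlock_t;

/* Hypothetical stand-in for arch_vcpu_is_preempted(): a real
 * implementation asks the hypervisor whether the holder's vCPU
 * is currently scheduled. */
static bool holder_preempted(int id)
{
	(void)id;
	return true;		/* pessimistic placeholder */
}

static void demo_spin_lock(demo_spinlock_t *lp, int self)
{
	for (;;) {
		int expected = 0;

		if (atomic_compare_exchange_weak(&lp->owner, &expected,
						 self + 1))
			return;	/* acquired */

		/* Held by someone else: mirror the patch's check and
		 * yield only if the holder looks preempted; otherwise
		 * keep spinning, as the holder should release soon. */
		if (holder_preempted(expected - 1))
			sched_yield();
	}
}

static void demo_spin_unlock(demo_spinlock_t *lp)
{
	atomic_store(&lp->owner, 0);
}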
2016 Oct 19  1  [PATCH v3] s390/spinlock: Provide vcpu_is_preempted
...pinlock_t *lp)
* yield the CPU unconditionally. For LPAR rely on the
* sense running status.
*/
- if (!MACHINE_IS_LPAR || cpu_is_preempted(~owner)) {
+ if (!MACHINE_IS_LPAR || arch_vcpu_is_preempted(~owner)) {
smp_yield_cpu(~owner);
first_diag = 0;
}
@@ -108,7 +99,7 @@ void arch_spin_lock_wait_flags(arch_spinlock_t *lp, unsigned long flags)
continue;
}
/* Check if the lock owner is running. */
- if (first_diag && cpu_is_preempted(~owner)) {
+ if (first_diag && arch_vcpu_is_preempted(~owner)) {
smp_yield_cpu(~owner);
first_diag = 0;
continue;
@@ -127,7 +...
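One detail in these hunks: smp_yield_cpu() takes ~owner, which implies the s390 lock word stores the bitwise complement of the holder's CPU number, so an unlocked word (0) can never collide with a valid CPU id and ~owner recovers the holder. A tiny self-contained illustration of that encoding (the helper names are hypothetical; only the ~cpu convention is taken from the diff above):

#include <stdio.h>

/* Lock word convention implied by smp_yield_cpu(~owner): store ~cpu,
 * so 0 stays a distinguishable "unlocked" value even for CPU 0. */
static int lockval_of(int cpu)    { return ~cpu; }
static int holder_of(int lockval) { return ~lockval; }

int main(void)
{
	int lockval = lockval_of(5);

	printf("lock word %d -> holder CPU %d\n", lockval, holder_of(lockval));
	return 0;
}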
2012 Oct 17  28  Xen PVM: Strange lockups when running PostgreSQL load
I am currently looking at a bug report[1] about lockups that occur when
a Xen PVM guest with multiple VCPUs runs a high-IO database
load (a test script is available in the bug report).
In experimenting, it seems that this happens (or becomes more
likely) when the number of VCPUs is 8 or higher (though I have
only tried 2 and 4 below that, not 6); having autogroup enabled
also seems to make it more likely
2016 Oct 20  15  [PATCH v5 0/9] implement vcpu preempted check
change from v4:
split x86 kvm vcpu preempted check into two patches.
add documentation patch.
add x86 vcpu preempted check patch under xen
add s390 vcpu preempted check patch
change from v3:
add x86 vcpu preempted check patch
change from v2:
no code change, fix typos, update some comments
change from v1:
a simpler definition of default vcpu_is_preempted (sketched below)
skip machine type check on ppc,
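The "simpler definition of default vcpu_is_preempted" refers to the architecture-independent fallback: without hypervisor knowledge the kernel cannot tell, so the default conservatively reports "not preempted", and architectures with a real query (such as s390's arch_vcpu_is_preempted() above) override it. A sketch of that shape, assuming the usual overridable-macro pattern; the exact upstream form may differ:

/*
 * Default fallback: report "running" unless the architecture
 * provides a real hypervisor-backed check.
 */
#ifndef vcpu_is_preempted
# define vcpu_is_preempted(cpu)	false
#endif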
2016 Nov 02  13  [PATCH v7 00/11] implement vcpu preempted check
change from v6:
fix typos and remove unnecessary comments.
change from v5:
split x86/kvm patch into guest/host part.
introduce kvm_write_guest_offset_cached (host-side sketch below).
fix some typos.
rebase patch onto 4.9.2
change from v4:
split x86 kvm vcpu preempted check into two patches.
add documentation patch.
add x86 vcpu preempted check patch under xen
add s390 vcpu preempted check patch
change from v3:
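kvm_write_guest_offset_cached() lets the host rewrite a single field of a cached guest structure instead of the whole thing; in this series the host uses it to publish a preempted flag into the vCPU's steal-time area when scheduling the vCPU out. A sketch of that host-side step (struct member and field names are paraphrased from memory of the series and should be treated as assumptions):

/* Host side (x86/KVM), sketched: mark this vCPU preempted in its
 * steal-time area so the guest's vcpu_is_preempted() can see it.
 * Only the one field is written, via the offset-based cached write. */
static void kvm_steal_time_set_preempted(struct kvm_vcpu *vcpu)
{
	if (!(vcpu->arch.st.msr_val & KVM_MSR_ENABLED))
		return;		/* guest never enabled steal time */

	vcpu->arch.st.steal.preempted = 1;

	kvm_write_guest_offset_cached(vcpu->kvm, &vcpu->arch.st.stime,
			&vcpu->arch.st.steal.preempted,
			offsetof(struct kvm_steal_time, preempted),
			sizeof(vcpu->arch.st.steal.preempted));
}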
2016 Oct 28  16  [PATCH v6 00/11] implement vcpu preempted check
change from v5:
split x86/kvm patch into guest/host part (guest-side sketch below).
introduce kvm_write_guest_offset_cached.
fix some typos.
rebase patch onto 4.9.2
change from v4:
split x86 kvm vcpu preempted check into two patches.
add documentation patch.
add x86 vcpu preempted check patch under xen
add s390 vcpu preempted check patch
change from v3:
add x86 vcpu preempted check patch
change from v2:
no code
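The guest half of the guest/host split is then a cheap read of the flag the host published. A sketch of the guest-side check (per-CPU variable and field names follow the series as best remembered; treat them as approximate):

/* Guest side (x86/KVM), sketched: the host sets steal_time.preempted
 * when it schedules this vCPU out, so the guest only reads it back. */
static bool kvm_vcpu_is_preempted(int cpu)
{
	struct kvm_steal_time *src = &per_cpu(steal_time, cpu);

	return !!src->preempted;
}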