search for: machine_is_lpar

Displaying 5 results from an estimated 10 matches for "machine_is_lpar".

2016 Oct 19
2
[PATCH v2 1/1] s390/spinlock: Provide vcpu_is_preempted
...arch_vcpu_is_preempted(~owner)) {
>			smp_yield_cpu(~owner);
>			first_diag = 0;
>			continue;
> @@ -81,7 +72,7 @@ void arch_spin_lock_wait(arch_spinlock_t *lp)
>		 * yield the CPU unconditionally. For LPAR rely on the
>		 * sense running status.
>		 */
> -		if (!MACHINE_IS_LPAR || cpu_is_preempted(~owner)) {
> +		if (!MACHINE_IS_LPAR || arch_vcpu_is_preempted(~owner)) {
>			smp_yield_cpu(~owner);
>			first_diag = 0;
>		}
> @@ -108,7 +99,7 @@ void arch_spin_lock_wait_flags(arch_spinlock_t *lp, unsigned long flags)
>			continue;
>		}
> /...
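The hunk shows the shape of the optimisation: before yielding, the waiter asks whether the lock holder's vCPU has actually been descheduled by the hypervisor. A minimal user-space sketch of that pattern, with a stubbed-out arch_vcpu_is_preempted() and sched_yield() standing in for the kernel's smp_yield_cpu() (names and behaviour here are illustrative, not the kernel's code):

    #include <stdbool.h>
    #include <sched.h>

    /* Stub: a real implementation would query the hypervisor. */
    static bool arch_vcpu_is_preempted(int cpu)
    {
            (void)cpu;
            return false;
    }

    /* Spin on *lock, but stop burning cycles while the holder's vCPU
     * is preempted: spinning cannot succeed until the holder runs again. */
    static void spin_lock_wait(volatile int *lock, int owner_cpu)
    {
            while (__sync_lock_test_and_set(lock, 1)) {
                    if (arch_vcpu_is_preempted(owner_cpu))
                            sched_yield();  /* stand-in for smp_yield_cpu() */
            }
    }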
2016 Oct 19
1
[PATCH v3] s390/spinlock: Provide vcpu_is_preempted
...~owner)) {
+	if (first_diag && arch_vcpu_is_preempted(~owner)) {
		smp_yield_cpu(~owner);
		first_diag = 0;
		continue;
@@ -81,7 +72,7 @@ void arch_spin_lock_wait(arch_spinlock_t *lp)
	 * yield the CPU unconditionally. For LPAR rely on the
	 * sense running status.
	 */
-	if (!MACHINE_IS_LPAR || cpu_is_preempted(~owner)) {
+	if (!MACHINE_IS_LPAR || arch_vcpu_is_preempted(~owner)) {
		smp_yield_cpu(~owner);
		first_diag = 0;
	}
@@ -108,7 +99,7 @@ void arch_spin_lock_wait_flags(arch_spinlock_t *lp, unsigned long flags)
		continue;
	}
	/* Check if the lock owner is running. */...
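What v3 exposes as arch_vcpu_is_preempted() is the file-local cpu_is_preempted() helper visible in the hunk above, renamed so generic code can call it. As a sketch of the shape the s390 helper took upstream (reproduced from memory, so treat the details as an assumption rather than a quote from the patch):

    /* A vCPU is "preempted" only if it is neither idle in enabled
     * wait nor currently backed by a physical CPU. */
    bool arch_vcpu_is_preempted(int cpu)
    {
            if (test_cpu_flag_of(CIF_ENABLED_WAIT, cpu))
                    return false;   /* idle: yielding to it buys nothing */
            if (!smp_vcpu_scheduled(cpu))
                    return true;    /* runnable but not running */
            return false;
    }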
2016 Oct 20
15
[PATCH v5 0/9] implement vcpu preempted check
change from v4:
  split x86 kvm vcpu preempted check into two patches.
  add documentation patch.
  add x86 vcpu preempted check patch under xen.
  add s390 vcpu preempted check patch.
change from v3:
  add x86 vcpu preempted check patch.
change from v2:
  no code change, fix typos, update some comments.
change from v1:
  a simpler definition of default vcpu_is_preempted.
  skip machine type check on ppc,
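The "simpler definition of default vcpu_is_preempted" is the generic fallback: an architecture that provides no hypervisor query must see the check compile away to "never preempted", leaving bare-metal behaviour untouched. A sketch of that fallback (upstream it is a one-line macro in a core header; the exact location is stated from memory):

    /* Arch-independent default: without hypervisor help we cannot
     * tell, so report the vCPU as never preempted. Architectures
     * with a real check define vcpu_is_preempted() themselves. */
    #ifndef vcpu_is_preempted
    #define vcpu_is_preempted(cpu) false
    #endif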
2016 Nov 02
13
[PATCH v7 00/11] implement vcpu preempted check
change from v6:
  fix typos and remove unnecessary comments.
change from v5:
  split x86/kvm patch into guest/host part.
  introduce kvm_write_guest_offset_cached.
  fix some typos.
  rebase patch onto 4.9.2.
change from v4:
  split x86 kvm vcpu preempted check into two patches.
  add documentation patch.
  add x86 vcpu preempted check patch under xen.
  add s390 vcpu preempted check patch.
change from v3:
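kvm_write_guest_offset_cached(), introduced for the host half of the split, writes a single field inside an already-cached guest page rather than rewriting the whole structure. Roughly how the x86 host side uses it when a vCPU is scheduled out (a sketch after the upstream shape, not a verbatim quote from the series):

    /* Host side: flag the vCPU as preempted in its steal-time area
     * so the guest's lock code can observe it. */
    static void kvm_steal_time_set_preempted(struct kvm_vcpu *vcpu)
    {
            if (!(vcpu->arch.st.msr_val & KVM_MSR_ENABLED))
                    return;

            vcpu->arch.st.steal.preempted = 1;

            /* Update only the 'preempted' byte, at its offset within
             * struct kvm_steal_time. */
            kvm_write_guest_offset_cached(vcpu->kvm, &vcpu->arch.st.stime,
                            &vcpu->arch.st.steal.preempted,
                            offsetof(struct kvm_steal_time, preempted),
                            sizeof(vcpu->arch.st.steal.preempted));
    }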
2016 Oct 28
16
[PATCH v6 00/11] implement vcpu preempted check
change from v5:
  split x86/kvm patch into guest/host part.
  introduce kvm_write_guest_offset_cached.
  fix some typos.
  rebase patch onto 4.9.2.
change from v4:
  split x86 kvm vcpu preempted check into two patches.
  add documentation patch.
  add x86 vcpu preempted check patch under xen.
  add s390 vcpu preempted check patch.
change from v3:
  add x86 vcpu preempted check patch.
change from v2:
  no code
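The guest half of the split is correspondingly small: x86's vcpu_is_preempted() just reads back the byte the host set in the per-cpu steal-time record. A sketch of the guest-side check (again after the upstream shape, from memory):

    /* Guest side: the host sets steal_time.preempted whenever it
     * deschedules this vCPU, so a plain read suffices. */
    static bool __kvm_vcpu_is_preempted(int cpu)
    {
            struct kvm_steal_time *src = &per_cpu(steal_time, cpu);

            return !!src->preempted;
    }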