search for: lppaca

Displaying 12 unique results from an estimated 25 matches for "lppaca".

2016 Jul 05
2
[PATCH v2 2/4] powerpc/spinlock: support vcpu preempted check
...on the retval of vcpu_is_preempted. > > As kernel has used this interface, So lets support it. > > Only pSeries need supoort it. And the fact is powerNV are built into same > kernel image with pSeries. So we need return false if we are runnig as > powerNV. The another fact is that lppaca->yiled_count keeps zero on > powerNV. So we can just skip the machine type. Lock holder vCPU preemption can be detected by hardware pSeries or paravirt method? Regards, Wanpeng Li
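
For context, the check discussed in this thread reduces to reading the yield_count field of the lppaca, the per-vCPU area that the PAPR hypervisor shares with the guest. Below is a minimal sketch of the pSeries-side check, assuming the usual lppaca_of() accessor and the convention that the hypervisor bumps yield_count on every preempt and every dispatch, so an odd value means the vCPU is currently off the physical CPU; it mirrors the patch description above, not necessarily the exact merged hunk.

    /*
     * Sketch only -- conceptually this lives in the powerpc spinlock/paravirt
     * headers and relies on <asm/lppaca.h> for lppaca_of().
     */
    static inline bool vcpu_is_preempted(int cpu)
    {
            /*
             * Odd yield_count: the vCPU was preempted or ceded and has not yet
             * been re-dispatched.  On powerNV the count is never bumped, so
             * this is always false there and no machine-type check is needed.
             */
            return !!(be32_to_cpu(lppaca_of(cpu).yield_count) & 1);
    }
    #define vcpu_is_preempted vcpu_is_preempted  /* tell generic code the arch provides it */
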
2016 Jul 06
1
[PATCH v2 2/4] powerpc/spinlock: support vcpu preempted check
...cpu_is_preempted. > > As kernel has used this interface, So lets support it. > > Only pSeries need supoort it. And the fact is powerNV are built into same ^^ support > kernel image with pSeries. So we need return false if we are runnig as > powerNV. The another fact is that lppaca->yiled_count keeps zero on ^^ yield > powerNV. So we can just skip the machine type. > > Suggested-by: Boqun Feng <boqun.feng at gmail.com> > Suggested-by: Peter Zijlstra (Intel) <peterz at infradead.org> > Signed-off-by: Pan Xinhui <xinhui.pan at linux.vnet...
2016 Jun 28
0
[PATCH v2 2/4] powerpc/spinlock: support vcpu preempted check
...n break the spin loops upon on the retval of vcpu_is_preempted. As kernel has used this interface, So lets support it. Only pSeries need supoort it. And the fact is powerNV are built into same kernel image with pSeries. So we need return false if we are runnig as powerNV. The another fact is that lppaca->yiled_count keeps zero on powerNV. So we can just skip the machine type. Suggested-by: Boqun Feng <boqun.feng at gmail.com> Suggested-by: Peter Zijlstra (Intel) <peterz at infradead.org> Signed-off-by: Pan Xinhui <xinhui.pan at linux.vnet.ibm.com> --- arch/powerpc/include/as...
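
The commit message's "break the spin loops upon the retval of vcpu_is_preempted" refers to the owner-spinning paths (mutex/rwsem and similar osq spinning). The hypothetical helper below, spin_while_owner_running(), only illustrates the intended usage pattern; it is not the actual mutex_spin_on_owner()/rwsem code the series touches.

    /*
     * Illustrative pattern only.  Spinning on a lock whose holder's vCPU has
     * been preempted by the hypervisor just burns this vCPU's time slice, so
     * the spinner bails out to the sleeping slow path instead.
     */
    static bool spin_while_owner_running(struct task_struct *owner)
    {
            while (READ_ONCE(owner->on_cpu)) {
                    if (need_resched() || vcpu_is_preempted(task_cpu(owner)))
                            return false;   /* stop spinning, take the slow path */
                    cpu_relax();
            }
            return true;    /* owner went off-CPU on its own; caller re-checks the lock */
    }
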
2016 Jul 06
0
[PATCH v2 2/4] powerpc/spinlock: support vcpu preempted check
..._preempted. >> >> As kernel has used this interface, So lets support it. >> >> Only pSeries need supoort it. And the fact is powerNV are built into same >> kernel image with pSeries. So we need return false if we are runnig as >> powerNV. The another fact is that lppaca->yiled_count keeps zero on >> powerNV. So we can just skip the machine type. > > Lock holder vCPU preemption can be detected by hardware pSeries or > paravirt method? > There is one shared struct between kernel and powerVM/KVM. And we read the yield_count of this struct to detec...
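
The "shared struct" the reply refers to is the lppaca. A heavily elided sketch of the relevant part is below; the real struct lppaca in arch/powerpc/include/asm/lppaca.h has many more fields at offsets fixed by PAPR, so treat this purely as orientation.

    struct lppaca {
            /* ... many PAPR-defined fields ... */
            __be32  yield_count;    /* bumped by the hypervisor when the vCPU is
                                     * preempted/ceded and again when it is
                                     * dispatched; odd = currently not running */
            /* ... */
    };
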
2016 Jul 21
0
[PATCH v3 2/4] powerpc/spinlock: support vcpu preempted check
...an break the spin loops upon the retval of vcpu_is_preempted(). As kernel has used this interface, So lets support it. Only pSeries need support it. And the fact is powerNV are built into same kernel image with pSeries. So we need return false if we are runnig as powerNV. The another fact is that lppaca->yield_count keeps zero on powerNV. So we can just skip the machine type check. Suggested-by: Boqun Feng <boqun.feng at gmail.com> Suggested-by: Peter Zijlstra (Intel) <peterz at infradead.org> Signed-off-by: Pan Xinhui <xinhui.pan at linux.vnet.ibm.com> --- arch/powerpc/incl...
2016 Jun 28
11
[PATCH v2 0/4] implement vcpu preempted check
change from v1: a simpler definition of default vcpu_is_preempted skip machine type check on ppc, and add config. remove dedicated macro. add one patch to drop overload of rwsem_spin_on_owner and mutex_spin_on_owner. add more comments thanks boqun and Peter's suggestion. This patch set aims to fix lock holder preemption issues. test-case: perf record -a perf bench sched messaging -g
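
The "simpler definition of default vcpu_is_preempted" mentioned in this changelog is presumably just a trivial generic fallback that architectures override, something along these lines (a sketch of the shape, not the exact diff):

    /*
     * Generic fallback (conceptually in include/linux/sched.h): architectures
     * that cannot observe remote-vCPU preemption report "not preempted", so
     * existing spin loops behave exactly as before.  pSeries overrides this
     * with the lppaca yield_count test sketched earlier.
     */
    #ifndef vcpu_is_preempted
    static inline bool vcpu_is_preempted(int cpu)
    {
            return false;
    }
    #endif
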
2016 Jul 06
2
[PATCH v2 2/4] powerpc/spinlock: support vcpu preempted check
...>>> As kernel has used this interface, So lets support it. >>> >>> Only pSeries need supoort it. And the fact is powerNV are built into same >>> kernel image with pSeries. So we need return false if we are runnig as >>> powerNV. The another fact is that lppaca->yiled_count keeps zero on >>> powerNV. So we can just skip the machine type. >> >> >> Lock holder vCPU preemption can be detected by hardware pSeries or >> paravirt method? >> > There is one shared struct between kernel and powerVM/KVM. And we read the &...
2016 Jul 21
5
[PATCH v3 0/4] implement vcpu preempted check
change from v2: no code change, fix typos, update some comments change from v1: a simpler definition of default vcpu_is_preempted skip machine type check on ppc, and add config. remove dedicated macro. add one patch to drop overload of rwsem_spin_on_owner and mutex_spin_on_owner. add more comments thanks boqun and Peter's suggestion. This patch set aims to fix lock holder preemption
2016 Dec 06
6
[PATCH v9 0/6] Implement qspinlock/pv-qspinlock on ppc
Hi All, this is the fairlock patchset. You can apply them and build successfully. patches are based on linux-next qspinlock can avoid waiter starved issue. It has about the same speed in single-thread and it can be much faster in high contention situations especially when the spinlock is embedded within the data structure to be protected. v8 -> v9: mv qspinlock config entry to
2016 Oct 20
15
[PATCH v5 0/9] implement vcpu preempted check
change from v4: split x86 kvm vcpu preempted check into two patches. add documentation patch. add x86 vcpu preempted check patch under xen add s390 vcpu preempted check patch change from v3: add x86 vcpu preempted check patch change from v2: no code change, fix typos, update some comments change from v1: a simpler definition of default vcpu_is_preempted skip machine type check on ppc,
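
The x86/KVM check added in this version works on the same principle as the lppaca trick, but through the steal-time area the host shares with each guest vCPU: the host marks the vCPU preempted when it schedules it out and clears the mark when it runs again, and the guest just reads that byte. A rough guest-side sketch follows; the per-CPU variable name and exact flag handling track the upstream kvm_steal_time plumbing only approximately.

    /*
     * Approximate x86/KVM guest-side check.  "steal_time" is the per-CPU
     * struct kvm_steal_time registered with the host via the steal-time MSR;
     * the host writes a non-zero "preempted" marker while the vCPU is
     * scheduled out.
     */
    static bool kvm_vcpu_is_preempted(int cpu)
    {
            struct kvm_steal_time *st = &per_cpu(steal_time, cpu);

            return !!READ_ONCE(st->preempted);
    }
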
2016 Oct 19
10
[PATCH v4 0/5] implement vcpu preempted check
change from v3: add x86 vcpu preempted check patch change from v2: no code change, fix typos, update some comments change from v1: a simpler definition of default vcpu_is_preempted skip machine type check on ppc, and add config. remove dedicated macro. add one patch to drop overload of rwsem_spin_on_owner and mutex_spin_on_owner. add more comments thanks boqun and Peter's suggestion.
2016 Dec 05
9
[PATCH v8 0/6] Implement qspinlock/pv-qspinlock on ppc
Hi All, this is the fairlock patchset. You can apply them and build successfully. patches are based on linux-next qspinlock can avoid waiter starved issue. It has about the same speed in single-thread and it can be much faster in high contention situations especially when the spinlock is embedded within the data structure to be protected. v7 -> v8: add one patch to drop a function call