search for: __pv_kick

Displaying 9 unique results from an estimated 17 matches for "__pv_kick".

2016 May 17
0
[PATCH v2 4/6] pv-qspinlock: powerpc support pv-qspinlock
...ion) any later version.
+ */
+
+#include <linux/spinlock.h>
+
+static void __native_queued_spin_unlock(struct qspinlock *lock)
+{
+        native_queued_spin_unlock(lock);
+}
+
+static void __pv_wait(u8 *ptr, u8 val, int cpu)
+{
+        HMT_low();
+        __spin_yield_cpu(cpu);
+        HMT_medium();
+}
+
+static void __pv_kick(int cpu)
+{
+        __spin_wake_cpu(cpu);
+}
+
+struct pv_lock_ops pv_lock_op = {
+        .lock = native_queued_spin_lock_slowpath,
+        .unlock = __native_queued_spin_unlock,
+        .wait = NULL,
+        .kick = NULL,
+};
+EXPORT_SYMBOL(pv_lock_op);
+
+void __init pv_lock_init(void)
+{
+        if (SHARED_PROCESSOR) {
+                __pv_ini...
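The snippet is cut off inside pv_lock_init(), but the visible shape is an ops table whose wait/kick hooks get filled in at boot only when running on a shared-processor LPAR. A minimal userspace sketch of that pattern (illustrative only; shared_processor() and the printf bodies are invented stand-ins, not kernel code):

    #include <stdio.h>
    #include <stdint.h>

    /* Userspace analogue of the kernel's pv_lock_ops table (toy only). */
    struct pv_lock_ops {
        void (*wait)(uint8_t *ptr, uint8_t val, int cpu);
        void (*kick)(int cpu);
    };

    /* Hypothetical stand-ins for __spin_yield_cpu()/__spin_wake_cpu(). */
    static void pv_wait(uint8_t *ptr, uint8_t val, int cpu)
    {
        printf("cpu %d: yield to hypervisor while *ptr == %u\n", cpu, val);
    }

    static void pv_kick(int cpu)
    {
        printf("wake vcpu %d\n", cpu);
    }

    static struct pv_lock_ops pv_lock_op;  /* wait/kick stay NULL on bare metal */

    /* Stand-in for the SHARED_PROCESSOR check on a pSeries LPAR. */
    static int shared_processor(void)
    {
        return 1;
    }

    static void pv_lock_init(void)
    {
        if (shared_processor()) {
            pv_lock_op.wait = pv_wait;
            pv_lock_op.kick = pv_kick;
        }
    }

    int main(void)
    {
        uint8_t lock_byte = 1;

        pv_lock_init();
        if (pv_lock_op.wait)
            pv_lock_op.wait(&lock_byte, 1, 0);
        if (pv_lock_op.kick)
            pv_lock_op.kick(0);
        return 0;
    }

On bare metal the hooks stay NULL and callers skip them, so the paravirt path costs nothing when it is not needed.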
2016 Apr 28
0
[PATCH] powerpc: enable qspinlock and its virtualization support
...ock(struct qspinlock *lock)
+{
+        native_queued_spin_unlock(lock);
+}
+
+static void __native_wait(u8 *ptr, u8 val, int cpu)
+{
+}
+
+static void __native_kick(int cpu)
+{
+}
+
+static void __pv_wait(u8 *ptr, u8 val, int cpu)
+{
+        HMT_low();
+        __spin_yield_cpu(cpu);
+        HMT_medium();
+}
+
+static void __pv_kick(int cpu)
+{
+        __spin_wake_cpu(cpu);
+}
+
+struct pv_lock_ops pv_lock_op = {
+        .lock = native_queued_spin_lock_slowpath,
+        .unlock = __native_queued_spin_unlock,
+        .wait = __native_wait,
+        .kick = __native_kick,
+};
+EXPORT_SYMBOL(pv_lock_op);
+
+void __init pv_lock_init(void)
+{
+        if (SHARED_PROCES...
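Note the contrast: the native wait/kick hooks are empty no-ops (a bare-metal waiter just keeps spinning), while the pv variants lower SMT thread priority and yield the vcpu. A rough userspace sketch of that difference, with sched_yield() standing in for the __spin_yield_cpu() hypervisor yield (an assumption for illustration, not the kernel primitive):

    #include <pthread.h>
    #include <sched.h>
    #include <stdatomic.h>
    #include <stdio.h>
    #include <unistd.h>

    static atomic_uchar locked = 1;

    /* 'Native' waiter: pure busy-wait, fine when every vcpu owns a real core. */
    static void native_wait(atomic_uchar *ptr, unsigned char val)
    {
        while (atomic_load(ptr) == val)
            ;
    }

    /* 'pv'-style waiter: cede the time slice on every spin, the userspace
     * cousin of __pv_wait() yielding the vcpu back to the hypervisor. */
    static void pv_style_wait(atomic_uchar *ptr, unsigned char val)
    {
        while (atomic_load(ptr) == val)
            sched_yield();
    }

    static void *unlocker(void *arg)
    {
        (void)arg;
        usleep(1000);               /* hold the 'lock' briefly */
        atomic_store(&locked, 0);   /* the userspace cousin of a kick */
        return NULL;
    }

    int main(void)
    {
        pthread_t t;

        pthread_create(&t, NULL, unlocker, NULL);
        pv_style_wait(&locked, 1);
        native_wait(&locked, 1);    /* returns immediately: already unlocked */
        pthread_join(t, NULL);
        puts("lock released");
        return 0;
    }

On an overcommitted host, the spinning variant can burn a whole slice waiting for a lock holder that is not even running; yielding hands that slice to the holder instead.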
2016 Apr 28
2
[PATCH resend] powerpc: enable qspinlock and its virtualization support
...ock(struct qspinlock *lock)
+{
+        native_queued_spin_unlock(lock);
+}
+
+static void __native_wait(u8 *ptr, u8 val, int cpu)
+{
+}
+
+static void __native_kick(int cpu)
+{
+}
+
+static void __pv_wait(u8 *ptr, u8 val, int cpu)
+{
+        HMT_low();
+        __spin_yield_cpu(cpu);
+        HMT_medium();
+}
+
+static void __pv_kick(int cpu)
+{
+        __spin_wake_cpu(cpu);
+}
+
+struct pv_lock_ops pv_lock_op = {
+        .lock = native_queued_spin_lock_slowpath,
+        .unlock = __native_queued_spin_unlock,
+        .wait = __native_wait,
+        .kick = __native_kick,
+};
+EXPORT_SYMBOL(pv_lock_op);
+
+void __init pv_lock_init(void)
+{
+        if (SHARED_PROCES...
2016 Dec 06
6
[PATCH v9 0/6] Implement qspinlock/pv-qspinlock on ppc
Hi All, this is the fair-lock patchset. You can apply the patches and build successfully; they are based on linux-next. qspinlock avoids the waiter-starvation issue. It has about the same speed in the single-thread case and can be much faster under high contention, especially when the spinlock is embedded within the data structure to be protected. v8 -> v9: move qspinlock config entry to
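The starvation claim follows from qspinlock's queueing: contenders line up in an MCS-style FIFO and each spins on its own node, so the lock is handed to the oldest waiter instead of whichever CPU wins a race on the lock word. A minimal userspace MCS-style sketch of that handoff (illustrative; the kernel's qspinlock compresses this into a single 32-bit word, which is not shown here):

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stddef.h>

    /* Minimal MCS queue lock: each waiter spins on its own node, and unlock
     * hands the lock to the next node in line, giving FIFO fairness. */
    struct mcs_node {
        _Atomic(struct mcs_node *) next;
        atomic_bool locked;
    };

    static _Atomic(struct mcs_node *) tail = NULL;

    static void mcs_lock(struct mcs_node *me)
    {
        struct mcs_node *prev;

        atomic_store(&me->next, NULL);
        atomic_store(&me->locked, true);
        /* Swing the tail to us; whoever we displaced is our predecessor. */
        prev = atomic_exchange(&tail, me);
        if (prev) {
            atomic_store(&prev->next, me);
            while (atomic_load(&me->locked))  /* spin on our own node only */
                ;
        }
    }

    static void mcs_unlock(struct mcs_node *me)
    {
        struct mcs_node *next = atomic_load(&me->next);

        if (!next) {
            /* No visible successor: try to reset the queue to empty. */
            struct mcs_node *expected = me;
            if (atomic_compare_exchange_strong(&tail, &expected, NULL))
                return;
            /* A successor is enqueueing; wait for it to link itself. */
            while (!(next = atomic_load(&me->next)))
                ;
        }
        atomic_store(&next->locked, false);   /* FIFO handoff: no overtaking */
    }

    int main(void)
    {
        struct mcs_node node;

        mcs_lock(&node);
        mcs_unlock(&node);
        return 0;
    }

Because each waiter spins on its own cache line rather than the shared lock word, this also scales better under contention, which matches the cover letter's benchmark claims.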
2016 May 17
6
[PATCH v3 0/6] powerpc use pv-qspinlock instead of spinlock
change from v1: separated into 6 patches from one patch; some minor code changes. Benchmark test results are below, from 3 tests run on a pseries IBM,8408-E8E with 32 CPUs and 64GB memory:
  perf bench futex hash
  perf bench futex lock-pi
  perf record -advRT || perf bench sched messaging -g 1000 || perf report
summary:
  _____test________________spinlock______________pv-qspinlock_____
  |futex hash | 556370 ops |
2016 May 25
10
[PATCH v3 0/6] powerpc use pv-qspinlock as the default spinlock implementation
change from v2: __spin_yield_cpu() will yield slices to the LPAR if the target cpu is running; removed an unnecessary rmb() in __spin_yield/wake_cpu; __pv_wait() will check that *ptr == val; some commit message changes. change from v1: separated into 6 patches from one patch; some minor code changes. I ran several tests on a pseries IBM,8408-E8E with 32 CPUs and 64GB memory. Benchmark test results are below. 2
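The "__pv_wait() will check that *ptr == val" item is a lost-wakeup guard: if the lock state changed between deciding to wait and actually sleeping, the waiter must not block on a stale value. Linux futexes use the same compare-before-sleep idiom, which makes for a compact userspace illustration (a sketch of the idiom, not the kernel's pv_wait):

    #define _GNU_SOURCE
    #include <linux/futex.h>
    #include <stdint.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    /* Block only if *uaddr still equals val -- the compare-before-sleep
     * idiom __pv_wait() adopts. FUTEX_WAIT performs the check atomically
     * in the kernel, so a kick that already changed *uaddr is never
     * slept through. */
    static void wait_on(uint32_t *uaddr, uint32_t val)
    {
        syscall(SYS_futex, uaddr, FUTEX_WAIT, val, NULL, NULL, 0);
    }

    static void kick(uint32_t *uaddr)
    {
        syscall(SYS_futex, uaddr, FUTEX_WAKE, 1, NULL, NULL, 0);
    }

    int main(void)
    {
        uint32_t state = 1;

        kick(&state);        /* no waiters yet: harmless */
        wait_on(&state, 0);  /* state != 0, so this returns at once (EAGAIN) */
        return 0;
    }

Without the check, a kick delivered just before the yield would be lost and the waiter could sleep until the next unrelated wakeup.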
2016 Jun 02
9
[PATCH v5 0/6] powerpc/pSeries use pv-qspinlock as the default spinlock implementation
change from v4: BUG FIX. Thanks to boqun for reporting this issue: struct __qspinlock has a different layout on big-endian machines, so native_queued_spin_unlock() may write the value to a wrong address. Now fixed. Sorry for not even testing on a big-endian machine before! change from v3: a big change in [PATCH v4 4/6] pv-qspinlock: powerpc support pv-qspinlock; no other patch changed. and the patch
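The v4 -> v5 bug is worth spelling out: qspinlock packs its state into one 32-bit word and touches the locked byte through an overlay structure, and which byte is least significant differs between little- and big-endian machines. A self-contained sketch of the pitfall and the endian-aware fix (a toy union in the spirit of, but not identical to, the kernel's struct __qspinlock):

    #include <stdint.h>
    #include <stdio.h>

    /* A 32-bit lock word overlaid with byte fields. The locked byte is the
     * least significant byte of 'val': byte 0 on little-endian but byte 3
     * on big-endian. Hard-coding either offset breaks the other machine,
     * which is exactly the class of bug described above. */
    union toy_qspinlock {
        uint32_t val;
        struct {
    #if defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
            uint8_t reserved[3];
            uint8_t locked;
    #else
            uint8_t locked;
            uint8_t reserved[3];
    #endif
        } b;
    };

    int main(void)
    {
        union toy_qspinlock lock = { .val = 0 };

        lock.b.locked = 1;                   /* endian-aware: always the LSB */
        printf("val = 0x%08x\n", lock.val);  /* 0x00000001 on both endians */
        return 0;
    }

An unlock that stores to the wrong byte leaves the real locked byte set, so waiters spin forever; that is why the fix mattered enough to headline the v5 changelog.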
2016 Dec 05
9
[PATCH v8 0/6] Implement qspinlock/pv-qspinlock on ppc
Hi All, this is the fair-lock patchset. You can apply the patches and build successfully; they are based on linux-next. qspinlock avoids the waiter-starvation issue. It has about the same speed in the single-thread case and can be much faster under high contention, especially when the spinlock is embedded within the data structure to be protected. v7 -> v8: add one patch to drop a function call
2016 Jun 02
8
[PATCH v5 0/6] powerpc/pSeries use pv-qspinlock as the default spinlock implementation
From: root <root at ltcalpine2-lp13.aus.stglabs.ibm.com> change from v4: BUG FIX. Thanks to boqun for reporting this issue: struct __qspinlock has a different layout on big-endian machines, so native_queued_spin_unlock() may write the value to a wrong address. Now fixed. change from v3: a big change in [PATCH v4 4/6] pv-qspinlock: powerpc support pv-qspinlock; no other patch changed. and the patch