search for: init_hash

Displaying 12 unique results from an estimated 19 matches for "init_hash".

2015 Apr 09
6
[PATCH v15 09/15] pvqspinlock: Implement simple paravirt support for the qspinlock
...ash_size);
> +        pv_taps = lfsr_taps(pv_lock_hash_bits);
> +}
> +
> +static inline u32 hash_align(u32 hash)
> +{
> +        return hash & ~(PV_HB_PER_LINE - 1);
> +}
> +
> +static struct qspinlock **pv_hash(struct qspinlock *lock, struct pv_node *node)
> +{
> +        unsigned long init_hash, hash = hash_ptr(lock, pv_lock_hash_bits);
> +        struct pv_hash_bucket *hb, *end;
> +
> +        if (!hash)
> +                hash = 1;
> +
> +        init_hash = hash;
> +        hb = &pv_lock_hash[hash_align(hash)];
> +        for (;;) {
> +                for (end = hb + PV_HB_PER_LINE; hb < end; hb++) {
> +                        if (...
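To make the excerpt above easier to follow: in v15 the hash table is an open-addressing table whose buckets are grouped PV_HB_PER_LINE to a cache line, and hash_align() rounds an index down to the first bucket of its line so one probe scans a whole cache line. Below is a minimal user-space sketch of that layout; the table size, the plain store in place of the kernel's cmpxchg(), and main() are assumptions for illustration only.

#include <stdio.h>
#include <stdint.h>

/* Sketch of the v15 layout: buckets grouped per cache line. */
#define SMP_CACHE_BYTES 64
struct pv_hash_bucket {
        void *lock;             /* stand-in for struct qspinlock * */
        void *node;             /* stand-in for struct pv_node * */
};
#define PV_HB_PER_LINE  (SMP_CACHE_BYTES / sizeof(struct pv_hash_bucket))
#define HASH_SIZE       256     /* assumed table size for the sketch */

static struct pv_hash_bucket pv_lock_hash[HASH_SIZE];

/* Round a hash down to the first bucket of its cache line. */
static inline uint32_t hash_align(uint32_t hash)
{
        return hash & ~(uint32_t)(PV_HB_PER_LINE - 1);
}

/* Scan the PV_HB_PER_LINE buckets of one cache line for a free slot. */
static void **pv_hash_one_line(void *lock, void *node, uint32_t hash)
{
        struct pv_hash_bucket *hb = &pv_lock_hash[hash_align(hash) % HASH_SIZE];
        struct pv_hash_bucket *end = hb + PV_HB_PER_LINE;

        for (; hb < end; hb++) {
                if (!hb->lock) {        /* the kernel uses cmpxchg() here */
                        hb->lock = lock;
                        hb->node = node;
                        return &hb->lock;
                }
        }
        return NULL;    /* line full: the real loop moves on to another line */
}

int main(void)
{
        int lock, node;         /* dummies standing in for real objects */
        void **slot = pv_hash_one_line(&lock, &node, 42);
        printf("inserted at %p\n", (void *)slot);
        return 0;
}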
2015 Apr 09
0
[PATCH v15 09/15] pvqspinlock: Implement simple paravirt support for the qspinlock
...meant here.
>> +}
>> +
>> +static inline u32 hash_align(u32 hash)
>> +{
>> +        return hash & ~(PV_HB_PER_LINE - 1);
>> +}
>> +
>> +static struct qspinlock **pv_hash(struct qspinlock *lock, struct pv_node *node)
>> +{
>> +        unsigned long init_hash, hash = hash_ptr(lock, pv_lock_hash_bits);
>> +        struct pv_hash_bucket *hb, *end;
>> +
>> +        if (!hash)
>> +                hash = 1;
>> +
>> +        init_hash = hash;
>> +        hb = &pv_lock_hash[hash_align(hash)];
>> +        for (;;) {
>> +                for (end = hb + PV_HB_PER_LINE;...
2015 Apr 07
0
[PATCH v15 09/15] pvqspinlock: Implement simple paravirt support for the qspinlock
...ze, 0, HASH_EARLY,
+                               &pv_lock_hash_bits, NULL,
+                               pv_hash_size, pv_hash_size);
+}
+
+static inline u32 hash_align(u32 hash)
+{
+        return hash & ~(PV_HB_PER_LINE - 1);
+}
+
+static struct qspinlock **pv_hash(struct qspinlock *lock, struct pv_node *node)
+{
+        unsigned long init_hash, hash = hash_ptr(lock, pv_lock_hash_bits);
+        struct pv_hash_bucket *hb, *end;
+
+        if (!hash)
+                hash = 1;
+
+        init_hash = hash;
+        hb = &pv_lock_hash[hash_align(hash)];
+        for (;;) {
+                for (end = hb + PV_HB_PER_LINE; hb < end; hb++) {
+                        if (!cmpxchg(&hb->lock, NULL, lock)) {
+                                WRI...
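For context on the allocation call truncated at the top of this excerpt: the table is carved out of bootmem via alloc_large_system_hash(). Its signature in kernels of this era is shown below, with comments mapping each parameter to the argument visible in the patch; the annotations are my reading of the call, not part of the kernel source.

/* Declared in <linux/bootmem.h>, defined in mm/page_alloc.c. */
void *alloc_large_system_hash(const char *tablename,      /* "PV qspinlock" */
                              unsigned long bucketsize,    /* sizeof(struct pv_hash_bucket) */
                              unsigned long numentries,    /* pv_hash_size */
                              int scale,                   /* 0 */
                              int flags,                   /* HASH_EARLY */
                              unsigned int *_hash_shift,   /* &pv_lock_hash_bits (log2 of entries) */
                              unsigned long *_hash_mask,   /* NULL: mask not needed */
                              unsigned long low_limit,     /* pv_hash_size */
                              unsigned long high_limit);   /* pv_hash_size */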
2015 Apr 24
0
[PATCH v16 08/14] pvqspinlock: Implement simple paravirt support for the qspinlock
...pinlock", + sizeof(struct pv_hash_bucket), + pv_hash_size, 0, HASH_EARLY, + &pv_lock_hash_bits, NULL, + pv_hash_size, pv_hash_size); +} + +static inline struct qspinlock ** +pv_hash(struct qspinlock *lock, struct pv_node *node) +{ + unsigned long init_hash, hash = hash_ptr(lock, pv_lock_hash_bits); + struct pv_hash_entry *he, *end; + + init_hash = hash; + for (;;) { + he = pv_lock_hash[hash].ent; + for (end = he + PV_HE_PER_LINE; he < end; he++) { + if (!cmpxchg(&he->lock, NULL, lock)) { + /* + * We haven't set the _Q_SLOW_VA...
2015 Apr 09
0
[PATCH v15 09/15] pvqspinlock: Implement simple paravirt support for the qspinlock
...er Zijlstra wrote:
> On Mon, Apr 06, 2015 at 10:55:44PM -0400, Waiman Long wrote:
> > +#define PV_HB_PER_LINE  (SMP_CACHE_BYTES / sizeof(struct pv_hash_bucket))
> > +static struct qspinlock **pv_hash(struct qspinlock *lock, struct pv_node *node)
> > +{
> > +        unsigned long init_hash, hash = hash_ptr(lock, pv_lock_hash_bits);
> > +        struct pv_hash_bucket *hb, *end;
> > +
> > +        if (!hash)
> > +                hash = 1;
> > +
> > +        init_hash = hash;
> > +        hb = &pv_lock_hash[hash_align(hash)];
> > +        for (;;) {
> > +                for (end = hb + PV_HB...
2015 Apr 07
18
[PATCH v15 00/15] qspinlock: a 4-byte queue spinlock with PV support
v14->v15: - Incorporate PeterZ's v15 qspinlock patch and improve upon the PV qspinlock code by dynamically allocating the hash table, along with some other performance optimizations. - Simplify the Xen PV qspinlock code as suggested by David Vrabel <david.vrabel at citrix.com>. - Add benchmarking data for the 3.19 kernel to compare the performance of a spinlock-heavy test
2015 Apr 07
0
[PATCH v15 13/15] pvqspinlock: Only kick CPU at unlock time
...t up an entry in the lock hash table
+ * This is not inlined to reduce size of generated code as it is included
+ * twice and is used only in the slowest path of handling CPU halting.
+ */
+static noinline struct qspinlock **
+pv_hash(struct qspinlock *lock, struct pv_node *node)
 {
        unsigned long init_hash, hash = hash_ptr(lock, pv_lock_hash_bits);
        struct pv_hash_bucket *hb, *end;
@@ -178,7 +191,8 @@ static void pv_init_node(struct mcs_spinlock *node)
 /*
  * Wait for node->locked to become true, halt the vcpu after a short spin.
- * pv_kick_node() is used to wake the vcpu again.
+ * pv_scan_n...
2015 Apr 24
16
[PATCH v16 00/14] qspinlock: a 4-byte queue spinlock with PV support
v15->v16: - Remove the lfsr patch and use linear probing, as the LFSR is not really necessary in most cases. - Move the paravirt PV_CALLEE_SAVE_REGS_THUNK code to an asm header. - Add a patch to collect PV qspinlock statistics, which also supersedes the PV lock hash debug patch. - Add PV qspinlock performance numbers. v14->v15: - Incorporate PeterZ's v15 qspinlock patch and improve
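Since this changelog hinges on dropping the LFSR: an LFSR steps through indices in a pseudo-random maximal-length sequence, while linear probing simply visits hash + 1, hash + 2, and so on. The toy program below contrasts the two probe sequences; the 8-bit taps value 0xB8 is a textbook maximal-length choice, not the kernel's lfsr_taps() output, and everything here is illustrative.

#include <stdio.h>
#include <stdint.h>

/* Toy 8-bit Galois LFSR step; 0xB8 is a textbook maximal-length
 * taps value, not what the kernel's lfsr_taps() would pick. */
static uint32_t lfsr_step(uint32_t v)
{
        return (v >> 1) ^ ((v & 1u) ? 0xB8u : 0u);
}

/* Linear probing step, as used from v16 onward. */
static uint32_t linear_step(uint32_t v)
{
        return (v + 1) & 0xFFu;
}

int main(void)
{
        uint32_t lfsr = 42, lin = 42;

        printf("probe  lfsr  linear\n");
        for (int i = 0; i < 8; i++) {
                printf("%5d  %4u  %6u\n", i, lfsr, lin);
                lfsr = lfsr_step(lfsr);
                lin = linear_step(lin);
        }
        return 0;
}

The scattered LFSR sequence avoids clustering but costs extra arithmetic and touches unpredictable cache lines; the changelog's point is that plain hash + 1 is good enough in practice.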
2016 Jun 02
9
[PATCH v5 0/6] PowerPC/pSeries use pv-qspinlock as the default spinlock implementation
change from v4: BUG FIX. Thanks to Boqun for reporting this issue: struct __qspinlock has a different layout on big-endian machines, so native_queued_spin_unlock() may write the value to the wrong address. This is now fixed. Sorry for not even testing on a big-endian machine before! change from v3: a big change in [PATCH v4 4/6] pv-qspinlock: powerpc support for pv-qspinlock; no other patch changed. and the patch
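The bug described here is purely about byte order: the 32-bit lock word's least significant byte holds the locked flag, and its memory offset differs by endianness. A minimal sketch follows (a simplified stand-in for the kernel's struct __qspinlock, which overlays more fields on the same word).

#include <stdio.h>
#include <stdint.h>

/* Simplified stand-in for struct __qspinlock: one 32-bit word
 * viewed both as a whole and as individual bytes. */
union qspinlock_word {
        uint32_t val;
        uint8_t  bytes[4];
};

int main(void)
{
        union qspinlock_word q = { .val = 1 };  /* _Q_LOCKED_VAL */

        /* The locked byte is the LSB of val: bytes[0] on little-endian,
         * bytes[3] on big-endian. An unlock that blindly stores 0 to
         * bytes[0] clears the wrong byte on big-endian machines, which
         * is the bug this revision fixes. */
        for (int i = 0; i < 4; i++)
                if (q.bytes[i])
                        printf("locked byte is at offset %d\n", i);
        return 0;
}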
2016 Jun 02
8
[PATCH v5 0/6] PowerPC/pSeries use pv-qspinlock as the default spinlock implementation
From: root <root at ltcalpine2-lp13.aus.stglabs.ibm.com> change from v4: BUG FIX. Thanks to Boqun for reporting this issue: struct __qspinlock has a different layout on big-endian machines, so native_queued_spin_unlock() may write the value to the wrong address. This is now fixed. change from v3: a big change in [PATCH v4 4/6] pv-qspinlock: powerpc support for pv-qspinlock; no other patch changed. and the patch
2016 Dec 06
6
[PATCH v9 0/6] Implement qspinlock/pv-qspinlock on ppc
Hi All, this is the fairlock patchset. You can apply the patches and build successfully; they are based on linux-next. qspinlock can avoid the waiter-starvation issue. It has about the same speed single-threaded, and it can be much faster in high-contention situations, especially when the spinlock is embedded within the data structure to be protected. v8 -> v9: move the qspinlock config entry to
2016 Dec 05
9
[PATCH v8 0/6] Implement qspinlock/pv-qspinlock on ppc
Hi All, this is the fairlock patchset. You can apply the patches and build successfully; they are based on linux-next. qspinlock can avoid the waiter-starvation issue. It has about the same speed single-threaded, and it can be much faster in high-contention situations, especially when the spinlock is embedded within the data structure to be protected. v7 -> v8: add one patch to drop a function call