Displaying 8 unique results from an estimated 13 matches for "hash_early".
2015 Apr 13
1
[PATCH v15 09/15] pvqspinlock: Implement simple paravirt support for the qspinlock
...space from bootmem which should be page-size aligned
> >>+ * and hence cacheline aligned.
> >>+ */
> >>+	pv_lock_hash = alloc_large_system_hash("PV qspinlock",
> >>+	                                       sizeof(struct pv_hash_bucket),
> >>+	                                       pv_hash_size, 0, HASH_EARLY,
> >>+	                                       &pv_lock_hash_bits, NULL,
> >>+	                                       pv_hash_size, pv_hash_size);
> > pv_taps = lfsr_taps(pv_lock_hash_bits);
> >
>
> I don't understand what you meant here.
Let me explain (even though I propose taking all the LFSR stuff out).
pv_lock...
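The pv_taps = lfsr_taps(pv_lock_hash_bits) line under discussion derives a tap mask sized to the hash table, so that an LFSR can act as the probe sequence. As a rough, self-contained illustration only (the tap table, lfsr_step() and the values below are stand-ins, not the patch's lfsr_taps() code): a maximal-period Galois LFSR over n bits visits every one of the 2^n - 1 non-zero values exactly once per cycle.

#include <stdint.h>
#include <stdio.h>

/* Illustrative tap masks for maximal-period Galois LFSRs. */
static const uint32_t taps[] = {
	[4]  = 0x9,	/* x^4 + x + 1 */
	[8]  = 0xb8,	/* x^8 + x^6 + x^5 + x^4 + 1 */
	[16] = 0xb400,	/* x^16 + x^14 + x^13 + x^11 + 1 */
};

/* One Galois LFSR step: shift right, XOR in the taps when a 1 falls out. */
static uint32_t lfsr_step(uint32_t v, unsigned int bits)
{
	uint32_t lsb = v & 1;

	v >>= 1;
	return lsb ? v ^ taps[bits] : v;
}

int main(void)
{
	uint32_t v = 1;
	unsigned int period = 0;

	do {	/* the 4-bit LFSR cycles through all 15 non-zero values */
		v = lfsr_step(v, 4);
		period++;
	} while (v != 1);
	printf("period: %u\n", period);	/* prints 15 == 2^4 - 1 */
	return 0;
}

This is why the reviewer ties the taps to pv_lock_hash_bits: the period of the probe sequence must match the table size, or some slots would never be visited.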
2015 Apr 09
6
[PATCH v15 09/15] pvqspinlock: Implement simple paravirt support for the qspinlock
..._MIN_BITS);
> + /*
> + * Allocate space from bootmem which should be page-size aligned
> + * and hence cacheline aligned.
> + */
> +	pv_lock_hash = alloc_large_system_hash("PV qspinlock",
> +	                                       sizeof(struct pv_hash_bucket),
> +	                                       pv_hash_size, 0, HASH_EARLY,
> +	                                       &pv_lock_hash_bits, NULL,
> +	                                       pv_hash_size, pv_hash_size);
pv_taps = lfsr_taps(pv_lock_hash_bits);
> +}
> +
> +static inline u32 hash_align(u32 hash)
> +{
> +	return hash & ~(PV_HB_PER_LINE - 1);
> +}
> +
> +static struct qspinlock...
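The hash_align() helper in this excerpt rounds a hash index down to the first bucket of its cacheline-sized group, so a probe always scans whole cachelines. A minimal sketch, assuming a 64-byte line and a 16-byte bucket (the kernel derives PV_HB_PER_LINE from SMP_CACHE_BYTES and the real struct size):

#include <stdint.h>
#include <stdio.h>

#define CACHELINE_BYTES	64	/* assumption; SMP_CACHE_BYTES in the kernel */
#define BUCKET_BYTES	16	/* assumption; sizeof(struct pv_hash_bucket) */
#define PV_HB_PER_LINE	(CACHELINE_BYTES / BUCKET_BYTES)

/* Round down to the first bucket of the cacheline group (power of 2). */
static inline uint32_t hash_align(uint32_t hash)
{
	return hash & ~(PV_HB_PER_LINE - 1);
}

int main(void)
{
	for (uint32_t h = 0; h < 8; h++)
		printf("hash %u -> group start %u\n", h, hash_align(h));
	return 0;
}

With PV_HB_PER_LINE = 4, hashes 0-3 map to group 0 and hashes 4-7 to group 4, matching the page-size (and hence cacheline) alignment the allocation comment relies on.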
2015 Apr 09
0
[PATCH v15 09/15] pvqspinlock: Implement simple paravirt support for the qspinlock
...>> + * Allocate space from bootmem which should be page-size aligned
>> + * and hence cacheline aligned.
>> + */
>> +	pv_lock_hash = alloc_large_system_hash("PV qspinlock",
>> +	                                       sizeof(struct pv_hash_bucket),
>> +	                                       pv_hash_size, 0, HASH_EARLY,
>> +	                                       &pv_lock_hash_bits, NULL,
>> +	                                       pv_hash_size, pv_hash_size);
> pv_taps = lfsr_taps(pv_lock_hash_bits);
>
I don't understand what you meant here.
>> +}
>> +
>> +static inline u32 hash_align(u32 hash)
>> +{
>> +	return has...
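For reference, the arguments in the call quoted above line up with alloc_large_system_hash()'s parameters (the mm/page_alloc.c API) roughly as annotated below; the per-argument notes are interpretive, not from the patch:

/*
 * pv_lock_hash = alloc_large_system_hash(
 *	"PV qspinlock",                 tablename, shown in the boot log
 *	sizeof(struct pv_hash_bucket),  bucketsize in bytes
 *	pv_hash_size,                   requested number of entries
 *	0,                              scale, unused when entries are given
 *	HASH_EARLY,                     allocate from bootmem during early boot
 *	&pv_lock_hash_bits,             out: log2 of the final table size
 *	NULL,                           optional mask output, not needed here
 *	pv_hash_size,                   low limit on entries
 *	pv_hash_size);                  high limit: pin the size exactly
 */

Passing pv_hash_size as both the low and high limits is what forces the allocator to honor the computed size instead of scaling it with system memory.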
2015 Apr 07
0
[PATCH v15 09/15] pvqspinlock: Implement simple paravirt support for the qspinlock
...+ pv_hash_size = (1U << LFSR_MIN_BITS);
+ /*
+ * Allocate space from bootmem which should be page-size aligned
+ * and hence cacheline aligned.
+ */
+	pv_lock_hash = alloc_large_system_hash("PV qspinlock",
+	                                       sizeof(struct pv_hash_bucket),
+	                                       pv_hash_size, 0, HASH_EARLY,
+	                                       &pv_lock_hash_bits, NULL,
+	                                       pv_hash_size, pv_hash_size);
+}
+
+static inline u32 hash_align(u32 hash)
+{
+	return hash & ~(PV_HB_PER_LINE - 1);
+}
+
+static struct qspinlock **pv_hash(struct qspinlock *lock, struct pv_node *node)
+{
+	unsigned long init_hash, hash =...
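The excerpt stops just as v15's pv_hash() begins. The pattern it implements, remember the starting slot in init_hash, claim the first free bucket with a compare-and-swap, and give up once the probe sequence wraps back to the start, can be sketched in self-contained C11; every name and type below is a stand-in, and the probe step is simplified to linear where v15 stepped with the LFSR:

#include <stdatomic.h>
#include <stddef.h>
#include <stdio.h>

#define TABLE_BITS	4
#define TABLE_SIZE	(1u << TABLE_BITS)

struct bucket {
	_Atomic(void *) lock;	/* NULL while the bucket is free */
	void *node;
};

static struct bucket table[TABLE_SIZE];

/* Probe step: v15 used an LFSR here; linear keeps the sketch simple. */
static size_t next_probe(size_t hash)
{
	return (hash + 1) & (TABLE_SIZE - 1);
}

/* Claim the first free bucket; NULL once the probe wraps to its start. */
static struct bucket *hash_insert(void *lock, void *node, size_t init_hash)
{
	size_t hash = init_hash;

	do {
		void *expected = NULL;

		if (atomic_compare_exchange_strong(&table[hash].lock,
						   &expected, lock)) {
			table[hash].node = node;	/* publish after claiming */
			return &table[hash];
		}
		hash = next_probe(hash);
	} while (hash != init_hash);

	return NULL;	/* table full; the kernel treats this as a bug */
}

int main(void)
{
	int lock, node;
	struct bucket *b = hash_insert(&lock, &node, 5);

	printf("inserted at slot %td\n", b - table);
	return 0;
}

The real table is sized generously relative to the CPU count (four entries per possible CPU in the patch), so a full wrap, i.e. a NULL return here, should not happen in practice.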
2015 Apr 24
0
[PATCH v16 08/14] pvqspinlock: Implement simple paravirt support for the qspinlock
...e < PV_HB_MIN)
+ pv_hash_size = PV_HB_MIN;
+ /*
+ * Allocate space from bootmem which should be page-size aligned
+ * and hence cacheline aligned.
+ */
+	pv_lock_hash = alloc_large_system_hash("PV qspinlock",
+	                                       sizeof(struct pv_hash_entry),
+	                                       pv_hash_size, 0, HASH_EARLY,
+	                                       &pv_lock_hash_bits, NULL,
+	                                       pv_hash_size, pv_hash_size);
+}
+
+static inline struct qspinlock **
+pv_hash(struct qspinlock *lock, struct pv_node *node)
+{
+	unsigned long init_hash, hash = hash_ptr(lock, pv_lock_hash_bits);
+	struct pv_hash_entry *he, *end;
+
+	init_has...
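The v16 rewrite seeds the probe with hash_ptr(lock, pv_lock_hash_bits), multiplicative hashing that keeps the top bits of the product. A minimal sketch; the golden-ratio multiplier below is illustrative, since the kernel's exact constant in <linux/hash.h> has changed across releases:

#include <stdint.h>
#include <stdio.h>

#define GOLDEN_RATIO_64	0x61c8864680b583ebULL	/* illustrative multiplier */

/* Multiplicative pointer hash: the high product bits are the best mixed. */
static inline uint32_t hash_ptr(const void *ptr, unsigned int bits)
{
	return (uint32_t)(((uint64_t)(uintptr_t)ptr * GOLDEN_RATIO_64)
			  >> (64 - bits));
}

int main(void)
{
	int locks[4];

	/* adjacent objects hash to well-scattered slots */
	for (int i = 0; i < 4; i++)
		printf("%p -> %u\n", (void *)&locks[i], hash_ptr(&locks[i], 8));
	return 0;
}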
2015 May 04
1
[PATCH v16 08/14] pvqspinlock: Implement simple paravirt support for the qspinlock
...< PV_HE_MIN)
+ pv_hash_size = PV_HE_MIN;
+
+ /*
+ * Allocate space from bootmem which should be page-size aligned
+ * and hence cacheline aligned.
+ */
+	pv_lock_hash = alloc_large_system_hash("PV qspinlock",
+	                                       sizeof(struct pv_hash_entry),
+	                                       pv_hash_size, 0, HASH_EARLY,
+	                                       &pv_lock_hash_bits, NULL,
+	                                       pv_hash_size, pv_hash_size);
+}
+
+#define for_each_hash_entry(he, offset, hash) \
+	for (hash &= ~(PV_HE_PER_LINE - 1), he = &pv_lock_hash[hash], offset = 0; \
+	     offset < (1 << pv_lock_hash_bits); \
+	     off...
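The truncated for_each_hash_entry() macro rounds the hash down to its cacheline group and then visits every slot exactly once, wrapping modulo the power-of-two table size. A self-contained sketch with stand-in names and sizes (the kernel indexes pv_lock_hash by pv_lock_hash_bits instead):

#include <stdio.h>

#define HASH_BITS	4
#define HASH_SIZE	(1u << HASH_BITS)
#define HE_PER_LINE	4	/* assumption: entries per cacheline */

static int table[HASH_SIZE];

/* Start at the hash's cacheline group; wrap until every slot is seen. */
#define for_each_hash_entry(he, offset, hash)				\
	for (hash &= ~(HE_PER_LINE - 1), he = &table[hash], offset = 0;	\
	     offset < HASH_SIZE;					\
	     offset++, he = &table[(hash + offset) & (HASH_SIZE - 1)])

int main(void)
{
	unsigned int hash = 6, offset;	/* 6 rounds down to slot 4 */
	int *he;

	for_each_hash_entry(he, offset, hash)
		printf("visit slot %td\n", he - table);
	return 0;
}

Bounding the walk by the table size, rather than looping forever, is what lets the caller detect a full table instead of spinning.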
2015 Apr 07
18
[PATCH v15 00/15] qspinlock: a 4-byte queue spinlock with PV support
v14->v15:
- Incorporate PeterZ's v15 qspinlock patch and improve upon the PV
  qspinlock code by dynamically allocating the hash table as well
  as some other performance optimizations.
- Simplified the Xen PV qspinlock code as suggested by David Vrabel
  <david.vrabel at citrix.com>.
- Add benchmarking data for the 3.19 kernel to compare the performance
  of a spinlock-heavy test
2015 Apr 24
16
[PATCH v16 00/14] qspinlock: a 4-byte queue spinlock with PV support
v15->v16:
- Remove the lfsr patch and use linear probing, as the lfsr is not
  really necessary in most cases.
- Move the paravirt PV_CALLEE_SAVE_REGS_THUNK code to an asm header.
- Add a patch to collect PV qspinlock statistics, which also
  supersedes the PV lock hash debug patch.
- Add PV qspinlock performance numbers.
v14->v15:
- Incorporate PeterZ's v15 qspinlock patch and improve