Displaying 20 results from an estimated 147 matches for "unhashed".
2015 Apr 01
2
[PATCH 8/9] qspinlock: Generic paravirt support
On Wed, Apr 01, 2015 at 07:42:39PM +0200, Peter Zijlstra wrote:
> > Hohumm.. time to think more I think ;-)
>
> So bear with me, I've not really pondered this well so it could be full
> of holes (again).
>
> After the cmpxchg(&l->locked, _Q_LOCKED_VAL, _Q_SLOW_VAL) succeeds the
> spin_unlock() must do the hash lookup, right? We can make the lookup
> unhash.
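A minimal user-space sketch of the scheme being discussed, assuming a fixed-size open-addressed table and helper names (pv_hash/pv_unhash, PV_HASH_SIZE) chosen for illustration rather than taken from the patch: each waiter publishes its (lock, node) pair before setting _Q_SLOW_VAL, and the unlock path does the lookup and the unhash in a single scan.

#include <stdint.h>
#include <stdio.h>

#define PV_HASH_SIZE 64                         /* illustrative size, power of two */

struct pv_hash_entry {
        void *lock;                             /* NULL means "empty bucket" */
        void *node;                             /* the waiter's MCS node */
};

static struct pv_hash_entry pv_table[PV_HASH_SIZE];

static size_t pv_hash_bucket(const void *lock)
{
        return ((uintptr_t)lock >> 4) & (PV_HASH_SIZE - 1);
}

/* Insert with linear probing; done before cmpxchg()ing _Q_SLOW_VAL in. */
static void pv_hash(void *lock, void *node)
{
        size_t i = pv_hash_bucket(lock);

        while (pv_table[i].lock)                /* skip occupied buckets */
                i = (i + 1) & (PV_HASH_SIZE - 1);
        pv_table[i].node = node;
        pv_table[i].lock = lock;
}

/* Combined lookup-and-unhash, as suggested for the spin_unlock() path:
 * return the waiter's node and free the bucket in the same scan. */
static void *pv_unhash(void *lock)
{
        size_t i = pv_hash_bucket(lock);
        size_t n;

        for (n = 0; n < PV_HASH_SIZE; n++, i = (i + 1) & (PV_HASH_SIZE - 1)) {
                if (pv_table[i].lock == lock) {
                        void *node = pv_table[i].node;

                        pv_table[i].lock = NULL;        /* unhash */
                        return node;
                }
        }
        return NULL;    /* cannot happen if pv_hash() preceded _Q_SLOW_VAL */
}

int main(void)
{
        int lock, node;

        pv_hash(&lock, &node);
        printf("unlock finds node: %s\n", pv_unhash(&lock) == &node ? "yes" : "no");
        return 0;
}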
2015 Apr 01
2
[PATCH 8/9] qspinlock: Generic paravirt support
...okup is guaranteed to find
> >an entry, which reduces our worst case lookup cost to whatever the worst
> >case insertion cost was.
> >
>
> I think it doesn't matter who did the unhashing. Multiple independent locks
> can be hashed to the same value. Since they can be unhashed independently,
> there is no way to know whether you have checked all the possible buckets.
Oh, but the crux is that you guarantee a lookup will find an entry. It will
never need to iterate the entire array.
2019 Sep 03
0
[PATCH v2 26/27] drm/dp_mst: Also print unhashed pointers for malloc/topology references
...%p, but
unfortunately if you're trying to debug a use-after-free error caused by
a refcounting error then this really isn't terribly useful. On the other
hand though, everything in the rest of the DP MST helpers uses hashed
pointer values as well and probably isn't useful to convert to unhashed.
So, let's just get the best of both worlds and print both the hashed and
unhashed pointer in our malloc/topology refcount debugging output. This
will hopefully make it a lot easier to figure out which port/mstb is
causing KASAN to get upset.
Cc: Juston Li <juston.li at intel.com>
Cc: Im...
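The mechanism behind this is printk's pointer hashing: %p prints a hashed value, while %px prints the raw address. A minimal sketch of the idea, using a hypothetical helper name rather than the actual drm_dp_mst code:

#include <linux/printk.h>

/* Log both the hashed and the unhashed pointer so a later KASAN report,
 * which shows raw addresses, can be matched against this refcount trace. */
static inline void topology_ref_dbg(const char *what, void *obj, int refcount)
{
        pr_debug("%s %p (%px) refcount now %d\n", what, obj, obj, refcount);
}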
2019 Sep 27
1
[PATCH v2 26/27] drm/dp_mst: Also print unhashed pointers for malloc/topology references
...unately if you're trying to debug a use-after-free error caused by
> a refcounting error then this really isn't terribly useful. On the other
> hand though, everything in the rest of the DP MST helpers uses hashed
> pointer values as well and probably isn't useful to convert to unhashed.
> So, let's just get the best of both worlds and print both the hashed and
> unhashed pointer in our malloc/topology refcount debugging output. This
> will hopefully make it a lot easier to figure out which port/mstb is
> causing KASAN to get upset.
>
> Cc: Juston Li <jus...
2015 Apr 01
0
[PATCH 8/9] qspinlock: Generic paravirt support
...t the result is that any lookup is guaranteed to find
> an entry, which reduces our worst case lookup cost to whatever the worst
> case insertion cost was.
>
I think it doesn't matter who did the unhashing. Multiple independent
locks can be hashed to the same value. Since they can be unhashed
independently, there is no way to know whether you have checked all the
possible buckets.
-Longman
2015 Apr 01
0
[PATCH 8/9] qspinlock: Generic paravirt support
...nteed to find
>>> an entry, which reduces our worst case lookup cost to whatever the worst
>>> case insertion cost was.
>>>
>> I think it doesn't matter who did the unhashing. Multiple independent locks
>> can be hashed to the same value. Since they can be unhashed independently,
>> there is no way to know whether you have checked all the possible buckets.
> Oh, but the crux is that you guarantee a lookup will find an entry. It will
> never need to iterate the entire array.
I am sorry that I don't quite get what you mean here. My point is that...
2015 Apr 02
3
[PATCH 8/9] qspinlock: Generic paravirt support
On Thu, Apr 02, 2015 at 12:28:30PM -0400, Waiman Long wrote:
> On 04/01/2015 05:03 PM, Peter Zijlstra wrote:
> >On Wed, Apr 01, 2015 at 03:58:58PM -0400, Waiman Long wrote:
> >>On 04/01/2015 02:48 PM, Peter Zijlstra wrote:
> >>I am sorry that I don't quite get what you mean here. My point is that in
> >>the hashing step, a cpu will need to scan an empty
2015 Apr 01
3
[PATCH 8/9] qspinlock: Generic paravirt support
On Wed, Apr 01, 2015 at 12:20:30PM -0400, Waiman Long wrote:
> After more careful reading, I think the assumption that the presence of an
> unused bucket means there is no match is not true. Consider the scenario:
>
> 1. cpu 0 puts lock1 into hb[0]
> 2. cpu 1 puts lock2 into hb[1]
> 3. cpu 2 clears hb[0]
> 4. cpu 3 looks for lock2 and doesn't find it
Hmm, yes. The only
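The four steps can be reproduced with a toy table in user space (the forced collision and helper names below are assumptions for illustration, not the kernel code): a lookup that treats the first empty bucket as "no match" misses lock2 once hb[0] has been cleared.

#include <stdio.h>

#define NBUCKETS 4

static void *hb[NBUCKETS];                      /* NULL == empty bucket */

/* Force a collision so both locks want hb[0], as in the example. */
static int toy_hash(const void *lock) { (void)lock; return 0; }

static void toy_insert(void *lock)
{
        int i = toy_hash(lock);

        while (hb[i])                           /* linear probing */
                i = (i + 1) % NBUCKETS;
        hb[i] = lock;
}

/* Lookup that (incorrectly) treats an empty bucket as "no match". */
static int lookup_stop_at_empty(const void *lock)
{
        int i;

        for (i = toy_hash(lock); hb[i]; i = (i + 1) % NBUCKETS)
                if (hb[i] == lock)
                        return 1;
        return 0;
}

int main(void)
{
        int lock1, lock2;

        toy_insert(&lock1);                     /* 1. cpu 0 puts lock1 into hb[0] */
        toy_insert(&lock2);                     /* 2. cpu 1 puts lock2 into hb[1] */
        hb[0] = NULL;                           /* 3. cpu 2 clears hb[0]          */

        /* 4. cpu 3 looks for lock2 and doesn't find it */
        printf("lock2 found: %d\n", lookup_stop_at_empty(&lock2));
        return 0;
}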
2015 Apr 13
1
[PATCH v15 09/15] pvqspinlock: Implement simple paravirt support for the qspinlock
On Thu, Apr 09, 2015 at 05:41:44PM -0400, Waiman Long wrote:
> >>+static void pv_wait_head(struct qspinlock *lock, struct mcs_spinlock *node)
> >>+{
> >>+ struct __qspinlock *l = (void *)lock;
> >>+ struct qspinlock **lp = NULL;
> >>+ struct pv_node *pn = (struct pv_node *)node;
> >>+ int slow_set = false;
> >>+ int loop;
>
2016 May 26
2
[PATCH v3 5/6] pv-qspinlock: use cmpxchg_release in __pv_queued_spin_unlock
On Wed, May 25, 2016 at 04:18:08PM +0800, Pan Xinhui wrote:
> cmpxchg_release is lighter weight than cmpxchg, so we can gain better
> performance. On some architectures, such as ppc, full barriers impact
> performance too much.
>
> Suggested-by: Boqun Feng <boqun.feng at gmail.com>
> Signed-off-by: Pan Xinhui <xinhui.pan at linux.vnet.ibm.com>
> ---
>
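For context, the unlock fast path this changes looks roughly like the sketch below (names as used elsewhere in the qspinlock code; this is a sketch, not the exact patch). The argument of the patch is that release ordering is enough here: it publishes everything done under the lock before the lock word is cleared, while avoiding the full barrier that a plain cmpxchg() implies and that is costly on ppc.

__visible void __pv_queued_spin_unlock(struct qspinlock *lock)
{
        struct __qspinlock *l = (void *)lock;
        u8 locked;

        /*
         * Release ordering: stores made while holding the lock become
         * visible before the lock word is cleared; no full barrier.
         */
        locked = cmpxchg_release(&l->locked, _Q_LOCKED_VAL, 0);
        if (likely(locked == _Q_LOCKED_VAL))
                return;                 /* fast path: no waiter set _Q_SLOW_VAL */

        __pv_queued_spin_unlock_slowpath(lock, locked);
}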
2020 Sep 29
12
Human readable .ssh/known_hosts?
Hi list members,
I just tried to get some old records out of my known_hosts, which uses 'HashKnownHosts yes'. Is there a way to unhash host names and/or IPs?
Google tells me how to add hosts, but not the opposite; maybe I am missing something.
If this does not work at all, is there a best practice for cleaning old hosts and keys out?
Thanks, Martin!
2015 Apr 02
0
[PATCH 8/9] qspinlock: Generic paravirt support
On Thu, Apr 02, 2015 at 07:20:57PM +0200, Peter Zijlstra wrote:
> pv_wait_head():
>
> pv_hash()
> /* MB as per cmpxchg */
> cmpxchg(&l->locked, _Q_LOCKED_VAL, _Q_SLOW_VAL);
>
> VS
>
> __pv_queue_spin_unlock():
>
> if (xchg(&l->locked, 0) != _Q_SLOW_VAL)
> return;
>
> /* MB as per xchg */
> pv_hash_find(lock);
>
>
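The same pairing can be mimicked in user space with C11 atomics; the snippet below is an analogue under assumed names, not the kernel code, using sequentially consistent RMWs to stand in for the kernel's fully ordered cmpxchg()/xchg(). If the unlocker's exchange observes _Q_SLOW_VAL, the entry stored before the waiter's compare-and-swap is guaranteed to be visible to the lookup that follows.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define _Q_LOCKED_VAL 1
#define _Q_SLOW_VAL   3

static _Atomic int locked = _Q_LOCKED_VAL;
static void * _Atomic hash_entry;       /* stands in for the hash bucket */

static void *waiter(void *node)
{
        /* pv_hash(): publish the entry first ... */
        atomic_store_explicit(&hash_entry, node, memory_order_relaxed);
        /* ... then a fully ordered RMW flips the lock word to _Q_SLOW_VAL */
        int expected = _Q_LOCKED_VAL;
        atomic_compare_exchange_strong(&locked, &expected, _Q_SLOW_VAL);
        return NULL;
}

static void *unlocker(void *unused)
{
        (void)unused;
        /* fully ordered RMW, like the xchg() in __pv_queue_spin_unlock() */
        if (atomic_exchange(&locked, 0) != _Q_SLOW_VAL)
                return NULL;            /* no waiter hashed itself yet */
        /* pv_hash_find(): must see the entry published before _Q_SLOW_VAL */
        printf("found entry: %p\n",
               atomic_load_explicit(&hash_entry, memory_order_relaxed));
        return NULL;
}

int main(void)
{
        int node;
        pthread_t a, b;

        pthread_create(&a, NULL, waiter, &node);
        pthread_create(&b, NULL, unlocker, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        return 0;
}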
2018 Apr 30
2
Gluster rebalance taking many years
2007 Aug 27
2
Benchmarks with TextMate's manual
The following benchmarks have been obtained using the TextMate manual
as the input source:
<http://macromates.com/textmate/manual/source.tbz>
Using PHP Markdown, parsing the 24 files separately (with the
reference file appended to each of them), I get this (on an iBook G4
1.2 Ghz):
Total Avg. Min. Q1. Med. Q3. Max.
Parse Time (ms):