search for: fastpath

Displaying 20 results from an estimated 159 matches for "fastpath".

2009 Feb 12
11
UV fastpath after Crossbow
Hi the Crossbow team, I am testing my UV fastpath bits and I found several issues that need your suggestion: 1. dladm show-usage won't work if UV fastpath is enabled. Since the link usage data are collected based on the statistics of the mac_client_impl_t, and UV fastpath skips the GLDv3 processing, that won't be available....
2001 Jun 08
2
Problem with TC
...id argument Example : tc qdisc add dev eth2 handle 10: root estimator 1sec 8sec prio bands 3 priomap 0 1 2 I have tried all kinds of combinations and it refuses to do anything...Can anyone please help me?? Thanks, Anand Get 250 color business cards for FREE! http://businesscards.lycos.com/vp/fastpath/
2014 Jun 17
3
[PATCH 03/11] qspinlock: Add pending bit
...old = atomic_cmpxchg(&lock->val, val, new); > >+ if (old == _Q_LOCKED_VAL) /* YEEY! */ > >+ return; > > No, it can't leave like that. The unlock path will not clear the pending bit. Err, you are right. It needs to go back in the slowpath. > We are trying to make the fastpath as simple as possible as it may be > inlined. The complexity of the queue spinlock is in the slowpath. Sure, but then it shouldn't be called slowpath anymore as it is not slow. It is a combination of fast path (the potential chance of grabbing the lock and setting the pending lock) and the...
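The pending-bit discussion above is easier to follow with the fast path written out. Below is a minimal user-space C++ sketch of the general idea, not the kernel's qspinlock implementation: the lock word is free, locked, or locked-with-pending, the fast path is a single compare-exchange, and everything else falls to a slow path. The bit layout, `Q_LOCKED`/`Q_PENDING`, and the stub `slowpath()` are illustrative assumptions, not code from the patch.

```cpp
#include <atomic>
#include <cstdint>

// Illustrative bit layout only; the real _Q_LOCKED_VAL / pending encoding
// lives in the patch series, not here.
constexpr uint32_t Q_LOCKED  = 1u << 0;
constexpr uint32_t Q_PENDING = 1u << 8;

struct qspinlock_sketch {
    std::atomic<uint32_t> val{0};

    void lock() {
        uint32_t expected = 0;
        // Fast path: the word is completely free, take it with one CAS.
        if (val.compare_exchange_strong(expected, Q_LOCKED,
                                        std::memory_order_acquire))
            return;
        // Anything else (locked, pending, queued) is slow-path work. As the
        // thread points out, a pending-bit holder cannot simply return after
        // seeing old == Q_LOCKED: unlock never clears the pending bit, so the
        // holder must still clear pending and set locked itself.
        slowpath();
    }

    void unlock() {
        // Unlock clears only the locked bit and never touches pending.
        val.fetch_and(~Q_LOCKED, std::memory_order_release);
    }

private:
    void slowpath() {
        // Grossly simplified stand-in for the queued slow path: spin until
        // the whole word is free, then retry the CAS.
        uint32_t expected = 0;
        while (!val.compare_exchange_weak(expected, Q_LOCKED,
                                          std::memory_order_acquire))
            expected = 0;
    }
};
```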
2020 Jun 16
2
[PATCH v4 0/3] mm, treewide: Rename kzfree() to kfree_sensitive()
...ion on what is best. It will be > > introduced as a separate patch later on after this one is merged. > > To this larger audience and last week without reply: > https://lore.kernel.org/lkml/573b3fbd5927c643920e1364230c296b23e7584d.camel at perches.com/ > > Are there _any_ fastpath uses of kfree or vfree? I'd consider kfree performance critical for cases where it is called under locks. If possible the kfree is moved outside of the critical section, but we have rbtrees or lists that get deleted under locks and restructuring the code to do eg. splice and free it outside of...
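The point about kfree being performance-critical when called under locks, and about splicing entries out so the free happens outside the critical section, can be sketched generically. A hedged user-space C++ analogue (std::mutex/std::list in place of kernel spinlocks, list heads, and kfree; all names invented):

```cpp
#include <iterator>
#include <list>
#include <memory>
#include <mutex>

struct Entry { int key; /* payload ... */ };

class Table {
    std::mutex lock_;
    std::list<std::unique_ptr<Entry>> entries_;

public:
    void insert(int key) {
        std::lock_guard<std::mutex> g(lock_);
        entries_.push_back(std::make_unique<Entry>(Entry{key}));
    }

    void remove_matching(int key) {
        std::list<std::unique_ptr<Entry>> doomed;
        {
            std::lock_guard<std::mutex> g(lock_);
            // Under the lock: only unlink. Matching nodes are spliced onto a
            // local list instead of being freed here.
            for (auto it = entries_.begin(); it != entries_.end();) {
                auto next = std::next(it);
                if ((*it)->key == key)
                    doomed.splice(doomed.end(), entries_, it);
                it = next;
            }
        }
        // Outside the lock: the actual freeing happens here, keeping the
        // critical section short -- the restructuring the mail describes for
        // kfree called under locks.
        doomed.clear();
    }
};
```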
2011 Dec 02
3
[PATCH] Btrfs: protect orphan block rsv with spin_lock
We've been seeing warnings coming out of the orphan commit stuff forever from ceph. Turns out it's because we're racing with checking if the orphan block reserve is set, because we clear it outside of the spin_lock. So leave the normal fastpath checks where they are, but take the spin_lock and _recheck_ to make sure we haven't had an orphan block rsv added in the meantime. Then clear the root's orphan block rsv and release the lock. With this patch a user said the warnings went away and they usually showed up pretty so...
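The fix described here is the familiar pattern of keeping an unlocked fast-path check and rechecking under the lock before committing. A minimal C++ sketch of that pattern (generic code, not the btrfs implementation; `orphan_block_rsv` and `install_orphan_rsv` are stand-in names):

```cpp
#include <atomic>
#include <mutex>

struct BlockRsv { /* reservation bookkeeping ... */ };

struct Root {
    std::mutex orphan_lock;
    std::atomic<BlockRsv*> orphan_block_rsv{nullptr};

    // Tries to install `candidate`; returns true if installed, false if a
    // reservation was already present (the caller keeps its candidate).
    bool install_orphan_rsv(BlockRsv* candidate) {
        // Unlocked fast path: the common case pays only a load.
        if (orphan_block_rsv.load(std::memory_order_acquire) != nullptr)
            return false;

        std::lock_guard<std::mutex> g(orphan_lock);
        // Recheck under the lock: another thread may have installed a
        // reservation between the unlocked check and acquiring the lock,
        // which is exactly the race the patch closes.
        if (orphan_block_rsv.load(std::memory_order_relaxed) != nullptr)
            return false;

        orphan_block_rsv.store(candidate, std::memory_order_release);
        return true;
    }
};
```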
2015 Feb 06
10
[PATCH] x86 spinlock: Fix memory corruption on completing completions
..._smp()" and released the spinlock for the fast-path, you can't access the spinlock any more. Exactly because a fast-path lock might come in, and release the whole data structure. Linus suggested that we should not do any writes to lock after unlock(), and we can move slowpath clearing to fastpath lock. However it brings additional case to be handled, viz., slowpath still could be set when somebody does arch_trylock. Handle that too by ignoring slowpath flag during lock availability check. Reported-by: Sasha Levin <sasha.levin at oracle.com> Suggested-by: Linus Torvalds <torvalds...
2015 Feb 08
0
[PATCH] x86 spinlock: Fix memory corruption on completing completions
...spinlock any more. Exactly > because a fast-path lock might come in, and release the whole data > structure. Yeah, that's an embarrassingly obvious bug in retrospect. > Linus suggested that we should not do any writes to lock after unlock(), > and we can move slowpath clearing to fastpath lock. Yep, that seems like a sound approach. > However it brings additional case to be handled, viz., slowpath still > could be set when somebody does arch_trylock. Handle that too by ignoring > slowpath flag during lock availability check. > > Reported-by: Sasha Levin <sasha.le...
2001 Jun 28
4
Java wrapper?
Hi, a few months ago it has already been discussed, but without final solution it seems... Is there anybody who is working on a Java JNI wrapper for libvorbis etc.? Greetings k.j. --- >8 ---- List archives: http://www.xiph.org/archives/ Ogg project homepage: http://www.xiph.org/ogg/ To unsubscribe from this list, send a message to 'vorbis-dev-request@xiph.org' containing only
2015 Feb 09
3
[PATCH] x86 spinlock: Fix memory corruption on completing completions
On 02/09/2015 02:44 AM, Jeremy Fitzhardinge wrote: > On 02/06/2015 06:49 AM, Raghavendra K T wrote: [...] > >> Linus suggested that we should not do any writes to lock after unlock(), >> and we can move slowpath clearing to fastpath lock. > > Yep, that seems like a sound approach. Current approach seems to be working now (though we could not avoid read). Related question: Do you think we could avoid SLOWPATH_FLAG itself by checking head and tail difference, or is it costly because it may result in unnecessary unlock_kic...
2015 Feb 24
4
[PATCH for stable] x86/spinlocks/paravirt: Fix memory corruption on unlock
...t-path, you can't access the spinlock any more. Exactly > > because a fast-path lock might come in, and release the whole data > > structure. > > > > Linus suggested that we should not do any writes to lock after unlock(), > > and we can move slowpath clearing to fastpath lock. > > > > So this patch implements the fix with: > > 1. Moving slowpath flag to head (Oleg): > > Unlocked locks don't care about the slowpath flag; therefore we can keep > > it set after the last unlock, and clear it again on the first (try)lock. > > --...
2005 Apr 04
0
problem about initramfs
...96000] Kernel command line: root=/dev/ram0 [4294667.296000] Primary instruction cache 32kB, 4-way, linesize 32 bytes. [4294667.296000] Primary data cache 32kB, 4-way, linesize 32 bytes. [4294667.296000] Synthesized TLB refill handler (24 instructions). [4294667.296000] Synthesized TLB load handler fastpath (36 instructions). [4294667.296000] Synthesized TLB store handler fastpath (31 instructions). [4294667.296000] Synthesized TLB modify handler fastpath (30 instructions). [4294667.296000] PID hash table entries: 1024 (order: 10, 16384 bytes) [4294667.298000] Dentry cache hash table entries: 65536 (...
2020 Aug 13
2
Exceptions and performance
...hanism in LLVM recently, my impression is that using exceptions should instead improve performance (in the common case that no exception is thrown), compared with the traditional approach of returning an error code in every function that can fail: no error-code-checking logic is executed in the fastpath, and error-handling code is moved from the main binary to the exception table, so the CPU is doing less work, and also instruction cache locality should be improved. Is my understanding correct? So my question is: (1) Is the argument that 'exception hurts compiler optimization, and should not...
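The claim, that table-driven (zero-cost) exception handling removes the per-call error check from the non-throwing fast path while error codes pay a branch at every level, can be illustrated with a small hedged C++ comparison. The functions below are made up for illustration; real costs depend on the ABI, the optimizer, and cache behavior, which is exactly what the thread goes on to debate.

```cpp
#include <cstdio>
#include <stdexcept>

// Error-code style: every caller on the fast path executes a check-and-branch
// even when nothing ever fails.
int parse_ec(int x, int* out) {
    if (x < 0) return -1;   // failure is signalled in-band
    *out = x * 2;
    return 0;
}

int pipeline_ec(int x, int* out) {
    int tmp;
    if (parse_ec(x, &tmp) != 0) return -1;   // fast path pays this branch
    *out = tmp + 1;
    return 0;
}

// Exception style: the fast path is straight-line code; the unwinding logic
// lives in out-of-line tables consulted only when a throw actually happens.
int parse_ex(int x) {
    if (x < 0) throw std::invalid_argument("negative");
    return x * 2;
}

int pipeline_ex(int x) {
    return parse_ex(x) + 1;                  // no per-call error check
}

int main() {
    int v = 0;
    if (pipeline_ec(21, &v) == 0) std::printf("ec: %d\n", v);
    try {
        std::printf("ex: %d\n", pipeline_ex(21));
    } catch (const std::exception& e) {
        std::printf("ex failed: %s\n", e.what());
    }
}
```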
2016 Jan 27
2
Skip redundant checks in AliasSet::aliasesUnknownInst
...iasSet. As a result, when we're merging two ASTs (which covered different loops/instructions by definition), we should never see the same instruction twice when merging AliasSets. However, using a set to represent the unknown insts would still be useful. In particular, it would give us a fastpath for determining if a particular unknown instruction was already in an alias set. If we explicitly merged AliasSets from different ASTs (i.e. add all unknown at once to a single AliasSet, and then merge if needed), this would give us a fast way to avoid redundant aliasing checks when looking fo...
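The proposed fast path is just a set-membership test ahead of the pairwise alias queries. A rough C++ sketch of that shape, with `Instruction` and `aliases()` as stand-ins for the LLVM types and AA queries (illustrative, not the actual AliasSetTracker code):

```cpp
#include <unordered_set>
#include <vector>

struct Instruction;  // stand-in for llvm::Instruction

// Placeholder for the real (expensive) pairwise aliasing query.
bool aliases(const Instruction*, const Instruction*) { return false; }

class AliasSetSketch {
    std::vector<const Instruction*> unknown_insts_;
    std::unordered_set<const Instruction*> unknown_lookup_;  // the "fastpath"

public:
    void addUnknownInst(const Instruction* I) {
        if (unknown_lookup_.insert(I).second)  // O(1) duplicate detection
            unknown_insts_.push_back(I);
    }

    bool aliasesUnknownInst(const Instruction* I) const {
        // Fast path: an instruction already recorded in this set trivially
        // aliases it, with no pairwise queries at all.
        if (unknown_lookup_.count(I))
            return true;
        // Slow path: fall back to the expensive per-instruction checks.
        for (const Instruction* U : unknown_insts_)
            if (aliases(I, U))
                return true;
        return false;
    }
};
```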
2014 Jun 17
1
[PATCH 03/11] qspinlock: Add pending bit
...the lock is freed, the pending bit holder will > still have to clear the pending bit and set the lock bit as is done in > the slowpath. We cannot skip the step here. The problem of moving the > pending code here is that it includes a wait loop which we don't want to > put in the fastpath. > > > > And it is a quick path. > > > >>> We are trying to make the fastpath as simple as possible as it may be > >>> inlined. The complexity of the queue spinlock is in the slowpath. > >> Sure, but then it shouldn't be called slowpath a...