search for: __add

Displaying 20 results from an estimated 50 matches for "__add".

2015 Feb 11
3
[PATCH] x86 spinlock: Fix memory corruption on completing completions
On 02/10, Jeremy Fitzhardinge wrote:
>
> On 02/10/2015 05:26 AM, Oleg Nesterov wrote:
> > On 02/10, Raghavendra K T wrote:
> >> Unfortunately xadd could result in head overflow as tail is high.
> >>
> >> The other option was repeated cmpxchg which is bad I believe.
> >> Any suggestions?
> >

Stupid question... what if we simply move SLOWPATH
2015 Feb 11
0
[PATCH] x86 spinlock: Fix memory corruption on completing completions
...vent read-reordering. x86 memory ordering rules state that all writes are seen in a globally consistent order, and are globally ordered wrt reads *on the same addresses*, but reads to different addresses can be reordered wrt to writes.

So, if the unlocking add were not a locked operation:

	__add(&lock->tickets.head, TICKET_LOCK_INC);	/* not locked */
	if (unlikely(lock->tickets.tail & TICKET_SLOWPATH_FLAG))
		__ticket_unlock_slowpath(lock, prev);

Then the read of lock->tickets.tail can be reordered before the unlock, which introduces a race:

	/* read r...
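For context, here is a minimal compilable sketch of the unlock path being discussed. It is not the kernel's code: the type layout, constant values, and the sketch_* names are assumptions made for illustration, and __sync_fetch_and_add merely stands in for the kernel's locked add_smp().

	#include <stdint.h>

	/* Illustrative stand-ins only; field widths and flag values are assumed. */
	typedef uint16_t __ticket_t;
	#define TICKET_LOCK_INC       2
	#define TICKET_SLOWPATH_FLAG  1

	typedef struct {
		struct {
			__ticket_t head;	/* bumped on unlock */
			__ticket_t tail;	/* bumped on lock; may carry the slowpath flag */
		} tickets;
	} sketch_spinlock_t;

	static inline void sketch_spin_unlock(sketch_spinlock_t *lock)
	{
		/*
		 * A locked add is a full memory barrier on x86, so the load of
		 * tickets.tail below cannot be satisfied before the store to
		 * tickets.head becomes globally visible.
		 */
		__sync_fetch_and_add(&lock->tickets.head, TICKET_LOCK_INC);

		/*
		 * Had the line above been the plain, non-atomic __add() variant
		 * quoted in the mail, x86 could reorder this load of a different
		 * address ahead of that store, reopening the race described above.
		 */
		if (lock->tickets.tail & TICKET_SLOWPATH_FLAG) {
			/* __ticket_unlock_slowpath(lock, prev) would kick the waiter here */
		}
	}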
2007 Jul 27
5
File.stub!
Hi, I'm trying to stub File.open and whenever I do, rspec blows up with the following error message (undefined method `add' for nil:NilClass), which seems to happen because method __add in class Proxy is calling #add on global $rspec_mocks, which in turn is nil. Can someone explain what I'm doing wrong, as I can't seem to stub anything out! Here's my code:

	class Foo
	  def Foo.open( name, mode )
	    return name
	  end
	end

	describe Something...
2004 Aug 05
4
newest up2date rpm
..., line 1066, in batchRun
    batch.run()
  File "up2dateBatch.py", line 74, in run
  File "up2dateBatch.py", line 159, in __dryRun
  File "up2date.py", line 395, in dryRun
  File "depSolver.py", line 357, in setup
  File "depSolver.py", line 390, in __add
  File "headers.py", line 37, in __getitem__
  File "headers.py", line 42, in __retrievePackage
  File "rpcServer.py", line 304, in doCall
  File "repoDirector.py", line 31, in getHeader
  File "rpmSource.py", line 210, in getHeader
  File "/us...
2015 Apr 30
0
[PATCH 4/6] x86: introduce new pvops function spin_unlock
...091 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -42,6 +42,10 @@ extern struct static_key paravirt_ticketlocks_enabled;
 
 static __always_inline bool static_key_false(struct static_key *key);
 
+static inline void ___ticket_unlock(arch_spinlock_t *lock)
+{
+	__add(&lock->tickets.head, TICKET_LOCK_INC, UNLOCK_LOCK_PREFIX);
+}
 #ifdef CONFIG_PARAVIRT_SPINLOCKS
 
 static inline void __ticket_enter_slowpath(arch_spinlock_t *lock)
@@ -64,6 +68,11 @@ static inline void __ticket_clear_slowpath(arch_spinlock_t *lock,
 					   __ticket_t head)
 {
 }
+
+static...
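A rough illustration of what this factoring buys (the toy_* and pv_unlock_op names are hypothetical, not the patch's actual pvops wiring): the bare-metal unlock stays one tiny inlined add, while a paravirtualized build can route unlock through a replaceable hook.

	/* Toy model only: real pvops use patched indirect calls, not a plain
	 * function pointer, and the names below are invented for illustration. */
	typedef struct { unsigned short head, tail; } toy_spinlock_t;

	static void toy_native_unlock(toy_spinlock_t *lock)
	{
		lock->head += 2;	/* stand-in for __add(&...head, TICKET_LOCK_INC, ...) */
	}

	/* Hypothetical hook: bare metal leaves it pointing at the native helper;
	 * a hypervisor-aware build could swap in a version that also kicks waiters. */
	static void (*pv_unlock_op)(toy_spinlock_t *lock) = toy_native_unlock;

	static inline void toy_spin_unlock(toy_spinlock_t *lock)
	{
		pv_unlock_op(lock);
	}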
2014 Jun 28
2
[RFC PATCH v2] Implement Batched (group) ticket lock
...prev = *lock;
-		add_smp(&lock->tickets.head, TICKET_LOCK_INC);
+		add_smp(&lock->tickets.head, TICKET_LOCK_UNLOCK_INC);
 
 		/* add_smp() is a full mb() */
 
 		if (unlikely(lock->tickets.tail & TICKET_SLOWPATH_FLAG))
 			__ticket_unlock_slowpath(lock, prev);
 	} else
-		__add(&lock->tickets.head, TICKET_LOCK_INC, UNLOCK_LOCK_PREFIX);
+		__add(&lock->tickets.head, TICKET_LOCK_UNLOCK_INC,
+		      UNLOCK_LOCK_PREFIX);
 }
 
 static inline int arch_spin_is_locked(arch_spinlock_t *lock)
@@ -171,7 +218,7 @@ static inline int arch_spin_is_contended(arch_spinlock_t...
2012 Apr 19
13
[PATCH RFC V7 0/12] Paravirtualized ticketlocks
...tic_key_false(&paravirt_ticketlocks_enabled))) {
		arch_spinlock_t prev;

		prev = *lock;
		add_smp(&lock->tickets.head, TICKET_LOCK_INC);

		/* add_smp() is a full mb() */

		if (unlikely(lock->tickets.tail & TICKET_SLOWPATH_FLAG))
			__ticket_unlock_slowpath(lock, prev);
	} else
		__add(&lock->tickets.head, TICKET_LOCK_INC, UNLOCK_LOCK_PREFIX);

which generates:

	push   %rbp
	mov    %rsp,%rbp

	nop5	# replaced by 5-byte jmp 2f when PV enabled

	# non-PV unlock
	addb   $0x2,(%rdi)

1:	pop    %rbp
	retq

### PV unlock ###
2:	movzwl (%rdi),%esi	# Fetch prev

	lock addb $0x2,...
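A quick way to see the point this cover letter makes about the non-PV fast path is to compile a toy like the one below with gcc -O2 -S: the unlock body collapses to a single addb. The struct layout and increment are assumptions for illustration, not the kernel's definitions; the extra nop5 in the kernel's listing comes from the jump-label site that gets patched to a jmp when paravirt is enabled.

	#include <stdint.h>

	/* Assumed toy layout: 8-bit head/tail, with an increment of 2 so the
	 * low tail bit can serve as a slowpath flag, mirroring the listing above. */
	struct toy_ticketlock {
		uint8_t head;
		uint8_t tail;
	};

	void toy_unlock(struct toy_ticketlock *lock)
	{
		/* With -O2 this compiles to roughly: addb $0x2,(%rdi); ret */
		lock->head += 2;
	}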
2013 Jun 01
11
[PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
...tic_key_false(&paravirt_ticketlocks_enabled))) {
		arch_spinlock_t prev;

		prev = *lock;
		add_smp(&lock->tickets.head, TICKET_LOCK_INC);

		/* add_smp() is a full mb() */

		if (unlikely(lock->tickets.tail & TICKET_SLOWPATH_FLAG))
			__ticket_unlock_slowpath(lock, prev);
	} else
		__add(&lock->tickets.head, TICKET_LOCK_INC, UNLOCK_LOCK_PREFIX);

which generates:

	push   %rbp
	mov    %rsp,%rbp

	nop5	# replaced by 5-byte jmp 2f when PV enabled

	# non-PV unlock
	addb   $0x2,(%rdi)

1:	pop    %rbp
	retq

### PV unlock ###
2:	movzwl (%rdi),%esi	# Fetch prev

	lock addb $0x2,...
2012 Mar 21
15
[PATCH RFC V6 0/11] Paravirtualized ticketlocks
...static_branch(&paravirt_ticketlocks_enabled))) {
		arch_spinlock_t prev;

		prev = *lock;
		add_smp(&lock->tickets.head, TICKET_LOCK_INC);

		/* add_smp() is a full mb() */

		if (unlikely(lock->tickets.tail & TICKET_SLOWPATH_FLAG))
			__ticket_unlock_slowpath(lock, prev);
	} else
		__add(&lock->tickets.head, TICKET_LOCK_INC, UNLOCK_LOCK_PREFIX);

which generates:

	push   %rbp
	mov    %rsp,%rbp

	nop5	# replaced by 5-byte jmp 2f when PV enabled

	# non-PV unlock
	addb   $0x2,(%rdi)

1:	pop    %rbp
	retq

### PV unlock ###
2:	movzwl (%rdi),%esi	# Fetch prev

	lock addb $0x2,...
2015 Feb 08
0
[PATCH] x86 spinlock: Fix memory corruption on completing completions
...in practice.

> +		BUILD_BUG_ON(((__ticket_t)NR_CPUS) != NR_CPUS);
> +		__ticket_unlock_kick(lock, prev_head);

Should be "prev_head + TICKET_LOCK_INC" to match the previous code, otherwise it won't find the CPU waiting for the new head.

    J

> +	}
>  	} else
>  		__add(&lock->tickets.head, TICKET_LOCK_INC, UNLOCK_LOCK_PREFIX);
>  }
> @@ -164,7 +162,7 @@ static inline int arch_spin_is_locked(arch_spinlock_t *lock)
>  {
>  	struct __raw_tickets tmp = READ_ONCE(lock->tickets);
> 
> -	return tmp.tail != tmp.head;
> +	return (tmp.tail &...
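To spell out the reviewer's arithmetic (the numbers below are made up for illustration): waiters spin until lock->tickets.head equals their own ticket, so after the unlocking add the waiter that may now run holds prev_head + TICKET_LOCK_INC, and that is the ticket value the kick has to name.

	#include <stdio.h>

	#define TICKET_LOCK_INC 2	/* assumed increment, as in the pv ticketlock code */

	int main(void)
	{
		unsigned prev_head = 6;					/* head sampled before the unlock */
		unsigned new_head  = prev_head + TICKET_LOCK_INC;	/* head after the unlock */

		/* The next-in-line waiter holds ticket new_head; kicking prev_head
		 * would name the ticket of the CPU that just released the lock. */
		printf("kick the CPU waiting for ticket %u, not %u\n", new_head, prev_head);
		return 0;
	}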
2015 Apr 30
12
[PATCH 0/6] x86: reduce paravirtualized spinlock overhead
Paravirtualized spinlocks produce some overhead even if the kernel is running on bare metal. The main reason is the more complex locking and unlocking functions. Unlocking in particular is no longer just one instruction, but complex enough that it is no longer inlined. This patch series addresses this issue by adding two more pvops functions to reduce the size of the inlined spinlock functions. When