Displaying 6 results from an estimated 6 matches for "__constant_le16_to_cpu".
2014 Apr 17
2
[PATCH v9 05/19] qspinlock: Optimize for smaller NR_CPUS
...e of the queue spinlock 32-bit word
> + *
> + * *,1,0 -> *,0,1
> + */
> +static __always_inline void
> +clear_pending_set_locked(struct qspinlock *lock, u32 val)
> +{
> + struct __qspinlock *l = (void *)lock;
> +
> + ACCESS_ONCE(l->locked_pending) = 1;
You lost the __constant_le16_to_cpu(_Q_LOCKED_VAL) there. The
unconditional 1 is wrong. You also have to flip the bytes in
locked_pending.
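For context, the layout this exchange revolves around pairs the 32-bit lock word with byte and halfword overlays. A sketch reconstructed from the thread (the exact #ifdef arrangement here is an assumption, not a quote from the patch), showing why the fields must sit at different offsets per endianness so that locked_pending always covers bits 0-15 of the word:

	/* Reconstructed sketch of the overlay, not verbatim patch code. */
	struct __qspinlock {
		union {
			atomic_t val;			/* whole 32-bit lock word */
	#ifdef __LITTLE_ENDIAN
			struct {
				u8	locked;		/* bits 0-7  */
				u8	pending;	/* bits 8-15 */
			};
			struct {
				u16	locked_pending;	/* bits 0-15  */
				u16	tail;		/* bits 16-31 */
			};
	#else
			struct {
				u16	tail;		/* bits 16-31 */
				u16	locked_pending;	/* bits 0-15  */
			};
			struct {
				u8	reserved[2];
				u8	pending;	/* bits 8-15 */
				u8	locked;		/* bits 0-7  */
			};
	#endif
		};
	};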
2014 Apr 18
2
[PATCH v9 05/19] qspinlock: Optimize for smaller NR_CPUS
...*,0,1
> >>+ */
> >>+static __always_inline void
> >>+clear_pending_set_locked(struct qspinlock *lock, u32 val)
> >>+{
> >>+ struct __qspinlock *l = (void *)lock;
> >>+
> >>+ ACCESS_ONCE(l->locked_pending) = 1;
> >You lost the __constant_le16_to_cpu(_Q_LOCKED_VAL) there. The
> >unconditional 1 is wrong. You also have to flip the bytes in
> >locked_pending.
>
> I don't think that is wrong. The lock byte is in the least significant 8
> bits and the pending byte is the next higher significant 8 bits irrespective
> of...
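The resolution the author is arguing for: with the overlays placed per-endianness as sketched above, the *,1,0 -> *,0,1 transition needs no byte swap at all. A minimal sketch of that version (assuming _Q_LOCKED_VAL == 1; close to, but not quoted from, the final patch):

	static __always_inline void
	clear_pending_set_locked(struct qspinlock *lock, u32 val)
	{
		struct __qspinlock *l = (void *)lock;

		/*
		 * One native-endian halfword store: locked = _Q_LOCKED_VAL,
		 * pending = 0. The struct layout, not a byte swap, is what
		 * lands the store on bits 0-15 for both byte orders.
		 */
		ACCESS_ONCE(l->locked_pending) = _Q_LOCKED_VAL;
	}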
2014 Apr 17
0
[PATCH v9 05/19] qspinlock: Optimize for smaller NR_CPUS
...> + *
>> + * *,1,0 -> *,0,1
>> + */
>> +static __always_inline void
>> +clear_pending_set_locked(struct qspinlock *lock, u32 val)
>> +{
>> + struct __qspinlock *l = (void *)lock;
>> +
>> + ACCESS_ONCE(l->locked_pending) = 1;
> You lost the __constant_le16_to_cpu(_Q_LOCKED_VAL) there. The
> unconditional 1 is wrong. You also have to flip the bytes in
> locked_pending.
I don't think that is wrong. The lock byte is in the least significant 8
bits and the pending byte is the next higher significant 8 bits
irrespective of the endian-ness. So a valu...
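That claim is easy to check outside the kernel. A hypothetical user-space test (this harness is illustrative, not from the thread): place a u16 store at the endian-appropriate offset inside a u32 and confirm it yields locked = 1, pending = 0 on either byte order.

	#include <stdint.h>
	#include <stdio.h>
	#include <string.h>

	int main(void)
	{
		uint32_t val = 0x00000100;	/* starting state *,1,0: pending set */
		uint16_t locked_pending = 1;	/* native-endian halfword store */
		union { uint32_t u; uint8_t b[4]; } probe = { .u = 1 };
		/* bits 0-15 sit in the low-address half on LE, high-address on BE */
		size_t off = probe.b[0] ? 0 : 2;

		memcpy((uint8_t *)&val + off, &locked_pending, sizeof(locked_pending));
		printf("lock word after store: 0x%08x\n", val);	/* expect 0x00000001 */
		return 0;
	}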
2014 Apr 18
0
[PATCH v9 05/19] qspinlock: Optimize for smaller NR_CPUS
...>>>> +static __always_inline void
>>>> +clear_pending_set_locked(struct qspinlock *lock, u32 val)
>>>> +{
>>>> + struct __qspinlock *l = (void *)lock;
>>>> +
>>>> + ACCESS_ONCE(l->locked_pending) = 1;
>>> You lost the __constant_le16_to_cpu(_Q_LOCKED_VAL) there. The
>>> unconditional 1 is wrong. You also have to flip the bytes in
>>> locked_pending.
>> I don't think that is wrong. The lock byte is in the least significant 8
>> bits and the pending byte is the next higher significant 8 bits irrespectiv...
2012 Nov 06
0
[ablock84-btrfs:btrfs-far 19/20] fs/far/far-path.c:42:2: error: implicit declaration of function 'IS_ERR'
...from include/linux/preempt.h:9,
from include/linux/spinlock.h:50,
from include/linux/vmalloc.h:4,
from fs/far/far-mem.h:23,
from fs/far/far-attr.c:21:
include/uapi/linux/byteorder/big_endian.h:23:0: warning: "__constant_le16_to_cpu" redefined [enabled by default]
In file included from include/linux/byteorder/little_endian.h:4:0,
from fs/far/far-attr.h:30,
from fs/far/far-attr.c:19:
include/uapi/linux/byteorder/little_endian.h:23:0: note: this is the location of the previous definition
In...
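The redefinition warning in this hit is the classic symptom of including one of the uapi byteorder headers directly (here via fs/far/far-attr.h:30) while the arch's own chain of includes pulls in the other one. The usual fix, sketched as a diff (the original include line is an assumption based on the log above):

	--- a/fs/far/far-attr.h
	+++ b/fs/far/far-attr.h
	-#include <linux/byteorder/little_endian.h>	/* defines __constant_le16_to_cpu itself */
	+#include <asm/byteorder.h>	/* selects big_endian.h or little_endian.h per arch */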
2014 Apr 17
33
[PATCH v9 00/19] qspinlock: a 4-byte queue spinlock with PV support
v8->v9:
- Integrate PeterZ's version of the queue spinlock patch with some
modification:
http://lkml.kernel.org/r/20140310154236.038181843@infradead.org
- Break the more complex patches into smaller ones to ease review effort.
- Fix a race condition in the PV qspinlock code.
v7->v8:
- Remove one unneeded atomic operation from the slowpath, thus improving