Displaying 20 results from an estimated 27 matches for "load_acquire".
2016 Jan 10
3
[PATCH v2 1/3] checkpatch.pl: add missing memory barriers
...mment.
> - if ($line =~ /\b(mb|rmb|wmb|read_barrier_depends|smp_mb|smp_rmb|smp_wmb|smp_read_barrier_depends)\(/) {
> +
> + my $barriers = qr{
> + mb|
> + rmb|
> + wmb|
> + read_barrier_depends
> + }x;
> + my $smp_barriers = qr{
> + store_release|
> + load_acquire|
> + store_mb|
> + ($barriers)
> + }x;
If I use a variable called $smp_barriers, I'd expect
it to actually contain the smp_ barriers, not something
I have to prefix with smp_ before using it.
my $smp_barriers = qr{
smp_store_release|
smp_load_acquire|
smp_store_mb|
smp_read_ba...
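For context on the naming discussion above: the plain mb()/rmb()/wmb()/read_barrier_depends() family exists both with and without the smp_ prefix, while smp_store_release(), smp_load_acquire() and smp_store_mb() exist only in the smp_ spelling. Below is a rough, kernel-context sketch (the example_* variables are made up, not from any patch in this thread) of the call families the combined pattern ultimately has to match, each carrying the kind of comment checkpatch wants:

#include <linux/compiler.h>
#include <asm/barrier.h>

static int example_data;
static int example_flag;

static void example_barrier_usage(void)
{
	example_data = 1;
	/* Order the data store before the flag store for all observers,
	 * including devices (mandatory barrier). */
	wmb();
	WRITE_ONCE(example_flag, 1);

	if (READ_ONCE(example_flag)) {
		/* Pairs with the wmb() above; reduces to a compiler
		 * barrier on !SMP builds. */
		smp_rmb();
		example_data = 2;
	}

	/* Publish example_data before the flag becomes visible. */
	smp_store_release(&example_flag, 1);

	/* See everything published before example_flag was set. */
	if (smp_load_acquire(&example_flag))
		example_data = 3;

	/* Store the flag, then a full barrier before later accesses. */
	smp_store_mb(example_flag, 0);
}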
2016 Jan 10
5
[PATCH v2 0/3] checkpatch: handling of memory barriers
As part of memory barrier cleanup, this patchset
extends checkpatch to make it easier to stop
incorrect memory barrier usage.
This applies on top of my series
arch: barrier cleanup + barriers for virt
and will be included in the next version of the series.
Changes from v1:
catch optional\s* before () in barriers
rewrite using qr{} instead of map
Michael S. Tsirkin (3):
checkpatch.pl: add
2015 Dec 17
2
[PATCH] virtio: use smp_load_acquire/smp_store_release
virtio ring entries have exactly the acquire/release
semantics:
- reading used index acquires a ring entry from host
- updating the available index releases it to host
Thus when using weak barriers and building for SMP (as most people
do), smp_load_acquire and smp_store_release will do exactly
the right thing to synchronize with the host.
In fact, QEMU already uses __atomic_thread_fence(__ATOMIC_ACQUIRE) and
__atomic_thread_fence(__ATOMIC_RELEASE);
Documentation/circular-buffers.txt suggests smp_load_acquire and
smp_store_release for head and tail u...
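As a rough sketch of the mapping described above (this is not the actual virtio_ring.c code; the structures and names below are simplified stand-ins, and the real ring uses __virtio16 fields that may also need byte swapping), the available index is published with a release store and the used index is read with an acquire load:

#include <linux/types.h>
#include <asm/barrier.h>

/* Simplified stand-ins for the real vring structures. */
struct example_avail { u16 idx; };
struct example_used  { u16 idx; };

/* Driver side: all stores filling in the descriptor must be visible
 * before the host can observe the new available index. */
static void example_publish_avail(struct example_avail *avail, u16 new_idx)
{
	smp_store_release(&avail->idx, new_idx);
}

/* Driver side: reads of the used ring entry that follow must not be
 * reordered before the load of the index written by the host. */
static u16 example_read_used_idx(struct example_used *used)
{
	return smp_load_acquire(&used->idx);
}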
2016 Jan 13
3
[PULL] virtio: barrier rework+fixes
...---------------------------------------------------------------
Yes I know that the patch by Davidlohr Bueso has a typo in the subject :(
Davidlohr Bueso (1):
lcoking/barriers, arch: Use smp barriers in smp_store_release()
Michael S. Tsirkin (40):
asm-generic: guard smp_store_release/load_acquire
ia64: rename nop->iosapic_nop
ia64: reuse asm-generic/barrier.h
powerpc: reuse asm-generic/barrier.h
s390: reuse asm-generic/barrier.h
sparc: reuse asm-generic/barrier.h
arm: reuse asm-generic/barrier.h
arm64: reuse asm-generic/barrier.h
metag: reu...
2015 Dec 17
0
[PATCH] virtio: use smp_load_acquire/smp_store_release
On Thu, Dec 17, 2015 at 12:29:03PM +0200, Michael S. Tsirkin wrote:
> +static inline __virtio16 virtio_load_acquire(bool weak_barriers, __virtio16 *p)
> +{
> + if (!weak_barriers) {
> + rmb();
> + return READ_ONCE(*p);
> + }
> +#ifdef CONFIG_SMP
> + return smp_load_acquire(p);
> +#else
> + dma_rmb();
> + return READ_ONCE(*p);
> +#endif
> +}
This too is wrong. Look for examp...
2016 Jan 10
0
[PATCH v2 1/3] checkpatch.pl: add missing memory barriers
...{
}
}
# check for memory barriers without a comment.
- if ($line =~ /\b(mb|rmb|wmb|read_barrier_depends|smp_mb|smp_rmb|smp_wmb|smp_read_barrier_depends)\(/) {
+
+ my $barriers = qr{
+ mb|
+ rmb|
+ wmb|
+ read_barrier_depends
+ }x;
+ my $smp_barriers = qr{
+ store_release|
+ load_acquire|
+ store_mb|
+ ($barriers)
+ }x;
+ my $all_barriers = qr{
+ $barriers|
+ smp_($smp_barriers)
+ }x;
+
+ if ($line =~ /\b($all_barriers)\s*\(/) {
if (!ctx_has_comment($first_line, $linenr)) {
WARN("MEMORY_BARRIER",
"memory barrier without comment\n"...
2016 Jan 10
4
[PATCH v3 0/3] checkpatch: handling of memory barriers
As part of memory barrier cleanup, this patchset
extends checkpatch to make it easier to stop
incorrect memory barrier usage.
This replaces the checkpatch patches in my series
arch: barrier cleanup + barriers for virt
and will be included in the next version of the series.
changes from v2:
address comments by Joe Perches:
use (?: ... ) to avoid unnecessary capture groups
rename smp_barriers
2016 Jan 18
0
virtio pull for 4.5 (was Re: [PULL] virtio: barrier rework+fixes)
...-----------------------
>
> Yes I know that the patch by Davidlohr Bueso has a typo in the subject :(
>
> Davidlohr Bueso (1):
> lcoking/barriers, arch: Use smp barriers in smp_store_release()
>
> Michael S. Tsirkin (40):
> asm-generic: guard smp_store_release/load_acquire
> ia64: rename nop->iosapic_nop
> ia64: reuse asm-generic/barrier.h
> powerpc: reuse asm-generic/barrier.h
> s390: reuse asm-generic/barrier.h
> sparc: reuse asm-generic/barrier.h
> arm: reuse asm-generic/barrier.h
> arm64: reuse asm...
2016 Jan 11
6
[PATCH v4 0/3] checkpatch: handling of memory barriers
As part of memory barrier cleanup, this patchset
extends checkpatch to make it easier to stop
incorrect memory barrier usage.
This replaces the checkpatch patches in my series
arch: barrier cleanup + barriers for virt
and will be included in the pull request including
the series.
changes from v3:
rename smp_barrier_stems to barrier_stems
as suggested by Julian Calaby.
add (?: ... ) around a
2019 Jul 31
0
[PATCH V2 7/9] vhost: do not use RCU to synchronize MMU notifier with worker
...en vq tries to start or finish
using the map. This means that when it is even, we're sure there are no
readers and the MMU notifier is synchronized. When it is odd, there is
a reader and we need to wait for it to become even again before we are
synchronized. To avoid a full memory barrier, store_release +
load_acquire on the counter is used.
Considering the read critical section is pretty small, the
synchronization should be done very fast.
Note the patch leads to about a 3% PPS drop.
Reported-by: Michael S. Tsirkin <mst at redhat.com>
Fixes: 7f466032dc9e ("vhost: access vq metadata through kernel virtua...
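A minimal sketch of the kind of even/odd counter described above, assuming a single worker updates the counter; this is illustrative only, not the actual vhost code, and every name below is invented. The exit path and the waiter use the release/acquire pair from the commit message; note that the entry path still needs something stronger, which is exactly the kind of subtlety questioned later in the thread:

#include <linux/compiler.h>
#include <asm/barrier.h>
#include <asm/processor.h>

/* Illustrative only: an even/odd counter guarding a mapping that the
 * worker reads and that an invalidator must wait to become unused. */
struct example_map_ref {
	int counter;	/* even: no reader inside; odd: reader active */
};

/* Worker: enter the read-side section (counter becomes odd). */
static void example_use_begin(struct example_map_ref *r)
{
	WRITE_ONCE(r->counter, r->counter + 1);
	/* A release store here would not stop later map reads from being
	 * reordered before the counter update, so the entry side needs a
	 * full barrier (or equivalent) to really exclude the invalidator. */
	smp_mb();
}

/* Worker: leave the section (counter becomes even again).  The release
 * orders all map accesses above before the even value becomes visible. */
static void example_use_end(struct example_map_ref *r)
{
	smp_store_release(&r->counter, r->counter + 1);
}

/* Invalidator: wait until no worker is inside.  The acquire load orders
 * the teardown that follows after the observed even counter value. */
static void example_wait_users(struct example_map_ref *r)
{
	while (smp_load_acquire(&r->counter) & 1)
		cpu_relax();
}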
2019 Jul 31
2
[PATCH V2 7/9] vhost: do not use RCU to synchronize MMU notifier with worker
...ish
> using the map. This means that when it is even, we're sure there are no
> readers and the MMU notifier is synchronized. When it is odd, there is
> a reader and we need to wait for it to become even again before we are
> synchronized. To avoid a full memory barrier, store_release +
> load_acquire on the counter is used.
Unfortunately this needs a lot of review and testing, so this can't make
rc2, and I don't think this is the kind of patch I can merge after rc3.
Subtle memory barrier tricks like this can introduce new bugs while they
are fixing old ones.
>
> Consider th...
2019 Jul 31
14
[PATCH V2 0/9] Fixes for metadata accelreation
Hi all:
This series tries to fix several issues introduced by the metadata
acceleration series. Please review.
Changes from V1:
- Try not to use RCU to synchronize MMU notifier with vhost worker
- set dirty pages after no readers
- return -EAGAIN only when we find the range overlaps with
metadata
Jason Wang (9):
vhost: don't set uaddr for invalid address
vhost: validate MMU notifier
2019 Aug 03
1
[PATCH V2 7/9] vhost: do not use RCU to synchronize MMU notifier with worker
...hen it is even, we're sure there are no
> >> readers and the MMU notifier is synchronized. When it is odd, there is
> >> a reader and we need to wait for it to become even again before we are
> >> synchronized. To avoid a full memory barrier, store_release +
> >> load_acquire on the counter is used.
> >
> > Unfortunately this needs a lot of review and testing, so this can't make
> > rc2, and I don't think this is the kind of patch I can merge after rc3.
> > Subtle memory barrier tricks like this can introduce new bugs while they
> >...