search for: store_release

Displaying 20 results from an estimated 22 matches for "store_release".

2016 Jan 10
3
[PATCH v2 1/3] checkpatch.pl: add missing memory barriers
...ry barriers without a comment.
> -	if ($line =~ /\b(mb|rmb|wmb|read_barrier_depends|smp_mb|smp_rmb|smp_wmb|smp_read_barrier_depends)\(/) {
> +
> +	my $barriers = qr{
> +		mb|
> +		rmb|
> +		wmb|
> +		read_barrier_depends
> +	}x;
> +	my $smp_barriers = qr{
> +		store_release|
> +		load_acquire|
> +		store_mb|
> +		($barriers)
> +	}x;

If I use a variable called $smp_barriers, I'd expect it to actually be the
smp_barriers, not to have to prefix it with smp_ before using it.

	my $smp_barriers = qr{
		smp_store_release|
		smp_load_acquire|
		smp_s...
2016 Jan 10
5
[PATCH v2 0/3] checkpatch: handling of memory barriers
As part of the memory barrier cleanup, this patchset extends checkpatch to make it easier to stop incorrect memory barrier usage. This applies on top of my series "arch: barrier cleanup + barriers for virt" and will be included in the next version of the series. Changes from v2: catch an optional \s* before () in barriers; rewrite using qr{} instead of map. Michael S. Tsirkin (3): checkpatch.pl: add
2017 Dec 05
2
[PATCH tip/core/rcu 21/21] drivers/vhost: Remove now-redundant read_barrier_depends()
> ...READ_ONCE and be sure the value
> is sane, but only if I also remember to put in smp_wmb before
> WRITE_ONCE. Otherwise the pointer is ok but no guarantees
> about the data pointed to.

That was already the case on everything except Alpha. And the canonical
match for the data dependency is store_release, not wmb.
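To make the point concrete, here is a minimal sketch of the publication pattern being discussed, written with kernel-style primitives; struct item, global_ptr and the function names are illustrative, not code taken from the thread. The writer can either order the initialization with an explicit smp_wmb() before the pointer store, or use smp_store_release(), which pairs naturally with the reader's READ_ONCE() plus address dependency.

#include <linux/compiler.h>     /* READ_ONCE()/WRITE_ONCE() */
#include <asm/barrier.h>        /* smp_wmb(), smp_store_release() */

struct item {
        int payload;
};

static struct item *global_ptr;

/* Writer, spelled with an explicit write barrier before the store... */
static void publish_with_wmb(struct item *it)
{
        it->payload = 42;
        smp_wmb();              /* order the init before the pointer store */
        WRITE_ONCE(global_ptr, it);
}

/* ...or with the release store suggested above as the canonical counterpart. */
static void publish_with_release(struct item *it)
{
        it->payload = 42;
        smp_store_release(&global_ptr, it);
}

/* Reader: READ_ONCE() plus the address dependency is enough to see an
 * initialized payload on every architecture, Alpha included, now that
 * READ_ONCE() covers the old read_barrier_depends() case. */
static int consume(void)
{
        struct item *it = READ_ONCE(global_ptr);

        return it ? it->payload : -1;
}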
2016 Jan 10
0
[PATCH v2 1/3] checkpatch.pl: add missing memory barriers
...6,25 @@ sub process {
 		}
 	}
 # check for memory barriers without a comment.
-	if ($line =~ /\b(mb|rmb|wmb|read_barrier_depends|smp_mb|smp_rmb|smp_wmb|smp_read_barrier_depends)\(/) {
+
+	my $barriers = qr{
+		mb|
+		rmb|
+		wmb|
+		read_barrier_depends
+	}x;
+	my $smp_barriers = qr{
+		store_release|
+		load_acquire|
+		store_mb|
+		($barriers)
+	}x;
+	my $all_barriers = qr{
+		$barriers|
+		smp_($smp_barriers)
+	}x;
+
+	if ($line =~ /\b($all_barriers)\s*\(/) {
 		if (!ctx_has_comment($first_line, $linenr)) {
 			WARN("MEMORY_BARRIER",
 			     "memory barrier withou...
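For readers following the checkpatch side, a rough C illustration of what the extended $all_barriers pattern is after (demo_vq and used_idx are made-up names, and this is only a guess at the check's behaviour, not a test case from the patch): the first smp_store_release() below has no comment near it and should now draw the MEMORY_BARRIER warning, while the second should pass because ctx_has_comment() finds the adjacent comment.

#include <linux/compiler.h>
#include <asm/barrier.h>        /* smp_store_release() */

struct demo_vq {
        unsigned short used_idx;
};

static void demo_publish_bare(struct demo_vq *vq, unsigned short idx)
{
        smp_store_release(&vq->used_idx, idx);
}

static void demo_publish_commented(struct demo_vq *vq, unsigned short idx)
{
        /* Make the ring contents visible before publishing the new index. */
        smp_store_release(&vq->used_idx, idx);
}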
2017 Dec 05
0
[PATCH tip/core/rcu 21/21] drivers/vhost: Remove now-redundant read_barrier_depends()
> > ...ue
> > is sane, but only if I also remember to put in smp_wmb before
> > WRITE_ONCE. Otherwise the pointer is ok but no guarantees
> > about the data pointed to.
>
> That was already the case on everything except Alpha. And the canonical
> match for the data dependency is store_release, not wmb.

Oh, interesting

static __always_inline void __write_once_size(volatile void *p, void *res, int size)
{
	switch (size) {
	case 1: *(volatile __u8 *)p = *(__u8 *)res; break;
	case 2: *(volatile __u16 *)p = *(__u16 *)res; break;
	case 4: *(volatile __u32 *)p = *...
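The snippet above is the point: __write_once_size() is nothing more than a volatile store of the right width, so WRITE_ONCE() by itself orders nothing. A rough conceptual model of the relationship, not the kernel's actual macro definition, might look like this:

#include <linux/compiler.h>     /* WRITE_ONCE() */
#include <asm/barrier.h>        /* smp_mb() */

/* Rough conceptual model only -- not the kernel's actual definition of
 * smp_store_release().  WRITE_ONCE() is just a single volatile store;
 * any ordering against earlier accesses has to come from an explicit
 * barrier, or from a native release-store instruction on architectures
 * that have one. */
#define demo_store_release(p, v)                \
do {                                            \
        smp_mb();                               \
        WRITE_ONCE(*(p), (v));                  \
} while (0)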
2017 Dec 01
2
[PATCH tip/core/rcu 21/21] drivers/vhost: Remove now-redundant read_barrier_depends()
Because READ_ONCE() now implies read_barrier_depends(), the read_barrier_depends() in next_desc() is now redundant. This commit therefore removes it and the related comments. Signed-off-by: Paul E. McKenney <paulmck at linux.vnet.ibm.com> Cc: "Michael S. Tsirkin" <mst at redhat.com> Cc: Jason Wang <jasowang at redhat.com> Cc: <kvm at vger.kernel.org> Cc:
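An illustrative before/after of the kind of simplification this enables; the list-walking code and names below are made up for the example, not the actual vhost next_desc():

#include <linux/compiler.h>     /* READ_ONCE() */
#include <asm/barrier.h>        /* read_barrier_depends() */

struct node {
        int data;
        struct node *next;
};

/* Before: the dependency barrier had to be spelled out for Alpha. */
static int walk_old(struct node *n)
{
        struct node *next = READ_ONCE(n->next);

        read_barrier_depends();
        return next ? next->data : -1;
}

/* After: READ_ONCE() already orders the dependent dereference, so the
 * explicit barrier and its comment can simply be dropped. */
static int walk_new(struct node *n)
{
        struct node *next = READ_ONCE(n->next);

        return next ? next->data : -1;
}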
2016 Jan 10
4
[PATCH v3 0/3] checkpatch: handling of memory barriers
As part of the memory barrier cleanup, this patchset extends checkpatch to make it easier to stop incorrect memory barrier usage. This replaces the checkpatch patches in my series "arch: barrier cleanup + barriers for virt" and will be included in the next version of the series. Changes from v2: address comments by Joe Perches: use (?: ... ) to avoid unnecessary capture groups; rename smp_barriers
2016 Jan 11
6
[PATCH v4 0/3] checkpatch: handling of memory barriers
As part of the memory barrier cleanup, this patchset extends checkpatch to make it easier to stop incorrect memory barrier usage. This replaces the checkpatch patches in my series "arch: barrier cleanup + barriers for virt" and will be included in the pull request that includes the series. Changes from v3: rename smp_barrier_stems to barrier_stems, as suggested by Julian Calaby; add (?: ... ) around a
2019 Jul 31
0
[PATCH V2 7/9] vhost: do not use RCU to synchronize MMU notifier with worker
...was increased when the vq tries to start or finish using the map. This means that when it is even, we're sure there are no readers and the MMU notifier is synchronized. When it is odd, there is a reader, and we need to wait for it to become even again before we are synchronized. To avoid a full memory barrier, store_release + load_acquire on the counter is used. Considering that the read critical section is pretty small, the synchronization should be very fast. Note that the patch leads to about a 3% PPS drop. Reported-by: Michael S. Tsirkin <mst at redhat.com> Fixes: 7f466032dc9e ("vhost: access vq metadata throug...
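A minimal sketch of the counter scheme as described, with made-up names; it is not the actual vhost code, and it deliberately uses a full barrier on the enter side, since making the "counter is now odd" store visible before the map is dereferenced is exactly the part the follow-up review calls subtle. The release/acquire pair covers the exit/wait side:

#include <linux/compiler.h>     /* READ_ONCE()/WRITE_ONCE() */
#include <asm/barrier.h>        /* smp_mb(), smp_store_release(), smp_load_acquire() */
#include <asm/processor.h>      /* cpu_relax() */

struct vq_map_state {
        unsigned long counter;  /* even: no reader inside; odd: the map is in use */
        void *map;
};

/* Worker side: bracket every use of the map.  Only the worker writes
 * the counter, so the plain reads of it below are fine. */
static void map_access_begin(struct vq_map_state *s)
{
        WRITE_ONCE(s->counter, s->counter + 1); /* now odd */
        /* The store above must be visible before the map is dereferenced;
         * this sketch uses a full barrier as the conservative choice. */
        smp_mb();
}

static void map_access_end(struct vq_map_state *s)
{
        /* Release: all map accesses above happen before the counter is
         * seen even again. */
        smp_store_release(&s->counter, s->counter + 1); /* now even */
}

/* MMU-notifier side: wait until any reader of the old map is done. */
static void wait_for_map_readers(struct vq_map_state *s)
{
        /* Acquire pairs with the release in map_access_end(): once an
         * even value is observed, that reader's accesses are complete. */
        while (smp_load_acquire(&s->counter) & 1)
                cpu_relax();
}

The real patch also has to make the invalidation itself visible before this wait, and re-check on the reader side; the sketch only shows the counter handshake.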
2019 Jul 31
2
[PATCH V2 7/9] vhost: do not use RCU to synchronize MMU notifier with worker
> ...q try to start or finish
> uses the map. This means, when it was even, we're sure there's no
> readers and MMU notifier is synchronized. When it was odd, it means
> there's a reader we need to wait it to be even again then we are
> synchronized. To avoid full memory barrier, store_release +
> load_acquire on the counter is used.

Unfortunately this needs a lot of review and testing, so this can't make
rc2, and I don't think this is the kind of patch I can merge after rc3.
Subtle memory barrier tricks like this can introduce new bugs while they
are fixing old ones. ...
2019 Jul 31
14
[PATCH V2 0/9] Fixes for metadata acceleration
Hi all: This series tries to fix several issues introduced by the metadata acceleration series. Please review. Changes from V1:
- Try not to use RCU to synchronize the MMU notifier with the vhost worker
- Set dirty pages after there are no readers
- Return -EAGAIN only when we find the range overlaps with the metadata
Jason Wang (9):
  vhost: don't set uaddr for invalid address
  vhost: validate MMU notifier
2019 Aug 03
1
[PATCH V2 7/9] vhost: do not use RCU to synchronize MMU notifier with worker
> >> ... uses the map. This means, when it was even, we're sure there's no
> >> readers and MMU notifier is synchronized. When it was odd, it means
> >> there's a reader we need to wait it to be even again then we are
> >> synchronized. To avoid full memory barrier, store_release +
> >> load_acquire on the counter is used.
> >
> > Unfortunately this needs a lot of review and testing, so this can't make
> > rc2, and I don't think this is the kind of patch I can merge after rc3.
> > Subtle memory barrier tricks like this can introduce...
2019 Aug 01
0
[PATCH V2 7/9] vhost: do not use RCU to synchronize MMU notifier with worker
>> ...r finish
>> uses the map. This means, when it was even, we're sure there's no
>> readers and MMU notifier is synchronized. When it was odd, it means
>> there's a reader we need to wait it to be even again then we are
>> synchronized. To avoid full memory barrier, store_release +
>> load_acquire on the counter is used.
>
> Unfortunately this needs a lot of review and testing, so this can't make
> rc2, and I don't think this is the kind of patch I can merge after rc3.
> Subtle memory barrier tricks like this can introduce new bugs while they
>...