search for: seqlocks

Displaying 20 results from an estimated 133 matches for "seqlocks".

2019 Jul 31
2
[PATCH V2 7/9] vhost: do not use RCU to synchronize MMU notifier with worker
...> write_seqcount_end() > > > There's no rmb() in write_seqcount_begin(), so map could be read before > write_seqcount_begin(), but it looks to me now that this doesn't harm at > all, maybe we can try this way. That is because it is a write side lock, not a read lock. IIRC seqlocks have weaker barriers because the write side needs to be serialized in some other way. The requirement I see is you need invalidate_range_start to block until another thread exits its critical section (ie stops accessing the SPTEs). That is a spinlock/mutex. You just can't invent a faste...
2019 Aug 01
3
[PATCH V2 7/9] vhost: do not use RCU to synchronize MMU notifier with worker
...you could have done this with a RCU technique instead of register/unregister. > Sync between 2) and 4): For mapping [1], both are readers, no need any > synchronization. For mapping [2], synchronize through RCU (or something > similar to seqlock). You can't really use a seqlock, seqlocks are collision-retry locks, and the semantic here is that invalidate_range_start *MUST* not continue until thread doing #4 above is guaranteed no longer touching the memory. This must be a proper barrier, like a spinlock, mutex, or synchronize_rcu. And, again, you can't re-invent a spinlock wi...
2019 Aug 02
0
[PATCH V2 7/9] vhost: do not use RCU to synchronize MMU notifier with worker
...the synchronization between MMU notifier and vhost worker. > >> Sync between 2) and 4): For mapping [1], both are readers, no need any >> synchronization. For mapping [2], synchronize through RCU (or something >> simliar to seqlock). > You can't really use a seqlock, seqlocks are collision-retry locks, > and the semantic here is that invalidate_range_start *MUST* not > continue until thread doing #4 above is guaranteed no longer touching > the memory. Yes, that's the tricky part. For hardware like CPU, kicking through IPI is sufficient for synchronizatio...
2019 Jul 31
2
[PATCH V2 7/9] vhost: do not use RCU to synchronize MMU notifier with worker
On Wed, Jul 31, 2019 at 04:46:53AM -0400, Jason Wang wrote: > We used to use RCU to synchronize MMU notifier with worker. This leads > to calling synchronize_rcu() in invalidate_range_start(). But on a busy > system, there would be many factors that may slow down the > synchronize_rcu() which makes it unsuitable to be called in MMU > notifier. > > A solution is SRCU but its
2019 Aug 01
0
[PATCH V2 7/9] vhost: do not use RCU to synchronize MMU notifier with worker
...>> >> There's no rmb() in write_seqcount_begin(), so map could be read before >> write_seqcount_begin(), but it looks to me now that this doesn't harm at >> all, maybe we can try this way. > That is because it is a write side lock, not a read lock. IIRC > seqlocks have weaker barriers because the write side needs to be > serialized in some other way. Yes. Having thought hard about the code, it looks to me write_seqcount_begin()/end() is sufficient here: - Notifier will only assign NULL to map, so it doesn't harm to read map before seq, then we...
2017 Feb 10
2
[PATCH 2/2] x86/vdso: Add VCLOCK_HVCLOCK vDSO clock read method
Stephen Hemminger <sthemmin at microsoft.com> writes: > Why not use existing seqlocks? > To be honest I don't quite understand how we could use it -- the sequence locking here is done against the page updated by the hypervisor, we're not creating new structures (so I don't understand how we could use struct seqcount which we don't have) but I may be
2019 Aug 07
2
[PATCH V4 7/9] vhost: do not use RCU to synchronize MMU notifier with worker
On Wed, Aug 07, 2019 at 03:06:15AM -0400, Jason Wang wrote: > We used to use RCU to synchronize MMU notifier with worker. This leads > to calling synchronize_rcu() in invalidate_range_start(). But on a busy > system, there would be many factors that may slow down the > synchronize_rcu() which makes it unsuitable to be called in MMU > notifier. > > So this patch switches use
2010 May 26
1
Error compiling DAHDI...
I was at a client site tonight to install OSLEC on his machine running asterisk 1.6.0.22 and DAHDI 2.2.1 installed via yum. I stopped asterisk and DAHDI, downloaded the latest version of DAHDI 2.2.1 (dahdi-linux-complete-2.2.1.2+2.2.1.1) and made the necessary changes to compile OSLEC with DAHDI, but I ran into compilation issues that I had never seen before. So as a test I deleted my
2019 Aug 08
3
[PATCH V4 7/9] vhost: do not use RCU to synchronize MMU notifier with worker
On 2019/8/7 10:02, Jason Wang wrote: > > On 2019/8/7 8:07, Jason Gunthorpe wrote: >> On Wed, Aug 07, 2019 at 03:06:15AM -0400, Jason Wang wrote: >>> We used to use RCU to synchronize MMU notifier with worker. This leads >>> to calling synchronize_rcu() in invalidate_range_start(). But on a busy >>> system, there would be many factors that may
2019 Aug 07
0
[PATCH V4 7/9] vhost: do not use RCU to synchronize MMU notifier with worker
On 2019/8/7 8:07, Jason Gunthorpe wrote: > On Wed, Aug 07, 2019 at 03:06:15AM -0400, Jason Wang wrote: >> We used to use RCU to synchronize MMU notifier with worker. This leads >> to calling synchronize_rcu() in invalidate_range_start(). But on a busy >> system, there would be many factors that may slow down the >> synchronize_rcu() which makes it unsuitable to be
2019 Jul 31
0
[PATCH V2 7/9] vhost: do not use RCU to synchronize MMU notifier with worker
On 2019/7/31 8:39, Jason Gunthorpe wrote: > On Wed, Jul 31, 2019 at 04:46:53AM -0400, Jason Wang wrote: >> We used to use RCU to synchronize MMU notifier with worker. This leads >> to calling synchronize_rcu() in invalidate_range_start(). But on a busy >> system, there would be many factors that may slow down the >> synchronize_rcu() which makes it unsuitable to
2012 Jun 06
9
[PATCH] virtio-net: fix a race on 32bit arches
From: Eric Dumazet <edumazet at google.com> commit 3fa2a1df909 (virtio-net: per cpu 64 bit stats (v2)) added a race on 32bit arches. We must use separate syncp for rx and tx path as they can be run at the same time on different cpus. Thus one sequence increment can be lost and readers spin forever. Signed-off-by: Eric Dumazet <edumazet at google.com> Cc: Stephen Hemminger
2020 Jul 07
2
[RFC]: mm,power: introduce MADV_WIPEONSUSPEND
On Fri 03-07-20 15:29:22, Jann Horn wrote: > On Fri, Jul 3, 2020 at 1:30 PM Michal Hocko <mhocko at kernel.org> wrote: > > On Fri 03-07-20 10:34:09, Catangiu, Adrian Costin wrote: > > > This patch adds logic to the kernel power code to zero out contents of > > > all MADV_WIPEONSUSPEND VMAs present in the system during its transition > > > to any suspend