Jason Wang
2019-Aug-05 04:36 UTC
[PATCH V2 7/9] vhost: do not use RCU to synchronize MMU notifier with worker
On 2019/8/2 10:27 PM, Michael S. Tsirkin wrote:
> On Fri, Aug 02, 2019 at 09:46:13AM -0300, Jason Gunthorpe wrote:
>> On Fri, Aug 02, 2019 at 05:40:07PM +0800, Jason Wang wrote:
>>>> This must be a proper barrier, like a spinlock, mutex, or
>>>> synchronize_rcu.
>>>
>>> I started with synchronize_rcu(), but both you and Michael raised
>>> some concerns.
>> I've also idly wondered if calling synchronize_rcu() under the various
>> mm locks is a deadlock situation.
>>
>>> Then I tried spinlock and mutex:
>>>
>>> 1) spinlock: adds lots of overhead on the datapath; this leads to
>>> zero performance improvement.
>> I think the topic here is correctness, not performance improvement.
> The topic is whether we should revert
> commit 7f466032dc9 ("vhost: access vq metadata through kernel virtual address")
>
> or keep it in. The only reason to keep it is performance.

Maybe it's time to introduce the config option?

> Now, as long as all this code is disabled anyway, we can experiment a
> bit.
>
> I personally feel we would be best served by having two code paths:
>
> - Access to VM memory directly mapped into kernel
> - Access to userspace
>
> Having it all cleanly split will allow a bunch of optimizations; for
> example, for years now we have planned to be able to process an
> incoming short packet directly on the softirq path, or an outgoing one
> directly within eventfd.

It's not hard, considering we already have our own accessors. But the
question is (as asked in another thread): do you want permanent GUP, or
to still use MMU notifiers?

Thanks
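For context, the pattern this sub-thread is debating looks roughly like
the sketch below. This is illustrative only, not the code from commit
7f466032dc9: the avail_map field, the mn embedding, and the fallback
convention are hypothetical names for exposition.

    #include <linux/mmu_notifier.h>
    #include <linux/rcupdate.h>
    #include <linux/virtio_ring.h>
    #include "vhost.h"  /* drivers/vhost/vhost.h: vhost16_to_cpu() etc. */

    /* Datapath (vhost worker): read metadata through the kernel VA,
     * copying what we need inside the RCU read-side critical section. */
    static bool sketch_get_avail_idx(struct vhost_virtqueue *vq, u16 *idx)
    {
            struct vring_avail *avail;
            bool ok = false;

            rcu_read_lock();
            avail = rcu_dereference(vq->avail_map); /* hypothetical __rcu field */
            if (avail) {
                    *idx = vhost16_to_cpu(vq, avail->idx);
                    ok = true;
            }
            rcu_read_unlock();

            return ok;      /* on false, fall back to copy_from_user() */
    }

    /* MMU notifier: unpublish the mapping, then wait out any reader
     * still inside the critical section above. Note the
     * synchronize_rcu() runs with mm locks held -- the deadlock worry
     * Jason Gunthorpe raises -- while replacing it with a spinlock
     * taken on every datapath access is the zero-improvement result
     * Jason Wang reports. */
    static int sketch_invalidate_range_start(struct mmu_notifier *mn,
                                    const struct mmu_notifier_range *range)
    {
            struct vhost_virtqueue *vq =
                    container_of(mn, struct vhost_virtqueue, mn); /* hypothetical */

            rcu_assign_pointer(vq->avail_map, NULL);
            synchronize_rcu();
            /* ...now safe to unpin/release the pages backing the old map... */
            return 0;
    }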
Jason Wang
2019-Aug-05 04:41 UTC
[PATCH V2 7/9] vhost: do not use RCU to synchronize MMU notifier with worker
On 2019/8/5 12:36 PM, Jason Wang wrote:
>
> On 2019/8/2 10:27 PM, Michael S. Tsirkin wrote:
>> On Fri, Aug 02, 2019 at 09:46:13AM -0300, Jason Gunthorpe wrote:
>>> On Fri, Aug 02, 2019 at 05:40:07PM +0800, Jason Wang wrote:
>>>>> This must be a proper barrier, like a spinlock, mutex, or
>>>>> synchronize_rcu.
>>>>
>>>> I started with synchronize_rcu(), but both you and Michael raised
>>>> some concerns.
>>> I've also idly wondered if calling synchronize_rcu() under the various
>>> mm locks is a deadlock situation.
>>>
>>>> Then I tried spinlock and mutex:
>>>>
>>>> 1) spinlock: adds lots of overhead on the datapath; this leads to
>>>> zero performance improvement.
>>> I think the topic here is correctness, not performance improvement.
>> The topic is whether we should revert
>> commit 7f466032dc9 ("vhost: access vq metadata through kernel
>> virtual address")
>>
>> or keep it in. The only reason to keep it is performance.
>
> Maybe it's time to introduce the config option?

Or does it make sense if I post a V3 with:

- introduce the config option and disable the optimization by default

- switch from synchronize_rcu() to vhost_flush_work(), with the rest
  staying the same

This would give us some breathing room to decide which way to go for
the next release.

Thanks

>> Now, as long as all this code is disabled anyway, we can experiment a
>> bit.
>>
>> I personally feel we would be best served by having two code paths:
>>
>> - Access to VM memory directly mapped into kernel
>> - Access to userspace
>>
>> Having it all cleanly split will allow a bunch of optimizations; for
>> example, for years now we have planned to be able to process an
>> incoming short packet directly on the softirq path, or an outgoing
>> one directly within eventfd.
>
> It's not hard, considering we already have our own accessors. But the
> question is (as asked in another thread): do you want permanent GUP,
> or to still use MMU notifiers?
>
> Thanks
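The flush-based fence proposed here can reuse vhost's existing worker
flush machinery instead of RCU grace periods. The following is
condensed from the vhost_work_flush() implementation in
drivers/vhost/vhost.c of this era; only the idea of calling it from the
MMU notifier is new:

    #include <linux/completion.h>
    #include "vhost.h"  /* vhost_work_init(), vhost_work_queue() */

    struct vhost_flush_struct {
            struct vhost_work work;
            struct completion wait_event;
    };

    static void vhost_flush_work(struct vhost_work *work)
    {
            struct vhost_flush_struct *s =
                    container_of(work, struct vhost_flush_struct, work);

            complete(&s->wait_event);
    }

    /* Queue a no-op work item and wait for it to run. The worker
     * executes items in order, so once this returns, every work item
     * queued before it -- and hence any datapath access to the old
     * mapping -- has finished. Called from the notifier in place of
     * synchronize_rcu(). */
    static void sketch_fence_worker(struct vhost_dev *dev)
    {
            struct vhost_flush_struct flush;

            init_completion(&flush.wait_event);
            vhost_work_init(&flush.work, vhost_flush_work);
            vhost_work_queue(dev, &flush.work);
            wait_for_completion(&flush.wait_event);
    }

Note this still blocks the invalidating task until the worker drains,
which is the objection Michael raises below.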
Michael S. Tsirkin
2019-Aug-05 06:30 UTC
[PATCH V2 7/9] vhost: do not use RCU to synchronize MMU notifier with worker
On Mon, Aug 05, 2019 at 12:36:40PM +0800, Jason Wang wrote:
>
> On 2019/8/2 10:27 PM, Michael S. Tsirkin wrote:
> > On Fri, Aug 02, 2019 at 09:46:13AM -0300, Jason Gunthorpe wrote:
> > > On Fri, Aug 02, 2019 at 05:40:07PM +0800, Jason Wang wrote:
> > > > > This must be a proper barrier, like a spinlock, mutex, or
> > > > > synchronize_rcu.
> > > >
> > > > I started with synchronize_rcu(), but both you and Michael
> > > > raised some concerns.
> > > I've also idly wondered if calling synchronize_rcu() under the
> > > various mm locks is a deadlock situation.
> > >
> > > > Then I tried spinlock and mutex:
> > > >
> > > > 1) spinlock: adds lots of overhead on the datapath; this leads
> > > > to zero performance improvement.
> > > I think the topic here is correctness, not performance improvement.
> > The topic is whether we should revert
> > commit 7f466032dc9 ("vhost: access vq metadata through kernel virtual address")
> >
> > or keep it in. The only reason to keep it is performance.
>
> Maybe it's time to introduce the config option?

Depending on CONFIG_BROKEN? I'm not sure it's a good idea.

> > Now, as long as all this code is disabled anyway, we can experiment
> > a bit.
> >
> > I personally feel we would be best served by having two code paths:
> >
> > - Access to VM memory directly mapped into kernel
> > - Access to userspace
> >
> > Having it all cleanly split will allow a bunch of optimizations; for
> > example, for years now we have planned to be able to process an
> > incoming short packet directly on the softirq path, or an outgoing
> > one directly within eventfd.
>
> It's not hard, considering we already have our own accessors. But the
> question is (as asked in another thread): do you want permanent GUP,
> or to still use MMU notifiers?
>
> Thanks

We want THP and NUMA to work. Both are important for performance.

--
MST
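For reference, the "permanent GUP" alternative means long-term pinning
of the metadata pages, along the lines of the sketch below (illustrative
only; NR_META_PAGES and the cleanup are made up). Pages pinned with
FOLL_LONGTERM cannot be migrated, which is what defeats NUMA balancing
and THP collapse for the affected memory:

    #include <linux/mm.h>

    /* Pin the vq metadata pages for the lifetime of the device.
     * Caller holds current->mm->mmap_sem for read. */
    static long sketch_pin_meta(unsigned long uaddr, struct page **pages)
    {
            long npinned;

            npinned = get_user_pages(uaddr, NR_META_PAGES,
                                     FOLL_LONGTERM | FOLL_WRITE,
                                     pages, NULL);
            if (npinned < NR_META_PAGES) {
                    /* illustrative: release any partial pin and bail out */
                    return npinned < 0 ? npinned : -EFAULT;
            }
            return 0;
    }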
Michael S. Tsirkin
2019-Aug-05 06:40 UTC
[PATCH V2 7/9] vhost: do not use RCU to synchronize MMU notifier with worker
On Mon, Aug 05, 2019 at 12:41:45PM +0800, Jason Wang wrote:
>
> On 2019/8/5 12:36 PM, Jason Wang wrote:
> >
> > On 2019/8/2 10:27 PM, Michael S. Tsirkin wrote:
> > > On Fri, Aug 02, 2019 at 09:46:13AM -0300, Jason Gunthorpe wrote:
> > > > On Fri, Aug 02, 2019 at 05:40:07PM +0800, Jason Wang wrote:
> > > > > > This must be a proper barrier, like a spinlock, mutex, or
> > > > > > synchronize_rcu.
> > > > >
> > > > > I started with synchronize_rcu(), but both you and Michael
> > > > > raised some concerns.
> > > > I've also idly wondered if calling synchronize_rcu() under the
> > > > various mm locks is a deadlock situation.
> > > >
> > > > > Then I tried spinlock and mutex:
> > > > >
> > > > > 1) spinlock: adds lots of overhead on the datapath; this
> > > > > leads to zero performance improvement.
> > > > I think the topic here is correctness, not performance improvement.
> > > The topic is whether we should revert
> > > commit 7f466032dc9 ("vhost: access vq metadata through kernel
> > > virtual address")
> > >
> > > or keep it in. The only reason to keep it is performance.
> >
> > Maybe it's time to introduce the config option?
>
> Or does it make sense if I post a V3 with:
>
> - introduce the config option and disable the optimization by default
>
> - switch from synchronize_rcu() to vhost_flush_work(), with the rest
>   staying the same
>
> This would give us some breathing room to decide which way to go for
> the next release.
>
> Thanks

As is, with preempt enabled? Nope: I don't think blocking an
invalidator on swap I/O is OK, so I don't believe this stuff is going
into this release at this point.

So it's more a question of whether it's better to revert and apply a
clean patch on top, or just keep the code around but disabled with an
ifdef, as is. I'm open to both options, and would like your opinion on
this.

> > > Now, as long as all this code is disabled anyway, we can
> > > experiment a bit.
> > >
> > > I personally feel we would be best served by having two code paths:
> > >
> > > - Access to VM memory directly mapped into kernel
> > > - Access to userspace
> > >
> > > Having it all cleanly split will allow a bunch of optimizations;
> > > for example, for years now we have planned to be able to process
> > > an incoming short packet directly on the softirq path, or an
> > > outgoing one directly within eventfd.
> >
> > It's not hard, considering we already have our own accessors. But
> > the question is (as asked in another thread): do you want permanent
> > GUP, or to still use MMU notifiers?
> >
> > Thanks
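The "keep the code around but disabled" option would shape up roughly
as below: an opt-in config symbol (name hypothetical; what it should
depend on is, per the exchange above, itself the open question)
guarding the kernel-VA path, with the classic uaccess path as the
default:

    static int sketch_avail_idx(struct vhost_virtqueue *vq, u16 *idx)
    {
    #ifdef CONFIG_VHOST_VA_METADATA         /* hypothetical, default n */
            /* fast path: direct kernel-VA access, e.g. the
             * sketch_get_avail_idx() from the earlier sketch */
            if (sketch_get_avail_idx(vq, idx))
                    return 0;
    #endif
            /* fallback: the pre-7f466032dc9 copy_from_user() path */
            return sketch_avail_idx_user(vq, idx); /* hypothetical helper */
    }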
Jason Wang
2019-Aug-05 08:22 UTC
[PATCH V2 7/9] vhost: do not use RCU to synchronize MMU notifier with worker
On 2019/8/5 2:30 PM, Michael S. Tsirkin wrote:
> On Mon, Aug 05, 2019 at 12:36:40PM +0800, Jason Wang wrote:
>> On 2019/8/2 10:27 PM, Michael S. Tsirkin wrote:
>>> On Fri, Aug 02, 2019 at 09:46:13AM -0300, Jason Gunthorpe wrote:
>>>> On Fri, Aug 02, 2019 at 05:40:07PM +0800, Jason Wang wrote:
>>>>>> This must be a proper barrier, like a spinlock, mutex, or
>>>>>> synchronize_rcu.
>>>>> I started with synchronize_rcu(), but both you and Michael raised
>>>>> some concerns.
>>>> I've also idly wondered if calling synchronize_rcu() under the
>>>> various mm locks is a deadlock situation.
>>>>
>>>>> Then I tried spinlock and mutex:
>>>>>
>>>>> 1) spinlock: adds lots of overhead on the datapath; this leads to
>>>>> zero performance improvement.
>>>> I think the topic here is correctness, not performance improvement.
>>> The topic is whether we should revert
>>> commit 7f466032dc9 ("vhost: access vq metadata through kernel virtual address")
>>>
>>> or keep it in. The only reason to keep it is performance.
>>
>> Maybe it's time to introduce the config option?
> Depending on CONFIG_BROKEN? I'm not sure it's a good idea.

Ok.

>>> Now, as long as all this code is disabled anyway, we can experiment
>>> a bit.
>>>
>>> I personally feel we would be best served by having two code paths:
>>>
>>> - Access to VM memory directly mapped into kernel
>>> - Access to userspace
>>>
>>> Having it all cleanly split will allow a bunch of optimizations; for
>>> example, for years now we have planned to be able to process an
>>> incoming short packet directly on the softirq path, or an outgoing
>>> one directly within eventfd.
>>
>> It's not hard, considering we already have our own accessors. But the
>> question is (as asked in another thread): do you want permanent GUP,
>> or to still use MMU notifiers?
>>
>> Thanks
> We want THP and NUMA to work. Both are important for performance.

Yes.

Thanks