Michael S. Tsirkin
2013-Dec-23 19:37 UTC
[PATCH net-next 2/3] virtio-net: use per-receive queue page frag alloc for mergeable bufs
On Mon, Dec 23, 2013 at 09:27:07AM -0800, Eric Dumazet wrote:
> On Mon, 2013-12-23 at 16:12 +0800, Jason Wang wrote:
> > On 12/17/2013 08:16 AM, Michael Dalton wrote:
> > > The virtio-net driver currently uses netdev_alloc_frag() for GFP_ATOMIC
> > > mergeable rx buffer allocations. This commit migrates virtio-net to use
> > > per-receive queue page frags for GFP_ATOMIC allocation. This change
> > > unifies mergeable rx buffer memory allocation, which now will use
> > > skb_page_frag_refill() for both atomic and GFP-WAIT buffer allocations.
> > >
> > > To address fragmentation concerns, if after buffer allocation there
> > > is too little space left in the page frag to allocate a subsequent
> > > buffer, the remaining space is added to the current allocated buffer
> > > so that the remaining space can be used to store packet data.
> > >
> > > Signed-off-by: Michael Dalton <mwdalton at google.com>
> > > ---
> > >  drivers/net/virtio_net.c | 69 ++++++++++++++++++++++++++----------------------
> > >  1 file changed, 38 insertions(+), 31 deletions(-)
> > >
> > > diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> > > index c51a988..d38d130 100644
> > > --- a/drivers/net/virtio_net.c
> > > +++ b/drivers/net/virtio_net.c
> > > @@ -78,6 +78,9 @@ struct receive_queue {
> > >  	/* Chain pages by the private ptr. */
> > >  	struct page *pages;
> > >
> > > +	/* Page frag for GFP_ATOMIC packet buffer allocation. */
> > > +	struct page_frag atomic_frag;
> > > +
> > >  	/* RX: fragments + linear part + virtio header */
> > >  	struct scatterlist sg[MAX_SKB_FRAGS + 2];
> > >
> > > @@ -127,9 +130,9 @@ struct virtnet_info {
> > >  	struct mutex config_lock;
> > >
> > >  	/* Page_frag for GFP_KERNEL packet buffer allocation when we run
> > > -	 * low on memory.
> > > +	 * low on memory. May sleep.
> > >  	 */
> > > -	struct page_frag alloc_frag;
> > > +	struct page_frag sleep_frag;
> >
> > Any reason to use two different page_frags, considering only
> > skb_page_frag_refill() is used?
>
> One is used under process context, where preemption and GFP_KERNEL are
> allowed.

Yes, but it is always used with napi disabled.

> One is used from softirq context and GFP_ATOMIC.

This one is used only under napi.

> You can't share a common
> page_frag.

So there isn't a conflict with respect to locking.

Is it problematic to use the same page_frag with both GFP_ATOMIC and with
GFP_KERNEL? If yes, why?

> Acked-by: Eric Dumazet <edumazet at google.com>
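Both allocation paths hinge on skb_page_frag_refill() (net/core/sock.c),
which guarantees a page_frag has at least the requested number of bytes
available, allocating a fresh page with the caller-supplied gfp mask when
it does not. A minimal sketch of the consumer pattern under discussion is
below; the function name and the way the length is passed in are
illustrative, not the driver's actual code:

	/* Sketch only: carve one rx buffer out of a page_frag. Whether
	 * the frag is the per-queue atomic_frag or the per-device
	 * sleep_frag, the pattern is identical; only the gfp mask
	 * differs (GFP_ATOMIC vs GFP_KERNEL).
	 */
	static void *alloc_mergeable_buf(struct page_frag *frag,
					 unsigned int len, gfp_t gfp)
	{
		void *buf;

		if (!skb_page_frag_refill(len, frag, gfp))
			return NULL;	/* OOM: caller defers to the refill wq */

		buf = page_address(frag->page) + frag->offset;
		get_page(frag->page);	/* each buffer pins the page */
		frag->offset += len;
		return buf;
	}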
Michael Dalton
2013-Dec-26 21:28 UTC
[PATCH net-next 2/3] virtio-net: use per-receive queue page frag alloc for mergeable bufs
On Mon, Dec 23, 2013 at 11:37 AM, Michael S. Tsirkin <mst at redhat.com> wrote:
> So there isn't a conflict with respect to locking.
>
> Is it problematic to use the same page_frag with both GFP_ATOMIC and with
> GFP_KERNEL? If yes, why?

I believe it is safe to use the same page_frag, and I will send out a
followup patchset using just the per-receive queue page_frags. For future
consideration, Eric noted that disabling NAPI before GFP_KERNEL allocs
can potentially inhibit virtio-net network processing for some time
(e.g., during a blocking memory allocation or preemption).

Best,

Mike
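For context, the refill worker being discussed looked roughly like this at
the time (paraphrased from drivers/net/virtio_net.c of that era; details
may differ slightly). The napi_disable()/virtnet_napi_enable() bracket
around the sleeping allocation is what stalls rx processing on a queue:

	static void refill_work(struct work_struct *work)
	{
		struct virtnet_info *vi =
			container_of(work, struct virtnet_info, refill.work);
		bool still_empty;
		int i;

		for (i = 0; i < vi->curr_queue_pairs; i++) {
			struct receive_queue *rq = &vi->rq[i];

			/* rx processing on this queue is stopped for the
			 * whole GFP_KERNEL allocation, which may block
			 * or be preempted */
			napi_disable(&rq->napi);
			still_empty = !try_fill_recv(rq, GFP_KERNEL);
			virtnet_napi_enable(rq);

			if (still_empty)
				schedule_delayed_work(&vi->refill, HZ/2);
		}
	}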
Michael S. Tsirkin
2013-Dec-26 21:37 UTC
[PATCH net-next 2/3] virtio-net: use per-receive queue page frag alloc for mergeable bufs
On Thu, Dec 26, 2013 at 01:28:58PM -0800, Michael Dalton wrote:
> On Mon, Dec 23, 2013 at 11:37 AM, Michael S. Tsirkin <mst at redhat.com> wrote:
> > So there isn't a conflict with respect to locking.
> >
> > Is it problematic to use the same page_frag with both GFP_ATOMIC and with
> > GFP_KERNEL? If yes, why?
>
> I believe it is safe to use the same page_frag, and I will send out a
> followup patchset using just the per-receive queue page_frags.

Seems easier to use it straight away, I think.

> For future
> consideration, Eric noted that disabling NAPI before GFP_KERNEL allocs
> can potentially inhibit virtio-net network processing for some time
> (e.g., during a blocking memory allocation or preemption).
>
> Best,
>
> Mike

Interesting. But if we can't allocate a buffer, how can we do network
processing? If we can reproduce the problem, we can maybe move the
allocation out of the napi-disabled section, but then we'll need to add
more locking.
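As a purely hypothetical illustration of that trade-off (nothing like this
exists in the driver), a napi-enabled refill would do the sleeping
allocation first and then take a short lock, shared with the poll path,
just to publish the buffer; refill_lock and post_recv_buf() below are
invented names:

	static void refill_one(struct receive_queue *rq)
	{
		struct page_frag frag = {};

		/* sleeping allocation while NAPI keeps running */
		if (!skb_page_frag_refill(PAGE_SIZE, &frag, GFP_KERNEL))
			return;

		/* Short critical section shared with the NAPI poll path.
		 * rq->refill_lock and post_recv_buf() are hypothetical:
		 * this is the "more locking" cost, since the hot poll
		 * path would have to take the lock too.
		 */
		spin_lock_bh(&rq->refill_lock);
		post_recv_buf(rq, &frag);
		spin_unlock_bh(&rq->refill_lock);
	}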
Eric Dumazet
2013-Dec-26 21:56 UTC
[PATCH net-next 2/3] virtio-net: use per-receive queue page frag alloc for mergeable bufs
On Thu, 2013-12-26 at 13:28 -0800, Michael Dalton wrote:
> On Mon, Dec 23, 2013 at 11:37 AM, Michael S. Tsirkin <mst at redhat.com> wrote:
> > So there isn't a conflict with respect to locking.
> >
> > Is it problematic to use the same page_frag with both GFP_ATOMIC and with
> > GFP_KERNEL? If yes, why?
>
> I believe it is safe to use the same page_frag, and I will send out a
> followup patchset using just the per-receive queue page_frags. For future
> consideration, Eric noted that disabling NAPI before GFP_KERNEL allocs
> can potentially inhibit virtio-net network processing for some time
> (e.g., during a blocking memory allocation or preemption).

Yep, using napi_disable() in the refill process looks quite inefficient
to me, if not buggy.

napi_disable() is a big hammer: the whole idea of having a process block
on GFP_KERNEL allocations is to allow some asynchronous behavior.

I have a hard time convincing myself virtio_net is safe anyway with this
work queue thing.

virtnet_open() seems racy, for example:

	for (i = 0; i < vi->max_queue_pairs; i++) {
		if (i < vi->curr_queue_pairs)
			/* Make sure we have some buffers: if oom use wq. */
			if (!try_fill_recv(&vi->rq[i], GFP_KERNEL))
				schedule_delayed_work(&vi->refill, 0);
		virtnet_napi_enable(&vi->rq[i]);
	}

What if the workqueue is scheduled _before_ the call to
virtnet_napi_enable(&vi->rq[i])?

refill_work() will happily conflict with another cpu: two cpus could
call try_fill_recv() at the same time, or worse, napi_enable() would
crash.

I do not have time to make a full check, but I guess there are other
races like this one.

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index c51a98867a40..b8e2adb5d0c2 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -726,16 +726,18 @@ again:
 static int virtnet_open(struct net_device *dev)
 {
 	struct virtnet_info *vi = netdev_priv(dev);
+	bool refill = false;
 	int i;
 
 	for (i = 0; i < vi->max_queue_pairs; i++) {
 		if (i < vi->curr_queue_pairs)
 			/* Make sure we have some buffers: if oom use wq. */
 			if (!try_fill_recv(&vi->rq[i], GFP_KERNEL))
-				schedule_delayed_work(&vi->refill, 0);
+				refill = true;
 		virtnet_napi_enable(&vi->rq[i]);
 	}
-
+	if (refill)
+		schedule_delayed_work(&vi->refill, 0);
 	return 0;
 }
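With the diff applied, virtnet_open() reads as follows: try_fill_recv()
failures only set a flag, and the work item is scheduled at most once,
after every queue's NAPI instance has been enabled, so refill_work() can
no longer race with virtnet_napi_enable():

	static int virtnet_open(struct net_device *dev)
	{
		struct virtnet_info *vi = netdev_priv(dev);
		bool refill = false;
		int i;

		for (i = 0; i < vi->max_queue_pairs; i++) {
			if (i < vi->curr_queue_pairs)
				/* Make sure we have some buffers: if oom use wq. */
				if (!try_fill_recv(&vi->rq[i], GFP_KERNEL))
					refill = true;
			virtnet_napi_enable(&vi->rq[i]);
		}
		if (refill)
			schedule_delayed_work(&vi->refill, 0);
		return 0;
	}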