Palagummi, Siva
2012-Aug-13 00:12 UTC
[PATCH RFC] xen/netback: Count ring slots properly when larger MTU sizes are used
Hi,

I ran into an issue where the netback driver crashes with BUG_ON(npo.meta_prod > ARRAY_SIZE(netbk->meta)). It happens on an Intel 10Gbps network when larger MTU values are used. The problem seems to be in the way ring slots are counted. After applying this patch, things ran fine in my environment. Please review and validate the change.

Thanks
Siva
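To put numbers on the failure mode: with a jumbo MTU of, say, 9000 bytes (an assumption for illustration; the report only says "larger MTU values"), skb_headlen(skb) can approach 9000 bytes, which spans DIV_ROUND_UP(9000, 4096) = 3 pages and hence needs 3 ring slots, while the loop in xen_netbk_rx_action accounted only one slot for the head. Each such skb under-counts by two slots, so over a large enough batch the meta array overflows and the BUG_ON fires.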
Jan Beulich
2012-Aug-13 08:27 UTC
Re: [PATCH RFC] xen/netback: Count ring slots properly when larger MTU sizes are used
>>> On 13.08.12 at 02:12, "Palagummi, Siva" <Siva.Palagummi@ca.com> wrote:
> --- a/drivers/net/xen-netback/netback.c	2012-01-25 19:39:32.000000000 -0500
> +++ b/drivers/net/xen-netback/netback.c	2012-08-12 15:50:50.000000000 -0400
> @@ -623,6 +623,24 @@ static void xen_netbk_rx_action(struct x
>
>  		count += nr_frags + 1;
>
> +		/*
> +		 * The logic here should be somewhat similar to
> +		 * xen_netbk_count_skb_slots. In case of larger MTU size,

Is there a reason why you can't simply use that function then?
Afaict it's being used on the very same skb before it gets put on
rx_queue already anyway.

> +		 * skb head length may be more than a PAGE_SIZE. We need to
> +		 * consider ring slots consumed by that data. If we do not,
> +		 * then within this loop itself we end up consuming more meta
> +		 * slots turning the BUG_ON below. With this fix we may end up
> +		 * iterating through xen_netbk_rx_action multiple times
> +		 * instead of crashing netback thread.
> +		 */
> +
> +		count += DIV_ROUND_UP(skb_headlen(skb), PAGE_SIZE);

This now over-accounts by one I think (due to the "+ 1" above;
the calculation here really is to replace that increment).

Jan

> +
> +		if (skb_shinfo(skb)->gso_size)
> +			count++;
> +
>  		__skb_queue_tail(&rxq, skb);
>
>  		/* Filled the batch queue? */
Palagummi, Siva
2012-Aug-14 21:17 UTC
Re: [PATCH RFC] xen/netback: Count ring slots properly when larger MTU sizes are used
> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: Monday, August 13, 2012 1:58 PM
> To: Palagummi, Siva
> Cc: xen-devel@lists.xen.org
> Subject: Re: [Xen-devel] [PATCH RFC] xen/netback: Count ring slots
> properly when larger MTU sizes are used
>
> >>> On 13.08.12 at 02:12, "Palagummi, Siva" <Siva.Palagummi@ca.com> wrote:
> > --- a/drivers/net/xen-netback/netback.c	2012-01-25 19:39:32.000000000 -0500
> > +++ b/drivers/net/xen-netback/netback.c	2012-08-12 15:50:50.000000000 -0400
> > @@ -623,6 +623,24 @@ static void xen_netbk_rx_action(struct x
> >
> >  		count += nr_frags + 1;
> >
> > +		/*
> > +		 * The logic here should be somewhat similar to
> > +		 * xen_netbk_count_skb_slots. In case of larger MTU size,
>
> Is there a reason why you can't simply use that function then?
> Afaict it's being used on the very same skb before it gets put on
> rx_queue already anyway.

I did think about it. But that would mean iterating through a similar piece of code twice, with additional function calls. The netbk_gop_skb --> netbk_gop_frag_copy sequence already executes similar code, and I was not sure about any other implications either. So I decided to fix it by adding a few lines of code inline.

> > +		 * skb head length may be more than a PAGE_SIZE. We need to
> > +		 * consider ring slots consumed by that data. If we do not,
> > +		 * then within this loop itself we end up consuming more meta
> > +		 * slots turning the BUG_ON below. With this fix we may end up
> > +		 * iterating through xen_netbk_rx_action multiple times
> > +		 * instead of crashing netback thread.
> > +		 */
> > +
> > +		count += DIV_ROUND_UP(skb_headlen(skb), PAGE_SIZE);
>
> This now over-accounts by one I think (due to the "+ 1" above;
> the calculation here really is to replace that increment).
>
> Jan

I also wasn't sure about the actual purpose of the "+ 1" above: whether it is meant to account for skb_headlen, for the non-zero gso_size case, or for something else. That's why I left it in place, so that the loop exits on the safe side. If someone who knows this area of the code can confirm that we do not need it, I will create a new patch.

In my environment I observed that with my patch "count" is always greater than the actual number of meta slots produced, because of this additional "+ 1". When I took out the extra increment, count was always equal to the actual meta slots produced and the loop exited safely, with more meta slots produced under heavy traffic.

Thanks
Siva

> > +
> > +		if (skb_shinfo(skb)->gso_size)
> > +			count++;
> > +
> >  		__skb_queue_tail(&rxq, skb);
> >
> >  		/* Filled the batch queue? */
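As a reading aid, here is the per-skb accounting being discussed, collected into a hypothetical standalone helper (the function name is ours; netback did this inline, and like the loop it counts one slot per frag rather than a worst case):

	/*
	 * Sketch of the per-skb slot accounting under discussion.
	 * Illustrative only; not the exact upstream code.
	 */
	static unsigned int rx_slots_for_skb(const struct sk_buff *skb)
	{
		/* Linear (head) area: one slot per page it spans.
		 * This is what replaces the old flat "+ 1". */
		unsigned int count = DIV_ROUND_UP(skb_headlen(skb), PAGE_SIZE);

		/* One slot per page fragment. */
		count += skb_shinfo(skb)->nr_frags;

		/* GSO skbs need one extra slot for the metadata
		 * that the frontend consumes. */
		if (skb_shinfo(skb)->gso_size)
			count++;

		return count;
	}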
Ian Campbell
2012-Aug-21 13:14 UTC
Re: [PATCH RFC] xen/netback: Count ring slots properly when larger MTU sizes are used
On Tue, 2012-08-14 at 22:17 +0100, Palagummi, Siva wrote:
> > -----Original Message-----
> > From: Jan Beulich [mailto:JBeulich@suse.com]
> > Sent: Monday, August 13, 2012 1:58 PM
> > To: Palagummi, Siva
> > Cc: xen-devel@lists.xen.org
> > Subject: Re: [Xen-devel] [PATCH RFC] xen/netback: Count ring slots
> > properly when larger MTU sizes are used
> >
> > >>> On 13.08.12 at 02:12, "Palagummi, Siva" <Siva.Palagummi@ca.com> wrote:
> > > --- a/drivers/net/xen-netback/netback.c	2012-01-25 19:39:32.000000000 -0500
> > > +++ b/drivers/net/xen-netback/netback.c	2012-08-12 15:50:50.000000000 -0400
> > > @@ -623,6 +623,24 @@ static void xen_netbk_rx_action(struct x
> > >
> > >  		count += nr_frags + 1;
> > >
> > > +		/*
> > > +		 * The logic here should be somewhat similar to
> > > +		 * xen_netbk_count_skb_slots. In case of larger MTU size,
> >
> > Is there a reason why you can't simply use that function then?
> > Afaict it's being used on the very same skb before it gets put on
> > rx_queue already anyway.
>
> I did think about it. But that would mean iterating through a similar
> piece of code twice, with additional function calls. The
> netbk_gop_skb --> netbk_gop_frag_copy sequence already executes
> similar code, and I was not sure about any other implications either.
> So I decided to fix it by adding a few lines of code inline.

I wonder if we could cache the result of the call to
xen_netbk_count_skb_slots in xenvif_start_xmit somewhere?

> > > +		 * skb head length may be more than a PAGE_SIZE. We need to
> > > +		 * consider ring slots consumed by that data. If we do not,
> > > +		 * then within this loop itself we end up consuming more meta
> > > +		 * slots turning the BUG_ON below. With this fix we may end up
> > > +		 * iterating through xen_netbk_rx_action multiple times
> > > +		 * instead of crashing netback thread.
> > > +		 */
> > > +
> > > +		count += DIV_ROUND_UP(skb_headlen(skb), PAGE_SIZE);
> >
> > This now over-accounts by one I think (due to the "+ 1" above;
> > the calculation here really is to replace that increment).
> >
> > Jan
>
> I also wasn't sure about the actual purpose of the "+ 1" above:
> whether it is meant to account for skb_headlen, for the non-zero
> gso_size case, or for something else.

I think its intention was to account for skb_headlen and therefore it
should be replaced.

> That's why I left it in place, so that the loop exits on the safe
> side. If someone who knows this area of the code can confirm that we
> do not need it, I will create a new patch. In my environment I
> observed that with my patch "count" is always greater than the actual
> number of meta slots produced, because of this additional "+ 1".
> When I took out the extra increment, count was always equal to the
> actual meta slots produced and the loop exited safely, with more meta
> slots produced under heavy traffic.

I think that's an argument for removing it as well.

The + 1 leading to an early exit seems benign when you think about one
largish skb but imagine if you had 200 small (single page) skbs -- then
you have effectively halved the size of the ring (or at least the
batch).

This:
	/* Filled the batch queue? */
	if (count + MAX_SKB_FRAGS >= XEN_NETIF_RX_RING_SIZE)
		break;
seems a bit iffy to me too. I wonder if MAX_SKB_FRAGS should be
max_required_rx_slots(vif)? Or maybe the preflight checks from
xenvif_start_xmit save us from this fate?

Ian.

> Thanks
> Siva
>
> > > +
> > > +		if (skb_shinfo(skb)->gso_size)
> > > +			count++;
> > > +
> > >  		__skb_queue_tail(&rxq, skb);
> > >
> > >  		/* Filled the batch queue? */
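To make the suggestion concrete, a sketch of the batch check with the vif's worst case substituted for MAX_SKB_FRAGS (assuming vif here comes from the skb being batched, as elsewhere in the loop):

	/* Filled the batch queue? Reserve worst-case room for the
	 * next skb on this vif instead of the generic bound. */
	if (count + max_required_rx_slots(vif) >= XEN_NETIF_RX_RING_SIZE)
		break;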
Palagummi, Siva
2012-Aug-22 13:14 UTC
Re: [PATCH RFC] xen/netback: Count ring slots properly when larger MTU sizes are used
> -----Original Message-----
> From: Ian Campbell [mailto:Ian.Campbell@citrix.com]
> Sent: Tuesday, August 21, 2012 6:45 PM
> To: Palagummi, Siva
> Cc: Jan Beulich; xen-devel@lists.xen.org
> Subject: Re: [Xen-devel] [PATCH RFC] xen/netback: Count ring slots
> properly when larger MTU sizes are used
>
> On Tue, 2012-08-14 at 22:17 +0100, Palagummi, Siva wrote:
> > > -----Original Message-----
> > > From: Jan Beulich [mailto:JBeulich@suse.com]
> > > Sent: Monday, August 13, 2012 1:58 PM
> > > To: Palagummi, Siva
> > > Cc: xen-devel@lists.xen.org
> > > Subject: Re: [Xen-devel] [PATCH RFC] xen/netback: Count ring slots
> > > properly when larger MTU sizes are used
> > >
> > > >>> On 13.08.12 at 02:12, "Palagummi, Siva" <Siva.Palagummi@ca.com>
> > > wrote:
> > > > --- a/drivers/net/xen-netback/netback.c	2012-01-25 19:39:32.000000000 -0500
> > > > +++ b/drivers/net/xen-netback/netback.c	2012-08-12 15:50:50.000000000 -0400
> > > > @@ -623,6 +623,24 @@ static void xen_netbk_rx_action(struct x
> > > >
> > > >  		count += nr_frags + 1;
> > > >
> > > > +		/*
> > > > +		 * The logic here should be somewhat similar to
> > > > +		 * xen_netbk_count_skb_slots. In case of larger MTU size,
> > >
> > > Is there a reason why you can't simply use that function then?
> > > Afaict it's being used on the very same skb before it gets put on
> > > rx_queue already anyway.
> >
> > I did think about it. But that would mean iterating through a similar
> > piece of code twice, with additional function calls. The
> > netbk_gop_skb --> netbk_gop_frag_copy sequence already executes
> > similar code, and I was not sure about any other implications either.
> > So I decided to fix it by adding a few lines of code inline.
>
> I wonder if we could cache the result of the call to
> xen_netbk_count_skb_slots in xenvif_start_xmit somewhere?
>
> > > > +		 * skb head length may be more than a PAGE_SIZE. We need to
> > > > +		 * consider ring slots consumed by that data. If we do not,
> > > > +		 * then within this loop itself we end up consuming more meta
> > > > +		 * slots turning the BUG_ON below. With this fix we may end up
> > > > +		 * iterating through xen_netbk_rx_action multiple times
> > > > +		 * instead of crashing netback thread.
> > > > +		 */
> > > > +
> > > > +		count += DIV_ROUND_UP(skb_headlen(skb), PAGE_SIZE);
> > >
> > > This now over-accounts by one I think (due to the "+ 1" above;
> > > the calculation here really is to replace that increment).
> > >
> > > Jan
> >
> > I also wasn't sure about the actual purpose of the "+ 1" above:
> > whether it is meant to account for skb_headlen, for the non-zero
> > gso_size case, or for something else.
>
> I think its intention was to account for skb_headlen and therefore it
> should be replaced.
>
> > That's why I left it in place, so that the loop exits on the safe
> > side. If someone who knows this area of the code can confirm that we
> > do not need it, I will create a new patch. In my environment I
> > observed that with my patch "count" is always greater than the actual
> > number of meta slots produced, because of this additional "+ 1".
> > When I took out the extra increment, count was always equal to the
> > actual meta slots produced and the loop exited safely, with more meta
> > slots produced under heavy traffic.
>
> I think that's an argument for removing it as well.
>
> The + 1 leading to an early exit seems benign when you think about one
> largish skb but imagine if you had 200 small (single page) skbs -- then
> you have effectively halved the size of the ring (or at least the
> batch).

I agree, and that's what I have observed as well: xen_netbk_kick_thread gets invoked to finish the work in the next iteration. I will create a patch replacing the "+ 1" with DIV_ROUND_UP(skb_headlen(skb), PAGE_SIZE).

> This:
> 	/* Filled the batch queue? */
> 	if (count + MAX_SKB_FRAGS >= XEN_NETIF_RX_RING_SIZE)
> 		break;
> seems a bit iffy to me too. I wonder if MAX_SKB_FRAGS should be
> max_required_rx_slots(vif)? Or maybe the preflight checks from
> xenvif_start_xmit save us from this fate?
>
> Ian.

You are right, Ian. The intention of this check seems to be to ensure that enough slots are still left before picking up the next skb. But instead of checking against the skb already received, we may have to do an skb_peek and invoke max_required_rx_slots for the skb that we are about to dequeue. Is there any better way?

Thanks
Siva
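For illustration, the follow-up hunk would presumably look something like this (a sketch of the agreed change against the pre-patch code, not the patch as actually posted):

	-		count += nr_frags + 1;
	+		count += nr_frags + DIV_ROUND_UP(skb_headlen(skb), PAGE_SIZE);
	+
	+		if (skb_shinfo(skb)->gso_size)
	+			count++;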
Ian Campbell
2012-Aug-22 13:22 UTC
Re: [PATCH RFC] xen/netback: Count ring slots properly when larger MTU sizes are used
On Wed, 2012-08-22 at 14:14 +0100, Palagummi, Siva wrote:
> > -----Original Message-----
> > From: Ian Campbell [mailto:Ian.Campbell@citrix.com]
> > Sent: Tuesday, August 21, 2012 6:45 PM
> > To: Palagummi, Siva
> > Cc: Jan Beulich; xen-devel@lists.xen.org
> > Subject: Re: [Xen-devel] [PATCH RFC] xen/netback: Count ring slots
> > properly when larger MTU sizes are used
> >
> > On Tue, 2012-08-14 at 22:17 +0100, Palagummi, Siva wrote:
> > > > -----Original Message-----
> > > > From: Jan Beulich [mailto:JBeulich@suse.com]
> > > > Sent: Monday, August 13, 2012 1:58 PM
> > > > To: Palagummi, Siva
> > > > Cc: xen-devel@lists.xen.org
> > > > Subject: Re: [Xen-devel] [PATCH RFC] xen/netback: Count ring slots
> > > > properly when larger MTU sizes are used
> > > >
> > > > >>> On 13.08.12 at 02:12, "Palagummi, Siva" <Siva.Palagummi@ca.com>
> > > > wrote:
> > > > > --- a/drivers/net/xen-netback/netback.c	2012-01-25 19:39:32.000000000 -0500
> > > > > +++ b/drivers/net/xen-netback/netback.c	2012-08-12 15:50:50.000000000 -0400
> > > > > @@ -623,6 +623,24 @@ static void xen_netbk_rx_action(struct x
> > > > >
> > > > >  		count += nr_frags + 1;
> > > > >
> > > > > +		/*
> > > > > +		 * The logic here should be somewhat similar to
> > > > > +		 * xen_netbk_count_skb_slots. In case of larger MTU size,
> > > >
> > > > Is there a reason why you can't simply use that function then?
> > > > Afaict it's being used on the very same skb before it gets put on
> > > > rx_queue already anyway.
> > >
> > > I did think about it. But that would mean iterating through a similar
> > > piece of code twice, with additional function calls. The
> > > netbk_gop_skb --> netbk_gop_frag_copy sequence already executes
> > > similar code, and I was not sure about any other implications either.
> > > So I decided to fix it by adding a few lines of code inline.
> >
> > I wonder if we could cache the result of the call to
> > xen_netbk_count_skb_slots in xenvif_start_xmit somewhere?
> >
> > > > > +		 * skb head length may be more than a PAGE_SIZE. We need to
> > > > > +		 * consider ring slots consumed by that data. If we do not,
> > > > > +		 * then within this loop itself we end up consuming more meta
> > > > > +		 * slots turning the BUG_ON below. With this fix we may end up
> > > > > +		 * iterating through xen_netbk_rx_action multiple times
> > > > > +		 * instead of crashing netback thread.
> > > > > +		 */
> > > > > +
> > > > > +		count += DIV_ROUND_UP(skb_headlen(skb), PAGE_SIZE);
> > > >
> > > > This now over-accounts by one I think (due to the "+ 1" above;
> > > > the calculation here really is to replace that increment).
> > > >
> > > > Jan
> > >
> > > I also wasn't sure about the actual purpose of the "+ 1" above:
> > > whether it is meant to account for skb_headlen, for the non-zero
> > > gso_size case, or for something else.
> >
> > I think its intention was to account for skb_headlen and therefore it
> > should be replaced.
> >
> > > That's why I left it in place, so that the loop exits on the safe
> > > side. If someone who knows this area of the code can confirm that we
> > > do not need it, I will create a new patch. In my environment I
> > > observed that with my patch "count" is always greater than the actual
> > > number of meta slots produced, because of this additional "+ 1".
> > > When I took out the extra increment, count was always equal to the
> > > actual meta slots produced and the loop exited safely, with more meta
> > > slots produced under heavy traffic.
> >
> > I think that's an argument for removing it as well.
> >
> > The + 1 leading to an early exit seems benign when you think about one
> > largish skb but imagine if you had 200 small (single page) skbs -- then
> > you have effectively halved the size of the ring (or at least the
> > batch).
>
> I agree, and that's what I have observed as well: xen_netbk_kick_thread
> gets invoked to finish the work in the next iteration. I will create a
> patch replacing the "+ 1" with DIV_ROUND_UP(skb_headlen(skb), PAGE_SIZE).
>
> > This:
> > 	/* Filled the batch queue? */
> > 	if (count + MAX_SKB_FRAGS >= XEN_NETIF_RX_RING_SIZE)
> > 		break;
> > seems a bit iffy to me too. I wonder if MAX_SKB_FRAGS should be
> > max_required_rx_slots(vif)? Or maybe the preflight checks from
> > xenvif_start_xmit save us from this fate?
> >
> > Ian.
>
> You are right, Ian. The intention of this check seems to be to ensure
> that enough slots are still left before picking up the next skb. But
> instead of checking against the skb already received, we may have to do
> an skb_peek and invoke max_required_rx_slots for the skb that we are
> about to dequeue. Is there any better way?

max_required_rx_slots doesn't take an skb as an argument, just a vif. It
returns the worst case number of slots for any skb on that vif.

Ian.
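For context, the helper Ian refers to looked roughly like this in the netback of that era (quoted from memory, so treat it as approximate rather than authoritative):

	static int max_required_rx_slots(struct xenvif *vif)
	{
		/* Worst case for the linear area is bounded by the MTU. */
		int max = DIV_ROUND_UP(vif->dev->mtu, PAGE_SIZE);

		/* SG/GSO vifs may also need frag slots plus extra info. */
		if (vif->can_sg || vif->gso || vif->gso_prefix)
			max += MAX_SKB_FRAGS + 1; /* extra_info + frags */

		return max;
	}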
Palagummi, Siva
2012-Aug-22 13:30 UTC
Re: [PATCH RFC] xen/netback: Count ring slots properly when larger MTU sizes are used
> -----Original Message-----
> From: Ian Campbell [mailto:Ian.Campbell@citrix.com]
> Sent: Wednesday, August 22, 2012 6:52 PM
> To: Palagummi, Siva
> Cc: Jan Beulich; xen-devel@lists.xen.org
> Subject: Re: [Xen-devel] [PATCH RFC] xen/netback: Count ring slots
> properly when larger MTU sizes are used
>
> On Wed, 2012-08-22 at 14:14 +0100, Palagummi, Siva wrote:
> > > -----Original Message-----
> > > From: Ian Campbell [mailto:Ian.Campbell@citrix.com]
> > > Sent: Tuesday, August 21, 2012 6:45 PM
> > > To: Palagummi, Siva
> > > Cc: Jan Beulich; xen-devel@lists.xen.org
> > > Subject: Re: [Xen-devel] [PATCH RFC] xen/netback: Count ring slots
> > > properly when larger MTU sizes are used
> > >
> > > On Tue, 2012-08-14 at 22:17 +0100, Palagummi, Siva wrote:
> > > > > -----Original Message-----
> > > > > From: Jan Beulich [mailto:JBeulich@suse.com]
> > > > > Sent: Monday, August 13, 2012 1:58 PM
> > > > > To: Palagummi, Siva
> > > > > Cc: xen-devel@lists.xen.org
> > > > > Subject: Re: [Xen-devel] [PATCH RFC] xen/netback: Count ring slots
> > > > > properly when larger MTU sizes are used
> > > > >
> > > > > >>> On 13.08.12 at 02:12, "Palagummi, Siva" <Siva.Palagummi@ca.com>
> > > > > wrote:
> > > > > > --- a/drivers/net/xen-netback/netback.c	2012-01-25 19:39:32.000000000 -0500
> > > > > > +++ b/drivers/net/xen-netback/netback.c	2012-08-12 15:50:50.000000000 -0400
> > > > > > @@ -623,6 +623,24 @@ static void xen_netbk_rx_action(struct x
> > > > > >
> > > > > >  		count += nr_frags + 1;
> > > > > >
> > > > > > +		/*
> > > > > > +		 * The logic here should be somewhat similar to
> > > > > > +		 * xen_netbk_count_skb_slots. In case of larger MTU size,
> > > > >
> > > > > Is there a reason why you can't simply use that function then?
> > > > > Afaict it's being used on the very same skb before it gets put on
> > > > > rx_queue already anyway.
> > > >
> > > > I did think about it. But that would mean iterating through a similar
> > > > piece of code twice, with additional function calls. The
> > > > netbk_gop_skb --> netbk_gop_frag_copy sequence already executes
> > > > similar code, and I was not sure about any other implications either.
> > > > So I decided to fix it by adding a few lines of code inline.
> > >
> > > I wonder if we could cache the result of the call to
> > > xen_netbk_count_skb_slots in xenvif_start_xmit somewhere?
> > >
> > > > > > +		 * skb head length may be more than a PAGE_SIZE. We need to
> > > > > > +		 * consider ring slots consumed by that data. If we do not,
> > > > > > +		 * then within this loop itself we end up consuming more meta
> > > > > > +		 * slots turning the BUG_ON below. With this fix we may end up
> > > > > > +		 * iterating through xen_netbk_rx_action multiple times
> > > > > > +		 * instead of crashing netback thread.
> > > > > > +		 */
> > > > > > +
> > > > > > +		count += DIV_ROUND_UP(skb_headlen(skb), PAGE_SIZE);
> > > > >
> > > > > This now over-accounts by one I think (due to the "+ 1" above;
> > > > > the calculation here really is to replace that increment).
> > > > >
> > > > > Jan
> > > >
> > > > I also wasn't sure about the actual purpose of the "+ 1" above:
> > > > whether it is meant to account for skb_headlen, for the non-zero
> > > > gso_size case, or for something else.
> > >
> > > I think its intention was to account for skb_headlen and therefore it
> > > should be replaced.
> > >
> > > > That's why I left it in place, so that the loop exits on the safe
> > > > side. If someone who knows this area of the code can confirm that we
> > > > do not need it, I will create a new patch. In my environment I
> > > > observed that with my patch "count" is always greater than the actual
> > > > number of meta slots produced, because of this additional "+ 1".
> > > > When I took out the extra increment, count was always equal to the
> > > > actual meta slots produced and the loop exited safely, with more meta
> > > > slots produced under heavy traffic.
> > >
> > > I think that's an argument for removing it as well.
> > >
> > > The + 1 leading to an early exit seems benign when you think about one
> > > largish skb but imagine if you had 200 small (single page) skbs -- then
> > > you have effectively halved the size of the ring (or at least the
> > > batch).
> >
> > I agree, and that's what I have observed as well: xen_netbk_kick_thread
> > gets invoked to finish the work in the next iteration. I will create a
> > patch replacing the "+ 1" with DIV_ROUND_UP(skb_headlen(skb), PAGE_SIZE).
> >
> > > This:
> > > 	/* Filled the batch queue? */
> > > 	if (count + MAX_SKB_FRAGS >= XEN_NETIF_RX_RING_SIZE)
> > > 		break;
> > > seems a bit iffy to me too. I wonder if MAX_SKB_FRAGS should be
> > > max_required_rx_slots(vif)? Or maybe the preflight checks from
> > > xenvif_start_xmit save us from this fate?
> > >
> > > Ian.
> >
> > You are right, Ian. The intention of this check seems to be to ensure
> > that enough slots are still left before picking up the next skb. But
> > instead of checking against the skb already received, we may have to do
> > an skb_peek and invoke max_required_rx_slots for the skb that we are
> > about to dequeue. Is there any better way?
>
> max_required_rx_slots doesn't take an skb as an argument, just a vif.
> It returns the worst case number of slots for any skb on that vif.
>
> Ian.

That's true. What I meant is to peek at the next skb and get the vif from that structure to invoke max_required_rx_slots. Don't you think we need to do it like that?

Thanks
Siva
Ian Campbell
2012-Aug-22 13:40 UTC
Re: [PATCH RFC] xen/netback: Count ring slots properly when larger MTU sizes are used
On Wed, 2012-08-22 at 14:30 +0100, Palagummi, Siva wrote:
> > > > This:
> > > > 	/* Filled the batch queue? */
> > > > 	if (count + MAX_SKB_FRAGS >= XEN_NETIF_RX_RING_SIZE)
> > > > 		break;
> > > > seems a bit iffy to me too. I wonder if MAX_SKB_FRAGS should be
> > > > max_required_rx_slots(vif)? Or maybe the preflight checks from
> > > > xenvif_start_xmit save us from this fate?
> > > >
> > > > Ian.
> > >
> > > You are right, Ian. The intention of this check seems to be to ensure
> > > that enough slots are still left before picking up the next skb. But
> > > instead of checking against the skb already received, we may have to
> > > do an skb_peek and invoke max_required_rx_slots for the skb that we
> > > are about to dequeue. Is there any better way?
> >
> > max_required_rx_slots doesn't take an skb as an argument, just a vif.
> > It returns the worst case number of slots for any skb on that vif.
> >
> > Ian.
>
> That's true. What I meant is to peek at the next skb and get the vif
> from that structure to invoke max_required_rx_slots. Don't you think we
> need to do it like that?

Do you mean something other than max_required_rx_slots? Because the
prototype of that function is

	static int max_required_rx_slots(struct xenvif *vif)

i.e. it doesn't need an skb.

I think it is acceptable to check for the worst case number of slots.
That's what we do e.g. in xen_netbk_rx_ring_full.

Using skb_peek might work too though, assuming all the locking etc is
ok -- this is a private queue so I think it is probably ok. Rather than
calculating the number of slots in xen_netbk_rx_action you probably want
to remember the value from the call to xen_netbk_count_skb_slots in
start_xmit. Perhaps by stashing it in skb->cb? (see NETFRONT_SKB_CB for
an example of how to do this)

Ian.
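For illustration, the skb->cb scheme Ian describes could look like this, modeled on netfront's NETFRONT_SKB_CB; the XENVIF_RX_CB name and its meta_slots field are hypothetical, chosen here only to show the shape of the idea:

	struct xenvif_rx_cb {
		unsigned int meta_slots; /* slots counted at start_xmit time */
	};

	#define XENVIF_RX_CB(skb) ((struct xenvif_rx_cb *)(skb)->cb)

	/* In xenvif_start_xmit(), record the count once: */
	XENVIF_RX_CB(skb)->meta_slots = xen_netbk_count_skb_slots(vif, skb);

	/* In xen_netbk_rx_action(), reuse it instead of recounting: */
	count += XENVIF_RX_CB(skb)->meta_slots;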
Palagummi, Siva
2012-Aug-22 14:32 UTC
Re: [PATCH RFC] xen/netback: Count ring slots properly when larger MTU sizes are used
> -----Original Message-----
> From: Ian Campbell [mailto:Ian.Campbell@citrix.com]
> Sent: Wednesday, August 22, 2012 7:10 PM
> To: Palagummi, Siva
> Cc: Jan Beulich; xen-devel@lists.xen.org
> Subject: Re: [Xen-devel] [PATCH RFC] xen/netback: Count ring slots
> properly when larger MTU sizes are used
>
> On Wed, 2012-08-22 at 14:30 +0100, Palagummi, Siva wrote:
> > > > > This:
> > > > > 	/* Filled the batch queue? */
> > > > > 	if (count + MAX_SKB_FRAGS >= XEN_NETIF_RX_RING_SIZE)
> > > > > 		break;
> > > > > seems a bit iffy to me too. I wonder if MAX_SKB_FRAGS should be
> > > > > max_required_rx_slots(vif)? Or maybe the preflight checks from
> > > > > xenvif_start_xmit save us from this fate?
> > > > >
> > > > > Ian.
> > > >
> > > > You are right, Ian. The intention of this check seems to be to
> > > > ensure that enough slots are still left before picking up the next
> > > > skb. But instead of checking against the skb already received, we
> > > > may have to do an skb_peek and invoke max_required_rx_slots for the
> > > > skb that we are about to dequeue. Is there any better way?
> > >
> > > max_required_rx_slots doesn't take an skb as an argument, just a vif.
> > > It returns the worst case number of slots for any skb on that vif.
> > >
> > > Ian.
> >
> > That's true. What I meant is to peek at the next skb and get the vif
> > from that structure to invoke max_required_rx_slots. Don't you think we
> > need to do it like that?
>
> Do you mean something other than max_required_rx_slots? Because the
> prototype of that function is
> 	static int max_required_rx_slots(struct xenvif *vif)
> i.e. it doesn't need an skb.
>
> I think it is acceptable to check for the worst case number of slots.
> That's what we do e.g. in xen_netbk_rx_ring_full.
>
> Using skb_peek might work too though, assuming all the locking etc is
> ok

I do want to use max_required_rx_slots only. So the code would look somewhat like this:

	skb = skb_peek(&netbk->rx_queue);
	if (skb == NULL)
		break;
	vif = netdev_priv(skb->dev);

	/* Filled the batch queue? */
	if (count + max_required_rx_slots(vif) >= XEN_NETIF_RX_RING_SIZE)
		break;

> -- this is a private queue so I think it is probably ok. Rather than
> calculating the number of slots in xen_netbk_rx_action you probably
> want to remember the value from the call to xen_netbk_count_skb_slots
> in start_xmit. Perhaps by stashing it in skb->cb? (see NETFRONT_SKB_CB
> for an example of how to do this)
>
> Ian.

Ok, I will look into this as well. It would definitely save some cycles in xen_netbk_rx_action. Apart from that, the calculations we already discussed should work fine, right?

Thanks for all your input, Ian.

Siva
Ian Campbell
2012-Aug-22 14:39 UTC
Re: [PATCH RFC] xen/netback: Count ring slots properly when larger MTU sizes are used
On Wed, 2012-08-22 at 15:32 +0100, Palagummi, Siva wrote:
> > -----Original Message-----
> > From: Ian Campbell [mailto:Ian.Campbell@citrix.com]
> > Sent: Wednesday, August 22, 2012 7:10 PM
> > To: Palagummi, Siva
> > Cc: Jan Beulich; xen-devel@lists.xen.org
> > Subject: Re: [Xen-devel] [PATCH RFC] xen/netback: Count ring slots
> > properly when larger MTU sizes are used
> >
> > Do you mean something other than max_required_rx_slots? Because the
> > prototype of that function is
> > 	static int max_required_rx_slots(struct xenvif *vif)
> > i.e. it doesn't need an skb.
> >
> > I think it is acceptable to check for the worst case number of slots.
> > That's what we do e.g. in xen_netbk_rx_ring_full.
> >
> > Using skb_peek might work too though, assuming all the locking etc is
> > ok
>
> I do want to use max_required_rx_slots only. So the code would look
> somewhat like this:
>
> 	skb = skb_peek(&netbk->rx_queue);
> 	if (skb == NULL)
> 		break;
> 	vif = netdev_priv(skb->dev);

Oh, I see why you need the skb now!

> 	/* Filled the batch queue? */
> 	if (count + max_required_rx_slots(vif) >= XEN_NETIF_RX_RING_SIZE)
> 		break;

You still need to actually dequeue the skb here.

> > -- this is a private queue so I think it is probably ok. Rather than
> > calculating the number of slots in xen_netbk_rx_action you probably
> > want to remember the value from the call to xen_netbk_count_skb_slots
> > in start_xmit. Perhaps by stashing it in skb->cb? (see NETFRONT_SKB_CB
> > for an example of how to do this)
> >
> > Ian.
>
> Ok, I will look into this as well. It would definitely save some cycles
> in xen_netbk_rx_action. Apart from that, the calculations we already
> discussed should work fine, right?

I think so. Proof in the pudding and all that ;-)

Ian.
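Pulling the pieces of the thread together, the batching portion of xen_netbk_rx_action() could end up looking something like this; a sketch only, combining the peek/check/dequeue ordering and the accounting discussed above, not a posted patch:

	while ((skb = skb_peek(&netbk->rx_queue)) != NULL) {
		struct xenvif *vif = netdev_priv(skb->dev);

		/* Stop batching if the worst case for this vif no
		 * longer fits in the ring. */
		if (count + max_required_rx_slots(vif) >= XEN_NETIF_RX_RING_SIZE)
			break;

		/* Safe to take it now; remove it from the private queue. */
		__skb_dequeue(&netbk->rx_queue);

		/* Head pages, plus one slot per frag, plus a GSO slot. */
		count += DIV_ROUND_UP(skb_headlen(skb), PAGE_SIZE);
		count += skb_shinfo(skb)->nr_frags;
		if (skb_shinfo(skb)->gso_size)
			count++;

		__skb_queue_tail(&rxq, skb);
	}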