Michael S. Tsirkin
2013-Sep-23 07:16 UTC
[PATCH V3 4/6] vhost_net: determine whether or not to use zerocopy at one time
On Thu, Sep 05, 2013 at 10:54:44AM +0800, Jason Wang wrote:
> On 09/04/2013 07:59 PM, Michael S. Tsirkin wrote:
> > On Mon, Sep 02, 2013 at 04:40:59PM +0800, Jason Wang wrote:
> >> Currently, even if the packet length is smaller than VHOST_GOODCOPY_LEN, if
> >> upend_idx != done_idx we still set zcopy_used to true and rollback this choice
> >> later. This could be avoided by determining zerocopy once by checking all
> >> conditions at one time before.
> >>
> >> Signed-off-by: Jason Wang <jasowang at redhat.com>
> >> ---
> >>  drivers/vhost/net.c |   47 ++++++++++++++++++++---------------------------
> >>  1 files changed, 20 insertions(+), 27 deletions(-)
> >>
> >> diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
> >> index 8a6dd0d..3f89dea 100644
> >> --- a/drivers/vhost/net.c
> >> +++ b/drivers/vhost/net.c
> >> @@ -404,43 +404,36 @@ static void handle_tx(struct vhost_net *net)
> >> 				       iov_length(nvq->hdr, s), hdr_size);
> >> 			break;
> >> 		}
> >> -		zcopy_used = zcopy && (len >= VHOST_GOODCOPY_LEN ||
> >> -				       nvq->upend_idx != nvq->done_idx);
> >> +
> >> +		zcopy_used = zcopy && len >= VHOST_GOODCOPY_LEN
> >> +			     && (nvq->upend_idx + 1) % UIO_MAXIOV !=
> >> +				nvq->done_idx
> > Thinking about this, this looks strange.
> > The original idea was that once we start doing zcopy, we keep
> > using the heads ring even for short packets until no zcopy is outstanding.
>
> What's the reason for keep using the heads ring?

To keep completions in order.

> > What's the logic behind (nvq->upend_idx + 1) % UIO_MAXIOV != nvq->done_idx
> > here?
>
> Because we initialize both upend_idx and done_idx to zero, so upend_idx
> != done_idx could not be used to check whether or not the heads ring
> were full.

But what does ring full have to do with zerocopy use?

> >> +			     && vhost_net_tx_select_zcopy(net);
> >>
> >> 		/* use msg_control to pass vhost zerocopy ubuf info to skb */
> >> 		if (zcopy_used) {
> >> +			struct ubuf_info *ubuf;
> >> +			ubuf = nvq->ubuf_info + nvq->upend_idx;
> >> +
> >> 			vq->heads[nvq->upend_idx].id = head;
> >> -			if (!vhost_net_tx_select_zcopy(net) ||
> >> -			    len < VHOST_GOODCOPY_LEN) {
> >> -				/* copy don't need to wait for DMA done */
> >> -				vq->heads[nvq->upend_idx].len =
> >> -							VHOST_DMA_DONE_LEN;
> >> -				msg.msg_control = NULL;
> >> -				msg.msg_controllen = 0;
> >> -				ubufs = NULL;
> >> -			} else {
> >> -				struct ubuf_info *ubuf;
> >> -				ubuf = nvq->ubuf_info + nvq->upend_idx;
> >> -
> >> -				vq->heads[nvq->upend_idx].len =
> >> -							VHOST_DMA_IN_PROGRESS;
> >> -				ubuf->callback = vhost_zerocopy_callback;
> >> -				ubuf->ctx = nvq->ubufs;
> >> -				ubuf->desc = nvq->upend_idx;
> >> -				msg.msg_control = ubuf;
> >> -				msg.msg_controllen = sizeof(ubuf);
> >> -				ubufs = nvq->ubufs;
> >> -				kref_get(&ubufs->kref);
> >> -			}
> >> +			vq->heads[nvq->upend_idx].len = VHOST_DMA_IN_PROGRESS;
> >> +			ubuf->callback = vhost_zerocopy_callback;
> >> +			ubuf->ctx = nvq->ubufs;
> >> +			ubuf->desc = nvq->upend_idx;
> >> +			msg.msg_control = ubuf;
> >> +			msg.msg_controllen = sizeof(ubuf);
> >> +			ubufs = nvq->ubufs;
> >> +			kref_get(&ubufs->kref);
> >> 			nvq->upend_idx = (nvq->upend_idx + 1) % UIO_MAXIOV;
> >> -		} else
> >> +		} else {
> >> 			msg.msg_control = NULL;
> >> +			ubufs = NULL;
> >> +		}
> >> 		/* TODO: Check specific error and bomb out unless ENOBUFS? */
> >> 		err = sock->ops->sendmsg(NULL, sock, &msg, len);
> >> 		if (unlikely(err < 0)) {
> >> 			if (zcopy_used) {
> >> -				if (ubufs)
> >> -					vhost_net_ubuf_put(ubufs);
> >> +				vhost_net_ubuf_put(ubufs);
> >> 				nvq->upend_idx = ((unsigned)nvq->upend_idx - 1)
> >> 					% UIO_MAXIOV;
> >> 			}
> >> --
> >> 1.7.1
> >
> > --
> > To unsubscribe from this list: send the line "unsubscribe kvm" in
> > the body of a message to majordomo at vger.kernel.org
> > More majordomo info at http://vger.kernel.org/majordomo-info.html
Jason Wang
2013-Sep-26 04:30 UTC
[PATCH V3 4/6] vhost_net: determine whether or not to use zerocopy at one time
On 09/23/2013 03:16 PM, Michael S. Tsirkin wrote:
> On Thu, Sep 05, 2013 at 10:54:44AM +0800, Jason Wang wrote:
>> On 09/04/2013 07:59 PM, Michael S. Tsirkin wrote:
>>> On Mon, Sep 02, 2013 at 04:40:59PM +0800, Jason Wang wrote:
>>>> Currently, even if the packet length is smaller than VHOST_GOODCOPY_LEN, if
>>>> upend_idx != done_idx we still set zcopy_used to true and rollback this choice
>>>> later. This could be avoided by determining zerocopy once by checking all
>>>> conditions at one time before.
[...]
>>> Thinking about this, this looks strange.
>>> The original idea was that once we start doing zcopy, we keep
>>> using the heads ring even for short packets until no zcopy is outstanding.
>>
>> What's the reason for keep using the heads ring?
>
> To keep completions in order.

Ok, I will do some test to see the impact.

>>> What's the logic behind (nvq->upend_idx + 1) % UIO_MAXIOV != nvq->done_idx
>>> here?
>>
>> Because we initialize both upend_idx and done_idx to zero, so upend_idx
>> != done_idx could not be used to check whether or not the heads ring
>> were full.
>
> But what does ring full have to do with zerocopy use?

It was used to forbid zerocopy when the heads ring is full, but since we
have the limitation now, it could be removed.
Jason Wang
2013-Sep-29 09:36 UTC
[PATCH V3 4/6] vhost_net: determine whether or not to use zerocopy at one time
On 09/26/2013 12:30 PM, Jason Wang wrote:
> On 09/23/2013 03:16 PM, Michael S. Tsirkin wrote:
>> On Thu, Sep 05, 2013 at 10:54:44AM +0800, Jason Wang wrote:
>>> On 09/04/2013 07:59 PM, Michael S. Tsirkin wrote:
>>>> On Mon, Sep 02, 2013 at 04:40:59PM +0800, Jason Wang wrote:
>>>>> Currently, even if the packet length is smaller than VHOST_GOODCOPY_LEN, if
>>>>> upend_idx != done_idx we still set zcopy_used to true and rollback this choice
>>>>> later. This could be avoided by determining zerocopy once by checking all
>>>>> conditions at one time before.
[...]
>>>> Thinking about this, this looks strange.
>>>> The original idea was that once we start doing zcopy, we keep
>>>> using the heads ring even for short packets until no zcopy is outstanding.
>>>
>>> What's the reason for keep using the heads ring?
>>
>> To keep completions in order.
>
> Ok, I will do some test to see the impact.

Out-of-order completion will happen when switching between zero copy and
non-zero copy, so I tested this with two netperf sessions in burst mode,
one doing 1-byte TCP_RR and the other 512-byte TCP_RR. There's no
difference with the patch applied.