search for: 2241,11

Displaying 18 results from an estimated 18 matches for "2241,11".

2017 Jan 06
2
[PATCH V4 net-next 1/3] vhost: better detection of available buffers
...redhat.com> > --- > drivers/vhost/vhost.c | 8 ++++++-- > 1 file changed, 6 insertions(+), 2 deletions(-) > > diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c > index d643260..9f11838 100644 > --- a/drivers/vhost/vhost.c > +++ b/drivers/vhost/vhost.c > @@ -2241,11 +2241,15 @@ bool vhost_vq_avail_empty(struct vhost_dev *dev, struct vhost_virtqueue *vq) > __virtio16 avail_idx; > int r; > > + if (vq->avail_idx != vq->last_avail_idx) > + return false; > + > r = vhost_get_user(vq, avail_idx, &vq->avail->idx); >...
2017 Jan 09
2
[PATCH V4 net-next 1/3] vhost: better detection of available buffers
...> > 1 file changed, 6 insertions(+), 2 deletions(-) > > > > > > diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c > > > index d643260..9f11838 100644 > > > --- a/drivers/vhost/vhost.c > > > +++ b/drivers/vhost/vhost.c > > > @@ -2241,11 +2241,15 @@ bool vhost_vq_avail_empty(struct vhost_dev *dev, struct vhost_virtqueue *vq) > > > __virtio16 avail_idx; > > > int r; > > > + if (vq->avail_idx != vq->last_avail_idx) > > > + return false; > > > + > > > r = vhost_g...
2016 Dec 28
0
[PATCH net-next V2 1/3] vhost: better detection of available buffers
...g. Signed-off-by: Jason Wang <jasowang at redhat.com> --- drivers/vhost/vhost.c | 8 ++++++-- 1 file changed, 6 insertions(+), 2 deletions(-) diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c index d643260..9f11838 100644 --- a/drivers/vhost/vhost.c +++ b/drivers/vhost/vhost.c @@ -2241,11 +2241,15 @@ bool vhost_vq_avail_empty(struct vhost_dev *dev, struct vhost_virtqueue *vq) __virtio16 avail_idx; int r; + if (vq->avail_idx != vq->last_avail_idx) + return false; + r = vhost_get_user(vq, avail_idx, &vq->avail->idx); - if (r) + if (unlikely(r)) return fa...
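For readability, here is the patched vhost_vq_avail_empty() pieced together from the truncated hunk above. The tail of the function is not visible in the excerpt, so the last two statements are a reconstruction consistent with the quoted "6 insertions(+), 2 deletions(-)" diffstat rather than a verbatim copy of the patch:

bool vhost_vq_avail_empty(struct vhost_dev *dev, struct vhost_virtqueue *vq)
{
	__virtio16 avail_idx;
	int r;

	/* Added by the patch: cached entries that have not been consumed yet
	 * mean the ring cannot be empty, so skip the userspace access. */
	if (vq->avail_idx != vq->last_avail_idx)
		return false;

	r = vhost_get_user(vq, avail_idx, &vq->avail->idx);
	if (unlikely(r))	/* was a plain "if (r)" before the patch */
		return false;

	/* Reconstructed tail (cut off in the excerpt above): cache the freshly
	 * read index and compare it against last_avail_idx. */
	vq->avail_idx = vhost16_to_cpu(vq, avail_idx);

	return vq->avail_idx == vq->last_avail_idx;
}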
2017 Jan 06
0
[PATCH V4 net-next 1/3] vhost: better detection of available buffers
...> Signed-off-by: Jason Wang <jasowang at redhat.com> --- drivers/vhost/vhost.c | 8 ++++++-- 1 file changed, 6 insertions(+), 2 deletions(-) diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c index d643260..9f11838 100644 --- a/drivers/vhost/vhost.c +++ b/drivers/vhost/vhost.c @@ -2241,11 +2241,15 @@ bool vhost_vq_avail_empty(struct vhost_dev *dev, struct vhost_virtqueue *vq) __virtio16 avail_idx; int r; + if (vq->avail_idx != vq->last_avail_idx) + return false; + r = vhost_get_user(vq, avail_idx, &vq->avail->idx); - if (r) + if (unlikely(r)) return fa...
2017 Jan 09
0
[PATCH V4 net-next 1/3] vhost: better detection of available buffers
...drivers/vhost/vhost.c | 8 ++++++-- >> 1 file changed, 6 insertions(+), 2 deletions(-) >> >> diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c >> index d643260..9f11838 100644 >> --- a/drivers/vhost/vhost.c >> +++ b/drivers/vhost/vhost.c >> @@ -2241,11 +2241,15 @@ bool vhost_vq_avail_empty(struct vhost_dev *dev, struct vhost_virtqueue *vq) >> __virtio16 avail_idx; >> int r; >> >> + if (vq->avail_idx != vq->last_avail_idx) >> + return false; >> + >> r = vhost_get_user(vq, avail_idx, &...
2017 Jan 10
0
[PATCH V4 net-next 1/3] vhost: better detection of available buffers
...1 file changed, 6 insertions(+), 2 deletions(-) >>>> >>>> diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c >>>> index d643260..9f11838 100644 >>>> --- a/drivers/vhost/vhost.c >>>> +++ b/drivers/vhost/vhost.c >>>> @@ -2241,11 +2241,15 @@ bool vhost_vq_avail_empty(struct vhost_dev *dev, struct vhost_virtqueue *vq) >>>> __virtio16 avail_idx; >>>> int r; >>>> + if (vq->avail_idx != vq->last_avail_idx) >>>> + return false; >>>> + >>>>...
2017 Jan 18
0
[PATCH net-next V5 1/3] vhost: better detection of available buffers
...> Signed-off-by: Jason Wang <jasowang at redhat.com> --- drivers/vhost/vhost.c | 8 ++++++-- 1 file changed, 6 insertions(+), 2 deletions(-) diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c index d643260..9f11838 100644 --- a/drivers/vhost/vhost.c +++ b/drivers/vhost/vhost.c @@ -2241,11 +2241,15 @@ bool vhost_vq_avail_empty(struct vhost_dev *dev, struct vhost_virtqueue *vq) __virtio16 avail_idx; int r; + if (vq->avail_idx != vq->last_avail_idx) + return false; + r = vhost_get_user(vq, avail_idx, &vq->avail->idx); - if (r) + if (unlikely(r)) return fa...
2017 Jan 06
5
[PATCH V4 net-next 0/3] vhost_net tx batching
Hi: This series tries to implement tx batching support for vhost. This was done by using MSG_MORE as a hint for the underlying socket. The backend (e.g. tap) can then batch the packets temporarily in a list and submit them all once the number of batched packets exceeds a limit. Tests show obvious improvement on guest pktgen over mlx4(noqueue) on host: Mpps -+%
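The MSG_MORE batching idea described in this cover letter can be illustrated with a small userspace sketch; this is not the vhost_net code itself, and the socket descriptor and packet buffers here are hypothetical.

#include <sys/socket.h>
#include <sys/uio.h>

/* Send every packet except the last one with MSG_MORE set, hinting to the
 * layer below that more data follows so it may hold packets back and
 * submit them as one batch. */
static void send_batch(int sock_fd, struct iovec *pkts, int count)
{
	for (int i = 0; i < count; i++) {
		struct msghdr msg = {
			.msg_iov    = &pkts[i],
			.msg_iovlen = 1,
		};
		int flags = (i + 1 < count) ? MSG_MORE : 0;

		if (sendmsg(sock_fd, &msg, flags) < 0)
			break;	/* error handling abbreviated */
	}
}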
2017 Jan 18
7
[PATCH net-next V5 0/3] vhost_net tx batching
Hi: This series tries to implement tx batching support for vhost. This was done by using MSG_MORE as a hint for the underlying socket. The backend (e.g. tap) can then batch the packets temporarily in a list and submit them all once the number of batched packets exceeds a limit. Tests show obvious improvement on guest pktgen over mlx4(noqueue) on host: Mpps -+%
2016 Dec 30
5
[PATCH net-next V3 0/3] vhost_net tx batching
Hi: This series tries to implement tx batching support for vhost. This was done by using MSG_MORE as a hint for the underlying socket. The backend (e.g. tap) can then batch the packets temporarily in a list and submit them all once the number of batched packets exceeds a limit. Tests show obvious improvement on guest pktgen over mlx4(noqueue) on host: Mpps -+%
2016 Dec 28
7
[PATCH net-next V2 0/3] vhost net tx batching
Hi: This series tries to implement tx batching support for vhost. This was done by using MSG_MORE as a hint for the underlying socket. The backend (e.g. tap) can then batch the packets temporarily in a list and submit them all once the number of batched packets exceeds a limit. Tests show obvious improvement on guest pktgen over mlx4(noqueue) on host: Mpps -+%
2010 Nov 19
5
[PATCH 1/1] Ocfs2: Teach 'coherency=full' O_DIRECT writes to correctly up_read i_alloc_sem.
The former logic of ocfs2_file_aio_write() was a bit tricky about unlocking the rw_lock and i_alloc_sem, using some private bits in struct 'iocb' to communicate with ocfs2_dio_end_io(). It did work before we introduced the patch supporting the 'coherency=full,buffered' option, since rw_lock and i_alloc_sem were never both acquired at the same time, no matter whether we were doing buffered or direct IO or
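The unlock hand-off described in this excerpt can be sketched generically; this is a standalone pthread illustration rather than OCFS2 code, and the names dio_ctx, dio_submit and dio_complete are hypothetical. The submitter takes the read lock and records in the IO context that the completion callback is responsible for releasing it.

#include <pthread.h>
#include <stdbool.h>

/* Hypothetical IO context: who releases the read-side lock is recorded at
 * submit time and honoured at completion time. */
struct dio_ctx {
	pthread_rwlock_t *alloc_lock;
	bool unlock_on_complete;
};

static void dio_submit(struct dio_ctx *ctx)
{
	pthread_rwlock_rdlock(ctx->alloc_lock);
	ctx->unlock_on_complete = true;	/* completion path will release it */
	/* ... queue the direct IO here; dio_complete() runs when it finishes ... */
}

static void dio_complete(struct dio_ctx *ctx)
{
	/* Release the lock only if the submitter handed ownership over. */
	if (ctx->unlock_on_complete) {
		ctx->unlock_on_complete = false;
		pthread_rwlock_unlock(ctx->alloc_lock);
	}
}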