Displaying 12 results from an estimated 12 matches for "vhost_set_vring_used_base".
2018 Oct 12
2
[PATCH net-next V2 6/8] vhost: packed ring support
...ST_GET_VRING_BASE:
> s.index = idx;
> s.num = vq->last_avail_idx;
> + if (vhost_has_feature(vq, VIRTIO_F_RING_PACKED))
> + s.num |= vq->last_avail_wrap_counter << 31;
> + if (copy_to_user(argp, &s, sizeof(s)))
> + r = -EFAULT;
> + break;
> + case VHOST_SET_VRING_USED_BASE:
> + /* Moving base with an active backend?
> + * You don't want to do that.
> + */
> + if (vq->private_data) {
> + r = -EBUSY;
> + break;
> + }
> + if (copy_from_user(&s, argp, sizeof(s))) {
> + r = -EFAULT;
> + break;
> + }
> + if (...
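For reference, here is a minimal userspace sketch (not taken from the patch) of how a VMM could decode the value returned by VHOST_GET_VRING_BASE under this encoding: the 16-bit last_avail_idx sits in the low bits of vhost_vring_state.num and the avail wrap counter in bit 31, exactly as the hunk above sets them. The fd and queue-index handling are placeholders.

/* Hypothetical helper: only the bit layout comes from the hunk above. */
#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/vhost.h>

static int get_packed_base(int vhost_fd, unsigned int queue_index,
                           uint16_t *last_avail_idx, int *wrap_counter)
{
        struct vhost_vring_state s = { .index = queue_index };

        if (ioctl(vhost_fd, VHOST_GET_VRING_BASE, &s) < 0)
                return -1;

        /* Bit 31 carries last_avail_wrap_counter for packed rings. */
        *wrap_counter = !!(s.num & (1U << 31));
        /* The 16-bit index lives in the low bits of num. */
        *last_avail_idx = s.num & 0xffff;
        return 0;
}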
2018 Oct 15
2
[PATCH net-next V2 6/8] vhost: packed ring support
...s.num = vq->last_avail_idx;
>>> + if (vhost_has_feature(vq, VIRTIO_F_RING_PACKED))
>>> + s.num |= vq->last_avail_wrap_counter << 31;
>>> + if (copy_to_user(argp, &s, sizeof(s)))
>>> + r = -EFAULT;
>>> + break;
>>> + case VHOST_SET_VRING_USED_BASE:
>>> + /* Moving base with an active backend?
>>> + * You don't want to do that.
>>> + */
>>> + if (vq->private_data) {
>>> + r = -EBUSY;
>>> + break;
>>> + }
>>> + if (copy_from_user(&s, argp, sizeof(s))...
2018 Oct 15
1
[PATCH net-next V2 6/8] vhost: packed ring support
...;> + if (vhost_has_feature(vq, VIRTIO_F_RING_PACKED))
>>>>> + s.num |= vq->last_avail_wrap_counter << 31;
>>>>> + if (copy_to_user(argp, &s, sizeof(s)))
>>>>> + r = -EFAULT;
>>>>> + break;
>>>>> + case VHOST_SET_VRING_USED_BASE:
>>>>> + /* Moving base with an active backend?
>>>>> + * You don't want to do that.
>>>>> + */
>>>>> + if (vq->private_data) {
>>>>> + r = -EBUSY;
>>>>> + break;
>>>>> + }
>...
2018 Oct 12
0
[PATCH net-next V2 6/8] vhost: packed ring support
...= idx;
> > s.num = vq->last_avail_idx;
> > + if (vhost_has_feature(vq, VIRTIO_F_RING_PACKED))
> > + s.num |= vq->last_avail_wrap_counter << 31;
> > + if (copy_to_user(argp, &s, sizeof(s)))
> > + r = -EFAULT;
> > + break;
> > + case VHOST_SET_VRING_USED_BASE:
> > + /* Moving base with an active backend?
> > + * You don't want to do that.
> > + */
> > + if (vq->private_data) {
> > + r = -EBUSY;
> > + break;
> > + }
> > + if (copy_from_user(&s, argp, sizeof(s))) {
> > + r = -E...
2018 Oct 15
0
[PATCH net-next V2 6/8] vhost: packed ring support
...> > + if (vhost_has_feature(vq, VIRTIO_F_RING_PACKED))
> > > > + s.num |= vq->last_avail_wrap_counter << 31;
> > > > + if (copy_to_user(argp, &s, sizeof(s)))
> > > > + r = -EFAULT;
> > > > + break;
> > > > + case VHOST_SET_VRING_USED_BASE:
> > > > + /* Moving base with an active backend?
> > > > + * You don't want to do that.
> > > > + */
> > > > + if (vq->private_data) {
> > > > + r = -EBUSY;
> > > > + break;
> > > > + }
> >...
2018 Jul 16
0
[PATCH net-next V2 6/8] vhost: packed ring support
..._wrap_counter;
+        }
        break;
    case VHOST_GET_VRING_BASE:
        s.index = idx;
        s.num = vq->last_avail_idx;
+        if (vhost_has_feature(vq, VIRTIO_F_RING_PACKED))
+            s.num |= vq->last_avail_wrap_counter << 31;
+        if (copy_to_user(argp, &s, sizeof(s)))
+            r = -EFAULT;
+        break;
+    case VHOST_SET_VRING_USED_BASE:
+        /* Moving base with an active backend?
+         * You don't want to do that.
+         */
+        if (vq->private_data) {
+            r = -EBUSY;
+            break;
+        }
+        if (copy_from_user(&s, argp, sizeof(s))) {
+            r = -EFAULT;
+            break;
+        }
+        if (vhost_has_feature(vq, VIRTIO_F_RING_PACKED)) {
+            wrap_coun...
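As a rough companion sketch (an assumption, not code from the series), a userspace restore path would presumably re-encode the saved index and wrap counter the same way before calling the new VHOST_SET_VRING_USED_BASE ioctl this patch introduces; the snippet above is cut off before the kernel-side decode, so the mirror-image layout below is guessed from the VHOST_GET_VRING_BASE hunk. As the comment in the patch notes, the ioctl is only accepted while no backend is attached.

/* Hypothetical restore helper; VHOST_SET_VRING_USED_BASE comes from the
 * series' updated include/uapi/linux/vhost.h, not the pre-series headers. */
#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/vhost.h>

static int set_packed_used_base(int vhost_fd, unsigned int queue_index,
                                uint16_t last_used_idx, int wrap_counter)
{
        struct vhost_vring_state s = {
                .index = queue_index,
                /* Assumed layout, mirroring the GET side: index in the
                 * low 16 bits, wrap counter in bit 31. */
                .num = last_used_idx | ((unsigned int)!!wrap_counter << 31),
        };

        /* Rejected with -EBUSY once a backend (vq->private_data) is set. */
        return ioctl(vhost_fd, VHOST_SET_VRING_USED_BASE, &s);
}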
2018 Jul 16
11
[PATCH net-next V2 0/8] Packed virtqueue support for vhost
Hi all:
This series implements packed virtqueues. The code was tested with
Tiwei's guest driver series at https://patchwork.ozlabs.org/cover/942297/
Pktgen tests for both RX and TX do not show an obvious difference from
split virtqueues. The main bottleneck is the guest Linux driver, since
it cannot stress vhost to 100% CPU utilization. A full TCP
benchmark is ongoing. Will test
2018 Jul 03
12
[PATCH net-next 0/8] Packed virtqueue for vhost
Hi all:
This series implements packed virtqueues. The code was tested with
Tiwei's RFC V6 at https://lkml.org/lkml/2018/6/5/120.
Pktgen tests for both RX and TX do not show an obvious difference from
split virtqueues. The main bottleneck is the guest Linux driver, since
it cannot stress vhost to 100% CPU utilization. A full TCP
benchmark is ongoing. Will test virtio-net pmd as well when