search for: lunp

Displaying 17 results from an estimated 17 matches for "lunp".

2018 Dec 13
0
[PATCH] vhost: correct the related warning message
...ct work_struct *work)
>
> 	if (unlikely(!copy_from_iter_full(vc->req, vc->req_size,
> 				     &vc->out_iter))) {
> -		vq_err(vq, "Faulted on copy_from_iter\n");
> +		vq_err(vq, "Faulted on copy_from_iter_full\n");
> 	} else if (unlikely(*vc->lunp != 1)) {
> 		/* virtio-scsi spec requires byte 0 of the lun to be 1 */
> 		vq_err(vq, "Illegal virtio-scsi lun: %u\n", *vc->lunp);
> @@ -1441,7 +1441,7 @@ static void vhost_scsi_flush(struct vhost_scsi *vs)
> 	se_tpg = &tpg->se_tpg;
> 	ret = target_depend_...
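For reference, a minimal standalone sketch of the LUN check the quoted diff touches. The struct layout is simplified and the check_lun() helper is hypothetical; only the "byte 0 of the lun must be 1" rule comes from the virtio-scsi spec quoted above.

#include <stdint.h>
#include <stdio.h>

/* Simplified view of the request header: byte 0 of the 8-byte LUN
 * field must be 1 per the virtio-scsi specification. */
struct scsi_req {
	uint8_t lun[8];
	/* other fields elided */
};

/* Hypothetical helper mirroring the vq_err() check in the quoted diff. */
static int check_lun(const struct scsi_req *req)
{
	const uint8_t *lunp = &req->lun[0];

	if (*lunp != 1) {
		fprintf(stderr, "Illegal virtio-scsi lun: %u\n", *lunp);
		return -1;
	}
	return 0;
}

int main(void)
{
	struct scsi_req good = { .lun = { 1 } };
	struct scsi_req bad  = { .lun = { 0 } };

	printf("good: %d\n", check_lun(&good));	/* 0  */
	printf("bad:  %d\n", check_lun(&bad));	/* -1 */
	return 0;
}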
2014 Jul 01
0
[PATCH RFC 2/2] vhost: support urgent descriptors
...vhost_add_used_and_signal(&vs->dev, vq, head, 0);
+		vhost_add_used_and_signal(&vs->dev, vq, urgent, head, 0);
 	else
 		pr_err("Faulted on virtio_scsi_cmd_resp\n");
 }
@@ -980,6 +985,7 @@ vhost_scsi_handle_vq(struct vhost_scsi *vs, struct vhost_virtqueue *vq)
 	u8 *target, *lunp, task_attr;
 	bool hdr_pi;
 	void *req, *cdb;
+	bool urgent;
 	mutex_lock(&vq->mutex);
 	/*
@@ -993,7 +999,7 @@ vhost_scsi_handle_vq(struct vhost_scsi *vs, struct vhost_virtqueue *vq)
 	vhost_disable_notify(&vs->dev, vq);
 	for (;;) {
-		head = vhost_get_vq_desc(vq, vq->iov,
+...
2014 Jul 01
5
[PATCH RFC 1/2] virtio: support for urgent descriptors
Below should be useful for some experiments Jason is doing. I thought I'd send it out for early review/feedback. Compile-tested only at this point. The event idx feature allows us to defer interrupts until a specific number of descriptors have been used. Sometimes it might be useful to get an interrupt after a specific descriptor, regardless. This adds a descriptor flag for this, and an API to create an urgent
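As a rough illustration of the idea described above, not the patch itself: with event idx the device defers the interrupt until the used index passes a driver-chosen threshold, while an urgent descriptor forces a signal as soon as that descriptor is used. The DESC_F_URGENT bit and should_signal() below are invented names for the sketch; only the event-idx test mirrors vring_need_event() from the kernel headers.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical flag bit; the real patch defines its own bit in the
 * descriptor flags field. */
#define DESC_F_URGENT	0x8

struct desc {
	uint16_t flags;
	/* addr/len/next elided */
};

/* Should the device interrupt the driver after using a descriptor?
 * Either the event-idx threshold was crossed, or the descriptor was
 * marked urgent. */
static bool should_signal(const struct desc *d,
			  uint16_t new_used, uint16_t old_used,
			  uint16_t used_event)
{
	if (d->flags & DESC_F_URGENT)
		return true;	/* signal regardless of event idx */

	/* Standard event-idx test (vring_need_event() in the kernel). */
	return (uint16_t)(new_used - used_event - 1) <
	       (uint16_t)(new_used - old_used);
}

int main(void)
{
	struct desc plain  = { .flags = 0 };
	struct desc urgent = { .flags = DESC_F_URGENT };

	/* used idx moved 5 -> 6, driver asked to be notified at 10 */
	printf("plain:  %d\n", should_signal(&plain, 6, 5, 10));	/* 0 */
	printf("urgent: %d\n", should_signal(&urgent, 6, 5, 10));	/* 1 */
	return 0;
}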
2014 Oct 11
10
[PATCH net-next RFC 0/3] virtio-net: Conditionally enable tx interrupt
Hello all: We currently free old transmitted packets in ndo_start_xmit(), so any packet must also be orphaned there. This was done to reduce the overhead of tx interrupts and achieve better performance. But this may not work well for some protocols such as TCP streams. TCP depends on the value of sk_wmem_alloc to implement various optimizations for small-packet streams, such as TCP small queues and auto
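A toy user-space model of the trade-off described above (purely illustrative, not kernel code): if packets are orphaned in ndo_start_xmit(), the per-socket write-memory accounting that TCP small queues relies on drops to zero immediately, so the stack cannot throttle the sender; freeing only on tx completion keeps the accounting accurate. All names below are invented for the sketch.

#include <stdbool.h>
#include <stdio.h>

/* Toy stand-in for sk_wmem_alloc: bytes still charged to the socket. */
static unsigned int wmem;

#define TSQ_LIMIT 4096	/* pretend per-socket queue limit */

static void xmit(unsigned int len, bool orphan_in_start_xmit)
{
	wmem += len;		/* charged when the skb is queued */
	if (orphan_in_start_xmit)
		wmem -= len;	/* orphaning uncharges it immediately */
}

static void tx_completion(unsigned int len, bool orphan_in_start_xmit)
{
	if (!orphan_in_start_xmit)
		wmem -= len;	/* uncharged only when the skb is really freed */
}

static bool can_send_more(void)
{
	return wmem < TSQ_LIMIT;	/* TCP-small-queue style throttle */
}

int main(void)
{
	int i;

	/* Orphan at xmit time: the throttle never kicks in. */
	wmem = 0;
	for (i = 0; i < 8; i++)
		xmit(1500, true);
	printf("orphan at xmit:    wmem=%u can_send_more=%d\n", wmem, can_send_more());

	/* Free at tx interrupt: the sender is throttled until completions run. */
	wmem = 0;
	for (i = 0; i < 8; i++)
		xmit(1500, false);
	printf("free at tx irq:    wmem=%u can_send_more=%d\n", wmem, can_send_more());

	for (i = 0; i < 8; i++)
		tx_completion(1500, false);
	printf("after completions: wmem=%u can_send_more=%d\n", wmem, can_send_more());
	return 0;
}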
2018 Mar 26
12
[RFC PATCH V2 0/8] Packed ring for vhost
Hi all: This RFC implements the packed ring layout. The code was tested with the pmd implemented by Jens at http://dpdk.org/ml/archives/dev/2018-January/089417.html. A minor change was needed in the pmd code to kick the virtqueue, since it assumes a busy-polling backend. Tests were done between localhost and guest. Testpmd (rxonly) in the guest reports 2.4Mpps. Testpmd (txonly) reports about 2.1Mpps. Notes: The event
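For context, a sketch of the packed ring descriptor format this series targets, as later standardized in virtio 1.1: a single descriptor array where availability is tracked per descriptor with AVAIL/USED flag bits and wrap counters, instead of separate avail and used rings. The struct and helper below follow the spec's description; they are an illustration, not code from this series.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Packed-ring descriptor as described in the virtio 1.1 spec
 * (struct vring_packed_desc in later kernels). */
struct packed_desc {
	uint64_t addr;
	uint32_t len;
	uint16_t id;
	uint16_t flags;
};

#define DESC_F_AVAIL	(1u << 7)
#define DESC_F_USED	(1u << 15)

/* A descriptor is available to the device when its AVAIL bit matches
 * the driver's wrap counter and its USED bit does not. */
static bool desc_is_avail(const struct packed_desc *d, bool wrap_counter)
{
	bool avail = d->flags & DESC_F_AVAIL;
	bool used  = d->flags & DESC_F_USED;

	return avail == wrap_counter && used != wrap_counter;
}

int main(void)
{
	struct packed_desc d = {
		.addr  = 0x1000,
		.len   = 1500,
		.id    = 0,
		.flags = DESC_F_AVAIL,	/* made available, wrap counter = 1 */
	};

	printf("avail (wrap=1): %d\n", desc_is_avail(&d, true));	/* 1 */
	printf("avail (wrap=0): %d\n", desc_is_avail(&d, false));	/* 0 */
	return 0;
}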
2018 May 16
12
[RFC V4 PATCH 0/8] Packed ring layout for vhost
Hi all: This RFC implements the packed ring layout. The code was tested with Tiwei's RFC V3 at https://lkml.org/lkml/2018/4/25/34. Some fixups and tweaks were needed on top of Tiwei's code to make it run with event index. Pktgen reports about a 20% improvement in PPS (with event index off). More testing is ongoing. Notes for testers: - Starting from this version, vhost needs qemu co-operation to work
2018 Apr 23
11
[RFC V3 PATCH 0/8] Packed ring for vhost
Hi all: This RFC implements the packed ring layout. The code was tested with Tiwei's RFC V2 at https://lkml.org/lkml/2018/4/1/48. Some fixups and tweaks were needed on top of Tiwei's code to make it run. TCP stream and pktgen tests do not show an obvious difference compared with the split ring. Changes from V2: - do not use & when checking desc_event_flags - off should be the most significant bit -
2018 May 29
9
[RFC V5 PATCH 0/8] Packed ring layout for vhost
Hi all: This RFC implements the packed ring layout. The code was tested with Tiwei's RFC V5 at https://lkml.org/lkml/2018/5/22/138. Some fixups and tweaks were needed on top of Tiwei's code to make it run with event index. Pktgen reports about a 20% improvement in TX PPS when doing pktgen from guest to host. No obvious improvement in RX PPS. We can do lots of optimizations on top, but for simple
2018 Jul 16
11
[PATCH net-next V2 0/8] Packed virtqueue support for vhost
Hi all: This series implements packed virtqueues. The code was tested with Tiwei's guest driver series at https://patchwork.ozlabs.org/cover/942297/. Pktgen tests for both RX and TX do not show an obvious difference from split virtqueues. The main bottleneck is the guest Linux driver, since it cannot stress vhost to 100% CPU utilization. A full TCP benchmark is ongoing. Will test
2018 Jul 03
12
[PATCH net-next 0/8] Packed virtqueue for vhost
Hi all: This series implements packed virtqueues. The code was tested with Tiwei's RFC V6 at https://lkml.org/lkml/2018/6/5/120. Pktgen tests for both RX and TX do not show an obvious difference from split virtqueues. The main bottleneck is the guest Linux driver, since it cannot stress vhost to 100% CPU utilization. A full TCP benchmark is ongoing. Will test the virtio-net pmd as well when