Displaying 20 results from an estimated 300 matches similar to: "[PATCH] vhost/scsi: truncate T10 PI iov_iter to prot_bytes"
2023 May 24
4
[PATCH 0/3] vhost-scsi: Fix IO hangs when using windows
The following patches were made over Linus's tree and fix an issue
where Windows guests send iovecs with offsets/lengths that result
in IOs that are not aligned to 512 bytes. The LIO layer then sends
them to Linux's block layer, which requires 512-byte alignment, so
depending on the block driver in use we get IO errors or hung IO.
The following patches have vhost-scsi detect
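The detection referred to boils down to checking each data segment
against the 512-byte sector size. A minimal userspace sketch of such
a check (iov_is_sector_aligned is a hypothetical name, not taken from
the driver):

#include <stdbool.h>
#include <stdint.h>
#include <sys/uio.h>

#define SECTOR_SIZE 512 /* alignment the block layer requires */

/* Hypothetical helper: true only if every segment's base address
 * and length are multiples of the sector size. */
static bool iov_is_sector_aligned(const struct iovec *iov, int iov_cnt)
{
	int i;

	for (i = 0; i < iov_cnt; i++) {
		if ((uintptr_t)iov[i].iov_base & (SECTOR_SIZE - 1))
			return false;
		if (iov[i].iov_len & (SECTOR_SIZE - 1))
			return false;
	}
	return true;
}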
2017 Jan 12
1
[patch] vhost/scsi: silence uninitialized variable warning
This is to silence an uninitialized variable warning in debug output.
The problem is this line:
pr_debug("vhost_get_vq_desc: head: %d, out: %u in: %u\n",
head, out, in);
If "head == vq->num" is true on the first iteration then "out" and "in"
aren't initialized. We handle that a few lines after the printk. I was
tempted to just delete the
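One straightforward way to silence this kind of warning, sketched
below (not necessarily the fix that was committed), is to give the
counters defined values before the descriptor fetch:

/* Sketch only: initialize the counters so the pr_debug never reads
 * indeterminate values, even when the ring is empty. */
unsigned int out = 0, in = 0;
int head;

head = vhost_get_vq_desc(vq, vq->iov, ARRAY_SIZE(vq->iov),
			 &out, &in, NULL, NULL);
pr_debug("vhost_get_vq_desc: head: %d, out: %u in: %u\n",
	 head, out, in);
if (head == vq->num)
	break;	/* no descriptor was fetched; out/in stay 0 */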
2023 Jul 09
4
[PATCH v2 0/2] vhost-scsi: Fix IO hangs when using windows
The following patches were made over Linus's tree and fix an issue
where Windows guests send iovecs with offsets/lengths that result
in IOs that are not aligned to 512 bytes. The LIO layer then sends
them to Linux's FS/block layer, which requires 512-byte alignment,
so depending on the FS/block driver in use we get IO errors or
hung IO.
The following patches have vhost-scsi
2019 May 16
6
[PATCH net 0/4] Prevent vhost kthread from hogging CPU
Hi:
This series tries to prevent guest-triggerable CPU hogging through
the vhost kthread. This is done by introducing and checking a weight
after each request. The patches have been tested with the reproducers
for vsock and virtio-net. Only a compile test was done for vhost-scsi.
Please review.
This addresses CVE-2019-3900.
Jason Wang (4):
vhost: introduce vhost_exceeds_weight()
vhost_net: fix possible
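The key piece is the helper named in the changelog: after each
request the handler feeds its running packet and byte counts to
vhost_exceeds_weight(), and once either budget is exhausted it
requeues itself and returns to the scheduler instead of looping.
Roughly (a sketch; field names may differ slightly from the final
commit):

bool vhost_exceeds_weight(struct vhost_virtqueue *vq,
			  int pkts, int total_len)
{
	struct vhost_dev *dev = vq->dev;

	if (unlikely(total_len >= dev->byte_weight) ||
	    unlikely(pkts >= dev->weight)) {
		/* budget exhausted: reschedule the work and let
		 * other processes run instead of spinning forever */
		vhost_poll_queue(&vq->poll);
		return true;
	}

	return false;
}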
2019 May 17
9
[PATCH V2 0/4] Prevent vhost kthread from hogging CPU
Hi:
This series tries to prevent guest-triggerable CPU hogging through
the vhost kthread. This is done by introducing and checking a weight
after each request. The patches have been tested with the reproducers
for vsock and virtio-net. Only a compile test was done for vhost-scsi.
Please review.
This addresses CVE-2019-3900.
Changes from V1:
- fix use-after-free in vsock patch
Jason Wang (4):
vhost:
2018 Mar 26
12
[RFC PATCH V2 0/8] Packed ring for vhost
Hi all:
This RFC implements the packed ring layout. The code was tested with
the pmd implemented by Jens at
http://dpdk.org/ml/archives/dev/2018-January/089417.html. A minor
change was needed in the pmd code to kick the virtqueue, since it
assumes a busy-polling backend.
Tests were done between localhost and guest. Testpmd (rxonly) in the
guest reports 2.4 Mpps. Testpmd (txonly) reports about 2.1 Mpps.
Notes: The event
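For context, the packed layout replaces the split ring's separate
descriptor table, avail ring, and used ring with a single descriptor
array that driver and device chase around; availability is signalled
by toggling per-descriptor AVAIL/USED wrap bits each lap. The
descriptor format, as defined in the virtio 1.1 spec:

struct vring_packed_desc {
	__le64 addr;	/* guest-physical address of the buffer */
	__le32 len;	/* buffer length in bytes */
	__le16 id;	/* buffer id, echoed back by the device */
	__le16 flags;	/* AVAIL/USED wrap bits, NEXT, WRITE, INDIRECT */
};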
2018 May 16
12
[RFC V4 PATCH 0/8] Packed ring layout for vhost
Hi all:
This RFC implements the packed ring layout. The code was tested with
Tiwei's RFC V3 at https://lkml.org/lkml/2018/4/25/34. Some fixups and
tweaks were needed on top of Tiwei's code to make it run with event
index.
Pktgen reports about a 20% improvement in PPS (with event index off).
More testing is ongoing.
Notes for testers:
- Starting from this version, vhost needs qemu co-operation to work
2018 Apr 23
11
[RFC V3 PATCH 0/8] Packed ring for vhost
Hi all:
This RFC implements the packed ring layout. The code was tested with
Tiwei's RFC V2 at https://lkml.org/lkml/2018/4/1/48. Some fixups and
tweaks were needed on top of Tiwei's code to make it run. TCP stream
and pktgen do not show an obvious difference compared with the split
ring.
Changes from V2:
- do not use & in checking desc_event_flags
- off should be most significant bit
-
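The desc_event_flags items above refer to the packed ring's event
suppression areas from the virtio 1.1 spec, where the wrap counter
shares a 16-bit word with the event offset:

struct vring_packed_desc_event {
	__le16 off_wrap;	/* bits 0-14: descriptor event offset;
				 * bit 15: event wrap counter */
	__le16 flags;		/* 0x0 enable, 0x1 disable,
				 * 0x2 desc-specific (event idx) */
};

Since flags holds one of three discrete values rather than a bitmask,
testing it with & is wrong, which is what the first changelog item
corrects.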
2018 May 29
9
[RFC V5 PATCH 0/8] Packed ring layout for vhost
Hi all:
This RFC implements the packed ring layout. The code was tested with
Tiwei's RFC V5 at https://lkml.org/lkml/2018/5/22/138. Some fixups and
tweaks were needed on top of Tiwei's code to make it run with event
index.
Pktgen reports about a 20% improvement in TX PPS when doing pktgen
from guest to host. No obvious improvement in RX PPS. We can do lots
of optimizations on top, but for simple
2018 Jul 16
11
[PATCH net-next V2 0/8] Packed virtqueue support for vhost
Hi all:
This series implements packed virtqueues. The code was tested with
Tiwei's guest driver series at https://patchwork.ozlabs.org/cover/942297/
Pktgen tests for both RX and TX do not show an obvious difference from
split virtqueues. The main bottleneck is the guest Linux driver, since
it cannot stress vhost to 100% CPU utilization. A full TCP
benchmark is ongoing. Will test
2018 Nov 01
4
[PULL] vhost: cleanups and fixes
The following changes since commit 84df9525b0c27f3ebc2ebb1864fa62a97fdedb7d:
Linux 4.19 (2018-10-22 07:37:37 +0100)
are available in the Git repository at:
git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost.git tags/for_linus
for you to fetch changes up to 79f800b2e76923cd8ce0aa659cb5c019d9643bc9:
MAINTAINERS: remove reference to bogus vsock file (2018-10-24 21:16:14 -0400)
2018 Aug 08
0
[PATCH] vhost/scsi: increase VHOST_SCSI_PREALLOC_PROT_SGLS to 2048
On Wed, Aug 08, 2018 at 01:29:55PM -0600, Greg Edwards wrote:
> The current value of VHOST_SCSI_PREALLOC_PROT_SGLS is too small to
> accommodate larger I/Os, e.g. 16-32 MiB, when the VIRTIO_SCSI_F_T10_PI
> feature bit is negotiated and the backing store supports T10 PI.
>
> vhost-scsi rejects the command with errors like:
>
> [ 59.581317] vhost_scsi_calc_sgls: requested
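The fix is the one-line bump the subject line describes; a sketch,
where the pre-patch value of 512 is an assumption based on the driver
source of that era, not something quoted in this thread:

/* drivers/vhost/scsi.c (sketch; pre-patch value of 512 is assumed) */
#define VHOST_SCSI_PREALLOC_PROT_SGLS	2048	/* was 512 */

With T10 PI, each 512-byte sector carries an 8-byte protection tuple,
so the protection scatterlist grows with I/O size and a 16-32 MiB
request can overrun a small preallocation.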
2018 Jul 03
12
[PATCH net-next 0/8] Packed virtqueue for vhost
Hi all:
This series implements packed virtqueues. The code was tested with
Tiwei's RFC V6 at https://lkml.org/lkml/2018/6/5/120.
Pktgen tests for both RX and TX do not show an obvious difference from
split virtqueues. The main bottleneck is the guest Linux driver, since
it cannot stress vhost to 100% CPU utilization. A full TCP
benchmark is ongoing. Will test virtio-net pmd as well when