Displaying 20 results from an estimated 2000 matches similar to: "[PATCH 0/2] virtio-blk: improve handling of DMA mapping failures"
2020 Feb 14
1
[PATCH 1/2] virtio-blk: fix hw_queue stopped on arbitrary error
Hi Halil,
When swiotlb full is hit for virtio_blk, the warning below shows up once (the
warning is not caused by this patch set). Is this expected, or just a false positive?
[ 54.767257] virtio-pci 0000:00:04.0: swiotlb buffer is full (sz: 16 bytes),
total 32768 (slots), used 258 (slots)
[ 54.767260] virtio-pci 0000:00:04.0: overflow 0x0000000075770110+16 of DMA
mask ffffffffffffffff bus limit 0
[
2020 Mar 03
1
[PATCH 0/2] virtio-blk: improve handling of DMA mapping failures
On Tue, Mar 03, 2020 at 03:12:52PM +0100, Halil Pasic wrote:
> On Thu, 13 Feb 2020 13:37:26 +0100
> Halil Pasic <pasic at linux.ibm.com> wrote:
>
> > Two patches handle new edge cases introduced by doing DMA mappings
> > (which can fail) in the virtio core.
> >
> > I stumbled upon this while stress testing I/O for Protected Virtual
> > Machines. I
2020 Feb 18
2
[PATCH 1/2] virtio-blk: fix hw_queue stopped on arbitrary error
On Thu, Feb 13, 2020 at 8:38 PM Halil Pasic <pasic at linux.ibm.com> wrote:
>
> Since nobody else is going to restart our hw_queue for us, the
> blk_mq_start_stopped_hw_queues() in virtblk_done() is not necessarily
> sufficient to ensure that the queue will get started again.
> In case of a global resource outage (-ENOMEM because of a mapping failure,
> e.g. because of
2020 Feb 13
0
[PATCH 1/2] virtio-blk: fix hw_queue stopped on arbitrary error
Since nobody else is going to restart our hw_queue for us, the
blk_mq_start_stopped_hw_queues() in virtblk_done() is not necessarily
sufficient to ensure that the queue will get started again.
In case of a global resource outage (-ENOMEM because of a mapping failure,
e.g. because the swiotlb is full) our virtqueue may be empty and we can get
stuck with a stopped hw_queue.
Let us not stop the queue
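For illustration, a rough sketch of the fix in the virtio_queue_rq() error
path of drivers/block/virtio_blk.c (simplified; variable names follow the
driver of that era, not the literal patch):

    err = virtblk_add_req(vblk->vqs[qid].vq, vbr, vbr->sg, num);
    if (err) {
            virtqueue_kick(vblk->vqs[qid].vq);
            /* Stop the hw_queue only when the virtqueue itself is full
             * (-ENOSPC): in-flight requests will complete and
             * virtblk_done() will restart the queue.  On -ENOMEM (e.g. a
             * failed DMA mapping because the swiotlb is full) the
             * virtqueue may be empty, so stopping here could leave the
             * hw_queue stopped with nobody left to restart it. */
            if (err == -ENOSPC)
                    blk_mq_stop_hw_queue(hctx);
            spin_unlock_irqrestore(&vblk->vqs[qid].lock, flags);
            if (err == -ENOMEM || err == -ENOSPC)
                    return BLK_STS_DEV_RESOURCE;
            return BLK_STS_IOERR;
    }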
2020 Feb 19
1
[PATCH 1/2] virtio-blk: fix hw_queue stopped on arbitrary error
On Tue, Feb 18, 2020 at 8:35 PM Halil Pasic <pasic at linux.ibm.com> wrote:
>
> On Tue, 18 Feb 2020 10:21:18 +0800
> Ming Lei <tom.leiming at gmail.com> wrote:
>
> > On Thu, Feb 13, 2020 at 8:38 PM Halil Pasic <pasic at linux.ibm.com> wrote:
> > >
> > > Since nobody else is going to restart our hw_queue for us, the
> > >
2020 Apr 18
0
[PATCH AUTOSEL 5.5 74/75] virtio-blk: improve virtqueue error to BLK_STS
From: Halil Pasic <pasic at linux.ibm.com>
[ Upstream commit 3d973b2e9a625996ee997c7303cd793b9d197c65 ]
Let's change the mapping of virtqueue_add errors to BLK_STS statuses,
so that -ENOSPC, which indicates the virtqueue is full, is still mapped
to BLK_STS_DEV_RESOURCE, but -ENOMEM, which indicates a non-device-specific
resource outage, is mapped to BLK_STS_RESOURCE.
Signed-off-by: Halil
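The described mapping boils down to something like the following in the
virtio_queue_rq() error path (a simplified sketch of the idea, not the
verbatim commit):

    switch (err) {
    case -ENOSPC:
            /* virtqueue full: a device-specific resource; blk-mq will
             * retry once the device completes outstanding requests */
            return BLK_STS_DEV_RESOURCE;
    case -ENOMEM:
            /* global resource outage, e.g. a failed DMA mapping:
             * let blk-mq retry on its own schedule */
            return BLK_STS_RESOURCE;
    default:
            return BLK_STS_IOERR;
    }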
2020 Apr 18
0
[PATCH AUTOSEL 5.4 73/78] virtio-blk: improve virtqueue error to BLK_STS
From: Halil Pasic <pasic at linux.ibm.com>
[ Upstream commit 3d973b2e9a625996ee997c7303cd793b9d197c65 ]
Let's change the mapping of virtqueue_add errors to BLK_STS statuses,
so that -ENOSPC, which indicates the virtqueue is full, is still mapped
to BLK_STS_DEV_RESOURCE, but -ENOMEM, which indicates a non-device-specific
resource outage, is mapped to BLK_STS_RESOURCE.
Signed-off-by: Halil
2020 Apr 18
0
[PATCH AUTOSEL 4.19 45/47] virtio-blk: improve virtqueue error to BLK_STS
From: Halil Pasic <pasic at linux.ibm.com>
[ Upstream commit 3d973b2e9a625996ee997c7303cd793b9d197c65 ]
Let's change the mapping of virtqueue_add errors to BLK_STS statuses,
so that -ENOSPC, which indicates the virtqueue is full, is still mapped
to BLK_STS_DEV_RESOURCE, but -ENOMEM, which indicates a non-device-specific
resource outage, is mapped to BLK_STS_RESOURCE.
Signed-off-by: Halil
2014 Jun 20
3
[PATCH v1 0/2] block: virtio-blk: support multi vq per virtio-blk
Hi,
These patches try to support multiple virtual queues (multi-vq) in one
virtio-blk device, and map each virtual queue (vq) to a blk-mq
hardware queue.
With this approach, both the scalability and the performance of a
virtio-blk device can be improved.
To verify the improvement, I implemented virtio-blk multi-vq over
qemu's dataplane feature, handling both host notification
from each vq and
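Roughly, the vq-to-hw-queue mapping works by advertising one blk-mq hardware
queue per negotiated virtqueue when the tag set is created; a sketch along
the lines of the virtio-blk probe path (values are illustrative, the posted
patch may differ in detail):

    memset(&vblk->tag_set, 0, sizeof(vblk->tag_set));
    vblk->tag_set.ops = &virtio_mq_ops;
    vblk->tag_set.nr_hw_queues = vblk->num_vqs;   /* one hw queue per vq */
    vblk->tag_set.queue_depth = virtblk_queue_depth;
    vblk->tag_set.numa_node = NUMA_NO_NODE;
    vblk->tag_set.flags = BLK_MQ_F_SHOULD_MERGE;
    err = blk_mq_alloc_tag_set(&vblk->tag_set);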
2014 Jun 26
6
[PATCH v3 0/2] block: virtio-blk: support multi vq per virtio-blk
Hi,
These patches try to support multiple virtual queues (multi-vq) in one
virtio-blk device, and map each virtual queue (vq) to a blk-mq
hardware queue.
With this approach, both the scalability and the performance of a
virtio-blk device can be improved.
To verify the improvement, I implemented virtio-blk multi-vq over
qemu's dataplane feature, handling both host notification
from each vq and
2014 Jun 26
7
[PATCH v2 0/2] block: virtio-blk: support multi vq per virtio-blk
Hi,
These patches try to support multiple virtual queues (multi-vq) in one
virtio-blk device, and map each virtual queue (vq) to a blk-mq
hardware queue.
With this approach, both the scalability and the performance of a
virtio-blk device can be improved.
To verify the improvement, I implemented virtio-blk multi-vq over
qemu's dataplane feature, handling both host notification
from each vq and
2014 Jun 13
6
[RFC PATCH 0/2] block: virtio-blk: support multi vq per virtio-blk
Hi,
These patches try to support multiple virtual queues (multi-vq) in one
virtio-blk device, and map each virtual queue (vq) to a blk-mq
hardware queue.
With this approach, both the scalability and performance problems of the
virtio-blk device are addressed.
To verify the improvement, I implemented virtio-blk multi-vq over
qemu's dataplane feature, handling both host notification
from each vq
2014 Jun 26
1
[PATCH v2 2/2] block: virtio-blk: support multi virt queues per virtio-blk device
On Thu, Jun 26, 2014 at 10:08:46AM +0800, Ming Lei wrote:
> Firstly, this patch supports more than one virtual queue per virtio-blk
> device.
>
> Secondly, this patch maps each virtual queue to a blk-mq hardware queue.
>
> With this approach, both scalability and performance can be improved.
>
> Signed-off-by: Ming Lei <ming.lei at canonical.com>
> ---
>
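In the submission path the hardware-queue index then selects the vq,
roughly as follows (a sketch, not the exact patch; the per-vq lock keeps
submissions to different vqs independent):

    /* inside virtio_queue_rq() */
    struct virtio_blk *vblk = hctx->queue->queuedata;
    int qid = hctx->queue_num;          /* hw queue index == vq index */

    spin_lock_irqsave(&vblk->vqs[qid].lock, flags);
    err = virtblk_add_req(vblk->vqs[qid].vq, vbr, vbr->sg, num);
    spin_unlock_irqrestore(&vblk->vqs[qid].lock, flags);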
2018 Mar 30
2
[PATCH v3] virtio_blk: add DISCARD and WRITE ZEROES command support
The existing virtio-blk protocol doesn't have DISCARD/WRITE ZEROES
command support, which impacts performance when using an SSD
backend over file systems.
The idea here is to use a 16-byte payload as one descriptor for the
DISCARD/WRITE ZEROES command, so users can put several ranges into
one command. To support this feature, two feature
flags
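For reference, the 16-byte per-range payload described here is essentially
the following (field names as in the layout that was eventually merged into
the Linux uapi header; the v3 posting may differ in detail):

    /* one DISCARD / WRITE ZEROES range; a request may carry several */
    struct virtio_blk_discard_write_zeroes {
            __le64 sector;        /* start sector of the range        */
            __le32 num_sectors;   /* number of sectors in the range   */
            __le32 flags;         /* e.g. unmap hint for WRITE ZEROES */
    };                            /* 8 + 4 + 4 = 16 bytes             */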