Displaying 20 results from an estimated 41 matches for "blk_mq_stop_hw_queues".
2020 Feb 13
7
[PATCH 0/2] virtio-blk: improve handling of DMA mapping failures
Two patches handle new edge cases introduced by doing DMA mappings
(which can fail) in virtio core.
I stumbled upon this while stress-testing I/O for Protected Virtual
Machines. I deliberately chose a tiny swiotlb size and generated
load with fio. With more than one virtio-blk disk in use I experienced
hangs.
The goal of this series is to fix those hangs.
Halil Pasic (2):
2020 Feb 14
1
[PATCH 1/2] virtio-blk: fix hw_queue stopped on arbitrary error
Hi Halil,
When swiotlb full is hit for virtio_blk, the warning below shows up once (the
warning is not caused by this patch set). Is this expected, or just a false positive?
[ 54.767257] virtio-pci 0000:00:04.0: swiotlb buffer is full (sz: 16 bytes),
total 32768 (slots), used 258 (slots)
[ 54.767260] virtio-pci 0000:00:04.0: overflow 0x0000000075770110+16 of DMA
mask ffffffffffffffff bus limit 0
[
2020 Feb 13
0
[PATCH 1/2] virtio-blk: fix hw_queue stopped on arbitrary error
Since nobody else is going to restart our hw_queue for us, the
blk_mq_start_stopped_hw_queues() in virtblk_done() is not necessarily
sufficient to ensure that the queue will get started again.
In case of a global resource outage (-ENOMEM because of a mapping failure,
e.g. because swiotlb is full) our virtqueue may be empty and we can get
stuck with a stopped hw_queue.
Let us not stop the queue
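As a rough illustration of the intent (a sketch with assumed names and locking, not the actual patch): the ->queue_rq() error path stops the hardware queue only when the virtqueue itself is full, because only then is a later completion guaranteed to restart it.

	/* Sketch of the virtio_blk ->queue_rq() error path after the fix;
	 * names such as virtblk_add_req() mirror the driver, but the body
	 * is illustrative only. */
	err = virtblk_add_req(vblk->vqs[qid].vq, vbr, vbr->sg, num);
	if (err) {
		/* Only -ENOSPC means the virtqueue is full; only then is it
		 * safe to stop the hw queue, because virtblk_done() will
		 * restart it once an in-flight request completes.  Any other
		 * error (e.g. -ENOMEM from a failed DMA mapping) may leave
		 * the virtqueue empty, so the queue must not be stopped. */
		if (err == -ENOSPC)
			blk_mq_stop_hw_queue(hctx);
		spin_unlock_irqrestore(&vblk->vqs[qid].lock, flags);
		/* hypothetical helper, sketched under the BLK_STS entries
		 * further down */
		return virtblk_result_from_errno(err);
	}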
2014 Oct 06
2
[PATCH 06/16] virtio_blk: drop config_enable
...ev);
>
> /* Prevent config work handler from accessing the device. */
ditto on the comment
> - mutex_lock(&vblk->config_lock);
> - vblk->config_enable = false;
> - mutex_unlock(&vblk->config_lock);
> -
> flush_work(&vblk->config_work);
>
> blk_mq_stop_hw_queues(vblk->disk->queue);
2020 Apr 18
0
[PATCH AUTOSEL 5.5 74/75] virtio-blk: improve virtqueue error to BLK_STS
From: Halil Pasic <pasic at linux.ibm.com>
[ Upstream commit 3d973b2e9a625996ee997c7303cd793b9d197c65 ]
Let's change the mapping of virtqueue_add errors to BLK_STS
statuses, so that -ENOSPC, which indicates the virtqueue is full, is still
mapped to BLK_STS_DEV_RESOURCE, but -ENOMEM, which indicates a non-device
specific resource outage, is mapped to BLK_STS_RESOURCE.
Signed-off-by: Halil
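The mapping described by this commit message can be summarised as a small helper; virtblk_result_from_errno() is a hypothetical name (the one used in the earlier sketch), shown as a standalone function purely for illustration, while the actual commit applies this mapping in the ->queue_rq() path.

	/* Hypothetical helper: map a virtqueue_add errno to a blk_status_t. */
	static blk_status_t virtblk_result_from_errno(int err)
	{
		switch (err) {
		case -ENOSPC:
			/* virtqueue full: device-specific resource; the driver
			 * restarts the stopped hw queue from virtblk_done(). */
			return BLK_STS_DEV_RESOURCE;
		case -ENOMEM:
			/* e.g. swiotlb exhausted: generic resource outage,
			 * retried by blk-mq itself. */
			return BLK_STS_RESOURCE;
		default:
			return BLK_STS_IOERR;
		}
	}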
2020 Apr 18
0
[PATCH AUTOSEL 5.4 73/78] virtio-blk: improve virtqueue error to BLK_STS
From: Halil Pasic <pasic at linux.ibm.com>
[ Upstream commit 3d973b2e9a625996ee997c7303cd793b9d197c65 ]
Let's change the mapping of virtqueue_add errors to BLK_STS
statuses, so that -ENOSPC, which indicates the virtqueue is full, is still
mapped to BLK_STS_DEV_RESOURCE, but -ENOMEM, which indicates a non-device
specific resource outage, is mapped to BLK_STS_RESOURCE.
Signed-off-by: Halil
2020 Apr 18
0
[PATCH AUTOSEL 4.19 45/47] virtio-blk: improve virtqueue error to BLK_STS
From: Halil Pasic <pasic at linux.ibm.com>
[ Upstream commit 3d973b2e9a625996ee997c7303cd793b9d197c65 ]
Let's change the mapping of virtqueue_add errors to BLK_STS
statuses, so that -ENOSPC, which indicates the virtqueue is full, is still
mapped to BLK_STS_DEV_RESOURCE, but -ENOMEM, which indicates a non-device
specific resource outage, is mapped to BLK_STS_RESOURCE.
Signed-off-by: Halil
2014 Oct 05
0
[PATCH 06/16] virtio_blk: drop config_enable
...freeze(struct virtio_device *vdev)
vdev->config->reset(vdev);
/* Prevent config work handler from accessing the device. */
- mutex_lock(&vblk->config_lock);
- vblk->config_enable = false;
- mutex_unlock(&vblk->config_lock);
-
flush_work(&vblk->config_work);
blk_mq_stop_hw_queues(vblk->disk->queue);
@@ -823,7 +809,6 @@ static int virtblk_restore(struct virtio_device *vdev)
struct virtio_blk *vblk = vdev->priv;
int ret;
- vblk->config_enable = true;
ret = init_vq(vdev->priv);
if (!ret)
blk_mq_start_stopped_hw_queues(vblk->disk->queue, true);...
2014 Oct 06
0
[PATCH 06/16] virtio_blk: drop config_enable
...;
> ditto on the comment
Same here and in -net.
Pls confirm.
> > - mutex_lock(&vblk->config_lock);
> > - vblk->config_enable = false;
> > - mutex_unlock(&vblk->config_lock);
> > -
> > flush_work(&vblk->config_work);
> >
> > blk_mq_stop_hw_queues(vblk->disk->queue);
2014 Jun 22
2
[PATCH v1 2/2] block: virtio-blk: support multi virt queues per virtio-blk device
On Fri, Jun 20, 2014 at 11:29:40PM +0800, Ming Lei wrote:
> Firstly, this patch supports more than one virtual queue per virtio-blk
> device.
>
> Secondly this patch maps the virtual queue to blk-mq's hardware queue.
>
> With this approach, both scalability and performance can be improved.
>
> Signed-off-by: Ming Lei <ming.lei at canonical.com>
> ---
>
2014 Oct 06
0
[PATCH v2 05/15] virtio_blk: drop config_enable
...set(vdev);
- /* Prevent config work handler from accessing the device. */
- mutex_lock(&vblk->config_lock);
- vblk->config_enable = false;
- mutex_unlock(&vblk->config_lock);
-
+ /* Make sure no work handler is accessing the device. */
flush_work(&vblk->config_work);
blk_mq_stop_hw_queues(vblk->disk->queue);
@@ -823,7 +809,6 @@ static int virtblk_restore(struct virtio_device *vdev)
struct virtio_blk *vblk = vdev->priv;
int ret;
- vblk->config_enable = true;
ret = init_vq(vdev->priv);
if (!ret)
blk_mq_start_stopped_hw_queues(vblk->disk->queue, true);...
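Condensed from the quoted diff, the resulting freeze/restore flow looks roughly like this (a sketch; error handling and unrelated details are omitted):

	static int virtblk_freeze(struct virtio_device *vdev)
	{
		struct virtio_blk *vblk = vdev->priv;

		/* Ensure no more interrupts are delivered. */
		vdev->config->reset(vdev);

		/* Make sure no work handler is accessing the device. */
		flush_work(&vblk->config_work);

		blk_mq_stop_hw_queues(vblk->disk->queue);

		vdev->config->del_vqs(vdev);
		return 0;
	}

	static int virtblk_restore(struct virtio_device *vdev)
	{
		struct virtio_blk *vblk = vdev->priv;
		int ret;

		ret = init_vq(vdev->priv);
		if (!ret)
			blk_mq_start_stopped_hw_queues(vblk->disk->queue, true);
		return ret;
	}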
2019 Dec 12
4
[PATCH] virtio-blk: remove VIRTIO_BLK_F_SCSI support
Since the need for a special flag to support SCSI passthrough on a
block device was added in May 2017 the SCSI passthrough support in
virtio-blk has been disabled. It has always been a bad idea
(just ask the original author..) and we have virtio-scsi for proper
passthrough. The feature also never made it into the virtio 1.0
or later specifications.
Signed-off-by: Christoph Hellwig <hch at
2014 Jun 20
3
[PATCH v1 0/2] block: virtio-blk: support multi vq per virtio-blk
Hi,
These patches try to support multiple virtual queues (multi-vq) in one
virtio-blk device, and map each virtual queue (vq) to a blk-mq
hardware queue.
With this approach, both scalability and performance of the virtio-blk
device can be improved.
To verify the improvement, I implemented virtio-blk multi-vq on top of
qemu's dataplane feature, and both handling host notifications
from each vq and
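A sketch of the core idea, with assumed field names (the real patch also deals with tag allocation, per-vq naming, interrupts and cleanup):

	/* Advertise one blk-mq hardware queue per virtqueue. */
	vblk->tag_set.nr_hw_queues = num_vqs;	/* previously hard-coded to 1 */

	/* In the request path, the hardware queue index selects the vq, so
	 * submissions on different hw queues use different virtqueues. */
	qid = hctx->queue_num;
	spin_lock_irqsave(&vblk->vqs[qid].lock, flags);
	err = virtblk_add_req(vblk->vqs[qid].vq, vbr, vbr->sg, num);
	notify = virtqueue_kick_prepare(vblk->vqs[qid].vq);
	spin_unlock_irqrestore(&vblk->vqs[qid].lock, flags);
	if (notify)
		virtqueue_notify(vblk->vqs[qid].vq);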
2014 Jun 26
1
[PATCH v2 2/2] block: virtio-blk: support multi virt queues per virtio-blk device
On Thu, Jun 26, 2014 at 10:08:46AM +0800, Ming Lei wrote:
> Firstly, this patch supports more than one virtual queue per virtio-blk
> device.
>
> Secondly this patch maps the virtual queue to blk-mq's hardware queue.
>
> With this approach, both scalability and performance can be improved.
>
> Signed-off-by: Ming Lei <ming.lei at canonical.com>
> ---
>
2014 Jun 20
0
[PATCH v1 2/2] block: virtio-blk: support multi virt queues per virtio-blk device
Firstly, this patch supports more than one virtual queue per virtio-blk
device.
Secondly this patch maps the virtual queue to blk-mq's hardware queue.
With this approach, both scalability and performance can be improved.
Signed-off-by: Ming Lei <ming.lei at canonical.com>
---
drivers/block/virtio_blk.c | 70 +++++++++++++++++++++++++++++++-------------
1 file changed, 50