2014 Mar 15
1
[PATCH] virtio-blk: make the queue depth the max supportable by the hypervisor
On March 14, 2014 11:34:31 PM EDT, Theodore Ts'o <tytso at mit.edu> wrote:
>The current virtio block sets a queue depth of 64, which is
>insufficient for very fast devices. It has been demonstrated that
>with a high IOPS device, using a queue depth of 256 can double the
>IOPS which can be sustained.
>
>As suggested by Venkatesh Srinivas, set the queue depth by default
2014 Mar 17
2
[PATCH] virtio-blk: make the queue depth the max supportable by the hypervisor
Theodore Ts'o <tytso at mit.edu> writes:
> The current virtio block sets a queue depth of 64, which is
> insufficient for very fast devices. It has been demonstrated that
> with a high IOPS device, using a queue depth of 256 can double the
> IOPS which can be sustained.
>
> As suggested by Venkatesh Srinivas, set the queue depth by default to
> be one half the
2014 Mar 15
0
[PATCH] virtio-blk: make the queue depth the max supportable by the hypervisor
The current virtio block sets a queue depth of 64, which is
insufficient for very fast devices. It has been demonstrated that
with a high IOPS device, using a queue depth of 256 can double the
IOPS which can be sustained.
As suggested by Venkatesh Srinivas, set the queue depth by default to
be one half the size of the device's virtqueue, which is the maximum queue
depth that can be supported by the
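For reference, a minimal sketch of that default in C (virtblk_queue_depth and vblk are names from the driver; virtqueue_get_vring_size() is the stock virtio accessor, and the exact placement in virtblk_probe() is illustrative):

    /* A request needs at least two descriptors (header + data), so
     * without indirect descriptors half the ring is the deepest
     * queue the hypervisor can service.
     */
    virtblk_queue_depth = virtqueue_get_vring_size(vblk->vq) / 2;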
2014 Mar 19
2
[PATCH] virtio-blk: make the queue depth the max supportable by the hypervisor
tytso at mit.edu writes:
> On Mon, Mar 17, 2014 at 11:12:15AM +1030, Rusty Russell wrote:
>>
>> Note that with indirect descriptors (which is supported by Almost
>> Everyone), we can actually use the full index, so this value is a bit
>> pessimistic. But it's OK as a starting point.
>
> So is this something that can go upstream with perhaps a slight
>
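Rusty's refinement can be expressed as a feature check; a sketch assuming the default is computed in virtblk_probe() (vq->num_free starts out equal to the ring size):

    virtblk_queue_depth = vblk->vq->num_free;
    /* With VIRTIO_RING_F_INDIRECT_DESC one ring slot carries a whole
     * request, so the full ring is usable; otherwise fall back to
     * half the ring as before.
     */
    if (!virtio_has_feature(vdev, VIRTIO_RING_F_INDIRECT_DESC))
        virtblk_queue_depth /= 2;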
2014 Mar 14
2
[PATCH] virtio-blk: make the queue depth configurable
The current virtio block sets a queue depth of 64. With a
sufficiently fast device, using a queue depth of 256 can double the
IOPS which can be sustained. So make the queue depth something which
can be set at module load time or via a kernel boot-time parameter.
Signed-off-by: "Theodore Ts'o" <tytso at mit.edu>
Cc: Rusty Russell <rusty at rustcorp.com.au>
Cc:
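The plumbing for such a knob is small; a sketch (the 0444 read-only permission is an assumption about the chosen policy):

    static unsigned int virtblk_queue_depth;
    module_param_named(queue_depth, virtblk_queue_depth, uint, 0444);

With that in place, the depth can be set with virtio_blk.queue_depth=256 on the kernel command line or as a module option, falling back to the driver default when left at zero.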
2012 Apr 10
3
[PATCH] virtio_blk: Add helper function to format mass of disks
The current virtio block naming algorithm supports only 18278
(26^3 + 26^2 + 26) disks. With more virtio block devices than that,
some disks end up with the same name.
Based on commit 3e1a7ff8a0a7b948f2684930166954f9e8e776fe, I add the
function "virtblk_name_format()" to virtio block so that large
numbers of disks can be named correctly.
Signed-off-by: Ren Mingxin <renmx at cn.fujitsu.com>
---
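The scheme is the same bijective base-26 numbering that sd uses for sdX names; a self-contained sketch of such a formatter (the signature mirrors the virtblk_name_format() named above, but treat the body as an illustration):

    static int virtblk_name_format(char *prefix, int index, char *buf, int buflen)
    {
        const int base = 'z' - 'a' + 1;     /* 26 letters */
        char *begin = buf + strlen(prefix);
        char *end = buf + buflen;
        char *p = end - 1;

        *p = '\0';
        do {
            if (p == begin)
                return -EINVAL;             /* buffer too small */
            *--p = 'a' + (index % base);
            index = (index / base) - 1;     /* bijective numeration */
        } while (index >= 0);

        memmove(begin, p, end - p);         /* left-align the suffix */
        memcpy(buf, prefix, strlen(prefix));
        return 0;
    }

Index 0 maps to "a", 25 to "z", 26 to "aa", and so on, with no limit beyond the buffer length (vda, vdb, ..., vdz, vdaa, ...).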
2014 Sep 06
5
[PATCH] virtio_blk: merge S/G list entries by default
Most virtio setups have a fairly limited number of ring entries available.
Enable S/G entry merging by default so that requests fit into fewer of them.
This restores the behavior at the time of the virtio-blk blk-mq conversion,
which was changed by commit "block: add queue flag for disabling SG merging";
that commit made the behavior optional, but didn't update the existing
drivers to keep their previous
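Assuming the mechanism is the BLK_MQ_F_SG_MERGE tag-set flag that the quoted block-layer commit introduced, the driver-side change would be on the order of:

    /* Opt back in to S/G entry merging so that a request occupies
     * fewer virtio ring descriptors (sketch; flag names from the
     * blk-mq API of that era).
     */
    vblk->tag_set.flags = BLK_MQ_F_SHOULD_MERGE | BLK_MQ_F_SG_MERGE;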
2020 Jul 15
3
[PATCH] virtio-blk: check host supplied logical block size
The Linux kernel only supports logical block sizes which are a power of
two, at least 512 bytes, and no more than PAGE_SIZE.
Check this instead of crashing later on.
Note that there is no need to check the physical block size, since it is
only a hint and virtio-blk already only supports power-of-two values.
Bugzilla link: https://bugzilla.redhat.com/show_bug.cgi?id=1664619
Signed-off-by: Maxim Levitsky
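A sketch of the check described, as it might sit in virtblk_probe() (is_power_of_2() is from linux/log2.h; the error label is hypothetical):

    u32 blk_size;
    err = virtio_cread_feature(vdev, VIRTIO_BLK_F_BLK_SIZE,
                               struct virtio_blk_config, blk_size,
                               &blk_size);
    if (!err) {
        /* The block layer needs a power-of-two size in [512, PAGE_SIZE];
         * fail the probe instead of crashing later on.
         */
        if (blk_size < 512 || blk_size > PAGE_SIZE ||
            !is_power_of_2(blk_size)) {
            err = -EINVAL;
            goto out_cleanup_disk;          /* hypothetical label */
        }
        blk_queue_logical_block_size(q, blk_size);
    }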
2020 Jul 15
2
[PATCH] virtio-blk: check host supplied logical block size
On Wed, 2020-07-15 at 06:06 -0400, Michael S. Tsirkin wrote:
> On Wed, Jul 15, 2020 at 12:55:18PM +0300, Maxim Levitsky wrote:
> > The Linux kernel only supports logical block sizes which are a power
> > of two, at least 512 bytes, and no more than PAGE_SIZE.
> >
> > Check this instead of crashing later on.
> >
> > Note that there is no need to check
2012 May 03
2
[PATCH 1/2] virtio-blk: Fix hot-unplug race in remove method
If we reset the virtio-blk device before the requests already dispatched
from the block layer to the virtio-blk driver are finished, we will get
stuck in blk_cleanup_queue() and the remove will fail.
blk_cleanup_queue() calls blk_drain_queue() to drain all requests queued
before the DEAD marking. However, it will never succeed if the device is
already stopped: we'll have q->in_flight[] > 0, so
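One way out of the deadlock, sketched under the assumption that the fix completes the stranded requests by hand before tearing the queue down (virtqueue_detach_unused_buf() is the stock virtio call for reclaiming buffers after a reset; vblk->lock, vblk->pool and vbr->req follow the 2012-era driver):

    /* After vdev->config->reset() the host will never complete the
     * in-flight requests, so end them with an error ourselves before
     * blk_cleanup_queue() tries to drain them.
     */
    spin_lock_irqsave(&vblk->lock, flags);
    while ((vbr = virtqueue_detach_unused_buf(vblk->vq)) != NULL) {
        __blk_end_request_all(vbr->req, -EIO);
        mempool_free(vbr, vblk->pool);
    }
    spin_unlock_irqrestore(&vblk->lock, flags);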
2014 Jun 20
3
[PATCH v1 0/2] block: virtio-blk: support multi vq per virtio-blk
Hi,
These patches add support for multiple virtual queues (multi-vq) in one
virtio-blk device, mapping each virtual queue (vq) to a blk-mq
hardware queue.
With this approach, both the scalability and the performance of a
virtio-blk device can be improved.
To verify the improvement, I implemented virtio-blk multi-vq on top of
qemu's dataplane feature, and both handling host notification
from each vq and
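The vq-to-hw-queue mapping boils down to sizing the tag set by the vq count and indexing by the hardware-queue number; a sketch (struct virtio_blk_vq, vblk->vqs and vblk->num_vqs are names assumed for illustration):

    /* One blk-mq hardware queue per virtqueue. */
    vblk->tag_set.nr_hw_queues = vblk->num_vqs;

    static struct virtio_blk_vq *get_virtio_blk_vq(struct blk_mq_hw_ctx *hctx)
    {
        struct virtio_blk *vblk = hctx->queue->queuedata;

        /* blk-mq hands us the hw queue index; use it to pick the vq
         * that this CPU's submissions and completions will use.
         */
        return &vblk->vqs[hctx->queue_num];
    }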