Displaying 20 results from an estimated 4000 matches similar to: "[PATCH RFC 0/2] Improve virtio-blk performance"
2012 Jul 13, 5 messages: [PATCH V3 0/3] Improve virtio-blk performance
This patchset implements a bio-based IO path for virtio-blk to improve performance.
Fio tests show the bio-based IO path gives the following performance improvement:
1) Ramdisk device
   With the bio-based IO path (sequential read, sequential write, random read, random write):
   IOPS boost         : 28%, 24%, 21%, 16%
   Latency improvement: 32%, 17%, 21%, 16%
2) Fusion IO device
   With the bio-based IO path, sequential ...
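Editorial note: for readers unfamiliar with what a "bio-based IO path" means here, the idea is to skip the request queue entirely. Instead of letting the block layer build struct request objects and calling back into a request_fn, the driver registers a make_request function (via blk_queue_make_request()) and submits each bio to the virtqueue directly. The sketch below only illustrates that flow with the APIs of that era; the names (sketch_*), the fixed-size sg array, and the omitted error handling and completion path (bio_endio() from the virtqueue callback) are simplifications, not the patch itself.

/* Illustrative sketch of a bio-based submission path (not the actual patch). */
#include <linux/blkdev.h>
#include <linux/virtio.h>
#include <linux/virtio_blk.h>
#include <linux/scatterlist.h>

struct sketch_vblk {
        struct virtqueue *vq;
        spinlock_t vq_lock;
        struct scatterlist sg[2 + 128];   /* header + data segments + status; sg_init_table()'d at probe */
};

struct sketch_req {
        struct bio *bio;
        struct virtio_blk_outhdr out_hdr; /* type/ioprio/sector, read by the host */
        u8 status;                        /* written by the host on completion */
};

/* make_request_fn: called by the block layer for every bio, bypassing the
 * request queue, the elevator and struct request allocation entirely. */
static void sketch_make_request(struct request_queue *q, struct bio *bio)
{
        struct sketch_vblk *vblk = q->queuedata;
        struct sketch_req *vbr = kmalloc(sizeof(*vbr), GFP_NOIO);
        unsigned int num, out = 0, in = 0;
        unsigned long flags;

        vbr->bio = bio;
        vbr->out_hdr.type = bio_data_dir(bio) == WRITE ? VIRTIO_BLK_T_OUT
                                                       : VIRTIO_BLK_T_IN;
        vbr->out_hdr.sector = bio->bi_sector;
        vbr->out_hdr.ioprio = bio_prio(bio);

        /* Array layout: [out_hdr][data segments][status]. */
        sg_set_buf(&vblk->sg[out++], &vbr->out_hdr, sizeof(vbr->out_hdr));
        num = blk_bio_map_sg(q, bio, vblk->sg + out);   /* map this bio's segments */
        sg_set_buf(&vblk->sg[out + num + in++], &vbr->status, sizeof(vbr->status));

        spin_lock_irqsave(&vblk->vq_lock, flags);
        if (bio_data_dir(bio) == WRITE)
                virtqueue_add_buf(vblk->vq, vblk->sg, out + num, in, vbr, GFP_ATOMIC);
        else
                virtqueue_add_buf(vblk->vq, vblk->sg, out, in + num, vbr, GFP_ATOMIC);
        virtqueue_kick(vblk->vq);
        spin_unlock_irqrestore(&vblk->vq_lock, flags);
}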
2012 Aug 02, 9 messages: [PATCH V5 0/4] Improve virtio-blk performance
Hi folks,
This version adds REQ_FLUSH and REQ_FUA support as suggested by Christoph and is
rebased against Linus's latest tree.
Jens, could you please consider picking up the dependencies 1/4 and 2/4 in your
tree? Thanks!
This patchset implements a bio-based IO path for virtio-blk to improve performance.
Fio tests show the bio-based IO path gives the following performance improvement:
1) Ramdisk ...
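Editorial note: on the REQ_FLUSH/REQ_FUA support mentioned above, virtio-blk only offers an explicit flush command (VIRTIO_BLK_T_FLUSH) and no per-write FUA bit, so a bio-based path has to send flushes as separate commands and emulate FUA with a flush after the write. The snippet below is a hedged sketch of that ordering logic, not the code from this series; sketch_submit_bio(), virtblk_send_flush() and virtblk_send_data() are hypothetical names.

#include <linux/bio.h>

struct sketch_vblk;     /* driver state, details omitted */

/* Hypothetical helpers, assumed (for this sketch only) to wait for the command
 * to complete; the real series instead chains these steps from the completion
 * callback so nothing blocks in the submission path. */
void virtblk_send_flush(struct sketch_vblk *vblk);                 /* VIRTIO_BLK_T_FLUSH */
void virtblk_send_data(struct sketch_vblk *vblk, struct bio *bio); /* VIRTIO_BLK_T_IN/OUT */

static void sketch_submit_bio(struct sketch_vblk *vblk, struct bio *bio)
{
        bool pre_flush  = bio->bi_rw & REQ_FLUSH;  /* drain the write cache first */
        bool post_flush = bio->bi_rw & REQ_FUA;    /* data must be durable when the bio ends */

        if (pre_flush)
                virtblk_send_flush(vblk);

        if (bio_sectors(bio))
                virtblk_send_data(vblk, bio);

        /* virtio-blk has no per-write FUA bit, so FUA is emulated by flushing
         * the device cache after the write has completed. */
        if (post_flush)
                virtblk_send_flush(vblk);
}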
2012 Jun 18, 13 messages: [PATCH v2 0/3] Improve virtio-blk performance
This patchset implements a bio-based IO path for virtio-blk to improve performance.
Fio tests show it gives a 28%, 24%, 21%, 16% IOPS boost and a 32%, 17%, 21%, 16%
latency improvement for sequential read, sequential write, random read and random
write respectively.
Asias He (3):
  block: Introduce __blk_segment_map_sg() helper
  block: Add blk_bio_map_sg() helper
  virtio-blk: Add bio-based IO path for virtio-blk
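Editorial note: the reason the series needs the two block-layer helpers listed above is that the existing blk_rq_map_sg() walks a struct request, and a bio-based driver never has a struct request to hand it. The sketch below contrasts the two; the wrapper function and its name are illustrative, and the blk_bio_map_sg() signature shown is the one added by this series as far as the excerpts indicate.

#include <linux/blkdev.h>
#include <linux/scatterlist.h>

/* Illustrative only: contrast the existing request-level helper with the
 * bio-level helper this series introduces. */
static int sketch_map_for_device(struct request_queue *q, struct request *req,
                                 struct bio *bio, struct scatterlist *sgl)
{
        if (req)
                /* Request-based path: the block layer already merged bios into req. */
                return blk_rq_map_sg(q, req, sgl);

        /* Bio-based path: there is no struct request at all, so the series adds
         * blk_bio_map_sg() to map a single bio's segments directly. */
        return blk_bio_map_sg(q, bio, sgl);
}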
2012 Aug 08, 2 messages: [PATCH V7 0/2] Improve virtio-blk performance
Hi, all
Changes in v7:
- Use vbr->flags to track the request type
- Drop the unnecessary struct virtio_blk *vblk parameter
- Reuse struct virtblk_req in the bio done function
- Add performance data on a normal SATA device and the reason why the bio path is made optional
Fio tests show the bio-based IO path gives the following performance improvement:
1) Ramdisk device
   With the bio-based IO path, sequential ...
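Editorial note: on the first v7 changelog item above ("use vbr->flags to track the request type"), the idea is that each in-flight struct virtblk_req carries a small bitmask recording whether it is a flush, whether it still has data to send, and whether a FUA flush is still owed, so the completion handler knows what to do next. The sketch below is purely illustrative; the SKETCH_* flag names and sketch_classify() are not necessarily what v7 uses.

#include <linux/bio.h>

/* Illustrative request-type flags; the real v7 names may differ. */
enum {
        SKETCH_VBLK_IS_FLUSH = 1 << 0,  /* this command is a flush */
        SKETCH_VBLK_REQ_DATA = 1 << 1,  /* the bio still has data to send */
        SKETCH_VBLK_REQ_FUA  = 1 << 2,  /* a post-write flush is still owed */
};

struct sketch_virtblk_req {
        struct bio *bio;
        unsigned int flags;             /* the "vbr->flags" the changelog refers to */
        u8 status;
};

/* Classify a bio once at submission time so the completion handler can decide
 * whether to end the bio or queue the next step (data write or flush). */
static void sketch_classify(struct sketch_virtblk_req *vbr, struct bio *bio)
{
        if (bio->bi_rw & REQ_FLUSH)
                vbr->flags |= SKETCH_VBLK_IS_FLUSH;
        if (bio_sectors(bio))
                vbr->flags |= SKETCH_VBLK_REQ_DATA;
        if (bio->bi_rw & REQ_FUA)
                vbr->flags |= SKETCH_VBLK_REQ_FUA;
}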
2012 Jul 28, 1 message: [PATCH V4 0/3] Improve virtio-blk performance
Hi, Jens & Rusty
This version is rebased against linux-next, which resolves the conflict with
Paolo Bonzini's 'virtio-blk: allow toggling host cache between writeback and
writethrough' patch.
Patches 1/3 and 2/3 apply to Linus's master as well. Since Rusty will pick up
patch 3/3, the changes to the block core (adding blk_bio_map_sg()) will have a
user.
Jens, could you please ...
2012 Aug 07, 4 messages: [PATCH V6 0/2] Improve virtio-blk performance
Hi, all
This version reworks the REQ_FLUSH and REQ_FUA support as suggested by
Christoph and drops the block core bits since Jens has picked them up.
Fio tests show the bio-based IO path gives the following performance improvement:
1) Ramdisk device
   With the bio-based IO path (sequential read, sequential write, random read, random write):
   IOPS boost         : 28%, 24%, 21%, 16%
   Latency improvement: 32%, ...
2012 Jun 01, 4 messages: [PATCH v3] virtio_blk: unlock vblk->lock during kick
Holding the vblk->lock across the kick causes poor scalability in SMP
guests: if one CPU is doing a virtqueue kick and another CPU touches the
vblk->lock, it has to spin until the virtqueue kick completes.
This patch reduces system (%sys) CPU utilization in SMP guests that are
running multithreaded I/O-bound workloads. The improvements are small
but grow as IOPS and the number of vCPUs increase.
Khoa Huynh ...
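Editorial note: the pattern described above splits the kick into a cheap "prepare" check that must stay under the lock and the expensive host notification (a VM exit) that does not have to. The sketch below shows that shape using the real virtqueue_kick_prepare()/virtqueue_notify() split; the surrounding request-function details, the sketch_* names and the elided queuing loop are simplifications, not the patch itself.

#include <linux/blkdev.h>
#include <linux/virtio.h>
#include <linux/spinlock.h>

struct sketch_vblk {
        struct virtqueue *vq;
};

/* Sketch of moving the expensive notification outside the lock. The block
 * layer calls the request function with the queue lock (vblk->lock in this
 * driver) held, and it must be held again when the function returns. */
static void sketch_do_virtblk_request(struct request_queue *q)
{
        struct sketch_vblk *vblk = q->queuedata;
        bool notify;

        /* ... queue as many pending requests onto the virtqueue as fit ... */

        /* Cheap: only checks whether the host needs to be notified. */
        notify = virtqueue_kick_prepare(vblk->vq);

        spin_unlock_irq(q->queue_lock);
        if (notify)
                virtqueue_notify(vblk->vq);     /* expensive: traps to the hypervisor */
        spin_lock_irq(q->queue_lock);
}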
2013 Feb 12, 12 messages: [PATCH 0/9] virtio: new API for addition of buffers, scatterlist changes
Most device drivers do not need to perform any postprocessing on the
scatterlists they receive from higher-level drivers (e.g. the block
or SCSI layer), because they translate the request metadata directly
from the various C structs into the data that is required by the device.
Virtio devices, however, do this translation in two steps: a device-specific
step in the device driver, and a generic ...
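Editorial note: the "two steps" described above can be seen in the pre-existing virtio-blk request path: the driver first lays the request out as one flat scatterlist array (header, data, status byte), and virtqueue_add_buf() then walks that same array a second time to produce vring descriptors. The sketch below reconstructs that shape from the in-tree driver of that era; the sketch_queue_rq() wrapper is illustrative and error handling is left out.

#include <linux/blkdev.h>
#include <linux/virtio.h>
#include <linux/virtio_blk.h>
#include <linux/scatterlist.h>

/* Step 1 (device-specific): flatten the request into one scatterlist array,
 * laid out as [out_hdr][data segments][status]. */
static void sketch_queue_rq(struct request_queue *q, struct virtqueue *vq,
                            struct scatterlist *sg, struct request *req,
                            struct virtio_blk_outhdr *hdr, u8 *status)
{
        unsigned int num, out = 0, in = 0;

        sg_set_buf(&sg[out++], hdr, sizeof(*hdr));                   /* device-readable header */
        num = blk_rq_map_sg(q, req, sg + out);                       /* data segments */
        sg_set_buf(&sg[out + num + in++], status, sizeof(*status));  /* device-writable status */

        /* Step 2 (generic): virtqueue_add_buf() walks the very same array again
         * and copies it into vring descriptors; this is the duplicated
         * translation the cover letter is talking about. */
        if (rq_data_dir(req) == WRITE)
                virtqueue_add_buf(vq, sg, out + num, in, req, GFP_ATOMIC);
        else
                virtqueue_add_buf(vq, sg, out, in + num, req, GFP_ATOMIC);
}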
2013 Feb 07, 11 messages: [RFC PATCH 0/8] virtio: new API for addition of buffers, scatterlist changes
The virtqueue_add_buf function has two limitations:
1) it requires the caller to provide all the buffers in a single call;
2) it does not support chained scatterlists: the buffers must be
provided as an array of struct scatterlist.
Because of these limitations, virtio-scsi has to copy each request into
a scatterlist internal to the driver. It cannot just use the one that
was prepared by the ...
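Editorial note: to make the second limitation above concrete, the SCSI midlayer hands virtio-scsi an sg_table whose scatterlist may be chained across pages, but virtqueue_add_buf() only accepts one flat array, so the driver has to copy every element into its own array before each add. The sketch below is modeled on the driver's internal mapping helper; sketch_flatten_sgl() is a simplified, illustrative version.

#include <linux/scatterlist.h>
#include <scsi/scsi_cmnd.h>

/* Copy a (possibly chained) scatterlist from the SCSI layer into the flat
 * array that virtqueue_add_buf() requires. This per-request copy is exactly
 * the overhead the proposed new API is meant to remove. */
static void sketch_flatten_sgl(struct scatterlist *flat, unsigned int *p_idx,
                               struct scsi_data_buffer *sdb)
{
        struct scatterlist *sg_elem;
        unsigned int idx = *p_idx;
        int i;

        for_each_sg(sdb->table.sgl, sg_elem, sdb->table.nents, i)
                flat[idx++] = *sg_elem;         /* element-by-element copy */

        *p_idx = idx;
}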
2012 Mar 30, 4 messages: [PATCH] virtio_blk: Drop unused request tracking list
Benchmarks show a small performance improvement on a Fusion-io device.
Before:
seq-read : io=1024MB, bw=19982KB/s, iops=39964, runt= 52475msec
seq-write: io=1024MB, bw=20321KB/s, iops=40641, runt= 51601msec
rnd-read : io=1024MB, bw=15404KB/s, iops=30808, runt= 68070msec
rnd-write: io=1024MB, bw=14776KB/s, iops=29552, runt= 70963msec
After:
seq-read : io=1024MB, bw=20343KB/s, ...
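Editorial note: the "request tracking list" being dropped is, as far as the excerpt shows, a per-device list that every in-flight request was linked onto and off of under vblk->lock even though nothing ever walked it; removing it takes two list operations out of the per-request lock hold time. The reconstruction below is hedged: the field layout is from memory of the driver of that era and the sketch_* names are illustrative.

#include <linux/blkdev.h>
#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/virtio.h>

/* Before: each in-flight request is linked onto a per-device list it never needs. */
struct sketch_vblk_before {
        spinlock_t lock;
        struct virtqueue *vq;
        struct list_head reqs;          /* in-flight requests; never consulted */
};

struct sketch_vbr_before {
        struct list_head list;          /* linkage for the unused list */
        struct request *req;
        u8 status;
};

/* Submission did list_add_tail(&vbr->list, &vblk->reqs) and completion did
 * list_del(&vbr->list), both while holding vblk->lock. The patch deletes the
 * list_head fields and both list operations, shrinking the critical section. */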