Displaying results from an estimated 3000 matches similar to: "[PATCH V4 0/3] Improve virtio-blk performance"
2012 Aug 08
2
[PATCH V7 0/2] Improve virtio-blk performance
Hi, all
Changes in v7:
- Using vbr->flags to trace request type
- Dropped unnecessary struct virtio_blk *vblk parameter
- Reuse struct virtblk_req in bio done function
- Added performance data on a normal SATA device and the reason why we make it optional
Fio tests show the bio-based IO path gives the following performance improvement:
1) Ramdisk device
With bio-based IO path, sequential
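As a rough illustration of what the v7 changelog above describes, here is a minimal C sketch of recording the request type in a flags word on the per-request struct, so the completion path needs neither extra parameters nor a fresh allocation. The flag names and struct layout are assumptions for illustration, not the actual patch code.

#include <linux/bio.h>
#include <linux/virtio_blk.h>

/* Hypothetical flag bits; the real patch may use different names. */
#define VBLK_IS_FLUSH	(1UL << 0)
#define VBLK_IS_WRITE	(1UL << 1)

struct virtblk_req {			/* illustrative layout only */
	struct bio *bio;
	struct virtio_blk_outhdr out_hdr;
	unsigned long flags;		/* request type lives here */
	u8 status;
};

static void virtblk_bio_done(struct virtblk_req *vbr)
{
	/* The type is read back from vbr->flags, so no extra parameter
	 * (e.g. struct virtio_blk *vblk) is needed, and vbr itself can
	 * be reused for a follow-up request instead of being freed. */
	if (vbr->flags & VBLK_IS_FLUSH) {
		/* ... flush-specific completion ... */
	}
	/* ... complete the bio ... */
}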
2012 Jul 13
5
[PATCH V3 0/3] Improve virtio-blk performance
This patchset implements a bio-based IO path for virtio-blk to improve
performance.
Fio tests show the bio-based IO path gives the following performance improvement:
1) Ramdisk device
With the bio-based IO path, sequential read/write, random read/write:
IOPS boost: 28%, 24%, 21%, 16%
Latency improvement: 32%, 17%, 21%, 16%
2) Fusion IO device
With bio-based IO path, sequential
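For context on what "bio-based" means here: instead of letting the block layer build and schedule struct request, the driver takes each bio directly in its make_request function. A hedged sketch of the wiring, using the 3.x-era blk_queue_make_request() API; everything except that call and bio_endio() is an assumed name.

#include <linux/blkdev.h>
#include <linux/bio.h>

/* Sketch only: the driver sees every bio itself, bypassing the IO
 * scheduler and request allocation that the request-based path pays for. */
static void example_make_request(struct request_queue *q, struct bio *bio)
{
	/* translate the bio into virtqueue buffers and submit it;
	 * the completion path eventually calls bio_endio(bio, 0). */
}

static void example_init_queue(struct request_queue *q)
{
	blk_queue_make_request(q, example_make_request);
}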
2012 Aug 07
4
[PATCH V6 0/2] Improve virtio-blk performance
Hi, all
This version reworked the REQ_FLUSH and REQ_FUA support as suggested by
Christoph and dropped the block core bits since Jens has picked them up.
Fio tests show the bio-based IO path gives the following performance improvement:
1) Ramdisk device
With the bio-based IO path, sequential read/write, random read/write:
IOPS boost: 28%, 24%, 21%, 16%
Latency improvement: 32%,
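The flush rework matters because a bio-based driver bypasses the block layer's flush machinery and must honor REQ_FLUSH and REQ_FUA itself. A minimal sketch of the idea, with hypothetical helper names; a real driver must also order the FUA post-flush after the data actually completes, rather than issuing it immediately as shown.

#include <linux/bio.h>

struct virtio_blk;	/* driver-private struct, assumed */

/* Hypothetical helpers: a flush would be a VIRTIO_BLK_T_FLUSH command. */
static void example_send_flush(struct virtio_blk *vblk) { /* ... */ }
static void example_send_data(struct virtio_blk *vblk, struct bio *bio) { /* ... */ }

static void example_submit_bio(struct virtio_blk *vblk, struct bio *bio)
{
	/* REQ_FLUSH: flush the write cache before this bio's data. */
	if (bio->bi_rw & REQ_FLUSH)
		example_send_flush(vblk);

	example_send_data(vblk, bio);

	/* REQ_FUA: data must reach stable media; a device without native
	 * FUA emulates it with a flush after the write completes. */
	if (bio->bi_rw & REQ_FUA)
		example_send_flush(vblk);
}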
2012 Aug 02
9
[PATCH V5 0/4] Improve virtio-blk performance
Hi folks,
This version added REQ_FLUSH and REQ_FUA support as suggested by Christoph and
was rebased against Linus's latest tree.
Jens, could you please consider picking up the dependencies 1/4 and 2/4 into your
tree? Thanks!
This patchset implements a bio-based IO path for virtio-blk to improve
performance.
Fio tests show the bio-based IO path gives the following performance improvement:
1) Ramdisk
2012 Jun 13
4
[PATCH RFC 0/2] Improve virtio-blk performance
This patchset implements a bio-based IO path for virtio-blk to improve
performance.
Fio tests show it gives 28%, 24%, 21%, and 16% IOPS boosts and 32%, 17%, 21%, and 16%
latency improvements for sequential read/write and random read/write respectively.
Asias He (2):
block: Add blk_bio_map_sg() helper
virtio-blk: Add bio-based IO path for virtio-blk
block/blk-merge.c | 63 ++++++++++++++
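The blk_bio_map_sg() helper listed above mirrors blk_rq_map_sg() but walks a bio instead of a request, which is what lets the bio-based path build a scatterlist for the virtqueue directly. A sketch of the intended use; the blk_bio_map_sg() signature matches the helper as merged around 3.6 to the best of my reading, and the surrounding names are assumptions.

#include <linux/blkdev.h>
#include <linux/scatterlist.h>
#include <linux/virtio.h>

static int example_map_and_queue(struct request_queue *q, struct virtqueue *vq,
				 struct bio *bio, struct scatterlist *sg,
				 void *token)
{
	int nents;

	/* Fill sg[] from the bio's segments, merging adjacent ones;
	 * returns the number of scatterlist entries used. */
	nents = blk_bio_map_sg(q, bio, sg);

	/* 3.x-era API: nents buffers readable by the host, none writable
	 * (a write request; status byte omitted for brevity). */
	return virtqueue_add_buf(vq, sg, nents, 0, token, GFP_ATOMIC);
}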
2012 Jun 18
13
[PATCH v2 0/3] Improve virtio-blk performance
This patchset implements a bio-based IO path for virtio-blk to improve
performance.
Fio tests show it gives 28%, 24%, 21%, and 16% IOPS boosts and 32%, 17%, 21%, and 16%
latency improvements for sequential read/write and random read/write respectively.
Asias He (3):
block: Introduce __blk_segment_map_sg() helper
block: Add blk_bio_map_sg() helper
virtio-blk: Add bio-based IO path for virtio-blk
2012 Jun 01
4
[PATCH v3] virtio_blk: unlock vblk->lock during kick
Holding vblk->lock across the kick causes poor scalability in SMP
guests. If one CPU is doing a virtqueue kick and another CPU touches
vblk->lock, it has to spin until the kick completes.
This patch reduces system% CPU utilization in SMP guests running
multithreaded I/O-bound workloads. The improvements are small but
grow as IOPS and the number of vCPUs increase.
Khoa Huynh
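The usual fix is to split the kick: virtqueue_kick_prepare() is cheap and runs under the lock, while virtqueue_notify(), which triggers the expensive vmexit, runs after the lock is dropped. A sketch assuming the 3.x virtio API; the driver-private struct shown is illustrative.

#include <linux/spinlock.h>
#include <linux/virtio.h>

struct virtio_blk {			/* illustrative driver-private struct */
	spinlock_t lock;
	struct virtqueue *vq;
};

static void example_submit_and_kick(struct virtio_blk *vblk)
{
	unsigned long flags;
	bool notify;

	spin_lock_irqsave(&vblk->lock, flags);
	/* ... add buffers with virtqueue_add_buf() ... */
	notify = virtqueue_kick_prepare(vblk->vq);	/* cheap; needs the lock */
	spin_unlock_irqrestore(&vblk->lock, flags);

	if (notify)
		virtqueue_notify(vblk->vq);	/* vmexit happens with the lock dropped */
}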
2017 Oct 10
2
small files performance
2017-10-10 8:25 GMT+02:00 Karan Sandha <ksandha at redhat.com>:
> Hi Gandalf,
>
> We have multiple tunings for small files: decreasing the time for
> negative lookups, metadata caching, and parallel readdir. Bumping the server
> and client event threads will help you increase small-file
> performance.
>
> gluster v set <vol-name> group
2017 Oct 10
0
small files performance
I just tried setting:
performance.parallel-readdir on
features.cache-invalidation on
features.cache-invalidation-timeout 600
performance.stat-prefetch
performance.cache-invalidation
performance.md-cache-timeout 600
network.inode-lru-limit 50000
performance.cache-invalidation on
and clients could not see their files with ls when accessing via a fuse
mount. The files and directories were there,
2012 Aug 10
1
virtio-scsi <-> vhost multi lun/adapter performance results with 3.6-rc0
Hi folks,
The following are initial virtio-scsi + target vhost benchmark results
using multiple target LUNs per vhost and multiple virtio PCI adapters to
scale the total number of virtio-scsi LUNs into a single KVM guest.
The test setup is currently using 4x SCSI LUNs per vhost WWPN, with 8x
virtio PCI adapters for a total of 32x 500MB ramdisk LUNs into a single
guest, along with each backend
2018 Mar 20
2
Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)
On Tue, Mar 20, 2018 at 8:57 AM, Sam McLeod <mailinglists at smcleod.net>
wrote:
> Hi Raghavendra,
>
>
> On 20 Mar 2018, at 1:55 pm, Raghavendra Gowdappa <rgowdapp at redhat.com>
> wrote:
>
> Aggregating a large number of small writes into large writes via write-behind
> has been merged on master:
> https://github.com/gluster/glusterfs/issues/364
>
>
2012 Mar 30
4
[PATCH] virtio_blk: Drop unused request tracking list
Benchmarks show a small performance improvement on a Fusion-io device.
Before:
seq-read : io=1,024MB, bw=19,982KB/s, iops=39,964, runt= 52475msec
seq-write: io=1,024MB, bw=20,321KB/s, iops=40,641, runt= 51601msec
rnd-read : io=1,024MB, bw=15,404KB/s, iops=30,808, runt= 68070msec
rnd-write: io=1,024MB, bw=14,776KB/s, iops=29,552, runt= 70963msec
After:
seq-read : io=1,024MB, bw=20,343KB/s,
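For the curious, the shape of the dead code such a patch removes looks roughly like this; the field names are assumptions, not the driver's actual ones. The list costs a list_add/list_del under the lock on every request, yet no code path ever walks it.

#include <linux/list.h>
#include <linux/spinlock.h>

struct virtblk_req {
	struct list_head list;		/* write-only bookkeeping: dropped */
	/* ... */
};

struct virtio_blk {
	spinlock_t lock;
	struct list_head reqs;		/* the list head: also dropped */
	/* ... */
};

static void example_add_req(struct virtio_blk *vblk, struct virtblk_req *vbr)
{
	/* called under vblk->lock on submit */
	list_add_tail(&vbr->list, &vblk->reqs);
}

static void example_del_req(struct virtblk_req *vbr)
{
	/* called under vblk->lock on completion; nothing in between ever
	 * iterates the list, so both calls are pure overhead */
	list_del(&vbr->list);
}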