On Fri, Nov 02, 2018 at 06:21:22PM +0000, Vitaly Mayatskikh wrote:
> vhost_blk is a host-side kernel mode accelerator for virtio-blk. The
> driver allows a VM to reach near bare-metal disk performance. See the
> IOPS numbers below (fio --rw=randread --bs=4k).
>
> This implementation uses the kiocb interface. It is slightly slower
> than going directly through bio, but is simpler and also works with
> disk images placed on a file system.
>
> # fio num-jobs
> # A: bare metal over block
> # B: bare metal over file
> # C: virtio-blk over block
> # D: virtio-blk over file
> # E: vhost-blk bio over block
> # F: vhost-blk kiocb over block
> # G: vhost-blk kiocb over file
> #
> #        A       B       C       D       E       F       G
>
>  1    171k    151k    148k    151k    195k    187k    175k
>  2    328k    302k    249k    241k    349k    334k    296k
>  3    479k    437k    179k    174k    501k    464k    404k
>  4    622k    568k    143k    183k    620k    580k    492k
>  5    755k    697k    136k    128k    737k    693k    579k
>  6    887k    808k    131k    120k    830k    782k    640k
>  7   1004k    926k    126k    131k    926k    863k    693k
>  8   1099k   1015k    117k    115k   1001k    931k    712k
>  9   1194k   1119k    115k    111k   1055k    991k    711k
> 10   1278k   1207k    109k    114k   1130k   1046k    695k
> 11   1345k   1280k    110k    108k   1119k   1091k    663k
> 12   1411k   1356k    104k    106k   1201k   1142k    629k
> 13   1466k   1423k    106k    106k   1260k   1170k    607k
> 14   1517k   1486k    103k    106k   1296k   1179k    589k
> 15   1552k   1543k    102k    102k   1322k   1191k    571k
> 16   1480k   1506k    101k    102k   1346k   1202k    566k
>
> Vitaly Mayatskikh (1):
>   Add vhost_blk driver

Thanks!
Before merging this, I'd like to get some acks from userspace that it's
actually going to be used - e.g. QEMU block maintainers.

>  drivers/vhost/Kconfig  |  13 ++
>  drivers/vhost/Makefile |   3 +
>  drivers/vhost/blk.c    | 510 +++++++++++++++++++++++++++++++++++++++++
>  3 files changed, 526 insertions(+)
>  create mode 100644 drivers/vhost/blk.c
>
> --
> 2.17.1
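The kiocb path the cover letter refers to is the kernel's asynchronous
file I/O interface: the driver fills in a struct kiocb with a completion
callback and hands it to the backing file's ->read_iter/->write_iter.
The sketch below illustrates that pattern only; it is not the actual
drivers/vhost/blk.c code, and every name carrying a _sketch suffix is
invented for illustration (callback signature as of the 4.19-era kernels
discussed in this thread).

#include <linux/fs.h>
#include <linux/uio.h>

struct blk_req_sketch {			/* hypothetical per-request state */
	struct kiocb iocb;
	/* virtio descriptor bookkeeping would live here in a real driver */
};

/* Runs when the backing file completes the I/O (possibly much later). */
static void blk_req_sketch_complete(struct kiocb *iocb, long ret, long ret2)
{
	/* A real driver would set the virtio status byte here and push
	 * the used descriptor back onto the guest's vring. */
}

static ssize_t blk_req_sketch_submit_read(struct file *backend,
					  struct blk_req_sketch *req,
					  struct iov_iter *iter, loff_t pos)
{
	init_sync_kiocb(&req->iocb, backend);	/* borrow the basic init... */
	req->iocb.ki_pos = pos;
	req->iocb.ki_complete = blk_req_sketch_complete; /* ...then go async */

	/* -EIOCBQUEUED means the I/O is in flight and ki_complete will
	 * fire later; any other return value means it completed
	 * synchronously and the caller must finish the request itself. */
	return call_read_iter(backend, &req->iocb, iter);
}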
On 11/02/2018 07:26 PM, Michael S. Tsirkin wrote:
> On Fri, Nov 02, 2018 at 06:21:22PM +0000, Vitaly Mayatskikh wrote:
>> vhost_blk is a host-side kernel mode accelerator for virtio-blk. The
>> driver allows a VM to reach near bare-metal disk performance. See the
>> IOPS numbers below (fio --rw=randread --bs=4k).
>>
>> This implementation uses the kiocb interface. It is slightly slower
>> than going directly through bio, but is simpler and also works with
>> disk images placed on a file system.

This should also work with other transports like virtio-ccw (instead of
virtio-pci). Correct?

>> [rest of quoted message snipped]
On Fri, Nov 02, 2018 at 02:26:00PM -0400, Michael S. Tsirkin wrote:
> On Fri, Nov 02, 2018 at 06:21:22PM +0000, Vitaly Mayatskikh wrote:
> > vhost_blk is a host-side kernel mode accelerator for virtio-blk. The
> > driver allows a VM to reach near bare-metal disk performance. See the
> > IOPS numbers below (fio --rw=randread --bs=4k).
> >
> > This implementation uses the kiocb interface. It is slightly slower
> > than going directly through bio, but is simpler and also works with
> > disk images placed on a file system.
> >
> > [benchmark table and diffstat snipped]
> >
> > Vitaly Mayatskikh (1):
> >   Add vhost_blk driver
>
> Thanks!
> Before merging this, I'd like to get some acks from userspace that it's
> actually going to be used - e.g. QEMU block maintainers.

I have CCed Kevin, who is the overall QEMU block layer maintainer. Also
CCing Denis, since I think someone was working on a QEMU userspace
multiqueue virtio-blk device for maximum performance.

Previous vhost_blk.ko implementations were basically the same thing as
QEMU's x-data-plane=on (a dedicated thread using Linux AIO), except that
they used a kernel thread and may have submitted bios.

The performance differences weren't convincing enough that it seemed
worthwhile maintaining another code path which loses live migration, I/O
throttling, image file formats, etc. (all the things that QEMU's block
layer supports).

Two things have changed since then:

1. x-data-plane=on has been replaced with a full trip down QEMU's block
   layer (-object iothread,id=iothread0 -device
   virtio-blk-pci,iothread=iothread0,...). It's slower and not truly
   multiqueue (yet!).

   So from this perspective vhost_blk.ko might be more attractive again,
   at least until further QEMU block layer work eliminates the multiqueue
   and performance overheads.

2. SPDK has become available for users who want the best I/O performance
   and are willing to sacrifice CPU cores for polling.

   If you want better performance and don't care about QEMU block layer
   features, could you use SPDK? The people who are the target market for
   vhost_blk.ko would probably be willing to use SPDK, and it already
   exists.

From the QEMU userspace perspective, I think the best way to integrate
vhost_blk.ko is to switch to it transparently when possible. If the user
enables QEMU block layer features that are incompatible with
vhost_blk.ko, then QEMU should fall back to its own block layer
transparently.

I'm not keen on yet another code path with its own set of limitations
that requires educating users about how to make the choice. But if it
can be integrated transparently as an "accelerator", it could be
valuable.
Stefan
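Stefan's "transparent accelerator" idea above reduces to a feature gate
at device setup time: take the vhost_blk fast path only when no QEMU
block layer feature it cannot provide is enabled, and otherwise route
the device through QEMU's own block layer. A minimal hypothetical C
sketch of that decision follows; none of these types or functions exist
in QEMU, they only illustrate the shape of the check.

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical summary of a block backend's configuration. */
struct blk_backend_cfg_sketch {
	bool raw_image;		/* formats like qcow2 need QEMU's block layer */
	bool live_migration;
	bool io_throttling;
};

/* vhost_blk bypasses QEMU's block layer entirely, so enabling any
 * feature that layer provides must force the fallback path. */
static bool can_use_vhost_blk(const struct blk_backend_cfg_sketch *cfg)
{
	return cfg->raw_image && !cfg->live_migration && !cfg->io_throttling;
}

int main(void)
{
	struct blk_backend_cfg_sketch cfg = {
		.raw_image = true,
		.live_migration = false,
		.io_throttling = true,	/* throttling forces the fallback */
	};

	printf("backend: %s\n",
	       can_use_vhost_blk(&cfg) ? "vhost_blk" : "QEMU block layer");
	return 0;
}

The point of the transparent approach is that the user never selects
vhost_blk by hand: enabling an incompatible feature silently routes the
device through the QEMU block layer instead.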