Context:
https://bugzilla.redhat.com/show_bug.cgi?id=1837809#c28
https://rwmj.wordpress.com/2020/03/21/new-nbdkit-remote-tmpfs-tmpdisk-plugin/
http://libguestfs.org/nbdkit-tmpdisk-plugin.1.html
https://github.com/libguestfs/nbdkit/blob/0632acc76bfeb7d70d3eefa42fc842ce6b7be4f8/plugins/tmpdisk/tmpdisk.c#L182

I did a bit of testing to see if this is really feasible, and yes, I
think it is.  Discussion below, experimental results first ...

I compiled qt5-qtwebkit from Rawhide dist-git by running the
"fedpkg local" command.

Version: qt5-qtwebkit-5.212.0-0.46.alpha4.fc33
Host: AMD Ryzen 9 3900X 12-Core Processor
Disks: fast NVMe disks, but using dm-crypt and LVM

Compiling qtwebkit normally (baseline):
  28m19.921s

Baseline with nosync enabled:
  28m12.67s

nbdkit file plugin backed by a local LV:
  28m53.259s (baseline + 2%)
  - over a local Unix domain socket
  - nbd.ko client (loop mounting) with multi-conn = 4

nbdkit tmpdisk plugin backed by a local LV:
  28m14.151s
  - over a local Unix domain socket
  - nbd.ko client (loop mounting) with multi-conn = 1 [note 1]

As you can see there's no appreciable difference in any of the times.
Basically they all take the same time: the overhead of nbdkit + loop
mounting is not significant.

The tmpdisk plugin erases flush/FUA requests (deliberately), which is
similar to, but architecturally cleaner than, nosync IMHO.  However
nosync makes no difference here because it's intended for a different
use case which we are not testing:
https://github.com/rpm-software-management/mock/wiki/Feature-nosync

Where nbdkit might be useful for Koji builds (we're already using it
for some Fedora/RISC-V builds):

* Lets you create on-the-fly temporary disks backed by disk or memory.

* Could be used remotely, e.g. if you have builders with small amounts
  of storage, but a fast network and a separate storage server.

* Could allow you to engineer the crap out of the storage to tune it
  very specifically for builds (dropping FUAs would only be the start
  here).

Rich.
[note 1] tmpdisk doesn't support multi-conn.  This is because it
serves a different, fresh temporary disk to each client, so if a
single client connected multiple times it would see corruption, since
writes over one connection would not be seen on the other connections.
We do normally recommend using multi-conn because it can improve
performance, although not in these tests.  However changing tmpdisk to
support multi-conn is architecturally tricky: you'd have to have a way
to associate client connections with builds (exportname?).

-- 
Richard Jones, Virtualization Group, Red Hat
http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
Fedora Windows cross-compiler. Compile Windows programs, test, and
build Windows installers. Over 100 libraries supported.
http://fedoraproject.org/wiki/MinGW
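For readers who want to try the setup described above, here is a rough
sketch of how the tmpdisk tests were wired up.  This is an
illustrative reconstruction, not the exact commands used: the socket
path, disk size, NBD device and mount point are all placeholder
values.

```shell
# Serve a fresh temporary disk with the nbdkit tmpdisk plugin over a
# local Unix domain socket (size is illustrative).
nbdkit -U /tmp/nbd.sock tmpdisk size=16G

# Loop-mount it on the same machine using the kernel NBD client
# (nbd.ko).  tmpdisk hands a fresh disk to each connection, so use a
# single connection here; with the file plugin you could add
# "-connections 4" to test multi-conn.
modprobe nbd
nbd-client -unix /tmp/nbd.sock /dev/nbd0
mount /dev/nbd0 /var/lib/mock
```

These commands need root and an installed nbdkit/nbd package; tear
down with umount, "nbd-client -d /dev/nbd0", and killing nbdkit.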
Richard W.M. Jones
2020-May-23 15:12 UTC
Re: [Libguestfs] RPM package builds backed by nbdkit
On Thu, May 21, 2020 at 03:48:18PM +0100, Richard W.M. Jones wrote:
> Context:
> https://bugzilla.redhat.com/show_bug.cgi?id=1837809#c28

I collected a few more stats.  This time I'm using a full
"fedpkg mockbuild" of Mesa 3D from Rawhide.  I chose mesa largely at
random, but it has the nice properties that builds take a reasonable,
but not too long, amount of time, and it has to install a lot of
dependencies in the mock chroot.

As I wanted this to look as much like Koji as possible, I enabled the
yum cache, disabled the root cache and disabled ccache.

Baseline build: 5m15.548s, 5m14.383s
  - This is with /var/lib/mock mounted on a logical volume formatted
    with ext4.

tmpfs: 4m10.350s, 4m2.618s
  - This is supposed to be the fastest possible case: /var/lib/mock is
    mounted on a tmpfs.  It's the suggested configuration for Koji
    builders when performance is paramount.

nbdkit file plugin: 5m21.668s, 4m59.460s, 5m2.020s
  - nbd.ko, multi-conn = 4.
  - Similar enough to the baseline build, showing that NBD has no/low
    overhead.
  - For unclear reasons multi-conn has no effect.  By adding the log
    filter I could see that it only ever uses one connection.

nbdkit memory plugin: 4m17.861s, 4m13.609s
  - nbd.ko, multi-conn = 4.
  - This is very similar to tmpfs, showing that NBD itself doesn't
    have very much overhead.  (As above, multi-conn has no effect.)

nbdkit file plugin + fuamode=discard: 4m13.213s, 4m15.510s
  - nbd.ko, multi-conn = 4.
  - This is very interesting because it shows that almost all of the
    performance benefit can be gained by disabling flush requests,
    while still using disks for backing.

remote nbdkit memory plugin: 6m40.881s, 6m43.723s
  - nbd.ko, multi-conn = 4, over a TCP socket to a server located next
    to the build system through a gigabit ethernet switch.
  - Only 25% slower than direct access to a local disk.

----------------------------------------------------------------------

Some thoughts (sorry these are not very conclusive, your thoughts also
welcome ...)
(0) Performance of nbd.ko + nbdkit is excellent, even remote.

(1) NVMe disks are really fast.  I'm sure the differences between
in-memory and disk would be much larger if I was using a hard disk.

(2) Why are all requests happening over a single connection?  The
build is highly parallel.

(3) I was able to use the log filter to collect detailed log
information with almost zero overhead.  However I'm not sure exactly
what I can do with it.
http://oirase.annexia.org/2020-mesa-build.log

(4) What can we do to optimize the block device for this situation?
Obviously drop flushes.  Can we do some hierarchical storage approach
where we create a large RAM disk but back the lesser-used bits by
disk?  (The small difference in performance between RAM and NVMe makes
me think this will not be very beneficial.)

Rich.

-- 
Richard Jones, Virtualization Group, Red Hat
http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
virt-top is 'top' for virtual machines.  Tiny program with many
powerful monitoring features, net stats, disk stats, logging, etc.
http://people.redhat.com/~rjones/virt-top
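For anyone wanting to reproduce the filter experiments mentioned in
this thread, here is a hedged sketch of the relevant nbdkit
invocations.  Device paths, sizes, hostnames and log filenames are
illustrative placeholders, not the values used in the measurements;
the fua filter with fuamode=discard drops flush/FUA requests, and the
log filter records every request to a file.

```shell
# file plugin backed by a local LV, with flush/FUA requests discarded
# and all requests logged (device and filenames are illustrative).
nbdkit -U /tmp/nbd.sock \
       --filter=log --filter=fua \
       file /dev/vg0/mockdisk \
       fuamode=discard logfile=/tmp/mock-build.log

# Remote variant: serve the memory plugin over TCP on the storage
# server, then connect from the builder with the kernel NBD client.
nbdkit memory size=16G                              # on the server
nbd-client storage.example.com /dev/nbd0 -connections 4   # on the builder
```

As above, these commands require root and an installed nbdkit; the
remote variant assumes the default NBD TCP port is reachable between
the two machines.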