Richard W.M. Jones
2020-May-28 11:22 UTC
[Libguestfs] [PATCH v2v] v2v: -it vddk: Don't use nbdkit readahead filter with VDDK (RHBZ#1832805).
This is the simplest solution to this problem.  There are two other
possible fixes I considered:

* Increase the documented limit (see
  http://libguestfs.org/virt-v2v-input-vmware.1.html#vddk:-esxi-nfc-service-memory-limits).
  However at the moment we know the current limit works through
  extensive testing (without readahead), plus I have no idea, nor any
  way to test, whether larger limits are supported by all versions of
  VMware new and old.  The limit we are recommending at the moment is
  the one documented by VMware.  Also this would require users to
  change their VMware settings again, and would no doubt confuse
  people who have already adjusted them and might not understand why
  they need to adjust them again for a v2v minor release.

* Split large requests in nbdkit-vddk-plugin.  But it's a bit silly to
  coalesce requests in a filter and then split them up again at a
  later stage in the pipeline, not to mention error-prone when you
  consider multithreading etc.  (A rough sketch of this idea follows
  below.)

Rich.
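To make the second alternative concrete, here is a minimal sketch of
request splitting.  It is illustrative only: nbdkit-vddk-plugin is
written in C, so this OCaml fragment is not plugin code, and
max_request, vddk_read, buf and the int offsets are hypothetical names
chosen for the example.

  (* Split one large read into requests no bigger than the server's
   * NFC allocation limit.  [vddk_read buf pos off n] is a hypothetical
   * stand-in for a single bounded read into [buf] at buffer position
   * [pos], from disk offset [off], of length [n] bytes.
   *)
  let max_request = 32 * 1024 * 1024

  let split_read vddk_read buf offset count =
    let rec loop pos remaining =
      if remaining > 0 then begin
        let n = min remaining max_request in
        vddk_read buf pos (offset + pos) n;   (* one bounded request *)
        loop (pos + n) (remaining - n)
      end
    in
    loop 0 count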
Richard W.M. Jones
2020-May-28 11:22 UTC
[Libguestfs] [PATCH v2v] v2v: -it vddk: Don't use nbdkit readahead filter with VDDK (RHBZ#1832805).
This filter deliberately tries to coalesce reads into larger requests.
Unfortunately VMware has low limits on the size of requests it can
serve to a VDDK client, and the larger requests would break with errors
like this:

  nbdkit: vddk[3]: error: [NFC ERROR] NfcFssrvrProcessErrorMsg: received NFC error 5 from server: Failed to allocate the requested 33554456 bytes

We already increase the maximum request size by changing the
configuration on the VMware server, but it's not sufficient for VDDK
with the readahead filter.

As readahead is only an optimization, the simplest solution is to
disable this filter when we're using nbdkit-vddk-plugin.

Thanks: Ming Xie
---
 v2v/nbdkit_sources.ml | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/v2v/nbdkit_sources.ml b/v2v/nbdkit_sources.ml
index 979c3773..e97583a5 100644
--- a/v2v/nbdkit_sources.ml
+++ b/v2v/nbdkit_sources.ml
@@ -97,9 +97,13 @@ let common_create ?bandwidth ?extra_debug ?extra_env plugin_name plugin_args
   let cmd = Nbdkit.add_filter_if_available cmd "retry" in
 
   (* Adding the readahead filter is always a win for our access
-   * patterns.  However if it doesn't exist don't worry.
+   * patterns.  If it doesn't exist don't worry.  However it
+   * breaks VMware servers (RHBZ#1832805).
    *)
-  let cmd = Nbdkit.add_filter_if_available cmd "readahead" in
+  let cmd =
+    if plugin_name <> "vddk" then
+      Nbdkit.add_filter_if_available cmd "readahead"
+    else cmd in
 
   (* Caching extents speeds up qemu-img, especially its consecutive
    * block_status requests with req_one=1.
--
2.18.2
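For readability, this is approximately how that part of common_create
in v2v/nbdkit_sources.ml reads once the hunk above is applied
(reassembled from the diff; the rest of the function is elided):

  let cmd = Nbdkit.add_filter_if_available cmd "retry" in

  (* Adding the readahead filter is always a win for our access
   * patterns.  If it doesn't exist don't worry.  However it
   * breaks VMware servers (RHBZ#1832805).
   *)
  let cmd =
    if plugin_name <> "vddk" then
      Nbdkit.add_filter_if_available cmd "readahead"
    else cmd in

The guard keys off plugin_name, so readahead is still added (when the
filter is available) for every other input transport; only the vddk
case is excluded.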
Pino Toscano
2020-May-28 12:46 UTC
Re: [Libguestfs] [PATCH v2v] v2v: -it vddk: Don't use nbdkit readahead filter with VDDK (RHBZ#1832805).
On Thursday, 28 May 2020 13:22:53 CEST Richard W.M. Jones wrote:
> This filter deliberately tries to coalesce reads into larger requests.
> Unfortunately VMware has low limits on the size of requests it can
> serve to a VDDK client, and the larger requests would break with errors
> like this:
>
>   nbdkit: vddk[3]: error: [NFC ERROR] NfcFssrvrProcessErrorMsg: received NFC error 5 from server: Failed to allocate the requested 33554456 bytes
>
> We already increase the maximum request size by changing the
> configuration on the VMware server, but it's not sufficient for VDDK
> with the readahead filter.
>
> As readahead is only an optimization, the simplest solution is to
> disable this filter when we're using nbdkit-vddk-plugin.
>
> Thanks: Ming Xie
> ---

LGTM.

--
Pino Toscano