Eric Blake
2020-Jul-20 15:50 UTC
Re: [Libguestfs] [PATCH libnbd PROPOSAL] Add APIs for listing exports from an NBD server.
On 7/20/20 9:43 AM, Richard W.M. Jones wrote:

> A major missing feature of this library was the ability to list
> exports from an NBD server. This implements the feature by adding a
> new handle mode and additional functions for querying the list of
> export names.
> ---
>  .gitignore | 1 +
>  examples/Makefile.am | 14 +++
>  examples/list-exports.c | 108 +++++++++++++++++++
>  generator/API.ml | 72 +++++++++++++
>  generator/Makefile.am | 1 +
>  generator/state_machine.ml | 39 +++++++
>  generator/states-newstyle-opt-list.c | 131 +++++++++++++++++++++++
>  generator/states-newstyle-opt-starttls.c | 8 +-
>  lib/handle.c | 50 +++++++++
>  lib/internal.h | 9 ++
>  lib/nbd-protocol.h | 6 ++
>  11 files changed, 435 insertions(+), 4 deletions(-)
>
> +++ b/examples/list-exports.c
> @@ -0,0 +1,108 @@
> +/* This example shows how to list NBD exports.
> + *
> + * To test this with qemu-nbd:
> + * $ qemu-nbd -x "hello" -t -k /tmp/sock disk.img
> + * $ ./run examples/list-exports /tmp/sock
> + * [0] hello
> + * Which export to connect to? 0
> + * Connecting to hello ...
> + * /tmp/sock: hello: size = 2048 bytes
> + */

It's possible to use qemu-kvm to set up multiple export names (via QMP
commands) over one NBD server, if that would make the example any more
powerful (although it is a lot more verbose than the one-liner of
qemu-nbd which always has a single export). And I'm guessing you are
hoping to add counterpart APIs into nbdkit to expose multiple exports
(such as the tar or ext2 filter being able to list available exports
when using the client exportname...)

> +  /* Set the list exports mode in the handle. */
> +  nbd_set_list_exports (nbd, 1);

s/1/true/, given that our C bindings actually use the bool type.

> +
> +  /* Connect to the NBD server over a
> +   * Unix domain socket. A side effect of
> +   * connecting is to list the exports.
> +   * This operation can fail normally, so
> +   * we need to check the return value and
> +   * error code.
> +   */
> +  r = nbd_connect_unix (nbd, argv[1]);
> +  if (r == -1 && nbd_get_errno () == ENOTSUP) {
> +    fprintf (stderr, "%s\n", nbd_get_error ());
> +    exit (EXIT_FAILURE);
> +  }
> +
> +  if (nbd_get_nr_list_exports (nbd) == 0) {
> +    fprintf (stderr, "Server does not support "
> +             "listing exports.\n");
> +    exit (EXIT_FAILURE);
> +  }
> +
> +  /* Display the list of exports. */
> +  for (i = 0;
> +       i < nbd_get_nr_list_exports (nbd);
> +       i++) {
> +    name = nbd_get_list_export_name (nbd, i);
> +    printf ("[%d] %s\n", i, name);
> +    free (name);
> +  }

Looks like a pretty simple way to do the iteration.

> +  printf ("Which export to connect to? ");
> +  if (scanf ("%d", &i) != 1) exit (EXIT_FAILURE);

scanf("%d") is undefined on integer overflow, but is simple enough for
use in this example. In fact, for an example I might just use atoi(),
with even worse error-handling behavior but easier use.

> +  name = nbd_get_list_export_name (nbd, i);
> +  printf ("Connecting to %s ...\n", name);
> +  nbd_close (nbd);
> +
> +  /* Connect again to the chosen export. */
> +  nbd2 = nbd_create ();
> +  if (nbd2 == NULL) {
> +    fprintf (stderr, "%s\n", nbd_get_error ());
> +    exit (EXIT_FAILURE);
> +  }
> +
> +  if (nbd_set_export_name (nbd2, name) == -1) {
> +    fprintf (stderr, "%s\n", nbd_get_error ());
> +    exit (EXIT_FAILURE);
> +  }

I see what you did. Rather than letting a user re-use the connection
(and pick an export name after obtaining the listing), you require the
user to transfer the state of which export name is interesting from the
first handle in list mode over to the requested export in the second
handle.
I don't see any convenient way to do it all on one handle, so your
approach is good enough.

> +++ b/generator/API.ml
> @@ -588,6 +588,72 @@ the time of compilation.";
>       Link "aio_is_created"; Link "aio_is_ready"];
>   };
>
> +  "set_list_exports", {
> +    default_call with
> +    args = [Bool "list"]; ret = RErr;
> +    permitted_states = [ Created ];
> +    shortdesc = "set whether to list server exports";
> +    longdesc = "\
> +Set this flag to true on a handle in order to list NBD exports
> +provided by the server.
> +
> +In this mode, during connection we query the server for the list
> +of NBD exports and collect them in the handle. You can query
> +the list of exports provided by the server by calling
> +C<nbd_get_nr_exports> and C<nbd_get_export_name>. After choosing
> +the export you want, you should close this handle, create a new
> +NBD handle (C<nbd_create>), set the export name (C<nbd_set_export_name>),
> +and connect on the new handle.
> +
> +Some servers do not support listing exports at all. In
> +that case the connect call will return error C<ENOTSUP>
> +and C<nbd_get_nr_exports> will return 0.
> +
> +Some servers do not respond with all the exports they
> +support, either because of an incomplete implementation of
> +this feature, or because they only list exports relevant
> +to non-TLS or TLS when a non-TLS or TLS connection is
> +opened.";

Here's looking at 'nbdkit info' which supports more exports than there
are atoms in the universe ;)

> +    example = Some "examples/list-exports.c";
> +    see_also = [Link "get_list_exports";
> +                Link "get_nr_list_exports"; Link "get_list_export_name"];
> +  };

Looks nice.

> +
> +  "get_nr_list_exports", {
> +    default_call with
> +    args = []; ret = RInt;
> +    permitted_states = [ Closed; Dead ];
> +    shortdesc = "return the number of exports returned by the server";
> +    longdesc = "\
> +If list exports mode was enabled on the handle and you connected
> +to the server, this returns the number of exports returned by the
> +server. This may be 0 or incomplete for reasons given in
> +C<nbd_set_list_exports>.";
> +    see_also = [Link "set_list_exports"];
> +  };
> +
> +  "get_list_export_name", {
> +    default_call with
> +    args = [ Int "i" ]; ret = RString;
> +    permitted_states = [ Closed; Dead ];
> +    shortdesc = "return the i'th export name";
> +    longdesc = "\
> +If list exports mode was enabled on the handle and you connected
> +to the server, this can be used to return the i'th export name
> +from the list returned by the server.";
> +    see_also = [Link "set_list_exports"];
> +  };

Is it worth trying to have some sort of a callback mode where the
callback function is called once per export name advertised while the
connection attempt is underway, rather than this post-mortem query of
the data copied into the nbd handle? But as your added example code
showed, this wasn't too bad to code with.

> +++ b/generator/state_machine.ml
> @@ -273,6 +273,7 @@ and newstyle_state_machine = [
>    * state needs to run and skip to the next state in the list if not.
>    *)
>   Group ("OPT_STARTTLS", newstyle_opt_starttls_state_machine);
> +  Group ("OPT_LIST", newstyle_opt_list_state_machine);
>   Group ("OPT_STRUCTURED_REPLY", newstyle_opt_structured_reply_state_machine);
>   Group ("OPT_SET_META_CONTEXT", newstyle_opt_set_meta_context_state_machine);
>   Group ("OPT_GO", newstyle_opt_go_state_machine);
> @@ -341,6 +342,44 @@ and newstyle_opt_starttls_state_machine = [
>   };
>  ]
>
> +(* Fixed newstyle NBD_OPT_LIST option. *)
> +and newstyle_opt_list_state_machine = [
> +  State {
> +    default_state with
> +    name = "START";
> +    comment = "Start listing exports if in list mode.";
> +    external_events = [];
> +  };
> +
> +  State {
> +    default_state with
> +    name = "SEND";
> +    comment = "Send newstyle NBD_OPT_LIST to being listing exports";

s/being/begin/

> +++ b/generator/states-newstyle-opt-list.c

> +STATE_MACHINE {
> + NEWSTYLE.OPT_LIST.START:
> +  if (!h->list_exports) {
> +    SET_NEXT_STATE (%^OPT_STRUCTURED_REPLY.START);
> +    return 0;
> +  }
> +
> +  h->sbuf.option.version = htobe64 (NBD_NEW_VERSION);
> +  h->sbuf.option.option = htobe32 (NBD_OPT_LIST);
> +  h->sbuf.option.optlen = 0;
> +  h->wbuf = &h->sbuf;
> +  h->wlen = sizeof (h->sbuf.option);
> +  SET_NEXT_STATE (%SEND);
> +  return 0;

Is it worth enhancing the code to add another list mode where we can
also query NBD_OPT_INFO on each export thus listed? (Compare to
'qemu-nbd --list'). Hmm, if we do that, list mode may need to be an
enum instead of a bool. But that would be a followup because it adds
more complexity; this is already a good addition on its own.

> + NEWSTYLE.OPT_LIST.CHECK_REPLY:
> +  const size_t maxpayload = sizeof h->sbuf.or.payload.server;
> +  uint32_t reply;
> +  uint32_t len;
> +  uint32_t elen;
> +  char *name;
> +  char **new_exports;
> +
> +  reply = be32toh (h->sbuf.or.option_reply.reply);
> +  len = be32toh (h->sbuf.or.option_reply.replylen);
> +  switch (reply) {
> +  case NBD_REP_SERVER:
> +    /* Got one export. */
> +    if (len > maxpayload)
> +      debug (h, "skipping too large export name reply");
> +    else {
> +      elen = be32toh (h->sbuf.or.payload.server.server.export_name_len);
> +      if (elen > len - 4) {
> +        set_error (0, "invalid export length");
> +        SET_NEXT_STATE (%.DEAD);
> +        return 0;
> +      }
> +      /* Copy the export name to the handle list. */
> +      name = strndup (h->sbuf.or.payload.server.str, elen);
> +      if (name == NULL) {
> +        set_error (errno, "strdup");
> +        SET_NEXT_STATE (%.DEAD);
> +        return 0;
> +      }
> +      new_exports = realloc (h->exports, sizeof (char *) * (h->nr_exports+1));
> +      if (new_exports == NULL) {
> +        set_error (errno, "strdup");
> +        SET_NEXT_STATE (%.DEAD);
> +        free (name);
> +        return 0;
> +      }
> +      h->exports = new_exports;
> +      h->exports[h->nr_exports++] = name;
> +    }
> +
> +    /* Wait for more replies. */
> +    h->rbuf = &h->sbuf;
> +    h->rlen = sizeof (h->sbuf.or.option_reply);
> +    SET_NEXT_STATE (%RECV_REPLY);
> +    return 0;
> +
> +  case NBD_REP_ACK:
> +    /* Finished receiving the list. */
> +    SET_NEXT_STATE (%^OPT_STRUCTURED_REPLY.START);
> +    return 0;
> +
> +  default:
> +    if (handle_reply_error (h) == -1) {
> +      SET_NEXT_STATE (%.DEAD);

Would this be better as %.CLOSED, or even a transition into a state
where we send a clean NBD_OPT_ABORT for graceful shutdown rather than
abrupt disconnect from the server?

> +      return 0;
> +    }
> +    set_error (ENOTSUP, "unexpected response, possibly the server does not "
> +               "support listing exports");
> +    SET_NEXT_STATE (%.DEAD);

whereas this definitely makes sense as %.DEAD.

> +    return 0;
> +  }
> +  return 0;
> +
> +} /* END STATE MACHINE */
> diff --git a/generator/states-newstyle-opt-starttls.c b/generator/states-newstyle-opt-starttls.c
> index d220c4f..2d74e5f 100644
> --- a/generator/states-newstyle-opt-starttls.c

> +++ b/lib/handle.c

> +int
> +nbd_unlocked_set_list_exports (struct nbd_handle *h, bool list)
> +{
> +  h->list_exports = true;

s/true/list/ (you never really tested clearing the mode...)

> +char *
> +nbd_unlocked_get_list_export_name (struct nbd_handle *h,
> +                                   int i)
> +{
> +  char *name;
> +
> +  if (!h->list_exports) {
> +    set_error (EINVAL, "list exports mode not selected on this handle");
> +    return NULL;
> +  }
> +  if (i < 0 || i >= (int) h->nr_exports) {

That cast makes sense, but looks odd - any server that tries to send
more than 2G export names (by assuming that our size_t > 32 bits) is
sending a LOT of traffic in response to a single NBD_OPT_LIST command.
Would it be better to just track the list size as int, and change the
state machine to fail if the server hasn't completed its listing in X
number of responses (perhaps where X is 1 million, or determined by the
user, or...)?

> +++ b/lib/nbd-protocol.h
> @@ -154,6 +154,12 @@ struct nbd_fixed_new_option_reply_info_export {
>   uint16_t eflags; /* per-export flags */
>  } NBD_ATTRIBUTE_PACKED;
>
> +/* NBD_REP_SERVER reply (follows fixed_new_option_reply). */
> +struct nbd_fixed_new_option_reply_server {
> +  uint32_t export_name_len; /* length of export name */
> +  /* followed by a string export name and description*/
> +} NBD_ATTRIBUTE_PACKED;
> +
>  /* NBD_REP_META_CONTEXT reply (follows fixed_new_option_reply). */
>  struct nbd_fixed_new_option_reply_meta_context {
>   uint32_t context_id; /* metadata context ID */
>

We'll want this synchronized with nbdkit.

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.  +1-919-301-3226
Virtualization: qemu.org | libvirt.org
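For readers following along, here is a minimal, self-contained sketch of
the client-side flow being reviewed above, with the s/1/true/ suggestion
applied. It assumes the nbd_set_list_exports, nbd_get_nr_list_exports and
nbd_get_list_export_name calls proposed in this patch; at the time of this
thread they were a proposal, not part of any released libnbd API.

  #include <stdbool.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <errno.h>
  #include <libnbd.h>

  int
  main (int argc, char *argv[])
  {
    struct nbd_handle *nbd;
    int i, n;
    char *name;

    if (argc != 2) {
      fprintf (stderr, "usage: %s socket\n", argv[0]);
      exit (EXIT_FAILURE);
    }

    nbd = nbd_create ();
    if (nbd == NULL) goto error;

    /* Ask the connect call to collect the export list (proposed API). */
    if (nbd_set_list_exports (nbd, true) == -1) goto error;

    /* As in the example above, the connect call may "fail" once the
     * list has been fetched; only ENOTSUP is treated as fatal here.
     */
    if (nbd_connect_unix (nbd, argv[1]) == -1 &&
        nbd_get_errno () == ENOTSUP)
      goto error;

    n = nbd_get_nr_list_exports (nbd);
    if (n <= 0) {
      fprintf (stderr, "server did not report any exports\n");
      exit (EXIT_FAILURE);
    }
    for (i = 0; i < n; i++) {
      name = nbd_get_list_export_name (nbd, i);
      if (name == NULL) goto error;
      printf ("[%d] %s\n", i, name);
      free (name);
    }
    nbd_close (nbd);
    exit (EXIT_SUCCESS);

   error:
    fprintf (stderr, "%s\n", nbd_get_error ());
    exit (EXIT_FAILURE);
  }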
Richard W.M. Jones
2020-Jul-20 16:47 UTC
Re: [Libguestfs] [PATCH libnbd PROPOSAL] Add APIs for listing exports from an NBD server.
On Mon, Jul 20, 2020 at 10:50:51AM -0500, Eric Blake wrote:

> Is it worth trying to have some sort of a callback mode where the
> callback function is called once per export name advertised while
> the connection attempt is underway, rather than this post-mortem
> query of the data copied into the nbd handle? But as your added
> example code showed, this wasn't too bad to code with.

My reasoning not to have a callback is that it's both a little harder
to use and, more importantly, ties us to a specific set of per-export
fields. For example if we had a callback which was:

  int cb (const char *export_name);

then you could only have an export name, but my reading of the protocol
spec is that in fact there's already another field we are ignoring
(some sort of description - not sure what servers actually set it), and
there's scope for more fields in future.

With the current API we can add more fields in future.

> >+++ b/generator/states-newstyle-opt-list.c
> >
> >+STATE_MACHINE {
> >+ NEWSTYLE.OPT_LIST.START:
> >+  if (!h->list_exports) {
> >+    SET_NEXT_STATE (%^OPT_STRUCTURED_REPLY.START);
> >+    return 0;
> >+  }
> >+
> >+  h->sbuf.option.version = htobe64 (NBD_NEW_VERSION);
> >+  h->sbuf.option.option = htobe32 (NBD_OPT_LIST);
> >+  h->sbuf.option.optlen = 0;
> >+  h->wbuf = &h->sbuf;
> >+  h->wlen = sizeof (h->sbuf.option);
> >+  SET_NEXT_STATE (%SEND);
> >+  return 0;
>
> Is it worth enhancing the code to add another list mode where we can
> also query NBD_OPT_INFO on each export thus listed? (Compare to
> 'qemu-nbd --list'). Hmm, if we do that, list mode may need to be an
> enum instead of a bool. But that would be a followup because it
> adds more complexity; this is already a good addition on its own.

I'm actually a little unclear on the protocol spec around this. Would
we be able to issue N x NBD_OPT_INFO (after NBD_OPT_LIST, on the same
connection)? I was envisaging opening one new connection per export
and just using the usual APIs to fetch the data.

> >+  case NBD_REP_ACK:
> >+    /* Finished receiving the list. */
> >+    SET_NEXT_STATE (%^OPT_STRUCTURED_REPLY.START);
> >+    return 0;
> >+
> >+  default:
> >+    if (handle_reply_error (h) == -1) {
> >+      SET_NEXT_STATE (%.DEAD);
>
> Would this be better as %.CLOSED, or even a transition into a state
> where we send a clean NBD_OPT_ABORT for graceful shutdown rather
> than abrupt disconnect from the server?

In fact we cannot move to CLOSED. The reason is obscure, it's because
you hit this code while doing the connect:

https://github.com/libguestfs/libnbd/blob/3599c79480df2932b3dba0b6f9c689634768c423/lib/connect.c#L47

It doesn't really work to have connect APIs return an error for what is
not in fact an error. However to complicate things, with qemu-nbd -x we
do return an error from connect* each time because we cannot set the
export name correctly until we've fetched the export names. Hence the
example ignores the return value from connect (unless ENOTSUP).
This is a bit of a mess ...

> >+int
> >+nbd_unlocked_set_list_exports (struct nbd_handle *h, bool list)
> >+{
> >+  h->list_exports = true;
>
> s/true/list/ (you never really tested clearing the mode...)

Oops.

> >+char *
> >+nbd_unlocked_get_list_export_name (struct nbd_handle *h,
> >+                                   int i)
> >+{
> >+  char *name;
> >+
> >+  if (!h->list_exports) {
> >+    set_error (EINVAL, "list exports mode not selected on this handle");
> >+    return NULL;
> >+  }
> >+  if (i < 0 || i >= (int) h->nr_exports) {
>
> That cast makes sense, but looks odd - any server that tries to send
> more than 2G export names (by assuming that our size_t > 32 bits) is
> sending a LOT of traffic in response to a single NBD_OPT_LIST
> command. Would it be better to just track the list size as int, and
> change the state machine to fail if the server hasn't completed its
> listing in X number of responses (perhaps where X is 1 million, or
> determined by the user, or...)?

Yes we should definitely limit this to something reasonable. There's
not a good way to handle servers which can serve any export name
(nbdkit info as you pointed out).

Rich.

-- 
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
libguestfs lets you edit virtual machines. Supports shell scripting,
bindings from many languages. http://libguestfs.org
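To make the extensibility point above concrete, here is a hypothetical
(not proposed) pair of callback signatures. The names are illustrative
only: a name-only callback cannot later grow a description field without
breaking its signature, whereas the getter-style API can simply add new
nbd_get_list_export_* calls.

  /* Hypothetical signatures, for illustration only. */
  typedef int (*nbd_list_export_cb_v1) (void *user_data,
                                        const char *export_name);

  /* Exposing the NBD_REP_SERVER description (or any future per-export
   * field) would force a second, incompatible callback type:
   */
  typedef int (*nbd_list_export_cb_v2) (void *user_data,
                                        const char *export_name,
                                        const char *description);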
Eric Blake
2020-Jul-21 15:18 UTC
Re: [Libguestfs] [PATCH libnbd PROPOSAL] Add APIs for listing exports from an NBD server.
On 7/20/20 11:47 AM, Richard W.M. Jones wrote:

> On Mon, Jul 20, 2020 at 10:50:51AM -0500, Eric Blake wrote:
>> Is it worth trying to have some sort of a callback mode where the
>> callback function is called once per export name advertised while
>> the connection attempt is underway, rather than this post-mortem
>> query of the data copied into the nbd handle? But as your added
>> example code showed, this wasn't too bad to code with.
>
> My reasoning not to have a callback is that it's both a little harder
> to use, but more importantly ties us to a specific set of per-export
> fields. For example if we had a callback which was:
>
>   int cb (const char *export_name);
>
> then you could only have an export name, but my reading of the
> protocol spec is that in fact there's already another field we are
> ignoring (some sort of description - not sure what servers actually
> set it), and there's scope for more fields in future.
>
> With the current API we can add more fields in future.

As mentioned in the nbdkit counterpart thread, the existing
NBD_REP_SERVER only sets a description string (no room for further
extension without adding a new NBD_OPT_/NBD_REP pair - although I also
outlined such a pair in that thread). But 'qemu-nbd -D' is such a
server that sets the description, so yes, we might as well expose it
now.

>
>>> +++ b/generator/states-newstyle-opt-list.c
>>
>>> +STATE_MACHINE {
>>> + NEWSTYLE.OPT_LIST.START:
>>> +  if (!h->list_exports) {
>>> +    SET_NEXT_STATE (%^OPT_STRUCTURED_REPLY.START);
>>> +    return 0;
>>> +  }
>>> +
>>> +  h->sbuf.option.version = htobe64 (NBD_NEW_VERSION);
>>> +  h->sbuf.option.option = htobe32 (NBD_OPT_LIST);
>>> +  h->sbuf.option.optlen = 0;
>>> +  h->wbuf = &h->sbuf;
>>> +  h->wlen = sizeof (h->sbuf.option);
>>> +  SET_NEXT_STATE (%SEND);
>>> +  return 0;
>>
>> Is it worth enhancing the code to add another list mode where we can
>> also query NBD_OPT_INFO on each export thus listed? (Compare to
>> 'qemu-nbd --list'). Hmm, if we do that, list mode may need to be an
>> enum instead of a bool. But that would be a followup because it
>> adds more complexity; this is already a good addition on its own.
>
> I'm actually a little unclear on the protocol spec around this. Would
> we be able to issue N x NBD_OPT_INFO (after NBD_OPT_LIST, on the same
> connection)? I was envisaging opening one new connection per export
> and just using the usual APIs to fetch the data.

Yes, you can send N x NBD_OPT_INFO, and even NBD_OPT_GO, all on the
same connection. My 'qemu-nbd --list' implementation does:

  NBD_OPT_LIST
  foreach name received:
    NBD_OPT_INFO
  NBD_OPT_ABORT

to gracefully end the connection.

The more I think about it, the more I wonder if we can just insert more
states into the state machine, by letting the user opt-in to NBD_OPT
control. Something like:

back-compat with libnbd 1.2 usage:

  nbd_create
  nbd_set_export_name("name")
  nbd_connect*
                          open socket
                          NBD_OPT_GO(name from earlier)
  nbd_aio_is_ready

but where the opt-in client with 1.4 API uses:

  nbd_create
  nbd_set_opt_mode(true)
  nbd_connect*
                          open socket
  nbd_opt_list(cb)
                          NBD_OPT_LIST
                          cb(name, description)
                          cb(name, description)
                          ...
  nbd_opt_go("name")
                          NBD_OPT_GO("name")
  nbd_aio_is_ready

That is, without opt mode, you _have_ to set the export name before
nbd_connect*, because nbd_connect will automatically jump into nbd_go;
but when you set opt mode true, you can set the export name directly
with the new nbd_opt_go, and in the meantime you can call as many APIs
managing other NBD_OPT requests as we decide to implement.
So that would mean the following APIs to be added (or modifying what
we've already pushed with your initial experiment):

  nbd_set_opt_mode(bool) - enable or disable opt mode
  nbd_opt_list(cb)       - valid only during opt mode, to send NBD_OPT_LIST
  nbd_opt_abort()        - valid only during opt mode, to end connection gracefully
  nbd_opt_info(name)     - valid only during opt mode, to send NBD_OPT_INFO
  nbd_opt_go(name)       - valid only during opt mode, to send NBD_OPT_GO

existing APIs change as follows:

  nbd_can_*    - additionally becomes valid during opt mode after a
                 successful nbd_opt_info
  nbd_connect* - performs nbd_opt_go(name) with the prior
                 nbd_set_export_name when not in opt mode, or stops
                 while still in negotiation when in opt mode

>
>>> +  case NBD_REP_ACK:
>>> +    /* Finished receiving the list. */
>>> +    SET_NEXT_STATE (%^OPT_STRUCTURED_REPLY.START);
>>> +    return 0;
>>> +
>>> +  default:
>>> +    if (handle_reply_error (h) == -1) {
>>> +      SET_NEXT_STATE (%.DEAD);
>>
>> Would this be better as %.CLOSED, or even a transition into a state
>> where we send a clean NBD_OPT_ABORT for graceful shutdown rather
>> than abrupt disconnect from the server?
>
> In fact we cannot move to CLOSED. The reason is obscure, it's because
> you hit this code while doing the connect:
>
> https://github.com/libguestfs/libnbd/blob/3599c79480df2932b3dba0b6f9c689634768c423/lib/connect.c#L47

Again, that's because our current state machine is calling NBD_OPT_GO
unconditionally, and with the name hardcoded by the last
nbd_set_export_name. With a new opt-in opt mode, we then let the client
pick the export name at the time the client is done with opt mode and
calls nbd_opt_go.

>
> It doesn't really work to have connect APIs return an error for what
> is not in fact an error. However to complicate things, with qemu-nbd
> -x we do return an error from connect* each time because we cannot set
> the export name correctly until we've fetched the export names. Hence
> the example ignores the return value from connect (unless ENOTSUP).
>
> This is a bit of a mess ...

Good thing we aren't committed to the API yet - but when we do get it
nailed down, v1.4 makes sense.

>>> +char *
>>> +nbd_unlocked_get_list_export_name (struct nbd_handle *h,
>>> +                                   int i)
>>> +{
>>> +  char *name;
>>> +
>>> +  if (!h->list_exports) {
>>> +    set_error (EINVAL, "list exports mode not selected on this handle");
>>> +    return NULL;
>>> +  }
>>> +  if (i < 0 || i >= (int) h->nr_exports) {
>>
>> That cast makes sense, but looks odd - any server that tries to send
>> more than 2G export names (by assuming that our size_t > 32 bits) is
>> sending a LOT of traffic in response to a single NBD_OPT_LIST
>> command. Would it be better to just track the list size as int, and
>> change the state machine to fail if the server hasn't completed its
>> listing in X number of responses (perhaps where X is 1 million, or
>> determined by the user, or...)?
>
> Yes we should definitely limit this to something reasonable. There's
> not a good way to handle servers which can serve any export name
> (nbdkit info as you pointed out).

The more I think about it, the more I like the idea of a callback (the
client deals with the callback as it sees fit, rather than making
libnbd store the memory to copy what it read from the wire just to hand
it to the client later).

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.  +1-919-301-3226
Virtualization: qemu.org | libvirt.org
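As a closing illustration, here is a sketch of how the opt-mode flow
outlined above might look from a client, using the names proposed in this
thread (nbd_set_opt_mode, nbd_opt_list, nbd_opt_go, nbd_opt_abort). The
exact prototypes and callback arguments are assumptions drawn from the
discussion, not a committed libnbd API.

  #include <stdbool.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <libnbd.h>

  /* Called once per export advertised in reply to NBD_OPT_LIST
   * (callback shape assumed from the cb(name, description) sketch above).
   */
  static int
  print_export (void *user_data, const char *name, const char *description)
  {
    printf ("%s\t%s\n", name, description);
    return 0;
  }

  int
  main (int argc, char *argv[])
  {
    struct nbd_handle *nbd;

    if (argc != 2) {
      fprintf (stderr, "usage: %s socket\n", argv[0]);
      exit (EXIT_FAILURE);
    }

    nbd = nbd_create ();
    if (nbd == NULL) goto error;
    if (nbd_set_opt_mode (nbd, true) == -1)   /* stop after the handshake */
      goto error;
    if (nbd_connect_unix (nbd, argv[1]) == -1) /* opens socket, no NBD_OPT_GO yet */
      goto error;
    if (nbd_opt_list (nbd, print_export, NULL) == -1) /* sends NBD_OPT_LIST */
      goto error;

    /* Either pick an export here with nbd_opt_go ("name"), or end the
     * negotiation gracefully with NBD_OPT_ABORT as below.
     */
    nbd_opt_abort (nbd);
    nbd_close (nbd);
    exit (EXIT_SUCCESS);

   error:
    fprintf (stderr, "%s\n", nbd_get_error ());
    exit (EXIT_FAILURE);
  }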