Cat's out of the bag: Rich's fuzzer run found not one, but two
independent assertion failures that a malicious server could trigger
in my recent 64-bit extension code additions.  What's more, in the
process of fixing them, we've discovered another long-standing issue
where nbd_get_size() returns confusing results compared to its
documentation, when talking to an odd server that reports a really
large export size.

After off-list discussion between Rich, Laszlo, and myself, we didn't
think an embargoed CVE against libnbd is necessary (the assertion
failures only happen to unstable releases, and the nbd_get_size()
misbehavior does not happen with normal servers and has been in place
since v1.0, so it is nothing new), so I am posting the series now for
public review.  But we will still be reaching out to secalert for
their opinion (it may be that they still see a low-priority exploit in
an app that gets confused when trying to use a negative size as a loop
bound, for example).  Once they answer, and regardless of whether we
end up doing a libnbd CVE after all, I will follow up to the mailing
list with a security incident (client apps that demand a positive
export size should probably favor nbd_get_size()<0 over
nbd_get_size()==-1).

Eric Blake (6):
  states: Tweak comment in OPT_GO state handler
  fuzzing: Disable client-side strictness checks
  api: Sanitize sizes larger than INT64_MAX
  block_status: Fix assertion with large server size
  block_status: Fix assertion on bad 64-bit block status reply
  info: Tolerate missing size

 generator/API.ml                   |  6 +++-
 generator/states-newstyle-opt-go.c |  5 ++-
 generator/states-reply-chunk.c     | 50 ++++++++++++++++--------------
 generator/C.ml                     |  2 +-
 lib/flags.c                        |  6 ++++
 fuzzing/libnbd-fuzz-wrapper.c      |  5 ++-
 info/show.c                        | 25 ++++++++-------
 7 files changed, 60 insertions(+), 39 deletions(-)

-- 
2.41.0
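[Editor's aside for client authors: a minimal sketch of the
recommended nbd_get_size()<0 check, assuming a server at
nbd://localhost.  This is illustrative only and not part of the
series.]

#include <stdio.h>
#include <stdlib.h>
#include <inttypes.h>
#include <libnbd.h>

int
main (void)
{
  struct nbd_handle *nbd = nbd_create ();
  int64_t size;

  if (nbd == NULL || nbd_connect_uri (nbd, "nbd://localhost") == -1) {
    fprintf (stderr, "%s\n", nbd_get_error ());
    exit (EXIT_FAILURE);
  }

  size = nbd_get_size (nbd);
  /* Checking < 0 rather than == -1 also rejects any historical case
   * where an oversized export leaked through as some other negative
   * value, instead of accidentally using it as a loop bound.
   */
  if (size < 0) {
    fprintf (stderr, "%s\n", nbd_get_error ());
    exit (EXIT_FAILURE);
  }
  printf ("export size: %" PRIi64 "\n", size);
  nbd_close (nbd);
  exit (EXIT_SUCCESS);
}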
Eric Blake
2023-Sep-21 20:58 UTC
[Libguestfs] [libnbd PATCH v2 1/6] states: Tweak comment in OPT_GO state handler
While auditing code, I stumbled across a confusing comment which
references a state name that does not exist.  In addition to improving
the comment, I added an assertion, since the code that arranges to
ignore the oversized payload acts at a distance
(prepare_for_reply_payload lives in states-newstyle.c).
Signed-off-by: Eric Blake <eblake at redhat.com>
Reviewed-by: Richard W.M. Jones <rjones at redhat.com>
Reviewed-by: Laszlo Ersek <lersek at redhat.com>
---
generator/states-newstyle-opt-go.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/generator/states-newstyle-opt-go.c b/generator/states-newstyle-opt-go.c
index 2ef440d4..5bc9a9ae 100644
--- a/generator/states-newstyle-opt-go.c
+++ b/generator/states-newstyle-opt-go.c
@@ -149,8 +149,11 @@ NEWSTYLE.OPT_GO.CHECK_REPLY:
   switch (reply) {
   case NBD_REP_INFO:
-    if (len > maxpayload /* see RECV_NEWSTYLE_OPT_GO_REPLY */)
+    if (len > maxpayload) {
+      /* See prepare_for_reply_payload, used in RECV_REPLY */
+      assert (h->rbuf == NULL);
       debug (h, "skipping large NBD_REP_INFO");
+    }
     else {
       uint16_t info;
       uint64_t exportsize;
--
2.41.0
Eric Blake
2023-Sep-21 20:58 UTC
[Libguestfs] [libnbd PATCH v2 2/6] fuzzing: Disable client-side strictness checks
When fuzzing, it is more desirable to always provoke the server into
sending a response, rather than sometimes accidentally skipping a wire
call because a client-side strictness test failed.
[Our fuzzer could probably be made even more powerful by changing the
fuzzer input file to be a series of records, where each record is the
API to call and then the server's response; right now, the sequence of
APIs called is hard-coded, which is not as powerful at testing
potential cross-command coupling. But that's a project for another
day.]
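[To make that idea concrete, here is a minimal sketch of what such a
record-driven loop might look like; the record layout and the helpers
queue_server_bytes/dispatch_api are invented for illustration and do
not exist in-tree.]

#include <stddef.h>
#include <stdint.h>
#include <libnbd.h>

/* Hypothetical helpers: feed canned bytes that the fake server will
 * send next, and invoke the n'th libnbd API from a dispatch table.
 */
extern void queue_server_bytes (const uint8_t *buf, size_t len);
extern void dispatch_api (struct nbd_handle *nbd, uint8_t api);

static void
replay (struct nbd_handle *nbd, const uint8_t *buf, size_t len)
{
  size_t i = 0;

  /* Each record: 1 byte API index, 2 bytes big-endian response
   * length, then that many bytes of raw server response.
   */
  while (i + 3 <= len) {
    uint8_t api = buf[i];
    uint16_t resp_len = (uint16_t) ((buf[i + 1] << 8) | buf[i + 2]);
    i += 3;
    if (resp_len > len - i)
      break;                    /* truncated record: stop replaying */
    queue_server_bytes (&buf[i], resp_len);
    dispatch_api (nbd, api);
    i += resp_len;
  }
}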
Signed-off-by: Eric Blake <eblake at redhat.com>
Reviewed-by: Richard W.M. Jones <rjones at redhat.com>
Reviewed-by: Laszlo Ersek <lersek at redhat.com>
---
fuzzing/libnbd-fuzz-wrapper.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/fuzzing/libnbd-fuzz-wrapper.c b/fuzzing/libnbd-fuzz-wrapper.c
index cbd55380..fcd1d04c 100644
--- a/fuzzing/libnbd-fuzz-wrapper.c
+++ b/fuzzing/libnbd-fuzz-wrapper.c
@@ -193,8 +193,11 @@ client (int sock)
   }

   /* Note we ignore errors in these calls because we are only
-   * interested in whether the process crashes.
+   * interested in whether the process crashes.  Likewise, we don't
+   * want to accidentally avoid sending traffic to the server merely
+   * because client side strictness sees a problem.
    */
+  nbd_set_strict_mode (nbd, 0);

   /* Enable a metadata context, for block status below. */
   nbd_add_meta_context (nbd, LIBNBD_CONTEXT_BASE_ALLOCATION);
--
2.41.0
Eric Blake
2023-Sep-21 20:58 UTC
[Libguestfs] [libnbd PATCH v2 3/6] api: Sanitize sizes larger than INT64_MAX
Our stable API has always claimed that nbd_get_size() reports a
non-negative value on success, and -1 on failure. While we know of no
existing production server (such as nbdkit, qemu-nbd, nbd-server) that
would advertise a size larger than off_t, the NBD spec has not yet
committed to enforcing a hard maximum of an export size as a signed
integer; at present, the protocol mandates merely:
S: 64 bits, size of the export in bytes (unsigned)
I've considered proposing a spec patch to upstream NBD to require a
positive signed value, and can point to this patch to help justify
such a patch - but that hasn't happened yet. Thus, it should be no
surprise that our fuzzer was quickly able to emulate a theoretical
server that claims a size larger than INT64_MAX - at which point
nbd_get_size() returned a negative value other than -1, and the API
documentation was unclear on whether the application should treat
this as success or failure.  However, as we did not crash, the fuzzer
did not flag it as interesting in v1.16.
I considered changing nbd_internal_set_size_and_flags() to move the
state machine to DEAD for any server advertising something so large
(that is, nbd_connect_*() would be unable to connect to such a
server); but without the NBD spec mandating such a limit, that is an
artificial constraint. Instead, this patch chooses to normalize all
wire values that can't fit in the int64_t return type to an EOVERFLOW
error, and clarifies the API documentation accordingly. Existing
clients that depend on a known positive size and check for
nbd_get_size()<0 are not impacted, while those that check for ==-1
will now reject such uncommon servers instead of potentially using a
negative value in subsequent computations (the latter includes
nbdinfo, which changes from reporting a negative size to instead
printing an error message). Meanwhile, clients that never called
nbd_get_size() (presumably because they infer the size from other
means, or because they intend to access offsets above 2^63 despite not
being able to reliably see that size from nbd_get_size) can still do
so.
Next, I audited all instances of 'exportsize', to see if libnbd itself
has any arithmetic issues when a large size is stored. My results for
v1.16 are as follows:
lib/nbd-protocol.h and lib/internal.h both have uint64_t exportsize
(so we are already treating size as unsigned both for wire and
internal representation - no nasty surprises with sign extension).
generator/states-{oldstyle,opt-{go,export-name}}.c grab exportsize off
the wire, byteswap if needed, and pass it to
nbd_internal_set_size_and_flags() without further inspection.
lib/flags.c has nbd_internal_set_size_and_flags() which stores the
wire value into the internal value, then nbd_unlocked_get_size() which
reports the wire value as-is (prior to this patch adding EOVERFLOW
normalization). nbd_internal_set_block_size() mentions exportsize in
comments, but does not actually compare against it.
lib/rw.c added LIBNBD_STRICT_BOUNDS checking in v1.6 via:

  if ((h->strict & LIBNBD_STRICT_BOUNDS) &&
      (offset > h->exportsize || count > h->exportsize - offset))
    {

where offset and count are also unsigned values (count is 32-bit in
1.16, 64-bit in master); but the order of comparisons is robust
against wraparound when using unsigned math.  Earlier releases, or
clients which use nbd_set_strict_mode(,0) to skip bounds checking,
have been assuming the server will reject requests with a weird
offset (most servers do, although the NBD spec only recommends that
server behavior, by stating that the burden is on the client to pass
in-bounds requests in the first place).
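[To see why that comparison order is wraparound-safe, here is a small
self-contained demonstration; the names are mine, not libnbd's.  The
naive test (offset + count > exportsize) can wrap: with exportsize =
1024, offset = UINT64_MAX and count = 2 sum to 1, which looks
in-bounds.  Checking offset first means the subtraction
exportsize - offset never underflows.]

#include <assert.h>
#include <stdint.h>

static int
in_bounds (uint64_t offset, uint64_t count, uint64_t exportsize)
{
  /* Same shape as the lib/rw.c check quoted above. */
  return !(offset > exportsize || count > exportsize - offset);
}

int
main (void)
{
  uint64_t exportsize = 1024;

  assert (in_bounds (0, 1024, exportsize));        /* whole export */
  assert (!in_bounds (1025, 0, exportsize));       /* offset past EOF */
  assert (!in_bounds (UINT64_MAX, 2, exportsize)); /* naive sum wraps */
  return 0;
}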
generator/states-reply-chunk.c added comparisons against exportsize
for NBD_OPT_BLOCK_STATUS only after 1.16, and that's the subject of
the next patch.
No other direct uses of exportsize exist, so libnbd itself can
internally handle sizes larger than 2^63 without misbehaving, outside
of nbd_get_size(), even if such an offset was computed from taking the
negative int64_t value of nbd_get_size() and turning it back into
uint64_t offset parameter.
I also audited our other APIs - everything else uses a parameter of
type UInt64 in the generator for offsets, which is translated in C.ml
to uint64_t; and we already know we are safe when C converts negative
int64_t values into uint64_t.
For other languages, Python.ml generates code for UInt64 by using
'PyArg_ParseValue("K")' with a detour through 'unsigned long long' in
case 'uint64_t' is a different type rank despite being the same number
of bits.
of bits. The Python documentation states that "K" converts an
arbitrary python integer value to a C uint64_t without overflow
checking - so it is already possible to pass offset values larger than
2^63 in nbdsh; while values larger than 2^64 or negative values are
effectively truncated as if with modulo math. Enhancing the language
bindings to explicitly detect over/underflow is outside the scope of
this patch (and could surprise users who were depending on the current
truncation semantics).
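[For reference, a minimal sketch of that conversion style using the
Python C API; this is simplified and not the actual generated code.]

#include <Python.h>
#include <stdint.h>

static PyObject *
example_parse_offset (PyObject *self, PyObject *args)
{
  unsigned long long offset_ull;
  uint64_t offset;

  /* "K" stores into unsigned long long without overflow checking, so
   * callers can already pass values at or above 2^63; negative or
   * >= 2^64 inputs are truncated as if with modulo math.
   */
  if (!PyArg_ParseTuple (args, "K", &offset_ull))
    return NULL;
  offset = offset_ull;  /* same width, possibly different type rank */
  return PyLong_FromUnsignedLongLong (offset);
}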
GoLang.ml generates UInt64 via native Go 'uint64' passed through
'C.uint64_t()', and Rust.ml generates UInt64 via native Rust 'u64'
interpreted as C uint64_t.  In both cases, I am unsure whether those
languages (which have tighter type rules than C) let you get away with
directly assigning a negative value to the native type when you really
want a positive value over 2^63; but since it is a direct map of an
unsigned 64-bit value between the native type and C, there should be
no surprises to people fluent in those languages.
OCaml.ml is a bit different; as OCaml lacks a native unsigned 64-bit
type, it generates UInt64 as native 'int64' converted to C via
'Int64_val()'. Thus, an OCaml client MUST pass a negative value if
they want to access offsets beyond 2^63. But again, someone familiar
with the language should be familiar with the limitations.
Finally, to demonstrate the difference made by this patch, I
temporarily applied the following hack to qemu (here, on top of qemu
commit 49076448):
| diff --git i/nbd/server.c w/nbd/server.c
| index b5f93a20c9c..228ce66ed2b 100644
| --- i/nbd/server.c
| +++ w/nbd/server.c
| @@ -691,7 +691,7 @@ static int nbd_negotiate_handle_info(NBDClient *client, Error **errp)
| myflags |= NBD_FLAG_SEND_DF;
| }
| trace_nbd_negotiate_new_style_size_flags(exp->size, myflags);
| - stq_be_p(buf, exp->size);
| + stq_be_p(buf, exp->size | 0x8000000000000000ULL);
| stw_be_p(buf + 8, myflags);
| rc = nbd_negotiate_send_info(client, NBD_INFO_EXPORT,
| sizeof(buf), buf, errp);
then with a just-built 'qemu-nbd -f raw -r file -t &' running, we have

pre-patch:

$ nbdsh --base -u nbd://localhost -c - <<\EOF
> try:
>     print(h.get_size())
> except nbd.Error as ex:
>     print(ex.string)
> EOF
-9223372036854770176

vs. post-patch:

$ ./run nbdsh --base -u nbd://localhost -c - <<\EOF
> try:
>     print(h.get_size())
> except nbd.Error as ex:
>     print(ex.string)
> EOF
nbd_get_size: server claims size 9223372036854781440 which does not fit in
signed result: Value too large for defined data type
A more complex patch to qemu to mask that bit back off from the offset
parameter to NBD_CMD_READ/WRITE is also possible to explore behavior
of passing large offsets over the wire, although I don't show it here.
All stable releases of libnbd have had this return value issue.  We
cannot guarantee whether clients may have their own arithmetic bugs
(such as treating the size as signed, then entering an infinite loop
when using a negative size as a loop bound), so we will be issuing a
security notice in case client apps need to file their own CVEs.
However, since all known production servers do not produce sizes that
large, and our audit shows that all stable releases of libnbd
gracefully handle large offsets even when a client converts a negative
int64_t result of nbd_get_size() back into a large uint64_t offset
value in subsequent API calls, we did not deem it high enough risk to
issue a CVE against libnbd proper at this time, although we have
reached out to Red Hat's secalert team to see if revisiting that
decision might be warranted.
Based on recent IRC chatter, there is also a slight possibility that
some future extension to the NBD protocol could specifically allow
clients to opt in to an extension where the server reports an export
size of 2^64-1 (all ones) for a unidirectional connection where
offsets are ignored (either a read-only export of indefinite length,
or an append-only export data sink - either way, basically turning NBD
into a cross-system FIFO rather than a seekable device); if such an
extension materializes, we'd probably add a named constant negative
sentinel value distinct from -1 for actual return from nbd_get_size()
at that time.
Fixes: 40881fce75 ("lib: Expose flags and export size through the API.", v0.1)
Signed-off-by: Eric Blake <eblake at redhat.com>
---
generator/API.ml | 6 +++++-
lib/flags.c | 6 ++++++
2 files changed, 11 insertions(+), 1 deletion(-)
diff --git a/generator/API.ml b/generator/API.ml
index 14988403..c4547615 100644
--- a/generator/API.ml
+++ b/generator/API.ml
@@ -2492,7 +2492,11 @@ "get_size", {
permitted_states = [ Negotiating; Connected; Closed ];
shortdesc = "return the export size";
longdesc = "\
-Returns the size in bytes of the NBD export."
+Returns the size in bytes of the NBD export.
+
+Note that this call fails with C<EOVERFLOW> for an unlikely
+server that advertises a size which cannot fit in a 64-bit
+signed integer."
^ non_blocking_test_call_description;
see_also = [SectionLink "Size of the export"; Link "opt_info"];
example = Some "examples/get-size.c";
diff --git a/lib/flags.c b/lib/flags.c
index 7e6ddedd..394abe87 100644
--- a/lib/flags.c
+++ b/lib/flags.c
@@ -253,6 +253,12 @@ nbd_unlocked_get_size (struct nbd_handle *h)
return -1;
}
+  if (h->exportsize > INT64_MAX) {
+    set_error (EOVERFLOW, "server claims size %" PRIu64
+               " which does not fit in signed result", h->exportsize);
+    return -1;
+  }
+
return h->exportsize;
}
--
2.41.0
Eric Blake
2023-Sep-21 20:58 UTC
[Libguestfs] [libnbd PATCH v2 4/6] block_status: Fix assertion with large server size
As mentioned in the previous commit ("api: Sanitize sizes larger than
INT64_MAX"), the NBD spec does not (yet) prohibit a server from
advertising a size larger than INT64_MAX.  While we can't report such
a size to the user, v1.16 was at least internally consistent in using
the server's size everywhere else.
But when adding code to implement 64-bit block status, I intentionally
wanted to guarantee that the callback sees a positive int64_t length
even when the server's wire value can be 64 bits, for ease in writing
language bindings (OCaml in particular benefitted from that
guarantee), even though I didn't document that in the API until now.
That was because I had blindly assumed that the server's exportsize
fit in 63 bits, and therefore I didn't have to worry about arithmetic
overflow once I capped the extent length to exportsize. But the
fuzzer quickly proved me wrong. What's more, with the same one-line
hack to qemu as shown in the previous commit to advertise a size with
the high-order bit set,
$ ./run nbdsh --base -u nbd://localhost -c - <<\EOF
> def f(*k):
>     pass
> h.block_status(1,0,f)
> EOF
nbdsh: generator/states-reply-chunk.c:554:
enter_STATE_REPLY_CHUNK_REPLY_RECV_BS_ENTRIES: Assertion `h->exportsize <=
INT64_MAX' failed.
Aborted (core dumped)

even though it did not dump core in v1.16.
Since my assumption was bad, rewrite the logic to increment total
after bounds-checking rather than before, and to bounds-check based on
INT64_MAX+1-64M rather than on the export size. As before, we never
report a zero-length extent to the callback. Whether or not secalert
advises us to create a CVE for the previous patch, this bug does not
deserve its own CVE as it was only introduced in recent unstable
releases.
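[A minimal sketch of a 64-bit callback that leans on that guarantee;
the signature is my reading of nbd_block_status_64's extent64
callback, using the nbd_extent layout shown in the C.ml hunk below,
and should be double-checked against the generated headers.]

#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <libnbd.h>

static int
extent_cb (void *user_data, const char *metacontext, uint64_t offset,
           nbd_extent *entries, size_t nr_entries, int *error)
{
  int64_t *total = user_data;
  size_t i;

  for (i = 0; i < nr_entries; i++) {
    /* Guaranteed by this patch: no entry length exceeds INT64_MAX
     * (nor the advertised export size), so signed accumulation is
     * safe for any extent the callback can legally see.
     */
    assert (entries[i].length <= INT64_MAX);
    *total += (int64_t) entries[i].length;
  }
  return 0;
}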
Fixes: e8d837d306 ("block_status: Add some sanity checking of server lengths", v1.17.4)
Thanks: Richard W.M. Jones <rjones at redhat.com>
Signed-off-by: Eric Blake <eblake at redhat.com>
---
generator/states-reply-chunk.c | 49 ++++++++++++++++++----------------
generator/C.ml | 2 +-
2 files changed, 27 insertions(+), 24 deletions(-)
diff --git a/generator/states-reply-chunk.c b/generator/states-reply-chunk.c
index 2cebe456..20407d91 100644
--- a/generator/states-reply-chunk.c
+++ b/generator/states-reply-chunk.c
@@ -547,11 +547,16 @@ REPLY.CHUNK_REPLY.RECV_BS_ENTRIES:
break;
}
+ /* Be careful to avoid arithmetic overflow, even when the user
+ * disabled LIBNBD_STRICT_BOUNDS to pass a suspect offset, or the
+ * server returns suspect lengths or advertised exportsize larger
+ * than 63 bits. We guarantee that callbacks will not see a
+ * length exceeding INT64_MAX or the advertised h->exportsize.
+ */
name = h->meta_contexts.ptr[i].name;
- total = 0;
- cap = h->exportsize - cmd->offset;
- assert (cap <= h->exportsize);
- assert (h->exportsize <= INT64_MAX);
+ total = cap = 0;
+ if (cmd->offset <= h->exportsize)
+ cap = h->exportsize - cmd->offset;
/* Need to byte-swap the entries returned into the callback size
* requested by the caller. The NBD protocol allows truncation as
@@ -560,10 +565,11 @@ REPLY.CHUNK_REPLY.RECV_BS_ENTRIES:
* don't like. We stop iterating on a zero-length extent (error
* only if it is the first extent), on an extent beyond the
* exportsize (unconditional error after truncating to
- * exportsize), and on an extent exceeding a 32-bit callback (no
- * error, and to simplify alignment, we truncate to 4G-64M); but
- * do not diagnose issues with the server's length alignments,
- * flag values, nor compliance with the REQ_ONE command flag.
+ * exportsize), and on an extent exceeding a callback length limit
+ * (no error, and to simplify alignment, we truncate to 64M before
+ * the limit); but we do not diagnose issues with the server's
+ * length alignments, flag values, nor compliance with the REQ_ONE
+ * command flag.
*/
for (i = 0, stop = false; i < h->bs_count && !stop; ++i) {
if (type == NBD_REPLY_TYPE_BLOCK_STATUS) {
@@ -572,16 +578,12 @@ REPLY.CHUNK_REPLY.RECV_BS_ENTRIES:
}
else {
orig_len = len = be64toh (h->bs_raw.wide[i].length);
- if (len > h->exportsize) {
- /* Since we already asserted exportsize is at most 63 bits,
- * this ensures the extent length will appear positive even
- * if treated as signed; treat this as an error now, rather
- * than waiting for the comparison to cap later, to avoid
- * arithmetic overflow.
+ if (len > INT64_MAX) {
+ /* Pick an aligned value rather than overflowing 64-bit
+ * callback; this does not require an error.
*/
stop = true;
- cmd->error = cmd->error ? : EPROTO;
- len = h->exportsize;
+ len = INT64_MAX + 1ULL - MAX_REQUEST_SIZE;
}
if (len > UINT32_MAX && !cmd->cb.wide) {
/* Pick an aligned value rather than overflowing 32-bit
@@ -600,7 +602,13 @@ REPLY.CHUNK_REPLY.RECV_BS_ENTRIES:
}
}
- total += len;
+ assert (total <= cap);
+ if (len > cap - total) {
+ /* Truncate and expose this extent as an error */
+ len = cap - total;
+ stop = true;
+ cmd->error = cmd->error ? : EPROTO;
+ }
if (len == 0) {
stop = true;
if (i > 0)
@@ -608,12 +616,7 @@ REPLY.CHUNK_REPLY.RECV_BS_ENTRIES:
/* Expose this extent as an error; we made no progress */
cmd->error = cmd->error ? : EPROTO;
}
- else if (total > cap) {
- /* Expose this extent as an error, after truncating to make progress */
- stop = true;
- cmd->error = cmd->error ? : EPROTO;
- len -= total - cap;
- }
+ total += len;
if (cmd->cb.wide) {
h->bs_cooked.wide[i].length = len;
h->bs_cooked.wide[i].flags = flags;
diff --git a/generator/C.ml b/generator/C.ml
index e5a2879b..ccaed116 100644
--- a/generator/C.ml
+++ b/generator/C.ml
@@ -496,7 +496,7 @@ let
pr "/* This is used in the callback for nbd_block_status_64.\n";
pr " */\n";
pr "typedef struct {\n";
- pr " uint64_t length;\n";
+ pr " uint64_t length; /* Will not exceed INT64_MAX */\n";
pr " uint64_t flags;\n";
pr "} nbd_extent;\n";
pr "\n";
--
2.41.0
Eric Blake
2023-Sep-21 20:58 UTC
[Libguestfs] [libnbd PATCH v2 5/6] block_status: Fix assertion on bad 64-bit block status reply
If a server replies to a block status command with an invalid count
in NBD_REPLY_TYPE_BLOCK_STATUS_EXT, we correctly detect the server's
error, but fail to mark that we've consumed enough data off the wire
to resync back to the server's next reply.  Rich's fuzzing run
initially found this, but I was able to quickly write a one-byte
patch on top of my pending qemu patches [1] to reproduce it:

[1] https://lists.gnu.org/archive/html/qemu-devel/2023-08/msg05231.html

| diff --git i/nbd/server.c w/nbd/server.c
| index 898580a9b0b..bd8d46ba3c4 100644
| --- i/nbd/server.c
| +++ w/nbd/server.c
| @@ -2326,7 +2326,7 @@ nbd_co_send_extents(NBDClient *client, NBDRequest *request, NBDExtentArray *ea,
|      iov[1].iov_base = &meta_ext;
|      iov[1].iov_len = sizeof(meta_ext);
|      stl_be_p(&meta_ext.context_id, context_id);
| -    stl_be_p(&meta_ext.count, ea->count);
| +    stl_be_p(&meta_ext.count, !ea->count);
|
|      nbd_extent_array_convert_to_be(ea);
|      iov[2].iov_base = ea->extents;

then with a just-built 'qemu-nbd -f raw -r file -t &' running, we have

pre-patch:

$ ./run nbdsh --base -u nbd://localhost -c - <<\EOF
> def f(*k):
>     pass
> try:
>     h.block_status(1,0,f)
> except nbd.Error as ex:
>     print(ex.string)
> h.shutdown()
> EOF
nbdsh: generator/states-reply-chunk.c:701:
enter_STATE_REPLY_CHUNK_REPLY_FINISH: Assertion `h->payload_left == 0'
failed.
Aborted (core dumped)

vs. post-patch:

$ ./run nbdsh --base -u nbd://localhost -c - <<\EOF
> def f(*k):
>     pass
> try:
>     h.block_status(1,0,f)
> except nbd.Error as ex:
>     print(ex.string)
> h.shutdown()
> EOF
nbd_block_status: block-status: command failed: Protocol error

Appears to be a casualty of rebasing: I added h->payload_left
verification fairly late in the game, then floated it earlier in the
series, and missed a spot where I added a state machine jump to
RESYNC without having updated h->payload_left.  An audit of h->rlen
modification sites shows that all other chunked reads update
h->payload_left appropriately (often in the next statement, but
sometimes in a later state when that made the logic easier).

This requires a non-compliant server, and is only possible when
extended headers are negotiated, which does not affect any stable
released libnbd.  Thus, there is no reason to create a CVE, although
since I will already be doing a security info email about previous
patches also addressing fuzzer findings, I can mention this at the
same time.

Fixes: ab992766cd ("block_status: Accept 64-bit extents during block status")
Thanks: Richard W.M. Jones <rjones at redhat.com>
Signed-off-by: Eric Blake <eblake at redhat.com>
---
 generator/states-reply-chunk.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/generator/states-reply-chunk.c b/generator/states-reply-chunk.c
index 20407d91..5a31c192 100644
--- a/generator/states-reply-chunk.c
+++ b/generator/states-reply-chunk.c
@@ -476,6 +476,7 @@ REPLY.CHUNK_REPLY.RECV_BS_HEADER:
   if (h->bs_count != be32toh (h->sbuf.reply.payload.bs_hdr_64.count)) {
     h->rbuf = NULL;
     h->rlen = h->payload_left;
+    h->payload_left = 0;
     SET_NEXT_STATE (%RESYNC);
     return 0;
   }
--
2.41.0
Eric Blake
2023-Sep-21 20:58 UTC
[Libguestfs] [libnbd PATCH v2 6/6] info: Tolerate missing size
As previous patches showed, the NBD spec does not yet forbid a server
from sending us a size that does not fit in int64_t.  We should
gracefully handle this in nbdinfo, rather than giving up early.
With the same one-line hack to qemu to set the most significant bit of
the export size, output changes from pre-patch:
$ ./run nbdinfo nbd://localhost
/home/eblake/libnbd/info/.libs/nbdinfo: nbd_get_size: server claims size
9223372036854781440 which does not fit in signed result: Value too large for
defined data type
qemu-nbd: option negotiation failed: Failed to read opts magic: Unexpected
end-of-file before all data were read
to post-patch:
$ ./run nbdinfo nbd://localhost
protocol: newstyle-fixed without TLS, using extended packets
...
block_size_maximum: 33554432
or
$ ./run nbdinfo nbd://localhost --json
{
  "protocol": "newstyle-fixed",
  ...
    "block_size_maximum": 33554432,
    "export-size-str": "unavailable"
  } ]
}
Sadly, since writing a server with such large export sizes requires a
one-off hack, I don't see the point in adding a unit test.
Signed-off-by: Eric Blake <eblake at redhat.com>
---
info/show.c | 25 +++++++++++++------------
1 file changed, 13 insertions(+), 12 deletions(-)
diff --git a/info/show.c b/info/show.c
index a71d837e..3d80545e 100644
--- a/info/show.c
+++ b/info/show.c
@@ -46,7 +46,7 @@ show_one_export (struct nbd_handle *nbd, const char *desc,
bool first, bool last)
{
int64_t i, size;
- char size_str[HUMAN_SIZE_LONGEST];
+ char size_str[HUMAN_SIZE_LONGEST] = "unavailable";
bool human_size_flag;
char *export_name = NULL;
char *export_desc = NULL;
@@ -89,13 +89,10 @@ show_one_export (struct nbd_handle *nbd, const char *desc,
return false;
}
size = nbd_get_size (nbd);
- if (size == -1) {
- fprintf (stderr, "%s: %s\n", progname, nbd_get_error ());
- exit (EXIT_FAILURE);
+ if (size >= 0) {
+ human_size (size_str, size, &human_size_flag);
}
- human_size (size_str, size, &human_size_flag);
-
if (uri_is_meaningful ())
uri = nbd_get_uri (nbd);
@@ -130,7 +127,8 @@ show_one_export (struct nbd_handle *nbd, const char *desc,
show_context = true;
/* Get content last, as it moves the connection out of negotiating */
- content = get_content (nbd, size);
+ if (size >= 0)
+ content = get_content (nbd, size);
if (!json_output) {
ansi_colour (ANSI_FG_BOLD_BLACK, fp);
@@ -140,10 +138,12 @@ show_one_export (struct nbd_handle *nbd, const char *desc,
     fprintf (fp, ":\n");
     if (desc && *desc)
       fprintf (fp, "\tdescription: %s\n", desc);
-    if (human_size_flag)
-      fprintf (fp, "\texport-size: %" PRIi64 " (%s)\n", size, size_str);
-    else
-      fprintf (fp, "\texport-size: %" PRIi64 "\n", size);
+    if (size >= 0) {
+      if (human_size_flag)
+        fprintf (fp, "\texport-size: %" PRIi64 " (%s)\n", size, size_str);
+      else
+        fprintf (fp, "\texport-size: %" PRIi64 "\n", size);
+    }
if (content)
fprintf (fp, "\tcontent: %s\n", content);
if (uri)
@@ -273,7 +273,8 @@ show_one_export (struct nbd_handle *nbd, const char *desc,
              block_maximum);

   /* Put this one at the end because of the stupid comma thing in JSON. */
-  fprintf (fp, "\t\"export-size\": %" PRIi64 ",\n", size);
+  if (size >= 0)
+    fprintf (fp, "\t\"export-size\": %" PRIi64 ",\n", size);
   fprintf (fp, "\t\"export-size-str\": \"%s\"\n", size_str);
if (last)
--
2.41.0
On Thu, Sep 21, 2023 at 03:57:59PM -0500, Eric Blake wrote:
> Cat's out of the bag: Rich's fuzzer run found not one, but two
> independent assertion failures that a malicious server could trigger
> in my recent 64-bit extension code additions.  What's more, in the
> process of fixing them, we've discovered another long-standing issue
> where nbd_get_size() returns confusing results compared to its
> documentation, when talking to an odd server that reports a really
> large export size.
>
> After off-list discussion between Rich, Laszlo, and myself, we didn't
> think an embargoed CVE against libnbd is necessary (the assertion
> failures only happen to unstable releases, and the nbd_get_size()
> misbehavior does not happen with normal servers and has been in place
> since v1.0, so it is nothing new), so I am posting the series now for
> public review.  But we will still be reaching out to secalert for
> their opinion (it may be that they still see a low-priority exploit in
> an app that gets confused when trying to use a negative size as a loop
> bound, for example).  Once they answer, and regardless of whether we
> end up doing a libnbd CVE after all, I will follow up to the mailing
> list with a security incident (client apps that demand a positive
> export size should probably favor nbd_get_size()<0 over
> nbd_get_size()==-1).
>
> Eric Blake (6):
>   states: Tweak comment in OPT_GO state handler
>   fuzzing: Disable client-side strictness checks
>   api: Sanitize sizes larger than INT64_MAX
>   block_status: Fix assertion with large server size
>   block_status: Fix assertion on bad 64-bit block status reply
>   info: Tolerate missing size

After making a few edits based on the reviews, the series is now in
as adf32845..f8375d3c.  I'll wait for an answer from secalert before
posting a followup security advisory email.

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.
Virtualization:  qemu.org | libguestfs.org