Displaying 16 results from an estimated 16 matches for "backend_can_multi_conn".
2020 Feb 12
0
[PATCH nbdkit 3/3] server: filters: Remove struct b_h.
...tents (b_next);
}
static int
next_can_fua (void *nxdata)
{
- struct b_h *b_h = nxdata;
- return backend_can_fua (b_h->b);
+ struct backend *b_next = nxdata;
+ return backend_can_fua (b_next);
}
static int
next_can_multi_conn (void *nxdata)
{
- struct b_h *b_h = nxdata;
- return backend_can_multi_conn (b_h->b);
+ struct backend *b_next = nxdata;
+ return backend_can_multi_conn (b_next);
}
static int
next_can_cache (void *nxdata)
{
- struct b_h *b_h = nxdata;
- return backend_can_cache (b_h->b);
+ struct backend *b_next = nxdata;
+ return backend_can_cache (b_next);
}
static...
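A minimal, self-contained sketch of the pattern this patch applies (all types here are toy stand-ins, not nbdkit's real headers): once the next backend pointer is passed directly as nxdata, each wrapper only casts the opaque pointer and forwards the call.

/* Toy illustration of the nxdata simplification; every declaration
 * below is a stand-in, not nbdkit's actual code. */
#include <stdio.h>

struct backend {
  const char *name;
  int (*can_fua) (struct backend *b);
};

/* Stand-in for the real backend_can_fua(). */
static int
backend_can_fua (struct backend *b)
{
  return b->can_fua (b);
}

/* After the patch, nxdata is just the next backend itself. */
static int
next_can_fua (void *nxdata)
{
  struct backend *b_next = nxdata;
  return backend_can_fua (b_next);
}

static int
file_can_fua (struct backend *b)
{
  (void) b;
  return 1;                     /* pretend the plugin supports FUA */
}

int
main (void)
{
  struct backend file = { .name = "file", .can_fua = file_can_fua };
  printf ("can_fua = %d\n", next_can_fua (&file));
  return 0;
}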
2020 Feb 12
5
[PATCH nbdkit 1/3] server: Rename global backend pointer to "top".
...return -1;
if (fl)
eflags |= NBD_FLAG_SEND_FLUSH;
- fl = backend_is_rotational (backend);
+ fl = backend_is_rotational (top);
if (fl == -1)
return -1;
if (fl)
eflags |= NBD_FLAG_ROTATIONAL;
/* multi-conn is useless if parallel connections are not allowed. */
- fl = backend_can_multi_conn (backend);
+ fl = backend_can_multi_conn (top);
if (fl == -1)
return -1;
- if (fl && (backend->thread_model (backend) >
+ if (fl && (top->thread_model (top) >
NBDKIT_THREAD_MODEL_SERIALIZE_CONNECTIONS))
eflags |= NBD_FLAG_CAN_MULTI_CONN;
-...
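For context, a compact sketch of the export-flag logic the diff above touches. The NBD_FLAG_* values follow the NBD protocol spec; the stub query functions and the thread-model constants are simplified stand-ins for backend_is_rotational(), backend_can_multi_conn() and top->thread_model().

#include <stdint.h>
#include <stdio.h>

#define NBD_FLAG_ROTATIONAL     (1 << 4)
#define NBD_FLAG_CAN_MULTI_CONN (1 << 8)

enum { SERIALIZE_CONNECTIONS = 0, PARALLEL = 3 };  /* simplified */

static int is_rotational (void) { return 0; }
static int can_multi_conn (void) { return 1; }
static int thread_model (void) { return PARALLEL; }

static int
compute_eflags (uint16_t *eflags)
{
  int fl;

  fl = is_rotational ();
  if (fl == -1) return -1;
  if (fl) *eflags |= NBD_FLAG_ROTATIONAL;

  /* multi-conn is useless if parallel connections are not allowed */
  fl = can_multi_conn ();
  if (fl == -1) return -1;
  if (fl && thread_model () > SERIALIZE_CONNECTIONS)
    *eflags |= NBD_FLAG_CAN_MULTI_CONN;

  return 0;
}

int
main (void)
{
  uint16_t eflags = 0;
  if (compute_eflags (&eflags) == 0)
    printf ("eflags = 0x%x\n", (unsigned) eflags);
  return 0;
}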
2019 Aug 30
0
[nbdkit PATCH 6/9] server: Cache per-connection can_FOO flags
...an_fua", b->name);
- return b->can_fua (b, conn);
+ if (h->can_fua == -1) {
+ r = backend_can_write (b, conn);
+ if (r != 1) {
+ h->can_fua = NBDKIT_FUA_NONE;
+ return r;
+ }
+ h->can_fua = b->can_fua (b, conn);
+ }
+ return h->can_fua;
}
int
backend_can_multi_conn (struct backend *b, struct connection *conn)
{
+ struct b_conn_handle *h = &conn->handles[b->i];
+
debug ("%s: can_multi_conn", b->name);
- return b->can_multi_conn (b, conn);
+ if (h->can_multi_conn == -1)
+ h->can_multi_conn = b->can_multi_conn (b, co...
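The caching idea in the diff above, reduced to a standalone sketch: -1 in the per-connection handle means "not asked yet", so the plugin callback runs at most once per connection. The handle is passed directly here for brevity, whereas the real code looks it up via conn->handles[b->i].

#include <stdio.h>

struct handle {
  int can_multi_conn;           /* -1 = unknown, else cached 0/1 */
};

struct backend {
  const char *name;
  int (*can_multi_conn) (struct backend *b);
};

static int
backend_can_multi_conn (struct backend *b, struct handle *h)
{
  if (h->can_multi_conn == -1)          /* only ask the plugin once */
    h->can_multi_conn = b->can_multi_conn (b);
  return h->can_multi_conn;
}

static int calls;
static int
plugin_can_multi_conn (struct backend *b)
{
  (void) b;
  calls++;
  return 1;
}

int
main (void)
{
  struct backend b = { "plugin", plugin_can_multi_conn };
  struct handle h = { .can_multi_conn = -1 };

  backend_can_multi_conn (&b, &h);
  backend_can_multi_conn (&b, &h);
  printf ("plugin asked %d time(s)\n", calls);   /* prints 1 */
  return 0;
}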
2020 Feb 11
0
[PATCH nbdkit 3/3] server: Remove explicit connection parameter, use TLS instead.
...connection *conn)
- __attribute__((__nonnull__ (1, 2)));
-extern int backend_can_extents (struct backend *b, struct connection *conn)
- __attribute__((__nonnull__ (1, 2)));
-extern int backend_can_fua (struct backend *b, struct connection *conn)
- __attribute__((__nonnull__ (1, 2)));
-extern int backend_can_multi_conn (struct backend *b, struct connection *conn)
- __attribute__((__nonnull__ (1, 2)));
-extern int backend_can_cache (struct backend *b, struct connection *conn)
- __attribute__((__nonnull__ (1, 2)));
+extern int backend_reopen (struct backend *b, int readonly)
+ __attribute__((__nonnull__ (1)));
+...
2020 Feb 11
4
[PATCH nbdkit v2 0/3] server: Remove explicit connection parameter.
v1 was here:
https://www.redhat.com/archives/libguestfs/2020-February/msg00081.html
v2 replaces
struct connection *conn = GET_CONN;
with
GET_CONN;
which sets conn implicitly and asserts that it is non-NULL.
If you actually want to test whether conn is non-NULL, or to behave
differently, you must use threadlocal_get_conn() instead;
some existing uses do that.
Rich.
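A hypothetical sketch of what such a GET_CONN macro could look like, assuming it is built on the threadlocal_get_conn() accessor mentioned above; nbdkit's real definition may differ in details.

#include <assert.h>

struct connection { int id; };

/* Stand-in for the real thread-local accessor. */
static struct connection *
threadlocal_get_conn (void)
{
  static struct connection c = { 1 };
  return &c;
}

/* Declares and checks 'conn' in one line, so callers that cannot work
 * without a connection simply write "GET_CONN;". */
#define GET_CONN                                        \
  struct connection *conn = threadlocal_get_conn ();    \
  assert (conn != NULL)

static int
backend_can_multi_conn (void)
{
  GET_CONN;                     /* conn is now in scope and non-NULL */
  return conn->id >= 0;
}

int
main (void)
{
  return !backend_can_multi_conn ();
}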
2020 Feb 11
5
[PATCH nbdkit 0/3] server: Remove explicit connection parameter.
The third patch is a large but mechanical change which gets rid of
passing around struct connection * entirely within the server,
preferring instead to reference the connection through thread-local
storage.
I hope this is a gateway to simplifying other parts of the code.
Rich.
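A simplified, self-contained sketch of the approach this cover letter describes: the per-connection thread stores its connection pointer in thread-local storage once, and deeper server code retrieves it instead of taking a conn parameter. The variable and helper names here are illustrative, not nbdkit's.

#include <pthread.h>
#include <stdio.h>

struct connection { int id; };

static __thread struct connection *current_conn;

static void
set_current_conn (struct connection *conn)
{
  current_conn = conn;
}

/* Deep inside the server: no conn parameter, the connection is
 * recovered from thread-local storage instead. */
static int
backend_can_multi_conn (void)
{
  struct connection *conn = current_conn;
  return conn && conn->id >= 0;
}

static void *
connection_thread (void *arg)
{
  struct connection *conn = arg;
  set_current_conn (conn);                      /* once, at thread start */
  printf ("conn %d: multi_conn=%d\n", conn->id, backend_can_multi_conn ());
  return NULL;
}

int
main (void)
{
  struct connection c1 = { 1 }, c2 = { 2 };
  pthread_t t1, t2;

  pthread_create (&t1, NULL, connection_thread, &c1);
  pthread_create (&t2, NULL, connection_thread, &c2);
  pthread_join (t1, NULL);
  pthread_join (t2, NULL);
  return 0;
}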
2020 Feb 12
2
[nbdkit PATCH] filters: Remove most next_* wrappers
...- struct backend *b_next = nxdata;
- return backend_can_extents (b_next);
-}
-
-static int
-next_can_fua (void *nxdata)
-{
- struct backend *b_next = nxdata;
- return backend_can_fua (b_next);
-}
-
-static int
-next_can_multi_conn (void *nxdata)
-{
- struct backend *b_next = nxdata;
- return backend_can_multi_conn (b_next);
-}
-
-static int
-next_can_cache (void *nxdata)
-{
- struct backend *b_next = nxdata;
- return backend_can_cache (b_next);
-}
-
-static int
-next_pread (void *nxdata, void *buf, uint32_t count, uint64_t offset,
- uint32_t flags, int *err)
-{
- struct backend *b_next = nxdata...
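A toy illustration of why these pass-through wrappers become removable: once the callback type takes the next backend pointer directly, the ops-table entry can be the backend_* function itself. The actual patch arranges this through the real filter API types; everything below is a simplified stand-in.

#include <stdio.h>

struct backend {
  const char *name;
  int (*can_multi_conn) (struct backend *b);
};

static int
backend_can_multi_conn (struct backend *b)
{
  return b->can_multi_conn (b);
}

struct next_ops {
  int (*can_multi_conn) (struct backend *b_next);
};

/* No next_can_multi_conn() wrapper needed any more. */
static struct next_ops next_ops = {
  .can_multi_conn = backend_can_multi_conn,
};

static int
plugin_can_multi_conn (struct backend *b)
{
  (void) b;
  return 1;
}

int
main (void)
{
  struct backend plugin = { "plugin", plugin_can_multi_conn };
  printf ("%d\n", next_ops.can_multi_conn (&plugin));
  return 0;
}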
2019 Aug 30
3
[nbdkit PATCH v2 0/2] caching .can_write
This is a subset of the last half of the larger 9-patch series. The
uncontroversial first half of that series has been pushed, but here I
tried to reduce the size of the patches by splitting out some of the
more complex changes, so that the changes remaining in the series are
more mechanical. In turn, it forced me to write timing tests, which
let me spot another place where we are wasting
2020 Feb 12
0
[PATCH nbdkit 2/3] server: Rename ‘struct b_conn_handle’ to plain ‘struct handle’.
...b->name);
@@ -424,7 +424,7 @@ int
backend_can_fua (struct backend *b)
{
GET_CONN;
- struct b_conn_handle *h = &conn->handles[b->i];
+ struct handle *h = get_handle (conn, b->i);
int r;
controlpath_debug ("%s: can_fua", b->name);
@@ -445,7 +445,7 @@ int
backend_can_multi_conn (struct backend *b)
{
GET_CONN;
- struct b_conn_handle *h = &conn->handles[b->i];
+ struct handle *h = get_handle (conn, b->i);
assert (h->handle && (h->state & HANDLE_CONNECTED));
controlpath_debug ("%s: can_multi_conn", b->name);
@@ -459,7...
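A small sketch of the accessor pattern the renamed code uses, assuming get_handle() is a thin wrapper over indexing the connection's handle array; the types are simplified stand-ins.

#include <assert.h>
#include <stdio.h>

#define MAX_BACKENDS 4

struct handle {
  void *handle;                 /* plugin/filter private data */
  int can_multi_conn;           /* cached answer, -1 = unknown */
};

struct connection {
  struct handle handles[MAX_BACKENDS];
};

static struct handle *
get_handle (struct connection *conn, int i)
{
  assert (i >= 0 && i < MAX_BACKENDS);
  return &conn->handles[i];
}

int
main (void)
{
  struct connection conn = { .handles = { [0] = { NULL, -1 } } };
  struct handle *h = get_handle (&conn, 0);
  printf ("cached can_multi_conn = %d\n", h->can_multi_conn);
  return 0;
}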
2019 Aug 30
15
[nbdkit PATCH 0/9] can_FOO caching, more filter validation
It's easy to use the sh script to demonstrate that nbdkit is
inefficiently calling into .get_size, .can_fua, and friends more than
necessary. We've also commented on the list that it would be nice to
ensure that when filters call into next_ops they are not violating
constraints (as we've had to fix several bugs in the past where we did
not have such checking to protect
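An illustrative sketch (not nbdkit's actual code) of the kind of constraint checking this cover letter has in mind: once can_write is cached, the write path can assert that no filter sneaks a write through a backend that reported itself read-only.

#include <assert.h>
#include <stdint.h>
#include <string.h>

struct handle {
  int can_write;                /* cached: -1 unknown, 0 no, 1 yes */
};

static int
backend_pwrite (struct handle *h, const void *buf, uint32_t count,
                uint64_t offset)
{
  /* The caller must have checked can_write first; enforce it. */
  assert (h->can_write == 1);
  (void) buf; (void) count; (void) offset;
  return 0;
}

int
main (void)
{
  struct handle h = { .can_write = 1 };
  char buf[512];
  memset (buf, 0, sizeof buf);
  return backend_pwrite (&h, buf, sizeof buf, 0);
}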
2020 Feb 11
1
[nbdkit PATCH] filters: Make nxdata persistent
...ndle;
+ assert (nxdata->b == b->next && nxdata->conn == conn);
if (f->filter.can_multi_conn)
- return f->filter.can_multi_conn (&next_ops, &nxdata, handle);
+ return f->filter.can_multi_conn (&next_ops, nxdata, nxdata->handle);
else
return backend_can_multi_conn (b->next, conn);
}
@@ -566,10 +598,11 @@ static int
filter_can_cache (struct backend *b, struct connection *conn, void *handle)
{
struct backend_filter *f = container_of (b, struct backend_filter, backend);
- struct b_conn nxdata = { .b = b->next, .conn = conn };
+ struct b_conn *nxda...
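A toy sketch of the "persistent nxdata" idea the diff shows: allocate the (backend, connection, handle) triple once per connection instead of rebuilding a stack temporary on every callback, and assert it still matches. All names below are stand-ins for illustration.

#include <assert.h>
#include <stdlib.h>

struct backend { const char *name; };
struct connection { int id; };

struct b_conn {
  struct backend *b;
  struct connection *conn;
  void *handle;                 /* the filter's per-connection handle */
};

static struct b_conn *
nxdata_create (struct backend *b_next, struct connection *conn, void *handle)
{
  struct b_conn *nxdata = malloc (sizeof *nxdata);
  if (!nxdata) return NULL;
  nxdata->b = b_next;
  nxdata->conn = conn;
  nxdata->handle = handle;
  return nxdata;
}

static int
filter_can_multi_conn (struct b_conn *nxdata,
                       struct backend *b_next, struct connection *conn)
{
  /* Same sanity check as in the patch: the persistent nxdata must
   * still describe this backend/connection pair. */
  assert (nxdata->b == b_next && nxdata->conn == conn);
  return 1;
}

int
main (void)
{
  struct backend next = { "plugin" };
  struct connection conn = { 1 };
  struct b_conn *nxdata = nxdata_create (&next, &conn, NULL);
  int r = nxdata ? filter_can_multi_conn (nxdata, &next, &conn) : -1;
  free (nxdata);
  return r == 1 ? 0 : 1;
}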
2019 Oct 07
6
[nbdkit PATCH 0/5] More retry fixes
I think this is my last round of patches for issues I identified with
the retry filter. With this in place, it should be safe to interject
another filter in between retry and the plugin.
Eric Blake (5):
retry: Don't call into closed plugin
tests: Refactor test-retry-reopen-fail.sh
tests: Enhance retry test to cover failed reopen
server: Move prepare/finalize/close recursion to
2020 Mar 19
5
[nbdkit PATCH 0/2] More caching of initial setup
When I added .can_FOO caching in 1.16, I missed the case where the sh
plugin itself was calling .can_flush twice in some situations (in
order to default .can_fua). Then right after, I regressed it to call
.can_zero twice (in order to default .can_fast_zero). I also missed
that .thread_model could use better caching, because at the time, I
did not add testsuite coverage. Fix that now.
Eric Blake
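A hedged sketch of the defaulting-plus-caching idea described above: when .can_fua is derived from .can_flush, cache the .can_flush answer so the (potentially expensive) script query runs only once. This is illustrative code, not the sh plugin's actual implementation.

#include <stdio.h>

static int flush_queries;       /* counts how often the "script" runs */

/* Stand-in for running the plugin script's can_flush method. */
static int
query_can_flush_script (void)
{
  flush_queries++;
  return 1;
}

static int cached_can_flush = -1;       /* -1 = not asked yet */

static int
sh_can_flush (void)
{
  if (cached_can_flush == -1)
    cached_can_flush = query_can_flush_script ();
  return cached_can_flush;
}

/* Defaulting rule (simplified): if the script supports flush, report
 * that FUA can be handled too. */
static int
sh_can_fua (void)
{
  return sh_can_flush ();
}

int
main (void)
{
  sh_can_flush ();              /* nbdkit asks can_flush ... */
  sh_can_fua ();                /* ... and later can_fua */
  printf ("script ran %d time(s)\n", flush_queries);   /* prints 1 */
  return 0;
}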
2019 Dec 12
9
[PATCH nbdkit 0/7] server: Allow datapath debug messages to be suppressed.
The immediate reason for this patch is to reduce the amount of debug
output from virt-v2v when using the virt-v2v -v option (because this
implies running nbdkit in verbose mode too). Most of the messages are
datapath ones about pread/pwrite requests, and in fact as we've added
more filters on top of nbdkit these messages have got more and more
verbose. However they are not particularly
2019 Oct 04
6
[nbdkit PATCH 0/5] Another round of retry fixes
I still don't have .prepare/.finalize working cleanly across reopen,
but I did find a nasty bug where a botched assertion meant we failed to
notice reads beyond EOF in both the xz and retry filters.
Refactoring backend.c will make .finalize work easier.
Eric Blake (5):
xz: Avoid reading beyond EOF
retry: Check size before transactions
tests: Test retry when get_size values change
2020 Feb 10
17
Cross-project NBD extension proposal: NBD_INFO_INIT_STATE
I will be following up this email with four separate threads, each
addressed to the appropriate single list, with proposed changes to:
- the NBD protocol
- qemu: both server and client
- libnbd: client
- nbdkit: server
The feature in question adds a new optional NBD_INFO_ packet to the
NBD_OPT_GO portion of the handshake, adding up to 16 bits of
information that the server can advertise to the
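To make the "up to 16 bits" concrete, here is a hypothetical sketch of how a client might hold such an advertisement once parsed. The constant value and bit name below are placeholders, not taken from the actual proposal text.

#include <stdint.h>
#include <stdio.h>

#define NBD_INFO_INIT_STATE    0x100    /* placeholder info type number */
#define INIT_STATE_EXAMPLE_BIT (1 << 0) /* placeholder flag bit */

struct nbd_info_init_state {
  uint16_t info_type;           /* would identify the new info packet */
  uint16_t flags;               /* up to 16 bits advertised by the server */
};

int
main (void)
{
  struct nbd_info_init_state reply = {
    .info_type = NBD_INFO_INIT_STATE,
    .flags = INIT_STATE_EXAMPLE_BIT,
  };

  if (reply.info_type == NBD_INFO_INIT_STATE)
    printf ("server advertised init-state flags: 0x%04x\n",
            (unsigned) reply.flags);
  return 0;
}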