search for: wstate

Displaying 6 results from an estimated 6 matches for "wstate".

2019 May 31
4
[libnbd] Simultaneous read and write
...unning in each handle at any time, that thread can only be reading or writing. Problem (b): There is only one state machine per handle (h->state), whereas to handle the write and read sides separately requires two state machines. In the IRC discussion we gave these the preliminary names h->wstate and h->rstate. ---------------------------------------------------------------------- It's worth also saying how the current API works, although we might want to change it. You grab the underlying file descriptor using nbd_aio_get_fd, which is what you poll on. You also have to call nbd_...
2019 May 31
0
[libnbd] Simultaneous read and write
...d_aio_pread(), it is contending on the same lock as the nbd_aio_notify_read() in the reader thread, so we'd have to split it into several finer-grained locks (maybe keep all locking APIs with a grab on h->lock at the beginning, but while holding that lock we then grab the h->rstate or h->wstate lock for the rest of the call, and drop the main h->lock at that point even though the API doesn't return until the state machine blocks again). With my initial use of libnbd, the division of labor for which thread writes a packet falls into three classes: - if the state machine is ready, t...
2019 Jun 19
4
[libnbd PATCH] states: Never block state machine inside REPLY
When processing a server reply within the REPLY subgroup, we will often hit a situation where recv() requires us to block until the next NotifyRead. But since NotifyRead is the only permitted external action while in this group, we are effectively blocking CmdIssue and NotifyWrite events from happening until the server finishes the in-progress reply, even though those events have no strict
2019 Jun 14
10
[libnbd PATCH 0/7] state machine refactoring
I'm still playing with ideas on how to split rstate from wstate (so that we can send a request without waiting for POLLIN to complete a pending reply), but this is some preliminary refactoring I found useful. I also fixed a couple of bugs while in the area (already pushed). There's a question of whether we want nbd_handle to be nearly 5k, or if we should i...
2007 Dec 09
8
zpool kernel panics.
Hi Folks, I've got a 3.9 TB zpool, and it is causing kernel panics on my Solaris 10 280r (SPARC) server. The message I get on panic is this: panic[cpu1]/thread=2a100a95cc0: zfs: freeing free segment (offset=423713792 size=1024) This seems to come about when the zpool is being used or being scrubbed - about twice a day at the moment. After the reboot, the scrub seems to have
2019 Jun 12
8
[nbdkit PATCH v3 0/5] Play with libnbd for nbdkit-nbd
libnbd-0.1.4-1 is now available in Fedora 29/30 updates testing. Diffs since v2 - rebase to master, bump from libnbd 0.1.2 to 0.1.3+, add tests of TLS usage which flushed out the need to turn relative pathnames into absolute, doc tweaks. Now that the testsuite covers TLS and libnbd has been fixed to provide the things I found lacking when developing v2, I'm leaning towards pushing this on