Displaying 7 results from an estimated 7 matches for "q_lock".
2019 Jun 04 · 0 · [PATCH libnbd v2 4/4] examples: Add concurrent writer example.
...t len;
+};
+
+/* Concurrent writer thread (one per libnbd handle). */
+struct writer_data {
+ size_t i; /* Thread index, 0 .. NR_MULTI_CONN-1 */
+ struct nbd_handle *nbd; /* NBD handle. */
+ struct queue *q, *q_end; /* Queue of items to write. */
+ pthread_mutex_t q_lock; /* Lock on queue. */
+ pthread_cond_t q_cond; /* Condition on queue. */
+};
+
+static void *start_writer_thread (void *arg);
+static int writer (void *data, const void *buf, size_t len);
+
+static void *
+start_reader_thread (void *arg)
+{
+ struct nbd_handle *nbd;
+ struct pollfd...
2019 Jun 03 · 0 · [PATCH libnbd discussion only 5/5] examples: Add concurrent writer example.
...t len;
+};
+
+/* Concurrent writer thread (one per libnbd handle). */
+struct writer_data {
+ size_t i; /* Thread index, 0 .. NR_MULTI_CONN-1 */
+ struct nbd_handle *nbd; /* NBD handle. */
+ struct queue *q, *q_end; /* Queue of items to write. */
+ pthread_mutex_t q_lock; /* Lock on queue. */
+ pthread_cond_t q_cond; /* Condition on queue. */
+};
+
+static void *start_writer_thread (void *arg);
+static void writer (void *data, const void *buf, size_t len);
+
+static void *
+start_reader_thread (void *arg)
+{
+ struct nbd_handle *nbd;
+ struct pollfd...
2019 Jun 03 · 10 · [PATCH libnbd discussion only 0/5] api: Implement concurrent writer.
This works, but there's no time saving and I'm still investigating
whether it does what I think it does. Nevertheless I thought I would
post it because it (probably) implements the idea I had last night
outlined in:
https://www.redhat.com/archives/libguestfs/2019-June/msg00010.html
The meat of the change is patch 4. Patch 5 is an example which I
would probably fold into patch 4 for
2019 Jun 04 · 9 · [PATCH libnbd v2 0/4] api: Implement concurrent writer.
v1:
https://www.redhat.com/archives/libguestfs/2019-June/msg00014.html
I pushed a few bits which are uncontroversial. The main
changes since v1 are:
An extra patch removes the want_to_send / check for nbd_aio_is_ready
in examples/threaded-reads-and-writes.c. This logic was wrong since
commit 6af72b87 as was pointed out by Eric in his review. Comments
and structure of
2012 Sep 21 · 3 · tws bug? (LSI SAS 9750)
Hi,
I have been trying out a nice new tws controller and decided to enable
debugging in the kernel and run some stress tests. With a regular
GENERIC kernel, it boots up fine. But with debugging, it panics on
boot. Anyone know what's up? Is this something that should be sent
directly to LSI ?
pcib0: <ACPI Host-PCI bridge> port 0xcf8-0xcff on acpi0
pci0: <ACPI PCI bus> on pcib0
2015 Nov 18 · 3 · [RFC PATCH 0/2] Google extension to improve qemu-nvme performance
Hi Rob & Mihai,
I wrote vhost-nvme patches on top of Christoph's NVMe target.
vhost-nvme still uses mmio. So the guest OS can run unmodified NVMe
driver. But the tests I have done didn't show competitive performance
compared to virtio-blk/virtio-scsi. The bottleneck is in mmio. Your nvme
vendor extension patches greatly reduce the number of MMIO writes.
So I'd like to push it