search for: new_tail

Displaying 13 results from an estimated 13 matches for "new_tail".

2015 Feb 09
2
[PATCH V2] x86 spinlock: Fix memory corruption on completing completions
Paravirt spinlock clears the slowpath flag after doing the unlock. As explained by Linus, currently it does:

	prev = *lock;
	add_smp(&lock->tickets.head, TICKET_LOCK_INC);

	/* add_smp() is a full mb() */

	if (unlikely(lock->tickets.tail & TICKET_SLOWPATH_FLAG))
		__ticket_unlock_slowpath(lock, prev);

which
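For reference, here is that unlock flow as a stand-alone sketch. The kernel helpers (add_smp(), __ticket_unlock_slowpath()) and the __ticket_t/arch_spinlock_t types are replaced with C11-atomic stand-ins, and the constant values are made up, so this only mirrors the shape of the code under discussion, not the actual arch/x86 implementation:

    /* Stand-alone illustration only: invented types, stand-in constants. */
    #include <stdatomic.h>
    #include <stdint.h>

    enum { TICKET_LOCK_INC = 2, TICKET_SLOWPATH_FLAG = 1 };   /* illustrative values */

    struct tickets {
        _Atomic uint8_t head;
        _Atomic uint8_t tail;
    };

    struct ticket_pair { uint8_t head, tail; };               /* plain snapshot of *lock */

    static void ticket_unlock_slowpath(struct tickets *lock, struct ticket_pair prev)
    {
        (void)lock; (void)prev;   /* would kick the halted waiter here */
    }

    static void ticket_unlock(struct tickets *lock)
    {
        struct ticket_pair prev;

        /* prev = *lock, taken here as two separate loads for simplicity */
        prev.head = atomic_load(&lock->head);
        prev.tail = atomic_load(&lock->tail);

        /* add_smp() is a full barrier in the kernel; a seq_cst RMW stands in for it. */
        atomic_fetch_add(&lock->head, TICKET_LOCK_INC);

        /*
         * From this point the lock is already released: another CPU can take it,
         * drop it, and even free the object embedding it, yet ->tail is still read
         * below -- presumably the window the patch's "memory corruption" refers to.
         */
        if (atomic_load(&lock->tail) & TICKET_SLOWPATH_FLAG)
            ticket_unlock_slowpath(lock, prev);
    }
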
2015 Feb 09
0
[PATCH V2] x86 spinlock: Fix memory corruption on completing completions
...head_tail, new.head_tail);
> + }
> +}

Can't we simplify it? We own .head, and we already know it. We only need to clear TICKET_SLOWPATH_FLAG in .tail atomically? IOW,

	static inline void __ticket_check_and_clear_slowpath(arch_spinlock_t *lock, __ticket_t head)
	{
		__ticket_t old_tail, new_tail;

		new_tail = head + TICKET_LOCK_INC;
		old_tail = new_tail | TICKET_SLOWPATH_FLAG;

		if (READ_ONCE(lock->tickets.tail) == old_tail)
			cmpxchg(&lock->tickets.tail, old_tail, new_tail);
	}

Plus

	-	__ticket_check_and_clear_slowpath(lock);
	+	__ticket_check_and_clear_slowpath(lock, inc....
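A self-contained version of that suggestion, for illustration only: the kernel's __ticket_t, READ_ONCE() and cmpxchg() are replaced with C11-atomic stand-ins and the constants are invented, but the check-then-cmpxchg structure matches the snippet above:

    #include <stdatomic.h>
    #include <stdint.h>

    typedef uint8_t ticket_t;
    enum { TICKET_LOCK_INC = 2, TICKET_SLOWPATH_FLAG = 1 };   /* illustrative values */

    /*
     * Clear TICKET_SLOWPATH_FLAG from the tail, but only if the tail is exactly
     * "our head + TICKET_LOCK_INC" with the flag set; otherwise leave it alone,
     * since other waiters appear to be queued behind us.
     */
    static void check_and_clear_slowpath(_Atomic ticket_t *tail, ticket_t head)
    {
        ticket_t new_tail = head + TICKET_LOCK_INC;
        ticket_t old_tail = new_tail | TICKET_SLOWPATH_FLAG;

        if (atomic_load(tail) == old_tail)
            atomic_compare_exchange_strong(tail, &old_tail, new_tail);
    }
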
2015 Mar 19
0
[PATCH 8/9] qspinlock: Generic paravirt support
...(READ_ONCE(node->locked))
@@ -107,13 +145,33 @@ static void pv_kick_node(struct mcs_spin
 	pv_kick(pn->cpu);
 }

-static DEFINE_PER_CPU(struct qspinlock *, __pv_lock_wait);
+static void pv_set_head(struct qspinlock *lock, struct pv_node *head)
+{
+	struct pv_node *tail, *new_tail;
+
+	new_tail = pv_decode_tail(atomic_read(&lock->val));
+	do {
+		tail = new_tail;
+		while (!READ_ONCE(tail->head))
+			cpu_relax();
+
+		(void)xchg(&tail->head, head);
+		/*
+		 * pv...
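Reading the (truncated) hunk above, the apparent pattern is: decode the current queue tail out of the lock word, wait until that node has recorded a head pointer, hand the new head to it with xchg(), and repeat if another CPU has become the tail in the meantime. The sketch below spells that out with invented stand-in types; the loop condition and everything past the truncation point are assumptions, not the patch's actual code:

    #include <stdatomic.h>
    #include <stddef.h>

    /* Hypothetical stand-ins for struct qspinlock / struct pv_node / pv_decode_tail(). */
    struct pv_node {
        struct pv_node * _Atomic head;    /* head of the wait queue, published lazily */
    };

    struct pv_lock {
        struct pv_node * _Atomic tail;    /* stands in for the tail encoded in lock->val */
    };

    static void pv_set_head_sketch(struct pv_lock *lock, struct pv_node *head)
    {
        struct pv_node *tail, *new_tail;

        new_tail = atomic_load(&lock->tail);
        do {
            tail = new_tail;
            while (!atomic_load(&tail->head))
                ;                                   /* cpu_relax() in the kernel */

            (void)atomic_exchange(&tail->head, head);   /* publish the new head */

            /* Assumption: if another waiter became the tail meanwhile, it also
             * needs the head pointer, so go around again. */
            new_tail = atomic_load(&lock->tail);
        } while (tail != new_tail);
    }

    int main(void)
    {
        struct pv_node a = { 0 }, b = { 0 };
        struct pv_lock lock = { 0 };

        atomic_store(&lock.tail, &b);
        atomic_store(&b.head, &b);        /* pretend the tail already recorded a head */
        pv_set_head_sketch(&lock, &a);    /* now publishes 'a' as the head */
        return 0;
    }
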
2015 Mar 18
2
[PATCH 8/9] qspinlock: Generic paravirt support
On 03/16/2015 09:16 AM, Peter Zijlstra wrote:
> Implement simple paravirt support for the qspinlock.
>
> Provide a separate (second) version of the spin_lock_slowpath for
> paravirt along with a special unlock path.
>
> The second slowpath is generated by adding a few pv hooks to the
> normal slowpath, but where those will compile away for the native
> case, they expand
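The "compile away for the native case" mechanism being described is the usual empty-static-inline trick. A toy, self-contained illustration (the hook names and messages here are invented, not the patch's; the real hooks live inside the qspinlock slowpath):

    #include <stdio.h>

    #ifdef PARAVIRT
    /* Paravirt build: the hooks do real work (halt the vCPU, kick the next waiter). */
    static void pv_wait_hook(void) { puts("pv: halt this vCPU instead of spinning"); }
    static void pv_kick_hook(void) { puts("pv: kick the next waiter's vCPU"); }
    #else
    /* Native build: empty static inlines, so the calls below vanish entirely. */
    static inline void pv_wait_hook(void) { }
    static inline void pv_kick_hook(void) { }
    #endif

    static void lock_slowpath(void)
    {
        pv_wait_hook();     /* no-op natively */
        /* ... the normal queue/spin logic would sit here ... */
        pv_kick_hook();     /* no-op natively */
    }

    int main(void)
    {
        lock_slowpath();
        return 0;
    }

Built with -DPARAVIRT the hooks expand into real calls; without it the optimizer drops them, which is the property the quoted paragraph is describing.
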
2015 Nov 18
0
[PATCH -qemu] nvme: support Google vendor extension
...cq->head = new_head;
+    }
     if (start_sqs) {
         NvmeSQueue *sq;
         QTAILQ_FOREACH(sq, &cq->sq_list, entry) {
@@ -752,7 +828,11 @@ static void nvme_process_db(NvmeCtrl *n, hwaddr addr, int val)
         return;
     }
-    sq->tail = new_tail;
+    /* When the mapped pointer memory area is setup, we don't rely on
+     * the MMIO written values to update the tail pointer. */
+    if (!sq->db_addr) {
+        sq->tail = new_tail;
+    }
     timer_mod(sq->timer, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) +...
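In other words, once the guest driver has registered a shadow doorbell buffer (sq->db_addr), the device stops trusting the value written through MMIO and instead reads the tail from guest memory, which is what cuts down the MMIO doorbell writes. A self-contained sketch of that split, with invented helper and field layout (only sq->tail, sq->db_addr and new_tail come from the hunk above, and where exactly the shadow read happens is an assumption):

    #include <stdint.h>

    struct sq_state {
        uint32_t tail;        /* device's view of the submission-queue tail */
        uint64_t db_addr;     /* guest address of the shadow doorbell, 0 if not set up */
    };

    /* Stub standing in for a read of the shadow doorbell from guest memory. */
    static uint32_t read_shadow_doorbell(uint64_t db_addr)
    {
        (void)db_addr;
        return 0;
    }

    /* MMIO doorbell write handler: only honour the written value when no shadow
     * doorbell buffer has been registered. */
    static void doorbell_write(struct sq_state *sq, uint32_t new_tail)
    {
        if (!sq->db_addr)
            sq->tail = new_tail;
        /* ...then schedule queue processing (timer_mod() in the QEMU code)... */
    }

    /* Queue processing: with a shadow buffer, fetch the authoritative tail from
     * guest memory instead of relying on the MMIO-written value. */
    static void process_queue(struct sq_state *sq)
    {
        if (sq->db_addr)
            sq->tail = read_shadow_doorbell(sq->db_addr);
        /* ...consume submission-queue entries up to sq->tail... */
    }

    int main(void)
    {
        struct sq_state sq = { .tail = 0, .db_addr = 0 };
        doorbell_write(&sq, 4);   /* no shadow buffer: the MMIO value is honoured */
        process_queue(&sq);
        return 0;
    }
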
2015 Nov 18
3
[RFC PATCH 0/2] Google extension to improve qemu-nvme performance
Hi Rob & Mihai, I wrote vhost-nvme patches on top of Christoph's NVMe target. vhost-nvme still uses MMIO, so the guest OS can run an unmodified NVMe driver. But the tests I have done didn't show competitive performance compared to virtio-blk/virtio-scsi; the bottleneck is in MMIO. Your nvme vendor extension patches greatly reduce the number of MMIO writes, so I'd like to push it
2015 Nov 20
15
[RFC PATCH 0/9] vhost-nvme: new qemu nvme backend using nvme target
Hi, This is the first attempt to add a new qemu nvme backend using the in-kernel nvme target. Most of the code is ported from qemu-nvme and also borrows from Hannes Reinecke's rts-megasas. It's similar to vhost-scsi, but doesn't use virtio. The advantage is that the guest can run an unmodified NVMe driver, so the guest can be any OS that has an NVMe driver. The goal is to get as good performance as