Displaying 20 results from an estimated 38 matches for "unqueue".
2016 Oct 29
1
[PATCH v6 02/11] locking/osq: Drop the overload of osq_lock()
...ll locks you change with this.
Thanks,
Davidlohr
> * vcpu_is_preempted is a macro defined as false if
>+ * the arch does not support the vcpu preempted check,
> */
>- if (need_resched())
>+ if (need_resched() || vcpu_is_preempted(node_cpu(node->prev)))
> goto unqueue;
>
> cpu_relax_lowlatency();
>--
>2.4.11
>
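The default the quoted comment describes is trivial; a minimal sketch of such a fallback definition (placement and exact form are assumptions based on the comment, not quoted from the series):

/* Default for architectures without a vcpu preempted check: always
 * report "not preempted", so the added condition compiles down to
 * the original need_resched() test. */
#ifndef vcpu_is_preempted
#define vcpu_is_preempted(cpu)	false
#endif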
2016 Oct 25
0
[GIT PULL v2 4/5] processor.h: Remove cpu_relax_lowlatency users
...ing/osq_lock.c
+++ b/kernel/locking/osq_lock.c
@@ -75,7 +75,7 @@ osq_wait_next(struct optimistic_spin_queue *lock,
break;
}
- cpu_relax_lowlatency();
+ cpu_relax();
}
return next;
@@ -122,7 +122,7 @@ bool osq_lock(struct optimistic_spin_queue *lock)
if (need_resched())
goto unqueue;
- cpu_relax_lowlatency();
+ cpu_relax();
}
return true;
@@ -148,7 +148,7 @@ bool osq_lock(struct optimistic_spin_queue *lock)
if (smp_load_acquire(&node->locked))
return true;
- cpu_relax_lowlatency();
+ cpu_relax();
/*
* Or we race against a concurrent unqueue...
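For context, this series makes plain cpu_relax() the cheap low-latency hint on every architecture and splits the old yielding behaviour into a separate helper; a minimal sketch of the busy-wait idiom these hunks touch (illustrative, not quoted from the patch):

/* Generic spin-wait after the rename: cpu_relax() is a cheap CPU
 * hint (e.g. PAUSE on x86); code that really wants to yield to a
 * hypervisor calls the new yield helper introduced by this series. */
while (!READ_ONCE(done))
	cpu_relax();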
2014 Jun 23
0
[PATCH 01/11] qspinlock: A simple generic 4-byte queue spinlock
...> what it means.
>
> Could you help a bit in explaining it in English please?
(Refer to the state diagram; if we count states left->right,
top->bottom, then this is: 5->2 or 7->8.)
n,0 -> 0,1:
the lock is free and the tail points to the first queued node; this
means that unqueueing implies wiping the tail while at the same time
acquiring the lock.
*,0 -> *,1:
the lock is free and the tail doesn't point to the first queued node;
this means that unqueueing doesn't touch the tail pointer but only
sets the lock.
> > +
> > + old = atomic_cmpxchg(&...
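A minimal sketch of the first transition described above, using the (tail, locked) encoding from the discussion; the variable and constant names are illustrative, not quoted from the patch:

/* n,0 -> 0,1: the lock is free and we are the first (and last)
 * queued node. One cmpxchg replaces "our tail, unlocked" with
 * "no tail, locked", wiping the tail and acquiring the lock in a
 * single atomic step. */
old = atomic_cmpxchg(&lock->val, my_tail, _Q_LOCKED_VAL);
if (old == my_tail)
	return;	/* acquired; queue is now empty */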
2016 Oct 28
0
[PATCH v6 02/11] locking/osq: Drop the overload of osq_lock()
...n block.
+ * Use vcpu_is_preempted to detect lock holder preemption
+ * and break out. vcpu_is_preempted is a macro defined as false if
+ * the arch does not support the vcpu preempted check,
*/
- if (need_resched())
+ if (need_resched() || vcpu_is_preempted(node_cpu(node->prev)))
goto unqueue;
cpu_relax_lowlatency();
--
2.4.11
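Architectures that can detect preemption override that default; a hypothetical override is sketched below (the per-CPU flag and its name are invented for illustration; the real implementations read hypervisor-provided state such as KVM's steal-time area):

/* Hypothetical arch override: the hypervisor publishes a per-vCPU
 * "preempted" flag that the guest can read cheaply, so spinners stop
 * wasting cycles on a lock holder that isn't running. */
#define vcpu_is_preempted(cpu) \
	READ_ONCE(per_cpu(hyp_vcpu_preempted, cpu))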
2014 Jun 16
4
[PATCH 01/11] qspinlock: A simple generic 4-byte queue spinlock
On Sun, Jun 15, 2014 at 02:46:58PM +0200, Peter Zijlstra wrote:
> From: Waiman Long <Waiman.Long at hp.com>
>
> This patch introduces a new generic queue spinlock implementation that
> can serve as an alternative to the default ticket spinlock. Compared
> with the ticket spinlock, this queue spinlock should be almost as fair
> as the ticket spinlock. It has about the same
2011 Jun 01
11
SATA disk perf question
I figure this group will know better than any other I have contact
with: is 700-800 IOPS reasonable for a 7200 RPM SATA drive (a 1 TB
Sun-badged Seagate ST31000N in a J4400)? I have a resilver running
and am seeing about 700-800 writes/sec on the hot spare as it
resilvers. There is no other I/O activity on this box, as this is a
remote replication target for production data. I have a the...
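For scale, a back-of-envelope ceiling for random I/O on a 7200 RPM disk, assuming a typical ~8.5 ms average seek (standard drive arithmetic, not data from this thread):

\[
\frac{7200}{60} = 120\ \mathrm{rev/s},\qquad
t_{\mathrm{rot}} \approx \frac{1}{2}\cdot\frac{1000}{120} \approx 4.2\ \mathrm{ms},\qquad
\mathrm{IOPS} \approx \frac{1000}{4.2 + 8.5} \approx 80
\]

So 700-800 writes/sec is far above the random-I/O ceiling and plausible only if the resilver writes are largely sequential or are being coalesced.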
2016 Jul 21
5
[PATCH v3 0/4] implement vcpu preempted check
change from v2:
no code change, fix typos, update some comments
change from v1:
a simpler definition of the default vcpu_is_preempted
skip machine type check on ppc, and add config. remove dedicated macro.
add one patch to drop overload of rwsem_spin_on_owner and mutex_spin_on_owner.
add more comments
thanks to Boqun's and Peter's suggestions.
This patch set aims to fix lock holder preemption
2017 Jul 07
0
Wine release 2.12
...st_async and return it in read request.
server: Close async wait handle when wait is satisfied.
server: Return async result directly instead of via APCs if it's available.
server: Use create_request_async for write requests.
server: Store fd reference in async object for unqueued asyncs.
server: Allow async_handoff users to set result themselves.
ntdll: Set iosb status in server_ioctl_file.
server: Use create_request_async in ioctl request handler.
server: Use create_request_async in flush request handler.
server: Remove no longer needed need_...
2016 Oct 28
16
[PATCH v6 00/11] implement vcpu preempted check
change from v5:
split x86/kvm patch into guest/host part.
introduce kvm_write_guest_offset_cached.
fix some typos.
rebase patch onto 4.9-rc2
change from v4:
split x86 kvm vcpu preempted check into two patches.
add documentation patch.
add x86 vcpu preempted check patch under xen
add s390 vcpu preempted check patch
change from v3:
add x86 vcpu preempted check patch
change from v2:
no code
2016 Jun 28
11
[PATCH v2 0/4] implement vcpu preempted check
change from v1:
a simpler definition of the default vcpu_is_preempted
skip machine type check on ppc, and add config. remove dedicated macro.
add one patch to drop overload of rwsem_spin_on_owner and mutex_spin_on_owner.
add more comments
thanks to Boqun's and Peter's suggestions.
This patch set aims to fix lock holder preemption issues.
test-case:
perf record -a perf bench sched messaging -g
2016 Oct 25
7
[GIT PULL v2 0/5] cpu_relax: drop lowlatency, introduce yield
Peter,
here is v2 with some improved patch descriptions and some fixes. The
previous version has survived one day of linux-next and I only changed
small parts.
So unless there is some other issue, feel free to pull (or to apply
the patches) to tip/locking.
The following changes since commit 07d9a380680d1c0eb51ef87ff2eab5c994949e69:
Linux 4.9-rc2 (2016-10-23 17:10:14 -0700)
are available in
2012 Aug 30
2
[PATCH 01/11] vmci_context.patch: VMCI context list operations.
...xt)
+{
+ ASSERT(context);
+ if (atomic_dec_and_test(&context->refCount))
+ ctx_free_ctx(context);
+}
+
+/*
+ * Dequeues the next datagram and returns it to the caller.
+ * The caller passes in a pointer to the maximum datagram size
+ * it can handle, and the datagram is only unqueued if its
+ * size is less than maxSize. If it is larger, maxSize is set to
+ * the size of the datagram to give the caller a chance to
+ * set up a larger buffer for the guestcall.
+ */
+int vmci_ctx_dequeue_datagram(struct vmci_ctx *context,
+ size_t *maxSize,
+...
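A hypothetical caller honoring the contract quoted above; the buffer size, retry logic, and return-code handling are assumptions for illustration, and the datagram out-parameter is inferred from the truncated signature:

/* Offer a small size first; if the pending datagram is bigger, the
 * call fails and maxSize is updated to the datagram's actual size,
 * so retry once with the larger limit. */
size_t max_size = 256;
struct vmci_datagram *dg = NULL;
int result;

result = vmci_ctx_dequeue_datagram(context, &max_size, &dg);
if (result < 0 && max_size > 256) {
	/* grow the receive buffer to max_size here, then retry */
	result = vmci_ctx_dequeue_datagram(context, &max_size, &dg);
}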
2016 Oct 20
15
[PATCH v5 0/9] implement vcpu preempted check
change from v4:
split x86 kvm vcpu preempted check into two patches.
add documentation patch.
add x86 vcpu preempted check patch under xen
add s390 vcpu preempted check patch
change from v3:
add x86 vcpu preempted check patch
change from v2:
no code change, fix typos, update some comments
change from v1:
a simpler definition of the default vcpu_is_preempted
skip machine type check on ppc,