2016 Jan 12
1
[PATCH v3 01/41] locking/barriers, arch: Use smp barriers in smp_store_release()
On Sun, Jan 10, 2016 at 04:16:32PM +0200, Michael S. Tsirkin wrote:
> From: Davidlohr Bueso <dave at stgolabs.net>
>
> With commit b92b8b35a2e ("locking/arch: Rename set_mb() to smp_store_mb()")
> it was made clear that the context of this call (and thus set_mb)
> is strictly for CPU ordering, as opposed to IO. As such, all archs
> should use the smp variant of mb(), respecting...
2016 Jan 10
0
[PATCH v3 01/41] locking/barriers, arch: Use smp barriers in smp_store_release()
From: Davidlohr Bueso <dave at stgolabs.net>
With commit b92b8b35a2e ("locking/arch: Rename set_mb() to smp_store_mb()")
it was made clear that the context of this call (and thus set_mb)
is strictly for CPU ordering, as opposed to IO. As such, all archs
should use the smp variant of mb(), respecting the semantics and
saving...
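For context, a minimal sketch of the pattern the patch describes (illustrative,
not the exact kernel definition): the generic fallback pairs the store with the
SMP-strength barrier rather than the mandatory mb():

#define smp_store_mb(var, value) \
	do { WRITE_ONCE(var, value); smp_mb(); } while (0)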
2016 Jan 28
10
[PATCH v5 0/5] x86: faster smp_mb()+documentation tweaks
mb() typically uses mfence on modern x86, but a micro-benchmark shows that it's
2 to 3 times slower than the lock; addl that we use on older CPUs.
So we really should use the locked variant everywhere, except that the Intel
manual says that clflush is only ordered by mfence, so we can't.
Note: some callers of clflush seem to assume sfence will
order it, so there could be existing bugs around
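A hypothetical userspace micro-benchmark in the spirit of the one referenced
above (iteration count and structure are my own; RDTSC timing is rough and
x86-64 only):

#include <stdio.h>
#include <stdint.h>

static inline uint64_t rdtsc(void)
{
	uint32_t lo, hi;
	__asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
	return ((uint64_t)hi << 32) | lo;
}

int main(void)
{
	const int iters = 10000000;
	uint64_t t0, t1;

	/* Full barrier via MFENCE. */
	t0 = rdtsc();
	for (int i = 0; i < iters; i++)
		__asm__ __volatile__("mfence" ::: "memory");
	t1 = rdtsc();
	printf("mfence:    %.2f cycles/op\n", (double)(t1 - t0) / iters);

	/* Full barrier via a LOCK-prefixed RMW on the stack. */
	t0 = rdtsc();
	for (int i = 0; i < iters; i++)
		__asm__ __volatile__("lock; addl $0,(%%rsp)" ::: "memory", "cc");
	t1 = rdtsc();
	printf("lock addl: %.2f cycles/op\n", (double)(t1 - t0) / iters);

	return 0;
}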
2018 Nov 30
8
[PATCH RFC 00/15] Zero ****s, hugload of hugs <3
On Fri, 30 Nov 2018, Kees Cook wrote:
>On Fri, Nov 30, 2018 at 11:27 AM Jarkko Sakkinen
><jarkko.sakkinen at linux.intel.com> wrote:
>>
>> In order to comply with the CoC, replace **** with a hug.
I hope this is some kind of joke. How would anyone get offended by reading
technical comments? This is all beyond me...
Thanks,
Davidlohr
2016 Jan 10
48
[PATCH v3 00/41] arch: barrier cleanup + barriers for virt
Changes since v2:
- extended checkpatch tests for barriers, and added patches
teaching it to warn about incorrect usage of barriers
(__smp_xxx barriers are for use by asm-generic code only),
which should help prevent misuse by arch code
(addressing comments by Russell King)
- patched more instances of xen to use virt_ barriers
as suggested by Stefano Stabellini
- implemented a 2 byte xchg on sh
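A sketch of the virt_ annotation the series adds (illustrative; the actual
definitions live in the generic barrier header): a guest runs against a
possibly-SMP hypervisor even when the kernel itself is built !SMP, so the
virt_ wrappers map unconditionally to the SMP variants:

#define virt_mb()  __smp_mb()
#define virt_rmb() __smp_rmb()
#define virt_wmb() __smp_wmb()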
2016 Jan 12
3
[PATCH 3/4] x86,asm: Re-work smp_store_mb()
On Mon, Nov 02, 2015 at 04:06:46PM -0800, Linus Torvalds wrote:
> On Mon, Nov 2, 2015 at 12:15 PM, Davidlohr Bueso <dave at stgolabs.net> wrote:
> >
> > So I ran some experiments on an IvyBridge (2.8GHz) and the cost of XCHG is
> > consistently cheaper (by at least half the latency) than MFENCE. While there
> > was a decent amount of variation, this difference remained rather constant.
>
> Mind te...
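The direction of the rework under discussion, sketched: a LOCK-prefixed XCHG
is itself a full barrier on x86, so the store and the fence can be folded
into a single instruction (a sketch, not the final patch):

#define smp_store_mb(var, value) \
	do { (void)xchg(&var, value); } while (0)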
2015 Dec 31
54
[PATCH v2 00/34] arch: barrier cleanup + barriers for virt
Changes since v1:
- replaced my asm-generic patch with an equivalent patch already in tip
- add wrappers with virt_ prefix for better code annotation,
as suggested by David Miller
- dropped XXX in patch names as this makes vger choke; Cc'd all relevant
mailing lists on all patches (not personal email, as the list otherwise
becomes too long)
I parked this in vhost tree for now, but the
2016 Oct 29
1
[PATCH v6 02/11] locking/osq: Drop the overload of osq_lock()
On Fri, 28 Oct 2016, Pan Xinhui wrote:
> /*
> * If we need to reschedule bail... so we can block.
>+ * Use vcpu_is_preempted to detech lock holder preemption issue
^^ detect
>+ * and break.
Could you please remove the rest of this comment? It's just noise to point out
that vcpu_is_preempted is a macro defined by arch/false. This is
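For readers without the patch at hand, a sketch of the spin loop under review
(names follow kernel/locking/osq_lock.c, but this is an illustration, not the
merged code):

while (!READ_ONCE(node->locked)) {
	/*
	 * If we need to reschedule, bail... so we can block.
	 * Also bail if the lock holder's vCPU was preempted;
	 * vcpu_is_preempted() is simply false on bare metal.
	 */
	if (need_resched() || vcpu_is_preempted(node_cpu(node->prev)))
		goto unqueue;
	cpu_relax();
}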
2019 Oct 04
0
[PATCH 07/11] vhost: convert vhost_umem_interval_tree to half closed intervals
On Fri, 04 Oct 2019, Michel Lespinasse wrote:
>On Thu, Oct 03, 2019 at 01:18:54PM -0700, Davidlohr Bueso wrote:
>> @@ -1320,15 +1320,14 @@ static bool iotlb_access_ok(struct vhost_virtqueue *vq,
>> {
>> const struct vhost_umem_node *node;
>> struct vhost_umem *umem = vq->iotlb;
>> - u64 s = 0, size, orig_addr = addr, last = addr + len - 1;
>> + u64 s
2015 Feb 06
2
[PATCH] x86 spinlock: Fix memory corruption on completing completions
On Fri, 2015-02-06 at 08:25 -0800, Linus Torvalds wrote:
> On Fri, Feb 6, 2015 at 6:49 AM, Raghavendra K T
> <raghavendra.kt at linux.vnet.ibm.com> wrote:
> > Paravirt spinlock clears slowpath flag after doing unlock.
> [ fix edited out ]
>
> So I'm not going to be applying this for 3.19, because it's much too
> late and the patch is too scary. Plus the bug
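A sketch of the hazard being described, with hypothetical helper names
(__ticket_unlock() and clear_slowpath_flag() are labels for illustration,
not the arch/x86 functions): once the unlock makes the lock available,
clearing the flag afterwards touches memory another CPU may already have
reused:

__ticket_unlock(lock);      /* lock is now up for grabs            */
                            /* <-- another CPU can acquire, release,
                             *     and even kfree() the lock here   */
clear_slowpath_flag(lock);  /* late write: potential corruption     */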
2019 Oct 03
1
[PATCH 07/11] vhost: convert vhost_umem_interval_tree to half closed intervals
The vhost_umem interval tree really wants [a, b) intervals,
not the fully closed ones it currently uses. As such, convert
it to the new interval_tree_gen.h, and also rename the 'last'
endpoint in the node to 'end', which is both a more suitable
name for the half closed interval and also reduces the chances
of some caller being missed.
Cc: Michael S. Tsirkin <mst at redhat.com>
Cc:
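A small self-contained illustration of why [a, b) is the friendlier
convention (my own example, not the vhost code): end = addr + len needs no
-1, and overlap tests compose without off-by-one adjustments:

#include <stdbool.h>
#include <stdint.h>

/* Two half closed ranges [a_start, a_end) and [b_start, b_end)
 * overlap iff each starts before the other ends. */
static bool ranges_overlap(uint64_t a_start, uint64_t a_end,
                           uint64_t b_start, uint64_t b_end)
{
	return a_start < b_end && b_start < a_end;
}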