2016 Jan 06
2
[PATCH v2 15/32] powerpc: define __smp_xxx
...> > > > smp_lwsync() to get involved in this cleanup? If I understand you
> > > > correctly, this cleanup focuses on external APIs like smp_{r,w,}mb(),
> > > > while smp_lwsync() is internal to PPC.
> > > >
> > > > Regards,
> > > > Boqun
> > >
> > > I think you missed the leading ___ :)
> > >
> >
> > What I meant here was that smp_lwsync() was originally internal to PPC,
> > but never mind ;-)
> >
> > > smp_store_release is external and it needs __smp_lwsync as
> > >...
2016 Jan 05
2
[PATCH v2 15/32] powerpc: define __smp_xxx
On Tue, Jan 05, 2016 at 10:51:17AM +0200, Michael S. Tsirkin wrote:
> On Tue, Jan 05, 2016 at 09:36:55AM +0800, Boqun Feng wrote:
> > Hi Michael,
> >
> > On Thu, Dec 31, 2015 at 09:07:42PM +0200, Michael S. Tsirkin wrote:
> > > This defines __smp_xxx barriers for powerpc
> > > for use by virtualization.
> > >
> > > smp_xxx barriers are removed as they are
>...
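The commit message above is truncated; the gist of the series is a naming split, with the arch supplying __smp_xxx and generic code wrapping them. A minimal sketch, assuming that wrapping scheme; the powerpc instruction choices shown are illustrative, not the exact patch text:

/* The arch defines the double-underscore variants unconditionally. */
#define __smp_mb()	__asm__ __volatile__ ("sync" : : : "memory")
#define __smp_rmb()	__asm__ __volatile__ ("lwsync" : : : "memory")
#define __smp_wmb()	__asm__ __volatile__ ("lwsync" : : : "memory")

/* Generic code builds the public names on top: real barriers on SMP
 * kernels, compiler barriers on UP, while virtualization code can use
 * the __smp_xxx forms directly regardless of CONFIG_SMP. */
#ifdef CONFIG_SMP
#define smp_mb()	__smp_mb()
#else
#define smp_mb()	barrier()
#endif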
2016 May 26
2
[PATCH v3 5/6] pv-qspinlock: use cmpxchg_release in __pv_queued_spin_unlock
On Wed, May 25, 2016 at 04:18:08PM +0800, Pan Xinhui wrote:
> cmpxchg_release is lighter weight than cmpxchg, so we can gain better
> performance. On some arches like ppc, barriers impact performance
> too much.
>
> Suggested-by: Boqun Feng <boqun.feng at gmail.com>
> Signed-off-by: Pan Xinhui <xinhui.pan at linux.vnet.ibm.com>
> ---
> kernel/locking/qspinlock_paravirt.h | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/kernel/locking/qspinlock_paravirt.h b/kernel/locking/qsp...
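The diff itself is truncated in this snippet. A sketch of the change under review, assuming the surrounding function body from kernel/locking/qspinlock_paravirt.h of that era; names and slow-path details are illustrative:

__visible void __pv_queued_spin_unlock(struct qspinlock *lock)
{
	struct __qspinlock *l = (void *)lock;
	u8 locked;

	/*
	 * RELEASE ordering is sufficient to publish the critical section.
	 * Unlike a full cmpxchg(), cmpxchg_release() omits the trailing
	 * barrier, which is what avoids the heavyweight sync on ppc.
	 */
	locked = cmpxchg_release(&l->locked, _Q_LOCKED_VAL, 0);
	if (likely(locked == _Q_LOCKED_VAL))
		return;

	/* A waiter stored _Q_SLOW_VAL; fall back to the kicking path. */
	__pv_queued_spin_unlock_slowpath(lock, locked);
}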
2016 Dec 06
2
[PATCH v8 2/6] powerpc: pSeries/Kconfig: Add qspinlock build config
...>
> +config ARCH_USE_QUEUED_SPINLOCKS
> + default y
> + bool "Enable qspinlock"
I think you just enable qspinlock by default for all PPC platforms. I
guess you need to put
depends on PPC_PSERIES || PPC_POWERNV
here to achieve what you mean in your commit message.
Regards,
Boqun
> + help
> + Enabling this option will let kernel use qspinlock which is a kind of
> + fairlock. It has shown a good performance improvement on x86 and also ppc
> + especially in high contention cases.
> +
> config PPC_SPLPAR
> depends on PPC_PSERIES
> bool "...
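The suggested fix amounts to one added dependency line; a sketch of the amended entry, combining the quoted Kconfig text with Boqun's suggestion (help text lightly reworded):

config ARCH_USE_QUEUED_SPINLOCKS
	bool "Enable qspinlock"
	depends on PPC_PSERIES || PPC_POWERNV
	default y
	help
	  Enabling this option will let the kernel use qspinlock, which
	  is a kind of fair lock. It has shown a good performance
	  improvement on x86 and also on ppc, especially in
	  high-contention cases.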
2016 Jan 06
0
[PATCH v2 15/32] powerpc: define __smp_xxx
On Wed, Jan 06, 2016 at 09:51:52AM +0800, Boqun Feng wrote:
> On Tue, Jan 05, 2016 at 06:16:48PM +0200, Michael S. Tsirkin wrote:
> [snip]
> > > > > Another thing is that smp_lwsync() may have a third user (other than
> > > > > smp_load_acquire() and smp_store_release()):
> > > > >
> > >...
2016 Nov 25
2
[PATCH 0/3] virtio/vringh: kill off ACCESS_ONCE()
On Fri, Nov 25, 2016 at 01:40:44PM +0100, Peter Zijlstra wrote:
> #define SINGLE_LOAD(x) \
> ({ \
> compiletime_assert_atomic_type(typeof(x)); \
Should be:
compiletime_assert_atomic_type(x);
> WARN_SINGLE_COPY_ALIGNMENT(&(x)); \
> READ_ONCE(x); \
> })
>
> #define SINGLE_STORE(x, v) \
> ({ \
>
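With the correction folded in (and the transposed "{(" read as "({"), a sketch of how the proposed pair would look; SINGLE_LOAD, SINGLE_STORE, and WARN_SINGLE_COPY_ALIGNMENT are names from this proposal, not a merged kernel API:

#define SINGLE_LOAD(x)						\
({								\
	/* x must be a native-word-sized scalar */		\
	compiletime_assert_atomic_type(x);			\
	WARN_SINGLE_COPY_ALIGNMENT(&(x));			\
	READ_ONCE(x);						\
})

#define SINGLE_STORE(x, v)					\
({								\
	compiletime_assert_atomic_type(x);			\
	WARN_SINGLE_COPY_ALIGNMENT(&(x));			\
	WRITE_ONCE(x, v);					\
})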
2016 Jun 28
11
[PATCH v2 0/4] implement vcpu preempted check
change from v1:
a simpler definition of the default vcpu_is_preempted
skip machine type check on ppc, and add config. remove dedicated macro.
add one patch to drop the overload of rwsem_spin_on_owner and mutex_spin_on_owner.
add more comments
thanks to Boqun's and Peter's suggestions.
This patch set aims to fix lock holder preemption issues.
test-case:
perf record -a perf bench sched messaging -g 400 -p && perf report
18.09% sched-messaging [kernel.vmlinux] [k] osq_lock
12.28% sched-messaging [kernel.vmlinux] [k] rwsem_spin_on_owner...
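The hook at the core of the series, sketched from the cover letter's description: a per-arch vcpu_is_preempted() defaulting to false, consulted by the owner-spinning loops that dominate the profile above (the loop shown is illustrative):

/* Default for bare metal and arches that do not provide the hook. */
#ifndef vcpu_is_preempted
#define vcpu_is_preempted(cpu)	false
#endif

/* Owner-spin loop: stop wasting cycles once the lock holder's vcpu
 * has been preempted by the hypervisor. */
while (READ_ONCE(lock->owner) == owner) {
	if (need_resched() || vcpu_is_preempted(task_cpu(owner)))
		break;
	cpu_relax();
}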
2016 Jan 05
2
[PATCH v2 15/32] powerpc: define __smp_xxx
...ticle.gmane.org/gmane.linux.ports.ppc.embedded/89877
I'm OK with changing my patch accordingly, but do we really want
smp_lwsync() to get involved in this cleanup? If I understand you
correctly, this cleanup focuses on external APIs like smp_{r,w,}mb(),
while smp_lwsync() is internal to PPC.
Regards,
Boqun
> WRITE_ONCE(*p, v); \
> } while (0)
>
> -#define smp_load_acquire(p) \
> +#define __smp_load_acquire(p) \
> ({ \
> typeof(*p) ___p1 = READ_ONCE(*p); \
> compiletime_assert_atomic_type(*p); \
> - smp_lwsync(); \
> + __smp_l...
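Reassembling the truncated diff, the powerpc pair after the rename reads roughly as follows (a sketch; the authoritative text is arch/powerpc/include/asm/barrier.h):

#define __smp_store_release(p, v)				\
do {								\
	compiletime_assert_atomic_type(*p);			\
	__smp_lwsync();	/* order prior accesses before the store */ \
	WRITE_ONCE(*p, v);					\
} while (0)

#define __smp_load_acquire(p)					\
({								\
	typeof(*p) ___p1 = READ_ONCE(*p);			\
	compiletime_assert_atomic_type(*p);			\
	__smp_lwsync();	/* order the load before later accesses */ \
	___p1;							\
})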
2016 Jan 05
0
[PATCH v2 15/32] powerpc: define __smp_xxx
On Tue, Jan 05, 2016 at 05:53:41PM +0800, Boqun Feng wrote:
> On Tue, Jan 05, 2016 at 10:51:17AM +0200, Michael S. Tsirkin wrote:
> > On Tue, Jan 05, 2016 at 09:36:55AM +0800, Boqun Feng wrote:
> > > Hi Michael,
> > >
> > > On Thu, Dec 31, 2015 at 09:07:42PM +0200, Michael S. Tsirkin wrote:
> > > >...
2016 Jul 06
1
[PATCH v2 2/4] powerpc/spinlock: support vcpu preempted check
...he fact is powerNV are built into same
   ^^ support
> kernel image with pSeries. So we need return false if we are runnig as
> powerNV. The another fact is that lppaca->yiled_count keeps zero on
  ^^ yield
> powerNV. So we can just skip the machine type.
>
> Suggested-by: Boqun Feng <boqun.feng at gmail.com>
> Suggested-by: Peter Zijlstra (Intel) <peterz at infradead.org>
> Signed-off-by: Pan Xinhui <xinhui.pan at linux.vnet.ibm.com>
> ---
> arch/powerpc/include/asm/spinlock.h | 18 ++++++++++++++++++
> 1 file changed, 18 insertions(+)
>...
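A sketch of the check the commit message describes: read the lppaca yield count directly and rely on it staying zero on powerNV, so no machine-type test is needed (semantics as described in this series; details illustrative):

static inline bool vcpu_is_preempted(int cpu)
{
	/*
	 * The hypervisor bumps yield_count around preemption, and per
	 * this series an odd value means the vcpu is preempted right
	 * now. On powerNV the field stays zero, so the check naturally
	 * returns false there.
	 */
	return !!(be32_to_cpu(lppaca_of(cpu).yield_count) & 1);
}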
2016 Nov 25
2
[PATCH 0/3] virtio/vringh: kill off ACCESS_ONCE()
On Fri, Nov 25, 2016 at 3:56 PM, Boqun Feng <boqun.feng at gmail.com> wrote:
> On Fri, Nov 25, 2016 at 01:44:04PM +0100, Peter Zijlstra wrote:
>> On Fri, Nov 25, 2016 at 01:40:44PM +0100, Peter Zijlstra wrote:
>> > #define SINGLE_LOAD(x) \
>> > ({...