Results similar to: "[patch] btrfs: fix gfp flags masking"
2020 Jul 09
4
[PATCH v3 5/6] powerpc/pseries: implement paravirt qspinlocks for SPLPAR
Nicholas Piggin <npiggin at gmail.com> writes:
> Signed-off-by: Nicholas Piggin <npiggin at gmail.com>
> ---
> arch/powerpc/include/asm/paravirt.h | 28 ++++++++
> arch/powerpc/include/asm/qspinlock.h | 66 +++++++++++++++++++
> arch/powerpc/include/asm/qspinlock_paravirt.h | 7 ++
> arch/powerpc/platforms/pseries/Kconfig | 5 ++
>
2020 Jul 09
1
[PATCH v3 4/6] powerpc/64s: implement queued spinlocks and rwlocks
Nicholas Piggin <npiggin at gmail.com> writes:
> These have shown significantly improved performance and fairness when
> spinlock contention is moderate to high on very large systems.
>
> [ Numbers hopefully forthcoming after more testing, but initial
> results look good ]
Would be good to have something here, even if it's preliminary.
> Thanks to the fast path,
2020 Jul 06
13
[PATCH v3 0/6] powerpc: queued spinlocks and rwlocks
v3 is updated to use __pv_queued_spin_unlock, noticed by Waiman (thank you).
Thanks,
Nick
Nicholas Piggin (6):
powerpc/powernv: must include hvcall.h to get PAPR defines
powerpc/pseries: move some PAPR paravirt functions to their own file
powerpc: move spinlock implementation to simple_spinlock
powerpc/64s: implement queued spinlocks and rwlocks
powerpc/pseries: implement paravirt
2007 Oct 16
1
LFENCE instruction (was: [rfc][patch 3/3] x86: optimise barriers)
Nick Piggin <npiggin@suse.de> wrote:
>
> Also, for non-wb memory. I don't think the Intel document referenced
> says anything about this, but the AMD document says that loads can pass
> loads (page 8, rule b).
>
> This is why our rmb() is still an lfence.
BTW, Xen (in particular, the code in drivers/xen) uses mb/rmb/wmb
instead of smp_mb/smp_rmb/smp_wmb when it
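The distinction matters because the smp_* barriers only order accesses against other CPUs and can compile down to a plain compiler barrier (or nothing on !CONFIG_SMP kernels), while memory shared with a hypervisor or device needs real ordering regardless. A minimal consumer-side sketch of why the mandatory variant is used (kernel-style C; the ring layout and names here are illustrative, not Xen's actual interface):

#include <linux/compiler.h>
#include <asm/barrier.h>

struct demo_ring {
	unsigned int prod;		/* advanced by the other side */
	unsigned int data[256];
};

static unsigned int demo_consume(struct demo_ring *ring, unsigned int cons)
{
	unsigned int prod;

	prod = READ_ONCE(ring->prod);	/* load 1: how far the producer got */
	/*
	 * rmb() (lfence on x86-64) keeps load 2 from passing load 1 even on
	 * !CONFIG_SMP kernels and for non-WB mappings; smp_rmb() only orders
	 * against other CPUs and may be a pure compiler barrier on x86.
	 */
	rmb();
	if (cons == prod)
		return 0;
	return READ_ONCE(ring->data[cons % 256]);	/* load 2: the payload */
}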
2020 Jul 05
1
[PATCH v2 5/6] powerpc/pseries: implement paravirt qspinlocks for SPLPAR
On 7/3/20 3:35 AM, Nicholas Piggin wrote:
> Signed-off-by: Nicholas Piggin <npiggin at gmail.com>
> ---
> arch/powerpc/include/asm/paravirt.h | 28 ++++++++++
> arch/powerpc/include/asm/qspinlock.h | 55 +++++++++++++++++++
> arch/powerpc/include/asm/qspinlock_paravirt.h | 5 ++
> arch/powerpc/platforms/pseries/Kconfig | 5 ++
>
2020 Jul 03
7
[PATCH v2 0/6] powerpc: queued spinlocks and rwlocks
v2 is updated to account for feedback from Will, Peter, and
Waiman (thank you), and trims off a couple of RFC and unrelated
patches.
Thanks,
Nick
Nicholas Piggin (6):
powerpc/powernv: must include hvcall.h to get PAPR defines
powerpc/pseries: move some PAPR paravirt functions to their own file
powerpc: move spinlock implementation to simple_spinlock
powerpc/64s: implement queued
2020 Jul 02
3
[PATCH 5/8] powerpc/64s: implement queued spinlocks and rwlocks
On Thu, Jul 02, 2020 at 08:25:43PM +1000, Nicholas Piggin wrote:
> Excerpts from Will Deacon's message of July 2, 2020 6:02 pm:
> > On Thu, Jul 02, 2020 at 05:48:36PM +1000, Nicholas Piggin wrote:
> >> diff --git a/arch/powerpc/include/asm/qspinlock.h b/arch/powerpc/include/asm/qspinlock.h
> >> new file mode 100644
> >> index 000000000000..f84da77b6bb7
>
2020 Jul 02
12
[PATCH 0/8] powerpc: queued spinlocks and rwlocks
This series adds an option to use queued spinlocks for powerpc, and
makes it the default for the Book3S-64 subarch.
This effort starts with the generic code so it's very simple but
still very performant. There are optimisations that can be made to
slowpaths, but I think it's better to attack those incrementally
if/when we find things, and try to add the improvements to generic
code as
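Because the series leans on the generic implementation, the arch glue stays small. A sketch of the minimal shape such an adoption takes, assuming the asm-generic headers and Kconfig symbols; the actual powerpc patches differ in detail:

/*
 * Sketch only: Kconfig selects ARCH_USE_QUEUED_SPINLOCKS and
 * ARCH_USE_QUEUED_RWLOCKS, after which the slowpaths come from
 * kernel/locking/qspinlock.c and qrwlock.c. The arch headers then
 * mostly just pull in the generic fast paths:
 */

/* arch/powerpc/include/asm/qspinlock.h (illustrative) */
#include <asm-generic/qspinlock_types.h>
#include <asm-generic/qspinlock.h>

/* arch/powerpc/include/asm/qrwlock.h (illustrative) */
#include <asm-generic/qrwlock_types.h>
#include <asm-generic/qrwlock.h>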
2020 Jul 24
8
[PATCH v4 0/6] powerpc: queued spinlocks and rwlocks
Updated with everybody's feedback (thanks all), and more performance
results.
What I've found is I might have been measuring the worst load point for
the paravirt case, and by looking at a range of loads it's clear that
queued spinlocks are overall better even on PV, doubly so when you look
at the generally much improved worst case latencies.
I have defaulted it to N even though
2020 Jul 09
0
[PATCH v3 5/6] powerpc/pseries: implement paravirt qspinlocks for SPLPAR
On 7/9/20 6:53 AM, Michael Ellerman wrote:
> Nicholas Piggin <npiggin at gmail.com> writes:
>
>> Signed-off-by: Nicholas Piggin <npiggin at gmail.com>
>> ---
>> arch/powerpc/include/asm/paravirt.h | 28 ++++++++
>> arch/powerpc/include/asm/qspinlock.h | 66 +++++++++++++++++++
>> arch/powerpc/include/asm/qspinlock_paravirt.h |
2017 Oct 13
2
[PATCH] virtio: avoid possible OOM lockup at virtballoon_oom_notify()
On Tue, Oct 10, 2017 at 07:47:37PM +0900, Tetsuo Handa wrote:
> In leak_balloon(), mutex_lock(&vb->balloon_lock) is called in order to
> serialize against fill_balloon(). But in fill_balloon(),
> alloc_page(GFP_HIGHUSER[_MOVABLE] | __GFP_NOMEMALLOC | __GFP_NORETRY) is
> called with vb->balloon_lock mutex held. Since GFP_HIGHUSER[_MOVABLE]
> implies __GFP_DIRECT_RECLAIM |
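The cycle being described, condensed into a kernel-style sketch (hypothetical condensed call paths, not the exact code):

/*
 * Condensed sketch of the reported lockup. GFP_HIGHUSER[_MOVABLE]
 * implies __GFP_DIRECT_RECLAIM, so the allocation below may sleep in
 * reclaim with the mutex still held.
 */
static void fill_balloon_sketch(struct virtio_balloon *vb)
{
	mutex_lock(&vb->balloon_lock);
	alloc_page(GFP_HIGHUSER_MOVABLE | __GFP_NOMEMALLOC | __GFP_NORETRY);
	/*
	 * While this waits for memory, the OOM notifier chain may run:
	 *   out_of_memory() -> virtballoon_oom_notify() -> leak_balloon()
	 *     -> mutex_lock(&vb->balloon_lock)   <- already held here,
	 * so the notifier can never release balloon pages and the OOM
	 * path cannot make progress: lockup.
	 */
	mutex_unlock(&vb->balloon_lock);
}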
2020 Jul 02
0
[PATCH 2/8] powerpc/pseries: use smp_rmb() in H_CONFER spin yield
There is no need for rmb(); this allows the faster lwsync here.
Signed-off-by: Nicholas Piggin <npiggin at gmail.com>
---
arch/powerpc/lib/locks.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/powerpc/lib/locks.c b/arch/powerpc/lib/locks.c
index 6440d5943c00..47a530de733e 100644
--- a/arch/powerpc/lib/locks.c
+++ b/arch/powerpc/lib/locks.c
@@ -30,7 +30,7 @@ void
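For context, the barrier sits between reading the lock holder's yield count and re-checking the lock word before conferring cycles via H_CONFER. An abridged version of splpar_spin_yield() with the change applied (reconstructed from arch/powerpc/lib/locks.c; details may differ from the tree):

void splpar_spin_yield(arch_spinlock_t *lock)
{
	unsigned int lock_value, holder_cpu, yield_count;

	lock_value = lock->slock;
	if (lock_value == 0)
		return;
	holder_cpu = lock_value & 0xffff;
	yield_count = be32_to_cpu(lppaca_of(holder_cpu).yield_count);
	if ((yield_count & 1) == 0)
		return;		/* virtual cpu is currently running */
	/*
	 * Both reads are of cacheable memory, so the SMP variant
	 * (lwsync) suffices; the full rmb() (sync) it replaces was
	 * stronger than needed.
	 */
	smp_rmb();
	if (lock->slock != lock_value)
		return;		/* something has changed */
	plpar_hcall_norets(H_CONFER,
		get_hard_smp_processor_id(holder_cpu), yield_count);
}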
2020 Jul 06
0
[PATCH v3 1/6] powerpc/powernv: must include hvcall.h to get PAPR defines
An include goes away in future patches, which breaks
compilation without this.
Signed-off-by: Nicholas Piggin <npiggin at gmail.com>
---
arch/powerpc/platforms/powernv/pci-ioda-tce.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/powerpc/platforms/powernv/pci-ioda-tce.c b/arch/powerpc/platforms/powernv/pci-ioda-tce.c
index f923359d8afc..8eba6ece7808 100644
---
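Given the one-line diffstat, the change is presumably just making the dependency explicit, along these lines (the placement and the specific defines pulled in are assumptions):

+#include <asm/hvcall.h>	/* PAPR defines, per the subject line */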
2020 Jul 24
0
[PATCH v4 6/6] powerpc: implement smp_cond_load_relaxed
This implements smp_cond_load_relaxed with the slowpath busy loop using the
preferred SMT priority pattern.
Signed-off-by: Nicholas Piggin <npiggin at gmail.com>
---
arch/powerpc/include/asm/barrier.h | 14 ++++++++++++++
1 file changed, 14 insertions(+)
diff --git a/arch/powerpc/include/asm/barrier.h b/arch/powerpc/include/asm/barrier.h
index 123adcefd40f..9b4671d38674 100644
---
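In outline, the addition is a powerpc definition of the generic hook that wraps the busy-wait in the SMT thread-priority helpers spin_begin()/spin_end(); a sketch along these lines (the real macro in barrier.h may differ):

#define smp_cond_load_relaxed(ptr, cond_expr) ({		\
	typeof(ptr) __PTR = (ptr);				\
	__unqual_scalar_typeof(*ptr) VAL;			\
	VAL = READ_ONCE(*__PTR);				\
	if (unlikely(!(cond_expr))) {				\
		/* lower SMT priority while we spin ... */	\
		spin_begin();					\
		do {						\
			VAL = READ_ONCE(*__PTR);		\
		} while (!(cond_expr));				\
		/* ... and restore it on the way out */		\
		spin_end();					\
	}							\
	(typeof(*ptr))VAL;					\
})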
2017 Oct 13
4
[PATCH] virtio_balloon: fix deadlock on OOM
fill_balloon doing memory allocations under balloon_lock
can cause a deadlock when leak_balloon is called from
virtballoon_oom_notify and tries to take the same lock.
To fix, split page allocation from enqueueing, and do the allocations outside the lock.
Here's a detailed analysis of the deadlock by Tetsuo Handa:
In leak_balloon(), mutex_lock(&vb->balloon_lock) is called in order to
serialize
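The shape of that fix: build up the pages on a private list with no locks held, then take balloon_lock only to enqueue them, so the OOM notifier's leak_balloon() can always acquire the lock. A condensed sketch, not the exact patch (balloon_page_enqueue() shown with its then-current two-argument form):

static unsigned int fill_balloon_fixed(struct virtio_balloon *vb, size_t num)
{
	LIST_HEAD(pages);	/* private list, no lock needed */
	struct page *page, *next;
	unsigned int allocated = 0;

	/* Phase 1: allocate outside balloon_lock; may reclaim/OOM safely. */
	while (allocated < num) {
		page = alloc_page(GFP_HIGHUSER_MOVABLE |
				  __GFP_NOMEMALLOC | __GFP_NORETRY);
		if (!page)
			break;
		list_add(&page->lru, &pages);
		allocated++;
	}

	/* Phase 2: hold the lock only for the enqueue bookkeeping. */
	mutex_lock(&vb->balloon_lock);
	list_for_each_entry_safe(page, next, &pages, lru) {
		list_del(&page->lru);
		balloon_page_enqueue(&vb->vb_dev_info, page);
	}
	mutex_unlock(&vb->balloon_lock);

	return allocated;
}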