search for: declare_static_key_false

Displaying 20 results from an estimated 29 matches for "declare_static_key_false".

2023 Feb 23
1
[PATCH v2] x86/paravirt: merge activate_mm and dup_mmap callbacks
On 07.02.23 22:09, Srivatsa S. Bhat wrote: > On 2/6/23 11:59 PM, Juergen Gross wrote: >> The two paravirt callbacks .mmu.activate_mm and .mmu.dup_mmap are >> sharing the same implementations in all cases: for Xen PV guests they >> are pinning the PGD of the new mm_struct, and for all other cases >> they are a NOP. >> >> In the end both callbacks are meant to
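For readers skimming the results: the idea in this patch is simply to collapse two hooks that behave identically into one. A minimal sketch of the shape, assuming the merged hook is called enter_mmap (that name and the wrapper below are illustrative, not quoted from the patch):

    /* Instead of .mmu.activate_mm and .mmu.dup_mmap, which the excerpt says
     * share one implementation everywhere, a single paravirt hook is enough.
     * Xen PV fills it with its PGD-pinning routine; everyone else keeps the
     * default NOP.
     */
    static inline void paravirt_enter_mmap(struct mm_struct *mm)
    {
            PVOP_VCALL1(mmu.enter_mmap, mm);
    }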
2020 Jul 06
0
[PATCH v3 2/6] powerpc/pseries: move some PAPR paravirt functions to their own file
...PDX-License-Identifier: GPL-2.0-or-later */ +#ifndef __ASM_PARAVIRT_H +#define __ASM_PARAVIRT_H +#ifdef __KERNEL__ + +#include <linux/jump_label.h> +#include <asm/smp.h> +#ifdef CONFIG_PPC64 +#include <asm/paca.h> +#include <asm/hvcall.h> +#endif + +#ifdef CONFIG_PPC_SPLPAR +DECLARE_STATIC_KEY_FALSE(shared_processor); + +static inline bool is_shared_processor(void) +{ + return static_branch_unlikely(&shared_processor); +} + +/* If bit 0 is set, the cpu has been preempted */ +static inline u32 yield_count_of(int cpu) +{ + __be32 yield_count = READ_ONCE(lppaca_of(cpu).yield_count); + return...
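The excerpt above shows only the consumer side of the key. To make the pattern complete for readers unfamiliar with jump labels, here is a minimal sketch of the producer side; where the key is defined and enabled on pseries is an assumption, not something shown in the excerpt:

    #include <linux/jump_label.h>

    /* The header carries DECLARE_STATIC_KEY_FALSE(shared_processor); exactly
     * one compilation unit must provide the definition.
     */
    DEFINE_STATIC_KEY_FALSE(shared_processor);

    /* Flipped once during early boot when the platform detects a
     * shared-processor LPAR (the detection helper below is a placeholder).
     */
    static void __init detect_shared_processor(void)
    {
            if (running_on_shared_processor_lpar())
                    static_branch_enable(&shared_processor);
    }

After that, is_shared_processor() compiles to a patched NOP/jump at every call site, so dedicated-processor systems pay nothing for the check.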
2020 Sep 16
0
[PATCH] mm: remove extra ZONE_DEVICE struct page refcount
...mm.h > index 517751310dd2..5a82037a4b26 100644 > --- a/include/linux/mm.h > +++ b/include/linux/mm.h > @@ -1093,34 +1093,6 @@ static inline bool is_zone_device_page(const struct page *page) > #ifdef CONFIG_DEV_PAGEMAP_OPS > void free_devmap_managed_page(struct page *page); > DECLARE_STATIC_KEY_FALSE(devmap_managed_key); The export for devmap_managed_key can be dropped now. In fact I think we can remove devmap_managed_key entirely now - it is only checked in the actual page free path instead of for each refcount manipulation, so a good old unlikely is probably enough. Also free_devmap_manage...
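The point being argued is that a jump label only earns its keep on paths hot enough to care about a single load and compare. A rough sketch of the two idioms, with made-up function names, to make the trade-off concrete:

    /* Jump label: the test is patched to a NOP in every caller until
     * static_branch_enable(&devmap_managed_key) flips the key, which is
     * worthwhile when the check runs for each refcount manipulation.
     */
    static inline bool devmap_check_hot_path(struct page *page)
    {
            return static_branch_unlikely(&devmap_managed_key) &&
                   is_zone_device_page(page);
    }

    /* Plain hint: always a load and compare, but the compiler keeps the
     * unlikely body out of line.  Once the check only sits in the actual
     * page-free path, this "good old unlikely" is enough.
     */
    static inline bool devmap_check_free_path(struct page *page)
    {
            return unlikely(is_zone_device_page(page));
    }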
2020 Sep 17
0
[PATCH] mm: remove extra ZONE_DEVICE struct page refcount
...2..5a82037a4b26 100644 >> --- a/include/linux/mm.h >> +++ b/include/linux/mm.h >> @@ -1093,34 +1093,6 @@ static inline bool is_zone_device_page(const struct page *page) >> #ifdef CONFIG_DEV_PAGEMAP_OPS >> void free_devmap_managed_page(struct page *page); >> DECLARE_STATIC_KEY_FALSE(devmap_managed_key); > > The export for devmap_managed_key can be dropped now. In fact I think > we can remove devmap_managed_key entirely now - it is only checked in > the actual page free path instead of for each refcount manipulation, > so a good old unlikely is probably enough....
2020 Jul 15
2
[PATCH v4 45/75] x86/sev-es: Adjust #VC IST Stack on entering NMI handler
...dec_return(nmi_state)) I really hate all this #VC stuff :-( So the above will make the NMI do 4 unconditional extra CALL+RET, a LOAD (which will potentially miss) and a compare and branch. How's that a win for normal people? Can we please turn all these sev_es_*() hooks into something like: DECLARE_STATIC_KEY_FALSE(sev_es_enabled_key); static __always_inline void sev_es_foo() { if (static_branch_unlikely(&sev_es_enabled_key)) __sev_es_foo(); } So that normal people will only see an extra NOP? > diff --git a/arch/x86/kernel/sev-es.c b/arch/x86/kernel/sev-es.c > index d415368f16ec..2a7cc72db1d5...
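For completeness, the rest of the proposed pattern looks roughly like this; apart from the names quoted above (sev_es_enabled_key, sev_es_foo, __sev_es_foo), the definition site and init hook below are assumptions for illustration:

    #include <linux/jump_label.h>

    /* Header: the declaration plus the always-inline wrapper quoted above,
     * which compiles to a NOP at every call site until the key is enabled.
     */
    DECLARE_STATIC_KEY_FALSE(sev_es_enabled_key);
    void __sev_es_foo(void);

    static __always_inline void sev_es_foo(void)
    {
            if (static_branch_unlikely(&sev_es_enabled_key))
                    __sev_es_foo();
    }

    /* sev-es.c (illustrative): define the key and flip it once the kernel
     * knows it is running as an SEV-ES guest, so only such guests ever take
     * the out-of-line call.
     */
    DEFINE_STATIC_KEY_FALSE(sev_es_enabled_key);

    void __init sev_es_enable_hooks(void)
    {
            if (sev_es_guest_detected())    /* placeholder for the real check */
                    static_branch_enable(&sev_es_enabled_key);
    }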
2020 Sep 14
5
[PATCH] mm: remove extra ZONE_DEVICE struct page refcount
...include/linux/mm.h b/include/linux/mm.h index 517751310dd2..5a82037a4b26 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -1093,34 +1093,6 @@ static inline bool is_zone_device_page(const struct page *page) #ifdef CONFIG_DEV_PAGEMAP_OPS void free_devmap_managed_page(struct page *page); DECLARE_STATIC_KEY_FALSE(devmap_managed_key); - -static inline bool page_is_devmap_managed(struct page *page) -{ - if (!static_branch_unlikely(&devmap_managed_key)) - return false; - if (!is_zone_device_page(page)) - return false; - switch (page->pgmap->type) { - case MEMORY_DEVICE_PRIVATE: - case MEMORY_DEVICE...
2020 Sep 26
1
[PATCH 2/2] mm: remove extra ZONE_DEVICE struct page refcount
.....2159c2477aa3 100644 > --- a/include/linux/mm.h > +++ b/include/linux/mm.h > @@ -1092,39 +1092,6 @@ static inline bool is_zone_device_page(const struct page *page) > } > #endif > > -#ifdef CONFIG_DEV_PAGEMAP_OPS > -void free_devmap_managed_page(struct page *page); > -DECLARE_STATIC_KEY_FALSE(devmap_managed_key); > - > -static inline bool page_is_devmap_managed(struct page *page) > -{ > - if (!static_branch_unlikely(&devmap_managed_key)) > - return false; > - if (!is_zone_device_page(page)) > - return false; > - switch (page->pgmap->type) { > - cas...
2019 Sep 06
0
[vhost:linux-next 13/15] htmldocs: mm/page_alloc.c:2207: warning: Function parameter or member 'order' not described in 'free_reported_page'
...'sk_validate_xmit_skb' not described in 'sock' include/net/sock.h:515: warning: Function parameter or member 'sk_bpf_storage' not described in 'sock' include/net/sock.h:2439: warning: Function parameter or member 'tcp_rx_skb_cache_key' not described in 'DECLARE_STATIC_KEY_FALSE' include/net/sock.h:2439: warning: Excess function parameter 'sk' description in 'DECLARE_STATIC_KEY_FALSE' include/net/sock.h:2439: warning: Excess function parameter 'skb' description in 'DECLARE_STATIC_KEY_FALSE' include/linux/netdevice.h:2040: warnin...
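The sock.h warnings above are the usual symptom of kernel-doc binding a function-style comment to the next declaration, which here is a macro it cannot parse. Reconstructed shape only (the helper name is not identifiable from the warning text):

    /**
     * some_sock_helper - illustrative; the comment really documents a helper
     *                    taking @sk and @skb
     * @sk: socket
     * @skb: buffer
     */
    DECLARE_STATIC_KEY_FALSE(tcp_rx_skb_cache_key);

kernel-doc treats DECLARE_STATIC_KEY_FALSE(tcp_rx_skb_cache_key) as a function with one parameter, so it reports tcp_rx_skb_cache_key as not described and @sk/@skb as excess parameters, exactly as in the log.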
2020 Jul 15
0
[PATCH v4 45/75] x86/sev-es: Adjust #VC IST Stack on entering NMI handler
On Wed, Jul 15, 2020 at 11:47:02AM +0200, Peter Zijlstra wrote: > On Tue, Jul 14, 2020 at 02:08:47PM +0200, Joerg Roedel wrote: > DECLARE_STATIC_KEY_FALSE(sev_es_enabled_key); > > static __always_inline void sev_es_foo() > { > if (static_branch_unlikely(&sev_es_enabled_key)) > __sev_es_foo(); > } > > So that normal people will only see an extra NOP? Yes, that is a good idea, I will use a static key for these cases....
2019 Jun 13
0
[PATCH 09/22] memremap: lift the devmap_enable manipulation into devm_memremap_pages
...00644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -921,8 +921,6 @@ static inline bool is_zone_device_page(const struct page *page) #endif #ifdef CONFIG_DEV_PAGEMAP_OPS -void dev_pagemap_get_ops(void); -void dev_pagemap_put_ops(void); void __put_devmap_managed_page(struct page *page); DECLARE_STATIC_KEY_FALSE(devmap_managed_key); static inline bool put_devmap_managed_page(struct page *page) @@ -969,14 +967,6 @@ static inline bool is_pci_p2pdma_page(const struct page *page) #endif /* CONFIG_PCI_P2PDMA */ #else /* CONFIG_DEV_PAGEMAP_OPS */ -static inline void dev_pagemap_get_ops(void) -{ -} - -static...
2019 Jun 17
0
[PATCH 10/25] memremap: lift the devmap_enable manipulation into devm_memremap_pages
...00644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -921,8 +921,6 @@ static inline bool is_zone_device_page(const struct page *page) #endif #ifdef CONFIG_DEV_PAGEMAP_OPS -void dev_pagemap_get_ops(void); -void dev_pagemap_put_ops(void); void __put_devmap_managed_page(struct page *page); DECLARE_STATIC_KEY_FALSE(devmap_managed_key); static inline bool put_devmap_managed_page(struct page *page) @@ -969,14 +967,6 @@ static inline bool is_pci_p2pdma_page(const struct page *page) #endif /* CONFIG_PCI_P2PDMA */ #else /* CONFIG_DEV_PAGEMAP_OPS */ -static inline void dev_pagemap_get_ops(void) -{ -} - -static...
2019 Jun 26
0
[PATCH 11/25] memremap: lift the devmap_enable manipulation into devm_memremap_pages
...00644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -932,8 +932,6 @@ static inline bool is_zone_device_page(const struct page *page) #endif #ifdef CONFIG_DEV_PAGEMAP_OPS -void dev_pagemap_get_ops(void); -void dev_pagemap_put_ops(void); void __put_devmap_managed_page(struct page *page); DECLARE_STATIC_KEY_FALSE(devmap_managed_key); static inline bool put_devmap_managed_page(struct page *page) @@ -973,14 +971,6 @@ static inline bool is_pci_p2pdma_page(const struct page *page) #endif /* CONFIG_PCI_P2PDMA */ #else /* CONFIG_DEV_PAGEMAP_OPS */ -static inline void dev_pagemap_get_ops(void) -{ -} - -static...
2019 Jun 26
1
[PATCH 11/25] memremap: lift the devmap_enable manipulation into devm_memremap_pages
...lude/linux/mm.h > @@ -932,8 +932,6 @@ static inline bool is_zone_device_page(const struct page *page) > #endif > > #ifdef CONFIG_DEV_PAGEMAP_OPS > -void dev_pagemap_get_ops(void); > -void dev_pagemap_put_ops(void); > void __put_devmap_managed_page(struct page *page); > DECLARE_STATIC_KEY_FALSE(devmap_managed_key); > static inline bool put_devmap_managed_page(struct page *page) > @@ -973,14 +971,6 @@ static inline bool is_pci_p2pdma_page(const struct page *page) > #endif /* CONFIG_PCI_P2PDMA */ > > #else /* CONFIG_DEV_PAGEMAP_OPS */ > -static inline void dev_pagemap...
2020 Sep 25
6
[RFC PATCH v2 0/2] mm: remove extra ZONE_DEVICE struct page refcount
Matthew Wilcox, Ira Weiny, and others have complained that ZONE_DEVICE struct page reference counting is ugly because they are "free" when the reference count is one instead of zero. This leads to explicit checks for ZONE_DEVICE pages in places like put_page(), GUP, THP splitting, and page migration which have to adjust the expected reference count when determining if the page is
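The special-casing the cover letter complains about looks roughly like this, pieced together from the hunks quoted in the other results on this page (the exact set of pgmap types and the put_page() body are approximate, not verbatim from any one kernel version):

    static inline bool page_is_devmap_managed(struct page *page)
    {
            if (!static_branch_unlikely(&devmap_managed_key))
                    return false;
            if (!is_zone_device_page(page))
                    return false;
            switch (page->pgmap->type) {
            case MEMORY_DEVICE_PRIVATE:     /* further types elided in the excerpts */
                    return true;
            default:
                    return false;
            }
    }

    /* put_page() has to detour for these pages because they count as free at
     * refcount == 1 rather than 0, which is the asymmetry the series removes.
     */
    static inline void put_page(struct page *page)
    {
            page = compound_head(page);
            if (page_is_devmap_managed(page)) {
                    put_devmap_managed_page(page);
                    return;
            }
            if (put_page_testzero(page))
                    __put_page(page);
    }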
2020 Feb 07
11
[PATCH 0/6] drm: Provide a simple encoder
Many DRM drivers implement an encoder with an empty implementation. This patchset adds drm_simple_encoder_init() and drm_simple_encoder_create(), which can be used by drivers instead. Except for the destroy callback, the simple encoder's implementation is empty. The patchset also converts 4 encoder instances to use the simple-encoder helpers. But there are at least 11 other drivers which can
2020 Sep 25
0
[PATCH 2/2] mm: remove extra ZONE_DEVICE struct page refcount
.../mm.h b/include/linux/mm.h index b2f370f0b420..2159c2477aa3 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -1092,39 +1092,6 @@ static inline bool is_zone_device_page(const struct page *page) } #endif -#ifdef CONFIG_DEV_PAGEMAP_OPS -void free_devmap_managed_page(struct page *page); -DECLARE_STATIC_KEY_FALSE(devmap_managed_key); - -static inline bool page_is_devmap_managed(struct page *page) -{ - if (!static_branch_unlikely(&devmap_managed_key)) - return false; - if (!is_zone_device_page(page)) - return false; - switch (page->pgmap->type) { - case MEMORY_DEVICE_PRIVATE: - case MEMORY_DEVICE...
2020 Oct 01
0
[RFC PATCH v3 2/2] mm: remove extra ZONE_DEVICE struct page refcount
.../mm.h b/include/linux/mm.h index 27c64d0d7520..3d71a820ae38 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -1100,39 +1100,6 @@ static inline bool is_zone_device_page(const struct page *page) } #endif -#ifdef CONFIG_DEV_PAGEMAP_OPS -void free_devmap_managed_page(struct page *page); -DECLARE_STATIC_KEY_FALSE(devmap_managed_key); - -static inline bool page_is_devmap_managed(struct page *page) -{ - if (!static_branch_unlikely(&devmap_managed_key)) - return false; - if (!is_zone_device_page(page)) - return false; - switch (page->pgmap->type) { - case MEMORY_DEVICE_PRIVATE: - case MEMORY_DEVICE...
2020 Jul 03
7
[PATCH v2 0/6] powerpc: queued spinlocks and rwlocks
v2 is updated to account for feedback from Will, Peter, and Waiman (thank you), and trims off a couple of RFC and unrelated patches. Thanks, Nick Nicholas Piggin (6): powerpc/powernv: must include hvcall.h to get PAPR defines powerpc/pseries: move some PAPR paravirt functions to their own file powerpc: move spinlock implementation to simple_spinlock powerpc/64s: implement queued
2020 Jul 24
8
[PATCH v4 0/6] powerpc: queued spinlocks and rwlocks
Updated with everybody's feedback (thanks all), and more performance results. What I've found is I might have been measuring the worst load point for the paravirt case, and by looking at a range of loads it's clear that queued spinlocks are overall better even on PV, doubly so when you look at the generally much improved worst case latencies. I have defaulted it to N even though