search for: max_nod

Displaying 20 results from an estimated 43 matches for "max_nod".

2012 Jul 04
53
[PATCH 00 of 10 v3] Automatic NUMA placement for xl
Hello, Third version of the NUMA placement series for Xen 4.2. All the comments received during v2's review have been addressed (more details in the individual changelogs). The most notable changes are the following: - the libxl_cpumap --> libxl_bitmap renaming has been rebased on top of the recent patches that allow us to allocate bitmaps of different sizes; - the heuristics for deciding
2015 Mar 16
0
[PATCH 8/9] qspinlock: Generic paravirt support
...; * Peter Zijlstra <peterz at infradead.org> */ + +#ifndef _GEN_PV_LOCK_SLOWPATH + #include <linux/smp.h> #include <linux/bug.h> #include <linux/cpumask.h> @@ -65,13 +68,21 @@ #include "mcs_spinlock.h" +#ifdef CONFIG_PARAVIRT_SPINLOCKS +#define MAX_NODES 8 +#else +#define MAX_NODES 4 +#endif + /* * Per-CPU queue node structures; we can never have more than 4 nested * contexts: task, softirq, hardirq, nmi. * * Exactly fits one 64-byte cacheline on a 64-bit architecture. + * + * PV doubles the storage and uses the second cacheline for PV s...
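The excerpt above shows the hunk most of these hits match: MAX_NODES grows from 4 to 8 when CONFIG_PARAVIRT_SPINLOCKS is set, so the per-CPU MCS node array gains a second 64-byte cacheline for PV state. A minimal standalone sketch of that sizing arithmetic follows (the struct fields mirror what the comments in the hunk describe and are an assumption here, not copied kernel code):

/* Standalone sketch of the sizing described in the hunk above: four 16-byte
 * nodes fill one 64-byte cacheline, and the PV case doubles MAX_NODES so a
 * second cacheline becomes available for PV-specific state. */
#include <stdio.h>

#define CONFIG_PARAVIRT_SPINLOCKS 1          /* comment this out for the native sizing */

#ifdef CONFIG_PARAVIRT_SPINLOCKS
#define MAX_NODES 8
#else
#define MAX_NODES 4
#endif

struct mcs_node {
    struct mcs_node *next;                   /* next waiter in the MCS queue */
    int locked;                              /* set when the lock is handed over */
    int count;                               /* nesting depth: task, softirq, hardirq, nmi */
};

static struct mcs_node nodes[MAX_NODES];     /* one such array per CPU in the real code */

int main(void)
{
    printf("node size      : %zu bytes\n", sizeof(struct mcs_node));
    printf("MAX_NODES      : %d\n", MAX_NODES);
    printf("array footprint: %zu bytes = %zu cacheline(s) of 64 bytes\n",
           sizeof(nodes), sizeof(nodes) / 64);
    return 0;
}

On a 64-bit build this reports 16-byte nodes, one cacheline for the native case and two for the PV case, which is the trade-off the patch comment spells out.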
2015 Apr 08
2
[PATCH v15 16/16] unfair qspinlock: a queue based unfair lock
...defined(_GEN_PV_LOCK_SLOWPATH) || defined(_GEN_UNFAIR_LOCK_SLOWPATH) +#define _GEN_LOCK_SLOWPATH +#endif + +#ifndef _GEN_LOCK_SLOWPATH #include <linux/smp.h> #include <linux/bug.h> @@ -68,12 +72,6 @@ #include "mcs_spinlock.h" -#ifdef CONFIG_PARAVIRT_SPINLOCKS -#define MAX_NODES 8 -#else -#define MAX_NODES 4 -#endif - /* * Per-CPU queue node structures; we can never have more than 4 nested * contexts: task, softirq, hardirq, nmi. @@ -81,7 +79,9 @@ * Exactly fits one 64-byte cacheline on a 64-bit architecture. * * PV doubles the storage and uses the second cach...
2015 Apr 07
0
[PATCH v15 09/15] pvqspinlock: Implement simple paravirt support for the qspinlock
...; * Peter Zijlstra <peterz at infradead.org> */ + +#ifndef _GEN_PV_LOCK_SLOWPATH + #include <linux/smp.h> #include <linux/bug.h> #include <linux/cpumask.h> @@ -65,13 +68,21 @@ #include "mcs_spinlock.h" +#ifdef CONFIG_PARAVIRT_SPINLOCKS +#define MAX_NODES 8 +#else +#define MAX_NODES 4 +#endif + /* * Per-CPU queue node structures; we can never have more than 4 nested * contexts: task, softirq, hardirq, nmi. * * Exactly fits one 64-byte cacheline on a 64-bit architecture. + * + * PV doubles the storage and uses the second cacheline for PV s...
2014 Jun 15
0
[PATCH 10/11] qspinlock: Paravirt support
...: linux-2.6/kernel/locking/qspinlock.c =================================================================== --- linux-2.6.orig/kernel/locking/qspinlock.c +++ linux-2.6/kernel/locking/qspinlock.c @@ -56,13 +56,33 @@ #include "mcs_spinlock.h" +#ifdef CONFIG_PARAVIRT_SPINLOCKS + +#define MAX_NODES 8 + +static inline bool pv_enabled(void) +{ + return static_key_false(&paravirt_spinlocks_enabled); +} +#else /* !PARAVIRT_SPINLOCKS */ + +#define MAX_NODES 4 + +static inline bool pv_enabled(void) +{ + return false; +} +#endif /* PARAVIRT_SPINLOCKS */ + /* * Per-CPU queue node structures;...
2015 Apr 24
0
[PATCH v16 08/14] pvqspinlock: Implement simple paravirt support for the qspinlock
...; * Peter Zijlstra <peterz at infradead.org> */ + +#ifndef _GEN_PV_LOCK_SLOWPATH + #include <linux/smp.h> #include <linux/bug.h> #include <linux/cpumask.h> @@ -65,13 +68,21 @@ #include "mcs_spinlock.h" +#ifdef CONFIG_PARAVIRT_SPINLOCKS +#define MAX_NODES 8 +#else +#define MAX_NODES 4 +#endif + /* * Per-CPU queue node structures; we can never have more than 4 nested * contexts: task, softirq, hardirq, nmi. * * Exactly fits one 64-byte cacheline on a 64-bit architecture. + * + * PV doubles the storage and uses the second cacheline for PV s...
2015 May 04
1
[PATCH v16 08/14] pvqspinlock: Implement simple paravirt support for the qspinlock
...; * Peter Zijlstra <peterz at infradead.org> */ + +#ifndef _GEN_PV_LOCK_SLOWPATH + #include <linux/smp.h> #include <linux/bug.h> #include <linux/cpumask.h> @@ -65,13 +68,21 @@ #include "mcs_spinlock.h" +#ifdef CONFIG_PARAVIRT_SPINLOCKS +#define MAX_NODES 8 +#else +#define MAX_NODES 4 +#endif + /* * Per-CPU queue node structures; we can never have more than 4 nested * contexts: task, softirq, hardirq, nmi. * * Exactly fits one 64-byte cacheline on a 64-bit architecture. + * + * PV doubles the storage and uses the second cacheline for PV s...
2017 Sep 06
4
[PATCH v2 0/2] guard virt_spin_lock() with a static key
With virt_spin_lock() guarded by a static key, the bare-metal case can be optimized by patching the call away completely. A kernel running as a guest can decide whether to use paravirtualized spinlocks, the current fallback to the unfair test-and-set scheme, or to mimic the bare-metal behavior. V2: - use static key instead of making virt_spin_lock() a pvops function Juergen Gross
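A userspace analogue of that guard may make the later excerpts easier to read (an illustration under stated assumptions, not the kernel code: an ordinary bool stands in for the jump-label key that the kernel patches at boot, and the lock word and helpers are invented for the sketch):

/* Analogue of a static-key-guarded virt_spin_lock(): if the "key" is off,
 * return false so the caller continues with the native queued slowpath;
 * otherwise fall back to an unfair test-and-set spin, as the cover letter
 * above describes for guests without PV spinlocks. */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static bool virt_spin_lock_enabled = true;      /* stand-in "static key": true in a guest */

struct qspinlock { atomic_int val; };           /* toy lock word: 0 = unlocked, 1 = locked */

static bool virt_spin_lock(struct qspinlock *lock)
{
    if (!virt_spin_lock_enabled)                /* in the kernel this branch is patched away */
        return false;                           /* caller takes the native queued slowpath */

    int expected;
    do {
        while (atomic_load(&lock->val) != 0)
            ;                                   /* spin until the lock looks free */
        expected = 0;
    } while (!atomic_compare_exchange_strong(&lock->val, &expected, 1));

    return true;                                /* lock taken via the test-and-set path */
}

int main(void)
{
    struct qspinlock lock = { 0 };
    printf("virt path used: %s\n", virt_spin_lock(&lock) ? "yes" : "no");
    return 0;
}

The appeal of the static key, as the cover letter puts it, is that on bare metal the call can be patched away completely instead of being left behind as a pvops hook.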
2014 May 30
0
[PATCH v11 14/16] pvqspinlock: Add qspinlock para-virtualization support
...l/locking/qspinlock.c b/kernel/locking/qspinlock.c index 8deedcf..adedd75 100644 --- a/kernel/locking/qspinlock.c +++ b/kernel/locking/qspinlock.c @@ -60,12 +60,17 @@ * Check the pending bit spinning threshold only if PV qspinlock is enabled */ #define PSPIN_THRESHOLD (1 << 10) -#define MAX_NODES 4 +/* + * We need to double the number of per-cpu mcs_spinlock structures to hold + * additional fields specific to para-virtualization support. + */ #ifdef CONFIG_PARAVIRT_SPINLOCKS -#define pv_qspinlock_enabled() static_key_false(&paravirt_spinlocks_enabled) +# define MAX_NODES 8 +# de...
2015 Mar 18
2
[PATCH 8/9] qspinlock: Generic paravirt support
...> */ > + > +#ifndef _GEN_PV_LOCK_SLOWPATH > + > #include<linux/smp.h> > #include<linux/bug.h> > #include<linux/cpumask.h> > @@ -65,13 +68,21 @@ > > #include "mcs_spinlock.h" > > +#ifdef CONFIG_PARAVIRT_SPINLOCKS > +#define MAX_NODES 8 > +#else > +#define MAX_NODES 4 > +#endif > + > /* > * Per-CPU queue node structures; we can never have more than 4 nested > * contexts: task, softirq, hardirq, nmi. > * > * Exactly fits one 64-byte cacheline on a 64-bit architecture. > + * > + * P...
2017 Sep 06
0
[PATCH v2 1/2] paravirt/locks: use new static key for controlling call of virt_spin_lock()
...line(me); + native_pv_lock_init(); } void __init native_smp_cpus_done(unsigned int max_cpus) diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c index 294294c71ba4..838d235b87ef 100644 --- a/kernel/locking/qspinlock.c +++ b/kernel/locking/qspinlock.c @@ -76,6 +76,10 @@ #define MAX_NODES 4 #endif +#ifdef CONFIG_PARAVIRT +DEFINE_STATIC_KEY_TRUE(virt_spin_lock_key); +#endif + /* * Per-CPU queue node structures; we can never have more than 4 nested * contexts: task, softirq, hardirq, nmi. -- 2.12.3
2014 Oct 16
2
[PATCH v12 09/11] pvqspinlock, x86: Add para-virtualization support
...XPORT_SYMBOL(pv_lock_ops); diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c index 1c1926a..1662dbd 100644 --- a/kernel/locking/qspinlock.c +++ b/kernel/locking/qspinlock.c @@ -63,13 +63,33 @@ #include "mcs_spinlock.h" +#ifdef CONFIG_PARAVIRT_SPINLOCKS + +#define MAX_NODES 8 + +static inline bool pv_enabled(void) +{ + return static_key_false(&paravirt_spinlocks_enabled); +} +#else /* !PARAVIRT_SPINLOCKS */ + +#define MAX_NODES 4 + +static inline bool pv_enabled(void) +{ + return false; +} +#endif /* PARAVIRT_SPINLOCKS */ + /* * Per-CPU queue node structures;...
2017 Sep 06
2
[PATCH v2 1/2] paravirt/locks: use new static key for controlling call of virt_spin_lock()
...> > void __init native_smp_cpus_done(unsigned int max_cpus) > diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c > index 294294c71ba4..838d235b87ef 100644 > --- a/kernel/locking/qspinlock.c > +++ b/kernel/locking/qspinlock.c > @@ -76,6 +76,10 @@ > #define MAX_NODES 4 > #endif > > +#ifdef CONFIG_PARAVIRT > +DEFINE_STATIC_KEY_TRUE(virt_spin_lock_key); > +#endif > + > /* > * Per-CPU queue node structures; we can never have more than 4 nested > * contexts: task, softirq, hardirq, nmi. Acked-by: Waiman Long <longman at redh...