Christian Borntraeger
2016-Oct-04 10:17 UTC
[PATCH v2 1/1] s390/spinlock: Provide vcpu_is_preempted
On 09/30/2016 08:35 AM, Pan Xinhui wrote:
>
>
> On 2016/9/30 13:52, Boqun Feng wrote:
>> On Fri, Sep 30, 2016 at 12:49:52PM +0800, Pan Xinhui wrote:
>>>
>>>
>>> On 2016/9/29 23:51, Christian Borntraeger wrote:
>>>> this implements the s390 backend for commit
>>>> "kernel/sched: introduce vcpu preempted check interface"
>>>> by reworking the existing smp_vcpu_scheduled into
>>>> arch_vcpu_is_preempted. We can then also get rid of the
>>>> local cpu_is_preempted function by moving the
>>>> CIF_ENABLED_WAIT test into arch_vcpu_is_preempted.
>>>>
>>>> Signed-off-by: Christian Borntraeger <borntraeger at de.ibm.com>
>>>> ---
>>>
>>> hi, Christian
>>>     thanks for your patch!
>>>
>>>>  arch/s390/include/asm/spinlock.h |  3 +++
>>>>  arch/s390/kernel/smp.c           |  9 +++++++--
>>>>  arch/s390/lib/spinlock.c         | 25 ++++++++-----------------
>>>>  3 files changed, 18 insertions(+), 19 deletions(-)
>>>>
>>>> diff --git a/arch/s390/include/asm/spinlock.h b/arch/s390/include/asm/spinlock.h
>>>> index 63ebf37..e16e02f 100644
>>>> --- a/arch/s390/include/asm/spinlock.h
>>>> +++ b/arch/s390/include/asm/spinlock.h
>>>> @@ -21,6 +21,9 @@ _raw_compare_and_swap(unsigned int *lock, unsigned int old, unsigned int new)
>>>>  	return __sync_bool_compare_and_swap(lock, old, new);
>>>>  }
>>>>
>>>> +bool arch_vcpu_is_preempted(int cpu);
>>>> +#define vcpu_is_preempted arch_vcpu_is_preempted
>>>> +
>>>>  /*
>>>>   * Simple spin lock operations.  There are two variants, one clears IRQ's
>>>>   * on the local processor, one does not.
>>>> diff --git a/arch/s390/kernel/smp.c b/arch/s390/kernel/smp.c
>>>> index 7b89a75..4aadd16 100644
>>>> --- a/arch/s390/kernel/smp.c
>>>> +++ b/arch/s390/kernel/smp.c
>>>> @@ -376,10 +376,15 @@ int smp_find_processor_id(u16 address)
>>>>  	return -1;
>>>>  }
>>>>
>>>> -int smp_vcpu_scheduled(int cpu)
>>>
>>> root at ltcalpine2-lp13:~/linux# git grep -wn smp_vcpu_scheduled arch/s390/
>>> arch/s390/include/asm/smp.h:34:extern int smp_vcpu_scheduled(int cpu);
>>> arch/s390/include/asm/smp.h:56:static inline int smp_vcpu_scheduled(int cpu) { return 1; }
>>> arch/s390/kernel/smp.c:371:int smp_vcpu_scheduled(int cpu)
>>> arch/s390/lib/spinlock.c:44:	if (smp_vcpu_scheduled(cpu))
>>>
>>>> +bool arch_vcpu_is_preempted(int cpu)
>>>>  {
>>>> -	return pcpu_running(pcpu_devices + cpu);
>>>> +	if (test_cpu_flag_of(CIF_ENABLED_WAIT, cpu))
>>>> +		return false;
>>>> +	if (pcpu_running(pcpu_devices + cpu))
>>>> +		return false;
>>>
>>> I saw smp_vcpu_scheduled() always returns true on a !SMP system.
>>>
>>> maybe we can do something similar. like below
>>>
>>> #ifndef CONFIG_SMP
>>> static inline bool arch_vcpu_is_preempted(int cpu) { return !test_cpu_flag_of(CIF_ENABLED_WAIT, cpu); }
>>> #else
>>> ...
>>>
>>> but I can't help thinking that if this is a !SMP system, maybe we could only
>>>
>>> #ifndef CONFIG_SMP
>>> static inline bool arch_vcpu_is_preempted(int cpu) { return false; }
>>> #else
>>
>> Why do we need a vcpu_is_preempted() implementation for UP? Where will
>> you use it?
>>
> yep, I also wonder that :)
>
> But there is a definition of smp_vcpu_scheduled() for the !SMP kernel.
> So I am a little worried that some code has included this spinlock.h for a UP kernel as well.
>
> Hi, Christian
> Could you help confirm that your patch works on UP? :)

My patch as is seems to work fine for !SMP. So it looks like the extra define
is not necessary and we could simply go with v2.
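
[Editor's note: the quoted hunk from arch/s390/kernel/smp.c is cut off before the
end of the function, so for readers following along here is a minimal sketch of
what the reworked helper plausibly looks like once complete. The trailing
"return true" fallthrough and the comments are assumptions inferred from the two
early "return false" checks, not something shown in the quoted diff.]

/* arch/s390/kernel/smp.c -- sketch only, assuming a "preempted" fallthrough */
bool arch_vcpu_is_preempted(int cpu)
{
	/* A vCPU sitting in enabled wait is idle, not preempted. */
	if (test_cpu_flag_of(CIF_ENABLED_WAIT, cpu))
		return false;
	/* A vCPU currently backed by a host CPU is running. */
	if (pcpu_running(pcpu_devices + cpu))
		return false;
	/* Otherwise assume the hypervisor has scheduled the vCPU out. */
	return true;
}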
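
[Editor's note: as background for the UP question, the generic side of the
"kernel/sched: introduce vcpu preempted check interface" series is expected to
supply an "always false" fallback whenever an architecture does not define the
hook, so a UP kernel that never defines vcpu_is_preempted still compiles. The
snippet below is a sketch under that assumption, together with the kind of
optimistic-spinning loop that consumes the hint; lock_owner_running() is a
placeholder name, and the exact upstream wording may differ.]

/* Generic fallback (sketch): used when the arch spinlock.h defines nothing. */
#ifndef vcpu_is_preempted
# define vcpu_is_preempted(cpu)	false
#endif

/* Consumer sketch: a spinner can give up early when the lock owner's vCPU is
 * no longer running on a host CPU, instead of burning cycles waiting for it. */
while (lock_owner_running(lock, owner)) {
	if (need_resched() || vcpu_is_preempted(task_cpu(owner)))
		break;
	cpu_relax();
}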