Wei, Gang
2010-Jun-17 07:38 UTC
[Xen-devel] [PATCH6/6] cpuidle: redefine cpumask_lock as rwlock_t
cpuidle: redefine cpumask_lock as rwlock_t
It is safe for multiple CPUs to clear the hpet channel cpumask
simultaneously, so we can change cpumask_lock to an rwlock_t.
Signed-off-by: Wei Gang <gang.wei@intel.com>
diff -r b3acfa50b013 xen/arch/x86/hpet.c
--- a/xen/arch/x86/hpet.c Sun Jun 13 07:45:17 2010 +0800
+++ b/xen/arch/x86/hpet.c Sun Jun 13 07:46:32 2010 +0800
@@ -34,7 +34,7 @@ struct hpet_event_channel
     int shift;
     s_time_t next_event;
     cpumask_t cpumask;
-    spinlock_t cpumask_lock;
+    rwlock_t cpumask_lock;
     spinlock_t lock;
     void (*event_handler)(struct hpet_event_channel *);
@@ -197,7 +197,7 @@ again:
     /* find all expired events */
     for_each_cpu_mask(cpu, ch->cpumask)
     {
-        spin_lock_irq(&ch->cpumask_lock);
+        write_lock_irq(&ch->cpumask_lock);
         if ( cpumask_test_cpu(cpu, ch->cpumask) )
         {
@@ -207,7 +207,7 @@ again:
                 next_event = per_cpu(timer_deadline_end, cpu);
         }
-        spin_unlock_irq(&ch->cpumask_lock);
+        write_unlock_irq(&ch->cpumask_lock);
     }
     /* wakeup the cpus which have an expired event. */
@@ -580,7 +580,7 @@ void hpet_broadcast_init(void)
         hpet_events[i].next_event = STIME_MAX;
         hpet_events[i].event_handler = handle_hpet_broadcast;
         spin_lock_init(&hpet_events[i].lock);
-        spin_lock_init(&hpet_events[i].cpumask_lock);
+        rwlock_init(&hpet_events[i].cpumask_lock);
     }
     return;
@@ -615,7 +615,7 @@ void hpet_broadcast_init(void)
     legacy_hpet_event.idx = 0;
     legacy_hpet_event.flags = 0;
     spin_lock_init(&legacy_hpet_event.lock);
-    spin_lock_init(&legacy_hpet_event.cpumask_lock);
+    rwlock_init(&legacy_hpet_event.cpumask_lock);
     if ( !force_hpet_broadcast )
         pv_rtc_handler = handle_rtc_once;
@@ -692,9 +692,9 @@ void hpet_broadcast_exit(void)
     if ( !reprogram_timer(this_cpu(timer_deadline_start)) )
         raise_softirq(TIMER_SOFTIRQ);
-    spin_lock_irq(&ch->cpumask_lock);
+    read_lock_irq(&ch->cpumask_lock);
     cpu_clear(cpu, ch->cpumask);
-    spin_unlock_irq(&ch->cpumask_lock);
+    read_unlock_irq(&ch->cpumask_lock);
     if ( ch != &legacy_hpet_event )
     {
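Why the inverted lock roles above are safe can be seen in a standalone analogue. This is only a sketch, not part of the patch: it substitutes POSIX rwlocks and C11 atomics for Xen's rwlock_t and cpu_clear(), and all names in it are made up. Each "CPU" clears its own bit while holding the lock for reading, since atomic clears of distinct bits commute; the scanner takes the lock for writing so that no bit can vanish mid-walk.

/* Standalone analogue of the patch's locking scheme (illustrative only). */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define NCPUS 8

static pthread_rwlock_t cpumask_lock = PTHREAD_RWLOCK_INITIALIZER;
static atomic_ulong cpumask = (1UL << NCPUS) - 1;

/* Like hpet_broadcast_exit(): each CPU clears only its own bit. */
static void *clear_self(void *arg)
{
    unsigned long cpu = (unsigned long)arg;

    pthread_rwlock_rdlock(&cpumask_lock);      /* shared: clears commute */
    atomic_fetch_and(&cpumask, ~(1UL << cpu)); /* atomic, like cpu_clear() */
    pthread_rwlock_unlock(&cpumask_lock);
    return NULL;
}

/* Like handle_hpet_broadcast(): walk the mask with a stable view. */
static void scan_mask(void)
{
    pthread_rwlock_wrlock(&cpumask_lock);      /* exclusive: no concurrent clears */
    printf("mask during scan: %#lx\n", atomic_load(&cpumask));
    pthread_rwlock_unlock(&cpumask_lock);
}

int main(void)
{
    pthread_t t[NCPUS];

    for ( unsigned long i = 0; i < NCPUS; i++ )
        pthread_create(&t[i], NULL, clear_self, (void *)i);
    scan_mask();
    for ( unsigned long i = 0; i < NCPUS; i++ )
        pthread_join(t[i], NULL);
    printf("final mask: %#lx\n", atomic_load(&cpumask));
    return 0;
}

The payoff is on the exit path: many CPUs can leave deep sleep at once, and with the rwlock they no longer serialize on clearing their mask bits.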
Jan Beulich
2010-Jun-17 08:29 UTC
Re: [Xen-devel] [PATCH6/6] cpuidle: redefine cpumask_lock as rwlock_t
>>> On 17.06.10 at 09:38, "Wei, Gang" <gang.wei@intel.com> wrote:
> cpuidle: redefine cpumask_lock as rwlock_t
>
> It is safe for multiple CPUs to clear the hpet channel cpumask
> simultaneously, so we can change cpumask_lock to an rwlock_t.

Now this I think really needs a comment in the code: you use a read lock on the writer side and a write lock on the reader side - quite the opposite of normal behavior.

Jan
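A comment of the kind asked for here might look like the following, placed on the field in the struct from the first hunk (wording illustrative; not taken from any posted follow-up):

struct hpet_event_channel
{
    int shift;
    s_time_t next_event;
    cpumask_t cpumask;
    /*
     * cpumask_lock is deliberately used "backwards": CPUs clearing
     * their own bit in hpet_broadcast_exit() take it for reading,
     * because cpu_clear() is atomic and concurrent clears do not
     * interfere, while the scan over the whole mask in
     * handle_hpet_broadcast() takes it for writing, so that no bit
     * can be cleared under its feet during the walk.
     */
    rwlock_t cpumask_lock;
    spinlock_t lock;
    void (*event_handler)(struct hpet_event_channel *);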
Wei, Gang
2010-Jun-17 08:54 UTC
RE: [Xen-devel] [PATCH6/6] cpuidle: redefine cpumask_lock as rwlock_t
On Thursday, 2010-6-17 4:30 PM, Jan Beulich wrote:
>>>> On 17.06.10 at 09:38, "Wei, Gang" <gang.wei@intel.com> wrote:
>> cpuidle: redefine cpumask_lock as rwlock_t
>>
>> It is safe for multiple CPUs to clear the hpet channel cpumask
>> simultaneously, so we can change cpumask_lock to an rwlock_t.
>
> Now this I think really needs a comment in the code: you use a read
> lock on the writer side and a write lock on the reader side - quite
> the opposite of normal behavior.

Good suggestion, I will add it.

Jimmy