Noticed this a few days ago while working on the hvm-guest-time
on xen-system-time patch, but forgot about it...

Throughout xen/arch/x86/hvm/vpt.c, there are uses of a spinlock
called tm_lock. But it appears that this spinlock is declared
and used as part of a per-vcpu data structure. So is this
somehow protecting against vcpu re-entrancy (didn't think that
could happen) or is it supposed to be locking out one vcpu
against another (and not doing the job because each vcpu has
a separate lock)? Or am I misunderstanding something entirely?

If this should be a domain-wide lock, I'll spin a patch.
(And it might explain some of the weirder time problems?)

Thanks,
Dan

==================================
Thanks... for the memory
I really could use more / My throughput's on the floor
The balloon is flat / My swap disk's fat / I've OOM's in store
Overcommitted so much
(with apologies to the late great Bob Hope)
Locking is required to prevent multiple concurrent accesses to a single
periodic timer. This is actually implemented at vcpu granularity to avoid
lock juggling in functions which walk lists of timers. Since those lists
are held per-vcpu, the convenient locking scope is the vcpu.

 -- Keir

On 23/5/08 21:27, "Dan Magenheimer" <dan.magenheimer@oracle.com> wrote:

> Noticed this a few days ago while working on the hvm-guest-time
> on xen-system-time patch, but forgot about it...
>
> Throughout xen/arch/x86/hvm/vpt.c, there are uses of a spinlock
> called tm_lock. But it appears that this spinlock is declared
> and used as part of a per-vcpu data structure. So is this
> somehow protecting against vcpu re-entrancy (didn't think that
> could happen) or is it supposed to be locking out one vcpu
> against another (and not doing the job because each vcpu has
> a separate lock)? Or am I misunderstanding something entirely?
>
> If this should be a domain-wide lock, I'll spin a patch.
> (And it might explain some of the weirder time problems?)
>
> Thanks,
> Dan
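
[The arrangement Keir describes can be pictured with the short standalone
sketch below. To be clear, this is not the actual vpt.c code: the structure
and field names are invented for illustration, pthread spinlocks stand in
for Xen's own spinlock_t / spin_lock() primitives, and the lock is assumed
to have been initialised with pthread_spin_init() when the vcpu was set up.]

    /*
     * Simplified, standalone sketch of the per-vcpu locking pattern
     * described above -- NOT the real xen/arch/x86/hvm/vpt.c code.
     */
    #include <pthread.h>
    #include <stddef.h>

    struct periodic_time {
        struct periodic_time *next;   /* linked into the owning vcpu's list */
        unsigned long period_ns;
        unsigned long last_fire_ns;
    };

    struct vcpu_timers {
        pthread_spinlock_t tm_lock;   /* protects tm_list and every entry on it */
        struct periodic_time *tm_list; /* this vcpu's periodic timers */
    };

    /*
     * Walk one vcpu's timer list.  Because the list and its lock both belong
     * to a single vcpu, the walk takes exactly one lock -- no lock juggling.
     */
    static void pt_process_list(struct vcpu_timers *vt, unsigned long now_ns)
    {
        pthread_spin_lock(&vt->tm_lock);
        for ( struct periodic_time *pt = vt->tm_list; pt != NULL; pt = pt->next )
        {
            if ( now_ns - pt->last_fire_ns >= pt->period_ns )
                pt->last_fire_ns = now_ns;  /* a real implementation would
                                               inject the missed ticks here */
        }
        pthread_spin_unlock(&vt->tm_lock);
    }

[The per-vcpu scope is visible in pt_process_list(): the walk is cheap, but
the lock only serialises access to the timers owned by that one vcpu, which
is exactly the property Dan's question was probing.]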
OK, thanks. So a per-vcpu list lock and a separate
per-domain monotonicity lock are unfortunately both
necessary (in my patch).

> -----Original Message-----
> From: xen-devel-bounces@lists.xensource.com
> [mailto:xen-devel-bounces@lists.xensource.com] On Behalf Of Keir Fraser
> Sent: Friday, May 23, 2008 2:39 PM
> To: dan.magenheimer@oracle.com; Xen-Devel (E-mail)
> Cc: Dave Winchell
> Subject: [Xen-devel] Re: hvm vpt lock strangeness
>
> Locking is required to prevent multiple concurrent accesses to a single
> periodic timer. This is actually implemented at vcpu granularity to avoid
> lock juggling in functions which walk lists of timers. Since those lists
> are held per-vcpu, the convenient locking scope is the vcpu.
>
> -- Keir
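
[One way to picture the second lock Dan mentions is sketched below. This is
a hypothetical illustration rather than his patch: the names domain_time,
mono_lock and last_guest_ns are invented, the lock is assumed to be
initialised at domain creation, and "monotonicity" is taken to mean that
guest-visible time must never appear to run backwards when read from
different vcpus.]

    /*
     * Hypothetical sketch of the two-lock arrangement: the per-vcpu tm_lock
     * (previous sketch) still guards each vcpu's timer list, while a single
     * domain-wide lock guards the shared "high-water mark" time value.
     */
    #include <pthread.h>

    struct domain_time {
        pthread_spinlock_t mono_lock;  /* domain-wide: orders reads across vcpus */
        unsigned long last_guest_ns;   /* highest value ever handed to the guest */
    };

    /*
     * Clamp a per-vcpu time reading so the value handed to the guest never
     * moves backwards, no matter which vcpu asks.
     */
    static unsigned long domain_monotonic_time(struct domain_time *dt,
                                               unsigned long vcpu_now_ns)
    {
        unsigned long ret;

        pthread_spin_lock(&dt->mono_lock);
        ret = (vcpu_now_ns > dt->last_guest_ns) ? vcpu_now_ns
                                                : dt->last_guest_ns;
        dt->last_guest_ns = ret;
        pthread_spin_unlock(&dt->mono_lock);

        return ret;
    }

[A per-vcpu lock cannot provide this guarantee on its own, since each vcpu
would only ever serialise against itself -- hence both locks in the patch.]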
Agreed, for those timer modes which are aiming for domain-wide
monotonicity. Which is most of the new ones, and also all the ones that
you guys use in your product.

 -- Keir

On 23/5/08 23:39, "Dan Magenheimer" <dan.magenheimer@oracle.com> wrote:

> OK, thanks. So a per-vcpu list lock and a separate
> per-domain monotonicity lock are unfortunately both
> necessary (in my patch).