During the last Xen Summit there were informal discussions about the status of wait queues in the hypervisor.

To recap:
1. Wait queues are used in mem event, when events generated by a vcpu overflow the ring size.
2. We would like to use wait queues when the hypervisor needs a paged-out frame (say, for hvm_copy).
3. We would like to use wait queues to avoid the two decoupled mmio emulation passes.
4. We would like to use wait queues when the hypervisor needs write access to a shared frame (say, for hvm_copy), and unsharing temporarily fails with ENOMEM.

Conceivably more uses for wait queues may come down the line.

Use-cases 2 and 4 were left out of the 4.2 time frame, because a vcpu cannot go to sleep on a wait queue while holding a spinlock, and such situations would arise frequently. Preliminary patches from Tim Deegan have floated on the list (http://lists.xen.org/archives/html/xen-devel/2012-02/msg02133.html). We would like this functionality to be present on the mm side for 4.3, and then proceed to remove the "thinking" that consumers of the p2m interface now need to perform.

The current (effective) maintainer for wait queues is Keir. Keir, any ideas on a schedule for the cleanup?

Thanks,
Andres
On 12/10/2012 16:38, "Andres Lagar-Cavilla" <andreslc@gridcentric.ca> wrote:

> During the last Xen Summit there were informal discussions about the status
> of wait queues in the hypervisor.
>
> [...]
>
> The current maintainer (effectively) for wait queues is Keir. Keir, any
> ideas on a schedule for the cleanup?

I maintain the wait-queue mechanism, but not every (potential) user of it! The only use-case above that might fall into my domain is 3, I think.

 -- Keir
On Oct 12, 2012, at 12:30 PM, Keir Fraser wrote:

> On 12/10/2012 16:38, "Andres Lagar-Cavilla" <andreslc@gridcentric.ca> wrote:
>
> [...]
>
> I maintain the wait-queue mechanism, but not every (potential) user of it!

That's the end of my cunning scheme. It was worth a try ...

> The only use-case above that might fall into my domain is 3, I think.

So perhaps mm should wait for that to happen before tackling wait queues for paging/sharing?

Thanks,
Andres
On 12/10/2012 17:43, "Andres Lagar-Cavilla" <andreslc@gridcentric.ca> wrote:

>>> The current maintainer (effectively) for wait queues is Keir. Keir, any
>>> ideas on a schedule for the cleanup?
>>
>> I maintain the wait-queue mechanism, but not every (potential) user of it!
>
> That's the end of my cunning scheme. It was worth a try ...
>
>> The only use-case above that might fall into my domain is 3, I think.
>
> So perhaps mm should wait for that to happen before tackling wait queues
> for paging/sharing?

I don't see any dependency. And Tim has already proposed patches for use-case 2, in fact, as you noted. So shouldn't they simply be polished up and applied as soon as possible? Basically, for mm-type things, Tim is your man, either as author or reviewer.

 -- Keir
On Oct 12, 2012, at 1:26 PM, Keir Fraser wrote:

> [...]
>
> I don't see any dependency. And Tim has already proposed patches for
> use-case 2, in fact, as you noted. So shouldn't they simply be polished up
> and applied as soon as possible? Basically, for mm-type things, Tim is your
> man, either as author or reviewer.

OK. I wanted to ascertain where we stand on things.

Tim's patches crash guests because there are all sorts of spinlocks being held. That's the gist of the mm work that needs to be done. And a separate discussion.

Thanks,
Andres
On 12/10/2012 19:53, "Andres Lagar-Cavilla" <andreslc@gridcentric.ca> wrote:

>> I don't see any dependency. And Tim has already proposed patches for
>> use-case 2, in fact, as you noted. So shouldn't they simply be polished up
>> and applied as soon as possible? Basically, for mm-type things, Tim is your
>> man, either as author or reviewer.
>
> OK. I wanted to ascertain where we stand on things.
>
> Tim's patches crash guests because there are all sorts of spinlocks being
> held. That's the gist of the mm work that needs to be done. And a separate
> discussion.

Yes, that's a *mm* can of worms. :) Tim is the first port of call, and working out who actually does the work, and what work that is, will be the ensuing discussion.

 -- Keir
On 12/10/12 20:34, Keir Fraser wrote:

> [...]
>
> Yes, that's a *mm* can of worms. :) Tim is the first port of call, and
> working out who actually does the work, and what work that is, will be the
> ensuing discussion.

So does it make sense to track this on the 4.3 feature list (obviously with "owner: ?" until that's sorted out)? Andres, would the summary below be accurate enough?

* Waitqueues for hypervisor accesses to shared frames which fail with -ENOMEM
  owner: ?
  status: First draft posted back in February; much more work to do.

 -George
On Oct 15, 2012, at 9:47 AM, George Dunlap <George.Dunlap@eu.citrix.com> wrote:

> So does it make sense to track this on the 4.3 feature list (obviously with
> "owner: ?" until that's sorted out)? Andres, would the summary below be
> accurate enough?
>
> * Waitqueues for hypervisor accesses to shared frames which fail with -ENOMEM
>   owner: ?
>   status: First draft posted back in February, much more work to do.

I'd be more generic and say "wait queues for mm" (paging also needs this). I don't know about "wait queues for mmio emulation" as a separate item.

Andres