Since execution within the hypervisor is non-preemptible, is there an upper bound on the time that can be spent in the hypervisor?

Thanks,

K. Y
Hi,

I did some basic measurements on worst-case execution time inside the hypervisor a while ago (6-8 months ago). From what I remember, the worst case was between 1 and 2 ms on a 3 GHz P4. This was under some load (make -j64 bzImage). A big culprit was the mmu-ext hypercall, which in certain cases recursively unpins a page table. Handling certain page faults also took a while. This was just a preliminary study, so I didn't push the analysis very far.

cheers,

geoffrey

On 5/8/07, Ky Srinivasan <ksrinivasan@novell.com> wrote:
> Since execution within the hypervisor is non-preemptible, is there an
> upper bound on the time that can be spent in the hypervisor?
>
> Thanks,
>
> K. Y
> I did some basic measurements on worst-case execution time inside the
> hypervisor a while ago (6-8 months ago). From what I remember, the
> worst case was between 1 and 2 ms on a 3 GHz P4. This was under some
> load (make -j64 bzImage). A big culprit was the mmu-ext hypercall,
> which in certain cases recursively unpins a page table. Handling
> certain page faults also took a while. This was just a preliminary
> study, so I didn't push the analysis very far.

The pin/unpin operations are certainly by far the longest-running operations in Xen, and making them preemptible has been on the to-do list for a long time. This should be very simple: we can exit the hypervisor leaving the EIP on the hypercall, and the next time the guest calls in we'll pick up where we left off.

Anyone who cares about real time on x86 up for implementing this?

Ian
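A minimal sketch in C of the restart pattern Ian describes. All names here are illustrative assumptions modeled loosely on Xen internals, not the actual interface: ENTRIES_PER_TABLE, unpin_one_entry(), hypercall_preempt_check(), create_continuation(), and the hypercall number are stand-ins.

/* Illustrative stand-ins -- not the real Xen interface. */
#define ENTRIES_PER_TABLE 512
#define __HYPERVISOR_unpin_table 42          /* hypothetical hypercall number */
extern void unpin_one_entry(unsigned long mfn, unsigned int idx);
extern int  hypercall_preempt_check(void);   /* higher-priority work pending? */
extern long create_continuation(int op, unsigned long mfn, unsigned int done);

/* Preemptible unpin loop inside the hypervisor (sketch).  'done'
 * records how many entries were processed on a previous invocation,
 * so re-entry resumes rather than restarts. */
long do_unpin_table(unsigned long mfn, unsigned int done)
{
    unsigned int i;

    for (i = done; i < ENTRIES_PER_TABLE; i++) {
        unpin_one_entry(mfn, i);             /* one bounded unit of work */

        if (hypercall_preempt_check()) {
            /* Rewind the guest EIP to the hypercall instruction and
             * stash the updated arguments; the next time the guest
             * calls in, execution picks up at entry i + 1. */
            return create_continuation(__HYPERVISOR_unpin_table,
                                       mfn, i + 1);
        }
    }
    return 0;                                /* fully completed */
}

The key point is that the progress counter is encoded in the re-issued arguments, so "resuming" is simply re-executing the same hypercall instruction.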
Thanks Geoffrey.

Regards,

K. Y

>>> On Tue, May 8, 2007 at 4:39 PM, in message
<c3c918090705081339j2439155j30381e55c51ad418@mail.gmail.com>, "Geoffrey
Lefebvre" <geoffrey@cs.ubc.ca> wrote:
> I did some basic measurements on worst-case execution time inside the
> hypervisor a while ago (6-8 months ago). From what I remember, the
> worst case was between 1 and 2 ms on a 3 GHz P4. This was under some
> load (make -j64 bzImage). A big culprit was the mmu-ext hypercall,
> which in certain cases recursively unpins a page table. [...]
>>> On Tue, May 8, 2007 at 4:56 PM, in message
<8A87A9A84C201449A0C56B728ACF491E0BA4E3@liverpoolst.ad.cl.cam.ac.uk>,
"Ian Pratt" <Ian.Pratt@cl.cam.ac.uk> wrote:
> The pin/unpin operations are certainly by far the longest-running
> operations in Xen, and making them preemptible has been on the to-do
> list for a long time. This should be very simple: we can exit the
> hypervisor leaving the EIP on the hypercall, and the next time the
> guest calls in we'll pick up where we left off.
>
> Anyone who cares about real time on x86 up for implementing this?

Thanks Ian. We would be interested in looking at this.

Regards,

K. Y
> The pin/unpin operations are certainly by far the longest-running
> operations in Xen, and making them preemptible has been on the to-do
> list for a long time. This should be very simple: we can exit the
> hypervisor leaving the EIP on the hypercall, and the next time the
> guest calls in we'll pick up where we left off.

Hi,

I have thought about real-time support in the past, and depending on the type of real-time guest you want to run, I believe the problem might be more complicated than it first seems, though I could be wrong. Feel free to enlighten me if my comments below don't make sense. :)

I think that if all you want to do is preempt one guest to run a different guest, then the solution as stated above will work. Things get more complicated if you want to be able to deliver an event to a guest while that same guest is trapped in the hypervisor. You potentially want this feature to avoid a scenario in which a low-priority thread that invoked a long hypercall delays a high-priority thread waiting for an event (such as a timer). The problem is that once you allow this upcall into the guest, there is no guarantee that the next hypercall will be the re-execution of the hypercall you preempted.

If the real-time guest you are running is quite simple (a la mini-os), then you can avoid this kind of scenario, but the problem gets harder to avoid (I think) if you are running something like Linux + preempt-RT as a guest.

cheers,

geoffrey
>>> On Fri, May 11, 2007 at 3:43 AM, in message
<c3c918090705110043l157d31b7wf94a2c6c63386e0a@mail.gmail.com>, "Geoffrey
Lefebvre" <geoffrey@cs.ubc.ca> wrote:
> I think that if all you want to do is preempt one guest to run a
> different guest, then the solution as stated above will work. Things
> get more complicated if you want to be able to deliver an event to a
> guest while that same guest is trapped in the hypervisor. [...] The
> problem is that once you allow this upcall into the guest, there is
> no guarantee that the next hypercall will be the re-execution of the
> hypercall you preempted.

We are interested in a predictable system that (a) has bounded dispatch latency and (b) does not suffer from priority-inversion problems. With a deadline scheduling policy and admission control, we can address the dispatch-latency issues, provided we can guarantee that non-preemptible execution in the hypervisor is bounded and "reasonably" small. One way to deal with the problem you mention here is to give the guest OS the smarts to (a) check for preemption conditions that may arise because a higher-priority thread in the guest is runnable, and (b) re-issue the hypercall that was only partially completed.

Regards,

K. Y
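A guest-side sketch of (a) and (b) above, assuming a hypothetical wrapper unpin_table_op() that returns -EAGAIN when the hypervisor preempted it and reports progress through 'done'; higher_priority_thread_runnable() and reschedule() stand in for the guest's own scheduler hooks.

#include <errno.h>

/* Hypothetical interface: performs work starting at '*done', updates
 * '*done' with progress, returns -EAGAIN if preempted before finishing. */
extern long unpin_table_op(unsigned long mfn, unsigned int *done);
extern int  higher_priority_thread_runnable(void);  /* guest scheduler hook */
extern void reschedule(void);                       /* yield to that thread */

long unpin_table(unsigned long mfn)
{
    unsigned int done = 0;
    long rc;

    while ((rc = unpin_table_op(mfn, &done)) == -EAGAIN) {
        /* (a) The hypercall was preempted; check whether a
         * higher-priority guest thread became runnable meanwhile. */
        if (higher_priority_thread_runnable())
            reschedule();
        /* (b) Re-issue the hypercall; 'done' carries the progress,
         * so we resume rather than restart. */
    }
    return rc;
}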
> One way to deal with the problem you mention here is to give the guest
> OS the smarts to (a) check for preemption conditions that may arise
> because a higher-priority thread in the guest is runnable, and (b)
> re-issue the hypercall that was only partially completed.

Hi,

Yes, I agree. Preempting the hypercall and returning an indication of forward progress to the guest, as is currently done with multicalls etc., should work. I misunderstood Ian Pratt's earlier suggestion: I thought his idea was to preempt the hypercall and keep the preemption state inside the hypervisor. My mistake. :)

cheers,

Geoffrey
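For reference, the multicall-style contract Geoffrey mentions can be sketched as follows: the hypervisor processes a batch of sub-operations, bails out when preemption is pending, and reports through an out-parameter how many completed, so the guest re-submits only the remainder. All names here are illustrative, not the actual Xen multicall API.

/* Hypervisor-side batch processing with forward-progress reporting
 * (sketch).  'ops' is a guest-supplied array of sub-operations;
 * '*pdone' tells the guest how far processing got. */
struct sub_op { int cmd; unsigned long arg; };

extern void do_one_op(const struct sub_op *op);      /* illustrative */
extern int  hypercall_preempt_check(void);           /* illustrative */

#define EAGAIN_RC (-11)  /* "partially done, please re-issue" */

long do_batch(const struct sub_op *ops, unsigned int count,
              unsigned int *pdone)
{
    unsigned int i;

    for (i = 0; i < count; i++) {
        do_one_op(&ops[i]);

        if (i + 1 < count && hypercall_preempt_check()) {
            *pdone = i + 1;     /* forward progress visible to the guest */
            return EAGAIN_RC;   /* guest re-issues ops[i+1 .. count-1] */
        }
    }
    *pdone = count;
    return 0;
}

The guest-visible progress indication is what distinguishes this scheme from keeping the preemption state hidden inside the hypervisor.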