Ben Guthro
2007-Oct-24 21:15 UTC
[Xen-devel] [PATCH] Fix hvm guest time to be more accurate
The vpt timer code in effect accumulates missed ticks when a guest is running
but has interrupts disabled, or when the platform timer is starved. For guests
like 64-bit Linux, which calculate missed ticks on each clock interrupt from
the current TSC and the TSC of the last interrupt and then add the missed
ticks to jiffies, this results in redundant accounting.

This change subtracts off the hypervisor-calculated missed ticks accumulated
while the guest is running, for 64-bit guests using the PIT. Missed ticks
accumulated while vcpu 0 is descheduled are unaffected.

Signed-off-by: Ben Guthro <bguthro@virtualron.com>
Signed-off-by: Dave Winchell <dwinchell@virtualiron.com>
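For reference, a minimal sketch of the guest-side accounting being described
(this is not the actual kernel source; the names and the cycles-per-tick value
are illustrative). The guest derives lost ticks from the TSC delta on every
clock interrupt, so any ticks the hypervisor replays on top of that are
counted a second time:

#include <stdint.h>

/* Illustrative stand-ins; the real kernel calibrates cycles-per-tick at boot. */
static uint64_t last_tsc;
static uint64_t cycles_per_tick = 2400000;   /* e.g. 2.4 GHz TSC at HZ=1000 */
static volatile uint64_t jiffies;

static inline uint64_t rdtsc(void)
{
    uint32_t lo, hi;
    __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
    return ((uint64_t)hi << 32) | lo;
}

/* Called once per injected clock interrupt, in the spirit of
 * main_timer_handler() in arch/x86_64/kernel/time.c. */
static void timer_interrupt(void)
{
    uint64_t tsc = rdtsc();
    int64_t lost = (int64_t)((tsc - last_tsc) / cycles_per_tick) - 1;

    if (lost > 0)
        jiffies += lost;   /* guest catches up from the TSC on its own...  */
    jiffies++;             /* ...and counts the interrupt being handled.   */
    last_tsc = tsc;
}

If Xen additionally injects one interrupt per missed tick, each replayed
interrupt adds another 1 to jiffies here, which is the redundant accounting
the patch subtracts off.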
Dong, Eddie
2007-Oct-25 05:52 UTC
RE: [Xen-devel] [PATCH] Fix hvm guest time to be more accurate
> The vpt timer code in effect accumulates missed ticks when a guest is
> running but has interrupts disabled, or when the platform timer is
> starved. For guests

In this case the VMM will pick up the lost ticks into pending_intr_nr. The
only issue is that if a guest is suspended or saved/restored for a long time,
such as several hours or days, we may see tons of lost ticks, which are
difficult to inject back (it costs minutes or even longer). So we give up that
amount of pending_intr_nr. In all of the above cases the guest needs to
re-sync its timer with something else, network time for example, so it is
harmless.

A similar situation happens when somebody is debugging a guest.

> like 64-bit Linux, which calculate missed ticks on each clock interrupt
> from the current TSC and the TSC of the last interrupt and then add the
> missed ticks to jiffies, this results in redundant accounting.
>
> This change subtracts off the hypervisor-calculated missed ticks
> accumulated while the guest is running, for 64-bit guests using the PIT.
> Missed ticks accumulated while vcpu 0 is descheduled are unaffected.

I think this one is not the right direction.

The problem in time virtualization is that we don't know how the guest will
use it. The latest 64-bit Linux can pick up the missed ticks from the TSC as
you mentioned, but that is not true for other 64-bit guests, even Linux such
as 2.6.16, nor for Windows.

Besides the PV timer approach, which is not always ready, we basically have 3
HVM time virtualization approaches:

1: The current one: freeze guest time when the guest is descheduled and thus
keep all guest time resources in sync with each other. This one precisely
solves the guest time cross-reference issue: the guest TSC precisely
represents guest time and can therefore be cross-referenced in the guest to
pick up lost ticks, if any. But the logic is relatively complicated and it is
easy to introduce bugs :-(

2: Pin guest time to host time. This is the simplest approach: the guest TSC
is always pinned to the host TSC with a fixed offset, no matter whether the
vCPU is descheduled or not. In this case, the other guest periodic-IRQ-driven
time sources are not synced to the guest TSC. Based on this, we have 2
variations:
   A: Accumulate pending_intr_nr as in the current #1 approach.
   B: Give up the accumulated pending_intr_nr. We only inject one IRQ for a
      periodic-IRQ-driven guest timer such as the PIT.

What you mentioned here is a special case of 2B.

Since we don't know how the guest behaves, what we are proposing recently is
to implement all of the above and let the administration tools choose which
one to use based on knowledge of the guest OS type.

thanks, eddie
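A paraphrased sketch of the accumulation behaviour described here (the field
names follow the discussion; this is not a verbatim copy of vpt.c, and
now_ns() is an assumed host-time helper): missed periods are folded into
pending_intr_nr, and a very large backlog is given up rather than replayed.

/* Minimal stand-in for the periodic timer state; the real structure has
 * more fields. Times are in nanoseconds. */
struct periodic_time {
    unsigned int pending_intr_nr;   /* ticks waiting to be injected */
    long long    scheduled;         /* next expiry                  */
    long long    period;            /* tick period                  */
};

extern long long now_ns(void);      /* assumed host-time helper, like NOW() */

static void accumulate_missed_ticks(struct periodic_time *pt)
{
    long long behind = now_ns() - pt->scheduled;
    long long missed;

    if (behind <= 0)
        return;

    missed = behind / pt->period + 1;

    if (missed > 1000) {
        /* Backlog too large to replay (suspend, save/restore, debugger):
         * give it up and let the guest re-sync from an external source. */
        pt->pending_intr_nr++;
    } else {
        pt->pending_intr_nr += missed;
    }

    pt->scheduled += missed * pt->period;
}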
Dave Winchell
2007-Oct-25 14:45 UTC
Re: [Xen-devel] [PATCH] Fix hvm guest time to be more accurate
Hi Doug,

Thanks for these comments.

Dong, Eddie wrote:

>> The vpt timer code in effect accumulates missed ticks when a guest is
>> running but has interrupts disabled, or when the platform timer is
>> starved. [...]
>
> In this case the VMM will pick up the lost ticks into pending_intr_nr.
> The only issue is that if a guest is suspended or saved/restored for a
> long time, such as several hours or days, we may see tons of lost ticks,
> which are difficult to inject back (it costs minutes or even longer). So
> we give up that amount of pending_intr_nr. In all of the above cases the
> guest needs to re-sync its timer with something else, network time for
> example, so it is harmless.
>
> A similar situation happens when somebody is debugging a guest.

The solution we provided removes the one-second limit on missed ticks. Our
testing showed that this limit is often exceeded under some loads, such as
many guests, each running loads. Setting missed ticks to 1 tick when 1000 is
exceeded is a source of timing error. In the code, where it is set to one,
there is a "TBD: sync with guest" comment, but no action.

In terms of re-syncing with network time, our goal was to have the timekeeping
accurate enough that the guest could run ntpd. To do that, the underlying
timekeeping needs to be accurate to .05% or so. Our measurements show that
with this patch the core timekeeping is accurate to approximately .02%, even
under loads where many guests run loads. Without this patch, timekeeping is
off by more than 10% and ntpd cannot sync it.

>> like 64-bit Linux, which calculate missed ticks on each clock interrupt
>> from the current TSC and the TSC of the last interrupt and then add the
>> missed ticks to jiffies, this results in redundant accounting. [...]
>
> I think this one is not the right direction.
>
> The problem in time virtualization is that we don't know how the guest
> will use it. The latest 64-bit Linux can pick up the missed ticks from
> the TSC as you mentioned, but that is not true for other 64-bit guests,
> even Linux such as 2.6.16, nor for Windows.

Ours is a specific solution. Let me explain our logic.

We configure all our Linux guests with clock=pit.

The 32-bit Linux guests we run don't calculate missed ticks and so don't need
cancellation. All the 64-bit Linux guests that we run calculate missed ticks
and need cancellation. I just checked 2.6.16 and it does calculate missed
ticks in arch/x86_64/kernel/time.c, main_timer_handler(), when using the PIT
for timekeeping.

The missed-ticks cancellation code is activated in this patch when the guest
has configured the PIT for timekeeping and the guest has four-level page
tables (i.e. 64 bit).

The Windows guests we run use the RTC for timekeeping and don't need or get
cancellation.

So the simplifying assumption here is that a 64-bit guest using the PIT is
calculating missed ticks.

I would be in favor of a method where Xen is told directly whether to do
missed-ticks cancellation. Perhaps it could be part of the guest configuration
information.

> Besides the PV timer approach, which is not always ready, we basically
> have 3 HVM time virtualization approaches:
>
> [...]
>
> Since we don't know how the guest behaves, what we are proposing recently
> is to implement all of the above and let the administration tools choose
> which one to use based on knowledge of the guest OS type.
>
> thanks, eddie

I agree with you on having various policies for timekeeping based on the guest
being run.

This patch addresses specifically the problem of PIT users who calculate
missed ticks. Note that in the solution, descheduled missed ticks are not
canceled; they are still needed, as the TSC is continuous in the current
method. We are only canceling those pending_intr_nr that accumulate while the
guest is running. These are due to inaccuracies in the Xen timer expirations
caused by interrupt loads or long dom0 interrupt-disable periods. They are
also due to extended periods where the guest has interrupts disabled. In these
cases, as the TSC has been running, the guest will calculate missed ticks at
the time of the first clock interrupt injection, and then Xen will deliver
pending_intr_nr additional interrupts, resulting in jiffies moving by
2*pending_intr_nr instead of the desired pending_intr_nr.

regards,
Dave
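A hedged sketch of the activation test described above (the helper and field
names are illustrative, since the patch itself is not quoted in this thread):
cancellation is enabled only when the guest both uses the PIT as its time
source and runs with four-level page tables, which is taken to mean "is a
64-bit guest".

/* Illustrative flags standing in for the real per-domain state. */
struct hvm_domain_timekeeping {
    int pit_is_time_source;   /* guest programmed the PIT for periodic ticks */
    int long_mode_4level;     /* guest paging uses 4 levels, i.e. 64-bit     */
};

/* Returns nonzero when hypervisor-side missed ticks accumulated while the
 * guest was running should be cancelled, per the heuristic above. */
static int wants_missed_tick_cancellation(const struct hvm_domain_timekeeping *d)
{
    return d->pit_is_time_source && d->long_mode_4level;
}

As noted in the message, a cleaner alternative is to make this an explicit
per-guest configuration switch rather than inferring it from the paging mode.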
Dong, Eddie
2007-Oct-26 06:48 UTC
RE: [Xen-devel] [PATCH] Fix hvm guest time to be more accurate
Dave Winchell wrote:
> Hi Doug,
>
> Thanks for these comments.
>
> Dong, Eddie wrote:
>> [...]
>
> The solution we provided removes the one-second limit on missed ticks.
> Our testing showed that this limit is often exceeded under some loads,
> such as many guests, each running loads. Setting missed ticks to 1 tick
> when 1000 is exceeded is a source of timing error. In the code, where it
> is set to one, there is a "TBD: sync with guest" comment, but no action.

That is possible, so we should increase 1000 to something bigger. Making it
around 10s should be OK?

> In terms of re-syncing with network time, our goal was to have the
> timekeeping accurate enough that the guest could run ntpd. To do that,
> the underlying timekeeping needs to be accurate to .05% or so. Our
> measurements show that with this patch the core timekeeping is accurate
> to approximately .02%, even under loads where many guests run loads.
> Without this patch, timekeeping is off by more than 10% and ntpd cannot
> sync it.
>
>> [...]
>
> Ours is a specific solution.
> Let me explain our logic.

Yes, it can fit some situations :-) But I think we need a generic solution.

How to choose the time virtualization policy can be argued, and we may use
some experimental data. What you found is definitely good data :-)

> We configure all our Linux guests with clock=pit.

Just curious: why do you favor the PIT instead of the HPET? Does the HPET
bring more deviation?

> The 32-bit Linux guests we run don't calculate missed ticks and so don't
> need cancellation. All the 64-bit Linux guests that we run calculate
> missed ticks and need cancellation. I just checked 2.6.16 and it does
> calculate missed ticks in arch/x86_64/kernel/time.c,
> main_timer_handler(), when using the PIT for timekeeping.

But this is reported as lost ticks, which will printk something. In theory,
with the guest TSC synchronized with the guest periodic timer, this issue can
be removed, but somehow (maybe a bug, or virtualization overhead) we may still
see them :-(

> The missed-ticks cancellation code is activated in this patch when the
> guest has configured the PIT for timekeeping and the guest has four-level
> page tables (i.e. 64 bit).
>
> The Windows guests we run use the RTC for timekeeping and don't need or
> get cancellation.
>
> So the simplifying assumption here is that a 64-bit guest using the PIT
> is calculating missed ticks.
>
> I would be in favor of a method where Xen is told directly whether to do
> missed-ticks cancellation. Perhaps it could be part of the guest
> configuration information.
>
>> [...]
>
> I agree with you on having various policies for timekeeping based on the
> guest being run.
>
> This patch addresses specifically the problem of PIT users who calculate
> missed ticks. Note that in the solution, descheduled missed ticks are not
> canceled; they are still needed, as the TSC is continuous in the current
> method. We are only

If we rely on the guest to pick up the lost ticks, why not just do it
thoroughly? I.e., even descheduled missed ticks can rely on the guest to pick
them up. That is what 2B proposes.

In some cases we saw issues in Windows (XP32) with 2B: the guest wall clock
becomes slow. Maybe XP64 behaves differently, as you saw, but we need a
Windows expert to double-check.

A rough idea in my mind is:
    Policy #1 works best for 32-bit Linux (and old 64-bit Linux).
    Policy #2B works for the latest 64-bit Linux.
    Policy #2A works for Windows (32 and 64 bit).

> canceling those pending_intr_nr that accumulate while the guest is
> running. These are due to inaccuracies in the Xen timer expirations
> caused by interrupt loads or long dom0 interrupt-disable periods. They
> are also due to extended periods where the guest has interrupts disabled.
> In these cases, as the TSC has been running, the guest will calculate
> missed ticks at the time of the first clock interrupt injection, and then
> Xen will deliver pending_intr_nr additional interrupts, resulting in
> jiffies moving by 2*pending_intr_nr instead of the desired
> pending_intr_nr.
>
> regards,
> Dave

thx, eddie
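A small sketch of the per-guest policy mapping suggested above (the enum and
guest-type names are illustrative, not from any Xen interface; the idea is
that an administration tool would pick the policy from the configured OS
type):

enum time_policy { POLICY_1_FREEZE, POLICY_2A_PINNED_REPLAY, POLICY_2B_PINNED_DROP };
enum guest_kind  { LINUX_32, LINUX_64_OLD, LINUX_64_NEW, WINDOWS_32, WINDOWS_64 };

/* Rough mapping proposed in the discussion above. */
static enum time_policy pick_time_policy(enum guest_kind g)
{
    switch (g) {
    case LINUX_32:
    case LINUX_64_OLD:  return POLICY_1_FREEZE;          /* #1  */
    case LINUX_64_NEW:  return POLICY_2B_PINNED_DROP;    /* #2B */
    case WINDOWS_32:
    case WINDOWS_64:    return POLICY_2A_PINNED_REPLAY;  /* #2A */
    }
    return POLICY_1_FREEZE;                              /* conservative default */
}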
Dave Winchell
2007-Oct-26 13:56 UTC
Re: [Xen-devel] [PATCH] Fix hvm guest time to be more accurate
Dong, Eddie wrote:

> Dave Winchell wrote:
>> [...]
>> The solution we provided removes the one-second limit on missed ticks.
>> [...]
>
> That is possible, so we should increase 1000 to something bigger. Making
> it around 10s should be OK?

Agreed.

>> In terms of re-syncing with network time, our goal was to have the
>> timekeeping accurate enough that the guest could run ntpd. [...]
>> Ours is a specific solution. Let me explain our logic.
>
> Yes, it can fit some situations :-) But I think we need a generic
> solution.
>
> How to choose the time virtualization policy can be argued, and we may
> use some experimental data. What you found is definitely good data :-)
>
>> We configure all our Linux guests with clock=pit.
>
> Just curious: why do you favor the PIT instead of the HPET? Does the
> HPET bring more deviation?

We started with the PIT because it kept such good time for 32-bit Linux. Based
on this, we thought that the problems with 64-bit PIT would be manageable.

One of these days we will characterize the HPET. Based on the RTC performing
well, I would think that the HPET would do well too. If not, then the reasons
could be investigated.

>> The 32-bit Linux guests we run don't calculate missed ticks and so don't
>> need cancellation. All the 64-bit Linux guests that we run calculate
>> missed ticks and need cancellation. I just checked 2.6.16 and it does
>> calculate missed ticks in arch/x86_64/kernel/time.c,
>> main_timer_handler(), when using the PIT for timekeeping.
>
> But this is reported as lost ticks, which will printk something. In
> theory, with the guest TSC synchronized with the guest periodic timer,
> this issue can be removed, but somehow (maybe a bug, or virtualization
> overhead) we may still see them :-(
>
>> [...]
>
>> I agree with you on having various policies for timekeeping based on the
>> guest being run. This patch addresses specifically the problem of PIT
>> users who calculate missed ticks. Note that in the solution, descheduled
>> missed ticks are not canceled; they are still needed, as the TSC is
>> continuous in the current method. We are only
>
> If we rely on the guest to pick up the lost ticks, why not just do it
> thoroughly? I.e., even descheduled missed ticks can rely on the guest to
> pick them up.

I have considered this. I was worried that if the descheduled period was too
large, the guest would do something funny, like declare lost to be 1 ;-)
However, the descheduled periods are probably no longer than the
interrupts-disabled periods, given some of the problems we have with guests in
spinlock_irq code. Also, since we have the Linux guest code, and have been
relying on being able to read it to make timekeeping policy, we can see that
they don't set lost to 1.

Actually, the more I think about this, the more I like the idea. It would mean
that we wouldn't have to deliver all those pent-up interrupts to the guest. It
solves some other problems as well. We could probably use this policy for most
guests and timekeeping sources. Linux 32-bit with the PIT might be the
exception.

> That is what 2B proposes. In some cases we saw issues in Windows (XP32)
> with 2B: the guest wall clock becomes slow. Maybe XP64 behaves
> differently, as you saw, but we need a Windows expert to double-check.
>
> A rough idea in my mind is:
>     Policy #1 works best for 32-bit Linux (and old 64-bit Linux).
>     Policy #2B works for the latest 64-bit Linux.
>     Policy #2A works for Windows (32 and 64 bit).

I agree with this breakdown. The next step is to do some experiments, I think.

>> canceling those pending_intr_nr that accumulate while the guest is
>> running. [...]
>
> thx, eddie
Dave Winchell
2007-Oct-26 18:18 UTC
Re: [Xen-devel] [PATCH] Fix hvm guest time to be more accurate
Eddie,

I implemented #2B and ran a three-hour test with sles9-64 and rh4u4-64 guests.
Each guest had 8 vcpus and the box was Intel with 2 physical processors. The
guests were running large loads. The clock source was the PIT. This is my
usual test setup, except that I just as often use AMD nodes with more
processors.

The time error was .02%, good enough for ntpd.

The implementation keeps a constant guest TSC offset. There is no pending_nr
cancellation. When the vpt.c timer expires, it only increments pending_nr if
its value is zero. Missed_ticks() is still calculated, but only to update the
new timeout value. There is no adjustment to the TSC offset
(set_guest_time()) at clock interrupt delivery time nor at re-scheduling time.

So, I like this method better than the pending_nr subtract. I'm going to work
on this some more and, if all goes well, propose a new code submission soon.
I'll put some kind of policy switch in too, which we can discuss and modify,
but it will be along the lines of what we discussed below.

Thanks for your input!

-Dave

Dave Winchell wrote:
> [...]
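A hedged sketch of the #2B behaviour described above (the structure is
paraphrased from the discussion, not copied from vpt.c; now_ns() and
set_timer_at() stand in for the hypervisor's own primitives): the
periodic-timer callback arms at most one pending interrupt, while still
advancing the scheduled expiry past any missed periods so the next timeout
stays on a period boundary.

/* Same stand-in structure as in the earlier sketch. */
struct periodic_time {
    unsigned int pending_intr_nr;   /* ticks waiting to be injected */
    long long    scheduled;         /* next expiry, in ns           */
    long long    period;            /* tick period, in ns           */
};

extern long long now_ns(void);                        /* assumed host-time helper */
extern void set_timer_at(struct periodic_time *pt,    /* assumed re-arm helper    */
                         long long expiry);

static void pt_timer_expired_2b(struct periodic_time *pt)
{
    long long behind = now_ns() - pt->scheduled;

    /* Inject at most one tick; with a constant TSC offset the guest's own
     * TSC arithmetic accounts for the rest. */
    if (pt->pending_intr_nr == 0)
        pt->pending_intr_nr = 1;

    /* Still skip past missed periods so the timer does not immediately
     * re-expire; this is the only use made of the missed-ticks count. */
    if (behind > 0)
        pt->scheduled += (behind / pt->period + 1) * pt->period;
    else
        pt->scheduled += pt->period;

    set_timer_at(pt, pt->scheduled);
}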
Dong, Eddie
2007-Oct-29 09:57 UTC
RE: [Xen-devel] [PATCH] Fix hvm guest time to be more accurate
Dave Winchell wrote:
> Dong, Eddie wrote:
>> That is possible, so we should increase 1000 to something bigger.
>> Making it around 10s should be OK?
>
> Agreed.

Thanks! And will wait for your patches :-)

>> Just curious: why do you favor the PIT instead of the HPET? Does the
>> HPET bring more deviation?
>
> We started with the PIT because it kept such good time for 32-bit Linux.
> Based on this, we thought that the problems with 64-bit PIT would be
> manageable.
>
> One of these days we will characterize the HPET. Based on the RTC
> performing well, I would think that the HPET would do well too. If not,
> then the reasons could be investigated.

Yes!

>> If we rely on the guest to pick up the lost ticks, why not just do it
>> thoroughly? I.e., even descheduled missed ticks can rely on the guest to
>> pick them up.
>
> I have considered this. [...] Actually, the more I think about this, the
> more I like the idea. It would mean that we wouldn't have to deliver all
> those pent-up interrupts to the guest. It solves some other problems as
> well. We could probably use this policy for most guests and timekeeping
> sources. Linux 32-bit with the PIT might be the exception.

Great!

Eddie
Dong, Eddie
2007-Oct-29 09:58 UTC
RE: [Xen-devel] [PATCH] Fix hvm guest time to be more accurate
Dave Winchell wrote:
> Eddie,
>
> I implemented #2B and ran a three-hour test with sles9-64 and rh4u4-64
> guests. [...]
>
> So, I like this method better than the pending_nr subtract. I'm going to
> work on this some more and, if all goes well, propose a new code
> submission soon. I'll put some kind of policy switch in too, which we can
> discuss and modify, but it will be along the lines of what we discussed
> below.
>
> Thanks for your input!
>
> -Dave

Haitao Shan may have posted his patch; can you check if there is anything
missed?

thx, eddie
Dave Winchell
2007-Oct-29 15:00 UTC
Re: [Xen-devel] [PATCH] Fix hvm guest time to be more accurate
Eddie, Haitao:

The patch looks good, with the following comments.

1. Since you are in missed_ticks(), why not increase the threshold to 10 sec?

2. In missed_ticks() you should only increment pending_intr_nr by the
   calculated missed_ticks when pt_support_time_frozen(domain).

3. You might as well fix this one too, since it is what we discussed and is so
   related to the constant TSC offset: in pt_timer_fn, if
   !pt_support_time_frozen(domain), then pending_intr_nr should end up with a
   maximum value of one.

regards,
Dave

Dong, Eddie wrote:
> Dave Winchell wrote:
>> [...]
>
> Haitao Shan may have posted his patch; can you check if there is
> anything missed?
> thx, eddie
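Pulling the three comments together as a hedged sketch (reusing the stand-in
struct periodic_time and now_ns() declarations from the earlier sketches;
pt_support_time_frozen() is the helper named in the discussion, assumed to
return nonzero for the tick-replay policies):

struct domain;                                        /* opaque here */
extern int pt_support_time_frozen(struct domain *d);  /* per the discussion */

static void accumulate_missed_ticks_v2(struct domain *d, struct periodic_time *pt)
{
    long long behind = now_ns() - pt->scheduled;
    long long missed;

    if (behind <= 0)
        return;

    missed = behind / pt->period + 1;

    if (pt_support_time_frozen(d)) {
        /* Comment 1: give up only after roughly 10 s of backlog,
         * not a fixed 1000 ticks. */
        if (missed * pt->period > 10LL * 1000000000LL)
            pt->pending_intr_nr++;
        else
            pt->pending_intr_nr += missed;   /* Comment 2: replay only under
                                                the time-frozen policy. */
    } else {
        /* Comment 3: with a constant TSC offset, never queue more than one. */
        if (pt->pending_intr_nr == 0)
            pt->pending_intr_nr = 1;
    }

    pt->scheduled += missed * pt->period;
}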
Keir Fraser
2007-Oct-29 17:29 UTC
Re: [Xen-devel] [PATCH] Fix hvm guest time to be more accurate
I thought the point of the mode in Haitao's patch was to still deliver the
'right' number of pending interrupts, but not stall the guest TSC while
delivering them? That's what I checked in as c/s 16237 (in the staging tree).
If we want other modes too, they can be added to the enumeration that c/s
defines.

 -- Keir

On 29/10/07 15:00, "Dave Winchell" <dwinchell@virtualiron.com> wrote:

> Eddie, Haitao:
>
> The patch looks good, with the following comments.
>
> 1. Since you are in missed_ticks(), why not increase the threshold to
>    10 sec?
>
> 2. In missed_ticks() you should only increment pending_intr_nr by the
>    calculated missed_ticks when pt_support_time_frozen(domain).
>
> 3. You might as well fix this one too, since it is what we discussed and
>    is so related to the constant TSC offset: in pt_timer_fn, if
>    !pt_support_time_frozen(domain), then pending_intr_nr should end up
>    with a maximum value of one.
>
> regards,
> Dave
>
> [...]
Dave Winchell
2007-Oct-29 19:55 UTC
Re: [Xen-devel] [PATCH] Fix hvm guest time to be more accurate
Keir,

I think it's a good idea to have other modes. However, I don't believe that
the mode checked in to the staging tree will keep good time for a 64-bit Linux
guest, if that was what was intended. Here's why:

The guest running under the new option gets a clock interrupt after being
descheduled for a while. It calculates missed_ticks and bumps jiffies by
missed_ticks. Jiffies is now correct. Then, with the new mode as submitted,
the guest will get missed_ticks additional interrupts. For each, the guest
will add 1 to jiffies. The guest is now missed_ticks * clock_period ahead of
where it should be.

Under the old/other option, the guest TSC is continuous across a descheduled
period (guest time is frozen while the guest is off the CPU), and thus the
missed_ticks calculation in the guest results in zero. Then missed_ticks
interrupts are delivered and jiffies is correct.

I just ran a test with two 64-bit Linux guests, one Red Hat and one SLES,
under load. The hypervisor has the constant TSC offset per the code submitted
to the staging tree. In each 5-second period the guest gained 6-10 seconds
against ntp time, an error of almost 200%.

[root@vs079 ~]# while :; do ntpdate -q 0.us.pool.ntp.org; sleep 5; done
server 8.15.10.42, stratum 2, offset -0.061007, delay 0.04959
29 Oct 15:21:21 ntpdate[3892]: adjust time server 8.15.10.42 offset -0.061007 sec
server 8.15.10.42, stratum 2, offset -0.077763, delay 0.07129
29 Oct 15:21:28 ntpdate[3894]: adjust time server 8.15.10.42 offset -0.077763 sec
server 8.15.10.42, stratum 2, offset -1.733141, delay 0.20813   (load started here)
29 Oct 15:21:35 ntpdate[3968]: step time server 8.15.10.42 offset -1.733141 sec
server 8.15.10.42, stratum 2, offset -9.648700, delay 0.04861
29 Oct 15:21:54 ntpdate[4002]: step time server 8.15.10.42 offset -9.648700 sec
server 8.15.10.42, stratum 2, offset -22.872883, delay 0.05319
29 Oct 15:22:21 ntpdate[4027]: step time server 8.15.10.42 offset -22.872883 sec
server 8.15.10.42, stratum 2, offset -29.036008, delay 0.19337
29 Oct 15:22:38 ntpdate[4039]: step time server 8.15.10.42 offset -29.036008 sec
server 8.15.10.42, stratum 2, offset -34.880845, delay 0.04944
29 Oct 15:22:46 ntpdate[4058]: step time server 8.15.10.42 offset -34.880845 sec

With these three changes to the constant TSC offset policy in staging, the
error compared to ntp is about .02% under this load:

> 1. Since you are in missed_ticks(), why not increase the threshold to
>    10 sec?
>
> 2. In missed_ticks() you should only increment pending_intr_nr by the
>    calculated missed_ticks when pt_support_time_frozen(domain).
>
> 3. You might as well fix this one too, since it is what we discussed and
>    is so related to the constant TSC offset: in pt_timer_fn, if
>    !pt_support_time_frozen(domain), then pending_intr_nr should end up
>    with a maximum value of one.

So, I think these changes are necessary for a 64-bit Linux policy. If you
agree, should they go in as fixes to the constant TSC offset policy in staging
now, or as a new policy?

thanks,
Dave

Keir Fraser wrote:
> I thought the point of the mode in Haitao's patch was to still deliver
> the 'right' number of pending interrupts, but not stall the guest TSC
> while delivering them? That's what I checked in as c/s 16237 (in the
> staging tree). If we want other modes too, they can be added to the
> enumeration that c/s defines.
>
> [...]
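A small worked example of the double counting described above, with made-up
numbers (the tick rate and descheduled interval are assumptions, not
measurements from the test):

#include <stdio.h>

int main(void)
{
    long hz = 1000;            /* assumed guest HZ for a 2.6 x86_64 kernel  */
    long missed = 500;         /* ticks elapsed during a 0.5 s deschedule   */

    /* Constant-TSC-offset mode with tick replay (the staging-tree mode): */
    long from_tsc_catchup = missed;  /* guest bumps jiffies from the TSC delta */
    long from_replay      = missed;  /* Xen then injects 'missed' extra ticks  */

    printf("wall-clock ticks elapsed : %ld\n", missed);
    printf("jiffies advance observed : %ld (about 2x)\n",
           from_tsc_catchup + from_replay);
    printf("guest runs fast by       : %.1f ms per deschedule\n",
           1000.0 * from_replay / hz);
    return 0;
}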
Keir Fraser
2007-Oct-29 20:40 UTC
Re: [Xen-devel] [PATCH] Fix hvm guest time to be more accurate
On 29/10/07 19:55, "Dave Winchell" <dwinchell@virtualiron.com> wrote:

> So, I think these changes are necessary for a 64-bit Linux policy. If you
> agree, should they go in as fixes to the constant TSC offset policy in
> staging now, or as a new policy?

It's easy to add another one with an appropriate name.

 -- Keir
Dave Winchell
2007-Oct-29 20:44 UTC
Re: [Xen-devel] [PATCH] Fix hvm guest time to be more accurate
Keir Fraser wrote:
> On 29/10/07 19:55, "Dave Winchell" <dwinchell@virtualiron.com> wrote:
>
>> So, I think these changes are necessary for a 64-bit Linux policy. If
>> you agree, should they go in as fixes to the constant TSC offset policy
>> in staging now, or as a new policy?
>
> It's easy to add another one with an appropriate name.
>
>  -- Keir

Ok, we'll submit a patch per the discussion.

-Dave
Dong, Eddie
2007-Oct-30 11:45 UTC
RE: [Xen-devel] [PATCH] Fix hvm guest time to be more accurate
I guess another alternative is missed. We need to add a 3rd choice: ignore
pending_intr_nr for X64 Linux.

thx, eddie

> -----Original Message-----
> From: Keir Fraser [mailto:Keir.Fraser@cl.cam.ac.uk]
> Sent: 30 October 2007 1:30
> To: Dave Winchell; Dong, Eddie
> Cc: xen-devel; Ben Guthro; Shan, Haitao
> Subject: Re: [Xen-devel] [PATCH] Fix hvm guest time to be more accurate
>
> I thought the point of the mode in Haitao's patch was to still deliver
> the 'right' number of pending interrupts, but not stall the guest TSC
> while delivering them? That's what I checked in as c/s 16237 (in the
> staging tree). If we want other modes too, they can be added to the
> enumeration that c/s defines.
>
>  -- Keir
>
> [...]
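For illustration, a hedged sketch of the kind of per-domain timer-mode
enumeration being discussed (c/s 16237 is said above to introduce such an
enumeration; the identifiers and comments here are illustrative, not copied
from that changeset):

enum hvm_timer_mode {
    /* Policy #1: freeze guest time while descheduled and replay every
     * missed tick (delays the guest TSC during catch-up). */
    TIMER_MODE_DELAY_FOR_MISSED_TICKS,

    /* Policy #2A: constant guest TSC offset, but still replay the full
     * backlog of missed ticks. */
    TIMER_MODE_NO_DELAY_FOR_MISSED_TICKS,

    /* Policy #2B, the 3rd choice above: constant guest TSC offset and
     * ignore the accumulated pending_intr_nr, injecting at most one tick,
     * for guests (such as 64-bit Linux on the PIT) that recover missed
     * ticks from the TSC themselves. */
    TIMER_MODE_NO_MISSED_TICKS_PENDING,
};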