On Sat, 2007-03-10 at 14:52 -0800, Jeremy Fitzhardinge wrote:
> When booting under Xen, you'll get this if you're using both the xen
> clocksource and clockevent drivers. However, it seems that during boot
> on a NO_HZ HIGHRES_TIMERS system, the kernel does not use the Xen
> clocksource until it switches to highres timer mode. This means that
> during boot the kernel's monotonic clock is drifting with respect to
> the hypervisor, and all timeouts are unreliable.

The clocksource is not used until it is installed. Also, the periodic mode during boot, when the clock event device supports periodic mode, does not read the time; it relies on the clock event device getting it straight. That's not a big deal during boot, and on a kernel with NO_HZ=n and HIGHRES=n the periodic tick only updates jiffies. If the only clocksource is jiffies, then we have to live with it, and we do not switch to NO_HZ/HIGHRES as we would lose track of time. Once we switch to NO_HZ or HIGHRES, the clock event device is directly coupled to the clocksource.

> Initially I was just computing the kernel-hypervisor offset at boot
> time, but then I changed it to recompute it every time the timer mode
> changes. However, this didn't really help, and I was still getting
> unpredictable timeouts during boot. I've changed it to just compute
> the hypervisor absolute time directly using the delta each time the
> oneshot timer is set, which will definitely be reliable (if the kernel
> and hypervisor have drifting timebases then the meaning of an X ns
> delta will be different, but at least that's a local error rather than
> a long-term cumulative error).

We do not really care up to the point where the high resolution clocksource (e.g. TSC, PM-Timer or HPET on real hardware) becomes active.
Early boot is fragile, and we switch over to the high resolution clocksource and highres/nohz once things have stabilized.

> My analysis might be wrong here (I suspect the Xen periodic timer may
> have unexpected behaviour), but the overall conclusion still stands:
> using an absolute timeout only works if the kernel and hypervisor have
> non-drifting timebases. I think it's too fragile for a clockevent
> implementation to assume that a particular clocksource is in use to
> get reliable results.

Once we have switched over to the clocksource, everything should be in perfect sync.

> Or perhaps this is a property of the whole clock subsystem: that
> clockevents must be paired with clocksources. But it's not obvious to
> me that this is enforced, or even acknowledged.

It's simply enforced in NO_HZ/HIGHRES mode, as we operate in absolute time, which is read back from the clocksource, even if we use a relative value to program the next event on real hardware clock event devices. We calculate the delta between the absolute event and now, so we never get an accumulating error.

What problem are you observing?

	tglx
I've been thinking a bit more about how useful an absolute timeout is for a oneshot timer in a virtual environment.

In principle, absolute times are generally preferable. A relative timeout means "timeout in X ns from now", but the meaning of "now" is ambiguous, particularly if the vcpu can be preempted at any time, which means the determination of "now" can be arbitrarily deferred.

However, an absolute time is only meaningful if the kernel and hypervisor are operating off the same timebase (ie, no drift). In general, the kernel's monotonic clock is going to start from 0 ns when the virtual machine is booted, and the hypervisor's is going to start at 0 ns when the hypervisor is booted. If they're operating off the same timebase, then in principle you can work out a constant offset between the two and use that for converting a kernel absolute time into a hypervisor absolute time.

When booting under Xen, you'll get this if you're using both the xen clocksource and clockevent drivers. However, it seems that during boot on a NO_HZ HIGHRES_TIMERS system, the kernel does not use the Xen clocksource until it switches to highres timer mode. This means that during boot the kernel's monotonic clock is drifting with respect to the hypervisor, and all timeouts are unreliable.

Initially I was just computing the kernel-hypervisor offset at boot time, but then I changed it to recompute it every time the timer mode changes. However, this didn't really help, and I was still getting unpredictable timeouts during boot. I've changed it to just compute the hypervisor absolute time directly using the delta each time the oneshot timer is set, which will definitely be reliable (if the kernel and hypervisor have drifting timebases then the meaning of an X ns delta will be different, but at least that's a local error rather than a long-term cumulative error).
My analysis might be wrong here (I suspect the Xen periodic timer may have unexpected behaviour), but the overall conclusion still stands: using an absolute timeout only works if the kernel and hypervisor have non-drifting timebases. I think it's too fragile for a clockevent implementation to assume that a particular clocksource is in use to get reliable results.

Or perhaps this is a property of the whole clock subsystem: that clockevents must be paired with clocksources. But it's not obvious to me that this is enforced, or even acknowledged.

(Of course, if the drift can be characterized, then you can compensate for it, but this seems too complex to be the right answer. And drift compensation is numerically much simpler for small 32-bit deltas than for 64-bit absolute times.)

	J