Tian, Kevin
2007-Jan-30 08:26 UTC
[Xen-devel] [PATCH] Fix softlockup issue after vcpu hotplug
Stamp the softlockup thread before do_timer, because the latter is what actually triggers the lockup warning after a long offline period. Otherwise, I observed softlockup warnings easily on manual vcpu hot-remove/plug, or when a cancelled suspend resumes into the old context.

One point here is to compare both stolen and blocked time against the offline threshold. vcpu hotplug falls into the 'stolen' case, but that alone is not enough: since the xen time model is tickless at idle, a large block time may be requested, which also trips the softlockup thread.

Signed-off-by: Kevin Tian <kevin.tian@intel.com>

Thanks,
Kevin
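For illustration, a minimal sketch of the idea in this patch (the names stolen, blocked and NS_PER_TICK follow time-xen.c, but the exact form here is an assumption, not the posted patch):

	/* Sketch: in timer_interrupt(), before calling do_timer(), touch
	 * the watchdog if the vcpu was offline (stolen or blocked) for
	 * longer than the softlockup threshold.  Illustration only. */
	if (stolen + blocked > 10ULL * HZ * NS_PER_TICK)  /* ~10 s threshold */
		touch_softlockup_watchdog();   /* stamp before jiffies advance */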
Keir Fraser
2007-Jan-30 09:37 UTC
Re: [Xen-devel] [PATCH] Fix softlockup issue after vcpu hotplug
On 30/1/07 08:26, "Tian, Kevin" <kevin.tian@intel.com> wrote:

> Stamp the softlockup thread before do_timer, because the latter is
> what actually triggers the lockup warning after a long offline
> period. Otherwise, I observed softlockup warnings easily on manual
> vcpu hot-remove/plug, or when a cancelled suspend resumes into the
> old context.

Actually the softlockup check is triggered from run_local_timers(), which is called very near the end of timer_interrupt(). So the existing location for stamping the softlockup thread should be fine.

> One point here is to compare both stolen and blocked time against
> the offline threshold. vcpu hotplug falls into the 'stolen' case,
> but that alone is not enough: since the xen time model is tickless
> at idle, a large block time may be requested, which also trips the
> softlockup thread.

Every vcpu has a softlockup thread which regularly sleeps for some short period. If the vcpu sets a timeout beyond that sleep time then we have a bug. We shouldn't need to take blocked time into account -- Xen already ensures that wakeup latency is accounted as stolen time. Blocked time only includes time which the vcpu was willing to give up because it had no work to do.

 -- Keir
Tian, Kevin
2007-Jan-30 09:54 UTC
RE: [Xen-devel] [PATCH] Fix softlockup issue after vcpu hotplug
>From: Keir Fraser [mailto:Keir.Fraser@cl.cam.ac.uk]
>Sent: 2007-Jan-30 17:38
>
>Actually the softlockup check is triggered from run_local_timers(),
>which is called very near the end of timer_interrupt(). So the
>existing location for stamping the softlockup thread should be fine.

Yep, you're right. For this part I was looking at an old source tree. :-(

>Every vcpu has a softlockup thread which regularly sleeps for some
>short period. If the vcpu sets a timeout beyond that sleep time then
>we have a bug. We shouldn't need to take blocked time into account --
>Xen already ensures that wakeup latency is accounted as stolen time.
>Blocked time only includes time which the vcpu was willing to give up
>because it had no work to do.

If we don't take blocked time into account, maybe we have to disable the softlockup check. Say an idle process gets a timeout value larger than 10s from next_timer_interrupt, and then blocks. If, unfortunately, no other event arrives before that timeout, this vcpu will see a softlockup warning immediately after the timeout, since this period is not categorized as stolen time.

For example, I hot-remove and then hot-plug a vcpu on domU by:

	echo "0" > /sys/devices/system/cpu/cpu3/online
	echo "1" > /sys/devices/system/cpu/cpu3/online

After cpu3 is up, the idle process sometimes gets a big timeout value (0x40000000) from next_timer_interrupt. The virtual timer for that vcpu is then disabled, and the vcpu itself blocks. Some time later (more than 10s), another event (like an IPI) may wake this vcpu. In this case, without including blocked time, I think it is difficult to prevent the softlockup warning.

Another simple way to trigger the warning is to make __xen_suspend() jump to smp_resume immediately after smp_suspend, as a test case for suspend cancel. All vcpus except vcpu0 then hit that warning frequently.

Thanks,
Kevin
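For context, a rough sketch of the tickless-idle path under discussion (the helper names stop_hz_timer and jiffies_to_st follow the xen sparse tree's time-xen.c; the details here are an illustration, not the literal source):

	/* Sketch: before blocking, an idle vcpu programs a one-shot timer
	 * from the next soft-timer expiry.  With an empty timer wheel,
	 * next_timer_interrupt() can return a huge value (e.g. jiffies +
	 * 0x40000000), so the vcpu blocks far past the 10 s threshold. */
	static void stop_hz_timer(void)
	{
		unsigned long j = next_timer_interrupt();
		HYPERVISOR_set_timer_op(jiffies_to_st(j));  /* 0 == "no timeout" */
	}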
Keir Fraser
2007-Jan-30 10:08 UTC
Re: [Xen-devel] [PATCH] Fix softlockup issue after vcpu hotplug
On 30/1/07 09:54, "Tian, Kevin" <kevin.tian@intel.com> wrote:

> If we don't take blocked time into account, maybe we have to disable
> the softlockup check. Say an idle process gets a timeout value larger
> than 10s from next_timer_interrupt, and then blocks. If,
> unfortunately, no other event arrives before that timeout, this vcpu
> will see a softlockup warning immediately after the timeout, since
> this period is not categorized as stolen time.

Presumably softlockup threads are killed and re-created when VCPUs are offlined and onlined. Perhaps the re-creation is taking a long time? But 10s would be a *very* long time. And once it is created and bound to the correct VCPU we should never see long timeouts when blocking (since the softlockup thread timeout is never longer than a few seconds).

Perhaps there is a bug in our cpu onlining code -- a big timeout like that does need investigating. I don't think we can claim this bug is root-caused yet, so it's premature to be applying patches.

 -- Keir
Keir Fraser
2007-Jan-30 10:10 UTC
Re: [Xen-devel] [PATCH] Fix softlockup issue after vcpu hotplug
On 30/1/07 09:54, "Tian, Kevin" <kevin.tian@intel.com> wrote:

> Another simple way to trigger the warning is to make __xen_suspend()
> jump to smp_resume immediately after smp_suspend, as a test case for
> suspend cancel. All vcpus except vcpu0 then hit that warning
> frequently.

Do you know if this problem has been observed across many versions of Xen, or e.g. only after the upgrade to 2.6.18?

 -- Keir
Tian, Kevin
2007-Jan-30 12:11 UTC
RE: [Xen-devel] [PATCH] Fix softlockup issue after vcpu hotplug
>From: Keir Fraser [mailto:Keir.Fraser@cl.cam.ac.uk]
>Sent: 2007-Jan-30 18:09
>
>Presumably softlockup threads are killed and re-created when VCPUs
>are offlined and onlined. Perhaps the re-creation is taking a long
>time?

That should not be the case, since the softlockup warning continues to jump out long after the cpu is brought online.

>But 10s would be a *very* long time. And once it is created and bound
>to the correct VCPU we should never see long timeouts when blocking
>(since the softlockup thread timeout is never longer than a few
>seconds).

Yeah, I noted this point just after sending out the mail.

>Perhaps there is a bug in our cpu onlining code -- a big timeout like
>that does need investigating. I don't think we can claim this bug is
>root-caused yet, so it's premature to be applying patches.

Agree. I'll do more investigation on this point. I just quickly compared the watchdog thread between 2.6.18 and 2.6.16. In 2.6.16, an explicit schedule timeout of 1s is used, while 2.6.18 wakes up the watchdog thread once per second from the timer interrupt (softlockup_tick). One distinct consequence of this change is that the watchdog thread in 2.6.16 has a soft timer registered, while in 2.6.18 it does not. I suspect this may make a difference to the decision of next_timer_interrupt.

By the way, do you think the scheduler may do something to punish a newly onlined vcpu? From the code I didn't see that, since a newly woken vcpu is always boosted... However, in practice I found that the virtual timer interrupt count increased only slowly for that cpu, according to 'cat /proc/interrupts'. Sometimes it may even freeze for dozens of seconds. But yes, this may be the symptom rather than the cause. :-)

Thanks,
Kevin
Tian, Kevin
2007-Jan-30 12:14 UTC
RE: [Xen-devel] [PATCH] Fix softlockup issue after vcpu hotplug
>From: Keir Fraser [mailto:Keir.Fraser@cl.cam.ac.uk]
>Sent: 2007-Jan-30 18:10
>
>Do you know if this problem has been observed across many versions of
>Xen, or e.g. only after the upgrade to 2.6.18?
>
> -- Keir

Dunno yet. I first found this issue when adding lightweight suspend: the softlockup warning jumps out immediately after resuming back into the old context. Then I tried manual cpu hotplug with the same finding. I'll try an old 2.6.16 version as a comparison to see whether it happens there.

Thanks,
Kevin
Tian, Kevin
2007-Jan-30 12:45 UTC
RE: [Xen-devel] [PATCH] Fix softlockup issue after vcpu hotplug
Actually I'm rather interested in this case, where the watchdog thread depends on the timer interrupt to be woken, while the next timer interval depends on the soft timer wheel. For the newly onlined cpu, all the processes previously running on it were migrated to other cpus before it went offline. Thus when it just comes back online, there may be no meaningful timer wheel and few activities on that vcpu. In this case, a (LONG_MAX >> 1) may be returned as a big timeout.

So in this new watchdog model, simply walking the timer wheel is not enough. Maybe we can force the max timeout value to 1s in safe_halt to handle this special case? I'll give that a try, though it will make the current tickless model a bit tick-ful again. :-)

Thanks,
Kevin
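For illustration, a minimal sketch of the proposed clamp (built around the jiffies_to_st() helper quoted later in this thread; the 1 s bound is the value under discussion, and the clamp itself is the proposal, not existing code):

	/* Sketch: clamp the one-shot block timeout handed to Xen to at
	 * most ~1 s ahead, so the watchdog is always woken in time. */
	u64 st = jiffies_to_st(next_timer_interrupt());
	u64 max_block = processed_system_time + (u64)HZ * NS_PER_TICK; /* ~1 s */
	if (st == 0 || st > max_block)
		st = max_block;
	HYPERVISOR_set_timer_op(st);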
Keir Fraser
2007-Jan-30 12:57 UTC
Re: [Xen-devel] [PATCH] Fix softlockup issue after vcpu hotplug
On 30/1/07 12:45 pm, "Tian, Kevin" <kevin.tian@intel.com> wrote:

> Actually I'm rather interested in this case, where the watchdog
> thread depends on the timer interrupt to be woken, while the next
> timer interval depends on the soft timer wheel. For the newly onlined
> cpu, all the processes previously running on it were migrated to
> other cpus before it went offline. Thus when it just comes back
> online, there may be no meaningful timer wheel and few activities on
> that vcpu. In this case, a (LONG_MAX >> 1) may be returned as a big
> timeout.

Yeah, but the thread should get migrated back again (or recreated) in fairly short order. I think we can agree it should take rather less than 10 seconds. :-)

> So in this new watchdog model, simply walking the timer wheel is not
> enough. Maybe we can force the max timeout value to 1s in safe_halt
> to handle this special case? I'll give that a try, though it will
> make the current tickless model a bit tick-ful again. :-)

I'm sure this will fix the issue. But who knows what real underlying issue it might be hiding?

 -- Keir
Keir Fraser
2007-Jan-30 12:58 UTC
Re: [Xen-devel] [PATCH] Fix softlockup issue after vcpu hotplug
On 30/1/07 12:11 pm, "Tian, Kevin" <kevin.tian@intel.com> wrote:

>> Presumably softlockup threads are killed and re-created when VCPUs
>> are offlined and onlined. Perhaps the re-creation is taking a long
>> time?
>
> That should not be the case, since the softlockup warning continues
> to jump out long after the cpu is brought online.

You are confusing the two parts of the softlockup mechanism. The thread is responsible only for periodically touching the watchdog. The warning mechanism is driven off the timer interrupt handler. So it is entirely possible for warnings to appear when the thread does not exist (in fact, if the thread does not exist then we *expect* warnings to appear!).

 -- Keir
Keir Fraser
2007-Jan-30 13:01 UTC
Re: [Xen-devel] [PATCH] Fix softlockup issue after vcpu hotplug
On 30/1/07 12:57 pm, "Keir Fraser" <Keir.Fraser@cl.cam.ac.uk> wrote:

>> So in this new watchdog model, simply walking the timer wheel is not
>> enough. Maybe we can force the max timeout value to 1s in safe_halt
>> to handle this special case? I'll give that a try, though it will
>> make the current tickless model a bit tick-ful again. :-)
>
> I'm sure this will fix the issue. But who knows what real underlying
> issue it might be hiding?

There could be a bug in next_timer_interrupt(), for example. Maybe events a long way out (multiple seconds) don't always get considered, but we are normally saved by the fact that CPUs have a few sooner events also queued up. That may not be the case for a newly onlined CPU, however. This is just an example hypothesis to explain why we need to properly track this down.

 -- Keir
Tian, Kevin
2007-Jan-30 13:09 UTC
RE: [Xen-devel] [PATCH] Fix softlockup issue after vcpu hotplug
>From: Keir Fraser [mailto:Keir.Fraser@cl.cam.ac.uk]
>Sent: 2007-Jan-30 20:57
>
>Yeah, but the thread should get migrated back again (or recreated) in
>fairly short order. I think we can agree it should take rather less
>than 10 seconds. :-)

My test is on an 'idle' domain which does nothing. In this case, I'm not sure whether processes other than the per-cpu kernel threads will be migrated back while a single cpu can still handle them easily. The per-cpu kernel threads will be re-created, yes, but will they be woken within 10s to do anything when there's no meaningful workload on that cpu? This bug may simply not show when the domain is under heavy load...

>I'm sure this will fix the issue. But who knows what real underlying
>issue it might be hiding?
>
> -- Keir

I'm not sure whether it hides something. But the current situation seems like a self-trap to me: the watchdog waits for the timer interrupt to wake it at a 1s interval, while the timer interrupt deliberately schedules a longer interval without considering the watchdog, and then blames the watchdog thread for not running within 10s. :-)

Thanks,
Kevin
Tian, Kevin
2007-Jan-30 13:12 UTC
RE: [Xen-devel] [PATCH] Fix softlockup issue after vcpu hotplug
>From: Keir Fraser [mailto:Keir.Fraser@cl.cam.ac.uk]
>Sent: 2007-Jan-30 20:59
>
>You are confusing the two parts of the softlockup mechanism. The
>thread is responsible only for periodically touching the watchdog.
>The warning mechanism is driven off the timer interrupt handler. So
>it is entirely possible for warnings to appear when the thread does
>not exist (in fact, if the thread does not exist then we *expect*
>warnings to appear!).
>
> -- Keir

I added a debug print inside the warning:

	printk(KERN_ERR "BUG: drift by 0x%lx\n",
	       jiffies - touch_timestamp);

The drift doesn't increase monotonically. Most of the time it is about 1s (an interesting fact!), and occasionally dozens of seconds. But anyway, it indicates that the watchdog thread is still being scheduled. :-)

Thanks,
Kevin
Keir Fraser
2007-Jan-30 13:12 UTC
Re: [Xen-devel] [PATCH] Fix softlockup issue after vcpu hotplug
On 30/1/07 1:09 pm, "Tian, Kevin" <kevin.tian@intel.com> wrote:

>> I'm sure this will fix the issue. But who knows what real underlying
>> issue it might be hiding?
>>
>> -- Keir
>
> I'm not sure whether it hides something. But the current situation
> seems like a self-trap to me: the watchdog waits for the timer
> interrupt to wake it at a 1s interval, while the timer interrupt
> deliberately schedules a longer interval without considering the
> watchdog, and then blames the watchdog thread for not running within
> 10s. :-)

Actually I think you're right -- if this fixes the issue then it points to a problem in the next_timer_interrupt code. So it would actually be interesting to try clamping the timeout to one second.

 -- Keir
Tian, Kevin
2007-Jan-30 13:15 UTC
RE: [Xen-devel] [PATCH] Fix softlockup issue after vcpu hotplug
>From: Tian Kevin
>Sent: 2007-Jan-30 21:12
>
>I added a debug print inside the warning:
>
>	printk(KERN_ERR "BUG: drift by 0x%lx\n",
>	       jiffies - touch_timestamp);
>
>The drift doesn't increase monotonically. Most of the time it is
>about 1s (an interesting fact!), and occasionally dozens of seconds.

Sorry, I meant mostly about 10s here.

Thanks,
Kevin
Tian, Kevin
2007-Jan-30 14:11 UTC
RE: [Xen-devel] [PATCH] Fix softlockup issue after vcpu hotplug
>From: Keir Fraser [mailto:Keir.Fraser@cl.cam.ac.uk]
>Sent: 2007-Jan-30 21:13
>
>Actually I think you're right -- if this fixes the issue then it
>points to a problem in the next_timer_interrupt code. So it would
>actually be interesting to try clamping the timeout to one second.
>
> -- Keir

By a simple change like this:

	@@ -962,7 +962,8 @@ u64 jiffies_to_st(unsigned long j)
	 	} else if (((unsigned long)delta >> (BITS_PER_LONG-3)) != 0) {
	 		/* Very long timeout means there is no pending timer.
	 		 * We indicate this to Xen by passing zero timeout. */
	-		st = 0;
	+		//st = 0;
	+		st = processed_system_time + HZ * (u64)NS_PER_TICK;
	 	} else {
	 		st = processed_system_time + delta * (u64)NS_PER_TICK;
	 	}

I really expected to report this as the root fix; however, I can't, even though the change made things better. I created a domU with 4 VCPUs on a 2-CPU box and tried to hot-remove/plug vcpus 1, 2 and 3 alternately. After about ten rounds of testing, everything was OK. However, several minutes later I saw the warning again, though far less frequently than before. So I have to dig deeper into this bug. The first thing I plan to do is to establish whether such a long timeout is requested as the guest wants, or whether Xen enlarges the timeout underneath... :-(

BTW, do you think it is worth destroying the vcpu in the scheduler when it goes down and re-initializing it into the scheduler when it comes back up? I don't know whether this would affect the scheduler's accounting. Notably, domain save/restore does not show this bug, and one obvious difference from vcpu hotplug is that the domain is restored in a new context...

Thanks,
Kevin

P.S. Some trace log attached. You can see that the drift in each warning is just around 1000 ticks.

[root@localhost ~]# BUG: soft lockup detected on CPU#1!
BUG: drift by 0x41e
 [<c0151301>] softlockup_tick+0xd1/0x100
 [<c01095d4>] timer_interrupt+0x4e4/0x640
 [<c011bbae>] try_to_wake_up+0x24e/0x300
 [<c0151c89>] handle_IRQ_event+0x59/0xa0
 [<c0151d65>] __do_IRQ+0x95/0x120
 [<c010708f>] do_IRQ+0x3f/0xa0
 [<c0103070>] xen_idle+0x0/0x60
 [<c024e355>] evtchn_do_upcall+0xb5/0x120
 [<c0103070>] xen_idle+0x0/0x60
 [<c01057a5>] hypervisor_callback+0x3d/0x48
 [<c0103070>] xen_idle+0x0/0x60
 [<c0109d40>] raw_safe_halt+0x20/0x50
 [<c01030a1>] xen_idle+0x31/0x60
 [<c010316e>] cpu_idle+0x9e/0xf0
BUG: soft lockup detected on CPU#2!
BUG: drift by 0x447
 [<c0151301>] softlockup_tick+0xd1/0x100
 [<c01095d4>] timer_interrupt+0x4e4/0x640
 [<c011bbae>] try_to_wake_up+0x24e/0x300
 [<c0151c89>] handle_IRQ_event+0x59/0xa0
 [<c0151d65>] __do_IRQ+0x95/0x120
 [<c010708f>] do_IRQ+0x3f/0xa0
 [<c0103070>] xen_idle+0x0/0x60
 [<c024e355>] evtchn_do_upcall+0xb5/0x120
 [<c0103070>] xen_idle+0x0/0x60
 [<c01057a5>] hypervisor_callback+0x3d/0x48
 [<c0103070>] xen_idle+0x0/0x60
 [<c0109d40>] raw_safe_halt+0x20/0x50
 [<c01030a1>] xen_idle+0x31/0x60
 [<c010316e>] cpu_idle+0x9e/0xf0
BUG: soft lockup detected on CPU#1!
BUG: drift by 0x43f
 [<c0151301>] softlockup_tick+0xd1/0x100
 [<c01095d4>] timer_interrupt+0x4e4/0x640
 [<c011bbae>] try_to_wake_up+0x24e/0x300
 [<c0151c89>] handle_IRQ_event+0x59/0xa0
 [<c0151d65>] __do_IRQ+0x95/0x120
 [<c010708f>] do_IRQ+0x3f/0xa0
 [<c0103070>] xen_idle+0x0/0x60
 [<c024e355>] evtchn_do_upcall+0xb5/0x120
 [<c0103070>] xen_idle+0x0/0x60
 [<c01057a5>] hypervisor_callback+0x3d/0x48
 [<c0103070>] xen_idle+0x0/0x60
 [<c0109d40>] raw_safe_halt+0x20/0x50
 [<c01030a1>] xen_idle+0x31/0x60
 [<c010316e>] cpu_idle+0x9e/0xf0
BUG: soft lockup detected on CPU#1!
BUG: drift by 0x3ea
 [<c0151301>] softlockup_tick+0xd1/0x100
 [<c01095d4>] timer_interrupt+0x4e4/0x640
 [<c0137699>] __rcu_process_callbacks+0x99/0x100
 [<c0129867>] tasklet_action+0x87/0x130
 [<c0151c89>] handle_IRQ_event+0x59/0xa0
 [<c0151d65>] __do_IRQ+0x95/0x120
 [<c010708f>] do_IRQ+0x3f/0xa0
 [<c0103070>] xen_idle+0x0/0x60
 [<c024e355>] evtchn_do_upcall+0xb5/0x120
 [<c0103070>] xen_idle+0x0/0x60
 [<c01057a5>] hypervisor_callback+0x3d/0x48
 [<c0103070>] xen_idle+0x0/0x60
 [<c0109d40>] raw_safe_halt+0x20/0x50
 [<c01030a1>] xen_idle+0x31/0x60
 [<c010316e>] cpu_idle+0x9e/0xf0
Keir Fraser
2007-Jan-30 14:22 UTC
Re: [Xen-devel] [PATCH] Fix softlockup issue after vcpu hotplug
On 30/1/07 2:11 pm, "Tian, Kevin" <kevin.tian@intel.com> wrote:

> BTW, do you think it is worth destroying the vcpu in the scheduler
> when it goes down and re-initializing it into the scheduler when it
> comes back up? I don't know whether this would affect the scheduler's
> accounting. Notably, domain save/restore does not show this bug, and
> one obvious difference from vcpu hotplug is that the domain is
> restored in a new context...

I wouldn't expect this to make any significant difference to scheduling accounting, certainly over a multi-second time period.

Does the length of time you hot-unplug the vcpu for make a difference to how often you see this problem? Did you try reproducing with a 2.6.16 kernel?

 -- Keir
Tian, Kevin
2007-Jan-30 14:33 UTC
RE: [Xen-devel] [PATCH] Fix softlockup issue after vcpu hotplug
>From: Keir Fraser [mailto:Keir.Fraser@cl.cam.ac.uk]
>Sent: 2007-Jan-30 22:23
>
>Does the length of time you hot-unplug the vcpu for make a difference
>to how often you see this problem? Did you try reproducing with a
>2.6.16 kernel?
>
> -- Keir

I can't tell, since I didn't keep the same pace in each round of manual operation. I tried both an immediate plug after unplug, and intervals longer than 10s. But the first warning jumped out just as I finished the test and was ready to send out the 'good' news. :-(

I'll reproduce on a 2.6.16 kernel tomorrow, because the remote box crashed just now.

Thanks,
Kevin
Tian, Kevin
2007-Jan-30 14:54 UTC
RE: [Xen-devel] [PATCH] Fix softlockup issue after vcpu hotplug
>From: Tian, Kevin
>Sent: 2007-Jan-30 22:34
>
>I'll reproduce on a 2.6.16 kernel tomorrow, because the remote box
>crashed just now.

I have to say the previous change is incomplete, because it only limits the timeout to 1s in the very-long-timeout case (BITS_PER_LONG-3), while excluding cases in the middle. I should apply the 1s limit on all branches, in case a timeout that is not very long, but still longer than 10s, is hit. Due to the crashed box, however, I have to verify that tomorrow too.

Thanks,
Kevin
Graham, Simon
2007-Jan-30 19:29 UTC
RE: [Xen-devel] [PATCH] Fix softlockup issue after vcpu hotplug
> On 30/1/07 09:54, "Tian, Kevin" <kevin.tian@intel.com> wrote:
>
>> Another simple way to trigger the warning is to make __xen_suspend()
>> jump to smp_resume immediately after smp_suspend, as a test case for
>> suspend cancel. All vcpus except vcpu0 then hit that warning
>> frequently.
>
> Do you know if this problem has been observed across many versions of
> Xen, or e.g. only after the upgrade to 2.6.18?

I'm not sure, but I think we've been seeing something very similar when live migrating domains with 3.0.3/2.6.16.29 -- my understanding is that the live migration code takes the domain down to UP, does the migration and then restores SMP. We VERY often see soft lockup messages following this (several times per night in our regression testing), with stack traces identical to those posted by Kevin.

I also added some instrumentation, and in every single case the 'stolen' time is > 5s when we see the soft lockup.

Simon
Tian, Kevin
2007-Jan-31 05:42 UTC
RE: [Xen-devel] [PATCH] Fix softlockup issue after vcpu hotplug
>From: Graham, Simon [mailto:Simon.Graham@stratus.com]
>Sent: 2007-Jan-31 03:29
>
>I'm not sure, but I think we've been seeing something very similar
>when live migrating domains with 3.0.3/2.6.16.29 -- we VERY often see
>soft lockup messages following this, with stack traces identical to
>those posted by Kevin.
>
>I also added some instrumentation, and in every single case the
>'stolen' time is > 5s when we see the soft lockup.

Hi, Simon,
Your case should be different from what I saw; it may be fixed by the original patch I posted, which however doesn't apply to the latest tree. In the 2.6.16 version it is do_timer that calls softlockup_tick, not run_local_timers. So the check on "stolen > 5s" happens too late, allowing the warning to jump out even though the timestamp is adjusted afterwards. Could you try the attached patch to see whether it fixes your live migration case?

Thanks,
Kevin
Tian, Kevin
2007-Jan-31 06:17 UTC
[PATCH][RESEND]RE: [Xen-devel] [PATCH] Fix softlockup issue after vcpu hotplug
>From: Keir Fraser [mailto:Keir.Fraser@cl.cam.ac.uk]
>Sent: 2007-Jan-30 22:23
>
>Does the length of time you hot-unplug the vcpu for make a difference
>to how often you see this problem? Did you try reproducing with a
>2.6.16 kernel?
>
> -- Keir

Hi, Keir,
I verified that the attached patch does fix the issue by restricting the max timeout to 1s. Both vcpu unplug/plug and suspend cancel work fine. The domain has run well through several hours of intensive testing. I also tried 2.6.16, and it is immune to this issue.

I added some debug info to both 2.6.16 and 2.6.18, to print out the delta value whenever delta > 1s. The results further confirm our analysis. In 2.6.16, all the prints are:

	Delta 101 > HZ for cpuN
	Delta 101 > HZ for cpuN
	Delta 101 > HZ for cpuN
	...

While in 2.6.18 they look like:

	Delta 199 > HZ for cpuN
	Delta 156 > HZ for cpuN
	Delta 192 > HZ for cpuN
	Delta 102 > HZ for cpuN
	...

And after unplugging/plugging a cpu:

	Delta 951 > HZ for cpuN
	...

after which the softlockup warning jumps out. So in 2.6.16 the watchdog thread itself bounds the max timeout to about 1s by hooking a timer, while in 2.6.18 the max timeout value is volatile.

So I'm inclined to consider this the fix, since there is no easy way to deduce an appropriate timeout without explicit, hard-coded knowledge of a requirement like the watchdog thread's. What do you think? :-)

P.S. The warning reported by Simon on 2.6.16 may be fixed by my previous patch, due to the late check.

Thanks,
Kevin
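For reference, a sketch of the instrumentation described above (its placement inside jiffies_to_st() is an assumption based on this thread, not the literal debug patch):

	/* Sketch: report any requested block timeout longer than 1 s.
	 * Placement and exact form are assumptions, not the real patch. */
	if (delta > HZ)
		printk(KERN_ERR "Delta %ld > HZ for cpu%d\n",
		       (long)delta, smp_processor_id());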
Graham, Simon
2007-Jan-31 20:27 UTC
RE: [Xen-devel] [PATCH] Fix softlockup issue after vcpu hotplug
> Hi, Simon,
> Your case should be different from what I saw; it may be fixed by the
> original patch I posted, which however doesn't apply to the latest
> tree. In the 2.6.16 version it is do_timer that calls
> softlockup_tick, not run_local_timers. So the check on "stolen > 5s"
> happens too late, allowing the warning to jump out even though the
> timestamp is adjusted afterwards. Could you try the attached patch to
> see whether it fixes your live migration case?

Thanks -- that explains why the original patch didn't work! I will try this out and see how it goes.

Simon
Graham, Simon
2007-Feb-01 14:31 UTC
RE: [Xen-devel] [PATCH] Fix softlockup issue after vcpu hotplug
Kevin,

> Could you try the attached patch to see whether it fixes your live
> migration case?

So, I tried this last night. I don't see any problems following live migration, but I am still seeing soft lockups, all of which are related to cases where there is a large stolen value. I haven't looked at all the logs yet, but I did see a couple of things:

1. There were a ton of occasions when the test for stolen > 5s fired but the value of stolen was actually negative -- is a negative stolen value expected? If so, I think the patch needs to be modified to define stolen_threshold as s64 instead of u64...

2. Following save/restore, I see absolutely massive positive values of stolen, of the order of the time the domain was saved for (seems reasonable), but then I immediately see a soft lockup even though we touched the watchdog. Shouldn't this patch also fix soft lockup after save/restore?

3. I actually saw a bunch of cases where there was a mongo stolen value during apparently normal operation (in the ones I've looked at, the system as a whole was not particularly stressed). I need to work out exactly why the domain is not being scheduled, but in the meantime, shouldn't this patch stop the incorrect soft lockup in DomU when the hypervisor fails to schedule the domain for a long period? (Not exactly related to VCPU hotplug, I know.)

Simon
Keir Fraser
2007-Feb-01 18:24 UTC
Re: [Xen-devel] [PATCH] Fix softlockup issue after vcpu hotplug
On 1/2/07 14:31, "Graham, Simon" <Simon.Graham@stratus.com> wrote:

> 3. I actually saw a bunch of cases where there was a mongo stolen
> value during apparently normal operation (in the ones I've looked at,
> the system as a whole was not particularly stressed). I need to work
> out exactly why the domain is not being scheduled, but in the
> meantime, shouldn't this patch stop the incorrect soft lockup in DomU
> when the hypervisor fails to schedule the domain for a long period?

No, the patch that Kevin provided cannot work because it touches the watchdog before jiffies has been updated. Since both the jiffy update and the watchdog check happen inside do_timer(), this is a hard problem to fix for Linux 2.6.16. You could push the watchdog touch inside the loop that calls do_timer(): I think that would work!

 -- Keir
Keir Fraser
2007-Feb-01 18:41 UTC
Re: [PATCH][RESEND]RE: [Xen-devel] [PATCH] Fix softlockup issue after vcpu hotplug
On 31/1/07 06:17, "Tian, Kevin" <kevin.tian@intel.com> wrote:

> So in 2.6.16 the watchdog thread itself bounds the max timeout to
> about 1s by hooking a timer, while in 2.6.18 the max timeout value is
> volatile.

But the softlockup thread implementation has not changed between 2.6.16 and 2.6.18. The periodic delay is caused by the thread itself calling msleep_interruptible(1000) which should, as part of its implementation, queue up a timer. So this erratic behaviour on 2.6.18 is still worrying -- it's certainly concerning that clamping the time we block for makes a difference. It would seem to imply that we sometimes miss work to do when we block (e.g., perhaps it is on a timer wheel that we do not check, or there is some outstanding rcu work that we do not check for, or something like that).

It feels like maybe something about the way that deferred work is managed has changed a bit in core Linux and the Xen parts haven't caught up. :-)

 -- Keir
Graham, Simon
2007-Feb-01 18:54 UTC
RE: [Xen-devel] [PATCH] Fix softlockup issue after vcpu hotplug
> No, the patch that Kevin provided cannot work because it touches the
> watchdog before jiffies has been updated. Since both the jiffy update
> and the watchdog check happen inside do_timer(), this is a hard
> problem to fix for Linux 2.6.16. You could push the watchdog touch
> inside the loop that calls do_timer(): I think that would work!

Thanks Keir -- I think it's time I moved to 3.0.4 and the later kernel! Once I've done that, I'll get back to seeing if this issue still exists.

Simon
Graham, Simon
2007-Feb-01 22:40 UTC
RE: [Xen-devel] [PATCH] Fix softlockup issue after vcpu hotplug
> No, the patch that Kevin provided cannot work because it touches the
> watchdog before jiffies has been updated. Since both the jiffy update
> and the watchdog check happen inside do_timer(), this is a hard
> problem to fix for Linux 2.6.16. You could push the watchdog touch
> inside the loop that calls do_timer(): I think that would work!

OK, I've spent a little time today really understanding this (hopefully!) and I think I now know why none of the patches to date (for 2.6.16 anyway) work -- the problem is that they only touch the wdt one time, BUT timer_interrupt in time-xen.c has a loop that repeatedly calls do_timer to advance jiffies and check for timeout until the entire delta since the last call is accounted for... any single one of those do_timer calls might result in a watchdog timer expiration.

It's also not really correct to only touch the watchdog if the stolen time is > 5s -- you might currently be sitting at 8s since the watchdog was last updated, get called after 2s of stolen time, and that will cause a timeout. What's more, if you get called with more than 20s of stolen time (e.g. after save/restore or pause/unpause), you really need to tickle the watchdog timer multiple times (at least once for every 10s worth of jiffies in the total stolen time).

So, my proposal (patch attached, for 2.6.16) is to touch the watchdog inside the loop that calls do_timer(), right after the call, IF the remaining amount of stolen time is greater than NS_PER_TICK -- since each call to do_timer advances jiffies by one, this could only go wrong if there was only a single jiffy left until the watchdog timer expired on entry, and I think that's OK! See the sketch below.

I also considered only touching the watchdog timer every 5s or so, but I think the code to do that would have more overhead than simply touching it for every do_timer() call (since it's just a call that copies jiffies to the per-cpu watchdog timer value).

Take a look and let me know what you think (the printk could be removed -- I just put it in so I could tell the code was running).

Simon
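For illustration, a sketch of the touch point Simon describes, using the catch-up loop structure from timer_interrupt() in the 2.6.16 time-xen.c (variable names are taken from the discussion; this is not the attached patch):

	/* Sketch: touch the watchdog after each do_timer() while stolen
	 * time remains to be accounted, so no single jiffy advance can
	 * trip the 10 s check.  An illustration, not the attached patch. */
	while (delta >= NS_PER_TICK) {
		delta -= NS_PER_TICK;
		processed_system_time += NS_PER_TICK;
		do_timer(regs);                 /* advances jiffies; may warn */
		if (stolen >= NS_PER_TICK) {
			touch_softlockup_watchdog();
			stolen -= NS_PER_TICK;
		}
	}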
Keir Fraser
2007-Feb-01 23:26 UTC
Re: [Xen-devel] [PATCH] Fix softlockup issue after vcpu hotplug
On 1/2/07 22:40, "Graham, Simon" <Simon.Graham@stratus.com> wrote:

> I also considered only touching the watchdog timer every 5s or so,
> but I think the code to do that would have more overhead than simply
> touching it for every do_timer() call (since it's just a call that
> copies jiffies to the per-cpu watchdog timer value).
>
> Take a look and let me know what you think (the printk could be
> removed -- I just put it in so I could tell the code was running).

The test inside the loop should check against NS_PER_TICK*100, not just 0. You only want to override the usual running of the watchdog if you get a big chunk of time stolen from you. Actually, five seconds (NS_PER_TICK*5*HZ) would be good: no reason to make the comparison dependent on the duration of a jiffy.

 -- Keir
Graham, Simon
2007-Feb-01 23:44 UTC
RE: [Xen-devel] [PATCH] Fix softlockup issue after vcpu hotplug
> The test inside the loop should check against NS_PER_TICK*100, not
> just 0. You only want to override the usual running of the watchdog
> if you get a big chunk of time stolen from you. Actually, five
> seconds (NS_PER_TICK*5*HZ) would be good: no reason to make the
> comparison dependent on the duration of a jiffy.

I thought about this -- the problem is I don't know what the current value of the watchdog is, so if stolen is greater than zero, I need to touch it once immediately and then once every 5s or so in the loop. I can't just touch it the first n times through the loop, because then I might do 10s worth of jiffy updates following all the watchdog touches... (BTW, the test for NS_PER_TICK*100 was just for the purposes of instrumentation.)

So, to move to a scheme where we only touch the watchdog every 5s of simulated time, I'd need to track whether it has been 5s since the last touch. That would mean maintaining another variable recording when I last updated the watchdog, and I thought that would actually be more overhead than simply updating it every time round the loop.

I do agree that the test should be against NS_PER_TICK rather than 0 -- I'll make that change. If you really think it's bad to touch the watchdog on each loop, then I'd suggest doing something like this:

	int next_wd = 0;

	/* System-wide jiffy work. */
	while (delta >= NS_PER_TICK) {
		delta -= NS_PER_TICK;
		processed_system_time += NS_PER_TICK;
		do_timer(regs);
		if (adjust_watchdog >= NS_PER_TICK) {
			if (next_wd == 0) {
				/* Avoid lockup warnings */
				touch_softlockup_watchdog();
				next_wd = HZ*5; /* don't touch again for 5s */
			} else
				next_wd--;
			adjust_watchdog -= NS_PER_TICK;
		}
	}

Simon
Keir Fraser
2007-Feb-01 23:58 UTC
Re: [Xen-devel] [PATCH] Fix softlockup issue after vcpu hotplug
On 1/2/07 23:44, "Graham, Simon" <Simon.Graham@stratus.com> wrote:

> I thought about this -- the problem is I don't know what the current
> value of the watchdog is, so if stolen is greater than zero, I need
> to touch it once immediately and then once every 5s or so in the
> loop. I can't just touch it the first n times through the loop,
> because then I might do 10s worth of jiffy updates following all the
> watchdog touches... (BTW, the test for NS_PER_TICK*100 was just for
> the purposes of instrumentation.)

I don't mean to touch it only every 5s in the loop, I mean to touch it every time round the loop but only if stolen is greater than five seconds:

	while (delta >= NS_PER_TICK) {
		...;
		if (stolen > <five seconds>)
			touch_softlockup_watchdog();
	}

The point is that you don't want to touch the watchdog whenever you have small amounts of time stolen from you, because that will happen very often (wakeup latencies, preemption) and cause the watchdog to not do its job properly and/or in a timely fashion when something *does* go wrong. If you touch it just about every time you enter the timer ISR, you may as well disable the softlockup mechanism altogether! :-)

The only theoretical problem with this approach is if you got time stolen that accumulated to more than five seconds, but in two or more bursts, back-to-back. Then no one stolen period would be enough to trigger the touch, but the guest might also not run long enough to schedule the softlockup thread. I really don't believe this would ever be an issue in practice, however, given sane scheduling parameters and load on the system. If the system were loaded/configured so it could happen, the guest would be in dire straits for other reasons.

 -- Keir
Tian, Kevin
2007-Feb-02 01:10 UTC
RE: [PATCH][RESEND]RE: [Xen-devel] [PATCH] Fix softlockup issue after vcpu hotplug
>From: Keir Fraser [mailto:Keir.Fraser@cl.cam.ac.uk]
>Sent: 2007-Feb-02 02:42
>
>But the softlockup thread implementation has not changed between
>2.6.16 and 2.6.18. The periodic delay is caused by the thread itself
>calling msleep_interruptible(1000) which should, as part of its
>implementation, queue up a timer. So this erratic behaviour on 2.6.18
>is still worrying --

So, am I looking at the wrong code? In 2.6.16:

	while (!kthread_should_stop()) {
		msleep_interruptible(1000);
		touch_softlockup_watchdog();
	}

While in 2.6.18:

	while (!kthread_should_stop()) {
		set_current_state(TASK_INTERRUPTIBLE);
		touch_softlockup_watchdog();
		schedule();
	}

I don't think the same logic is kept there. :-)

Thanks,
Kevin
Tian, Kevin
2007-Feb-02 01:12 UTC
RE: [Xen-devel] [PATCH] Fix softlockup issue after vcpu hotplug
>From: Keir Fraser [mailto:Keir.Fraser@cl.cam.ac.uk]
>Sent: 2007-Feb-02 02:25
>
>No, the patch that Kevin provided cannot work because it touches the
>watchdog before jiffies has been updated. Since both the jiffy update
>and the watchdog check happen inside do_timer(), this is a hard
>problem to fix for Linux 2.6.16. You could push the watchdog touch
>inside the loop that calls do_timer(): I think that would work!
>
> -- Keir

Agree.

Thanks,
Kevin
Keir Fraser
2007-Feb-02 01:23 UTC
Re: [PATCH][RESEND]RE: [Xen-devel] [PATCH] Fix softlockup issue after vcpu hotplug
On 2/2/07 01:10, "Tian, Kevin" <kevin.tian@intel.com> wrote:

> So, am I looking at the wrong code? In 2.6.16:
>
> 	while (!kthread_should_stop()) {
> 		msleep_interruptible(1000);
> 		touch_softlockup_watchdog();
> 	}
>
> While in 2.6.18:
>
> 	while (!kthread_should_stop()) {
> 		set_current_state(TASK_INTERRUPTIBLE);
> 		touch_softlockup_watchdog();
> 		schedule();
> 	}
>
> I don't think the same logic is kept there. :-)

Fair point! I must have compared two 2.6.16 trees...

Well, that is interesting. I have no idea how SCHED_FIFO/sched_priority=99 interacts with timer wheels and/or tickless idle modes. I wonder why this was changed at all? Perhaps a question for lkml...

 -- Keir
Tian, Kevin
2007-Feb-02 01:29 UTC
RE: [PATCH][RESEND]RE: [Xen-devel] [PATCH] Fix softlockup issue after vcpu hotplug
>From: Keir Fraser [mailto:Keir.Fraser@cl.cam.ac.uk]
>Sent: 2007-Feb-02 09:23
>
>Fair point! I must have compared two 2.6.16 trees...
>
>Well, that is interesting. I have no idea how
>SCHED_FIFO/sched_priority=99 interacts with timer wheels and/or
>tickless idle modes. I wonder why this was changed at all? Perhaps a
>question for lkml...
>
> -- Keir

Yeah, that's the question. I can post it to lkml for an answer. But in the meantime, do you think this patch is OK to accept into the xen tree or not? Whatever reason lkml may have had to change that logic, we have to make it work correctly under xen... ;-)

BTW, I'm not sure about the generic tickless model, but at least in 2.6.18, s390 seems to be the only user of CONFIG_NO_IDLE_HZ, and it disables the softlockup check instead.

Thanks,
Kevin
Keir Fraser
2007-Feb-02 01:39 UTC
Re: [PATCH][RESEND]RE: [Xen-devel] [PATCH] Fix softlockup issue after vcpu hotplug
On 2/2/07 01:23, "Keir Fraser" <Keir.Fraser@cl.cam.ac.uk> wrote:

> Fair point! I must have compared two 2.6.16 trees...
>
> Well, that is interesting. I have no idea how
> SCHED_FIFO/sched_priority=99 interacts with timer wheels and/or
> tickless idle modes. I wonder why this was changed at all? Perhaps a
> question for lkml...

Odder still, the softlockup threads are SCHED_FIFO/99 in 2.6.16 too. So the main change is that rather than an explicit sleep of one second, the thread now sleeps as TASK_INTERRUPTIBLE. I wonder how it gets kicked back to TASK_RUNNING?

http://lwn.net/Articles/173648/ is worrying, since it seems to state that the patch intends to make the thread timer-interrupt driven rather than soft-timer driven. If that means it is jiffy-ticker driven, then perhaps the softlockup module is incompatible with tickless idle mode (no-idle-hz). I should probably ask Ingo about it.

 -- Keir
Keir Fraser
2007-Feb-02 01:41 UTC
Re: [PATCH][RESEND]RE: [Xen-devel] [PATCH] Fix softlockup issue after vcpu hotplug
On 2/2/07 01:29, "Tian, Kevin" <kevin.tian@intel.com> wrote:

> Yeah, that's the question. I can post it to lkml for an answer. But
> in the meantime, do you think this patch is OK to accept into the xen
> tree or not? Whatever reason lkml may have had to change that logic,
> we have to make it work correctly under xen... ;-)
>
> BTW, I'm not sure about the generic tickless model, but at least in
> 2.6.18, s390 seems to be the only user of CONFIG_NO_IDLE_HZ, and it
> disables the softlockup check instead.

We may have to do the same. If the softlockup mechanism is incompatible with no-idle-hz then we must either:

 1. Fix the softlockup mechanism (or provide a fallback implementation for no-idle-hz).
 2. Disable the softlockup mechanism.
 3. Disable no-idle-hz, or keep some fallback rate of ticks (your approach).

My own opinion is we should do (1) or (2).

 -- Keir
Keir Fraser
2007-Feb-02 02:06 UTC
Re: [PATCH][RESEND]RE: [Xen-devel] [PATCH] Fix softlockup issue after vcpu hotplug
On 2/2/07 01:39, "Keir Fraser" <Keir.Fraser@cl.cam.ac.uk> wrote:

> Odder still, the softlockup threads are SCHED_FIFO/99 in 2.6.16 too.
> So the main change is that rather than an explicit sleep of one
> second, the thread now sleeps as TASK_INTERRUPTIBLE. I wonder how it
> gets kicked back to TASK_RUNNING?
>
> http://lwn.net/Articles/173648/ is worrying, since it seems to state
> that the patch intends to make the thread timer-interrupt driven
> rather than soft-timer driven. If that means it is jiffy-ticker
> driven, then perhaps the softlockup module is incompatible with
> tickless idle mode (no-idle-hz).

Okay, I now see how this works -- the thread is kicked from softlockup_tick(), from the timer ISR. So this wakeup event is hidden from next_timer_interrupt(), which only searches timer wheels and hrtimers.

The strictly correct fix here is to make next_timer_interrupt() softlockup-aware. I would say it is currently incorrect in the presence of softlockup, since it is not doing its job (telling an idle process the next time-based event it must wake up for).

We can do this by adding a softlockup_get_next_event(), called from the bottom of next_timer_interrupt(). I would pass it the current return value and have it return an adjusted value: in the absence of softlockup it would simply return its argument unmodified; in the presence of softlockup it would return a sooner value if softlockup is the next event to fire.

Do you want to try coding this up?

 -- Keir
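For illustration, a minimal sketch of the hook being proposed here (this is the proposal under discussion, not existing kernel code; the per-cpu touch_timestamp variable is assumed from 2.6.18's kernel/softlockup.c):

	/* Sketch of the proposed softlockup-aware adjustment.  With
	 * softlockup disabled, a stub would return next_event unmodified. */
	unsigned long softlockup_get_next_event(unsigned long next_event)
	{
		/* The watchdog must be fed roughly once per second. */
		unsigned long wd_next =
			per_cpu(touch_timestamp, smp_processor_id()) + HZ;

		return time_before(wd_next, next_event) ? wd_next : next_event;
	}

	/* At the bottom of next_timer_interrupt():
	 *	expires = softlockup_get_next_event(expires);
	 */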
Tian, Kevin
2007-Feb-02 02:48 UTC
RE: [PATCH][RESEND]RE: [Xen-devel] [PATCH] Fix softlockup issue after vcpu hotplug
>From: Keir Fraser [mailto:Keir.Fraser@cl.cam.ac.uk]
>Sent: 2007-Feb-02 10:07
>
>Okay, I now see how this works -- the thread is kicked from
>softlockup_tick(), from the timer ISR. So this wakeup event is hidden
>from next_timer_interrupt(), which only searches timer wheels and
>hrtimers.

Exactly.

>The strictly correct fix here is to make next_timer_interrupt()
>softlockup-aware. I would say it is currently incorrect in the
>presence of softlockup, since it is not doing its job (telling an
>idle process the next time-based event it must wake up for).
>
>We can do this by adding a softlockup_get_next_event(), called from
>the bottom of next_timer_interrupt(). I would pass it the current
>return value and have it return an adjusted value: in the absence of
>softlockup it would simply return its argument unmodified; in the
>presence of softlockup it would return a sooner value if softlockup
>is the next event to fire.
>
>Do you want to try coding this up?
>
> -- Keir

Sure.

Thanks,
Kevin
Graham, Simon
2007-Feb-02 03:47 UTC
RE: [Xen-devel] [PATCH] Fix softlockup issue after vcpu hotplug
> I don't mean to touch it only every 5s in the loop, I mean to touch it
> every time round the loop but only if stolen is greater than five
> seconds:

Ah right -- got it now; good point.

> The only theoretical problem with this approach is if you got stolen
> time that accumulated to more than five seconds, but this happened in
> two or more bursts, back-to-back. Then no one stolen period would be
> enough to trigger the touch, but also the guest may not be running for
> long enough to schedule the softlockup thread. I really don't believe
> this would ever be an issue in practice, however, given sane scheduling
> parameters and load on the system. If the system were loaded/configured
> so it could happen, the guest would be in dire straits for other
> reasons.

How about using a slightly smaller value like 1 or 2s -- something
larger than the expected wakeup latency etc. but small enough that it
would take multiple back-to-back bursts to hit 10s...

Simon
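As a rough illustration of what is being debated, the touch in the Xen
timer_interrupt() stolen-time accounting would look something like this. This
is a sketch only: the stolen/NS_PER_TICK names are recalled from that era's
arch/i386/kernel/time-xen.c, and the constant is exactly the knob in question:

/* arch/i386/kernel/time-xen.c, inside timer_interrupt() -- sketch */
if (stolen > 0) {
        /* ... account the stolen nanoseconds as usual ... */

        /*
         * If this single burst of stolen time exceeds the threshold,
         * reset the watchdog so offline time is not reported as a
         * lockup. Keir used 5s; Simon argues for 1-2s so that
         * back-to-back bursts cannot quietly add up to 10s.
         */
        if (stolen > 5LL * NSEC_PER_SEC)
                touch_softlockup_watchdog();
}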
Tian, Kevin
2007-Feb-02 04:36 UTC
RE: [Xen-devel] [PATCH] Fix softlockup issue after vcpu hotplug
>From: Graham, Simon [mailto:Simon.Graham@stratus.com]
>Sent: 2 February 2007 11:47
>> The only theoretical problem with this approach is if you got stolen
>> time that accumulated to more than five seconds, but this happened in
>> two or more bursts, back-to-back. Then no one stolen period would be
>> enough to trigger the touch, but also the guest may not be running for
>> long enough to schedule the softlockup thread. I really don't believe
>> this would ever be an issue in practice, however, given sane
>> scheduling parameters and load on the system. If the system were
>> loaded/configured so it could happen, the guest would be in dire
>> straits for other reasons.
>
>How about using a slightly smaller value like 1 or 2s -- something
>larger than the expected wakeup latency etc. but small enough that it
>would take multiple back-to-back bursts to hit 10s...
>
>Simon

If you are really concerned about this value, how about making it
configurable with a default, or even a boot option? The smaller the
value, the weaker the watchdog thread becomes at catching weird
behaviour; conversely, a bigger value enlarges the possibility of
accumulating stolen time across bursts. For now, though, we have no
concrete example showing how frequently back-to-back bursts happen, or
whether the accumulated case happens at all. Most likely it would be on
some heavily loaded system with many vcpus -- but in that case,
scalability issues in other areas would probably jump out first. :-)

Thanks,
Kevin
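A boot option of the kind Kevin suggests could be wired up roughly as follows.
The parameter name softlockup_stolen_thresh and the seconds granularity are
invented for illustration; nothing like this was actually posted to the thread:

/* Hypothetical boot option: softlockup_stolen_thresh=<seconds> */
#include <linux/init.h>
#include <linux/kernel.h>

static unsigned long stolen_thresh_secs = 5;    /* default, per the thread */

static int __init set_stolen_thresh(char *str)
{
        unsigned long secs = simple_strtoul(str, NULL, 0);

        if (secs)
                stolen_thresh_secs = secs;
        return 1;       /* option consumed */
}
__setup("softlockup_stolen_thresh=", set_stolen_thresh);

/*
 * timer_interrupt() would then test
 *        stolen > (u64)stolen_thresh_secs * NSEC_PER_SEC
 * instead of a hard-coded constant.
 */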
Tian, Kevin
2007-Feb-02 13:57 UTC
RE: [PATCH][RESEND]RE: [Xen-devel] [PATCH] Fix softlockup issue after vcpu hotplug
Hi, Keir,
	Please check whether the attached patch matches your suggestion.
Tested OK with vcpu hotplug and save/restore.

Thanks,
Kevin

>From: Tian, Kevin
>Sent: 2 February 2007 10:49
>>Okay, I now see how this works -- the thread is kicked from
>>softlockup_tick(), from the timer ISR. So this wakeup event is hidden
>>from next_timer_interrupt(), which only searches the timer wheels and
>>hrtimers.
>
>Exactly.
>
>>The strictly correct fix here is to make next_timer_interrupt()
>>softlockup-aware. I would say it is currently incorrect in the presence
>>of softlockup, since it is not doing its job (telling an idle process
>>what the next time-based event is that it must wake up for).
>>
>>We can do this by adding a softlockup_get_next_event(), called from the
>>bottom of next_timer_interrupt(). I would pass it the current return
>>value and have it return an adjusted value: in the absence of
>>softlockup it would simply return its argument unmodified; in the
>>presence of softlockup it would return a sooner value if softlockup is
>>the next event to fire.
>>
>>Do you want to try coding this up?
>>
>> -- Keir
>
>Sure.
>
>Thanks,
>Kevin
Keir Fraser
2007-Feb-02 15:05 UTC
Re: [Xen-devel] [PATCH] Fix softlockup issue after vcpu hotplug
On 2/2/07 03:47, "Graham, Simon" <Simon.Graham@stratus.com> wrote:

> How about using a slightly smaller value like 1 or 2s -- something
> larger than the expected wakeup latency etc. but small enough that it
> would take multiple back-to-back bursts to hit 10s...

The choice of 5s was hardly scientific. I would say a choice between,
say, 2s and 5s is just fine and unlikely to really affect the likelihood
of false positives.

 -- Keir
Tian, Kevin
2007-Feb-04 12:30 UTC
RE: [PATCH][RESEND]RE: [Xen-devel] [PATCH] Fix softlockup issue after vcpu hotplug
Keir, what's your opinion on this version? Maybe I missed your reply...

Thanks,
Kevin

>-----Original Message-----
>From: Tian, Kevin
>Sent: 2 February 2007 21:57
>To: Tian, Kevin; Keir Fraser; xen-devel@lists.xensource.com
>Subject: RE: [PATCH][RESEND]RE: [Xen-devel] [PATCH] Fix softlockup
>issue after vcpu hotplug
>
>Hi, Keir,
>	Please check whether the attached patch matches your suggestion.
>Tested OK with vcpu hotplug and save/restore.
>
>Thanks,
>Kevin
>
>>From: Tian, Kevin
>>Sent: 2 February 2007 10:49
>>>Okay, I now see how this works -- the thread is kicked from
>>>softlockup_tick(), from the timer ISR. So this wakeup event is hidden
>>>from next_timer_interrupt(), which only searches the timer wheels and
>>>hrtimers.
>>
>>Exactly.
>>
>>>The strictly correct fix here is to make next_timer_interrupt()
>>>softlockup-aware. I would say it is currently incorrect in the
>>>presence of softlockup, since it is not doing its job (telling an idle
>>>process what the next time-based event is that it must wake up for).
>>>
>>>We can do this by adding a softlockup_get_next_event(), called from
>>>the bottom of next_timer_interrupt(). I would pass it the current
>>>return value and have it return an adjusted value: in the absence of
>>>softlockup it would simply return its argument unmodified; in the
>>>presence of softlockup it would return a sooner value if softlockup is
>>>the next event to fire.
>>>
>>>Do you want to try coding this up?
>>>
>>> -- Keir
>>
>>Sure.
>>
>>Thanks,
>>Kevin
Keir Fraser
2007-Feb-04 14:41 UTC
Re: [PATCH][RESEND]RE: [Xen-devel] [PATCH] Fix softlockup issue after vcpu hotplug
I'm travelling, so I'm not getting through the patch queue as quickly as
usual. The patch looks fine -- it may get checked in today.

 -- Keir

On 4/2/07 12:30, "Tian, Kevin" <kevin.tian@intel.com> wrote:

> Keir, what's your opinion on this version? Maybe I missed your reply...
>
> Thanks,
> Kevin
>
>> -----Original Message-----
>> From: Tian, Kevin
>> Sent: 2 February 2007 21:57
>> To: Tian, Kevin; Keir Fraser; xen-devel@lists.xensource.com
>> Subject: RE: [PATCH][RESEND]RE: [Xen-devel] [PATCH] Fix softlockup
>> issue after vcpu hotplug
>>
>> Hi, Keir,
>> Please check whether the attached patch matches your suggestion.
>> Tested OK with vcpu hotplug and save/restore.
>>
>> Thanks,
>> Kevin
>>
>>> From: Tian, Kevin
>>> Sent: 2 February 2007 10:49
>>>> Okay, I now see how this works -- the thread is kicked from
>>>> softlockup_tick(), from the timer ISR. So this wakeup event is
>>>> hidden from next_timer_interrupt(), which only searches the timer
>>>> wheels and hrtimers.
>>>
>>> Exactly.
>>>
>>>> The strictly correct fix here is to make next_timer_interrupt()
>>>> softlockup-aware. I would say it is currently incorrect in the
>>>> presence of softlockup, since it is not doing its job (telling an
>>>> idle process what the next time-based event is that it must wake up
>>>> for).
>>>>
>>>> We can do this by adding a softlockup_get_next_event(), called from
>>>> the bottom of next_timer_interrupt(). I would pass it the current
>>>> return value and have it return an adjusted value: in the absence of
>>>> softlockup it would simply return its argument unmodified; in the
>>>> presence of softlockup it would return a sooner value if softlockup
>>>> is the next event to fire.
>>>>
>>>> Do you want to try coding this up?
>>>>
>>>> -- Keir
>>>
>>> Sure.
>>>
>>> Thanks,
>>> Kevin
Graham, Simon
2007-Mar-01 19:12 UTC
[Xen-devel] [PATCH] 3.0.4: Fix softlockup issue after migration
We had a discussion about this back at the beginning of Jan; in the end,
I made a patch for 3.0.4 that seems to fix the soft lockups when a
domain is not scheduled for a significant amount of time (for example,
following migration). I'm submitting this for inclusion in 3.0.4 rather
than unstable because the fix would be different in unstable (I'm not
even sure it's required there yet).

The basic idea is to keep touching the watchdog timer inside the loop
that simulates all the timer ticks that have been missed. We've been
running this for the past month and it has completely fixed the spurious
soft lockups whilst not masking real soft lockups due to hangs inside
the domain.

Simon

--------------------------------------------------------------
Remove spurious soft lockups that occur when a domain is not scheduled
for a long time.

Signed-off-by: Simon Graham <Simon.graham@stratus.com>
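The shape of the change Simon describes, against the 3.0.4 (2.6.16-based)
time-xen.c tick-replay loop, is roughly the following. Variable names and the
do_timer(regs) signature are recalled from that era and should be treated as
assumptions rather than the literal patch:

/* arch/i386/kernel/time-xen.c, timer_interrupt() -- sketch of the fix */
while (delta >= NS_PER_TICK) {
        delta -= NS_PER_TICK;
        processed_system_time += NS_PER_TICK;
        do_timer(regs);                 /* replay one missed tick */

        /*
         * Each replayed tick may run softlockup_tick(); touch the
         * watchdog as we go so that catching up after migration (or
         * any long unscheduled period) is not flagged as a lockup.
         */
        touch_softlockup_watchdog();
}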