remove HVM halt timer.
It's no longer needed since interrupts can wake the vcpu up now; using
vcpu_unblock instead of vcpu_kick because timer callback functions are
executed on the processor the target vcpu is on.

Signed-off-by: Xin Li <xin.b.li@intel.com>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
On 10/11/06 8:04 am, "Li, Xin B" <xin.b.li@intel.com> wrote:

> remove HVM halt timer.
> It's no longer needed since interrupts can wake the vcpu up now; using
> vcpu_unblock instead of vcpu_kick because timer callback functions are
> executed on the processor the target vcpu is on.

Why do you replace use of the schedop_block hypercall with direct setting
of the blocked flag and softirq?

Also vcpu_kick() has negligible extra cost compared with vcpu_unblock()
and it's less confusing just to use it everywhere. Most people don't
remember the semantic difference.

 -- Keir
>> remove HVM halt timer.
>> It's no longer needed since interrupts can wake the vcpu up now; using
>> vcpu_unblock instead of vcpu_kick because timer callback functions are
>> executed on the processor the target vcpu is on.
>
> Why do you replace use of the schedop_block hypercall with direct
> setting of the blocked flag and softirq?

Currently there are 3 points where vmx may get scheduled out:
1) just before vmentry in exits.S
2) wait_on_xen_event_channel in hvm_do_resume, but I think now it's
   never reachable.
3) in hvm_hlt.
Actually 3 can be merged into 1, and we can collect some statistics
there. See prepare_wait_on_xen_event_channel, it's done the same way.

> Also vcpu_kick() has negligible extra cost compared with vcpu_unblock()
> and it's less confusing just to use it everywhere. Most people don't
> remember the semantic difference.

OK, I'll change back to use vcpu_kick; I also feel vcpu_unblock is
somewhat misleading.
-Xin
On 10/11/06 8:34 am, "Li, Xin B" <xin.b.li@intel.com> wrote:

>> Why do you replace use of the schedop_block hypercall with direct
>> setting of the blocked flag and softirq?
>
> Currently there are 3 points where vmx may get scheduled out:
> 1) just before vmentry in exits.S
> 2) wait_on_xen_event_channel in hvm_do_resume, but I think now it's
>    never reachable.
> 3) in hvm_hlt.
> Actually 3 can be merged into 1, and we can collect some statistics
> there. See prepare_wait_on_xen_event_channel, it's done the same way.

The advantage of the blocking hypercall is that it safely checks
local_events_need_delivery() before actually blocking. This is necessary
to avoid wakeup-waiting races. Now, this doesn't actually work for HVM
guests (yet) as it checks the wrong condition (event-channel upcall flag
rather than state of the PIC/LAPIC) *but* it will mean that when the
local_events_need_delivery() macro is fixed, the blocking hypercall will
automatically do the right thing for us. The blocking code in your patch
will not.

If you want to collect stats when you're scheduled out, the correct place
to do that is context_switch(). But we already collect generic scheduler
stats and generate trace entries. What else might be of interest?

 -- Keir
> The advantage of the blocking hypercall is that it safely checks
> local_events_need_delivery() before actually blocking. This is
> necessary to avoid wakeup-waiting races. Now, this doesn't actually
> work for HVM guests (yet) as it checks the wrong condition
> (event-channel upcall flag rather than state of the PIC/LAPIC) *but*
> it will mean that when the local_events_need_delivery() macro is
> fixed, the blocking hypercall will automatically do the right thing
> for us. The blocking code in your patch will not.

New patch attached.

> If you want to collect stats when you're scheduled out, the correct
> place to do that is context_switch(). But we already collect generic
> scheduler stats and generate trace entries. What else might be of
> interest?

Some vmexit info there; actually the current way should work too, but I
don't want to insert code in multiple places.
-Xin
On 10/11/06 08:59, "Li, Xin B" <xin.b.li@intel.com> wrote:

> New patch attached.

Why did you remove the test of VCPU_running in pt_timer_fn()? There's no
need to continue running the timer when a vcpu is descheduled, as long
as it fires the first time (which it will).

Apart from that it looks good so I'll check in the rest.

 -- Keir
>> New patch attached.
>
> Why did you remove the test of VCPU_running in pt_timer_fn()? There's
> no need to continue running the timer when a vcpu is descheduled, as
> long as it fires the first time (which it will).

At the time of the test it should always be true, because the callback
can only be executed when current is the target vcpu, and when the hvm
vcpu is scheduled out the timer is stopped and so cannot run.
-Xin

> Apart from that it looks good so I'll check in the rest.
>
>  -- Keir