NISHIGUCHI Naoki
2008-Dec-18 02:57 UTC
[Xen-devel] [RFC][PATCH 0/4] Modification of credit scheduler rev2
Hi all,

This patchset is a revised version of the patches I posted 10 days ago. It consists of the following 4 patches:

1. Subtract consumed credit accurately and shorten the CPU time per credit
2. Change the handling of credits over the upper bound
3. Balance credits across each vcpu of a domain
4. Introduce a boost credit for latency-sensitive domains

It was not possible to separate these cleanly, so please apply the patches in numerical order.

Please review these patches. Any comments are appreciated.

Best regards,
Naoki Nishiguchi
NISHIGUCHI Naoki
2008-Dec-18 03:00 UTC
[Xen-devel] [RFC][PATCH 1/4] sched: more accurate credit scheduling
By applying this patch, the credit scheduler subtracts the consumed credit accurately and sets the priority correctly. CSCHED_CREDITS_PER_TICK is changed from 100 to 10000, because a vcpu's credit is now subtracted in csched_schedule(). The difference from the previous version of this patch is that the start_time variable was moved from the csched_vcpu structure to the csched_pcpu structure.

Best regards,
Naoki Nishiguchi
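As an illustration only (not the actual patch), the accounting change described above amounts to charging the outgoing vcpu for the time it actually ran; the helper name and the credit/time conversion below are assumptions:

    /*
     * Minimal sketch: charge the outgoing vcpu for exactly the time it ran
     * since the pcpu's start_time, rather than a whole tick's worth of
     * credit at each tick.
     */
    static void burn_credits(struct csched_pcpu *spc, struct csched_vcpu *svc,
                             s_time_t now)
    {
        s_time_t ran = now - spc->start_time;      /* time actually consumed */
        /* one 10ms tick is worth CSCHED_CREDITS_PER_TICK credits */
        int consumed = ran * CSCHED_CREDITS_PER_TICK / MILLISECS(10);

        atomic_sub(consumed, &svc->credit);        /* accurate subtraction */
        spc->start_time = now;                     /* restart the measurement */
    }

With CSCHED_CREDITS_PER_TICK raised to 10000, one credit corresponds to roughly 1us of CPU time, which is presumably why the constant was changed along with moving the subtraction into csched_schedule().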
NISHIGUCHI Naoki
2008-Dec-18 03:02 UTC
[Xen-devel] [RFC][PATCH 2/4] sched: change the handling of credits over upper bound
By applying this patch, the credit scheduler no longer resets a vcpu's credit to 0 when the credit would exceed the upper bound. It also prevents a vcpu from missing its chance to become active. The difference from the previous version of this patch is when the vcpu is put back on the active list: this patch puts a vcpu back on the active list only in csched_acct().

Best regards,
Naoki Nishiguchi
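For illustration only (not the actual patch), the handling described above amounts to clamping rather than resetting; the helper name is an assumption:

    /*
     * Sketch: when a vcpu has earned more credit than the cap, clamp it to
     * the cap instead of resetting it to 0, so the vcpu keeps its standing
     * and can be put back on the active list from csched_acct().
     */
    static void cap_credit(struct csched_vcpu *svc)
    {
        if ( atomic_read(&svc->credit) > CSCHED_CREDITS_PER_TSLICE )
            atomic_set(&svc->credit, CSCHED_CREDITS_PER_TSLICE);
    }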
NISHIGUCHI Naoki
2008-Dec-18 03:04 UTC
[Xen-devel] [RFC][PATCH 3/4] sched: balance credits of each vcpu of a domain
By applying this patch, the credit scheduler balances the credits of each active vcpu of a domain. There is no change from the previous version of this patch.

Best regards,
Naoki Nishiguchi
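A minimal sketch of what such balancing could look like (illustrative only; the helper is assumed, while the list and field names follow the existing csched_dom/csched_vcpu structures):

    /*
     * Sketch: after an accounting pass, spread the domain's total credit
     * evenly over its active vcpus, so one vcpu does not drop to OVER
     * while a sibling vcpu still holds a large surplus.
     */
    static void balance_domain_credits(struct csched_dom *sdom)
    {
        struct csched_vcpu *svc;
        int total = 0, nvcpu = 0;

        list_for_each_entry( svc, &sdom->active_vcpu, active_vcpu_elem )
        {
            total += atomic_read(&svc->credit);
            nvcpu++;
        }

        if ( nvcpu == 0 )
            return;

        list_for_each_entry( svc, &sdom->active_vcpu, active_vcpu_elem )
            atomic_set(&svc->credit, total / nvcpu);
    }

Averaging keeps the domain's total credit (and therefore its weight share) unchanged while removing the per-vcpu imbalance.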
NISHIGUCHI Naoki
2008-Dec-18 03:06 UTC
[Xen-devel] [RFC][PATCH 4/4] sched: introduce boost credit for latency-sensitive domain
I have attached the following two patches:

  credit_rev2_4_boost_xen.patch  : modification to the Xen hypervisor
  credit_rev2_4_boost_tools.patch: modification to the tools

By applying these two patches, a boost credit is introduced into the credit scheduler, which allows the scheduler to give priority to latency-sensitive domains. The differences from the previous version of these patches are as follows (the first one is sketched below):

- When a vcpu is woken up and set to the BOOST state, add CSCHED_CREDITS_PER_TICK to boost_credit and subtract CSCHED_CREDITS_PER_TICK from credit. This prevents the vcpu from returning to the UNDER state immediately. dom0 in particular is affected by this.
- Even if the vcpu has boost credit, don't send a scheduler interrupt if the current time slice is 2ms.
- If more than CSCHED_CREDITS_PER_TSLICE has been subtracted from a vcpu's credit, adjust the credit.

Best regards,
Naoki Nishiguchi
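A rough sketch of the first change listed above (illustrative only; boost_credit is a field added by the patch and the helper name is an assumption):

    /*
     * Sketch: when a woken vcpu is promoted to BOOST, move one tick's
     * worth of credit into its boost credit so that it does not fall back
     * to UNDER immediately.
     */
    static void csched_vcpu_set_boost(struct csched_vcpu *svc)
    {
        svc->pri = CSCHED_PRI_TS_BOOST;
        atomic_add(CSCHED_CREDITS_PER_TICK, &svc->boost_credit);
        atomic_sub(CSCHED_CREDITS_PER_TICK, &svc->credit);
    }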
Su, Disheng
2009-Jan-13 08:10 UTC
[Xen-devel] RE: [RFC][PATCH 0/4] Modification of credit scheduler rev2
Hi Naoki,

Thanks for your excellent work. These days I have been testing audio/video playback with your patches. With the default credit scheduler the audio quality is really bad (a lot of audio glitches), but I got a much better result with your patches. I list my findings here, FYI.

1. What is the latency requirement for audio? I am not an expert on this one :) I found some links about it (http://www.soundonsound.com/sos/jan05/articles/pcmusician.htm and http://www.podcomplex.com/blog/setting-buffers-and-latency-for-your-audio-interface/). In a native environment, setting the audio hardware buffer size to produce a latency of 23ms is acceptable even for many musicians. It is safe to say that, in a virtualized environment, the VM has to be scheduled in at least every 23ms in such a case when playing audio in the VM. It is even worse for Vista, which has a 10ms requirement (http://blogs.technet.com/markrussinovich/archive/2007/08/27/1833290.aspx). Apparently the default credit scheduler can't handle this case well.

2. Test environment:
   hardware:
     CPU: Intel Core 2 Duo E6850
     Chipset: 82G33
     Memory: 2G
   software:
     Xen upstream (cs: 18881)
   domain configuration:
     guest A: primary HVM guest (integrated graphics card, sound, and USB controller directly assigned), playing an mp3 with WMP in the foreground + copying large files (e.g. 2G) in the background. 2 vcpus, 1G memory. Guest OS is Windows XP or Vista.
     guest B: secondary HVM guest (also copying large files in the guest, no devices assigned). 2 vcpus, 128M memory. Guest OS is Windows XP.

3. Configuring the scheduler and Xen:
   a. The weight of guest B must be as low as possible (e.g. 10 for it, but 256 for guest A and dom0). Guest B is competing with guest A for dom0. The lower the weight, the less chance it has to be scheduled in.
   b. The boost credit needs to be as large as possible (e.g. 1000 for both the primary guest and dom0), to make sure guest A stays at the boost priority longer when doing heavy I/O.
   c. The vcpus of guest A need to be pinned to physical cpus. Without pinning, on an SMP guest, the scheduler dynamically migrates vcpus between physical cpus and the audio glitches are obvious. One possible reason is the high frequency of migration combined with the small runtime each time the vcpu is scheduled in. The migration rate is about 60~110 per second, and each migration has a cost (cache and TLB misses, etc.), yet the runtime is small: 90% of runtimes are less than 30us. It does not seem reasonable to migrate a vcpu that only runs for tens of microseconds.

   With this configuration, both the XP and Vista guests work well, usually with no glitches.

4. Issues left:
   a. Abrupt glitches are still generated when the QEMU-emulated mouse is used and the mouse is moved quickly in guest A. Passing through a USB mouse/keyboard to guest A eliminates the glitches.
   b. vcpu migration. As said before, without vcpu pinning the glitches are obvious.
   c. The limitation on the weight of guest B. I have to set the weight of guest B to 10, which may not be reasonable in a real usage case.

Do you have experience with audio? I don't know whether I have configured your scheduler properly or not. I hope your scheduler can solve the audio issues as well.

Best Regards,
Disheng, Su
NISHIGUCHI Naoki
2009-Jan-15 02:04 UTC
Re: [Xen-devel] RE: [RFC][PATCH 0/4] Modification of credit scheduler rev2
Hi Disheng,

Thank you for evaluating the patches and reporting your results.

Su, Disheng wrote:
> 1. What is the latency requirement for audio? [...] Apparently the default credit scheduler can't handle this case well.

Thanks for the information. I'll look at these links.

> 4. Issues left:
> a. Abrupt glitches are still generated when the QEMU-emulated mouse is used and the mouse is moved quickly in guest A. Passing through a USB mouse/keyboard to guest A eliminates the glitches.

I also noticed that. Though I don't know the precise cause, I found that dom0 and guest A consume a large amount of CPU time (hundreds of milliseconds) in that situation. In this case, the priority of dom0 and guest A falls rapidly, and then guest B runs until the priority of dom0 and guest A becomes BOOST again. In the worst case, this takes about 120ms.

I tried to solve this issue as follows, but then the scheduler no longer scheduled correctly according to the domain weights:

- In csched_schedule(), if a vcpu runs over the current time slice, then the time slice is subtracted from the vcpu's credit.

I think this needs to be investigated more deeply.

> b. vcpu migration. As said before, without vcpu pinning the glitches are obvious.

I think this issue could be solved by adding a condition for migrating the vcpu, e.g. if the vcpu has boost credit, don't migrate it. I'll try to test this.

> c. The limitation on the weight of guest B. I have to set the weight of guest B to 10, which may not be reasonable in a real usage case.

Is copying large files in the background on guest A indispensable? In my tests, guest A only ran video playback. I don't think my approach can solve this issue.

> Do you have experience with audio? I don't know whether I have configured your scheduler properly or not. I hope your scheduler can solve the audio issues as well.

Sorry, I don't have experience with audio, but I'll try to reproduce your configuration and investigate.

Regards,
Naoki Nishiguchi
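One reading of the attempted change mentioned above is to cap the charge at a single time slice; a minimal sketch under that assumption (not the actual patch, and the helper name is invented for illustration):

    /*
     * Sketch (assumption): never charge more than one full time slice per
     * pass through csched_schedule(), even if the vcpu ran longer.
     */
    static void charge_at_most_one_tslice(struct csched_vcpu *svc, int consumed)
    {
        if ( consumed > CSCHED_CREDITS_PER_TSLICE )
            consumed = CSCHED_CREDITS_PER_TSLICE;
        atomic_sub(consumed, &svc->credit);
    }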
Tian, Kevin
2009-Jan-15 02:56 UTC
RE: [Xen-devel] RE: [RFC][PATCH 0/4] Modification of credit scheduler rev2
> From: NISHIGUCHI Naoki
> Sent: Thursday, January 15, 2009 10:05 AM
>
>> 4. Issues left:
>> a. Abrupt glitches are still generated when the QEMU-emulated mouse is used and the mouse is moved quickly in guest A. Passing through a USB mouse/keyboard to guest A eliminates the glitches.
>
> I also noticed that. Though I don't know the precise cause, I found that dom0 and guest A consume a large amount of CPU time (hundreds of milliseconds) in that situation. In this case, the priority of dom0 and guest A falls rapidly, and then guest B runs until the priority of dom0 and guest A becomes BOOST again. In the worst case, this takes about 120ms.

I remember that Disheng once told me that BOOST only happens when a vcpu is woken up and its current priority is UNDER. In your case guest A should be in OVER after running for hundreds of ms, and it then has to wait long enough to become UNDER and only then BOOST. If this is the case, your enhancement of the BOOST level seems to solve only part of the latency issue. Here either assigning a static priority, or adding more BOOST sources (events, interrupts, etc.) seems like a more complete solution.

>> b. vcpu migration. As said before, without vcpu pinning the glitches are obvious.
>
> I think this issue could be solved by adding a condition for migrating the vcpu, e.g. if the vcpu has boost credit, don't migrate it.

Isn't that overkill? What if you already have 3 BOOST vcpus in the runqueue of the current cpu while the other cpus are all running OVER vcpus? Boost itself is not the only determinative factor for migration; what you really care about is the relative priority system-wide.

Thanks,
Kevin
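For reference, the behaviour Kevin describes corresponds roughly to the wakeup path of the stock credit scheduler (simplified sketch, wrapped in an illustrative helper):

    /*
     * Simplified: only a vcpu currently at UNDER is promoted to BOOST when
     * it wakes up; a vcpu sitting at OVER stays where it is.
     */
    static void boost_on_wake(struct csched_vcpu *svc)
    {
        if ( svc->pri == CSCHED_PRI_TS_UNDER )
            svc->pri = CSCHED_PRI_TS_BOOST;
    }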
NISHIGUCHI Naoki
2009-Jan-15 04:42 UTC
Re: [Xen-devel] RE: [RFC][PATCH 0/4] Modification of credit scheduler rev2
Hi Kevin,

Tian, Kevin wrote:
> I remember that Disheng once told me that BOOST only happens when a vcpu is woken up and its current priority is UNDER. In your case guest A should be in OVER after running for hundreds of ms, and it then has to wait long enough to become UNDER and only then BOOST. If this is the case, your enhancement of the BOOST level seems to solve only part of the latency issue. Here either assigning a static priority, or adding more BOOST sources (events, interrupts, etc.) seems like a more complete solution.

In my case, though the vcpu should have been switched to another vcpu within the time slice, the cpu running that vcpu didn't schedule for hundreds of ms. I don't know why this happens.

In the credit scheduler, the credit consumed by a vcpu must be subtracted. Therefore I think it is correct that dom0 and guest A go to OVER, because my approach is to boost a vcpu within the range of its weight.

I think assigning a static priority is one solution. However, I think it affects credit accounting, because we don't know how long a domain with a static priority (probably the highest priority) will run.

About adding more BOOST sources, could you explain that to me in more detail?

> Isn't that overkill? What if you already have 3 BOOST vcpus in the runqueue of the current cpu while the other cpus are all running OVER vcpus? Boost itself is not the only determinative factor for migration; what you really care about is the relative priority system-wide.

Yes, you are right. I'll think about the runqueue of each cpu and so on. Thanks for your advice.

Regards,
Naoki
Su, Disheng
2009-Jan-15 04:55 UTC
RE: [Xen-devel] RE: [RFC][PATCH 0/4] Modification of credit scheduler rev2
NISHIGUCHI Naoki wrote:
>> c. The limitation on the weight of guest B. I have to set the weight of guest B to 10, which may not be reasonable in a real usage case.
>
> Is copying large files in the background on guest A indispensable? In my tests, guest A only ran video playback. I don't think my approach can solve this issue.

You know, guest A is the primary guest for the end user, so we can't make any assumptions about the user's activity in guest A, which is the big challenge for client virtualization IMO. Weight, cap, and boost credit can all be used together, or we can add a new mechanism, such as the static priority Kevin mentioned, to solve the problem.

>> Do you have experience with audio? I don't know whether I have configured your scheduler properly or not. I hope your scheduler can solve the audio issues as well.
>
> Sorry, I don't have experience with audio, but I'll try to reproduce your configuration and investigate.

Glad to see you are also interested in audio. If you run into any problem while reproducing the audio issues, please let me know.

Best Regards,
Disheng, Su
Tian, Kevin
2009-Jan-15 05:04 UTC
RE: [Xen-devel] RE: [RFC][PATCH 0/4] Modification of credit scheduler rev2
> From: NISHIGUCHI Naoki [mailto:nisiguti@jp.fujitsu.com]
> Sent: Thursday, January 15, 2009 12:43 PM
>
> In my case, though the vcpu should have been switched to another vcpu within the time slice, the cpu running that vcpu didn't schedule for hundreds of ms. I don't know why this happens.

What is running inside your guest B? Unless a fully cpu-intensive workload is running in guest B, there is a chance for guest B to issue a block hypercall once it enters its idle loop, and once it is blocked, the Xen credit scheduler can pick dom0 or guest A anyway. So the first thing you could do is figure out the activity inside guest B.

If guest B really is always busy, then you may need to check the 30ms credit allocation algorithm in the credit scheduler. It looks like there is a sequence in which guest A is always left at OVER priority due to its earlier overrun, until guest B has also overrun by a similar amount. During this punishment period guest A has no chance to be boosted, with all cycles granted to guest B instead. Even if that is intended from a fairness point of view, it may not suit real-time usage.

> In the credit scheduler, the credit consumed by a vcpu must be subtracted. Therefore I think it is correct that dom0 and guest A go to OVER, because my approach is to boost a vcpu within the range of its weight.
>
> I think assigning a static priority is one solution. However, I think it affects credit accounting, because we don't know how long a domain with a static priority (probably the highest priority) will run.

It could be a configurable option for some client usages, where a coarse-level static priority better ensures the determinism needed to satisfy a specific real-time requirement.

> About adding more BOOST sources, could you explain that to me in more detail?

Currently the only source of boost is a wakeup event on a vcpu with UNDER priority, letting it catch up, which is purely from the fairness point of view. But for a vcpu with real-time requirements, more boost sources can be added. For example, on an audio interrupt (either emulated or passed through), boost the target vcpu and trigger a reschedule softirq immediately to reduce the uncertainty of the scheduling latency. We need such a manual boost interface, which can then be inserted into critical event paths where we believe an immediate reschedule is necessary. Disheng is working on this area now, I think. :-)

Thanks,
Kevin
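A minimal sketch of the kind of manual boost interface described above (entirely hypothetical; no such call exists in the tree and the function name is an assumption):

    /*
     * Hypothetical: a critical event path (e.g. delivery of an audio
     * interrupt to a guest) boosts the target vcpu and requests an
     * immediate reschedule on the cpu it lives on.
     */
    static void csched_vcpu_manual_boost(struct vcpu *vc)
    {
        struct csched_vcpu * const svc = CSCHED_VCPU(vc);

        if ( svc->pri < CSCHED_PRI_TS_BOOST )
            svc->pri = CSCHED_PRI_TS_BOOST;

        cpu_raise_softirq(vc->processor, SCHEDULE_SOFTIRQ);
    }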
NISHIGUCHI Naoki
2009-Jan-15 05:19 UTC
Re: [Xen-devel] RE: [RFC][PATCH 0/4] Modification of credit scheduler rev2
Su, Disheng wrote:
> You know, guest A is the primary guest for the end user, so we can't make any assumptions about the user's activity in guest A, which is the big challenge for client virtualization IMO. Weight, cap, and boost credit can all be used together, or we can add a new mechanism, such as the static priority Kevin mentioned, to solve the problem.

I see. That really is the big challenge. I think we should experiment with various configurations and clarify the problems.

> Glad to see you are also interested in audio. If you run into any problem while reproducing the audio issues, please let me know.

Thanks.

Regards,
Naoki
NISHIGUCHI Naoki
2009-Jan-15 06:05 UTC
Re: [Xen-devel] RE: [RFC][PATCH 0/4] Modification of credit scheduler rev2
Tian, Kevin wrote:
> What is running inside your guest B? Unless a fully cpu-intensive workload is running in guest B, there is a chance for guest B to issue a block hypercall once it enters its idle loop, and once it is blocked, the Xen credit scheduler can pick dom0 or guest A anyway. So the first thing you could do is figure out the activity inside guest B.
>
> If guest B really is always busy, then you may need to check the 30ms credit allocation algorithm in the credit scheduler. It looks like there is a sequence in which guest A is always left at OVER priority due to its earlier overrun, until guest B has also overrun by a similar amount. During this punishment period guest A has no chance to be boosted, with all cycles granted to guest B instead. Even if that is intended from a fairness point of view, it may not suit real-time usage.

Sorry, I didn't explain that well. I mean that the softirq for scheduling (SCHEDULE_SOFTIRQ) might not occur for hundreds of ms. I found a similar issue when connecting vncviewer to guest B, with guest B running nothing, though I was not using Disheng's configuration. I assumed that the issue Disheng reported is the same as mine.

> It could be a configurable option for some client usages, where a coarse-level static priority better ensures the determinism needed to satisfy a specific real-time requirement.

I see.

> Currently the only source of boost is a wakeup event on a vcpu with UNDER priority, letting it catch up, which is purely from the fairness point of view. But for a vcpu with real-time requirements, more boost sources can be added. For example, on an audio interrupt (either emulated or passed through), boost the target vcpu and trigger a reschedule softirq immediately to reduce the uncertainty of the scheduling latency. We need such a manual boost interface, which can then be inserted into critical event paths where we believe an immediate reschedule is necessary. Disheng is working on this area now, I think. :-)

Thanks.

Regards,
Naoki
Tian, Kevin
2009-Jan-15 06:41 UTC
RE: [Xen-devel] RE: [RFC][PATCH 0/4] Modification of credit scheduler rev2
> From: NISHIGUCHI Naoki [mailto:nisiguti@jp.fujitsu.com]
> Sent: Thursday, January 15, 2009 2:06 PM
>
> Sorry, I didn't explain that well. I mean that the softirq for scheduling (SCHEDULE_SOFTIRQ) might not occur for hundreds of ms. I found a similar issue when connecting vncviewer to guest B, with guest B running nothing, though I was not using Disheng's configuration. I assumed that the issue Disheng reported is the same as mine.

Could you double-check your statistics? Every schedule sets a 30ms timer, regardless of whether the current vcpu is picked again or a new vcpu is chosen; s_timer_fn then raises SCHEDULE_SOFTIRQ at 30ms intervals.

My point above was more that a boost designed for time-sharing purposes is not enough for real-time purposes.

Thanks,
Kevin
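For reference, the mechanism Kevin refers to looks roughly like this in the scheduler (simplified sketch):

    /*
     * Each pass through the generic scheduler re-arms the per-cpu timer
     * for one time slice (30ms for the credit scheduler); when the timer
     * fires, it simply asks for another scheduling pass.
     */
    static void s_timer_fn(void *unused)
    {
        raise_softirq(SCHEDULE_SOFTIRQ);
    }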
NISHIGUCHI Naoki
2009-Jan-15 07:01 UTC
Re: [Xen-devel] RE: [RFC][PATCH 0/4] Modification of credit scheduler rev2
Tian, Kevin wrote:
> Could you double-check your statistics? Every schedule sets a 30ms timer, regardless of whether the current vcpu is picked again or a new vcpu is chosen; s_timer_fn then raises SCHEDULE_SOFTIRQ at 30ms intervals.

When connecting vncviewer to guest B, s_timer_fn was not called at 30ms intervals.

> My point above was more that a boost designed for time-sharing purposes is not enough for real-time purposes.

I agree that my approach is not enough for real-time usage.

Regards,
Naoki
Tian, Kevin
2009-Jan-15 07:04 UTC
RE: [Xen-devel] RE: [RFC][PATCH 0/4] Modification of credit scheduler rev2
> From: NISHIGUCHI Naoki [mailto:nisiguti@jp.fujitsu.com]
> Sent: Thursday, January 15, 2009 3:02 PM
>
> When connecting vncviewer to guest B, s_timer_fn was not called at 30ms intervals.

Then I would consider it a bug, possibly caused by skewed system time? :-)

Thanks,
Kevin