Atsushi SAKAI
2006-Oct-18 11:49 UTC
[Xen-devel] [Q] about Credit Scheduler Dom0 Scheduling policy.
Hi Emmanuel,

Thank you for your previous comment. I am still waiting for your comments on the second item.

Anyway, I would like to ask your opinion about the credit scheduler's VCPU scheduling policy, compared with the "sedf" scheduler. I am asking because this seems to be related to I/O performance when multiple VMs are running.

My test uses the following three DomUs:

DomU1, DomU2: running CPU-intensive jobs
DomU3: running I/O-intensive jobs

Looking at xentrace, the scheduling behaviour differs between the schedulers, especially in how the Dom0 (driver domain) VCPU is scheduled. (I measured the time between vcpu_wake and dispatch.)

With the SEDF scheduler, when the Dom0 VCPU wakes it gets the PCPU immediately (in the worst case the delay is within 1 msec). But with the credit scheduler, the Dom0 VCPU sometimes sees large latency (sometimes 60 msec).

I think the reason for this delay is the credit scheduler's behaviour: when the Dom0 VCPU wakes, it is simply added at the tail of the runqueue, and this causes the problem. I think a waking Dom0 VCPU should be put at the head of the UNDER part of the runqueue, since this directly affects response time, especially for I/O.

Of course, if there is only one CPU-intensive domain (called "VCPU SPIN" in the slides below), there is no problem, because the runqueue holds no other VCPU:

http://www.xensource.com/files/summit_3/sched.pdf#page=4

In that case, when a context switch is requested (SCHEDULE_SOFTIRQ), the scheduler switches to the I/O VCPU successfully. But in my case with two spinning VCPUs, one spinning VCPU is always on the runqueue (since two spinning domains exist), so SCHEDULE_SOFTIRQ just picks the other spinning VCPU and the I/O VCPU does not get to run.

Waiting for your opinion.

Thanks,
Atsushi SAKAI

============================================================
My test configuration:

#pcpu = 1
#vcpu = 1 each (Dom0, DomU1, DomU2, DomU3)

CPU-intensive domains: DomU1 and DomU2
I/O-intensive domain:  DomU3
Privileged domain:     Dom0

The CPU-intensive domains run a CPU loop like this:
int main(void) { int a = 0; while (1) { a++; } }

The I/O-intensive domain runs fsdisk (included in unixbench 4.1.0).

The privileged domain runs xenmon.py or xentrace in the standard configuration (using tools/xentrace/formats). The xentrace output is included in this mail (I picked the problem out of the xentrace output).
============================================================
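For readers following along in the source, a small standalone sketch of the behaviour being described may help: in the unpatched credit scheduler, __runq_insert places a waking VCPU behind every runnable VCPU of equal priority, so a Dom0 VCPU waking at UNDER priority queues behind a spinning UNDER VCPU that is already runnable. The priority values match the constants in sched_credit.c (they also appear in the patch later in this thread); the data structures and helper below are simplified stand-ins, not the actual Xen code.

/*
 * Standalone illustration (not Xen source) of the runqueue behaviour
 * described above: a waking VCPU is inserted behind every runnable
 * VCPU of equal priority, so a waking Dom0/I/O VCPU at UNDER priority
 * lands behind a spinning UNDER VCPU and waits up to a time slice.
 */
#include <stdio.h>

#define PRI_UNDER  -1   /* time-share with credits left  */
#define PRI_OVER   -2   /* time-share with credits spent */

struct vcpu { const char *name; int pri; };

/* Insert 'v' before the first entry with strictly lower priority,
 * i.e. at the tail of its own priority class (what __runq_insert does). */
static int runq_insert(struct vcpu *runq[], int len, struct vcpu *v)
{
    int pos = 0, i;
    while (pos < len && runq[pos]->pri >= v->pri)
        pos++;
    for (i = len; i > pos; i--)
        runq[i] = runq[i - 1];
    runq[pos] = v;
    return len + 1;
}

int main(void)
{
    struct vcpu spinner = { "DomU2 (spinner)", PRI_UNDER };
    struct vcpu dom0    = { "Dom0 (just woke)", PRI_UNDER };
    struct vcpu *runq[4];
    int len = 0, i;

    len = runq_insert(runq, len, &spinner);  /* a spinner is already runnable */
    len = runq_insert(runq, len, &dom0);     /* Dom0 wakes for I/O work...    */

    /* ...and ends up behind the spinner on the runqueue. */
    for (i = 0; i < len; i++)
        printf("runq[%d] = %s\n", i, runq[i]->name);
    return 0;
}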
Emmanuel Ackaouy
2006-Oct-18 13:11 UTC
Re: [Xen-devel] [Q] about Credit Scheduler Dom0 Scheduling policy.
On Wed, Oct 18, 2006 at 08:49:51PM +0900, Atsushi SAKAI wrote:
> My test configuration:
>
> #pcpu = 1
> #vcpu = 1 each (Dom0, DomU1, DomU2, DomU3)
>
> CPU-intensive domains: DomU1 and DomU2
> I/O-intensive domain:  DomU3
> Privileged domain:     Dom0
>
> The CPU-intensive domains run a CPU loop like this:
> int main(void) { int a = 0; while (1) { a++; } }
>
> The I/O-intensive domain runs fsdisk (included in unixbench 4.1.0).
>
> The privileged domain runs xenmon.py or xentrace in the standard
> configuration (using tools/xentrace/formats). The xentrace output is
> included in this mail.
> ============================================================

How much CPU resource is being consumed by each of your domains, as reported by xentop?

If DomU1 and DomU2 are CPU bound, and DomU3 and Dom0 are cooperatively doing I/O work with minimal CPU usage, one would expect DomU3 and Dom0 to preempt both U1 and U2 when they become runnable.

When you say dom0 sometimes takes 60ms to run, how often does that happen compared with the number of times it preempts and immediately runs on the CPU?

If dom0 isn't preempting, in theory that is for one of two reasons:
1- It's consumed its fair share of CPU already.
2- It's been inactive for a while.

Is this having a noticeable effect with credit vs sedf on fdisk performance when competing against your spinner domains?
Emmanuel Ackaouy
2006-Oct-18 13:24 UTC
Re: [Xen-devel] [Q] about Credit Scheduler Dom0 Scheduling policy.
On Wed, Oct 18, 2006 at 02:11:15PM +0100, Emmanuel Ackaouy wrote:
> If dom0 isn't preempting, in theory that is for one of
> two reasons:
> 1- It's consumed its fair share of CPU already.
> 2- It's been inactive for a while.

I should add that if (1), then increasing the weight of dom0 should solve the problem. Try doubling it.

If (2), then I'd like to see the xentrace output for a few seconds' worth of running your workload.
Atsushi SAKAI
2006-Oct-23 04:13 UTC
Re: [Xen-devel] [Q] about Credit Scheduler Dom0 Scheduling policy.
Hi Emmanuel,

Thank you for your comments.

To clarify: I am running the unixbench 4.1.0 write tests (w_test for fsdisk/fsbuffer/fstime) with fsync() added.

(N.B.) I changed the measurement configuration from a file to a physical disk, since previously I only wanted to show the blocking effect with xentrace. The other configuration is the same.

The results are as follows (unit: KB/s; the number in parentheses is the block size):

                credit   credit (dom0 weight x3)   sedf
fsbuffer(256)     16             16                 33
fstime(1024)      66             66                133
fsdisk(4096)     266            266                266

N.B. Dom0's weight was tripled so that it matches the other domains' total weight. The CPU usage split (Dom0:U1:U2:U3) was:

0.1 : 49.9 : 49.9 : 0.1  for credit, Dom0 weight x3 (768)
0.1 : 49.9 : 49.9 : 0.1  for credit, Dom0 weight = 256
0.9 : 48.7 : 48.7 : 0.8  for SEDF

I think the blocking effect caused by the CPU-spinning domains should be considered, and I think the reason is (2) of your suggestion.

My guess: the credit scheduler runqueue has two priorities. If DomU1/2 are in OVER (out of credits), there is no problem at all. But while DomU1/2 are in UNDER, their priority is the same as Dom0/U3, so Dom0/U3 are blocked behind DomU1/2, and the weight seems to have no effect under this condition. I guess the credit_xtra handling effectively disables the Dom0/U1 weight.

About attached files: the xentrace log is huge (about 10 lines/KB of output), so I included only 100 lines.

Thanks,
Atsushi SAKAI

>On Wed, Oct 18, 2006 at 02:11:15PM +0100, Emmanuel Ackaouy wrote:
>> If dom0 isn't preempting, in theory that is for one of
>> two reasons:
>> 1- It's consumed its fair share of CPU already.
>> 2- It's been inactive for a while.
>
>I should add that if (1), then increasing the weight of dom0
>should solve the problem. Try doubling it.
>
>If (2), then I'd like to see the xentrace output for a few
>seconds' worth of running your workload.
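To make the workload concrete, here is a minimal sketch of what an fstime/fsdisk-style write loop with fsync() added after each write might look like. This is not the unixbench 4.1.0 source; the file name, block size and iteration count are illustrative (in the test above the writes go to a physical disk rather than a plain file).

/* Illustrative sketch only -- not the unixbench 4.1.0 w_test code.
 * Shows the shape of a write loop with fsync() added after each write,
 * which makes the DomU block and wait on Dom0 once per block. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    char buf[4096];    /* block size: 256 / 1024 / 4096 as in the table above */
    int fd, i;

    fd = open("testfile", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    memset(buf, 'x', sizeof(buf));

    for (i = 0; i < 1000; i++) {
        if (write(fd, buf, sizeof(buf)) != (ssize_t)sizeof(buf)) {
            perror("write");
            break;
        }
        fsync(fd);   /* the added fsync(): each block must reach the backend
                        before the next write, so the DomU sleeps and wakes
                        for every block */
    }
    close(fd);
    return 0;
}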
Emmanuel Ackaouy
2006-Oct-23 14:32 UTC
Re: [Xen-devel] [Q] about Credit Scheduler Dom0 Scheduling policy.
On Mon, Oct 23, 2006 at 01:13:45PM +0900, Atsushi SAKAI wrote:
> About attached files: the xentrace log is huge (about 10 lines/KB
> of output), so I included only 100 lines.

These logs are interesting.

According to the logs, some of our prior assumptions are shown to be incorrect.

For one thing, it looks like SEDF does not do a good job at all at running either the I/O domU or dom0 quickly after they are made runnable. Often, it schedules both spinning domUs for a full time slice each before it gets to the I/O domU or dom0.

The credit scheduler seems to schedule the I/O domU and dom0 much more quickly when they become runnable. Basically it seems to work as advertised and preempt the CPU from the spinners.

There is another weird thing going on: every once in a while, both the I/O domU and dom0 are blocked. The sequence goes like this: the I/O domU blocks; dom0 wakes, runs and blocks; a spinner runs a full time slice; then the I/O domU is woken up and runs. It takes a full time slice for this to happen, though, and the time slice in the credit scheduler appears to be 60x that of SEDF (60x!). The credit scheduler's time slice is 30 milliseconds, while the SEDF scheduler appears to run the spinners for only half a millisecond each, even when it is running nothing but the two spinners. Arguably, this is quite bad if the spinners were actually doing anything useful with their cache.

Because the time slices are that much shorter with SEDF, dom0 actually often yields the CPU to the spinners before it can complete the work necessary to wake up the I/O domU.

So my reading of the logs indicates to me that -- contrary to our initial theories -- the credit scheduler is much better in this workload than SEDF at preempting CPU-bound VCPUs to run I/O-bound ones. The problem seems to be this odd behaviour where an I/O-bound domU isn't woken up by dom0 until after an unrelated VCPU has completed a full time slice. Something seems broken there, either with the tracing or with the I/O sleep/wake code, because, according to the traces, dom0 at times runs and blocks without waking up the I/O domU.

Are the chunks of traces you sent representative of the overall behaviour of the system?

Also, your CPU is approx 1.48GHz, right?
Atsushi SAKAI
2006-Oct-24 06:46 UTC
Re: [Xen-devel] [Q] about Credit Scheduler Dom0 Scheduling policy.
Hi Emmanuel,

Thank you for your comments.

Currently I am testing on IA64:
CPU = 1595MHz
itc = 1595MHz

Thanks,
Atsushi SAKAI

>On Mon, Oct 23, 2006 at 01:13:45PM +0900, Atsushi SAKAI wrote:
>> About attached files: the xentrace log is huge (about 10 lines/KB
>> of output), so I included only 100 lines.
>
>These logs are interesting.
[...]
>Are the chunks of traces you sent representative of the
>overall behaviour of the system?
>
>Also, your CPU is approx 1.48GHz, right?
Emmanuel Ackaouy
2006-Oct-24 09:49 UTC
Re: [Xen-devel] [Q] about Credit Scheduler Dom0 Scheduling policy.
On Tue, Oct 24, 2006 at 03:46:59PM +0900, Atsushi SAKAI wrote:
> Currently I am testing on IA64:
> CPU = 1595MHz
> itc = 1595MHz

In the chunk of the trace file you sent, the scheduler is behaving quite well. The domU being woken up at various times on a clock boundary (10ms) must be due to some timers.

There is one case where dom0 wakes up and doesn't preempt the CPU (the first dom0 wake in the trace file), though. I would like to see a bigger chunk of your trace file and scan it to see if there are other examples of this. If it's too big for the list, can you send it to me privately or put it up on some publicly accessible server somewhere?

Thanks.
Emmanuel Ackaouy
2006-Oct-25 10:29 UTC
Re: [Xen-devel] [Q] about Credit Scheduler Dom0 Scheduling policy.
Thanks for sending me the full logs!

I took a look and I do indeed see some cycles during which dom0 and the I/O-generating domU don't preempt the spinners. I believe this is because those domains don't always consume enough CPU to appear in the accounting paths.

I have coded up a fix which should make things better for I/O-intensive domains that use few CPU resources. I am including the patch here. It applies to the tip of xen-unstable.

Can you try out this patch and let me know how it works?

Thanks,
Emmanuel.
Atsushi SAKAI
2006-Oct-25 13:02 UTC
Re: [Xen-devel] [Q] about Credit Scheduler Dom0 Scheduling policy.
Hi Emmanuel,

Thank you for your patch. I tested it in my environment:

1) Credit w/ Boost
2) Credit (previous)
3) SEDF (previous)

                  1     2     3
fsbuffer(256)    44    16    33
fstime(1024)    133    66   133
fsdisk(4096)    533   266   266
(KB/s)

With this patch, the credit scheduler becomes I/O-aware: at vcpu_wake the priority changes from UNDER to BOOST, and at vcpu_acct the priority changes from BOOST back to UNDER. It seems like a reasonable fix!

But I am a little worried about the case where many I/O-intensive guest OSes are running. (I hope this fear is needless.)

Thanks,
Atsushi SAKAI

>Thanks for sending me the full logs!
>
>I took a look and I do indeed see some cycles during which
>dom0 and the I/O-generating domU don't preempt the spinners.
>I believe this is because those domains don't always consume
>enough CPU to appear in the accounting paths.
>
>I have coded up a fix which should make things better for
>I/O-intensive domains that use few CPU resources. I am
>including the patch here. It applies to the tip of xen-unstable.
>
>Can you try out this patch and let me know how it works?
>
>Thanks,
>Emmanuel.

>diff -r 0c7923eb6b98 xen/common/sched_credit.c
>--- a/xen/common/sched_credit.c Wed Oct 25 10:27:03 2006 +0100
>+++ b/xen/common/sched_credit.c Wed Oct 25 11:11:22 2006 +0100
>@@ -46,6 +46,7 @@
> /*
>  * Priorities
>  */
>+#define CSCHED_PRI_TS_BOOST      0      /* time-share waking up */
> #define CSCHED_PRI_TS_UNDER     -1      /* time-share w/ credits */
> #define CSCHED_PRI_TS_OVER      -2      /* time-share w/o credits */
> #define CSCHED_PRI_IDLE        -64      /* idle */
>@@ -410,6 +411,14 @@ csched_vcpu_acct(struct csched_vcpu *svc
>
>         spin_unlock_irqrestore(&csched_priv.lock, flags);
>     }
>+
>+    /*
>+     * If this VCPU's priority was boosted when it last awoke, reset it.
>+     * If the VCPU is found here, then it's consuming a non-negligeable
>+     * amount of CPU resources and should no longer be boosted.
>+     */
>+    if ( svc->pri == CSCHED_PRI_TS_BOOST )
>+        svc->pri = CSCHED_PRI_TS_UNDER;
> }
>
> static inline void
>@@ -566,6 +575,25 @@ csched_vcpu_wake(struct vcpu *vc)
>     else
>         CSCHED_STAT_CRANK(vcpu_wake_not_runnable);
>
>+    /*
>+     * We temporarly boost the priority of awaking VCPUs!
>+     *
>+     * If this VCPU consumes a non negligeable amount of CPU, it
>+     * will eventually find itself in the credit accounting code
>+     * path where its priority will be reset to normal.
>+     *
>+     * If on the other hand the VCPU consumes little CPU and is
>+     * blocking and awoken a lot (doing I/O for example), its
>+     * priority will remain boosted, optimizing it's wake-to-run
>+     * latencies.
>+     *
>+     * This allows wake-to-run latency sensitive VCPUs to preempt
>+     * more CPU resource intensive VCPUs without impacting overall
>+     * system fairness.
>+     */
>+    if ( svc->pri == CSCHED_PRI_TS_UNDER )
>+        svc->pri = CSCHED_PRI_TS_BOOST;
>+
>     /* Put the VCPU on the runq and tickle CPUs */
>     __runq_insert(cpu, svc);
>     __runq_tickle(cpu, svc);
>@@ -659,7 +687,7 @@ csched_runq_sort(unsigned int cpu)
>             next = elem->next;
>             svc_elem = __runq_elem(elem);
>
>-            if ( svc_elem->pri == CSCHED_PRI_TS_UNDER )
>+            if ( svc_elem->pri >= CSCHED_PRI_TS_UNDER )
>             {
>                 /* does elem need to move up the runq? */
>                 if ( elem->prev != last_under )
Emmanuel Ackaouy
2006-Oct-25 14:03 UTC
Re: [Xen-devel] [Q] about Credit Scheduler Dom0 Scheduling policy.
On Wed, Oct 25, 2006 at 10:02:35PM +0900, Atsushi SAKAI wrote:
> Hi Emmanuel,
>
> Thank you for your patch. I tested it in my environment:
>
> 1) Credit w/ Boost
> 2) Credit (previous)
> 3) SEDF (previous)
>
>                   1     2     3
> fsbuffer(256)    44    16    33
> fstime(1024)    133    66   133
> fsdisk(4096)    533   266   266
> (KB/s)

Wow. This is quite an improvement!

Out of curiosity, what are the numbers like when running this benchmark with no spinning VCPUs competing?

> With this patch, the credit scheduler becomes I/O-aware:
> at vcpu_wake the priority changes from UNDER to BOOST, and
> at vcpu_acct the priority changes from BOOST back to UNDER.
> It seems like a reasonable fix!
>
> But I am a little worried about the case where many I/O-intensive
> guest OSes are running. (I hope this fear is needless.)

I've been careful to prevent BOOSTed VCPUs from taking over the system or otherwise impacting fairness:

- Only VCPUs with positive credits can be boosted.
- While boosted, a VCPU is charged for any substantial CPU resources consumed.
- VCPUs can run uninterrupted with a boosted priority for no more than 10ms (one third of a full time slice).

Only VCPUs which consume a negligible amount of CPU resources should get real benefit from boosting. When multiple VCPUs are boosted, they round-robin and are queued FIFO. The idea is for a boosted VCPU to preempt spinners but not other boosted I/O-intensive guests. A VCPU cannot use the boosting mechanism to consume more CPU than its allocated fair share.
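To connect these fairness points back to the patch, here is a small standalone sketch of the two transitions involved: the wake-up boost, and the demotion that the roughly-10ms accounting tick applies to any boosted VCPU it catches running. The priority constants come from the patch above; the helper functions and the tick bookkeeping are illustrative assumptions, not the actual Xen code. Because the runqueue inserts a VCPU behind entries of equal priority, boosted VCPUs also queue FIFO among themselves, which is what keeps one I/O guest from starving another.

#include <stdio.h>

/* Priority values from the patch above; larger wins on the runqueue. */
#define CSCHED_PRI_TS_BOOST   0   /* time-share, just woken up     */
#define CSCHED_PRI_TS_UNDER  -1   /* time-share, credits remaining */
#define CSCHED_PRI_TS_OVER   -2   /* time-share, credits exhausted */

struct vcpu_state { const char *name; int pri; int credit; };

/* Roughly what the patched wake path does (illustrative, not Xen code):
 * only a VCPU that still has credits (UNDER) gets the temporary boost. */
static void on_wake(struct vcpu_state *v)
{
    if (v->pri == CSCHED_PRI_TS_UNDER)
        v->pri = CSCHED_PRI_TS_BOOST;
}

/* Roughly what the periodic accounting does: a VCPU caught running here
 * is using non-negligible CPU, so the boost is dropped; once its credits
 * go negative it falls further, to OVER. */
static void on_acct_tick(struct vcpu_state *v, int credits_used)
{
    v->credit -= credits_used;
    if (v->pri == CSCHED_PRI_TS_BOOST)
        v->pri = CSCHED_PRI_TS_UNDER;
    if (v->credit < 0)
        v->pri = CSCHED_PRI_TS_OVER;
}

int main(void)
{
    struct vcpu_state dom0 = { "Dom0", CSCHED_PRI_TS_UNDER, 100 };
    struct vcpu_state spin = { "DomU1 (spinner)", CSCHED_PRI_TS_UNDER, 100 };

    on_wake(&dom0);            /* Dom0 wakes for I/O: boosted above the spinners */
    on_acct_tick(&spin, 150);  /* a spinner burns through its credits: demoted   */

    printf("%s pri=%d, %s pri=%d\n", dom0.name, dom0.pri, spin.name, spin.pri);
    return 0;
}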
Atsushi SAKAI
2006-Oct-26 06:15 UTC
Re: [Xen-devel] [Q] about Credit Scheduler Dom0 Scheduling policy.
The benchmark is the same as the previous one.

Only Dom0 and DomU3 are non-spinning VCPUs; the other two, DomU1 and DomU2, are spinning VCPUs.

#pcpu(s) = 1
#vcpu(s) = 2 (non-spinning)
#vcpu(s) = 2 (spinning)
#vcpu(s) = 4 (total, spinning plus non-spinning)

>
>Out of curiosity, what are the numbers like when running this
>benchmark with no spinning VCPUs competing?
>
>> With this patch, the credit scheduler becomes I/O-aware:
>> at vcpu_wake the priority changes from UNDER to BOOST, and
>> at vcpu_acct the priority changes from BOOST back to UNDER.
>> It seems like a reasonable fix!
>>
>> But I am a little worried about the case where many I/O-intensive
>> guest OSes are running. (I hope this fear is needless.)
>
>I've been careful to prevent BOOSTed VCPUs from taking over the
>system or otherwise impacting fairness:
>
>- Only VCPUs with positive credits can be boosted.
>- While boosted, a VCPU is charged for any substantial CPU
>  resources consumed.
>- VCPUs can run uninterrupted with a boosted priority for no
>  more than 10ms (one third of a full time slice).
>
>Only VCPUs which consume a negligible amount of CPU resources
>should get real benefit from boosting. When multiple VCPUs are
>boosted, they round-robin and are queued FIFO. The idea is
>for a boosted VCPU to preempt spinners but not other boosted
>I/O-intensive guests. A VCPU cannot use the boosting mechanism
>to consume more CPU than its allocated fair share.

I agree.

Thanks,
Atsushi SAKAI
Atsushi SAKAI
2006-Oct-26 08:35 UTC
Re: [Xen-devel] [Q] about Credit Scheduler Dom0 Scheduling policy.
Hi Emmanuel,

Sorry for misreading your question.

The values measured without the two spinning DomUs are the same as with the two spinning DomUs:

fsbuffer(256)    44
fstime(1024)    133
fsdisk(4096)    533
(KB/s)

And xentop says the CPU usage of Dom0 and DomU3 is the same as with the two spinning DomUs (both Dom0 and DomU3 are at 1.0 to 1.2%).

Thanks,
Atsushi SAKAI

>The benchmark is the same as the previous one.
>
>Only Dom0 and DomU3 are non-spinning VCPUs; the other two, DomU1 and
>DomU2, are spinning VCPUs.
>
>#pcpu(s) = 1
>#vcpu(s) = 2 (non-spinning)
>#vcpu(s) = 2 (spinning)
>#vcpu(s) = 4 (total, spinning plus non-spinning)
>
[...]
>
>I agree.
>
>Thanks,
>Atsushi SAKAI
Emmanuel Ackaouy
2006-Oct-26 08:42 UTC
Re: [Xen-devel] [Q] about Credit Scheduler Dom0 Scheduling policy.
Thanks.

That indicates the dom0 and the I/O-generating domU are preempting the spinners but not each other, and are therefore running optimally.

That is great news.

On Thu, Oct 26, 2006 at 05:35:20PM +0900, Atsushi SAKAI wrote:
> Hi Emmanuel,
>
> Sorry for misreading your question.
>
> The values measured without the two spinning DomUs are the same as
> with the two spinning DomUs:
>
> fsbuffer(256)    44
> fstime(1024)    133
> fsdisk(4096)    533
> (KB/s)
>
> And xentop says the CPU usage of Dom0 and DomU3 is the same as with
> the two spinning DomUs (both Dom0 and DomU3 are at 1.0 to 1.2%).
>
> Thanks,
> Atsushi SAKAI
[...]
Pasi Kärkkäinen
2006-Oct-26 12:52 UTC
Re: [Xen-devel] [Q] about Credit Scheduler Dom0 Scheduling policy.
On Thu, Oct 26, 2006 at 09:42:11AM +0100, Emmanuel Ackaouy wrote:
> Thanks.
>
> That indicates the dom0 and the I/O-generating domU are preempting
> the spinners but not each other, and are therefore running
> optimally.
>
> That is great news.
>

Is this patch going to be submitted for 3.0.3-1, or only for unstable (3.0.4)?

-- Pasi
Emmanuel Ackaouy
2006-Oct-26 13:04 UTC
Re: [Xen-devel] [Q] about Credit Scheduler Dom0 Scheduling policy.
On Thu, Oct 26, 2006 at 03:52:45PM +0300, Pasi Kärkkäinen wrote:
> Is this patch going to be submitted for 3.0.3-1, or only for unstable (3.0.4)?
>
> -- Pasi

I'm going to push the patch to xen-unstable.

I hadn't planned to ask for it to trickle down to 3.0.3; I'm not sure it's that critical a fix. Why do you ask?
Pasi Kärkkäinen
2006-Oct-26 13:47 UTC
Re: [Xen-devel] [Q] about Credit Scheduler Dom0 Scheduling policy.
On Thu, Oct 26, 2006 at 02:04:13PM +0100, Emmanuel Ackaouy wrote:
> On Thu, Oct 26, 2006 at 03:52:45PM +0300, Pasi Kärkkäinen wrote:
> > Is this patch going to be submitted for 3.0.3-1, or only for unstable (3.0.4)?
> >
> > -- Pasi
>
> I'm going to push the patch to xen-unstable.
>
> I hadn't planned to ask for it to trickle down to 3.0.3;
> I'm not sure it's that critical a fix. Why do you ask?

Thanks for your answer. I'm asking so that I know what I should be running when I test/benchmark workloads affected by this :)

-- Pasi