Atsushi SAKAI
2007-Jun-27 07:58 UTC
[Xen-devel] [PATCH][RFC] consider vcpu-pin weight on Credit Scheduler TAKE2
Hi, Keir

This patch intends to make the credit scheduler consider vcpu-pin weight (TAKE 2).
http://lists.xensource.com/archives/html/xen-devel/2007-06/msg00359.html

The differences from the previous version are:
1) Coding style clean-up.
2) Skip the loop for unused vcpu-pin counts.
3) Remove the pin_count == 1 case from the nested loop; pin_count == 1 is now handled by a separate loop.

Signed-off-by: Atsushi SAKAI <sakaia@jp.fujitsu.com>

And one question: should the following tune-up be applied to reduce the nested loop?

From the following:

-    /* sort weight */
-    for(j=0;j<pin_count;j++)
-    {
-        sortflag = 0;
-        for(k=1;k<pin_count;k++)
-        {
-            if ( pcpu_weight[pcpu_id_list[k-1]] > pcpu_weight[pcpu_id_list[k]] )
-            {
-                sortflag = 1;
-                pcpu_id_handle = pcpu_id_list[k-1];
-                pcpu_id_list[k-1] = pcpu_id_list[k];
-                pcpu_id_list[k] = pcpu_id_handle;
-            }
-        }
-        if( sortflag == 0 )break;
-    }

To the following:

+    /* sort weight */
+    for(k=1;k<pin_count;k++)
+    {
+        if ( pcpu_weight[pcpu_id_list[k-1]] > pcpu_weight[pcpu_id_list[k]] )
+        {
+            pcpu_id_handle = pcpu_id_list[k-1];
+            pcpu_id_list[k-1] = pcpu_id_list[k];
+            pcpu_id_list[k] = pcpu_id_handle;
+            if (k > 1) k -= 2;
+        }
+    }

Thanks
Atsushi SAKAI

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
Emmanuel Ackaouy
2007-Jun-27 11:46 UTC
Re: [Xen-devel] [PATCH][RFC] consider vcpu-pin weight on Credit Scheduler TAKE2
I think this patch is too large and intrusive in the common paths. I understand the problem you are trying to fix, but I don't think it is serious enough to call for such a large change. The accounting code is already tricky enough, don't you think? If you reduce the scope of the problem you're addressing, I think we should be able to get a much smaller, cleaner and more robust change in place.

There are many different scenarios where pinning interferes with the set weights. Have you considered them all? For example:

VCPU0.0:0-1, VCPU0.1:1-2 weight 256
VCPU1.0:0-2, VCPU1.1:0-2 weight 512

Does your patch deal with cases where there are multiple domains with multiple VCPUs each, not all sharing the same cpu affinity mask? I'm not even sure myself what should happen in some of these situations...

I argue that the general problem isn't important to solve. The interesting problem is a small subset: when a set of physical CPUs is set aside for a specific group of domains, setting weights for those domains should behave as expected. For example, on an 8-way host, you could set aside 2 CPUs for development work and assign different weights to the domains running in that dev group. You would expect the weights to work normally.

The best way to do this, though, is not to fiddle with weights and credit when VCPUs are pinned. The cleanest modification is to run distinct credit schedulers: one for dev on 2 CPUs, and one for the rest.

You could probably achieve this in a much smaller patch, which would include administrative interfaces for creating and destroying these dynamic CPU partition groups as well as assigning domains to them.
Atsushi SAKAI
2007-Jun-27 12:07 UTC
Re: [Xen-devel] [PATCH][RFC] consider vcpu-pin weight on Credit Scheduler TAKE2
Hi, Emmanuel

Thank you for commenting on my patch; this is the kind of discussion I have been waiting for.

My patch is a kind of full-set solution to this vcpu-pin weight issue. Of course, it handles the complex configuration you suggested, since it calculates credit for each pcpu individually (which is what makes the code long, about +365 lines).

Your suggestion of segmenting the pcpus is another useful solution, since the typical use is seg-1 for dev and seg-2 for the rest. But it is less flexible than the previous approach: in that case, a vcpu-pin cannot be defined across multiple segments.

Which is the best way to solve this?

Thanks
Atsushi SAKAI
Emmanuel Ackaouy
2007-Jun-27 12:21 UTC
Re: [Xen-devel] [PATCH][RFC] consider vcpu-pin weight on Credit Scheduler TAKE2
On Jun 27, 2007, at 14:07, Atsushi SAKAI wrote:

> Which is the best way to solve this?

If you could solve the generic problem in a simpler way, I would not be opposed to it. But +365 lines in what is already a fairly complex accounting code path doesn't seem right to me.

I can't even understand what weights mean when every CPU has a different pin cpumask. Weights only make sense to me when VMs compete for resources.

In my opinion, adding the concept of dynamic partitioning (or segmentation) of the host system would allow a bunch of people to no longer have to pin their VCPUs. This is desirable.
Keir Fraser
2007-Jun-27 12:31 UTC
Re: [Xen-devel] [PATCH][RFC] consider vcpu-pin weight on Credit Scheduler TAKE2
On 27/6/07 13:21, "Emmanuel Ackaouy" <ackaouy@gmail.com> wrote:

> In my opinion, adding the concept of dynamic partitioning (or
> segmentation) of the host system would allow a bunch of people
> to no longer have to pin their VCPUs. This is desirable.

Partitioning is a very sensible simplification in many (most?) cases, but it would need plumbing all the way up through the xen-api, which is a pain. I still suspect that the patch could be simplified even without interface changes. I don't understand the need to add extra complexity on every accounting period.

 -- Keir
Emmanuel Ackaouy
2007-Jun-27 12:52 UTC
Re: [Xen-devel] [PATCH][RFC] consider vcpu-pin weight on Credit Scheduler TAKE2
On Jun 27, 2007, at 14:31, Keir Fraser wrote:

> Partitioning is a very sensible simplification in many (most?) cases,
> but would need plumbing all the way up through xen-api, which is a pain.

If you don't want to change the API, the partitioning could just automagically happen whenever all the distinct cpu affinity masks set through the existing interface are non-overlapping.
Atsushi SAKAI
2007-Jun-28 09:23 UTC
Re: [Xen-devel] [PATCH][RFC] consider vcpu-pin weight on Credit Scheduler TAKE2
Hi, Keir and Emmanuel

Thank you for your discussion; it is very meaningful for me. I will try again to simplify the code.

The complexity comes from the following five conditions, which must all be considered for this vcpu-pin weight issue:

1) Domains: 1-n.
2) vcpus per domain: 1-n.
3) pcpus in each vcpu's pin mask: 1-n.
4) Each domain has a weight, but it should be treated as a per-vcpu weight, since the pin maps of a domain's vcpus are not all the same.
5) Each vcpu has credit, but it changes with each time slice (sometimes it is accounted, other times not).

Thanks
Atsushi SAKAI