Juergen Gross
2010-Sep-17 05:51 UTC
[Xen-devel] [Patch] update cpumask handling for cpu pools in libxc and python
Hi,

the attached patch updates the cpumask handling for cpu pools in libxc and
python to support arbitrary numbers of cpus.

Juergen

--
Juergen Gross                 Principal Developer Operating Systems
TSP ES&S SWE OS6                       Telephone: +49 (0) 89 3222 2967
Fujitsu Technology Solutions           e-mail: juergen.gross@ts.fujitsu.com
Domagkstr. 28                          Internet: ts.fujitsu.com
D-80807 Muenchen                 Company details: ts.fujitsu.com/imprint.html

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
Ian Campbell
2010-Sep-17 09:00 UTC
[Xen-devel] Re: [Patch] update cpumask handling for cpu pools in libxc and python
On Fri, 2010-09-17 at 06:51 +0100, Juergen Gross wrote:
> Signed-off-by: juergen.gross@ts.fujitsu.com
>
> To be able to support arbitrary numbers of physical cpus it was
> necessary to include the size of cpumaps in the xc-interfaces for
> cpu pools. These were:
>   definition of xc_cpupoolinfo_t
>   xc_cpupool_getinfo()
>   xc_cpupool_freeinfo()
>
> diff -r d978675f3d53 tools/libxc/xc_cpupool.c
> --- a/tools/libxc/xc_cpupool.c  Thu Sep 16 18:29:26 2010 +0100
> +++ b/tools/libxc/xc_cpupool.c  Fri Sep 17 07:42:30 2010 +0200
> @@ -34,6 +34,20 @@ static int do_sysctl_save(xc_interface *
>      return ret;
>  }
>
> +static int get_cpumap_size(xc_interface *xch)
> +{
> +    static int cpumap_size = 0;
> +    xc_physinfo_t physinfo;
> +
> +    if ( cpumap_size )
> +        return cpumap_size;
> +
> +    if ( !xc_physinfo(xch, &physinfo) )
> +        cpumap_size = (physinfo.max_cpu_id + 8) / 8;

The "+ 8 / 8" thing still jumps out at me every time I look at it. Perhaps
the intention would be clearer if written as:

    int nr_cpus = physinfo.max_cpu_id + 1;
    cpumap_size = (nr_cpus + 7) / 8;

?

If xc_physinfo fails (which seems like a reasonably fatal type of error)
then this function returns 0, but the caller does not seem to check for
this case.

> +xc_cpupoolinfo_t *xc_cpupool_getinfo(xc_interface *xch,
> +                                     uint32_t poolid)
> {
> [...]
> +    local_size = get_cpumap_size(xch);
> +    cpumap_size = (local_size + 7) / 8;

local_size has already been rounded up in get_cpumap_size. Do we really
need to do it again?

> +    size = sizeof(xc_cpupoolinfo_t) + cpumap_size * 8 + local_size;

Why do we need both "cpumap_size * 8" and local_size additional bytes here?
Both contain the number of bytes necessary to contain a cpumap bitmask,
and in fact I suspect they are both equal at this point (see point about
rounding above).

Ian.
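[Editor's note: Ian's point about the rounding can be illustrated with a small sketch. The helper names below are hypothetical, not the actual libxc code; both forms compute the same byte count, but deriving nr_cpus first makes the round-up-to-bytes intent explicit.]

```c
#include <stdint.h>

/* Bytes needed for a bitmap covering cpus 0..max_cpu_id, written the
 * way the patch has it: (max_cpu_id + 8) / 8. */
static int cpumap_bytes_terse(uint32_t max_cpu_id)
{
    return (max_cpu_id + 8) / 8;
}

/* The clearer equivalent Ian suggests: derive nr_cpus first, then
 * round up to whole bytes with the usual (n + 7) / 8 idiom. */
static int cpumap_bytes_clear(uint32_t max_cpu_id)
{
    uint32_t nr_cpus = max_cpu_id + 1;
    return (nr_cpus + 7) / 8;
}
```

Since nr_cpus is max_cpu_id + 1, the two expressions are term-for-term identical; the disagreement is purely about readability.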
Juergen Gross
2010-Sep-17 09:20 UTC
Re: [Xen-devel] Re: [Patch] update cpumask handling for cpu pools in libxc and python
On 09/17/10 11:00, Ian Campbell wrote:
> The "+ 8 / 8" thing still jumps out at me every time I look at it. Perhaps
> the intention would be clearer if written as:
>     int nr_cpus = physinfo.max_cpu_id + 1;
>     cpumap_size = (nr_cpus + 7) / 8;

I modified it to make it more clear.

> If xc_physinfo fails (which seems like a reasonably fatal type of error)
> then this function returns 0, but the caller does not seem to check for
> this case.

Okay, test added.

> > +    local_size = get_cpumap_size(xch);
> > +    cpumap_size = (local_size + 7) / 8;
>
> local_size has already been rounded up in get_cpumap_size. Do we really
> need to do it again?

I've made it more clear that this is rounding to uint64.

> > +    size = sizeof(xc_cpupoolinfo_t) + cpumap_size * 8 + local_size;
>
> Why do we need both "cpumap_size * 8" and local_size additional bytes
> here? Both contain the number of bytes necessary to contain a cpumap
> bitmask, and in fact I suspect they are both equal at this point (see
> point about rounding above).

The hypervisor returns a cpumask based on bytes, the tools use uint64-based
cpumasks. In practice this is equivalent as long as a multiple of 8 bytes is
written by the hypervisor and the system is running little endian.
I prefer a clean interface mapping here.

Juergen
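[Editor's note: the byte-based vs uint64-based split Juergen describes can be sketched as follows. This is a simplified stand-in for a bitmap byte-to-uint64 conversion, not the actual libxc helper; bit n of the hypervisor's byte map becomes bit n of the tools' uint64 map, which on little-endian hardware is the same memory layout.]

```c
#include <stdint.h>
#include <string.h>

/* Convert a byte-based cpumask (as the hypervisor returns it) into a
 * uint64-based cpumask (as the tools use it).  Byte i of the source
 * lands in bits 8*i .. 8*i+7 of the destination words. */
static void bytes_to_uint64_map(uint64_t *dst, const uint8_t *src,
                                unsigned int nr_bytes)
{
    unsigned int i;

    memset(dst, 0, ((nr_bytes + 7) / 8) * sizeof(uint64_t));
    for ( i = 0; i < nr_bytes; i++ )
        dst[i / 8] |= (uint64_t)src[i] << (8 * (i % 8));
}
```

On a little-endian machine this loop is equivalent to a plain memcpy of the byte array, which is exactly the "in practice equivalent" observation above; on big-endian the two representations diverge.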
Ian Campbell
2010-Sep-17 09:44 UTC
Re: [Xen-devel] Re: [Patch] update cpumask handling for cpu pools in libxc and python
On Fri, 2010-09-17 at 10:20 +0100, Juergen Gross wrote:
> On 09/17/10 11:00, Ian Campbell wrote:
> > The "+ 8 / 8" thing still jumps out at me every time I look at it.
>
> I modified it to make it more clear.

Thanks.

> > If xc_physinfo fails (which seems like a reasonably fatal type of error)
> > then this function returns 0, but the caller does not seem to check for
> > this case.
>
> Okay, test added.

> > local_size has already been rounded up in get_cpumap_size. Do we really
> > need to do it again?
>
> I've made it more clear that this is rounding to uint64.

Wouldn't that be "(.. + 63) / 8" then?

> > Why do we need both "cpumap_size * 8" and local_size additional bytes
> > here? Both contain the number of bytes necessary to contain a cpumap
> > bitmask, and in fact I suspect they are both equal at this point (see
> > point about rounding above).
>
> The hypervisor returns a cpumask based on bytes, the tools use
> uint64-based cpumasks.

Oh, I see: as well as xc_cpupoolinfo_t and the cpumap which it contains
being allocated together in a single buffer, you are also including a third
buffer which is for local use in this function only, but which is included
in the memory region returned to the caller (who doesn't know about it).
Seems a bit odd to me; why not just allocate it locally then free it
(or use alloca)?

Actually, when I complete my hypercall buffer patch this memory will need
to be separately allocated anyway, since it needs to come from a special
pool. I'd prefer it if you just used explicit separate allocation for this
buffer from the start.

> In practice this is equivalent as long as a multiple of 8 bytes is
> written by the hypervisor and the system is running little endian.
> I prefer a clean interface mapping here.

Using a single uint64 when there was a limit of 64 cpus made sense, but now
that it is variable length why not just use bytes everywhere? It would
avoid a lot of confusion about what the various size units are at various
points, etc. You would avoid needing to translate between the hypervisor
and tools representations too, wouldn't you?

Ian.
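[Editor's note: Ian's "just use bytes everywhere" suggestion amounts to treating the cpumap as a plain byte array at every layer, along these lines. The helper names are illustrative, not the actual libxc API.]

```c
#include <stdint.h>

/* With a byte-based cpumap the hypervisor's byte array can be used
 * directly at every layer, independent of endianness, with no
 * uint64 translation step.  Cpu n lives in bit (n % 8) of byte (n / 8). */
static int cpumap_test_cpu(const uint8_t *map, unsigned int cpu)
{
    return (map[cpu / 8] >> (cpu % 8)) & 1;
}

static void cpumap_set_cpu(uint8_t *map, unsigned int cpu)
{
    map[cpu / 8] |= 1 << (cpu % 8);
}
```

With this representation the only size unit that ever appears is bytes, which is the confusion-avoidance point being made above.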
Juergen Gross
2010-Sep-17 10:04 UTC
Re: [Xen-devel] Re: [Patch] update cpumask handling for cpu pools in libxc and python
On 09/17/10 11:44, Ian Campbell wrote:
> On Fri, 2010-09-17 at 10:20 +0100, Juergen Gross wrote:
> > I've made it more clear that this is rounding to uint64.
>
> Wouldn't that be "(.. + 63) / 8" then?

No, local_size is already bytes...

> Oh, I see: as well as xc_cpupoolinfo_t and the cpumap which it contains
> being allocated together in a single buffer, you are also including a
> third buffer which is for local use in this function only, but which is
> included in the memory region returned to the caller (who doesn't know
> about it). Seems a bit odd to me; why not just allocate it locally then
> free it (or use alloca)?
>
> Actually, when I complete my hypercall buffer patch this memory will
> need to be separately allocated anyway, since it needs to come from a
> special pool. I'd prefer it if you just used explicit separate
> allocation for this buffer from the start.

Okay.

> Using a single uint64 when there was a limit of 64 cpus made sense, but
> now that it is variable length why not just use bytes everywhere? It
> would avoid a lot of confusion about what the various size units are at
> various points, etc. You would avoid needing to translate between the
> hypervisor and tools representations too, wouldn't you?

This would suggest changing xc_vcpu_setaffinity() and
Juergen Gross
2010-Sep-17 10:08 UTC
Re: [Xen-devel] Re: [Patch] update cpumask handling for cpu pools in libxc and python
Please ignore the previous mail, I hit the send button too early...

On 09/17/10 11:44, Ian Campbell wrote:
> On Fri, 2010-09-17 at 10:20 +0100, Juergen Gross wrote:
> > I've made it more clear that this is rounding to uint64.
>
> Wouldn't that be "(.. + 63) / 8" then?

No, local_size is already bytes...

> Oh, I see: as well as xc_cpupoolinfo_t and the cpumap which it contains
> being allocated together in a single buffer, you are also including a
> third buffer which is for local use in this function only, but which is
> included in the memory region returned to the caller (who doesn't know
> about it). Seems a bit odd to me; why not just allocate it locally then
> free it (or use alloca)?
>
> Actually, when I complete my hypercall buffer patch this memory will
> need to be separately allocated anyway, since it needs to come from a
> special pool. I'd prefer it if you just used explicit separate
> allocation for this buffer from the start.

Okay.

> Using a single uint64 when there was a limit of 64 cpus made sense, but
> now that it is variable length why not just use bytes everywhere? It
> would avoid a lot of confusion about what the various size units are at
> various points, etc. You would avoid needing to translate between the
> hypervisor and tools representations too, wouldn't you?

This would suggest changing xc_vcpu_setaffinity() and xc_vcpu_getaffinity(),
too.

Juergen
Ian Campbell
2010-Sep-17 10:10 UTC
Re: [Xen-devel] Re: [Patch] update cpumask handling for cpu pools in libxc and python
On Fri, 2010-09-17 at 11:04 +0100, Juergen Gross wrote:
> On 09/17/10 11:44, Ian Campbell wrote:
> > Wouldn't that be "(.. + 63) / 8" then?
>
> No, local_size is already bytes...

Oh yeah. I think that helps make my point about the confusion inherent in
using variously bits, bytes and uint64s as you move up the call stack ;-)

> This would suggest changing xc_vcpu_setaffinity() and

Not sure if others will agree, but personally I'd be fine with that.

Ian.
Ian Campbell
2010-Sep-17 13:40 UTC
Re: [Xen-devel] Re: [Patch] update cpumask handling for cpu pools in libxc and python
On Fri, 2010-09-17 at 11:08 +0100, Juergen Gross wrote:
> On 09/17/10 11:44, Ian Campbell wrote:
> > Using a single uint64 when there was a limit of 64 cpus made sense, but
> > now that it is variable length why not just use bytes everywhere? It
> > would avoid a lot of confusion about what the various size units are at
> > various points, etc. You would avoid needing to translate between the
> > hypervisor and tools representations too, wouldn't you?
>
> This would suggest changing xc_vcpu_setaffinity() and
> xc_vcpu_getaffinity(), too.

Juergen told me privately that he will do this after this patchset, which
seems reasonable given the number of iterations we've been through so far.
So:

Acked-by: Ian Campbell <ian.campbell@citrix.com>
Ian Jackson
2010-Sep-17 15:51 UTC
Re: [Xen-devel] Re: [Patch] update cpumask handling for cpu pools in libxc and python
Juergen Gross writes ("Re: [Xen-devel] Re: [Patch] update cpumask handling for cpu pools in libxc and python"):
> The hypervisor returns a cpumask based on bytes, the tools use
> uint64-based cpumasks. In practice this is equivalent as long as a
> multiple of 8 bytes is written by the hypervisor and the system is
> running little endian.

Surely we don't intend to tie our code permanently to little-endian cpus?
I don't know whether we have supported any big-endian architectures in the
past, but I don't think we should rule it out where it's not necessary to
do so.

Ian.
Ian Jackson
2010-Sep-21 15:04 UTC
Re: [Xen-devel] Re: [Patch] update cpumask handling for cpu pools in libxc and python [and 1 more messages]
Juergen Gross writes ("Re: [Xen-devel] Re: [Patch] update cpumask handling for cpu pools in libxc and python"):
> On 09/20/10 18:11, Ian Jackson wrote:
> > The cpumask one still seems to assume little-endian, doesn't it?
>
> No, I don't think so.
> The hypervisor is using byte arrays for cpumasks in its interface to the
> tools. There bitmap_byte_to_64() is being used to convert it to uint64_t
> arrays.

Ah, OK.

> The main reason to change the cpumask representation in the tools to a
> byte array (which I will do soon in another patch) is to avoid an extra
> allocation of a buffer for the interface to the hypervisor.

OK. However, I haven't applied your patch because it breaks the libxl
build. You need to fix up all callers of xc_cpupool_getinfo (libxl.c:622
and perhaps elsewhere).

Thanks,
Ian.