Hi list,

I am playing with the grant table device in xen-3.1
(http://xenbits.xensource.com/xen-3.1-testing.hg) these days and I am not
sure about the following code snippet in the gntdev_ioctl function. This
could be a bug, IMO:

gntdev.c, lines 899-912
===================================================================
/* Unmap pages and add them to the free list. */
for (i = 0; i < op.count; ++i) {
        private_data->grants[start_index+i].state =
                GNTDEV_SLOT_INVALID;
        private_data->grants[start_index+i].u.free_list_index =
                private_data->free_list_size;
        private_data->free_list[private_data->free_list_size] =
                start_index + i;
        ++private_data->free_list_size;
}
compress_free_list(flip);
===================================================================

The reason I think this could be a bug is the following simple example.
Take the case MAX_GRANTS=4. In gntdev_open,
private_data->grants[i].u.free_list_index is initialized as
(MAX_GRANTS - i - 1), so for entries 0, 1, 2, 3, the corresponding
free_list_index values are:

entry #   free_list_index
0         3
1         2
2         1
3         0

Now, suppose we map 4 pages one by one, and then unmap those pages one by
one. According to the above code, the free_list_index values will become
the following. (When you unmap and unset the 0th entry, the
free_list_size at that time is 0, and so on.)

entry #   free_list_index
0         0
1         1
2         2
3         3

Now, let's map 1 page again and unmap it. After
IOCTL_GNTDEV_UNMAP_GRANT_REF, the indices will be (free_list_size will be
3 at the point of unmap):

entry #   free_list_index
0         3
1         1
2         2
3         3

Up to here, this does not look correct to me. In fact, if I now map 4
pages at once, the driver will think there is still 1 mapping slot
unused.

Am I making the correct observation?

Thanks.

Regards,
Wei Huang

Dept. of Computer Science and Engineering
Ohio State University

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
I'm wondering how you resolve the issue of structure packing when an
application that calls hypervisor functions (say, the grant table map
reference hypercall that uses struct grant_entry) is compiled with a
32-bit compiler and the hypervisor is compiled with a 64-bit one. There
is a discrepancy in the sizes and alignment of the fields within the
structure, depending on which compiler is being used.

So, my problem is that I have a 64-bit compiler and a 32-bit HVM guest
making a hypercall using the public header files. Is there some packing
being done in other files or at compile time that I'm not seeing?

struct grant_entry {
    /* GTF_xxx: various type and flag information. [XEN,GST] */
    uint16_t flags;
    /* The domain being granted foreign privileges. [GST] */
    domid_t  domid;
    /*
     * GTF_permit_access: Frame that @domid is allowed to map and
     * access. [GST]
     * GTF_accept_transfer: Frame whose ownership transferred by
     * @domid. [XEN]
     */
    uint32_t frame;
};
typedef struct grant_entry grant_entry_t;
See arch/x86/hvm/hvm.c. Some 32-to-64 conversion gets done in there, as a
wrapper around the core hypercall routines.

 -- Keir

On 22/10/07 20:14, "Roger Cruz" <rcruz@marathontechnologies.com> wrote:

> I'm wondering how you resolve the issue of structure packing when an
> application that calls hypervisor functions (say, the grant table map
> reference hypercall that uses struct grant_entry) is compiled with a
> 32-bit compiler and the hypervisor is compiled with a 64-bit one.
Thanks Keir, but I didn't see anything in that file dealing specifically
with packing issues. Did I miss it?

We found a couple of problems when we upgraded our hypervisor from 32 to
64 bits this weekend.

1) unsigned long, which is used to define xen_pfn_t, is 32 bits on Linux
32, Windows 32 and Windows 64, but on Linux 64 it is 64 bits. Therefore,
when we compile grant_table.h on Windows 64 and make the grant_copy
hypercall, the field alignment and size of the structure are different
and the hypervisor doesn't access the correct fields.

2) The GCC x64 compiler appears to align some of the fields to a 64-bit
boundary, whereas a Windows 32 compilation would align them to the next
32-bit boundary. Since we support both 32- and 64-bit Windows, we have to
make sure that the structures used to make the hypercall are hand-tweaked
to match the 64-bit hypervisor.

What I was looking for was public hypercall header files that would work
no matter what OS/compiler the guest application and the hypervisor were
built with. There doesn't appear to be any #pragma pack around the
grant_table.h header files that would force fields to some specific
alignment.

Thanks again.
Roger

-----Original Message-----
From: Keir Fraser [mailto:Keir.Fraser@cl.cam.ac.uk]
Sent: Monday, October 22, 2007 3:31 PM
To: Roger Cruz; xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] Structure packing when using hypercall.

See arch/x86/hvm/hvm.c. Some 32-to-64 conversion gets done in there, as
a wrapper around the core hypercall routines.

 -- Keir

On 22/10/07 20:14, "Roger Cruz" <rcruz@marathontechnologies.com> wrote:

> I'm wondering how you resolve the issue of structure packing when an
> application that calls hypervisor functions (say, the grant table map
> reference hypercall that uses struct grant_entry) is compiled with a
> 32-bit compiler and the hypervisor is compiled with a 64-bit version.
On 22/10/07 22:55, "Roger Cruz" <rcruz@marathontechnologies.com> wrote:

> Thanks Keir, but I didn't see anything in that file dealing
> specifically with packing issues. Did I miss it?

There's only one example right now: do_memory_op_compat32(). The only
valid grant table operation currently is GNTTABOP_query_size, which does
not need 32-to-64 conversion. Other grant-table operations will need
treatment similar to memory_op().

> What I was looking for was public hypercall header files that would
> work no matter what OS/compiler the guest application and the
> hypervisor were built with. There doesn't appear to be any #pragma
> pack around the grant_table.h header files that would force fields to
> some specific alignment.

Yes, we used to have gcc-specific packing attributes, but that got
complaints from those who don't use gcc to build their OS. So we ended up
with vanilla header files, and the hypervisor ABI is defined by those
header files interpreted according to the Linux/x86 C ABI rules. If you
need something different, you have to hand-tweak.

It'd be great to script the generation of appropriate header files -- we
could put the source header files in a more formal format to aid that --
but I don't think anyone is immediately likely to step forward to do
that. It could also allow us to clean up the hand-crafted 32-on-64 compat
goop (there's not much of it for HVM guests as yet, but there's quite a
lot for 32-on-64 PV!) by auto-generating some more of that too.

At least you only have to hand-tweak once -- the hypervisor interfaces
aren't changing, and you probably only use a small subset of the
hypercalls.

 -- Keir
Hi,

Have you tried executing this example and observing whether it does cause
a bug and/or an unexpected error (such as ENOMEM)?

When you map the single grant in your third step, it does not take the
first free slot in the grants array; instead it takes the slot at the end
of the free_list, which would be entry #3 in your example. When this
grant is unmapped, entry #3 is re-appended to the free_list.

The rationale behind this algorithm is that it can find a single free
slot (the assumed common case) in O(1) time, rather than performing an
O(n) search of the grants array.

Regards,

Derek Murray.

wei huang wrote:
> Hi list,
>
> I am playing with the grant table device in xen-3.1
> (http://xenbits.xensource.com/xen-3.1-testing.hg) these days and I am
> not sure about the following code snippet in the gntdev_ioctl function.
Hi Derek,

I had a slightly modified version of this (I added some other stuff to
discover the VMs on the same host), and apparently I got the problem I
described using my version. I came up with this question when I was going
through a line-by-line comparison trying to figure out my mistakes.
Perhaps the error is on my side and I have not fully understood your
code. I will work on it more and let you know.

Thanks.

- Wei

> Hi,
>
> Have you tried executing this example and observing if it does cause a
> bug and/or an unexpected error (such as ENOMEM)?
>
> When you map the single grant in your third step, it does not take the
> first free slot in the grants array; instead it takes the slot that is
> at the end of the free_list, which would be entry #3 in your example.
> When this grant is unmapped, entry #3 is re-appended to the free_list.
>
> The rationale behind this algorithm is that it is able to find a free
> single slot (the assumed common case) in O(1) time, rather than
> performing an O(n) search of the grants array.
>
> Regards,
>
> Derek Murray.