Glauber de Oliveira Costa
2006-Nov-24 14:08 UTC
[Xen-devel] [PATCH/RFC] Implement the memory_map hypercall
Keir,

Here's a first draft of an implementation of the memory_map hypercall. I would like comments on it, especially on these points:

1) I set a new field in the domain structure and use it, whenever it is set, to determine the maximum map. When it is not set, using max_mem will most probably give us a better bound than tot_pages, since it may allow us to balloon up later even with tools that do not call the new domctl (yet to come) that sets the map limit.

2) However, as it currently breaks dom0, I'm leaving it unimplemented in that case, and plan to do better than that once you apply the changes you said you would to the dom0 max_mem representation.

I'm currently working on the domctl side of things, but I'd like to have this sorted out first.

Thank you!

--
Glauber de Oliveira Costa
Red Hat Inc.
"Free as in Freedom"
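[Editorial note: the domctl Glauber mentions in point 1 is explicitly "yet to come", so the fragment below is only an illustrative sketch of what the hypervisor side of it could look like, written in the style of Xen's do_domctl() cases. The op name XEN_DOMCTL_set_memory_map_limit and its payload are assumptions; only the d->memory_map_limit field comes from the posted patch.]

    /* Hypothetical fragment of do_domctl(): record the map limit the
     * tools want XENMEM_memory_map to report for this domain. */
    case XEN_DOMCTL_set_memory_map_limit:
    {
        struct domain *d;

        ret = -ESRCH;
        d = find_domain_by_id(op->domain);
        if ( d == NULL )
            break;

        /* Limit in bytes; when non-zero, XENMEM_memory_map reports it
         * instead of max_pages << PAGE_SHIFT. */
        d->memory_map_limit = op->u.set_memory_map_limit.limit;

        put_domain(d);
        ret = 0;
    }
    break;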
Jun Koi
2006-Nov-24 14:36 UTC
Re: [Xen-devel] [PATCH/RFC] Implement the memory_map hypercall
Glauber, what is this hypercall for? To map hypervisor memory from Dom0?

Thanks,
J

On 11/24/06, Glauber de Oliveira Costa <gcosta@redhat.com> wrote:
> Keir,
>
> Here's a first draft of an implementation of the memory_map
> hypercall. I would like comments on it, especially on these points:
>
> 1) I set a new field in the domain structure and use it, whenever it is
> set, to determine the maximum map. When it is not set, using max_mem will
> most probably give us a better bound than tot_pages, since it may allow us
> to balloon up later even with tools that do not call the new domctl
> (yet to come) that sets the map limit.
>
> 2) However, as it currently breaks dom0, I'm leaving it unimplemented in
> that case, and plan to do better than that once you apply the changes
> you said you would to the dom0 max_mem representation.
>
> I'm currently working on the domctl side of things, but I'd like to have
> this sorted out first.
>
> Thank you!
>
> --
> Glauber de Oliveira Costa
> Red Hat Inc.
> "Free as in Freedom"
>
>
> # HG changeset patch
> # User gcosta@redhat.com
> # Date 1164380458 18000
> # Node ID da7aa8896ab07932160406c8b19a6ad4a61b3af7
> # Parent 47fcd5f768fef50cba2fc6dbadc7b75de55e88a5
> [XEN] Implement the memory_map hypercall
>
> It's needed to provide guests with an idea of a physical
> mapping that may differ from simply what's needed to fit
> tot_pages.
>
> Signed-off-by: Glauber de Oliveira Costa <gcosta@redhat.com>
>
> diff -r 47fcd5f768fe -r da7aa8896ab0 xen/arch/x86/mm.c
> --- a/xen/arch/x86/mm.c Fri Nov 17 08:30:43 2006 -0500
> +++ b/xen/arch/x86/mm.c Fri Nov 24 10:00:58 2006 -0500
> @@ -2976,7 +2976,45 @@ long arch_memory_op(int op, XEN_GUEST_HA
>
>      case XENMEM_memory_map:
>      {
> -        return -ENOSYS;
> +        struct xen_memory_map memmap;
> +        struct domain *d;
> +        XEN_GUEST_HANDLE(e820entry_t) buffer;
> +        struct e820entry map;
> +
> +        if ( IS_PRIV(current->domain) )
> +            return -ENOSYS;
> +
> +        d = current->domain;
> +
> +        if ( copy_from_guest(&memmap, arg, 1) )
> +            return -EFAULT;
> +
> +        buffer = guest_handle_cast(memmap.buffer, e820entry_t);
> +        if ( unlikely(guest_handle_is_null(buffer)) )
> +            return -EFAULT;
> +
> +        memmap.nr_entries = 1;
> +
> +        /* if we were not supplied with proper information, the best we can
> +         * do is rely on the current max_pages information as a sane bound */
> +        if (d->memory_map_limit)
> +            map.size = d->memory_map_limit;
> +        else
> +            map.size = d->max_pages << PAGE_SHIFT;
> +
> +        /* 8MB slack (to balance backend allocations). */
> +        map.size += 8 << 20;
> +        map.addr = 0ULL;
> +        map.type = E820_RAM;
> +
> +        if ( copy_to_guest(arg, &memmap, 1) )
> +            return -EFAULT;
> +
> +        if ( copy_to_guest(buffer, &map, 1) < 0 )
> +            return -EFAULT;
> +
> +        return 0;
> +
>      }
>
>      case XENMEM_machine_memory_map:
>
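[Editorial note: a quick, self-contained sanity check of the map.size computation in the quoted patch. The 300 MiB guest size is an assumption, chosen to match the guest discussed in the follow-up below.]

    /* Worked example of the patch's size computation, assuming a 300 MiB
     * guest with d->memory_map_limit unset (illustrative numbers only). */
    #include <stdio.h>

    int main(void)
    {
        unsigned long max_pages = 300UL * 1024 * 1024 / 4096;  /* 76800 4 KiB pages */
        unsigned long long size = (unsigned long long)max_pages << 12;  /* PAGE_SHIFT */

        size += 8ULL << 20;                  /* the 8 MiB slack added by the patch */
        printf("map.size = %#llx\n", size);  /* prints 0x13400000, i.e. 308 MiB */
        return 0;
    }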
Glauber de Oliveira Costa
2006-Nov-24 14:57 UTC
Re: [Xen-devel] [PATCH/RFC] Implement the memory_map hypercall
On Fri, Nov 24, 2006 at 11:36:36PM +0900, Jun Koi wrote:
> Glauber, what is this hypercall for? To map hypervisor memory from Dom0?

This hypercall (already declared, but currently always returning ENOSYS) is meant to give a guest (any guest) an idea of what its physical memory mapping should look like. Currently, Linux guest kernels check the result of this call and establish a memory mapping on their own if it returns ENOSYS. However, that self-built mapping is not proving to be the most suitable one, especially in the long term.

That said, when you boot a 300MB guest, instead of:

BIOS-provided physical RAM map:
 Xen: 0000000000000000 - 0000000013400000 (usable)

you'd see your RAM mapping extended to whatever value is set in d->memory_map_limit (or even, for some future reason, a differently organized map).

--
Glauber de Oliveira Costa
Red Hat Inc.
"Free as in Freedom"
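[Editorial note: a minimal sketch of the guest-side pattern Glauber describes, loosely modelled on the Linux/XenoLinux memory setup path: query XENMEM_memory_map, and only build a single-entry map locally if the hypervisor still returns ENOSYS. The function name xen_guest_memory_setup is an illustrative assumption, not taken from the thread or any tree.]

    /* Hypothetical guest setup fragment showing the fallback described above. */
    static void __init xen_guest_memory_setup(void)
    {
        static struct e820entry map[E820MAX];
        struct xen_memory_map memmap;
        int rc;

        memmap.nr_entries = E820MAX;
        set_xen_guest_handle(memmap.buffer, map);

        rc = HYPERVISOR_memory_op(XENMEM_memory_map, &memmap);
        if (rc == -ENOSYS) {
            /* Hypervisor predates the hypercall: fall back to a single RAM
             * region sized from the pages handed to us at boot. */
            memmap.nr_entries = 1;
            map[0].addr = 0ULL;
            map[0].size = (unsigned long long)xen_start_info->nr_pages << PAGE_SHIFT;
            map[0].type = E820_RAM;
            rc = 0;
        }
        BUG_ON(rc);

        /* memmap.nr_entries entries of map[] now describe the pseudo-physical
         * RAM map the guest will report, e.g. in "BIOS-provided physical RAM map". */
    }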