Julien Grall
2013-Dec-13 19:37 UTC
[PATCH v4 00/11] xen/arm: Handle correctly foreign mapping
Hello,

This patch series aims to fix the "Failed to unmap" message in dom0 when a
guest is being created. Without this patch series, dom0 will leak memory each
time a domain is created. It should be considered a blocker for the Xen 4.4
release.

Compared to the previous version, I have removed the dependency on the PVH
dom0 patch series
(http://lists.xenproject.org/archives/html/xen-devel/2013-12/msg01026.html).
This will allow this series to be pushed as soon as possible, without waiting
for dom0 PVH support (which is x86-specific).

    - Patch #1: patch #4 of the dom0 PVH series
    - Patches #2-3: preparatory work for the other patches
    - Patches #4-7: add support for p2m types
    - Patches #8-10: handle foreign mappings correctly. Patch #9 contains
      common removal code taken from patch #7 of the dom0 PVH series
    - Patch #11: not really part of this series; it adds support for
      read-only grant mappings

For all the changes, see each patch.

Sincerely yours,

Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>
Cc: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Keir Fraser <keir@xen.org>
Cc: Mukesh Rathor <mukesh.rathor@oracle.com>

Julien Grall (10):
  xen/arm: Introduce steps in domain_relinquish_resource
  xen/arm: move mfn_to_p2m_entry in arch/arm/p2m.c
  xen/arm: Implement p2m_type_t as an enum
  xen/arm: Store p2m type in each page of the guest
  xen/arm: p2m: Extend p2m_lookup parameters to retrieve the p2m type
  xen/arm: Retrieve p2m type in get_page_from_gfn
  xen/arm: Handle remove foreign mapping
  xen/arm: Add relinquish_p2m_mapping to remove reference on every mapped page
  xen/arm: Set foreign page type to p2m_map_foreign
  xen/arm: grant-table: Support read-only mapping

Mukesh Rathor (1):
  pvh dom0: Introduce p2m_map_foreign

 xen/arch/arm/domain.c        |   45 ++++++++++--
 xen/arch/arm/mm.c            |   39 +++++++----
 xen/arch/arm/p2m.c           |  158 +++++++++++++++++++++++++++++++++++++-----
 xen/arch/arm/traps.c         |    6 +-
 xen/arch/x86/mm/p2m-ept.c    |    1 +
 xen/arch/x86/mm/p2m-pt.c     |    1 +
 xen/arch/x86/mm/p2m.c        |   28 +++++---
 xen/common/memory.c          |   12 +++-
 xen/include/asm-arm/domain.h |    9 +++
 xen/include/asm-arm/p2m.h    |   69 +++++++++++++++---
 xen/include/asm-arm/page.h   |   24 +------
 xen/include/asm-x86/p2m.h    |   10 +++
 12 files changed, 318 insertions(+), 84 deletions(-)

--
1.7.10.4
From: Mukesh Rathor <mukesh.rathor@oracle.com> In this patch, a new type p2m_map_foreign is introduced for pages that toolstack on PVH dom0 maps from foreign domains that its creating or supporting during it''s run time. Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com> Cc: Jan Beulich <jbeulich@suse.com> Cc: Keir Fraser <keir@xen.org> Cc: George Dunlap <george.dunlap@eu.citrix.com> --- Changes in v4: - Use the patch #4 from dom0 pvh v6 series. We need it for ARM --- xen/arch/x86/mm/p2m-ept.c | 1 + xen/arch/x86/mm/p2m-pt.c | 1 + xen/arch/x86/mm/p2m.c | 28 ++++++++++++++++++++-------- xen/include/asm-x86/p2m.h | 4 ++++ 4 files changed, 26 insertions(+), 8 deletions(-) diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c index 92d9e2d..08d1d72 100644 --- a/xen/arch/x86/mm/p2m-ept.c +++ b/xen/arch/x86/mm/p2m-ept.c @@ -75,6 +75,7 @@ static void ept_p2m_type_to_flags(ept_entry_t *entry, p2m_type_t type, p2m_acces entry->w = 0; break; case p2m_grant_map_rw: + case p2m_map_foreign: entry->r = entry->w = 1; entry->x = 0; break; diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c index a1d5650..09b60ce 100644 --- a/xen/arch/x86/mm/p2m-pt.c +++ b/xen/arch/x86/mm/p2m-pt.c @@ -89,6 +89,7 @@ static unsigned long p2m_type_to_flags(p2m_type_t t, mfn_t mfn) case p2m_ram_rw: return flags | P2M_BASE_FLAGS | _PAGE_RW; case p2m_grant_map_rw: + case p2m_map_foreign: return flags | P2M_BASE_FLAGS | _PAGE_RW | _PAGE_NX_BIT; case p2m_mmio_direct: if ( !rangeset_contains_singleton(mmio_ro_ranges, mfn_x(mfn)) ) diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c index 8f380ed..0659ef1 100644 --- a/xen/arch/x86/mm/p2m.c +++ b/xen/arch/x86/mm/p2m.c @@ -525,7 +525,7 @@ p2m_remove_page(struct p2m_domain *p2m, unsigned long gfn, unsigned long mfn, for ( i = 0; i < (1UL << page_order); i++ ) { mfn_return = p2m->get_entry(p2m, gfn + i, &t, &a, 0, NULL); - if ( !p2m_is_grant(t) && !p2m_is_shared(t) ) + if ( !p2m_is_grant(t) && !p2m_is_shared(t) && !p2m_is_foreign(t) ) set_gpfn_from_mfn(mfn+i, INVALID_M2P_ENTRY); ASSERT( !p2m_is_valid(t) || mfn + i == mfn_x(mfn_return) ); } @@ -756,10 +756,9 @@ void p2m_change_type_range(struct domain *d, p2m_unlock(p2m); } - - -int -set_mmio_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn) +static int +set_typed_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn, + p2m_type_t gfn_p2mt) { int rc = 0; p2m_access_t a; @@ -784,16 +783,29 @@ set_mmio_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn) set_gpfn_from_mfn(mfn_x(omfn), INVALID_M2P_ENTRY); } - P2M_DEBUG("set mmio %lx %lx\n", gfn, mfn_x(mfn)); - rc = set_p2m_entry(p2m, gfn, mfn, PAGE_ORDER_4K, p2m_mmio_direct, p2m->default_access); + P2M_DEBUG("set %d %lx %lx\n", gfn_p2mt, gfn, mfn_x(mfn)); + rc = set_p2m_entry(p2m, gfn, mfn, PAGE_ORDER_4K, gfn_p2mt, + p2m->default_access); gfn_unlock(p2m, gfn, 0); if ( 0 == rc ) gdprintk(XENLOG_ERR, - "set_mmio_p2m_entry: set_p2m_entry failed! mfn=%08lx\n", + "%s: set_p2m_entry failed! mfn=%08lx\n", __func__, mfn_x(get_gfn_query_unlocked(p2m->domain, gfn, &ot))); return rc; } +/* Returns: True for success. 
*/ +int set_foreign_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn) +{ + return set_typed_p2m_entry(d, gfn, mfn, p2m_map_foreign); +} + +int +set_mmio_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn) +{ + return set_typed_p2m_entry(d, gfn, mfn, p2m_mmio_direct); +} + int clear_mmio_p2m_entry(struct domain *d, unsigned long gfn) { diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h index f4e7253..d5d6391 100644 --- a/xen/include/asm-x86/p2m.h +++ b/xen/include/asm-x86/p2m.h @@ -70,6 +70,7 @@ typedef enum { p2m_ram_paging_in = 11, /* Memory that is being paged in */ p2m_ram_shared = 12, /* Shared or sharable memory */ p2m_ram_broken = 13, /* Broken page, access cause domain crash */ + p2m_map_foreign = 14, /* ram pages from foreign domain */ } p2m_type_t; /* @@ -180,6 +181,7 @@ typedef unsigned int p2m_query_t; #define p2m_is_sharable(_t) (p2m_to_mask(_t) & P2M_SHARABLE_TYPES) #define p2m_is_shared(_t) (p2m_to_mask(_t) & P2M_SHARED_TYPES) #define p2m_is_broken(_t) (p2m_to_mask(_t) & P2M_BROKEN_TYPES) +#define p2m_is_foreign(_t) (p2m_to_mask(_t) & p2m_to_mask(p2m_map_foreign)) /* Per-p2m-table state */ struct p2m_domain { @@ -510,6 +512,8 @@ p2m_type_t p2m_change_type(struct domain *d, unsigned long gfn, int set_mmio_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn); int clear_mmio_p2m_entry(struct domain *d, unsigned long gfn); +/* Set foreign mfn in the current guest''s p2m table. */ +int set_foreign_p2m_entry(struct domain *domp, unsigned long gfn, mfn_t mfn); /* * Populate-on-demand -- 1.7.10.4
Julien Grall
2013-Dec-13 19:37 UTC
[PATCH v4 02/11] xen/arm: Introduce steps in domain_relinquish_resource
In a later patch, a new step will be added. It will avoid to check every step when the function was preempted. Signed-off-by: Julien Grall <julien.grall@linaro.org> Acked-by: Ian Campbell <ian.campbell@citrix.com> --- Changes in v2: - Introduce the patch --- xen/arch/arm/domain.c | 37 ++++++++++++++++++++++++++++++------- xen/include/asm-arm/domain.h | 8 ++++++++ 2 files changed, 38 insertions(+), 7 deletions(-) diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c index 67c65c3..1590708 100644 --- a/xen/arch/arm/domain.c +++ b/xen/arch/arm/domain.c @@ -497,6 +497,8 @@ int arch_domain_create(struct domain *d, unsigned int domcr_flags) { int rc; + d->arch.relmem = RELMEM_not_started; + /* Idle domains do not need this setup */ if ( is_idle_domain(d) ) return 0; @@ -696,15 +698,36 @@ int domain_relinquish_resources(struct domain *d) { int ret = 0; - ret = relinquish_memory(d, &d->xenpage_list); - if ( ret ) - return ret; + switch ( d->arch.relmem ) + { + case RELMEM_not_started: + d->arch.relmem = RELMEM_xen; + /* Falltrough */ - ret = relinquish_memory(d, &d->page_list); - if ( ret ) - return ret; + case RELMEM_xen: + ret = relinquish_memory(d, &d->xenpage_list); + if ( ret ) + return ret; - return ret; + d->arch.relmem = RELMEM_page; + /* Fallthrough */ + + case RELMEM_page: + ret = relinquish_memory(d, &d->page_list); + if ( ret ) + return ret; + + d->arch.relmem = RELMEM_done; + /* Fallthrough */ + + case RELMEM_done: + break; + + default: + BUG(); + } + + return 0; } void arch_dump_domain_info(struct domain *d) diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h index 0733c5e..53c9895 100644 --- a/xen/include/asm-arm/domain.h +++ b/xen/include/asm-arm/domain.h @@ -107,6 +107,14 @@ struct arch_domain struct hvm_domain hvm_domain; xen_pfn_t *grant_table_gpfn; + /* Continuable domain_relinquish_resources(). */ + enum { + RELMEM_not_started, + RELMEM_xen, + RELMEM_page, + RELMEM_done, + } relmem; + /* Virtual CPUID */ uint32_t vpidr; -- 1.7.10.4
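The point of recording progress in d->arch.relmem is that a preempted call to domain_relinquish_resources() can be restarted and resume at the step it had reached, instead of re-running (and re-checking) the earlier steps. As a rough illustration of the caller-side pattern only (in Xen the preempted hypercall is restarted rather than looping like this, and process_pending_work() is a hypothetical placeholder):

    /* Illustrative sketch, not part of the series. */
    static void teardown(struct domain *d)
    {
        int ret;

        do {
            ret = domain_relinquish_resources(d);
            /* A non-zero return means the current step was preempted.
             * d->arch.relmem already names the step to resume, so the next
             * call falls straight through to it in the switch above. */
            if ( ret )
                process_pending_work();    /* hypothetical placeholder */
        } while ( ret );
    }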
Julien Grall
2013-Dec-13 19:37 UTC
[PATCH v4 03/11] xen/arm: move mfn_to_p2m_entry in arch/arm/p2m.c
The function mfn_to_p2m_entry will be extended in a following patch to handle p2m_type_t. It will break compilation because p2m_type_t is not defined (interdependence between includes). It''s easier to move the function in arch/arm/p2m.c and it''s not harmful as the function is only used in this file. Signed-off-by: Julien Grall <julien.grall@linaro.org> Acked-by: Ian Campbell <ian.campbell@citrix.com> --- xen/arch/arm/p2m.c | 22 ++++++++++++++++++++++ xen/include/asm-arm/page.h | 22 ---------------------- 2 files changed, 22 insertions(+), 22 deletions(-) diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c index 083f8bf..74636df 100644 --- a/xen/arch/arm/p2m.c +++ b/xen/arch/arm/p2m.c @@ -128,6 +128,28 @@ int p2m_pod_decrease_reservation(struct domain *d, return -ENOSYS; } +static lpae_t mfn_to_p2m_entry(unsigned long mfn, unsigned int mattr) +{ + paddr_t pa = ((paddr_t) mfn) << PAGE_SHIFT; + lpae_t e = (lpae_t) { + .p2m.xn = 0, + .p2m.af = 1, + .p2m.sh = LPAE_SH_OUTER, + .p2m.read = 1, + .p2m.write = 1, + .p2m.mattr = mattr, + .p2m.table = 1, + .p2m.valid = 1, + }; + + ASSERT(!(pa & ~PAGE_MASK)); + ASSERT(!(pa & ~PADDR_MASK)); + + e.bits |= pa; + + return e; +} + /* Allocate a new page table page and hook it in via the given entry */ static int p2m_create_table(struct domain *d, lpae_t *entry) diff --git a/xen/include/asm-arm/page.h b/xen/include/asm-arm/page.h index d468418..0625464 100644 --- a/xen/include/asm-arm/page.h +++ b/xen/include/asm-arm/page.h @@ -213,28 +213,6 @@ static inline lpae_t mfn_to_xen_entry(unsigned long mfn) return e; } -static inline lpae_t mfn_to_p2m_entry(unsigned long mfn, unsigned int mattr) -{ - paddr_t pa = ((paddr_t) mfn) << PAGE_SHIFT; - lpae_t e = (lpae_t) { - .p2m.xn = 0, - .p2m.af = 1, - .p2m.sh = LPAE_SH_OUTER, - .p2m.write = 1, - .p2m.read = 1, - .p2m.mattr = mattr, - .p2m.table = 1, - .p2m.valid = 1, - }; - - ASSERT(!(pa & ~PAGE_MASK)); - ASSERT(!(pa & ~PADDR_MASK)); - - e.bits |= pa; - - return e; -} - #if defined(CONFIG_ARM_32) # include <asm/arm32/page.h> #elif defined(CONFIG_ARM_64) -- 1.7.10.4
Julien Grall
2013-Dec-13 19:37 UTC
[PATCH v4 04/11] xen/arm: Implement p2m_type_t as an enum
Until now, Xen doesn''t know the type of the page (ram, foreign page, mmio,...). Introduce p2m_type_t with basic types: - p2m_invalid: Nothing is mapped here - p2m_ram_rw: Normal read/write guest RAM - p2m_ram_ro: Read-only guest RAM - p2m_mmio_direct: Read/write mapping of device memory - p2m_map_foreign: RAM page from foreign guest - p2m_grant_map_rw: Read/write grant mapping - p2m_grant_map_ro: Read-only grant mapping Signed-off-by: Julien Grall <julien.grall@linaro.org> Acked-by: Ian Campbell <ian.campbell@citrix.com> --- Changes in v3: - s/ / / - Replace p2m by either pte or p2m entry - Fix compilation (unitialized value) - Add BUILD_BUG_ON (from patch #4) and fix it Changes in v2: - Add comment for future improvement - Add p2m_max_real_type. Will be use later to check the size of the enum - Let the compiler choose the value for each name of the enum - Add grant mapping type --- xen/arch/arm/p2m.c | 2 ++ xen/include/asm-arm/p2m.h | 20 +++++++++++++++++++- 2 files changed, 21 insertions(+), 1 deletion(-) diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c index 74636df..691cdfa 100644 --- a/xen/arch/arm/p2m.c +++ b/xen/arch/arm/p2m.c @@ -142,6 +142,8 @@ static lpae_t mfn_to_p2m_entry(unsigned long mfn, unsigned int mattr) .p2m.valid = 1, }; + BUILD_BUG_ON(p2m_max_real_type > (1 << 4)); + ASSERT(!(pa & ~PAGE_MASK)); ASSERT(!(pa & ~PADDR_MASK)); diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h index c660820..bc86a49 100644 --- a/xen/include/asm-arm/p2m.h +++ b/xen/include/asm-arm/p2m.h @@ -20,6 +20,25 @@ struct p2m_domain { uint8_t vmid; }; +/* List of possible type for each page in the p2m entry. + * The number of available bit per page in the pte for this purpose is 4 bits. + * So it''s possible to only have 16 fields. If we run out of value in the + * future, it''s possible to use higher value for pseudo-type and don''t store + * them in the p2m entry. + */ +typedef enum { + p2m_invalid = 0, /* Nothing mapped here */ + p2m_ram_rw, /* Normal read/write guest RAM */ + p2m_ram_ro, /* Read-only; writes are silently dropped */ + p2m_mmio_direct, /* Read/write mapping of genuine MMIO area */ + p2m_map_foreign, /* Ram pages from foreign domain */ + p2m_grant_map_rw, /* Read/write grant mapping */ + p2m_grant_map_ro, /* Read-only grant mapping */ + p2m_max_real_type, /* Types after this won''t be store in the p2m */ +} p2m_type_t; + +#define p2m_is_foreign(_t) ((_t) == p2m_map_foreign) + /* Initialise vmid allocator */ void p2m_vmid_allocator_init(void); @@ -72,7 +91,6 @@ p2m_pod_decrease_reservation(struct domain *d, unsigned int order); /* Look up a GFN and take a reference count on the backing page. */ -typedef int p2m_type_t; typedef unsigned int p2m_query_t; #define P2M_ALLOC (1u<<0) /* Populate PoD and paged-out entries */ #define P2M_UNSHARE (1u<<1) /* Break CoW sharing */ -- 1.7.10.4
Julien Grall
2013-Dec-13 19:37 UTC
[PATCH v4 05/11] xen/arm: Store p2m type in each page of the guest
Use the field ''avail'' to store the type of the page. Rename it to ''type'' for convenience. The information stored in this field will be retrieved in a future patch to change the behaviour when the page is removed. Also introduce guest_physmap_add_entry to map and set a specific p2m type for a page. Signed-off-by: Julien Grall <julien.grall@linaro.org> --- Changes in v3: - Typo in the commit message - Rename ''p2mt'' field to ''type'' - Remove default in switch to let the compiler warns - Move BUILD_BUG_ON to patch #3 Changes in v2: - Rename ''avail'' field to ''p2mt'' in the p2m structure - Add BUILD_BUG_ON to check if the enum value will fit in the field - Implement grant mapping type --- xen/arch/arm/p2m.c | 56 ++++++++++++++++++++++++++++++++------------ xen/include/asm-arm/p2m.h | 18 ++++++++++---- xen/include/asm-arm/page.h | 2 +- 3 files changed, 56 insertions(+), 20 deletions(-) diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c index 691cdfa..3a79927 100644 --- a/xen/arch/arm/p2m.c +++ b/xen/arch/arm/p2m.c @@ -128,7 +128,8 @@ int p2m_pod_decrease_reservation(struct domain *d, return -ENOSYS; } -static lpae_t mfn_to_p2m_entry(unsigned long mfn, unsigned int mattr) +static lpae_t mfn_to_p2m_entry(unsigned long mfn, unsigned int mattr, + p2m_type_t t) { paddr_t pa = ((paddr_t) mfn) << PAGE_SHIFT; lpae_t e = (lpae_t) { @@ -136,14 +137,34 @@ static lpae_t mfn_to_p2m_entry(unsigned long mfn, unsigned int mattr) .p2m.af = 1, .p2m.sh = LPAE_SH_OUTER, .p2m.read = 1, - .p2m.write = 1, .p2m.mattr = mattr, .p2m.table = 1, .p2m.valid = 1, + .p2m.type = t, }; BUILD_BUG_ON(p2m_max_real_type > (1 << 4)); + switch (t) + { + case p2m_grant_map_rw: + e.p2m.xn = 1; + /* Fallthrough */ + case p2m_ram_rw: + case p2m_mmio_direct: + case p2m_map_foreign: + e.p2m.write = 1; + break; + + case p2m_grant_map_ro: + e.p2m.xn = 1; + /* Fallthrough */ + case p2m_invalid: + case p2m_ram_ro: + default: + e.p2m.write = 0; + } + ASSERT(!(pa & ~PAGE_MASK)); ASSERT(!(pa & ~PADDR_MASK)); @@ -173,7 +194,7 @@ static int p2m_create_table(struct domain *d, clear_page(p); unmap_domain_page(p); - pte = mfn_to_p2m_entry(page_to_mfn(page), MATTR_MEM); + pte = mfn_to_p2m_entry(page_to_mfn(page), MATTR_MEM, p2m_invalid); write_pte(entry, pte); @@ -191,7 +212,8 @@ static int create_p2m_entries(struct domain *d, paddr_t start_gpaddr, paddr_t end_gpaddr, paddr_t maddr, - int mattr) + int mattr, + p2m_type_t t) { int rc, flush; struct p2m_domain *p2m = &d->arch.p2m; @@ -271,14 +293,15 @@ static int create_p2m_entries(struct domain *d, goto out; } - pte = mfn_to_p2m_entry(page_to_mfn(page), mattr); + pte = mfn_to_p2m_entry(page_to_mfn(page), mattr, t); write_pte(&third[third_table_offset(addr)], pte); } break; case INSERT: { - lpae_t pte = mfn_to_p2m_entry(maddr >> PAGE_SHIFT, mattr); + lpae_t pte = mfn_to_p2m_entry(maddr >> PAGE_SHIFT, + mattr, t); write_pte(&third[third_table_offset(addr)], pte); maddr += PAGE_SIZE; } @@ -313,7 +336,8 @@ int p2m_populate_ram(struct domain *d, paddr_t start, paddr_t end) { - return create_p2m_entries(d, ALLOCATE, start, end, 0, MATTR_MEM); + return create_p2m_entries(d, ALLOCATE, start, end, + 0, MATTR_MEM, p2m_ram_rw); } int map_mmio_regions(struct domain *d, @@ -321,18 +345,20 @@ int map_mmio_regions(struct domain *d, paddr_t end_gaddr, paddr_t maddr) { - return create_p2m_entries(d, INSERT, start_gaddr, end_gaddr, maddr, MATTR_DEV); + return create_p2m_entries(d, INSERT, start_gaddr, end_gaddr, + maddr, MATTR_DEV, p2m_mmio_direct); } -int guest_physmap_add_page(struct domain *d, - unsigned 
long gpfn, - unsigned long mfn, - unsigned int page_order) +int guest_physmap_add_entry(struct domain *d, + unsigned long gpfn, + unsigned long mfn, + unsigned long page_order, + p2m_type_t t) { return create_p2m_entries(d, INSERT, pfn_to_paddr(gpfn), - pfn_to_paddr(gpfn + (1<<page_order)), - pfn_to_paddr(mfn), MATTR_MEM); + pfn_to_paddr(gpfn + (1 << page_order)), + pfn_to_paddr(mfn), MATTR_MEM, t); } void guest_physmap_remove_page(struct domain *d, @@ -342,7 +368,7 @@ void guest_physmap_remove_page(struct domain *d, create_p2m_entries(d, REMOVE, pfn_to_paddr(gpfn), pfn_to_paddr(gpfn + (1<<page_order)), - pfn_to_paddr(mfn), MATTR_MEM); + pfn_to_paddr(mfn), MATTR_MEM, p2m_invalid); } int p2m_alloc_table(struct domain *d) diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h index bc86a49..5742bbc 100644 --- a/xen/include/asm-arm/p2m.h +++ b/xen/include/asm-arm/p2m.h @@ -68,11 +68,21 @@ int p2m_populate_ram(struct domain *d, paddr_t start, paddr_t end); int map_mmio_regions(struct domain *d, paddr_t start_gaddr, paddr_t end_gaddr, paddr_t maddr); +int guest_physmap_add_entry(struct domain *d, + unsigned long gfn, + unsigned long mfn, + unsigned long page_order, + p2m_type_t t); + /* Untyped version for RAM only, for compatibility */ -int guest_physmap_add_page(struct domain *d, - unsigned long gfn, - unsigned long mfn, - unsigned int page_order); +static inline int guest_physmap_add_page(struct domain *d, + unsigned long gfn, + unsigned long mfn, + unsigned int page_order) +{ + return guest_physmap_add_entry(d, gfn, mfn, page_order, p2m_ram_rw); +} + void guest_physmap_remove_page(struct domain *d, unsigned long gpfn, unsigned long mfn, unsigned int page_order); diff --git a/xen/include/asm-arm/page.h b/xen/include/asm-arm/page.h index 0625464..670d4e7 100644 --- a/xen/include/asm-arm/page.h +++ b/xen/include/asm-arm/page.h @@ -153,7 +153,7 @@ typedef struct { unsigned long contig:1; /* In a block of 16 contiguous entries */ unsigned long sbz2:1; unsigned long xn:1; /* eXecute-Never */ - unsigned long avail:4; /* Ignored by hardware */ + unsigned long type:4; /* Ignore by hardware. Used to store p2m types */ unsigned long sbz1:5; } __attribute__((__packed__)) lpae_p2m_t; -- 1.7.10.4
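With the p2m type plumbed through create_p2m_entries(), callers can now choose the type per mapping instead of always getting a plain read/write RAM entry. A short sketch of how the new entry point is used later in the series (d, gpfn and mfn are assumed to be in scope):

    /* RAM, as before: the compatibility wrapper keeps existing callers working. */
    rc = guest_physmap_add_page(d, gpfn, mfn, 0);

    /* A page belonging to another domain, as done in patch #10. */
    rc = guest_physmap_add_entry(d, gpfn, mfn, 0, p2m_map_foreign);

    /* A read-only grant mapping, as done in patch #11. */
    rc = guest_physmap_add_entry(d, gpfn, mfn, 0, p2m_grant_map_ro);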
Julien Grall
2013-Dec-13 19:37 UTC
[PATCH v4 06/11] xen/arm: p2m: Extend p2m_lookup parameters to retrieve the p2m type
Signed-off-by: Julien Grall <julien.grall@linaro.org> Acked-by: Ian Campbell <ian.campbell@citrix.com> --- Changes in v3: - ''p2mt'' field was renamed in ''type'' - missing one p2m_lookup Changes in v2: - This patch was "Add p2m_get_entry" on the previous version - Don''t add a new function but only extend p2m_lookup --- xen/arch/arm/mm.c | 2 +- xen/arch/arm/p2m.c | 13 +++++++++++-- xen/arch/arm/traps.c | 6 +++--- xen/include/asm-arm/p2m.h | 2 +- 4 files changed, 16 insertions(+), 7 deletions(-) diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c index 654281a..b1222be 100644 --- a/xen/arch/arm/mm.c +++ b/xen/arch/arm/mm.c @@ -1032,7 +1032,7 @@ static int xenmem_add_to_physmap_one( return rc; } - maddr = p2m_lookup(od, pfn_to_paddr(idx)); + maddr = p2m_lookup(od, pfn_to_paddr(idx), NULL); if ( maddr == INVALID_PADDR ) { dump_p2m_lookup(od, pfn_to_paddr(idx)); diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c index 3a79927..39d8a03 100644 --- a/xen/arch/arm/p2m.c +++ b/xen/arch/arm/p2m.c @@ -72,11 +72,17 @@ static lpae_t *p2m_map_first(struct p2m_domain *p2m, paddr_t addr) * There are no processor functions to do a stage 2 only lookup therefore we * do a a software walk. */ -paddr_t p2m_lookup(struct domain *d, paddr_t paddr) +paddr_t p2m_lookup(struct domain *d, paddr_t paddr, p2m_type_t *t) { struct p2m_domain *p2m = &d->arch.p2m; lpae_t pte, *first = NULL, *second = NULL, *third = NULL; paddr_t maddr = INVALID_PADDR; + p2m_type_t _t; + + /* Allow t to be NULL */ + t = t ?: &_t; + + *t = p2m_invalid; spin_lock(&p2m->lock); @@ -102,7 +108,10 @@ paddr_t p2m_lookup(struct domain *d, paddr_t paddr) done: if ( pte.p2m.valid ) + { maddr = (pte.bits & PADDR_MASK & PAGE_MASK) | (paddr & ~PAGE_MASK); + *t = pte.p2m.type; + } if (third) unmap_domain_page(third); if (second) unmap_domain_page(second); @@ -511,7 +520,7 @@ err: unsigned long gmfn_to_mfn(struct domain *d, unsigned long gpfn) { - paddr_t p = p2m_lookup(d, pfn_to_paddr(gpfn)); + paddr_t p = p2m_lookup(d, pfn_to_paddr(gpfn), NULL); return p >> PAGE_SHIFT; } diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c index e01045e..7c5ab19 100644 --- a/xen/arch/arm/traps.c +++ b/xen/arch/arm/traps.c @@ -1421,7 +1421,7 @@ void dump_guest_s1_walk(struct domain *d, vaddr_t addr) printk("dom%d VA 0x%08"PRIvaddr"\n", d->domain_id, addr); printk(" TTBCR: 0x%08"PRIregister"\n", ttbcr); printk(" TTBR0: 0x%016"PRIx64" = 0x%"PRIpaddr"\n", - ttbr0, p2m_lookup(d, ttbr0 & PAGE_MASK)); + ttbr0, p2m_lookup(d, ttbr0 & PAGE_MASK, NULL)); if ( ttbcr & TTBCR_EAE ) { @@ -1434,7 +1434,7 @@ void dump_guest_s1_walk(struct domain *d, vaddr_t addr) return; } - paddr = p2m_lookup(d, ttbr0 & PAGE_MASK); + paddr = p2m_lookup(d, ttbr0 & PAGE_MASK, NULL); if ( paddr == INVALID_PADDR ) { printk("Failed TTBR0 maddr lookup\n"); @@ -1449,7 +1449,7 @@ void dump_guest_s1_walk(struct domain *d, vaddr_t addr) !(first[offset] & 0x2) ) goto done; - paddr = p2m_lookup(d, first[offset] & PAGE_MASK); + paddr = p2m_lookup(d, first[offset] & PAGE_MASK, NULL); if ( paddr == INVALID_PADDR ) { diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h index 5742bbc..9d0d1e7 100644 --- a/xen/include/asm-arm/p2m.h +++ b/xen/include/asm-arm/p2m.h @@ -58,7 +58,7 @@ int p2m_alloc_table(struct domain *d); void p2m_load_VTTBR(struct domain *d); /* Look up the MFN corresponding to a domain''s PFN. */ -paddr_t p2m_lookup(struct domain *d, paddr_t gpfn); +paddr_t p2m_lookup(struct domain *d, paddr_t gpfn, p2m_type_t *t); /* Setup p2m RAM mapping for domain d from start-end. 
*/ int p2m_populate_ram(struct domain *d, paddr_t start, paddr_t end); -- 1.7.10.4
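The extended lookup keeps existing callers unchanged: passing NULL ignores the type, while interested callers get it back alongside the machine address. A minimal usage sketch based on the new signature (d and gpfn assumed to be in scope):

    p2m_type_t t;
    paddr_t maddr;

    /* Callers that only want the address can pass NULL for the type. */
    maddr = p2m_lookup(d, pfn_to_paddr(gpfn), NULL);

    /* Callers that care about the type get it filled in as well. */
    maddr = p2m_lookup(d, pfn_to_paddr(gpfn), &t);
    if ( maddr != INVALID_PADDR && p2m_is_foreign(t) )
    {
        /* The mapping points at another domain's page. */
    }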
Julien Grall
2013-Dec-13 19:37 UTC
[PATCH v4 07/11] xen/arm: Retrieve p2m type in get_page_from_gfn
Signed-off-by: Julien Grall <julien.grall@linaro.org>

---
Changes in v3:
    - Return NULL when p2m type is invalid or mmio

Changes in v2:
    - Use p2m_lookup as p2m_get_entry was removed
---
 xen/include/asm-arm/p2m.h |   12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 9d0d1e7..0eb07a8 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -109,11 +109,17 @@ static inline struct page_info *get_page_from_gfn(
     struct domain *d, unsigned long gfn, p2m_type_t *t, p2m_query_t q)
 {
     struct page_info *page;
-    unsigned long mfn = gmfn_to_mfn(d, gfn);
+    p2m_type_t p2mt;
+    paddr_t maddr = p2m_lookup(d, pfn_to_paddr(gfn), &p2mt);
+    unsigned long mfn = maddr >> PAGE_SHIFT;
 
-    ASSERT(t == NULL);
+    if (t)
+        *t = p2mt;
 
-    if (!mfn_valid(mfn))
+    if ( p2mt == p2m_invalid || p2mt == p2m_mmio_direct )
+        return NULL;
+
+    if ( !mfn_valid(mfn) )
         return NULL;
     page = mfn_to_page(mfn);
     if ( !get_page(page, d) )
-- 
1.7.10.4
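Returning the type is what lets common code tell a foreign mapping apart from a gfn with nothing behind it, even though no page reference can be taken on a foreign page (get_page() fails because the page belongs to the other domain). A condensed sketch of the pattern patch #8 adds to XENMEM_remove_from_physmap:

    /* Sketch only: mirrors the logic added to do_memory_op() in patch #8. */
    static int remove_gfn(struct domain *d, unsigned long gfn)
    {
        p2m_type_t p2mt;
        struct page_info *page = get_page_from_gfn(d, gfn, &p2mt, P2M_ALLOC);

        if ( page )
        {
            /* We hold a reference: remove the mapping, then drop it. */
            guest_physmap_remove_page(d, gfn, page_to_mfn(page), 0);
            put_page(page);
            return 0;
        }

        /* No reference was taken: either nothing is mapped, or the page is a
         * foreign mapping and must be released through the reference taken
         * when it was mapped. */
        if ( p2m_is_foreign(p2mt) )
            return p2m_remove_foreign(d, gfn);

        return -ENOENT;
    }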
Julien Grall
2013-Dec-13 19:37 UTC
[PATCH v4 08/11] xen/arm: Handle remove foreign mapping
Introduce p2m_remove_foreign to remove foreign mapping. This function will will release the reference taken when the mapping is added in the p2m. As the function is not yet implemented on x86, add stubs. Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com> [common/memory.c] Signed-off-by: Julien Grall <julien.grall@linaro.org> Cc: Jan Beulich <jbeulich@suse.com> Cc: Keir Fraser <keir@xen.org> --- Changes in v4: - Split patch #6 from dom0 pvh series v6.2 to retrieve only common code. - Rework commit title - Rename xen_rem_foreign_from_p2m in p2m_remove_foreign - Get the mfn from the pte. We are not sure that maddr given in parameters is valid Changes in v3: - Move put_page in create_p2m_entries - Move xenmem_rem_foreign_from_p2m in arch/arm/p2m.c Changes in v2: - Introduce the patch --- xen/arch/arm/p2m.c | 25 +++++++++++++++++++++++-- xen/common/memory.c | 12 +++++++++++- xen/include/asm-arm/p2m.h | 2 ++ xen/include/asm-x86/p2m.h | 6 ++++++ 4 files changed, 42 insertions(+), 3 deletions(-) diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c index 39d8a03..cfdc19c 100644 --- a/xen/arch/arm/p2m.c +++ b/xen/arch/arm/p2m.c @@ -317,10 +317,21 @@ static int create_p2m_entries(struct domain *d, break; case REMOVE: { - lpae_t pte; + lpae_t pte = third[third_table_offset(addr)]; + unsigned long mfn; + + maddr = (pte.bits & PADDR_MASK & PAGE_MASK); + mfn = paddr_to_pfn(maddr); + + /* TODO: Handle other p2m type */ + if ( pte.p2m.valid && p2m_is_foreign(pte.p2m.type) ) + { + ASSERT(mfn_valid(mfn)); + put_page(mfn_to_page(mfn)); + } + memset(&pte, 0x00, sizeof(pte)); write_pte(&third[third_table_offset(addr)], pte); - maddr += PAGE_SIZE; } break; } @@ -380,6 +391,16 @@ void guest_physmap_remove_page(struct domain *d, pfn_to_paddr(mfn), MATTR_MEM, p2m_invalid); } +int p2m_remove_foreign(struct domain *d, unsigned long gpfn) +{ + unsigned long mfn = gmfn_to_mfn(d, gpfn); + + ASSERT(mfn_valid(mfn)); + guest_physmap_remove_page(d, gpfn, mfn, 0); + + return 0; +} + int p2m_alloc_table(struct domain *d) { struct p2m_domain *p2m = &d->arch.p2m; diff --git a/xen/common/memory.c b/xen/common/memory.c index eb7b72b..61791a4 100644 --- a/xen/common/memory.c +++ b/xen/common/memory.c @@ -678,6 +678,7 @@ long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg) struct xen_remove_from_physmap xrfp; struct page_info *page; struct domain *d; + p2m_type_t p2mt; if ( copy_from_guest(&xrfp, arg, 1) ) return -EFAULT; @@ -693,12 +694,21 @@ long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg) return rc; } - page = get_page_from_gfn(d, xrfp.gpfn, NULL, P2M_ALLOC); + /* + * If autotranslate guest, (eg pvh), the gfn could be mapped to a mfn + * from foreign domain by the user space tool during domain creation. + * We need to check for that, free it up from the p2m, and release + * refcnt on it. In such a case, page would be NULL and the following + * call would not have refcnt''d the page. 
+ */ + page = get_page_from_gfn(d, xrfp.gpfn, &p2mt, P2M_ALLOC); if ( page ) { guest_physmap_remove_page(d, xrfp.gpfn, page_to_mfn(page), 0); put_page(page); } + else if ( p2m_is_foreign(p2mt) ) + rc = p2m_remove_foreign(d, xrfp.gpfn); else rc = -ENOENT; diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h index 0eb07a8..844ef9d 100644 --- a/xen/include/asm-arm/p2m.h +++ b/xen/include/asm-arm/p2m.h @@ -144,6 +144,8 @@ static inline int get_page_and_type(struct page_info *page, return rc; } +int p2m_remove_foreign(struct domain *d, unsigned long gpfn); + #endif /* _XEN_P2M_H */ /* diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h index d5d6391..9288043 100644 --- a/xen/include/asm-x86/p2m.h +++ b/xen/include/asm-x86/p2m.h @@ -515,6 +515,12 @@ int clear_mmio_p2m_entry(struct domain *d, unsigned long gfn); /* Set foreign mfn in the current guest''s p2m table. */ int set_foreign_p2m_entry(struct domain *domp, unsigned long gfn, mfn_t mfn); +/* Remove foreign mapping from the guest''s p2m table. */ +static inline int p2m_remove_foreign(struct domain *d, unsigned long gpfn) +{ + return -ENOSYS; +} + /* * Populate-on-demand */ -- 1.7.10.4
Julien Grall
2013-Dec-13 19:37 UTC
[PATCH v4 09/11] xen/arm: Add relinquish_p2m_mapping to remove reference on every mapped page
This function will be called when the domain relinquishes its memory. It removes refcount on every mapped page to a valid MFN. Currently, Xen doesn''t take reference on every new mapping but only for foreign mapping. Restrict the function only on foreign mapping. Signed-off-by: Julien Grall <julien.grall@linaro.org> --- Changes in v4: - Use LPAE_ENTRIES instead of hardcoded value Changes in v3: - Rework title - Reuse create_p2m_entries to remove reference - Don''t forget to set relmem! - Fix compilation (missing include) Changes in v2: - Introduce the patch --- xen/arch/arm/domain.c | 8 ++++++++ xen/arch/arm/p2m.c | 44 +++++++++++++++++++++++++++++++++++++++++- xen/include/asm-arm/domain.h | 1 + xen/include/asm-arm/p2m.h | 15 ++++++++++++++ 4 files changed, 67 insertions(+), 1 deletion(-) diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c index 1590708..4099e88 100644 --- a/xen/arch/arm/domain.c +++ b/xen/arch/arm/domain.c @@ -717,6 +717,14 @@ int domain_relinquish_resources(struct domain *d) if ( ret ) return ret; + d->arch.relmem = RELMEM_mapping; + /* Fallthrough */ + + case RELMEM_mapping: + ret = relinquish_p2m_mapping(d); + if ( ret ) + return ret; + d->arch.relmem = RELMEM_done; /* Fallthrough */ diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c index cfdc19c..cf2e178 100644 --- a/xen/arch/arm/p2m.c +++ b/xen/arch/arm/p2m.c @@ -6,6 +6,8 @@ #include <xen/bitops.h> #include <asm/flushtlb.h> #include <asm/gic.h> +#include <asm/event.h> +#include <asm/hardirq.h> /* First level P2M is 2 consecutive pages */ #define P2M_FIRST_ORDER 1 @@ -213,7 +215,8 @@ static int p2m_create_table(struct domain *d, enum p2m_operation { INSERT, ALLOCATE, - REMOVE + REMOVE, + RELINQUISH, }; static int create_p2m_entries(struct domain *d, @@ -231,6 +234,7 @@ static int create_p2m_entries(struct domain *d, unsigned long cur_first_page = ~0, cur_first_offset = ~0, cur_second_offset = ~0; + unsigned long count = 0; spin_lock(&p2m->lock); @@ -315,6 +319,7 @@ static int create_p2m_entries(struct domain *d, maddr += PAGE_SIZE; } break; + case RELINQUISH: case REMOVE: { lpae_t pte = third[third_table_offset(addr)]; @@ -338,6 +343,29 @@ static int create_p2m_entries(struct domain *d, if ( flush ) flush_tlb_all_local(); + + + count++; + + if ( op == RELINQUISH && count == LPAE_ENTRIES && + hypercall_preempt_check() ) + { + p2m->next_gfn_to_relinquish = maddr >> PAGE_SHIFT; + rc = -EAGAIN; + goto out; + } + } + + /* When the function will remove mapping, p2m type should always + * be p2m_invalid. 
*/ + if ( (t == p2m_ram_rw) || (t == p2m_ram_ro) || (t == p2m_map_foreign)) + { + unsigned long sgfn = paddr_to_pfn(start_gpaddr); + unsigned long egfn = paddr_to_pfn(end_gpaddr); + + p2m->max_mapped_gfn = MAX(p2m->max_mapped_gfn, egfn); + /* Use next_gfn_to_relinquish to store the lowest gfn mapped */ + p2m->next_gfn_to_relinquish = MIN(p2m->next_gfn_to_relinquish, sgfn); } rc = 0; @@ -533,12 +561,26 @@ int p2m_init(struct domain *d) p2m->first_level = NULL; + p2m->max_mapped_gfn = 0; + p2m->next_gfn_to_relinquish = ULONG_MAX; + err: spin_unlock(&p2m->lock); return rc; } +int relinquish_p2m_mapping(struct domain *d) +{ + struct p2m_domain *p2m = &d->arch.p2m; + + return create_p2m_entries(d, RELINQUISH, + pfn_to_paddr(p2m->next_gfn_to_relinquish), + pfn_to_paddr(p2m->max_mapped_gfn), + pfn_to_paddr(INVALID_MFN), + MATTR_MEM, p2m_invalid); +} + unsigned long gmfn_to_mfn(struct domain *d, unsigned long gpfn) { paddr_t p = p2m_lookup(d, pfn_to_paddr(gpfn), NULL); diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h index 53c9895..28d39a0 100644 --- a/xen/include/asm-arm/domain.h +++ b/xen/include/asm-arm/domain.h @@ -112,6 +112,7 @@ struct arch_domain RELMEM_not_started, RELMEM_xen, RELMEM_page, + RELMEM_mapping, RELMEM_done, } relmem; diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h index 844ef9d..dc9d058 100644 --- a/xen/include/asm-arm/p2m.h +++ b/xen/include/asm-arm/p2m.h @@ -18,6 +18,15 @@ struct p2m_domain { /* Current VMID in use */ uint8_t vmid; + + /* Highest guest frame that''s ever been mapped in the p2m + * Take only into account ram and foreign mapping + */ + unsigned long max_mapped_gfn; + + /* When releasing mapped gfn''s in a preemptible manner, recall where + * to resume the search */ + unsigned long next_gfn_to_relinquish; }; /* List of possible type for each page in the p2m entry. @@ -48,6 +57,12 @@ int p2m_init(struct domain *d); /* Return all the p2m resources to Xen. */ void p2m_teardown(struct domain *d); +/* Remove mapping refcount on each mapping page in the p2m + * + * TODO: For the moment only foreign mapping is handled + */ +int relinquish_p2m_mapping(struct domain *d); + /* Allocate a new p2m table for a domain. * * Returns 0 for success or -errno. -- 1.7.10.4
Julien Grall
2013-Dec-13 19:37 UTC
[PATCH v4 10/11] xen/arm: Set foreign page type to p2m_map_foreign
Xen needs to know that the current page belongs to another domain. Also take a reference to this page. The current process to add a foreign page is: 1) get the page from the foreign p2m 2) take a reference on the page with the foreign domain in parameters 3) add the page to the current domain p2m If the foreign domain drops the page: - before 2), get_page will return NULL because the page doesn''t belong anymore to the domain - after 2), the current domain already have a reference. Write will occur to an old page which is not yet released. It can corrupt the foreign domain. Signed-off-by: Julien Grall <julien.grall@linaro.org> --- Changes in v4: - Typo s/release/released/ - Improve commit message Changes in v3: - Typoes - Check if the foreign domain is different from the current domain Changes in v2: - Even if gcc is buggy (see http://gcc.gnu.org/bugzilla/show_bug.cgi?id=18501) define p2m type per mapspace to let the compiler warns about unitialized values. --- xen/arch/arm/mm.c | 27 +++++++++++++++++++-------- 1 file changed, 19 insertions(+), 8 deletions(-) diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c index b1222be..0434296 100644 --- a/xen/arch/arm/mm.c +++ b/xen/arch/arm/mm.c @@ -977,6 +977,7 @@ static int xenmem_add_to_physmap_one( { unsigned long mfn = 0; int rc; + p2m_type_t t; switch ( space ) { @@ -1009,22 +1010,29 @@ static int xenmem_add_to_physmap_one( d->arch.grant_table_gpfn[idx] = gpfn; + t = p2m_ram_rw; + spin_unlock(&d->grant_table->lock); break; case XENMAPSPACE_shared_info: - if ( idx == 0 ) - mfn = virt_to_mfn(d->shared_info); - else + if ( idx != 0 ) return -EINVAL; + + mfn = virt_to_mfn(d->shared_info); + t = p2m_ram_rw; + break; case XENMAPSPACE_gmfn_foreign: { - paddr_t maddr; struct domain *od; + struct page_info *page; od = rcu_lock_domain_by_any_id(foreign_domid); if ( od == NULL ) return -ESRCH; + if ( od == d ) + return -EINVAL; + rc = xsm_map_gmfn_foreign(XSM_TARGET, d, od); if ( rc ) { @@ -1032,15 +1040,18 @@ static int xenmem_add_to_physmap_one( return rc; } - maddr = p2m_lookup(od, pfn_to_paddr(idx), NULL); - if ( maddr == INVALID_PADDR ) + /* Take reference to the foreign domain page. + * Reference will be released in XENMEM_remove_from_physmap */ + page = get_page_from_gfn(od, idx, NULL, P2M_ALLOC); + if ( !page ) { dump_p2m_lookup(od, pfn_to_paddr(idx)); rcu_unlock_domain(od); return -EINVAL; } - mfn = maddr >> PAGE_SHIFT; + mfn = page_to_mfn(page); + t = p2m_map_foreign; rcu_unlock_domain(od); break; @@ -1051,7 +1062,7 @@ static int xenmem_add_to_physmap_one( } /* Map at new location. */ - rc = guest_physmap_add_page(d, gpfn, mfn, 0); + rc = guest_physmap_add_entry(d, gpfn, mfn, 0, t); return rc; } -- 1.7.10.4
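Condensed into code, the ordering constraint is that the reference must be taken from the foreign domain's p2m before the page is inserted into the local p2m; otherwise the window described above opens up. A minimal sketch of the XENMAPSPACE_gmfn_foreign path added to xenmem_add_to_physmap_one() (error unwinding omitted):

    /* 1) and 2): look the page up in the foreign p2m and take a reference.
     * This fails if the foreign domain has already dropped the page. */
    page = get_page_from_gfn(od, idx, NULL, P2M_ALLOC);
    if ( !page )
        return -EINVAL;

    /* 3): insert it into the local p2m, typed as foreign so the reference
     * is released when the mapping is removed. */
    rc = guest_physmap_add_entry(d, gpfn, page_to_mfn(page), 0, p2m_map_foreign);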
Julien Grall
2013-Dec-13 19:37 UTC
[PATCH v4 11/11] xen/arm: grant-table: Support read-only mapping
Signed-off-by: Julien Grall <julien.grall@linaro.org>
Acked-by: Ian Campbell <ian.campbell@citrix.com>

---
Changes in v2:
    - Use p2m grant type to map grant-table mapping
---
 xen/arch/arm/mm.c |   12 +++++-------
 1 file changed, 5 insertions(+), 7 deletions(-)

diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 0434296..f8bda79 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -1296,19 +1296,17 @@ int create_grant_host_mapping(unsigned long addr, unsigned long frame,
                              unsigned int flags, unsigned int cache_flags)
 {
     int rc;
+    p2m_type_t t = p2m_grant_map_rw;
 
     if ( cache_flags || (flags & ~GNTMAP_readonly) != GNTMAP_host_map )
         return GNTST_general_error;
 
-    /* XXX: read only mappings */
     if ( flags & GNTMAP_readonly )
-    {
-        gdprintk(XENLOG_WARNING, "read only mappings not implemented yet\n");
-        return GNTST_general_error;
-    }
+        t = p2m_grant_map_ro;
+
+    rc = guest_physmap_add_entry(current->domain, addr >> PAGE_SHIFT,
+                                 frame, 0, t);
 
-    rc = guest_physmap_add_page(current->domain,
-                                addr >> PAGE_SHIFT, frame, 0);
     if ( rc )
         return GNTST_general_error;
     else
-- 
1.7.10.4
Ian Campbell
2013-Dec-13 23:19 UTC
Re: [PATCH v4 01/11] pvh dom0: Introduce p2m_map_foreign
On Fri, 2013-12-13 at 19:37 +0000, Julien Grall wrote:
> From: Mukesh Rathor <mukesh.rathor@oracle.com>
>
> In this patch, a new type p2m_map_foreign is introduced for pages
> that toolstack on PVH dom0 maps from foreign domains that its creating
> or supporting during it's run time.
>
> Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
> Cc: Jan Beulich <jbeulich@suse.com>
> Cc: Keir Fraser <keir@xen.org>
> Cc: George Dunlap <george.dunlap@eu.citrix.com>
>
> ---
> Changes in v4:
>     - Use the patch #4 from dom0 pvh v6 series. We need it for ARM

Is this really the right patch? I'd have thought it would need to
include some generic/!x86 bits but:

> ---
>  xen/arch/x86/mm/p2m-ept.c |    1 +
>  xen/arch/x86/mm/p2m-pt.c  |    1 +
>  xen/arch/x86/mm/p2m.c     |   28 ++++++++++++++++++++--------
>  xen/include/asm-x86/p2m.h |    4 ++++
>  4 files changed, 26 insertions(+), 8 deletions(-)

Don't you want patch #6 from the PVH series instead?

Ian.
Julien Grall
2013-Dec-14 00:00 UTC
Re: [PATCH v4 01/11] pvh dom0: Introduce p2m_map_foreign
On 12/13/2013 11:19 PM, Ian Campbell wrote:
> On Fri, 2013-12-13 at 19:37 +0000, Julien Grall wrote:
>> From: Mukesh Rathor <mukesh.rathor@oracle.com>
>>
>> In this patch, a new type p2m_map_foreign is introduced for pages
>> that toolstack on PVH dom0 maps from foreign domains that its creating
>> or supporting during it's run time.
>>
>> Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
>> Cc: Jan Beulich <jbeulich@suse.com>
>> Cc: Keir Fraser <keir@xen.org>
>> Cc: George Dunlap <george.dunlap@eu.citrix.com>
>>
>> ---
>> Changes in v4:
>>     - Use the patch #4 from dom0 pvh v6 series. We need it for ARM
>
> Is this really the right patch? I'd have thought it would need to
> include some generic/!x86 bits but:

This patch introduces the p2m_map_foreign type. I think it's safe to push
it without the other patches. Moreover, it will avoid a dummy #define of
p2m_is_foreign in my patch #8.

>
>> ---
>>  xen/arch/x86/mm/p2m-ept.c |    1 +
>>  xen/arch/x86/mm/p2m-pt.c  |    1 +
>>  xen/arch/x86/mm/p2m.c     |   28 ++++++++++++++++++++--------
>>  xen/include/asm-x86/p2m.h |    4 ++++
>>  4 files changed, 26 insertions(+), 8 deletions(-)
>
> Don't you want patch #6 from the PVH series instead?

I made a mistake in my cover letter: I was talking about patch #6, not #7.
I have taken the common code and merged it into my patch #8.

-- 
Julien Grall
Julien Grall
2013-Dec-14 00:01 UTC
Re: [PATCH v4 00/11] xen/arm: Handle correctly foreign mapping
On 12/13/2013 07:37 PM, Julien Grall wrote:> Hello, > > This patch series aims to fix "Failed to unmap" message in dom0 when a guest is > creating. Without this patch series, dom0 will leak memory each time a domain > is created. It should be considered as a blocker for Xen 4.4 release. > > Compare to the previous version, I have removed the dependency on pvh dom0 > patch series (http://lists.xenproject.org/archives/html/xen-devel/2013-12/msg01026.html). > This will allow to push this serie as soon as possible without waiting the support > for dom0 pvh (which is only x86 specific). > > - Patch #1: patch #4 of the dom0 pvh series > - Patch #2-3: prepare work for the others patches > - Patch #4-7: add support for p2m type > - Patch #8-10: handle correctly foreign mapping. Patch #9, contains > common remove code taken from patch #7 of dom0 pvhI made a mistake here. The right sentence is: Patch #8-10: handle correctly foreign mapping. Patch #8, contains common remove code taken from patch #6 of dom0 pvh series. -- Julien Grall