Julien Grall
2013-Dec-10 14:18 UTC
[PATCH v3 00/10] xen/arm: Handle correctly foreign mapping
Hello,

This patch series aims to fix the "Failed to unmap" message in dom0 when a
guest is created. Without this series, dom0 leaks memory each time a domain
is created. It should be considered a blocker for the Xen 4.4 release.

The series is based on the PVH dom0 v6 patch series from Mukesh, with patch #7
(http://lists.xenproject.org/archives/html/xen-devel/2013-12/msg01026.html)
replaced by version v6.1
(http://lists.xenproject.org/archives/html/xen-devel/2013-12/msg01168.html).

- Patch #1-2: preparatory work for the other patches
- Patch #3-6: add support for p2m types
- Patch #7-9: handle foreign mappings correctly
- Patch #10: not really part of this series; it adds support for read-only
  grant mappings

All the patches can be found in this git branch:

git clone -b map-foreign-v3 git://xenbits.xen.org/people/julieng/xen-unstable.git

Major changes since v2:
- Rework relinquish p2m mapping to reuse existing code
- Fix typos
- Reorder code
- Fix separate compilation to allow bisection

For all the changes, see each individual patch.
Sincerely yours,

Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>

Julien Grall (10):
  xen/arm: Introduce steps in domain_relinquish_resource
  xen/arm: move mfn_to_p2m_entry in arch/arm/p2m.c
  xen/arm: Implement p2m_type_t as an enum
  xen/arm: Store p2m type in each page of the guest
  xen/arm: p2m: Extend p2m_lookup parameters to retrieve the p2m type
  xen/arm: Retrieve p2m type in get_page_from_gfn
  xen/arm: Implement xen_rem_foreign_from_p2m
  xen/arm: Add relinquish_p2m_mapping to remove reference on every mapped page
  xen/arm: Set foreign page type to p2m_map_foreign
  xen/arm: grant-table: Support read-only mapping

 xen/arch/arm/domain.c        |  45 +++++++++++--
 xen/arch/arm/mm.c            |  39 ++++++-----
 xen/arch/arm/p2m.c           | 151 +++++++++++++++++++++++++++++++++++++-----
 xen/arch/arm/traps.c         |   6 +-
 xen/include/asm-arm/domain.h |   9 +++
 xen/include/asm-arm/p2m.h    |  74 +++++++++++++++++----
 xen/include/asm-arm/page.h   |  24 +------
 7 files changed, 269 insertions(+), 79 deletions(-)

-- 
1.7.10.4
Julien Grall
2013-Dec-10 14:18 UTC
[PATCH v3 01/10] xen/arm: Introduce steps in domain_relinquish_resource
In a later patch, a new step will be added. Tracking the current step avoids
re-checking every earlier step when the function is preempted.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
---
Changes in v2:
- Introduce the patch
---
 xen/arch/arm/domain.c        | 37 ++++++++++++++++++++++++++++++-------
 xen/include/asm-arm/domain.h |  8 ++++++++
 2 files changed, 38 insertions(+), 7 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 67c65c3..1590708 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -497,6 +497,8 @@ int arch_domain_create(struct domain *d, unsigned int domcr_flags)
 {
     int rc;
 
+    d->arch.relmem = RELMEM_not_started;
+
     /* Idle domains do not need this setup */
     if ( is_idle_domain(d) )
         return 0;
@@ -696,15 +698,36 @@ int domain_relinquish_resources(struct domain *d)
 {
     int ret = 0;
 
-    ret = relinquish_memory(d, &d->xenpage_list);
-    if ( ret )
-        return ret;
+    switch ( d->arch.relmem )
+    {
+    case RELMEM_not_started:
+        d->arch.relmem = RELMEM_xen;
+        /* Fallthrough */
 
-    ret = relinquish_memory(d, &d->page_list);
-    if ( ret )
-        return ret;
+    case RELMEM_xen:
+        ret = relinquish_memory(d, &d->xenpage_list);
+        if ( ret )
+            return ret;
 
-    return ret;
+        d->arch.relmem = RELMEM_page;
+        /* Fallthrough */
+
+    case RELMEM_page:
+        ret = relinquish_memory(d, &d->page_list);
+        if ( ret )
+            return ret;
+
+        d->arch.relmem = RELMEM_done;
+        /* Fallthrough */
+
+    case RELMEM_done:
+        break;
+
+    default:
+        BUG();
+    }
+
+    return 0;
 }
 
 void arch_dump_domain_info(struct domain *d)
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index 8ebee3e..922eda3 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -70,6 +70,14 @@ struct arch_domain
     struct hvm_domain hvm_domain;
     xen_pfn_t *grant_table_gpfn;
 
+    /* Continuable domain_relinquish_resources(). */
+    enum {
+        RELMEM_not_started,
+        RELMEM_xen,
+        RELMEM_page,
+        RELMEM_done,
+    } relmem;
+
     /* Virtual CPUID */
     uint32_t vpidr;
-- 
1.7.10.4
Julien Grall
2013-Dec-10 14:18 UTC
[PATCH v3 02/10] xen/arm: move mfn_to_p2m_entry in arch/arm/p2m.c
The function mfn_to_p2m_entry will be extended in a following patch to handle
p2m_type_t. That would break compilation because p2m_type_t is not defined
there (interdependent includes). It's easier to move the function to
arch/arm/p2m.c, and it's harmless, as the function is only used in that file.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
---
 xen/arch/arm/p2m.c         | 22 ++++++++++++++++++++++
 xen/include/asm-arm/page.h | 22 ----------------------
 2 files changed, 22 insertions(+), 22 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 083f8bf..74636df 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -128,6 +128,28 @@ int p2m_pod_decrease_reservation(struct domain *d,
     return -ENOSYS;
 }
 
+static lpae_t mfn_to_p2m_entry(unsigned long mfn, unsigned int mattr)
+{
+    paddr_t pa = ((paddr_t) mfn) << PAGE_SHIFT;
+    lpae_t e = (lpae_t) {
+        .p2m.xn = 0,
+        .p2m.af = 1,
+        .p2m.sh = LPAE_SH_OUTER,
+        .p2m.read = 1,
+        .p2m.write = 1,
+        .p2m.mattr = mattr,
+        .p2m.table = 1,
+        .p2m.valid = 1,
+    };
+
+    ASSERT(!(pa & ~PAGE_MASK));
+    ASSERT(!(pa & ~PADDR_MASK));
+
+    e.bits |= pa;
+
+    return e;
+}
+
 /* Allocate a new page table page and hook it in via the given entry */
 static int p2m_create_table(struct domain *d, lpae_t *entry)
diff --git a/xen/include/asm-arm/page.h b/xen/include/asm-arm/page.h
index d468418..0625464 100644
--- a/xen/include/asm-arm/page.h
+++ b/xen/include/asm-arm/page.h
@@ -213,28 +213,6 @@ static inline lpae_t mfn_to_xen_entry(unsigned long mfn)
     return e;
 }
 
-static inline lpae_t mfn_to_p2m_entry(unsigned long mfn, unsigned int mattr)
-{
-    paddr_t pa = ((paddr_t) mfn) << PAGE_SHIFT;
-    lpae_t e = (lpae_t) {
-        .p2m.xn = 0,
-        .p2m.af = 1,
-        .p2m.sh = LPAE_SH_OUTER,
-        .p2m.write = 1,
-        .p2m.read = 1,
-        .p2m.mattr = mattr,
-        .p2m.table = 1,
-        .p2m.valid = 1,
-    };
-
-    ASSERT(!(pa & ~PAGE_MASK));
-    ASSERT(!(pa & ~PADDR_MASK));
-
-    e.bits |= pa;
-
-    return e;
-}
-
 #if defined(CONFIG_ARM_32)
 # include <asm/arm32/page.h>
 #elif defined(CONFIG_ARM_64)
-- 
1.7.10.4
Julien Grall
2013-Dec-10 14:18 UTC
[PATCH v3 03/10] xen/arm: Implement p2m_type_t as an enum
Until now, Xen didn't know the type of a page (RAM, foreign page, MMIO, ...).
Introduce p2m_type_t with basic types:
    - p2m_invalid: Nothing is mapped here
    - p2m_ram_rw: Normal read/write guest RAM
    - p2m_ram_ro: Read-only guest RAM
    - p2m_mmio_direct: Read/write mapping of device memory
    - p2m_map_foreign: RAM page from a foreign guest
    - p2m_grant_map_rw: Read/write grant mapping
    - p2m_grant_map_ro: Read-only grant mapping

Signed-off-by: Julien Grall <julien.grall@linaro.org>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
---
Changes in v3:
- s/ / /
- Replace p2m by either pte or p2m entry
- Fix compilation (uninitialized value)
- Add BUILD_BUG_ON (from patch #4) and fix it
Changes in v2:
- Add comment for future improvement
- Add p2m_max_real_type. Will be used later to check the size of the enum
- Let the compiler choose the value for each name of the enum
- Add grant mapping types
---
 xen/arch/arm/p2m.c        |  2 ++
 xen/include/asm-arm/p2m.h | 24 ++++++++++++++++++++++--
 2 files changed, 24 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 74636df..691cdfa 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -142,6 +142,8 @@ static lpae_t mfn_to_p2m_entry(unsigned long mfn, unsigned int mattr)
         .p2m.valid = 1,
     };
 
+    BUILD_BUG_ON(p2m_max_real_type > (1 << 4));
+
     ASSERT(!(pa & ~PAGE_MASK));
     ASSERT(!(pa & ~PADDR_MASK));
 
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 0c554a5..c833c39 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -20,6 +20,25 @@ struct p2m_domain {
     uint8_t vmid;
 };
 
+/* List of possible types for each page in the p2m entry.
+ * The number of available bits per page in the pte for this purpose is 4.
+ * So it's only possible to have 16 values. If we run out of values in the
+ * future, it's possible to use higher values for pseudo-types and not store
+ * them in the p2m entry.
+ */
+typedef enum {
+    p2m_invalid = 0,    /* Nothing mapped here */
+    p2m_ram_rw,         /* Normal read/write guest RAM */
+    p2m_ram_ro,         /* Read-only; writes are silently dropped */
+    p2m_mmio_direct,    /* Read/write mapping of genuine MMIO area */
+    p2m_map_foreign,    /* Ram pages from foreign domain */
+    p2m_grant_map_rw,   /* Read/write grant mapping */
+    p2m_grant_map_ro,   /* Read-only grant mapping */
+    p2m_max_real_type,  /* Types after this won't be stored in the p2m */
+} p2m_type_t;
+
+#define p2m_is_foreign(_t) ((_t) == p2m_map_foreign)
+
 /* Initialise vmid allocator */
 void p2m_vmid_allocator_init(void);
 
@@ -72,7 +91,6 @@ p2m_pod_decrease_reservation(struct domain *d,
                              unsigned int order);
 
 /* Look up a GFN and take a reference count on the backing page. */
-typedef int p2m_type_t;
 typedef unsigned int p2m_query_t;
 #define P2M_ALLOC  (1u<<0)  /* Populate PoD and paged-out entries */
 #define P2M_UNSHARE (1u<<1) /* Break CoW sharing */
@@ -83,6 +101,9 @@ static inline struct page_info *get_page_from_gfn(
     struct page_info *page;
     unsigned long mfn = gmfn_to_mfn(d, gfn);
 
+    if ( t )
+        *t = p2m_invalid;
+
     if (!mfn_valid(mfn))
         return NULL;
     page = mfn_to_page(mfn);
@@ -108,7 +129,6 @@ static inline int get_page_and_type(struct page_info *page,
     return rc;
 }
 
-#define p2m_is_foreign(_t) (0 && (_t))
 static inline int xenmem_rem_foreign_from_p2m(struct domain *d,
                                               unsigned long gpfn)
 {
-- 
1.7.10.4
Julien Grall
2013-Dec-10 14:18 UTC
[PATCH v3 04/10] xen/arm: Store p2m type in each page of the guest
Use the field 'avail' to store the type of the page. Rename it to 'type' for
convenience. The information stored in this field will be retrieved in a
future patch to change the behaviour when a page is removed.

Also introduce guest_physmap_add_entry to map a page and set a specific p2m
type for it.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
---
Changes in v3:
- Typo in the commit message
- Rename 'p2mt' field to 'type'
- Remove default in switch to let the compiler warn
- Move BUILD_BUG_ON to patch #3
Changes in v2:
- Rename 'avail' field to 'p2mt' in the p2m structure
- Add BUILD_BUG_ON to check if the enum value will fit in the field
- Implement grant mapping types
---
 xen/arch/arm/p2m.c         | 56 ++++++++++++++++++++++++++++++------------
 xen/include/asm-arm/p2m.h  | 18 ++++++++++----
 xen/include/asm-arm/page.h |  2 +-
 3 files changed, 56 insertions(+), 20 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 691cdfa..3a79927 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -128,7 +128,8 @@ int p2m_pod_decrease_reservation(struct domain *d,
     return -ENOSYS;
 }
 
-static lpae_t mfn_to_p2m_entry(unsigned long mfn, unsigned int mattr)
+static lpae_t mfn_to_p2m_entry(unsigned long mfn, unsigned int mattr,
+                               p2m_type_t t)
 {
     paddr_t pa = ((paddr_t) mfn) << PAGE_SHIFT;
     lpae_t e = (lpae_t) {
@@ -136,14 +137,34 @@ static lpae_t mfn_to_p2m_entry(unsigned long mfn, unsigned int mattr)
         .p2m.af = 1,
         .p2m.sh = LPAE_SH_OUTER,
         .p2m.read = 1,
-        .p2m.write = 1,
         .p2m.mattr = mattr,
         .p2m.table = 1,
         .p2m.valid = 1,
+        .p2m.type = t,
     };
 
     BUILD_BUG_ON(p2m_max_real_type > (1 << 4));
 
+    switch (t)
+    {
+    case p2m_grant_map_rw:
+        e.p2m.xn = 1;
+        /* Fallthrough */
+    case p2m_ram_rw:
+    case p2m_mmio_direct:
+    case p2m_map_foreign:
+        e.p2m.write = 1;
+        break;
+
+    case p2m_grant_map_ro:
+        e.p2m.xn = 1;
+        /* Fallthrough */
+    case p2m_invalid:
+    case p2m_ram_ro:
+    default:
+        e.p2m.write = 0;
+    }
+
     ASSERT(!(pa & ~PAGE_MASK));
     ASSERT(!(pa & ~PADDR_MASK));
 
@@ -173,7 +194,7 @@ static int p2m_create_table(struct domain *d,
     clear_page(p);
     unmap_domain_page(p);
 
-    pte = mfn_to_p2m_entry(page_to_mfn(page), MATTR_MEM);
+    pte = mfn_to_p2m_entry(page_to_mfn(page), MATTR_MEM, p2m_invalid);
 
     write_pte(entry, pte);
 
@@ -191,7 +212,8 @@ static int create_p2m_entries(struct domain *d,
                               paddr_t start_gpaddr,
                               paddr_t end_gpaddr,
                               paddr_t maddr,
-                              int mattr)
+                              int mattr,
+                              p2m_type_t t)
 {
     int rc, flush;
     struct p2m_domain *p2m = &d->arch.p2m;
@@ -271,14 +293,15 @@ static int create_p2m_entries(struct domain *d,
                     goto out;
                 }
 
-                pte = mfn_to_p2m_entry(page_to_mfn(page), mattr);
+                pte = mfn_to_p2m_entry(page_to_mfn(page), mattr, t);
 
                 write_pte(&third[third_table_offset(addr)], pte);
             }
             break;
         case INSERT:
             {
-                lpae_t pte = mfn_to_p2m_entry(maddr >> PAGE_SHIFT, mattr);
+                lpae_t pte = mfn_to_p2m_entry(maddr >> PAGE_SHIFT,
+                                              mattr, t);
                 write_pte(&third[third_table_offset(addr)], pte);
                 maddr += PAGE_SIZE;
             }
@@ -313,7 +336,8 @@ int p2m_populate_ram(struct domain *d,
                      paddr_t start,
                      paddr_t end)
 {
-    return create_p2m_entries(d, ALLOCATE, start, end, 0, MATTR_MEM);
+    return create_p2m_entries(d, ALLOCATE, start, end,
+                              0, MATTR_MEM, p2m_ram_rw);
 }
 
 int map_mmio_regions(struct domain *d,
@@ -321,18 +345,20 @@ int map_mmio_regions(struct domain *d,
                      paddr_t end_gaddr,
                      paddr_t maddr)
 {
-    return create_p2m_entries(d, INSERT, start_gaddr, end_gaddr, maddr, MATTR_DEV);
+    return create_p2m_entries(d, INSERT, start_gaddr, end_gaddr,
+                              maddr, MATTR_DEV, p2m_mmio_direct);
 }
 
-int guest_physmap_add_page(struct domain *d,
-                           unsigned long gpfn,
-                           unsigned long mfn,
-                           unsigned int page_order)
+int guest_physmap_add_entry(struct domain *d,
+                            unsigned long gpfn,
+                            unsigned long mfn,
+                            unsigned long page_order,
+                            p2m_type_t t)
 {
     return create_p2m_entries(d, INSERT,
                               pfn_to_paddr(gpfn),
-                              pfn_to_paddr(gpfn + (1<<page_order)),
-                              pfn_to_paddr(mfn), MATTR_MEM);
+                              pfn_to_paddr(gpfn + (1 << page_order)),
+                              pfn_to_paddr(mfn), MATTR_MEM, t);
 }
 
 void guest_physmap_remove_page(struct domain *d,
@@ -342,7 +368,7 @@ void guest_physmap_remove_page(struct domain *d,
     create_p2m_entries(d, REMOVE,
                        pfn_to_paddr(gpfn),
                        pfn_to_paddr(gpfn + (1<<page_order)),
-                       pfn_to_paddr(mfn), MATTR_MEM);
+                       pfn_to_paddr(mfn), MATTR_MEM, p2m_invalid);
 }
 
 int p2m_alloc_table(struct domain *d)
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index c833c39..dba44fd 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -68,11 +68,21 @@ int p2m_populate_ram(struct domain *d, paddr_t start, paddr_t end);
 int map_mmio_regions(struct domain *d, paddr_t start_gaddr,
                      paddr_t end_gaddr, paddr_t maddr);
 
+int guest_physmap_add_entry(struct domain *d,
+                            unsigned long gfn,
+                            unsigned long mfn,
+                            unsigned long page_order,
+                            p2m_type_t t);
+
 /* Untyped version for RAM only, for compatibility */
-int guest_physmap_add_page(struct domain *d,
-                           unsigned long gfn,
-                           unsigned long mfn,
-                           unsigned int page_order);
+static inline int guest_physmap_add_page(struct domain *d,
+                                         unsigned long gfn,
+                                         unsigned long mfn,
+                                         unsigned int page_order)
+{
+    return guest_physmap_add_entry(d, gfn, mfn, page_order, p2m_ram_rw);
+}
+
 void guest_physmap_remove_page(struct domain *d,
                                unsigned long gpfn,
                                unsigned long mfn, unsigned int page_order);
diff --git a/xen/include/asm-arm/page.h b/xen/include/asm-arm/page.h
index 0625464..670d4e7 100644
--- a/xen/include/asm-arm/page.h
+++ b/xen/include/asm-arm/page.h
@@ -153,7 +153,7 @@ typedef struct {
     unsigned long contig:1;     /* In a block of 16 contiguous entries */
     unsigned long sbz2:1;
     unsigned long xn:1;         /* eXecute-Never */
-    unsigned long avail:4;      /* Ignored by hardware */
+    unsigned long type:4;       /* Ignored by hardware. Used to store p2m types */
     unsigned long sbz1:5;
 } __attribute__((__packed__)) lpae_p2m_t;
-- 
1.7.10.4
Julien Grall
2013-Dec-10 14:18 UTC
[PATCH v3 05/10] xen/arm: p2m: Extend p2m_lookup parameters to retrieve the p2m type
Signed-off-by: Julien Grall <julien.grall@linaro.org>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
---
Changes in v3:
- The 'p2mt' field was renamed to 'type'
- Update the one missed p2m_lookup call site
Changes in v2:
- This patch was "Add p2m_get_entry" in the previous version
- Don't add a new function but only extend p2m_lookup
---
 xen/arch/arm/mm.c         |  2 +-
 xen/arch/arm/p2m.c        | 13 +++++++++++--
 xen/arch/arm/traps.c      |  6 +++---
 xen/include/asm-arm/p2m.h |  2 +-
 4 files changed, 16 insertions(+), 7 deletions(-)

diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index d187e86..619aec9 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -1032,7 +1032,7 @@ static int xenmem_add_to_physmap_one(
             return rc;
         }
 
-        maddr = p2m_lookup(od, pfn_to_paddr(idx));
+        maddr = p2m_lookup(od, pfn_to_paddr(idx), NULL);
         if ( maddr == INVALID_PADDR )
         {
             dump_p2m_lookup(od, pfn_to_paddr(idx));
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 3a79927..39d8a03 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -72,11 +72,17 @@ static lpae_t *p2m_map_first(struct p2m_domain *p2m, paddr_t addr)
  * There are no processor functions to do a stage 2 only lookup therefore we
  * do a software walk.
  */
-paddr_t p2m_lookup(struct domain *d, paddr_t paddr)
+paddr_t p2m_lookup(struct domain *d, paddr_t paddr, p2m_type_t *t)
 {
     struct p2m_domain *p2m = &d->arch.p2m;
     lpae_t pte, *first = NULL, *second = NULL, *third = NULL;
     paddr_t maddr = INVALID_PADDR;
+    p2m_type_t _t;
+
+    /* Allow t to be NULL */
+    t = t ?: &_t;
+
+    *t = p2m_invalid;
 
     spin_lock(&p2m->lock);
 
@@ -102,7 +108,10 @@ paddr_t p2m_lookup(struct domain *d, paddr_t paddr)
 
 done:
     if ( pte.p2m.valid )
+    {
         maddr = (pte.bits & PADDR_MASK & PAGE_MASK) | (paddr & ~PAGE_MASK);
+        *t = pte.p2m.type;
+    }
 
     if (third) unmap_domain_page(third);
     if (second) unmap_domain_page(second);
@@ -511,7 +520,7 @@ err:
 
 unsigned long gmfn_to_mfn(struct domain *d, unsigned long gpfn)
 {
-    paddr_t p = p2m_lookup(d, pfn_to_paddr(gpfn));
+    paddr_t p = p2m_lookup(d, pfn_to_paddr(gpfn), NULL);
     return p >> PAGE_SHIFT;
 }
 
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 458128e..0a811e7 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -1267,7 +1267,7 @@ void dump_guest_s1_walk(struct domain *d, vaddr_t addr)
     printk("dom%d VA 0x%08"PRIvaddr"\n", d->domain_id, addr);
     printk("    TTBCR: 0x%08"PRIregister"\n", ttbcr);
     printk("    TTBR0: 0x%016"PRIx64" = 0x%"PRIpaddr"\n",
-           ttbr0, p2m_lookup(d, ttbr0 & PAGE_MASK));
+           ttbr0, p2m_lookup(d, ttbr0 & PAGE_MASK, NULL));
 
     if ( ttbcr & TTBCR_EAE )
     {
@@ -1280,7 +1280,7 @@ void dump_guest_s1_walk(struct domain *d, vaddr_t addr)
         return;
     }
 
-    paddr = p2m_lookup(d, ttbr0 & PAGE_MASK);
+    paddr = p2m_lookup(d, ttbr0 & PAGE_MASK, NULL);
     if ( paddr == INVALID_PADDR )
     {
         printk("Failed TTBR0 maddr lookup\n");
@@ -1295,7 +1295,7 @@ void dump_guest_s1_walk(struct domain *d, vaddr_t addr)
          !(first[offset] & 0x2) )
         goto done;
 
-    paddr = p2m_lookup(d, first[offset] & PAGE_MASK);
+    paddr = p2m_lookup(d, first[offset] & PAGE_MASK, NULL);
 
     if ( paddr == INVALID_PADDR )
     {
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index dba44fd..597e90a 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -58,7 +58,7 @@ int p2m_alloc_table(struct domain *d);
 void p2m_load_VTTBR(struct domain *d);
 
 /* Look up the MFN corresponding to a domain's PFN. */
-paddr_t p2m_lookup(struct domain *d, paddr_t gpfn);
+paddr_t p2m_lookup(struct domain *d, paddr_t gpfn, p2m_type_t *t);
 
 /* Setup p2m RAM mapping for domain d from start-end. */
 int p2m_populate_ram(struct domain *d, paddr_t start, paddr_t end);
-- 
1.7.10.4
Julien Grall
2013-Dec-10 14:18 UTC
[PATCH v3 06/10] xen/arm: Retrieve p2m type in get_page_from_gfn
Signed-off-by: Julien Grall <julien.grall@linaro.org>
---
Changes in v3:
- Return NULL when the p2m type is invalid or mmio
Changes in v2:
- Use p2m_lookup as p2m_get_entry was removed
---
 xen/include/asm-arm/p2m.h | 13 +++++++++----
 1 file changed, 9 insertions(+), 4 deletions(-)

diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 597e90a..52b33ce 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -109,12 +109,17 @@ static inline struct page_info *get_page_from_gfn(
     struct domain *d, unsigned long gfn, p2m_type_t *t, p2m_query_t q)
 {
     struct page_info *page;
-    unsigned long mfn = gmfn_to_mfn(d, gfn);
+    p2m_type_t p2mt;
+    paddr_t maddr = p2m_lookup(d, pfn_to_paddr(gfn), &p2mt);
+    unsigned long mfn = maddr >> PAGE_SHIFT;
 
-    if ( t )
-        *t = p2m_invalid;
+    if (t)
+        *t = p2mt;
 
-    if (!mfn_valid(mfn))
+    if ( p2mt == p2m_invalid || p2mt == p2m_mmio_direct )
+        return NULL;
+
+    if ( !mfn_valid(mfn) )
         return NULL;
     page = mfn_to_page(mfn);
     if ( !get_page(page, d) )
-- 
1.7.10.4
Julien Grall
2013-Dec-10 14:18 UTC
[PATCH v3 07/10] xen/arm: Implement xen_rem_foreign_from_p2m
Signed-off-by: Julien Grall <julien.grall@linaro.org>
---
Changes in v3:
- Move put_page in create_p2m_entries
- Move xenmem_rem_foreign_from_p2m in arch/arm/p2m.c
Changes in v2:
- Introduce the patch
---
 xen/arch/arm/p2m.c        | 19 ++++++++++++++++++-
 xen/include/asm-arm/p2m.h |  6 +-----
 2 files changed, 19 insertions(+), 6 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 39d8a03..72946c6 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -317,7 +317,14 @@ static int create_p2m_entries(struct domain *d,
             break;
         case REMOVE:
             {
-                lpae_t pte;
+                lpae_t pte = third[third_table_offset(addr)];
+                unsigned long mfn = maddr >> PAGE_SHIFT;
+                p2m_type_t t = pte.p2m.type;
+
+                /* TODO: Handle other p2m types */
+                if ( mfn_valid(mfn) && p2m_is_foreign(t) )
+                    put_page(mfn_to_page(mfn));
+
                 memset(&pte, 0x00, sizeof(pte));
                 write_pte(&third[third_table_offset(addr)], pte);
                 maddr += PAGE_SIZE;
@@ -380,6 +387,16 @@ void guest_physmap_remove_page(struct domain *d,
                        pfn_to_paddr(mfn), MATTR_MEM, p2m_invalid);
 }
 
+int xenmem_rem_foreign_from_p2m(struct domain *d, unsigned long gpfn)
+{
+    unsigned long mfn = gmfn_to_mfn(d, gpfn);
+
+    ASSERT(mfn_valid(mfn));
+
+    guest_physmap_remove_page(d, gpfn, mfn, 0);
+
+    return 0;
+}
+
 int p2m_alloc_table(struct domain *d)
 {
     struct p2m_domain *p2m = &d->arch.p2m;
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 52b33ce..3d1696c 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -144,11 +144,7 @@ static inline int get_page_and_type(struct page_info *page,
     return rc;
 }
 
-static inline int xenmem_rem_foreign_from_p2m(struct domain *d,
-                                              unsigned long gpfn)
-{
-    return -ENOSYS;
-}
+int xenmem_rem_foreign_from_p2m(struct domain *d, unsigned long gpfn);
 
 #endif /* _XEN_P2M_H */
-- 
1.7.10.4
Julien Grall
2013-Dec-10 14:18 UTC
[PATCH v3 08/10] xen/arm: Add relinquish_p2m_mapping to remove reference on every mapped page
This function will be called when the domain relinquishes its memory. It
removes the refcount on every mapped page backed by a valid MFN.

Currently, Xen doesn't take a reference on every new mapping, only on foreign
mappings, so restrict the function to foreign mappings.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
---
Changes in v3:
- Rework title
- Reuse create_p2m_entries to remove references
- Don't forget to set relmem!
- Fix compilation (missing include)
Changes in v2:
- Introduce the patch
---
 xen/arch/arm/domain.c        |  8 ++++++++
 xen/arch/arm/p2m.c           | 43 +++++++++++++++++++++++++++++++++++++++++-
 xen/include/asm-arm/domain.h |  1 +
 xen/include/asm-arm/p2m.h    | 15 +++++++++++++++
 4 files changed, 66 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 1590708..4099e88 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -717,6 +717,14 @@ int domain_relinquish_resources(struct domain *d)
         if ( ret )
             return ret;
 
+        d->arch.relmem = RELMEM_mapping;
+        /* Fallthrough */
+
+    case RELMEM_mapping:
+        ret = relinquish_p2m_mapping(d);
+        if ( ret )
+            return ret;
+
         d->arch.relmem = RELMEM_done;
         /* Fallthrough */
 
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 72946c6..778a07f 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -6,6 +6,8 @@
 #include <xen/bitops.h>
 #include <asm/flushtlb.h>
 #include <asm/gic.h>
+#include <asm/event.h>
+#include <asm/hardirq.h>
 
 /* First level P2M is 2 consecutive pages */
 #define P2M_FIRST_ORDER 1
@@ -213,7 +215,8 @@ static int p2m_create_table(struct domain *d,
 enum p2m_operation {
     INSERT,
     ALLOCATE,
-    REMOVE
+    REMOVE,
+    RELINQUISH,
 };
 
 static int create_p2m_entries(struct domain *d,
@@ -231,6 +234,7 @@ static int create_p2m_entries(struct domain *d,
     unsigned long cur_first_page = ~0,
                   cur_first_offset = ~0,
                   cur_second_offset = ~0;
+    unsigned long count = 0;
 
     spin_lock(&p2m->lock);
 
@@ -315,6 +319,7 @@ static int create_p2m_entries(struct domain *d,
                 maddr += PAGE_SIZE;
             }
             break;
+        case RELINQUISH:
         case REMOVE:
             {
                 lpae_t pte = third[third_table_offset(addr)];
@@ -334,6 +339,28 @@ static int create_p2m_entries(struct domain *d,
 
         if ( flush )
             flush_tlb_all_local();
+
+        count++;
+
+        if ( op == RELINQUISH && count == 512 && hypercall_preempt_check() )
+        {
+            p2m->next_gfn_to_relinquish = maddr >> PAGE_SHIFT;
+            rc = -EAGAIN;
+            goto out;
+        }
+    }
+
+    /* When the function will remove mapping, p2m type should always
+     * be p2m_invalid. */
+    if ( (t == p2m_ram_rw) || (t == p2m_ram_ro) || (t == p2m_map_foreign) )
+    {
+        unsigned long sgfn = paddr_to_pfn(start_gpaddr);
+        unsigned long egfn = paddr_to_pfn(end_gpaddr);
+
+        p2m->max_mapped_gfn = MAX(p2m->max_mapped_gfn, egfn);
+        /* Use next_gfn_to_relinquish to store the lowest gfn mapped */
+        p2m->next_gfn_to_relinquish = MIN(p2m->next_gfn_to_relinquish, sgfn);
     }
 
     rc = 0;
@@ -529,12 +556,26 @@ int p2m_init(struct domain *d)
 
     p2m->first_level = NULL;
 
+    p2m->max_mapped_gfn = 0;
+    p2m->next_gfn_to_relinquish = ULONG_MAX;
+
 err:
     spin_unlock(&p2m->lock);
 
     return rc;
 }
 
+int relinquish_p2m_mapping(struct domain *d)
+{
+    struct p2m_domain *p2m = &d->arch.p2m;
+
+    return create_p2m_entries(d, RELINQUISH,
+                              pfn_to_paddr(p2m->next_gfn_to_relinquish),
+                              pfn_to_paddr(p2m->max_mapped_gfn),
+                              pfn_to_paddr(INVALID_MFN),
+                              MATTR_MEM, p2m_invalid);
+}
+
 unsigned long gmfn_to_mfn(struct domain *d, unsigned long gpfn)
 {
     paddr_t p = p2m_lookup(d, pfn_to_paddr(gpfn), NULL);
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index 922eda3..4a4c018 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -75,6 +75,7 @@ struct arch_domain
         RELMEM_not_started,
         RELMEM_xen,
         RELMEM_page,
+        RELMEM_mapping,
         RELMEM_done,
     } relmem;
 
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 3d1696c..7f5d7b2 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -18,6 +18,15 @@ struct p2m_domain {
 
     /* Current VMID in use */
     uint8_t vmid;
+
+    /* Highest guest frame that's ever been mapped in the p2m.
+     * Only takes into account ram and foreign mappings.
+     */
+    unsigned long max_mapped_gfn;
+
+    /* When releasing mapped gfn's in a preemptible manner, recall where
+     * to resume the search */
+    unsigned long next_gfn_to_relinquish;
 };
 
 /* List of possible types for each page in the p2m entry.
@@ -48,6 +57,12 @@ int p2m_init(struct domain *d);
 /* Return all the p2m resources to Xen. */
 void p2m_teardown(struct domain *d);
 
+/* Remove the mapping refcount on each mapped page in the p2m.
+ *
+ * TODO: For the moment only foreign mappings are handled.
+ */
+int relinquish_p2m_mapping(struct domain *d);
+
 /* Allocate a new p2m table for a domain.
  *
  * Returns 0 for success or -errno.
-- 
1.7.10.4
Julien Grall
2013-Dec-10 14:18 UTC
[PATCH v3 09/10] xen/arm: Set foreign page type to p2m_map_foreign
Xen needs to know that the page belongs to another domain. Also take a
reference to this page.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
---
Changes in v3:
- Typos
- Check that the foreign domain is different from the current domain
Changes in v2:
- Even if gcc is buggy (see
  http://gcc.gnu.org/bugzilla/show_bug.cgi?id=18501), define a p2m type per
  mapspace to let the compiler warn about uninitialized values.
---
 xen/arch/arm/mm.c | 27 +++++++++++++++++++--------
 1 file changed, 19 insertions(+), 8 deletions(-)

diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 619aec9..2a2c769 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -977,6 +977,7 @@ static int xenmem_add_to_physmap_one(
 {
     unsigned long mfn = 0;
     int rc;
+    p2m_type_t t;
 
     switch ( space )
     {
@@ -1009,22 +1010,29 @@ static int xenmem_add_to_physmap_one(
 
         d->arch.grant_table_gpfn[idx] = gpfn;
 
+        t = p2m_ram_rw;
+
         spin_unlock(&d->grant_table->lock);
         break;
     case XENMAPSPACE_shared_info:
-        if ( idx == 0 )
-            mfn = virt_to_mfn(d->shared_info);
-        else
+        if ( idx != 0 )
             return -EINVAL;
+
+        mfn = virt_to_mfn(d->shared_info);
+        t = p2m_ram_rw;
+
         break;
     case XENMAPSPACE_gmfn_foreign:
     {
-        paddr_t maddr;
         struct domain *od;
+        struct page_info *page;
+
         od = rcu_lock_domain_by_any_id(foreign_domid);
         if ( od == NULL )
             return -ESRCH;
 
+        if ( od == d )
+            return -EINVAL;
+
         rc = xsm_map_gmfn_foreign(XSM_TARGET, d, od);
         if ( rc )
         {
@@ -1032,15 +1040,18 @@ static int xenmem_add_to_physmap_one(
             return rc;
         }
 
-        maddr = p2m_lookup(od, pfn_to_paddr(idx), NULL);
-        if ( maddr == INVALID_PADDR )
+        /* Take a reference to the foreign domain page.
+         * The reference will be released in XENMEM_remove_from_physmap. */
+        page = get_page_from_gfn(od, idx, NULL, P2M_ALLOC);
+        if ( !page )
         {
             dump_p2m_lookup(od, pfn_to_paddr(idx));
             rcu_unlock_domain(od);
             return -EINVAL;
         }
 
-        mfn = maddr >> PAGE_SHIFT;
+        mfn = page_to_mfn(page);
+        t = p2m_map_foreign;
 
         rcu_unlock_domain(od);
         break;
@@ -1051,7 +1062,7 @@ static int xenmem_add_to_physmap_one(
     }
 
     /* Map at new location. */
-    rc = guest_physmap_add_page(d, gpfn, mfn, 0);
+    rc = guest_physmap_add_entry(d, gpfn, mfn, 0, t);
 
     return rc;
 }
-- 
1.7.10.4
Julien Grall
2013-Dec-10 14:18 UTC
[PATCH v3 10/10] xen/arm: grant-table: Support read-only mapping
Signed-off-by: Julien Grall <julien.grall@linaro.org>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
---
Changes in v2:
- Use the p2m grant types to map grant-table mappings
---
 xen/arch/arm/mm.c | 12 +++++-------
 1 file changed, 5 insertions(+), 7 deletions(-)

diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 2a2c769..da1676f 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -1295,19 +1295,17 @@ int create_grant_host_mapping(unsigned long addr, unsigned long frame,
                               unsigned int flags, unsigned int cache_flags)
 {
     int rc;
+    p2m_type_t t = p2m_grant_map_rw;
 
     if ( cache_flags  || (flags & ~GNTMAP_readonly) != GNTMAP_host_map )
         return GNTST_general_error;
 
-    /* XXX: read only mappings */
     if ( flags & GNTMAP_readonly )
-    {
-        gdprintk(XENLOG_WARNING, "read only mappings not implemented yet\n");
-        return GNTST_general_error;
-    }
+        t = p2m_grant_map_ro;
+
+    rc = guest_physmap_add_entry(current->domain, addr >> PAGE_SHIFT,
+                                 frame, 0, t);
 
-    rc = guest_physmap_add_page(current->domain,
-                                addr >> PAGE_SHIFT, frame, 0);
     if ( rc )
         return GNTST_general_error;
     else
-- 
1.7.10.4