search for: page_info

Displaying 20 results from an estimated 52 matches for "page_info".

2006 Sep 29
0
[PATCH 2/6] xen: add per-node buckets to page allocator
...e + * checking the node_id of the previous page. If they differ and the + * latter is not on a MAX_ORDER boundary, then we reserve the page by + * not freeing it to the buddy allocator. + */ +#define MAX_ORDER_ALIGNED (1UL << (MAX_ORDER)) void init_heap_pages( unsigned int zone, struct page_info *pg, unsigned long nr_pages) { + unsigned int nid_curr,nid_prev; unsigned long i; ASSERT(zone < NR_ZONES); + if ( likely(page_to_mfn(pg) != 0) ) + nid_prev = phys_to_nid(page_to_maddr(pg-1)); + else + nid_prev = phys_to_nid(page_to_maddr(pg)); + for ( i...
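A minimal standalone model of the boundary check described in the comment above, with phys_to_nid() stubbed (the node changes every 1024 pages here) and printf() standing in for the reserve path; the real code would free the page to the buddy allocator in the else branch:

#include <stdio.h>

#define MAX_ORDER         20
#define MAX_ORDER_ALIGNED (1UL << MAX_ORDER)

/* Stub: pretend the NUMA node changes every 1024 pages. */
static unsigned int phys_to_nid(unsigned long pfn)
{
    return pfn >> 10;
}

static void init_heap_pages_model(unsigned long first_pfn,
                                  unsigned long nr_pages)
{
    unsigned int nid_curr, nid_prev;
    unsigned long i;

    /* Seed with the node of the page just before the range. */
    nid_prev = phys_to_nid(first_pfn ? first_pfn - 1 : first_pfn);

    for ( i = 0; i < nr_pages; i++ )
    {
        unsigned long pfn = first_pfn + i;

        nid_curr = phys_to_nid(pfn);

        /*
         * Nodes differ and the page is not MAX_ORDER-aligned:
         * reserve it (skip freeing) so that no buddy chunk ever
         * straddles a node boundary.
         */
        if ( nid_curr != nid_prev && (pfn & (MAX_ORDER_ALIGNED - 1)) )
            printf("reserving pfn %#lx (node %u -> %u)\n",
                   pfn, nid_prev, nid_curr);
        /* else: free_heap_pages() in the real allocator */

        nid_prev = nid_curr;
    }
}

int main(void)
{
    init_heap_pages_model(0x3f0, 0x40);  /* crosses the 0x400 node boundary */
    return 0;
}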
2013 Dec 06
0
[Patch v2] xen/tmem: Fix uses of unmatched __map_domain_page()
...ns(-) diff --git a/xen/include/xen/tmem_xen.h b/xen/include/xen/tmem_xen.h index b26c6fa..cccc98e 100644 --- a/xen/include/xen/tmem_xen.h +++ b/xen/include/xen/tmem_xen.h @@ -228,26 +228,24 @@ static inline bool_t tmem_current_is_privileged(void) static inline uint8_t tmem_get_first_byte(struct page_info *pfp) { - void *p = __map_domain_page(pfp); + const uint8_t *p = __map_domain_page(pfp); + uint8_t byte = p[0]; - return (uint8_t)(*(char *)p); + unmap_domain_page(p); + + return byte; } static inline int tmem_page_cmp(struct page_info *pfp1, struct page_info *pfp2) { -...
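The shape of the fix is easy to see in isolation: the old code returned a byte through a pointer into a mapping it never tore down. A self-contained sketch with the map/unmap pair stubbed out; only the function names come from the patch:

#include <assert.h>
#include <stdint.h>

/* Stubs standing in for Xen's domain-page mapping calls. */
static void *__map_domain_page(void *pfp)    { return pfp; }
static void unmap_domain_page(const void *p) { (void)p; }

static uint8_t tmem_get_first_byte(void *pfp)
{
    const uint8_t *p = __map_domain_page(pfp);
    uint8_t byte = p[0];   /* copy out while the mapping is live */

    unmap_domain_page(p);  /* the balancing unmap the old code lacked */

    return byte;
}

int main(void)
{
    uint8_t page[4096] = { 0x42 };

    assert(tmem_get_first_byte(page) == 0x42);
    return 0;
}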
2009 Jan 26
24
page ref/type count overflows
With pretty trivial user-mode programs being able to crash the kernel, because the ref counter widths in Xen are narrower than in Linux, I started an attempt to put together a kernel-side fix. While addressing the plain hypercalls is pretty straightforward, dealing with multicalls (both when using them for lazy mmu mode batching and when explicitly using them in e.g. netback - the backends are
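A hedged sketch of the general fix idea (not the author's actual patch): make reference grabs saturate, so a guest that loops on hypercalls gets a failure instead of wrapping the narrow counter. The 26-bit width is illustrative only:

#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define PGC_count_width 26                     /* illustrative width */
#define PGC_count_mask  ((1u << PGC_count_width) - 1)

struct page { uint32_t count_info; };

/* Take a reference only if the counter cannot wrap. */
static bool get_page_checked(struct page *pg)
{
    if ( (pg->count_info & PGC_count_mask) == PGC_count_mask )
        return false;                          /* saturated: refuse */

    pg->count_info++;
    return true;
}

int main(void)
{
    struct page pg = { .count_info = PGC_count_mask - 1 };

    assert(get_page_checked(&pg));     /* last legal reference */
    assert(!get_page_checked(&pg));    /* further refs are refused */
    return 0;
}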
2007 May 29
0
Fw: [RFC] makedumpfile: xen extraction
.../ + SYMBOL_INIT(xen_heap_start, "xen_heap_start"); /* ia64 */ + SYMBOL_INIT(xen_pstart, "xen_pstart"); /* ia64 */ + SYMBOL_INIT(frametable_pg_dir, "frametable_pg_dir"); /* ia64 */ + + return TRUE; +} + +int +get_structure_info_xen(struct DumpInfo *info) +{ + SIZE_INIT(page_info, "page_info"); + OFFSET_INIT(page_info.count_info, "page_info", "count_info"); + /* _domain is the first member of union u */ + OFFSET_INIT(page_info._domain, "page_info", "u"); + + SIZE_INIT(domain, "domain"); + OFFSET_INIT(domain.domain_...
2013 Nov 14
4
[PATCH] xen/arm: Allow ballooning to work with 1:1 memory mapping
...#include <xsm/xsm.h> #include <xen/trace.h> +#ifdef CONFIG_ARM +#include <asm/platform.h> +#endif struct memop_args { /* INPUT */ @@ -90,7 +93,7 @@ static void increase_reservation(struct memop_args *a) static void populate_physmap(struct memop_args *a) { - struct page_info *page; + struct page_info *page = NULL; unsigned long i, j; xen_pfn_t gpfn, mfn; struct domain *d = a->domain; @@ -122,7 +125,33 @@ static void populate_physmap(struct memop_args *a) } else { - page = alloc_domheap_pages(d, a->extent_ord...
2011 Sep 23
2
Some problems about xenpaging
...9-05 20:39:30.000000000 +0800 +++ ./b/xen/arch/x86/mm/p2m.c 2011-09-23 23:46:19.000000000 +0800 @@ -675,6 +675,23 @@ BUG_ON(p2md->pod.entry_count < 0); pod--; } + else if ( steal_for_cache && p2m_is_paging(t) ) + { + struct page_info *page; + /* alloc a new page to compensate the pod list */ + page = alloc_domheap_page(d, 0); + if ( unlikely(page == NULL) ) + { + goto out_entry_check; + } + set_p2m_entry(d, gpfn + i, _mfn(INVALID_MFN), 0, p2m_inval...
2013 Dec 06
36
[V6 PATCH 0/7]: PVH dom0....
Hi, V6: The only change from V5 is in patch #6: - changed comment to reflect autoxlate - removed a redundant ASSERT - reworked the logic a bit so that get_page_from_gfn() is called with NULL for the p2m type, as before; ARM has an ASSERT wanting it to be NULL. Tim: patch 4 needs your approval. Daniel: patch 5 needs your approval. These patches implement PVH dom0. Patches 1 and 2
2007 Oct 03
0
[PATCH 3/3] TLB flushing and IO memory mapping
...grant table ops diff -r 749b60ccc177 xen/arch/x86/mm.c --- a/xen/arch/x86/mm.c Wed Jul 25 14:03:08 2007 +0100 +++ b/xen/arch/x86/mm.c Wed Jul 25 14:03:12 2007 +0100 @@ -594,6 +594,14 @@ get_##level##_linear_pagetable( return 1; \ } + +int iomem_page_test(unsigned long mfn, struct page_info *page) +{ + return unlikely(!mfn_valid(mfn)) || + unlikely(page_get_owner(page) == dom_io); +} + + int get_page_from_l1e( l1_pgentry_t l1e, struct domain *d) @@ -611,8 +619,7 @@ get_page_from_l1e( return 0; } - if ( unlikely(!mfn_valid(mfn)) || - unlikely...
2011 Nov 01
2
xenpaging: one way to avoid paging out the page when the corresponding mfn is in use.
...| Check type ok paged out; | try to map What we want is: when the gfn_to_mfn() action happens during paging nomination, the nomination should abort immediately. Our solution prototype is like this: 1. Introduce a new member named last_access in the page_info struct to save the last access time and access tag. 2. When the mfn is obtained through gfn_to_mfn(), we save the time stamp and access tag in the page_info. 3. The paging nominate procedure uses the access information as a criterion. How it works? 1. Using the time stamp to avoid case 1. When the mfn is obtained...
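A standalone sketch of the proposed mechanism; the last_access field name comes from the mail, while the tick counter and grace threshold are invented for illustration:

#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define NOMINATE_GRACE_TICKS 100   /* assumed freshness threshold */

struct page_info {
    uint64_t last_access;          /* the proposed new member */
};

static uint64_t now_ticks;         /* stand-in for a real clock source */

/* gfn_to_mfn() path: stamp the page on every translation. */
static void record_access(struct page_info *pg)
{
    pg->last_access = now_ticks;
}

/* Nomination criterion: only pages idle past the grace period qualify. */
static bool ok_to_nominate(const struct page_info *pg)
{
    return (now_ticks - pg->last_access) > NOMINATE_GRACE_TICKS;
}

int main(void)
{
    struct page_info pg = { 0 };

    now_ticks = 50;
    record_access(&pg);            /* mfn handed out at tick 50 */

    now_ticks = 60;
    assert(!ok_to_nominate(&pg));  /* too fresh: nomination aborts */

    now_ticks = 200;
    assert(ok_to_nominate(&pg));   /* idle long enough to page out */
    return 0;
}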
2013 Dec 04
5
[PATCH] coverity: Store the modelling file in the source tree.
...k up modifications automatically. The model file + * must be uploaded by an admin in the analysis. + */ + +/* Definitions */ +#define NULL (void *)0 +#define PAGE_SIZE 4096UL +#define PAGE_MASK (~(PAGE_SIZE-1)) + +#define assert(cond) /* empty */ +#define page_to_mfn(p) (unsigned long)(p) + +struct page_info {}; + +/* + * map_domain_page() takes an existing domain page and possibly maps it into + * the Xen pagetables, to allow for direct access. Model this as a memory + * allocation of exactly 1 page. + * + * map_domain_page() never fails (It will BUG() before returning NULL), and + * will only ever r...
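A plausible body for the model that comment describes, using Coverity's modelling primitives (declared here so the snippet is self-contained; model files are interpreted by the analyser rather than executed):

/* Coverity modelling primitives; the analyser supplies their meaning. */
void *__coverity_alloc__(unsigned long size);
void __coverity_free__(void *va);

#define PAGE_SIZE 4096UL

/* Model the mapping as an allocation of exactly one page, so the
 * checker flags any path that fails to unmap_domain_page() it. */
void *map_domain_page(unsigned long mfn)
{
    (void)mfn;
    return __coverity_alloc__(PAGE_SIZE);
}

void unmap_domain_page(const void *va)
{
    __coverity_free__((void *)va);
}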
2013 Sep 05
5
Shared memory between Dom0 and DomU
...up an environment. I searched on the internet and found some patches for old versions; however, I want to run on Xen 4.1 or 4.2. Does this still work? Can you help me, or give me some instructions or information on this? Also, I want to know how the data structure "page_info" in Xen connects with the data structure "page" in the Linux kernel. Which source code files do I need to read? Thanks for reading. Thank you!
2011 Jan 17
8
[PATCH 0 of 3] Miscellaneous populate-on-demand bugs
This patch series fixes a number of bugs in the p2m, EPT, and PoD code which were found as part of our XenServer product testing. Each patch fixes an actual bug, and the 3.4-based version of the series has been tested thoroughly. (There may be bugs in porting the patches, but most of them are simple enough as to make that unlikely.) Each patch is conceptually independent, so they can each
2006 Sep 22
0
[XenPPC] Re: [PATCH] Fix BUG in alloc_heap_pages
...> page_alloc.c | 2 +- > 1 file changed, 1 insertion(+), 1 deletion(-) > > diff -r 5418062d2da8 xen/common/page_alloc.c > --- a/xen/common/page_alloc.c Tue Sep 19 11:26:00 2006 -0500 > +++ b/xen/common/page_alloc.c Thu Sep 21 17:38:41 2006 -0400 > @@ -313,7 +313,7 @@ struct page_info *alloc_heap_pages(unsig > > found: > pg = list_entry(heap[zone][i].next, struct page_info, list); > - list_del(&pg->list); > + list_del_init(&pg->list); > > /* We may have to halve the chunk a number of times. */ > while ( i != order )...
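A plausible reason the one-word change matters: plain list_del() leaves the node's pointers dangling, so any later list_empty() test on the node itself misbehaves, while list_del_init() re-points the node at itself. A self-contained demonstration with a minimal list.h-style list:

#include <assert.h>
#include <stdbool.h>

struct list_head { struct list_head *next, *prev; };

static void INIT_LIST_HEAD(struct list_head *h) { h->next = h->prev = h; }

static void list_add(struct list_head *n, struct list_head *h)
{
    n->next = h->next; n->prev = h;
    h->next->prev = n; h->next = n;
}

static void list_del(struct list_head *n)
{
    n->prev->next = n->next;
    n->next->prev = n->prev;
    /* n->next and n->prev are left dangling */
}

static void list_del_init(struct list_head *n)
{
    list_del(n);
    INIT_LIST_HEAD(n);   /* node is now safely "empty" on its own */
}

static bool list_empty(const struct list_head *h) { return h->next == h; }

int main(void)
{
    struct list_head heap, pg;

    INIT_LIST_HEAD(&heap);
    list_add(&pg, &heap);

    list_del_init(&pg);
    assert(list_empty(&pg));  /* holds; with plain list_del it would not */
    return 0;
}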
2008 Dec 15
8
[PATCH 0/2] MCA support with page offlining
Hi all, I had posted about MCA support for Intel64 before. It had only a function to log the MCA error data received from the hypervisor. http://lists.xensource.com/archives/html/xen-devel/2008-09/msg00876.html I attach patches that support not only error logging but also a page-offlining function. The page where an MCA occurs will be offlined and not reused. A new flag "PGC_reserved"
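A hedged sketch of the offlining idea: stamp the broken page with the new flag and have the allocator refuse it from then on. Only the flag name comes from the mail; the bit position and surrounding logic are guesses:

#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define PGC_reserved (1u << 31)   /* assumed bit position */

struct page_info { uint32_t count_info; };

/* MCA handler path: mark the faulting page so it is never reused. */
static void offline_page(struct page_info *pg)
{
    pg->count_info |= PGC_reserved;
}

/* Allocator path: skip offlined pages when scanning the free lists. */
static bool page_is_allocatable(const struct page_info *pg)
{
    return !(pg->count_info & PGC_reserved);
}

int main(void)
{
    struct page_info pg = { 0 };

    assert(page_is_allocatable(&pg));
    offline_page(&pg);                /* an MCA hit this page */
    assert(!page_is_allocatable(&pg));
    return 0;
}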
2012 Nov 15
1
[RFC/PATCH v4] XENMEM_claim_pages (subop of existing) hypercall
...set_owner(page, dom_cow); - d->tot_pages--; + domain_decrease_tot_pages(d, 1); drop_dom_ref = (d->tot_pages == 0); page_list_del(page, &d->page_list); spin_unlock(&d->page_alloc_lock); @@ -680,7 +680,7 @@ static int page_make_private(struct domain *d, struct page_info *page) ASSERT(page_get_owner(page) == dom_cow); page_set_owner(page, d); - if ( d->tot_pages++ == 0 ) + if ( domain_increase_tot_pages(d, 1) == 0 ) get_domain(d); page_list_add_tail(page, &d->page_list); spin_unlock(&d->page_alloc_lock); diff --...
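Why funnel every d->tot_pages update through a wrapper? A claim is a promise of future allocation, and it can only be enforced at a single accounting choke point. A sketch of what such wrappers might look like; only the function names come from the diff, and the claim bookkeeping is an assumption:

#include <assert.h>

struct domain {
    long tot_pages;
    long outstanding_claim;   /* pages promised but not yet allocated */
};

static long domain_increase_tot_pages(struct domain *d, long pages)
{
    long old = d->tot_pages;

    /* An allocation consumes part of any outstanding claim. */
    if ( d->outstanding_claim )
        d->outstanding_claim -= pages < d->outstanding_claim
                                ? pages : d->outstanding_claim;
    d->tot_pages += pages;
    return old;               /* mirrors the replaced post-increment */
}

static long domain_decrease_tot_pages(struct domain *d, long pages)
{
    d->tot_pages -= pages;
    return d->tot_pages;
}

int main(void)
{
    struct domain d = { .tot_pages = 0, .outstanding_claim = 4 };

    assert(domain_increase_tot_pages(&d, 1) == 0);  /* first page */
    assert(d.outstanding_claim == 3);
    domain_decrease_tot_pages(&d, 1);
    assert(d.tot_pages == 0);
    return 0;
}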
2008 Nov 04
7
[PATCH 1/1] Xen PV support for hugepages
...\ ((d != dom_io) && \ (rangeset_is_empty((d)->iomem_caps) && \ @@ -584,6 +587,26 @@ static int get_page_and_type_from_pagenr return rc; } +static int get_data_page(struct page_info *page, struct domain *d, int writeable) +{ + int rc; + + if ( writeable ) + rc = get_page_and_type(page, d, PGT_writable_page); + else + rc = get_page(page, d); + + return rc; +} + +static void put_data_page(struct page_info *page, int writeable) +{ + if ( writeable ) +...
2012 Feb 06
1
[PATCH] ia64: fix build (next instance)
...n/include/xsm/xsm.h +++ b/xen/include/xsm/xsm.h @@ -106,6 +106,7 @@ struct xsm_operations { int (*memory_adjust_reservation) (struct domain *d1, struct domain *d2); int (*memory_stat_reservation) (struct domain *d1, struct domain *d2); int (*memory_pin_page) (struct domain *d, struct page_info *page); + int (*remove_from_physmap) (struct domain *d1, struct domain *d2); int (*console_io) (struct domain *d, int cmd); @@ -174,7 +175,6 @@ struct xsm_operations { int (*update_va_mapping) (struct domain *d, struct domain *f,...
2013 Nov 06
0
[PATCH v5 5/6] xen/arm: Implement hypercall for dirty page tracing
...get_gma_start_end(d, &gma_start, &gma_end); + + nr_bytes = (PFN_DOWN(gma_end - gma_start) + 7) / 8; + nr_pages = (nr_bytes + PAGE_SIZE - 1) / PAGE_SIZE; + + BUG_ON( nr_pages > MAX_DIRTY_BITMAP_PAGES ); + + for ( i = 0; i < nr_pages; ++i ) + { + struct page_info *page; + page = alloc_domheap_page(NULL, 0); + if ( page == NULL ) + goto cleanup_on_failure; + + d->arch.dirty.bitmap[i] = map_domain_page_global(__page_to_mfn(page)); + clear_page(d->arch.dirty.bitmap[i]); + } + + d->arch.dirty.bitmap...
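The sizing arithmetic in the excerpt is worth spelling out: one dirty bit per guest page, rounded up to whole bytes and then to whole bitmap pages. A worked standalone version, with an illustrative guest RAM range:

#include <stdio.h>

#define PAGE_SHIFT  12
#define PAGE_SIZE   (1UL << PAGE_SHIFT)
#define PFN_DOWN(x) ((x) >> PAGE_SHIFT)

int main(void)
{
    unsigned long gma_start = 0x80000000UL;  /* assumed guest RAM range */
    unsigned long gma_end   = 0xC0000000UL;  /* 1 GiB */

    unsigned long nr_bytes = (PFN_DOWN(gma_end - gma_start) + 7) / 8;
    unsigned long nr_pages = (nr_bytes + PAGE_SIZE - 1) / PAGE_SIZE;

    /* 1 GiB = 262144 guest pages = 32768 bitmap bytes = 8 bitmap pages */
    printf("%lu bitmap bytes in %lu pages\n", nr_bytes, nr_pages);
    return 0;
}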
2012 Oct 11
14
alloc_heap_pages is low efficient with more CPUs
...started the VM, it took just 3s, but for the second start it took 30s. After studying it with log output, I located a place in the hypervisor that costs too much time, accounting for 98% of the whole start-up time. xen/common/page_alloc.c /* Allocate 2^@order contiguous pages. */ static struct page_info *alloc_heap_pages( unsigned int zone_lo, unsigned int zone_hi, unsigned int node, unsigned int order, unsigned int memflags) { if ( pg[i].u.free.need_tlbflush ) { /* Add in extra CPUs that need flushing because of this page. */ cpus_andnot(extra_cpus_...
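A minimal model of the loop the poster points at (all names here are stand-ins): every page of a 2^order allocation whose need_tlbflush stamp is set costs a pass over the CPU mask, so the work grows with both the allocation order and the CPU count:

#include <stdint.h>
#include <string.h>

#define NR_CPUS 128
typedef struct { uint64_t bits[NR_CPUS / 64]; } cpumask_t;

struct free_page {
    int need_tlbflush;   /* stale TLB entries may reference this page */
};

/* Stand-in for the cpumask arithmetic done once per page. */
static void accumulate_flush_cpus(cpumask_t *extra)
{
    for ( unsigned int w = 0; w < NR_CPUS / 64; w++ )
        extra->bits[w] |= ~0ULL;
}

/*
 * The shape of the hot path: O(2^order * NR_CPUS) work per allocation,
 * which matches the report of one call dominating the start-up time.
 */
static void alloc_heap_pages_tail(struct free_page *pg, unsigned long count,
                                  cpumask_t *extra_cpus_mask)
{
    memset(extra_cpus_mask, 0, sizeof(*extra_cpus_mask));

    for ( unsigned long i = 0; i < count; i++ )
        if ( pg[i].need_tlbflush )
            accumulate_flush_cpus(extra_cpus_mask);
}

int main(void)
{
    struct free_page pg[512] = { { 1 } };
    cpumask_t mask;

    alloc_heap_pages_tail(pg, 512, &mask);   /* an order-9 allocation */
    return 0;
}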
2012 Jun 08
18
[PATCH 0 of 4 RFC] Populate-on-demand: Check pages being returned by the balloon driver
Populate-on-demand: Check pages being returned by the balloon driver This patch series is the second result of my work last summer on decreasing fragmentation of superpages in a guest's p2m when using populate-on-demand. This patch series is against 4.1; I'm posting it to get feedback on the viability of getting a ported version of this patch into 4.2. As with the previous