Displaying 9 results from an estimated 9 matches for "p2m_entry".
2011 May 06
14
[PATCH 0 of 4] Use superpages on restore/migrate
This patch series restores the use of superpages when restoring or
migrating a VM, while retaining efficient batching of 4k pages when
superpages are not appropriate or available.
Signed-off-by: George Dunlap <george.dunlap@eu.citrix.com>
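A minimal sketch of that split, assuming a 2MB superpage built from 512 contiguous, aligned 4k pfns; the helper names (try_alloc_superpage, alloc_4k_pages) are hypothetical stand-ins, not the real libxc restore interfaces:

/* Rough sketch of the batching policy described above: try a superpage
 * for an aligned, contiguous run of pfns, otherwise fall back to 4k pages.
 * SUPERPAGE_NR_PFNS and the two allocation helpers are hypothetical. */
#define SUPERPAGE_NR_PFNS 512UL          /* 2MB superpage / 4kB pages */

extern int try_alloc_superpage(unsigned long first_pfn);          /* hypothetical */
extern int alloc_4k_pages(unsigned long *pfns, unsigned long n);  /* hypothetical */

static int is_superpage_run(const unsigned long *pfns, unsigned long i,
                            unsigned long count)
{
    unsigned long j;

    /* First pfn must be superpage-aligned and the run must be contiguous. */
    if ( (pfns[i] & (SUPERPAGE_NR_PFNS - 1)) || i + SUPERPAGE_NR_PFNS > count )
        return 0;
    for ( j = 1; j < SUPERPAGE_NR_PFNS; j++ )
        if ( pfns[i + j] != pfns[i] + j )
            return 0;
    return 1;
}

static int populate_batch(unsigned long *pfns, unsigned long count)
{
    unsigned long i = 0;

    while ( i < count )
    {
        if ( is_superpage_run(pfns, i, count) &&
             try_alloc_superpage(pfns[i]) == 0 )
        {
            i += SUPERPAGE_NR_PFNS;       /* whole 2MB region populated */
            continue;
        }
        if ( alloc_4k_pages(&pfns[i], 1) )  /* fall back to a 4k page */
            return -1;
        i++;
    }
    return 0;
}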
2011 Jan 17
8
[PATCH 0 of 3] Miscellaneous populate-on-demand bugs
This patch series fixes a number of bugs in the p2m, EPT, and PoD code
which were found as part of our XenServer product testing.
Each patch fixes an actual bug, and the 3.4-based version of the series
has been tested thoroughly. (There may be bugs in porting the patches,
but most of them are simple enough to make that unlikely.)
Each patch is conceptually independent, so they can each
2007 Jan 05
0
The mfn_valid on shadow_set_p2m_entry()
I noticed that in changeset 12568 we changed valid_mfn to be, in fact,
mfn_valid(), and then in changeset 12572 we changed
shadow_set_p2m_entry() to use mfn_valid().
I'm a bit confused by this change. I think the current meaning of
mfn_valid() is to check whether the mfn is valid RAM. That is fine for
page table pages, but for the p2m table, considering shadow mode for a
driver domain or IOMMU support in the future, would it be possible that the mfn is w...
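The concern can be illustrated with a minimal sketch; the names below (mfn_is_ram, write_p2m_entry, set_p2m_entry_checked) are hypothetical stand-ins for the real shadow code, and the point is only that a strict "is this RAM?" guard would also reject an MMIO frame that a driver domain or IOMMU setup might legitimately want in its p2m:

/* Minimal illustration of the question above; all names are invented. */
typedef unsigned long mfn_t;

extern int mfn_is_ram(mfn_t mfn);   /* stand-in for mfn_valid(): "is this valid RAM?" */
extern int write_p2m_entry(unsigned long gfn, mfn_t mfn);   /* hypothetical */

static int set_p2m_entry_checked(unsigned long gfn, mfn_t mfn)
{
    /*
     * Fine for page-table pages, which are always RAM.  But an MMIO
     * frame mapped for a driver domain (or through an IOMMU) is not
     * "valid RAM", so this check would refuse to install it.
     */
    if ( !mfn_is_ram(mfn) )
        return 0;

    return write_p2m_entry(gfn, mfn);
}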
2011 Mar 25
2
[RFC PATCH 2/3] AMD IOMMU: Implement p2m sharing
2013 Oct 17
42
[PATCH v8 0/19] enable swiotlb-xen on arm and arm64
Hi all,
this patch series enables xen-swiotlb on arm and arm64.
It has been heavily reworked compared to the previous versions in order
to achieve better performance and to address review comments.
We are not using dma_mark_clean to ensure coherency anymore. We call the
platform implementation of map_page and unmap_page.
We assume that dom0 has been mapped 1:1 (physical address == machine address) ...
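One consequence of that 1:1 assumption can be sketched as follows; the helper names are invented for illustration and are not the real swiotlb-xen interfaces. For pages belonging to dom0, pfn-to-mfn translation degenerates to the identity, and only bounce buffers or foreign pages need anything more:

/* Sketch of the 1:1 dom0 mapping assumption mentioned above: for dom0's
 * own pages, physical address == machine address, so the translation
 * helpers are the identity.  Illustrative names only. */
typedef unsigned long phys_addr_t;
typedef unsigned long dma_addr_t;

static inline dma_addr_t dom0_phys_to_machine(phys_addr_t paddr)
{
    return (dma_addr_t)paddr;   /* identity under the 1:1 mapping */
}

static inline phys_addr_t dom0_machine_to_phys(dma_addr_t maddr)
{
    return (phys_addr_t)maddr;  /* identity under the 1:1 mapping */
}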
2012 Mar 01
14
[PATCH 0 of 3] RFC Paging support for AMD NPT V2
There has been some progress, but still no joy. Definitely not intended for
inclusion at this point.
Tim, Wei, I added a Xen command line toggle to disable IOMMU and P2M table
sharing.
Tim, I verified that changes to p2m-pt.c don't break shadow mode (64bit
hypervisor and Win 7 guest).
Hongkaixing, I incorporated your suggestion in patch 2, so I should add your
Signed-off-by eventually.
2012 Jun 08
18
[PATCH 0 of 4 RFC] Populate-on-demand: Check pages being returned by the balloon driver
This patch series is the second result of my work last summer on
decreasing fragmentation of superpages in a guest's p2m when using
populate-on-demand.
This patch series is against 4.1; I'm posting it to get feedback on
the viability of getting a ported version of this patch into 4.2.
As with the previous
2006 Dec 01
1
[PATCH 2/10] Add support for netfront/netback acceleration drivers
...xen/include/asm-ia64/mm.h
--- a/xen/include/asm-ia64/mm.h Fri Dec 01 16:21:46 2006 +0000
+++ b/xen/include/asm-ia64/mm.h Fri Dec 01 16:22:41 2006 +0000
@@ -432,7 +432,7 @@ extern unsigned long lookup_domain_mpa(s
 extern unsigned long lookup_domain_mpa(struct domain *d, unsigned long mpaddr, struct p2m_entry* entry);
 extern void *domain_mpa_to_imva(struct domain *d, unsigned long mpaddr);
 extern volatile pte_t *lookup_noalloc_domain_pte(struct domain* d, unsigned long mpaddr);
-extern unsigned long assign_domain_mmio_page(struct domain *d, unsigned long mpaddr, unsigned long size);
+extern unsigned l...
2007 May 30
30
[VTD][patch 0/5] HVM device assignment using vt-d
...Q_dpci(struct domain *d, unsigned int irq);
int dpci_ioport_intercept(ioreq_t *p, int type);
int iommu_map_page(struct domain *d, unsigned long gfn, unsigned long mfn);
int iommu_unmap_page(struct domain *d, unsigned long gfn);
void iommu_flush(struct domain *d, unsigned long gfn, u64 *p2m_entry);
void iommu_set_pgd(struct domain *d);
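A hedged sketch of how this interface might be driven from a p2m update; only the prototypes above come from the patch, while the calling context below (example_p2m_update, the struct domain and u64 stand-ins) is invented for illustration:

/* Illustrative only: keep the IOMMU page tables in step with a p2m update,
 * using the interface quoted above.  struct domain and u64 are placeholders
 * for the real Xen definitions. */
struct domain;
typedef unsigned long long u64;

int iommu_map_page(struct domain *d, unsigned long gfn, unsigned long mfn);
int iommu_unmap_page(struct domain *d, unsigned long gfn);
void iommu_flush(struct domain *d, unsigned long gfn, u64 *p2m_entry);

static int example_p2m_update(struct domain *d, unsigned long gfn,
                              unsigned long old_mfn, unsigned long new_mfn,
                              u64 *p2m_entry)
{
    int rc;

    if ( old_mfn != 0 && (rc = iommu_unmap_page(d, gfn)) != 0 )
        return rc;                      /* drop the stale mapping first */

    rc = iommu_map_page(d, gfn, new_mfn);
    if ( rc )
        return rc;                      /* install the new mapping */

    iommu_flush(d, gfn, p2m_entry);     /* flush the IOTLB for this gfn */
    return 0;
}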