search for: smfn

Displaying 4 results from an estimated 4 matches for "smfn".

2006 Jul 01
3
Page fault is 4 times faster with XI shadow mechanism
...t the R/W flag in the shadow L3 PTE. Perhaps the XI code could do a better job of validating guest page table entries but I was reluctant to be more rigorous about checking guest PTEs than real hardware is. In your latest email, you ask "Do we really need to reserve one snapshot page for each smfn at first and retain it until the HVM domain is destroyed?" Well I don't. I simply pre-allocate a pool of SPTI's. It can be quite a large pool but certainly not one-SPTI per MFN. SPTIs are allocated on demand (when a guest page needs to be shadowed) and, when the pool runs...
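The excerpt describes pre-allocating a fixed pool of SPTIs and handing them out on demand rather than reserving one per MFN. A minimal free-list sketch of that idea, assuming hypothetical names (`struct spti`, `spti_pool_init`, etc. are illustrative, not the actual XI patch structures):

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical shadow-page-table-info (SPTI) record. */
struct spti {
    unsigned long gpfn;      /* guest frame this shadow tracks */
    struct spti *next_free;  /* free-list link */
};

struct spti_pool {
    struct spti *entries;
    struct spti *free_list;
    size_t size;
};

/* Pre-allocate a fixed pool, as the email describes: it can be large,
 * but it is certainly not one SPTI per machine frame. */
static int spti_pool_init(struct spti_pool *p, size_t n)
{
    size_t i;
    p->entries = calloc(n, sizeof(struct spti));
    if (!p->entries)
        return -1;
    p->size = n;
    p->free_list = &p->entries[0];
    for (i = 0; i + 1 < n; i++)
        p->entries[i].next_free = &p->entries[i + 1];
    p->entries[n - 1].next_free = NULL;
    return 0;
}

/* Allocate on demand, when a guest page first needs shadowing. */
static struct spti *spti_alloc(struct spti_pool *p, unsigned long gpfn)
{
    struct spti *s = p->free_list;
    if (!s)
        return NULL;   /* pool exhausted: caller must reclaim */
    p->free_list = s->next_free;
    s->gpfn = gpfn;
    return s;
}

static void spti_free(struct spti_pool *p, struct spti *s)
{
    s->next_free = p->free_list;  /* LIFO reuse keeps it simple */
    p->free_list = s;
}
```

When the pool runs dry, `spti_alloc` returns NULL and the caller must reclaim shadows, which matches the truncated tail of the excerpt ("when the pool runs...").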
2005 Jun 30
0
[PATCH][10/10] Use copy_from_user when accessing guest_pt
...ff-by: Arun Sharma <arun.sharma@intel.com> diff -r 2d289d7ab961 -r d0eccea63a24 xen/arch/x86/shadow.c --- a/xen/arch/x86/shadow.c Thu Jun 30 05:26:09 2005 +++ b/xen/arch/x86/shadow.c Thu Jun 30 05:26:24 2005 @@ -1906,7 +1906,7 @@ unsigned long gpfn, unsigned index) { unsigned long smfn = __shadow_status(d, gpfn, PGT_snapshot); - l1_pgentry_t *snapshot; // could be L1s or L2s or ... + l1_pgentry_t *snapshot, gpte; // could be L1s or L2s or ... int entries_match; perfc_incrc(snapshot_entry_matches_calls); @@ -1916,10 +1916,14 @@ snapshot = map_domain_page(s...
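The patch above replaces direct dereferences of guest page tables with `copy_from_user`, which fails gracefully if the guest mapping is bad. A sketch of that pattern, with a userspace stub standing in for the kernel primitive (the stub and `read_guest_pte` are illustrative, not the patch's actual code):

```c
#include <assert.h>
#include <string.h>

/* Userspace stand-in for copy_from_user(): returns the number of
 * bytes NOT copied (0 on success). The real primitive handles faults
 * on bad guest addresses instead of crashing the hypervisor. */
static unsigned long copy_from_user_stub(void *to, const void *from,
                                         unsigned long n)
{
    memcpy(to, from, n);   /* the stub never faults */
    return 0;
}

typedef struct { unsigned long l1; } l1_pgentry_t;

/* The pattern the patch introduces: never dereference a guest page
 * table pointer directly; copy the entry out and check for a fault. */
static int read_guest_pte(const l1_pgentry_t *guest_pt, unsigned index,
                          l1_pgentry_t *gpte)
{
    if (copy_from_user_stub(gpte, &guest_pt[index], sizeof(*gpte)))
        return -1;   /* guest mapping gone; caller must bail out */
    return 0;
}
```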
2005 Mar 14
4
[patch/unstable] page table cleanups
...a >> PAGE_SHIFT], + &spte, sizeof(spte))) { return; } } @@ -715,17 +719,18 @@ int shadow_fault(unsigned long va, struc void shadow_l1_normal_pt_update( - unsigned long pa, unsigned long gpte, + unsigned long pa, l1_pgentry_t gpte, unsigned long *prev_smfn_ptr, l1_pgentry_t **prev_spl1e_ptr) { - unsigned long smfn, spte, prev_smfn = *prev_smfn_ptr; + l1_pgentry_t spte; + unsigned long smfn, prev_smfn = *prev_smfn_ptr; l1_pgentry_t *spl1e, *prev_spl1e = *prev_spl1e_ptr; /* N.B. To get here, we know the l1 page *must*...
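The cleanup above converts raw `unsigned long` PTEs to the typed `l1_pgentry_t`. A minimal sketch of why that helps, assuming illustrative accessor names in the Xen style (the struct wrapper prevents mixing PTEs with plain integers at compile time):

```c
#include <assert.h>

/* Wrapping the raw word in a struct, as Xen's l1_pgentry_t does,
 * makes accidental arithmetic or implicit conversion a type error. */
typedef struct { unsigned long l1; } l1_pgentry_t;

#define _PAGE_PRESENT 0x001UL
#define _PAGE_RW      0x002UL

static inline l1_pgentry_t l1e_from_raw(unsigned long raw)
{
    l1_pgentry_t e = { raw };
    return e;
}

static inline unsigned long l1e_get_flags(l1_pgentry_t e)
{
    return e.l1 & 0xFFFUL;   /* low 12 bits hold flags for 4 KB pages */
}
```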
2006 Jun 30
5
[PATCH - proposed] XI Shadow Page Table Mechanism]
Hi, Robert, I found another confusing code snippet: in void xi_invl_mfn(struct domain *d, unsigned long mfn), the check reads if (ext && pfn < ext->large_page_aligned_size). According to the code, it should be if (ext && (pfn >> SPT_ENTRIES_ORDER) < ext->large_page_aligned_size). If I made any mistake, please point it out.
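The fix proposed in this message hinges on units: `pfn` counts small frames while `large_page_aligned_size` counts large pages, so `pfn` must be scaled down by `SPT_ENTRIES_ORDER` before the comparison. A sketch under assumed values (SPT_ENTRIES_ORDER = 9, i.e. 512-entry tables and 2 MB large pages over 4 KB frames; the helper name is hypothetical):

```c
#include <assert.h>
#include <stdbool.h>

/* Assumed value: 512 entries per table, so one large page spans
 * 1 << 9 small frames. */
#define SPT_ENTRIES_ORDER 9

/* large_page_aligned_size is in large-page units, pfn in small-frame
 * units; shift pfn down so both sides of the comparison match. */
static bool pfn_in_large_page_region(unsigned long pfn,
                                     unsigned long large_page_aligned_size)
{
    return (pfn >> SPT_ENTRIES_ORDER) < large_page_aligned_size;
}
```

Without the shift, any `pfn` past the first `large_page_aligned_size` small frames would wrongly fail the check, which is the bug the email points out.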