Displaying 20 results from an estimated 59 matches for "spte".
2005 Mar 14
4
[patch/unstable] page table cleanups
...g_table[
(va>>L1_PAGETABLE_SHIFT) & ~(L1_PAGETABLE_ENTRIES-1)]);
for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ )
@@ -584,7 +586,8 @@ static void shadow_map_l1_into_current_l
void shadow_invlpg(struct exec_domain *ed, unsigned long va)
{
- unsigned long gpte, spte;
+ l1_pgentry_t gpte, spte;
+ l1_pgentry_t zero = mk_l1_pgentry(0);
ASSERT(shadow_mode_enabled(ed->domain));
@@ -592,21 +595,22 @@ void shadow_invlpg(struct exec_domain *e
* XXX KAF: Why is this set-to-zero required?
* Why, on failure, must we bin all our shad...
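A note on the hunk above: it swaps raw unsigned longs for the typed l1_pgentry_t. A minimal sketch of what the typed entry buys (mk_l1_pgentry appears in the hunk; the wrapper definition here is illustrative):

typedef struct { unsigned long l1; } l1_pgentry_t;   /* opaque wrapper */
#define mk_l1_pgentry(v) ((l1_pgentry_t){ (v) })

l1_pgentry_t spte = mk_l1_pgentry(0);   /* construction must be explicit */
/* unsigned long x = spte;   <- no longer compiles: guest and shadow
 *                              values can't be mixed by accident */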
2008 Jan 17
1
[PATCH 0/7] More lguest massage.
This series takes one more step towards the cpu-ification of lguest.
Following Rusty's last suggestion, I get rid of the whole bunch of
"struct lguest *lg = cpu->lg" statements scattered around by using
lg_cpu as our base structure wherever it matters (this saves us
11 lines).
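The shape of that refactor, as a sketch (the struct members and the helper are illustrative; only lg_cpu and cpu->lg come from the text):

struct lguest;                        /* the whole guest */

struct lg_cpu {                       /* one virtual cpu of it */
	unsigned int id;
	struct lguest *lg;            /* back-pointer to the owning guest */
};

/* Helpers now take the lg_cpu directly; the per-function
 * "struct lguest *lg = cpu->lg" locals disappear, and the owner is
 * fetched only at the few places that really need it. */
static void guest_work(struct lg_cpu *cpu)
{
	/* ... operate on cpu->...; use cpu->lg only where required ... */
}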
2020 Jul 21
0
[PATCH v9 34/84] KVM: x86: page_track: add support for preread, prewrite and preexec
..._arch_write_log_dirty(struct kvm_vcpu *vcpu, gpa_t l2_gpa);
int kvm_mmu_post_init_vm(struct kvm *kvm);
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 038a0e028e77..ede8ef6d1e34 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1579,6 +1579,31 @@ static bool spte_write_protect(u64 *sptep, bool pt_protect)
return mmu_spte_update(sptep, spte);
}
+static bool spte_read_protect(u64 *sptep)
+{
+ u64 spte = *sptep;
+ bool exec_only_supported = (shadow_present_mask == 0ull);
+
+ rmap_printk("rmap_read_protect: spte %p %llx\n", sptep, *sptep);
+
+ WA...
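The excerpt cuts off inside the function; judging from the visible prologue (exec_only_supported, and mmu_spte_update() in the sibling spte_write_protect above), the body plausibly continues along these lines. This is a reconstruction, not the patch text:

	/* Read-protection relies on EPT execute-only support, i.e. no
	 * present bit that implies readability (shadow_present_mask == 0). */
	WARN_ON_ONCE(!exec_only_supported);

	/* Drop the read/write bits, keep execute, and push the change
	 * through the common SPTE update path. */
	spte &= ~(PT_WRITABLE_MASK | PT_PRESENT_MASK);

	return mmu_spte_update(sptep, spte);
}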
2009 Sep 21
1
[PATCH 2/5] lguest: use set_pte/set_pmd uniformly for real page table entries
...ge) | pmd_flags(gpmd)));
+ set_pmd(spmd, __pmd(__pa(ptepage) | pmd_flags(gpmd)));
}
/*
@@ -447,7 +447,7 @@ bool demand_page(struct lg_cpu *cpu, uns
* we will come back here when a write does actually occur, so
* we can update the Guest's _PAGE_DIRTY flag.
*/
- native_set_pte(spte, gpte_to_spte(cpu, pte_wrprotect(gpte), 0));
+ set_pte(spte, gpte_to_spte(cpu, pte_wrprotect(gpte), 0));
/*
* Finally, we write the Guest PTE entry back: we've set the
@@ -528,7 +528,7 @@ static void release_pmd(pmd_t *spmd)
/* Now we can free the page of PTEs */
free_page((long)p...
2016 Mar 10
0
EPT Misconfiguration
...or some reason they go from 'state=running' to
'state=paused', and they can't be restarted. They are unusable.
When this happens I get the following in dmesg output:
[ 46.595265] EPT: Misconfiguration.
[ 46.595271] EPT: GPA: 0xfee000b0
[ 46.595275] ept_misconfig_inspect_spte: spte 0x7e4968107 level 4
[ 46.595279] ept_misconfig_inspect_spte: spte 0x7e4961107 level 3
[ 46.595281] ept_misconfig_inspect_spte: spte 0x7e4960107 level 2
[ 46.595284] ept_misconfig_inspect_spte: spte 0x7e592ff77 level 1
I have seen discussions about this topic, but for older code version...
2010 Jul 14
1
Entropy of a Markov chain
Does anyone have any "R" code for computing the entropy of a simple
first or second order Markov chain, given a transition matrix something
like the following (or the symbol vector from which it is computed)?
          AGRe      ARIe      CSRe      DIRe      DSCe       eos      HRMe      SPTe      TOBe
AGRe 0.0000000 0.0000000 0.0000000 0.0000000 1.0000000 0.0000000 0.0000000 0.0000000 0.0000000
ARIe 0.0000000 0.0000000 0.0000000 0.0000000 0.0000000 1.0000000 0.0000000 0.0000000 0.0000000
CSRe 0.0000000 0.0000000 0.0000000 0.0000000 1.0000000 0.0000000 0.0000000 0.0000000 0.000...
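The quantity being asked for is the entropy rate H = -sum_i pi[i] * sum_j P[i][j] * log2(P[i][j]), with pi the stationary distribution of the transition matrix P. No R code appears in the excerpt; as a sketch of the computation itself (in C, with a made-up 3x3 ergodic chain standing in for the 9x9 matrix above):

#include <math.h>
#include <stdio.h>

#define N 3                    /* demo size; the poster's matrix is 9x9 */

/* Entropy rate of an ergodic first-order Markov chain, with the
 * stationary distribution pi found by power iteration (pi <- pi * P). */
static double entropy_rate(const double P[N][N])
{
	double pi[N], next[N];
	int i, j, it;

	for (i = 0; i < N; i++)
		pi[i] = 1.0 / N;                 /* uniform start */

	for (it = 0; it < 10000; it++) {
		for (j = 0; j < N; j++) {
			next[j] = 0.0;
			for (i = 0; i < N; i++)
				next[j] += pi[i] * P[i][j];
		}
		for (i = 0; i < N; i++)
			pi[i] = next[i];
	}

	double h = 0.0;
	for (i = 0; i < N; i++)
		for (j = 0; j < N; j++)
			if (P[i][j] > 0.0)       /* 0 * log 0 := 0 */
				h -= pi[i] * P[i][j] * log2(P[i][j]);
	return h;
}

int main(void)
{
	const double P[N][N] = {         /* made-up ergodic demo chain */
		{ 0.8, 0.2, 0.0 },
		{ 0.1, 0.8, 0.1 },
		{ 0.0, 0.2, 0.8 },
	};
	printf("entropy rate: %f bits/symbol\n", entropy_rate(P));
	return 0;
}

(Build with cc entropy.c -lm; the second-order case is the same formula applied to the chain over symbol pairs.)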
2009 Apr 22
7
Consult some concepts about shadow paging mechanism
Dear All:
I am pretty new to xen-devel, so please correct me in the following.
Assume we have the following terms:
GPT: guest page table
SPT: shadow page table
(Question a) When the guest OS is running, is it always using the SPT
for address translation? If so, how does the guest OS refer to and
modify its own GPT content? It seems that there is a page table entry
in the SPT for the GPT page.
(Question
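On (Question a), the usual design, sketched with hypothetical helper names: the hardware walks only the SPT; the GPT is ordinary guest memory that the hypervisor reads on faults and mirrors into the shadow, entering the GPT's own pages write-protected so that guest updates trap and can be resynchronized:

bool shadow_page_fault(struct vcpu *v, unsigned long va)
{
	gpte_t gpte = read_guest_pte(v, va);    /* walk the GPT in guest RAM */

	if (!gpte_present(gpte))
		return inject_guest_fault(v, va); /* a genuine guest #PF */

	/* Mirror the guest mapping into the shadow.  The GPT page itself
	 * is entered write-protected, so guest PTE writes fault back in
	 * here and the SPT can be brought back in sync. */
	set_shadow_pte(v, va, gpte_to_spte(v, gpte));
	return true;
}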
2009 Jun 05
1
[PATCH] lguest: PAE support
..._pfn(spgd) << PAGE_SHIFT);
+
+ return &page[index];
+}
+#endif
+
/* This routine then takes the page directory entry returned above, which
* contains the address of the page table entry (PTE) page. It then returns a
* pointer to the PTE entry for the given address. */
-static pte_t *spte_addr(pgd_t spgd, unsigned long vaddr)
+static pte_t *spte_addr(struct lg_cpu *cpu, pgd_t spgd, unsigned long vaddr)
{
+#ifdef CONFIG_X86_PAE
+ pmd_t *pmd = spmd_addr(cpu, spgd, vaddr);
+ pte_t *page = __va(pmd_pfn(*pmd) << PAGE_SHIFT);
+
+ /* You should never call this if the PMD entry wasn'...
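The excerpt truncates mid-comment; judging from the non-PAE arm visible at the top of the excerpt, the function presumably finishes by asserting presence and indexing into the PTE page, roughly as follows (a reconstruction, not the patch text):

	/* You should never call this if the PMD entry wasn't valid */
	BUG_ON(!(pmd_flags(*pmd) & _PAGE_PRESENT));
#else
	pte_t *page = __va(pgd_pfn(spgd) << PAGE_SHIFT);
#endif

	return &page[pte_index(vaddr)];
}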
2009 Apr 16
1
NULL pointer dereference at __switch_to() ( __unlazy_fpu ) with lguest PAE patch
...gd_pfn(spgd) << PAGE_SHIFT);
+ return &page[index];
+}
+#endif
+
/* This routine then takes the page directory entry returned above, which
* contains the address of the page table entry (PTE) page. It then returns a
* pointer to the PTE entry for the given address. */
-static pte_t *spte_addr(pgd_t spgd, unsigned long vaddr)
+static pte_t *spte_addr(struct lg_cpu *cpu, pgd_t spgd, unsigned long vaddr)
{
+#ifdef CONFIG_X86_PAE
+ pmd_t *pmd = spmd_addr(cpu, spgd, vaddr);
+ pte_t *page = __va(pmd_pfn(*pmd) << PAGE_SHIFT);
+
+ /* You should never call this if the PMD entry wasn'...
2019 Aug 13
1
[RFC PATCH v6 70/92] kvm: x86: filter out access rights only when tracked by the introspection tool
.../kvm/mmu.c | 3 +++
> 1 file changed, 3 insertions(+)
>
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index 65b6acba82da..fd64cf1115da 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -2660,6 +2660,9 @@ static void clear_sp_write_flooding_count(u64 *spte)
> static unsigned int kvm_mmu_page_track_acc(struct kvm_vcpu *vcpu, gfn_t gfn,
> unsigned int acc)
> {
> + if (!kvmi_tracked_gfn(vcpu, gfn))
> + return acc;
> +
> if (kvm_page_track_is_active(vcpu, gfn, KVM_PAGE_TRACK_PREREAD))
> acc &= ~ACC_USER_MASK;...
2007 May 09
1
[patch 3/9] lguest: the host code
...and the guest and
+ * shadow pagetables don't, we can't use the normal pte_t/pgd_t. */
+typedef union {
+ struct { unsigned flags:12, pfn:20; };
+ struct { unsigned long val; } raw;
+} spgd_t;
+typedef union {
+ struct { unsigned flags:12, pfn:20; };
+ struct { unsigned long val; } raw;
+} spte_t;
+typedef union {
+ struct { unsigned flags:12, pfn:20; };
+ struct { unsigned long val; } raw;
+} gpgd_t;
+typedef union {
+ struct { unsigned flags:12, pfn:20; };
+ struct { unsigned long val; } raw;
+} gpte_t;
+#define mkgpte(_val) ((gpte_t){.raw.val = _val})
+#define mkgpgd(_val) ((gpgd_t){.r...
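How these unions decompose an entry, as a quick sketch (mkgpte is from the excerpt; the value is made up):

gpte_t gpte = mkgpte(0x12345067UL);

unsigned long pfn   = gpte.pfn;    /* top 20 bits: frame 0x12345 */
unsigned long flags = gpte.flags;  /* low 12 bits: 0x067 = P|RW|US|A|D */

The bitfield layout matches a non-PAE x86 PTE, so raw.val can be exchanged with the hardware format directly.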
2019 Jul 31
2
[PATCH V2 7/9] vhost: do not use RCU to synchronize MMU notifier with worker
...is gets the whole thing backwards, the common pattern is to
> > protect the 'shadow pte' data with a seqlock (usually open coded),
> > such that the mmu notifier side has the write side of that lock and
> > the read side is consumed by the thread accessing or updating the SPTE.
>
>
> Yes, I've considered something like that. But the problem is, the mmu
> notifier (writer) needs to wait for the vhost worker to finish the read
> before it can do things like setting dirty pages and unmapping the page.
> It looks to me seqlock doesn't provide things like...
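The pattern being debated, schematically (write_seqcount_begin() and friends are the standard linux/seqlock.h primitives; where the seqcount would hang off vhost's structures is an assumption):

#include <linux/seqlock.h>

static seqcount_t map_seq = SEQCNT_ZERO(map_seq);

/* MMU-notifier (writer) side: bump the sequence around the update. */
static void invalidate_map(void)
{
	write_seqcount_begin(&map_seq);
	/* ... tear down / update the cached 'shadow pte' data ... */
	write_seqcount_end(&map_seq);
}

/* Worker (reader) side: redo the access if it raced with a writer. */
static void access_map(void)
{
	unsigned int seq;

	do {
		seq = read_seqcount_begin(&map_seq);
		/* ... read through the cached mapping ... */
	} while (read_seqcount_retry(&map_seq, seq));
}

This illustrates the objection in the reply: the reader can detect a race and retry, but the primitive gives the writer no way to wait for in-flight readers to drain before the page goes away.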
2019 Mar 14
2
[RFC PATCH V2 0/5] vhost: accelerate metadata access through vmap()
On 2019/3/14 6:42, Michael S. Tsirkin wrote:
>>>>> Which means after we fix vhost to add the flush_dcache_page after
>>>>> kunmap, Parisc will get a double hit (but it also means Parisc
>>>>> was the only one of those archs that needed explicit cache flushes,
>>>>> where vhost worked correctly so far... so it kind of proves your
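The fix under discussion, schematically (kmap/kunmap/flush_dcache_page are the standard kernel calls; the helper name and surrounding copy are illustrative):

#include <linux/highmem.h>
#include <linux/string.h>

static void copy_to_mapped_page(struct page *page, size_t offset,
				const void *src, size_t len)
{
	void *p = kmap(page);

	memcpy(p + offset, src, len);
	kunmap(page);
	/* After writing through a temporary kernel mapping, flush so
	 * virtually-indexed-cache machines (e.g. parisc) see the update;
	 * on x86 this compiles to a no-op. */
	flush_dcache_page(page);
}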