search for: pte_flags

Results from an estimated 24 matches for "pte_flags".

2008 May 23
0
[PATCH] x86/paravirt: add pte_flags to just get pte flags
Add pte_flags() to extract the flags from a pte. This is a special case of pte_val() which is only guaranteed to return the pte's flags correctly; the page number may be corrupted or missing. The intent is to allow paravirt implementations to return pte flags without having to do any translation of the pag...
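For context, on native hardware pte_flags() roughly amounts to masking the PFN bits out of the raw pte value; a minimal illustrative sketch follows (the typedefs and mask below are assumptions for a 64-bit 4-level layout, not the patch itself):

    /* Illustrative sketch, not the actual patch: on bare hardware the flags
     * are just the non-PFN bits of the raw pte, so they can be masked out;
     * a paravirt guest can override this with a hook that never has to
     * translate the page number at all. */
    typedef unsigned long long pteval_t;
    typedef struct { pteval_t pte; } pte_t;

    #define PTE_PFN_MASK    ((pteval_t)0x000ffffffffff000ULL)  /* assumed layout */
    #define PTE_FLAGS_MASK  (~PTE_PFN_MASK)

    static inline pteval_t pte_flags(pte_t pte)
    {
            return pte.pte & PTE_FLAGS_MASK;   /* flags only; PFN bits dropped */
    }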
2010 Aug 06
7
[GIT PULL] devel/pat + devel/kms.fixes-0.5
Hey Jeremy, Please pull from devel/pat (based off your xen/dom0/core tree) which has one patch: Konrad Rzeszutek Wilk (1): xen/pat: make pte_flags(x) a pvops function. which is necessary for the drivers/gpu/drm/radeon driver to work properly with AGP based cards (which look to be the only ones that try to set WC on pages). Also please pull from devel/kms.fixes-05 (based off your xen/dom0/agp) which has the following patches: Daniel De Gra...
2012 Feb 14
3
ftrace_enabled set to 1 on bootup, slow downs with CONFIG_FUNCTION_TRACER in virt environments?
...ump_bytes I do see instructions such as e8 6a 90 60 e1 get replaced with 66 66 66 90 so I see the instructions getting patched over. To get a better feel for this I tried this on baremetal, and (this is going to sound a bit round-about, but please bear with me), I was working on making the pte_flags be paravirt (so it is a function instead of being a macro) and noticed that on an AMD A8-3850, with CONFIG_PARAVIRT and CONFIG_FUNCTION_TRACER and running kernelbench it would run slower than without CONFIG_FUNCTION_TRACER. I am not really sure what the problem is, but based on those experime...
2008 Jan 17
1
[PATCH 0/7] More lguest massage.
This series takes one more step towards cpu-ification of lguest. As per Rusty's last suggestion, I get rid of the whole bunch of "struct lguest *lg = cpu->lg" statements lying around by using lg_cpu as our base structure wherever it matters. (This saves us 11 lines.)
2009 Jun 05
1
[PATCH] lguest: PAE support
...if (!(pgd_flags(*spgd) & _PAGE_PRESENT)) return false; +#ifdef CONFIG_X86_PAE + spmd = spmd_addr(cpu, *spgd, vaddr); + if (!(pmd_flags(*spmd) & _PAGE_PRESENT)) + return false; +#endif + /* Check the flags on the pte entry itself: it must be present and * writable. */ - flags = pte_flags(*(spte_addr(*spgd, vaddr))); + flags = pte_flags(*(spte_addr(cpu, *spgd, vaddr))); return (flags & (_PAGE_PRESENT|_PAGE_RW)) == (_PAGE_PRESENT|_PAGE_RW); } @@ -322,6 +440,41 @@ void pin_page(struct lg_cpu *cpu, unsigned long vaddr) kill_guest(cpu, "bad stack page %#lx", vaddr)...
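The heart of that hunk is the flag test on the shadow pte; the same check in isolation (using the standard x86 _PAGE_PRESENT/_PAGE_RW bit values) is sketched below:

    /* Standalone sketch of the check above: the pte must be both present
     * and writable before the guest page is considered usable.  The bit
     * values are the usual x86 ones (present = bit 0, RW = bit 1). */
    #define _PAGE_PRESENT  0x001UL
    #define _PAGE_RW       0x002UL

    static inline int pte_present_and_writable(unsigned long flags)
    {
            return (flags & (_PAGE_PRESENT | _PAGE_RW))
                    == (_PAGE_PRESENT | _PAGE_RW);
    }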
2009 Apr 16
1
NULL pointer dereference at __switch_to() ( __unlazy_fpu ) with lguest PAE patch
...if (!(pgd_flags(*spgd) & _PAGE_PRESENT)) return false; +#ifdef CONFIG_X86_PAE + spmd = spmd_addr(cpu, *spgd, vaddr); + if (!(pmd_flags(*spmd) & _PAGE_PRESENT)) + return false; +#endif + /* Check the flags on the pte entry itself: it must be present and * writable. */ - flags = pte_flags(*(spte_addr(*spgd, vaddr))); + flags = pte_flags(*(spte_addr(cpu, *spgd, vaddr))); return (flags & (_PAGE_PRESENT|_PAGE_RW)) == (_PAGE_PRESENT|_PAGE_RW); } @@ -322,8 +439,45 @@ void pin_page(struct lg_cpu *cpu, unsigned long vaddr) kill_guest(cpu, "bad stack page %#lx", vaddr)...
2009 Sep 21
1
[PATCH 2/5] lguest: use set_pte/set_pmd uniformly for real page table entries
...md_t *spmd) /* Now we can free the page of PTEs */ free_page((long)ptepage); /* And zero out the PMD entry so we never release it twice. */ - native_set_pmd(spmd, __pmd(0)); + set_pmd(spmd, __pmd(0)); } } @@ -833,15 +833,15 @@ static void do_set_pte(struct lg_cpu *cp */ if (pte_flags(gpte) & (_PAGE_DIRTY | _PAGE_ACCESSED)) { check_gpte(cpu, gpte); - native_set_pte(spte, - gpte_to_spte(cpu, gpte, + set_pte(spte, + gpte_to_spte(cpu, gpte, pte_flags(gpte) & _PAGE_DIRTY)); } else { /* * Otherwise kill it and we can demand_page()...
2008 May 31
9
[PATCH 0 of 4] mm+paravirt+xen: add pte read-modify-write abstraction (take 2)
Hi all, [ Change since last post: change name to ptep_modify_prot_, on the grounds that it isn't really a general pte-modification interface. ] This little series adds a new transaction-like abstraction for doing RMW updates to a pte, hooks it into paravirt_ops, and then makes use of it in Xen. The basic problem is that mprotect is very slow under Xen (up to 50x slower than native),
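In rough outline, the start/commit pair replaces the ptep_get_and_clear()/pte_modify()/set_pte_at() sequence quoted in the original posting further down; a hedged sketch of the intended call pattern (signatures approximate -- later kernels pass a vma rather than the mm):

    /* Sketch only, not the merged interface verbatim.  ptep_modify_prot_start()
     * reads the pte and opens the "transaction"; ptep_modify_prot_commit()
     * writes the result back, letting a hypervisor such as Xen trap or batch
     * the update instead of eating a clear + flush + set round-trip. */
    static void change_prot_one_pte(struct mm_struct *mm, unsigned long addr,
                                    pte_t *ptep, pgprot_t newprot)
    {
            pte_t ptent;

            ptent = ptep_modify_prot_start(mm, addr, ptep);  /* read, no transient clear */
            ptent = pte_modify(ptent, newprot);              /* change protection bits only */
            ptep_modify_prot_commit(mm, addr, ptep, ptent);  /* write back / commit */
    }

Under Xen this helps avoid the window where the pte is transiently non-present and the extra hypercall traffic that makes mprotect up to 50x slower than native.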
2009 Feb 06
2
Xen pv_ops domU :: BUG() in remove_from_page_cache()
Hi, 2.6.29-rc3 x86_64 guest on x86_64 RHEL5.3 host: https://bugzilla.redhat.com/484295 kernel BUG at mm/filemap.c:123! invalid opcode: 0000 [#1] SMP DEBUG_PAGEALLOC last sysfs file: /sys/devices/vbd-51712/block/xvda/xvda2/dev CPU 0 Modules linked in: ipv6 xts lrw gf128mul sha256_generic cbc dm_crypt
2012 Feb 20
2
[PATCH] Disable PAT support when running under Xen (v1).
The issue at hand is that any prolonged usage of the radeon or nouveau driver ends up corrupting the file system, or we end up with mysterious crashes of applications. There are three ways of fixing it: a). A proper fix: https://lkml.org/lkml/2012/2/10/228 . I posted the same fix for 3.2 way back in December but it got nowhere. The recent posting has also been met with silence. Not being happy
2012 Jun 05
7
Re: XEN MTRR
On Sun, Jun 03, 2012 at 05:31:32PM +1000, aorchis@gmail.com wrote: > Hi Jeremy and Konrad, CC-ing xen-devel. > > Basically the driver NVIDIA provided is a binary blob and recent > versions do not work with the PAT layout of XEN so it falls back to > MTRR to provide write combining (please correct me if I'm wrong). OK? Which is still OK. Are you using a v3.4 kernel
2008 May 23
6
[PATCH 0 of 4] mm+paravirt+xen: add pte read-modify-write abstraction
Hi all, This little series adds a new transaction-like abstraction for doing RMW updates to a pte, hooks it into paravirt_ops, and then makes use of it in Xen. The basic problem is that mprotect is very slow under Xen (up to 50x slower than native), primarily because of the ptent = ptep_get_and_clear(mm, addr, pte); ptent = pte_modify(ptent, newprot); /* ... */ set_pte_at(mm, addr, pte,