search for: _page_pse

Displaying 20 results from an estimated 64 matches for "_page_pse".

2008 Nov 04
7
[PATCH 1/1] Xen PV support for hugepages
....000000000 -0600 @@ -115,7 +115,7 @@ extern unsigned int PAGE_HYPERVISOR_NOCA #define BASE_DISALLOW_MASK (0xFFFFF198U & ~_PAGE_NX) #define L1_DISALLOW_MASK (BASE_DISALLOW_MASK | _PAGE_GNTTAB) -#define L2_DISALLOW_MASK (BASE_DISALLOW_MASK) +#define L2_DISALLOW_MASK (BASE_DISALLOW_MASK & ~_PAGE_PSE) #define L3_DISALLOW_MASK 0xFFFFF1FEU /* must-be-zero */ #endif /* __X86_32_PAGE_H__ */ --- xen-unstable//./xen/include/asm-x86/x86_64/page.h 2008-10-02 14:23:17.000000000 -0500 +++ xen-hpage/./xen/include/asm-x86/x86_64/page.h 2008-11-04 08:24:35.000000000 -0600 @@ -115,7 +115,7 @@ typedef l4_...
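
The hunk above removes _PAGE_PSE from Xen's L2 disallow mask, so a PV guest can now validate L2 entries that map 2MB superpages directly. A minimal sketch of the kind of flag check the mask feeds, with simplified names and values rather than the actual Xen validation path:

    #include <stdint.h>
    #include <stdbool.h>

    #define _PAGE_PRESENT 0x001ULL
    #define _PAGE_PSE     0x080ULL

    /* An L2 entry passes validation when it sets none of the disallowed
     * flag bits; once _PAGE_PSE is dropped from the mask, a present entry
     * with PSE set (a 2MB superpage) is accepted instead of rejected. */
    static bool l2e_flags_allowed(uint64_t l2e, uint64_t l2_disallow_mask)
    {
        return (l2e & l2_disallow_mask) == 0;
    }
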
2007 Oct 31
5
[PATCH 0/7] (Re-)introducing pvops for x86_64 - Real pvops work part
Hey folks, This is the part-of-pvops-implementation-that-is-not-exactly-a-merge. Neat, huh? This is the majority of the work. The first patch in the series does not really belong here. It was already sent to lkml separately before, but I'm including it again, for a very simple reason: Try to test the paravirt patches without it, and you'll fail miserably ;-) (and it was not yet
2008 May 23
0
[PATCH] x86/paravirt: add pte_flags to just get pte flags
...ite(pte_t pte) { - return pte_val(pte) & _PAGE_RW; + return pte_flags(pte) & _PAGE_RW; } static inline int pte_file(pte_t pte) { - return pte_val(pte) & _PAGE_FILE; + return pte_flags(pte) & _PAGE_FILE; } static inline int pte_huge(pte_t pte) { - return pte_val(pte) & _PAGE_PSE; + return pte_flags(pte) & _PAGE_PSE; } static inline int pte_global(pte_t pte) { - return pte_val(pte) & _PAGE_GLOBAL; + return pte_flags(pte) & _PAGE_GLOBAL; } static inline int pte_exec(pte_t pte) { - return !(pte_val(pte) & _PAGE_NX); + return !(pte_flags(pte) & _P...
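
The patch replaces pte_val() with pte_flags() in the flag-test helpers; only the flag bits are needed, so a paravirtualized guest can skip the mfn-to-pfn translation that pte_val() may require. A simplified, self-contained sketch of the idea (types and masks abbreviated, not the kernel's exact definitions):

    #include <stdint.h>

    typedef struct { uint64_t pte; } pte_t;

    #define _PAGE_PSE      0x080ULL
    #define PTE_PFN_MASK   0x000ffffffffff000ULL   /* physical frame bits */
    #define PTE_FLAGS_MASK (~PTE_PFN_MASK)

    /* Native pte_flags() is just a mask: the frame number is ignored, so
     * no expensive mfn->pfn conversion is needed under paravirt. */
    static inline uint64_t pte_flags(pte_t pte)
    {
        return pte.pte & PTE_FLAGS_MASK;
    }

    static inline int pte_huge(pte_t pte)
    {
        return (pte_flags(pte) & _PAGE_PSE) != 0;
    }
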
2012 Feb 20
2
[PATCH] Disable PAT support when running under Xen (v1).
...) return pteval; @@ -463,7 +463,7 @@ void xen_set_pat(u64 pat) static pte_t xen_make_pte(pteval_t pte) { phys_addr_t addr = (pte & PTE_PFN_MASK); - +#if 0 /* If Linux is trying to set a WC pte, then map to the Xen WC. * If _PAGE_PAT is set, then it probably means it is really * _PAGE_PSE, so avoid fiddling with the PAT mapping and hope @@ -476,7 +476,7 @@ static pte_t xen_make_pte(pteval_t pte) if ((pte & (_PAGE_PCD | _PAGE_PWT)) == _PAGE_PWT) pte = (pte & ~(_PAGE_PCD | _PAGE_PWT)) | _PAGE_PAT; } - +#endif /* * Unprivileged domains are allowed to do IOMAPpings...
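
The block being disabled rewrites the caching bits when Linux asks for a write-combining mapping, and the quoted comment explains why it must be cautious: in a huge-page entry the same bit position means _PAGE_PSE rather than _PAGE_PAT. A small illustration of that overlap (bit values only, not the Xen logic itself):

    /* Bit 7 (0x80) means different things at different paging levels: */
    #define _PAGE_PWT 0x008UL   /* write-through                          */
    #define _PAGE_PCD 0x010UL   /* cache-disable                          */
    #define _PAGE_PAT 0x080UL   /* in a 4K PTE: high bit of the PAT index */
    #define _PAGE_PSE 0x080UL   /* in a PMD/PUD entry: 2M/1G superpage    */

    /* Remapping PCD/PWT/PAT is therefore only safe on leaf 4K entries;
     * on a superpage entry the 0x80 bit marks the page size, and the
     * PAT bit lives elsewhere (bit 12). */
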
2007 Feb 14
2
[PATCH 8/8] 2.6.17: scan DMI early
...{ - return ioremap(addr, size); + unsigned long map = round_down(addr, LARGE_PAGE_SIZE); + + /* actually usually some more */ + if (size >= LARGE_PAGE_SIZE) { + printk("SMBIOS area too long %lu\n", size); + return NULL; + } + set_pmd(temp_mappings[0].pmd, __pmd(map | _KERNPG_TABLE | _PAGE_PSE)); + map += LARGE_PAGE_SIZE; + set_pmd(temp_mappings[1].pmd, __pmd(map | _KERNPG_TABLE | _PAGE_PSE)); + __flush_tlb(); + return temp_mappings[0].address + (addr & (LARGE_PAGE_SIZE-1)); } /* To avoid virtual aliases later */ __init void early_iounmap(void *addr, unsigned long size) { - io...
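
The patch maps the SMBIOS area early by pointing two reserved PMD slots at 2MB (_PAGE_PSE) pages and returning a pointer at the right offset inside the first one. A condensed sketch of the address arithmetic involved, assuming standard 2MB large pages (helper names here are illustrative):

    #include <stdint.h>

    #define LARGE_PAGE_SIZE (2UL << 20)              /* 2MB PSE mapping */
    #define LARGE_PAGE_MASK (~(LARGE_PAGE_SIZE - 1))

    /* The frame value to OR with _KERNPG_TABLE | _PAGE_PSE in the PMD. */
    static inline uintptr_t large_page_base(uintptr_t phys)
    {
        return phys & LARGE_PAGE_MASK;
    }

    /* The offset to add to the fixed virtual address of the PMD slot. */
    static inline uintptr_t large_page_offset(uintptr_t phys)
    {
        return phys & (LARGE_PAGE_SIZE - 1);
    }
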
2007 Nov 09
11
[PATCH 0/24] paravirt_ops for unified x86 - that's me again!
Hey folks, Here's a new spin of the pvops64 patch series. We didn't get that many comments from the last time, so it should be probably almost ready to get in. Heya! From the last version, the most notable changes are: * consolidation of system.h, merging jeremy's comments about ordering concerns * consolidation of smp functions that goes through smp_ops. They're sharing
2011 Jan 17
8
[PATCH 0 of 3] Miscellaneous populate-on-demand bugs
This patch series addresses a number of bugs in the p2m, ept, and PoD code which were found as part of our XenServer product testing. Each of these fixes actual bugs, and the 3.4-based version of the patch has been tested thoroughly. (There may be bugs in porting the patches, but most of them are simple enough as to make it unlikely.) Each patch is conceptually independent, so they can each
2011 May 06
14
[PATCH 0 of 4] Use superpages on restore/migrate
This patch series restores the use of superpages when restoring or migrating a VM, while retaining efficient batching of 4k pages when superpages are not appropriate or available. Signed-off-by: George Dunlap <george.dunlap@eu.citrix.com>
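
A rough sketch of the decision the cover letter describes: a batch of incoming PFNs is allocated as a single superpage only when it is aligned, contiguous, and fully populated, and otherwise falls back to 4k batching. The helper below is hypothetical, not taken from the patches:

    #define SUPERPAGE_NR_PFNS 512   /* 2MB / 4KB */

    /* Returns 1 if the batch starting at pfns[0] can be backed by one
     * superpage allocation: aligned, contiguous and with no holes. */
    static int can_use_superpage(const unsigned long *pfns, int count)
    {
        if (count < SUPERPAGE_NR_PFNS || (pfns[0] % SUPERPAGE_NR_PFNS) != 0)
            return 0;
        for (int i = 1; i < SUPERPAGE_NR_PFNS; i++)
            if (pfns[i] != pfns[0] + i)
                return 0;
        return 1;
    }
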
2007 Jul 02
3
Walking an HVM's shadow page tables and other memory management questions.
...the same file, has this code executed after it walks the guest page tables to obtain the walk_t gw variable. if ( gw.l2e && (guest_l2e_get_flags(*gw.l2e) & _PAGE_PRESENT) && !(guest_supports_superpages(v) && (guest_l2e_get_flags(*gw.l2e) & _PAGE_PSE)) ) (XEN) mm.c:2573:d1 grant host mapping: va:81696000 frame:0x15f140 (XEN) mm.c:2507:d1 grant va mapping: va:81696000 (XEN) multi.c:236:d1 guest_walk_tables: va: 0x81696000. (XEN) multi.c:257:d1 guest_walk_tables: get l3e from cache: 0xff1a6ed0. (XEN) multi.c:270:d1 guest_walk_tables:...
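
The quoted condition decides whether the guest walk should descend from the L2 entry to an L1 table or stop, because a present entry with _PAGE_PSE set (when the guest has superpages enabled) already maps a 2MB region. Restated as a self-contained helper with standard x86 flag values:

    #define _PAGE_PRESENT 0x001U
    #define _PAGE_PSE     0x080U

    /* Descend to the L1 table only when the L2 entry is present and is
     * not a superpage; a PSE entry is already the final translation. */
    static inline int l2e_points_to_l1(unsigned int l2e_flags, int guest_has_pse)
    {
        return (l2e_flags & _PAGE_PRESENT) &&
               !(guest_has_pse && (l2e_flags & _PAGE_PSE));
    }
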
2007 Aug 10
9
[PATCH 0/25 -v2] paravirt_ops for x86_64, second round
Here is a slightly updated version of the paravirt_ops patch. If your comments and criticism were welcome before, they are even more so now! There are some issues that are _not_ addressed in this revision, and here are the causes: * split debugreg into multiple functions, suggested by Andi: - jsfg and I agree that introducing more pvops (especially 14!) is not worthwhile. So, although we do
2007 Aug 15
13
[PATCH 0/25][V3] pvops_64 last round (hopefully)
This is hopefully the last iteration of the pvops64 patch. From the last version, we have only one change, which is include/asm-x86_64/processor.h: there was still one survivor in raw asm. Also, git screwed me up for some reason, and the 25th patch was missing the new files, paravirt.{c,h}. (although I do remember having git-add'ed it, but who knows...) Andrew, could you please push it
2009 Apr 22
7
Consult some concepts about shadow paging mechanism
Dear All: I am pretty new to xen-devel, please correct me in the following. Assume we have the following terms GPT: guest page table SPT: shadow page table (Question a) When guest OS is running, is it always using SPT for address translation? If it is the case, how does guest OS refer and modify its own GPT content? It seems that there is a page table entry in SPT for the GPT page. (Question
2020 Apr 28
0
[PATCH v3 22/75] x86/boot/compressed/64: Add set_page_en/decrypted() helpers
...signed long page_flags; + unsigned long address; + pte_t *pte; + pmd_t pmd; + int i; + + pte = (pte_t *)info->alloc_pgt_page(info->context); + if (!pte) + return NULL; + + address = __address & PMD_MASK; + /* No large page - clear PSE flag */ + page_flags = info->page_flag & ~_PAGE_PSE; + + /* Populate the PTEs */ + for (i = 0; i < PTRS_PER_PMD; i++) { + set_pte(&pte[i], __pte(address | page_flags)); + address += PAGE_SIZE; + } + + /* + * Ideally we need to clear the large PMD first and do a TLB + * flush before we write the new PMD. But the 2M range of the + * PMD mi...
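
The helper being added splits one 2MB PSE mapping into 512 individual 4K PTEs so that a single page's encryption attribute can then be changed. The core of that loop, restated in a self-contained form with standard x86-64 constants:

    #include <stdint.h>

    #define PAGE_SIZE    4096ULL
    #define PTRS_PER_PMD 512
    #define PMD_MASK     (~(PTRS_PER_PMD * PAGE_SIZE - 1))   /* 2MB-aligned */
    #define _PAGE_PSE    0x080ULL

    /* Fill a freshly allocated PTE page covering the 2MB region around
     * 'vaddr'; every PTE inherits the PMD's flags minus PSE, since 4K
     * leaf entries must not have the large-page bit set. */
    static void split_large_pmd(uint64_t *pte_page, uint64_t vaddr, uint64_t pmd_flags)
    {
        uint64_t addr  = vaddr & PMD_MASK;
        uint64_t flags = pmd_flags & ~_PAGE_PSE;

        for (int i = 0; i < PTRS_PER_PMD; i++) {
            pte_page[i] = addr | flags;
            addr += PAGE_SIZE;
        }
    }
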
2007 May 29
0
Fw: [RFC] makedumpfile: xen extraction
.../* only used by PAE translators */ +#define PTRS_PER_PMD (512) /* only used by PAE translators */ +#define PTE_SHIFT (12) /* only used by PAE translators */ +#define PTRS_PER_PTE (512) /* only used by PAE translators */ + +#define _PAGE_PRESENT 0x001 +#define _PAGE_PSE 0x080 + +#define ENTRY_MASK (~0x8000000000000fffULL) + +unsigned long long kvtop_xen_x86(struct DumpInfo *info, unsigned long kvaddr); +#define kvtop_xen(X, Y) kvtop_xen_x86(X, Y) + +int get_xen_info_x86(struct DumpInfo *info); +#define get_xen_info_arch(X) get_xen_info_x86(X) + +#endif /* __...
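
The constants above let makedumpfile walk Xen's PAE-style page tables inside the dump file. A sketch of how a walker would use _PAGE_PSE and ENTRY_MASK at the PMD level (simplified, not the actual makedumpfile code):

    #include <stdint.h>

    #define _PAGE_PRESENT 0x001ULL
    #define _PAGE_PSE     0x080ULL
    #define ENTRY_MASK    (~0x8000000000000fffULL)  /* strip NX + flag bits */
    #define PMD_SHIFT     21                        /* 2MB pages under PAE  */

    /* A present PMD entry with PSE set maps a 2MB superpage directly... */
    static int pmd_is_superpage(uint64_t pmd_entry)
    {
        return (pmd_entry & _PAGE_PRESENT) && (pmd_entry & _PAGE_PSE);
    }

    /* ...in which case the machine address is the 2MB frame plus the low
     * 21 bits of the virtual address; otherwise the PTE level is read next. */
    static uint64_t superpage_maddr(uint64_t pmd_entry, uint64_t vaddr)
    {
        uint64_t frame = pmd_entry & ENTRY_MASK & ~((1ULL << PMD_SHIFT) - 1);
        return frame | (vaddr & ((1ULL << PMD_SHIFT) - 1));
    }
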
2006 Aug 31
5
x86-64's paging_init()
While adding code to create the compatibility p2m table mappings it seemed to me that the creation of the native ones is restricted to memory below the 512G boundary - otherwise, additional L2 tables would need to be allocated (currently other memory following the one L2 page getting allocated would be blindly overwritten). While I realize that machines this big aren't likely to be
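
For reference, the 512G figure is consistent with mapping the table through a single L2 page of 2MB (PSE) entries, assuming one 8-byte table entry per 4KB machine frame; the arithmetic, written out as constants:

    #define L2_ENTRIES       512ULL
    #define SUPERPAGE_SIZE   (2ULL << 20)   /* 2MB per PSE mapping   */
    #define TABLE_ENTRY_SIZE 8ULL           /* bytes per table entry */
    #define FRAME_SIZE       (4ULL << 10)   /* each entry covers 4KB */

    /* 512 * 2MB = 1GB of table; (1GB / 8B) entries * 4KB per entry = 512GB */
    #define COVERAGE ((L2_ENTRIES * SUPERPAGE_SIZE / TABLE_ENTRY_SIZE) * FRAME_SIZE)
    /* COVERAGE == 549755813888 bytes == 512GB */
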