search for: base_gfn

Displaying 19 results from an estimated 19 matches for "base_gfn".

2018 Jul 20
4
Memory Read Only Enforcement: VMM assisted kernel rootkit mitigation for KVM V4
Here is the change log from V3 to V4:
- Fixed spelling/grammar mistakes pointed out by Randy Dunlap.
- Changed the hypercall interface so that it can process multiple pages per hypercall, also suggested by Randy Dunlap; this turns out to save a lot of vmexits and memslot flushes when protecting many pages.
[PATCH RFC V4 1/3] KVM: X86: Memory ROE documentation
[PATCH RFC V4 2/3] KVM:
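As a rough sketch of what the batched interface described above could look like from the guest side (the hypercall number KVM_HC_MEM_ROE and the (gpa, npages) argument layout are assumptions for illustration, not taken from the patches):

    /*
     * Hypothetical guest-side helper: ask the hypervisor to make a
     * contiguous range of guest pages read-only with a single hypercall,
     * i.e. one vmexit and one memslot flush instead of one per page.
     */
    #include <asm/kvm_para.h>        /* kvm_hypercall2() */

    #define KVM_HC_MEM_ROE 12        /* assumed hypercall number */

    static long roe_protect_range(unsigned long gpa, unsigned long npages)
    {
            return kvm_hypercall2(KVM_HC_MEM_ROE, gpa, npages);
    }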
2018 Jul 20
0
[PATCH RFC V4 3/3] KVM: X86: Adding skeleton for Memory ROE
...ead, pt_protect, d);
+#endif
+	return __rmap_write_protection(kvm, rmap_head, pt_protect);
+}
+
 static bool spte_clear_dirty(u64 *sptep)
 {
 	u64 spte = *sptep;
@@ -1517,7 +1548,7 @@ static void kvm_mmu_write_protect_pt_masked(struct kvm *kvm,
 	while (mask) {
 		rmap_head = __gfn_to_rmap(slot->base_gfn + gfn_offset + __ffs(mask),
 					  PT_PAGE_TABLE_LEVEL, slot);
-		__rmap_write_protect(kvm, rmap_head, false, NULL);
+		__rmap_write_protection(kvm, rmap_head, false);

 		/* clear the first set bit */
 		mask &= mask - 1;
@@ -1593,11 +1624,15 @@ bool kvm_mmu_slot_gfn_write_protect(struct kvm...
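The while (mask) loop in the hunk above is the usual lowest-set-bit walk over a dirty bitmap; a standalone sketch of the same idiom in plain C, with __builtin_ctzll() standing in for the kernel's __ffs():

    #include <stdio.h>

    /* Visit every set bit in a mask, lowest first; in the hunk above each
     * bit index is the offset added to slot->base_gfn. */
    static void walk_dirty_mask(unsigned long long mask)
    {
            while (mask) {
                    unsigned int gfn_offset = __builtin_ctzll(mask);

                    printf("protect base_gfn + %u\n", gfn_offset);
                    mask &= mask - 1;       /* clear the lowest set bit */
            }
    }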
2018 Jul 19
0
[PATCH 3/3] [RFC V3] KVM: X86: Adding skeleton for Memory ROE
...ead, pt_protect, d);
+#endif
+	return __rmap_write_protection(kvm, rmap_head, pt_protect);
+}
+
 static bool spte_clear_dirty(u64 *sptep)
 {
 	u64 spte = *sptep;
@@ -1517,7 +1548,7 @@ static void kvm_mmu_write_protect_pt_masked(struct kvm *kvm,
 	while (mask) {
 		rmap_head = __gfn_to_rmap(slot->base_gfn + gfn_offset + __ffs(mask),
 					  PT_PAGE_TABLE_LEVEL, slot);
-		__rmap_write_protect(kvm, rmap_head, false, NULL);
+		__rmap_write_protection(kvm, rmap_head, false);

 		/* clear the first set bit */
 		mask &= mask - 1;
@@ -1593,11 +1624,15 @@ bool kvm_mmu_slot_gfn_write_protect(struct kvm...
2018 Jul 19
8
Memory Read Only Enforcement: VMM assisted kernel rootkit mitigation for KVM
Hi, this is the first set of patches that works as I would expect, and the third revision I have sent to the mailing lists. It follows up on my previous discussions about kernel rootkit mitigation by placing R/O protection on critical data structures, static data, and privileged registers with static content. These patches cover the first part, where it is only possible to place these protections on memory
2011 Dec 16
4
[PATCH 0/2] vhost-net: Use kvm_memslots instead of vhost_memory to translate GPA to HVA
From: Hongyong Zang <zanghongyong at huawei.com> Vhost-net uses its own vhost_memory structure, built from userspace (QEMU) information, to translate GPA to HVA. Since the kernel's kvm structure already maintains this address relationship in its *kvm_memslots* member, these patches use the kernel's kvm_memslots directly, removing the need to initialize and maintain vhost_memory. Hongyong
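For context, the translation a memslot enables is plain offset arithmetic over base_gfn, npages and userspace_addr; a minimal sketch of the idea (the helper name gpa_to_hva is illustrative, the field names match struct kvm_memory_slot):

    /* Map a guest physical address to a host virtual address through one
     * memslot: the slot covers npages guest frames starting at base_gfn,
     * backed by host memory starting at userspace_addr. */
    static unsigned long gpa_to_hva(const struct kvm_memory_slot *slot, u64 gpa)
    {
            u64 gfn = gpa >> PAGE_SHIFT;

            if (gfn < slot->base_gfn || gfn >= slot->base_gfn + slot->npages)
                    return 0;       /* this slot does not cover gpa */

            return slot->userspace_addr +
                   ((gfn - slot->base_gfn) << PAGE_SHIFT) +
                   (gpa & ~PAGE_MASK);
    }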
2020 Jul 21
0
[PATCH v9 04/84] KVM: add kvm_get_max_gfn()
...idx;
+
+	idx = srcu_read_lock(&kvm->srcu);
+	spin_lock(&kvm->mmu_lock);
+
+	slots = kvm_memslots(kvm);
+	kvm_for_each_memslot(memslot, slots)
+		if (memslot->id < KVM_USER_MEM_SLOTS &&
+		    (memslot->flags & skip_mask) == 0)
+			max_gfn = max(max_gfn, memslot->base_gfn +
+					       memslot->npages);
+
+	spin_unlock(&kvm->mmu_lock);
+	srcu_read_unlock(&kvm->srcu, idx);
+
+	return max_gfn;
+}
+
 #ifndef CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT
 /**
  * kvm_get_dirty_log - get a snapshot of dirty pages
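A possible caller, assuming the helper is gfn_t kvm_get_max_gfn(struct kvm *kvm) and using a made-up inspect_gfn() callback, would treat the returned value as an exclusive upper bound, since it is the largest base_gfn + npages over the user memslots as computed above:

    /* Illustrative only: walk every guest frame number the VM can map. */
    static void scan_all_gfns(struct kvm *kvm)
    {
            gfn_t gfn, max_gfn = kvm_get_max_gfn(kvm);

            for (gfn = 0; gfn < max_gfn; gfn++)
                    inspect_gfn(kvm, gfn);  /* hypothetical callback */
    }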
2019 Aug 09
0
[RFC PATCH v6 27/92] kvm: introspection: use page track
...t = __kvmi_track_preexec(vcpu, gpa, gva, node);
+
+	kvmi_put(vcpu->kvm);
+
+	return ret;
+}
+
+static void kvmi_track_create_slot(struct kvm *kvm,
+				   struct kvm_memory_slot *slot,
+				   unsigned long npages,
+				   struct kvm_page_track_notifier_node *node)
+{
+	struct kvmi *ikvm;
+	gfn_t start = slot->base_gfn;
+	const gfn_t end = start + npages;
+	int idx;
+
+	ikvm = kvmi_get(kvm);
+	if (!ikvm)
+		return;
+
+	idx = srcu_read_lock(&kvm->srcu);
+	spin_lock(&kvm->mmu_lock);
+	read_lock(&ikvm->access_tree_lock);
+
+	while (start < end) {
+		struct kvmi_mem_access *m;
+
+		m = __kvmi_...
2020 Feb 07
0
[RFC PATCH v7 11/78] KVM: x86: add .control_cr3_intercept() to struct kvm_x86_ops
...40 insertions(+), 6 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index d279195dac97..9032c996ebdb 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -134,6 +134,10 @@ static inline gfn_t gfn_to_index(gfn_t gfn, gfn_t base_gfn, int level)
 #define KVM_NR_FIXED_MTRR_REGION 88
 #define KVM_NR_VAR_MTRR 8

+#define CR_TYPE_R 1
+#define CR_TYPE_W 2
+#define CR_TYPE_RW 3
+
 #define ASYNC_PF_PER_VCPU 64

 enum kvm_reg {
@@ -1064,6 +1068,8 @@ struct kvm_x86_ops {
 	void (*set_cr0)(struct kvm_vcpu *vcpu, unsigned long cr0);
 	v...
2020 Jul 21
0
[PATCH v9 10/84] KVM: x86: add .control_cr3_intercept() to struct kvm_x86_ops
...40 insertions(+), 6 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 78fe3c7c814c..89c0bd6529a5 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -136,6 +136,10 @@ static inline gfn_t gfn_to_index(gfn_t gfn, gfn_t base_gfn, int level)
 #define KVM_NR_FIXED_MTRR_REGION 88
 #define KVM_NR_VAR_MTRR 8

+#define CR_TYPE_R 1
+#define CR_TYPE_W 2
+#define CR_TYPE_RW 3
+
 #define ASYNC_PF_PER_VCPU 64

 enum kvm_reg {
@@ -1111,6 +1115,8 @@ struct kvm_x86_ops {
 	void (*get_cs_db_l_bits)(struct kvm_vcpu *vcpu, int *db, int *...
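The CR_TYPE_* values combine as a two-bit mask (CR_TYPE_RW == CR_TYPE_R | CR_TYPE_W), so a single argument can request read, write, or read/write interception; a small self-contained illustration (not code from the patch):

    #include <stdbool.h>
    #include <stdio.h>

    #define CR_TYPE_R  1    /* mirror the values added by the patch */
    #define CR_TYPE_W  2
    #define CR_TYPE_RW 3

    static void show_cr3_intercept(unsigned int type)
    {
            bool reads  = type & CR_TYPE_R;
            bool writes = type & CR_TYPE_W; /* CR_TYPE_RW sets both bits */

            printf("intercept CR3 reads=%d writes=%d\n", reads, writes);
    }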
2020 Jul 21
0
[PATCH v9 17/84] KVM: x86: use MSR_TYPE_R, MSR_TYPE_W and MSR_TYPE_RW with AMD
...3 insertions(+), 18 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 6be832ba9c97..a3230ab377db 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -140,6 +140,10 @@ static inline gfn_t gfn_to_index(gfn_t gfn, gfn_t base_gfn, int level)
 #define CR_TYPE_W 2
 #define CR_TYPE_RW 3

+#define MSR_TYPE_R 1
+#define MSR_TYPE_W 2
+#define MSR_TYPE_RW 3
+
 #define ASYNC_PF_PER_VCPU 64

 enum kvm_reg {
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 4e5b07606891..e16be80edd7e 100644
--- a/arch/x86/kvm/svm/s...
2020 Feb 07
0
[RFC PATCH v7 16/78] KVM: x86: use MSR_TYPE_R, MSR_TYPE_W and MSR_TYPE_RW with AMD code too
...2 insertions(+), 18 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 8cdb6cece618..2136f273645a 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -138,6 +138,10 @@ static inline gfn_t gfn_to_index(gfn_t gfn, gfn_t base_gfn, int level)
 #define CR_TYPE_W 2
 #define CR_TYPE_RW 3

+#define MSR_TYPE_R 1
+#define MSR_TYPE_W 2
+#define MSR_TYPE_RW 3
+
 #define ASYNC_PF_PER_VCPU 64

 enum kvm_reg {
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index e3369562d6fe..0021d8c2feca 100644
--- a/arch/x86/kvm/svm.c
+++ b/a...
2012 Dec 10
26
[PATCH 00/11] Add virtual EPT support to Xen.
From: Zhang Xiantao <xiantao.zhang@intel.com> With virtual EPT support, the L1 hypervisor can use EPT hardware for the L2 guest's memory virtualization, which improves L2 guest performance sharply. According to our testing, some benchmarks show more than a 5x performance gain. Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com> Zhang Xiantao (11):
2020 Jul 22
34
[RFC PATCH v1 00/34] VM introspection - EPT Views and Virtualization Exceptions
This patch series is based on the VM introspection patches (https://lore.kernel.org/kvm/20200721210922.7646-1-alazar at bitdefender.com/), extending the introspection API with EPT Views and Virtualization Exceptions (#VE) support. The purpose of this series is to get initial feedback and to see whether we are on the right track, especially because the changes made to add the EPT views are not small
2020 Feb 07
78
[RFC PATCH v7 00/78] VM introspection
The KVM introspection subsystem provides a facility for applications running on the host or in a separate VM to control the execution of other VMs (pause, resume, shutdown), query the state of the vCPUs (GPRs, MSRs, etc.), alter the page access bits in the shadow page tables (only for the hardware-backed ones, e.g. Intel's EPT), and receive notifications when events of interest have taken place
2020 Jul 21
87
[PATCH v9 00/84] VM introspection
The KVM introspection subsystem provides a facility for applications running on the host or in a separate VM to control the execution of other VMs (pause, resume, shutdown), query the state of the vCPUs (GPRs, MSRs, etc.), alter the page access bits in the shadow page tables (only for the hardware-backed ones, e.g. Intel's EPT), and receive notifications when events of interest have taken place
2019 Aug 09
117
[RFC PATCH v6 00/92] VM introspection
The KVM introspection subsystem provides a facility for applications running on the host or in a separate VM to control the execution of other VMs (pause, resume, shutdown), query the state of the vCPUs (GPRs, MSRs, etc.), alter the page access bits in the shadow page tables (only for the hardware-backed ones, e.g. Intel's EPT), and receive notifications when events of interest have taken place