Displaying 12 results from an estimated 12 matches for "vm_list".
2009 May 19
1
[PATCH node-image] Fixing the autotest script.
...t;
- exit 2
+ send_log "Unexpected end of file."
+ exit 2
}
}
@@ -676,13 +674,15 @@ setup_for_testing () {
# cleans up any loose ends
cleanup_after_testing () {
+ debug "Cleaning up"
stop_dnsmasq
stop_networking
# destroy any running vms
vm_list=$(sudo virsh list --all | awk '/'${vm_prefix}-'/ { print $2 }')
test -n "$vm_list" && for vm in $vm_list; do
- destroy_node $vm
+ destroy_node $vm
done
+ stop_networking
}
# check commandline options
@@ -715,15 +715,30 @@ set +u
if [ $# -gt...
2011 Dec 16
4
[PATCH 0/2] vhost-net: Use kvm_memslots instead of vhost_memory to translate GPA to HVA
From: Hongyong Zang <zanghongyong at huawei.com>
Vhost-net uses its own vhost_memory table, built from user-space (qemu) information,
to translate GPA to HVA. Since the kernel's kvm structure already maintains this
address mapping in its *kvm_memslots* member, these patches use the kernel's
kvm_memslots directly, avoiding the separate initialization and maintenance of
vhost_memory.
Hongyong
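The point of the patch series above is that a memslot table already maps ranges of guest frame numbers to host userspace addresses, so vhost-net need not keep its own copy. As a rough, self-contained illustration of what a memslot-based GPA-to-HVA lookup does, here is a simplified userspace model; the structure and field names merely echo the kernel's struct kvm_memory_slot, and this sketch is not actual kernel code:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Simplified model of a KVM memory slot: a contiguous range of guest
 * frame numbers (GFNs) backed by a host userspace virtual address. */
struct memslot {
    uint64_t base_gfn;        /* first guest frame number in the slot */
    uint64_t npages;          /* number of 4 KiB pages in the slot */
    uint64_t userspace_addr;  /* HVA backing the first page */
};

#define PAGE_SHIFT       12
#define PAGE_OFFSET_MASK ((1ULL << PAGE_SHIFT) - 1)

/* Translate a guest physical address to a host virtual address by
 * scanning the slot array; returns 0 when the GPA falls in no slot. */
static uint64_t gpa_to_hva(const struct memslot *slots, size_t n, uint64_t gpa)
{
    uint64_t gfn = gpa >> PAGE_SHIFT;

    for (size_t i = 0; i < n; i++) {
        const struct memslot *s = &slots[i];
        if (gfn >= s->base_gfn && gfn < s->base_gfn + s->npages)
            return s->userspace_addr
                 + ((gfn - s->base_gfn) << PAGE_SHIFT)
                 + (gpa & PAGE_OFFSET_MASK);
    }
    return 0;
}
```

For example, with one slot mapping GFNs 0x100-0x10f to HVA 0x7f0000000000, a GPA in the sixth page of the slot resolves to that base plus five pages plus the in-page offset. The real kernel walks its slot array similarly (with sorting and caching refinements), which is exactly the machinery the patches let vhost-net reuse.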
2010 Mar 26
3
[PATCH node] Update autobuild and autotest scripts for new build structure
...ORK.100
+ debug "NODE_ADDRESS=${NODE_ADDRESS}"
+ DNSMASQ_PID=0
+ debug "preserve_vm=${preserve_vm}"
+}
+
+# cleans up any loose ends
+cleanup_after_testing () {
+ debug "Cleaning up"
+ stop_dnsmasq
+ stop_networking
+ # destroy any running vms
+ vm_list=$(sudo virsh list --all | awk '/'${vm_prefix}-'/ { print $2 }')
+ test -n "$vm_list" && for vm in $vm_list; do
+ destroy_node $vm
+ done
+ stop_networking
+
+ # do not delete the work directory if preserve was specified
+ if $preserve_vm; then...
2019 Aug 09
0
[RFC PATCH v6 01/92] kvm: introduce KVMI (VM introspection subsystem)
...ude <linux/bsearch.h>
+#include <linux/kvmi.h>
#include <asm/processor.h>
#include <asm/io.h>
@@ -680,6 +681,8 @@ static struct kvm *kvm_create_vm(unsigned long type)
if (r)
goto out_err;
+ kvmi_create_vm(kvm);
+
spin_lock(&kvm_lock);
list_add(&kvm->vm_list, &vm_list);
spin_unlock(&kvm_lock);
@@ -725,6 +728,7 @@ static void kvm_destroy_vm(struct kvm *kvm)
int i;
struct mm_struct *mm = kvm->mm;
+ kvmi_destroy_vm(kvm);
kvm_uevent_notify_change(KVM_EVENT_DESTROY_VM, kvm);
kvm_destroy_vm_debugfs(kvm);
kvm_arch_sync_events(kvm);
@@...
2019 Aug 12
2
[RFC PATCH v6 01/92] kvm: introduce KVMI (VM introspection subsystem)
...gt;
>
> #include <asm/processor.h>
> #include <asm/io.h>
> @@ -680,6 +681,8 @@ static struct kvm *kvm_create_vm(unsigned long type)
> if (r)
> goto out_err;
>
> + kvmi_create_vm(kvm);
> +
> spin_lock(&kvm_lock);
> list_add(&kvm->vm_list, &vm_list);
> spin_unlock(&kvm_lock);
> @@ -725,6 +728,7 @@ static void kvm_destroy_vm(struct kvm *kvm)
> int i;
> struct mm_struct *mm = kvm->mm;
>
> + kvmi_destroy_vm(kvm);
> kvm_uevent_notify_change(KVM_EVENT_DESTROY_VM, kvm);
> kvm_destroy_vm_debug...
2014 May 28
1
Re: redirecting guest stdio to the host
On Wed, May 28, 2014 at 11:28:19AM -0600, Eric Blake wrote:
> On 05/28/2014 09:14 AM, Alexander Binun wrote:
>
> [can you convince your mailer to wrap long lines?]
>
> >
> > I have a program running on a VM guest. Its output is valuable (for VM introspection), so I want to let the host module know about it. I prefer to redirect the 'stdio' of a guest into a device
2009 May 19
1
re-sending outstanding controller refactoring patches after rebase
I've rebased the patch series to the current next branch and am sending them again.
2019 Aug 09
117
[RFC PATCH v6 00/92] VM introspection
The KVM introspection subsystem provides a facility for applications running
on the host or in a separate VM, to control the execution of other VMs
(pause, resume, shutdown), query the state of the vCPUs (GPRs, MSRs etc.),
alter the page access bits in the shadow page tables (only for the hardware-
backed ones, e.g. Intel's EPT) and receive notifications when events of
interest have taken place.
2020 Feb 07
78
[RFC PATCH v7 00/78] VM introspection
The KVM introspection subsystem provides a facility for applications
running on the host or in a separate VM, to control the execution of
other VMs (pause, resume, shutdown), query the state of the vCPUs (GPRs,
MSRs etc.), alter the page access bits in the shadow page tables (only
for the hardware-backed ones, e.g. Intel's EPT) and receive notifications
when events of interest have taken place.
2020 Jul 21
87
[PATCH v9 00/84] VM introspection
The KVM introspection subsystem provides a facility for applications
running on the host or in a separate VM, to control the execution of
other VMs (pause, resume, shutdown), query the state of the vCPUs (GPRs,
MSRs etc.), alter the page access bits in the shadow page tables (only
for the hardware-backed ones, e.g. Intel's EPT) and receive notifications
when events of interest have taken place.