search for: __hypervisor_hvm_op

Displaying 20 results from an estimated 49 matches for "__hypervisor_hvm_op".

2012 Jan 17 (2): Problems calling HVMOP_flush_tlbs
...the comment where HVMOP_flush_tlbs is defined. What is the correct way to invoke this hypercall? If I call it like this, I receive an invalid parameter (EINVAL) error: struct privcmd_hypercall { uint64_t op; uint64_t arg[5]; } hypercall; hypercall.op = __HYPERVISOR_hvm_op; hypercall.arg[0] = HVMOP_flush_tlbs; hypercall.arg[1] = 0; ret = do_xen_hypercall(xch, (void*)&hypercall); If I call it like this, I get function not implemented (ENOSYS), where 3 is a valid domain id: hypercall.op = __HYPERVISOR_hvm_op; hypercall.arg[0] = HVMOP...
2020 Feb 07 (0): [RFC PATCH v7 57/78] KVM: introspection: add KVMI_EVENT_HYPERCALL
...wing +registers, which differ between 32bit and 64bit, have the following values: + + 32bit 64bit value + --------------------------- + ebx (a0) rdi KVM_HC_XEN_HVM_OP_GUEST_REQUEST_VM_EVENT + ecx (a1) rsi 0 + +This specification copies Xen's { __HYPERVISOR_hvm_op, +HVMOP_guest_request_vm_event } hypercall and can originate from kernel or +userspace. + +It returns 0 if successful, or a negative POSIX.1 error code if it fails. The +absence of an active VMI application is not signaled in any way. + +The following registers are clobbered: + + * 32bit: edx, esi...
2006 Aug 25 (1): [PATCH][RFC]xenperf hypercall pretty print TAKE 2
This patch pretty-prints the hypercall section for $xenperf -f. Each hypercall count is tagged with its name. Reference: http://lists.xensource.com/archives/html/xen-ia64-devel/2006-08/msg00261.html Signed-off-by: Ken Hironaka <kenny@logos.ic.i.u-tokyo.ac.jp>
2012 Sep 04 (2): [PATCH] valgrind: Support for ioctls used by Xen toolstack processes.
...subop(tid, layout, arrghs, status, flags, + "__HYPERVISOR_domctl", domctl->cmd); + break; + } +#undef PRE_XEN_DOMCTL_READ +#undef __PRE_XEN_DOMCTL_READ +} + +PRE(hvm_op) +{ + unsigned long op = ARG1; + void *arg = (void *)(unsigned long)ARG2; + + PRINT("__HYPERVISOR_hvm_op ( %ld, %p )", op, arg); + +#define __PRE_XEN_HVMOP_READ(_hvm_op, _type, _field) \ + PRE_MEM_READ("XEN_HVMOP_" # _hvm_op, \ + (Addr)&((_type*)arg)->_field, \ + sizeof(((_type*)arg)->_field)) +#define PRE_XEN_HVMOP_...
2006 Dec 01 (0): [PATCH 3/10] Add support for netfront/netback acceleration drivers
...27 +#define __HYPERVISOR_nmi_op 28 +#define __HYPERVISOR_sched_op 29 +#define __HYPERVISOR_callback_op 30 +#define __HYPERVISOR_xenoprof_op 31 +#define __HYPERVISOR_event_channel_op 32 +#define __HYPERVISOR_physdev_op 33 +#define __HYPERVISOR_hvm_op 34 +#define __HYPERVISOR_sysctl 35 +#define __HYPERVISOR_domctl 36 +#define __HYPERVISOR_kexec_op 37 + +/* Architecture-specific hypercall definitions. */ +#define __HYPERVISOR_arch_0 48 +#define __HYPERVISOR_arch_1 4...
2011 Jul 25 (1): linux-next: Tree for July 25 (xen)
...86/xen/trace.c:37: error: '__HYPERVISOR_physdev_op' undeclared here (not in a function) arch/x86/xen/trace.c:37: error: array index in initializer not of integer type arch/x86/xen/trace.c:37: error: (near initialization for 'xen_hypercall_names') arch/x86/xen/trace.c:38: error: '__HYPERVISOR_hvm_op' undeclared here (not in a function) arch/x86/xen/trace.c:38: error: array index in initializer not of integer type arch/x86/xen/trace.c:38: error: (near initialization for 'xen_hypercall_names') arch/x86/xen/trace.c:41: error: '__HYPERVISOR_arch_0' undeclared here (not in a fun...
2008 Mar 28 (12): [PATCH 00/12] Xen arch portability patches (take 4)
Hi Jeremy. According to your suggestion, I recreated patches for Ingo's x86.git tree. And this patch series includes Eddie's modification. Please review and forward them. (or push back to respin.) Recently the xen-ia64 community started to make efforts to merge xen/ia64 Linux to upstream. The first step is to merge up domU portion. This patchset is preliminary for xen/ia64 domU linux
2012 Dec 27 (30): [PATCH v3 00/11] xen: Initial kexec/kdump implementation
Hi, this set of patches contains the initial kexec/kdump implementation for Xen, v3. Currently only dom0 is supported; however, almost all infrastructure required for domU support is ready. Jan Beulich suggested merging the Xen x86 assembler code with the baremetal x86 code. This could simplify the kernel code and reduce its size a bit. However, this solution requires some changes in the baremetal x86 code. First of
2012 Nov 20 (12): [PATCH v2 00/11] xen: Initial kexec/kdump implementation
Hi, this set of patches contains the initial kexec/kdump implementation for Xen, v2 (a previous version was posted to a few people by mistake; sorry for that). Currently only dom0 is supported; however, almost all infrastructure required for domU support is ready. Jan Beulich suggested merging the Xen x86 assembler code with the baremetal x86 code. This could simplify the kernel code and reduce its size a bit.
2019 Aug 09 (117): [RFC PATCH v6 00/92] VM introspection
The KVM introspection subsystem provides a facility for applications, running on the host or in a separate VM, to control the execution of other VMs (pause, resume, shutdown), query the state of the vCPUs (GPRs, MSRs etc.), alter the page access bits in the shadow page tables (only for hardware-backed ones, e.g. Intel's EPT) and receive notifications when events of interest have taken place
2020 Feb 07 (78): [RFC PATCH v7 00/78] VM introspection
The KVM introspection subsystem provides a facility for applications, running on the host or in a separate VM, to control the execution of other VMs (pause, resume, shutdown), query the state of the vCPUs (GPRs, MSRs etc.), alter the page access bits in the shadow page tables (only for hardware-backed ones, e.g. Intel's EPT) and receive notifications when events of interest have taken place
2020 Jul 21 (87): [PATCH v9 00/84] VM introspection
The KVM introspection subsystem provides a facility for applications, running on the host or in a separate VM, to control the execution of other VMs (pause, resume, shutdown), query the state of the vCPUs (GPRs, MSRs etc.), alter the page access bits in the shadow page tables (only for hardware-backed ones, e.g. Intel's EPT) and receive notifications when events of interest have taken place