search for: kvm_state

Displaying 15 results from an estimated 15 matches for "kvm_state".

2020 Aug 24
0
[PATCH v6 01/76] KVM: SVM: nested: Don't allocate VMCB structures on stack
...__user *) &user_kvm_nested_state->data.svm[0];
-	struct vmcb_control_area ctl;
-	struct vmcb_save_area save;
+	struct vmcb_control_area *ctl;
+	struct vmcb_save_area *save;
+	int ret;
 	u32 cr0;
+	BUILD_BUG_ON(sizeof(struct vmcb_control_area) + sizeof(struct vmcb_save_area) >
+		     KVM_STATE_NESTED_SVM_VMCB_SIZE);
+
 	if (kvm_state->format != KVM_STATE_NESTED_FORMAT_SVM)
 		return -EINVAL;
@@ -1095,13 +1099,22 @@ static int svm_set_nested_state(struct kvm_vcpu *vcpu,
 		return -EINVAL;
 	if (kvm_state->size < sizeof(*kvm_state) + KVM_STATE_NESTED_SVM_VMCB_SIZE)
 		return -EI...
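The hunk only shows the declarations turning into pointers plus the BUILD_BUG_ON; the part the excerpt truncates is the allocation and the copy of the user-supplied nested state into the heap buffers. Below is a minimal, hedged sketch of that stack-to-heap pattern using the usual kernel helpers (kzalloc/copy_from_user, assuming linux/slab.h and linux/uaccess.h); the function name, parameters, and error labels are illustrative, not the actual patch:

/*
 * Illustrative sketch only: heap-allocate the large VMCB structures
 * instead of placing them on the kernel stack, then copy the
 * user-supplied nested state into them.
 */
static int copy_nested_state_sketch(struct vmcb_control_area __user *user_ctl,
                                    struct vmcb_save_area __user *user_save,
                                    struct vmcb_control_area **out_ctl,
                                    struct vmcb_save_area **out_save)
{
        struct vmcb_control_area *ctl;
        struct vmcb_save_area *save;
        int ret = -ENOMEM;

        ctl = kzalloc(sizeof(*ctl), GFP_KERNEL);
        if (!ctl)
                return ret;
        save = kzalloc(sizeof(*save), GFP_KERNEL);
        if (!save)
                goto free_ctl;

        ret = -EFAULT;
        if (copy_from_user(ctl, user_ctl, sizeof(*ctl)))
                goto free_save;
        if (copy_from_user(save, user_save, sizeof(*save)))
                goto free_save;

        *out_ctl = ctl;
        *out_save = save;
        return 0;

free_save:
        kfree(save);
free_ctl:
        kfree(ctl);
        return ret;
}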
2015 Oct 26
9
[PATCH 0/7] Hyper-V Synthetic interrupt controller
Hyper-V SynIC (synthetic interrupt controller) device implementation. The implementation contains:
 * MSR support
 * irq routing setup
 * irq injection
 * irq ack callback registration
 * event/message page change tracking at Hyper-V exit
 * a Hyper-V test device to exercise SynIC via kvm-unit-tests
Andrey Smetanin (7):
 standard-headers/x86: add Hyper-V SynIC constants
 target-i386/kvm: Hyper-V
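For context, the userspace side of a series like this typically switches the feature on per vcpu through KVM_ENABLE_CAP. The fragment below is a hedged sketch of that single step, assuming a kernel that exposes KVM_CAP_HYPERV_SYNIC; it is not taken from the patch set, and the function name is made up:

#include <linux/kvm.h>
#include <string.h>
#include <sys/ioctl.h>

/* Hedged sketch: ask KVM to enable the Hyper-V SynIC on one vcpu. */
static int enable_synic_sketch(int vcpu_fd)
{
        struct kvm_enable_cap cap;

        memset(&cap, 0, sizeof(cap));
        cap.cap = KVM_CAP_HYPERV_SYNIC;    /* per-vcpu capability */
        return ioctl(vcpu_fd, KVM_ENABLE_CAP, &cap);
}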
2009 Aug 13
0
[PATCHv2 3/3] qemu-kvm: vhost-net implementation
...vm_irqfd call = { };
+    int r;
+
+    if (vector >= proxy->pci_dev.msix_entries_nr)
+        return -EINVAL;
+    if (!proxy->pci_dev.msix_entry_used[vector])
+        return -ENOENT;
+    call.fd = fd;
+    call.gsi = proxy->pci_dev.msix_irq_entries[vector].gsi;
+    r = kvm_vm_ioctl(kvm_state, KVM_IRQFD, &call);
+    if (r < 0)
+        return r;
+    return 0;
+}
+
+static int virtio_pci_queuefd(void *opaque, int n, int fd)
+{
+    VirtIOPCIProxy *proxy = opaque;
+    struct kvm_ioeventfd kick = {
+        .datamatch = n,
+        .addr = proxy->addr + VIRTIO_PCI_QUEUE_NOTIF...
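The excerpt cuts off inside the ioeventfd setup, which is the other half of the bypass: the irqfd above routes a host eventfd into a guest MSI-X vector, while the ioeventfd matches the guest's write of the queue index to the VIRTIO_PCI_QUEUE_NOTIFY doorbell and signals an eventfd without exiting to userspace. A hedged sketch of how that registration might look, reusing the kvm_vm_ioctl()/kvm_state names from the excerpt; the doorbell address, length, and function name here are assumptions, not the patch:

/*
 * Hedged sketch, not the patch itself: register an ioeventfd so that a
 * 16-bit guest write of queue index 'n' to the notify doorbell signals
 * 'fd' directly in the kernel.
 */
static int set_queuefd_sketch(uint64_t notify_addr, int n, int fd)
{
        struct kvm_ioeventfd kick = {
                .datamatch = n,
                .addr      = notify_addr,  /* e.g. proxy->addr + VIRTIO_PCI_QUEUE_NOTIFY */
                .len       = 2,            /* the notify register is 16 bits wide */
                .fd        = fd,
                .flags     = KVM_IOEVENTFD_FLAG_DATAMATCH | KVM_IOEVENTFD_FLAG_PIO,
        };

        return kvm_vm_ioctl(kvm_state, KVM_IOEVENTFD, &kick);
}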
2009 Aug 17
1
[PATCHv3 3/4] qemu-kvm: vhost-net implementation
...vm_irqfd call = { };
+    int r;
+
+    if (vector >= proxy->pci_dev.msix_entries_nr)
+        return -EINVAL;
+    if (!proxy->pci_dev.msix_entry_used[vector])
+        return -ENOENT;
+    call.fd = fd;
+    call.gsi = proxy->pci_dev.msix_irq_entries[vector].gsi;
+    r = kvm_vm_ioctl(kvm_state, KVM_IRQFD, &call);
+    if (r < 0)
+        return r;
+    return 0;
+}
+
+static int virtio_pci_queuefd(void *opaque, int n, int fd)
+{
+    VirtIOPCIProxy *proxy = opaque;
+    struct kvm_ioeventfd kick = {
+        .datamatch = n,
+        .addr = proxy->addr + VIRTIO_PCI_QUEUE_NOTIF...
2009 Aug 10
0
[PATCH 3/3] qemu-kvm: vhost-net implementation
...vm_irqfd call = { };
+    int r;
+
+    if (vector >= proxy->pci_dev.msix_entries_nr)
+        return -EINVAL;
+    if (!proxy->pci_dev.msix_entry_used[vector])
+        return -ENOENT;
+    call.fd = fd;
+    call.gsi = proxy->pci_dev.msix_irq_entries[vector].gsi;
+    r = kvm_vm_ioctl(kvm_state, KVM_IRQFD, &call);
+    if (r < 0)
+        return r;
+    return 0;
+}
+
+static int virtio_pci_queuefd(void *opaque, int n, int fd)
+{
+    VirtIOPCIProxy *proxy = opaque;
+    struct kvm_ioeventfd kick = {
+        .datamatch = n,
+        .addr = proxy->addr + VIRTIO_PCI_QUEUE_NOTIF...
2009 Nov 02
2
[PATCHv4 6/6] qemu-kvm: vhost-net implementation
...vm_irqfd call = { };
+    int r;
+
+    if (vector >= proxy->pci_dev.msix_entries_nr)
+        return -EINVAL;
+    if (!proxy->pci_dev.msix_entry_used[vector])
+        return -ENOENT;
+    call.fd = fd;
+    call.gsi = proxy->pci_dev.msix_irq_entries[vector].gsi;
+    r = kvm_vm_ioctl(kvm_state, KVM_IRQFD, &call);
+    if (r < 0)
+        return r;
+    return 0;
+}
+
+static int virtio_pci_queuefd(void *opaque, int n, int fd)
+{
+    VirtIOPCIProxy *proxy = opaque;
+    struct kvm_ioeventfd kick = {
+        .datamatch = n,
+        .addr = proxy->addr + VIRTIO_PCI_QUEUE_NOTIF...
2019 Apr 04
1
Proof of concept for GPU forwarding for Linux guest on Linux host.
Hi, This is a proof of concept of GPU forwarding for a Linux guest on a Linux host. I'd like to get comments and suggestions from the community before I put more time into it. To summarize what it is:
1. It's a solution to bring GPU acceleration to a Linux VM guest on a Linux host. It could work with different GPUs, although the current proof of concept only works with Intel GPUs.
2. The basic idea
2020 Aug 24
96
[PATCH v6 00/76] x86: SEV-ES Guest Support
From: Joerg Roedel <jroedel at suse.de>
Hi, here is the new version of the SEV-ES client enabling patch-set. It is based on the latest tip/master branch and contains the necessary changes. In particular those are:
- Enabling CR4.FSGSBASE early on supported processors so that early #VC exceptions on APs can be handled.
- Add another patch (patch 1) to fix a KVM frame-size build
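The first bullet, enabling CR4.FSGSBASE early, amounts to setting bit 16 of CR4 on CPUs that advertise FSGSBASE so the GS base can be swapped inside very early exception handlers. A hedged, stand-alone illustration of just that bit flip follows; the series itself wires this into early CPU setup rather than a helper like this, and the macro/function names below are made up:

#define X86_CR4_FSGSBASE_SKETCH (1UL << 16)    /* CR4.FSGSBASE is bit 16 */

/* Illustrative only: set CR4.FSGSBASE; callers must have checked CPUID first. */
static inline void enable_fsgsbase_sketch(void)
{
        unsigned long cr4;

        asm volatile("mov %%cr4, %0" : "=r" (cr4));
        cr4 |= X86_CR4_FSGSBASE_SKETCH;
        asm volatile("mov %0, %%cr4" : : "r" (cr4) : "memory");
}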
2020 Sep 07
84
[PATCH v7 00/72] x86: SEV-ES Guest Support
From: Joerg Roedel <jroedel at suse.de>
Hi, here is a new version of the SEV-ES Guest Support patches for x86. The previous versions can be found as a linked list starting here: https://lore.kernel.org/lkml/20200824085511.7553-1-joro at 8bytes.org/
I updated the patch-set based on the review comments I got and the discussions around it. Another important change is that the early IDT