search for: kvm_irqfd

Displaying 8 results from an estimated 8 matches for "kvm_irqfd".

2013 May 07
2
[PATCH] KVM: Fix kvm_irqfd_init initialization
In commit a0f155e96 'KVM: Initialize irqfd from kvm_init()', when kvm_init() is called a second time (e.g. with both kvm-amd.ko and kvm-intel.ko loaded), kvm_arch_init() fails with -EEXIST and kvm_irqfd_exit() is then called on the error handling path, leaving the kvm_irqfd system not ready. This patch fixes the following:
BUG: unable to handle kernel NULL pointer dereference at (null)
IP: [<ffffffff81c0721e>] _raw_spin_lock+0xe/0x30
PGD 0
Oops: 0002 [#1] SMP
Modules link...
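
For readers unfamiliar with the module setup, here is a minimal C sketch of the failure mode described in that commit message. It is NOT the actual kvm_main.c code; the call ordering and the error label are assumptions reconstructed from the description above.

/* Prototypes for the kernel functions involved (kernel-internal). */
int kvm_irqfd_init(void);
void kvm_irqfd_exit(void);
int kvm_arch_init(void *opaque);

/*
 * Sketch only -- reconstructed from the commit message, not kernel
 * source.  On x86, kvm_init() runs once per vendor module (kvm-intel.ko
 * and kvm-amd.ko); kvm_arch_init() lets only the first caller succeed.
 */
int kvm_init_sketch(void *opaque)
{
    int r;

    r = kvm_irqfd_init();           /* global irqfd state set up first */
    if (r)
        return r;

    r = kvm_arch_init(opaque);      /* second caller gets -EEXIST */
    if (r)
        goto out_irqfd;

    /* ... rest of initialization ... */
    return 0;

out_irqfd:
    kvm_irqfd_exit();               /* BUG: tears down the shared irqfd
                                     * state even though KVM stays
                                     * loaded, so irqfd is no longer
                                     * ready -- consistent with the
                                     * NULL deref in _raw_spin_lock()
                                     * shown above */
    return r;
}
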
2014 Nov 16
6
vhost + multiqueue + RSS question.
...he NIC, just like I would expect, > but in a guest all 4 virtio-input interrupts are incremented. Am I > missing any configuration? I don't see anything obviously wrong with what you describe. Maybe, somehow, the same irqfd got bound to multiple MSI vectors? To check, can you try dumping the struct kvm_irqfd that's passed to kvm? > -- > Gleb.
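
For anyone wanting to follow that suggestion: struct kvm_irqfd is the argument of the KVM_IRQFD vm ioctl (defined in linux/kvm.h), so the dump can be a one-line debug print just before the ioctl is issued on the userspace side. A minimal sketch; dump_kvm_irqfd() is a hypothetical helper, not an existing QEMU function:

#include <stdio.h>
#include <linux/kvm.h>

/* Hypothetical debug helper: print each kvm_irqfd handed to the
 * KVM_IRQFD ioctl.  If two MSI vectors show the same gsi, the irqfd
 * routing (not the device) is the prime suspect. */
static void dump_kvm_irqfd(const struct kvm_irqfd *irqfd)
{
    fprintf(stderr, "KVM_IRQFD: fd=%u gsi=%u flags=0x%x resamplefd=%u\n",
            irqfd->fd, irqfd->gsi, irqfd->flags, irqfd->resamplefd);
}
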
2014 Nov 17
0
vhost + multiqueue + RSS question.
...uld expect, >> but in a guest all 4 virtio-input interrupts are incremented. Am I >> missing any configuration? > I don't see anything obviously wrong with what you describe. > Maybe, somehow, the same irqfd got bound to multiple MSI vectors? > To check, can you try dumping the struct kvm_irqfd that's passed to kvm? > > >> -- >> Gleb. This sounds like a regression. Which kernel/qemu versions did you use?
2014 Nov 17
0
vhost + multiqueue + RSS question.
...NIC, just like I would expect, > but in a guest all 4 virtio-input interrupts are incremented. Am I > missing any configuration? I don't see anything obviously wrong with what you describe. Maybe, somehow, the same irqfd got bound to multiple MSI vectors? To check, can you try dumping the struct kvm_irqfd that's passed to kvm? > -- > Gleb.
2009 Aug 13
0
[PATCHv2 3/3] qemu-kvm: vhost-net implementation
...dev, vdev->config_vector);
@@ -365,12 +369,48 @@ static void virtio_write_config(PCIDevice *pci_dev, uint32_t address,
     msix_write_config(pci_dev, address, val, len);
 }
+static int virtio_pci_irqfd(void * opaque, uint16_t vector, int fd)
+{
+    VirtIOPCIProxy *proxy = opaque;
+    struct kvm_irqfd call = { };
+    int r;
+
+    if (vector >= proxy->pci_dev.msix_entries_nr)
+        return -EINVAL;
+    if (!proxy->pci_dev.msix_entry_used[vector])
+        return -ENOENT;
+    call.fd = fd;
+    call.gsi = proxy->pci_dev.msix_irq_entries[vector].gsi;
+    r = kvm_vm_ioctl(kvm_stat...
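
The snippet cuts off mid-call, but the binding it performs is the generic KVM_IRQFD pattern: fill a struct kvm_irqfd with the eventfd and the GSI routed to the MSI-X vector, then issue the vm ioctl. Below is a minimal self-contained sketch of that pattern, using a raw ioctl() on a vm fd instead of qemu-kvm's kvm_vm_ioctl() wrapper; bind_irqfd() and its parameters are illustrative, not code from the patch.

#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Bind an eventfd to a guest interrupt: once bound, any write to the
 * eventfd (e.g. by vhost in the kernel) injects an interrupt on gsi
 * without a round trip through userspace.  vm_fd is a VM file
 * descriptor obtained from KVM_CREATE_VM. */
static int bind_irqfd(int vm_fd, int eventfd, unsigned int gsi)
{
    struct kvm_irqfd irqfd = {
        .fd  = eventfd,
        .gsi = gsi,
    };

    if (ioctl(vm_fd, KVM_IRQFD, &irqfd) < 0) {
        perror("KVM_IRQFD");
        return -1;
    }
    return 0;
}

This in-kernel signalling path is the piece vhost-net relies on: completion notifications reach the guest without bouncing through QEMU.
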
2009 Aug 17
1
[PATCHv3 3/4] qemu-kvm: vhost-net implementation
...dev, vdev->config_vector);
@@ -365,12 +369,48 @@ static void virtio_write_config(PCIDevice *pci_dev, uint32_t address,
     msix_write_config(pci_dev, address, val, len);
 }
+static int virtio_pci_irqfd(void * opaque, uint16_t vector, int fd)
+{
+    VirtIOPCIProxy *proxy = opaque;
+    struct kvm_irqfd call = { };
+    int r;
+
+    if (vector >= proxy->pci_dev.msix_entries_nr)
+        return -EINVAL;
+    if (!proxy->pci_dev.msix_entry_used[vector])
+        return -ENOENT;
+    call.fd = fd;
+    call.gsi = proxy->pci_dev.msix_irq_entries[vector].gsi;
+    r = kvm_vm_ioctl(kvm_stat...
2009 Aug 10
0
[PATCH 3/3] qemu-kvm: vhost-net implementation
...dev, vdev->config_vector);
@@ -367,12 +371,48 @@ static void virtio_write_config(PCIDevice *pci_dev, uint32_t address,
     msix_write_config(pci_dev, address, val, len);
 }
+static int virtio_pci_irqfd(void * opaque, uint16_t vector, int fd)
+{
+    VirtIOPCIProxy *proxy = opaque;
+    struct kvm_irqfd call = { };
+    int r;
+
+    if (vector >= proxy->pci_dev.msix_entries_nr)
+        return -EINVAL;
+    if (!proxy->pci_dev.msix_entry_used[vector])
+        return -ENOENT;
+    call.fd = fd;
+    call.gsi = proxy->pci_dev.msix_irq_entries[vector].gsi;
+    r = kvm_vm_ioctl(kvm_stat...
2009 Nov 02
2
[PATCHv4 6/6] qemu-kvm: vhost-net implementation
...dev, vdev->config_vector);
@@ -373,12 +377,48 @@ static void virtio_write_config(PCIDevice *pci_dev, uint32_t address,
     msix_write_config(pci_dev, address, val, len);
 }
+static int virtio_pci_irqfd(void * opaque, uint16_t vector, int fd)
+{
+    VirtIOPCIProxy *proxy = opaque;
+    struct kvm_irqfd call = { };
+    int r;
+
+    if (vector >= proxy->pci_dev.msix_entries_nr)
+        return -EINVAL;
+    if (!proxy->pci_dev.msix_entry_used[vector])
+        return -ENOENT;
+    call.fd = fd;
+    call.gsi = proxy->pci_dev.msix_irq_entries[vector].gsi;
+    r = kvm_vm_ioctl(kvm_stat...