similar to: [RFC/PATCH 0/1] virtio_mmio: hypervisor specific interfaces for MMIO

Displaying results from an estimated 20,000 matches similar to: "[RFC/PATCH 0/1] virtio_mmio: hypervisor specific interfaces for MMIO"

2020 Apr 30
0
[RFC/PATCH 0/1] virtio_mmio: hypervisor specific interfaces for MMIO
On Thu, Apr 30, 2020 at 03:32:55PM +0530, Srivatsa Vaddagiri wrote: > The Type-1 hypervisor we are dealing with does not allow for MMIO transport. > [1] summarizes some of the problems we have in making virtio work on such > hypervisors. This patch proposes a solution for the transport problem, viz. how we can > do config space IO on such a hypervisor. Hypervisor-specific methods >
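The idea being discussed above, routing virtio config-space accesses through a hypervisor-specific backend instead of trapped MMIO, boils down to an ops table that the transport always calls through. Below is a minimal user-space sketch of that indirection; the names (cfg_ops, mmio_backend, cfg_read32, etc.) are hypothetical illustrations, not the identifiers used in the actual patch.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical ops table: the transport never touches registers directly,
 * it always goes through whatever backend the platform registered. */
struct cfg_ops {
    uint32_t (*read32)(void *priv, unsigned long offset);
    void     (*write32)(void *priv, unsigned long offset, uint32_t val);
};

/* Backend #1: plain memory-mapped registers (stand-in for readl/writel). */
static uint32_t mmio_read32(void *priv, unsigned long off)
{
    return *(volatile uint32_t *)((char *)priv + off);
}

static void mmio_write32(void *priv, unsigned long off, uint32_t val)
{
    *(volatile uint32_t *)((char *)priv + off) = val;
}

static const struct cfg_ops mmio_backend = { mmio_read32, mmio_write32 };

/* A message-queue or doorbell backend supplied by the hypervisor could be
 * plugged in here with the same two signatures; the transport code below
 * would not change at all. */

struct transport {
    const struct cfg_ops *ops;
    void *priv;
};

static uint32_t cfg_read32(struct transport *t, unsigned long off)
{
    return t->ops->read32(t->priv, off);
}

static void cfg_write32(struct transport *t, unsigned long off, uint32_t val)
{
    t->ops->write32(t->priv, off, val);
}

int main(void)
{
    static uint32_t fake_regs[16];               /* stand-in register window */
    struct transport t = { &mmio_backend, fake_regs };

    cfg_write32(&t, 0x0, 0x74726976);            /* write an arbitrary "magic" field */
    printf("magic = 0x%08x\n", cfg_read32(&t, 0x0));
    return 0;
}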
2020 Apr 30
0
[RFC/PATCH 0/1] virtio_mmio: hypervisor specific interfaces for MMIO
Hi Vatsa, On Thu, Apr 30, 2020 at 03:59:39PM +0530, Srivatsa Vaddagiri wrote: > * Will Deacon <will at kernel.org> [2020-04-30 11:08:22]: > > > This patch is meant to seek comments. If it's considered to be in the right > > > direction, I will work on making it more complete and send the next version! > > What's stopping you from implementing the
2020 Apr 30
0
[RFC/PATCH 1/1] virtio: Introduce MMIO ops
On Thu, Apr 30, 2020 at 03:32:56PM +0530, Srivatsa Vaddagiri wrote: > Some hypervisors may not support MMIO transport, i.e. trap config > space access and have it handled by a backend driver. They may > allow other ways to interact with the backend, such as a message queue > or doorbell API. This patch allows for hypervisor-specific > methods for config space IO. > > Signed-off-by:
2020 Apr 29
3
[PATCH 5/5] virtio: Add bounce DMA ops
On Wed, Apr 29, 2020 at 03:39:53PM +0530, Srivatsa Vaddagiri wrote: > That would still not work I think where swiotlb is used for pass-thr devices > (when private memory is fine) as well as virtio devices (when shared memory is > required). So that is a separate question. When there are multiple untrusted devices, at the moment it looks like a single bounce buffer is used. Which to me
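The bounce-buffer scheme under discussion, in which private guest memory is never handed to the device and data is instead staged through a pool the device is allowed to see, reduces to a copy in and out of a dedicated pool on the DMA map/unmap paths. The sketch below is a schematic user-space illustration with made-up names (bounce_pool, bounce_map, bounce_unmap); it is not the kernel's swiotlb API, and a per-device pool is shown only because the thread debates one global pool versus separate pools per untrusted device.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* One pool per device; memory in 'shared' is the only memory the device
 * is ever allowed to access. */
struct bounce_pool {
    uint8_t *shared;
    size_t   size;
    size_t   next;     /* trivial bump allocator, enough for a sketch */
};

/* "Map": copy private data into the shared pool and hand the device the
 * offset of the copy instead of the private pointer. */
static size_t bounce_map(struct bounce_pool *p, const void *priv, size_t len)
{
    size_t slot = p->next;
    if (slot + len > p->size)
        return (size_t)-1;             /* pool exhausted */
    memcpy(p->shared + slot, priv, len);
    p->next += len;
    return slot;
}

/* "Unmap" for a device-to-driver transfer: copy the result back out of
 * the shared pool into private memory. */
static void bounce_unmap(struct bounce_pool *p, size_t slot, void *priv, size_t len)
{
    memcpy(priv, p->shared + slot, len);
}

int main(void)
{
    struct bounce_pool pool = { malloc(4096), 4096, 0 };
    char private_buf[32] = "private request data";
    char reply[32] = {0};

    size_t slot = bounce_map(&pool, private_buf, sizeof(private_buf));
    /* ... the device would DMA only within pool.shared here ... */
    bounce_unmap(&pool, slot, reply, sizeof(reply));
    printf("reply: %s\n", reply);
    free(pool.shared);
    return 0;
}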
2020 Apr 30
0
[RFC/PATCH 1/1] virtio: Introduce MMIO ops
On 30.04.20 13:11, Srivatsa Vaddagiri wrote: > * Will Deacon <will at kernel.org> [2020-04-30 11:41:50]: > >> On Thu, Apr 30, 2020 at 04:04:46PM +0530, Srivatsa Vaddagiri wrote: >>> If CONFIG_VIRTIO_MMIO_OPS is defined, then I expect this to be unconditionally >>> set to 'magic_qcom_ops' that uses hypervisor-supported interface for IO (for >>>
2020 Apr 30
0
[RFC/PATCH 1/1] virtio: Introduce MMIO ops
On Thu, Apr 30, 2020 at 04:04:46PM +0530, Srivatsa Vaddagiri wrote: > * Will Deacon <will at kernel.org> [2020-04-30 11:14:32]: > > > > +#ifdef CONFIG_VIRTIO_MMIO_OPS > > > > > > +static struct virtio_mmio_ops *mmio_ops; > > > + > > > +#define virtio_readb(a) mmio_ops->mmio_readl((a)) > > > +#define virtio_readw(a)
2020 Apr 29
1
[PATCH 5/5] virtio: Add bounce DMA ops
On Wed, Apr 29, 2020 at 12:26:43PM +0200, Jan Kiszka wrote: > On 29.04.20 12:20, Michael S. Tsirkin wrote: > > On Wed, Apr 29, 2020 at 03:39:53PM +0530, Srivatsa Vaddagiri wrote: > > > That would still not work I think where swiotlb is used for pass-thr devices > > > (when private memory is fine) as well as virtio devices (when shared memory is > > > required).
2020 Apr 28
1
[PATCH 5/5] virtio: Add bounce DMA ops
On Tue, Apr 28, 2020 at 11:19:52PM +0530, Srivatsa Vaddagiri wrote: > * Michael S. Tsirkin <mst at redhat.com> [2020-04-28 12:17:57]: > > > Okay, but how is all this virtio specific? For example, why not allow > > separate swiotlbs for any type of device? > > For example, this might make sense if a given device is from a > > different, less trusted vendor.
2013 Aug 26
7
[PATCH V13 0/4] Paravirtualized ticket spinlocks for KVM host
This series forms the kvm host part of paravirtual spinlocks, based against the kvm tree. Please refer to https://lkml.org/lkml/2013/8/9/265 for the kvm guest and Xen parts; the x86 part is merged to -tip spinlocks. Please note that "kvm uapi: Add KICK_CPU and PV_UNHALT definition to uapi" is a common patch for both guest and host. Changes since V12: fold patch 3 into patch 2 for bisection. (Eric Northup)
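For context, the construct these series paravirtualize is the plain ticket spinlock: each locker takes a ticket and spins until the "now serving" counter reaches it, which gives FIFO fairness. A minimal user-space version is sketched below with C11 atomics and hypothetical names; the pv variant in the series additionally lets a waiter halt its vCPU after spinning past a threshold, and the KICK_CPU/PV_UNHALT hypercall lets the unlocker wake exactly that waiter.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

/* Plain ticket lock: FIFO ordering by construction. */
struct ticket_lock {
    atomic_uint next;    /* ticket dispenser */
    atomic_uint owner;   /* "now serving" */
};

static void ticket_lock(struct ticket_lock *l)
{
    unsigned int me = atomic_fetch_add(&l->next, 1);
    while (atomic_load(&l->owner) != me)
        ;   /* pv version: after N spins, halt the vCPU and wait to be kicked */
}

static void ticket_unlock(struct ticket_lock *l)
{
    /* pv version: if the next ticket holder has halted, issue the kick
     * hypercall so the host unhalts that vCPU. */
    atomic_fetch_add(&l->owner, 1);
}

static struct ticket_lock lock;
static long counter;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        ticket_lock(&lock);
        counter++;
        ticket_unlock(&lock);
    }
    return NULL;
}

int main(void)    /* build with: cc -pthread */
{
    pthread_t t[4];
    for (int i = 0; i < 4; i++) pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < 4; i++) pthread_join(t[i], NULL);
    printf("counter = %ld (expected %d)\n", counter, 4 * 100000);
    return 0;
}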
2005 Jun 07
3
Error while creating domains
I am trying to start a large number of SMP domains (> 50). However, I am unable to create more than 7 domains. When I try creating the 8th domain, I get this error: Using config file "myconf7". VIRTUAL MEMORY ARRANGEMENT: Loaded kernel: 0xc0100000->0xc0344c24 Init. ramdisk: 0xc0345000->0xc0345000 Phys-Mach map: 0xc0345000->0xc0347800 Page tables:
2020 Apr 30
0
[RFC/PATCH 1/1] virtio: Introduce MMIO ops
On Thu, Apr 30, 2020 at 07:03:21PM +0530, Srivatsa Vaddagiri wrote: > * Jan Kiszka <jan.kiszka at siemens.com> [2020-04-30 14:59:50]: > > > >I believe ivshmem2_virtio requires hypervisor to support PCI device emulation > > >(for life-cycle management of VMs), which our hypervisor may not support. PCI is mostly just 2 registers. One sets the affected device, one the
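The "PCI is mostly just 2 registers" remark refers to the legacy x86 configuration mechanism: one I/O port (0xCF8, CONFIG_ADDRESS) selects bus/device/function/register, the other (0xCFC, CONFIG_DATA) reads or writes that register. A hedged sketch follows; it is Linux/x86 only, needs root for iopl(), and modern systems normally use ECAM/MMIO config space instead.

/* Legacy PCI configuration mechanism #1: CONFIG_ADDRESS at 0xCF8,
 * CONFIG_DATA at 0xCFC. Linux/x86 user space, requires root. */
#include <stdint.h>
#include <stdio.h>
#include <sys/io.h>

static uint32_t pci_cfg_read32(unsigned bus, unsigned dev, unsigned fn, unsigned reg)
{
    uint32_t addr = (1u << 31)                       /* enable bit */
                  | (bus << 16) | (dev << 11) | (fn << 8)
                  | (reg & 0xFC);                    /* dword-aligned offset */
    outl(addr, 0xCF8);                               /* select device/register */
    return inl(0xCFC);                               /* read its data */
}

int main(void)
{
    if (iopl(3) != 0) {                              /* grant I/O port access */
        perror("iopl");
        return 1;
    }
    /* Vendor/device ID of bus 0, device 0, function 0. */
    uint32_t id = pci_cfg_read32(0, 0, 0, 0x00);
    printf("vendor 0x%04x device 0x%04x\n", id & 0xFFFF, id >> 16);
    return 0;
}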
2012 Apr 23
8
[PATCH RFC V6 0/5] kvm : Paravirt-spinlock support for KVM guests
The 5-patch series to follow this email extends the KVM hypervisor and the Linux guest running on it to support pv-ticket spinlocks, based on Xen's implementation. One hypercall is introduced in the KVM hypervisor that allows a vcpu to kick another vcpu out of halt state. The blocking of the vcpu is done using halt() in the (lock_spinning) slowpath. Note: 1) patch is based on 3.4-rc3 + ticketlock
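The halt/kick protocol described here, where a waiting vCPU halts instead of burning cycles and the lock releaser issues a hypercall to unhalt exactly that vCPU, has a close user-space analogue in futexes, which may make the flow easier to picture. The sketch below is only that analogy: FUTEX_WAIT stands in for halt() in the lock_spinning slowpath and FUTEX_WAKE stands in for the kick hypercall; it is not the KVM interface itself.

#define _GNU_SOURCE
#include <linux/futex.h>
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

static atomic_int flag;   /* 0 = "lock still held", 1 = "released, go" */

static long futex(atomic_int *addr, int op, int val)
{
    return syscall(SYS_futex, addr, op, val, NULL, NULL, 0);
}

/* Waiter: analogue of the guest's lock_spinning() slowpath -- after
 * deciding the wait will be long, block instead of spinning. */
static void *waiter(void *arg)
{
    (void)arg;
    while (atomic_load(&flag) == 0)
        futex(&flag, FUTEX_WAIT, 0);      /* "halt" until kicked */
    printf("waiter: kicked awake, lock is mine\n");
    return NULL;
}

int main(void)    /* build with: cc -pthread */
{
    pthread_t t;
    pthread_create(&t, NULL, waiter, NULL);
    sleep(1);                             /* pretend to hold the lock */

    atomic_store(&flag, 1);               /* release the lock ... */
    futex(&flag, FUTEX_WAKE, 1);          /* ... and "kick" one waiter */

    pthread_join(t, NULL);
    return 0;
}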
2013 Aug 06
6
[PATCH V12 0/5] Paravirtualized ticket spinlocks for KVM host
This series forms the kvm host part of paravirtual spinlocks, based against the kvm tree. Please refer to https://lkml.org/lkml/2013/8/6/178 for the kvm guest part of the series. Please note that "kvm uapi: Add KICK_CPU and PV_UNHALT definition to uapi" is a common patch for both guest and host. Srivatsa Vaddagiri (1): kvm hypervisor : Add a hypercall to KVM hypervisor to support pv-ticketlocks
2020 Apr 28
0
[PATCH 5/5] virtio: Add bounce DMA ops
Hi Srivatsa, Thank you for the patch! Perhaps something to improve: [auto build test WARNING on vhost/linux-next] [also build test WARNING on xen-tip/linux-next linus/master v5.7-rc3 next-20200428] [cannot apply to swiotlb/linux-next] [if your patch is applied to the wrong git tree, please drop us a note to help improve the system. BTW, we also suggest to use '--base' option to specify
2015 Mar 12
1
[PATCH v2 log fixed] virtio_mmio: fix endian-ness for mmio
Subject: [PATCH] virtio_mmio: fix access width for mmio Going over the virtio mmio code, I noticed that it doesn't correctly access modern device config values using "natural" accessors: it uses readb to get/set them byte by byte, while the virtio 1.0 spec explicitly states: 4.2.2.2 Driver Requirements: MMIO Device Register Layout ... The driver MUST only
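The spec requirement being quoted (virtio 1.0, "MMIO Device Register Layout") is that device configuration fields be accessed with loads and stores of the field's own width, not assembled byte by byte. Outside the kernel the difference looks roughly like the sketch below; le32toh/le16toh play the role of the kernel's readl/readw plus endian conversion, and the byte layout here is just an illustration, not the real virtio config layout.

#include <endian.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Pretend this is a little-endian device config window. */
static const uint8_t cfg[8] = { 0x78, 0x56, 0x34, 0x12,   /* 32-bit field */
                                0xcd, 0xab, 0x00, 0x00 }; /* 16-bit field */

int main(void)
{
    /* Correct: one naturally-sized access per field, then convert from
     * the device's little-endian byte order to host order. */
    uint32_t f32;
    uint16_t f16;
    memcpy(&f32, cfg + 0, sizeof(f32));
    memcpy(&f16, cfg + 4, sizeof(f16));
    printf("32-bit field: 0x%08x\n", le32toh(f32));
    printf("16-bit field: 0x%04x\n",  le16toh(f16));

    /* What the fix removes: assembling the field one byte at a time,
     * which violates the access-width rule on real MMIO registers and
     * invites endian mistakes. */
    uint32_t byte_by_byte = cfg[0] | (cfg[1] << 8) | (cfg[2] << 16)
                          | ((uint32_t)cfg[3] << 24);
    printf("byte-by-byte:  0x%08x\n", byte_by_byte);
    return 0;
}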