Jason Wang
2019-Sep-10 08:19 UTC
[RFC PATCH 0/4] mdev based hardware virtio offloading support
Hi all: There are hardware that can do virtio datapath offloading while having its own control path. This path tries to implement a mdev based unified API to support using kernel virtio driver to drive those devices. This is done by introducing a new mdev transport for virtio (virtio_mdev) and register itself as a new kind of mdev driver. Then it provides a unified way for kernel virtio driver to talk with mdev device implementation. Though the series only contain kernel driver support, the goal is to make the transport generic enough to support userspace drivers. This means vhost-mdev[1] could be built on top as well by resuing the transport. A sample driver is also implemented which simulate a virito-net loopback ethernet device on top of vringh + workqueue. This could be used as a reference implementation for real hardware driver. Notes: - Some of the key transport command for vhost-mdev(userspace driver) is not introduced. This includes: 1) set/get virtqueue state (idx etc), this could be simply done by introducing new transport command 2) dirty pages tracking, could be simply done by introducing new transport command 3) set/get device internal state, this requires more thought, of course we can introduce device specific transport command, but it would be better to have a unified API - Current mdev_parent_ops assumes all pointers are userspace pointer, this block the kernel driver, this series just abuse those as kernel pointer and this could be addressed by inventing new parent_ops. - For quick POC, mdev transport was just derived from virtio-MMIO, I'm pretty sure it has lots of space to be optimized, please share your thought. Please review. [1] https://lkml.org/lkml/2019/8/28/35 Jason Wang (4): vringh: fix copy direction of vringh_iov_push_kern() mdev: introduce helper to set per device dma ops virtio: introudce a mdev based transport docs: Sample driver to demonstrate how to implement virtio-mdev framework drivers/vfio/mdev/Kconfig | 7 + drivers/vfio/mdev/Makefile | 1 + drivers/vfio/mdev/mdev_core.c | 7 + drivers/vfio/mdev/virtio_mdev.c | 500 ++++++++++++++++++++ drivers/vhost/vringh.c | 8 +- include/linux/mdev.h | 2 + include/uapi/linux/virtio_mdev.h | 131 ++++++ samples/Kconfig | 7 + samples/vfio-mdev/Makefile | 1 + samples/vfio-mdev/mvnet.c | 766 +++++++++++++++++++++++++++++++ 10 files changed, 1429 insertions(+), 1 deletion(-) create mode 100644 drivers/vfio/mdev/virtio_mdev.c create mode 100644 include/uapi/linux/virtio_mdev.h create mode 100644 samples/vfio-mdev/mvnet.c -- 2.19.1
Jason Wang
2019-Sep-10 08:19 UTC
[RFC PATCH 1/4] vringh: fix copy direction of vringh_iov_push_kern()
We want to copy from iov to buf, so the direction was wrong. Signed-off-by: Jason Wang <jasowang at redhat.com> --- drivers/vhost/vringh.c | 8 +++++++- 1 file changed, 7 insertions(+), 1 deletion(-) diff --git a/drivers/vhost/vringh.c b/drivers/vhost/vringh.c index 08ad0d1f0476..a0a2d74967ef 100644 --- a/drivers/vhost/vringh.c +++ b/drivers/vhost/vringh.c @@ -852,6 +852,12 @@ static inline int xfer_kern(void *src, void *dst, size_t len) return 0; } +static inline int kern_xfer(void *dst, void *src, size_t len) +{ + memcpy(dst, src, len); + return 0; +} + /** * vringh_init_kern - initialize a vringh for a kernelspace vring. * @vrh: the vringh to initialize. @@ -958,7 +964,7 @@ EXPORT_SYMBOL(vringh_iov_pull_kern); ssize_t vringh_iov_push_kern(struct vringh_kiov *wiov, const void *src, size_t len) { - return vringh_iov_xfer(wiov, (void *)src, len, xfer_kern); + return vringh_iov_xfer(wiov, (void *)src, len, kern_xfer); } EXPORT_SYMBOL(vringh_iov_push_kern); -- 2.19.1
Jason Wang
2019-Sep-10 08:19 UTC
[RFC PATCH 2/4] mdev: introduce helper to set per device dma ops
This patch introduces mdev_set_dma_ops() which allows parent to set per device DMA ops. This help for the kernel driver to setup a correct DMA mappings. Signed-off-by: Jason Wang <jasowang at redhat.com> --- drivers/vfio/mdev/mdev_core.c | 7 +++++++ include/linux/mdev.h | 2 ++ 2 files changed, 9 insertions(+) diff --git a/drivers/vfio/mdev/mdev_core.c b/drivers/vfio/mdev/mdev_core.c index b558d4cfd082..eb28552082d7 100644 --- a/drivers/vfio/mdev/mdev_core.c +++ b/drivers/vfio/mdev/mdev_core.c @@ -13,6 +13,7 @@ #include <linux/uuid.h> #include <linux/sysfs.h> #include <linux/mdev.h> +#include <linux/dma-mapping.h> #include "mdev_private.h" @@ -27,6 +28,12 @@ static struct class_compat *mdev_bus_compat_class; static LIST_HEAD(mdev_list); static DEFINE_MUTEX(mdev_list_lock); +void mdev_set_dma_ops(struct mdev_device *mdev, struct dma_map_ops *ops) +{ + set_dma_ops(&mdev->dev, ops); +} +EXPORT_SYMBOL(mdev_set_dma_ops); + struct device *mdev_parent_dev(struct mdev_device *mdev) { return mdev->parent->dev; diff --git a/include/linux/mdev.h b/include/linux/mdev.h index 0ce30ca78db0..7195f40bf8bf 100644 --- a/include/linux/mdev.h +++ b/include/linux/mdev.h @@ -145,4 +145,6 @@ struct device *mdev_parent_dev(struct mdev_device *mdev); struct device *mdev_dev(struct mdev_device *mdev); struct mdev_device *mdev_from_dev(struct device *dev); +void mdev_set_dma_ops(struct mdev_device *mdev, struct dma_map_ops *ops); + #endif /* MDEV_H */ -- 2.19.1
This path introduces a new mdev transport for virtio. This is used to use kernel virtio driver to drive the mediated device that is capable of populating virtqueue directly. A new virtio-mdev driver will be registered to the mdev bus, when a new virtio-mdev device is probed, it will register the device with mdev based config ops. This means, unlike the exist hardware transport, this is a software transport between mdev driver and mdev device. The transport was implemented through: - configuration access was implemented through parent_ops->read()/write() - vq/config callback was implemented through parent_ops->ioctl() This transport is derived from virtio MMIO protocol and was wrote for kernel driver. But for the transport itself, but the design goal is to be generic enough to support userspace driver (this part will be added in the future). Note: - current mdev assume all the parameter of parent_ops was from userspace. This prevents us from implementing the kernel mdev driver. For a quick POC, this patch just abuse those parameter and assume the mdev device implementation will treat them as kernel pointer. This should be addressed in the formal series by extending mdev_parent_ops. - for a quick POC, I just drive the transport from MMIO, I'm pretty there's lot of optimization space for this. Signed-off-by: Jason Wang <jasowang at redhat.com> --- drivers/vfio/mdev/Kconfig | 7 + drivers/vfio/mdev/Makefile | 1 + drivers/vfio/mdev/virtio_mdev.c | 500 +++++++++++++++++++++++++++++++ include/uapi/linux/virtio_mdev.h | 131 ++++++++ 4 files changed, 639 insertions(+) create mode 100644 drivers/vfio/mdev/virtio_mdev.c create mode 100644 include/uapi/linux/virtio_mdev.h diff --git a/drivers/vfio/mdev/Kconfig b/drivers/vfio/mdev/Kconfig index 5da27f2100f9..c488c31fc137 100644 --- a/drivers/vfio/mdev/Kconfig +++ b/drivers/vfio/mdev/Kconfig @@ -16,3 +16,10 @@ config VFIO_MDEV_DEVICE default n help VFIO based driver for Mediated devices. + +config VIRTIO_MDEV_DEVICE + tristate "VIRTIO driver for Mediated devices" + depends on VFIO_MDEV && VIRTIO + default n + help + VIRTIO based driver for Mediated devices. diff --git a/drivers/vfio/mdev/Makefile b/drivers/vfio/mdev/Makefile index 101516fdf375..99d31e29c23e 100644 --- a/drivers/vfio/mdev/Makefile +++ b/drivers/vfio/mdev/Makefile @@ -4,3 +4,4 @@ mdev-y := mdev_core.o mdev_sysfs.o mdev_driver.o obj-$(CONFIG_VFIO_MDEV) += mdev.o obj-$(CONFIG_VFIO_MDEV_DEVICE) += vfio_mdev.o +obj-$(CONFIG_VIRTIO_MDEV_DEVICE) += virtio_mdev.o diff --git a/drivers/vfio/mdev/virtio_mdev.c b/drivers/vfio/mdev/virtio_mdev.c new file mode 100644 index 000000000000..5ff09089297e --- /dev/null +++ b/drivers/vfio/mdev/virtio_mdev.c @@ -0,0 +1,500 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * VIRTIO based driver for Mediated device + * + * Copyright (c) 2019, Red Hat. All rights reserved. + * Author: Jason Wang <jasowang at redhat.com> + * + * Based on Virtio MMIO driver. + */ + +#include <linux/init.h> +#include <linux/module.h> +#include <linux/device.h> +#include <linux/kernel.h> +#include <linux/slab.h> +#include <linux/uuid.h> +#include <linux/mdev.h> +#include <linux/virtio.h> +#include <linux/virtio_config.h> +#include <linux/virtio_ring.h> +#include <uapi/linux/virtio_mdev.h> +#include "mdev_private.h" + +#define DRIVER_VERSION "0.1" +#define DRIVER_AUTHOR "Red Hat Corporation" +#define DRIVER_DESC "VIRTIO based driver for Mediated device" + +#define to_virtio_mdev_device(dev) \ + container_of(dev, struct virtio_mdev_device, vdev) + +struct virtio_mdev_device { + struct virtio_device vdev; + struct mdev_device *mdev; + unsigned long version; + + struct virtqueue **vqs; + spinlock_t lock; +}; + +struct virtio_mdev_vq_info { + /* the actual virtqueue */ + struct virtqueue *vq; + + /* the list node for the virtqueues list */ + struct list_head node; +}; + +static u32 virtio_mdev_readl(struct mdev_device *mdev, + loff_t off) +{ + struct mdev_parent *parent = mdev->parent; + ssize_t len; + u32 val; + + if (unlikely(!parent->ops->read)) + return 0xFFFFFFFF; + + len = parent->ops->read(mdev, (char *)&val, 4, &off); + if (len != 4) + return 0xFFFFFFFF; + + return val; +} + +static void virtio_mdev_writel(struct mdev_device *mdev, + loff_t off, u32 val) +{ + struct mdev_parent *parent = mdev->parent; + + if (unlikely(!parent->ops->write)) + return; + + parent->ops->write(mdev, (char *)&val, 4, &off); + + return; +} + +static void virtio_mdev_get(struct virtio_device *vdev, unsigned offset, + void *buf, unsigned len) +{ + struct virtio_mdev_device *vm_dev = to_virtio_mdev_device(vdev); + struct mdev_device *mdev = vm_dev->mdev; + struct mdev_parent *parent = mdev->parent; + + loff_t off = offset + VIRTIO_MDEV_CONFIG; + + switch (len) { + case 1: + *(u8 *)buf = parent->ops->read(mdev, buf, 1, &off); + break; + case 2: + *(u16 *)buf = parent->ops->read(mdev, buf, 2, &off); + break; + case 4: + *(u32 *)buf = parent->ops->read(mdev, buf, 4, &off); + break; + case 8: + *(u32 *)buf = parent->ops->read(mdev, buf, 4, &off); + *((u32 *)buf + 1) = parent->ops->read(mdev, buf, 4, &off); + break; + default: + BUG(); + } + + return; +} + +static void virtio_mdev_set(struct virtio_device *vdev, unsigned offset, + const void *buf, unsigned len) +{ + struct virtio_mdev_device *vm_dev = to_virtio_mdev_device(vdev); + struct mdev_device *mdev = vm_dev->mdev; + struct mdev_parent *parent = mdev->parent; + loff_t off = offset + VIRTIO_MDEV_CONFIG; + + switch (len) { + case 1: + case 2: + case 4: + case 8: + break; + default: + BUG(); + } + + parent->ops->write(mdev, buf, len, &off); + + return; +} + +static u32 virtio_mdev_generation(struct virtio_device *vdev) +{ + struct virtio_mdev_device *vm_dev = to_virtio_mdev_device(vdev); + + if (vm_dev->version == 1) + return 0; + else + return virtio_mdev_readl(vm_dev->mdev, + VIRTIO_MDEV_CONFIG_GENERATION); +} + +static u8 virtio_mdev_get_status(struct virtio_device *vdev) +{ + struct virtio_mdev_device *vm_dev = to_virtio_mdev_device(vdev); + + return virtio_mdev_readl(vm_dev->mdev, VIRTIO_MDEV_STATUS) & 0xff; +} + +static void virtio_mdev_set_status(struct virtio_device *vdev, u8 status) +{ + struct virtio_mdev_device *vm_dev = to_virtio_mdev_device(vdev); + + virtio_mdev_writel(vm_dev->mdev, VIRTIO_MDEV_STATUS, status); +} + +static void virtio_mdev_reset(struct virtio_device *vdev) +{ + struct virtio_mdev_device *vm_dev = to_virtio_mdev_device(vdev); + + virtio_mdev_writel(vm_dev->mdev, VIRTIO_MDEV_STATUS, 0); +} + +static bool virtio_mdev_notify(struct virtqueue *vq) +{ + struct virtio_mdev_device *vm_dev = to_virtio_mdev_device(vq->vdev); + + /* We write the queue's selector into the notification register to + * signal the other end */ + virtio_mdev_writel(vm_dev->mdev, VIRTIO_MDEV_QUEUE_NOTIFY, + vq->index); + return true; +} + +static irqreturn_t virtio_mdev_config_cb(void *private) +{ + struct virtio_mdev_device *vm_dev = private; + + virtio_config_changed(&vm_dev->vdev); + + return IRQ_HANDLED; +} + +static irqreturn_t virtio_mdev_virtqueue_cb(void *private) +{ + struct virtio_mdev_vq_info *info = private; + + return vring_interrupt(0, info->vq); +} + +static struct virtqueue * +virtio_mdev_setup_vq(struct virtio_device *vdev, unsigned index, + void (*callback)(struct virtqueue *vq), + const char *name, bool ctx) +{ + struct virtio_mdev_device *vm_dev = to_virtio_mdev_device(vdev); + struct mdev_device *mdev= vm_dev->mdev; + struct mdev_parent *parent = mdev->parent; + struct virtio_mdev_vq_info *info; + struct virtio_mdev_callback cb; + struct virtqueue *vq; + unsigned long flags; + u32 align, num; + u64 addr; + int err; + + if (!name) + return NULL; + + /* Select the queue we're interested in */ + virtio_mdev_writel(vm_dev->mdev, VIRTIO_MDEV_QUEUE_SEL, index); + + /* Queue shouldn't already be set up. */ + if (virtio_mdev_readl(vm_dev->mdev, VIRTIO_MDEV_QUEUE_READY)) { + err = -ENOENT; + goto error_available; + } + + /* Allocate and fill out our active queue description */ + info = kmalloc(sizeof(*info), GFP_KERNEL); + if (!info) { + err = -ENOMEM; + goto error_kmalloc; + } + + num = virtio_mdev_readl(vm_dev->mdev, VIRTIO_MDEV_QUEUE_NUM_MAX); + if (num == 0) { + err = -ENOENT; + goto error_new_virtqueue; + } + + /* Create the vring */ + align = virtio_mdev_readl(vm_dev->mdev, VIRTIO_MDEV_QUEUE_ALIGN); + vq = vring_create_virtqueue(index, num, align, vdev, + true, true, ctx, + virtio_mdev_notify, callback, name); + if (!vq) { + err = -ENOMEM; + goto error_new_virtqueue; + } + + /* Setup virtqueue callback */ + cb.callback = virtio_mdev_virtqueue_cb; + cb.private = info; + err = parent->ops->ioctl(mdev, VIRTIO_MDEV_SET_VQ_CALLBACK, + (unsigned long)&cb); + if (err) { + err = -EINVAL; + goto error_callback; + } + + virtio_mdev_writel(vm_dev->mdev, VIRTIO_MDEV_QUEUE_NUM, + virtqueue_get_vring_size(vq)); + addr = virtqueue_get_desc_addr(vq); + virtio_mdev_writel(vm_dev->mdev, VIRTIO_MDEV_QUEUE_DESC_LOW, (u32)addr); + virtio_mdev_writel(vm_dev->mdev, VIRTIO_MDEV_QUEUE_DESC_HIGH, + (u32)(addr >> 32)); + + addr = virtqueue_get_avail_addr(vq); + virtio_mdev_writel(vm_dev->mdev, VIRTIO_MDEV_QUEUE_AVAIL_LOW, (u32)addr); + virtio_mdev_writel(vm_dev->mdev, VIRTIO_MDEV_QUEUE_AVAIL_HIGH, + (u32)(addr >> 32)); + + addr = virtqueue_get_used_addr(vq); + virtio_mdev_writel(vm_dev->mdev, VIRTIO_MDEV_QUEUE_USED_LOW, (u32)addr); + virtio_mdev_writel(vm_dev->mdev, VIRTIO_MDEV_QUEUE_USED_HIGH, (u32)(addr >> 32)); + + virtio_mdev_writel(vm_dev->mdev, VIRTIO_MDEV_QUEUE_READY, 1); + + vq->priv = info; + info->vq = vq; + + return vq; + +error_callback: + vring_del_virtqueue(vq); +error_new_virtqueue: + virtio_mdev_writel(vm_dev->mdev, VIRTIO_MDEV_QUEUE_READY, 0); + WARN_ON(virtio_mdev_readl(vm_dev->mdev, VIRTIO_MDEV_QUEUE_READY)); + kfree(info); +error_kmalloc: +error_available: + return ERR_PTR(err); + +} + +static void virtio_mdev_del_vq(struct virtqueue *vq) +{ + struct virtio_mdev_device *vm_dev = to_virtio_mdev_device(vq->vdev); + struct virtio_mdev_vq_info *info = vq->priv; + unsigned long flags; + unsigned int index = vq->index; + + /* Select and deactivate the queue */ + virtio_mdev_writel(vm_dev->mdev, VIRTIO_MDEV_QUEUE_SEL, index); + virtio_mdev_writel(vm_dev->mdev,VIRTIO_MDEV_QUEUE_READY, 0); + WARN_ON(virtio_mdev_readl(vm_dev->mdev, VIRTIO_MDEV_QUEUE_READY)); + + vring_del_virtqueue(vq); + + kfree(info); +} + +static void virtio_mdev_del_vqs(struct virtio_device *vdev) +{ + struct virtqueue *vq, *n; + + list_for_each_entry_safe(vq, n, &vdev->vqs, list) + virtio_mdev_del_vq(vq); + + return; +} + +static int virtio_mdev_find_vqs(struct virtio_device *vdev, unsigned nvqs, + struct virtqueue *vqs[], + vq_callback_t *callbacks[], + const char * const names[], + const bool *ctx, + struct irq_affinity *desc) +{ + struct virtio_mdev_device *vm_dev = to_virtio_mdev_device(vdev); + struct mdev_device *mdev = vm_dev->mdev; + struct mdev_parent *parent = mdev->parent; + struct virtio_mdev_callback cb; + int i, err, queue_idx = 0; + vm_dev->vqs = kmalloc_array(queue_idx, sizeof(*vm_dev->vqs), + GFP_KERNEL); + if (!vm_dev->vqs) + return -ENOMEM; + + for (i = 0; i < nvqs; ++i) { + if (!names[i]) { + vqs[i] = NULL; + continue; + } + + vqs[i] = virtio_mdev_setup_vq(vdev, queue_idx++, + callbacks[i], names[i], ctx ? + ctx[i] : false); + if (IS_ERR(vqs[i])) { + err = PTR_ERR(vqs[i]); + goto err_setup_vq; + } + } + + cb.callback = virtio_mdev_config_cb; + cb.private = vm_dev; + err = parent->ops->ioctl(mdev, VIRTIO_MDEV_SET_CONFIG_CALLBACK, + (unsigned long)&cb); + if (err) + goto err_setup_vq; + + return 0; + +err_setup_vq: + kfree(vm_dev->vqs); + virtio_mdev_del_vqs(vdev); + return err; +} + +static u64 virtio_mdev_get_features(struct virtio_device *vdev) +{ + struct virtio_mdev_device *vm_dev = to_virtio_mdev_device(vdev); + u64 features; + + virtio_mdev_writel(vm_dev->mdev, VIRTIO_MDEV_DEVICE_FEATURES_SEL, 1); + features = virtio_mdev_readl(vm_dev->mdev, VIRTIO_MDEV_DEVICE_FEATURES); + features <<= 32; + + virtio_mdev_writel(vm_dev->mdev, VIRTIO_MDEV_DEVICE_FEATURES_SEL, 0); + features |= virtio_mdev_readl(vm_dev->mdev, VIRTIO_MDEV_DEVICE_FEATURES); + + return features; +} + +static int virtio_mdev_finalize_features(struct virtio_device *vdev) +{ + struct virtio_mdev_device *vm_dev = to_virtio_mdev_device(vdev); + + /* Give virtio_ring a chance to accept features. */ + vring_transport_features(vdev); + + virtio_mdev_writel(vm_dev->mdev, VIRTIO_MDEV_DRIVER_FEATURES_SEL, 1); + virtio_mdev_writel(vm_dev->mdev, VIRTIO_MDEV_DRIVER_FEATURES, + (u32)(vdev->features >> 32)); + + virtio_mdev_writel(vm_dev->mdev, VIRTIO_MDEV_DRIVER_FEATURES_SEL, 0); + virtio_mdev_writel(vm_dev->mdev, VIRTIO_MDEV_DRIVER_FEATURES, + (u32)vdev->features); + + return 0; +} + +static const char *virtio_mdev_bus_name(struct virtio_device *vdev) +{ + struct virtio_mdev_device *vm_dev = to_virtio_mdev_device(vdev); + struct mdev_device *mdev = vm_dev->mdev; + + return dev_name(&mdev->dev); +} + +static const struct virtio_config_ops virtio_mdev_config_ops = { + .get = virtio_mdev_get, + .set = virtio_mdev_set, + .generation = virtio_mdev_generation, + .get_status = virtio_mdev_get_status, + .set_status = virtio_mdev_set_status, + .reset = virtio_mdev_reset, + .find_vqs = virtio_mdev_find_vqs, + .del_vqs = virtio_mdev_del_vqs, + .get_features = virtio_mdev_get_features, + .finalize_features = virtio_mdev_finalize_features, + .bus_name = virtio_mdev_bus_name, +}; + +static void virtio_mdev_release_dev(struct device *_d) +{ + struct virtio_device *vdev + container_of(_d, struct virtio_device, dev); + struct virtio_mdev_device *vm_dev + container_of(vdev, struct virtio_mdev_device, vdev); + + devm_kfree(_d, vm_dev); +} + +static int virtio_mdev_probe(struct device *dev) +{ + struct mdev_device *mdev = to_mdev_device(dev); + struct virtio_mdev_device *vm_dev; + unsigned long magic; + int rc; + + magic = virtio_mdev_readl(mdev, VIRTIO_MDEV_MAGIC_VALUE); + if (magic != ('v' | 'i' << 8 | 'r' << 16 | 't' << 24)) { + dev_warn(dev, "Wrong magic value 0x%08lx!\n", magic); + return -ENODEV; + } + + vm_dev = devm_kzalloc(dev, sizeof(*vm_dev), GFP_KERNEL); + if (!vm_dev) + return -ENOMEM; + + vm_dev->vdev.dev.parent = dev; + vm_dev->vdev.dev.release = virtio_mdev_release_dev; + vm_dev->vdev.config = &virtio_mdev_config_ops; + vm_dev->mdev = mdev; + vm_dev->vqs = NULL; + spin_lock_init(&vm_dev->lock); + + vm_dev->version = virtio_mdev_readl(mdev, VIRTIO_MDEV_VERSION); + if (vm_dev->version != 1) { + dev_err(dev, "Version %ld not supported!\n", + vm_dev->version); + return -ENXIO; + } + + vm_dev->vdev.id.device = virtio_mdev_readl(mdev, VIRTIO_MDEV_DEVICE_ID); + if (vm_dev->vdev.id.device == 0) + return -ENODEV; + + vm_dev->vdev.id.vendor = virtio_mdev_readl(mdev, VIRTIO_MDEV_VENDOR_ID); + rc = register_virtio_device(&vm_dev->vdev); + if (rc) + put_device(dev); + + dev_set_drvdata(dev, vm_dev); + + return rc; + +} + +static void virtio_mdev_remove(struct device *dev) +{ + struct virtio_mdev_device *vm_dev = dev_get_drvdata(dev); + + unregister_virtio_device(&vm_dev->vdev); +} + +static struct mdev_driver virtio_mdev_driver = { + .name = "virtio_mdev", + .probe = virtio_mdev_probe, + .remove = virtio_mdev_remove, +}; + +static int __init virtio_mdev_init(void) +{ + return mdev_register_driver(&virtio_mdev_driver, THIS_MODULE); +} + +static void __exit virtio_mdev_exit(void) +{ + mdev_unregister_driver(&virtio_mdev_driver); +} + +module_init(virtio_mdev_init) +module_exit(virtio_mdev_exit) + +MODULE_VERSION(DRIVER_VERSION); +MODULE_LICENSE("GPL v2"); +MODULE_AUTHOR(DRIVER_AUTHOR); +MODULE_DESCRIPTION(DRIVER_DESC); diff --git a/include/uapi/linux/virtio_mdev.h b/include/uapi/linux/virtio_mdev.h new file mode 100644 index 000000000000..8040de6b960a --- /dev/null +++ b/include/uapi/linux/virtio_mdev.h @@ -0,0 +1,131 @@ +/* + * Virtio mediated device driver + * + * Copyright 2019, Red Hat Corp. + * + * Based on Virtio MMIO driver by ARM Ltd, copyright ARM Ltd. 2011 + * + * This header is BSD licensed so anyone can use the definitions to implement + * compatible drivers/servers. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * 1. Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * 2. Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in the + * documentation and/or other materials provided with the distribution. + * 3. Neither the name of IBM nor the names of its contributors + * may be used to endorse or promote products derived from this software + * without specific prior written permission. + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS ``AS IS'' AND + * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE + * ARE DISCLAIMED. IN NO EVENT SHALL IBM OR CONTRIBUTORS BE LIABLE + * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL + * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS + * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) + * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT + * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY + * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF + * SUCH DAMAGE. + */ +#ifndef _LINUX_VIRTIO_MDEV_H +#define _LINUX_VIRTIO_MDEV_H + +#include <linux/interrupt.h> +#include <linux/vringh.h> +#include <uapi/linux/virtio_net.h> + +/* + * Ioctls + */ + +struct virtio_mdev_callback { + irqreturn_t (*callback)(void *); + void *private; +}; + +#define VIRTIO_MDEV 0xAF +#define VIRTIO_MDEV_SET_VQ_CALLBACK _IOW(VIRTIO_MDEV, 0x00, \ + struct virtio_mdev_callback) +#define VIRTIO_MDEV_SET_CONFIG_CALLBACK _IOW(VIRTIO_MDEV, 0x01, \ + struct virtio_mdev_callback) + +#define VIRTIO_MDEV_DEVICE_API_STRING "virtio-mdev" + +/* + * Control registers + */ + +/* Magic value ("virt" string) - Read Only */ +#define VIRTIO_MDEV_MAGIC_VALUE 0x000 + +/* Virtio device version - Read Only */ +#define VIRTIO_MDEV_VERSION 0x004 + +/* Virtio device ID - Read Only */ +#define VIRTIO_MDEV_DEVICE_ID 0x008 + +/* Virtio vendor ID - Read Only */ +#define VIRTIO_MDEV_VENDOR_ID 0x00c + +/* Bitmask of the features supported by the device (host) + * (32 bits per set) - Read Only */ +#define VIRTIO_MDEV_DEVICE_FEATURES 0x010 + +/* Device (host) features set selector - Write Only */ +#define VIRTIO_MDEV_DEVICE_FEATURES_SEL 0x014 + +/* Bitmask of features activated by the driver (guest) + * (32 bits per set) - Write Only */ +#define VIRTIO_MDEV_DRIVER_FEATURES 0x020 + +/* Activated features set selector - Write Only */ +#define VIRTIO_MDEV_DRIVER_FEATURES_SEL 0x024 + +/* Queue selector - Write Only */ +#define VIRTIO_MDEV_QUEUE_SEL 0x030 + +/* Maximum size of the currently selected queue - Read Only */ +#define VIRTIO_MDEV_QUEUE_NUM_MAX 0x034 + +/* Queue size for the currently selected queue - Write Only */ +#define VIRTIO_MDEV_QUEUE_NUM 0x038 + +/* Ready bit for the currently selected queue - Read Write */ +#define VIRTIO_MDEV_QUEUE_READY 0x044 + +/* Alignment of virtqueue - Read Only */ +#define VIRTIO_MDEV_QUEUE_ALIGN 0x048 + +/* Queue notifier - Write Only */ +#define VIRTIO_MDEV_QUEUE_NOTIFY 0x050 + +/* Device status register - Read Write */ +#define VIRTIO_MDEV_STATUS 0x060 + +/* Selected queue's Descriptor Table address, 64 bits in two halves */ +#define VIRTIO_MDEV_QUEUE_DESC_LOW 0x080 +#define VIRTIO_MDEV_QUEUE_DESC_HIGH 0x084 + +/* Selected queue's Available Ring address, 64 bits in two halves */ +#define VIRTIO_MDEV_QUEUE_AVAIL_LOW 0x090 +#define VIRTIO_MDEV_QUEUE_AVAIL_HIGH 0x094 + +/* Selected queue's Used Ring address, 64 bits in two halves */ +#define VIRTIO_MDEV_QUEUE_USED_LOW 0x0a0 +#define VIRTIO_MDEV_QUEUE_USED_HIGH 0x0a4 + +/* Configuration atomicity value */ +#define VIRTIO_MDEV_CONFIG_GENERATION 0x0fc + +/* The config space is defined by each driver as + * the per-driver configuration space - Read Write */ +#define VIRTIO_MDEV_CONFIG 0x100 + +#endif + + +/* Ready bit for the currently selected queue - Read Write */ -- 2.19.1
Jason Wang
2019-Sep-10 08:19 UTC
[RFC PATCH 4/4] docs: Sample driver to demonstrate how to implement virtio-mdev framework
This sample driver creates mdev device that simulate virtio net device over virtio mdev transport. The device is implemented through vringh and workqueue. Signed-off-by: Jason Wang <jasowang at redhat.com> --- samples/Kconfig | 7 + samples/vfio-mdev/Makefile | 1 + samples/vfio-mdev/mvnet.c | 766 +++++++++++++++++++++++++++++++++++++ 3 files changed, 774 insertions(+) create mode 100644 samples/vfio-mdev/mvnet.c diff --git a/samples/Kconfig b/samples/Kconfig index c8dacb4dda80..a1a1ca2c00b7 100644 --- a/samples/Kconfig +++ b/samples/Kconfig @@ -131,6 +131,13 @@ config SAMPLE_VFIO_MDEV_MDPY mediated device. It is a simple framebuffer and supports the region display interface (VFIO_GFX_PLANE_TYPE_REGION). +config SAMPLE_VIRTIO_MDEV_NET + tristate "Build virtio mdev net example mediated device sample code -- loadable modules only" + depends on VIRTIO_MDEV_DEVICE && VHOST_RING && m + help + Build a networking sample device for use as a virtio + mediated device. + config SAMPLE_VFIO_MDEV_MDPY_FB tristate "Build VFIO mdpy example guest fbdev driver -- loadable module only" depends on FB && m diff --git a/samples/vfio-mdev/Makefile b/samples/vfio-mdev/Makefile index 10d179c4fdeb..f34af90ed0a0 100644 --- a/samples/vfio-mdev/Makefile +++ b/samples/vfio-mdev/Makefile @@ -3,3 +3,4 @@ obj-$(CONFIG_SAMPLE_VFIO_MDEV_MTTY) += mtty.o obj-$(CONFIG_SAMPLE_VFIO_MDEV_MDPY) += mdpy.o obj-$(CONFIG_SAMPLE_VFIO_MDEV_MDPY_FB) += mdpy-fb.o obj-$(CONFIG_SAMPLE_VFIO_MDEV_MBOCHS) += mbochs.o +obj-$(CONFIG_SAMPLE_VIRTIO_MDEV_NET) += mvnet.o diff --git a/samples/vfio-mdev/mvnet.c b/samples/vfio-mdev/mvnet.c new file mode 100644 index 000000000000..da295b41955e --- /dev/null +++ b/samples/vfio-mdev/mvnet.c @@ -0,0 +1,766 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Mediated virtual virtio-net device driver. + * + * Copyright (c) 2019, Red Hat Inc. All rights reserved. + * Author: Jason Wang <jasowang at redhat.com> + * + * Sample driver that creates mdev device that simulates ethernet + * device virtio mdev transport. + */ + +#include <linux/init.h> +#include <linux/module.h> +#include <linux/device.h> +#include <linux/kernel.h> +#include <linux/fs.h> +#include <linux/poll.h> +#include <linux/slab.h> +#include <linux/sched.h> +#include <linux/wait.h> +#include <linux/uuid.h> +#include <linux/iommu.h> +#include <linux/sysfs.h> +#include <linux/file.h> +#include <linux/etherdevice.h> +#include <linux/mdev.h> +#include <uapi/linux/virtio_mdev.h> + +#define VERSION_STRING "0.1" +#define DRIVER_AUTHOR "NVIDIA Corporation" + +#define MVNET_CLASS_NAME "mvnet" + +#define MVNET_NAME "mvnet" + +/* + * Global Structures + */ + +static struct mvnet_dev { + struct class *vd_class; + struct idr vd_idr; + struct device dev; +} mvnet_dev; + +struct mvnet_virtqueue { + struct vringh vring; + struct vringh_kiov iov; + unsigned short head; + bool ready; + u32 desc_addr_lo; + u32 desc_addr_hi; + u32 device_addr_lo; + u32 device_addr_hi; + u32 driver_addr_lo; + u32 driver_addr_hi; + u64 desc_addr; + u64 device_addr; + u64 driver_addr; + void *private; + irqreturn_t (*cb)(void *); +}; + +#define MVNET_QUEUE_ALIGN PAGE_SIZE +#define MVNET_QUEUE_MAX 256 +#define MVNET_MAGIC_VALUE ('v' | 'i' << 8 | 'r' << 16 | 't' << 24) +#define MVNET_VERSION 0x1 /* Implies virtio 1.0 */ +#define MVNET_DEVICE_ID 0x1 /* network card */ +#define MVNET_VENDOR_ID 0 /* is this correct ? */ +#define MVNET_DEVICE_FEATURES VIRTIO_F_VERSION_1 + +u64 mvnet_features = (1ULL << VIRTIO_F_ANY_LAYOUT) | + (1ULL << VIRTIO_F_VERSION_1) | + (1ULL << VIRTIO_F_IOMMU_PLATFORM) ; + +/* State of each mdev device */ +struct mvnet_state { + struct mvnet_virtqueue vqs[2]; + struct work_struct work; + spinlock_t lock; + struct mdev_device *mdev; + struct virtio_net_config config; + struct virtio_mdev_callback *cbs; + void *buffer; + u32 queue_sel; + u32 driver_features_sel; + u32 driver_features[2]; + u32 device_features_sel; + u32 status; + u32 generation; + u32 num; + struct list_head next; +}; + +static struct mutex mdev_list_lock; +static struct list_head mdev_devices_list; + +static void mvnet_queue_ready(struct mvnet_state *mvnet, unsigned idx) +{ + struct mvnet_virtqueue *vq = &mvnet->vqs[idx]; + int ret; + + vq->desc_addr = (u64)vq->desc_addr_hi << 32 | vq->desc_addr_lo; + vq->device_addr = (u64)vq->device_addr_hi << 32 | vq->device_addr_lo; + vq->driver_addr = (u64)vq->driver_addr_hi << 32 | vq->driver_addr_lo; + + ret = vringh_init_kern(&vq->vring, mvnet_features, MVNET_QUEUE_MAX, + false, (struct vring_desc *)vq->desc_addr, + (struct vring_avail *)vq->driver_addr, + (struct vring_used *)vq->device_addr); +} + +static ssize_t mvnet_read_config(struct mdev_device *mdev, + u32 *val, loff_t pos) +{ + struct mvnet_state *mvnet; + struct mvnet_virtqueue *vq; + u32 queue_sel; + + if (!mdev || !val) + return -EINVAL; + + mvnet = mdev_get_drvdata(mdev); + if (!mvnet) { + pr_err("%s mvnet not found\n", __func__); + return -EINVAL; + } + + queue_sel = mvnet->queue_sel; + vq = &mvnet->vqs[queue_sel]; + + switch (pos) { + case VIRTIO_MDEV_MAGIC_VALUE: + *val = MVNET_MAGIC_VALUE; + break; + case VIRTIO_MDEV_VERSION: + *val = MVNET_VERSION; + break; + case VIRTIO_MDEV_DEVICE_ID: + *val = MVNET_DEVICE_ID; + break; + case VIRTIO_MDEV_VENDOR_ID: + *val = MVNET_VENDOR_ID; + break; + case VIRTIO_MDEV_DEVICE_FEATURES: + if (mvnet->device_features_sel) + *val = mvnet_features >> 32; + else + *val = mvnet_features; + break; + case VIRTIO_MDEV_QUEUE_NUM_MAX: + *val = MVNET_QUEUE_MAX; + break; + case VIRTIO_MDEV_QUEUE_READY: + *val = vq->ready; + break; + case VIRTIO_MDEV_QUEUE_ALIGN: + *val = MVNET_QUEUE_ALIGN; + break; + case VIRTIO_MDEV_STATUS: + *val = mvnet->status; + break; + case VIRTIO_MDEV_QUEUE_DESC_LOW: + *val = vq->desc_addr_lo; + break; + case VIRTIO_MDEV_QUEUE_DESC_HIGH: + *val = vq->desc_addr_hi; + break; + case VIRTIO_MDEV_QUEUE_AVAIL_LOW: + *val = vq->driver_addr_lo; + break; + case VIRTIO_MDEV_QUEUE_AVAIL_HIGH: + *val = vq->driver_addr_hi; + break; + case VIRTIO_MDEV_QUEUE_USED_LOW: + *val = vq->device_addr_lo; + break; + case VIRTIO_MDEV_QUEUE_USED_HIGH: + *val = vq->device_addr_hi; + break; + case VIRTIO_MDEV_CONFIG_GENERATION: + *val = 1; + break; + default: + pr_err("Unsupported mdev read offset at 0x%x\n", pos); + break; + } + + return 4; +} + +static ssize_t mvnet_read_net_config(struct mdev_device *mdev, + char *buf, size_t count, loff_t pos) +{ + struct mvnet_state *mvnet = mdev_get_drvdata(mdev); + + if (!mvnet) { + pr_err("%s mvnet not found\n", __func__); + return -EINVAL; + } + + if (pos + count > sizeof(mvnet->config)) + return -EINVAL; + + memcpy(buf, &mvnet->config + (unsigned)pos, count); + + return count; +} + +static void mvnet_vq_reset(struct mvnet_virtqueue *vq) +{ + vq->ready = 0; + vq->desc_addr_lo = vq->desc_addr_hi = 0; + vq->device_addr_lo = vq->device_addr_hi = 0; + vq->driver_addr_lo = vq->driver_addr_hi = 0; + vq->desc_addr = 0; + vq->driver_addr = 0; + vq->device_addr = 0; + vringh_init_kern(&vq->vring, mvnet_features, MVNET_QUEUE_MAX, + false, 0, 0, 0); +} + +static void mvnet_reset(struct mvnet_state *mvnet) +{ + int i; + + for (i = 0; i < 2; i++) + mvnet_vq_reset(&mvnet->vqs[i]); + + mvnet->queue_sel = 0; + mvnet->driver_features_sel = 0; + mvnet->device_features_sel = 0; + mvnet->status = 0; + ++mvnet->generation; +} + +static ssize_t mvnet_write_config(struct mdev_device *mdev, + u32 *val, loff_t pos) +{ + struct mvnet_state *mvnet; + struct mvnet_virtqueue *vq; + u32 queue_sel; + + if (!mdev || !val) + return -EINVAL; + + mvnet = mdev_get_drvdata(mdev); + if (!mvnet) { + pr_err("%s mvnet not found\n", __func__); + return -EINVAL; + } + + queue_sel = mvnet->queue_sel; + vq = &mvnet->vqs[queue_sel]; + + switch (pos) { + case VIRTIO_MDEV_DEVICE_FEATURES_SEL: + mvnet->device_features_sel = *val; + break; + case VIRTIO_MDEV_DRIVER_FEATURES: + mvnet->driver_features[mvnet->driver_features_sel] = *val; + break; + case VIRTIO_MDEV_DRIVER_FEATURES_SEL: + mvnet->driver_features_sel = *val; + break; + case VIRTIO_MDEV_QUEUE_SEL: + mvnet->queue_sel = *val; + break; + case VIRTIO_MDEV_QUEUE_NUM: + mvnet->num = *val; + break; + case VIRTIO_MDEV_QUEUE_READY: + vq->ready = *val; + if (vq->ready) { + spin_lock(&mvnet->lock); + mvnet_queue_ready(mvnet, queue_sel); + spin_unlock(&mvnet->lock); + } + break; + case VIRTIO_MDEV_QUEUE_NOTIFY: + if (vq->ready) + schedule_work(&mvnet->work); + break; + case VIRTIO_MDEV_STATUS: + mvnet->status = *val; + if (*val == 0) { + spin_lock(&mvnet->lock); + mvnet_reset(mvnet); + spin_unlock(&mvnet->lock); + } + break; + case VIRTIO_MDEV_QUEUE_DESC_LOW: + vq->desc_addr_lo = *val; + break; + case VIRTIO_MDEV_QUEUE_DESC_HIGH: + vq->desc_addr_hi = *val; + break; + case VIRTIO_MDEV_QUEUE_AVAIL_LOW: + vq->driver_addr_lo = *val; + break; + case VIRTIO_MDEV_QUEUE_AVAIL_HIGH: + vq->driver_addr_hi = *val; + break; + case VIRTIO_MDEV_QUEUE_USED_LOW: + vq->device_addr_lo = *val; + break; + case VIRTIO_MDEV_QUEUE_USED_HIGH: + vq->device_addr_hi = *val; + break; + default: + pr_err("Unsupported write offset! 0x%x\n", pos); + break; + } + spin_unlock(&mvnet->lock); + + return 4; +} + +static void mvnet_work(struct work_struct *work) +{ + struct mvnet_state *mvnet = container_of(work, struct + mvnet_state, work); + struct mvnet_virtqueue *txq = &mvnet->vqs[1]; + struct mvnet_virtqueue *rxq = &mvnet->vqs[0]; + size_t read, write, total_write; + unsigned long flags; + int err; + int pkts = 0; + + spin_lock(&mvnet->lock); + + if (!txq->ready || !rxq->ready) + goto out; + + while (true) { + total_write = 0; + err = vringh_getdesc_kern(&txq->vring, &txq->iov, NULL, + &txq->head, GFP_KERNEL); + if (err <= 0) + break; + + err = vringh_getdesc_kern(&rxq->vring, NULL, &rxq->iov, + &rxq->head, GFP_KERNEL); + if (err <= 0) { + vringh_complete_kern(&txq->vring, txq->head, 0); + break; + } + + while (true) { + read = vringh_iov_pull_kern(&txq->iov, mvnet->buffer, + PAGE_SIZE); + if (read <= 0) + break; + + write = vringh_iov_push_kern(&rxq->iov, mvnet->buffer, + read); + if (write <= 0) + break; + + total_write += write; + } + + /* Make sure data is wrote before advancing index */ + smp_wmb(); + + vringh_complete_kern(&txq->vring, txq->head, 0); + vringh_complete_kern(&rxq->vring, rxq->head, total_write); + + /* Make sure used is visible before rasing the + interrupt */ + smp_wmb(); + + local_bh_disable(); + if (txq->cb) + txq->cb(txq->private); + if (rxq->cb) + rxq->cb(rxq->private); + local_bh_enable(); + + pkts ++; + if (pkts > 4) { + schedule_work(&mvnet->work); + goto out; + } + } + +out: + spin_unlock(&mvnet->lock); +} + +static dma_addr_t mvnet_map_page(struct device *dev, struct page *page, + unsigned long offset, size_t size, + enum dma_data_direction dir, + unsigned long attrs) +{ + /* Vringh can only use VA */ + return page_address(page) + offset; +} + +static void mvnet_unmap_page(struct device *dev, dma_addr_t dma_addr, + size_t size, enum dma_data_direction dir, + unsigned long attrs) +{ + return ; +} + +static void *mvnet_alloc_coherent(struct device *dev, size_t size, + dma_addr_t *dma_addr, gfp_t flag, + unsigned long attrs) +{ + void *ret = kmalloc(size, flag); + + if (ret == NULL) + *dma_addr = DMA_MAPPING_ERROR; + else + *dma_addr = ret; + + return ret; +} + +static void mvnet_free_coherent(struct device *dev, size_t size, + void *vaddr, dma_addr_t dma_addr, + unsigned long attrs) +{ + kfree(dma_addr); +} + +static const struct dma_map_ops mvnet_dma_ops = { + .map_page = mvnet_map_page, + .unmap_page = mvnet_unmap_page, + .alloc = mvnet_alloc_coherent, + .free = mvnet_free_coherent, +}; + +static int mvnet_create(struct kobject *kobj, struct mdev_device *mdev) +{ + struct mvnet_state *mvnet; + struct virtio_net_config *config; + + if (!mdev) + return -EINVAL; + + mvnet = kzalloc(sizeof(struct mvnet_state), GFP_KERNEL); + if (mvnet == NULL) + return -ENOMEM; + + mvnet->buffer = kmalloc(PAGE_SIZE, GFP_KERNEL); + if (!mvnet->buffer) { + kfree(mvnet); + return -ENOMEM; + } + + config = &mvnet->config; + config->mtu = 1500; + config->status = VIRTIO_NET_S_LINK_UP; + eth_random_addr(config->mac); + + INIT_WORK(&mvnet->work, mvnet_work); + + spin_lock_init(&mvnet->lock); + mvnet->mdev = mdev; + mdev_set_drvdata(mdev, mvnet); + + mutex_lock(&mdev_list_lock); + list_add(&mvnet->next, &mdev_devices_list); + mutex_unlock(&mdev_list_lock); + + mdev_set_dma_ops(mdev, &mvnet_dma_ops); + + return 0; +} + +static int mvnet_remove(struct mdev_device *mdev) +{ + struct mvnet_state *mds, *tmp_mds; + struct mvnet_state *mvnet = mdev_get_drvdata(mdev); + int ret = -EINVAL; + + mutex_lock(&mdev_list_lock); + list_for_each_entry_safe(mds, tmp_mds, &mdev_devices_list, next) { + if (mvnet == mds) { + list_del(&mvnet->next); + mdev_set_drvdata(mdev, NULL); + kfree(mvnet->buffer); + kfree(mvnet); + ret = 0; + break; + } + } + mutex_unlock(&mdev_list_lock); + + return ret; +} + +static ssize_t mvnet_read(struct mdev_device *mdev, char __user *buf, + size_t count, loff_t *ppos) +{ + ssize_t ret; + + if (*ppos < VIRTIO_MDEV_CONFIG) { + if (count == 4) + ret = mvnet_read_config(mdev, (u32 *)buf, *ppos); + else + ret = -EINVAL; + *ppos += 4; + } else if (*ppos < VIRTIO_MDEV_CONFIG + sizeof(struct virtio_net_config)) { + ret = mvnet_read_net_config(mdev, buf, count, + *ppos - VIRTIO_MDEV_CONFIG); + *ppos += count; + } else { + ret = -EINVAL; + } + + return ret; +} + +static ssize_t mvnet_write(struct mdev_device *mdev, const char __user *buf, + size_t count, loff_t *ppos) +{ + int ret; + + if (*ppos < VIRTIO_MDEV_CONFIG) { + if (count == 4) + ret = mvnet_write_config(mdev, (u32 *)buf, *ppos); + else + ret = -EINVAL; + *ppos += 4; + } else { + /* No writable net config */ + ret = -EINVAL; + } + + return ret; +} + +static long mvnet_ioctl(struct mdev_device *mdev, unsigned int cmd, + unsigned long arg) +{ + int ret = 0; + struct mvnet_state *mvnet; + struct virtio_mdev_callback *cb; + + if (!mdev) + return -EINVAL; + + mvnet = mdev_get_drvdata(mdev); + if (!mvnet) + return -ENODEV; + + spin_lock(&mvnet->lock); + + switch (cmd) { + case VIRTIO_MDEV_SET_VQ_CALLBACK: + cb = (struct virtio_mdev_callback *)arg; + mvnet->vqs[mvnet->queue_sel].cb = cb->callback; + mvnet->vqs[mvnet->queue_sel].private = cb->private; + break; + case VIRTIO_MDEV_SET_CONFIG_CALLBACK: + break; + default: + pr_err("Not supportted ioctl cmd 0x%x\n", cmd); + ret = -ENOTTY; + break; + } + + spin_unlock(&mvnet->lock); + + return ret; +} + +static int mvnet_open(struct mdev_device *mdev) +{ + pr_info("%s\n", __func__); + return 0; +} + +static void mvnet_close(struct mdev_device *mdev) +{ + pr_info("%s\n", __func__); +} + +static ssize_t +sample_mvnet_dev_show(struct device *dev, struct device_attribute *attr, + char *buf) +{ + return sprintf(buf, "This is phy device\n"); +} + +static DEVICE_ATTR_RO(sample_mvnet_dev); + +static struct attribute *mvnet_dev_attrs[] = { + &dev_attr_sample_mvnet_dev.attr, + NULL, +}; + +static const struct attribute_group mvnet_dev_group = { + .name = "mvnet_dev", + .attrs = mvnet_dev_attrs, +}; + +static const struct attribute_group *mvnet_dev_groups[] = { + &mvnet_dev_group, + NULL, +}; + +static ssize_t +sample_mdev_dev_show(struct device *dev, struct device_attribute *attr, + char *buf) +{ + if (mdev_from_dev(dev)) + return sprintf(buf, "This is MDEV %s\n", dev_name(dev)); + + return sprintf(buf, "\n"); +} + +static DEVICE_ATTR_RO(sample_mdev_dev); + +static struct attribute *mdev_dev_attrs[] = { + &dev_attr_sample_mdev_dev.attr, + NULL, +}; + +static const struct attribute_group mdev_dev_group = { + .name = "vendor", + .attrs = mdev_dev_attrs, +}; + +static const struct attribute_group *mdev_dev_groups[] = { + &mdev_dev_group, + NULL, +}; + +#define MVNET_STRING_LEN 16 + +static ssize_t +name_show(struct kobject *kobj, struct device *dev, char *buf) +{ + char name[MVNET_STRING_LEN]; + const char *name_str = "virtio-net"; + + snprintf(name, MVNET_STRING_LEN, "%s", dev_driver_string(dev)); + if (!strcmp(kobj->name, name)) + return sprintf(buf, "%s\n", name_str); + + return -EINVAL; +} + +static MDEV_TYPE_ATTR_RO(name); + +static ssize_t +available_instances_show(struct kobject *kobj, struct device *dev, char *buf) +{ + return sprintf(buf, "%d\n", INT_MAX); +} + +static MDEV_TYPE_ATTR_RO(available_instances); + +static ssize_t device_api_show(struct kobject *kobj, struct device *dev, + char *buf) +{ + return sprintf(buf, "%s\n", VIRTIO_MDEV_DEVICE_API_STRING); +} + +static MDEV_TYPE_ATTR_RO(device_api); + +static struct attribute *mdev_types_attrs[] = { + &mdev_type_attr_name.attr, + &mdev_type_attr_device_api.attr, + &mdev_type_attr_available_instances.attr, + NULL, +}; + +static struct attribute_group mdev_type_group = { + .name = "", + .attrs = mdev_types_attrs, +}; + +static struct attribute_group *mdev_type_groups[] = { + &mdev_type_group, + NULL, +}; + +static const struct mdev_parent_ops mdev_fops = { + .owner = THIS_MODULE, + .dev_attr_groups = mvnet_dev_groups, + .mdev_attr_groups = mdev_dev_groups, + .supported_type_groups = mdev_type_groups, + .create = mvnet_create, + .remove = mvnet_remove, + .open = mvnet_open, + .release = mvnet_close, + .read = mvnet_read, + .write = mvnet_write, + .ioctl = mvnet_ioctl, +}; + +static void mvnet_device_release(struct device *dev) +{ + dev_dbg(dev, "mvnet: released\n"); +} + +static int __init mvnet_dev_init(void) +{ + int ret = 0; + + pr_info("mvnet_dev: %s\n", __func__); + + memset(&mvnet_dev, 0, sizeof(mvnet_dev)); + + idr_init(&mvnet_dev.vd_idr); + + mvnet_dev.vd_class = class_create(THIS_MODULE, MVNET_CLASS_NAME); + + if (IS_ERR(mvnet_dev.vd_class)) { + pr_err("Error: failed to register mvnet_dev class\n"); + ret = PTR_ERR(mvnet_dev.vd_class); + goto failed1; + } + + mvnet_dev.dev.class = mvnet_dev.vd_class; + mvnet_dev.dev.release = mvnet_device_release; + dev_set_name(&mvnet_dev.dev, "%s", MVNET_NAME); + + ret = device_register(&mvnet_dev.dev); + if (ret) + goto failed2; + + ret = mdev_register_device(&mvnet_dev.dev, &mdev_fops); + if (ret) + goto failed3; + + mutex_init(&mdev_list_lock); + INIT_LIST_HEAD(&mdev_devices_list); + + goto all_done; + +failed3: + + device_unregister(&mvnet_dev.dev); +failed2: + class_destroy(mvnet_dev.vd_class); + +failed1: +all_done: + return ret; +} + +static void __exit mvnet_dev_exit(void) +{ + mvnet_dev.dev.bus = NULL; + mdev_unregister_device(&mvnet_dev.dev); + + device_unregister(&mvnet_dev.dev); + idr_destroy(&mvnet_dev.vd_idr); + class_destroy(mvnet_dev.vd_class); + mvnet_dev.vd_class = NULL; + pr_info("mvnet_dev: Unloaded!\n"); +} + +module_init(mvnet_dev_init) +module_exit(mvnet_dev_exit) + +MODULE_LICENSE("GPL v2"); +MODULE_INFO(supported, "Test driver that simulate serial port over PCI"); +MODULE_VERSION(VERSION_STRING); +MODULE_AUTHOR(DRIVER_AUTHOR); -- 2.19.1
Michael S. Tsirkin
2019-Sep-10 10:01 UTC
[RFC PATCH 3/4] virtio: introudce a mdev based transport
On Tue, Sep 10, 2019 at 04:19:34PM +0800, Jason Wang wrote:> This path introduces a new mdev transport for virtio. This is used to > use kernel virtio driver to drive the mediated device that is capable > of populating virtqueue directly. > > A new virtio-mdev driver will be registered to the mdev bus, when a > new virtio-mdev device is probed, it will register the device with > mdev based config ops. This means, unlike the exist hardware > transport, this is a software transport between mdev driver and mdev > device. The transport was implemented through: > > - configuration access was implemented through parent_ops->read()/write() > - vq/config callback was implemented through parent_ops->ioctl() > > This transport is derived from virtio MMIO protocol and was wrote for > kernel driver. But for the transport itself, but the design goal is to > be generic enough to support userspace driver (this part will be added > in the future). > > Note: > - current mdev assume all the parameter of parent_ops was from > userspace. This prevents us from implementing the kernel mdev > driver. For a quick POC, this patch just abuse those parameter and > assume the mdev device implementation will treat them as kernel > pointer. This should be addressed in the formal series by extending > mdev_parent_ops. > - for a quick POC, I just drive the transport from MMIO, I'm pretty > there's lot of optimization space for this. > > Signed-off-by: Jason Wang <jasowang at redhat.com> > --- > drivers/vfio/mdev/Kconfig | 7 + > drivers/vfio/mdev/Makefile | 1 + > drivers/vfio/mdev/virtio_mdev.c | 500 +++++++++++++++++++++++++++++++ > include/uapi/linux/virtio_mdev.h | 131 ++++++++ > 4 files changed, 639 insertions(+) > create mode 100644 drivers/vfio/mdev/virtio_mdev.c > create mode 100644 include/uapi/linux/virtio_mdev.h > > diff --git a/drivers/vfio/mdev/Kconfig b/drivers/vfio/mdev/Kconfig > index 5da27f2100f9..c488c31fc137 100644 > --- a/drivers/vfio/mdev/Kconfig > +++ b/drivers/vfio/mdev/Kconfig > @@ -16,3 +16,10 @@ config VFIO_MDEV_DEVICE > default n > help > VFIO based driver for Mediated devices. > + > +config VIRTIO_MDEV_DEVICE > + tristate "VIRTIO driver for Mediated devices" > + depends on VFIO_MDEV && VIRTIO > + default n > + help > + VIRTIO based driver for Mediated devices. > diff --git a/drivers/vfio/mdev/Makefile b/drivers/vfio/mdev/Makefile > index 101516fdf375..99d31e29c23e 100644 > --- a/drivers/vfio/mdev/Makefile > +++ b/drivers/vfio/mdev/Makefile > @@ -4,3 +4,4 @@ mdev-y := mdev_core.o mdev_sysfs.o mdev_driver.o > > obj-$(CONFIG_VFIO_MDEV) += mdev.o > obj-$(CONFIG_VFIO_MDEV_DEVICE) += vfio_mdev.o > +obj-$(CONFIG_VIRTIO_MDEV_DEVICE) += virtio_mdev.o > diff --git a/drivers/vfio/mdev/virtio_mdev.c b/drivers/vfio/mdev/virtio_mdev.c > new file mode 100644 > index 000000000000..5ff09089297e > --- /dev/null > +++ b/drivers/vfio/mdev/virtio_mdev.c > @@ -0,0 +1,500 @@ > +// SPDX-License-Identifier: GPL-2.0-only > +/* > + * VIRTIO based driver for Mediated device > + * > + * Copyright (c) 2019, Red Hat. All rights reserved. > + * Author: Jason Wang <jasowang at redhat.com> > + * > + * Based on Virtio MMIO driver. > + */ > + > +#include <linux/init.h> > +#include <linux/module.h> > +#include <linux/device.h> > +#include <linux/kernel.h> > +#include <linux/slab.h> > +#include <linux/uuid.h> > +#include <linux/mdev.h> > +#include <linux/virtio.h> > +#include <linux/virtio_config.h> > +#include <linux/virtio_ring.h> > +#include <uapi/linux/virtio_mdev.h> > +#include "mdev_private.h" > + > +#define DRIVER_VERSION "0.1" > +#define DRIVER_AUTHOR "Red Hat Corporation" > +#define DRIVER_DESC "VIRTIO based driver for Mediated device" > + > +#define to_virtio_mdev_device(dev) \ > + container_of(dev, struct virtio_mdev_device, vdev) > + > +struct virtio_mdev_device { > + struct virtio_device vdev; > + struct mdev_device *mdev; > + unsigned long version; > + > + struct virtqueue **vqs; > + spinlock_t lock; > +}; > + > +struct virtio_mdev_vq_info { > + /* the actual virtqueue */ > + struct virtqueue *vq; > + > + /* the list node for the virtqueues list */ > + struct list_head node; > +}; > + > +static u32 virtio_mdev_readl(struct mdev_device *mdev, > + loff_t off) > +{ > + struct mdev_parent *parent = mdev->parent; > + ssize_t len; > + u32 val; > + > + if (unlikely(!parent->ops->read)) > + return 0xFFFFFFFF; > + > + len = parent->ops->read(mdev, (char *)&val, 4, &off); > + if (len != 4) > + return 0xFFFFFFFF; > + > + return val; > +} > + > +static void virtio_mdev_writel(struct mdev_device *mdev, > + loff_t off, u32 val) > +{ > + struct mdev_parent *parent = mdev->parent; > + > + if (unlikely(!parent->ops->write)) > + return; > + > + parent->ops->write(mdev, (char *)&val, 4, &off); > + > + return; > +} > + > +static void virtio_mdev_get(struct virtio_device *vdev, unsigned offset, > + void *buf, unsigned len) > +{ > + struct virtio_mdev_device *vm_dev = to_virtio_mdev_device(vdev); > + struct mdev_device *mdev = vm_dev->mdev; > + struct mdev_parent *parent = mdev->parent; > + > + loff_t off = offset + VIRTIO_MDEV_CONFIG; > + > + switch (len) { > + case 1: > + *(u8 *)buf = parent->ops->read(mdev, buf, 1, &off); > + break; > + case 2: > + *(u16 *)buf = parent->ops->read(mdev, buf, 2, &off); > + break; > + case 4: > + *(u32 *)buf = parent->ops->read(mdev, buf, 4, &off); > + break; > + case 8: > + *(u32 *)buf = parent->ops->read(mdev, buf, 4, &off); > + *((u32 *)buf + 1) = parent->ops->read(mdev, buf, 4, &off); > + break; > + default: > + BUG(); > + } > + > + return; > +} > + > +static void virtio_mdev_set(struct virtio_device *vdev, unsigned offset, > + const void *buf, unsigned len) > +{ > + struct virtio_mdev_device *vm_dev = to_virtio_mdev_device(vdev); > + struct mdev_device *mdev = vm_dev->mdev; > + struct mdev_parent *parent = mdev->parent; > + loff_t off = offset + VIRTIO_MDEV_CONFIG; > + > + switch (len) { > + case 1: > + case 2: > + case 4: > + case 8: > + break; > + default: > + BUG(); > + } > + > + parent->ops->write(mdev, buf, len, &off); > + > + return; > +} > + > +static u32 virtio_mdev_generation(struct virtio_device *vdev) > +{ > + struct virtio_mdev_device *vm_dev = to_virtio_mdev_device(vdev); > + > + if (vm_dev->version == 1) > + return 0; > + else > + return virtio_mdev_readl(vm_dev->mdev, > + VIRTIO_MDEV_CONFIG_GENERATION); > +} > + > +static u8 virtio_mdev_get_status(struct virtio_device *vdev) > +{ > + struct virtio_mdev_device *vm_dev = to_virtio_mdev_device(vdev); > + > + return virtio_mdev_readl(vm_dev->mdev, VIRTIO_MDEV_STATUS) & 0xff; > +} > + > +static void virtio_mdev_set_status(struct virtio_device *vdev, u8 status) > +{ > + struct virtio_mdev_device *vm_dev = to_virtio_mdev_device(vdev); > + > + virtio_mdev_writel(vm_dev->mdev, VIRTIO_MDEV_STATUS, status); > +} > + > +static void virtio_mdev_reset(struct virtio_device *vdev) > +{ > + struct virtio_mdev_device *vm_dev = to_virtio_mdev_device(vdev); > + > + virtio_mdev_writel(vm_dev->mdev, VIRTIO_MDEV_STATUS, 0); > +} > + > +static bool virtio_mdev_notify(struct virtqueue *vq) > +{ > + struct virtio_mdev_device *vm_dev = to_virtio_mdev_device(vq->vdev); > + > + /* We write the queue's selector into the notification register to > + * signal the other end */ > + virtio_mdev_writel(vm_dev->mdev, VIRTIO_MDEV_QUEUE_NOTIFY, > + vq->index); > + return true; > +} > + > +static irqreturn_t virtio_mdev_config_cb(void *private) > +{ > + struct virtio_mdev_device *vm_dev = private; > + > + virtio_config_changed(&vm_dev->vdev); > + > + return IRQ_HANDLED; > +} > + > +static irqreturn_t virtio_mdev_virtqueue_cb(void *private) > +{ > + struct virtio_mdev_vq_info *info = private; > + > + return vring_interrupt(0, info->vq); > +} > + > +static struct virtqueue * > +virtio_mdev_setup_vq(struct virtio_device *vdev, unsigned index, > + void (*callback)(struct virtqueue *vq), > + const char *name, bool ctx) > +{ > + struct virtio_mdev_device *vm_dev = to_virtio_mdev_device(vdev); > + struct mdev_device *mdev= vm_dev->mdev; > + struct mdev_parent *parent = mdev->parent; > + struct virtio_mdev_vq_info *info; > + struct virtio_mdev_callback cb; > + struct virtqueue *vq; > + unsigned long flags; > + u32 align, num; > + u64 addr; > + int err; > + > + if (!name) > + return NULL; > + > + /* Select the queue we're interested in */ > + virtio_mdev_writel(vm_dev->mdev, VIRTIO_MDEV_QUEUE_SEL, index); > + > + /* Queue shouldn't already be set up. */ > + if (virtio_mdev_readl(vm_dev->mdev, VIRTIO_MDEV_QUEUE_READY)) { > + err = -ENOENT; > + goto error_available; > + } > + > + /* Allocate and fill out our active queue description */ > + info = kmalloc(sizeof(*info), GFP_KERNEL); > + if (!info) { > + err = -ENOMEM; > + goto error_kmalloc; > + } > + > + num = virtio_mdev_readl(vm_dev->mdev, VIRTIO_MDEV_QUEUE_NUM_MAX); > + if (num == 0) { > + err = -ENOENT; > + goto error_new_virtqueue; > + } > + > + /* Create the vring */ > + align = virtio_mdev_readl(vm_dev->mdev, VIRTIO_MDEV_QUEUE_ALIGN); > + vq = vring_create_virtqueue(index, num, align, vdev, > + true, true, ctx, > + virtio_mdev_notify, callback, name); > + if (!vq) { > + err = -ENOMEM; > + goto error_new_virtqueue; > + } > + > + /* Setup virtqueue callback */ > + cb.callback = virtio_mdev_virtqueue_cb; > + cb.private = info; > + err = parent->ops->ioctl(mdev, VIRTIO_MDEV_SET_VQ_CALLBACK, > + (unsigned long)&cb); > + if (err) { > + err = -EINVAL; > + goto error_callback; > + } > + > + virtio_mdev_writel(vm_dev->mdev, VIRTIO_MDEV_QUEUE_NUM, > + virtqueue_get_vring_size(vq)); > + addr = virtqueue_get_desc_addr(vq); > + virtio_mdev_writel(vm_dev->mdev, VIRTIO_MDEV_QUEUE_DESC_LOW, (u32)addr); > + virtio_mdev_writel(vm_dev->mdev, VIRTIO_MDEV_QUEUE_DESC_HIGH, > + (u32)(addr >> 32)); > + > + addr = virtqueue_get_avail_addr(vq); > + virtio_mdev_writel(vm_dev->mdev, VIRTIO_MDEV_QUEUE_AVAIL_LOW, (u32)addr); > + virtio_mdev_writel(vm_dev->mdev, VIRTIO_MDEV_QUEUE_AVAIL_HIGH, > + (u32)(addr >> 32)); > + > + addr = virtqueue_get_used_addr(vq); > + virtio_mdev_writel(vm_dev->mdev, VIRTIO_MDEV_QUEUE_USED_LOW, (u32)addr); > + virtio_mdev_writel(vm_dev->mdev, VIRTIO_MDEV_QUEUE_USED_HIGH, (u32)(addr >> 32)); > + > + virtio_mdev_writel(vm_dev->mdev, VIRTIO_MDEV_QUEUE_READY, 1); > + > + vq->priv = info; > + info->vq = vq; > + > + return vq; > + > +error_callback: > + vring_del_virtqueue(vq); > +error_new_virtqueue: > + virtio_mdev_writel(vm_dev->mdev, VIRTIO_MDEV_QUEUE_READY, 0); > + WARN_ON(virtio_mdev_readl(vm_dev->mdev, VIRTIO_MDEV_QUEUE_READY)); > + kfree(info); > +error_kmalloc: > +error_available: > + return ERR_PTR(err); > + > +} > + > +static void virtio_mdev_del_vq(struct virtqueue *vq) > +{ > + struct virtio_mdev_device *vm_dev = to_virtio_mdev_device(vq->vdev); > + struct virtio_mdev_vq_info *info = vq->priv; > + unsigned long flags; > + unsigned int index = vq->index; > + > + /* Select and deactivate the queue */ > + virtio_mdev_writel(vm_dev->mdev, VIRTIO_MDEV_QUEUE_SEL, index); > + virtio_mdev_writel(vm_dev->mdev,VIRTIO_MDEV_QUEUE_READY, 0); > + WARN_ON(virtio_mdev_readl(vm_dev->mdev, VIRTIO_MDEV_QUEUE_READY)); > + > + vring_del_virtqueue(vq); > + > + kfree(info); > +} > + > +static void virtio_mdev_del_vqs(struct virtio_device *vdev) > +{ > + struct virtqueue *vq, *n; > + > + list_for_each_entry_safe(vq, n, &vdev->vqs, list) > + virtio_mdev_del_vq(vq); > + > + return; > +} > + > +static int virtio_mdev_find_vqs(struct virtio_device *vdev, unsigned nvqs, > + struct virtqueue *vqs[], > + vq_callback_t *callbacks[], > + const char * const names[], > + const bool *ctx, > + struct irq_affinity *desc) > +{ > + struct virtio_mdev_device *vm_dev = to_virtio_mdev_device(vdev); > + struct mdev_device *mdev = vm_dev->mdev; > + struct mdev_parent *parent = mdev->parent; > + struct virtio_mdev_callback cb; > + int i, err, queue_idx = 0; > + vm_dev->vqs = kmalloc_array(queue_idx, sizeof(*vm_dev->vqs), > + GFP_KERNEL); > + if (!vm_dev->vqs) > + return -ENOMEM; > + > + for (i = 0; i < nvqs; ++i) { > + if (!names[i]) { > + vqs[i] = NULL; > + continue; > + } > + > + vqs[i] = virtio_mdev_setup_vq(vdev, queue_idx++, > + callbacks[i], names[i], ctx ? > + ctx[i] : false); > + if (IS_ERR(vqs[i])) { > + err = PTR_ERR(vqs[i]); > + goto err_setup_vq; > + } > + } > + > + cb.callback = virtio_mdev_config_cb; > + cb.private = vm_dev; > + err = parent->ops->ioctl(mdev, VIRTIO_MDEV_SET_CONFIG_CALLBACK, > + (unsigned long)&cb); > + if (err) > + goto err_setup_vq; > + > + return 0; > + > +err_setup_vq: > + kfree(vm_dev->vqs); > + virtio_mdev_del_vqs(vdev); > + return err; > +} > + > +static u64 virtio_mdev_get_features(struct virtio_device *vdev) > +{ > + struct virtio_mdev_device *vm_dev = to_virtio_mdev_device(vdev); > + u64 features; > + > + virtio_mdev_writel(vm_dev->mdev, VIRTIO_MDEV_DEVICE_FEATURES_SEL, 1); > + features = virtio_mdev_readl(vm_dev->mdev, VIRTIO_MDEV_DEVICE_FEATURES); > + features <<= 32; > + > + virtio_mdev_writel(vm_dev->mdev, VIRTIO_MDEV_DEVICE_FEATURES_SEL, 0); > + features |= virtio_mdev_readl(vm_dev->mdev, VIRTIO_MDEV_DEVICE_FEATURES); > + > + return features; > +} > + > +static int virtio_mdev_finalize_features(struct virtio_device *vdev) > +{ > + struct virtio_mdev_device *vm_dev = to_virtio_mdev_device(vdev); > + > + /* Give virtio_ring a chance to accept features. */ > + vring_transport_features(vdev); > + > + virtio_mdev_writel(vm_dev->mdev, VIRTIO_MDEV_DRIVER_FEATURES_SEL, 1); > + virtio_mdev_writel(vm_dev->mdev, VIRTIO_MDEV_DRIVER_FEATURES, > + (u32)(vdev->features >> 32)); > + > + virtio_mdev_writel(vm_dev->mdev, VIRTIO_MDEV_DRIVER_FEATURES_SEL, 0); > + virtio_mdev_writel(vm_dev->mdev, VIRTIO_MDEV_DRIVER_FEATURES, > + (u32)vdev->features); > + > + return 0; > +} > + > +static const char *virtio_mdev_bus_name(struct virtio_device *vdev) > +{ > + struct virtio_mdev_device *vm_dev = to_virtio_mdev_device(vdev); > + struct mdev_device *mdev = vm_dev->mdev; > + > + return dev_name(&mdev->dev); > +} > + > +static const struct virtio_config_ops virtio_mdev_config_ops = { > + .get = virtio_mdev_get, > + .set = virtio_mdev_set, > + .generation = virtio_mdev_generation, > + .get_status = virtio_mdev_get_status, > + .set_status = virtio_mdev_set_status, > + .reset = virtio_mdev_reset, > + .find_vqs = virtio_mdev_find_vqs, > + .del_vqs = virtio_mdev_del_vqs, > + .get_features = virtio_mdev_get_features, > + .finalize_features = virtio_mdev_finalize_features, > + .bus_name = virtio_mdev_bus_name, > +}; > + > +static void virtio_mdev_release_dev(struct device *_d) > +{ > + struct virtio_device *vdev > + container_of(_d, struct virtio_device, dev); > + struct virtio_mdev_device *vm_dev > + container_of(vdev, struct virtio_mdev_device, vdev); > + > + devm_kfree(_d, vm_dev); > +} > + > +static int virtio_mdev_probe(struct device *dev) > +{ > + struct mdev_device *mdev = to_mdev_device(dev); > + struct virtio_mdev_device *vm_dev; > + unsigned long magic; > + int rc; > + > + magic = virtio_mdev_readl(mdev, VIRTIO_MDEV_MAGIC_VALUE); > + if (magic != ('v' | 'i' << 8 | 'r' << 16 | 't' << 24)) { > + dev_warn(dev, "Wrong magic value 0x%08lx!\n", magic); > + return -ENODEV; > + } > + > + vm_dev = devm_kzalloc(dev, sizeof(*vm_dev), GFP_KERNEL); > + if (!vm_dev) > + return -ENOMEM; > + > + vm_dev->vdev.dev.parent = dev; > + vm_dev->vdev.dev.release = virtio_mdev_release_dev; > + vm_dev->vdev.config = &virtio_mdev_config_ops; > + vm_dev->mdev = mdev; > + vm_dev->vqs = NULL; > + spin_lock_init(&vm_dev->lock); > + > + vm_dev->version = virtio_mdev_readl(mdev, VIRTIO_MDEV_VERSION); > + if (vm_dev->version != 1) { > + dev_err(dev, "Version %ld not supported!\n", > + vm_dev->version); > + return -ENXIO; > + } > + > + vm_dev->vdev.id.device = virtio_mdev_readl(mdev, VIRTIO_MDEV_DEVICE_ID); > + if (vm_dev->vdev.id.device == 0) > + return -ENODEV; > + > + vm_dev->vdev.id.vendor = virtio_mdev_readl(mdev, VIRTIO_MDEV_VENDOR_ID); > + rc = register_virtio_device(&vm_dev->vdev); > + if (rc) > + put_device(dev); > + > + dev_set_drvdata(dev, vm_dev); > + > + return rc; > + > +} > + > +static void virtio_mdev_remove(struct device *dev) > +{ > + struct virtio_mdev_device *vm_dev = dev_get_drvdata(dev); > + > + unregister_virtio_device(&vm_dev->vdev); > +} > + > +static struct mdev_driver virtio_mdev_driver = { > + .name = "virtio_mdev", > + .probe = virtio_mdev_probe, > + .remove = virtio_mdev_remove, > +}; > + > +static int __init virtio_mdev_init(void) > +{ > + return mdev_register_driver(&virtio_mdev_driver, THIS_MODULE); > +} > + > +static void __exit virtio_mdev_exit(void) > +{ > + mdev_unregister_driver(&virtio_mdev_driver); > +} > + > +module_init(virtio_mdev_init) > +module_exit(virtio_mdev_exit) > + > +MODULE_VERSION(DRIVER_VERSION); > +MODULE_LICENSE("GPL v2"); > +MODULE_AUTHOR(DRIVER_AUTHOR); > +MODULE_DESCRIPTION(DRIVER_DESC); > diff --git a/include/uapi/linux/virtio_mdev.h b/include/uapi/linux/virtio_mdev.h > new file mode 100644 > index 000000000000..8040de6b960a > --- /dev/null > +++ b/include/uapi/linux/virtio_mdev.h > @@ -0,0 +1,131 @@ > +/* > + * Virtio mediated device driver > + * > + * Copyright 2019, Red Hat Corp. > + * > + * Based on Virtio MMIO driver by ARM Ltd, copyright ARM Ltd. 2011 > + * > + * This header is BSD licensed so anyone can use the definitions to implement > + * compatible drivers/servers. > + * > + * Redistribution and use in source and binary forms, with or without > + * modification, are permitted provided that the following conditions > + * are met: > + * 1. Redistributions of source code must retain the above copyright > + * notice, this list of conditions and the following disclaimer. > + * 2. Redistributions in binary form must reproduce the above copyright > + * notice, this list of conditions and the following disclaimer in the > + * documentation and/or other materials provided with the distribution. > + * 3. Neither the name of IBM nor the names of its contributors > + * may be used to endorse or promote products derived from this software > + * without specific prior written permission. > + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS ``AS IS'' AND > + * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE > + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE > + * ARE DISCLAIMED. IN NO EVENT SHALL IBM OR CONTRIBUTORS BE LIABLE > + * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL > + * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS > + * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) > + * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT > + * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY > + * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF > + * SUCH DAMAGE. > + */ > +#ifndef _LINUX_VIRTIO_MDEV_H > +#define _LINUX_VIRTIO_MDEV_H > + > +#include <linux/interrupt.h> > +#include <linux/vringh.h> > +#include <uapi/linux/virtio_net.h> > + > +/* > + * Ioctls > + */Pls add a bit more content here. It's redundant to state these are ioctls. Much better to document what does each one do.> + > +struct virtio_mdev_callback { > + irqreturn_t (*callback)(void *); > + void *private; > +}; > + > +#define VIRTIO_MDEV 0xAF > +#define VIRTIO_MDEV_SET_VQ_CALLBACK _IOW(VIRTIO_MDEV, 0x00, \ > + struct virtio_mdev_callback) > +#define VIRTIO_MDEV_SET_CONFIG_CALLBACK _IOW(VIRTIO_MDEV, 0x01, \ > + struct virtio_mdev_callback)Function pointer in an ioctl parameter? How does this ever make sense? And can't we use a couple of registers for this, and avoid ioctls?> + > +#define VIRTIO_MDEV_DEVICE_API_STRING "virtio-mdev" > + > +/* > + * Control registers > + */ > + > +/* Magic value ("virt" string) - Read Only */ > +#define VIRTIO_MDEV_MAGIC_VALUE 0x000 > + > +/* Virtio device version - Read Only */ > +#define VIRTIO_MDEV_VERSION 0x004 > + > +/* Virtio device ID - Read Only */ > +#define VIRTIO_MDEV_DEVICE_ID 0x008 > + > +/* Virtio vendor ID - Read Only */ > +#define VIRTIO_MDEV_VENDOR_ID 0x00c > + > +/* Bitmask of the features supported by the device (host) > + * (32 bits per set) - Read Only */ > +#define VIRTIO_MDEV_DEVICE_FEATURES 0x010 > + > +/* Device (host) features set selector - Write Only */ > +#define VIRTIO_MDEV_DEVICE_FEATURES_SEL 0x014 > + > +/* Bitmask of features activated by the driver (guest) > + * (32 bits per set) - Write Only */ > +#define VIRTIO_MDEV_DRIVER_FEATURES 0x020 > + > +/* Activated features set selector - Write Only */ > +#define VIRTIO_MDEV_DRIVER_FEATURES_SEL 0x024 > + > +/* Queue selector - Write Only */ > +#define VIRTIO_MDEV_QUEUE_SEL 0x030 > + > +/* Maximum size of the currently selected queue - Read Only */ > +#define VIRTIO_MDEV_QUEUE_NUM_MAX 0x034 > + > +/* Queue size for the currently selected queue - Write Only */ > +#define VIRTIO_MDEV_QUEUE_NUM 0x038 > + > +/* Ready bit for the currently selected queue - Read Write */ > +#define VIRTIO_MDEV_QUEUE_READY 0x044Is this same as started?> + > +/* Alignment of virtqueue - Read Only */ > +#define VIRTIO_MDEV_QUEUE_ALIGN 0x048 > + > +/* Queue notifier - Write Only */ > +#define VIRTIO_MDEV_QUEUE_NOTIFY 0x050 > + > +/* Device status register - Read Write */ > +#define VIRTIO_MDEV_STATUS 0x060 > + > +/* Selected queue's Descriptor Table address, 64 bits in two halves */ > +#define VIRTIO_MDEV_QUEUE_DESC_LOW 0x080 > +#define VIRTIO_MDEV_QUEUE_DESC_HIGH 0x084 > + > +/* Selected queue's Available Ring address, 64 bits in two halves */ > +#define VIRTIO_MDEV_QUEUE_AVAIL_LOW 0x090 > +#define VIRTIO_MDEV_QUEUE_AVAIL_HIGH 0x094 > + > +/* Selected queue's Used Ring address, 64 bits in two halves */ > +#define VIRTIO_MDEV_QUEUE_USED_LOW 0x0a0 > +#define VIRTIO_MDEV_QUEUE_USED_HIGH 0x0a4 > + > +/* Configuration atomicity value */ > +#define VIRTIO_MDEV_CONFIG_GENERATION 0x0fc > + > +/* The config space is defined by each driver as > + * the per-driver configuration space - Read Write */ > +#define VIRTIO_MDEV_CONFIG 0x100Mixing device and generic config space is what virtio pci did, caused lots of problems with extensions. It would be better to reserve much more space.> + > +#endif > + > + > +/* Ready bit for the currently selected queue - Read Write */ > -- > 2.19.1
Michael S. Tsirkin
2019-Sep-10 10:15 UTC
[RFC PATCH 4/4] docs: Sample driver to demonstrate how to implement virtio-mdev framework
On Tue, Sep 10, 2019 at 04:19:35PM +0800, Jason Wang wrote:> This sample driver creates mdev device that simulate virtio net device > over virtio mdev transport. The device is implemented through vringh > and workqueue. > > Signed-off-by: Jason Wang <jasowang at redhat.com> > --- > samples/Kconfig | 7 + > samples/vfio-mdev/Makefile | 1 + > samples/vfio-mdev/mvnet.c | 766 +++++++++++++++++++++++++++++++++++++ > 3 files changed, 774 insertions(+) > create mode 100644 samples/vfio-mdev/mvnet.cSo for a POC, this is a bit too rough to be able to judge whether the approach is workable. Can you get it a bit more into shape esp wrt UAPI?> diff --git a/samples/Kconfig b/samples/Kconfig > index c8dacb4dda80..a1a1ca2c00b7 100644 > --- a/samples/Kconfig > +++ b/samples/Kconfig > @@ -131,6 +131,13 @@ config SAMPLE_VFIO_MDEV_MDPY > mediated device. It is a simple framebuffer and supports > the region display interface (VFIO_GFX_PLANE_TYPE_REGION). > > +config SAMPLE_VIRTIO_MDEV_NET > + tristate "Build virtio mdev net example mediated device sample code -- loadable modules only" > + depends on VIRTIO_MDEV_DEVICE && VHOST_RING && m > + help > + Build a networking sample device for use as a virtio > + mediated device. > + > config SAMPLE_VFIO_MDEV_MDPY_FB > tristate "Build VFIO mdpy example guest fbdev driver -- loadable module only" > depends on FB && m > diff --git a/samples/vfio-mdev/Makefile b/samples/vfio-mdev/Makefile > index 10d179c4fdeb..f34af90ed0a0 100644 > --- a/samples/vfio-mdev/Makefile > +++ b/samples/vfio-mdev/Makefile > @@ -3,3 +3,4 @@ obj-$(CONFIG_SAMPLE_VFIO_MDEV_MTTY) += mtty.o > obj-$(CONFIG_SAMPLE_VFIO_MDEV_MDPY) += mdpy.o > obj-$(CONFIG_SAMPLE_VFIO_MDEV_MDPY_FB) += mdpy-fb.o > obj-$(CONFIG_SAMPLE_VFIO_MDEV_MBOCHS) += mbochs.o > +obj-$(CONFIG_SAMPLE_VIRTIO_MDEV_NET) += mvnet.o > diff --git a/samples/vfio-mdev/mvnet.c b/samples/vfio-mdev/mvnet.c > new file mode 100644 > index 000000000000..da295b41955e > --- /dev/null > +++ b/samples/vfio-mdev/mvnet.c > @@ -0,0 +1,766 @@ > +// SPDX-License-Identifier: GPL-2.0-only > +/* > + * Mediated virtual virtio-net device driver. > + * > + * Copyright (c) 2019, Red Hat Inc. All rights reserved. > + * Author: Jason Wang <jasowang at redhat.com> > + * > + * Sample driverDocumentation on how to use this?> that creates mdev device that simulates ethernet > + * device virtio mdev transport.Well not exactly. What it seems to do is short-circuit RX and TX rings.> + */ > + > +#include <linux/init.h> > +#include <linux/module.h> > +#include <linux/device.h> > +#include <linux/kernel.h> > +#include <linux/fs.h> > +#include <linux/poll.h> > +#include <linux/slab.h> > +#include <linux/sched.h> > +#include <linux/wait.h> > +#include <linux/uuid.h> > +#include <linux/iommu.h> > +#include <linux/sysfs.h> > +#include <linux/file.h> > +#include <linux/etherdevice.h> > +#include <linux/mdev.h> > +#include <uapi/linux/virtio_mdev.h> > + > +#define VERSION_STRING "0.1" > +#define DRIVER_AUTHOR "NVIDIA Corporation" > + > +#define MVNET_CLASS_NAME "mvnet" > + > +#define MVNET_NAME "mvnet" > + > +/* > + * Global Structures > + */ > + > +static struct mvnet_dev { > + struct class *vd_class; > + struct idr vd_idr; > + struct device dev; > +} mvnet_dev; > + > +struct mvnet_virtqueue { > + struct vringh vring; > + struct vringh_kiov iov; > + unsigned short head; > + bool ready; > + u32 desc_addr_lo; > + u32 desc_addr_hi; > + u32 device_addr_lo; > + u32 device_addr_hi; > + u32 driver_addr_lo; > + u32 driver_addr_hi; > + u64 desc_addr; > + u64 device_addr; > + u64 driver_addr; > + void *private; > + irqreturn_t (*cb)(void *); > +}; > + > +#define MVNET_QUEUE_ALIGN PAGE_SIZE > +#define MVNET_QUEUE_MAX 256 > +#define MVNET_MAGIC_VALUE ('v' | 'i' << 8 | 'r' << 16 | 't' << 24) > +#define MVNET_VERSION 0x1 /* Implies virtio 1.0 */ > +#define MVNET_DEVICE_ID 0x1 /* network card */ > +#define MVNET_VENDOR_ID 0 /* is this correct ? */ > +#define MVNET_DEVICE_FEATURES VIRTIO_F_VERSION_1 > + > +u64 mvnet_features = (1ULL << VIRTIO_F_ANY_LAYOUT) | > + (1ULL << VIRTIO_F_VERSION_1) | > + (1ULL << VIRTIO_F_IOMMU_PLATFORM) ; > + > +/* State of each mdev device */ > +struct mvnet_state { > + struct mvnet_virtqueue vqs[2]; > + struct work_struct work; > + spinlock_t lock; > + struct mdev_device *mdev; > + struct virtio_net_config config; > + struct virtio_mdev_callback *cbs; > + void *buffer; > + u32 queue_sel; > + u32 driver_features_sel; > + u32 driver_features[2]; > + u32 device_features_sel; > + u32 status; > + u32 generation; > + u32 num; > + struct list_head next; > +}; > + > +static struct mutex mdev_list_lock; > +static struct list_head mdev_devices_list; > + > +static void mvnet_queue_ready(struct mvnet_state *mvnet, unsigned idx) > +{ > + struct mvnet_virtqueue *vq = &mvnet->vqs[idx]; > + int ret; > + > + vq->desc_addr = (u64)vq->desc_addr_hi << 32 | vq->desc_addr_lo; > + vq->device_addr = (u64)vq->device_addr_hi << 32 | vq->device_addr_lo; > + vq->driver_addr = (u64)vq->driver_addr_hi << 32 | vq->driver_addr_lo; > + > + ret = vringh_init_kern(&vq->vring, mvnet_features, MVNET_QUEUE_MAX, > + false, (struct vring_desc *)vq->desc_addr, > + (struct vring_avail *)vq->driver_addr, > + (struct vring_used *)vq->device_addr); > +} > + > +static ssize_t mvnet_read_config(struct mdev_device *mdev, > + u32 *val, loff_t pos) > +{ > + struct mvnet_state *mvnet; > + struct mvnet_virtqueue *vq; > + u32 queue_sel; > + > + if (!mdev || !val) > + return -EINVAL; > + > + mvnet = mdev_get_drvdata(mdev); > + if (!mvnet) { > + pr_err("%s mvnet not found\n", __func__); > + return -EINVAL; > + } > + > + queue_sel = mvnet->queue_sel; > + vq = &mvnet->vqs[queue_sel]; > + > + switch (pos) { > + case VIRTIO_MDEV_MAGIC_VALUE: > + *val = MVNET_MAGIC_VALUE; > + break; > + case VIRTIO_MDEV_VERSION: > + *val = MVNET_VERSION; > + break; > + case VIRTIO_MDEV_DEVICE_ID: > + *val = MVNET_DEVICE_ID; > + break; > + case VIRTIO_MDEV_VENDOR_ID: > + *val = MVNET_VENDOR_ID; > + break; > + case VIRTIO_MDEV_DEVICE_FEATURES: > + if (mvnet->device_features_sel) > + *val = mvnet_features >> 32; > + else > + *val = mvnet_features; > + break; > + case VIRTIO_MDEV_QUEUE_NUM_MAX: > + *val = MVNET_QUEUE_MAX; > + break; > + case VIRTIO_MDEV_QUEUE_READY: > + *val = vq->ready; > + break; > + case VIRTIO_MDEV_QUEUE_ALIGN: > + *val = MVNET_QUEUE_ALIGN; > + break; > + case VIRTIO_MDEV_STATUS: > + *val = mvnet->status; > + break; > + case VIRTIO_MDEV_QUEUE_DESC_LOW: > + *val = vq->desc_addr_lo; > + break; > + case VIRTIO_MDEV_QUEUE_DESC_HIGH: > + *val = vq->desc_addr_hi; > + break; > + case VIRTIO_MDEV_QUEUE_AVAIL_LOW: > + *val = vq->driver_addr_lo; > + break; > + case VIRTIO_MDEV_QUEUE_AVAIL_HIGH: > + *val = vq->driver_addr_hi; > + break; > + case VIRTIO_MDEV_QUEUE_USED_LOW: > + *val = vq->device_addr_lo; > + break; > + case VIRTIO_MDEV_QUEUE_USED_HIGH: > + *val = vq->device_addr_hi; > + break; > + case VIRTIO_MDEV_CONFIG_GENERATION: > + *val = 1; > + break; > + default: > + pr_err("Unsupported mdev read offset at 0x%x\n", pos); > + break; > + } > + > + return 4; > +} > + > +static ssize_t mvnet_read_net_config(struct mdev_device *mdev, > + char *buf, size_t count, loff_t pos) > +{ > + struct mvnet_state *mvnet = mdev_get_drvdata(mdev); > + > + if (!mvnet) { > + pr_err("%s mvnet not found\n", __func__); > + return -EINVAL; > + } > + > + if (pos + count > sizeof(mvnet->config)) > + return -EINVAL; > + > + memcpy(buf, &mvnet->config + (unsigned)pos, count); > + > + return count; > +} > + > +static void mvnet_vq_reset(struct mvnet_virtqueue *vq) > +{ > + vq->ready = 0; > + vq->desc_addr_lo = vq->desc_addr_hi = 0; > + vq->device_addr_lo = vq->device_addr_hi = 0; > + vq->driver_addr_lo = vq->driver_addr_hi = 0; > + vq->desc_addr = 0; > + vq->driver_addr = 0; > + vq->device_addr = 0; > + vringh_init_kern(&vq->vring, mvnet_features, MVNET_QUEUE_MAX, > + false, 0, 0, 0); > +} > + > +static void mvnet_reset(struct mvnet_state *mvnet) > +{ > + int i; > + > + for (i = 0; i < 2; i++) > + mvnet_vq_reset(&mvnet->vqs[i]); > + > + mvnet->queue_sel = 0; > + mvnet->driver_features_sel = 0; > + mvnet->device_features_sel = 0; > + mvnet->status = 0; > + ++mvnet->generation; > +} > + > +static ssize_t mvnet_write_config(struct mdev_device *mdev, > + u32 *val, loff_t pos) > +{ > + struct mvnet_state *mvnet; > + struct mvnet_virtqueue *vq; > + u32 queue_sel; > + > + if (!mdev || !val) > + return -EINVAL; > + > + mvnet = mdev_get_drvdata(mdev); > + if (!mvnet) { > + pr_err("%s mvnet not found\n", __func__); > + return -EINVAL; > + } > + > + queue_sel = mvnet->queue_sel; > + vq = &mvnet->vqs[queue_sel]; > + > + switch (pos) { > + case VIRTIO_MDEV_DEVICE_FEATURES_SEL: > + mvnet->device_features_sel = *val; > + break; > + case VIRTIO_MDEV_DRIVER_FEATURES: > + mvnet->driver_features[mvnet->driver_features_sel] = *val; > + break; > + case VIRTIO_MDEV_DRIVER_FEATURES_SEL: > + mvnet->driver_features_sel = *val; > + break; > + case VIRTIO_MDEV_QUEUE_SEL: > + mvnet->queue_sel = *val; > + break; > + case VIRTIO_MDEV_QUEUE_NUM: > + mvnet->num = *val; > + break; > + case VIRTIO_MDEV_QUEUE_READY: > + vq->ready = *val; > + if (vq->ready) { > + spin_lock(&mvnet->lock); > + mvnet_queue_ready(mvnet, queue_sel); > + spin_unlock(&mvnet->lock); > + } > + break; > + case VIRTIO_MDEV_QUEUE_NOTIFY: > + if (vq->ready) > + schedule_work(&mvnet->work); > + break; > + case VIRTIO_MDEV_STATUS: > + mvnet->status = *val; > + if (*val == 0) { > + spin_lock(&mvnet->lock); > + mvnet_reset(mvnet); > + spin_unlock(&mvnet->lock); > + } > + break; > + case VIRTIO_MDEV_QUEUE_DESC_LOW: > + vq->desc_addr_lo = *val; > + break; > + case VIRTIO_MDEV_QUEUE_DESC_HIGH: > + vq->desc_addr_hi = *val; > + break; > + case VIRTIO_MDEV_QUEUE_AVAIL_LOW: > + vq->driver_addr_lo = *val; > + break; > + case VIRTIO_MDEV_QUEUE_AVAIL_HIGH: > + vq->driver_addr_hi = *val; > + break; > + case VIRTIO_MDEV_QUEUE_USED_LOW: > + vq->device_addr_lo = *val; > + break; > + case VIRTIO_MDEV_QUEUE_USED_HIGH: > + vq->device_addr_hi = *val; > + break; > + default: > + pr_err("Unsupported write offset! 0x%x\n", pos); > + break; > + } > + spin_unlock(&mvnet->lock); > + > + return 4; > +} > + > +static void mvnet_work(struct work_struct *work) > +{ > + struct mvnet_state *mvnet = container_of(work, struct > + mvnet_state, work); > + struct mvnet_virtqueue *txq = &mvnet->vqs[1]; > + struct mvnet_virtqueue *rxq = &mvnet->vqs[0]; > + size_t read, write, total_write; > + unsigned long flags; > + int err; > + int pkts = 0; > + > + spin_lock(&mvnet->lock); > + > + if (!txq->ready || !rxq->ready) > + goto out; > + > + while (true) { > + total_write = 0; > + err = vringh_getdesc_kern(&txq->vring, &txq->iov, NULL, > + &txq->head, GFP_KERNEL); > + if (err <= 0) > + break; > + > + err = vringh_getdesc_kern(&rxq->vring, NULL, &rxq->iov, > + &rxq->head, GFP_KERNEL);GFP_KERNEL under a spin_lock will cause deadlocks.> + if (err <= 0) { > + vringh_complete_kern(&txq->vring, txq->head, 0); > + break; > + } > + > + while (true) { > + read = vringh_iov_pull_kern(&txq->iov, mvnet->buffer, > + PAGE_SIZE); > + if (read <= 0) > + break; > + > + write = vringh_iov_push_kern(&rxq->iov, mvnet->buffer, > + read); > + if (write <= 0) > + break; > + > + total_write += write; > + } > + > + /* Make sure data is wrote before advancing index */ > + smp_wmb(); > + > + vringh_complete_kern(&txq->vring, txq->head, 0); > + vringh_complete_kern(&rxq->vring, rxq->head, total_write); > + > + /* Make sure used is visible before rasing the > + interrupt */ > + smp_wmb(); > + > + local_bh_disable(); > + if (txq->cb) > + txq->cb(txq->private); > + if (rxq->cb) > + rxq->cb(rxq->private); > + local_bh_enable(); > + > + pkts ++;coding style problem> + if (pkts > 4) { > + schedule_work(&mvnet->work); > + goto out; > + } > + } > + > +out: > + spin_unlock(&mvnet->lock); > +} > + > +static dma_addr_t mvnet_map_page(struct device *dev, struct page *page, > + unsigned long offset, size_t size, > + enum dma_data_direction dir, > + unsigned long attrs) > +{ > + /* Vringh can only use VA */ > + return page_address(page) + offset; > +} > + > +static void mvnet_unmap_page(struct device *dev, dma_addr_t dma_addr, > + size_t size, enum dma_data_direction dir, > + unsigned long attrs) > +{ > + return ; > +} > + > +static void *mvnet_alloc_coherent(struct device *dev, size_t size, > + dma_addr_t *dma_addr, gfp_t flag, > + unsigned long attrs) > +{ > + void *ret = kmalloc(size, flag); > + > + if (ret == NULL)!ret is nicer> + *dma_addr = DMA_MAPPING_ERROR; > + else > + *dma_addr = ret;Not sure how does this build ... don't we need a cast?> + > + return ret; > +} > + > +static void mvnet_free_coherent(struct device *dev, size_t size, > + void *vaddr, dma_addr_t dma_addr, > + unsigned long attrs) > +{ > + kfree(dma_addr); > +} > + > +static const struct dma_map_ops mvnet_dma_ops = { > + .map_page = mvnet_map_page, > + .unmap_page = mvnet_unmap_page, > + .alloc = mvnet_alloc_coherent, > + .free = mvnet_free_coherent, > +}; > + > +static int mvnet_create(struct kobject *kobj, struct mdev_device *mdev) > +{ > + struct mvnet_state *mvnet; > + struct virtio_net_config *config; > + > + if (!mdev) > + return -EINVAL; > + > + mvnet = kzalloc(sizeof(struct mvnet_state), GFP_KERNEL); > + if (mvnet == NULL) > + return -ENOMEM; > + > + mvnet->buffer = kmalloc(PAGE_SIZE, GFP_KERNEL); > + if (!mvnet->buffer) { > + kfree(mvnet); > + return -ENOMEM; > + } > + > + config = &mvnet->config; > + config->mtu = 1500; > + config->status = VIRTIO_NET_S_LINK_UP; > + eth_random_addr(config->mac); > + > + INIT_WORK(&mvnet->work, mvnet_work); > + > + spin_lock_init(&mvnet->lock); > + mvnet->mdev = mdev; > + mdev_set_drvdata(mdev, mvnet); > + > + mutex_lock(&mdev_list_lock); > + list_add(&mvnet->next, &mdev_devices_list); > + mutex_unlock(&mdev_list_lock); > + > + mdev_set_dma_ops(mdev, &mvnet_dma_ops); > + > + return 0; > +} > + > +static int mvnet_remove(struct mdev_device *mdev) > +{ > + struct mvnet_state *mds, *tmp_mds; > + struct mvnet_state *mvnet = mdev_get_drvdata(mdev); > + int ret = -EINVAL; > + > + mutex_lock(&mdev_list_lock); > + list_for_each_entry_safe(mds, tmp_mds, &mdev_devices_list, next) { > + if (mvnet == mds) { > + list_del(&mvnet->next); > + mdev_set_drvdata(mdev, NULL); > + kfree(mvnet->buffer); > + kfree(mvnet); > + ret = 0; > + break; > + } > + } > + mutex_unlock(&mdev_list_lock); > + > + return ret; > +} > + > +static ssize_t mvnet_read(struct mdev_device *mdev, char __user *buf, > + size_t count, loff_t *ppos) > +{ > + ssize_t ret; > + > + if (*ppos < VIRTIO_MDEV_CONFIG) { > + if (count == 4) > + ret = mvnet_read_config(mdev, (u32 *)buf, *ppos); > + else > + ret = -EINVAL; > + *ppos += 4; > + } else if (*ppos < VIRTIO_MDEV_CONFIG + sizeof(struct virtio_net_config)) { > + ret = mvnet_read_net_config(mdev, buf, count, > + *ppos - VIRTIO_MDEV_CONFIG); > + *ppos += count; > + } else { > + ret = -EINVAL; > + } > + > + return ret; > +} > + > +static ssize_t mvnet_write(struct mdev_device *mdev, const char __user *buf, > + size_t count, loff_t *ppos) > +{ > + int ret; > + > + if (*ppos < VIRTIO_MDEV_CONFIG) { > + if (count == 4) > + ret = mvnet_write_config(mdev, (u32 *)buf, *ppos); > + else > + ret = -EINVAL; > + *ppos += 4; > + } else { > + /* No writable net config */ > + ret = -EINVAL; > + } > + > + return ret; > +} > + > +static long mvnet_ioctl(struct mdev_device *mdev, unsigned int cmd, > + unsigned long arg) > +{ > + int ret = 0; > + struct mvnet_state *mvnet; > + struct virtio_mdev_callback *cb; > + > + if (!mdev) > + return -EINVAL; > + > + mvnet = mdev_get_drvdata(mdev); > + if (!mvnet) > + return -ENODEV; > + > + spin_lock(&mvnet->lock); > + > + switch (cmd) { > + case VIRTIO_MDEV_SET_VQ_CALLBACK: > + cb = (struct virtio_mdev_callback *)arg; > + mvnet->vqs[mvnet->queue_sel].cb = cb->callback; > + mvnet->vqs[mvnet->queue_sel].private = cb->private; > + break; > + case VIRTIO_MDEV_SET_CONFIG_CALLBACK: > + break;success, but nothing happens.> + default: > + pr_err("Not supportted ioctl cmd 0x%x\n", cmd); > + ret = -ENOTTY; > + break; > + } > + > + spin_unlock(&mvnet->lock); > + > + return ret; > +} > + > +static int mvnet_open(struct mdev_device *mdev) > +{ > + pr_info("%s\n", __func__); > + return 0; > +} > + > +static void mvnet_close(struct mdev_device *mdev) > +{ > + pr_info("%s\n", __func__); > +} > + > +static ssize_t > +sample_mvnet_dev_show(struct device *dev, struct device_attribute *attr, > + char *buf) > +{ > + return sprintf(buf, "This is phy device\n");what's this?> +} > + > +static DEVICE_ATTR_RO(sample_mvnet_dev); > + > +static struct attribute *mvnet_dev_attrs[] = { > + &dev_attr_sample_mvnet_dev.attr, > + NULL, > +}; > + > +static const struct attribute_group mvnet_dev_group = { > + .name = "mvnet_dev", > + .attrs = mvnet_dev_attrs, > +}; > + > +static const struct attribute_group *mvnet_dev_groups[] = { > + &mvnet_dev_group, > + NULL, > +}; > + > +static ssize_t > +sample_mdev_dev_show(struct device *dev, struct device_attribute *attr, > + char *buf) > +{ > + if (mdev_from_dev(dev)) > + return sprintf(buf, "This is MDEV %s\n", dev_name(dev)); > + > + return sprintf(buf, "\n"); > +} > + > +static DEVICE_ATTR_RO(sample_mdev_dev); > + > +static struct attribute *mdev_dev_attrs[] = { > + &dev_attr_sample_mdev_dev.attr, > + NULL, > +}; > + > +static const struct attribute_group mdev_dev_group = { > + .name = "vendor", > + .attrs = mdev_dev_attrs, > +}; > + > +static const struct attribute_group *mdev_dev_groups[] = { > + &mdev_dev_group, > + NULL, > +}; > + > +#define MVNET_STRING_LEN 16 > + > +static ssize_t > +name_show(struct kobject *kobj, struct device *dev, char *buf) > +{ > + char name[MVNET_STRING_LEN]; > + const char *name_str = "virtio-net"; > + > + snprintf(name, MVNET_STRING_LEN, "%s", dev_driver_string(dev)); > + if (!strcmp(kobj->name, name)) > + return sprintf(buf, "%s\n", name_str); > + > + return -EINVAL; > +} > + > +static MDEV_TYPE_ATTR_RO(name); > + > +static ssize_t > +available_instances_show(struct kobject *kobj, struct device *dev, char *buf) > +{ > + return sprintf(buf, "%d\n", INT_MAX); > +} > + > +static MDEV_TYPE_ATTR_RO(available_instances);?> + > +static ssize_t device_api_show(struct kobject *kobj, struct device *dev, > + char *buf) > +{ > + return sprintf(buf, "%s\n", VIRTIO_MDEV_DEVICE_API_STRING); > +} > + > +static MDEV_TYPE_ATTR_RO(device_api); > + > +static struct attribute *mdev_types_attrs[] = { > + &mdev_type_attr_name.attr, > + &mdev_type_attr_device_api.attr, > + &mdev_type_attr_available_instances.attr, > + NULL, > +}; > + > +static struct attribute_group mdev_type_group = { > + .name = "", > + .attrs = mdev_types_attrs, > +}; > + > +static struct attribute_group *mdev_type_groups[] = { > + &mdev_type_group, > + NULL, > +}; > + > +static const struct mdev_parent_ops mdev_fops = { > + .owner = THIS_MODULE, > + .dev_attr_groups = mvnet_dev_groups, > + .mdev_attr_groups = mdev_dev_groups, > + .supported_type_groups = mdev_type_groups, > + .create = mvnet_create, > + .remove = mvnet_remove, > + .open = mvnet_open, > + .release = mvnet_close, > + .read = mvnet_read, > + .write = mvnet_write, > + .ioctl = mvnet_ioctl, > +}; > + > +static void mvnet_device_release(struct device *dev) > +{ > + dev_dbg(dev, "mvnet: released\n"); > +} > + > +static int __init mvnet_dev_init(void) > +{ > + int ret = 0; > + > + pr_info("mvnet_dev: %s\n", __func__); > + > + memset(&mvnet_dev, 0, sizeof(mvnet_dev)); > + > + idr_init(&mvnet_dev.vd_idr); > + > + mvnet_dev.vd_class = class_create(THIS_MODULE, MVNET_CLASS_NAME); > + > + if (IS_ERR(mvnet_dev.vd_class)) { > + pr_err("Error: failed to register mvnet_dev class\n"); > + ret = PTR_ERR(mvnet_dev.vd_class); > + goto failed1; > + } > + > + mvnet_dev.dev.class = mvnet_dev.vd_class; > + mvnet_dev.dev.release = mvnet_device_release; > + dev_set_name(&mvnet_dev.dev, "%s", MVNET_NAME); > + > + ret = device_register(&mvnet_dev.dev); > + if (ret) > + goto failed2; > + > + ret = mdev_register_device(&mvnet_dev.dev, &mdev_fops); > + if (ret) > + goto failed3; > + > + mutex_init(&mdev_list_lock); > + INIT_LIST_HEAD(&mdev_devices_list); > + > + goto all_done; > + > +failed3: > + > + device_unregister(&mvnet_dev.dev); > +failed2: > + class_destroy(mvnet_dev.vd_class); > + > +failed1: > +all_done: > + return ret; > +} > + > +static void __exit mvnet_dev_exit(void) > +{ > + mvnet_dev.dev.bus = NULL; > + mdev_unregister_device(&mvnet_dev.dev); > + > + device_unregister(&mvnet_dev.dev); > + idr_destroy(&mvnet_dev.vd_idr); > + class_destroy(mvnet_dev.vd_class); > + mvnet_dev.vd_class = NULL; > + pr_info("mvnet_dev: Unloaded!\n"); > +} > + > +module_init(mvnet_dev_init) > +module_exit(mvnet_dev_exit) > + > +MODULE_LICENSE("GPL v2"); > +MODULE_INFO(supported, "Test driver that simulate serial port over PCI"); > +MODULE_VERSION(VERSION_STRING); > +MODULE_AUTHOR(DRIVER_AUTHOR); > -- > 2.19.1
On Tue, Sep 10, 2019 at 04:19:34PM +0800, Jason Wang wrote:> This path introduces a new mdev transport for virtio. This is used to > use kernel virtio driver to drive the mediated device that is capable > of populating virtqueue directly. > > A new virtio-mdev driver will be registered to the mdev bus, when a > new virtio-mdev device is probed, it will register the device with > mdev based config ops. This means, unlike the exist hardware > transport, this is a software transport between mdev driver and mdev > device. The transport was implemented through: > > - configuration access was implemented through parent_ops->read()/write() > - vq/config callback was implemented through parent_ops->ioctl() > > This transport is derived from virtio MMIO protocol and was wrote for > kernel driver. But for the transport itself, but the design goal is to > be generic enough to support userspace driver (this part will be added > in the future). > > Note: > - current mdev assume all the parameter of parent_ops was from > userspace. This prevents us from implementing the kernel mdev > driver. For a quick POC, this patch just abuse those parameter and > assume the mdev device implementation will treat them as kernel > pointer. This should be addressed in the formal series by extending > mdev_parent_ops. > - for a quick POC, I just drive the transport from MMIO, I'm pretty > there's lot of optimization space for this. > > Signed-off-by: Jason Wang <jasowang at redhat.com> > --- > drivers/vfio/mdev/Kconfig | 7 + > drivers/vfio/mdev/Makefile | 1 + > drivers/vfio/mdev/virtio_mdev.c | 500 +++++++++++++++++++++++++++++++ > include/uapi/linux/virtio_mdev.h | 131 ++++++++ > 4 files changed, 639 insertions(+) > create mode 100644 drivers/vfio/mdev/virtio_mdev.c > create mode 100644 include/uapi/linux/virtio_mdev.h >[...]> diff --git a/include/uapi/linux/virtio_mdev.h b/include/uapi/linux/virtio_mdev.h > new file mode 100644 > index 000000000000..8040de6b960a > --- /dev/null > +++ b/include/uapi/linux/virtio_mdev.h > @@ -0,0 +1,131 @@ > +/* > + * Virtio mediated device driver > + * > + * Copyright 2019, Red Hat Corp. > + * > + * Based on Virtio MMIO driver by ARM Ltd, copyright ARM Ltd. 2011 > + * > + * This header is BSD licensed so anyone can use the definitions to implement > + * compatible drivers/servers. > + * > + * Redistribution and use in source and binary forms, with or without > + * modification, are permitted provided that the following conditions > + * are met: > + * 1. Redistributions of source code must retain the above copyright > + * notice, this list of conditions and the following disclaimer. > + * 2. Redistributions in binary form must reproduce the above copyright > + * notice, this list of conditions and the following disclaimer in the > + * documentation and/or other materials provided with the distribution. > + * 3. Neither the name of IBM nor the names of its contributors > + * may be used to endorse or promote products derived from this software > + * without specific prior written permission. > + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS ``AS IS'' AND > + * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE > + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE > + * ARE DISCLAIMED. IN NO EVENT SHALL IBM OR CONTRIBUTORS BE LIABLE > + * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL > + * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS > + * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) > + * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT > + * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY > + * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF > + * SUCH DAMAGE. > + */ > +#ifndef _LINUX_VIRTIO_MDEV_H > +#define _LINUX_VIRTIO_MDEV_H > + > +#include <linux/interrupt.h> > +#include <linux/vringh.h> > +#include <uapi/linux/virtio_net.h> > + > +/* > + * Ioctls > + */ > + > +struct virtio_mdev_callback { > + irqreturn_t (*callback)(void *); > + void *private; > +}; > + > +#define VIRTIO_MDEV 0xAF > +#define VIRTIO_MDEV_SET_VQ_CALLBACK _IOW(VIRTIO_MDEV, 0x00, \ > + struct virtio_mdev_callback) > +#define VIRTIO_MDEV_SET_CONFIG_CALLBACK _IOW(VIRTIO_MDEV, 0x01, \ > + struct virtio_mdev_callback) > + > +#define VIRTIO_MDEV_DEVICE_API_STRING "virtio-mdev" > + > +/* > + * Control registers > + */ > + > +/* Magic value ("virt" string) - Read Only */ > +#define VIRTIO_MDEV_MAGIC_VALUE 0x000 > + > +/* Virtio device version - Read Only */ > +#define VIRTIO_MDEV_VERSION 0x004 > + > +/* Virtio device ID - Read Only */ > +#define VIRTIO_MDEV_DEVICE_ID 0x008 > + > +/* Virtio vendor ID - Read Only */ > +#define VIRTIO_MDEV_VENDOR_ID 0x00c > + > +/* Bitmask of the features supported by the device (host) > + * (32 bits per set) - Read Only */ > +#define VIRTIO_MDEV_DEVICE_FEATURES 0x010 > + > +/* Device (host) features set selector - Write Only */ > +#define VIRTIO_MDEV_DEVICE_FEATURES_SEL 0x014 > + > +/* Bitmask of features activated by the driver (guest) > + * (32 bits per set) - Write Only */ > +#define VIRTIO_MDEV_DRIVER_FEATURES 0x020 > + > +/* Activated features set selector - Write Only */ > +#define VIRTIO_MDEV_DRIVER_FEATURES_SEL 0x024 > + > +/* Queue selector - Write Only */ > +#define VIRTIO_MDEV_QUEUE_SEL 0x030 > + > +/* Maximum size of the currently selected queue - Read Only */ > +#define VIRTIO_MDEV_QUEUE_NUM_MAX 0x034 > + > +/* Queue size for the currently selected queue - Write Only */ > +#define VIRTIO_MDEV_QUEUE_NUM 0x038 > + > +/* Ready bit for the currently selected queue - Read Write */ > +#define VIRTIO_MDEV_QUEUE_READY 0x044 > + > +/* Alignment of virtqueue - Read Only */ > +#define VIRTIO_MDEV_QUEUE_ALIGN 0x048 > + > +/* Queue notifier - Write Only */ > +#define VIRTIO_MDEV_QUEUE_NOTIFY 0x050 > + > +/* Device status register - Read Write */ > +#define VIRTIO_MDEV_STATUS 0x060 > + > +/* Selected queue's Descriptor Table address, 64 bits in two halves */ > +#define VIRTIO_MDEV_QUEUE_DESC_LOW 0x080 > +#define VIRTIO_MDEV_QUEUE_DESC_HIGH 0x084 > + > +/* Selected queue's Available Ring address, 64 bits in two halves */ > +#define VIRTIO_MDEV_QUEUE_AVAIL_LOW 0x090 > +#define VIRTIO_MDEV_QUEUE_AVAIL_HIGH 0x094 > + > +/* Selected queue's Used Ring address, 64 bits in two halves */ > +#define VIRTIO_MDEV_QUEUE_USED_LOW 0x0a0 > +#define VIRTIO_MDEV_QUEUE_USED_HIGH 0x0a4 > + > +/* Configuration atomicity value */ > +#define VIRTIO_MDEV_CONFIG_GENERATION 0x0fc > + > +/* The config space is defined by each driver as > + * the per-driver configuration space - Read Write */ > +#define VIRTIO_MDEV_CONFIG 0x100IIUC, we can use above registers with virtio-mdev parent's read()/write() to access the mdev device from kernel driver. As you suggested, it's a choice to build vhost-mdev on top of this abstraction as well. But virtio is the frontend device which lacks some vhost backend features, e.g. get vring base, set vring base, negotiate vhost features, etc. So I'm wondering, does it make sense to reserve some space for vhost-mdev in kernel to do vhost backend specific setups? Or do you have any other thoughts? Besides, I'm also wondering, what's the purpose of making above registers part of UAPI? And if we make them part of UAPI, do we also need to make them part of virtio spec? Thanks! Tiwei> + > +#endif > + > + > +/* Ready bit for the currently selected queue - Read Write */ > -- > 2.19.1 >
Alex Williamson
2019-Sep-17 19:00 UTC
[RFC PATCH 2/4] mdev: introduce helper to set per device dma ops
On Tue, 10 Sep 2019 16:19:33 +0800 Jason Wang <jasowang at redhat.com> wrote:> This patch introduces mdev_set_dma_ops() which allows parent to set > per device DMA ops. This help for the kernel driver to setup a correct > DMA mappings. > > Signed-off-by: Jason Wang <jasowang at redhat.com> > --- > drivers/vfio/mdev/mdev_core.c | 7 +++++++ > include/linux/mdev.h | 2 ++ > 2 files changed, 9 insertions(+) > > diff --git a/drivers/vfio/mdev/mdev_core.c b/drivers/vfio/mdev/mdev_core.c > index b558d4cfd082..eb28552082d7 100644 > --- a/drivers/vfio/mdev/mdev_core.c > +++ b/drivers/vfio/mdev/mdev_core.c > @@ -13,6 +13,7 @@ > #include <linux/uuid.h> > #include <linux/sysfs.h> > #include <linux/mdev.h> > +#include <linux/dma-mapping.h> > > #include "mdev_private.h" > > @@ -27,6 +28,12 @@ static struct class_compat *mdev_bus_compat_class; > static LIST_HEAD(mdev_list); > static DEFINE_MUTEX(mdev_list_lock); > > +void mdev_set_dma_ops(struct mdev_device *mdev, struct dma_map_ops *ops) > +{ > + set_dma_ops(&mdev->dev, ops); > +} > +EXPORT_SYMBOL(mdev_set_dma_ops); > +Why does mdev need to be involved here? Your sample driver in 4/4 calls this from its create callback, where it could just as easily call: set_dma_ops(mdev_dev(mdev), ops); Thanks, Alex> struct device *mdev_parent_dev(struct mdev_device *mdev) > { > return mdev->parent->dev; > diff --git a/include/linux/mdev.h b/include/linux/mdev.h > index 0ce30ca78db0..7195f40bf8bf 100644 > --- a/include/linux/mdev.h > +++ b/include/linux/mdev.h > @@ -145,4 +145,6 @@ struct device *mdev_parent_dev(struct mdev_device *mdev); > struct device *mdev_dev(struct mdev_device *mdev); > struct mdev_device *mdev_from_dev(struct device *dev); > > +void mdev_set_dma_ops(struct mdev_device *mdev, struct dma_map_ops *ops); > + > #endif /* MDEV_H */
Maybe Matching Threads
- [RFC PATCH V2 0/6] mdev based hardware virtio offloading support
- [PATCH V10 0/6] mdev based hardware virtio offloading support
- [PATCH V9 0/6] mdev based hardware virtio offloading support
- [PATCH V11 0/6] mdev based hardware virtio offloading support
- [PATCH V6 0/6] mdev based hardware virtio offloading support