On Mon, Nov 1, 2021 at 10:11 PM Jason Gunthorpe <jgg at nvidia.com> wrote:
>
> On Thu, Mar 26, 2020 at 10:01:23PM +0800, Jason Wang wrote:
> > From: Tiwei Bie <tiwei.bie at intel.com>
> >
> > This patch introduces a vDPA-based vhost backend. This backend is
> > built on top of the same interface defined in virtio-vDPA and provides
> > a generic vhost interface for userspace to accelerate virtio
> > devices in the guest.
> >
> > This backend is implemented as a vDPA device driver on top of the same
> > ops used in virtio-vDPA. It will create a char device entry named
> > vhost-vdpa-$index for userspace to use. Userspace can use vhost ioctls
> > on top of this char device to set up the backend.
> >
> > Vhost ioctls are extended to make them type agnostic and behave like a
> > virtio device; this helps to eliminate the type-specific APIs that
> > vhost_net/scsi/vsock used:
> >
> > - VHOST_VDPA_GET_DEVICE_ID: get the virtio device ID, which is defined
> > by the virtio specification to distinguish between different types of devices
> > - VHOST_VDPA_GET_VRING_NUM: get the maximum virtqueue size
> > supported by the vDPA device
> > - VHOST_VDPA_SET/GET_STATUS: set and get the virtio status of the vDPA device
> > - VHOST_VDPA_SET/GET_CONFIG: access the virtio config space
> > - VHOST_VDPA_SET_VRING_ENABLE: enable a specific virtqueue
> >
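As a quick illustration of the flow described above, a minimal userspace
sketch of these ioctls could look roughly like this. The ioctl names are the
uapi ones listed above; the /dev/vhost-vdpa-0 path, the variable names and
the lack of real error handling are illustrative only:

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/types.h>
#include <linux/vhost.h>

int main(void)
{
        /* Illustrative path; the index depends on which vDPA device is bound. */
        int fd = open("/dev/vhost-vdpa-0", O_RDWR);
        if (fd < 0)
                return 1;

        __u32 dev_id = 0;    /* virtio device ID, e.g. 1 for net */
        __u16 vring_num = 0; /* maximum virtqueue size */

        if (ioctl(fd, VHOST_VDPA_GET_DEVICE_ID, &dev_id) == 0)
                printf("virtio device id: %u\n", dev_id);
        if (ioctl(fd, VHOST_VDPA_GET_VRING_NUM, &vring_num) == 0)
                printf("max vring size: %u\n", vring_num);

        /* ... VHOST_VDPA_SET_STATUS, VHOST_VDPA_SET/GET_CONFIG, etc. ... */
        return 0;
}
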
> > For memory mapping, IOTLB API is mandated for vhost-vDPA which means
> > userspace drivers are required to use
> > VHOST_IOTLB_UPDATE/VHOST_IOTLB_INVALIDATE to add or remove mapping for
> > a specific userspace memory region.
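The IOTLB updates mentioned above are sent by writing vhost messages to the
same fd. A rough sketch, assuming the VHOST_IOTLB_MSG_V2 message format from
<linux/vhost_types.h>; the iova/uaddr values are placeholders and any backend
feature negotiation that may be required first is omitted:

#include <unistd.h>
#include <string.h>
#include <linux/vhost.h>
#include <linux/vhost_types.h>

/* Sketch: add a mapping for one page of the caller's memory. */
static int iotlb_map_one_page(int vdpa_fd, __u64 iova, __u64 uaddr)
{
        struct vhost_msg_v2 msg;

        memset(&msg, 0, sizeof(msg));
        msg.type = VHOST_IOTLB_MSG_V2;
        msg.iotlb.iova = iova;
        msg.iotlb.uaddr = uaddr;
        msg.iotlb.size = 4096;
        msg.iotlb.perm = VHOST_ACCESS_RW;
        msg.iotlb.type = VHOST_IOTLB_UPDATE;

        return write(vdpa_fd, &msg, sizeof(msg)) == sizeof(msg) ? 0 : -1;
}

Invalidation is the same message with type set to VHOST_IOTLB_INVALIDATE.
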
> >
> > The vhost-vDPA API is designed to be type agnostic, but it only allows
> > net devices at the current stage. Due to the lack of control virtqueue
> > support, some features are filtered out by vhost-vdpa.
> >
> > We will enable more features and devices in the near future.
>
> [..]
>
> > +static int vhost_vdpa_alloc_domain(struct vhost_vdpa *v)
> > +{
> > +        struct vdpa_device *vdpa = v->vdpa;
> > +        const struct vdpa_config_ops *ops = vdpa->config;
> > +        struct device *dma_dev = vdpa_get_dma_dev(vdpa);
> > +        struct bus_type *bus;
> > +        int ret;
> > +
> > +        /* Device wants to do DMA by itself */
> > +        if (ops->set_map || ops->dma_map)
> > +                return 0;
> > +
> > +        bus = dma_dev->bus;
> > +        if (!bus)
> > +                return -EFAULT;
> > +
> > +        if (!iommu_capable(bus, IOMMU_CAP_CACHE_COHERENCY))
> > +                return -ENOTSUPP;
> > +
> > +        v->domain = iommu_domain_alloc(bus);
> > +        if (!v->domain)
> > +                return -EIO;
> > +
> > +        ret = iommu_attach_device(v->domain, dma_dev);
> > +        if (ret)
> > +                goto err_attach;
> >
>
> I've been looking at the security of iommu_attach_device() users, and
> I wonder if this is safe?
>
> The security question is whether userspace is able to control the DMA
> addresses the device uses, e.g. whether any of the CPU-to-device rings
> are in userspace memory?
>
> For instance, can userspace tell the device to send a packet from an
> arbitrary user-controlled address?
Each mapping is validated via pin_user_pages(), which guarantees that the
address is not arbitrary and must belong to the calling userspace, doesn't it?
A rough sketch of that path is below.

Thanks
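
To make that concrete, the IOTLB update path looks roughly like the
following. This is a heavily simplified single-page sketch, not the actual
patch code: no locking, batching or rlimit accounting, the helper name is
made up, and perm is assumed to already be an IOMMU prot value
(IOMMU_READ | IOMMU_WRITE):

#include <linux/mm.h>
#include <linux/iommu.h>

/*
 * Only memory that pin_user_pages() accepts (i.e. pages mapped in the
 * calling process) can end up in the IOMMU domain, so the device cannot
 * be pointed at an arbitrary physical address.
 */
static int vhost_vdpa_map_one_page_sketch(struct vhost_vdpa *v,
                                           u64 iova, u64 uaddr, u32 perm)
{
        struct page *page;
        long npinned;
        int ret;

        npinned = pin_user_pages(uaddr & PAGE_MASK, 1,
                                 FOLL_LONGTERM | FOLL_WRITE, &page, NULL);
        if (npinned != 1)
                return -EFAULT;

        ret = iommu_map(v->domain, iova & PAGE_MASK, page_to_phys(page),
                        PAGE_SIZE, perm);
        if (ret)
                unpin_user_page(page);
        return ret;
}

So the IOVA-to-physical translation the device sees is built only from pages
the calling process legitimately owns and has pinned.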
>
> Thanks,
> Jason
>