Ram Pai
2019-Oct-12 01:25 UTC
[PATCH 0/2] virtio: Support encrypted memory on powerpc secure guests
**We would like the patches to be merged through the virtio tree. Please
review, and ack merging the DMA mapping change through that tree. Thanks!**

The memory of powerpc secure guests can't be accessed by the hypervisor /
virtio device except for a few memory regions designated as 'shared'.

At the moment, Linux uses bounce-buffering to communicate with the
hypervisor, with a bounce buffer marked as shared. This is how the DMA API
is implemented on this platform.

In particular, the most convenient way to use virtio on this platform is by
making virtio use the DMA API: in fact, this is exactly what happens if the
virtio device exposes the flag VIRTIO_F_ACCESS_PLATFORM. However, bugs in the
hypervisor on the powerpc platform do not allow setting this flag, with some
hypervisors already in the field that don't set this flag. At the moment they
are forced to use emulated devices when guest is in secure mode; virtio is
only useful when guest is not secure.

Normally, both device and driver must support VIRTIO_F_ACCESS_PLATFORM:
if one of them doesn't, the other mustn't assume it for communication
to work.

However, a guest-side work-around is possible to enable virtio
for these hypervisors with guest in secure mode: it so happens that on
powerpc secure platform the DMA address is actually a physical address -
that of the bounce buffer. For these platforms we can make the virtio
driver go through the DMA API even though the device itself ignores
the DMA API.

These patches implement this work around for virtio: we detect that

- secure guest mode is enabled - so we know that since we don't share
  most memory and Hypervisor has not enabled VIRTIO_F_ACCESS_PLATFORM,
  regular virtio code won't work.
- DMA API is giving us addresses that are actually also physical
  addresses.
- Hypervisor has not enabled VIRTIO_F_ACCESS_PLATFORM.

and if all conditions are true, we force all data through the bounce
buffer.

To put it another way, from hypervisor's point of view DMA API is
not required: hypervisor would be happy to get access to all of guest
memory. That's why it does not set VIRTIO_F_ACCESS_PLATFORM. However,
guest decides that it does not trust the hypervisor and wants to force
a bounce buffer for its own reasons.


Thiago Jung Bauermann (2):
  dma-mapping: Add dma_addr_is_phys_addr()
  virtio_ring: Use DMA API if memory is encrypted

 arch/powerpc/include/asm/dma-mapping.h | 21 +++++++++++++++++++++
 arch/powerpc/platforms/pseries/Kconfig |  1 +
 drivers/virtio/virtio.c                | 18 ++++++++++++++++++
 drivers/virtio/virtio_ring.c           |  8 ++++++++
 include/linux/dma-mapping.h            | 20 ++++++++++++++++++++
 include/linux/virtio_config.h          | 14 ++++++++++++++
 kernel/dma/Kconfig                     |  3 +++
 7 files changed, 85 insertions(+)

-- 
1.8.3.1
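The three checks above combine into a single guest-side decision. The following
stand-alone sketch (not kernel code) only illustrates how they combine: the stub
predicates secure_guest_mode(), dma_addr_is_phys() and
device_has_access_platform() are hypothetical placeholders for
is_secure_guest(), dma_addr_is_phys_addr() and the VIRTIO_F_ACCESS_PLATFORM
feature check used by the actual patches.

/*
 * Stand-alone model of the guest-side decision described in the cover
 * letter.  The three stub predicates are hypothetical placeholders; flip
 * their return values to explore the other cases.
 */
#include <stdbool.h>
#include <stdio.h>

static bool secure_guest_mode(void)          { return true;  } /* stub */
static bool dma_addr_is_phys(void)           { return true;  } /* stub */
static bool device_has_access_platform(void) { return false; } /* stub */

/* Force all virtio data through the shared bounce buffer (via the DMA API)? */
static bool force_bounce_buffer(void)
{
        return secure_guest_mode() &&
               dma_addr_is_phys() &&
               !device_has_access_platform();
}

int main(void)
{
        printf("force data through bounce buffer: %s\n",
               force_bounce_buffer() ? "yes" : "no");
        return 0;
}

If any one of the three conditions is false, the workaround stays out of the
way and the regular virtio paths are used.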
From: Thiago Jung Bauermann <bauerman at linux.ibm.com>

In order to safely use the DMA API, virtio needs to know whether DMA
addresses are in fact physical addresses and for that purpose,
dma_addr_is_phys_addr() is introduced.

cc: Benjamin Herrenschmidt <benh at kernel.crashing.org>
cc: David Gibson <david at gibson.dropbear.id.au>
cc: Michael Ellerman <mpe at ellerman.id.au>
cc: Paul Mackerras <paulus at ozlabs.org>
cc: Michael Roth <mdroth at linux.vnet.ibm.com>
cc: Alexey Kardashevskiy <aik at linux.ibm.com>
cc: Paul Burton <paul.burton at mips.com>
cc: Robin Murphy <robin.murphy at arm.com>
cc: Bartlomiej Zolnierkiewicz <b.zolnierkie at samsung.com>
cc: Marek Szyprowski <m.szyprowski at samsung.com>
cc: Christoph Hellwig <hch at lst.de>
Suggested-by: Michael S. Tsirkin <mst at redhat.com>
Signed-off-by: Ram Pai <linuxram at us.ibm.com>
Signed-off-by: Thiago Jung Bauermann <bauerman at linux.ibm.com>
---
 arch/powerpc/include/asm/dma-mapping.h | 21 +++++++++++++++++++++
 arch/powerpc/platforms/pseries/Kconfig |  1 +
 include/linux/dma-mapping.h            | 20 ++++++++++++++++++++
 kernel/dma/Kconfig                     |  3 +++
 4 files changed, 45 insertions(+)

diff --git a/arch/powerpc/include/asm/dma-mapping.h b/arch/powerpc/include/asm/dma-mapping.h
index 565d6f7..f92c0a4b 100644
--- a/arch/powerpc/include/asm/dma-mapping.h
+++ b/arch/powerpc/include/asm/dma-mapping.h
@@ -5,6 +5,8 @@
 #ifndef _ASM_DMA_MAPPING_H
 #define _ASM_DMA_MAPPING_H
 
+#include <asm/svm.h>
+
 static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
 {
         /* We don't handle the NULL dev case for ISA for now. We could
@@ -15,4 +17,23 @@ static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
         return NULL;
 }
 
+#ifdef CONFIG_ARCH_HAS_DMA_ADDR_IS_PHYS_ADDR
+/**
+ * dma_addr_is_phys_addr - check whether a device DMA address is a physical
+ *                         address
+ * @dev: device to check
+ *
+ * Returns %true if any DMA address for this device happens to also be a valid
+ * physical address (not necessarily of the same page).
+ */
+static inline bool dma_addr_is_phys_addr(struct device *dev)
+{
+        /*
+         * Secure guests always use the SWIOTLB, therefore DMA addresses are
+         * actually the physical address of the bounce buffer.
+         */
+        return is_secure_guest();
+}
+#endif
+
 #endif /* _ASM_DMA_MAPPING_H */
diff --git a/arch/powerpc/platforms/pseries/Kconfig b/arch/powerpc/platforms/pseries/Kconfig
index 9e35cdd..0108150 100644
--- a/arch/powerpc/platforms/pseries/Kconfig
+++ b/arch/powerpc/platforms/pseries/Kconfig
@@ -152,6 +152,7 @@ config PPC_SVM
         select SWIOTLB
         select ARCH_HAS_MEM_ENCRYPT
         select ARCH_HAS_FORCE_DMA_UNENCRYPTED
+        select ARCH_HAS_DMA_ADDR_IS_PHYS_ADDR
         help
          There are certain POWER platforms which support secure guests using
          the Protected Execution Facility, with the help of an Ultravisor
diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index f7d1eea..6df5664 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -693,6 +693,26 @@ static inline bool dma_addressing_limited(struct device *dev)
                         dma_get_required_mask(dev);
 }
 
+#ifndef CONFIG_ARCH_HAS_DMA_ADDR_IS_PHYS_ADDR
+/**
+ * dma_addr_is_phys_addr - check whether a device DMA address is a physical
+ *                         address
+ * @dev: device to check
+ *
+ * Returns %true if any DMA address for this device happens to also be a valid
+ * physical address (not necessarily of the same page).
+ */
+static inline bool dma_addr_is_phys_addr(struct device *dev)
+{
+        /*
+         * Except in very specific setups, DMA addresses exist in a different
+         * address space from CPU physical addresses and cannot be directly used
+         * to reference system memory.
+         */
+        return false;
+}
+#endif
+
 #ifdef CONFIG_ARCH_HAS_SETUP_DMA_OPS
 void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
                 const struct iommu_ops *iommu, bool coherent);
diff --git a/kernel/dma/Kconfig b/kernel/dma/Kconfig
index 9decbba..6209b46 100644
--- a/kernel/dma/Kconfig
+++ b/kernel/dma/Kconfig
@@ -51,6 +51,9 @@ config ARCH_HAS_DMA_MMAP_PGPROT
 config ARCH_HAS_FORCE_DMA_UNENCRYPTED
         bool
 
+config ARCH_HAS_DMA_ADDR_IS_PHYS_ADDR
+        bool
+
 config DMA_NONCOHERENT_CACHE_SYNC
         bool
 
-- 
1.8.3.1
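The generic dma_addr_is_phys_addr() above returns false unless the architecture
selects ARCH_HAS_DMA_ADDR_IS_PHYS_ADDR and supplies its own definition, as
pseries does for secure guests. Below is a minimal stand-alone sketch of that
override pattern; ARCH_OVERRIDE and the secure_guest flag are made-up names
standing in for the Kconfig symbol and is_secure_guest(), and this is not
kernel code.

/*
 * Stand-alone illustration of the Kconfig-driven override pattern used by
 * dma_addr_is_phys_addr().  Build with -DARCH_OVERRIDE to get the
 * "architecture" version, without it to get the generic fallback.
 */
#include <stdbool.h>
#include <stdio.h>

#ifdef ARCH_OVERRIDE
static bool secure_guest = true;        /* stub for is_secure_guest() */

/* Arch version: DMA addresses are the physical addresses of bounce buffers. */
static bool dma_addr_is_phys_addr(void)
{
        return secure_guest;
}
#else
/* Generic fallback: DMA addresses normally live in their own address space. */
static bool dma_addr_is_phys_addr(void)
{
        return false;
}
#endif

int main(void)
{
        printf("dma_addr_is_phys_addr() -> %s\n",
               dma_addr_is_phys_addr() ? "true" : "false");
        return 0;
}

Compiling with cc -DARCH_OVERRIDE selects the override, which is the role
'select ARCH_HAS_DMA_ADDR_IS_PHYS_ADDR' under PPC_SVM plays for the real
header.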
From: Thiago Jung Bauermann <bauerman at linux.ibm.com>

Normally, virtio enables the DMA API with VIRTIO_F_IOMMU_PLATFORM, which must
be set by both device and guest driver. However, as a hack, when the DMA API
returns physical addresses, the guest driver can use the DMA API even though
the device does not set VIRTIO_F_IOMMU_PLATFORM and just uses physical
addresses.

Doing this works around the case of POWER secure guests, for which only the
bounce buffer is accessible to the device but which don't set
VIRTIO_F_IOMMU_PLATFORM due to a set of hypervisor and architectural bugs.
To guard against platform changes breaking any of these assumptions down the
road, we check at probe time and fail if that's not the case.

cc: Benjamin Herrenschmidt <benh at kernel.crashing.org>
cc: David Gibson <david at gibson.dropbear.id.au>
cc: Michael Ellerman <mpe at ellerman.id.au>
cc: Paul Mackerras <paulus at ozlabs.org>
cc: Michael Roth <mdroth at linux.vnet.ibm.com>
cc: Alexey Kardashevskiy <aik at linux.ibm.com>
cc: Jason Wang <jasowang at redhat.com>
cc: Christoph Hellwig <hch at lst.de>
Suggested-by: Michael S. Tsirkin <mst at redhat.com>
Signed-off-by: Ram Pai <linuxram at us.ibm.com>
Signed-off-by: Thiago Jung Bauermann <bauerman at linux.ibm.com>
---
 drivers/virtio/virtio.c       | 18 ++++++++++++++++++
 drivers/virtio/virtio_ring.c  |  8 ++++++++
 include/linux/virtio_config.h | 14 ++++++++++++++
 3 files changed, 40 insertions(+)

diff --git a/drivers/virtio/virtio.c b/drivers/virtio/virtio.c
index a977e32..77a3baf 100644
--- a/drivers/virtio/virtio.c
+++ b/drivers/virtio/virtio.c
@@ -4,6 +4,7 @@
 #include <linux/virtio_config.h>
 #include <linux/module.h>
 #include <linux/idr.h>
+#include <linux/dma-mapping.h>
 #include <uapi/linux/virtio_ids.h>
 
 /* Unique numbering for virtio devices. */
@@ -245,6 +246,23 @@ static int virtio_dev_probe(struct device *_d)
         if (err)
                 goto err;
 
+        /*
+         * If memory is encrypted, but VIRTIO_F_IOMMU_PLATFORM is not set, then
+         * the device is broken: DMA API is required for these platforms, but
+         * the only way using the DMA API is going to work at all is if the
+         * device is ready for it. So we need a flag on the virtio device,
+         * exposed by the hypervisor (or hardware for hw virtio devices) that
+         * says: hey, I'm real, don't take a shortcut.
+         *
+         * There's one exception where guest can make things work, and that is
+         * when DMA API is guaranteed to always return physical addresses.
+         */
+        if (mem_encrypt_active() && !virtio_can_use_dma_api(dev)) {
+                dev_err(_d, "virtio: device unable to access encrypted memory\n");
+                err = -EINVAL;
+                goto err;
+        }
+
         err = drv->probe(dev);
         if (err)
                 goto err;
diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index c8be1c4..9c56b61 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -255,6 +255,14 @@ static bool vring_use_dma_api(struct virtio_device *vdev)
         if (xen_domain())
                 return true;
 
+        /*
+         * Also, if guest memory is encrypted the host can't access it
+         * directly. We need to either use an IOMMU or do bounce buffering.
+         * Both are done via the DMA API.
+         */
+        if (mem_encrypt_active() && virtio_can_use_dma_api(vdev))
+                return true;
+
         return false;
 }
 
diff --git a/include/linux/virtio_config.h b/include/linux/virtio_config.h
index bb4cc49..57bc25c 100644
--- a/include/linux/virtio_config.h
+++ b/include/linux/virtio_config.h
@@ -4,6 +4,7 @@
 
 #include <linux/err.h>
 #include <linux/bug.h>
+#include <linux/dma-mapping.h>
 #include <linux/virtio.h>
 #include <linux/virtio_byteorder.h>
 #include <uapi/linux/virtio_config.h>
@@ -174,6 +175,19 @@ static inline bool virtio_has_iommu_quirk(const struct virtio_device *vdev)
         return !virtio_has_feature(vdev, VIRTIO_F_IOMMU_PLATFORM);
 }
 
+/**
+ * virtio_can_use_dma_api - determine whether the DMA API can be used
+ * @vdev: the device
+ *
+ * The DMA API can be used either when the device doesn't have the IOMMU quirk,
+ * or when the DMA API is guaranteed to always return physical addresses.
+ */
+static inline bool virtio_can_use_dma_api(const struct virtio_device *vdev)
+{
+        return !virtio_has_iommu_quirk(vdev) ||
+               dma_addr_is_phys_addr(vdev->dev.parent);
+}
+
 static inline struct virtqueue *virtio_find_single_vq(struct virtio_device *vdev,
                                         vq_callback_t *c, const char *n)
 
-- 
1.8.3.1
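The two hunks above reduce to a small amount of boolean logic:
virtio_can_use_dma_api() is true when the device doesn't have the IOMMU quirk
or when DMA addresses are guaranteed to be physical addresses; probe fails when
memory is encrypted and that predicate is false; and vring_use_dma_api()
additionally returns true when memory is encrypted and the predicate is true.
The stand-alone model below enumerates the cases; the names encrypted,
has_iommu_platform and dma_is_phys are invented for illustration and the Xen
special case is omitted.

/*
 * Stand-alone model of the checks added by this patch.  The inputs are
 * hypothetical stand-ins for mem_encrypt_active(), !virtio_has_iommu_quirk()
 * and dma_addr_is_phys_addr().
 */
#include <stdbool.h>
#include <stdio.h>

/* Mirrors virtio_can_use_dma_api(): no IOMMU quirk, or DMA addr == phys addr. */
static bool can_use_dma_api(bool has_iommu_platform, bool dma_is_phys)
{
        return has_iommu_platform || dma_is_phys;
}

int main(void)
{
        printf("enc  iommu_plat  dma_phys | probe    use_dma_api\n");
        for (int enc = 0; enc < 2; enc++)
                for (int plat = 0; plat < 2; plat++)
                        for (int phys = 0; phys < 2; phys++) {
                                bool can = can_use_dma_api(plat, phys);
                                bool probe_fails = enc && !can;
                                /* vring_use_dma_api(), minus the Xen case */
                                bool use_dma = plat || (enc && can);

                                printf("%3d %11d %9d | %-8s %s\n",
                                       enc, plat, phys,
                                       probe_fails ? "-EINVAL" : "ok",
                                       use_dma ? "yes" : "no");
                        }
        return 0;
}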
Ram Pai
2019-Oct-12 01:36 UTC
[PATCH 0/2] virtio: Support encrypted memory on powerpc secure guests
Hmm.. git-send-email forgot to CC Michael Tsirkin and Thiago, the
original author, who is on vacation. Adding them now.

RP

On Fri, Oct 11, 2019 at 06:25:17PM -0700, Ram Pai wrote:
> **We would like the patches to be merged through the virtio tree. Please
> review, and ack merging the DMA mapping change through that tree. Thanks!**
> 
> The memory of powerpc secure guests can't be accessed by the hypervisor /
> virtio device except for a few memory regions designated as 'shared'.
> 
> At the moment, Linux uses bounce-buffering to communicate with the
> hypervisor, with a bounce buffer marked as shared. This is how the DMA API
> is implemented on this platform.
> 
> In particular, the most convenient way to use virtio on this platform is by
> making virtio use the DMA API: in fact, this is exactly what happens if the
> virtio device exposes the flag VIRTIO_F_ACCESS_PLATFORM. However, bugs in the
> hypervisor on the powerpc platform do not allow setting this flag, with some
> hypervisors already in the field that don't set this flag. At the moment they
> are forced to use emulated devices when guest is in secure mode; virtio is
> only useful when guest is not secure.
> 
> Normally, both device and driver must support VIRTIO_F_ACCESS_PLATFORM:
> if one of them doesn't, the other mustn't assume it for communication
> to work.
> 
> However, a guest-side work-around is possible to enable virtio
> for these hypervisors with guest in secure mode: it so happens that on
> powerpc secure platform the DMA address is actually a physical address -
> that of the bounce buffer. For these platforms we can make the virtio
> driver go through the DMA API even though the device itself ignores
> the DMA API.
> 
> These patches implement this work around for virtio: we detect that
> 
> - secure guest mode is enabled - so we know that since we don't share
>   most memory and Hypervisor has not enabled VIRTIO_F_ACCESS_PLATFORM,
>   regular virtio code won't work.
> - DMA API is giving us addresses that are actually also physical
>   addresses.
> - Hypervisor has not enabled VIRTIO_F_ACCESS_PLATFORM.
> 
> and if all conditions are true, we force all data through the bounce
> buffer.
> 
> To put it another way, from hypervisor's point of view DMA API is
> not required: hypervisor would be happy to get access to all of guest
> memory. That's why it does not set VIRTIO_F_ACCESS_PLATFORM. However,
> guest decides that it does not trust the hypervisor and wants to force
> a bounce buffer for its own reasons.
> 
> 
> Thiago Jung Bauermann (2):
>   dma-mapping: Add dma_addr_is_phys_addr()
>   virtio_ring: Use DMA API if memory is encrypted
> 
>  arch/powerpc/include/asm/dma-mapping.h | 21 +++++++++++++++++++++
>  arch/powerpc/platforms/pseries/Kconfig |  1 +
>  drivers/virtio/virtio.c                | 18 ++++++++++++++++++
>  drivers/virtio/virtio_ring.c           |  8 ++++++++
>  include/linux/dma-mapping.h            | 20 ++++++++++++++++++++
>  include/linux/virtio_config.h          | 14 ++++++++++++++
>  kernel/dma/Kconfig                     |  3 +++
>  7 files changed, 85 insertions(+)
> 
> -- 
> 1.8.3.1

-- 
Ram Pai
On Fri, Oct 11, 2019 at 06:25:18PM -0700, Ram Pai wrote:
> From: Thiago Jung Bauermann <bauerman at linux.ibm.com>
> 
> In order to safely use the DMA API, virtio needs to know whether DMA
> addresses are in fact physical addresses and for that purpose,
> dma_addr_is_phys_addr() is introduced.
> 
> cc: Benjamin Herrenschmidt <benh at kernel.crashing.org>
> cc: David Gibson <david at gibson.dropbear.id.au>
> cc: Michael Ellerman <mpe at ellerman.id.au>
> cc: Paul Mackerras <paulus at ozlabs.org>
> cc: Michael Roth <mdroth at linux.vnet.ibm.com>
> cc: Alexey Kardashevskiy <aik at linux.ibm.com>
> cc: Paul Burton <paul.burton at mips.com>
> cc: Robin Murphy <robin.murphy at arm.com>
> cc: Bartlomiej Zolnierkiewicz <b.zolnierkie at samsung.com>
> cc: Marek Szyprowski <m.szyprowski at samsung.com>
> cc: Christoph Hellwig <hch at lst.de>
> Suggested-by: Michael S. Tsirkin <mst at redhat.com>
> Signed-off-by: Ram Pai <linuxram at us.ibm.com>
> Signed-off-by: Thiago Jung Bauermann <bauerman at linux.ibm.com>

The change itself looks ok, so

Reviewed-by: David Gibson <david at gibson.dropbear.id.au>

However, I would like to see the commit message (and maybe the inline
comments) expanded a bit on what the distinction here is about.  Some of
the text from the next patch would be suitable, about DMA addresses
usually being in a different address space but not in the case of bounce
buffering.

> ---
>  arch/powerpc/include/asm/dma-mapping.h | 21 +++++++++++++++++++++
>  arch/powerpc/platforms/pseries/Kconfig |  1 +
>  include/linux/dma-mapping.h            | 20 ++++++++++++++++++++
>  kernel/dma/Kconfig                     |  3 +++
>  4 files changed, 45 insertions(+)
> 
> diff --git a/arch/powerpc/include/asm/dma-mapping.h b/arch/powerpc/include/asm/dma-mapping.h
> index 565d6f7..f92c0a4b 100644
> --- a/arch/powerpc/include/asm/dma-mapping.h
> +++ b/arch/powerpc/include/asm/dma-mapping.h
> @@ -5,6 +5,8 @@
>  #ifndef _ASM_DMA_MAPPING_H
>  #define _ASM_DMA_MAPPING_H
>  
> +#include <asm/svm.h>
> +
>  static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
>  {
>          /* We don't handle the NULL dev case for ISA for now. We could
> @@ -15,4 +17,23 @@ static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
>          return NULL;
>  }
>  
> +#ifdef CONFIG_ARCH_HAS_DMA_ADDR_IS_PHYS_ADDR
> +/**
> + * dma_addr_is_phys_addr - check whether a device DMA address is a physical
> + *                         address
> + * @dev: device to check
> + *
> + * Returns %true if any DMA address for this device happens to also be a valid
> + * physical address (not necessarily of the same page).
> + */
> +static inline bool dma_addr_is_phys_addr(struct device *dev)
> +{
> +        /*
> +         * Secure guests always use the SWIOTLB, therefore DMA addresses are
> +         * actually the physical address of the bounce buffer.
> +         */
> +        return is_secure_guest();
> +}
> +#endif
> +
>  #endif /* _ASM_DMA_MAPPING_H */
> diff --git a/arch/powerpc/platforms/pseries/Kconfig b/arch/powerpc/platforms/pseries/Kconfig
> index 9e35cdd..0108150 100644
> --- a/arch/powerpc/platforms/pseries/Kconfig
> +++ b/arch/powerpc/platforms/pseries/Kconfig
> @@ -152,6 +152,7 @@ config PPC_SVM
>          select SWIOTLB
>          select ARCH_HAS_MEM_ENCRYPT
>          select ARCH_HAS_FORCE_DMA_UNENCRYPTED
> +        select ARCH_HAS_DMA_ADDR_IS_PHYS_ADDR
>          help
>           There are certain POWER platforms which support secure guests using
>           the Protected Execution Facility, with the help of an Ultravisor
> diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
> index f7d1eea..6df5664 100644
> --- a/include/linux/dma-mapping.h
> +++ b/include/linux/dma-mapping.h
> @@ -693,6 +693,26 @@ static inline bool dma_addressing_limited(struct device *dev)
>                          dma_get_required_mask(dev);
>  }
>  
> +#ifndef CONFIG_ARCH_HAS_DMA_ADDR_IS_PHYS_ADDR
> +/**
> + * dma_addr_is_phys_addr - check whether a device DMA address is a physical
> + *                         address
> + * @dev: device to check
> + *
> + * Returns %true if any DMA address for this device happens to also be a valid
> + * physical address (not necessarily of the same page).
> + */
> +static inline bool dma_addr_is_phys_addr(struct device *dev)
> +{
> +        /*
> +         * Except in very specific setups, DMA addresses exist in a different
> +         * address space from CPU physical addresses and cannot be directly used
> +         * to reference system memory.
> +         */
> +        return false;
> +}
> +#endif
> +
>  #ifdef CONFIG_ARCH_HAS_SETUP_DMA_OPS
>  void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
>                  const struct iommu_ops *iommu, bool coherent);
> diff --git a/kernel/dma/Kconfig b/kernel/dma/Kconfig
> index 9decbba..6209b46 100644
> --- a/kernel/dma/Kconfig
> +++ b/kernel/dma/Kconfig
> @@ -51,6 +51,9 @@ config ARCH_HAS_DMA_MMAP_PGPROT
>  config ARCH_HAS_FORCE_DMA_UNENCRYPTED
>          bool
>  
> +config ARCH_HAS_DMA_ADDR_IS_PHYS_ADDR
> +        bool
> +
>  config DMA_NONCOHERENT_CACHE_SYNC
>          bool
> 

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: not available
URL: <http://lists.linuxfoundation.org/pipermail/virtualization/attachments/20191014/7cb36119/attachment.sig>
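The distinction being asked for is that a dma_addr_t normally names a slot in a
separate bus/IOMMU address space, whereas with SWIOTLB bounce buffering it is
literally the physical address of a page in the shared bounce pool. A
stand-alone sketch of the two cases follows; both mapping functions and every
address in them are invented purely for illustration.

/*
 * Stand-alone illustration of why a dma_addr_t is normally not a CPU
 * physical address, except when it comes from a bounce buffer.
 */
#include <stdint.h>
#include <stdio.h>

typedef uint64_t dma_addr_t;
typedef uint64_t phys_addr_t;

/* IOMMU case: hand out I/O virtual addresses from a separate address space. */
static dma_addr_t iommu_map(phys_addr_t paddr)
{
        static dma_addr_t next_iova = 0x80000000ULL;    /* made-up IOVA window */
        dma_addr_t iova = next_iova;

        (void)paddr;            /* a real IOMMU would map iova -> paddr here */
        next_iova += 0x1000;    /* one page per mapping in this toy model */
        return iova;
}

/* SWIOTLB case: copy into a shared bounce page and return its physical address. */
static dma_addr_t swiotlb_map(phys_addr_t paddr)
{
        static phys_addr_t bounce_page = 0x40000000ULL; /* made-up shared region */

        (void)paddr;            /* the data would be copied into the pool here */
        return bounce_page;     /* the DMA address is a physical address */
}

int main(void)
{
        phys_addr_t buf = 0x12345000ULL;                /* made-up guest buffer */

        printf("IOMMU   mapping: dma_addr = 0x%llx (IOVA, not a physical address)\n",
               (unsigned long long)iommu_map(buf));
        printf("SWIOTLB mapping: dma_addr = 0x%llx (physical address of bounce page)\n",
               (unsigned long long)swiotlb_map(buf));
        return 0;
}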