Konrad Rzeszutek Wilk
2021-Feb-03 19:36 UTC
[PATCH] swiotlb: Validate bounce size in the sync/unmap path
On Wed, Feb 03, 2021 at 01:49:22PM +0100, Christoph Hellwig wrote:
> On Mon, Jan 18, 2021 at 12:44:58PM +0100, Martin Radev wrote:
> > Your comment makes sense but then that would require the cooperation
> > of these vendors and the cloud providers to agree on something meaningful.
> > I am also not sure whether the end result would be better than hardening
> > this interface to catch corruption. There is already some validation in
> > the unmap path anyway.
>
> So what? If you guys want to provide a new capability you'll have to do
> work. And designing a new protocol based around the fact that the
> hardware/hypervisor is not trusted and a copy is always required makes
> a lot more sense than throwing in band aids all over the place.

If you don't trust the hypervisor, what would this capability be in?

I suppose you mean this would need to be in the guest kernel, and this
protocol would depend on something that is not the hypervisor - and most
certainly not virtio or any SR-IOV device. That removes a lot of options.

The one sensible option (since folks will trust OEM vendors like Intel or
AMD to provide the memory encryption, so they will also trust the IOMMU -
I hope?) is what those vendors have planned with their IOMMU frameworks,
which will remove the need for SWIOTLB (I hope). But that is not now; it
is in the future.
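For context, the hardening being discussed is essentially a bounds check
on the sync/unmap path. Below is a minimal sketch of the idea, not the
literal patch; the identifiers io_tlb_orig_size and swiotlb_size_is_valid
are illustrative, not necessarily those used in the actual change.

/*
 * Sketch: remember the size of each bounce-buffer mapping, and refuse a
 * sync/unmap whose size exceeds what was originally mapped, so a
 * misbehaving device or hypervisor cannot trigger an out-of-bounds copy
 * back into the guest.  Names here are illustrative.
 */
#include <linux/printk.h>
#include <linux/types.h>

static size_t *io_tlb_orig_size;	/* one entry per swiotlb slot */

static bool swiotlb_size_is_valid(int index, size_t size)
{
	if (size > io_tlb_orig_size[index]) {
		pr_warn_once("swiotlb: sync/unmap size %zu exceeds mapped size %zu\n",
			     size, io_tlb_orig_size[index]);
		return false;
	}
	return true;
}

A check of this shape catches a device (or hypervisor) that reports a
larger transfer than was mapped, which is the corruption case the patch
subject refers to.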
Christoph Hellwig
2021-Feb-05 17:58 UTC
[PATCH] swiotlb: Validate bounce size in the sync/unmap path
On Wed, Feb 03, 2021 at 02:36:38PM -0500, Konrad Rzeszutek Wilk wrote:
> > So what? If you guys want to provide a new capability you'll have to do
> > work. And designing a new protocol based around the fact that the
> > hardware/hypervisor is not trusted and a copy is always required makes
> > a lot more sense than throwing in band aids all over the place.
>
> If you don't trust the hypervisor, what would this capability be in?

Well, they don't trust the hypervisor not to attack the guest somehow,
except through the data read. I never really understood the concept, as it
leaves too many holes.

But the point is that these schemes want to force bounce buffering because
they think it is more secure. And if that is what you want, you had better
have a protocol built around the fact that each I/O needs to use bounce
buffers: make those buffers the actual shared memory used for
communication, and build the protocol around that.

E.g. you don't force the ridiculous NVMe PRP offset rules on the block
layer just to make a complicated swiotlb allocation that needs to preserve
the alignment just to do I/O. Instead you have a trivial ring buffer or
whatever, because you know the I/O will be copied anyway, and none of the
hard work higher layers do to make the I/O suitable for a normal device
applies.
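A rough sketch of the shape being described: since every I/O is copied
into shared memory anyway, the shared region itself can be a trivial ring
of fixed-size slots, with no device-specific layout rules to preserve.
All identifiers below are hypothetical, not an existing kernel API.

/*
 * Hypothetical protocol built around mandatory bounce buffering: the
 * shared (decrypted) region is a trivial ring of fixed-size slots, and
 * "mapping" an I/O is just copying into the next free slot.  No attempt
 * is made to honour device layout rules such as NVMe PRP offsets,
 * because no real device ever sees the guest's original buffer.
 */
#include <linux/string.h>
#include <linux/types.h>

#define RING_SLOTS	256
#define SLOT_SIZE	4096	/* fixed-size slots: no alignment games */

struct io_ring {
	u32	head;		/* producer index (guest) */
	u32	tail;		/* consumer index (host)  */
	u8	slot[RING_SLOTS][SLOT_SIZE];
};

/* Copy one request into the shared ring; returns slot index or -1 if full. */
static int ring_push(struct io_ring *ring, const void *data, size_t len)
{
	u32 head = ring->head;

	if (len > SLOT_SIZE || head - ring->tail == RING_SLOTS)
		return -1;

	memcpy(ring->slot[head % RING_SLOTS], data, len);
	ring->head = head + 1;	/* a real protocol would add a memory barrier */
	return head % RING_SLOTS;
}

The design point is that the copy into the slot is the bounce buffering,
so none of the alignment-preserving allocation work in swiotlb is needed.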