On map: only flush when the old PTE was valid or when invalid PTEs may be cached.
On unmap: always flush the old entry, but skip the flush for unaffected IOMMUs.

Signed-off-by: Espen Skoglund <espen.skoglund@netronome.com>

--
 iommu.c |   17 +++++++++++------
 1 file changed, 11 insertions(+), 6 deletions(-)
--

diff -r 8187fd8113f9 xen/drivers/passthrough/vtd/iommu.c
--- a/xen/drivers/passthrough/vtd/iommu.c	Tue May 27 11:46:52 2008 +0100
+++ b/xen/drivers/passthrough/vtd/iommu.c	Tue May 27 17:16:51 2008 +0100
@@ -1525,6 +1525,7 @@
     struct iommu *iommu;
     struct dma_pte *page = NULL, *pte = NULL;
     u64 pg_maddr;
+    int pte_present;
 
     drhd = list_entry(acpi_drhd_units.next, typeof(*drhd), list);
     iommu = drhd->iommu;
@@ -1540,6 +1541,7 @@
         return -ENOMEM;
     page = (struct dma_pte *)map_vtd_domain_page(pg_maddr);
     pte = page + (gfn & LEVEL_MASK);
+    pte_present = dma_pte_present(*pte);
     dma_set_pte_addr(*pte, (paddr_t)mfn << PAGE_SHIFT_4K);
     dma_set_pte_prot(*pte, DMA_PTE_READ | DMA_PTE_WRITE);
     iommu_flush_cache_entry(iommu, pte);
@@ -1552,7 +1554,7 @@
         if ( !test_bit(iommu->index, &hd->iommu_bitmap) )
             continue;
 
-        if ( cap_caching_mode(iommu->cap) )
+        if ( pte_present || cap_caching_mode(iommu->cap) )
             iommu_flush_iotlb_psi(iommu, domain_iommu_domid(d),
                                   (paddr_t)gfn << PAGE_SHIFT_4K, 1, 0);
         else if ( cap_rwbf(iommu->cap) )
@@ -1564,6 +1566,7 @@
 
 int intel_iommu_unmap_page(struct domain *d, unsigned long gfn)
 {
+    struct hvm_iommu *hd = domain_hvm_iommu(d);
     struct acpi_drhd_unit *drhd;
     struct iommu *iommu;
     struct dma_pte *page = NULL, *pte = NULL;
@@ -1590,11 +1593,13 @@
     for_each_drhd_unit ( drhd )
     {
         iommu = drhd->iommu;
-        if ( cap_caching_mode(iommu->cap) )
-            iommu_flush_iotlb_psi(iommu, domain_iommu_domid(d),
-                                  (paddr_t)gfn << PAGE_SHIFT_4K, 1, 0);
-        else if ( cap_rwbf(iommu->cap) )
-            iommu_flush_write_buffer(iommu);
+
+        if ( !test_bit(iommu->index, &hd->iommu_bitmap) )
+            continue;
+
+        if ( iommu_flush_iotlb_psi(iommu, domain_iommu_domid(d),
+                                   (paddr_t)gfn << PAGE_SHIFT_4K, 1, 0) )
+            iommu_flush_write_buffer(iommu);
     }
 
     return 0;

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
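The flush policy the patch introduces on the map path can be condensed into a small standalone sketch. The names below (`fake_iommu`, `map_flush`, `in_domain`) are simplified stand-ins for the real Xen/VT-d structures and capability macros, not actual Xen API:

```c
#include <assert.h>
#include <stdbool.h>

enum flush_action { FLUSH_NONE, FLUSH_IOTLB_PSI, FLUSH_WRITE_BUFFER };

struct fake_iommu {
    bool in_domain;     /* stands in for test_bit(iommu->index, &hd->iommu_bitmap) */
    bool caching_mode;  /* stands in for cap_caching_mode(iommu->cap) */
    bool rwbf;          /* stands in for cap_rwbf(iommu->cap) */
};

/* Flush decision on the map path, after the PTE has been rewritten. */
static enum flush_action map_flush(const struct fake_iommu *iommu,
                                   bool pte_was_present)
{
    if (!iommu->in_domain)
        return FLUSH_NONE;              /* this IOMMU never saw the mapping */
    if (pte_was_present || iommu->caching_mode)
        return FLUSH_IOTLB_PSI;         /* the old entry may sit in the IOTLB */
    if (iommu->rwbf)
        return FLUSH_WRITE_BUFFER;      /* only the write buffer needs draining */
    return FLUSH_NONE;
}
```

The point of the `pte_present` check is visible here: on hardware without caching mode, installing a mapping over a previously invalid PTE requires no IOTLB flush at all, since invalid entries are never cached.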
> On map: only flush when the old PTE was valid or when invalid PTEs may be cached.
> On unmap: always flush the old entry, but skip the flush for unaffected IOMMUs.
>
> Signed-off-by: Espen Skoglund <espen.skoglund@netronome.com>
>
> --
>  iommu.c |   17 +++++++++++------
>  1 file changed, 11 insertions(+), 6 deletions(-)
> --

Seems my last mail sent to xen-devel was lost, and I have no local copy, so I have to write it again...

Espen,
Thanks for the patch! I also noticed that the context/iotlb flushes need a cleanup. As the flushes for present and non-present entries are different, your change to intel_iommu_map_page is not quite correct, so I made up another patch. iommu_flush is also removed, as the VT-d table is not shared with the p2m any more.

Signed-off-by: Xiaowei Yang <xiaowei.yang@intel.com>

Thanks,
Xiaowei
[Xiaowei Yang]
>> On map: only flush when the old PTE was valid or when invalid PTEs may be cached.
>> On unmap: always flush the old entry, but skip the flush for unaffected IOMMUs.
>>
>> Signed-off-by: Espen Skoglund <espen.skoglund@netronome.com>
>>
>> --
>>  iommu.c |   17 +++++++++++------
>>  1 file changed, 11 insertions(+), 6 deletions(-)
>> --
>
> Seems my last mail sent to xen-devel was lost, and I have no local
> copy, so I have to write it again...
>
> Espen,
> Thanks for the patch! I also noticed that the context/iotlb flushes
> need a cleanup. As the flushes for present and non-present entries are
> different, your change to intel_iommu_map_page is not quite correct,
> so I made up another patch. iommu_flush is also removed, as the VT-d
> table is not shared with the p2m any more.
>
> Signed-off-by: Xiaowei Yang <xiaowei.yang@intel.com>

Oh, right. When flushing a non-present cached entry, domid 0 must be used. Here's a modification of your patch:

 - Made the non-present flush testing a bit simpler.
 - Removed dma_addr_level_page_maddr(). Use a modified
   addr_to_dma_page_maddr() instead.
 - Upon mapping a new context entry: flush the old entry using domid 0
   and always flush the iotlb.

Signed-off-by: Espen Skoglund <espen.skoglund@netronome.com>

--
 arch/x86/mm/hap/p2m-ept.c       |    6 -
 drivers/passthrough/vtd/iommu.c |  150 ++++++++++------------------------
 include/xen/iommu.h             |    1
 3 files changed, 38 insertions(+), 119 deletions(-)
On Wed, 2008-05-28 at 23:00 +0800, Espen Skoglund wrote:
> Oh, right. When flushing a non-present cached entry, domid 0 must be
> used. Here's a modification of your patch:
>
> - Made the non-present flush testing a bit simpler.
> - Removed dma_addr_level_page_maddr(). Use a modified
>   addr_to_dma_page_maddr() instead.

Yes, it's simpler. Thanks! However, you forgot to spin_unlock before returning, which leads to a deadlock. Here's a small fix.

> - Upon mapping a new context entry: flush the old entry using domid 0
>   and always flush the iotlb.

Actually, you may find that the code is functionally the same before and after your modification: we can pass either domid=0 or non_present_entry_flush=1 to flush a non-present TLB entry.

Thanks,
Xiaowei
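The equivalence of the two calling conventions can be sketched as a pair of predicates. Both helpers below are illustrative stand-ins (not real Xen functions), written under the assumption stated above: a non-present entry can only be cached when the IOMMU is in caching mode, and such cached entries are associated with domain id 0.

```c
#include <assert.h>
#include <stdbool.h>

/* Variant A: the caller signals "non-present" by passing domid 0. */
static bool flush_needed_by_domid(bool caching_mode, unsigned int domid)
{
    if (domid == 0 && !caching_mode)
        return false;   /* non-present entries cannot be cached: skip */
    return true;
}

/* Variant B: the caller passes an explicit non_present_entry_flush flag. */
static bool flush_needed_by_flag(bool caching_mode, bool non_present_entry_flush)
{
    if (non_present_entry_flush && !caching_mode)
        return false;   /* same skip condition, different interface */
    return true;
}
```

Both variants skip the flush in exactly the same case (non-present entry on hardware without caching mode), which is the sense in which the two modifications are functionally the same.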