search for: dma_set_mask

Displaying 20 results from an estimated 29 matches for "dma_set_mask".

2015 Sep 04
4
[PATCH 0/4] tegra: DMA mask and IOMMU bit fixes
These 4 patches fix two issues that existed on Tegra regarding DMA: 1) The bit indicating whether to use an IOMMU or not was hardcoded ; make this a platform property and use it in instmem 2) The DMA mask was not set for platform devices. Fix this by converting more pci_dma* to the DMA API, and use that more generic code to set the DMA mask properly for all platforms. Tested on both x86
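A minimal sketch of the generic approach described above, assuming a hypothetical platform driver (the names below are illustrative, not the nouveau/Tegra code): once a driver goes through the DMA API rather than the pci_dma_* wrappers, one call covers both the streaming and coherent masks for PCI and platform devices alike.

#include <linux/dma-mapping.h>
#include <linux/platform_device.h>

static int example_platform_probe(struct platform_device *pdev)
{
	int ret;

	/* Sets both the streaming (dma_map_*) and coherent (dma_alloc_*)
	 * masks; 40 bits stands in for whatever the platform supports. */
	ret = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(40));
	if (ret)
		return ret;

	/* ... continue with normal device setup ... */
	return 0;
}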
2017 Jan 09
3
[RFC PATCH] vring: Force use of DMA API for ARM-based systems
...evice not be similarly capable? > > If it's not, then turning off DMA API will cause random corruption. > ISTM one way or another the bug is in either the DMA ops or in the > driver initialization. OK, having looked a little deeper, I reckon virtio_mmio_probe() is indeed missing a dma_set_mask() call compared to its PCI friends. The only question then is where does virtio-mmio stand with respect to legacy/modern/44-bit/64-bit etc.? Robin. > > --Andy >
2019 Feb 07
0
[PATCH v7 3/5] dma: Introduce dma_max_mapping_size()
.../Documentation/DMA-API.txt b/Documentation/DMA-API.txt index e133ccd60228..acfe3d0f78d1 100644 --- a/Documentation/DMA-API.txt +++ b/Documentation/DMA-API.txt @@ -195,6 +195,14 @@ Requesting the required mask does not alter the current mask. If you wish to take advantage of it, you should issue a dma_set_mask() call to set the mask to the value returned. +:: + + size_t + dma_direct_max_mapping_size(struct device *dev); + +Returns the maximum size of a mapping for the device. The size parameter +of the mapping functions like dma_map_single(), dma_map_page() and +others should not be larger than the re...
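A short sketch of how a driver might combine the calls documented above (illustrative names; example_setup_dma is not a real kernel function): query the mask the platform reports as required, adopt it with dma_set_mask(), and treat dma_max_mapping_size() as the upper bound for any single mapping.

#include <linux/device.h>
#include <linux/dma-mapping.h>

static int example_setup_dma(struct device *dev)
{
	size_t max_seg;

	/* Requesting the required mask does not change the current mask;
	 * dma_set_mask() must be called explicitly to adopt it. */
	if (dma_set_mask(dev, dma_get_required_mask(dev)))
		return -EIO;

	/* No single dma_map_single()/dma_map_page() call should exceed this. */
	max_seg = dma_max_mapping_size(dev);
	dev_info(dev, "max DMA mapping size: %zu bytes\n", max_seg);

	return 0;
}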
2016 Oct 16
1
[PATCH v5 0/3] drm/nouveau: set DMA mask before mapping scratch page
...d postpone mapping the scratch pages to the respective >> FB .init() hooks. (#2 and #3) >> >> v5: move setting of preliminary DMA mask to nvkm_device_pci_new() (#1) >> move allocation and DMA mapping of scratch pages to .oneinit hooks (#2, #3) >> v4: split and move dma_set_mask to probe hook (Alexander) >> v3: rework code to get rid of DMA_ERROR_CODE references, which is not >> defined on all architectures >> v2: replace incorrect comparison of dma_addr_t type var against NULL >> >> Ard Biesheuvel (3): >> drm/nouveau: set streamin...
2016 Oct 06
6
[PATCH v5 0/3] drm/nouveau: set DMA mask before mapping scratch page
...e 'dma_bits' property (patch #1), and postpone mapping the scratch pages to the respective FB .init() hooks. (#2 and #3) v5: move setting of preliminary DMA mask to nvkm_device_pci_new() (#1) move allocation and DMA mapping of scratch pages to .oneinit hooks (#2, #3) v4: split and move dma_set_mask to probe hook (Alexander) v3: rework code to get rid of DMA_ERROR_CODE references, which is not defined on all architectures v2: replace incorrect comparison of dma_addr_t type var against NULL Ard Biesheuvel (3): drm/nouveau: set streaming DMA mask early drm/nouveau/fb/gf100: defer DMA ma...
2017 Jan 10
4
[PATCH v2 1/2] virtio_mmio: Set DMA masks appropriately
...,9 +549,25 @@ static int virtio_mmio_probe(struct platform_device *pdev) } vm_dev->vdev.id.vendor = readl(vm_dev->base + VIRTIO_MMIO_VENDOR_ID); - if (vm_dev->version == 1) + if (vm_dev->version == 1) { writel(PAGE_SIZE, vm_dev->base + VIRTIO_MMIO_GUEST_PAGE_SIZE); + rc = dma_set_mask(&pdev->dev, DMA_BIT_MASK(64)); + /* + * In the legacy case, ensure our coherently-allocated virtio + * ring will be at an address expressable as a 32-bit PFN. + */ + if (!rc) + dma_set_coherent_mask(&pdev->dev, + DMA_BIT_MASK(32 + PAGE_SHIFT)); + } else { + rc = d...
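The diff above is truncated; the sketch below only illustrates the pattern it follows (it is not the verbatim patch): a 64-bit streaming mask, with the coherent mask tightened for legacy devices whose queue address register holds a 32-bit page frame number.

#include <linux/dma-mapping.h>

static int example_virtio_mmio_set_masks(struct device *dev, bool legacy)
{
	int rc;

	rc = dma_set_mask(dev, DMA_BIT_MASK(64));
	if (rc)
		return rc;

	if (legacy)
		/* Coherent ring memory must fit in a 32-bit page frame number. */
		rc = dma_set_coherent_mask(dev, DMA_BIT_MASK(32 + PAGE_SHIFT));
	else
		rc = dma_set_coherent_mask(dev, DMA_BIT_MASK(64));

	return rc;
}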
2017 Jan 10
1
[PATCH] virtio_mmio: Set DMA masks appropriately
...tform_get_resource(pdev, IORESOURCE_MEM, 0); >> if (!mem) >> @@ -548,6 +550,14 @@ static int virtio_mmio_probe(struct platform_device *pdev) >> if (vm_dev->version == 1) >> writel(PAGE_SIZE, vm_dev->base + VIRTIO_MMIO_GUEST_PAGE_SIZE); >> >> + rc = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64)); >> + if (rc) >> + rc = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32)); >> + else if (vm_dev->version == 1) >> + dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(32 + PAGE_SHIFT)); > > That...
2016 Jun 21
1
[RFC PATCH v2] drm/nouveau/fb/nv50: set DMA mask before mapping scratch page
...e calling the DMA api way before the TTM layer sets the + * DMA mask based on the MMU subdev parameters. This means we + * are using the default DMA mask of 32, which may cause + * problems on systems with no RAM below the 4 GB mark. So set + * the streaming DMA mask here as well. + */ + dma_set_mask(device->dev, DMA_BIT_MASK(device->mmu->dma_bits)); + + fb->r100c08 = dma_map_page(device->dev, fb->r100c08_page, 0, + PAGE_SIZE, DMA_BIDIRECTIONAL); + if (dma_mapping_error(device->dev, fb->r100c08)) { + nvkm_warn(&fb->base.subdev, + "dma_map_page...
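A self-contained sketch of the ordering issue described above, with illustrative names (not the nouveau ones): the streaming DMA mask has to be widened before the first dma_map_page(), otherwise the scratch page is mapped under the default 32-bit mask.

#include <linux/dma-mapping.h>

static int example_map_scratch(struct device *dev, struct page *page,
			       unsigned int dma_bits, dma_addr_t *addr)
{
	int ret;

	/* The default mask is 32 bits; widen it before the first mapping. */
	ret = dma_set_mask(dev, DMA_BIT_MASK(dma_bits));
	if (ret)
		return ret;

	*addr = dma_map_page(dev, page, 0, PAGE_SIZE, DMA_BIDIRECTIONAL);
	if (dma_mapping_error(dev, *addr))
		return -ENOMEM;

	return 0;
}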
2017 Jan 10
5
[PATCH] virtio_mmio: Set DMA masks appropriately
...e *mem; unsigned long magic; + int rc; mem = platform_get_resource(pdev, IORESOURCE_MEM, 0); if (!mem) @@ -548,6 +550,14 @@ static int virtio_mmio_probe(struct platform_device *pdev) if (vm_dev->version == 1) writel(PAGE_SIZE, vm_dev->base + VIRTIO_MMIO_GUEST_PAGE_SIZE); + rc = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64)); + if (rc) + rc = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32)); + else if (vm_dev->version == 1) + dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(32 + PAGE_SHIFT)); + if (rc) + dev_warn(&pdev->dev, "Failed...
2016 Feb 24
0
[PATCH] instmem/gk20a: set DMA mask early
...->func->tegra(device); struct gk20a_instmem *imem; + int ret; if (!(imem = kzalloc(sizeof(*imem), GFP_KERNEL))) return -ENOMEM; @@ -583,6 +584,10 @@ gk20a_instmem_new(struct nvkm_device *device, int index, spin_lock_init(&imem->lock); *pimem = &imem->base; + ret = dma_set_mask(device->dev, DMA_BIT_MASK(tdev->func->iommu_bit)); + if (ret) + return ret; + /* do not allow more than 1MB of CPU-mapped instmem */ imem->vaddr_use = 0; imem->vaddr_max = 0x100000; -- 2.7.1
2016 Feb 25
0
[PATCH v2] instmem/gk20a: set DMA mask early
...m_device_tegra_func *func, if (IS_ERR(tdev->clk_pwr)) return PTR_ERR(tdev->clk_pwr); + /** + * The IOMMU bit defines the upper limit of the GPU-addressable space. + * This will be refined in nouveau_ttm_init but we need to do it early + * for instmem to behave properly + */ + ret = dma_set_mask(&pdev->dev, DMA_BIT_MASK(tdev->func->iommu_bit)); + if (ret) + return ret; + nvkm_device_tegra_probe_iommu(tdev); ret = nvkm_device_tegra_power_up(tdev); -- 2.7.1
2016 Oct 07
0
[PATCH v5 0/3] drm/nouveau: set DMA mask before mapping scratch page
...perty (patch #1), and postpone mapping the scratch pages to the respective > FB .init() hooks. (#2 and #3) > > v5: move setting of preliminary DMA mask to nvkm_device_pci_new() (#1) > move allocation and DMA mapping of scratch pages to .oneinit hooks (#2, #3) > v4: split and move dma_set_mask to probe hook (Alexander) > v3: rework code to get rid of DMA_ERROR_CODE references, which is not > defined on all architectures > v2: replace incorrect comparison of dma_addr_t type var against NULL > > Ard Biesheuvel (3): > drm/nouveau: set streaming DMA mask early >...
2017 Jan 10
0
[PATCH] virtio_mmio: Set DMA masks appropriately
...> > mem = platform_get_resource(pdev, IORESOURCE_MEM, 0); > if (!mem) > @@ -548,6 +550,14 @@ static int virtio_mmio_probe(struct platform_device *pdev) > if (vm_dev->version == 1) > writel(PAGE_SIZE, vm_dev->base + VIRTIO_MMIO_GUEST_PAGE_SIZE); > > + rc = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64)); > + if (rc) > + rc = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32)); > + else if (vm_dev->version == 1) > + dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(32 + PAGE_SHIFT)); That's a very convoluted way...
2019 Feb 07
5
[PATCH v7 0/5] Fix virtio-blk issue with SWIOTLB
Hi, here is the next version of this patch-set. Previous versions can be found here: V1: https://lore.kernel.org/lkml/20190110134433.15672-1-joro at 8bytes.org/ V2: https://lore.kernel.org/lkml/20190115132257.6426-1-joro at 8bytes.org/ V3: https://lore.kernel.org/lkml/20190123163049.24863-1-joro at 8bytes.org/ V4: https://lore.kernel.org/lkml/20190129084342.26030-1-joro at 8bytes.org/
2017 Jan 09
0
[RFC PATCH] vring: Force use of DMA API for ARM-based systems
...; > > > If it's not, then turning off DMA API will cause random corruption. > > ISTM one way or another the bug is in either the DMA ops or in the > > driver initialization. > > OK, having looked a little deeper, I reckon virtio_mmio_probe() is > indeed missing a dma_set_mask() call compared to its PCI friends. The > only question then is where does virtio-mmio stand with respect to > legacy/modern/44-bit/64-bit etc.? Legacy virtio-mmio has a variable page granule (GuestPageSize), so the 44-bit limitation shouldn't apply. The legacy spec doesn't actually...
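For reference, the 44-bit figure in this thread is just the 32-bit legacy PFN width plus the page shift, assuming the usual 4 KiB granule:

#include <linux/dma-mapping.h>

/* Legacy virtio passes the ring address as a 32-bit page frame number.
 * With 4 KiB pages (PAGE_SHIFT == 12) coherent allocations are capped at
 * 2^(32 + 12) = 2^44 bytes, i.e. DMA_BIT_MASK(44) == 0xfffffffffff. */
static const u64 legacy_ring_coherent_mask = DMA_BIT_MASK(32 + 12);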
2016 Oct 17
0
[PATCH v5 0/3] drm/nouveau: set DMA mask before mapping scratch page
...d postpone mapping the scratch pages to the respective >> FB .init() hooks. (#2 and #3) >> >> v5: move setting of preliminary DMA mask to nvkm_device_pci_new() (#1) >> move allocation and DMA mapping of scratch pages to .oneinit hooks (#2, #3) >> v4: split and move dma_set_mask to probe hook (Alexander) >> v3: rework code to get rid of DMA_ERROR_CODE references, which is not >> defined on all architectures >> v2: replace incorrect comparison of dma_addr_t type var against NULL >> >> Ard Biesheuvel (3): >> drm/nouveau: set streamin...