search for: page_to_phi

Displaying 20 results from an estimated 166 matches for "page_to_phi".

Did you mean: page_to_phys
2005 Sep 01
3
question about page_to_phys
In include/asm-xen/asm-i386/io.h, page_to_phys is defined as #define page_to_phys(page) (phys_to_machine(page_to_pseudophys(page))), so it returns a machine address, while virt_to_phys returns a pseudo-physical address. This is really confusing. Why not define page_to_machine?
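For context, a minimal sketch of the two translations involved; the definitions approximate the old asm-xen headers quoted above (page_to_pseudophys, phys_to_machine, and the P2M lookup are assumed from that tree, and exact definitions vary):

    /* Guest view: page -> pseudo-physical byte address. */
    #define page_to_pseudophys(page) \
            ((dma_addr_t)page_to_pfn(page) << PAGE_SHIFT)

    /* Host view: pseudo-physical -> machine address via the guest's
     * P2M table, so the result is usable for device DMA. */
    #define page_to_phys(page) \
            (phys_to_machine(page_to_pseudophys(page)))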
2014 Jul 10
3
[PATCH v4 2/6] drm/nouveau: map pages using DMA API on platform devices
On Tue, Jul 08, 2014 at 05:25:57PM +0900, Alexandre Courbot wrote: > page_to_phys() is not the correct way to obtain the DMA address of a > buffer on a non-PCI system. Use the DMA API functions for this, which > are portable and will allow us to use other DMA API functions for > buffer synchronization. > > Signed-off-by: Alexandre Courbot <acourbot at nvidia.com> > ---
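The pattern the patch advocates, as a hedged sketch (the helper here is illustrative, not code from the nouveau series): let the DMA API produce a bus address instead of assuming the device sees CPU physical addresses, which also works behind an IOMMU and enables the dma_sync_*() helpers for synchronization.

    #include <linux/dma-mapping.h>

    /* Map one page for the device and return its DMA (bus) address,
     * rather than page_to_phys(), which only yields the CPU physical
     * address. */
    static dma_addr_t map_one_page(struct device *dev, struct page *page)
    {
            dma_addr_t addr;

            addr = dma_map_page(dev, page, 0, PAGE_SIZE, DMA_BIDIRECTIONAL);
            if (dma_mapping_error(dev, addr))
                    return 0; /* callers must treat 0 as failure here */
            return addr;
    }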
2014 Apr 23
2
[PATCH v2 04/10] drm/nouveau/fb: add GK20A support
On Wed, Apr 23, 2014 at 11:07 AM, Alexandre Courbot <acourbot at nvidia.com> wrote: > On 04/22/2014 07:40 PM, Thierry Reding wrote: >> >> * PGP Signed by an unknown key >> >> >> On Mon, Apr 21, 2014 at 03:02:16PM +0900, Alexandre Courbot wrote: >> [...] >>> >>> diff --git a/drivers/gpu/drm/nouveau/core/subdev/fb/ramgk20a.c >>>
2013 Oct 18
11
[GIT PULL] Btrfs
Hi Linus, My for-linus branch has a one line fix: git://git.kernel.org/pub/scm/linux/kernel/git/mason/linux-btrfs.git for-linus Sage hit a deadlock with ceph on btrfs, and Josef tracked it down to a regression in our initial rc1 pull. When doing nocow writes we were sometimes starting a transaction with locks held. Josef Bacik (1) commits (+1/-0): Btrfs: release path before starting
2018 Apr 20
2
[PATCH] kvmalloc: always use vmalloc if CONFIG_DEBUG_VM
On Thu, Apr 19, 2018 at 12:12:38PM -0400, Mikulas Patocka wrote: > Unfortunately, some kernel code has bugs - it uses kvmalloc and then > uses DMA-API on the returned memory or frees it with kfree. Such bugs were > found in the virtio-net driver, dm-integrity or RHEL7 powerpc-specific > code. Maybe it's time to have the SG code handle vmalloced pages? This is becoming more and
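The bug class under discussion, as a small illustrative sketch (not code from the patch):

    #include <linux/mm.h>

    static void kvmalloc_example(size_t size)
    {
            void *buf = kvmalloc(size, GFP_KERNEL);

            if (!buf)
                    return;

            /* kvmalloc() may have fallen back to vmalloc(), so buf is
             * not guaranteed physically contiguous: passing it to the
             * DMA API or freeing it with kfree() is the bug described
             * above. */

            kvfree(buf); /* correct: handles both backing allocators */
    }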
2014 Jul 11
2
[PATCH v4 2/6] drm/nouveau: map pages using DMA API on platform devices
On Fri, Jul 11, 2014 at 12:35 PM, Alexandre Courbot <acourbot at nvidia.com> wrote: > On 07/10/2014 09:58 PM, Daniel Vetter wrote: >> >> On Tue, Jul 08, 2014 at 05:25:57PM +0900, Alexandre Courbot wrote: >>> >>> page_to_phys() is not the correct way to obtain the DMA address of a >>> buffer on a non-PCI system. Use the DMA API functions for this,
2020 Jul 22
0
[RFC PATCH v1 06/34] KVM: x86: mmu: add support for EPT switching
From: Marian Rotariu <marian.c.rotariu at gmail.com> The introspection tool uses this function to check the hardware support for EPT switching, which can be used either to single-step vCPUs on an unprotected EPT view or to use #VE in order to avoid VM-exits caused by EPT violations. Signed-off-by: Marian Rotariu <marian.c.rotariu at gmail.com> Co-developed-by: Ștefan Șicleru
2020 Oct 28
0
[PATCH v6 08/10] drm/gem: Store client buffer mappings as struct dma_buf_map
Kernel DRM clients now store their framebuffer address in an instance of struct dma_buf_map. Depending on the buffer's location, the address refers to system or I/O memory. Callers of drm_client_buffer_vmap() receive a copy of the value in the call's supplied arguments. It can be accessed and modified with dma_buf_map interfaces. v6: * don't call page_to_phys() on framebuffers in
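A minimal sketch of the access pattern this enables (struct dma_buf_map fields per the v5.10-era include/linux/dma-buf-map.h; the function itself is illustrative): one type carries either a system-memory or an I/O-memory address, so the client checks is_iomem instead of guessing.

    #include <linux/dma-buf-map.h>
    #include <linux/io.h>
    #include <linux/string.h>

    static void fill_fb(struct dma_buf_map *map, const void *src, size_t len)
    {
            if (map->is_iomem)
                    memcpy_toio(map->vaddr_iomem, src, len);
            else
                    memcpy(map->vaddr, src, len);
    }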
2019 Sep 08
0
[PATCH V6 4/5] iommu/dma-iommu: Use the dev->coherent_dma_mask
Use the dev->coherent_dma_mask when allocating in the dma-iommu ops api. Signed-off-by: Tom Murphy <murphyt7 at tcd.ie> Reviewed-by: Robin Murphy <robin.murphy at arm.com> Reviewed-by: Christoph Hellwig <hch at lst.de> --- drivers/iommu/dma-iommu.c | 12 +++++++----- 1 file changed, 7 insertions(+), 5 deletions(-) diff --git a/drivers/iommu/dma-iommu.c
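For reference, the distinction involved, as an assumed-context sketch (not from the patch): drivers advertise separate masks for streaming mappings and coherent allocations, and the change makes the dma-iommu allocation path honor the coherent one.

    #include <linux/dma-mapping.h>

    static int example_set_masks(struct device *dev)
    {
            int ret;

            /* Streaming DMA (dma_map_page() and friends). */
            ret = dma_set_mask(dev, DMA_BIT_MASK(64));
            if (ret)
                    return ret;

            /* Coherent allocations (dma_alloc_coherent()); this is the
             * dev->coherent_dma_mask the patch consults. */
            return dma_set_coherent_mask(dev, DMA_BIT_MASK(32));
    }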
2014 Jul 08
0
[PATCH v4 2/6] drm/nouveau: map pages using DMA API on platform devices
page_to_phys() is not the correct way to obtain the DMA address of a buffer on a non-PCI system. Use the DMA API functions for this, which are portable and will allow us to use other DMA API functions for buffer synchronization. Signed-off-by: Alexandre Courbot <acourbot at nvidia.com> --- drivers/gpu/drm/nouveau/core/engine/device/base.c | 8 +++++++- 1 file changed, 7 insertions(+), 1
2011 Mar 12
2
merge error in intel_agp_insert_sg_entries() in xen.git
There is a missing } in one of the patches for drivers/char/agp/intel-agp.c:intel_agp_insert_sg_entries() in xen.git. The diff I get looks like this: --- linux-2.6.32/drivers/char/agp/intel-agp.c +++ linux-2.6-jeremy-xen-stable-2.6.32.x/drivers/char/agp/intel-agp.c @@ -10,14 +10,20 @@ #include <linux/agp_backend.h> #include <asm/smp.h> #include "agp.h" +#include
2019 Dec 21
0
[PATCH 6/8] iommu: allow the dma-iommu api to use bounce buffers
Allow the dma-iommu api to use bounce buffers for untrusted devices. This is a copy of the intel bounce buffer code. Signed-off-by: Tom Murphy <murphyt7 at tcd.ie> --- drivers/iommu/dma-iommu.c | 93 ++++++++++++++++++++++++++++++++------- drivers/iommu/iommu.c | 10 +++++ include/linux/iommu.h | 9 +++- 3 files changed, 95 insertions(+), 17 deletions(-) diff --git
2014 Jul 11
0
[PATCH v4 2/6] drm/nouveau: map pages using DMA API on platform devices
On 07/10/2014 09:58 PM, Daniel Vetter wrote: > On Tue, Jul 08, 2014 at 05:25:57PM +0900, Alexandre Courbot wrote: >> page_to_phys() is not the correct way to obtain the DMA address of a >> buffer on a non-PCI system. Use the DMA API functions for this, which >> are portable and will allow us to use other DMA API functions for >> buffer synchronization. >> >>
2014 May 01
0
[PATCH v3 0/9] drm/nouveau: support for GK20A, cont'd
On Fri, Apr 25, 2014 at 5:19 PM, Alexandre Courbot <acourbot at nvidia.com> wrote: > Changes since v2: > - Enabled software class > - Removed unneeded changes to nouveau_accel_init() > - Replaced use of architecture-private pfn_to_dma() and dma_to_pfn() with > the portable page_to_phys()/phys_to_page() page_to_phys() looks well defined and used everywhere, phys_to_page() not
2014 Jul 11
0
[PATCH v4 2/6] drm/nouveau: map pages using DMA API on platform devices
On 07/11/2014 11:50 AM, Ben Skeggs wrote: > On Fri, Jul 11, 2014 at 12:35 PM, Alexandre Courbot <acourbot at nvidia.com> wrote: >> On 07/10/2014 09:58 PM, Daniel Vetter wrote: >>> >>> On Tue, Jul 08, 2014 at 05:25:57PM +0900, Alexandre Courbot wrote: >>>> >>>> page_to_phys() is not the correct way to obtain the DMA address of a
2012 Oct 18
1
[PATCH] virtio: 9p: correctly pass physical address to userspace for high pages
Will Deacon <will.deacon at arm.com> writes: > When using a virtio transport, the 9p net device allocates pages to back > the descriptors inserted into the virtqueue. These allocations may be > performed from atomic context (under the channel lock) and can therefore > return high mappings which aren't suitable for virt_to_phys. I had not appreciated that subtlety about
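The subtlety, sketched (assumed shape for illustration, not the actual 9p fix): virt_to_phys() is only defined for addresses in the kernel's linear mapping, and a highmem page may have no such mapping, so the physical address has to come from the page itself.

    #include <linux/highmem.h>
    #include <linux/io.h>

    /* Valid for lowmem and highmem pages alike; virt_to_phys() on a
     * kmap()ed highmem address would give a meaningless result. */
    static phys_addr_t buffer_phys(struct page *page, unsigned long offset)
    {
            return page_to_phys(page) + offset;
    }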
2019 Jul 05
0
[PATCH v2 2/6] drm/fb-helper: Map DRM client buffer only when required
This patch changes DRM clients to not map the buffer by default. The buffer, like any buffer object, should be mapped and unmapped when needed. An unmapped buffer object can be evicted to system memory and does not consume video RAM until displayed. This allows the use of generic fbdev emulation with drivers for low-memory devices, such as ast and mgag200. This change affects the generic framebuffer
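The map-on-demand pattern described, as a sketch against the v5.3-era DRM client helpers (error handling trimmed; check the signatures in your tree):

    #include <drm/drm_client.h>
    #include <linux/err.h>

    static void redraw(struct drm_client_buffer *buffer)
    {
            void *vaddr = drm_client_buffer_vmap(buffer);

            if (IS_ERR(vaddr))
                    return;

            /* ... write pixels through vaddr ... */

            /* Unmap so the idle buffer can be evicted to system
             * memory instead of pinning video RAM. */
            drm_client_buffer_vunmap(buffer);
    }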
2020 Apr 22
2
[PATCH hmm 5/5] mm/hmm: remove the customizable pfn format from hmm_range_fault
On Tue, Apr 21, 2020 at 09:21:46PM -0300, Jason Gunthorpe wrote: > +void nouveau_hmm_convert_pfn(struct nouveau_drm *drm, struct hmm_range *range, > + u64 *ioctl_addr) > { > unsigned long i, npages; > > + /* > + * The ioctl_addr prepared here is passed through nvif_object_ioctl() > + * to an eventual DMA map on some call chain like: > + *