Displaying 18 results from an estimated 166 matches for "page_to_phys".
2005 Sep 01
3
question about page_to_phys
page_to_phys is defined as
#define page_to_phys(page) (phys_to_machine(page_to_pseudophys(page)))
so it returns a machine address,
while virt_to_phys returns a pseudo-physical address (see include/asm-xen/asm-i386/io.h).
This is really confusing.
Why not define a page_to_machine instead?
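A hypothetical page_to_machine, built from the same helpers as the definition quoted above, could make the distinction explicit (a sketch only, not code from any tree):

        #define page_to_machine(page) (phys_to_machine(page_to_pseudophys(page)))

page_to_phys could then be left to return the pseudo-physical address, consistent with virt_to_phys.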
2014 Jul 10
3
[PATCH v4 2/6] drm/nouveau: map pages using DMA API on platform devices
On Tue, Jul 08, 2014 at 05:25:57PM +0900, Alexandre Courbot wrote:
> page_to_phys() is not the correct way to obtain the DMA address of a
> buffer on a non-PCI system. Use the DMA API functions for this, which
> are portable and will allow us to use other DMA API functions for
> buffer synchronization.
>
> Signed-off-by: Alexandre Courbot <acourbot at nvidia.c...
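For reference, the portable pattern the patch advocates looks roughly like this (a minimal sketch; dev and page stand in for the driver's device and the page to map):

        dma_addr_t addr = dma_map_page(dev, page, 0, PAGE_SIZE,
                                       DMA_BIDIRECTIONAL);
        if (dma_mapping_error(dev, addr))
                return -ENOMEM;         /* mapping failed */
        /* ... device accesses the buffer through addr ... */
        dma_unmap_page(dev, addr, PAGE_SIZE, DMA_BIDIRECTIONAL);

Unlike page_to_phys(), dma_map_page() goes through any IOMMU that is present and lets the core handle cache maintenance.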
2014 Apr 23
2
[PATCH v2 04/10] drm/nouveau/fb: add GK20A support
...on x86 because neither pfn_to_dma() nor
>> dma_to_pfn() are available. Is there some other way this can be
>> allocated so that these functions don't need to be called?
>
>
> Mmm, this is bad. There is probably another more portable way to do this.
> Let me look for it.
page_to_phys()/phys_to_page() can be used by drivers and will work
just fine here since the CPU and GPU use the same physical addresses
to access memory.
Thanks,
Alex.
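The round-trip being proposed reduces to the following (a sketch, assuming the identity mapping between CPU and GPU addresses described above; note that another message in this listing points out that phys_to_page() is not defined on every architecture):

        phys_addr_t phys = page_to_phys(page);  /* struct page -> physical address */
        struct page *pg = phys_to_page(phys);   /* physical address -> struct page */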
2013 Oct 18
11
[GIT PULL] Btrfs
Hi Linus,
My for-linus branch has a one line fix:
git://git.kernel.org/pub/scm/linux/kernel/git/mason/linux-btrfs.git for-linus
Sage hit a deadlock with ceph on btrfs, and Josef tracked it down to a
regression in our initial rc1 pull. When doing nocow writes we were
sometimes starting a transaction with locks held.
Josef Bacik (1) commits (+1/-0):
Btrfs: release path before starting
2018 Apr 20
2
[PATCH] kvmalloc: always use vmalloc if CONFIG_DEBUG_VM
...s (and some of our
workarounds are hideous -- allocate 4 bytes with kmalloc because we can't
DMA onto the stack any more?). We already have a few places which do
handle sgs of vmalloced addresses, such as the nx crypto driver:
        if (is_vmalloc_addr(start_addr))
                sg_addr = page_to_phys(vmalloc_to_page(start_addr))
                          + offset_in_page(sg_addr);
        else
                sg_addr = __pa(sg_addr);
and videobuf:
        pg = vmalloc_to_page(virt);
        if (NULL == pg)
                goto err;
        BUG_ON(page_to_pfn(pg...
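The same special-casing can also be expressed with the scatterlist helpers (a sketch; buf, len, and sg are placeholders, and the span is assumed not to cross a page boundary):

        struct page *pg = is_vmalloc_addr(buf) ? vmalloc_to_page(buf)
                                               : virt_to_page(buf);
        sg_set_page(sg, pg, len, offset_in_page(buf));

sg_set_page() records only the page and offset; the bus address is derived later when the list is mapped for DMA.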
2014 Jul 11
2
[PATCH v4 2/6] drm/nouveau: map pages using DMA API on platform devices
On Fri, Jul 11, 2014 at 12:35 PM, Alexandre Courbot <acourbot at nvidia.com> wrote:
> On 07/10/2014 09:58 PM, Daniel Vetter wrote:
>>
>> On Tue, Jul 08, 2014 at 05:25:57PM +0900, Alexandre Courbot wrote:
>>>
>>> page_to_phys() is not the correct way to obtain the DMA address of a
>>> buffer on a non-PCI system. Use the DMA API functions for this, which
>>> are portable and will allow us to use other DMA API functions for
>>> buffer synchronization.
>>>
>>> Signed-off-by: Ale...
2020 Jul 22
0
[RFC PATCH v1 06/34] KVM: x86: mmu: add support for EPT switching
...struct kvm_vcpu *vcpu, unsigned long root_hpa)
        return eptp;
}
+static void vmx_construct_eptp_with_index(struct kvm_vcpu *vcpu,
+                                          unsigned short view)
+{
+       struct vcpu_vmx *vmx = to_vmx(vcpu);
+       u64 *eptp_list = NULL;
+
+       if (!vmx->eptp_list_pg)
+               return;
+
+       eptp_list = phys_to_virt(page_to_phys(vmx->eptp_list_pg));
+
+       if (!eptp_list)
+               return;
+
+       eptp_list[view] = construct_eptp(vcpu,
+                       vcpu->arch.mmu->root_hpa_altviews[view]);
+}
+
+static void vmx_construct_eptp_list(struct kvm_vcpu *vcpu)
+{
+       unsigned short view;
+
+       for (view = 0; view < KVM_MAX_EPT_VIEWS; view++)
+...
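An aside on the construct above: for a page in the kernel's linear mapping, phys_to_virt(page_to_phys(pg)) is the open-coded form of page_address(pg), so the assignment could equally be written as (an observation, not part of the posted patch):

        eptp_list = page_address(vmx->eptp_list_pg);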
2020 Oct 28
0
[PATCH v6 08/10] drm/gem: Store client buffer mappings as struct dma_buf_map
...tance
of struct dma_buf_map. Depending on the buffer's location, the address
refers to system or I/O memory.
Callers of drm_client_buffer_vmap() receive a copy of the value in
the call's supplied arguments. It can be accessed and modified with
dma_buf_map interfaces.
v6:
* don't call page_to_phys() on framebuffers in I/O memory;
warn instead (Daniel)
Signed-off-by: Thomas Zimmermann <tzimmermann at suse.de>
Reviewed-by: Daniel Vetter <daniel.vetter at ffwll.ch>
Tested-by: Sam Ravnborg <sam at ravnborg.org>
---
drivers/gpu/drm/drm_client.c | 34 +++++++++++++++++++--...
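A hedged sketch of what consuming such a mapping might look like, given the struct dma_buf_map layout this series introduces (map, src, and len are placeholders):

        struct dma_buf_map map;         /* filled in by drm_client_buffer_vmap() */

        if (map.is_iomem)
                memcpy_toio(map.vaddr_iomem, src, len); /* framebuffer in I/O memory */
        else
                memcpy(map.vaddr, src, len);            /* framebuffer in system memory */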
2019 Sep 08
0
[PATCH V6 4/5] iommu/dma-iommu: Use the dev->coherent_dma_mask
..._to_prot(dir, false, attrs) | IOMMU_MMIO,
+                       dma_get_mask(dev));
}
static void iommu_dma_unmap_resource(struct device *dev, dma_addr_t handle,
@@ -1041,7 +1042,8 @@ static void *iommu_dma_alloc(struct device *dev, size_t size,
        if (!cpu_addr)
                return NULL;
-       *handle = __iommu_dma_map(dev, page_to_phys(page), size, ioprot);
+       *handle = __iommu_dma_map(dev, page_to_phys(page), size, ioprot,
+                                 dev->coherent_dma_mask);
        if (*handle == DMA_MAPPING_ERROR) {
                __iommu_dma_free(dev, size, cpu_addr);
                return NULL;
--
2.20.1
2014 Jul 08
0
[PATCH v4 2/6] drm/nouveau: map pages using DMA API on platform devices
page_to_phys() is not the correct way to obtain the DMA address of a
buffer on a non-PCI system. Use the DMA API functions for this, which
are portable and will allow us to use other DMA API functions for
buffer synchronization.
Signed-off-by: Alexandre Courbot <acourbot at nvidia.com>
---
drivers/gpu/d...
2011 Mar 12
2
merge error in intel_agp_insert_sg_entries() in xen.git
...*/
-#ifdef CONFIG_DMAR
+#if defined(CONFIG_DMAR) || defined(CONFIG_XEN)
#define USE_PCI_DMA_API 1
#endif
@@ -296,8 +302,20 @@ static void intel_agp_insert_sg_entries(
        int i, j;
        for (i = 0, j = pg_start; i < mem->page_count; i++, j++) {
+               phys_addr_t phys = page_to_phys(mem->pages[i]);
+               if (xen_pv_domain()) {
+                       phys_addr_t xen_phys = PFN_PHYS(pfn_to_mfn(
+                               page_to_pfn(mem->pages[i])));
+                       if (xen_phys != phys) {
+                               printk(KERN_ERR "...
2019 Dec 21
0
[PATCH 6/8] iommu: allow the dma-iommu api to use bounce buffers
...le(dev, phys, aligned_size,
+                               aligned_size, dir, attrs);
+               iommu_dma_free_iova(cookie, iova, aligned_size, NULL);
                return DMA_MAPPING_ERROR;
        }
        return iova + iova_off;
@@ -761,10 +823,10 @@ static dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
{
        phys_addr_t phys = page_to_phys(page) + offset;
        bool coherent = dev_is_dma_coherent(dev);
-       int prot = dma_info_to_prot(dir, coherent, attrs);
        dma_addr_t dma_handle;
-       dma_handle = __iommu_dma_map(dev, phys, size, prot, dma_get_mask(dev));
+       dma_handle = __iommu_dma_map(dev, phys, size, dma_get_mask(dev),
+                       coherent, dir,...
2014 Jul 11
0
[PATCH v4 2/6] drm/nouveau: map pages using DMA API on platform devices
On 07/10/2014 09:58 PM, Daniel Vetter wrote:
> On Tue, Jul 08, 2014 at 05:25:57PM +0900, Alexandre Courbot wrote:
>> page_to_phys() is not the correct way to obtain the DMA address of a
>> buffer on a non-PCI system. Use the DMA API functions for this, which
>> are portable and will allow us to use other DMA API functions for
>> buffer synchronization.
>>
>> Signed-off-by: Alexandre Courbot <a...
2014 May 01
0
[PATCH v3 0/9] drm/nouveau: support for GK20A, cont'd
On Fri, Apr 25, 2014 at 5:19 PM, Alexandre Courbot <acourbot at nvidia.com> wrote:
> Changes since v2:
> - Enabled software class
> - Removed unneeded changes to nouveau_accel_init()
> - Replaced use of architecture-private pfn_to_dma() and dma_to_pfn() with
> the portable page_to_phys()/phys_to_page()
page_to_phys() looks well defined and used everywhere, phys_to_page()
not so much (including on amd64) :(
> - Fixed incorrect comment/commit log talking about bytes instead of words
>
> Hope this looks good! Once this gets merged the next set will be to use this
> driv...
2014 Jul 11
0
[PATCH v4 2/6] drm/nouveau: map pages using DMA API on platform devices
...0 AM, Ben Skeggs wrote:
> On Fri, Jul 11, 2014 at 12:35 PM, Alexandre Courbot <acourbot at nvidia.com> wrote:
>> On 07/10/2014 09:58 PM, Daniel Vetter wrote:
>>>
>>> On Tue, Jul 08, 2014 at 05:25:57PM +0900, Alexandre Courbot wrote:
>>>>
>>>> page_to_phys() is not the correct way to obtain the DMA address of a
>>>> buffer on a non-PCI system. Use the DMA API functions for this, which
>>>> are portable and will allow us to use other DMA API functions for
>>>> buffer synchronization.
>>>>
>>>>...
2012 Oct 18
1
[PATCH] virtio: 9p: correctly pass physical address to userspace for high pages
...nt vring_add_indirect(struct vring_virtqueue *vq,
>       /* Use a single buffer which doesn't continue */
>       head = vq->free_head;
>       vq->vring.desc[head].flags = VRING_DESC_F_INDIRECT;
> -     vq->vring.desc[head].addr = virt_to_phys(desc);
> +     vq->vring.desc[head].addr = page_to_phys(kmap_to_page(desc)) +
> +                     ((unsigned long)desc & ~PAGE_MASK);
>       vq->vring.desc[head].len = i * sizeof(struct vring_desc);
Gah, virt_to_phys_harder()?
What's the performance effect? If it's negligible, why doesn't
virt_to_phys() just do this for us?
We do have a...
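virt_to_phys_harder() is only being mused about here; a sketch of what such a helper might look like, built from the patch's own expression (kmap_to_page() already falls back to the lowmem page for non-kmap addresses):

        static inline phys_addr_t virt_to_phys_harder(void *addr)
        {
                return page_to_phys(kmap_to_page(addr)) + offset_in_page(addr);
        }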
2019 Jul 05
0
[PATCH v2 2/6] drm/fb-helper: Map DRM client buffer only when required
...];
        fbi->fix.smem_len = fbi->screen_size;
-       fbi->screen_buffer = buffer->vaddr;
-       /* Shamelessly leak the physical address to user-space */
-#if IS_ENABLED(CONFIG_DRM_FBDEV_LEAK_PHYS_SMEM)
-       if (drm_leak_fbdev_smem && fbi->fix.smem_start == 0)
-               fbi->fix.smem_start =
-                       page_to_phys(virt_to_page(fbi->screen_buffer));
-#endif
+
        drm_fb_helper_fill_info(fbi, fb_helper, sizes);
        if (fb->funcs->dirty) {
@@ -2231,6 +2235,19 @@ int drm_fb_helper_generic_probe(struct drm_fb_helper *fb_helper,
                fbi->fbdefio = &drm_fbdev_defio;
                fb_deferred_io_init(fbi);
+       } e...
2020 Apr 22
2
[PATCH hmm 5/5] mm/hmm: remove the customizable pfn format from hmm_range_fault
...on the caller pre-zeroing the array?
> +             page = hmm_pfn_to_page(range->hmm_pfns[i]);
> +             if (is_device_private_page(page))
> +                     ioctl_addr[i] = nouveau_dmem_page_addr(page) |
> +                                     NVIF_VMM_PFNMAP_V0_V |
> +                                     NVIF_VMM_PFNMAP_V0_VRAM;
> +             else
> +                     ioctl_addr[i] = page_to_phys(page) |
> +                                     NVIF_VMM_PFNMAP_V0_V |
> +                                     NVIF_VMM_PFNMAP_V0_HOST;
> +             if (range->hmm_pfns[i] & HMM_PFN_WRITE)
> +                     ioctl_addr[i] |= NVIF_VMM_PFNMAP_V0_W;
Now that this routine isn't really device memory specific any more, I
wonder if it should move to nouveau_svm.c.