Displaying 12 results from an estimated 12 matches for "map_single".
Did you mean: unmap_single
2008 Dec 22, 17 messages: [PATCH 0 of 9] swiotlb: use phys_addr_t for pages
...tial revert of the previous swiotlb changes, and does a partial replacement
with Becky Bruce's series. The most important difference is Becky's use of
phys_addr_t rather than page+offset to represent arbitrary pages. This turns
out to be simpler.
I didn't replicate the map_single_page changes, since I'm not exactly sure of
ppc's requirements here, and it seemed like something that could be easily added.
Quick testing showed no problems, but I haven't had the chance to do anything
extensive.
I've made some small changes to Becky'...
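The page+offset versus phys_addr_t point is easy to picture. Below is a minimal
sketch of the two representations; the types and helper names are illustrative,
not the actual swiotlb patches:

#include <linux/mm.h>
#include <linux/pfn.h>
#include <linux/types.h>

/* Hypothetical types for illustration: the older interface described a
 * buffer as a struct page plus an offset into it; the phys_addr_t
 * approach collapses that into a single value and splits it back out
 * only where a page is really needed.
 */
struct buf_loc_page {			/* hypothetical */
	struct page *page;
	unsigned long offset;
};

static inline phys_addr_t buf_loc_to_phys(struct buf_loc_page loc)
{
	return PFN_PHYS(page_to_pfn(loc.page)) + loc.offset;
}

static inline struct buf_loc_page buf_loc_from_phys(phys_addr_t paddr)
{
	struct buf_loc_page loc = {
		.page	= pfn_to_page(PFN_DOWN(paddr)),
		.offset	= offset_in_page(paddr),
	};
	return loc;
}

Carrying one phys_addr_t through the bounce-buffer paths avoids threading the
page/offset pair everywhere, which is the simplification the cover letter refers to.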
2013 Sep 04, 1 message: [PATCHv2] tracing/events: Add bounce tracing to swiotbl
...rivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -42,6 +42,9 @@
#include <xen/page.h>
#include <xen/xen-ops.h>
#include <xen/hvc-console.h>
+
+#define CREATE_TRACE_POINTS
+#include <trace/events/swiotlb.h>
/*
* Used to do a quick range check in swiotlb_tbl_unmap_single and
* swiotlb_tbl_sync_single_*, to see if the memory was in fact allocated by this
@@ -358,6 +361,8 @@ dma_addr_t xen_swiotlb_map_page(struct device *dev, struct page *page,
/*
* Oh well, have to allocate and map a bounce buffer.
*/
+ trace_swiotlb_bounced(dev, dev_addr, size, swiotlb_fo...
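For context on the two added lines: a trace header is included with
CREATE_TRACE_POINTS defined in exactly one compilation unit so that the
tracepoint bodies, not just their declarations, are emitted there. The real
<trace/events/swiotlb.h> added by this series also takes a further argument
that is truncated in the excerpt above; what follows is only a simplified
sketch of the usual TRACE_EVENT() boilerplate, not the actual header:

/* Simplified sketch of a tracepoint header following the standard
 * TRACE_EVENT() pattern.
 */
#undef TRACE_SYSTEM
#define TRACE_SYSTEM swiotlb

#if !defined(_TRACE_SWIOTLB_H) || defined(TRACE_HEADER_MULTI_READ)
#define _TRACE_SWIOTLB_H

#include <linux/tracepoint.h>

TRACE_EVENT(swiotlb_bounced,
	TP_PROTO(struct device *dev, dma_addr_t dev_addr, size_t size),
	TP_ARGS(dev, dev_addr, size),

	TP_STRUCT__entry(
		__string(dev_name, dev_name(dev))
		__field(dma_addr_t, dev_addr)
		__field(size_t, size)
	),

	TP_fast_assign(
		__assign_str(dev_name, dev_name(dev));
		__entry->dev_addr = dev_addr;
		__entry->size = size;
	),

	TP_printk("dev_name: %s dev_addr=%llx size=%zu",
		  __get_str(dev_name),
		  (unsigned long long)__entry->dev_addr,
		  __entry->size)
);

#endif /* _TRACE_SWIOTLB_H */

/* This part must stay outside the include guard. */
#include <trace/define_trace.h>

With that in place, trace_swiotlb_bounced(...) becomes callable from
swiotlb-xen.c and the event appears under
/sys/kernel/debug/tracing/events/swiotlb/.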
2007 Jul 09, 21 messages: mthca use of dma_sync_single is bogus
It seems the problems running mthca in a Xen domU have uncovered a bug
in mthca: mthca uses dma_sync_single in mthca_arbel_write_mtt_seg()
and mthca_arbel_map_phys_fmr() to sync the MTTs that get written.
However, Documentation/DMA-API.txt says:
void
dma_sync_single(struct device *dev, dma_addr_t dma_handle, size_t size,
enum dma_data_direction direction)
synchronise a single
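The crux of that DMA-API passage is that the sync calls only apply to a
dma_addr_t obtained from a streaming mapping such as dma_map_single(), with the
same device, size and direction. A hedged sketch of the pattern the
documentation describes; the function and buffer names are illustrative, not
mthca code:

#include <linux/dma-mapping.h>

/* Illustrative streaming-DMA receive path: map, let the device write,
 * sync ownership back to the CPU before reading, unmap when done.
 */
static int demo_rx(struct device *dev, void *buf, size_t len)
{
	dma_addr_t handle;

	handle = dma_map_single(dev, buf, len, DMA_FROM_DEVICE);
	if (dma_mapping_error(dev, handle))
		return -ENOMEM;

	/* ... tell the device to DMA into 'handle' and wait ... */

	/* Hand ownership back to the CPU before touching the data. */
	dma_sync_single_for_cpu(dev, handle, len, DMA_FROM_DEVICE);
	/* ... CPU reads buf ... */

	/* Hand it back to the device if it will be reused for DMA. */
	dma_sync_single_for_device(dev, handle, len, DMA_FROM_DEVICE);

	/* ... more device activity ... */

	dma_unmap_single(dev, handle, len, DMA_FROM_DEVICE);
	return 0;
}

Syncing a handle that never came from one of the streaming map calls, or with
mismatched parameters, falls outside what the API documents, which is the
concern raised in this thread.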
2018 Sep 07, 1 message: [PATCH net-next v2 3/5] virtio_ring: add packed ring support
..._packed(const struct vring_virtqueue *vq,
> +				     struct vring_desc_state_packed *state)
> +{
> +	u16 flags;
> +
> +	if (!vring_use_dma_api(vq->vq.vdev))
> +		return;
> +
> +	flags = state->flags;
> +
> +	if (flags & VRING_DESC_F_INDIRECT) {
> +		dma_unmap_single(vring_dma_dev(vq),
> +				 state->addr, state->len,
> +				 (flags & VRING_DESC_F_WRITE) ?
> +				 DMA_FROM_DEVICE : DMA_TO_DEVICE);
> +	} else {
> +		dma_unmap_page(vring_dma_dev(vq),
> +			       state->addr, state->len,
> +			       (flags & VRING_DESC_F_...
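The branch in the quoted hunk mirrors how each descriptor was mapped in the
first place: in virtio_ring.c the driver-allocated indirect descriptor table is
mapped from a kernel virtual address, while data buffers are mapped page by
page from a scatterlist, so teardown has to pick the matching unmap call. A
rough sketch of the mapping side, simplified rather than the driver's exact
helpers:

#include <linux/dma-mapping.h>
#include <linux/mm.h>

/* Indirect descriptor tables live in ordinary kernel memory, so they
 * are mapped with dma_map_single() and must be torn down with
 * dma_unmap_single().
 */
static dma_addr_t demo_map_indirect(struct device *dma_dev,
				    void *desc_table, size_t len)
{
	return dma_map_single(dma_dev, desc_table, len, DMA_TO_DEVICE);
}

/* Data buffers are described by scatterlist pages, so they are mapped
 * with dma_map_page() and must be torn down with dma_unmap_page().
 */
static dma_addr_t demo_map_buf(struct device *dma_dev, struct page *page,
			       unsigned int offset, size_t len,
			       bool device_writes)
{
	return dma_map_page(dma_dev, page, offset, len,
			    device_writes ? DMA_FROM_DEVICE : DMA_TO_DEVICE);
}

That is why VRING_DESC_F_INDIRECT selects dma_unmap_single() in the unmap path
and everything else goes through dma_unmap_page().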
2019 Sep 08, 7 messages: [PATCH v6 0/5] iommu/amd: Convert the AMD iommu driver to the dma-iommu api
Convert the AMD iommu driver to the dma-iommu api. Remove the iova
handling and reserve region code from the AMD iommu driver.
Change-log:
V6:
-add more details to the description of patch 001-iommu-amd-Remove-unnecessary-locking-from-AMD-iommu-.patch
-rename handle_deferred_device to iommu_dma_deferred_attach
-fix double tabs in 0003-iommu-dma-iommu-Handle-deferred-devices.patch
V5:
-Rebase on
2019 Jun 13, 8 messages: [PATCH v4 0/5] iommu/amd: Convert the AMD iommu driver to the dma-iommu api
Convert the AMD iommu driver to the dma-iommu api. Remove the iova
handling and reserve region code from the AMD iommu driver.
Change-log:
V4:
-Rebase on top of linux-next
-Split the removing of the unnecessary locking in the amd iommu driver into a separate patch
-refactor the "iommu/dma-iommu: Handle deferred devices" patch and address comments
v3:
-rename dma_limit to dma_mask
-exit
2008 Nov 13, 69 messages: [PATCH 00 of 38] xen: add more Xen dom0 support
Hi Ingo,
Here's the chunk of patches to add Xen Dom0 support (it's probably worth
creating a new xen/dom0 topic branch for it).
A dom0 Xen domain is basically the same as a normal domU domain, but it has
extra privileges to directly access hardware. There are two issues to deal with:
- translating to and from the domain's pseudo-physical addresses and real machine
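The pseudo-physical versus machine address issue in the excerpt above boils
down to two lookup tables: the guest's p2m mapping from its own page frame
numbers to machine frames, and the hypervisor-maintained m2p mapping back. The
following is a conceptual sketch only; the real code in arch/x86/xen handles
identity ranges, missing frames and more, and all names here are illustrative:

#include <linux/mm.h>
#include <linux/pfn.h>
#include <linux/types.h>

/* Hypothetical flat lookup tables standing in for the real p2m/m2p
 * machinery.
 */
extern unsigned long *demo_p2m;	/* guest pfn -> machine frame (mfn) */
extern unsigned long *demo_m2p;	/* machine frame (mfn) -> guest pfn */

static inline unsigned long demo_pfn_to_mfn(unsigned long pfn)
{
	return demo_p2m[pfn];
}

static inline unsigned long demo_mfn_to_pfn(unsigned long mfn)
{
	return demo_m2p[mfn];
}

/* Translate a pseudo-physical address into the machine address the
 * hardware would actually need.
 */
static inline phys_addr_t demo_phys_to_machine(phys_addr_t paddr)
{
	unsigned long mfn = demo_pfn_to_mfn(PFN_DOWN(paddr));

	return ((phys_addr_t)mfn << PAGE_SHIFT) | offset_in_page(paddr);
}

Dom0 needs these translations because the hardware it is allowed to drive only
understands machine addresses, not the domain's pseudo-physical view.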
2018 Jul 11, 15 messages: [PATCH net-next v2 0/5] virtio: support packed ring
Hello everyone,
This patch set implements packed ring support in virtio driver.
Some functional tests have been done with Jason's
packed ring implementation in vhost:
https://lkml.org/lkml/2018/7/3/33
Both ping and netperf worked as expected.
v1 -> v2:
- Use READ_ONCE() to read event off_wrap and flags together (Jason);
- Add comments related to ccw (Jason);
RFC (v6) -> v1:
-