Displaying 20 results from an estimated 24 matches for "dmirror_device".
2020 Jun 19 · 0 · [PATCH 15/16] mm/hmm/test: add self tests for THP migration
...+++++++++++++++++++-----
tools/testing/selftests/vm/hmm-tests.c | 292 ++++++++++++++++++++++
2 files changed, 560 insertions(+), 55 deletions(-)
diff --git a/lib/test_hmm.c b/lib/test_hmm.c
index db5d2e8d7420..f4e2e8731366 100644
--- a/lib/test_hmm.c
+++ b/lib/test_hmm.c
@@ -92,6 +92,7 @@ struct dmirror_device {
unsigned long calloc;
unsigned long cfree;
struct page *free_pages;
+ struct page *free_huge_pages;
spinlock_t lock; /* protects the above */
};
@@ -443,6 +444,7 @@ static int dmirror_write(struct dmirror *dmirror, struct hmm_dmirror_cmd *cmd)
}
static bool dmirror_allocate_c...
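The hunk above adds a second free list for huge pages alongside free_pages. In test_hmm, free device pages are chained through page->zone_device_data under mdevice->lock, so a huge-page-aware allocator only needs to pick the right list head. A minimal sketch, assuming a hypothetical dmirror_pop_free_page() helper rather than the series' actual code:

static struct page *dmirror_pop_free_page(struct dmirror_device *mdevice,
                                          bool huge)
{
        /* Pick the base-page or huge-page free list (hypothetical helper). */
        struct page **list = huge ? &mdevice->free_huge_pages
                                  : &mdevice->free_pages;
        struct page *dpage;

        spin_lock(&mdevice->lock);
        dpage = *list;
        if (dpage) {
                *list = dpage->zone_device_data;        /* unlink list head */
                mdevice->calloc += huge ? HPAGE_PMD_NR : 1;
        }
        spin_unlock(&mdevice->lock);
        return dpage;
}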
2020 Oct 08 · 2 · [PATCH] mm: make device private reference counts zero based
...turn page->pgmap->type == MEMORY_DEVICE_FS_DAX;
}
void put_devmap_managed_page(struct page *page);
diff --git a/lib/test_hmm.c b/lib/test_hmm.c
index e151a7f10519..bf92a261fa6f 100644
--- a/lib/test_hmm.c
+++ b/lib/test_hmm.c
@@ -509,10 +509,15 @@ static bool dmirror_allocate_chunk(struct dmirror_device *mdevice,
mdevice->devmem_count * (DEVMEM_CHUNK_SIZE / (1024 * 1024)),
pfn_first, pfn_last);
+ /*
+ * Pages are created with an initial reference count of one but should
+ * have a reference count of zero while in the free state.
+ */
spin_lock(&mdevice->lock);
for (pfn = p...
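The new comment states the invariant this patch introduces: device private pages come back from memremap_pages() with a count of one, but the test driver wants free pages held at zero. A sketch of what the truncated loop plausibly does; set_page_count() is the stock kernel helper, but the loop body here is an assumption based on the visible context:

spin_lock(&mdevice->lock);
for (pfn = pfn_first; pfn < pfn_last; pfn++) {
        struct page *page = pfn_to_page(pfn);

        set_page_count(page, 0);        /* free state is refcount zero */
        page->zone_device_data = mdevice->free_pages;
        mdevice->free_pages = page;
}
spin_unlock(&mdevice->lock);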
2020 Oct 09 · 0 · [PATCH] mm: make device private reference counts zero based
...> ---
>
[snip]
>
> void put_devmap_managed_page(struct page *page);
> diff --git a/lib/test_hmm.c b/lib/test_hmm.c
> index e151a7f10519..bf92a261fa6f 100644
> --- a/lib/test_hmm.c
> +++ b/lib/test_hmm.c
> @@ -509,10 +509,15 @@ static bool dmirror_allocate_chunk(struct dmirror_device *mdevice,
> mdevice->devmem_count * (DEVMEM_CHUNK_SIZE / (1024 * 1024)),
> pfn_first, pfn_last);
>
> + /*
> + * Pages are created with an initial reference count of one but should
> + * have a reference count of zero while in the free state.
> + */
> spin_loc...
2020 Mar 19 · 0 · [PATCH 3/4] mm: simplify device private page handling in hmm_range_fault
...v *cdev = inode->i_cdev;
> + struct dmirror *dmirror;
> + int ret;
> +
> + /* Mirror this process address space */
> + dmirror = kzalloc(sizeof(*dmirror), GFP_KERNEL);
> + if (dmirror == NULL)
> + return -ENOMEM;
> +
> + dmirror->mdevice = container_of(cdev, struct dmirror_device, cdevice);
> + mutex_init(&dmirror->mutex);
> + xa_init(&dmirror->pt);
> +
> + ret = mmu_interval_notifier_insert(&dmirror->notifier, current->mm,
> + 0, ULONG_MAX & PAGE_MASK, &dmirror_min_ops);
> + if (ret) {
> + kfree(dmirror);
> + ret...
2020 Mar 17 · 4 · [PATCH 3/4] mm: simplify device private page handling in hmm_range_fault
On 3/17/20 5:59 AM, Christoph Hellwig wrote:
> On Tue, Mar 17, 2020 at 09:47:55AM -0300, Jason Gunthorpe wrote:
>> I've been using v7 of Ralph's tester and it is working well - it has
>> DEVICE_PRIVATE support so I think it can test this flow too. Ralph, are
>> you able?
>>
>> This hunk seems trivial enough to me, can we include it now?
>
> I can send
2019 Sep 11 · 6 · [PATCH 0/4] HMM tests and minor fixes
These changes are based on Jason's latest hmm branch.
Patch 1 was previously posted here [1] but was dropped from the original
series. Hopefully, the tests will reduce concerns about edge conditions.
I'm sure more tests could be usefully added but I thought this was a good
starting point.
[1] https://lore.kernel.org/linux-mm/20190726005650.2566-6-rcampbell@nvidia.com/
Ralph Campbell
2020 Jan 13 · 9 · [PATCH v6 0/6] mm/hmm/test: add self tests for HMM
This series adds new functions to the mmu interval notifier API to
allow device drivers with MMUs to dynamically mirror a process' page
tables based on device faults and invalidation callbacks. The Nouveau
driver is updated to use the extended API and a set of stand alone self
tests is added to help validate and maintain correctness.
The patches are based on linux-5.5.0-rc6 and are for
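The interval notifier API the series extends pairs mmu_interval_notifier_insert() (visible in excerpts elsewhere in these results) with an ops structure whose invalidate callback must serialize against the driver and advance the notifier sequence. A minimal sketch with illustrative bodies; the real dmirror_interval_invalidate() in lib/test_hmm.c differs in detail:

static bool dmirror_interval_invalidate(struct mmu_interval_notifier *mni,
                                        const struct mmu_notifier_range *range,
                                        unsigned long cur_seq)
{
        struct dmirror *dmirror = container_of(mni, struct dmirror, notifier);

        /* Non-blockable invalidations may not sleep on the mutex. */
        if (mmu_notifier_range_blockable(range))
                mutex_lock(&dmirror->mutex);
        else if (!mutex_trylock(&dmirror->mutex))
                return false;

        mmu_interval_set_seq(mni, cur_seq);
        /* ... drop mirrored entries for [range->start, range->end) ... */
        mutex_unlock(&dmirror->mutex);
        return true;
}

static const struct mmu_interval_notifier_ops dmirror_min_ops = {
        .invalidate = dmirror_interval_invalidate,
};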
2020 Nov 06 · 12 · [PATCH v3 0/6] mm/hmm/nouveau: add THP migration to migrate_vma_*
This series adds support for transparent huge page migration to
migrate_vma_*() and adds nouveau SVM and HMM selftests as consumers.
Earlier versions were posted at [1] and [2].
The patches apply cleanly to the linux-mm 5.10.0-rc2 tree. There are a
lot of other THP patches being posted. I don't think there are any
semantic conflicts but there may be some merge conflicts depending on
2020 Sep 02 · 10 · [PATCH v2 0/7] mm/hmm/nouveau: add THP migration to migrate_vma_*
This series adds support for transparent huge page migration to
migrate_vma_*() and adds nouveau SVM and HMM selftests as consumers.
An earlier version was posted at [1]. This version adds
support for splitting a THP midway through the migration process, which
led to a number of changes.
The patches apply cleanly to the current linux-mm tree. Since there
are a couple of patches in linux-mm from Dan
2020 Sep 26 · 1 · [PATCH 2/2] mm: remove extra ZONE_DEVICE struct page refcount
...> -
> if (put_page_testzero(page))
> __put_page(page);
> }
> diff --git a/lib/test_hmm.c b/lib/test_hmm.c
> index e7dc3de355b7..1033b19c9c52 100644
> --- a/lib/test_hmm.c
> +++ b/lib/test_hmm.c
> @@ -561,7 +561,7 @@ static struct page *dmirror_devmem_alloc_page(struct dmirror_device *mdevice)
> }
>
> dpage->zone_device_data = rpage;
> - get_page(dpage);
> + init_page_count(dpage);
> lock_page(dpage);
> return dpage;
>
Doesn't test_hmm also need to reinitialize the refcount before freeing
the page in hmm_dmirror_exit?
> int erro...
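The question above concerns teardown: under this scheme a page sitting on the driver's free list has a zero refcount, so releasing it during device removal without first reinitializing the count would underflow. A sketch of the kind of fix being suggested, purely illustrative:

while (mdevice->free_pages) {
        struct page *page = mdevice->free_pages;

        mdevice->free_pages = page->zone_device_data;
        init_page_count(page);  /* restore a count of one ... */
        put_page(page);         /* ... so the final put behaves */
}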
2020 Mar 19 · 2 · [PATCH 3/4] mm: simplify device private page handling in hmm_range_fault
...+ struct dmirror *dmirror;
>> + int ret;
>> +
>> + /* Mirror this process address space */
>> + dmirror = kzalloc(sizeof(*dmirror), GFP_KERNEL);
>> + if (dmirror == NULL)
>> + return -ENOMEM;
>> +
>> + dmirror->mdevice = container_of(cdev, struct dmirror_device, cdevice);
>> + mutex_init(&dmirror->mutex);
>> + xa_init(&dmirror->pt);
>> +
>> + ret = mmu_interval_notifier_insert(&dmirror->notifier, current->mm,
>> + 0, ULONG_MAX & PAGE_MASK, &dmirror_min_ops);
>> + if (ret) {
>> +...
2020 Oct 12 · 2 · [PATCH v2] mm/hmm: make device private reference counts zero based
...turn page->pgmap->type == MEMORY_DEVICE_FS_DAX;
}
void put_devmap_managed_page(struct page *page);
diff --git a/lib/test_hmm.c b/lib/test_hmm.c
index e151a7f10519..bf92a261fa6f 100644
--- a/lib/test_hmm.c
+++ b/lib/test_hmm.c
@@ -509,10 +509,15 @@ static bool dmirror_allocate_chunk(struct dmirror_device *mdevice,
mdevice->devmem_count * (DEVMEM_CHUNK_SIZE / (1024 * 1024)),
pfn_first, pfn_last);
+ /*
+ * Pages are created with an initial reference count of one but should
+ * have a reference count of zero while in the free state.
+ */
spin_lock(&mdevice->lock);
for (pfn = p...
2020 Jul 23 · 0 · [PATCH v4 5/6] mm/hmm/test: use the new migration invalidation
...pgmap_owner = dmirror->mdevice;
args.flags = MIGRATE_VMA_SELECT_SYSTEM;
ret = migrate_vma_setup(&args);
if (ret)
@@ -983,7 +991,7 @@ static void dmirror_devmem_free(struct page *page)
}
static vm_fault_t dmirror_devmem_fault_alloc_and_copy(struct migrate_vma *args,
- struct dmirror_device *mdevice)
+ struct dmirror *dmirror)
{
const unsigned long *src = args->src;
unsigned long *dst = args->dst;
@@ -1005,6 +1013,7 @@ static vm_fault_t dmirror_devmem_fault_alloc_and_copy(struct migrate_vma *args,
continue;
lock_page(dpage);
+ xa_erase(&dmirror->...
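The first hunk above shows the standard migrate_vma_*() call pattern, with pgmap_owner identifying the driver so its own invalidations can be filtered. A sketch of the full sequence around the visible fragment (array setup and locking elided; field and flag names are the upstream ones):

struct migrate_vma args = {
        .vma            = vma,
        .start          = start,
        .end            = end,
        .src            = src_pfns,
        .dst            = dst_pfns,
        .pgmap_owner    = dmirror->mdevice,
        .flags          = MIGRATE_VMA_SELECT_SYSTEM,
};
int ret = migrate_vma_setup(&args);

if (ret)
        return ret;
/* allocate device pages, fill args.dst, copy the data ... */
migrate_vma_pages(&args);
/* ... update the device's page tables ... */
migrate_vma_finalize(&args);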
2020 Jun 19 · 22 · [PATCH 00/16] mm/hmm/nouveau: THP mapping and migration
These patches apply to linux-5.8.0-rc1. Patches 1-3 should probably go
into 5.8; the others can be queued for 5.9. Patches 4-6 improve the HMM
self tests. Patches 7-8 prepare nouveau for the meat of this series, which
adds support and testing for compound page mapping of system memory
(patches 9-11) and compound page migration to device private memory
(patches 12-16). Since these changes are split
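Compound page mapping here means hmm_range_fault() can tell the driver that a range is backed by a THP, letting it build one large device mapping instead of hundreds of small ones. A sketch of how a consumer reads that information, assuming the hmm_pfn_to_map_order() helper this line of work introduces:

unsigned long hmm_pfn = range->hmm_pfns[i];

if (hmm_pfn & HMM_PFN_VALID) {
        struct page *page = hmm_pfn_to_page(hmm_pfn);
        unsigned int order = hmm_pfn_to_map_order(hmm_pfn);

        /* order 9 on x86-64 means a 2MB THP: one PMD-sized device entry */
        if (order)
                pr_debug("map %px as a single order-%u entry\n", page, order);
}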
2020 Sep 25 · 6 · [RFC PATCH v2 0/2] mm: remove extra ZONE_DEVICE struct page refcount
Matthew Wilcox, Ira Weiny, and others have complained that ZONE_DEVICE
struct page reference counting is ugly because such pages are "free" when the
reference count is one instead of zero. This leads to explicit checks
for ZONE_DEVICE pages in places like put_page(), GUP, THP splitting, and
page migration which have to adjust the expected reference count when
determining if the page is
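The "free at one" convention is why the generic put path needs a ZONE_DEVICE branch at all. A sketch of the special case the series wants to remove, modeled on the old put_page()/put_devmap_managed_page() arrangement and simplified for illustration:

static inline void put_page_sketch(struct page *page)
{
        if (page_is_devmap_managed(page)) {
                /* 2 -> 1 means "now free": notify the pgmap owner instead */
                put_devmap_managed_page(page);
                return;
        }

        if (put_page_testzero(page))    /* ordinary pages free at zero */
                __put_page(page);
}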
2020 Sep 25 · 0 · [PATCH 2/2] mm: remove extra ZONE_DEVICE struct page refcount
..._devmap_managed_page(page);
- return;
- }
-
if (put_page_testzero(page))
__put_page(page);
}
diff --git a/lib/test_hmm.c b/lib/test_hmm.c
index e7dc3de355b7..1033b19c9c52 100644
--- a/lib/test_hmm.c
+++ b/lib/test_hmm.c
@@ -561,7 +561,7 @@ static struct page *dmirror_devmem_alloc_page(struct dmirror_device *mdevice)
}
dpage->zone_device_data = rpage;
- get_page(dpage);
+ init_page_count(dpage);
lock_page(dpage);
return dpage;
diff --git a/mm/gup.c b/mm/gup.c
index e5739a1974d5..9b3e94db3a09 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -177,41 +177,6 @@ bool __must_check try_grab_page(struct...
2020 Oct 01 · 0 · [RFC PATCH v3 2/2] mm: remove extra ZONE_DEVICE struct page refcount
..._devmap_managed_page(page);
- return;
- }
-
if (put_page_testzero(page))
__put_page(page);
}
diff --git a/lib/test_hmm.c b/lib/test_hmm.c
index e151a7f10519..37a2dea5e805 100644
--- a/lib/test_hmm.c
+++ b/lib/test_hmm.c
@@ -561,7 +561,7 @@ static struct page *dmirror_devmem_alloc_page(struct dmirror_device *mdevice)
}
dpage->zone_device_data = rpage;
- get_page(dpage);
+ init_page_count(dpage);
lock_page(dpage);
return dpage;
diff --git a/mm/gup.c b/mm/gup.c
index 102877ed77a4..ed117e225c86 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -177,41 +177,6 @@ bool __must_check try_grab_page(struct...
2020 Jul 06 · 8 · [PATCH 0/5] mm/migrate: avoid device private invalidations
The goal for this series is to avoid device private memory TLB
invalidations when migrating a range of addresses from system
memory to device private memory and some of those pages have already
been migrated. The approach taken is to introduce a new mmu notifier
invalidation event type and use that in the device driver to skip
invalidation callbacks from migrate_vma_setup(). The device driver is
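The filtering works by tagging the notifier range that migrate_vma_setup() emits with the new event type and the pgmap owner, so the driver's invalidate callback can recognize its own migrations and return early. The check as it appears in this era's lib/test_hmm.c (field names from the series):

/* Inside the driver's invalidate callback: skip our own migrations. */
if (range->event == MMU_NOTIFY_MIGRATE &&
    range->migrate_pgmap_owner == dmirror->mdevice)
        return true;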
2020 Jul 21 · 6 · [PATCH v3 0/5] mm/migrate: avoid device private invalidations
The goal for this series is to avoid device private memory TLB
invalidations when migrating a range of addresses from system
memory to device private memory and some of those pages have already
been migrated. The approach taken is to introduce a new mmu notifier
invalidation event type and use that in the device driver to skip
invalidation callbacks from migrate_vma_setup(). The device driver is
2020 Jul 13 · 9 · [PATCH v2 0/5] mm/migrate: avoid device private invalidations
The goal for this series is to avoid device private memory TLB
invalidations when migrating a range of addresses from system
memory to device private memory and some of those pages have already
been migrated. The approach taken is to introduce a new mmu notifier
invalidation event type and use that in the device driver to skip
invalidation callbacks from migrate_vma_setup(). The device driver is