Displaying 20 results from an estimated 30 matches for "mdevice".
2020 Jun 19
0
[PATCH 15/16] mm/hmm/test: add self tests for THP migration
...unsigned long cfree;
struct page *free_pages;
+ struct page *free_huge_pages;
spinlock_t lock; /* protects the above */
};
@@ -443,6 +444,7 @@ static int dmirror_write(struct dmirror *dmirror, struct hmm_dmirror_cmd *cmd)
}
static bool dmirror_allocate_chunk(struct dmirror_device *mdevice,
+ bool is_huge,
struct page **ppage)
{
struct dmirror_chunk *devmem;
@@ -502,16 +504,39 @@ static bool dmirror_allocate_chunk(struct dmirror_device *mdevice,
pfn_first, pfn_last);
spin_lock(&mdevice->lock);
- for (pfn = pfn_first; pfn < pfn_last; pfn++) {
+ for (p...
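For orientation, a minimal sketch of what the split free list hinted at by this hunk could look like on the allocation side. The helper name and its exact shape are assumptions for illustration, not the patch itself; the existing test driver does link free pages through page->zone_device_data and protect the list with mdevice->lock.

/* Hypothetical helper: pop one entry from the order-0 or huge free list. */
static struct page *dmirror_pop_free_page(struct dmirror_device *mdevice,
					  bool is_huge)
{
	struct page *page;

	spin_lock(&mdevice->lock);
	page = is_huge ? mdevice->free_huge_pages : mdevice->free_pages;
	if (page) {
		/* Free pages are chained through zone_device_data. */
		if (is_huge)
			mdevice->free_huge_pages = page->zone_device_data;
		else
			mdevice->free_pages = page->zone_device_data;
		mdevice->cfree--;
	}
	spin_unlock(&mdevice->lock);
	return page;
}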
2019 Sep 11
6
[PATCH 0/4] HMM tests and minor fixes
These changes are based on Jason's latest hmm branch.
Patch 1 was previously posted here [1] but was dropped from the original
series. Hopefully, the tests will reduce concerns about edge conditions.
I'm sure more tests could be usefully added but I thought this was a good
starting point.
[1] https://lore.kernel.org/linux-mm/20190726005650.2566-6-rcampbell@nvidia.com/
Ralph Campbell
2020 Oct 08
2
[PATCH] mm: make device private reference counts zero based
...map->type == MEMORY_DEVICE_FS_DAX;
}
void put_devmap_managed_page(struct page *page);
diff --git a/lib/test_hmm.c b/lib/test_hmm.c
index e151a7f10519..bf92a261fa6f 100644
--- a/lib/test_hmm.c
+++ b/lib/test_hmm.c
@@ -509,10 +509,15 @@ static bool dmirror_allocate_chunk(struct dmirror_device *mdevice,
mdevice->devmem_count * (DEVMEM_CHUNK_SIZE / (1024 * 1024)),
pfn_first, pfn_last);
+ /*
+ * Pages are created with an initial reference count of one but should
+ * have a reference count of zero while in the free state.
+ */
spin_lock(&mdevice->lock);
for (pfn = pfn_first;...
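The loop is cut off by the search excerpt. As a hedged sketch of what the quoted comment describes, not necessarily the literal patch text, each newly created page would have its initial reference dropped before being linked onto the free list:

	spin_lock(&mdevice->lock);
	for (pfn = pfn_first; pfn < pfn_last; pfn++) {
		struct page *page = pfn_to_page(pfn);

		/* ZONE_DEVICE pages are created with a refcount of one;
		 * park free pages at zero as the comment above describes. */
		set_page_count(page, 0);
		page->zone_device_data = mdevice->free_pages;
		mdevice->free_pages = page;
	}
	spin_unlock(&mdevice->lock);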
2020 Nov 06
12
[PATCH v3 0/6] mm/hmm/nouveau: add THP migration to migrate_vma_*
This series adds support for transparent huge page migration to
migrate_vma_*() and adds nouveau SVM and HMM selftests as consumers.
Earlier versions were posted previously [1] and [2].
The patches apply cleanly to the linux-mm 5.10.0-rc2 tree. There are a
lot of other THP patches being posted. I don't think there are any
semantic conflicts but there may be some merge conflicts depending on
2020 Sep 02
10
[PATCH v2 0/7] mm/hmm/nouveau: add THP migration to migrate_vma_*
This series adds support for transparent huge page migration to
migrate_vma_*() and adds nouveau SVM and HMM selftests as consumers.
An earlier version was posted previously [1]. This version now
supports splitting a THP midway in the migration process, which
led to a number of changes.
The patches apply cleanly to the current linux-mm tree. Since there
are a couple of patches in linux-mm from Dan
2020 Oct 09
0
[PATCH] mm: make device private reference counts zero based
...nip]
>
> void put_devmap_managed_page(struct page *page);
> diff --git a/lib/test_hmm.c b/lib/test_hmm.c
> index e151a7f10519..bf92a261fa6f 100644
> --- a/lib/test_hmm.c
> +++ b/lib/test_hmm.c
> @@ -509,10 +509,15 @@ static bool dmirror_allocate_chunk(struct dmirror_device *mdevice,
> mdevice->devmem_count * (DEVMEM_CHUNK_SIZE / (1024 * 1024)),
> pfn_first, pfn_last);
>
> + /*
> + * Pages are created with an initial reference count of one but should
> + * have a reference count of zero while in the free state.
> + */
> spin_lock(&md...
2020 Jan 13
9
[PATCH v6 0/6] mm/hmm/test: add self tests for HMM
This series adds new functions to the mmu interval notifier API to
allow device drivers with MMUs to dynamically mirror a process' page
tables based on device faults and invalidation callbacks. The Nouveau
driver is updated to use the extended API and a set of stand alone self
tests is added to help validate and maintain correctness.
The patches are based on linux-5.5.0-rc6 and are for
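As a rough sketch of the interval notifier pattern these tests exercise: the registration call and the dmirror_min_ops name appear in excerpts quoted further down; the callback body here is illustrative rather than the driver's exact code.

static bool dmirror_interval_invalidate(struct mmu_interval_notifier *mni,
					const struct mmu_notifier_range *range,
					unsigned long cur_seq)
{
	struct dmirror *dmirror = container_of(mni, struct dmirror, notifier);

	if (mmu_notifier_range_blockable(range))
		mutex_lock(&dmirror->mutex);
	else if (!mutex_trylock(&dmirror->mutex))
		return false;

	/* Record the new sequence, then drop the mirrored range from dmirror->pt. */
	mmu_interval_set_seq(mni, cur_seq);
	/* ... zap entries for [range->start, range->end) here ... */
	mutex_unlock(&dmirror->mutex);
	return true;
}

static const struct mmu_interval_notifier_ops dmirror_min_ops = {
	.invalidate = dmirror_interval_invalidate,
};

/* Registration covering the whole address space, as in the driver's open(): */
ret = mmu_interval_notifier_insert(&dmirror->notifier, current->mm,
				   0, ULONG_MAX & PAGE_MASK, &dmirror_min_ops);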
2020 Jun 19
22
[PATCH 00/16] mm/hmm/nouveau: THP mapping and migration
These patches apply to linux-5.8.0-rc1. Patches 1-3 should probably go
into 5.8; the others can be queued for 5.9. Patches 4-6 improve the HMM
self tests. Patches 7-8 prepare nouveau for the meat of this series, which
adds support and testing for compound page mapping of system memory
(patches 9-11) and compound page migration to device private memory
(patches 12-16). Since these changes are split
2020 Mar 17
4
[PATCH 3/4] mm: simplify device private page handling in hmm_range_fault
On 3/17/20 5:59 AM, Christoph Hellwig wrote:
> On Tue, Mar 17, 2020 at 09:47:55AM -0300, Jason Gunthorpe wrote:
>> I've been using v7 of Ralph's tester and it is working well - it has
>> DEVICE_PRIVATE support so I think it can test this flow too. Ralph, are
>> you able?
>>
>> This hunk seems trivial enough to me, can we include it now?
>
> I can send
2020 Mar 19
0
[PATCH 3/4] mm: simplify device private page handling in hmm_range_fault
...ile *filp)
> +{
> + struct cdev *cdev = inode->i_cdev;
> + struct dmirror *dmirror;
> + int ret;
> +
> + /* Mirror this process address space */
> + dmirror = kzalloc(sizeof(*dmirror), GFP_KERNEL);
> + if (dmirror == NULL)
> + return -ENOMEM;
> +
> + dmirror->mdevice = container_of(cdev, struct dmirror_device, cdevice);
> + mutex_init(&dmirror->mutex);
> + xa_init(&dmirror->pt);
> +
> + ret = mmu_interval_notifier_insert(&dmirror->notifier, current->mm,
> + 0, ULONG_MAX & PAGE_MASK, &dmirror_min_ops);
> + if...
2020 Jul 23
0
[PATCH v4 5/6] mm/hmm/test: use the new migration invalidation
...= container_of(mni, struct dmirror, notifier);
+ /*
+ * Ignore invalidation callbacks for device private pages since
+ * the invalidation is handled as part of the migration process.
+ */
+ if (range->event == MMU_NOTIFY_MIGRATE &&
+ range->migrate_pgmap_owner == dmirror->mdevice)
+ return true;
+
if (mmu_notifier_range_blockable(range))
mutex_lock(&dmirror->mutex);
else if (!mutex_trylock(&dmirror->mutex))
@@ -693,7 +701,7 @@ static int dmirror_migrate(struct dmirror *dmirror,
args.dst = dst_pfns;
args.start = addr;
args.end = next;
- args.p...
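The hunk above is truncated. As a hedged illustration of the setup side, with field names as in the test driver but values assumed, tagging the migration with the owning device is what lets the invalidation callback shown above recognize and skip its own MMU_NOTIFY_MIGRATE event:

	struct migrate_vma args = {
		.vma		= vma,
		.src		= src_pfns,
		.dst		= dst_pfns,
		.start		= addr,
		.end		= next,
		/* Matches the migrate_pgmap_owner check in the callback above. */
		.pgmap_owner	= dmirror->mdevice,
		.flags		= MIGRATE_VMA_SELECT_SYSTEM,
	};

	ret = migrate_vma_setup(&args);	/* raises MMU_NOTIFY_MIGRATE with this owner */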
2020 Oct 12
2
[PATCH v2] mm/hmm: make device private reference counts zero based
...map->type == MEMORY_DEVICE_FS_DAX;
}
void put_devmap_managed_page(struct page *page);
diff --git a/lib/test_hmm.c b/lib/test_hmm.c
index e151a7f10519..bf92a261fa6f 100644
--- a/lib/test_hmm.c
+++ b/lib/test_hmm.c
@@ -509,10 +509,15 @@ static bool dmirror_allocate_chunk(struct dmirror_device *mdevice,
mdevice->devmem_count * (DEVMEM_CHUNK_SIZE / (1024 * 1024)),
pfn_first, pfn_last);
+ /*
+ * Pages are created with an initial reference count of one but should
+ * have a reference count of zero while in the free state.
+ */
spin_lock(&mdevice->lock);
for (pfn = pfn_first;...
2020 Jul 23
0
[PATCH v4 2/6] mm/migrate: add a flags parameter to migrate_vma
...- * others. For our own we would do a device private memory copy
- * not a migration and for others, we would need to fault the
- * other device's page into system memory first.
- */
- if (spage && is_zone_device_page(spage))
- continue;
-
dpage = dmirror_devmem_alloc_page(mdevice);
if (!dpage)
continue;
@@ -702,7 +693,8 @@ static int dmirror_migrate(struct dmirror *dmirror,
args.dst = dst_pfns;
args.start = addr;
args.end = next;
- args.src_owner = NULL;
+ args.pgmap_owner = NULL;
+ args.flags = MIGRATE_VMA_SELECT_SYSTEM;
ret = migrate_vma_setup(&a...
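For context, a hedged sketch of the reworked calling convention this hunk reflects: src_owner becomes pgmap_owner and a new flags field tells migrate_vma_setup() which source pages it may collect. Constants are from include/linux/migrate.h; the snippet continues from an args struct like the one in the hunk above, with error handling trimmed.

	args.pgmap_owner = NULL;			/* not needed when selecting system RAM */
	args.flags = MIGRATE_VMA_SELECT_SYSTEM;		/* collect ordinary system pages */
	/* For the reverse direction a driver would instead set:
	 *   args.pgmap_owner = mdevice;
	 *   args.flags = MIGRATE_VMA_SELECT_DEVICE_PRIVATE;
	 */
	ret = migrate_vma_setup(&args);			/* collect and unmap the selected pages */
	if (!ret) {
		/* allocate/copy a destination page for each MIGRATE_PFN_MIGRATE entry, then: */
		migrate_vma_pages(&args);
		migrate_vma_finalize(&args);
	}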
2020 Mar 19
2
[PATCH 3/4] mm: simplify device private page handling in hmm_range_fault
...v *cdev = inode->i_cdev;
>> + struct dmirror *dmirror;
>> + int ret;
>> +
>> + /* Mirror this process address space */
>> + dmirror = kzalloc(sizeof(*dmirror), GFP_KERNEL);
>> + if (dmirror == NULL)
>> + return -ENOMEM;
>> +
>> + dmirror->mdevice = container_of(cdev, struct dmirror_device, cdevice);
>> + mutex_init(&dmirror->mutex);
>> + xa_init(&dmirror->pt);
>> +
>> + ret = mmu_interval_notifier_insert(&dmirror->notifier, current->mm,
>> + 0, ULONG_MAX & PAGE_MASK, &dmirror_...
2020 Mar 20
1
[PATCH 3/4] mm: simplify device private page handling in hmm_range_fault
...; + dmirror_hmm_flags[HMM_PFN_WRITE]),
>
> Since pfns is not initialized, pfn_flags_mask should be 0.
Good point.
>> + .default_flags = dmirror_hmm_flags[HMM_PFN_VALID] |
>> + (write ? dmirror_hmm_flags[HMM_PFN_WRITE] : 0),
>> + .dev_private_owner = dmirror->mdevice,
>> + };
>> + int ret = 0;
>
>> +static int dmirror_snapshot(struct dmirror *dmirror,
>> + struct hmm_dmirror_cmd *cmd)
>> +{
>> + struct mm_struct *mm = dmirror->mm;
>> + unsigned long start, end;
>> + unsigned long size = cmd->npages...
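For reference, a hedged sketch of how such a snapshot/fault path looks with the later hmm_pfns-based hmm_range_fault() interface; the quoted excerpt uses the earlier flags/values arrays, and the buffer size and locals here are illustrative.

	unsigned long pfns[64];
	struct mm_struct *mm = dmirror->notifier.mm;
	struct hmm_range range = {
		.notifier		= &dmirror->notifier,
		.start			= start,
		.end			= start + (64UL << PAGE_SHIFT),
		.hmm_pfns		= pfns,
		.default_flags		= HMM_PFN_REQ_FAULT |
					  (write ? HMM_PFN_REQ_WRITE : 0),
		/* Device private pages owned by this device are returned as-is
		 * instead of being migrated back to system memory first. */
		.dev_private_owner	= dmirror->mdevice,
	};
	int ret;

	do {
		range.notifier_seq = mmu_interval_read_begin(range.notifier);
		mmap_read_lock(mm);
		ret = hmm_range_fault(&range);
		mmap_read_unlock(mm);
		/* On success, retake the driver lock and check
		 * mmu_interval_read_retry() before using the pfns. */
	} while (ret == -EBUSY);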
2020 Jul 21
6
[PATCH v3 0/5] mm/migrate: avoid device private invalidations
The goal for this series is to avoid device private memory TLB
invalidations when migrating a range of addresses from system
memory to device private memory and some of those pages have already
been migrated. The approach taken is to introduce a new mmu notifier
invalidation event type and use that in the device driver to skip
invalidation callbacks from migrate_vma_setup(). The device driver is
2020 Jul 23
9
[PATCH v4 0/6] mm/migrate: avoid device private invalidations
The goal for this series is to avoid device private memory TLB
invalidations when migrating a range of addresses from system
memory to device private memory and some of those pages have already
been migrated. The approach taken is to introduce a new mmu notifier
invalidation event type and use that in the device driver to skip
invalidation callbacks from migrate_vma_setup(). The device driver is
2020 Jul 21
0
[PATCH v3 2/5] mm/migrate: add a flags parameter to migrate_vma
...MIGRATE_VMA_SELECT_SYSTEM;
ret = migrate_vma_setup(&args);
if (ret)
goto out;
@@ -1053,7 +1054,8 @@ static vm_fault_t dmirror_devmem_fault(struct vm_fault *vmf)
args.end = args.start + PAGE_SIZE;
args.src = &src_pfns;
args.dst = &dst_pfns;
- args.src_owner = dmirror->mdevice;
+ args.pgmap_owner = dmirror->mdevice;
+ args.flags = MIGRATE_VMA_SELECT_DEVICE_PRIVATE;
if (migrate_vma_setup(&args))
return VM_FAULT_SIGBUS;
diff --git a/mm/migrate.c b/mm/migrate.c
index f37729673558..e3ea68e3a08b 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2287,7 +2287,9 @@...
2020 Jul 06
8
[PATCH 0/5] mm/migrate: avoid device private invalidations
The goal for this series is to avoid device private memory TLB
invalidations when migrating a range of addresses from system
memory to device private memory and some of those pages have already
been migrated. The approach taken is to introduce a new mmu notifier
invalidation event type and use that in the device driver to skip
invalidation callbacks from migrate_vma_setup(). The device driver is
2020 Jul 13
9
[PATCH v2 0/5] mm/migrate: avoid device private invalidations
The goal for this series is to avoid device private memory TLB
invalidations when migrating a range of addresses from system
memory to device private memory and some of those pages have already
been migrated. The approach taken is to introduce a new mmu notifier
invalidation event type and use that in the device driver to skip
invalidation callbacks from migrate_vma_setup(). The device driver is