Displaying 17 results from an estimated 17 matches for "hmm_dmirror_cmd".
2019 Sep 11
6
[PATCH 0/4] HMM tests and minor fixes
These changes are based on Jason's latest hmm branch.
Patch 1 was previously posted here [1] but was dropped from the original
series. Hopefully, the tests will reduce concerns about edge conditions.
I'm sure more tests could be usefully added but I thought this was a good
starting point.
[1] https://lore.kernel.org/linux-mm/20190726005650.2566-6-rcampbell@nvidia.com/
Ralph Campbell
2020 Jan 13
9
[PATCH v6 0/6] mm/hmm/test: add self tests for HMM
This series adds new functions to the mmu interval notifier API to
allow device drivers with MMUs to dynamically mirror a process' page
tables based on device faults and invalidation callbacks. The Nouveau
driver is updated to use the extended API, and a set of stand-alone self-tests
is added to help validate and maintain correctness (a rough sketch of the
fault/invalidate flow follows this entry).
The patches are based on linux-5.5.0-rc6 and are for
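To make the flow described above concrete, here is a minimal sketch of the
mirror-on-fault / invalidate pattern, written against the current upstream
mmu_interval_notifier and hmm_range_fault() interfaces (which changed somewhat
after the 5.5 base of this series); the example_* names are illustrative and
are not taken from the selftest code.

#include <linux/hmm.h>
#include <linux/mm.h>
#include <linux/mmu_notifier.h>

static bool example_mirror_invalidate(struct mmu_interval_notifier *mni,
				      const struct mmu_notifier_range *range,
				      unsigned long cur_seq)
{
	/* Tear down device mappings for [range->start, range->end). */
	mmu_interval_set_seq(mni, cur_seq);
	return true;
}

static const struct mmu_interval_notifier_ops example_mirror_ops = {
	.invalidate = example_mirror_invalidate,
};

/* Handle one device fault by faulting in the CPU page and (re)mapping it. */
static int example_mirror_fault(struct mmu_interval_notifier *mni,
				unsigned long addr)
{
	unsigned long hmm_pfn;
	struct hmm_range range = {
		.notifier = mni,
		.start = addr & PAGE_MASK,
		.end = (addr & PAGE_MASK) + PAGE_SIZE,
		.hmm_pfns = &hmm_pfn,
		.default_flags = HMM_PFN_REQ_FAULT,
	};
	int ret;

again:
	range.notifier_seq = mmu_interval_read_begin(mni);
	mmap_read_lock(mni->mm);
	ret = hmm_range_fault(&range);
	mmap_read_unlock(mni->mm);
	if (ret == -EBUSY)
		goto again;		/* range was invalidated; retry */
	if (ret)
		return ret;

	/*
	 * Program the device page table from hmm_pfn here, under a driver
	 * lock shared with the invalidate callback, and only after
	 * mmu_interval_read_retry() confirms the snapshot is still valid.
	 */
	if (mmu_interval_read_retry(mni, range.notifier_seq))
		goto again;

	return 0;
}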
2020 Nov 06
12
[PATCH v3 0/6] mm/hmm/nouveau: add THP migration to migrate_vma_*
This series adds support for transparent huge page migration to
migrate_vma_*() and adds nouveau SVM and HMM selftests as consumers.
Earlier versions were posted previously [1] and [2].
The patches apply cleanly to the linux-mm 5.10.0-rc2 tree. There are a
lot of other THP patches being posted. I don't think there are any
semantic conflicts but there may be some merge conflicts depending on
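As background, the migrate_vma_*() call sequence that this series extends
looks roughly like the sketch below. Error handling and the device-side copy
are elided, the example_* wrapper is illustrative rather than code from the
series, and MIGRATE_VMA_SELECT_SYSTEM assumes a 5.9 or later kernel.

#include <linux/migrate.h>

static int example_migrate_range_to_device(struct vm_area_struct *vma,
					   unsigned long start,
					   unsigned long end,
					   void *pgmap_owner,
					   unsigned long *src,
					   unsigned long *dst)
{
	struct migrate_vma args = {
		.vma		= vma,
		.start		= start,
		.end		= end,
		.src		= src,
		.dst		= dst,
		.pgmap_owner	= pgmap_owner,
		.flags		= MIGRATE_VMA_SELECT_SYSTEM,
	};
	int ret;

	/* Collect and isolate the source pages for [start, end). */
	ret = migrate_vma_setup(&args);
	if (ret)
		return ret;

	/*
	 * Allocate device private pages, fill args.dst with their migrate
	 * PFNs, and copy the data over here.
	 */

	migrate_vma_pages(&args);	/* switch CPU page tables to the new pages */
	migrate_vma_finalize(&args);	/* drop references; restore any failures */
	return 0;
}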
2020 Jun 19
0
[PATCH 15/16] mm/hmm/test: add self tests for THP migration
...+ b/lib/test_hmm.c
@@ -92,6 +92,7 @@ struct dmirror_device {
unsigned long calloc;
unsigned long cfree;
struct page *free_pages;
+ struct page *free_huge_pages;
spinlock_t lock; /* protects the above */
};
@@ -443,6 +444,7 @@ static int dmirror_write(struct dmirror *dmirror, struct hmm_dmirror_cmd *cmd)
}
static bool dmirror_allocate_chunk(struct dmirror_device *mdevice,
+ bool is_huge,
struct page **ppage)
{
struct dmirror_chunk *devmem;
@@ -502,16 +504,39 @@ static bool dmirror_allocate_chunk(struct dmirror_device *mdevice,
pfn_first, pfn_last);
spin_lock(&m...
2020 Sep 02
10
[PATCH v2 0/7] mm/hmm/nouveau: add THP migration to migrate_vma_*
This series adds support for transparent huge page migration to
migrate_vma_*() and adds nouveau SVM and HMM selftests as consumers.
An earlier version was posted previously [1]. This version adds support for
splitting a THP midway through the migration process, which led to a number
of changes.
The patches apply cleanly to the current linux-mm tree. Since there
are a couple of patches in linux-mm from Dan
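The "splitting a THP midway" case mentioned above generally means falling back
to base pages when a huge page cannot be migrated as one unit. A hedged sketch
of that fallback, using only the generic split_huge_page() helper; nothing
here is taken from the series itself.

#include <linux/huge_mm.h>
#include <linux/pagemap.h>

/* Try to split a compound page so migration can retry page by page. */
static int example_split_for_migration(struct page *page)
{
	int ret;

	lock_page(page);		/* split_huge_page() requires the page lock */
	ret = split_huge_page(page);	/* 0 on success, -EBUSY etc. otherwise */
	unlock_page(page);
	return ret;
}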
2020 Mar 20
1
[PATCH 3/4] mm: simplify device private page handling in hmm_range_fault
... .default_flags = dmirror_hmm_flags[HMM_PFN_VALID] |
>> + (write ? dmirror_hmm_flags[HMM_PFN_WRITE] : 0),
>> + .dev_private_owner = dmirror->mdevice,
>> + };
>> + int ret = 0;
>
>> +static int dmirror_snapshot(struct dmirror *dmirror,
>> + struct hmm_dmirror_cmd *cmd)
>> +{
>> + struct mm_struct *mm = dmirror->mm;
>> + unsigned long start, end;
>> + unsigned long size = cmd->npages << PAGE_SHIFT;
>> + unsigned long addr;
>> + unsigned long next;
>> + uint64_t pfns[64];
>> + unsigned char perm[64];...
2020 Mar 17
4
[PATCH 3/4] mm: simplify device private page handling in hmm_range_fault
On 3/17/20 5:59 AM, Christoph Hellwig wrote:
> On Tue, Mar 17, 2020 at 09:47:55AM -0300, Jason Gunthorpe wrote:
>> I've been using v7 of Ralph's tester and it is working well - it has
>> DEVICE_PRIVATE support so I think it can test this flow too. Ralph are
>> you able?
>>
>> This hunk seems trivial enough to me, can we include it now?
>
> I can send
2020 Mar 19
0
[PATCH 3/4] mm: simplify device private page handling in hmm_range_fault
...if (!page)
> + return -ENOENT;
> +
> + tmp = kmap(page);
> + memcpy(ptr, tmp, PAGE_SIZE);
> + kunmap(page);
> +
> + ptr += PAGE_SIZE;
> + bounce->cpages++;
> + }
> +
> + return 0;
> +}
> +
> +static int dmirror_read(struct dmirror *dmirror, struct hmm_dmirror_cmd *cmd)
> +{
> + struct dmirror_bounce bounce;
> + unsigned long start, end;
> + unsigned long size = cmd->npages << PAGE_SHIFT;
> + int ret;
> +
> + start = cmd->addr;
> + end = start + size;
> + if (end < start)
> + return -EINVAL;
> +
> + ret = d...
2020 Mar 20
0
[PATCH 3/4] mm: simplify device private page handling in hmm_range_fault
...gs_mask should be 0.
> + .default_flags = dmirror_hmm_flags[HMM_PFN_VALID] |
> + (write ? dmirror_hmm_flags[HMM_PFN_WRITE] : 0),
> + .dev_private_owner = dmirror->mdevice,
> + };
> + int ret = 0;
> +static int dmirror_snapshot(struct dmirror *dmirror,
> + struct hmm_dmirror_cmd *cmd)
> +{
> + struct mm_struct *mm = dmirror->mm;
> + unsigned long start, end;
> + unsigned long size = cmd->npages << PAGE_SHIFT;
> + unsigned long addr;
> + unsigned long next;
> + uint64_t pfns[64];
> + unsigned char perm[64];
> + char __user *uptr;
>...
2020 Jul 23
0
[PATCH v4 5/6] mm/hmm/test: use the new migration invalidation
...them. */
- for (i = 0, ptr = buffer->ptr; i < size / sizeof(*ptr); ++i)
+ /* Fault half the pages back to system memory and check them. */
+ for (i = 0, ptr = buffer->ptr; i < size / (2 * sizeof(*ptr)); ++i)
+ ASSERT_EQ(ptr[i], i);
+
+ /* Migrate memory to the device again. */
+ ret = hmm_dmirror_cmd(self->fd, HMM_DMIRROR_MIGRATE, buffer, npages);
+ ASSERT_EQ(ret, 0);
+ ASSERT_EQ(buffer->cpages, npages);
+
+ /* Check what the device read. */
+ for (i = 0, ptr = buffer->mirror; i < size / sizeof(*ptr); ++i)
ASSERT_EQ(ptr[i], i);
hmm_buffer_free(buffer);
--
2.20.1
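The hmm_dmirror_cmd() helper used throughout these selftests is essentially a
thin ioctl() wrapper around the test_hmm character device. A hedged userspace
sketch of that pattern follows; the structure layout and field names follow
lib/test_hmm_uapi.h, but treat the exact fields and retry policy as
assumptions rather than a copy of the selftest code.

#include <stdint.h>
#include <errno.h>
#include <sys/ioctl.h>

struct hmm_dmirror_cmd {
	uint64_t addr;		/* start of the range in the target process */
	uint64_t ptr;		/* user buffer the driver reads from / writes to */
	uint64_t npages;	/* length of the range in pages */
	uint64_t cpages;	/* out: pages the driver actually handled */
	uint64_t faults;	/* out: device faults triggered */
};

static int example_dmirror_cmd(int fd, unsigned long request,
			       uint64_t addr, void *mirror, uint64_t npages)
{
	struct hmm_dmirror_cmd cmd = {
		.addr	= addr,
		.ptr	= (uint64_t)(uintptr_t)mirror,
		.npages	= npages,
	};
	int ret;

	do {
		ret = ioctl(fd, request, &cmd);
	} while (ret == -1 && (errno == EINTR || errno == EBUSY));

	return ret;
}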
2020 Jun 30
6
[PATCH v2 0/5] mm/hmm/nouveau: add PMD system memory mapping
The goal for this series is to introduce the hmm_range_fault() output
array flags HMM_PFN_PMD and HMM_PFN_PUD. This allows a device driver to
know that a given 4K PFN is actually mapped by the CPU using either a
PMD sized or PUD sized CPU page table entry and therefore the device
driver can safely map system memory using larger device MMU PTEs.
The series is based on 5.8.0-rc3 and is intended for
2020 Jun 19
22
[PATCH 00/16] mm/hmm/nouveau: THP mapping and migration
These patches apply to linux-5.8.0-rc1. Patches 1-3 should probably go
into 5.8; the others can be queued for 5.9. Patches 4-6 improve the HMM
self tests. Patches 7-8 prepare nouveau for the meat of this series, which
adds support and testing for compound page mapping of system memory
(patches 9-11) and compound page migration to device private memory
(patches 12-16). Since these changes are split
2020 Mar 19
2
[PATCH 3/4] mm: simplify device private page handling in hmm_range_fault
...>> + tmp = kmap(page);
>> + memcpy(ptr, tmp, PAGE_SIZE);
>> + kunmap(page);
>> +
>> + ptr += PAGE_SIZE;
>> + bounce->cpages++;
>> + }
>> +
>> + return 0;
>> +}
>> +
>> +static int dmirror_read(struct dmirror *dmirror, struct hmm_dmirror_cmd *cmd)
>> +{
>> + struct dmirror_bounce bounce;
>> + unsigned long start, end;
>> + unsigned long size = cmd->npages << PAGE_SHIFT;
>> + int ret;
>> +
>> + start = cmd->addr;
>> + end = start + size;
>> + if (end < start)
>>...
2020 Jul 01
8
[PATCH v3 0/5] mm/hmm/nouveau: add PMD system memory mapping
The goal for this series is to introduce the hmm_pfn_to_map_order()
function. This allows a device driver to know that a given 4K PFN is
actually mapped by the CPU using a larger sized CPU page table entry and
therefore the device driver can safely map system memory using larger
device MMU PTEs.
The series is based on 5.8.0-rc3 and is intended for Jason Gunthorpe's
hmm tree. These were
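A short sketch of how a driver might consume the new helper, assuming the
hmm_pfn_to_map_order() and hmm_pfn_to_page() interfaces from
include/linux/hmm.h; the example_* function is illustrative only.

#include <linux/hmm.h>

/* Inspect one hmm_range_fault() result and report its CPU mapping size. */
static void example_note_map_order(const struct hmm_range *range,
				   unsigned long i)
{
	unsigned long hmm_pfn = range->hmm_pfns[i];
	struct page *page;
	unsigned int order;

	if (!(hmm_pfn & HMM_PFN_VALID))
		return;

	page = hmm_pfn_to_page(hmm_pfn);
	order = hmm_pfn_to_map_order(hmm_pfn);

	/*
	 * order 0 means a PTE mapping; a PMD mapping reports
	 * PMD_SHIFT - PAGE_SHIFT (9 on x86-64). The device may map up to
	 * PAGE_SIZE << order of the naturally aligned range around this
	 * PFN with a single large device PTE.
	 */
	pr_debug("page %p mapped by the CPU with order %u\n", page, order);
}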
2020 May 08
11
[PATCH 0/6] nouveau/hmm: add support for mapping large pages
hmm_range_fault() returns an array of page frame numbers and flags for
how the pages are mapped in the requested process' page tables. The PFN
can be used to get the struct page with hmm_pfn_to_page(), and the page size
order can be determined with compound_order(page), but even when the page is
larger than order 0 (PAGE_SIZE) there is no indication of whether the CPU is
actually mapping it with a larger page table entry. To
2020 Jul 21
6
[PATCH v3 0/5] mm/migrate: avoid device private invalidations
The goal for this series is to avoid device private memory TLB
invalidations when migrating a range of addresses from system
memory to device private memory and some of those pages have already
been migrated. The approach taken is to introduce a new mmu notifier
invalidation event type and use that in the device driver to skip
invalidation callbacks from migrate_vma_setup(). The device driver is
2020 Jul 23
9
[PATCH v4 0/6] mm/migrate: avoid device private invalidations
The goal for this series is to avoid device private memory TLB
invalidations when migrating a range of addresses from system
memory to device private memory and some of those pages have already
been migrated. The approach taken is to introduce a new mmu notifier
invalidation event type and use that in the device driver to skip
invalidation callbacks from migrate_vma_setup(). The device driver is
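The skipping described in this cover letter ends up as a small check at the
top of the driver's interval-notifier invalidate callback. A hedged sketch
follows; it assumes the MMU_NOTIFY_MIGRATE event type and the owner pointer
carried by struct mmu_notifier_range (originally named migrate_pgmap_owner),
and struct example_mirror is a hypothetical driver structure, not something
taken from the series.

#include <linux/kernel.h>
#include <linux/mmu_notifier.h>

struct example_mirror {
	struct mmu_interval_notifier	notifier;
	void				*device_owner;	/* matches migrate_vma.pgmap_owner */
};

static bool example_skip_own_migrations(struct mmu_interval_notifier *mni,
					const struct mmu_notifier_range *range,
					unsigned long cur_seq)
{
	struct example_mirror *mirror =
		container_of(mni, struct example_mirror, notifier);

	/*
	 * A migration into our own device memory already updates the device
	 * page tables as part of the migration, so skip the extra device
	 * TLB invalidation for those ranges.
	 */
	if (range->event == MMU_NOTIFY_MIGRATE &&
	    range->owner == mirror->device_owner)
		return true;

	mmu_interval_set_seq(mni, cur_seq);
	/* ...invalidate device mappings for [range->start, range->end)... */
	return true;
}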