search for: ranges_lock

Displaying 9 results from an estimated 9 matches for "ranges_lock".

2019 Nov 12
0
[PATCH v3 13/14] mm/hmm: remove hmm_mirror and related
...otifier: mmu notifier to track updates to CPU page table
- * @mirrors_sem: read/write semaphore protecting the mirrors list
- * @wq: wait queue for user waiting on a range invalidation
- * @notifiers: count of active mmu notifiers
- */
-struct hmm {
-        struct mmu_notifier     mmu_notifier;
-        spinlock_t              ranges_lock;
-        struct list_head        ranges;
-        struct list_head        mirrors;
-        struct rw_semaphore     mirrors_sem;
-        wait_queue_head_t       wq;
-        long                    notifiers;
-};
-
 /*
  * hmm_pfn_flag_e - HMM flag enums
  *
@@ -143,9 +120,8 @@ enum hmm_pfn_value_e {
 /*
  * struct hmm_range - track invalidation lock on virtual address r...
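For orientation, the ranges_lock deleted above is the spinlock that serialised the per-mm list of active hmm_range trackers against the mmu notifier path. A minimal sketch of that pre-removal arrangement, pieced together from the hunks quoted in these search results; the layout is simplified and the helper name hmm_invalidate_sketch is invented for illustration, so treat it as a sketch rather than the exact kernel code:

/*
 * Sketch only: simplified from the struct hmm and notifier hunks
 * quoted in these search results, not the exact kernel code.
 */
#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/mmu_notifier.h>

struct hmm_range {
        struct list_head list;          /* linked on hmm->ranges */
        unsigned long    start, end;    /* virtual address span */
        bool             valid;         /* cleared by invalidations */
};

struct hmm {
        struct mmu_notifier mmu_notifier;
        spinlock_t          ranges_lock;    /* protects ranges + notifiers */
        struct list_head    ranges;         /* active hmm_range trackers */
        long                notifiers;      /* invalidations in flight */
};

/*
 * Invalidation side: under ranges_lock, mark every range overlapping
 * [start, end) as no longer valid so a racing snapshot gets retried.
 */
static void hmm_invalidate_sketch(struct hmm *hmm, unsigned long start,
                                  unsigned long end)
{
        struct hmm_range *range;
        unsigned long flags;

        spin_lock_irqsave(&hmm->ranges_lock, flags);
        hmm->notifiers++;
        list_for_each_entry(range, &hmm->ranges, list) {
                if (end < range->start || start >= range->end)
                        continue;
                range->valid = false;
        }
        spin_unlock_irqrestore(&hmm->ranges_lock, flags);
}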
2019 Jul 01
30
dev_pagemap related cleanups v4
Hi Dan, Jérôme and Jason, below is a series that cleans up the dev_pagemap interface so that it is more easily usable, which removes the need to wrap it in hmm and thus allows us to kill a lot of code. Note: this series is on top of Linux 5.2-rc6 and has some minor conflicts with the hmm tree that are easy to resolve.
Diffstat summary: 34 files changed, 379 insertions(+), 1016 deletions(-)
Git
2019 Jul 01
0
[PATCH 18/22] mm: return valid info from hmm_range_unregister
...turns if the range was still valid at the time of unregistering.
  */
-void hmm_range_unregister(struct hmm_range *range)
+bool hmm_range_unregister(struct hmm_range *range)
 {
         struct hmm *hmm = range->hmm;
         unsigned long flags;
+        bool ret = range->valid;
         spin_lock_irqsave(&hmm->ranges_lock, flags);
         list_del_init(&range->list);
@@ -941,6 +944,7 @@ void hmm_range_unregister(struct hmm_range *range)
          */
         range->valid = false;
         memset(&range->hmm, POISON_INUSE, sizeof(range->hmm));
+        return ret;
 }
 EXPORT_SYMBOL(hmm_range_unregister);
-- 
2.20.1
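The hunk above changes hmm_range_unregister() from void to bool, reporting whether the range was still valid when it was taken off the list. A hypothetical caller-side use of that return value; only the return-value semantics come from the patch, while the demo_ wrapper and the surrounding driver logic are assumptions for illustration:

#include <linux/errno.h>
#include <linux/hmm.h>

/*
 * Hypothetical caller sketch: assumes the bool-returning
 * hmm_range_unregister() introduced by the patch above.
 */
static int demo_commit_snapshot(struct hmm_range *range)
{
        /*
         * ... driver has copied the snapshotted PFNs into its own
         * page tables while range->valid was being tracked ...
         */

        /*
         * Unregistering now reports whether the range was still valid
         * when it was removed from hmm->ranges, i.e. whether an
         * invalidation raced with the snapshot above.
         */
        if (!hmm_range_unregister(range))
                return -EBUSY;  /* stale snapshot: caller should retry */

        return 0;
}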
2019 Jul 03
0
[PATCH 1/5] mm: return valid info from hmm_range_unregister
...was still valid at the time of unregistering,
+ * else %false.
  */
-void hmm_range_unregister(struct hmm_range *range)
+bool hmm_range_unregister(struct hmm_range *range)
 {
         struct hmm *hmm = range->hmm;
         unsigned long flags;
+        bool ret = range->valid;
         spin_lock_irqsave(&hmm->ranges_lock, flags);
         list_del_init(&range->list);
@@ -938,6 +942,7 @@ void hmm_range_unregister(struct hmm_range *range)
          */
         range->valid = false;
         memset(&range->hmm, POISON_INUSE, sizeof(range->hmm));
+        return ret;
 }
 EXPORT_SYMBOL(hmm_range_unregister);
-- 
2.20.1
2019 Nov 12
20
[PATCH hmm v3 00/14] Consolidate the mmu notifier interval_tree and locking
From: Jason Gunthorpe <jgg at mellanox.com>
8 of the mmu_notifier-using drivers (i915_gem, radeon_mn, umem_odp, hfi1, scif_dma, vhost, gntdev, hmm) are using a common pattern where they only use invalidate_range_start/end and immediately check the invalidating range against some driver data structure to tell if the driver is interested. Half of them use an interval_tree, the others
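The shape being described, roughly: a notifier whose invalidate_range_start does nothing except look the invalidated span up in a driver-side structure. A hedged sketch of the interval_tree variant follows; the demo_ names, the lock choice and the tree layout are placeholders, not code from this series:

/*
 * Sketch of the per-driver pattern the cover letter describes; the
 * series aims to consolidate exactly this kind of bookkeeping into
 * common mmu_notifier code.
 */
#include <linux/kernel.h>
#include <linux/mmu_notifier.h>
#include <linux/interval_tree.h>
#include <linux/spinlock.h>

struct demo_mirror {
        struct mmu_notifier     mn;
        struct rb_root_cached   itree;  /* spans the driver mirrors */
        spinlock_t              lock;   /* protects itree */
};

static int demo_invalidate_range_start(struct mmu_notifier *mn,
                                       const struct mmu_notifier_range *range)
{
        struct demo_mirror *m = container_of(mn, struct demo_mirror, mn);
        struct interval_tree_node *node;

        spin_lock(&m->lock);
        /* Is the driver interested in any part of [start, end)? */
        for (node = interval_tree_iter_first(&m->itree, range->start,
                                             range->end - 1);
             node;
             node = interval_tree_iter_next(node, range->start,
                                            range->end - 1)) {
                /* ... mark the overlapping mirror stale / queue teardown ... */
        }
        spin_unlock(&m->lock);

        return 0;
}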
2019 Jul 23
4
[PATCH] mm/hmm: replace hmm_update with mmu_notifier_range
...range_start(struct mmu_notifier *mn,
         if (!kref_get_unless_zero(&hmm->kref))
                 return 0;
-        update.start = nrange->start;
-        update.end = nrange->end;
-        update.event = HMM_UPDATE_INVALIDATE;
-        update.blockable = mmu_notifier_range_blockable(nrange);
-
         spin_lock_irqsave(&hmm->ranges_lock, flags);
         hmm->notifiers++;
         list_for_each_entry(range, &hmm->ranges, list) {
-                if (update.end < range->start || update.start >= range->end)
+                if (nrange->end < range->start || nrange->start >= range->end)
                         continue;
                 range->valid = false;
@@ -1...
2019 Jul 03
8
hmm_range_fault related fixes and legacy API removal
Hi Jérôme, Ben and Jason, below is a series against the hmm tree which fixes up the mmap_sem locking in nouveau and, while at it, also removes leftover legacy HMM APIs that are only used by nouveau.
2019 Oct 28
32
[PATCH v2 00/15] Consolidate the mmu notifier interval_tree and locking
From: Jason Gunthorpe <jgg at mellanox.com>
8 of the mmu_notifier-using drivers (i915_gem, radeon_mn, umem_odp, hfi1, scif_dma, vhost, gntdev, hmm) are using a common pattern where they only use invalidate_range_start/end and immediately check the invalidating range against some driver data structure to tell if the driver is interested. Half of them use an interval_tree, the others
2019 Jul 26
13
[PATCH v2 0/7] mm/hmm: more HMM clean up
Here are seven more patches for things I found to clean up. This was based on top of Christoph's seven patches: "hmm_range_fault related fixes and legacy API removal v3". I assume this will go into Jason's tree since there will likely be more HMM changes in this cycle.
Changes from v1 to v2:
- Added AMD GPU to hmm_update removal.
- Added 2 patches from Christoph.
- Added 2 patches as