Displaying 5 results from an estimated 5 matches for "next_bo".
2019 Oct 29 · 0 replies · [PATCH v2 13/15] drm/amdgpu: Use mmu_range_insert instead of hmm_mirror
...from a work item
> - */
> -static void amdgpu_mn_destroy(struct work_struct *work)
> -{
> - struct amdgpu_mn *amn = container_of(work, struct amdgpu_mn, work);
> - struct amdgpu_device *adev = amn->adev;
> - struct amdgpu_mn_node *node, *next_node;
> - struct amdgpu_bo *bo, *next_bo;
> -
> - mutex_lock(&adev->mn_lock);
> - down_write(&amn->lock);
> - hash_del(&amn->node);
> - rbtree_postorder_for_each_entry_safe(node, next_node,
> - &amn->objects.rb_root, it.rb) {
> - list_for_each_entry_safe(bo, next_bo, &node->...
2019 Oct 28 · 2 replies · [PATCH v2 13/15] drm/amdgpu: Use mmu_range_insert instead of hmm_mirror
...- *
- * Lazy destroys the notifier from a work item
- */
-static void amdgpu_mn_destroy(struct work_struct *work)
-{
- struct amdgpu_mn *amn = container_of(work, struct amdgpu_mn, work);
- struct amdgpu_device *adev = amn->adev;
- struct amdgpu_mn_node *node, *next_node;
- struct amdgpu_bo *bo, *next_bo;
-
- mutex_lock(&adev->mn_lock);
- down_write(&amn->lock);
- hash_del(&amn->node);
- rbtree_postorder_for_each_entry_safe(node, next_node,
- &amn->objects.rb_root, it.rb) {
- list_for_each_entry_safe(bo, next_bo, &node->bos, mn_list) {
- bo->mn = NU...
2019 Oct 28 · 32 replies · [PATCH v2 00/15] Consolidate the mmu notifier interval_tree and locking
From: Jason Gunthorpe <jgg at mellanox.com>
8 of the mmu_notifier using drivers (i915_gem, radeon_mn, umem_odp, hfi1,
scif_dma, vhost, gntdev, hmm) drivers are using a common pattern where
they only use invalidate_range_start/end and immediately check the
invalidating range against some driver data structure to tell if the
driver is interested. Half of them use an interval_tree, the others
2019 Nov 12 · 20 replies · [PATCH hmm v3 00/14] Consolidate the mmu notifier interval_tree and locking
From: Jason Gunthorpe <jgg at mellanox.com>
8 of the mmu_notifier using drivers (i915_gem, radeon_mn, umem_odp, hfi1,
scif_dma, vhost, gntdev, hmm) drivers are using a common pattern where
they only use invalidate_range_start/end and immediately check the
invalidating range against some driver data structure to tell if the
driver is interested. Half of them use an interval_tree, the others