Displaying 8 results from an estimated 8 matches for "driver_ops".
2007 Jun 07
4
[PATCH RFC 0/3] Virtio draft II
Hi again all,
It turns out that networking really wants ordered requests, which the
previous patches didn't allow. This patch changes it to a callback
mechanism; kudos to Avi.
The downside is that locking is more complicated, and after a few dead
ends I implemented the simplest solution: the struct virtio_device
contains the spinlock to use, and it's held when your callbacks get
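To make the locking rule in this excerpt concrete, here is a minimal sketch of a per-device spinlock held across driver callbacks; the field and callback names are assumptions for illustration, not the actual draft II API:

    /*
     * Minimal sketch, not the real draft II API: a spinlock embedded in
     * struct virtio_device that the transport holds while invoking the
     * driver's callback, giving the ordered-callback semantics described
     * above.
     */
    #include <linux/spinlock.h>

    struct virtio_device {
            spinlock_t lock;                        /* assumed field name */
            void (*driver_cb)(struct virtio_device *vdev);  /* assumed hook */
    };

    static void virtio_notify(struct virtio_device *vdev)
    {
            unsigned long flags;

            spin_lock_irqsave(&vdev->lock, flags);
            vdev->driver_cb(vdev);  /* callback runs with the lock held */
            spin_unlock_irqrestore(&vdev->lock, flags);
    }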
2007 Jul 06
6
[RFC 0/4] Using a generic bus_type for virtio
This is a subject that came up in the virtio BOF session
at OLS. I decided to go forward and implement something
that I like, based on the latest virtio proposal at the
time, which was draft III.
It's not a drop-in replacement, because it's missing a
host implementation. I first started my own, which is
not done yet, but wanted to do one for lguest and one
for emulated PCI next. It's
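For readers unfamiliar with the mechanism the RFC proposes, registering a bus_type with the Linux driver core looks roughly like this; the bus name and virtio_bus_match() are illustrative stand-ins, not code from the RFC:

    #include <linux/device.h>
    #include <linux/init.h>

    /* Hypothetical match function; real code compares device and driver IDs. */
    static int virtio_bus_match(struct device *dev, struct device_driver *drv)
    {
            return 1;
    }

    static struct bus_type virtio_bus = {
            .name  = "virtio",
            .match = virtio_bus_match,
    };

    static int __init virtio_bus_init(void)
    {
            return bus_register(&virtio_bus);
    }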
2019 Nov 12
0
[PATCH v3 13/14] mm/hmm: remove hmm_mirror and related
...const struct hmm_update *update);
- };
-
-The device driver must perform the update action to the range (mark range
-read only, or fully unmap, etc.). The device must complete the update before
-the driver callback returns.
+registration of a mmu_interval_notifier::
+
+  mni->ops = &driver_ops;
+  int mmu_interval_notifier_insert(struct mmu_interval_notifier *mni,
+                                   unsigned long start, unsigned long length,
+                                   struct mm_struct *mm);
+
+During the driver_ops->invalidate() callback the device driver must perform
+the update action to the range (mark range read only, ...
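As a hedged sketch of how a driver would wire this up: the insert call below follows the prototype quoted in the diff, while the shape of driver_ops and its invalidate() callback is assumed from the interface as later merged and may not match this v3 posting exactly:

    /* Invalidate callback; prototype assumed from the merged interface. */
    static bool my_invalidate(struct mmu_interval_notifier *mni,
                              const struct mmu_notifier_range *range,
                              unsigned long cur_seq)
    {
            /* Mark device mappings covering [range->start, range->end) stale. */
            return true;
    }

    static const struct mmu_interval_notifier_ops driver_ops = {
            .invalidate = my_invalidate,
    };

    static int my_register(struct mmu_interval_notifier *mni,
                           struct mm_struct *mm,
                           unsigned long start, unsigned long length)
    {
            mni->ops = &driver_ops;
            return mmu_interval_notifier_insert(mni, start, length, mm);
    }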
2019 Nov 12
20
[PATCH hmm v3 00/14] Consolidate the mmu notifier interval_tree and locking
From: Jason Gunthorpe <jgg at mellanox.com>
8 of the drivers using mmu_notifier (i915_gem, radeon_mn, umem_odp, hfi1,
scif_dma, vhost, gntdev, hmm) share a common pattern where
they only use invalidate_range_start/end and immediately check the
invalidating range against some driver data structure to tell whether the
driver is interested. Half of them use an interval_tree, the others
2019 Oct 28
32
[PATCH v2 00/15] Consolidate the mmu notifier interval_tree and locking
From: Jason Gunthorpe <jgg at mellanox.com>
8 of the drivers using mmu_notifier (i915_gem, radeon_mn, umem_odp, hfi1,
scif_dma, vhost, gntdev, hmm) share a common pattern where
they only use invalidate_range_start/end and immediately check the
invalidating range against some driver data structure to tell whether the
driver is interested. Half of them use an interval_tree, the others
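The per-driver pattern these cover letters describe looks roughly like the sketch below; my_mn and my_tree_overlaps() are hypothetical stand-ins for the driver-private data structure each driver keeps:

    #include <linux/mmu_notifier.h>

    struct my_mn {
            struct mmu_notifier mn;
            /* driver-private interval tree (or list) of tracked ranges */
    };

    /* Hypothetical helper: does [start, end) overlap any tracked range? */
    static bool my_tree_overlaps(struct my_mn *m, unsigned long start,
                                 unsigned long end);

    static int my_invalidate_range_start(struct mmu_notifier *mn,
                                         const struct mmu_notifier_range *range)
    {
            struct my_mn *m = container_of(mn, struct my_mn, mn);

            /* Ignore invalidations that do not touch anything we track. */
            if (!my_tree_overlaps(m, range->start, range->end))
                    return 0;

            /* ... tear down device mappings covering the range ... */
            return 0;
    }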