search for: notifif

Displaying 9 results from an estimated 13 matches for "notifif".

2019 Aug 01
3
[PATCH V2 7/9] vhost: do not use RCU to synchronize MMU notifier with worker
... > > > That would be very helpful. > > > > > > > > > > IMHO this gets the whole thing backwards, the common pattern is to > > > > protect the 'shadow pte' data with a seqlock (usually open coded), > > > > such that the mmu notifier side has the write side of that lock and > > > > the read side is consumed by the thread accessing or updating the SPTE. > > > Yes, I've considered something like that. But the problem is, mmu notifier > > > (writer) needs to wait for the vhost wor...
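The open-coded seqlock pattern described in this exchange looks roughly like the minimal sketch below. All names here (spte_seq, shadow_pte, the sketch_* helpers) are invented for illustration and are not vhost's actual code: the MMU notifier side owns the write side of a seqcount, and the worker thread consumes the read side, retrying until the sequence is stable.

/*
 * Minimal sketch of the pattern described above, with invented
 * names: the mmu notifier takes the write side of a seqcount,
 * the worker thread reads and retries on the read side.
 */
#include <linux/seqlock.h>

static seqcount_t spte_seq;	/* protects shadow_pte */
static u64 shadow_pte;		/* the 'shadow pte' data */

/* mmu notifier invalidation path: write side */
static void sketch_invalidate_spte(u64 new_pte)
{
	write_seqcount_begin(&spte_seq);
	shadow_pte = new_pte;		/* zap/update the SPTE */
	write_seqcount_end(&spte_seq);
}

/* worker thread: read side, retry until a stable value is seen */
static u64 sketch_read_spte(void)
{
	unsigned int seq;
	u64 pte;

	do {
		seq = read_seqcount_begin(&spte_seq);
		pte = shadow_pte;
	} while (read_seqcount_retry(&spte_seq, seq));

	return pte;
}

The sketch makes the asymmetry visible: the writer never waits for readers (readers retry instead), which is precisely why it gives vhost no point at which it knows all readers have finished.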
2019 Jul 31
2
[PATCH V2 7/9] vhost: do not use RCU to synchronize MMU notifier with worker
...se nearly every use of this API needs it. > > > That would be very helpful. > > > > > > IMHO this gets the whole thing backwards, the common pattern is to > > protect the 'shadow pte' data with a seqlock (usually open coded), > > such that the mmu notifier side has the write side of that lock and > > the read side is consumed by the thread accessing or updating the SPTE. > > > Yes, I've considered something like that. But the problem is, mmu notifier > (writer) needs to wait for the vhost worker to finish the read before it c...
2019 Jul 31
2
[PATCH V2 7/9] vhost: do not use RCU to synchronize MMU notifier with worker
...qlock. We've been talking about providing this as some core service from mmu notifiers because nearly every use of this API needs it. IMHO this gets the whole thing backwards, the common pattern is to protect the 'shadow pte' data with a seqlock (usually open coded), such that the mmu notifier side has the write side of that lock and the read side is consumed by the thread accessing or updating the SPTE. > Reported-by: Michael S. Tsirkin <mst at redhat.com> > Fixes: 7f466032dc9e ("vhost: access vq metadata through kernel virtual address") > Signed-off-by: Jas...
2019 Aug 02
5
[PATCH V2 7/9] vhost: do not use RCU to synchronize MMU notifier with worker
...r relies on srcu_read_lock(), which still leads to > little performance improvement > 3) mutex: a possible issue is the need to wait for the page to be swapped in (is > this unacceptable?), another issue is that we need to hold the vq lock during > range overlap check. I have a feeling that mmu notifiers cannot safely become dependent on progress of swap without causing deadlock. You probably should avoid this. > > And, again, you can't re-invent a spinlock with open coding and get > > something better. > > So the question is if waiting for swap is considered to be unsuit...
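Option 2) above (SRCU) would look roughly like the following hypothetical sketch; vq_meta_srcu and the sketch_* functions are invented names, not the actual patch. The worker brackets every metadata access with srcu_read_lock()/srcu_read_unlock(), and the notifier calls synchronize_srcu(), which may sleep, to wait out in-flight readers.

/*
 * Hypothetical sketch of the SRCU option: the notifier can
 * sleep in synchronize_srcu() until all readers drain.
 */
#include <linux/srcu.h>

DEFINE_STATIC_SRCU(vq_meta_srcu);

/* vhost worker: read side */
static void sketch_worker_access(void)
{
	int idx;

	idx = srcu_read_lock(&vq_meta_srcu);
	/* ... access the vq metadata mapping ... */
	srcu_read_unlock(&vq_meta_srcu, idx);
}

/* mmu notifier: tear down, then wait for readers to drain */
static void sketch_notifier_invalidate(void)
{
	/* ... clear the metadata mapping ... */
	synchronize_srcu(&vq_meta_srcu);	/* may sleep */
	/* ... no readers left: safe to e.g. set dirty pages ... */
}

The srcu_read_lock()/srcu_read_unlock() pair on every worker access is the read-side overhead behind the "little performance improvement" remark above.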
2019 Aug 05
0
[PATCH V2 7/9] vhost: do not use RCU to synchronize MMU notifier with worker
...ck(), which still leads to >> little performance improvement > >> 3) mutex: a possible issue is the need to wait for the page to be swapped in (is >> this unacceptable?), another issue is that we need to hold the vq lock during >> range overlap check. > I have a feeling that mmu notifiers cannot safely become dependent on > progress of swap without causing deadlock. You probably should avoid > this. Yes, so that's why I tried to synchronize the critical region by myself. >>> And, again, you can't re-invent a spinlock with open coding and get >>>...
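"Synchronize the critical region by myself" presumably means an open-coded reader count along the lines of the simplified, hypothetical sketch below (map_users and both helpers are invented). This is also the shape of thing the "re-invent a spinlock" warning targets: the notifier ends up busy-waiting on readers.

#include <linux/atomic.h>

static atomic_t map_users = ATOMIC_INIT(0);

/* worker: mark the mapping busy while dereferencing it */
static void sketch_worker(void)
{
	atomic_inc(&map_users);
	smp_mb__after_atomic();
	/* ... recheck the mapping is still valid, then use it ... */
	atomic_dec(&map_users);
}

/* notifier: invalidate, then spin until readers drain */
static void sketch_invalidate(void)
{
	/* ... clear the mapping so new readers bail out ... */
	while (atomic_read(&map_users))
		cpu_relax();
	/* ... now e.g. set dirty pages with no readers left ... */
}

A real version needs the reader's recheck-after-increment to pair correctly with the notifier's clear-before-spin, which is exactly the kind of subtlety the review is cautioning about.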
2019 Aug 02
0
[PATCH V2 7/9] vhost: do not use RCU to synchronize MMU notifier with worker
...t;>>> That would be very helpful. >>>> >>>> >>>>> IMHO this gets the whole thing backwards, the common pattern is to >>>>> protect the 'shadow pte' data with a seqlock (usually open coded), >>>>> such that the mmu notififer side has the write side of that lock and >>>>> the read side is consumed by the thread accessing or updating the SPTE. >>>> Yes, I've considered something like that. But the problem is, mmu notifier >>>> (writer) need to wait for the vhost worker to fini...
2019 Aug 01
0
[PATCH V2 7/9] vhost: do not use RCU to synchronize MMU notifier with worker
...arly every use of this API needs it. >> >> That would be very helpful. >> >> >>> IMHO this gets the whole thing backwards, the common pattern is to >>> protect the 'shadow pte' data with a seqlock (usually open coded), >>> such that the mmu notifier side has the write side of that lock and >>> the read side is consumed by the thread accessing or updating the SPTE. >> >> Yes, I've considered something like that. But the problem is, mmu notifier >> (writer) needs to wait for the vhost worker to finish the read bef...
2019 Jul 31
0
[PATCH V2 7/9] vhost: do not use RCU to synchronize MMU notifier with worker
...ome core service from mmu > notifiers because nearly every use of this API needs it. That would be very helpful. > > IMHO this gets the whole thing backwards, the common pattern is to > protect the 'shadow pte' data with a seqlock (usually open coded), > such that the mmu notifier side has the write side of that lock and > the read side is consumed by the thread accessing or updating the SPTE. Yes, I've considered something like that. But the problem is, mmu notifier (writer) needs to wait for the vhost worker to finish the read before it can do things like setti...
2019 Jul 31
14
[PATCH V2 0/9] Fixes for metadata acceleration
Hi all: This series tries to fix several issues introduced by the metadata acceleration series. Please review. Changes from V1: - Try not to use RCU to synchronize MMU notifier with vhost worker - set dirty pages after no readers - return -EAGAIN only when we find the range overlaps with metadata Jason Wang (9): vhost: don't set uaddr for invalid address vhost: validate MMU notifier
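The "-EAGAIN only when the range overlaps metadata" change from the cover letter above would look roughly like this hypothetical sketch; ranges_overlap, meta_start/meta_end, and the simplified callback signature are invented for illustration. Invalidations that miss the metadata proceed normally, and only a non-blockable invalidation that actually hits it asks the caller to retry.

#include <linux/errno.h>
#include <linux/types.h>

/* invented fields for illustration only */
static unsigned long meta_start, meta_end;

static bool ranges_overlap(unsigned long s1, unsigned long e1,
			   unsigned long s2, unsigned long e2)
{
	return s1 < e2 && s2 < e1;
}

static int sketch_invalidate_range_start(unsigned long start,
					 unsigned long end,
					 bool blockable)
{
	if (!ranges_overlap(start, end, meta_start, meta_end))
		return 0;		/* none of our metadata in range */
	if (!blockable)
		return -EAGAIN;		/* cannot sleep here, ask to retry */
	/* ... tear down the metadata mapping ... */
	return 0;
}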