search for: seqcounts

Displaying 20 results from an estimated 70 matches for "seqcounts".

Did you mean: seqcount
2017 Feb 10
2
[PATCH 2/2] x86/vdso: Add VCLOCK_HVCLOCK vDSO clock read method
Stephen Hemminger <sthemmin at microsoft.com> writes: > Why not use existing seqlocks? > To be honest I don't quite understand how we could use it -- the sequence locking here is done against the page updated by the hypervisor, we're not creating new structures (so I don't understand how we could use struct seqcount which we don't have) but I may be
2017 Feb 10
2
[PATCH 2/2] x86/vdso: Add VCLOCK_HVCLOCK vDSO clock read method
Since the sequence count algorithm is done by the hypervisor, better to not reuse seqcount. Still concerned that the code is racy. -----Original Message----- From: Thomas Gleixner [mailto:tglx at linutronix.de] Sent: Friday, February 10, 2017 4:28 AM To: Vitaly Kuznetsov <vkuznets at redhat.com> Cc: Stephen Hemminger <sthemmin at microsoft.com>; x86 at kernel.org; Andy Lutomirski <luto at
2020 May 07
1
[PATCH v2] virtio_net: fix lockdep warning on 32 bit
When we fill up a receive VQ, try_fill_recv currently tries to count kicks using a 64 bit stats counter. Turns out, on a 32 bit kernel that uses a seqcount. Sequence counts are "lock" constructs where you need to make sure that writers are serialized. In turn, this means that we mustn't run two try_fill_recv concurrently. Which of course we don't. We do run try_fill_recv
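For reference, the construct described here is the u64_stats_sync helper, which wraps a seqcount on 32 bit kernels. A minimal sketch of the pattern follows; the names rxq_stats, count_kick and read_kicks are illustrative, not the actual virtio_net code.

#include <linux/u64_stats_sync.h>

struct rxq_stats {
	struct u64_stats_sync syncp;	/* seqcount on 32 bit, empty on 64 bit */
	u64 kicks;
};

/* Writer side: u64_stats_update_begin() bumps the embedded seqcount, so
 * concurrent writers must be serialized externally (here by the rule that
 * only one try_fill_recv runs per queue at a time).
 * u64_stats_init(&stats->syncp) must have been called once at setup time. */
static void count_kick(struct rxq_stats *stats)
{
	u64_stats_update_begin(&stats->syncp);
	stats->kicks++;
	u64_stats_update_end(&stats->syncp);
}

/* Reader side: lock-free retry loop, works the same on 32 and 64 bit. */
static u64 read_kicks(struct rxq_stats *stats)
{
	unsigned int start;
	u64 kicks;

	do {
		start = u64_stats_fetch_begin(&stats->syncp);
		kicks = stats->kicks;
	} while (u64_stats_fetch_retry(&stats->syncp, start));

	return kicks;
}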
2020 Jun 22
0
[RFC v5 02/10] drm/vblank: Add vblank works
Add some kind of vblank workers. The interface is similar to regular delayed works, and is mostly based off kthread_work. It allows for scheduling delayed works that execute once a particular vblank sequence has passed. It also allows for accurate flushing of scheduled vblank works - in that flushing waits for both the vblank sequence and job execution to complete, or for the work to get cancelled
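As a rough illustration of how a driver would consume such an interface, here is a sketch based on the drm_vblank_work API as it later landed upstream; the RFC may differ in detail, and the driver-side names (my_crtc_state, my_unpin_work_fn, my_schedule_unpin) are invented for the example.

#include <drm/drm_vblank.h>
#include <drm/drm_vblank_work.h>

struct my_crtc_state {
	struct drm_vblank_work unpin_work;	/* hypothetical driver state */
};

static void my_unpin_work_fn(struct kthread_work *base)
{
	/* Runs from the per-CRTC worker once the target vblank has passed. */
}

static void my_schedule_unpin(struct my_crtc_state *state, struct drm_crtc *crtc)
{
	u64 target = drm_crtc_vblank_count(crtc) + 1;

	drm_vblank_work_init(&state->unpin_work, crtc, my_unpin_work_fn);
	/* Execute the work once vblank 'target' has passed; the last argument
	 * controls whether a missed target defers to the next vblank. */
	drm_vblank_work_schedule(&state->unpin_work, target, false);
}

static void my_flush(struct my_crtc_state *state)
{
	/* Waits for both the vblank to arrive and the work function to finish. */
	drm_vblank_work_flush(&state->unpin_work);
}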
2014 Jan 11
3
[PATCH net-next v2 4/4] virtio-net: initial debugfs support, export mergeable rx buffer size
Hi Jason, Michael Sorry for the delay in response. Jason, I agree this patch ended up being larger than expected. The major implementation parts are: (1) Setup directory structure (driver/per-netdev/rx-queue directories) (2) Network device renames (optional, so debugfs dir has the right name) (3) Support resizing the # of RX queues (optional - we could just export max_queue_pairs files and
2020 May 06
2
[PATCH] virtio_net: fix lockdep warning on 32 bit
When we fill up a receive VQ, try_fill_recv currently tries to count kicks using a 64 bit stats counter. Turns out, on a 32 bit kernel that uses a seqcount. Sequence counts are "lock" constructs where you need to make sure that writers are serialized. In turn, this means that we mustn't run two try_fill_recv concurrently. Which of course we don't. We do run try_fill_recv
2014 Jan 12
0
[PATCH net-next v2 4/4] virtio-net: initial debugfs support, export mergeable rx buffer size
On Fri, Jan 10, 2014 at 09:19:37PM -0800, Michael Dalton wrote: > Hi Jason, Michael > > Sorry for the delay in response. Jason, I agree this patch ended up > being larger than expected. The major implementation parts are: > (1) Setup directory structure (driver/per-netdev/rx-queue directories) > (2) Network device renames (optional, so debugfs dir has the right name) > (3)
2019 Jul 03
2
[PATCH 1/5] mm: return valid info from hmm_range_unregister
On Wed, Jul 03, 2019 at 11:44:58AM -0700, Christoph Hellwig wrote: > Checking range->valid is trivial and has no meaningful cost, but > nicely simplifies the fastpath in typical callers. It should not be the typical caller.. > hmm_vma_range_done function, which now is a trivial wrapper around > hmm_range_unregister. > > Signed-off-by: Christoph Hellwig <hch at
2014 Jan 08
2
[PATCH net-next v2 4/4] virtio-net: initial debugfs support, export mergeable rx buffer size
On 01/07/2014 01:25 PM, Michael Dalton wrote: > Add initial support for debugfs to virtio-net. Each virtio-net network > device will have a directory under /virtio-net in debugfs. The > per-network device directory will contain one sub-directory per active, > enabled receive queue. If mergeable receive buffers are enabled, each > receive queue directory will contain a read-only file
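For illustration, the layout described above could be built with the debugfs API roughly as follows. This is a sketch only, written in modern debugfs idiom; struct receive_queue usage and the get_buf_len helper are placeholders, not code from this patch.

#include <linux/debugfs.h>
#include <linux/seq_file.h>

static struct dentry *virtnet_dbg_root;	/* /sys/kernel/debug/virtio-net */

static int mergeable_rx_buffer_size_show(struct seq_file *s, void *unused)
{
	struct receive_queue *rq = s->private;		/* per-queue private data */

	seq_printf(s, "%u\n", get_buf_len(rq));		/* placeholder helper */
	return 0;
}
DEFINE_SHOW_ATTRIBUTE(mergeable_rx_buffer_size);

static void virtnet_dbg_add_rxq(struct dentry *dev_dir, int i,
				struct receive_queue *rq)
{
	char name[16];
	struct dentry *qdir;

	/* one sub-directory per active, enabled receive queue, e.g. "rx-0" */
	snprintf(name, sizeof(name), "rx-%d", i);
	qdir = debugfs_create_dir(name, dev_dir);

	/* read-only file exposing the current mergeable packet buffer size */
	debugfs_create_file("mergeable_rx_buffer_size", 0444, qdir, rq,
			    &mergeable_rx_buffer_size_fops);
}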
2017 Feb 10
0
[PATCH 2/2] x86/vdso: Add VCLOCK_HVCLOCK vDSO clock read method
On Fri, 10 Feb 2017, Vitaly Kuznetsov wrote: > Stephen Hemminger <sthemmin at microsoft.com> writes: > > > Why not use existing seqlocks? > > > > To be honest I don't quite understand how we could use it -- the > sequence locking here is done against the page updated by the > hypervisor, we're not creating new structures (so I don't understand
2017 Feb 10
0
[PATCH 2/2] x86/vdso: Add VCLOCK_HVCLOCK vDSO clock read method
On Fri, 10 Feb 2017, Stephen Hemminger wrote: > Since the sequence count algorithm is done by the hypervisor, better to not reuse seqcount. > Still concerned that the code is racy. That's a different question and can only be answered by the hypervisor folks. Dunno whether they have barrier requirements. The seqcount stuff relies on: do { seq = READ_ONCE(s->sequence); smp_rmb();
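The retry loop referred to here (truncated above) follows the usual open-coded sequence-count pattern, sketched below with an illustrative data_store struct standing in for the shared page.

struct data_store {		/* illustrative stand-in for the shared data */
	u32 sequence;
	u64 value;
};

/* Writer: must be serialized against other writers. */
static void store_value(struct data_store *d, u64 val)
{
	WRITE_ONCE(d->sequence, d->sequence + 1);	/* odd: update in progress */
	smp_wmb();
	d->value = val;
	smp_wmb();
	WRITE_ONCE(d->sequence, d->sequence + 1);	/* even again: update done */
}

/* Reader: retry until a stable, even sequence is observed around the read. */
static u64 load_value(struct data_store *d)
{
	u32 seq;
	u64 val;

	do {
		seq = READ_ONCE(d->sequence);
		smp_rmb();
		val = d->value;
		smp_rmb();
	} while ((seq & 1) || READ_ONCE(d->sequence) != seq);

	return val;
}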
2017 Feb 15
2
[PATCH v2 0/3] x86/vdso: Add Hyper-V TSC page clocksource support
...send v2 with some modifications to keep >>>> the discussion going. >>> >>> Migration is irrelevant. The TSC page is guest global so updates will >>> happen on some (random) host CPU and therefore you need the usual barriers >>> like we have them in our seqcounts unless an access to the sequence will >>> trap into the host, which would defeat the whole purpose of the TSC page. >>> >> >> KY Srinivasan <kys at microsoft.com> writes: >> >>> I checked with the folks on the Hyper-V side and they have confirmed t...
2014 Jan 16
2
[PATCH net-next v3 5/5] virtio-net: initial rx sysfs support, export mergeable rx buffer size
Sorry, just realized - I think disabling NAPI is necessary but not sufficient. There is also the issue that refill_work() could be scheduled. If refill_work() executes, it will re-enable NAPI. We'd need to cancel the vi->refill delayed work to prevent this AFAICT, and also ensure that no other function re-schedules vi->refill or re-enables NAPI (virtnet_open/close, virtnet_set_queues,
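A minimal sketch of the ordering being discussed (the function name virtnet_quiesce_rx is invented for the example; a real fix also has to stop the other paths mentioned above from re-scheduling the work or re-enabling NAPI):

static void virtnet_quiesce_rx(struct virtnet_info *vi)
{
	int i;

	/* Stop a pending/running refill_work first, since it would otherwise
	 * re-enable NAPI behind our back. */
	cancel_delayed_work_sync(&vi->refill);

	for (i = 0; i < vi->max_queue_pairs; i++)
		napi_disable(&vi->rq[i].napi);
}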
2017 Feb 14
2
[PATCH v2 0/3] x86/vdso: Add Hyper-V TSC page clocksource support
...migrating between CPUs) I'd like to send v2 with some modifications to keep >> the discussion going. > > Migration is irrelevant. The TSC page is guest global so updates will > happen on some (random) host CPU and therefore you need the usual barriers > like we have them in our seqcounts unless an access to the sequence will > trap into the host, which would defeat the whole purpose of the TSC page. > KY Srinivasan <kys at microsoft.com> writes: > I checked with the folks on the Hyper-V side and they have confirmed that we need to > add memory barriers in the gu...
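The barrier-protected read being debated looks roughly like the sketch below. The tsc_page layout and the read_tsc_page name are illustrative, modelled on the sequence/scale/offset scheme the thread describes rather than quoted from the patch.

#include <asm/msr.h>		/* rdtsc_ordered() */
#include <linux/math64.h>	/* mul_u64_u64_shr() */

struct tsc_page {			/* illustrative layout */
	u32 sequence;
	u32 reserved;
	u64 scale;
	s64 offset;
};

static u64 read_tsc_page(const struct tsc_page *pg)
{
	u32 seq;
	u64 scale, tsc;
	s64 offset;

	do {
		seq = READ_ONCE(pg->sequence);
		if (!seq)			/* page not currently valid */
			return U64_MAX;
		smp_rmb();			/* order: sequence, then scale/offset */

		scale = READ_ONCE(pg->scale);
		offset = READ_ONCE(pg->offset);
		tsc = rdtsc_ordered();

		smp_rmb();			/* re-check the sequence after the reads */
	} while (READ_ONCE(pg->sequence) != seq);

	/* reference clock = (tsc * scale) >> 64 + offset */
	return mul_u64_u64_shr(tsc, scale, 64) + offset;
}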