Displaying 20 results from an estimated 1000 matches similar to: "[PATCH v15 23/26] sched: early boot clock"
2018 Nov 06
0
[PATCH v15 23/26] sched: early boot clock
(added various kvm/virtualization lists to Cc, as well as qemu, as I don't
know who's "wrong" here)
Pavel Tatashin wrote on Thu, Jul 19, 2018:
> Allow sched_clock() to be used before sched_clock_init() is called.
> This provides a way to get early boot timestamps on machines with
> unstable clocks.
This isn't something I understand, but bisect tells me this
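For context, the idea in the quoted commit message is roughly: until sched_clock_init() has run, return a coarse jiffies-based timestamp instead of nothing, so early boot timestamps work even on machines with unstable clocks. A minimal sketch of that shape (not the actual patch; the readiness flag and the post-init clock are invented names):

#include <linux/jiffies.h>
#include <linux/time64.h>

static bool sched_clock_ready;    /* hypothetical "sched_clock_init() ran" flag */

static unsigned long long sched_clock_sketch(void)
{
    if (!sched_clock_ready)
        /* Early boot: coarse jiffies-based fallback. */
        return (unsigned long long)(jiffies - INITIAL_JIFFIES) *
               (NSEC_PER_SEC / HZ);

    return arch_clock_ns();       /* hypothetical arch clock used after init */
}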
2018 Jul 13
0
[PATCH 04/18] nouveau: change strncpy+truncation to strlcpy
Generated by scripts/coccinelle/misc/strncpy_truncation.cocci
Signed-off-by: Dominique Martinet <asmadeus at codewreck.org>
---
Please see https://marc.info/?l=linux-kernel&m=153144450722324&w=2 (the
first patch of the series) for the motivation behind this patch
drivers/gpu/drm/nouveau/nvkm/core/firmware.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git
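For readers unfamiliar with the conversion, the pattern the coccinelle rule rewrites looks roughly like this (buffer and function names are illustrative, not taken from firmware.c):

#include <linux/string.h>

/* Before: strncpy() may leave the destination unterminated, so the
 * caller truncates by hand. */
static void copy_name_old(char *dst, size_t len, const char *src)
{
    strncpy(dst, src, len - 1);
    dst[len - 1] = '\0';
}

/* After: strlcpy() always NUL-terminates, collapsing the pair into one call. */
static void copy_name_new(char *dst, size_t len, const char *src)
{
    strlcpy(dst, src, len);
}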
2023 Mar 28
2
9p regression (Was: [PATCH v2] virtio_ring: don't update event idx on get_buf)
Hi Michael, Albert,
Albert Huang wrote on Sat, Mar 25, 2023 at 06:56:33PM +0800:
> in virtio_net, if we disable napi_tx, when we trigger a tx interrupt,
> the vq->event_triggered will be set to true. It will not be set back to
> false unless we explicitly call virtqueue_enable_cb_delayed or
> virtqueue_enable_cb_prepare.
This patch (committed as 35395770f803
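For background, the callback APIs mentioned above are normally used in a drain loop like the following (illustrative only; the function name is made up and buffer handling is elided):

#include <linux/virtio.h>

static void drain_used_buffers(struct virtqueue *vq)
{
    unsigned int len;
    void *buf;

    do {
        virtqueue_disable_cb(vq);
        while ((buf = virtqueue_get_buf(vq, &len)) != NULL)
            ;    /* process the completed buffer here */
        /* virtqueue_enable_cb() re-arms the callback and returns false
         * if more buffers arrived in the meantime, so we loop again. */
    } while (!virtqueue_enable_cb(vq));
}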
2015 Mar 19
0
[Xen-devel] [PATCH 0/9] qspinlock stuff -v15
On 16/03/15 13:16, Peter Zijlstra wrote:
>
> I feel that if someone were to do a Xen patch we can go ahead and merge this
> stuff (finally!).
This seems to work for me, but I've not had time to give it more thorough
testing.
You can fold this into your series.
There doesn't seem to be a way to disable QUEUE_SPINLOCKS when supported by
the arch; is this intentional? If so, the
2015 Mar 25
0
[PATCH 0/9] qspinlock stuff -v15
On Mon, Mar 16, 2015 at 02:16:13PM +0100, Peter Zijlstra wrote:
> Hi Waiman,
>
> As promised; here is the paravirt stuff I did during the trip to BOS last week.
>
> All the !paravirt patches are more or less the same as before (the only real
> change is the copyright lines in the first patch).
>
> The paravirt stuff is 'simple' and KVM only -- the Xen code was a
2015 Mar 27
0
[PATCH 0/9] qspinlock stuff -v15
On Thu, Mar 26, 2015 at 09:21:53PM +0100, Peter Zijlstra wrote:
> On Wed, Mar 25, 2015 at 03:47:39PM -0400, Konrad Rzeszutek Wilk wrote:
> > Ah nice. That could be spun out as a separate patch to optimize the existing
> > ticket locks I presume.
>
> Yes, I suppose we can do something similar for the ticket lock and patch in
> the right increment. We'd need to restructure the
2015 Mar 30
0
[PATCH 0/9] qspinlock stuff -v15
On Mon, Mar 30, 2015 at 12:25:12PM -0400, Waiman Long wrote:
> I did it differently in my PV portion of the qspinlock patch. Instead of
> just waking up the CPU, the new lock holder will check if the new queue head
> has been halted. If so, it will set the slowpath flag for the halted queue
> head in the lock so as to wake it up at unlock time. This should eliminate
> your concern
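A minimal sketch of the deferred-kick idea described above, with invented names and plain C11 atomics rather than the actual pvqspinlock code:

#include <stdatomic.h>

#define LOCKED    0x01
#define SLOWPATH  0x02    /* "a waiter halted; kick it at unlock time" */

struct pv_lock { atomic_int val; };

/* Lock holder: the new queue head has halted, so mark the lock instead
 * of kicking the vCPU immediately. */
static void mark_halted_waiter(struct pv_lock *l)
{
    atomic_fetch_or(&l->val, SLOWPATH);
}

/* Unlock: only pay for a kick (hypercall) when a waiter actually halted
 * while the lock was held. */
static void pv_unlock(struct pv_lock *l, void (*kick_waiter)(void))
{
    if (atomic_exchange(&l->val, 0) & SLOWPATH)
        kick_waiter();
}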
2016 Jan 04
0
New nutdrv_qx sub driver - protocol v15&16
Hi,
I have been testing the nutdrv_qx additions introduced by @zykh for a couple of
months now and have changed only small things.
You can reach the repository via [1].
Please review the item_t array (starting at [2]) and give me feedback,
or change it directly.
Still missing is the complete documentation, which I've been trying to find
time for over the last six months, but haven't managed to.
Best, Nick
[1]
2015 Apr 08
1
[Xen-devel] [PATCH v15 12/15] pvqspinlock, x86: Enable PV qspinlock for Xen
On 07/04/15 03:55, Waiman Long wrote:
> This patch adds the necessary Xen specific code to allow Xen to
> support the CPU halting and kicking operations needed by the queue
> spinlock PV code.
This basically looks the same as the version I wrote, except I think you
broke it.
> +static void xen_qlock_wait(u8 *byte, u8 val)
> +{
> + int irq = __this_cpu_read(lock_kicker_irq);
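For readers following the thread, a wait primitive of this shape typically clears any pending kicker interrupt, re-checks the lock byte, and only then blocks by polling the IRQ. A rough sketch of that ordering (not the patch under review; it reuses the per-CPU lock_kicker_irq from the quoted snippet and the Xen xen_clear_irq_pending()/xen_poll_irq() helpers):

static void qlock_wait_sketch(u8 *byte, u8 val)
{
    int irq = __this_cpu_read(lock_kicker_irq);

    if (irq == -1)                  /* kicker IRQ not set up yet: just spin */
        return;

    xen_clear_irq_pending(irq);     /* drop any stale kick */

    /* Re-check after clearing so a kick sent in between is not lost. */
    if (READ_ONCE(*byte) != val)
        return;

    xen_poll_irq(irq);              /* block until the next kick (or a spurious wakeup) */
}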
2015 Apr 09
0
[PATCH v15 16/16] unfair qspinlock: a queue based unfair lock
On Wed, Apr 08, 2015 at 02:32:19PM -0400, Waiman Long wrote:
> For a virtual guest with the qspinlock patch, a simple unfair byte lock
> will be used if PV spinlock is not configured in or the hypervisor
> is neither KVM nor Xen. The byte lock works fine with small guests
> of just a few vCPUs. On a much larger guest, however, the byte lock can
> have serious performance problems.
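To make the trade-off concrete, a "simple unfair byte lock" is essentially a test-and-set spinlock on a single byte: there is no queueing, so acquisition order is unfair and heavy contention on large guests causes cache-line bouncing. A generic sketch (not the kernel's implementation):

#include <stdatomic.h>

typedef struct { atomic_uchar locked; } byte_lock_t;

static void byte_lock(byte_lock_t *l)
{
    /* Test-and-test-and-set: try to grab the byte, otherwise spin
     * read-only until it looks free, then try again. */
    while (atomic_exchange_explicit(&l->locked, 1, memory_order_acquire))
        while (atomic_load_explicit(&l->locked, memory_order_relaxed))
            ;
}

static void byte_unlock(byte_lock_t *l)
{
    atomic_store_explicit(&l->locked, 0, memory_order_release);
}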
2015 Apr 09
0
[PATCH v15 16/16] unfair qspinlock: a queue based unfair lock
On Thu, Apr 09, 2015 at 09:16:24AM -0400, Rik van Riel wrote:
> On 04/09/2015 03:01 AM, Peter Zijlstra wrote:
> > On Wed, Apr 08, 2015 at 02:32:19PM -0400, Waiman Long wrote:
> >> For a virtual guest with the qspinlock patch, a simple unfair byte lock
> >> will be used if PV spinlock is not configured in or the hypervisor
> >> is neither KVM nor Xen. The
2015 Apr 09
0
[PATCH v15 09/15] pvqspinlock: Implement simple paravirt support for the qspinlock
On Thu, Apr 09, 2015 at 08:13:27PM +0200, Peter Zijlstra wrote:
> On Mon, Apr 06, 2015 at 10:55:44PM -0400, Waiman Long wrote:
> > +#define PV_HB_PER_LINE (SMP_CACHE_BYTES / sizeof(struct pv_hash_bucket))
> > +static struct qspinlock **pv_hash(struct qspinlock *lock, struct pv_node *node)
> > +{
> > + unsigned long init_hash, hash = hash_ptr(lock, pv_lock_hash_bits);
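The code quoted above is inserting into a hash table keyed by the lock pointer, so the unlocker can later find which pv_node (vCPU) to kick. A simplified sketch of that kind of open-addressed insert, with a toy hash and toy table size (not the kernel's pv_hash()):

#include <stddef.h>

struct bucket {
    void *lock;    /* key: the contended spinlock */
    void *node;    /* value: the waiting vCPU's node */
};

#define TABLE_SIZE 256    /* toy size; the real table is sized at boot */
static struct bucket table[TABLE_SIZE];

static struct bucket *hash_insert(void *lock, void *node)
{
    size_t start = ((size_t)lock >> 4) % TABLE_SIZE;    /* toy hash */
    size_t i;

    for (i = 0; i < TABLE_SIZE; i++) {
        struct bucket *b = &table[(start + i) % TABLE_SIZE];

        if (!b->lock) {    /* first empty slot wins */
            b->node = node;
            b->lock = lock;
            return b;
        }
    }
    return NULL;    /* table full; the real code sizes the table to avoid this */
}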
2019 Jul 05
0
[PATCH v15 6/7] ext4: disable map_sync for async flush
Don't support 'MAP_SYNC' with non-DAX files or with DAX files
backed by an asynchronous dax_device. Virtio pmem provides an
asynchronous host page cache flush mechanism, so we don't
support 'MAP_SYNC' with virtio pmem and ext4.
Signed-off-by: Pankaj Gupta <pagupta at redhat.com>
Reviewed-by: Jan Kara <jack at suse.cz>
---
fs/ext4/file.c | 10 ++++++----
1 file changed, 6
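Reconstructed from the description above, the check ends up in ext4's mmap path and looks roughly like this (a sketch assuming the series' daxdev_mapping_supported() helper; the surrounding ext4 mmap setup is omitted):

static int ext4_file_mmap_sketch(struct file *file, struct vm_area_struct *vma)
{
    struct inode *inode = file->f_mapping->host;
    struct dax_device *dax_dev = EXT4_SB(inode->i_sb)->s_daxdev;

    /* Refuse MAP_SYNC mappings unless the file is DAX and the backing
     * dax_device flushes synchronously (virtio pmem does not). */
    if (!daxdev_mapping_supported(vma, dax_dev))
        return -EOPNOTSUPP;

    /* ... normal ext4 mmap setup continues here ... */
    return 0;
}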
2023 Aug 28
0
[PATCH v15 11/23] dma-resv: Add kref_put_dma_resv()
Am 27.08.23 um 19:54 schrieb Dmitry Osipenko:
> Add a simple kref_put_dma_resv() helper that wraps kref_put_ww_mutex()
> for drivers that need to lock dma-resv on kref_put().
>
> It's not possible to easily add this helper to kref.h because of
> header inclusion dependencies, hence add it to dma-resv.h.
I was never really a big fan of kref_put_mutex() in the first
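From the description, the helper is essentially a thin wrapper; a sketch of what it presumably looks like, assuming the series' kref_put_ww_mutex() and the ww_mutex embedded in struct dma_resv:

static inline int kref_put_dma_resv(struct kref *kref,
                                    void (*release)(struct kref *kref),
                                    struct dma_resv *resv,
                                    struct ww_acquire_ctx *ctx)
{
    /* Drop a reference; if it reaches zero, release() is called with
     * the object's dma-resv (a ww_mutex) already held. */
    return kref_put_ww_mutex(kref, release, &resv->lock, ctx);
}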
2015 Apr 07
0
[PATCH v15 12/15] pvqspinlock, x86: Enable PV qspinlock for Xen
This patch adds the necessary Xen specific code to allow Xen to
support the CPU halting and kicking operations needed by the queue
spinlock PV code.
Signed-off-by: Waiman Long <Waiman.Long at hp.com>
---
arch/x86/xen/spinlock.c | 63 ++++++++++++++++++++++++++++++++++++++++++++---
kernel/Kconfig.locks | 2 +-
2 files changed, 60 insertions(+), 5 deletions(-)
diff --git