Displaying 20 results from an estimated 273 matches for "workqueue_structs".
2024 Feb 22
1
[PATCH] drm/nouveau: use dedicated wq for fence uevents work
Using the kernel global workqueue to signal fences can lead to
unexpected deadlocks. Some other work (e.g. from a different driver)
could directly or indirectly depend on this fence to be signaled.
However, if the WQ_MAX_ACTIVE limit is reached by waiters, this can
prevent the work signaling the fence from running.
While this seems fairly unlikely, it's potentially exploitable.
Fixes:
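A minimal sketch of the dedicated-workqueue approach the patch describes; all names and flags below are illustrative assumptions, not the actual nouveau code:

#include <linux/workqueue.h>
#include <linux/errno.h>

/* Hypothetical driver context; field names are assumptions. */
struct fence_event_ctx {
	struct workqueue_struct *fence_wq;	/* dedicated, not system_wq */
	struct work_struct signal_work;
};

static void fence_signal_fn(struct work_struct *work)
{
	/* signal the fence here; work items queued by unrelated
	 * drivers can no longer starve this worker */
}

static int fence_ctx_init(struct fence_event_ctx *ctx)
{
	ctx->fence_wq = alloc_workqueue("fence-uevents", 0, 0);
	if (!ctx->fence_wq)
		return -ENOMEM;
	INIT_WORK(&ctx->signal_work, fence_signal_fn);
	return 0;
}

Signaling then uses queue_work(ctx->fence_wq, &ctx->signal_work) rather than schedule_work(), which would place the item on the shared system workqueue.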
2024 Feb 23
1
[PATCH] drm/nouveau: use dedicated wq for fence uevents work
On Fri, Feb 23, 2024 at 10:14:53AM +1000, Dave Airlie wrote:
> On Fri, 23 Feb 2024 at 00:45, Danilo Krummrich <dakr at redhat.com> wrote:
> >
> > Using the kernel global workqueue to signal fences can lead to
> > unexpected deadlocks. Some other work (e.g. from a different driver)
> > could directly or indirectly depend on this fence to be signaled.
> >
2024 Feb 02
3
[PATCH 1/2] drm/nouveau: don't fini scheduler if not initialized
nouveau_abi16_ioctl_channel_alloc() and nouveau_cli_init() simply call
their corresponding *_fini() counterpart. This can lead to
nouveau_sched_fini() being called without struct nouveau_sched ever
being initialized in the first place.
Instead of embedding struct nouveau_sched into struct nouveau_cli and
struct nouveau_chan_abi16, allocate struct nouveau_sched separately,
such that we can check
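A rough sketch of the pattern being described, allocate separately so the fini path can check whether init ever ran; only the names from the excerpt are taken as given, everything else is assumed:

#include <linux/slab.h>

struct nouveau_sched;
void nouveau_sched_fini(struct nouveau_sched *sched);

struct nouveau_cli {
	struct nouveau_sched *sched;	/* pointer instead of embedded struct */
};

static void cli_sched_teardown(struct nouveau_cli *cli)
{
	/* safe even when init never ran: sched is simply NULL */
	if (cli->sched) {
		nouveau_sched_fini(cli->sched);
		kfree(cli->sched);
		cli->sched = NULL;
	}
}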
2018 Feb 11
0
[PATCH 1/5] workqueue: Allow retrieval of current task's work struct
Introduce a helper to retrieve the current task's work struct if it is
a workqueue worker.
This allows us to fix a long-standing deadlock in several DRM drivers
wherein the ->runtime_suspend callback waits for a specific worker to
finish and that worker in turn calls a function which waits for runtime
suspend to finish. That function is invoked from multiple call sites
and waiting for
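The helper this series adds is current_work(). A sketch of how a driver's runtime-suspend path might use it to avoid the self-deadlock; the surrounding driver names are hypothetical:

#include <linux/workqueue.h>

struct my_dev {
	struct work_struct output_poll_work;
};

/*
 * If runtime_suspend is being called from the very worker it would
 * wait for, waiting would deadlock -- detect that case and skip the
 * wait instead.
 */
static bool is_output_poll_worker(struct my_dev *dev)
{
	return current_work() == &dev->output_poll_work;
}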
2023 Mar 11
2
[PATCH 00/11] Use copy_process in vhost layer
On Fri, Mar 10, 2023 at 2:04 PM Mike Christie
<michael.christie at oracle.com> wrote:
>
> The following patches were made over Linus's tree and apply over next. They
> allow the vhost layer to use copy_process instead of using
> workqueue_structs to create worker threads for VMs' devices.
Ok, all these patches looked fine to me from a quick scan - nothing
that I reacted to as objectionable, and several of them looked like
nice cleanups.
The only one I went "Why do you do it that way" for was in 10/11
(entirely internal to vh...
2025 Jan 22
5
[PATCH] drm/sched: Use struct for drm_sched_init() params
drm_sched_init() has a great many parameters and upcoming new
functionality for the scheduler might add even more. Generally, the
great number of parameters reduces readability and has already caused
one misnaming in:
commit 6f1cacf4eba7 ("drm/nouveau: Improve variable name in nouveau_sched_init()").
Introduce a new struct for the scheduler init parameters and port all
users.
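The shape of the change, sketched with generic names; the series introduces a dedicated init-args struct, but the fields shown here are illustrative:

struct scheduler;
struct sched_backend_ops;

struct sched_init_args {
	const struct sched_backend_ops *ops;
	unsigned int num_rqs;
	const char *name;
	long timeout;
};

int sched_init(struct scheduler *sched, const struct sched_init_args *args);

/* Callers name every field, so adding or reordering parameters can no
 * longer silently bind an argument to the wrong position:
 *
 *	static const struct sched_init_args args = {
 *		.ops	 = &my_ops,
 *		.name	 = "my-sched",
 *		.timeout = 1000,
 *	};
 *	ret = sched_init(&sched, &args);
 */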
2025 Jan 23
0
[PATCH] drm/sched: Use struct for drm_sched_init() params
On Thu, 23 Jan 2025 08:33:01 +0100
Philipp Stanner <phasta at mailbox.org> wrote:
> On Wed, 2025-01-22 at 18:16 +0100, Boris Brezillon wrote:
> > On Wed, 22 Jan 2025 15:08:20 +0100
> > Philipp Stanner <phasta at kernel.org> wrote:
> >
> > >  int drm_sched_init(struct drm_gpu_scheduler *sched,
> > > -		   const struct drm_sched_backend_ops *ops,
2025 Jan 23
0
[PATCH] drm/sched: Use struct for drm_sched_init() params
On Thu, Jan 23, 2025 at 08:33:01AM +0100, Philipp Stanner wrote:
> On Wed, 2025-01-22 at 18:16 +0100, Boris Brezillon wrote:
> > On Wed, 22 Jan 2025 15:08:20 +0100
> > Philipp Stanner <phasta at kernel.org> wrote:
> >
> > >  int drm_sched_init(struct drm_gpu_scheduler *sched,
> > > -		   const struct drm_sched_backend_ops *ops,
> > > -
2025 Jan 23
0
[PATCH] drm/sched: Use struct for drm_sched_init() params
On Thu, Jan 23, 2025 at 10:35:43AM +0100, Philipp Stanner wrote:
> On Thu, 2025-01-23 at 10:29 +0100, Danilo Krummrich wrote:
> > On Thu, Jan 23, 2025 at 08:33:01AM +0100, Philipp Stanner wrote:
> > > On Wed, 2025-01-22 at 18:16 +0100, Boris Brezillon wrote:
> > > > On Wed, 22 Jan 2025 15:08:20 +0100
> > > > Philipp Stanner <phasta at kernel.org>
2014 Nov 14
1
[PATCH v2] virtio_balloon: Convert "vballon" kthread into a workqueue
Hello, Michael, Petr.
On Wed, Nov 12, 2014 at 03:32:04PM +0200, Michael S. Tsirkin wrote:
> > + /* The workqueue servicing the balloon. */
> > + struct workqueue_struct *wq;
> > + struct work_struct wq_work;
>
> We could use system_freezable_wq instead.
> I do agree a dedicated wq is better since this can get blocked
> for a long time while allocating memory.
>
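Sketch of the dedicated freezable wq preferred in the reply above, under assumed names (the real patch's struct differs):

#include <linux/workqueue.h>
#include <linux/errno.h>

struct balloon_sketch {
	struct workqueue_struct *wq;
	struct work_struct wq_work;
};

static int balloon_wq_init(struct balloon_sketch *vb)
{
	/*
	 * A dedicated freezable wq: balloon work may block for a long
	 * time allocating memory, so it should not tie up a shared
	 * worker (the alternative discussed is system_freezable_wq).
	 */
	vb->wq = alloc_workqueue("vballoon", WQ_FREEZABLE, 0);
	if (!vb->wq)
		return -ENOMEM;
	return 0;
}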
2011 Jun 01
6
[PATCH 1/1] [virt] virtio-blk: Use ida to allocate disk index
Current index allocation in virtio-blk is based on a monotonically
increasing variable "index". It could cause some confusion about disk
name when disks are hot-plugged, and it is impossible to find the
lowest available index by maintaining a simple counter. So the driver is
changed to allocate the index with an ida, following the index
allocation in the scsi disk driver.
Signed-off-by:
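The ida pattern referred to, sketched with the modern ida_alloc()/ida_free() spelling (the 2011 patch predates these helpers):

#include <linux/idr.h>
#include <linux/gfp.h>

static DEFINE_IDA(vd_index_ida);

static int vd_index_get(void)
{
	/* hands out the lowest free index, so an index freed by
	 * hot-unplug is reused instead of growing forever */
	return ida_alloc(&vd_index_ida, GFP_KERNEL);
}

static void vd_index_put(int index)
{
	ida_free(&vd_index_ida, index);
}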
2019 Nov 21
2
[PATCH net-next 4/6] vsock: add vsock_loopback transport
On Thu, Nov 21, 2019 at 10:59:48AM +0100, Stefano Garzarella wrote:
> On Thu, Nov 21, 2019 at 09:34:58AM +0000, Stefan Hajnoczi wrote:
> > On Tue, Nov 19, 2019 at 12:01:19PM +0100, Stefano Garzarella wrote:
> >
> > Ideas for long-term changes below.
> >
> > Reviewed-by: Stefan Hajnoczi <stefanha at redhat.com>
> >
>
> Thanks for reviewing!
2023 Mar 28
0
[PATCH v4 05/11] vduse: Support set_vq_affinity callback
On 2023/3/23 13:30, Xie Yongji wrote:
> Since virtio-vdpa bus driver already support interrupt
> affinity spreading mechanism, let's implement the
> set_vq_affinity callback to bring it to vduse device.
> After we get the virtqueue's affinity, we can spread
> IRQs between CPUs in the affinity mask, in a round-robin
> manner, to run the irq callback.
>
> Signed-off-by:
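The round-robin spreading described above reduces to a wrap-around walk of the affinity mask; a sketch with assumed names:

#include <linux/cpumask.h>

/* Pick the CPU after @last_cpu in @mask, wrapping to the first set
 * CPU when the end of the mask is reached. */
static int next_affine_cpu(const struct cpumask *mask, int last_cpu)
{
	int cpu = cpumask_next(last_cpu, mask);

	if (cpu >= nr_cpu_ids)
		cpu = cpumask_first(mask);
	return cpu;
}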
2019 Nov 21
2
[PATCH net-next 4/6] vsock: add vsock_loopback transport
On Tue, Nov 19, 2019 at 12:01:19PM +0100, Stefano Garzarella wrote:
Ideas for long-term changes below.
Reviewed-by: Stefan Hajnoczi <stefanha at redhat.com>
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 760049454a23..c2a3dc3113ba 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -17239,6 +17239,7 @@ F: net/vmw_vsock/diag.c
> F: net/vmw_vsock/af_vsock_tap.c
> F:
2023 Feb 15
1
[PATCH v3] vdpa/mlx5: should not activate virtq object when suspended
Otherwise the virtqueue object being instantiated could point to an
invalid address that was unmapped from the MTT:
mlx5_core 0000:41:04.2: mlx5_cmd_out_err:782:(pid 8321):
CREATE_GENERAL_OBJECT(0xa00) op_mod(0xd) failed, status
bad parameter(0x3), syndrome (0x5fa1c), err(-22)
Fixes: cae15c2ed8e6 ("vdpa/mlx5: Implement susupend virtqueue callback")
Cc: Eli Cohen <elic at nvidia.com>
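In spirit the fix is a guard like the following sketch; all names here are assumptions, not the actual mlx5 code:

struct vq_ctx {
	bool suspended;
};

int create_vq_object(struct vq_ctx *ctx);	/* hypothetical helper */

static int activate_vq(struct vq_ctx *ctx)
{
	/*
	 * While suspended, the MTT mapping backing the queue may be
	 * gone, so creating the virtqueue object now could reference
	 * an unmapped address; defer activation until resume.
	 */
	if (ctx->suspended)
		return 0;
	return create_vq_object(ctx);
}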
2023 Feb 16
1
[PATCH v3] vdpa/mlx5: should not activate virtq object when suspended
On Wed, Feb 15, 2023 at 9:31 AM Si-Wei Liu <si-wei.liu at oracle.com> wrote:
>
> Otherwise the virtqueue object being instantiated could point to an
> invalid address that was unmapped from the MTT:
>
> mlx5_core 0000:41:04.2: mlx5_cmd_out_err:782:(pid 8321):
> CREATE_GENERAL_OBJECT(0xa00) op_mod(0xd) failed, status
> bad parameter(0x3), syndrome (0x5fa1c), err(-22)
>
>